3900
https://organiclab.welderco.host.dartmouth.edu/distillation/distillationindex.html
Distillation

Simple Distillation
===================

Please note: (from top) the use of clips to hold the apparatus together, the position of the thermometer, and the clamp holding the round bottom flask in place. Please note: (from left) the use of a clamp to hold the collection flask; water goes in at the bottom (white tubing) and out at the top (red tubing).

Distillation is probably the most common technique for purifying organic liquids. In simple distillation, a liquid is boiled and the vapors progress through the apparatus until they reach the condenser, where they are cooled and reliquify. Liquids are separated based upon their differences in boiling point. Two important things to note: 1) the tip of the thermometer must be correctly positioned slightly below the center of the condenser to accurately reflect the temperature of the vapors (see above left), and 2) the water supply should be connected to the lower port of the condenser and the drainage tube connected to the upper port (in the picture above right, the white tube is connected to the water supply and the red tube leads to the drain). Also be sure to use the thin-walled tubing, not the heavy-walled vacuum tubing. Be very careful that your water lines do not come into direct contact with your hot plate, as the tubing could melt, resulting in a flood. Be sure to clamp both the round bottom boiling flask and the collection flask. Knocking over your collection flask at the end of the experiment is VERY frustrating. Below is a diagram of the assembly. Generally, boiling stones or a magnetic stir bar will be added to the boiling flask to ensure even boiling. It is also wise to use some type of clips to connect the various pieces of the distillation apparatus together. For low-boiling liquids, enough heat may be provided simply by placing your flask just above the hot plate (as shown above). You can also insulate your boiling flask and Claisen adaptor with aluminum foil.
For higher-boiling liquids it may be necessary to use an oil or sand bath to reach higher temperatures. The individual pieces of glassware needed for a simple distillation are diagrammed below. Be sure to use the blue Keck clips to attach the vacuum adaptor, the Claisen tube, and the distillation adaptor. Do not use a Keck clip at the round bottom flask, as it could 1) melt and 2) interfere with a properly secured metal clamp.

Fractional Distillation
=======================

The setup for a fractional distillation is very similar to that for simple distillation. The only difference is the addition of a fractional distillation column, usually packed with some material of high surface area, which produces a more efficient separation than the simple distillation. The same advice regarding thermometer placement, clamping, and hook-up of the water tubes in the simple distillation also applies to the fractional distillation. As this apparatus is larger, take additional care to be sure that no glassware is broken and no product is lost. The choice of whether to use the simple or the fractional setup will depend on the compounds that you are trying to separate. Obviously, the simple distillation setup is simpler, and the distillation generally will be quicker than the fractional. However, the fractional setup is more efficient at separating liquids with fairly similar boiling points and, at times, is required. On the left, note that the white tube connected to the spigot is attached to the lower part of the condenser, while the red drainage tube is attached to the higher part. Also note the position of the thermometer. Please note the use of clamps to secure the round bottom flask and the collection flask. Pictured on the left is the fractional distillation apparatus. A closer shot is on the right.
Notice that the only difference between this apparatus and the simple distillation apparatus is that here, the fractional distillation column has been placed between the boiling flask and the distillation head. As in the simple distillation apparatus, note that the white tubing is connected to the water supply and the red tubing is the drainage tube. Also notice the position of the thermometer, blue clips, and clamps. A diagram of the apparatus is below. Back to Dr. Welder's website.
3901
https://mrpaynemath.weebly.com/uploads/3/8/9/9/38994693/math2201ch5.3anotes-workings.pdf
Math 2201 Date:_______

5.3 Standard Deviation

Standard Deviation
We looked at range as a measure of dispersion, or spread, of a data set. The problem with using range is that it only measures how spread out the extreme values, the smallest and largest, are. It doesn't provide any information about the variation within the other data values. Thus, we will now look at a different measure of dispersion called standard deviation.

Standard deviation: a measure of the dispersion or scatter of data values in relation to the mean; a low standard deviation indicates that most data values are close to the mean, and a high standard deviation indicates that most data values are scattered farther from the mean. The symbol used to represent standard deviation is the Greek letter σ (sigma).

Review of Mean or Average
The mean, x̄, can be expressed using symbols: x̄ = Σx / n, where Σx is the sum of all the data points and n is the total number of data points. Thus, the mean is obtained by summing up all the data points and dividing by the total number of points.

Example 1: Calculate the mean of the following data set: 3, 5, 6, 6, 7, 9, 9, 10, 12

Calculating Standard Deviation
The standard deviation, σ, can be found using the following formula: σ = √( Σ(x − x̄)² / n )

Example 2: Recall this example from section 5.1. The following table shows the test scores that Tim and Mary receive in Math 2201. (A) Calculate the standard deviation for both Tim's and Mary's test scores.
Step 1: Subtract the mean from each data point. Square what you get and sum up all the results. The following table will help organize the data and calculations for Step 1.
Tim: Data Point (x) | Data Point − Mean (x − x̄) | (Data Point − Mean)² = (x − x̄)² | Σ(x − x̄)² =
Step 2: Divide the sum by the total number of data points.
Step 3: Take the square root of the result from Step 2.
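The three steps above can also be sketched in code. This is a supplement to the worksheet's hand calculation, using the Example 1 data set; the helper name pstdev is just illustrative.

```python
import math

def pstdev(data):
    """Population standard deviation: sigma = sqrt(sum((x - mean)^2) / n)."""
    n = len(data)
    mean = sum(data) / n                              # x-bar = (sum of x) / n
    squared_diffs = [(x - mean) ** 2 for x in data]   # Step 1: subtract, square, sum
    variance = sum(squared_diffs) / n                 # Step 2: divide by n
    return math.sqrt(variance)                        # Step 3: take the square root

data = [3, 5, 6, 6, 7, 9, 9, 10, 12]  # Example 1 data set
print(round(sum(data) / len(data), 2))  # mean, about 7.44
print(round(pstdev(data), 2))           # sigma, about 2.63
```

Note that this divides by n (population standard deviation), matching the worksheet's formula, rather than by n − 1 as a sample standard deviation would.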
Now repeat the process for Mary: Data Point (x) | Data Point − Mean (x − x̄) | (Data Point − Mean)² = (x − x̄)² | Σ(x − x̄)² =
(B) Whose marks are more dispersed? What does this mean in terms of standard deviation?
(C) If the data is clustered around the mean, what does this mean about the value of the standard deviation?
(D) Who was more consistent over the five unit tests?

Example 3: (A) Is it possible for a data set to have a standard deviation of 0? Use an example. (B) Can the standard deviation ever be negative? Explain why or why not. (C) If 5 were added to each number in a set of data, what effect would it have on the mean? (D) If 5 were added to each number in a set of data, what effect would it have on the standard deviation? (E) What effect would multiplying each number by −3 have on the standard deviation?

Example 4: Two high schools kept a record of the number of students sent to the office for smoking on school grounds. Over a 5-day period, the following results were obtained: School A: 4, 8, 13, 2, 5. School B: 9, 6, 11, 10, 8. Determine the standard deviation for each school. Which school has the greatest variation? Why?

Example 5: Sports Illustrated is doing a story on the variation of player heights on NBA basketball teams. The heights, in cm, of the players on the starting lineups for two basketball teams are given in the table below. The team with the most variation in height will be selected for the cover of Sports Illustrated magazine. Which team will appear on the cover?

Example 6: Two obedience schools for dogs monitor the number of trials required for twenty puppies to learn the sit-and-stay command. (A) Determine the mean and standard deviation of the number of trials required to learn the sit-and-stay command. (B) Which school is more consistent at teaching puppies to learn the sit-and-stay command? Justify your reasoning.
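As a check on the Example 4 data, the same procedure can be run on both schools' counts (a sketch for verifying hand work, not part of the worksheet):

```python
import math

def pstdev(data):
    """Population standard deviation, sigma = sqrt(sum((x - mean)^2) / n)."""
    n = len(data)
    mean = sum(data) / n
    return math.sqrt(sum((x - mean) ** 2 for x in data) / n)

school_a = [4, 8, 13, 2, 5]   # students sent to the office, School A
school_b = [9, 6, 11, 10, 8]  # students sent to the office, School B

sigma_a = pstdev(school_a)  # roughly 3.83
sigma_b = pstdev(school_b)  # roughly 1.72
# School A's counts are spread farther from its mean, so School A
# has the greater variation even though both means are similar in size.
print(round(sigma_a, 2), round(sigma_b, 2))
```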
Example 7: (A) Indicate whether one of the graphs has a larger standard deviation than the other or if the two graphs have the same standard deviation. What approaches did you use to decide which graph had the larger standard deviation? (B) Identify the characteristics of the graphs that make the standard deviation larger or smaller. What features of a histogram seem to have no bearing on the standard deviation? What features do appear to affect the standard deviation? Example 8: For each of the following histograms, match the following standard deviations: 0, 1, 3, 10 (A) (B) (C) (D) Textbook Questions: page 261, 262 #1a,c, 4, 6
3902
https://www.physicsforums.com/threads/sum-from-n-1-to-infinity-of-sqrt-n-n-2-1.684649/
Sum from n=1 to infinity of sqrt(n)/(n^2 + 1) • Physics Forums

Forums > Homework Help > Calculus and Beyond Homework Help
Thread starter: Lo.Lee.Ta. | Start date: Apr 10, 2013 | Tags: Infinity, Sum

#1 | Apr 10, 2013 | Lo.Lee.Ta.
Σ (n=1 to ∞) √n/(n² + 1). Find if it converges.
I'm wondering if I can rewrite this by bringing the n^(1/2) down to the denominator, making it negative...
1/(n^(-1/2)(n² + 1)) = 1/(n^(-1) + n^(-1/2)) = n + √n
...And it seems to me that this one would diverge because the n value gets higher and higher, so it would go to infinity... Is this the way to go about it...? Thank you very much! :)

#2 | Apr 10, 2013 | Dick (Science Advisor, Homework Helper)
You are making some ghastly errors with algebra there. That's how you turned a series which does decrease with n into one that doesn't. Try to figure out what they are. And the correct way to do something like this is to think about doing a comparison test first.

#3 | Apr 10, 2013 | Lo.Lee.Ta.
...Would you please remind me? I know that in some situations you can bring a number with a power down to the denominator, but you have to change the power to negative. And then I thought that if there were already things in the denominator, then you have to multiply them by what you just brought down... In what situations can you do something like this? :/ I was sure there was something in algebra at least somewhat like this...? :(

#4 | Apr 10, 2013 | Dick
You can bring a factor down into the denominator as a negative power, sure. Here's where things start going wrong. What is n^(-1/2) · n²? Is 1/(n^(-1) + n^(-1/2)) = n + n^(1/2)?? Using other symbols, is 1/(1/a + 1/b) = a + b?

#5 | Apr 10, 2013 | tiny-tim (Science Advisor, Homework Helper)
Hi Lo.Lee.Ta.! Sorry, but you can't do a + b = 1/(1/a + 1/b). (And your "a" is wrong anyway.)

#6 | Apr 10, 2013 | Lo.Lee.Ta.
I tend to have difficulty choosing what my b_n should be in a limit comparison test... In this case, it wouldn't be good to choose 1/n² because even though it converges, it's smaller than what our original expression is! So that doesn't help me at all. It would only help me if b_n converged but was greater than my original expression! :/ Can't compare it to 1/√n either! That would diverge but is also greater than my original expression! UGH. How do I figure out what b_n to choose?

#7 | Apr 10, 2013 | Dick
Yes, you can drop the 1 because that will make it larger. Now you want to prove √n/n² converges. It's a p-series. Try again to move √n into the denominator. Correctly this time.

#8 | Apr 10, 2013 | Lo.Lee.Ta.
Oh, for some reason I multiplied the exponents when I should have added them! O_O 1/(n^(-1/2)(n² + 1)) should (hopefully) equal: 1/(n^(1.5) + n^(-1/2)) = √n/n^(3/2). Now, I guess I have to come up with a b_n to compare it to...

#9 | Apr 10, 2013 | Dick
No, no! You moved the √n into the denominator. It's gone now. We were going to drop the 1 and make it larger. You just have 1/n^(3/2). Like I said, that's a p-series; it has the form 1/n^p with p = 3/2. You've probably learned something about them. Does it converge or diverge? You can't change 1/(n^(1.5) + n^(-1/2)) into √n/n^(3/2), for reasons we've already discussed.

#10 | Apr 10, 2013 | Lo.Lee.Ta.
Alright, so I guess you can't move variables with powers unless the operation is multiplication or division, right?
GOOD: x³/x² = 1/(x^(-3) · x²) = x^(-1) = 1/x
GOOD: 1/(x^(-2) y³) = x²/y³
BAD: 1/(x² + y³) = y^(-3)/x² <-- WRONG
Okay, then with this: 1/[n^(-1/2)(n² + 1)]. Distributing the n^(-1/2)... = 1/[n^(1.5) + n^(-1/2)]. Yes, the n^(-1/2) CANNOT be moved to the numerator. So, that's it. BUT HOW can we just drop the 1 in the denominator?! Are you already trying to find a b_n or something...? I mean, I feel like a_n should equal 1/(n^(1.5) + n^(-1/2)), and then we might say b_n = 1/n^(1.5), with a_n ≤ b_n. Since the bigger b_n converges (by the p-series rule), a_n must also converge. Is this right? Thanks! :)

#11 | Apr 10, 2013 | Dick
Yes, that's it. The larger series, 1/n^(3/2), converges. So 1/(n^(3/2) + n^(-1/2)) must converge. You could have dropped the 1 to begin with, since √n/(n² + 1) is less than √n/n². Now just prove √n/n² converges.

#12 | Apr 10, 2013 | Lo.Lee.Ta.
Okay! <3 Thanks, Dick :D

#13 | Apr 10, 2013 | Mark44 (Mentor)
Lo.Lee.Ta. said: x³/x² = 1/(x^(-3) · x²) = x^(-1) = 1/x
x³/x² = x¹ = x

#14 | Apr 11, 2013 | Lo.Lee.Ta.
Oh! Thanks, Mark44, for pointing that out! O_O x³/x² = 1/(x^(-3) x²) = 1/x^(-1) = x :P Silly mistake. Thanks!

#15 | Apr 11, 2013 | Mark44
The above is now correct, but why would you go to all that trouble? Just carry out the division to go from x³/x² to x. (Of course, x can't be 0.)

#16 | Apr 15, 2013 | Lo.Lee.Ta.
I was thinking about this problem again, and now I'm wondering if there's another way to do it. Once we rearrange it into the form 1/(n^(1.5) + n^(-1/2)), can we take the limit as n approaches infinity right now? lim (n→∞) 1/(n^(1.5) + n^(-1/2)). Now, I'm wondering if we can treat 1/√n like a variation of the harmonic series (which diverges)? Well, according to the p-series test, 1/√n diverges anyway. n^(1.5) also diverges according to the p-series test. So can we say it's 1/∞ = 0, so the series converges? Is this a viable way to work the problem? If not, why? Thanks so much! :)

#17 | Apr 15, 2013 | Dick
Well, no. 1/n has the form 1/∞. It converges as a sequence, not as a series. Now I'm wondering if you listened to any advice you've been given. Convince me otherwise.
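The comparison argument settled in the thread can be sanity-checked numerically (my addition, not from the thread; a check, not a proof): every term √n/(n² + 1) is bounded above by the p-series term n^(-3/2), and the partial sums level off instead of growing without bound.

```python
import math

def term(n):
    """One term of the series: sqrt(n) / (n^2 + 1)."""
    return math.sqrt(n) / (n ** 2 + 1)

# Term-by-term comparison: sqrt(n)/(n^2 + 1) <= sqrt(n)/n^2 = n^(-3/2)
assert all(term(n) <= n ** -1.5 for n in range(1, 10_000))

# Partial sums flatten out, consistent with convergence
s = 0.0
partials = {}
for n in range(1, 100_001):
    s += term(n)
    if n in (10, 1_000, 100_000):
        partials[n] = s

# The tail beyond n = 1000 adds very little: it is bounded by the
# integral of x^(-3/2) from 1000 to infinity, i.e. 2/sqrt(1000) < 0.07.
print(partials)
```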
3903
https://my.clevelandclinic.org/health/diseases/22204-crystals-in-urine
Crystals in Urine

Crystalluria, or having crystals in your urine, can be found during urine testing. It doesn't always mean you have an infection. Causes include dehydration and taking certain medications.

Cleveland Clinic is a non-profit academic medical center. Advertising on our site helps support our mission. We do not endorse non-Cleveland Clinic products or services.

Contents: Overview | Symptoms and Causes | Diagnosis and Tests | Management and Treatment | Outlook / Prognosis | Prevention | Living With

Overview

What does it mean to have crystals in your urine?
Crystals in urine occur when there are too many minerals in your urine and not enough liquid. The tiny pieces collect and form masses. These crystals may be found during urine tests (urinalysis). Having crystals in your urine is called crystalluria.

Some crystals don't cause problems. Others can get big and form stones that get stuck in parts of your urinary tract and cause blockages. Blockages can cause serious problems, like acute kidney injury (AKI), which is also called acute renal failure (ARF).

What are common types of crystals found in urine?
Types of crystals that a lab tech might see in your urine include:
- Ammonium biurate.
- Bilirubin.
- Calcium oxalate or calcium phosphate.
- Cystine.
- Hippuric acid.
- Leucine.
- Struvite (magnesium ammonium phosphate).
- Tyrosine.
- Uric acid.
- Xanthine.
The laboratory test can identify the type of crystals by their shape under a microscope.
Some of the crystals may have no identifiable shape (amorphous). The pH (acidity) of your urine can contribute to the type of crystals that form.

Who gets crystals in their urine?
Anyone can have crystals in their urine. The presence of crystals doesn't always mean that you have some type of medical condition, though, except in the case of cystine and xanthine. These crystals indicate rare inherited disorders. People who are prone to developing kidney stones also may have crystals evident in their urine.

How does having crystals in my urine affect my body?
You might have crystals in your urine that really don't mean anything. Others can cause problems, like kidney stones. When crystals collect into bigger masses, they can block urine from leaving your body. Some crystals can pass through your urinary tract and out of your body on their own, while others may need to be removed from your body by a healthcare provider.

Symptoms and Causes

What are the signs and symptoms of crystals in urine?
You may not have any signs or symptoms of crystals in urine. If you do, they might include:
- Pain.
- Fever.
- Cloudy or foamy urine.
- Feeling like you have to pee more often than usual.
- Bloody urine (hematuria).
- Bad-smelling urine.

What causes crystals in urine?
There are many things that can cause crystals to form in urine. They include:
- Dehydration (not drinking enough water).
- Eating large amounts of certain foods, including protein, salt, fruits and vegetables.
- Medications such as amoxicillin, acyclovir, sulfonamides, atazanavir and methotrexate.
- Urinary tract infections.
- Ethylene glycol poisoning.
- Tumor lysis syndrome, which is caused by the death of a large number of cancer cells. This may happen as a result of cancer treatment.

Is having crystals in your urine contagious?
No, crystalluria isn't contagious. Even if you have a urinary tract infection, you're not contagious.

Diagnosis and Tests

What tests will be done to diagnose crystals in your urine?
Your healthcare provider may order urine tests as part of your annual exam or as a result of symptoms you report. The urine samples will be examined in the lab. Your provider may also order other tests if they're needed to diagnose your condition correctly.

Management and Treatment

How do you treat crystals in your urine?
Treatment depends on the cause of the crystals. In many cases, you may just need to drink more water or cut back on your consumption of certain foods or things found in foods, like salt and sugar. If the crystals are a result of taking certain medication, your healthcare provider might be able to switch your medication or dosage. If the crystals point to some type of disease, then your healthcare provider will treat that condition.

Outlook / Prognosis

What can I expect if I have this condition?
The outlook if you have crystals in your urine is probably very good, though this does depend on any other conditions being treated.

Prevention

How can you reduce your risk of having crystals in your urine?
You can lower your risk of developing crystals in your urine and stones in your urinary tract if you:
- Drink an adequate amount of fluids, like water.
- Try to reach and maintain a healthy weight.
- Eat only a moderate amount of protein.
- Work with your healthcare provider or a registered dietitian to figure out how much calcium, oxalate and vitamin C you should have to be healthy.
- Eat fewer processed foods to cut down on salt and sugar intake.

Living With

When should I see my healthcare provider?
You should contact your healthcare provider if you have pain or fever along with changes in your urine. Kidney stones and other stones, like ureteral stones, can be very painful. They can also cause toxins to stay in your system and cause problems.

A note from Cleveland Clinic
Crystalluria, or having crystals in your urine, is relatively common.
Crystals can be found in the urine of people who are completely healthy and in the urine of people who have some type of illness. They might be found on a routine urine test or if your healthcare provider suspects another condition. One of the best ways to prevent crystals from developing is by drinking enough fluids.

Medically Reviewed. Last reviewed on 11/16/2021.
3904
https://study.com/learn/lesson/what-is-the-literary-canon.html
What is the Literary Canon? | Canonical Texts in Literature - Lesson | Study.com
English Courses: 10th Grade English: High School

What is the Literary Canon? | Canonical Texts in Literature

Contributors: Benjamin Williams, Joshua Wimmer
Author: Benjamin Williams. Ben is a college composition, high school ELA, and general humanities instructor. He has a B.A. in English Literature from Bryan College and an M.A. in Educational Leadership from Concordia University. He also holds teacher certifications in Ohio and Missouri.
Instructor: Joshua Wimmer. Joshua holds a master's degree in Latin and has taught a variety of Classical literature and language courses.

What is the literary canon? Is there a defined set of canonical texts in literature? Learn about the western literary canon and its changing reputation.
Updated: 11/21/2023

Table of Contents:
- What Does Literary Canon Mean?
- What is a Canonical Text?
- Examples of Canonical Texts
- Why is the Literary Canon Important?
- Controversies in the Literary Canon
- Lesson Summary

FAQ

What books are part of the literary canon?

The western literary canon is in a constant state of adaptation and change, especially due to the broader debate during the 20th and early 21st centuries. However, as it currently stands, commonly accepted "canonical" works and writers include: The Epic of Gilgamesh, the Bible, Homer, Sappho, Sophocles, Plato, Aristotle, Augustine, Beowulf, Dante Alighieri, Chaucer, Francis Petrarch, Miguel de Cervantes, William Shakespeare, Jean-Jacques Rousseau, Frederick Douglass, William Wordsworth, Emily Dickinson, James Joyce, Franz Kafka, Virginia Woolf, T.S. Eliot, William Faulkner, Pablo Neruda, Albert Camus, John Steinbeck, Gabriel Garcia Marquez, and Chinua Achebe, among many others.

Why is the literary canon important?

The literary canon serves as an important standard for determining the quality and long-term value of any given work. Even texts and writers who are not specifically considered canonical are measured against the broader canon to determine their literary value.

What makes an author part of the literary canon?

Though broadly debated, a "canonical author" is a writer whose work is stylistically excellent, is highly reflective of the tenor or controversies of their time period, and has demonstrable long-term influence across society.
What Does Literary Canon Mean?
------------------------------

Literary canon is a technical term describing a set of texts that serve as a recognized standard of stylistic quality, cultural or social significance, and intellectual value.
The literary canon has no definite origin date; rather, it has been adopted over time through pervasive usage in university and graduate classrooms, references and citations in academic journals, and, to a degree, the influence of curriculum publishers and testing organizations. Because people in a society are most likely to be exposed to the accepted canon of literature, these texts both inform a generally accepted worldview and determine the ''imaginative boundaries'' of how that society tends to think. Aristotle's works, for example, have been considered part of the literary canon for many centuries, and Western societies have tended to approach questions of warfare and economics through an Aristotelian lens. Essentially, the voice a society hears most often is most likely to exert an outsize influence on the people within that society, both in how they think and how they live.

How is the Literary Canon Determined?

The fact that the literary canon is determined through usage and collaboration makes it significant in literature because it is both highly adaptable and controversial. In recent years, a push to change which authors and works should be considered canonical has driven a great deal of debate within the Western literary world. As such, the literary canon is also commonly referred to as the Western canon. Factors that may contribute to a work's inclusion in the literary canon are its cultural significance, historical impact, and scholarly consensus. The process through which a work becomes part of the canon weighs the critical acclaim, influence, and longevity of the work.

What is a Canonical Text?
-------------------------

A measuring rod (i.e., a kanon) was used to determine if an object was straight.
The definition and meaning of canon in English stems from an older Greek term (transliterated as kanon) that referred to a ''standard'' or ''measuring rod'' against which something was compared to ensure that it was set correctly. This physical, engineering use of the term eventually took on a metaphorical meaning. Now, a canon is defined as an agreed-upon standard against which other, most frequently intellectual, works are measured for quality, long-term value, and influence. Within this context, a "canonical text" sets the standard as a representative work of its genre or time, and that canonical status can influence the interpretation and study of all other works within that genre.

What Does Canon Mean in Western Society?

Though Eastern cultures differ, Europe and much of Western society have worked within the boundaries of a fairly consistent literary canon for the last 1,000–1,500 years. In general, the Western canon has prioritized the voices of the dominant Greco-Roman philosophers; the poets and novelists of France and Britain; and, in more recent centuries, the sociological and philosophical voices of Germany. American novelists, in particular, have also found their way into the generally accepted canon during the late 19th century and throughout the 20th.

Examples of Canonical Texts
---------------------------

There are numerous examples of canonical texts in Western literature. Canonical voices are generally considered to be some of the most influential in shaping Western culture's beliefs and assumptions surrounding justice, the meaning of human life, beauty, virtue, and other societal values.
A few examples of the more influential and widely accepted voices of the Western literary canon, drawn from various periods and genres to illustrate its breadth, include:

Gabriel Garcia Marquez was a Colombian writer whose works are generally considered part of the Western literary canon.

- Epic of Gilgamesh
- Homer
- Aristotle
- William Shakespeare
- Frederick Douglass
- Emily Dickinson
- Virginia Woolf
- T.S. Eliot
- William Faulkner
- Pablo Neruda
- Albert Camus
- John Steinbeck
- Gabriel Garcia Marquez
- Ray Bradbury
- Chinua Achebe

Why is the Literary Canon Important?
------------------------------------

The literary canon is important because of its role in shaping cultural and societal values, as well as its influence on education and literary study. Unfortunately, the generally accepted Western canon lacks diversity: it has long been dominated by men and, for much of the last several centuries, by white, Christian men. During the mid-to-late 20th century, however, literary departments, professors, and publishers began to question the validity of a canon that did not represent the voices of women and persons of other races and ethnicities, leading to a heated debate over expanding the literary standard.

Controversies in the Literary Canon
-----------------------------------

There are numerous controversies surrounding the literary canon, particularly concerning its lack of diversity and representation and changing societal values. In recent years, classics programs at major universities, especially Ivy League institutions in the United States and Oxford and Cambridge in the United Kingdom, have been engaged in a debate about which texts and writers should be accepted as classics or as part of the literary canon. Classics programs have long prioritized the white, masculine, and most frequently upper-class voices of Greece and Rome. However, contemporary scholars like Dan-el Padilla Peralta (Ph.D.)
of Princeton University have instead been arguing for a broader reconstruction of the classical worldview, one that would include marginalized voices from the time period. The general argument from thinkers like Peralta centers on the idea that a ''classics program'' should explore the totality of perspectives that shaped the earliest intellectual foundations of Western civilization. From this perspective, if classics programs do not include the marginalized voices and perspectives of the ancient world, students will not truly understand the cultural forces that birthed contemporary Western culture; rather, they will be left only with an in-depth understanding of the perspectives of the dominant race, socio-economic class, and gender of the period.

Maya Angelou is one of the contemporary writers widely accepted as part of the Western literary canon.

During the 20th and early 21st centuries, the debate about the literary canon across Western society has broadened to a consideration of BIPOC (Black, indigenous, and people of color), ethnic, and queer voices. The early stages of this debate may be seen in the eventual acceptance of Black writers like:

- Frederick Douglass
- W.E.B. Du Bois
- Zora Neale Hurston
- Langston Hughes
- James Baldwin
- Maya Angelou
- Toni Morrison

While most literary anthologies published today include these writers as prominent and talented authors, their acceptance into the literary canon came only after intense debate over the last 150 years. Arguments against including these and other writers within the accepted literary canon were largely racist, stemming from a dominant cultural rejection of the value of Black writers. This debate continues within corners of the academic world, though writers like Frederick Douglass, Langston Hughes, and Maya Angelou, in particular, have themselves demonstrated the importance and quality of Black literary voices and perspectives.
Similar debates continue regarding Native American, Hispanic, and Asian voices within contemporary Western literature.

The Debate Over Inclusion of LGBTQ+ Voices

Though occasionally present within the generally accepted Western literary canon, as with the writings of Sappho, voices exploring life from within the LGBTQ+ perspective have historically been absent from, or actively suppressed by, the literary world. However, with the rise of awareness within Western society beginning in the late 19th century, these voices are now gaining consideration for canonical inclusion. Leaders within the LGBTQ+ community are increasingly gaining a voice in reshaping the canon to include their life perspectives within anthologies, syllabi, and generally used curricula. Some of the more commonly accepted queer voices in the literary canon include:

Alice Walker is a BIPOC and queer writer included in the literary canon.

- Virginia Woolf: Hugely influential for her ''stream of consciousness'' approach to writing, Woolf, though married, was known to have same-sex relationships.
- Walt Whitman: Though he did not overtly identify himself as gay within his more popular writing, Whitman is widely understood to have been a gay man whose sexuality informed his choice of symbolism, allusion, and topics.
- Allen Ginsberg: With the extremely controversial and, at the time, illegal publication of his poem ''Howl,'' Ginsberg is one of the best-known queer voices of the mid-20th century.
- W. H. Auden: Best known for his poetry, Auden is considered a highly influential Anglo-American writer.
- Alice Walker: Along with being one of the most prominent Black writers of her time, best known for her novel The Color Purple, Walker is also recognized as an important voice within the bisexual community.
After national moves to rescind sodomy laws and with the eventual legalization of same-sex marriage, the queer community continues to gain recognition and acceptance within the literary world. As such, the voices of LGBTQ+ writers will likely continue to grow in prominence as they are increasingly included in the broader Western literary canon.

Lesson Summary
--------------

Stemming from an older Greek term, kanon, which meant ''straight rod,'' a canon is a standard by which other things are measured. As such, the literary canon is a technical term describing a set of texts that serve as a recognized standard against which other writers and texts are judged. Canon in literature, often called the Western canon, can also classify works that belong to a particular literary tradition or author. The purpose of a literary canon is to assess a new text for its stylistic quality, cultural or social significance, and intellectual value. The literary canon also forms the basis from which a society grows and develops, since its canonical texts and thinkers establish the philosophical and imaginative boundaries of that society. Examples of authors included in the literary canon are Jane Austen, Gabriel Garcia Marquez, Chinua Achebe, Maya Angelou, and Ray Bradbury. Since the literary canon is determined by usage and general acceptance, the 20th and 21st centuries have been marked by a wide-ranging debate over canonical inclusion: the acceptance of underrepresented writers and texts from non-dominant cultures. These debates advocate for incorporating the voices of BIPOC (Black, indigenous, and people of color), queer, and other minority perspectives into the literary canon.

Video Transcript

Literary Canon Defined

Have your teachers ever assigned the works of Shakespeare, or maybe some by the Brontë sisters?
If so, they probably wanted you to gain experience with what they would call the literary canon, which is a collection of works by which others are measured in terms of literary skill and value. Originally, the Greeks used a kanôn ('straight rod') as a tool in surveying and construction projects to keep things straight and level. Over time, its use as a measuring device was adapted to apply to keeping literary works straight as well. Through the years, 'canon' has been employed in various ways to classify literature: from assigning works to a particular tradition (e.g., the Biblical canon) to attributing them to a specific author (e.g., the Shakespearean canon). The primary usage discussed in this lesson, though, refers to the canon as a yardstick for measuring the value and validity of the world's literature.

Many members of the American academic community in the 1980s revolted against this idea of what was then often called the Western canon, which was claimed to be a collection of the world's highest and most influential literature. They felt that there were many other examples of great literature to be found in the world and that the current collection was not as inclusive as it should be. For instance, some of these protestors' central arguments focused on the underrepresentation of women and people in minority groups among the ranks of authors whose works were considered important enough to be part of the literary canon. Since the '80s, the literary canon has expanded considerably, now including many female authors and people from all walks of life.

Some Authors of the Literary Canon

Let's look at some of the authors whose work is part of the literary canon.

Homer: For millennia, the works of Homer, the famous Greek epics Iliad and Odyssey, have been considered some of the highest forms of literature in the world. The problem is, though, we're not sure that this renowned bard was a real or even a single person! In fact, we're actually pretty certain he wasn't.
Nevertheless, 'Homer' and the many writers he has influenced have been listed among the world's greatest literary minds since antiquity.

The works of the Roman poet Vergil (above), along with many other ancient authors, are included with those of Homer in the literary canon.

William Shakespeare: The plays and poetry of William Shakespeare have long been considered some of the best examples of English literature ever penned. Such high acclaim has, of course, also earned his works a place in the literary canon. For centuries, English writers have compared themselves, and have been compared, to the Bard. This sort of looking to an author's work as a measure of literary success and value is precisely what being part of the literary 'canon' is all about.

Jane Austen: From antiquity, literary circles have been overflowing with testosterone. Of course, this doesn't mean that there haven't been any exemplary female authors, just that they've been historically excluded from the literary elite. Since the 'canon wars' of the 1980s, many more female writers (e.g., Virginia Woolf, Maya Angelou) have populated the list, including Jane Austen, whose books like Northanger Abbey and Emma have been added to the roll of literary classics.

Many women used to take on a male pseudonym to publish their work. Austen (above), though, simply adopted the name 'A Lady' when publishing her Sense and Sensibility.

Ray Bradbury: Another of the concerns that students and educators had in the '80s was that the literary canon highly favored European authors. Since then, many more Americans have swelled the ranks of the world's most notable writers. Ray Bradbury's Fahrenheit 451, concerned as it is with the future of literature itself, was a shoo-in for inclusion in the literary canon. Several other Americans, including Twain, Steinbeck, and Hemingway, have also taken their place on the list.
Gabriel García Márquez: Even with the inclusion of authors from the U.S., the literary canon still seemed overpopulated with English, German, French, and Russian masterpieces. Where's the Spanish? The Pashto? The Mandarin? Protesters of the 'Western' canon also wanted to broaden the scope of languages and peoples represented in the collection. Renowned Colombian author Gabriel García Márquez was one who helped expand the canon to include more Hispanic writers, with influential works like his Love in the Time of Cholera (El amor en los tiempos del cólera).

Alice Walker: Known for her famous novel The Color Purple, told in the voice of Celie, Alice Walker has produced novels, poetry, short stories, and essays that embrace the culture of early twentieth-century Black Americans, with a focus on Black women's lives. As an activist for racial and gender equality, Walker reveals in her works the beauty and strength of her characters in their peaceful tolerance of adversity. The Color Purple earned Walker the first Pulitzer Prize for Fiction awarded to an African American woman and has been adapted into an award-winning movie and musical.

Richard Wright: Having grown up as a Black child in poverty, Wright wrote novels, poetry, short stories, and non-fiction that address racial themes and the difficulties African Americans faced from the late 19th to the mid-20th century, boldly exposing discrimination and its impact in the style of protest literature. His more famous works include his memoir Black Boy, the novel Native Son, and his novella collection Uncle Tom's Children, all of which are featured in school curricula across the United States.

Lesson Summary

Let's review. The literary canon is a collection of works by which others are measured in terms of literary skill and value. Derived from the Greek kanôn ('straight rod'), the term 'canon' has been used to classify works belonging to either a particular tradition (e.g., Biblical) or author (e.g., Shakespearean).
However, our primary usage of it here refers to the canon as a representation of the world's greatest literature. Since the 1980s, what was then often referred to as the Western canon has been expanded to include female authors and others from all walks of life. The movement to more broadly represent the world's great literary talents has expanded the canon to embrace the works of Jane Austen, Ray Bradbury, and Gabriel García Márquez along with those of the likes of Homer and Shakespeare.
- Writing Grades 9-10: Standards Common Core ELA - Language Grades 9-10: Standards English 101: English Literature English 102: American Literature TOEFL iBT (2025) Study Guide and Test Prep Common Core ELA Grade 8 - Writing: Standards CAHSEE English Exam: Test Prep & Study Guide Common Core ELA Grade 8 - Literature: Standards 11th Grade English: High School CSET English Subtests I & III (105 & 107): Practice & Study Guide 10th Grade English: High School SAT Subject Test Literature: Practice and Study Guide Browse by Lessons Evaluating Structure, Aesthetics & Significance of Texts Book Timeline Project Ideas The Effects of Form & Structure in Poetry & Drama Frame Narrative Lesson Plan Poetry Analysis Essay Example for English Literature 11th Grade Assignment - Comparative Analysis of Dramatic Adaptations Prose Lesson Plan 11th Grade Assignment - Short Story Literary Analysis Annotating Literature Lesson Plan Teaching Literary Elements Poetry Analysis Activities for High School Students Literature Circle Journal Prompts Analyzing Literary Techniques in American Literature Readings: Essay Prompts How to Locate & Connect Information in Complex Poems Comparing Techniques Used in Different Literary Mediums Browse by Courses HiSET Language Arts - Writing: Prep and Practice AP English Language Study Guide and Exam Prep Common Core ELA - Literature Grades 9-10: Standards Common Core ELA - Writing Grades 9-10: Standards Common Core ELA - Language Grades 9-10: Standards English 101: English Literature English 102: American Literature TOEFL iBT (2025) Study Guide and Test Prep Common Core ELA Grade 8 - Writing: Standards CAHSEE English Exam: Test Prep & Study Guide Common Core ELA Grade 8 - Literature: Standards 11th Grade English: High School CSET English Subtests I & III (105 & 107): Practice & Study Guide 10th Grade English: High School SAT Subject Test Literature: Practice and Study Guide Browse by Lessons Evaluating Structure, Aesthetics & Significance of Texts Book 
Timeline Project Ideas The Effects of Form & Structure in Poetry & Drama Frame Narrative Lesson Plan Poetry Analysis Essay Example for English Literature 11th Grade Assignment - Comparative Analysis of Dramatic Adaptations Prose Lesson Plan 11th Grade Assignment - Short Story Literary Analysis Annotating Literature Lesson Plan Teaching Literary Elements Poetry Analysis Activities for High School Students Literature Circle Journal Prompts Analyzing Literary Techniques in American Literature Readings: Essay Prompts How to Locate & Connect Information in Complex Poems Comparing Techniques Used in Different Literary Mediums Create an account to start this course today Used by over 30 million students worldwide Create an account Explore our library of over 88,000 lessons Search Browse Browse by subject College Courses Business English Foreign Language History Humanities Math Science Social Science See All College Courses High School Courses AP Common Core GED High School See All High School Courses Other Courses College & Career Guidance Courses College Placement Exams Entrance Exams General Test Prep K-8 Courses Skills Courses Teacher Certification Exams See All Other Courses Study.com is an online platform offering affordable courses and study materials for K-12, college, and professional development. It enables flexible, self-paced learning. Plans Study Help Test Preparation College Credit Teacher Resources Working Scholars® School Group Online Tutoring About us Blog Career Teach for Us Press Center Ambassador Scholarships Support FAQ Site Feedback Terms of Use Privacy Policy DMCA Notice ADA Compliance Honor Code for Students Mobile Apps Contact us by phone at (877) 266-4919, or by mail at 100 View Street #202, Mountain View, CA 94041. © Copyright 2025 Study.com. All other trademarks and copyrights are the property of their respective owners. All rights reserved. ×
3905
https://mathworld.wolfram.com/Cosine.html
Cosine

The cosine function is one of the basic functions encountered in trigonometry (the others being the cosecant, cotangent, secant, sine, and tangent). Let $\theta$ be an angle measured counterclockwise from the $x$-axis along the arc of the unit circle. Then $\cos\theta$ is the horizontal coordinate of the arc endpoint.

The common schoolbook definition of the cosine of an angle $\theta$ in a right triangle (which is equivalent to the definition just given) is as the ratio of the lengths of the side of the triangle adjacent to the angle and the hypotenuse, i.e.,

$$\cos\theta = \frac{\text{adjacent}}{\text{hypotenuse}}. \qquad (1)$$

A convenient mnemonic for remembering the definition of the sine, cosine, and tangent is SOHCAHTOA (sine equals opposite over hypotenuse, cosine equals adjacent over hypotenuse, tangent equals opposite over adjacent).

As a result of its definition, the cosine function is periodic with period $2\pi$. By the Pythagorean theorem, $\cos\theta$ also obeys the identity

$$\sin^2\theta + \cos^2\theta = 1. \qquad (2)$$

The definition of the cosine function can be extended to complex arguments $z$ using the definition

$$\cos z = \frac{e^{iz} + e^{-iz}}{2}, \qquad (3)$$

where $e$ is the base of the natural logarithm and $i$ is the imaginary unit. Cosine is an entire function and is implemented in the Wolfram Language as Cos[z]. A related function known as the hyperbolic cosine is similarly defined,

$$\cosh z = \frac{e^{z} + e^{-z}}{2}. \qquad (4)$$

The cosine function has a fixed point at 0.739085... (OEIS A003957), a value sometimes known as the Dottie number (Kaplan 2007).

The cosine function can be defined analytically using the infinite sum

$$\cos z = \sum_{n=0}^{\infty} \frac{(-1)^n z^{2n}}{(2n)!} = 1 - \frac{z^2}{2!} + \frac{z^4}{4!} - \cdots \qquad (5, 6)$$

or the infinite product

$$\cos z = \prod_{n=1}^{\infty} \left[1 - \frac{4z^2}{\pi^2 (2n-1)^2}\right]. \qquad (7)$$

Hardy (1959) gives a close elementary approximation to the cosine function.

The cosine obeys the multiple-angle formula

$$\cos(nx) = \sum_{k=0}^{n} \binom{n}{k} \cos^k x \, \sin^{n-k} x \, \cos\!\left[\tfrac{1}{2}(n-k)\pi\right], \qquad (11)$$

where $\binom{n}{k}$ is a binomial coefficient, and is related to $\tan(z/2)$ via

$$\cos z = \frac{1 - \tan^2(z/2)}{1 + \tan^2(z/2)} \qquad (12)$$

(Trott 2006, p. 39).
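The series definition and the fixed point above are easy to check numerically. The sketch below (Python, not part of the original article; the helper name `cos_series` is illustrative) sums the Taylor series and iterates $z \mapsto \cos z$ toward the Dottie number:

```python
import math

def cos_series(z, terms=20):
    """Partial sum of cos z = sum_{n>=0} (-1)^n z^(2n) / (2n)!."""
    return sum((-1) ** n * z ** (2 * n) / math.factorial(2 * n)
               for n in range(terms))

# The series converges for every z; 20 terms already agree with math.cos here.
print(abs(cos_series(1.0) - math.cos(1.0)) < 1e-12)  # True

# Iterating z -> cos z from any real start converges to the fixed point
# 0.739085... (the Dottie number).
z = 1.0
for _ in range(200):
    z = math.cos(z)
print(round(z, 6))  # 0.739085
```

The fixed-point iteration converges because $|{-\sin z}| < 1$ near the root, so the map is a contraction there.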
Summation of $\cos(kx)$ from $k = 1$ to $n$ can be done in closed form as

$$\sum_{k=1}^{n} \cos(kx) = \frac{\sin(nx/2)\,\cos[(n+1)x/2]}{\sin(x/2)}. \qquad (13)$$

Similarly,

$$\sum_{k=0}^{\infty} p^k \cos(kx) = \frac{1 - p\cos x}{1 - 2p\cos x + p^2}, \qquad (18)$$

where $|p| < 1$. The exponential sum formula gives such results directly, by summing the geometric series for $e^{ikx}$ and taking the real part. Related cosine sums can also be done in closed form.

The Fourier transform of $\cos(2\pi k_0 x)$ is given by

$$\mathcal{F}_x[\cos(2\pi k_0 x)](k) = \tfrac{1}{2}\left[\delta(k - k_0) + \delta(k + k_0)\right], \qquad (22)$$

where $\delta(k)$ is the delta function. Cvijović and Klinowski (1995) note that the series

$$C_\nu(\alpha) = \sum_{k=0}^{\infty} \frac{\cos[(2k+1)\alpha]}{(2k+1)^\nu} \qquad (24)$$

has a closed form for $\nu = 2n$,

$$C_{2n}(\alpha) = \frac{(-1)^n \pi^{2n}}{4\,(2n-1)!}\, E_{2n-1}\!\left(\frac{\alpha}{\pi}\right), \qquad (25)$$

where $E_n(x)$ is an Euler polynomial. A definite integral involving the cosine can be given in terms of the gamma function $\Gamma(z)$ (T. Drane, pers. comm., Apr. 21, 2006).

See also: Cis, Dottie Number, Elementary Function, Euler Polynomial, Exponential Sum Formulas, Fourier Transform--Cosine, Hyperbolic Cosine, Inverse Cosine, Secant, Sine, SOHCAHTOA, Tangent, Trigonometric Functions, Trigonometry

References

Abramowitz, M. and Stegun, I. A. (Eds.). "Circular Functions." §4.3 in Handbook of Mathematical Functions with Formulas, Graphs, and Mathematical Tables, 9th printing. New York: Dover, pp. 71-79, 1972.
Beyer, W. H. CRC Standard Mathematical Tables, 28th ed. Boca Raton, FL: CRC Press, p. 215, 1987.
Cvijović, D. and Klinowski, J. "Closed-Form Summation of Some Trigonometric Series." Math. Comput. 64, 205-210, 1995.
Hansen, E. R. A Table of Series and Products. Englewood Cliffs, NJ: Prentice-Hall, 1975.
Hardy, G. H. Ramanujan: Twelve Lectures on Subjects Suggested by His Life and Work, 3rd ed. New York: Chelsea, p. 68, 1959.
Jeffrey, A. "Trigonometric Identities." §2.4 in Handbook of Mathematical Formulas and Integrals, 2nd ed. Orlando, FL: Academic Press, pp. 111-117, 2000.
Kaplan, S. R. "The Dottie Number." Math. Mag. 80, 73-74, 2007.
Project Mathematics.
"Sines and Cosines, Parts I-III." Videotape.
Sloane, N. J. A. Sequence A003957 in "The On-Line Encyclopedia of Integer Sequences."
Spanier, J. and Oldham, K. B. "The Sine and Cosine Functions." Ch. 32 in An Atlas of Functions. Washington, DC: Hemisphere, pp. 295-310, 1987.
Tropfke, J. Teil IB, §1. "Die Begriffe des Sinus und Kosinus eines Winkels." In Geschichte der Elementar-Mathematik in systematischer Darstellung mit besonderer Berücksichtigung der Fachwörter, fünfter Band, zweite aufl. Berlin and Leipzig, Germany: de Gruyter, pp. 11-23, 1923.
Trott, M. The Mathematica GuideBook for Symbolics. New York: Springer-Verlag, 2006.
Zwillinger, D. (Ed.). "Trigonometric or Circular Functions." §6.1 in CRC Standard Mathematical Tables and Formulae. Boca Raton, FL: CRC Press, pp. 452-460, 1995.

Cite this as: Weisstein, Eric W. "Cosine." From MathWorld--A Wolfram Resource.
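As a numerical footnote to the article's closed-form summation of $\cos(kx)$ (a sketch, not part of the original; the function names are illustrative), the identity $\sum_{k=1}^{n}\cos(kx) = \sin(nx/2)\cos[(n+1)x/2]/\sin(x/2)$ can be checked against direct term-by-term summation:

```python
import math

def cos_sum_direct(n, x):
    """Sum of cos(k*x) for k = 1..n, term by term."""
    return sum(math.cos(k * x) for k in range(1, n + 1))

def cos_sum_closed(n, x):
    """Closed form sin(nx/2) * cos((n+1)x/2) / sin(x/2); requires sin(x/2) != 0."""
    return math.sin(n * x / 2) * math.cos((n + 1) * x / 2) / math.sin(x / 2)

# The two agree to floating-point accuracy for any n and non-degenerate x.
for n in (1, 5, 50):
    assert abs(cos_sum_direct(n, 0.7) - cos_sum_closed(n, 0.7)) < 1e-10
print("closed form matches direct summation")
```

The closed form follows from writing $\cos(kx)$ as the real part of $e^{ikx}$ and summing the resulting geometric series.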
3906
https://artofproblemsolving.com/wiki/index.php/Simon%27s_Favorite_Factoring_Trick?srsltid=AfmBOopaxu-CdLEe2NZj_FKotVJRmOug-vrzHoGru8kNYIhXd5Bw1wWU
Simon's Favorite Factoring Trick

Simon's Favorite Factoring Trick (SFFT) is often used in Diophantine equations where factoring is needed. It applies to equations of the form $xy + jx + ky = a$, for integer constants $j$, $k$, and $a$: a constant on one side of the equation and, on the other, a product of variables with each of those variables appearing in linear terms. Simon's Favorite Factoring Trick is named after AoPS user ComplexZeta, or Dr. Simon Rubinstein-Salzedo.

Contents

1 Statement
2 Applications
3 Problems
3.1 Introductory
3.2 Intermediate
3.3 Olympiad
4 See Also
4.1 External Links

Statement

Let's put it in general terms. We have an equation $xy + jx + ky = a$, where $j$, $k$, and $a$ are integer constants.
Simon's Favorite Factoring Trick states that this equation can be factored into the equation

$$(x + k)(y + j) = a + jk.$$

For example, $xy + 2x + 3y = 6$ is the same as:

$$(x + 3)(y + 2) = 12.$$

Here is another way to look at it. Consider the equation $xy + jx + ky = a$. Let's start by factoring $x$ out of the first group: $x(y + j) + ky = a$. How do we group the last term so we can factor by grouping? Notice that we can add $jk$ to both sides. This yields $x(y + j) + k(y + j) = a + jk$. Now we can factor as $(x + k)(y + j) = a + jk$. This is important because it keeps showing up in number theory problems.

Let's look at the problem below:

Determine all possible ordered pairs of positive integers that are solutions to the given equation. (2021 CEMC Galois #4b)

Let's remove the denominators and move every term to one side so that only a constant remains on the other. Grouping the terms by artificially adding the right constant to both sides, exactly as above, turns the left side into a product of two linear factors equal to $20$. Now we use factor pairs to solve this problem. Look at all factor pairs of $20$: $(1, 20)$, $(2, 10)$, $(4, 5)$, $(5, 4)$, $(10, 2)$, $(20, 1)$. The first number in each pair is the value of one linear factor, the second the value of the other. Solving each of the resulting systems gives the solutions.

Applications

This factorization frequently shows up on contest problems, especially those heavy on algebraic manipulation. Usually $x$ and $y$ are variables and $j$, $k$, $a$ are known constants. Sometimes you have to notice that the equation is not yet in the form $xy + jx + ky = a$. Additionally, you almost always have to add or subtract a constant term to one side so you can isolate the constant and make the equation factorable. It can be used to solve more than algebra problems, sometimes going into other topics such as number theory. When the coefficient of $xy$ is not $1$, you can sometimes achieve an equation that can be factored by dividing that coefficient off of the equation.

Problems

Introductory

Two different prime numbers between and are chosen. When their sum is subtracted from their product, which of the following numbers could be obtained? (Source)

Intermediate

If has a remainder of when divided by , and has a remainder of when divided by , find the value of the remainder when is divided by . are integers such that .
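The divisor-pair search described above mechanizes directly. The sketch below (Python; the helper name and the sample equation are illustrative, not from the wiki page) solves $xy + jx + ky = a$ over the positive integers by rewriting it as $(x + k)(y + j) = a + jk$ and enumerating divisor pairs:

```python
def solve_sfft(j, k, a):
    """Solve x*y + j*x + k*y = a over positive integers x, y.

    Adding j*k to both sides factors the left side:
        (x + k) * (y + j) = a + j*k
    so each solution corresponds to a divisor pair of n = a + j*k.
    """
    n = a + j * k
    solutions = []
    for d in range(1, n + 1):
        if n % d == 0:
            x, y = d - k, n // d - j  # d = x + k, n // d = y + j
            if x > 0 and y > 0:
                solutions.append((x, y))
    return solutions

# x*y + 2*x + 3*y = 6 becomes (x + 3)(y + 2) = 12
print(solve_sfft(2, 3, 6))  # [(1, 1)]
```

Only divisors $d > k$ with $n/d > j$ survive the positivity check, which is exactly the factor-pair filtering done by hand in the worked example.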
Find . (Source)

A rectangular floor measures $a$ by $b$ feet, where $a$ and $b$ are positive integers with $b > a$. An artist paints a rectangle on the floor with the sides of the rectangle parallel to the sides of the floor. The unpainted part of the floor forms a border of width $1$ foot around the painted rectangle and occupies half of the area of the entire floor. How many possibilities are there for the ordered pair $(a, b)$? (Source)

Olympiad

The integer $n$ is positive. There are exactly $2005$ ordered pairs $(x, y)$ of positive integers satisfying

$$\frac{1}{x} + \frac{1}{y} = \frac{1}{n}.$$

Prove that $n$ is a perfect square. (Source)

See Also

Number Theory
Diophantine equation
Algebra
Factoring

External Links

Video on AoPS

Categories: Number theory, Theorems
3907
https://www.abebooks.com/book-search/title/introduction-banach-space-theory/author/robert-megginson/
Introduction Banach Space Theory by Robert Megginson - AbeBooks

Introduction Banach Space Theory by Robert Megginson (34 results)
An Introduction to Banach Space Theory (Graduate Texts in Mathematics, 183). Megginson, Robert E. Springer, 1998. ISBN 10: 0387984313 / ISBN 13: 9780387984315. Seller: Mark Henderson, Overland Park, KS, U.S.A. Used hardcover, very good condition. US$ 34.97 + US$ 5.95 shipping within U.S.A. Quantity: 1 available.

Introduction to Banach Space Theory. Megginson, Robert E. Springer, 2012. ISBN 10: 1461268354 / ISBN 13: 9781461268352. Seller: GreatBookPrices, Columbia, MD, U.S.A. New softcover. US$ 87.47 + US$ 2.64 shipping within U.S.A. Quantity: over 20 available.
Two editions are listed: the 1998 Springer hardcover (Graduate Texts in Mathematics 183; ISBN 10: 0387984313 / ISBN 13: 9780387984315) and the 2012 Springer softcover reprint (ISBN 10: 1461268354 / ISBN 13: 9781461268352). Several sellers quote the publisher's description: "Many important reference works in Banach space theory have appeared since Banach's 'Théorie des Opérations Linéaires', the impetus for the development of much of the modern theory in this field. While these works are classical starting points for the graduate student wishing to do research in Banach space theory, they can be formidable reading for the student who has just completed a course in measure theory and integration that introduces the L_p spaces and would..."

- Grand Eagle Retail, Mason, OH, U.S.A.: 2012 softcover, new. US$ 90.12, free shipping within U.S.A. 1 available.
- Lucky's Textbooks, Dallas, TX, U.S.A.: 2012 softcover, new. US$ 86.15 + US$ 3.99 shipping. Over 20 available.
- California Books, Miami, FL, U.S.A.: 2012 softcover, new. US$ 97.00, free shipping. Over 20 available.
- GreatBookPrices, Columbia, MD, U.S.A.: 2012 softcover, used, as new ("unread book in perfect condition"). US$ 96.32 + US$ 2.64 shipping. Over 20 available.
- Ria Christie Collections, Uxbridge, United Kingdom: 2012 softcover, new. US$ 88.56 + US$ 16.06 shipping to U.S.A. Over 20 available.
- Mark Henderson, Overland Park, KS, U.S.A.: 1998 hardcover, used, near fine. US$ 99.97 + US$ 5.95 shipping. 1 available.
- GreatBookPricesUK, Woodford Green, United Kingdom: 2012 softcover, new. US$ 88.54 + US$ 20.11 shipping to U.S.A. Over 20 available.
- GreatBookPricesUK: 2012 softcover, used, as new. US$ 99.94 + US$ 20.11 shipping to U.S.A. Over 20 available.
- GreatBookPrices: 1998 hardcover, new. US$ 118.50 + US$ 2.64 shipping. Over 20 available.
- Grand Eagle Retail: 1998 hardcover, new. US$ 121.15, free shipping within U.S.A. 1 available.
- Lucky's Textbooks: 1998 hardcover, new. US$ 117.18 + US$ 3.99 shipping. Over 20 available.
- California Books: 1998 hardcover, new. US$ 132.00, free shipping. Over 20 available.
- LIBRERIA LEA+, Santiago, RM, Chile: 1998 hardcover, new, with dust jacket. US$ 96.39 + US$ 37.00 shipping to U.S.A. 1 available.
- GreatBookPrices: 1998 hardcover, used, as new. US$ 138.72 + US$ 2.64 shipping. Over 20 available.
- Ria Christie Collections: 1998 hardcover, new. US$ 129.99 + US$ 16.06 shipping to U.S.A. Over 20 available.
- GreatBookPricesUK: 1998 hardcover, new. US$ 129.97 + US$ 20.11 shipping to U.S.A. Over 20 available.
- buchversandmimpf2000, Emtmannsberg, BAYE, Germany: 2012 softcover, new. US$ 90.29 + US$ 70.23 shipping to U.S.A. 2 available.
- Mooney's bookstore, Den Helder, Netherlands: 1998 hardcover, used, very good. US$ 136.50 + US$ 29.20 shipping to U.S.A. 1 available.
- BennettBooksLtd, San Diego, NV, U.S.A.: 1998 hardcover, new, in shrink wrap. US$ 156.54 + US$ 6.95 shipping. 1 available.
- GreatBookPricesUK: 1998 hardcover, used, as new. US$ 148.78 + US$ 20.11 shipping to U.S.A. Over 20 available.
- Revaluation Books, Exeter, United Kingdom: 2012 softcover (reprint edition, 615 pages, 9.50 x 6.25 x 1.25 inches), new. US$ 136.76 + US$ 33.52 shipping to U.S.A. 2 available.
- AHA-BUCH GmbH, Einbeck, Germany: 2012 softcover, new, printed on demand. US$ 96.15 + US$ 75.68 shipping to U.S.A. 1 available.
- buchversandmimpf2000: 1998 hardcover, new. US$ 128.99 + US$ 70.23 shipping to U.S.A. 2 available.
- AHA-BUCH GmbH: 1998 hardcover, new, printed on demand. US$ 136.76 + US$ 76.62 shipping to U.S.A. 1 available.

Stock Image An Introduction to Banach Space Theory (Paperback) Robert E.
Megginson Published by Springer-Verlag New York Inc., New York, NY, 2012 ISBN 10: 1461268354/ ISBN 13: 9781461268352 Language: English Seller: AussieBookSeller, Truganina, VIC, Australia (5-star seller)Seller rating 5 out of 5 stars;) Contact seller New - Softcover Condition: New US$ 182.08 Convert currency US$ 37.00 shipping from Australia to U.S.A. Destination, rates & speeds Quantity: 1 available Add to basket Paperback. Condition: new. Paperback. Many important reference works in Banach space theory have appeared since Banach's "Theorie des Operations Lineaires", the impetus for the development of much of the modern theory in this field. While these works are classical starting points for the graduate student wishing to do research in Banach space theory, they can be formidable reading for the student who has just completed a course in measure theory and integration that introduces the L_p spaces and would... More Stock Image An Introduction to Banach Space Theory (Hardcover) Robert E. Megginson Published by Springer-Verlag New York Inc., New York, NY, 1998 ISBN 10: 0387984313/ ISBN 13: 9780387984315 Language: English Seller: AussieBookSeller, Truganina, VIC, Australia (5-star seller)Seller rating 5 out of 5 stars;) Contact seller New - Hardcover Condition: New US$ 271.47 Convert currency US$ 37.00 shipping from Australia to U.S.A. Destination, rates & speeds Quantity: 1 available Add to basket Hardcover. Condition: new. Hardcover. Many important reference works in Banach space theory have appeared since Banach's "Theorie des Operations Lineaires", the impetus for the development of much of the modern theory in this field. While these works are classical starting points for the graduate student wishing to do research in Banach space theory, they can be formidable reading for the student who has just completed a course in measure theory and integration that introduces the L_p spaces and would... More Seller Image An Introduction to Banach Space Theory Robert E. 
Megginson Published by Springer New York Okt 2012, 2012 ISBN 10: 1461268354/ ISBN 13: 9781461268352 Language: English Seller: BuchWeltWeit Ludwig Meier e.K., Bergisch Gladbach, Germany (5-star seller)Seller rating 5 out of 5 stars;) Contact seller Print on Demand New - Softcover Condition: New US$ 90.29 Convert currency US$ 26.92 shipping from Germany to U.S.A. Destination, rates & speeds Quantity: 2 available Add to basket Taschenbuch. Condition: Neu. This item is printed on demand - it takes 3-4 days longer - Neuware -Preparing students for further study of both the classical works and current research, this is an accessible text for students who have had a course in real and complex analysis and understand the basic properties of L p spaces. It is sprinkled liberally with examples, historical notes, citations, and original sources, and over 450 exercises provide practice in the use of the... More Stock Image An Introduction to Banach Space Theory Robert E. Megginson Published by Springer-Verlag New York Inc., 2012 ISBN 10: 1461268354/ ISBN 13: 9781461268352 Language: English Seller: THE SAINT BOOKSTORE, Southport, United Kingdom (5-star seller)Seller rating 5 out of 5 stars;) Contact seller Print on Demand New - Softcover Condition: New US$ 107.01 Convert currency US$ 21.02 shipping from United Kingdom to U.S.A. Destination, rates & speeds Quantity: Over 20 available Add to basket Paperback / softback. Condition: New. This item is printed on demand. New copy - Usually dispatched within 5-9 working days 893. Previous 1 2 Next Create a Want Tell us what you're looking for and once a match is found, we'll inform you by e-mail. Create a Want BookSleuth Can't remember the title or the author of a book? Our BookSleuth is specially designed for you. 
© 1996 - 2025 AbeBooks Inc. All Rights Reserved.
3908
https://www.wiley.com/en-us/Organic+Chemistry%2C+12th+Edition-p-9781119501008
Organic Chemistry, 12th Edition
T. W. Graham Solomons, Craig B. Fryhle, Scott A. Snyder

Formats and pricing:
• E-Book Rental (150 Days), 978-1-119-24370-0R150, November 2016: $51.00
• E-Book, 978-1-119-24370-0, November 2016: $121.95
• Single Term Access to WileyPLUS, 978EEGRP40885: $76.95
• Single Term Access to WileyPLUS + Textbook Rental (130 Days), 978-1-119-66461-1: $125.95
• Single Term Access to WileyPLUS + Loose-Leaf Textbook, 978-1-119-66464-2: $141.95
• Multiple Term Access to WileyPLUS, 978EEGRP38301: $131.95
• Multiple Term Access to WileyPLUS + Textbook Rental (130 Days), 978-1-119-57300-5: $180.95
• Multiple Term Access to WileyPLUS + Loose-Leaf Textbook, 978-1-119-50100-8: $196.95

Description

With Wiley's Enhanced E-Text, you get all the benefits of a downloadable, reflowable eBook with added resources to make your study time more effective, including:
• Embedded Student Solutions Manual
• Problem Sets & Answers

Organic Chemistry with Solutions Manual, Enhanced eText, 12th Edition continues Solomons, Fryhle & Snyder's tradition of excellence in teaching and preparing students for success in the organic classroom and beyond. A central theme of the authors' approach to organic chemistry is to emphasize the relationship between structure and reactivity. To accomplish this, the content is organized in a way that combines the most useful features of a functional group approach with one largely based on reaction mechanisms. The authors' philosophy is to emphasize mechanisms and their common aspects as often as possible and, at the same time, to use the unifying features of functional groups as the basis for most chapters. The structural aspects of the authors' approach show students what organic chemistry is.
Mechanistic aspects of their approach show students how it works. And wherever an opportunity arises, the authors show students what it does in living systems and the physical world around us.

About the Author

T.W. Graham Solomons did his undergraduate work at The Citadel and received his doctorate in organic chemistry in 1959 from Duke University, where he worked with C.K. Bradsher. Following this he was a Sloan Foundation Postdoctoral Fellow at the University of Rochester, where he worked with V. Boekelheide. In 1960 he became a charter member of the faculty of the University of South Florida and became Professor of Chemistry in 1973. In 1992 he was made Professor Emeritus. His research interests have been in areas of heterocyclic chemistry and unusual aromatic compounds. He has published papers in the Journal of the American Chemical Society, the Journal of Organic Chemistry, and the Journal of Heterocyclic Chemistry. He has received several awards for distinguished teaching.

Craig B. Fryhle is Chair and Professor of Chemistry at Pacific Lutheran University. He earned his B.A. degree from Gettysburg College and Ph.D. from Brown University. His experiences at these institutions shaped his dedication to mentoring undergraduate students in chemistry and the liberal arts, a passion that burns strongly for him. His research interests have been in areas relating to the shikimic acid pathway, including molecular modeling and NMR spectrometry of substrates and analogues, as well as structure and reactivity studies of shikimate pathway enzymes using isotopic labeling and mass spectrometry.

New to Edition

WileyPLUS is now equipped with an adaptive learning module called ORION. Based on cognitive science, WileyPLUS with ORION provides students with a personal, adaptive learning experience so they can build their proficiency on topics and use their study time most effectively. WileyPLUS with ORION helps students learn by learning about them.
WileyPLUS with ORION for Organic Chemistry 12e also features new conceptual practice exercises and support for students, including:
• Interactive concept map exercises
• Interactive summary of reactions exercises
• Interactive mechanism review exercises
• Video walkthroughs of key mechanisms by the authors

In response to market feedback, the authors have improved the presentation of the chemistry of benzene rings by moving Ch 21, Phenols and Aryl Halides, after Ch 15, Reactions of Aromatic Compounds. Other 12e revisions include:
• Synthesizing the Material: new end-of-chapter problems that require the student to use synthesis methods across chapters
• A new Transition Metal Chemistry chapter located after Ch 20, Chemistry of Amines

WileyPLUS is a research-based online environment for effective teaching and learning. WileyPLUS has robust interactive study tools and resources–including the complete online textbook–to give your students more value for their money. With WileyPLUS, students will be provided with unique study strategies, tools, and support based on learning styles for success in Organic Chemistry. WileyPLUS hallmarks, such as Reaction Explorer, remain alongside brand-new course updates.
3909
https://phys.libretexts.org/Bookshelves/Conceptual_Physics/Introduction_to_Physics_(Park)/04%3A_Unit_3-_Classical_Physics_-_Thermodynamics_Electricity_and_Magnetism_and_Light/09%3A_Electricity/9.05%3A_Electric_Field_Lines
9.5: Electric Field Lines
Last updated: Mar 12, 2024
Page ID: 46216
OpenStax

Learning Objectives
- Calculate the total force (magnitude and direction) exerted on a test charge from more than one charge.
- Describe an electric field diagram of a positive point charge; of a negative point charge with twice the magnitude of the positive charge.
- Draw the electric field lines between two points of the same charge; between two points of opposite charge.

Drawings using lines to represent electric fields around charged objects are very useful in visualizing field strength and direction. Since the electric field has both magnitude and direction, it is a vector. Like all vectors, the electric field can be represented by an arrow that has length proportional to its magnitude and that points in the correct direction. (We have used arrows extensively to represent force vectors, for example.) Figure 9.5.1 shows two pictorial representations of the same electric field created by a positive point charge Q. Figure 9.5.1(b) shows the standard representation using continuous lines. Figure 9.5.1(a) shows numerous individual arrows with each arrow representing the force on a test charge q. Note that the electric field is defined for a positive test charge q, so that the field lines point away from a positive charge and toward a negative charge. (See Figure 9.5.2.) The electric field strength is exactly proportional to the number of field lines per unit area, since the magnitude of the electric field for a point charge is E = k|Q|/r² and area is proportional to r².
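The inverse-square relationship E = k|Q|/r² can be checked numerically with a few lines of Python (an illustrative sketch; the 1 nC charge and the distances are arbitrary values chosen for the example, not from the text):

```python
# Field magnitude of a point charge: E = k*|Q|/r**2.
# Doubling r spreads the same field lines over 4x the area,
# so the line density (and hence the field) drops to one quarter.
K = 8.99e9   # Coulomb constant k, N*m^2/C^2
Q = 1e-9     # illustrative charge: 1 nC

def field_magnitude(r):
    """E = k|Q|/r^2 at distance r (meters) from the point charge."""
    return K * abs(Q) / r**2

print(field_magnitude(1.0))                         # ~8.99 N/C at 1 m
print(field_magnitude(1.0) / field_magnitude(2.0))  # -> 4.0
```

The factor of 4.0 between 1 m and 2 m is exactly the ratio of the spherical areas, which is the geometric content of the field-line-density picture.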
This pictorial representation, in which field lines represent the direction and their closeness (that is, their areal density or the number of lines crossing a unit area) represents strength, is used for all fields: electrostatic, gravitational, magnetic, and others. In many situations, there are multiple charges. The total electric field created by multiple charges is the vector sum of the individual fields created by each charge. Figure 9.5.3 shows how the electric field from two point charges can be drawn by finding the total field at representative points and drawing electric field lines consistent with those points. While the electric fields from multiple charges are more complex than those of single charges, some simple features are easily noticed. For example, the field is weaker between like charges, as shown by the lines being farther apart in that region. (This is because the fields from each charge exert opposing forces on any charge placed between them.) (See Figure 9.5.3 and Figure 9.5.4(a).) Furthermore, at a great distance from two like charges, the field becomes identical to the field from a single, larger charge. Figure 9.5.4(b) shows the electric field of two unlike charges. The field is stronger between the charges. In that region, the fields from each charge are in the same direction, and so their strengths add. The field of two unlike charges is weak at large distances, because the fields of the individual charges are in opposite directions and so their strengths subtract. At very large distances, the field of two unlike charges looks like that of a smaller single charge. We use electric field lines to visualize and analyze electric fields (the lines are a pictorial tool, not a physical entity in themselves). The properties of electric field lines for any charge distribution can be summarized as follows: Field lines must begin on positive charges and terminate on negative charges, or at infinity in the hypothetical case of isolated charges. 
The number of field lines leaving a positive charge or entering a negative charge is proportional to the magnitude of the charge. The strength of the field is proportional to the closeness of the field lines—more precisely, it is proportional to the number of lines per unit area perpendicular to the lines. The direction of the electric field is tangent to the field line at any point in space. Field lines can never cross. The last property means that the field is unique at any point. The field line represents the direction of the field; so if they crossed, the field would have two directions at that location (an impossibility if the field is unique).

Section Summary
Drawings of electric field lines are useful visual tools. The properties of electric field lines for any charge distribution are that:
- Field lines must begin on positive charges and terminate on negative charges, or at infinity in the hypothetical case of isolated charges.
- The number of field lines leaving a positive charge or entering a negative charge is proportional to the magnitude of the charge.
- The strength of the field is proportional to the closeness of the field lines—more precisely, it is proportional to the number of lines per unit area perpendicular to the lines.
- The direction of the electric field is tangent to the field line at any point in space.
- Field lines can never cross.

Glossary
electric field: a three-dimensional map of the electric force extended out into space from a point charge
electric field lines: a series of lines drawn from a point charge representing the magnitude and direction of force exerted by that charge
vector: a quantity with both magnitude and direction
vector addition: mathematical combination of two or more vectors, including their magnitudes, directions, and positions
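The superposition rule stated above (the total electric field from multiple charges is the vector sum of the individual fields) can be sketched in Python. The charge values, positions, and the two-charge example below are made-up illustrations, not values from the text:

```python
import math

K = 8.99e9  # Coulomb constant, N*m^2/C^2

def field_at(point, charges):
    """Net electric field (Ex, Ey) at `point` from point charges
    given as [(q, (x, y)), ...], by vector superposition of the
    k*q/r^2 contribution directed along the line from each charge."""
    ex = ey = 0.0
    px, py = point
    for q, (cx, cy) in charges:
        dx, dy = px - cx, py - cy
        r = math.hypot(dx, dy)
        mag = K * q / r**2      # signed: points away from +q, toward -q
        ex += mag * dx / r
        ey += mag * dy / r
    return ex, ey

# Two unlike charges (+1 nC and -1 nC) one meter either side of the origin.
dipole = [(1e-9, (-1.0, 0.0)), (-1e-9, (1.0, 0.0))]
ex, ey = field_at((0.0, 0.0), dipole)
# Midway between unlike charges both fields point the same way, so they add:
print(ex, ey)   # ex is about 17.98 N/C (twice one charge's 8.99 N/C); ey = 0
```

Replacing the second charge with +1 nC (two like charges) makes the two contributions cancel at the midpoint, matching the text's observation that the field is weaker between like charges.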
3910
https://www.khanacademy.org/standards/MA.Math/4.NF
Standards Mapping - Massachusetts Math | Khan Academy

Massachusetts Math Grade 4: Number and Operations—Fractions

4.NF.A
------
Extend understanding of fraction equivalence and ordering for fractions with denominators 2, 3, 4, 5, 6, 8, 10, 12, and 100.

4.NF.A.1 (Fully covered): Explain why a fraction a∕b is equivalent to a fraction (n x a)∕(n x b) by using visual fraction models, with attention to how the numbers and sizes of the parts differ even though the two fractions themselves are the same size. Use this principle to recognize and generate equivalent fractions, including fractions greater than 1.
Exercises: Equivalent fractions; Equivalent fractions (fraction models); Equivalent fractions (number lines); Equivalent fractions on number lines; Equivalent fractions with models; More on equivalent fractions; Visualizing equivalent fractions review

4.NF.A.2 (Fully covered): Compare two fractions with different numerators and different denominators, e.g., by creating common denominators or numerators, or by comparing to a benchmark fraction such as 1∕2. Recognize that comparisons are valid only when the two fractions refer to the same whole. Record the results of comparisons with symbols >, =, or <, and justify the conclusions, e.g., by using a visual fraction model.
Exercises: Common denominators; Common denominators review; Common denominators: 1/4 and 5/6; Common denominators: 3/5 and 7/2; Compare fractions using benchmarks; Compare fractions with different numerators and denominators; Compare fractions word problems; Comparing fractions 1 (unlike denominators); Comparing fractions of different wholes; Comparing fractions of different wholes 1; Comparing fractions word problems; Comparing fractions: fraction models; Comparing fractions: number line; Comparing fractions: tape diagram; Equivalent fractions and different wholes; Finding common denominators; Fractions of different wholes; Visually compare fractions with unlike denominators; Visually comparing fractions review

4.NF.B: Build fractions from unit fractions by applying and extending previous understandings of operations on whole numbers for fractions with denominators 2, 3, 4, 5, 6, 8, 10, 12, and 100.

4.NF.B.3: Understand a fraction a∕b with a > 1 as a sum of fractions 1∕b.

4.NF.B.3.a (Mostly covered). Understand addition and subtraction of fractions as joining and separating parts referring to the same whole. (The whole can be a set of objects.)
Exercises: Add fractions with common denominators; Adding fractions with like denominators; Decompose fractions visually; Subtract fractions with common denominators; Subtracting fractions with like denominators

4.NF.B.3.b (Fully covered). Decompose a fraction into a sum of fractions with the same denominator in more than one way, recording each decomposition by an equation. Justify decompositions, e.g., by using drawings or visual fraction models. Examples: 3∕8 = 1∕8 + 1∕8 + 1∕8; 3∕8 = 1∕8 + 2∕8; 2 1∕8 = 1 + 1 + 1∕8 = 8∕8 + 8∕8 + 1∕8.
Exercises: Decompose fractions; Decomposing a fraction visually; Decomposing a mixed number; Decomposing fractions review

4.NF.B.3.c (Fully covered). Add and subtract mixed numbers with like denominators, e.g., by replacing each mixed number with an equivalent fraction, and/or by using properties of operations and the relationship between addition and subtraction.
Exercises: Add and subtract mixed numbers (no regrouping); Add and subtract mixed numbers (with regrouping); Add and subtract mixed numbers word problems (like denominators); Adding mixed numbers with like denominators; Mixed number addition with regrouping; Subtracting mixed numbers with like denominators; Subtracting mixed numbers with like denominators word problem; Subtracting mixed numbers with regrouping

4.NF.B.3.d (Fully covered). Solve word problems involving addition and subtraction of fractions referring to the same whole and having like denominators, e.g., by using drawings or visual fraction models and equations to represent the problem.
Exercises: Add and subtract fractions word problems (same denominator); Fraction word problem: lizard; Fraction word problem: piano; Fraction word problem: pizza; Fraction word problem: spider eyes

4.NF.B.4: Apply and extend previous understandings of multiplication to multiply a fraction by a whole number.

4.NF.B.4.a (Fully covered). Understand a fraction a∕b as a multiple of 1∕b. For example, use a visual fraction model to represent 5∕4 as the product 5 x (1∕4), recording the conclusion by the equation 5∕4 = 5 x (1∕4).
Exercises: Equivalent whole number and fraction multiplication expressions; Fraction multiplication on the number line; Multiply fractions and whole numbers; Multiply fractions and whole numbers on the number line; Multiply fractions and whole numbers with fraction models; Multiply mixed numbers and whole numbers; Multiply unit fractions and whole numbers; Multiplying mixed numbers by whole numbers; Multiplying unit fractions and whole numbers

4.NF.B.4.b (Fully covered). Understand a multiple of a∕b as a multiple of 1∕b, and use this understanding to multiply a fraction by a whole number. For example, use a visual fraction model to express 3 x (2∕5) as 6 x (1∕5), recognizing this product as 6∕5. (In general, n x (a∕b) = (n x a)∕b.)
Exercises: Equivalent fraction and whole number multiplication problems; Equivalent whole number and fraction multiplication expressions; Fraction multiplication on the number line; Multiply fractions and whole numbers; Multiply fractions and whole numbers on the number line; Multiply fractions and whole numbers with fraction models; Multiply mixed numbers and whole numbers; Multiplying fractions and whole numbers visually; Multiplying mixed numbers by whole numbers

4.NF.B.4.c (Fully covered). Solve word problems involving multiplication of a fraction by a whole number, e.g., by using visual fraction models and equations to represent the problem. For example, if each person at a party will eat 3∕8 of a pound of roast beef, and there will be 5 people at the party, how many pounds of roast beef will be needed? Between what two whole numbers does your answer lie?
Exercises: Interpret multiplying fraction and whole number word problems; Multiply fractions and whole numbers word problems; Multiplying fractions by whole numbers word problem; Multiplying fractions word problem: milk; Multiplying fractions word problem: movies

4.NF.C: Understand decimal notation for fractions, and compare decimal fractions.
4.NF.C.5 (Fully covered). Express a fraction with denominator 10 as an equivalent fraction with denominator 100, and use this technique to add two fractions with respective denominators 10 and 100. For example, express 3∕10 as 30∕100, and add 3∕10 + 4∕100 = 34∕100.
Exercises: Add fractions (denominators 10 & 100); Adding fractions (denominators 10 & 100); Adding fractions: 7/10+13/100; Decompose fractions with denominators of 100; Decomposing hundredths; Decomposing hundredths on number line; Equivalent expressions with common denominators (denominators 10 & 100); Equivalent fractions (denominators 10 & 100); Equivalent fractions with fraction models (denominators 10 & 100); Visually converting tenths and hundredths

4.NF.C.6 (Fully covered). Use decimal notation to represent fractions with denominators 10 or 100. For example, rewrite 0.62 as 62∕100; describe a length as 0.62 meters; locate 0.62 on a number line diagram.
Exercises: Common fractions and decimals; Decimal place value; Decimal place value with regrouping; Decimals as words; Decimals in words; Decimals on the number line: hundredths; Decimals on the number line: hundredths 0-0.1; Decimals on the number line: tenths; Decimals on the number line: tenths 0-1; Graphing hundredths from 0 to 0.1; Graphing tenths from 0 to 1; Identifying hundredths on a number line; Identifying tenths on a number line; Plotting decimal numbers on a number line; Relate decimals and fractions in words; Relating decimals and fractions in words; Rewriting decimals as fractions: 0.15; Rewriting decimals as fractions: 0.36; Rewriting decimals as fractions: 0.8; Rewriting decimals as fractions: 2.75; Rewriting fractions as decimals; Write common decimals as fractions; Write common fractions as decimals; Write decimals and fractions greater than 1 shown on grids; Write decimals and fractions greater than 1 shown on number lines; Write decimals and fractions shown on grids; Write decimals and fractions shown on number lines; Write decimals as fractions; Write fractions as decimals (denominators of 10 & 100); Writing a number as a fraction and decimal; Writing decimals and fractions greater than 1 shown on grids; Writing decimals and fractions shown on number lines; Writing decimals as fractions review

4.NF.C.7 (Fully covered). Compare two decimals to hundredths by reasoning about their size. Recognize that comparisons are valid only when the two decimals refer to the same whole. Record the results of comparisons with the symbols >, =, or <, and justify the conclusions, e.g., by using a visual model.
Exercises: Compare decimals (tenths and hundredths); Compare decimals and fractions in different forms; Compare decimals visually; Comparing decimal numbers on a number line; Comparing decimals (tenths and hundredths); Comparing decimals visually; Comparing numbers represented different ways
3911
https://www.statisticshowto.com/spurious-correlation/
Spurious Correlation: Examples from Real Life and the News - Statistics How To

Regression Analysis > Spurious Correlation

What is a Spurious Correlation? A spurious correlation wrongly implies a cause and effect between two variables.
For example, the number of astronauts dying in spacecraft is directly correlated to seatbelt use in cars: use your seatbelt and save an astronaut's life! This graph, showing that an increase in car seat belt use results in a lower number of astronaut deaths, is made from real statistics. For example, the 1980s seat belt use data came from this journal article on PubMed. Of course there isn't a real correlation here: putting your seat belt on in a car has nothing to do with the odds of an accident in space. This kind of connection, which seems to be real but isn't, is called a spurious correlation. The sad truth is that spurious correlations are everywhere: in the news we read, on blog posts and news sites, and all over the airwaves. Take this gem from Fox News, showing a clear correlation between President Obama taking office and a rise in unemployment rates. Look closely at the graph, and you'll see it's plotted incorrectly. The value for November (8.6) is placed at the 9.0 position. That's not the only problem with this graph; it's been plotted to show a steep incline when in fact the overall unemployment trend was fairly stable at about 9%. Even if the data you're looking at is sound, there may be other factors causing the phenomenon. For example, take the headline "Sunny Skies Tied to Suicide Rates." While short-term exposure to sunny skies may contribute to suicide rates, there's obviously a lot more to suicide than a depressing, sunny day.

Examples of Spurious Correlation in the Media

Spurious correlations don't always have graphs attached to them. They are often the result of a media frenzy, a misunderstanding of data, or just plain old bad science. Universal health care breeds terrorists (Fox News). An on-screen headline on Fox News read "National healthcare: breeding ground for terror?". I'm not sure what the statistics are for the number of people who signed up for Obamacare and then turned into terrorists, but I'm willing to bet it's pretty small.
Fox News didn't use any actual data for this spurious correlation, just the opinion of an "expert." Living Next to Freeways Causes Autism (L.A. Times). Hot on the heels of the vaccine-causes-autism debate (which, by the way, has been debunked), we find that people living next to freeways are at a higher risk of having an autistic child. But before you think about the correlation too hard, consider the confusing statement released by the paper's lead author, Heather Volk: "This study isn't saying exposure to air pollution or exposure to traffic causes autism...but it could be one of the factors that are contributing to its increase." In other words, a possible correlation has been turned into a definite link by way of a misleading headline. Junk food does not cause obesity. The non-profit Global Energy Balance Network reported that consumption of junk food isn't to blame for the obesity epidemic. The solution to losing weight, said the group, was simply to exercise more. Before you think you can eat all of the junk food you like and not put on weight, consider who sponsored the group: one of the world's great junk food producers, Coca-Cola. As a final note, be aware that several studies have suggested that watching Fox News makes you stupid. A spurious correlation? You be the judge.
3912
https://www.youtube.com/watch?v=LKtpjAZzQBU
Measuring Planck's Constant
National Institute of Standards and Technology
Posted: 21 Jun 2016

Description: In this 2016 video, NIST physicist Darine Haddad uses a cup of coffee and sugar cubes to explain the significance of Planck's constant. In 2017, after this video was recorded, NIST measured an even more accurate value of the Planck constant, 6.626069934 x 10−34 kg∙m2/s, with an uncertainty of only 13 parts per billion. This measurement contributed to the now-accepted value of 6.62607015 x 10−34 kg∙m2/s in the revised International System of Units (SI) approved in November 2018.

Transcript: [MUSIC] Hi Darine. >> Hi Chad. >> Taking a break from the physics lab? >> Yeah, I'm actually thinking of explaining the Planck constant using my cup of coffee. >> I'm curious, try me. >> Well, see, in classical mechanics energy is continuous, meaning if I take my sugar dispenser I can pour any amount of sugar into my coffee. Any amount of energy is okay. But Max Planck found something very different when he looked deeper. Energy is quantized, or discrete, meaning I can only add one sugar cube, or two, or three. Only certain amounts of energy are allowed. >> Not two and a quarter. I see. >> And the same thing when this cup of coffee releases heat, like I drew here: the energy is equal to the Planck constant times the frequency of the radiation that carries the heat. >> And your team has just measured the Planck constant, right? >> Yes, we did indeed. This year we measured, at NIST, the Planck constant and we found that h = 6.62606983 x 10 to the minus 34 kilogram meter squared per second. >> 10 to the minus 34th, that's really small. >> It is, and the uncertainty is very small too. It's 0.00000022 x 10 to the minus 34 kilogram meter squared per second. And if you divide those two numbers, the accuracy of our measurement is 34 parts per billion. For example, to get an analogy, if you measure the distance between Washington D.C.
and New York, which is 230 miles, I can tell you the distance to within 0.5 inches. >> Wow, so now I understand the Planck constant better, thanks to Darine Haddad and her coffee.
3913
https://www.geometrictools.com/Documentation/ApproximateRotationMatrix.pdf
Approximations to Rotation Matrices and Their Derivatives
David Eberly, Geometric Tools, Redmond WA 98052
This work is licensed under the Creative Commons Attribution 4.0 International License. To view a copy of this license, visit or send a letter to Creative Commons, PO Box 1866, Mountain View, CA 94042, USA.
Created: August 12, 2020

Contents
1 An Equation for a Rotation Matrix
2 Equations for the Derivatives of the Rotation Matrix
3 The Singularity at Zero
4 Floating-Point Issues in Function Evaluation
  4.1 Evaluation of sin(t)
  4.2 Evaluation of cos(t)
  4.3 Evaluation of α(t)
  4.4 Evaluation of β(t)
  4.5 Evaluation of γ(t)
  4.6 Evaluation of δ(t)
5 Minimax Approximations
  5.1 Remez Algorithm
    5.1.1 Chebyshev Nodes
    5.1.2 The Linear System to Determine Coefficients
    5.1.3 Newton Polynomials
    5.1.4 Solving the Linear System
    5.1.5 Updating the t-Nodes
    5.1.6 Source Code
  5.2 Modified Remez Algorithm
    5.2.1 Modifications that Work Generally
    5.2.2 Modifications for the Rotation-Coefficient Functions
    5.2.3 Computing the Exact Sign of a Power Series
    5.2.4 Source Code
6 Polynomial Coefficients

1 An Equation for a Rotation Matrix

A 3 × 3 rotation matrix R can be represented by R = exp(S) for a skew-symmetric matrix

$$S = \begin{bmatrix} 0 & -s_2 & s_1 \\ s_2 & 0 & -s_0 \\ -s_1 & s_0 & 0 \end{bmatrix} = \mathrm{Skew}(s) \qquad (1)$$

where the right-most equality defines the function Skew(s) with s = (s_0, s_1, s_2). This is a 3-parameter representation. When s = (0, 0, 0), the rotation matrix is the identity matrix I. Define

$$t = |s| = \sqrt{s_0^2 + s_1^2 + s_2^2}, \qquad \alpha(t) = \frac{\sin(t)}{t}, \qquad \beta(t) = \frac{1 - \cos(t)}{t^2} \qquad (2)$$

for s ≠ 0. The rotation matrix is

$$R(s) = I + \alpha(t) S + \beta(t) S^2 = \begin{bmatrix} 1 - \beta(s_1^2 + s_2^2) & -\alpha s_2 + \beta s_0 s_1 & +\alpha s_1 + \beta s_0 s_2 \\ +\alpha s_2 + \beta s_0 s_1 & 1 - \beta(s_0^2 + s_2^2) & -\alpha s_0 + \beta s_1 s_2 \\ -\alpha s_1 + \beta s_0 s_2 & +\alpha s_0 + \beta s_1 s_2 & 1 - \beta(s_0^2 + s_1^2) \end{bmatrix} \qquad (3)$$

where the notation R(s) indicates that the rotation matrix is parameterized by the components of s. Note that when s ≠ 0, a unit-length rotation axis is u = s/t. In this case define U = Skew(u); the rotation matrix is provided by the more common equation R = I + sin(t) U + (1 - cos(t)) U^2. The form involving S is typically encountered when using Lie groups and Lie algebras. It is also useful for computing derivatives of rotation matrices in optimization problems where the functions have rotation parameters.
2 Equations for the Derivatives of the Rotation Matrix

Define

$$\gamma(t) = \frac{-\alpha'(t)}{t} = \frac{\sin(t) - t\cos(t)}{t^3}, \qquad \delta(t) = \frac{-\beta'(t)}{t} = \frac{2(1 - \cos(t)) - t\sin(t)}{t^4} \qquad (4)$$

The minus signs in front of α′(t) and β′(t) are chosen so that α(t), β(t), γ(t) and δ(t) have removable singularities that are all positive numbers and α′(t)/t, β′(t)/t, γ′(t)/t and δ′(t)/t have removable singularities at t = 0 which are all negative. The first-order partial derivatives of t, α(t) and β(t) with respect to s_k are

$$\frac{\partial t}{\partial s_k} = \frac{s_k}{t}, \qquad \frac{\partial \alpha(t)}{\partial s_k} = \alpha'(t)\frac{\partial t}{\partial s_k} = -s_k\gamma(t), \qquad \frac{\partial \beta(t)}{\partial s_k} = \beta'(t)\frac{\partial t}{\partial s_k} = -s_k\delta(t) \qquad (5)$$

The first-order partial derivative of S with respect to s_k is the matrix E_k; these are

$$E_0 = \begin{bmatrix} 0 & 0 & 0 \\ 0 & 0 & -1 \\ 0 & 1 & 0 \end{bmatrix}, \quad E_1 = \begin{bmatrix} 0 & 0 & 1 \\ 0 & 0 & 0 \\ -1 & 0 & 0 \end{bmatrix}, \quad E_2 = \begin{bmatrix} 0 & -1 & 0 \\ 1 & 0 & 0 \\ 0 & 0 & 0 \end{bmatrix} \qquad (6)$$

The first-order partial derivative of R(s) with respect to s_k is

$$\frac{\partial R}{\partial s_k} = \alpha\frac{\partial S}{\partial s_k} + \frac{\partial\alpha}{\partial s_k}S + \beta\left(S\frac{\partial S}{\partial s_k} + \frac{\partial S}{\partial s_k}S\right) + \frac{\partial\beta}{\partial s_k}S^2 = \alpha E_k + \beta\left(S E_k + E_k S\right) - s_k\left(\gamma S + \delta S^2\right) \qquad (7)$$

These are

$$\frac{\partial R}{\partial s_0} = \begin{bmatrix} \delta s_0(s_1^2 + s_2^2) & \beta s_1 + s_0(\gamma s_2 - \delta s_0 s_1) & \beta s_2 - s_0(\gamma s_1 + \delta s_0 s_2) \\ \beta s_1 - s_0(\gamma s_2 + \delta s_0 s_1) & \delta s_0(s_0^2 + s_2^2) - 2\beta s_0 & s_0(\gamma s_0 - \delta s_1 s_2) - \alpha \\ s_2(\beta - \delta s_0^2) + \gamma s_0 s_1 & \alpha - s_0(\gamma s_0 + \delta s_1 s_2) & \delta s_0(s_0^2 + s_1^2) - 2\beta s_0 \end{bmatrix} \qquad (8)$$

and

$$\frac{\partial R}{\partial s_1} = \begin{bmatrix} \delta s_1(s_1^2 + s_2^2) - 2\beta s_1 & \beta s_0 + s_1(\gamma s_2 - \delta s_0 s_1) & \alpha - s_1(\gamma s_1 + \delta s_0 s_2) \\ \beta s_0 - s_1(\gamma s_2 + \delta s_0 s_1) & \delta s_1(s_0^2 + s_2^2) & s_2(\beta - \delta s_1^2) + \gamma s_0 s_1 \\ s_1(\gamma s_1 - \delta s_0 s_2) - \alpha & \beta s_2 - s_1(\gamma s_0 + \delta s_1 s_2) & \delta s_1(s_0^2 + s_1^2) - 2\beta s_1 \end{bmatrix} \qquad (9)$$

and

$$\frac{\partial R}{\partial s_2} = \begin{bmatrix} \delta s_2(s_1^2 + s_2^2) - 2\beta s_2 & s_2(\gamma s_2 - \delta s_0 s_1) - \alpha & \beta s_0 - s_2(\gamma s_1 + \delta s_0 s_2) \\ \alpha - s_2(\gamma s_2 + \delta s_0 s_1) & \delta s_2(s_0^2 + s_2^2) - 2\beta s_2 & \beta s_1 + s_2(\gamma s_0 - \delta s_1 s_2) \\ \beta s_0 + s_2(\gamma s_1 - \delta s_0 s_2) & \beta s_1 - s_2(\gamma s_0 + \delta s_1 s_2) & \delta s_2(s_0^2 + s_1^2) \end{bmatrix} \qquad (10)$$

3 The Singularity at Zero

The functions α(t), β(t), γ(t) and δ(t) have removable singularities at t = 0, easily seen using the Maclaurin series for the functions,

$$\alpha(t) = \sum_{i=0}^{\infty}(-1)^i \frac{1}{(2i+1)!}\,t^{2i}, \quad \beta(t) = \sum_{i=0}^{\infty}(-1)^i \frac{1}{(2i+2)!}\,t^{2i}, \quad \gamma(t) = \sum_{i=0}^{\infty}(-1)^i \frac{2(i+1)}{(2i+3)!}\,t^{2i}, \quad \delta(t) = \sum_{i=0}^{\infty}(-1)^i \frac{2(i+1)}{(2i+4)!}\,t^{2i} \qquad (11)$$

All series have only even powers of t.
Evaluating the series at t = 0 produces α(0) = 1, β(0) = 1/2, γ(0) = 1/3 and δ(0) = 1/12. The derivatives of the functions divided by t will be used in computing approximating polynomials. The series for these are listed next, each having only even powers of t,

$$\alpha'(t)/t = \sum_{i=0}^{\infty}(-1)^{i+1}\frac{2(i+1)}{(2i+3)!}\,t^{2i}, \quad \beta'(t)/t = \sum_{i=0}^{\infty}(-1)^{i+1}\frac{2(i+1)}{(2i+4)!}\,t^{2i}, \quad \gamma'(t)/t = \sum_{i=0}^{\infty}(-1)^{i+1}\frac{4(i+1)(i+2)}{(2i+5)!}\,t^{2i}, \quad \delta'(t)/t = \sum_{i=0}^{\infty}(-1)^{i+1}\frac{4(i+1)(i+2)}{(2i+6)!}\,t^{2i} \qquad (12)$$

The ratios have removable singularities at t = 0. The values are −1/3, −1/12, −1/15 and −1/90, respectively. The graphs of the four functions are shown in Figure 1.

Figure 1. The graphs of α(t) [blue], β(t) [yellow], γ(t) [green] and δ(t) [red] for t ∈ [0, π]. The plots were drawn with Mathematica.

When computing with floating-point arithmetic, the functions can be evaluated directly using equations (2) and (4). The C++ Standard Library functions std::sin and std::cos are called in the evaluation. However, the singularity at zero leads to some unexpected floating-point behavior. This is the topic of the next section. Alternatively, the functions can be evaluated using Maclaurin polynomials with rational arithmetic to obtain a precision larger than that of double. For t near π, the degree of the Maclaurin polynomials will be so large that the performance suffers from the extremely large number of bits required to represent the polynomial exactly. Finally, minimax polynomial approximations can be used for a rotation estimate that can be computed rapidly. For high-degree approximations, the floating-point computation used to produce the polynomial coefficients is significant enough to cause accuracy problems. The algorithm uses bisection to find roots of functions and derivatives, where the functions have local extrema nearly zero.
Bisection is a common topic covered in a course on numerical methods, but it is usually not pointed out that it depends strongly on knowing the exact sign of the function at the various t-values. At first glance, this appears to be an impossible task for functions defined as Maclaurin series. It turns out it is possible for non-zero series with alternating signs and terms that decrease in magnitude. In the last section, I discuss such an approach that uses rational arithmetic to determine the exact signs of power series expressions.

4 Floating-Point Issues in Function Evaluation

When computing the direct equations α(t), β(t), γ(t) or δ(t) with floating-point arithmetic, numerical problems invariably occur for values t near 0. To simplify the discussion, only t ≥ 0 is considered. The typical mathematical suggestion is to use a Taylor polynomial when t is nearly 0. The degree must be specified and the t-cutoff, say t̂, must be specified. The direct implementation is evaluated when t > t̂ and the Taylor polynomial is evaluated when 0 ≤ t ≤ t̂. Moreover, a numerical analysis must be provided to understand the error bounds for the approximations. As it turns out, Taylor polynomials can be avoided for α(t) and β(t). For γ(t) and δ(t), a Taylor polynomial or other approximating polynomial can be used. Implementations of the functions can be made robust for t > 0 when computing with floating-point arithmetic. The experiments mentioned here are for float numbers. Similar experiments can be performed using double with the same floating-point issues that occur with float. For testing reproducibility, the unsigned integer encodings for the t-values are listed.

4.1 Evaluation of sin(t)

As a real-valued function of a real variable t ∈ [0, π/2], sin(t) is strictly increasing because its derivative is cos(t), which is positive for t ∈ [0, π/2). Similarly, for t ∈ [π/2, π], sin(t) is strictly decreasing.
As a float-valued function of a float variable t, std::sin(t) is strictly increasing for t ∈ [0, t0] where t0 = 4.43632947 * 10^-4 (0x39e89768), nondecreasing for t ∈ [t0, t1] where t1 = 1.57079637 ≈ π/2 (0x3fc90fdb), nonincreasing for t ∈ [t1, t2] where t2 = 1.99999952 (0x3ffffffc) and strictly decreasing for t ∈ [t2, t3] where t3 = 3.14159274 ≈ π (0x40490fdb). The function is piecewise constant on [0, π] with 1,064,929,526 pieces on [0, π/2] and 6,309,924 pieces on [π/2, π] for a total of 1,071,239,449 pieces. The last piece of the first interval and the first piece of the second interval both have function value 1, so the pieces merge into one piece. The std::sin(t) function is one of the functions recommended by the IEEE Standard 754-2008 to be correctly rounded. To say a function z = f(t) is correctly rounded means that an input floating-point number t leads to an output floating-point number z that is the closest floating-point number to the theoretical result, where the definition of closest depends on the specified rounding mode. Even though std::sin(t) is piecewise constant, the function values are correctly rounded. Observe that the domain of std::sin(t) for t ∈ [0, π) has 1,078,530,011 (0x40490fdb) float numbers. The interval [0, π/2) has 1,070,141,403 (0x3fc90fdb) float numbers, a count that is much larger than the count 8,388,608 = 2^23 (0x00800000) of float numbers in the interval [π/2, π). Moreover, [0, π/2) contains zero and the subnormal numbers, a total of 2^24 numbers, which is twice as many float numbers as in the interval [π/2, π). If you need a dense set of float t-values to evaluate std::sin(t), consider reduction to the interval t ∈ [0, π/2] using sin(t) = sin(π − t) for t ∈ [π/2, π].

4.2 Evaluation of cos(t)

As a real-valued function of a real variable t ∈ [0, π], cos(t) is strictly decreasing because its derivative is −sin(t), which is negative for t ∈ (0, π).
As a float-valued function of a float variable t, std::cos(t) is nonincreasing for t ∈ [0, t0] where t0 = 0.999999642 (0x3f7ffffa), strictly decreasing for t ∈ [t0, t1] where t1 = 2.88907409 (0x4038e697) and nonincreasing for t ∈ [t1, t2] where t2 = 3.14159274 ≐ π (0x40490fdb). The function is piecewise constant on [0, π] with 12,500,635 pieces on [0, π/2] and 7,861,537 pieces on [π/2, π] for a total of 20,362,171 pieces. The last piece of the first interval and the first piece of the second interval both have function value −4.37113883 ∗ 10^-8, so the pieces merge into one piece.

The std::cos(t) function is one of the functions recommended by the IEEE Standard 754-2008 to be correctly rounded. Even though std::cos(t) is piecewise constant, the function values are correctly rounded. The density of the floating-point numbers in [0, π] was mentioned in the previous section for std::sin(t). The same idea applies here. If you need a dense set of float t-values to evaluate std::cos(t), consider reduction to the interval t ∈ [0, π/2] using cos(π − t) = −cos(t) for t ∈ [π/2, π].

4.3 Evaluation of α(t)

As a real-valued function of a real variable t, α(t) is strictly decreasing for t ∈ [0, π]. The derivative is α′(t) = (t cos(t) − sin(t))/t^2. Sketching the graphs of tan(t) and t for t ∈ (0, π/2), one can see that tan(t) > t, in which case α′(t) < 0 on the interval. Similarly, one can see that for t ∈ (π/2, π), t > tan(t), in which case α′(t) < 0 on the interval. Finally, α′(π/2) = −4/π^2 < 0. Therefore, α′(t) < 0 for all t ∈ (0, π), which implies α(t) is indeed strictly decreasing.

The direct implementation of α(t) is shown in Listing 1 for type float.

Listing 1. The direct implementation of α(t) for t ∈ [0, π] for type float.

float Alpha(float t)
{
    if (t > 0.0f)
    {
        return std::sin(t) / t;
    }
    else
    {
        return 1.0f;
    }
}

The function Alpha(t) is the floating-point version of α(t). It is piecewise constant with 14,012,441 pieces.
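A quick sanity check of Listing 1's Alpha, repeated here so the sketch compiles standalone. The checks are loose on purpose; exact outputs depend on the rounding of std::sin.

```cpp
#include <cassert>
#include <cmath>

// Listing 1's Alpha, repeated so this sketch is self-contained.
float Alpha(float t)
{
    if (t > 0.0f)
    {
        return std::sin(t) / t;
    }
    return 1.0f;
}
```

For tiny t the ratio sin(t)/t is within an ulp of 1, and the function decreases toward sin(π)/π ≈ 0 as t increases.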
The function is strictly decreasing on [t0, π], where t0 = 1.99999809 (0x3ffffff0) and Alpha(t0) = 0.454649568. On the interval [0, t0], the differences in function values between consecutive pieces are 5.96046448 ∗ 10^-8 (occurring 237480 times), 5.96046448 ∗ 10^-8 (occurring 3802736 times), −8.94069672 ∗ 10^-8 (occurring 276 times) and −1.19209290 ∗ 10^-7 (occurring 684802 times). The maximum absolute error is 1.19209290 ∗ 10^-7. The floating-point functions S(t) = std::sin(t) and I(t) = t (the identity function) are both correctly rounded functions. However, the ratio Alpha(t) = S(t)/I(t) is not a correctly rounded function. Regardless, the floating-point behavior of Alpha(t) is acceptable because of the small errors at the jumps.

4.4 Evaluation of β(t)

The direct implementation of β(t) is shown in Listing 2 for type float.

Listing 2. The direct implementation of β(t) for t ∈ [0, π], which is not robust when computing with floating-point arithmetic.

float Beta(float t)
{
    if (t > 0.0f)
    {
        return (1.0f - std::cos(t)) / t / t;
    }
    else
    {
        return 0.5f;
    }
}

The reason to use two divisions by t is to avoid not-a-number (NaN) when t values are such that t^2 is less than the minimum subnormal and is rounded to 0. For such t values, the numerator 1 − cos(t) evaluates to floating-point 0, so (1 − cos(t))/t^2 is an indeterminate of the form 0/0, which is displayed as -nan(ind). The direct implementation has multiple problems when computing with floating-point arithmetic. Experiments were performed using Microsoft Visual Studio 2019 version 16.6.5. However, any IEEE-compliant implementation of std::cos will produce the same results, so any compiler should generate the same results. In the experiments, the default rounding mode is used: round-to-nearest-ties-to-even. Table 1 shows evaluation results for the float version of β(t).

Table 1. Evaluations of the float version of β(t) from Listing 2.
     t input                                       encoding                    β(t) output
#1   0                                             0x00000000                  0.5
#2   [2^-149, 2.44140625 ∗ 10^-4]                  [0x00000001, 0x39800000]    0
#3   [2.44140654 ∗ 10^-4, 4.22863959 ∗ 10^-4]      [0x39800001, 0x39ddb3d7]    0.999999762 ↘ 0.333333343
#4   [4.22863988 ∗ 10^-4, 5.45914925 ∗ 10^-4]      [0x39ddb3d8, 0x3a0f1bbc]    0.666666627 ↘ 0.400000066
#5   [5.45915042 ∗ 10^-4, 6.45935361 ∗ 10^-4]      [0x3a0f1bbd, 0x3a2953fd]    0.600000024 ↘ 0.428571463
#6   [6.45935419 ∗ 10^-4, 7.32421875 ∗ 10^-4]      [0x3a2953fe, 0x3a400000]    0.571428478 ↘ 0.444444448

The graph of β(t) for t ∈ [0, π] is shown in Figure 1. Although the graph appears to show a smooth function, the samples are sparse enough that the floating-point behavior for t near 0 is not evident. The graph of β(t), computed using float arithmetic for small t, is shown in Figure 2.

Figure 2. The graph of β(t) for small t. The plot was drawn by choosing 8193 samples of β(t) for t ∈ [2.44141 ∗ 10^-4, 7.32421875 ∗ 10^-4] using the float implementation and then allowing Mathematica to draw a high-resolution polyline connecting the samples. At this resolution for t near 0, the floating-point behavior is evident.

The t-inputs in row #2 of Table 1 lead to β(t) = 0 because the numerator evaluates to 0.0f but t is a positive floating-point number, in which case each division is of the form 0/p for p > 0 and the result is 0. The t-inputs in row #3 of Table 1 lead to β(t) decreasing from 0.999999762 (approximately 1) at the left t-endpoint to 0.333333343 (approximately 1/3) at the right t-endpoint. This behavior is certainly not expected, because β(t) has a theoretical maximum of 1/2. The reason for the behavior is that the floating-point output of the numerator 1 − cos(t) is 5.96046448 ∗ 10^-8 (0x33880000) for the entire set of t-inputs of row #3; that is, the function is effectively of the form K1/t^2 for a positive constant K1.
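The row #3 anomaly is easy to reproduce. The sketch below only asserts that the direct evaluation lands far from the theoretical maximum of 1/2, since the exact output depends on how std::cos rounds near 1; the helper names are mine.

```cpp
#include <cmath>
#include <cstdint>
#include <cstring>

// The direct (nonrobust) evaluation of beta(t) from Listing 2.
float DirectBeta(float t)
{
    return (1.0f - std::cos(t)) / t / t;
}

// Reinterpret an unsigned integer encoding as a float, matching the
// encodings listed in Table 1.
float FromEncoding(std::uint32_t encoding)
{
    float t;
    std::memcpy(&t, &encoding, sizeof(t));
    return t;
}
```

At the encoding 0x39800001 (the left endpoint of row #3), the true value of β(t) is nearly 1/2, but the direct evaluation produces a value near 0 or near 1 depending on the rounding of the numerator.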
The floating-point value of 1 − cos(t) at the encoding 0x39800001 is approximately twice the actual value, which is why an expected number of approximately 1/2 becomes approximately 1. A discontinuity appears where β(t) jumps from (approximately) 1/3 to (approximately) 2/3 at the t-values mentioned in row #4 of Table 1. This happens because the floating-point output of the numerator 1 − cos(t) increases from the previous 5.96046448 ∗ 10^-8 at t = 4.22863959 ∗ 10^-4 (0x39ddb3d7) to 1.19209290 ∗ 10^-7 at t = 4.22863988 ∗ 10^-4 (0x39ddb3d8). The pattern repeats as t increases, leading to 7,712,437 segments of the form Hj(t) = Kj/t^2, where the Kj > 0 decrease as j increases.

Each segment has the properties that Hj′(t) < 0 (the function is strictly decreasing) and Hj″(t) > 0 (the function is concave up). The theoretical function β(t) has the properties that β′(t) < 0 (the function is strictly decreasing) and β″(t) < 0 (the function is concave down). The collection of segments Hj(t) is not an accurate approximation to β(t) for small t, but the discontinuities for large t are small enough that for float calculations, the corresponding segments produce reasonable floating-point approximations. Each of the Hj(t) segments has a domain consisting of 2 or more floating-point t-values. For tk sufficiently large, the floating-point samples β(tk) are decreasing; that is, each segment has a single t-value for its domain. This behavior starts at t = 0.999999344 (0x3f7ffff5) with a β-value of 0.459697723.

The analysis indicates that depending on the values of t chosen in an application, the resulting floating-point rounding can lead to extremely inaccurate results. It is possible to use Taylor polynomials at t = 0 for an approximation, but it is necessary to select a switching point t̂ > 0 for which the Taylor polynomial is used when t ∈ [0, t̂] and the equation (1 − cos(t))/t/t is used when t ∈ [t̂, π].
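The switch-point approach can be sketched with the degree 6 Taylor polynomial and the switching point t̂ = 1 analyzed in the text. This is illustrative only, not the robust identity-based implementation developed later; the function name is mine.

```cpp
#include <cmath>

// Sketch: Taylor polynomial p(t) = 1/2! - t^2/4! + t^4/6! - t^6/8! on [0, 1],
// direct evaluation of (1 - cos(t))/t/t on (1, pi].
float BetaTaylorSwitch(float t)
{
    if (t > 1.0f)
    {
        return (1.0f - std::cos(t)) / t / t;
    }
    float tSqr = t * t;
    return 0.5f + tSqr * (-1.0f / 24.0f + tSqr * (1.0f / 720.0f - tSqr / 40320.0f));
}
```

The polynomial branch removes the K/t^2 artifacts for t near 0, at the cost of the approximation error bound discussed in the text.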
It is not immediately clear what degree of the Taylor polynomial or what t̂ should be for β(t). An implication in the last section of the reference is to choose degree 6: p(t) = 1/2! − t^2/4! + t^4/6! − t^6/8!. If t̂ = 1 is chosen so that the Taylor polynomial skips all the Hj(t) with domains containing at least 2 float numbers, the maximum of |β(t) − p(t)| for t ∈ [0, 1] occurs at t = 1 with value 2.73497 ∗ 10^-7. For smaller maximum error, you will need to choose t̂ smaller than 1, keeping in mind how many Hj(t) you want to skip by the approximation and how many you want to keep by evaluating the direct implementation of β(t).

Smoothness at t̂ might be important in your application. Let q(t) = 1/2! − t^2/4! + q4 t^4 + q6 t^6. This polynomial satisfies the conditions q(0) = β(0), q′(0) = β′(0) and q″(0) = β″(0). If we also require C1 continuity at t̂ = 1, where q(1) = β(1) and q′(1) = β′(1), then q4 and q6 are solutions to a linear system of equations. The solutions are q4 ≐ 1.38861746 ∗ 10^-3 and q6 ≐ −2.42566583 ∗ 10^-5. The maximum of |β(t) − q(t)| for t ∈ [0, 1] occurs at t ≐ 0.706781 with value 1.69002 ∗ 10^-8.

A similar analysis was performed for double to demonstrate that a larger precision does not eliminate the qualitative floating-point behavior. Table 2 lists transition t-values.

Table 2. Transition t-values for the double version of β(t).

t-value                              encoding
t0 = 2^-1074                         0x0000000000000001
t1 = 1.0536712127723507 ∗ 10^-8      0x3e46a09e667f3bcc
t2 = 1.0536712127723509 ∗ 10^-8      0x3e46a09e667f3bcd
t3 = 1.8250120749944284 ∗ 10^-8      0x3e53988e1409212e
t4 = 1.8250120749944287 ∗ 10^-8      0x3e53988e1409212f
t5 = 2.3560804576936208 ∗ 10^-8      0x3e594c583ada5b52
t6 = 2.3560804576936212 ∗ 10^-8      0x3e594c583ada5b53
t7 = 2.7877519926234643 ∗ 10^-8      0x3e5deeea11683f49
t8 = 2.7877519926234646 ∗ 10^-8      0x3e5deeea11683f4a
t9 = 3.1610136383170521 ∗ 10^-8      0x3e60f876ccdf6cd9

Table 3 shows evaluation results for the double version of β(t).

Table 3. Evaluations of the double version of β(t).
     t input       β(t) output
#1   0             0.5
#2   [t0, t1]      0
#3   [t2, t3]      0.99999999999999989 ↘ 0.33333333333333343
#4   [t4, t5]      0.66666666666666652 ↘ 0.40000000000000002
#5   [t6, t7]      0.59999999999999998 ↘ 0.42857142857142860
#6   [t8, t9]      0.57142857142857129 ↘ 0.44444444444444453

The function β(t) in floating-point terms is a sequence of segments Hj(t) = Kj/t^2. For j small, each domain includes 2 or more double numbers. For j sufficiently large, each domain is a single double number. A Taylor polynomial can be used for small t, but as indicated previously, a numerical analysis must be performed to know what degree is required for accuracy and what choice of the switching point t̂ > 0 avoids the large discontinuity gaps. It is not sufficient to choose a small switching point arbitrarily; for example, you might say t̂ = 10^-6 is small enough. The large discontinuity gaps occur starting at 2.44140654 ∗ 10^-4 and occur regularly. How many gaps will you allow? For a specified degree, the larger you must choose t̂ to avoid these, the larger the approximation error of the Taylor polynomial.

Based on the well-behaved floating-point nature of α(t), a robust implementation of (1 − cos(t))/t^2 uses the identity (1 − cos(t))/t^2 = (sin(t/2)/(t/2))^2/2 = (α(t/2))^2/2, as shown in Listing 3.

Listing 3. A robust implementation of β(t) = (1 − cos(t))/t^2 for float using a trigonometric identity.

float Beta(float t)
{
    float tHalf = 0.5f * t;
    if (tHalf > 0.0f)
    {
        float alpha = Alpha(tHalf);
        return 0.5f * alpha * alpha;
    }
    else
    {
        return 0.5f;
    }
}

The computation of tHalf is performed before the if-test to handle properly the case t = 2^-149, the smallest subnormal float. The multiplication of this t by 0.5f causes rounding to zero when the default rounding mode is enabled; that is, tHalf is 0 for the smallest subnormal t. For all other subnormals and normals in [0, π], tHalf is positive. Figure 3 shows the graph for the robust implementation of Beta(t).

Figure 3.
The graph of the robust implementation of β(t) for small t. The plot was drawn by choosing 8193 samples of β(t) for t ∈ [2.44141 ∗ 10^-4, 7.32421875 ∗ 10^-4] using the float implementation and then allowing Mathematica to draw a high-resolution polyline connecting the samples. At this resolution for t near 0, the robust floating-point behavior is evident. Compare this to the graph of the nonrobust implementation shown in Figure 2.

4.5 Evaluation of γ(t)

The direct implementation of γ(t) is shown in Listing 4.

Listing 4. The direct implementation of γ(t) for t ∈ [0, π].

float Gamma(float t)
{
    if (t > 0.0f)
    {
        return (std::sin(t) - t * std::cos(t)) / t / t / t;
    }
    else
    {
        return 1.0f / 3.0f;
    }
}

The same floating-point problems with the direct implementation of β(t) occur with the direct implementation of γ(t) for small t. Table 4 shows evaluation results for the float version of γ(t).

Table 4. Evaluations of the float version of γ(t) from Listing 4.

     t input                                       encoding                    γ(t) output
#1   0                                             0x00000000                  0.333333343
#2   [2^-149, 2.44140625 ∗ 10^-4]                  [0x00000001, 0x39800000]    0
#3   [2.44140654 ∗ 10^-4, 4.22863959 ∗ 10^-4]      [0x39800001, 0x39ddb3d7]    1.99999928 ↘ 0.384900212
#4   [4.22863988 ∗ 10^-4, 4.43632947 ∗ 10^-4]      [0x39ddb3d8, 0x39e89768]    0.769800246 ↘ 0.666666687
#5   [4.43632976 ∗ 10^-4, 4.88281250 ∗ 10^-4]      [0x39e89769, 0x3a000000]    0.333333284 ↘ 0.250000000
#6   [4.88281308 ∗ 10^-4, 5.45914983 ∗ 10^-4]      [0x3a000001, 0x3a0f1bbc]    0.499999281 ↘ 0.357771009
#7   [5.45915042 ∗ 10^-4, 5.58942498 ∗ 10^-4]      [0x3a0f1bbd, 0x3a1285ff]    0.715541720 ↘ 0.666666687
#8   [5.58942556 ∗ 10^-4, 6.45935361 ∗ 10^-4]      [0x3a128600, 0x3a2953fd]    0.333333254 ↘ 0.215979710

Figure 4 shows the graph of the direct implementation of γ(t) for small t.

Figure 4. The graph of γ(t) for small t. The plot was drawn by choosing 8193 samples of γ(t) for t ∈ [2.44140625 ∗ 10^-4, 6.45935361 ∗ 10^-4] using the float implementation and then allowing Mathematica to draw a high-resolution polyline connecting the samples.
At this resolution for t near 0, the floating-point behavior is evident. Using real arithmetic, the maximum value of γ(t) is 1/3. The floating-point values reach 2.0, and the segments of the form Kj/t^3 are not a good approximation to the actual function. An attempt at robustness replaces (sin(t) − t cos(t))/t^3 by (α(t) − cos(t))/t^2 in the direct implementation, as shown in Listing 5.

Listing 5. An attempt at a robust implementation of γ(t) for float.

float Gamma(float t)
{
    if (t > 0.0f)
    {
        float alpha = std::sin(t) / t;
        return (alpha - std::cos(t)) / t / t;
    }
    else
    {
        return 1.0f / 3.0f;
    }
}

This eliminates some of the large discontinuity gaps but not all. An analysis of the gaps shows that the maximum discontinuity is 0.381571442 at t = 5.58942556 ∗ 10^-4. The gaps decrease in size to 1.57899857 ∗ 10^-2 at t = 1.94289908 ∗ 10^-3. The next gap does not occur until much later, 8.34465027 ∗ 10^-7 at t = 0.394411683. The gaps remain on the order of 10^-7 from this point on.

The gaps of size 1.57899857 ∗ 10^-2 and larger can be avoided by using a polynomial approximation. For example, choose a polynomial P(t) = 1/3 − (1/30)t^2 + p2 t^4 + p3 t^6 such that P(0) = γ(0) = 1/3, P′(0) = γ′(0) = 0, P″(0) = γ″(0) = −1/15, P(t̂) = γ(t̂) and P′(t̂) = γ′(t̂). The last two constraints are selected so that the real-valued piecewise function built from P(t) and γ(t) has C1 continuity. The constraints have solution p2 = 1.19047014 ∗ 10^-3 and p3 = −2.19680528 ∗ 10^-5. Listing 6 shows the implementation.

Listing 6. A robust implementation of γ(t) for float using a polynomial approximation for t ∈ [0, t̂], where t̂ = 0.394411683.
float GammaPolynomial(float t)
{
    std::array<float, 4> constexpr p =
        { 1.0f / 3.0f, -1.0f / 30.0f, 1.19047014e-3f, -2.19680528e-5f };
    float const tSqr = t * t;
    return p[0] + tSqr * (p[1] + tSqr * (p[2] + tSqr * p[3]));
}

float Gamma(float t)
{
    float constexpr tHat = 0.394411683f;
    if (t > tHat)
    {
        float alpha = std::sin(t) / t;
        return (alpha - std::cos(t)) / t / t;
    }
    else
    {
        return GammaPolynomial(t);
    }
}

4.6 Evaluation of δ(t)

The direct implementation of δ(t) is shown in Listing 7.

Listing 7. The direct implementation of δ(t) for t ∈ [0, π].

float Delta(float t)
{
    if (t > 0.0f)
    {
        return (2.0f * (1.0f - std::cos(t)) - t * std::sin(t)) / t / t / t / t;
    }
    else
    {
        return 1.0f / 12.0f;
    }
}

The floating-point problems are more severe, with Delta(t) returning -inf when t = 2.64697828 ∗ 10^-23 (0x1a000001). For smaller t-inputs, the function values were 0 except at t = 0, where the value is 1/12. The computation of the numerator n has rounding errors that lead to n = -1.401e-45#DEN, a negative small subnormal. The number n/t = -5.29395529e-23, the number (n/t)/t = -1.99999952, the number ((n/t)/t)/t = -7.55578367e+22 and the number (((n/t)/t)/t)/t = -inf.

Differentiating the equation β(t) = (α(t/2))^2/2 leads to β′(t) = α(t/2)α′(t/2)/2. Because δ(t) = −β′(t)/t, a more robust implementation of δ(t) is shown in Listing 8.

Listing 8. An attempt at a robust implementation of δ(t) for type float.

float Delta(float t)
{
    float tHalf = 0.5f * t;
    if (tHalf > 0.0f)
    {
        float aHalf = std::sin(tHalf) / tHalf;  // alpha(t/2)
        float aderHalf = (aHalf - std::cos(tHalf)) / tHalf;  // -alpha'(t/2)
        return 0.5f * aHalf * aderHalf / t;
    }
    else
    {
        return 1.0f / 12.0f;
    }
}

The same floating-point problems with β(t) occur with δ(t) for small t, as shown in Figure 5.

Figure 5. The graph of δ(t) for small t.
The plot was drawn by choosing 8193 samples of δ(t) for t ∈ [4.43632947 ∗ 10^-4, 2.0 ∗ 10^-3] using the float implementation and then allowing Mathematica to draw a high-resolution polyline connecting the samples. At this resolution for t near 0, the floating-point behavior is evident. Using real arithmetic, the maximum value of δ(t) is 1/12. The graph shows that for small t, nearly all the function values are larger than that.

Similar to the robust implementation of γ(t), we can use a polynomial approximation to δ(t) for t ∈ [0, t̂] for a specified degree and for a switching point t̂. Choosing the same gap error as that of γ(t), 8.34465027 ∗ 10^-7, the smallest t-value at which this error is achieved is t = 0.2616350353. However, use the same t̂ as that of γ(t) so that the implementation for δ(t) can be simplified using a single switching point. The polynomial approximation is Q(t) = 1/12 − t^2/180 + q2 t^4 + q3 t^6 such that Q(0) = δ(0) = 1/12, Q′(0) = δ′(0) = 0, Q″(0) = δ″(0) = −1/90, Q(t̂) = δ(t̂) and Q′(t̂) = δ′(t̂). The last two constraints are selected so that the real-valued piecewise function built from Q(t) and δ(t) has C1 continuity. The constraints have solution q2 = 1.48809020 ∗ 10^-4 and q3 = −2.19810423 ∗ 10^-6. Listing 9 shows the code.

Listing 9. A robust implementation of δ(t) for float using a polynomial approximation for t ∈ [0, t̂], where t̂ = 0.394411683.
float DeltaPolynomial(float t)
{
    std::array<float, 4> constexpr q =
        { 1.0f / 12.0f, -1.0f / 180.0f, 1.48809020e-4f, -2.19810423e-6f };
    float const tSqr = t * t;
    return q[0] + tSqr * (q[1] + tSqr * (q[2] + tSqr * q[3]));
}

float Delta(float t)
{
    float constexpr tHatHalf = 0.5f * 0.394411683f;
    float tHalf = 0.5f * t;
    if (tHalf > tHatHalf)
    {
        // delta(t) = -beta'(t)/t = -0.5 * alpha(t/2) * alpha'(t/2) / t
        //          = 0.25 * alpha(t/2) * gamma(t/2)
        float aHalf = std::sin(tHalf) / tHalf;  // alpha(t/2)
        float aderHalf = (aHalf - std::cos(tHalf)) / tHalf / tHalf;  // gamma(t/2)
        return 0.25f * aHalf * aderHalf;
    }
    else
    {
        return DeltaPolynomial(t);
    }
}

5 Minimax Approximations

An alternative approach to evaluating α(t), β(t), γ(t) and δ(t) is based on polynomial minimax approximations. This allows for fast computation with reasonable accuracy and is easily implemented in hardware such as an FPGA. Each of our functions is defined on [0, π] with the understanding that at t = 0, the removable-singularity values are used. Let f(t) be one of our functions. The approximation is specified to have a degree bound n. The polynomial minimax approximation is the polynomial p(t) defined on [0, π] and of degree at most n for which the maximum of |f(t) − p(t)| over t ∈ [0, π] is minimized over all polynomials defined on [0, π] and having degree at most n. Let the minimum error be denoted ε. The minimax polynomial has the property that there must be n + 2 points 0 ≤ t0 < · · · < t_{n+1} ≤ π such that f(ti) − p(ti) = (−1)^i ε.

5.1 Remez Algorithm

The Remez algorithm can be used to construct the coefficients of p(t) = Σ_{i=0}^{n} pi t^i numerically to approximate the function f(t) for t ∈ [a, b]. A sketch of the algorithm is presented in the reference; in particular, the section entitled Detailed discussion lists the steps to follow.
The description was too sparse in details for my liking, but I was able to use other sources of information to implement the basic algorithm. The details here are intended to help write an implementation. The theoretical basis for the algorithm is not discussed.

5.1.1 Chebyshev Nodes

The algorithm starts with initial guesses for the ti for 0 ≤ i ≤ n + 1. For function domain [−1, 1], the standard choices are the Chebyshev nodes in (−1, 1) together with the endpoints,

    t0 = −1,  tk = cos((2(n − k) + 1)π / (2n)) for 1 ≤ k ≤ n,  t_{n+1} = +1    (13)

For function domain [a, b], the Chebyshev nodes are transformed to

    t0 = a,  tk = (a + b)/2 + ((b − a)/2) cos((2(n − k) + 1)π / (2n)) for 1 ≤ k ≤ n,  t_{n+1} = b    (14)

5.1.2 The Linear System to Determine Coefficients

The main idea is to update the tk iteratively. A linear system of n + 2 equations in the n + 2 unknowns pi for 0 ≤ i ≤ n and the error estimate e is formulated,

    Σ_{i=0}^{n} pi tk^i + (−1)^k e = f(tk),  0 ≤ k ≤ n + 1    (15)

The solution to the linear system provides the coefficients for the approximating polynomial p(t) and an estimate e of the minimax error. A general linear system solver, say, one that uses Gaussian elimination, is O(n^3) in time. However, it is possible to solve the linear system of equation (15) in O(n^2) time. In practice, the degree n is small, so the computation time using floating-point arithmetic is irrelevant on modern computers. If one were to use rational arithmetic instead, even for small degrees the time savings is significant; that said, the rational solution requires an enormous number of bits of precision. You would need a rational multiplication of two m-bit numbers that uses fast Fourier transforms, O(m log m), rather than the standard O(m^2) multiplication algorithm. Regardless, the linear system solving described next uses Newton polynomials to obtain the O(n^2) asymptotic behavior.

5.1.3 Newton Polynomials

A detailed discussion of Newton polynomials is found in the references. Only the information relevant to this document is described here.
Given a set of m + 1 data points {(tk, gk)} for 0 ≤ k ≤ m, where the tk are strictly increasing, the Newton polynomial that interpolates the data points is of the form

    N(t) = Σ_{i=0}^{m} ai Ni(t)    (16)

where N0(t) = 1 and Ni(t) = Π_{j=0}^{i−1} (t − tj) for i ≥ 1. The coefficients ai are forward divided differences,

    a0 = [g0] = g0
    a1 = [g0, g1] = (g1 − g0) / (t1 − t0)
    ...
    ai = [g0, . . . , gi] = ([g1, . . . , gi] − [g0, . . . , gi−1]) / (ti − t0)    (17)

where the last equation defines the recursion for 2 ≤ i ≤ m. These equations can be formulated as a linear system with lower-triangular coefficient matrix,

    [ 1       0             0                    ...   0                        ] [ a0 ]   [ g0 ]
    [ 1   (t1 − t0)         0                    ...   0                        ] [ a1 ]   [ g1 ]
    [ 1   (t2 − t0)   (t2 − t0)(t2 − t1)         ...   0                        ] [ a2 ] = [ g2 ]
    [ :       :             :                     .    :                        ] [ :  ]   [ :  ]
    [ 1   (tm − t0)   (tm − t0)(tm − t1)         ...   Π_{i=0}^{m−1} (tm − ti)  ] [ am ]   [ gm ]    (18)

The solution is obtained by forward substitution, which is equivalent to solving Σ_{i=0}^{j} ai Ni(tj) = gj for 0 ≤ j ≤ m. Listing 10 contains an implementation for solving the system.

Listing 10. Solving for ai in the linear system of equation (18). The input and output arrays all have m + 1 elements.

template <typename Real>
void SolveNewton(std::vector<Real> const& tNode, std::vector<Real> const& g,
    std::vector<Real>& a)
{
    size_t const mp1 = tNode.size();
    for (size_t i = 0; i < mp1; ++i)
    {
        a[i] = g[i];
        for (size_t j = 0; j < i; ++j)
        {
            a[i] -= a[j];
            a[i] /= tNode[i] - tNode[j];
        }
    }
}

Evaluation of a Newton polynomial is similar to Horner’s method, a nested evaluation. For example, consider m = 3. Horner’s method for evaluating a standard polynomial is

    P(t) = p0 + p1 t + p2 t^2 + p3 t^3 = p0 + t(p1 + t(p2 + t p3))    (19)

The Newton polynomial is

    N(t) = a0 + a1(t − t0) + a2(t − t0)(t − t1) + a3(t − t0)(t − t1)(t − t2)
         = a0 + (t − t0)(a1 + (t − t1)(a2 + (t − t2) a3))    (20)

The evaluation starts with a3, multiplies by (t − t2), adds a2, and so on.
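A small check of the divided-difference solver. For the data (0, 1), (1, 3), (2, 7), which are samples of g(t) = t^2 + t + 1, the Newton coefficients are a = {1, 2, 1}. Listing 10's SolveNewton is repeated here so the sketch is self-contained; the example function name is mine.

```cpp
#include <cstddef>
#include <vector>

// Listing 10's SolveNewton, repeated so this sketch is self-contained.
template <typename Real>
void SolveNewton(std::vector<Real> const& tNode, std::vector<Real> const& g,
    std::vector<Real>& a)
{
    size_t const mp1 = tNode.size();
    for (size_t i = 0; i < mp1; ++i)
    {
        a[i] = g[i];
        for (size_t j = 0; j < i; ++j)
        {
            a[i] -= a[j];
            a[i] /= tNode[i] - tNode[j];
        }
    }
}

// Divided differences for samples of g(t) = t^2 + t + 1 at t = 0, 1, 2.
std::vector<double> NewtonCoefficientsExample()
{
    std::vector<double> tNode = { 0.0, 1.0, 2.0 };
    std::vector<double> g = { 1.0, 3.0, 7.0 };
    std::vector<double> a(3);
    SolveNewton(tNode, g, a);
    return a;
}
```

All arithmetic in this example is exact in double, so the coefficients are recovered exactly.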
Listing 11 contains an implementation for evaluating a Newton polynomial.

Listing 11. Evaluating a Newton polynomial. The input arrays have m + 1 elements.

template <typename Real>
Real EvaluateNewton(Real t, std::vector<Real> const& tNode, std::vector<Real> const& a)
{
    size_t index = a.size() - 1;  // index is degree m
    Real result = a[index--];  // result = a[m], index is now m-1
    for (size_t i = 1; i < a.size(); ++i, --index)
    {
        result = a[index] + (t - tNode[index]) * result;
    }
    return result;
}

5.1.4 Solving the Linear System

The application to the Remez algorithm is as follows. Let u(t) be the degree n Newton polynomial that interpolates f(t) at the nodes t0 through tn; note that the node t_{n+1} is not included in the interpolation. Let v(t) be the degree n Newton polynomial that interpolates (−1)^k at those nodes for 0 ≤ k ≤ n. The linear combination is a Newton polynomial of degree n,

    p(t) = u(t) − v(t) e    (21)

The e-term is obtained by solving the last linear equation at the node t_{n+1},

    e = (u(t_{n+1}) − f(t_{n+1})) / (v(t_{n+1}) − (−1)^{n+1})    (22)

5.1.5 Updating the t-Nodes

The current tk values must be updated to provide better estimates of the nodes at which the minimax error is attained. Generally, if tk is in an interval (z0, z1) where f(t) − p(t) > 0 and f(z0) − p(z0) = f(z1) − p(z1) = 0, it is replaced by t̄k, the location of the local maximum of f(t) − p(t) on (z0, z1). Similarly, if tk is in an interval (z0, z1) where f(t) − p(t) < 0 and f(z0) − p(z0) = f(z1) − p(z1) = 0, it is replaced by t̄k, the location of the local minimum of f(t) − p(t) on (z0, z1).

To locate the critical points t̄k, the section Detailed discussion of the reference mentions: “No high precision is required here, the standard line search with a couple of quadratic fits should suffice.” To support the claim, a reference to a book on linear and nonlinear programming is provided. This might be justified for small degrees, but it turns out that this is not the case for larger degrees.
The maximum of |f(t) − p(t)| becomes very small. Floating-point rounding errors in evaluating f(t) and p(t) become significant enough to cause inaccuracies in locating the extreme points using a quadratic-fit line search. For a reference on quadratic-fit line searches, see the cited source. The approach I use involves bisection, not quadratic-fit line searches. This appears to be more robust, but precision problems can still occur when the signs of the function values at the root-bounding interval endpoints are misclassified because of rounding errors. I address this problem for a modified Remez algorithm that is used for the rotation-coefficient functions described in this document.

Given that the Chebyshev nodes provide good initial guesses for the tk, the local extrema we seek, t̄k, are near the tk. I use the tk as endpoints of root-bounding intervals for E(t) = f(t) − p(t). If E(tk) and E(tk+1) have opposite signs, then bisection is used to find zk ∈ (tk, tk+1) for which E(zk) is zero (within floating-point rounding error). I then find a root t̄k of E′(t) = 0 on the interval (zk, zk+1), once again using bisection. The end intervals [a, t1] and [tn, b] might not have points where E′(t) is zero, so instead a quadratic-fit line search is used for those intervals. Given the new t̄k, the linear system is solved once again, followed by the update of the nodes. The number of iterations is a user-defined parameter, but a test for sign oscillations of the errors at the nodes is applied for each iteration. If the signs fail to oscillate as (−1)^k, the algorithm is terminated and the current estimated coefficients are returned by the process.

5.1.6 Source Code

The source code for the Remez algorithm is found in the file RemezAlgorithm.h. An illustration of its use is shown in Listing 12.

Listing 12.
A sample application to illustrate computing the minimax polynomial of degree 5 for sin(x) on [0, π/2].

#include <RemezAlgorithm.h>

using namespace gte;

void ApproximateSinDegree5()
{
    auto F = [](double const& x) { return std::sin(x); };
    auto FDer = [](double const& x) { return std::cos(x); };
    double const xMin = 0.0;
    double const xMax = GTE_C_HALF_PI;
    size_t const degree = 5;
    size_t const maxRemezIterations = 16;
    size_t const maxBisectionIterations = 1048;
    size_t const maxBracketIterations = 128;

    RemezAlgorithm<double> remez;
    auto iterations = remez.Execute(F, FDer, xMin, xMax, degree,
        maxRemezIterations, maxBisectionIterations, maxBracketIterations);
    auto const& p = remez.GetCoefficients();
    double estimatedMaximumError = remez.GetEstimatedMaxError();
    auto const& x = remez.GetXNodes();
    auto const& error = remez.GetErrors();

    // iterations = 16
    // p[0] = 7.0685186758729533e-06
    // p[1] = 0.99968986443393670
    // p[2] = 0.0021937161709613094
    // p[3] = -0.17223886508803649
    // p[4] = 0.0060973836732878166
    // p[5] = 0.0057217240548524534
    // estimatedMaximumError = -7.0685186758729533e-06
    // x[0] = 0.0000000000000000
    // x[1] = 0.10950063957513409
    // x[2] = 0.40467937702524381
    // x[3] = 0.79996961817349699
    // x[4] = 1.1880777522163142
    // x[5] = 1.4686862883722980
    // x[6] = 1.5707963267948966
    // error[0] = -7.0685186758729533e-06
    // error[1] = 7.0685186758651097e-06
    // error[2] = -7.0685186758789875e-06
    // error[3] = 7.0685186759344987e-06
    // error[4] = -7.0685186757124541e-06
    // error[5] = 7.0685186759344987e-06
    // error[6] = -7.0685186761565433e-06
}

Figure 6 shows the graph of the difference E(x) = sin(x) − p(x). The local extrema are equioscillatory within floating-point rounding errors.

Figure 6.
The graph of the difference between sin(x) and the minimax polynomial of degree 5 that approximates it. The plot was drawn with Mathematica.

As a side note, the Mathematica function MiniMaxApproximation uses relative error rather than absolute error; that is, instead of minimizing the maximum error for |f(x) − p(x)| over all polynomials of a specified degree, Mathematica minimizes the maximum error for |1 − p(x)/f(x)|. This is problematic when f(x) has roots on the interval of interest, which sin(x) does at x = 0. I chose the interval to be [10^-6, π/2] to avoid the root. I also increased the maximum number of iterations to avoid a warning about incomplete convergence. The output is shown in Figure 7.

Figure 7. Mathematica uses relative error for the minimax approximation rather than absolute error.

5.2 Modified Remez Algorithm

The Taylor series for the four rotation-coefficient functions all have even powers of t. The minimax polynomials p(t) are constrained to have only even-power terms. Moreover, if f(t) is a rotation-coefficient function, it is required that p(0) = f(0) and p(π) = f(π).

5.2.1 Modifications that Work Generally

I modified the Remez algorithm as follows. The Chebyshev nodes are computed as the initial guesses for the tk, but then I overwrite the end values to obtain t0 = 0 and t_{n+1} = π. The linear system of equations is modified to

    Σ_{i=0}^{n} pi t0^{2i} = f(t0)
    Σ_{i=0}^{n} pi tk^{2i} + (−1)^{k+1} e = f(tk),  1 ≤ k ≤ n
    Σ_{i=0}^{n} pi t_{n+1}^{2i} = f(t_{n+1})    (23)

The first equation is equivalent to p0 = f(0) and the last equation is equivalent to Σ_{i=0}^{n} pi π^{2i} = f(π). Neither of these two equations has an error term on the left-hand side. The computation of u(t) to fit the f(tk) is effectively the same as for the standard Remez algorithm, except that n is half the degree of the polynomial and the variable is effectively s = t^2 with sk = tk^2. The computation of v(t) is slightly different. In the standard Remez algorithm, v(t) fit the alternating signs {1, −1, . . . , (−1)^{n+1}}.
In the modified Remez algorithm, v(t) must fit n of the n + 1 error coefficients in the linear system. Choosing the first n error coefficients does not work because the last equation does not have an e-term to solve for, as there was when generating equation (22). Instead, I choose the n error coefficients to come from all but the equation with index k = ⌊(n + 2)/2⌋. That kth equation is then used to solve for e.

Updating the t_k values is the same as for the standard Remez algorithm, keeping in mind that the polynomial terms are all even powers of t. Bisection is used to locate the roots of E(t) = f(t) − p(t) for the current polynomial approximation p(t). These roots are used to bound the roots of E′(t) = f′(t) − p′(t). Bisection is also used to locate these roots, which become the new t_k.

5.2.2 Modifications for the Rotation-Coefficient Functions

The general modifications discussed previously are theoretically based, treating f(t) and p(t) as differentiable functions of a real variable. In practice, though, floating-point computations of f(t) and p(t) have rounding errors that can cause the bisections to fail.

It is important to note that the bisection algorithm is formulated in terms of real-valued functions. If g(t) is a continuous real-valued function on the domain [t0, t1] with g(t0) g(t1) < 0, then g(t) must have at least one root in (t0, t1). The bisection algorithm is summarized in the following steps:

1. Compute σ0 = Sign(g(t0)).
2. If σ0 = 0, then t0 is a root of g(t) and the algorithm terminates.
3. Compute σ1 = Sign(g(t1)).
4. If σ1 = 0, then t1 is a root of g(t) and the algorithm terminates.
5. It is now known that σ0 σ1 < 0.
6. Compute tm = (t0 + t1)/2 and σm = Sign(g(tm)).
7. If σm = 0, then tm is a root of g(t) and the algorithm terminates.
8. If σm = σ0, replace t0 by tm; otherwise σm = σ1, so replace t1 by tm. Go to step 6 and repeat.

For real arithmetic, the loop formed by steps 6, 7 and 8 is infinite.
In practice, either a maximum number of iterations can be specified or a tolerance can be specified for how close g(t) must be to zero in order to call t a root. When computing with floating-point arithmetic, where the precision is finite, there is an alternative that avoids specifying either the maximum number of iterations or the tolerance. As the length of the subinterval containing a root decreases, eventually it has no interior floating-point numbers because of the finite precision. When this happens, the bisection is terminated. Listing 13 shows code for this.

Listing 13. Bisection when computing with floating-point arithmetic. The assumption is that g(t) is a finite floating-point number for all floating-point t in its domain. The type T is either float or double.

template <typename T>
void Bisection(std::function<T(T)> const& g, T t0, T t1, T& tRoot, T& gAtRoot)
{
    T g0 = g(t0);
    int sign0 = (g0 > 0 ? 1 : (g0 < 0 ? -1 : 0));
    if (sign0 == 0)
    {
        tRoot = t0;
        gAtRoot = 0;
        return;
    }

    T g1 = g(t1);
    int sign1 = (g1 > 0 ? 1 : (g1 < 0 ? -1 : 0));
    if (sign1 == 0)
    {
        tRoot = t1;
        gAtRoot = 0;
        return;
    }

    for (;;)
    {
        T tm = (t0 + t1) / 2;
        T gm = g(tm);
        if (gm == 0 || tm == t0 || tm == t1)
        {
            // This is the best we can do with fixed-precision
            // floating-point arithmetic.
            tRoot = tm;
            gAtRoot = gm;
            return;
        }

        int signm = (gm > 0 ? 1 : (gm < 0 ? -1 : 0));
        if (signm == sign0)
        {
            t0 = tm;
        }
        else
        {
            t1 = tm;
        }
    }
}

The loop of Bisection is guaranteed to terminate when T is a floating-point type. One might be tempted to conclude that this is the most robust implementation possible for bisection. It is not, however. At a root r, g(r) = 0, and for t near the root, g(t) is near 0. Floating-point rounding errors in the evaluation of g(t) can cause a misclassification of Sign(g(t)). This can lead to spurious roots, and many of them, even when g(t) theoretically has a unique root on the interval.
The way to avoid this, if possible, is to implement a sign-testing function specifically for g(t) that is accurate. For example, interval arithmetic can be used to determine the sign when it is +1 or −1. If the interval arithmetic has no conclusive result about the sign (it might be 0), then additional logic can be used that takes advantage of knowledge of the structure of g(t). A mixture of these two approaches is how I use bisection for the rotation-coefficient functions.

5.2.3 Computing the Exact Sign of a Power Series

The rotation-coefficient functions have power series representations provided by equation (11). Their derivatives have power series representations provided by equation (12). The idea for computing the exact sign of one of these power series is illustrated for α(t) = sin(t)/t; the same idea applies to the other functions. It is necessary to use rational arithmetic in the sign-test computations. I use the BSRational class in the Geometric Tools code for this purpose.

Bisection is used to compute the roots of E(t) = α(t) − p(t) and to compute the roots of E′(t) = α′(t) − p′(t) for the current polynomial $p(t) = \sum_{i=0}^{n} p_i t^{2i}$ and its derivative $p'(t) = 2t \sum_{i=0}^{n-1} (i+1) p_{i+1} t^{2i}$. The function E′(t) has a removable singularity at t = 0 for which E′(0) = 0, so we can instead compute the sign of D(t) = E′(t)/(2t). Define A(t) to be the Taylor polynomial of degree n of E(t) and define B(t) to be the Taylor polynomial of degree n − 1 of D(t),

$$A(t) = \sum_{i=0}^{n} \left( \frac{(-1)^i}{(2i+1)!} - p_i \right) t^{2i} = \sum_{i=0}^{n} a_i t^{2i} \qquad (24)$$

where the last equality defines the coefficients a_i, and

$$B(t) = \sum_{i=0}^{n-1} (i+1) \left( \frac{(-1)^{i+1}}{(2i+3)!} - p_{i+1} \right) t^{2i} = \sum_{i=0}^{n-1} b_i t^{2i} \qquad (25)$$

where the last equality defines the coefficients b_i. The functions E(t) and D(t) are the sums of these polynomials plus remainder series,

$$E(t) = A(t) + \sum_{i=n+1}^{\infty} \frac{(-1)^i}{(2i+1)!} t^{2i}
      = A(t) + (-1)^{n+1} \sum_{j=0}^{\infty} (-1)^j \frac{t^{2(n+1+j)}}{(2(n+1+j)+1)!}
      = A(t) + (-1)^{n+1} \sum_{j=0}^{\infty} (-1)^j r_j(t)
      = A(t) + (-1)^{n+1} R(t) \qquad (26)$$

where the last two equalities define r_j(t) and R(t), and

$$D(t) = B(t) + \sum_{i=n}^{\infty} (i+1) \frac{(-1)^{i+1}}{(2i+3)!} t^{2i}
      = B(t) + (-1)^{n+1} \sum_{j=0}^{\infty} (-1)^j \frac{(n+j+1)\, t^{2(n+j)}}{(2(n+j)+3)!}
      = B(t) + (-1)^{n+1} \sum_{j=0}^{\infty} (-1)^j s_j(t)
      = B(t) + (-1)^{n+1} S(t) \qquad (27)$$

where the last two equalities define s_j(t) and S(t). For t ∈ (0, π], the functions R(t) and S(t) are alternating series whose terms decrease in absolute value and have a limit of zero as j increases. At a specified t, as long as E(t) is not zero, the alternating condition allows us to determine the exact sign of E(t) using only a finite number of terms of the remainder. The same is true for D(t). Of course, it is possible that the number of terms is so large that the computational time becomes prohibitive. However, a limit can be placed on the number of terms, after which a best guess is returned for the sign. For the tool I wrote to determine the minimax polynomials for the rotation-coefficient functions, that limit was small (32) and never exceeded.

Consider the expansion for E(t) in equation (26). The function $R(t) = \sum_{j=0}^{\infty} (-1)^j r_j(t)$ for t ∈ (0, π] is such that r_j(t) > 0, r_{j+1}(t) < r_j(t) for all j ≥ 0, and lim_{j→∞} r_j(t) = 0. These conditions ensure R(t) > 0. Define $R_m(t) = \sum_{j=m}^{\infty} (-1)^{j-m} r_j(t)$ for m ≥ 0; then R_0(t) = R(t) and R_m(t) > 0 for all m ≥ 0.

If A(t) ≥ 0 and n is odd, then E(t) = A(t) + R(t) > 0, so we know the correct sign of E(t). If A(t) ≤ 0 and n is even, then E(t) = A(t) − R(t) < 0, so we also know the correct sign of E(t).

If A(t) ≥ 0 and n is even, the sign of E(t) = A(t) − R(t) cannot be inferred immediately. Define A_0(t) = A(t) and A_{m+1}(t) = A_m(t) − (−1)^m r_m(t) for m ≥ 0. Rewrite E(t) = (A(t) − r_0(t)) + R_1(t) = A_1(t) + R_1(t). If A_1(t) ≥ 0, then E(t) > 0. If A_1(t) < 0, then rewrite E(t) = (A(t) − r_0(t) + r_1(t)) − R_2(t) = A_2(t) − R_2(t). If A_2(t) ≤ 0, then E(t) < 0.
If A_2(t) > 0, we can repeat the pattern of subtracting and adding remainder-series terms from A(t) and testing the appropriate signs. This is repeated until the sign is determined.

If A(t) ≤ 0 and n is odd, the sign of E(t) = A(t) + R(t) cannot be inferred immediately. Now define A_0(t) = A(t) and A_{m+1}(t) = A_m(t) + (−1)^m r_m(t) for m ≥ 0. Rewrite E(t) = (A(t) + r_0(t)) − R_1(t) = A_1(t) − R_1(t). If A_1(t) ≤ 0, then E(t) < 0. If A_1(t) > 0, then rewrite E(t) = (A(t) + r_0(t) − r_1(t)) + R_2(t) = A_2(t) + R_2(t). If A_2(t) ≥ 0, then E(t) > 0. If A_2(t) < 0, we can repeat the pattern of adding and subtracting remainder-series terms from A(t) and testing the appropriate signs. This is repeated until the sign is determined.

As a practical matter, a maximum number of iterations should be specified so that when it is exceeded, the sign is reported as unknown. Rational arithmetic can be expensive. To amortize the cost of sign testing, floating-point interval arithmetic is used first in hopes of determining the sign without using rational arithmetic. Listing 14 contains pseudocode for the algorithm.

Listing 14. Pseudocode for computing the sign of E(t). The coefficients of A(t) must be computed once, but the sign testing can be performed for multiple t-values. Rational is an arbitrary-precision rational type (BSRational in the Geometric Tools code) and FPInterval is a floating-point interval type.

void ComputeACoefficients(size_t n, std::vector<double> const& p,
    std::vector<Rational>& a, Rational& twoNplus3Factorial)
{
    std::vector<Rational> factorial(n + 2);
    factorial[0] = 1;  // (2i+1)! at i = 0
    for (size_t i = 1, im1 = 0; i <= n + 1; ++i, ++im1)
    {
        factorial[i] = factorial[im1] * Rational((2 * i) * (2 * i + 1));
    }
    twoNplus3Factorial = factorial.back();

    a.resize(n + 1);
    Rational negOnePow = 1;
    for (size_t i = 0; i <= n; ++i)
    {
        a[i] = negOnePow / factorial[i] - Rational(p[i]);
        negOnePow.SetSign(-negOnePow.GetSign());
    }
}

FPInterval GetIntervalE(size_t n, double t, std::vector<double> const& p)
{
    if (t > 0.0)
    {
        // Compute an interval containing sin(t)/t.
        auto saveMode = std::fegetround();
        std::fesetround(FE_DOWNWARD);
        double sinTDown = std::sin(t);
        std::fesetround(FE_UPWARD);
        double sinTUp = std::sin(t);
        std::fesetround(saveMode);
        FPInterval iSin(std::min(sinTDown, sinTUp), std::max(sinTDown, sinTUp));
        FPInterval iT(t);
        FPInterval iF = iSin / iT;

        // Compute an interval containing p(t) using even-power Horner.
        FPInterval iTSqr = iT * iT;
        size_t index = n;
        FPInterval iP(p[index--]);
        for (size_t i = 1; i < p.size(); ++i, --index)
        {
            FPInterval iPCoefficient(p[index]);
            iP = iPCoefficient + iTSqr * iP;
        }

        // Compute an interval containing E(t).
        FPInterval iE = iF - iP;
        return iE;
    }
    else
    {
        // sin(t)/t and p(t) are equal at t = 0, so E(t) = 0.
        return FPInterval(0.0);
    }
}

int GetSignE(size_t n, Rational t, std::vector<double> const& p,
    std::vector<Rational> const& a, Rational twoNplus3Factorial)
{
    // Attempt to classify the sign using floating-point arithmetic.
    FPInterval iE = GetIntervalE(n, static_cast<double>(t), p);
    if (iE[0] > 0.0) { return +1; }
    if (iE[1] < 0.0) { return -1; }

    // The initial sign depends on whether n is odd or even.
    int sign = (n & 1 ? +1 : -1);

    // Compute A(t) using even-power Horner.
    Rational tSqr = t * t;
    Rational A = a[n];
    for (size_t i = 0, index = n - 1; i <= n - 1; ++i, --index)
    {
        A = a[index] + A * tSqr;
    }
    if (sign * A.GetSign() >= 0) { return sign; }

    // Compute tPow = t^{2n+2}.
    Rational tPow = tSqr;
    for (size_t i = 1; i <= n; ++i)
    {
        tPow *= tSqr;
    }

    // Apply the add/subtract pattern; factor tracks the remainder term r_j(t).
    size_t maxIterations = 32;
    Rational factor = tPow / twoNplus3Factorial;  // r_0(t)
    for (size_t j = 1; j <= maxIterations; j += 2)
    {
        Rational term = factor;
        term.SetSign(sign);
        A += term;
        if (sign * A.GetSign() <= 0) { return -sign; }

        factor *= tSqr / Rational((2 * (n + j) + 2) * (2 * (n + j) + 3));
        term = factor;
        term.SetSign(-sign);
        A += term;
        if (sign * A.GetSign() >= 0) { return sign; }

        // Advance factor to the next remainder term for the next iteration.
        factor *= tSqr / Rational((2 * (n + j) + 4) * (2 * (n + j) + 5));
    }

    // Return an invalid integer to indicate the maximum number of iterations
    // has been exceeded. This code is not reached by the Geometric Tools
    // application that generates the polynomial approximations.
    return std::numeric_limits<int>::max();
}

A similar algorithm can be formulated for D(t), B(t) and S(t). Listing 15 contains pseudocode for the algorithm.

Listing 15. Pseudocode for computing the sign of D(t). The coefficients of B(t) must be computed once, but the sign testing can be performed for multiple t-values.

void ComputeBCoefficients(size_t n, std::vector<double> const& p,
    std::vector<Rational>& b, Rational& twoNplus3Factorial)
{
    std::vector<Rational> factorial(n + 1);
    factorial[0] = 6;  // (2i+3)! at i = 0
    for (size_t i = 1, im1 = 0; i <= n; ++i, ++im1)
    {
        factorial[i] = factorial[im1] * Rational((2 * i + 2) * (2 * i + 3));
    }
    twoNplus3Factorial = factorial.back();

    b.resize(n);
    Rational negOnePow = -1;
    for (size_t i = 0; i < n; ++i)
    {
        // b_i = (i+1) * ((-1)^{i+1}/(2i+3)! - p_{i+1}) per equation (25).
        b[i] = Rational(i + 1) * (negOnePow / factorial[i] - Rational(p[i + 1]));
        negOnePow.SetSign(-negOnePow.GetSign());
    }
}

FPInterval GetIntervalD(size_t n, double t, std::vector<double> const& p)
{
    if (t > 0.0)
    {
        // Compute an interval containing alpha'(t)/(2t) = (t*cos(t) - sin(t))/(2t^3).
        auto saveMode = std::fegetround();
        std::fesetround(FE_DOWNWARD);
        double cosTDown = std::cos(t);
        double sinTDown = std::sin(t);
        std::fesetround(FE_UPWARD);
        double cosTUp = std::cos(t);
        double sinTUp = std::sin(t);
        std::fesetround(saveMode);
        FPInterval iCos(std::min(cosTDown, cosTUp), std::max(cosTDown, cosTUp));
        FPInterval iSin(std::min(sinTDown, sinTUp), std::max(sinTDown, sinTUp));
        FPInterval iT(t);
        FPInterval iTSqr = iT * iT;
        FPInterval iTCube = iT * iTSqr;
        FPInterval iG = FPInterval(0.5) * (iT * iCos - iSin) / iTCube;

        // Compute an interval containing p'(t)/(2t) using even-power Horner.
        size_t index = n;
        FPInterval iIndex(static_cast<double>(index));
        FPInterval iQ = iIndex * FPInterval(p[index--]);
        for (size_t i = 2; i < p.size(); ++i, --index)
        {
            iIndex = FPInterval(static_cast<double>(index));
            FPInterval iPCoefficient(p[index]);
            iQ = iIndex * iPCoefficient + iTSqr * iQ;
        }

        // Compute an interval containing D(t).
        FPInterval iD = iG - iQ;
        return iD;
    }
    else
    {
        // D(0) = alpha''(0)/2 - p_1 = -1/6 - p_1.
        FPInterval iG = FPInterval(-1.0) / FPInterval(6.0);
        FPInterval iQ = FPInterval(p[1]);
        FPInterval iD = iG - iQ;
        return iD;
    }
}

int GetSignD(size_t n, Rational t, std::vector<double> const& p,
    std::vector<Rational> const& b, Rational twoNplus3Factorial)
{
    // Attempt to classify the sign using floating-point arithmetic.
    FPInterval iD = GetIntervalD(n, static_cast<double>(t), p);
    if (iD[0] > 0.0) { return +1; }
    if (iD[1] < 0.0) { return -1; }

    // The initial sign depends on whether n is odd or even.
    int sign = (n & 1 ? +1 : -1);

    // Compute B(t) using even-power Horner.
    Rational tSqr = t * t;
    Rational B = b[n - 1];
    for (size_t i = 0, index = n - 2; i <= n - 2; ++i, --index)
    {
        B = b[index] + B * tSqr;
    }
    if (sign * B.GetSign() >= 0) { return sign; }

    // Compute tPow = t^{2n}.
    Rational tPow = tSqr;
    for (size_t i = 1; i < n; ++i)
    {
        tPow *= tSqr;
    }

    // Apply the add/subtract pattern; factor tracks t^{2(n+j)}/(2(n+j)+3)!.
    size_t maxIterations = 32;
    Rational factor = tPow / twoNplus3Factorial;
    for (size_t j = 1; j <= maxIterations; j += 2)
    {
        Rational term = Rational(n + j) * factor;
        term.SetSign(sign);
        B += term;
        if (sign * B.GetSign() <= 0) { return -sign; }

        factor *= tSqr / Rational((2 * (n + j) + 2) * (2 * (n + j) + 3));
        term = Rational(n + j + 1) * factor;
        term.SetSign(-sign);
        B += term;
        if (sign * B.GetSign() >= 0) { return sign; }

        // Advance factor to the next remainder term for the next iteration.
        factor *= tSqr / Rational((2 * (n + j) + 4) * (2 * (n + j) + 5));
    }

    // Return an invalid integer to indicate the maximum number of iterations
    // has been exceeded. This code is not reached by the Geometric Tools
    // application that generates the polynomial approximations.
    return std::numeric_limits<int>::max();
}

5.2.4 Source Code

The source code for the modified Remez algorithm applied to the rotation-coefficient functions is found in the tool GeometricTools/GTE/Tools/RotationApproximation. Links to the files are

RotationApproximationMain.cpp
RotationApproximationConsole.h
RotationApproximationConsole.cpp
RemezConstrained.h
RemezConstrained.cpp
RemezRotC0.h
RemezRotC0.cpp
RemezRotC1.h
RemezRotC1.cpp
RemezRotC2.h
RemezRotC2.cpp
RemezRotC3.h
RemezRotC3.cpp

The code was factored to use virtual functions that are specific to each rotation-coefficient function.
The function α(t) is handled by RemezRotC0, the function β(t) by RemezRotC1, the function γ(t) by RemezRotC2 and the function δ(t) by RemezRotC3. The goal was to factor as much of the abstract detail of the modified Remez algorithm as possible into the base class RemezConstrained for code sharing. The output of the tool consists of files named RotC?Info.txt that contain the polynomial coefficients, the estimated maximum error, the t-nodes and the corresponding E(t) values. The degrees of fitting are 2 ≤ n ≤ 8. The files RotC?GTL.txt contain Geometric Tools source code for the polynomial coefficients and maximum errors. The contents of these files were copied to RotationEstimate.h. That file also contains the functions for polynomial evaluation, as well as source code for the rotation approximation of equation (3) and the rotation derivative approximations of equations (8), (9) and (10).

6 Polynomial Coefficients

The polynomial coefficients, t-nodes and associated errors E(t) are displayed in tables in this section. Graphs of E(t) were drawn using Mathematica.

Table 5. The polynomial coefficients, t-nodes and associated errors are shown here for α(t) for t ∈ [0, π].
Columns: coefficients p, t-nodes t, errors e.

n = 2:
  p = +1.00000000000000000e+00
  p = -1.58971650732578684e-01   t = +1.41696926221368713e+00   e = -6.96563711867481672e-03
  p = +5.84121356311684790e-03   t = +2.77555038886482652e+00   e = +6.96563711867501101e-03

n = 3:
  p = +1.00000000000000000e+00
  p = -1.66218398161274539e-01   t = +1.06502458526975197e+00   e = -2.23795060895759512e-04
  p = +8.06129151017077016e-03   t = +2.20850330064692724e+00   e = +2.23795060895704001e-04
  p = -1.50545944866583496e-04   t = +2.94751550168847576e+00   e = -2.23795060895801146e-04

n = 4:
  p = +1.00000000000000000e+00
  p = -1.66651290458553397e-01   t = +8.53329526831219320e-01   e = -4.86700964341668652e-06
  p = +8.31836205080888937e-03   t = +1.81734904413201592e+00   e = +4.86700964330566421e-06
  p = -1.93853969255209339e-04   t = +2.55138186398592381e+00   e = -4.86700964347219767e-06
  p = +2.19921657358978346e-06   t = +3.02055091107866058e+00   e = +4.86700964307668071e-06

n = 5:
  p = +1.00000000000000000e+00
  p = -1.66666320608302304e-01   t = +7.11696754226058381e-01   e = -7.56547114955097300e-08
  p = +8.33284074932796014e-03   t = +1.53815790605168079e+00   e = +7.56547114955097300e-08
  p = -1.98184457544372085e-04   t = +2.21700105040400874e+00   e = -7.56547113289762763e-08
  p = +2.70931602688878442e-06   t = +2.73401164736507774e+00   e = +7.56547116065320324e-08
  p = -2.07033154672609224e-08   t = +3.05865332776585674e+00   e = -7.56547110861149896e-08

n = 6:
  p = +1.00000000000000000e+00
  p = -1.66666661172424985e-01   t = +6.10295625428356914e-01   e = -8.79391559571729431e-10
  p = +8.33332258782319701e-03   t = +1.33069662324980786e+00   e = +8.79391448549426968e-10
  p = -1.98405693280704135e-04   t = +1.94839244797982092e+00   e = -8.79391670594031893e-10
  p = +2.75362742468406608e-06   t = +2.45943689797937992e+00   e = +8.79391670594031893e-10
  p = -2.47308402190765123e-08   t = +2.84310043249426592e+00   e = -8.79391726105183125e-10
  p = +1.36149932075244694e-10   t = +3.08098593292690204e+00   e = +8.79391684471819701e-10

n = 7:
  p = +1.00000000000000000e+00
  p = -1.66666666601880786e-01   t = +5.34129297621524390e-01   e = -7.91977594616355418e-12
  p = +8.33333316679120591e-03   t = +1.17158405134060084e+00   e = +7.91966492386109167e-12
  p = -1.98412553530683797e-04   t = +1.72691263228428449e+00   e = -7.91988696846601670e-12
  p = +2.75567210003238900e-06   t = +2.21943896819618747e+00   e = +7.91988696846601670e-12
  p = -2.50388692626200884e-08   t = +2.61775904512782720e+00   e = -7.91974819058793855e-12
  p = +1.58972932135933544e-10   t = +2.91360618222484735e+00   e = +7.91972043501232292e-12
  p = -6.61111627233688785e-13   t = +3.09519274229929309e+00   e = -7.91996156157548370e-12

n = 8:
  p = +1.00000000000000000e+00
  p = -1.66666666666648478e-01   t = +1.71206551449520239e-01   e = -4.44089209850062616e-16
  p = +8.33333333318112164e-03   t = +4.60075592255305033e-01   e = +4.44089209850062616e-16
  p = -1.98412698077537775e-04   t = +8.57669717404237031e-01   e = -4.44089209850062616e-16
  p = +2.75573162083557394e-06   t = +1.32506964372557778e+00   e = +4.44089209850062616e-16
  p = -2.50519743096581360e-08   t = +1.81652300986421600e+00   e = -3.33066907387546962e-16
  p = +1.60558314470477309e-10   t = +2.28392293618555620e+00   e = +6.10622663543836097e-16
  p = -7.60488921303402553e-13   t = +2.68151706133448808e+00   e = -2.49800180540660222e-16
  p = +2.52255089807125025e-15   t = +2.97038610214027310e+00   e = +6.80011602582908381e-16

Figure 8. Graphs of E(t) for α(t) and various degree minimax polynomials. (Panels for n = 2 through n = 8; plots not reproduced here.)

Table 6. The polynomial coefficients, t-nodes and associated errors are shown here for β(t) for t ∈ [0, π].
Columns: coefficients p, t-nodes t, errors e.

n = 2:
  p = +5.00000000000000000e-01
  p = -4.06593520914583922e-02   t = +1.42276011523893997e+00   e = -9.21190101505375836e-04
  p = +1.06698549928666312e-03   t = +2.77869585441082823e+00   e = +9.21190101505375836e-04

n = 3:
  p = +5.00000000000000000e-01
  p = -4.16202835017619524e-02   t = +1.06695024281444484e+00   e = -2.32512618063007714e-05
  p = +1.36087417563353699e-03   t = +2.21077941596084315e+00   e = +2.32512618062452603e-05
  p = -1.99122437404000405e-05   t = +2.94823245637554709e+00   e = -2.32512618062730159e-05

n = 4:
  p = +5.00000000000000000e-01
  p = -4.16653520191245796e-02   t = +8.54095869743815461e-01   e = -4.16931608848702950e-07
  p = +1.38761160375298095e-03   t = +1.81851777200555631e+00   e = +4.16931608848702950e-07
  p = -2.44138380330618480e-05   t = +2.55221790349626687e+00   e = -4.16931608793191799e-07
  p = +2.28499434819148172e-07   t = +3.02076977142233449e+00   e = +4.16931608820947375e-07

n = 5:
  p = +5.00000000000000000e-01
  p = -4.16666414534321572e-02   t = +7.12046211488670533e-01   e = -5.51778872592834091e-09
  p = +1.38885303988537192e-03   t = +1.53965237463759808e+00   e = +5.51778867041718968e-09
  p = -2.47850001122705350e-05   t = +2.21757467564157373e+00   e = -5.51778867041718968e-09
  p = +2.72207208413898425e-07   t = +2.73435222407546519e+00   e = +5.51778867041718968e-09
  p = -1.77358008600681907e-09   t = +3.05873499545067862e+00   e = -5.51778875368391653e-09

n = 6:
  p = +5.00000000000000000e-01
  p = -4.16666663178411334e-02   t = +6.10472432234793638e-01   e = -5.58657009541718708e-11
  p = +1.38888820709641924e-03   t = +1.33104460641283406e+00   e = +5.58655344207181770e-11
  p = -2.48011431705518285e-05   t = +1.94872960419772490e+00   e = -5.58655344207181770e-11
  p = +2.75439902962340229e-07   t = +2.45971981532091011e+00   e = +5.58655899318694082e-11
  p = -2.06736081122602257e-09   t = +2.84334786702351305e+00   e = -5.58657009541718708e-11
  p = +9.93003618302030503e-12   t = +3.08062222494501992e+00   e = +5.58654511539913301e-11

n = 7:
  p = +5.00000000000000000e-01
  p = -4.16666666664263635e-02   t = +2.10446803619233513e-01   e = -7.16093850883225969e-15
  p = +1.38888888750799658e-03   t = +5.61107910590008752e-01   e = +7.16093850883225969e-15
  p = -2.48015851902670717e-05   t = +1.03355234196907242e+00   e = -7.16093850883225969e-15
  p = +2.75571871163332658e-07   t = +1.57079632679489656e+00   e = +7.16093850883225969e-15
  p = -2.08727380201649381e-09   t = +2.10804031162072114e+00   e = -7.16093850883225969e-15
  p = +1.14076763269827225e-11   t = +2.58048474299978459e+00   e = +7.16093850883225969e-15
  p = -4.28619236995285237e-14   t = +2.93114584997056005e+00   e = -7.16093850883225969e-15

n = 8:
  p = +5.00000000000000000e-01
  p = -4.16666666666571719e-02   t = +1.71206551449520239e-01   e = +7.21644966006351751e-16
  p = +1.38888888885105744e-03   t = +4.60075592255305033e-01   e = -7.21644966006351751e-16
  p = -2.48015872513761947e-05   t = +8.57669717404237031e-01   e = +7.21644966006351751e-16
  p = +2.75573160474227648e-07   t = +1.32506964372557778e+00   e = -7.21644966006351751e-16
  p = -2.08766469798137579e-09   t = +1.81652300986421600e+00   e = +7.21644966006351751e-16
  p = +1.14685460418668139e-11   t = +2.28392293618555620e+00   e = -7.21644966006351751e-16
  p = -4.75415775440997119e-14   t = +2.68151706133448808e+00   e = +6.66133814775093924e-16
  p = +1.40555891469552795e-16   t = +2.97038610214027310e+00   e = -7.21644966006351751e-16

Figure 9. Graphs of E(t) for β(t) and various degree minimax polynomials. (Panels for n = 2 through n = 8; plots not reproduced here.)

Table 7. The polynomial coefficients, t-nodes and associated errors are shown here for γ(t) for t ∈ [0, π].
Columns: coefficients p, t-nodes t, errors e.

n = 2:
  p = +3.33333333333333315e-01
  p = -3.24417271573718483e-02   t = +1.42223031101916009e+00   e = -8.14615084602177131e-04
  p = +9.05201583387763454e-04   t = +2.77840834490346023e+00   e = +8.14615084602288153e-04

n = 3:
  p = +3.33333333333333315e-01
  p = -3.32912781805089902e-02   t = +1.06680106617810866e+00   e = -2.10750257848557609e-05
  p = +1.16506615743456146e-03   t = +2.21060290328081877e+00   e = +2.10750257848002498e-05
  p = -1.76083105011587047e-05   t = +2.94817679453449877e+00   e = -2.10750257847169831e-05

n = 4:
  p = +3.33333333333333315e-01
  p = -3.33321218985461534e-02   t = +8.54044430959940870e-01   e = -3.84148385823568361e-07
  p = +1.18929901553194335e-03   t = +1.81843926051814986e+00   e = +3.84148386128879693e-07
  p = -2.16884239911580259e-05   t = +2.55216167857938370e+00   e = -3.84148386128879693e-07
  p = +2.07111898922214621e-07   t = +3.02075495492954182e+00   e = +3.84148386073368542e-07

n = 5:
  p = +3.33333333333333315e-01
  p = -3.33333098285273563e-02   t = +7.12025844238255212e-01   e = -5.14359671521802397e-09
  p = +1.19044276839748377e-03   t = +1.53872651514969183e+00   e = +5.14359665970687274e-09
  p = -2.20303898188601926e-05   t = +2.21754025356027462e+00   e = -5.14359657644014590e-09
  p = +2.47382309397892291e-07   t = +2.73433189396261778e+00   e = +5.14359654868457028e-09
  p = -1.63412179599052932e-09   t = +3.05873013020570461e+00   e = -5.14359664582908493e-09

n = 6:
  p = +3.33333333333333315e-01
  p = -3.33333330053029661e-02   t = +6.10462474431522573e-01   e = -5.25335885903643884e-11
  p = +1.19047554930589209e-03   t = +1.33102494758831069e+00   e = +5.25333665457594634e-11
  p = -2.20454376925152508e-05   t = +1.94870868266697173e+00   e = -5.25333110346082321e-11
  p = +2.50395723787030737e-07   t = +2.45970209711377397e+00   e = +5.25333665457594634e-11
  p = -1.90797721719554658e-09   t = +2.84339912512876136e+00   e = -5.25333110346082321e-11
  p = +9.25661051509749896e-12   t = +3.08035750612466508e+00   e = +5.25333943013350790e-11

n = 7:
  p = +3.33333333333333315e-01
  p = -3.33333333331133561e-02   t = +2.10446803619233513e-01   e = -7.71605002114483796e-15
  p = +1.19047618918715682e-03   t = +5.61107910590008752e-01   e = +7.71605002114483796e-15
  p = -2.20458533943125258e-05   t = +1.03355234196907242e+00   e = -7.71605002114483796e-15
  p = +2.50519837811549507e-07   t = +1.57079632679489656e+00   e = +7.77156117237609578e-15
  p = -1.92670551155064303e-09   t = +2.10804031162072114e+00   e = -7.74380559676046687e-15
  p = +1.06463697865186991e-11   t = +2.58048474299978459e+00   e = +7.71605002114483796e-15
  p = -4.03135292145519115e-14   t = +2.93114584997056005e+00   e = -7.74380559676046687e-15

n = 8:
  p = +3.33333333333333315e-01
  p = -3.33333333333034956e-02   t = +1.71206551449520239e-01   e = +2.27595720048157091e-15
  p = +1.19047619036920628e-03   t = +4.60075592255305033e-01   e = -2.27595720048157091e-15
  p = -2.20458552540489507e-05   t = +8.57669717404237031e-01   e = +2.27595720048157091e-15
  p = +2.50521015434838418e-07   t = +1.32506964372557778e+00   e = -2.27595720048157091e-15
  p = -1.92706504721931338e-09   t = +1.81652300986421600e+00   e = +2.27595720048157091e-15
  p = +1.07026043656398707e-11   t = +2.28392293618555620e+00   e = -2.27595720048157091e-15
  p = -4.46498739610373537e-14   t = +2.68151706133448808e+00   e = +2.27595720048157091e-15
  p = +1.30526089083317312e-16   t = +2.97038610214027310e+00   e = -2.23432383705812754e-15

Figure 10. Graphs of E(t) for γ(t) and various degree minimax polynomials. (Panels for n = 2 through n = 8; plots not reproduced here.)

Table 8. The polynomial coefficients, t-nodes and associated errors are shown here for δ(t) for t ∈ [0, π].
Columns: coefficients p, t-nodes t, errors e.

n = 2:
  p = +8.33333333333333287e-02
  p = -5.46357009138465424e-03   t = +1.42605999087888513e+00   e = -8.46120368888786389e-05
  p = +1.19638433962248889e-04   t = +2.78046287136283787e+00   e = +8.46120368888855778e-05

n = 3:
  p = +8.33333333333333287e-02
  p = -5.55196372993948303e-03   t = +1.06817028719641804e+00   e = -1.80519731848849396e-06
  p = +1.46646667516630680e-04   t = +2.21221668479315792e+00   e = +1.80519731859257737e-06
  p = -1.82905866698780768e-06   t = +2.94868366244126356e+00   e = -1.80519731859951627e-06

n = 4:
  p = +8.33333333333333287e-02
  p = -5.55546733314307706e-03   t = +8.54617646375245066e-01   e = -2.80161038951343144e-08
  p = +1.48723933698110248e-04   t = +1.81931270563229708e+00   e = +2.80161039506454657e-08
  p = -2.17865651989456709e-06   t = +2.55278594674443404e+00   e = -2.80161039367676779e-08
  p = +1.77408035681006169e-08   t = +3.02091841327438537e+00   e = +2.80161039367676779e-08

n = 5:
  p = +8.33333333333333287e-02
  p = -5.55555406357728914e-03   t = +7.12296212146430685e-01   e = -3.26754151513952706e-10
  p = +1.48807404153008735e-04   t = +1.54086677889246326e+00   e = +3.26753749058106280e-10
  p = -2.20360578108261882e-06   t = +2.21798657499795304e+00   e = -3.26753742119212376e-10
  p = +2.06782449582308932e-08   t = +2.73459656227776815e+00   e = +3.26753755997000184e-10
  p = -1.19178562817913197e-10   t = +3.05878482724521739e+00   e = -3.26753749058106280e-10

n = 6:
  p = +8.33333333333333287e-02
  p = -5.55555555324832757e-03   t = +2.64726913948481579e-01   e = -1.37140299116822462e-13
  p = +1.48809514798423797e-04   t = +6.98108645491121060e-01   e = +1.37140299116822462e-13
  p = -2.20457622072950518e-06   t = +1.26434916557872890e+00   e = -1.37140299116822462e-13
  p = +2.08728631685852690e-08   t = +1.87724348801106444e+00   e = +1.37126421329014647e-13
  p = -1.36888190776165574e-10   t = +2.44348400809867261e+00   e = -1.37133360222918554e-13
  p = +5.99292681875750821e-13   t = +2.87686573964131131e+00   e = +1.37133360222918554e-13

n = 7:
  p = +8.33333333333333287e-02
  p = -5.55555555528319030e-03   t = +2.10446803619233513e-01   e = +3.20715676238592096e-14
  p = +1.48809523101214977e-04   t = +5.61107910590008752e-01   e = -3.20715676238592096e-14
  p = -2.20458493798151629e-06   t = +1.03355234196907242e+00   e = +3.20715676238592096e-14
  p = +2.08765224186559757e-08   t = +1.57079632679489656e+00   e = -3.20715676238592096e-14
  p = -1.37600800115177215e-10   t = +2.10804031162072114e+00   e = +3.20715676238592096e-14
  p = +6.63762129016229865e-13   t = +2.58048474299978459e+00   e = -3.20785065177631168e-14
  p = -2.19044013684859942e-15   t = +2.93114584997056005e+00   e = +3.20715676238592096e-14

n = 8:
  p = +8.33333333333333287e-02
  p = -5.55555555501025672e-03   t = +1.71206551449520239e-01   e = +4.77673456344973602e-14
  p = +1.48809521898935978e-04   t = +4.60075592255305033e-01   e = -4.77673456344973602e-14
  p = -2.20458342827337994e-06   t = +8.57669717404237031e-01   e = +4.77673456344973602e-14
  p = +2.08757075326674457e-08   t = +1.32506964372557778e+00   e = -4.77673456344973602e-14
  p = -1.37379825035843510e-10   t = +1.81652300986421600e+00   e = +4.77673456344973602e-14
  p = +6.32209097599974706e-13   t = +2.28392293618555620e+00   e = -4.77742845284012674e-14
  p = +7.39204014316007136e-17   t = +2.68151706133448808e+00   e = +4.77673456344973602e-14
  p = -6.43236558920699052e-17   t = +2.97038610214027310e+00   e = -4.77673456344973602e-14

Figure 11. Graphs of E(t) for δ(t) and various degree minimax polynomials. (Panels for n = 2 through n = 8; plots not reproduced here.)

References

Ethan Eade. Lie Groups for Computer Vision, 2014.
Katta G. Murty. Line search algorithms. IOE 611 lecture slides, publication date unknown.
IEEE Computer Society and Microprocessor Standards Association. IEEE 754-2008, IEEE Standard for Floating-Point Arithmetic.
Wikipedia. Chebyshev nodes. Accessed August 10, 2020.
Wikipedia. Divided differences. Accessed August 10, 2020.
Wikipedia. Newton polynomial. Accessed August 10, 2020.
Wikipedia. Remez algorithm. Accessed August 10, 2020.
Wolfram Research, Inc. Mathematica 12.1.1.0, Champaign, Illinois, 2020.
3914
https://math.colorado.edu/~yuhu6917/teaching/Math353_16Fall/LectureNotes/lecture23.pdf
LECTURE 23: FOURIER CONVERGENCE THEOREM, EVEN AND ODD FUNCTIONS

Date: 10/25/16.

1. Fourier convergence theorem

Theorem. If $f(x)$, $f'(x)$ are piecewise continuous on $[-L, L)$, periodic with period $2L$, then its Fourier series
$$f(x) \sim \frac{a_0}{2} + \sum_{m=1}^{\infty}\left( a_m \cos\frac{m\pi x}{L} + b_m \sin\frac{m\pi x}{L}\right)$$
converges to
$$\begin{cases} f(x), & \text{at the values of } x \text{ where } f(x) \text{ is continuous},\\ \big(f(x^+)+f(x^-)\big)/2, & \text{at the values of } x \text{ where } f(x) \text{ is discontinuous}.\end{cases}$$

Example. Let
$$f(x) = \begin{cases} 0, & -L < x < 0,\\ L, & 0 < x < L,\end{cases} \qquad f(x+2L) = f(x).$$
The Fourier series of $f(x)$, by the theorem, converges to the function
$$\tilde{f}(x) = \begin{cases} 0, & -L < x < 0,\\ L, & 0 < x < L,\\ \tfrac{1}{2}L, & x = 0,\end{cases} \qquad \tilde{f}(x+2L) = \tilde{f}(x).$$

In particular, just by calculating the Fourier series and using the Fourier convergence theorem, we are able to obtain interesting results about the limits of certain infinite series. For instance, if we calculate the Fourier series of the function $f(x)$ in the example above, we get
$$a_0 = \frac{1}{L}\int_{-L}^{L} f(x)\,dx = L,$$
$$a_n = \frac{1}{L}\int_{-L}^{L} f(x)\cos\frac{n\pi x}{L}\,dx = \int_0^L \cos\frac{n\pi x}{L}\,dx = 0, \quad (n = 1, 2, 3, \dots),$$
$$b_m = \frac{1}{L}\int_{-L}^{L} f(x)\sin\frac{m\pi x}{L}\,dx = \int_0^L \sin\frac{m\pi x}{L}\,dx = \frac{L}{m\pi}\big[1-(-1)^m\big], \quad (m = 1, 2, 3, \dots)$$
$$= \frac{2L}{(2k+1)\pi}, \quad (m = 2k+1,\ k = 0, 1, 2, \dots).$$
Therefore,
$$f(x) \sim \frac{L}{2} + \sum_{k=0}^{\infty}\frac{2L}{(2k+1)\pi}\sin\frac{(2k+1)\pi x}{L}.$$

Now consider $x = \frac{L}{2}$; note that this is a point at which $f(x)$ is continuous, hence the Fourier series evaluated at this point converges to $L$, that is,
$$\frac{L}{2} + \sum_{k=0}^{\infty}\frac{2L}{(2k+1)\pi}(-1)^k = L.$$
This is just
$$\sum_{k=0}^{\infty}\frac{(-1)^k}{2k+1} = \frac{\pi}{4}.$$

2. Even and odd functions

This is a topic we are familiar with already, at least in terms of the symmetry of the graph of an even or odd function. We will have a review of this topic here, since it will enable us to
• halve the effort of calculating the coefficients in a Fourier series;
• extend a function appropriately so that the result has a desired type of Fourier series.

Definition. A function $f(x)$ is said to be even if $f(x) = f(-x)$ for all $x$, and odd if $f(x) = -f(-x)$ for all $x$.

By the definition, it is easy to see that the sum/difference/product of two even functions is even, the sum/difference of two odd functions is odd, the product of two odd functions is even, the product of an even function and an odd function is odd, etc. Also, we have the following integral identities:
• $f(x)$ even: $\int_{-L}^{L} f(x)\,dx = 2\int_0^L f(x)\,dx$;
• $f(x)$ odd: $\int_{-L}^{L} f(x)\,dx = 0$.

Applying these to Fourier series, we assume that both $f$, $f'$ are piecewise continuous and periodic with period $2L$, and we have
• $f(x)$ even:
$$a_n = \frac{1}{L}\int_{-L}^{L} f(x)\cos\frac{n\pi x}{L}\,dx = \frac{2}{L}\int_0^L f(x)\cos\frac{n\pi x}{L}\,dx, \quad (n = 0, 1, 2, \dots)$$
$$b_m = \frac{1}{L}\int_{-L}^{L} f(x)\sin\frac{m\pi x}{L}\,dx = 0, \quad (m = 1, 2, \dots)$$
• $f(x)$ odd:
$$a_n = \frac{1}{L}\int_{-L}^{L} f(x)\cos\frac{n\pi x}{L}\,dx = 0, \quad (n = 0, 1, 2, \dots)$$
$$b_m = \frac{1}{L}\int_{-L}^{L} f(x)\sin\frac{m\pi x}{L}\,dx = \frac{2}{L}\int_0^L f(x)\sin\frac{m\pi x}{L}\,dx, \quad (m = 1, 2, \dots)$$

Observe that when $f(x)$ is even, its Fourier series consists only of the cosine terms, and we call it a cosine series. Similarly, when $f(x)$ is odd, the Fourier series is called a sine series.

Example. The function $f(x) = x$, $-L \le x < L$, $f(x+2L) = f(x)$ is odd and its Fourier series is a sine series. The function $f(x) = |x|$, $-L \le x < L$, $f(x+2L) = f(x)$ is even and its Fourier series is a cosine series.

Looking closer at the two functions in the above example, we can see that the values of the functions are the same on the interval $[0, L]$. In other words, $f(x) = x$ on $[0, L]$ can be expanded as a sine series, a cosine series, or a mixture of both. In fact, this is no surprise, since in the two cases we have extended the function $f(x) = x$ either oddly or evenly to a periodic function with period $2L$.
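The vanishing-coefficient identities above are easy to confirm numerically. Here is a quick sketch (not part of the lecture; `coeff` is an ad-hoc midpoint-rule helper of my own naming) showing that the odd function f(x) = x has no cosine coefficients, the even function f(x) = |x| has no sine coefficients, and the surviving sine coefficient of x matches the closed form b_1 = 2L/π:

```python
import math

def coeff(f, trig, n, L, steps=20000):
    """Midpoint-rule approximation of (1/L) * integral_{-L}^{L} f(x) trig(n*pi*x/L) dx."""
    h = 2 * L / steps
    total = 0.0
    for i in range(steps):
        x = -L + (i + 0.5) * h          # midpoint of the i-th subinterval
        total += f(x) * trig(n * math.pi * x / L)
    return total * h / L

L = 2.0
a1_odd = coeff(lambda x: x, math.cos, 1, L)   # a_1 for the odd function x: vanishes
b1_even = coeff(abs, math.sin, 1, L)          # b_1 for the even function |x|: vanishes
b1_odd = coeff(lambda x: x, math.sin, 1, L)   # b_1 for x: equals 2L/pi = 4/pi here
```

The midpoints are placed symmetrically about 0, so the odd integrands cancel almost exactly, which is the discrete analogue of the integral identities.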
This suggests the following: if a function is defined on $[0, L]$ and we need to expand it as a sine (resp. cosine) series of period $2L$, we simply extend the function oddly (resp. evenly) to $[-L, L)$ and then periodically to $(-\infty, \infty)$ with period $2L$, and calculate the Fourier series.
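To see the convergence theorem in action, here is a small numerical check (a sketch, not part of the lecture) of the square-wave example from Section 1 with L = 1: the partial sums approach f(x) at a continuity point, the average value L/2 at the jump, and evaluating at x = L/2 recovers the Leibniz series for π/4.

```python
import math

def partial_sum(x, L=1.0, terms=100000):
    """Partial sum of L/2 + sum_k 2L/((2k+1) pi) sin((2k+1) pi x / L)."""
    s = L / 2
    for k in range(terms):
        m = 2 * k + 1
        s += (2 * L / (m * math.pi)) * math.sin(m * math.pi * x / L)
    return s

at_continuity = partial_sum(0.5)   # f continuous at x = 1/2, so the series -> L = 1
at_jump = partial_sum(0.0)         # jump at x = 0, so the series -> (0 + L)/2 = 1/2
leibniz = sum((-1) ** k / (2 * k + 1) for k in range(100000))   # -> pi/4
```

Note that at the jump every sine term is exactly zero, so the partial sum equals L/2 at every truncation order, just as the theorem predicts for the limit.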
3915
https://www.quora.com/How-do-you-know-when-to-use-the-law-of-sines-or-the-law-of-cosines
How to know when to use the law of sines or the law of cosines - Quora

How do you know when to use the law of sines or the law of cosines?

Kevin Kenny, former computer engineer at General Electric:

To solve for all the parts of a triangle, you need to know at least three parts, at least one of which must be a side. If you know the three sides, you can use the Law of Cosines to find any angle. If you know two sides and an angle, the Law of Cosines will find the third side. If you know a side and two angles, you actually know all three angles (they add up to 180°), and the Law of Sines will find the remaining sides. Here you know the angles and one side of the left-hand triangle, so start there and find the length of one of the guy wires. Then you have a right triangle with a known angle and hypotenuse to find the height of the tower.

Michael Jørgensen:

In this problem you can use the law of sines. The rule is simple: if you know an angle and the opposite side, then use the law of sines. In all other cases, use the law of cosines.

Assistant Bot:

The choice between the Law of Sines and the Law of Cosines depends on the information you have about the triangle and what you need to find. Here's a breakdown of when to use each.

Law of Sines. Use the Law of Sines when you have:
Angle-Angle-Side (AAS): two angles and a non-included side.
Angle-Side-Side (ASS): two sides and a non-included angle (note that this can sometimes lead to the ambiguous case).
Angle-Angle-Angle (AAA): this can help in finding the ratios of the sides, but not the actual side lengths.

The Law of Sines states: a/sin A = b/sin B = c/sin C, where a, b, and c are the lengths of the sides opposite angles A, B, and C, respectively.

Law of Cosines. Use the Law of Cosines when you have:
Side-Side-Side (SSS): all three sides known.
Side-Angle-Side (SAS): two sides and the included angle.

The Law of Cosines states: c² = a² + b² − 2ab cos(C), and this can be rearranged for different sides and angles.

Summary: use the Law of Sines for AAS, ASS, or AAA; use the Law of Cosines for SSS or SAS. This helps ensure you select the right method based on the known quantities in the triangle.

Dee Stoddard, former Senior Advisor at Apple:

[An image in this answer was removed for violating Quora's policy.]

We know sine and cosine both have the hypotenuse as a denominator. Think of it as a car loan: by yourself, you sign the lease; if you need to cosign, you add a cosigner (adjacent, cosine). Some interesting points about sine and cosine: if you divide sine by cosine (sine/cosine), you get the tangent of the same angle: (opp/hyp)/(adj/hyp) = (opp/hyp)(hyp/adj) (multiply by the reciprocal); cross-cancel hyp: (opp/hyp)(hyp/adj) = opp/adj, or tan. We all remember A² + B² = C²; make it opp² + adj² = hyp². These can be written another way: (A² + B²)/C² = 1, i.e. (opp² + adj²)/hyp² = 1. Now see what happens when you square both sine and cosine: sin² = opp²/hyp² and cos² = adj²/hyp². Add them together: (opp²/hyp²) + (adj²/hyp²); consolidate: (opp² + adj²)/hyp². We saw above that this equals 1, so sin² + cos² = 1 when you're comparing the same angle, any angle.

Mātlālihhuītl Quiroz Mendez, physics undergrad (related question: How do I know when to use 'sine', 'cosine', or 'tangent'?):

I'm guessing you're familiar with the acronym SOH-CAH-TOA. The SOH stands for sin θ = opposite/hypotenuse. The CAH stands for cos θ = adjacent/hypotenuse. And finally, the TOA stands for tan θ = opposite/adjacent. To know when to use these, you have to find what you have on your triangle. Let me do an example to give an idea of what I mean. Very basic. Let's say we wanted to find what x is. The first thing that might come to your mind when you see a triangle like this is the Pythagorean theorem. However, we cannot solve for this side given only one side and an angle, so we have to resort to trigonometric ratios. All we need to do is find which ratio to use. But before that, we must know what the "opposite side" and the "adjacent side" are. I'm assuming you know what the hypotenuse is because of the Pythagorean theorem. These are the sides: the hypotenuse is always the side bigger than both of the other two sides; you can easily tell which one it is. The "opposite side" is the one that is opposite from the angle θ. If you don't know what θ is, don't worry: it is just another variable, but for angles. Finally, the "adjacent side" is the side adjacent, or next to, the angle of reference. So, coming back to our problem: we have the opposite side, x, because it is opposite the angle, and we have the adjacent side, 13, because it is adjacent to the 60-degree angle. The only trigonometric ratio that involves these two is tan, so we use that one. Just plug your values into the ratio and solve for x; note that the angle replaces θ and you must use a calculator: tan 60 = x/13, so x = 13 tan 60 ≈ 22.5. So, what does our x mean?
It's just the missing side that we solved for.

Leon Joseph, former math teacher:

The law of cosines is a² = b² + c² − 2bc cos A. The law of sines is a/sin A = b/sin B = c/sin C. These two rules work in any triangle, not just a right triangle. You use the law of cosines to relate the three sides of a triangle to one of the angles. So if you are given the three sides you can find the angle, or if you are given two sides and the included angle (the angle between the two sides) you can find the side opposite the included angle. The law of sines relates the angles and the sides opposite them. If you know any three parts of any two of the three fractions in the proportion mentioned above, you can find the fourth missing part. For example, if you know two angles and one of the sides opposite either of the angles, you can find the other side. Somewhat confusing, but if you try it, it will make sense. MATH ROCKS.

Philip Lloyd, specialist calculus teacher (related question: Can you explain really well how to use the law of cosines, and how do you determine when to use it?):

The Cosine Rule has two forms. Here are two examples. [The worked examples were images and are not reproduced here.]

Robert Nichols (originally answered: How do you know when to use the sine/cosine law? Explain your answer using angles and side lengths.):

The law of cosines: c² = a² + b² − 2ab cos C. Use it to find any angle in a triangle when you know all three side lengths. Use it to find the third side when you know two side lengths and the included angle between the two sides. When used to find the third side when you know two sides and an angle, but the angle is not between the two known sides, it will result in the possibility of two answers. (This is what geometry teachers call ASS, angle-side-side.)

The law of sines: a/sin A = b/sin B = c/sin C. Use it when you know two angles and one side opposite one of the given angles, to find the side opposite the other known angle. (Note that if you know two angles, you can deduce the third angle by subtracting the sum of the two known angles from 180°.) When used with two sides and an angle that is not between the two known sides, it will likewise result in the possibility of two answers.

Doug Dillon, Ph.D. Mathematics:

For either rule, you must be given three bits of information, and at least one side. Whenever you are given two angles, you can use the SINE LAW. Whenever you are given two sides, you can use the COSINE LAW.

Steve Sparling, BSc in Mathematics, The University of British Columbia:

Some textbooks have a table of conditions to determine when to use which. I find that bordering on terror! My advice is to first try to set up the law of sines, as it is easier, and if you can't, then set up the law of cosines. With some experience you will automatically go to the right one. For example, if you have all three sides and you need to find an angle, it is the law of cosines. Look at the examples in the textbook and the examples from the lesson. They will probably show all the cases.
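The rules of thumb in these answers condense into two small helpers: the law of sines covers the two-angles cases, the law of cosines the two-sides and three-sides cases. A sketch (the function names are mine, not from any answer):

```python
import math

def solve_aas(A_deg, B_deg, a):
    """Two angles and the side opposite the first (AAS): law of sines.
    Returns (b, c, C_deg) for the remaining side lengths and angle."""
    C_deg = 180.0 - A_deg - B_deg                 # angles of a triangle sum to 180
    k = a / math.sin(math.radians(A_deg))         # common ratio a / sin A
    b = k * math.sin(math.radians(B_deg))
    c = k * math.sin(math.radians(C_deg))
    return b, c, C_deg

def angle_sss(a, b, c):
    """Three sides (SSS): law of cosines, c^2 = a^2 + b^2 - 2ab cos C.
    Returns the angle opposite side c, in degrees."""
    return math.degrees(math.acos((a * a + b * b - c * c) / (2 * a * b)))
```

For instance, `angle_sss(3, 4, 5)` returns 90°, confirming that a 3-4-5 triangle is right-angled.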
John Barrow, studied physics and fluid dynamics (originally answered: When solving problems with vectors, how do you know when you are supposed to use cosines or sines?):

I'm going to assume that your question concerns resolving a vector into two components at right angles to each other (conventionally, along the x and y axes). If there is a vector A at an angle a to the x-axis, the rule is that the x-component is Ax = A cos(a), and the y-component is Ay = A sin(a). The cosine function is always used when the angle a is the "included" angle, that is, the angle included between the vector and the x-axis.

Ajay Sreenivas, former aerospace engineer at Ball Aerospace:

Use the law of sines to find any one of the four parameters (a, b, A, or B) when the other three are known. Use the cosine law to find any one side of a triangle, given the other two sides along with their included angle.

Robert Paxson, BSME in Mechanical Engineering, Lehigh University (related question: Why is the law of cosines more accurate than the law of sines?):

Both are accurate if used correctly. However, it is easier to make a mistake if the law of sines is used carelessly. The law of cosines uses three pieces of information that uniquely define the triangle (SSS or SAS are unique), with exactly one angle having a particular value of cosine. While the law of sines also uses three pieces of information, the angle used is not between the two sides (SSA is ambiguous), and there are two angles with a particular value of sine.

Consider △ABC whose sides are exactly AB = 3, BC = 4, and AC = 5.1. You measure ∠C ≈ 36° and solve for ∠B. Notice that this is very close to a right triangle (a 3-4-5 triangle is a right triangle by the Pythagorean theorem), and that we are being asked to solve for the angle that is close to 90°. If you use the law of sines, you will get sin(B) = 5.1 sin 36°/3 = 0.9992349288964, such that ∠B is either B₁ ≈ 87.76° or B₂ = 180° − B₁ ≈ 92.24°, since both of these angles have the same sine value. Which is the correct angle for this particular triangle? It is ambiguous. You must be careful and notice that the triangle is slightly obtuse. If you use the law of cosines, you will get cos(B) = (3² + 4² − 5.1²)/(2·3·4) = −0.042083333, so B ≈ 92.412°, which is correct; the triangle is not ambiguous. The small difference between B and B₂ is because ∠C is not exactly 36° but actually closer to 35.995°. We have to be very careful when analyzing oblique triangles that are close to right triangles.
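The 3-4-5.1 example above is easy to reproduce. A sketch (assuming, as in that answer, that the rounded measurement ∠C = 36° is what goes into the sine side):

```python
import math

AB, BC, AC = 3.0, 4.0, 5.1

# Law of cosines at vertex B (between sides AB and BC, opposite AC): unambiguous.
cos_B = (AB**2 + BC**2 - AC**2) / (2 * AB * BC)   # = -0.042083...
B_cosines = math.degrees(math.acos(cos_B))        # ~92.41 deg, correctly obtuse

# Law of sines with the rounded measurement C = 36 deg: two candidate angles.
sin_B = AC * math.sin(math.radians(36)) / AB      # ~0.99923
B1 = math.degrees(math.asin(sin_B))               # ~87.76 deg (acute candidate)
B2 = 180.0 - B1                                   # ~92.24 deg -- ambiguous!
```

The arcsine can only return the acute candidate, so the sine route forces you to decide between B₁ and B₂ by other means, whereas the cosine route lands directly on the obtuse answer.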
3916
https://www.youtube.com/watch?v=-XA47qOjuec
Circle Inversion: A new perspective on geometry (Part 1) #SoME
The Calculus of Explanations, 1490 subscribers. 32934 views, 1201 likes. Posted: 26 Apr 2023

Description: Circle inversion is a very beautiful and interesting technique for problems in geometry. In this video I'll outline some of its main properties and solve a basic problem involving mutually tangent circles and lines. Part 2 of this series is now live, you can watch it here. I wrote a blog post about the journey I took through Circle Inversion here. This playlist is inspired by the following videos, which you should definitely watch for more information on circle inversion: Epic Circles - Numberphile; Problem of Apollonius - what does it teach us about problem solving? - Mathemaniac; An amazing puzzle involving Circles - Act of Learning. Check out my related blog for more maths musings below. If you'd like to support me making more of this content, consider supporting me on Patreon below.

109 comments

Transcript:

[Music] We'll be talking about what I think is one of the most interesting techniques most people don't learn in the course of a normal mathematics education. [Music] The technique is called circle inversion. In this first part of our series on circle inversion, we'll talk about what circle inversion is, some properties of this transformation, and how to use circle inversion to solve some interesting problems involving geometrical objects such as circles and lines.

But first, let's talk about transformations. A transformation is something that we do to a mathematical object or problem to make it easier to solve. Most people are familiar with linear transformations like dilation, otherwise known as zooming in and out, or perhaps you've seen reflection, like this reflection about the y-axis. Another common transformation is rotation about a point, like this rotation about the origin.

Circle inversion, however, is a non-linear transformation. It looks a little bit like this: we have a green reference circle, and we're performing an inversion of the plane. Notice how the inside gets mapped to the outside and vice versa. It's a little bit easier to see in polar coordinates: look at how the circles inside the green circle are being mapped to the outside and vice versa. Here are some shapes for reference. You'll notice that all of the shapes are being distorted under this transformation except for the circle, which only changes size. This is one of the really nice properties of circle inversion: circles stay circles under inversion. Notice that points close to the center get mapped very far away, whereas points near the green circle stay near the green circle.

Let's look at how to invert a point with respect to this green circle, the center of which we'll call O. The point we're interested in we'll call A. Draw a line through O and A, and then construct a perpendicular from A up to the circle. The tangent at this point will intersect the ray at some point, which we'll call A′. This is the inverted version of the point A. The two triangles we've drawn are similar, which means the ratio of their two sides is equal. This gives us the formula for circle inversion: OA (this distance here) multiplied by OA′ (this distance here) is equal to r², the radius of the circle of inversion squared. And this is the formula that we will use when we solve problems using circle inversion.

As we mentioned before, it's an interesting fact that a circle under inversion becomes another circle. Let's look at inverting the same circle, but now a little closer to the origin: points close to the origin get much further away under inversion. Now let's see some examples. In the generic case we just saw, a small blue circle gets inverted to a larger red circle outside the circle of inversion. Circle inversion is what's called a conformal map, so angles are preserved; specifically, anything tangent in the original problem will still be tangent in the inverted space.

What about when the circle goes through the origin of the inversion circle? We've seen previously that as points get closer to the center of the circle, their inverted points get further and further away. In fact, if a circle goes through the origin of the circle of inversion, that point goes to infinity under the inversion. So in the special case of a circle going through the origin, we actually get a straight line in the inverted space. This is very useful when it comes to solving problems using circle inversion, as we are able to transform circles, which are complicated shapes, into straight lines, which are much easier to deal with.

As a special case of this, if a circle goes through the origin and is tangent to the green circle, its inversion is a straight line that is also tangent to the circle at that point. For completion, here's a circle that goes through the origin and the circle of inversion; note that on the circle of inversion, the points have to stay where they are. And here's a circle outside the green circle getting inverted to the red circle inside of it.

Now let's look at a typical problem we can solve using this technique. Here we have a unit circle with several other circles inside of it, and our job is to find the radius R of the smallest circle. Have a think about how you would do this; pause the video and try it out. [Music] One way is to construct the blue triangles as shown here [Music] and use Pythagoras to construct two simultaneous equations. You should find that the two triangles are congruent, and therefore the radius is half that of the medium circle, meaning R is a quarter.

Let's now show this using inversion. One choice you have to make is what circle you want to invert on. You should look for points of mutual tangency and use those as the center of your circle of inversion. So here we can use the big circle as the circle of inversion and color it green. The diameter, which we'll color in blue, inverts to another straight line that extends outwards towards infinity, just like this. [Music] The medium circle goes through the origin of the inversion circle and is also tangent to the green circle, so it inverts to a straight line that's also tangent to the green circle. And our small circle is tangent to all three, so it must be tangent to all three in the inverted space.

Now let's draw a straight line through the origin, the center of our circle, and its inverted version, and label some points: the point A inverts to A′, and we're going to use these two points to find the radius of our small circle. Remember, we use the formula for the circle of inversion: OA times OA′ is equal to r². Noting that the radius of the green circle is one, the distance to A′ is simply the radius R of the green circle plus the diameter D of the red circle, and both of these are equal to one. So we get the equation OA times (1 + 1) equals 1, or OA equals a half. The diameter of the small circle is 1 minus OA, and the radius is half that distance, so R is equal to a quarter.

In the next video in this series we will reveal some of the more advanced properties of circle inversion by using it to solve some more advanced problems. I hope you can have fun using this technique to solve some problems that you find. Until next time, thanks for watching.
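The inversion formula OA · OA′ = r² from the video takes only a few lines to implement. A minimal sketch (my own illustration, not from the video), which also checks the claim that a circle through the center of inversion maps to a straight line:

```python
import math

def invert_point(p, center=(0.0, 0.0), r=1.0):
    """Circle inversion: the image of p lies on the ray from the center through p,
    at the distance satisfying |OP| * |OP'| = r**2 (p must not be the center)."""
    dx, dy = p[0] - center[0], p[1] - center[1]
    d2 = dx * dx + dy * dy
    s = r * r / d2
    return (center[0] + dx * s, center[1] + dy * s)

# The circle |z - 0.5| = 0.5 passes through the center of inversion (the origin),
# so its image under inversion in the unit circle is the vertical line x = 1:
points = [(0.5 + 0.5 * math.cos(t), 0.5 * math.sin(t)) for t in (0.3, 1.0, 2.0)]
images = [invert_point(p) for p in points]   # every image has x-coordinate 1
```

Sampling more parameter values t gives more image points, all on the same vertical line, which is the straight-line picture the video uses to simplify the tangent-circles problem.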
3917
https://www.quora.com/How-do-I-find-the-intersection-point-of-a-line-y-mx+c-and-a-line-parallel-to-the-y-axis-They-both-will-definitely-intersect-slope-of-line-parallel-to-y-axis-is-%E2%88%9E
How to find the intersection point of a line y = mx + c and a line parallel to the y-axis - Quora

How do I find the intersection point of a line y = mx + c and a line parallel to the y-axis? They both will definitely intersect (the slope of a line parallel to the y-axis is ∞).

Assistant Bot · 1y
To find the intersection point of a line defined by the equation y = mx + c and a line parallel to the y-axis, you need to consider the equation of the vertical line. A vertical line parallel to the y-axis can be expressed in the form x = k, where k is a constant that represents the x-coordinate of the vertical line.

Steps to find the intersection point:
1. Identify the equations: the equation of the first line is y = mx + c, and the equation of the vertical line is x = k.
2. Substitute the x-coordinate into the first line's equation: to find the y-coordinate of the intersection point, substitute x = k into the equation of the line: y = m(k) + c.
3. Write the intersection point: the intersection point can be expressed as (k, mk + c).

Conclusion: the intersection point of the line y = mx + c and the vertical line x = k is given by the coordinates (k, mk + c). This point represents where the two lines meet in the Cartesian plane.

Rajshankar Maheswaran · Studied at St. Paul's English School, Bangalore (Graduated 2019) · 7y
Let there be an arbitrary line y = mx + c, and let it intersect a line parallel to the y-axis whose equation is x = a. If we rearrange the first equation, we get x = (y − c)/m. Solving these two equations: (y − c)/m = a, so y = ma + c. The point at which the lines will intersect is (a, ma + c).
Maheshkumar Dodiya · Supervisor Instructor of Mathematics & Engg Drawing at I.T.I. Tarapur (2014–present) · 7y
Any line parallel to the y-axis is of the form x = a (where a is the distance of the line from the y-axis). The given line is y = mx + c. To find the intersection of these two lines, we use the method of substitution: putting x = a into y = mx + c gives y = ma + c. Therefore the intersection point has coordinates (a, ma + c).
Ritesh Singh · Studied at National Institute of Technology, Tiruchirappalli · 7y
Originally Answered: How can I find the point of intersection of the line y = mx + c and a line parallel to the y-axis?
A line parallel to the y-axis is given by the equation x = a, where a is any real number. So at the point of intersection of y = mx + c and x = a, y will be ma + c; hence you get (a, ma + c) as the point of intersection.

Aakash · Studied PGDM in Agribusiness Management (Graduated 2020) · 7y
A line parallel to the y-axis has the equation x = a, where a is a real number. Putting x = a into y = mx + c gives you y; i.e., the point of intersection is (a, ma + c).

Gary Ward · MaEd in Education & Mathematics, Austin Peay State University (Graduated 1997) · 1y
Related: How do I calculate the x-value of the intersection of two lines given only their y-values (at point x0) and their slopes?
You have a slope, m_1 and m_2, and a y-intercept, (0, y_1) and (0, y_2), for each line. You have two equations in the form y = mx + c, or in general y = m_1 x + y_1 and y = m_2 x + y_2. Since both lines must have the same y-value at their intersection, you can set the right-hand sides equal to each other and solve for x:

m_1 x + y_1 = m_2 x + y_2 → m_1 x − m_2 x = y_2 − y_1 → x = (y_2 − y_1)/(m_1 − m_2)

Example: given m_1 = −2 with (0, 3) and m_2 = 4 with (0, −5), i.e. y = −2x + 3 and y = 4x − 5.
Done the usual way: −2x + 3 = 4x − 5 → 8 = 6x → x = 4/3.
Using the formula: x = (y_2 − y_1)/(m_1 − m_2) = ((−5) − 3)/((−2) − 4) = −8/−6 = 4/3.
Then y = −2(4/3) + 3 = 1/3 and y = 4(4/3) − 5 = 1/3, so (4/3, 1/3) is the point of intersection.

Robert Paxson · BSME in Mechanical Engineering, Lehigh University (Graduated 1983) · 3y
Related: If the line y = mx + c passes through the point of intersection of the lines x − 2y = −1 and y = 2, and is perpendicular to the line y = 4x + 8, what are the values of m and c?
The given line y = 4x + 8 is in slope-intercept form, so we see that its slope is m_g = 4. Since the required line is perpendicular to the given line, its slope m is the negative reciprocal of m_g: m = −1/m_g = −1/4. The point of intersection of the other two given lines is found where their ordinates are equal: with y = 2, the equation x − 2(2) = −1 gives x = 3, so the intersection point is (3, 2). Substituting this point into y = mx + c, i.e. y = −(1/4)x + c, we can solve for the y-intercept c: 2 = −(1/4)(3) + c, so c = 2 + 3/4 = 11/4. The required line is y = −(1/4)x + 11/4, with m = −1/4 and c = 11/4. A plot looks like this:

Rankine Gajendra · Profession at Mathematics · 7y
Related: How do you find the slope of a line parallel to the y-axis?
Parallel lines have the same slope. Vertical lines have an undefined slope. The y-axis is vertical.
A line that is parallel to the y-axis also has to be vertical, so its slope is undefined. Happy learning!

Adrian Giles · Studied Physics & Mathematics (Graduated 1985) · 1y
Related: How do I calculate the x-value of the intersection of two lines given only their y-values (at point x0) and their slopes?
Well, thanks for the human question (these bot questions are taking us all to an asylum). I don't really know the answer, but perhaps it's that math is a method as much as it is knowledge, by which I mean one can have confidence in getting at least somewhere along the line between the question and the answer, and eventually all the way, if it is a sensible question and there's the will to get there (bots lack this). Like MFK says, "you've got to want to" (if you want to play guitar like he does). Ok, enough dribble. Let's introduce our two lines: y₁ = mx + b and y₂ = nx + c. So far so good, no major dramas.
So we're given only their y-values at x₀ and their slopes. Just thinking out loud, that specifies a line, since the additive constants b and c are, for each line respectively, the y-values at x = 0, and we know their slopes, so effectively the question you raise is obviously resolvable: since y(x₀) = mx₀ + b and y(0) = b, we can find, say, x₀ = (y(x₀) − y(0))/m, if that was one of the goals. The question asks for the x-value of the intersection: so we equate y₁ and y₂, and a hop, skip and a jump later I find it is x = (c − b)/(m − n). EDIT: I know we're not quite there, but I sort of ran out of steam; all that's left is to express c and b in terms of the known things. I don't wish to seem like I'm making excuses, but ever since I accepted an update to the Quora app last week it's been playing up, to put it mildly. Just a bit earlier I'd spent a few hours justifying how I find bot questions ideal not so much to answer as to use as a prompt to say what you want to say, and, at the risk of being obscure, there's stuff I've been trying and trying, but it has almost without exception ended up relegated to drafts which one day may be "of interest"; however, people have already demonstrated that any more than one "commandment" from someone and they're lost. So, as fate would have it, when I was only a few words away from completing the perfect introduction to my special subject area, the cursor froze and then the whole page disappeared. It must still be there, or retrievable, I tell myself, but I'm not ready yet if it isn't. EDIT: This is where I've gone after my reply in the comments. I opened it up, and only when I selected edit did the above section show up, which had seemed like it was gone… nuff said.

Ok, a crash course in (straight) lines in the plane (i.e. the 2D world). It's important to state up front that we mean lines in the same plane. Convince yourself that any two lines in the plane will intersect at one point P (P will have two coordinates, since it's a 2D space, but it's called "one point"); the exception is if the lines are parallel (same slope). A general line in the plane is completely specified by two free parameters, usually (but not necessarily) the slope m and the y-intercept b: y = y(x) = mx + b. The y(x) is just a reminder, if you like, that y is a function of x, while m and b are "parameters", which I suppose might as well be thought of as as-yet-unspecified constants. So m is the slope, and the y-intercept means y(0), which I hope you can see is b: if y = mx + b then y(0) = b, since y(0) means the value of y when x = 0, which, convince yourself, is the equation of the y-axis. Clue: tell yourself it's conceptually almost trivial, but if it's a struggle, just understand that everyone can get overloaded with unfamiliar nomenclature. Some of us are just incredibly slow, but in my case I claim to be deadly to make up for the slow. It wasn't until I was probably over 40 that I realised sin θ and cos θ could be introduced to a child in five minutes, though if he was Terence Tao or someone like that he'd already know, worked that one out in the womb or something. Buddy, I'm gonna stop there; this software bug has got me rattled. If there's something else (like you would like your question answered!) I'll endeavour to look in within 24 hours; I can't recall right now, unless I leave the page, whether you had time constraints, so if that's all right, catch me again in about 20 hours' time. EDIT: I just properly read that you're writing code for a trading indicator. It's funny in a sense, because it would appear that I lost all my savings, all my redraw limit, and about US$200K in earnings to an online FOREX scam last year. Basically I'm not accepting it was a scam, but my bank spent five weeks doing nothing but concluding that: "Mr Giles, unfortunately the price of Bitcoin (or something) went up and your earnings are locked in the blockchain." Well, if anyone understands that this can and does happen, and can convince someone with the funds to release it, I'd only be too happy to honour any reasonable promissory notes made to people who can resurrect my funds. And the other point is, would I know your coding language? Probably not, since I have only about an hour's experience with machine code (push, pop, etc.), and that's levels below you, I would be pretty sure. I was just going to say that Excel, as a living spreadsheet, would I think suit your problem, since the coding's almost been done and you can write logical statements, e.g. IF cell A ≥ cell REF then (…) else (…); not really having a complete sense of what's available, I'd be inclined to use Excel, but like I said, it's probably better to stick to what I'm good at. It has crossed my mind to contemplate some kind of collab, but, you know.

Dave Benson · trying to make maths easy · Mar 31
Related: What is the coordinate of the point where the line y = 3x − 8 crosses the y-axis?
y = 3x − 8: plug in x = 0 to get y = −8 ⟹ coordinate (0, −8). Extra: the equation is of the form y = mx + c, where c is the y-intercept (= −8) and m is the slope.

Carter McClung · High School Geometry Teacher · Upvoted by David Joyce, Ph.D. Mathematics, University of Pennsylvania (1979) · 9y
Related: How do you show that the slope of a line parallel to the y-axis is infinity?
One way of defining parallel is that parallel lines have the same slope, so we can find the slope of a parallel line by finding the slope of the original line. What's the slope of the y-axis? Pick two points on it; it doesn't matter which, maybe (0, 0) and (0, 10). Calculate the slope:

m = (10 − 0)/(0 − 0) = 10/0

But what is 10/0? It's not defined. Any parallel line will also have undefined slope. Thinking about it as "infinite" slope, while technically incorrect, does capture a bit of the intuition here: its slope is basically infinity and negative infinity simultaneously.

Comet · Studied Philosophy & Art at The School of Higher Neantical Nillity · Updated 4y
Related: Can two parallel lines intersect?
The final strip illustrates an axiom in projective geometry: any two lines in a plane have at least one point of the plane (which may be the point at infinity) in common. (Footnote: Projective Geometry)

Willian Angelo · B.A. in Computer Engineering, University of São Paulo (USP) (Graduated 2019) · 7y
Related: How do I calculate where the curves y = x² and x² + y² = 1 intersect?
You can just substitute x² in the second equation by y: y + y² = 1, i.e. y² + y − 1 = 0, so y = (−1 ± √5)/2. Then you'll have to find x. For y = (−1 + √5)/2 you'll have x² = (−1 + √5)/2, so x = ±√((√5 − 1)/2) (two solutions); for y = (−1 − √5)/2 there is no real solution for x. So the intersection points are (+√((√5 − 1)/2), (−1 + √5)/2) and (−√((√5 − 1)/2), (−1 + √5)/2). (Source of picture: www.wolframalpha.com)
3918
https://stackoverflow.com/questions/12047883/why-do-i-get-different-answers-for-these-two-algorithms-in-r
Why do I get different answers for these two algorithms in R? - Stack Overflow
Why do I get different answers for these two algorithms in R?

Asked Aug 21, 2012 · Modified Jan 22, 2015 · Viewed 439 times · Part of R Language Collective

This is quite literally the first problem in Project Euler. I created these two algorithms to solve it, but they each yield different answers. Basically, the job is to write a program that sums all the multiples of 3 and 5 that are under 1000.

Here is the correct one:

```r
divisors <- 0
for (i in 1:999) {
  if ((i %% 3 == 0) || (i %% 5 == 0)) {
    divisors <- divisors + i
  }
}
```

The answer it yields is 233168.

Here is the wrong one:

```r
divisors <- 0
for (i in 1:999) {
  if (i %% 3 == 0) {
    divisors <- divisors + i
  }
  if (i %% 5 == 0) {
    divisors <- divisors + i
  }
}
```

This gives the answer 266333.

Can anyone tell me why these two give different answers? The first is correct, and obviously the simpler solution. But I want to know why the second one isn't correct.

EDIT: fudged the second answer on accident.

asked Aug 21, 2012 by user1613119

Comments:

- Note that while the first way gets you the right answer, it isn't in very good R style. It would be much better to take advantage of vectorization. You can do this problem in one line in R. – Dason, Aug 21, 2012
- Exactly: for example, `sum((1:1000 %% 3 == 0) | (1:1000 %% 5 == 0))` does it in one line. – David Robinson, Aug 21, 2012
- I would also suggest preallocating your vectors before you "stuff" them. On the subject, I recommend reading The R Inferno by P. Burns. – Roman Luštrik, Aug 21, 2012

3 Answers

Answer (6 votes) by paxdiablo, Aug 21, 2012:

Because multiples of 15 will add `i` once in the first code sample and twice in the second code sample. Multiples of 15 are multiples of both 3 and 5. To make them functionally identical, the second would have to be something like:

```r
divisors <- 0
for (i in 1:999) {
  if (i %% 3 == 0) {
    divisors <- divisors + i
  } else {
    if (i %% 5 == 0) {
      divisors <- divisors + i
    }
  }
}
```

But, to be honest, your first sample seems far more logical to me.

As an aside (and moot now that you've edited it), I'm also guessing that your second output value of 26633 was a typo. Unless R wraps integers around at some point, I'd expect it to be more than the first example (such as the value 266333, which I get from a similar C program, so I'm assuming you accidentally left off a 3).

Answer (1 vote) by Blender, Aug 21, 2012:

I don't know R very well, but right off the bat I see a potential problem. In your first code block, the if statement triggers if either of the conditions is true. Your second block runs the body twice if both conditions are met. Consider the number 15: in your first code block, the if statement will trigger once, but in the second, both if statements will trigger, which is probably not what you want.

Answer (1 vote) by Alex Brandt, Apr 2, 2013:

I can tell you exactly why that's incorrect, conceptually.

Take the summation of all integers to 333 and multiply it by 3; you'll get x.
Take the summation of all integers to 199 and multiply it by 5; you'll get y.
Take the summation of all integers to 66 and multiply it by 15; you'll get z.

x + y = 266333
x + y - z = 233168

15 is divisible by both 3 and 5. You've counted all multiples of 15 twice.
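Putting the comments' vectorization advice and the accepted explanation together, an inclusion-exclusion sketch in R (not from the original thread) reproduces both totals:

```r
# Vectorized sums over 1..999, per the comments above
n <- 1:999
correct <- sum(n[n %% 3 == 0 | n %% 5 == 0])          # each multiple counted once
double  <- sum(n[n %% 3 == 0]) + sum(n[n %% 5 == 0])  # multiples of 15 counted twice
overlap <- sum(n[n %% 15 == 0])                       # the double-counted part

correct            # 233168
double             # 266333
double - overlap   # 233168: inclusion-exclusion recovers the correct sum
```

Subtracting the multiples of 15 once converts the second (wrong) algorithm's total into the first algorithm's total, which is exactly the bug the answers describe.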
https://ntrs.nasa.gov/api/citations/20170008827/downloads/20170008827.pdf
American Institute of Aeronautics and Astronautics

Pressurization of a Flightweight, Liquid Hydrogen Tank: Evaporation & Condensation at a Liquid/Vapor Interface

Mark E. M. Stewart¹
VPL LLC at NASA Glenn Research Center, Cleveland, Ohio, 44135, USA
¹ Senior Research Engineer, MS VPL-3, AIAA Member

This paper presents an analysis and simulation of evaporation and condensation at a motionless liquid/vapor interface. A 1-D model equation, emphasizing heat and mass transfer at the interface, is solved in two ways and incorporated into a subgrid interface model within a CFD simulation. Simulation predictions are compared with experimental data from the CPST Engineering Design Unit tank, a cryogenic fluid management test tank in 1-g. The numerical challenge here is the physics of the liquid/vapor interface: pressurizing the ullage heats it by several degrees and sets up an interfacial temperature gradient that transfers heat to the liquid phase; the rate-limiting step of condensation is heat conducted through the liquid and vapor. This physics occurs in thin thermal layers, O(1 mm), on either side of the interface, which are resolved by the subgrid interface model. An accommodation coefficient of 1.0 is used in the simulations, which is consistent with theory and measurements. This model is predictive of evaporation/condensation rates; that is, there is no parameter tuning.

Nomenclature

A = interface area, m²
Cp = specific heat at constant pressure (fluid), J/kg-K
CPST = Cryogenic Propellant Storage and Transfer program
EDU = Engineering Design Unit
g = acceleration due to gravity, m/s²
GH2, LH2 = hydrogen gas, liquid hydrogen
HKS = Hertz-Knudsen-Schrage equation, Eq. 4
ΔHvap(T) = latent heat of vaporization, kJ/kg
k = thermal conductivity, W/m-K
m = mass of molecule
ṁflux = HKS interfacial mass transfer flux due to evaporation (>0) and condensation (<0), kg/m²-s
ṁinterface = mass equation source term and volumetric mass addition rate at interface, kg/m³-s
MLI = multi-layer insulation
MW = molecular weight, kg/kmol
NIST = National Institute of Standards and Technology
p, psat(T) = static pressure, Pa; saturation pressure at temperature T, Pa
Q = instantaneous heat addition in Eq. 3, m-K
q̇flux = interfacial heat source due to latent heat of evaporation (>0) and condensation (<0), W/m²
q̇interface = heat equation source term and volumetric heat addition rate at interface, W/m³
q̇vap = volumetric heat addition rate in the vapor due to work done, W/m³
Ru, R = universal gas constant, J/kmol-K; specific gas constant, J/kg-K
= region of application of subgrid model
S = entropy of the gas, J/kg-K
t, tj = time; time at time step j in series solution, s
T, Tsat(p) = static temperature, K; saturation temperature at pressure p, K

Figure 1: CPST EDU tank during fabrication at Marshall Space Flight Center. External view of tank without MLI or foam.
Tinterface = temperature of the liquid/vapor interface, K
ΔTcompress = temperature change of a gas due to isentropic compression/expansion between 1 & 2 atm, K
u, v, w = velocity components for molecules, m/s
UDF = user defined function (ANSYS Fluent)
V = volume, m³
VOF = volume of fluid numerical method
W = work done on a compressible gas, J
x = coordinate normal to interface, m
α = heat equation parameter; characterizes sharpness/persistence of temperature gradient, m²/s
χ = subgrid model mass and energy scaling factors between flux and equation source terms, unitless
γ = ratio of specific heats, unitless
ρ = density, kg/m³
Σ = summation over cells in region of application of subgrid model
σcond, σevap = mass accommodation coefficients for condensation and evaporation, unitless, [0, 1]
liq, vap = liquid, vapor fraction at a point, unitless, [0, 1]
eff = source term actually added to equation in numerical scheme
interface = subscript indicating the liquid/vapor interface
liq, vap, solid = subscripts indicating liquid, vapor, and solid phases
sat = saturation conditions

I. Introduction

NASA is interested in long-duration, in-space storage of cryogenic propellants to support future exploration missions, including upper stages and potentially propellant depots. Cryogenic propellants promise higher specific impulse than storable hypergolic fuels, but storage for long-duration missions has yet to be demonstrated. Improving the capabilities of computational tools to predict fluid dynamic and thermodynamic behavior in cryogenic propellant tanks, under settled and unsettled conditions, is research supported under NASA's Evolvable Cryogenics (eCryo) project. A challenging problem for these multiphase computational methods is predicting mass transfer (evaporation and condensation) at the liquid/vapor interface; the symptoms include poor prediction of pressure in some cases.
A typical approach is to combine an interface tracking method (which captures interface position and motion) with a mass transfer equation to predict evaporation/condensation. Typically, the interface is smeared over several grid cells, and hence smaller length scales are ignored near the interface. This problem is challenging because it involves at least three problems, each from a different discipline: the statistical kinetics of evaporation and condensation at the phase interface, heat transfer in the adjacent phases, and numerical methods that must capture all of this physics. In this paper, heat transfer near the interface is demonstrated to be a rate-limiting step for evaporation/condensation at the interface, particularly when the fluids have poor thermal conductivity. For small pressure changes, the Hertz-Knudsen-Schrage equation predicts high mass transfer rates and huge heat generation/absorption rates. But, in many situations, the adjacent vapor and liquid cannot conduct this heat. The result is thin, O(1 mm), thermal layers on both sides of the interface. We would not be surprised at a thermal boundary layer at a wall; liquid/vapor interfaces can have a pair of thermal layers with a heat source/sink in between! This behavior is apparent in the engineering problem of interest here: pressurization of a liquid hydrogen tank. The EDU (Engineering Design Unit), Figure 1, is a cryogenic fluid management test article developed as part of the CPST program, a precursor to the eCryo program. Tests at Marshall Space Flight Center in 2014 and 2015 included pressurizing liquid cryogens (principally hydrogen) with various gases using unsubmerged and submerged diffusers, plus tank drainage with pressure control, all in 1-g. This interfacial behavior is investigated through both exact and numerical solutions for heat and mass transfer near the liquid/vapor interface.
Section II shows how interfacial temperature jumps can occur through (de)compression, while Section III discusses the liquid/vapor interface and evaporation/condensation. Section IV establishes a model of heat conduction and thermal layers using the one-dimensional heat equation through the interface and nearby vapor/liquid; two different solution methods provide validation. Section V explains a numerical method where the heat equation model, plus solution method, is used as a subgrid model within an ANSYS Fluent simulation. Section VI explains the engineering experiment, geometry, and the grid used. Section VII compares Fluent simulation results with EDU tank pressurization and drainage experimental results.

II. Temperature Jumps at the Interface: When Thermal Gradients Become Sharp

Interfacial temperature jumps (large gradients) can occur for a simple reason: pressurization of the vapor phase does work, W, on the gas (Eq. 1) and raises its temperature everywhere, including up to the interface [3, p. 6]; the incompressible liquid has no volume change and no work done on it. The resulting temperature gradient sets up a heat flow across the interface, as shown in Figure 2. The opposite also holds: depressurization will lower the gas temperature and create the reverse gradient. Pop the cap off a soft drink, and the pressure drop yields a temperature drop in the vapor phase.

$$ W = -\int p \, dV \qquad (1) $$

Temperature increases for isentropic compression, ΔTcompress, from 1 to 2 atmospheres are shown in Table 1 for several gases; hydrogen has a modest temperature jump of 4.44 K, while water's is 70 K between 1 and 2 atm. The persistence in time of this temperature jump is important, and depends on how quickly heat is conducted. This transient thermal conduction is captured in the one-dimensional heat equation, Eq. 2, where $\dot{q}_{vap}$ represents the work done on the vapor by compression/decompression. Section IV-A shows how a series of exact solutions, Eq. 3, gives the time development of the temperature gradient, plus its thickness. Section IV-B explains a numerical solution method.

$$ \frac{\partial T}{\partial t} - \alpha \frac{\partial^2 T}{\partial x^2} = \frac{1}{\rho C_p}\,\dot{q}_{vap}, \qquad \text{where } \alpha = \frac{k}{C_p\, \rho} \qquad (2) $$

Table 1 gives values of α for common gases; smaller α values indicate sharper, more persistent temperature gradients.

$$ T(x,t) = T_\infty + \frac{Q}{(4\pi\alpha t)^{1/2}}\, e^{-x^2/4\alpha t} \qquad (3) $$

where $\rho C_p Q$ is the heat per unit area added instantaneously at x = t = 0.

Figure 3 demonstrates the persistence of a temperature jump in hydrogen liquid/vapor after aggressive pressurization in the first 10 seconds. In the minute after pressurization, the width of the temperature jump grows from a few millimeters to several centimeters. After several minutes, the temperature profile approaches the constant-slope, steady-state profile. With slow vapor pressurization, temperature jumps are virtually non-existent. Stewart shows that the small, constant-slope temperature gradients of self-pressurization can be captured with a fine grid at a fixed interface.

Figure 2's piston, as in an internal combustion engine, is only one method of compression/decompression and one cause of interfacial temperature gradients. Compression, with increasing gas temperature, occurs similarly in compressors within jet engines and air conditioners, in boiling within a closed tank, with change in altitude, or when flowing gas through a diffuser into a propellant tank, as considered for the EDU tank in Section VI. Interfacial temperature gradients are also caused by heat flows within a tank.

Figure 2: Pressurization of gas leads to a temperature jump. Work is done on the compressible vapor phase, which raises its temperature right up to the interface. No compression or work is done on the incompressible liquid phase. Hence pressurization leads to a temperature jump; depressurization works in the opposite manner. 'Incompressible liquid' is used in the fluid mechanics sense of negligible compression; most liquids undergo small compression compared to gases.
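The size of these jumps follows directly from the isentropic relation and the heat equation. As a rough check (assuming ideal-gas behavior with γ = 1.4 for hydrogen; Table 1's values come from NIST real-fluid data), pressurizing saturated hydrogen vapor from 1 to 2 atm gives

$$ T_2 = T_1 \left(\frac{p_2}{p_1}\right)^{(\gamma-1)/\gamma} = 20.277\ \mathrm{K} \times 2^{0.286} \approx 24.72\ \mathrm{K}, $$

so ΔTcompress ≈ 4.44 K, matching Table 1. Similarly, Eq. 3 implies a thermal-layer thickness of order $\sqrt{4\alpha t}$: for liquid parahydrogen, with $\alpha \approx 2.8\times 10^{-6}\ \mathrm{m^2/s}$ from Table 1, this is about 3 mm one second after a disturbance, consistent with the thin layers of Figure 3.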
Figure 3: Temperature profiles through a hydrogen liquid/vapor interface at various times (10 s to 3600 s) after pressurization from psat(21 K) = 101,325 Pa to 202,650 Pa in 10 seconds. Includes interface condensation. Liquid lies at negative interface distances.

1. Background

A thermal boundary layer at an interface between a solid wall and a moving fluid is a classical topic in heat transfer, fluid dynamics, and CFD. Heat transfer between a wall and fluid is important in many areas, including refrigeration, heat exchangers, and turbine blade design. Heat fluxes can be estimated from the temperature jump and a heat transfer coefficient, but in CFD the heat flux is calculated from the temperature profile's slope at the wall, $-k\, \partial T/\partial x$; fine boundary layer grids are needed to reduce errors.

At liquid/vapor interfaces, interest in temperature jumps is less common, but still evident in several areas, particularly investigations of water evaporation/condensation. After pressurization of a water surface, Popov et al. experimentally measured temperature gradients with a fine, 25 μm, thermocouple probe. Badam et al. experimentally investigated temperature jumps due to a vapor-side heat flux.

Heat transfer during condensation at a liquid/vapor interface is important in cloud formation, as the nucleation and growth of cloud droplets is relevant to critical climate change questions about cloud albedo. The Mason equation [5, p. 122] models growth of water droplets where diffusion of mass is balanced by diffusion of latent heat through the gas surrounding the droplet.

Evaporation and condensation can be relevant in acoustic problems.
While measuring the sound speed of various gases using sound waves, investigators noted that for strong sound waves the pressure can exceed saturation pressure during part of the oscillation and fall below it during the remainder. This leads to momentary, alternating condensation and evaporation, which complicated sound speed measurements. However, it provides an opportunity to measure evaporation/condensation where there is reduced net heat flow.

III. Physics of the Fluid/Vapor Interface, Mass and Energy Transfer

The Knudsen or vapor layer, where gas behavior is dominated by interaction with the adjacent liquid (or solid), is thin: a few mean free path lengths thick, and much less than a micrometer (μm). This Knudsen layer is much thinner than the adjacent thermal boundary layers, and the gas is locally not in equilibrium in any layer.

The Schrage equation [12, p. 27] (the Hertz-Knudsen-Schrage equation) estimates the evaporation/condensation mass flux and is derived solely from statistical mechanics and the thermodynamic states of the liquid and vapor; heat transfer is not considered. Temperatures and pressures are assumed to be continuous everywhere; jumps are treated as steep gradients. Hence, in Eq. 4, Tliq = Tvap = Tinterface and pvap = pinterface.

Table 1: Condensation/evaporation properties of common fluids at 1 atmosphere saturated conditions. NIST data. ΔTcompress is isentropic.

              Latent Heat    Tsat      ΔTcompress    Vapor         Liquid        Vapor      Liquid
              ΔHvap (J/kg)   (K)       1→2 atm (K)   ΔHvap/Cp (K)  ΔHvap/Cp (K)  α (m²/s)   α (m²/s)
Helium        20,752         4.2304    1.342         2.3           3.9           5.95E-05   2.82E-05
Parahydrogen  445,440        20.277    4.441         36.4          46.1          1.04E-03   2.81E-06
Nitrogen      199,178        77.355    16.942        177.2         97.6          1.45E-03   8.86E-05
Oxygen        213,050        90.188    19.752        219.5         125.4         1.93E-03   7.82E-05
Methane       510,830        111.67    20.433        230.3         146.7         2.88E-03   1.25E-04
Water         2,256,440      373.12    70.019        1084.9        535.3         2.02E-02   1.68E-04

$$ \dot{m}_{flux}(p_{interface}, T_{interface}) = \frac{2}{2-\sigma_{cond}} \sqrt{\frac{MW}{2\pi R_u}} \left( \sigma_{evap}\, \frac{p_{sat}(T_{liq})}{\sqrt{T_{liq}}} - \sigma_{cond}\, \frac{p_{vap}}{\sqrt{T_{vap}}} \right) \qquad (4) $$

Mathematically, for the HKS equation, σevap = σcond must hold for equilibrium to be satisfied at psat(Tinterface). Accompanying evaporation (condensation) at the interface is the absorption (release) of large amounts of heat, Eq. 5. Table 1 gives the heat of vaporization, ΔHvap(Tsat), for some common fluids. Note that ΔHvap/Cp compares the heat of vaporization to the specific heat (the heat required to raise the temperature), and it is a large amount of heat, particularly for water! The corresponding mass transfer is small. In this work, heat release is assumed to occur within the Knudsen layer, while outside it, heat conduction is satisfied.

$$ \dot{q}_{flux} = \dot{m}_{flux}(p_{interface}, T_{interface})\, \Delta H_{vap}(T_{interface}) \qquad (5) $$

1. Background

Schrage gives a good review of interface physics up to 1950. Considerable vaporization/condensation research has been done on liquid/solid interfaces. The Hertz-Knudsen equation gives the time rate of gas molecules sticking to a solid surface as the product of a "sticking" coefficient, σ (mass accommodation coefficient), and the flux of gas molecules impinging on the surface, ṁflux,vap−solid. Schrage extended this to evaporation/condensation at liquid/vapor interfaces as two competing fluxes, Eq. 6. Each mass flux, Eq. 7, is the expected number of molecules (in a Maxwellian velocity distribution, f) impinging on the interface. Pressure is the corresponding integral for momentum, so it is not surprising that the mass flux is a function of gas pressure and temperature. The two fluxes balance at saturation conditions, so the net flux is the deviation from saturation conditions, Eq. 4.
$$ \dot{m}_{flux} = \sigma_{evap}\, \dot{m}_{flux}^{\,liq \to vap} - \sigma_{cond}\, \dot{m}_{flux}^{\,vap \to liq} \qquad (6) $$

$$ \dot{m}_{flux}^{\,liq \to vap} = \int_{-\infty}^{\infty}\int_{-\infty}^{\infty}\int_{0}^{\infty} m\, u\, f(p, T, \mathbf{u})\, du\, dv\, dw = \sqrt{\frac{MW}{2\pi R_u}}\, \frac{p}{\sqrt{T}} \qquad (7) $$

The Maxwell distribution, f, is a normal distribution of molecular velocities, and, strictly speaking, it only applies to equilibrium conditions; temperature gradients are non-equilibrium. The remaining question is the value of the accommodation coefficients, σevap and σcond: what fraction of impinging molecules actually complete the phase transfer and stick?

2. Accommodation Coefficients

For sublimation of solids, a number of experiments indicate mass accommodation coefficient, σ, values near 1 [12, p. 30]. Early experiments measuring these coefficients for liquids were less successful [12, p. 40]. Modern computers and molecular dynamics simulations were not available to early investigators; these methods go beyond a Maxwellian distribution and a "sticking" coefficient to model Coulombic and other force interactions at atoms and groups within molecules as their trajectories are followed. Molecular dynamics computer simulations of the air/water interface give mass and thermal accommodation coefficients of 0.99 and 1.0, respectively, at 300 K. For methanol and argon, condensation is complete capture, with total condensation σ of ~0.2 and ~0.8, respectively. Experimentally, the thermal accommodation coefficient was shown to be 0.84 ± 0.05 in argon at 271 K using sound resonance in a spherical chamber. The experiment creates strong sound waves with a pressure oscillation both above and below saturation pressure, so evaporation and condensation alternate with the pressure wave; a liquid layer can exist on the walls but without a net heat flow.

IV. A Model for Interface Mass and Energy Transfer

Any successful mathematical model represents the dominant physical processes and includes the corresponding terms in the equations. Smaller effects, and their terms, can be excluded, with care.
Geometry, length, and time scales can be important. The dominant terms in the heat equation, Eq. 8, are heating of the fluid, $\rho C_p\, \partial T/\partial t$; thermal conduction through each phase, $-k\, \partial^2 T/\partial x^2$; the rate of work done on the gas as it (de)compresses, $\dot{q}_{vap}$; and latent heat absorbed or released at the interface, $\dot{q}_{interface}$. This equation can be derived from the Navier-Stokes energy equation when two processes are negligible: fluid motion and temperature variation in the plane of the interface.

$$ \rho C_p \frac{\partial T}{\partial t} - k \frac{\partial^2 T}{\partial x^2} = \dot{q}_{interface} + \dot{q}_{vap} \qquad (8) $$

Note that the fluid properties ρ, Cp, and k switch between liquid and vapor values at the interface. The model conserves not only energy but mass (Figure 4). The actual mass transfer at the interface, ṁflux in Eq. 4, is small, but it is included in each model through isentropic compression (an adiabatic, reversible process for a perfect gas), Eq. 9 [3, pp. 14, 21] with dS = 0.

$$ \frac{T_1}{T_0} = e^{dS/C_p} \left(\frac{V_0}{V_1}\right)^{\gamma-1} = e^{dS/C_p} \left(\frac{p_1}{p_0}\right)^{1-\frac{1}{\gamma}} \qquad (9) $$

Figure 4: Mass is conserved, and vapor temperature is predicted from the gas volume change in Eq. 9. The heat equation is solved in a direction normal to the interface. Fluid motion and temperature variations in the interface plane are assumed to be negligible.

Figure 5: Pressure evolution for both solution techniques (series of exact solutions vs. tridiagonal numerical solution).

A. Exact Solutions for Time Evolution of Interfacial Temperature Gradients

Analytic solutions to Eq. 8 are desirable because they give the interface profile regardless of how thin the thermal layers are; with numerical solutions, trial and error and possibly a very fine grid are required, so errors and uncertainty are possible. The heat equation, Eq. 2, has an exact solution, Eq. 3, for instantaneous heat addition at x = t = 0. A solution also exists for the two-phase heat equation, Eq. 8, with work done on the vapor and interfacial heat release, both time dependent. Temperature evolution can be predicted with the series solutions, Eq. 10 and 11. Each solution in the series corresponds to instantaneous heat addition at a time tj. Each phase has a separate solution, matched at the interface with latent heat release.

$$ T_{vap}(x,t) = T_{\infty}^{vap}(t) + \sum_{j=1,\; t_j \le t} \frac{Q_j^{vap}}{\left(4\pi\alpha_{vap}(t-t_j)\right)^{1/2}}\, e^{\frac{-x^2}{4\alpha_{vap}(t-t_j)}}, \qquad x \ge 0 \qquad (10) $$

$$ T_{liq}(x,t) = T_{-\infty}^{liq} + \sum_{j=1,\; t_j \le t} \frac{Q_j^{liq}}{\left(4\pi\alpha_{liq}(t-t_j)\right)^{1/2}}\, e^{\frac{-x^2}{4\alpha_{liq}(t-t_j)}}, \qquad x \le 0 \qquad (11) $$

The liquid temperature far from the interface, $T_{-\infty}^{liq}$, is assumed constant, and the gas temperature far from the interface, $T_{\infty}^{vap}(t)$, is calculated from isentropic compression, Eq. 9 and Figure 4, based on a specified compression rate. Matching the solutions at the interface is done carefully, as the HKS equation is very sensitive to pressure and temperature variations. At the interface, an iterative method is used to satisfy temperature continuity and energy conservation, Eq. 12, between the two series solutions at each time tj.

$$ \left(-k_{liq}\frac{dT}{dx}\right)_{interface-liq} - \left(-k_{vap}\frac{dT}{dx}\right)_{interface-vap} = \dot{q}_{flux} = \dot{m}_{flux}(p_{interface}, T_{interface})\, \Delta H_{vap}(T_{interface}) \qquad (12) $$

B. Numerical Solutions for Time Evolution of Interfacial Temperature Gradients

Eq. 8 can also be solved with a numerical method. In particular, a discrete, second-order approximation to Eq. 8 can be solved with the Thomas algorithm for a tridiagonal matrix. Typically, 10³ points with a uniform grid spacing of Δx = 10⁻⁵ m and a time step of Δt = 10⁻⁴ s are used. Again, the interface conditions must be solved carefully. At the interface, a sub-iteration is used to find a consistent interface temperature, Tinterface, pressure, pinterface, HKS mass flux, ṁflux, and heat generation rate, q̇flux, at each time step, Eq. 12.
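The tridiagonal structure arises from central differencing in space. In one simple variant (implicit backward Euler in time; the paper's second-order scheme may differ, and the interface source treatment uses the sub-iteration described above, not shown), each interior node i satisfies

$$ \rho C_p \frac{T_i^{n+1} - T_i^n}{\Delta t} - k\, \frac{T_{i+1}^{n+1} - 2T_i^{n+1} + T_{i-1}^{n+1}}{\Delta x^2} = \dot{q}_i, $$

which rearranges to the tridiagonal system the Thomas algorithm solves:

$$ -r\, T_{i-1}^{n+1} + (1 + 2r)\, T_i^{n+1} - r\, T_{i+1}^{n+1} = T_i^n + \frac{\Delta t}{\rho C_p}\, \dot{q}_i, \qquad r = \frac{\alpha\, \Delta t}{\Delta x^2}. $$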
Far from the interface, at the ends of the 1-D grid, the boundary conditions are specified heat flux (temperature gradient); in this simulation, they are taken as zero. Verification of numerical methods is enhanced when two different numerical methods can be applied to the same physical problem. In this case, the results compare well, as shown in Figure 5.

V. Numerical Methods

Physics can be forgiving of numerical methods, and sometimes it is not. Shocks can be smeared over several grid cells and still yield good transonic lift predictions. Airfoil boundary layers can be ignored (Euler equations) and lift, and drag due to lift, can still be predicted. Yet try to predict heat transfer—on a turbine blade, for example—and hard work is required to get results within 10%. Why? Pressure is constant across the boundary layer, which is numerically trivial, while heat transfer depends on the derivative of the temperature profile through the boundary layer. Physics is forgiving in the lift calculation but unforgiving in the heat transfer one. Here, the detailed temperature profile cannot be smeared over large grid cells—much higher spatial resolution is required to resolve heat transfer and to predict evaporation/condensation and the resulting pressure changes. Consequently, the numerical methods used here carefully resolve the temperature profiles at the liquid/vapor interface with a subgrid model.

A. Numerical Scheme

ANSYS Fluent version 16.0 is used to solve the thermal equations in the solid walls coupled to the thermal/fluid equations in the fluid region (two-dimensional, axisymmetric, Navier-Stokes equations). The fluid flow is modeled as laminar. This transient simulation is second-order in space and time; the time step is 0.0005 to 0.001 s. Typically, in multiphase simulations, the VOF method has a single, combined energy equation for both phases of the fluid.
Consequently, heat exchange between the phases amounts to adding/subtracting latent heat, yet sharp temperature gradients must be captured and resolved on a coarse grid. Here, two energy equations are used in the fluid equations, one for each phase, and the subgrid model's heat equation provides fine grid resolution for the sharp gradients. In a single energy equation, numerical dissipation could smear the energy of the two phases, introducing additional numerical heat transfer. In ANSYS Fluent, the option providing multiple energy equations to clearly separate the energy of multiple phases is Eulerian Multiphase. This calculation uses the implicit Multifluid VOF sub-option.

B. UDFs for MLI, Pressurant, Tank Venting and Drainage

A Fluent simulation can be modified by including User Defined Functions (UDFs). This simulation makes extensive use of these subroutines. The tank exterior boundary condition includes the heat transfer effects of insulating foam, MLI, and radiation, all coded in a UDF. Pressurizing gas (hydrogen) is introduced into the tank through a pipe and diffuser; the inflow boundary condition is a UDF which prescribes mass flow according to a schedule. To maintain a fixed pressure during tank settling, a venting UDF removes mass, momentum, and energy from a small region of the tank. A similar UDF allows for tank drainage. The subgrid model uses UDFs to include source/sink terms in the mass, momentum, and liquid and vapor energy equations.

C. A Subgrid Model

To capture the thermal layers of the liquid/vapor interface when evaporation and condensation are present, a subgrid model is used. In theory, a fine grid could resolve the layers. In practice, the interface can move and curve, pressure increases, and temperature gradients can be large. Consequently, the grid generation issues alone would be substantial, even for an unstructured, adaptive grid.
The approach here is to use a subgrid model that can change position on the grid, so the model remains at the liquid/vapor interface even as it moves. The underlying Fluent grid remains unchanged. The subgrid model solves the HKS equation and the 1-D heat equation, Eq. 8, normal to the interface using a tridiagonal matrix scheme, as in Section IV-B. This heat equation's grid floats at the moving position $(x_{interface}, y_{interface})$, as shown in Figure 6. It approximates the interface within a larger region, denoted $\Omega$. The subgrid model consists of several UDFs coupling the 1-D heat equation and the Fluent calculations.

1. Couplings between subgrid model and fluid simulation

There are four couplings between the subgrid model and the fluid simulation. First, from the fluid simulation to the subgrid model, the interface position calculated in the fluid simulation must update the position in the subgrid model, Eq. 13.

Figure 6: Subgrid model moves with the liquid/vapor interface and solves the 1-D heat equation normal to it. Four couplings exist with the fluid simulation. Mass and energy are added to cells (blue) near the interface.

$$x_{interface} = \frac{\sum_{\Omega} x\,\varphi(1-\varphi)}{\sum_{\Omega} \varphi(1-\varphi)}, \qquad y_{interface} = \frac{\sum_{\Omega} y\,\varphi(1-\varphi)}{\sum_{\Omega} \varphi(1-\varphi)} \quad (13)$$

Second, the local fluid simulation pressure determines the pressure at the interface, $p_{interface}$, Eq. 14.

$$p_{interface} = \frac{\int_{\Omega} p\,\varphi_{vap}\,dV}{\int_{\Omega} \varphi_{vap}\,dV} \quad (14)$$

This pressurization determines the work done on the vapor, $\dot{q}_{vap}$, and the local temperature of the vapor, Eq. 9. Currently, isentropic compression is assumed, and for rapid pressurization this has proven to be a good assumption, as external heat flow into the gas is limited. Third, from the subgrid model to the fluid simulation, the mass flowing into (and out of) each phase, $\dot{m}_{flux}$, is calculated from the HKS and heat equations in the subgrid model. UDFs include these terms in the mass equation of the vapor/liquid phases in the fluid simulation.
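The first coupling, the $\varphi(1-\varphi)$-weighted interface position of Eq. 13, can be sketched on a toy cell column. The weight peaks at $\varphi = 0.5$, so cells cut by the interface dominate the centroid; the cell data below is illustrative, not simulation output.

```python
# Sketch of Eq. 13: interface position as the phi*(1-phi)-weighted
# centroid of cell centers. Toy one-column cell data, not simulation output.
cells = [
    # (x, y, vapor volume fraction phi)
    (0.0, 0.10, 0.0),
    (0.0, 0.20, 0.0),
    (0.0, 0.30, 0.4),   # partially vapor: near the interface
    (0.0, 0.40, 0.6),
    (0.0, 0.50, 1.0),
    (0.0, 0.60, 1.0),
]

weights = [phi * (1.0 - phi) for _, _, phi in cells]
wsum = sum(weights)
x_int = sum(x * w for (x, _, _), w in zip(cells, weights)) / wsum
y_int = sum(y * w for (_, y, _), w in zip(cells, weights)) / wsum
print(x_int, y_int)   # centroid lands between the two partially-filled cells
```

Fully liquid ($\varphi = 0$) and fully vapor ($\varphi = 1$) cells get zero weight, so only the smeared-interface cells contribute.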
Fourth, the heat transfer into and out of each phase, $(-k_{liq}\,dT/dx)_{interface-liq}$ and $(-k_{vap}\,dT/dx)_{interface-vap}$, is calculated from the subgrid model (1-D heat equation) and included as source/sink terms in the liquid/vapor phase energy equations in the fluid simulation. In general, the momentum corresponding to interface mass transfer would also be represented with source/sink terms, but it is considered negligible in these simulations. In the EDU simulations, $\Omega$ is a single, thin (6 cm—a few cells wide) rectangular region, centered on $x_{interface}$, covering the entire interface from the tank axis to the wall. A single 1-D grid on the interface is adequate for proof of concept, but it does not capture variations along the interface.

2. Energy and mass are conserved adaptively

For an interface tracking scheme, the interface is not a line/surface but a set of grid cells near the interface, as in Figure 6. The number and volumes of these cells are unknown and changing—though changing slowly relative to the time step. To accommodate this volume variation and obtain the proper mass/energy source terms, scaling factors, $\beta$ and $\chi$, are calculated in $\Omega$ each time step, Eq. 15 and 16.

$$\dot{m}_{flux}^{eff} = \beta_{j-1}\,\frac{\sum_{\Omega} \dot{m}_{interface}\,\varphi_{vap}\,dV}{A_{interface}}, \qquad \beta_j = \beta_{j-1}\,\frac{\dot{m}_{flux}}{\dot{m}_{flux}^{eff}}, \text{ similarly for the liquid} \quad (15)$$

$$\dot{q}_{flux}^{eff} = \chi_{j-1}\,\frac{\sum_{\Omega} \dot{q}_{interface}\,\varphi_{vap}\,dV}{A_{interface}}, \qquad \chi_j = \chi_{j-1}\,\frac{\dot{q}_{flux}}{\dot{q}_{flux}^{eff}}, \text{ similarly for the liquid} \quad (16)$$

Extensions of this scheme would include a non-uniform 1-D grid to resolve the liquid/vapor interface while capturing profile details distant from the interface.

D. Fluid and Material Properties

Liquid hydrogen is treated as a Boussinesq fluid, while the vapor is an ideal gas. A real gas differs from an ideal gas by up to 12% over the temperatures and pressures in the simulation, particularly near the saturation line; this must be addressed.
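The adaptive rescaling of Eq. 15 can be sketched as follows: the per-cell mass sources are spread over a changing set of near-interface cells, then a factor $\beta$ is updated so the integrated effective flux matches the target HKS flux exactly. All numbers below are illustrative assumptions, not simulation values.

```python
# Sketch of the beta update in Eq. 15: rescale the smeared per-cell mass
# sources so the integrated effective flux equals the target interface
# flux, however many cells the smeared interface currently occupies.
# All numeric values are illustrative.

def update_beta(beta_prev, m_flux, sources, volumes, A_interface):
    """One rescaling step; returns (beta_j, new effective flux)."""
    integral = sum(s * v for s, v in zip(sources, volumes))
    m_eff_prev = beta_prev * integral / A_interface
    beta_j = beta_prev * m_flux / m_eff_prev
    return beta_j, beta_j * integral / A_interface

beta = 1.0
m_flux = 1e-4                     # kg/m^2-s, target interface mass flux
A = 0.02                          # m^2, interface area (illustrative)
for ncells in (3, 5, 4):          # interface cell count drifts step to step
    sources = [m_flux] * ncells   # naive per-cell source before rescaling
    volumes = [0.01] * ncells     # m^3 per cell (illustrative)
    beta, m_eff = update_beta(beta, m_flux, sources, volumes, A)
```

After each update the effective flux equals the target flux to machine precision, which is the point of the scheme: conservation holds even as the cell count changes.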
Constant physical properties at reference conditions (20.207 K, 99.224 kPa) do not adequately capture the thermal conductivity, k, in the gas phase and in the aluminum tank walls. Linear equations in temperature accurately represent NIST data for the viscosity, $\mu$, and thermal conductivity, k, of the gas-phase hydrogen.

Figure 7: Latent heat, $\Delta H_{vap}(T)$, for hydrogen is accurately represented (error < 0.1%) by a third-order polynomial when used in Eq. 5. NIST data.

Figure 8: EDU tank as installed in TS 300.

The saturation line conditions for hydrogen, $p_{sat}(T)$ and $\Delta H_{vap}(T)$, are represented by cubic polynomials least-squares fit to NIST data (Figure 7) over the interface temperature range. NIST data for 5083 aluminum is used for piecewise linear representations of the specific heat, C, and thermal conductivity, k; 5083 aluminum data is indistinguishable from 2219 aluminum data for thermal conductivity, and its cryogenic data is publicly available. NIST data for 304 stainless steel is also used.

VI. The Experiment, Geometry, and Grid

The cryogenic tank simulated is the EDU (or IVF 1000) tank built at Marshall Space Flight Center, with preliminary testing in June 2014 and Phase A testing in September 2015. Figure 8 and Figure 9 show the tank and axisymmetric geometry. This test article (not flight weight) was built to gain experience in tank design and construction, and to test cryogenic fluid management techniques. There is no final report to date on this tank or test, only design notes and testing data. The tank is constructed from 2219 aluminum, insulated with 1.25 inches of SOFI, 20 layers of low density MLI, and 40 layers of standard density MLI. Tank internal volume is 4.336 m³, inner diameter 1.70 m, inner height 2.33 m, dome ratio $\sqrt{2}/2 : 1$. The nominal wall thickness is 2.54 mm.
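The cubic least-squares property fits described in Section V-D can be sketched as below. The "data" is synthetic (an exact cubic standing in for NIST latent-heat tables over an assumed 18–24 K range), and the fit is solved via the 4×4 normal equations in a centered variable for conditioning.

```python
# Sketch: cubic least-squares fit of a saturation-line property over the
# interface temperature range, as done for p_sat(T) and dH_vap(T).
# Synthetic "latent heat" data below stands in for NIST tables.

def cubic_lsq(ts, ys, t0):
    """Least-squares cubic in u = t - t0 via the 4x4 normal equations."""
    n = 4
    A = [[sum((t - t0) ** (i + j) for t in ts) for j in range(n)] for i in range(n)]
    b = [sum(y * (t - t0) ** i for t, y in zip(ts, ys)) for i in range(n)]
    # Gaussian elimination with partial pivoting
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, n):
            f = A[r][col] / A[col][col]
            for c in range(col, n):
                A[r][c] -= f * A[col][c]
            b[r] -= f * b[col]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (b[r] - sum(A[r][c] * x[c] for c in range(r + 1, n))) / A[r][r]
    return x

# Illustrative samples over 18-24 K (kJ/kg; numbers are assumptions)
ts = [18.0 + 0.25 * i for i in range(25)]
ys = [450.0 - 2.0 * (t - 21.0) - 0.3 * (t - 21.0) ** 3 for t in ts]

c = cubic_lsq(ts, ys, 21.0)
fit = [c[0] + c[1] * (t - 21.0) + c[2] * (t - 21.0) ** 2 + c[3] * (t - 21.0) ** 3
       for t in ts]
max_rel_err = max(abs(f - y) / abs(y) for f, y in zip(fit, ys))
```

Because the synthetic data is exactly cubic, the fit recovers the coefficients to machine precision; with real tabulated data the residual would instead reflect the sub-0.1% representation error quoted in Figure 7.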
It was tested in vacuum conditions at a 300 K shroud temperature and 1 g.

1. Geometry

The simulation geometry comes directly from design CAD files, with many internal components removed. A single axisymmetric penetration is kept in the top lid, and included in the bottom lid, for the unsubmerged and submerged diffusers and their 304 stainless steel supply lines; Figure 9 and Figure 10 show the supply lines. Venting (vapor) and drainage (liquid) are implemented as sink terms in the mass, momentum, and energy equations through UDFs, geometrically located in small circular regions, radius 10 cm, near the tank top and bottom. Venting modifies the outflow mass flow to achieve a specified maximum pressure. Drainage is implemented through an orifice-plate equation with a 3.97 mm orifice in a 10.21 mm I.D. pipe, flow coefficient 0.98, the pressure at the tank bottom, and a prescribed back pressure.

2. Grid

The computational domain is two-dimensional, axisymmetric, as shown in Figure 10. The baseline grid contains 37,400 cells: 2,100 represent the solid walls and 35,300 the fluid. A multi-block structured grid captures variations in wall thickness. Further, an unstructured grid represents the complex geometry of the manhole-access lid and the diffuser interior. Typical grid scale is ~5 cm vertical resolution and ~10 cm horizontal resolution. In the grid and simulation, each diffuser supply line empties into the diffuser body. The actual diffuser has multiple holes that allow flow into the tank after reducing its speed and momentum—slow flow reduces mixing of the tank contents. In the grid, the diffuser must be axisymmetric, and it is modeled as an equivalent slot, as shown in Figure 10 on the right.

Figure 10: Axisymmetric fluid+walls grid (left), walls only (center), including unsubmerged diffuser, tank wall with bolted joint, and tank penetration for diffuser supply line, closeup (right).
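The drainage orifice-plate relation described above can be sketched numerically. A common incompressible form, neglecting the approach-velocity correction, is $\dot{m} = C_d\,A\,\sqrt{2\rho\,\Delta p}$; the orifice diameter and flow coefficient come from the text, while the liquid-hydrogen density and pressure difference are assumed, illustrative values.

```python
import math

# Sketch of the drainage orifice-plate relation in a common
# incompressible form: m_dot = Cd * A * sqrt(2 * rho * dP).
Cd = 0.98                      # flow coefficient (from the text)
d = 3.97e-3                    # m, orifice diameter (from the text)
A = math.pi * d ** 2 / 4.0     # m^2, orifice area
rho = 70.8                     # kg/m^3, LH2 density (assumed)
dP = 50_000.0                  # Pa, tank bottom minus back pressure (assumed)

m_dot = Cd * A * math.sqrt(2.0 * rho * dP)
print(m_dot)   # ≈ 0.032 kg/s for these assumed conditions
```

The paper's implementation additionally accounts for the 10.21 mm pipe I.D. and a prescribed back-pressure schedule, which this sketch omits.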
Top and bottom tank penetrations are midline symmetric. SOFI and MLI are part of the tank wall boundary condition—not the grid.

Figure 9: Cross-sectional view of eCryo EDU tank (left), with axisymmetric view (right). The simulations are axisymmetric, and many internal components have been removed (right).

Figure 11: Experimental measurements of ullage pressure rise in the tank, inflowing diffuser GH2 pressurant, and pressurant inflow valve closing time.

Figure 12: Velocity vectors in the unsubmerged diffuser (left) and temperature distribution through the top of the tank (right). The diffuser reduces the momentum of the pressurant.

VII. Comparison with Experimental Results

One test within Phase A EDU testing was chosen for simulation. This experimental data is a good test of the subgrid interface model from Section IV. The interface must be tracked for over 140 s, and the dramatic pressure increase results in a large temperature jump with thermal layers on both sides of the interface. In a series of developmental simulations, the interface subgrid model has been extensively exercised and tested, and the numerical scheme has proven stable. Figure 11 shows the test data for tests HT-15 and HT-16 on day 3 of Phase A testing. The tank was filled with LH2 to the 90% fill level, with GH2 in the ullage, and left to settle. GH2 pressurant gas at 298 K was introduced through the unsubmerged diffuser supply line. Although the drainage line valve was open, there was a limited change in interface height (about a 1% fill-level reduction). Figure 12 shows velocity vectors and temperature in and near the diffuser, tank penetration, and supply lines. The heat introduced through the pressurant flow is substantial, and the supply line conducts heat into the tank.
The grid for the inflow line resolves the momentum and thermal boundary layers and predicts heat flow into the supply line. There is hot gas in the upper part of the tank, as inflowing pressurant is buoyantly driven upwards after exiting the diffuser.

Figure 14: Comparison of computational pressure predictions and experimental measurements. Between 123 and 131 s, pressurant inflow drops and condensation mass flux becomes more significant in the pressure profile, making this interval a better test of the predicted condensation rate.

Figure 13: Temperature profile through the liquid/vapor interface as measured in the subgrid model at t = 87.23 s. The red plus indicates the interface position and temperature.

Figure 13 shows the temperature profile through the liquid/vapor interface at t = 87.23 s as calculated by the subgrid model. The temperature jump is greater than one kelvin, and the interface temperature—where the HKS equation is applied—is a point on the temperature profile. The heat flux from the vapor to the interface is 4 W/m², condensation at the interface adds almost 53 W/m², and the liquid absorbs 54 W/m² from the interface; the balance is heating at the interface. The condensation mass flux is 1×10⁻⁴ kg/m²·s. Figure 14 compares the calculated pressure transient with the experimental data. For most of the experiment, the loss of vapor to condensation is smaller than the pressurant inflow; the ratio of pressurant mass added to GH2 condensed is near 3:1. But, beginning at 123 s and until 131 s, the pressurant mass flow drops by nearly an order of magnitude until a relief valve is opened. The condensation mass flux has a more significant effect on pressure in this region; as such, it is a better test of the interface model and predicted condensation rate. VIII.
Conclusion

The analysis, algorithms, and results of this paper provide proof of concept for a numerical method for interfacial mass transfer. Although applied to hydrogen, the approach should apply to other fluids. The use of an accommodation coefficient of 1.0 in these calculations is consistent with theory and experiments, and adds confidence to this approach. Further development of the methods is needed, including extensions to curved interfaces with multiple interface sections. Several problems should be reconsidered; with slosh, turbulent thermal conductivity will increase heat conduction through the thermal layers to the interface, accelerate interfacial mass transfer, and possibly explain pressure collapse in on-orbit tanks.

Acknowledgments

The authors thank Andrè LeClair, Jason Hartwig, and Wesley Johnson for helpful discussions. Aimee Dechant, Monica Guzik, Dale Diedrick, and Andy Hissam helped locate the EDU CAD data. John Ibrahim and Valerio Viti from Fluent helped with simulation questions. This work was supported under the NASA Space Technology Mission Directorate's Technology Demonstration Missions Program under the Evolvable Cryogenics Project.

References

O. Kartuzova and M. Kassemi, "Modeling Interfacial Turbulent Heat Transfer during Ventless Pressurization of a Large Scale Cryogenic Storage Tank in Microgravity," AIAA, 2011.
M. E. M. Stewart and J. P. Moder, "Self-Pressurization of a Flightweight, Liquid Hydrogen Tank: Simulation and Comparison with Experiments," AIAA Propulsion and Energy Conference, AIAA 2016-4674, Salt Lake City, 2016.
H. W. Liepmann and A. Roshko, Elements of Gasdynamics, New York: John Wiley & Sons, 1958.
M. E. M. Stewart and J. P. Moder, "Comparison of Computation Results with a Low-g, Nitrogen Slosh and Boiling Experiment," AIAA Propulsion and Energy Conference, AIAA 2015-3854, Orlando, July 2015.
B. J.
Mason, The Physics of Clouds, Oxford: Oxford University Press, 1971.
S. Popov, A. Melling, F. Durst and C. A. Ward, "Apparatus for investigation of evaporation at free liquid-vapour interfaces," International Journal of Heat and Mass Transfer, vol. 48, pp. 2299-2309, 2005.
V. K. Badam, V. Kumar, F. Durst and K. Danov, "Experimental and theoretical investigations on interfacial temperature jumps during evaporation," Experimental Thermal and Fluid Science, vol. 32, pp. 276-292, 2007.
M. S. Raju, "LSPRAY-V: A Lagrangian Spray Module," NASA/CR--2015-218918, 2015.
M. S. Raju and W. A. Sirignano, "Interaction between two vaporizing droplets in an intermediate Reynolds number flow," Physics of Fluids A, vol. 2, pp. 1780-1796, Oct. 1990.
J. B. Mehl and M. R. Moldover, "Precondensation phenomena in acoustic measurements," J. Chem. Phys., vol. 77, no. 1, pp. 455-465, July 1982.
M. B. Ewing, M. L. McGlashan and J. P. M. Trusler, "The Temperature-Jump Effect and the Theory of the Thermal Boundary Layer for a Spherical Resonator. Speeds of Sound in Argon at 271.16 K," Metrologia, vol. 22, pp. 93-102, 1986.
R. W. Schrage, A Theoretical Study of Interphase Mass Transfer, Columbia University Press, 1953.
J. Vieceli, M. Roeselova and D. J. Tobias, "Accommodation coefficients for water vapor at the air/water interface," Chemical Physics Letters, vol. 393, pp. 249-255, 2004.
M. Matsumoto, K. Yasuoka and Y. Kataoka, "Evaporation and condensation at a liquid surface. II. Methanol," J. Chem. Phys., vol. 101, no. 9, pp. 7912-7917, Nov. 1994.
K. Yasuoka, M. Matsumoto and Y. Kataoka, "Evaporation and condensation at a liquid surface. I. Argon," J. Chem. Phys., vol. 101, no. 9, pp. 7904-7911, Nov. 1994.
W. H. Press, B. P. Flannery, S. A. Teukolsky and W. T. Vetterling, Numerical Recipes in C: The Art of Scientific Computing, New York: Cambridge Univ. Press, 1988.
ANSYS, ANSYS Fluent Software, Ver. 16.0, Canonsburg, PA: ANSYS, Inc., Dec. 2014.
ANSYS, ANSYS Fluent Theory Guide, Release 17.2, Canonsburg, PA: ANSYS, Inc., Aug. 2016.
NIST, "Thermophysical Properties of Fluid Systems," National Institute of Standards and Technology, 2011. [Online]. Available: [Accessed January 2015].
NIST, "Material Measurement Laboratory: Cryogenics Technologies Group," [Online]. Available: [Accessed April 2015].
3920
https://brainly.in/question/42280534
Classify the following number as rational or irrational with justification: 1.010010001... - Brainly.in

s1051siddhart22827 (22.06.2021, Math, Secondary School) asked: Classify the following number as rational or irrational, with justification: 1.010010001...

Expert-verified answer (from Amit Ratogi and Jitendra Gupta, All In One Mathematics 9, Ch. 1 Number Systems, page 8):

Tip: Numbers which cannot be expressed in the form p/q, where p and q are integers and q ≠ 0, are irrational; equivalently, a number whose decimal expansion is non-terminating and non-repeating is irrational.

Step 1 of 2: The given decimal expansion is 1.010010001..., in which one extra zero appears before each successive 1. This expansion is non-terminating and non-repeating, so the given number cannot be written in the form p/q.

Answer (gameszap205): It is a non-terminating number, since it cannot be written and cancelled as a fraction p/q; therefore it is irrational.
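A small brute-force check (a sketch, not a proof) supports the non-repeating claim above: every rational number has an eventually periodic decimal expansion, and no short period fits the digits of 1.010010001..., because the runs of zeros keep growing.

```python
# The pattern 1.010010001... puts k zeros before the k-th 1, so the runs
# of zeros grow without bound and no repeating block can ever fit.
# Finite illustration of that growing-gap argument:

def digits_after_point(n_ones):
    """Digits of 1.010010001... after the decimal point."""
    return "".join("0" * k + "1" for k in range(1, n_ones + 1))

s = digits_after_point(40)

def matches_period(s, start, p):
    """True if s looks periodic with period p from position `start` on."""
    return all(s[i] == s[i + p] for i in range(start, len(s) - p))

# No period up to 30, from any starting offset up to 100, fits the digits.
no_period = not any(matches_period(s, a, p)
                    for p in range(1, 31) for a in range(0, 101))
print(no_period)
```

A finite check can only illustrate the idea; the actual justification is that a periodic expansion would force the 1s to appear at most one period apart, while here the gaps between 1s grow forever.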
3921
https://math.stackexchange.com/questions/3363581/generalized-gessel-viennot-lindstrom-lemma-to-hyperdeterminants-of-multinomial-c
combinatorics - Generalized Gessel-Viennot-Lindstrom Lemma to Hyperdeterminants of Multinomial Coefficients - Mathematics Stack Exchange
Generalized Gessel-Viennot-Lindstrom Lemma to Hyperdeterminants of Multinomial Coefficients

Asked 6 years ago. Modified 6 years ago. Viewed 196 times.

A consequence of the Gessel-Viennot-Lindstrom lemma is that if $A$ and $B$ are collections of points in $\mathbb{Z}^2$, then the number of non-intersecting lattice paths (counted with a sign) from the points in $A$ to the points in $B$ is given by the determinant of the matrix $M$ with $M(i,j) =$ the number of paths from $A_i$ to $B_j$ using only north and east unit steps in $\mathbb{Z}^2$. Note that these entries are binomial coefficients. I am aware that this lemma is much more general. But nonetheless, it is a statement about a combinatorial interpretation of the determinant of a matrix of binomial coefficients. Is there an analogous combinatorial interpretation of the hyperdeterminant of a tensor of multinomial coefficients? With an example: the matrix $M(i,j) = \binom{a_i + b_j}{a_i}$ has a determinant which is a count of non-intersecting lattice paths by Gessel-Viennot-Lindstrom. Does the tensor $T(i,j,k) = \binom{a_i + b_j + c_k}{a_i,\, b_j,\, c_k}$ have a hyperdeterminant which admits a combinatorial interpretation?

Tags: combinatorics, tensors, integer-lattices, multinomial-coefficients

Asked Sep 20, 2019 at 17:20 by Taylor.
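As a small, hedged illustration of the determinant side of the question (not the hyperdeterminant part): taking $a_i = b_j = i$ for $i = 0, \dots, n-1$ makes $M$ the Pascal matrix $\binom{i+j}{i}$, whose determinant is 1, matching the single non-intersecting family of nested north/east paths for those endpoints. A brute-force determinant confirms this for small $n$:

```python
from math import comb
from itertools import permutations

def det(M):
    """Leibniz-formula determinant (fine for small n)."""
    n = len(M)
    total = 0
    for perm in permutations(range(n)):
        sign = 1
        for i in range(n):               # count inversions for the sign
            for j in range(i + 1, n):
                if perm[i] > perm[j]:
                    sign = -sign
        prod = 1
        for i in range(n):
            prod *= M[i][perm[i]]
        total += sign * prod
    return total

n = 5
M = [[comb(i + j, i) for j in range(n)] for i in range(n)]
print(det(M))  # 1 for every n: the Pascal matrix is LU with unitriangular factors
```

For general $a_i, b_j$ the determinant of $\binom{a_i+b_j}{a_i}$ counts the non-intersecting path families as the question states; the Pascal case is just the most easily checked instance.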
https://arxiv.org/pdf/1711.00518
RANDOM WALKS ON PRIMITIVE LATTICE POINTS

OLIVER SARGENT

ABSTRACT. We define a random walk on the set of primitive points of Z^d. We prove that for walks generated by measures satisfying mild conditions these walks are recurrent in a strong sense. That is, we show that the associated Markov chains are positive recurrent and there exists a unique stationary measure for the random walk.

1. INTRODUCTION

Random walks on lattices have been extensively studied by probabilists. They provide fascinating examples and useful models for many natural processes. Random walks are naturally part of the larger class of Markov chains. One of the most basic properties of Markov chains is recurrence. It is well known that in one or two dimensions symmetric walks are recurrent in the sense that they return to their starting points almost surely. In higher dimensions walks are transient, which means that they are not recurrent in the above sense. For the details, see [Spi76] or [LL10] for instance. In this note we will define random walks on primitive lattice points, or more generally, lattice points which lack specified factors common to all of their co-ordinates. We will do this by considering the usual random walk on the lattice, but at each stage we will divide by the ‘gcd of the vector’ so that the walker always ends up at a primitive point. It seems quite surprising that the author could not find a good reference for this simple and natural set-up. The properties of the primitive random walks we consider are very different from those of the usual random walks. We will show that they exhibit strong recurrence properties. In the language of Markov chains, we will show that they are positive recurrent. In particular, we will show that there is an invariant stationary distribution. While at first this fact might seem quite surprising, in hindsight it can be seen to follow from a simple equidistribution property of random walks on the discrete torus.

1.1. Main results.
To state our main results we must formally describe our setting. Here and throughout the paper, by a lattice we mean a discrete subgroup of Euclidean space. Since any such object is isomorphic to Z^d for some d ∈ N ∪ {0}, we choose to restrict our attention to the lattice Z^d. Let {e_1, ..., e_d} be the standard basis of R^d, so we may write Z^d = ⟨e_1, ..., e_d⟩_Z. For z ∈ Z^d and 1 ≤ i ≤ d we define the co-ordinates z_i := z · e_i. We recall that z ∈ Z^d is said to be primitive if gcd(z_1, ..., z_d) = 1. Let Z^d_0 denote the set of primitive points of Z^d. We will use the convention that gcd(0, ..., 0) = 1, and so (0, ..., 0) ∈ Z^d_0. Consider the map Z^d × Z^d_0 → Z^d_0 given by

(a, z) ↦ a +̂ z := (a + z) / gcd(a + z).   (1.1)

We remark that this map is not an action of Z^d on Z^d_0, since there exist a_1, a_2 ∈ Z^d and z ∈ Z^d_0 such that (a_1 + a_2) +̂ z ≠ a_1 +̂ a_2 +̂ z. To describe our main results as quickly as possible we use the language of Markov chains. We refer the reader to [Shi84, Chapter VIII] for the basic facts we will use. For a probability measure μ on Z^d we consider the random walk on Z^d where the walker chooses each step with probability given by μ and then moves to the position given by the map in (1.1). Suppose that the walker starts at z ∈ Z^d_0; then we let {X_{i,z}}_{i=0}^∞ be the sequence of random variables which give the position of the walker after i = 0, 1, 2, ... steps. We note that the sequence of random variables {X_{i,z}}_{i=0}^∞ forms a Markov chain with transition probabilities given by

P_μ[X_{i+1,z} = x +̂ y | X_{i,z} = y] := μ(x) for all y ∈ Z^d_0 and i ∈ N ∪ {0} such that P_μ[X_{i,z} = y] > 0.

We use the notation M^0_z(μ) to denote this Markov chain. A Markov chain is said to be irreducible (or sometimes indecomposable) if there is a positive probability to reach a specified state from any given starting state.
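As a concrete illustration of the map (1.1), here is a minimal Python sketch (the function names are mine, not the paper's) that applies one step of the primitive random walk and demonstrates the non-associativity remarked on above:

```python
from functools import reduce
from math import gcd

def vec_gcd(v):
    # gcd of all coordinates, with the paper's convention gcd(0, ..., 0) = 1
    g = reduce(gcd, v)
    return g if g != 0 else 1

def step(a, z):
    # the map (1.1): (a, z) -> (a + z) / gcd(a + z), keeping the walker primitive
    s = tuple(ai + zi for ai, zi in zip(a, z))
    g = vec_gcd(s)
    return tuple(si // g for si in s)

# non-associativity: applying (1, 3) and then (1, 0) differs from
# applying their sum (2, 3) in a single step
z = (1, 1)
one_by_one = step((1, 0), step((1, 3), z))  # (2, 4)/2 = (1, 2), then (2, 2)/2 = (1, 1)
combined = step((2, 3), z)                  # (3, 4)/1 = (3, 4)
print(one_by_one, combined)  # prints (1, 1) (3, 4)
```

This makes the remark concrete: intermediate divisions by the gcd change where the walker ends up, which is why the map is not a group action.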
An irreducible Markov chain is said to be positive recurrent if for all states, the expected return time to that state is finite. For a probability measure μ on Z^d we will use the following two standing assumptions, which we record here for convenience:

(A) Finite first moment, in the sense that ∑_{z∈Z^d} ‖z‖ μ(z) < ∞.
(B) Support which generates Z^d, in the sense that for every z ∈ Z^d there exists n ∈ N such that μ^{*n}(z) > 0.

Our first main result is the following:

Theorem 1.1. Let μ be a probability measure on Z^d satisfying (A) and (B); then for all z ∈ Z^d_0 the Markov chain M^0_z(μ) is irreducible and positive recurrent.

The assumption that μ satisfies (A) is necessary to ensure that the random walk does not spread out too fast. Assumption (B) is a nondegeneracy assumption which ensures that the Markov chain is irreducible. In order to prove Theorem 1.1 we will consider a family of inherently less recurrent Markov chains and show that they are all positive recurrent. This family of Markov chains can be thought of as random walks on the set of lattice points coprime to k, for some k ∈ N. We say that a point z ∈ Z^d is coprime to k ∈ N if k ∤ gcd(z). We denote the set of points in Z^d which are coprime to k by Z^d_k. Consider the map Z^d × Z^d_k → Z^d_k given by

(z, x) ↦ z +̂_k x := (z + x) / k^p, where p := max{n ∈ N ∪ {0} : k^n | (z + x)}.   (1.2)

We proceed in a similar manner to how we constructed the Markov chain M^0_z. For a probability measure μ on Z^d we let M^k_z(μ) denote the Markov chain corresponding to the walk obtained by starting at z ∈ Z^d_k and iterating the map in (1.2) with steps chosen according to the measure μ. That is, M^k_z(μ) := {X_{i,z}}_{i=0}^∞ is a sequence of random variables corresponding to the random walk starting at z, with transition probabilities given by

P_μ[X_{i+1,z} = x +̂_k y | X_{i,z} = y] := μ(x) for all y ∈ Z^d_k and i ∈ N ∪ {0} such that P_μ[X_{i,z} = y] > 0.
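The k-analogue (1.2) strips only powers of k out of the sum. A short sketch under the same conventions (again with my own function name, and with the convention gcd(0, ..., 0) = 1 used to set p = 0 when a + z = 0):

```python
from functools import reduce
from math import gcd

def step_k(a, z, k):
    # the map (1.2): (a, z) -> (a + z) / k^p, where p is the largest power of k
    # dividing every coordinate of a + z, so the walker stays coprime to k
    s = tuple(ai + zi for ai, zi in zip(a, z))
    g = reduce(gcd, s)
    if g == 0:           # a + z = 0: by convention gcd(0, ..., 0) = 1, so p = 0
        return s
    p = 0
    while g % k == 0:    # count the factors of k in the gcd of the coordinates
        g //= k
        p += 1
    return tuple(si // k ** p for si in s)

print(step_k((3, 7), (1, 1), 2))  # (4, 8) has gcd 4 = 2^2, so p = 2 -> (1, 2)
```

Note that, unlike the map (1.1), this only removes factors of k: the point (2, 1) +̂_3 (4, 1) = (6, 2)/1 = (6, 2) keeps its common factor 2 because 2 is coprime to 3.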
Since there is less likelihood of division occurring in the latter set-up, one should expect that the Markov chains M^k_z are less recurrent than M^0_z for any k ∈ N with k ≥ 2. We will see that this is indeed the case, and we will prove the following result.

Theorem 1.2. Let μ be a probability measure on Z^d satisfying (A) and (B); then for all z ∈ Z^d_k and k ∈ N with k ≥ 2 the Markov chain M^k_z(μ) is irreducible and positive recurrent.

The irreducibility implies that the state space of the Markov chain M^k_z is Z^d_k for all z ∈ Z^d_k. Hence, we see from the definition of positive recurrence that if M^k_z is positive recurrent for some z ∈ Z^d_k then it will be positive recurrent for all z ∈ Z^d_k.

1.2. Invariant measures. The strategy used to prove Theorems 1.1 and 1.2 is first to show that for measures satisfying (B) the Markov chains M^k_z(μ) are irreducible for all k ∈ N ∪ {0}. For a time homogeneous Markov chain M = {X_i}_{i=0}^∞ consisting of X-valued random variables with transition probabilities given by

μ_x(E) := P[X_{i+1} ∈ E | X_i = x] for all x ∈ X, E ⊆ X and i ∈ N ∪ {0} such that P[X_i = x] > 0,

a measure ν ∈ P(X) is said to be a stationary measure (or sometimes invariant distribution) for M if ∫_X μ_x dν(x) = ν. It is then a classical fact [Shi84, Theorem 2, p. 543] that irreducible Markov chains with countable state spaces are positive recurrent if and only if there exists a stationary measure for the Markov chain. Moreover, the irreducibility implies that if a stationary measure exists then it is unique. There is an obvious candidate for such a stationary measure, and this is the object which we will study in the rest of this note. First let us introduce the notation P(X) for the set of probability measures on X, for some topological space X. Let B := (Z^d)^N be the space of infinite sequences of elements of Z^d and β := μ^{⊗N} ∈ P(B) the Bernoulli measure.
Given k ∈ N ∪ {0}, b ∈ B and z ∈ Z^d_0, we will consider the random walks corresponding to b starting at z given by the sequences

{b_n +̂_k ... +̂_k b_1 +̂_k z}_{n∈N},   (1.3)

where +̂_0 := +̂. In order to study stationary measures for these random walks and the associated Markov chains, we consider the measures supported on the sets of points which are end points of walks of length n. For n ∈ N and z ∈ Z^d_0 we define ω^k_{n,z} ∈ P(Z^d_k) to be the measure

ω^k_{n,z}(E) = ∫_B 1_E(b_n +̂_k ... +̂_k b_1 +̂_k z) dβ(b) for all E ⊆ Z^d_k.   (1.4)

Now the natural candidates for invariant measures for the Markov chains M^k_z(μ) are given by weak-∗ limits of the following sequence:

{(1/n) ∑_{i=1}^n ω^k_{i,z}}_{n∈N}.   (1.5)

Using the definition of an invariant measure, one can see that such limit points will be stationary. Therefore, the main difficulty is to show that any weak-∗ limit of the sequence in (1.5) is a probability measure. One often thinks of this as a ‘non-escape of mass’ property of the sequence of measures (1.5). We will prove the following theorem.

Theorem 1.3. Suppose that μ ∈ P(Z^d) satisfies (A) and (B). Then, for all k ∈ N ∪ {0} and z ∈ Z^d_0, any weak-∗ limit of the sequence in (1.5) is a stationary probability measure for the Markov chain M^k_z(μ). Moreover, there is a unique limit point, which we denote ω^k_∞.

It follows from the discussion at the start of this subsection that once we have shown irreducibility (cf. §2.1), Theorems 1.1 and 1.2 follow directly from Theorem 1.3. We also remark that the stationary measure satisfies ω^k_∞(z) = 1/τ_k(z) for all z ∈ Z^d_k, where τ_k(z) is the expected return time of the random walk (corresponding to k) to z. Hence, it would be interesting to obtain estimates for τ_k(z).

1.3. Figures. Using a computer algebra package it is possible to simulate fairly long walks on the spaces Z^d_k and Z^d_0. The following (‘randomly picked’) measures were used to generate random walks.
η_1 := (1/200)(13 δ_{e_1} + 3 δ_{e_2} + 35 δ_{e_3} + 36 δ_{e_4} + 36 δ_{−e_1} + 30 δ_{−e_2} + 42 δ_{−e_3} + 5 δ_{−e_4})

η_2 := (1/305)(13 δ_{e_1} + 3 δ_{e_2} + 35 δ_{e_3} + 36 δ_{e_4} + 5 δ_{e_5} + 42 δ_{e_6} + 16 δ_{−e_1} + 36 δ_{−e_2} + 4 δ_{−e_3} + 49 δ_{−e_4} + 36 δ_{−e_5} + 30 δ_{−e_6})

η_3 := (1/51)(11 δ_{e_1} + 12 δ_{e_2} + 8 δ_{e_3} + 9 δ_{−e_1} + 2 δ_{−e_2} + 9 δ_{−e_3})

In order to visualise the random walks, in particular their recurrence properties, the norms of all points in the walks were calculated. This data was then used to plot histograms as displayed in Figure 1.1.

FIGURE 1.1. Histograms showing frequencies of norms of vectors in the first 1,000,000 points of various walks: (A) walk on Z^4_0 generated by η_1; (B) walk on Z^6_0 generated by η_2; (C) walk on Z^3_2 generated by η_3; (D) walk on Z^3_5 generated by η_3. The frequencies are indicated on the vertical axis and the norms of the vectors in the walk on the horizontal.

2. PROOF OF MAIN RESULTS

In this section we will complete the proofs of Theorems 1.1 and 1.2. As previously remarked, this is done in two steps; first we show that the Markov chains are irreducible, and second we prove Theorem 1.3. In §2.1 we will see how assumption (B) is used to show that the corresponding Markov chain is irreducible. In §2.2 we will introduce a strong recurrence property for random walks and show that if a walk has this property then the limit in (1.5) converges. In §2.3 we study random walks on the discrete torus. Using the fact that these walks tend to be equidistributed, we derive a lower bound for the expected number of times division by a prime should occur in a walk of length n. In §2.4 we use the bounds from the previous section to show that on average the norm is contracted by the random walks.
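Simulations in the spirit of Figure 1.1 need no computer algebra package. The sketch below (my own code, not the author's; the choice of sup-norm is mine, since the figure does not specify a norm) runs the primitive walk on Z^3_0 generated by η_3 and tallies the norms of the visited points:

```python
import random
from collections import Counter
from functools import reduce
from math import gcd

def simulate_norms(n_steps, seed=0):
    # primitive random walk on Z^3_0 generated by
    # eta_3 = (1/51)(11 d_{e1} + 12 d_{e2} + 8 d_{e3} + 9 d_{-e1} + 2 d_{-e2} + 9 d_{-e3});
    # returns a histogram of sup-norms of the visited points
    rng = random.Random(seed)
    moves = [(1, 0, 0), (0, 1, 0), (0, 0, 1), (-1, 0, 0), (0, -1, 0), (0, 0, -1)]
    weights = [11, 12, 8, 9, 2, 9]
    z = (0, 0, 0)
    hist = Counter()
    for a in rng.choices(moves, weights=weights, k=n_steps):
        s = tuple(ai + zi for ai, zi in zip(a, z))
        g = reduce(gcd, s) or 1          # convention gcd(0, 0, 0) = 1
        z = tuple(si // g for si in s)   # renormalise to a primitive point
        hist[max(abs(c) for c in z)] += 1
    return hist

hist = simulate_norms(100_000)
# positive recurrence shows up as most of the mass sitting at small norms
print(sorted(hist.items())[:5])
```

Plotting `hist` as a bar chart reproduces the qualitative shape of the histograms above: the frequencies concentrate near the origin instead of spreading out as in a transient walk.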
This fact will then be used to show that the random walks satisfy the recurrence property introduced in §2.2.

2.1. Irreducibility. We recall that a Markov chain M = {X_i}_{i=0}^∞ consisting of X-valued random variables, for some countable state space X, is said to be irreducible if for all x, y ∈ X with P[X_0 = x] > 0 there exists n ∈ N such that P[X_n = y | X_0 = x] > 0. For k, n ∈ N ∪ {0}, z ∈ Z^d_k and b ∈ B, let Σ^k_n(b, z) be the position after n steps of the random walk in Z^d_k corresponding to the sequence b starting at z. In other words,

Σ^k_n(b, z) := b_n +̂_k ... +̂_k b_1 +̂_k z,   (2.1)

where as before +̂_0 := +̂. We view Σ^k_n(−, z) : B → Z^d_k as random variables, so that M^k_z(μ) = {Σ^k_i(−, z)}_{i=0}^∞ and

P_μ[Σ^k_n(b, z) = x] = μ^{⊗n}({a ∈ (Z^d)^n : a_n +̂_k ... +̂_k a_1 +̂_k z = x}),   (2.2)

where M^k_z(μ) is the Markov chain defined in §1.1. We start with the following general lemma.

Lemma 2.1. Let k ∈ N ∪ {0}, z ∈ Z^d_k and let M^k_z be the Markov chains defined in §1.1. If μ_1, μ_2 ∈ P(Z^d) are such that μ_1 is absolutely continuous with respect to μ_2 and M^k_z(μ_1) is irreducible, then M^k_z(μ_2) is irreducible.

Proof. First note that for μ_1 and μ_2 satisfying the hypothesis of the lemma, μ_1^{⊗n} is absolutely continuous with respect to μ_2^{⊗n} for all n ∈ N. For a contradiction, suppose that M^k_z(μ_2) = {X_{i,z}}_{i=0}^∞ is not irreducible, so that there exist x, y ∈ Z^d_k such that P_{μ_2}[X_{0,z} = x] > 0 and P_{μ_2}[X_{i,z} = y | X_{0,z} = x] = 0 for all i ∈ N. This implies that x = z and hence P_{μ_2}[X_{i,z} = y | X_{0,z} = x] = P_{μ_2}[Σ^k_i(b, z) = y]. Thus, using (2.2) we see that

μ_2^{⊗i}({a ∈ (Z^d)^i : y = a_i +̂_k ... +̂_k a_1 +̂_k x}) = 0 for all i ∈ N,

and hence, using the absolute continuity,

μ_1^{⊗i}({a ∈ (Z^d)^i : y = a_i +̂_k ... +̂_k a_1 +̂_k x}) = 0 for all i ∈ N.

Reversing the argument we see that this implies that P_{μ_1}[X_{i,z} = y | X_{0,z} = x] = 0 for all i ∈ N. Since M^k_z(μ_1) is assumed to be irreducible, this is a contradiction, as required.

Now we are ready to prove the main result of this subsection.

Proposition 2.2.
For all k ∈ N ∪ {0}, z ∈ Z^d_k and measures μ ∈ P(Z^d) satisfying (B), the Markov chain M^k_z(μ) is irreducible.

Proof. One easily sees from the definition of irreducibility that if the Markov chain M^k_z(μ) is irreducible for some z ∈ Z^d_k then it is irreducible for all z ∈ Z^d_k. So we will set z = 0. Moreover, we will only treat the case when k = 0, but it can easily be checked that the proof goes through without change for k ∈ N. Set M := M^0_0. Since we suppose that μ ∈ P(Z^d) satisfies assumption (B), there exists n ∈ N such that ν := (1/2d) ∑_{i=1}^d (δ_{e_i} + δ_{−e_i}) is absolutely continuous with respect to λ_n := (1/n) ∑_{i=1}^n μ^{*i}. Therefore, if M(ν) is irreducible, by Lemma 2.1 we see that M(λ_n) is irreducible. It is easy to see that irreducibility of M(λ_n) implies the irreducibility of M(μ). Hence, in order to complete the proof of the proposition, we must show that M(ν) = {X_i}_{i=0}^∞ is irreducible; in other words, that for all x, y ∈ Z^d_0 such that P_ν[X_0 = x] > 0 there exists n ∈ N such that P_ν[X_n = y | X_0 = x] > 0. Since P_ν[X_0 = x] > 0 if and only if x = 0, we may assume that x = 0. We say that y ∈ Z^d_0 is connected to 0 if there exist n ∈ N and a sequence of points {p_i}_{1≤i≤n} ⊂ Z^d_0 such that p_1 = 0, p_n = y and p_i − p_{i+1} ∈ {±e_1, ..., ±e_d} for all 1 ≤ i ≤ n − 1. So we must show that every x ∈ Z^d_0 is connected to 0. Suppose that d ≥ 3. Since gcd(1, z) = 1 for all z ∈ Z^{d−1}, if x = (1, x_2, ..., x_d) then x is connected to 0. Moreover, if x ∈ Z^d_0 is such that gcd(x_2, ..., x_d) = 1, then as in the previous case (1, x_2, ..., x_d) is connected to 0, and we may connect (1, x_2, ..., x_d) with x by adding or subtracting e_1 an appropriate number of times. In the case that x ∈ Z^d_0 is such that gcd(x_2, ..., x_d) > 1, since d ≥ 3 we have gcd(x_2 + 1, x_3, ..., x_d) = 1.
Hence, by our previous argument x + e_2 is connected to 0, but this clearly implies that x is connected to 0, and hence in the case that d ≥ 3 we have shown that every x ∈ Z^d_0 is connected to 0, as required. Suppose that d = 2. Let x ∈ Z^2_0 be arbitrary. There exist a prime p and n ∈ N such that p = x_2 n + 1. It is clear that (1, p) is connected to 0, and we may connect (1, p) with (n x_1, p) by adding or subtracting e_1 an appropriate number of times. Since (n x_1, p) +̂ (−e_2) = (x_1, x_2), this shows that P[X_n = x | X_0 = 0] > 0 for some n ∈ N. To see that P[X_n = 0 | X_0 = x] > 0, we note that for all x ∈ Z^d_0 there exists n ∈ N such that the element (1, x_2) occurs in the sequence {x +̂ i e_1}_{|i| ≤ n}. Without loss of generality, suppose that x_2 ≤ 0, so that e_2 +̂ (−e_1) +̂ (1, x_2) = (0, 0), and hence we see that P[X_n = 0 | X_0 = x] > 0, as required.

2.2. Recurrence. Deciding whether a random walk is recurrent or transient is one of the first basic steps towards understanding the long term behaviour of the random walk. There are several related notions of recurrence of a random walk. Following [BQ12], in this note we are interested in the following strong notion:

Definition 2.3. For k ∈ N ∪ {0} and μ ∈ P(Z^d), we say that the random walk generated by μ on Z^d_k is recurrent if for all z ∈ Z^d_k and ε > 0 there exist a finite set K ⊂ Z^d_k and n_z ∈ N such that for all n ≥ n_z we have ω^k_{n,z}(K) > 1 − ε.

The proof of Theorem 1.3 can be reduced to proving the following proposition.

Proposition 2.4. For all k ∈ N and μ ∈ P(Z^d) satisfying (A) and (B), the random walk induced by μ on Z^d_k is recurrent.

The proof of Proposition 2.4 will be our goal for the remainder of this note. First we show that Theorem 1.3 follows immediately from it.

Proof of Theorem 1.3 assuming Proposition 2.4. First we note that Proposition 2.4 implies that the walk generated by μ on Z^d_0 is recurrent.
To see this, note that for all n ∈ N and z ∈ Z^d_0 one has ω^0_{n,z}(K) ≥ ω^k_{n,z}(K) for all cones K ⊂ Z^d_0 ⊂ Z^d_k. Moreover, any finite subset of Z^d_0 can be contained in a cone, and therefore one may assume that the subset K in Definition 2.3 is a cone. It follows that for all k ∈ N ∪ {0} and z ∈ Z^d_k any weak-∗ accumulation point of the sequence {(1/n) ∑_{i=1}^n ω^k_{i,z}}_{n∈N} will be a probability measure. It is also clear that any such limit point will be a stationary measure for the Markov chain M^k_z(μ). Since Z^d_k is countable and, by Proposition 2.2, the Markov chains corresponding to the random walks are irreducible, it follows from [Shi84, Theorem 2, p. 543] that the limit point will actually be unique, and this is the claim of Theorem 1.3.

We remark that the fact that μ satisfies both (A) and (B) is vital for the statement of Proposition 2.4 to hold. As we saw in §2.1, (B) is needed to guarantee irreducibility, but this does not mean it is not needed for Proposition 2.4 itself. Indeed, if it were simply removed, it is possible that for certain starting locations z ∈ Z^d_k the random walk corresponding to μ would never visit points with a gcd divisible by k, and hence the walk on the set of points coprime to k starting at z would behave more like a traditional random walk on a lattice.

2.3. Equidistribution on the discrete torus. In this section we will study the ordinary random walk on Z^d. In order to prove recurrence we must exploit the fact that for a long walk, a positive proportion of the sites that it visits will have a common divisor which is divisible by k. In order to make this precise we study the corresponding random walk on the discrete torus. The above statement then just becomes an equidistribution property for such walks. For n ∈ N, z ∈ Z^d and b ∈ B we denote the position of the random walk in Z^d corresponding to b after n steps by

Σ_n(b, z) := b_n + ··· + b_1 + z.

We view the functions Σ_n(−, z) : B → Z^d as random variables.
We will be interested in how many times during the first n steps of the walk we see a gcd > 1. The aim is to use the fact that on the discrete torus, after walking for a very long time, the past and future tend to become independent of one another. This will enable us to use results from probability theory concerning deviations from the expected value for sums of independent random variables. With this in mind we define some more random variables. For k ∈ N, n ∈ N and z ∈ Z^d let

Y^k_n(b, z) := |{1 ≤ i ≤ n : Σ_i(b, z) ≡ 0 mod k}|,

where for all z ∈ Z^d we say that z ≡ 0 mod k if k | gcd(z) or if z = 0. We view Y^k_n(−, z) as a random variable which records the number of times in the first n steps of the random walk that a common factor of k appears. Let 1 be the constant function with value 1 on B × Z^d and let

M^k_n(b, z) := min{Y^k_n(b, z), 1} = 1 if Y^k_n(b, z) ≥ 1, and 0 otherwise.

Moreover, let TZ^d_k denote the discrete d-dimensional torus of width k. We can identify

TZ^d_k ≅ {0, ..., k − 1}^d ⊂ Z^d   (2.3)

in the usual manner. Let U^k_n(b) := min{M^k_n(b, x) : x ∈ TZ^d_k}. As is customary, for measurable functions f : B → R and E ⊂ R we use the notation E[f] := ∫_B f dβ and P[f ∈ E] := ∫_B 1_{{b∈B : f(b)∈E}} dβ.

Lemma 2.5. For all k ∈ N there exists n_0 > 0 such that for all n ≥ n_0 one has E[U^k_n] > 0.

Proof. By definition we have that

E[U^k_n] = E[min{M^k_n(b, z) : z ∈ TZ^d_k}] = ∫_B min_{z∈TZ^d_k} 1_{{b∈B : Y^k_n(b,z)≥1}} dβ = β({b ∈ B : Y^k_n(b, z) ≥ 1 for all z ∈ TZ^d_k}).   (2.4)

Since the measure μ generates Z^d by assumption (B), there exist n_0 ∈ N and a finite sequence a := (a_1, ..., a_{n_0}) ∈ (supp μ)^{n_0} ⊂ (Z^d)^{n_0} such that {∑_{i=1}^n a_i : 1 ≤ n ≤ n_0} contains a copy of TZ^d_k after using the identification (2.3). Moreover,

β(C(a)) > 0,   (2.5)

where C(a) := {b ∈ B : b_i = a_i for all 1 ≤ i ≤ n_0} is the cylinder set of a.
It follows that for all b ∈ C(a), z ∈ TZ^d_k and n ≥ n_0 one has Y^k_n(b, z) ≥ 1. Hence

C(a) ⊂ {b ∈ B : Y^k_n(b, z) ≥ 1 for all z ∈ TZ^d_k} for all n ≥ n_0.

Hence the claim of the lemma follows from (2.4) and (2.5).

In the following lemma we will use the notation S for the shift map S : B → B given by S b = S(b_1, b_2, ...) = (b_2, b_3, ...).

Lemma 2.6. There exists n_0 > 0 such that for all 0 < ε < 1, z ∈ Z^d and n ≥ n_0 one has α := E[U^k_{n_0}] > 0 and

P[Y^k_n(−, z) ≤ (1 − ε) α n / (2n_0)] ≤ C_{ε,α} exp(−α ε² n / (2n_0)), where C_{ε,α} := exp(α ε² / 2).

Proof. Let n_0 ∈ N be large enough so that the conclusion of Lemma 2.5 holds for all n ≥ n_0. Let m ∈ N and suppose that m n_0 ≤ n ≤ (m + 1) n_0. It follows from the definitions that

Y^k_n(b, z) ≥ ∑_{i=1}^m M^k_{n_0}(S^{i n_0} b, Σ_{i n_0}(b, z)) ≥ ∑_{i=1}^m U^k_{n_0}(S^{i n_0} b)   (2.6)

for all z ∈ Z^d and b ∈ B. The set of random variables {U^k_{n_0} ∘ S^{i n_0}}_{i∈N} consists of pairwise independent elements. By Lemma 2.5 and the fact that β is S-invariant we have that E[U^k_{n_0} ∘ S^{i n_0}] = E[U^k_{n_0}] = α > 0 for all i ∈ N. It follows from the Chernoff bound (see [MU05, Theorem 4.5]) that

P[∑_{i=1}^m U^k_{n_0} ∘ S^{i n_0} ≤ (1 − ε) α m] ≤ exp(−α ε² m / 2).

By (2.6) we have that

P[Y^k_n(−, z) ≤ c n / n_0] ≤ P[∑_{i=1}^m U^k_{n_0} ∘ S^{i n_0} ≤ c n / n_0] for all c > 0.

Take c := α(1 − ε)/2. Then c n / n_0 ≤ α(1 − ε) m, where we use the facts that 1/2 ≤ m/(m + 1) for all m ∈ N and n/n_0 ≤ m + 1. The claim of the lemma then follows from the previous equations, where we use again the fact that m ≥ n/n_0 − 1.

2.4. The norm is contracted. In this subsection we use the results of the previous subsection to show that the norm is contracted by averaging with respect to the measures ω^k_{n,z} for large enough n. This will enable us to show that the random walks are recurrent as in Definition 2.3. Note that we follow closely [BQ12, §2]. The reason we do not cite their results directly is that we are not dealing with a group action. At the end of the subsection we will complete the proof of Proposition 2.4.
For k ∈ N, n ∈ N, z ∈ Z^d_k and b ∈ B, recall the definition of Σ^k_n(b, z) from (2.1). Note that the functions Σ^k_n(−, z) : B → Z^d_k only depend on the first n co-ordinates of b ∈ B. In other words, they are measurable with respect to the σ-algebra generated by the cylinder sets {C(a) : a ∈ (Z^d)^n, n ∈ N}. Therefore, we can consider Σ^k_n(−, z) : (Z^d)^n → Z^d_k, where Σ^k_n(a, z) := Σ^k_n(b, z) for all b ∈ C(a) is well defined. It follows from the triangle inequality and our assumption that μ satisfies (A) that

∫_{(Z^d)^n} ‖Σ^k_n(a, z)‖ dμ^{⊗n}(a) ≤ ‖z‖ + κn,   (2.7)

where κ is the first moment of μ.

Lemma 2.7. For all k ∈ N there exist 0 < c < 1, M > 0 and n′_0 > 0 such that for all z ∈ Z^d_k one has ∫_{Z^d_k} ‖x‖ dω^k_{n′_0,z}(x) < c‖z‖ + M.

Proof. Let n_0 and α be as in Lemma 2.6. Choose ε_0 > 0 small enough so that the constant C_{ε_0,α} ≤ 2. Let y := α(1 − ε_0)/(2n_0) and n ∈ N. Begin by dividing the set (Z^d)^n into two pieces as follows:

S^k_{n,z} := {a ∈ (Z^d)^n : Y^k_n(a, z) ≤ yn}, R^k_{n,z} := (Z^d)^n \ S^k_{n,z}.

We now split the integral into two corresponding pieces using the definition of the measures ω^k_{n,z}:

∫_{Z^d_k} ‖x‖ dω^k_{n,z}(x) = I_1 + I_2,

where

I_1 := ∫_{S^k_{n,z}} ‖Σ^k_n(a, z)‖ dμ^{⊗n}(a) and I_2 := ∫_{R^k_{n,z}} ‖Σ^k_n(a, z)‖ dμ^{⊗n}(a).

By Lemma 2.6 and (2.7), there exists n_0 > 0 such that for all n > n_0 and z ∈ Z^d_k we have that

I_1 ≤ μ^{⊗n}(S^k_{n,z})(‖z‖ + κn) ≤ 2 exp(−α ε_0² n / (2n_0))(‖z‖ + κn)

and

I_2 ≤ μ^{⊗n}(R^k_{n,z})((1/k^{⌈yn⌉})‖z‖ + κn) ≤ (1/k^{⌈yn⌉})‖z‖ + κn.

Hence, choosing n′_0 > n_0 large enough so that

2 exp(−α ε_0² n′_0 / (2n_0)) + 1/k^{⌈y n′_0⌉} < 1,

we get the claim of the lemma.

Corollary 2.8. For all k ∈ N there exist constants 0 < c < 1, M′ > 0 and n′_0 > 0 such that for all n ≥ n′_0 and z ∈ Z^d_k one has

∫_{Z^d_k} ‖x‖ dω^k_{n,z}(x) ≤ c^{⌊n/n′_0⌋} ‖z‖ + M′.

Proof. Let c, M and n′_0 be as in Lemma 2.7.
Let T be the operator defined for measurable functions f : Z^d_k → R by T f(z) := ∫_{Z^d_k} f dω^k_{n′_0,z}. Note that if N(z) := ‖z‖, then the conclusion of Lemma 2.7 says that T N(z) ≤ c N(z) + M. It follows that

T^i N(z) ≤ c^i N(z) + M(c^{i−1} + ··· + c + 1) ≤ c^i N(z) + 2M.

By noting that T^i N(z) = ∫_{Z^d_k} ‖x‖ dω^k_{i n′_0, z}(x), we get that

∫_{Z^d_k} ‖x‖ dω^k_{i n′_0, z}(x) ≤ c^i ‖z‖ + 2M for all i ∈ N.   (2.8)

Suppose that n ≥ n′_0 and let j ∈ {0, ..., n′_0 − 1} be such that n = i_0 n′_0 + j for some i_0 ∈ N. Note that i_0 = ⌊n/n′_0⌋. Next we write

∫_{Z^d_k} ‖x‖ dω^k_{n,z}(x) = ∫_{(Z^d)^j} ∫_{Z^d_k} ‖x‖ dω^k_{i_0 n′_0, Σ^k_j(a,z)}(x) dμ^{⊗j}(a).

Hence, using (2.7) and (2.8), we get that

∫_{Z^d_k} ‖x‖ dω^k_{n,z}(x) ≤ c^{⌊n/n′_0⌋}(‖z‖ + κj) + 2M.

Since κ and j are bounded, we may take M′ := 2M + κj, and this is the conclusion of the corollary.

We can now prove Proposition 2.4.

Proof of Proposition 2.4. Let c, M′ and n′_0 be as in Corollary 2.8 and let ε > 0 be arbitrary. Let

K := {z ∈ Z^d_k : ‖z‖ ≤ 2M′/ε}.

It follows that 1_{Z^d_k ∖ K}(z) ≤ (ε/(2M′))‖z‖ for all z ∈ Z^d_k. Hence, it follows from the conclusion of Corollary 2.8 that

ω^k_{n,z}(Z^d_k ∖ K) ≤ (ε/(2M′))(c^{⌊n/n′_0⌋}‖z‖ + M′) = (ε/(2M′)) c^{⌊n/n′_0⌋}‖z‖ + ε/2

for all n ≥ n′_0. The claim of the proposition follows as soon as n is large enough so that c^{⌊n/n′_0⌋}‖z‖/M′ < 1.

Acknowledgements. The author is greatly indebted to Ross Pinsky for his time and effort in reading an earlier draft and providing encouragement along with many detailed and helpful comments. This research was funded by ISF grants 357/13 and 871/17.

REFERENCES

[BQ12] Yves Benoist and Jean-François Quint, Random walks on finite volume homogeneous spaces, Invent. Math. 187 (2012), no. 1, 37–59. MR 2874934
[LL10] Gregory F. Lawler and Vlada Limic, Random walk: a modern introduction, Cambridge Studies in Advanced Mathematics, vol. 123, Cambridge University Press, Cambridge, 2010.
MR 2677157
[MU05] Michael Mitzenmacher and Eli Upfal, Probability and computing: randomized algorithms and probabilistic analysis, Cambridge University Press, Cambridge, 2005. MR 2144605
[Shi84] A. N. Shiryayev, Probability, Graduate Texts in Mathematics, vol. 95, Springer-Verlag, New York, 1984, translated from the Russian by R. P. Boas. MR 737192
[Spi76] Frank Spitzer, Principles of random walk, second ed., Graduate Texts in Mathematics, vol. 34, Springer-Verlag, New York-Heidelberg, 1976. MR 0388547

FACULTY OF MATHEMATICS AND COMPUTER SCIENCE, THE WEIZMANN INSTITUTE OF SCIENCE, REHOVOT, ISRAEL
E-mail address: o.g.sargent@gmail.com
https://algebra-worksheets.s3.amazonaws.com/Algebra-Word-Problems/Algebra+Word+Problems+-+Worksheet+5+-+Age+Problems.pdf
© MathTutorDVD.com

Algebra Word Problems – Lesson 5 – Worksheet 5

Algebra Word Problems Involving Age

Problem 1) Alex is fourteen years old and he is also nine years older than his sister Kristen. How old is Kristen?

Problem 2) Joe is nine years older than his brother Ed. If Ed is seventeen years old, how old is Joe?

Problem 3) Dan is fourteen years older than Marge. Eight years ago, Dan was three times as old as Marge. Find their present ages.

Problem 4) Jean is six years older than her brother Wayne. Three years from now, the sum of their ages will be thirty-eight. Find their present ages.

Problem 5) Nine years from now, Jack will be three times as old as he was eleven years ago. How old is he now?

Problem 6) Andy is seven years older than his wife Lori. If Andy and Lori's ages add up to fifty-one, how old are Andy and Lori?

Problem 7) Alex is fourteen years old and he is also nine years older than his sister Kristen. How old is Kristen?

Problem 8) Mark is six years younger than his sister Teri. If Teri is thirty-seven years old, how old is Mark?

Problem 9) Vicki is eleven years older than Chuck. Five years from now, Vicki will be twice as old as Chuck. Find their present ages.

Problem 10) Andy is nine years older than his sister Jenny. Five years from now, the sum of their ages will be forty-three. Find their present ages.

Problem 11) Seven years from now, Monte will be three times as old as she was fifteen years ago. How old is she now?

Problem 12) Myron is three years older than his wife Denise. If Myron and Denise's ages add up to sixty-five, how old are Myron and Denise?

Problem 13) Mike is seventy-one years old and he is also twenty-eight years older than his daughter Kim. How old is Kim?
Problem 14) Anika is five years older than her brother Shane. If Shane is twenty-five years old, how old is Anika?

Problem 15) Sam is twenty-six years older than Brian. Eight years from now, Sam will be three times as old as Brian. Find their present ages.

Problem 16) Paul is eight years older than his wife Lisa. Twenty years ago, the sum of their ages was twenty-four. Find their present ages.

Problem 17) Eleven years from now, Nicholas will be three times as old as he was twenty-five years ago. How old is he now?

Problem 18) Jim is four years older than his sister Kathy. If Jim and Kathy's ages add up to fifty-four, how old are Jim and Kathy?

Problem 19) John is sixty-one years old and he is also five years older than his wife Stacey. How old is Stacey?

Problem 20) Paul is twenty-two years older than his daughter Jessica. If Jessica is forty-four years old, how old is Paul?

Answers – Algebra Word Problems – Lesson 5 – Worksheet 5 – Algebra Word Problems Involving Age

Problem 1) Alex is fourteen years old and he is also nine years older than his sister Kristen. How old is Kristen?

Solution: Let A represent Alex's age and let K represent Kristen's age. From the problem we can write two equations:

A = 14
A = K + 9

Substitute 14 for A into the second equation and solve for K:

14 = K + 9
14 − 9 = K + 9 − 9
K = 5

Kristen is 5 years old. Answer: 5

Problem 2) Joe is nine years older than his brother Ed. If Ed is seventeen years old, how old is Joe?

Solution: Let J represent Joe's age and let E represent Ed's age. From the problem we can write the following equations:

J = E + 9
E = 17

Substitute 17 into the first equation for E and solve for J:

J = 17 + 9
J = 26

Joe is 26 years old. Answer: 26

Problem 3) Dan is fourteen years older than Marge. Eight years ago, Dan was three times as old as Marge. Find their present ages.
Solution: Let D represent Dan's age and let M represent Marge's age. From the problem we can write the following equations:

D = M + 14
D − 8 = 3(M − 8)

Substitute M + 14 into the second equation for D and solve for M:

M + 14 − 8 = 3(M − 8)
M + 6 = 3M − 24
24 + 6 = 3M − M
2M = 30
M = 15

Use M = 15 and solve for D:

D = M + 14 = 15 + 14 = 29

Marge is 15 years old and Dan is 29 years old. Answer: Dan: 29; Marge: 15

Problem 4) Jean is six years older than her brother Wayne. Three years from now, the sum of their ages will be thirty-eight. Find their present ages.

Solution: Let J represent Jean's age and let W represent Wayne's age. From the problem we can write the following equations:

J = W + 6
J + 3 + W + 3 = 38

Substitute W + 6 into the second equation for J and solve for W:

W + 6 + 3 + W + 3 = 38
2W + 12 = 38
2W = 26
W = 13

Use W = 13 and solve for J:

J = W + 6 = 13 + 6 = 19

Jean is 19 years old and Wayne is 13 years old. Answer: Jean: 19; Wayne: 13

Problem 5) Nine years from now, Jack will be three times as old as he was eleven years ago. How old is he now?

Solution: Let J represent Jack's age. From the problem we can write the following equation:

J + 9 = 3(J − 11)

Solve for J:

J + 9 = 3J − 33
9 = 2J − 33
42 = 2J
J = 21

Check your solution:

3(21 − 11) = 21 + 9
30 = 30

Answer: 21

Problem 6) Andy is seven years older than his wife Lori. If Andy and Lori's ages add up to fifty-one, how old are Andy and Lori?

Solution: Let A represent Andy's age and let L represent Lori's age. From the problem we can write the following equations:

A = L + 7
A + L = 51

Substitute L + 7 into the second equation for A and solve for L:

L + 7 + L = 51
2L + 7 = 51
2L = 44
L = 22

Use L = 22 and solve for A:

A = L + 7 = 22 + 7 = 29

Andy is 29 years old and Lori is 22 years old. Answer: Andy: 29; Lori: 22

Problem 7) Alex is fourteen years old and he is also nine years older than his sister Kristen. How old is Kristen?
Solution: Let A represent Alex's age and let K represent Kristen's age. From the problem we can write two equations:

A = 14
A = K + 9

Substitute 14 for A into the second equation and solve for K:

14 = K + 9
14 − 9 = K + 9 − 9
K = 5

Kristen is 5 years old. Answer: 5

Problem 8) Mark is six years younger than his sister Teri. If Teri is thirty-seven years old, how old is Mark?

Solution: Let M represent Mark's age and let T represent Teri's age. From the problem we can write the following equations:

M = T − 6
T = 37

Substitute 37 into the first equation for T and solve for M:

M = 37 − 6
M = 31

Mark is 31 years old. Answer: 31

Problem 9) Vicki is eleven years older than Chuck. Five years from now, Vicki will be twice as old as Chuck. Find their present age.

Solution: Let V represent Vicki's age and let C represent Chuck's age. From the problem we can write the following equations:

V = C + 11
V + 5 = 2(C + 5)

Substitute C + 11 into the second equation for V and solve for C:

C + 11 + 5 = 2(C + 5)
C + 16 = 2C + 10
16 = C + 10
C = 6

Use C = 6 and solve for V:

V = C + 11 = 6 + 11 = 17

Vicki is 17 years old and Chuck is 6 years old. Check your solution:

17 + 5 = 2(6 + 5)
22 = 22

Answer: Vicki: 17; Chuck: 6

Problem 10) Andy is nine years older than his sister Jenny. Five years from now, the sum of their ages will be forty-three. Find their present ages.

Solution: Let A represent Andy's age and let J represent Jenny's age. From the problem we can write the following equations:

A = J + 9
A + 5 + J + 5 = 43

Substitute J + 9 into the second equation for A and solve for J:

J + 9 + 5 + J + 5 = 43
2J + 19 = 43
2J = 24
J = 12

Use J = 12 and solve for A:

A = J + 9 = 12 + 9 = 21

Andy is 21 years old and Jenny is 12 years old. Check your solution:

12 + 5 + 21 + 5 = 17 + 26 = 43

Answer: Andy: 21; Jenny: 12

Problem 11) Seven years from now, Monte will be three times as old as she was fifteen years ago. How old is she now?
Solution: Let M represent Monte's age. From the problem we can write the following equation:

M + 7 = 3(M − 15)

Solve for M:

M + 7 = 3M − 45
45 + 7 = 2M
2M = 52
M = 26

Check your solution:

26 + 7 = 33
3(26 − 15) = 3(11) = 33

Answer: 26

Problem 12) Myron is three years older than his wife Denise. If Myron and Denise's ages add up to sixty-five, how old are Myron and Denise?

Solution: Let M represent Myron's age and let D represent Denise's age. From the problem we can write the following equations:

M = D + 3
M + D = 65

Substitute D + 3 into the second equation for M and solve for D:

D + 3 + D = 65
2D + 3 = 65
2D = 62
D = 31

Use D = 31 and solve for M:

M = D + 3 = 31 + 3 = 34

Myron is 34 years old and Denise is 31 years old. Answer: Myron: 34; Denise: 31

Problem 13) Mike is seventy-one years old and he is also twenty-eight years older than his daughter Kim. How old is Kim?

Solution: Let M represent Mike's age and let K represent Kim's age. From the problem we can write two equations:

M = 71
M = K + 28

Substitute 71 for M into the second equation and solve for K:

71 = K + 28
71 − 28 = K + 28 − 28
K = 43

Kim is 43 years old. Answer: 43

Problem 14) Anika is five years older than her brother Shane. If Shane is twenty-five years old, how old is Anika?

Solution: Let A represent Anika's age and let S represent Shane's age. From the problem we can write the following equations:

A = S + 5
S = 25

Substitute 25 into the first equation for S and solve for A:

A = 25 + 5
A = 30

Anika is 30 years old. Answer: 30

Problem 15) Sam is twenty-six years older than Brian. Eight years from now, Sam will be three times as old as Brian. Find their present age.

Solution: Let S represent Sam's age and let B represent Brian's age.
From the problem we can write the following equations:

S = B + 26
S + 8 = 3(B + 8)

Substitute B + 26 into the second equation for S and solve for B:

B + 26 + 8 = 3(B + 8)
B + 34 = 3B + 24
34 = 2B + 24
10 = 2B
B = 5

Use B = 5 and solve for S:

S = B + 26 = 5 + 26 = 31

Sam is 31 years old and Brian is 5 years old. Check your solution:

31 + 8 = 39
3(5 + 8) = 3(13) = 39

Answer: Sam: 31; Brian: 5

Problem 16) Paul is eight years older than his wife Lisa. Twenty years ago, the sum of their ages was twenty-four. Find their present ages.

Solution: Let P represent Paul's age and let L represent Lisa's age. From the problem we can write the following equations:

P = L + 8
P − 20 + L − 20 = 24

Substitute L + 8 into the second equation for P and solve for L:

L + 8 − 20 + L − 20 = 24
2L − 32 = 24
2L = 56
L = 28

Use L = 28 and solve for P:

P = L + 8 = 28 + 8 = 36

Paul is 36 years old and Lisa is 28 years old. Answer: Paul: 36; Lisa: 28

Problem 17) Eleven years from now, Nicholas will be three times as old as he was twenty-five years ago. How old is he now?

Solution: Let N represent Nicholas's age. From the problem we can write the following equation:

N + 11 = 3(N − 25)

Solve for N:

N + 11 = 3N − 75
11 = 2N − 75
2N = 86
N = 43

Answer: 43

Problem 18) Jim is four years older than his sister Kathy. If Jim and Kathy's ages add up to fifty-four, how old are Jim and Kathy?

Solution: Let J represent Jim's age and let K represent Kathy's age. From the problem we can write the following equations:

J = K + 4
J + K = 54

Substitute K + 4 into the second equation for J and solve for K:

K + 4 + K = 54
2K + 4 = 54
2K = 50
K = 25

Use K = 25 and solve for J:

J = K + 4 = 25 + 4 = 29

Jim is 29 years old and Kathy is 25 years old. Answer: Jim: 29; Kathy: 25

Problem 19) John is sixty-one years old, and he is also five years older than his wife Stacey. How old is Stacey?

Solution: Let J represent John's age and let S represent Stacey's age.
From the problem we can write two equations:

J = 61
J = S + 5

Substitute 61 for J into the second equation and solve for S:

61 = S + 5
61 − 5 = S + 5 − 5
S = 56

Stacey is 56 years old. Answer: 56

Problem 20) Paul is twenty-two years older than his daughter Jessica. If Jessica is forty-four years old, how old is Paul?

Solution: Let P represent Paul's age and let J represent Jessica's age. From the problem we can write the following equations:

P = J + 22
J = 44

Substitute 44 into the first equation for J and solve for P:

P = 44 + 22
P = 66

Paul is 66 years old. Answer: 66
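The substitution method used throughout these solutions can be double-checked numerically. The Python snippet below (a supplement added here, not part of the original worksheet) brute-forces a few of the answers by testing every plausible age against the conditions of the problem:

```python
# Sanity checks for a few of the answers above, done by brute-force
# search instead of algebra. (Supplementary; not part of the worksheet.)

# Problem 3: Dan is 14 years older than Marge; eight years ago Dan was
# three times as old as Marge.
marge = next(m for m in range(9, 120) if (m + 14) - 8 == 3 * (m - 8))
dan = marge + 14
assert (marge, dan) == (15, 29)

# Problem 5: nine years from now, Jack will be three times as old as
# he was eleven years ago.
jack = next(j for j in range(12, 120) if j + 9 == 3 * (j - 11))
assert jack == 21

# Problem 16: Paul is eight years older than his wife Lisa; twenty
# years ago the sum of their ages was twenty-four.
lisa = next(a for a in range(21, 120) if (a + 8 - 20) + (a - 20) == 24)
assert (lisa, lisa + 8) == (28, 36)

print("all answers check out")
```

Because each problem reduces to a linear equation, the search finds exactly one age in the tested range, matching the answer key.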
International Journal of Pure and Applied Zoology

Opinion Article – International Journal of Pure and Applied Zoology (2024) Volume 12, Issue 5

The role of keystone species in ecosystem stability: Evidence from tropical rainforest ecosystems

Czerny Kin
Institute of Veterinary Medicine, Division of Microbiology and Animal Hygiene, Georg-August-University, Germany

Corresponding Author: Czerny Kin, Institute of Veterinary Medicine, Division of Microbiology and Animal Hygiene, Georg-August-University, Germany. E-mail: kinczerny79@gwdg.de

Received: 02-Sep-2024, Manuscript No. IJPAZ-24-146565; Editor assigned: 03-Sep-2024, PreQC No. IJPAZ-24-146565 (PQ); Reviewed: 19-Sep-2024, QC No. IJPAZ-24-146565; Revised: 24-Sep-2024, Manuscript No. IJPAZ-24-146565 (R); Published: 30-Sep-2024, DOI: 10.35841/2420-9585-12.5.256

Introduction

Tropical rainforests, often referred to as the "lungs of the Earth," are among the most diverse and complex ecosystems on the planet. These lush environments, with their high levels of biodiversity and intricate ecological interactions, are sustained by a web of species that work together to maintain ecosystem stability. Among these species, keystone species play a pivotal role in shaping and sustaining the structure and function of their ecosystems. This article explores the concept of keystone species, their significance in tropical rainforest ecosystems, and the evidence supporting their role in maintaining ecosystem stability.

Understanding Keystone Species

The concept of keystone species was introduced by the ecologist Robert Paine in the 1960s.
Keystone species are those whose impact on their ecosystem is disproportionately large relative to their biomass or abundance. Their presence and activities have a critical influence on the structure, composition, and functioning of their ecosystems. The removal or decline of a keystone species can lead to significant changes in ecosystem dynamics, often resulting in reduced biodiversity and altered ecosystem processes [2, 3].

Keystone Species in Tropical Rainforests

Tropical rainforests are home to a variety of keystone species, each contributing uniquely to ecosystem stability through their interactions with other organisms. Here are some key examples:

Large Predators

In tropical rainforests, large predators such as jaguars and harpy eagles act as keystone species by regulating prey populations. These apex predators help maintain the balance of herbivore populations, which in turn affects the structure and composition of vegetation. For example, jaguars prey on herbivorous mammals, preventing overbrowsing of vegetation and promoting plant diversity.

Fruit-Eating Animals

Frugivores, including various species of birds, bats, and monkeys, are crucial keystone species in tropical rainforests. These animals consume fruits and disperse seeds across vast areas. Seed dispersal is essential for plant regeneration and forest dynamics, as it facilitates the growth of new plants and maintains plant diversity. Without frugivores, many tree species would struggle to reproduce and maintain their populations [5, 6].

Keystone Plants

Some plant species also function as keystone species in tropical rainforests. For instance, certain tree species produce fruits or flowers that are a primary food source for a wide range of animals. Additionally, plants like strangler figs play a keystone role by providing vital resources and habitat for various species, particularly in periods of food scarcity.
Pollinators

Pollinators such as bees, butterflies, and hummingbirds are critical keystone species in tropical rainforests. They facilitate the reproduction of flowering plants by transferring pollen, which is necessary for seed production. The loss of pollinators can lead to a decline in plant reproduction and, consequently, affect the entire food web dependent on these plants.

Evidence of Keystone Species Impact

Numerous studies have documented the significant impact of keystone species on ecosystem stability in tropical rainforests:

Predator-Prey Dynamics

Research on large predators like jaguars in the Amazon rainforest has shown how their presence influences the structure of prey populations. By controlling herbivore numbers, these predators prevent overgrazing and promote plant diversity. Studies have demonstrated that the removal of apex predators can lead to a cascade of effects, including declines in plant species and alterations in forest structure.

Seed Dispersal and Plant Diversity

The role of frugivores in seed dispersal has been extensively studied in tropical rainforests. For example, research on howler monkeys and fruit bats in Central America has highlighted their role in maintaining plant diversity by dispersing seeds of various tree species. The absence of these animals can result in reduced seedling establishment and a decline in plant diversity.

Pollination and Ecosystem Health

Pollinator populations have been closely linked to plant reproductive success in tropical rainforests. Studies have shown that declines in pollinator populations, such as bees and butterflies, lead to reduced fruit and seed production in plants. This decline affects not only plant species but also the entire ecosystem that relies on these plants for food and habitat.

Implications for Conservation

The recognition of keystone species underscores the importance of conserving not only individual species but also the ecological roles they play.
Conservation efforts in tropical rainforests must consider the preservation of keystone species to maintain ecosystem stability and resilience.

Habitat Protection

Protecting the habitats of keystone species is essential for sustaining their populations and, consequently, the health of the entire ecosystem. This includes safeguarding large tracts of rainforest, maintaining connectivity between forest patches, and mitigating threats such as deforestation and habitat fragmentation.

Ecosystem Management

Effective management practices that account for the roles of keystone species can help maintain ecosystem functions. For example, efforts to protect apex predators and ensure their prey populations are balanced contribute to forest health. Similarly, conserving pollinator habitats and supporting seed dispersal processes can enhance plant diversity and forest regeneration.

Addressing Climate Change

Climate change poses a significant threat to tropical rainforests and their keystone species. Changes in temperature, precipitation patterns, and extreme weather events can disrupt ecological interactions and impact keystone species. Addressing climate change through mitigation and adaptation strategies is crucial for safeguarding tropical rainforest ecosystems [9, 10].

Conclusion

Keystone species are integral to the stability and functioning of tropical rainforest ecosystems. Their influence on predator-prey dynamics, seed dispersal, plant diversity, and pollination highlights their importance in maintaining ecosystem health. As we face increasing threats to these vital ecosystems, recognizing and protecting keystone species becomes essential for ensuring the resilience and sustainability of tropical rainforests. By understanding and preserving the roles of keystone species, we can better safeguard the intricate web of life that sustains these extraordinary ecosystems.

References

McCollum, S.A., and Leimberger, J.D., 1997.
Predator-induced morphological changes in an amphibian: predation by dragonflies affects tadpole shape and color. Oecologia, 109: 615-621.

Williams, B.K., Rittenhouse, T.A., and Semlitsch, R.D., 2008. Leaf litter input mediates tadpole performance across forest canopy treatments. Oecologia, 155: 377-384.

Milotic, D., Milotic, M., and Koprivnikar, J., 2017. Effects of road salt on larval amphibian susceptibility to parasitism through behavior and immunocompetence. Toxicol., 189: 42-49.

Straus, A., Reeve, E., Randrianiaina, R.D., Vences, M., and Glos, J., 2010. The world's richest tadpole communities show functional redundancy and low functional diversity: Ecological data on Madagascar's stream-dwelling amphibian larvae. BMC Ecol., 10: 1-10.

Behringer, D.C., and Duermit-Moreau, E., 2021. Crustaceans, one health and the changing ocean. Invertebr. Pathol., 186: 107500.

Gess, R.W., and Whitfield, A.K., 2020. Estuarine fish and tetrapod evolution: Insights from a Late Devonian (Famennian) Gondwanan estuarine lake and a southern African Holocene equivalent. Rev., 95: 865-888.

Colbert, E.H., 1965. The appearance of new adaptations in Triassic tetrapods. J. Zool., 14: 49-62.

Ferner, K., and Mess, A., 2011. Evolution and development of fetal membranes and placentation in amniote vertebrates. Respir Physiol Neurobiol., 178: 39-50.

Davit-Béal, T., Tucker, A.S., and Sire, J.Y., 2009. Loss of teeth and enamel in tetrapods: fossil record, genetic data and morphological adaptations. Anat., 214: 477-501.

Kuraku, S., 2021. Shark and ray genomics for disentangling their morphological diversity and vertebrate evolution. Biol., 477: 262-272.
Session 2
Combinatorics. Part I

Paul Zeitz

Sneak Preview. You may have seen the bumper sticker that says "Mathematicians Count Too." That doesn't begin to tell the story. Combining techniques from the "stone age" (when counting was done, well, with stones) with techniques from the "computer age" (when numbers are represented by binary digits), we can boldly say that the subject of combinatorics is now in its "golden age." Through this session you will get a glimpse of the astounding world of counting: permutations and combinations, factorials, "menu-type" encodings, matchmaking and honeymooning, partitions and complements, multinomial coefficients along the Mississippi, dogs and biscuits, balls in urns, sororities of numbers and diagonals refusing to intersect . . . will all swirl in the "magical" pool of combinatorics. Unlike magic, however, revealing the "tricks" will not leave you disappointed. You will want to make these new counting techniques your own and go forth and slay your own dragons.

1. Two Counting Conundrums

"Counting" sounds like a babyish topic: its technical name is enumerative combinatorics. This branch of math starts slowly, with no prerequisites at all, but quickly moves into very subtle problems that require good imagination and clarity of thought. Take, for instance, the following classic:

Figure 1. How many regions in the circle?

Problem 1. There are n points in a circle, all joined with line segments. Assume that no three (or more) segments intersect in the same point. How many regions inside the circle are formed this way?

The reader can quickly verify that the answers for 1 through 5 points are correspondingly 1, 2, 4, 8, and 16 regions, all powers of 2 (cf. Fig. 1). Our intuition is screaming: "the answer for n points is 2^(n−1) regions!" Alas, this is false, as one finds out by diligently counting the 31 regions in Figure 2a.[1]

Figure 2. Breaking the pattern vs.
breaking the rules

As a high school student, I was fascinated by this problem: I couldn't stop thinking about it until I finally solved it! That's how strong a hold a mysterious counting puzzle can have on a curious youth. Let's not spoil the fun for you either: I will leave this problem for you to struggle with and possibly revisit it in a later session. Meanwhile, our goal for the present session will be the solution of a 1995 problem from the Czech and Slovak National Olympiad.

Problem 2. Do there exist 10,000 10-digit numbers divisible by 7, all of which can be obtained from one another by a reordering of their digits?

Before we can attack this problem (which looks like a number theory question, even though it really isn't), we need to start at the beginning. Counting involves the four basic operations of addition, subtraction, multiplication, and division. In order to count properly, you need to know which operation to do and when to do it. It turns out that multiplication is the easiest of the four, so we will start with it.

2. Multiplication, Menus, and Encoding

2.1. Menus make you multiply. Here's an easy question:

Exercise 1. A taqueria sells burritos with the following fillings: pork, grilled chicken, chicken mole, and beef. Burritos come either small, medium, or large, with or without cheese, and with or without guacamole. How many different burritos can be ordered?

Solution: The answer is, of course, 4 × 3 × 2 × 2 = 48 because each different burrito is uniquely determined by making four decisions: filling, size, cheese, and guacamole, and those decisions involve, respectively, 4, 3, 2, and 2 choices. □

[1] Figure 2b displays 3 diagonals intersecting in a single point, which is not allowed. But even here, you will not count the "expected" 32 regions!
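The menu count in Exercise 1 is easy to verify by explicit enumeration. The following Python sketch (an illustration added here, not from the book) generates every burrito as one outcome of the 4-stage process:

```python
# Exercise 1 verified by enumeration: itertools.product walks the
# 4-stage "menu" and yields every burrito exactly once.
# (Illustrative sketch; the option lists are just labels.)
from itertools import product

fillings = ["pork", "grilled chicken", "chicken mole", "beef"]
sizes = ["small", "medium", "large"]
cheese_options = [True, False]
guacamole_options = [True, False]

burritos = list(product(fillings, sizes, cheese_options, guacamole_options))
assert len(burritos) == 4 * 3 * 2 * 2 == 48
```

Because `product` takes one iterable per stage, the length of its output is automatically the product of the stage sizes, which is exactly the menu principle.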
Any time we can use a menu analogy, we multiply. More precisely, PST 14. If the thing we are counting is the outcome of a multi-stage process, then the number of outcomes is the product of the number of choices for each stage. We can solve many problems with this approach, as long as we can carefully formulate the outcomes as a “menu” process. For the following, recall what sets and subsets are: you can think of them as collections and subcollections of objects. Exercise 2. How many subsets does a set with 8 elements have, including the empty set and the whole set itself? Solution: Can we make choosing a subset act like ordering a burrito? We need to decide which members of the set will be in the subset, and for each element this is a simple yes/no question. If the members of the set are a, b, c, d, e, f, g, and h, we need to merely ask 8 questions: “Is a in the subset?”, “Is b in the subset?”, “Is c in the subset?”, etc. So the number of ways of performing this 8-stage process is just 2 × 2 × 2 × 2 × 2 × 2 × 2 × 2 = 28 = 256. □ Exercise 3. Given a pool of 30 students, how many ways can we choose a 3-person government consisting of a president, vice-president, and treasurer? Solution: We must make three decisions: who is president, who is vice-president, and who is treasurer. We do not need to decide the three positions in this order; we could instead first choose the treasurer, say. Since the order is not counted, we will fix it using alphabetical order: president, then treasurer, and finally vice-president. Since there are 30 students, there are 30 possible choices for the first decision (president), but then there are only 29 left for the second decision (treasurer), and finally 28 for vice-president. In all, there will be 30×29×28 possible governments. □ Exercise 4. In Exercise 3, we tacitly assumed that no one could hold more than one office. Verify that if we allowed this, the answer would be 303. 2.2. Permutations and factorials. 
Exercise 3 used a very common counting entity known as a permutation. We denote the number of permutations of n objects taken k at a time by the symbol P(n, k). It is the answer to the question:

Problem 3. How many different ways can we choose k different things from a set of n objects, where the order of choice matters?

Solution: Our previous observations generalize to the formula

P(n, k) = n(n − 1)(n − 2) · · · (n − k + 1),

because there are n choices for the first thing, (n − 1) for the second, etc., for a total of k terms in the product. □

As a reality check, compare with the answer to Exercise 3: the number of governments was simply P(30, 3) = 30(30 − 1)(30 − 2). We get an important special case when k = n:

P(n, k) = P(n, n) = n(n − 1)(n − 2)(n − 3) · · · 1.

You probably know the more common notation for this: it is n!, called n factorial. This simply counts the number of ways that n different objects can be arranged in a row.[2] For example, the number of ways we can permute the letters of ERDŐS is 5!, since there are 5 possibilities for the first letter, 4 for the second, etc.

Exercise 5. If you haven't done so already, make a table of n! for n = 1, 2, . . . , 10. You should memorize it at least up to n = 7 and passively recognize the rest.

Exercise 6. 10 boys and 9 girls sit in a row of 19 seats. How many ways can this be done if (a) all boys sit next to each other and all girls sit next to each other? (b) each child has only neighbors of the opposite sex?

First, we warm up by solving an easier question: how many seating arrangements with no restrictions? This is just 19!. Now for the actual problem.

Solution: We need to make 3 main decisions in part (a). Either the seating will have the boys on the left or on the right; then we need to seat the boys and then the girls. The first decision has 2 possible answers; the second and third have 10! and 9!, respectively. So the answer is 2 · 10! · 9!.
□

Since there are more boys than girls, in part (b) the seating has to be boy-girl-boy-. . . -girl-boy. But we still need to seat the individual kids. As before, there are 10! possible seating arrangements for the boys and 9! arrangements for the girls. Since we need to seat boys and girls to get a single seating arrangement, this is a 2-stage process with 10! · 9! outcomes. □

[2] Such rearrangements will play a prominent role in the Rubik's Cube sessions, where twists of the faces are represented by permutations of the facelets.

2.3. If you encode, are you a spy? I often call the menu/multiplication method "encoding", because it is sometimes very helpful to think like a computer programmer, organizing information compactly using simple coding to represent outcomes. For example, we could represent each subset in Exercise 2 with an 8-digit binary[3] number, with a 1 in position k if and only if the kth element was in the subset. Thus, the empty subset would be encoded by 00000000, while the subset consisting of the last two elements by 00000011.

Exercise 7. How many ways can you choose a team from 11 people where the team must have at least one person and must have a designated captain?

Solution: First, line up the 11 people in an arbitrary way (for example, by height or in alphabetical order). We will encode the team with an 11-digit binary number where digit k is a 1 if and only if person k is on the team. Since the team has at least one person, there has to be at least one digit that is 1. Underline one of these 1's to represent the captain. How many ways can we perform this encoding? There are just 11 choices for the underlined digit 1. The remaining 10 digits of the 11-digit binary string have no restrictions; each choice of 0's and 1's yields a different subset of non-captains to join the captain.
For example, a team where person #4 is the captain and the only person on the team is represented by 00010000000, while 00110100000 encodes a 3-person team with the same captain as before, joined by persons #3 and #6. It should be clear that the number of encodings is 11 × 2^10. □

Exercise 8. In a traditional village, there are 10 young men and 10 young women. The village matchmaker arranges all the marriages. In how many ways can he pair off the 20 young people? Assume (the village is traditional) that all marriages are heterosexual, i.e., a marriage is a union of a male and a female (male-male and female-female unions are not allowed).

Figure 3. Which bride do I get?

[3] Instead of the 10 decimal digits 0,...,9, the binary system has only two digits: 0 and 1.
There are a number of different approaches; we will first give a menu-style solution and return to this problem later with other methods (cf. p. 40). Solution: Once again, it helps to visualize an effective matching process. In a similar way to Exercise 8, start by lining up all 20 people in some arbitrary order. Everyone has to get married; so the matchmaker points to the first person and chooses for him or her a mate from one of the other people in line. Then these two persons leave (perhaps to go on a honeymoon). This first decision had 19 possible outcomes. Now there are 18 frightened young people remaining in line. The match-maker repeats the process, pointing at the first person in line and choosing a mate from the 17 people behind him or her in line. This process goes on until two persons remain: they are automatically mated. There will be 19 · 17 · 15 · · · 3 · 1 possible outcomes for marrying offthe entire village in this sex-blind way. □ You may wonder – in fact, you should wonder – if this method counts all different marriages and counts each outcome once. This is crucial for any counting problem. Here’s a way to see why it works. We need some ordering mechanism; let’s use alphabetical order. Imagine a particular outcome. This is a collection of 10 pairs of married couples. Within each couple, order the spouses alphabetically. Thus if Pat marries Dana4, we call this couple “Dana-Pat,” not “Pat-Dana.” 4 Of course, we may assume that no two people have the same name; if they did, we simply change one of them. 3. ADDITION AND PARTITION 31 Then, we list the couples in alphabetical order of the first name of each couple. For example, if we only had six people named A, B, C, D, E, and F, one possible list might be A-E, B-C, and D-F. Note that the first couple in the list will always be one starting with A, and the next couple will start with the “lowest” unused letter, etc. 
It should be clear that this is a one-to-one encoding of all possible marriage arrangements. Certainly, each arrangement (for the entire village) will give rise to a different list. And likewise, each list (that obeys the alphabetical order rules) will give rise to a different marriage arrangement of the village. Finally, it should be clear that this encoding method is equivalent to our original argument. PST 16. When creating and using an encoding method, you must verify that there is a one-to-one correspondence between the set of encodings and all possible outcomes in the problem, i.e., that every encoding corresponds to exactly one possible outcome, and every possible outcome corresponds to exactly one encoding. 3. Addition and Partition 3.1. The limitations of multiplication. In Exercise 3, each decision in a multi-stage process influenced the number of decisions for the next stage. This is a pretty common situation, and easy to handle, as long as the number of decisions required at each stage stays constant. Exercise 10. How many even 3-digit numbers have no repeating digits? Before we solve this problem, let’s look at some simpler questions. First of all, how many 3-digit numbers are there? The answer is 900, which we can see in at least two ways: • Just count the numbers from 100 to 999, inclusive: there is a total of 999 −100 + 1 numbers. • There are 9 choices for the first digit (no zero allowed) and 10 for each of the other two, yielding 9 × 10 × 10. Next, let’s ask the slightly harder question: how many 3-digit numbers have no repeating digits? There are 9 choices for the first digit, as before, but this means that now there will be 9 (instead of 10) choices for the second digit and 8 (instead of 10) choices for the third, yielding 9 × 9 × 8 = 648 numbers. Finally, let’s tackle the original question. How about this argument: • Start at the rightmost digit; to ensure that the number is even, there are 5 choices (0, 2, 4, 6, or 8). 
• There will now be 9 (instead of 10) choices for the middle digit.
• There will now be 7 (instead of 9) choices for the first digit.
• So the answer is 5 × 9 × 7 = 315.

This answer is not correct, due to a subtle error that stems from a common beginner's mistake: trying to convert every problem into a multiplication. Can you spot the flaw? It was clever to start with the rightmost digit, but certain decisions will affect the number of choices for future stages in different ways. If the rightmost digit is zero, then the first digit will have fewer restrictions, since it is never zero! The correct way to do this is to

PST 17. Break the outcomes into several separate, mutually exclusive cases, i.e., every possible outcome must belong to exactly one of these cases.

Solution to Exercise 10: All outcomes boil down to whether 0 is the last digit or not; so only two mutually exclusive cases are needed.

Case 1: The rightmost digit is zero. In this case, we have only two decisions to make. There are 9 digits left, and we are free to use any of them for the other two positions. So there are 9 choices for (say) the first digit, and then 8 choices for the second digit, a total of 9 × 8 = 72 choices.

Figure 4. Two mutually exclusive cases: the last digit is 0, or the last digit is one of 2, 4, 6, 8.

Case 2: The rightmost digit is not zero. Then there are 4 choices for this digit (2, 4, 6, or 8). Again, 9 digits are left to use (including zero). Since we cannot use zero for the first digit, let's decide the first digit next: there will be 8 choices. Now there are 8 digits left for the second digit (since zero was left in the pool). So there will be 4 × 8 × 8 = 256 choices in this case.

The answer, then, is the sum 72 + 256 = 328 numbers. □

3.2. Partition leads to addition. We had to add to get the answer in Exercise 10 above because the outcomes split into two separate cases.
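Case analyses like this are easy to double-check exhaustively; a quick brute-force sketch in Python:

```python
# Count even 3-digit numbers whose three digits are all distinct
count = sum(
    1
    for n in range(100, 1000)
    if n % 2 == 0 and len(set(str(n))) == 3  # even, no repeated digit
)
print(count)  # 328, matching the two-case analysis 72 + 256
```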
A decomposition of the entire set into subsets that are pairwise disjoint (as in PST 17) is called a partition: imagine partitioning an office room into smaller cubicles. Partitioned outcomes are counted by adding. More precisely:

PST 18. Whenever we partition the outcomes of something into several cases, each requiring different counting methods, we add the number of outcomes in each case to get the total number of outcomes.

Note that this is not the same as multiple stages. In the previous exercise each case involved multiple stages. In general, you multiply when each outcome has a menu-inspired encoding. You add if you are forced to partition the outcomes into separate cases; then for each case you may use completely different strategies (for example, some may involve encoding, and others may not).

Exercise 11. An n-bit string is an n-digit binary number, i.e., a string of just zeros and ones. How many 10-bit strings contain exactly 5 consecutive zeros (no more, no less)? For example, we would not count 0000000111 (too many consecutive zeros), but we would count 1110000011 and 0011000001.

Solution: We will break the problem into 6 cases, depending on where the block of 5 zeros lives inside the larger 10-bit string. The leftmost position of the 5-zero block can be position 1, 2, 3, 4, 5, or 6. In case 1, we start with 5 zeros; then we must have a 1 (to prevent excess zeros), and then we can have anything we want for the remaining 4 digits: 0 0 0 0 0 1 ? ? ? ?. So we have just 2^4 "free" choices. Case 6 works exactly the same way, by symmetry. However, cases 2–5 are different because now the 5-digit zero block "floats" inside the larger string and needs to be surrounded by a 1 on its left and on its right. For instance, case 4 can be represented by ? ? 1 0 0 0 0 0 1 ?. The remaining 3 digits are now free, so each of these cases has 2^3 outcomes. Hence the total number of outcomes is the sum 2^4 + 2^3 + 2^3 + 2^3 + 2^3 + 2^4 = 64.
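The six-case partition can be verified by checking all 2^10 strings directly. In the sketch below, "exactly 5 consecutive zeros" is read as: some maximal run of zeros has length exactly 5 (a run of 5 bounded by 1's or by the ends of the string):

```python
import re

def has_exact_five_zero_run(s):
    # True if some maximal run of zeros in s has length exactly 5
    return any(len(run) == 5 for run in re.findall(r"0+", s))

count = sum(
    1
    for n in range(2**10)
    if has_exact_five_zero_run(format(n, "010b"))
)
print(count)  # 64, matching 2^4 + 4 * 2^3 + 2^4
```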
□ Note that the cases of your partition cannot overlap at all. If they do, we will need to employ subtraction to fix the overcounting. This is the hardest operation of all, which we will discuss in depth in a later session. However, in very simple situations, we can use subtraction as a simple consequence of partitioning.

3.3. Counting the complement is very different from "counting on a compliment."

Exercise 12. Three different flavors of pie are available, and seven children are each given a slice of pie in such a way that at least two children get different flavors. How many ways can this be done?

Solution: We need to be clear on the meaning of "different" here. In human societies, children tend to be respected as individuals; so let's give each child a number and each flavor a letter. Then we can encode each outcome with a 7-letter string that uses the three letters. For example, a a a b b c a represents the outcome where children #1–3 got flavor a, children #4–5 got b, etc. Thus each different 7-letter string uniquely encodes a different outcome.

Let's first not worry about the condition that at least two kids get different flavors. By the simplest menu reasoning, there are 3^7 different strings. This includes outcomes with different flavors, as well as outcomes without different flavors. How many of the latter are there? Well, the only way that no two kids have a different flavor is if all the kids get the same flavor, and there are only 3 such outcomes (namely, a a a a a a a, b b b b b b b, c c c c c c c). Thus we can break up all outcomes into two cases: all flavors the same, and not all flavors the same. This is a partition of all outcomes, so the total must be 3^7. Consequently, the number of outcomes where at least two kids get different flavors (i.e., not all flavors the same) is 3^7 − 3. □

PST 19.
The method above, called counting the complement, is used when we can partition the total set of outcomes into the things we are interested in, and the rest (its "negation" or complement). If the total is easy to count and the negation is easy to count, then we count the complement and our answer is just the difference of the two.

Figure 5a depicts the general situation with a subset A and its complement A^c. When you count the complement A^c, you usually get a number which is easy to find or small compared to the total. We took advantage of this possibility in Exercise 12, where |A^c| = 3 and |A| = 3^7 − 3.

A different application of partitioning arises when we can divide the set of total outcomes into two sets that have the same number of elements. The following is a nice example, which first appeared as a problem in the Bay Area Math Meet (BAMM).

Exercise 13 (BAMM). How many subsets of the set {1, 2, 3, 4, . . . , 30} have the property that the sum of their elements is greater than 232?

Solution: Notice that in the set S = {1, 2, 3, 4, . . . , 30} the sum of all the elements is 1 + 2 + 3 + · · · + 30 = 30 · 31/2 = 465. Let A be a subset of S, let A^c denote the complement of A (the elements of S which are not in A), and let Σ(X) denote the sum of the elements of any set X. Then Σ(A) plus Σ(A^c) must equal 465. Because 465 = 232 + 233, if Σ(A) > 232, then Σ(A^c) ≤ 232. For instance, if A = {2, 4, 6, . . . , 30}, then A^c = {1, 3, 5, . . . , 29} (cf. Fig. 5b) and Σ(A) = 240 > 232 > 225 = Σ(A^c).

Figure 5. Complements in general (a) and twin subsets, e.g., {2, 4, 6, . . . , 30} = A ↔ A^c = {1, 3, 5, . . . , 29} (b).

In other words, there is a one-to-one correspondence between subsets whose element sum is greater than 232 and subsets whose element sum is not (namely, A ↔ A^c). Hence the number of subsets whose element sum is greater than 232 is exactly half of the total number of subsets of S. But the number of subsets of S is 2^30; so the answer is 2^29. □

4. Division: A Cure for Uniform Overcounting

4.1. The Mississippi formula. The number of different permutations of ERDŐS is 5!, as we saw on page 28. But

Exercise 14. How many different permutations does GAUSS have?

Solution: It is easy to see that 5! is too large by exactly a factor of 2, because GAUSS has two letters that are the same. If we temporarily distinguish the two S's by using subscripts, we see that 5! counts the permutations AS1GUS2 and AS2GUS1 as two different things when, in fact, they are indistinguishable (since the letters do not actually have subscripts). So each of the 5! = 120 different (subscripted) permutations can be arranged into two columns where in the first column, S1 is to the left of S2, and in the second column it is the other way around. Hence the correct answer is 5!/2 = 60. □

Suppose we now wished to

Exercise 15. Count the different permutations of RAMANUJAN.

Solution: This 9-letter name has 3 A's and 2 N's, so lots of overcounting occurs if we temporarily distinguish these letters and start with a provisional answer of 9!. Focus on one particular permutation, for example A2MA1RJN2UN1A3. If we just permute the subscripts for each letter, we get a new rearrangement of the 9 letters, but one that is indistinguishable without subscripts. For example, one such permutation is A1MA3RJN1UN2A2. How many such rearrangements are there? All we need to do is permute the 3 subscripts of the A's (in 3! = 6 ways) and then permute the 2 subscripts of the N's (in 2! = 2 ways), while leaving all the other letters in place. There are 3! × 2! = 12 different subscripted rearrangements that appear the same without subscripts. Hence the number of distinguishable permutations is 9!/(3! 2!). □

This is called the Mississippi formula, because it can be used to count the number of distinguishable permutations of the word MISSISSIPPI. The answer is, of course, 11!/(4! 4! 2!).
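The Mississippi formula is easy to confirm both by formula and, for the smaller words, by literally generating and deduplicating all permutations. A sketch (function name is ours):

```python
from itertools import permutations
from math import factorial
from collections import Counter

def distinct_permutations(word):
    # Mississippi formula: n! divided by k! for each group of repeated letters
    n = factorial(len(word))
    for k in Counter(word).values():
        n //= factorial(k)
    return n

print(distinct_permutations("GAUSS"))        # 60
print(distinct_permutations("RAMANUJAN"))    # 30240
print(distinct_permutations("MISSISSIPPI"))  # 34650

# Brute-force cross-check: deduplicate all letter orderings
assert len(set(permutations("GAUSS"))) == 60
assert len(set(permutations("RAMANUJAN"))) == 30240
```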
If we wanted to, we could include the M, which appears only once, and write instead 11!/(4! 4! 2! 1!), which of course leaves the value unchanged. Expressions like the fraction above are called multinomial coefficients. We'll see why, shortly.

4.2. Binomial coefficients. If there is such a thing as a multinomial coefficient, it must have a simpler cousin. Let's look at a simple two-letter Mississippi problem:

Exercise 16. How many 10-bit strings have exactly 4 zeros?

Solution: This is merely asking us to count the number of ways of arranging 4 zeros and 6 ones in a string. By the Mississippi formula, it is

(1) 10!/(4! 6!).

Alternatively, we could start with 10 blank places and then choose locations for each zero, with the understanding that the remaining places will be occupied by ones. For the first zero, there are 10 possible places. The next zero has 9 possible locations. The third and fourth zeros have, respectively, 8 and 7 possible slots. By the menu principle, this 4-stage process has 10 · 9 · 8 · 7, or P(10, 4), possible outcomes. However, the order of choice doesn't matter. For example, we could have picked place numbers 4, 9, 2, 3, in order, resulting in zeros at those locations. But we could have also chosen 9, 2, 3, 4, which would give the same result. So the product 10 · 9 · 8 · 7 has overcounted by a factor of 4!, and the actual answer is

(2) (10 · 9 · 8 · 7)/4!.

It is easy to see that this expression is equal to (1) above. □

Expressions such as (1) are called binomial coefficients, denoted by

C(n, k) = n!/(k! (n − k)!)

and pronounced as "n choose k". Exercise 16 demonstrated the following facts about binomial coefficients.

Theorem 1 (Binomial Coefficients). The quantity C(n, k)
(a) counts the number of ways to permute an n-bit string consisting of k zeros and n − k ones,
(b) also counts the number of different ways to choose k places out of an n-place string where the order of choice doesn't matter, and
(c) satisfies C(n, k) = P(n, k)/k! = n(n − 1) · · · (n − k + 1)/k!.

Theorem 1(b) introduces a very important concept. The selection of k "places" from a pool of n places where order is not important is also called a combination, often to distinguish it from a permutation (where order counts). This is somewhat old-fashioned terminology, and we have always found it confusing. We prefer the more rigorous interpretation that

Theorem 1′. C(n, k) counts the number of different k-element subsets chosen from an n-element set.

Note that for sets, order doesn't matter. For example, the sets {a, b, c} and {b, a, c} are the same. This subset interpretation has many interesting consequences. Here are a few for you to verify. If you have trouble, read the solution below. Try to minimize, if you can, use of formulas analogous to (1) on page 36 and algebraic manipulation. Instead, try to rely on Theorem 1′, that is, describe the involved quantities as certain subsets of a bigger set.

Theorem 2 (Properties of Binomial Coefficients).
(a) For any nonnegative integers n and r with r ≤ n, C(n, r) = C(n, n − r).
(b) Because C(n, n) = C(n, 0) = 1, the only logical value for 0! is 1.
(c) For any positive integer n with r < n, C(n, r) + C(n, r + 1) = C(n + 1, r + 1).
(d) C(n, 0) + C(n, 1) + C(n, 2) + · · · + C(n, n) = 2^n for any positive integer n.

Concrete "solution": In part (a), C(10, 3) = C(10, 7) because picking 3 places in a 10-bit string to be 0's is equivalent to picking 7 places to be 1's. ♦

C(n, 0) has to equal 1 in part (b), since it counts the number of 0-element subsets of an n-element set. There is only one, namely the empty set. Plugging r = 0 into formula (1) yields 0! = 1. Note that we are not compelled to define 0!, but if we do, we must assign it the value of 1 in order to be consistent with formula (1). ♦

Let's keep things concrete with a specific case in part (c). Why is C(13, 4) + C(13, 5) = C(14, 5)?
The right-hand side (RHS) counts the number of ways of awarding an ice cream cone to each of 5 lucky children from a class of 14. Suppose one of the children is named Ramanujan. We can partition the outcomes into two cases: either Ramanujan gets ice cream or Ramanujan does not get ice cream.

Figure 6. Does Ramanujan get ice cream? (The two cases count C(13, 4) and C(13, 5) outcomes, respectively.)

For the first case, we need to choose 4 more lucky kids from among the 13 kids remaining. For the second case, we need to eliminate the unlucky Ramanujan from consideration and choose 5 lucky kids from the remaining 13. These cases correspond to the two terms of the left-hand side (LHS). ♦

We solve a concrete case for part (d) also. If n = 11, the RHS represents (by Exercise 2) the number of subsets of an 11-element set, including the empty set and the entire set. The LHS is a sum representing partitioning these subsets into 12 cases: the number of empty subsets, the number of 1-element subsets, the number of 2-element subsets, etc. ♦

4.3. Pascal's Triangle all over. Seeing the properties in Theorem 2, some readers may recognize the connection between the so-called Pascal's Triangle and our binomial coefficients. Briefly, Pascal's Triangle is a triangle of numbers: its top row consists of a single 1, and each consecutive row has one more number than the previous; the first and the last number in each row are 1's, while any other number is the sum of the two numbers directly above it.

          1
         1 1
        1 2 1
       1 3 3 1
      1 4 6 4 1
    1 5 10 10 5 1

Figure 7. Pascal's Triangle and the corresponding triangle of binomial coefficients: row n lists C(n, 0), C(n, 1), . . . , C(n, n), for n = 0, 1, . . . , 5.

Figure 7 displays the first 6 rows of Pascal's Triangle and the corresponding binomial coefficients. As one can easily check, the two triangles have identical entries.
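The defining addition rule of the triangle, and its agreement with the binomial coefficients, can be verified mechanically; a small sketch:

```python
from math import comb

def pascal_rows(n):
    # Build n rows of Pascal's Triangle using the defining addition rule
    rows = [[1]]
    for _ in range(n - 1):
        prev = rows[-1]
        rows.append([1] + [prev[i] + prev[i + 1] for i in range(len(prev) - 1)] + [1])
    return rows

rows = pascal_rows(6)
print(rows[5])  # [1, 5, 10, 10, 5, 1]

# Entries match C(n, k), and each row sums to 2^n (Theorem 2(d))
for n, row in enumerate(rows):
    assert row == [comb(n, k) for k in range(n + 1)]
    assert sum(row) == 2**n
```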
This correspondence means that whatever we can prove about binomial coefficients can be translated in terms of Pascal's Triangle, and vice versa. Thus, property (a) in Theorem 2 indicates the symmetry of Pascal's Triangle across a vertical line; (b) tells us about the end 1's on each row; (c) is the defining addition property of Pascal's Triangle; and (d) gives the sum of the numbers on each row of Pascal's Triangle. The combinatorics of Pascal's Triangle is very rich, and it deserves a separate session of its own. Without further detours, we shall leave this topic for now and return to our binomial coefficients.

4.4. Some mathematical "etymology." It is time to explain why binomial coefficients have this name. Consider raising the binomial (x + y) to the second power. We start with the product (x + y)^2 = (x + y)(x + y). Now let's expand it like a beginning algebra student, writing each "raw" term as it is first spewed out without combining like terms or switching the order of multiplications. Our expansion will be x(x + y) + y(x + y) = xx + xy + yx + yy. Of course, we then clean this up to get x^2 + 2xy + y^2.

Now try it for a higher power. Consider

(x + y)^7 = (x + y)(x + y)(x + y)(x + y)(x + y)(x + y)(x + y).

If we multiply it all out, without combining like terms, we now get 2^7 = 128 raw terms: all monomials with coefficients of 1, such as xxxxxxx or xxxyyyx or xxyxyyx. Notice that the last two monomials are like terms, since both simplify to x^4 y^3. Also note that each 7-term string with 4 x's and 3 y's represents a different raw term. For example, the string xxxxyyy arose by multiplying the x's from the first four binomials of the product with the y's from the last three binomials. The string xyxyxyx arose by multiplying the x from the first term with the y from the second term, etc. Thus the number of raw terms is equal to the number of 7-term strings made of x's and y's.
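This string-counting view of the expansion is easy to verify mechanically: generate every raw term by picking x or y from each of the 7 factors, and tally the like terms. A sketch:

```python
from itertools import product
from math import comb

# Expand (x + y)^7 symbolically: each raw term picks x or y from each factor
raw_terms = list(product("xy", repeat=7))
assert len(raw_terms) == 2**7  # 128 raw terms

# Count raw terms that simplify to x^4 y^3
count_x4y3 = sum(1 for t in raw_terms if t.count("x") == 4)
print(count_x4y3)  # 35
assert count_x4y3 == comb(7, 4) == comb(7, 3)

# The full coefficient list of (x + y)^7, by the same counting
coeffs = [sum(1 for t in raw_terms if t.count("y") == k) for k in range(8)]
print(coeffs)  # [1, 7, 21, 35, 35, 21, 7, 1]
```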
For instance, the number of raw terms that simplify to x^4 y^3 is, by the Mississippi formula, equal to C(7, 4) or, equivalently, C(7, 3). Thus the coefficient of x^4 y^3 in the expansion will be C(7, 4) = C(7, 3). Similarly, the coefficients of x^2 y^5 and of y^7 will be C(7, 5) and C(7, 7), respectively. More generally, x^{7−k} y^k will appear with coefficient C(7, k). We can write out the whole expansion, then, as

(x + y)^7 = C(7, 0) x^7 y^0 + C(7, 1) x^6 y^1 + C(7, 2) x^5 y^2 + · · · + C(7, 7) x^0 y^7.

This is an example, for the exponent 7, of the so-called Binomial Theorem⁵, which expands the binomial expression (x + y)^n. Now you can probably imagine where the name multinomial coefficients comes from. For instance, if you expand the 4-variable expression (m + i + s + p)^11 and combine the like terms, the monomial corresponding to the permutations of "Mississippi" will appear as

(11!/(1! 4! 4! 2!)) m^1 i^4 s^4 p^2.

With this said, it's time for some popular folklore: a student response to an exam question.

⁵ The Binomial Theorem can be succinctly written as (x + y)^n = Σ_{k=0}^{n} C(n, k) x^{n−k} y^k.

5. Balls in Urns and Other Applications

Armed with encoding, partitioning, and binomial coefficients, we can now investigate some really interesting problems, including Problem 2. The key is to use creative encoding.

5.1. Choosing a honeymoon location. Let's begin by revisiting Exercise 9, the marriage problem in the non-traditional village. Suppose each couple went on a different honeymoon. We could label each honeymoon location with one of the ten letters A, B, C, . . . , J. Now line up the 20 young people in some arbitrary order. Imagine that the matchmaker has twenty stickers: two labeled "A", two labeled "B", etc. The matchmaker then goes around putting one sticker on each person. That tells that person where he or she will honeymoon. Clearly this is a way to form all possible marriages, and by the Mississippi formula there will be

20!/(2! 2! · · · 2!) = 20!/2^10

(ten factors of 2! in the denominator) such marriages.
However, we overcounted, since we are distinguishing the honeymoon locations. For example, the outcome where Pat and Dana go to honeymoon C and the outcome where Pat and Dana go to honeymoon H are counted as two different outcomes. To cure this overcounting, we need to divide by 10!, since there were 10 different honeymoon locations. So the correct answer is

20!/(10! · 2^10).

Here's another approach. Choose two young people to go on honeymoon A. This can be done in C(20, 2) different ways. Now pick two people from the remaining group to go on honeymoon B. This can be done in C(18, 2) ways. Continuing this process, the number of ways of arranging marriages where the honeymoons are distinguishable will be

C(20, 2) C(18, 2) C(16, 2) · · · C(4, 2) C(2, 2),

and once again we need to divide by 10! to get the answer to the original question. It is a simple and fun exercise to verify that, indeed, the three approaches yield the same number.

Exercise 17. Check that 20!/(10! · 2^10) = (1/10!) C(20, 2) C(18, 2) · · · C(2, 2) = 19 · 17 · · · 3 · 1.

5.2. Geometry falls under the spell of combinatorics. Recall the enticing Problem 1, which asked for the number of regions in a circle. No, we shall not break our promise here – this problem is still yours to hack. But there are so many geometry problems of a similar spirit that they all form a separate branch of mathematics named combinatorial geometry. This session would be amiss without at least one such example.

Exercise 18. The n vertices of a polygon are arranged on the circumference of a circle so that no three diagonals intersect in the same point.
(a) How many diagonals does the polygon have?
(b) How many intersection points do these diagonals have?

Solution: An easy application of the definition of binomial coefficients tells us that there are C(n, 2) pairs of vertices and hence there are that many segments joining them. Thus, there are C(n, 2) − n diagonals (we had to subtract the n sides of the polygon). Let's call this number d.
At first you may think that the answer in part (b) is just C(d, 2), but not all diagonals intersect. However, each intersection point in the interior is the intersection of two diagonals. The endpoints of these two diagonals are the vertices of a quadrilateral. Each choice of 4 vertices of the polygon gives rise to a different quadrilateral (cf. Fig. 8a). Once you focus on these quadrilaterals, it should be clear that there is a one-to-one correspondence between quadrilaterals formed from vertices of the polygon and interior diagonal intersection points. For instance, look at the pentagon in Figure 1: you can easily count 5 such quadrilaterals, each accounting for one of the 5 interior intersection points. So the answer is C(n, 4). □

As a further example, check out the case of a hexagon in Figure 2a: we have 9 = C(6, 2) − 6 diagonals and 15 = C(6, 4) intersection points of these diagonals.

Figure 8. Quadrilaterals (a) and Volleyball teams (b)

5.3. Teaming up for volleyball.

Exercise 19. How many ways can 10 people form two teams of 5?

Solution: You may think the answer is C(10, 5), but this exactly double-counts! To see why, note that C(10, 5) counts each 5-person choice as different from its complement, which is incorrect: once you choose a team, its complement is already automatically selected as the other team. Alternatively, one may realize that C(10, 5) counts the number of ways of choosing 5 people to play in the left half of the court (cf. Fig. 8b). The correct answer is C(10, 5)/2. □

This is really an easier version of the non-traditional marriage problem (a "marriage" is interpreted as a team of 5 people), but it may seem harder.

5.4. Dogs and biscuits are transformed into "urns" and "balls." The next problem, on the other hand, seems quite innocent, but it is actually rather tricky. Think about it carefully before reading the solution.

Problem 4. How many ways can 7 dogs consume 10 dog biscuits?
The dogs are distinguishable; the biscuits are indistinguishable. Dogs do not share.

Solution: Let's arrange the dogs in some order (e.g., alphabetical). The difficulty is that we do not know how many biscuits an individual dog eats. Dog #1 could eat between 0 and 10 biscuits, and each of these 11 outcomes will affect the possibilities for the remaining dogs. Imagine a sample outcome and try to think of how we can encode generic outcomes. For example,

dog:      1  2  3  4  5  6  7
biscuits: 0  0  3  1  0  2  4

means that dog #1 ate no biscuits, while dog #7 ate 4, etc. What we want to count is the number of different tables of this sort. Instead of using numerals to indicate the number of biscuits, we can achieve some uniformity by replacing numerals with an equivalent number of symbols. For example, the following string means the same thing as the table above:

|||bbb|b||bb|bbbb|

Instead of the numeral "3", we write 3 b's. We retain the | symbol to indicate the boundary of a table cell. This way, if two of these symbols are next to each other with nothing in between, it means that the table cell has a zero in it. For example, the first three symbols ||| translate into "zeros in the first two cells of the table."

Notice that our encoding method has more symbols than needed. The first and last symbols are always |'s. Thus we can eliminate them – they provide no information. So, for example, the string |b|b|bbb|||bbbbb is equivalent to the table

dog:      1  2  3  4  5  6  7
biscuits: 0  1  1  3  0  0  5

In this case, the first | is the rightmost boundary of the first cell, indicating a zero in the first cell, i.e., no biscuits for the first dog. If you have trouble understanding, here's another way to think about it. Start with the following:

| | | | | |

These six vertical lines are the boundary lines of seven cells. Then we drop 10 b's into the seven cells (the horizontal line is the "floor" which prevents the b's from falling through).
If n b’s lie to the left of the first vertical line, it means that the first cell has value n. If m b’s lie between the first and second lines, it means that the second cell has the value m, etc. If j b’s lie to the right of the rightmost vertical line, then the seventh cell has value j. 5. BALLS IN URNS AND OTHER APPLICATIONS 43 We don’t need the horizontal line, of course; it is just to make us think of physical cells for the b’s to drop down into. All we really are doing is creating a 16-symbol word consisting of 10 b’s and 6 |’s. For instance, the last table can be encoded on 16 slots as follows: | b | b | b b b | | | b b b b b By the Mississippi formula, this can be done in 16 10 = 16 6 ways. □ This problem of course generalizes and gives rise to a useful formula that is called, among other things, the Balls in Urns or Stars-and-Bars formula. We prefer the former name because it is easier to remember: Theorem 3 (Balls in Urns). The number of ways to distribute b indistin-guishable balls among u distinguishable urns is b + u −1 b = b + u −1 u −1 · I remember it by thinking, “It’s a hard formula; so the top is not some-thing obvious like b + u. For the bottom, just think about dropping balls.” You’d never put u there by mistake! The Balls in Urns formula incidentally solves another equivalent, very frequently occurring problem. Exercise 20. How many ways can we choose 7 people out of 26 people, allowing for repetitions? Solution: Our intuition may suggest that the 26 people are the “balls,” being “dropped” into 7 winning slots. However, our intuition is wrong. Imag-ine that we are giving 7 identical awards to the 26 people, and the number of times a person is chosen corresponds to the number of awards he/she re-ceives. 
Thus, the 7 awards are the "balls," and the 26 people are the "urns." For instance, if the people have names starting with the 26 different Latin alphabet letters, then in Figure 9, Amy received three awards, Bob – two awards, while Ivan and Lily – one award each. We can string together a name from these letters, namely, ALI BABA, but of course, the order of the letters does not matter. So, the answer is

C(7 + 26 − 1, 7) = C(32, 7) = 3,365,856. □

Figure 9. ALI BABA: the seven award "balls" dropped into the lettered "urns" A, B, . . . , Z.

6. Sororities of Numbers: A Promise Fulfilled

Now we can finally tackle Problem 2. Recall that this problem asked whether there exist 10,000 10-digit numbers divisible by 7, all of which can be obtained from one another by a reordering of their digits.

6.1. Can "0" be a leader? Let us first solve a slightly simpler modified version of the problem: we shall allow "10-digit" numbers to start with the digit 0, e.g., we shall consider the number 1102 as a 10-digit number and write it as the string 0000001102. Thus, we have a total of 10^10 numbers to work with. Ignore the issue of divisibility by 7 for a moment, and concentrate on understanding sets of numbers that "can be obtained from one another by a reordering of their digits." Let us call two 10-digit numbers that can be obtained from one another in this way "sisters," and let us call a set that is as large as possible whose elements are all sisters a "sorority." For example, 1,111,233,999 and 9,929,313,111 are sisters who belong to a sorority with 10!/(4! 3! 2!) members, since the membership of the sorority is just the number of ways of permuting the digits. The sororities have vastly different sizes. The most "exclusive" sororities have only one member – for example, the sorority consisting entirely of 6,666,666,666; yet one sorority has 10! members – the one containing 1,234,567,890 (cf. Fig. 10). In order to solve this problem, we need to show that there is a sorority with at least 10,000 members that are divisible by 7.
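Before continuing, the Balls in Urns counts used in this section can be sanity-checked: once with the formula, and once by literally enumerating multisets. A sketch (function name is ours):

```python
from itertools import combinations_with_replacement
from math import comb

def balls_in_urns(b, u):
    # Theorem 3: b indistinguishable balls among u distinguishable urns
    return comb(b + u - 1, b)

# Problem 4: 7 dogs sharing 10 biscuits = multisets of 10 "biscuit owners"
print(balls_in_urns(10, 7))  # 8008, i.e., C(16, 6)
assert sum(1 for _ in combinations_with_replacement(range(7), 10)) == 8008

# Exercise 20: choosing 7 of 26 people with repetition allowed
print(balls_in_urns(7, 26))  # 3365856, i.e., C(32, 7)
```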
One approach is to look for big sororities (like the one with 10! members), but it is possible (even though it seems unlikely) that somehow most of the members will not be multiples of 7. Instead,

PST 20. The crux idea is that it is not the size of the sororities that really matters, but how many sororities there are. If the number of sororities is fairly small, then even if the multiples of 7 are dispersed very evenly, "enough" of them will land in some sorority.

Let's make this more precise. Suppose it turned out that there were only 100 sororities (of course there are more than that). The number of multiples of 7 is ⌊(10^10 − 1)/7⌋ + 1 = 1,428,571,429. By the Pigeonhole Principle,⁶ at least one sorority will contain ⌈1,428,571,429/100⌉ multiples of 7, which is way more than we need.⁷

In any event, we have the penultimate step to work toward: compute (or at least estimate) the number of sororities. We can compute the exact number. Each sorority is uniquely determined by its collection of 10 digits where repetition is allowed. For example, one (highly exclusive) sorority can be named "ten 6's," while another is called "three 4's, a 7, two 8's, and four 9's." So now the question becomes: In how many different ways can we choose 10 digits, with repetition allowed? Just as in Exercise 20, this is equivalent to putting 10 "balls" (one for each digit place) into 10 "urns" (one for each decimal digit 0, 1, 2, . . . , 9). By the Balls in Urns formula, this is C(19, 10) = 92,378. Finally, we conclude that there will be a sorority with at least ⌈1,428,571,429/92,378⌉ = 15,465 members divisible by 7. This is greater than 10,000. □

⁶ Part II of the Proofs session discusses the Pigeonhole Principle in detail.
⁷ ⌊a⌋ rounds a down to the closest integer ≤ a, while ⌈a⌉ rounds a up to the closest integer ≥ a, e.g., ⌊5.7⌋ = 5 and ⌈3.2⌉ = 4.

Figure 10. Sororities of various sizes, from smallest to largest: 6666666666, 0000000000, 1111233999, 1234567890.

6.2. "0" is not born to lead. Now back to our original problem. A couple of the calculations above must be adjusted, but the main idea of the proof remains the same. Since we do not allow a 10-digit number to begin with 0, we are dealing only with the numbers from 1,000,000,000 to 9,999,999,999. So there will be ⌊(10^10 − 1)/7⌋ − ⌊(10^9 − 1)/7⌋ = 1,285,714,286 of them divisible by 7. Even though it has no bearing on our calculations, it is worth observing that the big sorority containing 1,234,567,890 will now have "only" 10! − 9! = 9 · 9! members, as we must exclude all 9! strings starting with 0. One would also expect big changes in the number of sororities, but. . . we "lose" only one sorority, namely, the sorority corresponding to the 0-string 0000000000 (cf. Fig. 10). Thus, we have C(19, 10) − 1 = 92,377 sororities total. To complete the solution, we only have to divide: ⌈1,285,714,286/92,377⌉ = 13,919. Thus, some sorority must contain at least 13,919 members divisible by 7, so the answer is Yes. □
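The arithmetic of both versions of the argument can be reproduced in a few lines; a sketch (exact integer ceiling division is used to avoid floating-point rounding):

```python
from math import comb

def ceil_div(a, b):
    # Exact integer ceiling of a / b
    return -(-a // b)

# Warm-up version (leading zeros allowed): multiples of 7 in 0000000000..9999999999
mult7_all = (10**10 - 1) // 7 + 1
sororities = comb(19, 10)  # multisets of 10 digits, by Balls in Urns
print(mult7_all, sororities)           # 1428571429 92378
print(ceil_div(mult7_all, sororities)) # 15465

# Original problem: 10-digit numbers may not start with 0
mult7 = (10**10 - 1) // 7 - (10**9 - 1) // 7
print(mult7)                                # 1285714286
print(ceil_div(mult7, sororities - 1))      # 13919, so the answer is Yes
```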
3926
https://www.vanityfair.com/hollywood/2016/02/rivalry-hedda-hopper-louella-parsons-gossip-columnists?srsltid=AfmBOoqyyIZ_auaN_VQioRkdM-XaHFfeUe4k7dFmSp5WZSdPZay6w-0y
The Powerful Rivalry of Hedda Hopper and Louella Parsons

On a rainy Tuesday afternoon during the spring of 1948 a roomful of Hollywood power lunchers were treated to a spectacle that equaled in sheer outrageousness the fantasies they confected in their studio dream factories. The movie industry’s two gorgons of gossip, buxom columnist Louella O. Parsons and her behatted counterpart, Hedda Hopper—the town’s most feared women and most notorious rivals—were sitting down together to a civilized meal of cracked crab at the No. 1 booth of the posh Rodeo Drive restaurant Romanoff’s. The establishment’s customers, who probably wouldn’t have blinked if Harry Truman himself had walked in on the elbow of Stalin, stampeded for the telephones to broadcast the news to the outside world. These calls, Hedda said, “brought in a mob of patrons who stood six deep at the bar to witness our version of the signing of the Versailles Peace Treaty.” Press agents, Collier’s magazine later reported, scurried “from washroom to washroom tearing hair, gnashing teeth, and awaiting the end of the world.” For this entente cordiale between these two Weird Sisters—who together commanded a loyal audience of around 75 million newspaper readers and radio listeners (roughly half the country)—signaled more than just a bit of stagy fence-mending. It also ominously harbingered the collapse of the crisscrossing, double-dealing structure that for years had supported the entire Hollywood publicity machine. In their quest for column mentions, a commodity worth its space in gold, studio heads, publicists, and stars had long been playing the dangerous game of pitting one woman tooth and nail against the other. Nobody left Romanoff’s until nearly two hours later, when, their standing-room-only performance complete, the two ladies sauntered out arm in arm. “Peace,” Hedda reflected in her 1952 memoir, From Under My Hat, “it’s wonderful! 
But it didn’t last.” Besides, Louella surmised, “so many people say we do not” like one another. “Who are we to argue against such an enthusiastic majority opinion?” Neither, of course, had actually expected or even wanted a permanent reconciliation—Louella and Hedda were wise enough to the ways of Hollywood to know feuds were good business. Louella had been covering the film industry since 1915 (she was, in her boastful words, “the first movie columnist in the world”). And Hedda, originally a stage and film personality, had known Samuel Goldwyn when he was still called Samuel Goldfish, and had acted in the first movie Louis B. Mayer ever produced. Like so many sworn enemies, they were distorted fun-house-mirror doubles of each other—the one fat, the other thin—with more in common than either probably cared to acknowledge. Born four years apart and much earlier than either ever admitted (Hedda joked she was “one year younger than the age Louella claims to be”), the two women both escaped from dreary hick towns into seemingly advantageous marriages, merely to wind up single mothers struggling to support only children. Prodigiously energetic and ambitious, both eventually found themselves capable of pulling in huge incomes (around $250,000 a year, close to $2 million by today’s standards), yet had such extravagant tastes that they were constantly in debt. And politically both Louella and Hedda were, in the words of one contemporary, “to the right of Genghis Khan.” Crisply summing up the difference between herself and her nemesis, Hedda observed that “Louella Parsons is a reporter trying to be a ham; Hedda Hopper is a ham trying to be a reporter!” Though Hopper was more sophisticated—“worldly, lovely, beautifully groomed, with a New York actressy polish,” says Kitty Carlisle Hart—Parsons, whom John Barrymore called “that old udder” and who Roddy McDowall says “resembled a sofa,” may actually have been the more complicated of the two characters. 
As George Eells intimated in his 1971 dual biography, Hedda and Louella, Louella certainly was the more mendacious. In addition to fudging her birth date—she gave it as 1893 rather than 1881—Louella concealed the fact that she was born in Freeport, Illinois, to Jewish parents, the Oettingers. After graduating from high school in Dixon, Illinois (Ronald Reagan’s hometown), Louella worked as a reporter on a local paper. Always as gooily romantic as a candy valentine (“I believe that love is the answer to almost all the problems the world faces”), she captivated one of the area’s more eligible and affluent men, John Parsons. “Louella was very popular with men,” says Dorothy Manners, the columnist’s assistant for 30 years. With “lustrous brown hair and skin that a baby might envy,” Louella was “much more attractive than she was ever given credit for.” Apparently Mr. Parsons agreed with Manners’s assessment; he wed Louella in 1905, and a year later she gave birth to their daughter, Harriet. Louella’s official bio neatly disposes of Parsons by having him die on board a transport ship on the way home from World War I. Though he did die young, Parsons made his exit in a more commonplace fashion—he was screwing his secretary and Louella divorced him. She expunged this, and other significant bits of her history, in order to align her life more strictly with the Catholicism she began to practice fervently in middle age. Rid of John Parsons in all but name, Louella relocated to the nearest big city, Chicago. By around 1910 she was working for nine dollars a week in the syndication department of the Chicago Tribune and writing movie scenarios at night. Through a cousin’s connections, she advanced to a much more lucrative job as story editor at Chicago’s Essanay Studios, where she came into daily contact with such freshly coined silent stars as Mary Pickford and Gloria Swanson. 
When Louella priced herself out of her Essanay job, she went to the Chicago Record-Herald and boldly approached the editor with an unusual proposition. “All the movie stars of the day had to pass through Chicago on their way from New York to Los Angeles,” explains Dorothy Manners. “There was a two-hour wait in Chicago. Louella’s idea was to go down to the train station and interview the stars while they waited. She figured they would be glad to have something to do, and that from these meetings she could put together a column about their personal lives. Her editor told her, ‘Who would be interested in reading about that?’ Well, you can guess what happened.” Louella’s behind-the-scenes reports for the Record-Herald thrived, but the paper folded. In 1918 the invincible reporter transferred her talents to the New York Morning Telegraph. She, daughter Harriet, and a new husband she had acquired during her Chicago years, a riverboat captain named Jack McCaffrey, settled into a $90-a-month apartment on West 116th Street. Louella’s grinding work schedule and ceaseless social maneuverings soon alienated McCaffrey, but their crumbling marriage was really finished off by Louella’s obsessive affair with a married man, Peter Brady, a prominent New York labor leader—“the real love of her life,” says Dorothy Manners. (The records of this second marriage also seem to have been obliterated in an effort to sanitize her past.) Though Louella, by her own admission, lost her head over the married Brady, professionally she navigated a steady, upward course. Shrewdly, she began a campaign to capture the attention of the most powerful figure in newspaper publishing, William Randolph Hearst—and she aimed directly at his heart. 
Her column turned into a one-note instrument, tirelessly piping honeyed praise for the talent and beauty of the sprightly blonde starlet Marion Davies, whom Hearst had plucked from a chorus line at the age of 14 to become his mistress, and around whom he had built his Cosmopolitan motion-picture studio. Parsons’s sugar-flecked shower of accolades (a generous counterpoint to another critic’s assessment that “Miss Davies has two dramatic expressions—joy and indigestion”) inevitably led to a friendship between the two ladies, and finally an offer from Hearst in 1923 to become the $250-a-week motion-picture editor of his New York American. The perpetual Parsons refrain “Marion Davies never looked lovelier” echoed down the decades, eventually winding up as a standard on the drag-queen circuit. But Louella, whose gushing enthusiasm for the movie business knew no limits, did not reserve her effusions for Davies alone. She also made a minor pet of an actress named Hedda Hopper, whom she lauded for her “capable” performance in the Davies vehicle Zander the Great. And she took her plaudits even further, describing Hedda in 1926 as the type of woman who could lead any man astray. Hedda, formerly Elda Furry, Quaker butcher’s daughter from Hollidaysburg, Pennsylvania, was born in 1885 and became smitten with the theater as a teenager when she attended Ethel Barrymore’s performance in Captain Jinks of the Horse Marines at the Mishler Theatre in nearby Altoona. Stagestruck, she ran away to join a Pittsburgh theatrical troupe. From there, in 1908, she escaped to New York, where, accepted into the chorus of the Aborn Light Opera Company, she became known for the best pair of legs on Broadway. 
These lovely appendages, and Elda’s youth, caught the rakish eye of one of the leading lights of the theater, DeWolf Hopper, a Harvard-educated actor 27 years her senior and married so many times his friends called him “the Husband of Our Country.” Hopper weakened women’s wills “with his voice,” Hedda recalled. “It was like some great church organ”—an apparatus sonorous enough to persuade her to become his fifth wife, in 1913. When they weren’t on tour, the couple lived in Manhattan’s Algonquin Hotel, where Mrs. Hopper found herself in the thick of such elite theatrical personages as John Barrymore, Douglas Fairbanks, and a very young Tallulah Bankhead. “As Wolfie’s wife I didn’t hover around the fringes of a world of celebrated people,” Hedda recalled with her farm-girl briskness. “I was pitchforked right in amongst ‘em.” DeWolf’s greatest gifts to his young wife—whom he habitually taunted, cheated on, or simply ignored— were their son, Bill, his distinctly more euphonious surname (“Elda” was traded in for “Hedda” on the advice of a numerologist), and his impeccable instruction in diction. “In fact, I got an overdose,” she wrote. “I clipped my letters so short that I sounded like an inbred British dowager mated to a Boston bull terrier…. It was that very affectation … that got me into all the phony society-female roles that I played on the screen.” Hedda, husband, and son landed in Hollywood in 1915, where DeWolf had been lured by a lucrative contract from the Triangle Film Company. In spite of DeWolf’s demands that Mrs. Hopper relinquish her acting career, Hedda persuaded him to let her take the female lead in Battle of Hearts (1916)—her first film—at $100 a week. This was no “society-female” role, however. Playing a rough-and-ready fisherman’s daughter, she won the part simply because of her sinewy build and her height. At five feet seven and 128 pounds, she was a beanstalk in a hothouse where diminutive orchids such as Mary Pickford and Lillian Gish flourished. 
The movie opened to respectable notices, with one critic pointing out that Hedda looked “exceedingly well in trousers.” After Triangle foundered and the Hoppers returned to New York, Hedda began working in earnest at the studios there and in Fort Lee, New Jersey. The role that set the pattern for all her future casting was that of the faithless spouse of a millionaire in L. B. Mayer’s Virtuous Wives (1918). Determined to upstage the star, Hedda sank her entire $5,000 salary into gowns and hats from the salon Lucile—and it paid off. Variety observed that Mrs. DeWolf Hopper stood out “prominently,” at the expense of Anita Stewart, whose self-effacement was “a remarkable exception to the general run of stars.” By 1920, Hedda’s stature as a film actress had soared so high that she demanded $1,000 a week—double her previous salary. Jealous that his protégée’s earnings now matched his own, DeWolf hurled himself into the dalliances that eventually brought about the 1922 collapse of their marriage, a fact Louella duly noted in her Telegraph column. Independent and in need of funds, in 1923 Hedda accepted L. B. Mayer’s offer of a Metro (soon to become MGM) contract in Hollywood. Frantically trying to balance a heavy social schedule, daily deadlines, a clandestine love affair, and her checkbook, Louella—who habitually slept only two or three hours a night—found herself in failing health. Though diagnosed with tuberculosis, she ignored doctor’s orders and dragged herself in the fall of 1925 to a dinner party at Hearst’s home. The next morning Louella’s host discharged her on full salary and sent her to the California desert to recuperate. During her desert confinement, several of Louella’s Hollywood friends made the eastward pilgrimage to visit her in Palm Springs. Darryl Zanuck came bearing books, and Hedda Hopper showed up, hoping to supplement her movie income with real-estate dealings. 
In fact, ever since Hedda had arrived in Hollywood two years before, she and Louella had been engaged in a sort of mutually beneficial swap meet. A continent away from the main action, Louella had grown to depend on the gossipy actress’s sharp ears. “When they first knew each other,” says Dorothy Manners, “Hedda was an actress, a good one. They liked each other a lot. If anything happened on a set—if a star and leading man were having an affair—Hedda would give Louella a call.” In return, Hedda was guaranteed a few lines of copy under Louella’s increasingly powerful byline. Hedda sorely needed these breaks, small and sporadic though they may have been. Having refused to lie down on L.B.’s well-worn casting couch, she was making the bulk of her pictures on loan-out arrangements with other studios. As she worked infrequently, Hedda, distinguished by her mannequin-like ability to wear clothes and her social aplomb, was regularly called upon to model for MGM’s head costume designer, Adrian, or to serve as studio cicerone for visiting V.I.P.’s. Eventually, MGM canceled her contract, and Hedda found herself living with her son in a three-room basement apartment—a humiliatingly far remove from the gold-brocaded tower bedroom she occupied on her visits to her colleague and close friend Marion Davies at San Simeon, Hearst’s palatial complex north of L.A. And her love life was in no less disarray. Just before Hedda lost all her savings in the Crash, she accompanied scenarist Frances Marion to Europe in 1928, and during the crossing fell madly in love with a handsome American painter. “But she refused to sleep with him,” Marion told biographer George Eells. “I used to say to her, ‘Hedda, for heaven’s sake, throw your panties over the windmill.’ ” But Hedda prudishly held her ground, even when the painter followed her back to Hollywood. Despondent, her ardent suitor ended up committing suicide. 
Fully recovered by March 1926, Louella, 45, called Hearst to announce she was ready to return to the New York American. The newspaper magnate replied, “Louella … the movies are in Hollywood—and right now I think that is where you belong.” He surprised her further with the happy tidings that he wished to syndicate her column—a huge boon to her finances and her influence (eventually 372 newspapers, as far afield as Beirut and China, would carry her)—and appoint her motion-picture editor of his multi-tentacled International News Service. “At last,” rejoiced Louella, “the Hollywood writer is going to Hollywood!” For connoisseurs of Hollywood lore, the timing of Hearst’s offer—and of Louella’s all-expenses-paid retreat to Palm Springs before that—is an eyebrow raiser. Even Louella allowed that the tales explaining the origin of her lifelong position with Hearst were macabre enough to have sprung from the febrile imagination of Edgar Allan Poe. But, publicly at least, that was all she said. There are two great unsolved mysteries in Hollywood: the first, the murder of director William Desmond Taylor, and the second, more pertinent to Louella’s story, the sudden demise of Thomas Ince, a universally respected director-producer whom Hearst had hoped to lure to Cosmopolitan Pictures to bring cachet to his lackluster studio. “Louella knew exactly what happened in both cases,” says Richard Gully, formerly Jack Warner’s special assistant for publicity, and now, at 90, a writer for the paper Beverly Hills 213. So unsatisfying are all the explanations for Ince’s 1924 death—officially reported as acute indigestion leading to heart failure—that last year Patricia Hearst, W.R.’s granddaughter, reopened the whole can of worms by publishing a fictional account of the affair, Murder at San Simeon. 
The “murder,” if that indeed is what it was, did not, however, take place at Hearst’s mountaintop castle, but aboard the Hearst yacht Oneida—later known as “William Randolph’s Hearse”—in November 1924. To woo Ince, Hearst organized a shipboard party for the filmmaker, attended by Marion Davies, writer Elinor Glyn, actresses Seena Owen and Aileen Pringle, some Ince and Hearst business associates, and, according to many accounts, Charlie Chaplin and Louella Parsons. George Eells was convinced that Ince simply took sick and died after swilling too much of Hearst’s bad Prohibition-era liquor. A more operatic version of what happened aboard the Oneida was that Chaplin had been having, as Roddy McDowall puts it, “a wingding with Marion Davies.” Crazed with jealousy, Hearst hired a killer, who, mistaking Ince for Chaplin, shot Ince instead. Dismissing this rumor, Dorothy Manners states, “There’s not a shred of truth to any of that. Every day after lunch at Louella’s house, where she had her offices, the two of us took a long walk. During one walk I asked her about this story. She said, ‘I was in New York at the time. And I’ve got columns datelined from New York to prove it.’ ” “So many alibis,” sighs one of Hollywood’s most senior and best-informed insiders. “How hard would it have been for a Hearst journalist to fake a dateline? Anyway, Chaplin wasn’t even on that boat. But Louella was.” The real story, he insists, is that Hearst, coming up from his cabin after a postlunch nap, discovered Ince playfully embracing Davies. In the same jesting spirit, Hearst pulled a long hatpin out of Davies’s hat—“a very large affair, as it was windy on the ship”—and aimed for Ince’s arm. Ince suddenly turned to face Hearst, and instead of pricking the producer’s arm, the hatpin “entered directly into his heart, causing an instant fatal heart attack. 
The key to the whole story is that Hearst then put his yacht into harbor on a Sunday and had the body cremated that day so there would be no autopsy. Listen, there’s no smoke without fire. There’s certain things you just can’t fake. And Louella was on the boat, for God’s sake.” Louella, inaugurating the first column to be syndicated from Hollywood, took to her adopted town like a thirsty dromedary to a lush oasis. Immediately, she laid down the law: “You had to tell it to Louella first,” says director George Sidney. Ubiquitous on the Hollywood scene, she became notorious for assuming an air of goofy vagueness in order to snap up material on the sly, and for leaving behind a puddle of urine wherever she sat (incontinence had plagued her at least since the seventh grade). In 1934 she significantly expanded her power base and income by breaking into radio, and on her popular Hollywood Hotel program, sponsored by Campbell’s Soup, she introduced the first “sneak previews” show. Actors appeared for free to read parts from upcoming films in exchange for cases of soup (Carole Lombard’s favorite: mulligatawny). Her influence was such that in a poll of moviegoers lined up at New York’s Rivoli theater to see a B-grade production called Nancy Steele Is Missing in 1937, 78 percent said they were there as a result of Louella’s broadcast. But Louella’s reputation for grasping Hollywood firmly by its scrotum arose less from her ability to lasso audiences into movies than from her skill at performing the vulturine rites of “Love’s Undertaker” (one of her less scurrilous nicknames). Her informants could be found in studio corridors, hairdressers’ salons, and lawyers’ and doctors’ offices (she sometimes learned of starlets’ pregnancies before they did). When she received a tip that Clark Gable and his second wife, Ria, were about to divorce, Louella “kidnapped” Mrs. 
Gable, whom she held hostage at her North Maple Drive home until she was sure the story was “speeding across the wire” ahead of any other service. Her most earth-shattering scoop during her early years in California, however, was “the biggest divorce story in the history of Hollywood”: the split between the town’s undisputed king and queen, Douglas Fairbanks Sr. and Mary Pickford. Pickford, who made the crucial mistake—repeated reflexively by generations of stars to come—of pouring her heart out to Louella, bitterly recalled that she had “counted … upon the columnist’s discretion” to safeguard her “against sensation.” When the bombshell erupted across international headlines, Hollywood was treated to one of its first full-throttle media maelstroms. In total command of Hollywood, Louella also succeeded in getting her hooks permanently into a man, urologist Harry “Docky” Martin, whose devilish Irish charm had at long last induced her to give up the married Peter Brady. Even before their 1930 marriage (Hearst gave the bride a $25,000 bauble as a wedding gift), Martin had earned a certain local reputation of his own as one of the town’s most florid drunks. Leonora Hornblow, widow of producer Arthur Hornblow Jr., recalls that late one night at a party at L. B. Mayer’s, “Docky—everyone, even the parking attendant at Romanoff’s, called him that—passed out cold under the piano. Somebody shook him, trying to wake him up. But Louella shouted, ‘Let Docky sleep! He has surgery at seven tomorrow morning!’ ” (An elaborated version of this story has Martin’s famously large penis popping out of his pants as he slumped over, inviting the comment “There’s Louella Parsons’s column!”) Under Louella’s aegis, Docky, who had made an early specialty of cleaning up V.D.-infected whores, advanced to the post of Twentieth Century Fox’s chief medical officer. 
“Basically, a studio doctor’s job was to shoot stars with anything to make them perform,” explains Gavin Lambert, author of Norma Shearer and On Cukor. Hedda, meanwhile, was still desperately laboring to support herself and Bill, whom she ill-advisedly was nudging into the family profession. (Ambivalent about acting, Bill made a few movies, sold used cars for a while, and finally found his showbiz niche playing Paul Drake on the Perry Mason TV series.) Probably the most money Hedda ever saw during this bleak phase was from the life-insurance policy she collected on DeWolf when he died in the mid-30s. Her fee for acting plummeted—and she was lucky to scrape together two or three movie parts a year. In 1932, at the urging of L. B. Mayer’s powerful assistant, Ida Koverman, Hedda ran unsuccessfully on the Republican ticket for a county political seat. She failed miserably as an actor’s agent and, with nothing to lose, went with Bill back East, where she briefly returned to Broadway in Bea Kaufman’s Divided by Three. This theatrical engagement did nothing to resuscitate her career, but it did make a monumental difference to a fledgling actor she befriended in her show—Jimmy Stewart—whom Hedda dispatched to MGM to be put on contract. Hedda’s prospects had sunk so abysmally low that, back in California in 1935, she nearly signed on as manager of a male-escort service. Around 1936, Paramount hired Hedda to work in a more appropriate capacity, teaching English to its newest import, the Polish tenor Jan Kiepura. “I believe that was the last thing she did before she became a columnist,” says George Sidney. 
Hedda, by nature more cynical about Hollywood than Louella—who, says Roddy McDowall, “wallowed in phony sentiment”—reflected that in their town “if you have guts enough to stick it out, and even a modicum of ability, you’ll wear down Hollywood’s resistance.” Ironically, it was while Hedda was nestled deep in the magnanimous bosom of Hearst and Davies that the obdurate “resistance” of Hollywood to Hedda Hopper began to melt away. During a visit to Wyntoon, the pseudo-Bavarian Hearst compound in Northern California, Hedda was entertaining her fellow guests—including Eleanor “Cissy” Patterson of Hearst’s Washington Herald and Louella Parsons—with a scintillating stream of chatter about Hollywood stars. “Why don’t you write that?” Patterson suggested. “Write?” Hedda protested. “I can’t even spell!” Patterson proposed that she simply dictate a weekly letter over the phone, for which she would receive $50 per week. Louella, secure on her lofty throne, thought so little of this new development that she reported colorlessly in her October 5, 1935, column, “Hedda Hopper engaged to do a weekly Hollywood fashion article for Eleanor Patterson … ” Louella was right, at least for the moment, not to feel menaced. Hedda’s Washington column stopped after only four months, when the novice newspaperwoman refused to have her pay cut by $15 a week. The stint on Patterson’s paper, however, turned out to be a valuable warm-up for her real break, which came early in 1937. The Esquire Feature Syndicate, which had been searching for a Hollywood columnist, called upon Andy Hervey of MGM’s publicity department for a recommendation. He suggested Hedda Hopper, aged 52, with the caveat that she might not be able to write, “but when we want the lowdown on our stars, we get it from her.” Luckily for Hedda, one of the first papers to pick up “Hedda Hopper’s Hollywood” was the Los Angeles Times, a morning paper like Louella’s Examiner. 
“No matter how well syndicated a writer was, if he didn’t have a local outlet, no one in the industry considered him very important,” explains producer A. C. Lyles. In order to place Hedda emphatically on the map, her old MGM ally Ida Koverman threw a hen party in her honor, to which all the town’s most accomplished journalists, publicists, and actresses (Joan Crawford, Claudette Colbert, Norma Shearer) were invited. One guest, Louella O. Parsons, swept in, turned on her heel, and exited in a huff. “Louella never really dreamed at first that Hedda could ever become serious competition,” says Dorothy Manners. “But then, neither did Hedda.” Manners feels that MGM’s reasons for handing Hedda her poison pen were perfectly honorable. “She was past the age of a leading lady, and they wanted to give her a job. It made sense—she had a great entrée into the world of the studios.” But others (including Louella) took a dimmer view, saying that L. B. Mayer, with the blessing of other studio chiefs, cagily set Hedda up as a columnist to offset Louella’s monopolistic power. Observes gossip columnist Liz Smith, “The studios created both of them. And they thought they could control both of them. But they became Frankenstein monsters escaped from the labs.” If Louella at first felt that by ignoring it, her new competition would go away, she soon came in for a rude awakening. In 1939, Hedda buried “Love’s Undertaker” with a world-class scoop, the divorce of the president’s son Jimmy Roosevelt (a Goldwyn employee), who was involved with a Mayo Clinic nurse, from his wife, Betsey. This was no mere column item, but a coveted “cityside” story splashed across the country on front pages. Hedda had ferreted out the story by employing what would become a time-honored method—dropping in on her victim unannounced in the middle of the night. The feud between the two women was composed in equal parts of charade, sport, and vitriol. 
“Hedda was more inclined to see the battle as funny—as a great publicity builder. She understood that it was good for business,” Manners says. “But Louella really hated the whole thing. And she saw Hedda as a rival in every possible way, even down to the clothes she wore.” But, according to Richard Gully, Louella might have tolerated the flamboyantly hatted interloper if her animosity had been fueled solely by professional jealousy. “The true story of the famous feud is that it started for personal reasons,” he says. “Hedda always referred to Doc Martin as ‘that goddamn clap doctor,’ and that’s what really infuriated Louella.” Hedda’s and Louella’s power derived as much from the stories they withheld as from those they ran in their papers and broadcast on their radio shows. “They never ratted on Katharine Hepburn and Spencer Tracy,” says Gavin Lambert. “And they never mentioned a word about Norma Shearer’s affair with Mickey Rooney. Mayer put a stop to that—and then forced her to take the ‘nice’ part of Mrs. Stephen Haines in The Women.” Perhaps not coincidentally, MGM gave Hedda the small but juicy part of society reporter Dolly de Peyster in the same movie. Because of the “moral turpitude” clause in all the stars’ contracts, which called for automatic cancellation if an actor misbehaved, “the studio bosses used Louella and Hedda as a weapon of intimidation to keep their employees in line,” Lambert continues. “But if there was a real problem with a star, they could almost always buy these women off”—either through an exchange of information, or indirectly with cash, as when Twentieth Century Fox purchased the rights to Louella’s 1943 memoir, The Gay Illiterate, for $75,000. (The picture, needless to say, was never produced.) What remain deeply seared in the collective Hollywood memory, however, are those vindictive, destructive stories that the two women elected, for whatever reasons, to publish. 
In 1943, a high-strung redhead named Joan Barry burst into Hedda’s offices in the Guaranty Bank building on Hollywood Boulevard, sobbing that she had been impregnated and then discarded by Charlie Chaplin. The columnist, who fancied herself a guardian of female virtue, went gunning for the priapic comedian, who consequently found himself on trial in a hugely publicized paternity suit. (Though the court ruled that Chaplin was not the father, he was nevertheless forced to pay child support.) In retaliation, Chaplin presented Louella with the scoop of his marriage to 18-year-old Oona O’Neill later that year. Hedda, defending her role in the Barry-Chaplin debacle, insisted that her intention had been to issue “a warning to others involved in dubious relationships.” This admonition was so effective, Hedda maintained, that at a cocktail party she had only to wag her finger at one producer for him to terminate an extramarital fling. Simply disapproving of a romance, even if there was nothing murky about it, was sufficient grounds for Hedda to attempt to torpedo it. When erstwhile costumer Oleg Cassini was dating Grace Kelly, Hedda ran an item which, Cassini recalls, “basically said, ‘Of all the handsome men in Hollywood, why is she seeing Cassini? It must be his mustache.’ Hedda hated Europeans. She was a real America Firster. Well, I responded with a letter which said, ‘I give up. I’ll shave my mustache if you shave yours.’ ” Louella also ran interference with Grace Kelly when the actress began an affair with the married Ray Milland while they were shooting Dial M for Murder in 1953. Since her marriage to Docky, Louella had grown more Catholic than the Pope. Every Sunday she showed up for 9:45 Mass at the Church of the Good Shepherd, often still drunk from the night before, and she was godmother to a whole brood of Hollywood offspring, including Mia Farrow and John Clark Gable. 
Outraged that Kelly, a well-brought-up Catholic, could be so flagrantly compromising her honor, “Louella broke the story,” Richard Gully says. “And Grace backed away from Milland, but it nearly ruined her career.” In an even more potentially dangerous move, Hedda tattled on Joseph Cotten for trysting with juvenile star Deanna Durbin while they were working together on Hers to Hold (1943). “Cotten was never going to leave his wife,” says Leonora Hornblow. “They were just having a little fun.” Hedda’s exposé was “extremely painful to Lenore Cotten, Joe’s long-suffering wife,” but her husband got revenge for both of them. “There was some huge event going on in the Beverly Wilshire ballroom. Joe saw Hedda across the room and came toward her, saying, ‘I’ve got something for you.’ He kicked right through the gold party chair she was sitting on, and its legs buckled. The next day Joe’s house was full of flowers and telegrams from all the people who would have liked to kick Hedda in the backside but didn’t have the courage. Joe pasted the telegrams on his bathroom wall.” Probably the most devastating character assault ever to blaze over the newswires was Louella’s immolation of Ingrid Bergman after she left her husband, neurologist Peter Lindstrom, in 1949 to live in Italy with director Roberto Rossellini. This information alone, innocuous as it may seem today, caused a worldwide uproar. In 1945, Bergman—thanks to Hedda’s crusade on her behalf—had been cast as the angelic Sister Benedict in The Bells of St. Mary’s. Her holiness thus established before the public, Bergman in 1948 stepped into the title role of Victor Fleming’s Joan of Arc. Shocked to find that their saint had turned sinner, the press denounced Bergman in editorials, and audiences boycotted theaters showing her pictures. But the coup de grâce came when Louella detonated the most explosive ammunition of all. Early in 1950, the Los Angeles Examiner ran on its front page, above Louella O. 
Parsons’s byline: INGRID BERGMAN BABY DUE IN THREE MONTHS AT ROME. This story of the gestating Bergman-Rossellini love child created, Louella estimated, “the greatest [sensation] ever, I believe, in relation to a story about a movie personality.” So unexpected was this electrifying Examiner headline that other reporters, including Hedda, castigated Louella and Hearst for printing what they presumed to be an already disproved canard. That evening, Louella found her husband in his bedroom, bent piously over his rosary beads. The doctor explained, “I’m … praying your story is right.” Louella was right, of course—as Roberto junior’s birth incontrovertibly proved—because she had been informed of Bergman’s pregnancy by an unimpeachable source, whose identity she never revealed. She referred to him in her 1961 memoir, Tell It to Louella, as “a man of great importance not only in Hollywood, but throughout the United States.” Dorothy Manners sighs deeply and then releases the long-held secret. “Howard Hughes tipped her off. And here’s why. Hughes was producing films at RKO, and he had bought some play or book for Ingrid that he desperately wanted to make into a movie for her. At that moment she was the hottest thing in pictures. Ingrid was so crazy about Rossellini that she agreed to a contract with Hughes—but only if he would produce Rossellini’s movie Stromboli. Hughes accepted these terms, and Stromboli was a huge bomb. Hughes then asked her to come back to America immediately to work on his movie. She told him, ‘Honestly I can’t—I’m pregnant.’ And he was incensed. It meant it would take at least a year to recoup his losses from Stromboli. He then called Marion Davies and told her to tell Louella, who at first didn’t print the news. When Hughes asked Marion why not, she said, ‘My God, Ingrid’s married to another man. This could bring about the biggest lawsuit against Hearst.’ So Hughes himself verified the story of the pregnancy with Louella. 
He was so furious during that telephone call, I could overhear him shouting into Louella’s phone. After that call, the story ran.” Most of the time, Tony Curtis maintains, Louella and Hedda “couldn’t touch the major players. It was the young people coming up who suffered the most. I’ll never forget a call I received one day from Hedda, on the studio phone.” Like an inquisitor before an auto-da-fé, she grilled Curtis: “God help you if you lie to me, but are you going out with a teenager?” Curtis says, “The way she invoked God—it was as if she were speaking morally for Him. It was frightening. I didn’t know what the consequences would be. With Hedda you knew pretty much where you stood. But there was something uncomfortable about Louella—as if deep down something was grinding away, some secrets, maybe, from her past. And I was sure everybody was a spy. We all felt that Hedda’s son, Bill, was a spy. No one wanted to be his friend.” It was not just individuals who provoked the wrath of these two harpies—they preyed on pictures and whole studios too. When MGM gave the lead in its 1934 costume drama The Barretts of Wimpole Street to Norma Shearer instead of Marion Davies, “on Hearst’s instructions, there was no mention of the movie or of Norma Shearer for a year in Louella’s column,” says Gavin Lambert. Louella inflicted more serious and lasting damage on Orson Welles and Citizen Kane—and in the process nearly derailed one of the greatest masterpieces ever to emerge from Hollywood. Upon hearing the rumor that Welles’s first production with RKO was to be a film à clef about her boss, Louella lunched with the “boy genius” and listened to his litany of evasions and denials—all of which she believed. Soon after, Hedda, who had been offered a small part in the picture, managed to talk her way into its first screening. 
Instantly recognizing that the film was inspired by her friend Marion Davies’s millionaire lover, Hedda passed on the information to Hearst, twisting the knife by adding that she couldn’t comprehend why Louella hadn’t already alerted him. Enraged, Hearst ordered Louella to attend a screening with two lawyers. Horrified by what she saw, Louella rushed out of the studio screening room to cable Hearst, who telegraphed back the terse message STOP CITIZEN KANE. Springing into action, Louella warned RKO that she would expose long-suppressed tales of “rape by executives, drunkenness, miscegenation and allied sports.” Further, it was hinted, the American public would be informed that “the proportion of Jews in the industry was a bit high.” Refusing to capitulate to Hearst’s pressure, RKO chief George Schaefer—who had also been threatened by Hearst with legal action—announced that Citizen Kane would open in February 1941 at Radio City Music Hall. Louella hastened to call Radio City’s manager, Van Shmus, advising him that exhibiting the film would result in a total press blackout. The premiere was then canceled. Louis B. Mayer, siding with Hearst (whose Cosmopolitan Pictures had been affiliated with MGM), next made Schaefer an unusual offer: he would pay the rival studio $805,000 in exchange for burning the master print and all copies of the film. Schaefer stood firm and refused to cooperate. Finally, after the Hearst press launched a savage attack on Welles, falsely accusing him of Communism, the tide turned and Welles and the movie began attracting sympathy, especially from such Hearst adversaries as Henry Luce, founder of Time and Life. Taking advantage of the general turmoil, which had turned into a publicity bonanza, RKO at last released the picture in May 1941. And though the movie was a critical triumph, Welles, branded a troublemaker, never quite recovered his position at RKO or in Hollywood again.
If RKO failed to make it up to Orson Welles, the studio did its best to appease Louella. In 1943, her daughter, Harriet, who had been toiling as a producer at Republic Studios since 1940, was awarded a long-term contract with RKO. Curiously, Louella and Hedda had an unspoken truce regarding their children. When the mannish Harriet married effeminate publicist King Kennedy at Marsons Farm, Louella’s San Fernando Valley estate, in 1940 (“Truly a marriage of Louella’s convenience,” one wag says), Hedda was among the guests. Bill Hopper received glowing commendations in Louella’s column. And it was Hedda’s raves for Harriet’s I Remember Mama (1948) that brought about the celebrated reconciliation at Romanoff’s that year. Baffled observers theorized that Louella and Hedda had reached an understanding that, with mothers like themselves, these kids needed all the help they could get. The two women, of course, extended help to many people outside their family circles; flaunting their power meant interspersing malevolent behavior with flashy displays of benevolence. In the early 40s, when Joan Crawford had been labeled box-office poison by the Theatre Distributors of America, “MGM dropped her,” recalls publicist Warren Cowan, co-founder of Rogers & Cowan and now chairman of Warren Cowan Associates. Undaunted, producer Jerry Wald tapped her to appear in Mildred Pierce (1945)—and hired Rogers & Cowan to promote the tarnished star. In a press release, Cowan says, he wrote the following item: “The front office of Warner Brothers is jumping with glee over the first two weeks’ rushes of Joan Crawford in Mildred Pierce. They’re predicting she’ll be a strong contender for the Oscar.” To Cowan’s extreme surprise, Hedda ran the item verbatim, turning the story into an “exclusive.” (Explaining her indulgence toward Crawford, Hedda said, “I knew what being out of a job meant.”) Then, Cowan says, “various versions of it spread around. 
Just before the Academy Awards, we took out an ad in the trades, reproducing that item from Hedda’s column. It was the first time an ad was run directed to the Academy. That one item became the foundation for the Academy Awards campaigns which now companies spend hundreds of thousands of dollars on each year.” Cowan speculates that as a result Joan Crawford won the Oscar. “And that was the power of one columnist and how it mushroomed,” Cowan concludes. For a Hollywood unknown, a summons from Louella or Hedda was tantamount to a wave of Glinda the Good Witch’s wand. When Warner’s child actor Jack Larson was 17, “Hedda decided to do a piece just about me,” Larson recalls. “Bob Reilly, head of publicity at Warner’s, told me, ‘Your career is made!’ I was strictly rehearsed until it made me crazy. I was told not to mention anything about how I was studying drama with Michael Chekhov, a Russian, because Hedda was so anti-Communist she’d turn on me. But she ended up being very nice to me. If Louella or Hedda liked you and plugged you, it really could help.” Their column mentions “became a sort of currency of the time,” Roddy McDowall explains. “Agents would use them as contract-negotiating tools. To prove your value you could show the studio books of clippings.” Adds Tony Curtis, “You only knew how good you were doing by your appearances in their columns. There was no other measure.” So closely scrutinized were the two women’s daily write-ups that lyricist Alan Jay Lerner tracked down, met, and married starlet Nancy Olson after Hedda ran a “little item with a picture of me at the end of her column,” she recalls. At the time, Olson was working on Billy Wilder’s Sunset Boulevard (1950), in which Hedda played a cameo role. “The original plan,” Wilder says, “was to have Hedda and Louella, after Joe Gillis’s murder, try to telephone their papers at the same time from Norma Desmond’s house.
One would be on the phone upstairs, trying to file her report, while the other cut in downstairs on the same line. There’d be a wild, crazy fight between the two of them, with lots of foul language. It would have been a very dramatic moment, a lot of fun. But it turned out to be one of my very few defeats in the movie. Louella declined to appear, because Hedda was a very good actress and Louella knew she would steal the scene.” As the studio system began to break down, and actors, abetted by a new breed of agent demanding huge fees and greater independence for clients, began wresting control of their lives away from studio bosses, the Parsons-Hopper hegemony over Hollywood might have toppled. But in fact both women adjusted and adapted as necessary, branching out to the new medium of television. Hedda even dared to go up against Ed Sullivan on Sunday night with an NBC program, Hedda Hopper’s Hollywood. They published more books of memoirs, all commercial successes. No up-and-coming younger columnist even brushed the hem of their garments—in Louella’s case often an Orry-Kelly, Adrian, or Jean-Louis design, and in Hedda’s a Mainbocher, perhaps, with a hat from John Frederick’s or one made by a fan. They lived as well as or better than the stars they wrote about. Hedda spent a tax-deductible $5,000 a year on her signature headgear alone. In addition to clothes, Hedda had a weakness for Bristol glass, which she displayed abundantly, along with her millinery, in the eight-room house she had bought in 1941 on Beverly Hills’ Tropical Avenue. “This is the house that fear built,” she would announce to visitors. Financially somewhat better off than Hedda, Louella kept two houses, the one at 619 North Maple Drive, where she worked, and her Valley residence (with a peach-and-blue bathroom paid for and decorated by neighbor Carole Lombard, and a patchy lawn sometimes filled in with fake grass from a studio prop department). 
And even after Docky’s death Louella had another comfort unavailable to Hedda—a man in her life, in the person of songwriter Jimmy McHugh. A fellow Catholic, he gave his constant companion a gift that she literally idolized: an illuminated 10-foot Virgin Mary which Louella enshrined in her backyard. The couple was a fixture at parties, premieres, and such nightspots as Dino’s Lodge on Sunset Strip, where Louella could be seen “drunk and peeing on the floor” while the house picked up the check, says impresario Allan Carr. The most conspicuous proof of Louella’s and Hedda’s ongoing sovereignty occurred every year at Christmastime. “Your car had to get in line at their houses to deliver presents,” recalls producer A. C. Lyles. Inside, their homes were so glutted with gifts, they “looked like giant cornucopias, with presents tumbling out of closets, walls, and floors,” remembers Tony Curtis. Dorothy Manners reflects, “I can’t imagine why people were so scared of Louella. But they certainly kowtowed to her. Louella, you see, was not just a columnist. She was a corporation. There were seven columns a week—Sunday was a whole section with a rotogravure. She had the radio show Hollywood Hotel. And then she had the Sunday-night East and West Coast gossip show with Winchell—people didn’t move when they were on the air. There were her articles for Modern Screen magazine, which I ghosted—she split the $1,000 she received monthly with me. And every year and a half we’d do a five- or six-week tour of Louella Parsons’s Stars of Tomorrow, playing all the country’s most glamorous movie houses. 
Just to give an idea, one year we had on tour with us Susan Hayward, Robert Stack—and Ronald Reagan and Jane Wyman, when they were beginning their romance.” (According to George Sidney, Stack recently said that he joined the journalist’s vaudeville troupe because Louella cautioned, “If you don’t do it you’ll never work again.”) In an effort to stay up-to-date, both women raced to cultivate new protégés. Jimmy McHugh made a point of introducing Louella to all the newly minted teen musical heartthrobs—Fabian, Bobby Darin, and her personal favorite, Elvis Presley. To tap into the same rock ‘n’ roll youth culture, Hedda enlisted the help of George Christy, then hosting his ABC radio show Teen Town. She developed a particular affection for Steve McQueen, who won her over by “treating her like a chorus girl.” Hedda also fussed over Ann-Margret, says Allan Carr, who managed the actress in the early 60s. “She gave her motherly advice, but Hedda probably got more out of it than Ann-Margret did. Times were changing, the country was changing, and so were the movies. Hedda and Louella just didn’t have the influence over the new young audiences that they had had 10 or 20 years before.” Louella, who had already begun to show signs of severe physical deterioration, suffered a cruel blow when the Los Angeles Examiner folded in 1962. Though her column was switched to the Hearst afternoon paper, the Herald-Express, she thereby lost her edge to Hedda’s morning Los Angeles Times. Still, Louella carried on, going out every night bejeweled and bewildered, like a dowager empress whose country had overthrown her rule, tottering unsteadily on the arm of Jimmy McHugh. And despite rumors of her imminent retirement, by day she put together her column with more than a little help from Dorothy Manners and other assistants. Finally, in 1965, blighted by further medical problems, Louella retired. 
Dorothy Manners took over the column and gradually substituted her byline for that of the great Louella. At 84, this living fossil of Hollywood’s golden age was installed in a Santa Monica rest home. There she was attended by a private nurse, paid for by the Hearst corporation. Hedda—once described by Time magazine as “blessed with eternal middle age”— carried on in perfect health straight into the mid-60s. But—estranged from Bill and Joan, her granddaughter—Hedda, warding off loneliness, insinuated herself into the cozy family life of her neighbors, filmmaker Bob Enders and his wife, Estelle. At Christmas the four Enders children helped her dig into her mountain of presents. One year a gift came from Kirk Douglas, whom she had refused to speak to for a long time. Hedda called to thank the actor, but before she did she turned to Bob and Estelle and admitted, “I’ve been a bitch.” Hedda had one last crack at the movies—a minor part in the sudsy melodrama The Oscar. Regally elegant at 80 in a jeweled gown and the kind of towering Dairy Queen hairdo that she used to preserve overnight with rolls of toilet paper, Hedda made a brief but memorable appearance. The last word she uttered onscreen was “Bye.” On a Friday night early in 1966, producer Bill Frye and Rosalind Russell stopped by Hedda’s house on Tropical Avenue for a cocktail. “[Photographer] Jerome Zerbe had invited us all to dinner at Chasen’s,” Frye says. “Hedda had on a hat and suit and she looked marvelous. Then I looked down and saw she was wearing bedroom slippers. Hedda explained, ‘I don’t feel up to it. If you go out, you should give. If you can’t give, you shouldn’t go out.’ It was a kind of motto.” Hedda, who never overstayed her welcome at parties, had another motto: “Go before the glow fades”—and so she did. The following Monday, before the release of The Oscar and two months after Louella’s official retirement, she died of complications from double pneumonia. 
Harriet, feeling it her duty to inform Louella of Hedda’s death, visited her ailing mother at the Santa Monica rest home. “Mother, I have something to tell you,” Harriet said. “Hedda died today.” This announcement was followed by a long silence, then a look of confusion, and then another long silence—finally broken by the exclamation “GOOD!” And that, says Roddy McDowall, “was her last cogent word.” Louella lingered on for six more years, a decrepit, mute relic who most of the world assumed was dead. During her incarceration, “she went into a complete silence,” says Dorothy Manners. “She just lay there, with no reaction, utterly expressionless.” Another person close to Louella’s circle says that “in her room she watched TV a great deal—sort of. Her mind was so gone, she’d sit transfixed, watching snow on television. It was the Twilight of the Gods.” “At the end,” says Gavin Lambert, “Louella and Hedda looked more and more like bizarre dinosaurs.” As with these extinct behemoths, no other creatures ever rose up from the swamp to replace them. Dorothy Manners retired in 1977, Aileen Mehle turned down offers to continue both columns, and Joyce Haber had a run at the Los Angeles Times, but was dropped. Liz Smith reflects, “L.A. is now a town with no gossip column. No one wants to let these demons loose again.” And to all those fearful of demon columnists, past or future, Hedda had this to say: “They should know what I haven’t written!”
https://dlsun.github.io/probability/linearity.html
Introduction to Probability

Lesson 26. Linearity of Expectation

Theory

In this lesson, we learn a shortcut for calculating expected values of the form
$$E[aX + bY].$$
In general, evaluating expected values of functions of random variables requires LOTUS. But when the function is linear, we can break up the expected value into more manageable parts.

Theorem 26.1 (Adding or Multiplying a Constant). Let $X$ be a random variable and $a, b$ be constants. Then,
$$E[aX] = aE[X], \tag{26.1}$$
$$E[X + b] = E[X] + b. \tag{26.2}$$

Proof. Since $aX$ and $X + b$ are technically functions of $X$, we use LOTUS (24.1):
$$\begin{aligned} E[aX] &= \sum_x ax f(x) && \text{(LOTUS)} \\ &= a \sum_x x f(x) && \text{(factor the constant outside the sum)} \\ &= a E[X] && \text{(definition of expected value)}; \end{aligned}$$
$$\begin{aligned} E[X+b] &= \sum_x (x + b) f(x) && \text{(LOTUS)} \\ &= \sum_x x f(x) + \sum_x b f(x) && \text{(break $(x+b)f(x)$ into $xf(x) + bf(x)$)} \\ &= \sum_x x f(x) + b \sum_x f(x) && \text{(factor the constant outside the sum)} \\ &= E[X] + b && \text{(definitions of expected value and p.m.f.)}. \end{aligned}$$

Here is an example illustrating how Theorem 26.1 can be used.

Example 26.1 (Expected Values in Roulette). In roulette, betting on a single number pays 35-to-1. That is, for each $1 you bet, you win $35 if the ball lands in that pocket. If we let $X$ represent your net winnings (or losses) on this bet, its p.m.f. is
$$\begin{array}{r|cc} x & -1 & 35 \\ \hline f_X(x) & 37/38 & 1/38 \end{array}$$
In Lesson 22, we calculated $E[X]$ directly. Here is another way we can calculate it using Theorem 26.1. Let us define a new random variable $W$, which takes on the values 0 and 1 with the same probabilities:
$$\begin{array}{r|cc} w & 0 & 1 \\ \hline f_W(w) & 37/38 & 1/38 \end{array}$$
We can think of $W$ as an indicator variable for whether or not we win. It is easy to see that $E[W] = 1/38$. (One way is to just use the formula. Another is to note that $W$ is a $\text{Binomial}(n=1, p=1/38)$ random variable, so $E[W] = np = 1/38$.)

Now, the amount we win, $X$, is related to this indicator variable $W$ by
$$X = 36W - 1.$$
(Verify that $X$ takes on the values $35$ and $-1$ with the correct probabilities.) Now, by Theorem 26.1, the expected value is
$$E[X] = E[36W - 1] = 36\,E[W] - 1 = 36\left(\frac{1}{38}\right) - 1 = -\frac{2}{38},$$
which matches what we got in Lesson 22.

The next result is even more useful.

Theorem 26.2 (Linearity of Expectation). Let $X$ and $Y$ be random variables. Then, no matter what their joint distribution is,
$$E[X+Y] = E[X] + E[Y]. \tag{26.3}$$

Proof. Since $E[X + Y]$ involves two random variables, we have to evaluate the expectation using 2D LOTUS (25.1), with $g(x, y) = x + y$. Suppose that the joint distribution of $X$ and $Y$ is $f(x, y)$. Then:
$$\begin{aligned} E[X + Y] &= \sum_x \sum_y (x + y) f(x, y) && \text{(2D LOTUS)} \\ &= \sum_x \sum_y x f(x, y) + \sum_x \sum_y y f(x, y) && \text{(break $(x+y)f(x,y)$ into $xf(x,y) + yf(x,y)$)} \\ &= \sum_x x \sum_y f(x, y) + \sum_y y \sum_x f(x, y) && \text{(move each term outside the inner sum)} \\ &= \sum_x x f_X(x) + \sum_y y f_Y(y) && \text{(definition of marginal distribution)} \\ &= E[X] + E[Y] && \text{(definition of expected value)}. \end{aligned}$$

In other words, linearity of expectation says that you only need to know the marginal distributions of $X$ and $Y$ to calculate $E[X + Y]$. Their joint distribution is irrelevant.

Let’s apply this to the Xavier and Yolanda problem from Lesson 18.

Example 26.2 (Xavier and Yolanda Revisited). Xavier and Yolanda head to the roulette table at a casino. They both place bets on red on 3 spins of the roulette wheel before Xavier has to leave. After Xavier leaves, Yolanda places bets on red on 2 more spins of the wheel. Let $X$ be the number of bets that Xavier wins and $Y$ be the number that Yolanda wins.
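The two routes to $E[X]$ in Example 26.1 are easy to check numerically. Here is a minimal sketch in Python (the variable names are our own); it computes the expected winnings both directly from the p.m.f. and via the shortcut $X = 36W - 1$ of Theorem 26.1:

```python
from fractions import Fraction

# P.m.f. of the net winnings X on a single-number roulette bet (Example 26.1)
pmf_X = {-1: Fraction(37, 38), 35: Fraction(1, 38)}

# Direct computation: E[X] = sum over x of x * f(x)
E_X_direct = sum(x * p for x, p in pmf_X.items())

# Shortcut via the indicator W and Theorem 26.1: X = 36W - 1
E_W = Fraction(1, 38)
E_X_linear = 36 * E_W - 1

print(E_X_direct, E_X_linear)  # -1/19 -1/19  (i.e., -2/38 in lowest terms)
```

Using exact rational arithmetic (`fractions.Fraction`) rather than floats removes any doubt that the two answers agree exactly.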
In Lesson 25, we calculated $E[Y - X]$, the expected number of additional times that Yolanda wins, by applying 2D LOTUS to the joint p.m.f. of $X$ and $Y$. The calculation was tedious. In this lesson, we see how linearity of expectation allows us to avoid tedious calculations. First, by (26.1) and (26.3), we see that:
$$E[Y - X] = E[Y] + E[-1 \cdot X] = E[Y] + (-1) E[X] = E[Y] - E[X].$$
We know that $Y$ is $\text{Binomial}(n=5, N_1=18, N_0=20)$ and $X$ is $\text{Binomial}(n=3, N_1=18, N_0=20)$. $X$ and $Y$ are definitely not independent, since three of Yolanda’s bets are identical to Xavier’s. But linearity of expectation says that to calculate $E[Y - X]$, it does not matter how $X$ and $Y$ are related to each other; we only need their marginal distributions. From Appendix A.1 (and Lesson 22), we know the expected value of a binomial random variable is $n\frac{N_1}{N}$, so
$$E[Y - X] = E[Y] - E[X] = 5 \cdot \frac{18}{38} - 3 \cdot \frac{18}{38} = 2 \cdot \frac{18}{38} \approx 0.947,$$
which matches the answer we got in Lesson 25 by applying 2D LOTUS.

Linearity allows us to calculate the expected values of complicated random variables by breaking them into simpler random variables.

Example 26.3 (Expected Value of the Binomial and Hypergeometric Distributions). In Lesson 22, we showed that the expected values of the binomial and hypergeometric distributions are the same: $n\frac{N_1}{N}$. But the proofs we gave were tedious and did not give any insight into why this formula is true. Let’s prove this formula using linearity of expectation.

If $X$ is a $\text{Binomial}(n, N_1, N_0)$ random variable, then we can break $X$ down into the sum of simpler random variables:
$$X = Y_1 + Y_2 + \ldots + Y_n,$$
where $Y_i$ represents the outcome of the $i$th draw from the box. So $Y_i$ equals $1$ with probability $N_1/N$ and is $0$ otherwise. Its p.m.f. is about as simple as it gets:
$$\begin{array}{r|cc} y & 0 & 1 \\ \hline f(y) & N_0/N & N_1/N \end{array}$$
By linearity of expectation:
$$E[X] = E[Y_1] + E[Y_2] + \ldots + E[Y_n].$$
We have taken a complicated random variable $X$ and broken it down into simpler random variables $Y_i$, whose expected value is trivial to calculate:
$$E[Y_i] = 0 \cdot \frac{N_0}{N} + 1 \cdot \frac{N_1}{N} = \frac{N_1}{N}.$$
Therefore,
$$E[X] = \underbrace{\frac{N_1}{N} + \frac{N_1}{N} + \ldots + \frac{N_1}{N}}_{\text{$n$ terms}} = n \frac{N_1}{N}.$$

What if $X$ is a $\text{Hypergeometric}(n, N_1, N_0)$ random variable? We can break $X$ down in exactly the same way, as a sum of the outcomes of each draw:
$$X = Y_1 + Y_2 + \ldots + Y_n,$$
except that now the $Y_i$s are not independent. However, each $Y_i$ still represents a random draw from a box with $N_1$ $\fbox{1}$s and $N_0$ $\fbox{0}$s, so $Y_i$ equals 1 with probability $N_1/N$, just as before. Also, linearity of expectation does not care whether or not the random variables are independent. So the expected value of the hypergeometric is also:
$$E[X] = \underbrace{\frac{N_1}{N} + \frac{N_1}{N} + \ldots + \frac{N_1}{N}}_{\text{$n$ terms}} = n \frac{N_1}{N}.$$

Here is a clever application of linearity.

Example 26.4. Let $X$ be a $\text{Binomial}(n, N_1, N_0)$ random variable. What is $E[X(X-1)]$? In Example 24.3, we calculated this expected value using LOTUS. Here is a way to calculate it using linearity. Remember that $X$ represents the number of $\fbox{1}$s in our sample. The random variable $X(X-1)$ then represents the number of (ordered) ways to choose two tickets from the $\fbox{1}$s in our sample. In the diagram below, $n=4$ and $X=3$. Each arrow represents one of the $n(n-1) = 12$ ways to choose two tickets from the $n$ tickets in the sample. The red arrows represent the $X(X-1) = 6$ ways of choosing two tickets among the $\fbox{1}$s. Let’s define an indicator variable $Y_{ij}$, $i \neq j$, for each of the $n(n-1)$ ways of choosing two tickets from our sample.
Let $Y_{ij}$ be 1 if tickets $i$ and $j$ are both $\fbox{1}$s. In other words, $Y_{ij} = 1$ if and only if there is a red arrow connecting the two tickets in the diagram above. Since $X(X-1)$ is the number of red arrows, we have
$$X(X-1) = \sum_{i=1}^n \sum_{j \neq i} Y_{ij}.$$
Now, by linearity:
$$E[X(X-1)] = \sum_{i=1}^n \sum_{j \neq i} E[Y_{ij}].$$
But $E[Y_{ij}]$ is simply the probability that tickets $i$ and $j$ are both $\fbox{1}$s. This probability is $E[Y_{ij}] = \frac{N_1^2}{N^2}$. Since there are $n(n-1)$ $Y_{ij}$s,
$$E[X(X-1)] = n(n-1) \frac{N_1^2}{N^2},$$
which matches the answer we got in Lesson 24 by more tedious means.

Essential Practice

1. Each year, as part of a “Secret Santa” tradition, a group of 4 friends write their names on slips of paper and place the slips into a hat. Each member of the group draws a name at random from the hat and must buy a gift for that person. Of course, it is possible that they draw their own name, in which case they buy a gift for themselves. What is the expected number of people who draw their own name? Hint: Express this complicated random variable as a sum of indicator random variables (i.e., ones that only take on the values 0 or 1), and use linearity of expectation.

2. McDonald’s decides to give a Pokemon toy with every Happy Meal. Each time you buy a Happy Meal, you are equally likely to get any one of the 6 types of Pokemon. What is the expected number of Happy Meals that you have to buy until you “catch ’em all”? Hint: Express this complicated random variable as a sum of geometric random variables, and use linearity of expectation.

3. A group of 60 people are comparing their birthdays (as usual, assume that their birthdays are independent, all 365 days are equally likely, etc.). Find the expected number of days in the year on which at least two of these people were born. Hint: Express this complicated random variable as a sum of indicator random variables, and use linearity of expectation.
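The indicator-variable argument of Example 26.4 (the same technique suggested in the hints above) can be sanity-checked by Monte Carlo simulation. A rough sketch in Python, with arbitrary choices of $n$, $N_1$, $N_0$ and trial count:

```python
import random

# Check E[X(X-1)] = n(n-1) * (N1/N)**2 for a Binomial(n, N1, N0) random
# variable (Example 26.4) by simulation. The parameter values are arbitrary.
random.seed(0)  # fixed seed so the run is reproducible
N1, N0, n = 18, 20, 5
N = N1 + N0
trials = 200_000

total = 0
for _ in range(trials):
    # n independent draws with replacement from a box of N1 ones, N0 zeros
    X = sum(random.random() < N1 / N for _ in range(n))
    total += X * (X - 1)

estimate = total / trials
exact = n * (n - 1) * (N1 / N) ** 2  # = 1620/361, about 4.4875
print(estimate, exact)  # the estimate should land close to the exact value
```

With 200,000 trials the standard error of the estimate is on the order of 0.01, so agreement to roughly two decimal places is expected.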
Additional Practice

1. A hash table is a commonly used data structure in computer science, allowing for fast information retrieval. For example, suppose we want to store some people’s phone numbers. Assume that no two of the people have the same name. For each name $x$, a hash function $h$ is used, where $h(x)$ is the location to store $x$’s phone number. After such a table has been computed, to look up $x$’s phone number one just recomputes $h(x)$ and then looks up what is stored in that location. Typically, $h$ is chosen to be (pseudo)random. Suppose there are 100 people, with each person’s phone number stored in a random location (independently), represented by an integer between 1 and 1000. It then might happen that one location has more than one phone number stored there, if two different people $x$ and $y$ end up with the same random location for their information to be stored. Find the expected number of locations with no phone numbers stored, the expected number with exactly one phone number, and the expected number with more than one phone number.

2. Calculate $E[X(X-1)]$ for a $\text{Hypergeometric}(n, N_1, N_0)$ random variable $X$ using linearity. (Hint: Follow Example 26.4.)
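For readers meeting hash tables for the first time, the setup in the first exercise above can be sketched in a few lines of Python. Everything here (the names, the bucket-list design, memoizing each name’s random location in a dictionary to play the role of $h$) is our own illustration of the idea, not part of the exercise:

```python
import random

# A toy hash table for the exercise's setting: 100 people's phone numbers
# hashed into 1000 locations, each location chosen uniformly at random.
# Each location holds a list ("bucket"), since collisions can occur.
M = 1000                          # number of locations
table = [[] for _ in range(M)]
h = {}                            # remembers each name's random location, i.e. h(x)

def store(name, number):
    loc = h.setdefault(name, random.randrange(M))  # h(x): a random location
    table[loc].append((name, number))

def lookup(name):
    # Recompute h(name) and scan only that one location.
    for stored_name, number in table[h[name]]:
        if stored_name == name:
            return number

for i in range(100):
    store(f"person{i}", 555_0000 + i)

print(lookup("person42"))  # 5550042
```

Lookups touch a single bucket rather than the whole table, which is why hash tables give fast retrieval; the exercise then asks how the 100 entries spread across the 1000 buckets in expectation.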
https://chemistry.stackexchange.com/questions/134780/how-does-overvoltage-affect-the-products-of-electrolysis
physical chemistry - How does overvoltage affect the products of electrolysis? - Chemistry Stack Exchange
To gain full voting privileges, earn reputation. Got it!Go to help center to learn more How does overvoltage affect the products of electrolysis? Ask Question Asked 5 years, 3 months ago Modified5 years, 1 month ago Viewed 5k times This question shows research effort; it is useful and clear 4 Save this question. Show activity on this post. When you electrolyse an aqueous solution of N a C l,N a C l, the product formed at cathode is H X 2 H X 2 gas (preferential discharging), but the product formed at anode is C l X 2 C l X 2 gas. According to standard electrode potential, reduction of C l X 2 C l X 2 is 1.36 V and O X 2 O X 2 is 1.23 V. Keeping the values in mind, O X 2 O X 2 gas should be formed at anode because it has a smaller electrode potential. But it is given that chloride ions is preferred by anode because of a phenomena called overvoltage. I don't understand how overvoltage can effect the products of electrolysis. physical-chemistry electrochemistry electrolysis oxidation-state reduction-potential Share Share a link to this question Copy linkCC BY-SA 4.0 Cite Follow Follow this question to receive notifications edited Jun 4, 2020 at 12:05 ljmljm asked Jun 3, 2020 at 17:50 ljmljm 151 1 1 silver badge 9 9 bronze badges 6 1 Basically, there is a kinetic hindrance to the production of oxygen gas at the anode. Overcoming that requires increasing the voltage over what would be required if there was no hindrance. The extra voltage is the over-voltage. The over-voltage depends on factors such as anode composition and surface characteristics.Ed V –Ed V 2020-06-03 18:04:03 +00:00 Commented Jun 3, 2020 at 18:04 @EdV Where does this kinetic hindrance come from?ljm –ljm 2020-06-03 18:07:24 +00:00 Commented Jun 3, 2020 at 18:07 Good question! I have no idea about the exact mechanistic details that are ultimately responsible for the hindrance. It is likely very complicated and hard to study. 
But the oxidation of chloride is evidently relatively easy: take electrons from chloride ions, and the resulting chlorine atoms form chlorine molecules.

- ljm (Jun 3, 2020): @EdV But I just feel like I can't grasp the entirety of the preference of O2 gas over Cl2 gas. I have so many questions! My textbook says the slowness of the electrode reactions creates a resistance on the electrode surface which the overvoltage needs to overcome? How is this resistance being set up? How do I predict the products of electrolysis of other salts like PbBr2 without knowing whether preferential discharge or overvoltage is the priority that the ion is following? It's all very confusing for me.

- Ed V (Jun 3, 2020): The resistance is just another way of looking at it: there is a reaction rate "bottleneck", i.e., the rate of oxidation is hindered. So the current flow, involved in oxygen production, is reduced by this impedance. Over-voltage effects are basically empirical and you just have to deal with not having certainty. Sorry, but that is how it plays out.

2 Answers

Answer 1 (score 4):

> When you electrolyse an aqueous solution of NaCl, the product formed at cathode is H2 gas (preferential discharging), but the product formed at anode is Cl2 gas.

There is another complication. The book is missing the fact that whether chlorine or oxygen is evolved depends on the concentration of the salt. At low salt concentrations oxygen will be formed, and at high NaCl concentrations Cl2 will be produced at the anode. Keep in mind the thermodynamic "competition" is between the oxidation of water and the oxidation of chloride ions.
Electrochemists have spent their lives and careers understanding the oxidation and reduction of water for hundreds of years. So if you are slightly confused and have a lot of questions, that is a sign of a healthy scientific mind. I am glad that you did not digest what the textbook was saying right away. Wikipedia hints at how complex the process of hydrogen and oxygen production at an electrode is.

The numbers which you are quoting are like comparing apples and oranges. The half cell

2 H2O(l) → O2(g) + 4 H+(aq) + 4 e−,  E° = +1.23 V

is valid for a pH of 0, i.e., 1 M H+ ion. Your aqueous NaCl solution is not 1 M in H+. You might need to use the Nernst equation to see what the true potential is at a neutral pH.

answered Jun 3, 2020 by ACR

Comments:

- ljm (Jun 4, 2020): So the given electrode potentials aren't correct because here NaCl is more in concentration? Then why would my textbook give the reason of over-voltage? I think the experiment not taking place under standard conditions is a better explanation than over-voltage.

- ACR (Jun 4, 2020): No, my point is that we cannot ignore concentrations. Read about the Nernst equation first. Overpotential is another story, but first think about what the concentration of NaCl is.

Answer 2 (score 1):

First, one comment on "overvoltage": it doesn't mean that there is zero reaction below it. Even below the theoretical electrochemical potential, electrolysis runs, very slowly. Around the potential plus overvoltage, the slope of the U–I plot has a bend and goes from close to zero to a constant, finite value.
Very simplistic explanations (or rationalisations, rather) for the overvoltage: To create a diatomic gas, you need to oxidise or reduce two ions in very close proximity. That requires a little extra energy/potential, which turns into heat once the two single atoms recombine. This depends on the mobility of the reduced atom, whose adsorption is also an equilibrium process. There are more possible contributions. The extra "dipolar" charge on the ion from its deformed hydrate hull requires a small extra potential to overcome. The moment the ion discharges, the ordered water molecules suddenly depolarise, again creating heat. To adsorb the ion on the surface, you need to push aside other species, which might have a weaker or stronger interaction with the electrode. Etc.

edited Aug 4, 2020 · answered Jun 3, 2020 by Karl

Comments:

- ACR (Jun 4, 2020): I am slightly unsure about "To create a diatomic gas, you need to oxidise or reduce two ions in very close proximity. That requires a little extra energy/potential". The occurrence of two Cl radicals nearby should be a completely random process, and it would be dependent on the concentration of the ions next to the electrode. Why would the appearance of two oxidized Cl radicals next to each other require additional energy?

- Karl (Jun 4, 2020): @M.Farooq Good point. Perhaps put it this way: the discharge of the ions is an equilibrium process. Either you push it in one direction with a larger potential, or the probability of two radicals occurring next to each other becomes very small.
If the radicals are very mobile on the surface, the overvoltage would be smaller, and we know it varies a lot with different electrode materials.
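ACR's suggestion to apply the Nernst equation can be made concrete with a short calculation (an illustration added in editing, not from either answer; it assumes pO2 = 1 atm and the usual values of R, T and F):

```python
import math  # not strictly needed here, kept for extensions

# Nernst correction for the oxygen-evolution half cell
#   2 H2O(l) -> O2(g) + 4 H+(aq) + 4 e-,  E° = +1.23 V at pH 0.
# With pO2 = 1 atm, E = E° + (RT/nF)*ln([H+]^4) = E° - (2.303*RT/F)*pH.

R = 8.314    # gas constant, J/(mol K)
T = 298.15   # temperature, K
F = 96485.0  # Faraday constant, C/mol

def oxygen_evolution_potential(pH, E0=1.23):
    """Reversible potential (V vs SHE) for water oxidation at the given pH."""
    return E0 - (2.303 * R * T / F) * pH

print(round(oxygen_evolution_potential(0), 3))  # 1.23 (standard conditions)
print(round(oxygen_evolution_potential(7), 3))  # 0.816 (neutral solution)
```

At pH 7 the reversible potential for water oxidation drops to about 0.82 V, still well below the 1.36 V needed for chloride, so thermodynamics alone would favor oxygen even more strongly; it is the large kinetic overpotential for oxygen evolution on typical anodes, discussed in the answers above, that lets chlorine win in concentrated brine.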
3929
https://www.quora.com/How-do-you-solve-an-equation-with-complex-coefficients-How-would-you-solve-an-equation-that-involved-imaginary-numbers
How do you solve an equation with complex coefficients? How would you solve an equation that involved imaginary numbers? - Quora

Answer by Bernd Leps, former scientific official (retired), 1y ago:

The imaginary numbers and the complex numbers are an extension of the real numbers that overcomes some limitations of the reals, chiefly the square (or other even) roots of negative numbers. What is possible with real numbers is possible with imaginary or complex numbers too. Generally, there is no difference from equations with real coefficients. See why:

Take a linear equation, say ax + b = c, with a not zero. I solve it by equivalent transformation into x = (c - b)/a. OK. And where was it important here whether a, b, c are complex or not?

Yes, the ways to solve an equation will not differ. x² + ax + b = 0 will still have the two solutions x1,x2 = -a/2 ± √(a²/4 - b). But now always two (or one twofold), irrespective of a, b.

Imaginary numbers are complex numbers with real part zero, simply 0 + bi = bi. No additional problem.

What may seem to be a problem is not the maths but simple calculation. If a, b, c are given imaginary or complex numbers, you must be able to add, subtract, multiply and divide them. Is that your problem? And not the theory of solving equations?
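The point that the quadratic formula works unchanged for complex coefficients can be checked directly in Python (an editor's illustration, not part of the original answer; the example equation x² + 2ix - 2 = 0 is made up for the demo):

```python
import cmath

def solve_quadratic(a, b, c):
    """Roots of a*x**2 + b*x + c = 0; works for complex coefficients."""
    d = cmath.sqrt(b * b - 4 * a * c)  # cmath.sqrt handles negative/complex args
    return (-b + d) / (2 * a), (-b - d) / (2 * a)

# x^2 + 2i*x - 2 = 0  ->  roots 1 - i and -1 - i
r1, r2 = solve_quadratic(1, 2j, -2)
print(r1, r2)
for r in (r1, r2):
    assert abs(r * r + 2j * r - 2) < 1e-12  # both roots satisfy the equation
```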
=Here ends the answer to that very question=

Therefore, beyond the question, a hint: be fluent in changing between the (a+bi)-form and the two other forms to denote a complex number, the exponential r·exp(iφ) form and the trigonometric r(cos(φ) + i·sin(φ)) form. Some operations are easier to calculate in one of these forms than in another, so it can be advantageous to use the best form in each situation. But if these forms are not part of your lessons, forget them (for the moment); nearly all of it can be done without them.

Converting: Given z = a + bi, use r = √(a²+b²), φ = arctan(b/a). Pay attention to the fact that φ is ambiguous by integer multiples of 2π or 360°. And your pocket calculator may give you an answer wrong by 180° if a is negative: b/(-a) = -b/a, but the arctangent is an angle in another quadrant. Hint for professionals (who have to program this): if b is not zero, use φ = 2·arctan(b/(a+r)) to fix that. Given z = r·exp(iφ) = r(cos(φ) + i·sin(φ)): a = r·cos(φ), b = r·sin(φ).

Addition: (a+bi) + (c+di) = (a+c) + (b+d)i

Subtraction: (a+bi) - (c+di) = (a-c) + (b-d)i

Multiplication: Easiest as r·exp(iφ) times s·exp(iψ) = rs·exp(i(φ+ψ)). But (a+bi)(c+di) = (ac-bd) + i(ad+bc) will of course work also.

Division: Easiest as r·exp(iφ) divided by s·exp(iψ) = (r/s)·exp(i(φ-ψ)), if s is not zero, of course. In the a+bi form we use a trick: (c+di)(c-di) = c²+d², a real number. Therefore (a+bi)/(c+di) = (a+bi)(c-di)/((c+di)(c-di)) = (a+bi)(c-di)/(c²+d²) = (ac+bd)/(c²+d²) + i(bc-ad)/(c²+d²)

Power: To raise to a power n (a real number), easiest is (r·exp(iφ))ⁿ = rⁿ·exp(inφ). If n is a fraction (and we take a root, e.g. with n = 1/2, xⁿ = √x), we will find multiple solutions by using as angle nφ, n(φ+2π), n(φ+4π) and so on, exploiting the ambiguity of angles.
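The "multiple solutions by angle ambiguity" remark can be sketched as follows (an illustration added in editing; `nth_roots` is a made-up helper name):

```python
import cmath

def nth_roots(z, n):
    """All n complex n-th roots of z, via r**(1/n) * exp(i*(phi + 2*pi*k)/n)."""
    r, phi = cmath.polar(z)  # z = r * exp(i*phi)
    return [r ** (1.0 / n) * cmath.exp(1j * (phi + 2 * cmath.pi * k) / n)
            for k in range(n)]

# The three cube roots of 8: 2, 2*exp(2*pi*i/3), 2*exp(4*pi*i/3)
for w in nth_roots(8, 3):
    assert abs(w ** 3 - 8) < 1e-9  # each root cubes back to 8
```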
If the power is itself a complex number, we may use zⁿ = exp(n·ln(z)), and with n·ln(z) in the a+bi form we have exp(a+bi) = exp(a)·exp(bi).

Logarithm: Best via ln(z) = ln(r·exp(iφ)) = ln(r) + iφ. Pay attention to r not being zero and to the multiplicity of φ by 2π. For other logarithms (even with a complex base) use log(z) = ln(z)/ln(base of log).

Exponential: As said already, exp(a+bi) = exp(a)·exp(bi).

Hyperbolic functions: Use the exponential replacements, e.g. cosh(z) = (exp(z)+exp(-z))/2 = (exp(a)·exp(bi) + exp(-a)·exp(-bi))/2, etc.

Trigonometric functions: Use the identities cos(a+ib) = cos(a)·cos(ib) - sin(a)·sin(ib), etc., together with cos(ib) = cosh(b), sin(ib) = i·sinh(b) and sinh(ib) = i·sin(b), cosh(ib) = cos(b).

Arcus and area functions: Use
arcsin(z) = -i·ln(iz + √(1-z²))
arccos(z) = -i·ln(z + √(z²-1))
arsinh(z) = ln(z + √(z²+1))
arcosh(z) = ln(z + √(z²-1))
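The conversion formulas, including the professionals' hint φ = 2·arctan(b/(a+r)), can be cross-checked against Python's cmath (an added sketch; `phase_stable` is a made-up name, and cmath.phase serves as the reference):

```python
import cmath
import math

def phase_stable(a, b):
    """Argument of a+bi via the 2*atan(b/(a+r)) trick from the answer.

    Valid whenever b != 0, i.e. z is not on the negative real axis;
    atan2 also covers the a + r == 0 corner."""
    r = math.hypot(a, b)  # r = sqrt(a**2 + b**2), without overflow
    return 2.0 * math.atan2(b, a + r)

for z in (3 + 4j, -3 + 4j, 2 - 5j, -1 + 2j):
    assert abs(phase_stable(z.real, z.imag) - cmath.phase(z)) < 1e-12
```

The trick avoids the quadrant ambiguity of a plain arctan(b/a) mentioned in the answer, because a + r is never negative.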
3930
https://www.britannica.com/dictionary/debilitate
debilitate — debilitating — debilitation
3931
https://www.montana.edu/tjkaiser/ee407/notes/comperrfnc.pdf
Complementary Error Function Table

x     erfc(x)     x     erfc(x)     x     erfc(x)     x     erfc(x)     x     erfc(x)     x     erfc(x)     x     erfc(x)
0     1.000000    0.5   0.479500    1     0.157299    1.5   0.033895    2     0.004678    2.5   0.000407    3     0.00002209
0.01  0.988717    0.51  0.470756    1.01  0.153190    1.51  0.032723    2.01  0.004475    2.51  0.000386    3.01  0.00002074
0.02  0.977435    0.52  0.462101    1.02  0.149162    1.52  0.031587    2.02  0.004281    2.52  0.000365    3.02  0.00001947
0.03  0.966159    0.53  0.453536    1.03  0.145216    1.53  0.030484    2.03  0.004094    2.53  0.000346    3.03  0.00001827
0.04  0.954889    0.54  0.445061    1.04  0.141350    1.54  0.029414    2.04  0.003914    2.54  0.000328    3.04  0.00001714
0.05  0.943628    0.55  0.436677    1.05  0.137564    1.55  0.028377    2.05  0.003742    2.55  0.000311    3.05  0.00001608
0.06  0.932378    0.56  0.428384    1.06  0.133856    1.56  0.027372    2.06  0.003577    2.56  0.000294    3.06  0.00001508
0.07  0.921142    0.57  0.420184    1.07  0.130227    1.57  0.026397    2.07  0.003418    2.57  0.000278    3.07  0.00001414
0.08  0.909922    0.58  0.412077    1.08  0.126674    1.58  0.025453    2.08  0.003266    2.58  0.000264    3.08  0.00001326
0.09  0.898719    0.59  0.404064    1.09  0.123197    1.59  0.024538    2.09  0.003120    2.59  0.000249    3.09  0.00001243
0.1   0.887537    0.6   0.396144    1.1   0.119795    1.6   0.023652    2.1   0.002979    2.6   0.000236    3.1   0.00001165
0.11  0.876377    0.61  0.388319    1.11  0.116467    1.61  0.022793    2.11  0.002845    2.61  0.000223    3.11  0.00001092
0.12  0.865242    0.62  0.380589    1.12  0.113212    1.62  0.021962    2.12  0.002716    2.62  0.000211    3.12  0.00001023
0.13  0.854133    0.63  0.372954    1.13  0.110029    1.63  0.021157    2.13  0.002593    2.63  0.000200    3.13  0.00000958
0.14  0.843053    0.64  0.365414    1.14  0.106918    1.64  0.020378    2.14  0.002475    2.64  0.000189    3.14  0.00000897
0.15  0.832004    0.65  0.357971    1.15  0.103876    1.65  0.019624    2.15  0.002361    2.65  0.000178    3.15  0.00000840
0.16  0.820988    0.66  0.350623    1.16  0.100904    1.66  0.018895    2.16  0.002253    2.66  0.000169    3.16  0.00000786
0.17  0.810008    0.67  0.343372    1.17  0.098000    1.67  0.018190    2.17  0.002149    2.67  0.000159    3.17  0.00000736
0.18  0.799064    0.68  0.336218    1.18  0.095163    1.68  0.017507    2.18  0.002049    2.68  0.000151    3.18  0.00000689
0.19  0.788160    0.69  0.329160    1.19  0.092392    1.69  0.016847    2.19  0.001954    2.69  0.000142    3.19  0.00000644
0.2   0.777297    0.7   0.322199    1.2   0.089686    1.7   0.016210    2.2   0.001863    2.7   0.000134    3.2   0.00000603
0.21  0.766478    0.71  0.315335    1.21  0.087045    1.71  0.015593    2.21  0.001776    2.71  0.000127    3.21  0.00000564
0.22  0.755704    0.72  0.308567    1.22  0.084466    1.72  0.014997    2.22  0.001692    2.72  0.000120    3.22  0.00000527
0.23  0.744977    0.73  0.301896    1.23  0.081950    1.73  0.014422    2.23  0.001612    2.73  0.000113    3.23  0.00000493
0.24  0.734300    0.74  0.295322    1.24  0.079495    1.74  0.013865    2.24  0.001536    2.74  0.000107    3.24  0.00000460
0.25  0.723674    0.75  0.288845    1.25  0.077100    1.75  0.013328    2.25  0.001463    2.75  0.000101    3.25  0.00000430
0.26  0.713100    0.76  0.282463    1.26  0.074764    1.76  0.012810    2.26  0.001393    2.76  0.000095    3.26  0.00000402
0.27  0.702582    0.77  0.276179    1.27  0.072486    1.77  0.012309    2.27  0.001326    2.77  0.000090    3.27  0.00000376
0.28  0.692120    0.78  0.269990    1.28  0.070266    1.78  0.011826    2.28  0.001262    2.78  0.000084    3.28  0.00000351
0.29  0.681717    0.79  0.263897    1.29  0.068101    1.79  0.011359    2.29  0.001201    2.79  0.000080    3.29  0.00000328
0.3   0.671373    0.8   0.257899    1.3   0.065992    1.8   0.010909    2.3   0.001143    2.8   0.000075    3.3   0.00000306
0.31  0.661092    0.81  0.251997    1.31  0.063937    1.81  0.010475    2.31  0.001088    2.81  0.000071    3.31  0.00000285
0.32  0.650874    0.82  0.246189    1.32  0.061935    1.82  0.010057    2.32  0.001034    2.82  0.000067    3.32  0.00000266
0.33  0.640721    0.83  0.240476    1.33  0.059985    1.83  0.009653    2.33  0.000984    2.83  0.000063    3.33  0.00000249
0.34  0.630635    0.84  0.234857    1.34  0.058086    1.84  0.009264    2.34  0.000935    2.84  0.000059    3.34  0.00000232
0.35  0.620618    0.85  0.229332    1.35  0.056238    1.85  0.008889    2.35  0.000889    2.85  0.000056    3.35  0.00000216
0.36  0.610670    0.86  0.223900    1.36  0.054439    1.86  0.008528    2.36  0.000845    2.86  0.000052    3.36  0.00000202
0.37  0.600794    0.87  0.218560    1.37  0.052688    1.87  0.008179    2.37  0.000803    2.87  0.000049    3.37  0.00000188
0.38  0.590991    0.88  0.213313    1.38  0.050984    1.88  0.007844    2.38  0.000763    2.88  0.000046    3.38  0.00000175
0.39  0.581261    0.89  0.208157    1.39  0.049327    1.89  0.007521    2.39  0.000725    2.89  0.000044    3.39  0.00000163
0.4   0.571608    0.9   0.203092    1.4   0.047715    1.9   0.007210    2.4   0.000689    2.9   0.000041    3.4   0.00000152
0.41  0.562031    0.91  0.198117    1.41  0.046148    1.91  0.006910    2.41  0.000654    2.91  0.000039    3.41  0.00000142
0.42  0.552532    0.92  0.193232    1.42  0.044624    1.92  0.006622    2.42  0.000621    2.92  0.000036    3.42  0.00000132
0.43  0.543113    0.93  0.188437    1.43  0.043143    1.93  0.006344    2.43  0.000589    2.93  0.000034    3.43  0.00000123
0.44  0.533775    0.94  0.183729    1.44  0.041703    1.94  0.006077    2.44  0.000559    2.94  0.000032    3.44  0.00000115
0.45  0.524518    0.95  0.179109    1.45  0.040305    1.95  0.005821    2.45  0.000531    2.95  0.000030    3.45  0.00000107
0.46  0.515345    0.96  0.174576    1.46  0.038946    1.96  0.005574    2.46  0.000503    2.96  0.000028    3.46  0.00000099
0.47  0.506255    0.97  0.170130    1.47  0.037627    1.97  0.005336    2.47  0.000477    2.97  0.000027    3.47  0.00000092
0.48  0.497250    0.98  0.165769    1.48  0.036346    1.98  0.005108    2.48  0.000453    2.98  0.000025    3.48  0.00000086
0.49  0.488332    0.99  0.161492    1.49  0.035102    1.99  0.004889    2.49  0.000429    2.99  0.000024    3.49  0.00000080
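The tabulated values can be reproduced with Python's standard library (a consistency check added in editing, not part of the original PDF):

```python
import math

# Spot-check a few table entries against math.erfc.
# erfc(x) = 1 - erf(x) = (2/sqrt(pi)) * integral from x to infinity of exp(-t^2) dt
table = {0.5: 0.479500, 1.0: 0.157299, 2.0: 0.004678, 3.0: 0.00002209}

for x, expected in table.items():
    assert abs(math.erfc(x) - expected) < 5e-6, (x, math.erfc(x))

print(math.erfc(1.0))  # ≈ 0.157299..., matching the table to 6 decimals
```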
3932
https://mc-stan.org/docs/2_28/stan-users-guide/solving-a-system-of-linear-odes-using-a-matrix-exponential.html
13.8 Solving a system of linear ODEs using a matrix exponential | Stan User's Guide

Stan User's Guide Overview Part 1. Example Models 1 Regression Models 1.1 Linear regression Matrix notation and vectorization 1.2 The QR reparameterization 1.3 Priors for coefficients and scales 1.4 Robust noise models 1.5 Logistic and probit regression 1.6 Multi-logit regression Identifiability 1.7 Parameterizing centered vectors K−1 degrees of freedom QR decomposition Translated and scaled simplex Soft centering 1.8 Ordered logistic and probit regression Ordered logistic regression 1.9 Hierarchical logistic regression 1.10 Hierarchical priors Boundary-avoiding priors for MLE in hierarchical models 1.11 Item-response theory models Data declaration with missingness 1PL (Rasch) model Multilevel 2PL model 1.12 Priors for identifiability Location and scale invariance Collinearity Separability 1.13 Multivariate priors for hierarchical models Multivariate regression example 1.14 Prediction, forecasting, and backcasting Programming predictions Predictions as generated quantities 1.15 Multivariate outcomes Seemingly unrelated regressions Multivariate probit regression 1.16 Applications of pseudorandom number generation Prediction Posterior predictive checks 2 Time-Series Models 2.1 Autoregressive models AR(1) models Extensions to the AR(1) model AR(2) models AR(K) models ARCH(1) models 2.2 Modeling temporal heteroscedasticity GARCH(1,1) models 2.3 Moving average models MA(2) example Vectorized MA(Q) model 2.4 Autoregressive moving average models Identifiability and stationarity 2.5 Stochastic volatility models 2.6 Hidden Markov models Supervised parameter estimation Start-state and end-state probabilities Calculating sufficient statistics Analytic posterior Semisupervised estimation Predictive inference 3 Missing Data and Partially Known Parameters 3.1 Missing data 3.2 Partially known parameters 3.3 Sliced missing data 3.4 Loading matrix for factor analysis 3.5 Missing
multivariate data 4 Truncated or Censored Data 4.1 Truncated distributions 4.2 Truncated data Constraints and out-of-bounds returns Unknown truncation points 4.3 Censored data Estimating censored values Integrating out censored values 5 Finite Mixtures 5.1 Relation to clustering 5.2 Latent discrete parameterization 5.3 Summing out the responsibility parameter Log sum of exponentials: linear Sums on the log scale Dropping uniform mixture ratios Recovering posterior mixture proportions Estimating parameters of a mixture 5.4 Vectorizing mixtures 5.5 Inferences supported by mixtures Mixtures with unidentifiable components Inference under label switching 5.6 Zero-inflated and hurdle models Zero inflation Hurdle models 5.7 Priors and effective data size in mixture models Comparison to model averaging 6 Measurement Error and Meta-Analysis 6.1 Bayesian measurement error model Regression with measurement error Rounding 6.2 Meta-analysis Treatment effects in controlled studies 7 Latent Discrete Parameters 7.1 The benefits of marginalization 7.2 Change point models Model with latent discrete parameter Marginalizing out the discrete parameter Coding the model in Stan Fitting the model with MCMC Posterior distribution of the discrete change point Discrete sampling Posterior covariance Multiple change points 7.3 Mark-recapture models Simple mark-recapture model Cormack-Jolly-Seber with discrete parameter Collective Cormack-Jolly-Seber model Individual Cormack-Jolly-Seber model 7.4 Data coding and diagnostic accuracy models Diagnostic accuracy Data coding Noisy categorical measurement model Model parameters Noisy measurement model Stan implementation 8 Sparse and Ragged Data Structures 8.1 Sparse data structures 8.2 Ragged data structures 9 Clustering Models 9.1 Relation to finite mixture models 9.2 Soft K-means 9.2.1 Geometric hard K-means clustering 9.2.2 Soft K-means clustering Stan implementation of soft K-means Generalizing soft K-means 9.3 The difficulty of Bayesian 
inference for clustering Non-identifiability Multimodality 9.4 Naive Bayes classification and clustering Coding ragged arrays Estimation with category-labeled training data Estimation without category-labeled training data Full Bayesian inference for naive Bayes Prediction without model updates 9.5 Latent Dirichlet allocation The LDA Model Summing out the discrete parameters Implementation of LDA Correlated topic model 10 Gaussian Processes 10.1 Gaussian process regression 10.2 Simulating from a Gaussian process Multivariate inputs Cholesky factored and transformed implementation 10.3 Fitting a Gaussian process GP with a normal outcome Discrete outcomes with Gaussian processes Automatic relevance determination 10.3.1 Priors for Gaussian process parameters Predictive inference with a Gaussian process Multiple-output Gaussian processes 11 Directions, Rotations, and Hyperspheres 11.1 Unit vectors 11.2 Circles, spheres, and hyperspheres 11.3 Transforming to unconstrained parameters 11.4 Unit vectors and rotations Angles from unit vectors 11.5 Circular representations of days and years 12 Solving Algebraic Equations 12.1 Example: system of nonlinear algebraic equations 12.2 Coding an algebraic system 12.3 Calling the algebraic solver Data versus parameters Length of the algebraic function and of the vector of unknowns Pathological solutions 12.4 Control parameters for the algebraic solver Tolerance Maximum number of steps 13 Ordinary Differential Equations 13.1 Notation 13.2 Example: simple harmonic oscillator 13.3 Coding the ODE system function Strict signature 13.4 Measurement error models Simulating noisy measurements Estimating system parameters and initial state 13.5 Stiff ODEs 13.6 Control parameters for ODE solving Discontinuous ODE system function Tolerance Maximum number of steps 13.7 Adjoint ODE solver 13.8 Solving a system of linear ODEs using a matrix exponential 14 Computing One Dimensional Integrals 14.1 Calling the integrator 14.1.1 Limits of integration 
14.1.2 Data vs. parameters 14.2 Integrator convergence 14.2.1 Zero-crossing integrals 14.2.2 Avoiding precision loss near limits of integration in definite integrals 15 Complex Numbers 15.1 Working with complex numbers 15.1.1 Constructing and accessing complex numbers 15.1.2 Complex assignment and promotion 15.1.3 Complex arrays 15.1.4 Complex functions 15.2 Complex random variables 15.3 Complex linear regression 15.3.1 Independent real and imaginary error 15.3.2 Dependent complex error Part 2. Programming Techniques 16 Floating Point Arithmetic 16.1 Floating-point representations 16.1.1 Finite values 16.1.2 Normality 16.1.3 Ranges and extreme values 16.1.4 Signed zero 16.1.5 Not-a-number values 16.1.6 Positive and negative infinity 16.2 Literals: decimal and scientific notation 16.3 Arithmetic precision 16.3.1 Rounding and probabilities 16.3.2 Machine precision and the asymmetry of 0 and 1 16.3.3 Complementary and epsilon functions 16.3.4 Catastrophic cancellation 16.3.5 Overflow 16.3.6 Underflow and the log scale 16.4 Log sum of exponentials 16.4.1 Log-sum-exp function 16.4.2 Applying log-sum-exp to a sequence 16.4.3 Calculating means with log-sum-exp 16.5 Comparing floating-point numbers 17 Matrices, Vectors, and Arrays 17.1 Basic motivation 17.2 Fixed sizes and indexing out of bounds 17.3 Data type and indexing efficiency Matrices vs. two-dimensional arrays (Row) vectors vs. one-dimensional arrays 17.4 Memory locality Memory locality Matrices Arrays 17.5 Converting among matrix, vector, and array types 17.6 Aliasing in Stan containers 18 Multiple Indexing and Range Indexing 18.1 Multiple indexing 18.2 Slicing with range indexes Lower and upper bound indexes Lower or upper bound indexes Full range indexes Slicing functions 18.3 Multiple indexing on the left of assignments Assign-by-value and aliasing 18.4 Multiple indexes with vectors and matrices Vectors Matrices Matrices with one multiple index Arrays of vectors or matrices Block, row, and column extraction for
matrices 18.5 Matrices with parameters and constants 19 User-Defined Functions 19.1 Basic functions User-defined functions block Function bodies Return statements Reject statements Type declarations for functions Array types for function declarations Data-only function arguments 19.2 Functions as statements 19.3 User-defined probability functions 19.4 Overloading functions 19.5 Documenting functions 19.6 Summary of function types Void vs.non-void return Suffixed or non-suffixed 19.7 Recursive functions 19.8 Truncated random number generation Generation with inverse CDFs Truncated variate generation 20 Custom Probability Functions 20.1 Examples Triangle distribution Exponential distribution Bivariate normal cumulative distribution function 21 Proportionality Constants 21.1 Dropping Proportionality Constants 21.2 Keeping Proportionality Constants 21.3 User-defined Distributions 21.4 Limitations on Using _lupdf and _lupmf Functions 22 Problematic Posteriors 22.1 Collinearity of predictors in regressions Examples of collinearity Mitigating the invariances 22.2 Label switching in mixture models Mixture models Convergence monitoring and effective sample size Some inferences are invariant Highly multimodal posteriors Hacks as fixes 22.3 Component collapsing in mixture models 22.4 Posteriors with unbounded densities Mixture models with varying scales Beta-binomial models with skewed data and weak priors 22.5 Posteriors with unbounded parameters Separability in logistic regression 22.6 Uniform posteriors 22.7 Sampling difficulties with problematic priors Gibbs sampling Hamiltonian Monte Carlo sampling No-U-turn sampling Examples: fits in Stan 23 Reparameterization and Change of Variables 23.1 Theoretical and practical background 23.2 Reparameterizations Beta and Dirichlet priors Transforming unconstrained priors: probit and logit 23.3 Changes of variables Change of variables vs.transformations Multivariate changes of variables 23.4 Vectors with varying bounds Varying lower 
bounds Varying upper and lower bounds 24 Efficiency Tuning 24.1 What is efficiency? 24.2 Efficiency for probabilistic models and algorithms 24.3 Statistical vs. computational efficiency 24.4 Model conditioning and curvature Condition number and adaptation Unit scales without correlation Varying curvature Reparameterizing with a change of variables 24.5 Well-specified models 24.6 Avoiding validation 24.7 Reparameterization Example: Neal’s funnel Reparameterizing the Cauchy Reparameterizing a Student-t distribution Hierarchical models and the non-centered parameterization Non-centered parameterization Multivariate reparameterizations 24.8 Vectorization Gradient bottleneck Vectorizing summations Vectorization through matrix operations Vectorized probability functions Reshaping data for vectorization 24.9 Exploiting sufficient statistics 24.10 Aggregating common subexpressions 24.11 Exploiting conjugacy 24.12 Standardizing predictors and outputs Standard normal distribution 24.13 Using map-reduce 25 Parallelization 25.1 Reduce-sum 25.1.1 Example: logistic regression 25.1.2 Picking the grainsize 25.2 Map-rect 25.2.1 Map function Map function signature 25.2.2 Example: logistic regression 25.2.3 Example: hierarchical logistic regression 25.2.4 Ragged inputs and outputs 25.3 OpenCL Part 3. 
Posterior Inference & Model Checking 26 Posterior Predictive Sampling 26.1 Posterior predictive distribution 26.2 Computing the posterior predictive distribution 26.3 Sampling from the posterior predictive distribution 26.4 Posterior predictive simulation in Stan 26.4.1 Simple Poisson model 26.4.2 Stan code 26.4.3 Analytic posterior and posterior predictive 26.5 Posterior prediction for regressions 26.5.1 Posterior predictive distributions for regressions 26.5.2 Stan program 26.6 Estimating event probabilities 26.7 Stand-alone generated quantities and ongoing prediction 27 Simulation-Based Calibration 27.1 Bayes is calibrated by construction 27.2 Simulation-based calibration 27.3 SBC in Stan 27.3.1 Example model 27.3.2 Testing a Stan program with simulation-based calibration 27.3.3 Pseudocode for simulation-based calibration 27.3.4 The importance of thinning 27.4 Testing uniformity 27.4.1 Indexing to simplify arithmetic 27.5 Examples of simulation-based calibration 27.5.1 When things go right 27.5.2 When things go wrong 27.5.3 When Stan’s sampler goes wrong 28 Posterior and Prior Predictive Checks 28.1 Simulating from the posterior predictive distribution 28.2 Plotting multiples 28.3 Posterior p-values’’ 28.3.1 Which statistics to test? 
28.4 Prior predictive checks 28.4.1 Coding prior predictive checks in Stan 28.5 Example of prior predictive checks 28.6 Mixed predictive replication for hierarchical models 28.7 Joint model representation 28.7.1 Posterior predictive model 28.7.2 Prior predictive model 28.7.3 Mixed replication for hierarchical models 29 Held-Out Evaluation and Cross-Validation 29.1 Evaluating posterior predictive densities 29.1.1 Stan program 29.2 Estimation error 29.2.1 Parameter estimates 29.2.2 Predictive estimates 29.2.3 Predictive estimates in Stan 29.3 Cross-validation 29.3.1 Stan implementation with random folds 29.3.2 User-defined permutations 29.3.3 Cross-validation with structured data 29.3.4 Cross-validation with spatio-temporal data 29.3.5 Approximate cross-validation 30 Poststratification 30.1 Some examples 30.1.1 Earth science 30.1.2 Polling 30.2 Bayesian poststratification 30.3 Poststratification in Stan 30.4 Regression and poststratification 30.5 Multilevel regression and poststratification 30.5.1 Dealing with small partitions and non-identifiability 30.6 Coding MRP in Stan 30.6.1 Binomial coding 30.6.2 Coding binary groups 30.7 Adding group-level predictors 31 Decision Analysis 31.1 Outline of decision analysis 31.2 Example decision analysis Step 1. Define decisions and outcomes Step 2. Define density of outcome conditioned on decision Step 3. Define the utility function Step 4. 
Maximize expected utility 31.3 Continuous choices 32 The Bootstrap and Bagging 32.1 The bootstrap 32.1.1 Estimators 32.1.2 The bootstrap in pseudocode 32.2 Coding the bootstrap in Stan 32.3 Error statistics from the bootstrap 32.3.1 Standard errors 32.3.2 Confidence intervals 32.4 Bagging 32.5 Bayesian bootstrap and bagging Appendices 33 Using the Stan Compiler 33.1 Command-line options for stanc3 33.2 Understanding stanc3 errors and warnings 33.2.1 Warnings 33.2.2 Errors 33.3 Pedantic mode 33.3.1 Distribution argument and variate constraint issues 33.3.2 Special-case distribution issues 33.3.3 Unused parameters 33.3.4 Large or small constants in a distribution 33.3.5 Control flow depends on a parameter 33.3.6 Parameters with multiple twiddles 33.3.7 Parameters with zero or multiple priors 33.3.8 Variables used before assignment 33.3.9 Strict or nonsensical parameter bounds 33.3.10 Pedantic mode limitations 33.4 Automatic updating and formatting of Stan programs 33.4.1 Automatic formatting 33.4.2 Canonicalizing 33.4.3 Known issues 34 Stan Program Style Guide 34.1 Choose a consistent style 34.2 Line length 34.3 File extensions 34.4 Variable naming 34.5 Local variable scope 34.6 Parentheses and brackets Braces for single-statement blocks Parentheses in nested operator expressions No open brackets on own line 34.7 Conditionals 34.8 Functions 34.9 White space Line breaks between statements and declarations No tabs Two-character indents 34.9.1 Space between if, { and condition No space for function calls Spaces around operators No spaces in type constraints Breaking expressions across lines Spaces after commas Unix newlines 35 Transitioning from BUGS 35.1 Some differences in how BUGS and Stan work BUGS is interpreted, Stan is compiled BUGS performs MCMC updating one scalar parameter at a time, Stan uses HMC which moves in the entire space of all the parameters at each step Differences in tuning during warmup The Stan language is directly executable, the BUGS modeling 
Stan User's Guide

13.8 Solving a system of linear ODEs using a matrix exponential

Linear systems of ODEs can be solved using a matrix exponential, which can be considerably faster than using one of the numerical ODE solvers. The solution to \(\frac{d}{dt} y = a y\) is \(y = y_0 e^{a t}\), where the constant \(y_0\) is determined by boundary conditions. We can extend this solution to the vector case, \(\frac{d}{dt} y = A y\), where \(y\) is now a vector of length \(N\) and \(A\) is an \(N \times N\) matrix. The solution is then given by \(y = e^{tA} y_0\), where the matrix exponential is formally defined by the convergent power series
\[
e^{tA} = \sum_{n=0}^{\infty} \frac{(tA)^n}{n!} = I + tA + \frac{t^2 A^2}{2!} + \cdots
\]
We can apply this technique to the simple harmonic oscillator example by setting
\[
y = \begin{bmatrix} y_1 \\ y_2 \end{bmatrix}, \qquad
A = \begin{bmatrix} 0 & 1 \\ -1 & -\theta \end{bmatrix}.
\]
The Stan model to simulate noisy observations using a matrix exponential function is given below. In general, computing a matrix exponential will be more efficient than using a numerical solver; we can, however, only apply this technique to systems of linear ODEs.
data {
  int<lower=1> T;
  vector[2] y0;
  array[T] real ts;
  array[1] real theta;
}
model {
}
generated quantities {
  array[T] vector[2] y_sim;
  matrix[2, 2] A = [[ 0,  1],
                    [-1, -theta[1]]];
  for (t in 1:T) {
    y_sim[t] = matrix_exp((t - 1) * A) * y0;
  }
  // add measurement error
  for (t in 1:T) {
    y_sim[t, 1] += normal_rng(0, 0.1);
    y_sim[t, 2] += normal_rng(0, 0.1);
  }
}

This Stan program simulates noisy measurements from a simple harmonic oscillator. The system of linear differential equations is coded as a matrix. The system parameters theta and initial state y0 are read in as data, along with the observation times ts. The generated quantities block is used to solve the ODE for the specified times and then add random measurement error, producing observations y_sim. Because the ODEs are linear, we can use the matrix_exp function to solve the system.
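As a sanity check outside Stan (not part of the guide), the truncated power series above can be evaluated directly. The pure-Python sketch below uses our own helper names; a production implementation would use a robust routine such as scipy.linalg.expm. With θ = 0 and y0 = (1, 0), the oscillator's exact solution is y(t) = (cos t, −sin t).

```python
import math

# Truncated power series e^{tA} ≈ sum_{n=0}^{N} (tA)^n / n! for a 2x2 matrix.
# Illustration only: a real implementation would use scipy.linalg.expm.
def mat_mul(x, y):
    return [[sum(x[i][k] * y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def expm_2x2(a, t, terms=30):
    identity = [[1.0, 0.0], [0.0, 1.0]]
    result = [row[:] for row in identity]
    term = [row[:] for row in identity]        # holds (tA)^n / n!
    ta = [[t * v for v in row] for row in a]
    for n in range(1, terms):
        term = mat_mul(term, ta)
        term = [[v / n for v in row] for row in term]
        result = [[result[i][j] + term[i][j] for j in range(2)]
                  for i in range(2)]
    return result

# Undamped oscillator (theta = 0): A = [[0, 1], [-1, 0]], y0 = (1, 0).
A = [[0.0, 1.0], [-1.0, 0.0]]
y0 = [1.0, 0.0]
E = expm_2x2(A, 1.0)
y1 = [E[0][0] * y0[0] + E[0][1] * y0[1],
      E[1][0] * y0[0] + E[1][1] * y0[1]]
# y1 is close to the exact solution (cos 1, -sin 1)
```

Thirty series terms are far more than enough at t = 1; the remainder is on the order of 1/30!, well below floating-point precision.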
https://chemistry.mtsu.edu/wp-content/uploads/sites/57/2024/07/Chapter-5Electron-Configuration-Lewis-Dot-Structure-and-Molecular-Shape.pdf
Chapter 5: Electron Configuration, Lewis Dot Structure, and Molecular Shape

Electron configuration. The outermost electrons surrounding an atom (the valence electrons) are responsible for the number and type of bonds that a given atom can form with other atoms, and are responsible for the chemistry of the atom. The shape of the modern periodic table reflects the arrangement of electrons by grouping elements together into s, p, d, and f blocks. If we compare the arrangement of electrons, the logical grouping of elements becomes apparent. The electron configuration (the arrangement of electrons in an element) is a direct result of the work of Bohr, Heisenberg, Schrödinger, and other physicists in the early 20th century.

Elements are arranged in horizontal rows, called periods. Each period is a principal energy level, numbered 1, 2, 3, 4, etc. The first principal energy level starts with hydrogen, the second level starts with lithium, the third with sodium, etc. Each principal energy level holds a maximum number of electrons equal to 2n², where n is the principal energy level number. The second energy level, starting with lithium, holds 2(2²) = 8 electrons maximum; the third level, starting with sodium, holds 2(3²) = 18; the fourth can hold 2(4²) = 32, and so on.

Principal energy levels are divided into sublevels following a distinctive pattern, shown in Table 5.1 below.

Table 5.1. Principal energy levels and their sublevels.

Principal level   Sublevels
1                 s
2                 s, p
3                 s, p, d
4                 s, p, d, f
5                 s, p, d, f, g
6                 s, p, d, f, g, h
7                 s, p, d, f, g, h, i

The principal energy levels and sublevels build on top of each other. The sodium atom in the third principal energy level has the first principal level, composed of an s-sublevel, and the second principal level, composed of s- and p-sublevels, underneath the third principal level. Probably the simplest mental image would be an onion. Each principal level corresponds to a layer of the onion. As you move farther from the center of the onion, the layers get larger, and larger layers can be subdivided into more pieces (sublevels) than can smaller layers.

Each sublevel is in turn divided into orbitals, specific locations for the electrons. The number of orbitals for each sublevel also follows a distinctive pattern, shown in Table 5.2 below.

Table 5.2. Total orbitals for each type of sublevel.

Sublevel   Total orbitals
s          1
p          3
d          5
f          7
g          9
h          11
i          13

All s-sublevels have 1 orbital, regardless of whether they are 1s, or 2s, or 3s, etc. All p-sublevels contain 3 orbitals, all d-sublevels contain 5 orbitals, and so on. Every orbital, regardless of the sublevel, holds a maximum of 2 electrons. In order to compare the arrangement of electrons around atoms, we need some way of predicting the filling order of the principal levels and sublevels. Figure 5.1 below guides us in arranging the electrons.

Figure 5.1. Filling order of principal levels and sublevels.

Using Figure 5.1 is as simple as following the arrows. Start with the top arrow, and follow its direction from tail to head; you see that the first arrow passes through 1s, and the 1s sublevel is the first to fill with electrons. When the 1s sublevel is full, continue to the second arrow, which passes through the 2s sublevel. The third arrow passes through the 2p and 3s sublevels (in this order), while the fourth arrow passes through the 3p and 4s sublevels, in order. The fifth arrow passes through the 3d, 4p, and 5s sublevels in order.

Using the order provided by Figure 5.1, and remembering the total number of orbitals in each sublevel, we can write electronic configurations for the elements. The electronic configuration is the electronic structure of the atom: the specific levels, sublevels, and number of electrons occupying orbitals for a given atom. Let's look at group 1A elements. Hydrogen has 1 electron, and the electronic configuration is 1s¹.
This tells us that the first principal energy level is being filled, the s-sublevel is the specific location within the principal energy level, and there is 1 electron in the s-orbital (the superscript "1" is the number of electrons). Electronic configurations for all group 1A elements are given below.

H (1 electron): 1s¹
Li (3 electrons): 1s², 2s¹
Na (11 electrons): 1s², 2s², 2p⁶, 3s¹
K (19 electrons): 1s², 2s², 2p⁶, 3s², 3p⁶, 4s¹
Rb (37 electrons): 1s², 2s², 2p⁶, 3s², 3p⁶, 4s², 3d¹⁰, 4p⁶, 5s¹
Cs (55 electrons): 1s², 2s², 2p⁶, 3s², 3p⁶, 4s², 3d¹⁰, 4p⁶, 5s², 4d¹⁰, 5p⁶, 6s¹
Fr (87 electrons): 1s², 2s², 2p⁶, 3s², 3p⁶, 4s², 3d¹⁰, 4p⁶, 5s², 4d¹⁰, 5p⁶, 6s², 4f¹⁴, 5d¹⁰, 6p⁶, 7s¹

Notice that in all cases the outermost energy level contains a single sublevel, an s-type sublevel, and this sublevel contains 1 electron. The outside layers of these elements are very similar. The group 2A elements contain one additional electron, and in all cases this additional electron is in the s-type sublevel. Beryllium will be 1s², 2s². Magnesium will resemble sodium with an additional electron (1s², 2s², 2p⁶, 3s²), calcium will resemble potassium with an additional electron, and so on.

This pattern follows throughout the periodic table. The group 3A elements all have an outer layer following the pattern ns², np¹ (where n is the principal energy level number). Boron's outside layer is 2s², 2p¹, aluminum's is 3s², 3p¹, and so on. Group 4A elements follow the pattern ns², np²; group 5A's pattern is ns², np³, and so on. Group 8A, the Noble gases, have a very interesting electronic structure.

He (2 electrons): 1s²
Ne (10 electrons): 1s², 2s², 2p⁶
Ar (18 electrons): 1s², 2s², 2p⁶, 3s², 3p⁶
Kr (36 electrons): 1s², 2s², 2p⁶, 3s², 3p⁶, 4s², 3d¹⁰, 4p⁶
Xe (54 electrons): 1s², 2s², 2p⁶, 3s², 3p⁶, 4s², 3d¹⁰, 4p⁶, 5s², 4d¹⁰, 5p⁶
Rn (86 electrons): 1s², 2s², 2p⁶, 3s², 3p⁶, 4s², 3d¹⁰, 4p⁶, 5s², 4d¹⁰, 5p⁶, 6s², 4f¹⁴, 5d¹⁰, 6p⁶

The outside layer of electrons is full!
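The filling order of Figure 5.1 is mechanical: fill sublevels in order of increasing n + l, breaking ties with the lower n (the Madelung rule), and give each sublevel at most 2(2l + 1) electrons. The short Python sketch below is not part of the text and the function name is ours; it reproduces the configurations listed above (using plain digits in place of superscripts).

```python
# Sketch of the Figure 5.1 filling order (the Madelung or "n + l" rule):
# fill sublevels by increasing n + l, ties broken by lower n; each sublevel
# holds 2(2l + 1) electrons (2 per orbital). Function name is hypothetical.
def electron_configuration(electrons):
    letters = "spdfghi"
    sublevels = sorted((n + l, n, l) for n in range(1, 8) for l in range(n))
    config = []
    remaining = electrons
    for _, n, l in sublevels:
        if remaining <= 0:
            break
        filled = min(2 * (2 * l + 1), remaining)   # sublevel capacity
        config.append(f"{n}{letters[l]}{filled}")  # plain digits, e.g. "2p6"
        remaining -= filled
    return ", ".join(config)

# Matches the group 1A list above, e.g. sodium (11) and potassium (19):
# electron_configuration(11) -> "1s2, 2s2, 2p6, 3s1"
# electron_configuration(19) -> "1s2, 2s2, 2p6, 3s2, 3p6, 4s1"
```

Sorting tuples (n + l, n, l) puts 2p (n + l = 3, n = 2) before 3s (n + l = 3, n = 3), and 3d before 4p before 5s, exactly as the arrows in Figure 5.1 dictate.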
With the exception of helium, which is too small to hold 8 electrons, all other elements in this group follow the pattern ns², np⁶. This arrangement is called a Noble gas configuration: 8 electrons completely filling the ns and np sublevels. This arrangement confers extraordinary stability on these elements. The Noble gases don't participate in very many chemical reactions, and then only under extreme conditions. When we get into the d block (transition elements) and the f block (lanthanides/actinides), the same general pattern holds. Group 3B elements have similar electron configurations, as do group 4B, 5B, and so on.

Lewis Dot Structures. Metallic elements, to the left of the staircase dividing line, tend to lose one or more electrons and form ions. Hydrogen is the single general exception to this trend; when combined with non-metals, hydrogen tends to form covalent compounds. All of the non-metals have filled s-sublevels and partially filled p-sublevels. There is a fixed number of electrons in the outside layer of these elements, and this number is equal to the group number. Boron is the only group 3A non-metal, and has 3 electrons in its outside layer. Carbon and silicon are group 4A non-metals and have 4 electrons in their outside layers. Groups 5A, 6A, 7A, and 8A follow similarly.

In 1916, American physical chemist Gilbert Newton Lewis (1875 – 1946) discovered the covalent bond and developed a method of representing bonding between non-metals using simple diagrams called Lewis dot structures. Lewis dot structures are the element's chemical symbol, with the valence electrons arranged uniformly around the four sides of the symbol. Lewis dot structures for the first few non-metals are shown in Figure 5.2.

Figure 5.2. Lewis dot structures for the first few non-metals.
Silicon is similar to carbon, with "Si" replacing "C"; phosphorus and arsenic are similar to nitrogen; sulfur, selenium, and tellurium are similar to oxygen; chlorine, bromine, and iodine are similar to fluorine; and the Noble gases are similar to neon.

In Lewis dot structures, some dots are single, representing single electrons, while others are doubled, representing pairs of electrons. We can predict the number of bonds each element forms by counting the number of single electrons in the Lewis dot structure. Boron can form 3 bonds, carbon can form 4 bonds, nitrogen can form 3, oxygen can form 2 bonds, fluorine forms 1 bond, and neon doesn't form any bonds because it has no single electrons, only pairs. When one element shares a single electron with another element having a single electron, the shared pair of electrons is a chemical bond. For example, hydrogen (having only 1 electron) can share this electron with fluorine to make hydrogen fluoride (HF). The Lewis dot structure of this compound is:

Figure 5.3. Lewis dot structure of hydrogen fluoride.

For clarity, the electron from hydrogen is colored red. The chemical bond is composed of one (black) electron from fluorine and one (red) electron from hydrogen. In water, two hydrogen atoms are connected to an oxygen atom, and the Lewis dot structure of this compound is:

Figure 5.4. Lewis dot structure of water.

Before anyone gets the wrong idea: the electrons aren't permanently fixed to one particular side of the chemical symbol. We can always distribute the electrons in any fashion that is convenient or pleasing to us, provided that we keep them either as single electrons or as pairs of electrons (no groups of 3 or more electrons). We could always draw the water molecule as:

Figure 5.5. Alternate Lewis dot structure of water.

The shared pair will always be somewhere between the two bonded atoms. Often, we show a shared pair as a single line representing the chemical bond.
Using this method we can draw our water molecule as:

Figure 5.6. Lewis structures of water, showing two equivalent structures.

Strictly speaking, structures using a combination of lines for bonds and dots for unshared pairs of electrons are Lewis structures, not Lewis dot structures. The distinction is not particularly important for this course, and you can use either. However, don't try to use both! Some students will draw a single line and then include two dots as electrons (above the line, below the line, or straddling the line). This is wrong and confusing.

With two exceptions (hydrogen and boron), all non-metals must have a total of 8 electrons. These electrons can be shared pairs (bonds) or unshared pairs. Hydrogen forms one bond, and is satisfied having 2 electrons. Boron has 3 electrons, and can make 3 bonds for a total of 6 electrons. All other non-metals need 8 electrons.

The structures of many simple molecules can be determined by comparing the number of bonds that each element can form. However, not all molecules are equally simple, and small molecules can sometimes be deceptive. Consider carbon monoxide (CO). Carbon has 4 single electrons, and can make 4 bonds. Oxygen has 2 single electrons, and can make 2 bonds. What does carbon monoxide look like?

When the simple inspection method doesn't work, we use the "pooled electron" method. We add together the total number of valence electrons for our elements, and then distribute them so that each element has 8 electrons (either as shared pairs or as unshared pairs). For carbon monoxide, carbon contributes 4 electrons, oxygen contributes 6, and the total valence electron count is 10. These 10 electrons have to be distributed so that bonds are formed, and so that both carbon and oxygen have 8 electrons. These two requirements result in the following Lewis dot structure:

Figure 5.7. Lewis dot structure of carbon monoxide.

Or, if we prefer to use lines for bonds:

Figure 5.8. Lewis structure of carbon monoxide.
This method works especially well for polyatomic ions. In sulfate (SO₄²⁻), sulfur contributes 6 electrons, and each oxygen atom contributes 6 electrons. There are two additional electrons indicated by the −2 charge, for a total of 32 electrons. One possible structure would be a central sulfur atom surrounded by the oxygen atoms:

Figure 5.9. One possible Lewis dot structure for sulfate.

Other arrangements meeting the requirements that all electrons are used and each atom has 8 electrons are possible. For positively charged polyatomic ions like ammonium and hydronium, we must subtract the positive charge from the valence electrons, since loss of electron(s) results in positive charge(s). Ammonium has a total of 8 valence electrons, as does hydronium, and their Lewis dot structures are:

Figure 5.10. Lewis dot structures of ammonium (left) and hydronium (right).

Sometimes a compound or ion can have more than one valid structure. An example is carbonate, which has three equivalent Lewis dot structures:

Figure 5.11. Three Lewis dot structures for the carbonate anion.

The three structures are equivalent. Notice that one of the oxygen atoms shares 4 electrons with carbon (it is double bonded to carbon). In each structure a different oxygen atom is double bonded to carbon. This is generally explained by the idea of resonance. Resonance does NOT mean that the ion is rapidly interchanging between the three different forms, nor does it mean that there is an equal mixture of the three forms. Instead, the extra carbon-oxygen bond is somehow "spread out" over the entire molecule: 4 bonds are spread over 3 carbon-oxygen positions, so every carbon-oxygen bond is actually 4/3 ≈ 1.33 bonds. Now, this idea that we can have "fractional" bonds doesn't make very much sense if we are talking about bonds being equivalent to two electrons, because what sense is there in talking about a fraction of an electron?
However, you need to remember that the Bohr model of the atom, with electrons as small negatively charged particles orbiting the nucleus, is WRONG (although very convenient for many purposes). In reality, the electron (when "orbiting" the nucleus) behaves as a matter wave. Just like two sound waves can combine to form a new sound wave, or two water waves can combine to form a larger water wave, two electron waves can combine to form a new wave. When two "electron-waves" combine, they form a new "electron-wave" that we call a "chemical bond". Having 1/3 of an electron is a difficult notion, but having one wave 1/3 as large as another wave isn't nearly as difficult to picture. Imagining how two electrons, as negatively charged particles, can hold two atoms together is difficult. Imagining two electrons combining to form an "electron-wave" may not help very much, but it is a better description of the chemical bond. I'm going to continue talking about pairs of electrons as if they are actual small negatively charged bits of matter, but when they are IN the atom, they aren't.

VSEPR theory. Valence shell electron pair repulsion theory (VSEPR) allows us to predict the geometry of a molecule, based on the arrangement of atoms and unshared electron pairs around a central atom. As we have seen from our Lewis dot structures, each atom tends to have four pairs of electrons around it. If a molecule is sufficiently large (5 atoms total), then we have 4 atoms arranged around a fifth atom at the center. Methane (CH₄) is a good example of this arrangement. When we draw a methane molecule on a flat surface, the flat surface forces 2-dimensional geometry (unless we try to make some sort of perspective drawing). Below are some common representations of a methane molecule.

Figure 5.12. Three different representations of the methane molecule.

The perspective drawing tries to emphasize the 3-dimensional nature of molecules. By convention, the dark triangular lines are bonds projecting out of the surface of the paper, while the hashed triangle is a bond projecting into and through the paper. Methane is a tetrahedral molecule: a pyramid made of 4 equal triangles (as opposed to a normal pyramid made of 4 equal triangles and a square base). Tetrahedral geometry is the only 3-D geometry allowing four objects to be equally distributed around a central fifth object. VSEPR theory states that a pair of electrons occupies almost the same volume of space as a hydrogen atom. Hydrogen fluoride, water, sulfate, ammonia, and
By convention, the dark triangular lines are bonds projecting out of the surface of the paper, while the hashed triangle is a bond projecting into and through the paper. Methane is a tetrahedral molecule – a pyramid made of 4 equal triangles (as opposed to a normal pyramid made of 4 equal triangles and a square base). Tetrahedral geometry is the only 3-­‐D geometry allowing four objects to be equally distributed around a central fifth object. VSEPR theory states that a pair of electrons occupies almost the same volume of space as a hydrogen atom. Hydrogen fluoride, water, sulfate, ammonia, and 65 hydronium ion have shapes similar to methane, but with unshared electron pairs replacing hydrogen atoms in some cases. These substances are shown below. Figure 5.13. Geometry of hydrogen fluoride, water, sulfate, ammonium, and hydronium ions. If there are fewer than 4 atoms bonded around a central atom, or if there are no unshared electron pairs, the molecular geometry changes. Table 5.3 summarizes various possible combinations of atoms and unshared pairs, and the resulting geometry. These “rules” apply to central atoms obeying the octet rule. Some elements, such as sulfur, have the ability for form “expanded octets”, resulting in more than 8 electrons around sulfur. These are exceptions, and aren’t particularly useful in learning the general pattern exhibited by most compounds, so we won’t worry about these examples here. 66 # of atoms bonded to central atom # of unshared electron pairs Molecular geometry Examples 2 0 Linear Carbon dioxide 2 1 Bent Sulfur dioxide 2 2 Bent Water 3 0 Trigonal planar Boron trifluoride 3 1 Trigonal pyramidal Ammonia 4 0 Tetrahedral Methane Table 5.3. Molecular geometry based on bonded atoms and lone pairs. Bond polarity and molecular polarity. When identical atoms share electrons in a chemical bond, the sharing must be exactly equal. Consider two hydrogen atoms bonded to form a molecule. 
Neither hydrogen atom can exert greater control of the electrons than the other. The electron pair is shared equally between the two atoms, because the two atoms are identical. When dissimilar atoms share electrons, the situation is different. Each element has a characteristic electronegativity, a chemical property describing the tendency of an atom to attract electrons towards itself. Table 5.4 shows Pauling electronegativity values for selected non-metals. "Pauling electronegativities" are named for American chemist Linus Carl Pauling (1901 – 1994), who developed the concept of electronegativity. Pauling was one of the founders of quantum chemistry and of molecular biology. He is the only person to win two unshared Nobel Prizes, the first for Chemistry and the second for Peace.

Table 5.4. Pauling electronegativity values for selected non-metals.

Element   Electronegativity
H         2.1
B         2.0
C         2.5
N         3.0
O         3.4
F         4.0
Si        1.9
P         2.2
S         2.6
Cl        3.2
Se        2.5
Br        3.0
I         2.7

We can estimate how equally electrons are shared between elements by comparing electronegativities for pairs of elements: simply subtract the smaller value from the larger. This result, the difference in electronegativity (DEN), is used to determine the bond polarity.

If DEN < 0.6, then the bond is considered to be "nonpolar covalent". It is essentially the same as a bond between identical atoms. Carbon-hydrogen bonds have DEN = 0.4, so they are nonpolar covalent.

If DEN < 1.6, then the bond is considered to be "polar covalent". In this bond the electrons are shared, but they are strongly attracted to the element having the higher electronegativity value. Hydrogen-oxygen bonds have DEN = 1.3, and the shared electrons are attracted towards oxygen and away from hydrogen.

If DEN > 2.0, the bond is considered ionic. The only example we have of this among the non-metals is the silicon-fluorine bond (DEN = 2.1). However, many metals (not shown) have very low electronegativity values and readily form ionic bonds. Sodium and calcium each have an electronegativity of 1.0. Sodium-oxygen bonds have DEN = 2.4, and sodium-chlorine bonds have DEN = 2.2. Generally, we classify any compound between metals and non-metals as ionic as a matter of course.

If DEN is between 1.6 and 2.0, then the bond classification depends on the type of elements combined. If one of the elements is a metal, then the bond is considered ionic, while if both elements are non-metals, then the bond is polar covalent. Hydrogen fluoride has DEN = 1.9, but since both elements are non-metals the bond is polar covalent. Scandium's electronegativity is 1.3, and scandium chloride has DEN = 1.9. Since scandium is a metal, this bond is ionic.

We can readily show the attraction of shared pairs towards one element by using arrows instead of straight lines, with the arrowhead pointing towards the element that attracts the electrons (Figure 5.14). If the bond is nonpolar covalent, we can use either a line or a double-headed arrow. Notice the "δ+" and "δ−" symbols: these represent partial + and − charges, due to the imbalance in electron sharing. In water, for example, the electrons are strongly attracted to oxygen, resulting in oxygen being a little bit negative. The hydrogen atoms are slightly deprived of electrons and become a little bit positive.

Figure 5.14. Bond polarities are shown using arrows. Shared electrons are drawn towards the element having higher electronegativity.

Molecular polarity is the result of unshared electron pairs, bond polarity, and geometry. A polar molecule will have one side or end of the molecule slightly positive and the opposite end or side slightly negative. If you use the following guidelines, you will rarely go wrong in assigning molecular polarity. Once again, these guidelines apply only to central atoms obeying the octet rule; atoms with expanded octets have modified guidelines.

1. Does the central atom have unshared electron pairs?
If “yes”, then the molecule is polar. Water and ammonia are classic examples; unshared electrons on the central atom guarantee that the molecule is polar. 2. If there are no unshared electron pairs on the central atom, then does the atom have polar covalent or ionic bonds? If all of the bonds are nonpolar covalent, then the molecule is nonpolar. Methane (CH4) is the classic example of this guideline in action. The Lewis dot structure shows that the central carbon atom does not have any unshared electron pairs. The carbon-­‐ hydrogen bonds have DEN = 0.4, clearly nonpolar covalent bonds. The methane molecule is therefore nonpolar. 3. If there are no unshared electron pairs on the central atom, but there are polar covalent bonds, then the molecule’s geometry determines molecular polarity. Geometry can cancel the effects of bond polarity. The simplest way to see this effect is to consider a series of compounds, CH4, CH3F, CH2F2, CHF3, and CF4 (Figure 5.15). Methane is nonpolar as described above. Carbon-­‐ fluorine bonds are polar covalent, shared electrons are attracted to the fluorine atom(s), and the molecule has slightly positive/negative sides. In 69 CF4, the “outside” of the molecule is uniformly negative, while the “inside” (the carbon atom) is slightly positive. However, nothing can get near the slightly positive carbon atom, so the molecule is effectively non-­‐polar. 4. If the substance is made from ions, then it is polar. Figure 5.15. Bond polarity effects can be cancelled out by the molecules geometry. Molecular polarity explains a variety of chemical and physical properties. One example is the expression “like dissolves like”. Polar and ionic substances readily dissolve in polar materials, and are insoluble in nonpolar substances. Sodium chloride (ionic) or ammonia (polar) dissolves readily in water (polar). 70 Hexane (C6H14, no unshared pairs and nonpolar bonds) is nonpolar and doesn’t dissolve particularly well in water. 
Hydrogen bonding is an intermolecular force in which the hydrogen atom attached to an oxygen, nitrogen, or fluorine atom is attracted to the oxygen, nitrogen, or fluorine atom of an adjacent molecule. Water is the classic example of hydrogen bonding, either with another water molecule or with any molecule containing oxygen, nitrogen, or fluorine (Figure 5.16).

Figure 5.16. Hydrogen bonding examples.

Chapter 5 Homework:

Vocabulary. The following terms are defined and explained in the text. Make sure that you are familiar with the meanings of the terms as used in chemistry. Understand that you may have been given incomplete or mistaken meanings for these terms in earlier courses. The meanings given in the text are correct and proper.
Valence electrons
Electron(ic) configuration
Periods
Principal energy level
Sublevels
Orbitals
Noble gas configuration
Resonance
Electronegativity
Bond polarity
Hydrogen bonding

1. For the following elements, write out proper electronic configurations.
a. He b. N c. Al d. Fe e. Br f. Y g. Ag h. I i. U

2. For the following elements, draw proper Lewis dot structures.
a. C b. O c. Cl d. S e. P f. B

3. For the following compounds and ions, draw proper Lewis dot structures.
a. BH3 b. CH2Cl2 c. OCl- d. SO2 e. SO3^2- f. NO3^- g. CH2O

4. For the substances in question 3, use VSEPR theory to determine the shape of the molecule or ion.

5. For the molecular substances in question 3, describe whether or not the molecule is polar or nonpolar. Use the electronegativity values and table in the text to assign bond polarities.

6. For the following compounds, indicate which ones can form hydrogen bonds with water molecules.
a. BH3 b. CH2Cl2 c. H2S d. HCl e. CO2 f. C6H6 g. SO2 h. CH3CH2SH i. NH3 j. CH3COCH3 k. CH3NH2

Answers:

1. For the following elements, write out proper electronic configurations.
a. He 2 electrons; 1s2
b. N 7 electrons; 1s2, 2s2, 2p3
c. Al 13 electrons; 1s2, 2s2, 2p6, 3s2, 3p1
d.
Fe 26 electrons; 1s2, 2s2, 2p6, 3s2, 3p6, 4s2, 3d6
e. Br 35 electrons; 1s2, 2s2, 2p6, 3s2, 3p6, 4s2, 3d10, 4p5
f. Y 39 electrons; 1s2, 2s2, 2p6, 3s2, 3p6, 4s2, 3d10, 4p6, 5s2, 4d1
g. Ag 47 electrons; 1s2, 2s2, 2p6, 3s2, 3p6, 4s2, 3d10, 4p6, 5s2, 4d9
h. I 53 electrons; 1s2, 2s2, 2p6, 3s2, 3p6, 4s2, 3d10, 4p6, 5s2, 4d10, 5p5
i. U 92 electrons; 1s2, 2s2, 2p6, 3s2, 3p6, 4s2, 3d10, 4p6, 5s2, 4d10, 5p6, 6s2, 4f14, 5d10, 6p6, 7s2, 5f4

2. For the following elements, draw proper Lewis dot structures. (The drawn structures are not reproduced here.)
a. C b. O c. Cl d. S e. P f. B

3. For the following compounds and ions, draw proper Lewis dot structures. (The drawn structures are not reproduced here.)
a. BH3 b. CH2Cl2 c. OCl- d. SO2 e. SO3^2- f. NO3^- g. CH2O

4. For the substances in question 3, use VSEPR theory to determine the shape of the molecule or ion.
a. BH3: 3 atoms bonded to the central B atom, no unshared pairs on B; shape is trigonal planar.
b. CH2Cl2: 4 atoms bonded to the central C atom, no unshared pairs on C; shape is tetrahedral.
c. OCl-: No central atom, just two atoms bonded together. Linear.
d. SO2: 2 atoms bonded to the central S atom, one unshared pair on S; shape is bent.
e. SO3^2-: 3 atoms bonded to the central S atom, one unshared pair on S; shape is trigonal pyramidal.
f. NO3^-: 3 atoms bonded to the central N atom, no unshared pairs on N; shape is trigonal planar.
g. CH2O: 3 atoms bonded to the central C atom, no unshared pairs on C; shape is trigonal planar.

5. For the molecular substances in question 3, describe whether or not the molecule is polar or nonpolar. Use the electronegativity values and table in the text to assign bond polarities.
a. BH3: The DEN for B-H bonds is 0.1, which makes the bond nonpolar, and there are no unshared electrons on B, so the molecule is nonpolar.
b. CH2Cl2: The C-H bonds are nonpolar (DEN = 0.4), but the C-Cl bonds are polar covalent (DEN = 0.7). The chlorine side of the molecule has a partial negative charge (δ-) while the hydrogen end has a partial positive charge (δ+). The molecule is polar.
c.
OCl-: This is an anion, so it has a full negative charge. Polar/nonpolar doesn't apply.
d. SO2: The sulfur-oxygen bond is polar covalent (DEN = 0.8), and the central sulfur atom has an unshared pair of electrons. The molecule is polar.
e. SO3^2-: This is an anion, and polar/nonpolar doesn't apply.
f. NO3^-: This is an anion, and polar/nonpolar doesn't apply.
g. CH2O: The C-H bonds are nonpolar (DEN = 0.4), but C=O is polar covalent (DEN = 0.9). There is a partial negative charge on oxygen (δ-) and a partial positive charge on hydrogen (δ+). The molecule is polar.

6. For the following compounds, indicate which ones can form hydrogen bonds with water molecules.
a. BH3: No hydrogen bonding
b. CH2Cl2: No hydrogen bonding
c. H2S: Hydrogen bonding possible
d. HCl: No hydrogen bonding
e. CO2: Hydrogen bonding possible
f. C6H6: No hydrogen bonding
g. SO2: Hydrogen bonding possible
h. CH3CH2SH: Hydrogen bonding possible
i. NH3: Hydrogen bonding possible
j. CH3COCH3: Hydrogen bonding possible
k. CH3NH2: Hydrogen bonding possible
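The electron configurations in answer 1 follow strict aufbau (Madelung) filling order. That order can be generated programmatically; the sketch below reproduces the key's convention exactly, which means elements with known exceptions come out as the key lists them (e.g. Ag as 5s2, 4d9 rather than the measured 5s1, 4d10).

```python
def aufbau_config(electrons):
    """Electron configuration by strict aufbau filling (no d/f exceptions)."""
    # Subshells ordered by the Madelung (n + l) rule, ties broken by lower n.
    shells = sorted(((n, l) for n in range(1, 8) for l in range(n)),
                    key=lambda nl: (nl[0] + nl[1], nl[0]))
    letter = "spdfghi"
    parts = []
    for n, l in shells:
        if electrons <= 0:
            break
        cap = 2 * (2 * l + 1)          # subshell capacity: s=2, p=6, d=10, f=14
        fill = min(cap, electrons)
        parts.append(f"{n}{letter[l]}{fill}")
        electrons -= fill
    return ", ".join(parts)

print(aufbau_config(26))  # Fe: 1s2, 2s2, 2p6, 3s2, 3p6, 4s2, 3d6
print(aufbau_config(47))  # Ag: ends in 5s2, 4d9, matching the key
```

Running it for 26 and 47 electrons reproduces the Fe and Ag entries in answer 1 verbatim.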
3934
https://documents.cap.org/documents/2017-b-antimullerian-hormone.pdf
Educational Discussion: Anti-Müllerian hormone (AMH) 2017-B Antimüllerian Hormone (AMH) The 2017 AMH-B Survey comprised three challenges that continue the previously established analytes for this Survey. The AMH-B Survey challenges are prepared using a spike of hAMH material. Results are provided in the AMH-B 2017 participant summary, grouped by the measurement procedures used by participating laboratories. Anti-Müllerian hormone (AMH) is a biochemical marker garnering increasing clinical attention in laboratory medicine. AMH is produced by Sertoli cells of the testes in males and by ovarian granulosa cells in females. AMH reference intervals for normal males and females are expected to exhibit the greatest differences early in life. Expression of AMH in the male fetus prevents Müllerian duct development and subsequently promotes male reproductive tract development in utero. In the absence of AMH, Müllerian ducts develop into female internal genitalia. AMH measurement has been shown to assist in the clinical assessment of infants with ambiguous genitalia. Studies evaluating intersex conditions in children demonstrate that AMH concentrations reflect the function of Sertoli cells, and are often assessed together with other clinical findings and testosterone measurement. In males, AMH concentrations have also been shown to distinguish undescended testes, in which AMH concentrations are normal for males, from anorchia, in which AMH concentrations are extremely low or undetectable. Testicular dysgenesis is characterized by low concentrations of both AMH and testosterone compared to normal males. AMH has also been studied in conjunction with the measurement of FSH, LH, and testosterone for precocious and delayed puberty in adolescent males. In females, AMH is undetectable at birth, slowly increases to a maximum level after puberty, and then progressively declines to undetectable levels by menopause.
After puberty in females, AMH concentrations are ovarian cycle-independent, tightly correlated with antral follicle count (AFC), and have been demonstrated to reflect the size of the remaining egg supply. Therefore, AMH is a clinically attractive tool for the early identification of decreasing ovarian reserve and timely referral of patients to fertility clinics, as well as for assessing ovarian reserve to help identify patients most likely to respond to assisted reproductive technologies (ART). Approximately 11% of women are estimated to use some form of fertility service; thus, additional diagnostic tools, such as AMH measurement, may provide important information that has the potential to improve outcomes and reduce missed fertility opportunities. As a woman ages, the concentration of AMH falls to undetectable levels at menopause, and most determinations of AMH in females are performed primarily to provide an assessment of menopausal status, including premature ovarian failure. Additional clinical uses of AMH in assessing polycystic ovary syndrome (PCOS) have been demonstrated. However, to our knowledge, there are no consensus guidelines for using AMH in this clinical context. There is currently no international standard reference material or reference measurement procedure for AMH, and no consensus on the threshold value of AMH suggestive of reduced fertility potential. Therefore, a clinician's interpretation of AMH values should always be guided by reference intervals from the laboratory testing their patient's sample. However, in general, it is accepted clinically that a reciprocal relationship exists between AMH concentration and inadequate ovarian reserve: as AMH concentrations decrease, the probability of diminished ovarian reserve progressively increases. A level well above normal suggests adequate ovarian reserve. The levels targeted for the AMH Survey challenges represent relevant concentrations observed in clinical practice.
Consider the results for challenge AMH-06 in the AMH-B 2017 Survey. If using these values to assess reduced fertility potential, the results (depending on the laboratory's reference intervals) could suggest adequate ovarian reserve and likely a good response to ovarian stimulation. In contrast, the results for challenge AMH-04 could suggest a reduced ovarian reserve. In general, the analytical concordance of results across method peer groups in the AMH-B 2017 Survey is good. It is also interesting to note that the largest CV and spread across AMH results was observed in the ELISA method peer group, which may be due to the manual steps in the ELISA process increasing result variability. The importance of AMH in providing clinical insight cannot be overstated. It is involved in sex differentiation during fetal development and can be used in assessing testicular presence and intersex conditions. In women it is a very useful marker for ovarian dysfunction and provides important information regarding ovarian reserve, ovarian response, and progression of menopause. Future CAP AMH Surveys will strive to challenge laboratories with higher, clinically relevant AMH concentrations. Ross J. Molinaro, PhD, DABCC Janet Piscitelli, MD, FCAP Chemistry Resource Committee
3935
https://larsonprecalculus.com/pcwl3e/wp-content/uploads/success_organizer/success_organizer_1008.pdf
Section 10.8 Graphs of Polar Equations

Objective: In this lesson you learned how to graph polar equations.

I. Introduction (Page 747)
What you should learn: How to graph a polar equation by point plotting.

Example: Use point plotting to sketch the graph of the polar equation r = 3 cos θ.

II. Symmetry, Zeros, and Maximum r-Values (Pages 748−750)
What you should learn: How to use symmetry to sketch graphs of polar equations.

The graph of a polar equation is symmetric with respect to the following if the given substitution yields an equivalent equation.
1) The line θ = π/2: Replace (r, θ) by (r, π − θ) or (−r, −θ).
2) The polar axis: Replace (r, θ) by (r, −θ) or (−r, π − θ).
3) The pole: Replace (r, θ) by (−r, θ) or (r, π + θ).

Example: Describe the symmetry of the polar equation r = 2(1 − sin θ).
Symmetric with respect to the line θ = π/2.

Two additional aids to sketching graphs of polar equations are knowing the θ-values for which |r| is maximum and knowing the θ-values for which r = 0.

Example: Describe the zeros and maximum r-values of the polar equation r = 5 cos 2θ.
The maximum value of |r| is 5, occurring at θ = 0, π/2, π, and 3π/2; r = 0 at θ = π/4, 3π/4, 5π/4, and 7π/4.

IV. Special Polar Graphs (Pages 751−752)
What you should learn: How to recognize special polar graphs.

List the general equations that yield each of the following types of special polar graphs:
Limaçons: r = a ± b cos θ and r = a ± b sin θ (a > 0, b > 0)
Rose curves: r = a cos nθ and r = a sin nθ; n petals if n is odd, 2n petals if n is even (n ≥ 2)
Circles: r = a cos θ and r = a sin θ
Lemniscates: r² = a² sin 2θ and r² = a² cos 2θ

Homework Assignment
Page(s)
Exercises
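The symmetry substitutions can be checked numerically rather than algebraically. The sketch below (the helper name r_limacon is ours) verifies that r = 2(1 − sin θ) passes the θ = π/2 test, since replacing θ with π − θ leaves r unchanged, but fails the simple polar-axis test:

```python
import math

def r_limacon(theta):
    # The limacon from the worked example: r = 2(1 - sin(theta))
    return 2 * (1 - math.sin(theta))

# Symmetry about the line theta = pi/2: substituting (r, pi - theta)
# must give the same r for every theta.
thetas = [k * math.pi / 12 for k in range(24)]
assert all(math.isclose(r_limacon(math.pi - t), r_limacon(t)) for t in thetas)

# The polar-axis substitution (r, -theta) changes r in general,
# so the graph is not symmetric about the polar axis.
assert not math.isclose(r_limacon(-math.pi / 2), r_limacon(math.pi / 2))
print("symmetric about theta = pi/2; not symmetric about the polar axis")
```

This mirrors the algebra: sin(π − θ) = sin θ, so the substitution for the line θ = π/2 reproduces the original equation.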
3936
https://www.scribd.com/document/653842827/Bailey-and-Love-28th-Edition
Bailey & Love's Short Practice of Surgery, 28th Edition

Sebaceous horn (The owner, the widow Dimanche, sold water-cress in Paris). A favourite illustration of Hamilton Bailey and McNeill Love, and well known to readers of earlier editions of Short Practice.

Henry Hamilton Bailey 1894–1961. Robert J. McNeill Love 1891–1974. Skilled surgeons, inspirational teachers, dedicated authors.

Edited by:
Professor P. Ronan O'Connell BA MD FRCSI FRCSGlasg FRCSEd FRCSEng (Hon) FCSHK (Hon), President, Royal College of Surgeons in Ireland; President, European Surgical Association; Emeritus Professor of Surgery, University College Dublin, Dublin, Ireland
Professor Andrew W. McCaskie MMus MD FRCSEng FRCS (Tr and Orth), Professor of Orthopaedic Surgery and Head of Department of Surgery, University of Cambridge; Honorary Consultant, Addenbrooke's Hospital, Cambridge University Hospitals NHS Foundation Trust, Cambridge, UK
Professor Robert D. Sayers MBChB(Hons) MD AFHEA FRCSEng, George Davies Chair of Vascular Surgery, University of Leicester and Glenfield Hospital, Leicester, UK
3937
https://www.kpu.ca/sites/default/files/Faculty%20of%20Science%20&%20Horticulture/Physics/PHYS%201120%201D%20Kinematics%20Solutions.pdf
Questions: 1 2 3 4 5 6 7

Physics 1120: 1D Kinematics Solutions

1. Initially, a ball has a speed of 5.0 m/s as it rolls up an incline. Some time later, at a distance of 5.5 m up the incline, the ball has a speed of 1.5 m/s DOWN the incline. (a) What is the acceleration? What is the average velocity? How much time did this take? (b) At some point the velocity of the ball had to have been zero. Where and when did this occur?

A well-labeled sketch usually helps make the problem clearer.

(a) Next, we list the given information and what we are looking for:
v0 = +5.0 m/s
vf = -1.5 m/s
Δx = +5.5 m
a = ?, vaverage = ?, t = ?

Note that I have taken the direction up the incline as positive and that the signs are explicitly stated. It is a very common source of error to leave out or to not consider the signs of directions of all vector quantities.

To find the acceleration, we find the kinematics equation that contains a and the given quantities. Examining our equations we see that we can use vf^2 = v0^2 + 2aΔx. Rearranging this equation to find a yields a = (vf^2 − v0^2)/(2Δx) = ((−1.5)^2 − (5.0)^2)/(2 × 5.5) = −2.068 m/s^2. Notice that the acceleration is negative. This means that the acceleration points down the incline. It means that an object traveling up an incline will slow, turn around, and roll down the incline.

The average velocity for constant acceleration is vaverage = (v0 + vf)/2 = (5.0 + (−1.5))/2 = 1.75 m/s.

To find the time, we find the kinematics equation that contains t and the given quantities. Examining our equations we see that we can use vf = v0 + at. Rearranging this equation to find t yields t = (vf − v0)/a = (−1.5 − 5.0)/(−2.068) = 3.14 s.

(b) When an object moving in 1D turns around, we know that the object is instantaneously at rest and that its velocity at that point is v3 = 0. The information that we know is thus:
v0 = +5.0 m/s
v3 = 0 m/s (this is our new final velocity)
a = -2.068 m/s^2 (from part (a))
Δx = ?, vaverage = ?, t = ?

Notice that the acceleration is a constant of the motion; it has the same value in both parts of the problem.
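The numbers in both parts of problem 1 can be reproduced in a few lines. This is a numeric check (variable names are ours), not part of the original solution:

```python
v0, vf, dx = 5.0, -1.5, 5.5  # up-incline direction taken as positive

a = (vf**2 - v0**2) / (2 * dx)   # from vf^2 = v0^2 + 2*a*dx
v_avg = (v0 + vf) / 2            # constant acceleration: average of endpoints
t = (vf - v0) / a                # from vf = v0 + a*t

print(round(a, 3))      # -2.068 m/s^2
print(round(v_avg, 2))  # 1.75 m/s
print(round(t, 2))      # 3.14 s

# Part (b): turnaround point, where the velocity is momentarily zero
x_turn = -v0**2 / (2 * a)        # 0 = v0^2 + 2*a*x_turn
t_turn = -v0 / a                 # 0 = v0 + a*t_turn
print(round(x_turn, 2))  # 6.04 m up the incline (farther than 5.5 m)
print(round(t_turn, 2))  # 2.42 s (earlier than the 3.14 s of part (a))
```

Both sanity checks from the text hold: the turnaround point is beyond 5.5 m, and the turnaround time is less than the total time.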
To find the displacement from the initial position to where the ball turns around, we find the kinematics equation that contains Δx and the given quantities. Examining our equations we see that we can use v3^2 = v0^2 + 2aΔx. Rearranging this equation to find Δx yields Δx = (v3^2 − v0^2)/(2a) = (0 − 25)/(2 × (−2.068)) = 6.04 m. Notice that this value is bigger than the original 5.5 m and is consistent with the sketch, i.e. the ball was farther up the incline when it turned around.

To find the time it takes for the ball to reach the point where it turns around, we find the kinematics equation that contains t and the given quantities. Examining our equations we see that we can use v3 = v0 + at. Rearranging this equation to find t yields t = (v3 − v0)/a = (0 − 5.0)/(−2.068) = 2.42 s. Notice that this value is smaller than the time in part (a) and is consistent with the sketch, i.e. the ball hasn't come back down the incline yet.

2. A bullet in a rifle accelerates uniformly from rest at a = 70,000 m/s^2. If the velocity of the bullet as it leaves the muzzle is 500 m/s, how long is the rifle barrel? How long did it take for the bullet to travel the length of the barrel? What is the average speed of the bullet?

To solve this problem, we list the given information and what we are looking for:
v0 = 0.0 m/s, since the bullet is initially at rest
vf = 500 m/s, velocity of the bullet as it leaves the barrel
a = 70,000 m/s^2
Δx = ?, the length of the barrel
t = ?, the time it takes to travel the barrel
vaverage = ?

To find the length of the barrel, we find the kinematics equation that contains Δx and the given quantities. Examining our equations we see that we can use vf^2 = v0^2 + 2aΔx. Rearranging this equation to find Δx yields Δx = vf^2/(2a) = (500)^2/(2 × 70,000) = 1.79 m.

To find the time it takes for the bullet to travel the length of the barrel, we find the kinematics equation that contains t and the given quantities. Examining our equations we see that we can use vf = v0 + at. Rearranging this equation to find t yields t = vf/a = 500/70,000 = 7.14 × 10^-3 s.

The average velocity for constant acceleration is vaverage = (v0 + vf)/2 = 250 m/s.

3. A red car is stopped at a red light. As the light turns green, it accelerates forward at 2.00 m/s^2.
At the exact same instant, a blue car passes by traveling at 62.0 km/h. When and how far down the road will the cars again meet? Sketch the d versus t motion for each car on the same graph. What was the average velocity of the red car for this time interval? For the blue car? Compare the two and explain the result.

To solve this problem, we list the given information:
Red Car: v0 red = 0.0 m/s; ared = 2.00 m/s^2; Δxred = ?; tred = ?
Blue Car: v0 blue = 62.0 km/h = 17.222 m/s; ablue = 0 m/s^2 (constant velocity); Δxblue = ?; tblue = ?

This is an example of a two-body constrained kinematics problem. Even if a sketch was not explicitly required, we would need one anyway to get the constraints. For the sketch, recall that on a d versus t curve an object moving forward with a uniform acceleration is represented by a line curving upwards, while an object with constant forward velocity is represented by a straight line with a positive slope. Looking at the sketch, we see that our constraints are:
Δxred = Δxblue (1), and
tred = tblue (2).

To solve the problem, we must find the kinematics equation that contains the known quantities, v0 and a, and the unknown quantities, Δx and t. Examining our equations we see that we can use Δx = v0t + ½at^2. We substitute this equation into both sides of equation (1). This yields
v0 red tred + ½ared(tred)^2 = v0 blue tblue + ½ablue(tblue)^2.
We then use equation (2) to replace tred and tblue by t:
v0 red t + ½ared t^2 = v0 blue t + ½ablue t^2.
Plugging in the values of the given quantities yields ½(2.00)t^2 = 17.222t. The nonzero solution of this equation is t = 17.222 seconds. This is the time that elapses before the two cars meet again.

With a value for t, we can find how far down the road the red car has traveled: Δxred = v0 red t + ½ared t^2 = ½(2.00)(17.222)^2 = 297 m. As a check, we can find how far down the road the blue car has traveled: Δxblue = v0 blue t + ½ablue t^2 = (17.222)(17.222) = 297 m. So the cars meet 297 m down the road.
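The meeting time and distance for the two cars follow from setting the displacements equal, ½at² = vt, and discarding the t = 0 crossing. A quick numeric check (variable names are ours):

```python
a_red = 2.00           # m/s^2, red car's acceleration from rest
v_blue = 62.0 / 3.6    # 62.0 km/h converted to m/s

# 0.5 * a * t^2 = v * t  ->  t = 2 * v / a  (discarding the t = 0 root)
t = 2 * v_blue / a_red
x = 0.5 * a_red * t**2

print(round(t, 1))  # 17.2 s
print(round(x))     # 297 m
```

Note the coincidence the text exploits: with a = 2.00 m/s², the meeting time in seconds equals the blue car's speed in m/s.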
According to our definition of average velocity, vaverage red = Δxred/t = (297 m)/(17.2 s) = 17.2 m/s. Since the blue car maintains a constant velocity, vaverage blue = v0 blue = 17.2 m/s. The two quantities are the same since the two cars have traveled the same distance in the same amount of time.

4. A speeding motorist traveling down a straight highway at 110 km/h passes a parked patrol car. It takes the police constable 1.0 s to take a radar reading and to start up his car. The police vehicle accelerates from rest at 2.1 m/s^2. When the constable catches up with the speeder, how far down the road are they and how much time has elapsed since the two cars passed one another?

To solve this problem, we list the given information:
Constable: v0 police = 0.0 m/s; apolice = 2.1 m/s^2; Δxpolice = ?; tpolice = ?
Motorist: v0 speeder = 110 km/h = 30.556 m/s; aspeeder = 0 m/s^2 (constant velocity); Δxspeeder = ?; tspeeder = ?

This is an example of a two-body constrained kinematics problem. We need a sketch to get the constraints. For the sketch, recall that on a d versus t curve an object moving forward with a uniform acceleration is represented by a line curving upwards, while an object with constant forward velocity is represented by a straight line with a positive slope. Looking at the sketch, we see that our constraints are:
Δxspeeder = Δxpolice (1), and
tspeeder = tpolice + 1 (2).

To solve the problem, we must find the kinematics equation that contains the known quantities, v0 and a, and the unknown quantities, Δx and t. Examining our equations we see that we can use Δx = v0t + ½at^2. We substitute this equation into both sides of equation (1). This yields
v0 speeder tspeeder + ½aspeeder(tspeeder)^2 = v0 police tpolice + ½apolice(tpolice)^2.
We then use equation (2) to replace tspeeder by tpolice + 1:
v0 speeder(tpolice + 1) + ½aspeeder(tpolice + 1)^2 = v0 police tpolice + ½apolice(tpolice)^2.
Plugging in the values of the given quantities yields (30.556)(tpolice + 1) = ½(2.1)(tpolice)^2. This is a quadratic in tpolice. Solving the quadratic yields tpolice = 30.07 seconds. It takes the police constable 30.1 s to catch up with the speeder. The speeder was traveling for 31.1 s.

With a value for tpolice, we can find how far down the road the police car has traveled: Δxpolice = v0 police tpolice + ½apolice(tpolice)^2 = ½(2.1)(30.07)^2 = 949 m. As a check, we can find how far down the road the speeder's car has traveled: Δxspeeder = v0 speeder(tpolice + 1) = (30.556)(31.07) = 949 m. So the cars meet 949 m down the road.

5. A ball is thrown up into the air with an initial velocity of 12.0 m/s. How long will it be in the air before it returns to its starting height? To what maximum height will it rise?

To solve this problem, we list the given information and what we are looking for:
v0 = 12.0 m/s, velocity as it leaves the hand
vtop = 0 m/s, since it turns around
vf = -12.0 m/s, symmetry says it must have this value when it returns to the same height
a = -9.81 m/s^2, only gravity is acting
Δy = 0, since it returns to the same height
tair = ?, the time it takes for the entire trip
tup = tdown = ½tair = ?, symmetry requires this

We have lots and lots of information from symmetry. To find tair, choose the kinematics equation that has t and the known quantities v0, vf, and a, that is vf = v0 + a tair. Solving yields tair = (vf − v0)/a = (−v0 − v0)/(−g) = 2v0/g = 2.4465 seconds. Hence tup = tdown = 1.2232 s.

To find h, choose the kinematics equation that has Δy (h is a displacement) and the known quantities v0, vtop, and a, that is vtop^2 = v0^2 − 2gh. Upon rearrangement, this yields h = Δy = (v0)^2/(2g) = 7.34 m.

6. A ball is thrown up into the air and returns to the same level. It is in the air for 3.20 seconds. With what initial velocity was it thrown? How high did it rise?

To solve this problem, we list the given information and what we are looking for:
v0 = ?,
(velocity as it leaves the hand)
vtop = 0 m/s (since it turns around)
vf = −v0 (symmetry says it must have this value when it returns to the same height)
a = −9.81 m/s² (only gravity is acting)
Δy = 0 (since it returns to the same height)
tair = 3.20 s (the time it takes for the entire trip)
tup = tdown = ½tair = 1.60 s (symmetry requires this)

We have lots and lots of information from symmetry. To find v0, choose the kinematics equation that has v0 and the known quantities vf = −v0, tair, and a, that is vf = v0 + a·tair. Eliminating vf yields −v0 = v0 − g·tair. Rearranging gives v0 = g·tair/2 = 15.7 m/s. To find h, choose the kinematics equation that has Δy (h is a displacement) and the known quantities v0, vtop, and a, that is (vtop)² = (v0)² − 2gΔy. Upon rearrangement, this yields h = Δy = (v0)²/(2g) = 12.6 m.

7. Two balls are thrown upwards from the same spot 1.15 seconds apart. The first ball had an initial velocity of 15.0 m/s and the second 12.0 m/s. At what height do they collide?

To solve this problem, we list the given information:

Ball #1: v0 1 = 15.0 m/s, a1 = −9.81 m/s², Δy1 = ?, t1 = ?
Ball #2: v0 2 = 12.0 m/s, a2 = −9.81 m/s², Δy2 = ?, t2 = ?

This is an example of a two-body constrained kinematics problem. We need a sketch to get the constraints. For the sketch, recall the shape of the d versus t curve for an object thrown up into the air: a parabola. Looking at the sketch, we see that our constraints are:

Δy1 = Δy2 (1), and
t1 = t2 + 1.15 (2).

To solve the problem, we must find the kinematics equation that contains the known quantities, v0 and a = −g, and the unknown quantities, Δy and t. Examining our equations we see that we can use Δy = v0t − ½gt². We substitute this equation into both sides of equation (1). This yields

v01t1 − ½g(t1)² = v02t2 − ½g(t2)².

We then use equation (2) to replace t1 by t2 + 1.15:

v01(t2 + 1.15) − ½g(t2 + 1.15)² = v02t2 − ½g(t2)².

This reduces to

1.15v01 + v01t2 − ½g[(t2)² + 2.30t2 + 1.3225] = v02t2 − ½g(t2)².
Upon rearrangement this becomes

(v01 − v02 − 1.15g)t2 = −(1.15v01 − 0.66125g).

Thus t2 = 1.2997 s, and t1 = 2.4497 s. Now that we have the time that each ball is in the air, we can find h:

h = v01t1 − ½g(t1)² = (15.0)(2.4497) − ½g(2.4497)² = 7.31 m,

and double-checking our result:

h = v02t2 − ½g(t2)² = (12.0)(1.2997) − ½g(1.2997)² = 7.31 m.

So the balls collide when they are 7.31 m in the air. Questions? mike.coombes@kwantlen.ca
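As a quick numerical check on the chase in problem 4, the same quadratic can be solved directly. This is a sketch, not part of the original solutions page; the variable names are mine:

```python
import math

# Problem 4 sketch: constable starts from rest (a = 2.1 m/s^2) one second
# after a motorist passes at a constant 110 km/h.
v0 = 110 / 3.6                      # speeder's velocity, 30.556 m/s
a = 2.1                             # constable's acceleration, m/s^2

# Catch-up condition v0*(t + 1) = 0.5*a*t**2, written as the quadratic
# 0.5*a*t**2 - v0*t - v0 = 0 in t, the constable's travel time.
A, B, C = 0.5 * a, -v0, -v0
t_police = (-B + math.sqrt(B**2 - 4 * A * C)) / (2 * A)
x_meet = 0.5 * a * t_police**2

print(round(t_police, 2), round(x_meet))   # ~30.07 s, ~949 m
```

The positive root of the quadratic reproduces the 30.07 s and 949 m worked out above.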
3938
https://www.bytelearn.com/math-topics/mean-median-mode
Mean Median Mode

Introduction

In Maths, mean, median, and mode are the three principal ways of finding the average value of any given data set. They are three measures of central tendency used to describe the typical or central value of a set of data. The mean is the average value of the data set, the median is the middle value when the data set is arranged in ascending order, and the mode is the value that appears most frequently in the data set. Let’s explore them in detail for various kinds of data sets.

Mean

When we add all the data points together and divide the result by the number of points, we get the mean of the data set. We call it the arithmetic mean or average. It is denoted by x̄:

Mean = (sum of all observations)/(number of observations)

There are two types of data whose mean can be found. Let’s discuss them:

Ungrouped Data

Suppose x_1, x_2, x_3, ..., x_n represent n observations (data points). Below is the formula for calculating the mean of all n data points:

Mean = (x_1 + x_2 + x_3 + … + x_n)/n

Example: What is the mean of the marks obtained by students in the mathematics test? The marks (out of 50) of the five students are 45, 50, 35, 25, and 30.

Solution:
Sum of the marks obtained = 45 + 50 + 35 + 25 + 30 = 185
Number of students = 5
Mean = 185/5 = 37
Thus, the mean of the marks obtained by students in the mathematics test is 37.

Grouped Data

Suppose x_1, x_2, x_3, ..., x_n represent n observations and f_1, f_2, f_3, ..., f_n are their respective frequencies. Below is the formula for calculating the mean of all n observations:

Mean = (x_1f_1 + x_2f_2 + x_3f_3 + … + x_nf_n)/(f_1 + f_2 + f_3 + … + f_n), or
Mean = (∑x_nf_n)/(∑f_n), where n = 1, 2, 3, …

Example: A dataset is given in tabular form. Determine the mean of the data.
Solution: First, we’ll find the product of each x value and its corresponding f value.
Mean = (∑x_nf_n)/(∑f_n) = 310/14 = 22.14
Thus, the mean of the data given in the frequency table is 22.14.

Median

If you arrange a set of data points in ascending order, then the middle value of the arranged data is called the median. There are two types of data whose median can be found. Let’s discuss them:

Ungrouped Data

For ungrouped data, we can come across two types of scenarios. When the data set has an odd number of data points, we take the middle number as the median. When the data set has an even number of data points, we take the average of the two middle numbers as the median.

Grouped Data

For grouped data, the data will be continuous and arranged in a frequency distribution. To find the median of this type of data we can use the following formula:

Median = L + [(n/2 - cf)/f] × h
L = lower limit of the median class
n = number of observations
cf = cumulative frequency of the class preceding the median class
f = frequency of the median class
h = class size

Mode

In a given data set, the number that appears the maximum number of times is called the mode.

Ungrouped Data

For ungrouped data, the observation with the highest frequency is the mode.

Example: Find the mode of the given data points: 10, 20, 10, 30, 40, 60, 10, 40.
Solution: In the given data set, 10 appears the maximum number of times, which means 10 is the mode of the given set of numbers.

Grouped Data

For grouped data, if the data is continuous then there is a formula to calculate the mode:

Mode = L + [(f_m - f_1)/(2f_m - f_1 - f_2)] × h
L = lower limit of the modal class
f_m = frequency of the modal class
f_1 = frequency of the class preceding the modal class
f_2 = frequency of the class succeeding the modal class
h = class size

Solved Examples

Example 1. The time taken by Shelby to reach her school each day is listed below in the table. What is the mean time taken to reach school?
Solution: As we know,
Mean = (sum of the data points)/(number of data points)
Mean = (29 + 25 + 20 + 30 + 11)/5 = 115/5 = 23
Thus, the mean time taken by Shelby to reach school is 23 minutes.

Example 2: A marketing company calculated the revenue earned per advertisement in 7 months and listed the data in a table given below. Determine the median of the given data.
Solution: To calculate the median, first we need to arrange the data in ascending order: 11, 20, 25, 29, 30, 31, 34. Now we need to check whether the given data set has an even or an odd number of data points. As we can observe, the data is given only for 7 months, and 7 is an odd number. Therefore, the median is the middle number, which is 29.
Alternative method: Median = ((n + 1)/2)th term = ((7 + 1)/2)th term = 4th term = 29

Example 3: Determine the median of the numbers: 10, 40, 50, 20, 30.
Solution: First, we’ll arrange the numbers in ascending order: 10, 20, 30, 40, 50. Then we’ll look for the middle number, the ((n + 1)/2)th = 3rd term, of the arranged numbers. The middle number, 30, is our median.

Example 4: Determine the median of the given grouped data.
Solution: First, let’s calculate the cumulative frequency of the given grouped data. The number of observations n = 3 + 7 + 5 + 10 + 15 = 40, so n/2 = 40/2 = 20. We look for the class whose cumulative frequency is just greater than n/2 = 20; that cumulative frequency is 25, so the median class is the class 18-24.
Median = L + [(n/2 - cf)/f] × h
This means we need to choose L, cf, f, and h from the class 18-24:
The lower limit of the class (L) = 18
Cumulative frequency (cf) of the class preceding the median class = 15
Frequency (f) = 10, class size (h) = 6 (since 24 - 18 = 6)
Median = 18 + ((20 - 15)/10) × 6 = 18 + 3 = 21
Hence, the median of the given grouped data is 21.

Example 5: Determine the mode of the given set of data.
Solution: Mode = L + [(f_m - f_1)/(2f_m - f_1 - f_2)] × h
The highest frequency (f_m) = 14, therefore the modal class is 30-40.
Lower limit of the modal class (L) = 30
Preceding frequency (f_1) = 9
Succeeding frequency (f_2) = 10
Class size (h) = 10
Mode = 30 + [(14 - 9)/(2(14) - 9 - 10)] × 10 = 30 + (5/9) × 10 = 30 + 5.56 = 35.56

Practice Problems

Q1. Determine the mode of the provided data: 11, 20, 11, 30, 40, 60, 11, 40, 40, 40, 11, 30, 40, 11, 20, 11
Answer: 11 (it appears six times, more often than any other value).

Frequently Asked Questions

Q1. When is the mean the most appropriate measure of central tendency?
Answer: The mean is often used as a measure of central tendency when the data set is approximately symmetrically distributed and does not have extreme outliers. It is sensitive to extreme values and can be influenced by outliers.

Q2. When is the median the most appropriate measure of central tendency?
Answer: The median is typically used as a measure of central tendency when the data set has outliers or is skewed. It is less affected by extreme values compared to the mean and provides a more robust estimate of the central value in such cases.

Q3. When is the mode the most appropriate measure of central tendency?
Answer: The mode is useful as a measure of central tendency when identifying the most frequently occurring value is important, such as in categorical data or data with a clear peak or mode.

Q4. Can a data set have more than one mode?
Answer: Yes, a data set can have more than one mode. In cases where two or more values occur with the same highest frequency, the data set is said to be bimodal or multimodal.

Q5. What does it mean if the mean, median, and mode are equal?
Answer: If the mean, median, and mode are equal, it suggests that the data set is approximately symmetrically distributed and does not have any extreme outliers. Q6. How are mean, median, and mode used in data analysis? Answer: Mean, median, and mode are essential tools in data analysis for summarizing and understanding the central tendency of a data set. They provide valuable insights into the typical or central value of the data and help in making informed decisions in various fields, including statistics, economics, finance, and social sciences. ByteLearn is a registered trademark of ByteLearn Inc. © 2024 ByteLearn Inc. All rights reserved.
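The ungrouped formulas and the grouped-median formula above translate directly into a few lines of Python. This is a minimal sketch using the standard-library statistics module and the numbers from the worked examples; the grouped_median helper name is mine, not part of the article:

```python
from statistics import mean, median, mode

# Ungrouped data taken from the worked examples above.
marks = [45, 50, 35, 25, 30]               # mean example  -> 37
revenue = [29, 25, 20, 30, 31, 11, 34]     # Example 2     -> median 29
points = [10, 20, 10, 30, 40, 60, 10, 40]  # mode example  -> 10
print(mean(marks), median(revenue), mode(points))  # 37 29 10

def grouped_median(L, n, cf, f, h):
    """Median = L + ((n/2 - cf)/f) * h for a grouped frequency table."""
    return L + ((n / 2 - cf) / f) * h

# Example 4: n = 40, median class 18-24 (L=18, cf=15, f=10, h=6).
print(grouped_median(L=18, n=40, cf=15, f=10, h=6))  # 21.0
```

Note that statistics.median sorts the data for you, so the "arrange in ascending order" step is handled automatically.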
3939
https://www.quora.com/What-is-the-sum-and-product-of-the-roots-of-a-cubic-equation
What is the sum and product of the roots of a cubic equation?

Harsh Muni Dwivedi · Former Student · Author has 925 answers and 1.4M answer views · Updated 3y

Let the cubic equation be Ax³ + Bx² + Cx + D = 0, and let its roots be (α, β, γ). Then

⟹ α + β + γ = −B/A ✓
⟹ αβ + βγ + γα = C/A ✓
⟹ αβγ = −D/A ✓

Jishnu Mukherjee · B.Tech from Indian Institute of Technology, Kharagpur (IIT KGP) (Expected 2026) · Author has 156 answers and 851.8K answer views · 6y

Let a cubic equation be ax³ + bx² + cx + d = 0, where a, b, c, d are constants and a ≠ 0. Let the roots of the equation be p, q and r. The sum of the roots is −b/a and the product of the roots is −d/a. This means that:

p + q + r = −b/a
pqr = −d/a

I can give you an example: what are the sum and product of the roots of the cubic equation x³ + 4x² − 10x − 18 = 0?
Answer: Sum = −b/a = −4/1 = −4. Product = −d/a = −(−18)/1 = 18.
If you want more help about the sums and products of roots of a polynomial you can refer to this link: Polynomials: Sums and Products of Roots. Glad to help you and thanks for the question.

Jishnu Mukherjee · B.Tech from Indian Institute of Technology, Kharagpur (IIT KGP) (Expected 2026) · Author has 156 answers and 851.8K answer views · 6y
Related: What is the formula for the sum and product of roots in a quadratic equation?
Let's assume that our quadratic equation is of the form ax² + bx + c = 0. Divide both sides by a, add (b/(2a))² to both sides to complete the square, and take the square root of both sides. Letting the two roots of the quadratic equation be α and β, the two values of x are:

α = (−b + √(b² − 4ac))/(2a)
β = (−b − √(b² − 4ac))/(2a)

Now according to your question, the sum of the roots is

α + β = (−b + √(b² − 4ac))/(2a) + (−b − √(b² − 4ac))/(2a)
= [−b + √(b² − 4ac) + (−b) − √(b² − 4ac)]/(2a)
= −2b/(2a)
⟹ α + β = −b/a

Now the product of the roots:

α·β = [(−b)² − (√(b² − 4ac))²]/(4a²)
= (b² − b² + 4ac)/(4a²)
= 4ac/(4a²)
= c/a

This is the answer: sum of roots = −b/a; product of roots = c/a. Hope this helps you. Thanks and don’t forget to upvote my answer if it is correct.

Ashmita Singh · M.Sc. Mathematics from Banaras Hindu University (Graduated 2020) · 3y

Let ax³ + bx² + cx + d be a three-degree polynomial in x with coefficients a (not equal to 0), b, c, d. Then
Sum of roots = −b/a
Sum of products of roots taken two at a time = c/a
Product of roots = −d/a
Ananya · Author has 62 answers and 270.8K answer views · 6y

A cubic polynomial has at most 3 zeros or roots. Let the cubic equation be ax³ + bx² + cx + d = 0. Then
Sum of roots = −b/a
Product of roots = −d/a

Vishal Chandratreya · Former Senior Engineer at Samsung Semiconductor India (2019–2021) · Author has 931 answers and 2.9M answer views · Updated 6y
Related: How do I find roots for a cubic equation?

There are several ways to solve a cubic equation. I shall try to give some examples.

Guess one root. If you successfully guess one root of the cubic equation, you can factorize the cubic polynomial using the Factor Theorem and then solve the resulting quadratic equation easily.

3x³ − 2x² − 9x − 4 = 0

On inspection, x = −1 is found to satisfy the equation. Now the cubic polynomial can be factorized:

3x³ − 2x² − 9x − 4 = 0
⟹ (x + 1)(3x² − 5x − 4) = 0

One root is −1. The rest are the roots of the following quadratic equation:

3x² − 5x − 4 = 0
⟹ x = [−(−5) ± √((−5)² − 4×3×(−4))]/(2×3)
⟹ x = (5 ± √(25 + 48))/6
⟹ x = (5 ± √73)/6
⟹ x ∈ {−1, (5 − √73)/6, (5 + √73)/6}

Some cubic equations to be solved by students are deliberately made to have one simple root which can be guessed.

Factorize the polynomial. With experience, some cubic equations can be easily factorized. You just need to have a keen eye.
x³ − 8x² + 4x + 48 = 0
⟹ x³ − 2x² − 6x² + 12x − 8x + 48 = 0
⟹ (x³ − 2x² − 8x) + (−6x² + 12x + 48) = 0
⟹ x(x² − 2x − 8) − 6(x² − 2x − 8) = 0
⟹ (x² − 2x − 8)(x − 6) = 0
⟹ (x − 4)(x + 2)(x − 6) = 0
⟹ x ∈ {−2, 4, 6}

Again, the people setting the questions probably tailor-made such cubic equations so that they would be easy to solve.

Apply the binary search principle. Locate any two points at which the cubic polynomial has opposite signs. Perform the binary search algorithm. This method will help obtain only one root at a time.

2x³ − 5x + 1 = 0

Let f(x) = 2x³ − 5x + 1. Then f(1) = −2 and f(2) = 7. By the Intermediate Value Theorem there lies a root of the cubic equation between these two numbers: ∃c ∈ (1, 2) such that f(c) = 0. Start building a sequence of possible values of c:

c₁ = (1 + 2)/2 = 1.5 ⟹ f(c₁) = 0.25 > 0 ⟹ c ∈ (1, c₁)
c₂ = (1 + c₁)/2 = 1.25 ⟹ f(c₂) = −1.34 < 0 ⟹ c ∈ (c₂, c₁)
c₃ = (c₁ + c₂)/2 = 1.375 ⟹ f(c₃) = −0.67 < 0 ⟹ c ∈ (c₃, c₁)
c₄ = (c₁ + c₃)/2 = 1.4375 ⟹ f(c₄) = −0.24 < 0 ⟹ c ∈ (c₄, c₁)
c₅ = (c₁ + c₄)/2 = 1.46875 ⟹ f(c₅) = −0.006897

Close enough to zero. A good calculator will tell you that the actual value of the root is c = 1.469617, which means that c₅ is off by only about 0.06%. You have to do this again for the other two roots.

Make use of the Newton-Raphson Method. I shall let Wikipedia do the talking here. They even have a beautiful animation. It is quite easily understood.

Use the cubic formula. Actually, don’t. Don’t ever use the cubic formula. Never, ever, ever. Ever.

ax³ + bx² + cx + d = 0

Writing Δ = 2b³ − 9abc + 27a²d and C± = ∛[(Δ ± √(Δ² + 4(3ac − b²)³))/2], the roots are

x₁ = (−b − C₊ − C₋)/(3a)
x₂ = (−b − ωC₊ − ω²C₋)/(3a)
x₃ = (−b − ω²C₊ − ωC₋)/(3a)

Here, ω and ω² are the complex cube roots of unity:

ω = (−1 + √3·i)/2
ω² = (−1 − √3·i)/2

(Or interchange them. It doesn’t matter.) Never use the cubic formula.
David Joyce · Professor Emeritus of Mathematics at Clark University · Upvoted by Shubhankar Datta, Master of Science Mathematics, Jadavpur University (2022) · Author has 9.9K answers and 68.4M answer views · 2y
Related: What is the formula for the sum and product of roots in a quadratic equation?

Take the special case where the quadratic equation is of the form x² + bx + c = 0. Suppose the two roots are r and s. Then the quadratic polynomial x² + bx + c is exactly (x − r)(x − s), which expands to x² − (r + s)x + rs. Therefore, the sum of the roots is −b, and the product of the roots is c.

Generalize that to the quadratic equation ax² + bx + c = 0. Divide the equation by a to reduce the problem to the previously solved one. It follows that the sum of the roots is −b/a, and the product of the roots is c/a.

This can be generalized to higher degree polynomial equations. If the polynomial is monic (that means the leading coefficient is 1), then the sum of the roots is always minus the coefficient of the term after the leading term, and the product of the roots is always ± the constant term; it’s + when the degree of the polynomial is even (as it was in the quadratic case) and it’s − when the degree is odd. Furthermore, the intermediate coefficients can be found as various symmetric combinations of the roots.
Martyn Hathaway · BSc in Mathematics, University of Southampton (Graduated 1986) · Author has 4.7K answers and 6.7M answer views · 7y
Related: What is the formula for the sum and product of roots in a quadratic equation?

Let the roots of a quadratic equation be p and q. Thus (x − p)(x − q) = 0. Multiplying this out, we have:

x² − px − qx + pq = 0

Rearranging this, we have:

x² − (p + q)x + pq = 0   [A]

Now, let’s assume that you are given the following quadratic equation: ax² + bx + c = 0. As this is a quadratic equation, a ≠ 0, which means that we can divide throughout by a to obtain:

x² + (b/a)x + (c/a) = 0

But, from [A], we know that x² − (p + q)x + pq = 0. This means that:

−(p + q) = b/a ⟹ p + q = −b/a
pq = c/a

The sum of the roots = p + q = −b/a. The product of the roots = pq = c/a.

Viktor T.
Toth · IT pro, part-time physicist -- patreon.com/vttoth · Upvoted by Priscilla Rolls, Masters Mathematics, The American University, Washington, DC (1969) and Pepito Moropo, B.S. / M.S. Mathematics · Author has 10.1K answers and 172.7M answer views · 8y
Related: Why must a cubic equation have 1 real root?

Let me show you first the plot of a quadratic equation with two real roots. Now let me show one with no real roots. See what is happening here? The parabola of the quadratic expression is now entirely above the x-axis; as it never intercepts the x-axis, there are no real roots. But now let me show you a cubic equation with three real roots. And now, a cubic with only one real root. See what is happening here? I can get rid of two roots by shifting the curve up or down. But because a cubic polynomial always goes from minus infinity to plus infinity, it must intercept the x-axis at least once. This is a general property of any odd polynomial, by the way. Even polynomials, on the other hand, can be moved entirely above (or below) the x-axis, so that they have no real roots.

Ved Prakash Sharma · Former Lecturer at Sbm Inter College, Rishikesh (1971–2007) · Author has 14.3K answers and 16.5M answer views · 7y

Let ax³ + bx² + cx + d = 0 be a cubic equation with roots p, q and r.
Sum of the roots: p + q + r = −b/a ……(1)
Product of the roots: p·q·r = −d/a ……(2)
Answer.

Gaurav Kumar · Former Mathematics Learner · Author has 536 answers and 1.7M answer views · 5y
Related: How can we find the nature of roots of a cubic equation without actually solving it?

Here x1 and x2 are the roots of the equation 3ax² + 2bx + c = 0 (where the derivative of the cubic vanishes). Similarly, all these cases are possible for a < 0.

Gary Russell · Former Professor at University of Iowa (1996–2025) · Author has 6K answers and 3.1M answer views · 1y
Related: How many roots can a cubic equation have? How can one determine the roots of a cubic equation?

You can appeal to the Fundamental Theorem of Algebra. Let’s assume that the polynomial has rational valued coefficients. Then, if the degree of the polynomial is n, there are exactly n roots. However, you need to be aware of the following facts:

The roots may or may not have distinct values. For example, f(x) = (x − 2)² = x² − 4x + 4 has two roots, both of which have the value 2. In contrast, f(x) = (x − 1)(x − 2) = x² − 3x + 2 has two roots, each with a different value (x = 1, x = 2).

Some or all of the roots of a polynomial may be complex numbers. If the coefficients of the polynomial are rational numbers, then complex roots come in pairs. If a + bi is a root, then so is a − bi, where i = √(−1).
All polynomials can be expressed as f(x) = a(x − R1)(x − R2) … (x − Rn), where the R’s are the roots and “a” is some number. That is, the behavior of a polynomial is determined by its roots.

For the cubic, there are n = 3 roots. However, because complex roots come in conjugate pairs, the number of complex roots must be an even number. This means that a cubic can have 1 real root, or 3 real roots. If you graph a cubic polynomial, you will find that it crosses the x-axis at least once.

FINDING THE ROOTS OF A CUBIC

We know that there are 3 roots, one of which MUST be a real number. Although there exists a general method for finding cubic roots, the formulas are complex and difficult to remember. In real world applications, a computer program like Wolfram Alpha should be used to find the roots. It’s easier and the results will be correct. However, for homework problems, or for exams, the usual process is to find one real root first. (More on this below.) Call this root t. If the cubic is f(x), then (by the factored form above) you can write f(x) = h(x)(x − t), where h(x) is a quadratic polynomial. Using polynomial division, compute h(x) = f(x)/(x − t). Since h(x) is a quadratic, you can use the Quadratic Formula (or equivalent methods) to find the final two roots r and s. At this point, you report the roots as x = r, x = s, x = t.

Since solving the quadratic equation is straightforward, the hard part is finding a real root x = t. Most people appeal to the Rational Root Theorem. For example, suppose you have the following cubic: f(x) = x³ − x² + x − 1. Then, if a rational root exists, it will be a factor of −1. This means you should try x = 1 and x = −1. If f(x) = 0, then (by definition), you would have a root. Note that f(−1) = −4 and f(1) = 0. Thus, x = 1 is a rational (real) root of the cubic. Using this fact, you would then compute h(x) = f(x)/(x − 1) = x² + 1. Since x² + 1 = (x − i)(x + i), the remaining two roots are i and −i.
Thus, the roots of the cubic are x = 1, x = i, x = −i.

FINAL NOTES

[A] The Rational Root Theorem only finds rational roots. A cubic polynomial must have a real root. However, the real root could be something like √2, which is not a rational number. Thus, the Rational Root Theorem approach can fail. On homework and exams, however, you will generally find a rational root to get you started.

[B] It is also a good idea to learn about the Vieta Rules for cubics and quadratics. Suppose you have a cubic in the form f(x) = x³ + bx² + cx + d. Then, if the roots are r, s, t, it can be proven that
r + s + t = −b
rst = −d
If you have a quadratic in the form g(x) = x² + ux + w, then, if the roots are r and s,
r + s = −u
rs = w
You can prove these facts using the factored form noted earlier. Vieta Rules can be useful in determining the h(x) quadratic if you know the cubic f(x) and also know one root (x = t).
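The Vieta relations quoted in several answers in this thread (sum = −b/a, pairwise sum = c/a, product = −d/a) are easy to sanity-check on a cubic with known roots. A minimal sketch for the monic case a = 1, using (x − 1)(x − 2)(x − 3) = x³ − 6x² + 11x − 6:

```python
# Monic cubic x^3 + b*x^2 + c*x + d with roots 1, 2, 3:
# expanding (x-1)(x-2)(x-3) gives b = -6, c = 11, d = -6.
roots = [1, 2, 3]
b, c, d = -6, 11, -6

# Sum of products of the roots taken two at a time.
pair_sum = sum(roots[i] * roots[j]
               for i in range(3) for j in range(i + 1, 3))
product = roots[0] * roots[1] * roots[2]

assert sum(roots) == -b   # r + s + t = -b
assert pair_sum == c      # rs + rt + st = c
assert product == -d      # rst = -d
print(sum(roots), pair_sum, product)  # 6 11 6
```

For a non-monic cubic, divide every coefficient by a first and the same three checks apply.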
3940
https://www.youtube.com/watch?v=6TejmGd2RIU
Complex Ion - Free Metal Ion Concentrations
Dr. K's Chemistry
Description: Calculating the free metal ion concentration in the presence of an excess of complexing agent. Posted: 12 Jul 2014

Transcript: Today we're going to go over the calculations involved in finding the free ion concentration in a complex ion reaction. We discussed complex ions before, and if you remember from lecture, the Kf values are usually pretty large, so that means it's a limiting-reactant type problem. This is similar to a titration problem where you're given an acid and a base and asked to calculate the pH along a titration curve: typically we assume it's a limiting-reactant kind of problem, and then we pick the appropriate equilibrium after we figure out whether there's acid left in an excess of base or base left in an excess of acid, depending on whether it's an acid titrated with a base or a base titrated with an acid. So: Kf is usually large. If you're given reactants, assume the reaction goes to completion. That means set up an ICE table, do a limiting-reactant calculation, take the final concentrations, plug them into the same equilibrium, and then calculate how much ion is left using Kf and the Kf expression.

Here's a typical problem: what's the silver ion concentration of a solution that's initially 0.030 M in silver nitrate and 0.10 M in sodium cyanide? How do you know what product is formed? Complex ions, as the name implies, have a charge. When we're doing precipitation problems — if this had been a precipitation of silver cyanide — you could use the charge of the silver ion and the charge of the cyanide to predict the product. Because complex ions have a charge, you can't depend on that to tell you what the formula is. So what we do is look up Kf, and with each Kf is given the formula for the complex ion.

Let's look at this problem in a little more detail; I'll use the equation to discuss the limitations of trying to calculate without a double ICE table. In this problem we have 0.030 M silver nitrate and 0.10 M sodium cyanide. I looked up Kf: it is 1 × 10^20. This is a Kf, so it's for the formation of the complex ion, and with that Kf value I find the formula for the complex ion: Ag(CN)2^-. Because it's a Kf, the complex is formed from silver and cyanide ions — when you're looking at the formulas, ignore the nitrate and the sodium; those are the spectator ions in the problem. So I can write (all species aqueous):

Ag+ + 2 CN- ⇌ Ag(CN)2^-

In this problem you would say: I have 0.030 M silver and 0.10 M cyanide. We'll assume the complex starts at zero and the reaction goes to the right, so the change line is -x, -2x, +x, and the equilibrium line is 0.030 - x, 0.10 - 2x, and x. The Kf expression for the complex ion is set equal to 1 × 10^20. Now if you solve this on a computer or a calculator, you will most likely find multiple roots to the equation, but the one that makes the most sense chemically is the one where x is approximately equal to 0.030. Why is this a problem? Because when you plug that back in, you're going to get zero for the silver concentration. It's not that the value is zero — the value is actually very low — and in these problems what we need to find is the concentration of that ion.

So, because we know x is going to be about 0.030 and because the K value is large, we treat this like a limiting-reactant problem. Silver looks like the limiting reactant, so let's just assume x = 0.030. That means the amount of cyanide ion left is going to be about 0.040 M, the complex ion concentration is 0.030 M, and the silver ion is effectively zero. Now we set up a second ICE table, writing the same reaction. A lot of people want to flip the reaction around and write what's called a Kd expression — the dissociation of the complex ion — but I find it far simpler to use the Kf expression. So here's my formation equation again, and now I take the values from the final line (it's a final line, not an equilibrium line, because we're doing a double ICE table) and plug them in: zero for the silver ion, 0.040 for the cyanide ion, and 0.030 for the complex ion. That's the initial line; then come the change and equilibrium lines. Because the silver concentration is zero, the reaction has to run backward, so the change line is +x, +2x, -x, and the equilibrium line is x, 0.040 + 2x, and 0.030 - x. Plug those into the equilibrium expression. Now assume x is small: we can ignore it in (0.040 + 2x) and in (0.030 - x), but not the x that stands alone. So we have 0.030 / (x · 0.040²) = 1 × 10^20, and rearranging so that x is on top, x = 0.030 / (1 × 10^20 × 0.040²). (By the way — my mistake earlier — that exponent is plus 20.) Doing that, I find x = 1.9 × 10^-19. That's the concentration of the silver ion, because that's what x stood for. Because this silver ion is not bound in the complex, we refer to it as the free ion concentration. The total ion concentration is actually the ion concentration we started with in the initial part of the problem — that is, there's silver ion in the complex and there's also silver ion as free ion. I hope that helps with the calculation of free ion concentration problems.
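The double-ICE-table arithmetic from the video can be checked with a few lines of Python; the numbers and the small-x approximation are taken directly from the worked problem.

```python
# Ag+ + 2 CN-  <=>  Ag(CN)2-,   Kf = 1e20
Kf = 1e20
ag0, cn0 = 0.030, 0.10          # initial concentrations, M

# Step 1: Kf is huge, so treat formation as going to completion.
# Ag+ is limiting: it is fully consumed, CN- drops by 2 * ag0.
complex_final = ag0             # 0.030 M of Ag(CN)2-
cn_final = cn0 - 2 * ag0        # 0.040 M of CN- left over

# Step 2: back-dissociation with the small-x approximation:
# Kf = complex / (x * cn^2)  =>  x = complex / (Kf * cn^2)
free_ag = complex_final / (Kf * cn_final**2)
print(f"[Ag+] = {free_ag:.2g} M")   # ~1.9e-19 M, matching the video
```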
3941
https://hero.epa.gov/hero/index.cfm/reference/details/reference_id/8387195
Quantitative distinction between detonation and afterburn energy deposition using pressure-time histories in enclosed explosions | Health & Environmental Research Online (HERO) | US EPA

HERO ID: 8387195
Reference Type: Meetings & Symposia
Title: Quantitative distinction between detonation and afterburn energy deposition using pressure-time histories in enclosed explosions
Author(s): Ames, RG; Drotar, JT; Silber, J; Sambrook, J
Year: 2006
Page Numbers: 253-262
Language: English

Abstract: Fuel-rich explosives (e.g. TNT or thermobarics) ignited in air will normally produce two primary reactions: a detonation (or deflagration) and an afterburn. The detonation reaction occurs as a result of the combustion of the fuel and oxidizer contained in the explosive material; it does not require ambient oxygen. The fuel that remains after the detonation is then available to react with ambient oxygen. This "afterburn" reaction is limited by diffusion and/or turbulent mixing and is, therefore, normally characterized by time scales that are several orders of magnitude greater than those associated with the detonation. This paper postulates that because of this difference in reaction time scales, the early-time blast pressures are a function of the detonation/deflagration energy alone, and provides a number of techniques that relate these early blast pressures to the energy released during the detonation process. It further builds on previous work to show that the total energy release (i.e. detonation plus afterburn) is a function only of the end-state quasi-static pressure when the explosion occurs in a closed chamber.
As such, the paper provides a technique by which the relative amounts of detonation and afterburn energy release can be estimated when detailed pressure-time histories are available for enclosed explosions.
3942
https://math.stackexchange.com/questions/2968707/why-is-the-dot-product-of-two-unit-vectors-equal-to-zero
geometry - Why is the dot product of two unit vectors equal to zero? - Mathematics Stack Exchange
Why is the dot product of two unit vectors equal to zero? [closed]

Asked 6 years, 11 months ago · Modified 5 years, 10 months ago · Viewed 2k times · score -4

Closed as off-topic: this question is missing context or other details, such as the asker's own thoughts on the problem and any attempts made to solve it.

Can someone explain why the scalar product î·ĵ = 0 and the cross product î×ĵ = k̂? Here we define î = (1,0,0), ĵ = (0,1,0), k̂ = (0,0,1). Thanks for your help.

Tags: geometry, vectors

edited Oct 24, 2018 at 7:00 by Stackman; asked Oct 24, 2018 at 5:05 by Monsta

Comments:

Angina Seng: See the definitions of "dot/cross" product. (Oct 24, 2018 at 5:09)

JMoravitz: As usual, the best place to go is the definitions. First, the definition of i, j, k is that they are the vectors (1,0,0), (0,1,0) and (0,0,1) respectively. Second, the definition of the dot product: (a1, a2, a3)·(b1, b2, b3) = a1*b1 + a2*b2 + a3*b3. Do you then see why (1,0,0)·(0,1,0) = 0? Now go on to look up the definition of the cross product, apply it to i, j, k, and see the result.
I'll leave the Google searching for that last one to you, to let you practice your google-fu and not have to rely on asking here. — JMoravitz (Oct 24, 2018 at 5:09)

JMoravitz: I'm voting to close this question as off-topic because it shows no effort at all and should be answerable by a simple Google search and a reading of the definitions. (Oct 24, 2018 at 5:10)

Monsta: @JMoravitz: Can you give an explanation from a geometrical perspective? (Oct 24, 2018 at 5:26)

user356774: @Monsta, welcome to the site. It's always a good idea to frame a question in MathJax, and to elaborate the question as much as possible, showing what efforts you have put in or stating clearly what exactly is unclear. If it is something related to interpretation or definition, mention what part of the definition or interpretation you are having trouble with. (Oct 24, 2018 at 6:22)

2 Answers

Answer 1 (score 4):

Let a and b be any two nonzero vectors in R^n. That is, a = (a1, a2, a3, …, an) and b = (b1, b2, b3, …, bn). Then by definition of the scalar product, we have

a·b = a1*b1 + a2*b2 + a3*b3 + … + an*bn.

In your case, with n = 3, we have i = (1,0,0) and j = (0,1,0), and their scalar product is given by

i·j = 1·0 + 0·1 + 0·0 = 0.

For the vector product, refer to the definition.

— answered Oct 24, 2018 at 5:22 by user402194

Answer 2 (score 2):
Think about the definition of the dot product: multiply the vectors component by component and add the results,

u·v = u_x*v_x + u_y*v_y + u_z*v_z

i.e.:

(2,5,6)·(1,5,6) = (2×1) + (5×5) + (6×6) = 2 + 25 + 36 = 63

Now, let's see what happens if we use unit vectors:

(1,0)·(0,1) = (1×0) + (0×1) = 0 + 0 = 0

Another way you can think of the dot product î·ĵ is: "How much of î is being projected onto ĵ?" That is, if you took î and flattened it onto ĵ, how much of the length of ĵ would be covered by this squashed version of î? In this case, since they are perpendicular, the piece of î projected onto ĵ would be zero.

— edited Nov 13, 2019 at 19:51; answered Oct 24, 2018 at 5:25 by matthew-e-brown

Comments:

Monsta: Wonderfully explained in a geometric way. Thanks. (Oct 24, 2018 at 5:29)

Monsta: Would you mind explaining for the cross product? (Oct 24, 2018 at 5:33)
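The componentwise definitions given in both answers are easy to check numerically; here is a minimal Python sketch (the helper names `dot` and `cross` are our own, not from any library):

```python
def dot(u, v):
    """Dot product: sum of componentwise products."""
    return sum(a * b for a, b in zip(u, v))

def cross(u, v):
    """Cross product of two 3-vectors."""
    return (u[1]*v[2] - u[2]*v[1],
            u[2]*v[0] - u[0]*v[2],
            u[0]*v[1] - u[1]*v[0])

i, j, k = (1, 0, 0), (0, 1, 0), (0, 0, 1)

print(dot(i, j))                  # 0 -- perpendicular, so no projection
print(cross(i, j) == k)           # True -- i x j = k
print(dot((2, 5, 6), (1, 5, 6)))  # 2 + 25 + 36 = 63
```

Note that the zero result is special to perpendicular vectors; the dot product of two unit vectors in general equals the cosine of the angle between them.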
3943
https://www.youtube.com/watch?v=qdrSp1yzFD4
Which of the following will give only one monochloro derivative? | 11 | ALKANES | CHEMISTRY
Doubtnut
Description: Which of the following will give only one monochloro derivative? Class: 11, Subject: CHEMISTRY, Chapter: ALKANES, Board: IIT JEE. Posted: 7 Nov 2021
Transcript: We have this question: which of the following will give only one monochloro derivative? We have four compounds; we carry out monochlorination on each and check which one gives only a single monochloro product.

The first compound is neopentane (2,2-dimethylpropane): five carbons in total, with one quaternary (4°) carbon in the middle carrying four methyl groups. On monochlorination, the chlorine can only replace a hydrogen on one of the four outer methyl groups — it cannot go on the central carbon, because all four of its valences are already occupied. Whichever methyl hydrogen the chlorine replaces, the same product forms, because all twelve hydrogens are equivalent. So only one monochloro product is obtained here: 1-chloro-2,2-dimethylpropane.

Next, consider 2-methylpentane. Here the hydrogens are not all equivalent, so several different monochloro products are possible. If the chlorine goes on a hydrogen of either of the two equivalent methyl groups at C-1, we get 1-chloro-2-methylpentane. If it goes on the 3° carbon (C-2), we get 2-chloro-2-methylpentane. Substitution at C-3 gives 3-chloro-2-methylpentane, at C-4 gives 4-chloro-2-methylpentane, and on the terminal methyl at the far end gives 1-chloro-4-methylpentane. So this compound gives several monochloro products, not one.

Now take 2-methylpropane (isobutane). Its nine methyl hydrogens are equivalent, and it also has one 3° hydrogen. Chlorination on any methyl group gives the same product, 1-chloro-2-methylpropane, and chlorination on the 3° carbon gives 2-chloro-2-methylpropane. So two monochloro products are obtained.

Finally, isopentane (2-methylbutane): chlorination on the C-1 methyls gives 1-chloro-2-methylbutane, on the 3° carbon (C-2) gives 2-chloro-2-methylbutane, on C-3 gives 2-chloro-3-methylbutane, and on the terminal C-4 methyl gives 1-chloro-3-methylbutane. So four monochloro products in total are obtained here.

The question asked which compound gives only one monochloro derivative, and from this it is clear that the answer is the first option, neopentane.
3944
https://arxiv.org/pdf/2212.14481
arXiv:2212.14481v1 [math.CO] 29 Dec 2022 Chebyshev’s Sum Inequality and the Zagreb Indices Inequality Hanjo Täubig ∗ December 29, 2022 Abstract In a recent article, Nadeem and Siddique used Chebyshev’s sum inequality to establish the Zagreb indices inequality M1/n ≤ M2/m for undirected graphs in the case where the degree sequence (di) and the degree-sum sequence (Si) are similarly ordered. We show that this is actually not a completely new result and we discuss several related results that also cover similar inequalities for directed graphs, as well as sum-symmetric matrices and Eulerian directed graphs. 1 Introduction 1.1 Notation We consider n × n matrices, denoted by A, with entries aij . In particular, we look at the total sum of entries denoted by sum( A), as well as the row and column sums of A, which are denoted by ri(A) and cj (A), respectively. If A is clear from the context, we abbreviate this by ri and cj . For the matrix power Ap, p ∈ N, we define the following abbreviations: a[p] ij := ( Ap)ij , r[p] i := ri(Ap),and c[p] j := cj (Ap). We assume that A0 = I is the identity matrix. As a special case, we consider adjacency matrices of directed and undirected (multi-)graphs G = ( V, E ) with n := |V | vertices and m := |E| edges. The in-degree and the out-degree of a vertex v ∈ V are denoted by din (v) and dout (v), respectively. In undirected graphs, the degree of a vertex v ∈ V is denoted by d(v). A walk in a multigraph G = ( V, E ) is an alternating sequence (v0, e 1, v 1, . . . , v k−1, e k, v k) of vertices vi ∈ V and edges ei ∈ E where each edge ei of the walk must connect vertex vi−1 to vertex vi in G, that is, ei = ( vi−1, v i) for all i ∈ { 1, . . . , k }. Vertices and edges can be used repeatedly in the same walk. If the multigraph has no parallel edges, then the walks could also be specified by the sequence of vertices (v0, v 1, . . . , v k−1, v k) without the edges. The length of a walk is the number of edge traversals. 
That means the walk (v_0, ..., v_k), consisting of k + 1 vertices and k edges, is a walk of length k. We call it a k-step walk. Let s_k(v) denote the number of k-step walks starting at vertex v ∈ V, and let e_k(v) denote the number of k-step walks ending at v. If G is undirected, then we have w_k(v) := s_k(v) = e_k(v). The total number of k-step walks is denoted by w_k. For walks of length 0, we have s_0(v) = e_0(v) = 1 for each vertex v, and w_0 = n. For walks of length 1, we have s_1(v) = d_out(v) and e_1(v) = d_in(v), i.e., w_1(v) = d(v) for undirected graphs. This implies w_1 = Σ_{v∈V} d_out(v) = Σ_{v∈V} d_in(v) = m for directed graphs. For undirected graphs, we have w_1 = Σ_{v∈V} d(v) = 2m by the handshake lemma.

1.2 Chebyshev's Sum Inequality

Two n-tuples (a_1, ..., a_n) and (b_1, ..., b_n) of real numbers are called similarly ordered if (a_i − a_k)(b_i − b_k) ≥ 0 for all i, k ∈ [n]. They are called conversely ordered (also oppositely ordered) if (a_i − a_k)(b_i − b_k) ≤ 0 for all i, k = 1, ..., n. The term similarly ordered is equivalent to the requirement that there exists a permutation that transforms both tuples into nonincreasing sequences. In the same line, two tuples are conversely ordered if and only if there is a permutation that transforms one of the tuples into a nonincreasing and the other tuple into a nondecreasing sequence. Below, we will use the same notation for n-dimensional real vectors a, b ∈ ℝ^n.

The following inequality was published by Chebyshev [3, 13].

Theorem 1 (Chebyshev). Let f, g : [a, b] → ℝ be integrable functions, both non-decreasing or both non-increasing. Furthermore, let p : [a, b] → ℝ_{≥0} be an integrable nonnegative function. Then

∫_a^b p(x) dx · ∫_a^b p(x)f(x)g(x) dx ≥ ∫_a^b p(x)f(x) dx · ∫_a^b p(x)g(x) dx.

If one of the functions f or g is non-decreasing and the other non-increasing, then the sign of inequality is reversed.

* Computer Science Dept., TU München, D-85748 Garching, Germany, taeubig@in.tum.de
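A quick numeric check of Chebyshev's inequality in its discrete (sum) form, using small sample vectors chosen for illustration (the values below are assumptions, not from the paper):

```python
import numpy as np

# Discrete Chebyshev sum inequality:
# (sum p_i a_i)(sum p_i b_i) <= (sum p_i)(sum p_i a_i b_i)
# for similarly ordered a, b and nonnegative weights p.
a = np.array([1.0, 2.0, 5.0])
b = np.array([0.5, 0.7, 3.0])   # similarly ordered with a
p = np.array([1.0, 2.0, 1.0])   # nonnegative weights

lhs = (p @ a) * (p @ b)                 # 10 * 4.9  = 49
rhs = p.sum() * (p @ (a * b))           # 4  * 18.3 = 73.2
assert lhs <= rhs

# The inequality is reversed for conversely ordered vectors:
b_rev = b[::-1]
assert (p @ a) * (p @ b_rev) >= p.sum() * (p @ (a * b_rev))
```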
The discrete analog is the following statement.

Corollary. For similarly ordered vectors a ∈ ℝ^n and b ∈ ℝ^n and any nonnegative vector p ∈ ℝ^n_{≥0}, we have

(Σ_{i=1}^n p_i a_i)(Σ_{i=1}^n p_i b_i) ≤ (Σ_{i=1}^n p_i)(Σ_{i=1}^n p_i a_i b_i).

The inequality is reversed if a and b are conversely ordered. If p ∈ ℝ^n_{≥0} is nonzero, this corresponds to the following relation between weighted arithmetic means:

(Σ_{i=1}^n p_i a_i / Σ_{i=1}^n p_i) · (Σ_{i=1}^n p_i b_i / Σ_{i=1}^n p_i) ≤ Σ_{i=1}^n p_i a_i b_i / Σ_{i=1}^n p_i.

A direct consequence is the following. Given a, b ∈ ℝ^n and r ∈ ℝ, suppose that a_i^r and b_i^r are defined within ℝ for all i ∈ [n] and that the corresponding tuples (a_1^r, ..., a_n^r) and (b_1^r, ..., b_n^r) are similarly ordered. Then we have

(Σ_{i=1}^n p_i a_i^r / Σ_{i=1}^n p_i) · (Σ_{i=1}^n p_i b_i^r / Σ_{i=1}^n p_i) ≤ Σ_{i=1}^n p_i (a_i b_i)^r / Σ_{i=1}^n p_i.

One particular case where such inequalities can be obtained occurs for arbitrary exponents r and nonnegative vectors a and b that are similarly or conversely ordered. Another special case is for odd integer exponents r (or their reciprocals) and arbitrary real vectors a and b.

Corollary. If the vectors a ∈ ℝ^n and b ∈ ℝ^n are similarly ordered, then

(Σ_{i=1}^n a_i)(Σ_{i=1}^n b_i) ≤ n Σ_{i=1}^n a_i b_i.

The inequality is reversed if a and b are conversely ordered. For n > 0, this is the same as the following relation between arithmetic means:

(Σ_{i=1}^n a_i / n) · (Σ_{i=1}^n b_i / n) ≤ Σ_{i=1}^n a_i b_i / n.

All those variants are called Chebyshev's (sum) inequality.

2 Zagreb Indices and Walks

2.1 The Zagreb Indices Inequality

The first and the second Zagreb [group] index for an undirected graph G = (V, E) are defined as¹

M_1 = Σ_{v∈V} d_v² and M_2 = Σ_{{x,y}∈E} d_x d_y.

Assume that V = {v_1, ..., v_n} and that the vertex degrees are abbreviated by d_i = d(v_i). Recently, an article was published by Nadeem and Siddique that contains the following statement concerning the degree-sums S_i := Σ_{v_j ∈ N(v_i)} d(v_j), where N(v_i) := {v_j ∈ V | {v_i, v_j} ∈ E} is the set of neighbors of v_i.

Theorem 2.
Let G be a connected graph having degree sequence (d_i), degree-sum sequence (S_i), order n, and size m. If (d_i) and (S_i) are similarly ordered, then

M_1(G)/n ≤ M_2(G)/m.

Equality is attained if and only if G is a regular or a complete bipartite graph.

They also remark, for the part with the sufficient condition, that the Zagreb indices inequality holds for both connected and non-connected graphs. That means, this result uses Chebyshev's sum inequality to establish the Zagreb indices inequality in the case where the sequences (d_i) and (S_i) are similarly ordered.

2.2 The Number of Walks Form

For a long time during the research on topological indices in chemical graph theory, it has been overlooked that two of the most popular descriptors were in fact just special cases of measures defined by the number of walks. Only after decades was it observed by Nikolić et al. and Braun et al. that M_1 = w_2 (which is also implicitly contained in the paper by Gutman et al., but not explicitly stated there) and that M_2 = w_3/2. Together with n = w_0 and m = w_1/2, the Zagreb indices inequality can be rephrased as

w_2/w_0 ≤ w_3/w_1.

In the same line, we observe that the degree-sum S_i equals the number of 2-step walks starting at v_i, i.e., S_i = w_2(v_i). And as already noted, we have d_i = w_1(v_i). In this respect, Theorem 2 can also be expressed as a statement about walks:

Theorem 3. Let G be a graph with 1-step walk number sequence (w_1(v_i)) and 2-step walk number sequence (w_2(v_i)). If (w_1(v_i)) and (w_2(v_i)) are similarly ordered, then

w_2/w_0 ≤ w_3/w_1.

Actually, this is not a new result. It is a special case of a more general theorem by Täubig [16, 18]; see the corollary of Theorem 4 in the next section. Note also that a related observation corresponding to the Zagreb indices inequality has already been made by London in the more general case of entry sums of nonnegative symmetric matrices.
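The identities M_1 = w_2 and M_2 = w_3/2, and the walk form of the inequality, can be verified numerically via powers of the adjacency matrix (w_k = 1ᵀ A^k 1). The 4-vertex graph below (a path with one chord) is an assumed example, not taken from the paper:

```python
import numpy as np

# Assumed example graph: path 0-1-2-3 plus the chord {0, 2}.
edges = [(0, 1), (1, 2), (2, 3), (0, 2)]
n = 4
A = np.zeros((n, n), dtype=int)
for u, v in edges:
    A[u, v] = A[v, u] = 1

deg = A.sum(axis=0)                       # degrees [2, 2, 3, 1]
M1 = int((deg ** 2).sum())                # first Zagreb index
M2 = int(sum(deg[u] * deg[v] for u, v in edges))  # second Zagreb index

ones = np.ones(n, dtype=int)
w = lambda k: int(ones @ np.linalg.matrix_power(A, k) @ ones)  # total k-step walks

assert M1 == w(2)            # M1 = w2 (here 18)
assert 2 * M2 == w(3)        # M2 = w3 / 2 (here 19 = 38/2)
assert w(2) * w(1) <= w(0) * w(3)   # w2/w0 <= w3/w1: 18*8 <= 4*38
```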
The Zagreb indices inequality has been shown to hold for several special graph classes, such as trees [21, 1], chemical graphs, or subdivision graphs [10, 17], while it does not hold for connected graphs in general [11, 8] or for bipartite graphs, not even for forests (see Chapter 5 of or ).

¹The first explicit definition of those indices appeared in the paper by Gutman et al. Erroneously, it referred to the earlier article by Gutman and Trinajstić as the point where these measures were introduced. Actually, this is not true. This historical development was clarified recently by Gutman.

3 Applying Chebyshev's Sum Inequality to Directed Graphs

In order to obtain inequalities for the number of walks in directed graphs and for entry sums in nonsymmetric matrices, it is sometimes possible to apply Chebyshev's sum inequality (see Theorem 1). In those cases we are able to obtain statements by elementary proofs without using any eigenvalues.

Theorem 4. For any matrix A such that the column sums of A^k and the row sums of A^ℓ (i.e., c^[k] and r^[ℓ]) are similarly ordered, we have

sum(A^k) · sum(A^ℓ) ≤ n · sum(A^{k+ℓ}).

The inequality is reversed if c^[k] and r^[ℓ] are conversely ordered.

Proof. For every n × n matrix A, we have

sum(A^{k+ℓ}) = 1_n^T (A^k A^ℓ) 1_n = (1_n^T A^k)(A^ℓ 1_n) = Σ_{i∈[n]} c^[k]_i · r^[ℓ]_i.

The inequality is now a direct consequence of Chebyshev's inequality (see Theorem 1):

sum(A^k) · sum(A^ℓ) = (Σ_{i=1}^n c^[k]_i)(Σ_{i=1}^n r^[ℓ]_i) ≤ n Σ_{i=1}^n c^[k]_i r^[ℓ]_i = n · sum(A^{k+ℓ}).

Note that for all Hermitian matrices A and integers k, ℓ where k + ℓ is an even number, Theorem 4 holds in general without the ordering assumption. Those inequalities and related results for real symmetric matrices and walks in undirected graphs were discussed in and . For the special case of adjacency matrices, Theorem 4 translates to the following statement about the number of walks in digraphs.

Corollary.
For every directed graph G = (V, E) where the vectors of walk numbers e_k(v) and s_ℓ(v), v ∈ V, are similarly ordered, we have

w_k · w_ℓ ≤ n · w_{k+ℓ}.

Obviously, this inequality is applicable to undirected graphs if w_k(v_i) and w_ℓ(v_i), i ∈ [n], are similarly ordered sequences (here, we have w_k(v_i) = s_k(v_i) = e_k(v_i) for all i, k ∈ ℕ). In particular, this is interesting if k + ℓ is an odd number.

Inverted inequality: According to Chebyshev's sum inequality (see Theorem 1), the inequality is inverted if e_k(v_i) and s_ℓ(v_i) are conversely ordered. For instance, this would be applicable for k = ℓ = 1 if for each vertex either the in-degree or the out-degree is equal to 1 and the other one is greater or equal to 1. Another example would be the class of graphs where all vertices have the same sum of in-degree and out-degree (that is, the same total degree).

Sum-symmetric matrices: From Theorem 4, we obtain a special case if the row sums and the column sums of a matrix are similarly ordered. This happens, for example, in the case of sum-symmetric matrices, i.e., if r_i(A) = c_i(A) for all i ∈ [n].

Corollary. For any sum-symmetric matrix A, we have

sum(A)² ≤ n · sum(A²).

Note that this corollary also follows from Cauchy's inequality:

sum(A)² = (Σ_{i=1}^n r_i)² ≤ n Σ_{i=1}^n r_i² = n Σ_{i=1}^n r_i c_i = n · sum(A²).

Eulerian directed graphs: We can apply this result to directed graphs as follows. If there is a vertex ordering which is monotonically increasing with respect to the in- and out-degrees, then the graph obeys the inequality n w_2 ≥ w_1². For instance, this is true if the in-degree of each vertex equals its out-degree.

Corollary. For every Eulerian directed graph (∀v ∈ V : d_in(v) = d_out(v)), we have

w_1² ≤ n · w_2, or w_1/w_0 ≤ w_2/w_1.

References

[1] V. Andova, N. Cohen, and R. Škrekovski, A note on Zagreb indices inequality for trees and unicyclic graphs, Ars Math. Contemp. 5 (2012) 73–76.

[2] J. Braun, A. Kerber, M. Meringer, and C.
Rücker, Similarity of Molecular Descriptors: The Equivalence of Zagreb Indices and Walk Counts, MATCH Commun. Math. Comput. Chem. 54 (2005) 163–176.

[3] P. L. Chebyshev (П. Л. Чебышёв), Объ одномъ рядѣ, доставляющемъ предѣльныя величины интеграловъ при разложеніи подъинтегральной функціи на множители, Записки Императорской Академіи наукъ (Санктъ-Петербургъ) XLVII (1883).

[4] I. Gutman, On the origin of two degree-based topological indices, Bulletin de l'Académie serbe des sciences et des arts, Classe des Sciences mathématiques et naturelles, Sciences mathématiques 39 (2014) 39–52.

[5] I. Gutman, C. Rücker, and G. Rücker, On Walks in Molecular Graphs, J. Chem. Inf. Comput. Sci. 41 (2001) 739–745.

[6] I. Gutman, B. Ruščić, N. Trinajstić, and C. F. Wilcox, Jr., Graph theory and molecular orbitals. XII. Acyclic polyenes, J. Chem. Phys. 62 (1975) 3399–3405.

[7] I. Gutman and N. Trinajstić, Graph theory and molecular orbitals. Total π-electron energy of alternant hydrocarbons, Chem. Phys. Lett. 17 (1972) 535–538.

[8] P. Hansen and D. Vukičević, Comparing the Zagreb indices, Croat. Chem. Acta 80 (2007) 165–168.

[9] G. H. Hardy, J. E. Littlewood, and G. Pólya, Inequalities, Cambridge University Press, 2nd edition, 1959.

[10] A. Ilić and D. Stevanović, On comparing Zagreb indices, MATCH Commun. Math. Comput. Chem. 62 (2009) 681–687.

[11] J. C. Lagarias, J. E. Mazo, L. A. Shepp, and B. D. McKay, An inequality for walks in a graph, SIAM Rev. 26 (1984) 580–582.

[12] D. London, Two inequalities in nonnegative symmetric matrices, Pacific J. Math. 16 (1966) 515–536.

[13] D. S. Mitrinović and P. M. Vasić, History, variations and generalisations of the Čebyšev inequality and the question of some priorities, Univ. Beograd. Publ. Elektrotehn. Fak. Ser. Mat. Fiz. 461 (1974) 1–30.

[14] I. Nadeem and S. Siddique, More on the Zagreb indices inequality, MATCH Commun. Math. Comput. Chem. 87 (2022) 115–123.

[15] S. Nikolić, G. Kovačević, A. Miličević, and N. Trinajstić, The Zagreb Indices 30 Years After, Croat. Chem. Acta 76 (2003) 113–124.
[16] H. Täubig, Inequalities for matrix powers and the number of walks in graphs, Habilitation thesis, Computer Science Dept., TU München (2015).

[17] H. Täubig, Inequalities for the Number of Walks in Subdivision Graphs, MATCH Commun. Math. Comput. Chem. 76 (2016) 61–68.

[18] H. Täubig, Matrix Inequalities for Iterative Systems, CRC Press / Taylor & Francis Group, 2017.

[19] H. Täubig and J. Weihmann, Matrix power inequalities and the number of walks in graphs, Discrete Appl. Math. 176 (2014) 122–129.

[20] H. Täubig, J. Weihmann, S. Kosub, R. Hemmecke, and E. W. Mayr, Inequalities for the number of walks in graphs, Algorithmica 66 (2013) 804–828.

[21] D. Vukičević and A. Graovac, Comparing Zagreb M1 and M2 indices for acyclic molecules, MATCH Commun. Math. Comput. Chem. 57 (2007) 587–590.
3945
https://www.rarediseaseadvisor.com/disease-info-pages/friedreich-ataxia-prognosis/
Friedreich Ataxia (FA) Prognosis Friedreich ataxia (FA) is the most common autosomal recessive form of ataxia.1 The disease occurs due to an unstable guanine-adenine-adenine (GAA) trinucleotide repeat expansion (more than 120 repeats) in the first intron of the frataxin (FXN) gene on chromosome 9 in both alleles.2 Progressive gait and limb ataxia, the absence of lower limb reflexes, abnormal extensor plantar responses, dysarthria, a loss or reduction in vibration perception, and skeletal abnormalities are all manifestations of FA.2 General Prognosis of FA The rate of disease progression varies between individuals. Usually, patients become wheelchair bound 10 to 20 years after the onset of initial symptoms. In the final phases of the illness, people may become entirely incapacitated. FA can reduce life expectancy. Nevertheless, some FA sufferers who have less severe symptoms live into their 60s or later. The most common cause of mortality is heart disease.3 Quality of life is severely impacted by this disease. Depending on the age of onset, the prevalence of complications, such as diabetes and cardiomyopathy, and other factors, the average lifespan is around 40 years.4 Factors Affecting FA Prognosis Overall, the prognosis of FA is poor. Factors that affect FA prognosis include the age at disease onset, severity of symptoms, treatment, and presence of comorbidities or complications.5 FA Prognosis Associated With Age of Onset The age of onset differs considerably among patients with FA. The majority of individuals (75% to 85%) receive an FA diagnosis before the age of 25 years. Early onset (prior to 5 years of age) is very uncommon. Atypical presentations, with disease onset occurring after age 25, occur in about 25% of patients.5 Although life expectancy varies substantially depending on the severity of symptoms, most patients with FA live until 40 to 50 years of age. 
More than 95% of patients with FA become wheelchair users by 45 years of age.5,6,7 The age of onset and prognosis are linked to the number of trinucleotide repeats: the younger the age of onset, the more trinucleotide repeats the patient is likely to have.5 People with early-onset FA typically experience more severe symptoms, more rapid disease progression, and earlier death.5,8-12 The age at disease onset correlates inversely with the number of GAA repeats; every additional 100 GAA repeats is associated with disease onset about 2.3 years earlier.9 A lower number of repeats (less than 300) is linked to later disease development, less severe symptoms, and a better prognosis. Patients with late-onset FA and very late-onset FA typically have milder symptoms and a longer life expectancy.5

Read more about FA epidemiology

Effects of Symptom Severity on FA Prognosis

Lower levels of frataxin are associated with longer GAA repeats, and the length of the shorter of the two GAA repeat alleles largely determines frataxin levels. As a result, more GAA repeats on the shorter allele are linked to more severe cardiomyopathy, greater neurological severity, and earlier symptom onset, resulting in a worse disease prognosis.1 As the illness progresses, ataxia spreads to the arms and trunk. Titubation is observed during standing and sitting, and it can also affect the trunk. Patients develop action and intention tremors as well as choreiform movements as the ataxia spreads to the arms. They may also experience tremors in the face and buccal area.
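The cited correlation (every 100 additional GAA repeats associated with onset roughly 2.3 years earlier) is only an average trend, but the arithmetic it implies can be sketched; the function name and the example repeat counts below are illustrative assumptions, not a clinical model:

```python
# Illustrative arithmetic only: expected difference in onset age between two
# patients whose GAA repeat counts differ, using the cited ~2.3 years per
# 100 repeats. Not a predictive or diagnostic tool.
def expected_onset_difference(repeats_a, repeats_b, years_per_100=2.3):
    """Years by which onset for repeats_b is expected to precede repeats_a."""
    return (repeats_b - repeats_a) / 100 * years_per_100

# e.g. 800 vs 300 repeats -> onset expected about 11.5 years earlier on average
assert abs(expected_onset_difference(300, 800) - 11.5) < 1e-9
```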
Patients will eventually lose the ability to move independently, necessitating ambulatory devices such as a walker and then a wheelchair, until they finally become bedridden.6

Read more about FA signs and symptoms

Effects of Treatment on FA Prognosis

While FA cannot be cured, effective symptomatic treatments (such as those for diabetes and heart conditions, as well as orthopedic interventions) help people live longer by lessening the severity of complications and other risks associated with the disease.5

Read more about FA treatment

FA Prognosis Associated With Comorbidities and Complications

Complications and related comorbidities, such as cardiac dysfunction, endocrine abnormalities, and gastrointestinal complications, can affect the prognosis of FA.

Cardiac Dysfunction

Cardiac dysfunction is the most common cause of death, accounting for mortality in 59% of patients with FA.5 Cardiac disorders in FA include concentric left ventricular hypertrophy, which causes arrhythmias and heart failure, leading to death. Heart disease can be asymptomatic or present with symptoms such as dyspnea or palpitations. Early age of disease onset and long GAA repeat length are predictors of cardiac severity, including worse left ventricular hypertrophy, left ventricular function, and left ventricular mass, and of mortality, with the majority of cardiac-related deaths occurring before the age of 40 years.
The disease duration in patients with FA who die from cardiac causes is typically 10 years or less, and a disease duration of more than 20 years considerably lowers the probability of dying from cardiac causes.13

Read more about FA risk factors

Endocrine Abnormalities

About 10% of patients with FA develop diabetes, and around 20% develop glucose intolerance.14 Evidence shows that an earlier age of FA onset and a higher number of GAA repeats in the FXN gene increase the risk of developing associated diabetes.15

Read more about FA comorbidities

Other Complications

Other complications that affect disease prognosis include scoliosis, loss of ambulation, and difficulties in carbohydrate digestion.5

Read more about FA complications

References

Reviewed by Kyle Habet, MD, on 1/19/2023.

Harshi Dhingra is a licensed medical doctor with specialization in Pathology. She is currently employed as faculty in a medical school with a tertiary care hospital and research center in India. Dr. Dhingra has over a decade of experience in diagnostic, clinical, research, and teaching work, and has written several publications and citations in indexed peer-reviewed journals. She holds medical degrees for MBBS and an MD in Pathology.
3946
https://math.stackexchange.com/questions/2104984/find-the-period-of-a-function-which-satisfies-fx6fx-6-fx-for-every-rea
Find the period of a function which satisfies $f(x+6)+f(x-6)=f(x)$ for every real value $x$ - Mathematics Stack Exchange
Find the period of a function which satisfies f(x+6) + f(x−6) = f(x) for every real value x

Asked Jan 19, 2017. Modified Jan 19, 2017. Viewed 317 times.

For every x ∈ ℝ, f(x+6) + f(x−6) = f(x) is satisfied. What may be the period of f(x)? I tried writing out several values of f, but I couldn't get something like f(x+T) = f(x).

[functions]

Comment (Brian Tung): Let f(0) = a, f(6) = a + b. Now extrapolate and see what comes out. You might find it instructive to start with a = 0.

3 Answers

Answer (Mark Bennet): Here is a big hint: apply the identity you have been given to f(x+6) to obtain f(x+6) = f(x+12) + f(x), and then ...

Comment (matbaz): Thank you, I tried it but couldn't get the answer before, but now it's ok.
Answer (Simply Beautiful Art): It follows then that

f(x) = f(x+6) + f(x−6) = f(x+12) + f(x) + f(x−6)
⟹ 0 = f(x+12) + f(x−6)
⟹ f(x+12) = −f(x−6)
⟹ f(x+18) = −f(x)
⟹ f(x+36) = −f(x+18) = f(x)

Hence f(x+36) = f(x).

Answer (Bumblebee): If f(x+6) + f(x−6) = f(x), then f(x+12) + f(x) = f(x+6), and therefore f(x+12) = −f(x−6) = −(−f(x−24)). Hence you have f(x) = f(x−36).
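The period 36 found above can be sanity-checked with an explicit solution: since cos(π/3) = 1/2, the sum-to-product identity gives cos(π(x+6)/18) + cos(π(x−6)/18) = 2·cos(πx/18)·cos(π/3) = cos(πx/18), so f(x) = cos(πx/18) satisfies the functional equation and has period 36. A quick numeric check:

```python
import math

# f(x) = cos(pi*x/18) satisfies f(x+6) + f(x-6) = f(x) and has period 36;
# it also satisfies f(x+18) = -f(x), matching the derivation above.
f = lambda x: math.cos(math.pi * x / 18)

for x in [0.0, 1.3, 7.5, -4.2]:
    assert abs(f(x + 6) + f(x - 6) - f(x)) < 1e-12   # functional equation
    assert abs(f(x + 18) + f(x)) < 1e-12             # f(x+18) = -f(x)
    assert abs(f(x + 36) - f(x)) < 1e-12             # period 36
```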
3947
https://clinmedjournals.org/articles/cmrcr/cmrcr-1-004.php?jid=cmrcr
ClinMed International Library | Clinical Medical Reviews and Case Reports

Clin Med Rev Case Rep, ISSN: 2378-3656
DOI: 10.23937/2378-3656/1410004

Anti-Citrullinated Protein/Peptide Antibody Assay, Rheumatoid Factor, or Both as Shifted Test in Diagnostic and Prognostic Evaluation in Patients with Rheumatoid Arthritis

Dejan Spasovski 1, Tatjana Sotirova 2, Svetlana Krstevska-Balkanov 2, Maja Slaninka-Micevska 3, Trajan Balkanov 3, Sonja Alabakovska 4, and Sonja Genadieva-Stavric 2

1 Department of Rheumatology, University Clinical Centre, Skopje, Republic of Macedonia
2 University Clinic of Haematology, Cyril and Methodius University, Skopje, Republic of Macedonia
3 Department of Preclinical Pharmacology, University Clinical Centre, Skopje, Republic of Macedonia
4 Department of Biochemistry, University Clinical Centre, Skopje, Republic of Macedonia

Corresponding author: Dejan Spasovski, Department of Rheumatology, University Clinical Centre, Skopje, Republic of Macedonia, E-mail: drspasovski@yahoo.co.uk

Clin Med Rev Case Rep, CMRCR-1-004, (Volume 1, Issue 1), Original Article; ISSN: 2378-3656

Received: August 22, 2014 | Accepted: October 22, 2014 | Published: October 27, 2014

Citation: Spasovski D, Sotirova T, Balkanov SK, Micevska MS, Balkanov T, et al.
(2014) Anti-Citrullinated Protein/Peptide Antibody Assay, Rheumatoid Factor or Both as Shifted Test in Diagnostic and Prognostic Evaluation in Patients with Rheumatoid Arthritis. Clin Med Rev Case Rep 1:004. 10.23937/2378-3656/1410004
Copyright: © 2014 Spasovski D, et al. This is an open-access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.

Abstract

Aim: The aim of this study was to compare the diagnostic values of laboratory variables and to present a quantitative evaluation of the anti-citrullinated protein/peptide antibody (ACPA, or anti-CCP 2, anti-cyclic citrullinated peptide) second-generation antibody assay with reference to sensitivity, specificity, the predictive values of the positive and negative tests, and the precision of the test, for ACPA antibodies, rheumatoid factor, C-reactive protein and the DAS 28 index, in the early diagnosis of untreated rheumatoid arthritis.

Materials and methods: 70 participants (35 untreated patients with rheumatoid arthritis and 35 healthy controls) took part in the study. Their serum was examined using the ELISA technology of DIA-STATTM Anti-CCP (Axis-Shield Diagnostics). Rheumatoid factor was examined with an agglutination test (Latex RF test).

Results: We found the presence of ACPA antibodies (test sensitivity 65.71%) in 23 of the 35 examined patients with rheumatoid arthritis, while rheumatoid factor appeared in 17 patients (test sensitivity 48.57%). Twelve patients were ACPA and rheumatoid factor positive, 11 were ACPA positive but rheumatoid factor negative, and five were ACPA negative and rheumatoid factor positive. Of the 17 rheumatoid factor positive patients, 12 were also ACPA positive. Of the 18 rheumatoid factor negative patients, 11 were ACPA positive.
In the healthy control group, 1 participant was anti-CCP 2 positive, while 2 were rheumatoid factor positive.

Conclusion: ACPA antibodies have higher sensitivity and specificity than rheumatoid factor in rheumatoid arthritis.

Keywords: Rheumatoid arthritis, Rheumatoid factor, ACPA antibody

Introduction

Rheumatoid Arthritis (RA) is an autoimmune disease, multifactorial in origin, characterised by inflammation of the membrane lining the joints. The disease spreads from small to large joints, with the greatest damage in the early phase. The diagnosis of RA is based on clinical, radiological and immunological features. The most frequent serological test is the measurement of Rheumatoid Factor (RF), which the American College of Rheumatology includes among its classification criteria for RA. The most common class is IgM, found in 60-80% of RA patients. RF is not specific for RA, as it is often present in healthy individuals and in patients with other autoimmune diseases and chronic infections. 30% of patients with SLE are RF positive (with no evidence of RA). Despite its low specificity, a positive RF is considered an important predictor of outcome in RA. The antiperinuclear factor (APF) and antikeratin antibodies (AKA) are considered highly specific for RA; they were detected by indirect immunofluorescence using human buccal epithelium and rat oesophagus substrates. The lack of availability of suitable buccal cell donors has limited the use of APF as a routine laboratory test. The antigen of both these antibodies has been identified as epidermal filaggrin, an intermediate filament-associated protein involved in the cornification of the epidermis [5]. Profilaggrin, present in the keratohyalin granules of human buccal mucosa cells, is proteolytically cleaved into filaggrin subunits during cell differentiation. The protein is dephosphorylated and some arginine residues are converted to citrulline by the enzyme peptidylarginine deiminase (PAD).
Autoantibodies reactive with linear synthetic peptides containing the unusual amino acid citrulline were present in 76% of RA sera, with a specificity for RA of 96%. The antibodies in patients with RA that recognized the citrulline-containing epitopes were predominantly of the IgG class and of relatively high affinity. A subsequent paper reported that an ELISA test based on a Cyclic Citrullinated Peptide (CCP) showed superior performance characteristics to one based on the linear version in the detection of antibodies in RA. In principle, most citrullinated proteins/peptides are recognized by autoantibodies in RA sera, although with different sensitivities and specificities. These findings suggest an important role of citrullinated antigens in the diagnosis of RA. The sensitivity of the anti-CCP 2 test varies between 64% and 74% in different populations, while the specificity varies between 90% and 99%.

Material and Methods

The diagnosis of RA was established on the basis of the revised diagnostic criteria for the classification of rheumatoid arthritis suggested in 1987 by the American Rheumatism Association (ARA). To be diagnosed with RA, a patient must fulfil at least four out of seven criteria; criteria one to four must have been present for at least six weeks. 70 participants were included in the study: 35 patients with newly diagnosed, untreated RA (28 females, 7 males) and 35 individuals as a healthy control group (18 females, 17 males), aged 18-65 years. The average age was 56.68 years (± 6.79) (40-65 years) in the RA group and 46.2 years (± 12.49) (29-65 years) in the healthy control group. The average duration of the disease was 43.97 months (± 45.23), in the interval of 1-168 months. All the participants included in the study denied a medical history of renal disease. Patients with a disease or condition which could directly or indirectly influence the results were excluded from the study: 1.
Patients with SLE, Sjogren syndrome, mixed connective tissue disease, vasculitis, other autoimmune disease, or age < 18 years. 2. Patients treated with antibiotics or salicylates in the six months before the beginning of the study. 3. Patients who, together with these medicines, took medicines from the basic line of therapy. 4. Patients with a previous medical history of disease of the spleen or thyroid gland, liver damage, renal, haematological, cardiovascular, neurological or lung impairment, arterial hypertension, gouty arthritis, urinary infections, or AIDS. 5. Patients with diabetes mellitus, acute infections, malignant neoplasm, or febrile conditions. 6. Patients treated with antihypertensive, antidiabetic or cardiac therapy. 7. Patients hypersensitive to any of the medicines or their components. 8. Patients with a previous history of blood transfusion, or overweight patients. 9. Patients whose baseline (time 0) results showed glycaemia, an increased level of degradation products such as creatinine in serum and urine or urea in serum, or a disorder of the haematological and enzymatic status. All patients took part in this study voluntarily, so the ethical criteria were fulfilled.

Clinical evaluation of disease activity

The clinical evaluation was performed by a subspecialist in this field. The disease activity was evaluated using the DAS 28 index (Disease Activity Score, DAS 28). The index is a mathematical formula that yields a single composite quantitative score, comprising the number of tender joints on palpation (maximum 28), the number of swollen joints (maximum 28), the Westergren Erythrocyte Sedimentation Rate (ESR), the patient's global assessment of disease activity (0-100 mm Visual Analogue Scale, VAS) and morning stiffness (minutes). The DAS 28 index ranges from 0 to 10, and a score under 3.2 ranks the disease as low activity.
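The paper describes the inputs to the DAS 28 score but does not print the formula itself. As a sketch, the standard four-variable DAS 28 (ESR) formula (an assumption taken from the usual EULAR definition, not from this text) can be computed as follows, with the 3.2 low-activity cutoff the authors use:

```python
import math

def das28_esr(tender28, swollen28, esr_mm_h, patient_global_vas):
    # Standard DAS 28 (ESR) formula (assumed; not printed in the paper):
    # tender/swollen joint counts out of 28, Westergren ESR in mm/h,
    # patient global assessment on a 0-100 mm VAS.
    return (0.56 * math.sqrt(tender28)
            + 0.28 * math.sqrt(swollen28)
            + 0.70 * math.log(esr_mm_h)
            + 0.014 * patient_global_vas)

def activity_band(score):
    # The paper ranks disease with a score under 3.2 as low activity.
    return "low" if score < 3.2 else "moderate/high"
```

For example, das28_esr(4, 2, 20, 30) evaluates to about 4.03, which activity_band classifies as moderate/high.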
Laboratory assessment

Several laboratory variables were measured for a clinical assessment of the underlying disease: Complete Blood Count (CBC) and differential, acute-phase reactants (RF, CRP, anti-CCP 2), Alkaline Phosphatase (AP), Aspartate Aminotransferase (AST), Alanine Aminotransferase (ALT), Creatine Kinase (CK), Lactate Dehydrogenase (LDH), and serum urea and creatinine. The DIA-STATTM Anti-CCP (Axis-Shield Diagnostics) test is a semi-quantitative/qualitative Enzyme-Linked Immunosorbent Assay (ELISA) for the detection of IgG-class autoantibodies specific to a synthetic Cyclic Citrullinated Peptide (CCP) containing modified arginine residues. The test provides an additional tool in the diagnosis of patients with RA. The absorbance value (optical density ratio) for the positive and negative controls and for each sample was calculated. The recommended interpretation of the absorbance ratio is:
< 0.95: Negative
0.95 to 1.0: Borderline (repeat testing recommended)
> 1.0: Positive
Reference values are: under 1.26 U/ml ACPA in serum. The agglutination test (Latex CRP test) (BioSystems S.A. Reagents & Instruments, Costa Brava 30, Barcelona, Spain) was used for the determination of CRP. Reference values are: under 6 mg/L CRP in serum. RF was detected with the agglutination test (Latex RF test) (BioSystems S.A. Reagents & Instruments, Costa Brava 30, Barcelona, Spain). Reference values are: under 8 mg/L RF in serum. For the determination of ESR we used the Westergren method; normal values are 7-8 mm for males and 11-16 mm for females.

Statistical analysis

The Student's t-test was used for testing the significance of the difference between two arithmetic means, comparing the mean values of the numerical parameters between the two groups. The Wilcoxon matched test was used for independent samples. Sensitivity, specificity, and the predictive values of the positive and negative tests were determined for the examined variables.
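The absorbance-ratio cutoffs quoted above for the DIA-STATTM Anti-CCP kit amount to a three-band classifier. The sketch below is ours, not the manufacturer's: the function name and the handling of exact boundary values are assumptions, since the text quotes only the three bands.

```python
def interpret_anti_ccp(absorbance_ratio):
    # Map a DIA-STAT Anti-CCP absorbance ratio (sample OD / cutoff OD)
    # to the interpretation bands quoted in the text. Boundary handling
    # at exactly 0.95 and 1.0 is an assumption.
    if absorbance_ratio < 0.95:
        return "negative"
    if absorbance_ratio <= 1.0:
        return "borderline - repeat testing recommended"
    return "positive"
```

For example, a ratio of 0.97 falls in the borderline band, while 1.4 is reported positive.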
A P value below 0.05 was taken as statistically significant. Data processing was done with the statistical package Statistica 7.0.

Results

Out of 35 patients with RA, RF was present in 17 patients (48.57%), while 23 patients (65.71%) showed presence of the ACPA antibody; 12 patients (34.28%) were ACPA and RF positive, 11 patients (31.42%) were ACPA positive and RF negative, while 5 patients (14.28%) were ACPA negative and RF positive. Of the 18 RF negative patients, 11 (61.11%) were ACPA positive. Of the 12 ACPA negative RA patients, 5 (41.66%) were RF positive. Of the 35 examined patients with RA, the sensitivity of ACPA was 65.71%, while the sensitivity of RF was 48.57%. Of the 17 RF positive RA patients, the ACPA antibody was present in 12, a sensitivity of 70.58%. Of the 18 RF negative RA patients, ACPA was present in 11, a sensitivity of 61.11%. In the healthy control group, 2 participants (5.71%) were RF positive, while 1 (2.85%) was ACPA positive (Table 1).

Table 1: Anti-CCP 2 antibody and RF in the RA and healthy control groups.

Diagnostic performance of the ACPA antibody in patients with RA

The sensitivity, specificity, predictive values of the positive and negative tests, and precision of the ACPA antibody and RF in RA are shown in Table 2. ACPA antibodies showed better diagnostic performance than RF (sensitivity 65.71% vs. 48.57%, specificity 97.14% vs. 94.28%) in the detection of RA.

Table 2: Diagnostic performance of the ACPA antibody and RF in rheumatoid arthritis.

Correlation between the ACPA antibody and the DAS 28 index of disease activity

Out of 35 patients with RA, DAS 28 > 3.2 was found in 28 patients (80%). In the 17 RF seropositive patients, DAS 28 > 3.2 was found in 15 (88.23%). Among these 15 patients with DAS 28 > 3.2, 10 were ACPA positive (66.66%), with an ACPA titre (mean ± SD) of 2.23 ± 0.61 (range 1.28-3.0).
In the 18 RF seronegative patients, DAS 28 > 3.2 was found in 13 (72.22%). Among these 13 patients with DAS 28 > 3.2, 9 were ACPA positive (69.23%), with an ACPA titre (mean ± SD) of 1.92 ± 0.45 (range 1.3-2.6). RF seropositive patients had a higher titre of the ACPA antibody than RF seronegative patients (Table 1) (1.87 ± 0.77 (range 0.92-3.0) vs. 1.56 ± 0.59 (range 0.93-2.6)), and a higher DAS 28 index (5.04 ± 1.33 (range 2.47-6.83) vs. 4.56 ± 1.76 (range 1.85-7.03)). Between these two groups there was no statistically significant difference in the ACPA antibody (p = 0.266). Although ACPA positive patients with DAS 28 > 3.2 were almost equally represented among seropositive and seronegative patients (10 vs. 9 patients; 66.66% vs. 69.23%), the titre of ACPA was higher in the 10 RF seropositive patients with DAS 28 > 3.2 than in the RF seronegative patients with DAS 28 > 3.2 (2.23 ± 0.61 vs. 1.92 ± 0.45). Between these two groups there was no statistically significant difference (p = 0.374260) (Figure 1). The situation was almost the same for the DAS 28 index in the 9 RF seronegative, ACPA positive patients (5.69 ± 1.37, range 3.31-7.03) compared with the 10 RF seropositive, ACPA positive patients (5.63 ± 1.01, range 4.17-6.83). There was no statistically significant difference in the DAS 28 index between RF seropositive and seronegative patients (p = 0.379375), nor between the two groups of DAS 28 > 3.2, ACPA positive, RF seropositive and seronegative patients (p = 0.905696) (Figure 2).

Figure 1: Distribution of the ACPA antibody.
Figure 2: Distribution of the DAS 28 index of disease activity.

A statistically significant difference was found using the Wilcoxon matched test between ACPA in the RA and healthy control groups for p < 0.05 (p = 0.000002). Statistically significant correlations were found using the Wilcoxon matched test between ACPA in RA and DAS 28, RF, CRP, ESR and morning stiffness in the same group for p < 0.05: anti-CCP 2 vs. DAS 28, p = 0.000000; ACPA vs. RF, p = 0.018345; ACPA vs. CRP, p = 0.040620; anti-CCP 2 vs.
morning stiffness, p = 0.000032; ACPA vs. ESR, p = 0.000000.

Discussion

The sensitivity of the first generation anti-CCP antibody is reported to be approximately 68% (45-80%) and the specificity 98% (96-100%) [9]. The reported sensitivity of the second generation anti-CCP 2 antibody is approximately 64-74%, with a specificity of 90-99% [11-16,31]. The advantages of the anti-CCP 2 test can be seen in the early phase of arthritis [33]. Our findings of a sensitivity of 65.71% and a specificity of 97.14% are similar to these studies. In RF negative RA patients, the sensitivity was still a useful 61.11%. This moderate sensitivity and high specificity allow the ACPA antibody to be included as a classification criterion in RA. Although the DAS 28 index, which is not only a laboratory variable but also a clinical index for the evaluation of disease, has higher sensitivity (80%) and specificity (100%), the ACPA antibody, as an isolated laboratory variable, dominated with its performance in the early diagnosis of undifferentiated RA. However, we have to note that the results obtained in this study are below the values given by the producer of DIA-STATTM Anti-CCP (Axis-Shield Diagnostics) (sensitivity for anti-CCP 2 79%, specificity 100%). The data obtained for the ACPA antibody were nevertheless higher than those from tests by other examiners [12,31]. It is known that the keratohyalin bodies present in human buccal mucosa cells contain filaggrin, a protein that is recognized by the APF- and AKA-specific antibodies present in RA patients. These antibodies are detectable by indirect immunofluorescence techniques, but they have never become part of the diagnostic repertoire of clinical laboratories because of difficulties in the availability and storage of the antigen substrates, as well as objective difficulties in interpreting the fluorescence patterns.
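The headline sensitivity and specificity figures follow directly from the 2 x 2 counts reported in the Results (23/35 ACPA-positive RA patients and 1/35 positive controls; 17/35 and 2/35 for RF). A short arithmetic check (the function name is ours, not the paper's):

```python
def diagnostic_performance(tp, fn, fp, tn):
    # Sensitivity, specificity, predictive values and accuracy
    # from 2 x 2 contingency counts (true/false positives/negatives).
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "ppv": tp / (tp + fp),
        "npv": tn / (tn + fn),
        "accuracy": (tp + tn) / (tp + fn + fp + tn),
    }

# Counts reported in the Results section (35 RA patients, 35 controls):
acpa = diagnostic_performance(tp=23, fn=12, fp=1, tn=34)
rf = diagnostic_performance(tp=17, fn=18, fp=2, tn=33)
```

This reproduces the reported ACPA sensitivity of 23/35 = 65.71% and specificity of 34/35 = 97.14%, against 48.57% and roughly 94.29% for RF, matching Table 2.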
The recent development of synthetic peptides containing citrulline, an amino acid present in the filaggrin molecule and produced after its deimination, has enabled the development of an ELISA test. From preliminary data obtained during experimental trials, this test appears to have the same high specificity as APF and AKA and is able to eliminate the standardization problems related to immunofluorescence procedures. In this study, we evaluated the diagnostic accuracy of this new ELISA test, which is now commercially available.
The efficacy of anti-cyclic citrullinated peptide (anti-CCP) antibody detection in the early diagnosis of RA was shown by Fernandez-Suarez A, et al. [31], who compared three commercially available Enzyme-Linked Immunosorbent Assay (ELISA) kits used for the detection of such antibodies. The presence of anti-CCP antibodies was analysed in the sera of 78 newly diagnosed patients. A group of 50 healthy controls was also included in the study. None of them had previously been treated. After a 1-year follow-up, the diagnosis of RA was confirmed in 53 patients. The ELISA kits used in the study were IMMUNOSCAN RA (Euro-Diagnostica AB), QUANTA Lite CCP IgG ELISA (INOVA Diagnostic) and DIA-STAT Anti-CCP (Axis-Shield Diagnostics). The sensitivities were 52.8%, 58.5% and 52.8%, respectively, and the specificity was 100% for all three kits. Anti-CCP antibodies detected the presence of RA in 26% of RF negative patients. Combining anti-CCP antibodies with the presence of RF gave a sensitivity of up to 67%, with specificity ranging between 94 and 97%. It was shown that anti-CCP antibodies had high specificity for the diagnosis of RA. There was no difference in diagnostic accuracy among the three analysed ELISAs. The presence of anti-CCP antibodies in patients with suspected RA was investigated by Us D, et al. [34]. They evaluated the combination of these autoantibodies with some other serologic markers such as IgM-rheumatoid factor (IgM-RF), CRP and Antinuclear Antibodies (ANAs). The concentrations of RF and CRP were determined by quantitative immunonephelometry, the titres of ANAs by indirect immunofluorescence, and the presence of anti-CCP by a commercial semi-quantitative micro-ELISA method. 88 patients with clinically suspected RA were analysed, as well as 42 sex- and age-matched healthy blood donors. High levels of IgM-RF and CRP were found in 48 (54.5%) and 49 (55.7%) patients, respectively, while 47 (53.4%) and 25 (28.4%) patients were found positive for ANAs and anti-CCP, respectively.
Of the 48 RF positive patients, 25 were also positive for anti-CCP, and the distribution rates of the markers in the 25 anti-CCP positive patients were as follows: 100% for RF, 84% for CRP and 52% for ANA. The sensitivity of the anti-CCP ELISA was 52.1% and the specificity was 100%, when evaluated according to RF positivity as the main serologic marker of RA. To explain the low sensitivity, it has to be taken into consideration that anti-CCP antibodies are a heterogeneous group of antibodies directed against different epitopes on the citrulline molecule, that each patient's serum contains different subsets of antibodies, and that the synthetic peptide used in this assay represents a relatively small set of antigenic determinants that do not entirely encompass the antigenic determinants present on the as yet unknown antigenic molecule in the joint. ACPA and RF in RA patients were also evaluated in terms of the duration of the disease. In patients with early arthritis the correlation with anti-CCP was highly significant, indicating that this assay may be useful even in the early phase of the disease. This is important because an early diagnosis of RA could greatly modify treatment decisions, suggesting the use of more aggressive drugs that can delay the progression of joint damage and thus substantially change the natural history of the disease. We can conclude that the ACPA antibody assay is a very valuable test for the diagnosis of RA. This ELISA test overcomes many of the problems of the APF and AKA tests, such as quantification of the results and standardization of the assay. Its low sensitivity does not allow its use as a screening test, but its high specificity, especially in the presence of high concentrations, allows it to become one of the most useful serologic tests for the diagnosis of RA. When associated with RF determination, its specificity rises up to 100%, making it helpful in the differential diagnosis of RA and other rheumatic diseases.
This test may be very influential in treatment decision strategy in patients with recent onset of arthritis. Anti-CCP 2 antibodies have higher sensitivity and specificity than RF in RA. Anti-CCP 2 test is used in everyday clinical practice for the diagnosis of early undifferentiated RA. References 1. Gough AK, Lilley J, Eyre S, Holder RL, Emery P (1994) Generalised bone loss in patients with early rheumatoid arthritis. Lancet 344: 23-27. 2. Smolen JS, Butcher B, Fritzler MJ, Gordon T, Hardin J, et al. (1997) Reference sera for antinuclear antibodies. II. Further definition of antibody specificities in international antinuclear antibody reference sera by immunofluorescence and western blotting. Arthritis Rheum 40: 413-418. 3. Barland P, Lipstein E (1996) Selection and use of laboratory tests in the rheumatic diseases. Am J Med 100: 16S-23S. 4. Nakamura RM (2000) Progress in the use of biochemical and biological markers for evaluation of rheumatoid arthritis. J Clin Lab Anal 14: 305-313. 5. Simon M, Girbal E, Sebbag M, Gomes-Daudrix V, Vincent C, et al. (1993) The cytokeratin filament-aggregating protein filaggrin is the target of the so-called "antikeratin antibodies," autoantibodies specific for rheumatoid arthritis. J Clin Invest 92: 1387-1393. 6. Sebbag M, Simon M, Vincent C, Masson-Bessiere C, Girbal E, et al. (1995) The antiperinuclear factor and the so-called antikeratin antibodies are the same rheumatoid arthritis-specific autoantibodies. J Clin Invest 95: 2672-2679. 7. Girbal-Neuhauser E, Durieux JJ, Arnaud M, Dalbon P, Sebbag M, et al. (1999) The epitopes targeted by the rheumatoid arthritis-associated antifilaggrin autoantibodies are posttranslationally generated on various sites of (pro)filaggrin by deimination of arginine residues. J Immunol 162: 585-594. 8. 
Schellekens GA, de Jong BA, van den Hoogen FH, van de Putte LB, van Venrooij WJ (1998) Citrulline is an essential constituent of antigenic determinants recognized by rheumatoid arthritis-specific autoantibodies. J Clin Invest 101: 273-281. 9. Schellekens GA, Visser H, de Jong BA, van den Hoogen FH, Hazes JM, et al. (2000) The diagnostic properties of rheumatoid arthritis antibodies recognizing a cyclic citrullinated peptide. Arthritis Rheum 43: 155-163. 10. van Boekel MA, Vossenaar ER, van den Hoogen FH, van Venrooij WJ (2002) Autoantibody systems in rheumatoid arthritis: specificity, sensitivity and diagnostic value. Arthritis Res 4: 87-93. 11. Lee DM, Schur PH (2003) Clinical utility of the anti-CCP assay in patients with rheumatic diseases. Ann Rheum Dis 62: 870-874. 12. Suzuki K, Sawada T, Murakami A, Matsui T, Tohma S, et al. (2003) High diagnostic performance of ELISA detection of antibodies to citrullinated antigens in rheumatoid arthritis. Scand J Rheumatol 32: 197-204. 13. Dubucquoi S, Solau-Gervais E, Lefranc D, Marguerie L, Sibilia J, et al. (2004) Evaluation of anti-citrullinated filaggrin antibodies as hallmarks for the diagnosis of rheumatic diseases. Ann Rheum Dis 63: 415-419. 14. Kastbom A, Strandberg G, Lindroos A, Skogh T (2004) Anti-CCP antibody test predicts the disease course during 3 years in early rheumatoid arthritis (the Swedish TIRA project). Ann Rheum Dis 63: 1085-1089. 15. Vallbracht I, Rieber J, Oppermann M, Forger F, Siebert U, et al. (2004) Diagnostic and clinical value of anti-cyclic citrullinated peptide antibodies compared with rheumatoid factor isotypes in rheumatoid arthritis. Ann Rheum Dis 63: 1079-1084. 16. Zendman AJ, van Venrooij WJ, Pruijn GJ (2006) Use and significance of anti-CCP autoantibodies in rheumatoid arthritis. Rheumatology (Oxford) 45: 20-25. 17. Arnett FC, Edworthy SM, Bloch DA, McShane DJ, Fries JF, et al. 
(1988) The American Rheumatism Association 1987 revised criteria for the classification of rheumatoid arthritis. Arthritis Rheum 31: 315-324. 18. van Gestel AM, Prevoo ML, van 't Hof MA, van Rijswijk MH, van de Putte LB, et al. (1996) Development and validation of the European League Against Rheumatism response criteria for rheumatoid arthritis. Comparison with the preliminary American College of Rheumatology and the World Health Organization/International League Against Rheumatism criteria. Arthritis Rheum 39: 34-40. 19. Prevoo ML, van 't Hof MA, Kuper HH, van Leeuwen MA, van de Putte LB, et al. (1995) Modified disease activity scores that include twenty-eight-joint counts. Development and validation in a prospective longitudinal study of patients with rheumatoid arthritis. Arthritis Rheum 38: 44-48. 20. Balsa A, Carmona L, Gonzalez-Alvaro I, Belmonte MA, Tena X, et al. (2004) Value of Disease Activity Score 28 (DAS28) and DAS28-3 compared to American College of Rheumatology-defined remission in rheumatoid arthritis. J Rheumatol 31: 40-46. 21. Prevoo ML, van Gestel AM, van 't Hof MA, van Rijswijk MH, van de Putte LB, et al. (1996) Remission in a prospective study of patients with rheumatoid arthritis. American Rheumatism Association preliminary remission criteria in relation to the disease activity score. Br J Rheumatol 35: 1101-1105. 22. Friedman RB, Young DS, Beatty ES (1978) Automated monitoring of drug-test interactions. Clin Pharmacol Ther 24: 16-21. 23. Turgeon ML, Mosby JA (1996) Immunology and serology in laboratory medicine, 2nd edition 2: 485-489. 24. Singer JM, Plotz CM, Pader E, Elster SK (1957) The latex-fixation test. III. Agglutination test for C-reactive protein and comparison with the capillary precipitin method. Am J Clin Pathol 28: 611-617. 25. Hokama Y, Nakamura RM (1987) C-Reactive protein: current status and future perspectives. J Clin Anal 1: 15-27. 26.
Young DS, Thomas DW, Friedman RB, Pestaner LC (1972) Effects of drugs on clinical laboratory tests. Clin Chem 18: 1041-1303. 27. Plotz CM, Singer JM (1956) The latex fixation test. I. Application to the serologic diagnosis of rheumatoid arthritis. Am J Med 21: 888-892. 28. Shmerling RH, Delbanco TL (1991) The rheumatoid factor: an analysis of clinical utility. Am J Med 91: 528-534. 29. Sager D, Wernick RM, Davey MP (1992) Assays for rheumatoid factor: a review of their utility and limitations in clinical practice. Lab Med 23: 15-18. 30. Burtis CA, Ashwood ER (1999) Quality management. Tietz textbook of clinical chemistry, 3rd edition, 1095-1124. WB Saunders Co. 31. Fernandez-Suarez A, Reneses S, Wichmann I, Criado R, Nunez A (2005) Efficacy of three ELISA measurements of anti-cyclic citrullinated peptide antibodies in the early diagnosis of rheumatoid arthritis. Clin Chem Lab Med 43: 1234-1239. 32. De Rycke L, Peene I, Hoffman IE, Kruithof E, Union A, et al. (2004) Rheumatoid factor and anticitrullinated protein antibodies in rheumatoid arthritis: diagnostic value, associations with radiological progression rate and extra-articular manifestations. Ann Rheum Dis 63: 1587-1593. 33. Van Gaalen FA, Linn-Rasker SP, Van Venrooij WJ, de Jong BA, Breedveld FC, et al. (2004) Autoantibodies to cyclic citrullinated peptides predict progression to rheumatoid arthritis in patients with undifferentiated arthritis: a prospective cohort study. Arthritis Rheum 50: 709-715. 34. Us D, Gulmez D, Hascelik G (2003) [Cyclic citrullinated peptide antibodies (anti-CCP) together with some other parameters used for serologic diagnosis of rheumatoid arthritis]. Mikrobiyol Bul 37: 163-170. 35. van Jaarsveld CH, ter Borg EJ, Jacobs JW, Schellekens GA, Gmelig-Meyling FH, et al. (1999) The prognostic value of the antiperinuclear factor, anti-citrullinated peptide antibodies and rheumatoid factor in early rheumatoid arthritis. Clin Exp Rheumatol 17: 689-697.
3948
https://www.merriam-webster.com/dictionary/route
Synonyms of route
route 1 of 2 noun
1 a : a traveled way : highway (the main route north)
b : a means of access : channel (the route to social mobility —T. F. O'Dea)
2 : a line of travel : course
3 a : an established or selected course of travel or action
b : an assigned territory to be systematically covered (a newspaper route)
route 2 of 2 verb
routed; routing transitive verb
1 : to send by a selected route : direct (was routed along the scenic shore road)
2 : to divert in a specified direction
Synonyms
Noun: road, highway, thoroughfare, street, freeway, expressway, roadway, carriageway [British], boulevard, artery, arterial
Verb: guide, steer, lead
Examples of route in a Sentence
Noun: We didn't know what route to take. | an escape route in case of fire | a major bird migratory route | You could take a different route and still arrive at the same conclusion. | Take Route 2 into town. | We live on a rural route.
Verb: Traffic was routed around the accident. | When the doctor is out, his calls are routed to his answering service.
Recent Examples on the Web
Noun: For the second weekend in a row, a detour route and security checkpoints will be in place for the area surrounding the vice president's home. —David Ferrara, The Enquirer, 29 Aug. 2025 | Her work explores historical wine routes, elevated and sustainable tourism and the people shaping the wine industry. —Layne Randolph, Forbes.com, 29 Aug. 2025
Verb: Credit card rewards, routing rules, and data rights all face similar risks. —Aj Dhaliwal, Forbes.com, 14 Aug. 2025 | Prompts from the company's 700 million weekly users will be automatically routed to the appropriate version of the model, balancing cost and latency with capability. —semafor.com, 11 Aug.
2025 See All Example Sentences for route Word History Etymology Noun Middle English rute, route, borrowed from Anglo-French rute, going back to Vulgar Latin rupta (short for rupta via, literally, "broken way, forced passage," after Latin viam rumpere "to force a passage"), from feminine of ruptus, past participle of rumpere "to break, burst," going back to Indo-European ru-n-p-, nasal present formation from the base reu̯p- "break, tear" — more at reave Verb derivative of route entry 1 First Known Use Noun 13th century, in the meaning defined at sense 1a Verb 1832, in the meaning defined at sense 1 Time Traveler The first known use of route was in the 13th century See more words from the same century Phrases Containing route en route go/take the traditional route paper route route step rural route star route the scenic route trade route Rhymes for route beaut boot bruit brut brute chute coot cute flute fruit hoot jute See All Rhymes for route Browse Nearby Words rout route route agent See all Nearby Words Articles Related to route ### Is it 'on route' or 'en route'? You're on your way to good spelling ### Unmixing the Mix-up of 'Root,' 'Route,'... Homographs and homophones are at the root of it all. Cite this Entry “Route.” Merriam-Webster.com Dictionary, Merriam-Webster, Accessed 1 Sep. 2025. Copy Citation Share Kids Definition route 1 of 2 noun ˈrüt ˈrau̇t 1 : road sense 2a, highway U.S. Route 66 2 : a course of action toward a goal the best route to peace 3 a : an established, selected, or assigned course of travel explorers looking for a new route to the Indies air routes to Europe b : a territory to be gone over regularly a newspaper route route 2 of 2 verb routed; routing : to send or transport by a certain route route heavy traffic around the city Medical Definition route noun ˈrüt ˈrau̇t : a method of transmitting a disease or of administering a remedy the airborne route of … infection—M. L. 
Furcolow More from Merriam-Webster on route Nglish: Translation of route for Spanish Speakers Britannica.com: Encyclopedia article about route Last Updated: - Updated example sentences Love words? Need even more definitions? Subscribe to America's largest dictionary and get thousands more definitions and advanced search—ad free! Merriam-Webster unabridged More from Merriam-Webster ### Can you solve 4 words at once? Can you solve 4 words at once? Word of the Day epitome See Definitions and Examples » Get Word of the Day daily email! Popular in Grammar & Usage See More ### 31 Useful Rhetorical Devices ### Merriam-Webster’s Great Big List of Words You Love to Hate ### How to Use Em Dashes (—), En Dashes (–) , and Hyphens (-) ### The Difference Between 'i.e.' and 'e.g.' ### Democracy or Republic: What's the difference? See More Popular in Wordplay See More ### Our Best Historical Slang Terms ### Even More Bird Names that Sound Like Insults (and Sometimes Are) ### Words That Turned 100 in 2025 ### Better Ways to Say 'This Sucks' ### 'Za' and 9 Other Words to Help You Win at SCRABBLE See More Popular See More ### 31 Useful Rhetorical Devices ### Our Best Historical Slang Terms ### Even More Bird Names that Sound Like Insults (and Sometimes Are) See More
3949
https://prime-numbers.fandom.com/wiki/101
101
Basic Information: Cardinal Number: one hundred one. Ordinal Number: one hundred first. Roman Numeral: CI. Greek Numeral: ΡΑ´. Factorization: 101 (prime).
Representations in other bases: binary 1100101; ternary 10202; quaternary 1211; quinary 401; senary 245; septenary 203; octal 145; nonary 122; undecimal 92; duodecimal 85; tridecimal 7A; quattuordecimal 73; quindecimal 6B; hexadecimal 65; septendecimal 5G; octodecimal 5B; novemdecimal 56; vigesimal 51; base 21: 4H; base 22: 4D; base 23: 49; base 24: 45; base 25: 41; base 26: 3N; base 27: 3K; base 28: 3H; base 29: 3E; base 30: 3B; base 31: 38; base 32: 35; base 33: 32; base 34: 2X; base 35: 2V; base 36: 2T.
| Previous | Current | Next |
| --- | --- | --- |
| 97 | 101 | 103 |
101 is the first 3-digit prime number. 101 has 2 factors, 1 and 101. It is the 26th prime number, and the first prime number in the range 101-200.
Proofs: 101 is not divisible by 2, 3, 5, or 7 — the only primes not exceeding √101 ≈ 10.05 — therefore, 101 is a prime number.
As an Exponent of a Mersenne Number: 2^101 − 1 is divisible by 7,432,339,208,719, and is therefore not a prime number.
Prime Numbers 1-1,000:
2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37, 41, 43, 47, 53, 59, 61, 67, 71, 73, 79, 83, 89, 97,
101, 103, 107, 109, 113, 127, 131, 137, 139, 149, 151, 157, 163, 167, 173, 179, 181, 191, 193, 197, 199,
211, 223, 227, 229, 233, 239, 241, 251, 257, 263, 269, 271, 277, 281, 283, 293,
307, 311, 313, 317, 331, 337, 347, 349, 353, 359, 367, 373, 379, 383, 389, 397,
401, 409, 419, 421, 431, 433, 439, 443, 449, 457, 461, 463, 467, 479, 487, 491, 499,
503, 509, 521, 523, 541, 547, 557, 563, 569, 571, 577, 587, 593, 599,
601, 607, 613, 617, 619, 631, 641, 643, 647, 653, 659, 661, 673, 677, 683, 691,
701, 709, 719, 727, 733, 739, 743, 751, 757, 761, 769, 773, 787, 797,
809, 811, 821, 823, 827, 829, 839, 853, 857, 859, 863, 877, 881, 883, 887,
907, 911, 919, 929, 937, 941, 947, 953, 967, 971, 977, 983, 991, 997
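Both claims above — that 101 is prime, and that 2^101 − 1 is divisible by 7,432,339,208,719 and therefore not prime — can be checked directly with trial division (a minimal Python sketch; the helper name `is_prime` is ours, and the Mersenne factor is the one quoted above):

```python
def is_prime(n: int) -> bool:
    """Trial division: n is prime iff no d with 2 <= d <= sqrt(n) divides it."""
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

print(is_prime(101))                       # True: no prime up to sqrt(101) ~ 10.05 divides 101
print((2**101 - 1) % 7432339208719 == 0)   # True: 2^101 - 1 has a proper factor, so it is not prime
```

Trial division is far too slow for the 31-digit number 2^101 − 1 itself, which is why the sketch only verifies the quoted factor rather than calling `is_prime` on it.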
3950
https://biblioscout.net/content/pdf/99.140005/sudhoff201302019901.pdf
九章筭术 Jiu zhang suan shu (Nine Chapters on the Art of Mathematics) – An Appraisal of the Text, its Editions, and Translations
From the journal Sudhoffs Archiv, Volume 97, October 2013, issue 2. Published by Franz Steiner Verlag. Article, 22,508 words. Original language: English. Sudhoff 2013, pp. 199-235.
Abstract (translated from German): This contribution concerns the ancient Chinese mathematical text "Nine Chapters on the Art of Mathematics" and the long series of commentaries devoted to this work since the first commentary by Liu Hui in 263 CE. First, the significance of the "Nine Chapters" is set out; then the difficulties are addressed that arise in translating even the title of this work, as well as in dating the origin of the original text. The contribution discusses the many Chinese editions, as well as those in Korean and Japanese, and also presents the English studies of the "Nine Chapters," including later translations into Western languages. It closes with an assessment of the mathematical legacy of the "Nine Chapters" and a discussion of various hermeneutic and prosopographical questions.
Author: Joseph W. Dauben
3951
https://www.quora.com/What-are-the-critical-points-of-the-function-f-x-x-3-6x-2-6
What are the critical points of the function f(x) = x^3 + 6x^2 - 6? - Quora
Topics: Mathematics · Critical Point · Polynomials · Cubic Functions · Calculus · Derivatives · Differentiable Functions

Assistant Bot · 1y
To find the critical points of f(x) = x^3 + 6x^2 - 6, follow these steps:
1. Find the derivative of the function.
2. Set the derivative equal to zero.
3. Solve for x.
Step 1: f'(x) = d/dx (x^3 + 6x^2 - 6) = 3x^2 + 12x.
Step 2: Set 3x^2 + 12x = 0.
Step 3: Factor out 3x: 3x(x + 4) = 0, so 3x = 0 gives x = 0 and x + 4 = 0 gives x = -4.
The critical points of f(x) = x^3 + 6x^2 - 6 are x = 0 and x = -4. These are the points where the derivative is zero, indicating potential local maxima, minima, or inflection points.

Philip Lloyd, Specialist Calculus Teacher · 4y
When I see questions like this, I love to draw the resulting graph after I have found all the relevant information. It is always satisfying when the graph also crosses the x axis at integer values, but not so in this case!

Gordon M. Brown, Math Tutor at San Diego City College · 4y
Finding critical points of a polynomial is merely a matter of taking its derivative and noting where its zeroes occur. In this case:
f(x) = x^3 + 6x^2 - 6
f'(x) = 3x^2 + 12x
3x^2 + 12x = 0
x^2 + 4x = 0
x(x + 4) = 0
x = {-4, 0}
Therefore, the critical points of f(x) are x = -4 and x = 0. By establishing test points between and around them — for example -5, -1, and 1 — we can quickly determine on which interval(s) f'(x) is positive (where f(x) is increasing) and on which it is negative (where f(x) is decreasing). The graph leaves no doubt that f(x) increases on (-∞, -4) ∪ (0, ∞) and decreases on (-4, 0).

Robert Colburn · 4y
Critical points of the function are where the derivative equals zero.
f'(x) = 3x^2 + 12x = 3x(x + 4), so x = 0 and x = -4 are critical points of f(x).
An upward-opening parabola with two real roots goes from positive to negative at its smaller zero, so f(x) has a local maximum at x = -4; the derivative comes back through the x-axis at x = 0, going from negative to positive, so f(x) has a local minimum there.
f(-4) = -64 + 96 - 6 = 26 → local maximum of f(x) at (-4, 26)
f(0) = 0 - 6 = -6 → local minimum of f(x) at (0, -6)

Geovane Maciel · 4y
f(x) = x^3 + 6x^2 - 6. The derivative is f'(x) = (x^3)' + 6(x^2)' - (6)' = 3x^2 + 12x, which exists for every x. So the critical points are the numbers where f'(x) = 0: 3x^2 + 12x = 0, 3x(x + 4) = 0, hence x = 0 or x = -4.

Subhasish Debroy · 4y
f(x) = x^3 + 6x^2 - 6; differentiating with respect to x, f'(x) = 3x^2 + 12x. At a critical point c we need f'(c) = 0, so 3c^2 + 12c = 0, i.e. 3c(c + 4) = 0; since 3 ≠ 0, c = 0 or c = -4. The function has two critical points, c1 = 0 and c2 = -4.

Ved Prakash Sharma · 4y
f(x) = x^3 + 6x^2 - 6, so f'(x) = 3x^2 + 12x. Putting f'(x) = 0 gives 3x(x + 4) = 0, so x = 0 and x = -4 are the critical points of f(x).

Simon Tsai · 1y (related: What are all the critical numbers of f(x) = x^3 + 6x^2?)
Write f(x) = x^3 + ax^2, so f'(x) = 3x^2 + 2ax. Then f'(x) = 0 if and only if x = 0 or x = -2a/3. These are the critical points of the function. (In your case a = 6; plug in the number. In general, f(0) = 0 and f(-2a/3) = 4a^3/27.)

Robert Paxson · 5y (related: F(x) = 3x^5 - 20x^3 — what are the critical numbers?)
y = 3x^5 - 20x^3
y' = 15x^4 - 60x^2 = 0
15x^2(x^2 - 4) = 0, so x = 0, x = -2, and x = 2.
y'' = 60x^3 - 120x
At x = 0: y = 0 and y'' = 0, so this is an inflection point. Note that y''(0-) > 0 and y''(0+) < 0: near zero the 60x^3 term vanishes faster, so the -120x term controls the sign of y''. Also, y is an odd function, and odd functions have 180°-rotational symmetry about the origin; from that symmetry it is clear that x = 0 is an inflection point.
At x = -2: y = 64 and y'' = -240, so this is a local maximum.
At x = 2: y = -64 and y'' = 240, so this is a local minimum.

Robert Paxson · 11mo (related: What are the critical points of f(x) = cos(x) - sin(x)?)
The critical points occur where the first derivative is zero: f'(x) = -sin(x) - cos(x) = 0, so -sin(x) = cos(x), i.e. tan(x) = -1. The abscissae of the critical points are x = arctan(-1) + nπ for any integer n, and the ordinates are y = ±√2: the ordinate is √2 unless n is odd, in which case y = -√2. We can also write f(x) = cos(x) - sin(x) as f(x) = √2 cos(x + π/4), which makes the critical points easy to identify.

Daniel Chan · 7y (related: What are the critical points and inflection points of f(x) = x^3 - 3x^2 - 9x + 4?)
Method 1: Critical points are where dy/dx = 0; here, f'(x) = 0. Differentiating the function gives a quadratic equation; solve it to obtain the x-coordinates of the critical points, then substitute those into f(x) for the y-coordinates. For the point(s) of inflection, differentiate again and solve f''(x) = 0 — for a cubic there is only one solution — and substitute back into f(x).
Method 2: Plot the graph on a graphing app or calculator and read off the coordinates. You may miss the inflection points, so I don't recommend this method.

Enrico Gregorio, Associate professor in Algebra (answer reconstructed with corrected signs) · (related: What are the critical numbers and inflection points of f(x) = 2x^2/(x^2 - 1)?)
The function f(x) = 2x^2/(x^2 - 1) is defined for x ≠ ±1 and is differentiable everywhere on its domain, so the critical points are where the derivative vanishes. Writing f(x) = 2 + 2/(x^2 - 1), or using the quotient rule:
f'(x) = -4x/(x^2 - 1)^2
The derivative vanishes only at x = 0. Since (x^2 - 1)^2 > 0 on the domain, f is increasing over (-∞, -1) and (-1, 0], and decreasing over [0, 1) and (1, ∞). Hence there is a local maximum at x = 0, with f(0) = 0.
For inflection points we need the second derivative; a short calculation gives
f''(x) = 4(3x^2 + 1)/(x^2 - 1)^3
The numerator is everywhere positive, so f'' never vanishes and the function has no inflection point, notwithstanding that it is convex over (-∞, -1), concave over (-1, 1), and convex over (1, ∞). You can complete the picture by observing that
lim x→-∞ f(x) = 2 = lim x→∞ f(x)
lim x→-1- f(x) = ∞ = lim x→1+ f(x)
lim x→-1+ f(x) = -∞ = lim x→1- f(x)

Boutros Gladius · 2y (related: How do I find the critical points of f(x) = x^3 - 3x^2 and the intervals where f is increasing or decreasing?)
The slope of the function is zero at the critical points, so take the derivative: d/dx (x^3 - 3x^2) = 3x^2 - 6x = x(3x - 6), which has roots x ∈ {0, 2}. Applying the second derivative test, d/dx (3x^2 - 6x) = 6x - 6, the critical points give -6 and +6. The negative value at x = 0 means that point is a local maximum — the function increases to its left and decreases to its right — and the positive value at x = 2 means a local minimum. For completeness, a second derivative of zero would indicate an inflection point, where the function flattens out but is neither a local maximum nor a local minimum.
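The procedure the answers above repeat — differentiate, solve f'(x) = 0, classify each root with the second derivative — can be checked mechanically. A short Python sketch (helper names are ours; since f'(x) = 3x^2 + 12x is quadratic, the quadratic formula finds its roots):

```python
import math

# f(x) = x^3 + 6x^2 - 6  ->  f'(x) = 3x^2 + 12x,  f''(x) = 6x + 12
def f(x):   return x**3 + 6*x**2 - 6
def fp(x):  return 3*x**2 + 12*x
def fpp(x): return 6*x + 12

# Solve f'(x) = 0 with the quadratic formula (a = 3, b = 12, c = 0)
a, b, c = 3.0, 12.0, 0.0
disc = math.sqrt(b*b - 4*a*c)
crit = sorted({(-b - disc) / (2*a), (-b + disc) / (2*a)})
print(crit)                                    # [-4.0, 0.0]

# Second derivative test at each critical point
for x0 in crit:
    kind = "local min" if fpp(x0) > 0 else "local max"
    print(x0, f(x0), kind)                     # (-4.0, 26.0, local max), (0.0, -6.0, local min)
```

The output reproduces Robert Colburn's classification above: a local maximum at (-4, 26) and a local minimum at (0, -6).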
3952
https://www.grc.nasa.gov/www/k-12/airplane/bgp.html
Welcome to the Beginner's Guide to Propulsion
What is propulsion? The word is derived from two Latin words: pro meaning before or forwards and pellere meaning to drive. Propulsion means to push forward or drive an object forward. A propulsion system is a machine that produces thrust to push an object forward. On airplanes, thrust is usually generated through some application of Newton's third law of action and reaction. A gas, or working fluid, is accelerated by the engine, and the reaction to this acceleration produces a force on the engine.
A general derivation of the thrust equation shows that the amount of thrust generated depends on the mass flow through the engine and the exit velocity of the gas. Different propulsion systems generate thrust in slightly different ways. We will discuss four principal propulsion systems: the propeller, the turbine (or jet) engine, the ramjet, and the rocket.
Why are there different types of engines? If we think about Newton's first law of motion, we realize that an airplane propulsion system must serve two purposes. First, the thrust from the propulsion system must balance the drag of the airplane when the airplane is cruising. And second, the thrust from the propulsion system must exceed the drag of the airplane for the airplane to accelerate. In fact, the greater the difference between the thrust and the drag, called the excess thrust, the faster the airplane will accelerate. Some aircraft, like airliners and cargo planes, spend most of their life in a cruise condition. For these airplanes, excess thrust is not as important as high engine efficiency and low fuel usage.
Since thrust depends on both the amount of gas moved and the velocity, we can generate high thrust by accelerating a large mass of gas by a small amount, or by accelerating a small mass of gas by a large amount. Because of the aerodynamic efficiency of propellers and fans, it is more fuel efficient to accelerate a large mass by a small amount. That is why we find high bypass fans and turboprops on cargo planes and airliners.

Some aircraft, like fighter planes or experimental high speed aircraft, require very high excess thrust to accelerate quickly and to overcome the high drag associated with high speeds. For these airplanes, engine efficiency is not as important as very high thrust. Modern military aircraft typically employ afterburners on a low bypass turbofan core. Future hypersonic aircraft will employ some type of ramjet or rocket propulsion.

There is a special section of the Beginner's Guide which deals with compressible, or high speed, aerodynamics. This section is intended for undergraduates who are studying shock waves or isentropic flows and contains several calculators and simulators for that flow regime.

The site was prepared at NASA Glenn by the Learning Technologies Project (LTP) to provide background information on basic propulsion for secondary math and science teachers. The pages were originally prepared as teaching aids to support EngineSim, an interactive educational computer program that allows students to design and test jet engines on a personal computer. Other slides were prepared to support LTP videoconferencing workshops for teachers and students, and still others were prepared as part of PowerPoint presentations for the Digital Learning Network. We have intentionally organized this site to mirror the unstructured nature of the world wide web. There are many pages here connected to one another through hyperlinks. You can then navigate through the links based on your own interest and inquiry.
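The tradeoff described above (same thrust from a large mass flow with a small velocity change, or a small mass flow with a large one) can be sketched with the simplified momentum form of the thrust equation, F = mdot × (Ve − V0). This is a hedged illustration only: the pressure term of the full thrust equation is ignored, and the numbers are made up for the example, not taken from the site.

```python
# Simplified momentum thrust: F = mdot * (v_exit - v_flight).
# The pressure term of the general thrust equation is ignored here.
def thrust(mdot, v_exit, v_flight=0.0):
    """Thrust in newtons from mass flow (kg/s) and velocities (m/s)."""
    return mdot * (v_exit - v_flight)

# The same thrust can come from accelerating a large mass a little
# (high-bypass fan) or a small mass a lot (low-bypass core):
fan_like = thrust(mdot=500.0, v_exit=40.0)    # 500 * 40 = 20000 N
core_like = thrust(mdot=20.0, v_exit=1000.0)  # 20 * 1000 = 20000 N
```

The fan route wins on fuel efficiency because less kinetic energy is wasted in the exhaust, which is why airliners use high bypass fans while fighters accept the efficiency penalty for high specific thrust.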
However, if you prefer a more structured approach, you can also take one of our Guided Tours through the site. Each tour provides a sequence of pages dealing with some aspect of propulsion. For younger students, a simpler explanation of the information on this page is available on the Kids Page.

NOTICE: The site has recently been modified to support Section 508 of the Rehabilitation Act. Many of the pages contain mathematical equations which have been produced graphically and which are too long or complex to provide in an "ALT" tag. For these pages, we have retained the (non-compliant) graphical page and have provided a separate (compliant) text-only page which contains all of the information of the original page. The two pages are connected through hyperlinks.

Activities: Problem Sets for the BGP

Editor: Nancy Hall | NASA Official: Nancy Hall | Last Updated: May 13 2021
3953
https://most.oercommons.org/courseware/lesson/889/overview
Principles of Microeconomics Course Content, Elasticity: Concepts and Applications | Maryland Open Source Textbook (M.O.S.T.) Commons
Author: OER Librarian | Subject: Economics | Material Type: Module | Level: Community College / Lower Division, College / Upper Division | Provider: Ohio Open Ed Collaborative | Tags: Elasticity, Oss0042 | License: Creative Commons Attribution Non-Commercial | Language: English | Media Formats: eBook, Text/HTML, Video

Principles of Microeconomics course modules: Introduction to the Economic Way of Thinking; Model Building, Production Possibilities and Gains from Trade; Supply, Demand and Market Equilibrium; Elasticity: Concepts and Applications; Market Failure: Externalities and Public Goods; Consumer Behavior; Production and Cost in the Short and Long Run; Profit Maximization in Competitive Markets; Monopoly; Imperfect Competition; Markets for the Factors of Production; Income Inequality, Poverty and Discrimination; Antitrust Policy and Government Regulation of Business; Public Finance and Public Choice; International Trade; The Economics of Healthcare.

Elasticity: Concepts and Applications Resources

Overview: In this topic, students will be introduced to the concept of elasticity. They'll learn about price elasticity of demand and price elasticity of supply, about their determinants, and how to calculate them. They'll be introduced to some applications of price elasticity. They'll also learn about two other important elasticity measures, cross-price elasticity and income elasticity.
Learning Objectives:
Define the general concept of elasticity (2,5,12,16)
Define price elasticity of demand and price elasticity of supply (7,12)
Calculate and interpret the meaning of price elasticity coefficients of demand and supply (7,12)
Explain the determinants of price elasticity of demand and supply (7,12)
Identify and discuss some important applications of price elasticity, including tax incidence and the effect of elasticity on total revenue (2,5,12,16)
Define and explain the significance of income elasticity and cross-price elasticity of demand (2,5,7,12)

NOTE: This module meets Ohio TAGs 2, 5, 7, 12 & 16 for an Intro to Microeconomics course.

Recommended Textbook Resources

Principles of Microeconomics 2e: Elasticity. Chapter 5, Elasticity (all sections). This chapter covers all the learning objectives except for #4 above. Full citation: Greenlaw, S. and Shapiro, et al. Principles of Microeconomics 2e. OpenStax, June 4, 2018.

Principles of Economics: The Price Elasticity of Demand. Section 5.1 covers the determinants of price elasticity of demand, which is learning objective 4. It also describes the relationship between price elasticity and total revenue, part of learning objective 5.

Principles of Economics: Price Elasticity of Supply. Section 5.3 covers price elasticity of supply and includes a discussion of its determinants. Full citation: University of Minnesota Libraries, Minneapolis, MN. Principles of Economics, Publishing Ed. 2016. University of Minnesota, CCA, 2016.

Supplemental Content/Alternative Resources

Alternative Textbook Resource: Principles of Economics: Responsiveness of Demand to Other Factors. Section 5.2 gives another take on cross-price and income elasticities. Full citation: University of Minnesota Libraries, Minneapolis, MN. Principles of Economics, Publishing Ed. 2016. University of Minnesota, CCA, 2016.
Alternative Video Resources

The following three videos, which are produced by George Mason University, cover price elasticity of demand, how to calculate it, and elasticity of supply, respectively. All three videos run about 15 minutes each, which may be too long for classroom use. However, they are comprehensive and may be useful for student viewing outside of class (and may be useful to instructors as well).

Marginal Revolution University: "Elasticity of Demand": This video provides a good overview and explanation of the concept of price elasticity of demand. Viewing time is 13:35 minutes.

Marginal Revolution University: "Calculating the Elasticity of Demand": This video gives a detailed look at the calculation of price elasticity. It shows the midpoint method. It also looks at the relationship between price elasticity and total revenue. Viewing time is 15:51 minutes.

Marginal Revolution University: "Elasticity of Supply": A companion to the video on demand elasticity. It takes a closer look at supply elasticity and its determinants. The video may offer too much detail for students but may be useful for instructors. Viewing time is 14:17 minutes.

Active Learning Exercise

Elasticity and Tax Incidence: The point of this exercise is to get students to understand how the relative price elasticities of supply and demand are what really determine who bears the burden of an excise tax. The website indicates what information is to be given to students and offers notes and tips on teaching.

Elasticity and Total Revenue: Context Rich Problem: This activity involves a single scenario-based problem. In the problem, students will need to determine if demand is likely to be elastic or inelastic. Then they will need to link elasticity to the effect of a price change on total revenue.

The exercises listed above are included in Starting Point: Teaching and Learning Economics.

Questions and Problems

Questions: Below is the demand schedule for a good.
Using the midpoint formula, calculate the price elasticity of demand between: 1. Price = $60 and price = $55 2. Price = $55 and price = $50 3. Price = $50 and price = $45 4. Price = $45 and price = $40

Characterize the demand for each of the following goods or services as perfectly elastic, relatively elastic, relatively inelastic or perfectly inelastic. Explain your reasoning. A BMW sedan. Common table salt sold at the grocery store. Kidney dialysis services. Cosmetic surgery.

Most economists would agree that the long-run elasticity of demand for gasoline is higher than the short-run elasticity. Give some reasons why this could be true.

Excise taxes are often imposed on goods that tend to be addictive in consumption, like cigarettes or liquor. With such goods, who will bear most of the incidence of the tax, the buyer or the seller? Explain.

You own the Mid-State Spa Co. A retail consultant you hired to help you grow your company tells you that you should lower the price of your spas to increase sales. What is she telling you about the elasticity of demand for spas? Explain.

A grocery store manager noticed that during the week after the price of hamburger rose from $5/lb. to $5.80/lb., sales of boneless chicken breasts rose from 100 lbs. to 140 lbs. Based on these numbers, what is the cross-price elasticity of demand between hamburger and boneless chicken breasts? Are they substitutes or complements? Explain.

Rank the following goods from what you think would have the lowest to highest income elasticity. Explain your ranking. Rolex watches. Vintage wine. Store-brand green beans. Not-from-concentrate orange juice. Meals at Panera Bread or other fast-casual restaurants.

You estimate the price elasticity of demand for tickets to your Golf and Games Pavilion to be 2.5, which you know makes demand elastic. However, when you lowered the price of a ticket by 20 percent, your sales remained basically flat.
There are similar businesses in town and you know that the regional unemployment rate has been edging up. What other elasticity concepts might help explain your flat sales?

Answers:

… 1.4 elastic; 1.2 elastic; 1.0 unitary; 0.8 inelastic …

Relatively elastic. There are numerous alternatives to BMWs (Audis, Mercedes, Lexus, etc.). Relatively inelastic. Common salt is a small budget item and there are not many alternatives. Fancy sea salts endorsed by famous chefs and sold in specialty stores are likely more elastic. Perfectly inelastic. The procedure is essential to life and there is no viable alternative other than a kidney transplant. I would say relatively elastic. There are non-surgical alternatives.

Short-run elasticity of demand for gasoline is quite inelastic because there are few alternatives to driving to work or school or shopping. Over the long run, people can find public transportation or carpool. They may move closer to work or find a work-from-home job. They can also buy a more fuel-efficient car.

Such goods, also called "sin" goods, have a very inelastic demand. Therefore, the price increase due to the additional tax would not reduce quantity demanded that much. The buyer would have the more inelastic curve and would therefore bear the greater incidence of the tax.

She's telling you that the demand for spas is price elastic. Lowering your price by 10% would increase your sales by more than 10% and thus grow your total revenue.

The cross-price elasticity is 2.2. The fact that it's positive means that the goods are substitutes. When the price of hamburger rises, sales of boneless chicken breasts also rise.

Subject to opinion, so no absolutely correct answer.
Store-brand green beans - may be an inferior good. Not-from-concentrate orange juice - normal good. Meals at Panera Bread - normal good. Vintage wine - strong normal good (superior good). Rolex watches - strong normal good (superior good).

May be too obscure a question for a principles class, but students may pick up on the idea that prices at similar establishments may also have fallen and that income may also have declined.

PowerPoint Slides: Elasticity: Concepts and Applications PowerPoint Slides; Elasticity: Concepts and Applications Google Slides.
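The midpoint (arc) elasticity formula that the questions above rely on can be sketched in Python. This is an illustrative sketch, not part of the module: the function name is my own, and the example applies the formula to the hamburger/chicken numbers from the cross-price question above.

```python
def midpoint_elasticity(q1, q2, p1, p2):
    """Arc elasticity: percent change in quantity over percent change
    in price, each computed against the midpoint of the two values."""
    pct_q = (q2 - q1) / ((q1 + q2) / 2)
    pct_p = (p2 - p1) / ((p1 + p2) / 2)
    return pct_q / pct_p

# Cross-price elasticity of chicken sales with respect to the price of
# hamburger: sales rose 100 -> 140 lbs when price rose $5.00 -> $5.80.
e = midpoint_elasticity(100, 140, 5.00, 5.80)
# e is positive, so the goods are substitutes.
```

Because the midpoint divides by the average of the two prices and quantities, the formula gives the same magnitude whether the price moves up or down, which is why it is preferred for the demand-schedule exercise in question 1.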
3954
https://www.pewresearch.org/short-reads/2023/02/09/key-facts-as-india-surpasses-china-as-the-worlds-most-populous-country/
Key facts as India surpasses China as the world’s most populous country | Pew Research Center
Short Reads | February 9, 2023 | By Laura Silver, Christine Huang and Laura Clancy

India is poised to become the world’s most populous country this year – surpassing China, which has held the distinction since at least 1950, when the United Nations population records begin. The UN expects that India will overtake China in April, though it may have already reached this milestone since the UN estimates are projections. Here are key facts about India’s population and its projected changes in the coming decades, based on Pew Research Center analyses of data from the UN and other sources.

How we did this: This Pew Research Center analysis is primarily based on the World Population Prospects 2022 report by the United Nations.
The estimates produced by the UN are based on “all available sources of data on population size and levels of fertility, mortality and international migration.” Population sizes over time come from India’s decennial census. The census has collected detailed information on India’s inhabitants, including on religion, since 1881. Data on fertility and how it is related to factors like education levels and place of residence is from India’s National Family Health Survey (NFHS). The NFHS is a large, nationally representative household survey with more extensive information about childbearing than the census. Data on migration is primarily from the United Nations Population Division. Because future levels of fertility and mortality are inherently uncertain, the UN uses probabilistic methods to account for both the past experiences of a given country and the past experiences of other countries under similar conditions. The “medium scenario” projection is the median of many thousands of simulations. The “low” and “high” scenarios make different assumptions about fertility: In the high scenario, total fertility is 0.5 births above the total fertility in the medium scenario; in the low scenario, it is 0.5 births below the medium scenario. Other sources of information for this analysis are available through the links included in the text. India’s population has grown by more than 1 billion people since 1950, the year the UN population data begins. The exact size of the country’s population is not easily known, given that India has not conducted a census since 2011, but it is estimated to have more than 1.4 billion people – greater than the entire population of Europe (744 million) or the Americas (1.04 billion). China, too, has more than 1.4 billion people, but while China’s population is declining, India’s continues to grow. 
Under the UN’s “medium variant” projection, a middle-of-the-road estimate, India’s population will surpass 1.5 billion people by the end of this decade and will continue to slowly increase until 2064, when it will peak at 1.7 billion people. In the UN’s “high variant” scenario – in which the total fertility rate in India is projected to be 0.5 births per woman above that of the medium variant scenario – the country’s population would surpass 2 billion people by 2068. The UN’s “low variant” scenario – in which the total fertility rate is projected to be 0.5 births below that of the medium variant scenario – forecasts that India’s population will decline beginning in 2047 and fall to 1 billion people by 2100.

People under the age of 25 account for more than 40% of India’s population. In fact, there are so many Indians in this age group that roughly one-in-five people globally who are under the age of 25 live in India. Looking at India’s age distribution another way, the country’s median age is 28. By comparison, the median age is 38 in the United States and 39 in China.

The other two most populous countries in the world, China and the U.S., have rapidly aging populations – unlike India. Adults ages 65 and older comprise only 7% of India’s population as of this year, compared with 14% in China and 18% in the U.S., according to the UN. The share of Indians who are 65 and older is likely to remain under 20% until 2063 and will not approach 30% until 2100, under the UN’s medium variant projections.

The fertility rate in India is higher than in China and the U.S., but it has declined rapidly in recent decades. Today, the average Indian woman is expected to have 2.0 children in her lifetime, a fertility rate that is higher than China’s (1.2) or the United States’ (1.6), but much lower than India’s in 1992 (3.4) or 1950 (5.9).
Every religious group in the country has seen its fertility rate fall, including the majority Hindu population and the Muslim, Christian, Sikh, Buddhist and Jain minority groups. Among Indian Muslims, for example, the total fertility rate has declined dramatically from 4.4 children per woman in 1992 to 2.4 children in 2019, the most recent year for which data is available from India’s National Family Health Survey (NFHS). Muslims still have the highest fertility rate among India’s major religious groups, but the gaps in childbearing among India’s religious groups are generally much smaller than they used to be.

Fertility rates vary widely by community type and state in India. On average, women in rural areas have 2.1 children in their lifetimes, while women in urban areas have 1.6 children, according to the 2019-21 NFHS. Both numbers are lower than they were 20 years ago, when rural and urban women had an average of 3.7 and 2.7 children, respectively. Total fertility rates also vary greatly by state in India, from as high as 2.98 in Bihar and 2.91 in Meghalaya to as low as 1.05 in Sikkim and 1.3 in Goa. Likewise, population growth varies across states. The populations of Meghalaya and Arunachal Pradesh both increased by 25% or more between 2001 and 2011, when the last Indian census was conducted. By comparison, the populations of Goa and Kerala increased by less than 10% during that span, while the population in Nagaland shrank by 0.6%. These differences may be linked to uneven economic opportunities and quality of life.

On average, Indian women in urban areas have their first child 1.5 years later than women in rural areas. Among Indian women ages 25 to 49 who live in urban areas, the median age at first birth is 22.3. Among similarly aged women in rural areas, it is 20.8, according to the 2019 NFHS. Women with more education and more wealth also generally have children at later ages.
The median age at first birth is 24.9 among Indian women with 12 or more years of schooling, compared with 19.9 among women with no schooling. Similarly, the median age at first birth is 23.2 for Indian women in the highest wealth quintile, compared with 20.3 among women in the lowest quintile. Among India’s major religious groups, the median age of first birth is highest among Jains at 24.9 and lowest among Muslims at 20.8. India’s artificially wide ratio of baby boys to baby girls – which arose in the 1970s from the use of prenatal diagnostic technology to facilitate sex-selective abortions – is narrowing. From a large imbalance of about 111 boys per 100 girls in India’s 2011 census, the sex ratio at birth appears to have normalized slightly over the last decade. It narrowed to about 109 boys per 100 girls in the 2015-16 NFHS and to 108 boys per 100 girls in the 2019-21 NFHS. To put this recent decline into perspective, the average annual number of baby girls “missing” in India fell from about 480,000 in 2010 to 410,000 in 2019, according to a Pew Research Center study published in 2022. (Read more about how this “missing” population share is defined and calculated in the “How did we count ‘missing’ girls?” box of the report.) And while India’s major religious groups once varied widely in their sex ratios at birth, today there are indications that these differences are shrinking. Infant mortality in India has decreased 70% in the past three decades but remains high by regional and international standards. There were 89 deaths per 1,000 live births in 1990, a figure that fell to 27 deaths per 1,000 live births in 2020. Since 1960, when the UN Interagency Group for Child Mortality Estimation began compiling this data, the rate of infant deaths in India has dropped between 0.1% and 0.5% each year. 
Still, India’s infant mortality rate is higher than those of neighboring Bangladesh (24 deaths per 1,000 live births), Nepal (24), Bhutan (23) and Sri Lanka (6) – and much higher than those of its closest peers in population size, China (6) and the U.S. (5).

Typically, more people migrate out of India each year than into it, resulting in negative net migration. India lost about 300,000 people due to migration in 2021, according to the UN Population Division. The UN’s medium variant projections suggest India will continue to experience net negative migration through at least 2100. But India’s net migration has not always been negative. As recently as 2016, India gained an estimated 68,000 people due to migration (likely to be a result of an increase in asylum-seeking Rohingya fleeing Myanmar). India also recorded increases in net migration on several occasions in the second half of the 20th century.

Laura Silver is an associate director focusing on global attitudes at Pew Research Center. Christine Huang is a former research associate focusing on global attitudes at Pew Research Center. Laura Clancy is a research analyst focusing on global attitudes research at Pew Research Center.
3955
https://www.csueastbay.edu/scaa/files/docs/student-handouts/marija-matrices-and-matrix-operation.pdf
Subject: Linear Algebra | Created by: Marija Stanojcic | Revised: 2/26/2018

MATRICES AND MATRIX OPERATIONS

SIZE OF THE MATRIX is defined by the number of rows and columns in the matrix. For a matrix that has m rows and n columns, we say the size of the matrix is m x n. If a matrix has the same number of rows (n) and columns (n), we call it a square (n x n) matrix.

Examples (rows separated by semicolons): [4 0 2; 1 1 3; 5 2 7; 8 9 1] is 4x3, [1 4 7; 8 6 2] is 2x3, and [7 2 4; 2 3 5; 4 8 9] is a square 3x3 matrix. In general, an m x n matrix has entries a11, a12, ..., a1n in its first row through am1, am2, ..., amn in its last row.

SPECIAL MATRICES
1) Zero Matrix – a matrix whose entries are all 0; the notation for this matrix is O. For example, O = [0 0; 0 0], [0 0 0; 0 0 0; 0 0 0], etc.
2) Identity Matrix – a square matrix with 1's on the diagonal and 0's everywhere else; the notation for this matrix is I, but in some books it can be E. For example, I = [1 0; 0 1], [1 0 0; 0 1 0; 0 0 1], etc.

ADDITION AND SUBTRACTION – We can add or subtract only matrices that are the same size.

A = [1 4 7; 8 6 2] (2x3), B = [1 2 3; 4 5 6] (2x3), C = [4 0 2; 1 1 3; 5 2 7; 8 9 1] (4x3)

Matrices A and B are the same size, because they both have 2 rows and 3 columns; matrix C has a different size. So we can only do addition or subtraction with matrices A and B. For example, we can do A − B:

A − B = [1−1 4−2 7−3; 8−4 6−5 2−6] = [0 2 4; 4 1 −4]

SCALAR MULTIPLES – If A is a matrix and c is a scalar (any number), then cA (the same as c x A) is the matrix we get when we multiply each entry of A by the scalar c. With A = [7 2 4; 2 3 5; 4 8 9]:

a) 3A = [3x7 3x2 3x4; 3x2 3x3 3x5; 3x4 3x8 3x9] = [21 6 12; 6 9 15; 12 24 27]
b) −(1/2)A = [−7/2 −1 −2; −1 −3/2 −5/2; −2 −4 −9/2]

MULTIPLYING MATRICES – We can multiply two matrices only when the first matrix has the same number of columns as the number of rows of the second matrix.
The new matrix AB will have the same number of rows as the first matrix, and the same number of columns as the second matrix:

A (m x r) times B (r x n) = AB (m x n)

Note: A x B = AB and B x A = BA, but when we multiply matrices, AB isn't the same as BA: AB ≠ BA.

Can we multiply the following matrices?

1) A = [1 2 4; 2 6 0], B = [4 0 2; 1 1 3; 5 2 7; 8 9 1]. Matrix A has size 2x3 (2 rows and 3 columns), and matrix B has size 4x3 (4 rows and 3 columns). The number of columns of matrix A is 3, and the number of rows of matrix B is 4; as these numbers are not the same, we CAN'T multiply these two matrices.

2) A = [1 2 4; 2 6 0], B = [4 1 4 3; 0 −1 3 1; 2 7 5 2]. Matrix A has size 2x3 (2 rows and 3 columns), and matrix B has size 3x4 (3 rows and 4 columns). The number of columns of matrix A is 3, and the number of rows of matrix B is 3; as these numbers are the same, we CAN multiply these two matrices.

Now that we know we can multiply A and B, we need to see what the size of the product matrix AB will be. The number of rows of AB equals the number of rows of A, which is 2. The number of columns of AB equals the number of columns of B, which is 4. Thus, the size of AB is 2x4.

Now we know that we can multiply these matrices, but how do we multiply them? We multiply each row of the first matrix with each column of the second matrix and put the values in a specific order:

AB = [1st row x 1st col, 1st row x 2nd col, 1st row x 3rd col, 1st row x 4th col; 2nd row x 1st col, 2nd row x 2nd col, 2nd row x 3rd col, 2nd row x 4th col]

How do we multiply a row with a column? The best way to explain this is with an example.
We are going to multiply the 1st row of matrix A by the 1st column of matrix B. The first row of A is (1, 2, 4); the first column of B is (4, 0, 2). We multiply the first number of the row by the first number of the column, the second number of the row by the second number of the column, and the third number of the row by the third number of the column. Then we add these three products, and their sum is the number that goes in the first row and first column of AB.

1st row x 1st column: 1·4 + 2·0 + 4·2 = 4 + 0 + 8 = 12, so the (1,1) entry of AB is 12.
1st row x 2nd column: 1·1 + 2·(−1) + 4·7 = 1 − 2 + 28 = 27
1st row x 3rd column: 1·4 + 2·3 + 4·5 = 4 + 6 + 20 = 30
1st row x 4th column: 1·3 + 2·1 + 4·2 = 3 + 2 + 8 = 13
2nd row x 1st column: 2·4 + 6·0 + 0·2 = 8 + 0 + 0 = 8
2nd row x 2nd column: 2·1 + 6·(−1) + 0·7 = 2 − 6 + 0 = −4
2nd row x 3rd column: 2·4 + 6·3 + 0·5 = 8 + 18 + 0 = 26
2nd row x 4th column: 2·3 + 6·1 + 0·2 = 6 + 6 + 0 = 12

So AB = [12 27 30 13; 8 −4 26 12].

Rules for multiplying matrices:
- (AB)C = A(BC)
- k(AB) = (kA)B = A(kB), where k is a scalar (number)
- A(B ± C) = AB ± AC and (B ± C)A = BA ± CA
- AB ≠ BA in general
- OA = AO = O, where O is the zero matrix
- IA = AI = A, where I is the identity matrix

TRANSPOSE OF THE MATRIX – The transpose of A, denoted AT, is the new matrix we get when the rows and columns of A change places. If A has size m x n, then AT has size n x m, because the rows and columns have been exchanged.

a) A = [2 1 5; 3 4 6], AT = ?        b) B = [7 2 4; −2 3 5; 1 −8 9], BT = ?

To find AT, the rows and columns of matrix A need to change places.
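As a quick aside before continuing, the row-by-column multiplication worked out above is easy to check in code. This is a sketch in plain Python (the function name `mat_mul` is ours); A and B are the matrices from the multiplication example.

```python
def mat_mul(A, B):
    # Requires: number of columns of A == number of rows of B.
    assert len(A[0]) == len(B)
    # Entry (i, j) of AB is the i-th row of A dotted with the j-th column of B.
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

A = [[1, 2, 4], [2, 6, 0]]
B = [[4, 1, 4, 3], [0, -1, 3, 1], [2, 7, 5, 2]]
print(mat_mul(A, B))  # [[12, 27, 30, 13], [8, -4, 26, 12]]
```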
We will put the first row (2, 1, 5) as the first column of AT, and the second row (3, 4, 6) as the second column of AT. Do the same for BT.

a) AT = [2 3; 1 4; 5 6]        b) BT = [7 −2 1; 2 3 −8; 4 5 9]

Note: If a matrix is square and A = AT, we say that it is a SYMMETRIC MATRIX.

A = [5 −2 −1; −2 4 7; −1 7 6]; if we change places for rows and columns we get AT = [5 −2 −1; −2 4 7; −1 7 6].

From here we can see that A = AT, thus this matrix is symmetric.

Rules for transpose (if the sizes of the matrices are such that the stated operations can be performed):
- (AT)T = A
- (A ± B)T = AT ± BT
- (kA)T = kAT, where k is a scalar (number)
- (AB)T = BTAT

MINORS OF A MATRIX – In this handout we will only cover minors for 3x3 matrices, but they can be calculated similarly for any square matrix. For this we need to know the determinant of a 2x2 matrix:

A = [a b; c d],    det(A) = |a b; c d| = ad − bc.

If A is a square matrix, then the minor of the element aij, denoted Mij, is the determinant of the submatrix that remains after the ith row and jth column are deleted from A.

A = [a11 a12 a13; a21 a22 a23; a31 a32 a33],    det(A) = |a11 a12 a13; a21 a22 a23; a31 a32 a33|.

If we want M11, the minor corresponding to the element a11, we cover row 1 and column 1, and everything that is left we write, in the same order, in our minor:

M11 = |a22 a23; a32 a33|.

Now we do the same thing for every minor.
M12 = |a21 a23; a31 a33|    M13 = |a21 a22; a31 a32|
M21 = |a12 a13; a32 a33|    M22 = |a11 a13; a31 a33|    M23 = |a11 a12; a31 a32|
M31 = |a12 a13; a22 a23|    M32 = |a11 a13; a21 a23|    M33 = |a11 a12; a21 a22|

COFACTORS – The number Cij = (−1)^(i+j) · Mij is the cofactor of the element aij.

ADJOINT OF THE MATRIX (ADJUGATE) – The matrix of cofactors is

M = [C11 C12 C13; C21 C22 C23; C31 C32 C33] = [+M11 −M12 +M13; −M21 +M22 −M23; +M31 −M32 +M33],

where each Mij is the 2x2 determinant listed above, and the adjugate is its transpose: adj(A) = MT.

Note that it is easy to remember where to put the right sign (+ or −): start with + and then alternate −, +, −, +, etc.

INVERSE OF A MATRIX – Let A be a square matrix and B a matrix of the same size as A. If a matrix B can be found such that AB = BA = I, then A is said to be invertible (nonsingular; det(A) ≠ 0), and B is called the inverse of A. If there is no such matrix B, then A is not invertible (singular; det(A) = 0). The notation for the inverse of A is A−1; if B is the inverse of A, then B = A−1.

AB = BA = I, or AA−1 = A−1A = I.

The inverse of a matrix A (for any square matrix with det(A) ≠ 0) can be found using this formula:

A−1 = (1/det(A)) · adj(A).

The inverse of a 2x2 matrix A = [a b; c d] is

A−1 = (1/det(A)) · adj(A) = (1/(ad − bc)) · [d −b; −c a], if and only if det(A) ≠ 0.

a) A = [6 1; 5 2], A−1 = (1/(ad − bc)) · [d −b; −c a] = (1/(6·2 − 1·5)) · [2 −1; −5 6] = (1/7) · [2 −1; −5 6] = [2/7 −1/7; −5/7 6/7]

b) A = [1 5 0; 0 3 2; 1 0 2] (3x3), A−1 = ?

M11 = |3 2; 0 2| = 3·2 − 2·0 = 6    M12 = |0 2; 1 2| = −2    M13 = |0 3; 1 0| = −3
M21 = |5 0; 0 2| = 10    M22 = |1 0; 1 2| = 2    M23 = |1 5; 1 0| = −5
M31 = |5 0; 3 2| = 10    M32 = |1 0; 0 2| = 2    M33 = |1 5; 0 3| = 3

M = [+6 −(−2) +(−3); −10 +2 −(−5); +10 −2 +3] = [6 2 −3; −10 2 5; 10 −2 3]

adj(A) = MT = [6 −10 10; 2 2 −2; −3 5 3]

Help – The determinant of a 3x3 matrix (and only a 3x3 matrix) can be found using Sarrus' rule.
We write the first two columns of the determinant to the right of the determinant (in that order). Then we add the products of the diagonals going from top to bottom, and subtract the products of the diagonals going from bottom to top:

det(A) = |1 5 0; 0 3 2; 1 0 2| = 1·3·2 + 5·2·1 + 0·0·0 − 1·3·0 − 0·2·1 − 2·0·5 = 6 + 10 + 0 − 0 − 0 − 0 = 16.

Finally,

A−1 = (1/det(A)) · adj(A) = (1/16) · [6 −10 10; 2 2 −2; −3 5 3] = [6/16 −10/16 10/16; 2/16 2/16 −2/16; −3/16 5/16 3/16] = [3/8 −5/8 5/8; 1/8 1/8 −1/8; −3/16 5/16 3/16].

Rules for inverse (if A is invertible):
- (A−1)T = (AT)−1
- (A−1)−1 = A
- (AB)−1 = B−1A−1
- (kA)−1 = k−1A−1 = (1/k)A−1, where k is a nonzero scalar
- (A1A2 … An)−1 = An−1 … A2−1A1−1

References: The following work was referred to during the creation of this handout: Elementary Linear Algebra, Applications Version, 11th Ed., Howard Anton, Chris Rorres.
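The whole minors-cofactors-adjugate computation above can be written as a short script. This is a sketch in plain Python for the 3x3 example worked in the handout (the helper names `minor` and `inverse3` are ours); `Fraction` keeps the entries exact.

```python
from fractions import Fraction

def minor(A, i, j):
    # Delete row i and column j, then take the 2x2 determinant ad - bc.
    sub = [[x for c, x in enumerate(row) if c != j]
           for r, row in enumerate(A) if r != i]
    return sub[0][0] * sub[1][1] - sub[0][1] * sub[1][0]

def inverse3(A):
    # Cofactors: C_ij = (-1)^(i+j) * M_ij
    cof = [[(-1) ** (i + j) * minor(A, i, j) for j in range(3)] for i in range(3)]
    adj = [list(col) for col in zip(*cof)]            # adj(A) = transpose of cofactor matrix
    det = sum(A[0][j] * cof[0][j] for j in range(3))  # cofactor expansion along row 1
    return [[Fraction(x, det) for x in row] for row in adj]

A = [[1, 5, 0], [0, 3, 2], [1, 0, 2]]
inv = inverse3(A)
print(inv[0])  # [Fraction(3, 8), Fraction(-5, 8), Fraction(5, 8)]
```

Multiplying A by the result reproduces the identity matrix, which is the defining property AA−1 = I.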
https://www.uptodate.com/contents/overview-of-rodenticide-poisoning/abstract/80
Medline® Abstract for Reference 80 of 'Overview of rodenticide poisoning' – UpToDate

TI: Barium toxicity and the role of the potassium inward rectifier current.
AU: Bhoelan BS, Stevering CH, van der Boog AT, van der Heyden MA
SO: Clin Toxicol (Phila). 2014 Jul;52(6):584-93. Epub 2014 Jun 6.

INTRODUCTION: Barium is a stable divalent earth metal and is highly toxic upon acute and chronic exposure. Barium is present in many products and is involved in a number of industrial processes. Barium targets the potassium inward rectifier channels (IRCs) of the KCNJx gene family. Extracellular barium enters and strongly binds the potassium selectivity filter region, resulting in blockade of the potassium-conducting pore. IRCs are involved in numerous physiological processes of the human body, and the most barium-sensitive IRCs are highly expressed in all muscle types.

OBJECTIVE: Our purpose was to correlate the clinical outcome of acute barium poisoning in man to current knowledge of IRC function.

METHODOLOGY: The primary literature search was performed using Medline, Scopus and Google Scholar with the search terms "barium AND poisoning", "barium AND intoxication" and "barium AND case report", and retrieved publications from 1945 through 2012. Additional case reports were retrieved based on the reference lists of the primary hits. Duplicate publications, or publications presenting identical cases, were omitted. A total of 39 case reports on acute barium poisoning containing 226 human subjects were identified for review.

RESULTS: BaCO3 was the most frequent source and food the most frequent mode of poisoning.
Patients suffered from gastrointestinal (vomiting, diarrhea), cardiovascular (arrhythmias, hypertension), neuromuscular (abnormal reflexes, paralysis), respiratory (respiratory arrest/failure) and metabolic (hypokalemia) symptoms. Severe hypokalemia (<2.5 mM) was observed from barium serum concentrations greater than or equal to 0.0025 mM. Review of the ECG outcomes demonstrated ventricular extrasystoles, ST changes and profound U-waves to be strongly associated with poisoning. The most common treatment modalities were gastric lavage, oral sulfates, potassium i.v. and cardiorespiratory support. 27 patients (12%) died from barium poisoning.

CONCLUSIONS: Barium is a potent, non-specific inhibitor of the potassium IRC current and affects all types of muscle at micromolar concentrations. Gastrointestinal symptoms frequently occur early in the course of barium poisoning. Hypokalemia resulting from an intracellular shift of potassium, together with the direct effect of barium at the potassium channels, explains the cardiac arrhythmias and muscle weakness which commonly occur in barium poisoning. Treatment of barium poisoning is mainly supportive. Orally administered sulfate salts, given to form insoluble barium sulfate in the intestinal tract, and potassium supplementation have potential but unproven benefit.

AD: Participant in the Honours Program CRU2006 Bachelor, University Medical Center Utrecht, Utrecht, The Netherlands.
PMID: 24905573
https://chemistry.stackexchange.com/questions/101537/ring-expansion-and-final-structure
organic chemistry - Ring Expansion And Final Structure - Chemistry Stack Exchange
Ring Expansion And Final Structure

Asked 7 years ago. Modified 4 years, 1 month ago. Viewed 471 times.

[If you don't like to read the question, just look at the 3 pictures; the first one has the carbocation which has to be stabilised by ring expansions. The other 2 are my predicted structures, so please confirm which one is correct, or if both are wrong please provide the correct one.]

Please help me in expanding the rings in the following structure:

I tried a lot, expanding the rings twice to get this:

But the intermediate in this structure is unstable as it has a bridged carbocation. Instead we can have an intermediate with sigma-pi resonance stabilising the carbocation by expanding the ring once, as follows:

I can't wrap my head around which one is correct.

Tags: organic-chemistry, stability, carbocation
Edited Aug 19, 2021 at 17:11 by CommunityBot; asked Sep 10, 2018 at 1:33 by user61535.

1 Answer

There are mainly two factors governing this phenomenon of ring expansion and rearrangement of a carbocation: the stability factor and the thermodynamic factor. When you have a carbocation which can rearrange within itself, it will definitely prefer a rearranged structure which has the highest stability. Not only that, the carbocation also needs some energy for the rearrangements to occur. If it needs multiple steps of rearrangement to attain a structure which has decent stability, but the loss in potential energy can't compensate for the energy required for the rearrangement, it is not thermodynamically possible for the molecule to undergo that many steps.
Based on these two factors, we can at least logically conclude which will be the final rearranged product. In both of your predicted compounds, first there will be a rearrangement by hydride shift. Now, it is highly difficult for a molecule to undergo two rearrangements at once unless it gains extremely high stability from the second one. In this case it does, by forming a cyclopropylmethyl cation, which is unusually stable due to non-classical resonance, and thus ring expansion occurs here. So, by these two steps you have got your first possible carbocation. Now, if we look at the third possible rearrangement of ring expansion, the energy requirement is extremely high, as it becomes more and more difficult for a single reaction intermediate to undergo rearrangement as the number of rearrangements increases. Also, the newly formed carbocation is tertiary, which is obviously less stable than a cyclopropylmethyl carbocation. Thus, the last ring expansion is energetically less favourable. So, I think the correct one should be the image you have drawn at the end of your question, as further rearrangements of that system are both thermodynamically and stability-wise less feasible.

Answered Sep 10, 2018 at 12:40 by Soumik Das.

Comments:

PCK (Sep 10, 2018): But carbocation stability isn't the only factor to consider: relief of the torsional and bond-angle strain may provide the additional driving force for the final ring expansion step?

Soumik Das (Sep 10, 2018): But that factor isn't strong enough to compensate for the loss of stability of the cyclopropylmethyl cation, which is much more dominating.

PCK (Sep 11, 2018): I'm not sure why you think the final compound is so unstable.
It's still tertiary, although the adjacent substituents are maybe marginally less electron-rich. It absolutely is not a bridgehead carbocation, since the compound is a fused rather than bridged bicyclic.
http://www.columbia.edu/itc/sipa/math/slope_linear.html
Slope of Linear Functions

The concept of slope is important in economics because it is used to measure the rate at which changes are taking place. Economists often look at how things change and at how one item changes in response to a change in another item. It may show, for example, how demand changes when price changes, how consumption changes when income changes, or how quickly sales are growing. Slope measures the rate of change in the dependent variable as the independent variable changes. The greater the slope, the steeper the line.

Consider the linear function:

y = a + bx

b is the slope of the line. Slope means that a unit change in x, the independent variable, will result in a change in y by the amount of b.

slope = change in y / change in x = rise / run

Slope shows both steepness and direction. With positive slope the line moves upward when going from left to right. With negative slope the line moves down when going from left to right. If two linear functions have the same slope they are parallel.

Slopes of linear functions

The slope of a linear function is the same no matter where on the line it is measured. (This is not true for non-linear functions.)

An example of the use of slope in economics

Demand might be represented by a linear demand function such as

Q(d) = a - bP

where Q(d) represents the demand for a good and P represents the price of that good. Economists might consider how sensitive demand is to a change in price.

- A typical downward-sloping demand curve says that demand declines as price rises.
- A special case is a horizontal demand curve, which says that at any price above P demand drops to zero. An example might be a competitor's product which is considered just as good.
- Another special case is a vertical demand curve, which says that regardless of the price the quantity demanded is the same. An example might be medicine, as long as the price does not exceed what the consumer can afford.
Supply might be represented by a linear supply function such as

Q(s) = a + bP

where Q(s) represents the supply of a good and P represents the price of that good. Economists might consider how sensitive supply is to a change in price. A typical upward-sloping supply curve says that supply rises as price rises.

Another example of the use of slope in economics

The demand for a breakfast cereal can be represented by the following equation, where p is the price per box in dollars:

d = 12,000 - 1,500p

This means that for every increase of $1 in the price per box, demand decreases by 1,500 boxes.

Calculating the slope of a linear function

Slope measures the rate of change in the dependent variable as the independent variable changes. Mathematicians and economists often use the Greek capital letter delta, Δ, as the symbol for change. Slope shows the change in y (the change on the vertical axis) versus the change in x (the change on the horizontal axis). It can be measured as the ratio of the change between any two values of y versus the change between the corresponding values of x:

slope = (y2 - y1) / (x2 - x1)

Example 1: Find the slope of the line segment connecting the points (1,1) and (2,4).
x1 = 1, y1 = 1, x2 = 2, y2 = 4
slope = (4 - 1) / (2 - 1) = 3/1 = 3

Example 2: Find the slope of the line segment connecting the points (-1,-2) and (1,6).
x1 = -1, y1 = -2, x2 = 1, y2 = 6
slope = (6 - (-2)) / (1 - (-1)) = 8/2 = 4

Example 3: Find the slope of the line segment connecting the points (-1,3) and (8,0).
x1 = -1, y1 = 3, x2 = 8, y2 = 0
slope = (0 - 3) / (8 - (-1)) = -3/9 = -1/3
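The rise-over-run calculation is easy to automate. This is a sketch in Python (the helper name `slope` is ours, not from the page), applied to the example points above and to the cereal demand curve.

```python
def slope(p1, p2):
    # Rise over run between two points (x1, y1) and (x2, y2).
    (x1, y1), (x2, y2) = p1, p2
    return (y2 - y1) / (x2 - x1)

print(slope((1, 1), (2, 4)))    # 3.0
print(slope((-1, -2), (1, 6)))  # 4.0
print(slope((-1, 3), (8, 0)))   # -0.333... (that is, -1/3)

# The cereal demand curve d = 12000 - 1500p has slope -1500,
# no matter which two prices we pick:
print(slope((1, 12000 - 1500 * 1), (2, 12000 - 1500 * 2)))  # -1500.0
```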
https://textbooks.cs.ksu.edu/cc210/05-loops/07-java/07-accumulator/index.print.html
Accumulator Pattern

One common use for loops is the accumulator pattern. An accumulator simply computes some value from a large amount of data, such as the sum, maximum, minimum, average, or count. In programming, a pattern is simply a common structure that is used to solve a recurring problem in code. Since many programs end up needing a loop that acts as an accumulator, we've developed a common pattern that can be used in our code to solve this problem. The simplest example of the accumulator pattern is a program that will sum up a set of values. Consider this code:

```java
import java.util.Scanner;

public class Accumulator {
    public static void main(String[] args) {
        Scanner scanner = new Scanner(System.in);

        // Initialize accumulator variables
        int sum = 0;

        // Read and parse input
        while (scanner.hasNextLine()) {
            String input = scanner.nextLine();
            if (input.trim().length() == 0) {
                break;
            }
            int x = Integer.parseInt(input);

            // Update accumulator variables
            sum += x;
        }

        // Display results
        System.out.println("The sum of these values is " + sum);
    }
}
```

In this example, we see the general structure for the accumulator pattern:

- Before the loop, we initialize any accumulator variables to their default values. Since we are computing the sum, we should start at 0. However, when computing other values, such as the product, we may need to initialize these values to 1, -1, or some other value. It doesn't always make sense to start at 0.
- Inside of the loop, we read the input and parse it. Then, we update the accumulator variables before repeating the loop. Many times we also include a conditional statement here and only update the variables if the Boolean condition is true. For example, we could easily modify this program to only sum up the even values using a conditional statement.
- After the loop ends, we can then display the accumulated results.
In some cases, such as when we are computing the average of a list of values, we may have to perform some final calculations here before displaying the result. We’ll see this pattern appear many times in our programs from this point onward, so it is helpful to make note of it and observe when it is used in practice. 6.0.0 Last modified by: Russell Feldhausen Jul 13, 2023
https://physics.stackexchange.com/questions/144066/quantised-angular-momentum
quantum mechanics - Quantised Angular Momentum? - Physics Stack Exchange
Quantised Angular Momentum?

Asked Oct 31, 2014. Viewed 790 times. Score: 1.

So when learning about the Bohr model of hydrogen and de Broglie waves, it was shown that treating the electron of hydrogen as a de Broglie wave results in the relationship L = nℏ, n ∈ ℕ. However, when learning about the azimuthal quantum number, it was stated that L = √(ℓ(ℓ+1)) ℏ. So how come in the ground state (n = 1, ℓ = 0), these two equations give different values for angular momentum? I feel like I'm missing something really important here. If it's the case that the Bohr model doesn't accurately describe the angular momentum of the electron in the ground state, why is the angular momentum zero?

Tags: quantum-mechanics, angular-momentum, atomic-physics, hydrogen, orbitals

(asked Oct 31, 2014 by Muster Mark)

Comments:
- You're missing nothing. The Bohr model is false, and doesn't correctly describe the hydrogen atom. (ACuriousMind)
- Oh ok, the textbook didn't really make that clear. I knew the Bohr model wasn't complete, but I didn't expect it to be this inconsistent with quantum mechanics. (Muster Mark)
- Hm... what do you mean "why is the angular momentum zero"? Solving the hydrogen atom quantum mechanically for the allowed states, it just turns out that there are states with ℓ = 0. What sort of reason would you expect?
(Note that, quantumly, you should not think about electrons actually orbiting the nucleus.) (ACuriousMind)
- Well, the reason why I didn't understand why there were states with L = 0 was because the second equation was just presented to me without justification. However, I'll be sure to look up how it arises from the Schrödinger equation. (Muster Mark)
- This is a well-known deficiency of the Bohr model, a pedagogical dilemma resolved by J. Dahl and M. Springborg, Mol. Phys. 47 (1982) 1001-1019, and especially their appendix. Indeed, the Wigner transform (inverse Weyl transform) of the square of the quantum angular momentum L·L turns out to be l² − 3ℏ²/2, where l is the classical quantity, significantly for the ground-state Bohr orbit. (Cosmas Zachos)

3 Answers

Answer 1 (score 2, by Rol, Oct 26, 2021):

TL;DR: √(l(l+1)) → l for large values of l, but the largest value l can take within an orbital is l = n − 1, and n − 1 ≈ n for large values of n. Thus √(l(l+1)) ℏ → nℏ for l, n ≫ 1.

Long answer: The Bohr model was a bridge between Rutherford's model and the quantum mechanical atomic model we know today. The greatest achievement of the Bohr model is the prediction of the QM energy levels in hydrogen all the way down to the ground state n = 1, something that might qualify as a coincidence, but it is more like the Bohr quantization rules being educated postulates. In this sense, the Bohr model is less "wrong" than classical physics. Nevertheless, there is the correspondence principle (also by Bohr) from the "correct" quantum world to this "wrong" classical one.
Furthermore, while not being a correspondence principle as such, it is not surprising that a few aspects of the QM theory can be put in correspondence with Bohr's angular momentum quantization L = nℏ in some cases where Bohr's model did a good job. Ultimately, this viewpoint also helps with a student's intuition for QM. One example of this correspondence (QM → L = nℏ) applies to the rigid rotor of moment of inertia I. While the full QM solution gives E_l = ℏ²l(l+1)/(2I), Bohr's model predicts E_n = ℏ²n²/(2I). In conclusion, it is not surprising that Bohr's angular momentum rule coincides with the QM angular momentum for large quantum numbers. Although not correct in general, it serves as a way to reconcile the old Bohr model in particular cases with the larger QM. In this way, Bohr's model is not seen as being just so inconsistent with QM.

Answer 2 (score 0, by Sofia, Nov 11, 2014):

In your formulas n doesn't have the same meaning. The first formula means that the orbital angular momentum is an integer (or zero) multiple of ℏ. But for a level with principal quantum number n, the angular momentum varies from (n − 1)ℏ down to 0, not from nℏ down to 0. Thus, you don't have a contradiction. See the page in Wikipedia and go to the section "Complex orbitals".

Answer 3 (score 0):
For l = 0 the potential has its minimum at the nucleus, which physically means that our electron in the H-atom should fall into the nucleus. Actually, this doesn't happen in QM due to the Heisenberg uncertainty principle.

(answered Nov 3, 2014 by abdul)
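The correspondence claimed in the top answer, √(l(l+1)) ℏ → nℏ for large quantum numbers, is easy to check numerically. A minimal sketch (not from the thread; plain Python, working in units of ℏ):

```python
import math

# Compare Bohr's L = n (in units of hbar) with the QM value
# L = sqrt(l(l+1)) for the largest l allowed in shell n, namely l = n - 1.
for n in (1, 10, 100, 1000):
    l = n - 1
    qm = math.sqrt(l * (l + 1))
    print(n, qm, qm / n)
```

For n = 1 the QM value is 0 (the ℓ = 0 ground state carries no orbital angular momentum), while for n = 1000 the ratio is already within about 0.05% of 1, which is the correspondence-principle point made in the answers.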
https://www.verbix.com/webverbix/go.php?D1=9&T1=nolo
Latin verb 'nolo' conjugated

Dictionary lookup word (Ind. Present 1.sg.): nōlō

Active Nominal Forms
- Infinitive: nōlle
- Present participle: nōlēns (gen. nōlentis)
- Future participle: (none)
- Gerund: nōlendum
- Gerundive: nōlendus

Passive Nominal Forms: (none; nōlō has no passive forms)

Active Indicative
- Present: ego nōlō, tū nōn vīs, is nōn vult, nōs nōlumus, vōs nōn vultis, iī nōlunt
- Imperfect: ego nōlēbam, tū nōlēbās, is nōlēbat, nōs nōlēbāmus, vōs nōlēbātis, iī nōlēbant
- Future I: ego nōlam, tū nōlēs, is nōlet, nōs nōlēmus, vōs nōlētis, iī nōlent
- Perfect: ego nōluī, tū nōluistī, is nōluit, nōs nōluimus, vōs nōluistis, iī nōluērunt
- Pluperfect: ego nōlueram, tū nōluerās, is nōluerat, nōs nōluerāmus, vōs nōluerātis, iī nōluerant
- Future II: ego nōluerō, tū nōlueris, is nōluerit, nōs nōluerimus, vōs nōlueritis, iī nōluerint

Active Subjunctive
- Present: ego nōlim, tū nōlīs, is nōlit, nōs nōlīmus, vōs nōlītis, iī nōlint
- Imperfect: ego nōllem, tū nōllēs, is nōllet, nōs nōllēmus, vōs nōllētis, iī nōllent
- Perfect: ego nōluerim, tū nōlueris, is nōluerit, nōs nōluerimus, vōs nōlueritis, iī nōluerint
- Pluperfect: ego nōluissem, tū nōluissēs, is nōluisset, nōs nōluissēmus, vōs nōluissētis, iī nōluissent

Imperative
- Imperative I: tū nōlī, vōs nōlīte
- Imperative II: tū nōlītō, is nōlītō, vōs nōlītōte, iī nōluntō

Verbs conjugated like 'nolo': malo, volo.
Verbs similar to 'nolo': colo, dolo, noto, novo, volo, alo, avolo, balo, celo, coeo.
Translations: (none)

Etymology: Univerbation of ne- ("not") + volō ("to want").
Sample Sentences

- tribum Levi noli numerare neque ponas summam eorum cum filiis Israhel
- nolite perdere populum Caath de medio Levitarum
- et ille noli inquit nos relinquere tu enim nosti in quibus locis per desertum castra ponere debeamus et eris ductor noster
- nolite rebelles esse contra Dominum neque timeatis populum terrae huius quia sicut panem ita eos possumus devorare recessit ab illis omne praesidium Dominus nobiscum est nolite metuere
- nolite ascendere non enim est Dominus vobiscum ne corruatis coram inimicis vestris Amalechites et Chananeus ante vos sunt quorum gladio corruetis eo quod nolueritis adquiescere Domino nec erit Dominus vobiscum
- dixit ad turbam recedite a tabernaculis hominum impiorum et nolite tangere quae ad eos pertinent ne involvamini in peccatis eorum
- qui concedere noluit ut transiret Israhel per fines suos quin potius exercitu congregato egressus est obviam in desertum et venit in Iasa pugnavitque contra eum
- dixitque Deus ad Balaam noli ire cum eis neque maledicas populo quia benedictus est
- reversi principes dixerunt ad Balac noluit Balaam venire nobiscum
- si videbunt homines isti qui ascenderunt ex Aegypto a viginti annis et supra terram quam sub iuramento pollicitus sum Abraham Isaac et Iacob et noluerunt sequi me
- qui si nolueritis sequi eum in solitudine populum derelinquet et vos causa eritis necis omnium
- sin autem noluerint transire vobiscum in terram Chanaan inter vos habitandi accipiant loca
- sin autem nolueritis interficere habitatores terrae qui remanserint erunt vobis quasi clavi in oculis et lanceae in lateribus et adversabuntur vobis in terra habitationis vestrae
- tam filiis Israhel quam advenis atque peregrinis ut confugiat ad eas qui nolens sanguinem fuderit
- carissimi nolite peregrinari in fervore qui ad temptationem vobis fit quasi novi aliquid vobis contingat
- in Geth nolite adnuntiare lacrimis ne ploretis in domo Pulveris pulvere vos conspergite
- nolite credere amico et nolite confidere in duce ab ea quae dormit in sinu tuo custodi claustra oris tui
- et viduam et pupillum et advenam et pauperem nolite calumniari et malum vir fratri suo non cogitet in corde suo
- et noluerunt adtendere et verterunt scapulam recedentem et aures suas adgravaverunt ne audirent
- et erit sicut eratis maledictio in gentibus domus Iuda et domus Israhel sic salvabo vos et eritis benedictio nolite timere confortentur manus vestrae

Vulgate Verses

(Genesis 19:7) nolite quaeso fratres mei nolite malum hoc facere
And said, I pray you, brethren, do not so wickedly.

(Genesis 24:5) respondit servus si noluerit mulier venire mecum in terram hanc num reducere debeo filium tuum ad locum de quo egressus es
And the servant said unto him, Peradventure the woman will not be willing to follow me unto this land: must I needs bring thy son again unto the land from whence thou camest?

(Genesis 24:8) sin autem noluerit mulier sequi te non teneberis iuramento filium tantum meum ne reducas illuc
And if the woman will not be willing to follow thee, then thou shalt be clear from this my oath: only bring not my son thither again.

(Genesis 24:39) ego vero respondi domino meo quid si noluerit venire mecum mulier
And I said unto my master, Peradventure the woman will not follow me.

(Genesis 34:17) sin autem circumcidi nolueritis tollemus filiam nostram et recedemus
But if ye will not hearken unto us, to be circumcised; then will we take our daughter, and we will be gone.

(Genesis 42:22) e quibus unus Ruben ait numquid non dixi vobis nolite peccare in puerum et non audistis me en sanguis eius exquiritur
And Reuben answered them, saying, Spake I not unto you, saying, Do not sin against the child; and ye would not hear? therefore, behold, also his blood is required.
(Genesis 43:23) at ille respondit pax vobiscum nolite timere Deus vester et Deus patris vestri dedit vobis thesauros in sacculis vestris nam pecuniam quam dedistis mihi probatam ego habeo eduxitque ad eos Symeon
And he said, Peace be to you, fear not: your God, and the God of your father, hath given you treasure in your sacks: I had your money. And he brought Simeon out unto them.

(Genesis 45:5) nolite pavere nec vobis durum esse videatur quod vendidistis me in his regionibus pro salute enim vestra misit me Deus ante vos in Aegyptum
Now therefore be not grieved, nor angry with yourselves, that ye sold me hither: for God did send me before you to preserve life.

(Genesis 50:21) nolite metuere ego pascam vos et parvulos vestros consolatusque est eos et blande ac leniter est locutus
Now therefore fear ye not: I will nourish you, and your little ones. And he comforted them, and spake kindly unto them.

(II Samuel 1:20) nolite adnuntiare in Geth neque adnuntietis in conpetis Ascalonis ne forte laetentur filiae Philisthim ne exultent filiae incircumcisorum
Tell it not in Gath, publish it not in the streets of Askelon; lest the daughters of the Philistines rejoice, lest the daughters of the uncircumcised triumph.
https://www.mheducation.com/unitas/school/explore/five-steps/sample-student-unit-5-steps-precalculus.pdf
Unit 2: Exponential and Logarithmic Functions

IN THIS CHAPTER

Summary: This chapter introduces arithmetic and geometric sequences and shows their relationship with linear and exponential functions. It also introduces the composition of functions and the inverse of a function. Using inverses, the logarithmic function is introduced, along with properties and graphs of exponential and logarithmic functions. Modeling aspects of contextual scenarios are also examined.

Key Ideas
✪ A sequence is an ordered list of numbers.
✪ Arithmetic sequences are sequences that have a common difference between terms.
✪ Geometric sequences are sequences that have a common ratio between terms.
✪ Arithmetic and geometric sequences are similar to linear and exponential functions.
✪ Functions can be combined using composition.
✪ Inverse functions are essential to solving equations and inequalities.
✪ Properties of exponential functions and their inverse function logarithms can be used to solve equations and inequalities.
✪ Exponential and logarithmic functions can be used to model many phenomena.

Sequences

A sequence is an ordered list of numbers that often follow a specific pattern or function. Each number in a sequence is called a term. Each term has a whole number position such as first, second, or third.

Example: The first 10 terms of a sequence are 1, 1, 2, 3, 5, 8, 13, 21, 34, 55. Find the next 3 terms.

Solution: The terms of the sequence are found by adding together the two preceding numbers in the sequence: 1 + 1 = 2, 1 + 2 = 3, 2 + 3 = 5, and so on.
Term 11 = term 9 + term 10 = 34 + 55 = 89
Term 12 = term 10 + term 11 = 55 + 89 = 144
Term 13 = term 11 + term 12 = 89 + 144 = 233

Fun Fact: The preceding sequence is known as the Fibonacci Sequence, named after Leonardo of Pisa, later known as Fibonacci. Many real-world illustrations of the Fibonacci Sequence are found in nature.
If you count the seed spirals of a sunflower in one direction, you will get the numbers in the Fibonacci Sequence.

Arithmetic Sequences

An arithmetic sequence is a sequence that has successive terms with a constant rate of change, or common difference. The general term of an arithmetic sequence is a_n = a_0 + dn, where a_0 is the initial value and d is the common difference. An alternate form is a_n = a_k + d(n − k), where a_k is the kth term of the sequence.

Example: Write a formula for the sequence 6, 10, 14, 18, 22, . . . , then use that formula to find the 100th term of the sequence.

Solution: We first need to determine if the sequence is arithmetic by finding the difference between successive terms.
The difference between term 2 and term 1 is 10 − 6 = 4.
The difference between term 3 and term 2 is 14 − 10 = 4.
The difference between term 4 and term 3 is 18 − 14 = 4.
The difference between term 5 and term 4 is 22 − 18 = 4.
Because the differences are the same, it is an arithmetic sequence. Next, substitute values into the formula a_n = a_0 + dn. The initial value a_0 is found by subtracting the difference d from the first term in the sequence: a_0 = a_1 − d = 6 − 4 = 2. Therefore, the formula is a_n = 2 + 4n. Term 100, or a_100 = 2 + 4(100) = 402.

Geometric Sequences

A geometric sequence is a sequence that has successive terms with a constant proportional change, or common ratio. The general term of a geometric sequence is g_n = g_0 r^n, where g_0 is the initial value and r is the common ratio. An alternate form is g_n = g_k r^(n−k), where g_k is the kth term of the sequence.

Example: Write a formula for the sequence 2, −6, 18, −54, 162, . . . . Use that formula to find the 12th term of the sequence.

Solution: We first need to determine if the sequence is geometric by finding the ratio between successive terms.
The ratio between term 2 and term 1 is −6/2 = −3.
The ratio between term 3 and term 2 is 18/(−6) = −3.
The ratio between term 4 and term 3 is −54/18 = −3.
The ratio between term 5 and term 4 is 162/(−54) = −3.
Because the ratios are all the same, it is a geometric sequence. Next, substitute values into the formula g_n = g_0 r^n. The initial value g_0 is found by dividing the first term in the sequence by the ratio r: g_0 = g_1/r = 2/(−3) = −2/3. Therefore, the formula is g_n = −(2/3)(−3)^n. The 12th term is g_12 = −(2/3)(−3)^12 = −(2/3)(531,441) = −354,294.

Arithmetic sequences are based on addition, while geometric sequences are based on multiplication.

Change in Linear and Exponential Functions

Linear functions of the form f(x) = b + mx are similar to arithmetic sequences of the form a_n = a_0 + dn, because both can be expressed as an initial value (b or a_0) plus repeated addition of a constant rate of change, the slope (m or d). Similar to arithmetic sequences of the form a_n = a_k + d(n − k), which are based on a known difference, d, and a kth term, linear functions can be expressed in the form f(x) = y_i + m(x − x_i) based on a known slope, m, and a point (x_i, y_i).

Exponential functions of the form f(x) = ab^x are similar to geometric sequences of the form g_n = g_0 r^n, as both can be expressed as an initial value (a or g_0) times repeated multiplication by a constant proportion (b or r). Similar to geometric sequences of the form g_n = g_k r^(n−k), which are based on a known ratio, r, and a kth term, exponential functions can be expressed in the form f(x) = y_i r^(x − x_i) based on a known ratio, r, and a point (x_i, y_i).

It should be noted that sequences and their corresponding functions may have different domains. Specifically, linear and exponential function domains are all real numbers, but sequences have domains of whole numbers. Over equal-length input-value intervals, if the output values of a function change at a constant rate, then the function is linear; if the output values change proportionally, then the function is exponential.
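The two general-term formulas can be turned into one-line functions. A quick sketch in Python (the function names are my own), reproducing the arithmetic and geometric worked examples above:

```python
def arithmetic_term(a0, d, n):
    """nth term of an arithmetic sequence: a_n = a_0 + d*n."""
    return a0 + d * n

def geometric_term(g0, r, n):
    """nth term of a geometric sequence: g_n = g_0 * r**n."""
    return g0 * r ** n

# Arithmetic example 6, 10, 14, 18, 22, ...: a_0 = 2, d = 4
print(arithmetic_term(2, 4, 100))           # 402

# Geometric example 2, -6, 18, -54, 162, ...: g_0 = -2/3, r = -3
# (rounded because -2/3 is not exact in floating point)
print(round(geometric_term(-2/3, -3, 12)))  # -354294
```

The same functions evaluated at n = 1 return the first terms (6 and 2), which is a quick way to confirm the initial values a_0 and g_0 were computed correctly.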
Also of note is that arithmetic sequences, linear functions, geometric sequences, and exponential functions all share the property that they can be determined by two distinct sequence or function values.

Example 1: If the 6th term of an arithmetic sequence is 28 and the 15th term is 73, find the 30th term of the sequence.

Solution: Substituting n = 6 and a_6 = 28 into the formula a_n = a_0 + dn, we get the equation 28 = a_0 + 6d, and substituting n = 15 and a_15 = 73 into the same formula, we get the equation 73 = a_0 + 15d. We can subtract the two equations to eliminate a_0:
73 = a_0 + 15d
−(28 = a_0 + 6d)
45 = 9d, so d = 5.
We can take the first (or second) equation and substitute d = 5 to find a_0: 28 = a_0 + 6(5), so a_0 = −2. The formula for the arithmetic sequence is a_n = −2 + 5n. Therefore, the 30th term is a_30 = −2 + 5(30) = 148.

Example 2: If the 5th term of a geometric sequence is 4 and the 10th term is 1/8, find the 20th term of the sequence.

Solution: Substituting n = 5 and g_5 = 4 into the formula g_n = g_0 r^n, we get the equation 4 = g_0 r^5, and substituting n = 10 and g_10 = 1/8 into the formula, we get the equation 1/8 = g_0 r^10. We can divide the two equations to eliminate g_0:
(g_0 r^10)/(g_0 r^5) = (1/8)/4 → r^5 = 1/32 → r = (1/32)^(1/5) = 1/2.
We can take the first (or second) equation and substitute r = 1/2 to find g_0: 4 = g_0 (1/2)^5 → 4 = g_0 (1/32) → g_0 = 128. The formula for the geometric sequence is g_n = 128(1/2)^n. Therefore, the 20th term is g_20 = 128(1/2)^20. Rewritten as powers of 2, this is g_20 = 2^7 · 2^(−20) = 2^(−13) = 1/8,192.

Exponential Functions

Exponential functions were introduced along with geometric sequences in the previous section. Let's take a closer look at them now. The general form of an exponential function is f(x) = ab^x, with initial value a, where a ≠ 0, and base b, where b > 0 and b ≠ 1.
When a > 0 and b > 1, the exponential function is known as exponential growth. When a > 0 and 0 < b < 1, the exponential function is known as exponential decay.

Graphs of Exponential Functions

Let's look at a graph of the exponential function f(x) = 2(7)^x and identify some of the key characteristics.

[Graph of f(x) = 2(7)^x]

CHARACTERISTIC | VALUE
Domain | All reals
Range | Positive reals
Intercept(s) | (0, 2)
Increasing/Decreasing | Always increasing
Concavity | Always concave up
Extrema | None
Point of Inflection | None
Asymptote | Horizontal at y = 0
End Behavior | lim(x→−∞) f(x) = 0 and lim(x→∞) f(x) = ∞

For an exponential function in general form, as the input values increase or decrease without bound, the output values will increase or decrease without bound or will get arbitrarily close to zero. That is, for an exponential function in general form, lim(x→±∞) ab^x = ∞, lim(x→±∞) ab^x = −∞, or lim(x→±∞) ab^x = 0.

Example: When Brody entered kindergarten, his grandparents gave him a certificate of deposit (CD) for $5,000 to help him pay for college. If the bank pays an annual rate of 3.5% compounded yearly, how much will Brody have when he starts college 13 years later?

Solution: Substitute the value 5,000 for a, 1.035 for b (3.5% must be converted to a decimal, which is 0.035; then 1 must be added because each period of time we have an increase of the initial amount), and 13 for x. This results in the equation y = 5,000(1.035)^13. Therefore, Brody will have $7,819.78 when he starts college.

Properties of Exponential Functions

Exponential expressions can be rewritten using the properties of exponents.
PROPERTY | DEFINITION | EXAMPLE
Product Property | b^m · b^n = b^(m+n) | 7^4 · 7^5 = 7^(4+5) = 7^9
Quotient Property | b^m / b^n = b^(m−n) | 4^8 / 4^3 = 4^(8−3) = 4^5
Power Property | (b^m)^n = b^(mn) | (5^2)^3 = 5^(2·3) = 5^6
Negative Exponent Property | b^(−n) = 1/b^n | 8^(−3) = 1/8^3
Root Property | b^(1/k) = kth root of b, k a natural number | 3^(1/4) = fourth root of 3
Zero Power Property | b^0 = 1, b ≠ 0 | 2^0 = 1

Example: Sketch the graphs of f(x) = 10^x and g(x) = 1/10^x on the same axes. Analyze the graphs.

Solution: The graph of g(x) is a reflection image of the graph of f(x) over the y-axis, because g(x) = 1/10^x can be rewritten as g(x) = 10^(−x).
Domain: All reals.
Range: All positive real numbers.
Increasing/Decreasing: f is increasing over its entire domain, whereas g is decreasing over its entire domain.
Maxima/minima: None.
End behavior: lim(x→−∞) f(x) = 0, lim(x→∞) f(x) = ∞, lim(x→−∞) g(x) = ∞, lim(x→∞) g(x) = 0.
Model: f(x) is exponential growth, whereas g(x) is exponential decay.

[Graphs of f(x) = 10^x and g(x) = 10^(−x)]

Like π for circles, an important exponential base that occurs naturally in higher-order problems is the base e, known as the natural number. The value of e is approximately 2.71828.

[Graph of e^x]

Composition of Functions

The composition of functions f(x) and g(x) is the process of combining the two functions into a single function. If g(x) is the first function and f(x) is the second function, then it is represented as f(g(x)) or (f ∘ g)(x). The output values of g are used as input values of f. For this reason, the domain of the composite function is restricted to those input values of g for which the corresponding output value is in the domain of f.

x → g → g(x) → f → f(g(x))

Example 1: If f(x) = √x and g(x) = 2x − 1, find (f ∘ g)(x) and (g ∘ f)(x). State the domain of each composition.

Solution: f(g(x)) = f(2x − 1) = √(2x − 1).
We need to restrict the domain to only nonnegative numbers under the radical, or 2x − 1 ≥ 0. This is equivalent to {x : x ≥ ½}.
g(f(x)) = g(√x) = 2√x − 1. The domain is {x : x ≥ 0}.
Notice that the composition of functions is generally not commutative, meaning (f ∘ g)(x) ≠ (g ∘ f)(x).

Example 2
Find g(f(−2)) using the following tables.

x      f(x)          x      g(x)
−1     −2            0       5
−2      0            1       8
−3      2            2      11
−4      4            3      14

Solution: From the table of f(x), f(−2) = 0. So g(f(−2)) = g(0). From the table of g(x), g(0) = 5. Therefore, g(f(−2)) = 5.

Example 3
For the given two functions f(x) = kx − 4 and g(x) = kx + 6, if the two composite functions f(g(x)) and g(f(x)) are equal, find k.

Solution: First let's find f(g(x)) and g(f(x)).
f(g(x)) = f(kx + 6) = k(kx + 6) − 4 = k^2·x + 6k − 4
g(f(x)) = g(kx − 4) = k(kx − 4) + 6 = k^2·x − 4k + 6
Because f(g(x)) = g(f(x)), then k^2·x + 6k − 4 = k^2·x − 4k + 6, so 6k − 4 = −4k + 6, 10k = 10, which makes k = 1.

Inverse Functions

An inverse function can be thought of as a reverse mapping of the function. An inverse function, f⁻¹, maps the output values of a function, f, on its invertible domain to their corresponding input values; that is, if f(a) = b, then f⁻¹(b) = a. Alternately, on its invertible domain, if a function consists of input-output pairs (a, b), then the inverse function consists of input-output pairs (b, a). The domain may need to be restricted in order to make the function invertible.
The composition of a function f and its inverse function f⁻¹ is the identity function; that is, f(f⁻¹(x)) = f⁻¹(f(x)) = x.
On a function's invertible domain, the function's domain and range are the inverse function's range and domain. The inverse of the table of values of y = f(x) can be found by reversing the input-output pairs; that is, (a, b) corresponds to (b, a).
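The pair-reversal idea above can be illustrated with a small table in Python (the sample values below are hypothetical, chosen only so the function is one-to-one):

```python
# A table for an invertible function f, stored as input -> output pairs (a, b).
f_table = {-1: -2, 0: 1, 1: 4, 2: 7}

# The inverse table swaps each pair (a, b) to (b, a).
f_inverse_table = {b: a for a, b in f_table.items()}
print(f_inverse_table)  # {-2: -1, 1: 0, 4: 1, 7: 2}

# Composing the lookup with its inverse returns every original input.
assert all(f_inverse_table[f_table[a]] == a for a in f_table)
```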
The inverse of the graph of the function y = f(x) can be found by reversing the roles of the x- and y-axes; that is, by reflecting the graph of the function over the graph of the identity function h(x) = x. The inverse of the function can be found by determining the inverse operations to reverse the mapping. One method for finding the inverse of the function f is reversing the roles of x and y in the equation y = f(x), then solving for y = f⁻¹(x).

Example 1
If f(x) = 7x + 1, find f⁻¹(x).

Solution:
Step 1: Rewrite the equation as y = 7x + 1.
Step 2: Reverse the roles of x and y. This results in x = 7y + 1.
Step 3: Solve for y. This results in x − 1 = 7y, so y = (x − 1)/7. Therefore, f⁻¹(x) = (x − 1)/7.

Example 2
Show that f(x) = 2x^2 + 1 and g(x) = √((x − 1)/2) are inverses.

Solution:
Step 1: Find f(g(x)) = f(√((x − 1)/2)) = 2(√((x − 1)/2))^2 + 1 = 2((x − 1)/2) + 1 = (x − 1) + 1 = x, for x ≥ 1 because of the domain of g(x).
Step 2: Because f(x) is not one-to-one, that is, f(x) takes the same value of y at different values of x, the domain must be restricted in order to find the inverse. In this case, restricting the domain of f(x) to x ≥ 0 means we can find the inverse. Find g(f(x)) = g(2x^2 + 1) = √((2x^2 + 1 − 1)/2) = √(2x^2/2) = √(x^2) = x.
Because f(g(x)) = g(f(x)) = x, f and g are inverses on the appropriately restricted domains.

Logarithmic Expressions

Addition and subtraction are inverse operations. Multiplication and division are inverse operations. Exponentials and logarithms are inverse operations. The logarithmic expression log_b c is equal to the value that the base b must be exponentially raised to in order to obtain the value c. log_b c = a if and only if b^a = c, where a and c are constants, b > 0, and b ≠ 1. When the base of a logarithmic expression is not specified, it is understood to be the common logarithm with base 10 and written log x.
When the base of a logarithmic expression is e, it is referred to as the natural logarithm and written ln x.
On a logarithmic scale, each unit represents a multiplicative change of the base of the logarithm. For example, on a standard scale, the units might be 0, 1, 2, . . . , while on a logarithmic scale using logarithm base 10, the units might be 1, 10, 100, 1,000, . . . , corresponding to 10^0, 10^1, 10^2, 10^3, . . . .

Example 1
Find the value of log_2 32.
Solution: Rewrite the logarithm as an exponential expression. log_2 32 = x can be written as 2^x = 32. Because 2^5 = 32, this means x = 5. Therefore, log_2 32 = 5.

Example 2
Find the value of log_5 256.
Solution: Rewrite the logarithm as an exponential expression. log_5 256 = x can be written as 5^x = 256. Because 5^3 = 125 and 5^4 = 625, the value of x must be between 3 and 4. Using a calculator, it can be approximated that 5^3.445 ≈ 256. Therefore, log_5 256 ≈ 3.445.

Properties of Logarithms

The following table has several properties of logarithms that can be applied to solve logarithmic problems.

PROPERTY NAME             PROPERTY                                        GRAPHIC PROPERTY
Product Property          log_b(x·y) = log_b x + log_b y                  Every horizontal dilation of a logarithmic function, f(x) = log_b(kx), is equivalent to a vertical translation: f(x) = log_b(kx) = log_b k + log_b x = a + log_b x, where a = log_b k.
Quotient Property         log_b(x/y) = log_b x − log_b y
Power Property            log_b x^n = n·log_b x                           Raising the input of a logarithmic function to a power, f(x) = log_b x^k, results in a vertical dilation: f(x) = log_b x^k = k·log_b x.
Change of Base Property   log_b x = log_a x / log_a b, a > 0 and a ≠ 1    All logarithmic functions are vertical dilations of each other.

Example 1
Solve 15^x = 30.
Solution: Let's rewrite the exponential equation as a logarithmic equation: 15^x = 30 → x = log_15 30. We can use the change of base property to rewrite log_15 30 = log 30 / log 15. Using a calculator, log 30 / log 15 ≈ 1.256.
Therefore, x ≈ 1.256. You can use a calculator to confirm 15^1.256 ≈ 30.

Example 2
Use the properties to expand the expression log_b(5x^2/y).
Solution:
Step 1: Use the quotient property to rewrite: log_b(5x^2/y) = log_b(5x^2) − log_b y
Step 2: Use the product and power properties: = log_b 5 + log_b x^2 − log_b y = log_b 5 + 2·log_b x − log_b y

Inverses of Exponential Functions

Because f(x) = log_b x and g(x) = b^x are inverse functions, this means that when the functions are composed with one another the result is x, meaning f(g(x)) = g(f(x)) = x. If (s, t) is an ordered pair of the exponential function, then (t, s) is an ordered pair of the logarithmic function. Graphically, because the functions are inverses, the graph of the logarithmic function is a reflection of the graph of the exponential function over the graph of the identity function h(x) = x.

Example
Graph f(x) = 3^x and its inverse.
Solution: The inverse is f⁻¹(x) = log_3 x. Notice that the graphs are reflections across the line y = x. The y-intercept of f(x) = 3^x is (0, 1). The x-intercept of f⁻¹(x) = log_3 x is (1, 0).

Graphs of Logarithmic Functions

Let's look at a graph of the logarithmic function f(x) = log x and identify some of the key characteristics.

CHARACTERISTIC          VALUE
Domain                  All reals greater than 0
Range                   All reals
Intercept(s)            (1, 0)
Increasing/Decreasing   Always increasing
Concavity               Always concave down
Extrema                 None
Point of Inflection     None
Asymptote               Vertical at x = 0
End Behavior            lim(x→0⁺) f(x) = −∞ and lim(x→∞) f(x) = ∞

Exponential and Logarithmic Inequalities

Properties of exponents, properties of logarithms, and the inverse relationship between exponential and logarithmic functions can be used to solve equations and inequalities involving exponents and logarithms. When solving exponential and logarithmic equations found through analytical or graphical methods, the results should be examined for extraneous solutions precluded by the mathematical or contextual limitations.

Example 1
What are all values of x for which 2·log(x + 1) = log(x + 13)?
Solution:
Step 1: Set the equation equal to 0: 2·log(x + 1) − log(x + 13) = 0
Step 2: Rewrite using the power and quotient properties: log((x + 1)^2/(x + 13)) = 0
Step 3: Rewrite as an exponential equation: 10^0 = (x + 1)^2/(x + 13)
Step 4: Rewrite 10^0 as 1: 1 = (x + 1)^2/(x + 13)
Step 5: Multiply both sides by (x + 13): x + 13 = (x + 1)^2
Step 6: Replace (x + 1)^2 with x^2 + 2x + 1 and set equal to 0: x^2 + x − 12 = 0
Step 7: Factor, set each factor equal to 0, and solve: (x + 4)(x − 3) = 0, so x = −4 or x = 3
Step 8: Because the domain of a logarithmic function is all reals greater than 0, substituting x = −4 into 2·log(x + 1) yields 2·log(−3), which is not defined; therefore x = −4 is an extraneous solution. The value that solves the equation is x = 3.

Example 2
What are all values of x for which 2^x ≥ 100?
Solution:
Step 1: Rewrite the inequality as a logarithmic inequality: log_2 2^x ≥ log_2 100
Step 2: Use the power property and change of base property to rewrite: x ≥ log 100 / log 2
Step 3: Use a calculator to evaluate the logarithms: x ≥ 6.644
The solution is x ≥ 6.644.

Modeling

Two variables in a data set that demonstrate a slightly changing rate of change can be modeled by linear, quadratic, and exponential function models. Models can be compared based on contextual clues and applicability to determine which model is most appropriate.
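The two worked examples in the inequalities section above can be verified numerically; the sketch below (supplementary, not part of the original text) uses Python's math module:

```python
import math

# Example 1: confirm x = 3 satisfies 2*log(x + 1) = log(x + 13).
x = 3
assert math.isclose(2 * math.log10(x + 1), math.log10(x + 13))  # both sides equal log(16)
# x = -4 is extraneous: log10(-4 + 1) would require the log of a negative number.

# Example 2: boundary of 2**x >= 100 via the change of base property.
boundary = math.log10(100) / math.log10(2)
print(round(boundary, 3))  # 6.644
```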
A model is justified as appropriate for a data set if the graph of the residuals of a regression, the residual plot, appears without pattern. The error in the model is the difference between the predicted and actual values. Depending on the data set and context, it may be more appropriate to have an underestimate or overestimate for any given interval.

Exponential Function Context and Data Modeling

For an exponential model in general form f(x) = ab^x, the base of the exponent, b, can be understood as a growth factor in successive unit changes in the input values and is related to a percent change in context. An exponential function model can be constructed from an appropriate ratio and initial value or from two input-output pairs. The initial value and the base can be found by solving a system of equations resulting from the two input-output pairs. Exponential function models can be constructed by applying transformations to f(x) = ab^x based on characteristics of a contextual scenario or data set. They can be used to predict values for the dependent variable, depending on the contextual constraints on the domain. A constant may need to be added to the dependent variable values of a data set to reveal a proportional growth pattern.

Example 1
A new car sells for $38,500. The value of the car decreases by 17% annually. What will be the value of the car in 5 years?
Solution: The growth factor is 1 − 0.17, or 0.83. Using the general form of the exponential model yields f(5) = 38,500(0.83)^5. Therefore, the car will be worth $15,165.31 after 5 years.

Example 2
The number of bacteria on the fourth day of an experiment was 58. On the tenth day, the number increased to 368. Write an exponential model to represent the number of bacteria present d days after the experiment began.
Solution:
Step 1: Represent the model f(d) = ab^d with the points (4, 58) and (10, 368). This results in 58 = ab^4 and 368 = ab^10.
Step 2: Divide the second equation by the first to eliminate a: 368/58 = ab^10 / ab^4, so 6.345 ≈ b^6
Step 3: Solve for b: b = 6.345^(1/6) ≈ 1.361
Step 4: Substitute b into one of the equations to solve for a: 58 = a(1.361)^4, so a = 58/1.361^4 ≈ 16.923
Step 5: Because a represents the starting amount of bacteria, it should be represented by the whole number 17. Therefore, the exponential model is f(d) = 17(1.361)^d.
Values that are used when solving an equation should be stored in a graphing calculator so as not to introduce round-off error.

Example 3
The half-life of carbon-14 is known to be 5,720 years. If 400 grams of carbon-14 are stored for 1,000 years, how many grams will remain?
Solution: Half-life is the time required for a quantity to reduce to half of its value. When solving half-life problems, we use the formula A = A_0(1/2)^(t/H), where A = the amount remaining, A_0 = initial amount, t = time, and H = half-life. Substituting the known information into the formula results in A = 400(1/2)^(1,000/5,720). After 1,000 years, 354.350 grams will remain.

Logarithmic Function Context and Data Modeling

Logarithmic functions are inverses of exponential functions and can be used to model situations involving proportional growth, or repeated multiplication, where the input values change proportionally over equal-length output-value intervals. Alternately, if the output value is a whole number, it indicates how many times the initial value has been multiplied by the proportion.

Example
A concert starts at 7:00 p.m. and the doors to the concert venue open at 5:00 p.m. The number of patrons in the concert venue t minutes after 5:00 p.m. is listed in the following table. Using technology, estimate the logarithmic function N(t) = a + b·ln t that models the data.

Minutes since 5:00 p.m.          15    30    45    60    75    90    105    120
Number of patrons in the venue  270   340   380   410   430   450    465    480

Solution:
Step 1: Enter the values from the table into a graphing calculator.
Step 2: Use the calculator's calculate function to determine the logarithmic function.
The function that models how many patrons are in the venue t minutes after 5:00 p.m. is N(t) = −2.029 + 100.444(ln t).

Semi-Log Plots

In a semi-log plot, one of the axes is logarithmically scaled. When the y-axis of a semi-log plot is logarithmically scaled, data or functions that demonstrate exponential characteristics will appear linear. An advantage of semi-log plots is that a constant never needs to be added to the dependent variable values to reveal that an exponential model is appropriate. Techniques used to model linear functions can be applied to a semi-log graph. For an exponential model of the form y = ab^x, the corresponding linear model for the semi-log plot is y = (log_n b)x + log_n a, where n > 0 and n ≠ 1. Specifically, the linear rate of change is log_n b, and the initial value is log_n a.

Example
Consider the following graphs of two data sets (both panels titled "Exponential Growth?"; graphs not reproduced). Both appear to have models that would be increasing and concave up. Which data set, if any, can be modeled by an exponential function? A quick check to determine if the model is exponential is to change the y-axis to a logarithmic scale.
When the same data are graphed with a logarithmic scale for the y-axis, the y-axis is scaled 1, 10, 100, 1,000. This is equivalent to 10^0, 10^1, 10^2, and 10^3, which is a proportional scale.
Recall, exponential functions model growth patterns where successive output values over equal-length input-value intervals are proportional. The graph on the right appears to be linear, while the graph on the left does not appear to be linear. Because the graph on the right is linear on the new proportional scale, the original data for the right graph are exponential; the graph on the left does not appear linear, so the original data for the left graph are not exponential. In fact, the left graph can be modeled by f(x) = x^2, while the graph on the right can be modeled by g(x) = (1.26)^x.

❯ Rapid Review

Sequence
• A sequence is an ordered list of numbers that follow a specific pattern.
• An arithmetic sequence is a sequence that has successive terms that have a constant rate of change or common difference.
• The general term of an arithmetic sequence is a_n = a_0 + dn, where a_0 is the initial value and d is the common difference.
• A geometric sequence is a sequence that has successive terms that have a constant proportional change or a common ratio.
• The general term of a geometric sequence is g_n = g_0·r^n, where g_0 is the initial value and r is the common ratio.
• Linear functions are similar to arithmetic sequences because both have an initial value and a repeated addition of a constant.
• Exponential functions are similar to geometric sequences because both have an initial value and a repeated constant proportion.

Exponential Function
• Exponential functions arise from situations of constant growth.
• The general form of an exponential function is f(x) = ab^x, with initial value a, where a ≠ 0, and base b, where b > 0 and b ≠ 1. When a > 0 and b > 1, the exponential function is known as exponential growth. When a > 0 and 0 < b < 1, the exponential function is known as exponential decay.
• When the base of an exponential function f is greater than 1, f is increasing, lim(x→−∞) f(x) = 0, and lim(x→∞) f(x) = ∞.
• When the base of an exponential function f is less than 1, f is decreasing, lim(x→−∞) f(x) = ∞, and lim(x→∞) f(x) = 0.
• When a > 0, the graph is concave up, and when a < 0, the graph is concave down.
• Exponential expressions can be rewritten using the properties of exponents.

Logarithmic Function
• Logarithmic functions are inverses of exponential functions.
• log_b c = a if and only if b^a = c, where a and c are constants, b > 0, and b ≠ 1.
• When the base of a logarithmic function g is greater than 1, g is increasing and lim(x→∞) g(x) = ∞. The graph is concave down.
• When the base of a logarithmic expression is not specified, it is understood to be the common logarithm with base 10 and written log x.
• When the base of a logarithmic expression is e, it is referred to as the natural logarithm and written ln x.
• Logarithmic expressions can be rewritten using the properties of logarithms.

Modeling
• Two variables in a data set that demonstrate a slightly changing rate of change can be modeled by linear, quadratic, and exponential function models.
• Logarithmic functions are inverses of exponential functions and can be used to model situations involving proportional growth, or repeated multiplication, where the input values change proportionally over equal-length output-value intervals.
• Models can be used to predict values for the dependent variable, depending on the contextual constraints on the domain.

Miscellaneous
• Two functions f and g can be combined using the composition (f ∘ g)(x).
• An inverse function is a reverse mapping of a function, and is written f⁻¹.
• If (f ∘ g)(x) = (g ∘ f)(x) = x, then the functions f and g are inverses.
• When solving exponential or logarithmic equations or inequalities, extraneous solutions might exist and should be excluded.
• For the AP exam, logarithmic scaling will only be applied to the y-axis to linearize exponential functions.

❯ Review Questions

Basic Level
1. Describe the graph of y = 3(1/5)^x using limit notation.
2. Let f(x) = 5^x. If g(x) is a transformation of f(x) with a horizontal shift of 3 units right and a vertical shift of 4 units up, what is the equation of g(x)?
3. Using the following tables, what is the value of f(g(2))? What is the value of g(f(−2))?

x      −3   −2   −1    0    1    2
f(x)    6    0    3    5    1   −2

x      −4   −2    0    2    4    6
g(x)    3    1   −1   −3   −5   −7

4. Simplify completely using only positive exponents: (2x^3·y^2)^3·(3x^7·y^−3)^2.
5. If f(x) = log_3(x − 1), find f⁻¹(x).
6. If f(x) = log_4 x, find f⁻¹(2).
7. Solve for x: log_x 300 = 2.

Advanced Level
8. If the 2nd term of a geometric sequence is 36 and the 6th term is 2916, what is the 10th term?
9. An arithmetic sequence has the 7th term 41 and the 18th term 74. Find the formula for the nth term.
10. Simplify without using a calculator: log_4(1/2).
11. Let f(x) = 5x − 7 and g(x) = (2x + 1)/(x − 8). Find (f ∘ g)(x) and (g ∘ f)(x).
12. f(x) is graphed as follows (graph not reproduced). If g(x) = 4·f(x) − 10, what is the value of g(0)?
13. Rewrite 2·log x − log(4y) + log(3x^4) as a single logarithm.
14. If f⁻¹(x) = (2x − 1)/(x + 5), find f(x).
15. Simplify completely using all positive exponents: (5x^2·y^−4·z^2)^3 / (2x·y^−3·z^−2)^−2
16. Solve for x: log(2x) + log x^3 < 2.
17. If Sam puts $10,000 into a retirement account at age 30 earning 2.6% compounded annually, how much will it grow to by retirement age 67 if no additional deposits or withdrawals are made?
18. Express log_b ⁴√(n^3·w) in terms of log_b n and log_b w.
19. A wise old ruler wanted to reward his friend for an act of extraordinary bravery. The friend said, "I ask you for just one thing. Take the chessboard and place on the first square one grain of rice.
On the first day I will take this grain home to feed my family. On the second day, place on the second square two grains for me to take home. On the third day cover the third square with four grains for me to take. Each day double the number of grains you give me until you have placed rice on every square of the chessboard; then my reward will be complete." The wise old ruler replied, "This sounds like a small price to pay for your act of incredible bravery. I will do as you ask immediately." If the friend's request was granted, how much rice would be given to them on day 64, the final square of the chessboard?
20. Using the information from question 19, what is the rate of change between the sixth and seventh days? What is the rate of change between the seventh and eighth days? What does this illustrate about an exponential curve ab^x with a > 0?

❯ Answers and Explanations

1. • The graph of y = 3(1/5)^x is an exponential decay function (the base b = 1/5 is between 0 and 1, or 0 < 1/5 < 1).
• When a > 0, exponential decay functions are always decreasing, always concave up, and have a horizontal asymptote at y = 0.
• lim(x→−∞) f(x) = ∞ and lim(x→∞) f(x) = 0

2. • A horizontal shift of 3 units right replaces x with x − 3.
• A vertical shift of 4 units up adds 4 to the equation.
• g(x) = 5^(x−3) + 4

3. • To find f(g(2)), we first need to find g(2). Using the bottom table, g(2) = −3.
• So f(g(2)) = f(−3) = 6.
• To find g(f(−2)), we first need to find f(−2). Using the top table, f(−2) = 0.
• So g(f(−2)) = g(0) = −1.

4. • Let's simplify the first expression by using the power property: (2x^3·y^2)^3 = (2)^3·(x^3)^3·(y^2)^3 = 8x^9·y^6.
• Now the second: (3x^7·y^−3)^2 = (3)^2·(x^7)^2·(y^−3)^2 = 9x^14·y^−6.
• Now, apply the product property: (8x^9·y^6)(9x^14·y^−6) = 72x^23·y^0 = 72x^23.

5. • Rewrite f(x) = log_3(x − 1) as y = log_3(x − 1).
• Interchange x and y: x = log_3(y − 1).
• Rewrite as an exponential equation: 3^x = y − 1.
• Solve for y: y = 3^x + 1.
• So f⁻¹(x) = 3^x + 1.

6. • The inverse can be found by rewriting the logarithmic equation as the corresponding exponential equation: f⁻¹(x) = 4^x.
• f⁻¹(2) = 4^2 = 16.

7. • Solving the equation log_x 300 = 2 involves rewriting it as the equivalent exponential equation.
• log_x 300 = 2 can be rewritten as x^2 = 300 → x = ±√300 = ±√(100·3) = ±10√3.
• Because the base of a logarithm must be positive, there is only one solution, x = 10√3.

8. • Substituting (2, 36) into the equation g_n = g_0·r^n results in 36 = g_0·r^2.
• Substituting (6, 2916) into the equation results in 2916 = g_0·r^6.
• Divide the two equations to eliminate g_0: 2916/36 = g_0·r^6 / g_0·r^2 → 81 = r^4 → r = ±⁴√81 → r = ±3.
• Using the first equation and substituting the known values results in 36 = g_0(3)^2 → 36 = 9g_0 → g_0 = 4. Note: Substituting r = −3 into the equation yields the same result.
• g_10 = 4(3)^10 = 236,196.

9. • Substituting (7, 41) into the equation a_n = a_0 + dn results in 41 = a_0 + 7d.
• Substituting (18, 74) into the equation results in 74 = a_0 + 18d.
• Subtract the two equations to eliminate a_0: 74 − 41 = (a_0 + 18d) − (a_0 + 7d) → 33 = 11d → d = 3.
• Using the first equation and substituting the known values results in 41 = a_0 + 7(3) → a_0 = 20.
• The arithmetic formula is a_n = 20 + 3n.

10. • Rewriting log_4(1/2) = x as an exponential equation results in 4^x = 1/2.
• Rewriting both sides of the equation as a power of 2 results in (2^2)^x = 2^−1.
• Because the bases are equal, the exponents must be equal.
• 2x = −1, so log_4(1/2) = −1/2.

11. • (f ∘ g)(x) = f(g(x)) = f((2x + 1)/(x − 8)) = 5·(2x + 1)/(x − 8) − 7 = (10x + 5)/(x − 8) − 7(x − 8)/(x − 8) = (10x + 5 − 7x + 56)/(x − 8) = (3x + 61)/(x − 8)
• (g ∘ f)(x) = g(f(x)) = g(5x − 7) = (2(5x − 7) + 1)/((5x − 7) − 8) = (10x − 14 + 1)/(5x − 15) = (10x − 13)/(5x − 15)

12.
• From the graph it can be seen that f(0) = 1/2.
• Because g(x) = 4·f(x) − 10, the point (0, 1/2) is first stretched by the factor 4 to (0, 2).
• Applying the vertical shift of −10, the point (0, 2) moves to (0, −8).
• Therefore, g(0) = −8.

13. • Applying the power property, 2·log x = log x^2.
• Applying the quotient and product properties gives log(x^2·(3x^4)/(4y)) = log(3x^6/(4y)).

14. • Finding the function given its inverse is the same procedure as finding the inverse given a function.
• Rewrite the equation replacing x with y and y with x. This results in x = (2y − 1)/(y + 5).
• Solve for y. First cross-multiply: x(y + 5) = 2y − 1.
• Expand and isolate y: xy + 5x = 2y − 1 → xy − 2y = −5x − 1 → y(x − 2) = −5x − 1.
• y = (−5x − 1)/(x − 2), so the function is f(x) = (−5x − 1)/(x − 2).

15. • First, apply the power property (raise each factor to the power) in the numerator and denominator: (5x^2·y^−4·z^2)^3 / (2x·y^−3·z^−2)^−2 = 5^3·(x^2)^3·(y^−4)^3·(z^2)^3 / (2^−2·(x)^−2·(y^−3)^−2·(z^−2)^−2)
• Next, apply the power property (multiply the exponents) again to each individual term: = 125x^6·y^−12·z^6 / (2^−2·x^−2·y^6·z^4)
• Next, apply the quotient property (subtract exponents): = (125/2^−2)·x^(6−(−2))·y^(−12−6)·z^(6−4) = (125)(4)·x^8·y^−18·z^2
• Finally, use the negative exponent property: = 500x^8·z^2 / y^18.

16. • Rewrite using a single logarithm: log(2x) + log x^3 = log(2x · x^3) = log(2x^4).
• Rewrite the inequality as an exponential inequality: log(2x^4) < 2 → 2x^4 < 10^2.
• Solve the inequality: 2x^4 < 100 → x^4 < 50 → −⁴√50 < x < ⁴√50.
• Negative values of x would be extraneous because the logarithm of a negative number does not exist, so the solution is 0 < x < ⁴√50.

17.
• Using the compound interest formula A = B(1 + r)^t, where B is the initial amount, r is the interest rate, and t is years, we have A = 10,000(1.026)^37 ≈ 25,849.512.
• Sam will have $25,849.51 after 37 years.

18. • Using the product property, log_b (n^3·w)^(1/4) = log_b n^(3/4) + log_b w^(1/4).
• Using the power property, log_b n^(3/4) + log_b w^(1/4) = (3/4)·log_b n + (1/4)·log_b w.

19. • The values in the sequence are 1, 2, 4, 8, 16, . . . .
• Using the formula g_n = g_1·r^(n−1), we get g_64 = 1(2)^(64−1).
• On the 64th day the friend would receive about 9.223 × 10^18 grains of rice.

20. • Using the formula g_n = g_1·r^(n−1), we get g_6 = 1(2)^(6−1) = 32 and g_7 = 1(2)^(7−1) = 64.
• The rate of change between the sixth and seventh days is (64 − 32)/(7 − 6) = 32 grains/day.
• Using the formula g_n = g_1·r^(n−1), we get g_8 = 1(2)^(8−1) = 128.
• The rate of change between the seventh and eighth days is (128 − 64)/(8 − 7) = 64 grains/day.
• The graph of f(x) = 1(2)^x is growing with ratio r = 2.
• Because the rate of change is itself increasing, exponential functions with a > 0 are concave up.
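The doubling in answers 19 and 20 is easy to confirm exactly, since Python integers have arbitrary precision (a supplementary check, not part of the original solutions):

```python
# Grains placed on square n of the chessboard: g_n = 1 * 2**(n - 1).
def grains(day: int) -> int:
    return 2 ** (day - 1)

print(grains(64))             # 9223372036854775808, about 9.223e18 grains
print(grains(7) - grains(6))  # 32 grains/day between days 6 and 7
print(grains(8) - grains(7))  # 64 grains/day between days 7 and 8
```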
https://avi-loeb.medium.com/history-of-the-duration-of-a-day-on-earth-ff975be36132
History of the Duration of a Day on Earth
Avi Loeb · 5 min read · Jan 6, 2025

Our life is structured around the daily cycle of 24 hours. But there is no fundamental reason for this period. In fact, it was different in the past and will be different in the future. The impact that gave birth to the Moon spun up the Earth, and the subsequent gravitational interaction between the Moon and the Earth dictated the evolution of the duration of a day since then. In other words, the duration of a day on an Earth-like exoplanet without a Moon might be very different. We should keep that in mind when hosting biological visitors from interstellar space. They might go to sleep in their guest bedroom on very different intervals than we do. The label ET on their alarm clock would not mean 'Eastern Time' but rather 'Extraterrestrial Time'.

The Moon was likely spawned as a result of a giant impact on Earth by a Mars-size object, commonly called Theia, in the early history of Earth. The impact occurred 60–175 million years after the Earth was born, which was 4.57 billion years ago. The energy released during the impact melted the Earth's surface into a magma ocean. Recent computer simulations of the Moon-forming impact suggest that the Earth was spun up by the impact to about a four-hour day. As the Moon receded from Earth, the day was lengthened by the tidal transfer of angular momentum from the Earth's spin to the Moon's orbit.

Following the collision, Earth's surface temperature reached about 2,300 degrees Kelvin (3,680°F), too hot for liquid water to support the chemistry of life as we know it. It took hundreds of millions of years for Earth to cool significantly, eventually allowing the emergence of the Last Universal Common Ancestor (LUCA) of life between 4.09–4.33 billion years ago. When LUCA lived, a day lasted about 10 hours. When photosynthesis started about 3.5 billion years ago, a day lasted 12 hours.
By 1.8 billion years ago, when complex organisms began, the day was 21 hours long. When the oldest known sexually reproducing organism formed, about 1.2 billion years ago, a day was about 20 hours. The first humans appeared a few million years ago, when the day was very close to the current 24-hour duration. According to recent calculations, Earth's spin is slowing down on average by 1.35 seconds every 100,000 years, or 3.75 hours in a billion years.

Do other actors beyond the Moon have an effect on the Earth's rotation? Minor shifts in the duration of a terrestrial day occur all the time. For example, on June 29, 2022, Earth's spin period was recorded to be 1.59 thousandths of a second (milliseconds) shorter than 24 hours. Faster spin means that Earth gets to the same position earlier. Two milliseconds translate to a meter at the equator. GPS satellites could lose their precision if not corrected for such changes. There are related implications for cell phones, computers, and communications systems, which synchronize with Network Time Protocol (NTP) servers.

Common changes reflect the motion of the Earth's molten core, oceans, and atmosphere, seismic activity, and the gravitational influences of the Moon and the Sun. Climate warming melts polar ice and adds liquid water at the equator, making the Earth less spherical. Similarly to the slower spin of ice skaters who extend their arms, conservation of angular momentum implies that a mass distribution more extended than spherical causes Earth to spin slower as a result of climate warming.

Cataclysmic events also modify slightly the duration of the day on Earth. The most dramatic climate catastrophe is predicted to occur in a billion years, when the Sun will boil off all water reservoirs on the Earth's surface.
As this calamity will shift 0.02% of the mass of the Earth into the atmosphere, it could increase the duration of a day by a tenth of a second. By that time, the day could already be several hours longer than 24 hours as a result of the continuing interaction with the Moon. Past climate catastrophes were less dramatic. The Chicxulub impactor, which killed off 75% of all terrestrial species, including the non-avian dinosaurs, changed the rotational energy of the Earth by at most a few parts in ten billion and modified the duration of a day by less than a millisecond. Dramatic earthquakes can decrease the length of the day by a few millionths of a second (microseconds). The Three Gorges reservoir in China holds 40 cubic kilometers of water; the shift of mass when it is filled increases the length of the day by 0.06 microseconds. The launch of rockets, including Starship, has a completely negligible effect on the duration of a terrestrial day.

In 7.6 billion years, the Sun will expand to a red giant and may engulf the Earth and the Moon. Once that happens, the Moon will spiral inwards and crash into Earth, and the Earth's rotation will slow down due to friction with the red giant's envelope. Unfortunately, no terrestrial inhabitants will be able to get more sleep during the longer days at that time.

ABOUT THE AUTHOR

Avi Loeb is the head of the Galileo Project, founding director of Harvard University's Black Hole Initiative, director of the Institute for Theory and Computation at the Harvard-Smithsonian Center for Astrophysics, and the former chair of the astronomy department at Harvard University (2011-2020). He is a former member of the President's Council of Advisors on Science and Technology and a former chair of the Board on Physics and Astronomy of the National Academies. He is the bestselling author of "Extraterrestrial: The First Sign of Intelligent Life Beyond Earth" and a co-author of the textbook "Life in the Cosmos", both published in 2021.
The paperback edition of his new book, titled "Interstellar", was published in August 2024.
https://www.youtube.com/watch?v=KZiwsEXPpQs
Domain and Range of Quadratic Functions Using the Quadratic Formula Discriminant!
AF Math & Engineering
Posted: 8 Sep 2017

Description: In this video we show you how to find the domain and range of tricky quadratic (rational) functions using the discriminant, for cases that can't be solved by the inverse method shown in our first video. The video also offers an explanation of our previous video in the series and an error to avoid! Link here for that video:

Timestamps:
1:00 - Explanation of why x^2 is not a valid function for the inverse method (follow-up from the previous video)
2:10 - Domain of the function
2:50 - Range
6:38 - Finding the intervals of the range values
8:08 - Checking whether the extreme values of the range are included
9:08 - Final solution for the range
9:15 - Function plotted in the Desmos graphing calculator
Link to Desmos Graphing Calculator:

Now try to solve it on your OWN! Don't just "understand"... PRACTICE!! AF MATH AND ENGINEERING - Students Helping Students! Join our Community!
Follow us on Twitter and Facebook.

Transcript:

Hey guys, welcome back to AF Math and Engineering. We're doing another video for you on the domain and the range, but this time with some trickier quadratic functions. We'll start this video by going over the previous video we did on domain and range, where we showed you a trick you can use for simpler functions that have inverses. This one is for a function that doesn't have an inverse, for example a quadratic function, and in that case we can show you how to evaluate it using this really cool trick. This video was actually inspired by one of the newest members of our community: a viewer commented on a video and showed us this really cool function, and we decided to share it with you, so thank you, you know who you are. And guys, if you're liking the video and the channel, as always, hit the subscribe button; it does inspire us to make more videos. Thank you so much for your engagement.

All right, cool. In the last video we showed you a trick where, for a function like y = x − 2, we can swap x and y, solve for y, and this is what we call the inverse of the function; the domain of the inverse is equal to the range of the initial function. It was a little bit of a mistake on our part not to explain the limitation there. The purpose of that video was to show you the technique, and the answer we got was actually correct, but the problem with using the inverse method for finding the domain and range of a function like x² is that it doesn't pass the horizontal line test. If the function were defined only from zero to infinity, or only from zero to negative infinity, it would be one-to-one: for every input x there would be one output y. But in this case 2 and −2 give you the same answer, which makes the inverse not exist. Hopefully that gives you a little bit of clarification on that video if you were confused about that question; our apologies. Now we're going to show you this really cool trick for evaluating these quadratic functions using the discriminant.

So let's take a look at this question. We're asked to find the domain and range of y = 1/(x² + 9). The domain is actually really easy, and I'm not going to spend too much time on it. You should know by now that you can't divide by zero; a rational expression with a zero denominator is undefined. In this case, because the denominator is x² + 9, and x² can never be a negative number, there is nothing we can plug in to make the denominator zero. So we know immediately, just by inspection, that the domain is all real numbers for this function.

So let's take a look at the range, because the range is really why we're doing this question, and this is what I wanted to show you. When we have a function like this, we can model it in the form ax² + bx + c = 0, which is the quadratic form, and when we model it in the quadratic form we can plug it into the discriminant and find the range from that, which is quite an interesting trick. We have y = 1/(x² + 9). If we multiply both sides by x² + 9, we end up with y(x² + 9) = 1. If we go ahead and expand that out, we have x²y + 9y − 1 = 0. If we match the coefficients a, b and c of the quadratic form to this, you can see it fits the same form. What is our a value? Well, a is the coefficient of x², which is y, so a = y. We don't have a b, because there's no x term here, only an x², so b = 0. And we do have our constant, which is 9y − 1, so c = 9y − 1.

Now with this knowledge, we can plug into the quadratic formula, x = (−b ± √(b² − 4ac)) / 2a. The expression b² − 4ac under the square root is what we call the discriminant. The set of y values that satisfy the condition that the discriminant be greater than or equal to zero is going to be our range for the function, because, as we know, the term under a square root must be greater than or equal to zero. So b² − 4ac ≥ 0; let's go ahead and plug our values in. We have our a, which is y; our b², which is zero; and our c, which is 9y − 1; so −4y(9y − 1) ≥ 0. If we expand this out, we have −36y² + 4y ≥ 0, and if we factor out −y, we have −y(36y − 4) ≥ 0. If we go ahead and calculate the roots of this expression, we get y = 0 and y = 4/36, which is equal to 1/9. Cool, so those are the roots, and now what we need to consider is on what part of the interval between 0 and 1/9 this expression is greater than or equal to zero. Once we know on which intervals the discriminant is non-negative, we'll be able to determine the range, because that is essentially the set of values where the function exists for the dependent variable.

I'm not going to break the whole sign table out, but you can go ahead and try it yourself; I'm sure you know how to generate a table with these factors and determine whether an expression is positive or negative, and that's a whole other topic. If you plug in a y value less than zero, the expression is negative, and that's unacceptable for us, because it makes the square root undefined. Next we test y = 0: plugging y = 0 into the expression gives zero. For any number strictly between 0 and 1/9, the expression is positive. At y = 1/9, the expression is zero, and for any number greater than 1/9 it is negative. So this is what we're looking for: the values where the discriminant is either zero or positive, and that is our initial set of candidates for the range.

But we do need to do one more test: we need to double-check whether the extreme values of our range are actually attained by the original function. Let's go ahead and plug them in. With y = 1/(x² + 9), plugging in 0 for y gives 0 = 1/(x² + 9), and as you can see this equation has no solution, so the value y = 0 is not included in our range. Let's try 1/9 = 1/(x² + 9): solving this, we find x = 0, which is a solution, so 1/9 is included. That brings us to our final answer. Our range is the set of y values for which the discriminant was greater than or equal to zero, with the endpoints 0 and 1/9 tested in the original function; zero was not included, and 1/9 was. So the range of y = 1/(x² + 9) is (0, 1/9]: not including zero, including 1/9.

One thing I encourage you guys to do is to check your answer in an online graphing calculator afterwards. It's really going to help you: you'll be able to visualize more functions if you just put them in and take a look at what they look like, and it gives you an idea of where your answer came from, a visual representation of your answer, which is also important. If we type in y = 1/(x² + 9), you can see the function is a little parabolic bump at the y-axis and then trends towards zero on both sides, out to infinity and negative infinity, never actually reaching it. You can keep going forever and you'll see the distance getting smaller and smaller, but it never reaches zero. And if we go back to the y-axis, we see the maximum point at (0, 1/9), which matches our range: 1/9 is included, but, as you'll remember from our answer, zero was not, because this function never reaches zero; it only trends towards it asymptotically. So a really good tip: always put your answers into a graphing calculator like this. I'll leave the link down below, and I highly encourage you to do this for every problem you do in calculus.

Thank you so much for watching. I hope you learned something, and I hope you picked up this trick of using the discriminant of the quadratic formula to help you evaluate tricky quadratic or rational functions. It's a good tool to have in your toolbox for your math, your precalculus, and even Calculus 1 in university. As always, guys, if you liked this video, hit the subscribe button. This is AF Math and Engineering. Take care!
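The discriminant method walked through above can be condensed into a few lines of plain Python (a sketch restating the video's steps, not part of the original):

```python
# Range of f(x) = 1/(x**2 + 9) via the discriminant trick.
# Multiply through: y*(x**2 + 9) = 1  =>  y*x**2 + 0*x + (9*y - 1) = 0,
# a quadratic in x with a = y, b = 0, c = 9*y - 1. A real x exists
# only where the discriminant b**2 - 4*a*c is non-negative.

def discriminant(y):
    a, b, c = y, 0.0, 9.0 * y - 1.0
    return b * b - 4.0 * a * c   # simplifies to 4*y - 36*y**2

# The discriminant is >= 0 exactly on the interval [0, 1/9]:
print(discriminant(-0.01) >= 0)   # False -> values below 0 are excluded
print(discriminant(0.05) >= 0)    # True  -> interior values are attained
print(discriminant(0.2) >= 0)     # False -> values above 1/9 are excluded

# Endpoint check against the original function:
# y = 0:   0 = 1/(x**2 + 9) has no solution, so 0 is NOT in the range.
# y = 1/9: x**2 + 9 = 9 gives x = 0, so 1/9 IS in the range.
# Final answer: range = (0, 1/9].
```

Plotting the function, as the video does in Desmos, confirms the same picture: a maximum of 1/9 at x = 0 and a horizontal asymptote at y = 0 that is never reached.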
https://www.uptodate.com/contents/etiology-of-hearing-loss-in-adults
https://www.vedantu.com/revision-notes/cbse-class-11-chemistry-notes-chapter-10-the-s-block-elements
Revision Notes Class 11 Chemistry Chapter 10 The S Block Elements

The S Block Elements Class 11 Chemistry Chapter 10 CBSE Notes - 2025-26

Chemistry Notes for Chapter 10 The S Block Elements Class 11 - FREE PDF Download

In CBSE Class 11 Chemistry Notes Chapter 10 The S Block Elements, you'll dive into the world of alkali and alkaline earth metals, like sodium, potassium, magnesium, and calcium. These elements sit at the very start of the periodic table and come up often in CBSE exams, so understanding them can boost your confidence. This chapter will help you learn their properties, reactions, uses, and simple tricks to remember them. To make your study smart and effective, check the Class 11 Chemistry Syllabus for all updated topics covered in CBSE.

Learning this chapter becomes simple when you have our Class 11 Chemistry Revision Notes as your guide. Vedantu's notes break down tough points, clear your doubts, and help you revise quickly for exams. This chapter is quite important because questions from the s-block elements appear regularly in board papers.
Mastering it can really help you score well in your Chemistry exams.

The s-Block Elements Class 11 Notes Chemistry - Basic Subjective Questions

Section – A (1 Mark Questions)

1. Melting and boiling points of alkali metals are low. Explain.
Ans. The melting and boiling points of the alkali metals are low, indicating weak metallic bonding due to the presence of only a single valence electron.

2. Explain the diagonal relationship in the periodic table.
Ans. The diagonal relationship is due to the similarity in ionic sizes and/or charge/radius ratios of the elements.

3. Why are lithium compounds soluble in organic solvents?
Ans. Due to the high polarizing power of Li+, lithium compounds have increased covalent character, which is responsible for their solubility in organic solvents.

4. By which process can we prepare sodium carbonate?
Ans. Sodium carbonate is generally prepared by the Solvay process.

5. What is the formula of soda ash?
Ans. Na2CO3 is the formula of soda ash.

6. Why do alkaline earth metals have low ionization enthalpy?
Ans. The alkaline earth metals have low ionization enthalpies due to the fairly large size of their atoms.

7. What is the nature of the oxide formed by Be?
Ans. BeO is covalent and amphoteric, while the oxides of the other elements are ionic and basic in nature.

8. Beryllium shows similarities with Al. Why?
Ans. Because of the similarity in their charge/radius ratios (Be2+: 2/31 = 0.064; Al3+: 3/50 = 0.06).

9. What happens when gypsum is heated to 390 K?
Ans. When gypsum is heated to 390 K, plaster of Paris is formed.

10. Why can anhydrous calcium sulphate not be used as plaster of Paris?
Ans. Because it does not have the ability to set like plaster of Paris.

Section – B (2 Marks Questions)

11. Why can metals like potassium and sodium not be extracted by reduction of their oxides by carbon?
Ans. Potassium and sodium are strongly electropositive metals and have a greater affinity for oxygen than carbon does.
Hence they cannot be extracted from their oxides by reduction with carbon.

12. Complete the reactions and balance them: (a) Na2O2 and water (b) KO2 and water
Ans. (a) 2Na2O2 + 2H2O → 4NaOH + O2
(b) 2KO2 + 2H2O → 2KOH + H2O2 + O2

13. Potassium carbonate cannot be prepared by the Solvay process. Give a reason.
Ans. Potassium carbonate cannot be prepared by the Solvay process because potassium bicarbonate (KHCO3) is highly soluble in water, unlike NaHCO3, which separates as crystals. Due to its high solubility, KHCO3 cannot be precipitated by the addition of ammonium bicarbonate to a saturated solution of KCl.

14. Explain what happens when (i) sodium hydrogen carbonate is heated, and (ii) sodium amalgam reacts with water.
Ans. (i) $\underset{\text{Sodium hydrogen carbonate}}{2\text{NaHCO}_3}\xrightarrow[]{\text{Heat}}\underset{\text{Sodium carbonate}}{\text{Na}_2\text{CO}_3}+\text{H}_2\text{O}+\text{CO}_2\uparrow$
(ii) 2Na-Hg + 2H2O → 2NaOH + H2↑ + 2Hg

15. S-block elements never occur in the free state. Explain. What are their usual modes of occurrence?
Ans. The alkali and alkaline earth metals are highly reactive because of their low ionization energies. They are highly electropositive, forming positive ions, so they are never found in the free state. They are widely distributed in nature in the combined state, occurring in the earth's crust as oxides, chlorides, silicates and carbonates.

16. The second ionization enthalpy of calcium is more than the first. How is it, then, that calcium forms CaCl2 and not CaCl? Explain.
Ans. The higher value of the second ionization enthalpy is more than compensated by the higher hydration enthalpy of Ca2+. Therefore, the formation of CaCl2 is energetically more favourable than that of CaCl.

17. Solubility of alkaline earth metal hydroxides increases down the group. Explain.
Ans. If the anion and the cation are of comparable size, the cationic radius will influence the lattice energy.
Since lattice energy decreases much more rapidly than hydration energy with increasing ionic size, solubility increases as we go down the group. This is the case for the alkaline earth metal hydroxides.

18. A solution of Na2CO3 is alkaline. Give a reason.
Ans. The solution of Na2CO3 is alkaline in nature because when Na2CO3 is treated with water, it is hydrolysed to form an alkaline solution: CO32– + H2O → HCO3– + OH–

19. Discuss the various reactions that occur in the Solvay process.
Ans. 2NH3 + H2O + CO2 → (NH4)2CO3
(NH4)2CO3 + CO2 + H2O → 2NH4HCO3
NH4HCO3 + NaCl → NH4Cl + NaHCO3
2NaHCO3 → Na2CO3 + CO2 + H2O

20. Give a reason: lithium halides are covalent in nature.
Ans. Lithium halides are covalent because of the high polarizing power of the lithium ion. The Li+ ion is very small and has a high tendency to distort the electron cloud around the negative halide ion.

PDF Summary - Class 11 Chemistry The s-Block Elements Notes

Alkali Metals (Group 1)

They have the $n{s^1}$ electronic configuration and are highly reactive metals.

| Element | Atomic Number | Electronic Configuration |
| --- | --- | --- |
| Lithium | 3 | $[He]\;2{s^1}$ |
| Sodium | 11 | $[Ne]\;3{s^1}$ |
| Potassium | 19 | $[Ar]\;4{s^1}$ |
| Rubidium | 37 | $[Kr]\;5{s^1}$ |
| Cesium | 55 | $[Xe]\;6{s^1}$ |
| Francium | 87 | $[Rn]\;7{s^1}$ |

Physical Properties

Atomic Size
Alkali metal atoms are the largest in their respective periods. Atomic size increases down the group.

Oxidation State
Group 1 elements show the +1 oxidation state.

Density
Alkali metals have low density due to their large size.
$Density = \dfrac{{Atomic\;mass}}{{Atomic\;volume}}$
Both atomic mass and atomic volume increase down the group from $Li$ to $Cs$, but the increase in atomic mass outweighs the increase in volume.
As a result, density rises from $Li$ to $Cs$.
Exception: the density of sodium is more than that of potassium.
Order: $Li < K < Na < Rb < Cs$

Nature of Bonds
Since their electronegativity values are low, they combine with other elements to form ionic bonds.

Ionization Energy
The atoms in this group have lower first ionisation energies than those of any other group in the periodic table. Because the atoms are so big, the outer electron is only weakly bound by the nucleus, resulting in a low ionisation energy. The ionisation energy drops as you move down the group.

Flame Test
When alkali metals are burned in a flame, the electrons in the valence shell migrate from a lower energy level to a higher energy level due to heat absorbed from the flame. When they return to their original state, they release the extra energy in the form of visible light, which gives the flame its colour.

| Element | Colour |
| --- | --- |
| $Li$ | Red |
| $Na$ | Golden yellow |
| $K$ | Violet |
| $Rb$ | Red violet |
| $Cs$ | Blue |

Standard Oxidation Potential
The electrode potential of a metal in water is a measure of its tendency to donate electrons. The standard electrode potential is defined with the concentration of metal ions equal to one. Lithium has the largest ionisation potential, but due to its high hydration energy it also has the highest electrode potential.

Hydration of Ions
The ions are heavily hydrated. The degree of hydration decreases as the size of the ion increases, so it falls from Li+ to Cs+. Because a heavily hydrated ion moves more slowly, ionic conductivity in solution increases as the degree of hydration decreases.

Lattice Energy
Alkali metal salts are ionic solids. The lattice energy of alkali metal salts with a common anion drops as one moves down the group.
Solubility in Liquid Ammonia
$M + nN{H_3} \to {[M{(N{H_3})_x}]^ + } + {e^ - }{(N{H_3})_y}$ $(n = x + y)$
The major species in dilute alkali metal solutions in liquid ammonia are solvated metal ions and solvated electrons. If the blue solution is left to stand, the colour fades until it disappears, due to the formation of the metal amide. Because of the presence of solvated electrons, metal solutions in liquid ammonia conduct electricity. The dilute solutions are paramagnetic because they contain free electrons.

Electronegativity Values
The electronegativity values are small and decrease from lithium to cesium.

Reactivity
The reactivity of alkali metals increases in the order: $Li < Na < K < Rb < Cs$

Colourless and Diamagnetic Ions
The number of unpaired electrons present in an ion determines whether the ion is colourless or coloured. If an ion has unpaired electrons, these electrons can be excited by light energy and then return to the ground state, giving colour. Ions with unpaired electrons have magnetic properties, while in ions with paired electrons the magnetic fields cancel out; such ions are diamagnetic. The presence of an unpaired electron makes superoxides paramagnetic and coloured.

Melting and Boiling Point
The cohesive energy is the force that holds the atoms or ions of a solid together, and it is proportional to the number of electrons available for bonding. Alkali metals have only one valence electron that participates in bonding, and this outer bonding electron is large and diffuse, so the cohesive force is small and decreases down the group. As the atoms get bigger down the group, the bonds get weaker, the cohesive energy drops, and the metal gets softer. As a result, the melting point decreases down the group. The boiling point also decreases down the group.
Chemical Properties

Some Common Reactions of Group 1 Metals

| Reaction | Comment |
| --- | --- |
| $M + {H_2}O \to MOH + {H_2}$ | The hydroxides are the strongest bases known. |
| $Li + {O_2} \to L{i_2}O$ | Monoxide formed by lithium and, to a small extent, by sodium. |
| $Na + {O_2} \to N{a_2}{O_2}$ | Peroxide formed by sodium and, to a small extent, by lithium. |
| $K + {O_2} \to K{O_2}$ | Superoxide formed by potassium, rubidium and cesium. |
| $M + {H_2} \to MH$ | Ionic, salt-like hydrides. |
| $Li + {N_2} \to L{i_3}N$ | Nitride formed only by lithium. |
| $M + S \to {M_2}S$ | All the metals form sulphides. |
| $M + {X_2} \to MX$ | All the metals form halides. |
| $M + N{H_3} \to MN{H_2}$ | All the metals form amides. |

Reaction with Air
Group 1 elements are very reactive and tarnish quickly when exposed to air. In moist air these metals form alkaline carbonates:
$4Na + {O_2} \to 2N{a_2}O$
$N{a_2}O + {H_2}O \to 2NaOH$
$2NaOH + C{O_2} \to N{a_2}C{O_3} + {H_2}O$

Reaction with ${O_2}$
Lithium forms $L{i_2}O$, sodium forms two types of oxide $({M_2}O,\;{M_2}{O_2})$, and potassium, rubidium and cesium form superoxides $(M{O_2})$.

Basic and Ionic Nature of the Oxides
The basic nature of the oxides increases from lithium to cesium because the size of the cation increases. According to Fajans' rules, the ionic nature of these oxides also rises from lithium to cesium, and as the ionic nature increases, solubility in water increases from lithium oxide to cesium oxide.

Reaction with Water
Group 1 metals react with water, liberating hydrogen and forming hydroxides:
$2 \mathrm{Li}+2 \mathrm{H}_{2} \mathrm{O} \rightarrow 2 \mathrm{LiOH}+\mathrm{H}_{2}$
$2 \mathrm{Na}+2 \mathrm{H}_{2} \mathrm{O} \rightarrow 2 \mathrm{NaOH}+\mathrm{H}_{2}$
$2 \mathrm{~K}+2 \mathrm{H}_{2} \mathrm{O} \rightarrow 2 \mathrm{KOH}+\mathrm{H}_{2}$

Reaction with Hydrogen
Group 1 metals react with hydrogen to form ionic hydrides.
Thermal stability of $\mathrm{LiH}$ is the highest. The stability of the hydrides is in the order: $\mathrm{LiH}>\mathrm{NaH}>\mathrm{KH}>\mathrm{RbH}>\mathrm{CsH}$ Reaction with Dilute Acids These metals react vigorously with dilute acids owing to their electropositive nature, and the rate of reaction increases from lithium to cesium as the electropositive character increases. Compounds of Alkali Metals Hydroxides Caustic soda is another name for sodium hydroxide, and because of its corrosive qualities potassium hydroxide is known as caustic potash. The caustic alkalis are the strongest bases known in aqueous solution. The solubility of the hydroxides rises down the group. The bases react with acids to form salt and water: $\mathrm{KOH}+\mathrm{HCl} \rightarrow \mathrm{KCl}+\mathrm{H}_{2} \mathrm{O}$ $\mathrm{NaOH}+\mathrm{HCl} \rightarrow \mathrm{NaCl}+\mathrm{H}_{2} \mathrm{O}$ $2 \mathrm{NaOH}+\mathrm{CO}_{2} \rightarrow \mathrm{Na}_{2} \mathrm{CO}_{3}+\mathrm{H}_{2} \mathrm{O}$ Ammonia is liberated by the bases from ammonium salts: $\mathrm{NaOH}+\mathrm{NH}_{4} \mathrm{Cl} \rightarrow \mathrm{NH}_{3}+\mathrm{NaCl}+\mathrm{H}_{2} \mathrm{O}$ $\mathrm{KOH}+\mathrm{NH}_{4} \mathrm{Cl} \rightarrow \mathrm{NH}_{3}+\mathrm{KCl}+\mathrm{H}_{2} \mathrm{O}$ In all of its reactions, potassium hydroxide is similar to sodium hydroxide; however, because potassium hydroxide is much more expensive, it is rarely used. Potassium hydroxide is, though, more soluble in alcohol, where the equilibrium produces $\mathrm{C}_{2} \mathrm{H}_{5} \mathrm{O}^{-}$ ions: $\mathrm{C}_{2} \mathrm{H}_{5} \mathrm{OH}+\mathrm{OH}^{-} \rightleftharpoons \mathrm{C}_{2} \mathrm{H}_{5} \mathrm{O}^{-}+\mathrm{H}_{2} \mathrm{O}$ Oxides, Peroxides and Superoxides Normal oxides (monoxides) are ionic. They are strongly basic oxides that give strong bases on reaction with water.
$\mathrm{Na}_{2} \mathrm{O}+\mathrm{H}_{2} \mathrm{O} \rightarrow 2\mathrm{NaOH}$ $\mathrm{K}_{2} \mathrm{O}+\mathrm{H}_{2} \mathrm{O} \rightarrow 2\mathrm{KOH}$ Peroxides Preparation: $2 \mathrm{Na}+\mathrm{O}_{2}(\text{excess}) \xrightarrow{300^{\circ} \mathrm{C}} \mathrm{Na}_{2} \mathrm{O}_{2}$ $2 \mathrm{Na}_{2} \mathrm{O} \xrightarrow{400^{\circ} \mathrm{C}} \mathrm{Na}_{2} \mathrm{O}_{2}+2\mathrm{Na}$ (vapour) Properties $\mathrm{Na}_{2} \mathrm{O}_{2}+\mathrm{H}_{2} \mathrm{SO}_{4}(\text{dil}) \rightarrow \mathrm{Na}_{2} \mathrm{SO}_{4}+\mathrm{H}_{2} \mathrm{O}_{2}$ $\mathrm{Na}_{2} \mathrm{O}_{2}+2 \mathrm{H}_{2} \mathrm{O} \rightarrow 2 \mathrm{NaOH}+\mathrm{H}_{2} \mathrm{O}_{2}$ $\mathrm{Na}_{2} \mathrm{O}_{2}$ is a powerful oxidant; it also absorbs the carbon monoxide and carbon dioxide present in air: $\mathrm{Na}_{2} \mathrm{O}_{2}+\mathrm{CO} \rightarrow \mathrm{Na}_{2} \mathrm{CO}_{3}$ $2\mathrm{Na}_{2} \mathrm{O}_{2}+2 \mathrm{CO}_{2} \rightarrow 2\mathrm{Na}_{2} \mathrm{CO}_{3}+\mathrm{O}_{2}$ It oxidises $\mathrm{Cr}^{3+}$ to $\mathrm{CrO}_{4}^{2-}$. Structure The oxygen atom is $\mathrm{sp}^{3}$ hybridized. The peroxide ion has 18 electrons, which occupy the molecular orbitals as shown: $\sigma 1 \mathrm{s}^{2}, \sigma^{*} 1 \mathrm{s}^{2}, \sigma 2 \mathrm{s}^{2}, \sigma^{*} 2 \mathrm{s}^{2}, \sigma 2 \mathrm{p}_{x}^{2}, \pi 2 \mathrm{p}_{y}^{2}=\pi 2 \mathrm{p}_{z}^{2}, \pi^{*} 2 \mathrm{p}_{y}^{2}=\pi^{*} 2 \mathrm{p}_{z}^{2}$ The bond order is 1 and all the electrons are paired, so it is diamagnetic. Superoxides Superoxides are ionic oxides, $\mathrm{M}^{+} \mathrm{O}_{2}^{-}$ Preparation $\mathrm{M}+\mathrm{O}_{2}(\text{excess}) \rightarrow \mathrm{MO}_{2}$ $(\mathrm{M}=\mathrm{K}, \mathrm{Rb}, \mathrm{Cs})$ Superoxides are stronger oxidizing agents than peroxides.
The stability of these superoxides is in the order: $\mathrm{KO}_{2}<\mathrm{RbO}_{2}<\mathrm{CsO}_{2}$ Reactions $2\mathrm{KO}_{2}+2\mathrm{H}_{2} \mathrm{O} \rightarrow 2\mathrm{KOH}+\mathrm{H}_{2} \mathrm{O}_{2}+\mathrm{O}_{2}$ Because it liberates $\mathrm{O}_{2}$ and removes $\mathrm{CO}_{2}$, $\mathrm{KO}_{2}$ is used in space capsules, submarines, and breathing masks. $4 \mathrm{KO}_{2}+2 \mathrm{CO}_{2} \rightarrow 2 \mathrm{~K}_{2} \mathrm{CO}_{3}+3 \mathrm{O}_{2}$ $4 \mathrm{KO}_{2}+4 \mathrm{CO}_{2}+2 \mathrm{H}_{2} \mathrm{O} \rightarrow 4 \mathrm{KHCO}_{3}+\mathrm{O}_{2}$ Sodium superoxide cannot be made by burning the metal in oxygen; it is made by reacting sodium peroxide with oxygen at high temperature and pressure: $\mathrm{Na}_{2} \mathrm{O}_{2}+\mathrm{O}_{2} \rightarrow 2 \mathrm{NaO}_{2}$ Structure The paramagnetic property is explained by the existence of one unpaired electron in a three-electron bond. The superoxide ion has 17 electrons and a bond order of 1.5, occupying the molecular orbitals as shown: $\sigma 1 \mathrm{s}^{2}, \sigma^{*} 1 \mathrm{s}^{2}, \sigma 2 \mathrm{s}^{2}, \sigma^{*} 2 \mathrm{s}^{2}, \sigma 2 \mathrm{p}_{x}^{2}, \pi 2 \mathrm{p}_{y}^{2}=\pi 2 \mathrm{p}_{z}^{2}, \pi^{*} 2 \mathrm{p}_{y}^{2}, \pi^{*} 2 \mathrm{p}_{z}^{1}$ The stability of the oxides is given as: normal oxide > peroxide > superoxide Carbonates and Bicarbonates Group 1 metals form solid bicarbonates $\left(\mathrm{MHCO}_{3}\right)$; lithium bicarbonate exists only in solution. All alkali metals form carbonates $\left(\mathrm{M}_{2} \mathrm{CO}_{3}\right)$. The carbonates and bicarbonates of alkali metals are extremely heat stable owing to the metals' electropositive character ($\mathrm{Li}_{2}\mathrm{CO}_{3}$ decomposes easily on heating).
The unusual behaviour of $\mathrm{Li}_{2} \mathrm{CO}_{3}$ can be explained by the fact that: Lithium's small size and high polarising power distort the electron cloud of the adjacent oxygen atom of the large $\mathrm{CO}_{3}^{2-}$ ion, weakening a carbon-oxygen bond. $\mathrm{Li}_{2} \mathrm{CO}_{3} \longrightarrow \mathrm{Li}_{2} \mathrm{O}+\mathrm{CO}_{2}$ When the larger carbonate ion is replaced by the smaller oxide ion, the lattice energy increases, favouring decomposition. $\mathrm{M}_{2} \mathrm{CO}_{3} \longrightarrow \mathrm{M}_{2} \mathrm{O}+\mathrm{CO}_{2} \uparrow$ $\mathrm{Na}_{2} \mathrm{CO}_{3}$ is used as washing soda, and baking soda is made from $\mathrm{NaHCO}_{3}$. Both $\mathrm{NaHCO}_{3}$ and $\mathrm{KHCO}_{3}$ have hydrogen bonds in their crystal structures. The $\mathrm{HCO}_{3}^{-}$ ions in $\mathrm{NaHCO}_{3}$ form an endless chain, whereas in $\mathrm{KHCO}_{3}$ they form a dimeric anion. Reactions $2 \mathrm{HNO}_{3}+\mathrm{K}_{2} \mathrm{CO}_{3} \rightarrow 2 \mathrm{KNO}_{3}+\mathrm{CO}_{2}+\mathrm{H}_{2} \mathrm{O}$ $2 \mathrm{NaHCO}_{3} \longrightarrow \mathrm{Na}_{2} \mathrm{CO}_{3}+\mathrm{H}_{2} \mathrm{O}+\mathrm{CO}_{2}$ $\mathrm{M}_{2} \mathrm{CO}_{3}+\mathrm{H}_{2} \mathrm{O} \rightleftharpoons 2 \mathrm{M}^{+}+\mathrm{HCO}_{3}^{-}+\mathrm{OH}^{-}$ Halides All of the metals in this group form MX halides. Because the lithium ion is the smallest ion in the group, lithium is more likely than the other metals to form hydrated salts. Properties Alkali metal halides are typical ionic compounds, as evidenced by the following features. With the exception of lithium fluoride, all alkali halides are easily soluble in water (lithium fluoride is only sparingly soluble owing to its high lattice energy). Their melting and boiling points are high. For the same alkali metal, the melting and boiling points drop in a predictable order: fluoride > chloride > bromide > iodide This is explained in terms of the lattice energies of the metal halides.
For the same metal, the lattice energy decreases as the size of the halide ion increases (i.e., as the halogen's electronegativity decreases). For the same halide ion, the melting point of the lithium halide is lower than that of the sodium halide; however, from sodium to cesium the melting points of the halides decline down the group. Lithium halides show this anomalous behaviour because of their covalent character, whereas sodium and the other halides are ionic in nature. Among the ionic halides, the melting point falls down the group as the lattice energy decreases: $NaCl > KCl > RbCl > CsCl$ Solubility of halides of alkali metals: Alkali metal halides have a range of solubilities. The solubility of alkali metal fluorides in water, for example, gradually increases from lithium to cesium. Among the chlorides, lithium chloride is far more soluble in water than sodium chloride. This is owing to the lithium ion's small size and high hydration energy. As the lattice energy of the crystals decreases, solubility in water then increases steadily from sodium chloride to cesium chloride. In the fused state, these halides are good conductors of electricity. They consist of ionic crystals. Lithium halides, however, have a partially covalent character owing to the polarising power of the lithium ion. The lattice energy and polarising power are responsible for the structure and stability (solubility) of alkali metal halides. Lattice Energy: The energy released during the formation of a crystal lattice from gaseous cations and anions, or equivalently the energy required to separate one mole of a solid ionic compound into its gaseous ions, is known as lattice energy. Lattice energy (the force of attraction between the ions) is thus a direct measure of ionic crystal stability: the higher the lattice energy of a compound, the lower its solubility in water.
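The inverse relationship between ion size and lattice energy described above can be sketched numerically. This is a minimal illustration, not part of the original text: it uses the Coulomb proportionality $U \propto z_{+}z_{-}/(r_{+}+r_{-})$ with assumed Shannon-type ionic radii, and only the ordering, not the absolute values, is meaningful.

```python
# Sketch of the lattice-energy trend: for singly charged ions, lattice energy
# scales roughly as 1/(r+ + r-). Radii below are assumed values in pm,
# not taken from this text.
RADII_PM = {"Na+": 102, "K+": 138, "Rb+": 152, "Cs+": 167, "Cl-": 181}

def lattice_proxy(cation, anion="Cl-"):
    """Relative lattice-energy proxy (arbitrary units) for a 1:1 halide."""
    return 1.0 / (RADII_PM[cation] + RADII_PM[anion])

chlorides = ["Na+", "K+", "Rb+", "Cs+"]
proxies = [lattice_proxy(m) for m in chlorides]

# The proxy reproduces the order quoted above, NaCl > KCl > RbCl > CsCl,
# which parallels the falling melting points down the group.
assert proxies == sorted(proxies, reverse=True)
```

The same one-line proxy also rationalises the fluoride lattice-energy order $\mathrm{CsF}<\mathrm{RbF}<\mathrm{KF}<\mathrm{NaF}<\mathrm{LiF}$ used later in the worked examples.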
When an ionic compound crystal comes into contact with a polar solvent like water, the hydrogen end (positive pole) of the water molecule is attracted to a negative ion, whereas the oxygen end (negative pole) is drawn to a positive ion. The process of polar solvent molecules attaching to the ions is called solvation (or hydration, if the solvent is water). When ions are stabilised by solvation, a considerable amount of solvation energy (or hydration energy) is released which, if it exceeds the crystal's lattice energy, causes the ionic compound to dissolve in the solvent. In the case of lithium fluoride, however, the solvation energy is insufficient to overcome the lattice energy, so the compound stays insoluble. The combination of the small lithium ion and the small fluoride ion gives lithium fluoride its high lattice energy. The lattice energy contributed by a particular ion increases as the size of the oppositely charged ion decreases. Polarising power and polarisability (Fajan's Rule): Although an ionic bond in a compound like $\mathrm{M}^{+} \mathrm{X}^{-}$ is thought of as 100 percent ionic, it is found to have significant covalent character in some cases (e.g., lithium halides). According to Fajan, when two oppositely charged ions approach each other, the nature of the bond between them is determined by the action of one ion on the other. When two oppositely charged ions come into contact, the positive ion attracts electrons from the anion's outermost shell while repelling the anion's positively charged nucleus. The anion is distorted, deformed, or polarised as a result. The ability of a cation to distort an anion is called polarising power, and the anion's susceptibility to being polarised by the cation is called polarisability.
If the degree of polarisation is small, an ionic bond is formed; if the degree of polarisation is considerable, electrons are drawn from the anion towards the cation, resulting in a higher electron density between the two ions and a covalent bond. In general, the stronger an ion's polarising power or polarisability, the greater its tendency to form covalent bonds. Because polarising power grows as the size of the cation decreases, and polarisability grows as the size of the anion increases, the polarisation in a compound containing a large negative ion and a small positive ion may be so strong that the bond becomes covalent. Lithium iodide, made up of the lithium ion (the smallest alkali metal ion) and the iodide ion (the largest halide ion), is therefore largely covalent in nature. Other examples of such ionic-covalent compounds are $\mathrm{AlCl}_{3}, \mathrm{FeCl}_{3}, \mathrm{SnCl}_{4}$, etc. Reactions Ionic polyhalides are formed when alkali metal halides react with halogens and interhalogen compounds. $\mathrm{KI}+\mathrm{I}_{2} \rightarrow \mathrm{K}\left[\mathrm{I}_{3}\right]$ $\mathrm{KBr}+\mathrm{ICl} \rightarrow \mathrm{K}[\mathrm{BrICl}]$ $\mathrm{KF}+\mathrm{BrF}_{3} \rightarrow \mathrm{K}\left[\mathrm{BrF}_{4}\right]$ Sulphates They form sulphates of the type $\mathrm{M}_{2} \mathrm{SO}_{4}$ Anomalous Behaviour of Lithium Although lithium has many of the properties of the group I elements, it also differs from them in a number of ways. This unusual behaviour is caused by the extremely small size of the lithium atom and its ion. The high charge density of the lithium ion is due to its small size, so of all the alkali metal ions the lithium ion has the greatest polarising power and a significant distorting effect on a negative ion. The lithium ion therefore has a strong tendency towards solvation and the formation of covalent bonds.
It is also worth noting that the polarising power of the lithium ion is similar to that of the magnesium ion, making the two elements resemble each other closely in their properties. Lithium is substantially harder than the other elements in Group I (similarity with magnesium, which is also a hard metal). Lithium has a high melting point and boiling point. Lithium, unlike the other elements in this group, is the least reactive, as shown by the following points. It is not affected by air, unlike the others. It decomposes water only slowly (similar to magnesium). It reacts only slowly with bromine, unlike the others. On burning in oxygen, it forms only the monoxide $\mathrm{Li}_{2} \mathrm{O}$, while the others form peroxides too. Unlike the other elements, it forms a nitride, $\mathrm{Li}_{3} \mathrm{~N}$, when it comes into contact with nitrogen (similarity with $\mathrm{Mg}$). Lithium is much less electropositive and, therefore, several of its compounds ($\mathrm{Li}_{2} \mathrm{CO}_{3}$ and $\mathrm{LiOH}$) are less stable (similarity with Mg). For example, $2 \mathrm{LiOH} \rightarrow \mathrm{Li}_{2} \mathrm{O}+\mathrm{H}_{2} \mathrm{O}$ $\mathrm{Mg}(\mathrm{OH})_{2} \rightarrow \mathrm{MgO}+\mathrm{H}_{2} \mathrm{O}$ When heated, lithium nitrate produces nitrogen dioxide and oxygen, leaving lithium oxide behind (similar to magnesium nitrate), whereas sodium and potassium nitrates produce only oxygen, leaving nitrites. The majority of lithium salts (for example, the hydroxide, carbonate, oxalate, phosphate, and fluoride) are sparingly soluble in water (similarity with magnesium); the corresponding sodium and potassium salts are water soluble. Lithium halides and lithium alkyls are soluble in organic solvents, but sodium and potassium halides and alkyls are not; $\mathrm{MgCl}_{2}$ is likewise soluble in alcohol.
Lithium chloride, like magnesium chloride, undergoes some hydrolysis in hot water, though only to a small extent; sodium chloride and potassium chloride do not. Lithium sulphate, unlike the sulphates of the other alkali metals, does not form alums. Lithium compounds, particularly the lithium halides, are partially covalent in nature. This is owing to the lithium ion's tendency to attract electrons (polarising power), and it explains why lithium compounds have a smaller dipole moment than expected. The ions and compounds of lithium are more hydrated than those of the other alkali metals (similarity with magnesium). Extraction of Sodium Sodium is obtained on a large scale by two processes: Castner's process The electrolysis of fused sodium hydroxide takes place at $330^{\circ} \mathrm{C}$, with iron as the cathode and nickel as the anode. $2 \mathrm{NaOH} \rightleftharpoons 2 \mathrm{Na}^{+}+2 \mathrm{OH}^{-}$ At cathode: $2 \mathrm{Na}^{+}+2 \mathrm{e}^{-} \rightarrow 2 \mathrm{Na}$ At anode: $4 \mathrm{OH}^{-} \rightarrow 2 \mathrm{H}_{2} \mathrm{O}+\mathrm{O}_{2}+4 \mathrm{e}^{-}$ Oxygen and water are thus produced during electrolysis. The water generated at the anode partly evaporates and partly dissociates; the resulting hydrogen ions are discharged at the cathode. $\mathrm{H}_{2} \mathrm{O} \rightleftharpoons \mathrm{H}^{+}+\mathrm{OH}^{-}$ At cathode: $2 \mathrm{H}^{+}+2 \mathrm{e}^{-} \rightarrow 2 \mathrm{H} \rightarrow \mathrm{H}_{2} \uparrow$ Down's Process Nowadays, the metal is produced using Down's process. It involves electrolyzing fused sodium chloride containing calcium chloride and potassium fluoride at roughly 600 degrees Celsius, with iron as the cathode and graphite as the anode. The cell is made up of a steel tank lined with heat-resistant bricks. In the centre of the cell, a circular graphite anode is installed, encircled by a cylindrical iron cathode. A steel gauze cylinder separates the anode and cathode, allowing the fused charge to pass through.
A dome-shaped steel hood covers the anode and provides an outlet for the chlorine gas to escape. The molten metal liberated at the cathode rises and flows into the kerosene receiver. Reactions: $\mathrm{NaCl} \rightleftharpoons \mathrm{Na}^{+}+\mathrm{Cl}^{-}$ At Cathode: $2 \mathrm{Na}^{+}+2 \mathrm{e}^{-} \rightarrow 2 \mathrm{Na}$ At Anode: $2 \mathrm{Cl}^{-} \rightarrow \mathrm{Cl}_{2}+2 \mathrm{e}^{-}$ The sodium obtained from this method is $99.5 \%$ pure. The electrolysis of pure sodium chloride has a number of drawbacks: Sodium chloride has a high fusion temperature of 803 degrees Celsius, which is difficult to sustain. Because sodium is volatile at this temperature, some of it vaporises, forming a metallic fog. The electrolysis products, sodium and chlorine, are corrosive at this temperature and may attack the material of the cell. To overcome these problems, pure sodium chloride is blended with calcium chloride and potassium fluoride. At the voltage used, calcium chloride and potassium fluoride do not decompose, but they do lower the fusion temperature. A mixture containing 40% sodium chloride, 60% calcium chloride, and a trace amount of potassium fluoride has a fusion temperature of around 600 degrees Celsius, and this mixture is electrolyzed at 600 degrees Celsius in the electrolytic cell. Example 1: Alkali metals are paramagnetic but their salts are diamagnetic. Explain. Solution: The outermost orbital in the metal atoms is singly occupied, whereas in the cations all orbitals are doubly occupied (inert gas configuration). e.g., $\mathrm{Na}: 1 \mathrm{~s}^{2}, 2 \mathrm{~s}^{2} 2 \mathrm{p}^{6}, 3 \mathrm{~s}^{1}$ (paramagnetic); $\mathrm{Na}^{+}: 1 \mathrm{~s}^{2}, 2 \mathrm{~s}^{2} 2 \mathrm{p}^{6}$ (diamagnetic). Example 2: Alkali metals are good reducing agents. Explain.
Solution: Alkali metals are strong reducing agents because their low ionisation enthalpies and high oxidation potentials allow them to lose their valence electron easily. Example 3: Which alkali metal ion has the maximum polarising power and why? Solution: Among the alkali metal ions, the lithium ion has the greatest polarising power. This is owing to the lithium ion's small size. Example 4: Lithium ion is far smaller than other alkali metal ions but it moves through a solution less rapidly than the others. Explain. OR The conductance of lithium salts is less in comparison to the salts of other alkali metals. Explain. Solution: The lithium ion has the highest degree of hydration because of its high charge density, which pulls numerous water molecules around it. As a result, the hydrated lithium ion is larger than the other hydrated alkali metal ions, which reduces its mobility in solution and lowers the conductance. Size: $\quad[\mathrm{Li}(\mathrm{aq})]^{+}>[\mathrm{Na}(\mathrm{aq})]^{+}>[\mathrm{K}(\mathrm{aq})]^{+}$ Example 5: Sodium salts in aqueous solutions are either neutral or alkaline in nature. Explain. Solution: The anions in sodium salts are derived from either strong acids or weak acids. When the anions come from strong acids, there is no hydrolysis and the aqueous solutions are neutral. When the anions come from weak acids, however, they undergo hydrolysis, resulting in alkaline solutions. Solutions of sodium carbonate or bicarbonate, for example, are alkaline. $\mathrm{CO}_{3}^{2-}+\mathrm{H}_{2} \mathrm{O} \rightleftharpoons \mathrm{HCO}_{3}^{-}+\mathrm{OH}^{-}$ $\mathrm{HCO}_{3}^{-}+\mathrm{H}_{2} \mathrm{O} \rightleftharpoons \mathrm{H}_{2} \mathrm{CO}_{3}+\mathrm{OH}^{-}$ Example 6: Why do potassium, rubidium and cesium form superoxides in preference to oxides and peroxides on being heated in excess supply of air?
Solution: $\mathrm{K}^{+}, \mathrm{Rb}^{+}$ and $\mathrm{Cs}^{+}$ are large cations, and the superoxide ion $\left(\mathrm{O}_{2}^{-}\right)$ is larger than the oxide $\left(\mathrm{O}^{2-}\right)$ and peroxide $\left(\mathrm{O}_{2}^{2-}\right)$ ions. These metals form superoxides rather than oxides or peroxides because a larger cation stabilises a larger anion. Example 7: Why is ${\text{K}}{{\text{O}}_{\text{2}}}$ paramagnetic? Solution: The superoxide ion $\mathrm{O}_{2}^{-}$ is paramagnetic because of the one unpaired electron in a $\pi^{*} 2 \mathrm{p}$ molecular orbital: $\mathrm{KK}\, \sigma(2 \mathrm{~s})^{2}\, \sigma^{*}(2 \mathrm{~s})^{2}\, \sigma\left(2 \mathrm{p}_{x}\right)^{2}\, \pi\left(2 \mathrm{p}_{y}\right)^{2}\, \pi\left(2 \mathrm{p}_{z}\right)^{2}\, \pi^{*}\left(2 \mathrm{p}_{y}\right)^{2}\, \pi^{*}\left(2 \mathrm{p}_{z}\right)^{1}$ Example 8: Among the alkali metals, which element has Highest melting point Highest size of hydrated ion in solution Strongest reducing agent in solution Least electronegative Solution: The elements which have: Highest melting point: Lithium Highest size of hydrated ion in solution: $[\mathrm{Li}(\mathrm{aq})]^{+}$ Strongest reducing agent in solution: Lithium Least electronegative: Cesium Example 9: What happens when the following compounds are heated? $L{i_2}C{O_3} \to L{i_2}O + C{O_2}$ $N{a_2}C{O_3}.10{H_2}O \to N{a_2}C{O_3} + 10{H_2}O$ $4LiN{O_3} \to 2L{i_2}O + 4N{O_2} + {O_2}$ $2NaN{O_3} \to 2NaN{O_2} + {O_2}$ Example 10: Arrange $\mathrm{LiF}, \mathrm{NaF}, \mathrm{KF}, \mathrm{RbF}$ and CsF in order of increasing lattice energy. Arrange the following in order of increasing covalent character: $\mathrm{MCl}, \mathrm{MBr}, \mathrm{MF}, \mathrm{MI}$ (where $\mathrm{M}=$ alkali metal) Solution: $\mathrm{CsF}<\mathrm{RbF}<\mathrm{KF}<\mathrm{NaF}<\mathrm{LiF}$ $\mathrm{MF}<\mathrm{MCl}<\mathrm{MBr}<\mathrm{MI}$ With increasing size of the anion, covalent character increases.
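The electron-count arithmetic behind the peroxide and superoxide structures (and Example 7) can be checked with a short sketch. This is an illustration, not from the text: it fills the standard MO ordering for $\mathrm{O}_2$-derived species using only the 2s/2p valence electrons (the KK core contributes equally to bonding and antibonding and cancels); the function name and layout are my own.

```python
# Bond order and unpaired electrons for O2-like species, filling the standard
# MO ordering for O2: s2s < s*2s < s2p < pi2p < pi*2p < s*2p.
SHELLS = [("sigma2s", 2, True), ("sigma*2s", 2, False),
          ("sigma2p", 2, True), ("pi2p", 4, True),
          ("pi*2p", 4, False), ("sigma*2p", 2, False)]

def mo_analysis(valence_electrons):
    """Return (bond_order, unpaired_electrons) for the given valence count."""
    n, bonding, antibonding, pi_star = valence_electrons, 0, 0, 0
    for name, capacity, is_bonding in SHELLS:
        e = min(n, capacity)
        n -= e
        if is_bonding:
            bonding += e
        else:
            antibonding += e
        if name == "pi*2p":
            pi_star = e
    # Hund's rule within the degenerate pi* pair decides the unpaired count
    unpaired = pi_star if pi_star <= 2 else 4 - pi_star
    return (bonding - antibonding) / 2, unpaired

# peroxide O2^2- (14 valence electrons): bond order 1, diamagnetic
assert mo_analysis(14) == (1.0, 0)
# superoxide O2^- (13): bond order 1.5, one unpaired electron -> paramagnetic
assert mo_analysis(13) == (1.5, 1)
```

The same filling gives neutral $\mathrm{O}_2$ (12 valence electrons) a bond order of 2 with two unpaired electrons, matching the stability order normal oxide > peroxide > superoxide quoted earlier.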
Example 11: Why can a standard solution of sodium hydroxide not be prepared by weighing? Solution: Sodium hydroxide is deliquescent. It absorbs moisture and reacts with atmospheric carbon dioxide, both of which increase its mass, so precise weighing is difficult. Example 12: What happens when: Fused sodium reacts with dry ammonia. Sodium hydrogen carbonate is heated. Sodium hydroxide is heated with sulphur? Solution: Sodamide is formed. $2 \mathrm{Na}+2 \mathrm{NH}_{3} \rightarrow 2 \mathrm{NaNH}_{2}+\mathrm{H}_{2}$ Sodium carbonate is formed. $2 \mathrm{NaHCO}_{3} \rightarrow \mathrm{Na}_{2} \mathrm{CO}_{3}+\mathrm{H}_{2} \mathrm{O}+\mathrm{CO}_{2}$ Sodium thiosulphate is formed. $4 \mathrm{~S}+6 \mathrm{NaOH} \rightarrow \mathrm{Na}_{2} \mathrm{~S}_{2} \mathrm{O}_{3}+2 \mathrm{Na}_{2} \mathrm{~S}+3 \mathrm{H}_{2} \mathrm{O}$ Example 13: Give reasons for the following: $\mathrm{LiCl}$ is more covalent than $\mathrm{NaCl}$ Lithium iodide has a lower melting point than $\mathrm{LiCl}$ $\mathrm{MgCl}_{2}$ is more covalent than $\mathrm{NaCl}$ $\mathrm{CuCl}$ is more covalent than $\mathrm{NaCl}$ Solution: The lithium ion is more polarising than the sodium ion due to its smaller size, so lithium chloride is more covalent than sodium chloride. Due to its bigger size, $\mathrm{I}^{-}$ is more polarisable than $\mathrm{Cl}^{-}$, so lithium iodide is more covalent than lithium chloride and therefore has a lower melting point than $\mathrm{LiCl}$. The magnesium ion is more polarising than the sodium ion due to its greater charge, so magnesium chloride is more covalent than sodium chloride. The copper ion is more polarising than the sodium ion because of its pseudo inert gas configuration, so copper chloride is more covalent than sodium chloride. Example 14: Identify (A), (B), (C) and (D) and give their chemical formulae.
(A) $+\mathrm{NaOH} \xrightarrow{\text{Heat}} \mathrm{NaCl}+\mathrm{NH}_{3}+\mathrm{H}_{2} \mathrm{O}$ $\mathrm{NH}_{3}+\mathrm{CO}_{2}+\mathrm{H}_{2} \mathrm{O} \longrightarrow(\mathrm{B})$ $(\mathrm{B})+\mathrm{NaCl} \longrightarrow(\mathrm{C})+\mathrm{NH}_{4} \mathrm{Cl}$ $(\mathrm{C}) \xrightarrow{\text{Heat}} \mathrm{Na}_{2} \mathrm{CO}_{3}+\mathrm{H}_{2} \mathrm{O}+(\mathrm{D})$ Solution: $\mathrm{NH}_{4} \mathrm{Cl}+\mathrm{NaOH} \xrightarrow{\text{Heat}} \mathrm{NH}_{3}+\mathrm{NaCl}+\mathrm{H}_{2} \mathrm{O}$ Compound (A) is ammonium chloride $\left(\mathrm{NH}_{4} \mathrm{Cl}\right)$. $\mathrm{NH}_{3}+\mathrm{CO}_{2}+\mathrm{H}_{2} \mathrm{O} \longrightarrow \mathrm{NH}_{4} \mathrm{HCO}_{3}$ Compound (B) is ammonium bicarbonate $\left(\mathrm{NH}_{4} \mathrm{HCO}_{3}\right)$. $\mathrm{NH}_{4} \mathrm{HCO}_{3}+\mathrm{NaCl} \longrightarrow \mathrm{NaHCO}_{3}+\mathrm{NH}_{4} \mathrm{Cl}$ Compound (C) is sodium bicarbonate $\left(\mathrm{NaHCO}_{3}\right)$. $2 \mathrm{NaHCO}_{3} \longrightarrow \mathrm{Na}_{2} \mathrm{CO}_{3}+\mathrm{H}_{2} \mathrm{O}+\mathrm{CO}_{2}$ Compound (D) is carbon dioxide $\left(\mathrm{CO}_{2}\right)$.
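The Fajan's-rule comparisons in Example 13 can be sketched the same way: a cation's polarising power grows with charge and shrinks with radius. The charge/radius ratio below is a crude proxy under assumed ionic radii, not from this text; note that the $\mathrm{CuCl}$ case rests on the pseudo-inert-gas argument and is deliberately not captured by this simple ratio.

```python
# Polarising-power proxy: charge / ionic radius (assumed radii in pm).
IONS = {"Li+": (1, 76), "Na+": (1, 102), "Mg2+": (2, 72)}

def polarising_power(ion):
    """Crude Fajan's-rule proxy: higher value -> more covalent chloride."""
    charge, radius_pm = IONS[ion]
    return charge / radius_pm

# Li+ is more polarising than Na+  -> LiCl is more covalent than NaCl
assert polarising_power("Li+") > polarising_power("Na+")
# Mg2+ is more polarising than Na+ -> MgCl2 is more covalent than NaCl
assert polarising_power("Mg2+") > polarising_power("Na+")
```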
Example 15: Arrange the following as specified: $\mathrm{MgO}, \mathrm{SrO}, \mathrm{K}_{2} \mathrm{O}$ and $\mathrm{Cs}_{2} \mathrm{O}$ (increasing order of basic character) $\mathrm{LiCl}$, ${\text{LiBr}}$, ${\text{LiI}}$ (decreasing order of covalent character) $\mathrm{NaHCO}_{3}, \mathrm{KHCO}_{3}, \mathrm{Mg}\left(\mathrm{HCO}_{3}\right)_{2}, \mathrm{Ca}\left(\mathrm{HCO}_{3}\right)_{2}$ (decreasing solubility in water) $\mathrm{LiF}, \mathrm{NaF}, \mathrm{RbF}, \mathrm{KF}$ and CsF (in order of increasing lattice energy) $\mathrm{Li}, \mathrm{Na}, \mathrm{K}$ (in order of decreasing reducing nature in solution) Solution: $\mathrm{MgO}<\mathrm{SrO}<\mathrm{K}_{2} \mathrm{O}<\mathrm{Cs}_{2} \mathrm{O}$ $\mathrm{LiI}>\mathrm{LiBr}>\mathrm{LiCl}$ $\mathrm{NaHCO}_{3}<\mathrm{KHCO}_{3}<\mathrm{Mg}\left(\mathrm{HCO}_{3}\right)_{2}<\mathrm{Ca}\left(\mathrm{HCO}_{3}\right)_{2}$ $\mathrm{CsF}<\mathrm{RbF}<\mathrm{KF}<\mathrm{NaF}<\mathrm{LiF}$ $\mathrm{Li}>\mathrm{K}>\mathrm{Na}$ Example 16: What happens when $\mathrm{KO}_{2}$ reacts with water? Give the balanced chemical equation. Predict, giving reasons, the outcome of the reaction: $\mathrm{LiI}+\mathrm{KF} \longrightarrow$ Solution: When $\mathrm{KO}_{2}$ reacts with water, oxygen is evolved and an alkaline solution containing potassium hydroxide and $\mathrm{H}_{2} \mathrm{O}_{2}$ is formed: $2\mathrm{KO}_{2}+2 \mathrm{H}_{2} \mathrm{O} \longrightarrow 2 \mathrm{KOH}+\mathrm{H}_{2} \mathrm{O}_{2}+\mathrm{O}_{2}$ Lithium iodide reacts with potassium fluoride and the anions are exchanged: $\mathrm{LiI}+\mathrm{KF} \longrightarrow \mathrm{LiF}+\mathrm{KI}$ The exchange takes place because stable compounds form: the bigger cation stabilises the larger anion and the smaller cation stabilises the smaller anion. Alkaline Earth Metals Introduction Group II of the periodic table contains the elements beryllium, magnesium, calcium, strontium, barium, and radium. All of these elements are metals.
Calcium, strontium, and barium oxides were discovered well before the metals themselves, and because they were alkaline and found in the earth they were dubbed the alkaline earths. Once the elements themselves were isolated, they were named the alkaline earth metals. Radium shares chemical properties with the alkaline earth metals, but because it is a radioactive element it is studied separately with the other radioactive elements. Physical Properties Atomic Size On going down the group, the atomic size of the elements increases. Oxidation State The +2 oxidation state is exhibited by group II elements. Density Group II elements are smaller in size than group I elements, hence they have a higher density than group I elements. From beryllium to radium, the density rises. Exception: Calcium has a lower density than magnesium, and magnesium has a lower density than beryllium. Nature of Bonds Beryllium forms mainly covalent compounds. The rest of the elements in group II form ionic bonds. Hydration Energy Because of their smaller size and higher charge, the hydration energies of group II ions are four to five times higher than those of group I ions. As the size of the ions grows larger, the hydration enthalpy falls. Lattice Energy The lattice energy of alkaline earth metal salts with a common anion drops as one moves down the group. Ionization Energy Because the atoms in group II are smaller, the electrons are more firmly bound, requiring more energy to remove the first electron (first ionisation energy) than in group I. The energy required to remove the second electron is roughly double that required to remove the first. As a result, the energy necessary to make divalent ions from group II elements is about four times that required to make $\mathrm{M}^{+}$ from group I metals. Flame Test When energy is supplied to these elements in a flame, their electrons are excited to higher energy levels, as is the case with alkali metals under identical conditions.
The excess energy is emitted as visible light of a characteristic colour when the electrons return to their original energy levels, as shown below:

| Element | Colour |
| --- | --- |
| Calcium | Brick red |
| Strontium | Crimson red |
| Barium | Grassy green |
| Radium | Crimson |

The atoms of beryllium and magnesium are smaller, so their electrons are more tightly bound. The energy of the flame therefore does not excite them to higher energy levels, and these elements do not impart any colour to the bunsen flame. Standard Oxidation Potential Standard Oxidation Potential of Alkaline Earth Metals

| Element | Oxidation Reaction | Standard Oxidation Potential (volt) |
| --- | --- | --- |
| $\mathrm{Be}$ | $\mathrm{Be} \rightarrow \mathrm{Be}^{2+}+2 \mathrm{e}^{-}$ | $1.85$ |
| $\mathrm{Mg}$ | $\mathrm{Mg} \rightarrow \mathrm{Mg}^{2+}+2 \mathrm{e}^{-}$ | $2.37$ |
| $\mathrm{Ca}$ | $\mathrm{Ca} \rightarrow \mathrm{Ca}^{2+}+2 \mathrm{e}^{-}$ | $2.87$ |
| $\mathrm{Sr}$ | $\mathrm{Sr} \rightarrow \mathrm{Sr}^{2+}+2 \mathrm{e}^{-}$ | $2.89$ |
| $\mathrm{Ba}$ | $\mathrm{Ba} \rightarrow \mathrm{Ba}^{2+}+2 \mathrm{e}^{-}$ | $2.90$ |

Solubility in Liquid Ammonia Like the group I metals, these metals dissolve in liquid ammonia. Dilute solutions are blue in colour owing to the presence of solvated electrons. As the solution decomposes, amides form and hydrogen gas is released. $2 \mathrm{NH}_{3}+2 \mathrm{e}^{-} \rightarrow 2 \mathrm{NH}_{2}^{-}+\mathrm{H}_{2}$ Electronegativity Values The electronegativity values of group II elements are low, although higher than those of group I elements. The electronegativity value diminishes down the group. Colourless and Diamagnetic Ions The alkaline earth metals form $\mathrm{M}^{2+}$ ions, which are diamagnetic and colourless owing to the lack of unpaired electrons.
Melting and Boiling Point The melting point of the group II elements decreases down the group as the cohesive force diminishes. Exception: Magnesium has the lowest melting point. Metallic Properties Group II elements have typical metallic characteristics. They have a bright metallic lustre and excellent electrical and thermal conductivity. GROUP I and II Oxides Sodium Oxide Preparation It is made by heating sodium at 180 degrees Celsius in a limited amount of air or oxygen and then distilling off the surplus sodium. $2 \mathrm{Na}+1 / 2 \mathrm{O}_{2} \stackrel{180^{\circ} \mathrm{C}}{\longrightarrow} \mathrm{Na}_{2} \mathrm{O}$ It is also made by heating sodium peroxide, sodium nitrate or sodium nitrite with sodium: $N{a_2}{O_2} + 2Na \to 2N{a_2}O$ $2NaN{O_3} + 10Na \to 6N{a_2}O + {N_2}$ $2NaN{O_2} + 6Na \to 4N{a_2}O + {N_2}$ Properties It is a white amorphous mass. It decomposes at 400 degrees Celsius into sodium peroxide and sodium. $2 \mathrm{Na}_{2} \mathrm{O} \stackrel{400^{\circ} \mathrm{C}}{\longrightarrow} \mathrm{Na}_{2} \mathrm{O}_{2}+2 \mathrm{Na}$ It dissolves violently in water, yielding caustic soda. $\mathrm{Na}_{2} \mathrm{O}+\mathrm{H}_{2} \mathrm{O} \longrightarrow 2 \mathrm{NaOH}$ Sodium Peroxide Preparation: It is formed by heating the metal in excess air or oxygen at 300 degrees Celsius in a dry, carbon dioxide-free environment. $2 \mathrm{Na}+\mathrm{O}_{2} \longrightarrow \mathrm{Na}_{2} \mathrm{O}_{2}$ Properties It is a pale yellow solid, becoming white in air owing to the formation of a film of $\mathrm{NaOH}$ and $\mathrm{Na}_{2} \mathrm{CO}_{3}$. In cold water, sodium peroxide produces $\mathrm{H}_{2} \mathrm{O}_{2}$, but at room temperature it produces oxygen. In ice-cold mineral acids, sodium peroxide produces hydrogen peroxide.
$\mathrm{Na}_{2} \mathrm{O}_{2}+2 \mathrm{H}_{2} \mathrm{O} \stackrel{\sim 0^{\circ} \mathrm{C}}{\longrightarrow} 2 \mathrm{NaOH}+\mathrm{H}_{2} \mathrm{O}_{2}$

$2 \mathrm{Na}_{2} \mathrm{O}_{2}+2 \mathrm{H}_{2} \mathrm{O} \stackrel{25^{\circ} \mathrm{C}}{\longrightarrow} 4 \mathrm{NaOH}+\mathrm{O}_{2}$

$\mathrm{Na}_{2} \mathrm{O}_{2}+\mathrm{H}_{2} \mathrm{SO}_{4} \stackrel{\sim 0^{\circ} \mathrm{C}}{\longrightarrow} \mathrm{Na}_{2} \mathrm{SO}_{4}+\mathrm{H}_{2} \mathrm{O}_{2}$

It combines with carbon dioxide to produce sodium carbonate and oxygen, which is why it is used to purify the air in confined spaces such as submarines and ill-ventilated rooms.

$2 \mathrm{Na}_{2} \mathrm{O}_{2}+2 \mathrm{CO}_{2} \longrightarrow 2 \mathrm{Na}_{2} \mathrm{CO}_{3}+\mathrm{O}_{2}$

It is an oxidising agent and oxidises charcoal, $\mathrm{CO}$, $\mathrm{NH}_{3}$ and $\mathrm{SO}_{2}$.

$3 \mathrm{Na}_{2} \mathrm{O}_{2}+2 \mathrm{C} \longrightarrow 2 \mathrm{Na}_{2} \mathrm{CO}_{3}+2 \mathrm{Na}$

$\mathrm{Na}_{2} \mathrm{O}_{2}+\mathrm{CO} \longrightarrow \mathrm{Na}_{2} \mathrm{CO}_{3}$

$\mathrm{Na}_{2} \mathrm{O}_{2}+\mathrm{SO}_{2} \longrightarrow \mathrm{Na}_{2} \mathrm{SO}_{4}$

$3 \mathrm{Na}_{2} \mathrm{O}_{2}+2 \mathrm{NH}_{3} \longrightarrow 6 \mathrm{NaOH}+\mathrm{N}_{2}$

It contains the peroxide ion, $\mathrm{O}_{2}^{2-}$.

Uses:

For preparing $\mathrm{H}_{2} \mathrm{O}_{2}$ and $\mathrm{O}_{2}$

Oxygenating the air in submarines

As an oxidising agent in the laboratory.
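The air-purification reaction 2 Na₂O₂ + 2 CO₂ → 2 Na₂CO₃ + O₂ fixes how much oxygen a given charge of peroxide can release. A quick Python calculation (approximate atomic masses assumed; not part of the original notes) makes the stoichiometry concrete:

```python
# Rough stoichiometry for the air-purification reaction
#   2 Na2O2 + 2 CO2 -> 2 Na2CO3 + O2
# using approximate atomic masses (g/mol).
Na, O = 22.99, 16.00
M_Na2O2 = 2 * Na + 2 * O   # ~77.98 g/mol
M_O2 = 2 * O               # 32.00 g/mol

mass_peroxide = 100.0                 # grams of Na2O2 taken
mol_peroxide = mass_peroxide / M_Na2O2
mol_O2 = mol_peroxide / 2             # 2 mol peroxide give 1 mol O2
mass_O2 = mol_O2 * M_O2

print(f"{mass_O2:.1f} g of O2 per 100 g Na2O2")  # about 20.5 g
```

So roughly a fifth of the peroxide's mass comes back as breathable oxygen, while the CO₂ is locked up as solid sodium carbonate.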
Oxides of Potassium:

| Oxide | Colour |
| --- | --- |
| $\mathrm{K}_{2} \mathrm{O}$ | White |
| $\mathrm{K}_{2} \mathrm{O}_{2}$ | White |
| $\mathrm{K}_{2} \mathrm{O}_{3}$ | Red |
| $\mathrm{KO}_{2}$ | Bright yellow |
| $\mathrm{KO}_{3}$ | Reddish brown needles |

Preparation:

$2 \mathrm{KNO}_{3}+10 \mathrm{~K} \xrightarrow{\text { heating }} 6 \mathrm{~K}_{2} \mathrm{O}+\mathrm{N}_{2}$

$\mathrm{K}_{2} \mathrm{O}_{2}+2 \mathrm{~K} \xrightarrow{\text { heating }} 2 \mathrm{~K}_{2} \mathrm{O}$

$\mathrm{K}_{2} \mathrm{O}+\mathrm{H}_{2} \mathrm{O} \rightarrow 2 \mathrm{KOH}$

$2 \mathrm{~K}+\mathrm{O}_{2} \xrightarrow{\text { controlled air at } 300^{\circ} \mathrm{C}} \mathrm{K}_{2} \mathrm{O}_{2}$

Passage of $\mathrm{O}_{2}$ through a blue solution of $\mathrm{K}$ in liquid $\mathrm{NH}_{3}$ yields the oxides $\mathrm{K}_{2} \mathrm{O}_{2}$ (white), $\mathrm{K}_{2} \mathrm{O}_{3}$ (red) and $\mathrm{KO}_{2}$ (deep yellow).

$\mathrm{K} \text { in liq. } \mathrm{NH}_{3} \longrightarrow \mathrm{K}_{2} \mathrm{O}_{2} \longrightarrow \mathrm{K}_{2} \mathrm{O}_{3} \longrightarrow \mathrm{KO}_{2}$

$2 \mathrm{KO}_{2}+2 \mathrm{H}_{2} \mathrm{O} \longrightarrow 2 \mathrm{KOH}+\mathrm{H}_{2} \mathrm{O}_{2}+\mathrm{O}_{2}$

Magnesium Oxide

It is also known as magnesia, and it is made by heating natural magnesite.

$\mathrm{MgCO}_{3} \longrightarrow \mathrm{MgO}+\mathrm{CO}_{2}$

Properties

It occurs as a white powder. The melting point of magnesium oxide is 2850 degrees Celsius, so it is used in the manufacture of refractory bricks for furnaces. It gives an alkaline reaction and is very slightly soluble in water.

Calcium Oxide

It is manufactured by decomposing limestone at a high temperature of roughly 1000 degrees Celsius and is known as quick lime or lime.

$\mathrm{CaCO}_{3} \rightarrow \mathrm{CaO}+\mathrm{CO}_{2}+4200 \text{ cal}$

Properties

It is a white amorphous powder with a melting point of 2570 degrees Celsius. When heated in an oxygen-hydrogen flame, it produces a strong light (limelight). It is a basic oxide and reacts with acidic oxides such as silicon dioxide and carbon dioxide.
$\mathrm{CaO}+\mathrm{SiO}_{2} \longrightarrow \mathrm{CaSiO}_{3}$

$\mathrm{CaO}+\mathrm{CO}_{2} \longrightarrow \mathrm{CaCO}_{3}$

On combination with water, it produces slaked lime.

$\mathrm{CaO}+\mathrm{H}_{2} \mathrm{O} \longrightarrow \mathrm{Ca}(\mathrm{OH})_{2}$

Magnesium Peroxide and Calcium Peroxide

These are obtained by passing $\mathrm{H}_{2} \mathrm{O}_{2}$ into a suspension of $\mathrm{Mg}(\mathrm{OH})_{2}$ or $\mathrm{Ca}(\mathrm{OH})_{2}$.

Uses: Magnesium peroxide is a whitening agent and an antimicrobial in toothpaste.

Hydroxides

Sodium Hydroxide

Preparation

Electrolysis of Brine:

$\mathrm{NaCl} \rightleftharpoons \mathrm{Na}^{+}+\mathrm{Cl}^{-}$

At Anode: $2 \mathrm{Cl}^{-} \longrightarrow \mathrm{Cl}_{2}+2 \mathrm{e}^{-}$

At Cathode: $\mathrm{Na}^{+}+\mathrm{e}^{-} \longrightarrow \mathrm{Na}$

$2 \mathrm{Na}+2 \mathrm{H}_{2} \mathrm{O} \longrightarrow 2 \mathrm{NaOH}+\mathrm{H}_{2}$

Causticisation of $\mathrm{Na}_{2} \mathrm{CO}_{3}$ (Gossage's method):

$\mathrm{Na}_{2} \mathrm{CO}_{3}+\mathrm{Ca}(\mathrm{OH})_{2} \rightleftharpoons 2 \mathrm{NaOH}+\mathrm{CaCO}_{3}$

Since $\mathrm{K}_{\mathrm{sp}}\left(\mathrm{CaCO}_{3}\right)<\mathrm{K}_{\mathrm{sp}}\left(\mathrm{Ca}(\mathrm{OH})_{2}\right)$, the equilibrium shifts towards the right.

Properties

It is a white crystalline solid that is highly corrosive and deliquescent. It is resistant to heat. Its aqueous solution is alkaline and feels soapy to the touch. It precipitates insoluble hydroxides from solutions of many metal salts:
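Gossage's causticisation above is driven entirely by the solubility-product inequality Ksp(CaCO₃) < Ksp(Ca(OH)₂). A minimal Python sketch with approximate literature Ksp values (assumed here for illustration; the notes themselves quote no numbers) shows the comparison:

```python
# Why Gossage's causticisation goes to the right:
#   Na2CO3 + Ca(OH)2 <-> 2 NaOH + CaCO3
# The far less soluble CaCO3 precipitates out, pulling the
# equilibrium forward.  Approximate literature Ksp values,
# assumed for illustration (not given in the notes):
Ksp = {"CaCO3": 3.3e-9, "Ca(OH)2": 5.5e-6}

assert Ksp["CaCO3"] < Ksp["Ca(OH)2"]
print("CaCO3 is the less soluble salt, so it precipitates and"
      " drives the equilibrium towards NaOH.")
```

The exact numbers vary between data sources, but the inequality itself is what matters: removing CaCO₃ from solution as a precipitate keeps the forward reaction going.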
$\mathrm{FeCl}_{3}+3 \mathrm{NaOH} \longrightarrow \mathrm{Fe}(\mathrm{OH})_{3} \downarrow+3 \mathrm{NaCl}$

$\mathrm{NH}_{4} \mathrm{Cl}+\mathrm{NaOH} \longrightarrow \mathrm{NaCl}+\mathrm{NH}_{3} \uparrow+\mathrm{H}_{2} \mathrm{O}$

$\mathrm{ZnCl}_{2}+2 \mathrm{NaOH} \longrightarrow \mathrm{Zn}(\mathrm{OH})_{2} \downarrow+2 \mathrm{NaCl}$

$\mathrm{Zn}(\mathrm{OH})_{2} \downarrow+2 \mathrm{NaOH} \stackrel{\text { excess }}{\longrightarrow} \mathrm{Na}_{2} \mathrm{ZnO}_{2}+2 \mathrm{H}_{2} \mathrm{O}$

Acidic and amphoteric oxides dissolve in it easily, e.g.:

$\mathrm{CO}_{2}+2 \mathrm{NaOH} \longrightarrow \mathrm{Na}_{2} \mathrm{CO}_{3}+\mathrm{H}_{2} \mathrm{O}$

$\mathrm{Al}_{2} \mathrm{O}_{3}+2 \mathrm{NaOH} \longrightarrow 2 \mathrm{NaAlO}_{2}+\mathrm{H}_{2} \mathrm{O}$

Aluminium and zinc metals liberate $\mathrm{H}_{2}$ from $\mathrm{NaOH}$:

$2 \mathrm{Al}+2 \mathrm{NaOH}+2 \mathrm{H}_{2} \mathrm{O} \longrightarrow 3 \mathrm{H}_{2}+2 \mathrm{NaAlO}_{2}$

Several non-metals such as phosphorus and sulphur yield a hydride instead of hydrogen, e.g.:

$4 \mathrm{P}+3 \mathrm{NaOH}+3 \mathrm{H}_{2} \mathrm{O} \longrightarrow \mathrm{PH}_{3}+3 \mathrm{NaH}_{2} \mathrm{PO}_{2}$

Potassium Hydroxide

Preparation: It is prepared by the electrolysis of an aqueous solution of potassium chloride.

Properties

It is a stronger base than sodium hydroxide. Potassium hydroxide is more soluble in water than sodium hydroxide. In alcohol, $\mathrm{NaOH}$ is sparingly soluble but $\mathrm{KOH}$ is highly soluble. As a reagent $\mathrm{KOH}$ is less frequently used, but for the absorption of $\mathrm{CO}_{2}$, $\mathrm{KOH}$ is preferred over $\mathrm{NaOH}$ because the $\mathrm{KHCO}_{3}$ formed is soluble whereas $\mathrm{NaHCO}_{3}$ is insoluble.

Magnesium Hydroxide

It is found in nature as the mineral brucite.

Preparation: It is made by combining a caustic soda solution with a magnesium sulphate or chloride solution.
$\mathrm{MgSO}_{4}+2 \mathrm{NaOH} \longrightarrow \mathrm{Na}_{2} \mathrm{SO}_{4}+\mathrm{Mg}(\mathrm{OH})_{2}$

Properties

It can only be dried at temperatures up to 100 degrees Celsius; at higher temperatures it breaks down into its oxide.

$\mathrm{Mg}(\mathrm{OH})_{2} \longrightarrow \mathrm{MgO}+\mathrm{H}_{2} \mathrm{O}$

Being slightly soluble in water, it gives an alkaline reaction. It dissolves in $\mathrm{NH}_{4} \mathrm{Cl}$ solution:

$\mathrm{Mg}(\mathrm{OH})_{2}+2 \mathrm{NH}_{4} \mathrm{Cl} \longrightarrow \mathrm{MgCl}_{2}+2 \mathrm{NH}_{4} \mathrm{OH}$

Thus $\mathrm{Mg}(\mathrm{OH})_{2}$ is not precipitated from a solution of $\mathrm{Mg}^{2+}$ ions by $\mathrm{NH}_{4} \mathrm{OH}$ in the presence of excess $\mathrm{NH}_{4} \mathrm{Cl}$.

Calcium Hydroxide

Preparation: It can be prepared easily by spraying water on quicklime.

$\mathrm{CaO}+\mathrm{H}_{2} \mathrm{O} \longrightarrow \mathrm{Ca}(\mathrm{OH})_{2}$

Properties

The solubility of calcium hydroxide in water is very low, and it is less soluble in hot water than in cold water; as the temperature rises, the solubility decreases. Carbon dioxide is readily absorbed by calcium hydroxide (lime water), and this is used as a test for the gas.

CARBONATES

Sodium Carbonate

Preparation

Leblanc Process:

$\mathrm{NaCl}+\mathrm{H}_{2} \mathrm{SO}_{4}$ (conc.)
$\stackrel{\text { mild heating }}{\longrightarrow} \mathrm{NaHSO}_{4}+\mathrm{HCl}$

$\mathrm{NaCl}+\mathrm{NaHSO}_{4} \xrightarrow{\text { strongly heated }} \mathrm{Na}_{2} \mathrm{SO}_{4}+\mathrm{HCl}$

$\mathrm{Na}_{2} \mathrm{SO}_{4}+4 \mathrm{C} \longrightarrow \mathrm{Na}_{2} \mathrm{~S}+4 \mathrm{CO} \uparrow$

$\mathrm{Na}_{2} \mathrm{~S}+\mathrm{CaCO}_{3} \longrightarrow \mathrm{Na}_{2} \mathrm{CO}_{3}+\mathrm{CaS}$

Solvay Process:

$\mathrm{NH}_{3}+\mathrm{H}_{2} \mathrm{O}+\mathrm{CO}_{2} \longrightarrow \mathrm{NH}_{4} \mathrm{HCO}_{3}$

$\mathrm{NaCl}+\mathrm{NH}_{4} \mathrm{HCO}_{3} \longrightarrow \mathrm{NaHCO}_{3}+\mathrm{NH}_{4} \mathrm{Cl}$

$2 \mathrm{NaHCO}_{3} \stackrel{150^{\circ} \mathrm{C}}{\longrightarrow} \mathrm{Na}_{2} \mathrm{CO}_{3}+\mathrm{H}_{2} \mathrm{O}+\mathrm{CO}_{2}$

Properties

Soda ash is anhydrous sodium carbonate; it does not decompose when heated but melts at 852 degrees Celsius. Sodium carbonate forms a number of hydrates. Hydrated $\mathrm{Na}_{2} \mathrm{CO}_{3}$ is called washing soda $\left(\mathrm{Na}_{2} \mathrm{CO}_{3} \cdot 10 \mathrm{H}_{2} \mathrm{O}\right)$ and is prepared by the Leblanc process, the Solvay process and the electrolytic process. Sodium carbonate absorbs carbon dioxide and produces sodium bicarbonate, which can be calcined at 250 degrees Celsius to regenerate pure sodium carbonate.

$\mathrm{Na}_{2} \mathrm{CO}_{3}+\mathrm{H}_{2} \mathrm{O}+\mathrm{CO}_{2} \underset{250^{\circ} \mathrm{C}}{\rightleftharpoons} 2 \mathrm{NaHCO}_{3}$

It dissolves in acids with effervescence of carbon dioxide and is causticised by lime:
$\mathrm{Na}_{2} \mathrm{CO}_{3}+2 \mathrm{HCl} \longrightarrow 2 \mathrm{NaCl}+\mathrm{H}_{2} \mathrm{O}+\mathrm{CO}_{2}$

$\mathrm{Na}_{2} \mathrm{CO}_{3}+\mathrm{Ca}(\mathrm{OH})_{2} \longrightarrow 2 \mathrm{NaOH}+\mathrm{CaCO}_{3}$

Uses: Sodium carbonate is widely used as a flux in glass making.

Potassium Carbonate

It can be made by the Leblanc method, but not by the Solvay process, since potassium bicarbonate is too soluble in water to be precipitated.

Properties: It resembles $\mathrm{Na}_{2} \mathrm{CO}_{3}$; its melting point is $900^{\circ} \mathrm{C}$, but a mixture of $\mathrm{Na}_{2} \mathrm{CO}_{3}$ and $\mathrm{K}_{2} \mathrm{CO}_{3}$ melts at $712^{\circ} \mathrm{C}$.

Uses: Potassium carbonate is used in glass manufacturing.

Calcium Carbonate

Marble, limestone, chalk and calcite are natural forms of calcium carbonate. Pure calcium carbonate is made by dissolving marble or limestone in $\mathrm{HCl}$, removing any iron or aluminium by precipitating with $\mathrm{NH}_{3}$, and then adding $\left(\mathrm{NH}_{4}\right)_{2} \mathrm{CO}_{3}$ to the solution.

$\mathrm{CaCl}_{2}+\left(\mathrm{NH}_{4}\right)_{2} \mathrm{CO}_{3} \longrightarrow \mathrm{CaCO}_{3}+2 \mathrm{NH}_{4} \mathrm{Cl}$

Properties

It dissociates above 1000 degrees Celsius as follows:

$\mathrm{CaCO}_{3} \longrightarrow \mathrm{CaO}+\mathrm{CO}_{2}$

It dissolves in carbon dioxide-containing water to produce calcium bicarbonate, but boiling reprecipitates it from the solution.

$\mathrm{CaCO}_{3}+\mathrm{H}_{2} \mathrm{O}+\mathrm{CO}_{2} \underset{\text { boiling }}{\rightleftharpoons} \mathrm{Ca}\left(\mathrm{HCO}_{3}\right)_{2}$

Magnesium Carbonate

It is found in nature as magnesite, which is isomorphous with calcite. When sodium carbonate is added to a solution of a magnesium salt, a white precipitate forms; however, it is only the basic carbonate, known as magnesia alba, with the approximate composition $\mathrm{MgCO}_{3} \cdot \mathrm{Mg}(\mathrm{OH})_{2} \cdot 3 \mathrm{H}_{2} \mathrm{O}$, that is precipitated.

Properties: The properties of magnesium carbonate are similar to those of calcium carbonate.
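The dissociation CaCO₃ → CaO + CO₂ above implies a fixed fractional mass loss whenever limestone is calcined. A quick Python check with approximate atomic masses (assumed values, not from the notes) gives the familiar figure:

```python
# Thermal dissociation of limestone:
#   CaCO3 -> CaO + CO2   (above ~1000 C)
# Fraction of the mass lost as CO2, using approximate
# atomic masses (g/mol).
Ca, C, O = 40.08, 12.011, 16.00
M_CaCO3 = Ca + C + 3 * O   # ~100.1 g/mol
M_CO2 = C + 2 * O          # ~44.0 g/mol

loss = M_CO2 / M_CaCO3
print(f"{loss:.1%} of the mass is lost as CO2")  # ~44.0%
```

In other words, a kilogram of pure limestone yields only about 560 g of quick lime; the rest escapes as carbon dioxide.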
Bicarbonates

Sodium Bicarbonate

Preparation: By absorption of $\mathrm{CO}_{2}$ in $\mathrm{Na}_{2} \mathrm{CO}_{3}$ solution.

$\mathrm{Na}_{2} \mathrm{CO}_{3}+\mathrm{H}_{2} \mathrm{O}+\mathrm{CO}_{2} \rightleftharpoons 2 \mathrm{NaHCO}_{3}$

Uses: It is used in the pharmaceutical industry and in baking powder.

Potassium Bicarbonate

Preparation: Same as $\mathrm{NaHCO}_{3}$.

Properties: Same as $\mathrm{NaHCO}_{3}$, but it is more alkaline and more soluble in water than $\mathrm{NaHCO}_{3}$.

Magnesium Bicarbonate

$\mathrm{MgCO}_{3}+\mathrm{CO}_{2}+\mathrm{H}_{2} \mathrm{O} \underset{\text { boiling }}{\rightleftharpoons} \mathrm{Mg}\left(\mathrm{HCO}_{3}\right)_{2}$

Calcium Bicarbonate

$\mathrm{CaCO}_{3}+\mathrm{CO}_{2}+\mathrm{H}_{2} \mathrm{O} \rightleftharpoons \mathrm{Ca}\left(\mathrm{HCO}_{3}\right)_{2}$

Chlorides

Sodium Chloride

It is obtained from brine, which contains about 25% sodium chloride.

Properties

Pure sodium chloride is non-hygroscopic, whereas ordinary salt contains magnesium chloride, which makes it hygroscopic. It is used to make freezing mixtures in the laboratory (a freezing mixture of ice and common salt reaches about $-25$ degrees Celsius) and to melt ice and snow on roads.

Potassium Chloride

It occurs in nature as sylvite $(\mathrm{KCl})$ and carnallite $\left(\mathrm{KCl} \cdot \mathrm{MgCl}_{2} \cdot 6 \mathrm{H}_{2} \mathrm{O}\right)$.

Uses: Potassium chloride is used as a fertiliser.

Magnesium Chloride

Preparation: It is prepared by dissolving $\mathrm{MgCO}_{3}$ in dilute hydrochloric acid.

$\mathrm{MgCO}_{3}+2 \mathrm{HCl} \longrightarrow \mathrm{MgCl}_{2}+\mathrm{H}_{2} \mathrm{O}+\mathrm{CO}_{2}$

Properties

It crystallises as the hexahydrate $\mathrm{MgCl}_{2} \cdot 6 \mathrm{H}_{2} \mathrm{O}$, a deliquescent solid.
On heating, this hydrate undergoes hydrolysis as follows:

$\mathrm{MgCl}_{2}+\mathrm{H}_{2} \mathrm{O} \longrightarrow \mathrm{Mg}(\mathrm{OH}) \mathrm{Cl}+\mathrm{HCl}$

$\mathrm{Mg}(\mathrm{OH}) \mathrm{Cl} \longrightarrow \mathrm{MgO}+\mathrm{HCl}$

Anhydrous $\mathrm{MgCl}_{2}$ can be prepared by heating a double salt such as $\mathrm{MgCl}_{2} \cdot \mathrm{NH}_{4} \mathrm{Cl} \cdot 6 \mathrm{H}_{2} \mathrm{O}$:

$\mathrm{MgCl}_{2} \cdot \mathrm{NH}_{4} \mathrm{Cl} \cdot 6 \mathrm{H}_{2} \mathrm{O} \xrightarrow[\Delta]{-6 \mathrm{H}_{2} \mathrm{O}} \mathrm{MgCl}_{2} \cdot \mathrm{NH}_{4} \mathrm{Cl} \xrightarrow[\Delta]{\text { strong }} \mathrm{MgCl}_{2}+\mathrm{NH}_{3}+\mathrm{HCl}$

Sorel Cement: It is a paste-like mixture of magnesium oxide and magnesium chloride that hardens on standing. It is used in dental fillings, flooring and other applications.

Calcium Chloride

It is a by-product of the Solvay process. It can also be made by dissolving the carbonate in hydrochloric acid.

$\mathrm{CaCO}_{3}+2 \mathrm{HCl} \longrightarrow \mathrm{CaCl}_{2}+\mathrm{H}_{2} \mathrm{O}+\mathrm{CO}_{2}$

Properties

It forms deliquescent crystals. Like $\mathrm{MgCl}_{2}$, the hydrate is hydrolysed on heating, so anhydrous $\mathrm{CaCl}_{2}$ cannot be prepared simply by heating the hydrate.

$\mathrm{CaCl}_{2}+\mathrm{H}_{2} \mathrm{O} \rightleftharpoons \mathrm{CaO}+2 \mathrm{HCl}$

Anhydrous $\mathrm{CaCl}_{2}$ is used for drying gases and organic compounds, but not $\mathrm{NH}_{3}$ or alcohol, owing to the formation of $\mathrm{CaCl}_{2} \cdot 8 \mathrm{NH}_{3}$ and $\mathrm{CaCl}_{2} \cdot 4 \mathrm{C}_{2} \mathrm{H}_{5} \mathrm{OH}$.

Sulphates

Sodium Sulphate

Preparation

It is produced by heating common salt with sulphuric acid in the first step of the Leblanc process.

$2 \mathrm{NaCl}+\mathrm{H}_{2} \mathrm{SO}_{4} \longrightarrow \mathrm{Na}_{2} \mathrm{SO}_{4}+2 \mathrm{HCl}$

The salt cake so formed is crystallised out of its aqueous solution as $\mathrm{Na}_{2} \mathrm{SO}_{4} \cdot 10 \mathrm{H}_{2} \mathrm{O}$, which is called Glauber's salt.

Properties

Sodium sulphate is reduced to $\mathrm{Na}_{2} \mathrm{S}$ when it is fused with carbon.
$\mathrm{Na}_{2} \mathrm{SO}_{4}+4 \mathrm{C} \longrightarrow \mathrm{Na}_{2} \mathrm{~S}+4 \mathrm{CO}$

Uses: It is used in the pharmaceutical industry.

Potassium Sulphate

It is found in the Stassfurt potash deposits as schonite and kainite, and it is extracted by dissolving these minerals in water and crystallising. It crystallises from solution as the anhydrous salt, whereas $\mathrm{Na}_{2} \mathrm{SO}_{4}$ crystallises as a decahydrate.

Magnesium Sulphate

Preparation

It is made by dissolving kieserite $\left(\mathrm{MgSO}_{4} \cdot \mathrm{H}_{2} \mathrm{O}\right)$ in boiling water and then crystallising the heptahydrate from the resulting solution. This is known as Epsom salt. It is also obtained by dissolving magnesite in hot dilute $\mathrm{H}_{2} \mathrm{SO}_{4}$.

$\mathrm{MgCO}_{3}+\mathrm{H}_{2} \mathrm{SO}_{4} \longrightarrow \mathrm{MgSO}_{4}+\mathrm{H}_{2} \mathrm{O}+\mathrm{CO}_{2}$

It is isomorphous with $\mathrm{FeSO}_{4} \cdot 7 \mathrm{H}_{2} \mathrm{O}$ and $\mathrm{ZnSO}_{4} \cdot 7 \mathrm{H}_{2} \mathrm{O}$.

Calcium Sulphate

It occurs as anhydrite $\left(\mathrm{CaSO}_{4}\right)$ and as the dihydrate $\mathrm{CaSO}_{4} \cdot 2 \mathrm{H}_{2} \mathrm{O}$ (gypsum, alabaster or satin spar).

Properties

The solubility of calcium sulphate increases with temperature up to a certain point and then declines. Because of its porous nature, plaster of Paris is used in construction work.

Example 17: $\mathrm{Mg}_{3} \mathrm{N}_{2}$, when reacted with water, gives off $\mathrm{NH}_{3}$, but $\mathrm{HCl}$ is not obtained from $\mathrm{MgCl}_{2}$ on reaction with water at room temperature. Why? Also, the crystalline salts of alkaline earth metals contain more water of crystallisation than the corresponding alkali metal salts. Why?

Solution: Because $\mathrm{Mg}_{3} \mathrm{N}_{2}$ is a salt of a strong base and a weak acid $\left(\mathrm{NH}_{3}\right)$, it can be hydrolysed.
$\mathrm{Mg}_{3} \mathrm{~N}_{2}+6 \mathrm{H}_{2} \mathrm{O} \longrightarrow 3 \mathrm{Mg}(\mathrm{OH})_{2}+2 \mathrm{NH}_{3}$

In comparison to alkali metal ions, alkaline earth metal ions have a stronger tendency to hydrate due to their small size and high nuclear charge. As a result, alkaline earth metal salts contain more water of crystallisation than alkali metal salts.

Example 18: What happens when:

Beryllium carbide reacts with water.

Magnesium nitrate is heated.

Quick lime is heated in an electric furnace with powdered coke.

Sodium hydroxide solution is added to zinc chloride solution.

Solution:

A gas named methane is evolved.

$\mathrm{Be}_{2} \mathrm{C}+4 \mathrm{H}_{2} \mathrm{O} \longrightarrow 2 \mathrm{Be}(\mathrm{OH})_{2}+\mathrm{CH}_{4}$

A brown gas, $\mathrm{NO}_{2}$, is evolved.

$2 \mathrm{Mg}\left(\mathrm{NO}_{3}\right)_{2} \longrightarrow 2 \mathrm{MgO}+4 \mathrm{NO}_{2}+\mathrm{O}_{2}$

Calcium carbide is formed.

$\mathrm{CaO}+3 \mathrm{C} \xrightarrow{\text { electric arc }} \mathrm{CaC}_{2}+\mathrm{CO}$

A white precipitate of zinc hydroxide is formed, which dissolves in excess sodium hydroxide to give sodium zincate.

$\mathrm{ZnCl}_{2}+2 \mathrm{NaOH} \longrightarrow \mathrm{Zn}(\mathrm{OH})_{2}+2 \mathrm{NaCl}$

$\mathrm{Zn}(\mathrm{OH})_{2}+2 \mathrm{NaOH} \longrightarrow \mathrm{Na}_{2} \mathrm{ZnO}_{2}+2 \mathrm{H}_{2} \mathrm{O}$

Example 19: Draw the structure of:

$\mathrm{BeCl}_{2}$ in the vapour state

$\mathrm{BeCl}_{2}$ in the solid state

Solution: In the vapour state it exists as a chloro-bridged dimer, which dissociates into a linear monomer at about 1000 degrees Celsius. In the solid state it has a polymeric chain structure with chloro bridges, in which each chlorine atom covalently bonded to one beryllium atom also donates a lone pair of electrons to form a coordinate bond with a neighbouring beryllium atom.

Class 11 Chemistry Revision Notes for Chapter 10 - The s-Block Elements

About s-Block Elements Revision Notes

The s-block elements form one of the most important families of elements discussed in chemistry.
Class 11 Chapter 10 covers the s-block elements, including their properties, characteristics, types, and the importance of their different compounds. To help students learn these topics and study productively, free CBSE revision notes for Class 11 Chemistry Chapter 10 - The s-Block Elements are provided at Vedantu. Students can access the revision notes on this page, get easily accustomed to all the important topics in the chapter, and use the notes at their convenience, irrespective of time or place.

Periodic Trends in the Properties of s-Block Elements

Alkali Metals - These are silvery-white, soft, low-melting and highly reactive metals. Their atomic and ionic sizes increase down the group, while the ionisation enthalpies decrease down the group. Their compounds are mostly ionic, and their oxides and hydroxides dissolve in water to form strong alkalies. A few compounds of sodium are sodium hydrogen carbonate, sodium chloride, and related salts. Sodium hydroxide is manufactured by the Castner-Kellner process, and sodium carbonate by the Solvay process.

Alkaline Earth Metals - Compared with the alkali metals of the same period, these metals carry a higher cationic charge, and their atomic and ionic sizes are smaller. The oxides and hydroxides of alkaline earth metals are less basic than those of the alkali metals. A few compounds of calcium are calcium hydroxide, calcium carbonate, and related salts.

Diagonal Relationship - The first element of a group often resembles the second element of the next group: lithium (Group 1) resembles magnesium (Group 2), and beryllium (Group 2) resembles aluminium (Group 13). Such similarities are known as diagonal relationships.

What are The s-block Elements?

The s-block of the periodic table contains the Group 1 and Group 2 elements. Their hydroxides and oxides are alkaline in nature, and they are characterised by one or two s-electrons in the valence shell of their atoms.
Besides, they are highly reactive metals that form mono- and dipositive ions. Additionally, with increasing atomic number there is a regular trend in the physical and chemical properties of the alkali metals. Most notably, the first element of each of Group 1 and Group 2 resembles the second element of the following group; in the periodic table these similarities are referred to as the diagonal relationship. Moreover, the chemistry of the alkaline earth metals is much like that of the alkali metals, with some differences arising from their smaller ionic and atomic sizes.

Sub-Topics of The s-Block Elements

Let us look at the subtopics that fall under the topic, The s-Block Elements:

Anomalous Behaviour of Lithium - This topic explains lithium, its nature, the similarities between lithium and magnesium, and the differences between lithium and the other alkali metals.

Beryllium, Magnesium, and Calcium - This topic gives an overview of beryllium, magnesium and calcium.

Characteristics of the Compounds of Alkaline Earth Metals - This topic discusses the alkaline earth metals, including their physical and chemical properties.

Characteristics of the Compounds of Alkali Metals - This topic describes the characteristics of the compounds of alkali metals and also discusses a few other alkali metal compounds.

Group 1 Elements, Alkali Metals - This topic highlights the Group 1 elements such as lithium, sodium, potassium, and more.

Group 2 Elements, Alkaline Earth Metals - This topic teaches us about the Group 2 elements such as magnesium, calcium, and more.

Some Important Compounds of Potassium and Sodium - This topic covers the uses of important compounds and the properties of sodium and potassium.

Why Choose Vedantu?
Vedantu is one of the biggest online learning platforms, where students can get many benefits, some of which are mentioned below:

Students can stay updated on the latest information regarding their respective state boards.

They can get various resources such as question papers, revision notes, the last 5 years' question papers, information on specific topics, and a lot more.

By going through all these academic papers, students can gain a better understanding of every topic or chapter.

They can also clear their doubts on a specific topic or chapter.

Students can even download and save these papers for later use at the time of their examinations.

FAQs on The S Block Elements Class 11 Chemistry Chapter 10 CBSE Notes - 2025-26

What are the key topics I should focus on when revising The s-Block Elements for Class 11?

For a quick and effective revision of The s-Block Elements, you should focus on the following core areas:

Group 1 (Alkali Metals): Their electronic configuration, trends in atomic and ionic radii, ionisation enthalpy, and hydration enthalpy. Pay special attention to the anomalous properties of lithium and its diagonal relationship with magnesium.

Group 2 (Alkaline Earth Metals): Similar trends as Group 1, but compare their properties like higher melting points and ionisation enthalpies. Focus on the anomalous behaviour of beryllium and its diagonal relationship with aluminium.

Important Compounds: Understand the preparation, properties, and uses of key compounds like Sodium Carbonate (Washing Soda) via the Solvay process, Sodium Hydroxide (Caustic Soda), and Calcium Sulphate (Plaster of Paris).

Chemical Reactivity: Revise reactions with air (formation of oxides, peroxides, superoxides), water, hydrogen, and halogens for both groups.

Why are Group 1 and Group 2 elements called Alkali Metals and Alkaline Earth Metals respectively?
Group 1 elements are called Alkali Metals because they react with water to form strong hydroxides, which are highly alkaline in nature. The term 'alkali' is derived from the Arabic word 'al-qaly', meaning plant ashes, from which compounds of sodium and potassium were first isolated. Group 2 elements are called Alkaline Earth Metals because their oxides and hydroxides are also alkaline in nature and these metal oxides were found in the earth's crust and were stable to heat. How do key atomic properties like ionisation enthalpy and atomic radii trend down the groups in s-block elements? When revising trends for s-block elements, remember these two key points for both Group 1 and Group 2: Atomic and Ionic Radii: The size of the atoms and their corresponding ions increases as you move down the group. This is because a new electron shell is added for each successive element. Ionisation Enthalpy: The energy required to remove the outermost electron decreases down the group. As the atomic size increases, the outermost electron is further from the nucleus and is shielded by inner electrons, making it easier to remove. What is the anomalous behaviour of lithium, and why does it show a diagonal relationship with magnesium? Lithium, the first member of Group 1, shows anomalous behaviour due to its exceptionally small atomic size and high polarising power. This leads to increased covalent character in its compounds. Key differences from other alkali metals include its hardness, higher melting point, and formation of only monoxide (Li₂O) and nitride (Li₃N) on reacting with air.Lithium shows a diagonal relationship with magnesium (Group 2) because they have a similar charge-to-radius ratio. Their similarities include: Both are harder than other metals in their respective groups. Both react directly with nitrogen to form nitrides. Their carbonates decompose on heating to form oxides and CO₂. Their chlorides are deliquescent and soluble in ethanol. 
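The downward trend in ionisation enthalpy described above can be sanity-checked numerically. The sketch below uses approximate first-ionisation-enthalpy values for the alkali metals from standard literature (assumed for illustration; the notes themselves give no numbers):

```python
# First ionisation enthalpies (kJ/mol), approximate literature
# values assumed for illustration (not quoted in these notes).
first_IE = {"Li": 520, "Na": 496, "K": 419, "Rb": 403, "Cs": 376}

order = ["Li", "Na", "K", "Rb", "Cs"]
values = [first_IE[m] for m in order]

# The values fall monotonically down the group: the outermost
# electron gets farther from the nucleus and better shielded.
assert values == sorted(values, reverse=True)
print("Ionisation enthalpy decreases down Group 1.")
```

The same qualitative trend holds for Group 2, though each value is higher than for the Group 1 element of the same period because of the smaller size and higher nuclear charge.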
Why does beryllium show properties different from other alkaline earth metals? Beryllium shows anomalous behaviour compared to other Group 2 elements primarily due to its extremely small atomic size and high ionisation enthalpy. This gives it a high polarising power, leading to its compounds being largely covalent, unlike the ionic compounds of other alkaline earth metals. For instance, beryllium oxide (BeO) is amphoteric, while the oxides of other group members are basic. It also shows a diagonal relationship with aluminium (Al) in Group 13. What are the essential steps to remember for the Solvay process for preparing sodium carbonate? To revise the Solvay process, focus on these main steps: First, a brine solution (concentrated NaCl) is saturated with ammonia. This ammoniated brine is then carbonated by passing carbon dioxide through it, which leads to the precipitation of sodium bicarbonate (NaHCO₃) because it is sparingly soluble in the cold solution. Finally, the separated sodium bicarbonate is heated (calcination) to produce sodium carbonate (Na₂CO₃). Why can't potassium carbonate be prepared by the Solvay process? Potassium carbonate (K₂CO₃) cannot be prepared by the Solvay process due to a key difference in solubility. In the process, the crucial step is the precipitation of bicarbonate. While sodium bicarbonate (NaHCO₃) is sparingly soluble and precipitates out from the ammoniated brine solution, potassium bicarbonate (KHCO₃) is too soluble in water. This high solubility prevents it from precipitating, making the separation and subsequent conversion to potassium carbonate unfeasible through this method. How do the oxides formed by alkali metals differ, and what determines the type of oxide formed? The type of oxide formed by alkali metals upon heating in excess air depends on the size and stability of the cation. Lithium (Li), being small, forms normal oxide, Li₂O. Sodium (Na) forms the peroxide, Na₂O₂. 
Potassium (K), Rubidium (Rb), and Cesium (Cs), being larger, can stabilise the larger superoxide ion and form superoxides (KO₂, RbO₂, CsO₂). This trend is explained by the fact that larger cations can stabilise larger anions. The superoxide ion (O₂⁻) is larger than the peroxide (O₂²⁻) and oxide (O²⁻) ions. What is the chemical principle behind Plaster of Paris setting into a hard mass? The setting of Plaster of Paris is a process of hydration. Plaster of Paris is calcium sulphate hemihydrate (CaSO₄·½H₂O). When mixed with water, it re-forms gypsum (CaSO₄·2H₂O), a hard, solid mass. This reaction is exothermic and involves the interlocking of gypsum crystals, which gives the set plaster its rigidity and strength. The second ionisation enthalpy of alkaline earth metals is very high, yet they form M²⁺ ions. Why is this energetically favourable? Although the second ionisation enthalpy (removing a second electron) for alkaline earth metals is significantly higher than the first, the formation of a dipositive ion (M²⁺) is ultimately favourable in compounds. This is because the high energy required is more than compensated by the very high lattice enthalpy (in solid state) or hydration enthalpy (in aqueous solution) released. The small size and high charge of the M²⁺ ion lead to strong attractions, making the overall process energetically favourable for forming compounds like CaCl₂ rather than CaCl. How can I quickly recall which s-block elements give a characteristic colour in a flame test and why some do not? Most s-block elements impart a characteristic colour to a flame. When heated, the valence electrons get excited to higher energy levels and emit light of a specific colour upon returning to the ground state. 
You can remember the colours as:

Lithium: Crimson Red

Sodium: Golden Yellow

Potassium: Lilac (Pale Violet)

Rubidium: Reddish Violet

Cesium: Blue

Calcium: Brick Red

Strontium: Crimson Red

Barium: Apple Green

Beryllium (Be) and Magnesium (Mg) do not show any colour because their atoms are small, and their valence electrons are very strongly bound to the nucleus. The energy from the flame is insufficient to excite these electrons to higher energy levels.

Why are alkali metal solutions in liquid ammonia blue and highly conducting?

When alkali metals dissolve in liquid ammonia, they form a deep blue solution that is highly conductive. This happens because the alkali metal atom loses its valence electron. Both the cation (M⁺) and the electron become surrounded by ammonia molecules, a process called solvation. The blue colour of the solution is due to the presence of these ammoniated electrons, which absorb light in the visible region. The high electrical conductivity is also due to these mobile ammoniated electrons as well as the ammoniated cations.
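The plaster-of-Paris FAQ above describes setting as the hydration CaSO₄·½H₂O + 3/2 H₂O → CaSO₄·2H₂O. A short Python calculation (approximate atomic masses assumed; not part of the original notes) shows how much water is chemically bound per kilogram of plaster:

```python
# Setting of plaster of Paris is hydration:
#   CaSO4 . 1/2 H2O + 3/2 H2O -> CaSO4 . 2 H2O  (gypsum)
# Water chemically bound per kilogram of plaster, using
# approximate atomic masses (g/mol).
Ca, S, O, H = 40.08, 32.06, 16.00, 1.008
M_H2O = 2 * H + O
M_plaster = Ca + S + 4 * O + 0.5 * M_H2O   # ~145.1 g/mol

mol_plaster = 1000.0 / M_plaster
water_needed = mol_plaster * 1.5 * M_H2O   # 3/2 mol water per mol plaster

print(f"{water_needed:.0f} g of water bound per kg of plaster")  # ~186 g
```

In practice more water than this is mixed in to make a workable paste; the excess simply evaporates while the stoichiometric share becomes part of the interlocking gypsum crystals.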
3967
https://openstax.org/books/contemporary-mathematics/pages/10-7-volume-and-surface-area
Contemporary Mathematics
10.7 Volume and Surface Area

Figure 10.123 Volume is illustrated in this 3-dimensional view of an interior space. This gives a buyer a more realistic interpretation of space. (credit: "beam render 10 with sun and cat tree" by monkeywing/Flickr, CC BY 2.0)

Learning Objectives
After completing this section, you should be able to:
- Calculate the surface area of right prisms and cylinders.
- Calculate the volume of right prisms and cylinders.
- Solve application problems involving surface area and volume.

Volume and surface area are two measurements that are part of our daily lives. We use volume every day, even though we do not focus on it. When you purchase groceries, volume is the key to pricing. Judging how much paint to buy or how many square feet of siding to purchase is based on surface area. The list goes on. An example is a three-dimensional rendering of a floor plan. These types of drawings make building layouts far easier for the client to understand. They give the viewer a realistic idea of the product at completion: you can see the natural space, the volume of the rooms. This section gives you practical information you will use consistently. You may not remember every formula, but you will remember the concepts, and you will know where to go should you want to calculate volume or surface area in the future. We will concentrate on a few particular types of three-dimensional objects: right prisms and right cylinders. The adjective "right" refers to objects whose sides form a right angle with the base. We will look at right rectangular prisms, right triangular prisms, right hexagonal prisms, right octagonal prisms, and right cylinders, although the principles learned here apply to all right prisms.
Three-Dimensional Objects
In geometry, three-dimensional objects are called geometric solids. Surface area refers to the flat surfaces that surround the solid and is measured in square units. Volume refers to the space inside the solid and is measured in cubic units. Imagine that you have a square flat surface with width and length. Adding the third dimension adds depth or height, depending on your viewpoint, and now you have a box. One way to view this concept is in Cartesian three-dimensional space. The x-axis and the y-axis are, as you would expect, two dimensions and suitable for plotting two-dimensional graphs and shapes. Adding the z-axis, which shoots through the origin perpendicular to the xy-plane, gives us a third dimension. See Figure 10.124. Figure 10.124 Three-Dimensional Space

Here is another view, taking the two-dimensional square to a third dimension. See Figure 10.125. Figure 10.125 Going from Two Dimensions to Three Dimensions

To study objects in three dimensions, we need to consider the formulas for surface area and volume. For example, suppose you have a box (Figure 10.126) with a hinged lid that you want to use for keeping photos, and you want to cover the box with a decorative paper. You would need to find the surface area to calculate how much paper is needed. Suppose you need to know how much water will fill your swimming pool. In that case, you would need to calculate the volume of the pool. These are just a couple of examples of why these concepts should be understood, and familiarity with the formulas will allow you to make use of these ideas as related to right prisms and right cylinders. Figure 10.126

Right Prisms
A right prism is a particular type of three-dimensional object. It has a polygon-shaped base and a congruent, regular polygon-shaped top, which are connected by the height of its lateral sides, as shown in Figure 10.127. The lateral sides form a right angle with the base and the top.
There are rectangular prisms, hexagonal prisms, octagonal prisms, triangular prisms, and so on. Figure 10.127 Pentagonal Prism

Generally, to calculate surface area, we find the area of each side of the object and add the areas together. To calculate volume, we calculate the space inside the solid bounded by its sides.

FORMULA
The surface area of a right prism is equal to twice the area of the base plus the perimeter of the base times the height: SA = 2B + ph, where B is the area of the base (and top), p is the perimeter of the base, and h is the height.

FORMULA
The volume of a rectangular prism, given in cubic units, is equal to the area of the base times the height: V = Bh, where B is the area of the base and h is the height.

Example 10.56
Calculating Surface Area and Volume of a Rectangular Prism
Find the surface area and volume of the rectangular prism that has a width of 10 cm, a length of 5 cm, and a height of 3 cm (Figure 10.128). Figure 10.128
Solution
The surface area is SA = 2(10)(5) + 2(10)(3) + 2(5)(3) = 100 + 60 + 30 = 190 cm². The volume is V = (10)(5)(3) = 150 cm³.

Your Turn 10.56
1. A rectangular solid has a width of 6 cm, length of 15 cm, and height or depth of 6 cm. Find the surface area and the volume.

In Figure 10.129, we have three views of a right hexagonal prism. The regular hexagon is the base and top, and the lateral faces are the rectangular regions perpendicular to the base. We call it a right prism because the angle formed by the lateral sides with the base is 90°. See Figure 10.127. Figure 10.129 Right Hexagonal Prism

The first image is a view of the figure straight on with no rotation in any direction. The middle figure is the base or the top. The last figure shows you the solid in three dimensions. To calculate the surface area of the right prism shown in Figure 10.129, we first determine the area of the hexagonal base, multiply that by 2, and then add the perimeter of the base times the height. Recall that the area of a regular polygon is given as A = (1/2)ap, where a is the apothem and p is the perimeter.
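Stepping back to Example 10.56, the rectangular-prism formulas can be checked with a short script. This is a sketch, not part of the text; the function names are our own:

```python
# Surface area and volume of a right rectangular prism:
# SA = 2(lw + lh + wh), V = lwh.

def rect_prism_surface_area(length, width, height):
    return 2 * (length * width + length * height + width * height)

def rect_prism_volume(length, width, height):
    return length * width * height

# Example 10.56: width 10 cm, length 5 cm, height 3 cm.
print(rect_prism_surface_area(5, 10, 3))  # 190 (square cm)
print(rect_prism_volume(5, 10, 3))        # 150 (cubic cm)
```

Note that the surface-area sum pairs each of the three distinct faces with its congruent opposite face, which is why every product is doubled.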
With those values in hand, the surface area of the hexagonal prism follows from SA = 2B + ph. To find the volume of the right hexagonal prism, we multiply the area of the base by the height using the formula V = Bh, the height here being 20.

Example 10.57
Calculating the Surface Area of a Right Triangular Prism
Find the surface area of the triangular prism (Figure 10.130). Figure 10.130
Solution
The area of the triangular base and the perimeter of the base are found first. Then the surface area of the triangular prism follows from SA = 2B + ph.

Your Turn 10.57
1. Find the surface area of the triangular prism shown.

Example 10.58
Finding the Surface Area and Volume
Find the surface area and the volume of the right triangular prism with an equilateral triangle as the base and the given height (Figure 10.131). Figure 10.131
Solution
The area of the equilateral triangular base is found first. Then the surface area is SA = 2B + ph, and the volume is found by multiplying the area of the base by the height: V = Bh.

Your Turn 10.58
1. Find the surface area and the volume of the octagonal figure shown.

Example 10.59
Determining Surface Area Application
Katherine and Romano built a greenhouse made of glass with a metal roof (Figure 10.132). In order to determine the heating and cooling requirements, the surface area must be known. Calculate the total surface area of the greenhouse. Figure 10.132
Solution
The area of the long side is found and multiplied by 2. The front (minus the triangular area) is found and multiplied by 2. The floor is measured, each triangular region is found and multiplied by 2, and one side of the roof is found and multiplied by 2. Adding these pieces together gives the total surface area.

Your Turn 10.59
1. Calculate the surface area of a greenhouse with a flat roof measuring 12 ft wide, 25 ft long, and 8 ft high.

Right Cylinders
There are similarities between a prism and a cylinder. While a prism has parallel congruent polygons as the top and the base, a right cylinder is a three-dimensional object with congruent circles as the top and the base.
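Before moving on to cylinders, the general prism formulas (B = ½ap for a regular polygonal base, then SA = 2B + ph and V = Bh) can be bundled into one helper. The hexagon dimensions below are illustrative values of our own, not the textbook's:

```python
# Right prism over a regular n-gon base:
# B = (1/2) * apothem * perimeter, SA = 2B + p*h, V = B*h.

def regular_prism(apothem, side, n_sides, height):
    perimeter = n_sides * side
    base_area = 0.5 * apothem * perimeter
    surface_area = 2 * base_area + perimeter * height
    volume = base_area * height
    return surface_area, volume

# Hypothetical hexagonal prism: apothem 5.2, side 6, height 20.
sa, v = regular_prism(5.2, 6, 6, 20)
print(round(sa, 1), round(v, 1))  # 907.2 1872.0
```

The same helper covers the triangular, hexagonal, and octagonal examples in this section once the apothem, side length, number of sides, and height are known.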
The lateral sides of a right prism make a 90° angle with the polygonal base, and the side of a cylinder, which unwraps as a rectangle, makes a 90° angle with the circular base. Right cylinders are very common in everyday life. Think about soup cans, juice cans, soft drink cans, pipes, air hoses, and the list goes on.

In Figure 10.133, imagine that the cylinder is cut down the 12-inch side and rolled out. We can see that the cylinder side, when flat, forms a rectangle. The formula includes the area of the circular base, the circular top, and the area of the rectangular side. The length of the rectangular side is the circumference of the circular base. Thus, we have the formula for the total surface area of a right cylinder. Figure 10.133 Right Cylinder

FORMULA
The surface area of a right cylinder is given as SA = 2πr² + 2πrh.

To find the volume of the cylinder, we multiply the area of the base by the height.

FORMULA
The volume of a right cylinder is given as V = πr²h.

Example 10.60
Finding the Surface Area and Volume of a Cylinder
Given the cylinder in Figure 10.133, which has a radius of 5 inches and a height of 12 inches, find the surface area and the volume.
Solution
Step 1: We begin with the areas of the base and the top. The area of the circular base is πr² = π(5)² = 25π.
Step 2: The base and the top are congruent parallel planes. Therefore, the area for the base and the top together is 2(25π) = 50π.
Step 3: The area of the rectangular side is equal to the circumference of the base times the height: 2π(5)(12) = 120π.
Step 4: We add the area of the side to the areas of the base and the top to get the total surface area. We have SA = 50π + 120π = 170π ≈ 534.07 in².
Step 5: The volume is equal to the area of the base times the height. Then, V = π(5)²(12) = 300π ≈ 942.48 in³.

Your Turn 10.60
1. Find the surface area and volume of the cylinder with a radius of 7 cm and a height of 5 cm.

Applications of Surface Area and Volume
The following are just a small handful of the types of applications in which surface area and volume are critical factors.
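The five steps of Example 10.60 reduce to two one-line formulas. A sketch (ours, not the textbook's) that reproduces the example's numbers:

```python
import math

# Right cylinder: SA = 2*pi*r^2 + 2*pi*r*h, V = pi*r^2*h.

def cylinder_surface_area(radius, height):
    return 2 * math.pi * radius**2 + 2 * math.pi * radius * height

def cylinder_volume(radius, height):
    return math.pi * radius**2 * height

# Example 10.60: radius 5 in, height 12 in.
print(round(cylinder_surface_area(5, 12), 2))  # 534.07 (170*pi)
print(round(cylinder_volume(5, 12), 2))        # 942.48 (300*pi)
```

The 2πrh term is exactly the unrolled rectangular side: circumference times height.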
Give this a little thought and you will realize many more practical uses for these procedures.

Example 10.61
Applying a Calculation of Volume
A can of apple pie filling has a radius of 4 cm and a height of 10 cm. How many cans are needed to fill a pie pan (Figure 10.134) measuring 22 cm in diameter and 3 cm deep? Figure 10.134
Solution
The volume of the can of apple pie filling is V = π(4)²(10) = 160π ≈ 502.65 cm³. The volume of the pan is V = π(11)²(3) = 363π ≈ 1140.4 cm³. To find the number of cans of apple pie filling, we divide the volume of the pan by the volume of a can of apple pie filling. Thus, 1140.4 ÷ 502.65 ≈ 2.3. We will need 2.3 cans of apple pie filling to fill the pan.

Your Turn 10.61
1. You are making a casserole that includes vegetable soup and pasta. Your cylindrical casserole dish has a diameter of 10 inches and is 4 inches high. The pasta will consume the bottom portion of the casserole dish, about 1 inch high. The soup can has a diameter of 3 inches and is 4 inches high. After the pasta is added, how many cans of soup can you add?

Optimization
Problems that involve optimization are ones that look for the best solution to a situation under some given conditions. Generally, one looks to calculus to solve these problems. However, many geometric applications can be solved with the tools learned in this section. Suppose you want to make some throw pillows for your sofa, but you have a limited amount of fabric. You want to make the largest pillows you can from the fabric you have, so you would need to figure out the dimensions of the pillows that fit these criteria. Another situation might be that you want to fence off an area in your backyard for a garden. You have a limited amount of fencing available, but you would like the garden to be as large as possible. How would you determine the shape and size of the garden? Perhaps you are looking for maximum volume or minimum surface area. Minimum cost is also a popular application of optimization. Let's explore a few examples.
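The can-counting arithmetic of Example 10.61 above is a direct ratio of two cylinder volumes. A quick sketch (ours) to verify it:

```python
import math

# Example 10.61: cans of filling (r = 4 cm, h = 10 cm) needed
# to fill a pie pan (22 cm diameter -> r = 11 cm, 3 cm deep).
can_volume = math.pi * 4**2 * 10
pan_volume = math.pi * 11**2 * 3

print(round(can_volume, 2))               # 502.65
print(round(pan_volume, 2))               # 1140.4
print(round(pan_volume / can_volume, 1))  # 2.3 cans
```

Because both volumes carry a factor of π, the ratio is simply 363/160 ≈ 2.27, so a third can is needed even though it will not be used up.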
Example 10.62
Maximizing Area
Suppose you have 150 meters of fencing that you plan to use for the enclosure of a corral on a ranch. What shape would give the greatest possible area?
Solution
So, how would we start? Let's look at this on a smaller scale. Say that you have 30 inches of string and you experiment with different shapes. The first rectangle in Figure 10.135 measures 12 in long by 3 in wide. We have a perimeter of P = 2(12) + 2(3) = 30 in, and the area calculates as A = (12)(3) = 36 in². The second rectangle in Figure 10.135 measures 8 in long and 7 in wide. The perimeter is P = 2(8) + 2(7) = 30 in, and the area is A = (8)(7) = 56 in². In Figure 10.135, the square measures 7.5 in on each side. The perimeter is then P = 4(7.5) = 30 in, and the area is A = (7.5)² = 56.25 in². If you want a circular corral as in Figure 10.135, we would consider a circumference of C = 2πr = 30, which gives a radius of r = 30/(2π) ≈ 4.77 in and an area of A = πr² ≈ 71.62 in². Figure 10.135

We see that the circular shape gives the maximum area relative to a circumference of 30 in. So, a circular corral with a circumference of 150 meters and a radius of 23.87 meters gives the maximum area, approximately 1790.49 m².

Your Turn 10.62
1. You have 25 ft of rope to section off a rectangular-shaped garden. What dimensions give the maximum area that can be roped off?

Example 10.63
Designing for Cost
Suppose you want to design a crate built out of wood in the shape of a rectangular prism (Figure 10.136). It must have a volume of 3 cubic meters. The cost of wood is $15 per square meter. What dimensions give the most economical design while providing the necessary volume? Figure 10.136
Solution
To choose the optimal shape for this container, you can start by experimenting with different sizes of boxes that will hold 3 cubic meters. It turns out that, similar to the maximum rectangular area example where a square gives the maximum area, a cube gives the maximum volume and the minimum surface area. As all six sides are the same, we can use a simplified volume formula, V = s³, where s is the length of a side. Then, to find the length of a side, we take the cube root of the volume.
We have s = ∛3 ≈ 1.44 m. The surface area is equal to the sum of the areas of the six sides. The area of one side is s² ≈ 2.08 m², so the surface area of the crate is SA = 6s² ≈ 12.48 m². At $15 a square meter, the cost comes to 12.48(15) ≈ $187.21. Checking the volume, we have V = s³ = (∛3)³ = 3 m³.

Your Turn 10.63
1. Suppose you want to build a container to hold 2 cubic feet of fabric swatches. You want to cover the container in laminate costing $10 per square foot. What are the dimensions of the most economical container? What is the cost?

Check Your Understanding
42. Find the surface area of the equilateral triangular prism shown.
43. Find the surface area of the octagonal prism shown.
44. Find the volume of the octagonal prism shown, with the apothem, side length, and height as given.
45. Determine the surface area of the right cylinder with the given base radius and height.
46. Find the volume of the cylinder with the given base radius and height.
47. As an artist, you want to design a cylindrical container for your colored art pencils and another rectangular container for some other tools. The cylindrical container will be 8 inches high with a diameter of 6 inches. The rectangular container measures 10 inches wide by 8 inches deep by 4 inches high and has a lid. You found some beautiful patterned paper to use to wrap both pieces. How much paper will you need?

Section 10.7 Exercises
1. Find the volume of the right triangular prism with the two side legs of the base equal to 10 m, the hypotenuse as given, and the height (or the length, depending on your viewpoint) equal to 15 m.
2. Find the surface area of the right triangular prism in Exercise 1 with the two legs of the base equal to 10 m and the height equal to 15 m.
3. Find the surface area of the right trapezoidal prism with the given sides, where the height is 10 cm, the slant length is 12 cm, and the length is 24 cm.
4.
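Both optimization examples can be verified numerically. The following sketch (ours, not the textbook's) checks the fixed-perimeter comparison of Example 10.62 and the cube crate of Example 10.63:

```python
import math

# Example 10.62: fixed perimeter of 30 in -- which shape encloses the most area?
areas = {
    "12 x 3 rectangle": 12 * 3,
    "8 x 7 rectangle": 8 * 7,
    "7.5 x 7.5 square": 7.5 * 7.5,
    "circle (C = 30)": math.pi * (30 / (2 * math.pi)) ** 2,
}
for shape, area in areas.items():
    print(shape, round(area, 2))  # the circle wins at ~71.62 sq in

# The 150 m corral, scaled up:
radius = 150 / (2 * math.pi)
print(round(radius, 2), round(math.pi * radius**2, 2))  # 23.87 1790.49

# Example 10.63: a cube minimizes surface area for a fixed volume of 3 m^3.
side = 3 ** (1 / 3)                  # cube root of the volume
surface_area = 6 * side**2
print(round(side, 2))                # 1.44 (m)
print(round(surface_area, 2))        # 12.48 (square m)
print(round(surface_area * 15, 2))   # 187.21 (dollars at $15 per square m)
```

For a fixed perimeter C, the circle's area is C²/(4π), which is why the corral area can also be computed directly as 150²/(4π).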
Find the volume of the trapezoidal prism in the exercise above, where the base and top have the given side measurements and slant lengths, and the height of the trapezoidal base and the length of the three-dimensional solid are as given.
5. Find the surface area of the octagonal prism. The base and top are regular octagons with the apothem equal to 10 m, a side length equal to 12 m, and a height of 30 m.
6. Find the volume of the right octagonal prism with the apothem equal to 10 m, the side length of the base equal to 12 m, and the height equal to 30 m.
7. You decide to paint the living room. You will need the surface area of the 4 walls and the ceiling. The room measures 20 ft long and 14 ft wide, and the ceiling is 8 ft high.
For the following exercises, find the surface area of each right cylinder.
8. 9. 10. 11. 12. 13.
For the following exercises, find the volume of each right cylinder to the nearest tenth.
14. 15. 16. 17. 18.
19. You have remodeled your kitchen, and the exhaust pipe above the stove must pass through an overhead cabinet as shown in the figure. Find the volume of the remaining space in the cabinet.
Citation information: Authors: Donna Kirk. Publisher/website: OpenStax. Book title: Contemporary Mathematics. Publication date: Mar 22, 2023. Location: Houston, Texas. © Oct 8, 2024 OpenStax. Textbook content produced by OpenStax is licensed under a Creative Commons Attribution License. The OpenStax name, OpenStax logo, OpenStax book covers, OpenStax CNX name, and OpenStax CNX logo are not subject to the Creative Commons license and may not be reproduced without the prior and express written consent of Rice University.
3968
https://www.facebook.com/100066953924518/posts/most-sensitive-indicator-of-intravascular-volume-depletion-in-infant-a-stroke-vo/198536123641600/
Medical MCQs - Most sensitive indicator of intravascular... | Facebook

Medical MCQs (July 27, 2013):
Most sensitive indicator of intravascular volume depletion in an infant:
A. Stroke volume
B. Heart rate
C. Cardiac output
D. Blood pressure

Medical MCQs (in the comments): ans B.
3969
https://www.aph.org/product/stackups-kit-spatial-reasoning-using-cubes-and-isometric-drawings/
StackUps Kit: Spatial Reasoning Using Cubes and Isometric Drawings | American Printing House

StackUps Kit: Spatial Reasoning Using Cubes and Isometric Drawings
Designed to assist students with visual impairments and blindness in the interpretation of raised-line graphics depicting 3-dimensional figures, specifically stacked cube arrangements.
$431.20, Federal Quota Eligible, 7 in stock
Catalog Number: 1-08960-00
Categories: Core Curriculum, Mathematics

Product Description
StackUps cubes are joined together to build various stacked cube arrangements.
StackUps: Spatial Reasoning Cubes and Isometric Drawings is a set of materials designed to assist students with visual impairments and blindness in the interpretation of raised-line graphics depicting 3-dimensional figures, specifically stacked cube arrangements.

Encourages students using 2-D and 3-D shapes to:
- Recognize, name, build, draw, compare, and sort
- Describe attributes and parts
- Investigate and predict the results of putting together and taking apart

Students practice:
- Building 3-D models using hook/loop material cubes in combination with tactile displays
- Using Mat Plans to construct and create stacked cube arrangements
- Interpreting front-right-top views
- Determining volume and surface area

WARNING: CHOKING HAZARD — Small parts. Not intended for children ages 5 and under without adult supervision.

Optional and Replacement Items:
- StackUps, Cubes (set of 20), Catalog Number: 1-08960-01
- StackUps, Mat Plan Card Set, Catalog Number: 61-361-006
- StackUps, Stacked Cube Arrangement Card Set, Catalog Number: 61-361-002
- StackUps: 5 x 5 Inch Raised Line Grid, 3-Pack with Loop Material Dots, Catalog Number: 61-361-003
- StackUps, Teacher's Guidebook, Braille - Spatial Reasoning Using Cubes and Isometric Drawings, Catalog Number: 5-08960-00

Manuals & Downloads:
StackUps, CD-ROM Content (ZIP), Uploaded: Mar 25, 2021, Catalog Number: 1-08960-00. StackUps legacy CD-ROM content (.doc, .brf, .epub, .html, .pdf). Extract the contents of the .zip file after downloading.

Specs: Weight: 7.075 lbs. Dimensions: 27.25 × 17.45 × 4.15 in. Federal Quota Funds: Available. Language: English. Age: 10 years and up.

Warranty: Contact Customer Service to discuss your warranty options.
3970
https://www.writingforums.com/threads/haiku-v-senryu.181297/
Haiku v Senryu | Writing Forums

Haiku v Senryu
Thread starter: PiP. Start date: Jan 24, 2019.

PiP (Jan 24, 2019) #1
I am struggling to understand the difference between haiku and senryu. I've been trawling the net for a clear explanation of both (what they are and what they are not, with clear examples) so I can differentiate between them. Are there any haiku or senryu gurus out there who would be willing to offer some advice on poems in the workshop? And/or can they offer some advice by way of example here, please?
Phil Istine (Jan 24, 2019) #2
Not a guru here, and not a definitive point of view. My understanding is that haiku are about nature and senryu are about human nature. I do see the potential for some overlap if human behaviour can be inferred by something that happens in nature. A lot of senryu are mistakenly referred to as haiku. Leaving aside the fact that I used the words April and spring, the one below could be either a haiku or a senryu. If read as a senryu, it's a bit racy. It can be read two totally different ways (which is why I wrote it). It's from an old NaPo challenge.

glistening wetness
beautiful April showers
spring is in the air

PiP (Jan 24, 2019) #3
What a clever play on words! LoL. Just focusing on writing haiku at the moment, so, digging a little deeper, I've just found some easy-to-follow points on this website:

1. A haiku is not usually all one sentence — rather, it is two parts. The easiest way to structure haiku for a beginner is to describe the setting in the first line, then the subject and action in the second and third lines. One line is usually a fragment — often the first line — while the other two lines are one phrase.
2. Written in present tense, haiku is meant to be "in the moment," taking something ordinary and making it extraordinary.
3. Poetic devices like metaphor, simile, etc. are not used.
4. Haiku is meant to be simple.
5. Capitalization is not necessary, and punctuation is minimal or not there at all, as haiku are meant to feel open, almost unfinished.
6. The poetry in haiku is created by juxtaposing the two parts to create resonance.
7. Show; don't tell. Haiku that merely describe a scene without creating any emotional resonance will be boring.

Phil Istine (Jan 24, 2019) #4
Excellent work, Carole. The main surprise for me was in no. 1: "while the other two lines are one phrase". I always saw the first two lines as building up for a little twist in line 3.

PiP (Jan 24, 2019) #5
Good point! I think I read on another website they did. I'll do some further research.

TL Murphy (Jan 25, 2019) #6
Most of what English poets write when they think they are writing haiku is actually senryu. Neither haiku nor senryu is capitalized; that would be like capitalizing the word "sonnet". Haiku has a lot of rules, but the main two seem to be the use of a seasonal image (not the month or the name of the season, but a reference to the season like "snow" or falling leaves). The second is the "cut" or "kiriji", which often comes down to a single word where the poem changes direction or flips. The kiriji is the heart of the juxtaposition and usually occurs in the second line but sometimes in the third. Japanese scholars spend years studying kiriji words. Haiku is about nature; senryu is about the human condition and is much looser with rules. Seasonal reference and juxtaposition are not required.
The other myth about haiku is that the syllable count must be 5-7-5, which has become a western genre in its own right, but 17 syllables is not what makes a haiku. Most Japanese haiku are actually shorter than that: 12-13 syllables instead of 17.

PiP (Jan 25, 2019) #7
Quoting TL Murphy: "Haiku has a lot of rules but the main two rules seem to be the use of a seasonal image (not the month or the name of the season, but a reference to the season like 'snow' or falling leaves)."

I was looking for some good examples of haiku and came across this:

Robert Oxnam, President Emeritus, Asia Society: Every haiku has two parts to it. It's divided in the middle by what's called a "cutting word". It's a structure that is designed to engage the reader, and it permits multiple interpretations to this potent poetic form.

Haiku poem:
kareeda ni
karasu no tomarikeri
aki no kure

on a bare branch
a crow has alighted
autumn evening

Haruo Shirane: The kigo, or seasonal word, is very obvious: it's the autumn. And there's what's called a kireji, or cutting word, in the middle, and it comes right after "has alighted," "tomarikeri." So we have two parts to what's now called the haiku, but what was then called the hokku. "On a bare branch a crow has alighted," and then there's a break, and the second half is "autumn nightfall" or "end of autumn." Now, the important part about the cut, the kireji, which cuts the two parts of the haiku, is that it leaves the poem open for the reader to complete. So, it's like linked verse. You have one verse, and the verse is basically unfinished. The next person has to complete it by adding a verse. The same thing happens within the bounds of the haiku, or the hokku. The two parts are sliced in half, and there's an open space which the reader, the audience, is supposed to enter into.
Quoting TL Murphy: "The second is the 'cut' or 'kiriji', which often comes down to a single word where the poem changes direction or flips. The kiriji is the heart of the juxtaposition and usually occurs in the second line but sometimes in the third. Japanese scholars spend years studying kiriji words."

~Tim, please can you give us some examples?

TL Murphy (Jan 26, 2019) #8
I don't know that specific words are designated as kiriji. I think it's more the relationship between words, i.e. the situation. My understanding is that the kiriji is like the pivot point where the poem changes direction to ultimately form a paradox. It's interesting what the above passage says about leaving the poem open. That's a different perspective I've not seen before, but it makes sense.

rcallaci (Aug 28, 2020) #9
Senryu deals primarily with the human condition, while haiku deals with nature. There are many disagreements on what is and is not senryu. Some consider that haiku concerns itself with nature and senryu with human nature. Others see some human-nature poems as haiku. The bulk of haiku usually combines human nature with nature. Primarily, the big difference between haiku and senryu is tone. Senryu deals with political issues, heavy satiric humor, and darker themes concerning humanity. The structure, 5/7/5 or 17 syllables or less, is similar.

Examples of haiku and senryu (all the poetry in this last section was written by me):

a hug and a kiss-
midmorning sun clear skies and
melting icicles
(this is a haiku)

strangers on a train-
a hundred thousand virgins
unholy jihad
(this is a senryu)

The first piece can be considered haiku: it's a first-impression imprint with a kigo and has a natural feel to it. The second is a little darker, dealing with the baser aspects of human nature.
PiP Patron Writing Forums Supporter Aug 28, 2020 #10 Great explanation, rcallaci. Thanks. I will return to my senryu studies. TL Murphy Meta4 Group Leader WF Veteran Sep 1, 2020 #11 I was recently researching the kireji (or cut word) in haiku and learned some new things. The haiku, although in 3 lines, is two parts, separated by the kireji, which in Japanese is a "cut word". The cut word is not possible in the English language; I am not sure why, but the actual cut, or kireji, in English is formed with punctuation: a dash, a semi-colon, or a comma. The two parts of the haiku form tension, or antithesis, or a paradox. Last edited: Apr 12, 2022 Status: Not open for further replies.
3971
https://openoregon.pressbooks.pub/mhccbiology112/chapter/meiosis-2/
83 Meiosis II In some species, cells enter a brief interphase, or interkinesis, before entering meiosis II. Interkinesis lacks an S phase, so chromosomes are not duplicated. The two cells produced in meiosis I go through the events of meiosis II at the same time. During meiosis II, the sister chromatids within the two daughter cells separate, forming four new haploid gametes, each with one copy of each chromosome. The mechanics of meiosis II are similar to mitosis, except that each dividing cell has only one set of homologous chromosomes. Therefore, each cell has half the number of sister chromatids to separate out as a diploid cell undergoing mitosis. During meiosis II, each sister chromatid is attached to spindle fiber microtubules from opposite poles. The sister chromatids are pulled apart by the spindle fiber microtubules and move toward opposite poles (Figure 1). The chromosomes arrive at opposite ends of the cells and begin to decondense (unwind). Nuclear envelopes form around the chromosomes. Cytokinesis separates the two cells into four unique haploid cells. At this point, the newly formed nuclei are both haploid and have only one copy of the single set of chromosomes. The cells produced are genetically unique because of the random assortment of paternal and maternal homologs and because of the recombining of maternal and paternal segments of chromosomes (with their sets of genes) that occurs during crossover. The entire process of meiosis is outlined in Figure 2 (you do not need to know the names of the phases or what happens during each phase, only what happens overall during meiosis I and II). Summary of Meiosis II: Meiosis II begins with the 2 haploid cells, where each chromosome is made up of two connected sister chromatids. DNA replication does NOT occur at the beginning of meiosis II. The sister chromatids are separated, producing 4 genetically different haploid cells.
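The counting in this summary (no S phase, one division, chromatids split) can be sketched numerically. This toy Python snippet is only bookkeeping, not part of the chapter; the function and parameter names are invented for illustration:

```python
# Toy bookkeeping for meiosis II: no S phase (so no doubling of
# chromatids), one division per cell, sister chromatids separate.
def meiosis_ii(cells, chromatids_per_chromosome):
    # Each haploid cell from meiosis I divides once...
    new_cells = cells * 2
    # ...and each chromosome's two sister chromatids are pulled apart,
    # leaving one copy of each chromosome per gamete.
    new_chromatids = chromatids_per_chromosome // 2
    return new_cells, new_chromatids

# Meiosis I yields 2 haploid cells, each chromosome still a pair
# of connected sister chromatids.
cells, chromatids = meiosis_ii(cells=2, chromatids_per_chromosome=2)
print(cells)       # 4 haploid gametes
print(chromatids)  # 1 chromatid (one DNA copy) per chromosome
```

The point of the sketch is simply that cell count doubles while the per-chromosome copy count halves, which is why meiosis II ends with four haploid cells.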
References: Unless otherwise noted, images on this page are licensed under CC-BY 4.0 by OpenStax. OpenStax, Biology. OpenStax CNX. May 27, 2016. License: MHCC Biology 112: Biology for Health Professions, copyright © 2019 by Lisa Bartee, is licensed under a Creative Commons Attribution 4.0 International License, except where otherwise noted.
3972
https://virtualnerd.com/algebra-1/quadratic-equations-functions/discriminant-quadratic-formula/discriminant/discriminant-definition
Real math help. What is the Discriminant? Note: In a quadratic equation, the discriminant helps tell you the number of real solutions to the equation. The expression used to find the discriminant is the expression located under the radical in the quadratic formula! In this tutorial, get introduced to the discriminant of a quadratic equation! Keywords: definition, discriminant, quadratic equation. Background Tutorials: What is a Quadratic Equation? You can't go through algebra without seeing quadratic equations. The graphs of quadratic equations are parabolas; they tend to look like a smile or a frown. There's also a bunch of ways to solve these equations! Watch this tutorial and get introduced to quadratic equations! Solving Quadratics Using the Quadratic Formula: What is the Quadratic Formula? If you're solving quadratic equations, knowing the quadratic formula is a MUST! This formula is normally used when no other methods for solving quadratics can be reasonably used. In this tutorial, learn about the quadratic formula and see it used to solve a quadratic equation. Take a look! Further Exploration, Using the Discriminant: How Do You Find the Discriminant of a Quadratic Equation With 2 Solutions? In a quadratic equation, the discriminant helps tell you the number of real solutions to a quadratic equation. In this tutorial, see how to find the discriminant of a quadratic equation and use it to determine the number of solutions! How Do You Use the Discriminant to Determine the Number of Solutions of a Quadratic Equation? In a quadratic equation, the discriminant helps tell you how many real solutions a quadratic equation has. In this tutorial, see how to find the discriminant of a quadratic equation and use it to determine the number of solutions!
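The rule the tutorial describes, that the sign of b² - 4ac (the expression under the radical in the quadratic formula) determines the number of real solutions, can be sketched in a few lines of Python. The function names here are illustrative, not from the tutorial:

```python
# Discriminant of ax^2 + bx + c = 0: the expression under the
# radical in the quadratic formula, b^2 - 4ac.
def discriminant(a, b, c):
    return b * b - 4 * a * c

def count_real_solutions(a, b, c):
    """Positive discriminant -> 2 real solutions, zero -> 1, negative -> 0."""
    d = discriminant(a, b, c)
    if d > 0:
        return 2
    elif d == 0:
        return 1
    return 0

# x^2 - 5x + 6 = 0 factors as (x - 2)(x - 3): discriminant 25 - 24 = 1
print(discriminant(1, -5, 6))          # 1
print(count_real_solutions(1, -5, 6))  # 2 distinct real roots
print(count_real_solutions(1, 2, 1))   # 1 (zero discriminant, repeated root)
print(count_real_solutions(1, 0, 1))   # 0 (negative discriminant)
```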
3973
https://en.wikipedia.org/wiki/List_of_countries_by_sex_ratio
List of countries by sex ratio. From Wikipedia, the free encyclopedia. The human sex ratio is the comparative number of males with respect to each female in a population. This is a list of sex ratios by country or region. Methodology: The table's data is from The World Factbook unless noted otherwise. It shows the male-to-female sex ratio as reported by the Central Intelligence Agency of the United States. If there is a discrepancy between The World Factbook and a country's census data, the latter may be used instead. A ratio above 1, for example 1.1, means there are more males than females (1.1 males for every female). A ratio below 1, for example 0.8, means there are more females than males (0.8 males for every female). A ratio of 1 means there are equal numbers of males and females.
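The interpretation described in the methodology is simple division, males per female. As a sketch (the population figures below are invented purely to illustrate the arithmetic, not taken from the table):

```python
# Sex ratio = number of males per female in a population.
def sex_ratio(males, females):
    return males / females

# 1,050,000 males to 1,000,000 females: ratio above 1, more males.
print(round(sex_ratio(1_050_000, 1_000_000), 2))  # 1.05
# 800,000 males to 1,000,000 females: ratio below 1, more females.
print(round(sex_ratio(800_000, 1_000_000), 2))    # 0.8
```

A ratio of 1.05 thus reads as "105 males for every 100 females", which is how the at-birth column in the table is typically interpreted.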
Countries [edit] The World Factbook (2024 estimates) | Country/region | At birth | 0–14 years | 15–64 years | Over 65 years | Total | | World | 1.05 | 1.05 | 1.03 | 0.81 | 1.01 | | Afghanistan | 1.05 | 1.03 | 1.03 | 0.85 | 1.02 | | Albania | 1.06 | 1.09 | 0.97 | 0.85 | 0.97 | | Algeria | 1.05 | 1.05 | 1.03 | 0.96 | 1.03 | | American Samoa (US) | 1.06 | 1.07 | 0.97 | 0.88 | 0.99 | | Andorra | 1.06 | 1.06 | 1.05 | 1.03 | 1.05 | | Angola | 1.03 | 1.01 | 0.93 | 0.72 | 0.96 | | Anguilla (UK) | 1.03 | 1.03 | 0.83 | 0.93 | 0.88 | | Antigua and Barbuda | 1.05 | 1.03 | 0.87 | 0.74 | 0.89 | | Argentina | 1.07 | 1.06 | 1.01 | 0.74 | 0.98 | | Armenia | 1.07 | 1.10 | 0.99 | 0.71 | 0.96 | | Aruba (Netherlands) | 1.02 | 1.01 | 0.93 | 0.68 | 0.90 | | Australia | 1.06 | 1.07 | 1.01 | 0.85 | 0.99 | | Austria | 1.05 | 1.05 | 1.00 | 0.79 | 0.96 | | Azerbaijan | 1.15 | 1.15 | 1.00 | 0.72 | 1.00 | | Bahamas, The | 1.03 | 0.90 | 0.86 | 0.81 | 0.86 | | Bahrain | 1.03 | 1.03 | 1.68 | 1.06 | 1.50 | | Bangladesh | 1.04 | 1.04 | 0.95 | 0.87 | 0.96 | | Barbados | 1.01 | 1.00 | 0.97 | 0.73 | 0.93 | | Belarus | 1.06 | 1.06 | 0.96 | 0.51 | 0.88 | | Belgium | 1.05 | 1.05 | 1.02 | 0.80 | 0.97 | | Belize | 1.05 | 1.03 | 0.96 | 0.99 | 0.98 | | Benin | 1.05 | 1.02 | 0.94 | 0.84 | 0.97 | | Bermuda (UK) | 1.05 | 1.05 | 1.01 | 0.74 | 0.95 | | Bhutan | 1.05 | 1.05 | 1.08 | 1.06 | 1.07 | | Bolivia | 1.05 | 1.04 | 1.02 | 0.86 | 1.01 | | Bosnia and Herzegovina | 1.07 | 1.07 | 1.01 | 0.70 | 0.95 | | Botswana | 1.03 | 1.02 | 0.91 | 0.66 | 0.92 | | Brazil | 1.05 | 1.04 | 0.98 | 0.75 | 0.97 | | Brunei | 1.05 | 1.06 | 0.91 | 0.94 | 0.95 | | Bulgaria | 1.06 | 1.06 | 1.04 | 0.67 | 0.95 | | Burkina Faso | 1.03 | 1.03 | 0.93 | 0.73 | 0.96 | | Burundi | 1.03 | 1.02 | 0.98 | 0.76 | 0.99 | | Cambodia | 1.04 | 1.02 | 0.95 | 0.55 | 0.94 | | Cameroon | 1.03 | 1.02 | 0.98 | 0.87 | 0.99 | | Canada | 1.05 | 1.06 | 1.01 | 0.85 | 0.98 | | Cape Verde | 1.03 | 1.01 | 0.96 | 0.62 | 0.95 | | Cayman Islands (UK) | 1.02 | 1.01 | 0.96 
| 0.84 | 0.95 | | Central African Republic | 1.03 | 1.05 | 0.97 | 0.79 | 0.99 | | Chad | 1.04 | 1.02 | 0.96 | 0.75 | 0.98 | | Chile | 1.04 | 1.04 | 1.00 | 0.73 | 0.97 | | China | 1.09 | 1.14 | 1.06 | 0.86 | 1.04 | | Colombia | 1.05 | 1.05 | 0.96 | 0.78 | 0.95 | | Comoros | 1.03 | 1.00 | 0.92 | 0.77 | 0.94 | | Congo DR | 1.03 | 1.01 | 1.00 | 0.78 | 1.00 | | Congo | 1.03 | 1.02 | 1.01 | 0.75 | 1.00 | | Cook Islands (New Zealand) | 1.04 | 1.10 | 1.06 | 0.96 | 1.05 | | Costa Rica | 1.05 | 1.05 | 1.02 | 0.84 | 1.00 | | Croatia | 1.06 | 1.07 | 1.00 | 0.71 | 0.93 | | Cuba | 1.06 | 1.06 | 1.01 | 0.82 | 0.99 | | Curacao (Netherlands) | 1.05 | 1.05 | 0.98 | 0.67 | 0.93 | | Cyprus | 1.05 | 1.05 | 1.11 | 0.77 | 1.05 | | Czechia | 1.05 | 1.05 | 1.05 | 0.71 | 0.97 | | Denmark | 1.07 | 1.05 | 1.03 | 0.86 | 0.99 | | Djibouti | 1.03 | 1.01 | 0.77 | 0.77 | 0.83 | | Dominica | 1.05 | 1.05 | 1.04 | 0.91 | 1.02 | | Dominican Republic | 1.04 | 1.03 | 1.03 | 0.93 | 1.02 | | Ecuador | 1.05 | 1.05 | 0.97 | 0.81 | 0.97 | | Egypt | 1.06 | 1.06 | 1.06 | 1.03 | 1.06 | | El Salvador | 1.05 | 1.05 | 0.90 | 0.74 | 0.92 | | Equatorial Guinea | 1.03 | 1.07 | 1.22 | 1.09 | 1.16 | | Eritrea | 1.03 | 1.01 | 0.97 | 0.66 | 0.97 | | Estonia | 1.05 | 1.05 | 1.02 | 0.55 | 0.89 | | Eswatini (Swaziland) | 1.03 | 1.00 | 0.87 | 0.59 | 0.90 | | Ethiopia | 1.03 | 1.01 | 0.99 | 0.82 | 0.99 | | European Union | 1.06 | 1.05 | 1.01 | 0.77 | 0.95 | | Faroe Islands (Denmark) | 1.07 | 1.07 | 1.12 | 0.93 | 1.07 | | Fiji | 1.05 | 1.04 | 1.05 | 0.86 | 1.03 | | Finland | 1.05 | 1.05 | 1.03 | 0.79 | 0.97 | | France | 1.05 | 1.05 | 1.01 | 0.79 | 0.96 | | French Polynesia (France) | 1.05 | 1.06 | 1.06 | 0.93 | 1.05 | | Gabon | 1.03 | 1.02 | 1.11 | 1.03 | 1.07 | | Gambia, The | 1.03 | 1.02 | 0.97 | 0.78 | 0.98 | | Gaza Strip/Palestine | 1.06 | 1.06 | 1.01 | 1.05 | 1.03 | | Georgia | 1.07 | 1.06 | 0.95 | 0.65 | 0.92 | | Germany | 1.05 | 1.04 | 1.03 | 0.81 | 0.98 | | Ghana | 1.03 | 1.02 | 0.93 | 0.81 | 0.96 | | Gibraltar (UK) | 
1.05 | 1.05 | 1.02 | 0.93 | 1.01 | | Greece | 1.07 | 1.06 | 1.00 | 0.80 | 0.96 | | Greenland (Denmark) | 1.05 | 1.03 | 1.07 | 1.13 | 1.07 | | Grenada | 1.10 | 1.09 | 1.04 | 0.90 | 1.03 | | Guam (US) | 1.07 | 1.07 | 1.10 | 0.88 | 1.06 | | Guatemala | 1.05 | 1.04 | 0.97 | 0.80 | 0.98 | | Guernsey (UK) | 1.05 | 1.06 | 1.02 | 0.87 | 0.99 | | Guinea | 1.03 | 1.02 | 1.00 | 0.83 | 1.00 | | Guinea-Bissau | 1.03 | 1.01 | 0.93 | 0.71 | 0.96 | | Guyana | 1.05 | 1.04 | 1.08 | 0.78 | 1.04 | | Haiti | 1.01 | 1.00 | 0.97 | 0.77 | 0.97 | | Honduras | 1.03 | 1.02 | 0.91 | 0.77 | 0.93 | | Hong Kong (China) | 1.06 | 1.10 | 0.81 | 0.86 | 0.86 | | Hungary | 1.06 | 1.10 | 1.03 | 0.69 | 0.95 | | Iceland | 1.05 | 1.04 | 1.02 | 0.90 | 1.00 | | India | 1.10 | 1.11 | 1.07 | 0.85 | 1.06 | | Indonesia | 1.05 | 1.05 | 1.00 | 0.85 | 1.00 | | Iran | 1.05 | 1.05 | 1.04 | 0.87 | 1.03 | | Iraq | 1.05 | 1.04 | 1.01 | 0.80 | 1.02 | | Ireland | 1.06 | 1.04 | 0.98 | 0.89 | 0.98 | | Isle of Man (UK) | 1.08 | 1.08 | 1.04 | 0.89 | 1.01 | | Israel | 1.05 | 1.05 | 1.04 | 0.84 | 1.01 | | Italy | 1.06 | 1.05 | 0.97 | 0.79 | 0.93 | | Ivory Coast | 1.03 | 1.01 | 1.02 | 0.82 | 1.01 | | Jamaica | 1.05 | 1.04 | 0.97 | 0.91 | 0.98 | | Japan | 1.06 | 1.06 | 1.01 | 0.79 | 0.95 | | Jersey (UK) | 1.06 | 1.06 | 1.03 | 0.75 | 0.98 | | Jordan | 1.06 | 1.06 | 1.13 | 0.95 | 1.10 | | Kazakhstan | 1.07 | 1.06 | 0.96 | 0.56 | 0.94 | | Kenya | 1.02 | 1.01 | 1.00 | 0.84 | 1.00 | | Kiribati | 1.05 | 1.04 | 0.93 | 0.63 | 0.94 | | Korea, North | 1.06 | 1.05 | 1.00 | 0.59 | 0.95 | | Korea, South | 1.05 | 1.05 | 1.07 | 0.79 | 1.01 | | Kosovo | 1.08 | 1.08 | 1.10 | 0.78 | 1.06 | | Kuwait | 1.05 | 1.09 | 1.51 | 0.74 | 1.36 | | Kyrgyzstan | 1.07 | 1.06 | 0.96 | 0.62 | 0.96 | | Laos | 1.04 | 1.03 | 0.99 | 0.87 | 1.00 | | Latvia | 1.05 | 1.06 | 0.98 | 0.52 | 0.87 | | Lebanon | 1.05 | 1.05 | 1.02 | 0.76 | 1.00 | | Lesotho | 1.03 | 1.01 | 1.00 | 0.59 | 0.98 | | Liberia | 1.03 | 1.01 | 0.99 | 0.87 | 1.00 | | Libya | 1.05 | 1.04 | 1.05 | 0.82 
| 1.04 | | Liechtenstein | 1.26 | 1.25 | 0.99 | 0.85 | 0.99 | | Lithuania | 1.06 | 1.06 | 0.96 | 0.53 | 0.86 | | Luxembourg | 1.06 | 1.06 | 1.05 | 0.85 | 1.02 | | Macau (China) | 1.05 | 1.05 | 0.87 | 0.89 | 0.90 | | Madagascar | 1.03 | 1.02 | 1.01 | 0.86 | 1.01 | | Malawi | 1.01 | 0.99 | 0.96 | 0.80 | 0.96 | | Malaysia | 1.07 | 1.06 | 1.06 | 0.94 | 1.05 | | Maldives | 1.05 | 1.04 | 1.06 | 0.77 | 1.04 | | Mali | 1.03 | 1.01 | 0.89 | 0.97 | 0.95 | | Malta | 1.04 | 1.06 | 1.07 | 0.86 | 1.02 | | Marshall Islands | 1.05 | 1.04 | 1.03 | 0.95 | 1.03 | | Mauritania | 1.03 | 1.01 | 0.90 | 0.73 | 0.93 | | Mauritius | 1.07 | 1.04 | 0.99 | 0.71 | 0.95 | | Mexico | 1.05 | 1.06 | 0.95 | 0.75 | 0.96 | | Micronesia | 1.05 | 1.03 | 0.94 | 0.79 | 0.96 | | Moldova | 1.07 | 1.00 | 0.94 | 0.62 | 0.89 | | Monaco | 1.04 | 1.05 | 1.02 | 0.80 | 0.93 | | Mongolia | 1.05 | 1.04 | 0.94 | 0.67 | 0.95 | | Montenegro | 1.04 | 1.06 | 1.00 | 0.78 | 0.96 | | Montserrat (UK) | 1.03 | 1.06 | 0.98 | 1.00 | 1.00 | | Morocco | 1.05 | 1.04 | 0.99 | 0.95 | 1.00 | | Mozambique | 1.03 | 1.03 | 0.93 | 0.97 | 0.97 | | Myanmar (Burma) | 1.06 | 1.05 | 0.97 | 0.77 | 0.97 | | Namibia | 1.03 | 1.02 | 0.95 | 0.76 | 0.97 | | Nauru | 1.04 | 1.04 | 0.97 | 0.49 | 0.96 | | Nepal | 1.06 | 1.06 | 0.93 | 0.95 | 0.96 | | Netherlands | 1.05 | 1.05 | 1.02 | 0.87 | 0.99 | | New Caledonia (France) | 1.05 | 1.04 | 1.01 | 0.77 | 0.99 | | New Zealand | 1.05 | 1.06 | 1.02 | 0.88 | 1.00 | | Nicaragua | 1.05 | 1.04 | 0.95 | 0.80 | 0.96 | | Niger | 1.03 | 1.02 | 0.95 | 0.92 | 0.98 | | Nigeria | 1.06 | 1.04 | 1.01 | 0.88 | 1.02 | | North Macedonia | 1.07 | 1.07 | 1.03 | 0.79 | 0.99 | | Northern Mariana Islands (US) | 1.17 | 1.16 | 1.11 | 1.12 | 1.12 | | Norway | 1.05 | 1.05 | 1.05 | 0.90 | 1.02 | | Oman | 1.05 | 1.05 | 1.24 | 0.87 | 1.16 | | Pakistan | 1.05 | 1.04 | 1.05 | 0.87 | 1.04 | | Palau | 1.06 | 1.07 | 1.25 | 0.33 | 1.06 | | Panama | 1.06 | 1.06 | 1.02 | 0.87 | 1.02 | | Papua New Guinea | 1.05 | 1.04 | 1.02 | 0.97 | 1.03 | | 
Paraguay | 1.05 | 1.04 | 1.01 | 0.91 | 1.00 | | Peru | 1.05 | 1.04 | 0.96 | 0.75 | 0.96 | | Philippines | 1.05 | 1.04 | 1.02 | 0.66 | 1.00 | | Poland | 1.06 | 1.06 | 0.96 | 0.67 | 0.91 | | Portugal | 1.05 | 1.05 | 0.97 | 0.66 | 0.90 | | Puerto Rico (US) | 1.06 | 1.04 | 0.92 | 0.75 | 0.89 | | Qatar | 1.02 | 1.02 | 4.29 | 1.91 | 3.32 | | Romania | 1.06 | 1.06 | 1.00 | 0.70 | 0.93 | | Russia | 1.06 | 1.06 | 0.95 | 0.52 | 0.87 | | Rwanda | 1.03 | 1.02 | 0.95 | 0.67 | 0.96 | | Saint Barthelemy (France) | 1.06 | 1.06 | 1.17 | 1.01 | 1.12 | | Saint Helena, Ascension and Tristan da Cunha (UK) | 1.06 | 1.04 | 0.99 | 1.03 | 1.00 | | Saint Kitts and Nevis | 1.02 | 1.01 | 1.02 | 0.91 | 1.00 | | Saint Lucia | 1.06 | 1.06 | 0.94 | 0.83 | 0.94 | | Saint Martin (France) | 1.04 | 0.99 | 0.92 | 0.75 | 0.92 | | Saint Pierre and Miquelon (France) | 1.06 | 1.05 | 0.97 | 0.78 | 0.93 | | Saint Vincent and the Grenadines | 1.03 | 1.02 | 1.06 | 0.94 | 1.04 | | Samoa | 1.05 | 1.07 | 1.04 | 0.81 | 1.03 | | San Marino | 1.09 | 1.10 | 0.94 | 0.83 | 0.93 | | Sao Tome and Principe | 1.03 | 1.03 | 0.99 | 0.75 | 1.00 | | Saudi Arabia | 1.05 | 1.04 | 1.42 | 1.14 | 1.31 | | Senegal | 1.05 | 1.04 | 0.94 | 0.76 | 0.97 | | Serbia | 1.06 | 1.06 | 1.01 | 0.71 | 0.95 | | Seychelles | 1.03 | 1.06 | 1.14 | 0.76 | 1.08 | | Sierra Leone | 1.03 | 1.02 | 0.96 | 0.97 | 0.98 | | Singapore | 1.05 | 1.07 | 1.01 | 0.87 | 1.00 | | Sint Maarten (Netherlands) | 1.05 | 1.07 | 0.98 | 0.86 | 0.98 | | Slovakia | 1.07 | 1.09 | 0.98 | 0.67 | 0.93 | | Slovenia | 1.04 | 1.05 | 1.09 | 0.78 | 1.00 | | Solomon Islands | 1.05 | 1.06 | 1.05 | 0.89 | 1.04 | | Somalia | 1.03 | 1.00 | 1.04 | 0.76 | 1.01 | | South Africa | 1.02 | 1.00 | 0.98 | 0.73 | 0.96 | | South Sudan | 1.05 | 1.04 | 1.03 | 1.22 | 1.04 | | Spain | 1.05 | 1.04 | 1.00 | 0.76 | 0.95 | | Sri Lanka | 1.05 | 1.05 | 0.95 | 0.73 | 0.94 | | Sudan | 1.05 | 1.03 | 0.99 | 1.07 | 1.01 | | Suriname | 1.07 | 1.03 | 1.00 | 0.70 | 0.98 | | Sweden | 1.06 | 1.06 | 1.05 | 0.88 | 1.01 | 
| Switzerland | 1.05 | 1.05 | 1.02 | 0.85 | 0.99 | | Syria | 1.06 | 1.05 | 0.99 | 0.88 | 1.01 | | Taiwan | 1.06 | 1.06 | 1.00 | 0.82 | 0.97 | | Tajikistan | 1.05 | 1.04 | 1.00 | 0.81 | 1.01 | | Tanzania | 1.03 | 1.02 | 1.00 | 0.74 | 1.00 | | Thailand | 1.05 | 1.05 | 0.96 | 0.80 | 0.95 | | Timor-Leste | 1.07 | 1.06 | 0.96 | 0.92 | 0.99 | | Togo | 1.03 | 1.03 | 0.96 | 0.71 | 0.97 | | Tonga | 1.03 | 1.03 | 1.02 | 0.83 | 1.01 | | Trinidad and Tobago | 1.04 | 1.04 | 1.04 | 0.87 | 1.01 | | Tunisia | 1.06 | 1.06 | 0.97 | 0.90 | 0.98 | | Turkey | 1.05 | 1.05 | 1.03 | 0.83 | 1.01 | | Turkmenistan | 1.05 | 1.03 | 0.99 | 0.78 | 0.98 | | Turks and Caicos Islands (UK) | 1.05 | 1.04 | 1.01 | 0.94 | 1.01 | | Tuvalu | 1.05 | 1.05 | 1.02 | 0.57 | 0.98 | | Uganda | 1.03 | 1.03 | 0.90 | 0.74 | 0.95 | | Ukraine | 1.06 | 1.07 | 1.12 | 0.53 | 0.97 | | United Arab Emirates | 1.06 | 1.05 | 2.47 | 3.25 | 2.13 | | United Kingdom | 1.05 | 1.05 | 1.02 | 0.85 | 0.99 | | United States | 1.05 | 1.05 | 1.00 | 0.81 | 0.97 | | Uruguay | 1.04 | 1.04 | 0.99 | 0.68 | 0.94 | | Uzbekistan | 1.08 | 1.07 | 1.00 | 0.79 | 1.01 | | Vanuatu | 1.05 | 1.04 | 0.96 | 0.96 | 0.99 | | Venezuela | 1.05 | 1.05 | 0.99 | 0.84 | 0.99 | | Vietnam | 1.10 | 1.12 | 1.02 | 0.69 | 1.01 | | Virgin Islands (UK) | 1.05 | 0.98 | 0.89 | 0.90 | 0.90 | | Virgin Islands (US) | 1.06 | 1.05 | 0.90 | 0.81 | 0.90 | | Wallis and Futuna (France) | 1.05 | 1.09 | 1.05 | 1.02 | 1.06 | | West Bank/Palestine | 1.06 | 1.05 | 1.03 | 0.90 | 1.03 | | Yemen | 1.05 | 1.04 | 1.03 | 0.78 | 1.02 | | Zambia | 1.03 | 1.02 | 1.00 | 0.82 | 1.00 | | Zimbabwe | 1.03 | 1.02 | 0.92 | 0.68 | 0.95 | See also [edit] List of Chinese administrative divisions by sex ratio List of states and union territories of India by sex ratio Missing women References [edit] ^ "Sex ratio - The World Factbook". www.cia.gov. Retrieved 2025-09-21. 
3974
https://gamedev.stackexchange.com/questions/32342/help-understanding-the-order-of-vector-subtraction
mathematics - Help understanding the order of vector subtraction - Game Development Stack Exchange
Help understanding the order of vector subtraction. Asked Jul 14, 2012, viewed 967 times. I always struggle with intuitively figuring out the order of subtraction in multiple situations, for example: the difference between the current mouse coordinates and the last frame's coordinates; the direction an object is in relative to the camera position; etc. I know that A - B = A + (-B); my problem is figuring out which one is A and which one is B, and those were just simple examples. Are there any hard and fast rules for this? Or is it supposed to be intuitive and obvious (which is what I'm afraid of)? I have similar problems with figuring out which order to take the cross product of two vectors, depending on the scenario. (mathematics, asked by manning18) 3 Answers: Draw it all out on paper. Do it in 1D (i.e. a number line), then do it in 2D (i.e. a square grid). Draw point B for the current mouse position, then point A for the previous position. B - A gets you the total change in 1D or 2D or 3D, and why is very easy to see on the 1D number line. Stop calling the points A and B until later; try calling them Now and Then. If you take Now minus Then, you get what happened between Now and Then. As for cross products, keep in mind that positive rotation is counter-clockwise.
So if you have 2D vectors drawn out on paper with points A, B, C, and looking at it BA is counter-clockwise of BC, then the cross result will point up out of the paper towards your face. If you hold your right hand out with the index finger straight out, the next finger turned left, and the thumb up, that shows the "right hand rule" for which side is which in 3D, and it describes visually what you just drew in 2D. (Patrick Hughes, answered Jul 14, 2012. Comment: The last paragraph is why some math and physics exams can be hilarious to watch, particularly if magnetic fields are involved :) Anton, Jul 14, 2012) The cross product is fairly simple. You just use the right hand rule, where your four fingers make the motion of going from the first vector to the second and the thumb points in the direction the product vector will be pointing. As for vector subtraction, let's say you have two points. If you subtract point A from point B (it doesn't actually make sense from the standpoint of mathematics, but you represent vectors like you do points, where the tail is at the origin and the head is at the given point), then you'll get a vector going from point A to point B. These things are easiest to visualize if you only account for the first quadrant of the coordinate system (where the x and y axes are positive). Same for vectors: if you have two vectors a and b, then subtracting a from b will result in a vector going from the tip of a to the tip of b.
(dreta, answered Jul 14, 2012) For vector subtraction, I use the mental image of drawing a bow. If I'm subtracting A - B, I visualize holding up a bow with the arrow point at A, then drawing it back till the tail of the arrow is at B. The arrow is the result of the subtraction: a vector pointing from B to A. For cross products, dreta's and Patrick's answers already described the two most common methods. (Nathan Reed, answered Jul 15, 2012)
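The two rules of thumb in these answers, "destination minus origin" (Now minus Then) for subtraction and the sign of the cross product for winding, can be sketched in Python. The names below are mine, not from the answers; the 2D "cross product" here is the z component of the equivalent 3D cross product:

```python
# "Now minus Then": the per-frame mouse delta is current minus previous,
# i.e. the vector pointing FROM the previous position TO the current one.
def delta(now, then):
    return (now[0] - then[0], now[1] - then[1])

# 2D cross product (z component of the 3D cross): positive means b is
# counter-clockwise from a, negative means clockwise, zero means parallel.
def cross2d(a, b):
    return a[0] * b[1] - a[1] * b[0]

prev_mouse = (100, 200)
cur_mouse = (110, 190)
print(delta(cur_mouse, prev_mouse))  # (10, -10): how the mouse moved this frame

print(cross2d((1, 0), (0, 1)))  # 1: +y is counter-clockwise from +x
print(cross2d((0, 1), (1, 0)))  # -1: +x is clockwise from +y
```

Swapping the operand order flips the sign in both cases, which is exactly the ambiguity the question asks about: `delta(then, now)` points the opposite way, and `cross2d(b, a) == -cross2d(a, b)`.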
6Robust line of sight test on the inside of a polygon with tolerance 1Calculating angle between two vectors to steer towards a target 5Understanding math used to determine if vector is clockwise / counterclockwise from your vector 3Make an object follow the mouse pointer with LÖVE2D 1The math of normal mapping without a dot product 2How to calculate a 3D orientation from 2D vanishing lines on an image? 1How to solve for the angle of a axis/angle rotation that gets me closest to a specific orientation Hot Network Questions How to rsync a large file by comparing earlier versions on the sending end? ICC in Hague not prosecuting an individual brought before them in a questionable manner? ConTeXt: Unnecessary space in \setupheadertext в ответе meaning in context Proof of every Highly Abundant Number greater than 3 is Even How to convert this extremely large group in GAP into a permutation group. Can peaty/boggy/wet/soggy/marshy ground be solid enough to support several tonnes of foot traffic per minute but NOT support a road? How different is Roman Latin? Exchange a file in a zip file quickly Checking model assumptions at cluster level vs global level? With with auto-generated local variables Passengers on a flight vote on the destination, "It's democracy!" Alternatives to Test-Driven Grading in an LLM world Analog story - nuclear bombs used to neutralize global warming "Unexpected"-type comic story. Aboard a space ark/colony ship. Everyone's a vampire/werewolf Suggestions for plotting function of two variables and a parameter with a constraint in the form of an equation Matthew 24:5 Many will come in my name! Explain answers to Scientific American crossword clues "Éclair filling" and "Sneaky Coward" Making sense of perturbation theory in many-body physics Is existence always locational? Do we declare the codomain of a function from the beginning, or do we determine it after defining the domain and operations? Does a Linux console change color when it crashes? 
3975
https://prepp.in/question/select-the-most-appropriate-antonym-of-the-given-w-645d35a8e861018095807802
Select the most appropriate antonym of the given word.

GAUNT

Options: Plump / Old / Bent / Lean

Solution: The correct answer is Plump.

Understanding the Antonym of GAUNT: The question asks us to find the most appropriate antonym for the word "GAUNT". An antonym is a word that means the opposite of another word. To answer this, we first need to understand the meaning of "GAUNT" and then look for a word among the options that has the opposite meaning.

Defining GAUNT: The word GAUNT is typically used to describe a person who is very thin, especially because of illness, lack of food, or worry. It suggests a skeletal or bony appearance, often looking drawn or hollow-cheeked. Synonyms for GAUNT include thin, bony, emaciated, skeletal, scrawny, skinny, and lean (in a negative sense).

Analyzing the Options: Let's examine each option to determine its meaning and whether it is the opposite of GAUNT.

Plump: This word describes someone or something that is well-rounded, slightly fat, or full-bodied. It suggests a healthy or pleasing roundness, the opposite of being excessively thin or bony.

Old: This refers to someone having lived for many years. While some old people might be thin, "old" describes age, not necessarily body shape. It is not the opposite of gaunt.

Bent: This means not straight, or having a curve or angle. It describes posture or shape in a structural sense, not the degree of thinness or fullness. It is not the opposite of gaunt.

Lean: This describes someone who is thin, especially in a healthy or athletic way, having little excess fat.
While it means thin, "GAUNT" implies an unhealthy or extreme thinness, whereas "lean" can be positive. In some contexts, "lean" can even be a synonym for gaunt, so it is certainly not an antonym.

Identifying the Antonym: Comparing the options to the meaning of GAUNT (excessively thin and bony), the word that presents the most direct opposite is Plump, which describes someone who is well-rounded or slightly full-bodied.

Conclusion: Based on the analysis of the word meanings, Plump is the most appropriate antonym for GAUNT.

Revision Table: Antonym Analysis

| Word | Meaning Related to Body Shape | Relationship to GAUNT |
| --- | --- | --- |
| GAUNT | Excessively thin, bony, skeletal (often due to hardship) | Target word |
| Plump | Well-rounded, slightly full-bodied | Opposite |
| Old | Relating to age | Unrelated to body shape |
| Bent | Not straight or having a curve | Unrelated to body shape |
| Lean | Thin (often in a healthy way); can be a synonym in some contexts | Synonym or similar, not opposite |

Additional Information: Expanding Vocabulary. Understanding antonyms and synonyms is a key part of building a strong vocabulary. Words often have subtle differences in meaning and connotation. While "lean" can describe someone thin, "gaunt" carries a stronger sense of emaciation and often suggests suffering or deprivation. "Plump" often has a slightly positive or neutral connotation, suggesting a healthy roundness, as opposed to "fat", which can be more negative. Learning words in context helps to grasp these nuances and use them correctly.
3976
https://www.sciencedirect.com/topics/neuroscience/stop-codon
Review article: Clarifying lysosomal storage diseases (2011, Trends in Neurosciences; Mark L. Schultz, ... Beverly L. Davidson)

Stop codon read-through. Mammalian cells utilize three mRNA stop codons to terminate protein translation (UAA, UAG and UGA). It has been shown that, although these termination signals are highly efficient, 'read-through' of stop codons does occur, and this phenomenon is in part influenced by bases immediately downstream of the stop codon. For genetic disorders resulting from a premature stop codon, inducing read-through to increase expression of functional protein with gentamycin and its derivatives has shown early promise in cell culture and animal models [74,75]. Although these reagents have not yet shown clinical efficacy for the LSDs, newer derivatives might.

Chapter: Virus Interactions With the Cell (2017, Viruses; Susan Payne)

Suppression of a stop codon is a process whereby a ribosome fails to terminate protein synthesis at a stop codon. Most eukaryotic genes terminate with multiple stop codons, but if there is a single stop codon, an amino acid can be inserted into the growing polypeptide and translation continues. This is estimated to occur as often as 20% of the time. Thus a protein coding sequence downstream of a pseudoknot or a single stop codon will be synthesized at a lower quantity (~20%) than the protein encoded when the signal is ignored (~80% of the time).
Chapter: Genome Composition, Organization, and Expression (2014, Plant Virology (Fifth Edition); Roger Hull)

Stop Codon Context. Stop codons have different efficiencies of termination (UAA > UAG > UGA), and the first, and possibly the second, nucleotide 3′ of the stop codon acts as an important efficiency determinant (Stansfield et al., 1995). It can be seen from Table 6.8 that either amber (UAG) or opal (UGA) stop codons are read through; there are no examples of readthrough of the natural ochre (UAA) stop codon. However, when the suppressible UAG codon of TMV is replaced by a UAA codon, the virus can still replicate and produce mature virions (Ishikawa et al., 1986). Unlike retroviruses, there appear to be no structural requirements for stop codon suppression in plant systems. Various motifs have been recognized to stimulate readthrough (Beier and Grimm, 2001; Harrell et al., 2002). Type I (generally UAG CAA UYA) is found in tobamovirus replicase and benyvirus and pomovirus CP extension; Type II [generally UGA CGG or UGA CUA together with a 3′ structure (Firth et al., 2011)] is found in tobravirus, pecluvirus, furovirus, and pomovirus replicase and furovirus CP extension; and Type III (generally UAG G together with a compact pseudoknot) is found possibly in luteoviruses and tombusviruses. Firth et al. (2011) note that there are some exceptions to these motifs (e.g., enamovirus UGA G) and that there may be modulation of readthrough by 5′ sequences; this may reflect the level of readthrough required by different viruses. The context of the TMV amber stop codon at the end of the 126-kDa ORF has been studied in detail by insertion into constructs containing the GUS or other genes, which enable readthrough to be quantitated in protoplasts (Valle et al., 1992).
This experimental approach has defined the sequence (C/A)(A/C)A.UAG.CAR.YYA (R = purine, Y = pyrimidine) as the optimal consensus context (Skuzeski et al., 1991; Hamamoto et al., 1993; Harrell et al., 2002). Efficiently recognized stop codons normally have a purine immediately downstream and avoid having a C residue (Fütterer and Hohn, 1996). However, an analysis of Table 6.8 shows that the readthrough stop codons identified for plant viruses do not necessarily conform to these contexts determined from in vitro systems. As the sequence context differs from that described above, the readthrough of UAG.G in many of the Tombusviridae and the Luteoviridae suggests that a different mechanism might be involved. It is possible that there may be a requirement for additional cis-acting sequences, such as a conserved CCCCA motif or repeated CCXXXX motifs beginning from 12 to 21 bases downstream of many of these readthrough sites (Fütterer and Hohn, 1996; Miller et al., 1997). It may be that other long-distance interactions are involved in the deviations from the experimentally determined optimal contexts for TMV.

Chapter: Dystrophinopathies. 2020, Rosenberg's Molecular and Genetic Basis of Neurological and Psychiatric Disease (Sixth Edition); John F. Brandsema, Basil T.
Darras.

Stop codon read-through. Suppression of stop codons (applicable to mutations leading to premature stop in about 10%–15% of DMD patients) has been demonstrated with aminoglycoside treatment of cultured cells; the treatment creates misreading of RNA and thereby allows alternative amino acids to be inserted at the site of the mutated stop codon.6 Results in preclinical and clinical studies were largely negative.202–204 Ataluren (PTC124) is an orally administered nonantibiotic drug that appears to promote ribosomal read-through of nonsense (stop) mutations to allow bypass of the pathogenic variant and continuation of the translation process to production of a functioning dystrophin protein. Preclinical through phase II studies were promising; a multicenter 48-week double-blind placebo-controlled trial (ACT DMD) showed no significant benefit of ataluren for the primary endpoint (i.e., change from baseline in the 6-minute walk test), though there was benefit for some secondary endpoints and the lower dose was favored, as affected individuals on low doses of ataluren had a 30-m lower decline in the 6-minute walk distance than those on high doses or placebo.48,205 Based on these results, the drug was granted conditional approval as Translarna by the European Medicines Agency in August 2014 to treat DMD caused by a nonsense variant; ataluren has not been approved for treating DMD in the United States through 2018.48

Review article: Genome Dynamics and DNA Repair in the CNS. 2007, Neuroscience; P.Ø. Falnes, ... I.
Alseth.

Cellular responses to defective RNAs. All organisms, ranging from bacteria to mammals, have so-called mRNA surveillance mechanisms for handling the deleterious consequences of defective mRNAs, often by targeting the mRNA for degradation through a process which is linked to protein translation. One class of mRNAs subject to such surveillance comprises those which are truncated at the 3′ end, and thus lack an in-frame stop codon. Stop codons represent the signal for release of the ribosome from the mRNA, and when a stop codon is lacking, the ribosome will be stuck at the 3′ end of the mRNA. In bacteria such as E. coli, these "non-stop" mRNAs are targets of the SsrA system. The SsrA is a so-called tmRNA: a tRNA-like molecule which can bind to the stalled ribosome, and which also encodes a short peptide which, upon ribosome stalling, is attached to the nascent polypeptide and targets it for degradation (reviewed by Karzai et al., 2000). In the process the mRNA is also endonucleolytically cleaved, and the stalled ribosome liberated (Hayes and Sauer, 2003). Thus, the SsrA system prevents the negative effects of "non-stop" mRNAs both by providing degradation of the potentially deleterious truncated protein, and by freeing the stalled ribosome. Eukaryotic cells also have systems for dealing with non-stop mRNAs. When a ribosome is stalled at the stop codon-less mRNA, the non-stop decay pathway targets the mRNA for degradation from the 3′ end by the exosome complex (Frischmeyer et al., 2002; van Hoof et al., 2002). In addition, eukaryotic cells have a pathway for specific degradation of mRNAs containing premature stop codons, the so-called nonsense-mediated decay (NMD) pathway. In mammalian cells, mRNAs that contain an in-frame stop codon more than ∼50 nucleotides upstream of the last exon–exon junction will be subject to decapping and degradation from the 5′ end (reviewed by Wagner and Lykke-Andersen, 2002).
Thus, this pathway will mediate disposal of mRNAs that contain premature stop codons caused by mutations generated in transcription or replication, but also of aberrant transcripts containing retained introns. Possibly, pathways involved in the degradation of faulty mRNAs may also be capable of disposing of mRNAs that have been subjected to damage. Indeed, it was reported that the protein hSMG-1, which is an essential component of the NMD pathway, is also involved in the cellular response to genotoxic stress, suggesting that the NMD pathway may also have a role in disposing of mRNAs that have been damaged by such stress (Brumbaugh et al., 2004). In Arabidopsis thaliana, exposure to genotoxic stress such as UV radiation or a methylating agent caused mRNA degradation. However, mutant plants lacking the ribosomal protein S27 displayed a UV- or methylation-sensitive phenotype, and no such mRNA degradation was observed (Revenkova et al., 1999). Therefore, one may speculate that the S27 protein is involved in a response which mediates degradation of damaged mRNAs, possibly as a result of ribosome stalling when damage is encountered during translation. The notion that damaged RNAs may be targeted by degradation pathways has been further strengthened by the recent discovery of the so-called no-go decay pathway (Doma and Parker, 2006). It was observed that when the advancement of the ribosome in yeast cells was blocked by the introduction of a highly stable stem-loop structure, mRNA degradation was initiated. Unlike the NMD and non-stop decay pathways, where degradation occurs from the mRNA ends, the mRNA was actually subjected to an endonucleolytic cleavage. The obstruction of ribosome advancement by a stem-loop is in many respects reminiscent of the situation of ribosome stalling in response to an mRNA lesion.
Therefore, it will be very interesting to study whether the presence of an mRNA lesion will actually target the mRNA for degradation, presumably by a mechanism which depends on protein translation.

Chapter: Expression of Genes and Genomes (2012, Human Genes and Genomes; Leon E. Rosenberg, Diane Drobnis Rosenberg)

Termination and Release. Termination of synthesis and release from the ribosome occur when a stop codon in mRNA is encountered, i.e., a codon that does not interact with a tRNA species (Figure 7.10). Upon recognition of the stop codon, a release factor interacts with the ribosome and the polypeptide chain. When this occurs, the ribosomal subunits are recycled to other mRNAs.

Chapter: Enzymes and Nucleic Acids (1996, Enzymology Primer for Recombinant DNA Technology; Hyone-Myong Eun)

c. Termination of translation. Protein synthesis stops when a protein release factor (RF) recognizes specific termination signals contained in the RNA sequence. Three stop or nonsense codons have been identified: UAG (amber), UAA (ochre), and UGA (opal). [All ciliated protozoa deviate from the universal genetic code by translating either one or two termination codons into Gln or Cys (13).] Chain termination involves the nucleophilic attack by water on the ester bond between the 3′-adenosine of tRNA and the amino acid. Although the translation of mRNA is generally terminated at the trinucleotide stop codons, efficient stop signals are consistently associated, in a context-dependent manner, with a fourth base both before and after the trinucleotide codon, presumably functioning as tetranucleotide stop signals, e.g., UAA(A/G) and UGA(A/G) (14). Prokaryotic stop signals show a strong bias for U immediately 3′ to stop codons, whereas eukaryotes prefer a purine, or a purine and U.
Both groups share a strong bias against C at this position, and especially against the CG dinucleotide (6).

Chapter: Translation (2013, Brenner's Encyclopedia of Genetics (Second Edition); A. Liljas)

Reading Frame and Usage of the Genetic Code. The initiator AUG codon not only defines the start but also the reading frame of an mRNA. Translation proceeds from this start in steps of three nucleotides (one codon) by binding a cognate tRNA through base pairing. The frequent occurrence of termination codons out of frame prevents translation in the wrong frame for more than short stretches. However, there are mRNAs where the correct translation needs a change of reading frame. This is the case for Escherichia coli termination or release factor 2 (RF2). The read-through of a stop codon requires a tRNA that would decode a stop (nonsense) codon as a sense codon and incorporate a specific amino acid. Such tRNAs are called suppressor tRNAs. In a few proteins in bacteria and eukaryotes, selenocysteine (Se-Cys) is required. This is not incorporated by a posttranslational modification, as in other cases of nonstandard amino acids. Se-Cys is rather incorporated during translation in response to one of the stop codons. The mechanism for this involves a special tRNA (tRNASec), which reads the stop codon. A similar mechanism leads to the incorporation of pyrrolysine. Selenocysteine and pyrrolysine are the 21st and 22nd amino acids being translated.
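The role of the three stop codons described in the excerpts above can be illustrated with a minimal translation loop. This is only a sketch (the function name and sample sequences are my own): it walks a reading frame codon by codon until UAA, UAG, or UGA is reached, mimicking where a release factor would terminate synthesis.

```python
# The three mRNA stop codons: UAA (ochre), UAG (amber), UGA (opal).
STOP_CODONS = {"UAA", "UAG", "UGA"}

def codons_until_stop(mrna, start=0):
    """Return the sense codons read from `start` up to the first in-frame stop codon."""
    read = []
    for i in range(start, len(mrna) - 2, 3):
        codon = mrna[i:i + 3]
        if codon in STOP_CODONS:
            return read          # termination: a release factor would act here
        read.append(codon)
    return read                  # a "non-stop" mRNA: no in-frame stop codon found

print(codons_until_stop("AUGGCUUAAGGG"))   # ['AUG', 'GCU'] -- translation stops at UAA
```

Note that shifting `start` by one or two changes the reading frame, which is why out-of-frame termination codons usually halt mistranslated stretches quickly.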
Liljas.

Review article: Selective neuronal vulnerability to deficits in RNA processing (2023, Progress in Neurobiology; Gabrielle Zuniga, Bess Frost)

3.4.1 Enhanced stop-codon readthrough efficiencies in neurons. Increased separation between the 3′ UTR-localized polyA-binding protein and the eukaryotic release factor 3 significantly reduces translation termination efficiency (Roque et al., 2015) and increases the extent of translation through a premature stop codon and production of a C-terminally extended protein product (Wu et al., 2020). Studies in Drosophila and mice suggest that this phenomenon, known as "stop codon readthrough," is highest in neurons compared to other cell types (Chen et al., 2020; Sapkota et al., 2019). Stop codon readthrough at a premature termination codon inhibits NMD (Annibaldis et al., 2020), suggesting that frequent readthrough, as occurs in neurons, leads to accumulation of C-terminally extended protein isoforms that are normally prevented by NMD.

Review article: Unnatural amino acid mutagenesis in mapping ion channel function. 2003, Current Opinion in Neurobiology; Darren L Beene, ... Henry A Lester.

The basic method for in vivo nonsense suppression (Figure 1) first entails mutating a codon of interest to the amber stop codon, TAG. This is done using conventional site-directed mutagenesis, followed by in vitro transcription of UAG-containing mRNA. Separately, a suppressor tRNA containing the appropriate anticodon (CUA) is prepared and chemically acylated with an unnatural amino acid. The tRNA and mRNA are then coinjected into a Xenopus oocyte. Protein synthesis and surface expression are carried out by the oocyte, allowing electrophysiological study 24–72 hours later.
3977
https://www.math.cmu.edu/~amanita/math259/handouts/m259_w09_handout1.pdf
Math 259 Winter 2009

Handout 1: Derivation of the Cartesian Equation for an Ellipse

The purpose of this handout is to illustrate how the usual Cartesian equation for an ellipse,

$$\frac{x^2}{a^2} + \frac{y^2}{b^2} = 1,$$

is obtained from the Euclidean definition of the ellipse.

Consider an ellipse with foci $(-c, 0)$ and $(c, 0)$. The Euclidean definition of the ellipse is that the total distance of the point $(x, y)$ from the two foci is equal to a constant, which we write as $2a$ with $a > 0$. Label the distances from $(x, y)$ to the two foci $d_1$ and $d_2$. Using the Theorem of Pythagoras we can calculate that

$$d_1 = \sqrt{(-c - x)^2 + y^2} = \sqrt{(c + x)^2 + y^2}, \qquad d_2 = \sqrt{(c - x)^2 + y^2},$$

bearing in mind that $x < 0$ in the diagram given in the original handout. The Euclidean condition for a point $(x, y)$ to be a point on the ellipse is then

$$\sqrt{(c + x)^2 + y^2} + \sqrt{(c - x)^2 + y^2} = 2a.$$

Subtracting $d_2$ from both sides and squaring gives

$$\sqrt{(c + x)^2 + y^2} = 2a - \sqrt{(c - x)^2 + y^2}$$
$$(c + x)^2 + y^2 = 4a^2 - 4a\sqrt{(c - x)^2 + y^2} + (c - x)^2 + y^2.$$

Simplifying this equation and making the square root the subject gives

$$4a\sqrt{(c - x)^2 + y^2} = 4a^2 + (c - x)^2 + y^2 - (c + x)^2 - y^2$$
$$4a\sqrt{(c - x)^2 + y^2} = 4a^2 - 4cx$$
$$\sqrt{(c - x)^2 + y^2} = a - \frac{c}{a}x.$$

Squaring both sides of this equation to remove the square root then gives

$$(c - x)^2 + y^2 = a^2 - 2cx + \frac{c^2}{a^2}x^2.$$

Expanding (FOILing) and simplifying this expression then gives

$$c^2 + x^2 + y^2 = a^2 + \frac{c^2}{a^2}x^2.$$

Combining the $x^2$ terms, subtracting $c^2$ from both sides and simplifying gives

$$\frac{a^2 - c^2}{a^2}x^2 + y^2 = a^2 - c^2.$$

Setting $b^2 = a^2 - c^2$ and dividing both sides of this equation by $b^2$ gives the familiar

$$\frac{x^2}{a^2} + \frac{y^2}{b^2} = 1.$$
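The derivation can be sanity-checked numerically: parametrize points satisfying $x^2/a^2 + y^2/b^2 = 1$ with $b^2 = a^2 - c^2$, and confirm that the two focal distances sum to $2a$. The sample values $a = 5$, $c = 3$ below are my own, not from the handout.

```python
import math

a, c = 5.0, 3.0                        # example values with a > c > 0
b = math.sqrt(a**2 - c**2)             # b^2 = a^2 - c^2 (b = 4 here)

for k in range(100):
    t = 2 * math.pi * k / 100
    x, y = a * math.cos(t), b * math.sin(t)   # a point on the ellipse
    d1 = math.hypot(x + c, y)                 # distance to focus (-c, 0)
    d2 = math.hypot(x - c, y)                 # distance to focus (c, 0)
    assert abs((d1 + d2) - 2 * a) < 1e-9      # Euclidean definition holds
```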
3978
https://www.maths.dur.ac.uk/users/herbert.gangl/dioph4.html
Project IV 06-07 (HG)

Project IV (MATH4072) 2006-2007

Diophantine equations

H Gangl

Description

Diophantine equations have an immediate appeal; they are--typically very simple--equations in at least two variables with integer coefficients, for which one wants to find all solutions--if any--in integers. In contrast to their simple description they often demand subtle tools and ingenious ideas for their solution. Famous examples are:

1) Pell's equation: given an integer D, does x^2 - D y^2 = 1 have a solution in integers with non-zero y? This question leads almost inevitably to the study of continued fractions and of certain quadratic fields;

2) Fermat's Last Theorem (now Wiles's Theorem): x^n + y^n = z^n with an integer n > 2, for the tackling of which people have developed a huge variety of tools--a prominent one being the infinite descent (some glimpses into Wiles's proof could be taken);

3) the four-square theorem: each natural number N can be written as a sum of four integer squares (summands 0^2 being allowed); one can in fact say in how many different ways this is possible for a given N; a related theorem states which N can be written as a sum of two integer squares;

4) Euler conjectured in the 18th century that there are no solutions in integers of the form x^4 + y^4 + z^4 = w^4, but only 20 years ago Elkies (and independently Zagier) found--in fact infinitely many--counterexamples, the smallest one being given by Frye as 95800^4 + 217519^4 + 414560^4 = 422481^4;

5) often one is also interested in studying the rational solutions of the equations in question: for example, does there exist a right triangle with rational sides and area equal to 1? Seemingly unrelated, a prize question in the Sunday Telegraph of London (1.1.1995) asked for a solution of A^3 + B^3 = 6 where A and B are positive rational numbers; both questions have their natural setting in the theory of elliptic curves, where arithmetic and geometric ideas blend into each other in an intriguing way.
As there are many different levels on which diophantine equations can be studied, from the very elementary to the rather sophisticated, this project can easily be tailored to the student's background. Depending on the topic chosen, the project could involve computer work, theoretical investigation, or a combination of the two.

Prerequisites

There are no particular prerequisites, although already having an idea about rings and polynomials, as in Algebra and Number Theory II or Number Theory III, may be useful for tackling deeper questions. Prior knowledge about elliptic curves would allow the student to take a more geometric viewpoint and to cover more demanding topics related to the ones in 5) above, or even to Wiles's work on Fermat's Last Theorem.

Resources

The following book covers most of the above, and much more, from a classical point of view:
L. J. Mordell: Diophantine Equations
The following books give a more leisurely--and enticing--introduction to selected Diophantine problems:
K. Ireland, M. Rosen: A Classical Introduction to Modern Number Theory, Chapter 17 or
W. Scharlau, H. Opolka: From Fermat to Minkowski, Chapters 2, 3, 5 or
K. Kato, N. Kurokawa, T. Saito: Number Theory 1, Fermat's Dream, Chapters 0, 1 or
Z. I. Borevich, I. R. Shafarevich: Number Theory, Chapter I
Elliptic curves are especially nicely treated in
J.T. Tate, J.H. Silverman: Rational Points on Elliptic Curves (Undergraduate Texts in Mathematics, Springer)
Historical approaches to Fermat's last theorem can be found in
P. Ribenboim: 13 Lectures on Fermat's Last Theorem
There are more modern books on the subject, for example
Y. Hellegouarch: Invitation to the Mathematics of Fermat-Wiles
and a very sophisticated one:
Elliptic Curves, Modular Forms and Fermat's Last Theorem: proceedings of a conference held in the Institute of Mathematics of the Chinese University of Hong Kong, edited by John Coates, S.T. Yau.
Useful links on the web about Diophantine equations are given at (the bottom of) Dave Rusin's home page ("Selected topics"). Contact: Herbert Gangl.
3979
https://www.khanacademy.org/science/ap-chemistry-beta/x2eef969c74e0d802:chemical-reactions/x2eef969c74e0d802:oxidation-reduction-redox-reactions/a/oxidation-number
Oxidation–reduction (redox) reactions (article) | Khan Academy
AP®︎/College Chemistry > Unit 4 > Lesson 5: Oxidation–reduction (redox) reactions
AP.Chem: TRA‑2 (EU), TRA‑2.A (LO), TRA‑2.A.2 (EK), TRA‑2.A.3 (EK), TRA‑2.A.4 (EK)

What is an oxidation–reduction reaction?

Plants use photosynthesis, a redox process, to derive energy from the sun. Image credit: Eschtar M. on Pixabay, Pixabay Licence.

An oxidation–reduction or redox reaction is a reaction that involves the transfer of electrons between chemical species (the atoms, ions, or molecules involved in the reaction). Redox reactions are all around us: the burning of fuels, the corrosion of metals, and even the processes of photosynthesis and cellular respiration involve oxidation and reduction. Some examples of common redox reactions are shown below.

CH4(g) + 2 O2(g) → CO2(g) + 2 H2O(g)   (combustion of methane)
2 Cu(s) + O2(g) → 2 CuO(s)   (oxidation of copper)
6 CO2(g) + 6 H2O(l) → C6H12O6(s) + 6 O2(g)   (photosynthesis)

During a redox reaction, some species undergo oxidation, or the loss of electrons, while others undergo reduction, or the gain of electrons. For example, consider the reaction between iron and oxygen to form rust:

4 Fe(s) + 3 O2(g) → 2 Fe2O3(s)   (rusting of iron)

In this reaction, neutral Fe loses electrons to form Fe^3+ ions and neutral O2 gains electrons to form O^2− ions.
In other words, iron is oxidized and oxygen is reduced. Importantly, oxidation and reduction don’t occur only between metals and nonmetals. Electrons can also move between nonmetals, as indicated by the combustion and photosynthesis examples above.

Oxidation numbers

How can we determine if a particular reaction is a redox reaction? In some cases, it is possible to tell by visual inspection. For example, we could have determined that the rusting of iron is a redox process by simply noting that it involves the formation of ions (Fe^3+ and O^2−) from free elements (Fe and O2). In other cases, however, it is not as obvious, particularly when the reaction in question involves only nonmetal substances. To help identify these less obvious redox reactions, chemists have developed the concept of oxidation numbers, which provides a way to track electrons before and after a reaction.

An atom’s oxidation number (or oxidation state) is the imaginary charge that the atom would have if all of the bonds to the atom were completely ionic. Oxidation numbers can be assigned to the atoms in a reaction using the following guidelines:

1. An atom of a free element has an oxidation number of 0. For example, each Cl atom in Cl2 has an oxidation number of 0. The same is true for each H atom in H2, each S atom in S8, and so on.
2. A monatomic ion has an oxidation number equal to its charge. For example, the oxidation number of Cu^2+ is +2, and the oxidation number of Br^− is −1.
3. When combined with other elements, alkali metals (Group 1A) always have an oxidation number of +1, while alkaline earth metals (Group 2A) always have an oxidation number of +2.
4. Fluorine has an oxidation number of −1 in all compounds.
5. Hydrogen has an oxidation number of +1 in most compounds. The major exception is when hydrogen is combined with metals, as in NaH or LiAlH4. In these cases, the oxidation number of hydrogen is −1.
6. Oxygen has an oxidation number of −2 in most compounds. The major exception is in peroxides (compounds containing O2^2−), where oxygen has an oxidation number of −1. Examples of common peroxides include H2O2 and Na2O2.
7. The other halogens (Cl, Br, and I) have an oxidation number of −1 in compounds, unless combined with oxygen or fluorine. For example, the oxidation number of Cl in the ion ClO4^− is +7 (since O has an oxidation number of −2 and the overall charge on the ion is −1).
8. The sum of the oxidation numbers for all atoms in a neutral compound is equal to zero, while the sum for all atoms in a polyatomic ion is equal to the charge on the ion. Consider the polyatomic ion NO3^−. Each O atom has an oxidation number of −2 (for a total of −2 × 3 = −6). Since the overall charge on the ion is −1, the oxidation number of the N atom must be +5.

One thing to note is that oxidation numbers are written with the sign (+ or −) before the number. This is in contrast to the charges on ions, which are written with the sign after the number. Now, let’s see some examples of assigning oxidation numbers!

Example 1: Assigning oxidation numbers

What is the oxidation number of each atom in (a) SF6, (b) H3PO4, and (c) IO3^−?

To assign the oxidation numbers to the atoms in each compound, let’s follow the guidelines outlined above.

(a) We know that the oxidation number of F is −1 (guideline 4). Because the sum of the oxidation numbers of the six F atoms is −6 and SF6 is a neutral compound, the oxidation number of S must be +6 (SF6: S is +6, each F is −1).

(b) The oxidation number of H is +1 (guideline 5) and the oxidation number of O is −2 (guideline 6). The sum of these oxidation numbers is 3(+1) + 4(−2) = −5. Since H3PO4 has no net charge, the oxidation number of P must be +5 (H3PO4: H is +1, P is +5, O is −2).

(c) The oxidation number of O is −2 (guideline 6), so the sum of the oxidation numbers of the three O atoms is −6.
Since the net charge on IO3^− is −1, the oxidation number of I must be +5 (IO3^−: I is +5, each O is −2).

Concept check: What is the oxidation number of the carbon atom in CO3^2−?

Answer: As usual, the oxidation number of O is −2 (guideline 6). The sum of the oxidation numbers of the three O atoms is −6 and the overall charge on CO3^2− is −2, so the oxidation number of the C atom must be +4 (CO3^2−: C is +4, each O is −2).

Recognizing redox reactions

How do we actually use oxidation numbers to identify redox reactions? To find out, let’s revisit the reaction between iron and oxygen, this time assigning oxidation numbers to each atom in the equation:

4 Fe(s) + 3 O2(g) → 2 Fe2O3(s)   (Fe: 0 and O: 0 on the left; Fe: +3 and O: −2 on the right)

Notice how iron (which we already know is oxidized in this reaction) changes from an oxidation number of 0 to an oxidation number of +3. Similarly, oxygen (which we know is reduced) changes from an oxidation number of 0 to an oxidation number of −2. From this, we can conclude that oxidation involves an increase in oxidation number, while reduction involves a decrease in oxidation number. So, we can identify redox reactions by looking for changes in oxidation numbers over the course of a reaction. Let’s explore this idea more in the next example.

Example 2: Using oxidation numbers to identify oxidation and reduction

Consider the following reaction:

4 NH3(g) + 5 O2(g) → 4 NO(g) + 6 H2O(g)

Is this reaction a redox reaction? If so, which element in the reaction is oxidized and which element is reduced?

Considering this is an article about redox reactions, the reaction probably is a redox reaction! However, let’s prove it by assigning oxidation numbers to the atoms of each element in the equation:

4 NH3(g) + 5 O2(g) → 4 NO(g) + 6 H2O(g)   (NH3: N is −3, H is +1; O2: O is 0; NO: N is +2, O is −2; H2O: H is +1, O is −2)

The oxidation numbers of N and O are different on either side of the equation, so this is definitely a redox reaction!
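The bookkeeping in these examples is mechanical enough to script. Below is a minimal Python sketch (the function names are my own, not from the article): one helper applies guideline 8 to solve for a single unknown atom's oxidation number, and another flags which elements are oxidized or reduced by comparing per-element oxidation numbers before and after a reaction:

```python
def unknown_oxidation_number(known, charge=0):
    """Guideline 8: oxidation numbers must sum to the overall charge.
    `known` maps every other element to (oxidation_number, atom_count);
    returns the oxidation number the one remaining atom must have."""
    return charge - sum(ox * n for ox, n in known.values())

def classify_redox(before, after):
    """Compare per-element oxidation numbers across a reaction.
    Returns (oxidized, reduced) element sets; if both are empty,
    the reaction is not a redox reaction."""
    oxidized = {e for e in before if after[e] > before[e]}
    reduced = {e for e in before if after[e] < before[e]}
    return oxidized, reduced

# Example 1(a): SF6 is neutral and its six F atoms are each -1, so S is +6.
print(unknown_oxidation_number({"F": (-1, 6)}))   # 6

# Example 2: 4 NH3 + 5 O2 -> 4 NO + 6 H2O; N goes -3 -> +2, O goes 0 -> -2.
print(classify_redox({"N": -3, "H": +1, "O": 0},
                     {"N": +2, "H": +1, "O": -2}))
```

The same helper reproduces parts (b) and (c) of Example 1: with H at +1 (×3) and O at −2 (×4) in neutral H3PO4 it returns +5 for P, and with O at −2 (×3) and an overall charge of −1 it returns +5 for I in IO3^−.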
The oxidation number of N increases from −3 to +2, which means that N loses electrons and is oxidized during the reaction. The oxidation number of O decreases from 0 to −2, which means that O gains electrons and is reduced during the reaction.

Summary

The most common oxidation numbers of vanadium are +5 (yellow), +4 (blue), +3 (green), and +2 (purple). Image credit: "Vanadium oxidation states" by W. Oelen on Wikimedia Commons, CC BY-SA 3.0.

Oxidation–reduction reactions, commonly known as redox reactions, are reactions that involve the transfer of electrons from one species to another. The species that loses electrons is said to be oxidized, while the species that gains electrons is said to be reduced. We can identify redox reactions using oxidation numbers, which are assigned to atoms in molecules by assuming that all bonds to the atoms are ionic. An increase in oxidation number during a reaction corresponds to oxidation, while a decrease corresponds to reduction.

Questions

maks.berlec (10 years ago): Shouldn’t the equation H2 + O2 → 2 H2O be balanced to 2 H2 + O2 → 2 H2O?
rinipakhe11 (7 years ago): Yes it is; you have to first look at the equation and balance it.

yunhu987 (2 years ago): Do we have to memorize these rules?

Richard (2 years ago): I mean, if you want to solve redox problems, yeah.

sg60847 (9 years ago): What is the use of knowing about oxidation numbers?

Jonathan Ziesmer (9 years ago): Knowing oxidation numbers allows you to predict what compounds or reactions will form when different elements mix together. This is really important, as you will need to be able to write compounds and reactions to do everything else you will learn in chemistry. I hope this helps!
ikuko mukai-cheh (10 years ago): In the example of determining the oxidation state in H2 and H2O, it reads: "Rule 4 tells us that all the oxidation numbers in a compound have to add up to the charge on the compound, and rule 3 says that oxygen usually has an oxidation number of +2." The "+2" should be "−2".

Hermione Granger (2 years ago): What's the difference between a redox reaction and an ionic reaction? Both of them involve transfers of electrons, and then electrostatic attraction, right?

Richard (2 years ago): A redox reaction is a more general, umbrella term for reactions. All that needs to occur is the transfer of electrons. One chemical loses electrons and is oxidized, and the other gains those electrons and is reduced. Redox reactions are broad and can include the production of ions, but other redox reactions do not result in ions. Ionic reactions are simply the joining of ions through electrostatic interactions.
Producing the ions would be a redox reaction, but the subsequent bonding of the ions would not. Once the ions are formed, or if they’re already present, they do not transfer electrons when they form ionic bonds. Hope that helps.

Mohammad Masood Alam (2 years ago): 4 NH3(g) + 5 O2(g) → 4 NO(g) + 6 H2O(g). Why does this equation have an output of NO (nitrogen monoxide) and not N2O5 (dinitrogen pentoxide)?

Richard (2 years ago): For this question it is important to keep in mind that many reactions have different pathways they can take, yielding different types of products. For example, you’re describing the combustion of ammonia, or burning ammonia in oxygen gas. We can have slightly different reactions because ammonia reacting with oxygen gas can proceed differently. Usually combusting ammonia is difficult, but if we do so with optimal fuel-to-air mixtures we can get this reaction: 4 NH3(g) + 3 O2(g) → 2 N2(g) + 6 H2O(g), where the products are nitrogen gas and water vapor. The reaction you’ve stated, 4 NH3(g) + 5 O2(g) → 4 NO(g) + 6 H2O(g), is a different reaction where a catalyst is used to speed up a normally slow side reaction. The first reaction is known as the thermodynamic product, while the second reaction is known as the kinetic product. A thermodynamic product is more energetically stable, but a kinetic product forms faster because of a lower activation energy.
You’ll get a mixture of these products because these two reactions (and probably several others) are occurring at the same time, but we can influence which reaction we want to dominate by changing the reaction conditions. We favor the first one by having the optimal air mixture, and we favor the second one by using a catalyst. As far as forming dinitrogen pentoxide from the combustion of ammonia goes, I’ve not seen any literature describe that as a preparation method. Likewise, I’ve never seen dinitrogen pentoxide as one of the possible products of the combustion of ammonia. Very likely, it is possible for dinitrogen pentoxide to be a product of the combustion of ammonia, but it won’t be a thermodynamic product, rather a kinetic product. Nitrogen gas is usually the ideal thermodynamic product since it’s much more stable than nitrogen oxides. So if we wanted dinitrogen pentoxide, we would need to find a specific catalyst which prioritizes dinitrogen pentoxide over all other products. Hope that helps.

H (6 years ago): How do you determine the oxidation number if the compound contains neither oxygen nor fluorine?

yunshangchen09 (18 days ago): There are some less commonly known rules: halogens are generally −1, so not just fluorine.
Elements in the same group as oxygen are usually −2, hydrogen is usually +1, alkali metals such as sodium and potassium are usually +1, alkaline earth metals such as calcium and magnesium are usually +2, and there are some weird ones, such as silver generally being +1, zinc generally being +2, and aluminum usually being +3. Hope that helps!

Star Birds (2 years ago): In the example which explains the concept of oxidation and reduction (rusting of iron: 4 Fe(s) + 3 O2(g) → 2 Fe2O3(s)) at the beginning of the article, why does iron lose electrons to become Fe3+ and why does oxygen gain electrons? I understand the definition of oxidation and reduction, but I don't understand the explanation of why they gain or lose electrons. Can we get some information from the periodic table for this? Thanks in advance!

Richard (2 years ago): Oxygen is more electronegative than iron and naturally attracts electrons to itself to a greater degree than does iron.
Angelica Chen (4 years ago): I encountered the following question in my chemistry textbook, and because it is not assigned as homework, I wanted to ask it here. In a nickel–cadmium battery, the relevant redox reaction is: 2 NiO(OH) + Cd + 2 H2O → 2 Ni(OH)2 + Cd(OH)2. Does this agree with the EMF series? What are the oxidation states of nickel before and after the reaction? My answer is that although the correct metal is releasing its electrons (Cd → Ni), it is not valid because nickel's oxidation state of +3 in the reactant is not a common ionic charge. The product Ni(II) is, but not Ni(III). Is my answer correct? Thank you for the help!

Richard (4 years ago): So it's a battery, meaning it should produce a spontaneous redox reaction with a positive standard cell potential. Nickel goes from an oxidation state of +3 to +2 (so it's being reduced) and cadmium goes from 0 to +2 (so it's being oxidized). Looking at the standard electrode potentials (or standard reduction potentials, or the EMF series, as I suppose your book refers to it) of the half reactions, we can find that the cadmium half reaction has a value of −0.4 V while the nickel one has a value of +0.8 V. Being more positive means that nickel is a stronger oxidizing agent (more likely to cause oxidation) and itself more likely to be reduced, as compared to cadmium, which is a stronger reducing agent (more likely to cause reduction). So seeing that nickel is being reduced and cadmium is being oxidized would agree with their standard electrode values.
Additionally, having the metals in that order in the redox reaction generates a positive standard cell potential, which is necessary for a battery. Hope that helps.

khasanov.sm (2 years ago): "...Importantly, oxidation and reduction don’t occur only between metals and nonmetals. Electrons can also move between nonmetals, as indicated by the combustion and photosynthesis examples above..." Is there a typo in this part of the notes? Not sure I understand the meaning of this important statement.

Richard (2 years ago): Redox (reduction–oxidation) reactions simply involve the transfer of electrons from one reactant to another in the reaction. The following reaction is a redox reaction: 2 Cu(s) + O2(g) → 2 CuO(s), where the copper loses electrons and is oxidized, while the oxygen gains electrons and is reduced. Copper is a metal and oxygen is a nonmetal. The reactants do not have to be a metal and a nonmetal like in the last example, though. They can also be two nonmetals: CH4(g) + 2 O2(g) → CO2(g) + 2 H2O(g), where the carbon loses electrons and is oxidized, while the oxygen gains electrons and is reduced. All of the elements (carbon, hydrogen, and oxygen) are nonmetals. Hope that helps.
3980
https://uomustansiriyah.edu.iq/media/lectures/5/5_2022_01_17!04_53_47_PM.pdf
Al-Mustansiriya University, College of Engineering, Computer and Software Eng. Dep., 3rd Class. Control Lab.

Exp. No. (10): Root Locus

Object:
To study how the performance of a feedback system can be described in terms of the location of the roots of the characteristic equation in the s-plane.

Theory:
The basic characteristic of the transient response of a closed-loop system is closely related to the location of the closed-loop poles. If the system has a variable loop gain, then the location of the closed-loop poles depends on the value of the loop gain chosen.

The root locus is the locus of the roots of the characteristic equation (c/c) of the closed-loop system as a gain K is varied from zero to infinity. Such a plot clearly shows the contributions of each open-loop pole or zero to the locations of the closed-loop poles. By using the root-locus method, it is possible to determine the value of the loop gain K that will make the damping ratio of the dominant closed-loop poles as prescribed.
If the location of an open-loop pole or zero is a system variable, then the root-locus method suggests the way to choose the location of that open-loop pole or zero.

In plotting root loci with MATLAB, first obtain the characteristic equation (c/c):

1 + G(s)H(s) = 0

Then rearrange this equation into the form:

1 + K(s+Z1)(s+Z2)...(s+Zm) / ((s+P1)(s+P2)...(s+Pn)) = 0,   where 0 ≤ K ≤ ∞.

MATLAB commands commonly used for plotting root loci are:

rlocus(num,den)
rlocus(A,B,C,D)

If the user supplies a gain vector K, one of the following commands can be used to plot the root loci:

rlocus(num,den,K)
rlocus(A,B,C,D,K)

The following commands, with left-hand arguments, return the root-location matrix r and the gain vector K:

[r,k] = rlocus(num,den)
[r,k] = rlocus(num,den,k)
[r,k] = rlocus(A,B,C,D)
[r,k] = rlocus(A,B,C,D,k)
[r,k] = rlocus(sys)
The command plot(r,'-') will then plot the root loci.

Since the gain vector is automatically determined in the root-locus plot, the root-locus plots of:

G(s)H(s) = K(s+5) / (s(s+3)(s+9))
G(s)H(s) = 10K(s+5) / (s(s+3)(s+9))
G(s)H(s) = 200K(s+5) / (s(s+3)(s+9))

are all the same, because the num and den sets of the three systems are the same.

The commands:

v = [-a a -a a]; axis(v); axis('square');

are used to set the root-locus plot region on the screen to be square.

The command:

sgrid

overlays lines of constant damping ratio ζ (from 0 to 1 in increments of 0.1) and circles of constant ωn on the root-locus plot. Fig. 10.1 shows the lines of constant damping ratio ζ.

Fig. 10.1: Lines of constant damping ratio (ζ).

The damping ratio ζ of a pair of complex-conjugate poles can be expressed in terms of the angle φ, as illustrated in the following equation:

ζ = cos φ

Note: if the real part of a pair of complex poles is positive, which means that the system is unstable, the corresponding ζ is negative.
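Measuring φ from the negative real axis, ζ = cos φ works out to ζ = −Re(p)/|p| for a complex pole p. A small Python check of this relation (an illustration, not part of the lab sheet):

```python
def damping_ratio(pole):
    """zeta = cos(phi) = -Re(p)/|p| for a complex closed-loop pole p.
    A pole in the right half-plane (unstable system) gives a negative zeta."""
    return -pole.real / abs(pole)

# Pole at -1 + 1j: phi = 45 degrees, so zeta = cos(45 deg) ~ 0.707.
print(round(damping_ratio(complex(-1, 1)), 4))   # 0.7071
# Right-half-plane pole: negative damping ratio, as the note above says.
print(damping_ratio(complex(1, 1)) < 0)          # True
```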
If only particular constant-ζ lines (such as the ζ = 0.6 and ζ = 0.7 lines) and particular constant-ωn circles (such as the ωn = 0.5 circle, the ωn = 2 circle, and the ωn = 3 circle) are desired, use the following command:

sgrid([0.6 0.7],[0.5 2 3])

If we want to omit either the entire constant-ζ lines or the entire constant-ωn circles, we may use an empty bracket [] in the arguments of the sgrid command.

Every point in the s-plane has a corresponding K value. If we use the rlocfind command, MATLAB will give the K value of the specified point as well as the nearest closed-loop poles corresponding to this K value. The syntax of the rlocfind command, which must follow an rlocus command, is:

[k,r] = rlocfind(num,den)

Procedure:

1- Plot the root loci for the control system shown in Fig. 10.2, a unity-negative-feedback system with forward-path transfer function K(s+3) / [s(s+1)(s^2+4s+16)]. Choose the region of the root-locus plot to be −6 ≤ x ≤ 6 and −6 ≤ y ≤ 6.

Fig. 10.2: Block diagram of control system (input R(s), output C(s)).
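What rlocfind reports can be reproduced by hand: rearranging 1 + K·N(s)/D(s) = 0 gives K = −D(s0)/N(s0) at any chosen point s0 on the locus. A small Python sketch of this relation (the helper names and the example plant are mine, not from the lab text):

```python
def polyval(coeffs, s):
    """Horner evaluation of a polynomial given by descending coefficients."""
    acc = 0.0
    for c in coeffs:
        acc = acc * s + c
    return acc

def gain_at_point(num, den, s0):
    """Rearranging 1 + K*N(s)/D(s) = 0 gives the gain that places a
    closed-loop pole at a chosen locus point s0: K = -D(s0)/N(s0)."""
    return -polyval(den, s0) / polyval(num, s0)

# G(s)H(s) = K / [s(s+2)]: the locus passes through the breakaway point s = -1.
K = gain_at_point([1.0], [1.0, 2.0, 0.0], -1.0)
print(K)  # -D(-1)/N(-1) = -(1 - 2)/1 = 1.0
```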
2- Plot the root loci for a control system whose open-loop transfer function G(s)H(s) is:

G(s)H(s) = K / [s(s+0.5)(s^2+0.6s+10)]

where −6 ≤ x ≤ 6 and −6 ≤ y ≤ 6.

3- Repeat step (2) for 0 ≤ K ≤ 20, 20 ≤ K ≤ 30, and 30 ≤ K ≤ 100, with −4 ≤ x ≤ 4 and −4 < y ≤ 4.

4- Consider the control system equations:

ẋ = Ax + Bu
y = Cx + Du
u = r − y

where A = [⋯], B = [⋯], C = [⋯], D = [⋯].

5- Plot the root loci for G(s)H(s) = ⋯, showing the following specifications:
a. lines of constant ζ = ⋯
b. circles of constant ωn = ⋯

6- Consider the unity-feedback control system with the following feed-forward transfer function:

G(s) = ⋯

Plot the root loci with MATLAB. Determine the closed-loop poles that have a damping ratio of 0.5, and find the gain value K at this point.

Discussion:

1- Of which variable is the root-locus technique a function, and what is its range of variation?

2- Find the O.L.T.F. for the following characteristic (c/c) equation: ⋯

3- Prove that the step response of any system is not affected by its zeros.

4- Consider the control system shown in Fig. 10.3. Find the values of K that make the system stable. (Use the root-locus technique.)
Fig. 10.3: Block diagram of control system: a unity-negative-feedback loop with input R(s), output C(s), and forward-path transfer function K(s^2+2s+4) / [s(s+4)(s+6)(s^2+1.4s+1)].
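Discussion question 4 asks for the range of K that keeps the closed-loop system stable. One numerical way to check candidate gains is to test whether all closed-loop poles lie strictly in the left half-plane. The Python/NumPy sketch below demonstrates the idea on the classic plant G(s) = K/[s(s+1)(s+2)] (a simpler system than Fig. 10.3, chosen because Routh's criterion gives the known answer 0 < K < 6), not on the Fig. 10.3 plant itself:

```python
import numpy as np

def is_stable(num, den, K):
    """Closed-loop stable iff every root of D(s) + K*N(s) = 0 lies
    strictly in the left half of the s-plane."""
    num = np.array(num, dtype=float)
    den = np.array(den, dtype=float)
    num = np.concatenate([np.zeros(len(den) - len(num)), num])
    return bool(np.all(np.roots(den + K * num).real < 0))

# G(s) = K / [s(s+1)(s+2)]; Routh's criterion gives stability for 0 < K < 6.
num, den = [1.0], [1.0, 3.0, 2.0, 0.0]
print(is_stable(num, den, 1.0))   # gain inside the stable range
print(is_stable(num, den, 10.0))  # gain outside the stable range
```

Sweeping K over a grid with such a test brackets the stability boundary that the root-locus plot shows graphically.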
3981
https://arxiv.org/abs/0902.3506
[0902.3506] Sums and Products of Distinct Sets and Distinct Elements in ℂ

arXiv:0902.3506 (math)

[Submitted on 20 Feb 2009 (v1), last revised 13 Sep 2010 (this version, v2)]

Title: Sums and Products of Distinct Sets and Distinct Elements in ℂ

Authors: Karsten Chipeniuk

Abstract: Let A and B be finite subsets of ℂ such that |B| = C|A|. We show the following variant of the sum-product phenomenon: If |AB| < α|A| and α ≪ log|A|, then |kA + lB| ≫ |A|^k |B|^l. This is an application of a result of Evertse, Schlickewei, and Schmidt on linear equations with variables taking values in multiplicative groups of finite rank, in combination with an earlier theorem of Ruzsa about sumsets in ℝ^d. As an application of the case A = B we give a lower bound on |A^+| + |A^×|, where A^+ is the set of sums of distinct elements of A and A^× is the set of products of distinct elements of A.

Comments: 27 pages, revised with corrections.
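For readers unfamiliar with the notation, AB and kA + lB denote element-wise product sets and iterated sumsets. A toy Python illustration of the sum-product phenomenon the abstract refers to (integer sets chosen for simplicity; this example is not from the paper): an arithmetic progression has a small sumset but a comparatively large product set, while a geometric progression behaves the other way around.

```python
from itertools import product

def sumset(A, B):
    """A + B = {a + b : a in A, b in B}."""
    return {a + b for a, b in product(A, B)}

def productset(A, B):
    """A * B = {a * b : a in A, b in B}."""
    return {a * b for a, b in product(A, B)}

A = set(range(1, 11))            # arithmetic progression {1, ..., 10}
G = {2 ** k for k in range(10)}  # geometric progression {1, 2, 4, ..., 512}

print(len(sumset(A, A)), len(productset(A, A)))  # sumset small (19), product set larger
print(len(sumset(G, G)), len(productset(G, G)))  # product set small (19), sumset larger
```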
Accepted by Integers: Electronic Journal of Combinatorial Number Theory.

Subjects: Combinatorics (math.CO); Number Theory (math.NT)
MSC classes: 05A99, 11P99
Cite as: arXiv:0902.3506 [math.CO] (or arXiv:0902.3506v2 [math.CO] for this version)

Submission history
From: Karsten Chipeniuk
[v1] Fri, 20 Feb 2009 03:18:49 UTC (13 KB)
[v2] Mon, 13 Sep 2010 17:54:19 UTC (13 KB)
3982
https://mathworld.wolfram.com/ParabolicSegment.html
Parabolic Segment

The arc length of the parabolic segment

y = h(1 − x²/a²)   (1)

illustrated above is given by

s = ∫₋ₐᵃ √(1 + 4h²x²/a⁴) dx   (2)
  = √(a² + 4h²) + [a²/(2h)] sinh⁻¹(2h/a)   (3)
  = √(a² + 4h²) + [a²/(2h)] ln[(2h + √(a² + 4h²))/a]   (4)

and the area is given by

A = ∫₋ₐᵃ h(1 − x²/a²) dx   (5)
  = (4/3)ah   (6)

(Kern and Bland 1948, p. 4). The weighted mean of y is

⟨y⟩ = ∫₋ₐᵃ ∫₀^(h(1−x²/a²)) y dy dx   (7)
    = (8/15)ah²   (8)

so the geometric centroid is then given by

ȳ = ⟨y⟩/A   (9)
  = (2/5)h   (10)

The area of the cut-off parabolic segment contained between the curves

y = x²   (11)
y = ax + b   (12)

can be found by eliminating y,

x² − ax − b = 0   (13)

so the points of intersection are

x± = [a ± √(a² + 4b)]/2   (14)

with corresponding y-coordinates y± = x±². The area is therefore given by

A = ∫_(x₋)^(x₊) (ax + b − x²) dx   (15)
  = (1/6)(x₊ − x₋)³   (16)
  = (1/6)(a² + 4b)^(3/2)   (17)

The maximum-area triangle inscribed in this segment will have two of its polygon vertices at the intersections (x₋, y₋) and (x₊, y₊), and the third at a point (x₀, x₀²) to be determined. From the general equation for the area of a triangle, the area of the inscribed triangle is given by the determinant equation

A_Δ = (1/2) |det [x₊ x₊² 1; x₀ x₀² 1; x₋ x₋² 1]|   (18)

Plugging in and using y = x² gives

A_Δ = (1/2)(x₊ − x₀)(x₀ − x₋)(x₊ − x₋)   (19)

To find the maximum area, differentiate with respect to x₀ and set equal to 0 to obtain

x₊ + x₋ − 2x₀ = 0   (20)

so

x₀ = (x₊ + x₋)/2 = a/2   (21)

Plugging (21) into (19) then gives

A_Δ = (1/8)(a² + 4b)^(3/2)   (22)

This leads to the result known to Archimedes in the third century BC, namely

A = (4/3) A_Δ   (23)

See also: Circular Segment, Geometric Centroid, Parabola

References
Beyer, W. H. (Ed.). CRC Standard Mathematical Tables, 28th ed. Boca Raton, FL: CRC Press, p. 125, 1987.
Kern, W. F. and Bland, J. R. Solid Mensuration with Proofs, 2nd ed. New York: Wiley, p. 4, 1948.

Cite this as: Weisstein, Eric W. "Parabolic Segment." From MathWorld--A Wolfram Resource.
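The closed forms for the area, arc length, and the Archimedes ratio (23) can be sanity-checked numerically with nothing but the standard library; the sketch below integrates by the midpoint rule for the arbitrary choice a = 2, h = 3 (helper name and parameter values are mine):

```python
from math import asinh, sqrt

def midpoint_integral(f, lo, hi, n=100000):
    """Midpoint-rule numerical integration, accurate enough for a check."""
    step = (hi - lo) / n
    return step * sum(f(lo + (i + 0.5) * step) for i in range(n))

a, h = 2.0, 3.0   # arbitrary segment dimensions for the check

# Area of y = h*(1 - x^2/a^2) over [-a, a]: closed form (4/3)*a*h
area = midpoint_integral(lambda x: h * (1 - x**2 / a**2), -a, a)

# Arc length: closed form sqrt(a^2 + 4h^2) + a^2/(2h)*asinh(2h/a)
arc = midpoint_integral(lambda x: sqrt(1 + (2 * h * x / a**2) ** 2), -a, a)
arc_closed = sqrt(a * a + 4 * h * h) + a * a / (2 * h) * asinh(2 * h / a)

# Archimedes: cut-off segment area (a0^2+4b0)^(3/2)/6 is 4/3 of the
# maximal inscribed triangle area (a0^2+4b0)^(3/2)/8.
a0, b0 = 1.0, 2.0
seg = (a0 * a0 + 4 * b0) ** 1.5 / 6
tri = (a0 * a0 + 4 * b0) ** 1.5 / 8

print(abs(area - 4 * a * h / 3) < 1e-5,
      abs(arc - arc_closed) < 1e-5,
      abs(seg / tri - 4 / 3) < 1e-12)  # expected: True True True
```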
Subject classifications: Geometry: Curves: Plane Curves: Conic Sections
3983
https://anatomypubs.onlinelibrary.wiley.com/doi/10.1002/dvdy.20307
Volume 232, Issue 3, pp. 526-528
Perspective | Free Access

Small flies—Big discoveries: Nearly a century of Drosophila genetics and development†

Anthea Letsou (corresponding author, aletsou@genetics.utah.edu), Department of Biomedical Genetics, University of Rochester Medical Center, Rochester, New York; Eccles Institute of Human Genetics, 15 North 2030 East, Room 2100, University of Utah, Salt Lake City, UT 84112

Dirk Bohmann, Department of Biomedical Genetics, University of Rochester Medical Center, Rochester, New York

First published: 09 February 2005

† The authors are Guest Editors of this Special Issue on Drosophila as a Model System.

It was almost 100 years ago, in 1909, that a classically trained embryologist, Thomas Hunt Morgan, chose the fruit fly Drosophila melanogaster as a model organism for an experimental study of evolution.
Ever since Morgan's auspicious choice of the fruit fly as an experimental organism, scientists have been eyewitnesses to the "awesome power" of Drosophila genetics—from the transmission geneticists best exemplified by Thomas Hunt Morgan and his students to the developmental geneticists who have been led by Ed Lewis, Christiane Nüsslein-Volhard, and Eric Wieschaus. There is no doubt that, even now, in the postgenomic age of the 21st century, we find ourselves ever indebted to our genetics forebears. Building on the productivity of previous generations, present-day Drosophila scientists continue to establish paradigms and achieve technological breakthroughs that help advance not only fly research but many different fields of the life sciences as well. Here, we give a short overview of the history of Drosophila genetics. We hope that an understanding of how we got where we are today and an appreciation of past discoveries will help to place the current excitement about genomes, molecules, and mechanisms within the context of a long-established scientific culture and history of Drosophila experimentation.

TRANSMISSION GENETICS IN DROSOPHILA

In the earliest years of the twentieth century, a founding generation of geneticists focused on problems of transmission. They exploited the fruit fly to answer classic questions such as: How are genes inherited? What is a chromosome, and how does it recombine? Arguably the greatest breakthroughs in this arena were those of the first Drosophila geneticists: Thomas Hunt Morgan and his students, Calvin Bridges, Alfred Sturtevant, and Hermann Muller. It is perhaps somewhat surprising to realize that Morgan chose his organism as an anti-Mendelist, hoping to disprove the canons of genetics that we hold so dear today. After two "wasted" years of attempting to induce mutations in Drosophila by altering selective pressures, Morgan spotted a mutant white-eyed Drosophila male in his culture of wild-type red-eyed flies.
The rest is history, as Morgan's analysis of white inheritance quickly led him to abandon his evolutionary studies and to embrace the "rediscovered" genetic theories of Mendel. Morgan demonstrated that genes are on chromosomes. The studies for which he earned the Nobel Prize in Medicine (1933)—"discoveries concerning the role played by the chromosome in heredity"—are elegant in their simplicity. In Drosophila breeding experiments, Morgan showed us that transmission of white eye color is linked to inheritance of the X chromosome, and thus eye color and gender are linked traits. This seminal finding led Morgan to conclude that some genetic traits are not inherited independently (as Mendel had supposed) but rather must be linked (Morgan, 1910). Morgan's students, Bridges, Muller, and Sturtevant, went on to no less illustrious careers of their own. Bridges used the fruit fly to prove that chromosomes are structures of inheritance; thus, we began to understand the cellular basis of heredity. As an aside, it is notable that Bridges's landmark study of nondisjunction was quite possibly a victim of capricious review. Indeed, Morgan was sufficiently irritated by the rejection of his student's manuscript from the Journal of Heredity—a premier journal of the day—that he and his colleagues founded a new journal, Genetics, that still exists. Bridges's manuscript launched the inaugural issue (Bridges, 1916). Bridges's contemporary, Hermann Muller, used the fruit fly to identify and physically map chromosomal aberrations and perhaps even more importantly to decipher the chemical nature of mutation (Muller, 1927). As was his mentor before him, Muller too was the recipient of a Nobel Prize (1946)—"for the discovery of the production of mutations by means of X-ray irradiation".
There can be no doubt as to Muller's intelligence and astuteness; well before the onset of the atomic age in which we now live, Muller anticipated and articulated the genetic risks we would face as a consequence of the irresponsible and weaponized use of ionizing radiation (Muller, 1946). Importantly, the implications of his irradiation studies were not overlooked when we did finally enter the atomic age. In response to Muller's experimental discoveries and musings, extensive genetic studies in Drosophila (and the mouse) were undertaken to assess the relative mutation rates in these model organisms as an indicator of the genetic risk posed to humans from the utilization of atomic energy. The third of the Morgan student triumvirate was Alfred Sturtevant who used the fruit fly to demonstrate that chromosomes constitute a linear array of genes (Sturtevant, 1913). In addition to bolstering the chromosome theory of inheritance, Sturtevant's work is notable for its application of mathematics to biology. In fact, his study highlighted an emerging trend in biological method—this was a replacement (or at least supplementation) of descriptive science with an approach that was both more analytical and mathematical. Taken altogether, Morgan and his students' use of breeding experimentation in the fruit fly Drosophila led them to an enviable record of achievement. Their contributions, which are certainly far too extensive to enumerate here, have been fully discussed and especially nicely annotated with respect to the historical record by Sturtevant (1965). Before leaving the contributions of the classical geneticists, two additional issues (one scientific, the other societal) warrant mention here. First the scientific: T.S. Painter, a student of Theodore Boveri who in 1903 along with Walter Sutton proposed that chromosomes contain genes, discovered the Drosophila giant salivary gland chromosomes. 
Importantly, Painter understood that this biological material offered an excellent opportunity to visualize chromosome structure (Painter, 1934). Second the societal: Morgan in addition to establishing the fruit fly as a model genetic organism endowed us with a new scientific culture: “The open, critical, yet fully democratic and egalitarian atmosphere that was evident in the Fly Room soon came to characterize the distinctively American atmosphere of university research—an especially significant development as American graduate education increasingly became the model for graduate education throughout the world” (Kandel, 1999). Given the rich intellectual atmosphere and the relatively free exchange of ideas in Morgan's fly room, it is perhaps not too surprising that, in addition to Muller, another of Morgan's trainees as well as two of his “academic grandchildren,” also went on to win Nobel prizes: George Beadle (1958), Joshua Lederberg (1958), and Ed Lewis (1995). FROM FLIES TO CELLS In subsequent years (at the end of the 1930s and for most of the 40s), the rules of mitosis and meiosis as well as the nature of the gene were elucidated, but not in the fly. For this relatively short period in our modern scientific history, the modest fruit fly fell from experimental favor. It did, however, re-emerge as an experimental organism with the birth of molecular genetic analysis: Boris Ephrussi and George Beadle used the fruit fly to give biochemistry and molecular genetics an initial experimental push. Ephrussi and Beadle transplanted larval eye discs from genetically marked larvae into the abdomens of genetically dissimilar larvae. Here, a third eye could develop ectopically and experimenters distinguished between tissue autonomous and nonautonomous requirements for gene products (Beadle and Ephrussi, 1936). Drosophila mosaic studies, like these, set the stage for mosaic studies in a wide variety of experimental organisms. 
The ability to deliberately replace wild-type genes with gain- and loss-of-function alleles in almost any setting and time frame using the UAS-Gal4 (Brand and Perrimon, 1993) and FLP-FRT (Golic, 1994) binary gene regulatory systems (or any of their various imaginative permutations) has proven invaluable in deciphering the rules by which cells interact with one another to control cell growth and differentiation. MUTATION ANALYSIS AND DEVELOPMENTAL GENETICS IN DROSOPHILA When most of us think about genetics, we think of mutants. Although surely not synonymous terms, one most certainly often invokes the other. But how we used mutants as tools of learning differed dramatically in the early and late parts of the twentieth century. Until the 1970s, investigators had collected mutants, by and large as chromosomal markers that facilitated the study of chromosome mechanics. Although the first fly “monster”—one with two sets of wings—was discovered in 1916, it was not until the 1970s that the idea that single genes could lead to morphological “transformations” was considered experimentally. This intellectual leap was recognized by the 1995 Nobel committee in their tribute to three “modern” Drosophila geneticists: Ed Lewis, Christiane Nüsslein-Volhard, and Eric Wieschaus, “for their discoveries concerning the genetic control of early embryonic development.” Among Ed Lewis' most significant contributions was his demonstration that single genes, members of the homeotic gene family, could lead to developmental transformations (Lewis, 1978). Homeotic genes, now mostly referred to by their molecular name Hox genes, have been recognized since as principal regulators of pattern in flies, mice, and humans. The notion that Hox genes are endowed with transforming capacity revolutionized our understanding of development. 
This issue of Developmental Dynamics is dedicated to the memory of Ed Lewis, who passed away last year, and his life and scientific contributions are described in fuller detail in two commentaries (Lipshitz, 2005; Sakonju, 2005). The contributions of Christiane Nüsslein-Volhard and Eric Wieschaus to developmental biology nicely complemented those of Lewis. This team's saturation mutation screening efforts led to our understanding that genes can be grouped together based upon their shared loss-of-function phenotypes. Nüsslein-Volhard and Wieschaus suggested that (1) shared loss-of-function phenotypes define genes functioning in single biochemical pathways, and (2) related (but not identical) phenotypes define genetic hierarchies (Nüsslein-Volhard and Wieschaus, 1980). At approximately the same time that Nüsslein-Volhard and Wieschaus initiated their screens, molecular biology methods were being harnessed in labs worldwide. Coupling this technological boon with a concurrent emerging understanding of how transposons function in fruit flies (Spradling and Rubin, 1982) allowed a second generation of developmental geneticists to identify the gene products associated with the mutants. Thus, tremendous advances in our understanding of embryonic development in flies were the prizes associated with the Heidelberg screens. Happily, the conservation of regulatory mechanisms defined in the fruit fly over the organismal spectrum of evolution has made it possible to use Drosophila to facilitate our understanding of development and disease in higher eukaryotes—most importantly in humans.

FLIES 'R US

Within the context of a century of progress, we have finally entered the genomic and postgenomic ages of Drosophila-facilitated discovery. With Gerry Rubin at the helm of a collaborative undertaking by the Berkeley Drosophila Genome Project and Celera Genomics, Inc., the fruit fly genome sequence was completed in 2000 (Adams et al., 2000).
Comprising approximately 14,000 genes, the Drosophila genome has provided us with a new wealth of information as well as the final validation of Drosophila as a first-class model organism. Developmental biologists have long accepted as dogma that what we learn in the fruit fly can be extended to higher eukaryotes. But now there is the code itself—the fly blueprint, which is remarkable in its likeness to our own. Indeed, when compared with mammalian proteins and expressed sequence tags, more than half of the fly proteins have similar mammalian counterparts at a statistical cutoff of E < 10⁻¹⁰, compared with 36% and 38% for worm and yeast, respectively (Rubin et al., 2000).

CONCLUSIONS

For approximately 100 years, experimentalists have taken advantage of Drosophila's small size, the low cost and ease with which it can be cultured, its high fecundity and short life cycle, its small chromosome complement, and its ability to withstand mutation and crossbreeding experiments. These past years of successful experimentation and productivity bode well for the next century. Indeed, Drosophila is not about to be retired as a model system for cutting-edge research addressing pressing questions in biology. In addition to the features that have made Drosophila so amenable to study for the past century, new and powerful resources and experimental possibilities make the research as exciting and attractive for young and established scientists as at any time in the past century. We await the genome sequences of 10 or more Drosophila species in the near future, and these will provide a tremendous playing field for bioinformatics approaches to development and organism function. Targeted genome manipulations are possible and will become routine, and sophisticated methods of imaging will permit a direct view into the connections between cell topology and function in intact tissues.
The powerful and public resources—FlyBase, stock centers, genome projects—built up by the fly community, and by Bill Gelbart and Thom Kaufman in particular, will continue to provide valuable tools for the challenges that we will face and enjoy in the next century. Of course, of immense importance to our continued success will be the excitement and ingenuity of a new generation of open-minded and relentless scientists.

REFERENCES

Adams MD, Celniker SE, Holt RA, Evans CA, Gocayne JD, Amanatides PG, Scherer SE, Li PW, Hoskins RA, Galle RF, et al. 2000. The genome sequence of Drosophila melanogaster. Science 287: 2185–2195.

Beadle GW, Ephrussi B. 1936. The differentiation of eye pigments in Drosophila as studied by transplantation. Genetics 21: 225–247.

Brand A, Perrimon N. 1993. Targeted gene expression as a means of altering cell fates and generating dominant phenotypes. Development 118: 401–415.

Bridges CB. 1916. Non-disjunction as proof of the chromosome theory of heredity. Genetics 1: 1–52.

Golic KG. 1994. Local transposition of P elements in Drosophila melanogaster and recombination between duplicated elements using a site-specific recombinase. Genetics 137: 551–563.

Kandel ER. 1999. Thomas Hunt Morgan at Columbia University: Genes, chromosomes, and the origins of modern biology. Columbia Magazine (fall).

Lewis EB. 1978. A gene complex controlling segmentation in Drosophila. Nature 276: 565–570.

Lipshitz HD. 2005. From fruit flies to fallout: Ed Lewis and his science. Dev Dyn (in press).

Morgan TH. 1910. Sex limited inheritance in Drosophila. Science 32: 120–122.
Muller HJ. 1927. Artificial transmutation of the gene. Science 66: 84–87.

Muller HJ. 1946. Nobel lecture. Available at:

Nüsslein-Volhard C, Wieschaus E. 1980. Mutations affecting segment number and polarity in Drosophila. Nature 287: 795–801.

Painter TS. 1934. A new method for the study of chromosome aberrations and the plotting of chromosome maps in Drosophila melanogaster. Genetics 19: 175–188.

Rubin GM, Yandell MD, Wortman JR, Gabor Miklos GL, Nelson CR, Hariharan IK, Fortini ME, Li PW, Apweiler R, Fleischmann W, et al. 2000. Comparative genomics of the eukaryotes. Science 287: 2204–2215.

Sakonju S. 2005. Dev Dyn (in press).

Spradling AC, Rubin GM. 1982. Transposition of cloned P elements into Drosophila germ line chromosomes. Science 218: 341–347.

Sturtevant AH. 1913. The linear arrangement of six sex-linked factors in Drosophila, as shown by their mode of association. J Exp Zool 14: 43–59.

Sturtevant AH. 1965. A history of genetics. Cold Spring Harbor, NY: Cold Spring Harbor Laboratory Press.

Volume 232, Issue 3, Special Issue: Drosophila as a Model System, March 2005, Pages 526-528.
3984
https://math.libretexts.org/Bookshelves/Algebra/Elementary_Algebra_(LibreTexts)/09%3A_Solving_Quadratic_Equations_and_Graphing_Parabolas/9.04%3A_Guidelines_for_Solving_Quadratic_Equations_and_Applications
9.4: Guidelines for Solving Quadratic Equations and Applications

Last updated: Sep 2, 2024

Learning Objectives

- Use the discriminant to determine the number and type of solutions to any quadratic equation.
- Develop a general strategy for solving quadratic equations.
- Solve applications involving quadratic equations.

Discriminant

If given a quadratic equation in standard form, ax² + bx + c = 0, where a, b, and c are real numbers and a ≠ 0, then the solutions can be calculated using the quadratic formula:

x = (−b ± √(b² − 4ac)) / (2a)

The solutions are rational, irrational, or not real. We can determine the type and number of solutions by studying the discriminant, the expression inside the radical, b² − 4ac. If the value of this expression is negative, then the equation has no real solutions. If the discriminant is positive, then we have two real solutions. And if the discriminant is 0, then we have one real solution.

Example 9.4.1

Determine the type and number of solutions: x² − 10x + 30 = 0.

Solution: We begin by identifying a, b, and c. Here a = 1, b = −10, and c = 30. Substitute these values into the discriminant and simplify:

b² − 4ac = (−10)² − 4(1)(30) = 100 − 120 = −20

Since the discriminant is negative, we conclude that there are no real solutions.

Answer: No real solution

If we use the quadratic formula in the previous example, we find that a negative radicand stops the process of simplification and shows that there is no real solution.

Note: We will study quadratic equations with no real solutions as we progress in our study of algebra.

Example 9.4.2

Determine the type and number of solutions: 7x² − 10x + 1 = 0.

Solution: Here a = 7, b = −10, and c = 1. Substitute these values into the discriminant:

b² − 4ac = (−10)² − 4(7)(1) = 100 − 28 = 72

Since the discriminant is positive, we can conclude that there are two real solutions.
Answer: Two real solutions

If we use the quadratic formula in the previous example, we find that a positive radicand leads to two real solutions. The two real solutions are (5 − 3√2)/7 and (5 + 3√2)/7. Note that these solutions are irrational; we can approximate the values on a calculator.

Example 9.4.3

Determine the type and number of solutions: 2x² − 7x − 4 = 0.

Solution: In this example, a = 2, b = −7, and c = −4. Substitute these values into the discriminant and simplify:

b² − 4ac = (−7)² − 4(2)(−4) = 49 + 32 = 81

Since the discriminant is positive, we conclude that there are two real solutions. Furthermore, since the discriminant is a perfect square, we obtain two rational solutions.

Answer: Two real solutions

We could solve the previous quadratic equation using the quadratic formula as follows:

x = (7 ± √81)/4 = (7 ± 9)/4

x = (7 − 9)/4 = −1/2  or  x = (7 + 9)/4 = 4

Note: If the discriminant is a perfect square, then we could have factored the original equation:

2x² − 7x − 4 = (2x + 1)(x − 4) = 0

2x + 1 = 0 or x − 4 = 0, so x = −1/2 or x = 4.

Given the special condition where the discriminant is 0, we obtain only one solution, a double root.

Example 9.4.4

Determine the type and number of solutions: 9x² − 6x + 1 = 0.

Solution: Here a = 9, b = −6, and c = 1, and we have

b² − 4ac = (−6)² − 4(9)(1) = 36 − 36 = 0

Since the discriminant is 0, we conclude that there is only one real solution, a double root.

Answer: One real solution

Since 0 is a perfect square, we can solve the equation above by factoring:

9x² − 6x + 1 = (3x − 1)(3x − 1) = 0

3x − 1 = 0, so x = 1/3. Here 1/3 is a solution that occurs twice; it is a double root.

In summary, if given any quadratic equation in standard form, ax² + bx + c = 0, where a, b, and c are real numbers and a ≠ 0, then we have the following:

| Discriminant | Condition | Solutions |
| --- | --- | --- |
| Positive discriminant | b² − 4ac > 0 | Two real solutions |
| Zero discriminant | b² − 4ac = 0 | One real solution |
| Negative discriminant | b² − 4ac < 0 | No real solution |

Table 9.4.1

As we will see, knowing the number and type of solutions ahead of time helps us determine which method is best for solving a quadratic equation.
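The discriminant classification in Table 9.4.1 translates directly into code. A short Python sketch (not part of the original text; the examples reuse the equations worked above):

```python
def classify(a, b, c):
    """Classify the solutions of ax^2 + bx + c = 0 by the discriminant."""
    d = b * b - 4 * a * c
    if d > 0:
        return "two real solutions"
    if d == 0:
        return "one real solution"
    return "no real solution"

# The worked examples above:
print(classify(1, -10, 30))  # Example 9.4.1, discriminant -20
print(classify(7, -10, 1))   # Example 9.4.2, discriminant 72
print(classify(9, -6, 1))    # Example 9.4.4, discriminant 0
```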
Exercise 9.4.1

Determine the number and type of solutions: 3x² − 5x + 4 = 0.

Answer: No real solution

General Guidelines for Solving Quadratic Equations

Use the coefficients of a quadratic equation to help decide which method is most appropriate for solving it. While the quadratic formula always works, it is sometimes not the most efficient method. Given any quadratic equation in standard form, ax² + bx + c = 0, general guidelines for determining the method for solving it follow:

1. If c = 0, then factor out the GCF and solve by factoring.
2. If b = 0, then solve by extracting the roots.
3. If a, b, and c are all nonzero, then determine the value of the discriminant, b² − 4ac:
   a. If the discriminant is a perfect square, then solve by factoring.
   b. If the discriminant is not a perfect square, then solve using the quadratic formula.
   c. If the discriminant is positive, we obtain two real solutions.
   d. If the discriminant is negative, then there is no real solution.

Example 9.4.5

Solve: 15x² − 5x = 0.

Solution: In this case, c = 0 and we can solve by factoring out the GCF:

15x² − 5x = 0
5x(3x − 1) = 0
5x = 0 or 3x − 1 = 0
x = 0 or x = 1/3

Answer: The solutions are 0 and 1/3.

Example 9.4.6

Solve: 3x² − 5 = 0.

Solution: In this case, b = 0 and we can solve by extracting the roots:

3x² = 5
x² = 5/3
x = ±√(5/3) = ±√15/3

Answer: The solutions are ±√15/3.

Example 9.4.7

Solve: 9x² − 6x − 7 = 0.

Solution: Begin by identifying a, b, and c as the coefficients of each term. Here a = 9, b = −6, and c = −7. Substitute these values into the discriminant and then simplify:

b² − 4ac = (−6)² − 4(9)(−7) = 36 + 252 = 288

Since the discriminant is positive and not a perfect square, use the quadratic formula and expect two real solutions:

x = (6 ± √288)/18 = (6 ± 12√2)/18 = (1 ± 2√2)/3

Answer: The solutions are (1 ± 2√2)/3.

Example 9.4.8

Solve: 4x(x − 2) = −7.

Solution: Begin by rewriting the quadratic equation in standard form:

4x² − 8x + 7 = 0

Here a = 4, b = −8, and c = 7. Substitute these values into the discriminant and then simplify:

b² − 4ac = (−8)² − 4(4)(7) = 64 − 112 = −48

Since the discriminant is negative, the solutions are not real numbers.

Answer: No real solution

Example 9.4.9

Solve: (3x + 5)(3x + 7) = 6x + 10.

Solution: Begin by rewriting the quadratic equation in standard form.
(3x + 5)(3x + 7) = 6x + 10
9x² + 21x + 15x + 35 = 6x + 10
9x² + 36x + 35 = 6x + 10
9x² + 30x + 25 = 0

Substitute a = 9, b = 30, and c = 25 into the discriminant:

b² − 4ac = (30)² − 4(9)(25) = 900 − 900 = 0

Since the discriminant is 0, solve by factoring and expect one real solution, a double root:

9x² + 30x + 25 = 0
(3x + 5)(3x + 5) = 0
3x + 5 = 0 or 3x + 5 = 0
x = −5/3

Answer: The solution is −5/3.

Exercise 9.4.2

Solve: 5x² + 2x − 7 = 2x − 3.

Answer: ±2√5/5

Applications Involving Quadratic Equations

In this section, the algebraic setups usually consist of a quadratic equation where the solutions may not be integers.

Example 9.4.10

The height of a triangle is 2 inches less than twice the length of its base. If the total area of the triangle is 11 square inches, then find the lengths of the base and height. Round answers to the nearest hundredth.

Solution: Let x represent the length of the base; then the height measures 2x − 2. Use the formula A = (1/2)bh and the fact that the area is 11 square inches to set up an algebraic equation:

11 = (1/2)x(2x − 2)

To rewrite this quadratic equation in standard form, first distribute (1/2)x:

11 = x² − x
0 = x² − x − 11

Use the coefficients, a = 1, b = −1, and c = −11, to determine the type of solutions:

b² − 4ac = (−1)² − 4(1)(−11) = 1 + 44 = 45

Since the discriminant is positive, expect two real solutions:

x = (1 ± √45)/2 = (1 ± 3√5)/2

In this problem, disregard the negative solution and consider only the positive solution:

x = (1 + 3√5)/2

Back substitute to find the height:

2x − 2 = 2 · (1 + 3√5)/2 − 2 = 3√5 − 1

Answer: The base measures (1 + 3√5)/2 ≈ 3.85 inches and the height is −1 + 3√5 ≈ 5.71 inches.

Example 9.4.11

The sum of the squares of two consecutive positive integers is 481. Find the integers.

Solution: Let n represent the first positive integer. Let n + 1 represent the next positive integer. The algebraic setup follows:

n² + (n + 1)² = 481

Rewrite the quadratic equation in standard form:

2n² + 2n + 1 = 481
2n² + 2n − 480 = 0
n² + n − 240 = 0

When the coefficients are large, sometimes it is less work to use the quadratic formula instead of trying to factor. In this case, a = 1, b = 1, and c = −240. Substitute into the quadratic formula and then simplify:

n = (−1 ± √(1 + 960))/2 = (−1 ± 31)/2

n = (−1 − 31)/2 = −16  or  n = (−1 + 31)/2 = 15

Since the problem calls for positive integers, disregard the negative solution and choose n = 15.
n + 1 = 15 + 1 = 16

Answer: The positive integers are 15 and 16.

Key Takeaways
- Determine the number and type of solutions to any quadratic equation in standard form using the discriminant, b² − 4ac. If the discriminant is negative, then the solutions are not real. If the discriminant is positive, then the solutions are real. If the discriminant is 0, then there is only one solution, a double root.
- Choose the appropriate method for solving a quadratic equation based on the value of its discriminant. While the quadratic formula will solve any quadratic equation, it may not be the most efficient method.
- When solving applications, use the key words and phrases to set up an algebraic equation that models the problem. In this section, the setup typically involves a quadratic equation.

Exercise 9.4.3: Use the Discriminant

Calculate the discriminant and use it to determine the number and type of solutions. Do not solve.

1. x² + 2x + 3 = 0
2. x² − 2x − 3 = 0
3. 3x² − x − 2 = 0
4. 3x² − x + 2 = 0
5. 9y² + 2 = 0
6. 9y² − 2 = 0
7. 5x² + x = 0
8. 5x² − x = 0
9. (1/2)x² − 2x + 5/2 = 0
10. (1/2)x² − x − 1/2 = 0
11. −x² − 2x + 4 = 0
12. −x² − 4x + 2 = 0
13. 4t² − 20t + 25 = 0
14. 9t² − 6t + 1 = 0

Answer:
1. −8, no real solution
3. 25, two real solutions
5. −72, no real solution
7. 1, two real solutions
9. −1, no real solution
11. 20, two real solutions
13. 0, one real solution

Exercise 9.4.4: Solving

Choose the appropriate method to solve the following.

1. x² − 2x − 3 = 0
2. x² + 2x + 3 = 0
3. 3x² − x − 2 = 0
4. 3x² − x + 2 = 0
5. 9y² + 2 = 0
6. 9y² − 2 = 0
7. 5x² + x = 0
8. 5x² − x = 0
9. (1/2)x² − 2x + 5/2 = 0
10. (1/2)x² − x − 1/2 = 0
11. −x² − 2x + 4 = 0
12. −x² − 4x + 2 = 0
13. 4t² − 20t + 25 = 0
14. 9t² − 6t + 1 = 0
15. y² − 4y − 1 = 0
16. y² − 6y − 3 = 0
17. 25x² + 1 = 0
18. 36x² + 4 = 0
19. 5t² − 4 = 0
20. 2t² − 9 = 0
21. (1/2)x² − (9/4)x + 1 = 0
22. 3x² + 12x − 16 = 0
23. 36y² = 2y
24. 50y² = −10y
25. x(x − 6) = −29
26. x(x − 4) = −16
27. 4y(y + 1) = 5
28. 2y(y + 2) = 3
29. −3x² = 2x + 1
30. 3x² + 4x = −2
31. 6(x + 1)² = 11x + 7
32. 2(x + 2)² = 7x + 11
33. 9t² = 4(3t − 1)
34. 5t(5t − 6) = −9
35. (x + 1)(x + 7) = 3
36. (x − 5)(x + 7) = 14

Answer:
1. −1, 3
3. −2/3, 1
5. No real solution
7. −1/5, 0
9. No real solution
11. −1 ± √5
13. 5/2
15. 2 ± √5
17. No real solution
19. ±2√5/5
21. 1/2, 4
23. 0, 1/18
25. No real solution
27. (−1 ± √6)/2
29. No real solution
31. −1/2, 1/3
33. 2/3
35.
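The general guidelines above collapse into a single routine once we only want numeric roots: compute the discriminant and branch on its sign. A short Python sketch (the function name is ours) that checks several of the worked examples:

```python
import math

# Solve ax^2 + bx + c = 0 over the reals via the discriminant,
# following the general guidelines of this section.
def solve_quadratic(a, b, c):
    d = b ** 2 - 4 * a * c
    if d < 0:
        return []                    # negative discriminant: no real solution
    if d == 0:
        return [-b / (2 * a)]        # zero discriminant: one double root
    r = math.sqrt(d)                 # positive discriminant: two real roots
    return sorted([(-b - r) / (2 * a), (-b + r) / (2 * a)])

print(solve_quadratic(2, -7, -4))    # Example 9.4.3: [-0.5, 4.0]
print(solve_quadratic(9, 30, 25))    # Example 9.4.9: double root -5/3
print(solve_quadratic(4, -8, 7))     # Example 9.4.8: [] (discriminant -48)
```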
−4 ± 2√3

Exercise 9.4.5: Applications, Number Problems

Set up an algebraic equation and use it to solve the following.

1. A positive real number is 2 less than another. When 4 times the larger is added to the square of the smaller, the result is 49. Find the numbers.
2. A positive real number is 1 more than another. When twice the smaller is subtracted from the square of the larger, the result is 4. Find the numbers.
3. A positive real number is 6 less than another. If the sum of the squares of the two numbers is 38, then find the numbers.
4. A positive real number is 1 more than twice another. If 4 times the smaller number is subtracted from the square of the larger, then the result is 21. Find the numbers.

Answer:
1. 3√5 and 3√5 − 2
3. √10 − 3 and √10 + 3

Exercise 9.4.6: Applications, Geometry Problems

Round off your answers to the nearest hundredth.

1. The area of a rectangle is 60 square inches. If the length is 3 times the width, then find the dimensions of the rectangle.
2. The area of a rectangle is 6 square feet. If the length is 2 feet more than the width, then find the dimensions of the rectangle.
3. The area of a rectangle is 27 square meters. If the length is 6 meters less than 3 times the width, then find the dimensions of the rectangle.
4. The area of a triangle is 48 square inches. If the base is 2 times the height, then find the length of the base.
5. The area of a triangle is 14 square feet. If the base is 4 feet more than 2 times the height, then find the length of the base and the height.
6. The area of a triangle is 8 square meters. If the base is 4 meters less than the height, then find the length of the base and the height.
7. The perimeter of a rectangle is 54 centimeters and the area is 180 square centimeters. Find the dimensions of the rectangle.
8. The perimeter of a rectangle is 50 inches and the area is 126 square inches. Find the dimensions of the rectangle.
9. George maintains a successful 6-meter-by-8-meter garden.
Next season he plans on doubling the planting area by increasing the length and width by an equal amount. By how much must he increase the length and width?
10. A uniform brick border is to be constructed around a 6-foot-by-8-foot garden. If the total area of the garden, including the border, is to be 100 square feet, then find the width of the brick border.

Answer:
1. Length: 13.42 inches; width: 4.47 inches
3. Length: 6.48 meters; width: 4.16 meters
5. Height: 2.87 feet; base: 9.74 feet
7. Length: 15 centimeters; width: 12 centimeters
9. 2.85 meters

Exercise 9.4.7: Applications, Pythagorean Theorem

1. If the sides of a square measure √106 units, then find the length of the diagonal.
2. If the diagonal of a square measures √310 units, then find the length of each side.
3. The diagonal of a rectangle measures √63 inches. If the width is 4 inches less than the length, then find the dimensions of the rectangle.
4. The diagonal of a rectangle measures √23 inches. If the width is 2 inches less than the length, then find the dimensions of the rectangle.
5. The top of a 20-foot ladder, leaning against a building, reaches a height of 18 feet. How far is the base of the ladder from the wall? Round off to the nearest hundredth.
6. To safely use a ladder, the base should be placed about 1/4 of the ladder's length away from the wall. If a 20-foot ladder is to be safely used, then how high against a building will the top of the ladder reach? Round off to the nearest hundredth.
7. The diagonal of a television monitor measures 32 inches. If the monitor has a 3:2 aspect ratio, then determine its length and width. Round off to the nearest hundredth.
8. The diagonal of a television monitor measures 52 inches. If the monitor has a 16:9 aspect ratio, then determine its length and width. Round off to the nearest hundredth.

Answer:
1. √203 units
3. Length: 2+√52 inches; width: −2+√52 inches
5. 2√19 ≈ 8.72 feet
7.
Length: 26.63 inches; width: 17.75 inches

Exercise 9.4.8: Applications, Business Problems

1. The profit in dollars of running an assembly line that produces custom uniforms each day is given by the function P(t) = −40t² + 960t − 4,000, where t represents the number of hours the line is in operation.
   a. Calculate the profit on running the assembly line for 10 hours a day.
   b. Calculate the number of hours the assembly line should run to break even. Round off to the nearest tenth of an hour.
2. The profit in dollars generated by producing and selling x custom lamps is given by the function P(x) = −10x² + 800x − 12,000.
   a. Calculate the profit on the production and sale of 35 lamps.
   b. Calculate the number of lamps that must be sold to profit $3,000.
3. If $1,200 is invested in an account earning an annual interest rate r, then the amount A in the account at the end of 2 years is given by the formula A = 1,200(1 + r)². If at the end of 2 years the amount in the account is $1,335.63, then what was the interest rate?
4. A manufacturing company has determined that the daily revenue, R, in thousands of dollars depends on the number, n, of palettes of product sold according to the formula R = 12n − 0.6n². Determine the number of palettes that must be sold in order to maintain revenues at $60,000 per day.

Answer:
1. a. $1,600; b. 5.4 hours and 18.6 hours
3. 5.5%

Exercise 9.4.9: Applications, Projectile Problems

1. The height of a projectile launched upward at a speed of 32 feet/second from a height of 128 feet is given by the function h(t) = −16t² + 32t + 128.
   a. What is the height of the projectile at 1/2 second?
   b. At what time after launch will the projectile reach a height of 128 feet?
2. The height of a projectile launched upward at a speed of 16 feet/second from a height of 192 feet is given by the function h(t) = −16t² + 16t + 192.
   a. What is the height of the projectile at 3/2 seconds?
   b. At what time will the projectile reach 128 feet?
3. The height of an object dropped from the top of a 144-foot building is given by h(t) = −16t² + 144. How long will it take to reach a point halfway to the ground?
4. The height of a projectile shot straight up into the air at 80 feet/second from the ground is given by h(t) = −16t² + 80t. At what time will the projectile reach 95 feet?

Answer:
1. a. 140 feet; b. 0 seconds and 2 seconds
3. 2.12 seconds

Exercise 9.4.10: Applications, Discussion Board

1. Discuss the strategy of always using the quadratic formula to solve quadratic equations.
2. List all of the methods that we have learned so far to solve quadratic equations. Discuss the pros and cons of each.

Answer:
1. Answers may vary
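The projectile answers can be verified by evaluating h(t) directly and then solving h(t) = target height with the quadratic formula. A sketch for problem 1 of Exercise 9.4.9:

```python
import math

# Exercise 9.4.9, problem 1: h(t) = -16t^2 + 32t + 128.
def h(t):
    return -16 * t ** 2 + 32 * t + 128

print(h(0.5))  # height at 1/2 second: 140.0 feet

# When is h(t) = 128?  Then -16t^2 + 32t = 0, i.e. 16t^2 - 32t + 0 = 0.
a, b, c = 16, -32, 0
d = b ** 2 - 4 * a * c
t1 = (-b - math.sqrt(d)) / (2 * a)
t2 = (-b + math.sqrt(d)) / (2 * a)
print(t1, t2)  # 0.0 and 2.0 seconds, matching answer 1b
```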
https://www.quora.com/How-can-we-find-the-radius-of-a-circle-using-the-equations-of-two-tangent-lines
How can we find the radius of a circle using the equations of two tangent lines?

To find the radius of a circle using the equations of two tangent lines, you can follow these steps:

1. Identify the equations of the tangent lines. Let's denote the equations of the two tangent lines as:
   Line 1: y = m₁x + b₁
   Line 2: y = m₂x + b₂
2. Find the point of intersection of the tangent lines. Since the two lines are tangent to the circle, they will intersect at a point that is outside the circle. You can find this point by solving the system of equations formed by the two lines:
   m₁x + b₁ = m₂x + b₂
   Rearranging gives (m₁ − m₂)x = b₂ − b₁, thus
   x = (b₂ − b₁)/(m₁ − m₂)
   Substitute x back into either line's equation to find y:
   y = m₁ · (b₂ − b₁)/(m₁ − m₂) + b₁
   This gives you the coordinates of the intersection point (x₀, y₀).
3. Determine the center of the circle. The center of the circle lies along the angle bisector of the angles formed by the two tangent lines. You can derive the equation of this bisector using the slopes of the tangent lines and the standard formula for the angle bisector between two lines. However, for simplicity, we can find the center by recognizing that the center is equidistant from both tangent lines.
4. Calculate the distance from the center to the tangent lines. The radius of the circle is the distance from the center of the circle to either of the tangent lines. If you find the center (h, k), the distance r from the center to a line Ax + By + C = 0 can be calculated using:
   r = |Ah + Bk + C| / √(A² + B²)
   You can convert the tangent line equations into standard form to apply this formula.

Conclusion: the radius r calculated from the distance formula is the radius of the circle.

For example, suppose the equations of the tangent lines are:
Line 1: y = 2x + 3
Line 2: y = −(1/2)x + 1

1. Find the intersection point.
2. Determine the center.
3. Calculate the radius using the distance from the center to either line.

This method will allow you to find the radius of the circle based on the tangent lines.

Max Gretinski
Studied Mathematics · Author has 6.5K answers and 2.5M answer views · 3y

From the tangent lines alone, you cannot find the radius of the circle unless the lines are parallel. In that case, the radius is one-half the distance between the two lines. If you know the points of intersection between the tangent lines and the circle, you can find the center, radius, and equation of the circle. Others have already explained how to do this. Without knowing the points of intersection, all we can determine is the line containing the center of the circle. See the diagram below.
The red line and blue line are known to be the equations of the tangent lines. All this means is that the center lies on the dashed line. Shown are two circles, of different radii, that are tangent to both lines.

Playboy Rocks · 3y

There exist two possible cases.

Case 1. The equations of the tangents are parallel, i.e. Ax + By + m = 0 and Ax + By + n = 0. In this case the distance between the two tangents acts as the diameter. Explanation: the normal to each tangent passes through the centre; moreover, each normal also passes through the opposite tangent's point of contact with the circle. To find the distance between two parallel lines you can use the formula

d = |m − n| / √(A² + B²)

Case 2. If the equations of the tangents are not parallel, then follow these steps to calculate the radius:

Step 1. Find the normal to each tangent.
Step 2. Find their point of intersection; this point will be the centre. (Any normal to a tangent passes through the centre.)
Step 3.
From this point (the centre), let's say C, find the distance to either tangent; in other words, find the distance of this point from either tangent line. You can use the point-to-line distance formula

d = |ax₁ + by₁ + c| / √(a² + b²)

for the line ax + by + c = 0 and the point (x₁, y₁).

Step 4. The distance just found is your radius. (The distance from the centre of a circle to a tangent is the radius of the circle.)

Michael Grady
Former OS Software, Tutor, Math/Phys/Chem/Lang Teacher at Apple (1995–2022) · Author has 1.3K answers and 1.2M answer views · 1y
Originally Answered: How can the radius be determined from an equation with two points and a tangent line?

Geometric construction, two cases:

a. The two points (A and B) do not include the point of tangency C (a third point).
Solution: The perpendicular bisectors of AC and BC intersect at the center of the circle O. Any of OA, OB, or OC is the radius.

b. One of the points A or B is the point of tangency.
Solution: The perpendicular to the tangent at C and the perpendicular bisector of BC intersect at the center O. Find the radius from OC or OB.
The radius r can then be measured.

Algebraic construction (which I think is the request here): Let's take case (b) above. Point of tangency C = (xₜ, yₜ), and point B = (x₁, y₁). We have a tangent line at C, y = mx + b, with given slope m (b is irrelevant). Its perpendicular at the point of tangency C satisfies

yₜ = (−1/m)xₜ + bₜ  ⟹  bₜ = yₜ + (1/m)xₜ  ⟹  y = (−1/m)x + bₜ

Perpendicular bisector of BC: bisection point D = ((xₜ + x₁)/2, (yₜ + y₁)/2) = (x₂, y₂), slope of BC m₂ = (yₜ − y₁)/(xₜ − x₁), so the perpendicular bisector has slope −(1/m₂):

y₂ = (−1/m₂)x₂ + b₂  ⟹  b₂ = y₂ + (1/m₂)x₂  ⟹  y = (−1/m₂)x + b₂

Result so far, we have two lines:
Tangent perpendicular line: y = (−1/m)x + bₜ
Perpendicular bisector line of BC: y = (−1/m₂)x + b₂

At the center of the circle O, we want (−1/m)x + bₜ = (−1/m₂)x + b₂, so

x(1/m − 1/m₂) = bₜ − b₂  ⟹  xₒ = (bₜ − b₂) / (1/m − 1/m₂),  yₒ = (−1/m)xₒ + bₜ

We now have (xₒ, yₒ) for the center O.

Radius: r = √((xₜ − xₒ)² + (yₜ − yₒ)²)

Peter Shea
B.Sc. in Mathematics & Computer Science, Monash University (Graduated 1972) · Author has 5.2K answers and 1.2M answer views · Updated 1y
Originally Answered: How can the radius be determined from an equation with two points and a tangent line?

You can't, unless you know where the tangent touches the circle!
The equation of the circle can be expressed as:

(x − a)² + (y − b)² = r²

At P₁ (x₁, y₁): (x₁ − a)² + (y₁ − b)² = r²
At P₂ (x₂, y₂): (x₂ − a)² + (y₂ − b)² = r²
∴ x₁² − 2ax₁ + a² + y₁² − 2by₁ + b² = x₂² − 2ax₂ + a² + y₂² − 2by₂ + b²
∴ 2a(x₂ − x₁) + 2b(y₂ − y₁) = (x₂² − x₁²) + (y₂² − y₁²)

which is a linear equation in two unknowns. The tangent is a linear equation of the form y = mx + c. If we don't know where it touches the circle, its slope and intercept add two unknowns and no new information, and we therefore cannot solve for a, b, or r!

David Tarran
Former Math Teacher · Author has 265 answers and 230.3K answer views · 6y

If the two tangents are parallel, and the circle lies between the two tangents, then the diameter of the circle is the distance between the pair of parallels. If, however, the tangents are not parallel, then there is an infinity of circles which could lie between the tangents. Can you tell us more details of the problem?
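The parallel case that several answers describe is the one fully determined situation: for tangents Ax + By + m = 0 and Ax + By + n = 0, the separation between the lines is |m − n|/√(A² + B²), and the radius is half of that. A quick Python sketch with made-up coefficients:

```python
import math

# Radius of a circle squeezed between two parallel tangent lines.
# Hypothetical lines: 3x + 4y + 1 = 0 and 3x + 4y - 9 = 0.
A, B = 3.0, 4.0
m, n = 1.0, -9.0

distance = abs(m - n) / math.hypot(A, B)  # separation of the parallels: 2.0
radius = distance / 2                     # the diameter equals the separation
print(radius)  # 1.0
```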
Given a circle with radius 3 cm and center at (2, 4), what is the equation of the tangent line to the circle at the point (5, 3)? How do you find the centre and radius of a circle if it is tangent to two lines? How do I find the equation of two tangents from a point to a circle? Daniel Ettedgui, DO I fell in love with geometry at 9 years of age. · Author has 1.3K answers and 999.2K answer views · 2y Related How do you find the radius of a circle when given two points on its circumference and a tangent line? With two points given, the radius is not possible to calculate. With three points it is quite simple. So we need to find the tangent point as our third point. Consider rough sketch: Known points A and B. Draw line connecting them and intersect given tangent at D. Since slope of BD can be calculated from two known points, we can write an equation for this line and set it equal to known tangent line equation to obtain intersecting point D. We now calculate length of AD using: (AD)^2=(DC)(DB) Now use distance formula to get coordinates of A! With three points on the circle the radius is ours. The perpen With two points given, the radius is not possible to calculate. With three points it is quite simple. So we need to find the tangent point as our third point. Consider rough sketch: Known points A and B. Draw line connecting them and intersect given tangent at D. Since slope of BD can be calculated from two known points, we can write an equation for this line and set it equal to known tangent line equation to obtain intersecting point D. We now calculate length of AD using: (AD)^2=(DC)(DB) Now use distance formula to get coordinates of A! With three points on the circle the radius is ours. The perpendicular bisector of two chords intersect at the center! That gives us coordinates of center. Use distance formula from center to any point on circle and voila. 
David G.
Ebin Professor of Mathematics at Stony Brook University (1969–present) · Author has 110 answers and 198.4K answer views · 6y If the lines are not parallel, draw a perpendicular to each line at its point of tangency. The lines that you drew will intersect at the center of the circle. The distance from the center to either of the original lines is the radius. If the original lines are parallel, see Gupta's answer below. Steve Maguire M.A. in Mathematics, Boston College · Author has 145 answers and 158.6K answer views · 6y Select two lines that are tangent to the circle and parallel to each other. There are an infinite number of circles that have these two tangents. Ujjval Gupta 6y You could find the radius if the two tangents are parallel. The distance between them would be twice the radius of the circle. Subhasish Debroy Former SDE at Bharat Sanchar Nigam Limited (BSNL) · Author has 6.6K answers and 5.8M answer views · 2y Related How do you find the radius of a circle from a tangent? We know the radius of a circle is perpendicular to the tangent at the point of tangency. We then follow these steps sequentially. We can get the point of tangency (x1, y1) by solving the equations of the circle and the tangent. From the equation of the tangent we get its slope by finding dy/dx (or transforming the equation into the form y = mx + c, where m = slope). Then we can get the slope of the radius, m1, by finding (–1/m), as the product of the slopes of two perpendicular ...
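Ebin's construction can be checked numerically. A hedged sketch (the tangent lines, direction vectors, and tangency points below are made-up illustrative values): at each point of tangency, draw the perpendicular to that tangent, then intersect the two perpendiculars to recover the center.

```python
import math

def center_from_tangents(p1, d1, p2, d2):
    """Given two tangent lines (point of tangency p_i, direction d_i),
    intersect the perpendiculars drawn at each tangency point."""
    n1 = (-d1[1], d1[0])  # normal to the first tangent
    n2 = (-d2[1], d2[0])  # normal to the second tangent
    det = n1[0] * (-n2[1]) - (-n2[0]) * n1[1]  # tangents must not be parallel
    rx, ry = p2[0] - p1[0], p2[1] - p1[1]
    t = (rx * (-n2[1]) - (-n2[0]) * ry) / det  # Cramer's rule for 2x2 system
    cx, cy = p1[0] + t * n1[0], p1[1] + t * n1[1]
    return (cx, cy), math.hypot(cx - p1[0], cy - p1[1])

# Tangent x = 0 touching at (0, 1) and tangent y = 0 touching at (1, 0):
center, radius = center_from_tangents((0, 1), (0, 1), (1, 0), (1, 0))
print(center, radius)  # (1.0, 1.0) 1.0
```

For parallel tangents (Gupta's case) `det` is zero and the construction fails, consistent with the answers above: there the radius is half the distance between the lines, but the center is not determined.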
David Briggs B. Sc. in Mathematics, King's College London (KCL) (Graduated 1961) · Author has 862 answers and 305.8K answer views · 6y You can't. Two tangents don't define a circle. Philip Lloyd Specialist Calculus Teacher, Motivator and Baroque Trumpet Soloist. · Author has 6.8K answers and 52.8M answer views · 5y Related How do you find the equation of 2 lines that intersect at a known point and are both tangents to a circle with a known equation? The best way to explain this is with a real example. Let's just have this simple circle… and let's choose a point (obviously outside the circle), say (3, 1). Obviously we don't know the coordinates of the points where the tangents touch the circle yet! Let the tangent be y = mx + c. If it goes through (3, 1) then 1 = 3m + c, so c = 1 – 3m. The intersection of the line y = mx + c and the circle could be found by solving their equations simultaneously: if this line is a tangent then the above equation only has 1 solution for the ...
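Lloyd's discriminant condition can also be reached via the tangency condition (distance from center to line equals radius). A hedged sketch, assuming the unshown circle is x² + y² = 5 (an illustrative choice that keeps the arithmetic clean) with the external point (3, 1) from the answer:

```python
import math

# Circle x^2 + y^2 = r2 (assumed; the answer's circle isn't shown), point (px, py).
# A line y = m x + c through the point is tangent when its distance from the
# origin equals the radius, which reduces to a quadratic in the slope m:
#   (px^2 - r2) m^2 - 2 px py m + (py^2 - r2) = 0
r2 = 5.0
px, py = 3.0, 1.0

a = px * px - r2          # 4
b = -2 * px * py          # -6
c = py * py - r2          # -4
disc = math.sqrt(b * b - 4 * a * c)
slopes = sorted([(-b - disc) / (2 * a), (-b + disc) / (2 * a)])
print(slopes)  # [-0.5, 2.0]
```

The two slopes give the two tangent lines y = 2x − 5 and y = −x/2 + 5/2, each at distance √5 from the origin, as the tangency condition requires.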
3986
https://www.addgene.org/mol-bio-reference/
Molecular Biology Reference Origins of Molecular Genetics The concept of genes as carriers of phenotypic information was introduced in the mid-19th century by Gregor Mendel, who demonstrated the properties of genetic inheritance in peas. Over the next 100 years, many significant discoveries led to the conclusions that genes encode proteins and reside on chromosomes, which are composed of DNA. These findings culminated in the central dogma of molecular biology: proteins are translated from RNA, which is transcribed from DNA. DNA is composed of 4 nucleotides or bases, adenine, thymine, cytosine, and guanine (abbreviated to A, T, C, and G respectively), organized into a double-stranded helix. The order of these 4 nucleotides makes up the genetic code and provides the instructions to make every protein within an organism. Proteins are made up of amino acids. Each amino acid is encoded by 3 nucleotides, termed a codon. As there are only 20 natural amino acids but 64 codon combinations, most amino acids are encoded by multiple codons. Plasmids and Recombinant DNA Technology Techniques in chemistry enable isolation and purification of cellular components, such as DNA, but practically this isolation is only feasible for relatively short DNA molecules.
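The codon arithmetic above can be checked in a couple of lines of Python (a trivial sketch, not an Addgene tool):

```python
from itertools import product

# 4 bases taken 3 at a time give 4**3 = 64 possible codons, which is why
# the 20 natural amino acids are encoded redundantly.
codons = ["".join(triplet) for triplet in product("ATCG", repeat=3)]
print(len(codons))  # 64
```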
In order to isolate a particular gene from human chromosomal DNA, it would be necessary to isolate a sequence of a few hundred or few thousand basepairs from the entire human genome. Digesting the human genome with restriction enzymes would yield about two million DNA fragments, which is far too many to separate from each other for the purposes of isolating one specific DNA sequence. This obstacle has been overcome by the field of recombinant DNA technology, which enables the preparation of more manageable (i.e., smaller) DNA fragments. In 1952, Joshua Lederberg coined the term plasmid, in reference to any extrachromosomal heritable determinant. Plasmids are fragments of double-stranded DNA that typically carry genes and can replicate independently from chromosomal DNA. Although they can be found in archaea and eukaryotes, they play the most significant biological role in bacteria, where they can be passed from one bacterium to another by a type of horizontal gene transfer (conjugation), usually providing a benefit to the host, such as antibiotic resistance. This benefit can be context-dependent, and thus the plasmid exists in a symbiotic relationship with the host cell. Like the bacterial chromosomal DNA, plasmid DNA is replicated upon cell division, and each daughter cell receives at least one copy of the plasmid. By the 1970s the combined discoveries of restriction enzymes, DNA ligase, and gel electrophoresis allowed for the ability to move specific fragments of DNA from one context to another, such as from a chromosome to a plasmid. These tools are essential to the field of recombinant DNA, in which many identical DNA fragments can be generated. The combination of a DNA fragment with a plasmid or vector DNA backbone generates a recombinant DNA molecule, which can be used to study DNA fragments of interest, such as genes.
Molecular Cloning Plasmids that are used most commonly in the field of recombinant DNA technology have been optimized for studying and manipulating genes. For instance, most plasmids are replicated in E. coli and are relatively small (~3000 - 6000 basepairs) to enable easy manipulation. Typically plasmids contain the minimum essential DNA sequences for this purpose, which includes a DNA replication origin, an antibiotic-resistance gene, and a region in which exogenous DNA fragments can be inserted. When a plasmid exists extrachromosomally in E. coli, it is replicated independently and segregated to the resulting daughter cells. These daughter cells contain the same genetic information as the parental cell, and are thus termed clones of the original cell. The plasmid DNA is similarly referred to as cloned DNA, and this process of generating multiple identical copies of a recombinant DNA molecule is known as DNA or molecular cloning. The process of molecular cloning enabled scientists to break chromosomes down to study their genes, marking the birth of molecular genetics. Today, scientists can easily study and manipulate genes and other genetic elements using specifically engineered plasmids, commonly referred to as vectors, which have become possibly the most ubiquitous tools in the molecular biologist's toolbox. To learn more about different types of cloning methods check out our guide on molecular cloning techniques. Plasmid Elements Plasmids used by scientists today come in many sizes and vary broadly in their functionality. In their simplest form, plasmids require a bacterial origin of replication (ori), an antibiotic-resistance gene, and at least one unique restriction enzyme recognition site. These elements allow for the propagation of the plasmid within bacteria, while allowing for selection against any bacteria not carrying the plasmid.
Additionally, the restriction enzyme site(s) allow for the cloning of a fragment of DNA to be studied into the plasmid. Below are some common plasmid elements: Plasmid Element | Description Origin of Replication (ori) | DNA sequence which directs initiation of plasmid replication (by bacteria) by recruiting DNA replication machinery. The ori is critical for the ability of the plasmid to be copied (amplified) by bacteria, which is a key reason plasmids are convenient and easy to use. Antibiotic Resistance Gene | Allows for selection of plasmid-containing bacteria by providing a survival advantage to the bacterial host. Each bacterium can contain multiple copies of an individual plasmid, and ideally would replicate these plasmids upon cell division in addition to its own genomic DNA. Because of this additional replication burden, the rate of bacterial cell division is reduced (i.e., it takes more time to copy this extra DNA). Because of this reduced fitness, bacteria without plasmids can replicate faster and out-populate bacteria with plasmids, thus selecting against the propagation of these plasmids through cell division. To ensure the retention of plasmid DNA in bacterial populations, an antibiotic resistance gene (e.g., a gene whose product confers resistance to ampicillin) is included in the plasmid. These bacteria are then grown in the presence of ampicillin. Under these conditions, there is a selective pressure to retain the plasmid DNA, despite the added replication burden, as bacteria without the plasmid DNA would not survive antibiotic treatment. It is important to note that the antibiotic resistance gene is under the control of a bacterial promoter, and is thus expressed in the bacteria by bacterial transcriptional machinery. Multiple Cloning Site (MCS) | Short segment of DNA which contains several restriction enzyme sites, enabling easy insertion of DNA by restriction enzyme digestion and ligation.
In expression plasmids, the MCS is often located downstream from a promoter, such that when a gene is inserted within the MCS, its expression will be driven by the promoter. As a general rule, the restriction sites in the MCS are unique and not located elsewhere in the plasmid backbone, which is why they can be used for cloning by restriction enzyme digestion. For more information about restriction enzymes check out NEB's website. Insert | The insert is the gene, promoter, or other DNA fragment cloned into the MCS. The insert is typically the genetic element one wishes to study using a particular plasmid. Promoter Region | Drives transcription of the insert. The promoter is designed to recruit transcriptional machinery from a particular organism or group of organisms. For example, if a plasmid is intended for use in human cells, the promoter will be a human or mammalian promoter sequence. The promoter can also direct cell-specific expression, which can be achieved by a tissue-specific promoter (e.g., a liver-specific promoter). The strength of the promoter is also important for controlling the level of insert expression (i.e., a strong promoter directs high expression, whereas weaker promoters can direct low/endogenous expression levels). For more information about promoters, both bacterial and eukaryotic, as well as common promoters used in research, check out our promoters reference page. Selectable Marker | The selectable marker is used to select for cells that have successfully taken up the plasmid for the purpose of expressing the insert. This is different from selecting for bacterial cells that have taken up the plasmid for the purpose of replication. The selectable marker enables selection of a population of cells that have taken up the plasmid and that can be used to study the insert.
The selectable marker is typically in the form of another antibiotic resistance gene (this time, under the control of a non-bacterial promoter) or a fluorescent protein (that can be used to select or sort the cells by visualization or FACS). Primer Binding Site | A short single-stranded DNA sequence used as an initiation point for PCR amplification or DNA sequencing of the plasmid. Primers can be utilized to verify the sequence of the insert or other regions of the plasmid. For commonly used primers check out Addgene's sequencing primer list. Working with Plasmids Plasmids have become an essential tool in molecular biology for a variety of reasons, including that they are: Easy to work with - Plasmids are a convenient size (generally 1,000-20,000 basepairs) for physical isolation (purification) and manipulation. With current cloning technology, it is easy to create and modify plasmids containing the genetic element that you are interested in. Self-replicating - Once you have constructed a plasmid, you can easily make an endless number of copies of the plasmid using bacteria, which can take up plasmids and amplify them during cell division. Because bacteria are easy to grow in a lab, divide relatively quickly, and exhibit exponential growth rates, plasmids can be replicated easily and efficiently in a laboratory setting. Stable - Plasmids are stable long-term either as purified DNA or within bacterial cells that have been preserved as glycerol stocks. Functional in many species and useful for a diverse set of applications - Plasmids can drive gene expression in a wide variety of organisms, including plants, worms, mice, and even cultured human cells. Although plasmids were originally used to understand protein-coding gene function, they are now used in a variety of studies investigating promoters, small RNAs, or other genetic elements. Types of Plasmids Plasmids are versatile and can be used in many different ways by scientists.
The combination of elements often determines the type of plasmid and dictates how it might be used in the lab. Below are some common plasmid types: Cloning Plasmids - Used to facilitate the cloning of DNA fragments. Cloning vectors tend to be very simple, often containing only a bacterial resistance gene, origin of replication, and MCS. They are small and optimized to help in the initial cloning of a DNA fragment. Commonly used cloning vectors include Gateway entry vectors and TOPO cloning vectors. If you are looking for an empty plasmid backbone for your experiment, see Addgene's empty backbone page for more information. Expression Plasmids - Used for gene expression (for the purposes of gene study). Expression vectors must contain a promoter sequence, a transcription terminator sequence, and the inserted gene. The promoter region is required for the generation of RNA from the insert DNA via transcription. The terminator sequence on the newly synthesized RNA signals for the transcription process to stop. An expression vector can also include an enhancer sequence which increases the amount of protein or RNA produced. Expression vectors can drive expression in various cell types (mammalian, yeast, bacterial, etc.), depending largely on which promoter is used to initiate transcription. Gene Knock-down Plasmids - Used for reducing the expression of an endogenous gene. This is frequently accomplished through expression of an shRNA targeting the mRNA of the gene of interest. These plasmids have promoters that can drive expression of short RNAs. Genome Engineering Plasmids - Used to target and edit genomes. Genome editing is most commonly accomplished using CRISPR technology. CRISPR is composed of a DNA endonuclease and guide RNAs that target specific locations in the genome. For more information on CRISPR check out Addgene’s CRISPR guide. Reporter Plasmids - Used for studying the function of genetic elements. 
These plasmids contain a reporter gene (for example, luciferase or GFP) that offers a read-out of the activity of the genetic element. For instance, a promoter of interest could be inserted upstream of the luciferase gene to determine the level of transcription driven by that promoter. Viral Plasmids - These plasmids are modified viral genomes that are used to efficiently deliver genetic material into target cells. You can use these plasmids to create viral particles, such as lentiviral, retroviral, AAV, or adenoviral particles, that can infect your target cells at a high efficiency. Addgene's expanding viral service offers select ready-made AAV and lentiviral particles. Visit our viral service page to learn more. Regardless of type, plasmids are generally propagated, selected for, and verified for integrity prior to use in an experiment. E. coli strains for propagating plasmids E. coli are gram-negative, rod-shaped bacteria naturally found in the intestinal tract of animals. There are many different naturally occurring strains of E. coli, some of which are deadly to humans. The majority of all common, commercial lab strains of E. coli used today are descended from two individual isolates, the K-12 strain and the B strain. K-12 gave rise to the common lab strains MG1655, DH5alpha, and DH10b (also known as TOP10), among others, while the B strain gave rise to BL21 and its derivatives. We've included a small number of E. coli strains below and recommend checking out these two Addgene blog posts relating to common E. coli lab strains and E. coli strains specialized for protein expression for additional strain-related information and a more extensive strain list.

Strain | Vendor(s) | Genotype
BL21 | Invitrogen; New England BioLabs | E. coli B F- dcm ompT hsdS(rB- mB-) gal
ccdB Survival | Invitrogen | F- mcrA Delta(mrr-hsdRMS-mcrBC) Phi80lacZDeltaM15 Delta-lacX74 recA1 araDelta139 D(ara-leu)7697 galU galK rpsL (StrR) endA1 nupG tonA::Ptrc ccdA
DB3.1 | Invitrogen | F- gyrA462 endA Delta(sr1-recA) mcrB mrr hsdS20 (rB- mB-) supE44 ara14 galK2 lacY1 proA2 rpsL20 (StrR) xyl5 lambda- leu mtl1
DH5alpha | Invitrogen | F- Phi80lacZDeltaM15 Delta(lacZYA-argF) U169 recA1 endA1 hsdR17(rk-, mk+) phoA supE44 thi-1 gyrA96 relA1 tonA
JM109 | Addgene; Promega | e14- (McrA-) recA1 endA1 gyrA96 thi-1 hsdR17(rK- mK+) supE44 relA1 Delta(lac-proAB) [F' traDelta36 proAB lacIqZDeltaM15]
NEB Stable | New England Biolabs | F' proA+B+ lacIq ∆(lacZ)M15 zzf::Tn10 (TetR) ∆(ara-leu) 7697 araD139 fhuA ∆lacX74 galK16 galE15 e14- Φ80dlacZ∆M15 recA1 relA1 endA1 nupG rpsL (StrR) rph spoT1 ∆(mrr-hsdRMS-mcrBC)
Stbl3 | Invitrogen | F– mcrB mrr hsdS20 (rB–, mB–) recA13 supE44 ara-14 galK2 lacY1 proA2 rpsL20 (StrR) xyl-5 λ– leu mtl-1
Top10 | Invitrogen | F- mcrA Delta(mrr-hsdRMS-mcrBC) Phi80lacZM15 Delta-lacX74 recA1 araD139 Delta(ara-leu)7697 galU galK rpsL (StrR) endA1 nupG

Antibiotics commonly used for plasmid selection Many plasmids are designed to include an antibiotic resistance gene, which, when expressed, allows only plasmid-containing bacteria to grow in or on media containing that antibiotic. These antibiotic resistance genes not only give the scientist an easy way to detect plasmid-containing bacteria, but also provide those bacteria with selective pressure to maintain and replicate your plasmid over multiple generations. More information relating to antibiotic resistance genes as well as additional antibiotics not listed in the table below can be found in this blog post. Below you will find a few antibiotics commonly used in the lab and their recommended concentrations. We suggest checking your plasmid's datasheet or the plasmid map to confirm which antibiotic(s) to add to your LB media or LB agar plates.
Antibiotic | Recommended Stock Concentration | Recommended Working Concentration
Ampicillin | 100 mg/mL | 100 µg/mL
Carbenicillin | 100 mg/mL | 100 µg/mL
Chloramphenicol | 25 mg/mL (dissolve in EtOH) | 25 µg/mL
Hygromycin B | 200 mg/mL | 200 µg/mL
Kanamycin | 50 mg/mL | 50 µg/mL
Spectinomycin | 50 mg/mL | 50 µg/mL
Tetracycline | 10 mg/mL | 10 µg/mL

Note: Carbenicillin can be used in place of ampicillin. Preparing Antibiotics Create a stock solution of your antibiotic. Unless otherwise indicated, the antibiotic powder can be dissolved in dH2O. Addgene recommends making 1000X stock solutions and storing aliquots at -20°C. To use, dilute your antibiotic into your LB medium at 1:1,000. For example, to make 100 mL of LB/ampicillin growth media, add 100 μL of a 100 mg/mL ampicillin stock (1000X stock) to 100 mL of LB. DNA sequencing for plasmid verification DNA is made up of 4 bases: adenine, thymine, cytosine, and guanine. The order of these bases makes up the genetic code and provides all the information needed for cells to make proteins and other molecules essential for life. Scientists often "sequence DNA" to identify the order of these four nucleotide bases in a particular DNA strand. Sequencing DNA and understanding the genetic code allows scientists to study gene function as well as identify changes or mutations that may cause certain diseases. Sequencing DNA is extremely important when verifying plasmids to ensure each plasmid contains the essential elements to function and the correct gene of interest. So how do scientists sequence DNA? Sanger Sequencing In 1977, Frederick Sanger developed the process termed Sanger sequencing, sometimes referred to as chain-termination sequencing or dideoxy sequencing. To understand Sanger sequencing, we first need to understand DNA replication. DNA is a double helix, where a base on one strand pairs with a particular base on the other, complementary, strand. Specifically, A pairs with T and C pairs with G.
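The base-pairing rule just stated (A with T, C with G) is often applied in code as a reverse-complement helper; a minimal sketch (not an Addgene tool):

```python
# Complement each base (A<->T, C<->G), then reverse to read 5'->3'.
PAIRING = str.maketrans("ACGT", "TGCA")

def reverse_complement(seq: str) -> str:
    return seq.translate(PAIRING)[::-1]

print(reverse_complement("ATGC"))  # GCAT
```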
During replication, DNA unwinds and the DNA polymerase enzyme binds to and migrates down the single-stranded DNA, adding nucleotides according to the sequence of the complementary strand. The replication process can also be done in a test tube to copy DNA regions of interest. In vitro DNA replication requires the 4 nucleotides, a DNA polymerase enzyme, the template DNA to be copied, and a primer. A primer is a small piece of DNA, approximately 18-22 nucleotides, that binds to complementary DNA and acts as a starting point for the DNA polymerase. Thus, to replicate a piece of DNA in vitro, one has to know some of its sequence in order to design an effective primer. Sanger sequencing is modeled after in vitro DNA replication but relies on the random incorporation of modified, fluorescently tagged bases onto the growing DNA strand in addition to the normal A, T, C, or G nucleotides. Each of the 4 modified bases is tagged with a different fluorophore so they can be distinguished from one another. Similar to DNA replication, the Sanger sequencing reaction begins when a primer binds to its complementary DNA and the DNA polymerase begins adding nucleotides. The major difference in this process occurs when the polymerase incorporates a fluorescently tagged nucleotide. Because these special bases lack the chemical group needed to attach the next nucleotide, the reaction is halted once the fluorescently tagged base is incorporated. Sanger sequencing requires a lot of DNA because the ultimate goal is to have a fluorescently tagged nucleotide at each position in the DNA sequence. Thus, the final result is a group of newly synthesized DNA strands of varying lengths whose last nucleotide is labeled. Once all the newly synthesized DNA is made, the DNA molecules are then separated by size from shortest to longest and "read" using a sequencing machine that recognizes the different fluorescent labels.
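The read-out step described above, in which fragments sorted by length each report their labeled final base, can be mimicked in a toy sketch (the sequence and fragment set are made up for illustration):

```python
# Each terminated fragment is (length, fluorescent label of its last base).
# Sorting the fragments by length reconstructs the synthesized strand's sequence.
synthesized = "ATGC"  # hypothetical strand being read
fragments = [(3, "G"), (1, "A"), (4, "C"), (2, "T")]  # unsorted, as off the gel
read = "".join(base for _, base in sorted(fragments))
print(read)  # ATGC
```

The fragment of length n ends at position n of the strand, which is why size separation alone recovers the base order.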
The machine detects which fluorescently labeled nucleotide is present at the end of each fragment and assembles that information into the DNA sequence. Sanger sequencing results are presented as a sequencing chromatogram, which provides the color and intensity of each fluorescent signal. Sanger can sequence approximately 500-1000 bases downstream of the known primer region with very few errors, making it an efficient and reliable sequencing method. Next Generation Sequencing Although Sanger sequencing is quick and efficient, it is low throughput and can only sequence short pieces of DNA. This is not very useful when trying to sequence an entire plasmid or an organism's genome. One Sanger sequencing reaction would give you only 20 pieces of a 2,000-piece puzzle. A scientist would need to run a ton of Sanger sequencing reactions on different pieces of DNA to be able to assemble the whole puzzle. That's where Next Generation Sequencing (NGS) comes in. NGS is a high-throughput, massively parallel sequencing platform that can generate sequencing data for up to 600 billion bases in one reaction. In other words, NGS can give you most of the puzzle pieces in only a few reactions. There are multiple approaches to NGS, but one of the most commonly used is the Illumina NGS platform. This is the platform used by Addgene's sequencing partner, seqWell. The actual process of Illumina NGS is not that different from Sanger sequencing. This process, like Sanger, is based on DNA replication and utilizes modified fluorescently tagged nucleotides. During Illumina NGS, a long piece of DNA is first fragmented into small pieces, labeled with a short DNA barcode, and amplified. These DNA fragments are attached to a glass slide so that different fragments of DNA, or templates, are spatially separated from each other. These attached DNA templates are then amplified again, producing ~1,000 copies of each template.
Each template is then replicated using the modified bases, and a microscope captures the fluorescent color that is emitted each time a base is added. Again, each base (A, C, T, or G) is labelled with a different color, making it easy to identify the order of the DNA strand. Unlike in Sanger sequencing, however, these modified bases can be converted back to regular bases and thus do not halt the reaction. Illumina NGS, therefore, does not require any “normal” bases in the reaction. All the sequenced templates are then aligned to each other to assemble the entire sequence, or puzzle. It is important to note that NGS platforms in general do not require a specific primer for your DNA of interest; thus a completely unknown piece of DNA can be sequenced. At Addgene, all incoming plasmids are sequenced with NGS during our quality control process. NGS allows us to sequence entire plasmids, providing scientists with even more information to aid in the reproducibility of scientific research.

Resources

Genetic Code

The genetic code can be defined as a set of rules for translating the information encoded by DNA and RNA into proteins. DNA is composed of 4 nucleotides: Adenine (A), Thymine (T), Cytosine (C) and Guanine (G). In the double helix, A always pairs with T and C always pairs with G. RNA, on the other hand, consists of Adenine, Cytosine, Guanine and Uracil (U); uracil replaces thymine in RNA molecules. Every 3 nucleotides (a codon) in a DNA sequence encodes an amino acid. The genetic code is degenerate, so multiple codons can code for the same amino acid. There are 20 amino acids, plus start and stop codons. Below you will find helpful resource tables about the genetic code. These tables include the nucleotide and amino acid codes, in addition to ambiguous bases and common epitope tags. Ambiguous bases are included in a DNA sequence when sequencing is not 100% efficient and the machine cannot distinguish between the 4 labelled nucleotides.
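The codon-to-amino-acid mapping can be exercised in a few lines of Python. The sketch below uses a deliberately abbreviated codon table and a made-up coding sequence; a full table appears in the resources that follow.

```python
# Translate a short coding sequence with a subset of the standard genetic
# code (single-letter amino acid codes). Abbreviated table for illustration.
CODON_TABLE = {
    "AUG": "M",              # Methionine (also the start codon)
    "GCU": "A", "GCC": "A",  # Alanine
    "AAA": "K", "AAG": "K",  # Lysine
    "UGG": "W",              # Tryptophan
    "UAA": "*", "UAG": "*", "UGA": "*",  # stop codons
}

def translate(rna: str) -> str:
    """Read the RNA three bases at a time until a stop codon."""
    protein = []
    for i in range(0, len(rna) - 2, 3):
        amino_acid = CODON_TABLE[rna[i:i + 3]]
        if amino_acid == "*":
            break
        protein.append(amino_acid)
    return "".join(protein)

dna = "ATGGCTAAGTGGTAA"      # hypothetical coding sequence
rna = dna.replace("T", "U")  # transcribe: T -> U
print(translate(rna))        # MAKW
```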
Epitope tags, on the other hand, are commonly used in molecular cloning to tag a gene within a plasmid.

DNA and RNA

Single Letter Code: Primary bases | Nucleobase
A | Adenine
C | Cytosine
G | Guanine
T | Thymine
U | Uracil

Single Letter Code: Ambiguous bases | Nucleobase
B | C, G, or T
D | A, G, or T
H | A, C, or T
K | G or T
M | A or C
N | A, T, C, or G
R | A or G
S | C or G
V | A, C, or G
W | A or T
Y | C or T

Amino Acids

Name | Three Letter Code | Single Letter Code | Codons (RNA)
Alanine | Ala | A | GCU, GCC, GCA, GCG
Arginine | Arg | R | CGU, CGC, CGA, CGG, AGA, AGG
Asparagine | Asn | N | AAU, AAC
Aspartic acid | Asp | D | GAU, GAC
Cysteine | Cys | C | UGU, UGC
Glutamine | Gln | Q | CAA, CAG
Glutamic Acid | Glu | E | GAA, GAG
Glycine | Gly | G | GGU, GGC, GGA, GGG
Histidine | His | H | CAU, CAC
Isoleucine | Ile | I | AUU, AUC, AUA
Leucine | Leu | L | UUA, UUG, CUU, CUC, CUA, CUG
Lysine | Lys | K | AAA, AAG
Methionine | Met | M | AUG
Phenylalanine | Phe | F | UUU, UUC
Proline | Pro | P | CCU, CCC, CCA, CCG
Serine | Ser | S | UCU, UCC, UCA, UCG, AGU, AGC
Threonine | Thr | T | ACU, ACC, ACA, ACG
Tryptophan | Trp | W | UGG
Tyrosine | Tyr | Y | UAU, UAC
Valine | Val | V | GUU, GUC, GUA, GUG
Start | | | AUG
Stop | | | UAG (amber), UGA (opal), UAA (ochre)

AUG is the most common start codon. Alternative start codons include CUG in eukaryotes and GUG in prokaryotes.

Common Epitope Tags

Tag | Amino Acid Sequence
FLAG | DYKDDDDK
HA | YPYDVPDYA
His | HHHHHH
Myc | EQKLISEEDL
V5 | GKPIPNPLLGLDST
Xpress | DLDDDDK or DLYDDDDK
Thrombin | LVPRGS
BAD (Biotin Acceptor Domain) | GLNDIFEAQKIEWHE
Factor Xa | IEGR or IDGR
VSVG | YTDIEMNRLGK
SV40 NLS | PKKKRKV or PKKKRKVG
Protein C | EDQVDPRLIDGK
S Tag | KETAAAKFERQHMDS
SB1 | PRPSNKRLQQ

Webpage and Blog References

Addgene's blog, including our popular Plasmids 101 series, covers topics ranging from the newest breakthroughs in plasmid technologies and research to overviews of molecular biology basics and plasmid components.
Plasmids 101: Origin of Replication
Plasmids 101: The Promoter Region-Let’s Go Promoters Reference Page
Plasmids 101: How to Name Your Plasmid in 3 Easy Steps
Molecular Cloning Techniques Reference Guide
Plasmids 101: 5 factors to help you choose the right cloning method
Plasmids 101: Restriction Cloning
Plasmids 101: Inducible Promoters
Plasmids 101: What is a plasmid?
Plasmids 101: Antibiotic Resistance
Choosing Your Perfect Plasmid Backbone
6 tips for analyzing and troubleshooting sanger sequencing results
The Power Behind NGS Plasmid Verification: seqWell

Protocols

There are many different lab techniques that are important in molecular biology and plasmid cloning. Check out our webpages dedicated to lab protocols and videos for specific techniques and tips!

Addgene Protocols
Addgene Video Library
3987
https://web.viu.ca/krogh/chem311/UNITS%20OF%20CONCENTRATION%202016.pdf
UNITS OF CONCENTRATION 2016.doc

UNITS OF CONCENTRATION

There are a number of different ways of expressing solute concentration that are commonly used. Some of these are listed below.

Molarity, M = moles solute/liter of solution
Normality, N = equivalents of solute/liter of solution
Weight %, Wt % = mass ratio × 100% = (mass of solute/mass of solution) × 100%
Parts per million, ppm = mass ratio × 10⁶ = (mass of solute/mass of solution) × 10⁶
Mass per volume, mg/L = mass of solute/liter of solution
Molality, m = moles of solute/mass of solvent (kg)
Mole fraction, χ = moles of solute/total moles of solution

Concentrations expressed as ppm and N are less familiar to some students at this stage.

Parts per million: Parts per million concentrations are essentially mass ratios (solute to solution) × a million (10⁶). In this sense, they are similar to wt %, which could be thought of as parts per hundred (although nobody uses this term). Since 10⁶ milligrams = 1 kg, 1 mg/kg is equivalent to 1 ppm. Similarly, 1 µg/g and 1 ng/mg are also equivalent to 1 ppm. Given that the density of dilute aqueous solutions is ~1.00 kg/L, 1 mg/L of solute in freshwater ≈ 1 ppm. This is true for most freshwater and other dilute aqueous solutions, but not the case for seawater and concentrated wastewater solutions, where the density of the solution is greater than 1.00 kg/L. Other variations on this theme include:

ppt – parts per thousand (used for common ions in sea water)
ppb – parts per billion (used for trace heavy metals and organics)
pptr – parts per trillion (used for ultratrace metals and organics)

The following table summarizes common mass ratios for solutions and solids.
Unit | In General | Dilute Aqueous Solutions
ppm | mg/kg or µg/g | mg/L or µg/mL
ppb | µg/kg or ng/g | µg/L or ng/mL
pptr | ng/kg or pg/g | ng/L or pg/mL

To convert concentrations in mg/L (or ppm in dilute aqueous solution) to molarity, divide by the molar mass of the analyte to convert the mass in mg into a corresponding number of moles.

Example: What is the molarity of a 6.2 mg/L solution of O2(aq)?

To convert from molarity to mg/L (or ppm in dilute aqueous solution), multiply by the molar mass of the analyte to convert the number of moles into a corresponding mass in mg.

Example: The Maximum Acceptable Concentration (MAC) of Pb in drinking water is 10 ppb. If a sample has a concentration of 55 nM, does it exceed the MAC?

Note 1: In seawater, 1.00 mg/L ≠ 1.00 ppm, since the density of seawater is 1.035 kg/L. Hence, 1.00 mg/L seawater = 1.00 mg/L × 1 L/1.035 kg = 0.966 mg/kg or 0.966 ppm.

Example: The concentration of K+ in seawater is reported as 10.6 mM. Convert this concentration to ppm.

Note 2: Some concentrations are expressed in terms of the species actually measured, e.g., mg/L of NO3⁻ (mass of nitrate ions per liter), or in terms of a particular element in a species that was measured, e.g., mg/L of NO3⁻-N (mass of nitrogen in the form of nitrate ions per liter). To convert from one to the other of these, use the molar mass ratio of the element to that of the chemical species measured. In the example above, use 14 mg N/62 mg NO3⁻. It is important to clearly report unit values to avoid serious error in interpretation of results. Similar situations arise in reporting the concentrations of ammonia-nitrogen, phosphate-phosphorus and others.

Example: A water sample has a measured phosphate concentration of 6.8 µM. Express this as µg/L PO4³⁻ and as ppb PO4³⁻-P.

Note 3: Some aggregate parameters are reported in terms of a single surrogate species.
e.g., total hardness is usually reported as the mass of CaCO3 that would be required to provide the same number of moles of calcium ions.

Example: A groundwater sample has been determined to contain 100. ppm Ca and 80. ppm Mg by flame atomic absorption spectrophotometry. Express the total hardness as ppm CaCO3.

Example: A water sample has been found to contain 0.6 mM of As, F⁻ and NO3⁻. The drinking water guidelines for arsenic, fluoride and nitrate-nitrogen are 10 ppb, 1.5 ppm and 10 ppm, respectively. Does this sample exceed the drinking water guidelines for arsenic, fluoride or nitrate-nitrogen?

Example: A commercial bleach solution, NaOCl(aq), is reported to be 12.5% (by weight). If the density is 1.05 g/mL, calculate the molar concentration.

Example: A standard solution is prepared by dissolving 225 mg of sodium thiosulfate pentahydrate in a one litre volumetric flask. After thoroughly mixing, 5.00 mL was transferred to a 250. mL volumetric flask and diluted to the mark. What is the molar concentration of thiosulfate in the final solution?

Normality is a concentration unit that is still encountered in many texts and lab manuals. It has particular advantages when carrying out acid/base and redox titration calculations; however, it can be confusing for the uninitiated. Normality is defined as the number of equivalents of solute per liter, where an equivalent is defined as a mole of reacting species (H+ or e⁻). Normality is always a multiple of Molarity:

N = K × M, where K = # equivalents per mole (K is an integer constant ≥ 1)

Hence, Equiv. Weight = M.W./K and # equivalents of solute = mass of solute/equiv. weight.

K for a particular species is defined by the context of the chemical reaction (acid/base vs. redox) and the number of moles of H+ or e⁻ exchanged per mole of reacting substance. For acid/base rxn’s: K is the number of moles of H+ ions produced or neutralized per mole of acid or base supplied.
Thus,

Acid/base | K (equiv/mol) | M.W. (g/mol) | E.W. (g/equiv)
HCl | 1 | 36.5 | 36.5
H2SO4 | 2 | 98.1 | 49.0
CaCO3 | 2 | 100 | 50.0
Al(OH)3 | 3 | 78.0 | 26.0

Thus, for the reaction

CaCO3 + 2 H+ → Ca2+ + H2CO3

there are 2 moles of H+ transferred per mole of CaCO3 reacted, so K for CaCO3 (in this context) is equal to 2 equiv/mol and the equivalent weight of CaCO3 is equal to 50 g/equiv.

For oxidation/reduction reactions, K is the number of moles of e⁻ transferred per mole of oxidant or reductant in the balanced half-reaction.

Balanced half reaction | K (equiv/mol)
Fe3+ → Fe | 3
I2 → 2 I⁻ | 2
2 S2O3²⁻ → S4O6²⁻ | 1

Thus, for the reaction

I2 + 2 S2O3²⁻ → 2 I⁻ + S4O6²⁻

there are 2 moles of electrons transferred per mole of I2 reacted, so K for I2 (in this context) is 2 equiv/mol.

Working with Normality in titration calculations

Method 1: Use the appropriate conversion factor (K = # equiv/mol) to convert Normality to Molarity (i.e., 0.250 N H2SO4 = 0.125 M H2SO4) and use the coefficients in the balanced chemical equations to solve for the number of moles of analyte in a given sample volume.

Method 2: Use the Normal concentrations directly and convert your final analyte concentration from # equiv/L to moles/L or mg/L using K or E.W., respectively. Although you will need to know the chemical form of the analyte in the final product, you do not need the balanced chemical equations.

# of equiv. of analyte = # of equiv. of titrant

This is particularly useful in volumetric determinations, where we can always write

N(analyte) × V(analyte) = N(titrant) × V(titrant)

regardless of the complexity of the reaction chemistry involved.

EXAMPLES

1. When 25.00 mL of NaOH solution was titrated with 0.572 N H2SO4, 23.40 mL was required to reach the end point. Find the molarity of the NaOH. Method 1 Method 2

2.
The Winkler titration for the determination of dissolved oxygen involves the treatment of the sample with iodide ion (I⁻) in the presence of a manganese ion catalyst (Mn2+) as follows:

O2 + 2 Mn2+ + 4 OH⁻ → 2 MnO2(s) + 2 H2O
MnO2(s) + 4 H+ + 2 I⁻ → Mn2+ + I2 + 2 H2O

The liberated iodine is then titrated with a standard thiosulfate solution:

I2 + 2 S2O3²⁻ → 2 I⁻ + S4O6²⁻

A 50.00 mL sample of water was treated as above and the I2 liberated was titrated with 8.11 mL of 0.01136 N Na2S2O3 to reach the end point. Determine the concentration of the dissolved O2 in equiv/L, moles/L and mg/L.

More on those darn Normalities

Normality (N) is an expression of solute concentration like Molarity (M), except that it takes into account the actual number of reacting species per mole of reagent (i.e., protons in the case of acid/base reactions or electrons in the case of redox reactions). For acids, an equivalent is defined as one mole of protons. The equivalent amount of any acid is the amount of acid that delivers one mole of H+. So for H2SO4, one equivalent is ½ of one mole, since each mole of H2SO4 produces two moles of H+. Consequently, the equivalent weight is half of the molecular weight. Similarly, for bases, an equivalent amount of a base is defined as the amount of base that neutralizes one mole of H+. For CaCO3, one equivalent is ½ of one mole, since each mole of CaCO3 neutralizes two moles of H+. And again, the equivalent weight is ½ the molecular weight. Put another way, K (which is an expression of the number of equivalents supplied per mole of a substance) is equal to the number of moles of H+ produced per mole of substance. Thus, K = 1 equiv/mol for HCl and NaOH, but K = 2 equiv/mol for H2SO4 and CaCO3. For redox rxns, an equivalent is defined as the amount of a substance that delivers one mole of electrons. So for the reaction in which

O2 → 2 H2O

the oxidation state of each oxygen atom drops from 0 to –2.
Thus a total of 4 equivalents have been transferred for each mole of O2 reacted, and K = 4 equiv/mol. Put another way, K (which is an expression of the number of equivalents supplied per mole of a substance) is equal to the number of moles of e⁻ transferred per mole of substance. For the reaction

2 S2O3²⁻ → S4O6²⁻

the oxidation state on each sulfur increases from +2 to +2.5 (on average), for a net change of ½ per S atom. Since there are two S atoms per S2O3²⁻, each mole of thiosulfate is involved in the transfer of 1 equivalent of e⁻ in reacting to form S4O6²⁻. Thus K = 1.

Conversion Chart for Concentrations of O2

FROM \ TO | mg O2/L | moles O2/L | equiv O2/L
mg O2/L | – | 1 mol/32,000 mg | 1 equiv/8000 mg
moles O2/L | 32,000 mg/mol | – | 4 equiv/mol
equiv O2/L | 8000 mg/equiv | 1 mol/4 equiv | –

Construct a similar conversion table for CaCO3. Other conversion charts can be prepared that include converting to mg/L NO3⁻-N, mg/L NO3⁻ and mM.
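The worked examples above reduce to straightforward unit arithmetic. A Python sketch of that arithmetic follows; the molar masses are rounded values assumed here for illustration, not part of the original notes.

```python
# Unit-conversion arithmetic for the worked examples above.
# Molar masses below are rounded values assumed for illustration.
M_O2, M_PB, M_K = 32.00, 207.2, 39.10       # g/mol
M_PO4, M_P = 94.97, 30.97                   # g/mol
M_CA, M_MG, M_CACO3 = 40.08, 24.31, 100.09  # g/mol

# mg/L -> molarity: divide by the molar mass in mg/mol
o2_molar = 6.2 / (M_O2 * 1000)              # 6.2 mg/L O2 -> ~1.9e-4 M

# molarity -> ppb: 55 nM Pb vs. the 10 ppb MAC
pb_ppb = 55e-9 * M_PB * 1e6                 # g/L -> ug/L
print(f"Pb: {pb_ppb:.1f} ppb, exceeds MAC: {pb_ppb > 10}")  # 11.4 ppb, True

# 10.6 mM K+ in seawater -> ppm (divide by the density, 1.035 kg/L)
k_ppm = 10.6e-3 * M_K * 1000 / 1.035        # ~400 ppm

# 6.8 uM phosphate -> ug/L PO4(3-) and ppb PO4-P
po4_ugL = 6.8e-6 * M_PO4 * 1e6              # ~646 ug/L
p_ppb = 6.8e-6 * M_P * 1e6                  # ~211 ppb as P

# Hardness: 100. ppm Ca + 80. ppm Mg expressed as ppm CaCO3
hardness = (100 / M_CA + 80 / M_MG) * M_CACO3  # ~579 ppm CaCO3

# Titration via equivalents: N(analyte) x V(analyte) = N(titrant) x V(titrant)
n_naoh = 0.572 * 23.40 / 25.00              # equiv/L; K = 1 for NaOH, so M = N

# Winkler DO: equivalents of O2 = equivalents of thiosulfate used
n_o2 = 0.01136 * 8.11 / 50.00               # equiv/L
mol_o2, mg_o2 = n_o2 / 4, n_o2 * 8000       # K = 4 equiv/mol; 8000 mg/equiv
print(f"NaOH {n_naoh:.3f} M; O2 {mol_o2:.2e} M = {mg_o2:.1f} mg/L")
```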
3988
https://www.geeksforgeeks.org/maths/graphical-solution-of-linear-programming-problems/
Graphical Solution of Linear Programming Problems - GeeksforGeeks

Last Updated : 23 Jul, 2025

Linear programming is the simplest way of optimizing a problem. Through this method, we can formulate a real-world problem into a mathematical model. There are various methods for solving Linear Programming Problems, and one of the easiest and most important is the graphical method. In the graphical solution of linear programming, we use graphs to solve the LPP. We can solve a wide variety of problems using linear programming in different sectors, but it is generally used for problems in which we have to maximize profit, minimize cost, or minimize the use of resources. In this article, we will learn about graphical solutions of linear programming problems, their types, and worked examples.
Table of Content

- Linear Programming
- Types of Linear Programming Problems
- Graphical Solution of a Linear Programming Problem
- Corner Point Methods
- Solved Examples of LPP using Corner Point Method
- Iso-Cost Method
- Solved Examples of Graphical Solution of LPP using Iso-Cost Method
- Practice Questions on Graphical Solution of LPP

Linear Programming

Linear programming is a mathematical technique employed to determine the most favorable solution for a problem characterized by linear relationships. It is a valuable tool in fields such as operations research, economics, and engineering, where efficient resource allocation and optimization are critical. Now, let's learn about the types of linear programming problems.

Types of Linear Programming Problems

There are mainly three types of problems based on linear programming:

- Manufacturing Problem: In this type of problem, constraints such as manpower, output units/hour, and machine hours are given in the form of linear equations, and we have to find an optimal solution that makes a maximum profit or a minimum cost.
- Diet Problem: These problems are generally easy to understand and have fewer variables. Our main objective in this kind of problem is to minimize the cost of the diet while keeping a minimum amount of every constituent in the diet.
- Transportation Problem: In these problems, we have to find the cheapest way of transportation by choosing the shortest route/optimized path.

Some commonly used terms in linear programming problems are:

- Objective function: The linear function of the form Z = ax + by, where a and b are constants, which is to be maximized or minimized, is called the objective function. For example, Z = 10x + 7y. The variables x and y are called the decision variables.
- Constraints: The restrictions that are applied as linear inequalities are called constraints.
  - Non-Negative Constraints: x ≥ 0, y ≥ 0, etc.
  - General Constraints: x + y > 40, 2x + 9y ≥ 40, etc.
- Optimization problem: A problem that seeks the maximization or minimization of the objective function subject to linear inequality constraints is called an optimization problem.
- Feasible Region: The common region determined by all the given constraints, including the non-negativity constraints (x ≥ 0, y ≥ 0), is called the feasible region (or solution area) of the problem. The region other than the feasible region is known as the infeasible region.
- Feasible Solutions: Points within or on the boundary of the feasible region represent feasible solutions to the problem. Any point outside the feasible region is called an infeasible solution.
- Optimal (Most Feasible) Solution: Any point in the feasible region that gives the optimal value (maximum or minimum) of the objective function is called an optimal solution.

➤NOTE

- If we have to find the maximum output, we consider the innermost intersecting points of all equations.
- If we have to find the minimum output, we consider the outermost intersecting points of all equations.
- If the inequalities have no point in common, then there is no feasible solution.

Graphical Solution of a Linear Programming Problem

We can solve linear programming problems using two different methods:

1. Corner Point Methods
2. Iso-Cost Methods

Corner Point Methods

To solve the problem using the corner point method, you need to follow the following steps:

Step 1: Create a mathematical formulation from the given problem, if not given.
Step 2: Now plot the graph using the given constraints and find the feasible region.
Step 3: Find the coordinates of the feasible region (vertices) that we get from step 2.
Step 4: Evaluate the objective function at each corner point of the feasible region. Let N and n denote the largest and smallest of these values.
Step 5: If the feasible region is bounded, then N and n are the maximum and minimum values of the objective function.
Or if the feasible region is unbounded, then:

- N is the maximum value of the objective function if the open half plane determined by ax + by > N has no common point with the feasible region. Otherwise, the objective function has no maximum.
- n is the minimum value of the objective function if the open half plane determined by ax + by < n has no common point with the feasible region. Otherwise, the objective function has no minimum.

Solved Examples of LPP using Corner Point Method

Example 1: Solve the given linear programming problem graphically:

Maximize: Z = 8x + y

Constraints are,

x + y ≤ 40
2x + y ≤ 60
x ≥ 0, y ≥ 0

Solution:

Step 1: Constraints are,

x + y ≤ 40
2x + y ≤ 60
x ≥ 0, y ≥ 0

Step 2: Draw the graph using these constraints. Here both constraints are of the "less than or equal to" type, so they satisfy the region towards the origin. You can find the vertices of the feasible region from the graph, or calculate them from the given constraints:

x + y = 40 ...(i)
2x + y = 60 ...(ii)

Multiply eq(i) by 2 and subtract eq(ii) from it to get y = 20. Now put the value of y into either equation to get x = 20. So the third vertex of the feasible region is (20, 20).

Step 3: To find the maximum value of Z = 8x + y, compare the value of Z at each corner point of the graph:

| Points | Z = 8x + y |
| --- | --- |
| (0, 0) | 0 |
| (0, 40) | 40 |
| (20, 20) | 180 |
| (30, 0) | 240 |

So the maximum value of Z = 240 at point x = 30, y = 0.

Example 2: One kind of cake requires 200 g of flour and 25 g of fat, and another kind of cake requires 100 g of flour and 50 g of fat. Find the maximum number of cakes that can be made from 5 kg of flour and 1 kg of fat, assuming that there is no shortage of the other ingredients used in making the cakes.

Solution:

Step 1: Create a table like this for easy understanding (not necessary).
| | Flour (g) | Fat (g) |
| --- | --- | --- |
| Cake of first kind (x) | 200 | 25 |
| Cake of second kind (y) | 100 | 50 |
| Availability | 5000 | 1000 |

Step 2: Create linear inequalities from the constraints:

200x + 100y ≤ 5000, or 2x + y ≤ 50
25x + 50y ≤ 1000, or x + 2y ≤ 40

Also, x ≥ 0 and y ≥ 0.

Step 3: Create a graph using the inequalities (remember to take only the positive x and y axes).

Step 4: To find the maximum number of cakes, Z = x + y, compare the value of Z at each intersection point of the graph:

| x | y | Z = x + y |
| --- | --- | --- |
| 0 | 20 | 20 |
| 20 | 10 | 30 |
| 25 | 0 | 25 |

Z is maximum at the point (20, 10). So the maximum number of cakes that can be baked is Z = 20 + 10 = 30.

Iso-Cost Method

The iso-cost or iso-profit method uses the fact that every combination of points on the same line produces the same cost/profit as any other combination on that line: an iso-cost line is a line on the graph where every point gives the same value of the objective function Z = ax + by. The method works by plotting lines parallel to this objective-function line.

To solve the problem using the Iso-cost method, you need to follow the following steps:

Step 1: Create a mathematical formulation from the given problem, if not given.
Step 2: Now plot the graph using the given constraints and find the feasible region.
Step 3: Find the coordinates of the feasible region (vertices) that we get from step 2.
Step 4: Find a convenient value of Z (the objective function) and draw the line of this objective function.
Step 5: If the objective function is of maximum type, draw a line that is parallel to the objective-function line, is farthest from the origin, and has only one common point with the feasible region. If the objective function is of minimum type, draw a line that is parallel to the objective-function line, is nearest to the origin, and has at least one common point with the feasible region.
Step 6: Now get the coordinates of the common point that we find in step 5.
Now, this point is used to find the optimal solution and the value of the objective function.

Solved Examples of Graphical Solution of LPP using Iso-Cost Method

Example 1: Solve the given linear programming problem graphically:

Maximize: Z = 50x + 15y

Constraints are,

5x + y ≤ 100
x + y ≤ 50
x ≥ 0, y ≥ 0

Solution:

Given,

5x + y ≤ 100
x + y ≤ 50
x ≥ 0, y ≥ 0

Step 1: Finding points. We can also write:

5x + y = 100 ....(i)
x + y = 50 ....(ii)

Now we find the points. For eq(i): when x = 0, y = 100; when y = 0, x = 20. So the points are (0, 100) and (20, 0). Similarly, for eq(ii): when x = 0, y = 50; when y = 0, x = 50. So the points are (0, 50) and (50, 0).

Step 2: Now plot these points in the graph and find the feasible region.

Step 3: Now we find a convenient value of Z (the objective function). To do so, take the LCM of the coefficients of 50x + 15y, i.e., 150, and choose a multiple of it, e.g., Z = 300. Hence, 50x + 15y = 300. Now we find the points: put x = 0 to get y = 20; put y = 0 to get x = 6. Draw the line of this objective function on the graph.

Step 4: As the objective function is of maximum type, we draw a line which is parallel to the objective-function line, is farthest from the origin, and has only one common point with the feasible region.

Step 5: We have a common point, (12.5, 37.5), with the feasible region. So now we find the optimal value of the objective function:

Z = 50x + 15y
Z = 50(12.5) + 15(37.5)
Z = 625 + 562.5
Z = 1187.5

Thus, the maximum value of Z with the given constraints is 1187.5.
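The corner-point search for a two-variable LP like this one can also be brute-forced in code. The Python sketch below (not part of the original article) enumerates the pairwise intersections of the constraint boundary lines, keeps only the feasible ones, and maximizes Z = 50x + 15y over them.

```python
from itertools import combinations

# Brute-force corner-point method for a 2-variable LP (illustrative sketch):
# maximize Z = 50x + 15y subject to 5x + y <= 100, x + y <= 50, x, y >= 0.
# Each constraint is stored as (a, b, c) meaning a*x + b*y <= c; the
# non-negativity constraints are written as -x <= 0 and -y <= 0.
constraints = [(5, 1, 100), (1, 1, 50), (-1, 0, 0), (0, -1, 0)]

def intersect(c1, c2):
    """Intersection point of the two boundary lines, or None if parallel."""
    a1, b1, r1 = c1
    a2, b2, r2 = c2
    det = a1 * b2 - a2 * b1
    if det == 0:
        return None
    # Cramer's rule for a1*x + b1*y = r1, a2*x + b2*y = r2
    return ((r1 * b2 - r2 * b1) / det, (a1 * r2 - a2 * r1) / det)

def feasible(pt, eps=1e-9):
    return all(a * pt[0] + b * pt[1] <= c + eps for a, b, c in constraints)

vertices = [p for c1, c2 in combinations(constraints, 2)
            if (p := intersect(c1, c2)) and feasible(p)]
best = max(vertices, key=lambda p: 50 * p[0] + 15 * p[1])
print(best, 50 * best[0] + 15 * best[1])  # (12.5, 37.5) 1187.5
```

Vertex enumeration scales poorly as constraints are added, which is why larger problems move to the Simplex method, but for two variables it mirrors the graphical procedure exactly.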
Example 2: Solve the given linear programming problem graphically:

Minimize: Z = 20x + 10y

Constraints are,

x + 2y ≤ 40
3x + y ≥ 30
4x + 3y ≥ 60
x ≥ 0, y ≥ 0

Solution:

Given,

x + 2y ≤ 40
3x + y ≥ 30
4x + 3y ≥ 60
x ≥ 0, y ≥ 0

Step 1: Finding points. We can also write:

l1: x + 2y = 40 ....(i)
l2: 3x + y = 30 ....(ii)
l3: 4x + 3y = 60 ....(iii)

Now we find the points. For eq(i): when x = 0, y = 20; when y = 0, x = 40. So the points are (0, 20) and (40, 0). For eq(ii): when x = 0, y = 30; when y = 0, x = 10. So the points are (0, 30) and (10, 0). For eq(iii): when x = 0, y = 20; when y = 0, x = 15. So the points are (0, 20) and (15, 0).

Step 2: Now plot these points in the graph and find the feasible region.

Step 3: Now we find a convenient value of Z (the objective function). Let us take Z = 0, so 20x + 10y = 0, i.e., x = -½y. Draw the line of this objective function on the graph.

Step 4: As the objective function is of minimum type, we draw a line which is parallel to the objective-function line, is nearest to the origin, and has at least one common point with the feasible region. This parallel line touches the feasible region at point A. As you can see from the graph, lines l2 and l3 intersect at point A, so we find its coordinates by solving these equations:

3x + y = 30 ....(iv)
4x + 3y = 60 ....(v)

Multiply eq(iv) by 4 and eq(v) by 3 to get 12x + 4y = 120 and 12x + 9y = 180. Subtracting gives 5y = 60, so y = 12; substituting into eq(iv) gives x = 6. Hence A = (6, 12).

Step 5: We have a common point, (6, 12), with the feasible region. So now we find the optimal value of the objective function:

Z = 20x + 10y
Z = 20(6) + 10(12)
Z = 120 + 120
Z = 240

Thus, the minimum value of Z with the given constraints is 240.

Practice Questions on Graphical Solution of LPP

Question 1. Maximize Z = 4x + 3y subject to: 2x + y ≤ 8, x + 2y ≤ 8, x, y ≥ 0
Question 2.
Minimize Z = 5x + 7y subject to: x + y ≥ 6, 2x + 3y ≤ 12, x, y ≥ 0
Question 3. Maximize Z = 2x + 5y subject to: x + 4y ≤ 2, 3x + y ≤ 9, x, y ≥ 0
Question 4. Minimize Z = 3x + 4y subject to: x + y ≤ 5, 2x + 3y ≥ 12, x, y ≥ 0
Question 5. Maximize Z = 6x + 4y subject to: x + 2y ≤ 10, 2x + y ≥ 8, x, y ≥ 0
Question 6. Minimize Z = 2x + 3y subject to: x + y ≥ 4, x − y ≤ 2, x, y ≥ 0
Question 7. Maximize Z = 7x + 5y subject to: x + 3y ≤ 15, 2x + y ≥ 6, x, y ≥ 0
Question 8. Minimize Z = x + 4y subject to: x + y ≤ 7, x − 2y ≥ 3, x, y ≥ 0
Question 9. Maximize Z = 8x + 2y subject to: 3x + y ≤ 18, x + 2y ≥ 10, x, y ≥ 0
Question 10. Minimize Z = 3x + 5y subject to: 2x + y ≥ 7, x + 3y ≤ 12, x, y ≥ 0

Conclusion

The graphical method for solving linear programming problems is a powerful visualization tool for problems with two variables. By plotting constraints and identifying the feasible region, one can find the optimal solution by evaluating the objective function at the corner points. This method not only provides insights into the problem but also helps in understanding the impact of each constraint on the solution. However, for problems with more than two variables, other techniques such as the Simplex method are required.

Author: ankitzm
3989
https://www.tiger-algebra.com/en/solution/combination-without-repetition/c%2885%2C3%29/
Copyright Ⓒ 2013-2025 tiger-algebra.com

Solution - Combinations without repetition

Step-by-step explanation

1. Find the number of terms in the set. n represents the total number of items in the set: in c(n,k), here c(85,3), n = 85.
2. Find the number of items selected from the set. k represents the number of items selected from the set: in c(n,k), here c(85,3), k = 3.
3. Calculate the combinations using the formula. Plug n (n=85) and k (k=3) into the combination formula:

C(n,k) = n! / (k!(n−k)!)
C(85,3) = 85! / (3!(85−3)!)
C(85,3) = 85! / (3! · 82!)
C(85,3) = (85 · 84 · 83 · 82!) / (3! · 82!)
C(85,3) = (85 · 84 · 83) / (3 · 2 · 1)
C(85,3) = 98,770

There are 98,770 ways that 3 items chosen from a set of 85 can be combined.

Why learn this

Combinations and permutations

If you have 2 types of crust, 4 types of toppings, and 3 types of cheese, how many different pizza combinations can you make? If there are 8 swimmers in a race, how many different sets of 1st, 2nd, and 3rd place winners could there be? What are your chances of winning the lottery? All of these questions can be answered using two of the most fundamental concepts in probability: combinations and permutations. Though these concepts are very similar, probability theory holds that they have some important differences. Both combinations and permutations are used to calculate the number of possible arrangements of things.
The most important difference between the two, however, is that combinations deal with arrangements in which the order of the items being arranged does not matter—such as combinations of pizza toppings—while permutations deal with arrangements in which the order of the items does matter—such as the code for a combination lock, which should really be called a permutation lock because the order of the input matters. What these two concepts have in common is that they both help us understand the relationships between sets and the items or subsets that make up those sets. As the examples above illustrate, this can be used to better understand many different types of situations.
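The examples above can be checked directly in Python, whose `math.comb` and `math.perm` functions (Python 3.8+) implement the two counting rules. The numbers below simply restate the page's own examples; the pizza count assumes exactly one pick from each category.

```python
import math

# Combinations: order does not matter.
# Choosing 3 items from a set of 85, as in the worked example above:
print(math.comb(85, 3))   # 98770

# Same result from the factorial formula C(n,k) = n! / (k! * (n-k)!):
n, k = 85, 3
assert math.comb(n, k) == math.factorial(n) // (math.factorial(k) * math.factorial(n - k))

# Permutations: order matters.
# 1st/2nd/3rd place finishes among 8 swimmers:
print(math.perm(8, 3))    # 336

# Independent choices multiply: 2 crusts x 4 toppings x 3 cheeses
# (assuming exactly one pick from each category):
print(2 * 4 * 3)          # 24
```

Note that `math.comb` and `math.perm` use exact integer arithmetic, so they avoid the overflow and rounding issues of computing 85! directly in floating point.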
3990
https://www.nature.com/articles/s41598-023-33812-w
Article | Open access

Correlation between genotype and phenotype with special attention to hearing in 14 Japanese cases of NF2-related schwannomatosis

Naoki Oishi1, Masaru Noguchi1, Masato Fujioka1,2,3, Kiyomitsu Nara4, Koichiro Wasano4,5, Hideki Mutai4, Rie Kawakita6, Ryota Tamura7, Kosuke Karatsu7, Yukina Morimoto7, Masahiro Toda7, Hiroyuki Ozawa1 & Tatsuo Matsunaga4

Scientific Reports volume 13, Article number: 6595 (2023)

Subjects: Disease genetics; Genetics research; Molecular medicine; Neurological disorders

Abstract

NF2-related schwannomatosis (NF2) is an autosomal dominant genetic disorder caused by variants in the NF2 gene. Approximately 50% of NF2 patients inherit pathogenic variants, and the remainder acquire de novo variants. NF2 is characterized by development of bilateral vestibular schwannomas. The genetic background of Japanese NF2 cases has not been fully investigated, and the present report performed a genetic analysis of 14 Japanese NF2 cases and examined genotype–phenotype correlations. DNA samples collected from peripheral blood were analyzed by next-generation sequencing, multiplex ligation-dependent probe amplification analysis, and in vitro electrophoresis.
Ten cases had pathogenic or likely pathogenic variants in the NF2 gene, with seven truncating variants and three non-truncating variants. The age of onset in all seven cases with truncating variants was < 20 years. The age of onset significantly differed among cases with truncating NF2 variants, non-truncating NF2 variants, and no NF2 variants. However, the clinical course of tumor growth and hearing deterioration was not predicted only by germline pathogenic NF2 variants. The rate of truncating variants was higher in the present study than in previous reports. Genotype–phenotype correlations in the age of onset were present in the analyzed Japanese NF2 cases.

Introduction

NF2-related schwannomatosis (NF2) (formerly Neurofibromatosis type 2) is an autosomal dominant genetic disorder caused by variants in the NF2 gene located on the long arm of chromosome 221,2,3. The disease is characterized by development of bilateral vestibular schwannomas (VS) and other benign intracranial and spinal tumors such as meningiomas, schwannomas, and ependymomas1,2. The birth incidence rate is approximately 1 in 25,000–33,000, and ~50% of patients with non-heritable disease develop de novo variants1,2,4. Prevalence is estimated at 1 in 50,500–67,700 in the UK4. The frequencies of tumor presence are 90–95% for bilateral VS, 24–51% for cranial nerve tumors other than VS, 45–58% for intracranial meningiomas, and 63–90% for spinal cord tumors2.
Histopathological comparison of NF2 VS with sporadic VS revealed that NF2 VS tends to infiltrate nerve fibers and form multicentric patterns5,6,7. In clinical disease course, NF2 VS developed at a younger age, grew faster, and hearing deteriorated more rapidly than in sporadic VS8,9. Between 60 and 81% of NF2 cases also develop juvenile cataracts2. In sporadic NF2 cases, NF2 variants are identified in DNA obtained from peripheral blood in 35–75% of cases10,11. The frequency of variants is 51–55% for truncating variants such as nonsense variants and frameshift variants in which protein synthesis is interrupted, 5–9% for non-truncating variants such as missense variants and in-frame deletions in which protein synthesis is not interrupted, and 16–22% for splice site variants10,11. Copy number variations (CNVs), such as large deletions, are reported to occur in 17–24% of cases10,11. In addition, the overall probable mosaicism rate in de novo NF2 cases is estimated to be approximately 60% using next-generation sequencing12. NF2 exhibits genotype–phenotype correlations11,12,13,14,15,16,17,18,19. Compared with non-truncating and splice site variant cases, truncating variant cases are characterized by younger age of onset, higher risk of death, greater number of tumors other than VS, and lower age at loss of useful hearing11,13,15,16,17,18,19. NF2 variants in the exons 14–15 encoding C-terminal amino acid residues 525–595 show lower risk of cranial meningioma20. The severity of NF2 is likely to be milder in mosaic cases than in cases with germline variants10,11,12,13,14,19. Previous studies have reported no significant differences between the types of variants in hearing levels or tumor growth rates and sizes during the untreated course21,22,23,24,25,26. In terms of demographic variation, the genetic background of Asian NF2 cases has not been fully studied, and genotype–phenotype correlation in Asian NF2 cases was recently reported in a study19.
However, there have been no other reports from Asian countries which assessed the relationship between genotype and hearing. Therefore, this report performed a genetic analysis of 14 Japanese NF2 cases and examined the genotype–phenotype correlations between NF2 gene variants and clinical course, especially hearing prior to intervention.

Results

Clinical data and detected pathogenic or likely pathogenic (P/LP) variants in each case, variant type, and evaluation of pathogenicity are shown in Table 1. The age at genetic testing was 6–76 years (median 25.5 years). Thirteen of the 14 cases were sporadic, and Case 10 had a first-degree relative with NF2. Case 12 had a unilateral VS with trigeminal schwannoma as well as cataract diagnosed at the age of 42. Targeted resequencing analysis revealed truncating variants in seven cases (Cases 1–7). Five of the cases with truncating variants had nonsense variants, and two had frameshift variants. In addition, one case (Case 8) had a splice site variant and another case (Case 9) had an in-frame deletion. In the five cases (Cases 10–14) in which P/LP variants were not detected by targeted resequencing analysis, multiplex ligation-dependent probe amplification (MLPA) analysis was performed. Heterozygous deletion of exon 11 was detected in Case 10 (Fig. 1a) and no CNVs were detected in the other four cases. Due to the existence of a polymorphic variant, c.1113C>T, on the sequence corresponding to the MLPA probe in Case 10, we further confirmed the deletion by a read-depth analysis of the targeted-sequencing data (Supplementary Figure S1). Because exon 11 is 123 bp, deletion of the exon was considered to result in production of in-frame non-truncating NF2 protein. The variant was predicted to remove less than 10% of the protein and evaluated as likely pathogenic.
For Case 8, which had a splice variant, and Case 9, which had an in-frame deletion, the effects of the variants on protein splicing were tested by in vitro electrophoresis. In Case 8, the variant was detected at the donor site of exon 5 and predicted as a splice site variant. A band was detected at the same position as the empty vector, and exon 5 (69 bp) was predicted to be skipped (Fig. 1b, Supplementary Figure S2). In Case 9, the deletion was detected at exon 3 and predicted as an in-frame deletion. The deletion did not have an identifiable splicing effect (Fig. 1b, Supplementary Figure S2). Therefore, both Cases 8 and 9 were predicted to have non-truncating, in-frame deletions. Both variants were predicted to remove less than 10% of the protein and evaluated as likely pathogenic. The age of onset of NF2-related symptoms is shown in Fig. 2. The ages of onset of all seven cases predicted to have truncating variants were < 20 years (median 10 years; range 0–17 years). The median age of onset in the three cases predicted to have non-truncating variants was 27 years (23–34 years), and the median age of onset of the other four cases with no P/LP variants or CNV detected was 38 years (16–76 years). The difference in age of onset among the three groups was statistically significant (p = 0.0014). A Dunn's multiple comparisons test revealed that the median age of onset of truncating variant cases was significantly lower than that of the 'not detected' cases (p = 0.0151). Tumor volumes of VS were followed for more than one year in 13 tumors of seven cases (Cases 3, 5, 8, 9, 10, 11, 13) (Fig. 3). The average follow-up period was 4.2 ± 2.6 years (median 3.0 years). Tumor volumes increased during follow-up in most of the affected ears. The average growth rate was 1067.5 ± 621.5 mm³/year in the four truncating variant ears and 674.8 ± 905.9 mm³/year in the other nine ears.
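The three-group age-of-onset comparison above used a Kruskal–Wallis test (computed in IBM SPSS; see Materials and methods). As a rough illustration of the statistic itself, the pure-Python sketch below computes the tie-corrected H value. The three age lists are hypothetical values consistent only with the reported medians and ranges, not the actual patient data, and a Dunn's post-hoc test (not shown) would still be needed for pairwise comparisons.

```python
from collections import Counter

def average_ranks(values):
    """Rank all observations (1-based), assigning tied values their average rank."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    ranks = [0.0] * len(values)
    i = 0
    while i < len(values):
        j = i
        # Extend j over any run of tied values.
        while j + 1 < len(values) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1  # average of positions i..j as 1-based ranks
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    return ranks

def kruskal_h(*groups):
    """Tie-corrected Kruskal-Wallis H statistic over two or more groups."""
    pooled = [x for g in groups for x in g]
    n = len(pooled)
    ranks = average_ranks(pooled)
    h, start = 0.0, 0
    for g in groups:  # sum of (rank total)^2 / group size, group by group
        h += sum(ranks[start:start + len(g)]) ** 2 / len(g)
        start += len(g)
    h = 12.0 / (n * (n + 1)) * h - 3 * (n + 1)
    ties = sum(t ** 3 - t for t in Counter(pooled).values())
    return h / (1 - ties / (n ** 3 - n)) if ties else h

# HYPOTHETICAL onset ages, chosen only to match the reported medians/ranges:
truncating     = [0, 5, 8, 10, 12, 15, 17]   # median 10, range 0-17
non_truncating = [23, 27, 34]                # median 27
not_detected   = [16, 35, 41, 76]            # median 38, range 16-76
print(round(kruskal_h(truncating, non_truncating, not_detected), 2))  # 9.24
```

With only 14 observations a rank-based test like this is the natural choice, since age-of-onset distributions are small and skewed and a normality assumption would be hard to justify.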
The tumor growth rate varied among the affected ears, and no consistent trend was detected between variant types. The courses of both ears were followed in six cases, and tumor volumes had asymmetric changes in two of the six cases (Cases 11 and 13). Pure-tone averages (PTAs) were followed for > 1 year in 18 ears of ten cases (Cases 1, 2, 3, 5, 8, 9, 10, 11, 13, 14) (Fig. 4). The average follow-up period was 3.5 ± 2.6 years (median 3 years). PTAs increased in most of the affected ears. The average increase rate was 6.0 ± 5.3 dB/year in seven ears of truncating variant cases and 0.3 ± 6.4 dB/year in the remaining eleven ears. The PTA increase rates varied among affected ears, and no consistent trend was detected between variant types. The courses of both ears were followed in eight cases, with asymmetric PTA changes in six of the eight cases (Cases 1, 3, 8, 10, 11, and 13). Speech discrimination scores (SDSs) were followed for > 1 year in nine ears of five cases (Cases 3, 8, 9, 11, 13) (Fig. 5). The average follow-up period was 4.3 ± 2.5 years (median 4.0 years). SDSs deteriorated in most of the affected ears. The average deterioration rate was 9.4 ± 2.7%/year in two ears with the truncating variant and 3.8 ± 9.0%/year in the other seven ears. The SDS deterioration rate varied among affected ears, but no consistent trend was detected among variant types. The courses of both ears were followed in four cases, revealing asymmetric SDS changes in three of the four cases (Cases 8, 11, and 13).

Discussion

In the 14 evaluated Japanese NF2 cases, P/LP variants were detected in ten cases (71.4%). The variant detection rate in peripheral blood was consistent with that of previous reports (35–75%)10,11. None of the 14 patients had a family history of NF2, which is consistent with the results of Teranishi et al.19, indicating a low rate of inherited NF2 in Japanese patients.
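The per-ear rates quoted in the Results (tumor volume in mm³/year, PTA in dB/year, SDS in %/year) are annualized changes over follow-up. One standard way to express such a rate from serial measurements is the ordinary least-squares slope against time; the sketch below uses invented follow-up values, since the per-visit raw data are not published here, and the paper does not state which annualization method was used.

```python
def annual_rate(times_years, values):
    """Ordinary least-squares slope: change in `values` per year of follow-up."""
    n = len(times_years)
    mean_t = sum(times_years) / n
    mean_v = sum(values) / n
    num = sum((t - mean_t) * (v - mean_v) for t, v in zip(times_years, values))
    den = sum((t - mean_t) ** 2 for t in times_years)
    return num / den

# INVENTED serial tumor volumes (mm^3) for one ear at yearly visits:
t = [0.0, 1.0, 2.0, 3.0]
volume = [1200.0, 2100.0, 3300.0, 4150.0]
print(annual_rate(t, volume))  # 1005.0 mm^3/year for these invented values
```

The same helper applies unchanged to PTA (dB) or SDS (%) series; only the units of the slope differ.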
In the de novo NF2 cases, mosaicism, which was not evaluated in this study, was likely to be present considering the high estimated rate of mosaicism (60%)12. Seven of the ten cases with detected P/LP variants (70%) had truncating variants. Previously reported rates of truncating variants in cases with pathogenic variants were approximately 50% in Western countries10,11, as well as in Japanese subjects14. Prior studies have not identified differences in NF2 morbidity between races and ethnic groups32,33. Thus, the rate of truncating variants detected in the present study could have been higher because > 50% of the participants were younger patients (< 20 years). All truncating variant cases developed NF2 at < 20 years, and the age of onset was significantly lower than in cases with no detected variants, as in previous reports11,12,13,15,16. Other than these correlations, genotype–phenotype correlation was not identified for tumor growth, hearing levels, and speech discrimination scores, in contrast to previous reports11,19. This might be explained by the small number of patients, the limited follow-up, and the lack of mosaicism determination, which are limitations of the present study. The increased rate of tumor growth during the untreated period varied among affected ears, and no consistent trend was detected between variant types. The courses of tumor growth progression varied bilaterally even within the same case. Therefore, to predict tumor growth rate, it is necessary to evaluate factors other than germline variants of the NF2 gene from peripheral blood. As candidates for such predictive factors, tumor growth rate could also be affected by variants in other alleles (second hit) in individual tumors34,35.
Further, in reports of sporadic VS, increases in tumor sizes were associated with expression of vascular endothelial growth factor (VEGF) and its receptors36,37. No consistent trend in hearing deterioration was detected between variant types. No significant differences in hearing level have been reported between variant types, nor have correlations between hearing level and tumor size21,22,23,24. One potential mechanism of hearing deterioration in VS is tumor secretory factors such as TNF alpha38,39. To predict the deterioration of hearing, it could be necessary to consider factors other than the NF2 gene, including the anatomical positional relationship between the tumor and the nerves, the degree of impaired blood flow to the cochlear nerve and cochlea, and the influence of tumor secretory factors. Halliday et al. and Teranishi et al. each reported genetic severity scores based on the type of variant9,19. When the cases in which pathogenic variants were detected in this report were categorized based on the two different scoring systems, the results were identical in Cases 8 and 10. However, Case 9 could not be evaluated based on the Teranishi scoring system because the system has no corresponding score for in-frame deletions. In four cases (Cases 11, 12, 13, and 14), neither P/LP variants nor CNVs were detected. If no P/LP variants are detected in the peripheral blood, these cases could be mosaic, and the clinical courses could be milder with a later age of onset10,11,12,13. In the present study, the ages of onset in Cases 11 and 12 were both in middle age, and that of Case 14 was > 70 years, while the age of onset of Case 13 was 17 years. To determine if these cases were mosaic, it would be necessary to collect samples such as tumor tissue rather than peripheral blood and investigate whether P/LP variants are detected14.
It is also possible that no P/LP variant was detected due to chromosomal structural abnormalities, such as a ring chromosome, which could not be detected by the genetic analyses used. To detect these changes, microarray analysis would be effective. These analyses in a larger number of patients would determine the genetic characteristics and genotype–phenotype correlations of NF2 occurring in Asian patients.

Conclusion

In the 14 Japanese NF2 cases evaluated in the study, P/LP variants were detected in ten cases (71.4%), and seven of the ten cases (70%) had truncating variants. Genotype–phenotype correlations in the age of onset were detected. However, it would be necessary to consider factors in addition to NF2 germline P/LP variants to predict phenotypic characteristics such as tumor growth and hearing deterioration.

Materials and methods

Participants

Fourteen cases diagnosed with NF2 using the Manchester criteria40 at Keio University Hospital, National Hospital Organization Tokyo Medical Center, and Osaka City General Hospital from September 2016 to August 2020 were enrolled in the study. Nine cases were male, and five were female. Written informed consent was obtained from all cases. The study was approved by the ethics committees of Keio University School of Medicine, National Hospital Organization Tokyo Medical Center, and Osaka City General Hospital (approval numbers 20150235, R20-184, and 1601110, respectively). The study was conducted in accordance with the principles of the Declaration of Helsinki.

DNA sequencing

DNA was extracted from peripheral blood leukocytes using Genomix (Biologica, Nagoya, Japan). Targeted resequencing, including exonic regions of NF2, was performed using the SureSelect target enrichment system (Agilent Technologies, CA, USA) or the NEXTFLEX Rapid XP DNA seq kit (Perkin-Elmer, MA, USA), followed by target resequencing with a next-generation sequencer (NextSeq platform, Illumina, CA, USA).
Variant calling was conducted with SureCall (Agilent Technologies)41. The details are described in our previous report42. Candidate variants from panel analysis were verified by Sanger sequencing using a standard protocol. PCR reactions were conducted to amplify the corresponding exons using PrimeSTAR DNA polymerase (TAKARA BIO, Kyoto, Japan) with the primers listed in Supplementary Table S1. The pathogenicity of the detected variants was evaluated according to the American College of Medical Genetics and Genomics (ACMG) guidelines43. For loss-of-function variants, we also used the recommendation by the ClinGen sequence variant interpretation working group44. Subjects with no candidate variant were further analyzed by the MLPA method to assess CNVs using the SALSA MLPA NF2 probemix (MRC Holland, Amsterdam, The Netherlands) according to the manufacturer's instructions. MLPA results were confirmed with read-depth analysis using DepthOfCoverage version 4.1.9.0 in gatk445.

Assessment of splicing

The effects of splicing were evaluated for cases predicted to have non-truncating pathogenic variants. Exons and introns containing wild-type and predicted pathogenic variants were introduced into the pET01 vector (MoBiTec, Göttingen, Germany) cleaved by XhoI (R0146S, NEB). After transduction of the vectors into Escherichia coli, introduction of the target sequences was confirmed. HEK293T cells were transfected with empty vectors, wild-type vectors, and vectors containing the predicted pathogenic variants using Effectene (301425, Qiagen). After culturing HEK293T cells for 40–48 h, RNAs were extracted using the Quick-RNA MiniPrep Plus kit (R1057, Zymo Research). Reverse transcription of the obtained RNAs was performed using SuperScript III Reverse Transcriptase (18080044, Thermo Fisher Scientific) to prepare cDNAs. After cDNAs were amplified by PCR using Phusion High-Fidelity DNA polymerase (M0530, NEB), product length was confirmed by electrophoresis.
The PCR primers were 5'-CCTGGCTGCCCAGGCTTTTGTCAACA-3' and 5'-CCACCTCCAGTGCCAAGGTCTGAAGGTCA-3'.

Clinical characteristics

The relationship between variant type and clinical information such as NF2-related lesions, age of onset, VS tumor volume calculated from MRI, PTA, and highest SDS was evaluated prior to intervention. Clinical information was obtained retrospectively from medical records. PTA refers to the average hearing threshold at 0.5, 1, and 2 kHz. SDS refers to the percentage of words correctly recognized and repeated. The clinical courses of tumor volume, PTA, and SDS were evaluated for cases followed more than 1 year before therapeutic interventions for VS. Regarding the age of NF2-related symptom onset, the truncating variant group, non-truncating variant group, and 'not detected' group were compared using a Kruskal–Wallis test. p-values < 0.05 were considered statistically significant. IBM SPSS Statistics version 25 (IBM, NY, USA) was used for statistical analysis.

Data availability

All the NF2 variants detected in this study and clinical features of subjects are included within this article and deposited to the Medical Genomics Japan Variant Database (MGeND, accession number: MGS000062). The genomic data supporting the findings of this study may be made available to qualified investigators upon request with appropriate institutional review board approval and execution of a data use agreement with the corresponding authors.

References

1. Evans, D. G. et al. A genetic study of type 2 neurofibromatosis in the United Kingdom. I. Prevalence, mutation rate, fitness, and confirmation of maternal transmission effect on severity. J. Med. Genet. 29, 841–846 (1992).
2. Asthagiri, A. R. et al. Neurofibromatosis type 2. Lancet 373, 1974–1986 (2009).
3. Plotkin, S. R. et al.
Updated diagnostic criteria and nomenclature for neurofibromatosis type 2 and schwannomatosis: An international consensus recommendation. Genet. Med. 24, 1967–1977 (2022).
4. Evans, D. G. et al. Schwannomatosis: A genetic and epidemiological study. J. Neurol. Neurosurg. Psychiatry 89, 1215–1219 (2018).
5. Linthicum, F. H. Jr. & Brackmann, D. E. Bilateral acoustic tumors. A diagnostic and surgical challenge. Arch. Otolaryngol. 106, 729–733 (1980).
6. Matsunaga, T., Igarashi, M. & Kanzaki, J. Temporal bone pathology of acoustic neurinoma (unilateral and bilateral) in relation to the internal auditory canal surgery. Acta Otolaryngol. Suppl. 487, 61–68 (1991).
7. Nam, S. I., Linthicum, F. H. Jr. & Merchant, S. N. Temporal bone histopathology in neurofibromatosis type 2. Laryngoscope 121, 1548–1554 (2011).
8. Kishore, A. & O'Reilly, B. F. A clinical study of vestibular schwannomas in type 2 neurofibromatosis. Clin. Otolaryngol. Allied Sci. 25, 561–565 (2000).
9. Mahboubi, H. et al. Vestibular schwannoma excision in sporadic versus neurofibromatosis type 2 populations. Otolaryngol. Head Neck Surg. 153, 822–831 (2015).
10. Evans, D. G. Neurofibromatosis type 2 (NF2): a clinical and molecular review. Orphanet J. Rare Dis. 19, 4–16 (2009).
11. Halliday, D. et al. Genetic severity score predicts clinical phenotype in NF2. J. Med. Genet. 54, 657–664 (2017).
12. Evans, D. G. et al. Incidence of mosaicism in 1055 de novo NF2 cases: Much higher than previous estimates with high utility of next-generation sequencing. Genet. Med. 22, 53–59 (2020).
13. Evans, D. G., Trueman, L., Wallace, A., Collins, S. & Strachan, T.
Genotype/phenotype correlations in type 2 neurofibromatosis (NF2): Evidence for more severe disease associated with truncating mutations. J. Med. Genet. 35, 450–455 (1998).
14. Teranishi, Y. et al. Targeted deep sequencing of DNA from multiple tissue types improves the diagnostic rate and reveals a highly diverse phenotype of mosaic neurofibromatosis type 2. J. Med. Genet. 58, 701–711 (2020).
15. Hexter, A. et al. Clinical and molecular predictors of mortality in neurofibromatosis 2: A UK national analysis of 1192 patients. J. Med. Genet. 52, 699–705 (2015).
16. Baser, M. E. et al. Genotype-phenotype correlations for nervous system tumors in neurofibromatosis 2: A population-based study. Am. J. Hum. Genet. 75, 231–239 (2004).
17. Kluwe, L. et al. Identification of NF2 germline mutations and comparison with neurofibromatosis 2 phenotypes. Hum. Genet. 98, 534–538 (1996).
18. Parry, D. M. et al. Germline mutations in the neurofibromatosis 2 gene: Correlations with disease severity and retinal abnormalities. Am. J. Hum. Genet. 59, 529–539 (1996).
19. Teranishi, Y. et al. Early prediction of functional prognosis in neurofibromatosis type 2 patients based on genotype-phenotype correlation with targeted deep sequencing. Sci. Rep. 12, 9543 (2022).
20. Smith, M. J. et al. Cranial meningiomas in 411 neurofibromatosis type 2 (NF2) patients with proven gene mutations: Clear positional effect of mutations, but absence of female severity effect on age at onset. J. Med. Genet. 48, 261–265 (2011).
21. Baser, M. E., Makariou, E. V. & Parry, D. M. Predictors of vestibular schwannoma growth in patients with neurofibromatosis Type 2. J. Neurosurg.
96, 217–222 (2002).
22. Mautner, V. F. et al. Vestibular schwannoma growth in patients with neurofibromatosis Type 2: A longitudinal study. J. Neurosurg. 96, 223–228 (2002).
23. Fisher, L. M., Doherty, J. K., Lev, M. H. & Slattery, W. H. Concordance of bilateral vestibular schwannoma growth and hearing changes in neurofibromatosis 2: Neurofibromatosis 2 natural history consortium. Otol. Neurotol. 30, 835–841 (2009).
24. Peyre, M. et al. Conservative management of bilateral vestibular schwannomas in neurofibromatosis type 2 patients: hearing and tumor growth results. Neurosurgery 72, 907–913 (2013).
25. Kontorinis, G. et al. Progress of hearing loss in neurofibromatosis type 2: Implications for future management. Eur. Arch. Otorhinolaryngol. 272, 3143–3150 (2015).
26. Plotkin, S. R., Merker, V. L., Muzikansky, A., Barker, F. G. & Slattery, W. Natural history of vestibular schwannoma growth and hearing decline in newly diagnosed neurofibromatosis type 2 patients. Otol. Neurotol. 35, 50–56 (2014).
27. Bourn, D. et al. Eleven novel mutations in the NF2 tumour suppressor gene. Hum. Genet. 95, 572–574 (1995).
28. MacCollin, M. et al. Mutational analysis of patients with neurofibromatosis 2. Am. J. Hum. Genet. 55, 314–320 (1994).
29. Ng, H. K. et al. Combined molecular genetic studies of chromosome 22q and the neurofibromatosis type 2 gene in central nervous system tumors. Neurosurgery 37, 764–773 (1995).
30. Evans, D. G. et al. Probability of bilateral disease in people presenting with a unilateral vestibular schwannoma. J. Neurol. Neurosurg. Psychiatry 66, 764–767 (1999).
31. Wallace, A. J., Watson, C. J., Award, E., Evans, D. G.
& Elles, R. G. Mutation scanning of the NF2 gene: An improved service based on meta-PCR/sequencing, dosage analysis, and loss of heterozygosity analysis. Genet. Test 8, 368–380 (2004).
32. Evans, D. G. Neurofibromatosis 2. GeneReviews (1998, updated 2018).
33. Hoa, M. & Slattery, W. H. 3rd. Neurofibromatosis 2. Otolaryngol. Clin. North Am. 45, 315–332 (2012).
34. Hadfield, K. D. et al. Rates of loss of heterozygosity and mitotic recombination in NF2 schwannomas, sporadic vestibular schwannomas and schwannomatosis schwannomas. Oncogene 29, 6216–6221 (2010).
35. Chen, H., Xue, L., Wang, H., Wang, Z. & Wu, H. Differential NF2 gene status in sporadic vestibular schwannomas and its prognostic impact on tumour growth patterns. Sci. Rep. 7, 5470 (2017).
36. Cayé-Thomasen, P. et al. VEGF and VEGF receptor-1 concentration in vestibular schwannoma homogenates correlates to tumor growth rate. Otol. Neurotol. 26, 98–101 (2005).
37. London, N. R. & Gurgel, R. K. The role of vascular endothelial growth factor and vascular stability in diseases of the ear. Laryngoscope 124, E340-346 (2014).
38. Dilwali, S., Landegger, L. D., Soares, V. Y., Deschler, D. G. & Stankovic, K. M. Secreted factors from human vestibular schwannomas can cause cochlear damage. Sci. Rep. 5, 18599 (2015).
39. Soares, V. Y. et al. Extracellular vesicles derived from human vestibular schwannomas associated with poor hearing damage cochlear cells. Neuro. Oncol. 18, 1498–1507 (2016).
40. Evans, D. G. et al. Identifying the deficiencies of current diagnostic criteria for neurofibromatosis 2 using databases of 2777 individuals with molecular testing. Genet. Med. 21, 1525–1533 (2019).
Article PubMed Google Scholar 41. Fujiki, R. et al. Assessing the accuracy of variant detection in cost-effective gene panel testing by next-generation sequencing. J. Mol. Diagn. 20, 572€“582 (2018). Article CAS PubMed Google Scholar 42. Mutai, H. et al. Variants encoding a restricted carboxy-terminal domain of SLC12A2 cause hereditary hearing loss in humans. PLoS Genet. 16, 1008643. (2020). Article Google Scholar 43. Richards, S. et al. Standards and guidelines for the interpretation of sequence variants: A joint consensus recommendation of the American college of medical genetics and genomics and the association for molecular pathology. Genet. Med. 17, 405€“424 (2015). Article PubMed PubMed Central Google Scholar 44. Abou Tayoun, A. N. et al. Recommendations for interpreting the loss of function PVS1 ACM/GAMP variant criterion. Hum. Mutat. 39, 1517€“1524 (2018). Article PubMed PubMed Central Google Scholar 45. Van der Auwera, G. A. & O'Connor, B. D. Genomics in the Cloud: Using Docker, GATK, and WDL in Terra (1st Edition) (O'Reilly Media, 2020). Download references Acknowledgements This research was supported by JSPS KAKENHI Grant Number 17K16944 to M.N., 18K16869 to K.W, a Grant-in-Aid for Clinical Research from the National Hospital Organization of Japan (R3-NHO (kankakuki)-02) to T.M., AMED under Grant Number 22gk0110063h0001 to T.M., and Health Labor Sciences Research Grant (Number 20FC1057) to T.M. Author information Authors and Affiliations Department of Otolaryngology-Head and Neck Surgery, Keio University School of Medicine, 35 Shinanomachi, Shinjuku-Ku, Tokyo, 160-8582, Japan Naoki Oishi, Masaru Noguchi, Masato Fujioka & Hiroyuki Ozawa 2. Department of Molecular Genetics, Kitasato University School of Medicine, Kanagawa, Japan Masato Fujioka 3. Clinical and Translational Research Center, Keio University Hospital, Tokyo, Japan Masato Fujioka 4. 
Division of Hearing and Balance Research, National Institute of Sensory Organs, National Hospital Organization Tokyo Medical Center, 2-5-1 Higashigaoka, Meguro, Tokyo, 152-8902, Japan Kiyomitsu Nara, Koichiro Wasano, Hideki Mutai & Tatsuo Matsunaga 5. Department of Otolaryngology and Head and Neck Surgery, Tokai University School of Medicine, Kanagawa, Japan Koichiro Wasano 6. Department of Pediatric Endocrinology and Metabolism, Osaka City General Hospital, Osaka, Japan Rie Kawakita 7. Department of Neurosurgery, Keio University School of Medicine, Tokyo, Japan Ryota Tamura, Kosuke Karatsu, Yukina Morimoto & Masahiro Toda Authors Naoki Oishi View author publications Search author on:PubMed Google Scholar 2. Masaru Noguchi View author publications Search author on:PubMed Google Scholar 3. Masato Fujioka View author publications Search author on:PubMed Google Scholar 4. Kiyomitsu Nara View author publications Search author on:PubMed Google Scholar 5. Koichiro Wasano View author publications Search author on:PubMed Google Scholar 6. Hideki Mutai View author publications Search author on:PubMed Google Scholar 7. Rie Kawakita View author publications Search author on:PubMed Google Scholar 8. Ryota Tamura View author publications Search author on:PubMed Google Scholar 9. Kosuke Karatsu View author publications Search author on:PubMed Google Scholar 10. Yukina Morimoto View author publications Search author on:PubMed Google Scholar Masahiro Toda View author publications Search author on:PubMed Google Scholar 12. Hiroyuki Ozawa View author publications Search author on:PubMed Google Scholar 13. Tatsuo Matsunaga View author publications Search author on:PubMed Google Scholar Contributions N.O., M.N., M.F., and T.M. designed the study. N.O, M.N., M.F., K.W., R.K., R.T., K.K., Y.M., M.T., H.O., and T.M. collected clinical information. K.N., K.W., H.M., and T.M. performed genetic analyses. N.O., M.N., K.N., K.W., H.M., and T.M. analyzed the results. 
N.O., M.N., K.N., K.W., H.M., and T.M. wrote the manuscript. All authors reviewed and approved the manuscript. Corresponding authors Correspondence to Naoki Oishi or Tatsuo Matsunaga. Ethics declarations Competing interests The authors declare no competing interests. Additional information Publisher's note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations. Supplementary Information Supplementary Information. Rights and permissions Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit Reprints and permissions About this article Cite this article Oishi, N., Noguchi, M., Fujioka, M. et al. Correlation between genotype and phenotype with special attention to hearing in 14 Japanese cases of NF2-related schwannomatosis. Sci Rep 13, 6595 (2023). Download citation Received: Accepted: Published: DOI: Share this article Anyone you share the following link with will be able to read this content: Sorry, a shareable link is not currently available for this article. 
Provided by the Springer Nature SharedIt content-sharing initiative This article is cited by High de novo mutation rate in Iranian NF2-related schwannomatosis patients with a report of a novel NF2 mutation Mohammad Amin Ghalavand Alimohamad Asghari Masoumeh Falah Molecular Biology Reports (2025) Search Advanced search Quick links Explore articles by subject Find a job Guide to authors Editorial policies Sign up for the Nature Briefing newsletter €” what matters in science, free to your inbox daily. Get the most important science stories of the day, free in your inbox. Sign up for Nature Briefing
https://pdfcoffee.com/arfken-george-weber-hans-mathematical-methods-for-physicists-6a-edition-pdf-free.html
Arfken George, Weber Hans - Mathematical Methods for Physicists - 6a Edition
Uploaded by Astrid Rangel
MATHEMATICAL METHODS FOR PHYSICISTS, SIXTH EDITION

George B. Arfken, Miami University, Oxford, OH
Hans J. Weber, University of Virginia, Charlottesville, VA

Amsterdam Boston Heidelberg London New York Oxford Paris San Diego San Francisco Singapore Sydney Tokyo

Acquisitions Editor: Tom Singer
Project Manager: Simon Crump
Marketing Manager: Linda Beattie
Cover Design: Eric DeCicco
Composition: VTEX Typesetting Services
Cover Printer: Phoenix Color
Interior Printer: The Maple–Vail Book Manufacturing Group

Elsevier Academic Press
30 Corporate Drive, Suite 400, Burlington, MA 01803, USA
525 B Street, Suite 1900, San Diego, California 92101-4495, USA
84 Theobald's Road, London WC1X 8RR, UK

This book is printed on acid-free paper.

Copyright © 2005, Elsevier Inc. All rights reserved. No part of this publication may be reproduced or transmitted in any form or by any means, electronic or mechanical, including photocopy, recording, or any information storage and retrieval system, without permission in writing from the publisher. Permissions may be sought directly from Elsevier's Science & Technology Rights Department in Oxford, UK: phone: (+44) 1865 843830, fax: (+44) 1865 853333, e-mail: permissions@elsevier.co.uk.
You may also complete your request on-line via the Elsevier homepage by selecting "Customer Support" and then "Obtaining Permissions."

Library of Congress Cataloging-in-Publication Data: Application submitted
British Library Cataloguing in Publication Data: A catalogue record for this book is available from the British Library

ISBN: 0-12-059876-0 (Case bound)
ISBN: 0-12-088584-0 (International Students Edition)

For all information on all Elsevier Academic Press publications visit our Web site at www.books.elsevier.com

Printed in the United States of America

CONTENTS

Preface  xi

1  Vector Analysis  1
   1.1  Definitions, Elementary Approach  1
   1.2  Rotation of the Coordinate Axes  7
   1.3  Scalar or Dot Product  12
   1.4  Vector or Cross Product  18
   1.5  Triple Scalar Product, Triple Vector Product  25
   1.6  Gradient, ∇  32
   1.7  Divergence, ∇·  38
   1.8  Curl, ∇×  43
   1.9  Successive Applications of ∇  49
   1.10  Vector Integration  54
   1.11  Gauss' Theorem  60
   1.12  Stokes' Theorem  64
   1.13  Potential Theory  68
   1.14  Gauss' Law, Poisson's Equation  79
   1.15  Dirac Delta Function  83
   1.16  Helmholtz's Theorem  95
   Additional Readings  101

2  Vector Analysis in Curved Coordinates and Tensors  103
   2.1  Orthogonal Coordinates in R3  103
   2.2  Differential Vector Operators  110
   2.3  Special Coordinate Systems: Introduction  114
   2.4  Circular Cylinder Coordinates  115
   2.5  Spherical Polar Coordinates  123
   2.6  Tensor Analysis  133
   2.7  Contraction, Direct Product  139
   2.8  Quotient Rule  141
   2.9  Pseudotensors, Dual Tensors  142
   2.10  General Tensors  151
   2.11  Tensor Derivative Operators  160
   Additional Readings  163

3  Determinants and Matrices  165
   3.1  Determinants  165
   3.2  Matrices  176
   3.3  Orthogonal Matrices  195
   3.4  Hermitian Matrices, Unitary Matrices  208
   3.5  Diagonalization of Matrices  215
   3.6  Normal Matrices  231
   Additional Readings  239

4  Group Theory  241
   4.1  Introduction to Group Theory  241
   4.2  Generators of Continuous Groups  246
   4.3  Orbital Angular Momentum  261
   4.4  Angular Momentum Coupling  266
   4.5  Homogeneous Lorentz Group  278
   4.6  Lorentz Covariance of Maxwell's Equations  283
   4.7  Discrete Groups  291
   4.8  Differential Forms  304
   Additional Readings  319

5  Infinite Series  321
   5.1  Fundamental Concepts  321
   5.2  Convergence Tests  325
   5.3  Alternating Series  339
   5.4  Algebra of Series  342
   5.5  Series of Functions  348
   5.6  Taylor's Expansion  352
   5.7  Power Series  363
   5.8  Elliptic Integrals  370
   5.9  Bernoulli Numbers, Euler–Maclaurin Formula  376
   5.10  Asymptotic Series  389
   5.11  Infinite Products  396
   Additional Readings  401

6  Functions of a Complex Variable I: Analytic Properties, Mapping  403
   6.1  Complex Algebra  404
   6.2  Cauchy–Riemann Conditions  413
   6.3  Cauchy's Integral Theorem  418
   6.4  Cauchy's Integral Formula  425
   6.5  Laurent Expansion  430
   6.6  Singularities  438
   6.7  Mapping  443
   6.8  Conformal Mapping  451
   Additional Readings  453

7  Functions of a Complex Variable II  455
   7.1  Calculus of Residues  455
   7.2  Dispersion Relations  482
   7.3  Method of Steepest Descents  489
   Additional Readings  497

8  The Gamma Function (Factorial Function)  499
   8.1  Definitions, Simple Properties  499
   8.2  Digamma and Polygamma Functions  510
   8.3  Stirling's Series  516
   8.4  The Beta Function  520
   8.5  Incomplete Gamma Function  527
   Additional Readings  533

9  Differential Equations  535
   9.1  Partial Differential Equations  535
   9.2  First-Order Differential Equations  543
   9.3  Separation of Variables  554
   9.4  Singular Points  562
   9.5  Series Solutions—Frobenius' Method  565
   9.6  A Second Solution  578
   9.7  Nonhomogeneous Equation—Green's Function  592
   9.8  Heat Flow, or Diffusion, PDE  611
   Additional Readings  618

10  Sturm–Liouville Theory—Orthogonal Functions  621
   10.1  Self-Adjoint ODEs  622
   10.2  Hermitian Operators  634
   10.3  Gram–Schmidt Orthogonalization  642
   10.4  Completeness of Eigenfunctions  649
   10.5  Green's Function—Eigenfunction Expansion  662
   Additional Readings  674

11  Bessel Functions  675
   11.1  Bessel Functions of the First Kind, Jν(x)  675
   11.2  Orthogonality  694
   11.3  Neumann Functions  699
   11.4  Hankel Functions  707
   11.5  Modified Bessel Functions, Iν(x) and Kν(x)  713
   11.6  Asymptotic Expansions  719
   11.7  Spherical Bessel Functions  725
   Additional Readings  739

12  Legendre Functions  741
   12.1  Generating Function  741
   12.2  Recurrence Relations  749
   12.3  Orthogonality  756
   12.4  Alternate Definitions  767
   12.5  Associated Legendre Functions  771
   12.6  Spherical Harmonics  786
   12.7  Orbital Angular Momentum Operators  793
   12.8  Addition Theorem for Spherical Harmonics  797
   12.9  Integrals of Three Y's  803
   12.10  Legendre Functions of the Second Kind  806
   12.11  Vector Spherical Harmonics  813
   Additional Readings  816

13  More Special Functions  817
   13.1  Hermite Functions  817
   13.2  Laguerre Functions  837
   13.3  Chebyshev Polynomials  848
   13.4  Hypergeometric Functions  859
   13.5  Confluent Hypergeometric Functions  863
   13.6  Mathieu Functions  869
   Additional Readings  879

14  Fourier Series  881
   14.1  General Properties  881
   14.2  Advantages, Uses of Fourier Series  888
   14.3  Applications of Fourier Series  892
   14.4  Properties of Fourier Series  903
   14.5  Gibbs Phenomenon  910
   14.6  Discrete Fourier Transform  914
   14.7  Fourier Expansions of Mathieu Functions  919
   Additional Readings  929

15  Integral Transforms  931
   15.1  Integral Transforms  931
   15.2  Development of the Fourier Integral  936
   15.3  Fourier Transforms—Inversion Theorem  938
   15.4  Fourier Transform of Derivatives  946
   15.5  Convolution Theorem  951
   15.6  Momentum Representation  955
   15.7  Transfer Functions  961
   15.8  Laplace Transforms  965
   15.9  Laplace Transform of Derivatives  971
   15.10  Other Properties  979
   15.11  Convolution (Faltungs) Theorem  990
   15.12  Inverse Laplace Transform  994
   Additional Readings  1003

16  Integral Equations  1005
   16.1  Introduction  1005
   16.2  Integral Transforms, Generating Functions  1012
   16.3  Neumann Series, Separable (Degenerate) Kernels  1018
   16.4  Hilbert–Schmidt Theory  1029
   Additional Readings  1036

17  Calculus of Variations  1037
   17.1  A Dependent and an Independent Variable  1038
   17.2  Applications of the Euler Equation  1044
   17.3  Several Dependent Variables  1052
   17.4  Several Independent Variables  1056
   17.5  Several Dependent and Independent Variables  1058
   17.6  Lagrangian Multipliers  1060
   17.7  Variation with Constraints  1065
   17.8  Rayleigh–Ritz Variational Technique  1072
   Additional Readings  1076

18  Nonlinear Methods and Chaos  1079
   18.1  Introduction  1079
   18.2  The Logistic Map  1080
   18.3  Sensitivity to Initial Conditions and Parameters  1085
   18.4  Nonlinear Differential Equations  1088
   Additional Readings  1107

19  Probability  1109
   19.1  Definitions, Simple Properties  1109
   19.2  Random Variables  1116
   19.3  Binomial Distribution  1128
   19.4  Poisson Distribution  1130
   19.5  Gauss' Normal Distribution  1134
   19.6  Statistics  1138
   Additional Readings  1150
   General References  1150

Index  1153

PREFACE

Through six editions now, Mathematical Methods for Physicists has provided all the mathematical methods that aspiring scientists and engineers are likely to encounter as students and beginning researchers. More than enough material is included for a two-semester undergraduate or graduate course. The book is advanced in the sense that mathematical relations are almost always proven, in addition to being illustrated in terms of examples. These proofs are not what a mathematician would regard as rigorous, but sketch the ideas and emphasize the relations that are essential to the study of physics and related fields.
This approach incorporates theorems that are usually not cited under the most general assumptions, but are tailored to the more restricted applications required by physics. For example, Stokes' theorem is usually applied by a physicist to a surface with the tacit understanding that it be simply connected. Such assumptions have been made more explicit.

PROBLEM-SOLVING SKILLS

The book also incorporates a deliberate focus on problem-solving skills. This more advanced level of understanding and active learning is routine in physics courses and requires practice by the reader. Accordingly, extensive problem sets appearing in each chapter form an integral part of the book. They have been carefully reviewed, revised and enlarged for this Sixth Edition.

PATHWAYS THROUGH THE MATERIAL

Undergraduates may be best served if they start by reviewing Chapter 1 according to the level of training of the class. Section 1.2 on the transformation properties of vectors, the cross product, and the invariance of the scalar product under rotations may be postponed until tensor analysis is started, for which these sections form the introduction and serve as examples. They may continue their studies with linear algebra in Chapter 3, then perhaps tensors and symmetries (Chapters 2 and 4), and next real and complex analysis (Chapters 5–7), differential equations (Chapters 9, 10), and special functions (Chapters 11–13). In general, the core of a graduate one-semester course comprises Chapters 5–10 and 11–13, which deal with real and complex analysis, differential equations, and special functions. Depending on the level of the students in a course, some linear algebra in Chapter 3 (eigenvalues, for example), along with symmetries (group theory in Chapter 4), and tensors (Chapter 2) may be covered as needed or according to taste. Group theory may also be included with differential equations (Chapters 9 and 10).
Appropriate relations have been included and are discussed in Chapters 4 and 9. A two-semester course can treat tensors, group theory, and special functions (Chapters 11–13) more extensively, and add Fourier series (Chapter 14), integral transforms (Chapter 15), integral equations (Chapter 16), and the calculus of variations (Chapter 17).

CHANGES TO THE SIXTH EDITION

Improvements to the Sixth Edition have been made in nearly all chapters, adding examples and problems and more derivations of results. Numerous left-over typos caused by scanning into LaTeX, an error-prone process at the rate of many errors per page, have been corrected along with mistakes, such as in the Dirac γ-matrices in Chapter 3. A few chapters have been relocated. The Gamma function is now in Chapter 8, following Chapters 6 and 7 on complex functions in one variable, as it is an application of these methods. Differential equations are now in Chapters 9 and 10. A new chapter on probability has been added, as well as new subsections on differential forms and Mathieu functions in response to persistent demands by readers and students over the years. The new subsections are more advanced and are written in the concise style of the book, thereby raising its level to the graduate level. Many examples have been added, for example in Chapters 1 and 2, that are often used in physics or are standard lore of physics courses. A number of additions have been made in Chapter 3, such as on linear dependence of vectors, dual vector spaces and spectral decomposition of symmetric or Hermitian matrices. A subsection on the diffusion equation emphasizes methods to adapt solutions of partial differential equations to boundary conditions. New formulas have been developed for Hermite polynomials and are included in Chapter 13; they are useful for treating molecular vibrations and are of interest to chemical physicists.

ACKNOWLEDGMENTS

We have benefited from the advice and help of many people.
Some of the revisions are in response to comments by readers and former students, such as Dr. K. Bodoor and J. Hughes. We are grateful to them and to our Editors Barbara Holland and Tom Singer who organized accuracy checks. We would like to thank in particular Dr. Michael Bozoian and Prof. Frank Harris for their invaluable help with the accuracy checking and Simon Crump, Production Editor, for his expert management of the Sixth Edition.

CHAPTER 1
VECTOR ANALYSIS

1.1 DEFINITIONS, ELEMENTARY APPROACH

In science and engineering we frequently encounter quantities that have magnitude and magnitude only: mass, time, and temperature. These we label scalar quantities, which remain the same no matter what coordinates we use. In contrast, many interesting physical quantities have magnitude and, in addition, an associated direction. This second group includes displacement, velocity, acceleration, force, momentum, and angular momentum. Quantities with magnitude and direction are labeled vector quantities. Usually, in elementary treatments, a vector is defined as a quantity having magnitude and direction. To distinguish vectors from scalars, we identify vector quantities with boldface type, that is, V.

Our vector may be conveniently represented by an arrow, with length proportional to the magnitude. The direction of the arrow gives the direction of the vector, the positive sense of direction being indicated by the point. In this representation, vector addition

    C = A + B    (1.1)

consists in placing the rear end of vector B at the point of vector A. Vector C is then represented by an arrow drawn from the rear of A to the point of B. This procedure, the triangle law of addition, assigns meaning to Eq. (1.1) and is illustrated in Fig. 1.1. By completing the parallelogram, we see that C = A + B = B + A, as shown in Fig. 1.2. In words, vector addition is commutative.

For the sum of three vectors D = A + B + C, Fig. 1.3, we may first add A and B:

    A + B = E.
1 (1.2) 2 Chapter 1 Vector Analysis FIGURE 1.1 Triangle law of vector addition. FIGURE 1.2 Parallelogram law of vector addition. FIGURE 1.3 Vector addition is associative. Then this sum is added to C: D = E + C. Similarly, we may first add B and C: B + C = F. Then D = A + F. In terms of the original expression, (A + B) + C = A + (B + C). Vector addition is associative. A direct physical example of the parallelogram addition law is provided by a weight suspended by two cords. If the junction point (O in Fig. 1.4) is in equilibrium, the vector 1.1 Definitions, Elementary Approach FIGURE 1.4 3 Equilibrium of forces: F1 + F2 = −F3 . sum of the two forces F1 and F2 must just cancel the downward force of gravity, F3 . Here the parallelogram addition law is subject to immediate experimental verification.1 Subtraction may be handled by defining the negative of a vector as a vector of the same magnitude but with reversed direction. Then A − B = A + (−B). In Fig. 1.3, A = E − B. Note that the vectors are treated as geometrical objects that are independent of any coordinate system. This concept of independence of a preferred coordinate system is developed in detail in the next section. The representation of vector A by an arrow suggests a second possibility. Arrow A (Fig. 1.5), starting from the origin,2 terminates at the point (Ax , Ay , Az ). Thus, if we agree that the vector is to start at the origin, the positive end may be specified by giving the Cartesian coordinates (Ax , Ay , Az ) of the arrowhead. Although A could have represented any vector quantity (momentum, electric field, etc.), one particularly important vector quantity, the displacement from the origin to the point 1 Strictly speaking, the parallelogram addition was introduced as a definition. Experiments show that if we assume that the forces are vector quantities and we combine them by parallelogram addition, the equilibrium condition of zero resultant force is satisfied. 
(x, y, z), is denoted by the special symbol r. We then have a choice of referring to the displacement as either the vector r or the collection (x, y, z), the coordinates of its endpoint:

    r ↔ (x, y, z).    (1.3)

FIGURE 1.5 Cartesian components and direction cosines of A.

Using r for the magnitude of vector r, we find that Fig. 1.5 shows that the endpoint coordinates and the magnitude are related by

    x = r cos α,   y = r cos β,   z = r cos γ.    (1.4)

Here cos α, cos β, and cos γ are called the direction cosines, α being the angle between the given vector and the positive x-axis, and so on. One further bit of vocabulary: The quantities A_x, A_y, and A_z are known as the (Cartesian) components of A or the projections of A, with cos²α + cos²β + cos²γ = 1.

Thus, any vector A may be resolved into its components (or projected onto the coordinate axes) to yield A_x = A cos α, etc., as in Eq. (1.4). We may choose to refer to the vector as a single quantity A or to its components (A_x, A_y, A_z). Note that the subscript x in A_x denotes the x component and not a dependence on the variable x. The choice between using A or its components (A_x, A_y, A_z) is essentially a choice between a geometric and an algebraic representation. Use either representation at your convenience. The geometric "arrow in space" may aid in visualization. The algebraic set of components is usually more suitable for precise numerical or algebraic calculations.

Vectors enter physics in two distinct forms. (1) Vector A may represent a single force acting at a single point. The force of gravity acting at the center of gravity illustrates this form.

² We could start from any point in our Cartesian reference frame; we choose the origin for simplicity. This freedom of shifting the origin of the coordinate system without affecting the geometry is called translation invariance.
(2) Vector A may be defined over some extended region; that is, A and its components may be functions of position: A_x = A_x(x, y, z), and so on. Examples of this sort include the velocity of a fluid varying from point to point over a given volume and electric and magnetic fields. These two cases may be distinguished by referring to the vector defined over a region as a vector field. The concept of the vector defined over a region and being a function of position will become extremely important when we differentiate and integrate vectors.

At this stage it is convenient to introduce unit vectors along each of the coordinate axes. Let x̂ be a vector of unit magnitude pointing in the positive x-direction, ŷ a vector of unit magnitude in the positive y-direction, and ẑ a vector of unit magnitude in the positive z-direction. Then x̂ A_x is a vector with magnitude equal to |A_x| and in the x-direction. By vector addition,

    A = x̂ A_x + ŷ A_y + ẑ A_z.    (1.5)

Note that if A vanishes, all of its components must vanish individually; that is, if A = 0, then A_x = A_y = A_z = 0. This means that these unit vectors serve as a basis, or complete set of vectors, in the three-dimensional Euclidean space in terms of which any vector can be expanded. Thus, Eq. (1.5) is an assertion that the three unit vectors x̂, ŷ, and ẑ span our real three-dimensional space: Any vector may be written as a linear combination of x̂, ŷ, and ẑ. Since x̂, ŷ, and ẑ are linearly independent (no one is a linear combination of the other two), they form a basis for the real three-dimensional Euclidean space.

Finally, by the Pythagorean theorem, the magnitude of vector A is

    |A| = (A_x² + A_y² + A_z²)^{1/2}.    (1.6)

Note that the coordinate unit vectors are not the only complete set, or basis. This resolution of a vector into its components can be carried out in a variety of coordinate systems, as shown in Chapter 2.
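These relations are easy to check numerically. The short Python sketch below (an illustration, not part of the original text; the sample vector is an arbitrary choice) resolves a vector A into Cartesian components, computes its magnitude by Eq. (1.6), and verifies the direction-cosine identity cos²α + cos²β + cos²γ = 1 from Eq. (1.4):

```python
import math

# A sample vector A, given by its Cartesian components, as in Eq. (1.5).
A = (1.0, 2.0, 2.0)

# Magnitude, Eq. (1.6): |A| = (Ax^2 + Ay^2 + Az^2)^(1/2).
mag = math.sqrt(sum(c * c for c in A))
print(mag)  # 3.0 for this choice of A

# Direction cosines, Eq. (1.4): cos(alpha) = Ax/|A|, and so on.
cos_a, cos_b, cos_g = (c / mag for c in A)

# The direction cosines satisfy cos^2(a) + cos^2(b) + cos^2(g) = 1.
print(cos_a**2 + cos_b**2 + cos_g**2)  # 1.0 (up to roundoff)
```

Any nonzero vector works here; only the printed magnitude depends on the sample chosen.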
Here we restrict ourselves to Cartesian coordinates, where the unit vectors have the coordinates x̂ = (1, 0, 0), ŷ = (0, 1, 0), and ẑ = (0, 0, 1) and are all constant in length and direction, properties characteristic of Cartesian coordinates.

As a replacement of the graphical technique, addition and subtraction of vectors may now be carried out in terms of their components. For A = x̂ A_x + ŷ A_y + ẑ A_z and B = x̂ B_x + ŷ B_y + ẑ B_z,

    A ± B = x̂ (A_x ± B_x) + ŷ (A_y ± B_y) + ẑ (A_z ± B_z).    (1.7)

It should be emphasized here that the unit vectors x̂, ŷ, and ẑ are used for convenience. They are not essential; we can describe vectors and use them entirely in terms of their components: A ↔ (A_x, A_y, A_z). This is the approach of the two more powerful, more sophisticated definitions of vector to be discussed in the next section. However, x̂, ŷ, and ẑ emphasize the direction.

So far we have defined the operations of addition and subtraction of vectors. In the next sections, three varieties of multiplication will be defined on the basis of their applicability: a scalar, or inner, product; a vector product peculiar to three-dimensional space; and a direct, or outer, product yielding a second-rank tensor. Division by a vector is not defined.

Exercises

1.1.1 Show how to find A and B, given A + B and A − B.

1.1.2 The vector A whose magnitude is 1.732 units makes equal angles with the coordinate axes. Find A_x, A_y, and A_z.

1.1.3 Calculate the components of a unit vector that lies in the xy-plane and makes equal angles with the positive directions of the x- and y-axes.

1.1.4 The velocity of sailboat A relative to sailboat B, v_rel, is defined by the equation v_rel = v_A − v_B, where v_A is the velocity of A and v_B is the velocity of B. Determine the velocity of A relative to B if
    v_A = 30 km/hr east,
    v_B = 40 km/hr north.
        ANS. v_rel = 50 km/hr, 53.1° south of east.
1.1.5 A sailboat sails for 1 hr at 4 km/hr (relative to the water) on a steady compass heading of 40° east of north. The sailboat is simultaneously carried along by a current. At the end of the hour the boat is 6.12 km from its starting point. The line from its starting point to its location lies 60° east of north. Find the x (easterly) and y (northerly) components of the water's velocity.
        ANS. v_east = 2.73 km/hr, v_north ≈ 0 km/hr.

1.1.6 A vector equation can be reduced to the form A = B. From this show that the one vector equation is equivalent to three scalar equations. Assuming the validity of Newton's second law, F = ma, as a vector equation, this means that a_x depends only on F_x and is independent of F_y and F_z.

1.1.7 The vertices A, B, and C of a triangle are given by the points (−1, 0, 2), (0, 1, 0), and (1, −1, 0), respectively. Find point D so that the figure ABCD forms a plane parallelogram.
        ANS. (0, −2, 2) or (2, 0, −2).

1.1.8 A triangle is defined by the vertices of three vectors A, B, and C that extend from the origin. In terms of A, B, and C show that the vector sum of the successive sides of the triangle (AB + BC + CA) is zero, where the side AB is from A to B, etc.

1.1.9 A sphere of radius a is centered at a point r_1.
    (a) Write out the algebraic equation for the sphere.
    (b) Write out a vector equation for the sphere.
        ANS. (a) (x − x_1)² + (y − y_1)² + (z − z_1)² = a².
             (b) r = r_1 + a, with r_1 = center. (a takes on all directions but has a fixed magnitude a.)

1.1.10 A corner reflector is formed by three mutually perpendicular reflecting surfaces. Show that a ray of light incident upon the corner reflector (striking all three surfaces) is reflected back along a line parallel to the line of incidence.
    Hint. Consider the effect of a reflection on the components of a vector describing the direction of the light ray.

1.1.11 Hubble's law.
Hubble found that distant galaxies are receding with a velocity proportional to their distance from where we are on Earth. For the ith galaxy,

    v_i = H_0 r_i,

with us at the origin. Show that this recession of the galaxies from us does not imply that we are at the center of the universe. Specifically, take the galaxy at r_1 as a new origin and show that Hubble's law is still obeyed.

1.1.12 Find the diagonal vectors of a unit cube with one corner at the origin and its three sides lying along Cartesian coordinate axes. Show that there are four diagonals with length √3. Representing these as vectors, what are their components? Show that the diagonals of the cube's faces have length √2 and determine their components.

1.2 ROTATION OF THE COORDINATE AXES³

In the preceding section vectors were defined or represented in two equivalent ways: (1) geometrically by specifying magnitude and direction, as with an arrow, and (2) algebraically by specifying the components relative to Cartesian coordinate axes. The second definition is adequate for the vector analysis of this chapter. In this section two more refined, sophisticated, and powerful definitions are presented. First, the vector field is defined in terms of the behavior of its components under rotation of the coordinate axes. This transformation-theory approach leads into the tensor analysis of Chapter 2 and groups of transformations in Chapter 4. Second, the component definition of Section 1.1 is refined and generalized according to the mathematician's concepts of vector and vector space. This approach leads to function spaces, including the Hilbert space.

The definition of vector as a quantity with magnitude and direction is incomplete. On the one hand, we encounter quantities, such as elastic constants and index of refraction in anisotropic crystals, that have magnitude and direction but that are not vectors. On the other hand, our naïve approach is awkward to generalize to extend to more complex quantities.
We seek a new definition of vector field, using our coordinate vector r as a prototype. There is a physical basis for our development of a new definition. We describe our physical world by mathematics, but it and any physical predictions we may make must be independent of our mathematical conventions. In our specific case we assume that space is isotropic; that is, there is no preferred direction, or all directions are equivalent. Then the physical system being analyzed or the physical law being enunciated cannot and must not depend on our choice or orientation of the coordinate axes. Specifically, if a quantity S does not depend on the orientation of the coordinate axes, it is called a scalar.

³ This section is optional here. It will be essential for Chapter 2.

FIGURE 1.6 Rotation of Cartesian coordinate axes about the z-axis.

Now we return to the concept of vector r as a geometric object independent of the coordinate system. Let us look at r in two different systems, one rotated in relation to the other. For simplicity we consider first the two-dimensional case. If the x-, y-coordinates are rotated counterclockwise through an angle ϕ, keeping r fixed (Fig. 1.6), we get the following relations between the components resolved in the original system (unprimed) and those resolved in the new rotated system (primed):

    x' = x cos ϕ + y sin ϕ,
    y' = −x sin ϕ + y cos ϕ.    (1.8)

We saw in Section 1.1 that a vector could be represented by the coordinates of a point; that is, the coordinates were proportional to the vector components. Hence the components of a vector must transform under rotation as coordinates of a point (such as r). Therefore whenever any pair of quantities A_x and A_y in the xy-coordinate system is transformed into (A'_x, A'_y) by this rotation of the coordinate system, with

    A'_x = A_x cos ϕ + A_y sin ϕ,
    A'_y = −A_x sin ϕ + A_y cos ϕ,    (1.9)

we define⁴ A_x and A_y as the components of a vector A.
Our vector now is defined in terms of the transformation of its components under rotation of the coordinate system. If A_x and A_y transform in the same way as x and y, the components of the general two-dimensional coordinate vector r, they are the components of a vector A. If A_x and A_y do not show this form invariance (also called covariance) when the coordinates are rotated, they do not form a vector.

⁴ A scalar quantity does not depend on the orientation of coordinates; S' = S expresses the fact that it is invariant under rotation of the coordinates.

The vector field components A_x and A_y satisfying the defining equations, Eqs. (1.9), associate a magnitude A and a direction with each point in space. The magnitude is a scalar quantity, invariant to the rotation of the coordinate system. The direction (relative to the unprimed system) is likewise invariant to the rotation of the coordinate system (see Exercise 1.2.1). The result of all this is that the components of a vector may vary according to the rotation of the primed coordinate system. This is what Eqs. (1.9) say. But the variation with the angle is just such that the components in the rotated coordinate system A'_x and A'_y define a vector with the same magnitude and the same direction as the vector defined by the components A_x and A_y relative to the x-, y-coordinate axes. (Compare Exercise 1.2.1.) The components of A in a particular coordinate system constitute the representation of A in that coordinate system. Equations (1.9), the transformation relations, are a guarantee that the entity A is independent of the rotation of the coordinate system.

To go on to three and, later, four dimensions, we find it convenient to use a more compact notation. Let

    x → x_1,   y → x_2,    (1.10)

    a_11 = cos ϕ,    a_12 = sin ϕ,
    a_21 = −sin ϕ,   a_22 = cos ϕ.    (1.11)

Then Eqs. (1.8) become

    x'_1 = a_11 x_1 + a_12 x_2,
    x'_2 = a_21 x_1 + a_22 x_2.    (1.12)
The coefficient a_ij may be interpreted as a direction cosine, the cosine of the angle between x'_i and x_j; that is,

    a_12 = cos(x'_1, x_2) = sin ϕ,
    a_21 = cos(x'_2, x_1) = cos(ϕ + π/2) = −sin ϕ.    (1.13)

The advantage of the new notation⁵ is that it permits us to use the summation symbol and to rewrite Eqs. (1.12) as

    x'_i = Σ_{j=1}^{2} a_ij x_j,   i = 1, 2.    (1.14)

Note that i remains as a parameter that gives rise to one equation when it is set equal to 1 and to a second equation when it is set equal to 2. The index j, of course, is a summation index, a dummy index, and, as with a variable of integration, j may be replaced by any other convenient symbol.

⁵ You may wonder at the replacement of one parameter ϕ by four parameters a_ij. Clearly, the a_ij do not constitute a minimum set of parameters. For two dimensions the four a_ij are subject to the three constraints given in Eq. (1.18). The justification for this redundant set of direction cosines is the convenience it provides. Hopefully, this convenience will become more apparent in Chapters 2 and 3. For three-dimensional rotations (9 a_ij but only three independent) alternate descriptions are provided by: (1) the Euler angles discussed in Section 3.3, (2) quaternions, and (3) the Cayley–Klein parameters. These alternatives have their respective advantages and disadvantages.

The generalization to three, four, or N dimensions is now simple. The set of N quantities V_j is said to be the components of an N-dimensional vector V if and only if their values relative to the rotated coordinate axes are given by

    V'_i = Σ_{j=1}^{N} a_ij V_j,   i = 1, 2, . . . , N.    (1.15)

As before, a_ij is the cosine of the angle between x'_i and x_j. Often the upper limit N and the corresponding range of i will not be indicated. It is taken for granted that you know how many dimensions your space has.
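The transformation law, Eq. (1.14), can be tried out directly. The Python sketch below (an illustration added here, not part of the original text; the angle and vector are arbitrary choices) builds the two-dimensional direction cosines of Eq. (1.11), rotates a sample vector, and checks that the magnitude is unchanged, as Exercise 1.2.1 asserts:

```python
import math

phi = math.radians(30.0)  # arbitrary rotation angle for the demonstration
# Direction cosines a_ij for a 2-D rotation, Eq. (1.11).
a = [[math.cos(phi), math.sin(phi)],
     [-math.sin(phi), math.cos(phi)]]

V = [3.0, 4.0]  # components of a sample vector

# Transformation law, Eq. (1.14): V'_i = sum_j a_ij V_j.
Vp = [sum(a[i][j] * V[j] for j in range(2)) for i in range(2)]

# The magnitude is invariant under the rotation (Exercise 1.2.1).
print(math.hypot(*V), math.hypot(*Vp))  # both 5.0 (up to roundoff)
```

Changing `phi` changes the primed components but never the printed magnitudes, which is precisely the content of the invariance statement.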
From the definition of a_ij as the cosine of the angle between the positive x'_i direction and the positive x_j direction we may write (Cartesian coordinates)⁶

    a_ij = ∂x'_i/∂x_j.    (1.16a)

Using the inverse rotation (ϕ → −ϕ) yields

    x_j = Σ_{i=1}^{2} a_ij x'_i,   or   ∂x_j/∂x'_i = a_ij.    (1.16b)

Note that these are partial derivatives. By use of Eqs. (1.16a) and (1.16b), Eq. (1.15) becomes

    V'_i = Σ_{j=1}^{N} (∂x'_i/∂x_j) V_j = Σ_{j=1}^{N} (∂x_j/∂x'_i) V_j.    (1.17)

The direction cosines a_ij satisfy an orthogonality condition

    Σ_i a_ij a_ik = δ_jk    (1.18)

or, equivalently,

    Σ_i a_ji a_ki = δ_jk.    (1.19)

Here, the symbol δ_jk is the Kronecker delta, defined by

    δ_jk = 1   for j = k,
    δ_jk = 0   for j ≠ k.    (1.20)

It is easily verified that Eqs. (1.18) and (1.19) hold in the two-dimensional case by substituting in the specific a_ij from Eqs. (1.11). The result is the well-known identity sin²ϕ + cos²ϕ = 1 for the nonvanishing case. To verify Eq. (1.18) in general form, we may use the partial derivative forms of Eqs. (1.16a) and (1.16b) to obtain

    Σ_i (∂x_j/∂x'_i)(∂x_k/∂x'_i) = Σ_i (∂x_j/∂x'_i)(∂x'_i/∂x_k) = ∂x_j/∂x_k.    (1.21)

⁶ Differentiate x'_i with respect to x_j. See discussion following Eq. (1.21).

The last step follows by the standard rules for partial differentiation, assuming that x_j is a function of x'_1, x'_2, x'_3, and so on. The final result, ∂x_j/∂x_k, is equal to δ_jk, since x_j and x_k as coordinate lines (j ≠ k) are assumed to be perpendicular (two or three dimensions) or orthogonal (for any number of dimensions). Equivalently, we may assume that x_j and x_k (j ≠ k) are totally independent variables. If j = k, the partial derivative is clearly equal to 1.

In redefining a vector in terms of how its components transform under a rotation of the coordinate system, we should emphasize two points:

1. This definition is developed because it is useful and appropriate in describing our physical world. Our vector equations will be independent of any particular coordinate system.
(The coordinate system need not even be Cartesian.) The vector equation can always be expressed in some particular coordinate system, and, to obtain numerical results, we must ultimately express the equation in some specific coordinate system.

2. This definition is subject to a generalization that will open up the branch of mathematics known as tensor analysis (Chapter 2).

A qualification is in order. The behavior of the vector components under rotation of the coordinates is used in Section 1.3 to prove that a scalar product is a scalar, in Section 1.4 to prove that a vector product is a vector, and in Section 1.6 to show that the gradient of a scalar ψ, ∇ψ, is a vector. The remainder of this chapter proceeds on the basis of the less restrictive definitions of the vector given in Section 1.1.

Summary: Vectors and Vector Space

It is customary in mathematics to label an ordered triple of real numbers (x_1, x_2, x_3) a vector x. The number x_n is called the nth component of vector x. The collection of all such vectors (obeying the properties that follow) forms a three-dimensional real vector space. We ascribe five properties to our vectors: If x = (x_1, x_2, x_3) and y = (y_1, y_2, y_3),

1. Vector equality: x = y means x_i = y_i, i = 1, 2, 3.
2. Vector addition: x + y = z means x_i + y_i = z_i, i = 1, 2, 3.
3. Scalar multiplication: ax ↔ (ax_1, ax_2, ax_3) (with a real).
4. Negative of a vector: −x = (−1)x ↔ (−x_1, −x_2, −x_3).
5. Null vector: There exists a null vector 0 ↔ (0, 0, 0).

Since our vector components are real (or complex) numbers, the following properties also hold:

1. Addition of vectors is commutative: x + y = y + x.
2. Addition of vectors is associative: (x + y) + z = x + (y + z).
3. Scalar multiplication is distributive: a(x + y) = ax + ay, also (a + b)x = ax + bx.
4. Scalar multiplication is associative: (ab)x = a(bx).

Further, the null vector 0 is unique, as is the negative of a given vector x.
So far as the vectors themselves are concerned this approach merely formalizes the component discussion of Section 1.1. The importance lies in the extensions, which will be considered in later chapters. In Chapter 4, we show that vectors form both an Abelian group under addition and a linear space with the transformations in the linear space described by matrices. Finally, and perhaps most important, for advanced physics the concept of vectors presented here may be generalized to (1) complex quantities,⁷ (2) functions, and (3) an infinite number of components. This leads to infinite-dimensional function spaces, the Hilbert spaces, which are important in modern quantum theory. A brief introduction to function expansions and Hilbert space appears in Section 10.4.

Exercises

1.2.1 (a) Show that the magnitude of a vector A, A = (A_x² + A_y²)^{1/2}, is independent of the orientation of the rotated coordinate system,

    (A_x² + A_y²)^{1/2} = (A'_x² + A'_y²)^{1/2},

that is, independent of the rotation angle ϕ. This independence of angle is expressed by saying that A is invariant under rotations.
    (b) At a given point (x, y), A defines an angle α relative to the positive x-axis and α' relative to the positive x'-axis. The angle from x to x' is ϕ. Show that A = A' defines the same direction in space when expressed in terms of its primed components as in terms of its unprimed components; that is, α' = α − ϕ.

1.2.2 Prove the orthogonality condition Σ_i a_ji a_ki = δ_jk. As a special case of this, the direction cosines of Section 1.1 satisfy the relation

    cos²α + cos²β + cos²γ = 1,

a result that also follows from Eq. (1.6).

1.3 SCALAR OR DOT PRODUCT

Having defined vectors, we now proceed to combine them. The laws for combining vectors must be mathematically consistent. From the possibilities that are consistent we select two that are both mathematically and physically interesting. A third possibility is introduced in Chapter 2, in which we form tensors.
The projection of a vector A onto a coordinate axis, which gives its Cartesian components in Eq. (1.4), defines a special geometrical case of the scalar product of A and the coordinate unit vectors:

    A_x = A cos α ≡ A · x̂,   A_y = A cos β ≡ A · ŷ,   A_z = A cos γ ≡ A · ẑ.    (1.22)

⁷ The n-dimensional vector space of real n-tuples is often labeled R^n and the n-dimensional vector space of complex n-tuples is labeled C^n.

This special case of the scalar product, in conjunction with the general properties of the scalar product, is sufficient to derive the general case of the scalar product. Just as the projection is linear in A, we want the scalar product of two vectors to be linear in A and B, that is, to obey the distributive and associative laws

    A · (B + C) = A · B + A · C,    (1.23a)
    A · (yB) = (yA) · B = yA · B,    (1.23b)

where y is a number. Now we can use the decomposition of B into its Cartesian components according to Eq. (1.5), B = B_x x̂ + B_y ŷ + B_z ẑ, to construct the general scalar or dot product of the vectors A and B as

    A · B = A · (B_x x̂ + B_y ŷ + B_z ẑ)
          = B_x A · x̂ + B_y A · ŷ + B_z A · ẑ    upon applying Eqs. (1.23a) and (1.23b)
          = B_x A_x + B_y A_y + B_z A_z          upon substituting Eq. (1.22).

Hence

    A · B ≡ Σ_i B_i A_i = Σ_i A_i B_i = B · A.    (1.24)

If A = B in Eq. (1.24), we recover the magnitude A = (Σ_i A_i²)^{1/2} of A in Eq. (1.6) from Eq. (1.24).

It is obvious from Eq. (1.24) that the scalar product treats A and B alike, or is symmetric in A and B, and is commutative. Thus, alternatively and equivalently, we can first generalize Eqs. (1.22) to the projection A_B of A onto the direction of a vector B ≠ 0 as A_B = A cos θ ≡ A · B̂, where B̂ = B/B is the unit vector in the direction of B and θ is the angle between A and B, as shown in Fig. 1.7. Similarly, we project B onto A as B_A = B cos θ ≡ B · Â. Second, we make these projections symmetric in A and B, which leads to the definition

    A · B ≡ A_B B = A B_A = AB cos θ.    (1.25)

FIGURE 1.7 Scalar product A · B = AB cos θ.
FIGURE 1.8 The distributive law A · (B + C) = A B_A + A C_A = A (B + C)_A, Eq. (1.23a).

The distributive law in Eq. (1.23a) is illustrated in Fig. 1.8, which shows that the sum of the projections of B and C onto A, B_A + C_A, is equal to the projection of B + C onto A, (B + C)_A.

It follows from Eqs. (1.22), (1.24), and (1.25) that the coordinate unit vectors satisfy the relations

    x̂ · x̂ = ŷ · ŷ = ẑ · ẑ = 1,    (1.26a)

whereas

    x̂ · ŷ = x̂ · ẑ = ŷ · ẑ = 0.    (1.26b)

If the component definition, Eq. (1.24), is labeled an algebraic definition, then Eq. (1.25) is a geometric definition. One of the most common applications of the scalar product in physics is in the calculation of work = force · displacement · cos θ, which is interpreted as the displacement times the projection of the force along the displacement direction, i.e., the scalar product of force and displacement, W = F · S.

If A · B = 0 and we know that A ≠ 0 and B ≠ 0, then, from Eq. (1.25), cos θ = 0, or θ = 90°, 270°, and so on. The vectors A and B must be perpendicular. Alternately, we may say A and B are orthogonal. The unit vectors x̂, ŷ, and ẑ are mutually orthogonal. To develop this notion of orthogonality one more step, suppose that n is a unit vector and r is a nonzero vector in the xy-plane; that is, r = x̂ x + ŷ y (Fig. 1.9). If

    n · r = 0

for all choices of r, then n must be perpendicular (orthogonal) to the xy-plane.

Often it is convenient to replace x̂, ŷ, and ẑ by subscripted unit vectors e_m, m = 1, 2, 3, with x̂ = e_1, and so on. Then Eqs. (1.26a) and (1.26b) become

    e_m · e_n = δ_mn.    (1.26c)

For m ≠ n the unit vectors e_m and e_n are orthogonal. For m = n each vector is normalized to unity, that is, has unit magnitude. The set e_m is said to be orthonormal. A major advantage of Eq. (1.26c) over Eqs. (1.26a) and (1.26b) is that Eq. (1.26c) may readily be generalized to N-dimensional space: m, n = 1, 2, . . . , N.
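The consistency of the algebraic definition, Eq. (1.24), and the geometric definition, Eq. (1.25), can be illustrated numerically. In the sketch below (an added illustration; the two vectors are arbitrary sample choices), the angle θ is extracted from Eq. (1.25) and then AB cos θ is seen to reproduce the component sum:

```python
import math

A = (1.0, 2.0, 2.0)  # sample vectors for the demonstration
B = (2.0, -1.0, 2.0)

# Algebraic definition, Eq. (1.24): A.B = sum_i A_i B_i.
dot = sum(ai * bi for ai, bi in zip(A, B))

# Geometric definition, Eq. (1.25): A.B = A B cos(theta).
magA = math.sqrt(sum(c * c for c in A))
magB = math.sqrt(sum(c * c for c in B))
theta = math.acos(dot / (magA * magB))

print(dot, magA * magB * math.cos(theta))  # both 4.0 (up to roundoff)
```

For these sample vectors the included angle is θ = arccos(4/9); any other pair of nonzero vectors gives the same agreement between the two definitions.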
Finally, we are picking sets of unit vectors e_m that are orthonormal for convenience, a very great convenience.

FIGURE 1.9 A normal vector.

Invariance of the Scalar Product Under Rotations

We have not yet shown that the word scalar is justified or that the scalar product is indeed a scalar quantity. To do this, we investigate the behavior of A · B under a rotation of the coordinate system. By use of Eq. (1.15),

    A'_x B'_x + A'_y B'_y + A'_z B'_z = Σ_i a_xi A_i Σ_j a_xj B_j + Σ_i a_yi A_i Σ_j a_yj B_j + Σ_i a_zi A_i Σ_j a_zj B_j.    (1.27)

Using the indices k and l to sum over x, y, and z, we obtain

    Σ_k A'_k B'_k = Σ_l (Σ_i a_li A_i)(Σ_j a_lj B_j),    (1.28)

and, by rearranging the terms on the right-hand side, we have

    Σ_k A'_k B'_k = Σ_i Σ_j (Σ_l a_li a_lj) A_i B_j = Σ_i Σ_j δ_ij A_i B_j = Σ_i A_i B_i.    (1.29)

The last two steps follow by using Eq. (1.18), the orthogonality condition of the direction cosines, and Eqs. (1.20), which define the Kronecker delta. The effect of the Kronecker delta is to cancel all terms in a summation over either index except the term for which the indices are equal. In Eq. (1.29) its effect is to set j = i and to eliminate the summation over j. Of course, we could equally well set i = j and eliminate the summation over i. Equation (1.29) gives us

    Σ_k A'_k B'_k = Σ_i A_i B_i,    (1.30)

which is just our definition of a scalar quantity, one that remains invariant under the rotation of the coordinate system.

In a similar approach that exploits this concept of invariance, we take C = A + B and dot it into itself:

    C · C = (A + B) · (A + B) = A · A + B · B + 2A · B.    (1.31)

Since

    C · C = C²,    (1.32)

the square of the magnitude of vector C and thus an invariant quantity, we see that

    A · B = (C² − A² − B²)/2,   invariant.    (1.33)

Since the right-hand side of Eq. (1.33) is invariant, that is, a scalar quantity, the left-hand side, A · B, must also be invariant under rotation of the coordinate system. Hence A · B is a scalar.
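Both invariance results can be checked numerically. In the sketch below (an added illustration; the rotation angle and the two vectors are arbitrary choices), the two-dimensional rotation of Eq. (1.11) is applied to A and B, and the sum Σ A'_k B'_k of Eq. (1.30) is compared with Σ A_k B_k; Eq. (1.33) is then used to recover A · B from the three magnitudes alone:

```python
import math

phi = math.radians(55.0)  # arbitrary rotation angle
a = [[math.cos(phi), math.sin(phi)],
     [-math.sin(phi), math.cos(phi)]]

A = [2.0, 1.0]  # sample vectors
B = [-1.0, 3.0]
rot = lambda V: [sum(a[i][j] * V[j] for j in range(2)) for i in range(2)]
Ap, Bp = rot(A), rot(B)

dot = lambda X, Y: sum(x * y for x, y in zip(X, Y))

# Eq. (1.30): the scalar product is invariant under rotation.
print(dot(A, B), dot(Ap, Bp))  # both 1.0 (up to roundoff)

# Eq. (1.33): A.B = (C^2 - A^2 - B^2)/2, with C = A + B.
C = [x + y for x, y in zip(A, B)]
print((dot(C, C) - dot(A, A) - dot(B, B)) / 2)  # 1.0
```

The second check never refers to the rotation at all, which is the point of Eq. (1.33): the right-hand side is built entirely from invariant magnitudes.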
Equation (1.31) is really another form of the law of cosines, which is

    C² = A² + B² + 2AB cos θ.    (1.34)

Comparing Eqs. (1.31) and (1.34), we have another verification of Eq. (1.25), or, if preferred, a vector derivation of the law of cosines (Fig. 1.10).

The dot product, given by Eq. (1.24), may be generalized in two ways. The space need not be restricted to three dimensions. In n-dimensional space, Eq. (1.24) applies with the sum running from 1 to n. Moreover, n may be infinity, with the sum then a convergent infinite series (Section 5.2). The other generalization extends the concept of vector to embrace functions. The function analog of a dot, or inner, product appears in Section 10.4.

FIGURE 1.10 The law of cosines.

Exercises

1.3.1 Two unit magnitude vectors e_i and e_j are required to be either parallel or perpendicular to each other. Show that e_i · e_j provides an interpretation of Eq. (1.18), the direction cosine orthogonality relation.

1.3.2 Given that (1) the dot product of a unit vector with itself is unity and (2) this relation is valid in all (rotated) coordinate systems, show that x̂' · x̂' = 1 (with the primed system rotated 45° about the z-axis relative to the unprimed) implies that x̂ · ŷ = 0.

1.3.3 The vector r, starting at the origin, terminates at and specifies the point in space (x, y, z). Find the surface swept out by the tip of r if
    (a) (r − a) · a = 0. Characterize a geometrically.
    (b) (r − a) · r = 0. Describe the geometric role of a.
    The vector a is constant (in magnitude and direction).

1.3.4 The interaction energy between two dipoles of moments µ_1 and µ_2 may be written in the vector form

    V = −3(µ_1 · r)(µ_2 · r)/r⁵ + (µ_1 · µ_2)/r³

and in the scalar form

    V = −(µ_1 µ_2 / r³)(2 cos θ_1 cos θ_2 − sin θ_1 sin θ_2 cos ϕ).

Here θ_1 and θ_2 are the angles of µ_1 and µ_2 relative to r, while ϕ is the azimuth of µ_2 relative to the µ_1–r plane (Fig. 1.11). Show that these two forms are equivalent.
Hint: Equation (12.178) will be helpful.

1.3.5 A pipe comes diagonally down the south wall of a building, making an angle of 45° with the horizontal. Coming into a corner, the pipe turns and continues diagonally down a west-facing wall, still making an angle of 45° with the horizontal. What is the angle between the south-wall and west-wall sections of the pipe?
        ANS. 120°.

1.3.6 Find the shortest distance of an observer at the point (2, 1, 3) from a rocket in free flight with velocity (1, 2, 3) m/s. The rocket was launched at time t = 0 from (1, 1, 1). Lengths are in kilometers.

1.3.7 Prove the law of cosines from the triangle with corners at the point of C and A in Fig. 1.10 and the projection of vector B onto vector A.

FIGURE 1.11 Two dipole moments.

1.4 VECTOR OR CROSS PRODUCT

A second form of vector multiplication employs the sine of the included angle instead of the cosine. For instance, the angular momentum of a body shown at the point of the distance vector in Fig. 1.12 is defined as

    angular momentum = radius arm × linear momentum
                     = distance × linear momentum × sin θ.

For convenience in treating problems relating to quantities such as angular momentum, torque, and angular velocity, we define the vector product, or cross product, as

    C = A × B,   with C = AB sin θ.    (1.35)

Unlike the preceding case of the scalar product, C is now a vector, and we assign it a direction perpendicular to the plane of A and B such that A, B, and C form a right-handed system. With this choice of direction we have

    A × B = −B × A,   anticommutation.    (1.36a)

From this definition of cross product we have

    x̂ × x̂ = ŷ × ŷ = ẑ × ẑ = 0,    (1.36b)

whereas

    x̂ × ŷ = ẑ,    ŷ × ẑ = x̂,    ẑ × x̂ = ŷ,
    ŷ × x̂ = −ẑ,   ẑ × ŷ = −x̂,   x̂ × ẑ = −ŷ.    (1.36c)

Among the examples of the cross product in mathematical physics are the relation between linear momentum p and angular momentum L, with L defined as

    L = r × p,

FIGURE 1.12 Angular momentum.
FIGURE 1.13 Parallelogram representation of the vector product.

and the relation between linear velocity v and angular velocity ω,

    v = ω × r.

Vectors v and p describe properties of the particle or physical system. However, the position vector r is determined by the choice of the origin of the coordinates. This means that ω and L depend on the choice of the origin.

The familiar magnetic induction B is usually defined by the vector product force equation⁸

    F_M = qv × B   (mks units).

Here v is the velocity of the electric charge q and F_M is the resulting force on the moving charge.

⁸ The electric field E is assumed here to be zero.

The cross product has an important geometrical interpretation, which we shall use in subsequent sections. In the parallelogram defined by A and B (Fig. 1.13), B sin θ is the height if A is taken as the length of the base. Then |A × B| = AB sin θ is the area of the parallelogram. As a vector, A × B is the area of the parallelogram defined by A and B, with the area vector normal to the plane of the parallelogram. This suggests that area (with its orientation in space) may be treated as a vector quantity.

An alternate definition of the vector product can be derived from the special case of the coordinate unit vectors in Eqs. (1.36c), in conjunction with the linearity of the cross product in both vector arguments, in analogy with Eqs. (1.23) for the dot product:

    A × (B + C) = A × B + A × C,    (1.37a)
    (A + B) × C = A × C + B × C,    (1.37b)
    A × (yB) = yA × B = (yA) × B,    (1.37c)

where y is a number again. Using the decomposition of A and B into their Cartesian components according to Eq. (1.5), we find

    A × B ≡ C = (C_x, C_y, C_z)
        = (A_x x̂ + A_y ŷ + A_z ẑ) × (B_x x̂ + B_y ŷ + B_z ẑ)
        = (A_x B_y − A_y B_x) x̂ × ŷ + (A_x B_z − A_z B_x) x̂ × ẑ + (A_y B_z − A_z B_y) ŷ × ẑ

upon applying Eqs. (1.37a) and (1.37b) and substituting Eqs.
(1.36a), (1.36b), and (1.36c), so that the Cartesian components of A × B become

    Cx = Ay Bz − Az By ,   Cy = Az Bx − Ax Bz ,   Cz = Ax By − Ay Bx ,   (1.38)

or

    Ci = Aj Bk − Ak Bj ,   i, j, k all different,   (1.39)

with cyclic permutation of the indices i, j, and k corresponding to x, y, and z, respectively. The vector product C may be mnemonically represented by a determinant (see Section 3.1 for a brief summary of determinants),

        | x̂   ŷ   ẑ  |        | Ay  Az |        | Ax  Az |        | Ax  Ay |
    C = | Ax  Ay  Az | ≡ x̂ | By  Bz | − ŷ | Bx  Bz | + ẑ | Bx  By | ,   (1.40)
        | Bx  By  Bz |

which is meant to be expanded across the top row to reproduce the three components of C listed in Eqs. (1.38). Equation (1.35) might be called a geometric definition of the vector product; Eqs. (1.38) would then be an algebraic definition. To show the equivalence of Eq. (1.35) and the component definition, Eqs. (1.38), let us form A · C and B · C, using Eqs. (1.38). We have

    A · C = A · (A × B) = Ax (Ay Bz − Az By ) + Ay (Az Bx − Ax Bz ) + Az (Ax By − Ay Bx ) = 0.   (1.41)

Similarly,

    B · C = B · (A × B) = 0.   (1.42)

Equations (1.41) and (1.42) show that C is perpendicular to both A and B (cos θ = 0, θ = ±90°) and therefore perpendicular to the plane they determine. The positive direction is determined by considering special cases, such as the unit vectors x̂ × ŷ = ẑ (Cz = +Ax By ). The magnitude is obtained from

    (A × B) · (A × B) = A²B² − (A · B)² = A²B² − A²B² cos²θ = A²B² sin²θ.   (1.43)

Hence

    C = AB sin θ.   (1.44)

The first step in Eq. (1.43) may be verified by expanding out in component form, using Eqs. (1.38) for A × B and Eq. (1.24) for the dot product. From Eqs. (1.41), (1.42), and (1.44) we see the equivalence of Eqs. (1.35) and (1.38), the two definitions of vector product. There still remains the problem of verifying that C = A × B is indeed a vector, that is, that it obeys Eq. (1.15), the vector transformation law.
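The component formula, Eqs. (1.38), and the results of Eqs. (1.41)–(1.44) are easy to spot-check numerically. A minimal sketch (the helper names `cross` and `dot` are ours, not from the text):

```python
# Cross product from the component formula, Eqs. (1.38); vectors are plain tuples.
import math

def dot(a, b):
    return sum(ai * bi for ai, bi in zip(a, b))

def cross(a, b):
    return (a[1] * b[2] - a[2] * b[1],   # Cx = Ay*Bz - Az*By
            a[2] * b[0] - a[0] * b[2],   # Cy = Az*Bx - Ax*Bz
            a[0] * b[1] - a[1] * b[0])   # Cz = Ax*By - Ay*Bx

A, B = (1, 2, -1), (0, 1, 1)
C = cross(A, B)

# Eqs. (1.41)-(1.42): C is perpendicular to both A and B.
assert dot(A, C) == 0 and dot(B, C) == 0

# Eq. (1.43): (A x B).(A x B) = A^2 B^2 - (A . B)^2, i.e. C = AB sin(theta).
assert math.isclose(dot(C, C), dot(A, A) * dot(B, B) - dot(A, B) ** 2)

# Anticommutation, Eq. (1.36a): B x A = -(A x B).
assert cross(B, A) == tuple(-c for c in C)
```

Any pair of vectors works here; the identities hold term by term, not just for this example.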
Starting in a rotated (primed system), Ci′ = A′j Bk′ − A′k Bj′ , i, j, and k in cyclic order,     aj l Al akl Al akm Bm − aj m Bm = l =  l,m m l m (aj l akm − akl aj m )Al Bm . (1.45) The combination of direction cosines in parentheses vanishes for m = l. We therefore have j and k taking on fixed values, dependent on the choice of i, and six combinations of l and m. If i = 3, then j = 1, k = 2 (cyclic order), and we have the following direction cosine combinations:10 a11 a22 − a21 a12 = a33 , a13 a21 − a23 a11 = a32 , a12 a23 − a22 a13 = a31 (1.46) and their negatives. Equations (1.46) are identities satisfied by the direction cosines. They may be verified with the use of determinants and matrices (see Exercise 3.3.3). Substituting back into Eq. (1.45), C3′ = a33 A1 B2 + a32 A3 B1 + a31 A2 B3 − a33 A2 B1 − a32 A1 B3 − a31 A3 B2 = a31 C1 + a32 C2 + a33 C3  = a3n Cn . (1.47) n By permuting indices to pick up C1′ and C2′ , we see that Eq. (1.15) is satisfied and C is indeed a vector. It should be mentioned here that this vector nature of the cross product is an accident associated with the three-dimensional nature of ordinary space.11 It will be seen in Chapter 2 that the cross product may also be treated as a second-rank antisymmetric tensor. 10 Equations (1.46) hold for rotations because they preserve volumes. For a more general orthogonal transformation, the r.h.s. of Eqs. (1.46) is multiplied by the determinant of the transformation matrix (see Chapter 3 for matrices and determinants). 11 Specifically Eqs. (1.46) hold only for three-dimensional space. See D. Hestenes and G. Sobczyk, Clifford Algebra to Geometric Calculus (Dordrecht: Reidel, 1984) for a far-reaching generalization of the cross product. 22 Chapter 1 Vector Analysis If we define a vector as an ordered triplet of numbers (or functions), as in the latter part of Section 1.2, then there is no problem identifying the cross product as a vector. 
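The transformation argument of Eqs. (1.45)–(1.47) can also be checked numerically: for a proper rotation R, the cross product of the rotated vectors equals the rotated cross product, (RA) × (RB) = R(A × B). A sketch, taking a rotation about the z-axis as representative (helper names are ours):

```python
# Check that the cross product transforms as a vector under a proper rotation,
# consistent with Eq. (1.47). Per footnote 10, a reflection (det = -1) would
# flip the sign instead -- the cross product is an axial (pseudo)vector.
import math

def cross(a, b):
    return (a[1]*b[2] - a[2]*b[1], a[2]*b[0] - a[0]*b[2], a[0]*b[1] - a[1]*b[0])

def rotate_z(v, phi):
    # Proper rotation about the z-axis by angle phi (determinant +1).
    c, s = math.cos(phi), math.sin(phi)
    return (c*v[0] - s*v[1], s*v[0] + c*v[1], v[2])

A, B, phi = (1.0, 2.0, -1.0), (0.0, 1.0, 1.0), 0.7
lhs = cross(rotate_z(A, phi), rotate_z(B, phi))   # (RA) x (RB)
rhs = rotate_z(cross(A, B), phi)                  # R(A x B)
assert all(math.isclose(l, r) for l, r in zip(lhs, rhs))
```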
The crossproduct operation maps the two triples A and B into a third triple, C, which by definition is a vector. We now have two ways of multiplying vectors; a third form appears in Chapter 2. But what about division by a vector? It turns out that the ratio B/A is not uniquely specified (Exercise 3.2.21) unless A and B are also required to be parallel. Hence division of one vector by another is not defined. Exercises 1.4.1 Show that the medians of a triangle intersect in the center, which is 2/3 of the median’s length from each corner. Construct a numerical example and plot it. 1.4.2 Prove the law of cosines starting from A2 = (B − C)2 . 1.4.3 Starting with C = A + B, show that C × C = 0 leads to A × B = −B × A. 1.4.4 Show that (a) (A − B) · (A + B) = A2 − B 2 , (b) (A − B) × (A + B) = 2A × B. The distributive laws needed here, A · (B + C) = A · B + A · C, and A × (B + C) = A × B + A × C, may easily be verified (if desired) by expansion in Cartesian components. 1.4.5 Given the three vectors, P = 3ˆx + 2ˆy − zˆ , Q = −6ˆx − 4ˆy + 2ˆz, R = xˆ − 2ˆy − zˆ , find two that are perpendicular and two that are parallel or antiparallel. 1.4.6 If P = xˆ Px + yˆ Py and Q = xˆ Qx + yˆ Qy are any two nonparallel (also nonantiparallel) vectors in the xy-plane, show that P × Q is in the z-direction. 1.4.7 Prove that (A × B) · (A × B) = (AB)2 − (A · B)2 . 1.4 Vector or Cross Product 1.4.8 23 Using the vectors P = xˆ cos θ + yˆ sin θ, Q = xˆ cos ϕ − yˆ sin ϕ, R = xˆ cos ϕ + yˆ sin ϕ, prove the familiar trigonometric identities sin(θ + ϕ) = sin θ cos ϕ + cos θ sin ϕ, cos(θ + ϕ) = cos θ cos ϕ − sin θ sin ϕ. 1.4.9 (a) Find a vector A that is perpendicular to U = 2ˆx + yˆ − zˆ , V = xˆ − yˆ + zˆ . (b) 1.4.10 What is A if, in addition to this requirement, we demand that it have unit magnitude? If four vectors a, b, c, and d all lie in the same plane, show that (a × b) × (c × d) = 0. Hint. Consider the directions of the cross-product vectors. 
1.4.11 The coordinates of the three vertices of a triangle are (2, 1, 5), (5, 2, 8), and (4, 8, 2). Compute its area by vector methods, its center and medians. Lengths are in centimeters. Hint. See Exercise 1.4.1. 1.4.12 The vertices of parallelogram ABCD are (1, 0, 0), (2, −1, 0), (0, −1, 1), and (−1, 0, 1) in order. Calculate the vector areas of triangle ABD and of triangle BCD. Are the two vector areas equal? ANS. AreaABD = − 21 (ˆx + yˆ + 2ˆz). 1.4.13 The origin and the three vectors A, B, and C (all of which start at the origin) define a tetrahedron. Taking the outward direction as positive, calculate the total vector area of the four tetrahedral surfaces. Note. In Section 1.11 this result is generalized to any closed surface. 1.4.14 Find the sides and angles of the spherical triangle ABC defined by the three vectors A = (1, 0, 0),   1 1 B = √ , 0, √ , 2 2   1 1 C = 0, √ , √ . 2 2 Each vector starts from the origin (Fig. 1.14). 24 Chapter 1 Vector Analysis FIGURE 1.14 1.4.15 1.4.16 Spherical triangle. Derive the law of sines (Fig. 1.15): sin α sin β sin γ = = . |A| |B| |C| The magnetic induction B is defined by the Lorentz force equation, F = q(v × B). Carrying out three experiments, we find that if v = xˆ , F = 2ˆz − 4ˆy, q v = yˆ , F = 4ˆx − zˆ , q v = zˆ , F = yˆ − 2ˆx. q From the results of these three separate experiments calculate the magnetic induction B. 1.4.17 Define a cross product of two vectors in two-dimensional space and give a geometrical interpretation of your construction. 1.4.18 Find the shortest distance between the paths of two rockets in free flight. Take the first rocket path to be r = r1 + t1 v1 with launch at r1 = (1, 1, 1) and velocity v1 = (1, 2, 3) 1.5 Triple Scalar Product, Triple Vector Product FIGURE 1.15 25 Law of sines. and the second rocket path as r = r2 + t2 v2 with r2 = (5, 2, 1) and v2 = (−1, −1, 1). Lengths are in kilometers, velocities in kilometers per hour. 
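One standard way to attack Exercise 1.4.18: the shortest distance between two non-parallel lines is the projection of the vector joining any point of one to any point of the other onto their common normal v₁ × v₂. A sketch, treating both free-flight paths as full lines (the function name is ours):

```python
# Shortest distance between two skew lines via the common normal v1 x v2.
import math

def cross(a, b):
    return (a[1]*b[2] - a[2]*b[1], a[2]*b[0] - a[0]*b[2], a[0]*b[1] - a[1]*b[0])

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def skew_line_distance(r1, v1, r2, v2):
    """Distance between the lines r1 + t*v1 and r2 + t*v2 (v1 not parallel to v2)."""
    n = cross(v1, v2)                          # common normal to both lines
    d = tuple(b - a for a, b in zip(r1, r2))   # joins a point on each line
    return abs(dot(d, n)) / math.sqrt(dot(n, n))

# Data of Exercise 1.4.18 (lengths in km).
dist = skew_line_distance((1, 1, 1), (1, 2, 3), (5, 2, 1), (-1, -1, 1))
```

For parallel lines the normal vanishes and a different formula is needed, so the non-parallel assumption matters.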
1.5 TRIPLE SCALAR PRODUCT, TRIPLE VECTOR PRODUCT Triple Scalar Product Sections 1.3 and 1.4 cover the two types of multiplication of interest here. However, there are combinations of three vectors, A · (B × C) and A × (B × C), that occur with sufficient frequency to deserve further attention. The combination A · (B × C) is known as the triple scalar product. B × C yields a vector that, dotted into A, gives a scalar. We note that (A · B) × C represents a scalar crossed into a vector, an operation that is not defined. Hence, if we agree to exclude this undefined interpretation, the parentheses may be omitted and the triple scalar product written A · B × C. Using Eqs. (1.38) for the cross product and Eq. (1.24) for the dot product, we obtain A · B × C = Ax (By Cz − Bz Cy ) + Ay (Bz Cx − Bx Cz ) + Az (Bx Cy − By Cx ) =B·C×A=C·A×B = −A · C × B = −C · B × A = −B · A × C, and so on. (1.48) There is a high degree of symmetry in the component expansion. Every term contains the factors Ai , Bj , and Ck . If i, j , and k are in cyclic order (x, y, z), the sign is positive. If the order is anticyclic, the sign is negative. Further, the dot and the cross may be interchanged, A · B × C = A × B · C. (1.49) 26 Chapter 1 Vector Analysis FIGURE 1.16 Parallelepiped representation of triple scalar product. A convenient representation of the component expansion of Eq. (1.48) is provided by the determinant    Ax Ay Az    A · B × C =  Bx By Bz  . (1.50)  Cx Cy Cz  The rules for interchanging rows and columns of a determinant12 provide an immediate verification of the permutations listed in Eq. (1.48), whereas the symmetry of A, B, and C in the determinant form suggests the relation given in Eq. (1.49). The triple products encountered in Section 1.4, which showed that A × B was perpendicular to both A and B, were special cases of the general result (Eq. (1.48)). The triple scalar product has a direct geometrical interpretation. 
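Before the geometry, the algebraic symmetries of Eqs. (1.48)–(1.49) make a quick numerical check; this sketch uses the vectors of Exercise 1.5.10 (helper names are ours):

```python
# Triple scalar product A . B x C as the determinant of Eq. (1.50); checks the
# cyclic symmetry of Eq. (1.48) and the dot/cross interchange of Eq. (1.49).
def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def cross(a, b):
    return (a[1]*b[2] - a[2]*b[1], a[2]*b[0] - a[0]*b[2], a[0]*b[1] - a[1]*b[0])

def triple(a, b, c):
    return dot(a, cross(b, c))

A, B, C = (3, -2, 2), (6, 4, -2), (-3, -2, -4)
assert triple(A, B, C) == triple(B, C, A) == triple(C, A, B)   # cyclic order
assert triple(A, C, B) == -triple(A, B, C)                     # anticyclic: sign flips
assert dot(cross(A, B), C) == triple(A, B, C)                  # A x B . C = A . B x C
```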
The three vectors A, B, and C may be interpreted as defining a parallelepiped (Fig. 1.16): |B × C| = BC sin θ = area of parallelogram base. (1.51) The direction, of course, is normal to the base. Dotting A into this means multiplying the base area by the projection of A onto the normal, or base times height. Therefore A · B × C = volume of parallelepiped defined by A, B, and C. The triple scalar product finds an interesting and important application in the construction of a reciprocal crystal lattice. Let a, b, and c (not necessarily mutually perpendicular) 12 See Section 3.1 for a summary of the properties of determinants. 1.5 Triple Scalar Product, Triple Vector Product 27 represent the vectors that define a crystal lattice. The displacement from one lattice point to another may then be written r = na a + nb b + nc c, (1.52) with na , nb , and nc taking on integral values. With these vectors we may form a′ = b×c , a·b×c b′ = c×a , a·b×c c′ = a×b . a·b×c (1.53a) We see that a′ is perpendicular to the plane containing b and c, and we can readily show that a′ · a = b′ · b = c′ · c = 1, (1.53b) a′ · b = a′ · c = b′ · a = b′ · c = c′ · a = c′ · b = 0. (1.53c) whereas It is from Eqs. (1.53b) and (1.53c) that the name reciprocal lattice is associated with the points r′ = n′a a′ + n′b b′ + n′c c′ . The mathematical space in which this reciprocal lattice exists is sometimes called a Fourier space, on the basis of relations to the Fourier analysis of Chapters 14 and 15. This reciprocal lattice is useful in problems involving the scattering of waves from the various planes in a crystal. Further details may be found in R. B. Leighton’s Principles of Modern Physics, pp. 440–448 [New York: McGraw-Hill (1959)]. Triple Vector Product The second triple product of interest is A × (B × C), which is a vector. Here the parentheses must be retained, as may be seen from a special case (ˆx × xˆ ) × yˆ = 0, while xˆ × (ˆx × yˆ ) = xˆ × zˆ = −ˆy. 
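The reciprocal-lattice relations, Eqs. (1.53b) and (1.53c), also lend themselves to a numerical sanity check. A sketch for an arbitrary non-orthogonal lattice (the lattice numbers and helper names are ours):

```python
# Reciprocal lattice vectors a', b', c' of Eq. (1.53a), built from a
# non-orthogonal lattice; verifies x'.x = 1 and x'.y = 0, Eqs. (1.53b)-(1.53c).
import math

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def cross(a, b):
    return (a[1]*b[2] - a[2]*b[1], a[2]*b[0] - a[0]*b[2], a[0]*b[1] - a[1]*b[0])

def scale(v, s):
    return tuple(s * vi for vi in v)

a, b, c = (1.0, 0.0, 0.0), (0.5, 1.0, 0.0), (0.3, 0.2, 1.0)
vol = dot(a, cross(b, c))            # a . b x c, the cell volume
ap = scale(cross(b, c), 1 / vol)     # a' = b x c / (a . b x c)
bp = scale(cross(c, a), 1 / vol)
cp = scale(cross(a, b), 1 / vol)

for x, xp in ((a, ap), (b, bp), (c, cp)):
    assert math.isclose(dot(xp, x), 1.0)                        # Eq. (1.53b)
assert abs(dot(ap, b)) < 1e-12 and abs(dot(ap, c)) < 1e-12      # Eq. (1.53c)
# Cf. Exercise 1.5.15(b): a' . b' x c' = (a . b x c)^(-1).
assert math.isclose(dot(ap, cross(bp, cp)), 1 / vol)
```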
Example 1.5.1 A TRIPLE VECTOR PRODUCT

For the vectors

    A = x̂ + 2ŷ − ẑ = (1, 2, −1),   B = ŷ + ẑ = (0, 1, 1),   C = x̂ − ŷ = (1, −1, 0),

we have

            | x̂   ŷ   ẑ |
    B × C = | 0    1   1 | = x̂ + ŷ − ẑ ,
            | 1   −1   0 |

and

                  | x̂   ŷ   ẑ |
    A × (B × C) = | 1    2  −1 | = −x̂ − ẑ = −(ŷ + ẑ) − (x̂ − ŷ) = −B − C.
                  | 1    1  −1 |

By rewriting the result in the last line of Example 1.5.1 as a linear combination of B and C, we notice that, taking a geometric approach, the triple vector product is perpendicular to A and to B × C. The plane defined by B and C is perpendicular to B × C, and so the triple product lies in this plane (see Fig. 1.17):

    A × (B × C) = uB + vC.   (1.54)

FIGURE 1.17 B and C are in the xy-plane. B × C is perpendicular to the xy-plane and is shown here along the z-axis. Then A × (B × C) is perpendicular to the z-axis and therefore is back in the xy-plane.

Taking the scalar product of Eq. (1.54) with A gives zero for the left-hand side, so uA · B + vA · C = 0. Hence u = wA · C and v = −wA · B for a suitable w. Substituting these values into Eq. (1.54) gives

    A × (B × C) = w[B(A · C) − C(A · B)];   (1.55)

we want to show that w = 1 in Eq. (1.55), an important relation sometimes known as the BAC–CAB rule. Since Eq. (1.55) is linear in A, B, and C, w is independent of these magnitudes. That is, we only need to show that w = 1 for unit vectors Â, B̂, Ĉ. Let us denote B̂ · Ĉ = cos α, Ĉ · Â = cos β, Â · B̂ = cos γ , and square Eq. (1.55) to obtain

    [Â × (B̂ × Ĉ)]² = Â²(B̂ × Ĉ)² − [Â · (B̂ × Ĉ)]²
                   = 1 − cos²α − [Â · (B̂ × Ĉ)]²
                   = w²[(Â · Ĉ)² + (Â · B̂)² − 2(Â · B̂)(Â · Ĉ)(B̂ · Ĉ)]
                   = w²(cos²β + cos²γ − 2 cos α cos β cos γ ),   (1.56)

using (Â × B̂)² = Â²B̂² − (Â · B̂)² repeatedly (see Eq. (1.43) for a proof). Consequently, the squared term [Â · (B̂ × Ĉ)]² that occurs in Eq.
(1.56) can be written as the (squared) volume spanned by A,   2 ˆ · (Bˆ × C) ˆ A = 1 − cos2 α − w 2 cos2 β + cos2 γ − 2 cos α cos β cos γ . Here w 2 = 1, since this volume is symmetric in α, β, γ . That is, w = ±1 and is indeˆ B, ˆ C. ˆ Using again the special case xˆ × (ˆx × yˆ ) = −ˆy in Eq. (1.55) finally pendent of A, gives w = 1. (An alternate derivation using the Levi-Civita symbol εij k of Chapter 2 is the topic of Exercise 2.9.8.) It might be noted here that just as vectors are independent of the coordinates, so a vector equation is independent of the particular coordinate system. The coordinate system only determines the components. If the vector equation can be established in Cartesian coordinates, it is established and valid in any of the coordinate systems to be introduced in Chapter 2. Thus, Eq. (1.55) may be verified by a direct though not very elegant method of expanding into Cartesian components (see Exercise 1.5.2). Exercises 1.5.1 One vertex of a glass parallelepiped is at the origin (Fig. 1.18). The three adjacent vertices are at (3, 0, 0), (0, 0, 2), and (0, 3, 1). All lengths are in centimeters. Calculate the number of cubic centimeters of glass in the parallelepiped using the triple scalar product. 1.5.2 Verify the expansion of the triple vector product A × (B × C) = B(A · C) − C(A · B) FIGURE 1.18 Parallelepiped: triple scalar product. 30 Chapter 1 Vector Analysis by direct expansion in Cartesian coordinates. 1.5.3 Show that the first step in Eq. (1.43), which is (A × B) · (A × B) = A2 B 2 − (A · B)2 , is consistent with the BAC–CAB rule for a triple vector product. 1.5.4 You are given the three vectors A, B, and C, A = xˆ + yˆ , B = yˆ + zˆ , C = xˆ − zˆ . Compute the triple scalar product, A · B × C. Noting that A = B + C, give a geometric interpretation of your result for the triple scalar product. (b) Compute A × (B × C). (a) 1.5.5 The orbital angular momentum L of a particle is given by L = r × p = mr × v, where p is the linear momentum. 
With linear and angular velocity related by v = ω × r, show that  L = mr 2 ω − rˆ (ˆr · ω) . Here rˆ is a unit vector in the r-direction. For r · ω = 0 this reduces to L = I ω, with the moment of inertia I given by mr 2 . In Section 3.5 this result is generalized to form an inertia tensor. 1.5.6 The kinetic energy of a single particle is given by T = 12 mv 2 . For rotational motion this becomes 12 m(ω × r)2 . Show that 1  T = m r 2 ω2 − (r · ω)2 . 2 For r · ω = 0 this reduces to T = 12 I ω2 , with the moment of inertia I given by mr 2 . 1.5.7 Show that13 a × (b × c) + b × (c × a) + c × (a × b) = 0. 1.5.8 A vector A is decomposed into a radial vector Ar and a tangential vector At . If rˆ is a unit vector in the radial direction, show that (a) Ar = rˆ (A · rˆ ) and (b) At = −ˆr × (ˆr × A). 1.5.9 Prove that a necessary and sufficient condition for the three (nonvanishing) vectors A, B, and C to be coplanar is the vanishing of the triple scalar product A · B × C = 0. 13 This is Jacobi’s identity for vector products; for commutators it is important in the context of Lie algebras (see Eq. (4.16) in Section 4.2). 1.5 Triple Scalar Product, Triple Vector Product 1.5.10 31 Three vectors A, B, and C are given by A = 3ˆx − 2ˆy + 2ˆz, B = 6ˆx + 4ˆy − 2ˆz, C = −3ˆx − 2ˆy − 4ˆz. Compute the values of A · B × C and A × (B × C), C × (A × B) and B × (C × A). 1.5.11 Vector D is a linear combination of three noncoplanar (and nonorthogonal) vectors: D = aA + bB + cC. Show that the coefficients are given by a ratio of triple scalar products, a= 1.5.12 Show that D·B×C , A·B×C and so on. (A × B) · (C × D) = (A · C)(B · D) − (A · D)(B · C). 1.5.13 Show that (A × B) × (C × D) = (A · B × D)C − (A · B × C)D. 1.5.14 For a spherical triangle such as pictured in Fig. 1.14 show that sin A sin BC = sin B sin CA = sin C sin AB . Here sin A is the sine of the included angle at A, while BC is the side opposite (in radians). 
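Several of these exercises (1.5.2, 1.5.7, 1.5.10) call for verifications of the BAC–CAB rule and its consequences; a numerical spot check with arbitrary integer vectors (helper names are ours):

```python
# Spot check of the BAC-CAB rule, Eq. (1.55): A x (B x C) = B(A.C) - C(A.B),
# and of the Jacobi identity of Exercise 1.5.7, which follows from it.
def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def cross(a, b):
    return (a[1]*b[2] - a[2]*b[1], a[2]*b[0] - a[0]*b[2], a[0]*b[1] - a[1]*b[0])

A, B, C = (1, 2, -1), (0, 1, 1), (1, -1, 0)
lhs = cross(A, cross(B, C))
rhs = tuple(b * dot(A, C) - c * dot(A, B) for b, c in zip(B, C))
assert lhs == rhs

# Jacobi identity: three cyclic BAC-CAB expansions cancel pairwise.
jac = tuple(sum(t) for t in zip(cross(A, cross(B, C)),
                                cross(B, cross(C, A)),
                                cross(C, cross(A, B))))
assert jac == (0, 0, 0)
```

Integer arithmetic keeps the comparison exact, so `==` rather than a tolerance suffices here.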
1.5.15 Given b×c , a·b×c and a · b × c = 0, show that a′ = b′ = c×a , a·b×c c′ = a×b , a·b×c (a) x · y′ = δxy , (x, y = a, b, c), (b) a′ · b′ × c′ = (a · b × c)−1 , b′ × c′ (c) a = ′ ′ . a · b × c′ 1.5.16 If x · y′ = δxy , (x, y = a, b, c), prove that a′ = (This is the converse of Problem 1.5.15.) 1.5.17 b×c . a·b×c Show that any vector V may be expressed in terms of the reciprocal vectors a′ , b′ , c′ (of Problem 1.5.15) by V = (V · a)a′ + (V · b)b′ + (V · c)c′ . 32 Chapter 1 Vector Analysis 1.5.18 An electric charge q1 moving with velocity v1 produces a magnetic induction B given by B= µ0 v1 × rˆ q1 2 4π r (mks units), where rˆ points from q1 to the point at which B is measured (Biot and Savart law). (a) Show that the magnetic force on a second charge q2 , velocity v2 , is given by the triple vector product F2 = µ0 q1 q2 v2 × (v1 × rˆ ). 4π r 2 (b) Write out the corresponding magnetic force F1 that q2 exerts on q1 . Define your unit radial vector. How do F1 and F2 compare? (c) Calculate F1 and F2 for the case of q1 and q2 moving along parallel trajectories side by side. ANS. µ0 q1 q2 (b) F1 = − v1 × (v2 × rˆ ). 4π r 2 In general, there is no simple relation between F1 and F2 . Specifically, Newton’s third law, F1 = −F2 , does not hold. µ0 q1 q2 2 (c) F1 = v rˆ = −F2 . 4π r 2 Mutual attraction. 1.6 GRADIENT, ∇ To provide a motivation for the vector nature of partial derivatives, we now introduce the total variation of a function F (x, y), dF = ∂F ∂F dx + dy. ∂x ∂y It consists of independent variations in the x- and y-directions. We write dF as a sum of two increments, one purely in the x- and the other in the y-direction, dF (x, y) ≡ F (x + dx, y + dy) − F (x, y)   = F (x + dx, y + dy) − F (x, y + dy) + F (x, y + dy) − F (x, y) = ∂F ∂F dx + dy, ∂x ∂y by adding and subtracting F (x, y + dy). 
The mean value theorem (that is, continuity of F ) tells us that here ∂F /∂x, ∂F /∂y are evaluated at some point ξ, η between x and x + dx, y 1.6 Gradient, ∇ 33 and y + dy, respectively. As dx → 0 and dy → 0, ξ → x and η → y. This result generalizes to three and higher dimensions. For example, for a function ϕ of three variables,  dϕ(x, y, z) ≡ ϕ(x + dx, y + dy, z + dz) − ϕ(x, y + dy, z + dz)  + ϕ(x, y + dy, z + dz) − ϕ(x, y, z + dz)  + ϕ(x, y, z + dz) − ϕ(x, y, z) (1.57) = ∂ϕ ∂ϕ ∂ϕ dx + dy + dz. ∂x ∂y ∂z Algebraically, dϕ in the total variation is a scalar product of the change in position dr and the directional change of ϕ. And now we are ready to recognize the three-dimensional partial derivative as a vector, which leads us to the concept of gradient. Suppose that ϕ(x, y, z) is a scalar point function, that is, a function whose value depends on the values of the coordinates (x, y, z). As a scalar, it must have the same value at a given fixed point in space, independent of the rotation of our coordinate system, or ϕ ′ (x1′ , x2′ , x3′ ) = ϕ(x1 , x2 , x3 ). (1.58) By differentiating with respect to xi′ we obtain ∂ϕ ′ (x1′ , x2′ , x3′ ) ∂ϕ(x1 , x2 , x3 )  ∂ϕ ∂xj  ∂ϕ aij = = = ′ ′ ′ ∂xi ∂xi ∂xj ∂xi ∂xj j (1.59) j by the rules of partial differentiation and Eqs. (1.16a) and (1.16b). But comparison with Eq. (1.17), the vector transformation law, now shows that we have constructed a vector with components ∂ϕ/∂xj . This vector we label the gradient of ϕ. A convenient symbolism is ∂ϕ ∂ϕ ∂ϕ + yˆ + zˆ ∂x ∂y ∂z (1.60) ∂ ∂ ∂ + yˆ + zˆ . ∂x ∂y ∂z (1.61) ∇ϕ = xˆ or ∇ = xˆ ∇ϕ (or del ϕ) is our gradient of the scalar ϕ, whereas ∇ (del) itself is a vector differential operator (available to operate on or to differentiate a scalar ϕ). All the relationships for ∇ (del) can be derived from the hybrid nature of del in terms of both the partial derivatives and its vector nature. 
The gradient of a scalar is extremely important in physics and engineering in expressing the relation between a force field and a potential field, force F = −∇(potential V ), (1.62) which holds for both gravitational and electrostatic fields, among others. Note that the minus sign in Eq. (1.62) results in water flowing downhill rather than uphill! If a force can be described, as in Eq. (1.62), by a single function V (r) everywhere, we call the scalar function V its potential. Because the force is the directional derivative of the potential, we can find the potential, if it exists, by integrating the force along a suitable path. Because the 34 Chapter 1 Vector Analysis total variation dV = ∇V · dr = −F · dr is the work done against the force along the path dr, we recognize the physical meaning of the potential (difference) as work and energy. Moreover, in a sum of path increments the intermediate points cancel,   V (r + dr1 + dr2 ) − V (r + dr1 ) + V (r + dr1 ) − V (r) = V (r + dr2 + dr1 ) − V (r), so the integrated work along some path from an initial point ri to a final point r is given by the potential difference V (r) − V (ri ) at the endpoints of the path. Therefore, such forces are especially simple and well behaved: They are called conservative. When there is loss of energy due to friction along the path or some other dissipation, the work will depend on the path, and such forces cannot be conservative: No potential exists. We discuss conservative forces in more detail in Section 1.13. Example 1.6.1 THE GRADIENT OF A POTENTIAL V (r) Let us calculate the gradient of V (r) = V ( x 2 + y 2 + z2 ), so ∇V (r) = xˆ ∂V (r) ∂V (r) ∂V (r) + yˆ + zˆ . ∂x ∂y ∂z Now, V (r) depends on x through the dependence of r on x. Therefore14 ∂V (r) dV (r) ∂r = · . ∂x dr ∂x From r as a function of x, y, z, Therefore ∂(x 2 + y 2 + z2 )1/2 x ∂r x = = 2 = . 2 2 1/2 ∂x ∂x r (x + y + z ) ∂V (r) dV (r) x = · . 
∂x dr r Permuting coordinates (x → y, y → z, z → x) to obtain the y and z derivatives, we get ∇V (r) = (ˆxx + yˆ y + zˆ z) = 1 dV r dr dV r dV = rˆ . r dr dr Here rˆ is a unit vector (r/r) in the positive radial direction. The gradient of a function of r is a vector in the (positive or negative) radial direction. In Section 2.5, rˆ is seen as one of the three orthonormal unit vectors of spherical polar coordinates and rˆ ∂/∂r as the radial component of ∇.  14 This is a special case of the chain rule of partial differentiation: ∂V (r, θ, ϕ) ∂V ∂r ∂V ∂θ ∂V ∂ϕ = + + , ∂x ∂r ∂x ∂θ ∂x ∂ϕ ∂x where ∂V /∂θ = ∂V /∂ϕ = 0, ∂V /∂r → dV /dr. 1.6 Gradient, ∇ 35 A Geometrical Interpretation One immediate application of ∇ϕ is to dot it into an increment of length dr = xˆ dx + yˆ dy + zˆ dz. Thus we obtain ∇ϕ · dr = ∂ϕ ∂ϕ ∂ϕ dx + dy + dz = dϕ, ∂x ∂y ∂z the change in the scalar function ϕ corresponding to a change in position dr. Now consider P and Q to be two points on a surface ϕ(x, y, z) = C, a constant. These points are chosen so that Q is a distance dr from P . Then, moving from P to Q, the change in ϕ(x, y, z) = C is given by dϕ = (∇ϕ) · dr = 0 (1.63) since we stay on the surface ϕ(x, y, z) = C. This shows that ∇ϕ is perpendicular to dr. Since dr may have any direction from P as long as it stays in the surface of constant ϕ, point Q being restricted to the surface but having arbitrary direction, ∇ϕ is seen as normal to the surface ϕ = constant (Fig. 1.19). If we now permit dr to take us from one surface ϕ = C1 to an adjacent surface ϕ = C2 (Fig. 1.20), dϕ = C1 − C2 = C = (∇ϕ) · dr. (1.64) For a given dϕ, |dr| is a minimum when it is chosen parallel to ∇ϕ (cos θ = 1); or, for a given |dr|, the change in the scalar function ϕ is maximized by choosing dr parallel to FIGURE 1.19 The length increment dr has to stay on the surface ϕ = C. 36 Chapter 1 Vector Analysis FIGURE 1.20 Gradient. ∇ϕ. 
This identifies ∇ϕ as a vector having the direction of the maximum space rate of change of ϕ, an identification that will be useful in Chapter 2 when we consider nonCartesian coordinate systems. This identification of ∇ϕ may also be developed by using the calculus of variations subject to a constraint, Exercise 17.6.9. Example 1.6.2 FORCE AS GRADIENT OF A POTENTIAL As a specific example of the foregoing, and as an extension of Example 1.6.1, we consider the surfaces consisting of concentric spherical shells, Fig. 1.21. We have 1/2  = r = C, ϕ(x, y, z) = x 2 + y 2 + z2 where r is the radius, equal to C, our constant. C = ϕ = r, the distance between two shells. From Example 1.6.1 dϕ(r) = rˆ . dr The gradient is in the radial direction and is normal to the spherical surface ϕ = C. ∇ϕ(r) = rˆ Example 1.6.3  INTEGRATION BY PARTS OF GRADIENT Let us prove the formula A(r) · ∇f (r) d 3 r = − f (r)∇ · A(r) d 3 r, where A or f or both vanish at infinity so that the integrated parts vanish. This condition is satisfied if, for example, A is the electromagnetic vector potential and f is a bound-state wave function ψ(r). 1.6 Gradient, ∇ 37 FIGURE 1.21 Gradient for ϕ(x, y, z) = (x 2 + y 2 + z2 )1/2 , spherical shells: (x22 + y22 + z22 )1/2 = r2 = C2 , (x12 + y12 + z12 )1/2 = r1 = C1 . Writing the inner product in Cartesian coordinates, integrating each one-dimensional integral by parts, and dropping the integrated terms, we obtain  ∂Ax ∞ 3 Ax f |x=−∞ − f dx dy dz + · · · A(r) · ∇f (r) d r = ∂x ∂Ay ∂Ax ∂Az =− f dx dy dz − f dy dx dz − f dz dx dy ∂x ∂y ∂z = − f (r)∇ · A(r) d 3 r. If A = eikz eˆ describes an outgoing photon in the direction of the constant polarization unit vector eˆ and f = ψ(r) is an exponentially decaying bound-state wave function, then deikz 3 eikz eˆ · ∇ψ(r) d 3 r = −ez ψ(r) d r = −ikez ψ(r)eikz d 3 r, dz because only the z-component of the gradient contributes. 
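Example 1.6.1 can be verified with a finite-difference gradient. A sketch, taking V(r) = r³ as a test case so that dV/dr = 3r² and ∇V should equal 3r² r̂ = 3r·r (the helper `grad` and the step size are ours):

```python
# Numerical check of Example 1.6.1: for V(r) = V(sqrt(x^2 + y^2 + z^2)),
# grad V = (dV/dr) r_hat. Test case: V(r) = r^3, so grad V = 3 r^2 r_hat = 3 r p.
import math

def grad(f, p, h=1e-6):
    # Central-difference approximation to the gradient, one partial per axis.
    out = []
    for i in range(3):
        qp = list(p); qp[i] += h
        qm = list(p); qm[i] -= h
        out.append((f(*qp) - f(*qm)) / (2 * h))
    return tuple(out)

def V(x, y, z):
    return math.sqrt(x*x + y*y + z*z) ** 3

p = (1.0, 2.0, 3.0)
r = math.sqrt(sum(c * c for c in p))
expected = tuple(3 * r * c for c in p)        # (dV/dr) r_hat = 3 r^2 (p / r)
assert all(math.isclose(g, e, rel_tol=1e-5) for g, e in zip(grad(V, p), expected))
```

The gradient is radial, as the Example concludes; any point other than the origin works.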
Exercises 1.6.1 If S(x, y, z) = (x 2 + y 2 + z2 )−3/2 , find (a) ∇S at the point (1, 2, 3); (b) the magnitude of the gradient of S, |∇S| at (1, 2, 3); and (c) the direction cosines of ∇S at (1, 2, 3).  38 Chapter 1 Vector Analysis 1.6.2 (a) Find a unit vector perpendicular to the surface x 2 + y 2 + z2 = 3 (b) at the point (1, 1, 1). Lengths are in centimeters. Derive the equation of the plane tangent to the surface at (1, 1, 1). √ ANS. (a) (ˆx + yˆ + zˆ )/ 3, (b) x + y + z = 3. 1.6.3 Given a vector r12 = xˆ (x1 − x2 ) + yˆ (y1 − y2 ) + zˆ (z1 − z2 ), show that ∇ 1 r12 (gradient with respect to x1 , y1 , and z1 of the magnitude r12 ) is a unit vector in the direction of r12 . 1.6.4 If a vector function F depends on both space coordinates (x, y, z) and time t, show that dF = (dr · ∇)F + 1.6.5 ∂F dt. ∂t Show that ∇(uv) = v∇u + u∇v, where u and v are differentiable scalar functions of x, y, and z. (a) Show that a necessary and sufficient condition that u(x, y, z) and v(x, y, z) are related by some function f (u, v) = 0 is that (∇u) × (∇v) = 0. (b) If u = u(x, y) and v = v(x, y), show that the condition (∇u) × (∇v) = 0 leads to the two-dimensional Jacobian   ∂u ∂u     u, v ∂y  =  ∂x J ∂v  = 0. ∂v x, y ∂x ∂y The functions u and v are assumed differentiable. 1.7 DIVERGENCE, ∇ Differentiating a vector function is a simple extension of differentiating scalar quantities. Suppose r(t) describes the position of a satellite at some time t. Then, for differentiation with respect to time, r(t + t) − r(t) dr(t) = lim = v, linear velocity. →0 dt t Graphically, we again have the slope of a curve, orbit, or trajectory, as shown in Fig. 1.22. If we resolve r(t) into its Cartesian components, dr/dt always reduces directly to a vector sum of not more than three (for three-dimensional space) scalar derivatives. In other coordinate systems (Chapter 2) the situation is more complicated, for the unit vectors are no longer constant in direction. 
Differentiation with respect to the space coordinates is handled in the same way as differentiation with respect to time, as seen in the following paragraphs. 1.7 Divergence, ∇ FIGURE 1.22 39 Differentiation of a vector. In Section 1.6, ∇ was defined as a vector operator. Now, paying attention to both its vector and its differential properties, we let it operate on a vector. First, as a vector we dot it into a second vector to obtain ∇·V= ∂Vy ∂Vx ∂Vz + + , ∂x ∂y ∂z (1.65a) known as the divergence of V. This is a scalar, as discussed in Section 1.3. Example 1.7.1 DIVERGENCE OF COORDINATE VECTOR Calculate ∇ · r:   ∂ ∂ ∂ · (ˆxx + yˆ y + zˆ z) + yˆ + zˆ ∇ · r = xˆ ∂x ∂y ∂z = ∂x ∂y ∂z + + , ∂x ∂y ∂z or ∇ · r = 3. Example 1.7.2  DIVERGENCE OF CENTRAL FORCE FIELD Generalizing Example 1.7.1,  ∂  ∂  ∂  x f (r) + y f (r) + z f (r) ∇ · rf (r) = ∂x ∂y ∂z x 2 df y 2 df z2 df + + r dr r dr r dr df = 3f (r) + r . dr = 3f (r) + 40 Chapter 1 Vector Analysis The manipulation of the partial derivatives leading to the second equation in Example 1.7.2 is discussed in Example 1.6.1. In particular, if f (r) = r n−1 ,  ∇ · rr n−1 = ∇ · rˆ r n = 3r n−1 + (n − 1)r n−1 = (n + 2)r n−1 . (1.65b) This divergence vanishes for n = −2, except at r = 0, an important fact in Section 1.14.  Example 1.7.3 INTEGRATION BY PARTS OF DIVERGENCE Let us prove the formula f (r)∇ · A(r) d 3 r = − A · ∇f d 3 r, where A or f or both vanish at infinity. To show this, we proceed, as in Example 1.6.3, by integration by parts after writing the inner product in Cartesian coordinates. Because the integrated terms are evaluated at infinity, where they vanish, we obtain   ∂Ay ∂Az ∂Ax 3 dx dy dz + dy dx dz + dz dx dy f (r)∇ · A(r) d r = f ∂x ∂y ∂z   ∂f ∂f ∂f Ax dx dy dz + Ay dy dx dz + Az dz dx dy =− ∂x ∂y ∂z = − A · ∇f d 3 r.  
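The result of Example 1.7.2 in the form of Eq. (1.65b), ∇ · (r̂ rⁿ) = (n + 2) rⁿ⁻¹, can be confirmed by finite differences. A sketch with n and the sample point chosen arbitrarily (helper names are ours):

```python
# Numerical check of Eq. (1.65b): div(r f(r)) with f(r) = r^(n-1) gives
# (n + 2) r^(n-1). Central differences away from the origin.
import math

def divergence(F, p, h=1e-6):
    out = 0.0
    for i in range(3):
        qp = list(p); qp[i] += h
        qm = list(p); qm[i] -= h
        out += (F(qp)[i] - F(qm)[i]) / (2 * h)   # dFx/dx + dFy/dy + dFz/dz
    return out

n = 4
def F(p):
    r = math.sqrt(sum(c * c for c in p))
    return [c * r ** (n - 1) for c in p]         # r f(r) with f(r) = r^(n-1)

p = (1.0, 2.0, 2.0)                              # r = 3 at this point
assert math.isclose(divergence(F, p), (n + 2) * 3.0 ** (n - 1), rel_tol=1e-6)
```

For n = −2 the analytic divergence vanishes everywhere except the origin, where the finite-difference check (like the formula) breaks down; that singular case is the subject of Section 1.14.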
A Physical Interpretation To develop a feeling for the physical significance of the divergence, consider ∇ · (ρv) with v(x, y, z), the velocity of a compressible fluid, and ρ(x, y, z), its density at point (x, y, z). If we consider a small volume dx dy dz (Fig. 1.23) at x = y = z = 0, the fluid flowing into this volume per unit time (positive x-direction) through the face EFGH is (rate of flow in)EFGH = ρvx |x=0 = dy dz. The components of the flow ρvy and ρvz tangential to this face contribute nothing to the flow through this face. The rate of flow out (still positive x-direction) through face ABCD is ρvx |x=dx dy dz. To compare these flows and to find the net flow out, we expand this last result, like the total variation in Section 1.6.15 This yields (rate of flow out)ABCD = ρvx |x=dx dy dz  ∂ (ρvx ) dx dy dz. = ρvx + ∂x x=0 Here the derivative term is a first correction term, allowing for the possibility of nonuniform density or velocity or both.16 The zero-order term ρvx |x=0 (corresponding to uniform flow) 15 Here we have the increment dx and we show a partial derivative with respect to x since ρv may also depend on y and z. x 16 Strictly speaking, ρv is averaged over face EFGH and the expression ρv + (∂/∂x)(ρv ) dx is similarly averaged over face x x x ABCD. Using an arbitrarily small differential volume, we find that the averages reduce to the values employed here. 1.7 Divergence, ∇ FIGURE 1.23 41 Differential rectangular parallelepiped (in first octant). cancels out: Net rate of flow out|x = ∂ (ρvx ) dx dy dz. ∂x Equivalently, we can arrive at this result by  ρvx (x, 0, 0) − ρvx (0, 0, 0) ∂[ρvx (x, y, z)]  ≡ lim .  x→0 x ∂x 0,0,0 Now, the x-axis is not entitled to any preferred treatment. The preceding result for the two faces perpendicular to the x-axis must hold for the two faces perpendicular to the y-axis, with x replaced by y and the corresponding changes for y and z: y → z, z → x. This is a cyclic permutation of the coordinates. 
A further cyclic permutation yields the result for the remaining two faces of our parallelepiped. Adding the net rate of flow out for all three pairs of surfaces of our volume element, we have

net flow out (per unit time) = [∂(ρvx)/∂x + ∂(ρvy)/∂y + ∂(ρvz)/∂z] dx dy dz
                             = ∇ · (ρv) dx dy dz.   (1.66)

Therefore the net flow of our compressible fluid out of the volume element dx dy dz per unit volume per unit time is ∇ · (ρv). Hence the name divergence. A direct application is in the continuity equation

∂ρ/∂t + ∇ · (ρv) = 0,   (1.67a)

which states that a net flow out of the volume results in a decreased density inside the volume. Note that in Eq. (1.67a), ρ is considered to be a possible function of time as well as of space: ρ(x, y, z, t). The divergence appears in a wide variety of physical problems, ranging from a probability current density in quantum mechanics to neutron leakage in a nuclear reactor.

The combination ∇ · (f V), in which f is a scalar function and V is a vector function, may be written

∇ · (f V) = ∂(f Vx)/∂x + ∂(f Vy)/∂y + ∂(f Vz)/∂z
 = (∂f/∂x)Vx + f ∂Vx/∂x + (∂f/∂y)Vy + f ∂Vy/∂y + (∂f/∂z)Vz + f ∂Vz/∂z
 = (∇f) · V + f ∇ · V,   (1.67b)

which is just what we would expect for the derivative of a product. Notice that ∇ as a differential operator differentiates both f and V; as a vector it is dotted into V (in each term).

If we have the special case of the divergence of a vector vanishing,

∇ · B = 0,   (1.68)

the vector B is said to be solenoidal, the term coming from the example in which B is the magnetic induction and Eq. (1.68) appears as one of Maxwell's equations. When a vector is solenoidal, it may be written as the curl of another vector known as the vector potential. (In Section 1.13 we shall calculate such a vector potential.)

Exercises

1.7.1 For a particle moving in a circular orbit r = x̂ r cos ωt + ŷ r sin ωt,
(a) evaluate r × ṙ, with ṙ = dr/dt = v.
(b) Show that r̈ + ω²r = 0 with r̈ = dv/dt.
The radius r and the angular velocity ω are constant.
ANS. (a) ẑ ωr².

1.7.2 Vector A satisfies the vector transformation law, Eq. (1.15). Show directly that its time derivative dA/dt also satisfies Eq. (1.15) and is therefore a vector.

1.7.3 Show, by differentiating components, that
(a) d(A · B)/dt = (dA/dt) · B + A · (dB/dt),
(b) d(A × B)/dt = (dA/dt) × B + A × (dB/dt),
just like the derivative of the product of two algebraic functions.

1.7.4 In Chapter 2 it will be seen that the unit vectors in non-Cartesian coordinate systems are usually functions of the coordinate variables, eᵢ = eᵢ(q₁, q₂, q₃), but |eᵢ| = 1. Show that either ∂eᵢ/∂qⱼ = 0 or ∂eᵢ/∂qⱼ is orthogonal to eᵢ.
Hint. ∂eᵢ²/∂qⱼ = 0.

1.7.5 Prove ∇ · (a × b) = b · (∇ × a) − a · (∇ × b).
Hint. Treat as a triple scalar product.

1.7.6 The electrostatic field of a point charge q is
E = (q/4πε₀)(r̂/r²).
Calculate the divergence of E. What happens at the origin?

1.8 CURL, ∇×

Another possible operation with the vector operator ∇ is to cross it into a vector. We obtain

∇ × V = x̂ (∂Vz/∂y − ∂Vy/∂z) + ŷ (∂Vx/∂z − ∂Vz/∂x) + ẑ (∂Vy/∂x − ∂Vx/∂y)

      = | x̂      ŷ      ẑ    |
        | ∂/∂x   ∂/∂y   ∂/∂z |,   (1.69)
        | Vx     Vy     Vz   |

which is called the curl of V. In expanding this determinant we must consider the derivative nature of ∇. Specifically, V × ∇ is defined only as an operator, another vector differential operator. It is certainly not equal, in general, to −∇ × V.¹⁷ In the case of Eq. (1.69) the determinant must be expanded from the top down so that we get the derivatives as shown in the middle portion of Eq. (1.69). If ∇ is crossed into the product of a scalar and a vector, we can show

∇ × (f V)|ₓ = ∂(f Vz)/∂y − ∂(f Vy)/∂z
 = f ∂Vz/∂y + (∂f/∂y)Vz − f ∂Vy/∂z − (∂f/∂z)Vy
 = f ∇ × V|ₓ + (∇f) × V|ₓ.
(1.70)

If we permute the coordinates x → y, y → z, z → x to pick up the y-component and then permute them a second time to pick up the z-component, then

∇ × (f V) = f ∇ × V + (∇f) × V,   (1.71)

which is the vector product analog of Eq. (1.67b). Again, as a differential operator ∇ differentiates both f and V. As a vector it is crossed into V (in each term).

¹⁷ In this same spirit, if A is a differential operator, it is not necessarily true that A × A = 0. Specifically, for the quantum mechanical angular momentum operator L = −i(r × ∇), we find that L × L = iL. See Sections 4.3 and 4.4 for more details.

Example 1.8.1  VECTOR POTENTIAL OF A CONSTANT B FIELD

From electrodynamics we know that ∇ · B = 0, which has the general solution B = ∇ × A, where A(r) is called the vector potential (of the magnetic induction), because ∇ · (∇ × A) = (∇ × ∇) · A ≡ 0, as a triple scalar product with two identical vectors. This last identity will not change if we add the gradient of some scalar function to the vector potential, which, therefore, is not unique.

In our case, we want to show that a vector potential is A = ½(B × r). Using the BAC–CAB rule in conjunction with Example 1.7.1, we find that

2∇ × A = ∇ × (B × r) = (∇ · r)B − (B · ∇)r = 3B − B = 2B,

where we indicate by the ordering of the scalar product of the second term that the gradient still acts on the coordinate vector.

Example 1.8.2  CURL OF A CENTRAL FORCE FIELD

Calculate ∇ × (r f(r)). By Eq. (1.71),

∇ × (r f(r)) = f(r) ∇ × r + (∇f(r)) × r.   (1.72)

First,

∇ × r = | x̂      ŷ      ẑ    |
        | ∂/∂x   ∂/∂y   ∂/∂z | = 0.   (1.73)
        | x      y      z    |

Second, using ∇f(r) = r̂ (df/dr) (Example 1.6.1), we obtain

∇ × (r f(r)) = (df/dr) r̂ × r = 0.   (1.74)

This vector product vanishes, since r = r̂ r and r̂ × r̂ = 0.

To develop a better feeling for the physical significance of the curl, we consider the circulation of fluid around a differential loop in the xy-plane, Fig. 1.24.

FIGURE 1.24 Circulation around a differential loop.
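The claim of Example 1.8.1 can also be checked by brute force. The sketch below assumes SymPy; the `curl` helper is ours, not part of the text.

```python
# Symbolic sketch of Example 1.8.1 (SymPy assumed; the curl helper is
# ours): for a constant field B, A = (1/2) B x r satisfies curl A = B.
import sympy as sp

x, y, z, Bx, By, Bz = sp.symbols('x y z B_x B_y B_z')
coords = (x, y, z)
B = sp.Matrix([Bx, By, Bz])      # constant, i.e. independent of x, y, z
r = sp.Matrix([x, y, z])
A = B.cross(r) / 2

def curl(F):
    d = lambda i, j: sp.diff(F[i], coords[j])
    return sp.Matrix([d(2, 1) - d(1, 2), d(0, 2) - d(2, 0), d(1, 0) - d(0, 1)])

assert all(sp.expand(e) == 0 for e in (curl(A) - B))
```

Adding ∇χ for any scalar χ to A would leave this check unchanged, illustrating the non-uniqueness noted in the example.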
Although the circulation is technically given by a vector line integral ∮ V · dλ (Section 1.10), we can set up the equivalent scalar integrals here. Let us take the circulation to be

circulation₁₂₃₄ = ∫₁ Vx(x, y) dλx + ∫₂ Vy(x, y) dλy + ∫₃ Vx(x, y) dλx + ∫₄ Vy(x, y) dλy.   (1.75)

The numbers 1, 2, 3, and 4 refer to the numbered line segments in Fig. 1.24. In the first integral, dλx = +dx; but in the third integral, dλx = −dx because the third line segment is traversed in the negative x-direction. Similarly, dλy = +dy for the second integral, −dy for the fourth. Next, the integrands are referred to the point (x₀, y₀) with a Taylor expansion¹⁸ taking into account the displacement of line segment 3 from 1 and that of 2 from 4. For our differential line segments this leads to

circulation₁₂₃₄ = Vx(x₀, y₀) dx + [Vy(x₀, y₀) + (∂Vy/∂x) dx] dy
 + [Vx(x₀, y₀) + (∂Vx/∂y) dy](−dx) + Vy(x₀, y₀)(−dy)
 = (∂Vy/∂x − ∂Vx/∂y) dx dy.   (1.76)

Dividing by dx dy, we have

circulation per unit area = ∇ × V|z.   (1.77)

The circulation¹⁹ about our differential area in the xy-plane is given by the z-component of ∇ × V. In principle, the curl ∇ × V at (x₀, y₀) could be determined by inserting a (differential) paddle wheel into the moving fluid at point (x₀, y₀). The rotation of the little paddle wheel would be a measure of the curl, and its axis would be along the direction of ∇ × V, which is perpendicular to the plane of circulation.

We shall use the result, Eq. (1.76), in Section 1.12 to derive Stokes' theorem. Whenever the curl of a vector V vanishes,

∇ × V = 0,   (1.78)

V is labeled irrotational. The most important physical examples of irrotational vectors are the gravitational and electrostatic forces. In each case

V = C r̂/r² = C r/r³,   (1.79)

where C is a constant and r̂ is the unit vector in the outward radial direction. For the gravitational case we have C = −Gm₁m₂, given by Newton's law of universal gravitation.
If C = q₁q₂/4πε₀, we have Coulomb's law of electrostatics (mks units). The force V given in Eq. (1.79) may be shown to be irrotational by direct expansion into Cartesian components, as we did in Example 1.8.1. Another approach is developed in Chapter 2, in which we express ∇×, the curl, in terms of spherical polar coordinates. In Section 1.13 we shall see that whenever a vector is irrotational, the vector may be written as the (negative) gradient of a scalar potential. In Section 1.16 we shall prove that a vector field may be resolved into an irrotational part and a solenoidal part (subject to conditions at infinity). In terms of the electromagnetic field this corresponds to the resolution into an irrotational electric field and a solenoidal magnetic field.

For waves in an elastic medium, if the displacement u is irrotational, ∇ × u = 0, plane waves (or spherical waves at large distances) become longitudinal. If u is solenoidal, ∇ · u = 0, then the waves become transverse. A seismic disturbance will produce a displacement that may be resolved into a solenoidal part and an irrotational part (compare Section 1.16). The irrotational part yields the longitudinal P (primary) earthquake waves. The solenoidal part gives rise to the slower transverse S (secondary) waves.

Using the gradient, divergence, and curl, and of course the BAC–CAB rule, we may construct or verify a large number of useful vector identities. For verification, complete expansion into Cartesian components is always a possibility. Sometimes if we use insight instead of routine shuffling of Cartesian components, the verification process can be shortened drastically.

¹⁸ Here, Vy(x₀ + dx, y₀) = Vy(x₀, y₀) + (∂Vy/∂x)|_(x₀,y₀) dx + ···. The higher-order terms will drop out in the limit as dx → 0. A correction term for the variation of Vy with y is canceled by the corresponding term in the fourth integral.
¹⁹ In fluid dynamics ∇ × V is called the "vorticity."
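Such routine Cartesian expansions can also be carried out symbolically. As an illustration, the two product rules already derived, Eqs. (1.67b) and (1.71), are checked below on arbitrary smooth fields; this is a sketch assuming SymPy, and the helper names are ours.

```python
# Symbolic sketch (SymPy assumed; helper names are ours) of the two
# product rules Eqs. (1.67b) and (1.71) for arbitrary smooth f and V.
import sympy as sp

x, y, z = sp.symbols('x y z')
coords = (x, y, z)
f = sp.Function('f')(x, y, z)
V = sp.Matrix([sp.Function(c)(x, y, z) for c in ('V_x', 'V_y', 'V_z')])

grad = lambda s: sp.Matrix([sp.diff(s, c) for c in coords])
div = lambda F: sum(sp.diff(F[i], coords[i]) for i in range(3))
def curl(F):
    d = lambda i, j: sp.diff(F[i], coords[j])
    return sp.Matrix([d(2, 1) - d(1, 2), d(0, 2) - d(2, 0), d(1, 0) - d(0, 1)])

# Eq. (1.67b): div(fV) = grad f . V + f div V
assert sp.expand(div(f * V) - grad(f).dot(V) - f * div(V)) == 0

# Eq. (1.71): curl(fV) = f curl V + grad f x V
residual = curl(f * V) - f * curl(V) - grad(f).cross(V)
assert all(sp.expand(e) == 0 for e in residual)
```

Because SymPy applies the product rule when differentiating, each identity reduces to an exact cancellation after expansion.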
Remember that ∇ is a vector operator, a hybrid creature satisfying two sets of rules:
1. vector rules, and
2. partial differentiation rules, including differentiation of a product.

Example 1.8.3  GRADIENT OF A DOT PRODUCT

Verify that

∇(A · B) = (B · ∇)A + (A · ∇)B + B × (∇ × A) + A × (∇ × B).   (1.80)

This particular example hinges on the recognition that ∇(A · B) is the type of term that appears in the BAC–CAB expansion of a triple vector product, Eq. (1.55). For instance,

A × (∇ × B) = ∇(A · B) − (A · ∇)B,

with the ∇ differentiating only B, not A. From the commutativity of factors in a scalar product we may interchange A and B and write

B × (∇ × A) = ∇(A · B) − (B · ∇)A,

now with ∇ differentiating only A, not B. Adding these two equations, we obtain ∇ differentiating the product A · B and the identity, Eq. (1.80). This identity is used frequently in electromagnetic theory. Exercise 1.8.13 is a simple illustration.

Example 1.8.4  INTEGRATION BY PARTS OF CURL

Let us prove the formula

∫ C(r) · (∇ × A(r)) d³r = ∫ A(r) · (∇ × C(r)) d³r,

where A or C or both vanish at infinity. To show this, we proceed, as in Examples 1.6.3 and 1.7.3, by integration by parts after writing the inner product and the curl in Cartesian coordinates. Because the integrated terms vanish at infinity we obtain

∫ C · (∇ × A) d³r = ∫ [Cx(∂Az/∂y − ∂Ay/∂z) + Cy(∂Ax/∂z − ∂Az/∂x) + Cz(∂Ay/∂x − ∂Ax/∂y)] d³r
 = ∫ [Ax(∂Cz/∂y − ∂Cy/∂z) + Ay(∂Cx/∂z − ∂Cz/∂x) + Az(∂Cy/∂x − ∂Cx/∂y)] d³r
 = ∫ A · (∇ × C) d³r,

just rearranging appropriately the terms after integration by parts.

Exercises

1.8.1 Show, by rotating the coordinates, that the components of the curl of a vector transform as a vector.
Hint. The direction cosine identities of Eq. (1.46) are available as needed.

1.8.2 Show that u × v is solenoidal if u and v are each irrotational.

1.8.3 If A is irrotational, show that A × r is solenoidal.

1.8.4 A rigid body is rotating with constant angular velocity ω.
Show that the linear velocity v is solenoidal.

1.8.5 If a vector function f(x, y, z) is not irrotational but the product of f and a scalar function g(x, y, z) is irrotational, show that then f · ∇ × f = 0.

1.8.6 If (a) V = x̂ Vx(x, y) + ŷ Vy(x, y) and (b) ∇ × V ≠ 0, prove that ∇ × V is perpendicular to V.

1.8.7 Classically, orbital angular momentum is given by L = r × p, where p is the linear momentum. To go from classical mechanics to quantum mechanics, replace p by the operator −i∇ (Section 15.6). Show that the quantum mechanical angular momentum operator has Cartesian components (in units of ℏ)

Lx = −i (y ∂/∂z − z ∂/∂y),
Ly = −i (z ∂/∂x − x ∂/∂z),
Lz = −i (x ∂/∂y − y ∂/∂x).

1.8.8 Using the angular momentum operators previously given, show that they satisfy commutation relations of the form

[Lx, Ly] ≡ LxLy − LyLx = iLz

and hence L × L = iL. These commutation relations will be taken later as the defining relations of an angular momentum operator (Exercise 3.2.15 and the following one, and Chapter 4).

1.8.9 With the commutator bracket notation [Lx, Ly] = LxLy − LyLx, the angular momentum vector L satisfies [Lx, Ly] = iLz, etc., or L × L = iL. If two other vectors a and b commute with each other and with L, that is, [a, b] = [a, L] = [b, L] = 0, show that
[a · L, b · L] = i(a × b) · L.

1.8.10 For A = x̂ Ax(x, y, z) and B = x̂ Bx(x, y, z) evaluate each term in the vector identity
∇(A · B) = (B · ∇)A + (A · ∇)B + B × (∇ × A) + A × (∇ × B)
and verify that the identity is satisfied.

1.8.11 Verify the vector identity
∇ × (A × B) = (B · ∇)A − (A · ∇)B − B(∇ · A) + A(∇ · B).

1.8.12 As an alternative to the vector identity of Example 1.8.3 show that
∇(A · B) = (A × ∇) × B + (B × ∇) × A + A(∇ · B) + B(∇ · A).

1.8.13 Verify the identity
A × (∇ × A) = ½∇(A²) − (A · ∇)A.

1.8.14 If A and B are constant vectors, show that
∇(A · B × r) = A × B.
1.8.15 A distribution of electric currents creates a constant magnetic moment m = const. The force on m in an external magnetic induction B is given by
F = ∇ × (B × m).
Show that F = (m · ∇)B.
Note. Assuming no time dependence of the fields, Maxwell's equations yield ∇ × B = 0. Also, ∇ · B = 0.

1.8.16 An electric dipole of moment p is located at the origin. The dipole creates an electric potential at r given by
ψ(r) = p · r / (4πε₀ r³).
Find the electric field, E = −∇ψ, at r.

1.8.17 The vector potential A of a magnetic dipole, dipole moment m, is given by A(r) = (μ₀/4π)(m × r/r³). Show that the magnetic induction B = ∇ × A is given by
B = (μ₀/4π) [3r̂(r̂ · m) − m] / r³.
Note. The limiting process leading to point dipoles is discussed in Section 12.1 for electric dipoles, in Section 12.5 for magnetic dipoles.

1.8.18 The velocity of a two-dimensional flow of liquid is given by
V = x̂ u(x, y) − ŷ v(x, y).
If the liquid is incompressible and the flow is irrotational, show that
∂u/∂x = ∂v/∂y   and   ∂u/∂y = −∂v/∂x.
These are the Cauchy–Riemann conditions of Section 6.2.

1.8.19 The evaluation in this section of the four integrals for the circulation omitted Taylor series terms such as ∂Vx/∂x, ∂Vy/∂y and all second derivatives. Show that ∂Vx/∂x, ∂Vy/∂y cancel out when the four integrals are added and that the second-derivative terms drop out in the limit as dx → 0, dy → 0.
Hint. Calculate the circulation per unit area and then take the limit dx → 0, dy → 0.

1.9 SUCCESSIVE APPLICATIONS OF ∇

We have now defined gradient, divergence, and curl to obtain vector, scalar, and vector quantities, respectively. Letting ∇ operate on each of these quantities, we obtain

(a) ∇ · ∇ϕ   (b) ∇ × ∇ϕ   (c) ∇∇ · V   (d) ∇ · ∇ × V   (e) ∇ × (∇ × V),

all five expressions involving second derivatives and all five appearing in the second-order differential equations of mathematical physics, particularly in electromagnetic theory.
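Three of these combinations obey identities that are established in the pages that follow (Eqs. (1.82), (1.84), and (1.85)); as a preview they can be confirmed symbolically on arbitrary smooth fields. This is a sketch assuming SymPy; the grad/div/curl helpers are ours.

```python
# Symbolic sketch (SymPy assumed; the grad/div/curl helpers are ours) of
# the identities obeyed by combinations (b), (d), and (e).
import sympy as sp

x, y, z = sp.symbols('x y z')
coords = (x, y, z)
phi = sp.Function('phi')(x, y, z)
V = [sp.Function(c)(x, y, z) for c in ('V_x', 'V_y', 'V_z')]

grad = lambda s: [sp.diff(s, c) for c in coords]
div = lambda F: sum(sp.diff(F[i], coords[i]) for i in range(3))
def curl(F):
    d = lambda i, j: sp.diff(F[i], coords[j])
    return [d(2, 1) - d(1, 2), d(0, 2) - d(2, 0), d(1, 0) - d(0, 1)]

# (b) curl grad phi = 0 and (d) div curl V = 0
assert all(c == 0 for c in curl(grad(phi)))
assert sp.expand(div(curl(V))) == 0

# (e) curl curl V = grad div V - laplacian V (Cartesian coordinates)
lap = [sum(sp.diff(Fi, c, 2) for c in coords) for Fi in V]
rhs = [g - l for g, l in zip(grad(div(V)), lap)]
assert all(sp.expand(a - b) == 0 for a, b in zip(curl(curl(V)), rhs))
```

The cancellations in (b) and (d) rest on the equality of mixed partial derivatives, which SymPy assumes for smooth functions, just as the text does below.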
The first expression, ∇ · ∇ϕ, the divergence of the gradient, is named the Laplacian of ϕ. We have

∇ · ∇ϕ = (x̂ ∂/∂x + ŷ ∂/∂y + ẑ ∂/∂z) · (x̂ ∂ϕ/∂x + ŷ ∂ϕ/∂y + ẑ ∂ϕ/∂z)
       = ∂²ϕ/∂x² + ∂²ϕ/∂y² + ∂²ϕ/∂z².   (1.81a)

When ϕ is the electrostatic potential, we have

∇ · ∇ϕ = 0   (1.81b)

at points where the charge density vanishes, which is Laplace's equation of electrostatics. Often the combination ∇ · ∇ is written ∇², or Δ in the European literature.

Example 1.9.1  LAPLACIAN OF A POTENTIAL

Calculate ∇ · ∇V(r). Referring to Examples 1.6.1 and 1.7.2,

∇ · ∇V(r) = ∇ · (r̂ dV/dr) = (2/r) dV/dr + d²V/dr²,

replacing f(r) in Example 1.7.2 by (1/r) · dV/dr. If V(r) = rⁿ, this reduces to

∇ · ∇rⁿ = n(n + 1) r^(n−2).

This vanishes for n = 0 [V(r) = constant] and for n = −1; that is, V(r) = 1/r is a solution of Laplace's equation, ∇²V(r) = 0. This is for r ≠ 0. At r = 0, a Dirac delta function is involved (see Eq. (1.169) and Section 9.7).

Expression (b) may be written

∇ × ∇ϕ = | x̂       ŷ       ẑ     |
         | ∂/∂x    ∂/∂y    ∂/∂z  |.
         | ∂ϕ/∂x   ∂ϕ/∂y   ∂ϕ/∂z |

By expanding the determinant, we obtain

∇ × ∇ϕ = x̂ (∂²ϕ/∂y∂z − ∂²ϕ/∂z∂y) + ŷ (∂²ϕ/∂z∂x − ∂²ϕ/∂x∂z) + ẑ (∂²ϕ/∂x∂y − ∂²ϕ/∂y∂x) = 0,   (1.82)

assuming that the order of partial differentiation may be interchanged. This is true as long as these second partial derivatives of ϕ are continuous functions. Then, from Eq. (1.82), the curl of a gradient is identically zero. All gradients, therefore, are irrotational. Note that the zero in Eq. (1.82) comes as a mathematical identity, independent of any physics. The zero in Eq. (1.81b) is a consequence of physics.

Expression (d) is a triple scalar product that may be written

∇ · ∇ × V = | ∂/∂x   ∂/∂y   ∂/∂z |
            | ∂/∂x   ∂/∂y   ∂/∂z |.   (1.83)
            | Vx     Vy     Vz   |

Again, assuming continuity so that the order of differentiation is immaterial, we obtain

∇ · ∇ × V = 0.   (1.84)

The divergence of a curl vanishes, or: all curls are solenoidal.
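The result of Example 1.9.1 can be confirmed symbolically. This is a sketch assuming SymPy; the `lap` helper is ours.

```python
# Symbolic sketch of Example 1.9.1 (SymPy assumed): the Laplacian of r^n
# is n(n+1) r^(n-2), and V = 1/r solves Laplace's equation for r != 0.
import sympy as sp

x, y, z, n = sp.symbols('x y z n', positive=True)
r = sp.sqrt(x**2 + y**2 + z**2)
lap = lambda s: sum(sp.diff(s, c, 2) for c in (x, y, z))

# Laplacian of r^n; dividing by r^(n-2) turns the check into a rational
# cancellation
assert sp.simplify(lap(r**n) / r**(n - 2) - n * (n + 1)) == 0

# n = -1: V = 1/r is harmonic away from the origin
assert sp.simplify(lap(1 / r)) == 0
```

The delta-function contribution at r = 0 mentioned in the text is, of course, invisible to this pointwise symbolic check.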
In Section 1.16 we shall see that vectors may be resolved into solenoidal and irrotational parts by Helmholtz's theorem.

The two remaining expressions satisfy a relation

∇ × (∇ × V) = ∇∇ · V − ∇ · ∇V,   (1.85)

valid in Cartesian coordinates (but not in curved coordinates). This follows immediately from Eq. (1.55), the BAC–CAB rule, which we rewrite so that C appears at the extreme right of each term. The term ∇ · ∇V was not included in our list, but it may be defined by Eq. (1.85).

Example 1.9.2  ELECTROMAGNETIC WAVE EQUATION

One important application of this vector relation (Eq. (1.85)) is in the derivation of the electromagnetic wave equation. In vacuum Maxwell's equations become

∇ · B = 0,   (1.86a)
∇ · E = 0,   (1.86b)
∇ × B = ε₀μ₀ ∂E/∂t,   (1.86c)
∇ × E = −∂B/∂t.   (1.86d)

Here E is the electric field, B is the magnetic induction, ε₀ is the electric permittivity, and μ₀ is the magnetic permeability (SI units), so ε₀μ₀ = 1/c², c being the velocity of light. The relation has important consequences. Because ε₀, μ₀ can be measured in any frame, the velocity of light is the same in any frame.

Suppose we eliminate B from Eqs. (1.86c) and (1.86d). We may do this by taking the curl of both sides of Eq. (1.86d) and the time derivative of both sides of Eq. (1.86c). Since the space and time derivatives commute,

∂(∇ × B)/∂t = ∇ × ∂B/∂t,

and we obtain

∇ × (∇ × E) = −ε₀μ₀ ∂²E/∂t².

Application of Eqs. (1.85) and (1.86b) yields

∇ · ∇E = ε₀μ₀ ∂²E/∂t²,   (1.87)

the electromagnetic vector wave equation. Again, if E is expressed in Cartesian coordinates, Eq. (1.87) separates into three scalar wave equations, each involving the scalar Laplacian.

When external electric charge and current densities are kept as driving terms in Maxwell's equations, similar wave equations are valid for the electric potential and the vector potential. To show this, we solve Eq. (1.86a) by writing B = ∇ × A as a curl of the vector potential.
This expression is substituted into Faraday's induction law in differential form, Eq. (1.86d), to yield ∇ × (E + ∂A/∂t) = 0. The vanishing curl implies that E + ∂A/∂t is a gradient and, therefore, can be written as −∇ϕ, where ϕ(r, t) is defined as the (nonstatic) electric potential. These results for the B and E fields,

B = ∇ × A,   E = −∇ϕ − ∂A/∂t,   (1.88)

solve the homogeneous Maxwell's equations.

We now show that the inhomogeneous Maxwell's equations,

Gauss' law:   ∇ · E = ρ/ε₀,
Oersted's law:   ∇ × B − (1/c²) ∂E/∂t = μ₀J,   (1.89)

in differential form lead to wave equations for the potentials ϕ and A, provided that ∇ · A is determined by the constraint (1/c²) ∂ϕ/∂t + ∇ · A = 0. This choice of fixing the divergence of the vector potential, called the Lorentz gauge, serves to uncouple the differential equations of both potentials. This gauge constraint is not a restriction; it has no physical effect.

Substituting our electric field solution into Gauss' law yields

ρ/ε₀ = ∇ · E = −∇²ϕ − ∂(∇ · A)/∂t = −∇²ϕ + (1/c²) ∂²ϕ/∂t²,   (1.90)

the wave equation for the electric potential. In the last step we have used the Lorentz gauge to replace the divergence of the vector potential by the time derivative of the electric potential and thus decouple ϕ from A.

Finally, we substitute B = ∇ × A into Oersted's law and use Eq. (1.85), which expands ∇²A in terms of a longitudinal (the gradient term) and a transverse component (the curl term). This yields

μ₀J + (1/c²) ∂E/∂t = ∇ × (∇ × A) = ∇(∇ · A) − ∇²A = μ₀J − (1/c²) ∇(∂ϕ/∂t) − (1/c²) ∂²A/∂t² + ∇(∇ · A),

where we have used the electric field solution (Eq. (1.88)) in the last step. Now we see that the Lorentz gauge condition eliminates the gradient terms, so the wave equation

(1/c²) ∂²A/∂t² − ∇²A = μ₀J   (1.91)

for the vector potential remains. Finally, looking back at Oersted's law, taking the divergence of Eq.
(1.89), dropping ∇ · (∇ × B) = 0, and substituting Gauss' law for ∇ · E = ρ/ε₀, we find

μ₀ ∇ · J = −(1/ε₀c²) ∂ρ/∂t,

where ε₀μ₀ = 1/c²; that is, ∂ρ/∂t + ∇ · J = 0, the continuity equation for the current density. This step justifies the inclusion of Maxwell's displacement current in the generalization of Oersted's law to nonstationary situations.

Exercises

1.9.1 Verify Eq. (1.85),
∇ × (∇ × V) = ∇∇ · V − ∇ · ∇V,
by direct expansion in Cartesian coordinates.

1.9.2 Show that the identity
∇ × (∇ × V) = ∇∇ · V − ∇ · ∇V
follows from the BAC–CAB rule for a triple vector product. Justify any alteration of the order of factors in the BAC and CAB terms.

1.9.3 Prove that ∇ × (ϕ∇ϕ) = 0.

1.9.4 You are given that the curl of F equals the curl of G. Show that F and G may differ by (a) a constant and (b) a gradient of a scalar function.

1.9.5 The Navier–Stokes equation of hydrodynamics contains a nonlinear term (v · ∇)v. Show that the curl of this term may be written as −∇ × [v × (∇ × v)].

1.9.6 From the Navier–Stokes equation for the steady flow of an incompressible viscous fluid we have the term
∇ × [v × (∇ × v)],
where v is the fluid velocity. Show that this term vanishes for the special case v = x̂ v(y, z).

1.9.7 Prove that (∇u) × (∇v) is solenoidal, where u and v are differentiable scalar functions.

1.9.8 ϕ is a scalar satisfying Laplace's equation, ∇²ϕ = 0. Show that ∇ϕ is both solenoidal and irrotational.

1.9.9 With ψ a scalar (wave) function, show that
(r × ∇) · (r × ∇)ψ = r²∇²ψ − r² ∂²ψ/∂r² − 2r ∂ψ/∂r.
(This can actually be shown more easily in spherical polar coordinates, Section 2.5.)

1.9.10 In a (nonrotating) isolated mass such as a star, the condition for equilibrium is
∇P + ρ∇ϕ = 0.
Here P is the total pressure, ρ is the density, and ϕ is the gravitational potential. Show that at any given point the normals to the surfaces of constant pressure and constant gravitational potential are parallel.
1.9.11 In the Pauli theory of the electron, one encounters the expression
(p − eA) × (p − eA)ψ,
where ψ is a scalar (wave) function. A is the magnetic vector potential related to the magnetic induction B by B = ∇ × A. Given that p = −i∇, show that this expression reduces to ieBψ. Show that this leads to the orbital g-factor gL = 1 upon writing the magnetic moment as μ = gL L in units of Bohr magnetons and L = −ir × ∇. See also Exercise 1.13.7.

1.9.12 Show that any solution of the equation
∇ × (∇ × A) − k²A = 0
automatically satisfies the vector Helmholtz equation
∇²A + k²A = 0
and the solenoidal condition
∇ · A = 0.
Hint. Let ∇· operate on the first equation.

1.9.13 The theory of heat conduction leads to an equation
∇²Ψ = k|∇Φ|²,
where Φ is a potential satisfying Laplace's equation: ∇²Φ = 0. Show that a solution of this equation is
Ψ = ½kΦ².

1.10 VECTOR INTEGRATION

The next step after differentiating vectors is to integrate them. Let us start with line integrals and then proceed to surface and volume integrals. In each case the method of attack will be to reduce the vector integral to scalar integrals with which the reader is assumed familiar.

Line Integrals

Using an increment of length dr = x̂ dx + ŷ dy + ẑ dz, we may encounter the line integrals

∫_C ϕ dr,   (1.92a)
∫_C V · dr,   (1.92b)
∫_C V × dr,   (1.92c)

in each of which the integral is over some contour C that may be open (with starting point and ending point separated) or closed (forming a loop). Because of its physical interpretation that follows, the second form, Eq. (1.92b), is by far the most important of the three.

With ϕ, a scalar, the first integral reduces immediately to

∫_C ϕ dr = x̂ ∫_C ϕ(x, y, z) dx + ŷ ∫_C ϕ(x, y, z) dy + ẑ ∫_C ϕ(x, y, z) dz.   (1.93)

This separation has employed the relation

∫ x̂ ϕ dx = x̂ ∫ ϕ dx,   (1.94)

which is permissible because the Cartesian unit vectors x̂, ŷ, and ẑ are constant in both magnitude and direction.
Perhaps this relation is obvious here, but it will not be true in the non-Cartesian systems encountered in Chapter 2.

The three integrals on the right side of Eq. (1.93) are ordinary scalar integrals and, to avoid complications, we assume that they are Riemann integrals. Note, however, that the integral with respect to x cannot be evaluated unless y and z are known in terms of x, and similarly for the integrals with respect to y and z. This simply means that the path of integration C must be specified. Unless the integrand has special properties so that the integral depends only on the value of the endpoints, the value will depend on the particular choice of contour C. For instance, if we choose the very special case ϕ = 1, Eq. (1.92a) is just the vector distance from the start of contour C to the endpoint, in this case independent of the choice of path connecting fixed endpoints. With dr = x̂ dx + ŷ dy + ẑ dz, the second and third forms also reduce to scalar integrals and, like Eq. (1.92a), are dependent, in general, on the choice of path. The form (Eq. (1.92b)) is exactly the same as that encountered when we calculate the work done by a force that varies along the path,

W = ∫ F · dr = ∫ Fx(x, y, z) dx + ∫ Fy(x, y, z) dy + ∫ Fz(x, y, z) dz.   (1.95a)

In this expression F is the force exerted on a particle.

FIGURE 1.25 A path of integration.

Example 1.10.1  PATH-DEPENDENT WORK

The force exerted on a body is F = −x̂ y + ŷ x. The problem is to calculate the work done going from the origin to the point (1, 1):

W = ∫_(0,0)^(1,1) F · dr = ∫_(0,0)^(1,1) (−y dx + x dy).   (1.95b)

Separating the two integrals, we obtain

W = −∫₀¹ y dx + ∫₀¹ x dy.   (1.95c)

The first integral cannot be evaluated until we specify the values of y as x ranges from 0 to 1. Likewise, the second integral requires x as a function of y. Consider first the path shown in Fig. 1.25.
Then

W = −∫₀¹ 0 dx + ∫₀¹ 1 dy = 1,   (1.95d)

since y = 0 along the first segment of the path and x = 1 along the second. If we select the path [x = 0, 0 ≤ y ≤ 1] and [0 ≤ x ≤ 1, y = 1], then Eq. (1.95c) gives W = −1. For this force the work done depends on the choice of path.

Surface Integrals

Surface integrals appear in the same forms as line integrals, the element of area also being a vector, dσ.²⁰ Often this area element is written n dA, in which n is a unit (normal) vector to indicate the positive direction.²¹ There are two conventions for choosing the positive direction. First, if the surface is a closed surface, we agree to take the outward normal as positive. Second, if the surface is an open surface, the positive normal depends on the direction in which the perimeter of the open surface is traversed. If the right-hand fingers are placed in the direction of travel around the perimeter, the positive normal is indicated by the thumb of the right hand. As an illustration, a circle in the xy-plane (Fig. 1.26) mapped out from x to y to −x to −y and back to x will have its positive normal parallel to the positive z-axis (for the right-handed coordinate system).

²⁰ Recall that in Section 1.4 the area (of a parallelogram) is represented by a cross-product vector.
²¹ Although n always has unit length, its direction may well be a function of position.

FIGURE 1.26 Right-hand rule for the positive normal.

Analogous to the line integrals, Eqs. (1.92a) to (1.92c), surface integrals may appear in the forms

∫ ϕ dσ,   ∫ V · dσ,   ∫ V × dσ.

Again, the dot product is by far the most commonly encountered form. The surface integral ∫ V · dσ may be interpreted as a flow or flux through the given surface. This is really what we did in Section 1.7 to obtain the significance of the term divergence. This identification reappears in Section 1.11 as Gauss' theorem.
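Looking back at Example 1.10.1, the path dependence is easy to reproduce numerically. This is a sketch; the midpoint discretization of straight segments is our own choice, not from the text.

```python
# Numeric sketch of Example 1.10.1 (discretization is ours): the work
# W = integral of F . dr for F = (-y, x) from (0,0) to (1,1) along two
# different two-segment paths gives +1 and -1.
def work(path_points, n=1000):
    """Accumulate F . dr with midpoint sampling along straight segments."""
    Fx = lambda x, y: -y
    Fy = lambda x, y: x
    W = 0.0
    for (x0, y0), (x1, y1) in zip(path_points, path_points[1:]):
        dx, dy = (x1 - x0) / n, (y1 - y0) / n
        for k in range(n):
            xm = x0 + (k + 0.5) * dx
            ym = y0 + (k + 0.5) * dy
            W += Fx(xm, ym) * dx + Fy(xm, ym) * dy
    return W

W1 = work([(0, 0), (1, 0), (1, 1)])   # along y = 0, then along x = 1
W2 = work([(0, 0), (0, 1), (1, 1)])   # along x = 0, then along y = 1
assert abs(W1 - 1.0) < 1e-12
assert abs(W2 + 1.0) < 1e-12
```

Because F is linear, the midpoint rule reproduces both path values essentially exactly, confirming W = +1 and W = −1.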
Note that both physically and from the dot product the tangential components of the velocity contribute nothing to the flow through the surface.

Volume Integrals

Volume integrals are somewhat simpler, for the volume element dτ is a scalar quantity.²² We have

∫_V V dτ = x̂ ∫_V Vx dτ + ŷ ∫_V Vy dτ + ẑ ∫_V Vz dτ,   (1.96)

again reducing the vector integral to a vector sum of scalar integrals.

²² Frequently the symbols d³r and d³x are used to denote a volume element in coordinate (xyz or x₁x₂x₃) space.

FIGURE 1.27 Differential rectangular parallelepiped (origin at center).

Integral Definitions of Gradient, Divergence, and Curl

One interesting and significant application of our surface and volume integrals is their use in developing alternate definitions of our differential relations. We find

∇ϕ = lim_(∫dτ→0) [∮ ϕ dσ / ∫ dτ],   (1.97)
∇ · V = lim_(∫dτ→0) [∮ V · dσ / ∫ dτ],   (1.98)
∇ × V = lim_(∫dτ→0) [∮ dσ × V / ∫ dτ].   (1.99)

In these three equations ∫ dτ is the volume of a small region of space and dσ is the vector area element of this volume. The identification of Eq. (1.98) as the divergence of V was carried out in Section 1.7. Here we show that Eq. (1.97) is consistent with our earlier definition of ∇ϕ (Eq. (1.60)). For simplicity we choose dτ to be the differential volume dx dy dz (Fig. 1.27). This time we place the origin at the geometric center of our volume element. The area integral leads to six integrals, one for each of the six faces. Remembering that dσ is outward, dσ · x̂ = −|dσ| for surface EFHG, and +|dσ| for surface ABDC, we have

∮ ϕ dσ = −x̂ ∫_EFHG (ϕ − ∂ϕ/∂x · dx/2) dy dz + x̂ ∫_ABDC (ϕ + ∂ϕ/∂x · dx/2) dy dz
 − ŷ ∫_AEGC (ϕ − ∂ϕ/∂y · dy/2) dx dz + ŷ ∫_BFHD (ϕ + ∂ϕ/∂y · dy/2) dx dz
 − ẑ ∫_CDHG (ϕ − ∂ϕ/∂z · dz/2) dx dy + ẑ ∫_ABFE (ϕ + ∂ϕ/∂z · dz/2) dx dy.
Having chosen the total volume to be of differential size ( dτ = dx dy dz), we drop the integral signs on the right and obtain   ∂ϕ ∂ϕ ∂ϕ dx dy dz. (1.100) + yˆ + zˆ ϕ dσ = xˆ ∂x ∂y ∂z Dividing by dτ = dx dy dz, we verify Eq. (1.97). This verification has been oversimplified in ignoring other correction terms beyond the first derivatives. These additional terms, which are introduced in Section 5.6 when the Taylor expansion is developed, vanish in the limit dτ → 0 (dx → 0, dy → 0, dz → 0). This, of course, is the reason for specifying in Eqs. (1.97), (1.98), and (1.99) that this limit be taken. Verification of Eq. (1.99) follows these same lines exactly, using a differential volume dx dy dz. Exercises 1.10.1 The force field acting on a two-dimensional linear oscillator may be described by F = −ˆxkx − yˆ ky. Compare the work done moving against this force field when going from (1, 1) to (4, 4) by the following straight-line paths: (a) (1, 1) → (4, 1) → (4, 4) (b) (1, 1) → (1, 4) → (4, 4) (c) (1, 1) → (4, 4) along x = y. This means evaluating − (4,4) (1,1) F · dr along each path. 1.10.2 Find the work done going around a unit circle in the xy-plane: (a) counterclockwise from 0 to π , (b) clockwise from 0 to −π , doing work against a force field given by F= −ˆxy yˆ x + 2 . 2 +y x + y2 x2 Note that the work done depends on the path. 60 Chapter 1 Vector Analysis 1.10.3 Calculate the work you do in going from point (1, 1) to point (3, 3). The force you exert is given by F = xˆ (x − y) + yˆ (x + y). 1.10.4 1.10.5 Specify clearly the path you choose. Note that this force field is nonconservative.  Evaluate r · dr.  Note. The symbol means that the path of integration is a closed loop. Evaluate 1 3 s r · dσ over the unit cube defined by the point (0, 0, 0) and the unit intercepts on the positive x-, y-, and z-axes. Note that (a) r · dσ is zero for three of the surfaces and (b) each of the three remaining surfaces contributes the same amount to the integral. 
1.10.6  Show, by expansion of the surface integral, that

            \lim_{d\tau \to 0} \frac{\oint_s d\boldsymbol{\sigma} \times \mathbf{V}}{d\tau} = \nabla \times \mathbf{V}.

        Hint. Choose the volume dτ to be a differential volume dx dy dz.

1.11  GAUSS' THEOREM

Here we derive a useful relation between a surface integral of a vector and the volume integral of the divergence of that vector. Let us assume that the vector V and its first derivatives are continuous over the simply connected region of interest (one that does not have any holes, such as a donut has). Then Gauss' theorem states that

    \oint_{\partial V} \mathbf{V} \cdot d\boldsymbol{\sigma} = \int_V \nabla \cdot \mathbf{V}\, d\tau.    (1.101a)

In words, the surface integral of a vector over a closed surface equals the volume integral of the divergence of that vector integrated over the volume enclosed by the surface.

Imagine that volume V is subdivided into an arbitrarily large number of tiny (differential) parallelepipeds. For each parallelepiped

    \sum_{\text{six surfaces}} \mathbf{V} \cdot d\boldsymbol{\sigma} = \nabla \cdot \mathbf{V}\, d\tau,    (1.101b)

from the analysis of Section 1.7, Eq. (1.66), with ρv replaced by V. The summation is over the six faces of the parallelepiped. Summing over all parallelepipeds, we find that the V · dσ terms cancel (pairwise) for all interior faces; only the contributions of the exterior surfaces survive (Fig. 1.28).

FIGURE 1.28  Exact cancellation of dσ's on interior surfaces. No cancellation on the exterior surface.

Analogous to the definition of a Riemann integral as the limit of a sum, we take the limit as the number of parallelepipeds approaches infinity (→ ∞) and the dimensions of each approach zero (→ 0):

    \sum_{\text{volumes}}\ \sum_{\text{exterior surfaces}} \mathbf{V} \cdot d\boldsymbol{\sigma} = \sum_{\text{volumes}} \nabla \cdot \mathbf{V}\, d\tau,

    \oint_S \mathbf{V} \cdot d\boldsymbol{\sigma} = \int_V \nabla \cdot \mathbf{V}\, d\tau.

The result is Eq. (1.101a), Gauss' theorem.

From a physical point of view Eq. (1.66) has established ∇ · V as the net outflow of fluid per unit volume. The volume integral then gives the total net outflow. But the surface integral ∮ V · dσ is just another way of expressing this same quantity, hence the equality, Gauss' theorem.
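Gauss' theorem is easy to test numerically. The sketch below compares the flux of the (arbitrarily chosen) field V = (x², yz, xz) through the surface of the unit cube with the volume integral of its divergence, ∇ · V = 3x + z; both should equal 2.

```python
import numpy as np

def V(x, y, z):
    """Illustrative vector field with div V = 3x + z."""
    return np.array([x**2, y * z, x * z])

n = 100
c = (np.arange(n) + 0.5) / n                      # midpoint grid on [0, 1]
u, w = np.meshgrid(c, c, indexing="ij")
dA = 1.0 / n**2

# surface integral: sum V · n-hat dA over the six faces of the cube
flux = 0.0
for axis in range(3):
    for x0, sign in ((1.0, +1.0), (0.0, -1.0)):
        coords = [u, w]
        coords.insert(axis, np.full_like(u, x0))  # fixed coordinate on face
        flux += sign * np.sum(V(*coords)[axis]) * dA

# volume integral of the divergence, midpoint rule on an n^3 grid
X, Y, Z = np.meshgrid(c, c, c, indexing="ij")
div_integral = np.sum(3.0 * X + Z) / n**3
```

Both sides come out to 2 to machine precision here, since the midpoint rule is exact for the low-degree polynomials involved.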
Green’s Theorem A frequently useful corollary of Gauss’ theorem is a relation known as Green’s theorem. If u and v are two scalar functions, we have the identities ∇ · (u ∇v) = u∇ · ∇v + (∇u) · (∇v), (1.102) ∇ · (v ∇u) = v∇ · ∇u + (∇v) · (∇u). (1.103) Subtracting Eq. (1.103) from Eq. (1.102), integrating over a volume (u, v, and their derivatives, assumed continuous), and applying Eq. (1.101a) (Gauss’ theorem), we obtain V (u∇ · ∇v − v∇ · ∇u) dτ =  ∂V (u∇v − v∇u) · dσ . (1.104) 62 Chapter 1 Vector Analysis This is Green’s theorem. We use it for developing Green’s functions in Chapter 9. An alternate form of Green’s theorem, derived from Eq. (1.102) alone, is  u∇v · dσ = u∇ · ∇v dτ + ∇u · ∇v dτ. (1.105) ∂V V V This is the form of Green’s theorem used in Section 1.16. Alternate Forms of Gauss’ Theorem Although Eq. (1.101a) involving the divergence is by far the most important form of Gauss’ theorem, volume integrals involving the gradient and the curl may also appear. Suppose V(x, y, z) = V (x, y, z)a, (1.106) in which a is a vector with constant magnitude and constant but arbitrary direction. (You pick the direction, but once you have chosen it, hold it fixed.) Equation (1.101a) becomes a ·  V dσ = ∇ · aV dτ = a · ∇V dτ (1.107) ∂V V V by Eq. (1.67b). This may be rewritten  a ·  V dσ − ∇V dτ = 0. ∂V (1.108) V Since |a| = 0 and its direction is arbitrary, meaning that the cosine of the included angle cannot always vanish, the terms in brackets must be zero.23 The result is  V dσ = ∇V dτ. (1.109) ∂V V In a similar manner, using V = a × P in which a is a constant vector, we may show  dσ × P = ∇ × P dτ. (1.110) ∂V V These last two forms of Gauss’ theorem are used in the vector form of Kirchoff diffraction theory. They may also be used to verify Eqs. (1.97) and (1.99). Gauss’ theorem may also be extended to tensors (see Section 2.11). Exercises 1.11.1 Using Gauss’ theorem, prove that  dσ = 0 if S = ∂V is a closed surface. 
S 23 This exploitation of the arbitrary nature of a part of a problem is a valuable and widely used technique. The arbitrary vector is used again in Sections 1.12 and 1.13. Other examples appear in Section 1.14 (integrands equated) and in Section 2.8, quotient rule. 1.11 Gauss’ Theorem 1.11.2 63 Show that 1  r · dσ = V , 3 S where V is the volume enclosed by the closed surface S = ∂V . Note. This is a generalization of Exercise 1.10.5. 1.11.3 If B = ∇ × A, show that  B · dσ = 0 S for any closed surface S. 1.11.4 Over some volume V let ψ be a solution of Laplace’s equation (with the derivatives appearing there continuous). Prove that the integral over any closed surface in V of the normal derivative of ψ (∂ψ/∂n, or ∇ψ · n) will be zero. 1.11.5 In analogy to the integral definition of gradient, divergence, and curl of Section 1.10, show that ∇ϕ · dσ 2 . ∇ ϕ = lim dτ dτ →0 1.11.6 The electric displacement vector D satisfies the Maxwell equation ∇ · D = ρ, where ρ is the charge density (per unit volume). At the boundary between two media there is a surface charge density σ (per unit area). Show that a boundary condition for D is (D2 − D1 ) · n = σ. n is a unit vector normal to the surface and out of medium 1. Hint. Consider a thin pillbox as shown in Fig. 1.29. 1.11.7 From Eq. (1.67b), with V the electric field E and f the electrostatic potential ϕ, show that, for integration over all space, ρϕ dτ = ε0 E 2 dτ. This corresponds to a three-dimensional integration by parts. Hint. E = −∇ϕ, ∇ · E = ρ/ε0 . You may assume that ϕ vanishes at large r at least as fast as r −1 . FIGURE 1.29 Pillbox. 64 Chapter 1 Vector Analysis 1.11.8 A particular steady-state electric current distribution is localized in space. Choosing a bounding surface far enough out so that the current density J is zero everywhere on the surface, show that J dτ = 0. Hint. Take one component of J at a time. With ∇ · J = 0, show that Ji = ∇ · (xi J) and apply Gauss’ theorem. 
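Exercise 1.11.2 can be spot-checked on a sphere of radius R centered at the origin, where r · dσ = R · (R² dΩ), so the closed surface integral is 4πR³ = 3V. The radius below is an arbitrary choice; the θ integral is done by the midpoint rule.

```python
import numpy as np

R = 1.7                                           # arbitrary radius
n = 1000
theta = (np.arange(n) + 0.5) * np.pi / n          # midpoints on (0, pi)

# closed surface integral of r · dsigma = R^3 dOmega over the sphere:
# R^3 * (integral of dphi from 0 to 2pi) * (integral of sin(theta) dtheta)
surface_integral = R**3 * 2.0 * np.pi * np.sum(np.sin(theta)) * np.pi / n

volume = 4.0 / 3.0 * np.pi * R**3                 # enclosed volume
```

The ratio surface_integral / volume converges to 3, in agreement with the exercise (and with ∇ · r = 3 plus Gauss' theorem).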
1.11.9 The creation of a localized system of steady electric currents (current density J) and magnetic fields may be shown to require an amount of work 1 W= H · B dτ. 2 Transform this into W= 1 2 J · A dτ. Here A is the magnetic vector potential: ∇ × A = B. Hint. In Maxwell’s equations take the displacement current term ∂D/∂t = 0. If the fields and currents are localized, a bounding surface may be taken far enough out so that the integrals of the fields and currents over the surface yield zero. 1.11.10 Prove the generalization of Green’s theorem: (vLu − uLv) dτ =  V ∂V p(v∇u − u∇v) · dσ . Here L is the self-adjoint operator (Section 10.1),  L = ∇ · p(r)∇ + q(r) and p, q, u, and v are functions of position, p and q having continuous first derivatives and u and v having continuous second derivatives. Note. This generalized Green’s theorem appears in Section 9.7. 1.12 STOKES’ THEOREM Gauss’ theorem relates the volume integral of a derivative of a function to an integral of the function over the closed surface bounding the volume. Here we consider an analogous relation between the surface integral of a derivative of a function and the line integral of the function, the path of integration being the perimeter bounding the surface. Let us take the surface and subdivide it into a network of arbitrarily small rectangles. In Section 1.8 we showed that the circulation about such a differential rectangle (in the xy-plane) is ∇ × V|z dx dy. From Eq. (1.76) applied to one differential rectangle,  V · dλ = ∇ × V · dσ . (1.111) four sides 1.12 Stokes’ Theorem 65 FIGURE 1.30 Exact cancellation on interior paths. No cancellation on the exterior path. We sum over all the little rectangles, as in the definition of a Riemann integral. The surface contributions (right-hand side of Eq. (1.111)) are added together. The line integrals (lefthand side of Eq. (1.111)) of all interior line segments cancel identically. Only the line integral around the perimeter survives (Fig. 1.30). 
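Stokes' theorem can be checked directly on the unit square in the xy-plane with the (arbitrarily chosen) field V = (−y, x, 0), whose curl is (0, 0, 2): the circulation around the perimeter and the curl flux through the square should both equal 2.

```python
import numpy as np

def circulation(n=2001):
    """Line integral of V · dlambda around the unit square, traversed
    counterclockwise, for V = (-y, x, 0), using the midpoint rule."""
    t = np.linspace(0.0, 1.0, n)
    corners = [(0, 0), (1, 0), (1, 1), (0, 1), (0, 0)]
    total = 0.0
    for (x0, y0), (x1, y1) in zip(corners, corners[1:]):
        x = (1 - t) * x0 + t * x1                 # parametrize each leg
        y = (1 - t) * y0 + t * y1
        dx, dy = np.diff(x), np.diff(y)
        xm = 0.5 * (x[:-1] + x[1:])               # segment midpoints
        ym = 0.5 * (y[:-1] + y[1:])
        total += np.sum(-ym * dx + xm * dy)       # V . dlambda
    return total

line_side = circulation()
surface_side = 2.0 * 1.0                          # (curl V)_z times the area
```

The counterclockwise traversal matches the positive mathematical sense described in the text, with dσ out of the page.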
Taking the usual limit as the number of rectangles approaches infinity while dx → 0, dy → 0, we have V · dλ ∇ × V · dσ = (1.112) exterior line rectangles segments  V · dλ = S ∇ × V · dσ . This is Stokes’ theorem. The surface integral on the right is over the surface bounded by the perimeter or contour, for the line integral on the left. The direction of the vector representing the area is out of the paper plane toward the reader if the direction of traversal around the contour for the line integral is in the positive mathematical sense, as shown in Fig. 1.30. This demonstration of Stokes’ theorem is limited by the fact that we used a Maclaurin expansion of V(x, y, z) in establishing Eq. (1.76) in Section 1.8. Actually we need only demand that the curl of V(x, y, z) exist and that it be integrable over the surface. A proof of the Cauchy integral theorem analogous to the development of Stokes’ theorem here but using these less restrictive conditions appears in Section 6.3. Stokes’ theorem obviously applies to an open surface. It is possible to consider a closed surface as a limiting case of an open surface, with the opening (and therefore the perimeter) shrinking to zero. This is the point of Exercise 1.12.7. 66 Chapter 1 Vector Analysis Alternate Forms of Stokes’ Theorem As with Gauss’ theorem, other relations between surface and line integrals are possible. We find  dσ × ∇ϕ = ϕ dλ (1.113) S and S ∂S (dσ × ∇) × P =  ∂S dλ × P. (1.114) Equation (1.113) may readily be verified by the substitution V = aϕ, in which a is a vector of constant magnitude and of constant direction, as in Section 1.11. Substituting into Stokes’ theorem, Eq. (1.112), (∇ × aϕ) · dσ = − a × ∇ϕ · dσ S S = −a · S ∇ϕ × dσ . (1.115) For the line integral,  ∂S and we obtain a·  ∂S aϕ · dλ = a · ϕ dλ + S  ϕ dλ, (1.116)  (1.117) ∂S ∇ϕ × dσ = 0. Since the choice of direction of a is arbitrary, the expression in parentheses must vanish, thus verifying Eq. (1.113). 
Equation (1.114) may be derived similarly by using V = a × P, in which a is again a constant vector. We can use Stokes’ theorem to derive Oersted’s and Faraday’s laws from two of Maxwell’s equations, and vice versa, thus recognizing that the former are an integrated form of the latter. Example 1.12.1 OERSTED’S AND FARADAY’S LAWS Consider the magnetic field generated by a long wire that carries a stationary current I . Starting from Maxwell’s differential law ∇ × H = J, Eq. (1.89) (with Maxwell’s displacement current ∂D/∂t = 0 for a stationary current case by Ohm’s law), we integrate over a closed area S perpendicular to and surrounding the wire and apply Stokes’ theorem to get  I = J · dσ = (∇ × H) · dσ = H · dr, S S ∂S which is Oersted’s law. Here the line integral is along ∂S, the closed curve surrounding the cross-sectional area S. 1.12 Stokes’ Theorem 67 Similarly, we can integrate Maxwell’s equation for ∇ ×E, Eq. (1.86d), to yield Faraday’s induction law. Imagine moving a closed loop (∂S) of wire (of area S) across a magnetic induction field B. We integrate Maxwell’s equation and use Stokes’ theorem, yielding d d , E · dr = (∇ × E) · dσ = − B · dσ = − dt S dt ∂S S which is Faraday’s law. The line integral on the left-hand side represents the voltage induced in the wire loop, while the right-hand side is the change with time of the magnetic flux  through the moving surface S of the wire.  Both Stokes’ and Gauss’ theorems are of tremendous importance in a wide variety of problems involving vector calculus. Some idea of their power and versatility may be obtained from the exercises of Sections 1.11 and 1.12 and the development of potential theory in Sections 1.13 and 1.14. Exercises 1.12.1 Given a vector t = −ˆxy + yˆ x, show, with the help of Stokes’ theorem, that the integral around a continuous closed curve in the xy-plane   1 1 t · dλ = (x dy − y dx) = A, 2 2 the area enclosed by the curve. 
1.12.2 The calculation of the magnetic moment of a current loop leads to the line integral  r × dr. (a) Integrate around the perimeter of a current loop (in the xy-plane) and show that the scalar magnitude of this line integral is twice the area of the enclosed surface. (b) The perimeter of an ellipse is described by r = xˆ a cos θ + yˆ b sin θ . From part (a) show that the area of the ellipse is πab. 1.12.3 Evaluate  r × dr by using the alternate form of Stokes’ theorem given by Eq. (1.114):  (dσ × ∇) × P = dλ × P. S Take the loop to be entirely in the xy-plane. 1.12.4 In steady state the magnetic field H satisfies the Maxwell equation ∇ × H = J, where J is the current density (per square meter). At the boundary between two media there is a surface current density K. Show that a boundary condition on H is n × (H2 − H1 ) = K. n is a unit vector normal to the surface and out of medium 1. Hint. Consider a narrow loop perpendicular to the interface as shown in Fig. 1.31. 68 Chapter 1 Vector Analysis FIGURE 1.31 Integration path at the boundary of two media. From Maxwell’s equations, ∇ × H = J, with J here the current density and E = 0. Show from this that  H · dr = I, 1.12.5 where I is the net electric current enclosed by the loop integral. These are the differential and integral forms of Ampère’s law of magnetism. 1.12.6 A magnetic induction B is generated by electric current in a ring of radius R. Show that the magnitude of the vector potential A (B = ∇ × A) at the ring can be ϕ |A| = , 2πR where ϕ is the total magnetic flux passing through the ring. Note. A is tangential to the ring and may be changed by adding the gradient of a scalar function. 1.12.7 Prove that S if S is a closed surface.  Evaluate r · dr (Exercise 1.10.4) by Stokes’ theorem. 1.12.8 1.12.9 Prove that 1.12.10 1.13 ∇ × V · dσ = 0, Prove that   u∇v · dλ = − u∇v · dλ = S  v∇u · dλ. (∇u) × (∇v) · dσ . 
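Exercises 1.12.1 and 1.12.2(b) invite a quick numerical check: for the ellipse r = x̂ a cos θ + ŷ b sin θ, the line integral ½∮(x dy − y dx) should reproduce the area πab. The semi-axes below are arbitrary choices.

```python
import numpy as np

a, b = 3.0, 2.0                                   # arbitrary semi-axes
theta = np.linspace(0.0, 2.0 * np.pi, 100001)
x, y = a * np.cos(theta), b * np.sin(theta)
xm = 0.5 * (x[:-1] + x[1:])                       # segment midpoints
ym = 0.5 * (y[:-1] + y[1:])

# A = (1/2) closed line integral of (x dy - y dx), as a polygon sum
area = 0.5 * np.sum(xm * np.diff(y) - ym * np.diff(x))
```

This is just the shoelace formula applied to a fine polygonal approximation of the ellipse; the result converges to πab as the number of segments grows.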
POTENTIAL THEORY Scalar Potential If a force over a given simply connected region of space S (which means that it has no holes) can be expressed as the negative gradient of a scalar function ϕ, F = −∇ϕ, (1.118) 1.13 Potential Theory 69 we call ϕ a scalar potential that describes the force by one function instead of three. A scalar potential is only determined up to an additive constant, which can be used to adjust its value at infinity (usually zero) or at some other point. The force F appearing as the negative gradient of a single-valued scalar potential is labeled a conservative force. We want to know when a scalar potential function exists. To answer this question we establish two other relations as equivalent to Eq. (1.118). These are ∇×F=0 and  (1.119) F · dr = 0, (1.120) for every closed path in our simply connected region S. We proceed to show that each of these three equations implies the other two. Let us start with F = −∇ϕ. (1.121) ∇ × F = −∇ × ∇ϕ = 0 (1.122) Then by Eq. (1.82) or Eq. (1.118) implies Eq. (1.119). Turning to the line integral, we have    F · dr = − ∇ϕ · dr = − dϕ, (1.123) using Eq. (1.118). Now, dϕ integrates to give ϕ. Since we have specified a closed loop, the end points coincide and we get zero for every closed path in our region S for which Eq. (1.118) holds. It is important to note the restriction here that the potential be singlevalued and that Eq. (1.118) hold for all points in S. This problem may arise in using a scalar magnetic potential, a perfectly valid procedure as long as no net current is encircled. As soon as we choose a path in space that encircles a net current, the scalar magnetic potential ceases to be single-valued and our analysis no longer applies.  Continuing this demonstration of equivalence, let us assume that Eq. (1.120) holds. If F · dr = 0 for all paths in S, we see that the value of the integral joining two distinct points A and B is independent of the path (Fig. 1.32). Our premise is that  F · dr = 0. 
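The closed-loop criterion Eq. (1.120) is easy to probe numerically around the unit circle: a gradient field gives zero circulation, while a field with nonvanishing curl does not. Both fields below are illustrative choices (the second is of the rotational type met in Exercise 1.10.2).

```python
import numpy as np

theta = np.linspace(0.0, 2.0 * np.pi, 200001)     # unit circle, closed loop
x, y = np.cos(theta), np.sin(theta)
dx, dy = np.diff(x), np.diff(y)
xm = 0.5 * (x[:-1] + x[1:])                       # segment midpoints
ym = 0.5 * (y[:-1] + y[1:])

# F = -grad(x^2 y) = (-2xy, -x^2): conservative, loop integral vanishes
W_conservative = np.sum(-2.0 * xm * ym * dx - xm**2 * dy)

# F = (-y, x): curl_z = 2, so the loop integral is 2 * (enclosed area) = 2 pi
W_rotational = np.sum(-ym * dx + xm * dy)
```

The first loop integral vanishes (to quadrature accuracy) for every closed path, consistent with the existence of the single-valued potential φ = x²y; the second does not, so no single-valued scalar potential exists for it.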
(1.124) ACBDA Therefore ACB F · dr = − BDA F · dr = ADB F · dr, (1.125) reversing the sign by reversing the direction of integration. Physically, this means that the work done in going from A to B is independent of the path and that the work done in going around a closed path is zero. This is the reason for labeling such a force conservative: Energy is conserved. 70 Chapter 1 Vector Analysis FIGURE 1.32 Possible paths for doing work. With the result shown in Eq. (1.125), we have the work done dependent only on the endpoints A and B. That is, B F · dr = ϕ(A) − ϕ(B). (1.126) work done by force = A Equation (1.126) defines a scalar potential (strictly speaking, the difference in potential between points A and B) and provides a means of calculating the potential. If point B is taken as a variable, say, (x, y, z), then differentiation with respect to x, y, and z will recover Eq. (1.118). The choice of sign on the right-hand side is arbitrary. The choice here is made to achieve agreement with Eq. (1.118) and to ensure that water will run downhill rather than uphill. For points A and B separated by a length dr, Eq. (1.126) becomes F · dr = −dϕ = −∇ϕ · dr. (1.127) (F + ∇ϕ) · dr = 0, (1.128) This may be rewritten and since dr is arbitrary, Eq. (1.118) must follow. If  F · dr = 0, we may obtain Eq. (1.119) by using Stokes’ theorem (Eq. (1.112)):  F · dr = ∇ × F · dσ . (1.129) (1.130) If we take the path of integration to be the perimeter of an arbitrary differential area dσ , the integrand in the surface integral must vanish. Hence Eq. (1.120) implies Eq. (1.119). Finally, if ∇ × F = 0, we need only reverse our statement of Stokes’ theorem (Eq. (1.130)) to derive Eq. (1.120). Then, by Eqs. (1.126) to (1.128), the initial statement 1.13 Potential Theory FIGURE 1.33 71 Equivalent formulations of a conservative force. FIGURE 1.34 Potential energy versus distance (gravitational, centrifugal, and simple harmonic oscillator). F = −∇ϕ is derived. 
The triple equivalence is demonstrated (Fig. 1.33). To summarize, a single-valued scalar potential function ϕ exists if and only if F is irrotational or the work done around every closed loop is zero. The gravitational and electrostatic force fields given by Eq. (1.79) are irrotational and therefore are conservative. Gravitational and electrostatic scalar potentials exist. Now, by calculating the work done (Eq. (1.126)), we proceed to determine three potentials (Fig. 1.34). 72 Chapter 1 Vector Analysis Example 1.13.1 GRAVITATIONAL POTENTIAL Find the scalar potential for the gravitational force on a unit mass m1 , kˆr Gm1 m2 rˆ =− 2, (1.131) 2 r r radially inward. By integrating Eq. (1.118) from infinity in to position r, we obtain ∞ r FG · dr. (1.132) FG · dr = + ϕG (r) − ϕG (∞) = − FG = − ∞ r By use of FG = −Fapplied , a comparison with Eq. (1.95a) shows that the potential is the work done in bringing the unit mass in from infinity. (We can define only potential difference. Here we arbitrarily assign infinity to be a zero of potential.) The integral on the right-hand side of Eq. (1.132) is negative, meaning that ϕG (r) is negative. Since FG is radial, we obtain a contribution to ϕ only when dr is radial, or ∞ k dr Gm1 m2 k . ϕG (r) = − =− =− 2 r r r r The final negative sign is a consequence of the attractive force of gravity. Example 1.13.2  CENTRIFUGAL POTENTIAL Calculate the scalar potential for the centrifugal force per unit mass, FC = ω2 r rˆ , radially outward. Physically, you might feel this on a large horizontal spinning disk at an amusement park. Proceeding as in Example 1.13.1 but integrating from the origin outward and taking ϕC (0) = 0, we have r ω2 r 2 . ϕC (r) = − FC · dr = − 2 0 If we reverse signs, taking FSHO = −kr, we obtain ϕSHO = 21 kr 2 , the simple harmonic oscillator potential. The gravitational, centrifugal, and simple harmonic oscillator potentials are shown in Fig. 1.34. 
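Example 1.13.1 can be confirmed by carrying out the work integral numerically along a radial line, with k = G m₁m₂ set to 1 and "infinity" truncated at a large R_max (both illustrative choices):

```python
import numpy as np

k, r0, R_max = 1.0, 2.0, 1.0e6                    # k = 1, "infinity" truncated
s = np.geomspace(r0, R_max, 200001)               # radial grid from r0 outward
F = -k / s**2                                     # F_G: radially inward

# phi_G(r0) - phi_G(inf) = -integral from inf to r0 of F ds
#                        = +integral from r0 to inf of F ds  (trapezoid rule)
phi = np.sum(0.5 * (F[:-1] + F[1:]) * np.diff(s))
```

The result approaches −k/r0 (here −0.5) as R_max grows, reproducing φ_G(r) = −k/r with the final negative sign reflecting the attractive force.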
Clearly, the simple harmonic oscillator yields stability and describes a restoring force. The centrifugal potential describes an unstable situation.  Thermodynamics — Exact Differentials In thermodynamics, which is sometimes called a search for exact differentials, we encounter equations of the form df = P (x, y) dx + Q(x, y) dy. (1.133a) The usual problem is to determine whether (P (x, y) dx + Q(x, y) dy) depends only on the endpoints, that is, whether df is indeed an exact differential. The necessary and sufficient condition is that ∂f ∂f dx + dy (1.133b) df = ∂x ∂y 1.13 Potential Theory 73 or that P (x, y) = ∂f/∂x, Q(x, y) = ∂f/∂y. (1.133c) Equations (1.133c) depend on satisfying the relation ∂P (x, y) ∂Q(x, y) = . ∂y ∂x (1.133d) This, however, is exactly analogous to Eq. (1.119), the requirement that F be irrotational. Indeed, the z-component of Eq. (1.119) yields ∂Fy ∂Fx = , ∂y ∂x (1.133e) with Fx = ∂f , ∂x Fy = ∂f . ∂y Vector Potential In some branches of physics, especially electrodynamics, it is convenient to introduce a vector potential A such that a (force) field B is given by B = ∇ × A. (1.134) Clearly, if Eq. (1.134) holds, ∇ · B = 0 by Eq. (1.84) and B is solenoidal. Here we want to develop a converse, to show that when B is solenoidal a vector potential A exists. We demonstrate the existence of A by actually calculating it. Suppose B = xˆ b1 + yˆ b2 + zˆ b3 and our unknown A = xˆ a1 + yˆ a2 + zˆ a3 . By Eq. (1.134), ∂a3 ∂a2 − = b1 , ∂y ∂z ∂a1 ∂a3 − = b2 , ∂z ∂x ∂a2 ∂a1 − = b3 . ∂x ∂y (1.135a) (1.135b) (1.135c) Let us assume that the coordinates have been chosen so that A is parallel to the yz-plane; that is, a1 = 0.24 Then ∂a3 ∂x ∂a2 . b3 = ∂x b2 = − (1.136) 24 Clearly, this can be done at any one point. It is not at all obvious that this assumption will hold at all points; that is, A will be two-dimensional. The justification for the assumption is that it works; Eq. (1.141) satisfies Eq. (1.134). 
74 Chapter 1 Vector Analysis Integrating, we obtain a2 = x x0 a3 = − b3 dx + f2 (y, z), (1.137) x x0 b2 dx + f3 (y, z), where f2 and f3 are arbitrary functions of y and z but not functions of x. These two equations can be checked by differentiating and recovering Eq. (1.136). Equation (1.135a) becomes25  x ∂a3 ∂a2 ∂f3 ∂f2 ∂b2 ∂b3 dx + − =− + − ∂y ∂z ∂y ∂z ∂y ∂z x0 x ∂f3 ∂f2 ∂b1 dx + − , (1.138) = ∂x ∂y ∂z x0 using ∇ · B = 0. Integrating with respect to x, we obtain ∂a3 ∂a2 ∂f3 ∂f2 − = b1 (x, y, z) − b1 (x0 , y, z) + − . ∂y ∂z ∂y ∂z (1.139) Remembering that f3 and f2 are arbitrary functions of y and z, we choose f2 = 0, y b1 (x0 , y, z) dy, f3 = (1.140) y0 so that the right-hand side of Eq. (1.139) reduces to b1 (x, y, z), in agreement with Eq. (1.135a). With f2 and f3 given by Eq. (1.140), we can construct A: y  x x A = yˆ b3 (x, y, z) dx + zˆ b2 (x, y, z) dx . (1.141) b1 (x0 , y, z) dy − x0 x0 y0 However, this is not quite complete. We may add any constant since B is a derivative of A. What is much more important, we may add any gradient of a scalar function ∇ϕ without affecting B at all. Finally, the functions f2 and f3 are not unique. Other choices could have been made. Instead of setting a1 = 0 to get Eq. (1.136) any cyclic permutation of 1, 2, 3, x, y, z, x0 , y0 , z0 would also work. Example 1.13.3 A MAGNETIC VECTOR POTENTIAL FOR A CONSTANT MAGNETIC FIELD To illustrate the construction of a magnetic vector potential, we take the special but still important case of a constant magnetic induction B = zˆ Bz , 25 Leibniz’ formula in Exercise 9.6.13 is useful here. (1.142) 1.13 Potential Theory 75 in which Bz is a constant. Equations (1.135a to c) become ∂a3 ∂a2 − = 0, ∂y ∂z ∂a1 ∂a3 − = 0, ∂z ∂x ∂a2 ∂a1 − = Bz . ∂x ∂y If we assume that a1 = 0, as before, then by Eq. (1.141) x A = yˆ Bz dx = yˆ xBz , (1.143) (1.144) setting a constant of integration equal to zero. It can readily be seen that this A satisfies Eq. (1.134). 
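That the constructed A = ŷ xB_z of Eq. (1.144) reproduces B = ẑ B_z can be verified with a central-difference curl; the same check applies to the symmetric form A = ½(B × r) of Eq. (1.152) met below. The value of B_z and the evaluation point are arbitrary choices.

```python
import numpy as np

def curl(A, p, h=1e-5):
    """Central-difference curl of a vector field A(r) at the point p."""
    p = np.asarray(p, dtype=float)
    J = np.zeros((3, 3))                          # J[i, j] = dA_i / dx_j
    for j in range(3):
        e = np.zeros(3)
        e[j] = h
        J[:, j] = (A(p + e) - A(p - e)) / (2.0 * h)
    return np.array([J[2, 1] - J[1, 2],
                     J[0, 2] - J[2, 0],
                     J[1, 0] - J[0, 1]])

Bz = 2.5                                          # arbitrary constant field
B = np.array([0.0, 0.0, Bz])
A_1 = lambda r: np.array([0.0, r[0] * Bz, 0.0])   # Eq. (1.144)
A_2 = lambda r: 0.5 * np.cross(B, r)              # Eq. (1.152)
p = np.array([0.4, -1.1, 0.7])                    # arbitrary test point
```

Both potentials yield the same B even though A_1 ≠ A_2, a numerical illustration that A is not unique.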
To show that the choice a1 = 0 was not sacred or at least not required, let us try setting a3 = 0. From Eq. (1.143) ∂a2 = 0, ∂z ∂a1 = 0, ∂z ∂a2 ∂a1 − = Bz . ∂x ∂y (1.145a) (1.145b) (1.145c) We see a1 and a2 are independent of z, or a1 = a1 (x, y), Equation (1.145c) is satisfied if we take a2 = p and a1 = (p − 1) with p any constant. Then a2 = a2 (x, y). (1.146) x Bz dx = pxBz (1.147) Bz dy = (p − 1)yBz , (1.148) y A = xˆ (p − 1)yBz + yˆ pxBz . (1.149) Again, Eqs. (1.134), (1.142), and (1.149) are seen to be consistent. Comparison of Eqs. (1.144) and (1.149) shows immediately that A is not unique. The difference between Eqs. (1.144) and (1.149) and the appearance of the parameter p in Eq. (1.149) may be accounted for by rewriting Eq. (1.149) as   1 1 A = − (ˆxy − yˆ x)Bz + p − (ˆxy + yˆ x)Bz 2 2   1 1 Bz ∇ϕ (1.150) = − (ˆxy − yˆ x)Bz + p − 2 2 76 Chapter 1 Vector Analysis with ϕ = xy. (1.151)  The first term in A corresponds to the usual form 1 A = (B × r) 2 (1.152) for B, a constant. Adding a gradient of a scalar function,  say, to the vector potential A does not affect B, by Eq. (1.82); this is known as a gauge transformation (see Exercises 1.13.9 and 4.6.4): A → A′ = A + ∇. (1.153) Suppose now that the wave function ψ0 solves the Schrödinger equation of quantum mechanics without magnetic induction field B,   1 2 (−i h∇) + V − E ψ0 = 0, (1.154) ¯ 2m describing a particle with mass m and charge e. When B is switched on, the wave equation becomes   1 2 (−i h∇ (1.155) ¯ − eA) + V − E ψ = 0. 2m Its solution ψ picks up a phase factor that depends on the coordinates in general,  r ie ′ ′ A(r ) · dr ψ0 (r). ψ(r) = exp h¯ From the relation (1.156)   ie ie ′ (−i h∇ − eA)ψ − i hψ A (−i h∇ − eA)ψ = exp A · dr ¯ ¯ ¯ 0 0 h¯ h¯  ie = exp (1.157) A · dr′ (−i h∇ψ ¯ 0 ), h¯ it is obvious that ψ solves Eq. (1.155) if ψ0 solves Eq. (1.154). The gauge covariant derivative ∇ − i(e/h¯ )A describes the coupling of a charged particle with the magnetic field. 
It is often called minimal substitution and plays a central role in quantum electromagnetism, the first and simplest gauge theory in physics. To summarize this discussion of the vector potential: When a vector B is solenoidal, a vector potential A exists such that B = ∇ × A. A is undetermined to within an additive gradient. This corresponds to the arbitrary zero of a potential, a constant of integration for the scalar potential. In many problems the magnetic vector potential A will be obtained from the current distribution that produces the magnetic induction B. This means solving Poisson’s (vector) equation (see Exercise 1.14.4). 1.13 Potential Theory 77 Exercises 1.13.1 If a force F is given by n  F = x 2 + y 2 + z2 (ˆxx + yˆ y + zˆ z), find (a) ∇ · F. (b) ∇ × F. (c) A scalar potential ϕ(x, y, z) so that F = −∇ϕ. (d) For what value of the exponent n does the scalar potential diverge at both the origin and infinity? ANS. (a) (2n + 3)r 2n , (b) 0, 1 (c) − 2n+2 r 2n+2 , n = −1, (d) n = −1, ϕ = − ln r. 1.13.2 A sphere of radius a is uniformly charged (throughout its volume). Construct the electrostatic potential ϕ(r) for 0 r < ∞. Hint. In Section 1.14 it is shown that the Coulomb force on a test charge at r = r0 depends only on the charge at distances less than r0 and is independent of the charge at distances greater than r0 . Note that this applies to a spherically symmetric charge distribution. 1.13.3 The usual problem in classical mechanics is to calculate the motion of a particle given the potential. For a uniform density (ρ0 ), nonrotating massive sphere, Gauss’ law of Section 1.14 leads to a gravitational force on a unit mass m0 at a point r0 produced by the attraction of the mass at r r0 . The mass at r > r0 contributes nothing to the force. (a) Show that F/m0 = −(4πGρ0 /3)r, 0 r a, where a is the radius of the sphere. (b) Find the corresponding gravitational potential, 0 r a. 
(c) Imagine a vertical hole running completely through the center of the Earth and out to the far side. Neglecting the rotation of the Earth and assuming a uniform density ρ0 = 5.5 gm/cm3 , calculate the nature of the motion of a particle dropped into the hole. What is its period? Note. F ∝ r is actually a very poor approximation. Because of varying density, the approximation F = constant along the outer half of a radial line and F ∝ r along the inner half is a much closer approximation. 1.13.4 The origin of the Cartesian coordinates is at the Earth’s center. The moon is on the zaxis, a fixed distance R away (center-to-center distance). The tidal force exerted by the moon on a particle at the Earth’s surface (point x, y, z) is given by Fx = −GMm x , R3 Fy = −GMm Find the potential that yields this tidal force. y , R3 Fz = +2GMm z . R3 78 Chapter 1 Vector Analysis   GMm 2 1 2 1 2 . z x y − − 2 2 R3 In terms of the Legendre polynomials of Chapter 12 this becomes GMm 2 − r P2 (cos θ ). R3 A long, straight wire carrying a current I produces a magnetic induction B with components   µ0 I y x B= − 2 , ,0 . 2π x + y2 x2 + y2 ANS. − 1.13.5 Find a magnetic vector potential A. ANS. A = −ˆz(µ0 I /4π) ln(x 2 + y 2 ). (This solution is not unique.) 1.13.6 If rˆ B= 2 = r   x y z , , , r3 r3 r3 find a vector A such that ∇ × A = B. One possible solution is A= 1.13.7 xˆ yz yˆ xz − . 2 2 + y ) r(x + y 2 ) r(x 2 Show that the pair of equations 1 B=∇×A A = (B × r), 2 is satisfied by any constant magnetic induction B. 1.13.8 Vector B is formed by the product of two gradients B = (∇u) × (∇v), where u and v are scalar functions. (a) Show that B is solenoidal. (b) Show that 1 A = (u ∇v − v ∇u) 2 is a vector potential for B, in that B = ∇ × A. 1.13.9 The magnetic induction B is related to the magnetic vector potential A by B = ∇ × A. By Stokes’ theorem  B · dσ = A · dr. 
1.14 Gauss’ Law, Poisson’s Equation 79 Show that each side of this equation is invariant under the gauge transformation, A → A + ∇ϕ. Note. Take the function ϕ to be single-valued. The complete gauge transformation is considered in Exercise 4.6.4. 1.13.10 With E the electric field and A the magnetic vector potential, show that [E + ∂A/∂t] is irrotational and that therefore we may write ∂A . ∂t The total force on a charge q moving with velocity v is E = −∇ϕ − 1.13.11 F = q(E + v × B). Using the scalar and vector potentials, show that  dA F = q −∇ϕ − + ∇(A · v) . dt Note that we now have a total time derivative of A in place of the partial derivative of Exercise 1.13.10. 1.14 GAUSS’ LAW, POISSON’S EQUATION Gauss’ Law Consider a point electric charge q at the origin of our coordinate system. This produces an electric field E given by26 E= q rˆ . 4πε0 r 2 (1.158) We now derive Gauss’ law, which states that the surface integral in Fig. 1.35 is q/ε0 if the closed surface S = ∂V includes the origin (where q is located) and zero if the surface does not include the origin. The surface S is any closed surface; it need not be spherical. Using Gauss’ theorem, Eqs. (1.101a) and (1.101b) (and neglecting the q/4πε0 ), we obtain   rˆ · dσ rˆ (1.159) = ∇ · 2 dτ = 0 2 r r V S by Example 1.7.2, provided the surface S does not include the origin, where the integrands are not defined. This proves the second part of Gauss’ law. The first part, in which the surface S must include the origin, may be handled by surrounding the origin with a small sphere S ′ = ∂V ′ of radius δ (Fig. 1.36). So that there will be no question what is inside and what is outside, imagine the volume outside the outer surface S and the volume inside surface S ′ (r < δ) connected by a small hole. This 26 The electric field E is defined as the force per unit charge on a small stationary test charge q : E = F/q . From Coulomb’s law t t the force on qt due to q is F = (qqt /4π ε0 )(ˆr/r 2 ). When we divide by qt , Eq. 
(1.158) follows. 80 Chapter 1 Vector Analysis FIGURE 1.35 FIGURE 1.36 Gauss’ law. Exclusion of the origin. joins surfaces S and S ′ , combining them into one single simply connected closed surface. Because the radius of the imaginary hole may be made vanishingly small, there is no additional contribution to the surface integral. The inner surface is deliberately chosen to be 1.14 Gauss’ Law, Poisson’s Equation 81 spherical so that we will be able to integrate over it. Gauss’ theorem now applies to the volume between S and S ′ without any difficulty. We have rˆ · dσ rˆ · dσ ′ + = 0. (1.160) 2 δ2 S r S′ We may evaluate the second integral, for dσ ′ = −ˆrδ 2 d, in which d is an element of solid angle. The minus sign appears because we agreed in Section 1.10 to have the positive normal rˆ ′ outward from the volume. In this case the outward rˆ ′ is in the negative radial direction, rˆ ′ = −ˆr. By integrating over all angles, we have rˆ · dσ ′ rˆ · rˆ δ 2 d = − = −4π, (1.161) δ2 δ2 S′ S′ independent of the radius δ. With the constants from Eq. (1.158), this results in q q 4π = , E · dσ = 4πε0 ε0 S (1.162) completing the proof of Gauss’ law. Notice that although the surface S may be spherical, it need not be spherical. Going just a bit further, we consider a distributed charge so that q= ρ dτ. (1.163) V Equation (1.162) still applies, with q now interpreted as the total distributed charge enclosed by surface S: ρ E · dσ = dτ. (1.164) S V ε0 Using Gauss’ theorem, we have V ∇ · E dτ = V ρ dτ. ε0 (1.165) Since our volume is completely arbitrary, the integrands must be equal, or ∇·E= ρ , ε0 (1.166) one of Maxwell’s equations. If we reverse the argument, Gauss’ law follows immediately from Maxwell’s equation. Poisson’s Equation If we replace E by −∇ϕ, Eq. (1.166) becomes ∇ · ∇ϕ = − ρ , ε0 (1.167a) 82 Chapter 1 Vector Analysis which is Poisson’s equation. For the condition ρ = 0 this reduces to an even more famous equation, ∇ · ∇ϕ = 0, (1.167b) Laplace’s equation. 
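Both cases of Gauss' law can be checked by integrating the flux of r̂/r² (the factor q/4πε₀ dropped, as in Eq. (1.159)) over spheres that do and do not enclose the origin. The sphere centers and radius below are arbitrary choices, and the surface need not be centered on the charge.

```python
import numpy as np

def flux_through_sphere(center, R=1.0, n=800):
    """Flux of the point-source field r-hat / r^2 through a sphere of
    radius R about `center`, by midpoint quadrature in (theta, phi)."""
    th = (np.arange(n) + 0.5) * np.pi / n
    ph = (np.arange(2 * n) + 0.5) * np.pi / n     # spacing pi/n, covers 2 pi
    T, P = np.meshgrid(th, ph, indexing="ij")
    nhat = np.stack([np.sin(T) * np.cos(P),       # outward unit normal
                     np.sin(T) * np.sin(P),
                     np.cos(T)])
    r = np.asarray(center, float).reshape(3, 1, 1) + R * nhat
    rmag = np.sqrt((r**2).sum(axis=0))
    E = r / rmag**3                               # r-hat / r^2 from the origin
    dA = R**2 * np.sin(T) * (np.pi / n)**2        # scalar area element
    return np.sum((E * nhat).sum(axis=0) * dA)

enclosing = flux_through_sphere(center=(0.2, 0.1, -0.3))   # origin inside
excluding = flux_through_sphere(center=(3.0, 0.0, 0.0))    # origin outside
```

The first flux converges to 4π and the second to 0, independent of where the sphere is centered, as the text asserts.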
We encounter Laplace’s equation frequently in discussing various coordinate systems (Chapter 2) and the special functions of mathematical physics that appear as its solutions. Poisson’s equation will be invaluable in developing the theory of Green’s functions (Section 9.7).

From direct comparison of the Coulomb electrostatic force law and Newton’s law of universal gravitation,

F_E = (1/4πε0) q1 q2 r̂/r²,   F_G = −G m1 m2 r̂/r².

All of the potential theory of this section applies equally well to gravitational potentials. For example, the gravitational Poisson equation is

∇ · ∇ϕ = +4πGρ,   (1.168)

with ρ now a mass density.

Exercises

1.14.1 Develop Gauss’ law for the two-dimensional case in which

ϕ = −(q/2πε0) ln ρ,   E = −∇ϕ = (q/2πε0) ρ̂/ρ.

Here q is the charge at the origin or the line charge per unit length if the two-dimensional system is a unit thickness slice of a three-dimensional (circular cylindrical) system. The variable ρ is measured radially outward from the line charge. ρ̂ is the corresponding unit vector (see Section 2.4).

1.14.2 (a) Show that Gauss’ law follows from Maxwell’s equation
∇ · E = ρ/ε0.
Here ρ is the usual charge density.
(b) Assuming that the electric field of a point charge q is spherically symmetric, show that Gauss’ law implies the Coulomb inverse square expression
E = (q/4πε0) r̂/r².

1.14.3 Show that the value of the electrostatic potential ϕ at any point P is equal to the average of the potential over any spherical surface centered on P. There are no electric charges on or within the sphere.
Hint. Use Green’s theorem, Eq. (1.104), with u⁻¹ = r, the distance from P, and v = ϕ. Also note Eq. (1.170) in Section 1.15.

1.14.4 Using Maxwell’s equations, show that for a system with steady current the magnetic vector potential A satisfies a vector Poisson equation,
∇²A = −µ0 J,
provided we require ∇ · A = 0.
1.15 DIRAC DELTA FUNCTION

From Example 1.6.1 and the development of Gauss’ law in Section 1.14,

∫ ∇ · (r̂/r²) dτ = ∫ ∇ · ∇(1/r) dτ = −4π or 0,   (1.169)

depending on whether or not the integration includes the origin r = 0. This result may be conveniently expressed by introducing the Dirac delta function,

∇²(1/r) = −4πδ(r) ≡ −4πδ(x)δ(y)δ(z).   (1.170)

This Dirac delta function is defined by its assigned properties

δ(x) = 0,   x ≠ 0,   (1.171a)

f(0) = ∫_{−∞}^{∞} f(x)δ(x) dx,   (1.171b)

where f(x) is any well-behaved function and the integration includes the origin. As a special case of Eq. (1.171b),

∫_{−∞}^{∞} δ(x) dx = 1.   (1.171c)

From Eq. (1.171b), δ(x) must be an infinitely high, infinitely thin spike at x = 0, as in the description of an impulsive force (Section 15.9) or the charge density for a point charge.²⁷ The problem is that no such function exists, in the usual sense of function. However, the crucial property in Eq. (1.171b) can be developed rigorously as the limit of a sequence of functions, a distribution. For example, the delta function may be approximated by the sequences of functions, Eqs. (1.172) to (1.175) and Figs. 1.37 to 1.40:

δn(x) = { 0, x < −1/2n;  n, −1/2n < x < 1/2n;  0, x > 1/2n },   (1.172)

δn(x) = (n/√π) exp(−n²x²),   (1.173)

δn(x) = (n/π) · 1/(1 + n²x²),   (1.174)

δn(x) = sin nx/(πx) = (1/2π) ∫_{−n}^{n} e^{ixt} dt.   (1.175)

FIGURES 1.37–1.40 δ-Sequence functions.

These approximations have varying degrees of usefulness. Equation (1.172) is useful in providing a simple derivation of the integral property, Eq. (1.171b). Equation (1.173) is convenient to differentiate.

²⁷The delta function is frequently invoked to describe very short-range forces, such as nuclear forces. It also appears in the normalization of continuum wave functions of quantum mechanics. Compare Eq. (1.193c) for plane-wave eigenfunctions.
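The limit in Eq. (1.178) below can be watched numerically for the Gaussian sequence (1.173): as n grows, the smeared integral of a test function pulls out its value at the origin. A minimal sketch (function names are mine):

```python
import math

def delta_gauss(x, n):
    """Gaussian delta sequence, Eq. (1.173): (n/sqrt(pi)) exp(-n^2 x^2)."""
    return n/math.sqrt(math.pi)*math.exp(-(n*x)**2)

def smeared(f, n, a=-10.0, b=10.0, steps=100001):
    """Midpoint quadrature of  integral f(x) delta_n(x) dx."""
    h = (b - a)/steps
    return sum(f(x)*delta_gauss(x, n)*h
               for k in range(steps) for x in [a + (k + 0.5)*h])

# As n grows, the smeared value approaches f(0) = cos(0) = 1.
for n in (1, 4, 16, 64):
    print(n, smeared(math.cos, n))
```

For f(x) = cos x the exact smeared value is exp(−1/4n²), so the error shrinks like 1/n², visible directly in the printed sequence.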
Its derivatives lead to the Hermite polynomials. Equation (1.175) is particularly useful in Fourier analysis and in its applications to quantum mechanics. In the theory of Fourier series, Eq. (1.175) often appears (modified) as the Dirichlet kernel:

δn(x) = (1/2π) sin[(n + ½)x] / sin(½x).   (1.176)

In using these approximations in Eq. (1.171b) and later, we assume that f(x) is well behaved — it offers no problems at large x.

For most physical purposes such approximations are quite adequate. From a mathematical point of view the situation is still unsatisfactory: The limits

lim_{n→∞} δn(x)

do not exist. A way out of this difficulty is provided by the theory of distributions. Recognizing that Eq. (1.171b) is the fundamental property, we focus our attention on it rather than on δ(x) itself. Equations (1.172) to (1.175) with n = 1, 2, 3, . . . may be interpreted as sequences of normalized functions:

∫_{−∞}^{∞} δn(x) dx = 1.   (1.177)

The sequence of integrals has the limit

lim_{n→∞} ∫_{−∞}^{∞} δn(x)f(x) dx = f(0).   (1.178)

Note that Eq. (1.178) is the limit of a sequence of integrals. Again, the limit of δn(x), n → ∞, does not exist. (The limits for all four forms of δn(x) diverge at x = 0.) We may treat δ(x) consistently in the form

∫_{−∞}^{∞} δ(x)f(x) dx = lim_{n→∞} ∫_{−∞}^{∞} δn(x)f(x) dx.   (1.179)

δ(x) is labeled a distribution (not a function) defined by the sequences δn(x) as indicated in Eq. (1.179). We might emphasize that the integral on the left-hand side of Eq. (1.179) is not a Riemann integral.²⁸ It is a limit.

This distribution δ(x) is only one of an infinity of possible distributions, but it is the one we are interested in because of Eq. (1.171b). From these sequences of functions we see that Dirac’s delta function must be even in x, δ(−x) = δ(x). The integral property, Eq.
(1.171b), is useful in cases where the argument of the delta function is a function g(x) with simple zeros on the real axis, which leads to the rules

δ(ax) = (1/a) δ(x),   a > 0,   (1.180)

δ(g(x)) = Σ_{a, g(a)=0, g′(a)≠0} δ(x − a)/|g′(a)|.   (1.181a)

Equation (1.180) may be written

∫_{−∞}^{∞} f(x)δ(ax) dx = (1/a) ∫_{−∞}^{∞} f(y/a)δ(y) dy = (1/a) f(0),

applying Eq. (1.171b). Equation (1.180) may be written as δ(ax) = (1/|a|)δ(x) for a < 0. To prove Eq. (1.181a) we decompose the integral

∫_{−∞}^{∞} f(x)δ(g(x)) dx = Σ_a ∫_{a−ε}^{a+ε} f(x)δ((x − a)g′(a)) dx   (1.181b)

into a sum of integrals over small intervals containing the zeros of g(x). In these intervals, g(x) ≈ g(a) + (x − a)g′(a) = (x − a)g′(a). Using Eq. (1.180) on the right-hand side of Eq. (1.181b) we obtain the integral of Eq. (1.181a).

²⁸It can be treated as a Stieltjes integral if desired. δ(x) dx is replaced by du(x), where u(x) is the Heaviside step function (compare Exercise 1.15.13).

Using integration by parts we can also define the derivative δ′(x) of the Dirac delta function by the relation

∫_{−∞}^{∞} f(x)δ′(x − x′) dx = −∫_{−∞}^{∞} f′(x)δ(x − x′) dx = −f′(x′).   (1.182)

We use δ(x) frequently and call it the Dirac delta function²⁹ — for historical reasons. Remember that it is not really a function. It is essentially a shorthand notation, defined implicitly as the limit of integrals in a sequence, δn(x), according to Eq. (1.179). It should be understood that our Dirac delta function has significance only as part of an integrand. In this spirit, the linear operator ∫ dx δ(x − x0) operates on f(x) and yields f(x0):

L(x0)f(x) ≡ ∫_{−∞}^{∞} δ(x − x0)f(x) dx = f(x0).   (1.183)

It may also be classified as a linear mapping or simply as a generalized function. Shifting our singularity to the point x = x′, we write the Dirac delta function as δ(x − x′). Equation (1.171b) becomes

∫_{−∞}^{∞} f(x)δ(x − x′) dx = f(x′).
(1.184)

As a description of a singularity at x = x′, the Dirac delta function may be written as δ(x − x′) or as δ(x′ − x). Going to three dimensions and using spherical polar coordinates, we obtain

∫_0^{2π} ∫_0^{π} ∫_0^{∞} δ(r) r² dr sin θ dθ dϕ = ∫∫∫_{−∞}^{∞} δ(x)δ(y)δ(z) dx dy dz = 1.   (1.185)

This corresponds to a singularity (or source) at the origin. Again, if our source is at r = r1, Eq. (1.185) becomes

∫ δ(r2 − r1) r2² dr2 sin θ2 dθ2 dϕ2 = 1.   (1.186)

²⁹Dirac introduced the delta function to quantum mechanics. Actually, the delta function can be traced back to Kirchhoff, 1882. For further details see M. Jammer, The Conceptual Development of Quantum Mechanics. New York: McGraw–Hill (1966), p. 301.

Example 1.15.1 TOTAL CHARGE INSIDE A SPHERE

Consider the total electric flux ∮ E · dσ out of a sphere of radius R around the origin surrounding n charges ej, located at the points rj with rj < R, that is, inside the sphere. The electric field strength E = −∇ϕ(r), where the potential

ϕ = Σ_{j=1}^{n} ej/|r − rj| = ∫ ρ(r′)/|r − r′| d³r′

is the sum of the Coulomb potentials generated by each charge and the total charge density is ρ(r) = Σ_j ej δ(r − rj). The delta function is used here as an abbreviation of a pointlike density. Now we use Gauss’ theorem for

∮ E · dσ = −∮ ∇ϕ · dσ = −∫ ∇²ϕ dτ = ∫ (ρ(r)/ε0) dτ = Σ_j ej/ε0

in conjunction with the differential form of Gauss’ law, ∇ · E = −∇²ϕ = ρ/ε0, and

Σ_j ej ∫ δ(r − rj) dτ = Σ_j ej.

Example 1.15.2 PHASE SPACE

In the scattering theory of relativistic particles using Feynman diagrams, we encounter the following integral over the energy of the scattered particle (we set the velocity of light c = 1):

∫ d⁴p δ(p² − m²) f(p) ≡ ∫ d³p ∫ dp0 δ(p0² − p² − m²) f(p)
  = ∫_{E>0} d³p f(E, p)/(2√(m² + p²)) + ∫_{E<0} d³p f(E, p)/(2√(m² + p²)),

where we have used Eq. (1.181a) at the zeros E = ±√(m² + p²) of the argument of the delta function.
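The rule (1.181a), used in the phase-space example above, can be spot-checked numerically by replacing δ with a narrow member of the Gaussian sequence (1.173). A sketch (function names are mine) for g(x) = x² − a0², whose simple zeros at ±a0 have |g′| = 2a0:

```python
import math

def delta_gauss(x, n):
    """Gaussian delta sequence, Eq. (1.173)."""
    return n/math.sqrt(math.pi)*math.exp(-(n*x)**2)

def lhs(f, g, n=200, a=-5.0, b=5.0, steps=400001):
    """Midpoint quadrature of  integral f(x) delta_n(g(x)) dx."""
    h = (b - a)/steps
    return sum(f(x)*delta_gauss(g(x), n)*h
               for k in range(steps) for x in [a + (k + 0.5)*h])

a0 = 1.5
f = lambda x: x*x + 1.0
g = lambda x: x*x - a0*a0        # simple zeros at x = +/- a0, |g'| = 2*a0
rhs = (f(a0) + f(-a0))/(2*a0)    # sum over the zeros, Eq. (1.181a)
val = lhs(f, g)
print(val, rhs)                  # the two values agree closely
```

Both sides come out near 13/6; the residual difference shrinks as n grows, in keeping with the delta function having meaning only under an integral.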
The physical meaning of δ(p² − m²) is that the particle of mass m and four-momentum p^µ = (p0, p) is on its mass shell, because p² = m² is equivalent to E = ±√(m² + p²). Thus, the on-mass-shell volume element in momentum space is the Lorentz invariant d³p/2E, in contrast to the nonrelativistic d³p of momentum space. The fact that a negative energy occurs is a peculiarity of relativistic kinematics that is related to the antiparticle.

Delta Function Representation by Orthogonal Functions

Dirac’s delta function³⁰ can be expanded in terms of any basis of real orthogonal functions {ϕn(x), n = 0, 1, 2, . . .}. Such functions will occur in Chapter 10 as solutions of ordinary differential equations of the Sturm–Liouville form.

³⁰This section is optional here. It is not needed until Chapter 10.

They satisfy the orthogonality relations

∫_a^b ϕm(x)ϕn(x) dx = δmn,   (1.187)

where the interval (a, b) may be infinite at either end or both ends. [For convenience we assume that ϕn has been defined to include (w(x))^{1/2} if the orthogonality relations contain an additional positive weight function w(x).] We use the ϕn to expand the delta function as

δ(x − t) = Σ_{n=0}^{∞} an(t)ϕn(x),   (1.188)

where the coefficients an are functions of the variable t. Multiplying by ϕm(x) and integrating over the orthogonality interval (Eq. (1.187)), we have

am(t) = ∫_a^b δ(x − t)ϕm(x) dx = ϕm(t)   (1.189)

or

δ(x − t) = Σ_{n=0}^{∞} ϕn(t)ϕn(x) = δ(t − x).   (1.190)

This series is assuredly not uniformly convergent (see Chapter 5), but it may be used as part of an integrand in which the ensuing integration will make it convergent (compare Section 5.5).

Suppose we form the integral ∫ F(t)δ(t − x) dt, where it is assumed that F(t) can be expanded in a series of orthogonal functions ϕp(t), a property called completeness.
We then obtain

∫ F(t)δ(t − x) dt = ∫ [Σ_{p=0}^{∞} ap ϕp(t)] [Σ_{n=0}^{∞} ϕn(x)ϕn(t)] dt = Σ_{p=0}^{∞} ap ϕp(x) = F(x),   (1.191)

the cross products ∫ ϕp ϕn dt (n ≠ p) vanishing by orthogonality (Eq. (1.187)). Referring back to the definition of the Dirac delta function, Eq. (1.171b), we see that our series representation, Eq. (1.190), satisfies the defining property of the Dirac delta function and therefore is a representation of it. This representation of the Dirac delta function is called closure. The assumption of completeness of a set of functions for expansion of δ(x − t) yields the closure relation. The converse, that closure implies completeness, is the topic of Exercise 1.15.16.

Integral Representations for the Delta Function

Integral transforms, such as the Fourier integral

F(ω) = ∫_{−∞}^{∞} f(t) exp(iωt) dt

of Chapter 15, lead to the corresponding integral representations of Dirac’s delta function. For example, take

δn(t − x) = sin n(t − x)/[π(t − x)] = (1/2π) ∫_{−n}^{n} exp[iω(t − x)] dω,   (1.192)

using Eq. (1.175). We have

f(x) = lim_{n→∞} ∫_{−∞}^{∞} f(t)δn(t − x) dt,   (1.193a)

where δn(t − x) is the sequence in Eq. (1.192) defining the distribution δ(t − x). Note that Eq. (1.193a) assumes that f(t) is continuous at t = x. If we substitute Eq. (1.192) into Eq. (1.193a) we obtain

f(x) = lim_{n→∞} (1/2π) ∫_{−∞}^{∞} f(t) ∫_{−n}^{n} exp[iω(t − x)] dω dt.   (1.193b)

Interchanging the order of integration and then taking the limit as n → ∞, we have the Fourier integral theorem, Eq. (15.20). With the understanding that it belongs under an integral sign, as in Eq. (1.193a), the identification

δ(t − x) = (1/2π) ∫_{−∞}^{∞} exp[iω(t − x)] dω   (1.193c)

provides a very useful integral representation of the delta function.
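The closure relation (1.190) and its action under an integral, Eq. (1.191), can be illustrated numerically. The sketch below (names are mine) assumes Legendre polynomials, normalized on (−1, 1), as the orthogonal set; because F here is a cubic, even a truncated kernel acts as a delta function under the integral.

```python
import math

def legendre(nmax, x):
    """P_0 ... P_nmax at x via the three-term recurrence."""
    P = [1.0, x]
    for n in range(1, nmax):
        P.append(((2*n + 1)*x*P[n] - n*P[n - 1])/(n + 1))
    return P[:nmax + 1]

def closure_integral(F, x, nmax=10, steps=4000):
    """integral F(t) * sum_n phi_n(x) phi_n(t) dt over (-1, 1), with the
    orthonormal phi_n = sqrt((2n+1)/2) P_n; Eq. (1.191) says this -> F(x)."""
    h = 2.0/steps
    Px = legendre(nmax, x)
    total = 0.0
    for k in range(steps):
        t = -1.0 + (k + 0.5)*h
        Pt = legendre(nmax, t)
        kern = sum((2*n + 1)/2.0*Px[n]*Pt[n] for n in range(nmax + 1))
        total += F(t)*kern*h
    return total

F = lambda t: t**3 - 0.5*t + 0.2
print(closure_integral(F, 0.3), F(0.3))   # truncated closure sum reproduces F
```

For a general F the truncated sum only approaches F(x) as nmax grows, mirroring the remark that the series (1.190) converges only as part of an integrand.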
When the Laplace transform (see Sections 15.1 and 15.9)

Lδ(s) = ∫_0^{∞} exp(−st)δ(t − t0) dt = exp(−st0),   t0 > 0,   (1.194)

is inverted, we obtain the complex representation

δ(t − t0) = (1/2πi) ∫_{γ−i∞}^{γ+i∞} exp[s(t − t0)] ds,   (1.195)

which is essentially equivalent to the previous Fourier representation of Dirac’s delta function.

Exercises

1.15.1 Let
δn(x) = { 0, x < −1/2n;  n, −1/2n < x < 1/2n;  0, 1/2n < x }.
Show that
lim_{n→∞} ∫_{−∞}^{∞} f(x)δn(x) dx = f(0),
assuming that f(x) is continuous at x = 0.

1.15.2 Verify that the sequence δn(x), based on the function
δn(x) = { 0, x < 0;  ne^{−nx}, x > 0 },
is a delta sequence (satisfying Eq. (1.178)). Note that the singularity is at +0, the positive side of the origin.
Hint. Replace the upper limit (∞) by c/n, where c is large but finite, and use the mean value theorem of integral calculus.

1.15.3 For
δn(x) = (n/π) · 1/(1 + n²x²)
(Eq. (1.174)), show that
∫_{−∞}^{∞} δn(x) dx = 1.

1.15.4 Demonstrate that δn = sin nx/πx is a delta distribution by showing that
lim_{n→∞} ∫_{−∞}^{∞} f(x) (sin nx/πx) dx = f(0).
Assume that f(x) is continuous at x = 0 and vanishes as x → ±∞.
Hint. Replace x by y/n and take lim n → ∞ before integrating.

1.15.5 Fejer’s method of summing series is associated with the function
δn(t) = (1/2πn) [sin(nt/2)/sin(t/2)]².
Show that δn(t) is a delta distribution, in the sense that
lim_{n→∞} (1/2πn) ∫_{−∞}^{∞} f(t) [sin(nt/2)/sin(t/2)]² dt = f(0).

1.15.6 Prove that
δ[a(x − x1)] = (1/a) δ(x − x1).
Note. If δ[a(x − x1)] is considered even, relative to x1, the relation holds for negative a and 1/a may be replaced by 1/|a|.

1.15.7 Show that
δ[(x − x1)(x − x2)] = [δ(x − x1) + δ(x − x2)]/|x1 − x2|.
Hint. Try using Exercise 1.15.6.

1.15.8 Using the Gauss error curve delta sequence (δn = (n/√π) e^{−n²x²}), show that
x (d/dx) δ(x) = −δ(x),
treating δ(x) and its derivative as in Eq. (1.179).

1.15.9 Show that
∫_{−∞}^{∞} δ′(x)f(x) dx = −f′(0).
Here we assume that f′(x) is continuous at x = 0.

1.15.10 Prove that
δ(f(x)) = |df(x)/dx|⁻¹_{x=x0} δ(x − x0),
where x0 is chosen so that f(x0) = 0.
Hint. Note that δ(f) df = δ(x) dx.

1.15.11 Show that in spherical polar coordinates (r, cos θ, ϕ) the delta function δ(r1 − r2) becomes
(1/r1²) δ(r1 − r2)δ(cos θ1 − cos θ2)δ(ϕ1 − ϕ2).
Generalize this to the curvilinear coordinates (q1, q2, q3) of Section 2.1 with scale factors h1, h2, and h3.

1.15.12 A rigorous development of Fourier transforms³¹ includes as a theorem the relations

lim_{a→∞} (2/π) ∫_{x1}^{x2} f(u + x) (sin ax/x) dx
  = { f(u + 0) + f(u − 0), x1 < 0 < x2;
      f(u + 0), x1 = 0 < x2;
      f(u − 0), x1 < 0 = x2;
      0, x1 < x2 < 0 or 0 < x1 < x2 }.

Verify these results using the Dirac delta function.

³¹I. N. Sneddon, Fourier Transforms. New York: McGraw-Hill (1951).

1.15.13 (a) If we define a sequence δn(x) = n/(2 cosh² nx), show that
∫_{−∞}^{∞} δn(x) dx = 1, independent of n.
(b) Continuing this analysis, show that³²
∫_{−∞}^{x} δn(x) dx = ½[1 + tanh nx] ≡ un(x)
and
lim_{n→∞} un(x) = { 0, x < 0;  1, x > 0 }.
This is the Heaviside unit step function (Fig. 1.41).

FIGURE 1.41 ½[1 + tanh nx] and the Heaviside unit step function.

1.15.14 Show that the unit step function u(x) may be represented by
u(x) = ½ + (1/2πi) P ∫_{−∞}^{∞} e^{ixt} dt/t,
where P means Cauchy principal value (Section 7.1).

1.15.15 As a variation of Eq. (1.175), take
δn(x) = (1/2π) ∫_{−∞}^{∞} e^{ixt−|t|/n} dt.
Show that this reduces to (n/π) · 1/(1 + n²x²), Eq. (1.174), and that
∫_{−∞}^{∞} δn(x) dx = 1.
Note. In terms of integral transforms, the initial equation here may be interpreted as either a Fourier exponential transform of e^{−|t|/n} or a Laplace transform of e^{ixt}.

³²Many other symbols are used for this function. This is the AMS-55 (see footnote 4 on p. 330 for the reference) notation: u for unit.

1.15.16 (a) The Dirac delta function representation given by Eq.
(1.190),
δ(x − t) = Σ_{n=0}^{∞} ϕn(x)ϕn(t),
is often called the closure relation. For an orthonormal set of real functions, ϕn, show that closure implies completeness, that is, Eq. (1.191) follows from Eq. (1.190).
Hint. One can take F(x) = ∫ F(t)δ(x − t) dt.
(b) Following the hint of part (a) you encounter the integral ∫ F(t)ϕn(t) dt. How do you know that this integral is finite?

1.15.17 For the finite interval (−π, π) write the Dirac delta function δ(x − t) as a series of sines and cosines: sin nx, cos nx, n = 0, 1, 2, . . . . Note that although these functions are orthogonal, they are not normalized to unity.

1.15.18 In the interval (−π, π), δn(x) = (n/√π) exp(−n²x²).
(a) Write δn(x) as a Fourier cosine series.
(b) Show that your Fourier series agrees with a Fourier expansion of δ(x) in the limit as n → ∞.
(c) Confirm the delta function nature of your Fourier series by showing that for any f(x) that is finite in the interval [−π, π] and continuous at x = 0,
∫_{−π}^{π} f(x) [Fourier expansion of δ∞(x)] dx = f(0).

1.15.19 (a) Write δn(x) = (n/√π) exp(−n²x²) in the interval (−∞, ∞) as a Fourier integral and compare the limit n → ∞ with Eq. (1.193c).
(b) Write δn(x) = n exp(−nx) as a Laplace transform and compare the limit n → ∞ with Eq. (1.195).
Hint. See Eqs. (15.22) and (15.23) for (a) and Eq. (15.212) for (b).

1.15.20 (a) Show that the Dirac delta function δ(x − a), expanded in a Fourier sine series in the half-interval (0, L), (0 < a < L), is given by
δ(x − a) = (2/L) Σ_{n=1}^{∞} sin(nπa/L) sin(nπx/L).
Note that this series actually describes −δ(x + a) + δ(x − a) in the interval (−L, L).
(b) By integrating both sides of the preceding equation from 0 to x, show that the cosine expansion of the square wave follows.

1.15.22 For
δn(x) = { 0, x < −1/2n;  n, −1/2n < x < 1/2n;  0, x > 1/2n }
(this is Eq. (1.172)), express δn(x) as a Fourier integral (via the Fourier integral theorem, inverse transform, etc.). Finally, show that we may write
δ(x) = lim_{n→∞} δn(x) = (1/2π) ∫_{−∞}^{∞} e^{−ikx} dk.

1.15.23 Using the sequence
δn(x) = (n/√π) exp(−n²x²),
show that
δ(x) = (1/2π) ∫_{−∞}^{∞} e^{−ikx} dk.
Note. Remember that δ(x) is defined in terms of its behavior as part of an integrand — especially Eqs. (1.178) and (1.189).

1.15.24 Derive sine and cosine representations of δ(t − x) that are comparable to the exponential representation, Eq. (1.193c).
ANS. (2/π) ∫_0^{∞} sin ωt sin ωx dω,   (2/π) ∫_0^{∞} cos ωt cos ωx dω.

1.16 HELMHOLTZ’S THEOREM

In Section 1.13 it was emphasized that the choice of a magnetic vector potential A was not unique. The divergence of A was still undetermined. In this section two theorems about the divergence and curl of a vector are developed. The first theorem is as follows:

A vector is uniquely specified by giving its divergence and its curl within a simply connected region (without holes) and its normal component over the boundary.

Note that the subregions, where the divergence and curl are defined (often in terms of Dirac delta functions), are part of our region and are not supposed to be removed here or in Helmholtz’s theorem, which follows. Let us take

∇ · V1 = s,   ∇ × V1 = c,   (1.196)

where s may be interpreted as a source (charge) density and c as a circulation (current) density. Assuming also that the normal component V1n on the boundary is given, we want to show that V1 is unique. We do this by assuming the existence of a second vector, V2, which satisfies Eq. (1.196) and has the same normal component over the boundary, and then showing that V1 − V2 = 0. Let W = V1 − V2. Then

∇ · W = 0   (1.197)

and

∇ × W = 0.   (1.198)

Since W is irrotational we may write (by Section 1.13)

W = −∇ϕ.   (1.199)

Substituting this into Eq. (1.197), we obtain

∇ · ∇ϕ = 0,   (1.200)

Laplace’s equation. Now we draw upon Green’s theorem in the form given in Eq. (1.105), letting u and v each equal ϕ. Since

Wn = V1n − V2n = 0   (1.201)

on the boundary, Green’s theorem reduces to

∫_V (∇ϕ) · (∇ϕ) dτ = ∫_V W · W dτ = 0.   (1.202)
The quantity W · W = W² is nonnegative and so we must have

W = V1 − V2 = 0   (1.203)

everywhere. Thus V1 is unique, proving the theorem.

For our magnetic vector potential A the relation B = ∇ × A specifies the curl of A. Often for convenience we set ∇ · A = 0 (compare Exercise 1.14.4). Then (with boundary conditions) A is fixed.

This theorem may be written as a uniqueness theorem for solutions of Laplace’s equation, Exercise 1.16.1. In this form, this uniqueness theorem is of great importance in solving electrostatic and other Laplace equation boundary value problems. If we can find a solution of Laplace’s equation that satisfies the necessary boundary conditions, then our solution is the complete solution. Such boundary value problems are taken up in Sections 12.3 and 12.5.

Helmholtz’s Theorem

The second theorem we shall prove is Helmholtz’s theorem. A vector V satisfying Eq. (1.196) with both source and circulation densities vanishing at infinity may be written as the sum of two parts, one of which is irrotational, the other of which is solenoidal. Note that our region is simply connected, being all of space, for simplicity. Helmholtz’s theorem will clearly be satisfied if we may write V as

V = −∇ϕ + ∇ × A,   (1.204a)

−∇ϕ being irrotational and ∇ × A being solenoidal. We proceed to justify Eq. (1.204a).

V is a known vector. We take the divergence and curl

∇ · V = s(r),   (1.204b)
∇ × V = c(r),   (1.204c)

with s(r) and c(r) now known functions of position. From these two functions we construct a scalar potential ϕ(r1),

ϕ(r1) = (1/4π) ∫ s(r2)/r12 dτ2,   (1.205a)

and a vector potential A(r1),

A(r1) = (1/4π) ∫ c(r2)/r12 dτ2.   (1.205b)

If s = 0, then V is solenoidal and Eq. (1.205a) implies ϕ = 0. From Eq. (1.204a), V = ∇ × A, with A as given in Eq. (1.141), which is consistent with Section 1.13. Further, if c = 0, then V is irrotational and Eq. (1.205b) implies A = 0, and Eq.
(1.204a) implies V = −∇ϕ, consistent with scalar potential theory of Section 1.13.

Here the argument r1 indicates (x1, y1, z1), the field point; r2, the coordinates of the source point (x2, y2, z2); whereas

r12 = [(x1 − x2)² + (y1 − y2)² + (z1 − z2)²]^{1/2}.   (1.206)

When a direction is associated with r12, the positive direction is taken to be away from the source and toward the field point. Vectorially, r12 = r1 − r2, as shown in Fig. 1.42. Of course, s and c must vanish sufficiently rapidly at large distance so that the integrals exist. The actual expansion and evaluation of integrals such as Eqs. (1.205a) and (1.205b) is treated in Section 12.1.

FIGURE 1.42 Source and field points.

From the uniqueness theorem at the beginning of this section, V is uniquely specified by its divergence, s, and curl, c (and boundary conditions). Returning to Eq. (1.204a), we have

∇ · V = −∇ · ∇ϕ,   (1.207a)

the divergence of the curl vanishing, and

∇ × V = ∇ × (∇ × A),   (1.207b)

the curl of the gradient vanishing. If we can show that

−∇ · ∇ϕ(r1) = s(r1)   (1.207c)

and

∇ × [∇ × A(r1)] = c(r1),   (1.207d)

then V as given in Eq. (1.204a) will have the proper divergence and curl. Our description will be internally consistent and Eq. (1.204a) justified.³³

First, we consider the divergence of V:

∇ · V = −∇ · ∇ϕ = −(1/4π) ∇ · ∇ ∫ s(r2)/r12 dτ2.   (1.208)

The Laplacian operator, ∇ · ∇, or ∇², operates on the field coordinates (x1, y1, z1) and so commutes with the integration with respect to (x2, y2, z2). We have

∇ · V = −(1/4π) ∫ s(r2) ∇1²(1/r12) dτ2.   (1.209)

We must make two minor modifications in Eq. (1.169) before applying it. First, our source is at r2, not at the origin. This means that a nonzero result from Gauss’ law appears if and only if the surface S includes the point r = r2. To show this, we rewrite Eq. (1.170):

∇²(1/r12) = −4πδ(r1 − r2).   (1.210)

³³Alternatively, we could solve Eq.
(1.207c), Poisson’s equation, and compare the solution with the constructed potential, Eq. (1.205a). The solution of Poisson’s equation is developed in Section 9.7.

This shift of the source to r2 may be incorporated in the defining equations as

δ(r1 − r2) = 0,   r1 ≠ r2,   (1.211a)

∫ f(r1)δ(r1 − r2) dτ1 = f(r2).   (1.211b)

Second, noting that differentiating r12⁻¹ twice with respect to x2, y2, z2 is the same as differentiating twice with respect to x1, y1, z1, we have

∇1²(1/r12) = ∇2²(1/r12) = −4πδ(r1 − r2) = −4πδ(r2 − r1).   (1.212)

Rewriting Eq. (1.209) and using the Dirac delta function, Eq. (1.212), we may integrate to obtain

∇ · V = −(1/4π) ∫ s(r2) ∇2²(1/r12) dτ2
      = −(1/4π) ∫ s(r2)(−4π)δ(r2 − r1) dτ2
      = s(r1).   (1.213)

The final step follows from Eq. (1.211b), with the subscripts 1 and 2 exchanged. Our result, Eq. (1.213), shows that the assumed forms of V and of the scalar potential ϕ are in agreement with the given divergence (Eq. (1.204b)).

To complete the proof of Helmholtz’s theorem, we need to show that our assumptions are consistent with Eq. (1.204c), that is, that the curl of V is equal to c(r1). From Eq. (1.204a),

∇ × V = ∇ × (∇ × A) = ∇∇ · A − ∇²A.   (1.214)

The first term, ∇∇ · A, leads to

4π∇∇ · A = ∇1 ∫ c(r2) · ∇1(1/r12) dτ2   (1.215)

by Eq. (1.205b). Again replacing the second derivatives with respect to x1, y1, z1 by second derivatives with respect to x2, y2, z2, we integrate each component³⁴ of Eq. (1.215) by parts:

4π∇∇ · A|x = ∫ c(r2) · ∇2 (∂/∂x2)(1/r12) dτ2
           = ∫ ∇2 · [c(r2) (∂/∂x2)(1/r12)] dτ2 − ∫ [∇2 · c(r2)] (∂/∂x2)(1/r12) dτ2.   (1.216)

³⁴This avoids creating the tensor c(r2)∇2.

The second integral vanishes because the circulation density c is solenoidal.³⁵ The first integral may be transformed to a surface integral by Gauss’ theorem. If c is bounded in space or vanishes faster than 1/r for large r, so that the integral in Eq.
(1.205b) exists, then by choosing a sufficiently large surface the first integral on the right-hand side of Eq. (1.216) also vanishes.

With ∇∇ · A = 0, Eq. (1.214) now reduces to

∇ × V = −∇²A = −(1/4π) ∫ c(r2) ∇1²(1/r12) dτ2.   (1.217)

This is exactly like Eq. (1.209) except that the scalar s(r2) is replaced by the vector circulation density c(r2). Introducing the Dirac delta function, as before, as a convenient way of carrying out the integration, we find that Eq. (1.217) reduces to Eq. (1.196). We see that our assumed forms of V, given by Eq. (1.204a), and of the vector potential A, given by Eq. (1.205b), are in agreement with Eq. (1.196) specifying the curl of V.

This completes the proof of Helmholtz’s theorem, showing that a vector may be resolved into irrotational and solenoidal parts. Applied to the electromagnetic field, we have resolved our field vector V into an irrotational electric field E, derived from a scalar potential ϕ, and a solenoidal magnetic induction field B, derived from a vector potential A. The source density s(r) may be interpreted as an electric charge density (divided by electric permittivity ε), whereas the circulation density c(r) becomes electric current density (times magnetic permeability µ).

Exercises

1.16.1 Implicit in this section is a proof that a function ψ(r) is uniquely specified by requiring it to (1) satisfy Laplace’s equation and (2) satisfy a complete set of boundary conditions. Develop this proof explicitly.

1.16.2 (a) Assuming that P is a solution of the vector Poisson equation, ∇1²P(r1) = −V(r1), develop an alternate proof of Helmholtz’s theorem, showing that V may be written as
V = −∇ϕ + ∇ × A,
where A = ∇ × P, and ϕ = ∇ · P.
(b) Solving the vector Poisson equation, we find
P(r1) = (1/4π) ∫_V V(r2)/r12 dτ2.
Show that this solution substituted into ϕ and A of part (a) leads to the expressions given for ϕ and A in Section 1.16.

³⁵Remember, c = ∇ × V is known.
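Helmholtz’s theorem can also be illustrated discretely. On a periodic grid, the irrotational part of V is the projection of its Fourier amplitude onto k, and the solenoidal part is the remainder. The sketch below uses NumPy on a periodic box rather than all of space, so it is not the integral construction of Eqs. (1.205a, b); names are mine.

```python
import numpy as np

def helmholtz_decompose(Vx, Vy, L=2*np.pi):
    """Split a periodic 2-D field V into irrotational + solenoidal parts by
    projecting its Fourier amplitude onto k (longitudinal/transverse split)."""
    n = Vx.shape[0]
    k = 2*np.pi*np.fft.fftfreq(n, d=L/n)
    kx, ky = np.meshgrid(k, k, indexing='ij')
    k2 = kx**2 + ky**2
    k2[0, 0] = 1.0                       # the k = 0 mode has no gradient part
    Fx, Fy = np.fft.fft2(Vx), np.fft.fft2(Vy)
    amp = (kx*Fx + ky*Fy)/k2             # longitudinal (curl-free) amplitude
    Ix = np.fft.ifft2(amp*kx).real       # irrotational part
    Iy = np.fft.ifft2(amp*ky).real
    return (Ix, Iy), (Vx - Ix, Vy - Iy)  # (irrotational, solenoidal)

# Check on a field assembled from a known potential and stream function:
n = 64
x = np.linspace(0, 2*np.pi, n, endpoint=False)
X, Y = np.meshgrid(x, x, indexing='ij')
irr = (-np.cos(X)*np.cos(Y), np.sin(X)*np.sin(Y))        # -grad(sin x cos y)
sol = (np.sin(2*X)*np.cos(Y), -2*np.cos(2*X)*np.sin(Y))  # curl of psi z-hat
(Ix, Iy), (Sx, Sy) = helmholtz_decompose(irr[0] + sol[0], irr[1] + sol[1])
print(np.max(np.abs(Ix - irr[0])), np.max(np.abs(Sy - sol[1])))  # both tiny
```

Because both test fields are band-limited, the spectral projection recovers them essentially to machine precision, and the recovered solenoidal part is divergence-free mode by mode.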
Additional Readings

Borisenko, A. I., and I. E. Taropov, Vector and Tensor Analysis with Applications. Englewood Cliffs, NJ: Prentice-Hall (1968). Reprinted, Dover (1980).

Davis, H. F., and A. D. Snider, Introduction to Vector Analysis, 7th ed. Boston: Allyn & Bacon (1995).

Kellogg, O. D., Foundations of Potential Theory. New York: Dover (1953). Originally published (1929). The classic text on potential theory.

Lewis, P. E., and J. P. Ward, Vector Analysis for Engineers and Scientists. Reading, MA: Addison-Wesley (1989).

Marion, J. B., Principles of Vector Analysis. New York: Academic Press (1965). A moderately advanced presentation of vector analysis oriented toward tensor analysis. Rotations and other transformations are described with the appropriate matrices.

Spiegel, M. R., Vector Analysis. New York: McGraw-Hill (1989).

Tai, C.-T., Generalized Vector and Dyadic Analysis. Oxford: Oxford University Press (1996).

Wrede, R. C., Introduction to Vector and Tensor Analysis. New York: Wiley (1963). Reprinted, New York: Dover (1972). Fine historical introduction. Excellent discussion of differentiation of vectors and applications to mechanics.

CHAPTER 2  VECTOR ANALYSIS IN CURVED COORDINATES AND TENSORS

In Chapter 1 we restricted ourselves almost completely to rectangular or Cartesian coordinate systems. A Cartesian coordinate system offers the unique advantage that all three unit vectors, x̂, ŷ, and ẑ, are constant in direction as well as in magnitude. We did introduce the radial distance r, but even this was treated as a function of x, y, and z. Unfortunately, not all physical problems are well adapted to a solution in Cartesian coordinates. For instance, if we have a central force problem, F = r̂F(r), such as gravitational or electrostatic force, Cartesian coordinates may be unusually inappropriate.
Such a problem demands the use of a coordinate system in which the radial distance is taken to be one of the coordinates, that is, spherical polar coordinates. The point is that the coordinate system should be chosen to fit the problem, to exploit any constraint or symmetry present in it. Then it is likely to be more readily soluble than if we had forced it into a Cartesian framework.

Naturally, there is a price that must be paid for the use of a non-Cartesian coordinate system. We have not yet written expressions for gradient, divergence, or curl in any of the non-Cartesian coordinate systems. Such expressions are developed in general form in Section 2.2. First, we develop a system of curvilinear coordinates, a general system that may be specialized to any of the particular systems of interest. We shall specialize to circular cylindrical coordinates in Section 2.4 and to spherical polar coordinates in Section 2.5.

2.1 ORTHOGONAL COORDINATES IN R³

In Cartesian coordinates we deal with three mutually perpendicular families of planes: x = constant, y = constant, and z = constant. Imagine that we superimpose on this system three other families of surfaces qi(x, y, z), i = 1, 2, 3. The surfaces of any one family qi need not be parallel to each other and they need not be planes. If this is difficult to visualize, the figure of a specific coordinate system, such as Fig. 2.3, may be helpful. The three new families of surfaces need not be mutually perpendicular, but for simplicity we impose this condition (Eq. (2.7)) because orthogonal coordinates are common in physical applications. This orthogonality has many advantages: Orthogonal coordinates are almost like Cartesian coordinates where infinitesimal areas and volumes are products of coordinate differentials.
In this section we develop the general formalism of orthogonal coordinates, derive from the geometry the coordinate differentials, and use them for line, area, and volume elements in multiple integrals and vector operators. We may describe any point (x, y, z) as the intersection of three planes in Cartesian coordinates or as the intersection of the three surfaces that form our new, curvilinear coordinates. Describing the curvilinear coordinate surfaces by q1 = constant, q2 = constant, q3 = constant, we may identify our point by (q1, q2, q3) as well as by (x, y, z):

General curvilinear coordinates q1, q2, q3      Circular cylindrical coordinates ρ, ϕ, z
x = x(q1, q2, q3)                               −∞ < x = ρ cos ϕ < ∞
y = y(q1, q2, q3)                               −∞ < y = ρ sin ϕ < ∞          (2.1)
z = z(q1, q2, q3)                               −∞ < z = z < ∞

specifying x, y, z in terms of q1, q2, q3 and the inverse relations

ρ = (x² + y²)^(1/2),    0 ≤ ρ < ∞,    (2.2)
ϕ = arctan(y/x),        0 ≤ ϕ < 2π,   (2.3)
z = z,                  −∞ < z < ∞.

Differentiation of x in Eqs. (2.1) leads to the total variation or differential

dx = (∂x/∂q1) dq1 + (∂x/∂q2) dq2 + (∂x/∂q3) dq3,    (2.4)

and similarly for differentiation of y and z. In vector notation dr = Σi (∂r/∂qi) dqi. From the Pythagorean theorem in Cartesian coordinates the square of the distance between two neighboring points is

ds² = dx² + dy² + dz².

Substituting dr shows that in our curvilinear coordinate space the square of the distance element can be written as a quadratic form in the differentials dqi:

ds² = dr · dr = Σij (∂r/∂qi) · (∂r/∂qj) dqi dqj
    = g11 dq1² + g12 dq1 dq2 + g13 dq1 dq3
    + g21 dq2 dq1 + g22 dq2² + g23 dq2 dq3
    + g31 dq3 dq1 + g32 dq3 dq2 + g33 dq3²
    = Σij gij dqi dqj,    (2.5)

where nonzero mixed terms dqi dqj with i ≠ j signal that these coordinates are not orthogonal, that is, that the tangential directions q̂i are not mutually orthogonal. Spaces for which Eq. (2.5) is a legitimate expression are called metric or Riemannian. Writing Eq.
(2.5) more explicitly, we see that

gij(q1, q2, q3) = (∂x/∂qi)(∂x/∂qj) + (∂y/∂qi)(∂y/∂qj) + (∂z/∂qi)(∂z/∂qj) = (∂r/∂qi) · (∂r/∂qj)    (2.6)

are scalar products of the tangent vectors ∂r/∂qi to the curves r for qj = const., j ≠ i. These coefficient functions gij, which we now proceed to investigate, may be viewed as specifying the nature of the coordinate system (q1, q2, q3). Collectively these coefficients are referred to as the metric and in Section 2.10 will be shown to form a second-rank symmetric tensor.1 In general relativity the metric components are determined by the properties of matter; that is, the gij are solutions of Einstein's field equations with the energy–momentum tensor as driving term; this may be articulated as "geometry is merged with physics."

As usual, we limit ourselves to orthogonal (mutually perpendicular surfaces) coordinate systems, which means (see Exercise 2.1.1)2

gij = 0,    i ≠ j,    (2.7)

and q̂i · q̂j = δij. (Nonorthogonal coordinate systems are considered in some detail in Sections 2.10 and 2.11 in the framework of tensor analysis.) Now, to simplify the notation, we write gii = hi² > 0, so

ds² = (h1 dq1)² + (h2 dq2)² + (h3 dq3)² = Σi (hi dqi)².    (2.8)

1 The tensor nature of the set of gij's follows from the quotient rule (Section 2.8). Then the tensor transformation law yields Eq. (2.5).
2 In relativistic cosmology the nondiagonal elements of the metric gij are usually set equal to zero as a consequence of physical assumptions such as no rotation, as for dϕ dt, dθ dt.

The specific orthogonal coordinate systems are described in subsequent sections by specifying these (positive) scale factors h1, h2, and h3. Conversely, the scale factors may be conveniently identified by the relation

∂r/∂qi = hi q̂i,    dsi = hi dqi,    (2.9)

for any given dqi, holding all other q constant. Here, dsi is a differential length along the direction q̂i.
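The metric coefficients gij = (∂r/∂qi) · (∂r/∂qj) lend themselves to a quick numerical sanity check (not part of the text): computing them by finite differences for circular cylindrical coordinates should give a diagonal metric with scale factors hρ = 1, hϕ = ρ, hz = 1. The helper names, sample point, and step size below are illustrative choices.

```python
import math

# Finite-difference check of Eqs. (2.6)-(2.8) for cylindrical coordinates:
# g_ij = (dr/dq_i) . (dr/dq_j) should come out as diag(1, rho^2, 1).

def r_vec(rho, phi, z):                      # Eq. (2.1) for this system
    return (rho * math.cos(phi), rho * math.sin(phi), z)

def tangent(i, q, step=1e-6):                # dr/dq_i by central difference
    qp, qm = list(q), list(q)
    qp[i] += step
    qm[i] -= step
    return [(a - b) / (2 * step) for a, b in zip(r_vec(*qp), r_vec(*qm))]

q = (2.0, 0.7, -1.3)                         # arbitrary sample point (rho, phi, z)
g = [[sum(a * b for a, b in zip(tangent(i, q), tangent(j, q)))
      for j in range(3)] for i in range(3)]

h_rho, h_phi, h_z = (math.sqrt(g[k][k]) for k in range(3))
print(h_rho, h_phi, h_z)                     # ≈ 1, rho = 2, 1; off-diagonals ≈ 0
```

The off-diagonal entries vanish to rounding error, which is exactly the orthogonality condition of Eq. (2.7).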
Note that the three curvilinear coordinates q1 , q2 , q3 need not be lengths. The scale factors hi may depend on q and they may have dimensions. The product hi dqi must have a dimension of length. The differential distance vector dr may be written  dr = h1 dq1 qˆ 1 + h2 dq2 qˆ 2 + h3 dq3 qˆ 3 = hi dqi qˆ i . i Using this curvilinear component form, we find that a line integral becomes V · dr =  Vi hi dqi . i From Eqs. (2.9) we may immediately develop the area and volume elements dσij = dsi dsj = hi hj dqi dqj (2.10) dτ = ds1 ds2 ds3 = h1 h2 h3 dq1 dq2 dq3 . (2.11) and The expressions in Eqs. (2.10) and (2.11) agree, of course, with the results of using the transformation equations, Eq. (2.1), and Jacobians (described shortly; see also Exercise 2.1.5). From Eq. (2.10) an area element may be expanded: dσ = ds2 ds3 qˆ 1 + ds3 ds1 qˆ 2 + ds1 ds2 qˆ 3 = h2 h3 dq2 dq3 qˆ 1 + h3 h1 dq3 dq1 qˆ 2 + h1 h2 dq1 dq2 qˆ 3 . A surface integral becomes V · dσ = + V1 h2 h3 dq2 dq3 + V2 h3 h1 dq3 dq1 V3 h1 h2 dq1 dq2 . (Examples of such line and surface integrals appear in Sections 2.4 and 2.5.) 2.1 Orthogonal Coordinates in R3 107 In anticipation of the new forms of equations for vector calculus that appear in the next section, let us emphasize that vector algebra is the same in orthogonal curvilinear coordinates as in Cartesian coordinates. Specifically, for the dot product, A·B= =  ik  i Ai qˆ i · qˆ k Bk =  Ai Bk δik ik Ai Bi = A1 B1 + A2 B2 + A3 B3 , (2.12) where the subscripts indicate curvilinear components. For the cross product,   qˆ 1  A × B =  A1  B1  qˆ 3  A3  , B3  qˆ 2 A2 B2 (2.13) as in Eq. (1.40). Previously, we specialized to locally rectangular coordinates that are adapted to special symmetries. Let us now briefly look at the more general case, where the coordinates are not necessarily orthogonal. Surface and volume elements are part of multiple integrals, which are common in physical applications, such as center of mass determinations and moments of inertia. 
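The curvilinear line-integral formula above, ∫ V · dr = Σi ∫ Vi hi dqi, can be illustrated with a short numerical sketch (not from the text): for V = ϕ̂ ρ on the circle ρ = R the formula gives ∮ Vϕ hϕ dϕ = 2πR², and a Cartesian Riemann sum over the same loop agrees. The names and grid size are illustrative.

```python
import math

# Check of the curvilinear line integral for V = phi_hat * rho on rho = R:
# with h_phi = rho, the formula gives  ∮ V_phi h_phi dphi = 2*pi*R^2.
# We compare against a Cartesian midpoint-rule sum of the same field,
# which in Cartesian components is V = (-y, x, 0).

R, N = 1.5, 10_000
curvilinear = 2 * math.pi * R * R            # V_phi * h_phi integrated over phi

cartesian = 0.0
for k in range(N):                           # midpoint rule around the circle
    t = 2 * math.pi * (k + 0.5) / N
    x, y = R * math.cos(t), R * math.sin(t)
    dx = -R * math.sin(t) * (2 * math.pi / N)
    dy = R * math.cos(t) * (2 * math.pi / N)
    cartesian += -y * dx + x * dy            # V . dr in Cartesian components

print(curvilinear, cartesian)                # both ≈ 2*pi*R^2 ≈ 14.137
```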
Typically, we choose coordinates according to the symmetry of the particular problem. In Chapter 1 we used Gauss’ theorem to transform a volume integral into a surface integral and Stokes’ theorem to transform a surface integral into a line integral. For orthogonal coordinates, the surface and volume elements are simply products of the line elements hi dqi (see Eqs. (2.10) and (2.11)). For the general case, we use the geometric meaning of ∂r/∂qi in Eq. (2.5) as tangent vectors. We start with the Cartesian surface element dx dy, which becomes an infinitesimal rectangle in the new coordinates q1 , q2 formed by the two incremental vectors dr1 = r(q1 + dq1 , q2 ) − r(q1 , q2 ) = ∂r dq1 , ∂q1 dr2 = r(q1 , q2 + dq2 ) − r(q1 , q2 ) = ∂r dq2 , ∂q2 whose area is the z-component of their cross product, or   ∂x ∂y ∂x ∂y  dq1 dq2 dx dy = dr1 × dr2 z = − ∂q1 ∂q2 ∂q2 ∂q1   ∂x ∂x    ∂q1 ∂q2  =  ∂y ∂y  dq1 dq2 .  ∂q ∂q 1 (2.14) (2.15) 2 The transformation coefficient in determinant form is called the Jacobian. Similarly, the volume element dx dy dz becomes the triple scalar product of the three in∂r along the qi directions qˆi , which, according finitesimal displacement vectors dri = dqi ∂q i 108 Chapter 2 Vector Analysis in Curved Coordinates and Tensors to Section 1.5, takes on the form  ∂x  ∂q  1  ∂y dx dy dz =  ∂q  1  ∂z ∂q1 ∂x ∂q2 ∂y ∂q2 ∂z ∂q2 ∂x ∂q3 ∂y ∂q3 ∂z ∂q3      dq1 dq2 dq3 .   (2.16) Here the determinant is also called the Jacobian, and so on in higher dimensions. For orthogonal coordinates the Jacobians simplify to products of the orthogonal vectors in Eq. (2.9). It follows that they are just products of the hi ; for example, the volume Jacobian becomes h1 h2 h3 (qˆ 1 × qˆ 2 ) · qˆ 3 = h1 h2 h3 , and so on. Example 2.1.1 JACOBIANS FOR POLAR COORDINATES Let us illustrate the transformation of the Cartesian two-dimensional volume element dx dy to polar coordinates ρ, ϕ, with x = ρ cos ϕ, y = ρ sin ϕ. (See also Section 2.4.) 
Here,

dx dy = | ∂x/∂ρ  ∂x/∂ϕ |           | cos ϕ   −ρ sin ϕ |
        | ∂y/∂ρ  ∂y/∂ϕ | dρ dϕ  =  | sin ϕ    ρ cos ϕ | dρ dϕ = ρ dρ dϕ.

Similarly, in spherical coordinates (see Section 2.5) we get, from x = r sin θ cos ϕ, y = r sin θ sin ϕ, z = r cos θ, the Jacobian

    | ∂x/∂r  ∂x/∂θ  ∂x/∂ϕ |   | sin θ cos ϕ   r cos θ cos ϕ   −r sin θ sin ϕ |
J = | ∂y/∂r  ∂y/∂θ  ∂y/∂ϕ | = | sin θ sin ϕ   r cos θ sin ϕ    r sin θ cos ϕ |
    | ∂z/∂r  ∂z/∂θ  ∂z/∂ϕ |   | cos θ         −r sin θ         0             |

  = cos θ | r cos θ cos ϕ   −r sin θ sin ϕ |  +  r sin θ | sin θ cos ϕ   −r sin θ sin ϕ |
          | r cos θ sin ϕ    r sin θ cos ϕ |             | sin θ sin ϕ    r sin θ cos ϕ |

  = r²(cos²θ sin θ + sin³θ) = r² sin θ

by expanding the determinant along the third line. Hence the volume element becomes dx dy dz = r² dr sin θ dθ dϕ. The volume integral can be written as

∫ f(x, y, z) dx dy dz = ∫ f(x(r, θ, ϕ), y(r, θ, ϕ), z(r, θ, ϕ)) r² dr sin θ dθ dϕ.

In summary, we have developed the general formalism for vector analysis in orthogonal curvilinear coordinates in R3. For most applications, locally orthogonal coordinates can be chosen, for which surface and volume elements in multiple integrals are products of line elements. For the general nonorthogonal case, Jacobian determinants apply.

Exercises

2.1.1 Show that limiting our attention to orthogonal coordinate systems implies that gij = 0 for i ≠ j (Eq. (2.7)).
Hint. Construct a triangle with sides ds1, ds2, and ds3. Equation (2.9) must hold regardless of whether gij = 0. Then compare ds² from Eq. (2.5) with a calculation using the law of cosines. Show that cos θ12 = g12/√(g11 g22).

2.1.2 In the spherical polar coordinate system, q1 = r, q2 = θ, q3 = ϕ. The transformation equations corresponding to Eq. (2.1) are

x = r sin θ cos ϕ,    y = r sin θ sin ϕ,    z = r cos θ.

(a) Calculate the spherical polar coordinate scale factors: hr, hθ, and hϕ.
(b) Check your calculated scale factors by the relation dsi = hi dqi.
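The volume element of Example 2.1.1, r² dr sin θ dθ dϕ, can be spot-checked numerically: integrating it over 0 ≤ r ≤ R, 0 ≤ θ ≤ π, 0 ≤ ϕ ≤ 2π should reproduce the volume of a sphere, 4πR³/3. A minimal sketch (the grid sizes are arbitrary choices, not from the text):

```python
import math

# Midpoint-rule check of Example 2.1.1's volume element: the integral of
# r^2 sin(theta) dr dtheta dphi over the unit ball should equal 4*pi/3.

R = 1.0
nr, nt = 200, 200
vol = 0.0
for i in range(nr):
    r = R * (i + 0.5) / nr                  # midpoint in r
    for j in range(nt):
        th = math.pi * (j + 0.5) / nt       # midpoint in theta
        # weight r^2 sin(theta), cell size (R/nr)*(pi/nt), full 2*pi in phi
        vol += r * r * math.sin(th) * (R / nr) * (math.pi / nt) * 2 * math.pi

print(vol, 4 * math.pi / 3)                 # both ≈ 4.18879
```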
2.1.3 The u-, v-, z-coordinate system frequently used in electrostatics and in hydrodynamics is defined by

xy = u,    x² − y² = v,    z = z.

This u-, v-, z-system is orthogonal.
(a) In words, describe briefly the nature of each of the three families of coordinate surfaces.
(b) Sketch the system in the xy-plane showing the intersections of surfaces of constant u and surfaces of constant v with the xy-plane.
(c) Indicate the directions of the unit vectors û and v̂ in all four quadrants.
(d) Finally, is this u-, v-, z-system right-handed (û × v̂ = +ẑ) or left-handed (û × v̂ = −ẑ)?

2.1.4 The elliptic cylindrical coordinate system consists of three families of surfaces:

1) x²/(a² cosh²u) + y²/(a² sinh²u) = 1;    2) x²/(a² cos²v) − y²/(a² sin²v) = 1;    3) z = z.

Sketch the coordinate surfaces u = constant and v = constant as they intersect the first quadrant of the xy-plane. Show the unit vectors û and v̂. The range of u is 0 ≤ u < ∞. The range of v is 0 ≤ v ≤ 2π.

2.1.5 A two-dimensional orthogonal system is described by the coordinates q1 and q2. Show that the Jacobian

J(x, y / q1, q2) ≡ ∂(x, y)/∂(q1, q2) ≡ (∂x/∂q1)(∂y/∂q2) − (∂x/∂q2)(∂y/∂q1) = h1 h2

is in agreement with Eq. (2.10).
Hint. It's easier to work with the square of each side of this equation.

2.1.6 In Minkowski space we define x1 = x, x2 = y, x3 = z, and x0 = ct. This is done so that the metric interval becomes ds² = dx0² − dx1² − dx2² − dx3² (with c = velocity of light). Show that the metric in Minkowski space is

        | 1   0   0   0 |
(gij) = | 0  −1   0   0 |
        | 0   0  −1   0 |
        | 0   0   0  −1 |

We use Minkowski space in Sections 4.5 and 4.6 for describing Lorentz transformations.

2.2 DIFFERENTIAL VECTOR OPERATORS

We return to our restriction to orthogonal coordinate systems.
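In the same spirit as Exercise 2.1.5, the Jacobian determinant of Eq. (2.16) can be checked numerically for spherical polar coordinates, where the product of scale factors predicts det(∂(x, y, z)/∂(r, θ, ϕ)) = r² sin θ. A finite-difference sketch (the helper names and sample point are illustrative, not from the text):

```python
import math

# Numerical check of Eq. (2.16): the 3x3 Jacobian determinant of the
# spherical-polar transformation should equal h_r * h_theta * h_phi
# = r^2 sin(theta).

def xyz(r, th, ph):
    return (r * math.sin(th) * math.cos(ph),
            r * math.sin(th) * math.sin(ph),
            r * math.cos(th))

def jacobian(q, step=1e-6):
    cols = []
    for i in range(3):                       # column i = d(x,y,z)/dq_i
        qp, qm = list(q), list(q)
        qp[i] += step
        qm[i] -= step
        cols.append([(a - b) / (2 * step) for a, b in zip(xyz(*qp), xyz(*qm))])
    m = list(zip(*cols))                     # m[row][col] of the Jacobian matrix
    return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
            - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
            + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))

r, th, ph = 2.0, 0.8, 1.1                    # arbitrary sample point
J = jacobian((r, th, ph))
print(J, r * r * math.sin(th))               # both ≈ 2.869
```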
Gradient The starting point for developing the gradient, divergence, and curl operators in curvilinear coordinates is the geometric interpretation of the gradient as the vector having the magnitude and direction of the maximum space rate of change (compare Section 1.6). From this interpretation the component of ∇ψ(q1 , q2 , q3 ) in the direction normal to the family of surfaces q1 = constant is given by3 qˆ 1 · ∇ψ = ∇ψ|1 = ∂ψ 1 ∂ψ = , ∂s1 h1 ∂q1 (2.17) since this is the rate of change of ψ for varying q1 , holding q2 and q3 fixed. The quantity ds1 is a differential length in the direction of increasing q1 (compare Eqs. (2.9)). In Section 2.1 we introduced a unit vector qˆ 1 to indicate this direction. By repeating Eq. (2.17) for q2 and again for q3 and adding vectorially, we see that the gradient becomes ∇ψ(q1 , q2 , q3 ) = qˆ 1 ∂ψ ∂ψ ∂ψ + qˆ 2 + qˆ 3 ∂s1 ∂s2 ∂s3 1 ∂ψ 1 ∂ψ 1 ∂ψ + qˆ 2 + qˆ 3 h1 ∂q1 h2 ∂q2 h3 ∂q3  1 ∂ψ . = qˆ i hi ∂qi = qˆ 1 (2.18) i Exercise 2.2.4 offers a mathematical alternative independent of this physical interpretation of the gradient. The total variation of a function,  1 ∂ψ  ∂ψ dψ = ∇ψ · dr = dsi = dqi hi ∂qi ∂qi i i is consistent with Eq. (2.18), of course. 3 Here the use of ϕ to label a function is avoided because it is conventional to use this symbol to denote an azimuthal coordinate. 2.2 Differential Vector Operators 111 Divergence The divergence operator may be obtained from the second definition (Eq. (1.98)) of Chapter 1 or equivalently from Gauss’ theorem, Section 1.11. Let us use Eq. (1.98), V · dσ ∇ · V(q1 , q2 , q3 ) = lim , (2.19) dτ dτ →0 with a differential volume h1 h2 h3 dq1 dq2 dq3 (Fig. 2.1). Note that the positive directions have been chosen so that (qˆ 1 , qˆ 2 , qˆ 3 ) form a right-handed set, qˆ 1 × qˆ 2 = qˆ 3 . 
The difference of area integrals for the two faces q1 = constant is given by  ∂ V1 h2 h3 + (V1 h2 h3 ) dq1 dq2 dq3 − V1 h2 h3 dq2 dq3 ∂q1 = ∂ (V1 h2 h3 ) dq1 dq2 dq3 , ∂q1 (2.20) exactly as in Sections 1.7 and 1.10.4 Here, Vi = V · qˆ i is the projection of V onto the qˆ i -direction. Adding in the similar results for the other two pairs of surfaces, we obtain V(q1 , q2 , q3 ) · dσ =  ∂ ∂ ∂ (V1 h2 h3 ) + (V2 h3 h1 ) + (V3 h1 h2 ) dq1 dq2 dq3 . ∂q1 ∂q2 ∂q3 FIGURE 2.1 Curvilinear volume element. 4 Since we take the limit dq , dq , dq → 0, the second- and higher-order derivatives will drop out. 1 2 3 112 Chapter 2 Vector Analysis in Curved Coordinates and Tensors Now, using Eq. (2.19), division by our differential volume yields ∇ · V(q1 , q2 , q3 ) =  ∂ 1 ∂ ∂ (V1 h2 h3 ) + (V2 h3 h1 ) + (V3 h1 h2 ) . h1 h2 h3 ∂q1 ∂q2 ∂q3 (2.21) We may obtain the Laplacian by combining Eqs. (2.18) and (2.21), using V = ∇ψ(q1 , q2 , q3 ). This leads to ∇ · ∇ψ(q1 , q2 , q3 )       ∂ h3 h1 ∂ψ ∂ h1 h2 ∂ψ ∂ h2 h3 ∂ψ 1 + + . = h1 h2 h3 ∂q1 h1 ∂q1 ∂q2 h2 ∂q2 ∂q3 h3 ∂q3 (2.22) Curl Finally, to develop ∇ × V, let us apply Stokes’ theorem (Section 1.12) and, as with the divergence, take the limit as the surface area becomes vanishingly small. Working on one component at a time, we consider a differential surface element in the curvilinear surface q1 = constant. From ∇ × V · dσ = qˆ 1 · (∇ × V)h2 h3 dq2 dq3 (2.23) s (mean value theorem of integral calculus), Stokes’ theorem yields  qˆ 1 · (∇ × V)h2 h3 dq2 dq3 = V · dr, (2.24) with the line integral lying in the surface q1 = constant. Following the loop (1, 2, 3, 4) of Fig. 2.2,   ∂ (V3 h3 ) dq2 dq3 V(q1 , q2 , q3 ) · dr = V2 h2 dq2 + V3 h3 + ∂q2  ∂ − V2 h2 + (V2 h2 )dq3 dq2 − V3 h3 dq3 ∂q3  ∂ ∂ = (h3 V3 ) − (h2 V2 ) dq2 dq3 . (2.25) ∂q2 ∂q3 We pick up a positive sign when going in the positive direction on parts 1 and 2 and a negative sign on parts 3 and 4 because here we are going in the negative direction. 
(Higher-order terms in Maclaurin or Taylor expansions have been omitted. They will vanish in the limit as the surface becomes vanishingly small (dq2 → 0, dq3 → 0).) From Eq. (2.24),  ∂ ∂ 1 ∇ × V|1 = (h3 V3 ) − (h2 V2 ) . (2.26) h2 h3 ∂q2 ∂q3 2.2 Differential Vector Operators 113 FIGURE 2.2 Curvilinear surface element with q1 = constant. The remaining two components of ∇ × V may be picked up by cyclic permutation of the indices. As in Chapter 1, it is often convenient to write the curl in determinant form:   qˆ 1 h1  1  ∂ ∇×V=  h1 h2 h3  ∂q1   h1 V1 qˆ 2 h2 ∂ ∂q2 h2 V2  qˆ 3 h3   ∂  . ∂q3  h3 V3  (2.27) Remember that, because of the presence of the differential operators, this determinant must be expanded from the top down. Note that this equation is not identical with the form for the cross product of two vectors, Eq. (2.13). ∇ is not an ordinary vector; it is a vector operator. Our geometric interpretation of the gradient and the use of Gauss’ and Stokes’ theorems (or integral definitions of divergence and curl) have enabled us to obtain these quantities without having to differentiate the unit vectors qˆ i . There exist alternate ways to determine grad, div, and curl based on direct differentiation of the qˆ i . One approach resolves the qˆ i of a specific coordinate system into its Cartesian components (Exercises 2.4.1 and 2.5.1) and differentiates this Cartesian form (Exercises 2.4.3 and 2.5.2). The point here is that the derivatives of the Cartesian xˆ , yˆ , and zˆ vanish since xˆ , yˆ , and zˆ are constant in direction as well as in magnitude. A second approach [L. J. Kijewski, Am. J. Phys. 33: 816 (1965)] assumes the equality of ∂ 2 r/∂qi ∂qj and ∂ 2 r/∂qj ∂qi and develops the derivatives of qˆ i in a general curvilinear form. Exercises 2.2.3 and 2.2.4 are based on this method. 
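Before turning to the exercises, the divergence and curl formulas, Eqs. (2.21) and (2.26), can be spot-checked in circular cylindrical coordinates. For the field V = ρ̂ ρ² + ϕ̂ ρ they predict ∇ · V = (1/ρ) d(ρ · ρ²)/dρ = 3ρ and (∇ × V)|z = (1/ρ) d(ρ · ρ)/dρ = 2; a finite-difference computation on the same field written in Cartesian components agrees. This is an illustrative sketch (test point and step size arbitrary), not part of the text.

```python
import math

# Finite-difference check of Eqs. (2.21) and (2.26) for the cylindrical
# field V = rho_hat * rho^2 + phi_hat * rho (independent of z, V_z = 0).
# In Cartesian components this field is V = (rho*x - y, rho*y + x, 0).

def V(x, y):
    rho = math.hypot(x, y)
    return (rho * x - y, rho * y + x)

def d(f, x, y, axis, h=1e-6):                # central difference of (Vx, Vy)
    if axis == 0:
        a, b = f(x + h, y), f(x - h, y)
    else:
        a, b = f(x, y + h), f(x, y - h)
    return [(p - q) / (2 * h) for p, q in zip(a, b)]

x, y = 0.8, -0.5
rho = math.hypot(x, y)
div = d(V, x, y, 0)[0] + d(V, x, y, 1)[1]    # dVx/dx + dVy/dy
curl_z = d(V, x, y, 0)[1] - d(V, x, y, 1)[0] # dVy/dx - dVx/dy

print(div, 3 * rho)                          # both ≈ 2.83
print(curl_z)                                # ≈ 2
```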
Exercises 2.2.1 Develop arguments to show that dot and cross products (not involving ∇) in orthogonal curvilinear coordinates in R3 proceed, as in Cartesian coordinates, with no involvement of scale factors. 2.2.2 With qˆ 1 a unit vector in the direction of increasing q1 , show that 114 Chapter 2 Vector Analysis in Curved Coordinates and Tensors (a) (b) 1 ∂(h2 h3 ) h1 h2 h3 ∂q1  1 ∂h1 1 ∂h1 1 qˆ 2 . − qˆ 3 ∇ × qˆ 1 = h1 h3 ∂q3 h2 ∂q2 ∇ · qˆ 1 = Note that even though qˆ 1 is a unit vector, its divergence and curl do not necessarily vanish. 2.2.3 Show that the orthogonal unit vectors qˆ j may be defined by qˆ i = 1 ∂r . hi ∂qi (a) In particular, show that qˆ i · qˆ i = 1 leads to an expression for hi in agreement with Eqs. (2.9). Equation (a) may be taken as a starting point for deriving 1 ∂hj ∂ qˆ i = qˆ j , ∂qj hi ∂qi i = j and  1 ∂hi ∂ qˆ i qˆ j =− . ∂qi hj ∂qj j =i 2.2.4 Derive ∇ψ = qˆ 1 1 ∂ψ 1 ∂ψ 1 ∂ψ + qˆ 2 + qˆ 3 h1 ∂q1 h2 ∂q2 h3 ∂q3 by direct application of Eq. (1.97), ∇ψ = lim dτ →0 ψ dσ . dτ Hint. Evaluation of the surface integral will lead to terms like (h1 h2 h3 )−1 (∂/∂q1 ) × (qˆ 1 h2 h3 ). The results listed in Exercise 2.2.3 will be helpful. Cancellation of unwanted terms occurs when the contributions of all three pairs of surfaces are added together. 2.3 SPECIAL COORDINATE SYSTEMS: INTRODUCTION There are at least 11 coordinate systems in which the three-dimensional Helmholtz equation can be separated into three ordinary differential equations. Some of these coordinate systems have achieved prominence in the historical development of quantum mechanics. Other systems, such as bipolar coordinates, satisfy special needs. Partly because the needs are rather infrequent but mostly because the development of computers and efficient programming techniques reduce the need for these coordinate systems, the discussion in this chapter is limited to (1) Cartesian coordinates, (2) spherical polar coordinates, and (3) circular cylindrical coordinates. 
Specifications and details of the other coordinate systems will be found in the first two editions of this work and in Additional Readings at the end of this chapter (Morse and Feshbach, Margenau and Murphy). 2.4 Circular Cylinder Coordinates 2.4 115 CIRCULAR CYLINDER COORDINATES In the circular cylindrical coordinate system the three curvilinear coordinates (q1 , q2 , q3 ) are relabeled (ρ, ϕ, z). We are using ρ for the perpendicular distance from the z-axis and saving r for the distance from the origin. The limits on ρ, ϕ and z are 0 ρ < ∞, 0 ϕ 2π, and − ∞ < z < ∞. For ρ = 0, ϕ is not well defined. The coordinate surfaces, shown in Fig. 2.3, are: 1. Right circular cylinders having the z-axis as a common axis, 1/2  = constant. ρ = x2 + y2 Half-planes through the z-axis, ϕ = tan −1   y = constant. x Planes parallel to the xy-plane, as in the Cartesian system, z = constant. FIGURE 2.3 Circular cylinder coordinates. 116 Chapter 2 Vector Analysis in Curved Coordinates and Tensors FIGURE 2.4 Circular cylindrical coordinate unit vectors. Inverting the preceding equations for ρ and ϕ (or going directly to Fig. 2.3), we obtain the transformation relations x = ρ cos ϕ, y = ρ sin ϕ, z = z. (2.28) The z-axis remains unchanged. This is essentially a two-dimensional curvilinear system with a Cartesian z-axis added on to form a three-dimensional system. According to Eq. (2.5) or from the length elements dsi , the scale factors are h1 = hρ = 1, h2 = hϕ = ρ, h3 = hz = 1. (2.29) The unit vectors qˆ 1 , qˆ 2 , qˆ 3 are relabeled (ρ, ˆ ϕ, ˆ zˆ ), as in Fig. 2.4. The unit vector ρˆ is normal to the cylindrical surface, pointing in the direction of increasing radius ρ. The unit vector ϕˆ is tangential to the cylindrical surface, perpendicular to the half plane ϕ = constant and pointing in the direction of increasing azimuth angle ϕ. The third unit vector, zˆ , is the usual Cartesian unit vector. 
They are mutually orthogonal, ρˆ · ϕˆ = ϕˆ · zˆ = zˆ · ρˆ = 0, and the coordinate vector and a general vector V are expressed as r = ρρ ˆ + zˆ z, V = ρV ˆ ρ + ϕV ˆ ϕ + zˆ Vz . A differential displacement dr may be written dr = ρˆ dsρ + ϕˆ dsϕ + zˆ dz = ρˆ dρ + ϕρ ˆ dϕ + zˆ dz. Example 2.4.1 (2.30) AREA LAW FOR PLANETARY MOTION First we derive Kepler’s law in cylindrical coordinates, saying that the radius vector sweeps out equal areas in equal time, from angular momentum conservation. 2.4 Circular Cylinder Coordinates 117 We consider the sun at the origin as a source of the central gravitational force F = f (r)ˆr. Then the orbital angular momentum L = mr × v of a planet of mass m and velocity v is conserved, because the torque dL dr dr dv f (r) =m × +r×m =r×F= r × r = 0. dt dt dt dt r Hence L = const. Now we can choose the z-axis to lie along the direction of the orbital angular momentum vector, L = Lˆz, and work in cylindrical coordinates r = (ρ, ϕ, z) = ρ ρˆ with z = 0. The planet moves in the xy-plane because r and v are perpendicular to L. Thus, we expand its velocity as follows: v= dr d ρˆ = ρ˙ ρˆ + ρ . dt dt From ρˆ = (cos ϕ, sin ϕ), ∂ ρˆ = (− sin ϕ, cos ϕ) = ϕ, ˆ dϕ d ρˆ dϕ we find that ddtρˆ = dϕ ˆ using the chain rule, so v = ρ˙ ρˆ + ρ ddtρˆ = ρ˙ ρˆ + ρ ϕ˙ ϕ. ˆ When dt = ϕ˙ ϕ we substitute the expansions of ρˆ and v in polar coordinates, we obtain L = mρ × v = mρ(ρ ϕ)( ˙ ρˆ × ϕ) ˆ = mρ 2 ϕ˙ zˆ = constant. The triangular area swept by the radius vector ρ in the time dt (area law), when integrated over one revolution, is given by 1 L 1 Lτ A= , (2.31) ρ(ρ dϕ) = ρ 2 ϕ˙ dt = dt = 2 2 2m 2m if we substitute mρ 2 ϕ˙ = L = const. Here τ is the period, that is, the time for one revolution of the planet in its orbit. Kepler’s first law says that the orbit is an ellipse. Now we derive the orbit equation ρ(ϕ) of the ellipse in polar coordinates, where in Fig. 2.5 the sun is at one focus, which is the origin of our cylindrical coordinates. 
From the geometrical construction of the ellipse we know that ρ ′ + ρ = 2a, where a is the major half-axis; we shall show that this is equivalent to the conventional form of the ellipse equation. The distance between both foci is 0 < 2aǫ < 2a, where 0 < ǫ < 1 is called the eccentricity of the ellipse. For a circle ǫ = 0 because both foci coincide with the center. There is an angle, as shown in Fig. 2.5, where the distances ρ ′ = ρ = a are equal, and Pythagoras’ theorem applied to this right triangle FIGURE 2.5 Ellipse in polar coordinates. 118 Chapter 2 Vector Analysis in Curved Coordinates and Tensors √ gives b2 + a 2 ǫ 2 = a 2 . As a result, 1 − ǫ 2 = b/a is the ratio of the minor half-axis (b) to the major half-axis, a. Now consider the triangle with the sides labeled by ρ ′ , ρ, 2aǫ in Fig. 2.5 and angle opposite ρ ′ equal to π − ϕ. Then, applying the law of cosines, gives ρ ′ 2 = ρ 2 + 4a 2 ǫ 2 + 4ρaǫ cos ϕ. Now substituting ρ ′ = 2a − ρ, canceling ρ 2 on both sides and dividing by 4a yields  ρ(1 + ǫ cos ϕ) = a 1 − ǫ 2 ≡ p, (2.32) the Kepler orbit equation in polar coordinates. Alternatively, we revert to Cartesian coordinates to find, from Eq. (2.32) with x = ρ cos ϕ, that ρ 2 = x 2 + y 2 = (p − xǫ)2 = p 2 + x 2 ǫ 2 − 2pxǫ, so the familiar ellipse equation in Cartesian coordinates, 2   pǫ p2 ǫ 2 p2 1 − ǫ2 x + + y 2 = p2 + = , 2 2 1−ǫ 1−ǫ 1 − ǫ2 obtains. If we compare this result with the standard form of the ellipse, (x − x0 )2 y 2 + 2 = 1, a2 b we confirm that p b= √ = a 1 − ǫ2, 1 − ǫ2 a= p , 1 − ǫ2 and that the distance x0 between the center and focus is aǫ, as shown in Fig. 2.5.  The differential operations involving ∇ follow from Eqs. (2.18), (2.21), (2.22), and (2.27): ∇ψ(ρ, ϕ, z) = ρˆ 1 ∂ψ ∂ψ ∂ψ + ϕˆ + zˆ , ∂ρ ρ ∂ϕ ∂z 1 ∂ 1 ∂Vϕ ∂Vz (ρVρ ) + + , ρ ∂ρ ρ ∂ϕ ∂z   ∂ψ 1 ∂ 2ψ ∂ 2ψ 1 ∂ 2 ρ + 2 + 2, ∇ ψ= ρ ∂ρ ∂ρ ρ ∂ϕ 2 ∂z    ρˆ ρ ϕˆ zˆ     1 ∂ ∂  . 
∇×V=  ∂ ρ  ∂ρ ∂ϕ ∂z     Vρ ρVϕ Vz  ∇·V= (2.33) (2.34) (2.35) (2.36) 2.4 Circular Cylinder Coordinates 119 Finally, for problems such as circular wave guides and cylindrical cavity resonators the vector Laplacian ∇ 2 V resolved in circular cylindrical coordinates is ∇ 2 V|ρ = ∇ 2 Vρ − 1 2 ∂Vϕ , Vρ − 2 2 ρ ρ ∂ϕ ∇ 2 V|ϕ = ∇ 2 Vϕ − 1 2 ∂Vρ , Vϕ + 2 ρ2 ρ ∂ϕ (2.37) ∇ 2 V|z = ∇ 2 Vz , which follow from Eq. (1.85). The basic reason for this particular form of the z-component is that the z-axis is a Cartesian axis; that is, ∇ 2 (ρV ˆ ρ + ϕV ˆ ϕ + zˆ Vz ) = ∇ 2 (ρV ˆ ρ + ϕV ˆ ϕ ) + zˆ ∇ 2 Vz ˆ ∇ 2 Vz . = ρf ˆ (Vρ , Vϕ ) + ϕg(V ˆ ρ , Vϕ ) + z Finally, the operator ∇ 2 operating on the ρ, ˆ ϕˆ unit vectors stays in the ρˆ ϕ-plane. ˆ Example 2.4.2 A NAVIER–STOKES TERM The Navier–Stokes equations of hydrodynamics contain a nonlinear term  ∇ × v × (∇ × v) , where v is the fluid velocity. For fluid flowing through a cylindrical pipe in the z-direction, v = zˆ v(ρ). From Eq. (2.36), Finally,    ρˆ ρ ϕˆ zˆ    1  ∂ ∂  = −ϕˆ ∂v ∇×v=  ∂ ρ  ∂ρ ∂ϕ ∂ρ ∂z    0  0 v(ρ)    ρˆ ϕˆ zˆ    0 0 v ∂v  = ρv(ρ) . v × (∇ × v) =   ˆ ∂ρ ∂v   0 0 − ∂ρ   ρˆ    1 ∂ ∇ × v × (∇ × v) =  ∂ρ ρ  ∂v v ∂ρ ρ ϕˆ ∂ ∂ϕ 0 so, for this particular case, the nonlinear term vanishes.  zˆ   ∂   ∂z  = 0,   0   120 Chapter 2 Vector Analysis in Curved Coordinates and Tensors Exercises 2.4.1 Resolve the circular cylindrical unit vectors into their Cartesian components (Fig. 2.6). ANS. ρˆ = xˆ cos ϕ + yˆ sin ϕ, ϕˆ = −ˆx sin ϕ + yˆ cos ϕ, zˆ = zˆ . 2.4.2 Resolve the Cartesian unit vectors into their circular cylindrical components (Fig. 2.6). ANS. xˆ = ρˆ cos ϕ − ϕˆ sin ϕ, yˆ = ρˆ sin ϕ + ϕˆ cos ϕ, zˆ = zˆ . 2.4.3 From the results of Exercise 2.4.1 show that ∂ ρˆ = ϕ, ˆ ∂ϕ ∂ ϕˆ = −ρˆ ∂ϕ and that all other first derivatives of the circular cylindrical unit vectors with respect to the circular cylindrical coordinates vanish. 2.4.4 Compare ∇ · V (Eq. (2.34)) with the gradient operator ∇ = ρˆ ∂ 1 ∂ ∂ + ϕˆ + zˆ ∂ρ ρ ∂ϕ ∂z (Eq. 
(2.33)) dotted into V. Note that the differential operators of ∇ differentiate both the unit vectors and the components of V. ∂ Hint. ϕ(1/ρ)(∂/∂ϕ) ˆ · ρV ˆ ρ becomes ϕˆ · ρ1 ∂ϕ (ρV ˆ ρ ) and does not vanish. 2.4.5 (a) Show that r = ρρ ˆ + zˆ z. FIGURE 2.6 Plane polar coordinates. 2.4 Circular Cylinder Coordinates (b) Working entirely in circular cylindrical coordinates, show that ∇·r=3 2.4.6 (a) 2.4.7 and ∇ × r = 0. Show that the parity operation (reflection through the origin) on a point (ρ, ϕ, z) relative to fixed x-, y-, z-axes consists of the transformation ρ → ρ, (b) 121 ϕ → ϕ ± π, z → −z. Show that ρˆ and ϕˆ have odd parity (reversal of direction) and that zˆ has even parity. Note. The Cartesian unit vectors xˆ , yˆ , and zˆ remain constant. A rigid body is rotating about a fixed axis with a constant angular velocity ω. Take ω to lie along the z-axis. Express the position vector r in circular cylindrical coordinates and using circular cylindrical coordinates, (a) calculate v = ω × r, (b) calculate ∇ × v. ANS. (a) v = ϕωρ, ˆ (b) ∇ × v = 2ω. 2.4.8 Find the circular cylindrical components of the velocity and acceleration of a moving particle, vρ = ρ, ˙ vϕ = ρ ϕ, ˙ vz = z˙ , aρ = ρ¨ − ρ ϕ˙ 2 , aϕ = ρ ϕ¨ + 2ρ˙ ϕ, ˙ az = z¨ . Hint. r(t) = ρ(t)ρ(t) ˆ + zˆ z(t)  = xˆ cos ϕ(t) + yˆ sin ϕ(t) ρ(t) + zˆ z(t). Note. ρ˙ = dρ/dt, ρ¨ = d 2 ρ/dt 2 , and so on. 2.4.9 Solve Laplace’s equation, ∇ 2 ψ = 0, in cylindrical coordinates for ψ = ψ(ρ). ANS. ψ = k ln 2.4.10 ρ . ρ0 In right circular cylindrical coordinates a particular vector function is given by ˆ ϕ (ρ, ϕ). V(ρ, ϕ) = ρV ˆ ρ (ρ, ϕ) + ϕV Show that ∇ × V has only a z-component. Note that this result will hold for any vector confined to a surface q3 = constant as long as the products h1 V1 and h2 V2 are each independent of q3 . 2.4.11 For the flow of an incompressible viscous fluid the Navier–Stokes equations lead to  η −∇ × v × (∇ × v) = ∇ 2 (∇ × v). ρ0 Here η is the viscosity and ρ0 is the density of the fluid. 
For axial flow in a cylindrical pipe we take the velocity v to be v = zˆ v(ρ). 122 Chapter 2 Vector Analysis in Curved Coordinates and Tensors From Example 2.4.2, for this choice of v. Show that  ∇ × v × (∇ × v) = 0 ∇ 2 (∇ × v) = 0 leads to the differential equation  2  1 d 1 dv d v ρ 2 − 2 =0 ρ dρ dρ ρ dρ and that this is satisfied by v = v 0 + a2 ρ 2 . 2.4.12 A conducting wire along the z-axis carries a current I . The resulting magnetic vector potential is given by   1 µI . ln A = zˆ 2π ρ Show that the magnetic induction B is given by B = ϕˆ 2.4.13 µI . 2πρ A force is described by F = −ˆx (a) x y + yˆ 2 . x2 + y2 x + y2 Express F in circular cylindrical coordinates. Operating entirely in circular cylindrical coordinates for (b) and (c), (b) calculate the curl of F and (c) calculate the work done by F in travers the unit circle once counterclockwise. (d) How do you reconcile the results of (b) and (c)? 2.4.14 A transverse electromagnetic wave (TEM) in a coaxial waveguide has an electric field E = E(ρ, ϕ)ei(kz−ωt) and a magnetic induction field of B = B(ρ, ϕ)ei(kz−ωt) . Since the wave is transverse, neither E nor B has a z component. The two fields satisfy the vector Laplacian equation ∇ 2 E(ρ, ϕ) = 0 ∇ 2 B(ρ, ϕ) = 0. (a) Show that E = ρE ˆ 0 (a/ρ)ei(kz−ωt) and B = ϕB ˆ 0 (a/ρ)ei(kz−ωt) are solutions. Here a is the radius of the inner conductor and E0 and B0 are constant amplitudes. 2.5 Spherical Polar Coordinates (b) 123 Assuming a vacuum inside the waveguide, verify that Maxwell’s equations are satisfied with B0 /E0 = k/ω = µ0 ε0 (ω/k) = 1/c. 2.4.15 A calculation of the magnetohydrodynamic pinch effect involves the evaluation of (B · ∇)B. If the magnetic induction B is taken to be B = ϕB ˆ ϕ (ρ), show that (B · ∇)B = −ρB ˆ ϕ2 /ρ. 2.4.16 The linear velocity of particles in a rigid body rotating with angular velocity ω is given by Integrate 2.4.17 2.5  v = ϕρω. ˆ v · dλ around a circle in the xy-plane and verify that  v · dλ = ∇ × v|z . 
2.4.17 A proton of mass m, charge +e, and (asymptotic) momentum p = mv is incident on a nucleus of charge +Ze at an impact parameter b. Determine the proton's distance of closest approach.

2.5 SPHERICAL POLAR COORDINATES

Relabeling (q1, q2, q3) as (r, θ, φ), we see that the spherical polar coordinate system consists of the following:

1. Concentric spheres centered at the origin,

       r = (x² + y² + z²)^(1/2) = constant.

2. Right circular cones centered on the z-(polar) axis, vertices at the origin,

       θ = arccos[z/(x² + y² + z²)^(1/2)] = constant.

3. Half-planes through the z-(polar) axis,

       φ = arctan(y/x) = constant.

By our arbitrary choice of definitions of θ, the polar angle, and φ, the azimuth angle, the z-axis is singled out for special treatment. The transformation equations corresponding to Eq. (2.1) are

    x = r sin θ cos φ,    y = r sin θ sin φ,    z = r cos θ,        (2.38)

measuring θ from the positive z-axis and φ in the xy-plane from the positive x-axis. The ranges of values are 0 ≤ r < ∞, 0 ≤ θ ≤ π, and 0 ≤ φ ≤ 2π. At r = 0, θ and φ are undefined. From differentiation of Eq. (2.38),

    h1 = h_r = 1,    h2 = h_θ = r,    h3 = h_φ = r sin θ.          (2.39)

This gives a line element

    dr = r̂ dr + θ̂ r dθ + φ̂ r sin θ dφ,

so

    ds² = dr · dr = dr² + r² dθ² + r² sin²θ dφ²,

the coordinates being obviously orthogonal.

FIGURE 2.7 Spherical polar coordinate area elements.

In this spherical coordinate system the area element (for r = constant) is

    dA = dσ_θφ = r² sin θ dθ dφ,                                   (2.40)

the light, unshaded area in Fig. 2.7. Integrating over the azimuth φ, we find that the area element becomes a ring of width dθ,

    dA_θ = 2πr² sin θ dθ.                                          (2.41)

This form will appear repeatedly in problems in spherical polar coordinates with azimuthal symmetry, such as the scattering of an unpolarized beam of particles. By definition of solid radians, or steradians, an element of solid angle dΩ is given by

    dΩ = dA/r² = sin θ dθ dφ.                                      (2.42)

Integrating over the entire spherical surface, we obtain

    ∮ dΩ = 4π.

From Eq. (2.11) the volume element is

    dτ = r² dr sin θ dθ dφ = r² dr dΩ.                             (2.43)

FIGURE 2.8 Spherical polar coordinates.

The spherical polar coordinate unit vectors are shown in Fig. 2.8. It must be emphasized that the unit vectors r̂, θ̂, and φ̂ vary in direction as the angles θ and φ vary. Specifically, the θ and φ derivatives of these spherical polar coordinate unit vectors do not vanish (Exercise 2.5.2). When differentiating vectors in spherical polar (or in any non-Cartesian system), this variation of the unit vectors with position must not be neglected. In terms of the fixed-direction Cartesian unit vectors x̂, ŷ, and ẑ (cp. Eq. (2.38)),

    r̂ = x̂ sin θ cos φ + ŷ sin θ sin φ + ẑ cos θ,
    θ̂ = x̂ cos θ cos φ + ŷ cos θ sin φ − ẑ sin θ = ∂r̂/∂θ,           (2.44)
    φ̂ = −x̂ sin φ + ŷ cos φ = (1/sin θ) ∂r̂/∂φ,

which follow from

    0 = ∂(r̂ · r̂)/∂θ = 2 r̂ · ∂r̂/∂θ,    0 = ∂(r̂ · r̂)/∂φ = 2 r̂ · ∂r̂/∂φ.

Note that Exercise 2.5.5 gives the inverse transformation and that a given vector can now be expressed in a number of different (but equivalent) ways. For instance, the position vector r may be written

    r = r̂ r = r̂ (x² + y² + z²)^(1/2)
      = x̂ x + ŷ y + ẑ z
      = x̂ r sin θ cos φ + ŷ r sin θ sin φ + ẑ r cos θ.             (2.45)

Select the form that is most useful for your particular problem. From Section 2.2, relabeling the curvilinear coordinate unit vectors q̂1, q̂2, and q̂3 as r̂, θ̂, and φ̂ gives

    ∇ψ = r̂ ∂ψ/∂r + θ̂ (1/r) ∂ψ/∂θ + φ̂ (1/(r sin θ)) ∂ψ/∂φ,        (2.46)

    ∇ · V = (1/(r² sin θ)) [ sin θ ∂(r² V_r)/∂r + r ∂(sin θ V_θ)/∂θ + r ∂V_φ/∂φ ],   (2.47)

    ∇ · ∇ψ = (1/(r² sin θ)) [ sin θ ∂/∂r (r² ∂ψ/∂r) + ∂/∂θ (sin θ ∂ψ/∂θ)
             + (1/sin θ) ∂²ψ/∂φ² ],                                 (2.48)

    ∇ × V = (1/(r² sin θ)) ×
            | r̂         r θ̂        r sin θ φ̂   |
            | ∂/∂r      ∂/∂θ       ∂/∂φ        |                    (2.49)
            | V_r       r V_θ      r sin θ V_φ |

Occasionally, the vector Laplacian ∇²V is needed in spherical polar coordinates.
It is best obtained by using the vector identity (Eq. (1.85)) of Chapter 1. For reference  ∂2 ∂2 1 ∂2 2 2 ∂ cos θ ∂ 1 + + 2 2+ Vr ∇ V|r = − 2 + + r ∂r ∂r 2 r 2 sin θ ∂θ r r ∂θ r 2 sin2 θ ∂ϕ 2     2 cos θ 2 ∂ 2 ∂ Vϕ − 2 Vθ + − 2 + − 2 r ∂θ r sin θ r sin θ ∂ϕ 2  = ∇ 2 Vr − 2 cos θ 2 2 ∂Vθ 2 ∂Vϕ − 2 , Vθ − 2 Vr − 2 2 r r ∂θ r sin θ r sin θ ∂ϕ 2 cos θ ∂Vϕ 2 ∂Vr − , r 2 ∂θ r 2 sin2 θ ∂ϕ 1 2 ∂Vr 2 cos θ ∂Vθ ∇ 2 V|ϕ = ∇ 2 Vϕ − Vϕ + 2 + . 2 2 r sin θ ∂ϕ r sin θ r 2 sin2 θ ∂ϕ ∇ 2 V|θ = ∇ 2 Vθ − 1 r 2 sin2 θ Vθ + (2.50) (2.51) (2.52) These expressions for the components of ∇ 2 V are undeniably messy, but sometimes they are needed. 2.5 Spherical Polar Coordinates Example 2.5.1 127 ∇, ∇ · , ∇× FOR A CENTRAL FORCE Using Eqs. (2.46) to (2.49), we can reproduce by inspection some of the results derived in Chapter 1 by laborious application of Cartesian coordinates. From Eq. (2.46), df , dr ∇r n = rˆ nr n−1 . ∇f (r) = rˆ For the Coulomb potential V = Ze/(4πε0 r), the electric field is E = −∇V = From Eq. (2.47), 2 df ∇ · rˆ f (r) = f (r) + , r dr ∇ · rˆ r n = (n + 2)r n−1 . (2.53) Ze rˆ . 4πε0 r 2 (2.54) For r > 0 the charge density of the electric field of the Coulomb potential is ρ = ∇ · E = rˆ Ze 4πε0 ∇ · r 2 = 0 because n = −2. From Eq. (2.48), d 2f 2 df + 2, r dr dr (2.55) ∇ 2 r n = n(n + 1)r n−2 , (2.56) ∇ 2 f (r) = in contrast to the ordinary radial second derivative of r n involving n − 1 instead of n + 1. Finally, from Eq. (2.49), ∇ × rˆ f (r) = 0. (2.57)  Example 2.5.2 MAGNETIC VECTOR POTENTIAL The computation of the magnetic vector potential of a single current loop in the xy-plane uses Oersted’s law, ∇ × H = J, in conjunction with µ0 H = B = ∇ × A (see Examples 1.9.2 and 1.12.1), and involves the evaluation of  µ0 J = ∇ × ∇ × ϕA ˆ ϕ (r, θ ) . In spherical polar coordinates this reduces to    rˆ  r θˆ r sin θ ϕˆ     1  ∂  ∂ ∂ µ0 J = ∇ × 2    r sin θ  ∂r ∂θ ∂ϕ    0 0 r sin θ Aϕ (r, θ )   1 ∂ ∂ =∇× 2 rˆ (r sin θ Aϕ ) − r θˆ (r sin θ Aϕ ) . 
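The radial identities of Example 2.5.1 can be verified symbolically. Using the spherical-coordinate divergence and Laplacian specialized to purely radial dependence (a sympy sketch, with an arbitrary exponent n):

```python
import sympy as sp

r = sp.symbols('r', positive=True)
n = sp.symbols('n')

f = r**n

# Eq. (2.54): div( r_hat f(r) ) = (2/r) f + df/dr
div_rf = sp.simplify(2 * f / r + sp.diff(f, r))

# Eqs. (2.55)-(2.56): Laplacian of f(r) = f'' + (2/r) f'
lap_f = sp.simplify(sp.diff(f, r, 2) + 2 * sp.diff(f, r) / r)

# div( r_hat r^n ) = (n + 2) r^(n-1); note n = -2 gives zero (Coulomb field)
assert sp.simplify(div_rf - (n + 2) * r**(n - 1)) == 0

# Laplacian of r^n = n(n + 1) r^(n-2), with n + 1 rather than n - 1
assert sp.simplify(lap_f - n * (n + 1) * r**(n - 2)) == 0
```

Setting n = −2 in the first assertion reproduces the vanishing charge density of the Coulomb field for r > 0.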
∂r r sin θ ∂θ 128 Chapter 2 Vector Analysis in Curved Coordinates and Tensors Taking the curl a second time, we obtain    rˆ r θˆ r sin θ ϕˆ     ∂ ∂ ∂  1  . µ0 J = 2 ∂r ∂θ ∂ϕ  r sin θ  1 ∂ ∂   1  (r sin θ Aϕ ) − (r sin θ Aϕ ) 0  r sin θ ∂r r 2 sin θ ∂θ By expanding the determinant along the top row, we have   1 ∂ 1 ∂ 1 ∂2 µ0 J = −ϕˆ (rA ) + ) (sin θ A ϕ ϕ r ∂r 2 r 2 ∂θ sin θ ∂θ  1 (2.58) = −ϕˆ ∇ 2 Aϕ (r, θ ) − Aϕ (r, θ ) . 2 r sin2 θ  Exercises 2.5.1 Express the spherical polar unit vectors in Cartesian unit vectors. ANS. rˆ = xˆ sin θ cos ϕ + yˆ sin θ sin ϕ + zˆ cos θ, θˆ = xˆ cos θ cos ϕ + yˆ cos θ sin ϕ − zˆ sin θ, ϕˆ = −ˆx sin ϕ + yˆ cos ϕ. 2.5.2 From the results of Exercise 2.5.1, calculate the partial derivatives of rˆ , θˆ , and ϕˆ with respect to r, θ , and ϕ. (b) With ∇ given by (a) rˆ ∂ 1 ∂ 1 ∂ + θˆ + ϕˆ ∂r r ∂θ r sin θ ∂ϕ (greatest space rate of change), use the results of part (a) to calculate ∇ · ∇ψ. This is an alternate derivation of the Laplacian. Note. The derivatives of the left-hand ∇ operate on the unit vectors of the right-hand ∇ before the unit vectors are dotted together. 2.5.3 A rigid body is rotating about a fixed axis with a constant angular velocity ω. Take ω to be along the z-axis. Using spherical polar coordinates, (a) Calculate v = ω × r. (b) Calculate ∇ × v. ANS. (a) v = ϕωr ˆ sin θ, (b) ∇ × v = 2ω. 2.5 Spherical Polar Coordinates 2.5.4 129 The coordinate system (x, y, z) is rotated through an angle  counterclockwise about an axis defined by the unit vector n into system (x ′ , y ′ , z′ ). In terms of the new coordinates the radius vector becomes r′ = r cos  + r × n sin  + n(n · r)(1 − cos ). (a) Derive this expression from geometric considerations. (b) Show that it reduces as expected for n = zˆ . The answer, in matrix form, appears in Eq. (3.90). (c) Verify that r ′ 2 = r 2 . 
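The answers quoted for Exercise 2.5.1, and the nonvanishing derivatives asked for in Exercise 2.5.2, are easy to confirm with sympy (a sketch; the Matrix objects hold Cartesian components):

```python
import sympy as sp

theta, phi = sp.symbols('theta phi')

# Spherical unit vectors in Cartesian components (Eq. 2.44 / Exercise 2.5.1)
r_hat  = sp.Matrix([sp.sin(theta)*sp.cos(phi), sp.sin(theta)*sp.sin(phi), sp.cos(theta)])
th_hat = sp.Matrix([sp.cos(theta)*sp.cos(phi), sp.cos(theta)*sp.sin(phi), -sp.sin(theta)])
ph_hat = sp.Matrix([-sp.sin(phi), sp.cos(phi), 0])

# Orthonormality
assert sp.simplify(r_hat.dot(r_hat)) == 1
assert sp.simplify(r_hat.dot(th_hat)) == 0
assert sp.simplify(th_hat.dot(ph_hat)) == 0

# The unit vectors vary with position (Exercise 2.5.2):
# d r_hat/d theta = theta_hat and d r_hat/d phi = sin(theta) phi_hat
assert sp.simplify(sp.diff(r_hat, theta) - th_hat) == sp.zeros(3, 1)
assert sp.simplify(sp.diff(r_hat, phi) - sp.sin(theta)*ph_hat) == sp.zeros(3, 1)
```

The last two assertions are the content of Eq. (2.44): the derivatives of r̂ do not vanish, which is what makes differentiation in spherical coordinates nontrivial.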
2.5.5 Resolve the Cartesian unit vectors into their spherical polar components: xˆ = rˆ sin θ cos ϕ + θˆ cos θ cos ϕ − ϕˆ sin ϕ, yˆ = rˆ sin θ sin ϕ + θˆ cos θ sin ϕ + ϕˆ cos ϕ, zˆ = rˆ cos θ − θˆ sin θ. 2.5.6 The direction of one vector is given by the angles θ1 and ϕ1 . For a second vector the corresponding angles are θ2 and ϕ2 . Show that the cosine of the included angle γ is given by cos γ = cos θ1 cos θ2 + sin θ1 sin θ2 cos(ϕ1 − ϕ2 ). See Fig. 12.15. 2.5.7 A certain vector V has no radial component. Its curl has no tangential components. What does this imply about the radial dependence of the tangential components of V? 2.5.8 Modern physics lays great stress on the property of parity — whether a quantity remains invariant or changes sign under an inversion of the coordinate system. In Cartesian coordinates this means x → −x, y → −y, and z → −z. (a) Show that the inversion (reflection through the origin) of a point (r, θ, ϕ) relative to fixed x-, y-, z-axes consists of the transformation r → r, (b) 2.5.9 θ → π − θ, ϕ → ϕ ± π. Show that rˆ and ϕˆ have odd parity (reversal of direction) and that θˆ has even parity. With A any vector, A · ∇r = A. (a) Verify this result in Cartesian coordinates. (b) Verify this result using spherical polar coordinates. (Equation (2.46) provides ∇.) 130 Chapter 2 Vector Analysis in Curved Coordinates and Tensors 2.5.10 Find the spherical coordinate components of the velocity and acceleration of a moving particle: vr = r˙ , vθ = r θ˙ , ˙ vϕ = r sin θ ϕ, ar = r¨ − r θ˙ 2 − r sin2 θ ϕ˙ 2 , aθ = r θ¨ + 2˙r θ˙ − r sin θ cos θ ϕ˙ 2 , aϕ = r sin θ ϕ¨ + 2˙r sin θ ϕ˙ + 2r cos θ θ˙ ϕ. ˙ Hint. r(t) = rˆ (t)r(t)  = xˆ sin θ (t) cos ϕ(t) + yˆ sin θ (t) sin ϕ(t) + zˆ cos θ (t) r(t). Note. Using the Lagrangian techniques of Section 17.3, we may obtain these results somewhat more elegantly. The dot in r˙ , θ˙ , ϕ˙ means time derivative, r˙ = dr/dt, θ˙ = dθ/dt, ϕ˙ = dϕ/dt. The notation was originated by Newton. 
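The velocity components quoted in Exercise 2.5.10 follow from the hint by direct time differentiation. A sympy check of the three velocity components (a sketch; the acceleration components can be verified the same way by differentiating once more):

```python
import sympy as sp

t = sp.symbols('t')
r, th, ph = [sp.Function(s)(t) for s in ('r', 'theta', 'phi')]

# Position via the hint: r(t) = r(t) * r_hat(theta(t), phi(t))
pos = r * sp.Matrix([sp.sin(th)*sp.cos(ph), sp.sin(th)*sp.sin(ph), sp.cos(th)])
vel = sp.diff(pos, t)

# Moving-frame unit vectors along the trajectory
r_hat  = sp.Matrix([sp.sin(th)*sp.cos(ph), sp.sin(th)*sp.sin(ph), sp.cos(th)])
th_hat = sp.Matrix([sp.cos(th)*sp.cos(ph), sp.cos(th)*sp.sin(ph), -sp.sin(th)])
ph_hat = sp.Matrix([-sp.sin(ph), sp.cos(ph), 0])

v_r  = sp.simplify(vel.dot(r_hat))
v_th = sp.simplify(vel.dot(th_hat))
v_ph = sp.simplify(vel.dot(ph_hat))

assert sp.simplify(v_r - sp.diff(r, t)) == 0                     # v_r = r_dot
assert sp.simplify(v_th - r*sp.diff(th, t)) == 0                 # v_theta = r theta_dot
assert sp.simplify(v_ph - r*sp.sin(th)*sp.diff(ph, t)) == 0      # v_phi = r sin(theta) phi_dot
```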
2.5.11 A particle m moves in response to a central force according to Newton’s second law, m¨r = rˆ f (r). Show that r × r˙ = c, a constant, and that the geometric interpretation of this leads to Kepler’s second law. 2.5.12 Express ∂/∂x, ∂/∂y, ∂/∂z in spherical polar coordinates. ANS. ∂ 1 ∂ sin ϕ ∂ ∂ = sin θ cos ϕ + cos θ cos ϕ − , ∂x ∂r r ∂θ r sin θ ∂ϕ ∂ ∂ 1 ∂ cos ϕ ∂ = sin θ sin ϕ + cos θ sin ϕ + , ∂y ∂r r ∂θ r sin θ ∂ϕ ∂ ∂ 1 ∂ = cos θ − sin θ . ∂z ∂r r ∂θ Hint. Equate ∇ xyz and ∇ rθϕ . 2.5.13 From Exercise 2.5.12 show that   ∂ ∂ ∂ = −i . −y −i x ∂y ∂x ∂ϕ This is the quantum mechanical operator corresponding to the z-component of orbital angular momentum. 2.5.14 With the quantum mechanical orbital angular momentum operator defined as L = −i(r × ∇), show that   ∂ ∂ iϕ (a) Lx + iLy = e , + i cot θ ∂θ ∂ϕ (b) 2.5 Spherical Polar Coordinates   ∂ ∂ Lx − iLy = −e−iϕ . − i cot θ ∂θ ∂ϕ 131 (These are the raising and lowering operators of Section 4.3.) 2.5.15 Verify that L × L = iL in spherical polar coordinates. L = −i(r × ∇), the quantum mechanical orbital angular momentum operator. Hint. Use spherical polar coordinates for L but Cartesian components for the cross product. 2.5.16 (a) From Eq. (2.46) show that   ∂ 1 ∂ ˆ L = −i(r × ∇) = i θ . − ϕˆ sin θ ∂ϕ ∂θ (b) (c) Resolving θˆ and ϕˆ into Cartesian components, determine Lx , Ly , and Lz in terms of θ , ϕ, and their derivatives. From L2 = L2x + L2y + L2z show that   ∂ 1 ∂2 1 ∂ L2 = − sin θ − 2 sin θ ∂θ ∂θ sin θ ∂ϕ 2   ∂ ∂ r2 . = −r 2 ∇ 2 + ∂r ∂r This latter identity is useful in relating orbital angular momentum and Legendre’s differential equation, Exercise 9.3.8. 2.5.17 With L = −ir × ∇, verify the operator identities (a) (b) 2.5.18 ∂ r×L −i 2 , ∂r  r  ∂ 2 = i∇ × L. r∇ − ∇ 1 + r ∂r ∇ = rˆ Show that the following three forms (spherical coordinates) of ∇ 2 ψ(r) are equivalent: (a)  1 d 2 dψ(r) r ; dr r 2 dr (b) 1 d2  rψ(r) ; r dr 2 (c) d 2 ψ(r) 2 dψ(r) . 
+ r dr dr 2 The second form is particularly convenient in establishing a correspondence between spherical polar and Cartesian descriptions of a problem. 2.5.19 One model of the solar corona assumes that the steady-state equation of heat flow, ∇ · (k∇T ) = 0, is satisfied. Here, k, the thermal conductivity, is proportional to T 5/2 . Assuming that the temperature T is proportional to r n , show that the heat flow equation is satisfied by T = T0 (r0 /r)2/7 . 132 Chapter 2 Vector Analysis in Curved Coordinates and Tensors 2.5.20 A certain force field is given by F = rˆ P 2P cos θ + θˆ 3 sin θ, 3 r r r P /2 (in spherical polar coordinates). (a) Examine ∇ × F to see if a potential exists. (b) Calculate F · dλ for a unit circle in the plane θ = π/2. What does this indicate about the force being conservative or nonconservative? (c) If you believe that F may be described by F = −∇ψ , find ψ . Otherwise simply state that no acceptable potential exists. 2.5.21 (a) Show that A = −ϕˆ cot θ/r is a solution of ∇ × A = rˆ /r 2 . (b) Show that this spherical polar coordinate solution agrees with the solution given for Exercise 1.13.6: yz xz A = xˆ − yˆ . 2 2 2 r(x + y ) r(x + y 2 ) (c) 2.5.22 Note that the solution diverges for θ = 0, π corresponding to x, y = 0. ˆ sin θ/r is a solution. Note that although this solution Finally, show that A = −θϕ does not diverge (r = 0), it is no longer single-valued for all possible azimuth angles. A magnetic vector potential is given by A= µ0 m × r . 4π r 3 Show that this leads to the magnetic induction B of a point magnetic dipole with dipole moment m. ANS. for m = zˆ m, µ0 2m cos θ µ0 m sin θ ∇ × A = rˆ + θˆ . 3 4π 4π r 3 r Compare Eqs. (12.133) and (12.134) 2.5.23 At large distances from its source, electric dipole radiation has fields E = aE sin θ ei(kr−ωt) ˆ θ, r B = aB sin θ ei(kr−ωt) ϕ. ˆ r Show that Maxwell’s equations ∇×E=− ∂B ∂t and ∇ × B = ε0 µ0 are satisfied, if we take ω aE = = c = (ε0 µ0 )−1/2 . aB k Hint. 
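Both the equivalence of the three radial Laplacian forms (Exercise 2.5.18) and the solar-corona temperature profile (Exercise 2.5.19) can be verified in a few lines of sympy (a sketch; symbol names are ours):

```python
import sympy as sp

r = sp.symbols('r', positive=True)
psi = sp.Function('psi')(r)

# Exercise 2.5.18: the three forms of the radial Laplacian agree.
form_a = sp.diff(r**2 * sp.diff(psi, r), r) / r**2        # (1/r^2) d/dr(r^2 dpsi/dr)
form_b = sp.diff(r * psi, r, 2) / r                       # (1/r) d^2(r psi)/dr^2
form_c = sp.diff(psi, r, 2) + 2 * sp.diff(psi, r) / r     # psi'' + (2/r) psi'
assert sp.simplify(form_a - form_c) == 0
assert sp.simplify(form_b - form_c) == 0

# Exercise 2.5.19: with k ~ T**(5/2), T = T0*(r0/r)**(2/7) satisfies
# div(k grad T) = (1/r^2) d/dr( r^2 T**(5/2) dT/dr ) = 0.
T0, r0 = sp.symbols('T_0 r_0', positive=True)
T = T0 * (r0 / r)**sp.Rational(2, 7)
flux = sp.simplify(sp.diff(r**2 * T**sp.Rational(5, 2) * sp.diff(T, r), r))
assert flux == 0
```

In the corona problem the radial heat flux r² k dT/dr comes out constant, which is why its derivative vanishes identically.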
Since r is large, terms of order r −2 may be dropped. ∂E ∂t 2.6 Tensor Analysis 2.5.24 The magnetic vector potential for a uniformly charged rotating spherical shell is  4  ϕˆ µ0 a σ ω · sin θ , r >a 3 r2 A= µ aσ ω  0 ϕˆ · r cos θ, r < a. 3 (a = radius of spherical shell, σ = surface charge density, and ω = angular velocity.) Find the magnetic induction B = ∇ × A. ANS. Br (r, θ ) = 2µ0 a 4 σ ω cos θ · 3 , 3 r µ0 a 4 σ ω sin θ · 3 , 3 r 2µ0 aσ ω B = zˆ , 3 Bθ (r, θ ) = 2.5.25 r > a, r > a, r < a. Explain why ∇ 2 in plane polar coordinates follows from ∇ 2 in circular cylindrical coordinates with z = constant. (b) Explain why taking ∇ 2 in spherical polar coordinates and restricting θ to π/2 does not lead to the plane polar form of ∇. Note. (a) ∇ 2 (ρ, ϕ) = 2.6 133 1 ∂2 ∂2 1 ∂ + 2 2. + 2 ρ ∂ρ ρ ∂ϕ ∂ρ TENSOR ANALYSIS Introduction, Definitions Tensors are important in many areas of physics, including general relativity and electrodynamics. Scalars and vectors are special cases of tensors. In Chapter 1, a quantity that did not change under rotations of the coordinate system in three-dimensional space, an invariant, was labeled a scalar. A scalar is specified by one real number and is a tensor of rank 0. A quantity whose components transformed under rotations like those of the distance of a point from a chosen origin (Eq. (1.9), Section 1.2) was called a vector. The transformation of the components of the vector under a rotation of the coordinates preserves the vector as a geometric entity (such as an arrow in space), independent of the orientation of the reference frame. In three-dimensional space, a vector is specified by 3 = 31 real numbers, for example, its Cartesian components, and is a tensor of rank 1. 
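The rank-1 transformation behavior just described can be illustrated numerically. For a rotation in three-dimensional space the coefficients form an orthogonal matrix, and the components change while the vector itself (its length, its geometric identity) does not; a minimal numpy sketch, with an arbitrary rotation angle:

```python
import numpy as np

# Direction cosines a_ij for a rotation about the z-axis (passive rotation)
g = 0.7
a = np.array([[ np.cos(g), np.sin(g), 0.0],
              [-np.sin(g), np.cos(g), 0.0],
              [ 0.0,       0.0,       1.0]])

A = np.array([1.0, -2.0, 0.5])
A_prime = a @ A                      # A'_i = a_ij A_j

# The same arrow referred to rotated axes: its length is unchanged.
assert np.isclose(np.linalg.norm(A_prime), np.linalg.norm(A))

# For rotations a^{-1} = a^T, so transforming back recovers the components.
assert np.allclose(a.T @ A_prime, A)
```

The second assertion is the Cartesian special case in which the contravariant and covariant rules coincide.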
A tensor of rank n has 3n components that transform in a definite way.5 This transformation philosophy is of central importance for tensor analysis and conforms with the mathematician’s concept of vector and vector (or linear) space and the physicist’s notion that physical observables must not depend on the choice of coordinate frames. There is a physical basis for such a philosophy: We describe the physical world by mathematics, but any physical predictions we make 5 In N -dimensional space a tensor of rank n has N n components. 134 Chapter 2 Vector Analysis in Curved Coordinates and Tensors must be independent of our mathematical conventions, such as a coordinate system with its arbitrary origin and orientation of its axes. There is a possible ambiguity in the transformation law of a vector  A′i = aij Aj , (2.59) j in which aij is the cosine of the angle between the xi′ -axis and the xj -axis. If we start with a differential distance vector dr, then, taking dxi′ to be a function of the unprimed variables, dxi′ =  ∂x ′ i dxj ∂xj (2.60) ∂xi′ , ∂xj (2.61) j by partial differentiation. If we set aij = Eqs. (2.59) and (2.60) are consistent. Any set of quantities Aj transforming according to A′ i =  ∂x ′ i j A ∂xj (2.62a) j is defined as a contravariant vector, whose indices we write as superscript; this includes the Cartesian coordinate vector x i = xi from now on. However, we have already encountered a slightly different type of vector transformation. The gradient of a scalar ∇ϕ, defined by ∇ϕ = xˆ ∂ϕ ∂ϕ ∂ϕ + yˆ 2 + zˆ 3 ∂x 1 ∂x ∂x (2.63) (using x 1 , x 2 , x 3 for x, y, z), transforms as  ∂ϕ ∂x j ∂ϕ ′ = , ′ i ∂x ∂x j ∂x ′ i (2.64) j using ϕ = ϕ(x, y, z) = ϕ(x ′ , y ′ , z′ ) = ϕ ′ , ϕ defined as a scalar quantity. Notice that this differs from Eq. (2.62) in that we have ∂x j /∂x ′ i instead of ∂x ′ i /∂x j . Equation (2.64) is taken as the definition of a covariant vector, with the gradient as the prototype. The covariant analog of Eq. (2.62a) is  ∂x j Aj . 
    A′_i = Σ_j (∂x^j/∂x′^i) A_j.                                   (2.62b)

Only in Cartesian coordinates is

    ∂x^j/∂x′^i = ∂x′^i/∂x^j = a_ij,                                (2.65)

so that there is no difference between contravariant and covariant transformations. In other systems, Eq. (2.65) in general does not apply, and the distinction between contravariant and covariant is real and must be observed. This is of prime importance in the curved Riemannian space of general relativity. In the remainder of this section the components of any contravariant vector are denoted by a superscript, A^i, whereas a subscript is used for the components of a covariant vector A_i.⁶

Definition of Tensors of Rank 2

Now we proceed to define contravariant, mixed, and covariant tensors of rank 2 by the following equations for their components under coordinate transformations:

    A′^ij = Σ_kl (∂x′^i/∂x^k)(∂x′^j/∂x^l) A^kl,

    B′^i_j = Σ_kl (∂x′^i/∂x^k)(∂x^l/∂x′^j) B^k_l,                  (2.66)

    C′_ij = Σ_kl (∂x^k/∂x′^i)(∂x^l/∂x′^j) C_kl.

Clearly, the rank goes as the number of partial derivatives (or direction cosines) in the definition: 0 for a scalar, 1 for a vector, 2 for a second-rank tensor, and so on. Each index (subscript or superscript) ranges over the number of dimensions of the space. The number of indices (equal to the rank of the tensor) is independent of the dimensions of the space. We see that A^kl is contravariant with respect to both indices, C_kl is covariant with respect to both indices, and B^k_l transforms contravariantly with respect to the first index k but covariantly with respect to the second index l. Once again, if we are using Cartesian coordinates, all three forms of the tensors of second rank (contravariant, mixed, and covariant) are the same.

As with the components of a vector, the transformation laws for the components of a tensor, Eq. (2.66), yield entities (and properties) that are independent of the choice of reference frame. This is what makes tensor analysis important in physics.
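The rank-2 law of Eq. (2.66) can be exercised numerically in the Cartesian case, where all three index types transform with the same rotation coefficients; a numpy sketch using einsum to carry out the double sum:

```python
import numpy as np

g = 0.6
a = np.array([[ np.cos(g), np.sin(g), 0.0],
              [-np.sin(g), np.cos(g), 0.0],
              [ 0.0,       0.0,       1.0]])   # direction cosines a_ij

rng = np.random.default_rng(2)
A = rng.standard_normal((3, 3))

# Second-rank Cartesian transformation law, Eq. (2.66): A'_ij = a_ik a_jl A_kl
A_prime = np.einsum('ik,jl,kl->ij', a, a, A)

# Transforming back with the inverse rotation recovers the components,
# so the tensor is a frame-independent entity.
A_back = np.einsum('ki,lj,kl->ij', a, a, A_prime)
assert np.allclose(A_back, A)

# A property such as symmetry survives the transformation.
S = (A + A.T) / 2
S_prime = np.einsum('ik,jl,kl->ij', a, a, S)
assert np.allclose(S_prime, S_prime.T)
```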
The independence of reference frame (invariance) is ideal for expressing and investigating universal physical laws. The second-rank tensor A (components Akl ) may be conveniently represented by writing out its components in a square array (3 × 3 if we are in three-dimensional space):  11  A A12 A13 A =  A21 A22 A23  . (2.67) 31 32 33 A A A This does not mean that any square array of numbers or functions forms a tensor. The essential condition is that the components transform according to Eq. (2.66). 6 This means that the coordinates (x, y, z) are written (x 1 , x 2 , x 3 ) since r transforms as a contravariant vector. The ambiguity of x 2 representing both x squared and y is the price we pay. 136 Chapter 2 Vector Analysis in Curved Coordinates and Tensors In the context of matrix analysis the preceding transformation equations become (for Cartesian coordinates) an orthogonal similarity transformation; see Section 3.3. A geometrical interpretation of a second-rank tensor (the inertia tensor) is developed in Section 3.5. In summary, tensors are systems of components organized by one or more indices that transform according to specific rules under a set of transformations. The number of indices is called the rank of the tensor. If the transformations are coordinate rotations in three-dimensional space, then tensor analysis amounts to what we did in the sections on curvilinear coordinates and in Cartesian coordinates in Chapter 1. In four dimensions of Minkowski space–time, the transformations are Lorentz transformations, and tensors of rank 1 are called four-vectors. Addition and Subtraction of Tensors The addition and subtraction of tensors is defined in terms of the individual elements, just as for vectors. If A + B = C, (2.68) then Aij + B ij = C ij . Of course, A and B must be tensors of the same rank and both expressed in a space of the same number of dimensions. 
Summation Convention In tensor analysis it is customary to adopt a summation convention to put Eq. (2.66) and subsequent tensor equations in a more compact form. As long as we are distinguishing between contravariance and covariance, let us agree that when an index appears on one side of an equation, once as a superscript and once as a subscript (except for the coordinates where both are subscripts), we automatically sum over that index. Then we may write the second expression in Eq. (2.66) as B′ i j = ∂x ′ i ∂x l k B l, ∂x k ∂x ′ j (2.69) with the summation of the right-hand side over k and l implied. This is Einstein’s summation convention.7 The index i is superscript because it is associated with the contravariant x ′ i ; likewise j is subscript because it is related to the covariant gradient. To illustrate the use of the summation convention and some of the techniques of tensor analysis, let us show that the now-familiar Kronecker delta, δkl , is really a mixed tensor 7 In this context ∂x ′ i /∂x k might better be written as a i and ∂x l /∂x ′ j as bl . k j 2.6 Tensor Analysis 137 of rank 2, δ k l .8 The question is: Does δ k l transform according to Eq. (2.66)? This is our criterion for calling it a tensor. We have, using the summation convention, ∂x ′ i ∂x l ∂x ′ i ∂x k = k ′ j ∂x ∂x ∂x k ∂x ′ j by definition of the Kronecker delta. Now, δk l ∂x ′ i ∂x k ∂x ′ i = ∂x k ∂x ′ j ∂x ′ j (2.70) (2.71) by direct partial differentiation of the right-hand side (chain rule). However, x ′ i and x ′ j are independent coordinates, and therefore the variation of one with respect to the other must be zero if they are different, unity if they coincide; that is, ∂x ′ i = δ′ i j . ∂x ′ j (2.72) Hence δ′ i j = ∂x ′ i ∂x l k δ l, ∂x k ∂x ′ j showing that the δ k l are indeed the components of a mixed second-rank tensor. Notice that this result is independent of the number of dimensions of our space. The reason for the upper index i and lower index j is the same as in Eq. (2.69). 
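The argument that δ^k_l transforms into itself can be checked numerically in the Cartesian case, where a_ik a_jl δ_kl = (a aᵀ)_ij = δ_ij for any rotation; a short numpy sketch:

```python
import numpy as np

g = 0.4
a = np.array([[ np.cos(g), np.sin(g), 0.0],
              [-np.sin(g), np.cos(g), 0.0],
              [ 0.0,       0.0,       1.0]])   # rotation: a^{-1} = a^T

delta = np.eye(3)

# Mixed rank-2 transformation, Eq. (2.66); for rotations both index
# types transform with the same coefficients a.
delta_prime = np.einsum('ik,jl,kl->ij', a, a, delta)

# The Kronecker delta is isotropic: identical components in the new frame.
assert np.allclose(delta_prime, delta)
```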
The Kronecker delta has one further interesting property. It has the same components in all of our rotated coordinate systems and is therefore called isotropic. In Section 2.9 we shall meet a third-rank isotropic tensor and three fourth-rank isotropic tensors. No isotropic first-rank tensor (vector) exists. Symmetry–Antisymmetry The order in which the indices appear in our description of a tensor is important. In general, Amn is independent of Anm , but there are some cases of special interest. If, for all m and n, Amn = Anm , (2.73) we call the tensor symmetric. If, on the other hand, Amn = −Anm , (2.74) the tensor is antisymmetric. Clearly, every (second-rank) tensor can be resolved into symmetric and antisymmetric parts by the identity   (2.75) Amn = 21 Amn + Anm + 12 Amn − Anm , the first term on the right being a symmetric tensor, the second, an antisymmetric tensor. A similar resolution of functions into symmetric and antisymmetric parts is of extreme importance to quantum mechanics. 8 It is common practice to refer to a tensor A by specifying a typical component, A . As long as the reader refrains from writing ij nonsense such as A = Aij , no harm is done. 138 Chapter 2 Vector Analysis in Curved Coordinates and Tensors Spinors It was once thought that the system of scalars, vectors, tensors (second-rank), and so on formed a complete mathematical system, one that is adequate for describing a physics independent of the choice of reference frame. But the universe and mathematical physics are not that simple. In the realm of elementary particles, for example, spin zero particles9 (π mesons, α particles) may be described with scalars, spin 1 particles (deuterons) by vectors, and spin 2 particles (gravitons) by tensors. This listing omits the most common particles: electrons, protons, and neutrons, all with spin 21 . These particles are properly described by spinors. A spinor is not a scalar, vector, or tensor. 
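The decomposition of Eq. (2.75) is worth seeing concretely; a numpy sketch splitting an arbitrary 3 × 3 array into its symmetric and antisymmetric parts:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 3))

sym  = (A + A.T) / 2       # symmetric part, Eq. (2.75)
anti = (A - A.T) / 2       # antisymmetric part

assert np.allclose(sym, sym.T)            # A_mn = A_nm
assert np.allclose(anti, -anti.T)         # A_mn = -A_nm
assert np.allclose(sym + anti, A)         # the decomposition is exact
assert np.isclose(np.trace(anti), 0)      # antisymmetric part is traceless
```

In three dimensions the symmetric part carries 6 independent components and the antisymmetric part 3, accounting for all 9 components of A.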
A brief introduction to spinors in the context of group theory (J = 1/2) appears in Section 4.3. Exercises 2.6.1 Show that if all the components of any tensor of any rank vanish in one particular coordinate system, they vanish in all coordinate systems. Note. This point takes on special importance in the four-dimensional curved space of general relativity. If a quantity, expressed as a tensor, exists in one coordinate system, it exists in all coordinate systems and is not just a consequence of a choice of a coordinate system (as are centrifugal and Coriolis forces in Newtonian mechanics). 2.6.2 The components of tensor A are equal to the corresponding components of tensor B in one particular coordinate system, denoted by the superscript 0; that is, A0ij = Bij0 . Show that tensor A is equal to tensor B, Aij = Bij , in all coordinate systems. 2.6.3 The last three components of a four-dimensional vector vanish in each of two reference frames. If the second reference frame is not merely a rotation of the first about the x0 axis, that is, if at least one of the coefficients ai0 (i = 1, 2, 3) = 0, show that the zeroth component vanishes in all reference frames. Translated into relativistic mechanics this means that if momentum is conserved in two Lorentz frames, then energy is conserved in all Lorentz frames. 2.6.4 From an analysis of the behavior of a general second-rank tensor under 90◦ and 180◦ rotations about the coordinate axes, show that an isotropic second-rank tensor in threedimensional space must be a multiple of δij . 2.6.5 The four-dimensional fourth-rank Riemann–Christoffel curvature tensor of general relativity, Riklm , satisfies the symmetry relations Riklm = −Rikml = −Rkilm . With the indices running from 0 to 3, show that the number of independent components is reduced from 256 to 36 and that the condition Riklm = Rlmik 9 The particle spin is intrinsic angular momentum (in units of h). 
It is distinct from classical, orbital angular momentum due to ¯ motion. 2.7 Contraction, Direct Product 139 further reduces the number of independent components to 21. Finally, if the components satisfy an identity Riklm + Rilmk + Rimkl = 0, show that the number of independent components is reduced to 20. Note. The final three-term identity furnishes new information only if all four indices are different. Then it reduces the number of independent components by one-third. 2.6.6 2.7 Tiklm is antisymmetric with respect to all pairs of indices. How many independent components has it (in three-dimensional space)? CONTRACTION, DIRECT PRODUCT Contraction When dealing with vectors, we formed a scalar product (Section 1.3) by summing products of corresponding components: A · B = Ai Bi (summation convention). (2.76) The generalization of this expression in tensor analysis is a process known as contraction. Two indices, one covariant and the other contravariant, are set equal to each other, and then (as implied by the summation convention) we sum over this repeated index. For example, let us contract the second-rank mixed tensor B ′ i j , ∂x l k ∂x ′ i ∂x l k B = B l l ∂x k ∂x ′ i ∂x k using Eq. (2.71), and then by Eq. (2.72) B′ i i = B ′ i i = δl k B k l = B k k . (2.77) (2.78) Our contracted second-rank mixed tensor is invariant and therefore a scalar.10 This is exactly what we obtained in Section 1.3 for the dot product of two vectors and in Section 1.7 for the divergence of a vector. In general, the operation of contraction reduces the rank of a tensor by 2. An example of the use of contraction appears in Chapter 4. Direct Product The components of a covariant vector (first-rank tensor) ai and those of a contravariant vector (first-rank tensor) bj may be multiplied component by component to give the general term ai bj . This, by Eq. (2.66) is actually a second-rank tensor, for ai′ b′ j = Contracting, we obtain ∂x k ∂x ′ j  l ∂x k ∂x ′ j l a b = ak b . 
k ∂x ′ i ∂x l ∂x ′ i ∂x l ai′ b′ i = ak bk , 10 In matrix analysis this scalar is the trace of the matrix, Section 3.2. (2.79) (2.80) 140 Chapter 2 Vector Analysis in Curved Coordinates and Tensors as in Eqs. (2.77) and (2.78), to give the regular scalar product. The operation of adjoining two vectors ai and bj as in the last paragraph is known as forming the direct product. For the case of two vectors, the direct product is a tensor of second rank. In this sense we may attach meaning to ∇E, which was not defined within the framework of vector analysis. In general, the direct product of two tensors is a tensor of rank equal to the sum of the two initial ranks; that is, Ai j B kl = C i j kl , (2.81a) where C i j kl is a tensor of fourth rank. From Eqs. (2.66), ∂x ′ i ∂x n ∂x ′k ∂x ′l m pq C n . (2.81b) ∂x m ∂x ′ j ∂x p ∂x q The direct product is a technique for creating new, higher-rank tensors. Exercise 2.7.1 is a form of the direct product in which the first factor is ∇. Applications appear in Section 4.6. When T is an nth-rank Cartesian tensor, (∂/∂x i )Tj kl . . . , a component of ∇T, is a Cartesian tensor of rank n + 1 (Exercise 2.7.1). However, (∂/∂x i )Tj kl . . . is not a tensor in more general spaces. In non-Cartesian systems ∂/∂x ′ i will act on the partial derivatives ∂x p /∂x ′ q and destroy the simple tensor transformation relation (see Eq. (2.129)). So far the distinction between a covariant transformation and a contravariant transformation has been maintained because it does exist in non-Euclidean space and because it is of great importance in general relativity. In Sections 2.10 and 2.11 we shall develop differential relations for general tensors. Often, however, because of the simplification achieved, we restrict ourselves to Cartesian tensors. As noted in Section 2.6, the distinction between contravariance and covariance disappears. 
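Direct product and contraction are both one-liners in numpy, which makes the rank bookkeeping concrete (a sketch; in Cartesian coordinates contraction of a rank-2 tensor is just the trace):

```python
import numpy as np

a = np.array([1.0, 2.0, 3.0])
b = np.array([-1.0, 0.5, 2.0])

# Direct product: a_i b_j is a second-rank tensor (3 x 3 array)
C = np.outer(a, b)
assert C.shape == (3, 3)

# Contraction (set i = j and sum) reduces the rank by 2 and
# recovers the scalar product, as in Eq. (2.80).
assert np.isclose(np.einsum('ii', C), np.dot(a, b))

# The contracted quantity is invariant under rotations: trace(R C R^T) = trace(C)
g = 1.2
R = np.array([[np.cos(g), -np.sin(g), 0.0],
              [np.sin(g),  np.cos(g), 0.0],
              [0.0,        0.0,       1.0]])
assert np.isclose(np.trace(R @ C @ R.T), np.trace(C))
```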
C ′ i j kl = Exercises 2.7.1 2.7.2 2.7.3 If T···i is a tensor of rank n, show that ∂T···i /∂x j is a tensor of rank n + 1 (Cartesian coordinates). Note. In non-Cartesian coordinate systems the coefficients aij are, in general, functions of the coordinates, and the simple derivative of a tensor of rank n is not a tensor except in the special case of n = 0. In this case the derivative does yield a covariant vector (tensor of rank 1) by Eq. (2.64). If Tij k··· is a tensor of rank n, show that j ∂Tij k··· /∂x j is a tensor of rank n − 1 (Cartesian coordinates). The operator ∇2 − 1 ∂2 c2 ∂t 2 may be written as 4  ∂2 , 2 ∂x i i=1 2.8 Quotient Rule 141 using x4 = ict. This is the four-dimensional Laplacian, sometimes called the d’Alembertian and denoted by 2 . Show that it is a scalar operator, that is, is invariant under Lorentz transformations. 2.8 QUOTIENT RULE If Ai and Bj are vectors, as seen in Section 2.7, we can easily show that Ai Bj is a secondrank tensor. Here we are concerned with a variety of inverse relations. Consider such equations as Ki Ai = B (2.82a) Kij Aj = Bi (2.82b) Kij Aj k = Bik (2.82c) Kij kl Aij = Bkl (2.82d) Kij Ak = Bij k . (2.82e) Inline with our restriction to Cartesian systems, we write all indices as subscripts and, unless specified otherwise, sum repeated indices. In each of these expressions A and B are known tensors of rank indicated by the number of indices and A is arbitrary. In each case K is an unknown quantity. We wish to establish the transformation properties of K. The quotient rule asserts that if the equation of interest holds in all (rotated) Cartesian coordinate systems, K is a tensor of the indicated rank. The importance in physical theory is that the quotient rule can establish the tensor nature of quantities. Exercise 2.8.1 is a simple illustration of this. The quotient rule (Eq. (2.82b)) shows that the inertia matrix appearing in the angular momentum equation L = I ω, Section 3.5, is a tensor. 
In proving the quotient rule, we consider Eq. (2.82b) as a typical case. In our primed coordinate system Kij′ A′j = Bi′ = aik Bk , (2.83) using the vector transformation properties of B. Since the equation holds in all rotated Cartesian coordinate systems, aik Bk = aik (Kkl Al ). (2.84) Kij′ A′j = aik Kkl aj l A′j . (2.85) (Kij′ − aik aj l Kkl )A′j = 0. (2.86) Now, transforming A back into the primed coordinate system11 (compare Eq. (2.62)), we have Rearranging, we obtain 11 Note the order of the indices of the direction cosine a in this inverse transformation. We have jl Al =   ∂xl A′ = aj l A′j . ∂xj′ j j j 142 Chapter 2 Vector Analysis in Curved Coordinates and Tensors This must hold for each value of the index i and for every primed coordinate system. Since the A′j is arbitrary,12 we conclude Kij′ = aik aj l Kkl , (2.87) which is our definition of second-rank tensor. The other equations may be treated similarly, giving rise to other forms of the quotient rule. One minor pitfall should be noted: The quotient rule does not necessarily apply if B is zero. The transformation properties of zero are indeterminate. Example 2.8.1 EQUATIONS OF MOTION AND FIELD EQUATIONS In classical mechanics, Newton’s equations of motion m˙v = F tell us on the basis of the quotient rule that, if the mass is a scalar and the force a vector, then the acceleration a ≡ v˙ is a vector. In other words, the vector character of the force as the driving term imposes its vector character on the acceleration, provided the scale factor m is scalar. The wave equation of electrodynamics ∂ 2 Aµ = J µ involves the four-dimensional ver2 sion of the Laplacian ∂ 2 = c2∂∂t 2 −∇ 2 , a Lorentz scalar, and the external four-vector current J µ as its driving term. From the quotient rule, we infer that the vector potential Aµ is a four-vector as well. If the driving current is a four-vector, the vector potential must be of rank 1 by the quotient rule.  
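The content of the quotient rule can be illustrated numerically: if K_ij A_j = B_i is to hold in every rotated frame with A and B transforming as vectors, then K′ = R K Rᵀ must satisfy the primed equation. A numpy sketch checking this consistency for random vectors:

```python
import numpy as np

rng = np.random.default_rng(1)
K = rng.standard_normal((3, 3))     # the unknown quantity in K_ij A_j = B_i

g = 0.9
R = np.array([[np.cos(g), -np.sin(g), 0.0],
              [np.sin(g),  np.cos(g), 0.0],
              [0.0,        0.0,       1.0]])

# Transform K as a second-rank tensor, Eq. (2.87): K' = R K R^T
K_prime = R @ K @ R.T

for _ in range(5):
    A = rng.standard_normal(3)
    B = K @ A
    # transform A and B as vectors; the primed equation K' A' = B' holds
    assert np.allclose(K_prime @ (R @ A), R @ B)
```

Since A is arbitrary, no other transformation rule for K would make the equation frame-independent, which is the quotient rule's assertion.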
The quotient rule is a substitute for the illegal division of tensors.

Exercises

2.8.1 The double summation $K_{ij} A_i B_j$ is invariant for any two vectors $A_i$ and $B_j$. Prove that $K_{ij}$ is a second-rank tensor.
Note. In the form $ds^2$ (invariant) $= g_{ij}\, dx^i\, dx^j$, this result shows that the matrix $g_{ij}$ is a tensor.

2.8.2 The equation $K_{ij} A_{jk} = B_{ik}$ holds for all orientations of the coordinate system. If A and B are arbitrary second-rank tensors, show that K is a second-rank tensor also.

2.8.3 The exponential in a plane wave is $\exp[i(\mathbf k \cdot \mathbf r - \omega t)]$. We recognize $x^\mu = (ct, x_1, x_2, x_3)$ as a prototype vector in Minkowski space. If $\mathbf k \cdot \mathbf r - \omega t$ is a scalar under Lorentz transformations (Section 4.5), show that $k^\mu = (\omega/c, k_1, k_2, k_3)$ is a vector in Minkowski space.
Note. Multiplication by $\hbar$ yields $(E/c, \mathbf p)$ as a vector in Minkowski space.

2.9 PSEUDOTENSORS, DUAL TENSORS

So far our coordinate transformations have been restricted to pure passive rotations. We now consider the effect of reflections or inversions.

¹² We might, for instance, take $A'_1 = 1$ and $A'_m = 0$ for $m \neq 1$. Then the equation $K'_{i1} = a_{ik} a_{1l} K_{kl}$ follows immediately. The rest of Eq. (2.87) comes from other special choices of the arbitrary $A'_j$.

FIGURE 2.9 Inversion of Cartesian coordinates — polar vector.

If we have transformation coefficients $a_{ij} = -\delta_{ij}$, then by Eq. (2.60)
$$x'_i = -x_i, \qquad (2.88)$$
which is an inversion or parity transformation. Note that this transformation changes our initial right-handed coordinate system into a left-handed coordinate system.¹³ Our prototype vector r with components $(x_1, x_2, x_3)$ transforms to
$$\mathbf r' = (x'_1, x'_2, x'_3) = (-x_1, -x_2, -x_3).$$
This new vector r′ has negative components relative to the new transformed set of axes. As shown in Fig. 2.9, reversing the directions of the coordinate axes and changing the signs of the components gives r′ = r.
The vector (an arrow in space) stays exactly as it was before the transformation was carried out. The position vector r and all other vectors whose components behave this way (reversing sign with a reversal of the coordinate axes) are called polar vectors and have odd parity.

A fundamental difference appears when we encounter a vector defined as the cross product of two polar vectors. Let C = A × B, where both A and B are polar vectors. From Eq. (1.33), the components of C are given by
$$C_1 = A_2 B_3 - A_3 B_2 \qquad (2.89)$$
and so on. Now, when the coordinate axes are inverted, $A_i \to A'_i = -A_i$, $B_j \to B'_j = -B_j$, but from its definition $C_k \to C'_k = +C_k$; that is, our cross-product vector, vector C, does not behave like a polar vector under inversion. To distinguish, we label it a pseudovector or axial vector (see Fig. 2.10) that has even parity. The term axial vector is frequently used because these cross products often arise from a description of rotation.

¹³ This is an inversion of the coordinate system or coordinate axes, objects in the physical world remaining fixed.

FIGURE 2.10 Inversion of Cartesian coordinates — axial vector.

Examples are
angular velocity, $\mathbf v = \boldsymbol\omega \times \mathbf r$,
orbital angular momentum, $\mathbf L = \mathbf r \times \mathbf p$,
torque (force = F), $\mathbf N = \mathbf r \times \mathbf F$,
magnetic induction field B, $\dfrac{\partial \mathbf B}{\partial t} = -\nabla \times \mathbf E$.

In $\mathbf v = \boldsymbol\omega \times \mathbf r$, the axial vector is the angular velocity $\boldsymbol\omega$, and r and $\mathbf v = d\mathbf r/dt$ are polar vectors. Clearly, axial vectors occur frequently in physics, although this fact is usually not pointed out. In a right-handed coordinate system an axial vector C has a sense of rotation associated with it given by a right-hand rule (compare Section 1.4). In the inverted left-handed system the sense of rotation is a left-handed rotation. This is indicated by the curved arrows in Fig. 2.10.

The distinction between polar and axial vectors may also be illustrated by a reflection. A polar vector reflects in a mirror like a real physical arrow, Fig. 2.11a. In Figs.
2.9 and 2.10 the coordinates are inverted; the physical world remains fixed. Here the coordinate axes remain fixed; the world is reflected — as in a mirror in the xz-plane. Specifically, in this representation we keep the axes fixed and associate a change of sign with the component of the vector. For a mirror in the xz-plane, $P_y \to -P_y$. We have
$$\mathbf P = (P_x, P_y, P_z), \qquad \mathbf P' = (P_x, -P_y, P_z) \quad \text{polar vector.}$$
An axial vector such as a magnetic field H or a magnetic moment $\boldsymbol\mu$ (= current × area of current loop) behaves quite differently under reflection. Consider the magnetic field H and magnetic moment $\boldsymbol\mu$ to be produced by an electric charge moving in a circular path (Exercise 5.8.4 and Example 12.5.3). Reflection reverses the sense of rotation of the charge. The two current loops and the resulting magnetic moments are shown in Fig. 2.11b. We have
$$\boldsymbol\mu = (\mu_x, \mu_y, \mu_z), \qquad \boldsymbol\mu' = (-\mu_x, \mu_y, -\mu_z) \quad \text{reflected axial vector.}$$

FIGURE 2.11 (a) Mirror in xz-plane; (b) mirror in xz-plane.

If we agree that the universe does not care whether we use a right- or left-handed coordinate system, then it does not make sense to add an axial vector to a polar vector. In the vector equation A = B, both A and B are either polar vectors or axial vectors.¹⁴ Similar restrictions apply to scalars and pseudoscalars and, in general, to the tensors and pseudotensors considered subsequently.

Usually, pseudoscalars, pseudovectors, and pseudotensors will transform as
$$S' = JS, \qquad C'_i = J a_{ij} C_j, \qquad A'_{ij} = J a_{ik} a_{jl} A_{kl}, \qquad (2.90)$$
where J is the determinant¹⁵ of the array of coefficients $a_{mn}$, the Jacobian of the parity transformation. In our inversion the Jacobian is
$$J = \begin{vmatrix} -1 & 0 & 0 \\ 0 & -1 & 0 \\ 0 & 0 & -1 \end{vmatrix} = -1. \qquad (2.91)$$
For a reflection of one axis, the x-axis,
$$J = \begin{vmatrix} -1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{vmatrix} = -1, \qquad (2.92)$$
and again the Jacobian J = −1. On the other hand, for all pure rotations, the Jacobian J is always +1.
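A minimal numerical illustration of the polar/axial distinction and of Eq. (2.90) (assuming NumPy; the random seed is an arbitrary choice): under the inversion $a_{ij} = -\delta_{ij}$ the components of A and B flip sign, the cross product does not, and the pseudovector law $C'_i = J\,a_{ij}C_j$ with $J = -1$ accounts for exactly this behavior.

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal(3)        # a polar vector
B = rng.standard_normal(3)        # another polar vector
C = np.cross(A, B)                # axial vector C = A x B

P = -np.eye(3)                    # inversion, a_ij = -delta_ij

# Polar vectors reverse sign under inversion...
A_p, B_p = P @ A, P @ B
assert np.allclose(A_p, -A) and np.allclose(B_p, -B)

# ...but the cross product, recomputed from the inverted components,
# comes back unchanged: C has even parity (axial vector).
assert np.allclose(np.cross(A_p, B_p), C)

# The pseudovector transformation law C'_i = J a_ij C_j of Eq. (2.90),
# with Jacobian J = det(a) = -1, reproduces the same result.
J = np.linalg.det(P)
assert np.isclose(J, -1.0)
assert np.allclose(J * (P @ C), C)
```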
Rotation matrices are discussed further in Section 3.3.

In Chapter 1 the triple scalar product S = A × B · C was shown to be a scalar (under rotations). Now, by considering the parity transformation given by Eq. (2.88), we see that S → −S, proving that the triple scalar product is actually a pseudoscalar. This behavior was foreshadowed by the geometrical analogy of a volume. If all three parameters of the volume — length, depth, and height — change from positive distances to negative distances, the product of the three will be negative.

Levi-Civita Symbol

For future use it is convenient to introduce the three-dimensional Levi-Civita symbol $\varepsilon_{ijk}$, defined by
$$\varepsilon_{123} = \varepsilon_{231} = \varepsilon_{312} = 1, \qquad \varepsilon_{132} = \varepsilon_{213} = \varepsilon_{321} = -1, \qquad \text{all other } \varepsilon_{ijk} = 0. \qquad (2.93)$$
Note that $\varepsilon_{ijk}$ is antisymmetric with respect to all pairs of indices. Suppose now that we have a third-rank pseudotensor $\delta_{ijk}$, which in one particular coordinate system is equal to $\varepsilon_{ijk}$. Then
$$\delta'_{ijk} = |a|\, a_{ip}\, a_{jq}\, a_{kr}\, \varepsilon_{pqr} \qquad (2.94)$$
by definition of pseudotensor. Now,
$$a_{1p}\, a_{2q}\, a_{3r}\, \varepsilon_{pqr} = |a| \qquad (2.95)$$
by direct expansion of the determinant, showing that $\delta'_{123} = |a|^2 = 1 = \varepsilon_{123}$. Considering the other possibilities one by one, we find
$$\delta'_{ijk} = \varepsilon_{ijk} \qquad (2.96)$$
for rotations and reflections. Hence $\varepsilon_{ijk}$ is a pseudotensor.¹⁶,¹⁷ Furthermore, it is seen to be an isotropic pseudotensor with the same components in all rotated Cartesian coordinate systems.

Dual Tensors

With any antisymmetric second-rank tensor C (in three-dimensional space) we may associate a dual pseudovector $C_i$ defined by
$$C_i = \frac{1}{2}\varepsilon_{ijk} C^{jk}. \qquad (2.97)$$
Here the antisymmetric C may be written
$$C = \begin{pmatrix} 0 & C^{12} & -C^{31} \\ -C^{12} & 0 & C^{23} \\ C^{31} & -C^{23} & 0 \end{pmatrix}. \qquad (2.98)$$
We know that $C_i$ must transform as a vector under rotations from the double contraction of the fifth-rank (pseudo)tensor $\varepsilon_{ijk} C^{mn}$, but that it is really a pseudovector from the pseudo nature of $\varepsilon_{ijk}$. Specifically, the components of C are given by
$$(C_1, C_2, C_3) = (C^{23}, C^{31}, C^{12}). \qquad (2.99)$$
Notice the cyclic order of the indices that comes from the cyclic order of the components of $\varepsilon_{ijk}$. Eq. (2.99) means that our three-dimensional vector product may literally be taken to be either a pseudovector or an antisymmetric second-rank tensor, depending on how we choose to write it out.

If we take three (polar) vectors A, B, and C, we may define the direct product
$$V^{ijk} = A^i B^j C^k. \qquad (2.100)$$
By an extension of the analysis of Section 2.6, $V^{ijk}$ is a tensor of third rank. The dual quantity
$$V = \frac{1}{3!}\varepsilon_{ijk} V^{ijk} \qquad (2.101)$$
is clearly a pseudoscalar. By expansion it is seen that
$$V = \begin{vmatrix} A^1 & B^1 & C^1 \\ A^2 & B^2 & C^2 \\ A^3 & B^3 & C^3 \end{vmatrix} \qquad (2.102)$$
is our familiar triple scalar product.

For use in writing Maxwell's equations in covariant form, Section 4.6, we want to extend this dual vector analysis to four-dimensional space and, in particular, to indicate that the four-dimensional volume element $dx^0\, dx^1\, dx^2\, dx^3$ is a pseudoscalar.

¹⁴ The big exception to this is in beta decay, weak interactions. Here the universe distinguishes between right- and left-handed systems, and we add polar and axial vector interactions.
¹⁵ Determinants are described in Section 3.1.
¹⁶ The usefulness of $\varepsilon_{pqr}$ extends far beyond this section. For instance, the matrices $M_k$ of Exercise 3.2.16 are derived from $(M_r)_{pq} = -i\varepsilon_{pqr}$. Much of elementary vector analysis can be written in a very compact form by using $\varepsilon_{ijk}$ and the identity of Exercise 2.9.4. See A. A. Evett, Permutation symbol approach to elementary vector analysis. Am. J. Phys. 34: 503 (1966).
¹⁷ The numerical value of $\varepsilon_{pqr}$ is given by the triple scalar product of coordinate unit vectors, $\hat{\mathbf x}_p \cdot \hat{\mathbf x}_q \times \hat{\mathbf x}_r$. From this point of view each element of $\varepsilon_{pqr}$ is a pseudoscalar, but the $\varepsilon_{pqr}$ collectively form a third-rank pseudotensor.
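The $\varepsilon_{ijk}$ identities above lend themselves to direct verification. The sketch below (assuming NumPy; the sample rotation angle and random seed are arbitrary) builds $\varepsilon_{ijk}$ from Eq. (2.93), checks the isotropy/pseudotensor argument of Eqs. (2.94)–(2.96), verifies the contraction $\varepsilon_{ipq}\varepsilon_{jpq} = 2\delta_{ij}$ of Exercise 2.9.3, and confirms that the dual of an antisymmetric $C^{jk}$, Eq. (2.97), reproduces the cross product.

```python
import numpy as np

# Levi-Civita symbol from Eq. (2.93) (0-based indices).
eps = np.zeros((3, 3, 3))
eps[0, 1, 2] = eps[1, 2, 0] = eps[2, 0, 1] = 1.0
eps[0, 2, 1] = eps[2, 1, 0] = eps[1, 0, 2] = -1.0

# Isotropy: a_ip a_jq a_kr eps_pqr reproduces eps for a rotation (det a = +1)...
t = 0.3
a = np.array([[np.cos(t), np.sin(t), 0.0],
              [-np.sin(t), np.cos(t), 0.0],
              [0.0, 0.0, 1.0]])
assert np.allclose(np.einsum('ip,jq,kr,pqr->ijk', a, a, a, eps), eps)

# ...while an inversion (det a = -1) flips the sign, so eps needs the extra
# factor |a| of Eq. (2.94) to stay invariant: eps is a *pseudo*tensor.
inv = -np.eye(3)
assert np.allclose(np.einsum('ip,jq,kr,pqr->ijk', inv, inv, inv, eps), -eps)

# Contraction identity of Exercise 2.9.3(c): eps_ipq eps_jpq = 2 delta_ij.
assert np.allclose(np.einsum('ipq,jpq->ij', eps, eps), 2.0 * np.eye(3))

# The dual of an antisymmetric tensor, Eq. (2.97), is the cross product.
rng = np.random.default_rng(0)
A, B = rng.standard_normal(3), rng.standard_normal(3)
Cjk = np.outer(A, B) - np.outer(B, A)          # antisymmetric C^jk
C = 0.5 * np.einsum('ijk,jk->i', eps, Cjk)     # C_i = (1/2) eps_ijk C^jk
assert np.allclose(C, np.cross(A, B))

# eps_ijk A^i B^j D^k is the triple scalar product, the determinant of Eq. (2.102).
D = rng.standard_normal(3)
triple = np.einsum('ijk,i,j,k->', eps, A, B, D)
assert np.isclose(triple, np.dot(A, np.cross(B, D)))
```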
We introduce the Levi-Civita symbol $\varepsilon_{ijkl}$, the four-dimensional analog of $\varepsilon_{ijk}$. This quantity $\varepsilon_{ijkl}$ is defined as totally antisymmetric in all four indices. If (ijkl) is an even permutation¹⁸ of (0, 1, 2, 3), then $\varepsilon_{ijkl} = +1$; if it is an odd permutation, then $\varepsilon_{ijkl} = -1$; and $\varepsilon_{ijkl} = 0$ if any two indices are equal. The Levi-Civita $\varepsilon_{ijkl}$ may be proved a pseudotensor of rank 4 by analysis similar to that used for establishing the tensor nature of $\varepsilon_{ijk}$. Introducing the direct product of four vectors as a fourth-rank tensor with components
$$H^{ijkl} = A^i B^j C^k D^l, \qquad (2.103)$$
built from the polar vectors A, B, C, and D, we may define the dual quantity
$$H = \frac{1}{4!}\varepsilon_{ijkl} H^{ijkl}, \qquad (2.104)$$
a pseudoscalar due to the quadruple contraction with the pseudotensor $\varepsilon_{ijkl}$. Now we let A, B, C, and D be infinitesimal displacements along the four coordinate axes (Minkowski space),
$$\mathbf A = (dx^0, 0, 0, 0), \qquad \mathbf B = (0, dx^1, 0, 0), \quad \text{and so on}, \qquad (2.105)$$
and
$$H = dx^0\, dx^1\, dx^2\, dx^3. \qquad (2.106)$$
The four-dimensional volume element is now identified as a pseudoscalar. We use this result in Section 4.6. This result could have been expected from the results of the special theory of relativity. The Lorentz–Fitzgerald contraction of $dx^1\, dx^2\, dx^3$ just balances the time dilation of $dx^0$.

We slipped into this four-dimensional space as a simple mathematical extension of the three-dimensional space and, indeed, we could just as easily have discussed 5-, 6-, or N-dimensional space. This is typical of the power of the component analysis. Physically, this four-dimensional space may be taken as Minkowski space,
$$(x^0, x^1, x^2, x^3) = (ct, x, y, z), \qquad (2.107)$$
where t is time. This is the merger of space and time achieved in special relativity. The transformations that describe the rotations in four-dimensional space are the Lorentz transformations of special relativity. We encounter these Lorentz transformations in Section 4.6.
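A numerical sketch of the same four-dimensional statement (assuming NumPy; the displacement values are arbitrary placeholders): contracting $\varepsilon_{ijkl}$ with four coordinate displacements yields the four-volume $dx^0\, dx^1\, dx^2\, dx^3$, and reflecting a single axis (Jacobian $J = -1$) flips its sign, which is the pseudoscalar behavior claimed for the volume element.

```python
import numpy as np
from itertools import permutations

# Four-dimensional Levi-Civita symbol, eps_{0123} = +1.
eps4 = np.zeros((4, 4, 4, 4))
for p in permutations(range(4)):
    # Parity of the permutation via its inversion count.
    inversions = sum(p[m] > p[n] for m in range(4) for n in range(m + 1, 4))
    eps4[p] = (-1.0) ** inversions

d = np.array([0.1, 0.2, 0.3, 0.4])   # dx^0, dx^1, dx^2, dx^3 (arbitrary values)
A, B, C, D = np.diag(d)              # A = (dx^0,0,0,0), B = (0,dx^1,0,0), ...

# eps_ijkl A^i B^j C^k D^l picks out the single term eps_0123 dx^0 dx^1 dx^2 dx^3.
vol = np.einsum('ijkl,i,j,k,l->', eps4, A, B, C, D)
assert np.isclose(vol, d.prod())

# Reflect one spatial axis (J = -1): the volume recomputed from the reflected
# components changes sign, so dx^0 dx^1 dx^2 dx^3 is a pseudoscalar.
R = np.diag([1.0, -1.0, 1.0, 1.0])
vol_r = np.einsum('ijkl,i,j,k,l->', eps4, R @ A, R @ B, R @ C, R @ D)
assert np.isclose(vol_r, -vol)
```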
¹⁸ A permutation is odd if it involves an odd number of interchanges of adjacent indices, such as (0 1 2 3) → (0 2 1 3). Even permutations arise from an even number of transpositions of adjacent indices. (Actually the word adjacent is unnecessary.) $\varepsilon_{0123} = +1$.

Irreducible Tensors

For some applications, particularly in the quantum theory of angular momentum, our Cartesian tensors are not particularly convenient. In mathematical language our general second-rank tensor $A_{ij}$ is reducible, which means that it can be decomposed into parts of lower tensor rank. In fact, we have already done this. From Eq. (2.78),
$$A = A_{ii} \qquad (2.108)$$
is a scalar quantity, the trace of $A_{ij}$.¹⁹ The antisymmetric portion,
$$B_{ij} = \tfrac{1}{2}(A_{ij} - A_{ji}), \qquad (2.109)$$
has just been shown to be equivalent to a (pseudo)vector, or
$$B_{ij} = C_k, \qquad \text{cyclic permutation of } i, j, k. \qquad (2.110)$$
By subtracting the scalar A and the vector $C_k$ from our original tensor, we have an irreducible, symmetric, zero-trace second-rank tensor, $S_{ij}$, in which
$$S_{ij} = \tfrac{1}{2}(A_{ij} + A_{ji}) - \tfrac{1}{3} A \delta_{ij}, \qquad (2.111)$$
with five independent components. Then, finally, our original Cartesian tensor may be written
$$A_{ij} = \tfrac{1}{3} A \delta_{ij} + C_k + S_{ij}. \qquad (2.112)$$
The three quantities A, $C_k$, and $S_{ij}$ form spherical tensors of rank 0, 1, and 2, respectively, transforming like the spherical harmonics $Y_L^M$ (Chapter 12) for L = 0, 1, and 2. Further details of such spherical tensors and their uses will be found in Chapter 4 and the books by Rose and Edmonds cited there.

A specific example of the preceding reduction is furnished by the symmetric electric quadrupole tensor
$$Q_{ij} = \int \bigl(3 x_i x_j - r^2 \delta_{ij}\bigr)\, \rho(x_1, x_2, x_3)\, d^3x.$$
The $-r^2\delta_{ij}$ term represents a subtraction of the scalar trace (the three i = j terms). The resulting $Q_{ij}$ has zero trace.

¹⁹ An alternate approach, using matrices, is given in Section 3.3 (see Exercise 3.3.9).

Exercises

2.9.1 An antisymmetric square array is given by
$$\begin{pmatrix} 0 & C_3 & -C_2 \\ -C_3 & 0 & C_1 \\ C_2 & -C_1 & 0 \end{pmatrix} = \begin{pmatrix} 0 & C^{12} & C^{13} \\ -C^{12} & 0 & C^{23} \\ -C^{13} & -C^{23} & 0 \end{pmatrix},$$
where $(C_1, C_2, C_3)$ form a pseudovector. Assuming that the relation
$$C_i = \frac{1}{2!}\varepsilon_{ijk} C^{jk}$$
holds in all coordinate systems, prove that $C^{jk}$ is a tensor. (This is another form of the quotient theorem.)

2.9.2 Show that the vector product is unique to three-dimensional space; that is, only in three dimensions can we establish a one-to-one correspondence between the components of an antisymmetric tensor (second-rank) and the components of a vector.

2.9.3 Show that in $R^3$
(a) $\delta_{ii} = 3$,
(b) $\delta_{ij}\varepsilon_{ijk} = 0$,
(c) $\varepsilon_{ipq}\varepsilon_{jpq} = 2\delta_{ij}$,
(d) $\varepsilon_{ijk}\varepsilon_{ijk} = 6$.

2.9.4 Show that in $R^3$
$$\varepsilon_{ijk}\varepsilon_{pqk} = \delta_{ip}\delta_{jq} - \delta_{iq}\delta_{jp}.$$

2.9.5 (a) Express the components of a cross-product vector C, C = A × B, in terms of $\varepsilon_{ijk}$ and the components of A and B.
(b) Use the antisymmetry of $\varepsilon_{ijk}$ to show that A · A × B = 0.
ANS. (a) $C_i = \varepsilon_{ijk} A_j B_k$.

2.9.6 (a) Show that the inertia tensor (matrix) may be written
$$I_{ij} = m(x_k x_k \delta_{ij} - x_i x_j)$$
for a particle of mass m at $(x_1, x_2, x_3)$.
(b) Show that
$$I_{ij} = -M_{il} M_{lj} = -m\,\varepsilon_{ilk} x_k\, \varepsilon_{ljm} x_m, \qquad \text{where} \quad M_{il} = m^{1/2}\varepsilon_{ilk} x_k.$$
This is the contraction of two second-rank tensors and is identical with the matrix product of Section 3.2.

2.9.7 Write ∇ · ∇ × A and ∇ × ∇ϕ in tensor (index) notation in $R^3$ so that it becomes obvious that each expression vanishes.
ANS. $\nabla\cdot\nabla\times\mathbf A = \varepsilon_{ijk}\dfrac{\partial}{\partial x^i}\dfrac{\partial}{\partial x^j} A^k$, $\qquad (\nabla\times\nabla\varphi)_i = \varepsilon_{ijk}\dfrac{\partial}{\partial x^j}\dfrac{\partial}{\partial x^k}\varphi$.

2.9.8 Expressing cross products in terms of Levi-Civita symbols ($\varepsilon_{ijk}$), derive the BAC–CAB rule, Eq. (1.55).
Hint. The relation of Exercise 2.9.4 is helpful.

2.9.9 Verify that each of the following fourth-rank tensors is isotropic, that is, that it has the same form independent of any rotation of the coordinate systems.
(a) $A_{ijkl} = \delta_{ij}\delta_{kl}$,
(b) $B_{ijkl} = \delta_{ik}\delta_{jl} + \delta_{il}\delta_{jk}$,
(c) $C_{ijkl} = \delta_{ik}\delta_{jl} - \delta_{il}\delta_{jk}$.

2.9.10 Show that the two-index Levi-Civita symbol $\varepsilon_{ij}$ is a second-rank pseudotensor (in two-dimensional space).
Does this contradict the uniqueness of δij (Exercise 2.6.4)? 2.9.11 Represent εij by a 2 × 2 matrix, and using the 2 × 2 rotation matrix of Section 3.3 show that εij is invariant under orthogonal similarity transformations. 2.9.12 Given Ak = 12 εij k B ij with B ij = −B j i , antisymmetric, show that B mn = ε mnk Ak . 2.9.13 Show that the vector identity (A × B) · (C × D) = (A · C)(B · D) − (A · D)(B · C) (Exercise 1.5.12) follows directly from the description of a cross product with εij k and the identity of Exercise 2.9.4. 2.9.14 2.10 Generalize the cross product of two vectors to n-dimensional space for n = 4, 5, . . . . Check the consistency of your construction and discuss concrete examples. See Exercise 1.4.17 for the case n = 2. GENERAL TENSORS The distinction between contravariant and covariant transformations was established in Section 2.6. Then, for convenience, we restricted our attention to Cartesian coordinates (in which the distinction disappears). Now in these two concluding sections we return to non-Cartesian coordinates and resurrect the contravariant and covariant dependence. As in Section 2.6, a superscript will be used for an index denoting contravariant and a subscript for an index denoting covariant dependence. The metric tensor of Section 2.1 will be used to relate contravariant and covariant indices. The emphasis in this section is on differentiation, culminating in the construction of the covariant derivative. We saw in Section 2.7 that the derivative of a vector yields a second-rank tensor — in Cartesian coordinates. In non-Cartesian coordinate systems, it is the covariant derivative of a vector rather than the ordinary derivative that yields a secondrank tensor by differentiation of a vector. Metric Tensor Let us start with the transformation of vectors from one set of coordinates (q 1 , q 2 , q 3 ) to another r = (x 1 , x 2 , x 3 ). 
The new coordinates are (in general nonlinear) functions 152 Chapter 2 Vector Analysis in Curved Coordinates and Tensors x i (q 1 , q 2 , q 3 ) of the old, such as spherical polar coordinates (r, θ, φ). But their differentials obey the linear transformation law dx i = ∂x i j dq , ∂q j (2.113a) or dr = εj dq j (2.113b) 1 1 1 ∂x ∂x ∂x in vector notation. For convenience we take the basis vectors ε 1 = ( ∂q 1 , ∂q 2 , ∂q 3 ), ε 2 , and ε3 to form a right-handed set. These vectors are not necessarily orthogonal. Also, a limitation to three-dimensional space will be required only for the discussions of cross products and curls. Otherwise these εi may be in N -dimensional space, including the four-dimensional space–time of special and general relativity. The basis vectors εi may be expressed by εi = ∂r , ∂q i (2.114) as in Exercise 2.2.3. Note, however, that the εi here do not necessarily have unit magnitude. From Exercise 2.2.3, the unit vectors are ei = 1 ∂r hi ∂qi (no summation), and therefore ε i = hi ei (no summation). (2.115) The ε i are related to the unit vectors ei by the scale factors hi of Section 2.2. The ei have no dimensions; the εi have the dimensions of hi . In spherical polar coordinates, as a specific example, ε r = er = rˆ , εθ = reθ = r θˆ , ε ϕ = r sin θ eϕ = r sin θ ϕ. ˆ (2.116) In Euclidean spaces, or in Minkowski space of special relativity, the partial derivatives in Eq. (2.113) are constants that define the new coordinates in terms of the old ones. We used them to define the transformation laws of vectors in Eq. (2.59) and (2.62) and tensors in Eq. (2.66). Generalizing, we define a contravariant vector V i under general coordinate transformations if its components transform according to V ′i = ∂x i j V , ∂q j (2.117a) or V′ = V j ε j (2.117b) in vector notation. For covariant vectors we inspect the transformation of the gradient operator ∂ ∂q j ∂ = ∂x i ∂x i ∂q j (2.118) 2.10 General Tensors 153 using the chain rule. 
From
$$\frac{\partial x^i}{\partial q^j}\frac{\partial q^j}{\partial x^k} = \delta^i_{\ k} \qquad (2.119)$$
it is clear that Eq. (2.118) is related to the inverse transformation of Eq. (2.113),
$$dq^j = \frac{\partial q^j}{\partial x^i}\, dx^i. \qquad (2.120)$$
Hence we define a covariant vector $V_i$ if
$$V'_i = \frac{\partial q^j}{\partial x^i} V_j \qquad (2.121a)$$
holds or, in vector notation,
$$\mathbf V' = V_j \boldsymbol\varepsilon^j, \qquad (2.121b)$$
where $\boldsymbol\varepsilon^j$ are the contravariant vectors $g^{ji}\boldsymbol\varepsilon_i = \boldsymbol\varepsilon^j$. Second-rank tensors are defined as in Eq. (2.66),
$$A'^{ij} = \frac{\partial x^i}{\partial q^k}\frac{\partial x^j}{\partial q^l} A^{kl}, \qquad (2.122)$$
and tensors of higher rank similarly.

As in Section 2.1, we construct the square of a differential displacement
$$(ds)^2 = d\mathbf r \cdot d\mathbf r = \bigl(\boldsymbol\varepsilon_i\, dq^i\bigr)^2 = \boldsymbol\varepsilon_i \cdot \boldsymbol\varepsilon_j\, dq^i\, dq^j. \qquad (2.123)$$
Comparing this with $(ds)^2$ of Section 2.1, Eq. (2.5), we identify $\boldsymbol\varepsilon_i \cdot \boldsymbol\varepsilon_j$ as the covariant metric tensor
$$\boldsymbol\varepsilon_i \cdot \boldsymbol\varepsilon_j = g_{ij}. \qquad (2.124)$$
Clearly, $g_{ij}$ is symmetric. The tensor nature of $g_{ij}$ follows from the quotient rule, Exercise 2.8.1. We take the relation
$$g^{ik} g_{kj} = \delta^i_{\ j} \qquad (2.125)$$
to define the corresponding contravariant tensor $g^{ik}$. Contravariant $g^{ik}$ enters as the inverse²⁰ of covariant $g_{kj}$. We use this contravariant $g^{ik}$ to raise indices, converting a covariant index into a contravariant index, as shown subsequently. Likewise the covariant $g_{kj}$ will be used to lower indices. The choice of $g^{ik}$ and $g_{kj}$ for this raising–lowering operation is arbitrary. Any second-rank tensor (and its inverse) would do. Specifically, we have
$$g^{ij}\boldsymbol\varepsilon_j = \boldsymbol\varepsilon^i \quad \text{relating covariant and contravariant basis vectors,}$$
$$g^{ij} F_j = F^i \quad \text{relating covariant and contravariant vector components.} \qquad (2.126)$$
Then
$$g_{ij}\boldsymbol\varepsilon^j = \boldsymbol\varepsilon_i, \qquad g_{ij} F^j = F_i \qquad (2.127)$$
as the corresponding index-lowering relations.

²⁰ If the tensor $g_{kj}$ is written as a matrix, the tensor $g^{ik}$ is given by the inverse matrix.

It should be emphasized again that the $\boldsymbol\varepsilon_i$ and $\boldsymbol\varepsilon^j$ do not have unit magnitude. This may be seen in Eqs. (2.116) and in the metric tensor $g_{ij}$ for spherical polar coordinates and its inverse $g^{ij}$:
$$(g_{ij}) = \begin{pmatrix} 1 & 0 & 0 \\ 0 & r^2 & 0 \\ 0 & 0 & r^2\sin^2\theta \end{pmatrix}, \qquad (g^{ij}) = \begin{pmatrix} 1 & 0 & 0 \\ 0 & \dfrac{1}{r^2} & 0 \\ 0 & 0 & \dfrac{1}{r^2\sin^2\theta} \end{pmatrix}.$$

Christoffel Symbols

Let us form the differential of a scalar ψ,
$$d\psi = \frac{\partial \psi}{\partial q^i}\, dq^i. \qquad (2.128)$$
Since the $dq^i$ are the components of a contravariant vector, the partial derivatives $\partial\psi/\partial q^i$ must form a covariant vector — by the quotient rule. The gradient of a scalar becomes
$$\nabla\psi = \frac{\partial \psi}{\partial q^i}\,\boldsymbol\varepsilon^i. \qquad (2.129)$$
Note that $\partial\psi/\partial q^i$ are not the gradient components of Section 2.2 — because $\boldsymbol\varepsilon^i \neq \mathbf e_i$ of Section 2.2.

Moving on to the derivatives of a vector, we find that the situation is much more complicated because the basis vectors $\boldsymbol\varepsilon_i$ are in general not constant. Remember, we are no longer restricting ourselves to Cartesian coordinates and the nice, convenient $\hat{\mathbf x}, \hat{\mathbf y}, \hat{\mathbf z}$! Direct differentiation of Eq. (2.117a) yields
$$\frac{\partial V'^k}{\partial q^j} = \frac{\partial x^k}{\partial q^i}\frac{\partial V^i}{\partial q^j} + \frac{\partial^2 x^k}{\partial q^j\, \partial q^i}\, V^i, \qquad (2.130a)$$
or, in vector notation,
$$\frac{\partial \mathbf V'}{\partial q^j} = \frac{\partial V^i}{\partial q^j}\,\boldsymbol\varepsilon_i + V^i\, \frac{\partial \boldsymbol\varepsilon_i}{\partial q^j}. \qquad (2.130b)$$
The right side of Eq. (2.130a) differs from the transformation law for a second-rank mixed tensor by the second term, which contains second derivatives of the coordinates $x^k$. The latter are nonzero for nonlinear coordinate transformations.

Now, $\partial\boldsymbol\varepsilon_i/\partial q^j$ will be some linear combination of the $\boldsymbol\varepsilon_k$, with the coefficient depending on the indices i and j from the partial derivative and index k from the base vector. We write
$$\frac{\partial \boldsymbol\varepsilon_i}{\partial q^j} = \Gamma^k_{ij}\,\boldsymbol\varepsilon_k. \qquad (2.131a)$$
Multiplying by $\boldsymbol\varepsilon^m$ and using $\boldsymbol\varepsilon^m \cdot \boldsymbol\varepsilon_k = \delta^m_{\ k}$ from Exercise 2.10.2, we have
$$\Gamma^m_{ij} = \boldsymbol\varepsilon^m \cdot \frac{\partial \boldsymbol\varepsilon_i}{\partial q^j}. \qquad (2.131b)$$
The $\Gamma^k_{ij}$ is a Christoffel symbol of the second kind. It is also called a coefficient of connection. These $\Gamma^k_{ij}$ are not third-rank tensors and the $\partial V^i/\partial q^j$ of Eq. (2.130a) are not second-rank tensors. Equations (2.131) should be compared with the results quoted in Exercise 2.2.3 (remembering that in general $\boldsymbol\varepsilon_i \neq \mathbf e_i$). In Cartesian coordinates, $\Gamma^k_{ij} = 0$ for all values of the indices i, j, and k. These Christoffel three-index symbols may be computed by the techniques of Section 2.2.
This is the topic of Exercise 2.10.8. Equation (2.138) offers an easier method. Using Eq. (2.114), we obtain ∂ε j ∂ 2r ∂ε i = j i = i = Ŵjki ε k . j ∂q ∂q ∂q ∂q (2.132) Hence these Christoffel symbols are symmetric in the two lower indices: Ŵijk = Ŵjki . (2.133) Christoffel Symbols as Derivatives of the Metric Tensor It is often convenient to have an explicit expression for the Christoffel symbols in terms of derivatives of the metric tensor. As an initial step, we define the Christoffel symbol of the first kind [ij, k] by [ij, k] ≡ gmk Ŵijm , (2.134) from which the symmetry [ij, k] = [j i, k] follows. Again, this [ij, k] is not a third-rank tensor. From Eq. (2.131b), [ij, k] = gmk εm · = εk · ∂ε i ∂q j ∂ε i . ∂q j (2.135) Now we differentiate gij = ε i · εj , Eq. (2.124): ∂ε j ∂gij ∂ε i = k · εj + εi · k ∂q k ∂q ∂q by Eq. (2.135). Then [ij, k] = = [ik, j ] + [j k, i] (2.136)   1 ∂gik ∂gj k ∂gij , + − 2 ∂q j ∂q i ∂q k (2.137) and Ŵijs = g ks [ij, k]   ∂gik ∂gj k ∂gij 1 . = g ks + − 2 ∂q j ∂q i ∂q k (2.138) 156 Chapter 2 Vector Analysis in Curved Coordinates and Tensors These Christoffel symbols are applied in the next section. Covariant Derivative With the Christoffel symbols, Eq. (2.130b) may be rewritten ∂V′ ∂V i = j εi + V i Ŵijk εk . j ∂q ∂q (2.139) Now, i and k in the last term are dummy indices. Interchanging i and k (in this one term), we have   i ∂V′ ∂V k i (2.140) = + V Ŵ kj ε i . ∂q j ∂q j The quantity in parenthesis is labeled a covariant derivative, V;ji . We have V;ji ≡ ∂V i i . + V k Ŵkj ∂q j (2.141) The ;j subscript indicates differentiation with respect to q j . The differential dV′ becomes dV′ = ∂V′ j dq = [V;ji dq j ]ε i . ∂q j (2.142) A comparison with Eq. (2.113) or (2.122) shows that the quantity in square brackets is the ith contravariant component of a vector. Since dq j is the j th contravariant component of a vector (again, Eq. (2.113)), V;ji must be the ij th component of a (mixed) second-rank tensor (quotient rule). 
The covariant derivatives of the contravariant components of a vector form a mixed second-rank tensor, V;ji . Since the Christoffel symbols vanish in Cartesian coordinates, the covariant derivative and the ordinary partial derivative coincide: ∂V i = V;ji ∂q j (Cartesian coordinates). (2.143) The covariant derivative of a covariant vector Vi is given by (Exercise 2.10.9) Vi;j = ∂Vi − Vk Ŵijk . ∂q j (2.144) Like V;ji , Vi;j is a second-rank tensor. The physical importance of the covariant derivative is that “A consistent replacement of regular partial derivatives by covariant derivatives carries the laws of physics (in component form) from flat space–time into the curved (Riemannian) space–time of general relativity. Indeed, this substitution may be taken as a mathematical statement of Einstein’s principle of equivalence.”21 21 C. W. Misner, K. S. Thorne, and J. A. Wheeler, Gravitation. San Francisco: W. H. Freeman (1973), p. 387. 2.10 General Tensors 157 Geodesics, Parallel Transport The covariant derivative of vectors, tensors, and the Christoffel symbols may also be approached from geodesics. A geodesic in Euclidean space is a straight line. In general, it is the curve of shortest length between two points and the curve along which a freely falling particle moves. The ellipses of planets are geodesics around the sun, and the moon is in free fall around the Earth on a geodesic. Since we can throw a particle in any direction, a geodesic can have any direction through a given point. Hence the geodesic equation can be obtained from Fermat’s variational principle of optics (see Chapter 17 for Euler’s equation), δ ds = 0, (2.145) where ds 2 is the metric, Eq. (2.123), of our space. Using the variation of ds 2 , 2 ds δ ds = dq i dq j δ gij + gij dq i δ dq j + gij dq j δ dq i in Eq. (2.145) yields  i j 1 dq dq dq i d dq j d δgij + gij δ dq j + gij δ dq i ds = 0, 2 ds ds ds ds ds ds (2.146) (2.147) where ds measures the length on the geodesic. 
Expressing the variations δgij = ∂gij δ dq k ≡ (∂k gij )δ dq k ∂q k in terms of the independent variations δ dq k , shifting their derivatives in the other two terms of Eq. (2.147) upon integrating by parts, and renaming dummy summation indices, we obtain   i j 1 dq dq dq i d dq j gik δ dq k ds = 0. ∂k gij − + gkj (2.148) 2 ds ds ds ds ds The integrand of Eq. (2.148), set equal to zero, is the geodesic equation. It is the Euler equation of our variational problem. Upon expanding dgik dq j = (∂j gik ) , ds ds along the geodesic we find dgkj dq i = (∂i gkj ) ds ds d 2q i 1 dq i dq j (∂k gij − ∂j gik − ∂i gkj ) − gik 2 = 0. 2 ds ds ds (2.149) (2.150) Multiplying Eq. (2.150) with g kl and using Eq. (2.125), we find the geodesic equation d 2 q l dq i dq j 1 kl g (∂i gkj + ∂j gik − ∂k gij ) = 0, + ds ds 2 ds 2 (2.151) where the coefficient of the velocities is the Christoffel symbol Ŵijl of Eq. (2.138). Geodesics are curves that are independent of the choice of coordinates. They can be drawn through any point in space in various directions. Since the length ds measured along 158 Chapter 2 Vector Analysis in Curved Coordinates and Tensors the geodesic is a scalar, the velocities dq i /ds (of a freely falling particle along the geodesic, for example) form a contravariant vector. Hence Vk dq k /ds is a well-defined scalar on any geodesic, which we can differentiate in order to define the covariant derivative of any covariant vector Vk . Using Eq. (2.151) we obtain from the scalar   d dVk dq k dq k d 2q k Vk = + Vk ds ds ds ds ds 2 i j ∂Vk dq i dq k k dq dq − V Ŵ k ij ∂q i ds ds ds ds   i k dq dq ∂Vk l = − Ŵik Vl . ds ds ∂q i = (2.152) When the quotient theorem is applied to Eq. (2.152) it tells us that Vk;i = ∂Vk l − Ŵik Vl ∂q i (2.153) is a covariant tensor that defines the covariant derivative of Vk , consistent with Eq. (2.144). Similarly, higher-order tensors may be derived. The second term in Eq. 
(2.153) defines the parallel transport or displacement, l Vl δq i , δVk = Ŵki (2.154) of the covariant vector Vk from the point with coordinates q i to q i + δq i . The parallel transport, δU k , of a contravariant vector U k may be found from the invariance of the scalar product U k Vk under parallel transport, δ(U k Vk ) = δU k Vk + U k δVk = 0, (2.155) in conjunction with the quotient theorem. In summary, when we shift a vector to a neighboring point, parallel transport prevents it from sticking out of our space. This can be clearly seen on the surface of a sphere in spherical geometry, where a tangent vector is supposed to remain a tangent upon translating it along some path on the sphere. This explains why the covariant derivative of a vector or tensor is naturally defined by translating it along a geodesic in the desired direction. Exercises 2.10.1 Equations (2.115) and (2.116) use the scale factor hi , citing Exercise 2.2.3. In Section 2.2 we had restricted ourselves to orthogonal coordinate systems, yet Eq. (2.115) holds for nonorthogonal systems. Justify the use of Eq. (2.115) for nonorthogonal systems. 2.10.2 (a) Show that ε i · ε j = δji . (b) From the result of part (a) show that F i = F · εi and Fi = F · ε i . 2.10 General Tensors 2.10.3 159 For the special case of three-dimensional space (ε1 , ε2 , ε 3 defining a right-handed coordinate system, not necessarily orthogonal), show that εi = εj × εk , εj × εk · εi i, j, k = 1, 2, 3 and cyclic permutations. Note. These contravariant basis vectors εi define the reciprocal lattice space of Section 1.5. 2.10.4 Prove that the contravariant metric tensor is given by g ij = εi · εj . 2.10.5 If the covariant vectors ε i are orthogonal, show that (a) gij is diagonal, (b) g ii = 1/gii (no summation), (c) |ε i | = 1/|ε i |. 2.10.6 Derive the covariant and contravariant metric tensors for circular cylindrical coordinates. 2.10.7 Transform the right-hand side of Eq. 
(2.129), ∇ψ = ∂ψ i ε, ∂q i into the ei basis, and verify that this expression agrees with the gradient developed in Section 2.2 (for orthogonal coordinates). 2.10.8 Evaluate ∂ε i /∂q j for spherical polar coordinates, and from these results calculate Ŵijk for spherical polar coordinates. Note. Exercise 2.5.2 offers a way of calculating the needed partial derivatives. Remember, ε 1 = rˆ 2.10.9 but ε 2 = r θˆ and ε3 = r sin θ ϕ. ˆ Show that the covariant derivative of a covariant vector is given by Vi;j ≡ ∂Vi − Vk Ŵijk . ∂q j Hint. Differentiate ε i · ε j = δji . 2.10.10 Verify that Vi;j = gik V;jk by showing that   k ∂V ∂Vi s m k − V Ŵ = g + V Ŵ s ij ik mj . ∂q j ∂q j 2.10.11 From the circular cylindrical metric tensor gij , calculate the Ŵijk for circular cylindrical coordinates. Note. There are only three nonvanishing Ŵ. 160 Chapter 2 Vector Analysis in Curved Coordinates and Tensors 2.10.12 Using the Ŵijk from Exercise 2.10.11, write out the covariant derivatives V;ji of a vector V in circular cylindrical coordinates. 2.10.13 A triclinic crystal is described using an oblique coordinate system. The three covariant base vectors are ε 1 = 1.5ˆx, ε 2 = 0.4ˆx + 1.6ˆy, ε 3 = 0.2ˆx + 0.3ˆy + 1.0ˆz. (a) Calculate the elements of the covariant metric tensor gij . (b) Calculate the Christoffel three-index symbols, Ŵijk . (This is a “by inspection” calculation.) (c) From the cross-product form of Exercise 2.10.3 calculate the contravariant base vector ε 3 . (d) Using the explicit forms ε3 and ε i , verify that ε 3 · εi = δ 3 i . Note. If it were needed, the contravariant metric tensor could be determined by finding the inverse of gij or by finding the εi and using g ij = ε i · ε j . 2.10.14 Verify that   1 ∂gik ∂gj k ∂gij [ij, k] = + − k . 2 ∂q j ∂q i ∂q Hint. Substitute Eq. (2.135) into the right-hand side and show that an identity results. 2.10.15 Show that for the metric tensor gij ;k = 0, g ij ;k = 0. 
2.10.16 Show that parallel displacement δ dq i = d 2 q i along a geodesic. Construct a geodesic by parallel displacement of δ dq i . 2.10.17 Construct the covariant derivative of a vector V i by parallel transport starting from the limiting procedure V i (q j + dq j ) − V i (q j ) . dq j dq j →0 lim 2.11 TENSOR DERIVATIVE OPERATORS In this section the covariant differentiation of Section 2.10 is applied to rederive the vector differential operations of Section 2.2 in general tensor form. Divergence Replacing the partial derivative by the covariant derivative, we take the divergence to be ∇ · V = V;ii = ∂V i i . + V k Ŵik ∂q i (2.156) 2.11 Tensor Derivative Operators i by Eq. (2.138), we have Expressing Ŵik   1 im ∂gim ∂gkm ∂gik i Ŵik = g + − m . 2 ∂q k ∂q i ∂q 161 (2.157) When contracted with g im the last two terms in the curly bracket cancel, since g im ∂gki ∂gik ∂gkm = g mi m = g im m . i ∂q ∂q ∂q (2.158) Then 1 ∂gim i Ŵik = g im k . 2 ∂q (2.159) From the theory of determinants, Section 3.1, ∂g ∂gim = gg im k , ∂q k ∂q (2.160) where g is the determinant of the metric, g = det(gij ). Substituting this result into Eq. (2.158), we obtain i = Ŵik 1 ∂g 1/2 1 ∂g = . 2g ∂q k g 1/2 ∂q k (2.161) This yields ∇ · V = V;ii = 1 g 1/2 ∂  1/2 k g V . ∂q k (2.162) To compare this result with Eq. (2.21), note that h1 h2 h3 = g 1/2 and V i (contravariant coefficient of ε i ) = Vi / hi (no summation), where Vi is Section 2.2 coefficient of ei . Laplacian In Section 2.2, replacement of the vector V in ∇ · V by ∇ψ led to the Laplacian ∇ · ∇ψ. Here we have a contravariant V i . Using the metric tensor to create a contravariant ∇ψ , we make the substitution ∂ψ V i → g ik k . ∂q Then the Laplacian ∇ · ∇ψ becomes ∇ · ∇ψ =   ∂ 1/2 ik ∂ψ g . g ∂q k g 1/2 ∂q i 1 (2.163) For the orthogonal systems of Section 2.2 the metric tensor is diagonal and the contravariant g ii (no summation) becomes g ii = (hi )−2 . 
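As a symbolic check (mine, not the text's), the general divergence formula of Eq. (2.162) can be verified to reproduce the familiar spherical-polar divergence of Eq. (2.47), once the unit-vector components V_i of Section 2.2 are inserted via V^i = V_i / h_i. The function names below are illustrative, not from the text.

```python
import sympy as sp

# Sketch: check Eq. (2.162), div V = g^{-1/2} d/dq^k ( g^{1/2} V^k ),
# in spherical polar coordinates, where g^{1/2} = h1 h2 h3 = r^2 sin(theta).
r, th, ph = sp.symbols('r theta phi', positive=True)
q = [r, th, ph]
sqrtg = r**2 * sp.sin(th)

# Physical (unit-vector) components of Section 2.2, as arbitrary functions...
Vr, Vt, Vp = [sp.Function(s)(r, th, ph) for s in ('V_r', 'V_theta', 'V_phi')]
# ...related to the contravariant components by V^i = V_i / h_i:
V = [Vr, Vt / r, Vp / (r * sp.sin(th))]

tensor_div = sum(sp.diff(sqrtg * V[k], q[k]) for k in range(3)) / sqrtg

# The familiar spherical-polar divergence, Eq. (2.47), for comparison:
classic_div = (sp.diff(r**2 * Vr, r) / r**2
               + sp.diff(sp.sin(th) * Vt, th) / (r * sp.sin(th))
               + sp.diff(Vp, ph) / (r * sp.sin(th)))

print(sp.simplify(tensor_div - classic_div))   # -> 0
```

The two expressions agree term by term, confirming that the covariant formula carries no extra content in an orthogonal system beyond Eq. (2.47).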
162 Chapter 2 Vector Analysis in Curved Coordinates and Tensors Equation (2.163) reduces to ∂ 1 ∇ · ∇ψ = h1 h2 h3 ∂q i   h1 h2 h3 ∂ψ , h2i ∂q i in agreement with Eq. (2.22). Curl The difference of derivatives that appears in the curl (Eq. (2.27)) will be written ∂Vj ∂Vi − . j ∂q ∂q i Again, remember that the components Vi here are coefficients of the contravariant (nonunit) base vectors ε i . The Vi of Section 2.2 are coefficients of unit vectors ei . Adding and subtracting, we obtain ∂Vj ∂Vj ∂Vi ∂Vi − = j − Vk Ŵijk − + Vk Ŵjki j i ∂q ∂q ∂q ∂q i = Vi;j − Vj ;i (2.164) using the symmetry of the Christoffel symbols. The characteristic difference of derivatives of the curl becomes a difference of covariant derivatives and therefore is a second-rank tensor (covariant in both indices). As emphasized in Section 2.9, the special vector form of the curl exists only in three-dimensional space. From Eq. (2.138) it is clear that all the Christoffel three index symbols vanish in Minkowski space and in the real space–time of special relativity with   1 0 0 0  0 −1 0 0 . gλµ =  0 0 −1 0 0 0 0 −1 Here x0 = ct, x1 = x, x2 = y, and x3 = z. This completes the development of the differential operators in general tensor form. (The gradient was given in Section 2.10.) In addition to the fields of elasticity and electromagnetism, these differentials find application in mechanics (Lagrangian mechanics, Hamiltonian mechanics, and the Euler equations for rotation of rigid body); fluid mechanics; and perhaps most important of all, the curved space–time of modern theories of gravity. Exercises 2.11.1 Verify Eq. (2.160), ∂g ∂gim = gg im k , k ∂q ∂q for the specific case of spherical polar coordinates. 2.11 Additional Readings 163 2.11.2 Starting with the divergence in tensor notation, Eq. (2.162), develop the divergence of a vector in spherical polar coordinates, Eq. (2.47). 2.11.3 The covariant vector Ai is the gradient of a scalar. 
Show that the difference of covariant derivatives Ai;j − Aj ;i vanishes. Additional Readings Dirac, P. A. M., General Theory of Relativity. Princeton, NJ: Princeton University Press (1996). Hartle, J. B., Gravity, San Francisco: Addison-Wesley (2003). This text uses a minimum of tensor analysis. Jeffreys, H., Cartesian Tensors. Cambridge: Cambridge University Press (1952). This is an excellent discussion of Cartesian tensors and their application to a wide variety of fields of classical physics. Lawden, D. F., An Introduction to Tensor Calculus, Relativity and Cosmology, 3rd ed. New York: Wiley (1982). Margenau, H., and G. M. Murphy, The Mathematics of Physics and Chemistry, 2nd ed. Princeton, NJ: Van Nostrand (1956). Chapter 5 covers curvilinear coordinates and 13 specific coordinate systems. Misner, C. W., K. S. Thorne, and J. A. Wheeler, Gravitation. San Francisco: W. H. Freeman (1973), p. 387. Moller, C., The Theory of Relativity. Oxford: Oxford University Press (1955). Reprinted (1972). Most texts on general relativity include a discussion of tensor analysis. Chapter 4 develops tensor calculus, including the topic of dual tensors. The extension to non-Cartesian systems, as required by general relativity, is presented in Chapter 9. Morse, P. M., and H. Feshbach, Methods of Theoretical Physics. New York: McGraw-Hill (1953). Chapter 5 includes a description of several different coordinate systems. Note that Morse and Feshbach are not above using left-handed coordinate systems even for Cartesian coordinates. Elsewhere in this excellent (and difficult) book there are many examples of the use of the various coordinate systems in solving physical problems. Eleven additional fascinating but seldom-encountered orthogonal coordinate systems are discussed in the second (1970) edition of Mathematical Methods for Physicists. Ohanian, H. C., and R. Ruffini, Gravitation and Spacetime, 2nd ed. New York: Norton & Co. (1994). A wellwritten introduction to Riemannian geometry. 
Sokolnikoff, I. S., Tensor Analysis — Theory and Applications, 2nd ed. New York: Wiley (1964). Particularly useful for its extension of tensor analysis to non-Euclidean geometries. Weinberg, S., Gravitation and Cosmology. Principles and Applications of the General Theory of Relativity. New York: Wiley (1972). This book and the one by Misner, Thorne, and Wheeler are the two leading texts on general relativity and cosmology (with tensors in non-Cartesian space). Young, E. C., Vector and Tensor Analysis, 2nd ed. New York: Marcel Dekker (1993). This page intentionally left blank CHAPTER 3 DETERMINANTS AND MATRICES 3.1 DETERMINANTS We begin the study of matrices by solving linear equations that will lead us to determinants and matrices. The concept of determinant and the notation were introduced by the renowned German mathematician and philosopher Gottfried Wilhelm von Leibniz. Homogeneous Linear Equations One of the major applications of determinants is in the establishment of a condition for the existence of a nontrivial solution for a set of linear homogeneous algebraic equations. Suppose we have three unknowns x1 , x2 , x3 (or n equations with n unknowns): a1 x1 + a2 x2 + a3 x3 = 0, b1 x1 + b2 x2 + b3 x3 = 0, (3.1) c1 x1 + c2 x2 + c3 x3 = 0. The problem is to determine under what conditions there is any solution, apart from the trivial one x1 = 0, x2 = 0, x3 = 0. If we use vector notation x = (x1 , x2 , x3 ) for the solution and three rows a = (a1 , a2 , a3 ), b = (b1 , b2 , b3 ), c = (c1 , c2 , c3 ) of coefficients, then the three equations, Eqs. (3.1), become a · x = 0, b · x = 0, c · x = 0. (3.2) These three vector equations have the geometrical interpretation that x is orthogonal to a, b, and c. If the volume spanned by a, b, c given by the determinant (or triple scalar 165 166 Chapter 3 Determinants and Matrices product, see Eq. 
(1.50) of Section 1.5)   a1  D3 = (a × b) · c = det(a, b, c) =  b1  c1 a2 b2 c2  a3  b3  c3  (3.3) is not zero, then there is only the trivial solution x = 0. Conversely, if the aforementioned determinant of coefficients vanishes, then one of the row vectors is a linear combination of the other two. Let us assume that c lies in the plane spanned by a and b, that is, that the third equation is a linear combination of the first two and not independent. Then x is orthogonal to that plane so that x ∼ a × b. Since homogeneous equations can be multiplied by arbitrary numbers, only ratios of the xi are relevant, for which we then obtain ratios of 2 × 2 determinants x1 a 2 b 3 − a 3 b 2 = x3 a1 b2 − a2 b1 (3.4) a1 b3 − a3 b1 x2 =− x3 a1 b2 − a2 b1 from the components of the cross product a × b, provided x3 ∼ a1 b2 − a2 b1 = 0. This is Cramer’s rule for three homogeneous linear equations. Inhomogeneous Linear Equations The simplest case of two equations with two unknowns, a1 x1 + a2 x2 = a3 , b1 x1 + b2 x2 = b3 , (3.5) can be reduced to the previous case by imbedding it in three-dimensional space with a solution vector x = (x1 , x2 , −1) and row vectors a = (a1 , a2 , a3 ), b = (b1 , b2 , b3 ). As before, Eqs. (3.5) in vector notation, a · x = 0 and b · x = 0, imply that x ∼ a × b, so the analog of Eqs. (3.4) holds. For this to apply, though, the third component of a × b must not be zero, that is, a1 b2 − a2 b1 = 0, because the third component of x is −1 = 0. This yields the xi as    a3 a2    a3 b2 − b3 a2  b3 b2    , x1 = = (3.6a) a1 b2 − a2 b1  a1 a2   b1 b2     a1 a3    a1 b3 − a3 b1  b1 b3  . x2 = = (3.6b) a1 b2 − a2 b1  a1 a2   b1 b2  The determinant a a  in the numerator of x1 (x2 ) is obtained from the determinantofthe coefficients b12 b22  by replacing the first (second) column vector by the vector ab33 of the inhomogeneous side of Eq. (3.5). This is Cramer’s rule for a set of two inhomogeneous linear equations with two unknowns. 
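Cramer's rule for the pair of inhomogeneous equations (3.5) is easy to sketch in code. The following is a minimal illustration (the function names are mine, not the text's), implementing Eqs. (3.6a) and (3.6b) directly:

```python
# Cramer's rule for a1*x1 + a2*x2 = a3, b1*x1 + b2*x2 = b3, Eqs. (3.5)-(3.6).

def det2(p, q, r, s):
    """2 x 2 determinant | p q ; r s |."""
    return p * s - q * r

def cramer2(a1, a2, a3, b1, b2, b3):
    d = det2(a1, a2, b1, b2)
    if d == 0:
        raise ValueError("coefficient determinant vanishes; no unique solution")
    # Replace the first (second) column of the coefficient determinant by the
    # inhomogeneous column (a3, b3), as in Eqs. (3.6a) and (3.6b).
    x1 = det2(a3, a2, b3, b2) / d
    x2 = det2(a1, a3, b1, b3) / d
    return x1, x2

# Example: x1 + 2*x2 = 5, 3*x1 + 4*x2 = 11  ->  x1 = 1, x2 = 2
print(cramer2(1, 2, 5, 3, 4, 11))   # -> (1.0, 2.0)
```

The guard on a vanishing determinant mirrors the condition a1 b2 − a2 b1 ≠ 0 stated in the text.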
3.1 Determinants These solutions of linear equations in terms dimensions. The determinant is a square array   a1 a2   b b2 Dn =  1  c1 c2  · · 167 of determinants can be generalized to n ··· ··· ··· ···  an  bn  cn  ·  (3.7) of numbers (or functions), the coefficients of n linear equations in our case here. The number n of columns (and of rows) in the array is sometimes called the order of the determinant. The generalization of the expansion in Eq. (1.48) of the triple scalar product (of row vectors of three linear equations) leads to the following value of the determinant Dn in n dimensions,  Dn = εij k··· ai bj ck · · · , (3.8) i,j,k,... where εij k··· , analogous to the Levi-Civita symbol of Section 2.9, is +1 for even permutations1 (ij k · · · ) of (123 · · · n), −1 for odd permutations, and zero if any index is repeated. Specifically, for the third-order determinant D3 of Eq. (3.3), Eq. (3.8) leads to D3 = +a1 b2 c3 − a1 b3 c2 − a2 b1 c3 + a2 b3 c1 + a3 b1 c2 − a3 b2 c1 . (3.9) The third-order determinant, then, is this particular linear combination of products. Each product contains one and only one element from each row and from each column. Each product is added if the columns (indices) represent an even permutation of (123) and subtracted if we have an odd permutation. Equation (3.3) may be considered shorthand notation for Eq. (3.9). The number of terms in the sum (Eq. (3.8)) is 24 for a fourth-order determinant, n! for an nth-order determinant. Because of the appearance of the negative signs in Eq. (3.9) (and possibly in the individual elements as well), there may be considerable cancellation. It is quite possible that a determinant of large elements will have a very small value. Several useful properties of the nth-order determinants follow from Eq. (3.8). Again, to be specific, Eq. (3.9) for third-order determinants is used to illustrate these properties. 
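The permutation sum of Eq. (3.8) can be written out directly, with the Levi-Civita symbol supplying the sign of each term. This is a sketch for illustration (the helper names are mine), wildly inefficient at n! terms but a faithful transcription of the definition:

```python
from itertools import permutations
from math import prod

def levi_civita(perm):
    """+1 for an even, -1 for an odd permutation of (0, 1, ..., n-1)."""
    sign = 1
    perm = list(perm)
    for i in range(len(perm)):
        while perm[i] != i:            # sort by transpositions, flipping the sign each time
            j = perm[i]
            perm[i], perm[j] = perm[j], perm[i]
            sign = -sign
    return sign

def det(rows):
    """D_n = sum over permutations p of eps(p) * a_{0,p0} * a_{1,p1} * ...,  Eq. (3.8)."""
    n = len(rows)
    return sum(levi_civita(p) * prod(rows[i][p[i]] for i in range(n))
               for p in permutations(range(n)))

print(det([[1, 2], [3, 4]]))   # -> -2
```

For a third-order input this sum produces exactly the six terms of Eq. (3.9), one product per permutation of the column indices.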
Laplacian Development by Minors Equation (3.9) may be written D3 = a1 (b2 c3 − b3 c2 ) − a2 (b1 c3 − b3 c1 ) + a3 (b1 c2 − b2 c1 )        b1 b2   b1 b3   b2 b3        = a1  .  + a3   − a2   c1 c2   c1 c3   c2 c3  (3.10) In general, the nth-order determinant may be expanded as a linear combination of the products of the elements of any row (or any column) and the (n − 1)th-order determinants 1 In a linear sequence abcd · · · , any single, simple transposition of adjacent elements yields an odd permutation of the original sequence: abcd → bacd. Two such transpositions yield an even permutation. In general, an odd number of such interchanges of adjacent elements results in an odd permutation; an even number of such transpositions yields an even permutation. 168 Chapter 3 Determinants and Matrices formed by striking out the row and column of the original determinant in which the element appears. This reduced array (2×2 in this specific example) is called a minor. If the element is in the ith row and the j th column, the sign associated with the product is (−1)i+j . The minor with this sign is called the cofactor. If Mij is used to designate the minor formed by omitting the ith row and the j th column and Cij is the corresponding cofactor, Eq. (3.10) becomes D3 = 3 3   aj C1j . (−1)j +1 aj M1j = j =1 (3.11) j =1 In this case, expanding along the first row, we have i = 1 and the summation over j , the columns. This Laplace expansion may be used to advantage in the evaluation of high-order determinants in which a lot of the elements are zero. For example, to find the value of the determinant    0 1 0 0    −1 0 0 0  ,  (3.12) D=   0 0 0 1  0 0 −1 0  we expand across the top row to obtain    −1 0 0    0 1  . D = (−1)1+2 · (1)  0  0 −1 0  (3.13) Again, expanding across the top row, we get      0 1  0 1  = 1. = D = (−1) · (−1)1+1 · (−1)  −1 0   −1 0  (3.14) (This determinant D (Eq. 
(3.12)) is formed from one of the Dirac matrices appearing in Dirac’s relativistic electron theory in Section 3.4.) Antisymmetry The determinant changes sign if any two rows are interchanged or if any two columns are interchanged. This follows from the even–odd character of the Levi-Civita ε in Eq. (3.8) or explicitly from the form of Eqs. (3.9) and (3.10).2 This property was used in Section 2.9 to develop a totally antisymmetric linear combination. It is also frequently used in quantum mechanics in the construction of a many-particle wave function that, in accordance with the Pauli exclusion principle, will be antisymmetric under the interchange of any two identical spin 21 particles (electrons, protons, neutrons, etc.). 2 The sign reversal is reasonably obvious for the interchange of two adjacent rows (or columns), this clearly being an odd permutation. Show that the interchange of any two rows is still an odd permutation. 3.1 Determinants 169 • As a special case of antisymmetry, any determinant with two rows equal or two columns equal equals zero. • If each element in a row or each element in a column is zero, the determinant is equal to zero. • If each element in a row or each element in a column is multiplied by a constant, the determinant is multiplied by that constant. • The value of a determinant is unchanged if a multiple of one row is added (column by column) to another row or if a multiple of one column is added (row by row) to another column.3 We have   a1   b1   c1 a2 b2 c2   a3   a1 + ka2 b3  =  b1 + kb2 c3   c1 + kc2 a2 b2 c2  a3  b3  . c3  Using the Laplace development on the right-hand side, we obtain       a1 + ka2 a2 a3   a1 a2 a3   a2 a2       b1 + kb2 b2 b3  =  b1 b2 b3  + k  b2 b2       c1 + kc2 c2 c3   c1 c2 c3   c2 c2 (3.15)  a3  b3  , c3  (3.16) then by the property of antisymmetry the second determinant on the right-hand side of Eq. (3.16) vanishes, verifying Eq. (3.15). 
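The Laplace development by minors described above translates into a short recursive routine. The sketch below (my own naming, not the text's) expands along the first row with cofactor signs (−1)^(1+j), and reproduces the value found for the sparse determinant of Eq. (3.12):

```python
# Recursive Laplace expansion along the first row, Eq. (3.11):
# D = sum_j a_{0j} * C_{0j}, with cofactor C_{0j} = (-1)^j * M_{0j}.

def det(a):
    n = len(a)
    if n == 1:
        return a[0][0]
    total = 0
    for j in range(n):
        # Minor M_{0j}: strike out row 0 and column j.
        minor = [row[:j] + row[j + 1:] for row in a[1:]]
        total += (-1) ** j * a[0][j] * det(minor)
    return total

D = [[0, 1, 0, 0],
     [-1, 0, 0, 0],
     [0, 0, 0, 1],
     [0, 0, -1, 0]]
print(det(D))                                   # -> 1, as in Eq. (3.14)
print(det([[3, 2, 1], [2, 3, 1], [1, 1, 4]]))   # -> 18 (used in Example 3.1.1)
```

As the text notes, the expansion pays off when many elements vanish: each zero entry prunes an entire branch of the recursion.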
As a special case, a determinant is equal to zero if any two rows are proportional or any two columns are proportional. Some useful relations involving determinants or matrices appear in Exercises of Sections 3.2 and 3.4. Returning to the homogeneous Eqs. (3.1) and multiplying the determinant of the coefficients by x1 , then adding x2 times the second column and x3 times the third column, we can directly establish the condition for the presence of a nontrivial solution for Eqs. (3.1):        a1 a2 a3   a1 x1 a2 a3   a1 x1 + a2 x2 + a3 x3 a2 a3        x1  b1 b2 b3  =  b1 x1 b2 b3  =  b1 x1 + b2 x2 + b3 x3 b2 b3   c1 c2 c3   c1 x1 c2 c3   c1 x1 + c2 x2 + c3 x3 c2 c3     0 a2 a3    =  0 b2 b3  = 0. (3.17)  0 c2 c3  Therefore x1 (and x2 and x3 ) must be zero unless the determinant of the coefficients vanishes. Conversely (see text below Eq. (3.3)), we can show that if the determinant of the coefficients vanishes, a nontrivial solution does indeed exist. This is used in Section 9.6 to establish the linear dependence or independence of a set of functions. 3 This derives from the geometric meaning of the determinant as the volume of the parallelepiped spanned by its column vectors. Pulling it to the side without changing its height leaves the volume unchanged. 170 Chapter 3 Determinants and Matrices If our linear equations are inhomogeneous, that is, as in Eqs. (3.5) if the zeros on the right-hand side of Eqs. (3.1) are replaced by a4 , b4 , and c4 , respectively, then from Eq. (3.17) we obtain, instead,    a4 a2 a3     b4 b2 b3     c4 c2 c3  , (3.18) x1 =   a1 a2 a3     b1 b2 b3     c1 c2 c3  which generalizes Eq. (3.6a) to n = 3 dimensions, etc. If the determinant of the coefficients vanishes, the inhomogeneous set of equations has no solution — unless the numerators also vanish. In this case solutions may exist but they are not unique (see Exercise 3.1.3 for a specific example). For numerical work, this determinant solution, Eq. (3.18), is exceedingly unwieldy. 
The determinant may involve large numbers with alternate signs, and in the subtraction of two large numbers the relative error may soar to a point that makes the result worthless. Also, although the determinant method is illustrated here with three equations and three unknowns, we might easily have 200 equations with 200 unknowns, which, involving up to 200! terms in each determinant, pose a challenge even to high-speed computers. There must be a better way. In fact, there are better ways. One of the best is a straightforward process often called Gauss elimination. To illustrate this technique, consider the following set of equations.

Example 3.1.1 GAUSS ELIMINATION

Solve

3x + 2y + z = 11
2x + 3y + z = 13          (3.19)
x + y + 4z = 12.

The determinant of the inhomogeneous linear equations (3.19) is 18, so a solution exists. For convenience and for the optimum numerical accuracy, the equations are rearranged so that the largest coefficients run along the main diagonal (upper left to lower right). This has already been done in the preceding set. The Gauss technique is to use the first equation to eliminate the first unknown, x, from the remaining equations. Then the (new) second equation is used to eliminate y from the last equation. In general, we work down through the set of equations, and then, with one unknown determined, we work back up to solve for each of the other unknowns in succession. Dividing each row by its initial coefficient, we see that Eqs. (3.19) become

x + (2/3)y + (1/3)z = 11/3
x + (3/2)y + (1/2)z = 13/2          (3.20)
x + y + 4z = 12.

Now, using the first equation, we eliminate x from the second and third equations:

x + (2/3)y + (1/3)z = 11/3
(5/6)y + (1/6)z = 17/6          (3.21)
(1/3)y + (11/3)z = 25/3

and

x + (2/3)y + (1/3)z = 11/3
y + (1/5)z = 17/5          (3.22)
y + 11z = 25.

Repeating the technique, we use the new second equation to eliminate y from the third equation:

x + (2/3)y + (1/3)z = 11/3
y + (1/5)z = 17/5          (3.23)
54z = 108,

or z = 2.
Finally, working back up, we get

y + (1/5) × 2 = 17/5, or y = 3.

Then with z and y determined,

x + (2/3) × 3 + (1/3) × 2 = 11/3, and x = 1.

The technique may not seem so elegant as Eq. (3.18), but it is well adapted to computers and is far faster than the time spent with determinants. This Gauss technique may be used to convert a determinant into triangular form:

    | a1  b1  c1 |
D = | 0   b2  c2 |
    | 0   0   c3 |

for a third-order determinant whose elements are not to be confused with those in Eq. (3.3). In this form D = a1 b2 c3. For an nth-order determinant the evaluation of the triangular form requires only n − 1 multiplications, compared with the n! required for the general case.

A variation of this progressive elimination is known as Gauss–Jordan elimination. We start as with the preceding Gauss elimination, but each new equation considered is used to eliminate a variable from all the other equations, not just those below it. If we had used this Gauss–Jordan elimination, Eq. (3.23) would become

x + (1/5)z = 7/5
y + (1/5)z = 17/5          (3.24)
z = 2,

using the second equation of Eqs. (3.22) to eliminate y from both the first and third equations. Then the third equation of Eqs. (3.24) is used to eliminate z from the first and second, giving

x = 1
y = 3          (3.25)
z = 2.

We return to this Gauss–Jordan technique in Section 3.2 for inverting matrices. Another technique suitable for computer use is the Gauss–Seidel iteration technique. Each technique has its advantages and disadvantages. The Gauss and Gauss–Jordan methods may have accuracy problems for large determinants. This is also a problem for matrix inversion (Section 3.2). The Gauss–Seidel method, as an iterative method, may have convergence problems. The IBM Scientific Subroutine Package (SSP) uses Gauss and Gauss–Jordan techniques.
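The forward-elimination and back-substitution procedure of Example 3.1.1 can be sketched in a few lines. This is a minimal illustration (my own function name; no pivoting beyond the text's advice to put the largest coefficients on the diagonal):

```python
# Gauss elimination with back-substitution, as in Example 3.1.1.

def gauss_solve(a, b):
    n = len(b)
    a = [row[:] for row in a]   # work on copies, leave the inputs intact
    b = b[:]
    # Forward elimination: reduce the system to triangular form.
    for i in range(n):
        for k in range(i + 1, n):
            m = a[k][i] / a[i][i]
            for j in range(i, n):
                a[k][j] -= m * a[i][j]
            b[k] -= m * b[i]
    # Back-substitution: work back up through the triangular system.
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        x[i] = (b[i] - sum(a[i][j] * x[j] for j in range(i + 1, n))) / a[i][i]
    return x

# Eqs. (3.19): 3x + 2y + z = 11, 2x + 3y + z = 13, x + y + 4z = 12
print(gauss_solve([[3, 2, 1], [2, 3, 1], [1, 1, 4]], [11, 13, 12]))
# -> approximately [1.0, 3.0, 2.0], i.e. x = 1, y = 3, z = 2
```

A production routine would add partial pivoting to avoid division by a small diagonal element; the sketch keeps to the flow of the worked example.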
The Gauss–Seidel iterative method and the Gauss and Gauss– Jordan elimination methods are discussed in considerable detail by Ralston and Wilf and also by Pennington.4 Computer codes in FORTRAN and other programming languages and extensive literature for the Gauss–Jordan elimination and others are also given by Press et al.5  Linear Dependence of Vectors Two nonzero two-dimensional vectors   a11 = 0, a1 = a12 a2 =  a21 a22  = 0 are defined to be linearly dependent if two numbers x1 , x2 can be found that are not both zero so that the linear relation x1 a1 + x2 a2 = 0 holds. They are linearly independent if x1 = 0 = x2 is the only solution of this linear relation. Writing it in Cartesian components, we obtain two homogeneous linear equations a11 x1 + a21 x2 = 0, a12 x1 + a22 x2 = 0 4 A. Ralston and H. Wilf, eds., Mathematical Methods for Digital Computers. New York: Wiley (1960); R. H. Pennington, Introductory Computer Methods and Numerical Analysis. New York: Macmillan (1970). 5 W. H. Press, B. P. Flannery, S. A. Teukolsky, and W. T. Vetterling, Numerical Recipes, 2nd ed. Cambridge, UK: Cambridge University Press (1992), Chapter 2. 3.1 Determinants 173 from which we extract the following criterion for linear independence of two vectors  a using 21  Cramer’s rule. If a1 , a2 span a nonzero area, that is, their determinant aa11 12 a22 = 0, then the set of homogeneous linear equations has only the solution x1 = 0 = x2 . If the determinant is zero, then there is a nontrivial solution x1 , x2 , and our vectors are linearly dependent. In particular, the unit vectors  in the  x- and y-directions are linearly independent, the linear relation x1 xˆ1 + x2 xˆ2 = xx12 = 00 having only the trivial solution x1 = 0 = x2 . Three or more vectors in two-dimensional space are always linearly dependent. Thus, the maximum number of linearly independent vectors in two-dimensional space is 2. 
For example, given a1 , a2 , a3 , the linear relation x1 a1 + x2 a2 + x3 a3 = 0 always has nontrivial solutions. If one of the vectors is zero, linear dependence is obvious because the coefficient of the zero vector may be chosen to be nonzero and that of the others as zero. So we assume all of them as nonzero. If a1 and a2 are linearly independent, we write the linear relation a11 x1 + a21 x2 = −a31 x3 , a12 x1 + a22 x2 = −a32 x3 , as a set of two inhomogeneous linear equations and apply Cramer’s rule. Since the determinant is nonzero, we can find a nontrivial solution x1 , x2 for any nonzero x3 . This argument goes through for any pair of linearly independent vectors. If all pairs are linearly dependent, any of these linear relations is a linear relation among the three vectors, and we are finished. If there are more than three vectors, we pick any three of them and apply the foregoing reasoning and put the coefficients of the other vectors, xj = 0, in the linear relation. • Mutually orthogonal vectors are linearly independent. Assume a linear relation i ci vi = 0. Dotting vj into this using vj · vi = 0 for j = i, we obtain cj vj · vj = 0, so every cj = 0 because v2j = 0. It is straightforward to extend these theorems to n or more vectors in n-dimensional Euclidean space. Thus, the maximum number of linearly independent vectors in n-dimensional space is n. The coordinate unit vectors are linearly independent because they span a nonzero parallelepiped in n-dimensional space and their determinant is unity. Gram–Schmidt Procedure In an n-dimensional vector space with an inner (or scalar) product, we can always construct an orthonormal basis of n vectors wi with wi ·wj = δij starting from n linearly independent vectors vi , i = 0, 1, . . . , n − 1. We start by normalizing v0 to unity, defining w0 = √v0 2 . Then we project v0 from v1 , v0 forming u1 = v1 + a10 w0 , with the admixture coefficient a10 chosen so that v0 · u1 = 0. 
Dotting v0 into u1 yields a10 = − v0 ·v21 = −v1 · w0 . Again, we normalize u1 defining w1 = v0 u1 . u21 Here, u21 = 0 because v0 , v1 are linearly independent. This first step generalizes to uj = vj + aj 0 w0 + aj 1 w1 + · · · + ajj −1 wj −1 , with coefficients aj i = −vj · wi . Normalizing wj = u j u2j completes our construction. 174 Chapter 3 Determinants and Matrices It will be noticed that although this Gram–Schmidt procedure is one possible way of constructing an orthogonal or orthonormal set, the vectors wi are not unique. There is an infinite number of possible orthonormal sets. As an illustration of the freedom involved, consider two (nonparallel) vectors A and B in the xy-plane. We may normalize A to unit magnitude and then form B′ = aA + B so that B′ is perpendicular to A. By normalizing B′ we have completed the Gram–Schmidt orthogonalization for two vectors. But any two perpendicular unit vectors, such as xˆ and yˆ , could have been chosen as our orthonormal set. Again, with an infinite number of possible rotations of xˆ and yˆ about the z-axis, we have an infinite number of possible orthonormal sets. Example 3.1.2 VECTORS BY GRAM–SCHMIDT ORTHOGONALIZATION To illustrate the method, we consider two vectors     1 1 , , v1 = v0 = −2 1 √ which are neither orthogonal nor normalized. Normalizing the first vector w0 = v0 / 2, we then construct u1 = v1 + a10 w0 so as to be orthogonal to v0 . This yields √ a10 u1 · v0 = 0 = v1 · v0 + √ v20 = −1 + a10 2, 2 √ so the adjustable admixture coefficient a10 = 1/ 2. As a result,       3 1 1 1 1 u1 = = , + −2 2 1 2 −1 so the second orthonormal vector becomes   1 1 . w1 = √ 2 −1 We check that w0 · w1 = 0. The two vectors w0 , w1 form an orthonormal set of vectors, a basis of two-dimensional Euclidean space.  
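The Gram–Schmidt construction just described is short enough to code directly. The following sketch (pure Python, with my own helper names) reproduces Example 3.1.2:

```python
from math import sqrt

def dot(u, v):
    return sum(ui * vi for ui, vi in zip(u, v))

def normalize(u):
    n = sqrt(dot(u, u))
    return [ui / n for ui in u]

def gram_schmidt(vs):
    """Orthonormalize the linearly independent vectors vs in order."""
    ws = []
    for v in vs:
        # u_j = v_j + sum_i a_{ji} w_i with a_{ji} = -(v_j . w_i),
        # i.e. subtract the projections onto the earlier w_i.
        u = v[:]
        for w in ws:
            c = dot(v, w)
            u = [ui - c * wi for ui, wi in zip(u, w)]
        ws.append(normalize(u))
    return ws

# Example 3.1.2: v0 = (1, 1), v1 = (1, -2)
w0, w1 = gram_schmidt([[1, 1], [1, -2]])
print(w0)            # -> (1, 1)/sqrt(2), approximately [0.7071, 0.7071]
print(w1)            # -> (1, -1)/sqrt(2), approximately [0.7071, -0.7071]
print(dot(w0, w1))   # -> 0.0, orthogonality check
```

As the text emphasizes, the result is one orthonormal set among infinitely many: feeding the same vectors in a different order, or starting from a rotated pair, yields a different but equally valid basis.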
Exercises 3.1.1 3.1.2 Evaluate the following determinants:   1 0 1   (a)  0 1 0  , 1 0 0  1  (b)  3 0 2 1 3  0  2  , 1 Test the set of linear homogeneous equations x + 3y + 3z = 0, √   0 3 √ 1  3 0 (c) √  2 2 0  0 0 x − y + z = 0, to see if it possesses a nontrivial solution, and find one.  0 0  2 √0  . 3  √0 3 0  2x + y + 3z = 0 3.1 Determinants 3.1.3 175 Given the pair of equations x + 2y = 3, 2x + 4y = 6, (a) Show that the determinant of the coefficients vanishes. (b) Show that the numerator determinants (Eq. (3.18)) also vanish. (c) Find at least two solutions. 3.1.4 Express the components of A × B as 2 × 2 determinants. Then show that the dot product A · (A × B) yields a Laplacian expansion of a 3 × 3 determinant. Finally, note that two rows of the 3 × 3 determinant are identical and hence that A · (A × B) = 0. 3.1.5 If Cij is the cofactor of element aij (formed by striking out the ith row and j th column and including a sign (−1)i+j ), show that (a) (b) 3.1.6 i aij Cij = i aj i Cj i = |A|, where |A| is the determinant with the elements aij , i aij Cik = i aj i Cki = 0, j = k. A determinant with all elements of order unity may be surprisingly small. The Hilbert determinant Hij = (i + j − 1)−1 , i, j = 1, 2, . . . , n is notorious for its small values. (a) (b) Calculate the value of the Hilbert determinants of order n for n = 1, 2, and 3. If an appropriate subroutine is available, find the Hilbert determinants of order n for n = 4, 5, and 6. ANS. 3.1.7 n 1 2 3 4 5 6 Det(Hn ) 1. 8.33333 × 10−2 4.62963 × 10−4 1.65344 × 10−7 3.74930 × 10−12 5.36730 × 10−18 Solve the following set of linear simultaneous equations. Give the results to five decimal places. 1.0x1 + 0.9x2 + 0.8x3 + 0.4x4 + 0.1x5 = 1.0 0.9x1 + 1.0x2 + 0.8x3 + 0.5x4 + 0.2x5 + 0.1x6 = 0.9 0.8x1 + 0.8x2 + 1.0x3 + 0.7x4 + 0.4x5 + 0.2x6 = 0.8 0.4x1 + 0.5x2 + 0.7x3 + 1.0x4 + 0.6x5 + 0.3x6 = 0.7 0.1x1 + 0.2x2 + 0.4x3 + 0.6x4 + 1.0x5 + 0.5x6 = 0.6 0.1x2 + 0.2x3 + 0.3x4 + 0.5x5 + 1.0x6 = 0.5. Note. 
These equations may also be solved by matrix inversion, Section 3.2.

3.1.8 Solve the linear equations a · x = c, a × x + b = 0 for x = (x1, x2, x3) with constant vectors a ≠ 0, b and constant c.

ANS. x = (c/a²)a + (a × b)/a².

3.1.9 Solve the linear equations a · x = d, b · x = e, c · x = f, for x = (x1, x2, x3) with constant vectors a, b, c and constants d, e, f such that (a × b) · c ≠ 0.

ANS. [(a × b) · c]x = d(b × c) + e(c × a) + f(a × b).

3.1.10 Express in vector form the solution (x1, x2, x3) of ax1 + bx2 + cx3 + d = 0 with constant vectors a, b, c, d so that (a × b) · c ≠ 0.

3.2 MATRICES

Matrix analysis belongs to linear algebra because matrices are linear operators or maps, such as rotations. Suppose, for instance, we rotate the Cartesian coordinates of a two-dimensional space, as in Section 1.2, so that, in vector notation,

x1' = x1 cos ϕ + x2 sin ϕ = Σj a1j xj,
x2' = −x1 sin ϕ + x2 cos ϕ = Σj a2j xj.          (3.26)

We label the array of elements

    ( a11  a12 )
    ( a21  a22 )

a 2 × 2 matrix A consisting of two rows and two columns and consider the vectors x, x′ as 2 × 1 matrices. We take the summation of products in Eq. (3.26) as a definition of matrix multiplication involving the scalar product of each row vector of A with the column vector x. Thus, in matrix notation Eq. (3.26) becomes

x′ = Ax.          (3.27)

To extend this definition of multiplication of a matrix times a column vector to the product of two 2 × 2 matrices, let the coordinate rotation be followed by a second rotation given by matrix B such that

x″ = Bx′.          (3.28)

In component form,

xi″ = Σj bij xj′ = Σj bij Σk ajk xk = Σk (Σj bij ajk) xk.          (3.29)

The summation over j is matrix multiplication defining a matrix C = BA such that

xi″ = Σk cik xk,          (3.30)

or x″ = Cx in matrix notation. Again, this definition involves the scalar products of row vectors of B with column vectors of A.
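The row-times-column rule of Eq. (3.29) and its geometric meaning can be checked with two explicit rotations: composing rotations by ϕ1 and then ϕ2 should give the single rotation by ϕ1 + ϕ2. A small sketch (my own function names, not from the text):

```python
from math import cos, sin, pi

def rot(phi):
    # The matrix A of Eq. (3.26): x' = A x rotates the coordinate axes by phi.
    return [[cos(phi), sin(phi)],
            [-sin(phi), cos(phi)]]

def matmul(b, a):
    # c_ik = sum_j b_ij a_jk: row vectors of B dotted into column vectors of A,
    # as in Eqs. (3.29) and (3.30).
    return [[sum(b[i][j] * a[j][k] for j in range(2)) for k in range(2)]
            for i in range(2)]

C = matmul(rot(pi / 6), rot(pi / 3))   # rotate by pi/3, then by pi/6
print(C)
print(rot(pi / 2))                     # agrees with C to rounding error
```

This is the statement in the text that C = BA carries the unprimed system directly into the double-primed system.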
This definition of matrix multiplication generalizes to m × n matrices and is found useful; indeed, this usefulness is the justification for its existence. The geometrical interpretation is that the matrix product of the two matrices BA is the rotation that carries the unprimed system directly into the double-primed coordinate system. Before passing to formal definitions, the reader should note that operator A is described by its effect on the coordinates or basis vectors. The matrix elements aij constitute a representation of the operator, a representation that depends on the choice of a basis.

The special case where a matrix has one column and n rows is called a column vector, |x⟩, with components xi, i = 1, 2, . . . , n. If A is an n × n matrix and |x⟩ an n-component column vector, A|x⟩ is defined as in Eqs. (3.27) and (3.26). Similarly, if a matrix has one row and n columns, it is called a row vector, ⟨x|, with components xi, i = 1, 2, . . . , n. Clearly, ⟨x| results from |x⟩ by interchanging rows and columns, a matrix operation called transposition; for any matrix A, the transpose Ã is called⁶ "A transpose," with matrix elements (Ã)ik = Aki. Transposing a product of matrices AB reverses the order and gives B̃Ã; similarly, the transpose of A|x⟩ is ⟨x|Ã. The scalar product takes the form ⟨x|y⟩ = Σi xi yi (with xi* in place of xi in a complex vector space). This Dirac bra-ket notation is used extensively in quantum mechanics, in Chapter 10, and here subsequently.

More abstractly, we can define the dual space Ṽ of linear functionals F on a vector space V, where each linear functional F of Ṽ assigns a number F(v) to each vector v so that

F(c1 v1 + c2 v2) = c1 F(v1) + c2 F(v2)

for any vectors v1, v2 from our vector space V and numbers c1, c2. If we define the sum of two functionals by linearity as (F1 + F2)(v) = F1(v) + F2(v), then Ṽ is a linear space by construction.
Riesz' theorem says that there is a one-to-one correspondence between linear functionals $F$ in $\tilde{V}$ and vectors $f$ in a vector space $V$ that has an inner (or scalar) product $\langle f|v\rangle$ defined for any pair of vectors $f, v$. One direction of the proof relies on the scalar product: defining a linear functional $F$ for any vector $f$ of $V$ as $F(v) = \langle f|v\rangle$ for any $v$ of $V$, the linearity of the scalar product in $f$ shows that these functionals form a vector space (necessarily contained in $\tilde{V}$). Note that a linear functional is completely specified when it is defined for every vector $v$ of a given vector space.

On the other hand, starting from any nontrivial linear functional $F$ of $\tilde{V}$, we now construct a unique vector $f$ of $V$ so that $F(v) = f \cdot v$ is given by an inner product. We start from an orthonormal basis $w_i$ of vectors in $V$ using the Gram–Schmidt procedure (see Section 3.2). Take any vector $v$ from $V$ and expand it as $v = \sum_i (w_i \cdot v)\, w_i$. Then the linear functional $F(v) = \sum_i (w_i \cdot v) F(w_i)$ is well defined on $V$. If we define the specific vector $f = \sum_i F(w_i)\, w_i$, then its inner product with an arbitrary vector $v$ is given by
$$\langle f|v\rangle = f \cdot v = \sum_i F(w_i)\,(w_i \cdot v) = F(v),$$
which proves Riesz' theorem.

Basic Definitions

A matrix is defined as a square or rectangular array of numbers or functions that obeys certain laws. This is a perfectly logical extension of familiar mathematical concepts. In arithmetic we deal with single numbers. In the theory of complex variables (Chapter 6) we deal with ordered pairs of numbers, $(1, 2) = 1 + 2i$, in which the ordering is important. We now consider numbers (or functions) ordered in a square or rectangular array. For convenience in later work the numbers are distinguished by two subscripts, the first indicating the row (horizontal) and the second indicating the column (vertical) in which the number appears. For instance, $a_{13}$ is the matrix element in the first row, third column.

^6 Some texts (including ours sometimes) denote A transpose by $\mathsf{A}^T$.
Hence, if A is a matrix with $m$ rows and $n$ columns,
$$\mathsf{A} = \begin{pmatrix} a_{11} & a_{12} & \cdots & a_{1n} \\ a_{21} & a_{22} & \cdots & a_{2n} \\ \cdots & \cdots & \cdots & \cdots \\ a_{m1} & a_{m2} & \cdots & a_{mn} \end{pmatrix}. \tag{3.31}$$
Perhaps the most important fact to note is that the elements $a_{ij}$ are not combined with one another. A matrix is not a determinant. It is an ordered array of numbers, not a single number.

The matrix A, so far just an array of numbers, has the properties we assign to it. Literally, this means constructing a new form of mathematics. We define that matrices A, B, and C, with elements $a_{ij}$, $b_{ij}$, and $c_{ij}$, respectively, combine according to the following rules.

Rank

Looking back at the homogeneous linear Eqs. (3.1), we note that the matrix of coefficients, A, is made up of three row vectors that each represent one linear equation of the set. If their triple scalar product is not zero, then they span a nonzero volume, are linearly independent, and the homogeneous linear equations have only the trivial solution. In this case the matrix is said to have rank 3. In $n$ dimensions the volume represented by the triple scalar product becomes the determinant, det(A), for a square matrix. If det(A) ≠ 0, the $n \times n$ matrix A has rank $n$. The case of Eqs. (3.1), where the vector $\mathbf{c}$ lies in the plane spanned by $\mathbf{a}$ and $\mathbf{b}$, corresponds to rank 2 of the matrix of coefficients, because only two of its row vectors ($\mathbf{a}$, $\mathbf{b}$, corresponding to two equations) are independent. In general, the rank $r$ of a matrix is the maximal number of linearly independent row or column vectors it has, with $0 \le r \le n$.

Equality

Matrix A = matrix B if and only if $a_{ij} = b_{ij}$ for all values of $i$ and $j$. This, of course, requires that A and B each be $m \times n$ arrays ($m$ rows, $n$ columns).

Addition, Subtraction

A ± B = C if and only if $a_{ij} \pm b_{ij} = c_{ij}$ for all values of $i$ and $j$, the elements combining according to the laws of ordinary algebra (or arithmetic if they are simple numbers). This means that A + B = B + A, commutation.
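The rank, the maximal number of linearly independent rows, can be computed by row reduction. A sketch (the `rank` helper is ad hoc; exact arithmetic via the standard `fractions` module avoids rounding a near-zero pivot):

```python
from fractions import Fraction

def rank(M):
    # Gaussian elimination; the rank is the number of nonzero pivots found
    M = [[Fraction(x) for x in row] for row in M]
    r = 0
    for c in range(len(M[0])):
        piv = next((i for i in range(r, len(M)) if M[i][c] != 0), None)
        if piv is None:
            continue                      # no pivot in this column
        M[r], M[piv] = M[piv], M[r]
        for i in range(len(M)):
            if i != r and M[i][c] != 0:
                f = M[i][c] / M[r][c]
                M[i] = [a - f*b for a, b in zip(M[i], M[r])]
        r += 1
    return r

# third row = 2 * second row - first row, so only two rows are independent
assert rank([[1, 2, 3], [4, 5, 6], [7, 8, 9]]) == 2
assert rank([[1, 0, 0], [0, 1, 0], [0, 0, 1]]) == 3
```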
Also, an associative law is satisfied, (A + B) + C = A + (B + C). If all elements are zero, the matrix is called the null matrix and is denoted by O. For all A,
$$\mathsf{A} + \mathsf{O} = \mathsf{O} + \mathsf{A} = \mathsf{A},$$
with
$$\mathsf{O} = \begin{pmatrix} 0 & 0 & 0 & \cdots \\ 0 & 0 & 0 & \cdots \\ 0 & 0 & 0 & \cdots \\ \cdots & \cdots & \cdots & \cdots \end{pmatrix}. \tag{3.32}$$
Such $m \times n$ matrices form a linear space with respect to addition and subtraction.

Multiplication (by a Scalar)

The multiplication of matrix A by the scalar quantity $\alpha$ is defined as
$$\alpha\mathsf{A} = (\alpha\mathsf{A}), \tag{3.33}$$
in which the elements of $\alpha\mathsf{A}$ are $\alpha a_{ij}$; that is, each element of matrix A is multiplied by the scalar factor. This is in striking contrast to the behavior of determinants, in which the factor $\alpha$ multiplies only one column or one row and not every element of the entire determinant. A consequence of this scalar multiplication is that $\alpha\mathsf{A} = \mathsf{A}\alpha$, commutation. If A is a square $n \times n$ matrix, then $\det(\alpha\mathsf{A}) = \alpha^n \det(\mathsf{A})$.

Matrix Multiplication, Inner Product

AB = C if and only if^7
$$c_{ij} = \sum_k a_{ik} b_{kj}. \tag{3.34}$$
The $ij$ element of C is formed as a scalar product of the $i$th row of A with the $j$th column of B (which demands that A have the same number of columns ($n$) as B has rows). The dummy index $k$ takes on all values $1, 2, \ldots, n$ in succession; that is,
$$c_{ij} = a_{i1}b_{1j} + a_{i2}b_{2j} + a_{i3}b_{3j} \tag{3.35}$$
for $n = 3$. Obviously, the dummy index $k$ may be replaced by any other symbol that is not already in use without altering Eq. (3.34). Perhaps the situation may be clarified by stating that Eq. (3.34) defines the method of combining certain matrices. This method of combination, to give it a label, is called matrix multiplication. To illustrate, consider two (so-called Pauli) matrices
$$\sigma_1 = \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix} \quad \text{and} \quad \sigma_3 = \begin{pmatrix} 1 & 0 \\ 0 & -1 \end{pmatrix}. \tag{3.36}$$
The 11 element of the product, $(\sigma_1\sigma_3)_{11}$, is given by the sum of the products of elements of the first row of $\sigma_1$ with the corresponding elements of the first column of $\sigma_3$:
$$(\sigma_1\sigma_3)_{11} = 0 \cdot 1 + 1 \cdot 0 = 0.$$

^7 Some authors follow the summation convention here (compare Section 2.6).
Continuing, we have
$$\sigma_1\sigma_3 = \begin{pmatrix} 0 \cdot 1 + 1 \cdot 0 & 0 \cdot 0 + 1 \cdot (-1) \\ 1 \cdot 1 + 0 \cdot 0 & 1 \cdot 0 + 0 \cdot (-1) \end{pmatrix} = \begin{pmatrix} 0 & -1 \\ 1 & 0 \end{pmatrix}. \tag{3.37}$$
Here $(\sigma_1\sigma_3)_{ij} = (\sigma_1)_{i1}(\sigma_3)_{1j} + (\sigma_1)_{i2}(\sigma_3)_{2j}$. Direct application of the definition of matrix multiplication shows that
$$\sigma_3\sigma_1 = \begin{pmatrix} 0 & 1 \\ -1 & 0 \end{pmatrix} \tag{3.38}$$
and by Eq. (3.37)
$$\sigma_3\sigma_1 = -\sigma_1\sigma_3. \tag{3.39}$$
Except in special cases, matrix multiplication is not commutative:^8
$$\mathsf{AB} \neq \mathsf{BA}. \tag{3.40}$$
However, from the definition of matrix multiplication we can show^9 that an associative law holds, (AB)C = A(BC). There is also a distributive law, A(B + C) = AB + AC. The unit matrix 1 has elements $\delta_{ij}$, the Kronecker delta, and the property that 1A = A1 = A for all A,
$$\mathbf{1} = \begin{pmatrix} 1 & 0 & 0 & 0 & \cdots \\ 0 & 1 & 0 & 0 & \cdots \\ 0 & 0 & 1 & 0 & \cdots \\ 0 & 0 & 0 & 1 & \cdots \\ \cdots & \cdots & \cdots & \cdots & \cdots \end{pmatrix}. \tag{3.41}$$
It should be noted that it is possible for the product of two matrices to be the null matrix without either one being the null matrix. For example, if
$$\mathsf{A} = \begin{pmatrix} 1 & 1 \\ 0 & 0 \end{pmatrix} \quad \text{and} \quad \mathsf{B} = \begin{pmatrix} 1 & 0 \\ -1 & 0 \end{pmatrix},$$
then AB = O. This differs from the multiplication of real or complex numbers, which form a field; the additive and multiplicative structure of matrices is called a ring by mathematicians. See also Exercise 3.2.6(a), from which it is evident that, if AB = O, at least one of the matrices must have a zero determinant (that is, be singular as defined after Eq. (3.50) in this section).

^8 Commutation, or the lack of it, is conveniently described by the commutator bracket symbol, [A, B] = AB − BA. Equation (3.40) becomes [A, B] ≠ 0.
^9 Note that the basic definitions of equality, addition, and multiplication are given in terms of the matrix elements, the $a_{ij}$. All our matrix operations can be carried out in terms of the matrix elements. However, we can also treat a matrix as a single algebraic operator, as in Eq. (3.40). Matrix elements and single operators each have their advantages, as will be seen in the following section. We shall use both approaches.
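Both peculiarities just discussed, noncommutativity of the Pauli matrices and a null product of nonzero factors, can be confirmed directly. A sketch (`matmul` is an ad hoc helper, not from the text):

```python
def matmul(A, B):
    return [[sum(A[i][j]*B[j][k] for j in range(len(B)))
             for k in range(len(B[0]))] for i in range(len(A))]

s1 = [[0, 1], [1, 0]]       # sigma_1 of Eq. (3.36)
s3 = [[1, 0], [0, -1]]      # sigma_3 of Eq. (3.36)
assert matmul(s1, s3) == [[0, -1], [1, 0]]    # Eq. (3.37)
assert matmul(s3, s1) == [[0, 1], [-1, 0]]    # Eq. (3.38)
# sigma_3 sigma_1 = -sigma_1 sigma_3 : the two products anticommute
assert matmul(s3, s1) == [[-x for x in row] for row in matmul(s1, s3)]

# two nonzero matrices whose product is the null matrix
A = [[1, 1], [0, 0]]
B = [[1, 0], [-1, 0]]
assert matmul(A, B) == [[0, 0], [0, 0]]
```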
If A is an $n \times n$ matrix with determinant $|\mathsf{A}| \neq 0$, then it has a unique inverse $\mathsf{A}^{-1}$ satisfying $\mathsf{A}\mathsf{A}^{-1} = \mathsf{A}^{-1}\mathsf{A} = \mathbf{1}$. If B is also an $n \times n$ matrix with inverse $\mathsf{B}^{-1}$, then the product AB has the inverse
$$(\mathsf{AB})^{-1} = \mathsf{B}^{-1}\mathsf{A}^{-1} \tag{3.42}$$
because $\mathsf{ABB}^{-1}\mathsf{A}^{-1} = \mathbf{1} = \mathsf{B}^{-1}\mathsf{A}^{-1}\mathsf{AB}$ (see also Exercises 3.2.31 and 3.2.32).

The product theorem, which says that the determinant of the product, $|\mathsf{AB}|$, of two $n \times n$ matrices A and B is equal to the product of the determinants, $|\mathsf{A}||\mathsf{B}|$, links matrices with determinants. To prove this, consider the $n$ column vectors $c_k = \bigl(\sum_j a_{ij} b_{jk},\ i = 1, 2, \ldots, n\bigr)$ of the product matrix C = AB for $k = 1, 2, \ldots, n$. Each $c_k = \sum_{j_k} b_{j_k k}\, a_{j_k}$ is a sum of $n$ column vectors $a_{j_k} = (a_{i j_k},\ i = 1, 2, \ldots, n)$. Note that we are now using a different product summation index $j_k$ for each column $c_k$. Since any determinant is linear in its column vectors, $D(b_1 a_1 + b_2 a_2) = b_1 D(a_1) + b_2 D(a_2)$, we can pull the summation sign out in front of the determinant from each column vector in C, together with the common column factor $b_{j_k k}$, so that
$$|\mathsf{C}| = \sum_{j_1, j_2, \ldots, j_n} b_{j_1 1} b_{j_2 2} \cdots b_{j_n n} \det(a_{j_1}, a_{j_2}, \ldots, a_{j_n}). \tag{3.43}$$
If we rearrange the column vectors $a_{j_k}$ of the determinant factor in Eq. (3.43) into the proper order, then we can pull the common factor $\det(a_1, a_2, \ldots, a_n) = |\mathsf{A}|$ in front of the $n$ summation signs in Eq. (3.43). These column permutations generate just the right sign $\varepsilon_{j_1 j_2 \cdots j_n}$ to produce in Eq. (3.43) the expression in Eq. (3.8) for $|\mathsf{B}|$, so
$$|\mathsf{C}| = |\mathsf{A}| \sum_{j_1, j_2, \ldots, j_n} \varepsilon_{j_1 j_2 \cdots j_n} b_{j_1 1} b_{j_2 2} \cdots b_{j_n n} = |\mathsf{A}||\mathsf{B}|, \tag{3.44}$$
which proves the product theorem.

Direct Product

A second procedure for multiplying matrices is known as the direct tensor or Kronecker product. If A is an $m \times m$ matrix and B is an $n \times n$ matrix, then the direct product is
$$\mathsf{A} \otimes \mathsf{B} = \mathsf{C}. \tag{3.45}$$
C is an $mn \times mn$ matrix with elements
$$C_{\alpha\beta} = A_{ij} B_{kl}, \tag{3.46}$$
with $\alpha = n(i - 1) + k$, $\beta = n(j - 1) + l$.
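The product theorem $|\mathsf{AB}| = |\mathsf{A}||\mathsf{B}|$ can be spot-checked with a brute-force determinant built from the permutation sum of Eq. (3.8). A sketch (helper names are ad hoc; the sample matrices are illustrative choices, not from the text):

```python
from fractions import Fraction
from itertools import permutations

def det(A):
    # Leibniz permutation expansion: sum over all permutations with sign
    n = len(A)
    total = Fraction(0)
    for p in permutations(range(n)):
        sign = 1
        for i in range(n):                 # count inversions to get the sign
            for j in range(i + 1, n):
                if p[i] > p[j]:
                    sign = -sign
        term = Fraction(1)
        for i in range(n):
            term *= A[i][p[i]]
        total += sign * term
    return total

def matmul(A, B):
    return [[sum(A[i][j]*B[j][k] for j in range(len(B)))
             for k in range(len(B[0]))] for i in range(len(A))]

A = [[3, 2, 1], [2, 3, 1], [1, 1, 4]]
B = [[1, 0, 2], [0, 1, 1], [3, 1, 0]]
assert det(A) == 18
assert det(matmul(A, B)) == det(A) * det(B)   # the product theorem
```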
For instance, if A and B are both 2 × 2 matrices,
$$\mathsf{A} \otimes \mathsf{B} = \begin{pmatrix} a_{11}\mathsf{B} & a_{12}\mathsf{B} \\ a_{21}\mathsf{B} & a_{22}\mathsf{B} \end{pmatrix} = \begin{pmatrix} a_{11}b_{11} & a_{11}b_{12} & a_{12}b_{11} & a_{12}b_{12} \\ a_{11}b_{21} & a_{11}b_{22} & a_{12}b_{21} & a_{12}b_{22} \\ a_{21}b_{11} & a_{21}b_{12} & a_{22}b_{11} & a_{22}b_{12} \\ a_{21}b_{21} & a_{21}b_{22} & a_{22}b_{21} & a_{22}b_{22} \end{pmatrix}. \tag{3.47}$$
The direct product is associative but not commutative. As an example of the direct product, the Dirac matrices of Section 3.4 may be developed as direct products of the Pauli matrices and the unit matrix. Other examples appear in the construction of groups (see Chapter 4) and in vector or Hilbert space in quantum theory.

Example 3.2.1 DIRECT PRODUCT OF VECTORS

The direct product of two two-dimensional vectors is a four-component vector,
$$\begin{pmatrix} x_0 \\ x_1 \end{pmatrix} \otimes \begin{pmatrix} y_0 \\ y_1 \end{pmatrix} = \begin{pmatrix} x_0 y_0 \\ x_0 y_1 \\ x_1 y_0 \\ x_1 y_1 \end{pmatrix};$$
while the direct product of three such vectors,
$$\begin{pmatrix} x_0 \\ x_1 \end{pmatrix} \otimes \begin{pmatrix} y_0 \\ y_1 \end{pmatrix} \otimes \begin{pmatrix} z_0 \\ z_1 \end{pmatrix} = \begin{pmatrix} x_0 y_0 z_0 \\ x_0 y_0 z_1 \\ x_0 y_1 z_0 \\ x_0 y_1 z_1 \\ x_1 y_0 z_0 \\ x_1 y_0 z_1 \\ x_1 y_1 z_0 \\ x_1 y_1 z_1 \end{pmatrix},$$
is a ($2^3 = 8$)-dimensional vector.

Diagonal Matrices

An important special type of matrix is the square matrix in which all the nondiagonal elements are zero. Specifically, if a 3 × 3 matrix A is diagonal, then
$$\mathsf{A} = \begin{pmatrix} a_{11} & 0 & 0 \\ 0 & a_{22} & 0 \\ 0 & 0 & a_{33} \end{pmatrix}.$$
A physical interpretation of such diagonal matrices and the method of reducing matrices to this diagonal form are considered in Section 3.5. Here we simply note a significant property of diagonal matrices: multiplication of diagonal matrices is commutative, AB = BA, if A and B are each diagonal.

Multiplication by a diagonal matrix $[d_1, d_2, \ldots, d_n]$ that has only nonzero elements on the diagonal is particularly simple:
$$\begin{pmatrix} 1 & 0 \\ 0 & 2 \end{pmatrix} \begin{pmatrix} 1 & 2 \\ 3 & 4 \end{pmatrix} = \begin{pmatrix} 1 & 2 \\ 2 \cdot 3 & 2 \cdot 4 \end{pmatrix} = \begin{pmatrix} 1 & 2 \\ 6 & 8 \end{pmatrix};$$
while the opposite order gives
$$\begin{pmatrix} 1 & 2 \\ 3 & 4 \end{pmatrix} \begin{pmatrix} 1 & 0 \\ 0 & 2 \end{pmatrix} = \begin{pmatrix} 1 & 2 \cdot 2 \\ 3 & 2 \cdot 4 \end{pmatrix} = \begin{pmatrix} 1 & 4 \\ 3 & 8 \end{pmatrix}.$$
Thus, a diagonal matrix does not commute with another matrix unless both are diagonal, or the diagonal matrix is proportional to the unit matrix.
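The block pattern of Eq. (3.47), and the vector direct products of Example 3.2.1, can be generated by a short routine. A sketch (the `kron` helper is ad hoc and mirrors the index convention of Eq. (3.46)):

```python
def kron(A, B):
    # direct (Kronecker) product: C is built from blocks a_ij * B
    m, n = len(A), len(A[0])
    p, q = len(B), len(B[0])
    return [[A[i][j] * B[k][l] for j in range(n) for l in range(q)]
            for i in range(m) for k in range(p)]

A = [[1, 2], [3, 4]]
B = [[0, 1], [1, 0]]
assert kron(A, B) == [[0, 1, 0, 2],
                      [1, 0, 2, 0],
                      [0, 3, 0, 4],
                      [3, 0, 4, 0]]

# direct product of two 2-component column vectors: a 4-component vector,
# ordered (x0 y0, x0 y1, x1 y0, x1 y1) as in Example 3.2.1
x, y = [[1], [2]], [[3], [4]]
assert kron(x, y) == [[3], [4], [6], [8]]
```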
This is borne out by the more general form
$$[d_1, d_2, \ldots, d_n]\,\mathsf{A} = \begin{pmatrix} d_1 & 0 & \cdots & 0 \\ 0 & d_2 & \cdots & 0 \\ \cdots & \cdots & \cdots & \cdots \\ 0 & 0 & \cdots & d_n \end{pmatrix} \begin{pmatrix} a_{11} & a_{12} & \cdots & a_{1n} \\ a_{21} & a_{22} & \cdots & a_{2n} \\ \cdots & \cdots & \cdots & \cdots \\ a_{n1} & a_{n2} & \cdots & a_{nn} \end{pmatrix} = \begin{pmatrix} d_1 a_{11} & d_1 a_{12} & \cdots & d_1 a_{1n} \\ d_2 a_{21} & d_2 a_{22} & \cdots & d_2 a_{2n} \\ \cdots & \cdots & \cdots & \cdots \\ d_n a_{n1} & d_n a_{n2} & \cdots & d_n a_{nn} \end{pmatrix},$$
whereas
$$\mathsf{A}\,[d_1, d_2, \ldots, d_n] = \begin{pmatrix} d_1 a_{11} & d_2 a_{12} & \cdots & d_n a_{1n} \\ d_1 a_{21} & d_2 a_{22} & \cdots & d_n a_{2n} \\ \cdots & \cdots & \cdots & \cdots \\ d_1 a_{n1} & d_2 a_{n2} & \cdots & d_n a_{nn} \end{pmatrix}.$$
Here we have denoted by $[d_1, \ldots, d_n]$ a diagonal matrix with diagonal elements $d_1, \ldots, d_n$. In the special case of multiplying two diagonal matrices, we simply multiply the corresponding diagonal matrix elements, which obviously is commutative.

Trace

In any square matrix the sum of the diagonal elements is called the trace. Clearly the trace is a linear operation:
$$\operatorname{trace}(\mathsf{A} - \mathsf{B}) = \operatorname{trace}(\mathsf{A}) - \operatorname{trace}(\mathsf{B}).$$
One of its interesting and useful properties is that the trace of a product of two matrices A and B is independent of the order of multiplication:
$$\operatorname{trace}(\mathsf{AB}) = \sum_i (\mathsf{AB})_{ii} = \sum_i \sum_j a_{ij} b_{ji} = \sum_j \sum_i b_{ji} a_{ij} = \sum_j (\mathsf{BA})_{jj} = \operatorname{trace}(\mathsf{BA}). \tag{3.48}$$
This holds even though AB ≠ BA. Equation (3.48) means that the trace of any commutator [A, B] = AB − BA is zero. From Eq. (3.48) we obtain
$$\operatorname{trace}(\mathsf{ABC}) = \operatorname{trace}(\mathsf{BCA}) = \operatorname{trace}(\mathsf{CAB}),$$
which shows that the trace is invariant under cyclic permutation of the matrices in a product.

For a real symmetric or a complex Hermitian matrix (see Section 3.4) the trace is the sum, and the determinant the product, of its eigenvalues, and both are coefficients of the characteristic polynomial. In Exercise 3.4.23 the operation of taking the trace selects one term out of a sum of 16 terms. The trace will serve a function relative to matrices similar to that which orthogonality serves for vectors and functions.
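The trace identities of Eq. (3.48) are quick to confirm on small integer matrices. A sketch (helper names and sample matrices are ad hoc):

```python
def matmul(A, B):
    return [[sum(A[i][j]*B[j][k] for j in range(len(B)))
             for k in range(len(B[0]))] for i in range(len(A))]

def trace(A):
    return sum(A[i][i] for i in range(len(A)))

A = [[1, 2], [3, 4]]
B = [[0, 1], [1, 1]]
C = [[2, 0], [1, 3]]

# trace(AB) = trace(BA) even though AB != BA
assert matmul(A, B) != matmul(B, A)
assert trace(matmul(A, B)) == trace(matmul(B, A))

# cyclic invariance: trace(ABC) = trace(BCA) = trace(CAB)
t1 = trace(matmul(matmul(A, B), C))
t2 = trace(matmul(matmul(B, C), A))
t3 = trace(matmul(matmul(C, A), B))
assert t1 == t2 == t3

# the trace of any commutator [A, B] vanishes
comm = [[x - y for x, y in zip(r1, r2)]
        for r1, r2 in zip(matmul(A, B), matmul(B, A))]
assert trace(comm) == 0
```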
In terms of tensors (Section 2.7) the trace is a contraction and, like the contracted second-rank tensor, is a scalar (invariant).

Matrices are used extensively to represent the elements of groups (compare Exercise 3.2.7 and Chapter 4). The trace of the matrix representing the group element is known in group theory as the character. The reason for the special name and special attention is that the trace or character remains invariant under similarity transformations (compare Exercise 3.3.9).

Matrix Inversion

At the beginning of this section matrix A was introduced as the representation of an operator that (linearly) transforms the coordinate axes. A rotation would be one example of such a linear transformation. Now we look for the inverse transformation $\mathsf{A}^{-1}$ that will restore the original coordinate axes. This means, as either a matrix or an operator equation,^10
$$\mathsf{A}\mathsf{A}^{-1} = \mathsf{A}^{-1}\mathsf{A} = \mathbf{1}. \tag{3.49}$$
With $(\mathsf{A}^{-1})_{ij} \equiv a_{ij}^{(-1)}$,
$$a_{ij}^{(-1)} \equiv \frac{C_{ji}}{|\mathsf{A}|}, \tag{3.50}$$
with $C_{ji}$ the cofactor (see the discussion preceding Eq. (3.11)) of $a_{ji}$ and the assumption that the determinant of A, $|\mathsf{A}| \neq 0$. If it is zero, we label A singular. No inverse exists.

There is a wide variety of alternative techniques. One of the best and most commonly used is the Gauss–Jordan matrix inversion technique. The theory is based on the results of Exercises 3.2.34 and 3.2.35, which show that there exist matrices $\mathsf{M}_L$ such that the product $\mathsf{M}_L\mathsf{A}$ will be A but with

a. one row multiplied by a constant, or
b. one row replaced by the original row minus a multiple of another row, or
c. rows interchanged.

Other matrices $\mathsf{M}_R$ operating on the right ($\mathsf{A}\mathsf{M}_R$) can carry out the same operations on the columns of A.

^10 Here and throughout this chapter our matrices have finite rank. If A is an infinite-rank matrix ($n \times n$ with $n \to \infty$), then life is more difficult. For $\mathsf{A}^{-1}$ to be the inverse we must demand that both $\mathsf{A}\mathsf{A}^{-1} = \mathbf{1}$ and $\mathsf{A}^{-1}\mathsf{A} = \mathbf{1}$; one relation no longer implies the other.
This means that the matrix rows and columns may be altered (by matrix multiplication) as though we were dealing with determinants, so we can apply the Gauss–Jordan elimination techniques of Section 3.1 to the matrix elements. Hence there exists a matrix $\mathsf{M}_L$ (or $\mathsf{M}_R$) such that^11
$$\mathsf{M}_L \mathsf{A} = \mathbf{1}. \tag{3.51}$$
Then $\mathsf{M}_L = \mathsf{A}^{-1}$. We determine $\mathsf{M}_L$ by carrying out the identical elimination operations on the unit matrix. Then
$$\mathsf{M}_L \mathbf{1} = \mathsf{M}_L. \tag{3.52}$$
To clarify this, we consider a specific example.

Example 3.2.2 GAUSS–JORDAN MATRIX INVERSION

We want to invert the matrix
$$\mathsf{A} = \begin{pmatrix} 3 & 2 & 1 \\ 2 & 3 & 1 \\ 1 & 1 & 4 \end{pmatrix}. \tag{3.53}$$
For convenience we write A and 1 side by side and carry out the identical operations on each:
$$\begin{pmatrix} 3 & 2 & 1 \\ 2 & 3 & 1 \\ 1 & 1 & 4 \end{pmatrix} \quad \text{and} \quad \begin{pmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{pmatrix}. \tag{3.54}$$
To be systematic, we multiply each row by a suitable factor to get $a_{k1} = 1$,
$$\begin{pmatrix} 1 & \frac{2}{3} & \frac{1}{3} \\ 1 & \frac{3}{2} & \frac{1}{2} \\ 1 & 1 & 4 \end{pmatrix} \quad \text{and} \quad \begin{pmatrix} \frac{1}{3} & 0 & 0 \\ 0 & \frac{1}{2} & 0 \\ 0 & 0 & 1 \end{pmatrix}. \tag{3.55}$$
Subtracting the first row from the second and third rows, we obtain
$$\begin{pmatrix} 1 & \frac{2}{3} & \frac{1}{3} \\ 0 & \frac{5}{6} & \frac{1}{6} \\ 0 & \frac{1}{3} & \frac{11}{3} \end{pmatrix} \quad \text{and} \quad \begin{pmatrix} \frac{1}{3} & 0 & 0 \\ -\frac{1}{3} & \frac{1}{2} & 0 \\ -\frac{1}{3} & 0 & 1 \end{pmatrix}. \tag{3.56}$$
Then we divide the second row (of both matrices) by $\frac{5}{6}$ and subtract $\frac{2}{3}$ times it from the first row and $\frac{1}{3}$ times it from the third row. The results for both matrices are
$$\begin{pmatrix} 1 & 0 & \frac{1}{5} \\ 0 & 1 & \frac{1}{5} \\ 0 & 0 & \frac{18}{5} \end{pmatrix} \quad \text{and} \quad \begin{pmatrix} \frac{3}{5} & -\frac{2}{5} & 0 \\ -\frac{2}{5} & \frac{3}{5} & 0 \\ -\frac{1}{5} & -\frac{1}{5} & 1 \end{pmatrix}. \tag{3.57}$$
We divide the third row (of both matrices) by $\frac{18}{5}$. Then as the last step, $\frac{1}{5}$ times the third row is subtracted from each of the first two rows (of both matrices). Our final pair is
$$\begin{pmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{pmatrix} \quad \text{and} \quad \mathsf{A}^{-1} = \begin{pmatrix} \frac{11}{18} & -\frac{7}{18} & -\frac{1}{18} \\ -\frac{7}{18} & \frac{11}{18} & -\frac{1}{18} \\ -\frac{1}{18} & -\frac{1}{18} & \frac{5}{18} \end{pmatrix}. \tag{3.58}$$
The check is to multiply the original A by the calculated $\mathsf{A}^{-1}$ to see if we really do get the unit matrix 1.

As with the Gauss–Jordan solution of simultaneous linear algebraic equations, this technique is well adapted to computers.

^11 Remember that det(A) ≠ 0.
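The row operations of Example 3.2.2 translate almost line for line into a program. A sketch (the routine name is ad hoc; `fractions` keeps the arithmetic exact, so the result of Eq. (3.58) is reproduced without rounding):

```python
from fractions import Fraction

def gauss_jordan_inverse(A):
    # carry out identical row operations on A and on the unit matrix
    n = len(A)
    M = [[Fraction(x) for x in row] for row in A]
    I = [[Fraction(int(i == j)) for j in range(n)] for i in range(n)]
    for c in range(n):
        piv = next(i for i in range(c, n) if M[i][c] != 0)  # needs det != 0
        M[c], M[piv] = M[piv], M[c]
        I[c], I[piv] = I[piv], I[c]
        d = M[c][c]
        M[c] = [x / d for x in M[c]]          # scale the pivot row
        I[c] = [x / d for x in I[c]]
        for i in range(n):
            if i != c:                        # clear the rest of column c
                f = M[i][c]
                M[i] = [x - f*y for x, y in zip(M[i], M[c])]
                I[i] = [x - f*y for x, y in zip(I[i], I[c])]
    return I

A = [[3, 2, 1], [2, 3, 1], [1, 1, 4]]
inv = gauss_jordan_inverse(A)
expected = [[Fraction(11, 18), Fraction(-7, 18), Fraction(-1, 18)],
            [Fraction(-7, 18), Fraction(11, 18), Fraction(-1, 18)],
            [Fraction(-1, 18), Fraction(-1, 18), Fraction(5, 18)]]
assert inv == expected    # matches Eq. (3.58)
```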
Indeed, this Gauss–Jordan matrix inversion technique will probably be available in the program library as a subroutine (see Sections 2.3 and 2.4 of Press et al., loc. cit.).

For matrices of special form, the inverse matrix can be given in closed form. For example, for
$$\mathsf{A} = \begin{pmatrix} a & b & c \\ b & d & b \\ c & b & e \end{pmatrix}, \tag{3.59}$$
the inverse matrix has a similar but slightly more general form,
$$\mathsf{A}^{-1} = \begin{pmatrix} \alpha & \beta_1 & \gamma \\ \beta_1 & \delta & \beta_2 \\ \gamma & \beta_2 & \epsilon \end{pmatrix}, \tag{3.60}$$
with matrix elements given by
$$D\alpha = ed - b^2, \quad D\delta = ae - c^2, \quad D\epsilon = ad - b^2,$$
$$D\gamma = -(cd - b^2), \quad D\beta_1 = (c - e)b, \quad D\beta_2 = (c - a)b,$$
$$D = b^2(2c - a - e) + d(ae - c^2),$$
where $D = \det(\mathsf{A})$ is the determinant of the matrix A. If $e = a$ in A, then the inverse matrix $\mathsf{A}^{-1}$ also simplifies, to
$$\beta_1 = \beta_2, \quad \epsilon = \alpha, \quad D = (a^2 - c^2)d + 2(c - a)b^2.$$
As a check, let us work out the 11 matrix element of the product $\mathsf{A}\mathsf{A}^{-1} = \mathbf{1}$. We find
$$a\alpha + b\beta_1 + c\gamma = \frac{1}{D}\bigl[a(ed - b^2) + b^2(c - e) - c(cd - b^2)\bigr] = \frac{-ab^2 + aed + 2b^2c - b^2e - c^2d}{D} = \frac{D}{D} = 1.$$
Similarly we check that the 12 matrix element vanishes,
$$a\beta_1 + b\delta + c\beta_2 = \frac{1}{D}\bigl[ab(c - e) + b(ae - c^2) + cb(c - a)\bigr] = 0,$$
and so on. Note, though, that we cannot always go the other way and recover a matrix A of the special form in Eq. (3.59) by solving for the matrix elements $a, b, \ldots$, because not every inverse matrix $\mathsf{A}^{-1}$ of the form in Eq. (3.60) has a corresponding A of the special form in Eq. (3.59), as Example 3.2.2 clearly shows.

Matrices are square or rectangular arrays of numbers that define linear transformations, such as rotations of a coordinate system. As such, they are linear operators. Square matrices may be inverted when their determinant is nonzero. When a matrix defines a system of linear equations, the inverse matrix solves it. Matrices with the same number of rows and columns may be added and subtracted. They form what mathematicians call a ring with a unit and a zero matrix. Matrices are also useful for representing group operations and operators in Hilbert spaces.

Exercises

3.2.1 Show that matrix multiplication is associative, (AB)C = A(BC).
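The closed-form inverse of Eqs. (3.59)–(3.60) can be spot-checked numerically. A sketch (the sample entries $a, b, c, d, e$ are arbitrary choices, not from the text; `fractions` keeps the check exact):

```python
from fractions import Fraction

# sample entries for the special symmetric matrix of Eq. (3.59)
a, b, c, d, e = 2, 1, 3, 5, 4
D = b*b*(2*c - a - e) + d*(a*e - c*c)          # the determinant

alpha = Fraction(e*d - b*b, D)
delta = Fraction(a*e - c*c, D)
eps   = Fraction(a*d - b*b, D)
gamma = Fraction(-(c*d - b*b), D)
beta1 = Fraction((c - e)*b, D)
beta2 = Fraction((c - a)*b, D)

A    = [[a, b, c], [b, d, b], [c, b, e]]
Ainv = [[alpha, beta1, gamma],
        [beta1, delta, beta2],
        [gamma, beta2, eps]]

prod = [[sum(A[i][j]*Ainv[j][k] for j in range(3)) for k in range(3)]
        for i in range(3)]
assert prod == [[1, 0, 0], [0, 1, 0], [0, 0, 1]]   # A A^{-1} = 1
```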
3.2.2 Show that (A + B)(A − B) = A2 − B2 if and only if A and B commute, [A, B] = 0. 3.2.3 Show that matrix A is a linear operator by showing that A(c1 r1 + c2 r2 ) = c1 Ar1 + c2 Ar2 . It can be shown that an n × n matrix is the most general linear operator in an ndimensional vector space. This means that every linear operator in this n-dimensional vector space is equivalent to a matrix. 3.2.4 (a) (b) Complex numbers, a + ib, with a and b real, may be represented by (or are isomorphic with) 2 × 2 matrices:   a b a + ib ↔ . −b a Show that this matrix representation is valid for (i) addition and (ii) multiplication. Find the matrix corresponding to (a + ib)−1 . 188 Chapter 3 Determinants and Matrices 3.2.5 If A is an n × n matrix, show that det(−A) = (−1)n det A. 3.2.6 (a) (b) The matrix equation A2 = 0 does not imply A = 0. Show that the most general 2 × 2 matrix whose square is zero may be written as   ab b2 , −a 2 −ab where a and b are real or complex numbers. If C = A + B, in general det C = det A + det B. Construct a specific numerical example to illustrate this inequality. 3.2.7 Given the three matrices   −1 0 A= , 0 −1 B=   1 , 0 0 1 C=   0 −1 , −1 0 find all possible products of A, B, and C, two at a time, including squares. Express your answers in terms of A, B, and C, and 1, the unit matrix. These three matrices, together with the unit matrix, form a representation of a mathematical group, the vierergruppe (see Chapter 4). 3.2.8 Given  0 K =  −i 0 show that  0 i 0 0, −1 0 Kn = KKK · · · (n factors) = 1 (with the proper choice of n, n = 0). 3.2.9 Verify the Jacobi identity,    A, [B, C] = B, [A, C] − C, [A, B] . This is useful in matrix descriptions of elementary particles (see Eq. (4.16)). As a mnemonic aid, the you might note that the Jacobi identity has the same form as the BAC–CAB rule of Section 1.5. 
3.2.10 Show that the matrices   0 1 0 A = 0 0 0, 0 0 0 satisfy the commutation relations [A, B] = C,  0 B = 0 0 0 0 0 [A, C] = 0,  0 1, 0 and   0 0 1 C = 0 0 0 0 0 0 [B, C] = 0. 3.2 Matrices 3.2.11 Let  0  −1 i=  0 0 and  0 0 , 1 0  0 0 j= 0 1 0 0 1 0  0 −1 −1 0  , 0 0  0 0  Show that (a) (b) 1 0 0 0 0 0 0 −1 189  0 0 −1 0 0 0 0 1 . k= 1 0 0 0 0 −1 0 0 i2 = j2 = k2 = −1, where 1 is the unit matrix. ij = −ji = k, jk = −kj = i, ki = −ik = j. These three matrices (i, j, and k) plus the unit matrix 1 form a basis for quaternions. An alternate basis is provided by the four 2 × 2 matrices, iσ1 , iσ2 , −iσ3 , and 1, where the σ are the Pauli spin matrices of Exercise 3.2.13. 3.2.12 A matrix with elements aij = 0 for j < i may be called upper right triangular. The elements in the lower left (below and to the left of the main diagonal) vanish. Examples are the matrices in Chapters 12 and 13, Exercise 13.1.21, relating power series and eigenfunction expansions. Show that the product of two upper right triangular matrices is an upper right triangular matrix. 3.2.13 The three Pauli spin matrices are     0 −i 0 1 σ1 = , , σ2 = i 0 1 0 and σ3 =   1 0 . 0 −1 Show that (a) (b) (c) (σi )2 = 12 , σj σk = iσl , (j, k, l) = (1, 2, 3), (2, 3, 1), (3, 1, 2) (cyclic permutation), σi σj + σj σi = 2δij 12 ; 12 is the 2 × 2 unit matrix. These matrices were used by Pauli in the nonrelativistic theory of electron spin. 3.2.14 Using the Pauli σi of Exercise 3.2.13, show that (σ · a)(σ · b) = a · b 12 + iσ · (a × b). Here σ ≡ xˆ σ1 + yˆ σ2 + zˆ σ3 , a and b are ordinary vectors, and 12 is the 2 × 2 unit matrix. 190 Chapter 3 Determinants and Matrices 3.2.15 One description of spin 1 particles uses the matrices    0 1 0 0 −i 1 1  1 0 1, My = √  i 0 Mx = √ 2 0 1 0 2 0 i and  Show that (a)  0 −i  , 0  1 0 0 Mz =  0 0 0  . 0 0 −1 [Mx , My ] = iMz , and so on12 (cyclic permutation of indices). 
Using the LeviCivita symbol of Section 2.9, we may write [Mp , Mq ] = iεpqr Mr . (b) (c) 3.2.16 M2 ≡ M2x + M2y + M2z = 2 13 , where 13 is the 3 × 3 unit matrix. [M2 , Mi ] = 0, [Mz , L+ ] = L+ , [L+ , L− ] = 2Mz , where L+ ≡ Mx + iMy , L− ≡ Mx − iMy . Repeat Exercise 3.2.15 using an alternate representation,    0 0 0 0 My =  0 Mx =  0 0 −i  , −i 0 i 0 and  0 −i Mz =  i 0 0 0 0 0 0  i 0, 0  0 0. 0 In Chapter 4 these matrices appear as the generators of the rotation group. 3.2.17 Show that the matrix–vector equation   1 ∂ M · ∇ + 13 ψ =0 c ∂t reproduces Maxwell’s equations in vacuum. Here ψ is a column vector with components ψj = Bj − iEj /c, j = x, y, z. M is a vector whose elements are the angular momentum matrices of Exercise 3.2.16. Note that ε0 µ0 = 1/c2 , 13 is the 3 × 3 unit matrix. 12 [A, B] = AB − BA. 3.2 Matrices 191 From Exercise 3.2.15(b), M2 ψ = 2ψ. A comparison with the Dirac relativistic electron equation suggests that the “particle” of electromagnetic radiation, the photon, has zero rest mass and a spin of 1 (in units of h). 3.2.18 Repeat Exercise 3.2.15, using the matrices for a spin of 3/2, √ √     3 0 0 0 √0 √0 − 3 0 1 3 0 i 3 2 √0  0 −2 0  , , √ My =  Mx =      0 2 √0 0 2 3 2 2 √0 − 3 0 0 0 0 3 0 3 0 and 3.2.19  3 1 0 Mz =   2 0 0  0 0 0 1 0 0  . 0 −1 0  0 0 −3 An operator P commutes with Jx and Jy , the x and y components of an angular momentum operator. Show that P commutes with the third component of angular momentum, that is, that [P, Jz ] = 0. Hint. The angular momentum components must satisfy the commutation relation of Exercise 3.2.15(a). 3.2.20 The L+ and L− matrices of Exercise 3.2.15 are ladder operators (see Chapter 4): L+ operating on a system of spin projection m will raise the spin projection to m + 1 if m is − below its maximum. L+ operating on mmax yields √ zero. L reduces the spin projection in unit steps in a similar fashion. Dividing by 2, we have     0 0 0 0 1 0 L− =  1 0 0  . 
L+ =  0 0 1  , 0 1 0 0 0 0 Show that L+ |−1 = |0, L− |−1 = null column vector, L+ |0 = |1, L− |0 = |−1, L+ |1 = null column vector, L− |1 = |0, where   0 |−1 =  0  , 1   0 |0 =  1  , 0 and   1 |1 =  0  0 represent states of spin projection −1, 0, and 1, respectively. Note. Differential operator analogs of these ladder operators appear in Exercise 12.6.7. 192 Chapter 3 Determinants and Matrices 3.2.21 Vectors A and B are related by the tensor T, B = TA. Given A and B, show that there is no unique solution for the components of T. This is why vector division B/A is undefined (apart from the special case of A and B parallel and T then a scalar). 3.2.22 We might ask for a vector A−1 , an inverse of a given vector A in the sense that A · A−1 = A−1 · A = 1. Show that this relation does not suffice to define A−1 uniquely; A would then have an infinite number of inverses. 3.2.23 If A is a diagonal matrix, with all diagonal elements different, and A and B commute, show that B is diagonal. 3.2.24 If A and B are diagonal, show that A and B commute. 3.2.25 Show that trace(ABC) = trace(CBA) if any two of the three matrices commute. 3.2.26 Angular momentum matrices satisfy a commutation relation [Mj , Mk ] = iMl , j, k, l cyclic. Show that the trace of each angular momentum matrix vanishes. 3.2.27 (a) The operator trace replaces a matrix A by its trace; that is,  aii . trace(A) = i (b) Show that trace is a linear operator. The operator det replaces a matrix A by its determinant; that is, det(A) = determinant of A. Show that det is not a linear operator. 3.2.28 A and B anticommute: BA = −AB. Also, A2 = 1, B2 = 1. Show that trace(A) = trace(B) = 0. Note. The Pauli and Dirac (Section 3.4) matrices are specific examples. 3.2.29 With |x an N -dimensional column vector and y| an N -dimensional row vector, show that  trace |xy| = y|x. Note. |xy| means direct product of column vector |x with row vector y|. The result is a square N × N matrix. 
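The action of the ladder matrices of Exercise 3.2.20 on the three spin-projection states can be verified directly. A sketch (`matvec` is an ad hoc helper; the states are plain lists rather than formal column vectors):

```python
def matvec(M, v):
    return [sum(M[i][j]*v[j] for j in range(len(v))) for i in range(len(M))]

Lp = [[0, 1, 0], [0, 0, 1], [0, 0, 0]]   # raising operator L+
Lm = [[0, 0, 0], [1, 0, 0], [0, 1, 0]]   # lowering operator L-

up, zero, down = [1, 0, 0], [0, 1, 0], [0, 0, 1]   # |1>, |0>, |-1>

assert matvec(Lp, down) == zero          # L+ |-1> = |0>
assert matvec(Lp, zero) == up            # L+ |0>  = |1>
assert matvec(Lp, up)   == [0, 0, 0]     # L+ |1>  = null column vector
assert matvec(Lm, up)   == zero          # L- |1>  = |0>
assert matvec(Lm, zero) == down          # L- |0>  = |-1>
assert matvec(Lm, down) == [0, 0, 0]     # L- |-1> = null column vector
```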
3.2.30 (a) If two nonsingular matrices anticommute, show that the trace of each one is zero. (Nonsingular means that the determinant of the matrix nonzero.) (b) For the conditions of part (a) to hold, A and B must be n × n matrices with n even. Show that if n is odd, a contradiction results. 3.2 Matrices 3.2.31 If a matrix has an inverse, show that the inverse is unique. 3.2.32 If A−1 has elements 193  −1 Cj i (−1) , A ij = aij = |A| where Cj i is the j ith cofactor of |A|, show that A−1 A = 1. Hence A−1 is the inverse of A (if |A| = 0). 3.2.33 Show that det A−1 = (det A)−1 . Hint. Apply the product theorem of Section 3.2. Note. If det A is zero, then A has no inverse. A is singular. 3.2.34 Find the matrices ML such that the product ML A will be A but with: the ith row multiplied by a constant k (aij → kaij , j = 1, 2, 3, . . .); the ith row replaced by the original ith row minus a multiple of the mth row (aij → aij − Kamj , i = 1, 2, 3, . . .); (c) the ith and mth rows interchanged (aij → amj , amj → aij , j = 1, 2, 3, . . .). (a) (b) 3.2.35 Find the matrices MR such that the product AMR will be A but with: the ith column multiplied by a constant k (aj i → kaj i , j = 1, 2, 3, . . .); the ith column replaced by the original ith column minus a multiple of the mth column (aj i → aj i − kaj m , j = 1, 2, 3, . . .); (c) the ith and mth columns interchanged (aj i → aj m , aj m → aj i , j = 1, 2, 3, . . .). (a) (b) 3.2.36 Find the inverse of 3.2.37 (a)   3 2 1 A = 2 2 1. 1 1 4 Rewrite Eq. (2.4) of Chapter 2 (and the corresponding equations for dy and dz) as a single matrix equation |dxk  = J|dqj . J is a matrix of derivatives, the Jacobian matrix. Show that dxk |dxk  = dqi |G|dqj , (b) with the metric (matrix) G having elements gij given by Eq. (2.6). Show that det(J) dq1 dq2 dq3 = dx dy dz, with det(J) the usual Jacobian. 194 Chapter 3 Determinants and Matrices 3.2.38 Matrices are far too useful to remain the exclusive property of physicists. 
They may appear wherever there are linear relations. For instance, in a study of population movement the initial fraction of a fixed population in each of n areas (or industries or religions, etc.) is represented by an n-component column vector P. The movement of people from one area to another in a given time is described by an n × n (stochastic) matrix T. Here Tij is the fraction of the population in the j th area that moves to the ith area. (Those not moving are covered by i = j .) With P describing the initial population distribution, the final population distribution is given by the matrix equation TP = Q. From its definition, ni=1 Pi = 1. (a) Show that conservation of people requires that n  i=1 (b) Tij = 1, j = 1, 2, . . . , n. Prove that n  i=1 Qi = 1 continues the conservation of people. 3.2.39 3.2.40 Given a 6 × 6 matrix A with elements aij = 0.5|i−j | , i = 0, 1, 2, . . . , 5; i = 0, 1, 2, . . . , 5, find A−1 . List its matrix elements to five decimal places.   4 −2 0 0 0 0  −2 5 −2 0 0 0    1 0 −2 5 −2 0 0 −1 .  ANS. A =  0 −2 5 −2 0  3 0   0 0 0 −2 5 −2 0 0 0 0 −2 4 Exercise 3.1.7 may be written in matrix form: Find A−1 and calculate X as A−1 C. AX = C. 3.2.41 (a) Write a subroutine that will multiply complex matrices. Assume that the complex matrices are in a general rectangular form. (b) Test your subroutine by multiplying pairs of the Dirac 4 × 4 matrices, Section 3.4. 3.2.42 (a) Write a subroutine that will call the complex matrix multiplication subroutine of Exercise 3.2.41 and will calculate the commutator bracket of two complex matrices. (b) Test your complex commutator bracket subroutine with the matrices of Exercise 3.2.16. 3.2.43 Interpolating polynomial is the name given to the (n−1)-degree polynomial determined by (and passing through) n points, (xi , yi ) with all the xi distinct. This interpolating polynomial forms a basis for numerical quadratures. 
3.3 Orthogonal Matrices (a) 195 Show that the requirement that an (n − 1)-degree polynomial in x pass through each of the n points (xi , yi ) with all xi distinct leads to n simultaneous equations of the form n−1  j =0 j aj xi = yi , i = 1, 2, . . . , n. (b) Write a computer program that will read in n data points and return the n coefficients aj . Use a subroutine to solve the simultaneous equations if such a subroutine is available. (c) Rewrite the set of simultaneous equations as a matrix equation XA = Y. (d) 3.2.44 Repeat the computer calculation of part (b), but this time solve for vector A by inverting matrix X (again, using a subroutine). A calculation of the values of electrostatic potential inside a cylinder leads to V (0.0) = 52.640 V (0.6) = 25.844 V (0.2) = 48.292 V (0.8) = 12.648 V (0.4) = 38.270 V (1.0) = 0.0. The problem is to determine the values of the argument for which V = 10, 20, 30, 40, and 50. Express V (x) as a series 5n=0 a2n x 2n . (Symmetry requirements in the original problem require that V (x) be an even function of x.) Determine the coefficients a2n . With V (x) now a known function of x, find the root of V (x) − 10 = 0, 0 ≤ x ≤ 1. Repeat for V (x) − 20, and so on. ANS. a0 = 52.640, a2 = −117.676, V (0.6851) = 20. 3.3 ORTHOGONAL MATRICES Ordinary three-dimensional space may be described with the Cartesian coordinates (x1 , x2 , x3 ). We consider a second set of Cartesian coordinates (x1′ , x2′ , x3′ ), whose origin and handedness coincides with that of the first set but whose orientation is different (Fig. 3.1). We can say that the primed coordinate axes have been rotated relative to the initial, unprimed coordinate axes. Since this rotation is a linear operation, we expect a matrix equation relating the primed basis to the unprimed basis. This section repeats portions of Chapters 1 and 2 in a slightly different context and with a different emphasis. Previously, attention was focused on the vector or tensor. 
Chapter 3 Determinants and Matrices

In the case of the tensor, transformation properties were strongly stressed and were critical. Here emphasis is placed on the description of the coordinate rotation itself — the matrix. Transformation properties, the behavior of the matrix when the basis is changed, appear at the end of this section. Sections 3.4 and 3.5 continue with transformation properties in complex vector spaces.

FIGURE 3.1 Cartesian coordinate systems.

Direction Cosines

A unit vector along the $x_1'$-axis ($\hat{\mathbf x}_1'$) may be resolved into components along the $x_1$-, $x_2$-, and $x_3$-axes by the usual projection technique:
$$\hat{\mathbf x}_1' = \hat{\mathbf x}_1 \cos(x_1', x_1) + \hat{\mathbf x}_2 \cos(x_1', x_2) + \hat{\mathbf x}_3 \cos(x_1', x_3). \tag{3.61}$$
Equation (3.61) is a specific example of the linear relations discussed at the beginning of Section 3.2. For convenience these cosines, which are the direction cosines, are labeled
$$\cos(x_1', x_1) = \hat{\mathbf x}_1' \cdot \hat{\mathbf x}_1 = a_{11}, \quad \cos(x_1', x_2) = \hat{\mathbf x}_1' \cdot \hat{\mathbf x}_2 = a_{12}, \quad \cos(x_1', x_3) = \hat{\mathbf x}_1' \cdot \hat{\mathbf x}_3 = a_{13}. \tag{3.62a}$$
Continuing, we have
$$\cos(x_2', x_1) = \hat{\mathbf x}_2' \cdot \hat{\mathbf x}_1 = a_{21}, \qquad \cos(x_2', x_2) = \hat{\mathbf x}_2' \cdot \hat{\mathbf x}_2 = a_{22}, \tag{3.62b}$$
and so on, where $a_{21} \ne a_{12}$ in general. Now, Eq. (3.62) may be rewritten as
$$\hat{\mathbf x}_1' = \hat{\mathbf x}_1 a_{11} + \hat{\mathbf x}_2 a_{12} + \hat{\mathbf x}_3 a_{13}, \tag{3.62c}$$
and also
$$\hat{\mathbf x}_2' = \hat{\mathbf x}_1 a_{21} + \hat{\mathbf x}_2 a_{22} + \hat{\mathbf x}_3 a_{23}, \qquad \hat{\mathbf x}_3' = \hat{\mathbf x}_1 a_{31} + \hat{\mathbf x}_2 a_{32} + \hat{\mathbf x}_3 a_{33}. \tag{3.62d}$$
We may also go the other way by resolving $\hat{\mathbf x}_1$, $\hat{\mathbf x}_2$, and $\hat{\mathbf x}_3$ into components in the primed system. Then
$$\hat{\mathbf x}_1 = \hat{\mathbf x}_1' a_{11} + \hat{\mathbf x}_2' a_{21} + \hat{\mathbf x}_3' a_{31}, \quad \hat{\mathbf x}_2 = \hat{\mathbf x}_1' a_{12} + \hat{\mathbf x}_2' a_{22} + \hat{\mathbf x}_3' a_{32}, \quad \hat{\mathbf x}_3 = \hat{\mathbf x}_1' a_{13} + \hat{\mathbf x}_2' a_{23} + \hat{\mathbf x}_3' a_{33}. \tag{3.63}$$
Associating $\hat{\mathbf x}_1$ and $\hat{\mathbf x}_1'$ with the subscript 1, $\hat{\mathbf x}_2$ and $\hat{\mathbf x}_2'$ with the subscript 2, and $\hat{\mathbf x}_3$ and $\hat{\mathbf x}_3'$ with the subscript 3, we see that in each case the first subscript of $a_{ij}$ refers to the primed unit vector ($\hat{\mathbf x}_1'$, $\hat{\mathbf x}_2'$, $\hat{\mathbf x}_3'$), whereas the second subscript refers to the unprimed unit vector ($\hat{\mathbf x}_1$, $\hat{\mathbf x}_2$, $\hat{\mathbf x}_3$).
Applications to Vectors

If we consider a vector whose components are functions of the position in space, then
$$\mathbf V(x_1, x_2, x_3) = \hat{\mathbf x}_1 V_1 + \hat{\mathbf x}_2 V_2 + \hat{\mathbf x}_3 V_3, \qquad \mathbf V'(x_1', x_2', x_3') = \hat{\mathbf x}_1' V_1' + \hat{\mathbf x}_2' V_2' + \hat{\mathbf x}_3' V_3', \tag{3.64}$$
since the point may be given both by the coordinates $(x_1, x_2, x_3)$ and by the coordinates $(x_1', x_2', x_3')$. Note that $\mathbf V$ and $\mathbf V'$ are geometrically the same vector (but with different components). The coordinate axes are being rotated; the vector stays fixed. Using Eq. (3.63) to eliminate $\hat{\mathbf x}_1$, $\hat{\mathbf x}_2$, and $\hat{\mathbf x}_3$, we may separate Eq. (3.64) into three scalar equations,
$$V_1' = a_{11}V_1 + a_{12}V_2 + a_{13}V_3, \quad V_2' = a_{21}V_1 + a_{22}V_2 + a_{23}V_3, \quad V_3' = a_{31}V_1 + a_{32}V_2 + a_{33}V_3. \tag{3.65}$$
In particular, these relations will hold for the coordinates of a point $(x_1, x_2, x_3)$ and $(x_1', x_2', x_3')$, giving
$$x_1' = a_{11}x_1 + a_{12}x_2 + a_{13}x_3, \quad x_2' = a_{21}x_1 + a_{22}x_2 + a_{23}x_3, \quad x_3' = a_{31}x_1 + a_{32}x_2 + a_{33}x_3, \tag{3.66}$$
and similarly for the primed coordinates. In this notation the set of three equations (3.66) may be written as
$$x_i' = \sum_{j=1}^{3} a_{ij} x_j, \tag{3.67}$$
where $i$ takes on the values 1, 2, and 3 and the result is three separate equations.

Now let us set aside these results and try a different approach to the same problem. We consider two coordinate systems $(x_1, x_2, x_3)$ and $(x_1', x_2', x_3')$ with a common origin and one point, described by $(x_1, x_2, x_3)$ in the unprimed system and by $(x_1', x_2', x_3')$ in the primed system. Note the usual ambiguity. The same symbol $x$ denotes both the coordinate axis and a particular distance along that axis. Since our system is linear, $x_i'$ must be a linear combination of the $x_i$. Let
$$x_i' = \sum_{j=1}^{3} a_{ij} x_j. \tag{3.68}$$
The $a_{ij}$ may be identified as the direction cosines. This identification is carried out for the two-dimensional case later.
If we have two sets of quantities $(V_1, V_2, V_3)$ in the unprimed system and $(V_1', V_2', V_3')$ in the primed system, related in the same way as the coordinates of a point in the two different systems (Eq. (3.68)),
$$V_i' = \sum_{j=1}^{3} a_{ij} V_j, \tag{3.69}$$
then, as in Section 1.2, the quantities $(V_1, V_2, V_3)$ are defined as the components of a vector that stays fixed while the coordinates rotate; that is, a vector is defined in terms of the transformation properties of its components under a rotation of the coordinate axes. In a sense the coordinates of a point have been taken as a prototype vector. The power and usefulness of this definition became apparent in Chapter 2, in which it was extended to define pseudovectors and tensors.

From Eq. (3.67) we can derive interesting information about the $a_{ij}$ that describe the orientation of coordinate system $(x_1', x_2', x_3')$ relative to the system $(x_1, x_2, x_3)$. The length from the origin to the point is the same in both systems. Squaring, for convenience,¹³
$$\sum_i x_i'^2 = \sum_i \Bigl(\sum_j a_{ij} x_j\Bigr)\Bigl(\sum_k a_{ik} x_k\Bigr) = \sum_{j,k} x_j x_k \sum_i a_{ij} a_{ik} = \sum_i x_i^2. \tag{3.70}$$
This can be true for all points if and only if
$$\sum_i a_{ij} a_{ik} = \delta_{jk}, \qquad j, k = 1, 2, 3. \tag{3.71}$$
Note that Eq. (3.71) is equivalent to the matrix equation (3.83); see also Eqs. (3.87a) to (3.87d).

Verification of Eq. (3.71), if needed, may be obtained by returning to Eq. (3.70) and setting $\mathbf r = (x_1, x_2, x_3) = (1, 0, 0)$, $(0, 1, 0)$, $(0, 0, 1)$, $(1, 1, 0)$, and so on to evaluate the nine relations given by Eq. (3.71). This process is valid, since Eq. (3.70) must hold for all $\mathbf r$ for a given set of $a_{ij}$. Equation (3.71), a consequence of requiring that the length remain constant (invariant) under rotation of the coordinate system, is called the orthogonality condition. The $a_{ij}$, written as a matrix $\mathsf{A}$ subject to Eq. (3.71), form an orthogonal matrix, a first definition of an orthogonal matrix. Note that Eq. (3.71) is not matrix multiplication. Rather, it is interpreted later as a scalar product of two columns of $\mathsf{A}$.
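The orthogonality condition (3.71) and the length invariance behind it can be checked numerically; here is a sketch in NumPy, using the $x_3$-axis rotation of Eq. (3.77) as a concrete orthogonal matrix:

```python
import numpy as np

# A sample orthogonal matrix: rotation by 0.7 rad about the x3-axis
# (the form of Eq. (3.77)).
phi = 0.7
A = np.array([[ np.cos(phi), np.sin(phi), 0.0],
              [-np.sin(phi), np.cos(phi), 0.0],
              [ 0.0,         0.0,         1.0]])

# Orthogonality condition (3.71): sum_i a_ij a_ik = delta_jk,
# i.e. the columns of A are orthonormal.
gram = np.einsum('ij,ik->jk', A, A)
print(np.allclose(gram, np.eye(3)))   # True

# Length invariance (3.70): |Ax| = |x| for any point x.
x = np.array([1.0, -2.0, 0.5])
print(np.isclose(np.linalg.norm(A @ x), np.linalg.norm(x)))  # True
```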
¹³Note that two independent indices $j$ and $k$ are used.

In matrix notation Eq. (3.67) becomes
$$|x'\rangle = \mathsf{A}|x\rangle. \tag{3.72}$$

Orthogonality Conditions — Two-Dimensional Case

A better understanding of the $a_{ij}$ and the orthogonality condition may be gained by considering rotation in two dimensions in detail. (This can be thought of as a three-dimensional system with the $x_1$-, $x_2$-axes rotated about $x_3$.) From Fig. 3.2,
$$x_1' = x_1\cos\varphi + x_2\sin\varphi, \qquad x_2' = -x_1\sin\varphi + x_2\cos\varphi. \tag{3.73}$$
Therefore by Eq. (3.72)
$$\mathsf{A} = \begin{pmatrix} \cos\varphi & \sin\varphi \\ -\sin\varphi & \cos\varphi \end{pmatrix}. \tag{3.74}$$
Notice that $\mathsf{A}$ reduces to the unit matrix for $\varphi = 0$. Zero angle rotation means nothing has changed. It is clear from Fig. 3.2 that
$$a_{11} = \cos\varphi = \cos(x_1', x_1), \qquad a_{12} = \sin\varphi = \cos\Bigl(\frac{\pi}{2} - \varphi\Bigr) = \cos(x_1', x_2), \tag{3.75}$$
and so on, thus identifying the matrix elements $a_{ij}$ as the direction cosines. Equation (3.71), the orthogonality condition, becomes
$$\sin^2\varphi + \cos^2\varphi = 1, \qquad \sin\varphi\cos\varphi - \sin\varphi\cos\varphi = 0. \tag{3.76}$$

FIGURE 3.2 Rotation of coordinates.

The extension to three dimensions (rotation of the coordinates through an angle $\varphi$ counterclockwise about $x_3$) is simply
$$\mathsf{A} = \begin{pmatrix} \cos\varphi & \sin\varphi & 0 \\ -\sin\varphi & \cos\varphi & 0 \\ 0 & 0 & 1 \end{pmatrix}. \tag{3.77}$$
The $a_{33} = 1$ expresses the fact that $x_3' = x_3$, since the rotation has been about the $x_3$-axis. The zeros guarantee that $x_1'$ and $x_2'$ do not depend on $x_3$ and that $x_3'$ does not depend on $x_1$ and $x_2$.

Inverse Matrix, A⁻¹

Returning to the general transformation matrix $\mathsf{A}$, the inverse matrix $\mathsf{A}^{-1}$ is defined such that
$$|x\rangle = \mathsf{A}^{-1}|x'\rangle. \tag{3.78}$$
That is, $\mathsf{A}^{-1}$ describes the reverse of the rotation given by $\mathsf{A}$ and returns the coordinate system to its original position. Symbolically, Eqs. (3.72) and (3.78) combine to give
$$|x\rangle = \mathsf{A}^{-1}\mathsf{A}|x\rangle, \tag{3.79}$$
and since $|x\rangle$ is arbitrary,
$$\mathsf{A}^{-1}\mathsf{A} = 1, \tag{3.80}$$
the unit matrix. Similarly,
$$\mathsf{A}\mathsf{A}^{-1} = 1, \tag{3.81}$$
using Eqs. (3.72) and (3.78) and eliminating $|x\rangle$ instead of $|x'\rangle$.
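The statement that $\mathsf{A}^{-1}$ is the reverse rotation can be confirmed directly; a sketch in NumPy for the $x_3$-axis rotation of Eq. (3.77):

```python
import numpy as np

def rot_z(phi):
    """Rotation of the coordinates through angle phi about x3, Eq. (3.77)."""
    c, s = np.cos(phi), np.sin(phi)
    return np.array([[ c,   s,   0.0],
                     [-s,   c,   0.0],
                     [ 0.0, 0.0, 1.0]])

A = rot_z(0.4)

# A^{-1} undoes the rotation: it equals a rotation through -phi ...
print(np.allclose(np.linalg.inv(A), rot_z(-0.4)))    # True
# ... and A^{-1} A = 1 (Eq. (3.80)).
print(np.allclose(np.linalg.inv(A) @ A, np.eye(3)))  # True
```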
Transpose Matrix, Ã

We can determine the elements of our postulated inverse matrix $\mathsf{A}^{-1}$ by employing the orthogonality condition. Equation (3.71), the orthogonality condition, does not conform to our definition of matrix multiplication, but it can be put in the required form by defining a new matrix $\tilde{\mathsf{A}}$ such that
$$\tilde a_{ji} = a_{ij}. \tag{3.82}$$
Equation (3.71) becomes
$$\tilde{\mathsf{A}}\mathsf{A} = 1. \tag{3.83}$$
This is a restatement of the orthogonality condition and may be taken as the constraint defining an orthogonal matrix, a second definition of an orthogonal matrix. Multiplying Eq. (3.83) by $\mathsf{A}^{-1}$ from the right and using Eq. (3.81), we have
$$\tilde{\mathsf{A}} = \mathsf{A}^{-1}, \tag{3.84}$$
a third definition of an orthogonal matrix. This important result, that the inverse equals the transpose, holds only for orthogonal matrices and indeed may be taken as a further restatement of the orthogonality condition. Multiplying Eq. (3.84) by $\mathsf{A}$ from the left, we obtain
$$\mathsf{A}\tilde{\mathsf{A}} = 1 \tag{3.85}$$
or
$$\sum_i a_{ji} a_{ki} = \delta_{jk}, \tag{3.86}$$
which is still another form of the orthogonality condition. Summarizing, the orthogonality condition may be stated in several equivalent ways:
$$\sum_i a_{ij} a_{ik} = \delta_{jk}, \tag{3.87a}$$
$$\sum_i a_{ji} a_{ki} = \delta_{jk}, \tag{3.87b}$$
$$\tilde{\mathsf{A}}\mathsf{A} = \mathsf{A}\tilde{\mathsf{A}} = 1, \tag{3.87c}$$
$$\tilde{\mathsf{A}} = \mathsf{A}^{-1}. \tag{3.87d}$$
Any one of these relations is a necessary and a sufficient condition for $\mathsf{A}$ to be orthogonal.

It is now possible to see and understand why the term orthogonal is appropriate for these matrices. We have the general form
$$\mathsf{A} = \begin{pmatrix} a_{11} & a_{12} & a_{13} \\ a_{21} & a_{22} & a_{23} \\ a_{31} & a_{32} & a_{33} \end{pmatrix},$$
a matrix of direction cosines in which $a_{ij}$ is the cosine of the angle between $x_i'$ and $x_j$. Therefore $a_{11}$, $a_{12}$, $a_{13}$ are the direction cosines of $x_1'$ relative to $x_1$, $x_2$, $x_3$. These three elements of $\mathsf{A}$ define a unit length along $x_1'$, that is, a unit vector $\hat{\mathbf x}_1'$,
$$\hat{\mathbf x}_1' = \hat{\mathbf x}_1 a_{11} + \hat{\mathbf x}_2 a_{12} + \hat{\mathbf x}_3 a_{13}.$$
The orthogonality relation (Eq. (3.86)) is simply a statement that the unit vectors $\hat{\mathbf x}_1'$, $\hat{\mathbf x}_2'$, and $\hat{\mathbf x}_3'$ are mutually perpendicular, or orthogonal.
Our orthogonal transformation matrix $\mathsf{A}$ transforms one orthogonal coordinate system into a second orthogonal coordinate system by rotation and/or reflection. As an example of the use of matrices, the unit vectors in spherical polar coordinates may be written as
$$\begin{pmatrix} \hat{\mathbf r} \\ \hat{\boldsymbol\theta} \\ \hat{\boldsymbol\varphi} \end{pmatrix} = \mathsf{C}\begin{pmatrix} \hat{\mathbf x} \\ \hat{\mathbf y} \\ \hat{\mathbf z} \end{pmatrix}, \tag{3.88}$$
where $\mathsf{C}$ is given in Exercise 2.5.1. This is equivalent to Eqs. (3.62) with $\hat{\mathbf x}_1'$, $\hat{\mathbf x}_2'$, and $\hat{\mathbf x}_3'$ replaced by $\hat{\mathbf r}$, $\hat{\boldsymbol\theta}$, and $\hat{\boldsymbol\varphi}$. From the preceding analysis $\mathsf{C}$ is orthogonal. Therefore the inverse relation becomes
$$\begin{pmatrix} \hat{\mathbf x} \\ \hat{\mathbf y} \\ \hat{\mathbf z} \end{pmatrix} = \mathsf{C}^{-1}\begin{pmatrix} \hat{\mathbf r} \\ \hat{\boldsymbol\theta} \\ \hat{\boldsymbol\varphi} \end{pmatrix} = \tilde{\mathsf{C}}\begin{pmatrix} \hat{\mathbf r} \\ \hat{\boldsymbol\theta} \\ \hat{\boldsymbol\varphi} \end{pmatrix}, \tag{3.89}$$
and Exercise 2.5.5 is solved by inspection. Similar applications of matrix inverses appear in connection with the transformation of a power series into a series of orthogonal functions (Gram–Schmidt orthogonalization in Section 10.3) and the numerical solution of integral equations.

Euler Angles

Our transformation matrix $\mathsf{A}$ contains nine direction cosines. Clearly, only three of these are independent, Eq. (3.71) providing six constraints. Equivalently, we may say that two parameters ($\theta$ and $\varphi$ in spherical polar coordinates) are required to fix the axis of rotation; one additional parameter then describes the amount of rotation about the specified axis. (In the Lagrangian formulation of mechanics (Section 17.3) it is necessary to describe $\mathsf{A}$ by some set of three independent parameters rather than the redundant direction cosines.) The usual choice of parameters is the Euler angles.¹⁴

The goal is to describe the orientation of a final rotated system $(x_1''', x_2''', x_3''')$ relative to some initial coordinate system $(x_1, x_2, x_3)$. The final system is developed in three steps, with each step involving one rotation described by one Euler angle (Fig. 3.3):

1. The coordinates are rotated about the $x_3$-axis through an angle $\alpha$ counterclockwise into new axes denoted by $x_1'$, $x_2'$, $x_3'$. (The $x_3$- and $x_3'$-axes coincide.)
FIGURE 3.3 (a) Rotation about $x_3$ through angle $\alpha$; (b) rotation about $x_2'$ through angle $\beta$; (c) rotation about $x_3''$ through angle $\gamma$.

¹⁴There are almost as many definitions of the Euler angles as there are authors. Here we follow the choice generally made by workers in the area of group theory and the quantum theory of angular momentum (compare Sections 4.3, 4.4).

2. The coordinates are rotated about the $x_2'$-axis¹⁵ through an angle $\beta$ counterclockwise into new axes denoted by $x_1''$, $x_2''$, $x_3''$. (The $x_2'$- and $x_2''$-axes coincide.)
3. The third and final rotation is through an angle $\gamma$ counterclockwise about the $x_3''$-axis, yielding the $x_1'''$, $x_2'''$, $x_3'''$ system. (The $x_3''$- and $x_3'''$-axes coincide.)

The three matrices describing these rotations are
$$\mathsf{R}_z(\alpha) = \begin{pmatrix} \cos\alpha & \sin\alpha & 0 \\ -\sin\alpha & \cos\alpha & 0 \\ 0 & 0 & 1 \end{pmatrix}, \tag{3.90}$$
exactly like Eq. (3.77),
$$\mathsf{R}_y(\beta) = \begin{pmatrix} \cos\beta & 0 & -\sin\beta \\ 0 & 1 & 0 \\ \sin\beta & 0 & \cos\beta \end{pmatrix}, \tag{3.91}$$
and
$$\mathsf{R}_z(\gamma) = \begin{pmatrix} \cos\gamma & \sin\gamma & 0 \\ -\sin\gamma & \cos\gamma & 0 \\ 0 & 0 & 1 \end{pmatrix}. \tag{3.92}$$
The total rotation is described by the triple matrix product,
$$\mathsf{A}(\alpha, \beta, \gamma) = \mathsf{R}_z(\gamma)\mathsf{R}_y(\beta)\mathsf{R}_z(\alpha). \tag{3.93}$$
Note the order: $\mathsf{R}_z(\alpha)$ operates first, then $\mathsf{R}_y(\beta)$, and finally $\mathsf{R}_z(\gamma)$. Direct multiplication gives
$$\mathsf{A}(\alpha, \beta, \gamma) = \begin{pmatrix} \cos\gamma\cos\beta\cos\alpha - \sin\gamma\sin\alpha & \cos\gamma\cos\beta\sin\alpha + \sin\gamma\cos\alpha & -\cos\gamma\sin\beta \\ -\sin\gamma\cos\beta\cos\alpha - \cos\gamma\sin\alpha & -\sin\gamma\cos\beta\sin\alpha + \cos\gamma\cos\alpha & \sin\gamma\sin\beta \\ \sin\beta\cos\alpha & \sin\beta\sin\alpha & \cos\beta \end{pmatrix}. \tag{3.94}$$
Equating $\mathsf{A}(a_{ij})$ with $\mathsf{A}(\alpha, \beta, \gamma)$, element by element, yields the direction cosines in terms of the three Euler angles. We could use this Euler angle identification to verify the direction cosine identities, Eq. (1.46) of Section 1.4, but the approach of Exercise 3.3.3 is much more elegant.

Symmetry Properties

Our matrix description leads to the rotation group SO(3) in three-dimensional space $\mathbb{R}^3$, and the Euler angle description of rotations forms a basis for developing the rotation group in Chapter 4.
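The triple product of Eq. (3.93) can be checked against the closed form of Eq. (3.94) numerically; a sketch in NumPy:

```python
import numpy as np

def rz(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, s, 0.0], [-s, c, 0.0], [0.0, 0.0, 1.0]])

def ry(b):
    c, s = np.cos(b), np.sin(b)
    return np.array([[c, 0.0, -s], [0.0, 1.0, 0.0], [s, 0.0, c]])

def euler(a, b, g):
    """Total rotation of Eq. (3.93): Rz(alpha) first, Rz(gamma) last."""
    return rz(g) @ ry(b) @ rz(a)

def euler_closed(a, b, g):
    """The closed form of Eq. (3.94)."""
    ca, sa = np.cos(a), np.sin(a)
    cb, sb = np.cos(b), np.sin(b)
    cg, sg = np.cos(g), np.sin(g)
    return np.array([
        [ cg*cb*ca - sg*sa,  cg*cb*sa + sg*ca, -cg*sb],
        [-sg*cb*ca - cg*sa, -sg*cb*sa + cg*ca,  sg*sb],
        [ sb*ca,             sb*sa,             cb   ]])

a, b, g = 0.3, 1.1, -0.7
A = euler(a, b, g)
print(np.allclose(A, euler_closed(a, b, g)))  # True
print(np.allclose(A @ A.T, np.eye(3)))        # True: A is orthogonal
```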
Rotations may also be described by the unitary group SU(2) in two-dimensional space $\mathbb{C}^2$ over the complex numbers. The concept of groups such as SU(2) and its generalizations and group-theoretical techniques are often encountered in modern particle physics, where symmetry properties play an important role. The SU(2) group is also considered in Chapter 4. The power and flexibility of matrices pushed quaternions into obscurity early in the 20th century.¹⁶

¹⁵Some authors choose this second rotation to be about the $x_1'$-axis.

It will be noted that matrices have been handled in two ways in the foregoing discussion: by their components and as single entities. Each technique has its own advantages and both are useful.

The transpose matrix is useful in a discussion of symmetry properties. If
$$\mathsf{A} = \tilde{\mathsf{A}}, \qquad a_{ij} = a_{ji}, \tag{3.95}$$
the matrix is called symmetric, whereas if
$$\mathsf{A} = -\tilde{\mathsf{A}}, \qquad a_{ij} = -a_{ji}, \tag{3.96}$$
it is called antisymmetric or skewsymmetric. The diagonal elements of an antisymmetric matrix vanish. It is easy to show that any (square) matrix may be written as the sum of a symmetric matrix and an antisymmetric matrix. Consider the identity
$$\mathsf{A} = \tfrac{1}{2}\bigl[\mathsf{A} + \tilde{\mathsf{A}}\bigr] + \tfrac{1}{2}\bigl[\mathsf{A} - \tilde{\mathsf{A}}\bigr]. \tag{3.97}$$
$[\mathsf{A} + \tilde{\mathsf{A}}]$ is clearly symmetric, whereas $[\mathsf{A} - \tilde{\mathsf{A}}]$ is clearly antisymmetric. This is the matrix analog of Eq. (2.75), Chapter 2, for tensors. Similarly, a function may be broken up into its even and odd parts.

So far we have interpreted the orthogonal matrix as rotating the coordinate system. This changes the components of a fixed vector (not rotating with the coordinates) (Fig. 1.6, Chapter 1). However, an orthogonal matrix $\mathsf{A}$ may be interpreted equally well as a rotation of the vector in the opposite direction (Fig. 3.4). These two possibilities, (1) rotating the vector keeping the coordinates fixed and (2) rotating the coordinates (in the opposite sense) keeping the vector fixed, have a direct analogy in quantum theory.
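The decomposition of Eq. (3.97) is easy to verify numerically; a sketch in NumPy with a random matrix:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4))

# Eq. (3.97): split A into symmetric and antisymmetric parts.
S = 0.5 * (A + A.T)
K = 0.5 * (A - A.T)

print(np.allclose(S, S.T))          # True: S is symmetric
print(np.allclose(K, -K.T))         # True: K is antisymmetric
print(np.allclose(np.diag(K), 0))   # True: its diagonal vanishes
print(np.allclose(S + K, A))        # True: the parts reconstruct A
```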
Rotation (a time transformation) of the state vector gives the Schrödinger picture. Rotation of the basis keeping the state vector fixed yields the Heisenberg picture.

FIGURE 3.4 Fixed coordinates — rotated vector.

¹⁶R. J. Stephenson, Development of vector analysis from quaternions. Am. J. Phys. 34: 194 (1966).

Suppose we interpret matrix $\mathsf{A}$ as rotating a vector $\mathbf r$ into the position shown by $\mathbf r_1$; that is, in a particular coordinate system we have a relation
$$\mathbf r_1 = \mathsf{A}\mathbf r. \tag{3.98}$$
Now let us rotate the coordinates by applying matrix $\mathsf{B}$, which rotates $(x, y, z)$ into $(x', y', z')$:
$$\mathbf r_1' = \mathsf{B}\mathbf r_1 = \mathsf{B}\mathsf{A}\mathbf r = (\mathsf{A}\mathbf r)' = \mathsf{B}\mathsf{A}\bigl(\mathsf{B}^{-1}\mathsf{B}\bigr)\mathbf r = \bigl(\mathsf{B}\mathsf{A}\mathsf{B}^{-1}\bigr)\mathsf{B}\mathbf r = \bigl(\mathsf{B}\mathsf{A}\mathsf{B}^{-1}\bigr)\mathbf r'. \tag{3.99}$$
$\mathsf{B}\mathbf r_1$ is just $\mathbf r_1$ in the new coordinate system, with a similar interpretation holding for $\mathsf{B}\mathbf r$. Hence in this new system ($\mathsf{B}\mathbf r$) is rotated into position ($\mathsf{B}\mathbf r_1$) by the matrix $\mathsf{B}\mathsf{A}\mathsf{B}^{-1}$:
$$\mathsf{B}\mathbf r_1 = \bigl(\mathsf{B}\mathsf{A}\mathsf{B}^{-1}\bigr)\,\mathsf{B}\mathbf r, \qquad \text{that is,} \qquad \mathbf r_1' = \mathsf{A}'\mathbf r'.$$
In the new system the coordinates have been rotated by matrix $\mathsf{B}$; $\mathsf{A}$ has the form $\mathsf{A}'$, in which
$$\mathsf{A}' = \mathsf{B}\mathsf{A}\mathsf{B}^{-1}. \tag{3.100}$$
$\mathsf{A}'$ operates in the $x'$, $y'$, $z'$ space as $\mathsf{A}$ operates in the $x$, $y$, $z$ space.

The transformation defined by Eq. (3.100) with $\mathsf{B}$ any matrix, not necessarily orthogonal, is known as a similarity transformation. In component form Eq. (3.100) becomes
$$a_{ij}' = \sum_{k,l} b_{ik} a_{kl} \bigl(\mathsf{B}^{-1}\bigr)_{lj}. \tag{3.101}$$
Now, if $\mathsf{B}$ is orthogonal,
$$\bigl(\mathsf{B}^{-1}\bigr)_{lj} = \bigl(\tilde{\mathsf{B}}\bigr)_{lj} = b_{jl}, \tag{3.102}$$
and we have
$$a_{ij}' = \sum_{k,l} b_{ik} b_{jl} a_{kl}. \tag{3.103}$$
It may be helpful to think of $\mathsf{A}$ again as an operator, possibly as rotating coordinate axes, relating angular momentum and angular velocity of a rotating solid (Section 3.5). Matrix $\mathsf{A}$ is the representation in a given coordinate system — or basis. But there are directions associated with $\mathsf{A}$ — crystal axes, symmetry axes in the rotating solid, and so on — so that the representation $\mathsf{A}$ depends on the basis. The similarity transformation shows just how the representation changes with a change of basis.

Relation to Tensors

Comparing Eq.
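The content of Eqs. (3.98) to (3.100) can be sketched numerically (NumPy, with rotations about $x_3$ standing in for $\mathsf{A}$ and $\mathsf{B}$):

```python
import numpy as np

def rot_z(phi):
    """Rotation through phi about the x3-axis, the form of Eq. (3.77)."""
    c, s = np.cos(phi), np.sin(phi)
    return np.array([[c, s, 0.0], [-s, c, 0.0], [0.0, 0.0, 1.0]])

A = rot_z(0.5)      # an operator: rotates r into r1 = A r  (Eq. (3.98))
B = rot_z(1.2)      # a change of basis (here itself a rotation)

r = np.array([1.0, 2.0, 3.0])
r1 = A @ r

# Eq. (3.100): in the new basis the same operator is A' = B A B^{-1},
# and it carries r' = B r into r1' = B r1.
Aprime = B @ A @ np.linalg.inv(B)
print(np.allclose(Aprime @ (B @ r), B @ r1))   # True
```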
(3.103) with the equations of Section 2.6, we see that it is the definition of a tensor of second rank. Hence a matrix that transforms by an orthogonal similarity transformation is, by definition, a tensor. Clearly, then, any orthogonal matrix $\mathsf{A}$, interpreted as rotating a vector (Eq. (3.98)), may be called a tensor. If, however, we consider the orthogonal matrix as a collection of fixed direction cosines, giving the new orientation of a coordinate system, there is no tensor property involved.

The symmetry and antisymmetry properties defined earlier are preserved under orthogonal similarity transformations. Let $\mathsf{A}$ be a symmetric matrix, $\mathsf{A} = \tilde{\mathsf{A}}$, and
$$\mathsf{A}' = \mathsf{B}\mathsf{A}\mathsf{B}^{-1}. \tag{3.104}$$
Now,
$$\tilde{\mathsf{A}}' = \widetilde{\mathsf{B}^{-1}}\,\tilde{\mathsf{A}}\,\tilde{\mathsf{B}} = \mathsf{B}\tilde{\mathsf{A}}\mathsf{B}^{-1}, \tag{3.105}$$
since $\mathsf{B}$ is orthogonal. But $\mathsf{A} = \tilde{\mathsf{A}}$. Therefore
$$\tilde{\mathsf{A}}' = \mathsf{B}\mathsf{A}\mathsf{B}^{-1} = \mathsf{A}', \tag{3.106}$$
showing that the property of symmetry is invariant under an orthogonal similarity transformation. In general, symmetry is not preserved under a nonorthogonal similarity transformation.

Exercises

Note. Assume all matrix elements are real.

3.3.1 Show that the product of two orthogonal matrices is orthogonal.
Note. This is a key step in showing that all $n \times n$ orthogonal matrices form a group (Section 4.1).

3.3.2 If $\mathsf{A}$ is orthogonal, show that its determinant is $\pm 1$.

3.3.3 If $\mathsf{A}$ is orthogonal and $\det\mathsf{A} = +1$, show that $(\det\mathsf{A})\,a_{ij} = C_{ij}$, where $C_{ij}$ is the cofactor of $a_{ij}$. This yields the identities of Eq. (1.46), used in Section 1.4 to show that a cross product of vectors (in three-space) is itself a vector.
Hint. Note Exercise 3.2.32.

3.3.4 Another set of Euler rotations in common use is
(1) a rotation about the $x_3$-axis through an angle $\varphi$, counterclockwise,
(2) a rotation about the $x_1'$-axis through an angle $\theta$, counterclockwise,
(3) a rotation about the $x_3''$-axis through an angle $\psi$, counterclockwise.
If
$$\alpha = \varphi - \pi/2, \qquad \beta = \theta, \qquad \gamma = \psi + \pi/2,$$
show that the final systems are identical.
(Equivalently, $\varphi = \alpha + \pi/2$, $\theta = \beta$, $\psi = \gamma - \pi/2$.)

3.3.5 Suppose the Earth is moved (rotated) so that the north pole goes to 30° north, 20° west (original latitude and longitude system) and the 10° west meridian points due south.
(a) What are the Euler angles describing this rotation?
(b) Find the corresponding direction cosines.
$$\text{ANS. (b)}\quad \mathsf{A} = \begin{pmatrix} 0.9551 & -0.2552 & -0.1504 \\ 0.0052 & 0.5221 & -0.8529 \\ 0.2962 & 0.8138 & 0.5000 \end{pmatrix}.$$

3.3.6 Verify that the Euler angle rotation matrix, Eq. (3.94), is invariant under the transformation
$$\alpha \to \alpha + \pi, \qquad \beta \to -\beta, \qquad \gamma \to \gamma - \pi.$$

3.3.7 Show that the Euler angle rotation matrix $\mathsf{A}(\alpha, \beta, \gamma)$ satisfies the following relations:
(a) $\mathsf{A}^{-1}(\alpha, \beta, \gamma) = \tilde{\mathsf{A}}(\alpha, \beta, \gamma)$,
(b) $\mathsf{A}^{-1}(\alpha, \beta, \gamma) = \mathsf{A}(-\gamma, -\beta, -\alpha)$.

3.3.8 Show that the trace of the product of a symmetric and an antisymmetric matrix is zero.

3.3.9 Show that the trace of a matrix remains invariant under similarity transformations.

3.3.10 Show that the determinant of a matrix remains invariant under similarity transformations.
Note. Exercises 3.3.9 and 3.3.10 show that the trace and the determinant are independent of the Cartesian coordinates. They are characteristics of the matrix (operator) itself.

3.3.11 Show that the property of antisymmetry is invariant under orthogonal similarity transformations.

3.3.12 $\mathsf{A}$ is $2 \times 2$ and orthogonal. Find the most general form of
$$\mathsf{A} = \begin{pmatrix} a & b \\ c & d \end{pmatrix}.$$
Compare with two-dimensional rotation.

3.3.13 $|x\rangle$ and $|y\rangle$ are column vectors. Under an orthogonal transformation $\mathsf{S}$, $|x'\rangle = \mathsf{S}|x\rangle$, $|y'\rangle = \mathsf{S}|y\rangle$. Show that the scalar product $\langle x \mid y\rangle$ is invariant under this orthogonal transformation.
Note. This is equivalent to the invariance of the dot product of two vectors, Section 1.3.

3.3.14 Show that the sum of the squares of the elements of a matrix remains invariant under orthogonal similarity transformations.
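The invariances asserted in Exercises 3.3.9 and 3.3.10 are easy to observe numerically; a sketch in NumPy (the random test matrices are an illustrative choice):

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((3, 3))
B = rng.standard_normal((3, 3))   # any invertible B will do here

# A similarity transformation, Eq. (3.100).
Aprime = B @ A @ np.linalg.inv(B)

# Exercises 3.3.9 and 3.3.10: trace and determinant are unchanged.
print(np.isclose(np.trace(Aprime), np.trace(A)))            # True
print(np.isclose(np.linalg.det(Aprime), np.linalg.det(A)))  # True
```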
3.3.15 As a generalization of Exercise 3.3.14, show that
$$\sum_{j,k} S_{jk}' T_{jk}' = \sum_{l,m} S_{lm} T_{lm},$$
where the primed and unprimed elements are related by an orthogonal similarity transformation. This result is useful in deriving invariants in electromagnetic theory (compare Section 4.6).
Note. This product $M_{jk} = S_{jk} T_{jk}$ is sometimes called a Hadamard product. In the framework of tensor analysis, Chapter 2, this exercise becomes a double contraction of two second-rank tensors and therefore is clearly a scalar (invariant).

3.3.16 A rotation $\varphi_1 + \varphi_2$ about the $z$-axis is carried out as two successive rotations $\varphi_1$ and $\varphi_2$, each about the $z$-axis. Use the matrix representation of the rotations to derive the trigonometric identities
$$\cos(\varphi_1 + \varphi_2) = \cos\varphi_1\cos\varphi_2 - \sin\varphi_1\sin\varphi_2,$$
$$\sin(\varphi_1 + \varphi_2) = \sin\varphi_1\cos\varphi_2 + \cos\varphi_1\sin\varphi_2.$$

3.3.17 A column vector $\mathsf{V}$ has components $V_1$ and $V_2$ in an initial (unprimed) system. Calculate $V_1'$ and $V_2'$ for a
(a) rotation of the coordinates through an angle of $\theta$ counterclockwise,
(b) rotation of the vector through an angle of $\theta$ clockwise.
The results for parts (a) and (b) should be identical.

3.3.18 Write a subroutine that will test whether a real $N \times N$ matrix is symmetric. Symmetry may be defined as
$$0 \le |a_{ij} - a_{ji}| \le \varepsilon,$$
where $\varepsilon$ is some small tolerance (which allows for truncation error, and so on, in the computer).

3.4 HERMITIAN MATRICES, UNITARY MATRICES

Definitions

Thus far it has generally been assumed that our linear vector space is a real space and that the matrix elements (the representations of the linear operators) are real. For many calculations in classical physics, real matrix elements will suffice. However, in quantum mechanics complex variables are unavoidable because of the form of the basic commutation relations (or the form of the time-dependent Schrödinger equation). With this in mind, we generalize to the case of complex matrix elements.
To handle these elements, let us define, or label, some new properties.

1. Complex conjugate, $\mathsf{A}^*$, formed by taking the complex conjugate ($i \to -i$) of each element, where $i = \sqrt{-1}$.
2. Adjoint, $\mathsf{A}^\dagger$, formed by transposing $\mathsf{A}^*$,
$$\mathsf{A}^\dagger = \widetilde{\mathsf{A}^*} = \tilde{\mathsf{A}}^*. \tag{3.107}$$
3. Hermitian matrix: The matrix $\mathsf{A}$ is labeled Hermitian (or self-adjoint) if
$$\mathsf{A} = \mathsf{A}^\dagger. \tag{3.108}$$
If $\mathsf{A}$ is real, then $\mathsf{A}^\dagger = \tilde{\mathsf{A}}$, and real Hermitian matrices are real symmetric matrices. In quantum mechanics (or matrix mechanics) matrices are usually constructed to be Hermitian or unitary.
4. Unitary matrix: Matrix $\mathsf{U}$ is labeled unitary if
$$\mathsf{U}^\dagger = \mathsf{U}^{-1}. \tag{3.109}$$
If $\mathsf{U}$ is real, then $\mathsf{U}^{-1} = \tilde{\mathsf{U}}$, so real unitary matrices are orthogonal matrices. This represents a generalization of the concept of orthogonal matrix (compare Eq. (3.84)).
5. $(\mathsf{A}\mathsf{B})^* = \mathsf{A}^*\mathsf{B}^*$, $\quad (\mathsf{A}\mathsf{B})^\dagger = \mathsf{B}^\dagger\mathsf{A}^\dagger$.

If the matrix elements are complex, the physicist is almost always concerned with Hermitian and unitary matrices. Unitary matrices are especially important in quantum mechanics because they leave the length of a (complex) vector unchanged — analogous to the operation of an orthogonal matrix on a real vector. It is for this reason that the S matrix of scattering theory is a unitary matrix. One important exception to this interest in unitary matrices is the group of Lorentz matrices, Chapter 4. Using Minkowski space, we see that these matrices are not unitary.

In a complex $n$-dimensional linear space the square of the length of a point $\tilde x = (x_1, x_2, \ldots, x_n)$, or the square of its distance from the origin 0, is defined as $x^\dagger x = \sum x_i^* x_i = \sum |x_i|^2$. If a coordinate transformation $y = \mathsf{U}x$ leaves the distance unchanged, then $x^\dagger x = y^\dagger y = (\mathsf{U}x)^\dagger\mathsf{U}x = x^\dagger\mathsf{U}^\dagger\mathsf{U}x$. Since $x$ is arbitrary it follows that $\mathsf{U}^\dagger\mathsf{U} = 1_n$; that is, $\mathsf{U}$ is a unitary $n \times n$ matrix.
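The length-preserving property of a unitary matrix can be illustrated numerically (a sketch in NumPy; building a sample unitary matrix from the QR factorization of a random complex matrix is an illustrative choice):

```python
import numpy as np

rng = np.random.default_rng(2)
M = rng.standard_normal((3, 3)) + 1j * rng.standard_normal((3, 3))
U, _ = np.linalg.qr(M)   # Q factor of a complex matrix is unitary

# U† = U^{-1}, Eq. (3.109) ...
print(np.allclose(U.conj().T @ U, np.eye(3)))   # True
# ... so U leaves the length of a complex vector unchanged.
x = np.array([1.0 + 2.0j, -0.5j, 3.0])
print(np.isclose(np.linalg.norm(U @ x), np.linalg.norm(x)))  # True
```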
If $x' = \mathsf{A}x$ is a linear map, then its matrix in the new coordinates becomes the unitary (analog of a similarity) transformation
$$\mathsf{A}' = \mathsf{U}\mathsf{A}\mathsf{U}^\dagger, \tag{3.110}$$
because $\mathsf{U}x' = y' = \mathsf{U}\mathsf{A}x = \mathsf{U}\mathsf{A}\mathsf{U}^{-1}y = \mathsf{U}\mathsf{A}\mathsf{U}^\dagger y$.

Pauli and Dirac Matrices

The set of three $2 \times 2$ Pauli matrices $\boldsymbol\sigma$,
$$\sigma_1 = \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix}, \qquad \sigma_2 = \begin{pmatrix} 0 & -i \\ i & 0 \end{pmatrix}, \qquad \sigma_3 = \begin{pmatrix} 1 & 0 \\ 0 & -1 \end{pmatrix}, \tag{3.111}$$
were introduced by W. Pauli to describe a particle of spin 1/2 in nonrelativistic quantum mechanics. It can readily be shown (compare Exercises 3.2.13 and 3.2.14) that the Pauli $\boldsymbol\sigma$ satisfy
$$\sigma_i\sigma_j + \sigma_j\sigma_i = 2\delta_{ij}1_2, \qquad \text{anticommutation,} \tag{3.112}$$
$$\sigma_i\sigma_j = i\sigma_k, \qquad i, j, k \text{ a cyclic permutation of } 1, 2, 3, \tag{3.113}$$
$$(\sigma_i)^2 = 1_2, \tag{3.114}$$
where $1_2$ is the $2 \times 2$ unit matrix. Thus, the vector $\boldsymbol\sigma/2$ satisfies the same commutation relations,
$$[\sigma_i, \sigma_j] \equiv \sigma_i\sigma_j - \sigma_j\sigma_i = 2i\varepsilon_{ijk}\sigma_k, \tag{3.115}$$
as the orbital angular momentum $\mathbf L$ ($\mathbf L \times \mathbf L = i\mathbf L$; see Exercise 2.5.15 and the SO(3) and SU(2) groups in Chapter 4).

The three Pauli matrices $\boldsymbol\sigma$ and the unit matrix form a complete set, so any Hermitian $2 \times 2$ matrix $\mathsf{M}$ may be expanded as
$$\mathsf{M} = m_0 1_2 + m_1\sigma_1 + m_2\sigma_2 + m_3\sigma_3 = m_0 + \mathbf m \cdot \boldsymbol\sigma, \tag{3.116}$$
where the $m_i$ form a constant vector $\mathbf m$. Using $(\sigma_i)^2 = 1_2$ and $\operatorname{trace}(\sigma_i) = 0$, we obtain from Eq. (3.116) the expansion coefficients $m_i$ by forming traces,
$$2m_0 = \operatorname{trace}(\mathsf{M}), \qquad 2m_i = \operatorname{trace}(\mathsf{M}\sigma_i), \quad i = 1, 2, 3. \tag{3.117}$$
Adding and multiplying such $2 \times 2$ matrices, we generate the Pauli algebra.¹⁷ Note that $\operatorname{trace}(\sigma_i) = 0$ for $i = 1, 2, 3$.

In 1927 P. A. M. Dirac extended this formalism to fast-moving particles of spin 1/2, such as electrons (and neutrinos). To include special relativity he started from Einstein's energy relation, $E^2 = \mathbf p^2 c^2 + m^2 c^4$, instead of the nonrelativistic kinetic and potential energy, $E = \mathbf p^2/2m + V$. The key to the Dirac equation is to factorize
$$E^2 - \mathbf p^2 c^2 = E^2 - (c\boldsymbol\sigma \cdot \mathbf p)^2 = (E - c\boldsymbol\sigma \cdot \mathbf p)(E + c\boldsymbol\sigma \cdot \mathbf p) = m^2 c^4, \tag{3.118}$$
using the $2 \times 2$ matrix identity
$$(\boldsymbol\sigma \cdot \mathbf p)^2 = \mathbf p^2 1_2. \tag{3.119}$$
The $2 \times 2$ unit matrix $1_2$ is not written explicitly in Eq. (3.118), and Eq.
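The Pauli relations (3.112), (3.113), and (3.117) are quick to confirm numerically; a sketch in NumPy:

```python
import numpy as np

s1 = np.array([[0, 1], [1, 0]], dtype=complex)
s2 = np.array([[0, -1j], [1j, 0]])
s3 = np.array([[1, 0], [0, -1]], dtype=complex)
sigma = [s1, s2, s3]
I2 = np.eye(2)

# Anticommutation, Eq. (3.112): sigma_i sigma_j + sigma_j sigma_i = 2 delta_ij 1_2.
ok = all(np.allclose(sigma[i] @ sigma[j] + sigma[j] @ sigma[i],
                     2 * (i == j) * I2)
         for i in range(3) for j in range(3))
print(ok)   # True

# Cyclic products, Eq. (3.113): sigma_1 sigma_2 = i sigma_3.
print(np.allclose(s1 @ s2, 1j * s3))   # True

# Trace expansion, Eq. (3.117), for the sample Hermitian M = 2*1_2 + 3*sigma_1.
M = 2 * I2 + 3 * s1
print(np.isclose(np.trace(M) / 2, 2.0),
      np.isclose(np.trace(M @ s1) / 2, 3.0))   # True True
```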
(3.119) follows from Exercise 3.2.14 for $\mathbf a = \mathbf b = \mathbf p$. Equivalently, we can introduce two matrices $\gamma'$ and $\gamma$ to factorize $E^2 - \mathbf p^2 c^2$ directly:
$$\bigl(E\gamma' \otimes 1_2 - c(\gamma \otimes \boldsymbol\sigma)\cdot\mathbf p\bigr)^2 = E^2\gamma'^2 \otimes 1_2 + c^2\gamma^2 \otimes (\boldsymbol\sigma\cdot\mathbf p)^2 - Ec\bigl(\gamma'\gamma + \gamma\gamma'\bigr) \otimes \boldsymbol\sigma\cdot\mathbf p = E^2 - \mathbf p^2 c^2 = m^2 c^4. \tag{3.119'}$$
For Eq. (3.119′) to hold, the conditions
$$\gamma'^2 = 1 = -\gamma^2, \qquad \gamma'\gamma + \gamma\gamma' = 0 \tag{3.120}$$
must be satisfied. Thus, the matrices $\gamma'$ and $\gamma$ anticommute, just like the three Pauli matrices; therefore they cannot be real or complex numbers. Because the conditions (3.120) can be met by $2 \times 2$ matrices, we have written direct-product signs (see Example 3.2.1) in Eq. (3.119′), because $\gamma'$, $\gamma$ are multiplied by $1_2$, $\boldsymbol\sigma$ matrices, respectively, with
$$\gamma' = \begin{pmatrix} 1 & 0 \\ 0 & -1 \end{pmatrix}, \qquad \gamma = \begin{pmatrix} 0 & 1 \\ -1 & 0 \end{pmatrix}. \tag{3.121}$$

¹⁷For its geometrical significance, see W. E. Baylis, J. Huschilt, and Jiansu Wei, Am. J. Phys. 60: 788 (1992).

The direct-product $4 \times 4$ matrices in Eq. (3.119′) are the four conventional Dirac $\gamma$-matrices,
$$\gamma^0 = \gamma' \otimes 1_2 = \begin{pmatrix} 1_2 & 0 \\ 0 & -1_2 \end{pmatrix} = \begin{pmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & -1 & 0 \\ 0 & 0 & 0 & -1 \end{pmatrix},$$
$$\gamma^1 = \gamma \otimes \sigma_1 = \begin{pmatrix} 0 & \sigma_1 \\ -\sigma_1 & 0 \end{pmatrix} = \begin{pmatrix} 0 & 0 & 0 & 1 \\ 0 & 0 & 1 & 0 \\ 0 & -1 & 0 & 0 \\ -1 & 0 & 0 & 0 \end{pmatrix},$$
$$\gamma^3 = \gamma \otimes \sigma_3 = \begin{pmatrix} 0 & \sigma_3 \\ -\sigma_3 & 0 \end{pmatrix} = \begin{pmatrix} 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & -1 \\ -1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \end{pmatrix}, \tag{3.122}$$
and similarly for $\gamma^2 = \gamma \otimes \sigma_2$. In vector notation $\boldsymbol\gamma = \gamma \otimes \boldsymbol\sigma$ is a vector with three components, each a $4 \times 4$ matrix, a generalization of the vector of Pauli matrices to a vector of $4 \times 4$ matrices. The four matrices $\gamma^i$ are the components of the four-vector $\gamma^\mu = (\gamma^0, \gamma^1, \gamma^2, \gamma^3)$. If we recognize in Eq. (3.119′)
$$E\gamma' \otimes 1_2 - c(\gamma \otimes \boldsymbol\sigma)\cdot\mathbf p = \gamma^\mu p_\mu = \gamma \cdot p = (\gamma_0, \boldsymbol\gamma)\cdot(E, c\mathbf p) \tag{3.123}$$
as a scalar product of two four-vectors $\gamma^\mu$ and $p^\mu$ (see the Lorentz group in Chapter 4), then Eq. (3.119′) with $p^2 = p \cdot p = E^2 - \mathbf p^2 c^2$ may be regarded as a four-vector generalization of Eq. (3.119).
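The direct-product construction of Eq. (3.122) and the anticommutation relation (3.125) of the next passage can be sketched with NumPy's Kronecker product:

```python
import numpy as np

s = [np.eye(2, dtype=complex),
     np.array([[0, 1], [1, 0]], dtype=complex),
     np.array([[0, -1j], [1j, 0]]),
     np.array([[1, 0], [0, -1]], dtype=complex)]

gp = np.array([[1, 0], [0, -1]], dtype=complex)   # gamma' of Eq. (3.121)
g  = np.array([[0, 1], [-1, 0]], dtype=complex)   # gamma  of Eq. (3.121)

# Dirac matrices as direct products, Eq. (3.122).
gamma = [np.kron(gp, s[0])] + [np.kron(g, s[k]) for k in (1, 2, 3)]

# Anticommutation, Eq. (3.125):
# gamma^mu gamma^nu + gamma^nu gamma^mu = 2 g^{mu nu} 1_4,
# with metric g = diag(1, -1, -1, -1).
metric = np.diag([1.0, -1.0, -1.0, -1.0])
ok = all(np.allclose(gamma[m] @ gamma[n] + gamma[n] @ gamma[m],
                     2 * metric[m, n] * np.eye(4))
         for m in range(4) for n in range(4))
print(ok)   # True
```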
Summarizing, the relativistic treatment of a spin 1/2 particle leads to $4 \times 4$ matrices, while the spin 1/2 of a nonrelativistic particle is described by the $2 \times 2$ Pauli matrices $\boldsymbol\sigma$.

By analogy with the Pauli algebra, we can form products of the basic $\gamma^\mu$ matrices and linear combinations of them and the unit matrix $1 = 1_4$, thereby generating a 16-dimensional (so-called Clifford¹⁸) algebra. A basis (with convenient Lorentz transformation properties; see Chapter 4) is given (in the $2 \times 2$ matrix notation of Eq. (3.122)) by
$$1_4, \qquad \gamma_5 = i\gamma^0\gamma^1\gamma^2\gamma^3 = \begin{pmatrix} 0 & 1_2 \\ 1_2 & 0 \end{pmatrix}, \qquad \gamma^\mu, \qquad \gamma_5\gamma^\mu, \qquad \sigma^{\mu\nu} = i\bigl(\gamma^\mu\gamma^\nu - \gamma^\nu\gamma^\mu\bigr)/2. \tag{3.124}$$
The $\gamma$-matrices anticommute; that is, their symmetric combinations
$$\gamma^\mu\gamma^\nu + \gamma^\nu\gamma^\mu = 2g^{\mu\nu}1_4, \tag{3.125}$$
where $g^{00} = 1 = -g^{11} = -g^{22} = -g^{33}$ and $g^{\mu\nu} = 0$ for $\mu \ne \nu$, are zero or proportional to the $4 \times 4$ unit matrix $1_4$, while the six antisymmetric combinations in Eq. (3.124) give new basis elements that transform like a tensor under Lorentz transformations (see Chapter 4). Any $4 \times 4$ matrix can be expanded in terms of these 16 elements, and the expansion coefficients are given by forming traces similar to the $2 \times 2$ case in Eq. (3.117), using $\operatorname{trace}(1_4) = 4$, $\operatorname{trace}(\gamma_5) = 0$, $\operatorname{trace}(\gamma^\mu) = 0 = \operatorname{trace}(\gamma_5\gamma^\mu)$, and $\operatorname{trace}(\sigma^{\mu\nu}) = 0$ for $\mu, \nu = 0, 1, 2, 3$ (see Exercise 3.4.23). In Chapter 4 we show that $\gamma_5$ is odd under parity, so $\gamma_5\gamma^\mu$ transforms like an axial vector that has even parity.

¹⁸D. Hestenes and G. Sobczyk, loc. cit.; D. Hestenes, Am. J. Phys. 39: 1013 (1971); and J. Math. Phys. 16: 556 (1975).

The spin algebra generated by the Pauli matrices is just a matrix representation of the four-dimensional Clifford algebra, while Hestenes and coworkers (loc. cit.) have developed in their geometric calculus a representation-free (that is, "coordinate-free") algebra that contains complex numbers, vectors, the quaternion subalgebra, and generalized cross products as directed areas (called bivectors).
This algebraic-geometric framework is tailored to nonrelativistic quantum mechanics, where spinors acquire geometric aspects and the Gauss and Stokes theorems appear as components of a unified theorem. Their geometric algebra corresponding to the 16-dimensional Clifford algebra of Dirac $\gamma$-matrices is the appropriate coordinate-free framework for relativistic quantum mechanics and electrodynamics.

The discussion of orthogonal matrices in Section 3.3 and unitary matrices in this section is only a beginning. Further extensions are of vital concern in "elementary" particle physics. With the Pauli and Dirac matrices, we can develop spinor wave functions for electrons, protons, and other (relativistic) spin 1/2 particles. The coordinate system rotations lead to $D^j(\alpha, \beta, \gamma)$, the rotation group usually represented by matrices in which the elements are functions of the Euler angles describing the rotation. The special unitary group SU(3) (composed of $3 \times 3$ unitary matrices with determinant +1) has been used with considerable success to describe mesons and baryons involved in the strong interactions, a gauge theory that is now called quantum chromodynamics. These extensions are considered further in Chapter 4.

Exercises

3.4.1 Show that
$$\det(\mathsf{A}^*) = (\det\mathsf{A})^* = \det\mathsf{A}^\dagger.$$

3.4.2 Three angular momentum matrices satisfy the basic commutation relation
$$[\mathsf{J}_x, \mathsf{J}_y] = i\mathsf{J}_z$$
(and cyclic permutation of indices). If two of the matrices have real elements, show that the elements of the third must be pure imaginary.

3.4.3 Show that $(\mathsf{A}\mathsf{B})^\dagger = \mathsf{B}^\dagger\mathsf{A}^\dagger$.

3.4.4 A matrix $\mathsf{C} = \mathsf{S}^\dagger\mathsf{S}$. Show that the trace is positive definite unless $\mathsf{S}$ is the null matrix, in which case $\operatorname{trace}(\mathsf{C}) = 0$.

3.4.5 If $\mathsf{A}$ and $\mathsf{B}$ are Hermitian matrices, show that $(\mathsf{A}\mathsf{B} + \mathsf{B}\mathsf{A})$ and $i(\mathsf{A}\mathsf{B} - \mathsf{B}\mathsf{A})$ are also Hermitian.

3.4.6 The matrix $\mathsf{C}$ is not Hermitian. Show that then $\mathsf{C} + \mathsf{C}^\dagger$ and $i(\mathsf{C} - \mathsf{C}^\dagger)$ are Hermitian. This means that a non-Hermitian matrix may be resolved into two Hermitian parts,
$$\mathsf{C} = \frac{1}{2}\bigl(\mathsf{C} + \mathsf{C}^\dagger\bigr) + i\,\frac{1}{2i}\bigl(\mathsf{C} - \mathsf{C}^\dagger\bigr).$$
This decomposition of a matrix into two Hermitian matrix parts parallels the decomposition of a complex number $z$ into $x + iy$, where $x = (z + z^*)/2$ and $y = (z - z^*)/2i$.

3.4.7 $\mathsf{A}$ and $\mathsf{B}$ are two noncommuting Hermitian matrices:
$$\mathsf{A}\mathsf{B} - \mathsf{B}\mathsf{A} = i\mathsf{C}.$$
Prove that $\mathsf{C}$ is Hermitian.

3.4.8 Show that a Hermitian matrix remains Hermitian under unitary similarity transformations.

3.4.9 Two matrices $\mathsf{A}$ and $\mathsf{B}$ are each Hermitian. Find a necessary and sufficient condition for their product $\mathsf{A}\mathsf{B}$ to be Hermitian.
ANS. $[\mathsf{A}, \mathsf{B}] = 0$.

3.4.10 Show that the reciprocal (that is, inverse) of a unitary matrix is unitary.

3.4.11 A particular similarity transformation yields
$$\mathsf{A}' = \mathsf{U}\mathsf{A}\mathsf{U}^{-1}, \qquad \mathsf{A}'^\dagger = \mathsf{U}\mathsf{A}^\dagger\mathsf{U}^{-1}.$$
If the adjoint relationship is preserved ($(\mathsf{A}^\dagger)' = (\mathsf{A}')^\dagger$) and $\det\mathsf{U} = 1$, show that $\mathsf{U}$ must be unitary.

3.4.12 Two matrices $\mathsf{U}$ and $\mathsf{H}$ are related by
$$\mathsf{U} = e^{ia\mathsf{H}},$$
with $a$ real. (The exponential function is defined by a Maclaurin expansion. This will be done in Section 5.6.)
(a) If $\mathsf{H}$ is Hermitian, show that $\mathsf{U}$ is unitary.
(b) If $\mathsf{U}$ is unitary, show that $\mathsf{H}$ is Hermitian. ($\mathsf{H}$ is independent of $a$.)
Note. With $\mathsf{H}$ the Hamiltonian,
$$\psi(x, t) = \mathsf{U}(x, t)\psi(x, 0) = \exp(-it\mathsf{H}/\hbar)\psi(x, 0)$$
is a solution of the time-dependent Schrödinger equation. $\mathsf{U}(x, t) = \exp(-it\mathsf{H}/\hbar)$ is the "evolution operator."

3.4.13 An operator $T(t + \varepsilon, t)$ describes the change in the wave function from $t$ to $t + \varepsilon$. For $\varepsilon$ real and small enough so that $\varepsilon^2$ may be neglected,
$$T(t + \varepsilon, t) = 1 - \frac{i}{\hbar}\,\varepsilon\mathsf{H}(t).$$
(a) If $T$ is unitary, show that $\mathsf{H}$ is Hermitian.
(b) If $\mathsf{H}$ is Hermitian, show that $T$ is unitary.
Note. When $\mathsf{H}(t)$ is independent of time, this relation may be put in exponential form — Exercise 3.4.12.

3.4.14 Show that an alternate form,
3.4.16 Show that γ5 anticommutes with all four γ µ . 3.4.17 Use the four-dimensional Levi-Civita symbol ελµνρ with ε0123 = −1 (generalizing Eqs. (2.93) in Section 2.9 to four dimensions) and show that (i) 2γ5 σµν = −iεµναβ σ αβ using the summation convention of Section 2.6 and (ii) γλ γµ γν = gλµ γν − gλν γµ + gµν γλ + iελµνρ γ ρ γ5 . Define γµ = gµν γ ν using g µν = gµν to raise and lower indices. 3.4.18 Evaluate the following traces: (see Eq. (3.123) for the notation) (i) (ii) (iii) (iv) 3.4.19 3.4.20 trace(γ · aγ · b) = 4a · b, trace(γ · aγ · bγ · c) = 0, trace(γ · aγ · bγ · cγ · d) = 4(a · bc · d − a · cb · d + a · db · c), trace(γ5 γ · aγ · bγ · cγ · d) = 4iεαβµν a α bβ cµ d ν . Show that (i) γµ γ α γ µ = −2γ α , (ii) γµ γ α γ β γ µ = 4g αβ , and (iii) γµ γ α γ β γ ν γ µ = −2γ ν γ β γ α . If M = 12 (1 + γ5 ), show that M2 = M. Note that γ5 may be replaced by any other Dirac matrix (any Ŵi of Eq. (3.124)). If M is Hermitian, then this result, M2 = M, is the defining equation for a quantum mechanical projection operator. 3.4.21 Show that where α = γ0 γ is a vector α × α = 2iσ ⊗ 12 , α = (α1 , α2 , α3 ). Note that if α is a polar vector (Section 2.4), then σ is an axial vector. 3.4.22 Prove that the 16 Dirac matrices form a linearly independent set. 3.4.23 If we assume that a given 4 × 4 matrix A (with constant elements) can be written as a linear combination of the 16 Dirac matrices A= 16  ci Ŵi , i=1 show that ci ∼ trace(AŴi ). 3.5 Diagonalization of Matrices 3.4.24 3.4.25 215 If C = iγ 2 γ 0 is the charge conjugation matrix, show that Cγ µ C−1 = −γ˜ µ , where ˜ indicates transposition. Let xµ′ = νµ xν be a rotation by an angle θ about the 3-axis, x0′ = x0 , x1′ = x1 cos θ + x2 sin θ, x2′ = −x1 sin θ + x2 cos θ, x3′ = x3 . Use R = exp(iθ σ 12 /2) = cos θ/2 + iσ 12 sin θ/2 (see Eq. (3.170b)) and show that the γ ’s transform just like the coordinates x µ , that is, νµ γν = R −1 γµ R. 
(Note that γµ = gµν γ ν and that the γ µ are well defined only up to a similarity transformation.) Similarly, if x ′ = x is a boost (pure Lorentz transformation) along the 1-axis, that is, x0′ = x0 cosh ζ − x1 sinh ζ, x2′ = x2 , x1′ = −x0 sinh ζ + x1 cosh ζ, x3′ = x3 , with tanh ζ = v/c and B = exp(−iζ σ 01 /2) = cosh ζ /2 − iσ 01 sinh ζ /2 (see Eq. (3.170b)), show that νµ γν = Bγµ B −1 . 3.4.26 Given r′ = Ur, with U a unitary matrix and r a (column) vector with complex elements, show that the norm (magnitude) of r is invariant under this operation. (b) The matrix U transforms any column vector r with complex elements into r′ , leaving the magnitude invariant: r† r = r′ † r′ . Show that U is unitary. 3.4.27 Write a subroutine that will test whether a complex n × n matrix is self-adjoint. In demanding equality of matrix elements aij = aij† , allow some small tolerance ε to compensate for truncation error of the computer. 3.4.28 Write a subroutine that will form the adjoint of a complex M × N matrix. 3.4.29 Write a subroutine that will take a complex M × N matrix A and yield the product A† A. Hint. This subroutine can call the subroutines of Exercises 3.2.41 and 3.4.28. (b) Test your subroutine by taking A to be one or more of the Dirac matrices, Eq. (3.124). 3.5 (a) (a) DIAGONALIZATION OF MATRICES Moment of Inertia Matrix In many physical problems involving real symmetric or complex Hermitian matrices it is desirable to carry out a (real) orthogonal similarity transformation or a unitary transformation (corresponding to a rotation of the coordinate system) to reduce the matrix to a diagonal form, nondiagonal elements all equal to zero. One particularly direct example of this is the moment of inertia matrix I of a rigid body. 
From the definition of angular momentum $\mathbf{L}$ we have

$$\mathbf{L} = \mathsf{I}\boldsymbol{\omega}, \tag{3.126}$$

$\boldsymbol{\omega}$ being the angular velocity.$^{19}$ The inertia matrix $\mathsf{I}$ is found to have diagonal components

$$I_{xx} = \sum_i m_i\left(r_i^2 - x_i^2\right), \quad \text{and so on}, \tag{3.127}$$

the subscript $i$ referring to mass $m_i$ located at $\mathbf{r}_i = (x_i, y_i, z_i)$. For the nondiagonal components we have

$$I_{xy} = -\sum_i m_i x_i y_i = I_{yx}. \tag{3.128}$$

By inspection, matrix $\mathsf{I}$ is symmetric. Also, since $\mathsf{I}$ appears in a physical equation of the form (3.126), which holds for all orientations of the coordinate system, it may be considered to be a tensor (quotient rule, Section 2.3).

The key now is to orient the coordinate axes (along a body-fixed frame) so that $I_{xy}$ and the other nondiagonal elements will vanish. As a consequence of this orientation, and as an indication of it, if the angular velocity is along one such realigned principal axis, the angular velocity and the angular momentum will be parallel. As an illustration, the stability of rotation is used by football players when they throw the ball spinning about its long principal axis.

Eigenvectors, Eigenvalues

It is instructive to consider a geometrical picture of this problem. If the inertia matrix $\mathsf{I}$ is multiplied from each side by a unit vector of variable direction, $\hat{\mathbf{n}} = (\alpha, \beta, \gamma)$, then in the Dirac bracket notation of Section 3.2,

$$\langle \hat{n}|\mathsf{I}|\hat{n}\rangle = I, \tag{3.129}$$

where $I$ is the moment of inertia about the direction $\hat{\mathbf{n}}$ and a positive number (scalar). Carrying out the multiplication, we obtain

$$I = I_{xx}\alpha^2 + I_{yy}\beta^2 + I_{zz}\gamma^2 + 2I_{xy}\alpha\beta + 2I_{xz}\alpha\gamma + 2I_{yz}\beta\gamma, \tag{3.130}$$

a positive definite quadratic form that must be an ellipsoid (see Fig. 3.5). From analytic geometry it is known that the coordinate axes can always be rotated to coincide with the axes of our ellipsoid. In many elementary cases, especially when symmetry is present, these new axes, called the principal axes, can be found by inspection.
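The components of Eqs. (3.127) and (3.128) can be assembled directly and checked against the defining relation $\mathbf{L} = \mathsf{I}\boldsymbol{\omega}$. The sketch below does this in plain Python for the point masses of Exercise 3.5.12; the function names (`inertia`, `cross`) and the test value of ω are our illustrative choices.

```python
# Sketch: build the inertia matrix of Eqs. (3.127)-(3.128) for point
# masses and check L = I omega against the direct sum of
# m_i r_i x (omega x r_i). Names and the chosen omega are illustrative.

def cross(a, b):
    return [a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0]]

def inertia(masses, points):
    """I_jk = sum_i m_i (r_i^2 delta_jk - r_j r_k), Eqs. (3.127)-(3.128)."""
    I = [[0.0] * 3 for _ in range(3)]
    for m, r in zip(masses, points):
        r2 = sum(c * c for c in r)
        for j in range(3):
            for k in range(3):
                I[j][k] += m * ((r2 if j == k else 0.0) - r[j] * r[k])
    return I

# Rigid body of Exercise 3.5.12
masses = [1.0, 2.0, 1.0]
points = [(1.0, 1.0, -2.0), (-1.0, -1.0, 0.0), (1.0, 1.0, 2.0)]
I = inertia(masses, points)

omega = (0.3, -1.0, 2.0)
L_matrix = [sum(I[j][k] * omega[k] for k in range(3)) for j in range(3)]
L_direct = [0.0, 0.0, 0.0]
for m, r in zip(masses, points):
    Li = cross(r, cross(omega, r))          # m r x (omega x r)
    for j in range(3):
        L_direct[j] += m * Li[j]
```

The symmetry of the resulting matrix, asserted "by inspection" in the text, also falls out of the construction.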
We can find the axes by locating the local extrema of the ellipsoid in terms of the variable components of n, subject to the constraint nˆ 2 = 1. To deal with the constraint, we introduce a Lagrange multiplier λ ˆ n ˆ − λn| ˆ n, ˆ (Section 17.6). Differentiating n|I|  ∂  ˆ n ˆ − λn| ˆ n ˆ =2 Ij k nk − 2λnj = 0, j = 1, 2, 3 (3.131) n|I| ∂nj k yields the eigenvalue equations ˆ = λ|n. ˆ I|n 19 The moment of inertia matrix may also be developed from the kinetic energy of a rotating body, T = 1/2ω|I|ω. (3.132) 3.5 Diagonalization of Matrices FIGURE 3.5 217 Moment of inertia ellipsoid. The same result can be found by purely geometric methods. We now proceed to develop a general method of finding the diagonal elements and the principal axes. ˜ is the real orthogonal matrix such that n′ = Rn, or |n′  = R|n in Dirac If R−1 = R notation, are the new coordinates, then we obtain, using n′ |R = n| in Eq. (3.132), ˜ ′  = I ′ n′ 2 + I ′ n′ 2 + I ′ n′ 2 , n|I|n = n′ |RIR|n 1 1 2 2 3 3 (3.133) where the Ii′ > 0 are the principal moments of inertia. The inertia matrix I′ in Eq. (3.133) is diagonal in the new coordinates,   ′ I1 0 0 ˜ =  0 I′ 0 . I′ = RIR (3.134) 2 0 0 I3′ ˜ in the form If we rewrite Eq. (3.134) using R−1 = R ˜ ′ = IR ˜ RI (3.135) ˜ = (v1 , v2 , v3 ) to consist of three column vectors, then Eq. (3.135) splits up into and take R three eigenvalue equations, Ivi = Ii′ vi , i = 1, 2, 3 (3.136) with eigenvalues Ii′ and eigenvectors vi . The names were introduced from the German literature on quantum mechanics. Because these equations are linear and homogeneous 218 Chapter 3 Determinants and Matrices (for fixed i), by Section 3.1 their determinants have to vanish:    I11 − I ′ I12 I13  i   I12 I22 − Ii′ I23  = 0.   I13 I23 I33 − Ii′  (3.137) Replacing the eigenvalue Ii′ by a variable λ times the unit matrix 1, we may rewrite Eq. (3.136) as (I − λ1)|v = 0. 
(3.136′)

The determinant set to zero,

$$|\mathsf{I} - \lambda 1| = 0, \tag{3.137'}$$

is a cubic polynomial in $\lambda$; its three roots, of course, are the $I_i'$. Substituting one root at a time back into Eq. (3.136) (or (3.136′)), we can find the corresponding eigenvectors. Because of its applications in astronomical theories, Eq. (3.137) (or (3.137′)) is known as the secular equation.$^{20}$ The same treatment applies to any real symmetric matrix $\mathsf{I}$, except that its eigenvalues need not all be positive. Also, the orthogonality condition in Eq. (3.87) for $\mathsf{R}$ says that, in geometric terms, the eigenvectors $\mathbf{v}_i$ are mutually orthogonal unit vectors. Indeed, they form the new coordinate system. The fact that any two eigenvectors $\mathbf{v}_i$, $\mathbf{v}_j$ are orthogonal if $I_i' \neq I_j'$ follows from Eq. (3.136), in conjunction with the symmetry of $\mathsf{I}$, by multiplying with $\mathbf{v}_i$ and $\mathbf{v}_j$, respectively:

$$\langle v_j|\mathsf{I}|v_i\rangle = I_i'\,\mathbf{v}_j\cdot\mathbf{v}_i = \langle v_i|\mathsf{I}|v_j\rangle = I_j'\,\mathbf{v}_i\cdot\mathbf{v}_j. \tag{3.138a}$$

Since $I_i' \neq I_j'$, Eq. (3.138a) implies that $(I_j' - I_i')\,\mathbf{v}_i\cdot\mathbf{v}_j = 0$, so $\mathbf{v}_i\cdot\mathbf{v}_j = 0$.

We can write the quadratic form in Eq. (3.133) as a sum of squares in the original coordinates $|n\rangle$,

$$\langle n|\mathsf{I}|n\rangle = \langle n'|\mathsf{R}\mathsf{I}\tilde{\mathsf{R}}|n'\rangle = \sum_i I_i'\,(\mathbf{n}\cdot\mathbf{v}_i)^2, \tag{3.138b}$$

because the rows of the rotation matrix in $\mathbf{n}' = \mathsf{R}\mathbf{n}$, or

$$\begin{pmatrix} n_1' \\ n_2' \\ n_3' \end{pmatrix} = \begin{pmatrix} \mathbf{v}_1\cdot\mathbf{n} \\ \mathbf{v}_2\cdot\mathbf{n} \\ \mathbf{v}_3\cdot\mathbf{n} \end{pmatrix}$$

componentwise, are made up of the eigenvectors $\mathbf{v}_i$. The underlying matrix identity,

$$\mathsf{I} = \sum_i I_i'\,|v_i\rangle\langle v_i|, \tag{3.138c}$$

may be viewed as the spectral decomposition of the inertia tensor (or of any real symmetric matrix). Here, the word spectral is just another term for expansion in terms of its eigenvalues. When we multiply this eigenvalue expansion by $\langle n|$ on the left and $|n\rangle$ on the right, we reproduce the previous relation between quadratic forms.

$^{20}$ Equation (3.126) will take on this form when $\boldsymbol{\omega}$ is along one of the principal axes. Then $\mathbf{L} = \lambda\boldsymbol{\omega}$ and $\mathsf{I}\boldsymbol{\omega} = \lambda\boldsymbol{\omega}$. In the mathematics literature $\lambda$ is usually called a characteristic value, $\boldsymbol{\omega}$ a characteristic vector.
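The secular-equation route and the spectral decomposition (3.138c) can be checked on a matrix small enough to solve by hand. The sketch below uses a 2 × 2 real symmetric matrix of our own choosing, with eigenpairs found from the characteristic polynomial, and verifies that the eigenvalue expansion reassembles the matrix.

```python
# Sketch: check det(A - lam*1) = 0 at each root and the spectral
# decomposition A = sum_i lam_i |v_i><v_i| (Eq. (3.138c)) for
# A = [[2, 1], [1, 2]]: (2 - lam)^2 - 1 = 0 gives lam = 1, 3 with
# eigenvectors (1, -1)/sqrt(2) and (1, 1)/sqrt(2). Illustrative choice.

import math

A = [[2.0, 1.0], [1.0, 2.0]]
s = 1.0 / math.sqrt(2.0)
eigen = [(1.0, (s, -s)), (3.0, (s, s))]

# Secular equation: det(A - lam*1) = 0 for each root
for lam, _ in eigen:
    det = (A[0][0] - lam) * (A[1][1] - lam) - A[0][1] * A[1][0]
    assert abs(det) < 1e-12

# Spectral decomposition: A = sum_i lam_i |v_i><v_i|
recon = [[0.0, 0.0], [0.0, 0.0]]
for lam, v in eigen:
    for j in range(2):
        for k in range(2):
            recon[j][k] += lam * v[j] * v[k]
```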
The operator $P_i = |v_i\rangle\langle v_i|$ is a projection operator satisfying $P_i^2 = P_i$ that projects the $i$th component $w_i$ of any vector $|w\rangle = \sum_j w_j |v_j\rangle$ that is expanded in terms of the eigenvector basis $|v_j\rangle$. This is verified by

$$P_i|w\rangle = \sum_j w_j\,|v_i\rangle\langle v_i|v_j\rangle = w_i\,|v_i\rangle = (\mathbf{v}_i\cdot\mathbf{w})\,|v_i\rangle.$$

Finally, the identity

$$\sum_i |v_i\rangle\langle v_i| = 1$$

expresses the completeness of the eigenvector basis, according to which any vector $|w\rangle = \sum_i w_i|v_i\rangle$ can be expanded in terms of the eigenvectors. Multiplying the completeness relation by $|w\rangle$ proves the expansion $|w\rangle = \sum_i \langle v_i|w\rangle\,|v_i\rangle$.

An important extension of the spectral decomposition theorem applies to commuting symmetric (or Hermitian) matrices $\mathsf{A}$, $\mathsf{B}$: If $[\mathsf{A}, \mathsf{B}] = 0$, then there is an orthogonal (unitary) matrix that diagonalizes both $\mathsf{A}$ and $\mathsf{B}$; that is, both matrices have common eigenvectors if the eigenvalues are nondegenerate. The reverse of this theorem is also valid. To prove this theorem we diagonalize $\mathsf{A}$: $\mathsf{A}\mathbf{v}_i = a_i\mathbf{v}_i$. Multiplying each eigenvalue equation by $\mathsf{B}$ we obtain $\mathsf{B}\mathsf{A}\mathbf{v}_i = a_i\mathsf{B}\mathbf{v}_i = \mathsf{A}(\mathsf{B}\mathbf{v}_i)$, which says that $\mathsf{B}\mathbf{v}_i$ is an eigenvector of $\mathsf{A}$ with eigenvalue $a_i$. Hence $\mathsf{B}\mathbf{v}_i = b_i\mathbf{v}_i$ with real $b_i$. Conversely, if the vectors $\mathbf{v}_i$ are common eigenvectors of $\mathsf{A}$ and $\mathsf{B}$, then $\mathsf{A}\mathsf{B}\mathbf{v}_i = \mathsf{A}b_i\mathbf{v}_i = a_ib_i\mathbf{v}_i = \mathsf{B}\mathsf{A}\mathbf{v}_i$. Since the eigenvectors $\mathbf{v}_i$ are complete, this implies $\mathsf{A}\mathsf{B} = \mathsf{B}\mathsf{A}$.

Hermitian Matrices

For complex vector spaces, Hermitian and unitary matrices play the same role as symmetric and orthogonal matrices over real vector spaces, respectively. First, let us generalize the important theorem about the diagonal elements and the principal axes to the eigenvalue equation

$$\mathsf{A}|r\rangle = \lambda|r\rangle. \tag{3.139}$$

We now show that if $\mathsf{A}$ is a Hermitian matrix,$^{21}$ its eigenvalues are real and its eigenvectors orthogonal. Let $\lambda_i$ and $\lambda_j$ be two eigenvalues and $|r_i\rangle$ and $|r_j\rangle$ the corresponding eigenvectors of $\mathsf{A}$, a Hermitian matrix. Then

$$\mathsf{A}|r_i\rangle = \lambda_i|r_i\rangle, \tag{3.140}$$

$$\mathsf{A}|r_j\rangle = \lambda_j|r_j\rangle. \tag{3.141}$$

$^{21}$ If $\mathsf{A}$ is real, the Hermitian requirement reduces to a requirement of symmetry.
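The commuting-matrices theorem can be illustrated numerically. The sketch below takes two real symmetric matrices that commute by construction (both are polynomials in the same matrix, here σ₁) and checks that the stated common eigenvectors diagonalize both. The particular matrices and the helper names are our illustrative choices.

```python
# Sketch: two commuting real symmetric matrices share eigenvectors.
# A = sigma_1 and B = 2*1 + 3*sigma_1 commute by construction.

import math

A = [[0.0, 1.0], [1.0, 0.0]]          # sigma_1
B = [[2.0, 3.0], [3.0, 2.0]]          # 2*1 + 3*sigma_1

def mmul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

AB, BA = mmul(A, B), mmul(B, A)
assert all(abs(AB[i][j] - BA[i][j]) < 1e-12
           for i in range(2) for j in range(2))   # [A, B] = 0

s = 1.0 / math.sqrt(2.0)
vs = [(s, s), (s, -s)]                # common (normalized) eigenvectors

def eigval(M, v):
    """Rayleigh quotient <v|M|v> for a normalized v."""
    Mv = [sum(M[i][k] * v[k] for k in range(2)) for i in range(2)]
    return sum(v[i] * Mv[i] for i in range(2))

for v in vs:
    for M in (A, B):
        lam = eigval(M, v)
        Mv = [sum(M[i][k] * v[k] for k in range(2)) for i in range(2)]
        assert all(abs(Mv[i] - lam * v[i]) < 1e-12 for i in range(2))
```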
220 Chapter 3 Determinants and Matrices Equation (3.140) is multiplied by rj |: rj |A|ri  = λi rj |ri . (3.142) Equation (3.141) is multiplied by ri | to give ri |A|rj  = λj ri |rj . (3.143) rj |A† |ri  = λ∗j rj |ri , (3.144) rj |A|ri  = λ∗j rj |ri  (3.145) Taking the adjoint22 of this equation, we have or since A is Hermitian. Subtracting Eq. (3.145) from Eq. (3.142), we obtain (λi − λ∗j )rj |ri  = 0. (3.146) This is a general result for all possible combinations of i and j . First, let j = i. Then Eq. (3.146) becomes (λi − λ∗i )ri |ri  = 0. (3.147) Since ri |ri  = 0 would be a trivial solution of Eq. (3.147), we conclude that λi = λ∗i , (3.148) (λi − λj )rj |ri  = 0, (3.149) or λi is real, for all i. Second, for i = j and λi = λj , or rj |ri  = 0, (3.150) which means that the eigenvectors of distinct eigenvalues are orthogonal, Eq. (3.150) being our generalization of orthogonality in this complex space.23 If λi = λj (degenerate case), |ri  is not automatically orthogonal to |rj , but it may be made orthogonal.24 Consider the physical problem of the moment of inertia matrix again. If x1 is an axis of rotational symmetry, then we will find that λ2 = λ3 . Eigenvectors |r2  and |r3  are each perpendicular to the symmetry axis, |r1 , but they lie anywhere in the plane perpendicular to |r1 ; that is, any linear combination of |r2  and |r3  is also an eigenvector. Consider (a2 |r2  + a3 |r3 ) with a2 and a3 constants. Then  A a2 |r2  + a3 |r3  = a2 λ2 |r2  + a3 λ3 |r3   (3.151) = λ2 a2 |r2  + a3 |r3  , 22 Note r | = |r † for complex vectors. j j 23 The corresponding theory for differential operators (Sturm–Liouville theory) appears in Section 10.2. The integral equation analog (Hilbert–Schmidt theory) is given in Section 16.4. 24 We are assuming here that the eigenvectors of the n-fold degenerate λ span the corresponding n-dimensional space. 
This i may be shown by including a parameter ε in the original matrix to remove the degeneracy and then letting ε approach zero (compare Exercise 3.5.30). This is analogous to breaking a degeneracy in atomic spectroscopy by applying an external magnetic field (Zeeman effect). 3.5 Diagonalization of Matrices 221 as is to be expected, for x1 is an axis of rotational symmetry. Therefore, if |r1  and |r2  are fixed, |r3  may simply be chosen to lie in the plane perpendicular to |r1  and also perpendicular to |r2 . A general method of orthogonalizing solutions, the Gram–Schmidt process (Section 3.1), is applied to functions in Section 10.3. matrix A forms a The set of n orthogonal eigenvectors |ri  of our n × n Hermitian complete set, spanning the n-dimensional (complex) space, i |ri ri | = 1. This fact is useful in a variational calculation of the eigenvalues, Section 17.8. The spectral decomposition of any Hermitian matrix A is proved by analogy with real symmetric matrices A=  i λi |ri ri |, with real eigenvalues λi and orthonormal eigenvectors |ri . Eigenvalues and eigenvectors are not limited to Hermitian matrices. All matrices have at least one eigenvalue and eigenvector. However, only Hermitian matrices have all eigenvectors orthogonal and all eigenvalues real. Anti-Hermitian Matrices Occasionally in quantum theory we encounter anti-Hermitian matrices: A† = −A. Following the analysis of the first portion of this section, we can show that a. The eigenvalues are pure imaginary (or zero). b. The eigenvectors corresponding to distinct eigenvalues are orthogonal. The matrix R formed from the normalized eigenvectors is unitary. This anti-Hermitian property is preserved under unitary transformations. Example 3.5.1 EIGENVALUES AND EIGENVECTORS OF A REAL SYMMETRIC MATRIX Let The secular equation is or  0 1 0 A = 1 0 0. 0 0 0     −λ 1 0    1 −λ 0  = 0,    0 0 −λ   −λ λ2 − 1 = 0, (3.152) (3.153) (3.154) 222 Chapter 3 Determinants and Matrices expanding by minors. 
The roots are λ = −1, 0, 1. To find the eigenvector corresponding to λ = −1, we substitute this value back into the eigenvalue equation, Eq. (3.139),     0 −λ 1 0 x  1 −λ 0   y  =  0  . 0 0 −λ z 0  (3.155) With λ = −1, this yields x + y = 0, z = 0. (3.156) Within an arbitrary scale factor and an arbitrary sign (or phase factor), r1 | = (1, −1, 0). Note that (for real |r in ordinary space) the eigenvector singles out a line in space. The positive or negative sense is not determined. This indeterminancy could be expected if we noted that Eq. (3.139) is homogeneous in |r. For convenience we will require that the eigenvectors be normalized to unity, r1 |r1  = 1. With this condition, r1 | =  1 −1 √ , √ ,0 2 2  (3.157) is fixed except for an overall sign. For λ = 0, Eq. (3.139) yields y = 0, x = 0, (3.158) r2 | = (0, 0, 1) is a suitable eigenvector. Finally, for λ = 1, we get −x + y = 0, z = 0, (3.159) or r3 | =   1 1 √ , √ ,0 . 2 2 (3.160) The orthogonality of r1 , r2 , and r3 , corresponding to three distinct eigenvalues, may be easily verified. The corresponding spectral decomposition gives  1   1    √     √2 0 2 1 1 1 1     A = (−1) √ , − √ , 0  − √1  + (+1) √ , √ , 0  √1  + 0(0, 0, 1)  0  2 2 2 2 2 2 1 0 0  1  1 1    − 12 0 0 1 0 2 2 2 0     1 = −  − 21 0  +  12 12 0  =  1 0 0  . 2 0 0 0 0 0 0  0 0 0 3.5 Diagonalization of Matrices Example 3.5.2 223 DEGENERATE EIGENVALUES Consider   1 0 0 A = 0 0 1. 0 1 0 The secular equation is  0 0  −λ 1  = 0 1 −λ   1 − λ   0   0 or  (1 − λ) λ2 − 1 = 0, λ = −1, 1, 1, (3.161) (3.162) (3.163) a degenerate case. If λ = −1, the eigenvalue equation (3.139) yields 2x = 0, y + z = 0. (3.164)  1 −1 r1 | = 0, √ , √ . 2 2 (3.165) −y + z = 0. (3.166) A suitable normalized eigenvector is  For λ = 1, we get Any eigenvector satisfying Eq. (3.166) is perpendicular to r1 . We have an infinite number of choices. 
Suppose, as one possible choice, r2 is taken as   1 1 (3.167) r2 | = 0, √ , √ , 2 2 which clearly satisfies Eq. (3.166). Then r3 must be perpendicular to r1 and may be made perpendicular to r2 by25 r3 = r1 × r2 = (1, 0, 0). (3.168) The corresponding spectral decomposition gives       0    0  1 1 1  √1  1  √1  1 A = − 0, √ , − √  2  + 0, √ , √  2  + (1, 0, 0)  0  2 2 2 2 0 √1 − √1 2 2         0 0 0 0 0 0 1 0 0 1 0 0     1 − 21  +  0 21 12  +  0 0 0  =  0 0 1  . = −0 2 1 0 0 0 0 1 0 0 −1 0 1 1 2 2 2 2 25 The use of the cross product is limited to three-dimensional space (see Section 1.4).  224 Chapter 3 Determinants and Matrices Functions of Matrices Polynomials with one or more matrix arguments are well defined and occur often. Power series of a matrix may also be defined, provided the series converge (see Chapter 5) for each matrix element. For example, if A is any n × n matrix, then the power series ∞  1 j A , j! (3.169a) sin(A) = ∞  (−1)j A2j +1 , (2j + 1)! (3.169b) cos(A) = ∞  (−1)j (3.169c) exp(A) = j =0 j =0 j =0 (2j )! A2j are well defined n × n matrices. For the Pauli matrices σk the Euler identity for real θ and k = 1, 2, or 3 exp(iσk θ ) = 12 cos θ + iσk sin θ, (3.170a) follows from collecting all even and odd powers of θ in separate series using σk2 = 1. For the 4 × 4 Dirac matrices σ j k = 1 with (σ j k )2 = 1 if j = k = 1, 2 or 3 we obtain similarly (without writing the obvious unit matrix 14 anymore) while  exp iσ j k θ = cos θ + iσ j k sin θ,  exp iσ 0k ζ = cosh ζ + iσ 0k sinh ζ (3.170b) (3.170c) holds for real ζ because (iσ 0k )2 = 1 for k = 1, 2, or 3. For a Hermitian matrix A there is a unitary matrix U that diagonalizes it; that is, UAU† = [a1 , a2 , . . . , an ]. Then the trace formula   det exp(A) = exp trace(A) is obtained (see Exercises 3.5.2 and 3.5.9) from     det exp(A) = det U exp(A)U† = det exp UAU†  = det exp[a1 , a2 , . . . , an ] = det ea1 , ea2 , . . . 
, ean $ % #  = eai = exp ai = exp trace(A) , (3.171) using UAi U† = (UAU† )i in the power series Eq. (3.169a) for exp(UAU† ) and the product theorem for determinants in Section 3.2. 3.5 Diagonalization of Matrices 225 This trace formula is a special case of the spectral decomposition law for any (infinitely differentiable) function f (A) for Hermitian A: f (A) =  i f (λi )|ri ri |, where |ri  are the common eigenvectors of A and Aj . This eigenvalue expansion follows j from Aj |ri  = λi |ri , multiplied by f (j ) (0)/j ! and summed over j to form the Taylor expansion of f (λi ) and yield f (A)|ri  = f (λi )|ri . Finally, summing over i and using completeness we obtain f (A) i |ri ri | = i f (λi )|ri ri | = f (A), q.e.d. Example 3.5.3 EXPONENTIAL OF A DIAGONAL MATRIX If the matrix A is diagonal like σ3 =   1 0 , 0 −1 then its nth power is also diagonal with its diagonal, matrix elements raised to the nth power:   1 0 . (σ3 )n = 0 (−1)n Then summing the exponential series, element for element, yields ! ! ∞ 1 0 e 0 n=0 n! σ3 e = . ∞ (−1)n = 0 1e 0 n=0 n! If we write the general diagonal matrix as A = [a1 , a2 , . . . , an ] with diagonal elements aj , then Am = [a1m , a2m , . . . , anm ], and summing the exponentials elementwise again we obtain eA = [ea1 , ea2 , . . . , ean ]. Using the spectral decomposition law we obtain directly       0 1 e 0 −1 σ3 +1 . e = e (1, 0) = + e (0, 1)  0 e−1 1 0 Another important relation is the Baker–Hausdorff formula, 1 exp(iG)H exp(−iG) = H + [iG, H] + iG, [iG, H] + · · · , (3.172) 2 which follows from multiplying the power series for exp(iG) and collecting the terms with the same powers of iG. Here we define [G, H] = GH − HG as the commutator of G and H. The preceding analysis has the advantage of exhibiting and clarifying conceptual relationships in the diagonalization of matrices. 
However, for matrices larger than 3 × 3, or perhaps 4 × 4, the process rapidly becomes so cumbersome that we turn to computers and 226 Chapter 3 Determinants and Matrices iterative techniques.26 One such technique is the Jacobi method for determining eigenvalues and eigenvectors of real symmetric matrices. This Jacobi technique for determining eigenvalues and eigenvectors and the Gauss–Seidel method of solving systems of simultaneous linear equations are examples of relaxation methods. They are iterative techniques in which the errors may decrease or relax as the iterations continue. Relaxation methods are used extensively for the solution of partial differential equations. Exercises 3.5.1 (a) Starting with the orbital angular momentum of the ith element of mass, Li = ri × pi = mi ri × (ω × ri ), (b) 3.5.2 derive the inertia matrix such that L = Iω, |L = I|ω. Repeat the derivation starting with kinetic energy   1 1 Ti = mi (ω × ri )2 T = ω|I|ω . 2 2 Show that the eigenvalues of a matrix are unaltered if the matrix is transformed by a similarity transformation. This property is not limited to symmetric or Hermitian matrices. It holds for any matrix satisfying the eigenvalue equation, Eq. (3.139). If our matrix can be brought into diagonal form by a similarity transformation, then two immediate consequences are 1. The trace (sum of eigenvalues) is invariant under a similarity transformation. 2. The determinant (product of eigenvalues) is invariant under a similarity transformation. Note. The invariance of the trace and determinant are often demonstrated by using the Cayley–Hamilton theorem: A matrix satisfies its own characteristic (secular) equation. 3.5.3 As a converse of the theorem that Hermitian matrices have real eigenvalues and that eigenvectors corresponding to distinct eigenvalues are orthogonal, show that if (a) (b) the eigenvalues of a matrix are real and the eigenvectors satisfy r†i rj = δij = ri |rj , then the matrix is Hermitian. 
3.5.4 Show that a real matrix that is not symmetric cannot be diagonalized by an orthogonal similarity transformation. Hint. Assume that the nonsymmetric real matrix can be diagonalized and develop a contradiction. 26 In higher-dimensional systems the secular equation may be strongly ill-conditioned with respect to the determination of its roots (the eigenvalues). Direct solution by computer may be very inaccurate. Iterative techniques for diagonalizing the original matrix are usually preferred. See Sections 2.7 and 2.9 of Press et al., loc. cit. 3.5 Diagonalization of Matrices 227 3.5.5 The matrices representing the angular momentum components Jx , Jy , and Jz are all Hermitian. Show that the eigenvalues of J2 , where J2 = Jx2 + Jy2 + Jz2 , are real and nonnegative. 3.5.6 A has eigenvalues λi and corresponding eigenvectors |xi . Show that A−1 has the same eigenvectors but with eigenvalues λ−1 i . 3.5.7 A square matrix with zero determinant is labeled singular. (a) If A is singular, show that there is at least one nonzero column vector v such that A|v = 0. (b) If there is a nonzero vector |v such that A|v = 0, show that A is a singular matrix. This means that if a matrix (or operator) has zero as an eigenvalue, the matrix (or operator) has no inverse and its determinant is zero. 3.5.8 The same similarity transformation diagonalizes each of two matrices. Show that the original matrices must commute. (This is particularly important in the matrix (Heisenberg) formulation of quantum mechanics.) 3.5.9 Two Hermitian matrices A and B have the same eigenvalues. Show that A and B are related by a unitary similarity transformation. 3.5.10 Find the eigenvalues and an orthonormal (orthogonal and normalized) set of eigenvectors for the matrices of Exercise 3.2.15. 3.5.11 Show that the inertia matrix for a single particle of mass m at (x, y, z) has a zero determinant. 
Explain this result in terms of the invariance of the determinant of a matrix under similarity transformations (Exercise 3.3.10) and a possible rotation of the coordinate system. 3.5.12 A certain rigid body may be represented by three point masses: m1 = 1 at (1, 1, −2), m2 = 2 at (−1, −1, 0), and m3 = 1 at (1, 1, 2). (a) (b) 3.5.13 Find the inertia matrix. Diagonalize the inertia matrix, obtaining the eigenvalues and the principal axes (as orthonormal eigenvectors). Unit masses are placed as shown in Fig. 3.6. (a) (b) (c) Find the moment of inertia matrix. Find the eigenvalues and a set of orthonormal eigenvectors. Explain the degeneracy in terms of the symmetry of the system.   4 −1 −1 ANS. I =  −1 4 −1  −1 −1 4 λ1 = 2 √ √ √ r1 = (1/ 3, 1/ 3, 1/ 3 ) λ2 = λ3 = 5. 228 Chapter 3 Determinants and Matrices FIGURE 3.6 Mass sites for inertia tensor. 3.5.14 A mass m1 = 1/2 kg is located at (1, 1, 1) (meters), a mass m2 = 1/2 kg is at (−1, −1, −1). The two masses are held together by an ideal (weightless, rigid) rod. (a) (b) (c) Find the inertia tensor of this pair of masses. Find the eigenvalues and eigenvectors of this inertia matrix. Explain the meaning, the physical significance of the λ = 0 eigenvalue. What is the significance of the corresponding eigenvector? (d) Now that you have solved this problem by rather sophisticated matrix techniques, explain how you could obtain (1) (2) 3.5.15 3.5.16 3.5.17 λ = 0 and λ =? — by inspection (that is, using common sense). rλ=0 =? — by inspection (that is, using freshman physics). Unit masses are at the eight corners of a cube (±1, ±1, ±1). Find the moment of inertia matrix and show that there is a triple degeneracy. This means that so far as moments of inertia are concerned, the cubic structure exhibits spherical symmetry. 
Find the eigenvalues and corresponding orthonormal eigenvectors of the following matrices (as a numerical check, note that the sum of the eigenvalues equals the sum of the diagonal elements of the original matrix, Exercise 3.3.9). Note also the correspondence between det A = 0 and the existence of λ = 0, as required by Exercises 3.5.2 and 3.5.7.   1 0 1 A =  0 1 0 . 1 0 1  √1 A= 2 0 √  2 0 0 0 . 0 0 ANS. λ = 0, 1, 2. ANS. λ = −1, 0, 2. 3.5 Diagonalization of Matrices 3.5.18  1 1 0 A =  1 0 1 . 0 1 1 3.5.19 √  8 √0 √1 A =  8 √1 8 . 0 8 1 3.5.20 3.5.21 3.5.22 3.5.23 3.5.24 3.5.25 3.5.26 229     1 0 0 A =  0 1 1 . 0 1 1   1 0 √0  2 . A = 0 √1 0 2 0   0 1 0 A =  1 0 1 . 0 1 0  2 0 0 A =  0 1 1 . 0 1 1 0 1 1 A =  1 0 1 . 1 1 0  1 −1 −1 A =  −1 1 −1 . −1 −1 1   1 1 1 A =  1 1 1 . 1 1 1 ANS. λ = 0, 1, 2. ANS. λ = −1, 1, 2. ANS. λ = 0, 2, 2.   ANS. λ = −3, 1, 5. √ √ ANS. λ = − 2, 0, 2.   ANS. λ = −1, 1, 2. ANS. λ = −1, −1, 2. ANS. λ = −1, 2, 2. ANS. λ = 0, 0, 3. 230 Chapter 3 Determinants and Matrices   5 0 2 3.5.27 A =  0 1 0 . 2 0 2 3.5.28 3.5.29 3.5.30   1 1 0 A =  1 1 0 . 0 0 0 ANS. λ = 0, 0, 2. √  5 0 3 A =  √0 3 0 . 3 0 3  (a) ANS. λ = 1, 1, 6. ANS. λ = 2, 3, 6. Determine the eigenvalues and eigenvectors of   1 ε . ε 1 Note that the eigenvalues are degenerate for ε = 0 but that the eigenvectors are orthogonal for all ε = 0 and ε → 0. (b) Determine the eigenvalues and eigenvectors of   1 1 . ε2 1 Note that the eigenvalues are degenerate for ε = 0 and that for this (nonsymmetric) matrix the eigenvectors (ε = 0) do not span the space. (c) Find the cosine of the angle between the two eigenvectors as a function of ε for 0 ≤ ε ≤ 1. 3.5.31 (a) (b) Take the coefficients of the simultaneous linear equations of Exercise 3.1.7 to be the matrix elements aij of matrix A (symmetric). Calculate the eigenvalues and eigenvectors. 
Form a matrix R whose columns are the eigenvectors of A, and calculate the triple ˜ matrix product RAR. ANS. λ = 3.33163. 3.5.32 Repeat Exercise 3.5.31 by using the matrix of Exercise 3.2.39. 3.5.33 Describe the geometric properties of the surface x 2 + 2xy + 2y 2 + 2yz + z2 = 1. How is it oriented in three-dimensional space? Is it a conic section? If so, which kind? 3.6 Normal Matrices 231 Table 3.1 Matrix Eigenvalues Eigenvectors (for different eigenvalues) Hermitian Anti-Hermitian Unitary Normal Real Pure imaginary (or zero) Unit magnitude If A has eigenvalue λ, A† has eigenvalue λ∗ Orthogonal Orthogonal Orthogonal Orthogonal A and A† have the same eigenvectors For a Hermitian n × n matrix A with distinct eigenvalues λj and a function f , show that the spectral decomposition law may be expressed as & n  i =j (A − λi ) f (λj ) & f (A) = . i =j (λj − λi ) 3.5.34 j =1 This formula is due to Sylvester. 3.6 NORMAL MATRICES In Section 3.5 we concentrated primarily on Hermitian or real symmetric matrices and on the actual process of finding the eigenvalues and eigenvectors. In this section27 we generalize to normal matrices, with Hermitian and unitary matrices as special cases. The physically important problem of normal modes of vibration and the numerically important problem of ill-conditioned matrices are also considered. A normal matrix is a matrix that commutes with its adjoint,  A, A† = 0. Obvious and important examples are Hermitian and unitary matrices. We will show that normal matrices have orthogonal eigenvectors (see Table 3.1). We proceed in two steps. I. Let A have an eigenvector |x and corresponding eigenvalue λ. Then A|x = λ|x (3.173) (A − λ1)|x = 0. (3.174) or For convenience the combination A − λ1 will be labeled B. Taking the adjoint of Eq. (3.174), we obtain x|(A − λ1)† = 0 = x|B† . Because   (A − λ1)† , (A − λ1) = A, A† = 0, (3.175) 27 Normal matrices are the largest class of matrices that can be diagonalized by unitary transformations. 
For an extensive discussion of normal matrices, see P. A. Macklin, Normal matrices for physicists, Am. J. Phys. 52: 513 (1984).

we have

    [B, B†] = 0.                                   (3.176)

The matrix B is also normal.

From Eqs. (3.174) and (3.175) we form

    ⟨x|B†B|x⟩ = 0.                                 (3.177)

This equals

    ⟨x|BB†|x⟩ = 0                                  (3.178)

by Eq. (3.176). Now Eq. (3.178) may be rewritten as

    (B†|x⟩)†(B†|x⟩) = 0.                           (3.179)

Thus

    B†|x⟩ = (A† − λ*1)|x⟩ = 0.                     (3.180)

We see that for normal matrices, A† has the same eigenvectors as A but the complex conjugate eigenvalues.

II. Now, considering more than one eigenvector–eigenvalue pair, we have

    A|xi⟩ = λi|xi⟩,                                (3.181)
    A|xj⟩ = λj|xj⟩.                                (3.182)

Multiplying Eq. (3.182) from the left by ⟨xi| yields

    ⟨xi|A|xj⟩ = λj⟨xi|xj⟩.                         (3.183)

Taking the adjoint of Eq. (3.181), we obtain

    ⟨xi|A = (A†|xi⟩)†.                             (3.184)

From Eq. (3.180), with A† having the same eigenvectors as A but the complex conjugate eigenvalues,

    (A†|xi⟩)† = (λi*|xi⟩)† = λi⟨xi|.               (3.185)

Substituting into Eq. (3.183) we have

    λi⟨xi|xj⟩ = λj⟨xi|xj⟩   or   (λi − λj)⟨xi|xj⟩ = 0.   (3.186)

This is the same as Eq. (3.149). For λi ≠ λj,

    ⟨xj|xi⟩ = 0.

The eigenvectors corresponding to different eigenvalues of a normal matrix are orthogonal. This means that a normal matrix may be diagonalized by a unitary transformation. The required unitary matrix may be constructed from the orthonormal eigenvectors as shown earlier, in Section 3.5. The converse of this result is also true: if A can be diagonalized by a unitary transformation, then A is normal.

Normal Modes of Vibration

We consider the vibrations of a classical model of the CO2 molecule. It is an illustration of the application of matrix techniques to a problem that does not start as a matrix problem. It also provides an example of the eigenvalues and eigenvectors of an asymmetric real matrix.

Example 3.6.1 NORMAL MODES

Consider three masses on the x-axis joined by springs as shown in Fig. 3.7.
The spring forces are assumed to be linear (small displacements, Hooke's law), and each mass is constrained to stay on the x-axis. Using a different coordinate for each mass, Newton's second law yields the set of equations

    ẍ1 = −(k/M)(x1 − x2),
    ẍ2 = −(k/m)(x2 − x1) − (k/m)(x2 − x3),         (3.187)
    ẍ3 = −(k/M)(x3 − x2).

The system of masses is vibrating. We seek the common frequencies, ω, such that all masses vibrate at the same frequency. These are the normal modes. Let

    xi = xi⁰ e^{iωt},   i = 1, 2, 3.

Substituting this set into Eq. (3.187), we may rewrite it as

    (  k/M   −k/M    0  ) (x1)        (x1)
    ( −k/m   2k/m  −k/m ) (x2)  = ω²  (x2),        (3.188)
    (   0    −k/M   k/M ) (x3)        (x3)

with the common factor e^{iωt} divided out. We have a matrix–eigenvalue equation with the matrix asymmetric. The secular equation is

    | k/M − ω²    −k/M         0       |
    |  −k/m      2k/m − ω²    −k/m     |  = 0.     (3.189)
    |   0         −k/M       k/M − ω²  |

FIGURE 3.7 Double oscillator.

This leads to

    ω² (k/M − ω²)(ω² − 2k/m − k/M) = 0.

The eigenvalues are

    ω² = 0,   k/M,   k/M + 2k/m,

all real. The corresponding eigenvectors are determined by substituting the eigenvalues back into Eq. (3.188) one eigenvalue at a time. For ω² = 0, Eq. (3.188) yields

    x1 − x2 = 0,   −x1 + 2x2 − x3 = 0,   −x2 + x3 = 0.

Then we get

    x1 = x2 = x3.

This describes pure translation with no relative motion of the masses and no vibration.

For ω² = k/M, Eq. (3.188) yields

    x1 = −x3,   x2 = 0.

The two outer masses are moving in opposite directions. The central mass is stationary.

For ω² = k/M + 2k/m, the eigenvector components are

    x1 = x3,   x2 = −(2M/m) x1.

The two outer masses are moving together. The central mass is moving opposite to the two outer ones. The net momentum is zero. Any displacement of the three masses along the x-axis can be described as a linear combination of these three types of motion: translation plus two forms of vibration.
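The three normal modes can be reproduced numerically. In the sketch below (not part of the text; the values k = m = 1, M = 2 are illustrative choices), the asymmetric matrix of Eq. (3.188) is built and its eigenvalues are checked against ω² = 0, k/M, k/M + 2k/m.

```python
import numpy as np

k, m, M = 1.0, 1.0, 2.0   # spring constant and masses (illustrative values)

# The asymmetric dynamical matrix of Eq. (3.188); eigenvalues are omega^2.
D = np.array([[ k/M,  -k/M,   0.0 ],
              [-k/m,  2*k/m, -k/m ],
              [ 0.0,  -k/M,   k/M ]])

w2, modes = np.linalg.eig(D)
order = np.argsort(w2.real)
w2 = w2.real[order]
modes = modes.real[:, order]

expected = np.sort([0.0, k/M, k/M + 2*k/m])

# omega^2 = 0: pure translation, x1 = x2 = x3.
translation = modes[:, 0] / modes[0, 0]
# omega^2 = k/M: outer masses opposed, central mass at rest.
breathing = modes[:, 1]
# omega^2 = k/M + 2k/m: x1 = x3, x2 = -(2M/m) x1 = -4 x1 for these values.
stretch = modes[:, 2] / modes[0, 2]
```
Note, in keeping with the section's point about asymmetric matrices, that D here is not normal, so its eigenvectors need not be orthogonal even though its eigenvalues are real.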
Ill-Conditioned Systems

A system of simultaneous linear equations may be written as

    A|x⟩ = |y⟩   or   A⁻¹|y⟩ = |x⟩,                (3.190)

with A and |y⟩ known and |x⟩ unknown. When a small error in |y⟩ results in a larger error in |x⟩, the matrix A is called ill-conditioned. With |δx⟩ an error in |x⟩ and |δy⟩ an error in |y⟩, the relative errors satisfy

    (⟨δx|δx⟩ / ⟨x|x⟩)^{1/2} ≤ K(A) (⟨δy|δy⟩ / ⟨y|y⟩)^{1/2}.   (3.191)

Here K(A), a property of matrix A, is labeled the condition number. For A Hermitian one form of the condition number is given by²⁸

    K(A) = |λ|max / |λ|min.                        (3.192)

An approximate form due to Turing²⁹ is

    K(A) = n [Aij]max [A⁻¹ij]max,                  (3.193)

in which n is the order of the matrix and [Aij]max is the maximum element in A.

²⁸ G. E. Forsythe and C. B. Moler, Computer Solution of Linear Algebraic Systems. Englewood Cliffs, NJ: Prentice-Hall (1967).

Example 3.6.2 AN ILL-CONDITIONED MATRIX

A common example of an ill-conditioned matrix is the Hilbert matrix, Hij = (i + j − 1)⁻¹. The Hilbert matrix of order 4, H4, is encountered in a least-squares fit of data to a third-degree polynomial. We have

    H4 = [[1,   1/2, 1/3, 1/4],
          [1/2, 1/3, 1/4, 1/5],
          [1/3, 1/4, 1/5, 1/6],
          [1/4, 1/5, 1/6, 1/7]].                   (3.194)

The elements of the inverse matrix (order n) are given by

    (Hn⁻¹)ij = [(−1)^{i+j} / (i + j − 1)] · (n + i − 1)! (n + j − 1)! / {[(i − 1)! (j − 1)!]² (n − i)! (n − j)!}.   (3.195)

For n = 4,

    H4⁻¹ = [[  16,  −120,   240,  −140],
            [−120,  1200, −2700,  1680],
            [ 240, −2700,  6480, −4200],
            [−140,  1680, −4200,  2800]].          (3.196)

From Eq. (3.193) the Turing estimate of the condition number for H4 becomes

    K_Turing = 4 × 1 × 6480 = 2.59 × 10⁴.

This is a warning that an input error may be multiplied by 26,000 in the calculation of the output result. It is a statement that H4 is ill-conditioned. If you encounter a highly ill-conditioned system, you have two alternatives (besides abandoning the problem):

(a) Try a different mathematical attack.
(b) Arrange to carry more significant figures and push through by brute force.

As previously seen, matrix eigenvector–eigenvalue techniques are not limited to the solution of strictly matrix problems. A further example of the transfer of techniques from one area to another is seen in the application of matrix techniques to the solution of Fredholm eigenvalue integral equations, Section 16.3. In turn, these matrix techniques are strengthened by a variational calculation of Section 17.8.

²⁹ Compare J. Todd, The Condition of the Finite Segments of the Hilbert Matrix, Applied Mathematics Series No. 313. Washington, DC: National Bureau of Standards.

Exercises

3.6.1  Show that every 2 × 2 matrix has two eigenvectors and corresponding eigenvalues. The eigenvectors are not necessarily orthogonal and may be degenerate. The eigenvalues are not necessarily real.

3.6.2  As an illustration of Exercise 3.6.1, find the eigenvalues and corresponding eigenvectors for
           [[2, 4], [1, 2]].
       Note that the eigenvectors are not orthogonal.
           ANS. λ1 = 0, r1 = (2, −1); λ2 = 4, r2 = (2, 1).

3.6.3  If A is a 2 × 2 matrix, show that its eigenvalues λ satisfy the secular equation
           λ² − λ trace(A) + det A = 0.

3.6.4  Assuming a unitary matrix U to satisfy an eigenvalue equation Ur = λr, show that the eigenvalues of the unitary matrix have unit magnitude. This same result holds for real orthogonal matrices.

3.6.5  Since an orthogonal matrix describing a rotation in real three-dimensional space is a special case of a unitary matrix, such an orthogonal matrix can be diagonalized by a unitary transformation.
       (a) Show that the sum of the three eigenvalues is 1 + 2 cos ϕ, where ϕ is the net angle of rotation about a single fixed axis.
       (b) Given that one eigenvalue is 1, show that the other two eigenvalues must be e^{iϕ} and e^{−iϕ}.
       Our orthogonal rotation matrix (real elements) has complex eigenvalues.
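Returning to the ill-conditioned Hilbert matrix of Example 3.6.2 above, both condition-number estimates, Eqs. (3.192) and (3.193), are easy to evaluate numerically. The sketch below is an added illustration, not part of the text.

```python
import numpy as np

n = 4
# Hilbert matrix of order 4: H_ij = 1 / (i + j - 1), i, j = 1..4.
H4 = np.array([[1.0 / (i + j - 1) for j in range(1, n + 1)]
               for i in range(1, n + 1)])

# Eq. (3.192): K(A) = |lambda|_max / |lambda|_min (H4 is Hermitian).
lam = np.linalg.eigvalsh(H4)
K_spectral = abs(lam).max() / abs(lam).min()

# Eq. (3.193): Turing's estimate K = n [A_ij]_max [A^-1_ij]_max.
H4inv = np.linalg.inv(H4)
K_turing = n * H4.max() * H4inv.max()
```
The Turing estimate reproduces the text's value 4 × 1 × 6480 = 25920 ≈ 2.59 × 10⁴; the spectral ratio of Eq. (3.192) is of the same order of magnitude.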
3.6.6  A is an nth-order Hermitian matrix with orthonormal eigenvectors |xi⟩ and real eigenvalues λ1 ≤ λ2 ≤ λ3 ≤ · · · ≤ λn. Show that for a unit magnitude vector |y⟩,
           λ1 ≤ ⟨y|A|y⟩ ≤ λn.

3.6.7  A particular matrix is both Hermitian and unitary. Show that its eigenvalues are all ±1.
       Note. The Pauli and Dirac matrices are specific examples.

3.6.8  For his relativistic electron theory Dirac required a set of four anticommuting matrices. Assume that these matrices are to be Hermitian and unitary. If these are n × n matrices, show that n must be even. With 2 × 2 matrices inadequate (why?), this demonstrates that the smallest possible matrices forming a set of four anticommuting, Hermitian, unitary matrices are 4 × 4.

3.6.9  A is a normal matrix with eigenvalues λn and orthonormal eigenvectors |xn⟩. Show that A may be written as
           A = Σn λn |xn⟩⟨xn|.
       Hint. Show that both this eigenvector form of A and the original A give the same result acting on an arbitrary vector |y⟩.

3.6.10 A has eigenvalues 1 and −1 and corresponding eigenvectors (1, 0) and (0, 1). Construct A.
           ANS. A = [[1, 0], [0, −1]].

3.6.11 A non-Hermitian matrix A has eigenvalues λi and corresponding eigenvectors |ui⟩. The adjoint matrix A† has the same set of eigenvalues but different corresponding eigenvectors, |vi⟩. Show that the eigenvectors form a biorthogonal set, in the sense that
           ⟨vi|uj⟩ = 0   for   λi* ≠ λj.

3.6.12 You are given a pair of equations:
           A|fn⟩ = λn|gn⟩,
           Ã|gn⟩ = λn|fn⟩,
       with A real.
       (a) Prove that |fn⟩ is an eigenvector of (ÃA) with eigenvalue λn².
       (b) Prove that |gn⟩ is an eigenvector of (AÃ) with eigenvalue λn².
       (c) State how you know that
           (1) the |fn⟩ form an orthogonal set,
           (2) the |gn⟩ form an orthogonal set,
           (3) λn² is real.

3.6.13 Prove that A of the preceding exercise may be written as
           A = Σn λn |gn⟩⟨fn|,
       with the |gn⟩ and ⟨fn| normalized to unity.
       Hint. Expand your arbitrary vector as a linear combination of |fn⟩.
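The biorthogonal decomposition of Exercises 3.6.12–3.6.13 can be verified numerically; in modern language it is a singular value decomposition. The sketch below (added here; it uses the matrix A = 5^{−1/2} [[2, 2], [1, −4]] of Exercise 3.6.14) builds the |fn⟩ and |gn⟩ and checks A = Σn λn |gn⟩⟨fn|.

```python
import numpy as np

A = np.array([[2.0,  2.0],
              [1.0, -4.0]]) / np.sqrt(5)

AtA = A.T @ A        # has eigenvectors |f_n>, eigenvalues lambda_n^2
AAt = A @ A.T        # has eigenvectors |g_n>, eigenvalues lambda_n^2

lam2, F = np.linalg.eigh(AtA)
lam = np.sqrt(lam2)                 # lambda_n = 1, 2 for this matrix

# Build |g_n> = A|f_n> / lambda_n (divide column n by lambda_n).
G = A @ F / lam

# Verify the reverse relation A~|g_n> = lambda_n |f_n>.
back_ok = np.allclose(A.T @ G, F * lam)

# Rebuild A from the expansion A = sum_n lambda_n |g_n><f_n|.
A_rebuilt = sum(lam[n] * np.outer(G[:, n], F[:, n]) for n in range(2))
```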
3.6.14 Given 1 A= √ 5   2 2 , 1 −4 ˜ and the symmetric forms AA ˜ and AA. ˜ (a) Construct the transpose A 2 ˜ (b) From AA|gn  = λn |gn  find λn and |gn . Normalize the |gn . ˜ n  = λ2 |gn  find λn [same as (b)] and |fn . Normalize the |fn . (c) From AA|f n ˜ n  = λn |fn . = λn |gn  and A|g (d) Verify that A|fn  (e) Verify that A = n λn |gn fn |. 238 Chapter 3 Determinants and Matrices 3.6.15 Given the eigenvalues λ1 = 1, λ2 = −1 and the corresponding eigenvectors         1 1 1 1 0 1 |f1  = , |g1  = √ , and |g2  = √ , |f2  = , 0 1 2 1 2 −1 (a) (b) (c) construct A; verify that A|fn  = λn |gn ; ˜ n  = λn |fn . verify that A|g 1 ANS. A = √ 2 3.6.16   1 −1 . 1 1 This is a continuation of Exercise 3.4.12, where the unitary matrix U and the Hermitian matrix H are related by U = eiaH . (a) If trace H = 0, show that det U = +1. (b) If det U = +1, show that trace H = 0. Hint. H may be diagonalized by a similarity transformation. Then interpreting the exponential by a Maclaurin expansion, U is also diagonal. The corresponding eigenvalues are given by uj = exp(iahj ). Note. These properties, and those of Exercise 3.4.12, are vital in the development of the concept of generators in group theory — Section 4.2. 3.6.17 An n × n matrix A has n eigenvalues Ai . If B = eA , show that B has the same eigenvectors as A, with the corresponding eigenvalues Bi given by Bi = exp(Ai ). Note. eA is defined by the Maclaurin expansion of the exponential: A2 A3 + + ··· . 2! 3! A matrix P is a projection operator (see the discussion following Eq. (3.138c)) satisfying the condition eA = 1 + A + 3.6.18 P2 = P. Show that the corresponding eigenvalues (ρ 2 )λ and ρλ satisfy the relation  2 ρ λ = (ρλ )2 = ρλ . This means that the eigenvalues of P are 0 and 1. 3.6.19 In the matrix eigenvector–eigenvalue equation A|ri  = λi |ri , A is an n × n Hermitian matrix. For simplicity assume that its n real eigenvalues are distinct, λ1 being the largest. 
If |r is an approximation to |r1 , |r = |r1  + n  i=2 δi |ri , 3.6 Additional Readings 239 FIGURE 3.8 Triple oscillator. show that r|A|r ≤ λ1 r|r and that the error in λ1 is of the order |δi |2 . Take |δi | ≪ 1. Hint. The n |ri  form a complete orthogonal set spanning the n-dimensional (complex) space. 3.6.20 Two equal masses are connected to each other and to walls by springs as shown in Fig. 3.8. The masses are constrained to stay on a horizontal line. (a) (b) (c) 3.6.21 Set up the Newtonian acceleration equation for each mass. Solve the secular equation for the eigenvectors. Determine the eigenvectors and thus the normal modes of motion. Given a normal matrix A with eigenvalues λj , show that A† has eigenvalues λ∗j , its real part (A + A† )/2 has eigenvalues ℜ(λj ), and its imaginary part (A − A† )/2i has eigenvalues ℑ(λj ). Additional Readings Aitken, A. C., Determinants and Matrices. New York: Interscience (1956). Reprinted, Greenwood (1983). A readable introduction to determinants and matrices. Barnett, S., Matrices: Methods and Applications. Oxford: Clarendon Press (1990). Bickley, W. G., and R. S. H. G. Thompson, Matrices — Their Meaning and Manipulation. Princeton, NJ: Van Nostrand (1964). A comprehensive account of matrices in physical problems, their analytic properties, and numerical techniques. Brown, W. C., Matrices and Vector Spaces. New York: Dekker (1991). Gilbert, J. and L., Linear Algebra and Matrix Theory. San Diego: Academic Press (1995). Heading, J., Matrix Theory for Physicists. London: Longmans, Green and Co. (1958). A readable introduction to determinants and matrices, with applications to mechanics, electromagnetism, special relativity, and quantum mechanics. Vein, R., and P. Dale, Determinants and Their Applications in Mathematical Physics. Berlin: Springer (1998). Watkins, D. S., Fundamentals of Matrix Computations. New York: Wiley (1991). 
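Exercise 3.6.21 (the eigenvalues of A†, of the Hermitian part (A + A†)/2, and of (A − A†)/2i for a normal A) admits a quick numerical check. In the sketch below the real circulant test matrix is an added illustration, chosen because circulant matrices are normal without being Hermitian.

```python
import numpy as np

# A real circulant matrix: normal ([A, A†] = 0) but not Hermitian,
# so it has genuinely complex eigenvalues.
A = np.array([[1.0, 2.0, 0.0],
              [0.0, 1.0, 2.0],
              [2.0, 0.0, 1.0]])
is_normal = np.allclose(A @ A.conj().T, A.conj().T @ A)

lam = np.linalg.eigvals(A)

# Hermitian "real part" and "imaginary part" of A.
herm = (A + A.conj().T) / 2
anti = (A - A.conj().T) / 2j

# Their eigenvalues are Re(lambda_j) and Im(lambda_j), respectively.
re_match = np.allclose(np.sort(np.linalg.eigvalsh(herm)),
                       np.sort(lam.real))
im_match = np.allclose(np.sort(np.linalg.eigvalsh(anti)),
                       np.sort(lam.imag))
```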
CHAPTER 4

GROUP THEORY

Disciplined judgment about what is neat and symmetrical and elegant has time and time again proved an excellent guide to how nature works.
— MURRAY GELL-MANN

4.1 INTRODUCTION TO GROUP THEORY

In classical mechanics the symmetry of a physical system leads to conservation laws. Conservation of angular momentum is a direct consequence of rotational symmetry, which means invariance under spatial rotations. In the first third of the 20th century, Wigner and others realized that invariance was a key concept in understanding the new quantum phenomena and in developing appropriate theories. Thus, in quantum mechanics the concept of angular momentum and spin has become even more central. Its generalizations, isospin in nuclear physics and flavor symmetry in particle physics, are indispensable tools in building and solving theories. Generalizations of the concept of gauge invariance of classical electrodynamics to isospin symmetry lead to the electroweak gauge theory.

In each case the set of these symmetry operations forms a group. Group theory is the mathematical tool to treat invariants and symmetries. It brings unification and formalization of principles, such as spatial reflections (parity), angular momentum, and geometry, that are widely used by physicists.

In geometry the fundamental role of group theory was recognized more than a century ago by mathematicians (e.g., Felix Klein's Erlanger Program). In Euclidean geometry the distance between two points, the scalar product of two vectors or metric, does not change under rotations or translations. These symmetries are characteristic of this geometry. In special relativity the metric, or scalar product of four-vectors, differs from that of Euclidean geometry in that it is no longer positive definite and is invariant under Lorentz transformations.
For a crystal the symmetry group contains only a finite number of rotations at discrete values of angles, or reflections. The theory of such discrete or finite groups, developed originally as a branch of pure mathematics, now is a useful tool for the development of crystallography and condensed matter physics. A brief introduction to this area appears in Section 4.7. When the rotations depend on continuously varying angles (the Euler angles of Section 3.3) the rotation groups have an infinite number of elements. Such continuous (or Lie¹) groups are the topic of Sections 4.2–4.6. In Section 4.8 we give an introduction to differential forms, with applications to Maxwell's equations and topics of Chapters 1 and 2, which allows seeing these topics from a different perspective.

Definition of a Group

A group G may be defined as a set of objects or operations (rotations, transformations), called the elements of G, that may be combined, or "multiplied," to form a well-defined product in G, denoted by a ∗ b, that satisfies the following four conditions.

1. If a and b are any two elements of G, then the product a ∗ b is also an element of G, where b acts before a; that is, (a, b) → a ∗ b associates (or maps) an element a ∗ b of G with the pair (a, b) of elements of G. This property is known as "G is closed under multiplication of its own elements."

2. This multiplication is associative: (a ∗ b) ∗ c = a ∗ (b ∗ c).

3. There is a unit element² 1 in G such that 1 ∗ a = a ∗ 1 = a for every element a in G. The unit is unique: 1 = 1′ ∗ 1 = 1′.

4. There is an inverse, or reciprocal, of each element a of G, labeled a⁻¹, such that a ∗ a⁻¹ = a⁻¹ ∗ a = 1. The inverse is unique: if a⁻¹ and a′⁻¹ are both inverses of a, then a′⁻¹ = a′⁻¹ ∗ (a ∗ a⁻¹) = (a′⁻¹ ∗ a) ∗ a⁻¹ = a⁻¹.

Since the symbol ∗ for multiplication is tedious to write, it is customary to drop it and simply let it be understood. From now on, we write ab instead of a ∗ b.
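The four axioms can be checked mechanically for a small concrete group. The sketch below (an added illustration) tests them for the cyclic group of planar rotations by multiples of 2π/n, represented by 2 × 2 matrices, and also verifies that multiplying two rotations adds their angles.

```python
import numpy as np
from itertools import product

def R(phi):
    """2x2 rotation matrix through angle phi."""
    c, s = np.cos(phi), np.sin(phi)
    return np.array([[c, s], [-s, c]])

n = 6
G = [R(2 * np.pi * k / n) for k in range(n)]   # cyclic group of order 6

def index_of(M):
    """Return the index of the group element equal to M, or None."""
    return next((k for k, g in enumerate(G) if np.allclose(M, g)), None)

# 1. Closure: every product of two elements lands back in G.
closure = all(index_of(a @ b) is not None for a, b in product(G, G))
# 2. Associativity holds automatically for matrix multiplication.
# 3. Unit element: G[0] is the identity matrix.
has_unit = np.allclose(G[0], np.eye(2))
# 4. Inverses: for a rotation the inverse is the transpose, and it is in G.
has_inverses = all(index_of(g.T) is not None for g in G)

# The group is abelian: multiplying rotations adds the angles.
addition = np.allclose(R(0.3) @ R(0.5), R(0.8))
```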
• If a subset G′ of G is closed under multiplication, it is a group and called a subgroup of G; that is, G′ is closed under the multiplication of G. The unit of G always forms a subgroup of G. • If gg ′ g −1 is an element of G′ for any g of G and g ′ of G′ , then G′ is called an invariant subgroup of G. The subgroup consisting of the unit is invariant. If the group elements are square matrices, then gg ′ g −1 corresponds to a similarity transformation (see Eq. (3.100)). • If ab = ba for all a, b of G, the group is called abelian, that is, the order in products does not matter; commutative multiplication is often denoted by a + sign. Examples are vector spaces whose unit is the zero vector and −a is the inverse of a for all elements a in G. 1 After the Norwegian mathematician Sophus Lie. 2 Following E. Wigner, the unit element of a group is often labeled E, from the German Einheit, that is, unit, or just 1, or I for identity. 4.1 Introduction to Group Theory Example 4.1.1 243 ORTHOGONAL AND UNITARY GROUPS Orthogonal n × n matrices form the group O(n), and SO(n) if their determinants are +1 ˜ i = O−1 for i = 1 and 2 (see Section 3.3 for orthogonal (S stands for “special”). If O i matrices) are elements of O(n), then the product −1 −1 −1 ˜ ˜  O 1 O2 = O2 O1 = O2 O1 = (O1 O2 ) is also an orthogonal matrix in O(n), thus proving closure under (matrix) multiplication. The inverse is the transpose (orthogonal) matrix. The unit of the group is the n-dimensional unit matrix 1n . A real orthogonal n × n matrix has n(n − 1)/2 independent parameters. For n = 2, there is only one parameter: one angle. For n = 3, there are three independent parameters: the three Euler angles of Section 3.3. ˜ i = O−1 (for i = 1 and 2) are elements of SO(n), then closure requires proving in If O i addition that their product has determinant +1, which follows from the product theorem in Chapter 3. Likewise, unitary n × n matrices form the group U(n), and SU(n) if their determinants are +1. 
If U†i = U−1 i (see Section 3.4 for unitary matrices) are elements of U(n), then −1 −1 (U1 U2 )† = U†2 U†1 = U−1 2 U1 = (U1 U2 ) , so the product is unitary and an element of U(n), thus proving closure under multiplication. Each unitary matrix has an inverse (its Hermitian adjoint), which again is unitary. If U†i = U−1 i are elements of SU(n), then closure requires us to prove that their product also has determinant +1, which follows from the product theorem in Chapter 3.  • Orthogonal groups are called Lie groups; that is, they depend on continuously varying parameters (the Euler angles and their generalization for higher dimensions); they are compact because the angles vary over closed, finite intervals (containing the limit of any converging sequence of angles). Unitary groups are also compact. Translations form a noncompact group because the limit of translations with distance d → ∞ is not part of the group. The Lorentz group is not compact either. Homomorphism, Isomorphism There may be a correspondence between the elements of two groups: one-to-one, two-toone, or many-to-one. If this correspondence preserves the group multiplication, we say that the two groups are homomorphic. A most important homomorphic correspondence between the rotation group SO(3) and the unitary group SU(2) is developed in Section 4.2. If the correspondence is one-to-one, still preserving the group multiplication,3 then the groups are isomorphic. • If a group G is homomorphic to a group of matrices G′ , then G′ is called a representation of G. If G and G′ are isomorphic, the representation is called faithful. There are many representations of groups; they are not unique. 3 Suppose the elements of one group are labeled g , the elements of a second group h . Then g ↔ h is a one-to-one corresponi i i i dence for all values of i. If gi gj = gk and hi hj = hk , then gk and hk must be the corresponding group elements. 
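The closure arguments of Example 4.1.1 can be spot-checked numerically: the product of two orthogonal matrices is orthogonal, its inverse is the product of transposes in reverse order, and determinants multiply. The sketch below is an added illustration; constructing random orthogonal matrices from a QR decomposition is one convenient choice, not something the text prescribes.

```python
import numpy as np

rng = np.random.default_rng(0)

def random_orthogonal(n):
    # The Q factor of a QR decomposition is an orthogonal matrix.
    q, _ = np.linalg.qr(rng.standard_normal((n, n)))
    return q

O1, O2 = random_orthogonal(3), random_orthogonal(3)
P = O1 @ O2

# Closure: the product is again orthogonal, P~P = 1.
closed = np.allclose(P.T @ P, np.eye(3))

# (O1 O2)^-1 = O2^-1 O1^-1 = O2~ O1~, as used in the text.
inverse_rule = np.allclose(np.linalg.inv(P), O2.T @ O1.T)

# det P = det O1 * det O2, so SO(n) (determinant +1) is also closed.
det_rule = np.isclose(np.linalg.det(P),
                      np.linalg.det(O1) * np.linalg.det(O2))
```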
Example 4.1.2 ROTATIONS

Another instructive example of a group is the set of counterclockwise coordinate rotations of three-dimensional Euclidean space about its z-axis. From Chapter 3 we know that such a rotation is described by a linear transformation of the coordinates involving a 3 × 3 matrix made up of three rotations depending on the Euler angles. If the z-axis is fixed, the linear transformation is through an angle ϕ of the xy-coordinate system to a new orientation (Eq. (1.8), Fig. 1.6, and Section 3.3):

    (x′)              (x)     ( cos ϕ   sin ϕ   0) (x)
    (y′) = Rz(ϕ) (y)  ≡  (−sin ϕ   cos ϕ   0) (y)      (4.1)
    (z′)              (z)     (   0       0     1) (z)

and involves only one angle, that of the rotation about the z-axis. As shown in Chapter 3, the linear transformation of two successive rotations involves the product of the matrices corresponding to the sum of the angles. The product corresponds to two rotations, Rz(ϕ1)Rz(ϕ2), and is defined by rotating first by the angle ϕ2 and then by ϕ1. According to Eq. (3.29), this corresponds to the product of the orthogonal 2 × 2 submatrices,

    ( cos ϕ1   sin ϕ1) ( cos ϕ2   sin ϕ2)     ( cos(ϕ1 + ϕ2)   sin(ϕ1 + ϕ2))
    (−sin ϕ1   cos ϕ1) (−sin ϕ2   cos ϕ2)  =  (−sin(ϕ1 + ϕ2)   cos(ϕ1 + ϕ2)),   (4.2)

using the addition formulas for the trigonometric functions. The unity in the lower right-hand corner of the matrix in Eq. (4.1) is also reproduced upon multiplication. The product is clearly a rotation, represented by the orthogonal matrix with angle ϕ1 + ϕ2. The associative group multiplication corresponds to the associative matrix multiplication. It is commutative, or abelian, because the order in which these rotations are performed does not matter. The inverse of the rotation with angle ϕ is that with angle −ϕ. The unit corresponds to the angle ϕ = 0. Striking off the coordinate vectors in Eq.
(4.1), we can associate the matrix of the linear transformation with each rotation, which is a group multiplication preserving one-to-one mapping, an isomorphism: The matrices form a faithful representation of the rotation group. The unity in the right-hand corner is superfluous as well, like the coordinate vectors, and may be deleted. This defines another isomorphism and representation by the 2 × 2 submatrices:   ! cos ϕ sin ϕ 0 cos ϕ sin ϕ   . (4.3) Rz (ϕ) =  − sin ϕ cos ϕ 0  → R(ϕ) = − sin ϕ cos ϕ 0 0 1 The group’s name is SO(2), if the angle ϕ varies continuously from 0 to 2π ; SO(2) has infinitely many elements and is compact. The group of rotations Rz is obviously isomorphic to the group of rotations in Eq. (4.3). The unity with angle ϕ = 0 and the rotation with ϕ = π form a finite subgroup. The finite subgroups with angles 2πm/n, n an integer and m = 0, 1, . . . , n − 1 are cyclic; that is, the  rotations R(2πm/n) = R(2π/n)m . 4.1 Introduction to Group Theory 245 In the following we shall discuss only the rotation groups SO(n) and unitary groups SU(n) among the classical Lie groups. (More examples of finite groups will be given in Section 4.7.) Representations — Reducible and Irreducible The representation of group elements by matrices is a very powerful technique and has been almost universally adopted by physicists. The use of matrices imposes no significant restriction. It can be shown that the elements of any finite group and of the continuous groups of Sections 4.2–4.4 may be represented by matrices. Examples are the rotations described in Eq. (4.3). To illustrate how matrix representations arise from a symmetry, consider the stationary Schrödinger equation (or some other eigenvalue equation, such as Ivi = Ii vi for the principal moments of inertia of a rigid body in classical mechanics, say), H ψ = Eψ. 
(4.4) Let us assume that the Hamiltonian H stays invariant under a group G of transformations R in G (coordinate rotations, for example, for a central potential V (r) in the Hamiltonian H ); that is, HR = RH R−1 = H, RH = H R. (4.5) Now take a solution ψ of Eq. (4.4) and “rotate” it: ψ → Rψ. Then Rψ has the same energy E because multiplying Eq. (4.4) by R and using Eq. (4.5) yields  RH ψ = E(Rψ) = RH R−1 Rψ = H (Rψ). (4.6) In other words, all rotated solutions Rψ are degenerate in energy or form what physicists call a multiplet. For example, the spin-up and -down states of a bound electron in the ground state of hydrogen form a doublet, and the states with projection quantum numbers m = −l, −l + 1, . . . , l of orbital angular momentum l form a multiplet with 2l + 1 basis states. Let us assume that this vector space Vψ of transformed solutions has a finite dimension n. Let ψ1 , ψ2 , . . . , ψn be a basis. Since Rψj is a member of the multiplet, we can expand it in terms of its basis,  rj k ψk . (4.7) Rψj = k Thus, with each R in G we can associate a matrix (rj k ). Just as in Example 4.1.2, two successive rotations correspond to the product of their matrices, so this map R → (rj k ) is a representation of G. It is necessary for a representation to be irreducible that we can take any element of Vψ and, by rotating with all elements R of G, transform it into all other elements of Vψ . If not all elements of Vψ are reached, then Vψ splits into a direct sum of two or more vector subspaces, Vψ = V1 ⊕ V2 ⊕ · · · , which are mapped into themselves by rotating their elements. For example, the 2s state and 2p states of principal quantum number n = 2 of the hydrogen atom have the same energy (that is, are degenerate) and form 246 Chapter 4 Group Theory a reducible representation, because the 2s state cannot be rotated into the 2p states, and vice versa (angular momentum is conserved under rotations). In this case the representation is called reducible. 
Then we can find a basis in Vψ (that is, there is a unitary matrix U) so that   r1 0 · · ·   0 r2 · · ·  U(rj k )U† =  (4.8)   .. .. . . for all R of G, and all matrices (rj k ) have similar block-diagonal shape. Here r1 , r2 , . . . are matrices of lower dimension than (rj k ) that are lined up along the diagonal and the 0’s are matrices made up of zeros. We may say that the representation has been decomposed into r1 + r2 + · · · along with Vψ = V1 ⊕ V2 ⊕ · · · . The irreducible representations play a role in group theory that is roughly analogous to the unit vectors of vector analysis. They are the simplest representations; all others can be built from them. (See Section 4.4 on Clebsch–Gordan coefficients and Young tableaux.) Exercises 4.1.1 Show that an n × n orthogonal matrix has n(n − 1)/2 independent parameters. Hint. The orthogonality condition, Eq. (3.71), provides constraints. 4.1.2 Show that an n × n unitary matrix has n2 − 1 independent parameters. Hint. Each element may be complex, doubling the number of possible parameters. Some of the constraint equations are likewise complex and count as two constraints. 4.1.3 The special linear group SL(2) consists of all 2 × 2 matrices (with complex elements) having a determinant of +1. Show that such matrices form a group. Note. The SL(2) group can be related to the full Lorentz group in Section 4.4, much as the SU(2) group is related to SO(3). 4.1.4 Show that the rotations about the z-axis form a subgroup of SO(3). Is it an invariant subgroup? 4.1.5 Show that if R, S, T are elements of a group G so that RS = T and R → (rik ), S → (sik ) is a representation according to Eq. (4.7), then    (rik )(sik ) = tik = rin snk , n that is, group multiplication translates into matrix multiplication for any group representation. 
4.2 GENERATORS OF CONTINUOUS GROUPS A characteristic property of continuous groups known as Lie groups is that the parameters of a product element are analytic functions4 of the parameters of the factors. The analytic 4 Analytic here means having derivatives of all orders. 4.2 Generators of Continuous Groups 247 nature of the functions (differentiability) allows us to develop the concept of generator and to reduce the study of the whole group to a study of the group elements in the neighborhood of the identity element. Lie’s essential idea was to study elements R in a group G that are infinitesimally close to the unity of G. Let us consider the SO(2) group as a simple example. The 2 × 2 rotation matrices in Eq. (4.2) can be written in exponential form using the Euler identity, Eq. (3.170a), as ! cos ϕ sin ϕ (4.9) = 12 cos ϕ + iσ2 sin ϕ = exp(iσ2 ϕ). R(ϕ) = − sin ϕ cos ϕ From the exponential form it is obvious that multiplication of these matrices is equivalent to addition of the arguments  R(ϕ2 )R(ϕ1 ) = exp(iσ2 ϕ2 ) exp(iσ2 ϕ1 ) = exp iσ2 (ϕ1 + ϕ2 ) = R(ϕ1 + ϕ2 ). Rotations close to 1 have small angle ϕ ≈ 0. This suggests that we look for an exponential representation  ε → 0, R = exp(iεS) = 1 + iεS + O ε 2 , (4.10) for group elements R in G close to the unity 1. The infinitesimal transformations are εS, and the S are called generators of G. They form a linear space because multiplication of the group elements R translates into addition of generators S. The dimension of this vector space (over the complex numbers) is the order of G, that is, the number of linearly independent generators of the group. If R is a rotation, it does not change the volume element of the coordinate space that it rotates, that is, det(R) = 1, and we may use Eq. (3.171) to see that   det(R) = exp trace(ln R) = exp iε trace(S) = 1 implies ε trace(S) = 0 and, upon dividing by the small but nonzero parameter ε, that generators are traceless, trace(S) = 0. 
This is the case not only for the rotation groups SO(n) but also for the unitary groups SU(n). If R of G in Eq. (4.10) is unitary, then S† = S is Hermitian, which is also the case for SO(n) and SU(n). This explains why the extra i has been inserted in Eq. (4.10).

Next we go around the unity in four steps, similar to parallel transport in differential geometry. We expand the group elements

    Ri = exp(iεi Si) = 1 + iεi Si − (1/2) εi² Si² + · · · ,
    Ri⁻¹ = exp(−iεi Si) = 1 − iεi Si − (1/2) εi² Si² + · · · ,     (4.12)

to second order in the small group parameter εi because the linear terms and several quadratic terms all cancel in the product (Fig. 4.1)

    Ri⁻¹ Rj⁻¹ Ri Rj = 1 + εi εj [Sj, Si] + · · ·
                    = 1 + εi εj Σk c^k_{ji} Sk + · · · ,          (4.13)

FIGURE 4.1 Illustration of Eq. (4.13).

when Eq. (4.12) is substituted into Eq. (4.13). The last line holds because the product in Eq. (4.13) is again a group element, Rij, close to the unity in the group G. Hence its exponent must be a linear combination of the generators Sk, and its infinitesimal group parameter has to be proportional to the product εi εj. Comparing both lines of Eq. (4.13) we find the closure relation of the generators of the Lie group G,

    [Si, Sj] = Σk c^k_{ij} Sk.                                    (4.14)

The coefficients c^k_{ij} are the structure constants of the group G. Since the commutator in Eq. (4.14) is antisymmetric in i and j, so are the structure constants in the lower indices,

    c^k_{ij} = −c^k_{ji}.                                         (4.15)

If the commutator in Eq. (4.14) is taken as a multiplication law of generators, we see that the vector space of generators becomes an algebra, the Lie algebra G of the group G. An algebra has two group structures, a commutative product denoted by a + symbol (this is the addition of infinitesimal generators of a Lie group) and a multiplication (the commutator of generators). Often an algebra is a vector space with a multiplication, such as a ring of square matrices.
For SU(l + 1) the Lie algebra is called Al , for SO(2l + 1) it is Bl , and for SO(2l) it is Dl , where l = 1, 2, . . . is a positive integer, later called the rank of the Lie group G or of its algebra G. Finally, the Jacobi identity holds for all double commutators    [Si , Sj ], Sk + [Sj , Sk ], Si + [Sk , Si ], Sj = 0, (4.16) which is easily verified using the definition of any commutator [A, B] ≡ AB − BA. When Eq. (4.14) is substituted into Eq. (4.16) we find another constraint on structure constants, ' ( m m cij [Sm , Sk ] + cjmk [Sm , Si ] + cki [Sm , Sj ] = 0. (4.17) m Upon inserting Eq. (4.14) again, Eq. (4.17) implies that ' ( m n n m n cij cmk Sn + cjmk cmi Sn + cki cmj Sn = 0, mn (4.18) 4.2 Generators of Continuous Groups 249 where the common factor Sn (and the sum over n) may be dropped because the generators are linearly independent. Hence ' ( m n n m n cmj = 0. + cki (4.19) cmk + cjmk cmi cij m The relations (4.14), (4.15), and (4.19) form the basis of Lie algebras from which finite elements of the Lie group near its unity can be reconstructed. Returning to Eq. (4.5), the inverse of R is R−1 = exp(−iεS). We expand HR according to the Baker–Hausdorff formula, Eq. (3.172),  H = HR = exp(iεS)H exp(−iεS) = H + iε[S, H ] − 21 ε 2 S[S, H ] + · · · (4.20) We drop H from Eq. (4.20), divide by the small (but nonzero), ε, and let ε → 0. Then Eq. (4.20) implies that the commutator [S, H ] = 0. (4.21) If S and H are Hermitian matrices, Eq. (4.21) implies that S and H can be simultaneously diagonalized and have common eigenvectors (for matrices, see Section 3.5; for operators, see Schur’s lemma in Section 4.3). If S and H are differential operators like the Hamiltonian and orbital angular momentum in quantum mechanics, then Eq. (4.21) implies that S and H have common eigenfunctions and that the degenerate eigenvalues of H can be distinguished by the eigenvalues of the generators S. 
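Two statements above lend themselves to a direct check: the Jacobi identity, Eq. (4.16), which holds identically for any three matrices, and the fact that $[S, H] = 0$ leaves $H$ unchanged under the transformation of Eq. (4.20). A minimal plain-Python sketch; the specific matrices are our arbitrary choices, not from the text.

```python
import cmath

def mul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def comm(A, B):
    AB, BA = mul(A, B), mul(B, A)
    return [[AB[i][j] - BA[i][j] for j in range(len(A))] for i in range(len(A))]

# Jacobi identity, Eq. (4.16), for three arbitrary non-commuting matrices
A, B, C = [[1, 2], [3, 4]], [[0, 1], [1, 0]], [[2, 0], [1, 3]]
total = [[comm(comm(A, B), C)[i][j] + comm(comm(B, C), A)[i][j]
          + comm(comm(C, A), B)[i][j] for j in range(2)] for i in range(2)]
assert all(total[i][j] == 0 for i in range(2) for j in range(2))

# Eqs. (4.20)-(4.21): if [S, H] = 0 then exp(i eps S) H exp(-i eps S) = H.
# Take S = sigma_3 and a diagonal H, which commute.
eps = 0.3
S = [[1, 0], [0, -1]]                     # sigma_3
H = [[2.0, 0], [0, 5.0]]
U = [[cmath.exp(1j * eps), 0], [0, cmath.exp(-1j * eps)]]   # exp(i eps S)
Ud = [[U[0][0].conjugate(), 0], [0, U[1][1].conjugate()]]
HR = mul(mul(U, H), Ud)
assert all(comm(S, H)[i][j] == 0 for i in range(2) for j in range(2))
assert all(abs(HR[i][j] - H[i][j]) < 1e-12 for i in range(2) for j in range(2))
```

Because integer arithmetic is exact here, the Jacobi sum comes out as the exact zero matrix rather than merely small.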
These eigenfunctions and eigenvalues, s, are solutions of separate differential equations, Sψs = sψs , so group theory (that is, symmetries) leads to a separation of variables for a partial differential equation that is invariant under the transformations of the group. For example, let us take the single-particle Hamiltonian h¯ 2 1 ∂ 2 ∂ h¯ 2 2 r + L + V (r) 2 2m r ∂r ∂r 2mr 2 that is invariant under SO(3) and, therefore, a function of the radial distance r, the radial gradient, and the rotationally invariant operator L2 of SO(3). Upon replacing the orbital angular momentum operator L2 by its eigenvalue l(l + 1) we obtain the radial Schrödinger equation (ODE),  h¯ 2 l(l + 1) h¯ 2 1 d 2 d H Rl (r) = − r + + V (r) Rl (r) = El Rl (r), 2m r 2 dr dr 2mr 2 H =− where Rl (r) is the radial wave function. For cylindrical symmetry, the invariance of H under rotations about the z-axis would require H to be independent of the rotation angle ϕ, leading to the ODE H Rm (z, ρ) = Em Rm (z, ρ), with m the eigenvalue of Lz = −i∂/∂ϕ, the z-component of the orbital angular momentum operator. For more examples, see the separation of variables method for partial differential equations in Section 9.3 and special functions in Chapter 12. This is by far the most important application of group theory in quantum mechanics. In the next subsections we shall study orthogonal and unitary groups as examples to understand better the general concepts of this section. 250 Chapter 4 Group Theory Rotation Groups SO(2) and SO(3) For SO(2) as defined by Eq. (4.3) there is only one linearly independent generator, σ2 , and the order of SO(2) is 1. We get σ2 from Eq. (4.9) by differentiation at the unity of SO(2), that is, ϕ = 0, ! ! − sin ϕ cos ϕ  0 1  −idR(ϕ)/dϕ|ϕ=0 = −i = σ2 . (4.22) = −i − cos ϕ − sin ϕ ϕ=0 −1 0 For the rotations Rz (ϕ) about the z-axis described by Eq. 
(4.1), the generator is given by
$$S_z = -i\,\frac{dR_z(\varphi)}{d\varphi}\bigg|_{\varphi=0} = \begin{pmatrix} 0 & -i & 0 \\ i & 0 & 0 \\ 0 & 0 & 0 \end{pmatrix}, \tag{4.23}$$
where the factor $i$ is inserted to make $S_z$ Hermitian. The rotation $R_z(\delta\varphi)$ through an infinitesimal angle $\delta\varphi$ may then be expanded to first order in the small $\delta\varphi$ as
$$R_z(\delta\varphi) = 1_3 + i\,\delta\varphi\,S_z. \tag{4.24}$$
A finite rotation $R(\varphi)$ may be compounded of successive infinitesimal rotations
$$R_z(\delta\varphi_1 + \delta\varphi_2) = (1 + i\,\delta\varphi_1 S_z)(1 + i\,\delta\varphi_2 S_z). \tag{4.25}$$
Let $\delta\varphi = \varphi/N$ for $N$ rotations, with $N \to \infty$. Then
$$R_z(\varphi) = \lim_{N\to\infty}\bigl(1 + (i\varphi/N)S_z\bigr)^N = \exp(i\varphi S_z). \tag{4.26}$$
This form identifies $S_z$ as the generator of the group $R_z$, an abelian subgroup of SO(3), the group of rotations in three dimensions with determinant $+1$. Each $3\times 3$ matrix $R_z(\varphi)$ is orthogonal, hence unitary, and $\operatorname{trace}(S_z) = 0$, in accord with Eq. (4.11).

By differentiation of the coordinate rotations
$$R_y(\theta) = \begin{pmatrix} \cos\theta & 0 & -\sin\theta \\ 0 & 1 & 0 \\ \sin\theta & 0 & \cos\theta \end{pmatrix}, \qquad R_x(\psi) = \begin{pmatrix} 1 & 0 & 0 \\ 0 & \cos\psi & \sin\psi \\ 0 & -\sin\psi & \cos\psi \end{pmatrix}, \tag{4.27}$$
we get the generators
$$S_x = \begin{pmatrix} 0 & 0 & 0 \\ 0 & 0 & -i \\ 0 & i & 0 \end{pmatrix}, \qquad S_y = \begin{pmatrix} 0 & 0 & i \\ 0 & 0 & 0 \\ -i & 0 & 0 \end{pmatrix} \tag{4.28}$$
of $R_x$ ($R_y$), the subgroup of rotations about the $x$- ($y$-)axis.

Rotation of Functions and Orbital Angular Momentum

In the foregoing discussion the group elements are matrices that rotate the coordinates. Any physical system being described is held fixed. Now let us hold the coordinates fixed and rotate a function $\psi(x, y, z)$ relative to our fixed coordinates. With $R$ to rotate the coordinates,
$$\mathbf{x}' = R\mathbf{x}, \tag{4.29}$$
we define $R$ on $\psi$ by
$$R\psi(x, y, z) = \psi'(x, y, z) \equiv \psi(\mathbf{x}'). \tag{4.30}$$
In words, $R$ operates on the function $\psi$, creating a new function $\psi'$ that is numerically equal to $\psi(\mathbf{x}')$, where $\mathbf{x}'$ are the coordinates rotated by $R$. If $R$ rotates the coordinates counterclockwise, the effect of $R$ is to rotate the pattern of the function $\psi$ clockwise. Returning to Eqs. (4.30) and (4.1), consider an infinitesimal rotation again, $\varphi \to \delta\varphi$. Then, using $R_z$ of Eq. (4.1), we obtain
$$R_z(\delta\varphi)\psi(x, y, z) = \psi(x + y\,\delta\varphi,\; y - x\,\delta\varphi,\; z).$$
(4.31) The right side may be expanded to first order in the small δϕ to give Rz (δϕ)ψ(x, y, z) = ψ(x, y, z) − δϕ{x∂ψ/∂y − y∂ψ/∂x} + O(δϕ)2 = (1 − iδϕLz )ψ(x, y, z), (4.32) the differential expression in curly brackets being the orbital angular momentum iLz (Exercise 1.8.7). Since a rotation of first ϕ and then δϕ about the z-axis is given by Rz (ϕ + δϕ)ψ = Rz (δϕ)Rz (ϕ)ψ = (1 − iδϕLz )Rz (ϕ)ψ, (4.33) we have (as an operator equation) dRz Rz (ϕ + δϕ) − Rz (ϕ) = lim = −iLz Rz (ϕ). δϕ→0 dϕ δϕ (4.34) In this form Eq. (4.34) integrates immediately to Rz (ϕ) = exp(−iϕLz ). (4.35) Note that Rz (ϕ) rotates functions (clockwise) relative to fixed coordinates and that Lz is the z component of the orbital angular momentum L. The constant of integration is fixed by the boundary condition Rz (0) = 1. As suggested by Eq. (4.32), Lz is connected to Sz by   ∂/∂x   ∂ ∂   , (4.36) Lz = (x, y, z)Sz  ∂/∂y  = −i x −y ∂y ∂x ∂/∂z so Lx , Ly , and Lz satisfy the same commutation relations, [Li , Lj ] = iεij k Lk , as Sx , Sy , and Sz and yield the same structure constants iεij k of SO(3). (4.37) 252 Chapter 4 Group Theory SU(2) — SO(3) Homomorphism Since unitary 2 × 2 matrices transform complex two-dimensional vectors preserving their norm, they represent the most general transformations of (a basis in the Hilbert space of) spin 12 wave functions in nonrelativistic quantum mechanics. The basis states of this system are conventionally chosen to be     1 0 |↑ = , |↓ = , 0 1 corresponding to spin 12 up and down states, respectively. We can show that the special unitary group SU(2) of unitary 2 × 2 matrices with determinant +1 has all three Pauli matrices σi as generators (while the rotations of Eq. (4.3) form a one-dimensional abelian subgroup). So SU(2) is of order 3 and depends on three real continuous parameters ξ, η, ζ , which are often called the Cayley–Klein parameters. 
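Both steps in this passage can be checked numerically: the compounding of infinitesimal rotations into a finite one, Eq. (4.26), and the first-order action on a function, Eqs. (4.31)–(4.32). A plain-Python sketch; the test function `psi = x**2 * y` is our arbitrary choice, and the exponentiation-by-squaring loop is just a fast way to raise a matrix to a large power.

```python
import math

Sz = [[0, -1j, 0], [1j, 0, 0], [0, 0, 0]]            # Eq. (4.23)

def mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

# Eq. (4.26): (1 + (i phi / N) Sz)^N -> exp(i phi Sz) = Rz(phi) as N grows
phi, N = 1.0, 2 ** 20
step = [[(1.0 if i == j else 0.0) + 1j * (phi / N) * Sz[i][j] for j in range(3)]
        for i in range(3)]
P = [[float(i == j) for j in range(3)] for i in range(3)]
base, n = step, N
while n:                        # exponentiation by squaring
    if n & 1:
        P = mul(P, base)
    base = mul(base, base)
    n >>= 1
Rz = [[math.cos(phi), math.sin(phi), 0],
      [-math.sin(phi), math.cos(phi), 0],
      [0, 0, 1]]                # Eq. (4.1)
assert all(abs(P[i][j] - Rz[i][j]) < 1e-5 for i in range(3) for j in range(3))

# Eqs. (4.31)-(4.32): Rz(dphi) psi = psi - dphi (x dpsi/dy - y dpsi/dx) + O(dphi^2)
def psi(x, y):                  # arbitrary test function psi = x^2 y
    return x * x * y

x, y, dphi = 1.3, 0.7, 1e-5
lhs = psi(x + y * dphi, y - x * dphi)
# for psi = x^2 y: dpsi/dx = 2 x y and dpsi/dy = x^2
rhs = psi(x, y) - dphi * (x * (x * x) - y * (2 * x * y))
assert abs(lhs - rhs) < 1e-8
```

The residual in the first check is of order `phi**2 / (2 N)`, so larger `N` brings the product ever closer to the exponential, exactly as the limit in Eq. (4.26) asserts.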
To construct its general element, we start with the observation that orthogonal 2 × 2 matrices are real unitary matrices, so they form a subgroup of SU(2). We also see that ! 0 eiα e−iα 0 is unitary for real angle α with determinant +1. So these simple and manifestly unitary matrices form another subgroup of SU(2) from which we can obtain all elements of SU(2), that is, the general 2 × 2 unitary matrix of determinant +1. For a two-component spin 21 wave function of quantum mechanics this diagonal unitary matrix corresponds to multiplication of the spin-up wave function with a phase factor eiα and the spin-down component with the inverse phase factor. Using the real angle η instead of ϕ for the rotation matrix and then multiplying by the diagonal unitary matrices, we construct a 2 × 2 unitary matrix that depends on three parameters and clearly is a more general element of SU(2): ! ! ! cos η sin η eiβ 0 0 eiα 0 = = e−iα − sin η eiα cos η −e−iα sin η cos η 0 ! eiα sin η eiβ e−iα cos η e−iβ 0 ei(α+β) cos η ei(α−β) sin η −e−i(α−β) sin η e−i(α+β) cos η e−iβ ! 0 ! . Defining α + β ≡ ξ, α − β ≡ ζ , we have in fact constructed the general element of SU(2): ! ! eiξ cos η a b eiζ sin η U(ξ, η, ζ ) = . (4.38) = −b∗ a ∗ −e−iζ sin η e−iξ cos η  To see this, we write the general SU(2) element as U = ac db with complex numbers a, b, c, d so that det(U) = 1. Writing unitarity, U† = U−1 , and using Eq. (3.50) for the 4.2 Generators of Continuous Groups inverse we obtain a∗ c∗ b∗ d∗ ! = d −c −b a ! 253 , implying c = −b∗ , d = a ∗ , as shown in Eq. (4.38). It is easy to check that the determinant det(U) = 1 and that U† U = 1 = UU† hold. To get the generators, we differentiate (and drop irrelevant overall factors): ! 1 0 −i∂U/∂ξ|ξ =0,η=0 = = σ3 , (4.39a) 0 −1 ! 0 −i −i∂U/∂η|η=0,ζ =0 = = σ2 . (4.39b) i 0 To avoid a factor 1/ sin η for η → 0 upon differentiating with respect to ζ , we use instead the right-hand side of Eq. 
(4.38) for U for pure imaginary b = iβ with β → 0, so a = 1 − β 2 from |a|2 + |b|2 = a 2 + β 2 = 1. Differentiating such a U, we get the third generator,   ! !  i −√ β 2   0 1 1 − β2 iβ ∂ 1−β     = σ1 . = −i = −i   √β ∂β iβ −i 1 0 1 − β 2 β=0 β=0 2 1−β (4.39c) The Pauli matrices are all traceless and Hermitian. With the Pauli matrices as generators, the elements U1 , U2 , U3 of SU(2) may be generated by U1 = exp(ia1 σ1 /2), U2 = exp(ia2 σ2 /2), U3 = exp(ia3 σ3 /2). (4.40) The three parameters ai are real. The extra factor 1/2 is present in the exponents to make Si = σi /2 satisfy the same commutation relations, [Si , Sj ] = iεij k Sk , (4.41) as the angular momentum in Eq. (4.37). To connect and compare our results, Eq. (4.3) gives a rotation operator for rotating the Cartesian coordinates in the three-space R3 . Using the angular momentum matrix S3 , we have as the corresponding rotation operator in two-dimensional (complex) space Rz (ϕ) = exp(iϕσ3 /2). For rotating the two-component vector wave function (spinor) or a spin 1/2 particle relative to fixed coordinates, the corresponding rotation operator is Rz (ϕ) = exp(−iϕσ3 /2) according to Eq. (4.35). More generally, using in Eq. (4.40) the Euler identity, Eq. (3.170a), we obtain     aj aj + iσj sin . (4.42) Uj = cos 2 2 Here the parameter aj appears as an angle, the coefficient of an angular momentum matrixlike ϕ in Eq. (4.26). The selection of Pauli matrices corresponds to the Euler angle rotations described in Section 3.3. 254 Chapter 4 Group Theory FIGURE 4.2 Illustration of M′ = UMU† in Eq. (4.43). As just seen, the elements of SU(2) describe rotations in a two-dimensional complex space that leave |z1 |2 + |z2 |2 invariant. The determinant is +1. There are three independent real parameters. Our real orthogonal group SO(3) clearly describes rotations in ordinary three-dimensional space with the important characteristic of leaving x 2 + y 2 + z2 invariant. Also, there are three independent real parameters. 
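The parameterization of Eq. (4.38) and the Euler identity for Pauli matrices, Eq. (4.42), can both be verified directly. A plain-Python sketch; the parameter values are arbitrary, and the series exponential `mexp` is our helper.

```python
import cmath, math

def U(xi, eta, zeta):
    # general SU(2) element, Eq. (4.38)
    a = cmath.exp(1j * xi) * math.cos(eta)
    b = cmath.exp(1j * zeta) * math.sin(eta)
    return [[a, b], [-b.conjugate(), a.conjugate()]]

u = U(0.3, 0.8, 1.1)
det = u[0][0] * u[1][1] - u[0][1] * u[1][0]
assert abs(det - 1) < 1e-12                  # unimodular
for i in range(2):
    for j in range(2):
        s = sum(u[k][i].conjugate() * u[k][j] for k in range(2))
        assert abs(s - (1 if i == j else 0)) < 1e-12     # U^dagger U = 1

def mexp(A, terms=30):
    out = [[float(i == j) for j in range(2)] for i in range(2)]
    term = [row[:] for row in out]
    for t in range(1, terms):
        term = [[sum(term[i][k] * A[k][j] for k in range(2)) / t for j in range(2)]
                for i in range(2)]
        out = [[out[i][j] + term[i][j] for j in range(2)] for i in range(2)]
    return out

# Eq. (4.42): exp(i a sigma_1 / 2) = cos(a/2) 1 + i sigma_1 sin(a/2)
sigma1 = [[0, 1], [1, 0]]
a = 1.2
lhs = mexp([[1j * (a / 2) * sigma1[i][j] for j in range(2)] for i in range(2)])
rhs = [[math.cos(a / 2), 1j * math.sin(a / 2)],
       [1j * math.sin(a / 2), math.cos(a / 2)]]
assert all(abs(lhs[i][j] - rhs[i][j]) < 1e-12 for i in range(2) for j in range(2))
```

The last check reproduces the form of U1 in Eq. (4.53), with the half-angle that later turns out to be the source of the 2-to-1 correspondence with SO(3).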
The rotation interpretations and the equality of numbers of parameters suggest the existence of some correspondence between the groups SU(2) and SO(3). Here we develop this correspondence. The operation of SU(2) on a matrix is given by a unitary transformation, Eq. (4.5), with R = U and Fig. 4.2: M′ = UMU† . (4.43) Taking M to be a 2 × 2 matrix, we note that any 2 × 2 matrix may be written as a linear combination of the unit matrix and the three Pauli matrices of Section 3.4. Let M be the zero-trace matrix, ! z x − iy M = xσ1 + yσ2 + zσ3 = , (4.44) x + iy −z the unit matrix not entering. Since the trace is invariant under a unitary similarity transformation (Exercise 3.3.9), M′ must have the same form, ! z′ x ′ − iy ′ ′ ′ ′ ′ M = x σ1 + y σ2 + z σ3 = . (4.45) x ′ + iy ′ −z′ The determinant is also invariant under a unitary transformation (Exercise 3.3.10). Therefore   − x 2 + y 2 + z2 = − x ′ 2 + y ′ 2 + z′ 2 , (4.46) or x 2 + y 2 + z2 is invariant under this operation of SU(2), just as with SO(3). Operations of SU(2) on M must produce rotations of the coordinates x, y, z appearing therein. This suggests that SU(2) and SO(3) may be isomorphic or at least homomorphic. 4.2 Generators of Continuous Groups 255 We approach the problem of what this operation of SU(2) corresponds to by considering special cases. Returning to Eq. (4.38), let a = eiξ and b = 0, or ! eiξ 0 U3 = . (4.47) 0 e−iξ In anticipation of Eq. (4.51), this U is given a subscript 3. Carrying out a unitary similarity transformation, Eq. (4.43), on each of the three Pauli σ ’s of SU(2), we have ! ! ! eiξ e−iξ 0 0 1 0 † U3 σ 1 U3 = 1 0 0 eiξ 0 e−iξ ! 0 e2iξ = . (4.48) e−2iξ 0 We reexpress this result in terms of the Pauli σi , as in Eq. (4.44), to obtain U3 xσ1 U†3 = xσ1 cos 2ξ − xσ2 sin 2ξ. (4.49) Similarly, U3 yσ2 U†3 = yσ1 sin 2ξ + yσ2 cos 2ξ, U3 zσ3 U†3 = zσ3 . (4.50) From these double angle expressions we see that we should start with a halfangle: ξ = α/2. Then, adding Eqs. 
(4.49) and (4.50) and comparing with Eqs. (4.44) and (4.45), we obtain x ′ = x cos α + y sin α y ′ = −x sin α + y cos α (4.51) ′ z = z. The 2 × 2 unitary transformation using U3 (α) is equivalent to the rotation operator R(α) of Eq. (4.3). The correspondence of ! cos β/2 sin β/2 U2 (β) = (4.52) − sin β/2 cos β/2 and Ry (β) and of U1 (ϕ) = cos ϕ/2 i sin ϕ/2 i sin ϕ/2 cos ϕ/2 ! (4.53) and R1 (ϕ) follow similarly. Note that Uk (ψ) has the general form Uk (ψ) = 12 cos ψ/2 + iσk sin ψ/2, where k = 1, 2, 3. (4.54) 256 Chapter 4 Group Theory The correspondence U3 (α) = eiα/2 0 0 e−iα/2 !  cos α sin α  ↔  − sin α cos α 0 0 0   0  = Rz (α) (4.55) 1 is not a simple one-to-one correspondence. Specifically, as α in Rz ranges from 0 to 2π , the parameter in U3 , α/2, goes from 0 to π . We find Rz (α + 2π) = Rz (α) U3 (α + 2π) = −eiα/2 0 0 −e−iα/2 ! = −U3 (α). (4.56) Therefore both U3 (α) and U3 (α + 2π) = −U3 (α) correspond to Rz (α). The correspondence is 2 to 1, or SU(2) and SO(3) are homomorphic. This establishment of the correspondence between the representations of SU(2) and those of SO(3) means that the known representations of SU(2) automatically provide us with the representations of SO(3). Combining the various rotations, we find that a unitary transformation using U(α, β, γ ) = U3 (γ )U2 (β)U3 (α) (4.57) corresponds to the general Euler rotation Rz (γ )Ry (β)Rz (α). By direct multiplication, ! ! ! cos β/2 sin β/2 eiγ /2 eiα/2 0 0 U(α, β, γ ) = − sin β/2 cos β/2 0 e−iγ /2 0 e−iα/2 ! ei(γ +α)/2 cos β/2 ei(γ −α)/2 sin β/2 = . (4.58) −e−i(γ −α)/2 sin β/2 e−i(γ +α)/2 cos β/2 This is our alternate general form, Eq. (4.38), with ξ = (γ + α)/2, η = β/2, ζ = (γ − α)/2. (4.59) Thus, from Eq. (4.58) we may identify the parameters of Eq. (4.38) as a = ei(γ +α)/2 cos β/2 b = ei(γ −α)/2 sin β/2. (4.60) SU(2)-Isospin and SU(3)-Flavor Symmetry The application of group theory to “elementary” particles has been labeled by Wigner the third stage of group theory and physics. 
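The whole correspondence just described can be traced numerically: conjugating $M = x\sigma_1 + y\sigma_2 + z\sigma_3$ with $U_3(\alpha)$ (the half-angle built in) leaves $x^2 + y^2 + z^2$ invariant, acts on $(x, y, z)$ exactly as $R_z(\alpha)$ of Eq. (4.51), and $U_3$ itself flips sign under $\alpha \to \alpha + 2\pi$, Eq. (4.56). A plain-Python sketch with our own helper names:

```python
import cmath, math

def mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def dagger(A):
    return [[A[j][i].conjugate() for j in range(2)] for i in range(2)]

def U3(alpha):
    # Eq. (4.47) with the half-angle xi = alpha/2 built in
    return [[cmath.exp(1j * alpha / 2), 0], [0, cmath.exp(-1j * alpha / 2)]]

def conjugate_point(alpha, x, y, z):
    M = [[z, x - 1j * y], [x + 1j * y, -z]]              # Eq. (4.44)
    Mp = mul(mul(U3(alpha), M), dagger(U3(alpha)))       # Eq. (4.43)
    xp = (Mp[0][1] + Mp[1][0]) / 2                       # read off via Eq. (4.45)
    yp = (Mp[1][0] - Mp[0][1]) / 2j
    zp = Mp[0][0]
    return xp, yp, zp

alpha, x, y, z = 0.6, 1.0, 2.0, 3.0
xp, yp, zp = conjugate_point(alpha, x, y, z)

# Eq. (4.51): the induced map is the rotation Rz(alpha)
assert abs(xp - (x * math.cos(alpha) + y * math.sin(alpha))) < 1e-12
assert abs(yp - (-x * math.sin(alpha) + y * math.cos(alpha))) < 1e-12
assert abs(zp - z) < 1e-12
# Eq. (4.46): x^2 + y^2 + z^2 is invariant
assert abs(xp * xp + yp * yp + zp * zp - (x * x + y * y + z * z)) < 1e-12
# Eq. (4.56): the correspondence is 2-to-1, U3(alpha + 2 pi) = -U3(alpha)
u1, u2 = U3(alpha), U3(alpha + 2 * math.pi)
assert all(abs(u2[i][j] + u1[i][j]) < 1e-12 for i in range(2) for j in range(2))
```

The final assertion is the homomorphism in miniature: two distinct SU(2) elements, differing by an overall sign, induce one and the same SO(3) rotation.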
The first stage was the search for the 32 crystallographic point groups and the 230 space groups giving crystal symmetries (Section 4.7). The second stage was a search for representations, such as those of SO(3) and SU(2) (Section 4.2). Now in this stage, physicists are back to a search for groups.

In the 1930s to 1960s the study of strongly interacting particles of nuclear and high-energy physics led to the SU(2) isospin group and the SU(3) flavor symmetry. In the 1930s, after the neutron was discovered, Heisenberg proposed that the nuclear forces were charge independent. The neutron mass differs from that of the proton by only 1.6%. If this tiny mass difference is ignored, the neutron and proton may be considered as two charge (or isospin) states of a doublet, called the nucleon. The isospin $I$ has $z$-projection $I_3 = 1/2$ for the proton and $I_3 = -1/2$ for the neutron. Isospin has nothing to do with spin (the particle's intrinsic angular momentum), but the two-component isospin state obeys the same mathematical relations as the spin 1/2 state. For the nucleon, $\mathbf{I} = \boldsymbol{\tau}/2$, where the $\tau_i$ are the usual Pauli matrices, and the $\pm 1/2$ isospin states are eigenvectors of the Pauli matrix $\tau_3 = \begin{pmatrix} 1 & 0 \\ 0 & -1 \end{pmatrix}$. Similarly, the three charge states of the pion ($\pi^+$, $\pi^0$, $\pi^-$) form a triplet. The pion is the lightest of all strongly interacting particles and is the carrier of the nuclear force at long distances, much like the photon is that of the electromagnetic force. The strong interaction treats alike members of these particle families, or multiplets, and conserves isospin. The symmetry is the SU(2) isospin group.

Table 4.1  Baryons with Spin 1/2 and Even Parity

              Y     I      Mass (MeV)   I3
  Ξ    Ξ−    −1    1/2     1321.32     −1/2
       Ξ0                  1314.9      +1/2
  Σ    Σ−     0    1       1197.43     −1
       Σ0                  1192.55      0
       Σ+                  1189.37     +1
  Λ    Λ      0    0       1115.63      0
  N    n      1    1/2      939.566    −1/2
       p                    938.272    +1/2

By the 1960s particles produced as resonances by accelerators had proliferated.
The eight shown in Table 4.1 attracted particular attention.5 The relevant conserved quantum numbers that are analogs and generalizations of Lz and L2 from SO(3) are I3 and I 2 for isospin and Y for hypercharge. Particles may be grouped into charge or isospin multiplets. Then the hypercharge may be taken as twice the average charge of the multiplet. For the nucleon, that is, the neutron–proton doublet, Y = 2 · 21 (0 + 1) = 1. The hypercharge and isospin values are listed in Table 4.1 for baryons like the nucleon and its (approximately degenerate) partners. They form an octet, as shown in Fig. 4.3, after which the corresponding symmetry is called the eightfold way. In 1961 Gell-Mann, and independently Ne’eman, suggested that the strong interaction should be (approximately) invariant under a three-dimensional special unitary group, SU(3), that is, has SU(3) flavor symmetry. The choice of SU(3) was based first on the two conserved and independent quantum numbers, H1 = I3 and H2 = Y (that is, generators with [I3 , Y ] = 0, not Casimir invariants; see the summary in Section 4.3) that call for a group of rank 2. Second, the group had to have an eight-dimensional representation to account for the nearly degenerate baryons and four similar octets for the mesons. In a sense, SU(3) is the simplest generalization of SU(2) isospin. Three of its generators are zero-trace Hermitian 3 × 3 matrices that contain 5 All masses are given in energy units, 1 MeV = 106 eV. 258 Chapter 4 Group Theory FIGURE 4.3 Baryon octet weight diagram for SU(3). the 2 × 2 isospin Pauli matrices τi in the upper left corner,   0 τi   0  , i = 1, 2, 3. λi =  (4.61a) 0 0 0 Thus, the SU(2)-isospin group is a subgroup of SU(3)-flavor with I3 = λ3 /2. 
Four other generators have the off-diagonal 1’s of τ1 , and −i, i of τ2 in all other possible locations to form zero-trace Hermitian 3 × 3 matrices,     0 0 −i 0 0 1     λ4 =  0 0 0  , λ5 =  0 0 0  , i 1 0 0  0 0 0    λ6 =  0 0 1  , 0 1 0  0 0 0 0 0  λ7 =  0 0 0 i   −i  . (4.61b) 0 The second diagonal generator has the two-dimensional unit matrix 12 in the upper left corner, which makes it clearly independent of the SU(2)-isospin subgroup because of its nonzero trace in that subspace, and −2 in the third diagonal place to make it traceless,   1 0 0 1   λ8 = √  0 1 0  . (4.61c) 3 0 0 −2 4.2 Generators of Continuous Groups 259 FIGURE 4.4 Baryon mass splitting. Altogether there are 32 − 1 = 8 generators for SU(3), which has order 8. From the commutators of these generators the structure constants of SU(3) can easily be obtained. Returning to the SU(3) flavor symmetry, we imagine the Hamiltonian for our eight baryons to be composed of three parts: H = Hstrong + Hmedium + Helectromagnetic . (4.62) The first part, Hstrong , has the SU(3) symmetry and leads to the eightfold degeneracy. Introduction of the symmetry-breaking term, Hmedium , removes part of the degeneracy, giving the four isospin multiplets (− , 0 ), ( − ,  0 ,  + ), , and N = (p, n) different masses. These are still multiplets because Hmedium has SU(2)-isospin symmetry. Finally, the presence of charge-dependent forces splits the isospin multiplets and removes the last degeneracy. This imagined sequence is shown in Fig. 4.4. The octet representation is not the simplest SU(3) representation. The simplest representations are the triangular ones shown in Fig. 4.5, from which all others can be generated by generalized angular momentum coupling (see Section 4.4 on Young tableaux). The fundamental representation in Fig. 4.5a contains the u (up), d (down), and s (strange) quarks, and Fig. 4.5b contains the corresponding antiquarks. 
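Writing out all eight λ matrices makes the claims above easy to verify: they are traceless and Hermitian, they satisfy the conventional normalization $\operatorname{tr}(\lambda_i\lambda_j) = 2\delta_{ij}$ of Exercise 4.2.1, and their commutators yield the structure constants (for instance $[\lambda_1/2, \lambda_2/2] = i\lambda_3/2$, i.e. $f_{123} = 1$). A plain-Python sketch:

```python
s = 3 ** -0.5
lam = [
    [[0, 1, 0], [1, 0, 0], [0, 0, 0]],            # lambda_1
    [[0, -1j, 0], [1j, 0, 0], [0, 0, 0]],         # lambda_2
    [[1, 0, 0], [0, -1, 0], [0, 0, 0]],           # lambda_3, Eq. (4.61a)
    [[0, 0, 1], [0, 0, 0], [1, 0, 0]],            # lambda_4
    [[0, 0, -1j], [0, 0, 0], [1j, 0, 0]],         # lambda_5, Eq. (4.61b)
    [[0, 0, 0], [0, 0, 1], [0, 1, 0]],            # lambda_6
    [[0, 0, 0], [0, 0, -1j], [0, 1j, 0]],         # lambda_7
    [[s, 0, 0], [0, s, 0], [0, 0, -2 * s]],       # lambda_8, Eq. (4.61c)
]

def mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def tr(A):
    return sum(A[i][i] for i in range(3))

for i, a in enumerate(lam):
    assert abs(tr(a)) < 1e-12                                  # traceless
    assert all(complex(a[p][q]) == complex(a[q][p]).conjugate()
               for p in range(3) for q in range(3))            # Hermitian
    for j, b in enumerate(lam):
        # normalization tr(lambda_i lambda_j) = 2 delta_ij
        assert abs(tr(mul(a, b)) - (2 if i == j else 0)) < 1e-12

# one structure constant: [lambda_1/2, lambda_2/2] = i lambda_3/2, so f_123 = 1
c = [[(mul(lam[0], lam[1])[p][q] - mul(lam[1], lam[0])[p][q]) / 4
      for q in range(3)] for p in range(3)]
assert all(abs(c[p][q] - 1j * lam[2][p][q] / 2) < 1e-12
           for p in range(3) for q in range(3))
```

The last assertion also exhibits the SU(2)-isospin subgroup explicitly: λ1, λ2, λ3 close among themselves exactly as the Pauli matrices do.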
Since the meson octets can be obtained from the quark representations as q q, ¯ with 32 = 8 + 1 states, this suggests that mesons contain quarks (and antiquarks) as their constituents (see Exercise 4.4.3). The resulting quark model gives a successful description of hadronic spectroscopy. The resolution of its problem with the Pauli exclusion principle eventually led to the SU(3)-color gauge theory of the strong interaction called quantum chromodynamics (QCD). To keep group theory and its very real accomplishment in proper perspective, we should emphasize that group theory identifies and formalizes symmetries. It classifies (and sometimes predicts) particles. But aside from saying that one part of the Hamiltonian has SU(2) 260 Chapter 4 Group Theory FIGURE 4.5 (a) Fundamental representation of SU(3), the weight diagram for ¯ s¯ . the u, d, s quarks; (b) weight diagram for the antiquarks u, ¯ d, symmetry and another part has SU(3) symmetry, group theory says nothing about the particle interaction. Remember that the statement that the atomic potential is spherically symmetric tells us nothing about the radial dependence of the potential or of the wave function. In contrast, in a gauge theory the interaction is mediated by vector bosons (like the photon in quantum electrodynamics) and uniquely determined by the gauge covariant derivative (see Section 1.13). Exercises 4.2.1 (i) Show that the Pauli matrices are the generators of SU(2) without using the parameterization of the general unitary 2 × 2 matrix in Eq. (4.38). (ii) Derive the eight independent generators λi of SU(3) similarly. Normalize them so that tr(λi λj ) = 2δij . Then determine the structure constants of SU(3). Hint. The λi are traceless and Hermitian 3 × 3 matrices. (iii) Construct the quadratic Casimir invariant of SU(3). Hint. Work by analogy with σ12 + σ22 + σ32 of SU(2) or L2 of SO(3). 4.2.2 Prove that the general form of a 2 × 2 unitary, unimodular matrix is ! a b U= −b∗ a ∗ with a ∗ a + b∗ b = 1. 
4.2.3 Determine three SU(2) subgroups of SU(3). 4.2.4 A translation operator T (a) converts ψ(x) to ψ(x + a), T (a)ψ(x) = ψ(x + a). 4.3 Orbital Angular Momentum 261 In terms of the (quantum mechanical) linear momentum operator px = −id/dx, show that T (a) = exp(iapx ), that is, px is the generator of translations. Hint. Expand ψ(x + a) as a Taylor series. 4.2.5 Consider the general SU(2) element Eq. (4.38) to be built up of three Euler rotations: (i) a rotation of a/2 about the z-axis, (ii) a rotation of b/2 about the new x-axis, and (iii) a rotation of c/2 about the new z-axis. (All rotations are counterclockwise.) Using the Pauli σ generators, show that these rotation angles are determined by a=ξ −ζ + π 2 c= ξ +ζ − π 2 b = 2η 4.2.6 4.3 =α+ π 2 =β = γ − π2 . Note. The angles a and b here are not the a and b of Eq. (4.38). Rotate a nonrelativistic wave function ψ˜ = (ψ↑ , ψ↓ ) of spin 1/2 about the z-axis by a small angle dθ . Find the corresponding generator. ORBITAL ANGULAR MOMENTUM The classical concept of angular momentum, Lclass = r × p, is presented in Section 1.4 to introduce the cross product. Following the usual Schrödinger representation of quantum mechanics, the classical linear momentum p is replaced by the operator −i∇. The quantum mechanical orbital angular momentum operator becomes6 LQM = −ir × ∇. (4.63) This is used repeatedly in Sections 1.8, 1.9, and 2.4 to illustrate vector differential operators. From Exercise 1.8.8 the angular momentum components satisfy the commutation relations [Li , Lj ] = iεij k Lk . (4.64) The εij k is the Levi-Civita symbol of Section 2.9. A summation over the index k is understood. The differential operator corresponding to the square of the angular momentum L2 = L · L = L2x + L2y + L2z (4.65) may be determined from L · L = (r × p) · (r × p), (4.66) L2 as a scalar product is inwhich is the subject of Exercises 1.9.9 and 2.5.17(b). 
Since variant under rotations, that is, a rotational scalar, we expect [L2 , Li ] = 0, which can also be verified directly. Equation (4.64) presents the basic commutation relations of the components of the quantum mechanical angular momentum. Indeed, within the framework of quantum mechanics and group theory, these commutation relations define an angular momentum operator. We shall use them now to construct the angular momentum eigenstates and find the eigenvalues. For the orbital angular momentum these are the spherical harmonics of Section 12.6. 6 For simplicity, h is set equal to 1. This means that the angular momentum is measured in units of h. ¯ ¯ 262 Chapter 4 Group Theory Ladder Operator Approach Let us start with a general approach, where the angular momentum J we consider may represent an orbital angular momentum L, a spin σ /2, or a total angular momentum L + σ /2, etc. We assume that 1. J is an Hermitian operator whose components satisfy the commutation relations  2 [Ji , Jj ] = iεij k Jk , (4.67) J , Ji = 0. 2. Otherwise J is arbitrary. (See Exercise 4.3.l.) |λM is simultaneously a normalized eigenfunction (or eigenvector) of Jz with eigenvalue M and an eigenfunction7 of J2 , Jz |λM = M|λM, J2 |λM = λ|λM, λM|λM = 1. (4.68) We shall show that λ = J (J + 1) and then find other properties of the |λM. The treatment will illustrate the generality and power of operator techniques, particularly the use of ladder operators.8 The ladder operators are defined as J+ = Jx + iJy , J− = Jx − iJy . (4.69) In terms of these operators J2 may be rewritten as J2 = 12 (J+ J− + J− J+ ) + Jz2 . (4.70) From the commutation relations, Eq. (4.67), we find [Jz , J+ ] = +J+ , [Jz , J− ] = −J− , [J+ , J− ] = 2Jz . Since J+ commutes with J2 (Exercise 4.3.1),    J2 J+ |λM = J+ J2 |λM = λ J+ |λM . (4.71) (4.72) Therefore, J+ |λM is still an eigenfunction of J2 with eigenvalue λ, and similarly for J− |λM. But from Eq. 
(4.71), Jz J+ = J+ (Jz + 1), (4.73) or  Jz J+ |λM = J+ (Jz + 1)|λM = (M + 1)J+ |λM. (4.74) 7 That |λM can be an eigenfunction of both J and J2 follows from [J , J2 ] = 0 in Eq. (4.67). For SU(2), λM|λM is the z z scalar product (of the bra and ket vector or spinors) in the bra-ket notation introduced in Section 3.1. For SO(3), |λM is a 2π π ∗ ′ function Y (θ, ϕ) and |λM ′  is a function Y ′ (θ, ϕ) and the matrix element λM|λM ′  ≡ ϕ=0 θ =0 Y (θ, ϕ)Y (θ, ϕ) sin θ dθ dϕ is their overlap. However, in our algebraic approach only the norm in Eq. (4.68) is used and matrix elements of the angular momentum operators are reduced to the norm by means of the eigenvalue equation for Jz , Eq. (4.68), and Eqs. (4.83) and (4.84). 8 Ladder operators can be developed for other mathematical functions. Compare the next subsection, on other Lie groups, and Section 13.1, for Hermite polynomials. 4.3 Orbital Angular Momentum 263 Therefore, J+ |λM is still an eigenfunction of Jz but with eigenvalue M + 1. J+ has raised the eigenvalue by 1 and so is called a raising operator. Similarly, J− lowers the eigenvalue by 1 and is called a lowering operator. Taking expectation values and using Jx† = Jx , Jy† = Jy , we get 2 2   λM|J2 − Jz2 |λM = λM|Jx2 + Jy2 |λM = Jx |λM + Jy |λM and see that λ − M 2 ≥ 0, so M is bounded. Let J be the largest M. Then J+ |λJ  = 0, which implies J− J+ |λJ  = 0. Hence, combining Eqs. (4.70) and (4.71) to get J2 = J− J+ + Jz (Jz + 1), (4.75) we find from Eq. (4.75) that Therefore   0 = J− J+ |λJ  = J2 − Jz2 − Jz |λJ  = λ − J 2 − J |λJ . λ = J (J + 1) ≥ 0, with nonnegative J . We now relabel the states |λM ≡ |J M. Similarly, let smallest M. Then J− |J J ′  = 0. From J2 = J+ J− + Jz (Jz − 1), (4.76) J′ be the (4.77) we see that Hence   0 = J+ J− |J J ′  = J2 + Jz − Jz2 |J J ′  = λ + J ′ − J ′ 2 |J J ′ . (4.78) λ = J (J + 1) = J ′ (J ′ − 1) = (−J )(−J − 1). So J ′ = −J , and M runs in integer steps from −J to +J , −J ≤ M ≤ J. 
(4.79) Starting from |J J  and applying J− repeatedly, we reach all other states |J M. Hence the |J M form an irreducible representation of SO(3) or SU(2); M varies and J is fixed. Then using Eqs. (4.67), (4.75), and (4.77) we obtain  J− J+ |J M = J (J + 1) − M(M + 1) |J M = (J − M)(J + M + 1)|J M,  (4.80) J+ J− |J M = J (J + 1) − M(M − 1) |J M = (J + M)(J − M + 1)|J M. Because J+ and J− are Hermitian conjugates,9 J+† = J− , J−† = J+ , (4.81) zero.10 the eigenvalues in Eq. (4.80) must be positive or Examples of Eq. (4.81) are provided by the matrices of Exercise 3.2.13 (spin 1/2), 3.2.15 (spin 1), and 3.2.18 (spin 3/2). 9 The Hermitian conjugation or adjoint operation is defined for matrices in Section 3.5, and for operators in general in Sec- tion 10.1. 10 For an excellent discussion of adjoint operators and Hilbert space see A. Messiah, Quantum Mechanics. New York: Wiley 1961, Chapter 7. 264 Chapter 4 Group Theory For the orbital angular momentum ladder operators, L+ , and L− , explicit forms are given in Exercises 2.5.14 and 12.6.7. You can now show (see also Exercise 12.7.2) that   † J M|J− J+ |J M = J+ |J M J+ |J M. (4.82) Since J+ raises the eigenvalue M to M + 1, we relabel the resultant eigenfunction |J M + 1. The normalization is given by Eq. (4.80) as J+ |J M = (J − M)(J + M + 1)|J M + 1 = J (J + 1) − M(M + 1)|J M + 1, (4.83) taking the positive square root and not introducing any phase factor. By the same arguments, J− |J M = (J + M)(J − M + 1)|J M − 1 = (J (J + 1) − M(M − 1)|J M − 1. (4.84) Applying J+ to Eq. (4.84), we obtain the second line of Eq. (4.80) and verify that Eq. (4.84) is consistent with Eq. (4.83). Finally, since M ranges from −J to +J in unit steps, 2J must be an integer; J is either an integer or half of an odd integer. As seen later, if J is an orbital angular momentum L, the set |LM for all M is a basis defining a representation of SO(3) and L will then be integral. 
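The ladder-operator algebra can be realized concretely as matrices in the $|JM\rangle$ basis, with matrix elements taken from Eqs. (4.83) and (4.84). The plain-Python sketch below (helper names are ours) builds $J_z$, $J_+$, $J_-$ for $J = 1/2, 1, 3/2$ and confirms Eq. (4.70) with eigenvalue $J(J+1)$ as well as the commutator $[J_z, J_+] = +J_+$ of Eq. (4.71).

```python
import math
from fractions import Fraction

def angular_momentum_matrices(J2):
    """Jz, J+, J- in the |J M> basis for J = J2/2 (J2 = 2J, an integer)."""
    J = Fraction(J2, 2)
    Ms = [J - k for k in range(J2 + 1)]          # M = J, J-1, ..., -J
    dim = J2 + 1
    Jz = [[float(Ms[i]) if i == j else 0.0 for j in range(dim)] for i in range(dim)]
    Jp = [[0.0] * dim for _ in range(dim)]
    Jm = [[0.0] * dim for _ in range(dim)]
    for k, M in enumerate(Ms):
        if M < J:    # J+ |J M> = sqrt(J(J+1) - M(M+1)) |J M+1>, Eq. (4.83)
            Jp[k - 1][k] = math.sqrt(float(J * (J + 1) - M * (M + 1)))
        if M > -J:   # J- |J M> = sqrt(J(J+1) - M(M-1)) |J M-1>, Eq. (4.84)
            Jm[k + 1][k] = math.sqrt(float(J * (J + 1) - M * (M - 1)))
    return Jz, Jp, Jm

def mul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

for J2 in (1, 2, 3):            # J = 1/2, 1, 3/2
    Jz, Jp, Jm = angular_momentum_matrices(J2)
    n = J2 + 1
    # J^2 = (J+J- + J-J+)/2 + Jz^2, Eq. (4.70), must be J(J+1) times the identity
    J2op = [[(mul(Jp, Jm)[i][j] + mul(Jm, Jp)[i][j]) / 2 + mul(Jz, Jz)[i][j]
             for j in range(n)] for i in range(n)]
    lam = (J2 / 2) * (J2 / 2 + 1)
    assert all(abs(J2op[i][j] - (lam if i == j else 0)) < 1e-12
               for i in range(n) for j in range(n))
    # [Jz, J+] = +J+, Eq. (4.71)
    c = [[mul(Jz, Jp)[i][j] - mul(Jp, Jz)[i][j] for j in range(n)] for i in range(n)]
    assert all(abs(c[i][j] - Jp[i][j]) < 1e-12 for i in range(n) for j in range(n))
```

For $J = 1/2$ these matrices reduce to $\sigma_z/2$ and $\sigma_\pm/2$; the same construction yields the spin-1 and spin-3/2 matrices of Exercises 3.2.15 and 3.2.18.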
In spherical polar coordinates θ, ϕ, the functions |LM become the spherical harmonics YLM (θ, ϕ) of Section 12.6. The sets of |J M states with half-integral J define representations of SU(2) that are not representations of SO(3); we get J = 1/2, 3/2, 5/2, . . . . Our angular momentum is quantized, essentially as a result of the commutation relations. All these representations are irreducible, as an application of the raising and lowering operators suggests. Summary of Lie Groups and Lie Algebras The general commutation relations, Eq. (4.14) in Section 4.2, for a classical Lie group [SO(n) and SU(n) in particular] can be simplified to look more like Eq. (4.71) for SO(3) and SU(2) in this section. Here we merely review and, as a rule, do not provide proofs for various theorems that we explain. First we choose linearly independent and mutually commuting generators Hi which are generalizations of Jz for SO(3) and SU(2). Let l be the maximum number of such Hi with [Hi , Hk ] = 0. (4.85) Then l is called the rank of the Lie group G or its Lie algebra G. The rank and dimension, or order, of some Lie groups are given in Table 4.2. All other generators Eα can be shown to be raising and lowering operators with respect to all the Hi , so [Hi , Eα ] = αi Eα , i = 1, 2, . . . , l. (4.86) The set of so-called root vectors (α1 , α2 , . . . , αl ) form the root diagram of G. When the Hi commute, they can be simultaneously diagonalized (for symmetric (or Hermitian) matrices see Chapter 3; for operators see Chapter 10). The Hi provide us with a set of eigenvalues m1 , m2 , . . . , ml [projection or additive quantum numbers generalizing 4.3 Orbital Angular Momentum Table 4.2 Groups 265 Rank and Order of Unitary and Rotational Lie algebra Lie group Rank Order Al SU(l + 1) l l(l + 2) Bl SO(2l + 1) l l(2l + 1) Dl SO(2l) l l(2l − 1) M of Jz in SO(3) and SU(2)]. The set of so-called weight vectors (m1 , m2 , . . . , ml ) for an irreducible representation (multiplet) form a weight diagram. 
There are l invariant operators Ci , called Casimir operators, that commute with all generators and are generalizations of J2 , [Ci , Hj ] = 0, [Ci , Eα ] = 0, i = 1, 2, . . . , l. (4.87) The first one, C1 , is a quadratic function of the generators; the others are more complicated. Since the Cj commute with all Hj , they can be simultaneously diagonalized with the Hj . Their eigenvalues c1 , c2 , . . . , cl characterize irreducible representations and stay constant while the weight vector varies over any particular irreducible representation. Thus the general eigenfunction may be written as   (c1 , c2 , . . . , cl )m1 , m2 , . . . , ml , (4.88) generalizing the multiplet |J M of SO(3) and SU(2). Their eigenvalue equations are     Hi (c1 , c2 , . . . , cl )m1 , m2 , . . . , ml = mi (c1 , c2 , . . . , cl )m1 , m2 , . . . , ml (4.89a)     Ci (c1 , c2 , . . . , cl )m1 , m2 , . . . , ml = ci (c1 , c2 , . . . , cl )m1 , m2 , . . . , ml . (4.89b) We can now show that Eα |(c1 , c2 , . . . , cl )m1 , m2 , . . . , ml  has the weight vector (m1 + α1 , m2 + α2 , . . . , ml + αl ) using the commutation relations, Eq. (4.86), in conjunction with Eqs. (4.89a) and (4.89b):   Hi Eα (c1 , c2 , . . . , cl )m1 , m2 , . . . , ml    = Eα Hi + [Hi , Eα ] (c1 , c2 , . . . , cl )m1 , m2 , . . . , ml   (4.90) = (mi + αi )Eα (c1 , c2 , . . . , cl )m1 , m2 , . . . , ml . Therefore     Eα (c1 , c2 , . . . , cl )m1 , m2 , . . . , ml ∼ (c1 , . . . , cl )m1 + α1 , . . . , ml + αl , the generalization of Eqs. (4.83) and (4.84) from SO(3). These changes of eigenvalues by the operator Eα are called its selection rules in quantum mechanics. They are displayed in the root diagram of a Lie algebra. Examples of root diagrams are given in Fig. 4.6 for SU(2) and SU(3). If we attach the roots denoted by arrows in Fig. 4.6b to a weight in Figs. 4.3 or 4.5a, b, we can reach any other state (represented by a dot in the weight diagram). 
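For SO(3) the single Casimir operator is C1 = J², and Eq. (4.90) is the familiar statement that J± shifts the Jz eigenvalue by ±1 within a multiplet while the Casimir eigenvalue stays fixed. A small check in the J = 1 representation, assuming NumPy:

```python
import numpy as np

# Spin-1 (J = 1) matrices in the basis {|1,1>, |1,0>, |1,-1>}
Jz = np.diag([1.0, 0.0, -1.0]).astype(complex)
Jp = np.sqrt(2) * np.array([[0, 1, 0], [0, 0, 1], [0, 0, 0]], dtype=complex)
Jm = Jp.conj().T
Jx = (Jp + Jm) / 2
Jy = (Jp - Jm) / (2j)

J2 = Jx @ Jx + Jy @ Jy + Jz @ Jz   # the Casimir operator C_1 = J^2

# J^2 = J(J+1) * 1 = 2 * 1 on the whole multiplet, and it commutes with J_+;
# J_+ raises the weight: if Jz v = m v, then Jz (J_+ v) = (m + 1)(J_+ v), as in Eq. (4.90).
v = np.array([0, 1, 0], dtype=complex)  # the |1,0> state, m = 0
w = Jp @ v                              # proportional to |1,1>, m = 1
```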
Here Schur’s lemma applies: An operator H that commutes with all group operators, and therefore with all generators Hi of a (classical) Lie group G in particular, has as eigenvectors all states of a multiplet and is degenerate with the multiplet. As a consequence, such an operator commutes with all Casimir invariants, [H, Ci ] = 0. 266 Chapter 4 Group Theory FIGURE 4.6 Root diagram for (a) SU(2) and (b) SU(3). The last result is clear because the Casimir invariants are constructed from the generators and raising and lowering operators of the group. To prove the rest, let ψ be an eigenvector, H ψ = Eψ . Then, for any rotation R of G, we have H Rψ = ERψ , which says that Rψ is an eigenstate with the same eigenvalue E along with ψ. Since [H, Ci ] = 0, all Casimir invariants can be diagonalized simultaneously with H and an eigenstate of H is an eigenstate of all the Ci . Since [Hi , Ci ] = 0, the rotated eigenstates Rψ are eigenstates of Ci , along with ψ belonging to the same multiplet characterized by the eigenvalues ci of Ci . Finally, such an operator H cannot induce transitions between different multiplets of the group because     ′ ′ (c1 , c2 , . . . , cl′ )m′1 , m′2 , . . . , m′l H (c1 , c2 , . . . , cl )m1 , m2 , . . . , ml = 0. Using [H, Cj ] = 0 (for any j ) we have     0 = (c1′ , c2′ , . . . , cl′ )m′1 , m′2 , . . . , m′l [H, Cj ](c1 , c2 , . . . , cl )m1 , m2 , . . . , ml     = (cj − cj′ ) (c1′ , c2′ , . . . , cl′ )m′1 , m′2 , . . . , m′l H (c1 , c2 , . . . , cl )m1 , m2 , . . . , ml . If cj′ = cj for some j , then the previous equation follows. Exercises 4.3.1 Show that (a) [J+ , J2 ] = 0, (b) [J− , J2 ] = 0. 4.3.2 Derive the root diagram of SU(3) in Fig. 4.6b from the generators λi in Eq. (4.61). Hint. Work out first the SU(2) case in Fig. 4.6a from the Pauli matrices. 4.4 ANGULAR MOMENTUM COUPLING In many-body systems of classical mechanics, the total angular momentum is the sum L = i Li of the individual orbital angular momenta. 
Any isolated particle has conserved angular momentum. In quantum mechanics, conserved angular momentum arises when particles move in a central potential, such as the Coulomb potential in atomic physics, a shell model potential in nuclear physics, or a confinement potential of a quark model in particle physics. In the relativistic Dirac equation, orbital angular momentum is no longer conserved, but J = L + S is conserved, the total angular momentum of a particle consisting of its orbital and intrinsic angular momentum, called spin, S = σ/2 in units of ℏ. It is readily shown that the sum of angular momentum operators obeys the same commutation relations in Eq. (4.37) or (4.41) as the individual angular momentum operators, provided those from different particles commute.

Clebsch–Gordan Coefficients: SU(2)–SO(3)

Clearly, combining two commuting angular momenta Ji to form their sum

J = J1 + J2,   [J1i, J2k] = 0,   (4.91)

occurs often in applications, and J satisfies the angular momentum commutation relations

[Jj, Jk] = [J1j + J2j, J1k + J2k] = [J1j, J1k] + [J2j, J2k] = iεjkl(J1l + J2l) = iεjkl Jl.

For a single particle with spin 1/2, for example, an electron or a quark, the total angular momentum is a sum of orbital angular momentum and spin. For two spinless particles the total orbital angular momentum is L = L1 + L2.

For J² and Jz of Eq. (4.91) to be both diagonal, [J², Jz] = 0 has to hold. To show this we use the obvious commutation relations [Jiz, Jj²] = 0 and

J² = J1² + J2² + 2J1 · J2 = J1² + J2² + J1+J2− + J1−J2+ + 2J1z J2z   (4.91′)

in conjunction with Eq. (4.71), for both Ji, to obtain

[J², Jz] = [J1−J2+ + J1+J2−, J1z + J2z]
         = [J1−, J1z]J2+ + J1−[J2+, J2z] + [J1+, J1z]J2− + J1+[J2−, J2z]
         = J1−J2+ − J1−J2+ − J1+J2− + J1+J2− = 0.

Similarly [J², Ji²] = 0 is proved. Hence the eigenvalues of Ji², J², Jz can be used to label the total angular momentum states |J1 J2 J M⟩.
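Both statements, that J = J1 + J2 obeys the same algebra and that [J², Jz] = 0, can be verified directly for two coupled spin-1/2 particles, with J1i = σi/2 ⊗ 1 and J2i = 1 ⊗ σi/2 acting on the four-dimensional product space. A numerical sketch, assuming NumPy:

```python
import numpy as np

s = [np.array([[0, 1], [1, 0]], dtype=complex),
     np.array([[0, -1j], [1j, 0]], dtype=complex),
     np.array([[1, 0], [0, -1]], dtype=complex)]
I2 = np.eye(2)

# Angular momenta of particles 1 and 2 on the product space (they commute)
J1 = [np.kron(si / 2, I2) for si in s]
J2ops = [np.kron(I2, si / 2) for si in s]
J = [a + b for a, b in zip(J1, J2ops)]     # total angular momentum J = J1 + J2

def comm(a, b):
    return a @ b - b @ a

Jsq = sum(Ji @ Ji for Ji in J)             # total J^2

# [Jx, Jy] = i Jz : the sum obeys the same algebra as each part;
# [J^2, Jz] = 0   : they can be simultaneously diagonalized;
# eigenvalues of J^2 are J(J+1) = 2 (triplet) and 0 (singlet).
eigs = np.sort(np.linalg.eigvalsh(Jsq))
```

The spectrum {0, 2, 2, 2} counts 1 + 3 = 4 = 2 × 2 states, anticipating the counting rule of Eq. (4.96).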
The product states |J1 m1⟩|J2 m2⟩ obviously satisfy the eigenvalue equations

Jz|J1 m1⟩|J2 m2⟩ = (J1z + J2z)|J1 m1⟩|J2 m2⟩ = (m1 + m2)|J1 m1⟩|J2 m2⟩ = M|J1 m1⟩|J2 m2⟩,
Ji²|J1 m1⟩|J2 m2⟩ = Ji(Ji + 1)|J1 m1⟩|J2 m2⟩,   (4.92)

but will not have diagonal J² except for the maximally stretched states with M = ±(J1 + J2) and J = J1 + J2 (see Fig. 4.7a). To see this we use Eq. (4.91′) again in conjunction with Eqs. (4.83) and (4.84) in

J²|J1 m1⟩|J2 m2⟩ = [J1(J1 + 1) + J2(J2 + 1) + 2m1 m2]|J1 m1⟩|J2 m2⟩
  + [J1(J1 + 1) − m1(m1 + 1)]^(1/2) [J2(J2 + 1) − m2(m2 − 1)]^(1/2) |J1 m1 + 1⟩|J2 m2 − 1⟩
  + [J1(J1 + 1) − m1(m1 − 1)]^(1/2) [J2(J2 + 1) − m2(m2 + 1)]^(1/2) |J1 m1 − 1⟩|J2 m2 + 1⟩.   (4.93)

FIGURE 4.7 Coupling of two angular momenta: (a) parallel stretched, (b) antiparallel, (c) general case.

The last two terms in Eq. (4.93) vanish only when m1 = J1 and m2 = J2 or m1 = −J1 and m2 = −J2. In both cases J = J1 + J2 follows from the first line of Eq. (4.93). In general, therefore, we have to form appropriate linear combinations of product states,

|J1 J2 J M⟩ = Σ_{m1,m2} C(J1 J2 J|m1 m2 M) |J1 m1⟩|J2 m2⟩,   (4.94)

so that J² has eigenvalue J(J + 1). The quantities C(J1 J2 J|m1 m2 M) in Eq. (4.94) are called Clebsch–Gordan coefficients. From Eq. (4.92) we see that they vanish unless M = m1 + m2, reducing the double sum to a single sum. Applying J± to |J M⟩ shows that the eigenvalues M of Jz satisfy the usual inequalities −J ≤ M ≤ J.

Clearly, the maximal Jmax = J1 + J2 (see Fig. 4.7a). In this case Eq. (4.93) reduces to a pure product state,

|J1 J2, J = J1 + J2, M = J1 + J2⟩ = |J1 J1⟩|J2 J2⟩,   (4.95a)

so the Clebsch–Gordan coefficient

C(J1 J2 J1 + J2|J1 J2 J1 + J2) = 1.   (4.95b)

The minimal J = J1 − J2 (if J1 > J2, see Fig. 4.7b) and J = J2 − J1 for J2 > J1 follow if we keep in mind that there are just as many product states as |J M⟩ states; that is,

Σ_{J=Jmin}^{Jmax} (2J + 1) = (Jmax − Jmin + 1)(Jmax + Jmin + 1) = (2J1 + 1)(2J2 + 1).
(4.96) This condition holds because the |J1 J2 J M states merely rearrange all product states into irreducible representations of total angular momentum. It is equivalent to the triangle rule: (J1 J2 J ) = 1, (J1 J2 J ) = 0, if |J1 − J2 | ≤ J ≤ J1 + J2 ; else. (4.97) 4.4 Angular Momentum Coupling 269 This indicates that one complete multiplet of each J value from Jmin to Jmax accounts for all the states and that all the |J M states are necessarily orthogonal. In other words, Eq. (4.94) defines a unitary transformation from the orthogonal basis set of products of single-particle states |J1 m1 ; J2 m2  = |J1 m1 |J2 m2  to the two-particle states |J1 J2 J M. The Clebsch–Gordan coefficients are just the overlap matrix elements C(J1 J2 J |m1 m2 M) ≡ J1 J2 J M|J1 m1 ; J2 m2 . (4.98) The explicit construction in what follows shows that they are all real. The states in Eq. (4.94) are orthonormalized, provided that the constraints  C(J1 J2 J |m1 m2 M)C(J1 J2 J ′ |m1 m2 M ′  (4.99a) m1 ,m2 , m1 +m2 =M ′ ′ ′ ′ = J1 J2 J M|J1 J2 J M  = δJ J δMM  J,M C(J1 J2 J |m1 m2 M)C(J1 J2 J |m′1 m′2 M) = J1 m1 |J1 m′1 J2 m2 |J2 m′2  = δm1 m′1 δm2 m′2 (4.99b) hold. Now we are ready to construct more directly the total angular momentum states starting from |Jmax = J1 + J2 M = J1 + J2  in Eq. (4.95a) and using the lowering operator J− = J1− + J2− repeatedly. In the first step we use Eq. (4.84) for ' (1/2 Ji− |Ji Ji  = Ji (Ji + 1) − Ji (Ji − 1) |Ji Ji − 1 = (2Ji )1/2 |Ji Ji − 1, which we substitute into (J1− + J2− |J1 J1 )|J2 J2 . Normalizing the resulting state with M = J1 + J2 − 1 properly to 1, we obtain ' (1/2 |J1 J2 J1 + J2 J1 + J2 − 1 = J1 /(J1 + J2 ) |J1 J1 − 1|J2 J2  ' (1/2 + J2 /(J1 + J2 ) |J1 J1 |J2 J2 − 1. (4.100) Equation (4.100) yields the Clebsch–Gordan coefficients ' (1/2 C(J1 J2 J1 + J2 |J1 − 1 J2 J1 + J2 − 1) = J1 /(J1 + J2 ) , ' (1/2 C(J1 J2 J1 + J2 |J1 J2 − 1 J1 + J2 − 1) = J2 /(J1 + J2 ) . 
(4.101) Then we apply J− again and normalize the states obtained until we reach |J1 J2 J1 + J2 M with M = −(J1 + J2 ). The Clebsch–Gordan coefficients C(J1 J2 J1 + J2 |m1 m2 M) may thus be calculated step by step, and they are all real. The next step is to realize that the only other state with M = J1 + J2 − 1 is the top of the next lower tower of |J1 + J2 − 1M states. Since |J1 + J2 − 1 J1 + J2 − 1 is orthogonal to |J1 + J2 J1 + J2 − 1 in Eq. (4.100), it must be the other linear combination with a relative minus sign, ' (1/2 |J1 J1 − 1|J2 J2  |J1 + J2 − 1 J1 + J2 − 1 = − J2 /(J1 + J2 ) ' (1/2 + J1 /(J1 + J2 ) |J1 J1 |J2 J2 − 1, (4.102) up to an overall sign. 270 Chapter 4 Group Theory Hence we have determined the Clebsch–Gordan coefficients (for J2 ≥ J1 ) ' (1/2 C(J1 J2 J1 + J2 − 1|J1 − 1 J2 J1 + J2 − 1) = − J2 /(J1 + J2 ) , ' (1/2 C(J1 J2 J1 + J2 − 1|J1 J2 − 1 J1 + J2 − 1) = J1 /(J1 + J2 ) . (4.103) Again we continue using J− until we reach M = −(J1 + J2 − 1), and we keep normalizing the resulting states |J1 + J2 − 1M of the J = J1 + J2 − 1 tower. In order to get to the top of the next tower, |J1 + J2 − 2M with M = J1 + J2 − 2, we remember that we have already constructed two states with that M. Both |J1 + J2 J1 + J2 − 2 and |J1 + J2 − 1 J1 + J2 − 2 are known linear combinations of the three product states |J1 J1 |J2 J2 − 2, |J1 J1 − 1 × |J2 J2 − 1, and |J1 J1 − 2|J2 J2 . The third linear combination is easy to find from orthogonality to these two states, up to an overall phase, which is chosen by the Condon–Shortley phase conventions11 so that the coefficient C(J1 J2 J1 + J2 − 2|J1 J2 − 2 J1 + J2 − 2) of the last product state is positive for |J1 J2 J1 + J2 − 2 J1 + J2 − 2. It is straightforward, though a bit tedious, to determine the rest of the Clebsch–Gordan coefficients. 
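The coefficients constructed this way can be cross-checked against a computer algebra system. For two spin-1/2 particles, Eqs. (4.95b), (4.100), and (4.102) give the triplet and singlet coefficients; a sketch assuming SymPy is available (its CG(j1, m1, j2, m2, J, M) follows the same Condon–Shortley convention as the text):

```python
from sympy import Rational
from sympy.physics.quantum.cg import CG   # CG(j1, m1, j2, m2, J, M)

half = Rational(1, 2)

# Stretched state, Eq. (4.95b): C(J1 J2 J1+J2 | J1 J2 J1+J2) = 1
stretched = CG(half, half, half, half, 1, 1).doit()

# Eq. (4.100) for J1 = J2 = 1/2: triplet |1 0> has both coefficients 1/sqrt(2)
c10_a = CG(half, half, half, -half, 1, 0).doit()
c10_b = CG(half, -half, half, half, 1, 0).doit()

# Eq. (4.102): the orthogonal singlet |0 0> carries the relative minus sign
c00_a = CG(half, half, half, -half, 0, 0).doit()
c00_b = CG(half, -half, half, half, 0, 0).doit()
```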
Numerous recursion relations can be derived from matrix elements of various angular momentum operators, for which we refer to the literature.12 The symmetry properties of Clebsch–Gordan coefficients are best displayed in the more symmetric Wigner’s 3j -symbols, which are tabulated:12 J1 J2 J3 m1 m2 m3 ! = (−1)J1 −J2 −m3 C(J1 J2 J3 |m1 m2 , −m3 ), (2J3 + 1)1/2 (4.104a) obeying the symmetry relations J1 J2 J3 m1 m2 m3 ! J1 +J2 +J3 = (−1) Jk Jl Jn mk ml mn ! (4.104b) for (k, l, n) an odd permutation of (1, 2, 3). One of the most important places where Clebsch–Gordan coefficients occur is in matrix elements of tensor operators, which are governed by the Wigner–Eckart theorem discussed in the next section, on spherical tensors. Another is coupling of operators or state vectors to total angular momentum, such as spin-orbit coupling. Recoupling of operators and states in matrix elements leads to 6j and 9j -symbols.12 Clebsch–Gordan coefficients can and have been calculated for other Lie groups, such as SU(3). 11 E. U. Condon and G. H. Shortley, Theory of Atomic Spectra. Cambridge, UK: Cambridge University Press (1935). 12 There is a rich literature on this subject, e.g., A. R. Edmonds, Angular Momentum in Quantum Mechanics. Princeton, NJ: Princeton University Press (1957); M. E. Rose, Elementary Theory of Angular Momentum. New York: Wiley (1957); A. de-Shalit and I. Talmi, Nuclear Shell Model. New York: Academic Press (1963); Dover (2005). Clebsch–Gordan coefficients are tabulated in M. Rotenberg, R. Bivins, N. Metropolis, and J. K. Wooten, Jr., The 3j- and 6j-Symbols. Cambridge, MA: Massachusetts Institute of Technology Press (1959). 4.4 Angular Momentum Coupling 271 Spherical Tensors In Chapter 2 the properties of Cartesian tensors are defined using the group of nonsingular general linear transformations, which contains the three-dimensional rotations as a subgroup. 
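The 3j-symbol conventions just given, Eqs. (4.104a) and (4.104b), can be spot-checked with SymPy's wigner_3j and CG functions (an illustrative check, assuming SymPy; both routines follow the Condon–Shortley/Edmonds conventions):

```python
from sympy.physics.quantum.cg import CG
from sympy.physics.wigner import wigner_3j

# Eq. (4.104a), choosing m3 = 0 so the phase is simply (-1)^(j1 - j2):
threej = float(wigner_3j(1, 2, 2, 1, -1, 0))
cg = float(CG(1, 1, 2, -1, 2, 0).doit())     # C(1 2 2 | 1, -1, 0)
phase = (-1.0) ** (1 - 2 - 0)

# Eq. (4.104b): swapping two columns (an odd permutation) gives (-1)^(j1+j2+j3)
swapped = float(wigner_3j(2, 1, 2, -1, 1, 0))
```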
A tensor of a given rank that is irreducible with respect to the full group may well become reducible for the rotation group SO(3). To explain this point, consider the second-rank tensor with components Tj k = xj yk for j, k = 1, 2, 3. It contains the symmetric tensor Sj k = (xj yk + xk yj )/2 and the antisymmetric tensor Aj k = (xj yk − xk yj )/2, so Tj k = Sj k + Aj k . This reduces Tj k in SO(3). However, under rotations the scalar product x · y is invariant and is therefore irreducible in SO(3). Thus, Sj k can be reduced by subtraction of the multiple of x · y that makes it traceless. This leads to the SO(3)-irreducible tensor Sj′ k = 21 (xj yk + xk yj ) − 31 x · yδj k . Tensors of higher rank may be treated similarly. When we form tensors from products of the components of the coordinate vector r then, in polar coordinates that are tailored to SO(3) symmetry, we end up with the spherical harmonics of Chapter 12. The form of the ladder operators for SO(3) in Section 4.3 leads us to introduce the spherical components (note the different normalization and signs, though, prescribed by the Ylm ) of a vector A: A+1 = − √1 (Ax + iAy ), 2 A−1 = √1 (Ax 2 − iAy ), A0 = Az . Then we have for the coordinate vector r in polar coordinates,   √1 r sin θ e−iϕ = r 4π Y1,−1 , r+1 = − √1 r sin θ eiϕ = r 4π Y , r = 11 −1 3 3 2 2 r0 = r (4.105) (4.106) 4π 3 Y10 , where Ylm (θ, ϕ) are the spherical harmonics of Chapter 12. Again, the spherical j m components of tensors Tj m of higher rank j may be introduced similarly. An irreducible spherical tensor operator Tj m of rank j has 2j + 1 components, just as for spherical harmonics, and m runs from −j to +j . 
Under a rotation R(α), where α stands for the Euler angles, the Ylm transform as Ylm (ˆr′ ) =  m′ l Ylm′ (ˆr)Dm ′ m (R), (4.107a) where rˆ ′ = (θ ′ , ϕ ′ ) are obtained from rˆ = (θ, ϕ) by the rotation R and are the angles of the same point in the rotated frame, and DJm′ m (α, β, γ ) = J m| exp(iαJz ) exp(iβJy ) exp(iγ Jz )|J m′  are the rotation matrices. So, for the operator Tj m , we define RTj m R−1 =  m′ j Tj m′ Dm′ m (α). (4.107b) 272 Chapter 4 Group Theory For an infinitesimal rotation (see Eq. (4.20) in Section 4.2 on generators) the left side of Eq. (4.107b) simplifies to a commutator and the right side to the matrix elements of J, the infinitesimal generator of the rotation R:  [Jn , Tj m ] = Tj m′ j m′ |Jn |j m. (4.108) m′ If we substitute Eqs. (4.83) and (4.84) for the matrix elements of Jm we obtain the alternative transformation laws of a tensor operator, ' (1/2 [J0 , Tj m ] = mTj m , [J± , Tj m ] = Tj m±1 (j − m)(j ± m + 1) . (4.109) We can use the Clebsch–Gordan coefficients of the previous subsection to couple two tensors of given rank to another rank. An example is the cross or vector product of two vectors a and b from Chapter 1. Let us write both vectors in spherical components, am and bm . Then we verify that the tensor Cm of rank 1 defined as  i Cm ≡ C(111|m1 m2 m)am1 bm2 = √ (a × b)m . (4.110) 2 m1 m2 Since Cm is a spherical tensor of rank 1 that is linear in the components of a and b, it must be proportional to the cross product, Cm = N (a × b)m . The constant N can be determined from a special case, a = xˆ , b = yˆ , essentially writing xˆ × yˆ = zˆ in spherical components as follows. Using √ √ (ˆz)0 = 1; (ˆx)1 = −1/ 2, (ˆx)−1 = 1/ 2; √ √ (ˆy)1 = −i/ 2, (ˆy)−1 = −i/ 2, Eq. (4.110) for m = 0 becomes   C(111|1, −1, 0) (ˆx)1 (ˆy)−1 − (ˆx)−1 (ˆy)1 = N (ˆz)0 = N     1 1 i 1 i i = √ −√ −√ − √ −√ =√ , 2 2 2 2 2 2 where we have used C(111|101) = √1 2 √1 2 from Eq. (4.103) for J1 = 1 = J2 , which implies using Eqs. (4.104a,b): C(111|1, −1, 0) = ! ! 
1 1 1 1 1 1 1 1 1 = − √ C(111|1, −1, 0). = − √ C(111|101) = − = − 6 1 −1 0 1 0 −1 3 3 A bit simpler is the usual scalar product of two vectors in Chapter 1, in which a and b are coupled to zero angular momentum: √  √ C(110|m, −m, 0)am b−m . (4.111) a · b ≡ −(ab)0 3 ≡ − 3 m Again, the rank zero of our tensor product implies a · b = n(ab)0 . The constant n can be determined from a special case, essentially writing zˆ 2 = 1 in spherical components: zˆ 2 = 1 = nC(110|000) = − √n . 3 4.4 Angular Momentum Coupling 273 Another often-used application of tensors is the recoupling that involves 6j-symbols for three operators and 9j for four operators.12 An example is the following scalar product, for which it can be shown12 that 1 σ 1 · rσ 2 · r = r2 σ 1 · σ 2 + (σ 1 σ 2 )2 · (rr)2 , (4.112) 3 but which can also be rearranged by elementary means. Here the tensor operators are defined as  (σ 1 σ 2 )2m = C(112|m1 m2 m)σ1m1 σ2m2 , (4.113) m1 m2 (rr)2m =  m C(112|m1 m2 m)rm1 rm2 = ) 8π 2 r Y2m (ˆr), 15 and the scalar product of tensors of rank 2 as  √  (σ 1 σ 2 )2 · (rr)2 = (−1)m (σ 1 σ 2 )2m (rr)2,−m = 5 (σ 1 σ 2 )2 (rr)2 0 . (4.114) (4.115) m One of the most important applications of spherical tensor operators is the Wigner– Eckart theorem. It says that a matrix element of a spherical tensor operator Tkm of rank k between states of angular momentum j and j ′ factorizes into a Clebsch–Gordan coefficient and a so-called reduced matrix element, denoted by double bars, that no longer has any dependence on the projection quantum numbers m, m′ , n: ′ (4.116) j ′ m′ |Tkn |j m = C(kjj ′ |nmm′ )(−1)k−j +j j ′ Tk j / (2j ′ + 1). In other words, such a matrix element factors into a dynamic part, the reduced matrix element, and a geometric part, the Clebsch–Gordan coefficient that contains the rotational properties (expressed by the projection quantum numbers) from the SO(3) invariance. 
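Equation (4.110) can be verified numerically for arbitrary vectors: couple the spherical components with C(111|m1 m2 m) and compare with i/√2 times the spherical components of a × b. A sketch assuming NumPy and SymPy:

```python
import numpy as np
from sympy.physics.quantum.cg import CG

def spherical(v):
    """Spherical components of a Cartesian 3-vector, Eq. (4.105)."""
    vx, vy, vz = v
    return {+1: -(vx + 1j * vy) / np.sqrt(2),
            0: vz + 0j,
            -1: (vx - 1j * vy) / np.sqrt(2)}

rng = np.random.default_rng(0)
a, b = rng.normal(size=3), rng.normal(size=3)   # two arbitrary real vectors
am, bm = spherical(a), spherical(b)
cross = spherical(np.cross(a, b))

def coupled(m):
    """C_m = sum over m1 + m2 = m of C(111|m1 m2 m) a_{m1} b_{m2}, Eq. (4.110)."""
    return sum(float(CG(1, m1, 1, m - m1, 1, m).doit()) * am[m1] * bm[m - m1]
               for m1 in (-1, 0, 1) if abs(m - m1) <= 1)
```

The assertion C_m = (i/√2)(a × b)_m holds for every m, confirming the normalization N = i/√2 derived in the text.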
To see this we couple Tkn with the initial state to total angular momentum j ′ :  C(kjj ′ |nmm′ )Tkn |j m. (4.117) |j ′ m′ 0 ≡ nm |j ′ m′ 0 transforms just like |j ′ m′ . Thus, the overlap matrix eleUnder rotations the state ′ ′ ′ ′ ment j m |j m 0 is a rotational scalar that has no m′ dependence, so we can average over the projections, δJj ′ δMm′  ′ j µ|j ′ µ0 . (4.118) J M|j ′ m′ 0 = 2j ′ + 1 µ Next we substitute our definition, Eq. (4.117), into Eq. (4.118) and invert the relation Eq. (4.117) using orthogonality, Eq. (4.99b), to find that  δJj ′ δMm′  J M|Tkn |j m = C(kjj ′ |nmm′ ) J µ|J µ0 , (4.119) 2J + 1 µ ′ ′ jm which proves the Wigner–Eckart theorem, Eq. (4.116).13 13 The extra factor (−1)k−j +j ′ / (2j ′ + 1) in Eq. (4.116) is just a convention that varies in the literature. 274 Chapter 4 Group Theory As an application, we can write the Pauli matrix elements in terms of Clebsch–Gordan coefficients. We apply the Wigner–Eckart theorem to  1 1  1  1   1 1    √1 (4.120) 2 γ σα 2 β = (σα )γβ = − 2 C 1 2 2 αβγ 2 σ 2 . √ Since  21 21 |σ0 | 12 21  = 1 with σ0 = σ3 and C(1 21 12 | 0 21 12 ) = −1/ 3, we find 1 1 √ (4.121) 2 σ 2 = 6, which, substituted into Eq. (4.120), yields √   (σα )γβ = − 3C 1 21 12 αβγ . (4.122) Note that the α = ±1, 0 denote the spherical components of the Pauli matrices. Young Tableaux for SU(n) Young tableaux (YT) provide a powerful and elegant method for decomposing products of SU(n) group representations into sums of irreducible representations. The YT provide the dimensions and symmetry types of the irreducible representations in this so-called Clebsch–Gordan series, though not the Clebsch–Gordan coefficients by which the product states are coupled to the quantum numbers of each irreducible representation of the series (see Eq. (4.94)). Products of representations correspond to multiparticle states. In this context, permutations of particles are important when we deal with several identical particles. 
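Before moving on, the factorization (4.122) can be checked numerically: every matrix element of a spherical Pauli component σα equals −√3 times the corresponding Clebsch–Gordan coefficient C(1 ½ ½ | α β γ). A sketch assuming NumPy and SymPy:

```python
import numpy as np
from sympy import Rational
from sympy.physics.quantum.cg import CG

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

# Spherical components of sigma, with the normalization of Eq. (4.105)
sigma = {+1: -(sx + 1j * sy) / np.sqrt(2), 0: sz, -1: (sx - 1j * sy) / np.sqrt(2)}

half = Rational(1, 2)
row = {half: 0, -half: 1}   # basis order |1/2, +1/2>, |1/2, -1/2>

def wigner_eckart_holds():
    """Check (sigma_alpha)_{gamma beta} = -sqrt(3) C(1 1/2 1/2 | alpha beta gamma)."""
    for alpha in (-1, 0, 1):
        for beta in (half, -half):          # initial projection
            for gamma in (half, -half):     # final projection
                me = sigma[alpha][row[gamma], row[beta]]
                cg = float(CG(1, alpha, half, beta, half, gamma).doit())
                if abs(me + np.sqrt(3) * cg) > 1e-12:
                    return False
    return True
```

The single constant −√3 plays the role of the reduced matrix element (up to the convention factor), as in Eq. (4.121).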
Permutations of n identical objects form the symmetric group Sn . A close connection between irreducible representations of Sn , which are the YT, and those of SU(n) is provided by this theorem: Every N -particle state of Sn that is made up of single-particle states of the fundamental n-dimensional SU(n) multiplet belongs to an irreducible SU(n) representation. A proof is in Chapter 22 of Wybourne.14 For SU(2) the fundamental representation is a box that stands for the spin + 12 (up) and 1 − 2 (down) states and has dimension 2. For SU(3) the box comprises the three quark states in the triangle of Fig. 4.5a; it has dimension 3. An array of boxes shown in Fig. 4.8 with λ1 boxes in the first row, λ2 boxes in the second row, . . . , and λn−1 boxes in the last row is called a Young tableau (YT), denoted by [λ1 , . . . , λn−1 ], and represents an irreducible representation of SU(n) if and only if λ1 ≥ λ2 ≥ · · · ≥ λn−1 . (4.123) Boxes in the same row are symmetric representations; those in the same column are antisymmetric. A YT consisting of one row is totally symmetric. A YT consisting of a single column is totally antisymmetric. There are at most n − 1 rows for SU(n) YT because a column of n boxes is the totally antisymmetric (Slater determinant of single-particle states) singlet representation that may be struck from the YT. An array of N boxes is an N -particle state whose boxes may be labeled by positive integers so that the (particle labels or) numbers in one row of the YT do not decrease from 14 B. G. Wybourne, Classical Groups for Physicists. New York: Wiley (1974). 4.4 Angular Momentum Coupling 275 FIGURE 4.8 Young tableau (YT) for SU(n). left to right and those in any one column increase from top to bottom. In contrast to the possible repetitions of row numbers, the numbers in any column must be different because of the antisymmetry of these states. 
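Dimensions of Young tableaux follow from the hook rule, Eq. (4.125), developed in the next paragraphs; it is short enough to implement directly. A sketch (the function name is ours):

```python
def yt_dim(n, rows):
    """Dimension of the SU(n) irrep whose Young tableau has the given row lengths.

    Implements Eq. (4.125): the product of box numbers (n on the diagonal,
    +1 per step to the right, -1 per step down) divided by the product of hooks.
    """
    num, den = 1, 1
    for i, length in enumerate(rows):            # i = row index, from the top
        for j in range(length):                  # j = column index
            num *= n + j - i
            right = length - j - 1                          # boxes to the right
            below = sum(1 for r in rows[i + 1:] if r > j)   # boxes below in this column
            den *= right + below + 1                        # hook length of box (i, j)
    return num // den

# SU(3): quark [1], antiquark [1,1], octet [2,1], decuplet [3], singlet [1,1,1]
dims = [yt_dim(3, yt) for yt in ([1], [1, 1], [2, 1], [3], [1, 1, 1])]
```

With these dimensions the Clebsch–Gordan series below check out numerically: 3 · 3 = 8 + 1 for q q̄ (Eq. 4.126) and 3 · 3 · 3 = 10 + 8 + 8 + 1 for q³ (Eq. 4.127).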
The product of a YT with a single box, , is the sum of YT formed when the box is put at the end of each row of the YT, provided the resulting YT is legitimate, that is, obeys Eq. (4.123). For SU(2) the product of two boxes, spin 1/2 representations of dimension 2, generates ⊗ = ⊕ [1, 1], (4.124) the symmetric spin 1 representation of dimension 3 and the antisymmetric singlet of dimension 1 mentioned earlier. The column of n − 1 boxes is the conjugate representation of the fundamental representation; its product with a single box contains the column of n boxes, which is the singlet. For SU(3) the conjugate representation of the single box, or fundamental quark representation, is the inverted triangle in Fig. 4.5b, [1, 1], which represents the three antiquarks ¯ s¯ , obviously of dimension 3 as well. u, ¯ d, The dimension of a YT is given by the ratio N . (4.125) D The numerator N is obtained by writing an n in all boxes of the YT along the diagonal, (n + 1) in all boxes immediately above the diagonal, (n − 1) immediately below the diagonal, etc. N is the product of all the numbers in the YT. An example is shown in Fig. 4.9a for the octet representation of SU(3), where N = 2 · 3 · 4 = 24. There is a closed formula that is equivalent to Eq. (4.125).15 The denominator D is the product of all hooks.16 A hook is drawn through each box of the YT by starting a horizontal line from the right to the box in question and then continuing it vertically out of the YT. The number of boxes encountered by the hook-line is the hook-number of the box. D is the product of all hook-numbers of dim YT = 15 See, for example, M. Hamermesh, Group Theory and Its Application to Physical Problems. Reading, MA: Addison-Wesley (1962). 16 F. Close, Introduction to Quarks and Partons. New York: Academic Press (1979). 276 Chapter 4 Group Theory (a) (b) FIGURE 4.9 Illustration of (a) N and (b) D in Eq. (4.125) for the octet Young tableau of SU(3). the YT. An example is shown in Fig. 
4.9b for the octet of SU(3), whose hook-number is D = 1 · 3 · 1 = 3. Hence the dimension of the SU(3) octet is 24/3 = 8, whence its name. Now we can calculate the dimensions of the YT in Eq. (4.124). For SU(2) they are 2 × 2 = 3 + 1 = 4. For SU(3) they are 3 · 3 = 3 · 4/(1 · 2) + 3 · 2/(2 · 1) = 6 + 3 = 9. For the product of the quark times antiquark YT of SU(3) we get [1, 1] ⊗ = [2, 1] ⊕ [1, 1, 1], (4.126) that is, octet and singlet, which are precisely the meson multiplets considered in the subsection on the eightfold way, the SU(3) flavor symmetry, which suggest mesons are bound states of a quark and an antiquark, q q¯ configurations. For the product of three quarks we get   ⊗ ⊗ = ⊕ [1, 1] ⊗ = ⊕ 2[2, 1] ⊕ [1, 1, 1], (4.127) that is, decuplet, octet, and singlet, which are the observed multiplets for the baryons, which suggests they are bound states of three quarks, q 3 configurations. As we have seen, YT describe the decomposition of a product of SU(n) irreducible representations into irreducible representations of SU(n), which is called the Clebsch–Gordan series, while the Clebsch–Gordan coefficients considered earlier allow construction of the individual states in this series. 4.4 Angular Momentum Coupling 277 Exercises 4.4.1 4.4.2 Derive recursion relations for Clebsch–Gordan coefficients. Use them to calculate C(11J |m1 m2 M) for J = 0, 1, 2. Hint. Use the known matrix elements of J+ = J1+ + J2+ , Ji+ , and J2 = (J1 + J2 )2 , etc. Show that (Yl χ)JM = C(l 21 J |ml ms M)Ylml χms , where χ±1/2 are the spin up and down eigenfunctions of σ3 = σz , transforms like a spherical tensor of rank J . 4.4.3 When the spin of quarks is taken into account, the SU(3) flavor symmetry is replaced by the SU(6) symmetry. Why? Obtain the Young tableau for the antiquark configuration q. ¯ Then decompose the product q q. ¯ Which SU(3) representations are contained in the nontrivial SU(6) representation for mesons? Hint. Determine the dimensions of all YT. 4.4.4 For l = 1, Eq. 
(4.107a) becomes Y1m (θ ′ , ϕ ′ ) = 1  m′ =−1 ′ 1 m Dm ′ m (α, β, γ )Y1 (θ, ϕ). Rewrite these spherical harmonics in Cartesian form. Show that the resulting Cartesian coordinate equations are equivalent to the Euler rotation matrix A(α, β, γ ), Eq. (3.94), rotating the coordinates. 4.4.5 Assuming that D j (α, β, γ ) is unitary, show that l  Ylm∗ (θ1 , ϕ1 )Ylm (θ2 , ϕ2 ) m=−l is a scalar quantity (invariant under rotations). This is a spherical tensor analog of a scalar product of vectors. 4.4.6 (a) Show that the α and γ dependence of Dj (α, β, γ ) may be factored out such that Dj (α, β, γ ) = Aj (α)dj (β)Cj (γ ). (b) Show that Aj (α) and Cj (γ ) are diagonal. Find the explicit forms. (c) Show that dj (β) = Dj (0, β, 0). 4.4.7 The angular momentum–exponential form of the Euler angle rotation operators is R = Rz′′ (γ )Ry ′ (β)Rz (α) = exp(−iγ Jz′′ ) exp(−iβJy ′ ) exp(−iαJz ). Show that in terms of the original axes R = exp(iαJz ) exp(−iβJy ) exp(−iγ Jz ). Hint. The R operators transform as matrices. The rotation about the y ′ -axis (second Euler rotation) may be referred to the original y-axis by exp(−iβJy ′ ) = exp(−iαJz ) exp(−iβJy ) exp(iαJz ). 278 Chapter 4 Group Theory 4.4.8 4.4.9 4.5 Using the Wigner–Eckart theorem, prove the decomposition theorem for a spherical ′ 1 |j m δjj ′ . vector operator j ′ m′ |T1m |j m = j mj|J·T (j +1) Using the Wigner–Eckart theorem, prove the factorization j ′ m′ |JM J · T1 |j m = j m′ |JM |j mδj ′ j j m|J · T1 |j m. HOMOGENEOUS LORENTZ GROUP Generalizing the approach to vectors of Section 1.2, in special relativity we demand that our physical laws be covariant17 under a. space and time translations, b. rotations in real, three-dimensional space, and c. Lorentz transformations. The demand for covariance under translations is based on the homogeneity of space and time. Covariance under rotations is an assertion of the isotropy of space. The requirement of Lorentz covariance follows from special relativity. 
All three of these transformations together form the inhomogeneous Lorentz group or the Poincaré group. When we exclude translations, the space rotations and the Lorentz transformations together form a group — the homogeneous Lorentz group. We first generate a subgroup, the Lorentz transformations in which the relative velocity v is along the x = x 1 -axis. The generator may be determined by considering space–time reference frames moving with a relative velocity δv, an infinitesimal.18 The relations are similar to those for rotations in real space, Sections 1.2, 2.6, and 3.3, except that here the angle of rotation is pure imaginary (compare Section 4.6). Lorentz transformations are linear not only in the space coordinates xi but in the time t as well. They originate from Maxwell’s equations of electrodynamics, which are invariant under Lorentz transformations, as we shall see later. Lorentz transformations leave the quadratic form c2 t 2 − x12 − x22 − x32 = x02 − x12 − x22 − x32 invariant, where x0 = ct. We see this if we switch on a light source  at the origin of the coordinate system. At time t xi2 , so c2 t 2 − x12 − x22 − x32 = 0. Special relativity light has traveled the distance ct = requires that in all (inertial) frames that move with velocity v ≤ c in any direction relative to the xi -system and have the same origin at time t = 0, c2 t ′ 2 − x1′ 2 − x2′ 2 − x3′ 2 = 0 holds also. Four-dimensional space–time with the metric x · x = x 2 = x02 − x12 − x22 − x32 is called Minkowski space, with the scalar product of two four-vectors defined as a · b = a0 b0 − a · b. Using the metric tensor   1 0 0 0   0   µν  0 −1 0   (4.128) (gµν ) = g =   0 0 −1 0  0 0 0 −1 17 To be covariant means to have the same form in different coordinate systems so that there is no preferred reference system (compare Sections 1.2 and 2.6). 18 This derivation, with a slightly different metric, appears in an article by J. L. Strecker, Am. J. Phys. 35: 12 (1967). 
4.5 Homogeneous Lorentz Group 279 we can raise and lower the indices of a four-vector, such as the coordinates x µ = (x0 , x), so that xµ = gµν x ν = (x0 , −x) and x µ gµν x ν = x02 − x2 , Einstein’s summation convention being understood. For the gradient, ∂ µ = (∂/∂x0 , −∇) = ∂/∂xµ and ∂µ = (∂/∂x0 , ∇), so ∂ 2 = ∂ µ ∂µ = (∂/∂x0 )2 − ∇ 2 is a Lorentz scalar, just like the metric x 2 = x02 − x2 . For v ≪ c, in the nonrelativistic limit, a Lorentz transformation must be Galilean. Hence, to derive the form of a Lorentz transformation along the x1 -axis, we start with a Galilean transformation for infinitesimal relative velocity δv: x ′ 1 = x 1 − δvt = x 1 − x 0 δβ. (4.129) Here, β = v/c. By symmetry we also write x ′ 0 = x 0 + aδβx 1 , (4.129′ ) x0′ 2 − x1′ 2 = x02 − x12 . (4.130) with the parameter a chosen so that x02 − x12 is invariant, Remember, x µ = (x 0 , x) is the prototype four-dimensional vector in Minkowski space. Thus Eq. (4.130) is simply a statement of the invariance of the square of the magnitude of the “distance” vector under Lorentz transformation in Minkowski space. Here is where the special relativity is brought into our transformation. Squaring and subtracting Eqs. (4.129) and (4.129′ ) and discarding terms of order (δβ)2 , we find a = −1. Equations (4.129) and (4.129′ ) may be combined as a matrix equation, ! ! x′ 0 x0 = (12 − δβσ1 ) ; (4.131) x′ 1 x1 σ1 happens to be the Pauli matrix, σ1 , and the parameter δβ represents an infinitesimal change. Using the same techniques as in Section 4.2, we repeat the transformation N times to develop a finite transformation with the velocity parameter ρ = N δβ. Then !  !  x′ 0 ρσ1 N x 0 = 12 − . (4.132) N x′ 1 x1 In the limit as N → ∞, lim N →∞  ρσ1 12 − N N = exp(−ρσ1 ). (4.133) As in Section 4.2, the exponential is interpreted by a Maclaurin expansion, exp(−ρσ1 ) = 12 − ρσ1 + 1 1 (ρσ1 )2 − (ρσ1 )3 + · · · . 2! 3! (4.134) Noting that (σ1 )2 = 12 , exp(−ρσ1 ) = 12 cosh ρ − σ1 sinh ρ. 
Hence our finite Lorentz transformation is
$$\begin{pmatrix} x'^0 \\ x'^1 \end{pmatrix} = \begin{pmatrix} \cosh\rho & -\sinh\rho \\ -\sinh\rho & \cosh\rho \end{pmatrix}\begin{pmatrix} x^0 \\ x^1 \end{pmatrix}. \tag{4.136}$$

Chapter 4 Group Theory

$\sigma_1$ has generated the representation of this pure Lorentz transformation. The quantities $\cosh\rho$ and $\sinh\rho$ may be identified by considering the origin of the primed coordinate system, $x'^1 = 0$, or $x^1 = vt$. Substituting into Eq. (4.136), we have
$$0 = x^1\cosh\rho - x^0\sinh\rho. \tag{4.137}$$
With $x^1 = vt$ and $x^0 = ct$,
$$\tanh\rho = \beta = \frac{v}{c}.$$
Note that the rapidity $\rho \neq v/c$, except in the limit as $v \to 0$. The rapidity is the additive parameter for pure Lorentz transformations ("boosts") along the same axis, corresponding to angles for rotations about the same axis. Using $1 - \tanh^2\rho = (\cosh^2\rho)^{-1}$,
$$\cosh\rho = \left(1 - \beta^2\right)^{-1/2} \equiv \gamma, \qquad \sinh\rho = \beta\gamma. \tag{4.138}$$
The group of Lorentz transformations is not compact, because the limit of a sequence of rapidities going to infinity is no longer an element of the group.

The preceding special case of the velocity parallel to one space axis is easy, but it illustrates the infinitesimal-velocity–exponentiation–generator technique. Now, this exact technique may be applied to derive the Lorentz transformation for a relative velocity $\mathbf{v}$ not parallel to any space axis. The matrices given by Eq. (4.136) for the case of $\mathbf{v} = \hat{\mathbf{x}}v_x$ form a subgroup. The matrices in the general case do not. The product of two Lorentz transformation matrices $L(\mathbf{v}_1)$ and $L(\mathbf{v}_2)$ yields a third Lorentz matrix, $L(\mathbf{v}_3)$, if the two velocities $\mathbf{v}_1$ and $\mathbf{v}_2$ are parallel. The resultant velocity, $\mathbf{v}_3$, is related to $\mathbf{v}_1$ and $\mathbf{v}_2$ by the Einstein velocity addition law, Exercise 4.5.3. If $\mathbf{v}_1$ and $\mathbf{v}_2$ are not parallel, no such simple relation exists. Specifically, consider three reference frames $S$, $S'$, and $S''$, with $S$ and $S'$ related by $L(\mathbf{v}_1)$ and $S'$ and $S''$ related by $L(\mathbf{v}_2)$. If the velocity of $S''$ relative to the original system $S$ is $\mathbf{v}_3$, $S''$ is not obtained from $S$ by $L(\mathbf{v}_3) = L(\mathbf{v}_2)L(\mathbf{v}_1)$.
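This failure to close can be checked numerically. The sketch below (Python, with $c = 1$; the helper functions are ours, not the text's) composes two finite boosts of the $4\times 4$ form generalizing Eq. (4.136) and uses the fact that a pure boost matrix is symmetric: the product of an $x$-boost and a $y$-boost is not symmetric, so it cannot be a single pure boost, while parallel boosts combine by simple rapidity addition.

```python
import math

def boost_x(beta):
    """4x4 pure Lorentz boost along x, ordering (x0, x1, x2, x3), c = 1."""
    g = 1.0 / math.sqrt(1.0 - beta**2)
    return [[g, -g*beta, 0, 0],
            [-g*beta, g, 0, 0],
            [0, 0, 1, 0],
            [0, 0, 0, 1]]

def boost_y(beta):
    g = 1.0 / math.sqrt(1.0 - beta**2)
    return [[g, 0, -g*beta, 0],
            [0, 1, 0, 0],
            [-g*beta, 0, g, 0],
            [0, 0, 0, 1]]

def matmul(A, B):
    return [[sum(A[i][k]*B[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

L = matmul(boost_y(0.5), boost_x(0.5))     # boost along x, then along y
# A pure boost matrix is symmetric; this product is not, so it is
# not a single pure Lorentz boost:
asym = max(abs(L[i][j] - L[j][i]) for i in range(4) for j in range(4))
assert asym > 1e-6

# Parallel boosts, by contrast, do compose into one boost: rapidities add.
r1, r2 = math.atanh(0.3), math.atanh(0.4)
L12 = matmul(boost_x(0.3), boost_x(0.4))
L3 = boost_x(math.tanh(r1 + r2))           # tanh(r1+r2) = (0.3+0.4)/(1+0.12)
err = max(abs(L12[i][j] - L3[i][j]) for i in range(4) for j in range(4))
assert err < 1e-12
```

The rapidity-addition check in the last four lines is exactly the content of Exercise 4.5.3.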
Rather, we find that
$$L(\mathbf{v}_3) = R\,L(\mathbf{v}_2)\,L(\mathbf{v}_1), \tag{4.139}$$
where $R$ is a $3\times 3$ space rotation matrix embedded in our four-dimensional space–time. With $\mathbf{v}_1$ and $\mathbf{v}_2$ not parallel, the final system, $S''$, is rotated relative to $S$. This rotation is the origin of the Thomas precession involved in spin-orbit coupling terms in atomic and nuclear physics. Because of its presence, the pure Lorentz transformations $L(\mathbf{v})$ by themselves do not form a group.

Kinematics and Dynamics in Minkowski Space–Time

We have seen that the propagation of light determines the metric
$$\mathbf{r}^2 - c^2t^2 = 0 = \mathbf{r}'^2 - c^2t'^2,$$
where $x^\mu = (ct, \mathbf{r})$ is the coordinate four-vector. For a particle moving with velocity $\mathbf{v}$, the Lorentz invariant infinitesimal version
$$c\,d\tau \equiv \sqrt{dx^\mu\,dx_\mu} = \sqrt{c^2\,dt^2 - d\mathbf{r}^2} = dt\sqrt{c^2 - \mathbf{v}^2}$$
defines the invariant proper time $\tau$ on its track. Because of time dilation in moving frames, a proper-time clock rides with the particle (in its rest frame) and runs at the slowest possible rate compared to any other inertial frame (of an observer, for example). The four-velocity of the particle can now be defined properly as
$$\frac{dx^\mu}{d\tau} = u^\mu = \left(\frac{c}{\sqrt{c^2 - \mathbf{v}^2}},\ \frac{\mathbf{v}}{\sqrt{c^2 - \mathbf{v}^2}}\right),$$
so $u^2 = 1$, and the four-momentum $p^\mu = cmu^\mu = \left(\frac{E}{c}, \mathbf{p}\right)$ yields Einstein's famous energy relation
$$E = \frac{mc^2}{\sqrt{1 - v^2/c^2}} = mc^2 + \frac{m}{2}v^2 + \cdots.$$
A consequence of $u^2 = 1$, and its physical significance, is that the particle is on its mass shell, $p^2 = m^2c^2$.

Now we formulate Newton's equation for a single particle of mass $m$ in special relativity as $\frac{dp^\mu}{d\tau} = K^\mu$, with $K^\mu$ denoting the force four-vector, so that its vector part coincides with the usual form. For $\mu = 1, 2, 3$ we use $d\tau = dt\sqrt{1 - \mathbf{v}^2/c^2}$ and find
$$\frac{1}{\sqrt{1 - \mathbf{v}^2/c^2}}\frac{d\mathbf{p}}{dt} = \frac{\mathbf{F}}{\sqrt{1 - \mathbf{v}^2/c^2}} = \mathbf{K},$$
determining $\mathbf{K}$ in terms of the usual force $\mathbf{F}$. We need to find $K^0$. We proceed by analogy with the derivation of energy conservation, multiplying the force equation into the four-velocity:
$$m u_\nu\frac{du^\nu}{d\tau} = \frac{m}{2}\frac{du^2}{d\tau} = 0,$$
because $u^2 = 1 = \text{const.}$
The other side of Newton's equation yields
$$0 = u\cdot K = \frac{K^0}{\sqrt{1 - \mathbf{v}^2/c^2}} - \frac{\mathbf{F}\cdot\mathbf{v}/c}{1 - \mathbf{v}^2/c^2},$$
so $K^0 = \dfrac{\mathbf{F}\cdot\mathbf{v}/c}{\sqrt{1 - \mathbf{v}^2/c^2}}$ is related to the rate of work done by the force on the particle.

Now we turn to two-body collisions, in which energy–momentum conservation takes the form $p_1 + p_2 = p_3 + p_4$, where the $p_i^\mu$ are the particle four-momenta. Because the scalar product of any four-vector with itself is an invariant under Lorentz transformations, it is convenient to define the Lorentz invariant energy squared $s = (p_1 + p_2)^2 = P^2$, where $P^\mu$ is the total four-momentum, and to use units where the velocity of light $c = 1$. The laboratory system (lab) is defined as the rest frame of the particle with four-momentum $p_2^\mu = (m_2, \mathbf{0})$, and the center of momentum frame (cms) by the total four-momentum $P^\mu = (E_1 + E_2, \mathbf{0})$. When the incident lab energy $E_{1L}$ is given, then
$$s = p_1^2 + p_2^2 + 2p_1\cdot p_2 = m_1^2 + m_2^2 + 2m_2E_{1L}$$
is determined. Now, the cms energies of the four particles are obtained from scalar products such as
$$p_1\cdot P = E_1(E_1 + E_2) = E_1\sqrt{s},$$
so
$$E_1 = \frac{p_1\cdot(p_1 + p_2)}{\sqrt{s}} = \frac{m_1^2 + p_1\cdot p_2}{\sqrt{s}} = \frac{m_1^2 - m_2^2 + s}{2\sqrt{s}},$$
$$E_2 = \frac{p_2\cdot(p_1 + p_2)}{\sqrt{s}} = \frac{m_2^2 + p_1\cdot p_2}{\sqrt{s}} = \frac{m_2^2 - m_1^2 + s}{2\sqrt{s}},$$
$$E_3 = \frac{p_3\cdot(p_3 + p_4)}{\sqrt{s}} = \frac{m_3^2 + p_3\cdot p_4}{\sqrt{s}} = \frac{m_3^2 - m_4^2 + s}{2\sqrt{s}},$$
$$E_4 = \frac{p_4\cdot(p_3 + p_4)}{\sqrt{s}} = \frac{m_4^2 + p_3\cdot p_4}{\sqrt{s}} = \frac{m_4^2 - m_3^2 + s}{2\sqrt{s}},$$
by substituting
$$2p_1\cdot p_2 = s - m_1^2 - m_2^2, \qquad 2p_3\cdot p_4 = s - m_3^2 - m_4^2.$$
Thus, all cms energies $E_i$ depend only on the incident energy, not on the scattering angle. For elastic scattering, $m_3 = m_1$, $m_4 = m_2$, so $E_3 = E_1$, $E_4 = E_2$. The Lorentz invariant momentum transfer squared
$$t = (p_1 - p_3)^2 = m_1^2 + m_3^2 - 2p_1\cdot p_3$$
depends linearly on the cosine of the scattering angle.

Example 4.5.1 KAON DECAY AND PION PHOTOPRODUCTION THRESHOLD

Find the kinetic energies of the muon of mass 106 MeV and massless neutrino into which a K meson of mass 494 MeV decays in its rest frame.

Conservation of energy and momentum gives $m_K = E_\mu + E_\nu = \sqrt{s}$.
Applying the relativistic kinematics described previously yields
$$E_\mu = \frac{p_\mu\cdot(p_\mu + p_\nu)}{m_K} = \frac{m_\mu^2 + p_\mu\cdot p_\nu}{m_K}, \qquad E_\nu = \frac{p_\nu\cdot(p_\mu + p_\nu)}{m_K} = \frac{p_\mu\cdot p_\nu}{m_K}.$$
Combining both results we obtain $m_K^2 = m_\mu^2 + 2p_\mu\cdot p_\nu$, so
$$E_\mu = T_\mu + m_\mu = \frac{m_K^2 + m_\mu^2}{2m_K} = 258.4\ \text{MeV},$$
$$E_\nu = T_\nu = \frac{m_K^2 - m_\mu^2}{2m_K} = 235.6\ \text{MeV}.$$
As another example, in the production of a neutral pion by an incident photon according to $\gamma + p \to \pi^0 + p'$ at threshold, the neutral pion and proton are created at rest in the cms. Therefore,
$$s = (p_\gamma + p)^2 = m_p^2 + 2m_pE_\gamma^L = (p_\pi + p')^2 = (m_\pi + m_p)^2,$$
so
$$E_\gamma^L = m_\pi + \frac{m_\pi^2}{2m_p} = 144.7\ \text{MeV}. \qquad\blacksquare$$

Exercises

4.5.1 Two Lorentz transformations are carried out in succession: $v_1$ along the $x$-axis, then $v_2$ along the $y$-axis. Show that the resultant transformation (given by the product of these two successive transformations) cannot be put in the form of a single Lorentz transformation.
Note. The discrepancy corresponds to a rotation.

4.5.2 Rederive the Lorentz transformation, working entirely in the real space $(x^0, x^1, x^2, x^3)$ with $x^0 = x_0 = ct$. Show that the Lorentz transformation may be written $L(\mathbf{v}) = \exp(\rho\sigma)$, with
$$\sigma = \begin{pmatrix} 0 & -\lambda & -\mu & -\nu \\ -\lambda & 0 & 0 & 0 \\ -\mu & 0 & 0 & 0 \\ -\nu & 0 & 0 & 0 \end{pmatrix}$$
and $\lambda$, $\mu$, $\nu$ the direction cosines of the velocity $\mathbf{v}$.

4.5.3 Using the matrix relation, Eq. (4.136), let the rapidity $\rho_1$ relate the Lorentz reference frames $(x'^0, x'^1)$ and $(x^0, x^1)$. Let $\rho_2$ relate $(x''^0, x''^1)$ and $(x'^0, x'^1)$. Finally, let $\rho$ relate $(x''^0, x''^1)$ and $(x^0, x^1)$. From $\rho = \rho_1 + \rho_2$ derive the Einstein velocity addition law
$$v = \frac{v_1 + v_2}{1 + v_1v_2/c^2}.$$

4.6 LORENTZ COVARIANCE OF MAXWELL'S EQUATIONS

If a physical law is to hold for all orientations of our (real) coordinates (that is, to be invariant under rotations), the terms of the equation must be covariant under rotations (Sections 1.2 and 2.6).
This means that we write the physical laws in the mathematical form scalar = scalar, vector = vector, second-rank tensor = second-rank tensor, and so on. Similarly, if a physical law is to hold for all inertial systems, the terms of the equation must be covariant under Lorentz transformations.

Using Minkowski space ($ct = x^0$; $x = x^1$, $y = x^2$, $z = x^3$), we have a four-dimensional space with the metric $g_{\mu\nu}$ (Eq. (4.128), Section 4.5). The Lorentz transformations are linear in space and time in this four-dimensional real space.¹⁹

Here we consider Maxwell's equations,
$$\nabla\times\mathbf{E} = -\frac{\partial\mathbf{B}}{\partial t}, \tag{4.140a}$$
$$\nabla\times\mathbf{H} = \frac{\partial\mathbf{D}}{\partial t} + \rho\mathbf{v}, \tag{4.140b}$$
$$\nabla\cdot\mathbf{D} = \rho, \tag{4.140c}$$
$$\nabla\cdot\mathbf{B} = 0, \tag{4.140d}$$
and the relations
$$\mathbf{D} = \varepsilon_0\mathbf{E}, \qquad \mathbf{B} = \mu_0\mathbf{H}. \tag{4.141}$$
The symbols have their usual meanings as given in Section 1.9. For simplicity we assume vacuum ($\varepsilon = \varepsilon_0$, $\mu = \mu_0$).

We assume that Maxwell's equations hold in all inertial systems; that is, Maxwell's equations are consistent with special relativity. (The covariance of Maxwell's equations under Lorentz transformations was actually shown by Lorentz and Poincaré before Einstein proposed his theory of special relativity.) Our immediate goal is to rewrite Maxwell's equations as tensor equations in Minkowski space. This will make the Lorentz covariance explicit, or manifest.

In terms of the scalar potential, $\varphi$, and the magnetic vector potential, $\mathbf{A}$, we may solve²⁰ Eq. (4.140d) and then (4.140a) by
$$\mathbf{B} = \nabla\times\mathbf{A}, \qquad \mathbf{E} = -\frac{\partial\mathbf{A}}{\partial t} - \nabla\varphi. \tag{4.142}$$
Equation (4.142) specifies the curl of $\mathbf{A}$; the divergence of $\mathbf{A}$ is still undefined (compare Section 1.16). We may, and for future convenience we do, impose a further gauge restriction on the vector potential $\mathbf{A}$:
$$\nabla\cdot\mathbf{A} + \varepsilon_0\mu_0\frac{\partial\varphi}{\partial t} = 0. \tag{4.143}$$

¹⁹ A group theoretic derivation of the Lorentz transformation in Minkowski space appears in Section 4.5. See also H. Goldstein, Classical Mechanics. Cambridge, MA: Addison-Wesley (1951), Chapter 6. The metric equation $x_0^2 - \mathbf{x}^2 = 0$, independent of reference frame, leads to the Lorentz transformations.
This is the Lorentz gauge relation. It will serve the purpose of uncoupling the differential equations for $\mathbf{A}$ and $\varphi$ that follow. The potentials $\mathbf{A}$ and $\varphi$ are not yet completely fixed. The freedom remaining is the topic of Exercise 4.6.4.

²⁰ Compare Section 1.13, especially Exercise 1.13.10.

Now we rewrite Maxwell's equations in terms of the potentials $\mathbf{A}$ and $\varphi$. From Eqs. (4.140c) for $\nabla\cdot\mathbf{D}$, (4.141), and (4.142),
$$\nabla^2\varphi + \nabla\cdot\frac{\partial\mathbf{A}}{\partial t} = -\frac{\rho}{\varepsilon_0}, \tag{4.144}$$
whereas Eqs. (4.140b) for $\nabla\times\mathbf{H}$ and (4.142) and Eq. (1.86c) of Chapter 1 yield
$$\frac{1}{\varepsilon_0\mu_0}\bigl(\nabla\nabla\cdot\mathbf{A} - \nabla^2\mathbf{A}\bigr) + \frac{\partial^2\mathbf{A}}{\partial t^2} + \nabla\frac{\partial\varphi}{\partial t} = \frac{\rho\mathbf{v}}{\varepsilon_0}. \tag{4.145}$$
Using the Lorentz relation, Eq. (4.143), and the relation $\varepsilon_0\mu_0 = 1/c^2$, we obtain
$$\left(\nabla^2 - \frac{1}{c^2}\frac{\partial^2}{\partial t^2}\right)\mathbf{A} = -\mu_0\rho\mathbf{v}, \qquad \left(\nabla^2 - \frac{1}{c^2}\frac{\partial^2}{\partial t^2}\right)\varphi = -\frac{\rho}{\varepsilon_0}. \tag{4.146}$$
Now, the differential operator (see also Exercise 2.7.3)
$$\nabla^2 - \frac{1}{c^2}\frac{\partial^2}{\partial t^2} \equiv -\partial^2 \equiv -\partial^\mu\partial_\mu$$
is a four-dimensional Laplacian, usually called the d'Alembertian and also sometimes denoted by $\Box$. It is a scalar by construction (see Exercise 2.7.3).

For convenience we define
$$A^1 \equiv \frac{A_x}{\mu_0 c} = c\varepsilon_0A_x, \qquad A^2 \equiv \frac{A_y}{\mu_0 c} = c\varepsilon_0A_y, \qquad A^3 \equiv \frac{A_z}{\mu_0 c} = c\varepsilon_0A_z, \qquad A^0 \equiv \varepsilon_0\varphi = A_0. \tag{4.147}$$
If we further define a four-vector current density
$$\frac{\rho v_x}{c} \equiv j^1, \qquad \frac{\rho v_y}{c} \equiv j^2, \qquad \frac{\rho v_z}{c} \equiv j^3, \qquad \rho \equiv j_0 = j^0, \tag{4.148}$$
then Eq. (4.146) may be written in the form
$$\partial^2A^\mu = j^\mu. \tag{4.149}$$
The wave equation (4.149) looks like a four-vector equation, but looks do not constitute proof. To prove that it is a four-vector equation, we start by investigating the transformation properties of the generalized current $j^\mu$.

Since an electric charge element $de$ is an invariant quantity, we have
$$de = \rho\,dx^1\,dx^2\,dx^3, \quad\text{invariant.} \tag{4.150}$$
We saw in Section 2.9 that the four-dimensional volume element $dx^0\,dx^1\,dx^2\,dx^3$ was also invariant, a pseudoscalar. Comparing this result, Eq. (2.106), with Eq.
(4.150), we see that the charge density $\rho$ must transform the same way as $dx^0$, the zeroth component of a four-dimensional vector $dx^\lambda$. We put $\rho = j^0$, with $j^0$ now established as the zeroth component of a four-vector. The other parts of Eq. (4.148) may be expanded as
$$j^1 = \frac{\rho v_x}{c} = \frac{\rho}{c}\frac{dx^1}{dt} = j^0\frac{dx^1}{dx^0}. \tag{4.151}$$
Since we have just shown that $j^0$ transforms as $dx^0$, this means that $j^1$ transforms as $dx^1$. With similar results for $j^2$ and $j^3$, we have $j^\lambda$ transforming as $dx^\lambda$, proving that $j^\lambda$ is a four-vector in Minkowski space.

Equation (4.149), which follows directly from Maxwell's equations, Eqs. (4.140), is assumed to hold in all Cartesian systems (all Lorentz frames). Then, by the quotient rule, Section 2.8, $A^\mu$ is also a vector and Eq. (4.149) is a legitimate tensor equation.

Now, working backward, Eq. (4.142) may be written
$$\varepsilon_0E_j = -\frac{\partial A^j}{\partial x^0} - \frac{\partial A^0}{\partial x^j}, \qquad j = 1, 2, 3,$$
$$\frac{1}{\mu_0 c}B_i = \frac{\partial A^k}{\partial x^j} - \frac{\partial A^j}{\partial x^k}, \qquad (i, j, k) = \text{cyclic }(1, 2, 3). \tag{4.152}$$
We define a new tensor,
$$\partial^\mu A^\lambda - \partial^\lambda A^\mu = \frac{\partial A^\lambda}{\partial x_\mu} - \frac{\partial A^\mu}{\partial x_\lambda} \equiv F^{\mu\lambda} = -F^{\lambda\mu} \qquad (\mu, \lambda = 0, 1, 2, 3),$$
an antisymmetric second-rank tensor, since $A^\lambda$ is a vector. Written out explicitly,
$$\frac{F^{\mu\lambda}}{\varepsilon_0} = \begin{pmatrix} 0 & -E_x & -E_y & -E_z \\ E_x & 0 & -cB_z & cB_y \\ E_y & cB_z & 0 & -cB_x \\ E_z & -cB_y & cB_x & 0 \end{pmatrix}, \qquad \frac{F_{\mu\lambda}}{\varepsilon_0} = \begin{pmatrix} 0 & E_x & E_y & E_z \\ -E_x & 0 & -cB_z & cB_y \\ -E_y & cB_z & 0 & -cB_x \\ -E_z & -cB_y & cB_x & 0 \end{pmatrix}. \tag{4.153}$$
Notice that in our four-dimensional Minkowski space $\mathbf{E}$ and $\mathbf{B}$ are no longer vectors but together form a second-rank tensor. With this tensor we may write the two nonhomogeneous Maxwell equations ((4.140b) and (4.140c)) combined as a tensor equation,
$$\frac{\partial F^{\lambda\mu}}{\partial x^\mu} = j^\lambda. \tag{4.154}$$
The left-hand side of Eq. (4.154) is a four-dimensional divergence of a tensor and therefore a vector. This, of course, is equivalent to contracting a third-rank tensor $\partial F^{\lambda\mu}/\partial x^\nu$ (compare Exercises 2.7.1 and 2.7.2).
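Equations (4.142), (4.152), and (4.153) fit together consistently, and this can be spot-checked by machine. The sketch below (Python, with $c = \varepsilon_0 = 1$ and an invented smooth test potential — both our simplifications, not the text's) builds $F^{\mu\lambda} = \partial^\mu A^\lambda - \partial^\lambda A^\mu$ by central differences and compares it with $\mathbf{E} = -\partial\mathbf{A}/\partial t - \nabla\varphi$ and $\mathbf{B} = \nabla\times\mathbf{A}$ computed directly.

```python
import math

# Invented test potentials, in units with c = eps0 = 1.
def phi(t, x, y, z): return math.sin(x)*math.cos(t) + y*z
def Avec(t, x, y, z): return (t*y, math.cos(x)*z, math.sin(t + y))

h = 1e-5
def d(f, args, i):
    """Central difference of f with respect to argument i."""
    a, b = list(args), list(args)
    a[i] += h; b[i] -= h
    return (f(*a) - f(*b)) / (2*h)

P = (0.4, 1.1, -0.7, 0.2)            # an arbitrary event (t, x, y, z)
A4 = lambda mu: (lambda *a: phi(*a)) if mu == 0 else (lambda *a: Avec(*a)[mu-1])
# Contravariant gradient: partial^mu = (d/dt, -grad) when c = 1.
dup = lambda mu, f: d(f, P, mu) * (1 if mu == 0 else -1)

F = [[dup(m, A4(l)) - dup(l, A4(m)) for l in range(4)] for m in range(4)]

# E and B from Eq. (4.142), via the same finite differences:
E = [-d(lambda *a: Avec(*a)[i], P, 0) - d(phi, P, i+1) for i in range(3)]
B = [d(lambda *a: Avec(*a)[2], P, 2) - d(lambda *a: Avec(*a)[1], P, 3),
     d(lambda *a: Avec(*a)[0], P, 3) - d(lambda *a: Avec(*a)[2], P, 1),
     d(lambda *a: Avec(*a)[1], P, 1) - d(lambda *a: Avec(*a)[0], P, 2)]

tol = 1e-8
assert all(abs(F[0][i+1] + E[i]) < tol for i in range(3))  # F^{0i} = -E_i
assert abs(F[1][2] + B[2]) < tol                           # F^{12} = -B_z
assert abs(F[2][3] + B[0]) < tol                           # F^{23} = -B_x
assert abs(F[3][1] + B[1]) < tol                           # F^{31} = -B_y
```

The recovered entries match the pattern of Eq. (4.153) (with $c = \varepsilon_0 = 1$), and the antisymmetry $F^{\mu\lambda} = -F^{\lambda\mu}$ is automatic in the construction.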
The two homogeneous Maxwell equations — (4.140a) for $\nabla\times\mathbf{E}$ and (4.140d) for $\nabla\cdot\mathbf{B}$ — may be expressed in the tensor form
$$\frac{\partial F_{23}}{\partial x_1} + \frac{\partial F_{31}}{\partial x_2} + \frac{\partial F_{12}}{\partial x_3} = 0 \tag{4.155}$$
for Eq. (4.140d) and three equations of the form
$$-\frac{\partial F_{30}}{\partial x_2} - \frac{\partial F_{02}}{\partial x_3} + \frac{\partial F_{23}}{\partial x_0} = 0 \tag{4.156}$$
for Eq. (4.140a). (A second equation permutes 120, a third permutes 130.) Since
$$\partial^\lambda F^{\mu\nu} = \frac{\partial F^{\mu\nu}}{\partial x_\lambda} \equiv t^{\lambda\mu\nu}$$
is a tensor (of third rank), Eqs. (4.140a) and (4.140d) are given by the tensor equation
$$t^{\lambda\mu\nu} + t^{\nu\lambda\mu} + t^{\mu\nu\lambda} = 0. \tag{4.157}$$
From Eqs. (4.155) and (4.156) you will understand that the indices $\lambda$, $\mu$, and $\nu$ are supposed to be different. Actually Eq. (4.157) automatically reduces to $0 = 0$ if any two indices coincide. An alternate form of Eq. (4.157) appears in Exercise 4.6.14.

Lorentz Transformation of E and B

The construction of the tensor equations ((4.154) and (4.157)) completes our initial goal of rewriting Maxwell's equations in tensor form.²¹ Now we exploit the tensor properties of our four-vectors and the tensor $F_{\mu\nu}$.

For the Lorentz transformation corresponding to motion along the $z$ ($x^3$)-axis with velocity $v$, the "direction cosines" are given by²²
$$x'^0 = \gamma\bigl(x^0 - \beta x^3\bigr), \qquad x'^3 = \gamma\bigl(x^3 - \beta x^0\bigr), \tag{4.158}$$
where
$$\beta = \frac{v}{c} \qquad\text{and}\qquad \gamma = \bigl(1 - \beta^2\bigr)^{-1/2}. \tag{4.159}$$
Using the tensor transformation properties, we may calculate the electric and magnetic fields in the moving system in terms of the values in the original reference frame. From Eqs. (2.66), (4.153), and (4.158) we obtain
$$E'_x = \frac{1}{\sqrt{1-\beta^2}}\bigl(E_x - vB_y\bigr), \qquad E'_y = \frac{1}{\sqrt{1-\beta^2}}\bigl(E_y + vB_x\bigr), \qquad E'_z = E_z \tag{4.160}$$
and
$$B'_x = \frac{1}{\sqrt{1-\beta^2}}\left(B_x + \frac{v}{c^2}E_y\right), \qquad B'_y = \frac{1}{\sqrt{1-\beta^2}}\left(B_y - \frac{v}{c^2}E_x\right), \qquad B'_z = B_z. \tag{4.161}$$
This coupling of $\mathbf{E}$ and $\mathbf{B}$ is to be expected. Consider, for instance, the case of zero electric field in the unprimed system,
$$E_x = E_y = E_z = 0.$$

²¹ Modern theories of quantum electrodynamics and elementary particles are often written in this "manifestly covariant" form to guarantee consistency with special relativity.
Conversely, the insistence on such tensor form has been a useful guide in the construction of these theories.

²² A group theoretic derivation of the Lorentz transformation appears in Section 4.5. See also Goldstein, loc. cit., Chapter 6.

Clearly, there will be no force on a stationary charged particle. When the particle is in motion with a small velocity $\mathbf{v}$ along the $z$-axis,²³ an observer on the particle sees fields (exerting a force on his charged particle) given by
$$E'_x = -vB_y, \qquad E'_y = vB_x,$$
where $\mathbf{B}$ is a magnetic induction field in the unprimed system. These equations may be put in vector form,
$$\mathbf{E}' = \mathbf{v}\times\mathbf{B} \qquad\text{or}\qquad \mathbf{F} = q\mathbf{v}\times\mathbf{B}, \tag{4.162}$$
which is usually taken as the operational definition of the magnetic induction $\mathbf{B}$.

Electromagnetic Invariants

Finally, the tensor (or vector) properties allow us to construct a multitude of invariant quantities. One of the more important is the scalar product of the two four-vectors $A^\lambda$ and $j_\lambda$. We have
$$A^\lambda j_\lambda = -c\varepsilon_0A_x\frac{\rho v_x}{c} - c\varepsilon_0A_y\frac{\rho v_y}{c} - c\varepsilon_0A_z\frac{\rho v_z}{c} + \varepsilon_0\varphi\rho = \varepsilon_0(\rho\varphi - \mathbf{A}\cdot\mathbf{J}), \quad\text{invariant,} \tag{4.163}$$
with $\mathbf{A}$ the usual magnetic vector potential and $\mathbf{J}$ the ordinary current density. The first term, $\rho\varphi$, is the ordinary static electric coupling, with dimensions of energy per unit volume. Hence our newly constructed scalar invariant is an energy density. The dynamic interaction of field and current is given by the product $\mathbf{A}\cdot\mathbf{J}$. This invariant $A^\lambda j_\lambda$ appears in the electromagnetic Lagrangians of Exercises 17.3.6 and 17.5.1. Other possible electromagnetic invariants appear in Exercises 4.6.9 and 4.6.11.

The Lorentz group is the symmetry group of electrodynamics, of the electroweak gauge theory, and of the strong interactions described by quantum chromodynamics: It governs special relativity. The metric of Minkowski space–time is Lorentz invariant and expresses the propagation of light; that is, the velocity of light is the same in all inertial frames.
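The transformation laws (4.160)–(4.161) and the invariants of Exercises 4.6.9 and 4.6.11 can be verified by brute force: transform the tensor of Eq. (4.153) as $F' = \mathsf{L}F\mathsf{L}^T$ and read off the primed fields. A sketch (Python, with $c = 1$ and the overall $\varepsilon_0$ factor dropped — our simplifications, not the text's):

```python
import math

def F_tensor(E, B):
    """F^{mu nu} of Eq. (4.153) with c = 1 and eps0 dropped."""
    Ex, Ey, Ez = E; Bx, By, Bz = B
    return [[0, -Ex, -Ey, -Ez],
            [Ex, 0, -Bz, By],
            [Ey, Bz, 0, -Bx],
            [Ez, -By, Bx, 0]]

def boost_z(beta):
    """Boost of Eq. (4.158), ordering (x0, x1, x2, x3)."""
    g = 1.0 / math.sqrt(1 - beta**2)
    return [[g, 0, 0, -g*beta],
            [0, 1, 0, 0],
            [0, 0, 1, 0],
            [-g*beta, 0, 0, g]]

def transform(L, F):
    # F'^{mu nu} = L^mu_alpha L^nu_beta F^{alpha beta}
    return [[sum(L[m][a]*L[n][b]*F[a][b] for a in range(4) for b in range(4))
             for n in range(4)] for m in range(4)]

E, B, beta = (1.0, -0.4, 0.7), (0.3, 0.8, -0.5), 0.6
g = 1.0 / math.sqrt(1 - beta**2)
Fp = transform(boost_z(beta), F_tensor(E, B))

# Eqs. (4.160)-(4.161) with c = 1:
assert abs(-Fp[0][1] - g*(E[0] - beta*B[1])) < 1e-9     # E'_x
assert abs(-Fp[0][2] - g*(E[1] + beta*B[0])) < 1e-9     # E'_y
assert abs(-Fp[0][3] - E[2]) < 1e-9                     # E'_z unchanged
assert abs(Fp[3][2] - g*(B[0] + beta*E[1])) < 1e-9      # B'_x
assert abs(Fp[1][3] - g*(B[1] - beta*E[0])) < 1e-9      # B'_y

# Lorentz invariants (Exercises 4.6.9 and 4.6.11), read off F':
Ep = (-Fp[0][1], -Fp[0][2], -Fp[0][3])
Bp = (Fp[3][2], Fp[1][3], Fp[2][1])
dot = lambda u, v: sum(a*b for a, b in zip(u, v))
assert abs((dot(Bp, Bp) - dot(Ep, Ep)) - (dot(B, B) - dot(E, E))) < 1e-9
assert abs(dot(Ep, Bp) - dot(E, B)) < 1e-9
```

The two final assertions are the numerical counterparts of $c^2\mathbf{B}^2 - \mathbf{E}^2$ and $\mathbf{B}\cdot\mathbf{E}$ being Lorentz scalars.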
Newton's equations of motion are straightforward to extend to special relativity. The kinematics of two-body collisions are important applications of vector algebra in Minkowski space–time.

²³ If the velocity is not small, a relativistic transformation of force is needed.

Exercises

4.6.1 (a) Show that every four-vector in Minkowski space may be decomposed into an ordinary three-space vector and a three-space scalar. Examples: $(ct, \mathbf{r})$, $(\rho, \rho\mathbf{v}/c)$, $(\varepsilon_0\varphi, c\varepsilon_0\mathbf{A})$, $(E/c, \mathbf{p})$, $(\omega/c, \mathbf{k})$.
Hint. Consider a rotation of the three-space coordinates with time fixed.
(b) Show that the converse of (a) is not true — every three-vector plus scalar does not form a Minkowski four-vector.

4.6.2 (a) Show that
$$\partial^\mu j_\mu = \partial\cdot j = \frac{\partial j_\mu}{\partial x_\mu} = 0.$$
(b) Show how the previous tensor equation may be interpreted as a statement of continuity of charge and current in ordinary three-dimensional space and time.
(c) If this equation is known to hold in all Lorentz reference frames, why can we not conclude that $j_\mu$ is a vector?
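For part (a) of Exercise 4.6.2, the key point is that the symmetric pair of derivatives $\partial_\lambda\partial_\mu$ (second partials commute) contracted with the antisymmetric $F^{\lambda\mu}$ of Eq. (4.154) vanishes identically. A minimal numerical illustration of that contraction (Python; the random matrix simply stands in for the commuting second derivatives):

```python
import random

# Contracting a symmetric S (like d_lambda d_mu) with an antisymmetric F
# gives zero identically -- which is why Eq. (4.154) implies d_mu j^mu = 0.
random.seed(1)
M = [[random.uniform(-1, 1) for _ in range(4)] for _ in range(4)]
S = [[M[i][j] + M[j][i] for j in range(4)] for i in range(4)]   # symmetric
F = [[M[i][j] - M[j][i] for j in range(4)] for i in range(4)]   # antisymmetric

total = sum(S[l][m] * F[l][m] for l in range(4) for m in range(4))
assert abs(total) < 1e-12
```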
Show that the explicit space–time forms are
$$\frac{dE}{dt} = q\mathbf{v}\cdot\mathbf{E}, \qquad \frac{d\mathbf{p}}{dt} = q(\mathbf{E} + \mathbf{v}\times\mathbf{B}).$$

4.6.6 From the Lorentz transformation matrix elements (Eq. (4.158)) derive the Einstein velocity addition law
$$u' = \frac{u - v}{1 - uv/c^2} \qquad\text{or}\qquad u = \frac{u' + v}{1 + u'v/c^2},$$
where $u = c\,dx^3/dx^0$ and $u' = c\,dx'^3/dx'^0$.
Hint. If $L_{12}(v)$ is the matrix transforming system 1 into system 2, $L_{23}(u')$ the matrix transforming system 2 into system 3, and $L_{13}(u)$ the matrix transforming system 1 directly into system 3, then $L_{13}(u) = L_{23}(u')L_{12}(v)$. From this matrix relation extract the Einstein velocity addition law.

4.6.7 The dual of a four-dimensional second-rank tensor $B$ may be defined by $\tilde{B}$, where the elements of the dual tensor are given by
$$\tilde{B}^{ij} = \frac{1}{2!}\varepsilon^{ijkl}B_{kl}.$$
Show that $\tilde{B}$ transforms as
(a) a second-rank tensor under rotations,
(b) a pseudotensor under inversions.
Note. The tilde here does not mean transpose.

4.6.8 Construct $\tilde{F}$, the dual of $F$, where $F$ is the electromagnetic tensor given by Eq. (4.153).
$$\text{ANS.}\quad \tilde{F}^{\mu\nu} = \varepsilon_0\begin{pmatrix} 0 & -cB_x & -cB_y & -cB_z \\ cB_x & 0 & E_z & -E_y \\ cB_y & -E_z & 0 & E_x \\ cB_z & E_y & -E_x & 0 \end{pmatrix}.$$
This corresponds to $c\mathbf{B} \to -\mathbf{E}$, $\mathbf{E} \to c\mathbf{B}$. This transformation, sometimes called a dual transformation, leaves Maxwell's equations in vacuum ($\rho = 0$) invariant.

4.6.9 Because the quadruple contraction of a fourth-rank pseudotensor and two second-rank tensors, $\varepsilon_{\mu\lambda\nu\sigma}F^{\mu\lambda}F^{\nu\sigma}$, is clearly a pseudoscalar, evaluate it.
$$\text{ANS.}\quad -8\varepsilon_0^2c\,\mathbf{B}\cdot\mathbf{E}.$$

4.6.10 (a) If an electromagnetic field is purely electric (or purely magnetic) in one particular Lorentz frame, show that $\mathbf{E}$ and $\mathbf{B}$ will be orthogonal in other Lorentz reference systems.
(b) Conversely, if $\mathbf{E}$ and $\mathbf{B}$ are orthogonal in one particular Lorentz frame, there exists a Lorentz reference system in which $\mathbf{E}$ (or $\mathbf{B}$) vanishes. Find that reference system.

4.6.11 Show that $c^2\mathbf{B}^2 - \mathbf{E}^2$ is a Lorentz scalar.

4.6.12 Since $(dx^0, dx^1, dx^2, dx^3)$ is a four-vector, $dx_\mu\,dx^\mu$ is a scalar.
Evaluate this scalar for a moving particle in two different coordinate systems: (a) a coordinate system fixed relative to you (lab system), and (b) a coordinate system moving with the particle (velocity $v$ relative to you). With the time increment labeled $d\tau$ in the particle system and $dt$ in the lab system, show that
$$d\tau = dt\sqrt{1 - v^2/c^2}.$$
$\tau$ is the proper time of the particle, a Lorentz invariant quantity.

4.6.13 Expand the scalar expression
$$-\frac{1}{4\varepsilon_0}F_{\mu\nu}F^{\mu\nu} + \frac{1}{\varepsilon_0}j_\mu A^\mu$$
in terms of the fields and potentials. The resulting expression is the Lagrangian density used in Exercise 17.5.1.

4.6.14 Show that Eq. (4.157) may be written
$$\varepsilon_{\alpha\beta\gamma\delta}\frac{\partial F^{\alpha\beta}}{\partial x_\gamma} = 0.$$

4.7 DISCRETE GROUPS

Here we consider groups with a finite number of elements. In physics, groups usually appear as a set of operations that leave a system unchanged, invariant. This is an expression of symmetry. Indeed, a symmetry may be defined as the invariance of the Hamiltonian of a system under a group of transformations. Symmetry in this sense is important in classical mechanics, but it becomes even more important and more profound in quantum mechanics. In this section we investigate the symmetry properties of sets of objects (atoms in a molecule or crystal). This provides additional illustrations of the group concepts of Section 4.1 and leads directly to dihedral groups. The dihedral groups in turn open up the study of the 32 crystallographic point groups and 230 space groups that are of such importance in crystallography and solid-state physics. It might be noted that it was through the study of crystal symmetries that the concepts of symmetry and group theory entered physics. In physics, the abstract group conditions often take on direct physical meaning in terms of transformations of vectors, spinors, and tensors.

As a simple, but not trivial, example of a finite group, consider the set $1, a, b, c$ that combine according to the group multiplication table²⁴ (see Fig. 4.10).
Clearly, the four conditions of the definition of "group" are satisfied. The elements $a$, $b$, $c$, and 1 are abstract mathematical entities, completely unrestricted except for the multiplication table of Fig. 4.10.

Now, for a specific representation of these group elements, let
$$1 \to 1, \qquad a \to i, \qquad b \to -1, \qquad c \to -i, \tag{4.164}$$
combining by ordinary multiplication.

FIGURE 4.10 Group multiplication table.

Again, the four group conditions are satisfied, and these four elements form a group. We label this group $C_4$. Since the multiplication of the group elements is commutative, the group is labeled commutative, or abelian. Our group is also a cyclic group, in that the elements may be written as successive powers of one element, in this case $i^n$, $n = 0, 1, 2, 3$. Note that in writing out Eq. (4.164) we have selected a specific faithful representation for this group of four objects, $C_4$.

We recognize that the group elements $1, i, -1, -i$ may be interpreted as successive $90^\circ$ rotations in the complex plane. Then, from Eq. (3.74), we create the set of four $2\times 2$ matrices (replacing $\varphi$ by $-\varphi$ in Eq. (3.74) to rotate a vector rather than rotate the coordinates):
$$\mathsf{R}(\varphi) = \begin{pmatrix} \cos\varphi & -\sin\varphi \\ \sin\varphi & \cos\varphi \end{pmatrix},$$
and for $\varphi = 0$, $\pi/2$, $\pi$, and $3\pi/2$ we have
$$\mathsf{1} = \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix}, \qquad \mathsf{A} = \begin{pmatrix} 0 & -1 \\ 1 & 0 \end{pmatrix}, \qquad \mathsf{B} = \begin{pmatrix} -1 & 0 \\ 0 & -1 \end{pmatrix}, \qquad \mathsf{C} = \begin{pmatrix} 0 & 1 \\ -1 & 0 \end{pmatrix}. \tag{4.165}$$
This set of four matrices forms a group, with the law of combination being matrix multiplication. Here is a second faithful representation. By matrix multiplication one verifies that this representation is also abelian and cyclic. Clearly, there is a one-to-one correspondence of the two representations:
$$1 \leftrightarrow 1 \leftrightarrow \mathsf{1}, \qquad a \leftrightarrow i \leftrightarrow \mathsf{A}, \qquad b \leftrightarrow -1 \leftrightarrow \mathsf{B}, \qquad c \leftrightarrow -i \leftrightarrow \mathsf{C}. \tag{4.166}$$
In the group $C_4$ the two representations $(1, i, -1, -i)$ and $(\mathsf{1}, \mathsf{A}, \mathsf{B}, \mathsf{C})$ are isomorphic.

²⁴ The order of the factors is row–column: $ab = c$ in the indicated previous example.
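The correspondence (4.166) can be confirmed by machine. The sketch below (Python) stores both representations, multiplies all pairs, and checks that the two group tables agree element by element, that the group is abelian, and that it is cyclic:

```python
# Representation 1: complex numbers; Representation 2: the matrices of Eq. (4.165).
rep1 = {'1': 1, 'a': 1j, 'b': -1, 'c': -1j}
rep2 = {'1': ((1, 0), (0, 1)),
        'a': ((0, -1), (1, 0)),      # A = R(pi/2)
        'b': ((-1, 0), (0, -1)),     # B = R(pi)
        'c': ((0, 1), (-1, 0))}      # C = R(3pi/2)

def mul(M, N):
    return tuple(tuple(sum(M[i][k]*N[k][j] for k in range(2))
                       for j in range(2)) for i in range(2))

inv1 = {v: k for k, v in rep1.items()}
inv2 = {v: k for k, v in rep2.items()}

for x in rep1:
    for y in rep1:
        # products agree under the correspondence: same group table in both reps
        assert inv1[rep1[x]*rep1[y]] == inv2[mul(rep2[x], rep2[y])]
        # abelian
        assert rep1[x]*rep1[y] == rep1[y]*rep1[x]

# cyclic: every element is a power of a (that is, of i)
assert {1j**n for n in range(4)} == set(rep1.values())
```

The same machinery immediately shows that no relabeling of the vierergruppe table can match this one, since the vierergruppe has no element of order 4.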
In contrast to this, there is no such correspondence between either of these representations of group $C_4$ and another group of four objects, the vierergruppe (Exercise 3.2.7). The vierergruppe has the multiplication table shown in Table 4.3.

Table 4.3
       1    V1   V2   V3
1      1    V1   V2   V3
V1     V1   1    V3   V2
V2     V2   V3   1    V1
V3     V3   V2   V1   1

Confirming the lack of correspondence between the group represented by $(1, i, -1, -i)$ or the matrices $(\mathsf{1}, \mathsf{A}, \mathsf{B}, \mathsf{C})$ of Eq. (4.165), note that although the vierergruppe is abelian, it is not cyclic. The cyclic group $C_4$ and the vierergruppe are not isomorphic.

Classes and Character

Consider a group element $x$ transformed into a group element $y$ by a similarity transform with respect to $g_i$, an element of the group:
$$g_ixg_i^{-1} = y. \tag{4.167}$$
The group element $y$ is conjugate to $x$. A class is a set of mutually conjugate group elements. In general, this set of elements forming a class does not satisfy the group postulates and is not a group. Indeed, the unit element 1, which is always in a class by itself, is the only class that is also a subgroup. All members of a given class are equivalent, in the sense that any one element is a similarity transform of any other element. Clearly, if a group is abelian, every element is a class by itself. We find that

1. Every element of the original group belongs to one and only one class.
2. The number of elements in a class is a factor of the order of the group.

We get a possible physical interpretation of the concept of class by noting that $y$ is a similarity transform of $x$. If $g_i$ represents a rotation of the coordinate system, then $y$ is the same operation as $x$ but relative to the new, rotated coordinates. In Section 3.3 we saw that a real matrix transforms under rotation of the coordinates by an orthogonal similarity transformation. Depending on the choice of reference frame, essentially the same matrix may take on an infinity of different forms.
Likewise, our group representations may be put in an infinity of different forms by using unitary transformations. But each such transformed representation is isomorphic with the original. From Exercise 3.3.9 the trace of each element (each matrix of our representation) is invariant under unitary transformations. Just because it is invariant, the trace (relabeled the character) assumes a role of some importance in group theory, particularly in applications to solid-state physics. Clearly, all members of a given class (in a given representation) have the same character. Elements of different classes may have the same character, but elements with different characters cannot be in the same class.

The concept of class is important (1) because of the trace or character and (2) because the number of nonequivalent irreducible representations of a group is equal to the number of classes.

Subgroups and Cosets

Frequently a subset of the group elements (including the unit element $I$) will by itself satisfy the four group requirements and therefore is a group. Such a subset is called a subgroup. Every group has two trivial subgroups: the unit element alone and the group itself. The elements 1 and $b$ of the four-element group $C_4$ discussed earlier form a nontrivial subgroup. In Section 4.1 we consider SO(3), the (continuous) group of all rotations in ordinary space. The rotations about any single axis form a subgroup of SO(3). Numerous other examples of subgroups appear in the following sections.

Consider a subgroup $H$ with elements $h_i$ and a group element $x$ not in $H$. Then $xh_i$ and $h_ix$ are not in subgroup $H$. The sets generated by
$$xh_i, \quad i = 1, 2, \ldots \qquad\text{and}\qquad h_ix, \quad i = 1, 2, \ldots$$
are called cosets, respectively the left and right cosets of subgroup $H$ with respect to $x$. It can be shown (assume the contrary and prove a contradiction) that the coset of a subgroup has the same number of distinct elements as the subgroup.
Extending this result, we may express the original group $G$ as the sum of $H$ and cosets:
$$G = H + x_1H + x_2H + \cdots.$$
Then the order of any subgroup is a divisor of the order of the group. It is this result that makes the concept of coset significant. In the next section the six-element group $D_3$ (order 6) has subgroups of order 1, 2, and 3. $D_3$ cannot (and does not) have subgroups of order 4 or 5.

The similarity transform of a subgroup $H$ by a fixed group element $x$ not in $H$, $xHx^{-1}$, yields a subgroup (Exercise 4.7.8). If this new subgroup is identical with $H$ for all $x$, that is,
$$xHx^{-1} = H,$$
then $H$ is called an invariant, normal, or self-conjugate subgroup. Such subgroups are involved in the analysis of multiplets of atomic and nuclear spectra and the particles discussed in Section 4.2. All subgroups of a commutative (abelian) group are automatically invariant.

Two Objects — Twofold Symmetry Axis

Consider first the two-dimensional system of two identical atoms in the $xy$-plane at $(1, 0)$ and $(-1, 0)$, Fig. 4.11. What rotations²⁵ can be carried out (keeping both atoms in the $xy$-plane) that will leave this system invariant? The first candidate is, of course, the unit operator 1. A rotation of $\pi$ radians about the $z$-axis completes the list. So we have a rather uninteresting group of two members $(1, -1)$. The $z$-axis is labeled a twofold symmetry axis — corresponding to the two rotation angles, 0 and $\pi$, that leave the system invariant.

Our system becomes more interesting in three dimensions. Now imagine a molecule (or part of a crystal) with atoms of element $X$ at $\pm a$ on the $x$-axis, atoms of element $Y$ at $\pm b$ on the $y$-axis, and atoms of element $Z$ at $\pm c$ on the $z$-axis, as shown in Fig. 4.12. Clearly, each axis is now a twofold symmetry axis. Using $\mathsf{R}_x(\pi)$ to designate a rotation of $\pi$ radians about the $x$-axis, we may

²⁵ Here we deliberately exclude reflections and inversions. They must be brought in to develop the full set of 32 crystallographic point groups.
FIGURE 4.11 Diatomic molecules H₂, N₂, O₂, Cl₂.

FIGURE 4.12 D₂ symmetry.

set up a matrix representation of the rotations as in Section 3.3:
$$\mathsf{R}_x(\pi) = \begin{pmatrix} 1 & 0 & 0 \\ 0 & -1 & 0 \\ 0 & 0 & -1 \end{pmatrix}, \qquad \mathsf{R}_y(\pi) = \begin{pmatrix} -1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & -1 \end{pmatrix},$$
$$\mathsf{R}_z(\pi) = \begin{pmatrix} -1 & 0 & 0 \\ 0 & -1 & 0 \\ 0 & 0 & 1 \end{pmatrix}, \qquad \mathsf{1} = \begin{pmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{pmatrix}. \tag{4.168}$$
These four elements $[\mathsf{1}, \mathsf{R}_x(\pi), \mathsf{R}_y(\pi), \mathsf{R}_z(\pi)]$ form an abelian group, with the group multiplication table shown in Table 4.4.

Table 4.4
         1     Rx(π)  Ry(π)  Rz(π)
1        1     Rx     Ry     Rz
Rx(π)    Rx    1      Rz     Ry
Ry(π)    Ry    Rz     1      Rx
Rz(π)    Rz    Ry     Rx     1

The products shown in Table 4.4 can be obtained in either of two distinct ways: (1) We may analyze the operations themselves — a rotation of $\pi$ about the $x$-axis followed by a rotation of $\pi$ about the $y$-axis is equivalent to a rotation of $\pi$ about the $z$-axis: $\mathsf{R}_y(\pi)\mathsf{R}_x(\pi) = \mathsf{R}_z(\pi)$. (2) Alternatively, once a faithful representation is established, we can obtain the products by matrix multiplication. This is where the power of mathematics is shown — when the system is too complex for a direct physical interpretation.

Comparison with Exercises 3.2.7, 4.7.2, and 4.7.3 shows that this group is the vierergruppe. The matrices of Eq. (4.168) are isomorphic with those of Exercise 3.2.7. Also, they are reducible, being diagonal. The subgroups are $(\mathsf{1}, \mathsf{R}_x)$, $(\mathsf{1}, \mathsf{R}_y)$, and $(\mathsf{1}, \mathsf{R}_z)$. They are invariant. It should be noted that a rotation of $\pi$ about the $y$-axis followed by a rotation of $\pi$ about the $z$-axis is equivalent to a rotation of $\pi$ about the $x$-axis: $\mathsf{R}_z(\pi)\mathsf{R}_y(\pi) = \mathsf{R}_x(\pi)$. In symmetry terms, if $y$ and $z$ are twofold symmetry axes, $x$ is automatically a twofold symmetry axis.

This symmetry group,²⁶ the vierergruppe, is often labeled $D_2$, the $D$ signifying a dihedral group and the subscript 2 signifying a twofold symmetry axis (and no higher symmetry axis).

Three Objects — Threefold Symmetry Axis

Consider now three identical atoms at the vertices of an equilateral triangle, Fig. 4.13.
Rotations of the triangle by 0, 2π/3, and 4π/3 leave the triangle invariant. In matrix form, we have (see footnote 27)

$$
\mathbf{1}=R_z(0)=\begin{pmatrix}1&0\\0&1\end{pmatrix},
$$
$$
A=R_z(2\pi/3)=\begin{pmatrix}\cos 2\pi/3 & -\sin 2\pi/3\\ \sin 2\pi/3 & \cos 2\pi/3\end{pmatrix}
 =\begin{pmatrix}-1/2 & -\sqrt{3}/2\\ \sqrt{3}/2 & -1/2\end{pmatrix},
$$
$$
B=R_z(4\pi/3)=\begin{pmatrix}-1/2 & \sqrt{3}/2\\ -\sqrt{3}/2 & -1/2\end{pmatrix}.
\tag{4.169}
$$

The z-axis is a threefold symmetry axis. (1, A, B) form a cyclic group, a subgroup of the complete six-element group that follows.

In the xy-plane there are three additional axes of symmetry, each atom (vertex) and the geometric center defining an axis. Each of these is a twofold symmetry axis. These rotations may most easily be described within our two-dimensional framework by introducing reflections.

Footnote 26: A symmetry group is a group of symmetry-preserving operations, that is, rotations, reflections, and inversions. A symmetric group is the group of permutations of n distinct objects, of order n!.

Footnote 27: Note that here we are rotating the triangle counterclockwise relative to fixed coordinates.

FIGURE 4.13 Symmetry operations on an equilateral triangle.

The rotation of π about the C- (or y-) axis, which means the interchanging of (structureless) atoms a and c, is just a reflection of the x-axis:

$$
C=R_C(\pi)=\begin{pmatrix}-1&0\\0&1\end{pmatrix}.
\tag{4.170}
$$

We may replace the rotation about the D-axis by a rotation of 4π/3 (about our z-axis) followed by a reflection of the x-axis (x → −x) (Fig. 4.14):

$$
D=R_D(\pi)=CB
=\begin{pmatrix}-1&0\\0&1\end{pmatrix}\begin{pmatrix}-1/2 & \sqrt{3}/2\\ -\sqrt{3}/2 & -1/2\end{pmatrix}
=\begin{pmatrix}1/2 & -\sqrt{3}/2\\ -\sqrt{3}/2 & -1/2\end{pmatrix}.
\tag{4.171}
$$

FIGURE 4.14 The triangle on the right is the triangle on the left rotated 180° about the D-axis. D = CB.

In a similar manner, the rotation of π about the E-axis, interchanging a and b, is replaced by a rotation of 2π/3 (that is, A) and then a reflection (see footnote 28) of the x-axis:

$$
E=R_E(\pi)=CA
=\begin{pmatrix}-1&0\\0&1\end{pmatrix}\begin{pmatrix}-1/2 & -\sqrt{3}/2\\ \sqrt{3}/2 & -1/2\end{pmatrix}
=\begin{pmatrix}1/2 & \sqrt{3}/2\\ \sqrt{3}/2 & -1/2\end{pmatrix}.
\tag{4.172}
$$

Footnote 28: Note that, as a consequence of these reflections, det(C) = det(D) = det(E) = −1. The rotations A and B, of course, have a determinant of +1.

The complete group multiplication table is

        1   A   B   C   D   E
   1    1   A   B   C   D   E
   A    A   B   1   E   C   D
   B    B   1   A   D   E   C
   C    C   D   E   1   A   B
   D    D   E   C   B   1   A
   E    E   C   D   A   B   1

Notice that each element of the group appears only once in each row and in each column, as required by the rearrangement theorem, Exercise 4.7.4. Also, from the multiplication table, the group is not abelian.

We have constructed a six-element group and a 2 × 2 irreducible matrix representation of it. The only other distinct six-element group is the cyclic group [1, R, R², R³, R⁴, R⁵], with

$$
R=e^{2\pi i/6}
\qquad\text{or}\qquad
R=e^{-\pi i\sigma_2/3}=\begin{pmatrix}1/2 & -\sqrt{3}/2\\ \sqrt{3}/2 & 1/2\end{pmatrix}.
\tag{4.173}
$$

Our group [1, A, B, C, D, E] is labeled D3 in crystallography, the dihedral group with a threefold axis of symmetry. The three axes (C, D, and E) in the xy-plane automatically become twofold symmetry axes. As a consequence, (1, C), (1, D), and (1, E) all form two-element subgroups. None of these two-element subgroups of D3 is invariant.

A general and most important result for finite groups of h elements is that

$$
\sum_i n_i^2 = h,
\tag{4.174}
$$

where $n_i$ is the dimension of the matrices of the ith irreducible representation. This equality, sometimes called the dimensionality theorem, is very useful in establishing the irreducible representations of a group. Here for D3 we have $1^2 + 1^2 + 2^2 = 6$ for our three representations. No other irreducible representations of this symmetry group of three objects exist. (The other two representations are one-dimensional: the identity representation, and the representation assigning +1 or −1 to each element depending upon whether a reflection is involved.)

FIGURE 4.15 Ruthenocene.

Dihedral Groups, Dn

A dihedral group Dn with an n-fold symmetry axis implies n twofold axes with angular separation of 2π/n radians; n is a positive integer, but otherwise unrestricted. If we apply the symmetry arguments to crystal lattices, then n is limited to 1, 2, 3, 4, and 6.
The requirement of invariance of the crystal lattice under translations in the plane perpendicular to the n-fold axis excludes n = 5, 7, and higher values. (Try to cover a plane completely with identical regular pentagons and with no overlapping; see footnote 29.) For individual molecules this constraint does not exist, although examples with n > 6 are rare; n = 5 is a real possibility. As an example, the symmetry group for ruthenocene, (C5H5)2Ru, illustrated in Fig. 4.15, is D5 (see footnote 30).

Footnote 29: For D6, imagine a plane covered with regular hexagons and the axis of rotation through the geometric center of one of them.

Footnote 30: Actually the full technical label is D5h, with h indicating invariance under a reflection of the fivefold axis.

Crystallographic Point and Space Groups

The dihedral groups just considered are examples of the crystallographic point groups. A point group is composed of combinations of rotations and reflections (including inversions) that will leave some crystal lattice unchanged. Limiting the operations to rotations and reflections (including inversions) means that one point, the origin, remains fixed, hence the term point group. Including the cyclic groups, two cubic groups (tetrahedron and octahedron symmetries), and the improper forms (involving reflections), we come to a total of 32 crystallographic point groups.

If, to the rotation and reflection operations that produced the point groups, we add the possibility of translations and still demand that some crystal lattice remain invariant, we come to the space groups. There are 230 distinct space groups, a number that is appalling except, possibly, to specialists in the field. For details (which can cover hundreds of pages) see the Additional Readings.

Exercises

4.7.1 Show that the matrices 1, A, B, and C of Eq. (4.165) are reducible. Reduce them.
Note. This means transforming A and C to diagonal form (by the same unitary transformation).
Hint. A and C are anti-Hermitian.
Their eigenvectors will be orthogonal.

4.7.2 Possible operations on a crystal lattice include Aπ (rotation by π), m (reflection), and i (inversion). These three operations combine as
$$
A_\pi^2 = m^2 = i^2 = 1,\qquad A_\pi\cdot m = i,\qquad m\cdot i = A_\pi,\qquad i\cdot A_\pi = m.
$$
Show that the group (1, Aπ, m, i) is isomorphic with the vierergruppe.

4.7.3 Four possible operations in the xy-plane are:
1. no change: x → x, y → y;
2. inversion: x → −x, y → −y;
3. reflection: x → −x, y → y;
4. reflection: x → x, y → −y.
(a) Show that these four operations form a group.
(b) Show that this group is isomorphic with the vierergruppe.
(c) Set up a 2 × 2 matrix representation.

4.7.4 Rearrangement theorem: Given a group of n distinct elements (I, a, b, c, . . . , n), show that the set of products (aI, a², ab, ac, . . . , an) reproduces the n distinct elements in a new order.

4.7.5 Using the 2 × 2 matrix representation of Exercise 3.2.7 for the vierergruppe,
(a) Show that there are four classes, each with one element.
(b) Calculate the character (trace) of each class. Note that two different classes may have the same character.
(c) Show that there are three two-element subgroups. (The unit element by itself always forms a subgroup.)
(d) For any one of the two-element subgroups, show that the subgroup and a single coset reproduce the original vierergruppe.
Note that subgroups, classes, and cosets are entirely different.

4.7.6 Using the 2 × 2 matrix representation, Eq. (4.165), of C4,
(a) Show that there are four classes, each with one element.
(b) Calculate the character (trace) of each class.
(c) Show that there is one two-element subgroup.
(d) Show that the subgroup and a single coset reproduce the original group.

4.7.7 Prove that the number of distinct elements in a coset of a subgroup is the same as the number of elements in the subgroup.

4.7.8 A subgroup H has elements $h_i$. Let x be a fixed element of the original group G and not a member of H. The transform $x h_i x^{-1}$, i = 1, 2, . . . ,
generates a conjugate subgroup $xHx^{-1}$. Show that this conjugate subgroup satisfies each of the four group postulates and therefore is a group.

4.7.9 (a) A particular group is abelian. A second group is created by replacing $g_i$ by $g_i^{-1}$ for each element in the original group. Show that the two groups are isomorphic.
Note. This means showing that if $a_i b_i = c_i$, then $a_i^{-1} b_i^{-1} = c_i^{-1}$.
(b) Continuing part (a), if the two groups are isomorphic, show that each must be abelian.

4.7.10 (a) Once you have a matrix representation of any group, a one-dimensional representation can be obtained by taking the determinants of the matrices. Show that the multiplicative relations are preserved in this determinant representation.
(b) Use determinants to obtain a one-dimensional representation of D3.

4.7.11 Explain how the relation
$$
\sum_i n_i^2 = h
$$
applies to the vierergruppe (h = 4) and to the dihedral group D3 with h = 6.

4.7.12 Show that the subgroup (1, A, B) of D3 is an invariant subgroup.

4.7.13 The group D3 may be discussed as a permutation group of three objects. Matrix B, for instance, rotates vertex a (originally in location 1) to the position formerly occupied by c (location 3). Vertex b moves from location 2 to location 1, and so on. As a permutation, (abc) → (bca). In three dimensions,
$$
\begin{pmatrix}0&1&0\\0&0&1\\1&0&0\end{pmatrix}
\begin{pmatrix}a\\b\\c\end{pmatrix}
=\begin{pmatrix}b\\c\\a\end{pmatrix}.
$$
(a) Develop analogous 3 × 3 representations for the other elements of D3.
(b) Reduce your 3 × 3 representation to the 2 × 2 representation of this section. (This 3 × 3 representation must be reducible or Eq. (4.174) would be violated.)
Note. The actual reduction of a reducible representation may be awkward. It is often easier to develop directly a new representation of the required dimension.

4.7.14 The permutation group of four objects, P4, has 4! = 24 elements.
(a) Treating the four elements of the cyclic group C4 as permutations, set up a 4 × 4 matrix representation of C4 that becomes a subgroup of P4.
(b) How do you know that this 4 × 4 matrix representation of C4 must be reducible?
Note. C4 is abelian, and every abelian group of h objects has only h one-dimensional irreducible representations.

4.7.15 (a) The objects (abcd) are permuted to (dacb). Write out a 4 × 4 matrix representation of this one permutation.
(b) Is the permutation (abcd) → (dacb) odd or even?
(c) Is this permutation a possible member of the D4 group? Why or why not?

4.7.16 The elements of the dihedral group Dn may be written in the form
$$
S^\lambda \bigl[R_z(2\pi/n)\bigr]^\mu,\qquad \lambda = 0, 1;\quad \mu = 0, 1, \ldots, n-1,
$$
where $R_z(2\pi/n)$ represents a rotation of 2π/n about the n-fold symmetry axis, whereas S represents a rotation of π about an axis through the center of the regular polygon and one of its vertices. For S = E show that this form may describe the matrices A, B, C, and D of D3.
Note. The elements $R_z$ and S are called the generators of this finite group. Similarly, i is the generator of the group given by Eq. (4.164).

4.7.17 Show that the cyclic group of n objects, Cn, may be represented by $r^m$, m = 0, 1, 2, . . . , n − 1. Here r is a generator given by r = exp(2πis/n). The parameter s takes on the values s = 1, 2, 3, . . . , n, each value of s yielding a different one-dimensional (irreducible) representation of Cn.

4.7.18 Develop the irreducible 2 × 2 matrix representation of the group of operations (rotations and reflections) that transform a square into itself. Give the group multiplication table.
Note. This is the symmetry group of a square and also the dihedral group D4. (See Fig. 4.16.)

FIGURE 4.16 Square.

FIGURE 4.17 Hexagon.

4.7.19 The permutation group of four objects contains 4! = 24 elements. From Exercise 4.7.18, D4, the symmetry group for a square, has far fewer than 24 elements. Explain the relation between D4 and the permutation group of four objects.

4.7.20 A plane is covered with regular hexagons, as shown in Fig. 4.17.
(a) Determine the dihedral symmetry of an axis perpendicular to the plane through the common vertex of three hexagons (A). That is, if the axis has n-fold symmetry, show (with careful explanation) what n is. Write out the 2 × 2 matrix describing the minimum (nonzero) positive rotation of the array of hexagons that is a member of your Dn group.
(b) Repeat part (a) for an axis perpendicular to the plane through the geometric center of one hexagon (B).

4.7.21 In a simple cubic crystal, we might have identical atoms at r = (la, ma, na), with l, m, and n taking on all integral values.
(a) Show that each Cartesian axis is a fourfold symmetry axis.
(b) The cubic group will consist of all operations (rotations, reflections, inversion) that leave the simple cubic crystal invariant. From a consideration of the permutation of the positive and negative coordinate axes, predict how many elements this cubic group will contain.

4.7.22 (a) From the D3 multiplication table of Fig. 4.18, construct a similarity transform table showing $xyx^{-1}$, where x and y each range over all six elements of D3.
(b) Divide the elements of D3 into classes. Using the 2 × 2 matrix representation of Eqs. (4.169)–(4.172), note the trace (character) of each class.

FIGURE 4.18 Multiplication table.

4.8 DIFFERENTIAL FORMS

In Chapters 1 and 2 we adopted the view that, in n dimensions, a vector is an n-tuple of real numbers and that its components transform properly under changes of the coordinates. In this section we start from the alternative view, in which a vector is thought of as a directed line segment, an arrow. The point of the idea is this: Although the concept of a vector as a line segment does not generalize to curved space–time (the manifolds of differential geometry), except by working in the flat tangent space, which requires embedding in auxiliary extra dimensions, Elie Cartan's differential forms are natural in curved space–time and a very powerful tool.
Calculus can be based on differential forms, as Edwards has shown in his classic textbook (see the Additional Readings). Cartan's calculus leads to a remarkable unification of concepts and theorems of vector analysis that is worth pursuing. In differential geometry and advanced analysis (on manifolds) the use of differential forms is now widespread.

Cartan's notion of vector is based on the one-to-one correspondence between the linear spaces of displacement vectors and directional differential operators (the components of the gradient form a basis). A crucial advantage of the latter is that they can be generalized to curved space–time. Moreover, describing vectors in terms of directional derivatives along curves uniquely specifies the vector at a given point without the need to invoke coordinates. Ultimately, since coordinates are needed to specify points, the Cartan formalism, though an elegant mathematical tool for the efficient derivation of theorems of tensor analysis, has in principle no advantage over the component formalism.

1-Forms

We define dx, dy, dz in three-dimensional Euclidean space as functions assigning to a directed line segment PQ, from the point P to the point Q, the corresponding change in x, y, z. The symbol dx represents "oriented length of the projection of a curve on the x-axis," etc. Note that dx, dy, dz can be, but need not be, infinitesimally small, and they must not be confused with the ordinary differentials that we associate with integrals and differential quotients. A function of the type

$$
A\,dx + B\,dy + C\,dz,\qquad A, B, C\ \text{real numbers},
\tag{4.175}
$$

is defined as a constant 1-form.

Example 4.8.1 CONSTANT 1-FORM

For a constant force F = (A, B, C), the work done along the displacement from P = (3, 2, 1) to Q = (4, 5, 6) is therefore given by

$$
W = A(4-3) + B(5-2) + C(6-1) = A + 3B + 5C.
$$
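As a quick sketch of my own (not part of the text), evaluating a constant 1-form on a directed segment is nothing more than a dot product of (A, B, C) with the displacement Q − P; the sample coefficient values below are arbitrary:

```python
# Evaluate the constant 1-form A dx + B dy + C dz on the segment P -> Q.
def one_form(coeffs, P, Q):
    return sum(c * (q - p) for c, p, q in zip(coeffs, P, Q))

P, Q = (3, 2, 1), (4, 5, 6)
A, B, C = 2.0, -1.0, 0.5        # arbitrary sample force components
W = one_form((A, B, C), P, Q)
assert W == A * 1 + B * 3 + C * 5   # = A + 3B + 5C, as in Example 4.8.1
```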
If F is a force field, then its rectangular components A(x, y, z), B(x, y, z), C(x, y, z) will depend on the location, and the (nonconstant) 1-form dW = F · dr corresponds to the concept of work done against the force field F(r) along dr on a space curve. A finite amount of work

$$
W = \int_C \bigl[A(x,y,z)\,dx + B(x,y,z)\,dy + C(x,y,z)\,dz\bigr]
\tag{4.176}
$$

involves the familiar line integral along an oriented curve C, where the 1-form dW describes the amount of work for small displacements (segments on the path C). In this light, the integrand f(x) dx of an integral $\int_a^b f(x)\,dx$, consisting of the function f and of the measure dx as the oriented length, is here considered to be a 1-form. The value of the integral is obtained from the ordinary line integral.

2-Forms

Consider a unit flow of mass in the z-direction, that is, a flow in the direction of increasing z so that a unit mass crosses a unit square of the xy-plane in unit time. The orientation symbolized by the sequence of points in Fig. 4.19,

(0, 0, 0) → (1, 0, 0) → (1, 1, 0) → (0, 1, 0) → (0, 0, 0),

will be called counterclockwise, as usual. A unit flow in the z-direction is defined by the function dx dy (see footnote 31), assigning to oriented rectangles in space the oriented area of their projections on the xy-plane. Similarly, a unit flow in the x-direction is described by dy dz and a unit flow in the y-direction by dz dx. The reversed order, dz dx rather than dx dz, is dictated by the orientation convention, and dz dx = −dx dz by definition. This antisymmetry is consistent with the cross product of two vectors representing oriented areas in Euclidean space. This notion generalizes to polygons, to curved differentiable surfaces approximated by polygons, and to volumes.

Footnote 31: Many authors denote this wedge product as dx ∧ dy, with dy ∧ dx = −dx ∧ dy. Note that dx dy = dy dx for ordinary differentials.

FIGURE 4.19 Counterclockwise-oriented rectangle.
Example 4.8.2 MAGNETIC FLUX ACROSS AN ORIENTED SURFACE

If B = (A, B, C) is a constant magnetic induction, then the constant 2-form

$$
A\,dy\,dz + B\,dz\,dx + C\,dx\,dy
$$

describes the magnetic flux across an oriented rectangle. If B is a magnetic induction field varying across a surface S, then the flux

$$
\Phi = \int_S \bigl[B_x(\mathbf r)\,dy\,dz + B_y(\mathbf r)\,dz\,dx + B_z(\mathbf r)\,dx\,dy\bigr]
\tag{4.177}
$$

across the oriented surface S involves the familiar (Riemann) integration over approximating small oriented rectangles from which S is pieced together.

The definition of $\int_S \omega$ relies on decomposing $\omega = \sum_i \omega_i$, where the differential forms $\omega_i$ are each nonzero only in a small patch of the surface S, the patches together covering the surface. Then it can be shown that $\sum_i \int \omega_i$ converges, as the patches become smaller and more numerous, to the limit $\int_S \omega$, which is independent of these decompositions. For more details and proofs, we refer the reader to Edwards in the Additional Readings.

3-Forms

A 3-form dx dy dz represents an oriented volume. For example, the determinant of three vectors in Euclidean space changes sign if we reverse the order of two of the vectors; the determinant measures the oriented volume spanned by the three vectors. In particular, $\int_V \rho(x,y,z)\,dx\,dy\,dz$ represents the total charge inside the volume V if ρ is the charge density. Higher-dimensional differential forms in higher-dimensional spaces are defined similarly and are called k-forms, with k = 0, 1, 2, . . . .

If a 3-form

$$
\omega = A(x_1, x_2, x_3)\,dx_1\,dx_2\,dx_3 = A'(x_1', x_2', x_3')\,dx_1'\,dx_2'\,dx_3'
\tag{4.178}
$$

on a 3-dimensional manifold is expressed in terms of new coordinates, then there is a one-to-one, differentiable map $x_i' = x_i'(x_1, x_2, x_3)$ between these coordinates, with Jacobian

$$
J = \frac{\partial(x_1', x_2', x_3')}{\partial(x_1, x_2, x_3)}
\qquad\text{and}\qquad
A = A'J,
$$

so that

$$
\int \omega = \int_V A\,dx_1\,dx_2\,dx_3 = \int_{V'} A'\,dx_1'\,dx_2'\,dx_3'.
\tag{4.179}
$$

This statement spells out the parameter independence of integrals over differential forms, since parameterizations are essentially arbitrary.
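The remark that the determinant measures the oriented volume can be made concrete: expanding a product of three 1-forms with the antisymmetry rule $dx_i\,dx_j = -dx_j\,dx_i$ leaves only terms in which all three du's are distinct, each weighted by the parity of the permutation, which is exactly the signed permutation expansion of a determinant. A small sketch of my own in plain Python:

```python
# Expand (sum_j a1j du_j)(sum_j a2j du_j)(sum_j a3j du_j): only terms with all
# three du's distinct survive, each carrying the sign of its permutation.
from itertools import permutations

def parity(p):
    # sign of a permutation via its inversion count
    inversions = sum(1 for i in range(len(p)) for j in range(i + 1, len(p))
                     if p[i] > p[j])
    return -1 if inversions % 2 else 1

def det_from_forms(a):
    return sum(parity(p) * a[0][p[0]] * a[1][p[1]] * a[2][p[2]]
               for p in permutations(range(3)))

a = ((2, 0, 1), (1, 3, 0), (0, 1, 4))
# cofactor expansion: 2*(3*4 - 0*1) - 0 + 1*(1*1 - 3*0) = 25
assert det_from_forms(a) == 25
```

This is the computational content of the statement below that "differential forms generate the rules governing determinants."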
The rules governing integration of differential forms are defined on manifolds. These are continuous if we can move continuously (actually we assume differentiably) from point to point, and oriented if the orientation of curves generalizes to surfaces and volumes up to the dimension of the whole manifold. The rules are:

• If $\omega = a\omega_1 + a'\omega_1'$, with a, a′ real numbers, then $\int_S \omega = a\int_S \omega_1 + a'\int_S \omega_1'$, where S is a compact, oriented, continuous manifold with boundary.
• If the orientation is reversed, then the integral $\int_S \omega$ changes sign.

Exterior Derivative

We now introduce the exterior derivative d of a function f, a 0-form:

$$
df \equiv \frac{\partial f}{\partial x}\,dx + \frac{\partial f}{\partial y}\,dy + \frac{\partial f}{\partial z}\,dz
= \frac{\partial f}{\partial x_i}\,dx_i,
\tag{4.180}
$$

generating a 1-form $\omega_1 = df$, the differential of f (or exterior derivative), the gradient in standard vector analysis. Upon summing over the coordinates, we have used and will continue to use Einstein's summation convention. Applying the exterior derivative d to a 1-form, we define

$$
d(A\,dx + B\,dy + C\,dz) = dA\,dx + dB\,dy + dC\,dz
\tag{4.181}
$$

with functions A, B, C. This definition, in conjunction with df as just given, ties vectors to the differential operators $\partial_i = \partial/\partial x_i$. Similarly, we extend d to k-forms. However, applying d twice gives zero, ddf = 0, because

$$
d(df) = d\Bigl(\frac{\partial f}{\partial x}\Bigr)dx + d\Bigl(\frac{\partial f}{\partial y}\Bigr)dy
= \Bigl(\frac{\partial^2 f}{\partial x^2}\,dx + \frac{\partial^2 f}{\partial y\,\partial x}\,dy\Bigr)dx
+ \Bigl(\frac{\partial^2 f}{\partial x\,\partial y}\,dx + \frac{\partial^2 f}{\partial y^2}\,dy\Bigr)dy
= \Bigl(\frac{\partial^2 f}{\partial x\,\partial y} - \frac{\partial^2 f}{\partial y\,\partial x}\Bigr)dx\,dy = 0.
\tag{4.182}
$$

This follows from the fact that the order of mixed partial derivatives does not matter, provided all functions are sufficiently differentiable. Similarly, one can show that $dd\omega_1 = 0$ for a 1-form $\omega_1$, etc.

The rules governing differential forms, with $\omega_k$ denoting a k-form, that we have used so far are:

• $dx\,dx = 0 = dy\,dy = dz\,dz$; that is, $dx_i^2 = 0$;
• $dx\,dy = -dy\,dx$; that is, $dx_i\,dx_j = -dx_j\,dx_i$ for $i \ne j$, and $dx_1\,dx_2\cdots dx_k$ is totally antisymmetric in the $dx_i$, i = 1, 2, . . . , k;
• $df = (\partial f/\partial x_i)\,dx_i$;
• $d(\omega_k + \Omega_k) = d\omega_k + d\Omega_k$ (linearity);
• $dd\omega_k = 0$.

Now we apply the exterior derivative d to products of differential forms, starting with functions (0-forms). We have

$$
d(fg) = \frac{\partial(fg)}{\partial x_i}\,dx_i
= \Bigl(f\frac{\partial g}{\partial x_i} + \frac{\partial f}{\partial x_i}\,g\Bigr)dx_i
= f\,dg + df\,g.
\tag{4.183}
$$

If $\omega_1 = (\partial g/\partial x_i)\,dx_i$ is a 1-form and f is a function, then

$$
d(f\omega_1) = d\Bigl(f\frac{\partial g}{\partial x_i}\Bigr)dx_i
= \Bigl(\frac{\partial f}{\partial x_j}\frac{\partial g}{\partial x_i} + f\frac{\partial^2 g}{\partial x_j\,\partial x_i}\Bigr)dx_j\,dx_i
= df\,\omega_1 + f\,d\omega_1,
\tag{4.184}
$$

as expected. But if $\omega_1' = (\partial f/\partial x_j)\,dx_j$ is another 1-form, then

$$
d(\omega_1\omega_1') = d\Bigl(\frac{\partial g}{\partial x_i}\frac{\partial f}{\partial x_j}\Bigr)dx_i\,dx_j
= \frac{\partial^2 g}{\partial x_k\,\partial x_i}\frac{\partial f}{\partial x_j}\,dx_k\,dx_i\,dx_j
- \frac{\partial g}{\partial x_i}\frac{\partial^2 f}{\partial x_k\,\partial x_j}\,dx_i\,dx_k\,dx_j
= d\omega_1\,\omega_1' - \omega_1\,d\omega_1'.
\tag{4.185}
$$

This proof is valid for more general 1-forms $\omega = f_i\,dx_i$ with functions $f_i$. In general, therefore, we define for k-forms

$$
d(\omega_k\,\omega_{k'}) = (d\omega_k)\,\omega_{k'} + (-1)^k \omega_k\,(d\omega_{k'}).
\tag{4.186}
$$

In general, the exterior derivative of a k-form is a (k + 1)-form.

Example 4.8.3 POTENTIAL ENERGY

As an application in two dimensions (for simplicity), consider the potential V(r), a 0-form, and dV, its exterior derivative. Integrating dV along an oriented path C from $\mathbf r_1$ to $\mathbf r_2$ gives

$$
V(\mathbf r_2) - V(\mathbf r_1) = \int_C dV
= \int_C \Bigl(\frac{\partial V}{\partial x}\,dx + \frac{\partial V}{\partial y}\,dy\Bigr)
= \int_C \nabla V\cdot d\mathbf r,
\tag{4.187}
$$

where the last integral is the standard formula for the potential energy difference that forms part of the energy conservation theorem. The path and parameterization independence are manifest in this special case.

Pullbacks

If a linear map $L_2$ from the uv-plane to the xy-plane has the form

$$
x = au + bv + c,\qquad y = eu + fv + g,
\tag{4.188}
$$

oriented polygons in the uv-plane are mapped onto similar polygons in the xy-plane, provided the determinant af − be of the map $L_2$ is nonzero. The 2-form

$$
dx\,dy = (a\,du + b\,dv)(e\,du + f\,dv) = (af - be)\,du\,dv
\tag{4.189}
$$

can be pulled back from the xy- to the uv-plane.
That is to say, an integral over a simply connected surface S becomes

$$
\int_{L_2(S)} dx\,dy = \int_S (af - be)\,du\,dv,
\tag{4.190}
$$

and (af − be) du dv is the pullback of dx dy, opposite to the direction of the map $L_2$ from the uv-plane to the xy-plane. Of course, the determinant af − be of the map $L_2$ is simply the Jacobian, generated without effort by the differential forms in Eq. (4.189). Similarly, a linear map $L_3$ from $u_1u_2u_3$-space to $x_1x_2x_3$-space,

$$
x_i = a_{ij}u_j + b_i,\qquad i = 1, 2, 3,
\tag{4.191}
$$

automatically generates its Jacobian from the 3-form

$$
dx_1\,dx_2\,dx_3
= \Bigl(\sum_{j=1}^3 a_{1j}\,du_j\Bigr)\Bigl(\sum_{j=1}^3 a_{2j}\,du_j\Bigr)\Bigl(\sum_{j=1}^3 a_{3j}\,du_j\Bigr)
= (a_{11}a_{22}a_{33} - a_{12}a_{21}a_{33} \pm \cdots)\,du_1\,du_2\,du_3
= \det\begin{pmatrix}a_{11}&a_{12}&a_{13}\\a_{21}&a_{22}&a_{23}\\a_{31}&a_{32}&a_{33}\end{pmatrix}du_1\,du_2\,du_3.
\tag{4.192}
$$

Thus, differential forms generate the rules governing determinants. Given two linear maps in a row, it is straightforward to prove that the pullback under a composed map is the pullback of the pullback. This theorem is the differential-forms analog of matrix multiplication.

Let us now consider a curve C defined by a parameter t, in contrast to a curve defined by an equation. For example, the circle {(cos t, sin t); 0 ≤ t ≤ 2π} is a parameterization by t, whereas the circle {(x, y); x² + y² = 1} is a definition by an equation. The line integral

$$
\int_C \bigl[A(x,y)\,dx + B(x,y)\,dy\bigr]
= \int_{t_i}^{t_f}\Bigl[A\frac{dx}{dt} + B\frac{dy}{dt}\Bigr]dt
\tag{4.193}
$$

for continuous functions A, B, dx/dt, dy/dt becomes a one-dimensional integral over the oriented interval $t_i \le t \le t_f$. Clearly, the 1-form [A dx/dt + B dy/dt] dt on the t-line is obtained from the 1-form A dx + B dy on the xy-plane via the map x = x(t), y = y(t) from the t-line to the curve C in the xy-plane. The 1-form [A dx/dt + B dy/dt] dt is called the pullback of the 1-form A dx + B dy under the map x = x(t), y = y(t). Using pullbacks we can show that integrals over 1-forms are independent of the parameterization of the path.
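This parameterization independence is easy to confirm numerically. The sketch below is my own illustration (plain Python, a midpoint-type rule): it integrates the 1-form −y dx + x dy around the unit circle with two different parameterizations of the same oriented curve; both approximate 2π, twice the enclosed area.

```python
import math

def integrate_1form(A, B, x, y, t0, t1, n=20000):
    # line integral of A dx + B dy along t -> (x(t), y(t)),
    # evaluating A, B at the midpoint of each parameter step
    total = 0.0
    for k in range(n):
        a = t0 + (t1 - t0) * k / n
        b = t0 + (t1 - t0) * (k + 1) / n
        m = 0.5 * (a + b)
        total += A(x(m), y(m)) * (x(b) - x(a)) + B(x(m), y(m)) * (y(b) - y(a))
    return total

A = lambda x, y: -y
B = lambda x, y: x
I1 = integrate_1form(A, B, math.cos, math.sin, 0.0, 2 * math.pi)
# same oriented circle, traversed with the non-uniform parameter t**2
I2 = integrate_1form(A, B, lambda t: math.cos(t * t), lambda t: math.sin(t * t),
                     0.0, math.sqrt(2 * math.pi))
assert abs(I1 - 2 * math.pi) < 1e-4
assert abs(I2 - I1) < 1e-4
```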
In this sense, the differential quotient dy/dx can be considered as the coefficient of dx in the pullback of dy under the function y = f(x), or dy = f′(x) dx.

This concept of pullback readily generalizes to maps in three or more dimensions and to k-forms with k > 1. In particular, the chain rule can be seen to be a pullback: If

$$
y_i = f_i(x_1, x_2, \ldots, x_n),\quad i = 1, 2, \ldots, l,
\qquad\text{and}\qquad
z_j = g_j(y_1, y_2, \ldots, y_l),\quad j = 1, 2, \ldots, m,
\tag{4.194}
$$

are differentiable maps from $\mathbb R^n \to \mathbb R^l$ and $\mathbb R^l \to \mathbb R^m$, then the composed map $\mathbb R^n \to \mathbb R^m$ is differentiable, and the pullback of any k-form under the composed map is equal to the pullback of the pullback. This theorem is useful for establishing that integrals of k-forms are parameter independent. Similarly, we define the differential df as the pullback of the 1-form dz under the function z = f(x, y):

$$
dz = df = \frac{\partial f}{\partial x}\,dx + \frac{\partial f}{\partial y}\,dy.
\tag{4.195}
$$

Example 4.8.4 STOKES' THEOREM

As another application, let us first sketch the standard derivation of the simplest version of Stokes' theorem for a rectangle S = [a ≤ x ≤ b, c ≤ y ≤ d], oriented counterclockwise, with boundary ∂S:

$$
\oint_{\partial S}(A\,dx + B\,dy)
= \int_a^b A(x,c)\,dx + \int_c^d B(b,y)\,dy + \int_b^a A(x,d)\,dx + \int_d^c B(a,y)\,dy
$$
$$
= \int_c^d \bigl[B(b,y) - B(a,y)\bigr]dy - \int_a^b \bigl[A(x,d) - A(x,c)\bigr]dx
$$
$$
= \int_c^d\!\!\int_a^b \frac{\partial B}{\partial x}\,dx\,dy - \int_a^b\!\!\int_c^d \frac{\partial A}{\partial y}\,dy\,dx
= \int_S \Bigl(\frac{\partial B}{\partial x} - \frac{\partial A}{\partial y}\Bigr)dx\,dy,
\tag{4.196}
$$

which holds for any simply connected surface S that can be pieced together from rectangles.

Now we demonstrate the use of differential forms to obtain the same theorem (again in two dimensions for simplicity):

$$
d(A\,dx + B\,dy) = dA\,dx + dB\,dy
= \Bigl(\frac{\partial A}{\partial x}\,dx + \frac{\partial A}{\partial y}\,dy\Bigr)dx
+ \Bigl(\frac{\partial B}{\partial x}\,dx + \frac{\partial B}{\partial y}\,dy\Bigr)dy
= \Bigl(\frac{\partial B}{\partial x} - \frac{\partial A}{\partial y}\Bigr)dx\,dy,
\tag{4.197}
$$

using the rules highlighted earlier. Integrating over a surface S and its boundary ∂S, respectively, we obtain
$$
\oint_{\partial S}(A\,dx + B\,dy)
= \int_S d(A\,dx + B\,dy)
= \int_S \Bigl(\frac{\partial B}{\partial x} - \frac{\partial A}{\partial y}\Bigr)dx\,dy.
\tag{4.198}
$$

Here contributions to the left-hand integral from inner boundaries cancel as usual, because they are oriented in opposite directions on adjacent rectangles. For each oriented inner rectangle R that makes up the simply connected surface S we have used

$$
\int_R d\,dx = \oint_{\partial R} dx = 0.
\tag{4.199}
$$

Note that the exterior derivative automatically generates the z-component of the curl. In three dimensions, Stokes' theorem derives from the differential-form identity involving the vector potential A and the magnetic induction B = ∇ × A,

$$
d(A_x\,dx + A_y\,dy + A_z\,dz) = dA_x\,dx + dA_y\,dy + dA_z\,dz
= \Bigl(\frac{\partial A_x}{\partial x}\,dx + \frac{\partial A_x}{\partial y}\,dy + \frac{\partial A_x}{\partial z}\,dz\Bigr)dx + \cdots
$$
$$
= \Bigl(\frac{\partial A_z}{\partial y} - \frac{\partial A_y}{\partial z}\Bigr)dy\,dz
+ \Bigl(\frac{\partial A_x}{\partial z} - \frac{\partial A_z}{\partial x}\Bigr)dz\,dx
+ \Bigl(\frac{\partial A_y}{\partial x} - \frac{\partial A_x}{\partial y}\Bigr)dx\,dy,
\tag{4.200}
$$

generating all components of the curl in three-dimensional space. This identity is integrated over each oriented rectangle that makes up the simply connected surface S (which has no holes, that is, every curve on it contracts to a point of the surface) and is then summed over all adjacent rectangles to yield the magnetic flux across S,

$$
\Phi = \int_S \bigl[B_x\,dy\,dz + B_y\,dz\,dx + B_z\,dx\,dy\bigr]
= \oint_{\partial S}\bigl[A_x\,dx + A_y\,dy + A_z\,dz\bigr],
\tag{4.201}
$$

or, in the standard notation of vector analysis (Stokes' theorem, Chapter 1),

$$
\int_S \mathbf B\cdot d\mathbf a = \int_S (\nabla\times\mathbf A)\cdot d\mathbf a
= \oint_{\partial S}\mathbf A\cdot d\mathbf r.
\tag{4.202}
$$

Example 4.8.5 GAUSS' THEOREM

Consider Gauss' law, Section 1.14. We integrate $\rho/\varepsilon_0 = \nabla\cdot\mathbf E$ over the volume of a single parallelepiped V = [a ≤ x ≤ b, c ≤ y ≤ d, e ≤ z ≤ f], oriented by dx dy dz (right-handed); the side x = b of V is oriented by dy dz (counterclockwise, as seen from x > b), and so on. Using

$$
E_x(b, y, z) - E_x(a, y, z) = \int_a^b \frac{\partial E_x}{\partial x}\,dx,
\tag{4.203}
$$

we have, in the notation of differential forms, summing over all adjacent parallelepipeds that make up the volume V,
$$
\oint_{\partial V} E_x\,dy\,dz = \int_V \frac{\partial E_x}{\partial x}\,dx\,dy\,dz.
\tag{4.204}
$$

Integrating the electric flux (2-form) identity

$$
d(E_x\,dy\,dz + E_y\,dz\,dx + E_z\,dx\,dy)
= dE_x\,dy\,dz + dE_y\,dz\,dx + dE_z\,dx\,dy
= \Bigl(\frac{\partial E_x}{\partial x} + \frac{\partial E_y}{\partial y} + \frac{\partial E_z}{\partial z}\Bigr)dx\,dy\,dz
\tag{4.205}
$$

across the simply connected surface ∂V, we have Gauss' theorem,

$$
\oint_{\partial V}\bigl(E_x\,dy\,dz + E_y\,dz\,dx + E_z\,dx\,dy\bigr)
= \int_V \Bigl(\frac{\partial E_x}{\partial x} + \frac{\partial E_y}{\partial y} + \frac{\partial E_z}{\partial z}\Bigr)dx\,dy\,dz,
\tag{4.206}
$$

or, in the standard notation of vector analysis,

$$
\oint_{\partial V}\mathbf E\cdot d\mathbf a = \int_V \nabla\cdot\mathbf E\,d^3r = \frac{q}{\varepsilon_0}.
\tag{4.207}
$$

These examples are different cases of a single theorem on differential forms. To explain why, let us begin with some terminology, a preliminary definition of a differentiable manifold M: It is a collection of points (m-tuples of real numbers) that are smoothly (that is, differentiably) connected with each other, so that the neighborhood of each point looks like a simply connected piece of an m-dimensional Cartesian space "close enough" around the point and containing it. Here m, which stays constant from point to point, is called the dimension of the manifold. Examples are the m-dimensional Euclidean space $\mathbb R^m$ and the m-dimensional sphere

$$
S^m = \Bigl\{(x^1, \ldots, x^{m+1});\ \sum_{i=1}^{m+1}(x^i)^2 = 1\Bigr\}.
$$

Any surface with sharp edges, corners, or kinks is not a manifold in our sense; that is, it is not differentiable. In differential geometry, all movements, such as translation and parallel displacement, are local, that is, defined infinitesimally.

If we apply the exterior derivative d to a function $f(x^1, \ldots, x^m)$ on M, where the $x^i(P)$ are coordinate functions, we generate basic 1-forms,

$$
df = \frac{\partial f}{\partial x^i}\,dx^i.
\tag{4.208}
$$

As before, we have d(df) = 0 because

$$
d(df) = d\Bigl(\frac{\partial f}{\partial x^i}\Bigr)dx^i
= \frac{\partial^2 f}{\partial x^j\,\partial x^i}\,dx^j\,dx^i
= \sum_{j<i}\Bigl(\frac{\partial^2 f}{\partial x^j\,\partial x^i} - \frac{\partial^2 f}{\partial x^i\,\partial x^j}\Bigr)dx^j\,dx^i = 0.
\tag{4.209}
$$
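Both Examples 4.8.4 and 4.8.5 can be spot-checked numerically. The following sketch is my own illustration (plain Python, midpoint sampling): it takes E = (xy, yz, zx) on the unit cube, for which ∇·E = x + y + z, so the volume integral of the divergence is 3/2, and checks that the total flux through the six faces agrees.

```python
# Numerical spot-check of Gauss' theorem, Eq. (4.206), on the unit cube
# with the sample field E = (x*y, y*z, z*x), for which div E = x + y + z.
Ex = lambda x, y, z: x * y
Ey = lambda x, y, z: y * z
Ez = lambda x, y, z: z * x

m = 200
h = 1.0 / m
flux = 0.0
for i in range(m):
    for j in range(m):
        u, v = (i + 0.5) * h, (j + 0.5) * h   # midpoint of a face cell
        da = h * h
        flux += (Ex(1.0, u, v) - Ex(0.0, u, v)) * da  # faces x = 1 and x = 0
        flux += (Ey(u, 1.0, v) - Ey(u, 0.0, v)) * da  # faces y = 1 and y = 0
        flux += (Ez(u, v, 1.0) - Ez(u, v, 0.0)) * da  # faces z = 1 and z = 0

# volume integral of div E = x + y + z over the unit cube is 3 * (1/2) = 3/2
assert abs(flux - 1.5) < 1e-6
```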
The Cauchy criterion is: A necessary and sufficient condition that a sequence (si ) converge is that for each ε > 0 there is a fixed number N such that |sj − si | < ε, for all i, j > N. This means that the individual partial sums must cluster together as we move far out in the sequence. The Cauchy criterion may easily be extended to sequences of functions. We see it in this form in Section 5.5 in the definition of uniform convergence and in Section 10.4 in the development of Hilbert space. Our partial sums si may not converge to a single limit but may oscillate, as in the case ∞  n=1 un = 1 − 1 + 1 − 1 + 1 + · · · − (−1)n + · · · . Clearly, si = 1 for i odd but si = 0 for i even. There is no convergence to a limit, and series such as this one are labeled oscillatory. Whenever the sequence of partial sums diverges (approaches ±∞), the infinite series is said to diverge. Often the term divergent is extended to include oscillatory series as well. Because we evaluate the partial sums by ordinary arithmetic, the convergent series, defined in terms of a limit of the partial sums, assumes a position of supreme importance. Two examples may clarify the nature of convergence or divergence of a series and will also serve as a basis for a further detailed investigation in the next section. Example 5.1.1 THE GEOMETRIC SERIES The geometrical sequence, starting with a and with a ratio r (= an+1 /an independent of n), is given by a + ar + ar 2 + ar 3 + · · · + ar n−1 + · · · . The nth partial sum is given by1 sn = a Taking the limit as n → ∞, lim sn = n→∞ 1 Multiply and divide s = n−1 ar m by 1 − r. n m=0 1 − rn . 1−r a , 1−r for |r| < 1. (5.3) (5.4) 5.1 Fundamental Concepts 323 Hence, by definition, the infinite geometric series converges for |r| < 1 and is given by ∞  n=1 ar n−1 = a . 1−r (5.5) On the other hand, if |r| ≥ 1, the necessary condition un → 0 is not satisfied and the infinite series diverges.  
Example 5.1.2  THE HARMONIC SERIES

As a second and more involved example, we consider the harmonic series

  Σ_{n=1}^∞ 1/n = 1 + 1/2 + 1/3 + 1/4 + · · · + 1/n + · · · .  (5.6)

We have lim_{n→∞} u_n = lim_{n→∞} 1/n = 0, but this is not sufficient to guarantee convergence. If we group the terms (no change in order) as

  1 + 1/2 + (1/3 + 1/4) + (1/5 + 1/6 + 1/7 + 1/8) + (1/9 + · · · + 1/16) + · · · ,  (5.7)

each pair of parentheses encloses p terms of the form

  1/(p+1) + 1/(p+2) + · · · + 1/(p+p) > p/(2p) = 1/2.  (5.8)

Forming partial sums by adding the parenthetical groups one by one, we obtain

  s₁ = 1,  s₂ = 3/2,  s₃ > 4/2,  s₄ > 5/2,  s₅ > 6/2,  . . . ,  s_n > (n+1)/2.  (5.9)

The harmonic series considered in this way is certainly divergent.² An alternate and independent demonstration of its divergence appears in Section 5.2.

If the u_n > 0 are monotonically decreasing to zero, that is, u_n > u_{n+1} for all n, then Σ_n u_n converges to S if, and only if, s_n − n u_n converges to S. As the partial sums s_n converge to S, this theorem implies that n u_n → 0 for n → ∞.

To prove this theorem, we start by concluding from 0 < u_{n+1} < u_n and

  s_{n+1} − (n+1) u_{n+1} = s_n − n u_{n+1} = s_n − n u_n + n(u_n − u_{n+1}) > s_n − n u_n

that s_n − n u_n increases as n → ∞. As a consequence of s_n − n u_n < s_n ≤ S, s_n − n u_n converges to a value s ≤ S. Deleting the tail of positive terms u_i − u_n from i = ν + 1 to n, we infer from

  s_n − n u_n > (u₁ − u_n) + · · · + (u_ν − u_n) = s_ν − ν u_n

that s_n − n u_n ≥ s_ν for n → ∞. Hence also s ≥ S, so s = S and n u_n → 0.

When this theorem is applied to the harmonic series Σ_n 1/n with n u_n = n · (1/n) = 1, it implies that the harmonic series does not converge; it diverges to +∞.

² The (finite) harmonic series appears in an interesting note on the maximum stable displacement of a stack of coins. P. R. Johnson, The Leaning Tower of Lire. Am. J. Phys. 23: 240 (1955).
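The grouping argument of Eqs. (5.7)-(5.9) says that the sum of the first 2^k harmonic terms exceeds 1 + k/2, so the partial sums grow without bound. A quick numerical illustration (not from the text):

```python
# The grouping bound: sum_{n=1}^{2**k} 1/n >= 1 + k/2, so the harmonic
# partial sums exceed any preassigned value for k large enough.
def harmonic(n):
    """Partial sum 1 + 1/2 + ... + 1/n."""
    return sum(1.0 / i for i in range(1, n + 1))

bounds_hold = all(harmonic(2**k) >= 1 + k / 2 for k in range(1, 15))
h_16384 = harmonic(2**14)   # already past 10, and still climbing
```

The actual growth is logarithmic (h_16384 is near ln 16384 + 0.577), far slower than the lower bound requires, which is why the divergence is so easy to miss numerically.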
Addition, Subtraction of Series If we have two convergent series n un → s and n vn → S, their sum and difference will also converge to s ± S because their partial sums satisfy     sj ± Sj − (si ± Si ) = sj − si ± (Sj − Si ) ≤ |sj − si | + |Sj − Si | < 2ǫ using the triangle inequality |a| − |b| ≤ |a + b| ≤ |a| + |b| for a = sj − si , b = Sj − Si . A convergent series n un → S may be multiplied termwise by a real number a. The new series will converge to aS because   |asj − asi | = a(sj − si ) = |a||sj − si | < |a|ǫ. This multiplication by a constant can be generalized to a multiplication by terms cn of a bounded sequence of numbers. If n un converges to S and 0 < cn ≤ M are bounded, then n un cn is convergent. If n un cn diverges. n un is divergent and cn > M > 0, then To prove this theorem we take i, j sufficiently large so that |sj − si | < ǫ. Then j  i+1 un cn ≤ M The divergent case follows from  n j  i+1 un = M|sj − si | < Mǫ. un cn > M  n un → ∞. Using the binomial theorem3 (Section 5.6), we may expand the function (1 + x)−1 : 1 = 1 − x + x 2 − x 3 + · · · + (−x)n−1 + · · · . 1+x If we let x → 1, this series becomes 1 − 1 + 1 − 1 + 1 − 1 + ··· , (5.10) (5.11) a series that we labeled oscillatory earlier in this section. Although it does not converge in the usual sense, meaning can be attached to this series. Euler, for example, assigned a value of 1/2 to this oscillatory sequence on the basis of the correspondence between this series and the well-defined function (1 + x)−1 . Unfortunately, such correspondence between series and function is not unique, and this approach must be refined. Other methods 3 Actually Eq. (5.10) may be verified by multiplying both sides by 1 + x. 5.2 Convergence Tests 325 of assigning a meaning to a divergent or oscillatory series, methods of defining a sum, have been developed. See G. H. Hardy, Divergent Series, Chelsea Publishing Co. 2nd ed. (1992). 
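Euler's value of 1/2 for the oscillatory series can be seen numerically: the geometric series for 1/(1 + x) converges for |x| < 1, and its sum tends to 1/2 as x → 1 from below (Abel summation). A small sketch, not part of the text:

```python
# Abel-style look at 1 - 1 + 1 - 1 + ... : for |x| < 1 the series
# sum (-x)**n converges to 1/(1 + x); letting x -> 1- drives the sum
# toward 1/2, Euler's assigned value for the oscillatory series.
def abel_partial(x, terms=10000):
    return sum((-x) ** n for n in range(terms))

vals = [abel_partial(x) for x in (0.9, 0.99, 0.999)]   # decreasing toward 1/2
```

At x = 1 itself the partial sums still oscillate; the limit is attached to the function, not to the series.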
In general, however, this aspect of infinite series is of relatively little interest to the scientist or the engineer. An exception to this statement, the very important asymptotic or semiconvergent series, is considered in Section 5.10. Exercises 5.1.1 Show that ∞  n=1 1 1 = . (2n − 1)(2n + 1) 2 Hint. Show (by mathematical induction) that sm = m/(2m + 1). 5.1.2 Show that ∞  n=1 1 = 1. n(n + 1) Find the partial sum sm and verify its correctness by mathematical induction. Note. The method of expansion in partial fractions, Section 15.8, offers an alternative way of solving Exercises 5.1.1 and 5.1.2. 5.2 CONVERGENCE TESTS Although nonconvergent series may be useful in certain special cases (compare Section 5.10), we usually insist, as a matter of convenience if not necessity, that our series be convergent. It therefore becomes a matter of extreme importance to be able to tell whether a given series is convergent. We shall develop a number of possible tests, starting with the simple and relatively insensitive tests and working up to the more complicated but quite sensitive tests. For the present let us consider a series of positive terms an ≥ 0, postponing negative terms until the next section. Comparison Test If term by term a series of terms 0 ≤ un ≤ an , in which the a convergent series, n form a the series n un is also convergent. If un ≤ an for all n, then n un ≤ n an and n un therefore is convergent. If term by term a series of terms vn ≥ bn , in which the bn , form a comparisons divergent series, the series n vn is also divergent. Note that of un with bn or vn with an yield no information. If vn ≥ bn for all n, then n vn ≥ n bn and n vn therefore is divergent. For the convergent series an we already have the geometric series, whereas the harmonic series will serve as the divergent comparison series bn . As other series are identified as either convergent or divergent, they may be used for the known series in this comparison test. 
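The comparison test is easy to exercise numerically. The sketch below (an illustration, not from the text) compares u_n = 1/(n·2^n) term by term with the convergent geometric series a_n = 1/2^n, whose tail past n = N is exactly 2^(−N).

```python
# Comparison test: 0 <= u_n <= a_n with sum(a_n) convergent forces
# sum(u_n) to converge.  Here u_n = 1/(n*2**n) <= a_n = 1/2**n, so the
# u-tail is bounded by the exact geometric tail.
u = lambda n: 1.0 / (n * 2**n)
a = lambda n: 1.0 / 2**n

termwise_ok = all(u(n) <= a(n) for n in range(1, 60))
u_tail = sum(u(n) for n in range(21, 60))   # tail of the u-series past n = 20
a_tail = 2.0**-20                           # exact tail sum_{n>20} 2**-n
partial = sum(u(n) for n in range(1, 21))   # converges to ln 2 = 0.693147...
```

The closed form Σ 1/(n·2^n) = ln 2 follows from the Maclaurin series of −ln(1 − x) at x = 1/2 and gives an independent check.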
All tests developed in this section are essentially comparison tests. Figure 5.1 exhibits these tests and the interrelationships. 326 Chapter 5 Infinite Series FIGURE 5.1 Example 5.2.1 Comparison tests. A DIRICHLET SERIES −p −0.999 > n−1 and b = n−1 forms the Test ∞ n n=1 n , p = 0.999, for convergence. Since n divergentharmonic series, the comparison test shows that n n−0.999 is divergent. Generalizing, n n−p is seen to be divergent for all p ≤ 1 but convergent for p > 1 (see Example 5.2.3).  Cauchy Root Test If (an )1/n ≤ r < 1 for all sufficiently large n, with r independent of n, then n an is 1/n convergent. If (an ) ≥ 1 for all sufficiently large n, then n an is divergent. The first part of this test is verified easily by raising (an )1/n ≤ r to the nth power. We get an ≤ r n < 1. Since r n is just the nth term in a convergent geometric series, n an is convergent by the 1/n comparison test. Conversely, if (an ) ≥ 1, then an ≥ 1 and the series must diverge. This root test is particularly useful in establishing the properties of power series (Section 5.7). D’Alembert (or Cauchy) Ratio Test If an+1 /an ≤ r < 1 for all sufficiently large n and r is independent of n, then n an is convergent. If an+1 /an ≥ 1 for all sufficiently large n, then n an is divergent. Convergence is proved by direct comparison with the geometric series (1 + r + r 2 + · · · ). In the second part, an+1 ≥ an and divergence should be reasonably obvious. Although not 5.2 Convergence Tests 327 quite so sensitive as the Cauchy root test, this D’Alembert ratio test is one of the easiest to apply and is widely used. An alternate statement of the ratio test is in the form of a limit: If lim n→∞ an+1 < 1, an convergence, 1, divergence, = 1, indeterminate. (5.12) Because of this final indeterminate possibility, the ratio test is likely to fail at crucial points, and more delicate, sensitive tests are necessary. The alert reader may wonder how this indeterminacy arose. 
Actually it was concealed in the first statement, an+1 /an ≤ r < 1. We might encounter an+1 /an < 1 for all finite n but be unable to choose an r < 1 and independent of n such that an+1 /an ≤ r for all sufficiently large n. An example is provided by the harmonic series n an+1 < 1. = an n+1 (5.13) an+1 = 1, n→∞ an (5.14) Since lim no fixed ratio r < 1 exists and the ratio test fails. Example 5.2.2 Test n n/2 D’ALEMBERT RATIO TEST n for convergence. an+1 (n + 1)/2n+1 1 n + 1 . = = · an n/2n 2 n (5.15) Since an+1 3 ≤ an 4 for n ≥ 2, (5.16) we have convergence. Alternatively, lim n→∞ an+1 1 = an 2 and again — convergence. (5.17)  Cauchy (or Maclaurin) Integral Test This is another sort of comparison test, in which we compare a series with an integral. Geometrically, we compare the area of a series of unit-width rectangles with the area under a curve. 328 Chapter 5 Infinite Series FIGURE 5.2 (a) Comparison of integral and sum-blocks leading. (b) Comparison of integral and sum-blocks lagging. monotonic decreasing function in which f (n) = an . Then ∞ Let f (x) be a continuous, a converges if f (x) dx is finite and diverges if the integral is infinite. For the ith n n 1 partial sum, si = i  n=1 But si > an = i  f (n). (5.18) n=1 i+1 f (x) dx (5.19) 1 from Fig. 5.2a, f (x) being monotonic decreasing. On the other hand, from Fig. 5.2b, i f (x) dx, (5.20) si − a1 < 1 in which the series is represented by the inscribed rectangles. Taking the limit as i → ∞, we have ∞ 1 f (x) dx ≤ ∞  n=1 an ≤ ∞ 1 f (x) dx + a1 . (5.21) Hence the infinite series converges or diverges as the corresponding integral converges or diverges. This integral test is particularly useful in setting upper and lower bounds on the remainder of a series after some number of initial terms have been summed. That is, ∞  n=1 an = N  n=1 an + ∞  an , n=N +1 where ∞ N +1 f (x) dx ≤ ∞  n=N +1 an ≤ ∞ N +1 f (x) dx + aN +1 . 
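The remainder bound just stated is simple to verify for f(x) = 1/x², where the tail integrals can be done in closed form. The sketch below (not from the text) squeezes the tail of Σ 1/n² past n = N between 1/(N+1) and 1/(N+1) + 1/(N+1)², using the known value ζ(2) = π²/6 for the full sum.

```python
# Integral-test remainder bound for f(x) = 1/x**2:
#   1/(N+1)  <=  sum_{n>N} 1/n**2  <=  1/(N+1) + 1/(N+1)**2,
# since the tail integral of x**-2 from N+1 to infinity is 1/(N+1).
import math

N = 50
partial = sum(1.0 / n**2 for n in range(1, N + 1))
tail = math.pi**2 / 6 - partial       # exact tail, via zeta(2) = pi**2/6
lower = 1.0 / (N + 1)                 # tail integral
upper = lower + 1.0 / (N + 1) ** 2    # tail integral plus first tail term
```

With N = 50 the two bounds already pin the tail down to within one part in 50, which is how the integral test is used in practice to correct a truncated sum.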
5.2 Convergence Tests 329 To free the integral test from the quite restrictive requirement that the interpolating function f (x) be positive and monotonic, we show for any function f (x) with a continuous derivative that Nf Nf Nf   x − [x] f ′ (x) dx f (x) dx + (5.22) f (n) = Ni Ni n=Ni +1 holds. Here [x] denotes the largest integer below x, so x − [x] varies sawtoothlike between 0 and 1. To derive Eq. (5.22) we observe that Nf Nf ′ xf (x) dx = Nf f (Nf ) − Ni f (Ni ) − f (x) dx, (5.23) Ni Ni using integration by parts. Next we evaluate the integral Nf Ni ′ [x]f (x) dx = Nf −1  n n+1 Nf  n=Ni +1 ′ f (x) dx = n n=Ni =− Nf −1  ' ( n f (n + 1) − f (n) n=Ni f (n) − Ni f (Ni ) + Nf f (Nf ). (5.24) Subtracting Eq. (5.24) from (5.23) we arrive at Eq. (5.22). Note that f (x) may go up or down and even change sign, so Eq. (5.22) applies to alternating series (see Section 5.3) as well. Usually f ′ (x) falls faster than f (x) for x → ∞, so the remainder term in Eq. (5.22) converges better. It is easy to improve Eq. (5.22) by replacing x − [x] by x − [x] − 21 , which varies between − 12 and 21 : Nf Nf   f (x) dx + f (n) = x − [x] − 21 f ′ (x) dx Ni 0, and q > 0 (if x = 1). 5.7.9 Evaluate  (a) lim sin(tan x) − tan(sin x) x −7 , x→0 (b) lim x −n jn (x), x→0 n = 3, where jn (x) is a spherical Bessel function (Section 11.7), defined by     1 d n sin x . jn (x) = (−1)n x n x dx x ANS. (a) − 1 1 1 , (b) → 30 1 · 3 · 5 · · · (2n + 1) 105 for n = 3. 15 The series expansion of tan−1 x (upper limit 1 replaced by x) was discovered by James Gregory in 1671, 3 years before Leibniz. See Peter Beckmann’s entertaining book, A History of Pi, 2nd ed., Boulder, CO: Golem Press (1971) and L. Berggren, J. and P. Borwein, Pi: A Source Book, New York: Springer (1997). 5.7 Power Series 5.7.10 369 Neutron transport theory gives the following expression for the inverse neutron diffusion length of k:   k a−b = 1. tanh−1 k a By series inversion or otherwise, determine k 2 as a series of powers of b/a. 
Give the first two terms of the series.   4b 2 . ANS. k = 3ab 1 − 5a 5.7.11 Develop a series expansion of y = sinh−1 x (that is, sinh y = x) in powers of x by (a) inversion of the series for sinh y, (b) a direct Maclaurin expansion. 5.7.12 A function f (z) is represented by a descending power series f (z) = ∞  an z−n , n=0 R ≤ z < ∞. Show that this series expansion is unique; that is, if f (z) = R ≤ z < ∞, then an = bn for all n. ∞ n=0 bn z −n , 5.7.13 A power series converges for −R < x < R. Show that the differentiated series and the integrated series have the same interval of convergence. (Do not bother about the endpoints x = ±R.) 5.7.14 Assuming that f (x) may be expanded in a power series about the origin, f (x) = ∞ n n=0 an x , with some nonzero range of convergence. Use the techniques employed in proving uniqueness of series to show that your assumed series is a Maclaurin series: 1 (n) f (0). n! The Klein–Nishina formula for the scattering of photons by electrons contains a term of the form  (1 + ε) 2 + 2ε ln(1 + 2ε) − f (ε) = . 1 + 2ε ε ε2 an = 5.7.15 Here ε = hν/mc2 , the ratio of the photon energy to the electron rest mass energy. Find lim f (ε). ε→0 ANS. 34 . 5.7.16 The behavior of a neutron losing energy by colliding elastically with nuclei of mass A is described by a parameter ξ1 , ξ1 = 1 + (A − 1)2 A − 1 ln . 2A A+1 370 Chapter 5 Infinite Series An approximation, good for large A, is ξ2 = 2 . A + 2/3 Expand ξ1 and ξ2 in powers of A−1 . Show that ξ2 agrees with ξ1 through (A−1 )2 . Find the difference in the coefficients of the (A−1 )3 term. 5.7.17 Show that each of these two integrals equals Catalan’s constant: 1 1 dt dx (a) . arc tan t , ln x (b) − t 1 + x2 0 0 Note. See β(2) in Section 5.9 for the value of Catalan’s constant. 
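Machin-type arc-tangent identities of the kind listed in Exercise 5.7.18 below converge very rapidly, because the Maclaurin series for tan⁻¹x is evaluated at small arguments. A computational aside (not from the text), checking the first identity:

```python
# Machin's identity pi = 16*atan(1/5) - 4*atan(1/239), evaluated with
# the Maclaurin series atan x = x - x**3/3 + x**5/5 - ...; at x = 1/5
# each term shrinks by a factor 25, so a couple dozen terms suffice.
def arctan_series(x, terms=25):
    return sum((-1) ** k * x ** (2 * k + 1) / (2 * k + 1) for k in range(terms))

pi_machin = 16 * arctan_series(1 / 5) - 4 * arctan_series(1 / 239)
```

The series at x = 1 (Gregory's series for π/4) would need millions of terms for the same accuracy; pulling the argument down to 1/5 and 1/239 is the whole point of the identity.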
5.7.18 Calculate π (double precision) by each of the following arc tangent expressions: π = 16 tan−1 (1/5) − 4 tan−1 (1/239) π = 24 tan−1 (1/8) + 8 tan−1 (1/57) + 4 tan−1 (1/239) π = 48 tan−1 (1/18) + 32 tan−1 (1/57) − 20 tan−1 (1/239). Obtain 16 significant figures. Verify the formulas using Exercise 5.6.2. Note. These formulas have been used in some of the more accurate calculations of π .16 5.7.19 An analysis of the Gibbs phenomenon of Section 14.5 leads to the expression 2 π sin ξ dξ. π 0 ξ (a) Expand the integrand in a series and integrate term by term. Find the numerical value of this expression to four significant figures. (b) Evaluate this expression by the Gaussian quadrature if available. ANS. 1.178980. 5.8 ELLIPTIC INTEGRALS Elliptic integrals are included here partly as an illustration of the use of power series and partly for their own intrinsic interest. This interest includes the occurrence of elliptic integrals in physical problems (Example 5.8.1 and Exercise 5.8.4) and applications in mathematical problems. Example 5.8.1 PERIOD OF A SIMPLE PENDULUM For small-amplitude oscillations, our pendulum (Fig. 5.8) has simple harmonic motion with a period T = 2π(l/g)1/2 . For a maximum amplitude θM large enough so that sin θM = θM , Newton’s second law of motion and Lagrange’s equation (Section 17.7) lead to a nonlinear differential equation (sin θ is a nonlinear function of θ ), so we turn to a different approach. 16 D. Shanks and J. W. Wrench, Computation of π to 100 000 decimals. Math. Comput. 16: 76 (1962). 5.8 Elliptic Integrals 371 FIGURE 5.8 Simple pendulum. The swinging mass m has a kinetic energy of ml 2 (dθ/dt)2 /2 and a potential energy of −mgl cos θ (θ = π/2 taken for the arbitrary zero of potential energy). Since dθ/dt = 0 at θ = θM , conservation of energy gives   1 2 dθ 2 ml − mgl cos θ = −mgl cos θM . (5.124) 2 dt Solving for dθ/dt we obtain  1/2 2g dθ =± (cos θ − cos θM )1/2 , dt l (5.125) with the mass m canceling out. 
We take t to be zero when θ = 0 and dθ/dt > 0. An integration from θ = 0 to θ = θM yields  1/2 t  1/2 θM 2g 2g −1/2 (cos θ − cos θM ) dθ = t. (5.126) dt = l l 0 0 This is 14 of a cycle, and therefore the time t is 41 of the period T . We note that θ ≤ θM , and with a bit of clairvoyance we try the half-angle substitution     θ θM sin = sin sin ϕ. (5.127) 2 2 With this, Eq. (5.126) becomes   −1/2  1/2 π/2  l 2 2 θM 1 − sin sin ϕ T =4 dϕ. g 2 0 (5.128) Although not an obvious improvement over Eq. (5.126), the integral now defines the complete elliptic integral of the first kind, K(sin2 θM /2). From the series expansion, the period of our pendulum may be developed as a power series — powers of sin θM /2:  1/2   1 2 θM l 9 4 θM 1 + sin T = 2π + sin + ··· . (5.129) g 4 2 64 2  372 Chapter 5 Infinite Series Definitions Generalizing Example 5.8.1 to include the upper limit as a variable, the elliptic integral of the first kind is defined as ϕ  −1/2 F (ϕ\α) = 1 − sin2 α sin2 θ dθ, (5.130a) 0 or F (x|m) = 0 x   −1/2 1 − t 2 1 − mt 2 dt, 0 ≤ m < 1. (5.130b) (This is the notation of AMS-55 see footnote 4 for the reference.) For ϕ = π/2, x = 1, we have the complete elliptic integral of the first kind, π/2  −1/2 1 − m sin2 θ dθ K(m) = 0 = 0 1   −1/2 dt, 1 − t 2 1 − mt 2 with m = sin2 α, 0 ≤ m < 1. The elliptic integral of the second kind is defined by ϕ  1/2 1 − sin2 α sin2 θ dθ E(ϕ\α) = (5.131) (5.132a) 0 or E(x|m) = x 0 1 − mt 2 1 − t2 1/2 dt, 0 ≤ m ≤ 1. (5.132b) Again, for the case ϕ = π/2, x = 1, we have the complete elliptic integral of the second kind: π/2  1/2 1 − m sin2 θ E(m) = dθ 0 = 1 0 1 − mt 2 1 − t2 1/2 dt, 0 ≤ m ≤ 1. (5.133) Exercise 5.8.1 is an example of its occurrence. Figure 5.9 shows the behavior of K(m) and E(m). Extensive tables are available in AMS-55 (see Exercise 5.2.22 for the reference). 
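Example 5.8.1 gives the pendulum period as T = 4(l/g)^{1/2} K(sin²(θ_M/2)), that is, T/T₀ = (2/π) K(m) with T₀ the small-amplitude period. One library-free way to evaluate K, not used in the text but standard, is the arithmetic-geometric mean iteration, K(m) = π / [2·agm(1, √(1 − m))]. A sketch under that assumption:

```python
# K(m) from the arithmetic-geometric mean: K(m) = pi/(2*agm(1, sqrt(1-m))).
# Applied to the pendulum: T/T0 = (2/pi)*K(sin(theta_max/2)**2).
import math

def agm(a, g, tol=1e-15):
    """Arithmetic-geometric mean of a and g; converges quadratically."""
    while abs(a - g) > tol * a:
        a, g = (a + g) / 2, math.sqrt(a * g)
    return a

def K(m):
    return math.pi / (2 * agm(1.0, math.sqrt(1.0 - m)))

def period_ratio(theta_max):
    """T/T0 for amplitude theta_max in radians."""
    m = math.sin(theta_max / 2) ** 2
    return (2 / math.pi) * K(m)

r_small = period_ratio(1e-6)        # -> 1 in the small-amplitude limit
r_90 = period_ratio(math.pi / 2)    # roughly an 18% longer period
```

The AGM converges quadratically, so a handful of iterations gives full double precision even as m → 1, where the series expansion of K is useless.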
Series Expansion For our range 0 ≤ m < 1, the denominator of K(m) may be expanded by the binomial series  −1/2 1 3 1 − m sin2 θ = 1 + m sin2 θ + m2 sin4 θ + · · · 2 8 ∞  (2n − 1)!! n 2n m sin θ. (5.134) = (2n)!! n=0 5.8 Elliptic Integrals FIGURE 5.9 373 Complete elliptic integrals, K(m) and E(m). For any closed interval [0, mmax ], mmax < 1, this series is uniformly convergent and may be integrated term by term. From Exercise 8.4.9, π/2 (2n − 1)!! π · . (5.135) sin2n θ dθ = (2n)!! 2 0 Hence    ∞  (2n − 1)!! 2 n π K(m) = 1+ m . 2 (2n)!! (5.136)    ∞  (2n − 1)!! 2 mn π E(m) = 1− 2 (2n)!! 2n − 1 (5.137) n=1 Similarly, n=1 (Exercise 5.8.2). In Section 13.5 these series are identified as hypergeometric functions, and we have   1 1 π K(m) = 2 F1 , ; 1; m (5.138) 2 2 2 E(m) =   1 1 π − , ; 1; m . F 2 1 2 2 2 (5.139) 374 Chapter 5 Infinite Series Limiting Values From the series Eqs. (5.136) and (5.137), or from the defining integrals, π lim K(m) = , m→0 2 π lim E(m) = . m→0 2 For m → 1 the series expansions are of little use. However, the integrals yield lim K(m) = ∞, m→1 (5.140) (5.141) (5.142) the integral diverging logarithmically, and lim E(m) = 1. m→1 (5.143) The elliptic integrals have been used extensively in the past for evaluating integrals. For instance, integrals of the form x  I= R t, a4 t 4 + a3 t 3 + a2 t 2 + a1 t 1 + a0 dt, 0 where R is a rational function of t and of the radical, may be expressed in terms of elliptic integrals. Jahnke and Emde, Tables of Functions with Formulae and Curves. New York: Dover (1943), Chapter 5, give pages of such transformations. With computers available for direct numerical evaluation, interest in these elliptic integral techniques has declined. However, elliptic integrals still remain of interest because of their appearance in physical problems — see Exercises 5.8.4 and 5.8.5. 
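The series Eqs. (5.136) and (5.137) are straightforward to program, since the coefficient [(2n−1)!!/(2n)!!]² can be built up recursively. The sketch below (not from the text) checks both series against a crude midpoint quadrature of the defining integrals (5.131) and (5.133).

```python
# Series for K(m), Eq. (5.136), and E(m), Eq. (5.137), with the
# double-factorial coefficient accumulated term by term; checked against
# midpoint quadrature of the defining integrals.  Valid for 0 <= m < 1;
# convergence slows badly as m -> 1.
import math

def K_series(m, terms=200):
    total, coeff = 1.0, 1.0              # coeff = [(2n-1)!!/(2n)!!]**2
    for n in range(1, terms):
        coeff *= ((2 * n - 1) / (2 * n)) ** 2
        total += coeff * m**n
    return math.pi / 2 * total

def E_series(m, terms=200):
    total, coeff = 1.0, 1.0
    for n in range(1, terms):
        coeff *= ((2 * n - 1) / (2 * n)) ** 2
        total -= coeff * m**n / (2 * n - 1)
    return math.pi / 2 * total

def quad(f, a, b, n=20000):              # simple midpoint rule
    h = (b - a) / n
    return h * sum(f(a + (i + 0.5) * h) for i in range(n))

m = 0.5
K_int = quad(lambda t: 1 / math.sqrt(1 - m * math.sin(t) ** 2), 0, math.pi / 2)
E_int = quad(lambda t: math.sqrt(1 - m * math.sin(t) ** 2), 0, math.pi / 2)
```

At m = 0 both series reduce to their leading term π/2, reproducing the limits (5.140) and (5.141).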
For an extensive account of elliptic functions, integrals, and Jacobi theta functions, you are directed to Whittaker and Watson’s treatise A Course in Modern Analysis, 4th ed. Cambridge, UK: Cambridge University Press (1962). Exercises 5.8.1 The ellipse x 2 /a 2 + y 2 /b2 = 1 may be represented parametrically by x = a sin θ, y = b cos θ . Show that the length of arc within the first quadrant is π/2  1/2 a 1 − m sin2 θ dθ = aE(m). 0 Here 5.8.2 0 ≤ m = (a 2 − b2 )/a 2 Derive the series expansion E(m) = 5.8.3 ≤ 1. Show that   2    1 m 1 · 3 2 m2 π 1− − − ··· . 2 2 1 2·4 3 lim m→0 (K − E) π = . m 4 5.8 Elliptic Integrals FIGURE 5.10 5.8.4 375 Circular wire loop. A circular loop of wire in the xy-plane, as shown in Fig. 5.10, carries a current I . Given that the vector potential is aµ0 I π cos α dα , Aϕ (ρ, ϕ, z) = 2 2 2π 0 (a + ρ + z2 − 2aρ cos α)1/2 show that Aϕ (ρ, ϕ, z) = where µ0 I πk  1/2      k2 a 1− K k2 − E k2 , ρ 2 k2 = 4aρ . (a + ρ)2 + z2 Note. For extension of Exercise 5.8.4 to B, see Smythe, p. 270.17 5.8.5 An analysis of the magnetic vector potential of a circular current loop leads to the expression     f k 2 = k −2 2 − k 2 K k 2 − 2E k 2 , where K(k 2 ) and E(k 2 ) are the complete elliptic integrals of the first and second kinds. Show that for k 2 ≪ 1 (r ≫ radius of loop)  πk 2 . f k2 ≈ 16 17 W. R. Smythe, Static and Dynamic Electricity, 3rd ed. New York: McGraw-Hill (1969). 376 Chapter 5 Infinite Series 5.8.6 Show that (a) dE(k 2 ) 1 = (E − K), dk k (b) E K dK(k 2 ) − . = dk k k(1 − k 2 ) Hint. For part (b) show that   E k2 = 1 − k2 by comparing series expansions. 0 π/2  1 − k sin2 θ −3/2 dθ 5.8.7 (a) Write a function subroutine that will compute E(m) from the series expansion, Eq. (5.137). (b) Test your function subroutine by using it to calculate E(m) over the range m = 0.0(0.1)0.9 and comparing the result with the values given by AMS-55 (see Exercise 5.2.22 for the reference). 5.8.8 Repeat Exercise 5.8.7 for K(m). Note. These series for E(m), Eq. 
(5.137), and K(m), Eq. (5.136), converge only very slowly for m near 1. More rapidly converging series for E(m) and K(m) exist. See Dwight’s Tables of Integrals:18 No. 773.2 and 774.2. Your computer subroutine for computing E and K probably uses polynomial approximations: AMS-55, Chapter 17. 5.8.9 A simple pendulum is swinging with a maximum amplitude of θM . In the limit as θM → 0, the period is 1 s. Using the elliptic integral, K(k 2 ), k = sin(θM /2), calculate the period T for θM = 0 (10◦ ) 90◦ . Caution. Some elliptic integral subroutines require k = m1/2 as an input parameter, not m itself. Check values. 5.8.10 θM T (sec) 10◦ 1.00193 50◦ 1.05033 90◦ 1.18258 Calculate the magnetic vector potential A(ρ, ϕ, z) = ϕA ˆ ϕ (ρ, ϕ, z) of a circular current loop (Exercise 5.8.4) for the ranges ρ/a = 2, 3, 4, and z/a = 0, 1, 2, 3, 4. Note. This elliptic integral calculation of the magnetic vector potential may be checked by an associated Legendre function calculation, Example 12.5.1. Check value. For ρ/a = 3 and z/a = 0; Aϕ = 0.029023µ0 I . 5.9 BERNOULLI NUMBERS, EULER–MACLAURIN FORMULA The Bernoulli numbers were introduced by Jacques (James, Jacob) Bernoulli. There are several equivalent definitions, but extreme care must be taken, for some authors introduce 18 H. B. Dwight, Tables of Integrals and Other Mathematical Data. New York: Macmillan (1947). 5.9 Bernoulli Numbers,Euler–Maclaurin Formula 377 variations in numbering or in algebraic signs. One relatively simple approach is to define the Bernoulli numbers by the series19 ∞  Bn x n x = , ex − 1 n! (5.144) n=0 which converges for |x| < 2π by the ratio test substitut Eq. (5.153) (see also Example 7.1.7). By differentiating this power series repeatedly and then setting x = 0, we obtain Bn =   x dn . dx n ex − 1 x=0 (5.145) Specifically,     x 1 xex  d 1  B1 = − x = x =− ,   x 2 dx e − 1 x=0 e − 1 (e − 1) x=0 2 (5.146) as may be seen by series expansion of the denominators. 
Using B0 = 1 and B1 = − 21 , it is easy to verify that the function ∞ x x  xn x x Bn −1+ = = − −x −1− x e −1 2 n! e −1 2 (5.147) n=2 is even in x, so all B2n+1 = 0. To derive a recursion relation for the Bernoulli numbers, we multiply ex − 1 x =1= x ex − 1  ∞ m=0 =1+ + xm (m + 1)! ∞  xm m=1 ∞  xN N =2    ∞ x  x 2n 1− + B2n 2 (2n)! n=1 1 1 − (m + 1)! 2m!  1≤n≤N/2  B2n . (2n)!(N − 2n + 1)! (5.148) For N > 0 the coefficient of x N is zero, so Eq. (5.148) yields 1 (N + 1) − 1 = 2  1≤n≤N/2 B2n   N +1 1 = (N − 1), 2n 2 (5.149) 19 The function x/(ex − 1) may be considered a generating function since it generates the Bernoulli numbers. Generating functions of the special functions of mathematical physics appear in Chapters 11, 12, and 13. 378 Chapter 5 Infinite Series Table 5.1 Bernoulli Numbers n Bn 0 1 Bn 1 − 12 1.0000 00000 −0.5000 00000 1 6 1 − 30 1 42 1 − 30 5 66 2 4 6 8 10 0.1666 66667 −0.0333 33333 0.0238 09524 −0.0333 33333 0.0757 57576 Note. Further values are given in National Bureau of Standards, Handbook of Mathematical Functions (AMS-55). See footnote 4 for the reference. which is equivalent to   N 2N + 1 1  N− = B2n , 2n 2 n=1 N −1= N −1  n=1   2N . B2n 2n (5.150) From Eq. (5.150) the Bernoulli numbers in Table 5.1 are readily obtained. If the variable x in Eq. (5.144) is replaced by 2ix we obtain an alternate (and equivalent) definition of B2n (B1 is set equal to − 12 by Eq. (5.146)) by the expression x cot x = ∞  (2x)2n , (−1)n B2n (2n)! n=0 −π < x < π. (5.151) Using the method of residues (Section 7.1) or working from the infinite product representation of sin x (Section 5.11), we find that B2n = ∞ (−1)n−1 2(2n)!  −2n p , (2π)2n p=1 n = 1, 2, 3, . . . . (5.152) This representation of the Bernoulli numbers was discovered by Euler. It is readily seen from Eq. (5.152) that |B2n | increases without limit as n → ∞. 
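The recursion Eq. (5.150), N − 1 = Σ_{n=1}^{N−1} C(2N, 2n) B_{2n}, determines each even-index Bernoulli number from its predecessors. A sketch (not from the text) that reproduces Table 5.1 in exact rational arithmetic:

```python
# Bernoulli numbers from the recursion Eq. (5.150):
#   N - 1 = sum_{n=1}^{N-1} C(2N, 2n) * B_{2n},
# solved for the highest B_{2N-2} with exact fractions.
from fractions import Fraction
from math import comb

def bernoulli_even(count):
    """Return [B_2, B_4, ..., B_{2*count}] as exact fractions."""
    B = []                                # B[n-1] holds B_{2n}
    for N in range(2, count + 2):         # each N determines B_{2N-2}
        rhs = Fraction(N - 1)
        for n in range(1, N - 1):
            rhs -= comb(2 * N, 2 * n) * B[n - 1]
        B.append(rhs / comb(2 * N, 2 * N - 2))
    return B

B_even = bernoulli_even(5)   # B2 .. B10, cf. Table 5.1
```

Exact fractions matter here: the alternating, rapidly growing terms of the recursion make floating-point evaluation lose digits quickly as 2n grows.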
Numerical values have been calculated by Glaisher.20 Illustrating the divergent behavior of the Bernoulli numbers, we have B20 = −5.291 × 102 B200 = −3.647 × 10215 . 20 J. W. L. Glaisher, table of the first 250 Bernoulli’s numbers (to nine figures) and their logarithms (to ten figures). Trans. Cambridge Philos. Soc. 12: 390 (1871–1879). 5.9 Bernoulli Numbers,Euler–Maclaurin Formula 379 Some authors prefer to define the Bernoulli numbers with a modified version of Eq. (5.152) by using Bn = ∞ 2(2n)!  −2n p , (2π)2n (5.153) p=1 the subscript being just half of our subscript and all signs positive. Again, when using other texts or references, you must check to see exactly how the Bernoulli numbers are defined. The Bernoulli numbers occur frequently in number theory. The von Staudt–Clausen theorem states that B2n = An − 1 1 1 1 − − − ··· − , p1 p2 p 3 pk (5.154) in which An is an integer and p1 , p2 , . . . , pk are prime numbers so that pi − 1 is a divisor of 2n. It may readily be verified that this holds for B6 (A3 = 1, p = 2, 3, 7), B8 (A4 = 1, p = 2, 3, 5), (5.155) B10 (A5 = 1, p = 2, 3, 11), and other special cases. The Bernoulli numbers appear in the summation of integral powers of the integers, N  j p, p integral, j =1 and in numerous series expansions of the transcendental functions, including tan x, cot x, ln | sin x|, (sin x)−1 , ln | cos x|, ln | tan x|, (cosh x)−1 , tanh x, and coth x. For example, tan x = x + 2 x3 (−1)n−1 22n (22n − 1)B2n 2n−1 + x5 + · · · + x + ··· . 3 15 (2n)! (5.156) The Bernoulli numbers are likely to come in such series expansions because of the defining equations (5.144), (5.150), and (5.151) and because of their relation to the Riemann zeta function, ζ (2n) = ∞  p −2n . (5.157) p=1 Bernoulli Polynomials If Eq. (5.144) is generalized slightly, we have ∞  xexs xn = B (s) n ex − 1 n! 
n=0 (5.158) 380 Chapter 5 Infinite Series Table 5.2 Bernoulli Polynomials B0 = 1 B1 = x − 12 B2 = x 2 − x + 61 B3 = x 3 − 32 x 2 + 21 x 1 B4 = x 4 − 2x 3 + x 2 − 30 B5 = x 5 − 52 x 4 + 35 x 3 − 16 x 1 B6 = x 6 − 3x 5 + 25 x 4 − 12 x 2 + 42 Bn (0) = Bn , Bernoulli number defining the Bernoulli polynomials, Bn (s). The first seven Bernoulli polynomials are given in Table 5.2. From the generating function, Eq. (5.158), Bn (0) = Bn , n = 0, 1, 2, . . . , (5.159) the Bernoulli polynomial evaluated at zero equals the corresponding Bernoulli number. Two particularly important properties of the Bernoulli polynomials follow from the defining relation, Eq, (5.158): a differentiation relation d Bn (s) = nBn−1 (s), ds n = 1, 2, 3, . . . , (5.160) and a symmetry relation (replace x → −x in Eq. (5.158) and then set s = 1) Bn (1) = (−1)n Bn (0), n = 1, 2, 3, . . . . (5.161) These relations are used in the development of the Euler–Maclaurin integration formula. Euler–Maclaurin Integration Formula One use of the Bernoulli functions is in the derivation of the Euler–Maclaurin integration formula. This formula is used in Section 8.3 for the development of an asymptotic expression for the factorial function — Stirling’s series. The technique is repeated integration by parts, using Eq. (5.160) to create new derivatives. We start with 1 1 f (x)B0 (x) dx. (5.162) f (x) dx = 0 0 From Eq. (5.160) and Exercise 5.9.2, B1′ (x) = B0 (x) = 1. (5.163) 5.9 Bernoulli Numbers,Euler–Maclaurin Formula 381 Substituting B1′ (x) into Eq. (5.162) and integrating by parts, we obtain 1 1 f (x) dx = f (1)B1 (1) − f (0)B1 (0) − f ′ (x)B1 (x) dx 0 0 = 1 f (1) + f (0) − 2 Again using Eq. (5.160), we have 1 f ′ (x)B1 (x) dx. (5.164) 0 1 B1 (x) = B2′ (x), 2 and integrating by parts we get 1 1 ′ 1 f (1)B2 (1) − f ′ (0)B2 (0) f (x) dx = f (1) + f (0) − 2 2! 0 1 1 + f (2) (x)B2 (x) dx. 2! 0 (5.165) (5.166) Using the relations B2n (1) = B2n (0) = B2n , n = 0, 1, 2, . . . B2n+1 (1) = B2n+1 (0) = 0, n = 1, 2, 3, . . . 
(5.167) and continuing this process, we have 1 q  1  1 B2p f (2p−1) (1) − f (2p−1) (0) f (x) dx = f (1) + f (0) − 2 (2p)! 0 p=1 + 1 (2q)! 1 f (2q) (x)B2q (x) dx. (5.168a) 0 This is the Euler–Maclaurin integration formula. It assumes that the function f (x) has the required derivatives. The range of integration in Eq. (5.168a) may be shifted from [0, 1] to [1, 2] by replacing f (x) by f (x + 1). Adding such results up to [n − 1, n], we obtain n 1 1 f (x) dx = f (0) + f (1) + f (2) + · · · + f (n − 1) + f (n) 2 2 0 − + q  p=1  1 B2p f (2p−1) (n) − f (2p−1) (0) (2p)! 1 (2q)! 0 1 B2q (x) n−1  ν=0 f (2q) (x + ν) dx. (5.168b) The terms 12 f (0) + f (1) + · · · + 21 f (n) appear exactly as in trapezoidal integration, or quadrature. The summation over p may be interpreted as a correction to the trapezoidal approximation. Equation (5.168b) may be seen as a generalization of Eq. (5.22); it is the 382 Chapter 5 Infinite Series Table 5.3 Riemann Zeta Function s ζ (s) 2 3 4 5 6 7 8 9 10 1.6449340668 1.2020569032 1.0823232337 1.0369277551 1.0173430620 1.0083492774 1.0040773562 1.0020083928 1.0009945751 form used in Exercise 5.9.5 for summing positive powers of integers and in Section 8.3 for the derivation of Stirling’s formula. The Euler–Maclaurin formula is often useful in summing series by converting them to integrals.21 Riemann Zeta Function −2n , was used as a comparison series for testing convergence (SecThis series, ∞ p=1 p tion 5.2) and in Eq. (5.152) as one definition of the Bernoulli numbers, B2n . It also serves to define the Riemann zeta function by ζ (s) ≡ ∞  n−s , s > 1. (5.169) n=1 Table 5.3 lists the values of ζ (s) for integral s, s = 2, 3, . . . , 10. Closed forms for even s appear in Exercise 5.9.6. Figure 5.11 is a plot of ζ (s) − 1. An integral expression for this Riemann zeta function appears in Exercise 8.2.21 as part of the development of the gamma function, and the functional relation is given in Section 14.3. 
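For a polynomial the derivative corrections in the Euler-Maclaurin formula (5.168b) terminate, and the formula becomes exact. The sketch below (not from the text) applies it to f(x) = x³, where only the B₂ term survives and the result must equal the classical sum of cubes, [n(n+1)/2]².

```python
# Euler-Maclaurin, Eq. (5.168b), for f(x) = x**3:
#   sum_{k=0}^{n} k**3 = integral + (f(0)+f(n))/2 + B2/2! * (f'(n)-f'(0)),
# with all higher corrections vanishing because f''' is constant.
def euler_maclaurin_cubes(n):
    integral = n**4 / 4                     # integral of x**3 from 0 to n
    trapezoid_ends = (0**3 + n**3) / 2      # (f(0) + f(n)) / 2
    b2_term = (1 / 6) / 2 * (3 * n**2 - 0)  # B2/2! * (f'(n) - f'(0))
    return integral + trapezoid_ends + b2_term

n = 100
em = euler_maclaurin_cubes(n)
direct = sum(k**3 for k in range(n + 1))    # equals (n*(n+1)//2)**2
```

The n⁴/4 term is the trapezoidal estimate; the Bernoulli term is the finite correction that the text describes, here closing the gap exactly.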
The celebrated Euler prime number product for the Riemann zeta function may be derived as    1 1 1 1 1 −s = 1 + s + s + ··· − s + s + s + ··· ; (5.170) ζ (s) 1 − 2 2 3 2 4 6 eliminating all the n−s , where n is a multiple of 2. Then   1 1 1 1 ζ (s) 1 − 2−s 1 − 3−s = 1 + s + s + s + s + · · · 3 5 7 9   1 1 1 − s + s + s + ··· ; 3 9 15 (5.171) 21 See R. P. Boas and C. Stutz, Estimating sums with integrals. Am. J. Phys. 39: 745 (1971), for a number of examples. 5.9 Bernoulli Numbers,Euler–Maclaurin Formula 383 FIGURE 5.11 Riemann zeta function, ζ (s) − 1 versus s. eliminating all the remaining terms in which n is a multiple of 3. Continuing, we have ζ (s)(1 − 2−s )(1 − 3−s )(1 − 5−s ) · · · (1 − P −s ), where P is a prime number, and all terms n−s , in which n is a multiple of any integer up through P , are canceled out. As P → ∞,    ζ (s) 1 − 2−s 1 − 3−s · · · 1 − P −s → ζ (s) ∞ #  1 − P −s = 1. (5.172) P (prime)=2 Therefore ζ (s) = ∞ #  P (prime)=2 1 − P −s −1 , (5.173) giving ζ (s) as an infinite product.22 This cancellation procedure has a clear application in numerical computation. Equation (5.170) will give ζ (s)(1 − 2−s ) to the same accuracy as Eq. (5.169) gives ζ (s), but 22 This is the starting point for the extensive applications of the Riemann zeta function to analytic number theory. See H. M. Ed- wards, Riemann’s Zeta Function. New York: Academic Press (1974); A. Ivi´c, The Riemann Zeta Function. New York: Wiley (1985); S. J. Patterson, Introduction to the Theory of the Riemann Zeta Function. Cambridge, UK: Cambridge University Press (1988). 384 Chapter 5 Infinite Series with only half as many terms. (In either case, a correction would be made for the neglected tail of the series by the Maclaurin integral test technique — replacing the series by an integral, Section 5.2.) Along with the Riemann zeta function, AMS-55 (Chapter 23. See Exercise 5.2.22 for the reference.) 
defines three other Dirichlet series related to ζ(s):

\[
\eta(s) = \sum_{n=1}^{\infty} (-1)^{n-1} n^{-s} = \bigl(1 - 2^{1-s}\bigr)\zeta(s),
\]
\[
\lambda(s) = \sum_{n=0}^{\infty} (2n+1)^{-s} = \bigl(1 - 2^{-s}\bigr)\zeta(s),
\]
and
\[
\beta(s) = \sum_{n=0}^{\infty} (-1)^{n} (2n+1)^{-s}.
\]

From the Bernoulli numbers (Exercise 5.9.6) or Fourier series (Example 14.3.3 and Exercise 14.3.13), special values are

\[
\zeta(2) = 1 + \frac{1}{2^2} + \frac{1}{3^2} + \cdots = \frac{\pi^2}{6},
\qquad
\zeta(4) = 1 + \frac{1}{2^4} + \frac{1}{3^4} + \cdots = \frac{\pi^4}{90},
\]
\[
\eta(2) = 1 - \frac{1}{2^2} + \frac{1}{3^2} - \cdots = \frac{\pi^2}{12},
\qquad
\eta(4) = 1 - \frac{1}{2^4} + \frac{1}{3^4} - \cdots = \frac{7\pi^4}{720},
\]
\[
\lambda(2) = 1 + \frac{1}{3^2} + \frac{1}{5^2} + \cdots = \frac{\pi^2}{8},
\qquad
\lambda(4) = 1 + \frac{1}{3^4} + \frac{1}{5^4} + \cdots = \frac{\pi^4}{96},
\]
\[
\beta(1) = 1 - \frac{1}{3} + \frac{1}{5} - \cdots = \frac{\pi}{4},
\qquad
\beta(3) = 1 - \frac{1}{3^3} + \frac{1}{5^3} - \cdots = \frac{\pi^3}{32}.
\]

Catalan's constant,
\[
\beta(2) = 1 - \frac{1}{3^2} + \frac{1}{5^2} - \cdots = 0.91596559\ldots,
\]
is the topic of Exercise 5.2.22.

Improvement of Convergence

If we are required to sum a convergent series \(\sum_{n=1}^{\infty} a_n\) whose terms are rational functions of n, the convergence may be improved dramatically by introducing the Riemann zeta function.

Example 5.9.1 IMPROVEMENT OF CONVERGENCE

The problem is to evaluate the series \(\sum_{n=1}^{\infty} 1/(1+n^2)\). Expanding \((1+n^2)^{-1} = n^{-2}(1+n^{-2})^{-1}\) by direct division, we have

\[
\bigl(1 + n^2\bigr)^{-1}
= n^{-2}\Bigl(1 - n^{-2} + n^{-4} - \frac{n^{-6}}{1+n^{-2}}\Bigr)
= \frac{1}{n^2} - \frac{1}{n^4} + \frac{1}{n^6} - \frac{1}{n^8 + n^6}.
\]

Therefore

\[
\sum_{n=1}^{\infty} \frac{1}{1+n^2}
= \zeta(2) - \zeta(4) + \zeta(6) - \sum_{n=1}^{\infty} \frac{1}{n^8 + n^6}.
\]

The ζ values are tabulated and the remainder series converges as n^{−8}. Clearly, the process can be continued as desired. You make a choice between how much algebra you will do and how much arithmetic the computer will do. Other methods for improving computational effectiveness are given at the end of Sections 5.2 and 5.4.

Exercises

5.9.1 Show that
\[
\tan x = \sum_{n=1}^{\infty} \frac{(-1)^{n-1}\,2^{2n}\bigl(2^{2n} - 1\bigr)B_{2n}}{(2n)!}\,x^{2n-1},
\qquad -\frac{\pi}{2} < x < \frac{\pi}{2}.
\]
Hint. tan x = cot x − 2 cot 2x.

5.9.2 Show that the first Bernoulli polynomials are
\[
B_0(s) = 1, \qquad B_1(s) = s - \tfrac{1}{2}, \qquad B_2(s) = s^2 - s + \tfrac{1}{6}.
\]
Note that B_n(0) = B_n, the Bernoulli number.

5.9.3 Show that B′_n(s) = nB_{n−1}(s), n = 1, 2, 3, ....
Hint. Differentiate Eq. (5.158).
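The acceleration of Example 5.9.1 can be checked directly. A minimal sketch, where the 50-term cutoffs and the closed form (π coth π − 1)/2 used for comparison are this illustration's choices, not taken from the text:

```python
import math

# known closed forms for zeta(2), zeta(4), zeta(6) (Exercise 5.9.6)
zeta2, zeta4, zeta6 = math.pi**2 / 6, math.pi**4 / 90, math.pi**6 / 945

# direct (slow) partial sum of sum 1/(1 + n^2): error falls only like 1/N
direct = sum(1.0 / (1 + n * n) for n in range(1, 51))

# accelerated form of Example 5.9.1: remainder series converges as n^{-8}
remainder = sum(1.0 / (n**8 + n**6) for n in range(1, 51))
accelerated = zeta2 - zeta4 + zeta6 - remainder

# closed-form value (pi*coth(pi) - 1)/2, used here only as a yardstick
exact = (math.pi / math.tanh(math.pi) - 1.0) / 2.0
print(direct, accelerated, exact)
```

With the same 50 terms, the direct sum is off in the second decimal while the accelerated sum is correct to roughly machine precision, which is the trade of algebra against arithmetic the example describes.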
Integrals such as this appear in the quantum theory of transport effects (thermal and electrical conductivity).

5.9.9 The Bloch–Grüneisen approximation for the resistance in a monovalent metal is
\[
\rho = C\,\frac{T^5}{\Theta^6}\int_0^{\Theta/T} \frac{x^5\,dx}{(e^x - 1)(1 - e^{-x})},
\]
where Θ is the Debye temperature characteristic of the metal.
(a) For T → ∞, show that
\[
\rho \approx \frac{C}{4}\cdot\frac{T}{\Theta^2}.
\]
(b) For T → 0, show that
\[
\rho \approx 5!\,\zeta(5)\,C\,\frac{T^5}{\Theta^6}.
\]

5.9.10 Show that
(a) \(\displaystyle\int_0^1 \frac{\ln(1+x)}{x}\,dx = \frac{1}{2}\zeta(2)\),
(b) \(\displaystyle\lim_{a\to 1}\int_0^a \frac{\ln(1-x)}{x}\,dx = -\zeta(2)\).
From Exercise 5.9.6, ζ(2) = π²/6. Note that the integrand in part (b) diverges for a = 1 but that the integrated series is convergent.

5.9.11 The integral
\[
\int_0^1 \bigl[\ln(1-x)\bigr]^2\,\frac{dx}{x}
\]
appears in the fourth-order correction to the magnetic moment of the electron. Show that it equals 2ζ(3).
Hint. Let 1 − x = e^{−t}.

5.9.12 Show that
\[
\int_0^{\infty} \frac{(\ln z)^2}{1+z^2}\,dz = 4\Bigl(1 - \frac{1}{3^3} + \frac{1}{5^3} - \frac{1}{7^3} + \cdots\Bigr).
\]
By contour integration (Exercise 7.1.17), this may be shown equal to π³/8.

5.9.13 For "small" values of x,
\[
\ln(x!) = -\gamma x + \sum_{n=2}^{\infty} (-1)^n \frac{\zeta(n)}{n}\,x^n,
\]
where γ is the Euler–Mascheroni constant and ζ(n) is the Riemann zeta function. For what values of x does this series converge?
ANS. −1 < x ≤ 1.
Note that if x = 1, we obtain
\[
\gamma = \sum_{n=2}^{\infty} (-1)^n \frac{\zeta(n)}{n},
\]
a series for the Euler–Mascheroni constant. The convergence of this series is exceedingly slow. For actual computation of γ, other, indirect approaches are far superior (see Exercises 5.10.11 and 8.5.16).

5.9.14 Show that the series expansion of ln(x!) (Exercise 5.9.13) may be written as
(a) \(\displaystyle \ln(x!) = \frac{1}{2}\ln\frac{\pi x}{\sin \pi x} - \gamma x - \sum_{n=1}^{\infty} \frac{\zeta(2n+1)}{2n+1}\,x^{2n+1}\),
(b) \(\displaystyle \ln(x!) = \frac{1}{2}\ln\frac{\pi x}{\sin \pi x} - \frac{1}{2}\ln\frac{1+x}{1-x} + (1-\gamma)x - \sum_{n=1}^{\infty} \bigl[\zeta(2n+1) - 1\bigr]\frac{x^{2n+1}}{2n+1}\).
Determine the range of convergence of each of these expressions.

5.9.15 Show that Catalan's constant, β(2), may be written as
\[
\beta(2) = 2\sum_{k=1}^{\infty} (4k-3)^{-2} - \frac{\pi^2}{8}.
\]
Hint. π² = 6ζ(2).
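The rearrangement of Exercise 5.9.15 can be evaluated as it stands; a sketch, where the cutoff of one million terms is this illustration's choice and the sum is left without a tail correction to show the slow convergence:

```python
import math

# Exercise 5.9.15: beta(2) = 2 * sum_{k>=1} (4k - 3)^{-2}  -  pi^2 / 8
K = 1_000_000
partial = sum((4.0 * k - 3.0) ** -2 for k in range(1, K + 1))
catalan = 2.0 * partial - math.pi**2 / 8

print(catalan)  # approaches Catalan's constant 0.91596559... from below
```

Even with 10⁶ terms the result is good to only about seven decimals, since the neglected tail is of order 1/(8K); a Maclaurin integral-test correction of the kind mentioned after Eq. (5.173) would recover several more digits.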
5.9.16 Derive the following expansions of the Debye functions, for n ≥ 1:
\[
\int_0^x \frac{t^n\,dt}{e^t - 1}
= x^n\Bigl[\frac{1}{n} - \frac{x}{2(n+1)} + \sum_{k=1}^{\infty} \frac{B_{2k}\,x^{2k}}{(2k+n)(2k)!}\Bigr],
\qquad |x| < 2\pi;
\]
\[
\int_x^{\infty} \frac{t^n\,dt}{e^t - 1}
= \sum_{k=1}^{\infty} e^{-kx}\Bigl[\frac{x^n}{k} + \frac{n x^{n-1}}{k^2} + \frac{n(n-1)x^{n-2}}{k^3} + \cdots + \frac{n!}{k^{n+1}}\Bigr],
\qquad x > 0.
\]
The complete integral (0, ∞) equals n! ζ(n + 1), Exercise 8.2.15.

5.9.17 (a) Show that the equation \(\ln 2 = \sum_{s=1}^{\infty} (-1)^{s+1} s^{-1}\) (Exercise 5.4.1) may be rewritten as
\[
\ln 2 = \sum_{s=2}^{n} 2^{-s}\zeta(s)
+ \sum_{p=1}^{\infty} (2p)^{-n-1}\Bigl(1 - \frac{1}{2p}\Bigr)^{-1}.
\]
Hint. Take the terms in pairs.
(b) Calculate ln 2 to six significant figures.

5.9.18 (a) Show that the equation \(\pi/4 = \sum_{n=1}^{\infty} (-1)^{n+1}(2n-1)^{-1}\) (Exercise 5.7.6) may be rewritten as
\[
\frac{\pi}{4} = 1 - 2\sum_{s=1}^{n} 4^{-2s}\zeta(2s)
- 2\sum_{p=1}^{\infty} (4p)^{-2n-2}\Bigl(1 - \frac{1}{(4p)^2}\Bigr)^{-1}.
\]
(b) Calculate π/4 to six significant figures.

5.9.19 Write a function subprogram ZETA(N) that will calculate the Riemann zeta function for integer argument. Tabulate ζ(s) for s = 2, 3, 4, ..., 20. Check your values against Table 5.3 and AMS-55, Chapter 23 (see Exercise 5.2.22 for the reference).
Hint. If you supply the function subprogram with the known values of ζ(2), ζ(3), and ζ(4), you avoid the more slowly converging series. Calculation time may be further shortened by using Eq. (5.170).

5.9.20 Calculate the logarithm (base 10) of |B_{2n}|, n = 10, 20, ..., 100.
Hint. Program ζ(n) as a function subprogram, Exercise 5.9.19.
Check values. log|B₁₀₀| = 78.45, log|B₂₀₀| = 215.56.

5.10 ASYMPTOTIC SERIES

Asymptotic series frequently occur in physics. In numerical computations they are employed for the accurate computation of a variety of functions. We consider here two types of integrals that lead to asymptotic series: first, an integral of the form
\[
I_1(x) = \int_x^{\infty} e^{-u} f(u)\,du,
\]
where the variable x appears as the lower limit of an integral. Second, we consider the form
\[
I_2(x) = \int_0^{\infty} e^{-u} f\Bigl(\frac{u}{x}\Bigr)\,du,
\]
with the function f to be expanded as a Taylor series (binomial series).
Asymptotic series often occur as solutions of differential equations. An example of this appears in Section 11.6 as a solution of Bessel's equation.

Incomplete Gamma Function

The nature of an asymptotic series is perhaps best illustrated by a specific example. Suppose that we have the exponential integral function^23
\[
\mathrm{Ei}(x) = \int_{-\infty}^{x} \frac{e^u}{u}\,du,   (5.174)
\]
or
\[
-\mathrm{Ei}(-x) = \int_x^{\infty} \frac{e^{-u}}{u}\,du = E_1(x),   (5.175)
\]
to be evaluated for large values of x. Or let us take a generalization of the incomplete factorial function (incomplete gamma function),^24
\[
I(x, p) = \int_x^{\infty} e^{-u} u^{-p}\,du = \Gamma(1-p, x),   (5.176)
\]
in which x and p are positive. Again, we seek to evaluate it for large values of x. Integrating by parts, we obtain
\[
I(x, p) = \frac{e^{-x}}{x^p} - p\int_x^{\infty} e^{-u} u^{-p-1}\,du
= \frac{e^{-x}}{x^p} - \frac{p\,e^{-x}}{x^{p+1}} + p(p+1)\int_x^{\infty} e^{-u} u^{-p-2}\,du.   (5.177)
\]
Continuing to integrate by parts, we develop the series
\[
I(x, p) = e^{-x}\Bigl[\frac{1}{x^p} - \frac{p}{x^{p+1}} + \frac{p(p+1)}{x^{p+2}} - \cdots
+ (-1)^{n-1}\frac{(p+n-2)!}{(p-1)!\,x^{p+n-1}}\Bigr]
+ (-1)^n\,\frac{(p+n-1)!}{(p-1)!}\int_x^{\infty} e^{-u} u^{-p-n}\,du.   (5.178)
\]
This is a remarkable series. Checking the convergence by the d'Alembert ratio test, we find
\[
\lim_{n\to\infty} \frac{|u_{n+1}|}{|u_n|}
= \lim_{n\to\infty} \frac{(p+n)!}{(p+n-1)!}\cdot\frac{1}{x}
= \lim_{n\to\infty} \frac{p+n}{x} = \infty   (5.179)
\]
for all finite values of x. Therefore our series as an infinite series diverges everywhere! Before discarding Eq. (5.178) as worthless, let us see how well a given partial sum approximates the incomplete factorial function I(x, p):
\[
I(x, p) - s_n(x, p)
= (-1)^{n+1}\frac{(p+n)!}{(p-1)!}\int_x^{\infty} e^{-u} u^{-p-n-1}\,du
= R_n(x, p).   (5.180)
\]
In absolute value,
\[
\bigl|I(x, p) - s_n(x, p)\bigr| \le \frac{(p+n)!}{(p-1)!}\int_x^{\infty} e^{-u} u^{-p-n-1}\,du.
\]
When we substitute u = v + x, the integral becomes
\[
\int_x^{\infty} e^{-u} u^{-p-n-1}\,du
= e^{-x}\int_0^{\infty} e^{-v}(v+x)^{-p-n-1}\,dv
= \frac{e^{-x}}{x^{p+n+1}}\int_0^{\infty} e^{-v}\Bigl(1 + \frac{v}{x}\Bigr)^{-p-n-1}\,dv.
\]

^23 This function occurs frequently in astrophysical problems involving gas with a Maxwell–Boltzmann energy distribution.
^24 See also Section 8.5.

FIGURE 5.12 Partial sums of \(e^x E_1(x)\big|_{x=5}\).
For large x the final integral approaches 1 and
\[
\bigl|I(x, p) - s_n(x, p)\bigr| \approx \frac{(p+n)!}{(p-1)!}\cdot\frac{e^{-x}}{x^{p+n+1}}.   (5.181)
\]
This means that if we take x large enough, our partial sum s_n is an arbitrarily good approximation to the function I(x, p). Our divergent series (Eq. (5.178)) therefore is perfectly good for computations of partial sums. For this reason it is sometimes called a semiconvergent series. Note that the power of x in the denominator of the remainder, (p + n + 1), is higher than the power of x in the last term included in s_n(x, p), (p + n).

Since the remainder R_n(x, p) alternates in sign, the successive partial sums give alternately upper and lower bounds for I(x, p). The behavior of the series (with p = 1) as a function of the number of terms included is shown in Fig. 5.12. We have
\[
e^x E_1(x) = e^x\int_x^{\infty} \frac{e^{-u}}{u}\,du
\cong s_n(x) = \frac{1}{x} - \frac{1!}{x^2} + \frac{2!}{x^3} - \frac{3!}{x^4} + \cdots + (-1)^n\frac{n!}{x^{n+1}},   (5.182)
\]
which is evaluated at x = 5. The optimum determination of e^x E_1(x) is given by the closest approach of the upper and lower bounds, that is, between s₄ = s₆ = 0.1664 and s₅ = 0.1741 for x = 5. Therefore
\[
0.1664 \le e^x E_1(x)\big|_{x=5} \le 0.1741.   (5.183)
\]
Actually, from tables,
\[
e^x E_1(x)\big|_{x=5} = 0.1704,   (5.184)
\]
within the limits established by our asymptotic expansion. Note that inclusion of additional terms in the series expansion beyond the optimum point literally reduces the accuracy of the representation. As x is increased, the spread between the lowest upper bound and the highest lower bound will diminish. By taking x large enough, one may compute e^x E_1(x) to any desired degree of accuracy. Other properties of E_1(x) are derived and discussed in Section 8.5.

Cosine and Sine Integrals

Asymptotic series may also be developed from definite integrals, if the integrand has the required behavior. As an example, the cosine and sine integrals (Section 8.5) are defined by
\[
\mathrm{Ci}(x) = -\int_x^{\infty} \frac{\cos t}{t}\,dt,   (5.185)
\]
\[
\mathrm{si}(x) = -\int_x^{\infty} \frac{\sin t}{t}\,dt.   (5.186)
\]
Combining these with the regular trigonometric functions, we may define
\[
f(x) = \mathrm{Ci}(x)\sin x - \mathrm{si}(x)\cos x = \int_0^{\infty} \frac{\sin y}{y+x}\,dy,
\]
\[
g(x) = -\mathrm{Ci}(x)\cos x - \mathrm{si}(x)\sin x = \int_0^{\infty} \frac{\cos y}{y+x}\,dy,   (5.187)
\]
with the new variable y = t − x. Going to complex variables, Section 6.1, we have
\[
g(x) + i f(x) = \int_0^{\infty} \frac{e^{iy}}{y+x}\,dy = \int_0^{\infty} \frac{i\,e^{-xu}}{1+iu}\,du,   (5.188)
\]
in which u = −iy/x. The limits of integration, 0 to ∞, rather than 0 to −i∞, may be justified by Cauchy's theorem, Section 6.3. Rationalizing the denominator and equating real part to real part and imaginary part to imaginary part, we obtain
\[
g(x) = \int_0^{\infty} \frac{u\,e^{-xu}}{1+u^2}\,du, \qquad
f(x) = \int_0^{\infty} \frac{e^{-xu}}{1+u^2}\,du.   (5.189)
\]
For convergence of the integrals we must require that ℜ(x) > 0.^25

Now, to develop the asymptotic expansions, let v = xu and expand the factor [1 + (v/x)²]^{−1} by the binomial theorem.^26 We have
\[
f(x) \approx \frac{1}{x}\int_0^{\infty} e^{-v}\sum_{0\le n\le N} (-1)^n\frac{v^{2n}}{x^{2n}}\,dv
= \frac{1}{x}\sum_{0\le n\le N} (-1)^n\frac{(2n)!}{x^{2n}},
\]
\[
g(x) \approx \frac{1}{x^2}\int_0^{\infty} e^{-v}\sum_{0\le n\le N} (-1)^n\frac{v^{2n+1}}{x^{2n}}\,dv
= \frac{1}{x^2}\sum_{0\le n\le N} (-1)^n\frac{(2n+1)!}{x^{2n}}.   (5.190)
\]
From Eqs. (5.187) and (5.190),
\[
\mathrm{Ci}(x) \approx \frac{\sin x}{x}\sum_{0\le n\le N} (-1)^n\frac{(2n)!}{x^{2n}}
- \frac{\cos x}{x^2}\sum_{0\le n\le N} (-1)^n\frac{(2n+1)!}{x^{2n}},
\]
\[
\mathrm{si}(x) \approx -\frac{\cos x}{x}\sum_{0\le n\le N} (-1)^n\frac{(2n)!}{x^{2n}}
- \frac{\sin x}{x^2}\sum_{0\le n\le N} (-1)^n\frac{(2n+1)!}{x^{2n}}   (5.191)
\]
are the desired asymptotic expansions. This technique of expanding the integrand of a definite integral and integrating term by term is applied in Section 11.6 to develop an asymptotic expansion of the modified Bessel function K_ν and in Section 13.5 for expansions of the two confluent hypergeometric functions M(a, c; x) and U(a, c; x).

Definition of Asymptotic Series

The behavior of these series (Eqs. (5.178) and (5.191)) is consistent with the defining properties of an asymptotic series.^27 Following Poincaré, we take^28
\[
x^n R_n(x) = x^n\bigl[f(x) - s_n(x)\bigr],   (5.192)
\]
where
\[
s_n(x) = a_0 + \frac{a_1}{x} + \frac{a_2}{x^2} + \cdots + \frac{a_n}{x^n}.   (5.193)
\]

^25 ℜ(x) = real part of (complex) x (compare Section 6.1).
x x x The asymptotic expansion of f (x) has the properties that sn (x) = a0 + lim x n Rn (x) = 0, x→∞ (5.193) for fixed n, (5.194) for fixed x.29 (5.195) and lim x n Rn (x) = ∞, n→∞ 26 This step is valid for v ≤ x. The contributions from v ≥ x will be negligible (for large x) because of the negative exponential. It is because the binomial expansion does not converge for v ≥ x that our final series is asymptotic rather than convergent. 27 It is not necessary that the asymptotic series be a power series. The required property is that the remainder R (x) be of higher n order than the last term kept — as in Eq. (5.194). 28 Poincaré’s definition allows (or neglects) exponentially decreasing functions. The refinement of Poincaré’s definition is of considerable importance for the advanced theory of asymptotic expansions, particularly for extensions into the complex plane. However, for purposes of an introductory treatment and especially for numerical computation with x real and positive, Poincaré’s approach is perfectly satisfactory. 394 Chapter 5 Infinite Series See Eqs. (5.178) and (5.179) for an example of these properties. For power series, as assumed in the form of sn (x), Rn (x) ∼ x −n−1 . With conditions (5.194) and (5.195) satisfied, we write f (x) ≈ ∞  an x −n . (5.196) n=0 Note the use of ≈ in place of =. The function f (x) is equal to the series only in the limit as x → ∞ and a finite number of terms in the series. Asymptotic expansions of two functions may be multiplied together, and the result will be an asymptotic expansion of the product of the two functions. The asymptotic expansion of a given function f (t) may be integrated term by term (just as in a uniformly convergent series of continuous functions) from x ≤ t < ∞, and the result ∞ will be an asymptotic expansion of x f (t) dt. Term-by-term differentiation, however, is valid only under very special conditions. Some functions do not possess an asymptotic expansion; ex is an example of such a function. 
However, if a function has an asymptotic expansion, it has only one. The correspondence is not one to one: many functions may have the same asymptotic expansion.

One of the most useful and powerful methods of generating asymptotic expansions, the method of steepest descents, will be developed in Section 7.3. Applications include the derivation of Stirling's formula for the (complete) factorial function (Section 8.3) and the asymptotic forms of the various Bessel functions (Section 11.6). Asymptotic series occur fairly often in mathematical physics. One of the earliest and still important approximations of quantum mechanics, the WKB expansion, is an asymptotic series.

Exercises

5.10.1 Stirling's formula for the logarithm of the factorial function is
\[
\ln(x!) = \frac{1}{2}\ln 2\pi + \Bigl(x + \frac{1}{2}\Bigr)\ln x - x
+ \sum_{n=1}^{N} \frac{B_{2n}}{2n(2n-1)}\,x^{1-2n}.
\]
The B_{2n} are the Bernoulli numbers (Section 5.9). Show that Stirling's formula is an asymptotic expansion.

5.10.2 Integrating by parts, develop asymptotic expansions of the Fresnel integrals
(a) \(\displaystyle C(x) = \int_0^x \cos\frac{\pi u^2}{2}\,du\),
(b) \(\displaystyle s(x) = \int_0^x \sin\frac{\pi u^2}{2}\,du\).
These integrals appear in the analysis of a knife-edge diffraction pattern.

5.10.3 Rederive the asymptotic expansions of Ci(x) and si(x) by repeated integration by parts.
Hint. \(\mathrm{Ci}(x) + i\,\mathrm{si}(x) = -\int_x^{\infty} \frac{e^{it}}{t}\,dt\).

5.10.4 Derive the asymptotic expansion of the Gauss error function
\[
\mathrm{erf}(x) = \frac{2}{\sqrt{\pi}}\int_0^x e^{-t^2}\,dt
\approx 1 - \frac{e^{-x^2}}{\sqrt{\pi}\,x}\Bigl(1 - \frac{1}{2x^2} + \frac{1\cdot 3}{2^2 x^4}
- \frac{1\cdot 3\cdot 5}{2^3 x^6} + \cdots + (-1)^n\frac{(2n-1)!!}{2^n x^{2n}}\Bigr).
\]
Hint. \(\mathrm{erf}(x) = 1 - \mathrm{erfc}(x) = 1 - \frac{2}{\sqrt{\pi}}\int_x^{\infty} e^{-t^2}\,dt\).
Normalized so that erf(∞) = 1, this function plays an important role in probability theory. It may be expressed in terms of the Fresnel integrals (Exercise 5.10.2), the incomplete gamma functions (Section 8.5), and the confluent hypergeometric functions (Section 13.5).

^29 This excludes convergent series of inverse powers of x. Some writers feel that this exclusion is artificial and unnecessary.
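The series of Exercise 5.10.4 behaves the same way as the E₁ expansion: excellent for large x, ruinous if pushed too far at small x. A sketch, with the term counts chosen for this illustration:

```python
import math

def erf_asymptotic(x, n_terms=4):
    """Asymptotic series of Exercise 5.10.4:
    erf(x) ~ 1 - e^{-x^2}/(sqrt(pi) x) * sum_n (-1)^n (2n-1)!!/(2 x^2)^n."""
    s, term = 0.0, 1.0
    for n in range(n_terms):
        s += term
        term *= -(2 * n + 1) / (2.0 * x * x)  # next term: factor -(2n+1)/(2x^2)
    return 1.0 - math.exp(-x * x) / (math.sqrt(math.pi) * x) * s

# good at x = 3 with a handful of terms; diverges at x = 1 with many terms
print(erf_asymptotic(3.0), math.erf(3.0))
print(erf_asymptotic(1.0, 4), erf_asymptotic(1.0, 20), math.erf(1.0))
```

Four terms at x = 3 already agree with math.erf to better than 10⁻⁶, while at x = 1 adding terms past the optimum makes the error grow without bound, as Eqs. (5.194) and (5.195) require.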
5.10.5 The asymptotic expressions for the various Bessel functions, Section 11.6, contain the series
\[
P_\nu(z) \sim 1 + \sum_{n=1}^{\infty} (-1)^n\,
\frac{\prod_{s=1}^{2n}\bigl[4\nu^2 - (2s-1)^2\bigr]}{(2n)!\,(8z)^{2n}},
\]
\[
Q_\nu(z) \sim \sum_{n=1}^{\infty} (-1)^{n+1}\,
\frac{\prod_{s=1}^{2n-1}\bigl[4\nu^2 - (2s-1)^2\bigr]}{(2n-1)!\,(8z)^{2n-1}}.
\]
Show that these two series are indeed asymptotic series.

5.10.6 For x > 1,
\[
\frac{1}{1+x} = \sum_{n=0}^{\infty} (-1)^n\frac{1}{x^{n+1}}.
\]
Test this series to see if it is an asymptotic series.

5.10.7 Derive the following Bernoulli number asymptotic series for the Euler–Mascheroni constant:
\[
\gamma = \sum_{s=1}^{n} s^{-1} - \ln n - \frac{1}{2n} + \sum_{k=1}^{N} \frac{B_{2k}}{(2k)\,n^{2k}}.
\]
Hint. Apply the Euler–Maclaurin integration formula to f(x) = x^{−1} over the interval [1, n], for N = 1, 2, ....

5.10.8 Develop an asymptotic series for
\[
\int_0^{\infty} e^{-xv}\bigl(1 + v^2\bigr)^{-2}\,dv.
\]
Take x to be real and positive.
ANS. \(\displaystyle \frac{1}{x} - \frac{2!}{x^3} + \frac{4!}{x^5} - \cdots + (-1)^n\frac{(2n)!}{x^{2n+1}}\).

5.10.9 Calculate partial sums of e^x E₁(x) for x = 5, 10, and 15 to exhibit the behavior shown in Fig. 5.12. Determine the width of the throat for x = 10 and 15, analogous to Eq. (5.183).
ANS. Throat width: n = 10, 0.000051; n = 15, 0.0000002.

5.10.10 The knife-edge diffraction pattern is described by
\[
I = 0.5\,I_0\Bigl\{\bigl[C(u_0) + 0.5\bigr]^2 + \bigl[S(u_0) + 0.5\bigr]^2\Bigr\},
\]
where C(u₀) and S(u₀) are the Fresnel integrals of Exercise 5.10.2. Here I₀ is the incident intensity and I is the diffracted intensity; u₀ is proportional to the distance away from the knife edge (measured at right angles to the incident beam). Calculate I/I₀ for u₀ varying from −1.0 to +4.0 in steps of 0.1. Tabulate your results and, if a plotting routine is available, plot them.
Check value. u₀ = 1.0, I/I₀ = 1.259226.

5.10.11 The Euler–Maclaurin integration formula of Section 5.9 provides a way of calculating the Euler–Mascheroni constant γ to high accuracy. Using f(x) = 1/x in Eq. (5.168b) (with interval [1, n]) and the definition of γ (Eq. (5.28)), we obtain
\[
\gamma = \sum_{s=1}^{n} s^{-1} - \ln n - \frac{1}{2n} + \sum_{k=1}^{N} \frac{B_{2k}}{(2k)\,n^{2k}}.
\]
Using double-precision arithmetic, calculate γ for N = 1, 2, ....
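The prescription of Exercises 5.10.7 and 5.10.11 can be sketched in a few lines; the choices n = 1000 and N = 2 below match the answer quoted in the text:

```python
import math

B = {2: 1.0 / 6.0, 4: -1.0 / 30.0}  # Bernoulli numbers B_2, B_4

def gamma_euler_maclaurin(n=1000, N=2):
    """gamma = sum_{s<=n} 1/s - ln n - 1/(2n) + sum_{k<=N} B_{2k}/(2k n^{2k})."""
    harmonic = math.fsum(1.0 / s for s in range(1, n + 1))
    corr = sum(B[2 * k] / (2 * k * n**(2 * k)) for k in range(1, N + 1))
    return harmonic - math.log(n) - 1.0 / (2 * n) + corr

print(f"{gamma_euler_maclaurin():.10f}")  # 0.5772156649
```

Keeping only two Bernoulli terms already gives γ to roughly a dozen digits, vastly better than the directly summed series of Exercise 5.9.13.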
Note. D. E. Knuth, Euler's constant to 1271 places. Math. Comput. 16: 275 (1962). An even more precise calculation appears in Exercise 8.5.16.
ANS. For n = 1000, N = 2: γ = 0.5772 1566 4901.

5.11 INFINITE PRODUCTS

Consider a succession of positive factors f₁ · f₂ · f₃ · f₄ ⋯ f_n (f_i > 0). Using capital pi (∏) to indicate product, as capital sigma (Σ) indicates a sum, we have
\[
f_1 \cdot f_2 \cdot f_3 \cdots f_n = \prod_{i=1}^{n} f_i.   (5.197)
\]
We define p_n, a partial product, in analogy with s_n, the partial sum,
\[
p_n = \prod_{i=1}^{n} f_i,   (5.198)
\]
and then investigate the limit
\[
\lim_{n\to\infty} p_n = P.   (5.199)
\]
If P is finite (but not zero), we say the infinite product is convergent. If P is infinite or zero, the infinite product is labeled divergent.

Since the product will diverge to infinity if
\[
\lim_{n\to\infty} f_n > 1   (5.200)
\]
or to zero for
\[
\lim_{n\to\infty} f_n < 1 \ (\text{and} > 0),   (5.201)
\]
it is convenient to write our infinite products as \(\prod_{n=1}^{\infty}(1 + a_n)\). The condition a_n → 0 is then a necessary (but not sufficient) condition for convergence.

The infinite product may be related to an infinite series by the obvious method of taking the logarithm,
\[
\ln \prod_{n=1}^{\infty}(1 + a_n) = \sum_{n=1}^{\infty} \ln(1 + a_n).   (5.202)
\]
A more useful relationship is stated by the following theorem.

Convergence of Infinite Product

If 0 ≤ a_n < 1, the infinite products \(\prod_{n=1}^{\infty}(1 + a_n)\) and \(\prod_{n=1}^{\infty}(1 - a_n)\) converge if \(\sum_{n=1}^{\infty} a_n\) converges and diverge if \(\sum_{n=1}^{\infty} a_n\) diverges.

Considering the term 1 + a_n, we see from Eq. (5.90) that
\[
1 + a_n \le e^{a_n}.   (5.203)
\]
Therefore, for the partial product p_n, with s_n the partial sum of the a_i,
\[
p_n \le e^{s_n},   (5.204)
\]
and, letting n → ∞,
\[
\prod_{n=1}^{\infty}(1 + a_n) \le \exp\Bigl(\sum_{n=1}^{\infty} a_n\Bigr),   (5.205)
\]
thus establishing an upper bound for the infinite product. To develop a lower bound, we note that
\[
p_n = 1 + \sum_{i=1}^{n} a_i + \sum_{i=1}^{n}\sum_{j=1}^{n} a_i a_j + \cdots \ge s_n,   (5.206)
\]
since a_i ≥ 0. Hence
\[
\prod_{n=1}^{\infty}(1 + a_n) \ge \sum_{n=1}^{\infty} a_n.   (5.207)
\]
If the infinite sum remains finite, the infinite product will also.
If the infinite sum diverges, so will the infinite product. The case of \(\prod(1 - a_n)\) is complicated by the negative signs, but a proof that depends on the foregoing proof may be developed by noting that, for a_n < 1/2 (remember a_n → 0 for convergence),
\[
(1 - a_n) \le (1 + a_n)^{-1} \quad\text{and}\quad (1 - a_n) \ge (1 + 2a_n)^{-1}.   (5.208)
\]

Sine, Cosine, and Gamma Functions

An nth-order polynomial P_n(x) with n real roots may be written as a product of n factors (see Section 6.4, Gauss' fundamental theorem of algebra):
\[
P_n(x) = (x - x_1)(x - x_2)\cdots(x - x_n) = \prod_{i=1}^{n}(x - x_i).   (5.209)
\]
In much the same way we may expect that a function with an infinite number of roots may be written as an infinite product, one factor for each root. This is indeed the case for the trigonometric functions. We have two very useful infinite product representations,
\[
\sin x = x\prod_{n=1}^{\infty}\Bigl(1 - \frac{x^2}{n^2\pi^2}\Bigr),   (5.210)
\]
\[
\cos x = \prod_{n=1}^{\infty}\Bigl(1 - \frac{4x^2}{(2n-1)^2\pi^2}\Bigr).   (5.211)
\]
The most convenient and perhaps most elegant derivation of these two expressions is by the use of complex variables.^30 By our theorem of convergence, Eqs. (5.210) and (5.211) are convergent for all finite values of x. Specifically, for the infinite product for sin x, a_n = x²/(n²π²), and
\[
\sum_{n=1}^{\infty} a_n = \frac{x^2}{\pi^2}\sum_{n=1}^{\infty} n^{-2} = \frac{x^2}{\pi^2}\,\zeta(2) = \frac{x^2}{6}   (5.212)
\]
by Exercise 5.9.6. The series corresponding to Eq. (5.211) behaves in a similar manner.

Equation (5.210) leads to two interesting results. First, if we set x = π/2, we obtain
\[
1 = \frac{\pi}{2}\prod_{n=1}^{\infty}\Bigl(1 - \frac{1}{(2n)^2}\Bigr)
= \frac{\pi}{2}\prod_{n=1}^{\infty}\frac{(2n)^2 - 1}{(2n)^2}.   (5.213)
\]
Solving for π/2, we have
\[
\frac{\pi}{2} = \prod_{n=1}^{\infty}\frac{(2n)^2}{(2n-1)(2n+1)}
= \frac{2\cdot 2}{1\cdot 3}\cdot\frac{4\cdot 4}{3\cdot 5}\cdot\frac{6\cdot 6}{5\cdot 7}\cdots,   (5.214)
\]
which is Wallis' famous formula for π/2.

The second result involves the gamma or factorial function (Section 8.1). One definition of the gamma function is
\[
\Gamma(x) = \Bigl[x\,e^{\gamma x}\prod_{r=1}^{\infty}\Bigl(1 + \frac{x}{r}\Bigr)e^{-x/r}\Bigr]^{-1},   (5.215)
\]
where γ is the usual Euler–Mascheroni constant (compare Section 5.2).

^30 See Eqs. (7.25) and (7.26).
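The slow convergence that makes these products unattractive numerically is easy to exhibit. A minimal sketch evaluating the partial products of Eqs. (5.214) and (5.211); the factor counts are this illustration's choices:

```python
import math

def wallis(n_factors):
    """Partial product of Wallis' formula, Eq. (5.214)."""
    p = 1.0
    for n in range(1, n_factors + 1):
        p *= (2.0 * n) ** 2 / ((2.0 * n - 1) * (2.0 * n + 1))
    return p

def cos_product(x, n_factors):
    """Partial product of the cos x representation, Eq. (5.211)."""
    p = 1.0
    for n in range(1, n_factors + 1):
        p *= 1.0 - 4.0 * x * x / ((2 * n - 1) ** 2 * math.pi ** 2)
    return p

print(wallis(1000), math.pi / 2)            # error still of order 1/n
print(cos_product(math.pi, 1000), -1.0)     # compare Exercise 5.11.10
```

A thousand factors of Wallis' product still miss π/2 in the fourth decimal, and the cos π product behaves similarly, which is why the text warns that these products are unsuitable for precise numerical work.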
If we take the product of Γ(x) and Γ(−x), Eq. (5.215) leads to
\[
\Gamma(x)\Gamma(-x)
= \Bigl[x\,e^{\gamma x}\prod_{r=1}^{\infty}\Bigl(1 + \frac{x}{r}\Bigr)e^{-x/r}\Bigr]^{-1}
\Bigl[-x\,e^{-\gamma x}\prod_{r=1}^{\infty}\Bigl(1 - \frac{x}{r}\Bigr)e^{x/r}\Bigr]^{-1}
= -\frac{1}{x^2}\prod_{r=1}^{\infty}\Bigl(1 - \frac{x^2}{r^2}\Bigr)^{-1}.   (5.216)
\]
Using Eq. (5.210) with x replaced by πx, we obtain
\[
\Gamma(x)\Gamma(-x) = -\frac{\pi}{x\sin\pi x}.   (5.217)
\]
Anticipating a recurrence relation developed in Section 8.1, we have −xΓ(−x) = Γ(1 − x). Equation (5.217) may then be written as
\[
\Gamma(x)\Gamma(1 - x) = \frac{\pi}{\sin\pi x}.   (5.218)
\]
This will be useful in treating the gamma function (Chapter 8).

Strictly speaking, we should check the range of x for which Eq. (5.215) is convergent. Clearly, individual factors will vanish for x = 0, −1, −2, .... The proof that the infinite product converges for all other (finite) values of x is left as Exercise 5.11.9. These infinite products have a variety of uses in mathematics. However, because of rather slow convergence, they are not suitable for precise numerical work in physics.

Exercises

5.11.1 Using
\[
\ln \prod_{n=1}^{\infty}(1 \pm a_n) = \sum_{n=1}^{\infty} \ln(1 \pm a_n)
\]
and the Maclaurin expansion of ln(1 ± a_n), show that the infinite product \(\prod_{n=1}^{\infty}(1 \pm a_n)\) converges or diverges with the infinite series \(\sum_{n=1}^{\infty} a_n\).

5.11.2 An infinite product appears in the form
\[
\prod_{n=1}^{\infty} \frac{1 + a/n}{1 + b/n},
\]
where a and b are constants. Show that this infinite product converges only if a = b.

5.11.3 Show that the infinite product representations of sin x and cos x are consistent with the identity 2 sin x cos x = sin 2x.

5.11.4 Determine the limit to which
\[
\prod_{n=2}^{\infty}\Bigl(1 + \frac{(-1)^n}{n}\Bigr)
\]
converges.

5.11.5 Show that
\[
\prod_{n=2}^{\infty}\Bigl(1 - \frac{2}{n(n+1)}\Bigr) = \frac{1}{3}.
\]

5.11.6 Prove that
\[
\prod_{n=2}^{\infty}\Bigl(1 - \frac{1}{n^2}\Bigr) = \frac{1}{2}.
\]

5.11.7 Using the infinite product representation of sin x, show that
\[
x\cot x = 1 - 2\sum_{m,n=1}^{\infty}\Bigl(\frac{x}{n\pi}\Bigr)^{2m},
\]
hence that the Bernoulli number
\[
B_{2n} = (-1)^{n-1}\frac{2(2n)!}{(2\pi)^{2n}}\,\zeta(2n).
\]

5.11.8 Verify the Euler identity
\[
\prod_{p=1}^{\infty}\bigl(1 + z^p\bigr) = \prod_{q=1}^{\infty}\bigl(1 - z^{2q-1}\bigr)^{-1},
\qquad |z| < 1.
\]

5.11.9 Show that \(\prod_{r=1}^{\infty}(1 + x/r)e^{-x/r}\) converges for all finite x (except for the zeros of 1 + x/r).
Hint. Write the nth factor as 1 + a_n.
5.11.10 Calculate cos x from its infinite product representation, Eq. (5.211), using (a) 10, (b) 100, and (c) 1000 factors in the product. Calculate the absolute error. Note how slowly the partial products converge, making the infinite product quite unsuitable for precise numerical work.
ANS. For 1000 factors, cos π = −1.00051.

Additional Readings

The topic of infinite series is treated in many texts on advanced calculus.

Bender, C. M., and S. Orszag, Advanced Mathematical Methods for Scientists and Engineers. New York: McGraw-Hill (1978). Particularly recommended for methods of accelerating convergence.

Davis, H. T., Tables of Higher Mathematical Functions. Bloomington, IN: Principia Press (1935). Volume II contains extensive information on Bernoulli numbers and polynomials.

Dingle, R. B., Asymptotic Expansions: Their Derivation and Interpretation. New York: Academic Press (1973).

Galambos, J., Representations of Real Numbers by Infinite Series. Berlin: Springer (1976).

Gradshteyn, I. S., and I. M. Ryzhik, Table of Integrals, Series and Products. Corrected and enlarged 6th edition prepared by Alan Jeffrey. New York: Academic Press (2000).

Hamming, R. W., Numerical Methods for Scientists and Engineers. Reprinted, New York: Dover (1987).

Hansen, E., A Table of Series and Products. Englewood Cliffs, NJ: Prentice-Hall (1975). A tremendous compilation of series and products.

Hardy, G. H., Divergent Series. Oxford: Clarendon Press (1956), 2nd ed., Chelsea (1992). The standard, comprehensive work on methods of treating divergent series. Hardy includes instructive accounts of the gradual development of the concepts of convergence and divergence.

Jeffrey, A., Handbook of Mathematical Formulas and Integrals. San Diego: Academic Press (1995).

Knopp, K., Theory and Application of Infinite Series. London: Blackie and Son (2nd ed.); New York: Hafner (1971). Reprinted: A. K. Peters Classics (1997).
This is a thorough, comprehensive, and authoritative work that covers infinite series and products. Proofs of almost all of the statements not proved in Chapter 5 will be found in this book.

Mangulis, V., Handbook of Series for Scientists and Engineers. New York: Academic Press (1965). A most convenient and useful collection of series. Includes algebraic functions, Fourier series, and series of the special functions: Bessel, Legendre, and so on.

Olver, F. W. J., Asymptotics and Special Functions. New York: Academic Press (1974). A detailed, readable development of asymptotic theory. Considerable attention is paid to error bounds for use in computation.

Rainville, E. D., Infinite Series. New York: Macmillan (1967). A readable and useful account of series constants and functions.

Sokolnikoff, I. S., and R. M. Redheffer, Mathematics of Physics and Modern Engineering, 2nd ed. New York: McGraw-Hill (1966). A long Chapter 2 (101 pages) presents infinite series in a thorough but very readable form. Extensions to the solutions of differential equations, to complex series, and to Fourier series are included.

CHAPTER 6

FUNCTIONS OF A COMPLEX VARIABLE I: ANALYTIC PROPERTIES, MAPPING

  The imaginary numbers are a wonderful flight of God's spirit; they are almost an amphibian between being and not being.
  Gottfried Wilhelm von Leibniz, 1702

We turn now to a study of functions of a complex variable. In this area we develop some of the most powerful and widely useful tools in all of analysis. To indicate, at least partly, why complex variables are important, we mention briefly several areas of application.

1. For many pairs of functions u and v, both u and v satisfy Laplace's equation,
\[
\nabla^2\psi = \frac{\partial^2\psi(x, y)}{\partial x^2} + \frac{\partial^2\psi(x, y)}{\partial y^2} = 0.
\]
Hence either u or v may be used to describe a two-dimensional electrostatic potential.
The other function, which gives a family of curves orthogonal to those of the first function, may then be used to describe the electric field E. A similar situation holds for the hydrodynamics of an ideal fluid in irrotational motion. The function u might describe the velocity potential, whereas the function v would then be the stream function. In many cases in which the functions u and v are unknown, mapping or transforming in the complex plane permits us to create a coordinate system tailored to the particular problem.

2. In Chapter 9 we shall see that the second-order differential equations of interest in physics may be solved by power series. The same power series may be used in the complex plane to replace x by the complex variable z. The dependence of the solution f(z) at a given z₀ on the behavior of f(z) elsewhere gives us greater insight into the behavior of our solution and a powerful tool (analytic continuation) for extending the region in which the solution is valid.

3. The change of a parameter k from real to imaginary, k → ik, transforms the Helmholtz equation into the diffusion equation. The same change transforms the Helmholtz equation solutions (Bessel and spherical Bessel functions) into the diffusion equation solutions (modified Bessel and modified spherical Bessel functions).

4. Integrals in the complex plane have a wide variety of useful applications:
• Evaluating definite integrals;
• Inverting power series;
• Forming infinite products;
• Obtaining solutions of differential equations for large values of the variable (asymptotic solutions);
• Investigating the stability of potentially oscillatory systems;
• Inverting integral transforms.

Many physical quantities that were originally real become complex as a simple physical theory is made more general. The real index of refraction of light becomes a complex quantity when absorption is included.
The real energy associated with an energy level becomes complex when the finite lifetime of the level is considered.

6.1 COMPLEX ALGEBRA

A complex number is nothing more than an ordered pair of two real numbers, (a, b). Similarly, a complex variable is an ordered pair of two real variables,^1
\[
z \equiv (x, y).   (6.1)
\]
The ordering is significant. In general (a, b) is not equal to (b, a), and (x, y) is not equal to (y, x). As usual, we continue writing a real number (x, 0) simply as x, and we call i ≡ (0, 1) the imaginary unit.

All our complex variable analysis can be developed in terms of ordered pairs of numbers (a, b), variables (x, y), and functions (u(x, y), v(x, y)). We now define addition of complex numbers in terms of their Cartesian components as
\[
z_1 + z_2 = (x_1, y_1) + (x_2, y_2) = (x_1 + x_2,\ y_1 + y_2),   (6.2a)
\]
that is, two-dimensional vector addition. In Chapter 1 the points in the xy-plane are identified with the two-dimensional displacement vector \(\mathbf{r} = \hat{\mathbf{x}}x + \hat{\mathbf{y}}y\). As a result, two-dimensional vector analogs can be developed for much of our complex analysis. Exercise 6.1.2 is one simple example; Cauchy's theorem, Section 6.3, is another.

Multiplication of complex numbers is defined as
\[
z_1 z_2 = (x_1, y_1)\cdot(x_2, y_2) = (x_1 x_2 - y_1 y_2,\ x_1 y_2 + x_2 y_1).   (6.2b)
\]

^1 This is precisely how a computer does complex arithmetic.
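Footnote 1's remark, that this is precisely how a computer does complex arithmetic, can be made literal. A sketch implementing Eqs. (6.2a) and (6.2b) on bare ordered pairs (the function names are this illustration's, not a standard API):

```python
def c_add(z1, z2):
    """Eq. (6.2a): component-wise addition of ordered pairs (x, y)."""
    return (z1[0] + z2[0], z1[1] + z2[1])

def c_mul(z1, z2):
    """Eq. (6.2b): (x1, y1)(x2, y2) = (x1 x2 - y1 y2, x1 y2 + x2 y1)."""
    return (z1[0] * z2[0] - z1[1] * z2[1],
            z1[0] * z2[1] + z1[1] * z2[0])

i = (0.0, 1.0)          # the imaginary unit as the ordered pair (0, 1)
print(c_mul(i, i))      # (-1.0, 0.0): i^2 = -1, with no square roots in sight
```

Nothing here ever mentions a "square root of −1"; the rule (6.2b) alone forces i² = (−1, 0), which is the point of the ordered-pair construction.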
n=0 for the exponential. Such definitions agree with the real variable definitions along the real x-axis and extend the corresponding real functions into the complex plane. This result is often called permanence of the algebraic form. It is convenient to employ a graphical representation of the complex variable. By plotting x — the real part of z — as the abscissa and y — the imaginary part of z — as the ordinate, we have the complex plane, or Argand plane, shown in Fig. 6.1. If we assign specific values to x and y, then z corresponds to a point (x, y) in the plane. In terms of the ordering mentioned before, it is obvious that the point (x, y) does not coincide with the point (y, x) except for the special case of x = y. Further, from Fig. 6.1 we may write x = r cos θ, y = r sin θ FIGURE 6.1 Complex plane — Argand diagram. 2 The algebra of complex numbers, (a, b), is isomorphic with that of matrices of the form  (compare Exercise 3.2.4). a b −b a  (6.4a) 406 Chapter 6 Functions of a Complex Variable I and z = r(cos θ + i sin θ ). (6.4b) Using a result that is suggested (but not rigorously proved)3 by Section 5.6 and Exercise 5.6.1, we have the useful polar representation z = r(cos θ + i sin θ ) = reiθ . (6.4c) In order to prove this identity, we use i 3 = −i, i 4 = 1, . . . in the Taylor expansion of the exponential and trigonometric functions and separate even and odd powers in eiθ = = ∞  (iθ )n n! n=0 ∞  (−1)ν ν=0 = ∞  (iθ )2ν ν=0 + (2ν)! θ 2ν +i (2ν)! ∞  ∞  (iθ )2ν+1 (2ν + 1)! ν=0 (−1)ν ν=0 θ 2ν+1 = cos θ + i sin θ. (2ν + 1)! For the special values θ = π/2 and θ = π, we obtain π π eiπ = cos(π) = −1, eiπ/2 = cos + i sin = i, 2 2 intriguing connections between e, i, and π. Moreover, the exponential function eiθ is periodic with period 2π, just like sin θ and cos θ . In this representation r is called the modulus or magnitude of z (r = |z| = (x 2 + y 2 )1/2 ) and the angle θ (= tan−1 (y/x)) is labeled the argument or phase of z. 
(Note that the arctan function tan⁻¹(y/x) has infinitely many branches.)

The choice of polar representation, Eq. (6.4c), or Cartesian representation, Eqs. (6.1) and (6.2c), is a matter of convenience. Addition and subtraction of complex variables are easier in the Cartesian representation, Eq. (6.2a). Multiplication, division, powers, and roots are easier to handle in polar form, Eq. (6.4c). Analytically or graphically, using the vector analogy, we may show that the modulus of the sum of two complex numbers is no greater than the sum of the moduli and no less than the difference, Exercise 6.1.3:

    |z_1| - |z_2| \le |z_1 + z_2| \le |z_1| + |z_2|.    (6.5)

Because of the vector analogy, these are called the triangle inequalities. Using the polar form, Eq. (6.4c), we find that the magnitude of a product is the product of the magnitudes,

    |z_1 \cdot z_2| = |z_1| \cdot |z_2|.    (6.6)

Also,

    \arg(z_1 \cdot z_2) = \arg z_1 + \arg z_2.    (6.7)

[Footnote 3: Strictly speaking, Chapter 5 was limited to real variables. The development of power-series expansions for complex functions is taken up in Section 6.5 (Laurent expansion).]

[FIGURE 6.2 The function w(z) = u(x, y) + iv(x, y) maps points in the xy-plane into points in the uv-plane.]

From our complex variable z, complex functions f(z) or w(z) may be constructed. These complex functions may then be resolved into real and imaginary parts,

    w(z) = u(x, y) + iv(x, y),    (6.8)

in which the separate functions u(x, y) and v(x, y) are pure real. For example, if f(z) = z², we have

    f(z) = (x + iy)^2 = x^2 - y^2 + 2ixy.

The real part of a function f(z) will be labeled ℜf(z), whereas the imaginary part will be labeled ℑf(z). In Eq. (6.8),

    ℜw(z) = Re(w) = u(x, y), \qquad ℑw(z) = Im(w) = v(x, y).

The relationship between the independent variable z and the dependent variable w is perhaps best pictured as a mapping operation. A given z = x + iy means a given point in the z-plane. The complex value of w(z) is then a point in the w-plane.
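Equations (6.5) through (6.7) are easy to spot-check numerically with a language's built-in complex type. The sketch below (our code, not the text's) tests the triangle inequalities, the product of magnitudes, and the addition of arguments modulo 2π on random pairs.

```python
import cmath
import random

random.seed(1)
for _ in range(100):
    z1 = complex(random.uniform(-5, 5), random.uniform(-5, 5))
    z2 = complex(random.uniform(-5, 5), random.uniform(-5, 5))

    # Triangle inequalities, Eq. (6.5) (left side is the reverse form).
    assert abs(abs(z1) - abs(z2)) <= abs(z1 + z2) + 1e-12
    assert abs(z1 + z2) <= abs(z1) + abs(z2) + 1e-12

    # |z1 z2| = |z1| |z2|, Eq. (6.6).
    assert abs(abs(z1 * z2) - abs(z1) * abs(z2)) < 1e-9

    # arg(z1 z2) = arg z1 + arg z2 modulo 2π, Eq. (6.7).
    d = cmath.phase(z1 * z2) - cmath.phase(z1) - cmath.phase(z2)
    assert abs(cmath.exp(1j * d) - 1) < 1e-9

print("Eqs. (6.5)-(6.7) verified on 100 random pairs")
```

The phase comparison is made through exp(id) rather than d itself because `cmath.phase` returns values in (−π, π], so the sum of arguments can differ from arg(z₁z₂) by a multiple of 2π.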
Points in the z-plane map into points in the w-plane, and curves in the z-plane map into curves in the w-plane, as indicated in Fig. 6.2.

Complex Conjugation

In all these steps (complex number, variable, and function) the operation of replacing i by −i is called "taking the complex conjugate." The complex conjugate of z is denoted by z*, where⁴

    z^* = x - iy.    (6.9)

[Footnote 4: The complex conjugate is often denoted by z̄ in the mathematical literature.]

[FIGURE 6.3 Complex conjugate points.]

The complex variable z and its complex conjugate z* are mirror images of each other reflected in the x-axis, that is, inversion of the y-axis (compare Fig. 6.3). The product zz* leads to

    zz^* = (x + iy)(x - iy) = x^2 + y^2 = r^2.    (6.10)

Hence (zz*)^{1/2} = |z|, the magnitude of z.

Functions of a Complex Variable

All the elementary functions of real variables may be extended into the complex plane by replacing the real variable x with the complex variable z. This is an example of the analytic continuation mentioned in Section 6.5. The extremely important relation of Eq. (6.4c) is an illustration. Moving into the complex plane opens up new opportunities for analysis.

Example 6.1.1 DE MOIVRE'S FORMULA

If Eq. (6.4c) (setting r = 1) is raised to the nth power, we have

    e^{in\theta} = (\cos\theta + i \sin\theta)^n.    (6.11)

Expanding the exponential now with argument nθ, we obtain

    \cos n\theta + i \sin n\theta = (\cos\theta + i \sin\theta)^n.    (6.12)

De Moivre's formula is generated if the right-hand side of Eq. (6.12) is expanded by the binomial theorem; we obtain cos nθ as a series of powers of cos θ and sin θ, Exercise 6.1.6. ∎

Numerous other examples of relations among the exponential, hyperbolic, and trigonometric functions in the complex plane appear in the exercises. Occasionally there are complications. The logarithm of a complex variable may be expanded using the polar representation,

    \ln z = \ln(re^{i\theta}) = \ln r + i\theta.    (6.13a)

This is not complete.
To the phase angle, θ, we may add any integral multiple of 2π without changing z. Hence Eq. (6.13a) should read

    \ln z = \ln(r e^{i(\theta + 2n\pi)}) = \ln r + i(\theta + 2n\pi).    (6.13b)

The parameter n may be any integer. This means that ln z is a multivalued function having an infinite number of values for a single pair of real values r and θ. To avoid ambiguity, the simplest choice is n = 0 and limitation of the phase to an interval of length 2π, such as (−π, π).⁵ The line in the z-plane that is not crossed, the negative real axis in this case, is labeled a cut line or branch cut. The value of ln z with n = 0 is called the principal value of ln z. Further discussion of these functions, including the logarithm, appears in Section 6.7.

[Footnote 5: There is no standard choice of phase; the appropriate phase depends on each problem.]

Exercises

6.1.1 (a) Find the reciprocal of x + iy, working entirely in the Cartesian representation.
(b) Repeat part (a), working in polar form but expressing the final result in Cartesian form.

6.1.2 The complex quantities a = u + iv and b = x + iy may also be represented as two-dimensional vectors, a = x̂u + ŷv, b = x̂x + ŷy. Show that

    a^* b = \mathbf{a} \cdot \mathbf{b} + i \hat{z} \cdot \mathbf{a} \times \mathbf{b}.

6.1.3 Prove algebraically that for complex numbers

    |z_1| - |z_2| \le |z_1 + z_2| \le |z_1| + |z_2|.

Interpret this result in terms of two-dimensional vectors. Prove that

    |z - 1| < |\sqrt{z^2 - 1}| < |z + 1|, \qquad \text{for } ℜ(z) > 0.

6.1.4 We may define a complex conjugation operator K such that Kz = z*. Show that K is not a linear operator.

6.1.5 Show that complex numbers have square roots and that the square roots are contained in the complex plane. What are the square roots of i?

6.1.6 Show that

    (a) \cos n\theta = \cos^n\theta - \binom{n}{2} \cos^{n-2}\theta \sin^2\theta + \binom{n}{4} \cos^{n-4}\theta \sin^4\theta - \cdots,
    (b) \sin n\theta = \binom{n}{1} \cos^{n-1}\theta \sin\theta - \binom{n}{3} \cos^{n-3}\theta \sin^3\theta + \cdots.

Note. The quantities \binom{n}{m} are binomial coefficients: \binom{n}{m} = n!/[(n - m)! \, m!].

6.1.7 Prove that

    (a) \sum_{n=0}^{N-1} \cos nx = \frac{\sin(Nx/2)}{\sin(x/2)} \cos\left((N-1)\frac{x}{2}\right),
    (b) \sum_{n=0}^{N-1} \sin nx = \frac{\sin(Nx/2)}{\sin(x/2)} \sin\left((N-1)\frac{x}{2}\right).

These series occur in the analysis of the multiple-slit diffraction pattern. Another application is the analysis of the Gibbs phenomenon, Section 14.5.
Hint. Parts (a) and (b) may be combined to form a geometric series (compare Section 5.1).

6.1.8 For −1 < p < 1 prove that

    (a) \sum_{n=0}^{\infty} p^n \cos nx = \frac{1 - p \cos x}{1 - 2p \cos x + p^2},
    (b) \sum_{n=0}^{\infty} p^n \sin nx = \frac{p \sin x}{1 - 2p \cos x + p^2}.

These series occur in the theory of the Fabry–Perot interferometer.

6.1.9 Assume that the trigonometric functions and the hyperbolic functions are defined for complex argument by the appropriate power series:

    \sin z = \sum_{n \text{ odd}} (-1)^{(n-1)/2} \frac{z^n}{n!} = \sum_{s=0}^{\infty} (-1)^s \frac{z^{2s+1}}{(2s+1)!},
    \cos z = \sum_{n \text{ even}} (-1)^{n/2} \frac{z^n}{n!} = \sum_{s=0}^{\infty} (-1)^s \frac{z^{2s}}{(2s)!},
    \sinh z = \sum_{n \text{ odd}} \frac{z^n}{n!} = \sum_{s=0}^{\infty} \frac{z^{2s+1}}{(2s+1)!},
    \cosh z = \sum_{n \text{ even}} \frac{z^n}{n!} = \sum_{s=0}^{\infty} \frac{z^{2s}}{(2s)!}.

(a) Show that

    i \sin z = \sinh iz, \qquad \sin iz = i \sinh z, \qquad \cos z = \cosh iz, \qquad \cos iz = \cosh z.

(b) Verify that familiar functional relations such as

    \cosh z = \frac{e^z + e^{-z}}{2}, \qquad \sin(z_1 + z_2) = \sin z_1 \cos z_2 + \sin z_2 \cos z_1,

still hold in the complex plane.

6.1.10 Using the identities

    \cos z = \frac{e^{iz} + e^{-iz}}{2}, \qquad \sin z = \frac{e^{iz} - e^{-iz}}{2i},

established from comparison of power series, show that

    (a) \sin(x + iy) = \sin x \cosh y + i \cos x \sinh y,
        \cos(x + iy) = \cos x \cosh y - i \sin x \sinh y,
    (b) |\sin z|^2 = \sin^2 x + \sinh^2 y, \qquad |\cos z|^2 = \cos^2 x + \sinh^2 y.

This demonstrates that we may have |sin z|, |cos z| > 1 in the complex plane.

6.1.11 From the identities in Exercises 6.1.9 and 6.1.10 show that

    (a) \sinh(x + iy) = \sinh x \cos y + i \cosh x \sin y,
        \cosh(x + iy) = \cosh x \cos y + i \sinh x \sin y,
    (b) |\sinh z|^2 = \sinh^2 x + \sin^2 y, \qquad |\cosh z|^2 = \cosh^2 x + \sin^2 y.

6.1.12 Prove that (a) |sin z| ≥ |sin x|, (b) |cos z| ≥ |cos x|.

6.1.13 Show that the exponential function e^z is periodic with a pure imaginary period of 2πi.
6.1.14 Show that

    (a) \tanh\frac{z}{2} = \frac{\sinh x + i \sin y}{\cosh x + \cos y}, \qquad (b) \coth\frac{z}{2} = \frac{\sinh x - i \sin y}{\cosh x - \cos y}.

6.1.15 Find all the zeros of (a) sin z, (b) cos z, (c) sinh z, (d) cosh z.

6.1.16 Show that

    (a) \sin^{-1} z = -i \ln\left(iz \pm \sqrt{1 - z^2}\right),
    (b) \cos^{-1} z = -i \ln\left(z \pm \sqrt{z^2 - 1}\right),
    (c) \tan^{-1} z = \frac{i}{2} \ln\left(\frac{i + z}{i - z}\right),
    (d) \sinh^{-1} z = \ln\left(z + \sqrt{z^2 + 1}\right),
    (e) \cosh^{-1} z = \ln\left(z + \sqrt{z^2 - 1}\right),
    (f) \tanh^{-1} z = \frac{1}{2} \ln\left(\frac{1 + z}{1 - z}\right).

Hint. 1. Express the trigonometric and hyperbolic functions in terms of exponentials. 2. Solve for the exponential and then for the exponent.

6.1.17 In the quantum theory of photoionization we encounter the identity

    \left(\frac{ia - 1}{ia + 1}\right)^{ib} = \exp\left(-2b \cot^{-1} a\right),

in which a and b are real. Verify this identity.

6.1.18 A plane wave of light of angular frequency ω is represented by

    e^{i\omega(t - nx/c)}.

In a certain substance the simple real index of refraction n is replaced by the complex quantity n − ik. What is the effect of k on the wave? What does k correspond to physically? The generalization of a quantity from real to complex form occurs frequently in physics. Examples range from the complex Young's modulus of viscoelastic materials to the complex (optical) potential of the "cloudy crystal ball" model of the atomic nucleus.

6.1.19 We see that for the angular momentum components defined in Exercise 2.5.14,

    L_x - iL_y = (L_x + iL_y)^*.

Explain why this occurs.

6.1.20 Show that the phase of f(z) = u + iv is equal to the imaginary part of the logarithm of f(z). Exercise 8.2.13 depends on this result.

6.1.21 (a) Show that e^{ln z} always equals z.
(b) Show that ln e^z does not always equal z.

6.1.22 The infinite product representations of Section 5.11 hold when the real variable x is replaced by the complex variable z. From this, develop infinite product representations for (a) sinh z, (b) cosh z.

6.1.23 The equation of motion of a mass m relative to a rotating coordinate system is
    m \frac{d^2 \mathbf{r}}{dt^2} = \mathbf{F} - m\boldsymbol{\omega} \times (\boldsymbol{\omega} \times \mathbf{r}) - 2m \boldsymbol{\omega} \times \frac{d\mathbf{r}}{dt} - m \frac{d\boldsymbol{\omega}}{dt} \times \mathbf{r}.

Consider the case F = 0, r = x̂x + ŷy, and ω = ωẑ, with ω constant. Show that the replacement of r = x̂x + ŷy by z = x + iy leads to

    \frac{d^2 z}{dt^2} + 2i\omega \frac{dz}{dt} - \omega^2 z = 0.

Note. This ODE may be solved by the substitution z = f e^{−iωt}.

6.1.24 Using the complex arithmetic available in FORTRAN, write a program that will calculate the complex exponential e^z from its series expansion (definition). Calculate e^z for z = e^{inπ/6}, n = 0, 1, 2, . . . , 12. Tabulate the phase angle (θ = nπ/6), ℜz, ℑz, ℜ(e^z), ℑ(e^z), |e^z|, and the phase of e^z.
Check value. n = 5, θ = 2.61799, ℜ(z) = −0.86602, ℑz = 0.50000, ℜ(e^z) = 0.36913, ℑ(e^z) = 0.20166, |e^z| = 0.42062, phase(e^z) = 0.50000.

6.1.25 Using the complex arithmetic available in FORTRAN, calculate and tabulate ℜ(sinh z), ℑ(sinh z), |sinh z|, and phase(sinh z) for x = 0.0(0.1)1.0 and y = 0.0(0.1)1.0.
Hint. Beware of dividing by zero when calculating an angle as an arc tangent.
Check value. z = 0.2 + 0.1i, ℜ(sinh z) = 0.20033, ℑ(sinh z) = 0.10184, |sinh z| = 0.22473, phase(sinh z) = 0.47030.

6.1.26 Repeat Exercise 6.1.25 for cosh z.

6.2 CAUCHY–RIEMANN CONDITIONS

Having established complex functions of a complex variable, we now proceed to differentiate them. The derivative of f(z), like that of a real function, is defined by

    \lim_{\delta z \to 0} \frac{f(z + \delta z) - f(z)}{\delta z} = \lim_{\delta z \to 0} \frac{\delta f(z)}{\delta z} = \frac{df}{dz} = f'(z),    (6.14)

provided that the limit is independent of the particular approach to the point z. For real variables we require that the right-hand limit (x → x₀ from above) and the left-hand limit (x → x₀ from below) be equal for the derivative df(x)/dx to exist at x = x₀. Now, with z (or z₀) some point in a plane, our requirement that the limit be independent of the direction of approach is very restrictive.
Consider increments δx and δy of the variables x and y, respectively. Then

    \delta z = \delta x + i \delta y.    (6.15)

Also,

    \delta f = \delta u + i \delta v,    (6.16)

so that

    \frac{\delta f}{\delta z} = \frac{\delta u + i \delta v}{\delta x + i \delta y}.    (6.17)

Let us take the limit indicated by Eq. (6.14) by two different approaches, as shown in Fig. 6.4. First, with δy = 0, we let δx → 0. Equation (6.14) yields

    \lim_{\delta z \to 0} \frac{\delta f}{\delta z} = \lim_{\delta x \to 0} \left(\frac{\delta u}{\delta x} + i \frac{\delta v}{\delta x}\right) = \frac{\partial u}{\partial x} + i \frac{\partial v}{\partial x},    (6.18)

assuming the partial derivatives exist.

[FIGURE 6.4 Alternate approaches to z₀.]

For a second approach, we set δx = 0 and then let δy → 0. This leads to

    \lim_{\delta z \to 0} \frac{\delta f}{\delta z} = \lim_{\delta y \to 0} \left(-i \frac{\delta u}{\delta y} + \frac{\delta v}{\delta y}\right) = -i \frac{\partial u}{\partial y} + \frac{\partial v}{\partial y}.    (6.19)

If we are to have a derivative df/dz, Eqs. (6.18) and (6.19) must be identical. Equating real parts to real parts and imaginary parts to imaginary parts (like components of vectors), we obtain

    \frac{\partial u}{\partial x} = \frac{\partial v}{\partial y}, \qquad \frac{\partial u}{\partial y} = -\frac{\partial v}{\partial x}.    (6.20)

These are the famous Cauchy–Riemann conditions. They were discovered by Cauchy and used extensively by Riemann in his theory of analytic functions. These Cauchy–Riemann conditions are necessary for the existence of a derivative of f(z); that is, if df/dz exists, the Cauchy–Riemann conditions must hold.

Conversely, if the Cauchy–Riemann conditions are satisfied and the partial derivatives of u(x, y) and v(x, y) are continuous, the derivative df/dz exists. This may be shown by writing

    \delta f = \left(\frac{\partial u}{\partial x} + i \frac{\partial v}{\partial x}\right) \delta x + \left(\frac{\partial u}{\partial y} + i \frac{\partial v}{\partial y}\right) \delta y.    (6.21)

The justification for this expression depends on the continuity of the partial derivatives of u and v. Dividing by δz, we have

    \frac{\delta f}{\delta z} = \frac{(\partial u/\partial x + i(\partial v/\partial x))\delta x + (\partial u/\partial y + i(\partial v/\partial y))\delta y}{\delta x + i \delta y}
                              = \frac{(\partial u/\partial x + i(\partial v/\partial x)) + (\partial u/\partial y + i(\partial v/\partial y))\delta y/\delta x}{1 + i(\delta y/\delta x)}.    (6.22)

If δf/δz is to have a unique value, the dependence on δy/δx must be eliminated. Applying the Cauchy–Riemann conditions to the y derivatives, we obtain

    \frac{\partial u}{\partial y} + i \frac{\partial v}{\partial y} = -\frac{\partial v}{\partial x} + i \frac{\partial u}{\partial x}.    (6.23)

Substituting Eq. (6.23) into Eq.
(6.22), we may cancel out the δy/δx dependence to obtain

    \frac{\delta f}{\delta z} = \frac{\partial u}{\partial x} + i \frac{\partial v}{\partial x},    (6.24)

which shows that lim δf/δz is independent of the direction of approach in the complex plane as long as the partial derivatives are continuous. Thus, df/dz exists and f is analytic at z.

It is worthwhile noting that the Cauchy–Riemann conditions guarantee that the curves u = c₁ will be orthogonal to the curves v = c₂ (compare Section 2.1). This is fundamental in application to potential problems in a variety of areas of physics. If u = c₁ is a line of electric force, then v = c₂ is an equipotential line (surface), and vice versa. To see this, let us write the Cauchy–Riemann conditions as a product of ratios of partial derivatives,

    \frac{u_x}{u_y} \cdot \frac{v_x}{v_y} = -1,    (6.25)

with the abbreviations

    \frac{\partial u}{\partial x} \equiv u_x, \quad \frac{\partial u}{\partial y} \equiv u_y, \quad \frac{\partial v}{\partial x} \equiv v_x, \quad \frac{\partial v}{\partial y} \equiv v_y.

Now recall the geometric meaning of −u_x/u_y as the slope of the tangent of each curve u(x, y) = const., and similarly for v(x, y) = const. This means that the u = const. and v = const. curves are mutually orthogonal at each intersection. Alternatively,

    u_x \, dx + u_y \, dy = 0 = v_y \, dx - v_x \, dy

says that if (dx, dy) is tangent to the u-curve, then the orthogonal (−dy, dx) is tangent to the v-curve at the intersection point, z = (x, y). Or equivalently, u_x v_x + u_y v_y = 0 implies that the gradient vectors (u_x, u_y) and (v_x, v_y) are perpendicular. A further implication for potential theory is developed in Exercise 6.2.1.

Analytic Functions

Finally, if f(z) is differentiable at z = z₀ and in some small region around z₀, we say that f(z) is analytic⁶ at z = z₀. If f(z) is analytic everywhere in the (finite) complex plane, we call it an entire function. Our theory of complex variables here is one of analytic functions of a complex variable, which points up the crucial importance of the Cauchy–Riemann conditions.
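The orthogonality of the u = const. and v = const. curves just described can be checked numerically with finite differences. The sketch below (all helper names ours, not the text's) computes central-difference gradients of u and v for f(z) = z² and verifies u_x v_x + u_y v_y = 0 at an arbitrary point.

```python
# Numerical check that grad u is perpendicular to grad v for f(z) = z^2,
# i.e. u_x v_x + u_y v_y = 0, as implied by the Cauchy-Riemann conditions.

h = 1e-5

def u(x, y):            # real part of z^2
    return x * x - y * y

def v(x, y):            # imaginary part of z^2
    return 2 * x * y

def grad(g, x, y):
    """Central-difference gradient (g_x, g_y) of a real function g(x, y)."""
    gx = (g(x + h, y) - g(x - h, y)) / (2 * h)
    gy = (g(x, y + h) - g(x, y - h)) / (2 * h)
    return gx, gy

x0, y0 = 1.3, -0.7
ux, uy = grad(u, x0, y0)
vx, vy = grad(v, x0, y0)

dot = ux * vx + uy * vy
print(dot)   # ~0: the level curves of u and v cross at right angles
```

Here u_x = 2x, u_y = −2y, v_x = 2y, v_y = 2x, so the dot product vanishes identically, in agreement with Eq. (6.25).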
The concept of analyticity, carried on in advanced theories of modern physics, plays a crucial role in dispersion theory (of elementary particles). If f′(z) does not exist at z = z₀, then z₀ is labeled a singular point and consideration of it is postponed until Section 6.6.

To illustrate the Cauchy–Riemann conditions, consider two very simple examples.

Example 6.2.1 z² IS ANALYTIC

Let f(z) = z². Then the real part u(x, y) = x² − y² and the imaginary part v(x, y) = 2xy. Following Eq. (6.20),

    \frac{\partial u}{\partial x} = 2x = \frac{\partial v}{\partial y}, \qquad \frac{\partial u}{\partial y} = -2y = -\frac{\partial v}{\partial x}.

We see that f(z) = z² satisfies the Cauchy–Riemann conditions throughout the complex plane. Since the partial derivatives are clearly continuous, we conclude that f(z) = z² is analytic. ∎

[Footnote 6: Some writers use the term holomorphic or regular.]

Example 6.2.2 z* IS NOT ANALYTIC

Let f(z) = z*. Now u = x and v = −y. Applying the Cauchy–Riemann conditions, we obtain

    \frac{\partial u}{\partial x} = 1 \ne -1 = \frac{\partial v}{\partial y}.

The Cauchy–Riemann conditions are not satisfied, and f(z) = z* is not an analytic function of z. It is interesting to note that f(z) = z* is continuous, thus providing an example of a function that is everywhere continuous but nowhere differentiable in the complex plane. ∎

The derivative of a real function of a real variable is essentially a local characteristic, in that it provides information about the function only in a local neighborhood, for instance, as a truncated Taylor expansion. The existence of a derivative of a function of a complex variable has much more far-reaching implications. The real and imaginary parts of our analytic function must separately satisfy Laplace's equation. This is Exercise 6.2.1. Further, our analytic function is guaranteed derivatives of all orders, Section 6.4. In this sense the derivative not only governs the local behavior of the complex function, but controls the distant behavior as well.
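The contrast between the two examples can be made vivid numerically: the difference quotient δf/δz approaches a single value for z² no matter the direction of δz, but for z* it depends on the direction of approach. A small sketch (our code; the helper `quotient` is illustrative):

```python
# Difference quotient (f(z + dz) - f(z)) / dz evaluated along different
# directions of approach: direction-independent for z^2 (analytic),
# direction-dependent for z* (not analytic).
import cmath

def quotient(f, z, direction, eps=1e-6):
    dz = eps * cmath.exp(1j * direction)   # small step at angle `direction`
    return (f(z + dz) - f(z)) / dz

z0 = 1.0 + 1.0j
for theta in (0.0, cmath.pi / 2, 1.0):
    q_sq   = quotient(lambda z: z * z, z0, theta)
    q_conj = quotient(lambda z: z.conjugate(), z0, theta)
    print(f"theta={theta:.2f}  z^2: {q_sq:.4f}   z*: {q_conj:.4f}")

# For z^2 every direction gives ~2 z0 = 2 + 2i; for z* the quotient is
# exactly e^{-2 i theta}, sweeping the unit circle as the direction varies.
```

For f(z) = z* the quotient is conj(δz)/δz = e^(−2iθ), so no single limit exists as δz → 0, exactly as the failed Cauchy–Riemann conditions predict.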
Exercises

6.2.1 The functions u(x, y) and v(x, y) are the real and imaginary parts, respectively, of an analytic function w(z).
(a) Assuming that the required derivatives exist, show that ∇²u = ∇²v = 0. Solutions of Laplace's equation such as u(x, y) and v(x, y) are called harmonic functions.
(b) Show that

    \frac{\partial u}{\partial x}\frac{\partial u}{\partial y} + \frac{\partial v}{\partial x}\frac{\partial v}{\partial y} = 0,

and give a geometric interpretation.
Hint. The technique of Section 1.6 allows you to construct vectors normal to the curves u(x, y) = c_i and v(x, y) = c_j.

6.2.2 Show whether or not the function f(z) = ℜ(z) = x is analytic.

6.2.3 Having shown that the real part u(x, y) and the imaginary part v(x, y) of an analytic function w(z) each satisfy Laplace's equation, show that u(x, y) and v(x, y) cannot both have either a maximum or a minimum in the interior of any region in which w(z) is analytic. (They can have saddle points only.)

6.2.4 Let A = ∂²w/∂x², B = ∂²w/∂x∂y, C = ∂²w/∂y². From the calculus of functions of two variables, w(x, y), we have a saddle point if B² − AC > 0. With f(z) = u(x, y) + iv(x, y), apply the Cauchy–Riemann conditions and show that neither u(x, y) nor v(x, y) has a maximum or a minimum in a finite region of the complex plane. (See also Section 7.3.)

6.2.5 Find the analytic function w(z) = u(x, y) + iv(x, y) if (a) u(x, y) = x³ − 3xy², (b) v(x, y) = e^{−y} sin x.

6.2.6 If there is some common region in which w₁ = u(x, y) + iv(x, y) and w₂ = w₁* = u(x, y) − iv(x, y) are both analytic, prove that u(x, y) and v(x, y) are constants.

6.2.7 The function f(z) = u(x, y) + iv(x, y) is analytic. Show that f*(z*) is also analytic.

6.2.8 Using f(re^{iθ}) = R(r, θ) e^{iΘ(r,θ)}, in which R(r, θ) and Θ(r, θ) are differentiable real functions of r and θ, show that the Cauchy–Riemann conditions in polar coordinates become

    (a) \frac{\partial R}{\partial r} = \frac{R}{r} \frac{\partial \Theta}{\partial \theta}, \qquad (b) \frac{1}{r} \frac{\partial R}{\partial \theta} = -R \frac{\partial \Theta}{\partial r}.

Hint. Set up the derivative first with δz radial and then with δz tangential.
6.2.9 As an extension of Exercise 6.2.8, show that Θ(r, θ) satisfies Laplace's equation in polar coordinates. Equation (2.35) (without the final term and set to zero) is the Laplacian in polar coordinates.

6.2.10 Two-dimensional irrotational fluid flow is conveniently described by a complex potential f(z) = u(x, y) + iv(x, y). We label the real part, u(x, y), the velocity potential and the imaginary part, v(x, y), the stream function. The fluid velocity V is given by V = ∇u. If f(z) is analytic,
(a) show that df/dz = V_x − iV_y;
(b) show that ∇ · V = 0 (no sources or sinks);
(c) show that ∇ × V = 0 (irrotational, nonturbulent flow).

6.2.11 A proof of the Schwarz inequality (Section 10.4) involves minimizing an expression,

    f = \psi_{aa} + \lambda \psi_{ab} + \lambda^* \psi_{ab}^* + \lambda \lambda^* \psi_{bb} \ge 0.

The ψ are integrals of products of functions; ψ_aa and ψ_bb are real, ψ_ab is complex, and λ is a complex parameter.
(a) Differentiate the preceding expression with respect to λ*, treating λ as an independent parameter, independent of λ*. Show that setting the derivative ∂f/∂λ* equal to zero yields

    \lambda = -\frac{\psi_{ab}^*}{\psi_{bb}}.

(b) Show that ∂f/∂λ = 0 leads to the same result.
(c) Let λ = x + iy, λ* = x − iy. Set the x and y derivatives equal to zero and show that again λ = −ψ*_{ab}/ψ_{bb}.
This independence of λ and λ* appears again in Section 17.7.

6.2.12 The function f(z) is analytic. Show that the derivative of f(z) with respect to z* does not exist unless f(z) is a constant.
Hint. Use the chain rule and take x = (z + z*)/2, y = (z − z*)/2i.
Note. This result emphasizes that our analytic function f(z) is not just a complex function of two real variables x and y. It is a function of the complex variable x + iy.

6.3 CAUCHY'S INTEGRAL THEOREM

Contour Integrals

With differentiation under control, we turn to integration.
The integral of a complex variable over a contour in the complex plane may be defined in close analogy to the (Riemann) integral of a real function integrated along the real x-axis. We divide the contour from z₀ to z₀′ into n intervals by picking n − 1 intermediate points z₁, z₂, . . . on the contour (Fig. 6.5). Consider the sum

    S_n = \sum_{j=1}^{n} f(\zeta_j)(z_j - z_{j-1}),    (6.26)

where ζ_j is a point on the curve between z_j and z_{j−1}.

[FIGURE 6.5 Integration path.]

Now let n → ∞ with |z_j − z_{j−1}| → 0 for all j. If lim_{n→∞} S_n exists and is independent of the details of choosing the points z_j and ζ_j, then

    \lim_{n \to \infty} \sum_{j=1}^{n} f(\zeta_j)(z_j - z_{j-1}) = \int_{z_0}^{z_0'} f(z)\, dz.    (6.27)

The right-hand side of Eq. (6.27) is called the contour integral of f(z) (along the specified contour C from z = z₀ to z = z₀′). The preceding development of the contour integral is closely analogous to the Riemann integral of a real function of a real variable. As an alternative, the contour integral may be defined by

    \int_{z_1}^{z_2} f(z)\, dz = \int_{x_1,y_1}^{x_2,y_2} [u(x, y) + iv(x, y)][dx + i\,dy]
        = \int_{x_1,y_1}^{x_2,y_2} [u(x, y)\, dx - v(x, y)\, dy] + i \int_{x_1,y_1}^{x_2,y_2} [v(x, y)\, dx + u(x, y)\, dy],

with the path joining (x₁, y₁) and (x₂, y₂) specified. This reduces the complex integral to the complex sum of real integrals. It is somewhat analogous to the replacement of a vector integral by the vector sum of scalar integrals, Section 1.10.

An important example is the contour integral ∮_C zⁿ dz, where C is a circle of radius r > 0 around the origin z = 0 in the positive mathematical sense (counterclockwise). In the polar coordinates of Eq. (6.4c) we parameterize the circle as z = re^{iθ} and dz = ire^{iθ} dθ. For integer n ≠ −1 we then obtain

    \frac{1}{2\pi i} \oint_C z^n\, dz = \frac{r^{n+1}}{2\pi} \int_0^{2\pi} \exp[i(n+1)\theta]\, d\theta
        = \frac{r^{n+1}}{2\pi i (n+1)} \left[ e^{i(n+1)\theta} \right]_0^{2\pi} = 0    (6.27a)

because 2π is a period of e^{i(n+1)θ}, while for n = −1

    \frac{1}{2\pi i} \oint_C \frac{dz}{z} = \frac{1}{2\pi} \int_0^{2\pi} d\theta = 1,    (6.27b)

again independent of r.
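Equations (6.27a) and (6.27b) are easy to reproduce numerically by discretizing the circle, essentially evaluating the sum S_n of Eq. (6.26) directly. The sketch below is ours (the helper name `circle_integral` and the midpoint discretization are our choices):

```python
# Midpoint-rule evaluation of (1/2πi) ∮ z^n dz on the circle z = r e^{iθ},
# reproducing Eqs. (6.27a) and (6.27b).
import cmath

def circle_integral(n, r=1.0, steps=4000):
    total = 0.0 + 0.0j
    for k in range(steps):
        theta = 2 * cmath.pi * (k + 0.5) / steps
        z = r * cmath.exp(1j * theta)
        dz = 1j * z * (2 * cmath.pi / steps)   # dz = i r e^{iθ} dθ
        total += z**n * dz
    return total / (2j * cmath.pi)

for n in (-3, -2, 0, 1, 2):
    print(n, abs(circle_integral(n)))          # ~0 for every n != -1

print(circle_integral(-1))                      # ~1
print(circle_integral(-1, r=3.7))               # still ~1, independent of r
```

Because the integrand is periodic in θ, the equally spaced sum is extremely accurate even with modest step counts.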
Alternatively, we can integrate around a rectangle with the corners z₁, z₂, z₃, z₄ to obtain for n ≠ −1

    \oint z^n\, dz = \left[\frac{z^{n+1}}{n+1}\right]_{z_1}^{z_2} + \left[\frac{z^{n+1}}{n+1}\right]_{z_2}^{z_3} + \left[\frac{z^{n+1}}{n+1}\right]_{z_3}^{z_4} + \left[\frac{z^{n+1}}{n+1}\right]_{z_4}^{z_1} = 0,

because each corner point appears once as an upper and a lower limit that cancel. For n = −1 the corresponding real parts of the logarithms cancel similarly, but their imaginary parts involve the increasing arguments of the points from z₁ to z₄ and, when we come back to the first corner z₁, its argument has increased by 2π due to the multivaluedness of the logarithm, so 2πi is left over as the value of the integral. Thus, the value of an integral involving a multivalued function must be that which is reached in a continuous fashion on the path being taken. These integrals are examples of Cauchy's integral theorem, which we consider in the next section.

Stokes' Theorem Proof

Cauchy's integral theorem is the first of two basic theorems in the theory of the behavior of functions of a complex variable. First, we offer a proof under relatively restrictive conditions, conditions that are intolerable to the mathematician developing a beautiful abstract theory but that are usually satisfied in physical problems. If a function f(z) is analytic, that is, if its partial derivatives are continuous, throughout some simply connected region R,⁷ and if it is single-valued (assumed for simplicity here), then for every closed path C (Fig. 6.6) in R the line integral of f(z) around C is zero, or

    \oint_C f(z)\, dz = 0.    (6.27c)

Recall that in Section 1.13 such a function f(z), identified as a force, was labeled conservative. The symbol ∮ is used to emphasize that the path is closed. Note that the interior of the simply connected region bounded by a contour is that region lying to the left when moving in the direction implied by the contour; as a rule, a simply connected region is bounded by a single closed curve.
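Before the proof, the statement of Eq. (6.27c) can be spot-checked numerically by integrating an entire function around a closed contour. This sketch (ours, with an illustrative helper name) uses the unit circle and contrasts analytic integrands with 1/z, which fails to be analytic at the enclosed origin.

```python
# Numerical check of ∮ f(z) dz = 0 for f analytic on and inside the unit
# circle (Eq. (6.27c)); 1/z, singular at z = 0, shows the contrast.
import cmath

def closed_integral(f, r=1.0, steps=2000):
    total = 0.0 + 0.0j
    for k in range(steps):
        theta = 2 * cmath.pi * (k + 0.5) / steps
        z = r * cmath.exp(1j * theta)
        total += f(z) * 1j * z * (2 * cmath.pi / steps)   # f(z) dz
    return total

print(abs(closed_integral(cmath.exp)))      # ~0: e^z is entire
print(abs(closed_integral(cmath.sin)))      # ~0: sin z is entire
print(abs(closed_integral(lambda z: 1/z)))  # ~2π: 1/z is NOT analytic at 0
```

The last line reproduces ∮ dz/z = 2πi from Eq. (6.27b); the theorem simply does not apply there because the contour encloses a singular point.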
In this form the Cauchy integral theorem may be proved by direct application of Stokes' theorem (Section 1.12). With f(z) = u(x, y) + iv(x, y) and dz = dx + i dy,

    \oint_C f(z)\, dz = \oint_C (u + iv)(dx + i\,dy) = \oint_C (u\, dx - v\, dy) + i \oint_C (v\, dx + u\, dy).    (6.28)

These two line integrals may be converted to surface integrals by Stokes' theorem, a procedure that is justified if the partial derivatives are continuous within C. In applying Stokes' theorem, note that the final two integrals of Eq. (6.28) are real. Using V = x̂V_x + ŷV_y, Stokes' theorem says that

    \oint_C (V_x\, dx + V_y\, dy) = \int \left( \frac{\partial V_y}{\partial x} - \frac{\partial V_x}{\partial y} \right) dx\, dy.    (6.29)

[Footnote 7: Any closed simple curve (one that does not intersect itself) inside a simply connected region or domain may be contracted to a single point that still belongs to the region. If a region is not simply connected, it is called multiply connected. As an example of a multiply connected region, consider the z-plane with the interior of the unit circle excluded.]

[Footnote 8: In the proof of Stokes' theorem, Section 1.12, V_x and V_y are any two functions (with continuous partial derivatives).]

[FIGURE 6.6 A closed contour C within a simply connected region R.]

For the first integral in the last part of Eq. (6.28) let u = V_x and v = −V_y.⁸ Then

    \oint_C (u\, dx - v\, dy) = \oint_C (V_x\, dx + V_y\, dy) = \int \left( \frac{\partial V_y}{\partial x} - \frac{\partial V_x}{\partial y} \right) dx\, dy = -\int \left( \frac{\partial v}{\partial x} + \frac{\partial u}{\partial y} \right) dx\, dy.    (6.30)

For the second integral on the right side of Eq. (6.28) we let u = V_y and v = V_x. Using Stokes' theorem again, we obtain

    \oint_C (v\, dx + u\, dy) = \int \left( \frac{\partial u}{\partial x} - \frac{\partial v}{\partial y} \right) dx\, dy.    (6.31)

On application of the Cauchy–Riemann conditions, which must hold since f(z) is assumed analytic, each integrand vanishes and

    \oint_C f(z)\, dz = -\int \left( \frac{\partial v}{\partial x} + \frac{\partial u}{\partial y} \right) dx\, dy + i \int \left( \frac{\partial u}{\partial x} - \frac{\partial v}{\partial y} \right) dx\, dy = 0.    (6.32)

Cauchy–Goursat Proof

This completes the proof of Cauchy's integral theorem. However, the proof is marred from a theoretical point of view by the need for continuity of the first partial derivatives. Actually, as shown by Goursat, this condition is not necessary.
An outline of the Goursat proof is as follows. We subdivide the region inside the contour C into a network of small squares, as indicated in Fig. 6.7. Then

    \oint_C f(z)\, dz = \sum_j \oint_{C_j} f(z)\, dz,    (6.33)

all integrals along interior lines canceling out. To estimate the ∮_{C_j} f(z) dz, we construct the function

    \delta_j(z, z_j) = \frac{f(z) - f(z_j)}{z - z_j} - \left. \frac{df(z)}{dz} \right|_{z = z_j},    (6.34)

with z_j an interior point of the jth subregion. Note that [f(z) − f(z_j)]/(z − z_j) is an approximation to the derivative at z = z_j. Equivalently, we may note that if f(z) had a Taylor expansion (which we have not yet proved), then δ_j(z, z_j) would be of order z − z_j, approaching zero as the network was made finer.

[FIGURE 6.7 Cauchy–Goursat contours.]

But since f′(z_j) exists, that is, is finite, we may make

    |\delta_j(z, z_j)| < \varepsilon,    (6.35)

where ε is an arbitrarily chosen small positive quantity. Solving Eq. (6.34) for f(z) and integrating around C_j, we obtain

    \oint_{C_j} f(z)\, dz = \oint_{C_j} (z - z_j)\, \delta_j(z, z_j)\, dz,    (6.36)

the integrals of the other terms vanishing.⁹ When Eqs. (6.35) and (6.36) are combined, one shows that

    \left| \sum_j \oint_{C_j} f(z)\, dz \right| < A\varepsilon,    (6.37)

where A is a term of the order of the area of the enclosed region. Since ε is arbitrary, we let ε → 0 and conclude that if a function f(z) is analytic on and within a closed path C,

    \oint_C f(z)\, dz = 0.    (6.38)

Details of the proof of this significantly more general and more powerful form can be found in Churchill in the Additional Readings. Actually we can still prove the theorem for f(z) analytic within the interior of C and only continuous on C.

The consequence of the Cauchy integral theorem is that for analytic functions the line integral is a function only of its endpoints, independent of the path of integration,

    \int_{z_1}^{z_2} f(z)\, dz = F(z_2) - F(z_1) = -\int_{z_2}^{z_1} f(z)\, dz,    (6.39)

again exactly like the case of a conservative force, Section 1.13.

[Footnote 9: ∮ dz = 0 and ∮ z dz = 0 by Eq. (6.27a).]
Multiply Connected Regions

The original statement of Cauchy's integral theorem demanded a simply connected region. This restriction may be relaxed by the creation of a barrier, a contour line. The purpose of the following contour-line construction is to permit, within a multiply connected region, the identification of curves that can be shrunk to a point within the region, that is, the construction of a subregion that is simply connected.

Consider the multiply connected region of Fig. 6.8, in which f(z) is not defined for the interior, R′. Cauchy's integral theorem is not valid for the contour C, as shown, but we can construct a contour C′ for which the theorem holds. We draw a line from the interior forbidden region, R′, to the forbidden region exterior to R and then run a new contour, C′, as shown in Fig. 6.9. The new contour, C′, through ABDEFGA never crosses the contour line that literally converts R into a simply connected region. The three-dimensional analog of this technique was used in Section 1.14 to prove Gauss' law.

[FIGURE 6.8 A closed contour C in a multiply connected region.]

[FIGURE 6.9 Conversion of a multiply connected region into a simply connected region.]

By Eq. (6.39),

    \int_G^A f(z)\, dz = -\int_E^D f(z)\, dz,    (6.40)

with f(z) having been continuous across the contour line and line segments DE and GA arbitrarily close together. Then

    \oint_{C'} f(z)\, dz = \int_{ABD} f(z)\, dz + \int_{EFG} f(z)\, dz = 0    (6.41)

by Cauchy's integral theorem, with region R now simply connected. Applying Eq. (6.39) once again with ABD → C₁′ and EFG → −C₂′, we obtain

    \oint_{C_1'} f(z)\, dz = \oint_{C_2'} f(z)\, dz,    (6.42)

in which C₁′ and C₂′ are both traversed in the same (counterclockwise, that is, positive) direction.

Let us emphasize that the contour line here is a matter of mathematical convenience, to permit the application of Cauchy's integral theorem.
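Equation (6.42) is the contour-deformation result: as long as f(z) is analytic between the two contours, the closed-contour integrals agree. A quick numerical sketch (ours; helper name illustrative) integrates f(z) = 1/z, analytic everywhere except z = 0, over circles of very different radii.

```python
# Eq. (6.42): for f analytic in the annulus between two contours, the
# closed-contour integrals agree.  Here f(z) = 1/z over circles of
# radii 0.1 and 25; both enclose the lone singularity at z = 0.
import cmath

def circle_contour(f, r, steps=2000):
    total = 0.0 + 0.0j
    for k in range(steps):
        theta = 2 * cmath.pi * (k + 0.5) / steps
        z = r * cmath.exp(1j * theta)
        total += f(z) * 1j * z * (2 * cmath.pi / steps)
    return total

small = circle_contour(lambda z: 1 / z, r=0.1)
large = circle_contour(lambda z: 1 / z, r=25.0)
print(small, large)        # both ~2πi
print(abs(small - large))  # ~0: the contour may be deformed freely
```

The common value 2πi is, of course, Eq. (6.27b) again; the deformation argument explains why it cannot depend on the radius.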
Since f(z) is analytic in the annular region, it is necessarily single-valued and continuous across any such contour line.

Exercises

6.3.1 Show that \int_{z_1}^{z_2} f(z)\, dz = -\int_{z_2}^{z_1} f(z)\, dz.

6.3.2 Prove that

    \left| \int_C f(z)\, dz \right| \le |f|_{\max} \cdot L,

where |f|_max is the maximum value of |f(z)| along the contour C and L is the length of the contour.

6.3.3 Verify that

    \int_{0,0}^{1,1} z^*\, dz

depends on the path by evaluating the integral for the two paths shown in Fig. 6.10. Recall that f(z) = z* is not an analytic function of z and that Cauchy's integral theorem therefore does not apply.

[FIGURE 6.10 Contour.]

6.3.4 Show that

    \oint_C \frac{dz}{z^2 + z} = 0,

in which the contour C is a circle defined by |z| = R > 1.
Hint. Direct use of the Cauchy integral theorem is illegal. Why? The integral may be evaluated by transforming to polar coordinates and using tables. This yields 0 for R > 1 and 2πi for R < 1.

6.4 CAUCHY'S INTEGRAL FORMULA

As in the preceding section, we consider a function f(z) that is analytic on a closed contour C and within the interior region bounded by C. We seek to prove that

    \frac{1}{2\pi i} \oint_C \frac{f(z)}{z - z_0}\, dz = f(z_0),    (6.43)

in which z₀ is any point in the interior region bounded by C. This is the second of the two basic theorems mentioned in Section 6.3. Note that since z is on the contour C while z₀ is in the interior, z − z₀ ≠ 0 and the integral Eq. (6.43) is well defined. Although f(z) is assumed analytic, the integrand is f(z)/(z − z₀) and is not analytic at z = z₀ unless f(z₀) = 0.
We have (with dz = ire^{iθ} dθ from Eq. (6.27a))
$$\oint_{C_2} \frac{f(z)}{z-z_0}\,dz = \oint_{C_2} \frac{f(z_0+re^{i\theta})}{re^{i\theta}}\, rie^{i\theta}\,d\theta.$$
Taking the limit as r → 0, we obtain
$$\oint_{C_2} \frac{f(z)}{z-z_0}\,dz = if(z_0)\oint_{C_2} d\theta = 2\pi i f(z_0), \qquad (6.45)$$
since f(z) is analytic and therefore continuous at z = z_0. This proves the Cauchy integral formula.

Chapter 6 Functions of a Complex Variable I

FIGURE 6.11 Exclusion of a singular point.

Here is a remarkable result. The value of an analytic function f(z) is given at an interior point z = z_0 once the values on the boundary C are specified. This is closely analogous to a two-dimensional form of Gauss' law (Section 1.14) in which the magnitude of an interior line charge would be given in terms of the cylindrical surface integral of the electric field E. A further analogy is the determination of a function in real space by an integral of the function and the corresponding Green's function (and their derivatives) over the bounding surface. Kirchhoff diffraction theory is an example of this.

It has been emphasized that z_0 is an interior point. What happens if z_0 is exterior to C? In this case the entire integrand is analytic on and within C. Cauchy's integral theorem, Section 6.3, applies and the integral vanishes. We have
$$\frac{1}{2\pi i}\oint_C \frac{f(z)\,dz}{z-z_0} = \begin{cases} f(z_0), & z_0 \text{ interior,} \\ 0, & z_0 \text{ exterior.}\end{cases}$$

Derivatives

Cauchy's integral formula may be used to obtain an expression for the derivative of f(z). From Eq. (6.43), with f(z) analytic,
$$\frac{f(z_0+\delta z_0) - f(z_0)}{\delta z_0} = \frac{1}{2\pi i\,\delta z_0}\left(\oint \frac{f(z)}{z-z_0-\delta z_0}\,dz - \oint \frac{f(z)}{z-z_0}\,dz\right).$$
Then, by definition of derivative (Eq. (6.14)),
$$f'(z_0) = \lim_{\delta z_0 \to 0} \frac{1}{2\pi i\,\delta z_0}\oint \frac{\delta z_0\, f(z)\,dz}{(z-z_0-\delta z_0)(z-z_0)} = \frac{1}{2\pi i}\oint \frac{f(z)}{(z-z_0)^2}\,dz. \qquad (6.46)$$
This result could have been obtained by differentiating Eq. (6.43) under the integral sign with respect to z_0. This formal, or turning-the-crank, approach is valid, but the justification for it is contained in the preceding analysis.
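Equations (6.43) and (6.46) lend themselves to a quick numerical check. The following sketch (ours, not from the text; the choices f(z) = e^z, z_0 = 0.3 + 0.2i, and the step count are arbitrary) approximates the contour integral (n!/2πi) ∮ f(z)/(z − z_0)^{n+1} dz over the unit circle by a Riemann sum:

```python
import cmath
import math

# Numerical sketch of Eqs. (6.43) and (6.46): the contour integral
# (n!/2πi) ∮ f(z)/(z - z0)^{n+1} dz over the unit circle, approximated
# by a Riemann sum with z = e^{iθ}, dz = i e^{iθ} dθ.
def contour_formula(f, z0, n=0, steps=2000):
    total = 0.0
    for k in range(steps):
        theta = 2 * math.pi * k / steps
        z = cmath.exp(1j * theta)            # point on the unit circle
        dz = 1j * z * (2 * math.pi / steps)  # dz = i e^{iθ} dθ
        total += f(z) / (z - z0) ** (n + 1) * dz
    return math.factorial(n) * total / (2j * math.pi)

z0 = 0.3 + 0.2j
# n = 0 reproduces Eq. (6.43); n = 1 reproduces Eq. (6.46).  Since every
# derivative of exp(z) is exp(z), both should return approximately e^{z0}.
print(abs(contour_formula(cmath.exp, z0, 0) - cmath.exp(z0)))
print(abs(contour_formula(cmath.exp, z0, 1) - cmath.exp(z0)))
```

Both printed differences are negligible; the equally spaced sum on a circle converges very rapidly because the integrand is smooth and periodic in θ.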
This technique for constructing derivatives may be repeated. We write f′(z_0 + δz_0) and f′(z_0), using Eq. (6.46). Subtracting, dividing by δz_0, and finally taking the limit as δz_0 → 0, we have
$$f^{(2)}(z_0) = \frac{2}{2\pi i}\oint \frac{f(z)\,dz}{(z-z_0)^3}.$$
Note that f^{(2)}(z_0) is independent of the direction of δz_0, as it must be. Continuing, we get^{10}
$$f^{(n)}(z_0) = \frac{n!}{2\pi i}\oint \frac{f(z)\,dz}{(z-z_0)^{n+1}}; \qquad (6.47)$$
that is, the requirement that f(z) be analytic guarantees not only a first derivative but derivatives of all orders as well! The derivatives of f(z) are automatically analytic. Notice that this statement assumes the Goursat version of the Cauchy integral theorem. This is also why Goursat's contribution is so significant in the development of the theory of complex variables.

Morera's Theorem

A further application of Cauchy's integral formula is in the proof of Morera's theorem, which is the converse of Cauchy's integral theorem. The theorem states the following:

If a function f(z) is continuous in a simply connected region R and ∮_C f(z) dz = 0 for every closed contour C within R, then f(z) is analytic throughout R.

Let us integrate f(z) from z_1 to z_2. Since every closed-path integral of f(z) vanishes, the integral is independent of path and depends only on its endpoints. We label the result of the integration F(z), with
$$F(z_2) - F(z_1) = \int_{z_1}^{z_2} f(z)\,dz. \qquad (6.48)$$
As an identity,
$$\frac{F(z_2) - F(z_1)}{z_2 - z_1} - f(z_1) = \frac{\int_{z_1}^{z_2}\left[f(t) - f(z_1)\right]dt}{z_2 - z_1}, \qquad (6.49)$$
using t as another complex variable. Now we take the limit as z_2 → z_1:
$$\lim_{z_2 \to z_1} \frac{\int_{z_1}^{z_2}\left[f(t) - f(z_1)\right]dt}{z_2 - z_1} = 0, \qquad (6.50)$$

^{10} This expression is the starting point for defining derivatives of fractional order. See A. Erdelyi (ed.), Tables of Integral Transforms, Vol. 2. New York: McGraw-Hill (1954). For recent applications to mathematical analysis, see T. J. Osler, An integral analogue of Taylor's series and its use in computing Fourier transforms. Math. Comput.
26: 449 (1972), and references therein.

since f(t) is continuous.^{11} Therefore
$$\lim_{z_2 \to z_1} \frac{F(z_2) - F(z_1)}{z_2 - z_1} = F'(z)\Big|_{z=z_1} = f(z_1) \qquad (6.51)$$
by definition of derivative (Eq. (6.14)). We have proved that F′(z) at z = z_1 exists and equals f(z_1). Since z_1 is any point in R, we see that F(z) is analytic. Then by Cauchy's integral formula (compare Eq. (6.47)), F′(z) = f(z) is also analytic, proving Morera's theorem.

Drawing once more on our electrostatic analog, we might use f(z) to represent the electrostatic field E. If the net charge within every closed region in R is zero (Gauss' law), the charge density is everywhere zero in R. Alternatively, in terms of the analysis of Section 1.13, f(z) represents a conservative force (by definition of conservative), and then we find that it is always possible to express it as the derivative of a potential function F(z).

An important application of Cauchy's integral formula is the following Cauchy inequality. If f(z) = Σ a_n z^n is analytic and bounded, |f(z)| ≤ M on a circle of radius r about the origin, then
$$|a_n|\, r^n \le M \quad \text{(Cauchy's inequality)} \qquad (6.52)$$
gives upper bounds for the coefficients of its Taylor expansion. To prove Eq. (6.52) let us define M(r) = max_{|z|=r} |f(z)| and use the Cauchy integral for a_n:
$$|a_n| = \frac{1}{2\pi}\left|\oint_{|z|=r} \frac{f(z)}{z^{n+1}}\,dz\right| \le M(r)\,\frac{2\pi r}{2\pi r^{n+1}}.$$
An immediate consequence of the inequality (6.52) is Liouville's theorem: If f(z) is analytic and bounded in the entire complex plane it is a constant. In fact, if |f(z)| ≤ M for all z, then Cauchy's inequality (6.52) gives |a_n| ≤ Mr^{−n} → 0 as r → ∞ for n > 0. Hence f(z) = a_0.

Conversely, the slightest deviation of an analytic function from a constant value implies that there must be at least one singularity somewhere in the infinite complex plane. Apart from the trivial constant functions, then, singularities are a fact of life, and we must learn to live with them. But we shall do more than that.
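As a concrete illustration (ours, not the book's), Cauchy's inequality (6.52) can be checked for f(z) = e^z, whose Taylor coefficients are a_n = 1/n! and for which the maximum of |e^z| = e^x on the circle |z| = r is M(r) = e^r, attained at z = r:

```python
import math

# Check of Cauchy's inequality (6.52) for f(z) = e^z:
#   a_n = 1/n!,  M(r) = max_{|z|=r} |e^z| = e^r,
# so the inequality |a_n| r^n <= M(r) must hold for every r > 0 and every n.
for r in (0.5, 1.0, 3.0, 10.0):
    M = math.exp(r)
    for n in range(12):
        assert (r ** n) / math.factorial(n) <= M
print("inequality verified for the sampled r and n")
```

Note also the Liouville mechanism at work: here M(r)/r^n = e^r/r^n does not tend to zero as r → ∞, so no coefficient is forced to vanish, consistent with e^z being a nonconstant (unbounded) entire function.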
We shall next expand a function in a Laurent series at a singularity, and we shall use singularities to develop the powerful and useful calculus of residues in Chapter 7.

A famous application of Liouville's theorem yields the fundamental theorem of algebra (due to C. F. Gauss), which says that any polynomial P(z) = Σ_{ν=0}^{n} a_ν z^ν with n > 0 and a_n ≠ 0 has n roots. To prove this, suppose P(z) has no zero. Then 1/P(z) is analytic and bounded as |z| → ∞. Hence P(z) is a constant by Liouville's theorem, q.e.a. Thus, P(z) has at least one root that we can divide out. Then we repeat the process for the resulting polynomial of degree n − 1. This leads to the conclusion that P(z) has exactly n roots.

^{11} We quote the mean value theorem of calculus here.

Exercises

6.4.1 Show that
$$\oint_C (z-z_0)^n\,dz = \begin{cases} 2\pi i, & n = -1, \\ 0, & n \ne -1, \end{cases}$$
where the contour C encircles the point z = z_0 in a positive (counterclockwise) sense. The exponent n is an integer. See also Eq. (6.27a). The calculus of residues, Chapter 7, is based on this result.

6.4.2 Show that
$$\frac{1}{2\pi i}\oint z^{m-n-1}\,dz, \qquad m \text{ and } n \text{ integers}$$
(with the contour encircling the origin once counterclockwise) is a representation of the Kronecker δ_{mn}.

6.4.3 Solve Exercise 6.3.4 by separating the integrand into partial fractions and then applying Cauchy's integral theorem for multiply connected regions.
Note. Partial fractions are explained in Section 15.8 in connection with Laplace transforms.

6.4.4 Evaluate
$$\oint_C \frac{dz}{z^2-1},$$
where C is the circle |z| = 2.

6.4.5 Assuming that f(z) is analytic on and within a closed contour C and that the point z_0 is within C, show that
$$\oint_C \frac{f'(z)}{z-z_0}\,dz = \oint_C \frac{f(z)}{(z-z_0)^2}\,dz.$$

6.4.6 You know that f(z) is analytic on and within a closed contour C. You suspect that the nth derivative f^{(n)}(z_0) is given by
$$f^{(n)}(z_0) = \frac{n!}{2\pi i}\oint_C \frac{f(z)}{(z-z_0)^{n+1}}\,dz.$$
Using mathematical induction, prove that this expression is correct.
6.4.7 (a) A function f(z) is analytic within a closed contour C (and continuous on C). If f(z) ≠ 0 within C and |f(z)| ≤ M on C, show that
$$|f(z)| \le M$$
for all points within C.
Hint. Consider w(z) = 1/f(z).
(b) If f(z) = 0 within the contour C, show that the foregoing result does not hold and that it is possible to have |f(z)| = 0 at one or more points in the interior with |f(z)| > 0 over the entire bounding contour. Cite a specific example of an analytic function that behaves this way.

6.4.8 Using the Cauchy integral formula for the nth derivative, convert the following Rodrigues formulas into the corresponding so-called Schlaefli integrals.
(a) Legendre:
$$P_n(x) = \frac{1}{2^n n!}\frac{d^n}{dx^n}\left(x^2-1\right)^n.$$
ANS. $\dfrac{(-1)^n}{2^n}\cdot\dfrac{1}{2\pi i}\oint \dfrac{(1-z^2)^n}{(z-x)^{n+1}}\,dz.$
(b) Hermite:
$$H_n(x) = (-1)^n e^{x^2}\frac{d^n}{dx^n}\,e^{-x^2}.$$
(c) Laguerre:
$$L_n(x) = \frac{e^x}{n!}\frac{d^n}{dx^n}\left(x^n e^{-x}\right).$$
Note. From the Schlaefli integral representations one can develop generating functions for these special functions. Compare Sections 12.4, 13.1, and 13.2.

6.5 LAURENT EXPANSION

Taylor Expansion

The Cauchy integral formula of the preceding section opens up the way for another derivation of Taylor's series (Section 5.6), but this time for functions of a complex variable. Suppose we are trying to expand f(z) about z = z_0 and we have z = z_1 as the nearest point on the Argand diagram for which f(z) is not analytic. We construct a circle C centered at z = z_0 with radius less than |z_1 − z_0| (Fig. 6.12). Since z_1 was assumed to be the nearest point at which f(z) was not analytic, f(z) is necessarily analytic on and within C. From Eq. (6.43), the Cauchy integral formula,
$$f(z) = \frac{1}{2\pi i}\oint_C \frac{f(z')\,dz'}{z'-z} = \frac{1}{2\pi i}\oint_C \frac{f(z')\,dz'}{(z'-z_0)-(z-z_0)} = \frac{1}{2\pi i}\oint_C \frac{f(z')\,dz'}{(z'-z_0)\left[1-(z-z_0)/(z'-z_0)\right]}. \qquad (6.53)$$
Here z′ is a point on the contour C and z is any point interior to C. It is not legal yet to expand the denominator of the integrand in Eq.
(6.53) by the binomial theorem, for we have not yet proved the binomial theorem for complex variables. Instead, we note the identity
$$\frac{1}{1-t} = 1 + t + t^2 + t^3 + \cdots = \sum_{n=0}^{\infty} t^n, \qquad (6.54)$$
which may easily be verified by multiplying both sides by 1 − t. The infinite series, following the methods of Section 5.2, is convergent for |t| < 1.

FIGURE 6.12 Circular domain for Taylor expansion.

Now, for a point z interior to C, |z − z_0| < |z′ − z_0|, and, using Eq. (6.54), Eq. (6.53) becomes
$$f(z) = \frac{1}{2\pi i}\oint_C \sum_{n=0}^{\infty} \frac{(z-z_0)^n f(z')\,dz'}{(z'-z_0)^{n+1}}. \qquad (6.55)$$
Interchanging the order of integration and summation (valid because Eq. (6.54) is uniformly convergent for |t| < 1), we obtain
$$f(z) = \frac{1}{2\pi i}\sum_{n=0}^{\infty} (z-z_0)^n \oint_C \frac{f(z')\,dz'}{(z'-z_0)^{n+1}}. \qquad (6.56)$$
Referring to Eq. (6.47), we get
$$f(z) = \sum_{n=0}^{\infty} (z-z_0)^n\, \frac{f^{(n)}(z_0)}{n!}, \qquad (6.57)$$
which is our desired Taylor expansion. Note that it is based only on the assumption that f(z) is analytic for |z − z_0| < |z_1 − z_0|. Just as for real variable power series (Section 5.7), this expansion is unique for a given z_0.

From the Taylor expansion for f(z) a binomial theorem may be derived (Exercise 6.5.2).

Schwarz Reflection Principle

From the binomial expansion of g(z) = (z − x_0)^n for integral n it is easy to see that the complex conjugate of the function g is the function of the complex conjugate for real x_0:
$$g^*(z) = \left[(z-x_0)^n\right]^* = (z^* - x_0)^n = g(z^*). \qquad (6.58)$$

FIGURE 6.13 Schwarz reflection.

This leads us to the Schwarz reflection principle: If a function f(z) is (1) analytic over some region including the real axis and (2) real when z is real, then
$$f^*(z) = f(z^*). \qquad (6.59)$$
(See Fig. 6.13.) Expanding f(z) about some (nonsingular) point x_0 on the real axis,
$$f(z) = \sum_{n=0}^{\infty} \frac{f^{(n)}(x_0)}{n!}\,(z-x_0)^n \qquad (6.60)$$
by Eq. (6.56). Since f(z) is analytic at z = x_0, this Taylor expansion exists.
Since f(z) is real when z is real, f^{(n)}(x_0) must be real for all n. Then when we use Eq. (6.58), Eq. (6.59), the Schwarz reflection principle, follows immediately. Exercise 6.5.6 is another form of this principle. This completes the proof within a circle of convergence. Analytic continuation then permits extending this result to the entire region of analyticity.

Analytic Continuation

It is natural to think of the values f(z) of an analytic function f as a single entity, which is usually defined in some restricted region S_1 of the complex plane, for example, by a Taylor series (see Fig. 6.14). Then f is analytic inside the circle of convergence C_1, whose radius is given by the distance r_1 from the center of C_1 to the nearest singularity of f at z_1 (in Fig. 6.14). A singularity is any point where f is not analytic. If we choose a point inside C_1 that is farther than r_1 from the singularity z_1 and make a Taylor expansion of f about it (z_2 in Fig. 6.14), then the circle of convergence, C_2, will usually extend beyond the first circle, C_1. In the overlap region of both circles, C_1, C_2, the function f is uniquely defined. In the region of the circle C_2 that extends beyond C_1, f(z) is uniquely defined by the Taylor series about the center of C_2 and is analytic there, although the Taylor series about the center of C_1 is no longer convergent there. After Weierstrass this process is called analytic continuation. It defines the analytic function in terms of its original definition (in C_1, say) and all its continuations.

FIGURE 6.14 Analytic continuation.

A specific example is the function
$$f(z) = \frac{1}{1+z}, \qquad (6.61)$$
which has a (simple) pole at z = −1 and is analytic elsewhere. The geometric series expansion
$$\frac{1}{1+z} = 1 - z + z^2 - \cdots = \sum_{n=0}^{\infty} (-z)^n \qquad (6.62)$$
converges for |z| < 1, that is, inside the circle C_1 in Fig. 6.14.
Suppose we expand f(z) about z = i, so
$$f(z) = \frac{1}{1+z} = \frac{1}{1+i+(z-i)} = \frac{1}{(1+i)\left(1+\dfrac{z-i}{1+i}\right)} = \frac{1}{1+i}\left(1 - \frac{z-i}{1+i} + \frac{(z-i)^2}{(1+i)^2} - \cdots\right) \qquad (6.63)$$
converges for |z − i| < |1 + i| = √2. Our circle of convergence is C_2 in Fig. 6.14. Now f(z) is defined by the expansion (6.63) in S_2, which overlaps S_1 and extends further out in the complex plane.^{12} This extension is an analytic continuation, and when we have only isolated singular points to contend with, the function can be extended indefinitely. Equations (6.61), (6.62), and (6.63) are three different representations of the same function. Each representation has its own domain of convergence. Equation (6.62) is a Maclaurin series. Equation (6.63) is a Taylor expansion about z = i, and from the following paragraphs Eq. (6.61) is seen to be a one-term Laurent series.

FIGURE 6.15 $|z'-z_0|_{C_1} > |z-z_0|$; $|z'-z_0|_{C_2} < |z-z_0|$.

Analytic continuation may take many forms, and the series expansion just considered is not necessarily the most convenient technique. As an alternate technique we shall use a functional relation in Section 8.1 to extend the factorial function around the isolated singular points z = −n, n = 1, 2, 3, .... As another example, the hypergeometric equation is satisfied by the hypergeometric function defined by the series, Eq. (13.115), for |z| < 1. The integral representation given in Exercise 13.4.7 permits a continuation into the complex plane.

^{12} One of the most powerful and beautiful results of the more abstract theory of functions of a complex variable is that if two analytic functions coincide in any region, such as the overlap of S_1 and S_2, or coincide on any line segment, they are the same function, in the sense that they will coincide everywhere as long as they are both well defined. In this case the agreement of the expansions (Eqs.
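The agreement of the three representations (6.61)-(6.63) on the overlap of S_1 and S_2 can also be seen numerically. The following sketch is our illustration, not the book's; the test point z = 0.3 + 0.4i is an arbitrary choice lying inside both |z| < 1 and |z − i| < √2:

```python
# Compare the two series expansions of f(z) = 1/(1 + z) at a point in the
# overlap of their circles of convergence.
def maclaurin(z, terms=200):          # Eq. (6.62): sum of (-z)^n, |z| < 1
    return sum((-z) ** n for n in range(terms))

def taylor_about_i(z, terms=200):     # Eq. (6.63): expansion about z = i
    t = (z - 1j) / (1 + 1j)
    return sum((-t) ** n for n in range(terms)) / (1 + 1j)

z = 0.3 + 0.4j                        # inside both circles of convergence
exact = 1 / (1 + z)                   # the closed form, Eq. (6.61)
print(abs(maclaurin(z) - exact))      # negligible
print(abs(taylor_about_i(z) - exact)) # negligible
```

Both partial sums reproduce the closed form to machine precision, as the identity theorem quoted in footnote 12 guarantees they must wherever both converge.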
(6.62) and (6.63)) over the region common to S_1 and S_2 would establish the identity of the functions these expansions represent. Then Eq. (6.63) would represent an analytic continuation or extension of f(z) into regions not covered by Eq. (6.62). We could equally well say that f(z) = 1/(1 + z) is itself an analytic continuation of either of the series given by Eqs. (6.62) and (6.63).

Laurent Series

We frequently encounter functions that are analytic and single-valued in an annular region, say, of inner radius r and outer radius R, as shown in Fig. 6.15. Drawing an imaginary contour line to convert our region into a simply connected region, we apply Cauchy's integral formula, and for two circles C_2 and C_1 centered at z = z_0 and with radii r_2 and r_1, respectively, where r < r_2 < r_1 < R, we have^{13}
$$f(z) = \frac{1}{2\pi i}\oint_{C_1} \frac{f(z')\,dz'}{z'-z} - \frac{1}{2\pi i}\oint_{C_2} \frac{f(z')\,dz'}{z'-z}. \qquad (6.64)$$
Note that in Eq. (6.64) an explicit minus sign has been introduced so that the contour C_2 (like C_1) is to be traversed in the positive (counterclockwise) sense. The treatment of Eq. (6.64) now proceeds exactly like that of Eq. (6.53) in the development of the Taylor series. Each denominator is written as (z′ − z_0) − (z − z_0) and expanded by the binomial theorem, which now follows from the Taylor series (Eq. (6.57)). Noting that for C_1, |z′ − z_0| > |z − z_0|, while for C_2, |z′ − z_0| < |z − z_0|, we find
$$f(z) = \frac{1}{2\pi i}\sum_{n=0}^{\infty}(z-z_0)^n \oint_{C_1}\frac{f(z')\,dz'}{(z'-z_0)^{n+1}} + \frac{1}{2\pi i}\sum_{n=1}^{\infty}(z-z_0)^{-n}\oint_{C_2}(z'-z_0)^{n-1} f(z')\,dz'. \qquad (6.65)$$
The minus sign of Eq. (6.64) has been absorbed by the binomial expansion. Labeling the first series S_1 and the second S_2 we have
$$S_1 = \frac{1}{2\pi i}\sum_{n=0}^{\infty}(z-z_0)^n \oint_{C_1}\frac{f(z')\,dz'}{(z'-z_0)^{n+1}}, \qquad (6.66)$$
which is the regular Taylor expansion, convergent for |z − z_0| < |z′ − z_0| = r_1, that is, for all z interior to the larger circle, C_1. For the second series in Eq.
(6.65) we have
$$S_2 = \frac{1}{2\pi i}\sum_{n=1}^{\infty}(z-z_0)^{-n}\oint_{C_2}(z'-z_0)^{n-1} f(z')\,dz', \qquad (6.67)$$
convergent for |z − z_0| > |z′ − z_0| = r_2, that is, for all z exterior to the smaller circle, C_2. Remember, C_2 now goes counterclockwise.

These two series are combined into one series^{14} (a Laurent series) by
$$f(z) = \sum_{n=-\infty}^{\infty} a_n (z-z_0)^n, \qquad (6.68)$$
where
$$a_n = \frac{1}{2\pi i}\oint_C \frac{f(z')\,dz'}{(z'-z_0)^{n+1}}. \qquad (6.69)$$
Since, in Eq. (6.69), convergence of a binomial expansion is no longer a problem, C may be any contour within the annular region r < |z − z_0| < R encircling z_0 once in a counterclockwise sense. If we assume that such an annular region of convergence does exist, then Eq. (6.68) is the Laurent series, or Laurent expansion, of f(z).

^{13} We may take r_2 arbitrarily close to r and r_1 arbitrarily close to R, maximizing the area enclosed between C_1 and C_2.
^{14} Replace n by −n in S_2 and add.

The use of the contour line (Fig. 6.15) is convenient in converting the annular region into a simply connected region. Since our function is analytic in this annular region (and single-valued), the contour line is not essential and, indeed, does not appear in the final result, Eq. (6.69). Laurent series coefficients need not come from evaluation of contour integrals (which may be very intractable). Other techniques, such as ordinary series expansions, may provide the coefficients.

Numerous examples of Laurent series appear in Chapter 7. We limit ourselves here to one simple example to illustrate the application of Eq. (6.68).

Example 6.5.1 LAURENT EXPANSION

Let f(z) = [z(z − 1)]^{−1}. If we choose z_0 = 0, then r = 0 and R = 1, f(z) diverging at z = 1. A partial fraction expansion yields the Laurent series
$$\frac{1}{z(z-1)} = -\frac{1}{1-z} - \frac{1}{z} = -\frac{1}{z} - 1 - z - z^2 - z^3 - \cdots = -\sum_{n=-1}^{\infty} z^n. \qquad (6.70)$$
From Eqs. (6.70), (6.68), and (6.69) we then have
$$a_n = \frac{1}{2\pi i}\oint \frac{dz'}{(z')^{n+2}(z'-1)} = \begin{cases} -1 & \text{for } n \ge -1, \\ 0 & \text{for } n < -1. \end{cases} \qquad (6.71)$$
The integrals in Eq.
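Equation (6.69) can also be sampled numerically for the function of Example 6.5.1. In this sketch (ours, not the text's; the contour radius 1/2 and step count are arbitrary) the coefficient integral is taken on a circle inside the annulus 0 < |z| < 1:

```python
import cmath
import math

# Numerical sketch of Eq. (6.69) for f(z) = 1/(z(z-1)), expanded about z0 = 0
# on a contour of radius 0.5 lying in the annulus of convergence 0 < |z| < 1.
def laurent_coeff(f, n, radius=0.5, steps=4000):
    total = 0.0
    for k in range(steps):
        theta = 2 * math.pi * k / steps
        z = radius * cmath.exp(1j * theta)   # point on the contour C
        dz = 1j * z * (2 * math.pi / steps)  # dz = i z dθ
        total += f(z) / z ** (n + 1) * dz
    return total / (2j * math.pi)

f = lambda z: 1 / (z * (z - 1))
for n in range(-3, 3):
    # Eq. (6.71) predicts -1 for n >= -1 and 0 for n < -1.
    print(n, laurent_coeff(f, n))
```

The computed coefficients reproduce Eq. (6.71): approximately −1 for n ≥ −1 and 0 for n = −3, −2.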
(6.71) can also be directly evaluated by substituting the geometric-series expansion of (1 − z′)^{−1} used already in Eq. (6.70) for (1 − z)^{−1}:
$$a_n = \frac{-1}{2\pi i}\oint \sum_{m=0}^{\infty} (z')^m \,\frac{dz'}{(z')^{n+2}}. \qquad (6.72)$$
Upon interchanging the order of summation and integration (uniformly convergent series), we have
$$a_n = -\frac{1}{2\pi i}\sum_{m=0}^{\infty} \oint \frac{dz'}{(z')^{n+2-m}}. \qquad (6.73)$$
If we employ the polar form, as in Eq. (6.47) (or compare Exercise 6.4.1),
$$a_n = -\frac{1}{2\pi i}\sum_{m=0}^{\infty}\oint \frac{rie^{i\theta}\,d\theta}{r^{n+2-m}\,e^{i(n+2-m)\theta}} = -\frac{1}{2\pi i}\cdot 2\pi i\sum_{m=0}^{\infty} \delta_{n+2-m,1}, \qquad (6.74)$$
which agrees with Eq. (6.71). ■

The Laurent series differs from the Taylor series by the obvious feature of negative powers of (z − z_0). For this reason the Laurent series will always diverge at least at z = z_0 and perhaps as far out as some distance r (Fig. 6.15).

Exercises

6.5.1 Develop the Taylor expansion of ln(1 + z).
ANS. $\displaystyle\sum_{n=1}^{\infty} (-1)^{n-1}\frac{z^n}{n}.$

6.5.2 Derive the binomial expansion
$$(1+z)^m = 1 + mz + \frac{m(m-1)}{1\cdot 2}z^2 + \cdots = \sum_{n=0}^{\infty}\binom{m}{n} z^n$$
for m any real number. The expansion is convergent for |z| < 1. Why?

6.5.3 A function f(z) is analytic on and within the unit circle. Also, |f(z)| < 1 for |z| ≤ 1 and f(0) = 0. Show that |f(z)| < |z| for |z| ≤ 1.
Hint. One approach is to show that f(z)/z is analytic and then to express [f(z_0)/z_0]^n by the Cauchy integral formula. Finally, consider absolute magnitudes and take the nth root. This exercise is sometimes called Schwarz's theorem.

6.5.4 If f(z) is a real function of the complex variable z = x + iy, that is, if f(x) = f*(x), and the Laurent expansion about the origin, f(z) = Σ a_n z^n, has a_n = 0 for n < −N, show that all of the coefficients a_n are real.
Hint. Show that z^N f(z) is analytic (via Morera's theorem, Section 6.4).

6.5.5 A function f(z) = u(x, y) + iv(x, y) satisfies the conditions for the Schwarz reflection principle. Show that
(a) u is an even function of y.
(b) v is an odd function of y.
6.5.6 A function f(z) can be expanded in a Laurent series about the origin with the coefficients a_n real. Show that the complex conjugate of this function of z is the same function of the complex conjugate of z; that is, f*(z) = f(z*). Verify this explicitly for
(a) f(z) = z^n, n an integer,
(b) f(z) = sin z.
If f(z) = iz (a_1 = i), show that the foregoing statement does not hold.

6.5.7 The function f(z) is analytic in a domain that includes the real axis. When z is real (z = x), f(x) is pure imaginary.
(a) Show that
$$f(z^*) = -\left[f(z)\right]^*.$$
(b) For the specific case f(z) = iz, develop the Cartesian forms of f(z), f(z*), and f*(z). Do not quote the general result of part (a).

6.5.8 Develop the first three nonzero terms of the Laurent expansion of
$$f(z) = \left(e^z - 1\right)^{-1}$$
about the origin. Notice the resemblance to the Bernoulli number-generating function, Eq. (5.144) of Section 5.9.

6.5.9 Prove that the Laurent expansion of a given function about a given point is unique; that is, if
$$f(z) = \sum_{n=-N}^{\infty} a_n (z-z_0)^n = \sum_{n=-N}^{\infty} b_n (z-z_0)^n,$$
show that a_n = b_n for all n.
Hint. Use the Cauchy integral formula.

6.5.10 (a) Develop a Laurent expansion of f(z) = [z(z − 1)]^{−1} about the point z = 1 valid for small values of |z − 1|. Specify the exact range over which your expansion holds. This is an analytic continuation of Eq. (6.70).
(b) Determine the Laurent expansion of f(z) about z = 1 but for |z − 1| large.
Hint. Partial fraction this function and use the geometric series.

6.5.11 (a) Given f_1(z) = ∫_0^∞ e^{−zt} dt (with t real), show that the domain in which f_1(z) exists (and is analytic) is ℜ(z) > 0.
(b) Show that f_2(z) = 1/z equals f_1(z) over ℜ(z) > 0 and is therefore an analytic continuation of f_1(z) over the entire z-plane except for z = 0.
(c) Expand 1/z about the point z = i. You will have f_3(z) = Σ_{n=0}^∞ a_n(z − i)^n. What is the domain of f_3(z)?
ANS. $\dfrac{1}{z} = -i\displaystyle\sum_{n=0}^{\infty} i^n (z-i)^n, \qquad |z-i| < 1.$

6.6
SINGULARITIES

The Laurent expansion represents a generalization of the Taylor series in the presence of singularities. We define the point z_0 as an isolated singular point of the function f(z) if f(z) is not analytic at z = z_0 but is analytic at all neighboring points.

Poles

In the Laurent expansion of f(z) about z_0,
$$f(z) = \sum_{m=-\infty}^{\infty} a_m (z-z_0)^m, \qquad (6.75)$$
if a_m = 0 for m < −n < 0 and a_{−n} ≠ 0, we say that z_0 is a pole of order n. For instance, if n = 1, that is, if a_{−1}/(z − z_0) is the first nonvanishing term in the Laurent series, we have a pole of order 1, often called a simple pole.

If, on the other hand, the summation continues to m = −∞, then z_0 is a pole of infinite order and is called an essential singularity. These essential singularities have many pathological features. For instance, we can show that in any small neighborhood of an essential singularity of f(z) the function f(z) comes arbitrarily close to any (and therefore every) preselected complex quantity w_0.^{15} Here, the entire w-plane is mapped by f into the neighborhood of the point z_0. One point of fundamental difference between a pole of finite order n and an essential singularity is that by multiplying f(z) by (z − z_0)^n, f(z)(z − z_0)^n is no longer singular at z_0. This obviously cannot be done for an essential singularity.

The behavior of f(z) as z → ∞ is defined in terms of the behavior of f(1/t) as t → 0. Consider the function
$$\sin z = \sum_{n=0}^{\infty} \frac{(-1)^n z^{2n+1}}{(2n+1)!}. \qquad (6.76)$$
As z → ∞, we replace the z by 1/t to obtain
$$\sin\left(\frac{1}{t}\right) = \sum_{n=0}^{\infty} \frac{(-1)^n}{(2n+1)!\,t^{2n+1}}. \qquad (6.77)$$
From the definition, sin z has an essential singularity at infinity. This result could be anticipated from Exercise 6.1.9, since sin z = sin iy = i sinh y when x = 0, which approaches infinity exponentially as y → ∞. Thus, although the absolute value of sin x for real x is equal to or less than unity, the absolute value of sin z is not bounded.
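The claim that f(z) comes arbitrarily close to every preselected w_0 near an essential singularity can be made concrete for e^{1/z} (compare Exercise 6.6.5). This sketch is ours, not the book's: the solutions of e^{1/z} = w_0 are z_k = 1/(Log w_0 + 2πik), and they crowd into every neighborhood of z = 0 as k grows:

```python
import cmath

# Explicit solutions of e^{1/z} = w0 near the essential singularity at z = 0:
# taking logarithms, 1/z = Log(w0) + 2πik, so z_k = 1/(Log(w0) + 2πik).
# As k -> infinity, |z_k| -> 0 while e^{1/z_k} = w0 exactly.
w0 = 2 - 3j                       # an arbitrary nonzero target value
log_w0 = cmath.log(w0)            # principal logarithm of w0
for k in (1, 10, 100, 1000):
    z_k = 1 / (log_w0 + 2j * cmath.pi * k)
    print(abs(z_k), abs(cmath.exp(1 / z_k) - w0))  # |z_k| shrinks, residual ~ 0
```

Each printed residual is at roundoff level, while |z_k| decreases roughly like 1/(2πk): every punctured neighborhood of the origin contains infinitely many points where e^{1/z} equals the chosen w_0.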
A function that is analytic throughout the finite complex plane except for isolated poles is called meromorphic; examples are ratios of two polynomials, tan z, and cot z. By contrast, entire functions have no singularities in the finite complex plane; examples are exp(z), sin z, and cos z (see Sections 5.9, 5.11).

^{15} This theorem is due to Picard. A proof is given by E. C. Titchmarsh, The Theory of Functions, 2nd ed. New York: Oxford University Press (1939).

Branch Points

There is another sort of singularity that will be important in Chapter 7. Consider
$$f(z) = z^a,$$
in which a is not an integer.^{16} As z moves around the unit circle from e^0 to e^{2πi},
$$f(z) \to e^{2\pi a i} \ne e^{0\cdot a} = 1$$
for nonintegral a. We have a branch point at the origin and another at infinity. If we set z = 1/t, a similar analysis of f(z) for t → 0 shows that t = 0, that is, z = ∞, is also a branch point. The points e^{0i} and e^{2πi} in the z-plane coincide, but these coincident points lead to different values of f(z); that is, f(z) is a multivalued function. The problem is resolved by constructing a cut line joining both branch points so that f(z) will be uniquely specified for a given point in the z-plane. For z^a, the cut line can go out at any angle. Note that the point at infinity must be included here; that is, the cut line may join finite branch points via the point at infinity. The next example is a case in point. If a = p/q is a rational number, then q is called the order of the branch point, because one needs to go around the branch point q times before coming back to the starting point. If a is irrational, then the order of the branch point is infinite, just as for the logarithm.

Note that a function with a branch point and a required cut line will not be continuous across the cut line. Often there will be a phase difference on opposite sides of this cut line.
Hence line integrals on opposite sides of this branch point cut line will not generally cancel each other. Numerous examples of this case appear in the exercises.

The contour line used to convert a multiply connected region into a simply connected region (Section 6.3) is completely different. Our function is continuous across that contour line, and no phase difference exists.

Example 6.6.1 BRANCH POINTS OF ORDER 2

Consider the function
$$f(z) = \left(z^2-1\right)^{1/2} = (z+1)^{1/2}(z-1)^{1/2}. \qquad (6.78)$$
The first factor on the right-hand side, (z + 1)^{1/2}, has a branch point at z = −1. The second factor has a branch point at z = +1. At infinity f(z) has a simple pole. This is best seen by substituting z = 1/t and making a binomial expansion at t = 0:
$$\left(z^2-1\right)^{1/2} = \frac{1}{t}\left(1-t^2\right)^{1/2} = \frac{1}{t}\sum_{n=0}^{\infty}\binom{1/2}{n}(-1)^n t^{2n} = \frac{1}{t} - \frac{t}{2} - \frac{t^3}{8} - \cdots.$$
The cut line has to connect both branch points, so it is not possible to encircle either branch point completely. To check on the possibility of taking the line segment joining z = +1 and z = −1 as a cut line, let us follow the phases of these two factors as we move along the contour shown in Fig. 6.16. For convenience in following the changes of phase let z + 1 = re^{iθ} and z − 1 = ρe^{iϕ}. Then the phase of f(z) is (θ + ϕ)/2. We start at point 1, where both z + 1 and z − 1 have a phase of zero. Moving from point 1 to point 2, ϕ, the phase of z − 1 = ρe^{iϕ}, increases by π. (z − 1 becomes negative.) ϕ then stays constant until the circle is completed, moving from 6 to 7.

FIGURE 6.16 Branch cut and phases of Table 6.1.

Table 6.1 Phase Angle

Point    θ      ϕ      (θ + ϕ)/2
1        0      0      0
2        0      π      π/2
3        0      π      π/2
4        π      π      π
5        2π     π      3π/2
6        2π     π      3π/2
7        2π     2π     2π

^{16} z = 0 is a singular point, for z^a has only a finite number of derivatives, whereas an analytic function is guaranteed an infinite number of derivatives (Section 6.4). The problem is that f(z) is not single-valued as we encircle the origin. The Cauchy integral formula may not be applied.
θ, the phase of z + 1 = re^{iθ}, shows a similar behavior, increasing by 2π as we move from 3 to 5. The phase of the function
$$f(z) = (z+1)^{1/2}(z-1)^{1/2} = r^{1/2}\rho^{1/2}\,e^{i(\theta+\phi)/2}$$
is (θ + ϕ)/2. This is tabulated in the final column of Table 6.1.

Two features emerge:

1. The phase at points 5 and 6 is not the same as the phase at points 2 and 3. This behavior can be expected at a branch cut.
2. The phase at point 7 exceeds that at point 1 by 2π, and the function f(z) = (z² − 1)^{1/2} is therefore single-valued for the contour shown, encircling both branch points.

If we take the x-axis, −1 ≤ x ≤ 1, as a cut line, f(z) is uniquely specified. Alternatively, the positive x-axis for x > 1 and the negative x-axis for x < −1 may be taken as cut lines. The branch points cannot be encircled, and the function remains single-valued. These two cut lines are, in fact, one branch cut from −1 to +1 via the point at infinity. ■

Generalizing from this example, we have that the phase of a function
$$f(z) = f_1(z)\cdot f_2(z)\cdot f_3(z)\cdots$$
is the algebraic sum of the phases of its individual factors:
$$\arg f(z) = \arg f_1(z) + \arg f_2(z) + \arg f_3(z) + \cdots.$$
The phase of an individual factor may be taken as the arctangent of the ratio of its imaginary part to its real part (choosing the appropriate branch of the arctan function tan^{−1}(y/x), which has infinitely many branches),
$$\arg f_i(z) = \tan^{-1}\left(\frac{v_i}{u_i}\right).$$
For the case of a factor of the form f_i(z) = (z − z_0), the phase corresponds to the phase angle of a two-dimensional vector from +z_0 to z, the phase increasing by 2π as the point +z_0 is encircled. Conversely, the traversal of any closed loop not encircling z_0 does not change the phase of z − z_0.

Exercises

6.6.1 The function f(z) expanded in a Laurent series exhibits a pole of order m at z = z_0. Show that the coefficient of (z − z_0)^{−1}, a_{−1}, is given by
$$a_{-1} = \frac{1}{(m-1)!}\,\frac{d^{m-1}}{dz^{m-1}}\left[(z-z_0)^m f(z)\right]_{z=z_0},$$
with
$$a_{-1} = \left[(z-z_0)\, f(z)\right]_{z=z_0}$$
when the pole is a simple pole (m = 1). These equations for a_{−1} are extremely useful in determining the residue to be used in the residue theorem of Section 7.1.
Hint. The technique that was so successful in proving the uniqueness of power series, Section 5.7, will work here also.

6.6.2 A function f(z) can be represented by
$$f(z) = \frac{f_1(z)}{f_2(z)},$$
in which f_1(z) and f_2(z) are analytic. The denominator, f_2(z), vanishes at z = z_0, showing that f(z) has a pole at z = z_0. However, f_1(z_0) ≠ 0, f_2′(z_0) ≠ 0. Show that a_{−1}, the coefficient of (z − z_0)^{−1} in a Laurent expansion of f(z) at z = z_0, is given by
$$a_{-1} = \frac{f_1(z_0)}{f_2'(z_0)}.$$
(This result leads to the Heaviside expansion theorem, Exercise 15.12.11.)

6.6.3 In analogy with Example 6.6.1, consider in detail the phase of each factor and the resultant overall phase of f(z) = (z² + 1)^{1/2} following a contour similar to that of Fig. 6.16 but encircling the new branch points.

6.6.4 The Legendre function of the second kind, Q_ν(z), has branch points at z = ±1. The branch points are joined by a cut line along the real (x) axis.
(a) Show that Q_0(z) = ½ ln((z + 1)/(z − 1)) is single-valued (with the real axis −1 ≤ x ≤ 1 taken as a cut line).
(b) For real argument x and |x| < 1 it is convenient to take
$$Q_0(x) = \frac{1}{2}\ln\frac{1+x}{1-x}.$$
Show that
$$Q_0(x) = \frac{1}{2}\left[Q_0(x+i0) + Q_0(x-i0)\right].$$
Here x + i0 indicates that z approaches the real axis from above, and x − i0 indicates an approach from below.

6.6.5 As an example of an essential singularity, consider e^{1/z} as z approaches zero. For any complex number z_0, z_0 ≠ 0, show that
$$e^{1/z} = z_0$$
has an infinite number of solutions.

6.7 MAPPING

In the preceding sections we have defined analytic functions and developed some of their main features.
Here we introduce some of the more geometric aspects of functions of complex variables, aspects that will be useful in visualizing the integral operations in Chapter 7 and that are valuable in their own right in solving Laplace’s equation in two-dimensional systems. In ordinary analytic geometry we may take y = f (x) and then plot y versus x. Our problem here is more complicated, for z is a function of two variables, x and y. We use the notation w = f (z) = u(x, y) + iv(x, y). (6.79) Then for a point in the z-plane (specific values for x and y) there may correspond specific values for u(x, y) and v(x, y) that then yield a point in the w-plane. As points in the z-plane transform, or are mapped into points in the w-plane, lines or areas in the z-plane will be mapped into lines or areas in the w-plane. Our immediate purpose is to see how lines and areas map from the z-plane to the w-plane for a number of simple functions. Translation w = z + z0 . (6.80) The function w is equal to the variable z plus a constant, z0 = x0 + iy0 . By Eqs. (6.1) and (6.79), u = x + x0 , v = y + y0 , representing a pure translation of the coordinate axes, as shown in Fig. 6.17. (6.81) 444 Chapter 6 Functions of a Complex Variable I FIGURE 6.17 Translation. Rotation w = zz0 . (6.82) Here it is convenient to return to the polar representation, using w = ρeiϕ , z = reiθ , and z0 = r0 eiθ0 , (6.83) then ρeiϕ = rr0 ei(θ+θ0 ) , (6.84) or ρ = rr0 , ϕ = θ + θ0 . (6.85) Two things have occurred. First, the modulus r has been modified, either expanded or contracted, by the factor r0 . Second, the argument θ has been increased by the additive constant θ0 (Fig. 6.18). This represents a rotation of the complex variable through an angle θ0 . For the special case of z0 = i, we have a pure rotation through π/2 radians. FIGURE 6.18 Rotation. 6.7 Mapping 445 Inversion 1 w= . z (6.86) 1 1 = e−iθ , reiθ r (6.87) Again, using the polar form, we have ρeiϕ = which shows that 1 ρ= , ϕ = −θ. 
(6.88) r The first part of Eq. (6.87) shows that inversion clearly. The interior of the unit circle is mapped onto the exterior and vice versa (Fig. 6.19). In addition, the second part of Eq. (6.87) shows that the polar angle is reversed in sign. Equation (6.88) therefore also involves a reflection of the y-axis, exactly like the complex conjugate equation. To see how curves in the z-plane transform into the w-plane, we return to the Cartesian form: 1 u + iv = . (6.89) x + iy Rationalizing the right-hand side by multiplying numerator and denominator by z∗ and then equating the real parts and the imaginary parts, we have x u u= 2 , x= 2 , 2 x +y u + v2 (6.90) v y , y = − . v=− 2 x + y2 u2 + v 2 FIGURE 6.19 Inversion. 446 Chapter 6 Functions of a Complex Variable I A circle centered at the origin in the z-plane has the form x2 + y2 = r 2 (6.91) u2 v2 + = r 2. (u2 + v 2 )2 (u2 + v 2 )2 (6.92) and by Eqs. (6.90) transforms into Simplifying Eq. (6.92), we obtain u2 + v 2 = 1 = ρ2, r2 (6.93) which describes a circle in the w-plane also centered at the origin. The horizontal line y = c1 transforms into −v = c1 , u2 + v 2 (6.94)   1 1 2 = , u2 + v + 2c1 (2c1 )2 (6.95) or which describes a circle in the w-plane of radius (1/2c1 ) and centered at u = 0, v = − 2c11 (Fig. 6.20). We pick up the other three possibilities, x = ±c1 , y = −c1 , by rotating the xy-axes. In general, any straight line or circle in the z-plane will transform into a straight line or a circle in the w-plane (compare Exercise 6.7.1). FIGURE 6.20 Inversion, line ↔ circle. 6.7 Mapping 447 Branch Points and Multivalent Functions The three transformations just discussed have all involved one-to-one correspondence of points in the z-plane to points in the w-plane. Now to illustrate the variety of transformations that are possible and the problems that can arise, we introduce first a two-to-one correspondence and then a many-to-one correspondence. Finally, we take up the inverses of these two transformations. 
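Before turning to those examples, the inversion formulas above are easy to check numerically. The sketch below (a Python verification of ours, not part of the text) samples points on the line y = c₁ in the z-plane and confirms that their images under w = 1/z lie on the circle of radius 1/(2c₁) centered at u = 0, v = −1/(2c₁), as Eq. (6.95) predicts.

```python
# Numerical check of Eq. (6.95): under w = 1/z the horizontal line
# y = c1 in the z-plane maps onto a circle in the w-plane of radius
# 1/(2*c1) centered at (u, v) = (0, -1/(2*c1)).
c1 = 0.5
center = complex(0.0, -1.0 / (2.0 * c1))
radius = 1.0 / (2.0 * c1)

def image_of_line_point(x, c1=c1):
    """Map the point z = x + i*c1 on the line y = c1 through w = 1/z."""
    z = complex(x, c1)
    return 1.0 / z

# Every image point should sit on the predicted circle.
for x in [-10.0, -2.0, -0.3, 0.0, 0.7, 5.0]:
    w = image_of_line_point(x)
    assert abs(abs(w - center) - radius) < 1e-12
```

The same few lines also confirm Eq. (6.93): a circle |z| = r maps onto the circle |w| = 1/r, since |1/z| = 1/|z|.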
Consider first the transformation w = z2 , (6.96) which leads to ρ = r 2, ϕ = 2θ. (6.97) Clearly, our transformation is nonlinear, for the modulus is squared, but the significant feature of Eq. (6.96) is that the phase angle or argument is doubled. This means that the π , → upper half-plane of w, 0 ≤ ϕ < π , 2 • upper half-plane of z, 0 ≤ θ < π , → whole plane of w, 0 ≤ ϕ < 2π . • first quadrant of z, 0 ≤ θ 0, one point on the line u = c1 corresponds, and vice versa. However, every point on the line u = c1 also corresponds to a point on the hyperbola x 2 − y 2 = c1 in the left half-plane, x < 0, as already explained. It will be shown in Section 6.8 that if lines in the w-plane are orthogonal, the corresponding lines in the z-plane are also orthogonal, as long as the transformation is analytic. Since u = c1 and v = c2 are constructed perpendicular to each other, the corresponding hyperbolas in the z-plane are orthogonal. We have constructed a new orthogonal system of hyperbolic lines (or surfaces if we add an axis perpendicular to x and y). Exercise 2.1.3 was an analysis of this system. It might be noted that if the hyperbolic lines are electric or magnetic lines of force, then we have a quadrupole lens useful in focusing beams of high-energy particles. The inverse of the fourth transformation (Eq. (6.96)) is w = z1/2 . (6.100) 448 Chapter 6 Functions of a Complex Variable I FIGURE 6.21 Mapping — hyperbolic coordinates. From the relation ρeiϕ = r 1/2 eiθ/2 (6.101) 2ϕ = θ, (6.102) w = ez (6.103) ρeiϕ = ex+iy , (6.104) and we now have two points in the w-plane (arguments ϕ and ϕ + π ) corresponding to one point in the z-plane (except for the point z = 0). Or, to put it another way, θ and θ + 2π correspond to ϕ and ϕ + π , two distinct points in the w-plane. This is the complex variable analog of the simple real variable equation y 2 = x, in which two values of y, plus and minus, correspond to each value of x. 
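The phase-doubling of w = z², Eq. (6.97), and the hyperbolic coordinate lines u = x² − y², v = 2xy can likewise be verified directly. The following short Python check (ours, not part of the text) does both:

```python
import cmath

def square_map(z):
    """w = z**2: the modulus is squared and the phase is doubled (Eq. 6.97)."""
    return z * z

# Phase doubling: a point with argument theta maps to argument 2*theta,
# which is why the first quadrant of z fills the upper half-plane of w.
z = cmath.rect(1.7, 0.4)          # r = 1.7, theta = 0.4
w = square_map(z)
assert abs(abs(w) - 1.7 ** 2) < 1e-12
assert abs(cmath.phase(w) - 0.8) < 1e-12

# Cartesian form: u = x**2 - y**2, v = 2*x*y, so the lines u = c1 in the
# w-plane correspond to the hyperbolas x**2 - y**2 = c1 in the z-plane.
x, y = 1.3, 0.6
w = square_map(complex(x, y))
assert abs(w.real - (x * x - y * y)) < 1e-12
assert abs(w.imag - 2.0 * x * y) < 1e-12
```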
The important point here is that we can make the function w of Eq. (6.100) a singlevalued function instead of a double-valued function if we agree to restrict θ to a range such as 0 ≤ θ < 2π . This may be done by agreeing never to cross the line θ = 0 in the z-plane (Fig. 6.22). Such a line of demarcation is called a cut line or branch cut. Note that branch points occur in pairs. The cut line joins the two branch point singularities, here at 0 and ∞ (for the latter, transform z = 1/t for t → 0). Any line from z = 0 to infinity would serve equally well. The purpose of the cut line is to restrict the argument of z. The points z and z exp(2πi) coincide in the z-plane but yield different points w and −w = w exp(πi) in the w-plane. Hence in the absence of a cut line, the function w = z1/2 is ambiguous. Alternatively, since the function w = z1/2 is double-valued, we can also glue two sheets of the complex zplane together along the branch cut so that arg(z) increases beyond 2π along the branch cut and continues from 4π on the second sheet to reach the same function values for z as for ze−4πi , that is, the start on the first sheet again. This construction is called the Riemann surface of w = z1/2 . We shall encounter branch points and cut lines (branch cuts) frequently in Chapter 7. The transformation leads to 6.7 Mapping 449 FIGURE 6.22 A cut line. or ρ = ex , ϕ = y. (6.105) If y ranges from 0 ≤ y < 2π (or −π < y ≤ π ), then ϕ covers the same range. But this is the whole w-plane. In other words, a horizontal strip in the z-plane of width 2π maps into the entire w-plane. Further, any point x + i(y + 2nπ), in which n is any integer, maps into the same point (by Eq. (6.104)) in the w-plane. We have a many-(infinitely many)-to-one correspondence. Finally, as the inverse of the fifth transformation (Eq. (6.103)), we have w = ln z. (6.106) u + iv = ln reiθ = ln r + iθ. 
(6.107) By expanding it, we obtain For a given point z0 in the z-plane the argument θ is unspecified within an integral multiple of 2π . This means that v = θ + 2nπ, (6.108) and, as in the exponential transformation, we have an infinitely many-to-one correspondence. Equation (6.108) has a nice physical representation. If we go around the unit circle in the z-plane, r = 1, and by Eq. (6.107), u = ln r = 0; but v = θ , and θ is steadily increasing and continues to increase as θ continues past 2π . The cut line joins the branch point at the origin with infinity. As θ increases past 2π we glue a new sheet of the complex z-plane along the cut line, etc. Going around the unit circle in the z-plane is like the advance of a screw as it is rotated or the ascent of a person walking up a spiral staircase (Fig. 6.23), which is the Riemann surface of w = ln z. As in the preceding example, we can also make the correspondence unique (and Eq. (6.106) unambiguous) by restricting θ to a range such as 0 ≤ θ < 2π by taking the 450 Chapter 6 Functions of a Complex Variable I FIGURE 6.23 This is the Riemann surface for ln z, a multivalued function. line θ = 0 (positive real axis) as a cut line. This is equivalent to taking one and only one complete turn of the spiral staircase. The concept of mapping is a very broad and useful one in mathematics. Our mapping from a complex z-plane to a complex w-plane is a simple generalization of one definition of function: a mapping of x (from one set) into y in a second set. A more sophisticated form of mapping appears in Section 1.15 where we use the Dirac delta function δ(x − a) to map a function f (x) into its value at the point a. Then in Chapter 15 integral transforms are used to map one function f (x) in x-space into a second (related) function F (t) in t-space. Exercises 6.7.1 How do circles centered on the origin in the z-plane transform for 1 1 (b) w2 (z) = z − , (a) w1 (z) = z + , z z What happens when |z| → 1? 

6.7.2 What part of the z-plane corresponds to the interior of the unit circle in the w-plane if

(a) w = (z − 1)/(z + 1),   (b) w = (z − i)/(z + i)?

6.7.3 Discuss the transformations

(a) w(z) = sin z,  (b) w(z) = cos z,  (c) w(z) = sinh z,  (d) w(z) = cosh z.

Show how the lines x = c₁, y = c₂ map into the w-plane. Note that the last three transformations can be obtained from the first one by appropriate translation and/or rotation.

6.8 Conformal Mapping 451

FIGURE 6.24 Bessel function integration contour.

6.7.4 Show that the function

w(z) = (z^2 − 1)^{1/2}

is single-valued if we take −1 ≤ x ≤ 1, y = 0 as a cut line.

6.7.5 Show that negative numbers have logarithms in the complex plane. In particular, find ln(−1).
ANS. ln(−1) = iπ.

6.7.6 An integral representation of the Bessel function follows the contour in the t-plane shown in Fig. 6.24. Map this contour into the θ-plane with t = e^θ. Many additional examples of mapping are given in Chapters 11, 12, and 13.

6.7.7 For noninteger m, show that the binomial expansion of Exercise 6.5.2 holds only for a suitably defined branch of the function (1 + z)^m. Show how the z-plane is cut. Explain why |z| < 1 may be taken as the circle of convergence for the expansion of this branch, in light of the cut you have chosen.

6.7.8 The Taylor expansion of Exercises 6.5.2 and 6.7.7 is not suitable for branches other than the one suitably defined branch of the function (1 + z)^m for noninteger m. [Note that other branches cannot have the same Taylor expansion since they must be distinguishable.] Using the same branch cut of the earlier exercises for all other branches, find the corresponding Taylor expansions, detailing the phase assignments and Taylor coefficients.

6.8 CONFORMAL MAPPING

In Section 6.7 hyperbolas were mapped into straight lines and straight lines were mapped into circles. Yet in all these transformations one feature stayed constant.
This constancy was a result of the fact that all the transformations of Section 6.7 were analytic. As long as w = f (z) is an analytic function, we have df dw w = = lim . dz dz z→0 z (6.109) 452 Chapter 6 Functions of a Complex Variable I FIGURE 6.25 Conformal mapping — preservation of angles. Assuming that this equation is in polar form, we may equate modulus to modulus and argument to argument. For the latter (assuming that df/dz = 0), arg lim z→0 w w = lim arg z→0 z z = lim arg w − lim arg z = arg z→0 z→0 df = α, dz (6.110) where α, the argument of the derivative, may depend on z but is a constant for a fixed z, independent of the direction of approach. To see the significance of this, consider two curves Cz in the z-plane and the corresponding curve Cw in the w-plane (Fig. 6.25). The increment z is shown at an angle of θ relative to the real (x) axis, whereas the corresponding increment w forms an angle of ϕ with the real (u) axis. From Eq. (6.110), ϕ = θ + α, (6.111) or any line in the z-plane is rotated through an angle α in the w-plane as long as w is an analytic transformation and the derivative is not zero.17 Since this result holds for any line through z0 , it will hold for a pair of lines. Then for the angle between these two lines, ϕ2 − ϕ1 = (θ2 + α) − (θ1 + α) = θ2 − θ1 , (6.112) which shows that the included angle is preserved under an analytic transformation. Such angle-preserving transformations are called conformal. The rotation angle α will, in general, depend on z. In addition |f ′ (z)| will usually be a function of z. Historically, these conformal transformations have been of great importance to scientists and engineers in solving Laplace’s equation for problems of electrostatics, hydrodynamics, heat flow, and so on. Unfortunately, the conformal transformation approach, however elegant, is limited to problems that can be reduced to two dimensions. 
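The angle-preservation statement of Eq. (6.112) can be checked numerically. The sketch below (ours, with hypothetical helper names; a finite-difference approximation, not a proof) maps two short line elements through z₀ with the analytic function f(z) = z² and confirms that the angle between their images equals the original angle:

```python
import cmath

def angle_between_images(f, z0, d1, d2, h=1e-6):
    """Angle between the images of two small steps d1, d2 taken from z0
    under the map f, approximated by finite differences."""
    w1 = f(z0 + h * d1) - f(z0)
    w2 = f(z0 + h * d2) - f(z0)
    return cmath.phase(w2 / w1)

f = lambda z: z * z                 # analytic, f'(z) = 2z
z0 = complex(1.0, 0.5)              # f'(z0) != 0, so angles are preserved
d1 = cmath.exp(1j * 0.3)            # two directions differing by 0.6 rad
d2 = cmath.exp(1j * 0.9)

original_angle = 0.6
mapped_angle = angle_between_images(f, z0, d1, d2)
assert abs(mapped_angle - original_angle) < 1e-5
```

Both image elements are rotated through the same angle α = arg f′(z₀), so the angle between them, Eq. (6.112), is unchanged; at a point where f′(z₀) = 0 this cancellation fails (compare Exercise 6.8.1).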
The method is often beautiful if there is a high degree of symmetry present but often impossible if the symmetry is broken or absent. Because of these limitations and primarily because electronic computers offer a useful alternative (iterative solution of the partial differential equation), the details and applications of conformal mappings are omitted. 17 If df/dz = 0, its argument or phase is undefined and the (analytic) transformation will not necessarily preserve angles. 6.8 Additional Readings 453 Exercises 6.8.1 Expand w(x) in a Taylor series about the point z = z0 , where f ′ (z0 ) = 0. (Angles are not preserved.) Show that if the first n − 1 derivatives vanish but f (n) (z0 ) = 0, then angles in the z-plane with vertices at z = z0 appear in the w-plane multiplied by n. 6.8.2 Develop the transformations that create each of the four cylindrical coordinate systems: x = ρ cos ϕ, y = ρ sin ϕ. (b) Elliptic cylindrical: x = a cosh u cos v, y = a sinh u sin v. (c) Parabolic cylindrical: x = ξ η,  y = 21 η2 − ξ 2 . a sinh η , (d) Bipolar: x= cosh η − cos ξ a sin ξ y= . cosh η − cos ξ Note. These transformations are not necessarily analytic. (a) Circular cylindrical: 6.8.3 In the transformation a−w , a+w how do the coordinate lines in the z-plane transform? What coordinate system have you constructed? ez = Additional Readings Ahlfors, L. V., Complex Analysis, 3rd ed. New York: McGraw-Hill (1979). This text is detailed, thorough, rigorous, and extensive. Churchill, R. V., J. W. Brown, and R. F. Verkey, Complex Variables and Applications, 5th ed. New York: McGrawHill (1989). This is an excellent text for both the beginning and advanced student. It is readable and quite complete. A detailed proof of the Cauchy–Goursat theorem is given in Chapter 5. Greenleaf, F. P., Introduction to Complex Variables. Philadelphia: Saunders (1972). This very readable book has detailed, careful explanations. Kurala, A., Applied Functions of a Complex Variable. 
New York: Wiley (Interscience) (1972). An intermediatelevel text designed for scientists and engineers. Includes many physical applications. Levinson, N., and R. M. Redheffer, Complex Variables. San Francisco: Holden-Day (1970). This text is written for scientists and engineers who are interested in applications. Morse, P. M., and H. Feshbach, Methods of Theoretical Physics. New York: McGraw-Hill (1953). Chapter 4 is a presentation of portions of the theory of functions of a complex variable of interest to theoretical physicists. Remmert, R., Theory of Complex Functions. New York: Springer (1991). Sokolnikoff, I. S., and R. M. Redheffer, Mathematics of Physics and Modern Engineering, 2nd ed. New York: McGraw-Hill (1966). Chapter 7 covers complex variables. Spiegel, M. R., Complex Variables. New York: McGraw-Hill (1985). An excellent summary of the theory of complex variables for scientists. Titchmarsh, E. C., The Theory of Functions, 2nd ed. New York: Oxford University Press (1958). A classic. 454 Chapter 6 Functions of a Complex Variable I Watson, G. N., Complex Integration and Cauchy’s Theorem. New York: Hafner (orig. 1917, reprinted 1960). A short work containing a rigorous development of the Cauchy integral theorem and integral formula. Applications to the calculus of residues are included. Cambridge Tracts in Mathematics, and Mathematical Physics, No. 15. Other references are given at the end of Chapter 15. CHAPTER 7 FUNCTIONS OF A COMPLEX VARIABLE II In this chapter we return to the analysis that started with the Cauchy–Riemann conditions in Chapter 6 and develop the residue theorem, with major applications to the evaluation of definite and principal part integrals of interest to scientists and asymptotic expansion of integrals by the method of steepest descent. We also develop further specific analytic functions, such as pole expansions of meromorphic functions and product expansions of entire functions. 
Dispersion relations are included because they represent an important application of complex variable methods for physicists.

7.1 CALCULUS OF RESIDUES

Residue Theorem

If the Laurent expansion of a function f(z) = Σ_{n=−∞}^{∞} aₙ(z − z₀)ⁿ is integrated term by term by using a closed contour that encircles one isolated singular point z₀ once in a counterclockwise sense, we obtain (Exercise 6.4.1)

∮ aₙ(z − z₀)ⁿ dz = aₙ (z − z₀)^{n+1}/(n + 1) |_{z₁}^{z₁} = 0,  n ≠ −1.  (7.1)

However, if n = −1, with z − z₀ = re^{iθ},

∮ a₋₁(z − z₀)⁻¹ dz = a₋₁ ∮ [ire^{iθ}/(re^{iθ})] dθ = 2πi a₋₁.  (7.2)

Summarizing Eqs. (7.1) and (7.2), we have

(1/2πi) ∮ f(z) dz = a₋₁.  (7.3)

456 Chapter 7 Functions of a Complex Variable II

FIGURE 7.1 Excluding isolated singularities.

The constant a₋₁, the coefficient of (z − z₀)⁻¹ in the Laurent expansion, is called the residue of f(z) at z = z₀. A set of isolated singularities can be handled by deforming our contour as shown in Fig. 7.1. Cauchy's integral theorem (Section 6.3) leads to

∮_C f(z) dz + ∮_{C₀} f(z) dz + ∮_{C₁} f(z) dz + ∮_{C₂} f(z) dz + · · · = 0.  (7.4)

The circular integral around any given singular point is given by Eq. (7.3),

∮_{Cᵢ} f(z) dz = −2πi a₋₁,zᵢ,  (7.5)

assuming a Laurent expansion about the singular point z = zᵢ. The negative sign comes from the clockwise integration, as shown in Fig. 7.1. Combining Eqs. (7.4) and (7.5), we have

∮_C f(z) dz = 2πi (a₋₁,z₀ + a₋₁,z₁ + a₋₁,z₂ + · · · ) = 2πi × (sum of enclosed residues).  (7.6)

This is the residue theorem. The problem of evaluating one or more contour integrals is replaced by the algebraic problem of computing residues at the enclosed singular points. We first use this residue theorem to develop the concept of the Cauchy principal value. Then in the remainder of this section we apply the residue theorem to a wide variety of definite integrals of mathematical and physical interest.
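The residue theorem, Eq. (7.6), can be tested directly with a brute-force contour integration. In the Python sketch below (ours, not part of the text), f(z) = 1/[z(z − 2)] has simple poles at z = 0 (residue −1/2) and z = 2 (residue +1/2); only the first lies inside the unit circle, so the integral there should be 2πi(−1/2) = −πi, while a circle of radius 3 encloses both and gives zero:

```python
import cmath
import math

def contour_integral(f, center=0.0, radius=1.0, n=20000):
    """Approximate the counterclockwise integral of f around a circle,
    z = center + radius*exp(i*theta), by an equispaced Riemann sum
    (exponentially accurate for integrands analytic on the contour)."""
    total = 0.0 + 0.0j
    for k in range(n):
        theta = 2.0 * math.pi * k / n
        z = center + radius * cmath.exp(1j * theta)
        dz = 1j * radius * cmath.exp(1j * theta) * (2.0 * math.pi / n)
        total += f(z) * dz
    return total

f = lambda z: 1.0 / (z * (z - 2.0))   # simple poles at z = 0 and z = 2

# Unit circle: only the pole at z = 0 (residue -1/2) is enclosed.
assert abs(contour_integral(f) - 2j * math.pi * (-0.5)) < 1e-6

# Radius 3: both residues are enclosed; they sum to -1/2 + 1/2 = 0.
assert abs(contour_integral(f, radius=3.0)) < 1e-6
```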
Using the transformation z = 1/w for w approaching 0, we can find the nature of a singularity at z going to ∞ and the residue of a function f (z) with just isolated singularities and no branch points. In such cases we know that  {residues in the finite z-plane} + {residue at z → ∞} = 0. 7.1 Calculus of Residues 457 Cauchy Principal Value Occasionally an isolated pole will be directly on the contour of integration, causing the integral to diverge. Let us illustrate a physical case. Example 7.1.1 FORCED CLASSICAL OSCILLATOR The inhomogeneous differential equation for a classical, undamped, driven harmonic oscillator, x(t) ¨ + ω02 x(t) = f (t), (7.7) may be solved by representing the driving force f (t) = δ(t ′ − t)f (t ′ ) dt ′ as a superposition of impulses by analogy with an extended charge distribution in electrostatics.1 If we solve first the simpler differential equation ¨ + ω02 G = δ(t − t ′ ) G G(t, t ′ ), (7.8) which is independent of the driving term f (model dependent), then x(t) = for G(t, t ′ )f (t ′ ) dt ′ solves the original problem. First, we verify this by substituting the integrals for x(t) and its time derivatives into the differential equation for x(t) using the dif iωt dω in terms of an integral ˜ ferential equation for G. Then we look for G(t, t ′ ) = G(ω)e 2π ˜ which is suggested by a similar integral form for δ(t − t ′ ) = eiω(t−t ′ ) dω weighted by G, 2π (see Eq. (1.193c) in Section 1.15). ¨ into the differential equation for G, we obtain Upon substituting G and G  2 ˜ − e−iωt ′ eiωt dω = 0. ω0 − ω 2 G (7.9) Because this integral is zero for all t, the expression in brackets must vanish for all ω. This relation is no longer a differential equation but an algebraic relation that we can solve ˜ for G: ′ ′ ′ e−iωt e−iωt e−iωt ˜ − . = G(ω) = 2 ω0 − ω2 2ω0 (ω + ω0 ) 2ω0 (ω − ω0 ) ˜ into the integral for G yields Substituting G ∞ iω(t−t ′ ) ′  e 1 eiω(t−t ) G(t, t ′ ) = dω. 
− 4πω0 −∞ ω + ω0 ω − ω0 (7.10) (7.11) Here, the dependence of G on t − t ′ in the exponential is consistent with the same dependence of δ(t − t ′ ), its driving term. Now, the problem is that this integral diverges because the integrand blows up at ω = ±ω0 , since the integration goes right through the first-order poles. To explain why this happens, we note that the δ-function driving term for G in˜ at cludes all frequencies with the same amplitude. Next, we see that the equation for G ′ t = 0 has its driving term equal to unity for all frequencies ω, including the resonant ω0 . 1 Adapted from A. Yu. Grosberg, priv. comm. 458 Chapter 7 Functions of a Complex Variable II We know from physics that forcing an oscillator at resonance leads to an indefinitely growing amplitude when there is no friction. With friction, the amplitude remains finite, even at resonance. This suggests including a small friction term in the differential equations for x(t) and G. ˙ η > 0, in the differential equation for G(t, t ′ ) (and ηx˙ for With a small friction term ηG, x(t)), we can still solve the algebraic equation  2 ˜ = e−iωt ′ ω0 − ω2 + iηω G (7.12) ˜ with friction. The solution is for G  ′ ′ e−iωt e−iωt 1 1 = − , 2 ω − ω− ω − ω+ ω02 − ω2 + iηω ,   η 2 iη  = ω0 1 − . ω± = ± + , 2 2ω0 ˜ = G (7.13) (7.14) For small friction, 0 < η ≪ ω0 ,  is nearly equal to ω0 and real, whereas ω± each pick up a small imaginary part. This means that the integration of the integral for G, ∞ iω(t−t ′ ) ′  e 1 eiω(t−t ) dω, (7.15) G(t, t ′ ) = − 4π −∞ ω − ω− ω − ω+ no longer encounters a pole and remains finite.  This treatment of an integral with a pole moves the pole off the contour and then considers the limiting behavior as it is brought back, as in Example 7.1.1 for η → 0. 
This example also suggests treating ω as a complex variable in case the singularity is a firstorder pole, deforming the integration path to avoid the singularity, which is equivalent to adding a small imaginary part to the pole position, and evaluating the integral by means of the residue theorem. dz Therefore, if the integration path of an integral z−x for real x0 goes right through the 0 pole x0 , we may deform the contour to include or exclude the residue, as desired, by including a semicircular detour of infinitesimal radius. This is shown in Fig. 7.2. The integration over the semicircle then gives, with z − x0 = δeiϕ , dz = i δeiϕ dϕ (see Eq. (6.27a)), 2π dz dϕ = iπ, i.e., πia−1 if counterclockwise, =i z − x0 π 0 dz =i dϕ = −iπ, i.e., − πia−1 if clockwise. z − x0 π This contribution, + or −, appears on the left-hand side of Eq. (7.6). If our detour were clockwise, the residue would not be enclosed and there would be no corresponding term on the right-hand side of Eq. (7.6). However, if our detour were counterclockwise, this residue would be enclosed by the contour C and a term 2πia−1 would appear on the right-hand side of Eq. (7.6). The net result for either clockwise or counterclockwise detour is that a simple pole on the contour is counted as one-half of what it would be if it were within the contour. This corresponds to taking the Cauchy principal value. 7.1 Calculus of Residues FIGURE 7.2 459 Bypassing singular points. FIGURE 7.3 Closing the contour with an infinite-radius semicircle. For instance, let us suppose that f (z) with a simple pole at z = x0 is integrated over the entire real axis. The contour is closed with an infinite semicircle in the upper half-plane (Fig. 7.3). Then  x0 −δ f (z) dz = f (z) dz f (x) dx + −∞ + = 2πi Cx0 ∞ x0 +δ  f (x) dx + infinite semicircle C enclosed residues. 
(7.16) If the small semicircle Cx0 , includes x0 (by going below the x-axis, counterclockwise), x0 is enclosed, and its contribution appears twice — as πia−1 in Cx and as 2πia−1 in the 0 term 2πi enclosed residues — for a net contribution of πia−1 . If the upper small semicircle is selected, x0 is excluded. The only contribution is from the clockwise integration over Cx0 , which yields −πia−1 . Moving this to the extreme right of Eq. (7.16), we have +πia−1 , as before. The integrals along the x-axis may be combined and the semicircle radius permitted to approach zero. We therefore define  x0 −δ  ∞ ∞ lim f (x) dx. (7.17) f (x) dx = P f (x) dx + δ→0 −∞ −∞ x0 +δ P indicates the Cauchy principal value and represents the preceding limiting process. Note that the Cauchy principal value is a balancing (or canceling) process. In the vicinity of our singularity at z = x0 , f (x) ≈ a−1 . x − x0 (7.18) 460 Chapter 7 Functions of a Complex Variable II FIGURE 7.4 Cancellation at a simple pole. This is odd, relative to x0 . The symmetric or even interval (relative to x0 ) provides cancellation of the shaded areas, Fig. 7.4. The contribution of the singularity is in the integration about the semicircle. In general, if a function f (x) has a singularity x0 somewhere inside the interval a ≤ x0 ≤ b and is integrable over every portion of this interval that does not contain the point x0 , then we define b x0 −δ1 b f (x) dx, f (x) dx = lim f (x) dx + lim δ1 →0 a a δ2 →0 x0 +δ2 when the limit exists as δj → 0 independently, else the integral is said to diverge. If this limit does not exist but the limit δ1 = δ2 = δ → 0 exists, it is defined to be the principal value of the integral. This same limiting technique is applicable to the integration limits ±∞. We define b ∞ f (x) dx = lim f (x) dx, (7.19a) −∞ a→−∞,b→∞ a if the integral exists with a, b approaching their limits independently, else the integral diverges. 
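The balancing character of the principal value is easy to see in a numerical experiment. The sketch below (ours, not part of the text) computes P∫ dx/x over an interval straddling the pole with the same cutoff δ on both sides; the divergent contributions from x → 0⁻ and x → 0⁺ cancel, leaving the finite value ln(b/|a|):

```python
import math

def principal_value_over_x(a, b, delta=1e-3, n=100000):
    """P int_a^b dx/x with a < 0 < b: integrate over [a, -delta] and
    [delta, b] with the SAME delta (midpoint rule) and add the pieces.
    Taking the two cutoffs equal is what defines the principal value."""
    def midpoint(lo, hi):
        h = (hi - lo) / n
        return sum(1.0 / (lo + (k + 0.5) * h) for k in range(n)) * h
    return midpoint(a, -delta) + midpoint(delta, b)

# Each piece alone diverges like ln(delta), but the odd symmetry of 1/x
# about x = 0 cancels the divergences: P int_{-1}^{2} dx/x = ln 2.
pv = principal_value_over_x(-1.0, 2.0)
assert abs(pv - math.log(2.0)) < 1e-3
```

Taking δ₁ ≠ δ₂ would leave an uncancelled ln(δ₂/δ₁), which is exactly why the ordinary integral diverges while the symmetric limit exists.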
In case the integral diverges but ∞ a f (x) dx = P f (x) dx (7.19b) lim a→∞ −a exist, it is defined as its principal value. −∞ 7.1 Calculus of Residues 461 Pole Expansion of Meromorphic Functions Analytic functions f (z) that have only isolated poles as singularities are called meromord ln sin z in Eq. (5.210)] and ratios of polynomials. For phic. Examples are cot z [from dz simplicity we assume that these poles at finite z = an with 0 < |a1 | < |a2 | < · · · are all simple with residues bn . Then an expansion of f (z) in terms of bn (z − an )−1 depends in a systematic way on all singularities of f (z), in contrast to the Taylor expansion about an arbitrarily chosen analytic point z0 of f (z) or the Laurent expansion about one of the singular points of f (z). Let us consider a series of concentric circles Cn about the origin so that Cn includes a1 , a2 , . . . , an but no other poles, its radius Rn → ∞ as n → ∞. To guarantee convergence we assume that |f (z)| < εRn for any small positive constant ε and all z on Cn . Then the series f (z) = f (0) + ∞  n=1 ' ( bn (z − an )−1 + an−1 (7.20) converges to f (z). To prove this theorem (due to Mittag–Leffler) we use the residue theorem to evaluate the contour integral for z inside Cn : f (w) 1 dw In = 2πi Cn w(w − z) = n  m=1 bm f (z) − f (0) + . am (am − z) z (7.21) On Cn we have, for n → ∞, |In | ≤ 2πRn maxw on Cn |f (w)| εRn < →ε 2πRn (Rn − |z|) Rn − |z| for Rn ≫ |z|. Using In → 0 in Eq. (7.21) proves Eq. (7.20). p+1 If |f (z)| < εRn , then we evaluate similarly the integral 1 f (w) In = dw → 0 as n → ∞ 2πi w p+1 (w − z) and obtain the analogous pole expansion f (z) = f (0) + zf ′ (0) + · · · + ∞ p+1 zp f (p) (0)  bn zp+1 /an + p! z − an . (7.22) n=1 Note that the convergence of the series in Eqs. (7.20) and (7.22) is implied by the bound of |f (z)| for |z| → ∞. 
462 Chapter 7 Functions of a Complex Variable II Product Expansion of Entire Functions A function f (z) that is analytic for all finite z is called an entire function. The logarithmic derivative f ′ /f is a meromorphic function with a pole expansion. If f (z) has a simple zero at z = an , then f (z) = (z − an )g(z) with analytic g(z) and g(an ) = 0. Hence the logarithmic derivative f ′ (z) g ′ (z) = (z − an )−1 + f (z) g(z) (7.23) has a simple pole at z = an with residue 1, and g ′ /g is analytic there. If f ′ /f satisfies the conditions that lead to the pole expansion in Eq. (7.20), then  ∞ f ′ (z) f ′ (0)  1 1 + = + f (z) f (0) an z − an (7.24) n=1 holds. Integrating Eq. (7.24) yields z ′ f (z) dz = ln f (z) − ln f (0) 0 f (z)  ∞  zf ′ (0)  z , ln(z − an ) − ln(−an ) + + = f (0) an n=1 and exponentiating we obtain the product expansion   ′ # ∞ z zf (0) ez/an . 1− f (z) = f (0) exp f (0) an (7.25) 1 Examples are the product expansions (see Chapter 5) for sin z = z cos z = ∞  # n=−∞ n =0 ∞  # n=1 1−   ∞  # z z2 1− 2 2 , ez/nπ = z nπ n π  z2 . 1− (n − 1/2)2 π 2 n=1 (7.26) Another example is the product expansion of the gamma function, which will be discussed in Chapter 8. As a consequence of Eq. (7.23) the contour integral of the logarithmic derivative may be used to count the number Nf of zeros (including their multiplicities) of the function f (z) inside the contour C: 1 f ′ (z) dz = Nf . (7.27) 2πi C f (z) 7.1 Calculus of Residues 463 Moreover, using   f ′ (z) dz = ln f (z) = lnf (z) + i arg f (z), f (z) (7.28) we see that the real part in Eq. (7.28) does not change as z moves once around the contour, while the corresponding change in arg f must be C arg(f ) = 2πNf . (7.29) This leads to Rouché’s theorem: If f (z) and g(z) are analytic inside and on a closed contour C and |g(z)| < |f (z)| on C then f (z) and f (z) + g(z) have the same number of zeros inside C. To show this we use   g . 
2πNf +g = C arg(f + g) = C arg(f ) + C arg 1 + f Since |g| < |f | on C, the point w = 1 + g(z)/f (z) is always an interior point of the circle in the w-plane with center at 1 and radius 1. Hence arg(1 + g/f ) must return to its original value when z moves around C (it does not circle the origin); it cannot decrease or increase by a multiple of 2π so that C arg(1 + g/f ) = 0. Rouché’s theorem may be used for an alternative proof of the fundamental theorem of algebra: A polynomial nm=0 am zm with an = 0 has n zeros. We define f (z) = an zn . Then m f has an n-fold zero at the origin and no other zeros. Let g(z) = n−1 m=0 am z . We apply Rouché’s theorem to a circle C with center at the origin and radius R > 1. On C, |f (z)| = |an |R n and   n−1   g(z) ≤ |a0 | + |a1 |R + · · · + |an−1 |R n−1 ≤ |am | R n−1 . m=0 n−1 Hence |g(z)| < |f (z)| for z on C, provided R > ( m=0 |am |)/|an |. For all sufficiently n large circles C therefore, f + g = m=0 am zm has n zeros inside C according to Rouché’s theorem. Evaluation of Definite Integrals Definite integrals appear repeatedly in problems of mathematical physics as well as in pure mathematics. Three moderately general techniques are useful in evaluating definite integrals: (1) contour integration, (2) conversion to gamma or beta functions (Chapter 8), and (3) numerical quadrature. Other approaches include series expansion with term-by-term integration and integral transforms. As will be seen subsequently, the method of contour integration is perhaps the most versatile of these methods, since it is applicable to a wide variety of integrals. 464 Chapter 7 Functions of a Complex Variable II Definite Integrals:  2π 0 f (sin θ, cos θ) dθ The calculus of residues is useful in evaluating a wide variety of definite integrals in both physical and purely mathematical problems. We consider, first, integrals of the form 2π f (sin θ, cos θ ) dθ, (7.30) I= 0 where f is finite for all values of θ . 
We also require f to be a rational function of sin θ and cos θ so that it will be single-valued. Let z = eiθ , dz = ieiθ dθ. From this, dθ = −i dz , z sin θ = z − z−1 , 2i cos θ = z + z−1 . 2 (7.31) Our integral becomes I = −i  f   z − z−1 z + z−1 dz , , 2i 2 z with the path of integration the unit circle. By the residue theorem, Eq. (7.16),  I = (−i)2πi residues within the unit circle. (7.32) (7.33) Note that we are after the residues of f/z. Illustrations of integrals of this type are provided by Exercises 7.1.7–7.1.10. Example 7.1.2 INTEGRAL OF COS IN DENOMINATOR Our problem is to evaluate the definite integral 2π dθ , I= 1 + ε cos θ 0 By Eq. (7.32) this becomes I = −i  2 = −i ε unit circle  |ε| < 1. dz z[1 + (ε/2)(z + z−1 )] dz . z2 + (2/ε)z + 1 The denominator has roots 1 1 1 1 1 − ε2 and z+ = − + 1 − ε2 , z− = − − ε ε ε ε where z+ is within the unit circle and z− is outside. Then by Eq. (7.33) and Exercise 6.6.1,   1 2  . I = −i · 2πi √ ε 2z + 2/ε z=−1/ε+(1/ε) 1−ε2 7.1 Calculus of Residues 465 We obtain 0 2π 2π dθ =√ , 1 + ε cos θ 1 − ε2 Evaluation of Definite Integrals: |ε| < 1.  ∞ −∞ f (x) dx Suppose that our definite integral has the form ∞ I= f (x) dx (7.34) −∞ and satisfies the two conditions: • f (z) is analytic in the upper half-plane except for a finite number of poles. (It will be assumed that there are no poles on the real axis. If poles are present on the real axis, they may be included or excluded as discussed earlier in this section.) • f (z) vanishes as strongly2 as 1/z2 for |z| → ∞, 0 ≤ arg z ≤ π . With these conditions, we may take as a contour of integration the real axis and a semicircle in the upper half-plane, as shown in Fig. 7.5. We let the radius R of the semicircle become infinitely large. Then  R π  f (x) dx + lim f Reiθ iReiθ dθ f (z) dz = lim R→∞ −R = 2πi  R→∞ 0 residues (upper half-plane). From the second condition the second integral (over the semicircle) vanishes and ∞  f (x) dx = 2πi residues (upper half-plane). 
−∞ FIGURE 7.5 Half-circle contour. 2 We could use f (z) vanishes faster than 1/z, and we wish to have f (z) single-valued. (7.35) (7.36) 466 Chapter 7 Functions of a Complex Variable II Example 7.1.3 INTEGRAL OF MEROMORPHIC FUNCTION Evaluate I= ∞ −∞ dx . 1 + x2 (7.37) From Eq. (7.36), ∞ −∞  dx = 2πi residues (upper half-plane). 1 + x2 Here and in every other similar problem we have the question: Where are the poles? Rewriting the integrand as 1 1 1 = · , z2 + 1 z + i z − i (7.38) we see that there are simple poles (order 1) at z = i and z = −i. A simple pole at z = z0 indicates (and is indicated by) a Laurent expansion of the form f (z) = ∞  a−1 an (z − z0 )n . + a0 + z − z0 (7.39) n=1 The residue a−1 is easily isolated as (Exercise 6.6.1) a−1 = (z − z0 )f (z)|z=z0 . (7.40) Using Eq. (7.40), we find that the residue at z = i is 1/2i, whereas that at z = −i is −1/2i. Then ∞ dx 1 = π. (7.41) = 2πi · 2 2i −∞ 1 + x Here we have used a−1 = 1/2i for the residue of the one included pole at z = i. Note that it is possible to use the lower semicircle and that this choice will lead to the same result, I = π . A somewhat more delicate problem is provided by the next example.  Evaluation of Definite Integrals: ∞ −∞ f (x)e iax dx Consider the definite integral I= ∞ f (x)eiax dx, (7.42) −∞ with a real and positive. (This is a Fourier transform, Chapter 15.) We assume the two conditions: • f (z) is analytic in the upper half-plane except for a finite number of poles. 7.1 Calculus of Residues • lim f (z) = 0, |z|→∞ 0 ≤ arg z ≤ π. 467 (7.43) Note that this ∞is a less restrictive condition than the second condition imposed on f (z) for integrating −∞ f (x) dx previously. We employ the contour shown in Fig. 7.5. The application of the calculus of residues is the same as the one just considered, but here we have to work harder to show that the integral over the (infinite) semicircle goes to zero. This integral becomes π  IR = f Reiθ eiaR cos θ−aR sin θ iReiθ dθ. 
(7.44) 0 Let R be so large that |f (z)| = |f (Reiθ )| < ε. Then π −aR sin θ e dθ = 2εR |IR | ≤ εR π/2 e−aR sin θ dθ. (7.45) 0 0 In the range [0, π/2], 2 θ ≤ sin θ. π Therefore (Fig. 7.6) |IR | ≤ 2εR π/2 e−2aRθ/π dθ. (7.46) 0 Now, integrating by inspection, we obtain |IR | ≤ 2εR 1 − e−aR . 2aR/π Finally, lim |IR | ≤ R→∞ π ε. a (7.47) From Eq. (7.43), ε → 0 as R → ∞ and lim |IR | = 0. R→∞ FIGURE 7.6 (a) y = (2/π)θ , (b) y = sin θ . (7.48) 468 Chapter 7 Functions of a Complex Variable II This useful result is sometimes called Jordan’s lemma. With it, we are prepared to tackle Fourier integrals of the form shown in Eq. (7.42). Using the contour shown in Fig. 7.5, we have ∞  f (x)eiax dx + lim IR = 2πi residues (upper half-plane). −∞ R→∞ Since the integral over the upper semicircle IR vanishes as R → ∞ (Jordan’s lemma), ∞  f (x)eiax dx = 2πi residues (upper half-plane) (a > 0). (7.49) −∞ Example 7.1.4 SIMPLE POLE ON CONTOUR OF INTEGRATION The problem is to evaluate I= ∞ 0 This may be taken as the imaginary part3 of I2 = P sin x dx. x ∞ −∞ eiz dz . z (7.50) (7.51) Now the only pole is a simple pole at z = 0 and the residue there by Eq. (7.40) is a−1 = 1. We choose the contour shown in Fig. 7.7 (1) to avoid the pole, (2) to include the real axis, and (3) to yield a vanishingly small integrand for z = iy, y → ∞. Note that in this case a large (infinite) semicircle in the lower half-plane would be disastrous. We have  iz −r R ix e dz eiz dz eiz dz e dx ix dx = + + + = 0, (7.52) e z x z x z C1 C2 −R r FIGURE 7.7 Singularity on contour. 3 One can use ple 7.1.5). [(eiz − e−iz )/2iz] dz, but then two different contours will be needed for the two exponentials (compare Exam- 7.1 Calculus of Residues the final zero coming from the residue theorem (Eq. (7.6)). By Jordan’s lemma eiz dz = 0, z C2 and  eiz dz = z eiz dz +P z C1 ∞ −∞ eix dx = 0. x 469 (7.53) (7.54) The integral over the small semicircle yields (−)πi times the residue of 1, and minus, as a result of going clockwise. 
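Both limits just used, the vanishing of the large semicircle (Jordan's lemma) and the small-semicircle contribution of −πi times the residue, can be checked numerically for the integrand e^{iz}/z. A sketch; the radii and grid sizes below are arbitrary choices.

```python
import cmath
import math

def small_semicircle(r, n=2000):
    """Integral of e^{iz}/z over z = r e^{i theta}, theta from pi down to 0
    (clockwise), by the midpoint rule."""
    total = 0.0 + 0.0j
    for k in range(n):
        theta = math.pi - (k + 0.5) * math.pi / n
        z = r * cmath.exp(1j * theta)
        dz = -1j * z * (math.pi / n)         # d(theta) is negative going clockwise
        total += cmath.exp(1j * z) / z * dz
    return total

def big_semicircle(R, n=20000):
    """|integral| of e^{iz}/z over the upper semicircle of radius R."""
    total = 0.0 + 0.0j
    for k in range(n):
        theta = (k + 0.5) * math.pi / n
        z = R * cmath.exp(1j * theta)
        dz = 1j * z * (math.pi / n)
        total += cmath.exp(1j * z) / z * dz
    return abs(total)

for r in (0.1, 0.01, 0.001):
    print(r, small_semicircle(r))            # approaches -pi*i as r -> 0
print(big_semicircle(10), big_semicircle(100))   # shrinks with R (Jordan's lemma)
```

The small-arc values converge to −πi, the clockwise half of 2πi times the residue 1 at the origin, while the large-arc magnitude decays roughly like 1/R, consistent with the bound |I_R| ≤ πε/a of Eq. (7.47).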
Taking the imaginary part,⁴ we have

$$\int_{-\infty}^{\infty} \frac{\sin x}{x}\,dx = \pi \tag{7.55}$$

or

$$\int_{0}^{\infty} \frac{\sin x}{x}\,dx = \frac{\pi}{2}. \tag{7.56}$$

The contour of Fig. 7.7, although convenient, is not at all unique. Another choice of contour for evaluating Eq. (7.50) is presented as Exercise 7.1.15.

[Footnote 4: Alternatively, we may combine the integrals of Eq. (7.52) as
$$\int_{-R}^{-r} e^{ix}\,\frac{dx}{x} + \int_{r}^{R} e^{ix}\,\frac{dx}{x} = \int_{r}^{R} \frac{e^{ix} - e^{-ix}}{x}\,dx = 2i \int_{r}^{R} \frac{\sin x}{x}\,dx.]$$

Example 7.1.5 QUANTUM MECHANICAL SCATTERING

The quantum mechanical analysis of scattering leads to the function

$$I(\sigma) = \int_{-\infty}^{\infty} \frac{x \sin x\,dx}{x^2 - \sigma^2}, \tag{7.57}$$

where σ is real and positive. This integral is divergent and therefore ambiguous. From the physical conditions of the problem there is a further requirement: I(σ) is to have the form e^{iσ} so that it will represent an outgoing scattered wave. Using

$$\sin z = \frac{1}{i}\sinh iz = \frac{1}{2i}e^{iz} - \frac{1}{2i}e^{-iz}, \tag{7.58}$$

we write Eq. (7.57) in the complex plane as

$$I(\sigma) = I_1 + I_2, \tag{7.59}$$

with

$$I_1 = \frac{1}{2i}\int_{-\infty}^{\infty} \frac{z e^{iz}}{z^2 - \sigma^2}\,dz, \qquad
I_2 = -\frac{1}{2i}\int_{-\infty}^{\infty} \frac{z e^{-iz}}{z^2 - \sigma^2}\,dz. \tag{7.60}$$

FIGURE 7.8 Contours.

Integral I₁ is similar to Example 7.1.4 and, as in that case, we may complete the contour by an infinite semicircle in the upper half-plane, as shown in Fig. 7.8a. For I₂ the exponential is negative and we complete the contour by an infinite semicircle in the lower half-plane, as shown in Fig. 7.8b. As in Example 7.1.4, neither semicircle contributes anything to the integral (Jordan's lemma).

There is still the problem of locating the poles and evaluating the residues. We find poles at z = +σ and z = −σ on the contour of integration. The residues are (Exercises 6.6.1 and 7.1.1)

              z = σ                z = −σ
    I₁        $e^{i\sigma}/2$      $e^{-i\sigma}/2$
    I₂        $e^{-i\sigma}/2$     $e^{i\sigma}/2$

Detouring around the poles, as shown in Fig. 7.8 (it matters little whether we go above or below), we find that the residue theorem leads to

$$P\,I_1 - \pi i\left(\frac{1}{2i}\right)\frac{e^{-i\sigma}}{2} + \pi i\left(\frac{1}{2i}\right)\frac{e^{i\sigma}}{2} = 2\pi i\left(\frac{1}{2i}\right)\frac{e^{i\sigma}}{2}, \tag{7.61}$$

for we have enclosed the singularity at z = σ but excluded the one at z = −σ.

In similar fashion, but noting that the contour for I₂ is clockwise,

$$P\,I_2 - \pi i\left(\frac{-1}{2i}\right)\frac{e^{i\sigma}}{2} + \pi i\left(\frac{-1}{2i}\right)\frac{e^{-i\sigma}}{2} = -2\pi i\left(\frac{-1}{2i}\right)\frac{e^{i\sigma}}{2}. \tag{7.62}$$

Adding Eqs. (7.61) and (7.62), we have

$$P\,I(\sigma) = P\,I_1 + P\,I_2 = \frac{\pi}{2}\left(e^{i\sigma} + e^{-i\sigma}\right) = \pi\cosh i\sigma = \pi\cos\sigma. \tag{7.63}$$

This is a perfectly good evaluation of Eq. (7.57), but unfortunately the cosine dependence is appropriate for a standing wave and not for the outgoing scattered wave as specified.

To obtain the desired form, we try a different technique (compare Example 7.1.1). Instead of dodging around the singular points, let us move them off the real axis. Specifically, let σ → σ + iγ, −σ → −σ − iγ, where γ is positive but small and will eventually be made to approach zero; that is, for I₁ we include one pole and for I₂ the other one,

$$I_+(\sigma) = \lim_{\gamma \to 0} I(\sigma + i\gamma). \tag{7.64}$$

With this simple substitution, the first integral I₁ becomes

$$I_1(\sigma + i\gamma) = 2\pi i\left(\frac{1}{2i}\right)\frac{e^{i(\sigma + i\gamma)}}{2} \tag{7.65}$$

by direct application of the residue theorem. Also,

$$I_2(\sigma + i\gamma) = -2\pi i\left(\frac{-1}{2i}\right)\frac{e^{i(\sigma + i\gamma)}}{2}. \tag{7.66}$$

Adding Eqs. (7.65) and (7.66) and then letting γ → 0, we obtain

$$I_+(\sigma) = \lim_{\gamma \to 0}\left[I_1(\sigma + i\gamma) + I_2(\sigma + i\gamma)\right] = \lim_{\gamma \to 0} \pi e^{i(\sigma + i\gamma)} = \pi e^{i\sigma}, \tag{7.67}$$

a result that does fit the boundary conditions of our scattering problem. It is interesting to note that the substitution σ → σ − iγ would have led to

$$I_-(\sigma) = \pi e^{-i\sigma}, \tag{7.68}$$

which could represent an incoming wave. Our earlier result (Eq. (7.63)) is seen to be the arithmetic average of Eqs. (7.67) and (7.68). This average is the Cauchy principal value of the integral. Note that we have these possibilities (Eqs. (7.63), (7.67), and (7.68)) because our integral is not uniquely defined until we specify the particular limiting process (or average) to be used.

Evaluation of Definite Integrals: Exponential Forms

With exponential or hyperbolic functions present in the integrand, life gets somewhat more complicated than before.
Instead of a general overall prescription, the contour must be chosen to fit the specific integral. These cases are also opportunities to illustrate the versatility and power of contour integration. As an example, we consider an integral that will be quite useful in developing a relation between Ŵ(1 + z) and Ŵ(1 − z). Notice how the periodicity along the imaginary axis is exploited. 472 Chapter 7 Functions of a Complex Variable II FIGURE 7.9 Example 7.1.6 Rectangular contour. FACTORIAL FUNCTION We wish to evaluate I= ∞ −∞ eax dx, 1 + ex 0 < a < 1. (7.69) The limits on a are sufficient (but not necessary) to prevent the integral from diverging as x → ±∞. This integral (Eq. (7.69)) may be handled by replacing the real variable x by the complex variable z and integrating around the contour shown in Fig. 7.9. If we take the limit as R → ∞, the real axis, of course, leads to the integral we want. The return path along y = 2π is chosen to leave the denominator of the integral invariant, at the same time introducing a constant factor ei2πa in the numerator. We have, in the complex plane,  R  R  eax eax eaz i2πa dz = lim dx − e dx x x R→∞ 1 + ez −R 1 + e −R 1 + e ∞ eax  dx. (7.70) = 1 − ei2πa x −∞ 1 + e In addition there are two vertical sections (0 ≤ y ≤ 2π), which vanish (exponentially) as R → ∞. Now where are the poles and what are the residues? We have a pole when ez = ex eiy = −1. (7.71) Equation (7.71) is satisfied at z = 0 + iπ . By a Laurent expansion5 in powers of (z − iπ) the pole is seen to be a simple pole with a residue of −eiπa . Then, applying the residue theorem,  ∞ eax  1 − ei2πa dx = 2πi −eiπa . (7.72) x −∞ 1 + e This quickly reduces to ∞ −∞ eax π dx = , x 1+e sin aπ 5 1 + ez = 1 + ez−iπ eiπ = 1 − ez−iπ = −(z − iπ )(1 + z−iπ + (z−iπ )2 + · · · ). 2! 3! 0 < a < 1. (7.73) 7.1 Calculus of Residues 473 Using the beta function (Section 8.4), we can show the integral to be equal to the product Ŵ(a)Ŵ(1 − a). 
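The result of Example 7.1.6, Eq. (7.73), is easy to verify numerically: the integrand decays exponentially in both directions, so a simple trapezoid rule on a truncated interval is essentially exact. A sketch; the cutoff L, grid size, and the sample value a = 0.3 are arbitrary choices. The gamma-function identity is checked with `math.gamma`.

```python
import math

def integrand(x, a):
    # e^{ax}/(1 + e^x), rewritten for x > 0 to avoid overflow
    if x > 0:
        return math.exp((a - 1) * x) / (1 + math.exp(-x))
    return math.exp(a * x) / (1 + math.exp(x))

def integral(a, L=200.0, n=400000):
    """Trapezoid rule for the integral of e^{ax}/(1+e^x) over [-L, L]."""
    h = 2 * L / n
    s = 0.5 * (integrand(-L, a) + integrand(L, a))
    for k in range(1, n):
        s += integrand(-L + k * h, a)
    return s * h

a = 0.3
lhs = integral(a)
target = math.pi / math.sin(math.pi * a)
print(lhs, target)                          # both about 3.8833
print(math.gamma(a) * math.gamma(1 - a))    # same value, the reflection formula
```

Since Γ(a + 1) = aΓ(a), the check Γ(a)Γ(1 − a) = π/sin πa is equivalent to the relation Γ(a + 1)Γ(1 − a) = πa/sin πa derived in the text.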
This results in the interesting and useful factorial function relation Ŵ(a + 1)Ŵ(1 − a) = πa . sin πa (7.74) Although Eq. (7.73) holds for real a, 0 < a < 1, Eq. (7.74) may be extended by analytic continuation to all values of a, real and complex, excluding only real integral values.  As a final example of contour integrals of exponential functions, we consider Bernoulli numbers again. Example 7.1.7 BERNOULLI NUMBERS In Section 5.9 the Bernoulli numbers were defined by the expansion ∞  Bn x = xn. x e −1 n! (7.75) n=0 Replacing x with z (analytic continuation), we have a Taylor series (compare Eq. (6.47)) with  dz z n! , (7.76) Bn = 2πi C0 ez − 1 zn+1 where the contour C0 is around the origin counterclockwise with |z| < 2π to avoid the poles at 2πin. For n = 0 we have a simple pole at z = 0 with a residue of +1. Hence by Eq. (7.25), B0 = 0! · 2πi(1) = 1. 2πi (7.77) For n = 1 the singularity at z = 0 becomes a second-order pole. The residue may be shown to be − 21 by series expansion of the exponential, followed by a binomial expansion. This results in   1 1 1! =− . · 2πi − (7.78) B1 = 2πi 2 2 For n ≥ 2 this procedure becomes rather tedious, and we resort to a different means of evaluating Eq. (7.76). The contour is deformed, as shown in Fig. 7.10. The new contour C still encircles the origin, as required, but now it also encircles (in a negative direction) an infinite series of singular points along the imaginary axis at z = ±p2πi, p = 1, 2, 3, . . . . The integration back and forth along the x-axis cancels out, and for R → ∞ the integration over the infinite circle yields zero. Remember that n ≥ 2. Therefore  ∞  z dz residues (z = ±p2πi). (7.79) = −2πi z n+1 C0 e − 1 z p=1 474 Chapter 7 Functions of a Complex Variable II FIGURE 7.10 Contour of integration for Bernoulli numbers. At z = p2πi we have a simple pole with a residue (p2πi)−n . When n is odd, the residue from z = p2πi exactly cancels that from z = −p2πi and Bn = 0, n = 3, 5, 7, and so on. 
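The defining contour integral for the Bernoulli numbers, Eq. (7.76), can also be evaluated numerically on the circle |z| = 1 (safely inside |z| < 2π), which checks the low-order values just derived. A sketch; the number of sample points is an arbitrary choice, and the trapezoid rule on a closed contour converges very fast here.

```python
import cmath
import math

def bernoulli(n, m=4096):
    """B_n = n!/(2 pi i) * contour integral of [z/(e^z - 1)] dz / z^{n+1}
    around |z| = 1, evaluated by the trapezoid rule.
    With z = e^{i theta}, the integral collapses to (n!/m) * sum of g(z)/z^n."""
    s = sum(
        (z / (cmath.exp(z) - 1)) / z**n
        for z in (cmath.exp(2j * math.pi * k / m) for k in range(m))
    )
    return (math.factorial(n) / m) * s

for n in range(5):
    print(n, bernoulli(n))   # 1, -1/2, 1/6, 0, -1/30 (up to tiny imaginary parts)
```

This reproduces B₀ = 1, B₁ = −1/2, and the vanishing of the odd Bernoulli numbers beyond B₁ without any series manipulation.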
For n even the residues add, giving Bn = ∞  1 n! (−2πi)2 n 2πi p (2πi)n =− p=1 ∞ (−1)n/2 2n!  −n (−1)n/2 2n! ζ (n) p = − (2π)n (2π)n (n even), (7.80) p=1 where ζ (n) is the Riemann zeta function introduced in Section 5.9. Equation (7.80) corresponds to Eq. (5.152) of Section 5.9.  Exercises 7.1.1 Determine the nature of the singularities of each of the following functions and evaluate the residues (a > 0). 1 . z2 + a 2 z2 . (c) 2 (z + a 2 )2 ze+iz . (e) 2 z + a2 e+iz (g) 2 . z − a2 (a) 1 . (z2 + a 2 )2 sin 1/z (d) 2 . z + a2 ze+iz (f) 2 . z − a2 z−k , 0 < k < 1. (h) z+1 (b) Hint. For the point at infinity, use the transformation w = 1/z for |z| → 0. For the residue, transform f (z) dz into g(w) dw and look at the behavior of g(w). 7.1 Calculus of Residues 7.1.2 475 Locate the singularities and evaluate the residues of each of the following functions. (a) z−n (ez − 1)−1 , z = 0, z2 e z . 1 + e2z (c) Find a closed-form expression (that is, not a sum) for the sum of the finite-plane singularities. (d) Using the result in part (c), what is the residue at |z| → ∞? (b) Hint. See Section 5.9 for expressions involving Bernoulli numbers. Note that Eq. (5.144) cannot be used to investigate the singularity at z → ∞, since this series is only valid for |z| < 2π . 7.1.3 The statement that the integral halfway around a singular point is equal to one-half the integral all the way around was limited to simple poles. Show, by a specific example, that  1 f (z) dz f (z) dz = 2 Circle Semicircle does not necessarily hold if the integral encircles a pole of higher order. Hint. Try f (z) = z−2 . 7.1.4 A function f (z) is analytic along the real axis except for a third-order pole at z = x0 . The Laurent expansion about z = x0 has the form a−3 a−1 f (z) = + + g(z), (z − x0 )3 z − x0 with g(z) analytic at z = x0 . Show that the Cauchy principal value technique is applicable, in the sense that (a) (b) lim δ→0  Cx0 x0 −δ −∞ f (x) dx + ∞ x0 +δ  f (x) dx is finite. 
f (z) dz = ±iπa−1 , where Cx0 denotes a small semicircle about z = x0 . 7.1.5 The unit step function is defined as (compare Exercise 1.15.13) + 0, s a. Show that u(s) has the integral representations (a) 1 u(s) = lim + 2πi ε→0 ∞ −∞ eixs dx, x − iε 476 Chapter 7 Functions of a Complex Variable II (b) 1 1 u(s) = + P 2 2πi ∞ −∞ eixs dx. x Note. The parameter s is real. 7.1.6 Most of the special functions of mathematical physics may be generated (defined) by a generating function of the form  fn (x)t n . g(t, x) = n Given the following integral representations, derive the corresponding generating function: (a) (b) (c) Bessel: 1 Jn (x) = 2πi  e(x/2)(t−1/t) t −n−1 dt. 1 In (x) = 2πi  e(x/2)(t+1/t) t −n−1 dt. Modified Bessel: Legendre: Pn (x) = (d) 1 2πi Hermite: Hn (x) = (e)  −1/2 −n−1  t dt. 1 − 2tx + t 2 n! 2πi Laguerre:  1 Ln (x) = 2πi (f) Chebyshev: Tn (x) = 1 4πi   e−t 2 +2tx t −n−1 dt. e−xt/(1−t) dt. (1 − t)t n+1 (1 − t 2 )t −n−1 dt. (1 − 2tx + t 2 ) Each of the contours encircles the origin and no other singular points. 7.1.7 Generalizing Example 7.1.2, show that 2π 2π 2π dθ dθ = = 2 , a ± b cos θ a ± b sin θ (a − b2 )1/2 0 0 What happens if |b| > |a|? 7.1.8 Show that π 0 πa dθ = , (a + cos θ )2 (a 2 − 1)3/2 a > 1. for a > |b|. 7.1 Calculus of Residues 7.1.9 477 Show that 2π 0 dθ 2π = , 2 1 − 2t cos θ + t 1 − t2 for |t| < 1. What happens if |t| > 1? What happens if |t| = 1? 7.1.10 With the calculus of residues show that π (2n)! (2n − 1)!! , cos2n θ dθ = π 2n =π 2 (2n)!! 2 (n!) 0 n = 0, 1, 2, . . . . (The double factorial notation is defined in Section 8.1.) Hint. cos θ = 12 (eiθ + e−iθ ) = 21 (z + z−1 ), |z| = 1. 7.1.11 Evaluate ∞ −∞ cos bx − cos ax dx, x2 a > b > 0. ANS. π(a − b). 7.1.12 Prove that ∞ −∞ 2 Hint. sin x = 7.1.13 sin2 x π dx = . 2 x2 1 2 (1 − cos 2x). A quantum mechanical calculation of a transition probability leads to the function f (t, ω) = 2(1 − cos ωt)/ω2 . Show that ∞ f (t, ω)dω = 2πt. −∞ 7.1.14 Show that (a > 0) (a) ∞ −∞ cos x π dx = e−a . 
a x 2 + a2 How is the right side modified if cos x is replaced by cos kx? ∞ x sin x (b) dx = πe−a . 2 2 −∞ x + a How is the right side modified if sin x is replaced by sin kx? These integrals may also be interpreted as Fourier cosine and sine transforms — Chapter 15. 7.1.15 Use the contour shown (Fig. 7.11) with R → ∞ to prove that ∞ sin x dx = π. −∞ x 478 Chapter 7 Functions of a Complex Variable II FIGURE 7.11 Large square contour. 7.1.16 In the quantum theory of atomic collisions we encounter the integral ∞ sin t ipt e dt, I= −∞ t in which p is real. Show that I = 0, I = π, |p| > 1 |p| < 1. What happens if p = ±1? 7.1.17 Evaluate ∞ 0 (a) (ln x)2 dx 1 + x2 by appropriate series expansion of the integrand to obtain 4 ∞  (−1)n (2n + 1)−3 , n=0 (b) and by contour integration to obtain π3 . 8 Hint. x → z = et . Try the contour shown in Fig. 7.12, letting R → ∞. 7.1.18 Show that 0 ∞ xa πa , dx = 2 sin πa (x + 1) FIGURE 7.12 Small square contour. 7.1 Calculus of Residues 479 FIGURE 7.13 Contour avoiding branch point and pole. where −1 < a < 1. Here is still another way of deriving Eq. (7.74). Hint. Use the contour shown in Fig. 7.13, noting that z = 0 is a branch point and the positive x-axis is a cut line. Note also the comments on phases following Example 6.6.1. 7.1.19 Show that 0 ∞ π x −a dx = , x +1 sin aπ where 0 < a < 1. This opens up another way of deriving the factorial function relation given by Eq. (7.74). Hint. You have a branch point and you will need a cut line. Recall that z−a = w in polar form is  i(θ+2πn) −a re = ρeiϕ , which leads to −aθ −2anπ = ϕ. You must restrict n to zero (or any other single integer) in order that ϕ may be uniquely specified. Try the contour shown in Fig. 7.14. FIGURE 7.14 Alternative contour avoiding branch point. 480 Chapter 7 Functions of a Complex Variable II FIGURE 7.15 Angle contour. 7.1.20 Show that ∞ 0 7.1.21 dx π = , (x 2 + a 2 )2 4a 3 Evaluate ∞ −∞ 7.1.22 a > 0. x2 dx. 1 + x4 Show that 0 ∞  cos t 2 dt = 0 ∞ √ ANS. π/ 2. 
√  2 π sin t dt = √ . 2 2 Hint. Try the contour shown in Fig. 7.15. Note. These are the Fresnel integrals for the special case of infinity as the upper limit. For the general case of a varying upper limit, asymptotic expansions of the Fresnel integrals are the topic of Exercise 5.10.2. Spherical Bessel expansions are the subject of Exercise 11.7.13. 7.1.23 Several of the Bromwich integrals, Section 15.12, involve a portion that may be approximated by a+iy zt e I (y) = dz. 1/2 a−iy z Here a and t are positive and finite. Show that lim I (y) = 0. y→∞ 7.1 Calculus of Residues FIGURE 7.16 7.1.24 Show that 0 ∞ Sector contour. 1 π/n . dx = 1 + xn sin(π/n) Hint. Try the contour shown in Fig. 7.16. 7.1.25 (a) Show that f (z) = z4 − 2 cos 2θ z2 + 1 has zeros at eiθ , e−iθ , −eiθ , and −e−iθ . (b) Show that ∞ dx π π = 1/2 = . 4 2 2 sin θ 2 (1 − cos 2θ )1/2 −∞ x − 2 cos 2θ x + 1 Exercise 7.1.24 (n = 4) is a special case of this result. 7.1.26 Show that ∞ −∞ x4 π π x 2 dx = 1/2 . = − 2 cos 2θ x 2 + 1 2 sin θ 2 (1 − cos 2θ )1/2 Exercise 7.1.21 is a special case of this result. 7.1.27 Apply the techniques of Example 7.1.5 to the evaluation of the improper integral ∞ dx I= . 2 2 −∞ x − σ (a) Let σ → σ + iγ . (b) Let σ → σ − iγ . (c) Take the Cauchy principal value. 481 482 Chapter 7 Functions of a Complex Variable II 7.1.28 The integral in Exercise 7.1.17 may be transformed into ∞ y2 π3 . e−y dy = −2y 16 1+e 0 Evaluate this integral by the Gauss–Laguerre quadrature and compare your result with π 3 /16. ANS. Integral = 1.93775 (10 points). 7.2 DISPERSION RELATIONS The concept of dispersion relations entered physics with the work of Kronig and Kramers in optics. The name dispersion comes from optical dispersion, a result of the dependence of the index of refraction on wavelength, or angular frequency. The index of refraction n may have a real part determined by the phase velocity and a (negative) imaginary part determined by the absorption — see Eq. (7.94). 
Kronig and Kramers showed in 1926– 1927 that the real part of (n2 − 1) could be expressed as an integral of the imaginary part. Generalizing this, we shall apply the label dispersion relations to any pair of equations giving the real part of a function as an integral of its imaginary part and the imaginary part as an integral of its real part — Eqs. (7.86a) and (7.86b), which follow. The existence of such integral relations might be suspected as an integral analog of the Cauchy–Riemann differential relations, Section 6.2. The applications in modern physics are widespread. For instance, the real part of the function might describe the forward scattering of a gamma ray in a nuclear Coulomb field (a dispersive process). Then the imaginary part would describe the electron–positron pair production in that same Coulomb field (the absorptive process). As will be seen later, the dispersion relations may be taken as a consequence of causality and therefore are independent of the details of the particular interaction. We consider a complex function f (z) that is analytic in the upper half-plane and on the real axis. We also require that   lim f (z) = 0, 0 ≤ arg z ≤ π, (7.81) |z|→∞ in order that the integral over an infinite semicircle will vanish. The point of these conditions is that we may express f (z) by the Cauchy integral formula, Eq. (6.43),  f (z) 1 dz. (7.82) f (z0 ) = 2πi z − z0 The integral over the upper semicircle6 vanishes and we have ∞ f (x) 1 dx. f (z0 ) = 2πi −∞ x − z0 (7.83) The integral over the contour shown in Fig. 7.17 has become an integral along the x-axis. Equation (7.83) assumes that z0 is in the upper half-plane — interior to the closed contour. If z0 were in the lower half-plane, the integral would yield zero by the Cauchy integral 6 The use of a semicircle to close the path of integration is convenient, not mandatory. Other paths are possible. 7.2 Dispersion Relations FIGURE 7.17 483 Semicircle contour. theorem, Section 6.3. 
Now, either letting z0 approach the real axis from above (z0 − x0 ) or placing it on the real axis and taking an average of Eq. (7.83) and zero, we find that Eq. (7.83) becomes ∞ 1 f (x) P dx, (7.84) f (x0 ) = πi x −∞ − x0 where P indicates the Cauchy principal value. Splitting Eq. (7.84) into real and imaginary parts7 yields f (x0 ) = u(x0 ) + iv(x0 ) ∞ ∞ v(x) i u(x) 1 dx − P dx. = P π π −∞ x − x0 −∞ x − x0 (7.85) Finally, equating real part to real part and imaginary part to imaginary part, we obtain u(x0 ) = 1 P π 1 v(x0 ) = − P π ∞ v(x) dx x − x0 −∞ ∞ −∞ (7.86a) u(x) dx. (7.86b) x − x0 These are the dispersion relations. The real part of our complex function is expressed as an integral over the imaginary part. The imaginary part is expressed as an integral over the real part. The real and imaginary parts are Hilbert transforms of each other. Note that these relations are meaningful only when f (x) is a complex function of the real variable x. Compare Exercise 7.2.1. From a physical point of view u(x) and/or v(x) represent some physical measurements. Then f (z) = u(z) + iv(z) is an analytic continuation over the upper half-plane, with the value on the real axis serving as a boundary condition. 7 The second argument, y = 0, is dropped: u(x , 0) → u(x ). 0 0 484 Chapter 7 Functions of a Complex Variable II Symmetry Relations On occasion f (x) will satisfy a symmetry relation and the integral from −∞ to +∞ may be replaced by an integral over positive values only. This is of considerable physical importance because the variable x might represent a frequency and only zero and positive frequencies are available for physical measurements. Suppose8 f (−x) = f ∗ (x). (7.87) u(−x) + iv(−x) = u(x) − iv(x). (7.88) Then The real part of f (x) is even and the imaginary part is odd.9 In quantum mechanical scattering problems these relations (Eq. (7.88)) are called crossing conditions. To exploit these crossing conditions, we rewrite Eq. (7.86a) as 0 ∞ v(x) v(x) 1 1 dx + P dx. 
(7.89) u(x0 ) = P π x − x π x − x0 0 −∞ 0 Letting x → −x in the first integral on the right-hand side of Eq. (7.89) and substituting v(−x) = −v(x) from Eq. (7.88), we obtain   ∞ 1 1 1 dx + v(x) u(x0 ) = P π x + x0 x − x0 0 ∞ 2 xv(x) = P dx. (7.90) 2 − x2 π x 0 0 Similarly, 2 v(x0 ) = − P π 0 ∞ x0 u(x) dx. x 2 − x02 (7.91) The original Kronig–Kramers optical dispersion relations were in this form. The asymptotic behavior (x0 → ∞) of Eqs. (7.90) and (7.91) lead to quantum mechanical sum rules, Exercise 7.2.4. Optical Dispersion The function exp[i(kx − ωt)] describes an electromagnetic wave moving along the x-axis in the positive direction with velocity v = ω/k; ω is the angular frequency, k the wave number or propagation vector, and n = ck/ω the index of refraction. From Maxwell’s 8 This is not just a happy coincidence. It ensures that the Fourier transform of f (x) will be real. In turn, Eq. (7.87) is a conse- quence of obtaining f (x) as the Fourier transform of a real function. 9 u(x, 0) = u(−x, 0), v(x, 0) = −v(−x, 0). Compare these symmetry conditions with those that follow from the Schwarz reflec- tion principle, Section 6.5. 7.2 Dispersion Relations 485 equations, electric permittivity ε, and Ohm’s law with conductivity σ , the propagation vector k for a dielectric becomes10   4πσ ω2 2 (7.92) k =ε 2 1+i ωε c (with µ, the magnetic permeability, taken to be unity). The presence of the conductivity (which means absorption) gives rise to an imaginary part. The propagation vector k (and therefore the index of refraction n) have become complex. Conversely, the (positive) imaginary part implies absorption. For poor conductivity (4πσ/ωε ≪ 1) a binomial expansion yields k= √ ω 2πσ ε +i √ c c ε and ei(kx−ωt) = eiω(x √ √ ε/c−t) −2πσ x/c ε e , an attenuated wave. Returning to the general expression for k 2 , Eq. (7.92), we find that the index of refraction becomes n2 = c2 k 2 4πσ . 
=ε+i ω ω2 (7.93) We take n2 to be a function of the complex variable ω (with ε and σ depending on ω). However, n2 does not vanish as ω → ∞ but instead approaches unity. So to satisfy the condition, Eq. (7.81), one works with f (ω) = n2 (ω) − 1. The original Kronig–Kramers optical dispersion relations were in the form of ∞  2 ωℑ[n2 (ω) − 1] ℜ n2 (ω0 ) − 1 = P dω, π ω2 − ω02 0 (7.94) ∞  2 2 ω0 ℜ[n2 (ω) − 1] ℑ n (ω0 ) − 1 = − P dω. π ω2 − ω02 0 Knowledge of the absorption coefficient at all frequencies specifies the real part of the index of refraction, and vice versa. The Parseval Relation When the functions u(x) and v(x) are Hilbert transforms of each other (given by Eqs. (7.86)) and each is square integrable,11 the two functions are related by ∞ ∞     v(x)2 dx. u(x)2 dx = (7.95) −∞ −∞ 10 See J. D. Jackson, Classical Electrodynamics, 3rd ed. New York: Wiley (1999), Sections 7.7 and 7.10. Equation (7.92) is in Gaussian units. 11 This means that ∞ |u(x)|2 dx and ∞ |v(x)|2 dx are finite. −∞ −∞ 486 Chapter 7 Functions of a Complex Variable II This is the Parseval relation. To derive Eq. (7.95), we start with ∞ ∞ ∞   1 v(s) ds 1 ∞ v(t) dt u(x)2 dx = dx, −∞ π −∞ s − x π −∞ t − x −∞ using Eq. (7.86a) twice. Integrating first with respect to x, we have ∞ ∞ ∞   dx v(t) dt ∞ u(x)2 dx = v(s) ds . 2 −∞ −∞ (s − x)(t − x) −∞ −∞ π (7.96) From Exercise 7.2.8, the x integration yields a delta function: ∞ dx 1 = δ(s − t). π 2 −∞ (s − x)(t − x) We have ∞ −∞  u(x)2 dx = ∞ −∞ v(t) dt ∞ −∞ v(s)δ(s − t) ds. (7.97) Then the s integration is carried out by inspection, using the defining property of the delta function: ∞ v(s)δ(s − t) ds = v(t). (7.98) −∞ Substituting Eq. (7.98) into Eq. (7.97), we have Eq. (7.95), the Parseval relation. Again, in terms of optics, the presence of refraction over some frequency range (n = 1) implies the existence of absorption, and vice versa. 
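Both the dispersion relation, Eq. (7.86a), and the Parseval relation, Eq. (7.95), can be checked numerically. A sketch using the pair u(x) = x/(x² + 1), v(x) = −1/(x² + 1), the boundary values of f(z) = 1/(z + i), which satisfies the analyticity and decay conditions (the same pair appears in the exercises below); the cutoffs, grids, and the sample point x₀ = 0.5 are arbitrary choices.

```python
import math

u = lambda x: x / (x * x + 1)        # real part on the axis
v = lambda x: -1 / (x * x + 1)       # imaginary part on the axis

def integral_line(g, n=200001):
    """Integral of g over the whole real line via x = tan t (midpoint rule)."""
    h = math.pi / n
    s = 0.0
    for k in range(n):
        t = -math.pi / 2 + (k + 0.5) * h
        x = math.tan(t)
        s += g(x) / math.cos(t) ** 2 * h
    return s

# Parseval relation, Eq. (7.95): both sides equal pi/2 for this pair
Iu = integral_line(lambda x: u(x) ** 2)
Iv = integral_line(lambda x: v(x) ** 2)

def hilbert_u(x0, L=500.0, n=400000):
    """(1/pi) * principal value of v(x)/(x - x0) over [-L, L], with the
    singularity removed by subtracting v(x0) and adding its exact integral."""
    h = 2 * L / n
    s = 0.0
    for k in range(n):
        x = -L + (k + 0.5) * h
        s += (v(x) - v(x0)) / (x - x0) * h
    s += v(x0) * math.log((L - x0) / (L + x0))
    return s / math.pi

print(Iu, Iv)                        # both about pi/2
print(hilbert_u(0.5), u(0.5))        # dispersion relation at x0 = 0.5
```

The subtraction trick makes the principal-value integrand smooth, so an ordinary midpoint rule suffices; the tail beyond |x| = L contributes only at order 1/L².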
Causality The real significance of dispersion relations in physics is that they are a direct consequence of assuming that the particular physical system obeys causality. Causality is awkward to define precisely, but the general meaning is that the effect cannot precede the cause. A scattered wave cannot be emitted by the scattering center before the incident wave has arrived. For linear systems the most general relation between an input function G (the cause) and an output function H (the effect) may be written as ∞ H (t) = F (t − t ′ )G(t ′ ) dt ′ . (7.99) −∞ Causality is imposed by requiring that F (t − t ′ ) = 0 for t − t ′ < 0. Equation (7.99) gives the time dependence. The frequency dependence is obtained by taking Fourier transforms. By the Fourier convolution theorem, Section 15.5, h(ω) = f (ω)g(ω), where f (ω) is the Fourier transform of F (t), and so on. Conversely, F (t) is the Fourier transform of f (ω). 7.2 Dispersion Relations 487 The connection with the dispersion relations is provided by the Titchmarsh theorem.12 This states that if f (ω) is square integrable over the real ω-axis, then any one of the following three statements implies the other two. 1. The Fourier transform of f (ω) is zero for t < 0: Eq. (7.99). 2. Replacing ω by z, the function f (z) is analytic in the complex z-plane for y > 0 and approaches f (x) almost everywhere as y → 0. Further, ∞   f (x + iy)2 dx < K for y > 0; −∞ that is, the integral is bounded. 3. The real and imaginary parts of f (z) are Hilbert transforms of each other: Eqs. (7.86a) and (7.86b). The assumption that the relationship between the input and the output of our linear system is causal (Eq. (7.99)) means that the first statement is satisfied. If f (ω) is square integrable, then the Titchmarsh theorem has the third statement as a consequence and we have dispersion relations. Exercises 7.2.1 The function f (z) satisfies the conditions for the dispersion relations. 
In addition, f (z) = f ∗ (z∗ ), the Schwarz reflection principle, Section 6.5. Show that f (z) is identically zero. 7.2.2 For f (z) such that we may replace the closed contour of the Cauchy integral formula by an integral over the real axis we have  x0 −δ  ∞ f (x) f (x) f (x) 1 1 dx + dx + dx. f (x0 ) = 2πi −∞ x − x0 2πi Cx0 x − x0 x0 +δ x − x0 Here Cx0 designates a small semicircle about x0 in the lower half-plane. Show that this reduces to ∞ 1 f (x) P dx, f (x0 ) = πi −∞ x − x0 which is Eq. (7.84). 7.2.3 For f (z) = eiz , Eq. (7.81) does not hold at the endpoints, arg z = 0, π . Show, with the help of Jordan’s lemma, Section 7.1, that Eq. (7.82) still holds. (b) For f (z) = eiz verify the dispersion relations, Eq. (7.89) or Eqs. (7.90) and (7.91), by direct integration. 7.2.4 With f (x) = u(x) + iv(x) and f (x) = f ∗ (−x), show that as x0 → ∞, (a) 12 Refer to E. C. Titchmarsh, Introduction to the Theory of Fourier Integrals, 2nd ed. New York: Oxford University Press (1937). For a more informal discussion of the Titchmarsh theorem and further details on causality see J. Hilgevoord, Dispersion Relations and Causal Description. Amsterdam: North-Holland (1962). 488 Chapter 7 Functions of a Complex Variable II ∞ 2 (a) u(x0 ) ∼ − 2 xv(x) dx, πx0 0 ∞ 2 (b) v(x0 ) ∼ u(x) dx. πx0 0 In quantum mechanics relations of this form are often called sum rules. 7.2.5 (a) Given the integral equation 1 1 = P 2 π 1 + x0 ∞ −∞ u(x) dx, x − x0 use Hilbert transforms to determine u(x0 ). (b) Verify that the integral equation of part (a) is satisfied. (c) From f (z)|y=0 = u(x) + iv(x), replace x by z and determine f (z). Verify that the conditions for the Hilbert transforms are satisfied. (d) Are the crossing conditions satisfied? x0 ANS. (a) u(x0 ) = , (c) f (z) = (z + i)−1 . 1 + x02 7.2.6 (a) If the real part of the complex index of refraction (squared) is constant (no optical dispersion), show that the imaginary part is zero (no absorption). 
(b) Conversely, if there is absorption, show that there must be dispersion. In other words, if the imaginary part of n2 − 1 is not zero, show that the real part of n2 − 1 is not constant. 7.2.7 Given u(x) = x/(x 2 + 1) and v(x) = −1/(x 2 + 1), show by direct evaluation of each integral that ∞ ∞     v(x)2 dx. u(x)2 dx = −∞ −∞ ANS. ∞ −∞ 7.2.8  u(x)2 dx = ∞  v(x)2 dx = π . 2 −∞ Take u(x) = δ(x), a delta function, and assume that the Hilbert transform equations hold. (a) Show that δ(w) = (b) 1 π2 ∞ −∞ dy . y(y − w) With changes of variables w = s − t and x = s − y, transform the δ representation of part (a) into ∞ 1 dx δ(s − t) = 2 . π −∞ (x − s)(s − t) Note. The δ function is discussed in Section 1.15. 7.3 Method of Steepest Descents 7.2.9 Show that δ(x) = 1 π2 ∞ −∞ 489 dt t (t − x) is a valid representation of the delta function in the sense that ∞ f (x)δ(x) dx = f (0). −∞ Assume that f (x) satisfies the condition for the existence of a Hilbert transform. Hint. Apply Eq. (7.84) twice. 7.3 METHOD OF STEEPEST DESCENTS Analytic Landscape In analyzing problems in mathematical physics, one often finds it desirable to know the behavior of a function for large values of the variable or some parameter s, that is, the asymptotic behavior of the function. Specific examples are furnished by the gamma function (Chapter 8) and various Bessel functions (Chapter 11). All these analytic functions are defined by integrals I (s) = F (z, s) dz, (7.100) C where F is analytic in z and depends on a real parameter s. We write F (z) whenever possible. So far we have evaluated such definite integrals of analytic functions along the real axis by deforming the path C to C ′ in the complex plane, so |F | becomes small for all z on C ′ . This method succeeds as long as only isolated poles occur in the area between C and C ′ . The poles are taken into account by applying the residue theorem of Section 7.1. 
The residues give a measure of the simple poles, where |F | → ∞, which usually dominate and determine the value of the integral. The behavior of the integral in Eq. (7.100) clearly depends on the absolute value |F | of the integrand. Moreover, the contours of |F | often become more pronounced as s becomes large. Let us focus on a plot of |F (x + iy)|2 = U 2 (x, y) + V 2 (x, y), rather than the real part ℜF = U and the imaginary part ℑF = V separately. Such a plot of |F |2 over the complex plane is called the analytic landscape, after Jensen, who, in 1912, proved that it has only saddle points and troughs but no peaks. Moreover, the troughs reach down all the way to the complex plane. In the absence of (simple) poles, saddle points are next in line to dominate the integral in Eq. (7.100). Hence the name saddle point method. At a saddle point the real (or imaginary) part U of F has a local maximum, which implies that ∂U ∂U = = 0, ∂x ∂y and therefore by the use of the Cauchy–Riemann conditions of Section 6.2, ∂V ∂V = = 0, ∂x ∂y 490 Chapter 7 Functions of a Complex Variable II so V has a minimum, or vice versa, and F ′ (z) = 0. Jensen’s theorem prevents U and V from having either a maximum or a minimum. See Fig. 7.18 for a typical shape (and Exercises 6.2.3 and 6.2.4). Our strategy will be to choose the path C so that it runs over the saddle point, which gives the dominant contribution, and in the valleys elsewhere. If there are several saddle points, we treat each alike, and their contributions will add to I (s → ∞). To prove that there are no peaks, assume there is one at z0 . That is, |F (z0 )|2 > |F (z)|2 for all z of a neighborhood |z − z0 | ≤ r. If F (z) = ∞  n=0 an (z − z0 )n is the Taylor expansion at z0 , the mean value m(F ) on the circle z = z0 + r exp(iϕ) becomes 2π    1 F z0 + reiϕ 2 dϕ m(F ) ≡ 2π 0 2π  ∞ 1 ∗ am an r m+n ei(n−m)ϕ dϕ = 2π 0 m,n=0 = ∞  n=0 2  |an |2 r 2n ≥ |a0 |2 = F (z0 ) , (7.101) 2π 1 2 using orthogonality, 2π 0 exp i(n−m)ϕ dϕ = δnm . 
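Equation (7.101) lends itself to a quick numerical spot check before continuing with the proof. A sketch, assuming F(z) = e^z as the sample entire function and r = 1 (both arbitrary choices); the circle mean is computed by the trapezoid rule, which is extremely accurate for a periodic integrand:

```python
import cmath
import math

F = cmath.exp                      # sample entire function (no poles, no zeros)
z0, r, n = 0.0 + 0.0j, 1.0, 4096   # circle |z - z0| = r, n sample points

# Mean of |F|^2 over the circle (trapezoid rule on a periodic integrand):
mean_sq = sum(abs(F(z0 + r * cmath.exp(2j * math.pi * k / n))) ** 2
              for k in range(n)) / n

# The same quantity from Eq. (7.101): sum over Taylor coefficients a_m = 1/m!
series = sum(r ** (2 * m) / math.factorial(m) ** 2 for m in range(25))

center_sq = abs(F(z0)) ** 2        # |F(z0)|^2 = 1 here
```

As Eq. (7.101) requires, the circle mean equals the coefficient sum and is at least |F(z0)|^2, so z0 cannot be a peak of the analytic landscape.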
Since m(F ) is the mean value of |F | 2 2 on the circle of radius r, there must be a point z1 on it so that |F (z1 )| ≥ m(F ) ≥ |F (z0 )| , which contradicts our assumption. Hence there can be no such peak. Next, let us assume there is a minimum at z0 so that 0 < |F (z0 )|2 < |F (z)|2 for all z of a neighborhood of z0 . In other words, the dip in the valley does not go down to the complex plane. Then |F (z)|2 > 0 and, since 1/F (z) is analytic there, it has a Taylor expansion and z0 would be a peak of 1/|F (z)|2 , which is impossible. This proves Jensen’s theorem. We now turn our attention back to the integral in Eq. (7.100). Saddle Point Method Since each saddle point z0 necessarily lies above the complex plane, that is, |F (z0 )|2 > 0, we write F in exponential form, ef (z,s) , in its vicinity without loss of generality. Note that having no zero in the complex plane is a characteristic property of the exponential function. Moreover, any saddle point with F (z) = 0 becomes a trough of |F (z)|2 because |F (z)|2 ≥ 0. A case in point is the function z2 at z = 0, where d(z2 )/dz = 2z = 0. Here z2 = (x + iy)2 = x 2 − y 2 + 2ixy, and 2xy has a saddle point at z = 0, and so has x 2 − y 2 , but |z|4 has a trough there. ∂f At z0 the tangential plane is horizontal; that is, ∂F ∂z |z=z0 = 0, or equivalently ∂z |z=z0 = 0. This condition locates the saddle point. Our next goal is to determine the direction of steepest descent. At z0 , f has a power series 1 f (z) = f (z0 ) + f ′′ (z0 )(z − z0 )2 + · · · , 2 (7.102) 7.3 Method of Steepest Descents FIGURE 7.18 491 A saddle point. or 1  ′′ f (z0 ) + ε (z − z0 )2 , (7.103) 2 upon collecting all higher powers in the (small) ε. Let us take f ′′ (z0 ) = 0 for simplicity. Then f (z) = f (z0 ) + f ′′ (z0 )(z − z0 )2 = −t 2 , t real, (7.104) defines a line through z0 (saddle point axis in Fig. 7.18). At z0 , t = 0. Along the axis ℑf ′′ (z0 )(z − z0 )2 is zero and v = ℑf (z) ≈ ℑf (z0 ) is constant if ε in Eq. (7.103) is neglected. 
Equation (7.104) can also be expressed in terms of angles, arg(z − z0 ) = 1 π − arg f ′′ (z0 ) = constant. 2 2 (7.105) Since |F (z)|2 = exp(2ℜf ) varies monotonically with ℜf , |F (z)|2 ≈ exp(−t 2 ) falls off exponentially from its maximum at t = 0 along this axis. Hence the name steepest descent. The line through z0 defined by f ′′ (z0 )(z − z0 )2 = +t 2 (7.106) is orthogonal to this axis (dashed in Fig. 7.18), which is evident from its angle, 1 arg(z − z0 ) = − arg f ′′ (z0 ) = constant, 2 when compared with Eq. (7.105). Here |F (z)|2 grows exponentially. (7.107) 492 Chapter 7 Functions of a Complex Variable II The curves ℜf (z) = ℜf (z0 ) go through z0 , so ℜ[(f ′′ (z0 ) + ε)(z − z0 )2 ] = 0, or (f ′′ (z0 ) + ε)(z − z0 )2 = it for real t. Expressing this in angles as  1 π arg(z − z0 ) = − arg f ′′ (z0 ) + ε , t > 0, (7.108a) 4 2  1 π arg(z − z0 ) = − − arg f ′′ (z0 ) + ε , t < 0, (7.108b) 4 2 and comparing with Eqs. (7.105) and (7.107) we note that these curves (dot-dashed in Fig. 7.18) divide the saddle point region into four sectors, two with ℜf (z) > ℜf (z0 ) (hence |F (z)| > |F (z0 )|), shown shaded in Fig. 7.18, and two with ℜf (z) < ℜf (z0 ) (hence |F (z)| < |F (z0 )|). They are at ± π4 angles from the axis. Thus, the integration path has to avoid the shaded areas, where |F | rises. If a path is chosen to run up the slopes above the saddle point, the large imaginary part of f (z) leads to rapid oscillations of F (z) = ef (z) and cancelling contributions to the integral. So far, our treatment has been general, except for f ′′ (z0 ) = 0, which can be relaxed. Now we are ready to specialize the integrand F further in order to tie up the path selection with the asymptotic behavior as s → ∞. We assume that s appears linearly in the exponent, that is, we replace exp f (z, s) → exp(sf (z)). This dependence on s ensures that the saddle point contribution at z0 grows with s → ∞ providing steep slopes, as is the case in most applications in physics. 
In order to account for the region far away from the saddle point that is not influenced by s, we include another analytic function, g(z), which varies slowly near the saddle point and is independent of s. Altogether, then, our integral has the more appropriate and specific form I (s) = g(z)esf (z) dz. (7.109) C The path of steepest descent is the saddle point axis when we neglect the higher-order terms, ε, in Eq. (7.103). With ε, the path of steepest descent is the curve close to the axis within the unshaded sectors, where v = ℑf (z) is strictly constant, while ℑf (z) is only approximately constant on the axis. We approximate I (s) by the integral along the piece of the axis inside the patch in Fig. 7.18, where (compare with Eq. (7.104)) z = z0 + xeiα , We find I (s) ≈ e iα α= a b 1 π − arg f ′′ (z0 ), 2 2 a ≤ x ≤ b.    g z0 + xeiα exp sf z0 + xeiα dx, (7.110) (7.111a) and the omitted part is small and can be estimated because ℜ(f (z) − f (z0 )) has an upper negative bound, −R say, that depends on the size of the saddle point patch in Fig. 7.18 (that is, the values of a, b in Eq. (7.110)) that we choose. In Eq. (7.111) we use the power expansions  1 f z0 + xeiα = f (z0 ) + f ′′ (z0 )e2iα x 2 + · · · , 2 (7.111b)  g z0 + xeiα = g(z0 ) + g ′ (z0 )eiα x + · · · , 7.3 Method of Steepest Descents 493 and recall from Eq. (7.110) that  1 1 ′′ f (z0 )e2iα = − f ′′ (z0 ) < 0. 2 2 We find for the leading term for s → ∞: I (s) = g(z0 )e sf (z0 )+iα b 1 e− 2 s|f ′′ (z )|x 2 0 dx. (7.112) a Since the integrand in Eq. (7.112) is essentially zero when x departs appreciably from the origin, we let b → ∞ and a → −∞. The small error involved is straightforward to estimate. Noting that the remaining integral is just a Gauss error integral, √ ∞ 2π 1 ∞ − 1 x2 − 21 a 2 x 2 2 , e dx = dx = e a −∞ a −∞ we finally obtain I (s) = √ 2π g(z0 )esf (z0 ) eiα , |sf ′′ (z0 )|1/2 (7.113) where the phase α was introduced in Eqs. (7.110) and (7.105). 
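The leading-order formula (7.113) can be tried on a simple integral whose saddle-point data are known exactly. A sketch, assuming the test integral I(s) = \int_0^{2\pi} e^{s\cos\theta}\,d\theta (an arbitrary choice), for which g = 1, f(\theta) = \cos\theta, the saddle is at \theta_0 = 0 with f(\theta_0) = 1, |f''(\theta_0)| = 1 and \alpha = 0, so Eq. (7.113) gives I(s) \approx \sqrt{2\pi/s}\,e^s:

```python
import math

def exact(s, n=40000):
    # I(s) = integral of e^{s cos(theta)} over [0, 2*pi], trapezoid rule
    # (the integrand is periodic, so the trapezoid rule converges very fast)
    return sum(math.exp(s * math.cos(2.0 * math.pi * k / n))
               for k in range(n)) * 2.0 * math.pi / n

def saddle(s):
    # Eq. (7.113) with g(z0) = 1, f(z0) = 1, |f''(z0)| = 1, alpha = 0
    return math.sqrt(2.0 * math.pi / s) * math.exp(s)

ratios = {s: exact(s) / saddle(s) for s in (40.0, 80.0)}
```

The ratio approaches 1 from above like 1 + 1/(8s), exactly the behavior expected of a leading-term asymptotic approximation: doubling s roughly halves the defect.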
A note of warning: We assumed that the only significant contribution to the integral came from the immediate vicinity of the saddle point(s) z = z0 . This condition must be checked for each new problem (Exercise 7.3.5). Example 7.3.1 (1) ASYMPTOTIC FORM OF THE HANKEL FUNCTION Hν (s) In Section 11.4 it is shown that the Hankel functions, which satisfy Bessel’s equation, may be defined by ∞eiπ dz 1 e(s/2)(z−1/z) ν+1 , (7.114) Hν(1) (s) = πi C1 ,0 z 0 1 dz Hν(2) (s) = e(s/2)(z−1/z) ν+1 . (7.115) πi C2 ,∞e−iπ z The contour C1 is the curve in the upper half-plane of Fig. 7.19. The contour C2 is in the lower half-plane. We apply the method of steepest descents to the first Hankel function, (1) Hν (s), which is conveniently in the form specified by Eq. (7.109), with f (z) given by   1 1 z− . (7.116) f (z) = 2 z By differentiating, we obtain f ′ (z) = 1 1 + 2. 2 2z (7.117) 494 Chapter 7 Functions of a Complex Variable II FIGURE 7.19 Hankel function contours. Setting f ′ (z) = 0, we obtain z = i, −i. (7.118) Hence there are saddle points at z = +i and z = −i. At z = i, f ′′ (i) = −i, or arg f ′′ (i) = −π/2, so the saddle point direction is given by Eq. (7.110) as α = π2 + π4 = 43 π. For the (1) integral for Hν (s) we must choose the contour through the point z = +i so that it starts at the origin, moves out tangentially to the positive real axis, and then moves around through the saddle point at z = +i in the direction given by the angle α = 3π/4 and then on out to minus infinity, asymptotic with the negative real axis. The path of steepest ascent, which we must avoid, has the phase − 12 arg f ′′ (i) = π4 , according to Eq. (7.107), and is orthogonal to the axis, our path of steepest descent. Direct substitution into Eq. (7.113) with α = 3π/4 now yields √ 1 2πi −ν−1 e(s/2)(i−1/ i) e3πi/4 (1) Hν (s) = πi |(s/2)(−2/i 3 )|1/2 ) 2 (iπ/2)(−ν−2) is i(3π/4) e e e . 
(7.119) = πs By combining terms, we obtain Hν(1) (s) ≈ ) 2 i(s−ν(π/2)−π/4) e πs (7.120) (1) as the leading term of the asymptotic expansion of the Hankel function Hν (s). Additional terms, if desired, may be picked up from the power series of f and g in Eq. (7.111b). The other Hankel function can be treated similarly using the saddle point at z = −i.  Example 7.3.2 ASYMPTOTIC FORM OF THE FACTORIAL FUNCTION Ŵ(1 + s) In many physical problems, particularly in the field of statistical mechanics, it is desirable to have an accurate approximation of the gamma or factorial function of very large 7.3 Method of Steepest Descents 495 numbers. As developed in Section 8.1, the factorial function may be defined by the Euler integral ∞ ∞ Ŵ(1 + s) = ρ s e−ρ dρ = s s+1 es(ln z−z) dz. (7.121) 0 0 Here we have made the substitution ρ = zs in order to convert the integral to the form required by Eq. (7.109). As before, we assume that s is real and positive, from which it follows that the integrand vanishes at the limits 0 and ∞. By differentiating the z-dependence appearing in the exponent, we obtain df (z) d 1 = (ln z − z) = − 1, dz dz z f ′′ (z) = − 1 , z2 (7.122) which shows that the point z = 1 is a saddle point and arg f ′′ (1) = arg(−1) = π. According to Eq. (7.109) we let z − 1 = xeiα , α= 1 π π π − arg f ′′ (1) = − = 0, 2 2 2 2 (7.123) with x small, to describe the contour in the vicinity of the saddle point. From this we see that the direction of steepest descent is along the real axis, a conclusion that we could have reached more or less intuitively. Direct substitution into Eq. (7.113) with α = 0 now gives √ 2πs s+1 e−s Ŵ(1 + s) ≈ . (7.124) |s(−1−2 )|1/2 Thus the first term in the asymptotic expansion of the factorial function is √ Ŵ(1 + s) ≈ 2πss s e−s . (7.125) This result is the first term in Stirling’s expansion of the factorial function. The method of steepest descent is probably the easiest way of obtaining this first term. 
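The Stirling result (7.125) is easy to check against a library gamma function; working with logarithms avoids overflow for large s. A sketch (the sample values of s are arbitrary):

```python
import math

def stirling_log(s):
    # logarithm of the leading term, Eq. (7.125): sqrt(2*pi*s) * s^s * e^{-s}
    return 0.5 * math.log(2.0 * math.pi * s) + s * math.log(s) - s

# ln Gamma(1+s) minus the log of Stirling's leading term; the next term of
# the full Stirling expansion is 1/(12 s), so the defect should fall like 1/s
defect = {s: math.lgamma(1.0 + s) - stirling_log(s)
          for s in (10.0, 100.0, 1000.0)}
```

The defect is positive, shrinks like 1/s, and multiplying it by 12s recovers 1 to high accuracy, consistent with the statement that more terms would come from the method of Section 8.3.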
If more terms in the expansion are desired, then the method of Section 8.3 is preferable.  In the foregoing example the calculation was carried out by assuming s to be real. This assumption is not necessary. We may show (Exercise 7.3.6) that Eq. (7.125) also holds when s is replaced by the complex variable w, provided only that the real part of w be required to be large and positive. Asymptotic limits of integral representations of functions are extremely important in many approximations and applications in physics: √ 2πg(z0 )esf (z0 ) eiα sf (z) g(z)e dz ∼ , f ′ (z0 ) = 0. |sf ′′ (z0 )| C The saddle point method is one method of choice for deriving them and belongs in the toolkit of every physicist and engineer. 496 Chapter 7 Functions of a Complex Variable II Exercises 7.3.1 Using the method of steepest descents, evaluate the second Hankel function, given by 0 dz 1 e(s/2)(z−1/z) ν+1 , Hν(2) (s) = πi −∞C2 z with contour C2 as shown in Fig. 7.19. 7.3.2 7.3.3 ) 2 −i(s−π/4−νπ/2) . e πs Find and leading asymptotic expansion for the Fresnel integrals s the2 steepest s path 2 dx. cos x dx, sin x 0 1 0 2 Hint. Use 0 eisz dz. ANS. Hν(2) (s) ≈ (a) (b) (1) In applying the method of steepest descent to the Hankel function Hν (s), show that   ℜ f (z) < ℜ f (z0 ) = 0 for z on the contour C1 but away from the point z = z0 = i. Show that π  0 for 0 < r < 1,  −π ≤ θ < π 2 and  ℜ f (z) < 0 for r > 1, − π π 0. (8.5) The restriction on z is necessary to avoid divergence of the integral. When the gamma function does appear in physical problems, it is often in this form or some variation, such as ∞ 2 Ŵ(z) = 2 ℜ(z) > 0. (8.6) e−t t 2z−1 dt, 0 1  z−1 1 dt, Ŵ(z) = ln t 0 ℜ(z) > 0. (8.7) When z = 12 , Eq. (8.6) is just the Gauss error integral, and we have the interesting result  √ (8.8) Ŵ 21 = π . Generalizations of Eq. (8.6), the Gaussian integrals, are considered in Exercise 8.1.11. This definite integral form of Ŵ(z), Eq. (8.5), leads to the beta function, Section 8.4. 
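The integral definition can be verified numerically. A sketch using the variant Eq. (8.6) (whose integrand is smooth at t = 0 for the test values used here) with a midpoint rule; the cutoff and step are ad hoc choices, and the neglected tail is of order e^{-81}:

```python
import math

def gamma_via_gauss(z, upper=9.0, n=200000):
    # Eq. (8.6): Gamma(z) = 2 * integral_0^inf exp(-t^2) t^{2z-1} dt
    h = upper / n
    total = 0.0
    for k in range(n):
        t = (k + 0.5) * h            # midpoint rule
        total += math.exp(-t * t) * t ** (2.0 * z - 1.0)
    return 2.0 * total * h

g_half = gamma_via_gauss(0.5)    # Gauss error integral: should be sqrt(pi), Eq. (8.8)
g_three = gamma_via_gauss(3.0)   # should be 2! = 2
```

The z = 1/2 case reproduces Eq. (8.8) directly, and a generic argument agrees with a library gamma function to quadrature accuracy.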
8.1 Definitions, Simple Properties 501 To show the equivalence of these two definitions, Eqs. (8.1) and (8.5), consider the function of two variables  n t n z−1 1− t dt, ℜ(z) > 0, (8.9) F (z, n) = n 0 with n a positive integer.1 Since lim n→∞  1− t n n ≡ e−t , (8.10) from the definition of the exponential lim F (z, n) = F (z, ∞) = n→∞ 0 ∞ e−t t z−1 dt ≡ Ŵ(z) (8.11) by Eq. (8.5). Returning to F (z, n), we evaluate it in successive integrations by parts. For convenience let u = t/n. Then 1 F (z, n) = nz (1 − u)n uz−1 du. (8.12) 0 Integrating by parts, we obtain  z 1 F (z, n) n 1 nu  = (1 − u) (1 − u)n−1 uz du. + nz z 0 z 0 (8.13) Repeating this with the integrated part vanishing at both endpoints each time, we finally get 1 n(n − 1) · · · 1 F (z, n) = nz uz+n−1 du z(z + 1) · · · (z + n − 1) 0 = 1 · 2 · 3···n nz . z(z + 1)(z + 2) · · · (z + n) (8.14) lim F (z, n) = F (z, ∞) ≡ Ŵ(z), (8.15) This is identical with the expression on the right side of Eq. (8.1). Hence n→∞ by Eq. (8.1), completing the proof. Infinite Product (Weierstrass) The third definition (Weierstrass’ form) is  ∞  # z −z/n 1 1+ ≡ zeγ z e , Ŵ(z) n n=1 1 The form of F (z, n) is suggested by the beta function (compare Eq. (8.60)). (8.16) 502 Chapter 8 Gamma–Factorial Function where γ is the Euler–Mascheroni constant, γ = 0.5772156619 . . . . (8.17) This infinite-product form may be used to develop the reflection identity, Eq. (8.23), and applied in the exercises, such as Exercise 8.1.17. This form can be derived from the original definition (Eq. (8.1)) by rewriting it as  n  1 · 2 · 3···n 1 # z −1 z z Ŵ(z) = lim 1+ n. (8.18) n = lim n→∞ z(z + 1) · · · (z + n) n→∞ z m m=1 Inverting Eq. (8.18) and using n−z = e(− ln n)z , (8.19)  n  # 1 z (− ln n)z = z lim e . 1+ n→∞ Ŵ(z) m (8.20) we obtain m=1 Multiplying and dividing by    # n 1 1 1 exp 1 + + + · · · + z = ez/m , 2 3 n (8.21) m=1 we get     1 1 1 1 = z lim exp 1 + + + · · · + − ln n z n→∞ Ŵ(z) 2 3 n    n # z −z/m . 
e × lim 1+ n→∞ m (8.22) m=1 As shown in Section 5.2, the parenthesis in the exponent approaches a limit, namely γ , the Euler–Mascheroni constant. Hence Eq. (8.16) follows. It was shown in Section 5.11 that the Weierstrass infinite-product definition of Ŵ(z) led directly to an important identity, π Ŵ(z)Ŵ(1 − z) = . (8.23) sin zπ Alternatively, we can start from the product of Euler integrals, ∞ ∞ z −s Ŵ(z + 1)Ŵ(1 − z) = s e ds t −z e−t dt = 0 0 ∞ 0 vz dv (v + 1)2 0 ∞ e−u u du = πz , sin πz transforming from the variables s, t to u = s + t, v = s/t, as suggested by combining the exponentials and the powers in the integrands. The Jacobian is   1 (v + 1)2 1  s + t J = −  1 , = s = 2 − t2 u t t 8.1 Definitions, Simple Properties 503 ∞ where (v + 1)t = u. The integral 0 e−u u du = 1, while that over v may be derived by contour integration, giving sinπzπz . This identity may also be derived by contour integration (Example 7.1.6 and Exercises 7.1.18 and 7.1.19) and the beta function, Section 8.4. Setting z = 12 in Eq. (8.23), we obtain  √ Ŵ 12 = π (8.24a) (taking the positive square root), in agreement with Eq. (8.8). Similarly one can establish Legendre’s duplication formula,  √ Ŵ(1 + z)Ŵ z + 21 = 2−2z π Ŵ(2z + 1). (8.24b) The Weierstrass definition shows immediately that Ŵ(z) has simple poles at z = 0, −1, −2, −3, . . . and that [Ŵ(z)]−1 has no poles in the finite complex plane, which means that Ŵ(z) has no zeros. This behavior may also be seen in Eq. (8.23), in which we note that π/(sin πz) is never equal to zero. Actually the infinite-product definition of Ŵ(z) may be derived from the Weierstrass factorization theorem with the specification that [Ŵ(z)]−1 have simple zeros at z = 0, −1, −2, −3, . . . . The Euler–Mascheroni constant is fixed by requiring Ŵ(1) = 1. See also the products expansions of entire functions in Section 7.1. In probability theory the gamma distribution (probability density) is given by  1  x α−1 e−x/β , x>0 α β Ŵ(α) f (x) = (8.24c)  0, x ≤ 0. 
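The Weierstrass product (8.16) and the reflection formula (8.23) can both be checked numerically. A sketch; truncating the product at N terms introduces a relative error of order z^2/(2N), since each factor contributes log(1 + z/n) - z/n ≈ -z^2/(2n^2):

```python
import math

GAMMA_EM = 0.5772156649015329      # Euler-Mascheroni constant, Eq. (8.17)

def inv_gamma(z, terms=200000):
    # Weierstrass product, Eq. (8.16):
    # 1/Gamma(z) = z e^{gamma z} prod_{n>=1} (1 + z/n) e^{-z/n}
    log_prod = sum(math.log1p(z / n) - z / n for n in range(1, terms + 1))
    return z * math.exp(GAMMA_EM * z + log_prod)

z = 0.5
inv_g_half = inv_gamma(z)                         # should be 1/sqrt(pi)
reflection = math.gamma(z) * math.gamma(1.0 - z)  # Eq. (8.23): pi/sin(pi z)
```

At z = 1/2 the product reproduces 1/\sqrt{\pi}, and the product Γ(z)Γ(1-z) equals π/sin(πz) to machine precision.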
The constant [β α Ŵ(α)]−1 is chosen so that the total (integrated) probability will be unity. For x → E, kinetic energy, α → 32 , and β → kT , Eq. (8.24c) yields the classical Maxwell– Boltzmann statistics. Factorial Notation So far this discussion has been presented in terms of the classical notation. As pointed out by Jeffreys and others, the −1 of the z − 1 exponent in our second definition (Eq. (8.5)) is a continual nuisance. Accordingly, Eq. (8.5) is sometimes rewritten as ∞ e−t t z dt ≡ z!, ℜ(z) > −1, (8.25) 0 to define a factorial function z!. Occasionally we may still encounter Gauss’ notation, & (z), for the factorial function: # (z) = z! = Ŵ(z + 1). (8.26) The Ŵ notation is due to Legendre. The factorial function of Eq. (8.25) is related to the gamma function by Ŵ(z) = (z − 1)! or Ŵ(z + 1) = z!. (8.27) 504 Chapter 8 Gamma–Factorial Function FIGURE 8.1 The factorial function — extension to negative arguments. If z = n, a positive integer (Eq. (8.4)) shows that z! = n! = 1 · 2 · 3 · · · n, (8.28) the familiar factorial. However, it should be noted that since z! is now defined by Eq. (8.25) (or equivalently by Eq. (8.27)) the factorial function is no longer limited to positive integral values of the argument (Fig. 8.1). The difference relation (Eq. (8.2)) becomes (z − 1)! = z! . z (8.29) This shows immediately that 0! = 1 (8.30) and n! = ±∞ for n, a negative integer. (8.31) In terms of the factorial, Eq. (8.23) becomes z!(−z)! = πz . sin πz (8.32) By restricting ourselves to the real values of the argument, we find that Ŵ(x + 1) defines the curves shown in Figs. 8.1 and 8.2. The minimum of the curve is Ŵ(x + 1) = x! = (0.46163 . . .)! = 0.88560 . . . . (8.33a) 8.1 Definitions, Simple Properties 505 FIGURE 8.2 The factorial function and the first two derivatives of ln(Ŵ(x + 1)). 
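The minimum quoted in Eq. (8.33a) (and requested in Exercise 8.1.29) can be located with a simple one-dimensional search. A sketch using a golden-section search; the bracketing interval [0, 1] and the tolerance are ad hoc. Because the minimum is flat (quadratic), x is obtained less accurately than the minimum value of Γ(x + 1), which is exactly the loss of accuracy the exercise remarks on:

```python
import math

def golden_min(f, a, b, tol=1e-9):
    # golden-section search for the minimizer of a unimodal f on [a, b]
    invphi = (math.sqrt(5.0) - 1.0) / 2.0
    while b - a > tol:
        c = b - invphi * (b - a)
        d = a + invphi * (b - a)
        if f(c) < f(d):
            b = d          # minimum lies in [a, d]
        else:
            a = c          # minimum lies in [c, b]
    return 0.5 * (a + b)

x_min = golden_min(lambda x: math.gamma(x + 1.0), 0.0, 1.0)
g_min = math.gamma(x_min + 1.0)    # Eq. (8.33a): (0.46163...)! = 0.88560...
```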
Double Factorial Notation In many problems of mathematical physics, particularly in connection with Legendre polynomials (Chapter 12), we encounter products of the odd positive integers and products of the even positive integers. For convenience these are given special labels as double factorials: 1 · 3 · 5 · · · (2n + 1) = (2n + 1)!! 2 · 4 · 6 · · · (2n) = (2n)!!. (8.33b) Clearly, these are related to the regular factorial functions by (2n + 1)! . 2n n! We also define (−1)!! = 1, a special case that does not follow from Eq. (8.33c). (2n)!! = 2n n! and (2n + 1)!! = (8.33c) Integral Representation An integral representation that is useful in developing asymptotic series for the Bessel functions is  e−z zν dz = e2πiν − 1 Ŵ(ν + 1), (8.34) C where C is the contour shown in Fig. 8.3. This contour integral representation is only useful when ν is not an integer, z = 0 then being a branch point. Equation (8.34) may be 506 Chapter 8 Gamma–Factorial Function FIGURE 8.3 Factorial function contour. FIGURE 8.4 The contour of Fig. 8.3 deformed. readily verified for ν > −1 by deforming the contour as shown in Fig. 8.4. The integral from ∞ into the origin yields −(ν!), placing the phase of z at 0. The integral out to ∞ (in the fourth quadrant) then yields e2πiν ν!, the phase of z having increased to 2π . Since the circle around the origin contributes nothing when ν > −1, Eq. (8.34) follows. It is often convenient to cast this result into a more symmetrical form: e−z (−z)ν dz = 2iŴ(ν + 1) sin(νπ). (8.35) C This analysis establishes Eqs. (8.34) and (8.35) for ν > −1. It is relatively simple to extend the range to include all nonintegral ν. First, we note that the integral exists for ν < −1 as long as we stay away from the origin. Second, integrating by parts we find that Eq. (8.35) yields the familiar difference relation (Eq. (8.29)). If we take the difference relation to define the factorial function of ν < −1, then Eqs. 
(8.34) and (8.35) are verified for all ν (except negative integers). Exercises 8.1.1 Derive the recurrence relations Ŵ(z + 1) = zŴ(z) from the Euler integral (Eq. (8.5)), Ŵ(z) = 0 ∞ e−t t z−1 dt. 8.1 Definitions, Simple Properties 8.1.2 507 In a power-series solution for the Legendre functions of the second kind we encounter the expression (n + 1)(n + 2)(n + 3) · · · (n + 2s − 1)(n + 2s) , 2 · 4 · 6 · 8 · · · (2s − 2)(2s) · (2n + 3)(2n + 5)(2n + 7) · · · (2n + 2s + 1) in which s is a positive integer. Rewrite this expression in terms of factorials. 8.1.3 Show that, as s − n → negative integer, (s − n)! (−1)n−s (2n − 2s)! → . (2s − 2n)! (n − s)! Here s and n are integers with s < n. This result can be used to avoid negative factorials, such as in the series representations of the spherical Neumann functions and the Legendre functions of the second kind. 8.1.4 Show that Ŵ(z) may be written Ŵ(z) = 8.1.5 8.1.6 8.1.7 ∞ 2 e−t t 2z−1 dt, ℜ(z) > 0, 1  z−1 1 ln dt, t 0 ℜ(z) > 0. Ŵ(z) = 2 0 In a Maxwellian distribution the fraction of particles with speed between v and v + dv is 3/2    dN m mv 2 2 v dv, = 4π exp − N 2πkT 2kT N being the total of particles. The average or expectation value of v n is defined number n n −1 v dN . Show that as v  = N     n 2kT n/2 Ŵ n+3 2 . v = m Ŵ(3/2) By transforming the integral into a gamma function, show that 1 1 − x k ln x dx = , k > −1. (k + 1)2 0 Show that ∞ e 0 8.1.8 −x 4   5 . dx = Ŵ 4 Show that (ax − 1)! 1 = . x→0 (x − 1)! a lim 8.1.9 Locate the poles of Ŵ(z). Show that they are simple poles and determine the residues. 8.1.10 Show that the equation x! = k, k = 0, has an infinite number of real roots. 8.1.11 Show that 508 Chapter 8 Gamma–Factorial Function ∞  s! (a) x 2s+1 exp −ax 2 dx = s+1 . 2a 0 ) ∞  (s − 21 )! (2s − 1)!! π = s+1 s x 2s exp −ax 2 dx = s+1/2 (b) . a 2a 2 a 0 These Gaussian integrals are of major importance in statistical mechanics. 8.1.12 (a) Develop recurrence relations for (2n)!! and for (2n + 1)!!. 
(b) Use these recurrence relations to calculate (or to define) 0!! and (−1)!!. ANS. 0!! = 1, 8.1.13 For s a nonnegative integer, show that (−2s − 1)!! = 8.1.14 (−1)!! = 1. (−1)s 2s s! (−1)s = . (2s − 1)!! (2s)! Express the coefficient of the nth term of the expansion of (1 + x)1/2 (a) in terms of factorials of integers, (b) in terms of the double factorial (!!) functions. ANS. an = (−1)n+1 8.1.15 (2n − 3)! (2n − 3)!! , = (−1)n+1 (2n)!! 22n−2 n!(n − 2)! n = 2, 3, . . . . Express the coefficient of the nth term of the expansion of (1 + x)−1/2 (a) in terms of the factorials of integers, (b) in terms of the double factorial (!!) functions. ANS. an = (−1)n 8.1.16 (2n)! (2n − 1)!! , = (−1)n (2n)!! 22n (n!)2 n = 1, 2, 3, . . . . The Legendre polynomial may be written as  n 1 (2n − 1)!! Pn (cos θ ) = 2 cos(n − 2)θ cos nθ + · (2n)!! 1 2n − 1 + + 1·3 n(n − 1) cos(n − 4)θ 1 · 2 (2n − 1)(2n − 3)  n(n − 1)(n − 2) 1·3·5 cos(n − 6)θ + · · · . 1 · 2 · 3 (2n − 1)(2n − 3)(2n − 5) Let n = 2s + 1. Then Pn (cos θ ) = P2s+1 (cos θ ) = s  m=0 am cos(2m + 1)θ. Find am in terms of factorials and double factorials. 8.1 Definitions, Simple Properties 8.1.17 (a) Show that Ŵ (b) 1 2 509  − n Ŵ 12 + n = (−1)n π, where n is an integer. Express Ŵ( 12 + n) and Ŵ( 21 − n) separately in terms of π 1/2 and a !! function. (2n − 1)!! 1/2 π . 2n From one of the definitions of the factorial or gamma function, show that   (ix)!2 = πx . sinh πx Prove that −1/2 ∞ #    β2 Ŵ(α + iβ) = Ŵ(α) 1+ . (α + n)2 ANS. Ŵ( 12 + n) = 8.1.18 8.1.19 n=0 This equation has been useful in calculations of beta decay theory. 8.1.20 Show that   (n + ib)! =  πb sinh πb for n, a positive integer. 8.1.21 Show that 1/2 # n  2 1/2 s + b2 s=1   |x!| ≥ (x + iy)! for all x. The variables x and y are real. 8.1.22 8.1.23 Show that  1  Ŵ + iy 2 = 2 π . cosh πy The probability density associated with the normal distribution of statistics is given by  1 (x − µ)2 , exp − f (x) = σ (2π)1/2 2σ 2 with (−∞, ∞) for the range of x. 
Show that (a) the mean value of x, x is equal to µ, (b) the standard deviation (x 2  − x2 )1/2 is given by σ . 8.1.24 From the gamma distribution f (x) = show that (a) x (mean) = αβ,    1 β α Ŵ(α) 0, x α−1 e−x/β , x > 0, x ≤ 0, (b) σ 2 (variance) ≡ x 2  − x2 = αβ 2 . 510 Chapter 8 Gamma–Factorial Function 8.1.25 The wave function of a particle scattered by a Coulomb potential is ψ(r, θ ). At the origin the wave function becomes ψ(0) = e−πγ /2 Ŵ(1 + iγ ), where γ = Z1 Z2 e2 /h¯ v. Show that 8.1.26   ψ(0)2 = 2πγ . −1 e2πγ Derive the contour integral representation of Eq. (8.34), 2iν! sin νπ = e−z (−z)ν dz. C 8.1.27 Write a function subprogram FACT(N ) (fixed-point independent variable) that will calculate N!. Include provision for rejection and appropriate error message if N is negative. Note. For small integer N , direct multiplication is simplest. For large N , Eq. (8.55), Stirling’s series would be appropriate. 8.1.28 (a) Write a function subprogram to calculate the double factorial ratio (2N − 1)!!/ (2N )!!. Include provision for N = 0 and for rejection and an error message if N is negative. Calculate and tabulate this ratio for N = 1(1)100. (b) Check your function subprogram calculation of 199!!/200!! against the value obtained from Stirling’s series (Section 8.3). 199!! = 0.056348. 200!! Using either the FORTRAN-supplied GAMMA or a library-supplied subroutine for x! or Ŵ(x), determine the value of x for which Ŵ(x) is a minimum (1 ≤ x ≤ 2) and this minimum value of Ŵ(x). Notice that although the minimum value of Ŵ(x) may be obtained to about six significant figures (single precision), the corresponding value of x is much less accurate. Why this relatively low accuracy? ANS. 8.1.29 8.1.30 The factorial function expressed in integral form can be evaluated by the Gauss– Laguerre quadrature. For a 10-point formula the resultant x! is theoretically exact for x an integer, 0 up through 19. What happens if x is not an integer? 
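Exercise 8.1.24 can be confirmed by direct quadrature of the gamma distribution, Eq. (8.24c). A sketch, assuming α = 3/2 and β = 2 (the Maxwell–Boltzmann case under the identifications of the text, with β → kT) and an ad hoc midpoint rule; the neglected tail beyond the cutoff is exponentially small:

```python
import math

def gamma_pdf(x, alpha, beta):
    # Eq. (8.24c), normalized by 1/(beta^alpha * Gamma(alpha))
    return x ** (alpha - 1.0) * math.exp(-x / beta) / (beta ** alpha * math.gamma(alpha))

def moment(n, alpha, beta, upper=100.0, steps=200000):
    # midpoint rule for integral_0^upper of x^n f(x) dx
    h = upper / steps
    total = 0.0
    for k in range(steps):
        x = (k + 0.5) * h
        total += x ** n * gamma_pdf(x, alpha, beta)
    return total * h

alpha, beta = 1.5, 2.0
norm = moment(0, alpha, beta)             # total probability, should be 1
mean = moment(1, alpha, beta)             # part (a): alpha * beta
var = moment(2, alpha, beta) - mean ** 2  # part (b): alpha * beta^2
```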
Use the Gauss– Laguerre quadrature to evaluate x!, x = 0.0(0.1)2.0. Tabulate the absolute error as a function of x. Check value. x!exact − x!quadrature = 0.00034 for x = 1.3. 8.2 DIGAMMA AND POLYGAMMA FUNCTIONS Digamma Functions As may be noted from the three definitions in Section 8.1, it is inconvenient to deal with the derivatives of the gamma or factorial function directly. Instead, it is customary to take 8.2 Digamma and Polygamma Functions 511 the natural logarithm of the factorial function (Eq. (8.1)), convert the product to a sum, and then differentiate; that is, Ŵ(z + 1) = zŴ(z) = lim n→∞ n! nz (z + 1)(z + 2) · · · (z + n) (8.36) and  ln Ŵ(z + 1) = lim ln(n!) + z ln n − ln(z + 1) n→∞ − ln(z + 2) − · · · − ln(z + n) , (8.37) in which the logarithm of the limit is equal to the limit of the logarithm. Differentiating with respect to z, we obtain   1 1 1 d , (8.38) ln Ŵ(z + 1) ≡ ψ(z + 1) = lim ln n − − − ··· − n→∞ dz z+1 z+2 z+n which defines ψ(z + 1), the digamma function. From Mascheroni constant,2 Eq. (8.38) may be rewritten as ∞   1 − ψ(z + 1) = −γ − z+n n=1 = −γ + ∞  n=1 the definition of the Euler– 1 n  z . n(n + z) (8.39) One application of Eq. (8.39) is in the derivation of the series form of the Neumann function (Section 11.3). Clearly, ψ(1) = −γ = −0.577 215 664 901 . . . .3 (8.40) Another, perhaps more useful, expression for ψ(z) is derived in Section 8.3. Polygamma Function The digamma function may be differentiated repeatedly, giving rise to the polygamma function: ψ (m) (z + 1) ≡ d m+1 ln(z!) dzm+1 = (−1)m+1 m! ∞  n=1 1 , (z + n)m+1 m = 1, 2, 3, . . . . (8.41) 2 Compare Sections 5.2 and 5.9. We add and substract n s −1 . s=1 3 γ has been computed to 1271 places by D. E. Knuth, Math. Comput. 16: 275 (1962), and to 3566 decimal places by D. W. Sweeney, ibid. 17: 170 (1963). It may be of interest that the fraction 228/395 gives γ accurate to six places. 512 Chapter 8 Gamma–Factorial Function A plot of ψ(x + 1) and ψ ′ (x + 1) is included in Fig. 
8.2. Since the series in Eq. (8.41) defines the Riemann zeta function4 (with z = 0), ζ (m) ≡ ∞  1 , nm (8.42) n=1 we have ψ (m) (1) = (−1)m+1 m!ζ (m + 1), m = 1, 2, 3, . . . . (8.43) The values of the polygamma functions of positive integral argument, ψ (m) (n + 1), may be calculated by using Exercise 8.2.6. In terms of the perhaps more common Ŵ notation, d n+1 dn ψ(z) = ψ (n) (z). ln Ŵ(z) = dzn dzn+1 (8.44a) Maclaurin Expansion, Computation It is now possible to write a Maclaurin expansion for ln Ŵ(z + 1): ln Ŵ(z + 1) = ∞ n  z n=1 n! ψ (n−1) (1) = −γ z + ∞  zn (−1)n ζ (n) n (8.44b) n=2 convergent for |z| < 1; for z = x, the range is −1 < x ≤ 1. Alternate forms of this series appear in Exercise 5.9.14. Equation (8.44b) is a possible means of computing Ŵ(z + 1) for real or complex z, but Stirling’s series (Section 8.3) is usually better, and in addition, an excellent table of values of the gamma function for complex arguments based on the use of Stirling’s series and the recurrence relation (Eq. (8.29)) is now available.5 Series Summation The digamma and polygamma functions may also be used in summing series. If the general term of the series has the form of a rational fraction (with the highest power of the index in the numerator at least two less than the highest power of the index in the denominator), it may be transformed by the method of partial fractions (compare Section 15.8). The infinite series may then be expressed as a finite sum of digamma and polygamma functions. The usefulness of this method depends on the availability of tables of digamma and polygamma functions. Such tables and examples of series summation are given in AMS-55, Chapter 6 (see Additional Readings for the reference). 4 See Section 5.9. For z = 0 this series may be used to define a generalized zeta function. 5 Table of the Gamma Function for Complex Arguments, Applied Mathematics Series No. 34. Washington, DC: National Bureau of Standards (1954). 
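The series form (8.39) converges slowly (the truncated tail is of order z/N) but is straightforward to sum directly. A sketch checking ψ(1) = -γ, Eq. (8.40), and ψ(2) = 1 - γ from the difference relation; the number of terms is an ad hoc choice:

```python
GAMMA_EM = 0.5772156649015329       # Euler-Mascheroni constant, Eq. (8.17)

def digamma(z, terms=400000):
    # psi(z+1) = -gamma + sum_{n>=1} z / (n (n + z)), Eq. (8.39)
    return -GAMMA_EM + sum(z / (n * (n + z)) for n in range(1, terms + 1))

psi1 = digamma(0.0)       # psi(1) = -gamma, Eq. (8.40)
psi2 = digamma(1.0)       # psi(2) = psi(1) + 1 = 1 - gamma
```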
8.2 Digamma and Polygamma Functions Example 8.2.1 513 CATALAN’S CONSTANT Catalan’s constant, Exercise 5.2.22, or β(2) of Section 5.9 is given by K = β(2) = ∞  (−1)k . (2k + 1)2 (8.44c) k=0 Grouping the positive and negative terms separately and starting with unit index (to match the form of ψ (1) , Eq. (8.41)), we obtain K =1+ ∞  n=1 ∞ 1 1 1  − − . 2 9 (4n + 1) (4n + 3)2 n=1 Now, quoting Eq. (8.41), we get K= 8 9 + 1 (1) 16 ψ  1 + 14 − 1 (1) 16 ψ  1 + 34 . (8.44d) Using the values of ψ (1) from Table 6.1 of AMS-55 (see Additional Readings for the reference), we obtain K = 0.91596559 . . . . Compare this calculation of Catalan’s constant with the calculations of Chapter 5, either direct summation or a modification using Riemann zeta function values.  Exercises 8.2.1 Verify that the following two forms of the digamma function, ψ(x + 1) = x  1 r=1 r −γ and ψ(x + 1) = ∞  r=1 x − γ, r(r + x) are equal to each other (for x a positive integer). 8.2.2 Show that ψ(z + 1) has the series expansion ψ(z + 1) = −γ + 8.2.3 ∞  (−1)n ζ (n)zn−1 . n=2 For a power-series expansion of ln(z!), AMS-55 (see Additional Readings for reference) lists ln(z!) = − ln(1 + z) + z(1 − γ ) + ∞  [ζ (n) − 1]zn . (−1)n n n=2 514 Chapter 8 Gamma–Factorial Function (a) Show that this agrees with Eq. (8.44b) for |z| < 1. (b) What is the range of convergence of this new expression? 8.2.4 Show that    ∞ 1 ζ (2n) 2n πz = ln z , 2 sin πz 2n |z| < 1. n=1 Hint. Try Eq. (8.32). 8.2.5 Write out a Weierstrass infinite-product definition of ln(z!). Without differentiating, show that this leads directly to the Maclaurin expansion of ln(z!), Eq. (8.44b). 8.2.6 Derive the difference relation for the polygamma function ψ (m) (z + 2) = ψ (m) (z + 1) + (−1)m 8.2.7 Show that if m! , (z + 1)m+1 m = 0, 1, 2, . . . . Ŵ(x + iy) = u + iv, then Ŵ(x − iy) = u − iv. This is a special case of the Schwarz reflection principle, Section 6.5. 
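Example 8.2.1 can be reproduced without tables. A small Python sketch (stdlib only; the names and the integral tail estimate added to `trigamma` are ours) evaluates ψ^(1) by the series of Eq. (8.41) and assembles Catalan's constant from Eq. (8.44d):

```python
def trigamma(z, terms=300_000):
    """psi^(1)(z + 1) = sum_{n>=1} 1/(z + n)^2 (Eq. (8.41), m = 1), plus an
    integral tail estimate 1/(z + terms) to accelerate convergence."""
    s = sum(1.0 / (z + n) ** 2 for n in range(1, terms + 1))
    return s + 1.0 / (z + terms)

def catalan_via_trigamma():
    """K = 8/9 + (1/16)[psi^(1)(1 + 1/4) - psi^(1)(1 + 3/4)], Eq. (8.44d)."""
    return 8.0 / 9.0 + (trigamma(0.25) - trigamma(0.75)) / 16.0
```

The result agrees with the directly summed alternating series of Eq. (8.44c) and with the value K = 0.91596559 quoted in the example.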
8.2.8 The Pochhammer symbol (a)n is defined as (a)n = a(a + 1) · · · (a + n − 1), (a)0 = 1 (for integral n). (a) Express (a)n in terms of factorials. (b) Find (d/da)(a)n in terms of (a)n and digamma functions. ANS. (c) Show that  d (a)n = (a)n ψ(a + n) − ψ(a) . da (a)n+k = (a + n)k · (a)n . 8.2.9 Verify the following special values of the ψ form of the di- and polygamma functions: ψ(1) = −γ , 8.2.10 ψ (1) (1) = ζ (2), ψ (2) (1) = −2ζ (3). Derive the polygamma function recurrence relation ψ (m) (1 + z) = ψ (m) (z) + (−1)m m!/zm+1 , 8.2.11 Verify (a) ∞ 0 e−r ln r dr = −γ . m = 0, 1, 2, . . . . 8.2 Digamma and Polygamma Functions (b) (c) 0 ∞ ∞ 0 re−r ln r dr = 1 − γ . r n e−r ln r dr = (n − 1)! + n ∞ r n−1 e−r ln r dr, 515 n = 1, 2, 3, . . . . 0 Hint. These may be verified by integration by parts, three parts, or differentiating the integral form of n! with respect to n. 8.2.12 Dirac relativistic wave functions for hydrogen involve factors such as [2(1 − α 2 Z 2 )1/2 ]! 1 where α, the fine structure constant, is 137 and Z is the atomic number. Expand 2 2 1/2 2 2 [2(1 − α Z ) ]! in a series of powers of α Z . 8.2.13 The quantum mechanical description of a particle in a Coulomb field requires a knowledge of the phase of the complex factorial function. Determine the phase of (1 + ib)! for small b. 8.2.14 The total energy radiated by a blackbody is given by 8πk 4 T 4 ∞ x 3 dx. u= 3 3 ex − 1 c h 0 Show that the integral in this expression is equal to 3!ζ (4). [ζ (4) = π 4 /90 = 1.0823 . . .] The final result is the Stefan–Boltzmann law. 8.2.15 As a generalization of the result in Exercise 8.2.14, show that ∞ s x dx = s!ζ (s + 1), ℜ(s) > 0. x −1 e 0 8.2.16 The neutrino energy density (Fermi distribution) in the early history of the universe is given by x3 4π ∞ ρν = 3 dx. h 0 exp(x/kT ) + 1 Show that ρν = 8.2.17 7π 5 (kT )4 . 30h3 Prove that 0 ∞  x s dx = s! 1 − 2−s ζ (s + 1), x e +1 ℜ(s) > 0. ℜ(z) > 0. 
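The blackbody integral of Exercise 8.2.14 can be checked by direct quadrature: with s = 3 in Exercise 8.2.15, the integral equals 3!ζ(4) = π⁴/15. A hedged Python sketch (midpoint rule, truncation point chosen by us):

```python
import math

def planck_integral(upper=60.0, steps=300_000):
    """Midpoint-rule evaluation of the integral of Exercise 8.2.14,
    int_0^infty x^3/(e^x - 1) dx, truncated at `upper` (the neglected
    tail is of order upper^3 e^{-upper})."""
    h = upper / steps
    total = 0.0
    for i in range(steps):
        x = (i + 0.5) * h
        total += x ** 3 / math.expm1(x)   # expm1 keeps accuracy near x = 0
    return total * h
```

The value should match 3!ζ(4) = π⁴/15 ≈ 6.4939, consistent with the Stefan–Boltzmann law.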
Exercises 8.2.15 and 8.2.17 actually constitute Mellin integral transforms (compare Section 15.1). 8.2.18 Prove that ψ (n) (z) = (−1)n+1 ∞ 0 t n e−zt dt, 1 − e−t 516 Chapter 8 Gamma–Factorial Function 8.2.19 Using di- and polygamma functions, sum the series (a) ∞  n=1 1 , n(n + 1) (b) ∞  n=2 1 . n2 − 1 Note. You can use Exercise 8.2.6 to calculate the needed digamma functions. 8.2.20 Show that ∞  n=1 ( 1 1 ' ψ(1 + b) − ψ(1 + a) , = (n + a)(n + b) (b − a) where a = b and neither a nor b is a negative integer. It is of some interest to compare this summation with the corresponding integral, 1 ∞ ( 1 ' dx ln(1 + b) − ln(1 + a) . = (x + a)(x + b) b − a The relation between ψ(x) and ln x is made explicit in Eq. (8.51) in the next section. 8.2.21 Verify the contour integral representation of ζ (s), (−s)! ζ (s) = − 2πi C (−z)s−1 dz. ez − 1 The contour C is the same as that for Eq. (8.35). The points z = ±2nπi, n = 1, 2, 3, . . . , are all excluded. 8.2.22 Show that ζ (s) is analytic in the entire finite complex plane except at s = 1, where it has a simple pole with a residue of +1. Hint. The contour integral representation will be useful. 8.2.23 Using the complex variable capability of FORTRAN calculate ℜ(1 + ib)!, ℑ(1 + ib)!, |(1 + ib)!| and phase (1 + ib)! for b = 0.0(0.1)1.0. Plot the phase of (1 + ib)! versus b. Hint. Exercise 8.2.3 offers a convenient approach. You will need to calculate ζ (n). 8.3 STIRLING’S SERIES For computation of ln(z!) for very large z (statistical mechanics) and for numerical computations at nonintegral values of z, a series expansion of ln(z!) in negative powers of z is desirable. Perhaps the most elegant way of deriving such an expansion is by the method of steepest descents (Section 7.3). The following method, starting with a numerical integration formula, does not require knowledge of contour integration and is particularly direct. 
8.3 Stirling’s Series 517 Derivation from Euler–Maclaurin Integration Formula The Euler–Maclaurin formula for evaluating a definite integral6 is n f (x) dx = 21 f (0) + f (1) + f (2) + · · · + 12 f (n) 0   − b2 f ′ (n) − f ′ (0) − b4 f ′′′ (n) − f ′′′ (0) − · · · , (8.45) in which the b2n are related to the Bernoulli numbers B2n (compare Section 5.9) by (2n)!b2n = B2n , B0 = 1, B2 = 16 , 1 B4 = − 30 , 1 42 , 1 B8 = − 30 , 5 B10 = 66 , (8.46) B6 = (8.47) and so on. By applying Eq. (8.45) to the definite integral ∞ dx 1 = 2 z (z + x) 0 (8.48) (for z not on the negative real axis), we obtain 1 1 2!b2 4!b4 = + ψ (1) (z + 1) − 3 − 5 − · · · . z 2z2 z z (8.49) This is the reason for using Eq. (8.48). The Euler–Maclaurin evaluation yields ψ (1) (z + 1), which is d 2 ln Ŵ(z + 1)/dz2 . Using Eq. (8.46) and solving for ψ (1) (z + 1), we have ψ (1) (z + 1) = 1 1 d B2 B4 ψ(z + 1) = − 2 + 3 + 5 + · · · dz z 2z z z = ∞  B2n 1 1 − 2+ . z 2z z2n+1 (8.50) n=1 Since the Bernoulli numbers diverge strongly, this series does not converge. It is a semiconvergent, or asymptotic, series, useful if one retains a small enough number of terms (compare Section 5.10). Integrating once, we get the digamma function ψ(z + 1) = C1 + ln z + = C1 + ln z + B2 1 B4 − 2 − 4 − ··· 2z 2z 4z ∞  B2n 1 − . 2z 2nz2n (8.51) n=1 Integrating Eq. (8.51) with respect to z from z − 1 to z and then letting z approach infinity, C1 , the constant of integration, may be shown to vanish. This gives us a second expression for the digamma function, often more useful than Eq. (8.38) or (8.44b). 6 This is obtained by repeated integration by parts, Section 5.9. 518 Chapter 8 Gamma–Factorial Function Stirling’s Series The indefinite integral of the digamma function (Eq. (8.51)) is   B2n 1 B2 + ··· + ln Ŵ(z + 1) = C2 + z + + ··· , ln z − z + 2 2z 2n(2n − 1)z2n−1 (8.52) in which C2 is another constant of integration. 
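The semiconvergent series Eq. (8.51) for the digamma function can be compared against the convergent series Eq. (8.39). The Python sketch below (our function names; Bernoulli numbers from Eq. (8.47)) keeps only four terms of the asymptotic series, as its semiconvergent nature requires:

```python
import math

B2N = (1 / 6, -1 / 30, 1 / 42, -1 / 30)   # B_2, B_4, B_6, B_8, Eq. (8.47)

def digamma_asymptotic(z):
    """psi(z + 1) from the asymptotic series Eq. (8.51):
    ln z + 1/(2z) - sum_n B_{2n}/(2n z^{2n}), truncated after B_8."""
    s = math.log(z) + 1.0 / (2.0 * z)
    for n, b in enumerate(B2N, start=1):
        s -= b / (2 * n * z ** (2 * n))
    return s

def digamma_convergent(z, terms=500_000):
    """psi(z + 1) from the convergent series Eq. (8.39), for comparison."""
    return -0.5772156649015329 + sum(z / (n * (n + z))
                                     for n in range(1, terms + 1))
```

At z = 10 the two agree to the truncation error of each series; the asymptotic form also reproduces ψ(11) = H₁₀ − γ, the harmonic-number value for integer argument.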
To fix C2 we find it convenient to use the doubling, or Legendre duplication, formula derived in Section 8.4,  Ŵ(z + 1)Ŵ z + 12 = 2−2z π 1/2 Ŵ(2z + 1). (8.53) This may be proved directly when z is a positive integer by writing Ŵ(2z + 1) as a product of even terms times a product of odd terms and extracting a factor of 2 from each term (Exercise 8.3.5). Substituting Eq. (8.52) into the logarithm of the doubling formula, we find that C2 is C2 = 21 ln 2π, (8.54) giving ln Ŵ(z + 1) =   1 1 1 1 1 ln z − z + ln 2π + z + − + − ··· . 3 2 2 12z 360z 1260z5 (8.55) This is Stirling’s series, an asymptotic expansion. The absolute value of the error is less than the absolute value of the first term omitted. The constants of integration C1 and C2 may also be evaluated by comparison with the first term of the series expansion obtained by the method of “steepest descent.” This is carried out in Section 7.3. To help convey a feeling of the remarkable precision of Stirling’s series for Ŵ(s + 1), the ratio of the first term of Stirling’s approximation to Ŵ(s + 1) is plotted in Fig. 8.5. A tabulation gives the ratio of the first term in the expansion to Ŵ(s + 1) and the ratio of the first two terms in the expansion to Ŵ(s + 1) (Table 8.1). The derivation of these forms is Exercise 8.3.1. Exercises 8.3.1 Rewrite Stirling’s series to give Ŵ(z + 1) instead of ln Ŵ(z + 1).   √ 1 1 139 z+1/2 −z + 1+ − + ··· . e ANS. Ŵ(z + 1) = 2πz 12z 288z2 51,840z3 8.3.2 Use Stirling’s formula to estimate 52!, the number of possible rearrangements of cards in a standard deck of playing cards. 8.3.3 By integrating Eq. (8.51) from z − 1 to z and then letting z → ∞, evaluate the constant C1 in the asymptotic series for the digamma function ψ(z). 8.3.4 Show that the constant C2 in Stirling’s formula equals 21 ln 2π by using the logarithm of the doubling formula. 8.3 Stirling’s Series FIGURE 8.5 519 Accuracy of Stirling’s formula. 
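Stirling's series, Eq. (8.55), is simple to code and can be checked against an independent value of ln Γ. A Python sketch (stdlib `math.lgamma` as the reference; the function name is ours):

```python
import math

def ln_gamma_stirling(z):
    """ln Gamma(z + 1) from Stirling's series, Eq. (8.55), through the
    z^{-5} term; the error is below the first omitted term, 1/(1680 z^7)."""
    return ((z + 0.5) * math.log(z) - z + 0.5 * math.log(2.0 * math.pi)
            + 1.0 / (12.0 * z) - 1.0 / (360.0 * z ** 3)
            + 1.0 / (1260.0 * z ** 5))
```

Exponentiating at z = 52 gives the estimate of 52! asked for in Exercise 8.3.2, accurate here to many significant figures.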
Table 8.1 s √ 1 2π s s+1/2 e−s Ŵ(s + 1)   √ 1 1 2π s s+1/2 e−s 1 + Ŵ(s + 1) 12s 1 2 3 4 5 6 7 8 9 10 0.92213 0.95950 0.97270 0.97942 0.98349 0.98621 0.98817 0.98964 0.99078 0.99170 0.99898 0.99949 0.99972 0.99983 0.99988 0.99992 0.99994 0.99995 0.99996 0.99998 8.3.5 By direct expansion, verify the doubling formula for z = n + 12 ; n is an integer. 8.3.6 Without using Stirling’s series show that (a) ln(n!) n 1 ln x dx; n is an integer ≥ 2. Notice that the arithmetic mean of these two integrals gives a good approximation for Stirling’s series. 8.3.7 Test for convergence  ∞ ∞   (p − 21 )! 2 2p + 1 (2p − 1)!!(2p + 1)!! =π . × p! 2p + 2 (2p)!!(2p + 2)!! p=0 p=0 520 Chapter 8 Gamma–Factorial Function This series arises in an attempt to describe the magnetic field created by and enclosed by a current loop. 8.3.8 Show that lim x b−a x→∞ 8.3.9 Show that (x + a)! = 1. (x + b)! (2n − 1)!! 1/2 n = π −1/2 . (2n)!!  Calculate the binomial coefficient 2n n to six significant figures for n = 10, 20, and 30. Check your values by lim n→∞ 8.3.10 (a) a Stirling series approximation through terms in n−1 , (b) a double precision calculation. ANS. 8.3.11  20 10 = 1.84756 × 105 ,  40 11 20 = 1.37846 × 10 ,  60 17 30 = 1.18264 × 10 . Write a program (or subprogram) that will calculate log10 (x!) directly from Stirling’s series. Assume that x ≥ 10. (Smaller values could be calculated via the factorial recurrence relation.) Tabulate log10 (x!) versus x for x = 10(10)300. Check your results against AMS-55 (see Additional Readings for this reference) or by direct multiplication (for n = 10, 20, and 30). Check value. log10 (100!) = 157.97. 8.3.12 Using the complex arithmetic capability of FORTRAN, write a subroutine that will calculate ln(z!) for complex z based on Stirling’s series. Include a test and an appropriate error message if z is too close to a negative real integer. Check your subroutine against alternate calculations for z real, z pure imaginary, and z = 1 + ib (Exercise 8.2.23). 
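The ratios tabulated in Table 8.1 can be regenerated in a few lines. A Python sketch (our function name; `math.gamma` as the reference value) computes the ratio of the leading Stirling approximation, optionally with the 1/(12s) correction of Exercise 8.3.1, to Γ(s + 1):

```python
import math

def stirling_ratio(s, corrected=False):
    """Ratio of sqrt(2*pi) s^{s+1/2} e^{-s} (times 1 + 1/(12 s) when
    `corrected`) to Gamma(s + 1), as tabulated in Table 8.1."""
    approx = math.sqrt(2.0 * math.pi) * s ** (s + 0.5) * math.exp(-s)
    if corrected:
        approx *= 1.0 + 1.0 / (12.0 * s)
    return approx / math.gamma(s + 1.0)
```

Already at s = 1 the corrected ratio is 0.99898, as the table shows.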
Check values. 8.4 |(i0.5)!| = 0.82618 phase (i0.5)! = −0.24406. THE BETA FUNCTION Using the integral definition (Eq. (8.25)), we write the product of two factorials as the product of two integrals. To facilitate a change in variables, we take the integrals over a finite range: a2 a2 ℜ(m) > −1, e−u um du e−v v n dv, (8.56a) m!n! = lim ℜ(n) > −1. a 2 →∞ 0 0 Replacing u with x 2 and v with y 2 , we obtain a −x 2 2m+1 m!n! = lim 4 dx e x a→∞ 0 0 a 2 e−y y 2n+1 dy. (8.56b) 8.4 The Beta Function 521 FIGURE 8.6 Transformation from Cartesian to polar coordinates. Transforming to polar coordinates gives us a 2 m!n! = lim 4 e−r r 2m+2n+3 dr a→∞ 0 = (m + n + 1)!2 π/2 π/2 cos2m+1 θ sin2n+1 θ dθ 0 cos2m+1 θ sin2n+1 θ dθ. (8.57) 0 Here the Cartesian area element dx dy has been replaced by r dr dθ (Fig. 8.6). The last equality in Eq. (8.57) follows from Exercise 8.1.11. The definite integral, together with the factor 2, has been named the beta function: π/2 cos2m+1 θ sin2n+1 θ dθ B(m + 1, n + 1) ≡ 2 0 = m!n! . (m + n + 1)! (8.58a) Equivalently, in terms of the gamma function and noting its symmetry, B(p, q) = Ŵ(p)Ŵ(q) , Ŵ(p + q) B(q, p) = B(p, q). (8.58b) The only reason for choosing m + 1 and n + 1, rather than m and n, as the arguments of B is to be in agreement with the conventional, historical beta function. Definite Integrals, Alternate Forms The beta function is useful in the evaluation of a wide variety of definite integrals. The substitution t = cos2 θ converts Eq. (8.58a) to7 1 m!n! = B(m + 1, n + 1) = t m (1 − t)n dt. (8.59a) (m + n + 1)! 0 7 The Laplace transform convolution theorem provides an alternate derivation of Eq. (8.58a), compare Exercise 15.11.2. 522 Chapter 8 Gamma–Factorial Function Replacing t by x 2 , we obtain m!n! = 2(m + n + 1)! 0 1  n x 2m+1 1 − x 2 dx. The substitution t = u/(1 + u) in Eq. (8.59a) yields still another useful form, ∞ m!n! um = du. (m + n + 1)! 
(1 + u)m+n+2 0 (8.59b) (8.60) The beta function as a definite integral is useful in establishing integral representations of the Bessel function (Exercise 11.1.18) and the hypergeometric function (Exercise 13.4.10). Verification of πα/ sin πα Relation If we take m = a, n = −a, −1 < a < 1, then ∞ ua du = a!(−a)!. (1 + u)2 0 (8.61) By contour integration this integral may be shown to be equal to πa/ sin πa (Exercise 7.1.18), thus providing another method of obtaining Eq. (8.32). Derivation of Legendre Duplication Formula The form of Eq. (8.58a) suggests that the beta function may be useful in deriving the doubling formula used in the preceding section. From Eq. (8.59a) with m = n = z and ℜ(z) > −1, 1 z!z! = t z (1 − t)z dt. (8.62) (2z + 1)! 0 By substituting t = (1 + s)/2, we have 1 1  z  z z!z! = 2−2z−1 1 − s 2 ds. 1 − s 2 ds = 2−2z (2z + 1)! 0 −1 (8.63) The last equality holds because the integrand is even. Evaluating this integral as a beta function (Eq. (8.59b)), we obtain z!(− 21 )! z!z! . = 2−2z−1 (2z + 1)! (z + 21 )! (8.64) Rearranging terms and recalling that (− 12 )! = π 1/2 , we reduce this equation to one form of the Legendre duplication formula,  z! z + 12 ! = 2−2z−1 π 1/2 (2z + 1)!. (8.65a) Dividing by (z + 21 ), we obtain an alternate form of the duplication formula:  z! z − 21 ! = 2−2z π 1/2 (2z)!. (8.65b) 8.4 The Beta Function 523 Although the integrals used in this derivation are defined only for ℜ(z) > −1, the results (Eqs. (8.65a) and (8.65b) hold for all regular points z by analytic continuation.8 Using the double factorial notation (Section 8.1), we may rewrite Eq. (8.65a) (with z = n, an integer) as  n + 12 ! = π 1/2 (2n + 1)!!/2n+1 . (8.65c) This is often convenient for eliminating factorials of fractions. Incomplete Beta Function Just as there is an incomplete gamma function (Section 8.5), there is also an incomplete beta function, x t p−1 (1 − t)q−1 dt, 0 ≤ x ≤ 1, p > 0, q > 0 (if x = 1). 
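Both the beta-function relation Eq. (8.58b) and the duplication formula Eq. (8.65b) can be verified numerically. A Python sketch (stdlib only; function names ours; the midpoint quadrature is our choice of check):

```python
import math

def beta_gamma(p, q):
    """B(p, q) = Gamma(p) Gamma(q) / Gamma(p + q), Eq. (8.58b)."""
    return math.gamma(p) * math.gamma(q) / math.gamma(p + q)

def beta_integral(p, q, steps=200_000):
    """B(p, q) = int_0^1 t^{p-1} (1 - t)^{q-1} dt, Eq. (8.59a), by the
    midpoint rule; adequate for p, q >= 1, where the integrand is bounded."""
    h = 1.0 / steps
    return sum(((i + 0.5) * h) ** (p - 1) * (1.0 - (i + 0.5) * h) ** (q - 1)
               for i in range(steps)) * h

def duplication_residual(z):
    """Relative residual of Eq. (8.65b),
    Gamma(z+1) Gamma(z+1/2) = 2^{-2z} sqrt(pi) Gamma(2z+1); should be ~0."""
    lhs = math.gamma(z + 1.0) * math.gamma(z + 0.5)
    rhs = 2.0 ** (-2.0 * z) * math.sqrt(math.pi) * math.gamma(2.0 * z + 1.0)
    return lhs / rhs - 1.0
```

For p = 1.3, q = 1.7 both routes reproduce the check value B(1.3, 1.7) = 0.40774 quoted in Exercise 8.4.19, and the duplication residual vanishes to machine precision at noninteger z as analytic continuation promises.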
(8.66) Bx (p, q) = 0 Clearly, Bx=1 (p, q) becomes the regular (complete) beta function, Eq. (8.59a). A powerseries expansion of Bx (p, q) is the subject of Exercises 5.2.18 and 5.7.8. The relation to hypergeometric functions appears in Section 13.4. The incomplete beta function makes an appearance in probability theory in calculating the probability of at most k successes in n independent trials.9 Exercises 8.4.1 Derive the doubling formula for the factorial function by integrating (sin 2θ )2n+1 = (2 sin θ cos θ )2n+1 (and using the beta function). 8.4.2 Verify the following beta function identities: (a) (b) (c) 8.4.3 B(a, b) = B(a + 1, b) + B(a, b + 1), a+b B(a, b + 1), b b−1 B(a + 1, b − 1), B(a, b) = a B(a, b) = (d) B(a, b)B(a + b, c) = B(b, c)B(a, b + c). (a) Show that 1 −1 1−x 2 1/2   π/2, x 2n dx = (2n − 1)!! π , (2n + 2)!! 8 If 2z is a negative integer, we get the valid but unilluminating result ∞ = ∞. n=0 n = 1, 2, 3, . . . . 9 W. Feller, An Introduction to Probability Theory and Its Applications, 3rd ed. New York: Wiley (1968), Section VI.10. 524 Chapter 8 Gamma–Factorial Function (b) Show that   π, 2 −1/2 2n 1−x x dx = (2n − 1)!! π , (2n)!! −1 8.4.4 Show that 1 −1 8.4.5 8.4.6 8.4.7 Evaluate n=0 1 1−x 2 n 1   2n+1  2 n!n! , (2n + 1)! dx = (2n)!!   2 , (2n + 1)!! a b −1 (1 + x) (1 − x) dx n = 1, 2, 3, . . . . n > −1 n = 0, 1, 2, . . . . in terms of the beta function. ANS. 2a+b+1 B(a + 1, b + 1). Show, by means of the beta function, that z dx π = , 1−α α sin πα (x − t) t (z − x) Show that the Dirichlet integral x p y q dx dy = 0 < α < 1. B(p + 1, q + 1) p!q! = , (p + q + 2)! p+q +2 where the range of integration is the triangle bounded by the positive x- and y-axes and the line x + y = 1. 8.4.8 Show that 0 ∞ ∞ e−(x 2 +y 2 +2xy cos θ) 0 dx dy = θ . 2 sin θ What are the limits on θ ? Hint. Consider oblique xy-coordinates. ANS. −π < θ < π . 
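The incomplete beta function of Eq. (8.66) can be sketched the same way (Python, midpoint rule, function name ours; suitable only for p, q ≥ 1, where the integrand is bounded):

```python
def incomplete_beta(x, p, q, steps=200_000):
    """B_x(p, q) = int_0^x t^{p-1} (1 - t)^{q-1} dt, Eq. (8.66),
    by the midpoint rule."""
    h = x / steps
    return sum(((i + 0.5) * h) ** (p - 1) * (1.0 - (i + 0.5) * h) ** (q - 1)
               for i in range(steps)) * h
```

At x = 1 this reduces to the complete beta function, B(2, 3) = 1!2!/4! = 1/12, and it satisfies the reflection Bₓ(p, q) = B(p, q) − B₁₋ₓ(q, p) used in Exercise 8.4.20.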
8.4.9 Evaluate (using the beta function) (a) 0 π/2 cos1/2 θ dθ = (2π)3/2 16[( 41 )!]2 (b) 0 π/2 n cos θ dθ = π/2 0 n sin θ dθ =    (n − 1)!!  n!! =  π  · (n − 1)!!  2 n!! √ , π[(n − 1)/2]! 2(n/2)! for n odd, for n even. 8.4 The Beta Function 8.4.10 8.4.11 1 Evaluate 0 525 (1 − x 4 )−1/2 dx as a beta function. ANS. [( 41 )!]2 · 4 = 1.311028777. (2π)1/2 Given  ν π/2 z Jν (z) = sin2ν θ cos(z cos θ ) dθ, 1 1/2 2 π (ν − 2 )! 0 2 ℜ(ν) > − 21 , show, with the aid of beta functions, that this reduces to the Bessel series  2s+ν ∞  z 1 s Jν (z) = , (−1) s!(s + ν)! 2 s=0 identifying the initial Jν as an integral representation of the Bessel function, Jν (Section 11.1). 8.4.12 Given the associated Legendre function Section 12.5, show that (a) 1 2 Pmm (x) dx = −1 (b) m = 0, 1, 2, . . . , 2 dx Pmm (x) = 2 · (2m − 1)!, 1 − x2 m = 1, 2, 3, . . . . Show that (a) 1 0 (b) 0 8.4.14 2 (2m)!, 2m + 1 1 −1 8.4.13  m/2 Pmm (x) = (2m − 1)!! 1 − x 2 , 1  −1/2 x 2s+1 1 − x 2 dx = (2s)!! , (2s + 1)!!  q 1 (p − 21 )!q! x 2p 1 − x 2 dx = . 2 (p + q + 12 )! A particle of mass m moving in a symmetric potential that is well described by V (x) = A|x|n has a total energy 12 m(dx/dt)2 + V (x) = E. Solving for dx/dt and integrating we find that the period of motion is √ xmax dx , τ = 2 2m (E − Ax n )1/2 0 n = E. Show that where xmax is a classical turning point given by Axmax )   2 2πm E 1/n Ŵ(1/n) τ= . n E A Ŵ(1/n + 12 ) 8.4.15 Referring to Exercise 8.4.14, 526 Chapter 8 Gamma–Factorial Function (a) Determine the limit as n → ∞ of )   2 2πm E 1/n Ŵ(1/n) . n E A Ŵ(1/n + 12 ) (b) Find lim τ from the behavior of the integrand (E − Ax n )−1/2 . (c) 8.4.16 n→∞ Investigate the behavior of the physical system (potential well) as n → ∞. Obtain the period from inspection of this limiting physical system. Show that ∞ 0   α+1 β −α 1 sinhα x dx = B , , 2 2 2 coshβ x −1 < α < β. Hint. Let sinh2 x = u. 
8.4.17 The beta distribution of probability theory has a probability density f (x) = Ŵ(α + β) α−1 x (1 − x)β−1 , Ŵ(α)Ŵ(β) with x restricted to the interval (0, 1). Show that 8.4.18 α . α+β (a) x(mean) = (b) σ 2 (variance) ≡ x 2  − x2 = αβ (α β)2 (α β + 1) . From π/2 lim 0 n→∞ π/2 0 sin2n θ dθ sin2n+1 θ dθ =1 derive the Wallis formula for π : π 2·2 4·4 6·6 = · · ··· . 2 1·3 3·5 5·7 8.4.19 Tabulate the beta function B(p, q) for p and q = 1.0(0.1)2.0 independently. Check value. B(1.3, 1.7) = 0.40774. 8.4.20 (a) Write a subroutine that will calculate the incomplete beta function Bx (p, q). For 0.5 < x ≤ 1 you will find it convenient to use the relation Bx (p, q) = B(p, q) − B1−x (q, p). (b) Tabulate Bx ( 23 , 23 ). Spot check your results by using the Gauss–Legendre quadrature. 8.5 Incomplete Gamma Function 8.5 527 THE INCOMPLETE GAMMA FUNCTIONS AND RELATED FUNCTIONS Generalizing the Euler definition of the gamma function (Eq. (8.5)), we define the incomplete gamma functions by the variable limit integrals x e−t t a−1 dt, ℜ(a) > 0 γ (a, x) = 0 and ∞ e−t t a−1 dt. (8.67) γ (a, x) + Ŵ(a, x) = Ŵ(a). (8.68) Ŵ(a, x) = x Clearly, the two functions are related, for The choice of employing γ (a, x) or Ŵ(a, x) is purely a matter of convenience. If the parameter a is a positive integer, Eq. (8.67) may be integrated completely to yield  n−1 s   x −x γ (n, x) = (n − 1)! 1 − e s! s=0 Ŵ(n, x) = (n − 1)!e−x n−1 s  x s=0 s! , (8.69) n = 1, 2, . . . . For nonintegral a, a power-series expansion of γ (a, x) for small x and an asymptotic expansion of Ŵ(a, x) (denoted as I (x, p)) are developed in Exercise 5.7.7 and Section 5.10: γ (a, x) = x a ∞  (−1)n n=0 Ŵ(a, x) = x a−1 e−x =x a−1 −x e ∞  n=0 xn , n!(a + n) |x| ∼ 1 (small x), 1 (a − 1)! · n (a − 1 − n)! x ∞  (n − a)! 1 · , (−1)n (−a)! x n n=0 (8.70) x ≫ 1 (large x). These incomplete gamma functions may also be expressed quite elegantly in terms of confluent hypergeometric functions (compare Section 13.5). 
Exponential Integral Although the incomplete gamma function Ŵ(a, x) in its general form (Eq. (8.67)) is only infrequently encountered in physical problems, a special case is quite common and very 528 Chapter 8 Gamma–Factorial Function FIGURE 8.7 The exponential integral, E1 (x) = −Ei(−x). useful. We define the exponential integral by10 ∞ −t e dt = E1 (x). −Ei(−x) ≡ t x (8.71) (See Fig. 8.7.) Caution is needed here, for the integral in Eq. (8.71) diverges logarithmically as x → 0. To obtain a series expansion for small x, we start from  (8.72) E1 (x) = Ŵ(0, x) = lim Ŵ(a) − γ (a, x) . a→0 We may split the divergent term in the series expansion for γ (a, x),   ∞ aŴ(a) − x a (−1)n x n − . E1 (x) = lim a→0 a n · n! (8.73) n=1 Using l’Hôpital’s rule (Exercise 5.6.8) and ( d d ln(a!) d ' aŴ(a) = a! = e = a!ψ(a + 1), da da da (8.74) and then Eq. (8.40),11 we obtain the rapidly converging series E1 (x) = −γ − ln x − An asymptotic expansion E1 (x) ≈ e−x [ x1 − tion 5.10. 1! x2 ∞  (−1)n x n n=1 n · n! . (8.75) · · · ] for x → ∞ is developed in Sec- 10 The appearance of the two minus signs in −Ei(−x) is a historical monstrosity. AMS-55, Chapter 5, denotes this integral as E1 (x). See Additional Readings for the reference. 11 dx a /da = x a ln x. 8.5 Incomplete Gamma Function FIGURE 8.8 529 Sine and cosine integrals. Further special forms related to the exponential integral are the sine integral, cosine integral (Fig. 8.8), and logarithmic integral, defined by12 ∞ sin t si(x) = − dt t x ∞ cos t dt (8.76) Ci(x) = − t x x du li(x) = = Ei(ln x) ln u 0 for their principal branch, with the branch cut conventionally chosen to be along the negative real axis from the branch point at zero. By transforming from real to imaginary argument, we can show that 1 1 Ei(ix) − Ei(−ix) = E1 (ix) − E1 (−ix) , si(x) = (8.77) 2i 2i whereas 1 1 π Ci(x) = Ei(ix) + Ei(−ix) = − E1 (ix) + E1 (−ix) , | arg x| < . 
(8.78) 2 2 2 Adding these two relations, we obtain Ei(ix) = Ci(x) + isi(x), (8.79) to show that the relation among these integrals is exactly analogous to that among eix , cos x, and sin x. Reference to Eqs. (8.71) and (8.78) shows that Ci(x) agrees with the definitions of AMS-55 (see Additional Readings for the reference). In terms of E1 , E1 (ix) = −Ci(x) + isi(x). Asymptotic expansions of Ci(x) and si(x) are developed in Section 5.10. Power-series expansions about the origin for Ci(x), si(x), and li(x) may be obtained from those for 12 Another sine integral is given by Si(x) = si(x) + π/2. 530 Chapter 8 Gamma–Factorial Function FIGURE 8.9 Error function, erf x. the exponential integral, E1 (x), or by direct integration, Exercise 8.5.10. The exponential, sine, and cosine integrals are tabulated in AMS-55, Chapter 5, (see Additional Readings for the reference) and can also be accessed by symbolic software such as Mathematica, Maple, Mathcad, and Reduce. Error Integrals The error integrals 2 erf z = √ π 0 z 2 e−t dt, 2 erfc z = 1 − erf z = √ π ∞ 2 e−t dt (8.80a) z (normalized so that erf ∞ = 1) are introduced in Exercise 5.10.4 (Fig. 8.9). Asymptotic forms are developed there. From the general form of the integrands and Eq. (8.6) we expect that erf z and erfc z may be written as incomplete gamma functions with a = 12 . The relations are   erf z = π −1/2 γ 12 , z2 , erfc z = π −1/2 Ŵ 21 , z2 . (8.80b) The power-series expansion of erf z follows directly from Eq. (8.70). Exercises 8.5.1 Show that γ (a, x) = e−x ∞  (a − 1)! n=0 (a + n)! x a+n (a) by repeatedly integrating by parts. (b) Demonstrate this relation by transforming it into Eq. (8.70). 8.5.2 Show that (a) d m  −a x γ (a, x) = (−1)m x −a−m γ (a + m, x), m dx 8.5 Incomplete Gamma Function (b) 8.5.3 531 dm  x Ŵ(a) e γ (a, x) = ex γ (a − m, x). m dx Ŵ(a − m) Show that γ (a, x) and Ŵ(a, x) satisfy the recurrence relations (a) γ (a + 1, x) = aγ (a, x) − x a e−x , (b) Ŵ(a + 1, x) = aŴ(a, x) + x a e−x . 
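The power series for erf z that follows from Eqs. (8.70) and (8.80b) is short to code and can be checked against the standard library's `math.erf`. A Python sketch (function name ours):

```python
import math

def erf_series(z, terms=40):
    """erf z = (2/sqrt(pi)) sum_{n>=0} (-1)^n z^{2n+1} / (n! (2n+1)),
    the Maclaurin series of the error integral of Eq. (8.80a)."""
    s = 0.0
    power = z      # holds z^{2n+1}
    fact = 1.0     # holds n!
    for n in range(terms):
        if n:
            power *= z * z
            fact *= n
        s += (-1) ** n * power / (fact * (2 * n + 1))
    return 2.0 / math.sqrt(math.pi) * s
```

Forty terms suffice to full double precision out to z = 2; the series also makes the oddness erf(−z) = −erf(z) manifest term by term.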
8.5.4 The potential produced by a 1S hydrogen electron (Exercise 12.8.6) is given by   1 q γ (3, 2r) + Ŵ(2, 2r) . V (r) = 4πε0 a0 2r (a) For r ≪ 1, show that V (r) = (b) For r ≫ 1, show that   2 q 1 − r2 + · · · . 4πε0 a0 3 V (r) = 1 q · . 4πε0 a0 r Here r is expressed in units of a0 , the Bohr radius. Note. For computation at intermediate values of r, Eqs. (8.69) are convenient. 8.5.5 The potential of a 2P hydrogen electron is found to be (Exercise 12.8.7)   1 1 q V (r) = γ (5, r) + Ŵ(4, r) · 4πε0 24a0 r   1 1 q 2 − · γ (7, r) + r Ŵ(2, r) P2 (cos θ ). 4πε0 120a0 r 3 Here r is expressed in units of a0 , the Bohr radius. P2 (cos θ ) is a Legendre polynomial (Section 12.1). (a) For r ≪ 1, show that V (r) = (b) 8.5.6 For r ≫ 1, show that   1 2 q 1 1 − r P2 (cos θ ) + · · · . · 4πε0 a0 4 120   6 q 1 1 − 2 P2 (cos θ ) + · · · . · V (r) = 4πε0 a0 r r Prove that the exponential integral has the expansion ∞ −t ∞  (−1)n x n e dt = −γ − ln x − , t n · n! x n=1 where γ is the Euler–Mascheroni constant. 532 Chapter 8 Gamma–Factorial Function 8.5.7 Show that E1 (z) may be written as E1 (z) = e−z ∞ 0 e−zt dt. 1+t Show also that we must impose the condition | arg z| ≤ π/2. 8.5.8 Related to the exponential integral (Eq. (8.71)) by a simple change of variable is the function ∞ −xt e En (x) = dt. tn 1 Show that En (x) satisfies the recurrence relation 8.5.9 8.5.10 1 x n = 1, 2, 3, . . . . En+1 (x) = e−x − En (x), n n With En (x) as defined in Exercise 8.5.8, show that En (0) = 1/(n − 1), n > 1. Develop the following power-series expansions: (a) ∞ π  (−1)n x 2n+1 , si(x) = − + 2 (2n + 1)(2n + 1)! n=0 (b) 8.5.11 Ci(x) = γ + ln x + ∞  (−1)n x 2n n=1 2n(2n)! . An analysis of a center-fed linear antenna leads to the expression x 1 − cos t dt. t 0 Show that this is equal to γ + ln x − Ci(x). 8.5.12 Using the relation Ŵ(a) = γ (a, x) + Ŵ(a, x), show that if γ (a, x) satisfies the relations of Exercise 8.5.2, then Ŵ(a, x) must satisfy the same relations. 
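The convergent series Eq. (8.75) for E₁(x) and the recurrence of Exercise 8.5.8 together give all the Eₙ(x). A Python sketch (function names ours; stdlib only):

```python
import math

def expint_e1(x, terms=60):
    """E1(x) from the convergent series Eq. (8.75):
    E1(x) = -gamma - ln x - sum_{n>=1} (-1)^n x^n / (n * n!)."""
    s, t = 0.0, 1.0
    for n in range(1, terms + 1):
        t *= x / n                # t = x^n / n!
        s += (-1) ** n * t / n
    return -0.5772156649015329 - math.log(x) - s

def expint_en(n, x):
    """E_n(x) = int_1^infty e^{-xt}/t^n dt via the recurrence of
    Exercise 8.5.8: E_{k+1}(x) = (e^{-x} - x E_k(x)) / k, starting from E_1."""
    e = expint_e1(x)
    for k in range(1, n):
        e = (math.exp(-x) - x * e) / k
    return e
```

The series reproduces the check value E₁(1.0) = 0.219384 quoted for Exercise 8.5.15; the recurrence then gives E₂(1) = e⁻¹ − E₁(1) ≈ 0.148496.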
8.5.13 (a) Write a subroutine that will calculate the incomplete gamma functions γ (n, x) and Ŵ(n, x) for n a positive integer. Spot check Ŵ(n, x) by Gauss–Laguerre quadratures. (b) Tabulate γ (n, x) and Ŵ(n, x) for x = 0.0(0.1)1.0 and n = 1, 2, 3. 8.5.14 Calculate the potential produced by a 1S hydrogen electron (Exercise 8.5.4) (Fig. 8.10). Tabulate V (r)/(q/4πε0 a0 ) for x = 0.0(0.1)4.0. Check your calculations for r ≪ 1 and for r ≫ 1 by calculating the limiting forms given in Exercise 8.5.4. 8.5.15 Using Eqs. (5.182) and (8.75), calculate the exponential integral E1 (x) for (a) x = 0.2(0.2)1.0, (b) x = 6.0(2.0)10.0. Program your own calculation but check each value, using a library subroutine if available. Also check your calculations at each point by a Gauss–Laguerre quadrature. 8.5 Additional Readings 533 FIGURE 8.10 Distributed charge potential produced by a 1S hydrogen electron, Exercise 8.5.14. You’ll find that the power-series converges rapidly and yields high precision for small x. The asymptotic series, even for x = 10, yields relatively poor accuracy. Check values. E1 (1.0) = 0.219384 E1 (10.0) = 4.15697 × 10−6 . 8.5.16 The two expressions for E1 (x), (1) Eq. (5.182), an asymptotic series and (2) Eq. (8.75), a convergent power series, provide a means of calculating the Euler–Mascheroni constant γ to high accuracy. Using double precision, calculate γ from Eq. (8.75), with E1 (x) evaluated by Eq. (5.182). Hint. As a convenient choice take x in the range 10 to 20. (Your choice of x will set a limit on the accuracy of your result.) To minimize errors in the alternating series of Eq. (8.75), accumulate the positive and negative terms separately. ANS. For x = 10 and “double precision,” γ = 0.57721566. Additional Readings Abramowitz, M., and I. A. Stegun, eds., Handbook of Mathematical Functions with Formulas, Graphs, and Mathematical Tables (AMS-55). Washington, DC: National Bureau of Standards (1972), reprinted, Dover (1974). 
Contains a wealth of information about gamma functions, incomplete gamma functions, exponential integrals, error functions, and related functions (Chapters 4 to 6).

Artin, E., The Gamma Function (translated by M. Butler). New York: Holt, Rinehart and Winston (1964). Demonstrates that if a function f (x) is smooth (log convex) and equal to (n − 1)! when x = n = integer, it is the gamma function.

Davis, H. T., Tables of the Higher Mathematical Functions. Bloomington, IN: Principia Press (1933). Volume I contains extensive information on the gamma function and the polygamma functions.

Gradshteyn, I. S., and I. M. Ryzhik, Table of Integrals, Series, and Products. New York: Academic Press (1980).

Luke, Y. L., The Special Functions and Their Approximations, Vol. 1. New York: Academic Press (1969).

Luke, Y. L., Mathematical Functions and Their Approximations. New York: Academic Press (1975). This is an updated supplement to Handbook of Mathematical Functions with Formulas, Graphs, and Mathematical Tables (AMS-55). Chapter 1 deals with the gamma function. Chapter 4 treats the incomplete gamma function and a host of related functions.

CHAPTER 9

DIFFERENTIAL EQUATIONS

9.1 PARTIAL DIFFERENTIAL EQUATIONS

Introduction

In physics the knowledge of the force in an equation of motion usually leads to a differential equation. Thus, almost all the elementary and numerous advanced parts of theoretical physics are formulated in terms of differential equations. Sometimes these are ordinary differential equations in one variable (abbreviated ODEs). More often the equations are partial differential equations (PDEs) in two or more variables.

Let us recall from calculus that the operation of taking an ordinary or partial derivative is a linear operation (L),¹

d(aϕ(x) + bψ(x))/dx = a dϕ/dx + b dψ/dx,

for ODEs involving derivatives in one variable x only and no quadratic, (dψ/dx)², or higher powers.
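Linearity, and the resulting superposition principle for homogeneous equations, can be illustrated numerically. The Python sketch below (our helper names; a central-difference second derivative) checks that if sin kx and cos kx each satisfy the homogeneous 1-D Helmholtz-type ODE ψ″ + k²ψ = 0, so does any linear combination of them:

```python
import math

def second_derivative(f, x, h=1e-4):
    """Central-difference approximation to f''(x)."""
    return (f(x + h) - 2.0 * f(x) + f(x - h)) / (h * h)

def helmholtz_residual(f, k, x):
    """Residual of the homogeneous equation f'' + k^2 f = 0 at the point x."""
    return second_derivative(f, x) + k * k * f(x)

k = 2.0
phi = lambda x: math.sin(k * x)
psi = lambda x: math.cos(k * x)
combo = lambda x: 3.0 * phi(x) - 1.5 * psi(x)   # arbitrary coefficients a, b
```

The residual of `combo` is as small as those of `phi` and `psi` individually, which is exactly the statement L(aϕ + bψ) = aL(ϕ) + bL(ψ) with L = d²/dx² + k².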
Similarly, for partial derivations, ∂ϕ(x, y) ∂ψ(x, y) ∂(aϕ(x, y) + bψ(x, y)) =a +b . ∂x ∂x ∂x In general L(aϕ + bψ) = aL(ϕ) + bL(ψ). Thus, ODEs and PDEs appear as linear operator equations, Lψ = F, (9.1) 1 We are especially interested in linear operators because in quantum mechanics physical quantities are represented by linear operators operating in a complex, infinite-dimensional Hilbert space. 535 536 Chapter 9 Differential Equations where F is a known (source) function of one (for ODEs) or more variables (for PDEs), L is a linear combination of derivatives, and ψ is the unknown function or solution. Any linear combination of solutions is again a solution if F = 0; this is the superposition principle for homogeneous PDEs. Since the dynamics of many physical systems involve just two derivatives, for example, acceleration in classical mechanics and the kinetic energy operator, ∼ ∇ 2 , in quantum mechanics, differential equations of second order occur most frequently in physics. (Maxwell’s and Dirac’s equations are first order but involve two unknown functions. Eliminating one unknown yields a second-order differential equation for the other (compare Section 1.9).) Examples of PDEs Among the most frequently encountered PDEs are the following: 1. Laplace’s equation, ∇ 2 ψ = 0. This very common and very important equation occurs in studies of a. electromagnetic phenomena, including electrostatics, dielectrics, steady currents, and magnetostatics, b. hydrodynamics (irrotational flow of perfect fluid and surface waves), c. heat flow, d. gravitation. 2. 3. Poisson’s equation, ∇ 2 ψ = −ρ/ε0 . In contrast to the homogeneous Laplace equation, Poisson’s equation is nonhomogeneous with a source term −ρ/ε0 . The wave (Helmholtz) and time-independent diffusion equations, ∇ 2 ψ ± k 2 ψ = 0. These equations appear in such diverse phenomena as a. elastic waves in solids, including vibrating strings, bars, membranes, b. sound, or acoustics, c. electromagnetic waves, d. 
nuclear reactors.

4. The time-dependent diffusion equation,

$$\nabla^2\psi = \frac{1}{a^2}\frac{\partial\psi}{\partial t},$$

and the corresponding four-dimensional forms involving the d'Alembertian, a four-dimensional analog of the Laplacian in Minkowski space,

$$\partial^\mu\partial_\mu = \partial^2 = \frac{1}{c^2}\frac{\partial^2}{\partial t^2} - \nabla^2.$$

5. The time-dependent wave equation, ∂²ψ = 0.

6. The scalar potential equation, ∂²ψ = ρ/ε₀. Like Poisson's equation, this equation is nonhomogeneous with a source term ρ/ε₀.

7. The Klein–Gordon equation, ∂²ψ = −µ²ψ, and the corresponding vector equations, in which the scalar function ψ is replaced by a vector function. Other, more complicated forms are common.

8. The Schrödinger wave equation,

$$-\frac{\hbar^2}{2m}\nabla^2\psi + V\psi = i\hbar\frac{\partial\psi}{\partial t}$$

and

$$-\frac{\hbar^2}{2m}\nabla^2\psi + V\psi = E\psi$$

for the time-independent case.

9. The equations for elastic waves and viscous fluids and the telegraphy equation.

10. Maxwell's coupled partial differential equations for electric and magnetic fields and those of Dirac for relativistic electron wave functions. For Maxwell's equations see the Introduction and also Section 1.9.

Some general techniques for solving second-order PDEs are discussed in this section.

1. Separation of variables, where the PDE is split into ODEs that are related by common constants that appear as eigenvalues of linear operators, Lψ = lψ, usually in one variable. This method is closely related to symmetries of the PDE and a group of transformations (see Section 4.2). The Helmholtz equation, listed as example 3, has this form, where the eigenvalue k² may arise by separation of the time t from the spatial variables. Likewise, in example 8 the energy E is the eigenvalue that arises in the separation of t from r in the Schrödinger equation. This is pursued in Chapter 10 in greater detail. Section 9.2 serves as an introduction. ODEs may be attacked by Frobenius' power-series method of Section 9.5. It does not always work but is often the simplest method when it does.

2.
Conversion of a PDE into an integral equation using Green's functions applies to inhomogeneous PDEs, such as examples 2 and 6 given above. An introduction to the Green's function technique is given in Section 9.7.

3. Other analytical methods, such as the use of integral transforms, are developed and applied in Chapter 15.

Occasionally, we encounter equations of higher order. In both the theory of the slow motion of a viscous fluid and the theory of an elastic body we find the equation

$$\bigl(\nabla^2\bigr)^2\psi = 0.$$

Fortunately, these higher-order differential equations are relatively rare and are not discussed here.

Although not so frequently encountered and perhaps not so important as second-order ODEs, first-order ODEs do appear in theoretical physics and are sometimes intermediate steps for second-order ODEs. The solutions of some more important types of first-order ODEs are developed in Section 9.2. First-order PDEs can always be reduced to ODEs. This is a straightforward but lengthy process and involves a search for characteristics that are briefly introduced in what follows; for more details we refer to the literature.

Classes of PDEs and Characteristics

Second-order PDEs form three classes:
(i) elliptic PDEs involve ∇² or c⁻²∂²/∂t² + ∇²;
(ii) parabolic PDEs, a∂/∂t + ∇²;
(iii) hyperbolic PDEs, c⁻²∂²/∂t² − ∇².

These canonical operators come about by a change of variables ξ = ξ(x, y), η = η(x, y) in a linear operator (for two variables just for simplicity)

$$L = a\frac{\partial^2}{\partial x^2} + 2b\frac{\partial^2}{\partial x\partial y} + c\frac{\partial^2}{\partial y^2} + d\frac{\partial}{\partial x} + e\frac{\partial}{\partial y} + f, \qquad (9.2)$$

which can be reduced to the canonical forms (i), (ii), (iii) according to whether the discriminant D = ac − b² > 0, = 0, or < 0. If ξ(x, y) is determined from the first-order, but nonlinear, PDE

$$a\left(\frac{\partial\xi}{\partial x}\right)^2 + 2b\,\frac{\partial\xi}{\partial x}\frac{\partial\xi}{\partial y} + c\left(\frac{\partial\xi}{\partial y}\right)^2 = 0, \qquad (9.3)$$

then the coefficient of ∂²/∂ξ² in L (that is, of Eq. (9.3)) is zero. If η is an independent solution of the same Eq.
(9.3), then the coefficient of ∂²/∂η² is also zero. The remaining operator, ∂²/∂ξ∂η, in L is characteristic of the hyperbolic case (iii) with D < 0 (a = 0 = c leads to D = −b² < 0), where the quadratic form aλ² + 2bλ + c factorizes and, therefore, Eq. (9.3) has two independent solutions ξ(x, y), η(x, y). In the elliptic case (i) with D > 0, the two solutions ξ, η are complex conjugate, which, when substituted into Eq. (9.2), remove the mixed second-order derivative instead of the other second-order terms, yielding the canonical form (i). In the parabolic case (ii) with D = 0, only ∂²/∂ξ² remains in L, while the coefficients of the other two second-order derivatives vanish.

If the coefficients a, b, c in L are functions of the coordinates, then this classification is only local; that is, its type may change as the coordinates vary.

Let us illustrate the physics underlying the hyperbolic case by looking at the wave equation (in 1 + 1 dimensions for simplicity),

$$\left(\frac{1}{c^2}\frac{\partial^2}{\partial t^2} - \frac{\partial^2}{\partial x^2}\right)\psi = 0.$$

Since Eq. (9.3) now becomes

$$\left(\frac{\partial\xi}{\partial t}\right)^2 - c^2\left(\frac{\partial\xi}{\partial x}\right)^2 = \left(\frac{\partial\xi}{\partial t} - c\frac{\partial\xi}{\partial x}\right)\left(\frac{\partial\xi}{\partial t} + c\frac{\partial\xi}{\partial x}\right) = 0$$

and factorizes, we determine the solution of ∂ξ/∂t − c∂ξ/∂x = 0. This is an arbitrary function ξ = F(x + ct), and ξ = G(x − ct) solves ∂ξ/∂t + c∂ξ/∂x = 0, which is readily verified. By linear superposition a general solution of the wave equation is ψ = F(x + ct) + G(x − ct). For periodic functions F, G we recognize the lines x + ct and x − ct as the phases of plane waves or wave fronts, where not all second-order derivatives of ψ in the wave equation are well defined. Normal to the wave fronts are the rays of geometric optics. Thus, the lines that are solutions of Eq. (9.3), called characteristics or sometimes bicharacteristics (for second-order PDEs) in the mathematical literature, correspond to the wave fronts of the geometric optics solution of the wave equation.
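The superposition just described can be checked symbolically. The following sketch (SymPy is a tool choice made here, not part of the text) verifies that ψ = F(x + ct) + G(x − ct) satisfies the 1 + 1-dimensional wave equation for arbitrary twice-differentiable F and G:

```python
# Illustrative sketch: psi = F(x + c t) + G(x - c t) solves
# (1/c^2) psi_tt - psi_xx = 0 for arbitrary functions F, G.
import sympy as sp

x, t, c = sp.symbols('x t c', positive=True)
F, G = sp.Function('F'), sp.Function('G')

psi = F(x + c*t) + G(x - c*t)
wave = sp.diff(psi, t, 2)/c**2 - sp.diff(psi, x, 2)

print(sp.simplify(wave))  # 0
```

The cancellation is term by term: each of F and G contributes c²F″/c² − F″ = 0, so the result is identically zero with no condition on F or G.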
For the elliptic case let us consider Laplace's equation,

$$\frac{\partial^2\psi}{\partial x^2} + \frac{\partial^2\psi}{\partial y^2} = 0,$$

for a potential ψ of two variables. Here the characteristics equation,

$$\left(\frac{\partial\xi}{\partial x}\right)^2 + \left(\frac{\partial\xi}{\partial y}\right)^2 = \left(\frac{\partial\xi}{\partial x} + i\frac{\partial\xi}{\partial y}\right)\left(\frac{\partial\xi}{\partial x} - i\frac{\partial\xi}{\partial y}\right) = 0,$$

has complex conjugate solutions: ξ = F(x + iy) for ∂ξ/∂x + i(∂ξ/∂y) = 0 and ξ = G(x − iy) for ∂ξ/∂x − i(∂ξ/∂y) = 0. A general solution of Laplace's equation is therefore ψ = F(x + iy) + G(x − iy), as well as the real and imaginary parts of ψ, which are called harmonic functions, while polynomial solutions are called harmonic polynomials.

In quantum mechanics the Wentzel–Kramers–Brillouin (WKB) form ψ = exp(−iS/ℏ) for the solution of the Schrödinger equation, a complex parabolic PDE,

$$-\frac{\hbar^2}{2m}\nabla^2\psi + V\psi = i\hbar\frac{\partial\psi}{\partial t},$$

leads to the Hamilton–Jacobi equation of classical mechanics,

$$\frac{1}{2m}(\nabla S)^2 + V = \frac{\partial S}{\partial t}, \qquad (9.4)$$

in the limit ℏ → 0. The classical action S obeys the Hamilton–Jacobi equation, which is the analog of Eq. (9.3) of the Schrödinger equation. Substituting ∇ψ = −iψ∇S/ℏ, ∂ψ/∂t = −iψ(∂S/∂t)/ℏ into the Schrödinger equation, dropping the overall nonvanishing factor ψ, and taking the limit of the resulting equation as ℏ → 0, we indeed obtain Eq. (9.4).

Finding solutions of PDEs by solving for the characteristics is one of several general techniques. For more examples we refer to H. Bateman, Partial Differential Equations of Mathematical Physics, New York: Dover (1944); K. E. Gustafson, Partial Differential Equations and Hilbert Space Methods, 2nd ed., New York: Wiley (1987), reprinted Dover (1998).

In order to derive and better appreciate the mathematical method behind these solutions of hyperbolic, parabolic, and elliptic PDEs, let us reconsider the PDE (9.2) with constant coefficients and, at first, d = e = f = 0 for simplicity. In accordance with the form of the wave front solutions, we seek a solution ψ = F(ξ) of Eq. (9.2) with a function ξ = ξ(t, x), using the variables t, x instead of x, y.
Then the partial derivatives become

$$\frac{\partial\psi}{\partial x} = \frac{\partial\xi}{\partial x}\frac{dF}{d\xi}, \qquad \frac{\partial\psi}{\partial t} = \frac{\partial\xi}{\partial t}\frac{dF}{d\xi}, \qquad \frac{\partial^2\psi}{\partial x^2} = \left(\frac{\partial\xi}{\partial x}\right)^2\frac{d^2F}{d\xi^2} + \frac{\partial^2\xi}{\partial x^2}\frac{dF}{d\xi},$$

and

$$\frac{\partial^2\psi}{\partial x\partial t} = \frac{\partial^2\xi}{\partial x\partial t}\frac{dF}{d\xi} + \frac{\partial\xi}{\partial x}\frac{\partial\xi}{\partial t}\frac{d^2F}{d\xi^2}, \qquad \frac{\partial^2\psi}{\partial t^2} = \left(\frac{\partial\xi}{\partial t}\right)^2\frac{d^2F}{d\xi^2} + \frac{\partial^2\xi}{\partial t^2}\frac{dF}{d\xi},$$

using the chain rule of differentiation. When ξ depends on x and t linearly, these partial derivatives of ψ yield a single term only and solve our PDE (9.2) as a consequence. From the linear ξ = αx + βt we obtain

$$\frac{\partial^2\psi}{\partial x^2} = \alpha^2\frac{d^2F}{d\xi^2}, \qquad \frac{\partial^2\psi}{\partial x\partial t} = \alpha\beta\frac{d^2F}{d\xi^2}, \qquad \frac{\partial^2\psi}{\partial t^2} = \beta^2\frac{d^2F}{d\xi^2},$$

and our PDE (9.2) becomes equivalent to the analog of Eq. (9.3),

$$\bigl(\alpha^2 a + 2\alpha\beta b + \beta^2 c\bigr)\frac{d^2F}{d\xi^2} = 0. \qquad (9.5)$$

A solution of d²F/dξ² = 0 only leads to the trivial ψ = k₁x + k₂t + k₃ with constant kᵢ that is linear in the coordinates and for which all second derivatives vanish. From α²a + 2αβb + β²c = 0, on the other hand, we get the ratios

$$\frac{\beta}{\alpha} \equiv r_{1,2} = \frac{1}{c}\Bigl[-b \pm \bigl(b^2 - ac\bigr)^{1/2}\Bigr] \qquad (9.6)$$

as solutions of Eq. (9.5) with d²F/dξ² ≠ 0 in general. The lines ξ₁ = x + r₁t and ξ₂ = x + r₂t will solve the PDE (9.2), with ψ(x, t) = F(ξ₁) + G(ξ₂) corresponding to the generalization of our previous hyperbolic and elliptic PDE examples.

For the parabolic case, where b² = ac, there is only one ratio from Eq. (9.6), β/α = r = −b/c, and one solution, ψ(x, t) = F(x − bt/c). In order to find the second general solution of our PDE (9.2) we make the Ansatz (trial solution) ψ(x, t) = ψ₀(x, t) · G(x − bt/c). Substituting this into Eq. (9.2) we find

$$a\frac{\partial^2\psi_0}{\partial x^2} + 2b\frac{\partial^2\psi_0}{\partial x\partial t} + c\frac{\partial^2\psi_0}{\partial t^2} = 0$$

for ψ₀ since, upon replacing F → G, G solves Eq. (9.5) with d²G/dξ² ≠ 0 in general. The solution ψ₀ can be any solution of our PDE (9.2), including the trivial ones such as ψ₀ = x and ψ₀ = t. Thus we obtain the general parabolic solution,

$$\psi(x, t) = F\left(x - \frac{b}{c}t\right) + \psi_0(x, t)\,G\left(x - \frac{b}{c}t\right),$$

with ψ₀ = x or ψ₀ = t, etc.
With the same Ansatz one finds solutions of our PDE (9.2) with a source term, for example, f ≠ 0, but still d = e = 0 and constant a, b, c.

Next we determine the characteristics, that is, curves where the second-order derivatives of the solution ψ are not well defined. These are the wave fronts along which the solutions of our hyperbolic PDE (9.2) propagate. We solve our PDE with a source term f ≠ 0 and Cauchy boundary conditions (see Table 9.1) that are appropriate for hyperbolic PDEs, where ψ and its normal derivative ∂ψ/∂n are specified on an open curve C: x = x(s), t = t(s), with the parameter s the length on C. Then dr = (dx, dt) is tangent and n̂ ds = (dt, −dx) is normal to the curve C, and the first-order tangential and normal derivatives are given by the chain rule,

$$\frac{d\psi}{ds} = \nabla\psi\cdot\frac{d\mathbf{r}}{ds} = \frac{\partial\psi}{\partial x}\frac{dx}{ds} + \frac{\partial\psi}{\partial t}\frac{dt}{ds},$$

$$\frac{d\psi}{dn} = \nabla\psi\cdot\hat{\mathbf{n}} = \frac{\partial\psi}{\partial x}\frac{dt}{ds} - \frac{\partial\psi}{\partial t}\frac{dx}{ds}.$$

From these two linear equations, ∂ψ/∂t and ∂ψ/∂x can be determined on C, provided

$$\begin{vmatrix} \dfrac{dx}{ds} & \dfrac{dt}{ds} \\[4pt] \dfrac{dt}{ds} & -\dfrac{dx}{ds} \end{vmatrix} = -\left(\frac{dx}{ds}\right)^2 - \left(\frac{dt}{ds}\right)^2 \neq 0.$$

For the second derivatives we use the chain rule again:

$$\frac{d}{ds}\frac{\partial\psi}{\partial x} = \frac{dx}{ds}\frac{\partial^2\psi}{\partial x^2} + \frac{dt}{ds}\frac{\partial^2\psi}{\partial x\partial t}, \qquad (9.7a)$$

$$\frac{d}{ds}\frac{\partial\psi}{\partial t} = \frac{dx}{ds}\frac{\partial^2\psi}{\partial x\partial t} + \frac{dt}{ds}\frac{\partial^2\psi}{\partial t^2}. \qquad (9.7b)$$

From our PDE (9.2) and Eqs. (9.7a,b), which are linear in the second-order derivatives, the second-order derivatives cannot be calculated when the determinant vanishes, that is,

$$\begin{vmatrix} a & 2b & c \\[2pt] \dfrac{dx}{ds} & \dfrac{dt}{ds} & 0 \\[4pt] 0 & \dfrac{dx}{ds} & \dfrac{dt}{ds} \end{vmatrix} = a\left(\frac{dt}{ds}\right)^2 - 2b\frac{dx}{ds}\frac{dt}{ds} + c\left(\frac{dx}{ds}\right)^2 = 0. \qquad (9.8)$$

From Eq. (9.8), which defines the characteristics, we find that the tangent ratio dx/dt obeys

$$c\left(\frac{dx}{dt}\right)^2 - 2b\,\frac{dx}{dt} + a = 0,$$

so

$$\frac{dx}{dt} = \frac{1}{c}\Bigl[b \pm \bigl(b^2 - ac\bigr)^{1/2}\Bigr]. \qquad (9.9)$$

For the earlier hyperbolic wave (and elliptic potential) equation examples, b = 0 and a, c are constants, so the solutions ξᵢ = x + trᵢ from Eq. (9.6) coincide with the characteristics of Eq. (9.9).

Nonlinear PDEs

Nonlinear ODEs and PDEs are a rapidly growing and important field.
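As a quick sketch (not from the text, and assuming the coefficient assignment of Eq. (9.8), a multiplying ∂²/∂x² and c multiplying ∂²/∂t²), Eq. (9.9) applied to the wave equation (1/c₀²)ψ_tt − ψ_xx = 0 should reproduce the light-cone characteristics dx/dt = ±c₀:

```python
# Illustrative sketch: characteristics of the 1+1 wave equation from Eq. (9.9).
# Here a = -1, b = 0, c = 1/c0**2 (the wave speed is renamed c0 to avoid
# clashing with the coefficient c of Eq. (9.2)).
import sympy as sp

c0 = sp.symbols('c0', positive=True)   # wave speed
a, b, c = -1, 0, 1/c0**2
slopes = [sp.simplify((b + s*sp.sqrt(b**2 - a*c))/c) for s in (1, -1)]
print(slopes)  # [c0, -c0]
```

The two slopes dx/dt = ±c₀ correspond to the wave fronts x ∓ c₀t = constant found earlier.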
We encountered earlier the simplest linear wave equation,

$$\frac{\partial\psi}{\partial t} + c\frac{\partial\psi}{\partial x} = 0,$$

as the first-order PDE of the wave fronts of the wave equation. The simplest nonlinear wave equation,

$$\frac{\partial\psi}{\partial t} + c(\psi)\frac{\partial\psi}{\partial x} = 0, \qquad (9.10)$$

results if the local speed of propagation, c, is not constant but depends on the wave ψ. When a nonlinear equation has a solution of the form ψ(x, t) = A cos(kx − ωt), where ω(k) varies with k so that ω″(k) ≠ 0, then it is called dispersive. Perhaps the best-known nonlinear dispersive equation is the Korteweg–deVries equation,

$$\frac{\partial\psi}{\partial t} + \psi\frac{\partial\psi}{\partial x} + \frac{\partial^3\psi}{\partial x^3} = 0, \qquad (9.11)$$

which models the lossless propagation of shallow water waves and other phenomena. It is widely known for its soliton solutions. A soliton is a traveling wave with the property of persisting through an interaction with another soliton: After they pass through each other, they emerge in the same shape and with the same velocity and acquire no more than a phase shift. Let ψ(ξ = x − ct) be such a traveling wave. When substituted into Eq. (9.11) this yields the nonlinear ODE

$$(\psi - c)\frac{d\psi}{d\xi} + \frac{d^3\psi}{d\xi^3} = 0, \qquad (9.12)$$

which can be integrated to yield

$$\frac{d^2\psi}{d\xi^2} = c\psi - \frac{\psi^2}{2}. \qquad (9.13)$$

There is no additive integration constant in Eq. (9.13), to ensure that d²ψ/dξ² → 0 with ψ → 0 for large ξ, so ψ is localized at the characteristic ξ = 0, or x = ct. Multiplying Eq. (9.13) by dψ/dξ and integrating again yields

$$\left(\frac{d\psi}{d\xi}\right)^2 = c\psi^2 - \frac{\psi^3}{3}, \qquad (9.14)$$

where dψ/dξ → 0 for large ξ. Taking the root of Eq. (9.14) and integrating once more yields the soliton solution

$$\psi(x - ct) = \frac{3c}{\cosh^2\bigl[\sqrt{c}\,(x - ct)/2\bigr]}. \qquad (9.15)$$

Some nonlinear topics, for example, the logistic equation and the onset of chaos, are reviewed in Chapter 18. For more details and literature, see J. Guckenheimer, P. Holmes, and F. John, Nonlinear Oscillations, Dynamical Systems and Bifurcations of Vector Fields, rev. ed., New York: Springer-Verlag (1990).
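The soliton (9.15) can be checked directly against the Korteweg–deVries equation. The following sketch (SymPy is a tool choice made here) substitutes Eq. (9.15) into Eq. (9.11) and simplifies the residual to zero:

```python
# Illustrative sketch: verify that psi = 3c / cosh^2[sqrt(c)(x - c t)/2],
# Eq. (9.15), satisfies the KdV equation psi_t + psi*psi_x + psi_xxx = 0.
import sympy as sp

x, t, c = sp.symbols('x t c', positive=True)
psi = 3*c / sp.cosh(sp.sqrt(c)*(x - c*t)/2)**2

kdv = sp.diff(psi, t) + psi*sp.diff(psi, x) + sp.diff(psi, x, 3)
residual = sp.simplify(kdv.rewrite(sp.exp))  # exponential form simplifies cleanly
print(residual)  # 0
```

Rewriting the hyperbolic functions in exponentials turns the residual into a rational expression that cancels identically, confirming the hand calculation from Eqs. (9.12) to (9.14).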
Boundary Conditions

Usually, when we know a physical system at some time and the law governing the physical process, then we are able to predict the subsequent development. Such initial values are the most common boundary conditions associated with ODEs and PDEs. Finding solutions that match given points, curves, or surfaces corresponds to boundary value problems. Solutions usually are required to satisfy certain imposed (for example, asymptotic) boundary conditions. These boundary conditions may take three forms:

1. Cauchy boundary conditions. The value of a function and its normal derivative are specified on the boundary. In electrostatics this would mean ϕ, the potential, and Eₙ, the normal component of the electric field.

2. Dirichlet boundary conditions. The value of a function is specified on the boundary.

3. Neumann boundary conditions. The normal derivative (normal gradient) of a function is specified on the boundary. In the electrostatic case this would be Eₙ and therefore σ, the surface charge density.

A summary of the relation of these three types of boundary conditions to the three types of two-dimensional partial differential equations is given in Table 9.1.

Table 9.1 Boundary conditions (by type of partial differential equation)

| Boundary conditions | Elliptic (Laplace, Poisson in (x, y)) | Hyperbolic (Wave equation in (x, t)) | Parabolic (Diffusion equation in (x, t)) |
|---|---|---|---|
| Cauchy, open surface | Unphysical results (instability) | Unique, stable solution | Too restrictive |
| Cauchy, closed surface | Too restrictive | Too restrictive | Too restrictive |
| Dirichlet, open surface | Insufficient | Insufficient | Unique, stable solution in one direction |
| Dirichlet, closed surface | Unique, stable solution | Solution not unique | Too restrictive |
| Neumann, open surface | Insufficient | Insufficient | Unique, stable solution in one direction |
| Neumann, closed surface | Unique, stable solution | Solution not unique | Too restrictive |
For extended discussions of these partial differential equations the reader may consult Morse and Feshbach, Chapter 6 (see Additional Readings).

Parts of Table 9.1 are simply a matter of maintaining internal consistency or of common sense. For instance, for Poisson's equation with a closed surface, Dirichlet conditions lead to a unique, stable solution. Neumann conditions, independent of the Dirichlet conditions, likewise lead to a unique, stable solution independent of the Dirichlet solution. Therefore Cauchy boundary conditions (meaning Dirichlet plus Neumann) could lead to an inconsistency.

The term boundary conditions includes as a special case the concept of initial conditions. For instance, specifying the initial position x₀ and the initial velocity v₀ in some dynamical problem would correspond to the Cauchy boundary conditions. The only difference in the present usage of boundary conditions in these one-dimensional problems is that we are going to apply the conditions on both ends of the allowed range of the variable.

9.2 FIRST-ORDER DIFFERENTIAL EQUATIONS

Physics involves some first-order differential equations. For completeness (and review) it seems desirable to touch on them briefly. We consider here differential equations of the general form

$$\frac{dy}{dx} = f(x, y) = -\frac{P(x, y)}{Q(x, y)}. \qquad (9.16)$$

Equation (9.16) is clearly a first-order, ordinary differential equation. It is first order because it contains the first and no higher derivatives. It is ordinary because the only derivative, dy/dx, is an ordinary, or total, derivative. Equation (9.16) may or may not be linear, although we shall treat the linear case explicitly later, in Eq. (9.25).

Separable Variables

Frequently Eq. (9.16) will have the special form

$$\frac{dy}{dx} = f(x, y) = -\frac{P(x)}{Q(y)}. \qquad (9.17)$$

Then it may be rewritten as

$$P(x)\,dx + Q(y)\,dy = 0.$$

Integrating from (x₀, y₀) to (x, y) yields

$$\int_{x_0}^{x} P(x)\,dx + \int_{y_0}^{y} Q(y)\,dy = 0.$$
Since the lower limits, x₀ and y₀, contribute constants, we may ignore the lower limits of integration and simply add a constant of integration. Note that this separation of variables technique does not require that the differential equation be linear.

Example 9.2.1 PARACHUTIST

We want to find the velocity of the falling parachutist as a function of time and are particularly interested in the constant limiting velocity, v₀, that comes about by air drag, taken to be quadratic, −bv², and opposing the force of the gravitational attraction, mg, of the Earth. We choose a coordinate system in which the positive direction is downward so that the gravitational force is positive. For simplicity we assume that the parachute opens immediately, that is, at time t = 0, where v(t = 0) = 0, our initial condition. Newton's law applied to the falling parachutist gives

$$m\dot{v} = mg - bv^2,$$

where m includes the mass of the parachute. The terminal velocity, v₀, can be found from the equation of motion as t → ∞; when there is no acceleration, v̇ = 0, so

$$bv_0^2 = mg, \qquad \text{or} \qquad v_0 = \sqrt{\frac{mg}{b}}.$$

The variables t and v separate,

$$\frac{dv}{g - \frac{b}{m}v^2} = dt,$$

which we integrate by decomposing the denominator into partial fractions. The roots of the denominator are at v = ±v₀. Hence

$$\left(g - \frac{b}{m}v^2\right)^{-1} = \frac{m}{2v_0 b}\left[\frac{1}{v + v_0} - \frac{1}{v - v_0}\right].$$

Integrating both terms yields

$$\int_0^v \frac{dV}{g - \frac{b}{m}V^2} = \frac{1}{2}\sqrt{\frac{m}{gb}}\,\ln\frac{v_0 + v}{v_0 - v} = t.$$

Solving for the velocity yields

$$v = v_0\,\frac{e^{2t/T} - 1}{e^{2t/T} + 1} = v_0\,\frac{\sinh(t/T)}{\cosh(t/T)} = v_0\tanh\frac{t}{T},$$

where T = √(m/(gb)) is the time constant governing the asymptotic approach of the velocity to the limiting velocity, v₀.

Putting in numerical values, g = 9.8 m/s² and taking b = 700 kg/m, m = 70 kg, gives v₀ = √(9.8/10) ∼ 1 m/s ∼ 3.6 km/h ∼ 2.23 mi/h, the walking speed of a pedestrian at landing, and T = √(m/(bg)) = 1/√(10 · 9.8) ∼ 0.1 s. Thus, the constant speed v₀ is reached within a second.
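As a numerical cross-check (an illustrative sketch, not part of the text; the step size and integration span are choices made here), the closed form v(t) = v₀ tanh(t/T) can be compared against direct Euler integration of the equation of motion with the example's values:

```python
# Illustrative sketch: Euler-integrate m dv/dt = m*g - b*v**2 with v(0) = 0
# and compare with the closed form v(t) = v0*tanh(t/T) from Example 9.2.1.
import math

g, b, m = 9.8, 700.0, 70.0        # SI values from the example
v0 = math.sqrt(m * g / b)         # terminal velocity, about 0.99 m/s
T = math.sqrt(m / (g * b))        # time constant, about 0.1 s

dt = 1.0e-5                       # step size chosen for this sketch
v = 0.0                           # initial condition v(0) = 0
for _ in range(int(0.5 / dt)):    # integrate out to t = 0.5 s (about 5 T)
    v += dt * (g - (b / m) * v * v)

exact = v0 * math.tanh(0.5 / T)
print(v, exact)                   # both about 0.99 m/s
```

After five time constants the numerical and analytic velocities agree to within the Euler truncation error, and both have essentially reached v₀, consistent with the remark that the constant speed is reached within a second.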
Finally, because it is always important to check the solution, we verify that our solution satisfies

$$\dot{v} = \frac{v_0}{T}\frac{1}{\cosh^2(t/T)} = \frac{v_0}{T} - \frac{v_0}{T}\frac{\sinh^2(t/T)}{\cosh^2(t/T)} = \frac{v_0}{T} - \frac{v^2}{T v_0} = g - \frac{b}{m}v^2,$$

that is, Newton's equation of motion. The more realistic case, where the parachutist is in free fall with an initial speed vᵢ = v(0) > 0 before the parachute opens, is addressed in Exercise 9.2.18. ■

Exact Differential Equations

We rewrite Eq. (9.16) as

$$P(x, y)\,dx + Q(x, y)\,dy = 0. \qquad (9.18)$$

This equation is said to be exact if we can match the left-hand side of it to a differential dϕ,

$$d\varphi = \frac{\partial\varphi}{\partial x}dx + \frac{\partial\varphi}{\partial y}dy. \qquad (9.19)$$

Since Eq. (9.18) has a zero on the right, we look for an unknown function ϕ(x, y) = constant and dϕ = 0. We have (if such a function ϕ(x, y) exists)

$$P(x, y)\,dx + Q(x, y)\,dy = \frac{\partial\varphi}{\partial x}dx + \frac{\partial\varphi}{\partial y}dy \qquad (9.20a)$$

and

$$\frac{\partial\varphi}{\partial x} = P(x, y), \qquad \frac{\partial\varphi}{\partial y} = Q(x, y). \qquad (9.20b)$$

The necessary and sufficient condition for our equation to be exact is that the second, mixed partial derivatives of ϕ(x, y) (assumed continuous) are independent of the order of differentiation:

$$\frac{\partial^2\varphi}{\partial y\partial x} = \frac{\partial P(x, y)}{\partial y} = \frac{\partial Q(x, y)}{\partial x} = \frac{\partial^2\varphi}{\partial x\partial y}. \qquad (9.21)$$

Note the resemblance to Eqs. (1.133a) of Section 1.13, "Potential Theory." If Eq. (9.18) corresponds to a curl (equal to zero), then a potential, ϕ(x, y), must exist.

If ϕ(x, y) exists, then from Eqs. (9.18) and (9.20a) our solution is

$$\varphi(x, y) = C.$$

We may construct ϕ(x, y) from its partial derivatives just as we constructed a magnetic vector potential in Section 1.13 from its curl. See Exercises 9.2.7 and 9.2.8.

It may well turn out that Eq. (9.18) is not exact and that Eq. (9.21) is not satisfied. However, there always exists at least one and perhaps an infinity of integrating factors, α(x, y), such that

$$\alpha(x, y)P(x, y)\,dx + \alpha(x, y)Q(x, y)\,dy = 0$$

is exact. Unfortunately, an integrating factor is not always obvious or easy to find.
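The exactness test, Eq. (9.21), and the construction of ϕ from its partial derivatives can be sketched for a concrete pair P, Q (chosen here purely for illustration, not taken from the text):

```python
# Illustrative sketch: check dP/dy = dQ/dx (Eq. (9.21)) for
# P = 2*x*y + 1, Q = x**2 + 3*y**2, then build phi with phi_x = P, phi_y = Q.
import sympy as sp

x, y = sp.symbols('x y')
P = 2*x*y + 1
Q = x**2 + 3*y**2

print(sp.diff(P, y), sp.diff(Q, x))         # both 2*x, so the equation is exact

phi = sp.integrate(P, x)                    # x**2*y + x, up to a function of y
phi += sp.integrate(Q - sp.diff(phi, y), y) # supply the missing y-dependence
print(phi)                                  # x**2*y + x + y**3
```

The solution of P dx + Q dy = 0 for this example is then ϕ(x, y) = x²y + x + y³ = C, which one can confirm by differentiating ϕ and recovering P and Q.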
Unlike the case of the linear first-order differential equation to be considered next, there is no systematic way to develop an integrating factor for Eq. (9.18). A differential equation in which the variables have been separated is automatically exact. An exact differential equation is not necessarily separable.

The wave front method of Section 9.1 also works for a first-order PDE,

$$a(x, y)\frac{\partial\psi}{\partial x} + b(x, y)\frac{\partial\psi}{\partial y} = 0. \qquad (9.22a)$$

We look for a solution of the form ψ = F(ξ), where ξ(x, y) = constant for varying x and y defines the wave front. Hence

$$d\xi = \frac{\partial\xi}{\partial x}dx + \frac{\partial\xi}{\partial y}dy = 0, \qquad (9.22b)$$

while the PDE yields

$$\left(a\frac{\partial\xi}{\partial x} + b\frac{\partial\xi}{\partial y}\right)\frac{dF}{d\xi} = 0 \qquad (9.23a)$$

with dF/dξ ≠ 0 in general. Comparing Eqs. (9.22b) and (9.23a) yields

$$\frac{dx}{a} = \frac{dy}{b}, \qquad (9.23b)$$

which reduces the PDE to a first-order ODE for the tangent dy/dx of the wave front function ξ(x, y). When there is an additional source term in the PDE,

$$a\frac{\partial\psi}{\partial x} + b\frac{\partial\psi}{\partial y} + c\psi = 0, \qquad (9.23c)$$

then we use the Ansatz ψ = ψ₀(x, y)F(ξ), which converts our PDE to

$$F\left(a\frac{\partial\psi_0}{\partial x} + b\frac{\partial\psi_0}{\partial y} + c\psi_0\right) + \psi_0\left(a\frac{\partial\xi}{\partial x} + b\frac{\partial\xi}{\partial y}\right)\frac{dF}{d\xi} = 0. \qquad (9.24)$$

If we can guess a solution ψ₀ of Eq. (9.23c), then Eq. (9.24) reduces to our previous equation, Eq. (9.23a), from which the ODE of Eq. (9.23b) follows.

Linear First-Order ODEs

If f(x, y) in Eq. (9.16) has the form −p(x)y + q(x), then Eq. (9.16) becomes

$$\frac{dy}{dx} + p(x)y = q(x). \qquad (9.25)$$

Equation (9.25) is the most general linear first-order ODE. If q(x) = 0, Eq. (9.25) is homogeneous (in y). A nonzero q(x) may represent a source or a driving term. Equation (9.25) is linear; each term is linear in y or dy/dx. There are no higher powers, that is, y², and no products, y(dy/dx). Note that the linearity refers to the y and dy/dx; p(x) and q(x) need not be linear in x. Equation (9.25), the most important of these first-order ODEs for physics, may be solved exactly.
Let us look for an integrating factor α(x) so that

$$\alpha(x)\frac{dy}{dx} + \alpha(x)p(x)y = \alpha(x)q(x) \qquad (9.26)$$

may be rewritten as

$$\frac{d}{dx}\bigl[\alpha(x)y\bigr] = \alpha(x)q(x). \qquad (9.27)$$

The purpose of this is to make the left-hand side of Eq. (9.25) a derivative so that it can be integrated by inspection. It also, incidentally, makes Eq. (9.25) exact. Expanding Eq. (9.27), we obtain

$$\alpha(x)\frac{dy}{dx} + \frac{d\alpha}{dx}y = \alpha(x)q(x).$$

Comparison with Eq. (9.26) shows that we must require

$$\frac{d\alpha}{dx} = \alpha(x)p(x). \qquad (9.28)$$

Here is a differential equation for α(x), with the variables α and x separable. We separate variables, integrate, and obtain

$$\alpha(x) = \exp\left[\int^x p(x)\,dx\right] \qquad (9.29)$$

as our integrating factor. With α(x) known we proceed to integrate Eq. (9.27). This, of course, was the point of introducing α in the first place. We have

$$\int^x \frac{d}{dx}\bigl[\alpha(x)y(x)\bigr]\,dx = \int^x \alpha(x)q(x)\,dx.$$

Now integrating by inspection, we have

$$\alpha(x)y(x) = \int^x \alpha(x)q(x)\,dx + C.$$

The constants from a constant lower limit of integration are lumped into the constant C. Dividing by α(x), we obtain

$$y(x) = \bigl[\alpha(x)\bigr]^{-1}\left\{\int^x \alpha(x)q(x)\,dx + C\right\}.$$

Finally, substituting in Eq. (9.29) for α yields

$$y(x) = \exp\left[-\int^x p(t)\,dt\right]\left\{\int^x \exp\left[\int^s p(t)\,dt\right]q(s)\,ds + C\right\}. \qquad (9.30)$$

Here the (dummy) variables of integration have been rewritten to make them unambiguous. Equation (9.30) is the complete general solution of the linear, first-order differential equation, Eq. (9.25). The portion

$$y_1(x) = C\exp\left[-\int^x p(t)\,dt\right] \qquad (9.31)$$

corresponds to the case q(x) = 0 and is a general solution of the homogeneous differential equation. The other term in Eq. (9.30),

$$y_2(x) = \exp\left[-\int^x p(t)\,dt\right]\int^x \exp\left[\int^s p(t)\,dt\right]q(s)\,ds, \qquad (9.32)$$

is a particular solution corresponding to the specific source term q(x). Note that if our linear first-order differential equation is homogeneous (q = 0), then it is separable. Otherwise, apart from special cases such as p = constant, q = constant, and q(x) = ap(x), Eq. (9.25) is not separable.
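The recipe of Eqs. (9.29) and (9.30) can be sketched for a concrete case (the choice p = 1/x, q = x² is made here for illustration, not taken from the text): the integrating factor is α(x) = exp(∫ dx/x) = x, so (xy)′ = x³ and y = x³/4 + C/x.

```python
# Illustrative sketch of Eqs. (9.29)-(9.30) for y' + y/x = x**2.
import sympy as sp

x, C = sp.symbols('x C', positive=True)
p, q = 1/x, x**2

alpha = sp.exp(sp.integrate(p, x))         # Eq. (9.29): alpha = x
y = (sp.integrate(alpha*q, x) + C)/alpha   # Eq. (9.30)
print(sp.simplify(y))                      # C/x + x**3/4

# confirm that y solves y' + p*y = q
print(sp.simplify(sp.diff(y, x) + p*y - q))  # 0
```

The C/x piece is the homogeneous solution of Eq. (9.31) and x³/4 the particular solution of Eq. (9.32), matching the decomposition in the text.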
Let us summarize this solution of the inhomogeneous ODE in terms of a method called variation of the constant as follows. In the first step, we solve the homogeneous ODE by separation of variables as before, giving

$$\frac{y'}{y} = -p, \qquad \ln y = -\int^x p(X)\,dX + \ln C, \qquad y(x) = C e^{-\int^x p(X)\,dX}.$$

In the second step, we let the integration constant become x-dependent, that is, C → C(x). This is the "variation of the constant" used to solve the inhomogeneous ODE. Differentiating y(x) we obtain

$$y' = -pCe^{-\int^x p(X)\,dX} + C'(x)e^{-\int^x p(X)\,dX} = -py(x) + C'(x)e^{-\int^x p(X)\,dX}.$$

Comparing with the inhomogeneous ODE we find the ODE for C:

$$C'e^{-\int^x p(X)\,dX} = q, \qquad \text{or} \qquad C(x) = \int^x e^{\int^X p(Y)\,dY}q(X)\,dX.$$

Substituting this C into $y = C(x)e^{-\int^x p(X)\,dX}$ reproduces Eq. (9.32).

Example 9.2.2 RL CIRCUIT

For a resistance-inductance circuit Kirchhoff's law leads to

$$L\frac{dI(t)}{dt} + RI(t) = V(t)$$

for the current I(t), where L is the inductance and R is the resistance, both constant. V(t) is the time-dependent input voltage.

From Eq. (9.29) our integrating factor α(t) is

$$\alpha(t) = \exp\left[\int^t \frac{R}{L}\,dt\right] = e^{Rt/L}.$$

Then by Eq. (9.30),

$$I(t) = e^{-Rt/L}\left[\int^t e^{Rt/L}\frac{V(t)}{L}\,dt + C\right],$$

with the constant C to be determined by an initial condition (a boundary condition). For the special case V(t) = V₀, a constant,

$$I(t) = e^{-Rt/L}\left[\frac{V_0}{L}\cdot\frac{L}{R}e^{Rt/L} + C\right] = \frac{V_0}{R} + Ce^{-Rt/L}.$$

If the initial condition is I(0) = 0, then C = −V₀/R and

$$I(t) = \frac{V_0}{R}\bigl(1 - e^{-Rt/L}\bigr). \qquad \blacksquare$$

Now we prove the theorem that the solution of the inhomogeneous ODE is unique up to an arbitrary multiple of the solution of the homogeneous ODE. To show this, suppose y₁, y₂ both solve the inhomogeneous ODE, Eq. (9.25); then

$$y_1' - y_2' + p(x)(y_1 - y_2) = 0$$

follows by subtracting the ODEs and says that y₁ − y₂ is a solution of the homogeneous ODE. The solution of the homogeneous ODE can always be multiplied by an arbitrary constant.

We also prove the theorem that a first-order homogeneous ODE has only one linearly independent solution.
This is meant in the following sense. If two solutions are linearly dependent, by definition they satisfy ay₁(x) + by₂(x) = 0 with nonzero constants a, b for all values of x. If the only solution of this linear relation is a = 0 = b, then our solutions y₁ and y₂ are said to be linearly independent.

To prove this theorem, suppose y₁, y₂ both solve the homogeneous ODE. Then

$$\frac{y_1'}{y_1} = -p(x) = \frac{y_2'}{y_2}$$

implies

$$W(x) \equiv y_1'y_2 - y_1y_2' \equiv 0. \qquad (9.33)$$

The functional determinant W is called the Wronskian of the pair y₁, y₂. We now show that W ≡ 0 is the condition for them to be linearly dependent. Assuming linear dependence, that is,

$$ay_1(x) + by_2(x) = 0$$

with nonzero constants a, b for all values of x, we differentiate this linear relation to get another linear relation,

$$ay_1'(x) + by_2'(x) = 0.$$

The condition for these two homogeneous linear equations in the unknowns a, b to have a nontrivial solution is that their determinant be zero, which is W = 0. Conversely, from W = 0, there follows linear dependence, because we can find a nontrivial solution of the relation

$$\frac{y_1'}{y_1} = \frac{y_2'}{y_2}$$

by integration, which gives

$$\ln y_1 = \ln y_2 + \ln C, \qquad \text{or} \qquad y_1 = Cy_2.$$

Linear dependence and the Wronskian are generalized to three or more functions in Section 9.6.

Exercises

9.2.1 From Kirchhoff's law the current I in an RC (resistance–capacitance) circuit (Fig. 9.1) obeys the equation

$$R\frac{dI}{dt} + \frac{1}{C}I = 0.$$

(a) Find I(t).
(b) For a capacitance of 10,000 µF charged to 100 V and discharging through a resistance of 1 MΩ, find the current I for t = 0 and for t = 100 seconds.

Note. The initial voltage is I₀R or Q/C, where Q = ∫₀^∞ I(t) dt.

9.2.2 The Laplace transform of Bessel's equation (n = 0) leads to

$$\bigl(s^2 + 1\bigr)f'(s) + sf(s) = 0.$$

Solve for f(s).
Derive the solution   t −1 , N (t) = N0 1 + τ0 where τ0 = (kN0 )−1 . This implies an infinite population at t = −τ0 . 9.2.4 The rate of a particular chemical reaction A + B → C is proportional to the concentrations of the reactants A and B:   dC(t) = α A(0) − C(t) B(0) − C(t) . dt (a) (b) Find C(t) for A(0) = B(0). Find C(t) for A(0) = B(0). The initial condition is that C(0) = 0. 9.2.5 A boat, coasting through the water, experiences a resisting force proportional to v n , v being the boat’s instantaneous velocity. Newton’s second law leads to m dv = −kv n . dt With v(t = 0) = v0 , x(t = 0) = 0, integrate to find v as a function of time and v as a function of distance. 9.2.6 In the first-order differential equation dy/dx = f (x, y) the function f (x, y) is a function of the ratio y/x: dy = g(y/x). dx Show that the substitution of u = y/x leads to a separable equation in u and x. 552 Chapter 9 Differential Equations 9.2.7 The differential equation P (x, y) dx + Q(x, y) dy = 0 is exact. Construct a solution x ϕ(x, y) = P (x, y) dx + x0 9.2.8 y y0 Q(x0 , y) dy = constant. The differential equation P (x, y) dx + Q(x, y) dy = 0 is exact. If ϕ(x, y) = x x0 P (x, y) dx + y Q(x0 , y) dy, y0 show that ∂ϕ = P (x, y), ∂x ∂ϕ = Q(x, y). ∂y Hence ϕ(x, y) = constant is a solution of the original differential equation. 9.2.9 Prove that Eq. (9.26) is exact in the sense of Eq. (9.21), provided that α(x) satisfies Eq. (9.28). 9.2.10 A certain differential equation has the form f (x) dx + g(x)h(y) dy = 0, with none of the functions f (x), g(x), h(y) identically zero. Show that a necessary and sufficient condition for this equation to be exact is that g(x) = constant. 9.2.11 Show that y(x) = exp − x p(t) dt  x s   exp p(t) dt q(s) ds + C is a solution of dy + p(x)y(x) = q(x) dx by differentiating the expression for y(x) and substituting into the differential equation. 
9.2.12 The motion of a body falling in a resisting medium may be described by

$$m\frac{dv}{dt} = mg - bv$$

when the retarding force is proportional to the velocity, v. Find the velocity. Evaluate the constant of integration by demanding that v(0) = 0.

9.2.13 Radioactive nuclei decay according to the law

$$\frac{dN}{dt} = -\lambda N,$$

N being the concentration of a given nuclide and λ, the particular decay constant. In a radioactive series of n different nuclides, starting with N₁,

$$\frac{dN_1}{dt} = -\lambda_1 N_1, \qquad \frac{dN_2}{dt} = \lambda_1 N_1 - \lambda_2 N_2, \quad \text{and so on.}$$

Find N₂(t) for the conditions N₁(0) = N₀ and N₂(0) = 0.

9.2.14 The rate of evaporation from a particular spherical drop of liquid (constant density) is proportional to its surface area. Assuming this to be the sole mechanism of mass loss, find the radius of the drop as a function of time.

9.2.15 In the linear homogeneous differential equation

$$\frac{dv}{dt} = -av$$

the variables are separable. When the variables are separated, the equation is exact. Solve this differential equation subject to v(0) = v₀ by the following three methods:

(a) Separating variables and integrating.
(b) Treating the separated variable equation as exact.
(c) Using the result for a linear homogeneous differential equation.

ANS. v(t) = v₀e^{−at}.

9.2.16 Bernoulli's equation,

$$\frac{dy}{dx} + f(x)y = g(x)y^n,$$

is nonlinear for n ≠ 0 or 1. Show that the substitution u = y^{1−n} reduces Bernoulli's equation to a linear equation. (See Section 18.4.)

ANS. $\dfrac{du}{dx} + (1 - n)f(x)u = (1 - n)g(x).$

9.2.17 Solve the linear, first-order equation, Eq. (9.25), by assuming y(x) = u(x)v(x), where v(x) is a solution of the corresponding homogeneous equation [q(x) = 0]. This is the method of variation of parameters due to Lagrange. We apply it to second-order equations in Exercise 9.6.25.

9.2.18 (a) Solve Example 9.2.1 for an initial velocity vᵢ = 60 mi/h, when the parachute opens. Find v(t).
(b) For a skydiver in free fall use the friction coefficient b = 0.25 kg/m and mass m = 70 kg. What is the limiting velocity in this case?

9.3 SEPARATION OF VARIABLES

The equations of mathematical physics listed in Section 9.1 are all partial differential equations. Our first technique for their solution splits the partial differential equation of n variables into n ordinary differential equations. Each separation introduces an arbitrary constant of separation. If we have n variables, we have to introduce n − 1 constants, determined by the conditions imposed in the problem being solved.

Cartesian Coordinates

In Cartesian coordinates the Helmholtz equation becomes
\[ \frac{\partial^2\psi}{\partial x^2} + \frac{\partial^2\psi}{\partial y^2} + \frac{\partial^2\psi}{\partial z^2} + k^2\psi = 0, \tag{9.34} \]
using Eq. (2.27) for the Laplacian. For the present let k² be a constant. Perhaps the simplest way of treating a partial differential equation such as Eq. (9.34) is to split it into a set of ordinary differential equations. This may be done as follows. Let
\[ \psi(x, y, z) = X(x)Y(y)Z(z) \tag{9.35} \]
and substitute back into Eq. (9.34). How do we know Eq. (9.35) is valid? When the differential operators in various variables are additive in the PDE, that is, when there are no products of differential operators in different variables, the separation method usually works. We are proceeding in the spirit of let's try and see if it works. If our attempt succeeds, then Eq. (9.35) will be justified. If it does not succeed, we shall find out soon enough and then we shall try another attack, such as Green's functions, integral transforms, or brute-force numerical analysis.

With ψ assumed given by Eq. (9.35), Eq. (9.34) becomes
\[ YZ\frac{d^2X}{dx^2} + XZ\frac{d^2Y}{dy^2} + XY\frac{d^2Z}{dz^2} + k^2 XYZ = 0. \tag{9.36} \]
Dividing by ψ = XYZ and rearranging terms, we obtain
\[ \frac{1}{X}\frac{d^2X}{dx^2} = -k^2 - \frac{1}{Y}\frac{d^2Y}{dy^2} - \frac{1}{Z}\frac{d^2Z}{dz^2}. \tag{9.37} \]
Equation (9.37) exhibits one separation of variables.
The left-hand side is a function of x alone, whereas the right-hand side depends only on y and z and not on x. But x, y, and z are all independent coordinates. The equality of both sides depending on different variables means that the behavior of x as an independent variable is not determined by y and z. Therefore, each side must be equal to a constant, a constant of separation. We choose²
\[ \frac{1}{X}\frac{d^2X}{dx^2} = -l^2, \tag{9.38} \]
\[ -k^2 - \frac{1}{Y}\frac{d^2Y}{dy^2} - \frac{1}{Z}\frac{d^2Z}{dz^2} = -l^2. \tag{9.39} \]

² The choice of sign, completely arbitrary here, will be fixed in specific problems by the need to satisfy specific boundary conditions.

Now, turning our attention to Eq. (9.39), we obtain
\[ \frac{1}{Y}\frac{d^2Y}{dy^2} = -k^2 + l^2 - \frac{1}{Z}\frac{d^2Z}{dz^2}, \tag{9.40} \]
and a second separation has been achieved. Here we have a function of y equated to a function of z, as before. We resolve it, as before, by equating each side to another constant of separation,² −m²,
\[ \frac{1}{Y}\frac{d^2Y}{dy^2} = -m^2, \tag{9.41} \]
\[ \frac{1}{Z}\frac{d^2Z}{dz^2} = -k^2 + l^2 + m^2 = -n^2, \tag{9.42} \]
introducing a constant n² by k² = l² + m² + n² to produce a symmetric set of equations. Now we have three ODEs ((9.38), (9.41), and (9.42)) to replace Eq. (9.34). Our assumption (Eq. (9.35)) has succeeded and is thereby justified.

Our solution should be labeled according to the choice of our constants l, m, and n; that is,
\[ \psi_{lm}(x, y, z) = X_l(x)Y_m(y)Z_n(z). \tag{9.43} \]
Subject to the conditions of the problem being solved and to the condition k² = l² + m² + n², we may choose l, m, and n as we like, and Eq. (9.43) will still be a solution of Eq. (9.34), provided X_l(x) is a solution of Eq. (9.38), and so on. We may develop the most general solution of Eq. (9.34) by taking a linear combination of solutions ψ_{lm},
\[ \Psi = \sum_{l,m} a_{lm}\psi_{lm}. \tag{9.44} \]
The constant coefficients a_{lm} are finally chosen to permit Ψ to satisfy the boundary conditions of the problem, which, as a rule, lead to a discrete set of values l, m.
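A quick numerical sanity check of the product form (9.43) is easy to write. The sketch below is not part of the text; the values of l, m, n are arbitrary choices. It verifies by central finite differences that sin(lx) sin(my) sin(nz), with k² = l² + m² + n², satisfies the Helmholtz equation (9.34):

```python
import math

# Spot-check of Eq. (9.43): with arbitrary (assumed) constants l, m, n and
# k^2 = l^2 + m^2 + n^2, the product sin(lx) sin(my) sin(nz) solves Eq. (9.34).
l, m, n = 1.0, 2.0, 2.5
k2 = l*l + m*m + n*n

def psi(x, y, z):
    return math.sin(l*x) * math.sin(m*y) * math.sin(n*z)

def laplacian(f, x, y, z, h=1e-4):
    """Central-difference approximation to the Laplacian of f at (x, y, z)."""
    d2x = (f(x+h, y, z) - 2*f(x, y, z) + f(x-h, y, z)) / h**2
    d2y = (f(x, y+h, z) - 2*f(x, y, z) + f(x, y-h, z)) / h**2
    d2z = (f(x, y, z+h) - 2*f(x, y, z) + f(x, y, z-h)) / h**2
    return d2x + d2y + d2z

# The Helmholtz residual should vanish up to discretization error.
residual = laplacian(psi, 0.3, 0.7, 1.1) + k2 * psi(0.3, 0.7, 1.1)
assert abs(residual) < 1e-5
```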
Circular Cylindrical Coordinates

With our unknown function ψ dependent on ρ, φ, and z, the Helmholtz equation becomes (see Section 2.4 for ∇²)
\[ \nabla^2\psi(\rho, \varphi, z) + k^2\psi(\rho, \varphi, z) = 0, \tag{9.45} \]
or
\[ \frac{1}{\rho}\frac{\partial}{\partial\rho}\left(\rho\frac{\partial\psi}{\partial\rho}\right) + \frac{1}{\rho^2}\frac{\partial^2\psi}{\partial\varphi^2} + \frac{\partial^2\psi}{\partial z^2} + k^2\psi = 0. \tag{9.46} \]
As before, we assume a factored form for ψ,
\[ \psi(\rho, \varphi, z) = P(\rho)\Phi(\varphi)Z(z). \tag{9.47} \]
Substituting into Eq. (9.46), we have
\[ \frac{\Phi Z}{\rho}\frac{d}{d\rho}\left(\rho\frac{dP}{d\rho}\right) + \frac{PZ}{\rho^2}\frac{d^2\Phi}{d\varphi^2} + P\Phi\frac{d^2Z}{dz^2} + k^2 P\Phi Z = 0. \tag{9.48} \]
All the partial derivatives have become ordinary derivatives. Dividing by PΦZ and moving the z derivative to the right-hand side yields
\[ \frac{1}{\rho P}\frac{d}{d\rho}\left(\rho\frac{dP}{d\rho}\right) + \frac{1}{\rho^2\Phi}\frac{d^2\Phi}{d\varphi^2} + k^2 = -\frac{1}{Z}\frac{d^2Z}{dz^2}. \tag{9.49} \]
Again, a function of z on the right appears to depend on a function of ρ and φ on the left. We resolve this by setting each side of Eq. (9.49) equal to the same constant. Let us choose³ −l². Then
\[ \frac{d^2Z}{dz^2} = l^2 Z \tag{9.50} \]
and
\[ \frac{1}{\rho P}\frac{d}{d\rho}\left(\rho\frac{dP}{d\rho}\right) + \frac{1}{\rho^2\Phi}\frac{d^2\Phi}{d\varphi^2} + k^2 = -l^2. \tag{9.51} \]
Setting k² + l² = n², multiplying by ρ², and rearranging terms, we obtain
\[ \frac{\rho}{P}\frac{d}{d\rho}\left(\rho\frac{dP}{d\rho}\right) + n^2\rho^2 = -\frac{1}{\Phi}\frac{d^2\Phi}{d\varphi^2}. \tag{9.52} \]
We may set the right-hand side equal to m², so that
\[ \frac{d^2\Phi}{d\varphi^2} = -m^2\Phi. \tag{9.53} \]
Finally, for the ρ dependence we have
\[ \rho\frac{d}{d\rho}\left(\rho\frac{dP}{d\rho}\right) + \left(n^2\rho^2 - m^2\right)P = 0. \tag{9.54} \]
This is Bessel's differential equation. The solutions and their properties are presented in Chapter 11. The separation of variables of Laplace's equation in parabolic coordinates also gives rise to Bessel's equation. It may be noted that the Bessel equation is notorious for the variety of disguises it may assume. For an extensive tabulation of possible forms the reader is referred to Tables of Functions by Jahnke and Emde.⁴

The original Helmholtz equation, a three-dimensional PDE, has been replaced by three ODEs, Eqs. (9.50), (9.53), and (9.54). A solution of the Helmholtz equation is
\[ \psi(\rho, \varphi, z) = P(\rho)\Phi(\varphi)Z(z). \tag{9.55} \]
Identifying the specific P, Φ, Z solutions by subscripts, we see that the most general solution of the Helmholtz equation is a linear combination of the product solutions:
\[ \Psi(\rho, \varphi, z) = \sum_{m,n} a_{mn}P_{mn}(\rho)\Phi_m(\varphi)Z_n(z). \tag{9.56} \]

³ The choice of sign of the separation constant is arbitrary. However, a minus sign is chosen for the axial coordinate z in expectation of a possible exponential dependence on z (from Eq. (9.50)). A positive sign is chosen for the azimuthal coordinate φ in expectation of a periodic dependence on φ (from Eq. (9.53)).
⁴ E. Jahnke and F. Emde, Tables of Functions, 4th rev. ed., New York: Dover (1945), p. 146; also, E. Jahnke, F. Emde, and F. Lösch, Tables of Higher Functions, 6th ed., New York: McGraw-Hill (1960).

Spherical Polar Coordinates

Let us try to separate the Helmholtz equation, again with k² constant, in spherical polar coordinates. Using Eq. (2.48), we obtain
\[ \frac{1}{r^2\sin\theta}\left[\sin\theta\,\frac{\partial}{\partial r}\left(r^2\frac{\partial\psi}{\partial r}\right) + \frac{\partial}{\partial\theta}\left(\sin\theta\,\frac{\partial\psi}{\partial\theta}\right) + \frac{1}{\sin\theta}\frac{\partial^2\psi}{\partial\varphi^2}\right] = -k^2\psi. \tag{9.57} \]
Now, in analogy with Eq. (9.35) we try
\[ \psi(r, \theta, \varphi) = R(r)\Theta(\theta)\Phi(\varphi). \tag{9.58} \]
By substituting back into Eq. (9.57) and dividing by RΘΦ, we have
\[ \frac{1}{Rr^2}\frac{d}{dr}\left(r^2\frac{dR}{dr}\right) + \frac{1}{\Theta r^2\sin\theta}\frac{d}{d\theta}\left(\sin\theta\,\frac{d\Theta}{d\theta}\right) + \frac{1}{\Phi r^2\sin^2\theta}\frac{d^2\Phi}{d\varphi^2} = -k^2. \tag{9.59} \]
Note that all derivatives are now ordinary derivatives rather than partials. By multiplying by r² sin²θ, we can isolate (1/Φ)(d²Φ/dφ²) to obtain⁵
\[ \frac{1}{\Phi}\frac{d^2\Phi}{d\varphi^2} = r^2\sin^2\theta\left[-k^2 - \frac{1}{r^2 R}\frac{d}{dr}\left(r^2\frac{dR}{dr}\right) - \frac{1}{r^2\sin\theta\,\Theta}\frac{d}{d\theta}\left(\sin\theta\,\frac{d\Theta}{d\theta}\right)\right]. \tag{9.60} \]
Equation (9.60) relates a function of φ alone to a function of r and θ alone. Since r, θ, and φ are independent variables, we equate each side of Eq. (9.60) to a constant. In almost all physical problems φ will appear as an azimuth angle. This suggests a periodic solution rather than an exponential. With this in mind, let us use −m² as the separation constant, which, then, must be an integer squared. Then
\[ \frac{1}{\Phi}\frac{d^2\Phi(\varphi)}{d\varphi^2} = -m^2 \tag{9.61} \]
and
\[ \frac{1}{Rr^2}\frac{d}{dr}\left(r^2\frac{dR}{dr}\right) + \frac{1}{\Theta r^2\sin\theta}\frac{d}{d\theta}\left(\sin\theta\,\frac{d\Theta}{d\theta}\right) - \frac{m^2}{r^2\sin^2\theta} = -k^2. \tag{9.62} \]
Multiplying Eq. (9.62) by r² and rearranging terms, we obtain
\[ \frac{1}{R}\frac{d}{dr}\left(r^2\frac{dR}{dr}\right) + r^2 k^2 = -\frac{1}{\sin\theta\,\Theta}\frac{d}{d\theta}\left(\sin\theta\,\frac{d\Theta}{d\theta}\right) + \frac{m^2}{\sin^2\theta}. \tag{9.63} \]
Again, the variables are separated. We equate each side to a constant, Q, and finally obtain
\[ \frac{1}{\sin\theta}\frac{d}{d\theta}\left(\sin\theta\,\frac{d\Theta}{d\theta}\right) - \frac{m^2}{\sin^2\theta}\Theta + Q\Theta = 0, \tag{9.64} \]
\[ \frac{1}{r^2}\frac{d}{dr}\left(r^2\frac{dR}{dr}\right) + k^2 R - \frac{QR}{r^2} = 0. \tag{9.65} \]
Once more we have replaced a partial differential equation of three variables by three ODEs. The solutions of these ODEs are discussed in Chapters 11 and 12. In Chapter 12, for example, Eq. (9.64) is identified as the associated Legendre equation, in which the constant Q becomes l(l + 1); l is a non-negative integer because θ is an angular variable. If k² is a (positive) constant, Eq. (9.65) becomes the spherical Bessel equation of Section 11.7.

⁵ The order in which the variables are separated here is not unique. Many quantum mechanics texts show the r dependence split off first.

Again, our most general solution may be written
\[ \Psi(r, \theta, \varphi) = \sum_{Q,m} a_{Qm}R_Q(r)\Theta_{Qm}(\theta)\Phi_m(\varphi). \tag{9.66} \]
The restriction that k² be a constant is unnecessarily severe. The separation process will still be possible for k² as general as
\[ k^2 = f(r) + \frac{1}{r^2}g(\theta) + \frac{1}{r^2\sin^2\theta}h(\varphi) + k'^2. \tag{9.67} \]
In the hydrogen atom problem, one of the most important examples of the Schrödinger wave equation with a closed-form solution, we have k² = f(r), with k² independent of θ, φ. Equation (9.65) for the hydrogen atom becomes the associated Laguerre equation.

The great importance of this separation of variables in spherical polar coordinates stems from the fact that the case k² = k²(r) covers a tremendous amount of physics: a great deal of the theories of gravitation, electrostatics, and atomic, nuclear, and particle physics. And with k² = k²(r), the angular dependence is isolated in Eqs.
(9.61) and (9.64), which can be solved exactly.

Finally, as an illustration of how the constant m in Eq. (9.61) is restricted, we note that φ in cylindrical and spherical polar coordinates is an azimuth angle. If this is a classical problem, we shall certainly require that the azimuthal solution Φ(φ) be single-valued; that is,
\[ \Phi(\varphi + 2\pi) = \Phi(\varphi). \tag{9.68} \]
This is equivalent to requiring the azimuthal solution to have a period of 2π.⁶ Therefore m must be an integer. Which integer it is depends on the details of the problem. If the integer |m| > 1, then Φ will have the period 2π/m. Whenever a coordinate corresponds to an axis of translation or to an azimuth angle, the separated equation always has the form
\[ \frac{d^2\Phi(\varphi)}{d\varphi^2} = -m^2\Phi(\varphi) \]
for φ, the azimuth angle, and
\[ \frac{d^2Z(z)}{dz^2} = \pm a^2 Z(z) \tag{9.69} \]
for z, an axis of translation of the cylindrical coordinate system. The solutions, of course, are sin az and cos az for −a² and the corresponding hyperbolic functions (or exponentials) sinh az and cosh az for +a².

⁶ This also applies in most quantum mechanical problems, but the argument is much more involved. If m is not an integer, rotation group relations and ladder operator relations (Section 4.3) are disrupted. Compare E. Merzbacher, Single valuedness of wave functions. Am. J. Phys. 30: 237 (1962).

Table 9.2 Solutions in Spherical Polar Coordinatesᵃ

    ψ = Σ_{l,m} a_{lm} ψ_{lm}

    1. ∇²ψ = 0         ψ_{lm} = { r^l , r^{−l−1} } { P_l^m(cos θ) , Q_l^m(cos θ) } { cos mφ , sin mφ }ᵇ
    2. ∇²ψ + k²ψ = 0   ψ_{lm} = { j_l(kr) , n_l(kr) } { P_l^m(cos θ) , Q_l^m(cos θ) } { cos mφ , sin mφ }ᵇ
    3. ∇²ψ − k²ψ = 0   ψ_{lm} = { i_l(kr) , k_l(kr) } { P_l^m(cos θ) , Q_l^m(cos θ) } { cos mφ , sin mφ }ᵇ

ᵃ References for some of the functions are P_l^m(cos θ), m = 0, Section 12.1; m ≠ 0, Section 12.5; Q_l^m(cos θ), Section 12.10; j_l(kr), n_l(kr), i_l(kr), and k_l(kr), Section 11.7.
ᵇ cos mφ and sin mφ may be replaced by e^{±imφ}.
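The entries of Table 9.2 can be spot-checked numerically. The sketch below is an illustration added here, not from the text; it verifies that the l = 0 spherical Bessel function j₀(kr) = sin(kr)/(kr) satisfies the radial equation (9.65) with Q = l(l + 1) = 0, using nested central differences:

```python
import math

# Check the l = 0 row of Table 9.2: R(r) = j_0(kr) = sin(kr)/(kr)
# satisfies (1/r^2) d/dr (r^2 dR/dr) + k^2 R = 0  (Eq. (9.65) with Q = 0).
k = 1.7  # arbitrary (assumed) wave number for the check

def R(r):
    return math.sin(k*r) / (k*r)

def radial_residual(r, h=1e-5):
    """Residual of Eq. (9.65) with Q = 0, via central differences on u = r^2 R'."""
    def u(rr):
        return rr*rr * (R(rr + h) - R(rr - h)) / (2*h)
    return (u(r + h) - u(r - h)) / (2*h) / r**2 + k*k*R(r)

assert abs(radial_residual(2.0)) < 1e-4
```

The residual is dominated by floating-point noise in the nested differences; it should nevertheless be several orders of magnitude smaller than R itself.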
Other occasionally encountered ODEs include the Laguerre and associated Laguerre equations from the supremely important hydrogen atom problem in quantum mechanics:
\[ x\frac{d^2y}{dx^2} + (1 - x)\frac{dy}{dx} + \alpha y = 0, \tag{9.70} \]
\[ x\frac{d^2y}{dx^2} + (1 + k - x)\frac{dy}{dx} + \alpha y = 0. \tag{9.71} \]
From the quantum mechanical theory of the linear oscillator we have Hermite's equation,
\[ \frac{d^2y}{dx^2} - 2x\frac{dy}{dx} + 2\alpha y = 0. \tag{9.72} \]
Finally, from time to time we find the Chebyshev differential equation,
\[ \left(1 - x^2\right)\frac{d^2y}{dx^2} - x\frac{dy}{dx} + n^2 y = 0. \tag{9.73} \]
For convenient reference, the forms of the solutions of Laplace's equation, Helmholtz's equation, and the diffusion equation for spherical polar coordinates are collected in Table 9.2. The solutions of Laplace's equation in circular cylindrical coordinates are presented in Table 9.3.

General properties following from the form of the differential equations are discussed in Chapter 10. The individual solutions are developed and applied in Chapters 11–13. The practicing physicist may and probably will meet other second-order ODEs, some of which may possibly be transformed into the examples studied here. Some of these ODEs may be solved by the techniques of Sections 9.5 and 9.6. Others may require a computer for a numerical solution. We refer to the second edition of this text for other important coordinate systems.

Table 9.3 Solutions in Circular Cylindrical Coordinatesᵃ

    ψ = Σ_{m,α} a_{mα} ψ_{mα}

    a. ∇²ψ + α²ψ = 0   ψ_{mα} = { J_m(αρ) , N_m(αρ) } { cos mφ , sin mφ } { e^{−αz} , e^{αz} }
    b. ∇²ψ − α²ψ = 0   ψ_{mα} = { I_m(αρ) , K_m(αρ) } { cos mφ , sin mφ } { cos αz , sin αz }
    c. ∇²ψ = 0         ψ_m  = { ρ^m , ρ^{−m} } { cos mφ , sin mφ }

ᵃ References for the radial functions are J_m(αρ), Section 11.1; N_m(αρ), Section 11.3; I_m(αρ) and K_m(αρ), Section 11.5.

• To put the separation method of solving PDEs in perspective, let us review it as a consequence of a symmetry of the PDE. Take the stationary Schrödinger equation Hψ = Eψ as an example, with a potential V(r) depending only on the radial distance r. Then this PDE is invariant under rotations that comprise the group SO(3). Its diagonal generator is the orbital angular momentum operator L_z = −i∂/∂φ, and its quadratic (Casimir) invariant is L². Since both commute with H (see Section 4.3), we end up with three separate eigenvalue equations:
\[ H\psi = E\psi, \qquad L^2\psi = l(l+1)\psi, \qquad L_z\psi = m\psi. \]
Upon replacing L_z² in L² by its eigenvalue m², the L² PDE becomes Legendre's ODE, and similarly Hψ = Eψ becomes the radial ODE of the separation method in spherical polar coordinates.

• For cylindrical coordinates the PDE is invariant under rotations about the z-axis only, which form a subgroup of SO(3). This invariance yields the generator L_z = −i∂/∂φ and the separate azimuthal ODE L_zψ = mψ, as before. If the potential V is invariant under translations along the z-axis, then the generator −i∂/∂z gives the separate ODE in the z variable.

• In general (see Section 4.3), there are n mutually commuting generators H_i with eigenvalues m_i of the (classical) Lie group G of rank n and the corresponding Casimir invariants C_i with eigenvalues c_i (Chapter 4), which yield the separate ODEs
\[ H_i\psi = m_i\psi, \qquad C_i\psi = c_i\psi, \]
in addition to the (by now) radial ODE Hψ = Eψ.

Exercises

9.3.1 By letting the operator ∇² + k² act on the general form a₁ψ₁(x, y, z) + a₂ψ₂(x, y, z), show that it is linear, that is, that
\[ \left(\nabla^2 + k^2\right)(a_1\psi_1 + a_2\psi_2) = a_1\left(\nabla^2 + k^2\right)\psi_1 + a_2\left(\nabla^2 + k^2\right)\psi_2. \]

9.3.2 Show that the Helmholtz equation,
\[ \nabla^2\psi + k^2\psi = 0, \]
is still separable in circular cylindrical coordinates if k² is generalized to k² + f(ρ) + (1/ρ²)g(φ) + h(z).
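The linearity claim of Exercise 9.3.1 is easy to illustrate numerically in one dimension. The sketch below is not part of the text; the functions and constants are arbitrary choices, and the operator is approximated by central differences:

```python
import math

# One-dimensional illustration of Exercise 9.3.1: the operator
# (d^2/dx^2 + k^2) applied to a linear combination equals the same
# linear combination of the operator applied to each function.
k = 2.0
h = 1e-3

def helmholtz_1d(f, x):
    """Central-difference approximation to (f'' + k^2 f) at x."""
    return (f(x+h) - 2*f(x) + f(x-h)) / h**2 + k*k*f(x)

f1, f2 = math.sin, math.cos   # arbitrary test functions
a1, a2 = 3.0, -1.5            # arbitrary constants
x = 0.8

combo = helmholtz_1d(lambda t: a1*f1(t) + a2*f2(t), x)
parts = a1*helmholtz_1d(f1, x) + a2*helmholtz_1d(f2, x)
assert abs(combo - parts) < 1e-7
```

Because the finite-difference operator is itself linear, the two sides agree to floating-point roundoff, mirroring the exact algebraic identity the exercise asks for.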
9.3.3 Separate variables in the Helmholtz equation in spherical polar coordinates, splitting off the radial dependence first. Show that your separated equations have the same form as Eqs. (9.61), (9.64), and (9.65).

9.3.4 Verify that
\[ \nabla^2\psi(r, \theta, \varphi) + \left[k^2 + f(r) + \frac{1}{r^2}g(\theta) + \frac{1}{r^2\sin^2\theta}h(\varphi)\right]\psi(r, \theta, \varphi) = 0 \]
is separable (in spherical polar coordinates). The functions f, g, and h are functions only of the variables indicated; k² is a constant.

9.3.5 An atomic (quantum mechanical) particle is confined inside a rectangular box of sides a, b, and c. The particle is described by a wave function ψ that satisfies the Schrödinger wave equation
\[ -\frac{\hbar^2}{2m}\nabla^2\psi = E\psi. \]
The wave function is required to vanish at each surface of the box (but not to be identically zero). This condition imposes constraints on the separation constants and therefore on the energy E. What is the smallest value of E for which such a solution can be obtained?
\[ \text{ANS.}\quad E = \frac{\pi^2\hbar^2}{2m}\left(\frac{1}{a^2} + \frac{1}{b^2} + \frac{1}{c^2}\right). \]

9.3.6 For a homogeneous spherical solid with constant thermal diffusivity, K, and no heat sources, the equation of heat conduction becomes
\[ \frac{\partial T(\mathbf{r}, t)}{\partial t} = K\nabla^2 T(\mathbf{r}, t). \]
Assume a solution of the form T = R(r)T(t) and separate variables. Show that the radial equation may take on the standard form
\[ r^2\frac{d^2R}{dr^2} + 2r\frac{dR}{dr} + \left[\alpha^2 r^2 - n(n+1)\right]R = 0, \qquad n = \text{integer}. \]
The solutions of this equation are called spherical Bessel functions.

9.3.7 Separate variables in the thermal diffusion equation of Exercise 9.3.6 in circular cylindrical coordinates. Assume that you can neglect end effects and take T = T(ρ, t).

9.3.8 The quantum mechanical angular momentum operator is given by L = −i(r × ∇). Show that
\[ \mathbf{L}\cdot\mathbf{L}\,\psi = l(l+1)\psi \]
leads to the associated Legendre equation.
Hint. Exercises 1.9.9 and 2.5.16 may be helpful.

9.3.9 The one-dimensional Schrödinger wave equation for a particle in a potential field V = ½kx² is
\[ -\frac{\hbar^2}{2m}\frac{d^2\psi}{dx^2} + \frac{1}{2}kx^2\psi = E\psi(x). \]
(a) Using ξ = ax and a constant λ, with
\[ a = \left(\frac{mk}{\hbar^2}\right)^{1/4}, \qquad \lambda = \frac{2E}{\hbar}\left(\frac{m}{k}\right)^{1/2}, \]
show that
\[ \frac{d^2\psi(\xi)}{d\xi^2} + \left(\lambda - \xi^2\right)\psi(\xi) = 0. \]
(b) Substituting ψ(ξ) = y(ξ)e^{−ξ²/2}, show that y(ξ) satisfies the Hermite differential equation.

9.3.10 Verify that the following are solutions of Laplace's equation:
(a) ψ₁ = 1/r, r ≠ 0;  (b) ψ₂ = (1/2r) ln[(r + z)/(r − z)].
Note. The z derivatives of 1/r generate the Legendre polynomials, P_n(cos θ), Exercise 12.1.7. The z derivatives of (1/2r) ln[(r + z)/(r − z)] generate the Legendre functions, Q_n(cos θ).

9.3.11 If Ψ is a solution of Laplace's equation, ∇²Ψ = 0, show that ∂Ψ/∂z is also a solution.

9.4 SINGULAR POINTS

In this section the concept of a singular point, or singularity (as applied to a differential equation), is introduced. The interest in this concept stems from its usefulness in (1) classifying ODEs and (2) investigating the feasibility of a series solution. This feasibility is the topic of Fuchs' theorem, Sections 9.5 and 9.6.

All the ODEs listed in Section 9.3 may be solved for d²y/dx². Using the notation d²y/dx² = y″, we have⁷
\[ y'' = f(x, y, y'). \tag{9.74} \]
If we write our second-order homogeneous differential equation (in y) as
\[ y'' + P(x)y' + Q(x)y = 0, \tag{9.75} \]
we are ready to define ordinary and singular points. If the functions P(x) and Q(x) remain finite at x = x₀, point x = x₀ is an ordinary point. However, if either P(x) or Q(x) (or both) diverges as x → x₀, point x₀ is a singular point. Using Eq. (9.75), we may distinguish between two kinds of singular points.

1. If either P(x) or Q(x) diverges as x → x₀ but (x − x₀)P(x) and (x − x₀)²Q(x) remain finite as x → x₀, then x = x₀ is called a regular, or nonessential, singular point.
2. If P(x) diverges faster than 1/(x − x₀) so that (x − x₀)P(x) goes to infinity as x → x₀, or Q(x) diverges faster than 1/(x − x₀)² so that (x − x₀)²Q(x) goes to infinity as x → x₀, then point x = x₀ is labeled an irregular, or essential, singularity.

⁷ This prime notation, y′ = dy/dx, was introduced by Lagrange in the late 18th century as an abbreviation for Leibniz's more explicit but more cumbersome dy/dx.

These definitions hold for all finite values of x₀. The analysis of point x → ∞ is similar to the treatment of functions of a complex variable (Section 6.6). We set x = 1/z, substitute into the differential equation, and then let z → 0. By changing variables in the derivatives, we have
\[ \frac{dy(x)}{dx} = \frac{dy(z^{-1})}{dz}\frac{dz}{dx} = -\frac{1}{x^2}\frac{dy(z^{-1})}{dz} = -z^2\frac{dy(z^{-1})}{dz}, \tag{9.76} \]
\[ \frac{d^2y(x)}{dx^2} = \frac{dz}{dx}\frac{d}{dz}\left[\frac{dy(x)}{dx}\right] = \left(-z^2\right)\left[-2z\frac{dy(z^{-1})}{dz} - z^2\frac{d^2y(z^{-1})}{dz^2}\right] = 2z^3\frac{dy(z^{-1})}{dz} + z^4\frac{d^2y(z^{-1})}{dz^2}. \tag{9.77} \]
Using these results, we transform Eq. (9.75) into
\[ z^4\frac{d^2y}{dz^2} + \left[2z^3 - z^2 P\left(z^{-1}\right)\right]\frac{dy}{dz} + Q\left(z^{-1}\right)y = 0. \tag{9.78} \]
The behavior at x = ∞ (z = 0) then depends on the behavior of the new coefficients,
\[ \frac{2z - P(z^{-1})}{z^2} \quad\text{and}\quad \frac{Q(z^{-1})}{z^4}, \]
as z → 0. If these two expressions remain finite, point x = ∞ is an ordinary point. If they diverge no more rapidly than 1/z and 1/z², respectively, point x = ∞ is a regular singular point; otherwise it is an irregular singular point (an essential singularity).

Example 9.4.1

Bessel's equation is
\[ x^2 y'' + xy' + \left(x^2 - n^2\right)y = 0. \tag{9.79} \]
Comparing it with Eq. (9.75) we have
\[ P(x) = \frac{1}{x}, \qquad Q(x) = 1 - \frac{n^2}{x^2}, \]
which shows that point x = 0 is a regular singularity. By inspection we see that there are no other singular points in the finite range. As x → ∞ (z → 0), from Eq. (9.78) we have the coefficients
\[ \frac{2z - z}{z^2} \quad\text{and}\quad \frac{1 - n^2 z^2}{z^4}. \]
Since the latter expression diverges as 1/z⁴, point x = ∞ is an irregular, or essential, singularity. ■

Table 9.4

    Equation                                                        Regular singularity x =   Irregular singularity x =
    1. Hypergeometric  x(x−1)y″ + [(1+a+b)x − c]y′ + aby = 0        0, 1, ∞                   —
    2. Legendreᵃ  (1−x²)y″ − 2xy′ + l(l+1)y = 0                     −1, 1, ∞                  —
    3. Chebyshev  (1−x²)y″ − xy′ + n²y = 0                          −1, 1, ∞                  —
    4. Confluent hypergeometric  xy″ + (c−x)y′ − ay = 0             0                         ∞
    5. Bessel  x²y″ + xy′ + (x²−n²)y = 0                            0                         ∞
    6. Laguerreᵃ  xy″ + (1−x)y′ + ay = 0                            0                         ∞
    7. Simple harmonic oscillator  y″ + ω²y = 0                     —                         ∞
    8. Hermite  y″ − 2xy′ + 2αy = 0                                 —                         ∞

ᵃ The associated equations have the same singular points.

The ordinary differential equations of Section 9.3, plus two others, the hypergeometric and the confluent hypergeometric, have singular points, as shown in Table 9.4. It will be seen that the first three equations in Table 9.4, hypergeometric, Legendre, and Chebyshev, all have three regular singular points. The hypergeometric equation, with regular singularities at 0, 1, and ∞, is taken as the standard, the canonical form. The solutions of the other two may then be expressed in terms of its solutions, the hypergeometric functions. This is done in Chapter 13. In a similar manner, the confluent hypergeometric equation is taken as the canonical form of a linear second-order differential equation with one regular and one irregular singular point.

Exercises

9.4.1 Show that Legendre's equation has regular singularities at x = −1, 1, and ∞.

9.4.2 Show that Laguerre's equation, like the Bessel equation, has a regular singularity at x = 0 and an irregular singularity at x = ∞.

9.4.3 Show that the substitution
\[ x \to \frac{1 - x}{2}, \qquad a = -l, \qquad b = l + 1, \qquad c = 1 \]
converts the hypergeometric equation into Legendre's equation.

9.5 SERIES SOLUTIONS — FROBENIUS' METHOD

In this section we develop a method of obtaining one solution of the linear, second-order, homogeneous ODE. The method, a series expansion, will always work, provided the point of expansion is no worse than a regular singular point. In physics this very gentle condition is almost always satisfied.

A linear, second-order, homogeneous ODE may be put in the form
\[ \frac{d^2y}{dx^2} + P(x)\frac{dy}{dx} + Q(x)y = 0. \tag{9.80} \]
The equation is homogeneous because each term contains y(x) or a derivative; linear because each y, dy/dx, or d²y/dx² appears as the first power — and no products. In this section we develop (at least) one solution of Eq. (9.80). In Section 9.6 we develop the second, independent solution and prove that no third, independent solution exists. Therefore the most general solution of Eq. (9.80) may be written as
\[ y(x) = c_1 y_1(x) + c_2 y_2(x). \tag{9.81} \]
Our physical problem may lead to a nonhomogeneous, linear, second-order ODE,
\[ \frac{d^2y}{dx^2} + P(x)\frac{dy}{dx} + Q(x)y = F(x). \tag{9.82} \]
The function on the right, F(x), represents a source (such as electrostatic charge) or a driving force (as in a driven oscillator). Specific solutions of this nonhomogeneous equation are touched on in Exercise 9.6.25. They are explored in some detail, using Green's function techniques, in Sections 9.7 and 10.5, and with a Laplace transform technique in Section 15.11. Calling this solution y_p, we may add to it any solution of the corresponding homogeneous equation (Eq. (9.80)). Hence the most general solution of Eq. (9.82) is
\[ y(x) = c_1 y_1(x) + c_2 y_2(x) + y_p(x). \tag{9.83} \]
The constants c₁ and c₂ will eventually be fixed by boundary conditions.

For the present, we assume that F(x) = 0 and that our differential equation is homogeneous. We shall attempt to develop a solution of our linear, second-order, homogeneous differential equation, Eq. (9.80), by substituting in a power series with undetermined coefficients. Also available as a parameter is the power of the lowest nonvanishing term of the series. To illustrate, we apply the method to two important differential equations, first the linear (classical) oscillator equation
\[ \frac{d^2y}{dx^2} + \omega^2 y = 0, \tag{9.84} \]
with known solutions y = sin ωx, cos ωx.
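The structure of Eq. (9.83) can be illustrated numerically. In the sketch below (not part of the text), a constant driving term F₀ is a hypothetical choice for which y_p = F₀/ω² is an obvious particular solution of y″ + ω²y = F₀; adding an arbitrary homogeneous combination c₁ cos ωx + c₂ sin ωx still satisfies the driven equation:

```python
import math

# Illustration of Eq. (9.83) for the driven oscillator y'' + w^2 y = F0
# (constant forcing chosen for simplicity): y_p = F0/w^2 is a particular
# solution, and any homogeneous solution may be added to it.
w, F0 = 2.0, 3.0
c1, c2 = 1.3, -0.4   # arbitrary constants, fixed by boundary conditions in practice

def y(x):
    return c1*math.cos(w*x) + c2*math.sin(w*x) + F0/w**2

def residual(x, h=1e-4):
    """Finite-difference residual of y'' + w^2 y - F0 at x."""
    ypp = (y(x+h) - 2*y(x) + y(x-h)) / h**2
    return ypp + w*w*y(x) - F0

assert abs(residual(0.9)) < 1e-5
```

Changing c₁ and c₂ moves the solution around within the two-parameter family (9.83) without ever violating the nonhomogeneous equation.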
We try
\[ y(x) = x^k\left(a_0 + a_1 x + a_2 x^2 + a_3 x^3 + \cdots\right) = \sum_{\lambda=0}^{\infty} a_\lambda x^{k+\lambda}, \qquad a_0 \neq 0, \tag{9.85} \]
with the exponent k and all the coefficients a_λ still undetermined. Note that k need not be an integer. By differentiating twice, we obtain
\[ \frac{dy}{dx} = \sum_{\lambda=0}^{\infty} a_\lambda (k+\lambda)x^{k+\lambda-1}, \qquad \frac{d^2y}{dx^2} = \sum_{\lambda=0}^{\infty} a_\lambda (k+\lambda)(k+\lambda-1)x^{k+\lambda-2}. \]
By substituting into Eq. (9.84), we have
\[ \sum_{\lambda=0}^{\infty} a_\lambda (k+\lambda)(k+\lambda-1)x^{k+\lambda-2} + \omega^2\sum_{\lambda=0}^{\infty} a_\lambda x^{k+\lambda} = 0. \tag{9.86} \]
From our analysis of the uniqueness of power series (Chapter 5), the coefficients of each power of x on the left-hand side of Eq. (9.86) must vanish individually.

The lowest power of x appearing in Eq. (9.86) is x^{k−2}, for λ = 0 in the first summation. The requirement that the coefficient vanish⁸ yields
\[ a_0\,k(k-1) = 0. \]
We had chosen a₀ as the coefficient of the lowest nonvanishing term of the series (Eq. (9.85)); hence, by definition, a₀ ≠ 0. Therefore we have
\[ k(k-1) = 0. \tag{9.87} \]
This equation, coming from the coefficient of the lowest power of x, we call the indicial equation. The indicial equation and its roots are of critical importance to our analysis. If k = 1, the coefficient a₁(k + 1)k of x^{k−1} must vanish so that a₁ = 0. Clearly, in this example we must require either that k = 0 or k = 1.

Before considering these two possibilities for k, we return to Eq. (9.86) and demand that the remaining net coefficients, say, the coefficient of x^{k+j} (j ≥ 0), vanish. We set λ = j + 2 in the first summation and λ = j in the second. (They are independent summations and λ is a dummy index.) This results in
\[ a_{j+2}(k+j+2)(k+j+1) + \omega^2 a_j = 0, \]
or
\[ a_{j+2} = -a_j\,\frac{\omega^2}{(k+j+2)(k+j+1)}. \tag{9.88} \]
This is a two-term recurrence relation.⁹ Given a_j, we may compute a_{j+2} and then a_{j+4}, a_{j+6}, and so on up as far as desired. Note that for this example, if we start with a₀, Eq. (9.88) leads to the even coefficients a₂, a₄, and so on, and ignores a₁, a₃, a₅, and so on. Since a₁ is arbitrary if k = 0 and necessarily zero if k = 1, let us set it equal to zero (compare Exercises 9.5.3 and 9.5.4) and then by Eq. (9.88)
\[ a_3 = a_5 = a_7 = \cdots = 0, \]
and all the odd-numbered coefficients vanish. The odd powers of x will actually reappear when the second root of the indicial equation is used.

⁸ See the uniqueness of power series, Section 5.7.
⁹ The recurrence relation may involve three terms, that is, a_{j+2} depending on a_j and a_{j−2}. Equation (13.2) for the Hermite functions provides an example of this behavior.

Returning to Eq. (9.87), our indicial equation, we first try the solution k = 0. The recurrence relation (Eq. (9.88)) becomes
\[ a_{j+2} = -a_j\,\frac{\omega^2}{(j+2)(j+1)}, \tag{9.89} \]
which leads to
\[ a_2 = -a_0\frac{\omega^2}{1\cdot 2} = -\frac{\omega^2}{2!}a_0, \qquad a_4 = -a_2\frac{\omega^2}{3\cdot 4} = +\frac{\omega^4}{4!}a_0, \qquad a_6 = -a_4\frac{\omega^2}{5\cdot 6} = -\frac{\omega^6}{6!}a_0, \]
and so on. By inspection (and mathematical induction),
\[ a_{2n} = (-1)^n\frac{\omega^{2n}}{(2n)!}a_0, \tag{9.90} \]
and our solution is
\[ y(x)_{k=0} = a_0\left[1 - \frac{(\omega x)^2}{2!} + \frac{(\omega x)^4}{4!} - \frac{(\omega x)^6}{6!} + \cdots\right] = a_0\cos\omega x. \tag{9.91} \]
If we choose the indicial equation root k = 1 (Eq. (9.88)), the recurrence relation becomes
\[ a_{j+2} = -a_j\,\frac{\omega^2}{(j+3)(j+2)}. \tag{9.92} \]
Substituting in j = 0, 2, 4, successively, we obtain
\[ a_2 = -a_0\frac{\omega^2}{2\cdot 3} = -\frac{\omega^2}{3!}a_0, \qquad a_4 = -a_2\frac{\omega^2}{4\cdot 5} = +\frac{\omega^4}{5!}a_0, \qquad a_6 = -a_4\frac{\omega^2}{6\cdot 7} = -\frac{\omega^6}{7!}a_0, \]
and so on. Again, by inspection and mathematical induction,
\[ a_{2n} = (-1)^n\frac{\omega^{2n}}{(2n+1)!}a_0. \tag{9.93} \]
The vanishing of the coefficient of x k (and higher powers, taken one at a time) leads to the recurrence relation, Eq. (9.88). This series substitution, known as Frobenius’ method, has given us two series solutions of the linear oscillator equation. However, there are two points about such series solutions that must be strongly emphasized: 1. The series solution should always be substituted back into the differential equation, to see if it works, as a precaution against algebraic and logical errors. If it works, it is a solution. 2. The acceptability of a series solution depends on its convergence (including asymptotic convergence). It is quite possible for Frobenius’ method to give a series solution that satisfies the original differential equation when substituted in the equation but that does FIGURE 9.2 Recurrence relation from power series expansion. 9.5 Series Solutions — Frobenius’ Method 569 not converge over the region of interest. Legendre’s differential equation illustrates this situation. Expansion About x 0 Equation (9.85) is an expansion about the origin, x0 = 0. It is perfectly possible to replace Eq. (9.85) with y(x) = ∞  λ=0 aλ (x − x0 )k+λ , a0 = 0. (9.95) Indeed, for the Legendre, Chebyshev, and hypergeometric equations the choice x0 = 1 has some advantages. The point x0 should not be chosen at an essential singularity — or our Frobenius method will probably fail. The resultant series (x0 an ordinary point or regular singular point) will be valid where it converges. You can expect a divergence of some sort when |x − x0 | = |zs − x0 |, where zs is the closest singularity to x0 (in the complex plane). Symmetry of Solutions Let us note that we obtained one solution of even symmetry, y1 (x) = y1 (−x), and one of odd symmetry, y2 (x) = −y2 (−x). This is not just an accident but a direct consequence of the form of the ODE. 
Writing a general ODE as
\[ \mathcal{L}(x)\,y(x) = 0, \tag{9.96} \]
in which \(\mathcal{L}(x)\) is the differential operator, we see that for the linear oscillator equation (Eq. (9.84)), \(\mathcal{L}(x)\) is even under parity; that is,
\[ \mathcal{L}(x) = \mathcal{L}(-x). \tag{9.97} \]
Whenever the differential operator has a specific parity or symmetry, either even or odd, we may interchange +x and −x, and Eq. (9.96) becomes
\[ \pm\mathcal{L}(x)\,y(-x) = 0, \tag{9.98} \]
with the plus sign if \(\mathcal{L}(x)\) is even, the minus sign if \(\mathcal{L}(x)\) is odd. Clearly, if y(x) is a solution of the differential equation, y(−x) is also a solution. Then any solution may be resolved into even and odd parts,
\[ y(x) = \tfrac{1}{2}\left[y(x) + y(-x)\right] + \tfrac{1}{2}\left[y(x) - y(-x)\right], \tag{9.99} \]
the first bracket on the right giving an even solution, the second an odd solution.

If we refer back to Section 9.4, we can see that the Legendre, Chebyshev, Bessel, simple harmonic oscillator, and Hermite equations (or differential operators) all exhibit this even parity; that is, their P(x) in Eq. (9.80) is odd and Q(x) even. Solutions of all of them may be presented as series of even powers of x and separate series of odd powers of x. The Laguerre differential operator has neither even nor odd symmetry; hence its solutions cannot be expected to exhibit even or odd parity. Our emphasis on parity stems primarily from the importance of parity in quantum mechanics. We find that wave functions usually are either even or odd, meaning that they have a definite parity. Most interactions (beta decay is the big exception) are also even or odd, and the result is that parity is conserved.

Limitations of Series Approach — Bessel's Equation

This attack on the linear oscillator equation was perhaps a bit too easy. By substituting the power series (Eq. (9.85)) into the differential equation (Eq. (9.84)), we obtained two independent solutions with no trouble at all. To get some idea of what can happen we try to solve Bessel's equation,
\[ x^2 y'' + xy' + \left(x^2 - n^2\right)y = 0, \tag{9.100} \]
using y′ for dy/dx and y″ for d²y/dx².
Again, assuming a solution of the form

  y(x) = Σ_{λ=0}^∞ a_λ x^{k+λ},

we differentiate and substitute into Eq. (9.100). The result is

  Σ_{λ=0}^∞ a_λ (k+λ)(k+λ−1) x^{k+λ} + Σ_{λ=0}^∞ a_λ (k+λ) x^{k+λ} + Σ_{λ=0}^∞ a_λ x^{k+λ+2} − Σ_{λ=0}^∞ a_λ n² x^{k+λ} = 0.   (9.101)

By setting λ = 0, we get the coefficient of x^k, the lowest power of x appearing on the left-hand side,

  a_0 [k(k−1) + k − n²] = 0,   (9.102)

and again a_0 ≠ 0 by definition. Equation (9.102) therefore yields the indicial equation

  k² − n² = 0,   (9.103)

with solutions k = ±n.

It is of some interest to examine the coefficient of x^{k+1} also. Here we obtain

  a_1 [(k+1)k + (k+1) − n²] = 0,

or

  a_1 (k+1−n)(k+1+n) = 0.   (9.104)

For k = ±n, neither k+1−n nor k+1+n vanishes and we must require a_1 = 0.¹⁰

Proceeding to the coefficient of x^{k+j} for k = n, we set λ = j in the first, second, and fourth terms of Eq. (9.101) and λ = j − 2 in the third term. By requiring the resultant coefficient of x^{k+j} to vanish, we obtain

  a_j [(n+j)(n+j−1) + (n+j) − n²] + a_{j−2} = 0.

When j is replaced by j + 2, this can be rewritten for j ≥ 0 as

  a_{j+2} = −a_j · 1/[(j+2)(2n+j+2)],   (9.105)

which is the desired recurrence relation. Repeated application of this recurrence relation leads to

  a_2 = −a_0 · 1/[2(2n+2)] = −a_0 n!/[2² 1!(n+1)!],
  a_4 = −a_2 · 1/[4(2n+4)] = +a_0 n!/[2⁴ 2!(n+2)!],
  a_6 = −a_4 · 1/[6(2n+6)] = −a_0 n!/[2⁶ 3!(n+3)!],

and so on, and in general,

  a_{2p} = (−1)^p a_0 n!/[2^{2p} p!(n+p)!].   (9.106)

Inserting these coefficients in our assumed series solution, we have

  y(x) = a_0 x^n [1 − n! x²/(2² 1!(n+1)!) + n! x⁴/(2⁴ 2!(n+2)!) − ⋯].   (9.107)

In summation form,

  y(x) = a_0 Σ_{j=0}^∞ (−1)^j n! x^{n+2j}/[2^{2j} j!(n+j)!]
       = a_0 2^n n! Σ_{j=0}^∞ [(−1)^j/(j!(n+j)!)] (x/2)^{n+2j}.   (9.108)

In Chapter 11 the final summation is identified as the Bessel function J_n(x). Notice that this solution, J_n(x), has either even or odd symmetry,¹¹ as might be expected from the form of Bessel's equation.

¹⁰ k = ±n = −1/2 are exceptions.
¹¹ J_n(x) is an even function if n is an even integer, an odd function if n is an odd integer. For nonintegral n, x^n has no such simple symmetry.
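The recurrence (9.105) is easy to iterate numerically. The sketch below (an illustrative check, not part of the text) generates the coefficients for n = 2, using the conventional normalization a_0 = 1/(2^n n!) that makes the sum equal J_n(x), and compares the partial sum with SciPy's Bessel function:

```python
# Iterate a_{j+2} = -a_j / ((j+2)(2n+j+2)), Eq. (9.105), and compare the
# resulting partial sum with the standard Bessel function J_n(x).
from math import factorial
from scipy.special import jv

n = 2
a0 = 1.0 / (2**n * factorial(n))   # normalization giving exactly J_n
coeffs = {0: a0}
for j in range(0, 40, 2):
    coeffs[j + 2] = -coeffs[j] / ((j + 2) * (2 * n + j + 2))

def series_jn(x):
    return sum(c * x**(n + j) for j, c in coeffs.items())

x = 1.5
print(series_jn(x), jv(n, x))      # the two values agree to machine precision
```

For moderate x the series converges rapidly, so a few dozen coefficients suffice.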
When k = −n and n is not an integer, we may generate a second distinct series, to be labeled J_{−n}(x). However, when −n is a negative integer, trouble develops. The recurrence relation for the coefficients a_j is still given by Eq. (9.105), but with 2n replaced by −2n. Then, when j + 2 = 2n, or j = 2(n − 1), the coefficient a_{j+2} blows up and we have no series solution. This catastrophe can be remedied in Eq. (9.108), as it is done in Chapter 11, with the result that

  J_{−n}(x) = (−1)^n J_n(x),   n an integer.   (9.109)

The second solution simply reproduces the first. We have failed to construct a second independent solution for Bessel's equation by this series technique when n is an integer.

By substituting in an infinite series, we have obtained two solutions for the linear oscillator equation and one for Bessel's equation (two if n is not an integer). To the questions "Can we always do this? Will this method always work?" the answer is no, we cannot always do this. This method of series solution will not always work.

Regular and Irregular Singularities

The success of the series substitution method depends on the roots of the indicial equation and the degree of singularity of the coefficients in the differential equation. To understand better the effect of the equation coefficients on this naive series substitution approach, consider four simple equations:

  y'' − (6/x²) y = 0,   (9.110a)
  y'' − (6/x³) y = 0,   (9.110b)
  y'' + (1/x) y' − (a²/x²) y = 0,   (9.110c)
  y'' + (1/x²) y' − (a²/x²) y = 0.   (9.110d)

The reader may show easily that for Eq. (9.110a) the indicial equation is

  k² − k − 6 = 0,

giving k = 3, −2. Since the equation is homogeneous in x (counting d²/dx² as x^{−2}), there is no recurrence relation. However, we are left with two perfectly good solutions, x³ and x^{−2}.
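Equation (9.109) can be spot-checked numerically. The sketch below (a quick sanity check, not a proof) compares J_{−n} with (−1)^n J_n at a few arbitrarily chosen points:

```python
# Spot-check J_{-n}(x) = (-1)^n J_n(x) for integer n, Eq. (9.109).
from scipy.special import jv

for n in (1, 2, 3, 4):
    for x in (0.5, 2.0, 7.3):
        assert abs(jv(-n, x) - (-1)**n * jv(n, x)) < 1e-12
print("J_{-n} reproduces (-1)^n J_n for integer n")
```

This is precisely the sense in which the second Frobenius series fails to be independent for integer n.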
Equation (9.110b) differs from Eq. (9.110a) by only one power of x, but this sends the indicial equation to

  −6 a_0 = 0,

with no solution at all, for we have agreed that a_0 ≠ 0. Our series substitution worked for Eq. (9.110a), which had only a regular singularity, but broke down at Eq. (9.110b), which has an irregular singular point at the origin.

Continuing with Eq. (9.110c), we have added a term y'/x. The indicial equation is

  k² − a² = 0,

but again there is no recurrence relation. The solutions are y = x^a and y = x^{−a}, both perfectly acceptable one-term series.

When we change the power of x in the coefficient of y' from −1 to −2, Eq. (9.110d), there is a drastic change in the solution. The indicial equation (with only the y' term contributing) becomes

  k = 0.

There is a recurrence relation,

  a_{j+1} = +a_j [a² − j(j−1)]/(j+1).

Unless the parameter a is selected to make the series terminate, we have

  lim_{j→∞} |a_{j+1}/a_j| = lim_{j→∞} j(j−1)/(j+1) = ∞.

Hence our series solution diverges for all x ≠ 0. Again, our method worked for Eq. (9.110c), with a regular singularity, but failed when we had the irregular singularity of Eq. (9.110d).

Fuchs' Theorem

The answer to the basic question of when the method of series substitution can be expected to work is given by Fuchs' theorem, which asserts that we can always obtain at least one power-series solution, provided we are expanding about a point that is an ordinary point or at worst a regular singular point.

If we attempt an expansion about an irregular or essential singularity, our method may fail, as it did for Eqs. (9.110b) and (9.110d). Fortunately, the more important equations of mathematical physics, listed in Section 9.4, have no irregular singularities in the finite plane. Further discussion of Fuchs' theorem appears in Section 9.6.

From Table 9.4, Section 9.4, infinity is seen to be a singular point for all equations considered.
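The recurrence for Eq. (9.110d) displays both behaviors directly: for generic a the coefficient ratio grows like j, so the series diverges for every x ≠ 0, while special choices of a² terminate it (a² = 2, for instance, gives the polynomial y = 1 + 2x + 2x²; compare Exercise 9.5.16). A minimal sketch:

```python
# Recurrence a_{j+1} = a_j (a^2 - j(j-1)) / (j+1) for Eq. (9.110d).
def coefficients(a_squared, nmax):
    a = [1.0]                      # a0 = 1
    for j in range(nmax):
        a.append(a[j] * (a_squared - j * (j - 1)) / (j + 1))
    return a

# Generic a^2: |a_{j+1}/a_j| ~ j, so the series diverges for every x != 0.
c = coefficients(3.0, 30)
print(abs(c[30] / c[29]))    # ratio grows roughly linearly in j

# a^2 = 2: the series terminates after x^2 (coefficients 1, 2, 2, then zeros).
print(coefficients(2.0, 6))
```

The linearly growing ratio is the numerical signature of the divergence argued above.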
As a further illustration of Fuchs’ theorem, Legendre’s equation (with infinity as a regular singularity) has a convergent-series solution in negative powers of the argument (Section 12.10). In contrast, Bessel’s equation (with an irregular singularity at infinity) yields asymptotic series (Sections 5.10 and 11.6). These asymptotic solutions are extremely useful. Summary If we are expanding about an ordinary point or at worst about a regular singularity, the series substitution approach will yield at least one solution (Fuchs’ theorem). Whether we get one or two distinct solutions depends on the roots of the indicial equation. 1. If the two roots of the indicial equation are equal, we can obtain only one solution by this series substitution method. 2. If the two roots differ by a nonintegral number, two independent solutions may be obtained. 3. If the two roots differ by an integer, the larger of the two will yield a solution. The smaller may or may not give a solution, depending on the behavior of the coefficients. In the linear oscillator equation we obtain two solutions; for Bessel’s equation, we get only one solution. 574 Chapter 9 Differential Equations The usefulness of the series solution in terms of what the solution is (that is, numbers) depends on the rapidity of convergence of the series and the availability of the coefficients. Many ODEs will not yield nice, simple recurrence relations for the coefficients. In general, the available series will probably be useful for |x| (or |x − x0 |) very small. Computers can be used to determine additional series coefficients using a symbolic language, such as Mathematica,12 Maple,13 or Reduce.14 Often, however, for numerical work a direct numerical integration will be preferred. Exercises 9.5.1 Uniqueness theorem. The function y(x) satisfies a second-order, linear, homogeneous differential equation. At x = x0 , y(x) = y0 and dy/dx = y0′ . 
Show that y(x) is unique, in that no other solution of this differential equation passes through the points (x0 , y0 ) with a slope of y0′ . Hint. Assume a second solution satisfying these conditions and compare the Taylor series expansions. 9.5.2 A series solution of Eq. (9.80) is attempted, expanding about the point x = x0 . If x0 is an ordinary point, show that the indicial equation has roots k = 0, 1. 9.5.3 In the development of a series solution of the simple harmonic oscillator (SHO) equation, the second series coefficient a1 was neglected except to set it equal to zero. From the coefficient of the next-to-the-lowest power of x, x k−1 , develop a second indicialtype equation. (SHO equation with k = 0). Show that a1 , may be assigned any finite value (including zero). (b) (SHO equation with k = 1). Show that a1 must be set equal to zero. (a) 9.5.4 Analyze the series solutions of the following differential equations to see when a1 may be set equal to zero without irrevocably losing anything and when a1 must be set equal to zero. (a) Legendre, (b) Chebyshev, (c) Bessel, (d) Hermite. Legendre, (b) Chebyshev, and (d) Hermite: For k = 0, a1 may be set equal to zero; for k = 1, a1 must be set equal to zero. (c) Bessel: a1 must be set equal to zero (except for k = ±n = − 12 ). ANS. (a) 9.5.5 Solve the Legendre equation  1 − x 2 y ′′ − 2xy ′ + n(n + 1)y = 0 by direct series substitution. 12 S. Wolfram, Mathematica, A System for Doing Mathematics by Computer, New York: Addison Wesley (1991). 13 A. Heck, Introduction to Maple, New York: Springer (1993). 14 G. Rayna, Reduce Software for Algebraic Computation, New York: Springer (1987). 9.5 Series Solutions — Frobenius’ Method (a) 575 Verify that the indicial equation is k(k − 1) = 0. (b) Using k = 0, obtain a series of even powers of x (a1 = 0).  n(n + 1) 2 n(n − 2)(n + 1)(n + 3) 4 yeven = a0 1 − x + x + ··· , 2! 4! where aj +2 = (c) j (j + 1) − n(n + 1) aj . 
(j + 1)(j + 2) Using k = 1, develop a series of odd powers of x (a1 = 1).  (n − 1)(n + 2) 3 (n − 1)(n − 3)(n + 2)(n + 4) 5 x + x + ··· , yodd = a1 x − 3! 5! where aj +2 = (j + 1)(j + 2) − n(n + 1) aj . (j + 2)(j + 3) Show that both solutions, yeven and yodd , diverge for x = ±1 if the series continue to infinity. (e) Finally, show that by an appropriate choice of n, one series at a time may be converted into a polynomial, thereby avoiding the divergence catastrophe. In quantum mechanics this restriction of n to integral values corresponds to quantization of angular momentum. (d) 9.5.6 Develop series solutions for Hermite’s differential equation (a) y ′′ − 2xy ′ + 2αy = 0. ANS. k(k − 1) = 0, indicial equation. For k = 0, aj +2 = 2aj j −α (j + 1)(j + 2) (j even),  2(−α)x 2 22 (−α)(2 − α)x 4 yeven = a0 1 + + + ··· . 2! 4! For k = 1, aj +2 = 2aj j +1−α (j + 2)(j + 3) (j even),  2(1 − α)x 3 22 (1 − α)(3 − α)x 5 + + ··· . yodd = a1 x + 3! 5! 576 Chapter 9 Differential Equations (b) Show that both series solutions are convergent for all x, the ratio of successive coefficients behaving, for large index, like the corresponding ratio in the expansion of exp(x 2 ). (c) Show that by appropriate choice of α the series solutions may be cut off and converted to finite polynomials. (These polynomials, properly normalized, become the Hermite polynomials in Section 13.1.) 9.5.7 Laguerre’s ODE is xL′′n (x) + (1 − x)L′n (x) + nLn (x) = 0. Develop a series solution selecting the parameter n to make your series a polynomial. 9.5.8 Solve the Chebyshev equation  1 − x 2 Tn′′ − xTn′ + n2 Tn = 0, by series substitution. What restrictions are imposed on n if you demand that the series solution converge for x = ±1? ANS. The infinite series does converge for x = ±1 and no restriction on n exists (compare Exercise 5.2.16). 9.5.9 Solve  1 − x 2 Un′′ (x) − 3xUn′ (x) + n(n + 2)Un (x) = 0, choosing the root of the indicial equation to obtain a series of odd powers of x. 
Since the series will diverge for x = 1, choose n to convert it into a polynomial. k(k − 1) = 0. For k = 1, aj +2 = 9.5.10 (j + 1)(j + 3) − n(n + 2) aj . (j + 2)(j + 3) Obtain a series solution of the hypergeometric equation  x(x − 1)y ′′ + (1 + a + b)x − c y ′ + aby = 0. Test your solution for convergence. 9.5.11 Obtain two series solutions of the confluent hypergeometric equation xy ′′ + (c − x)y ′ − ay = 0. Test your solutions for convergence. 9.5.12 A quantum mechanical analysis of the Stark effect (parabolic coordinates) leads to the differential equation     du 1 m2 1 2 d ξ + Eξ + α − − F ξ u = 0. dξ dξ 2 4ξ 4 Here α is a separation constant, E is the total energy, and F is a constant, where F z is the potential energy added to the system by the introduction of an electric field. 9.5 Series Solutions — Frobenius’ Method 577 Using the larger root of the indicial equation, develop a power-series solution about ξ = 0. Evaluate the first three coefficients in terms of ao . Indicial equation u(ξ ) = a0 ξ m/2  1− k2 − m2 = 0, 4   α E α2 2 ξ + ··· . ξ+ − m+1 2(m + 1)(m + 2) 4(m + 2) Note that the perturbation F does not appear until a3 is included. 9.5.13 For the special case of no azimuthal dependence, the quantum mechanical analysis of the hydrogen molecular ion leads to the equation  d  2 du 1−η + αu + βη2 u = 0. dη dη Develop a power-series solution for u(η). Evaluate the first three nonvanishing coefficients in terms of a0 . Indicial equation k(k − 1) = 0,    β 4 (2 − α)(12 − α) 2−α 2 η + ··· . η + − uk=1 = a0 η 1 + 6 120 20 9.5.14 To a good approximation, the interaction of two nucleons may be described by a mesonic potential Ae−ax , x attractive for A negative. Develop a series solution of the resultant Schrödinger wave equation V= h¯ 2 d 2 ψ + (E − V )ψ = 0 2m dx 2 through the first three nonvanishing coefficients. '  ( ψ = a0 x + 21 A′ x 2 + 16 21 A′2 − E ′ − aA′ x 3 + · · · , where the prime indicates multiplication by 2m/h¯ 2 . 
9.5.15 Near the nucleus of a complex atom the potential energy of one electron is given by Ze2  1 + b1 r + b2 r 2 , r where the coefficients b1 and b2 arise from screening effects. For the case of zero angular momentum show that the first three terms of the solution of the Schrödinger equation have the same form as those of Exercise 9.5.14. By appropriate translation of coefficients or parameters, write out the first three terms in a series expansion of the wave function. V= 578 Chapter 9 Differential Equations 9.5.16 If the parameter a 2 in Eq. (9.110d) is equal to 2, Eq. (9.110d) becomes 1 ′ 2 y − 2 y = 0. 2 x x From the indicial equation and the recurrence relation derive a solution y = 1 + 2x + 2x 2 . Verify that this is indeed a solution by substituting back into the differential equation. y ′′ + 9.5.17 The modified Bessel function I0 (x) satisfies the differential equation d2 d I0 (x) + x I0 (x) − x 2 I0 (x) = 0. 2 dx dx From Exercise 7.3.4 the leading term in an asymptotic expansion is found to be x2 ex I0 (x) ∼ √ . 2πx Assume a series of the form ( ex ' 1 + b1 x −1 + b2 x −2 + · · · . I0 (x) ∼ √ 2πx Determine the coefficients b1 and b2 . ANS. b1 = 81 , b2 = 9 128 . 9.5.18 The even power-series solution of Legendre’s equation is given by Exercise 9.5.5. Take a0 = 1 and n not an even integer, say n = 0.5. Calculate the partial sums of the series through x 200 , x 400 , x 600 , . . . , x 2000 for x = 0.95(0.01)1.00. Also, write out the individual term corresponding to each of these powers. Note. This calculation does not constitute proof of convergence at x = 0.99 or divergence at x = 1.00, but perhaps you can see the difference in the behavior of the sequence of partial sums for these two values of x. 9.5.19 (a) 9.6 The odd power-series solution of Hermite’s equation is given by Exercise 9.5.6. Take a0 = 1. Evaluate this series for α = 0, x = 1, 2, 3. 
Cut off your calculation after the last term calculated has dropped below the maximum term by a factor of 10⁶ or more. Set an upper bound to the error made in ignoring the remaining terms in the infinite series.
(b) As a check on the calculation of part (a), show that the Hermite series y_odd(α = 0) corresponds to ∫₀^x exp(x²) dx.
(c) Calculate this integral for x = 1, 2, 3.

A SECOND SOLUTION

In Section 9.5 a solution of a second-order homogeneous ODE was developed by substituting in a power series. By Fuchs' theorem this is possible, provided the power series is an expansion about an ordinary point or a nonessential singularity.¹⁵ There is no guarantee that this approach will yield the two independent solutions we expect from a linear second-order ODE. In fact, we shall prove that such an ODE has at most two linearly independent solutions. Indeed, the technique gave only one solution for Bessel's equation (n an integer). In this section we also develop two methods of obtaining a second independent solution: an integral method and a power series containing a logarithmic term. First, however, we consider the question of independence of a set of functions.

¹⁵ This is why the classification of singularities in Section 9.4 is of vital importance.

Linear Independence of Solutions

Given a set of functions ϕ_λ, the criterion for linear dependence is the existence of a relation of the form

  Σ_λ k_λ ϕ_λ = 0,   (9.111)

in which not all the coefficients k_λ are zero. On the other hand, if the only solution of Eq. (9.111) is k_λ = 0 for all λ, the set of functions ϕ_λ is said to be linearly independent.

It may be helpful to think of linear dependence of vectors. Consider A, B, and C in three-dimensional space, with A · B × C ≠ 0. Then no nontrivial relation of the form

  aA + bB + cC = 0   (9.112)

exists. A, B, and C are linearly independent. On the other hand, any fourth vector, D, may be expressed as a linear combination of A, B, and C (see Section 3.1).
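The vector analogy is concrete enough to compute: if A · (B × C) ≠ 0, the 3 × 3 linear system for the expansion coefficients of any fourth vector D is solvable. A small NumPy sketch with arbitrarily chosen vectors (the particular A, B, C, D here are illustrative, not from the text):

```python
# If A . (B x C) != 0, any D is a linear combination of A, B, C.
import numpy as np

A = np.array([1., 0., 0.])
B = np.array([1., 1., 0.])
C = np.array([1., 1., 1.])
triple = np.dot(A, np.cross(B, C))
assert triple != 0            # A, B, C are linearly independent

D = np.array([2., -3., 5.])
# Solve D = a A + b B + c C for (a, b, c).
coeffs = np.linalg.solve(np.column_stack([A, B, C]), D)
print(coeffs, np.allclose(coeffs[0]*A + coeffs[1]*B + coeffs[2]*C, D))
# coeffs = [5, -8, 5] for this choice of D
```

The nonvanishing triple product is exactly the determinant condition that reappears below as the Wronskian.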
We can always write an equation of the form

  D − aA − bB − cC = 0,   (9.113)

and the four vectors are not linearly independent. The three noncoplanar vectors A, B, and C span our real three-dimensional space.

If a set of vectors or functions is mutually orthogonal, then its members are automatically linearly independent: orthogonality implies linear independence. This can easily be demonstrated by taking inner products (the scalar or dot product for vectors, the orthogonality integral of Section 10.2 for functions).

Let us assume that the functions ϕ_λ are differentiable as needed. Then, differentiating Eq. (9.111) repeatedly, we generate a set of equations

  Σ_λ k_λ ϕ'_λ = 0,   (9.114)
  Σ_λ k_λ ϕ''_λ = 0,   (9.115)

and so on. This gives us a set of homogeneous linear equations in which the k_λ are the unknown quantities. By Section 3.1 there is a solution k_λ ≠ 0 only if the determinant of the coefficients of the k_λ vanishes. This means

  | ϕ_1          ϕ_2          ⋯  ϕ_n          |
  | ϕ'_1         ϕ'_2         ⋯  ϕ'_n         |
  | ⋯            ⋯            ⋯  ⋯            | = 0.   (9.116)
  | ϕ_1^{(n−1)}  ϕ_2^{(n−1)}  ⋯  ϕ_n^{(n−1)}  |

This determinant is called the Wronskian.

1. If the Wronskian is not equal to zero, then Eq. (9.111) has no solution other than k_λ = 0. The set of functions ϕ_λ is therefore linearly independent.
2. If the Wronskian vanishes at isolated values of the argument, this does not necessarily prove linear dependence (unless the set of functions has only two functions). However, if the Wronskian is zero over the entire range of the variable, the functions ϕ_λ are linearly dependent over this range¹⁶ (compare Exercise 9.5.2 for the simple case of two functions).

Example 9.6.1 LINEAR INDEPENDENCE

The solutions of the linear oscillator equation (9.84) are ϕ_1 = sin ωx, ϕ_2 = cos ωx. The Wronskian becomes

  | sin ωx     cos ωx    |
  | ω cos ωx   −ω sin ωx | = −ω ≠ 0.

These two solutions, ϕ_1 and ϕ_2, are therefore linearly independent. For just two functions this means that one is not a multiple of the other, which is obviously true in this case.
You know that

  sin ωx = ±(1 − cos² ωx)^{1/2},

but this is not a linear relation of the form of Eq. (9.111).

Example 9.6.2 LINEAR DEPENDENCE

For an illustration of linear dependence, consider the solutions of the one-dimensional diffusion equation. We have ϕ_1 = e^x and ϕ_2 = e^{−x}, and we add ϕ_3 = cosh x, also a solution. The Wronskian is

  | e^x   e^{−x}    cosh x |
  | e^x   −e^{−x}   sinh x | = 0.
  | e^x   e^{−x}    cosh x |

The determinant vanishes for all x because the first and third rows are identical. Hence e^x, e^{−x}, and cosh x are linearly dependent, and, indeed, we have a relation of the form of Eq. (9.111):

  e^x + e^{−x} − 2 cosh x = 0,   with k_λ ≠ 0.

Now we are ready to prove the theorem that a second-order homogeneous ODE has at most two linearly independent solutions. Suppose y_1, y_2, y_3 are three solutions of the homogeneous ODE (9.80). Then we form the Wronskian W_{jk} = y_j y'_k − y'_j y_k of any pair y_j, y_k of them and note that

  W'_{jk} = y_j y''_k − y''_j y_k.

We divide each ODE by its y, getting −Q on the right-hand side, so

  y''_j/y_j + P y'_j/y_j = −Q(x) = y''_k/y_k + P y'_k/y_k.

Multiplying by y_j y_k, we find

  (y_j y''_k − y''_j y_k) + P (y_j y'_k − y'_j y_k) = 0,   or   W'_{jk} = −P W_{jk}   (9.117)

for any pair of solutions. Finally, we evaluate the Wronskian of all three solutions, expanding it along the second row and using Eq. (9.117) for the W'_{jk}:

  W = | y_1   y_2   y_3   |
      | y'_1  y'_2  y'_3  | = −y'_1 W'_{23} + y'_2 W'_{13} − y'_3 W'_{12}
      | y''_1 y''_2 y''_3 |
    = P (y'_1 W_{23} − y'_2 W_{13} + y'_3 W_{12}),

which is P times a determinant with the row of first derivatives repeated, and hence W = 0. The vanishing Wronskian, W = 0, because of two identical rows, is just the condition for linear dependence of the solutions y_j.

¹⁶ Compare H. Lass, Elements of Pure and Applied Mathematics, New York: McGraw-Hill (1957), p. 187, for proof of this assertion. It is assumed that the functions have continuous derivatives and that at least one of the minors of the bottom row of Eq. (9.116) (Laplace expansion) does not vanish in [a, b], the interval under consideration.
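The two Wronskian examples above can be reproduced symbolically (a small sketch using SymPy's `wronskian` helper):

```python
# Wronskians for Examples 9.6.1 and 9.6.2.
import sympy as sp

x, w = sp.symbols('x omega')

# Example 9.6.1: sin(wx), cos(wx) -> W = -omega != 0, hence independent.
W1 = sp.simplify(sp.wronskian([sp.sin(w * x), sp.cos(w * x)], x))
print(W1)            # -omega

# Example 9.6.2: exp(x), exp(-x), cosh(x) -> W = 0 for all x, dependent.
W2 = sp.simplify(sp.wronskian([sp.exp(x), sp.exp(-x), sp.cosh(x)], x))
print(W2)            # 0
```

Note that W2 vanishes identically, not merely at isolated points, which is what signals linear dependence.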
Thus, there are at most two linearly independent solutions of the homogeneous ODE. Similarly, one can prove that a linear homogeneous nth-order ODE has n linearly independent solutions y_j, so the general solution y(x) = Σ_j c_j y_j(x) is a linear combination of them.

A Second Solution

Returning to our linear, second-order, homogeneous ODE of the general form

  y'' + P(x) y' + Q(x) y = 0,   (9.118)

let y_1 and y_2 be two independent solutions. Then the Wronskian, by definition, is

  W = y_1 y'_2 − y'_1 y_2.   (9.119)

By differentiating the Wronskian, we obtain

  W' = y'_1 y'_2 + y_1 y''_2 − y''_1 y_2 − y'_1 y'_2
     = y_1 [−P(x) y'_2 − Q(x) y_2] − y_2 [−P(x) y'_1 − Q(x) y_1]
     = −P(x)(y_1 y'_2 − y'_1 y_2).

The expression in parentheses is just W, the Wronskian, and we have

  W' = −P(x) W.   (9.120)

In the special case that P(x) = 0, that is,

  y'' + Q(x) y = 0,   (9.121)

the Wronskian

  W = y_1 y'_2 − y'_1 y_2 = constant.   (9.122)

Since our original differential equation is homogeneous, we may multiply the solutions y_1 and y_2 by whatever constants we wish and arrange to have the Wronskian equal to unity (or −1). This case, P(x) = 0, appears more frequently than might be expected. Recall that the portion of ∇²(ψ/r) in spherical polar coordinates involving radial derivatives contains no first radial derivative. Finally, every linear second-order differential equation can be transformed into an equation of the form of Eq. (9.121) (compare Exercise 9.6.11).

For the general case, let us now assume that we have one solution of Eq. (9.118) by a series substitution (or by guessing). We now proceed to develop a second, independent solution for which W ≠ 0. Rewriting Eq. (9.120) as

  dW/W = −P dx,

we integrate over the variable x, from a to x, to obtain

  ln[W(x)/W(a)] = −∫_a^x P(x_1) dx_1,

or¹⁷

  W(x) = W(a) exp[−∫_a^x P(x_1) dx_1].   (9.123)

But

  W(x) = y_1 y'_2 − y'_1 y_2 = y_1² (d/dx)(y_2/y_1).   (9.124)

By combining Eqs. (9.123) and (9.124), we have

  (d/dx)(y_2/y_1) = W(a) exp[−∫_a^x P(x_1) dx_1]/y_1².   (9.125)

Finally, by integrating Eq. (9.125) from x_2 = b to x_2 = x we get

  y_2(x) = y_1(x) W(a) ∫_b^x {exp[−∫_a^{x_2} P(x_1) dx_1]/[y_1(x_2)]²} dx_2.   (9.126)

Here a and b are arbitrary constants and a term y_1(x) y_2(b)/y_1(b) has been dropped, for it leads to nothing new. Since W(a), the Wronskian evaluated at x = a, is a constant and our solutions for the homogeneous differential equation always contain an unknown normalizing factor, we set W(a) = 1 and write

  y_2(x) = y_1(x) ∫^x {exp[−∫^{x_2} P(x_1) dx_1]/[y_1(x_2)]²} dx_2.   (9.127)

Note that the lower limits x_1 = a and x_2 = b have been omitted. If they are retained, they simply make a contribution equal to a constant times the known first solution, y_1(x), and hence add nothing new. In the important special case of P(x) = 0, Eq. (9.127) reduces to

  y_2(x) = y_1(x) ∫^x dx_2/[y_1(x_2)]².   (9.128)

This means that by using either Eq. (9.127) or Eq. (9.128) we can take one known solution and by integrating can generate a second, independent solution of Eq. (9.118). This technique is used in Section 12.10 to generate a second solution of Legendre's differential equation.

¹⁷ If P(x) remains finite in the domain of interest, W(x) ≠ 0 unless W(a) = 0. That is, the Wronskian of our two solutions is either identically zero or never zero. However, if P(x) does not remain finite in our interval, then W(x) can have isolated zeros in that domain, and one must be careful to choose a so that W(a) ≠ 0.

Example 9.6.3 A SECOND SOLUTION FOR THE LINEAR OSCILLATOR EQUATION

From d²y/dx² + y = 0 with P(x) = 0, let one solution be y_1 = sin x. By applying Eq. (9.128), we obtain

  y_2(x) = sin x ∫^x dx_2/sin² x_2 = sin x (−cot x) = −cos x,

which is clearly independent (not a linear multiple) of sin x.

Series Form of the Second Solution

Further insight into the nature of the second solution of our differential equation may be obtained by the following sequence of operations.

1. Express P(x) and Q(x) in Eq.
(9.118) as P (x) = ∞  pi x i , i=−1 Q(x) = ∞  qj x j . (9.129) j =−2 The lower limits of the summations are selected to create the strongest possible regular singularity (at the origin). These conditions just satisfy Fuchs’ theorem and thus help us gain a better understanding of Fuchs’ theorem. 2. Develop the first few terms of a power-series solution, as in Section 9.5. 3. Using this solution as y1 , obtain a second series type solution, y2 , with Eq. (9.127), integrating term by term. Proceeding with Step 1, we have   y ′′ + p−1 x −1 + p0 + p1 x + · · · y ′ + q−2 x −2 + q−1 x −1 + · · · y = 0, (9.130) in which point x = 0 is at worst a regular singular point. If p−1 = q−1 = q−2 = 0, it reduces to an ordinary point. Substituting y= ∞  λ=0 aλ x k+λ 584 Chapter 9 Differential Equations (Step 2), we obtain ∞ ∞ ∞    pi x i (k + λ)(k + λ − 1)aλ x k+λ−2 + (k + λ)aλ x k+λ−1 i=−1 λ=0 + ∞  qj x j j =−2 ∞  λ=0 λ=0 aλ x k+λ = 0. (9.131) Assuming that p−1 = 0, q−2 = 0, our indicial equation is k(k − 1) + p−1 k + q−2 = 0, which sets the net coefficient of x k−2 equal to zero. This reduces to k 2 + (p−1 − 1)k + q−2 = 0. (9.132) We denote the two roots of this indicial equation by k = α and k = α − n, where n is zero or a positive integer. (If n is not an integer, we expect two independent series solutions by the methods of Section 9.5 and we are done.) Then (k − α)(k − α + n) = 0, (9.133) or k 2 + (n − 2α)k + α(α − n) = 0, and equating coefficients of k in Eqs. (9.132) and (9.133), we have p−1 − 1 = n − 2α. (9.134) The known series solution corresponding to the larger root k = α may be written as y1 = x α ∞  aλ x λ . λ=0 Substituting this series solution into Eq. (9.127) (Step 3), we are faced with x x i exp(− a 2 ∞ i=−1 pi x1 dx1 ) y2 (x) = y1 (x) dx2 , λ 2 x22α ( ∞ λ=0 aλ x2 ) (9.135) where the solutions y1 and y2 have been normalized so that the Wronskian W (a) = 1. 
Tackling the exponential factor first, we have x2 a ∞  i=−1 pi x1i dx1 = p−1 ln x2 + ∞  pk k+1 x + f (a) k+1 2 k=0 (9.136) 9.6 A Second Solution 585 with f (a) an integration constant that may depend on a. Hence,  exp − x2 a  pi x1i dx1 i     ∞  −p pk k+1 x2 = exp −f (a) x2 −1 exp − k+1 k=0 2    ∞ ∞   −p−1 pk k+1 1 pk k+1 1− x + x − + ··· . = exp −f (a) x2 k+1 2 2! k+1 2 k=0 k=0 (9.137) This final series expansion of the exponential is certainly convergent if the original expansion of the coefficient P (x) was uniformly convergent. The denominator in Eq. (9.135) may be handled by writing x22α  ∞ λ=0 aλ x2λ 2 −1 = x2−2α  ∞ aλ x2λ λ=0 −2 = x2−2α ∞  bλ x2λ . (9.138) λ=0 Neglecting constant factors, which will be picked up anyway by the requirement that W (a) = 1, we obtain y2 (x) = y1 (x) x −p −2α x2 −1  ∞ λ=0 cλ x2λ  dx2 . (9.139) By Eq. (9.134), −p−1 −2α x2 = x2−n−1 , (9.140) and we have assumed here that n is an integer. Substituting this result into Eq. (9.139), we obtain x  −n−1 y2 (x) = y1 (x) c0 x2 + c1 x2−n + c2 x2−n+1 + · · · + cn x2−1 + · · · dx2 . (9.141) The integration indicated in Eq. (9.141) leads to a coefficient of y1 (x) consisting of two parts: 1. A power series starting with x −n . 2. A logarithm term from the integration of x −1 (when λ = n). This term always appears when n is an integer, unless cn fortuitously happens to vanish.18 18 For parity considerations, ln x is taken to be ln |x|, even. 586 Chapter 9 Differential Equations Example 9.6.4 A SECOND SOLUTION OF BESSEL’S EQUATION From Bessel’s equation, Eq. (9.100) (divided by x 2 to agree with Eq. (9.118)), we have P (x) = x −1 Q(x) = 1 for the case n = 0. Hence p−1 = 1, q0 = 1; all other pi and qj vanish. The Bessel indicial equation is k2 = 0 (Eq. (9.103) with n = 0). Hence we verify Eqs. (9.132) to (9.134) with n and α = 0. Our first solution is available from Eq. (9.108). Relabeling it to agree with Chapter 11 (and using a0 = 1), we obtain19 y1 (x) = J0 (x) = 1 −  x2 x4 + − O x6 . 
4 64 (9.142a) Now, substituting all this into Eq. (9.127), we have the specific case corresponding to Eq. (9.135): x x exp[− 2 x1−1 dx1 ] y2 (x) = J0 (x) dx2 . (9.142b) [1 − x22 /4 + x24 /64 − · · · ]2 From the numerator of the integrand,  x2 dx1 1 = exp[− ln x2 ] = . exp − x1 x2 −p This corresponds to the x2 −1 in Eq. (9.137). From the denominator of the integrand, using a binomial expansion, we obtain  x 2 x 4 −2 x 2 5x 4 1− 2 + 2 = 1 + 2 + 2 + ··· . 4 64 2 32 Corresponding to Eq. (9.139), we have  x x22 5x24 1 y2 (x) = J0 (x) 1+ + + · · · dx2 x2 2 32   x 2 5x 4 + + ··· . = J0 (x) ln x + 4 128 (9.142c) Let us check this result. From Eqs. (11.62) and (11.64), which give the standard form of the second solution (higher-order terms are needed)   2 x 2 3x 4 2 N0 (x) = [ln x − ln 2 + γ ]J0 (x) + − + ··· . (9.142d) π π 4 128 Two points arise: (1) Since Bessel’s equation is homogeneous, we may multiply y2 (x) by any constant. To match N0 (x), we multiply our y2 (x) by 2/π . (2) To our second solution, 19 The capital O (order of) as written here means terms proportional to x 6 and possibly higher powers of x. 9.6 A Second Solution 587 (2/π)y2 (x), we may add any constant multiple of the first solution. Again, to match N0 (x) we add 2 [− ln 2 + γ ]J0 (x), π where γ is the usual Euler–Mascheroni constant (Section 5.2).20 Our new, modified second solution is  2  2 x 2 5x 4 y2 (x) = [ln x − ln 2 + γ ]J0 (x) + J0 (x) + + ··· . (9.142e) π π 4 128 Now the comparison with N0 (x) becomes a simple multiplication of J0 (x) from Eq. (9.142a) and the curly bracket of Eq. (9.142c). The multiplication checks, through terms of order x 2 and x 4 , which is all we carried. Our second solution from Eqs. (9.127) and (9.135) agrees with the standard second solution, the Neumann function, N0 (x). From the preceding analysis, the second solution of Eq. 
(9.118), y2 (x), may be written as ∞  y2 (x) = y1 (x) ln x + dj x j +α , (9.142f) j =−n the first solution times ln x and another power series, this one starting with x α−n , which means that we may look for a logarithmic term when the indicial equation of Section 9.5 gives only one series solution. With the form of the second solution specified by Eq. (9.142f), we can substitute Eq. (9.142f) into the original differential equation and determine the coefficients dj exactly as in Section 9.5. It may be worth noting that no series expansion of ln x is needed. In the substitution, ln x will drop out; its derivatives will survive.  The second solution will usually diverge at the origin because of the logarithmic factor and the negative powers of x in the series. For this reason y2 (x) is often referred to as the irregular solution. The first series solution, y1 (x), which usually converges at the origin, is called the regular solution. The question of behavior at the origin is discussed in more detail in Chapters 11 and 12, in which we take up Bessel functions, modified Bessel functions, and Legendre functions. Summary The two solutions of both sections (together with the exercises) provide a complete solution of our linear, homogeneous, second-order ODE — assuming that the point of expansion is no worse than a regular singularity. At least one solution can always be obtained by series substitution (Section 9.5). A second, linearly independent solution can be constructed by the Wronskian double integral, Eq. (9.127). This is all there are: No third, linearly independent solution exists (compare Exercise 9.6.10). The nonhomogeneous, linear, second-order ODE will have an additional solution: the particular solution. This particular solution may be obtained by the method of variation of parameters, Exercise 9.6.25, or by techniques such as Green’s function, Section 9.7. 
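The Wronskian integral, Eq. (9.128), mentioned in the summary is easy to test numerically. The sketch below reproduces Example 9.6.3 by quadrature, taking y_1 = sin x and the (arbitrary) lower limit π/2; any fixed lower limit only adds a multiple of y_1:

```python
# Second solution by Eq. (9.128): y2(x) = y1(x) * Int dx2 / y1(x2)^2,
# with y1 = sin x.  Analytically y2 = -cos x (Example 9.6.3).
from math import sin, cos, pi
from scipy.integrate import quad

def y2(x):
    integral, _ = quad(lambda t: 1.0 / sin(t)**2, pi / 2, x)
    return sin(x) * integral

for x in (0.5, 1.0, 2.5):
    print(x, y2(x), -cos(x))   # the two columns agree
```

Here the integral evaluates to −cot x exactly, so the quadrature recovers −cos x to the integrator's tolerance.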
20 The Neumann function N is defined as it is in order to achieve convenient asymptotic properties, Sections 11.3 and 11.6. 0 588 Chapter 9 Differential Equations Exercises 9.6.1 You know that the three unit vectors xˆ , yˆ , and zˆ are mutually perpendicular (orthogonal). Show that xˆ , yˆ , and zˆ are linearly independent. Specifically, show that no relation of the form of Eq. (9.111) exists for xˆ , yˆ , and zˆ . 9.6.2 The criterion for the linear independence of three vectors A, B, and C is that the equation aA + bB + cC = 0 (analogous to Eq. (9.111)) has no solution other than the trivial a = b = c = 0. Using components A = (A1 , A2 , A3 ), and so on, set up the determinant criterion for the existence or nonexistence of a nontrivial solution for the coefficients a, b, and c. Show that your criterion is equivalent to the triple scalar product A · B × C = 0. 9.6.3 Using the Wronskian determinant, show that the set of functions   xn 1, (n = 1, 2, . . . , N) n! is linearly independent. 9.6.4 If the Wronskian of two functions y1 and y2 is identically zero, show by direct integration that y1 = cy2 , that is, that y1 and y2 are dependent. Assume the functions have continuous derivatives and that at least one of the functions does not vanish in the interval under consideration. 9.6.5 The Wronskian of two functions is found to be zero at x0 − ε ≤ x ≤ x0 + ε for arbitrarily small ε > 0. Show that this Wronskian vanishes for all x and that the functions are linearly dependent. 9.6.6 The three functions sin x, ex , and e−x are linearly independent. No one function can be written as a linear combination of the other two. Show that the Wronskian of sin x, ex , and e−x vanishes but only at isolated points. ANS. W = 4 sin x, W = 0 for x = ±nπ, n = 0, 1, 2, . . . . 9.6.7 Consider two functions ϕ1 = x and ϕ2 = |x| = x sgn x (Fig. 9.3). The function sgn x is the sign of x. Since ϕ1′ = 1 and ϕ2′ = sgn x, W (ϕ1 , ϕ2 ) = 0 for any interval, including [−1, +1]. 
Does the vanishing of the Wronskian over [−1, +1] prove that ϕ1 and ϕ2 are linearly dependent? Clearly, they are not. What is wrong? 9.6.8 Explain that linear independence does not mean the absence of any dependence. Illustrate your argument with cosh x and ex . 9.6.9 Legendre’s differential equation  1 − x 2 y ′′ − 2xy ′ + n(n + 1)y = 0 9.6 A Second Solution FIGURE 9.3 589 x and |x|. has a regular solution Pn (x) and an irregular solution Qn (x). Show that the Wronskian of Pn and Qn is given by Pn (x)Q′n (x) − Pn′ (x)Qn (x) = with An independent of x. 9.6.10 An , 1 − x2 Show, by means of the Wronskian, that a linear, second-order, homogeneous ODE of the form y ′′ (x) + P (x)y ′ (x) + Q(x)y(x) = 0 cannot have three independent solutions. (Assume a third solution and show that the Wronskian vanishes for all x.) 9.6.11 Transform our linear, second-order ODE y ′′ + P (x)y ′ + Q(x)y = 0 by the substitution 1 y = z exp − 2 x P (t) dt  and show that the resulting differential equation for z is z′′ + q(x)z = 0, where q(x) = Q(x) − 12 P ′ (x) − 14 P 2 (x). Note. This substitution can be derived by the technique of Exercise 9.6.24. 9.6.12 Use the result of Exercise 9.6.11 to show that the replacement of ϕ(r) by rϕ(r) may be expected to eliminate the first derivative from the Laplacian in spherical polar coordinates. See also Exercise 2.5.18(b). 590 Chapter 9 Differential Equations 9.6.13 By direct differentiation and substitution show that s x exp[− P (t) dt] ds y2 (x) = y1 (x) [y1 (s)]2 satisfies (like y1 (x)) the ODE y2′′ (x) + P (x)y2′ (x) + Q(x)y2 (x) = 0. 9.6.14 Note. The Leibniz formula for the derivative of an integral is h(α) h(α)  dh(α)  dg(α) d ∂f (x, α) f (x, α) dx = dx + f h(α), α − f g(α), α . dα g(α) ∂α dα dα g(α) In the equation y2 (x) = y1 (x) y1 (x) satisfies x s exp[− P (t) dt] ds [y1 (s)]2 y1′′ + P (x)y1′ + Q(x)y1 = 0. The function y2 (x) is a linearly independent second solution of the same equation. 
Show that the inclusion of lower limits on the two integrals leads to nothing new, that is, that it generates only an overall constant factor and a constant multiple of the known solution y1 (x). 9.6.15 9.6.16 Given that one solution of m2 1 R ′′ + R ′ − 2 R = 0 r r is R = r m , show that Eq. (9.127) predicts a second solution, R = r −m . n 2n+1 /(2n + 1)! as a solution of the linear oscillator equaUsing y1 (x) = ∞ n=0 (−1) x tion, follow the analysis culminating in Eq. (9.142f) and show that c1 = 0 so that the second solution does not, in this case, contain a logarithmic term. 9.6.17 Show that when n is not an integer in Bessel’s ODE, Eq. (9.100), the second solution of Bessel’s equation, obtained from Eq. (9.127), does not contain a logarithmic term. 9.6.18 (a) One solution of Hermite’s differential equation y ′′ − 2xy ′ + 2αy = 0 for α = 0 is y1 (x) = 1. Find a second solution, y2 (x), using Eq. (9.127). Show that your second solution is equivalent to yodd (Exercise 9.5.6). (b) Find a second solution for α = 1, where y1 (x) = x, using Eq. (9.127). Show that your second solution is equivalent to yeven (Exercise 9.5.6). 9.6.19 One solution of Laguerre’s differential equation xy ′′ + (1 − x)y ′ + ny = 0 for n = 0 is y1 (x) = 1. Using Eq. (9.127), develop a second, linearly independent solution. Exhibit the logarithmic term explicitly. 9.6 A Second Solution 9.6.20 For Laguerre’s equation with n = 0, y2 (x) = (a) (b) (c) 9.6.21 x 591 es ds. s Write y2 (x) as a logarithm plus a power series. Verify that the integral form of y2 (x), previously given, is a solution of Laguerre’s equation (n = 0) by direct differentiation of the integral and substitution into the differential equation. Verify that the series form of y2 (x), part (a), is a solution by differentiating the series and substituting back into Laguerre’s equation. One solution of the Chebyshev equation  1 − x 2 y ′′ − xy ′ + n2 y = 0 for n = 0 is y1 = 1. (a) (b) Using Eq. 
(9.127), develop a second, linearly independent solution. Find a second solution by direct integration of the Chebyshev equation. Hint. Let v = y ′ and integrate. Compare your result with the second solution given in Section 13.3. ANS. (a) y2 = sin−1 x. (b) The second solution, Vn (x), is not defined for n = 0. 9.6.22 9.6.23 One solution of the Chebyshev equation  1 − x 2 y ′′ − xy ′ + n2 y = 0 for n = 1 is y1 (x) = x. Set up the Wronskian double integral solution and derive a second solution, y2 (x).  1/2 ANS. y2 = − 1 − x 2 . The radial Schrödinger wave equation has the form   h¯ 2 d 2 h¯ 2 − + l(l + 1) + V (r) y(r) = Ey(r). 2m dr 2 2mr 2 The potential energy V (r) may be expanded about the origin as V (r) = (a) (b) 9.6.24 b−1 + b0 + b1 r + · · · . r Show that there is one (regular) solution starting with r l+1 . From Eq. (9.128) show that the irregular solution diverges at the origin as r −l . Show that if a second solution, y2 , is assumed to have the form y2 (x) = y1 (x)f (x), substitution back into the original equation y2′′ + P (x)y2′ + Q(x)y2 = 0 592 Chapter 9 Differential Equations leads to f (x) = in agreement with Eq. (9.127). 9.6.25 x s exp[− P (t)dt] ds, [y1 (s)]2 If our linear, second-order ODE is nonhomogeneous, that is, of the form of Eq. (9.82), the most general solution is y(x) = y1 (x) + y2 (x) + yp (x). (y1 and y2 are independent solutions of the homogeneous equation.) Show that x x y1 (s)F (s) ds y2 (s)F (s) ds yp (x) = y2 (x) − y1 (x) , W {y1 (s), y2 (s)} W {y1 (s), y2 (s)} with W {y1 (x), y2 (x)} the Wronskian of y1 (s) and y2 (s). Hint. As in Exercise 9.6.24, let yp (x) = y1 (x)v(x) and develop a first-order differential equation for v ′ (x). 9.6.26 (a) Show that y ′′ + 1 − α2 y=0 4x 2 has two solutions: y1 (x) = a0 x (1+α)/2 , y2 (x) = a0 x (1−α)/2 . (b) For α = 0 the two linearly independent solutions of part (a) reduce to y10 = a0 x 1/2 . Using Eq. (9.128) derive a second solution, y20 (x) = a0 x 1/2 ln x. 
Verify that $y_{20}$ is indeed a solution.

(c) Show that the second solution from part (b) may be obtained as a limiting case from the two solutions of part (a):
\[
y_{20}(x) = \lim_{\alpha\to 0}\left[\frac{y_1 - y_2}{\alpha}\right].
\]

9.7 NONHOMOGENEOUS EQUATION — GREEN'S FUNCTION

The series substitution of Section 9.5 and the Wronskian double integral of Section 9.6 provide the most general solution of the homogeneous, linear, second-order ODE. The specific solution, $y_p$, linearly dependent on the source term ($F(x)$ of Eq. (9.82)), may be cranked out by the variation of parameters method, Exercise 9.6.25. In this section we turn to a different method of solution: Green's function.

For a brief introduction to the Green's function method, as applied to the solution of a nonhomogeneous PDE, it is helpful to use the electrostatic analog. In the presence of charges the electrostatic potential $\psi$ satisfies Poisson's nonhomogeneous equation (compare Section 1.14),
\[
\nabla^2 \psi = -\frac{\rho}{\varepsilon_0} \quad \text{(mks units)}, \tag{9.143}
\]
and Laplace's homogeneous equation,
\[
\nabla^2 \psi = 0, \tag{9.144}
\]
in the absence of electric charge ($\rho = 0$). If the charges are point charges $q_i$, we know that the solution is
\[
\psi = \frac{1}{4\pi\varepsilon_0}\sum_i \frac{q_i}{r_i}, \tag{9.145}
\]
a superposition of single-point charge solutions obtained from Coulomb's law for the force between two point charges $q_1$ and $q_2$,
\[
\mathbf{F} = \frac{q_1 q_2}{4\pi\varepsilon_0 r^2}\,\hat{\mathbf{r}}. \tag{9.146}
\]
By replacement of the discrete point charges with a smeared-out distributed charge, charge density $\rho$, Eq. (9.145) becomes
\[
\psi(\mathbf{r} = 0) = \frac{1}{4\pi\varepsilon_0}\int \frac{\rho(\mathbf{r})}{r}\,d\tau \tag{9.147}
\]
or, for the potential at $\mathbf{r} = \mathbf{r}_1$ away from the origin and the charge at $\mathbf{r} = \mathbf{r}_2$,
\[
\psi(\mathbf{r}_1) = \frac{1}{4\pi\varepsilon_0}\int \frac{\rho(\mathbf{r}_2)}{|\mathbf{r}_1 - \mathbf{r}_2|}\,d\tau_2. \tag{9.148}
\]
We use $\psi$ as the potential corresponding to the given distribution of charge and therefore satisfying Poisson's equation (9.143), whereas a function $G$, which we label Green's function, is required to satisfy Poisson's equation with a point source at the point defined by $\mathbf{r}_2$:
\[
\nabla^2 G = -\delta(\mathbf{r}_1 - \mathbf{r}_2).
\]
(9.149)

Physically, then, $G$ is the potential at $\mathbf{r}_1$ corresponding to a unit source at $\mathbf{r}_2$. By Green's theorem (Section 1.11, Eq. (1.104)),
\[
\int \left(\psi\nabla^2 G - G\nabla^2\psi\right) d\tau_2 = \oint \left(\psi\nabla G - G\nabla\psi\right)\cdot d\boldsymbol{\sigma}. \tag{9.150}
\]
Assuming that the integrand falls off faster than $r^{-2}$, we may simplify our problem by taking the volume so large that the surface integral vanishes, leaving
\[
\int \psi\nabla^2 G\, d\tau_2 = \int G\nabla^2\psi\, d\tau_2, \tag{9.151}
\]
or, by substituting in Eqs. (9.143) and (9.149), we have
\[
-\int \psi(\mathbf{r}_2)\,\delta(\mathbf{r}_1 - \mathbf{r}_2)\, d\tau_2 = -\int \frac{G(\mathbf{r}_1, \mathbf{r}_2)\,\rho(\mathbf{r}_2)}{\varepsilon_0}\, d\tau_2. \tag{9.152}
\]
Integration by employing the defining property of the Dirac delta function (Eq. (1.171b)) produces
\[
\psi(\mathbf{r}_1) = \frac{1}{\varepsilon_0}\int G(\mathbf{r}_1, \mathbf{r}_2)\,\rho(\mathbf{r}_2)\, d\tau_2. \tag{9.153}
\]
Note that we have used Eq. (9.149) to eliminate $\nabla^2 G$ but that the function $G$ itself is still unknown. In Section 1.14, Gauss' law, we found that
\[
\int \nabla^2\!\left(\frac{1}{r}\right) d\tau = \begin{cases} 0, \\ -4\pi, \end{cases} \tag{9.154}
\]
zero if the volume did not include the origin and $-4\pi$ if the origin were included. This result from Section 1.14 may be rewritten as in Eq. (1.170), or
\[
\nabla^2\!\left(\frac{1}{4\pi r}\right) = -\delta(\mathbf{r}), \quad\text{or}\quad \nabla^2\!\left(\frac{1}{4\pi r_{12}}\right) = -\delta(\mathbf{r}_1 - \mathbf{r}_2), \tag{9.155}
\]
corresponding to a shift of the electrostatic charge from the origin to the position $\mathbf{r} = \mathbf{r}_2$. Here $r_{12} = |\mathbf{r}_1 - \mathbf{r}_2|$, and the Dirac delta function $\delta(\mathbf{r}_1 - \mathbf{r}_2)$ vanishes unless $\mathbf{r}_1 = \mathbf{r}_2$. Therefore, in a comparison of Eqs. (9.149) and (9.155), the function $G$ (Green's function) is given by
\[
G(\mathbf{r}_1, \mathbf{r}_2) = \frac{1}{4\pi|\mathbf{r}_1 - \mathbf{r}_2|}. \tag{9.156}
\]
The solution of our differential equation (Poisson's equation) is
\[
\psi(\mathbf{r}_1) = \frac{1}{4\pi\varepsilon_0}\int \frac{\rho(\mathbf{r}_2)}{|\mathbf{r}_1 - \mathbf{r}_2|}\, d\tau_2, \tag{9.157}
\]
in complete agreement with Eq. (9.148). Actually $\psi(\mathbf{r}_1)$, Eq. (9.157), is the particular solution of Poisson's equation. We may add solutions of Laplace's equation (compare Eq. (9.83)); such solutions could describe an external field. These results will be generalized to the second-order, linear, but nonhomogeneous differential equation
\[
\mathcal{L}y(\mathbf{r}_1) = -f(\mathbf{r}_1), \tag{9.158}
\]
where $\mathcal{L}$ is a linear differential operator.
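Before generalizing to an arbitrary operator, the electrostatic Green's function just obtained can be verified symbolically. The sketch below (assuming SymPy is available) confirms the two facts that fix $G = 1/(4\pi r)$: it is harmonic away from the source, and its flux through any sphere about the source equals $-1$, which is the content of Eqs. (9.154) and (9.155).

```python
# Symbolic check (SymPy): G = 1/(4*pi*r) is harmonic for r != 0, and its
# flux through a sphere of radius R about the source is -1, matching
# Eqs. (9.154)-(9.156).
import sympy as sp

x, y, z, R = sp.symbols('x y z R', positive=True)
r = sp.sqrt(x**2 + y**2 + z**2)
G = 1 / (4 * sp.pi * r)

# Laplacian in Cartesian coordinates vanishes away from the origin
laplacian = sum(sp.diff(G, v, 2) for v in (x, y, z))
assert sp.simplify(laplacian) == 0

# radial derivative of 1/(4*pi*R), times the sphere area 4*pi*R**2
flux = sp.diff(1 / (4 * sp.pi * R), R) * 4 * sp.pi * R**2
assert sp.simplify(flux) == -1
```

The unit (negative) flux is exactly the normalization enforced by the delta function on the right-hand side of Eq. (9.149).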
The Green’s function is taken to be a solution of LG(r1 , r2 ) = −δ(r1 − r2 ), (9.159) analogous to Eq. (9.149). The Green’s function depends on boundary conditions that may no longer be those of electrostatics in a region of infinite extent. Then the particular solution y(r1 ) becomes y(r1 ) = G(r1 , r2 )f (r2 ) dτ2 . (9.160) 9.7 Nonhomogeneous Equation — Green’s Function 595 (There may also be an integral over a bounding surface, depending on the conditions specified.) In summary, Green’s function, often written G(r1 , r2 ) as a reminder of the name, is a solution of Eq. (9.149) or Eq. (9.159) more generally. It enters in an integral solution of our differential equation, as in Eqs. (9.148) and (9.153). For the simple, but important, electrostatic case we obtain Green’s function, G(r1 , r2 ), by Gauss’ law, comparing Eqs. (9.149) and (9.155). Finally, from the final solution (Eq. (9.157)) it is possible to develop a physical interpretation of Green’s function. It occurs as a weighting function or propagator function that enhances or reduces the effect of the charge element ρ(r2 ) dτ2 according to its distance from the field point r1 . Green’s function, G(r1 , r2 ), gives the effect of a unit point source at r2 in producing a potential at r1 . This is how it was introduced in Eq. (9.149); this is how it appears in Eq. (9.157). Symmetry of Green’s Function An important property of Green’s function is the symmetry of its two variables; that is, G(r1 , r2 ) = G(r2 , r1 ). (9.161) Although this is obvious in the electrostatic case just considered, it can be proved under more general conditions. In place of Eq. (9.149), let us require that G(r, r1 ) satisfy21  ∇ · p(r)∇G(r, r1 ) + λq(r)G(r, r1 ) = −δ(r − r1 ), (9.162) corresponding to a mathematical point source at r = r1 . Here the functions p(r) and q(r) are well-behaved but otherwise arbitrary functions of r. 
The Green’s function, G(r, r2 ), satisfies the same equation, but the subscript 1 is replaced by subscript 2. The Green’s functions, G(r, r1 ) and G(r, r2 ), have the same values over a given surface S of some volume of finite or infinite extent, and their normal derivatives have the same values over the surface S, or these Green’s functions vanish on S (Dirichlet boundary conditions, Section 9.1).22 Then G(r, r2 ) is a sort of potential at r, created by a unit point source at r2 . We multiply the equation for G(r, r1 ) by G(r, r2 ) and the equation for G(r, r2 ) by G(r, r1 ) and then subtract the two:   G(r, r2 )∇ · p(r)∇G(r, r1 ) − G(r, r1 )∇ · p(r)∇G(r, r2 ) = −G(r, r2 )δ(r − r1 ) + G(r, r1 )δ(r − r2 ). (9.163) The first term in Eq. (9.163),  G(r, r2 )∇ · p(r)∇G(r, r1 ) , may be replaced by  ∇ · G(r, r2 )p(r)∇G(r, r1 ) − ∇G(r, r2 ) · p(r)∇G(r, r1 ). 21 Equation (9.162) is a three-dimensional, inhomogeneous version of the self-adjoint eigenvalue equation, Eq. (10.8). 22 Any attempt to demand that the normal derivatives vanish at the surface (Neumann’s conditions, Section 9.1) leads to trouble with Gauss’ law. It is like demanding that the surface. E · dσ = 0 when you know perfectly well that there is some electric charge inside 596 Chapter 9 Differential Equations A similar transformation is carried out on the second term. Then integrating over the volume whose surface is S and using Green’s theorem, we obtain a surface integral:  G(r, r2 )p(r)∇G(r, r1 ) − G(r, r1 )p(r)∇G(r, r2 ) · dσ S = −G(r1 , r2 ) + G(r2 , r1 ). (9.164) The terms on the right-hand side appear when we use the Dirac delta functions in Eq. (9.163) and carry out the volume integration. With the boundary conditions earlier imposed on the Green’s function, the surface integral vanishes and G(r1 , r2 ) = G(r2 , r1 ), (9.165) which shows that Green’s function is symmetric. If the eigenfunctions are complex, boundary conditions corresponding to Eqs. (10.19) to (10.20) are appropriate. 
Equation (9.165) becomes G(r1 , r2 ) = G∗ (r2 , r1 ). (9.166) Note that this symmetry property holds for Green’s functions in every equation in the form of Eq. (9.162). In Chapter 10 we shall call equations in this form self-adjoint. The symmetry is the basis of various reciprocity theorems; the effect of a charge at r2 on the potential at r1 is the same as the effect of a charge at r1 on the potential at r2 . This use of Green’s functions is a powerful technique for solving many of the more difficult problems of mathematical physics. Form of Green’s Functions Let us assume that L is a self-adjoint differential operator of the general form23  L1 = ∇ 1 · p(r1 )∇ 1 + q(r1 ). (9.167) Here the subscript 1 on L emphasizes that L operates on r1 . Then, as a simple generalization of Green’s theorem, Eq. (1.104), we have (vL2 u − uL2 v) dτ2 = p(v∇ 2 u − u∇ 2 v) · dσ 2 , (9.168) in which all quantities have r2 as their argument. (To verify Eq. (9.168), take the divergence of the integrand of the surface integral.) We let u(r2 ) = y(r2 ) so that Eq. (9.158) applies and v(r2 ) = G(r1 , r2 ) so that Eq. (9.159) applies. (Remember, G(r1 , r2 ) = G(r2 , r1 ).) Substituting into Green’s theorem we get ' ( −G(r1 , r2 )f (r2 ) + y(r2 )δ(r1 − r2 ) dτ2 = ' ( p(r2 ) G(r1 , r2 )∇ 2 y(r2 ) − y(r2 )∇ 2 G(r1 , r2 ) · dσ 2 . 23 L may be in 1, 2, or 3 dimensions (with appropriate interpretation of ∇ ). 1 1 (9.169) 9.7 Nonhomogeneous Equation — Green’s Function 597 When we integrate over the Dirac delta function y(r1 ) = G(r1 , r2 )f (r2 ) dτ2 + ' ( p(r2 ) G(r1 , r2 )∇ 2 y(r2 ) − y(r2 )∇ 2 G(r1 , r2 ) · dσ 2 , (9.170) our solution to Eq. (9.158) appears as a volume integral plus a surface integral. If y and G both satisfy Dirichlet boundary conditions or if both satisfy Neumann boundary conditions, the surface integral vanishes and we regain Eq. (9.160). The volume integral is a weighted integral over the source term f (r2 ) with our Green’s function G(r1 , r2 ) as the weighting function. 
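The symmetry property of Eqs. (9.162) to (9.165) can be illustrated with a one-dimensional discretization of the self-adjoint operator of Eq. (9.167). In this sketch (the coefficient functions p, q and the value of λ are illustrative choices, not taken from the text), $\mathcal{L} = \frac{d}{dx}\left(p\frac{d}{dx}\right) + \lambda q$ with Dirichlet conditions becomes a symmetric tridiagonal matrix, and the discrete Green's matrix $-\mathcal{L}^{-1}/h$ comes out symmetric, as Eq. (9.165) requires.

```python
# Discrete self-adjoint operator L = d/dx(p dy/dx) + lam*q on (0, 1) with
# Dirichlet boundary conditions; p, q, lam below are illustrative samples.
import numpy as np

n = 200
h = 1.0 / (n + 1)
x = h * np.arange(1, n + 1)
p = lambda t: 1.0 + t ** 2
q = lambda t: np.sin(t)
lam = 0.5

L = np.zeros((n, n))
for i in range(n):
    pm, pp = p(x[i] - h / 2), p(x[i] + h / 2)
    L[i, i] = -(pm + pp) / h ** 2 + lam * q(x[i])
    if i > 0:
        L[i, i - 1] = pm / h ** 2
    if i < n - 1:
        L[i, i + 1] = pp / h ** 2

# Columns of G solve L G(., x_j) = -delta(x - x_j); on the grid the delta
# at x_j is the unit vector e_j scaled by 1/h.
G = -np.linalg.inv(L) / h
assert np.allclose(G, G.T, atol=1e-8)   # G(x_i, x_j) = G(x_j, x_i)
```

The symmetry here is inherited directly from the symmetry of the matrix for $\mathcal{L}$, which is the discrete counterpart of self-adjointness.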
For the special case of $p(\mathbf{r}_1) = 1$ and $q(\mathbf{r}_1) = 0$, $\mathcal{L}$ is $\nabla^2$, the Laplacian. Let us integrate
\[
\nabla_1^2 G(\mathbf{r}_1, \mathbf{r}_2) = -\delta(\mathbf{r}_1 - \mathbf{r}_2) \tag{9.171}
\]
over a small volume including the point source. Then
\[
\int \nabla_1\cdot\nabla_1 G(\mathbf{r}_1, \mathbf{r}_2)\, d\tau_1 = -\int \delta(\mathbf{r}_1 - \mathbf{r}_2)\, d\tau_1 = -1. \tag{9.172}
\]
The volume integral on the left may be transformed by Gauss' theorem, as in the development of Gauss' law, Section 1.14. We find that
\[
\oint \nabla_1 G(\mathbf{r}_1, \mathbf{r}_2)\cdot d\boldsymbol{\sigma}_1 = -1. \tag{9.173}
\]
This shows, incidentally, that it may not be possible to impose a Neumann boundary condition, that the normal derivative of the Green's function, $\partial G/\partial n$, vanishes over the entire surface. If we are in three-dimensional space, Eq. (9.173) is satisfied by taking
\[
\frac{\partial}{\partial r_{12}}\,G(\mathbf{r}_1, \mathbf{r}_2) = -\frac{1}{4\pi}\cdot\frac{1}{|\mathbf{r}_1 - \mathbf{r}_2|^2}, \qquad r_{12} = |\mathbf{r}_1 - \mathbf{r}_2|. \tag{9.174}
\]
The integration is over the surface of a sphere centered at $\mathbf{r}_2$. The integral of Eq. (9.174) is
\[
G(\mathbf{r}_1, \mathbf{r}_2) = \frac{1}{4\pi}\cdot\frac{1}{|\mathbf{r}_1 - \mathbf{r}_2|}, \tag{9.175}
\]
in agreement with Section 1.14. If we are in two-dimensional space, Eq. (9.173) is satisfied by taking
\[
\frac{\partial}{\partial \rho_{12}}\,G(\boldsymbol{\rho}_1, \boldsymbol{\rho}_2) = -\frac{1}{2\pi}\cdot\frac{1}{|\boldsymbol{\rho}_1 - \boldsymbol{\rho}_2|}, \tag{9.176}
\]
with $r$ replaced by $\rho$, $\rho = (x^2 + y^2)^{1/2}$, and the integration being over the circumference of a circle centered on $\boldsymbol{\rho}_2$. Here $\rho_{12} = |\boldsymbol{\rho}_1 - \boldsymbol{\rho}_2|$. Integrating Eq. (9.176), we obtain
\[
G(\boldsymbol{\rho}_1, \boldsymbol{\rho}_2) = -\frac{1}{2\pi}\ln|\boldsymbol{\rho}_1 - \boldsymbol{\rho}_2|. \tag{9.177}
\]
To $G(\boldsymbol{\rho}_1, \boldsymbol{\rho}_2)$ (and to $G(\mathbf{r}_1, \mathbf{r}_2)$) we may add any multiple of the regular solution of the homogeneous (Laplace's) equation, as needed to satisfy boundary conditions.
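In one dimension the same two criteria (homogeneous equation away from the source, unit derivative jump across it) are easy to verify for the Helmholtz and modified Helmholtz entries of Table 9.5. The sketch below (assuming SymPy) checks $G = (i/2k)e^{ik|x_1 - x_2|}$ and $G = (1/2k)e^{-k|x_1 - x_2|}$ as functions of the separation $x = |x_1 - x_2|$.

```python
# 1D Green's functions as functions of the separation x = |x1 - x2| >= 0
# (SymPy).  Each must solve the homogeneous equation for x > 0 and carry a
# derivative jump of -1 across the source: G'(0+) - G'(0-) = 2*G'(0+),
# because G is even in x1 - x2.
import sympy as sp

x, k = sp.symbols('x k', positive=True)

G_helm = sp.I * sp.exp(sp.I * k * x) / (2 * k)   # (d2/dx2 + k2) G = -delta
G_mod = sp.exp(-k * x) / (2 * k)                 # (d2/dx2 - k2) G = -delta

assert sp.simplify(sp.diff(G_helm, x, 2) + k**2 * G_helm) == 0
assert sp.simplify(sp.diff(G_mod, x, 2) - k**2 * G_mod) == 0

assert sp.simplify(2 * sp.diff(G_helm, x).subs(x, 0)) == -1
assert sp.simplify(2 * sp.diff(G_mod, x).subs(x, 0)) == -1
```

The same pair of criteria, restated as Eqs. (9.178) to (9.180) below, is what identifies the two- and three-dimensional entries of the table.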
Table 9.5  Green's Functions^a

Operator                    | One-dimensional space      | Two-dimensional space        | Three-dimensional space
Laplace  ∇²                 | No solution for (−∞, ∞)    | −(1/2π) ln|ρ₁ − ρ₂|          | 1/(4π|r₁ − r₂|)
Helmholtz  ∇² + k²          | (i/2k) exp(ik|x₁ − x₂|)    | (i/4) H₀⁽¹⁾(k|ρ₁ − ρ₂|)      | exp(ik|r₁ − r₂|)/(4π|r₁ − r₂|)
Modified Helmholtz  ∇² − k² | (1/2k) exp(−k|x₁ − x₂|)    | (1/2π) K₀(k|ρ₁ − ρ₂|)        | exp(−k|r₁ − r₂|)/(4π|r₁ − r₂|)

^a These are the Green's functions satisfying the boundary condition G(r₁, r₂) → 0 as r₁ → ∞ for the Laplace and modified Helmholtz operators. For the Helmholtz operator, G(r₁, r₂) corresponds to an outgoing wave. H₀⁽¹⁾ is the Hankel function of Section 11.4. K₀ is the modified Bessel function of Section 11.5.

The behavior of the Laplace operator Green's function in the vicinity of the source point $\mathbf{r}_1 = \mathbf{r}_2$, shown by Eqs. (9.175) and (9.177), facilitates the identification of the Green's functions for the other cases, such as the Helmholtz and modified Helmholtz equations:

1. For $\mathbf{r}_1 \neq \mathbf{r}_2$, $G(\mathbf{r}_1, \mathbf{r}_2)$ must satisfy the homogeneous differential equation
\[
\mathcal{L}_1 G(\mathbf{r}_1, \mathbf{r}_2) = 0, \qquad \mathbf{r}_1 \neq \mathbf{r}_2. \tag{9.178}
\]
2. As $\mathbf{r}_1 \to \mathbf{r}_2$ (or $\boldsymbol{\rho}_1 \to \boldsymbol{\rho}_2$),
\[
G(\boldsymbol{\rho}_1, \boldsymbol{\rho}_2) \approx -\frac{1}{2\pi}\ln|\boldsymbol{\rho}_1 - \boldsymbol{\rho}_2|, \quad \text{two-dimensional space}, \tag{9.179}
\]
\[
G(\mathbf{r}_1, \mathbf{r}_2) \approx \frac{1}{4\pi}\cdot\frac{1}{|\mathbf{r}_1 - \mathbf{r}_2|}, \quad \text{three-dimensional space}. \tag{9.180}
\]
The term $\pm k^2$ in the operator does not affect the behavior of $G$ near the singular point $\mathbf{r}_1 = \mathbf{r}_2$. For convenience, the Green's functions for the Laplace, Helmholtz, and modified Helmholtz operators are listed in Table 9.5.

Spherical Polar Coordinate Expansion²⁴

As an alternate determination of the Green's function of the Laplace operator, let us assume a spherical harmonic expansion of the form
\[
G(\mathbf{r}_1, \mathbf{r}_2) = \sum_{l=0}^{\infty}\sum_{m=-l}^{l} g_l(r_1, r_2)\, Y_l^m(\theta_1, \varphi_1)\, Y_l^{m*}(\theta_2, \varphi_2), \tag{9.181}
\]
where the summation index $l$ is the same for both spherical harmonics, as a consequence of the symmetry of the Green's function.
We will now determine the radial functions 24 This section is optional here and may be postponed to Chapter 12. 9.7 Nonhomogeneous Equation — Green’s Function 599 gl (r1 , r2 ). From Exercises 1.15.11 and 12.6.6, δ(r1 − r2 ) = = 1 δ(r1 − r2 )δ(cos θ1 − cos θ2 )δ(ϕ1 − ϕ2 ) r12 ∞  l  1 Ylm (θ1 , ϕ1 )Ylm∗ (θ2 , ϕ2 ). δ(r − r ) 1 2 r12 l=0 m=−l (9.182) Substituting Eqs. (9.181) and (9.182) into the Green’s function differential equation, Eq. (9.171), and making use of the orthogonality of the spherical harmonics, we obtain a radial equation: r1 d2  r1 gl (r1 , r2 ) − l(l + 1)gl (r1 , r2 ) = −δ(r1 − r2 ). 2 dr1 (9.183) This is now a one-dimensional problem. The solutions25 of the corresponding homogeneous equation are r1l and r1−l−1 . If we demand that gl remain finite as r1 → 0 and vanish as r1 → ∞, the technique of Section 10.5 leads to  l r1   , r1 < r2 ,   1 r2l+1 (9.184) gl (r1 , r2 ) = 2l + 1  r2l   , r > r ,  l+1 1 2 r1 or gl (r1 , r2 ) = l r< 1 · l+1 . 2l + 1 r> (9.185) Hence our Green’s function is G(r1 , r2 ) = ∞  l  l=0 m=−l l r< 1 Y m (θ1 , ϕ1 )Ylm∗ (θ2 , ϕ2 ). l+1 2l + 1 r> l (9.186) Since we already have G(r1 , r2 ) in closed form, Eq. (9.175), we may write ∞  l l  1 1 r< 1 · = Y m (θ1 , ϕ1 )Ylm∗ (θ2 , ϕ2 ). l+1 l 4π |r1 − r2 | 2l + 1 r> (9.187) l=0 m=−l One immediate use for this spherical harmonic expansion of the Green’s function is in the development of an electrostatic multipole expansion. The potential for an arbitrary charge distribution is ρ(r2 ) 1 dτ2 ψ(r1 ) = 4πε0 |r1 − r2 | 25 Compare Table 9.2. 600 Chapter 9 Differential Equations (which is Eq. (9.148)). Substituting Eq. (9.187), we get ψ(r1 ) = ∞ l  1 Ylm (θ1 , ϕ1 ) 1   ε0 2l + 1 r1l+1 l=0 m=−l  · ρ(r2 )Ylm∗ (θ2 , ϕ2 )r2l dϕ2 sin θ2 dθ2 r22 dr2 , for r1 > r2 . This is the multipole expansion. The relative importance of the various terms in the double sum depends on the form of the source, ρ(r2 ). 
Legendre Polynomial Addition Theorem26 From the generating expression for Legendre polynomials, Eq. (12.4a), ∞ l 1 1  r< 1 · = Pl (cos γ ), 4π |r1 − r2 | 4π r l+1 l=0 > (9.188) where γ is the angle included between vectors r1 and r2 , Fig. 9.4. Equating Eqs. (9.187) and (9.188), we have the Legendre polynomial addition theorem: l 4π  m Pl (cos γ ) = Yl (θ1 , ϕ1 )Ylm∗ (θ2 , ϕ2 ). 2l + 1 m=−l FIGURE 9.4 Spherical polar coordinates. 26 This section is optional here and may be postponed to Chapter 12. (9.189) 9.7 Nonhomogeneous Equation — Green’s Function 601 It is instructive to compare this derivation with the relatively cumbersome derivation of Section 12.8 leading to Eq. (12.177). Circular Cylindrical Coordinate Expansion27 In analogy with the preceding spherical polar coordinate expansion, we write δ(r1 − r2 ) = 1 δ(ρ1 − ρ2 )δ(ϕ1 − ϕ2 )δ(z1 − z2 ) ρ1 ∞ 1 1  im(ϕ1 −ϕ2 ) ∞ ik(z1 −z2 ) = δ(ρ1 − ρ2 ) 2 e e dk, ρ1 4π m=−∞ −∞ (9.190) using Exercise 12.6.5 and Eq. (1.193c) and the Cauchy principal value. But why a summation for the ϕ-dependence and an integration for the z-dependence? The requirement that the azimuthal dependence be single-valued quantizes m, hence the summation. No such restriction applies to k. To avoid problems later with negative values of k, we rewrite Eq. (9.190) as ∞ 1  im(ϕ1 −ϕ2 ) 1 ∞ 1 δ(r1 − r2 ) = δ(ρ1 − ρ2 ) e cos k(z1 − z2 ) dk. (9.191) ρ1 2π m=−∞ π 0 We assume a similar expansion of the Green’s function, ∞ ∞ 1  im(ϕ1 −ϕ2 ) G(r1 , r2 ) = gm (ρ1 , ρ2 )e cos k(z1 − z2 ) dk, 2π 2 m=−∞ 0 (9.192) with the ρ-dependent coefficients gm (ρ1 , ρ2 ) to be determined. Substituting into Eq. (9.171), now in circular cylindrical coordinates, we find that if g(ρ1 , ρ2 ) satisfies   dgm m2 d ρ1 − k 2 ρ1 + gm = −δ(ρ1 − ρ2 ), (9.193) dρ1 dρ1 ρ1 then Eq. (9.171) is satisfied. The operator in Eq. (9.193) is identified as the modified Bessel operator (in selfadjoint form). 
Hence the solutions of the corresponding homogeneous equation are u1 = Im (kρ), u2 = Km (kρ). As in the spherical polar coordinate case, we demand that G be finite at ρ1 = 0 and vanish as ρ1 → ∞. Then the technique of Section 10.5 yields 1 gm (ρ1 , ρ2 ) = − Im (kρ< )Km (kρ> ). (9.194) A This corresponds to Eq. (9.155). The constant A comes from the Wronskian (see Eq. (9.120)): ′ Im (kρ)Km (kρ) − Im′ (kρ)Km (kρ) = 27 This section is optional here and may be postponed to Chapter 11. A . P (kρ) (9.195) 602 Chapter 9 Differential Equations From Exercise 11.5.10, A = −1 and gm (ρ1 , ρ2 ) = Im (kρ< )Km (kρ> ). (9.196) Therefore our circular cylindrical coordinate Green’s function is 1 1 · 4π |r1 − r2 | ∞ ∞ 1  Im (kρ< )Km (kρ> )eim(ϕ1 −ϕ2 ) cos k(z1 − z2 ) dk. = 2π 2 m=−∞ 0 G(r1 , r2 ) = (9.197) Exercise 9.7.14 is a special case of this result. Example 9.7.1 QUANTUM MECHANICAL SCATTERING — NEUMANN SERIES SOLUTION The quantum theory of scattering provides a nice illustration of integral equation techniques and an application of a Green’s function. Our physical picture of scattering is as follows. A beam of particles moves along the negative z-axis toward the origin. A small fraction of the particles is scattered by the potential V (r) and goes off as an outgoing spherical wave. Our wave function ψ(r) must satisfy the time-independent Schrödinger equation − h¯ 2 2 ∇ ψ(r) + V (r)ψ(r) = Eψ(r), 2m (9.198a) or  2mE 2m ∇ ψ(r) + k ψ(r) = − − 2 V (r)ψ(r) , k2 = 2 . (9.198b) h¯ h¯ From the physical picture just presented we look for a solution having an asymptotic form 2 2 ψ(r) ∼ eik0 ·r + fk (θ, ϕ) eikr . r (9.199) Here eik0 ·r is the incident plane wave28 with k0 the propagation vector carrying the subscript 0 to indicate that it is in the θ = 0 (z-axis) direction. 
The magnitudes $k_0$ and $k$ are equal (ignoring recoil), and $e^{ikr}/r$ is the outgoing spherical wave with an angular (and energy) dependent amplitude factor $f_k(\theta, \varphi)$.²⁹ The vector $\mathbf{k}$ has the direction of the outgoing scattered wave. In quantum mechanics texts it is shown that the differential probability of scattering, $d\sigma/d\Omega$, the scattering cross section per unit solid angle, is given by $|f_k(\theta, \varphi)|^2$.

Identifying $[-(2m/\hbar^2)V(\mathbf{r})\psi(\mathbf{r})]$ with $f(\mathbf{r})$ of Eq. (9.158), we have, by Eq. (9.170),
\[
\psi(\mathbf{r}_1) = -\frac{2m}{\hbar^2}\int V(\mathbf{r}_2)\,\psi(\mathbf{r}_2)\,G(\mathbf{r}_1, \mathbf{r}_2)\, d^3 r_2. \tag{9.200}
\]
This does not have the desired asymptotic form of Eq. (9.199), but we may add to Eq. (9.200) the term $e^{i\mathbf{k}_0\cdot\mathbf{r}_1}$, a solution of the homogeneous equation, and put $\psi(\mathbf{r})$ into the desired form:
\[
\psi(\mathbf{r}_1) = e^{i\mathbf{k}_0\cdot\mathbf{r}_1} - \frac{2m}{\hbar^2}\int V(\mathbf{r}_2)\,\psi(\mathbf{r}_2)\,G(\mathbf{r}_1, \mathbf{r}_2)\, d^3 r_2. \tag{9.201}
\]
Our Green's function is the Green's function of the operator $\mathcal{L} = \nabla^2 + k^2$ (Eq. (9.198b)), satisfying the boundary condition that it describe an outgoing wave. Then, from Table 9.5, $G(\mathbf{r}_1, \mathbf{r}_2) = \exp(ik|\mathbf{r}_1 - \mathbf{r}_2|)/(4\pi|\mathbf{r}_1 - \mathbf{r}_2|)$ and
\[
\psi(\mathbf{r}_1) = e^{i\mathbf{k}_0\cdot\mathbf{r}_1} - \frac{2m}{\hbar^2}\int \frac{e^{ik|\mathbf{r}_1 - \mathbf{r}_2|}}{4\pi|\mathbf{r}_1 - \mathbf{r}_2|}\, V(\mathbf{r}_2)\,\psi(\mathbf{r}_2)\, d^3 r_2. \tag{9.202}
\]
This integral equation analog of the original Schrödinger wave equation is exact. Employing the Neumann series technique of Section 16.3 (remember, the scattering probability is very small), we have
\[
\psi_0(\mathbf{r}_1) = e^{i\mathbf{k}_0\cdot\mathbf{r}_1}, \tag{9.203a}
\]
which has the physical interpretation of no scattering. Substituting $\psi_0(\mathbf{r}_2) = e^{i\mathbf{k}_0\cdot\mathbf{r}_2}$ into the integral, we obtain the first correction term,
\[
\psi_1(\mathbf{r}_1) = e^{i\mathbf{k}_0\cdot\mathbf{r}_1} - \frac{2m}{\hbar^2}\int \frac{e^{ik|\mathbf{r}_1 - \mathbf{r}_2|}}{4\pi|\mathbf{r}_1 - \mathbf{r}_2|}\, V(\mathbf{r}_2)\, e^{i\mathbf{k}_0\cdot\mathbf{r}_2}\, d^3 r_2. \tag{9.203b}
\]
This is the famous Born approximation.

²⁸For simplicity we assume a continuous incident beam. In a more sophisticated and more realistic treatment, Eq. (9.199) would be one component of a Fourier wave packet.
²⁹If $V(\mathbf{r})$ represents a central force, $f_k$ will be a function of $\theta$ only, independent of azimuth.
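For a central potential, the standard reduction of Eq. (9.203b) gives the one-dimensional radial Born integral $f(q) = -(2m/\hbar^2)(1/q)\int_0^\infty r\,V(r)\sin(qr)\,dr$, with $q$ the momentum transfer. As a sketch (units with $2m/\hbar^2 = 1$; the screened-Coulomb strength $V_0$ and inverse range $a$ are illustrative values, not from the text), the quadrature below reproduces the well-known closed form $f = -V_0/(q^2 + a^2)$ for $V(r) = V_0\,e^{-ar}/r$.

```python
# Born amplitude for a central potential:
#   f(q) = -(2m/hbar^2) * (1/q) * Int_0^inf r V(r) sin(q r) dr,
# here in units with 2m/hbar^2 = 1, for V(r) = V0 * exp(-a*r)/r, so that
# r*V(r) = V0 * exp(-a*r).
import numpy as np
from scipy.integrate import quad

V0, a = 1.0, 1.5            # illustrative strength and inverse range

def born_amplitude(q):
    integrand = lambda r: V0 * np.exp(-a * r) * np.sin(q * r) / q
    val, _ = quad(integrand, 0.0, np.inf)
    return -val

for q in (0.3, 1.0, 2.7):
    exact = -V0 / (q**2 + a**2)   # closed form for the screened Coulomb case
    assert abs(born_amplitude(q) - exact) < 1e-6
```

The $1/(q^2 + a^2)$ shape, peaking in the forward direction, is the signature of the Born approximation for a screened Coulomb potential; letting $a \to 0$ recovers the Rutherford angular dependence.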
It is expected to be most accurate for weak potentials and high incident energy. If a more accurate approximation is desired, the Neumann series may be continued.30  Example 9.7.2 QUANTUM MECHANICAL SCATTERING — GREEN’S FUNCTION Again, we consider the Schrödinger wave equation (Eq. (9.198b)) for the scattering problem. This time we use Fourier transform techniques and derive the desired form of the Green’s function by contour integration. Substituting the desired asymptotic form of the solution (with k replaced by k0 ), eik0 r = eik0 z + (r), r into the Schrödinger wave equation, Eq. (9.198b), yields  2 ∇ + k02 (r) = U (r)eik0 z + U (r)(r). ψ(r) ∼ eik0 z + fk0 (θ, ϕ) (9.204) (9.205a) Here h¯ 2 U (r) = V (r), 2m 30 This assumes the Neumann series is convergent. In some physical situations it is not convergent and then other techniques are needed. 604 Chapter 9 Differential Equations the scattering (perturbing) potential. Since the probability of scattering is much less than 1, the second term on the right-hand side of Eq. (9.205a) is expected to be negligible (relative to the first term on the right-hand side) and thus we drop it. Note that we are approximating our differential equation with  2 (9.205b) ∇ + k02 (r) = U (r)eik0 z . We now proceed to solve Eq. (9.205b), a nonhomogeneous PDE. The differential operator ∇ 2 generates a continuous set of eigenfunctions ∇ 2 ψk (r) = −k 2 ψk (r), (9.206) where ψk (r) = (2π)−3/2 eik·r . These plane-wave eigenfunctions form a continuous but orthonormal set, in the sense that ψk∗1 (r)ψk2 (r) d 3 r = δ(k1 − k2 ) (compare Eq. (15.21d)).31 We use these eigenfunctions to derive a Green’s function. We expand the unknown function (r1 ) in these eigenfunctions, (r1 ) = Ak1 ψk1 (r1 ) d 3 k1 , (9.207) a Fourier integral with Ak1 , the unknown coefficients. Substituting Eq. (9.207) into Eq. (9.205b) and using Eq. (9.206), we obtain  (9.208) Ak k02 − k 2 ψk (r) d 3 k = U (r)eik0 z . 
Using the now-familiar technique of multiplying by ψk∗2 (r) and integrating over the space coordinates, we have   Ak1 k02 − k12 d 3 k1 ψk∗2 (r)ψk1 (r) d 3 r = Ak2 k02 − k22 = ψk∗2 (r)U (r)eik0 z d 3 r. Solving for Ak2 and substituting into Eq. (9.207) we have   2 −1 k0 − k22 ψk∗2 (r1 )U (r1 )eik0 z1 d 3 r1 ψk2 (r2 ) d 3 k2 . (r2 ) = Hence (r1 ) = −1 3  d k1 ψk1 (r1 ) k02 − k12 31 d 3 r = dx dy dz, a (three-dimensional) volume element in r-space. ψk∗1 (r2 )U (r2 )eik0 z2 d 3 r2 , (9.209) (9.210) (9.211) 9.7 Nonhomogeneous Equation — Green’s Function 605 replacing k2 by k1 and r1 by r2 to agree with Eq. (9.207). Reversing the order of integration, we have (r1 ) = − Gk0 (r1 , r2 )U (r2 )eik0 z2 d 3 r2 , (9.212) where Gk0 (r1 , r2 ), our Green’s function, is given by ψ ∗ (r )ψ (r ) k1 2 k1 1 3 Gk0 (r1 , r2 ) = d k1 , k12 − k02 (9.213) analogous to Eq. (10.90) of Section 10.5 for discrete eigenfunctions. Equation (9.212) should be compared with the Green’s function solution of Poisson’s equation (9.157). It is perhaps worth evaluating this integral to emphasize once more the vital role played by the boundary conditions. Using the eigenfunctions from Eq. (9.206) and d 3 k = k 2 dk sin θ dθ dϕ, we obtain Gk0 (r1 , r2 ) = 1 (2π)3 0 ∞ π 2π 0 0 eikρ cos θ dϕ sin θ dθ k 2 dk. k 2 − k02 (9.214) Here kρ cos θ has replaced k · (r1 − r2 ), with ρ = r1 − r2 indicating the polar axis in kspace. Integrating over ϕ by inspection, we pick up a 2π . The θ -integration then leads to ∞ ikρ e − e−ikρ 1 k dk, (9.215) Gk0 (r1 , r2 ) = 2 4π ρi 0 k 2 − k02 and since the integrand is an even function of k, we may set ∞ iκ (e − e−iκ ) 1 κ dκ. Gk0 (r1 , r2 ) = 8π 2 ρi −∞ κ 2 − σ 2 (9.216) The latter step is taken in anticipation of the evaluation of Gk (r1 , r2 ) as a contour integral. The symbols κ and σ (σ > 0) represent kρ and k0 ρ, respectively. If the integral in Eq. (9.216) is interpreted as a Riemann integral, the integral does not exist. 
This implies that $L^{-1}$ does not exist, and in a literal sense it does not. The operator $L = \nabla^2 + k^2$ is singular, since there exist nontrivial solutions $\psi$ of the homogeneous equation $L\psi = 0$. We avoid this problem by introducing a parameter $\gamma$, defining a different operator $L^{-1}_\gamma$, and taking the limit as $\gamma\to 0$. Splitting the integral into two parts, so that each part may be written as a suitable contour integral, gives us

$$G(\mathbf r_1,\mathbf r_2) = \frac{1}{8\pi^2\rho i}\oint_{C_1}\frac{\kappa\,e^{i\kappa}}{\kappa^2-\sigma^2}\,d\kappa - \frac{1}{8\pi^2\rho i}\oint_{C_2}\frac{\kappa\,e^{-i\kappa}}{\kappa^2-\sigma^2}\,d\kappa. \qquad (9.217)$$

Contour $C_1$ is closed by a semicircle in the upper half-plane; $C_2$ is closed by a semicircle in the lower half-plane and is therefore traversed clockwise. These integrals were evaluated in Chapter 7 by using appropriately chosen infinitesimal semicircles to go around the singular points $\kappa = \pm\sigma$. As an alternative procedure, let us first displace the singular points from the real axis by replacing $\sigma$ by $\sigma + i\gamma$ and then, after evaluation, take the limit $\gamma\to0$ (Fig. 9.5).

FIGURE 9.5  Possible Green's function contours of integration.

For $\gamma$ positive, contour $C_1$ encloses the singular point $\kappa = \sigma + i\gamma$, and the first integral contributes

$$\tfrac12\,2\pi i\,e^{i(\sigma+i\gamma)}.$$

From the second integral we also obtain

$$\tfrac12\,2\pi i\,e^{i(\sigma+i\gamma)},$$

the enclosed singularity being $\kappa = -(\sigma+i\gamma)$ (the clockwise sense of $C_2$ and the explicit minus sign in Eq. (9.217) combine to give a positive contribution). Returning to Eq. (9.217) and letting $\gamma\to0$, we have

$$G(\mathbf r_1,\mathbf r_2) = \frac{1}{4\pi\rho}\,e^{i\sigma} = \frac{e^{ik_0|\mathbf r_1-\mathbf r_2|}}{4\pi|\mathbf r_1-\mathbf r_2|}, \qquad (9.218)$$

in full agreement with Exercise 9.7.16. This result depends on starting with $\gamma$ positive. Had we chosen $\gamma$ negative, our Green's function would have included $e^{-i\sigma}$, which corresponds to an incoming wave. The choice of positive $\gamma$ is dictated by the boundary conditions we wish to satisfy. Equations (9.212) and (9.218) reproduce the scattered wave in Eq. (9.203b) and constitute an exact solution of the approximate Eq. (9.205b). Exercises 9.7.18 and 9.7.20 extend these results. ∎

Exercises

9.7.1  Verify Eq.
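That the contour result, Eq. (9.218), is an outgoing-wave solution of the Helmholtz operator can be checked away from the source point, where only the radial Laplacian acts. A small symbolic sketch (symbol names are ours):

```python
import sympy as sp

r = sp.symbols('r', positive=True)
k0 = sp.symbols('k_0', positive=True)

# Outgoing-wave Green's function of Eq. (9.218), with rho = |r1 - r2| -> r
G = sp.exp(sp.I*k0*r) / (4*sp.pi*r)

# Radial Laplacian of a spherically symmetric function:
# nabla^2 G = (1/r) d^2(r G)/dr^2
lapG = sp.diff(r*G, r, 2) / r

# Away from the source point, (nabla^2 + k0^2) G = 0
assert sp.simplify(lapG + k0**2 * G) == 0
```

The delta-function source at $r = 0$ is, of course, not visible in this pointwise check; it comes from the $1/4\pi r$ singularity, exactly as for the Coulomb Green's function.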
(9.168),

$$\int\bigl(v L_2 u - u L_2 v\bigr)\,d\tau_2 = \int p\,\bigl(v\nabla_2 u - u\nabla_2 v\bigr)\cdot d\boldsymbol\sigma_2.$$

9.7.2  Show that the terms $+k^2$ in the Helmholtz operator and $-k^2$ in the modified Helmholtz operator do not affect the behavior of $G(\mathbf r_1,\mathbf r_2)$ in the immediate vicinity of the singular point $\mathbf r_1=\mathbf r_2$. Specifically, show that

$$\lim_{|\mathbf r_1-\mathbf r_2|\to 0}\int k^2\,G(\mathbf r_1,\mathbf r_2)\,d\tau_2 = 0.$$

9.7.3  Show that

$$\frac{\exp\bigl(ik|\mathbf r_1-\mathbf r_2|\bigr)}{4\pi|\mathbf r_1-\mathbf r_2|}$$

satisfies the two appropriate criteria and therefore is a Green's function for the Helmholtz equation.

9.7.4  (a) Find the Green's function for the three-dimensional Helmholtz equation, Exercise 9.7.3, when the wave is a standing wave.
(b) How is this Green's function related to the spherical Bessel functions?

9.7.5  The homogeneous Helmholtz equation

$$\nabla^2\varphi + \lambda^2\varphi = 0$$

has eigenvalues $\lambda_i^2$ and eigenfunctions $\varphi_i$. Show that the corresponding Green's function that satisfies

$$\nabla^2 G(\mathbf r_1,\mathbf r_2) + \lambda^2 G(\mathbf r_1,\mathbf r_2) = -\delta(\mathbf r_1-\mathbf r_2)$$

may be written as

$$G(\mathbf r_1,\mathbf r_2) = \sum_{i=1}^\infty \frac{\varphi_i(\mathbf r_1)\,\varphi_i(\mathbf r_2)}{\lambda_i^2-\lambda^2}.$$

An expansion of this form is called a bilinear expansion. If the Green's function is available in closed form, this provides a means of generating functions.

9.7.6  An electrostatic potential (mks units) is

$$\varphi(\mathbf r) = \frac{Z}{4\pi\varepsilon_0}\cdot\frac{e^{-ar}}{r}.$$

Reconstruct the electrical charge distribution that will produce this potential. Note that $\varphi(\mathbf r)$ vanishes exponentially for large $r$, showing that the net charge is zero.

ANS. $\rho(\mathbf r) = Z\,\delta(\mathbf r) - \dfrac{Za^2}{4\pi}\,\dfrac{e^{-ar}}{r}.$

9.7.7  Transform the ODE

$$\frac{d^2y(r)}{dr^2} - k^2 y(r) + V_0\,\frac{e^{-r}}{r}\,y(r) = 0$$

and the boundary conditions $y(0)=y(\infty)=0$ into a Fredholm integral equation of the form

$$y(r) = \lambda\int_0^\infty G(r,t)\,\frac{e^{-t}}{t}\,y(t)\,dt.$$

The quantities $V_0=\lambda$ and $k^2$ are constants. The ODE is derived from the Schrödinger wave equation with a mesonic potential:

$$G(r,t) = \begin{cases}\dfrac{1}{k}\,e^{-kt}\sinh kr, & 0\le r<t,\\[1ex] \dfrac{1}{k}\,e^{-kr}\sinh kt, & t<r<\infty.\end{cases}$$

9.7.8  A charged conducting ring of radius $a$ (Example 12.3.3) may be described by

$$\rho(\mathbf r) = \frac{q}{2\pi a^2}\,\delta(r-a)\,\delta(\cos\theta).$$
Using the known Green's function for this system, Eq. (9.187), find the electrostatic potential.
Hint. Exercise 12.6.3 will be helpful.

9.7.9  Changing a separation constant from $k^2$ to $-k^2$ and putting the discontinuity of the first derivative into the $z$-dependence, show that

$$\frac{1}{4\pi|\mathbf r_1-\mathbf r_2|} = \frac{1}{4\pi}\sum_{m=-\infty}^{\infty} e^{im(\varphi_1-\varphi_2)}\int_0^\infty J_m(k\rho_1)\,J_m(k\rho_2)\,e^{-k|z_1-z_2|}\,dk.$$

Hint. The required $\delta(\rho_1-\rho_2)$ may be obtained from Exercise 15.1.2.

9.7.10  Derive the expansion

$$\frac{\exp\bigl[ik|\mathbf r_1-\mathbf r_2|\bigr]}{4\pi|\mathbf r_1-\mathbf r_2|} = ik\sum_{l=0}^{\infty} \begin{cases} j_l(kr_1)\,h_l^{(1)}(kr_2), & r_1<r_2\\ j_l(kr_2)\,h_l^{(1)}(kr_1), & r_1>r_2 \end{cases} \;\times\sum_{m=-l}^{l} Y_l^m(\theta_1,\varphi_1)\,Y_l^{m*}(\theta_2,\varphi_2).$$

Hint. The left side is a known Green's function. Assume a spherical harmonic expansion and work on the remaining radial dependence. The spherical harmonic closure relation, Exercise 12.6.6, covers the angular dependence.

9.7.11  Show that the modified Helmholtz operator Green's function

$$\frac{\exp\bigl(-k|\mathbf r_1-\mathbf r_2|\bigr)}{4\pi|\mathbf r_1-\mathbf r_2|}$$

has the spherical polar coordinate expansion

$$\frac{\exp\bigl(-k|\mathbf r_1-\mathbf r_2|\bigr)}{4\pi|\mathbf r_1-\mathbf r_2|} = k\sum_{l=0}^{\infty} i_l(kr_<)\,k_l(kr_>)\sum_{m=-l}^{l} Y_l^m(\theta_1,\varphi_1)\,Y_l^{m*}(\theta_2,\varphi_2).$$

Note. The modified spherical Bessel functions $i_l(kr)$ and $k_l(kr)$ are defined in Exercise 11.7.15.

9.7.12  From the spherical Green's function of Exercise 9.7.10, derive the plane-wave expansion

$$e^{i\mathbf k\cdot\mathbf r} = \sum_{l=0}^{\infty} i^l\,(2l+1)\,j_l(kr)\,P_l(\cos\gamma),$$

where $\gamma$ is the angle included between $\mathbf k$ and $\mathbf r$. This is the Rayleigh equation of Exercise 12.4.7.
Hint. Take $r_2\gg r_1$ so that

$$|\mathbf r_1-\mathbf r_2| \to r_2 - \hat{\mathbf r}_2\cdot\mathbf r_1 = r_2 - \frac{\mathbf k\cdot\mathbf r_1}{k}.$$

Let $r_2\to\infty$ and cancel a factor of $e^{ikr_2}/r_2$.

9.7.13  From the results of Exercises 9.7.10 and 9.7.12, show that

$$e^{ix} = \sum_{l=0}^{\infty} i^l\,(2l+1)\,j_l(x).$$

9.7.14  (a) From the circular cylindrical coordinate expansion of the Laplace Green's function (Eq. (9.197)), show that

$$\frac{1}{(\rho^2+z^2)^{1/2}} = \frac{2}{\pi}\int_0^\infty K_0(k\rho)\cos kz\,dk.$$

This same result is obtained directly in Exercise 15.3.11.
(b) As a special case of part (a), show that

$$\int_0^\infty K_0(k)\,dk = \frac{\pi}{2}.$$
9.7.15  Noting that

$$\psi_{\mathbf k}(\mathbf r) = \frac{1}{(2\pi)^{3/2}}\,e^{i\mathbf k\cdot\mathbf r}$$

is an eigenfunction of

$$\bigl(\nabla^2 + k^2\bigr)\psi_{\mathbf k}(\mathbf r) = 0$$

(Eq. (9.206)), show that the Green's function of $L=\nabla^2$ may be expanded as

$$\frac{1}{4\pi|\mathbf r_1-\mathbf r_2|} = \frac{1}{(2\pi)^3}\int e^{i\mathbf k\cdot(\mathbf r_1-\mathbf r_2)}\,\frac{d^3k}{k^2}.$$

9.7.16  Using Fourier transforms, show that the Green's function satisfying the nonhomogeneous Helmholtz equation

$$\bigl(\nabla^2+k_0^2\bigr)G(\mathbf r_1,\mathbf r_2) = -\delta(\mathbf r_1-\mathbf r_2)$$

is

$$G(\mathbf r_1,\mathbf r_2) = \frac{1}{(2\pi)^3}\int\frac{e^{i\mathbf k\cdot(\mathbf r_1-\mathbf r_2)}}{k^2-k_0^2}\,d^3k,$$

in agreement with Eq. (9.213).

9.7.17  The basic equation of the scalar Kirchhoff diffraction theory is

$$\psi(\mathbf r_1) = \frac{1}{4\pi}\oint_{S_2}\left[\frac{e^{ikr}}{r}\,\nabla\psi(\mathbf r_2) - \psi(\mathbf r_2)\,\nabla\!\left(\frac{e^{ikr}}{r}\right)\right]\cdot d\boldsymbol\sigma_2,$$

where $\psi$ satisfies the homogeneous Helmholtz equation and $r = |\mathbf r_1-\mathbf r_2|$. Derive this equation. Assume that $\mathbf r_1$ is interior to the closed surface $S_2$.
Hint. Use Green's theorem.

9.7.18  The Born approximation for the scattered wave is given by Eq. (9.203b) (and Eq. (9.211)). From the asymptotic form, Eq. (9.199),

$$f_{\mathbf k}(\theta,\varphi)\,\frac{e^{ikr}}{r} = -\frac{2m}{\hbar^2}\int\frac{e^{ik|\mathbf r-\mathbf r_2|}}{4\pi|\mathbf r-\mathbf r_2|}\,V(\mathbf r_2)\,e^{i\mathbf k_0\cdot\mathbf r_2}\,d^3r_2.$$

For a scattering potential $V(\mathbf r_2)$ that is independent of angles and for $r\gg r_2$, show that

$$f_{\mathbf k}(\theta,\varphi) = -\frac{2m}{\hbar^2}\,\frac{1}{|\mathbf k_0-\mathbf k|}\int_0^\infty r_2\,V(r_2)\,\sin\bigl(|\mathbf k_0-\mathbf k|\,r_2\bigr)\,dr_2.$$

Here $\mathbf k_0$ is in the $\theta=0$ (original $z$-axis) direction, whereas $\mathbf k$ is in the $(\theta,\varphi)$ direction. The magnitudes are equal: $|\mathbf k_0| = |\mathbf k|$; $m$ is the reduced mass.
Hint. You have Exercise 9.7.12 to simplify the exponential and Exercise 15.3.20 to transform the three-dimensional Fourier exponential transform into a one-dimensional Fourier sine transform.

9.7.19  Calculate the scattering amplitude $f_{\mathbf k}(\theta,\varphi)$ for a mesonic potential $V(r) = V_0\,e^{-\alpha r}/\alpha r$.
Hint. This particular potential permits the Born integral, Exercise 9.7.18, to be evaluated as a Laplace transform.

ANS. $f_{\mathbf k}(\theta,\varphi) = -\dfrac{2mV_0}{\hbar^2\alpha}\,\dfrac{1}{\alpha^2+(\mathbf k_0-\mathbf k)^2}.$

9.7.20  The mesonic potential $V(r) = V_0\,e^{-\alpha r}/\alpha r$ may be used to describe the Coulomb scattering of two charges $q_1$ and $q_2$. We let $\alpha\to 0$ and $V_0\to 0$ but take the ratio $V_0/\alpha$ to be $q_1q_2/4\pi\varepsilon_0$.
(For Gaussian units omit the $4\pi\varepsilon_0$.) Show that the differential scattering cross section $d\sigma/d\Omega = |f_{\mathbf k}(\theta,\varphi)|^2$ is given by

$$\frac{d\sigma}{d\Omega} = \left(\frac{q_1q_2}{4\pi\varepsilon_0}\right)^2\frac{1}{16E^2\sin^4(\theta/2)}, \qquad E = \frac{p^2}{2m} = \frac{\hbar^2k^2}{2m}.$$

It happens (coincidentally) that this Born approximation is in exact agreement with both the exact quantum mechanical calculation and the classical Rutherford calculation.

9.8  HEAT FLOW, OR DIFFUSION, PDE

Here we return to a special PDE to develop fairly general methods for adapting a special solution of a PDE to boundary conditions by introducing parameters; these methods apply to other second-order PDEs with constant coefficients as well. To some extent they are complementary to the earlier basic separation method for finding solutions in a systematic way. We select the full time-dependent diffusion PDE for an isotropic medium. Assuming isotropy is actually not much of a restriction because, in case we have different (constant) rates of diffusion in different directions, for example in wood, our heat flow PDE takes the form

$$\frac{\partial\psi}{\partial t} = a^2\frac{\partial^2\psi}{\partial x^2} + b^2\frac{\partial^2\psi}{\partial y^2} + c^2\frac{\partial^2\psi}{\partial z^2} \qquad (9.219)$$

if we put the coordinate axes along the principal directions of anisotropy. Now we simply rescale the coordinates using the substitutions $x = a\xi$, $y = b\eta$, $z = c\zeta$ to get back the original isotropic form of Eq. (9.219),

$$\frac{\partial\Psi}{\partial t} = \frac{\partial^2\Psi}{\partial\xi^2} + \frac{\partial^2\Psi}{\partial\eta^2} + \frac{\partial^2\Psi}{\partial\zeta^2}, \qquad (9.220)$$

for the temperature distribution function $\Psi(\xi,\eta,\zeta,t) = \psi(x,y,z,t)$.

For simplicity, we first solve the time-dependent PDE for a homogeneous one-dimensional medium, say a long metal rod in the $x$-direction,

$$\frac{\partial\psi}{\partial t} = a^2\frac{\partial^2\psi}{\partial x^2}, \qquad (9.221)$$

where the constant $a$ measures the diffusivity, or heat conductivity, of the medium. We attempt to solve this linear PDE with constant coefficients with the relevant exponential product Ansatz $\psi = e^{\alpha x}\cdot e^{\beta t}$, which, when substituted into Eq. (9.221), solves the PDE with the constraint $\beta = a^2\alpha^2$ for the parameters.
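The constraint $\beta = a^2\alpha^2$ on the product Ansatz can be confirmed directly; a minimal symbolic sketch (our notation):

```python
import sympy as sp

x, t = sp.symbols('x t', real=True)
a, alpha = sp.symbols('a alpha', real=True)

# Product Ansatz psi = exp(alpha*x) * exp(beta*t) with beta = a**2 * alpha**2
beta = a**2 * alpha**2
psi = sp.exp(alpha*x) * sp.exp(beta*t)

# Check the diffusion PDE psi_t = a**2 * psi_xx, Eq. (9.221)
assert sp.simplify(sp.diff(psi, t) - a**2*sp.diff(psi, x, 2)) == 0
```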
We seek solutions that decay exponentially for large times, that is, solutions with negative $\beta$ values, and therefore set $\alpha = i\omega$, $\alpha^2 = -\omega^2$ for real $\omega$, and have

$$\psi(x,t) = e^{i\omega x}\,e^{-\omega^2a^2t} = (\cos\omega x + i\sin\omega x)\,e^{-\omega^2a^2t}.$$

Forming real linear combinations, we obtain the solution

$$\psi(x,t) = (A\cos\omega x + B\sin\omega x)\,e^{-\omega^2a^2t}$$

for any choice of $A$, $B$, $\omega$, which are introduced to satisfy boundary conditions. Upon summing over multiples $n\omega$ of the basic frequency for periodic boundary conditions, or integrating over the parameter $\omega$ for general (nonperiodic) boundary conditions, we find a solution,

$$\psi(x,t) = \int\bigl[A(\omega)\cos\omega x + B(\omega)\sin\omega x\bigr]\,e^{-a^2\omega^2t}\,d\omega, \qquad (9.222)$$

that is general enough to be adapted to boundary conditions at $t=0$, say. When the boundary condition gives a nonzero temperature $\psi_0$, as for our rod, then the summation method applies (Fourier expansion of the boundary condition). If the space is unrestricted (as for an infinitely extended rod), the Fourier integral applies.

• This summation or integration over parameters is one of the standard methods for generalizing specific PDE solutions in order to adapt them to boundary conditions.

Example 9.8.1  A SPECIFIC BOUNDARY CONDITION

Let us solve a one-dimensional case explicitly, where the temperature at time $t=0$ is $\psi_0(x) = 1 = \text{const.}$ in the interval between $x=+1$ and $x=-1$, and zero for $x>1$ and $x<-1$. At the ends, $x=\pm1$, the temperature is always held at zero. For a finite interval we choose the $\cos(l\pi x/2)$ spatial solutions of Eq. (9.221) for integer $l$ (only odd $l$ survives, since only those cosines vanish at $x=\pm1$). Thus, at $t=0$ our solution is a Fourier series,

$$\psi(x,0) = \sum_{l=1}^{\infty} a_l\cos\frac{\pi lx}{2} = 1, \qquad -1<x<1,$$

with coefficients (see Section 14.1)

$$a_l = \int_{-1}^{1} 1\cdot\cos\frac{\pi lx}{2}\,dx = \frac{2}{l\pi}\left[\sin\frac{\pi lx}{2}\right]_{x=-1}^{1} = \frac{4}{\pi l}\sin\frac{l\pi}{2} = \frac{4(-1)^m}{(2m+1)\pi}, \qquad l = 2m+1;$$

$$a_l = 0, \qquad l = 2m.$$
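The coefficients just computed can be checked numerically by summing the resulting series; a small sketch (function name and parameters are ours):

```python
import math

def heat_series(x, t, a=1.0, nmax=2000):
    """Partial sum of the Fourier-series solution with psi = 1 on (-1, 1)
    at t = 0 and psi held at zero at x = +/-1 (only odd l contributes)."""
    s = 0.0
    for m in range(nmax):
        l = 2*m + 1
        s += ((-1)**m / l) * math.exp(-t * (l*math.pi*a/2)**2) \
             * math.cos(l*math.pi*x/2)
    return 4.0/math.pi * s

# At t = 0 the partial sums approach the initial temperature 1 (convergence
# is only conditional there, so many terms are needed).
assert abs(heat_series(0.3, 0.0, nmax=20000) - 1.0) < 1e-2
# For t > 0 the temperature is damped but still positive in the interior.
assert 0.0 < heat_series(0.3, 0.1) < 1.0
# The boundary condition psi(+-1, t) = 0 is built in term by term.
assert abs(heat_series(1.0, 0.1)) < 1e-12
```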
Including its time dependence, the full solution is given by the series

$$\psi(x,t) = \frac{4}{\pi}\sum_{m=0}^{\infty}\frac{(-1)^m}{2m+1}\,e^{-t\,((2m+1)\pi a/2)^2}\cos\!\left((2m+1)\frac{\pi x}{2}\right), \qquad (9.223)$$

which converges absolutely for $t>0$ but only conditionally at $t=0$, as a result of the discontinuity at $x=\pm1$.

Without the restriction to zero temperature at the endpoints of the given finite interval, the Fourier series is replaced by a Fourier integral. The general solution is then given by Eq. (9.222). At $t=0$ the given temperature distribution $\psi_0 = 1$ gives the coefficients as (see Section 15.3)

$$A(\omega) = \frac{1}{\pi}\int_{-1}^{1}\cos\omega x\,dx = \frac{1}{\pi}\left[\frac{\sin\omega x}{\omega}\right]_{x=-1}^{1} = \frac{2\sin\omega}{\pi\omega}, \qquad B(\omega) = 0.$$

Therefore

$$\psi(x,t) = \frac{2}{\pi}\int_0^\infty\frac{\sin\omega}{\omega}\cos(\omega x)\,e^{-a^2\omega^2t}\,d\omega. \qquad (9.224)$$

∎

In three dimensions the corresponding exponential Ansatz $\psi = e^{i\mathbf k\cdot\mathbf r/a+\beta t}$ leads to a solution with the relation $\beta = -\mathbf k^2 = -k^2$ for its parameter, and the three-dimensional form of Eq. (9.221) becomes

$$\frac{\partial^2\psi}{\partial x^2} + \frac{\partial^2\psi}{\partial y^2} + \frac{\partial^2\psi}{\partial z^2} + k^2\psi = 0, \qquad (9.225)$$

which is the Helmholtz equation; it may be solved by the separation method, just like the earlier Laplace equation, in Cartesian, cylindrical, or spherical coordinates under appropriately generalized boundary conditions.

In Cartesian coordinates, with the product Ansatz of Eq. (9.35), the separated $x$- and $y$-ODEs from Eq. (9.225) are the same as Eqs. (9.38) and (9.41), while the $z$-ODE, Eq. (9.42), generalizes to

$$\frac{1}{Z}\frac{d^2Z}{dz^2} = -k^2 + l^2 + m^2 = n^2 > 0, \qquad (9.226)$$

where we introduce another separation constant, $n^2$, constrained by

$$k^2 = l^2 + m^2 - n^2 \qquad (9.227)$$

to produce a symmetric set of equations. Now our solution of Helmholtz's Eq. (9.225) is labeled according to the choice of all three separation constants $l, m, n$, subject to the constraint Eq. (9.227). As before, the $z$-ODE, Eq. (9.226), yields exponentially decaying solutions $\sim e^{-nz}$. The boundary condition at $z=0$ fixes the expansion coefficients $a_{lm}$, as in Eq. (9.44).
In cylindrical coordinates, we now use the separation constant $l^2$ for the $z$-ODE, with an exponentially decaying solution in mind,

$$\frac{d^2Z}{dz^2} = l^2Z, \qquad l^2 > 0, \qquad (9.228)$$

so $Z\sim e^{-lz}$, because the temperature goes to zero at large $z$. If we set $k^2 + l^2 = n^2$, Eqs. (9.53) to (9.54) stay the same, so we end up with the same Fourier–Bessel expansion, Eq. (9.56), as before.

In spherical coordinates with radial boundary conditions, the separation method leads to the same angular ODEs in Eqs. (9.61) and (9.64), and the radial ODE now becomes

$$\frac{1}{r^2}\frac{d}{dr}\!\left(r^2\frac{dR}{dr}\right) + k^2R - \frac{QR}{r^2} = 0, \qquad Q = l(l+1), \qquad (9.229)$$

that is, Eq. (9.65), whose solutions are the spherical Bessel functions of Section 11.7. They are listed in Table 9.2.

The restriction that $k^2$ be a constant is unnecessarily severe. The separation process will still work with Helmholtz's PDE for $k^2$ as general as

$$k^2 = f(r) + \frac{1}{r^2}\,g(\theta) + \frac{1}{r^2\sin^2\theta}\,h(\varphi) + k'^2. \qquad (9.230)$$

In the hydrogen atom we have $k^2 = f(r)$ in the Schrödinger wave equation, and this leads to a closed-form solution involving Laguerre polynomials.

Alternate Solutions

In a new approach to the heat flow PDE, suggested by experiments, we now return to the one-dimensional PDE, Eq. (9.221), seeking solutions of the new functional form $\psi(x,t) = u(x/\sqrt t\,)$, which is suggested by Example 15.1.1. Substituting $u(\xi)$, $\xi = x/\sqrt t$, into Eq. (9.221), using

$$\frac{\partial\psi}{\partial x} = \frac{u'}{\sqrt t}, \qquad \frac{\partial^2\psi}{\partial x^2} = \frac{u''}{t}, \qquad \frac{\partial\psi}{\partial t} = -\frac{x}{2\sqrt{t^3}}\,u', \qquad (9.231)$$

with the notation $u'(\xi)\equiv du/d\xi$, the PDE is reduced to the ODE

$$2a^2u''(\xi) + \xi u'(\xi) = 0. \qquad (9.232)$$

Writing this ODE as

$$\frac{u''}{u'} = -\frac{\xi}{2a^2},$$

we can integrate it once to get

$$\ln u' = -\frac{\xi^2}{4a^2} + \ln C_1,$$

with an integration constant $C_1$. Exponentiating and integrating again, we find the solution

$$u(\xi) = C_1\int_0^{\xi}e^{-\xi^2/4a^2}\,d\xi + C_2, \qquad (9.233)$$

involving two integration constants $C_i$.
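That Eq. (9.233) indeed satisfies the similarity ODE, Eq. (9.232), can be verified symbolically; a small sketch (symbol names are ours):

```python
import sympy as sp

xi, s = sp.symbols('xi s', real=True)
a = sp.symbols('a', positive=True)
C1, C2 = sp.symbols('C_1 C_2')

# Eq. (9.233): u(xi) = C1 * Integral_0^xi exp(-s**2/(4 a**2)) ds + C2
u = C1 * sp.Integral(sp.exp(-s**2/(4*a**2)), (s, 0, xi)) + C2

# Check Eq. (9.232): 2 a**2 u'' + xi u' = 0
residual = 2*a**2*sp.diff(u, xi, 2) + xi*sp.diff(u, xi)
assert sp.simplify(residual.doit()) == 0
```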
Normalizing this solution at time $t=0$ to temperature $+1$ for $x>0$ and $-1$ for $x<0$, our boundary conditions, fixes the constants $C_i$, so

$$\psi = \frac{1}{a\sqrt\pi}\int_0^{x/\sqrt t}e^{-\xi^2/4a^2}\,d\xi = \frac{2}{\sqrt\pi}\int_0^{x/2a\sqrt t}e^{-v^2}\,dv = \operatorname{erf}\!\left(\frac{x}{2a\sqrt t}\right), \qquad (9.234)$$

where erf denotes Gauss' error function (see Exercise 5.10.4). See Example 15.1.1 for a derivation using a Fourier transform.

We need to generalize this specific solution to adapt it to boundary conditions. To this end we now generate new solutions of the PDE with constant coefficients by differentiating a special solution, Eq. (9.234). In other words, if $\psi(x,t)$ solves the PDE in Eq. (9.221), so do $\partial\psi/\partial t$ and $\partial\psi/\partial x$, because these derivatives and the differentiations of the PDE commute; that is, the order in which they are carried out does not matter. Note carefully that this method no longer works if any coefficient of the PDE depends on $t$ or $x$ explicitly. However, PDEs with constant coefficients dominate in physics. Examples are Newton's equations of motion (ODEs) in classical mechanics, the wave equations of electrodynamics, and Poisson's and Laplace's equations in electrostatics and gravity. Even Einstein's nonlinear field equations of general relativity take on this special form in local geodesic coordinates.

Therefore, by differentiating Eq. (9.234) with respect to $x$, we find the simpler, more basic solution

$$\psi_1(x,t) = \frac{1}{a\sqrt{t\pi}}\,e^{-x^2/4a^2t}, \qquad (9.235)$$

and, repeating the process, another basic solution,

$$\psi_2(x,t) = \frac{x}{2a^3\sqrt{t^3\pi}}\,e^{-x^2/4a^2t}. \qquad (9.236)$$

Again, these solutions have to be generalized to adapt them to boundary conditions. And there is yet another method of generating new solutions of a PDE with constant coefficients: We can translate a given solution, for example, $\psi_1(x,t)\to\psi_1(x-\alpha,t)$, and then integrate over the translation parameter $\alpha$.
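The claim that $x$-derivatives of a solution are again solutions is easy to confirm for Eqs. (9.235) and (9.236); a symbolic sketch (notation ours):

```python
import sympy as sp

x = sp.symbols('x', real=True)
t, a = sp.symbols('t a', positive=True)

# Eq. (9.235), obtained by differentiating the error-function solution in x
psi1 = sp.exp(-x**2/(4*a**2*t)) / (a*sp.sqrt(sp.pi*t))

# psi1 must again satisfy psi_t = a**2 psi_xx
assert sp.simplify(sp.diff(psi1, t) - a**2*sp.diff(psi1, x, 2)) == 0

# ... and so must its x-derivative, which is Eq. (9.236) up to sign
psi2 = sp.diff(psi1, x)
assert sp.simplify(sp.diff(psi2, t) - a**2*sp.diff(psi2, x, 2)) == 0
```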
Therefore

$$\psi(x,t) = \frac{1}{2a\sqrt{t\pi}}\int_{-\infty}^{\infty}C(\alpha)\,e^{-(x-\alpha)^2/4a^2t}\,d\alpha \qquad (9.237)$$

is again a solution, which we rewrite using the substitution

$$\xi = \frac{x-\alpha}{2a\sqrt t}, \qquad \alpha = x - 2a\xi\sqrt t, \qquad d\alpha = -2a\sqrt t\,d\xi. \qquad (9.238)$$

Thus, we find that

$$\psi(x,t) = \frac{1}{\sqrt\pi}\int_{-\infty}^{\infty}C\bigl(x-2a\xi\sqrt t\,\bigr)\,e^{-\xi^2}\,d\xi \qquad (9.239)$$

is a solution of our PDE. In this form we recognize the significance of the weight function $C(x)$ from the translation method because, at $t=0$, $\psi(x,0) = C(x) = \psi_0(x)$ is determined by the boundary condition, and $\int_{-\infty}^{\infty}e^{-\xi^2}\,d\xi = \sqrt\pi$. Therefore, we can also write the solution as

$$\psi(x,t) = \frac{1}{\sqrt\pi}\int_{-\infty}^{\infty}\psi_0\bigl(x-2a\xi\sqrt t\,\bigr)\,e^{-\xi^2}\,d\xi, \qquad (9.240)$$

displaying the role of the boundary condition explicitly. From Eq. (9.240) we see that the initial temperature distribution, $\psi_0(x)$, spreads out over time and is damped by the Gaussian weight function.

Example 9.8.2  SPECIAL BOUNDARY CONDITION AGAIN

Let us express the solution of Example 9.8.1 in terms of the error function solution of Eq. (9.234). The boundary condition at $t=0$ is $\psi_0(x)=1$ for $-1<x<1$ and zero for $|x|>1$. From Eq. (9.240) we find the limits on the integration variable $\xi$ by setting $x-2a\xi\sqrt t = \pm1$. This yields the integration endpoints $\xi = (x\mp1)/2a\sqrt t$. Therefore our solution becomes

$$\psi(x,t) = \frac{1}{\sqrt\pi}\int_{(x-1)/2a\sqrt t}^{(x+1)/2a\sqrt t}e^{-\xi^2}\,d\xi.$$

Using the error function defined in Eq. (9.234), we can also write this solution as

$$\psi(x,t) = \frac12\left[\operatorname{erf}\!\left(\frac{x+1}{2a\sqrt t}\right) - \operatorname{erf}\!\left(\frac{x-1}{2a\sqrt t}\right)\right]. \qquad (9.241)$$

Comparing this form of our solution with that from Example 9.8.1, we see that Eq. (9.241) expresses the Fourier integral, Eq. (9.224), in closed form in terms of the tabulated error function. ∎

Finally, we consider the heat flow case for an extended, spherically symmetric medium centered at the origin, which prescribes polar coordinates $r, \theta, \varphi$. We expect a solution of the form $\psi(r,t) = u(r,t)$. Using Eq.
(2.48), we find the PDE

$$\frac{\partial u}{\partial t} = a^2\left(\frac{\partial^2u}{\partial r^2} + \frac{2}{r}\frac{\partial u}{\partial r}\right), \qquad (9.242)$$

which we transform to the one-dimensional heat flow PDE by the substitution $u = v(r,t)/r$, with

$$\frac{\partial u}{\partial r} = \frac{1}{r}\frac{\partial v}{\partial r} - \frac{v}{r^2}, \qquad \frac{\partial u}{\partial t} = \frac{1}{r}\frac{\partial v}{\partial t}, \qquad \frac{\partial^2u}{\partial r^2} = \frac{1}{r}\frac{\partial^2v}{\partial r^2} - \frac{2}{r^2}\frac{\partial v}{\partial r} + \frac{2v}{r^3}. \qquad (9.243)$$

This yields the PDE

$$\frac{\partial v}{\partial t} = a^2\frac{\partial^2v}{\partial r^2}. \qquad (9.244)$$

Example 9.8.3  SPHERICALLY SYMMETRIC HEAT FLOW

Let us apply the one-dimensional heat flow PDE with the solution Eq. (9.234) to spherically symmetric heat flow under fairly common boundary conditions, where $x$ is replaced by the radial variable. Initially we have zero temperature everywhere. Then, at time $t=0$, a finite amount of heat energy $Q$ is released at the origin, spreading evenly in all directions. What is the resulting spatial and temporal temperature distribution?

Inspecting our special solution in Eq. (9.236), we see that, for $t\to0$, the temperature

$$\frac{v(r,t)}{r} = \frac{C}{\sqrt{t^3}}\,e^{-r^2/4a^2t} \qquad (9.245)$$

goes to zero for all $r\neq0$, so zero initial temperature is guaranteed. As $t\to\infty$, the temperature $v/r\to0$ for all $r$, including the origin, which is implicit in our boundary conditions. The constant $C$ can be determined from energy conservation, which gives the constraint

$$Q = \sigma\rho\int\frac{v}{r}\,d^3r = \frac{4\pi\sigma\rho C}{\sqrt{t^3}}\int_0^\infty r^2\,e^{-r^2/4a^2t}\,dr = 8\sqrt{\pi^3}\,\sigma\rho a^3C, \qquad (9.246)$$

where $\rho$ is the constant density of the medium and $\sigma$ is its specific heat. Here we have rescaled the integration variable and integrated by parts to get

$$\int_0^\infty e^{-r^2/4a^2t}\,r^2\,dr = \bigl(2a\sqrt t\,\bigr)^3\int_0^\infty e^{-\xi^2}\xi^2\,d\xi,$$

$$\int_0^\infty e^{-\xi^2}\xi^2\,d\xi = -\frac{\xi}{2}\,e^{-\xi^2}\Big|_0^\infty + \frac12\int_0^\infty e^{-\xi^2}\,d\xi = \frac{\sqrt\pi}{4}.$$

The temperature, as given by Eq. (9.245) at any moment, that is, at fixed $t$, is a Gaussian distribution that flattens out as time increases, because its width is proportional to $\sqrt t$. As a function of time, the temperature is proportional to $t^{-3/2}e^{-T/t}$, with $T\equiv r^2/4a^2$, which rises from zero to a maximum and then falls off to zero again for large times.
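The energy-conservation constraint, Eq. (9.246), can be spot-checked numerically: the heat content must come out independent of $t$ and equal to $8\pi^{3/2}\sigma\rho a^3 C$. A sketch with arbitrary sample parameters (all numerical values below are our choices):

```python
from math import exp, pi

a, sigma, rho, C = 0.7, 1.3, 2.0, 0.5   # arbitrary sample values (our choice)

def temperature(r, t):
    # Eq. (9.245): psi = v(r, t)/r = (C / t**1.5) * exp(-r**2/(4 a**2 t))
    return C / t**1.5 * exp(-r**2 / (4*a**2*t))

def heat_content(t, n=4000):
    # Q = sigma*rho * Integral psi d^3r = 4*pi*sigma*rho * Int_0^inf psi r^2 dr,
    # done here with a simple midpoint rule on a truncated range.
    R = 20 * a * t**0.5          # the Gaussian is negligible beyond this
    h = R / n
    s = sum(temperature((i + 0.5)*h, t) * ((i + 0.5)*h)**2 for i in range(n))
    return 4*pi*sigma*rho * s * h

Q_closed = 8 * pi**1.5 * sigma * rho * a**3 * C   # Eq. (9.246)
for t in (0.1, 1.0, 10.0):
    assert abs(heat_content(t) - Q_closed) < 1e-4 * Q_closed
```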
To find the maximum, we set

$$\frac{d}{dt}\Bigl(t^{-3/2}e^{-T/t}\Bigr) = t^{-5/2}e^{-T/t}\left(\frac{T}{t}-\frac32\right) = 0, \qquad (9.247)$$

from which we find $t = 2T/3$. ∎

In the case of cylindrical symmetry (in the plane $z=0$, in plane polar coordinates $\rho = \sqrt{x^2+y^2}$, $\varphi$), we look for a temperature $\psi = u(\rho,t)$ that then satisfies the PDE (using Eq. (2.35) in the diffusion equation)

$$\frac{\partial u}{\partial t} = a^2\left(\frac{\partial^2u}{\partial\rho^2} + \frac{1}{\rho}\frac{\partial u}{\partial\rho}\right), \qquad (9.248)$$

which is the planar analog of Eq. (9.242). This PDE also has solutions with the functional dependence $\rho/\sqrt t\equiv r$. Upon substituting

$$u = v\!\left(\frac{\rho}{\sqrt t}\right), \qquad \frac{\partial u}{\partial t} = -\frac{\rho v'}{2t^{3/2}}, \qquad \frac{\partial u}{\partial\rho} = \frac{v'}{\sqrt t}, \qquad \frac{\partial^2u}{\partial\rho^2} = \frac{v''}{t}, \qquad (9.249)$$

into Eq. (9.248), with the notation $v'\equiv dv/dr$, we find the ODE

$$a^2v'' + \left(\frac{a^2}{r} + \frac{r}{2}\right)v' = 0. \qquad (9.250)$$

This is a first-order ODE for $v'$, which we can integrate when we separate the variables $v'$ and $r$ as

$$\frac{v''}{v'} = -\left(\frac{1}{r} + \frac{r}{2a^2}\right). \qquad (9.251)$$

This yields

$$v'(r) = \frac{C}{r}\,e^{-r^2/4a^2} = C\,\frac{\sqrt t}{\rho}\,e^{-\rho^2/4a^2t}. \qquad (9.252)$$

This special solution for cylindrical symmetry can be similarly generalized and adapted to boundary conditions, as for the spherical case. Finally, the $z$-dependence can be factored in, because $z$ separates from the plane polar radial variable $\rho$.

In summary, PDEs can be solved with initial conditions, just as ODEs can, or with boundary conditions prescribing the value of the solution or its derivative on boundary surfaces, curves, or points. When the solution is prescribed on the boundary, the problem is called a Dirichlet problem; if the normal derivative of the solution is prescribed on the boundary, it is called a Neumann problem. When the initial temperature is prescribed for the one-dimensional or three-dimensional heat equation (with spherical or cylindrical symmetry), it becomes a weight function of the solution, in terms of an integral over the generic Gaussian solution.
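The identity asserted in Example 9.8.2, namely that the closed error-function form, Eq. (9.241), equals the Fourier integral, Eq. (9.224), can be checked numerically; a sketch (function names and quadrature parameters are ours):

```python
from math import erf, exp, sin, cos, pi, sqrt

def psi_erf(x, t, a=1.0):
    # Closed form, Eq. (9.241)
    return 0.5 * (erf((x + 1)/(2*a*sqrt(t))) - erf((x - 1)/(2*a*sqrt(t))))

def psi_fourier(x, t, a=1.0, n=20000):
    # Fourier integral, Eq. (9.224), by the midpoint rule on a truncated range
    wmax = 10.0 / (a*sqrt(t))    # Gaussian factor negligible beyond this
    h = wmax / n
    s = 0.0
    for i in range(n):
        w = (i + 0.5)*h
        s += sin(w)/w * cos(w*x) * exp(-(a*w)**2 * t)
    return 2.0/pi * s * h

# The two representations agree inside, at, and outside the heated interval.
for x in (-0.5, 0.0, 0.7, 2.0):
    assert abs(psi_erf(x, 0.2) - psi_fourier(x, 0.2)) < 1e-4
```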
The three-dimensional heat equation, with spherical or cylindrical boundary conditions, is solved by separation of the variables, leading to eigenfunctions in each separated variable and eigenvalues as separation constants. For finite boundary intervals in each spatial coordinate, the sum over separation constants leads to a Fourier-series solution, while infinite boundary conditions lead to a Fourier-integral solution. The separation-of-variables method attempts to solve a PDE by writing the solution as a product of functions of one variable each. General conditions for the separation method to work are provided by the symmetry properties of the PDE, to which continuous group theory applies.

Additional Readings

Bateman, H., Partial Differential Equations of Mathematical Physics, 1st ed. (1932). New York: Dover (1944). A wealth of applications of various partial differential equations in classical physics. Excellent examples of the use of different coordinate systems — ellipsoidal, paraboloidal, toroidal coordinates, and so on.

Cohen, H., Mathematics for Scientists and Engineers. Englewood Cliffs, NJ: Prentice-Hall (1992).

Courant, R., and D. Hilbert, Methods of Mathematical Physics, Vol. 1 (English edition). New York: Interscience (1953), Wiley (1989). This is one of the classic works of mathematical physics. Originally published in German in 1924, the revised English edition is an excellent reference for a rigorous treatment of Green's functions and for a wide variety of other topics in mathematical physics.

Davis, P. J., and P. Rabinowitz, Numerical Integration. Waltham, MA: Blaisdell (1967). This book covers a great deal of material in a relatively easy-to-read form. Appendix 1 (On the Practical Evaluation of Integrals, by M. Abramowitz) is excellent as an overall view.

Garcia, A. L., Numerical Methods for Physics. Englewood Cliffs, NJ: Prentice-Hall (1994).

Hamming, R.
W., Numerical Methods for Scientists and Engineers, 2nd ed. New York: McGraw-Hill (1973); reprinted, Dover (1987). This well-written text discusses a wide variety of numerical methods, from zeros of functions to the fast Fourier transform. All topics are selected and developed with a modern computer in mind.

Hubbard, J., and B. H. West, Differential Equations. Berlin: Springer (1995).

Ince, E. L., Ordinary Differential Equations. New York: Dover (1956). The classic work on the theory of ordinary differential equations.

Lapidus, L., and J. H. Seinfeld, Numerical Solutions of Ordinary Differential Equations. New York: Academic Press (1971). A detailed and comprehensive discussion of numerical techniques, with emphasis on the Runge–Kutta and predictor–corrector methods. Recent work on the improvement of characteristics such as stability is clearly presented.

Margenau, H., and G. M. Murphy, The Mathematics of Physics and Chemistry, 2nd ed. Princeton, NJ: Van Nostrand (1956). Chapter 5 covers curvilinear coordinates and 13 specific coordinate systems.

Miller, R. K., and A. N. Michel, Ordinary Differential Equations. New York: Academic Press (1982).

Morse, P. M., and H. Feshbach, Methods of Theoretical Physics. New York: McGraw-Hill (1953). Chapter 5 includes a description of several different coordinate systems. Note that Morse and Feshbach are not above using left-handed coordinate systems, even for Cartesian coordinates. Elsewhere in this excellent (and difficult) book are many examples of the use of the various coordinate systems in solving physical problems. Chapter 7 is a particularly detailed, complete discussion of Green's functions from the point of view of mathematical physics. Note, however, that Morse and Feshbach frequently choose a source of $4\pi\delta(\mathbf r-\mathbf r')$ in place of our $\delta(\mathbf r-\mathbf r')$. Considerable attention is devoted to bounded regions.

Murphy, G. M., Ordinary Differential Equations and Their Solutions. Princeton, NJ: Van Nostrand (1960).
A thorough, relatively readable treatment of ordinary differential equations, both linear and nonlinear.

Press, W. H., B. P. Flannery, S. A. Teukolsky, and W. T. Vetterling, Numerical Recipes, 2nd ed. Cambridge, UK: Cambridge University Press (1992).

Ralston, A., and H. Wilf, eds., Mathematical Methods for Digital Computers. New York: Wiley (1960).

Ritger, P. D., and N. J. Rose, Differential Equations with Applications. New York: McGraw-Hill (1968).

Stakgold, I., Green's Functions and Boundary Value Problems, 2nd ed. New York: Wiley (1997).

Stoer, J., and R. Bulirsch, Introduction to Numerical Analysis. New York: Springer-Verlag (1992).

Stroud, A. H., Numerical Quadrature and Solution of Ordinary Differential Equations, Applied Mathematics Series, Vol. 10. New York: Springer-Verlag (1974). A balanced, readable, and very helpful discussion of various methods of integrating differential equations. Stroud is familiar with the work in this field and provides numerous references.

CHAPTER 10  STURM–LIOUVILLE THEORY — ORTHOGONAL FUNCTIONS

In the preceding chapter we developed two linearly independent solutions of the second-order linear homogeneous differential equation and proved that no third, linearly independent solution existed. In this chapter the emphasis shifts from solving the differential equation to developing and understanding general properties of the solutions. There is a close analogy between the concepts in this chapter and those of linear algebra in Chapter 3. Functions here play the role of vectors there, and linear operators play the role of matrices. The diagonalization of a real symmetric matrix in Chapter 3 corresponds here to the solution of an ODE, defined by a self-adjoint operator $L$, in terms of its eigenfunctions, which are the "continuous" analog of the eigenvectors in Chapter 3.
Examples of the corresponding analogy between Hermitian matrices and Hermitian operators are Hamiltonians in quantum mechanics and their energy eigenfunctions.

In Section 10.1 the concepts of self-adjoint operator, eigenfunction, eigenvalue, and Hermitian operator are presented. The concept of the adjoint operator, given first in terms of differential equations, is then redefined in accordance with usage in quantum mechanics, where eigenfunctions take complex values. The vital properties of reality of eigenvalues and orthogonality of eigenfunctions are derived in Section 10.2. In Section 10.3 we discuss the Gram–Schmidt procedure for systematically constructing sets of orthogonal functions. Finally, the general property of the completeness of a set of eigenfunctions is explored in Section 10.4, and the treatment of Green's functions from Chapter 9 is continued in Section 10.5.

10.1  SELF-ADJOINT ODES

In Chapter 9 we studied, classified, and solved linear, second-order ODEs corresponding to linear, second-order differential operators of the general form

$$Lu(x) = p_0(x)\frac{d^2}{dx^2}u(x) + p_1(x)\frac{d}{dx}u(x) + p_2(x)u(x). \qquad (10.1)$$

The coefficients $p_0(x)$, $p_1(x)$, and $p_2(x)$ are real functions of $x$, and over the region of interest, $a\le x\le b$, the first $2-i$ derivatives of $p_i(x)$ are continuous. Reference to Eq. (9.118) shows that $P(x) = p_1(x)/p_0(x)$ and $Q(x) = p_2(x)/p_0(x)$. Hence, $p_0(x)$ must not vanish for $a<x<b$. Now, the zeros of $p_0(x)$ are singular points (Section 9.4), and the preceding statement means that our interval $[a,b]$ must be chosen so that there are no singular points in the interior of the interval. There may be, and often are, singular points on the boundaries.
For a linear operator $L$, the analog of a quadratic form for a matrix in Chapter 3 is the integral

$$\langle u|L|u\rangle \equiv \langle u|Lu\rangle \equiv \int_a^b u(x)\,Lu(x)\,dx = \int_a^b u\,\bigl\{p_0u'' + p_1u' + p_2u\bigr\}\,dx, \qquad (10.2)$$

where the primes on the real function $u(x)$ denote derivatives, as usual, and, for simplicity, $u(x)$ is taken to be real. If we shift the derivatives to the first factor, $u$, in Eq. (10.2) by integrating by parts once or twice, we are led to the equivalent expression

$$\langle u|L|u\rangle = \Bigl[u\,\bigl(p_1-p_0'\bigr)\,u\Bigr]_{x=a}^{x=b} + \int_a^b\left\{\frac{d^2}{dx^2}\bigl[p_0u\bigr] - \frac{d}{dx}\bigl[p_1u\bigr] + p_2u\right\}u\,dx. \qquad (10.3)$$

If we require that the integrals in Eqs. (10.2) and (10.3) be identical for all (twice-differentiable) functions $u$, then the integrands have to be equal. The comparison then yields

$$u\bigl(p_0''-p_1'\bigr)u + 2u\bigl(p_0'-p_1\bigr)u' = 0,$$

or

$$p_0'(x) = p_1(x), \qquad (10.4)$$

and, as a bonus, the terms at the boundaries $x=a$ and $x=b$ in Eq. (10.3) then also vanish.

Because of the analogy with the transposed matrix in Chapter 3, it is convenient to define the linear operator in Eq. (10.3),

$$\bar Lu = \frac{d^2}{dx^2}\bigl[p_0u\bigr] - \frac{d}{dx}\bigl[p_1u\bigr] + p_2u = p_0\frac{d^2u}{dx^2} + \bigl(2p_0'-p_1\bigr)\frac{du}{dx} + \bigl(p_0''-p_1'+p_2\bigr)u, \qquad (10.5)$$

as the adjoint¹ operator $\bar L$. We have defined the adjoint operator $\bar L$ and have shown that if Eq. (10.4) is satisfied, $\langle\bar Lu|u\rangle = \langle u|Lu\rangle$. Following the same procedure, we can show more generally that $\langle v|Lu\rangle = \langle Lv|u\rangle$. When this condition is satisfied,

$$\bar Lu = Lu = \frac{d}{dx}\left[p(x)\frac{du(x)}{dx}\right] + q(x)u(x), \qquad (10.6)$$

the operator $L$ is said to be self-adjoint. Here, for the self-adjoint case, $p_0(x)$ is replaced by $p(x)$ and $p_2(x)$ by $q(x)$ to avoid unnecessary subscripts. The form of Eq. (10.6) allows carrying out two integrations by parts in Eq. (10.3) (and in Eq. (10.22) and following) without integrated terms.² Note that a given operator is not inherently self-adjoint; its self-adjointness depends on the properties of the function space in which it acts and on the boundary conditions.
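The defining relation $\langle v|Lu\rangle = \langle Lv|u\rangle$ can be illustrated with the Legendre operator, $Lu = \frac{d}{dx}[(1-x^2)u']$, which is in the self-adjoint form of Eq. (10.6) with $p(x)=1-x^2$ vanishing at both endpoints, so the boundary terms drop out. A symbolic sketch with sample polynomials (our choices):

```python
import sympy as sp

x = sp.symbols('x')

def L(u):
    # Self-adjoint (Sturm-Liouville) form, Eq. (10.6), with p = 1 - x**2, q = 0:
    # the Legendre operator
    return sp.diff((1 - x**2) * sp.diff(u, x), x)

# For sample functions, <v|L u> = <L v|u> on [-1, 1]; the boundary terms
# vanish because p(x) = 1 - x**2 is zero at both endpoints.
u = x**2 + x
v = x**3
lhs = sp.integrate(v * L(u), (x, -1, 1))
rhs = sp.integrate(L(v) * u, (x, -1, 1))
assert sp.simplify(lhs - rhs) == 0
assert lhs != 0   # the common value is nonzero for this choice of u, v
```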
In a survey of the ODEs introduced in Section 9.3, Legendre's equation and the linear oscillator equation are self-adjoint, but others, such as the Laguerre and Hermite equations, are not. However, the theory of linear, second-order, self-adjoint differential equations is perfectly general because we can always transform the non-self-adjoint operator into the required self-adjoint form. Consider Eq. (10.1) with p_0' ≠ p_1. If we multiply L by³
\[ \frac{1}{p_0(x)}\exp\left[\int^x \frac{p_1(t)}{p_0(t)}\,dt\right], \]
we obtain
\[ \frac{1}{p_0(x)}\exp\left[\int^x \frac{p_1(t)}{p_0(t)}\,dt\right]\mathcal{L}u(x) = \frac{d}{dx}\left\{\exp\left[\int^x \frac{p_1(t)}{p_0(t)}\,dt\right]\frac{du(x)}{dx}\right\} + \frac{p_2(x)}{p_0(x)}\exp\left[\int^x \frac{p_1(t)}{p_0(t)}\,dt\right]u, \]  (10.7)
which is clearly self-adjoint (see Eq. (10.6)). Notice the p_0(x) in the denominator. This is why we require p_0(x) ≠ 0, a < x < b. In the following development we assume that L has been put into self-adjoint form.

¹ The adjoint operator bears a somewhat forced relationship to the adjoint matrix. A better justification for the nomenclature is found in a comparison of the self-adjoint operator (plus appropriate boundary conditions) with the self-adjoint matrix. The significant properties are developed in Section 10.2. Because of these properties, we are interested in self-adjoint operators.
² The full importance of the self-adjoint form (plus boundary conditions) will become apparent in Section 10.2. In addition, self-adjoint forms will be required for developing Green's functions in Section 10.5.
³ If we multiply L by f(x)/p_0(x) and then demand that f'(x) = f p_1/p_0, so that the new operator will be self-adjoint, we obtain f(x) = exp[∫^x (p_1(t)/p_0(t)) dt].

Eigenfunctions, Eigenvalues

Schrödinger's wave equation Hψ(x) = Eψ(x) is the major example of an eigenvalue equation in physics; here the differential operator L is defined by the Hamiltonian H and may no longer be real, and the eigenvalue becomes the total energy E of the system.
The eigenfunction ψ(x) may be complex and is usually called a wave function. A variational formulation of this Schrödinger equation appears in Section 17.7. Based on spherical, cylindrical, or some other symmetry properties, a three- or four-dimensional PDE or eigenvalue equation, such as the Schrödinger equation, may separate into eigenvalue equations, each in a single variable. Examples are Eqs. (9.41), (9.42), (9.50), and (9.53). However, sometimes an eigenvalue equation takes the more general self-adjoint form
\[ \mathcal{L}u(x) + \lambda w(x)u(x) = 0, \]  (10.8)
where the constant λ is the eigenvalue⁴ and w(x) is a known weight or density function; w(x) > 0 except possibly at isolated points at which w(x) = 0. (In Section 10.1, w(x) ≡ 1.) For a given choice of the parameter λ, a function u_λ(x) that satisfies Eq. (10.8) and the imposed boundary conditions is called an eigenfunction corresponding to λ. The constant λ is then called an eigenvalue by mathematicians. There is no guarantee that an eigenfunction u_λ(x) will exist for an arbitrary choice of the parameter λ. Indeed, the requirement that there be an eigenfunction often restricts the acceptable values of λ to a discrete set. Examples of this for the Legendre, Hermite, and Chebyshev equations appear in the exercises of Section 9.5. Here we have the mathematical approach to the process of quantization in quantum mechanics.

The inner product of two functions,
\[ \langle v|u\rangle = \int_a^b v^*(x)\,w(x)\,u(x)\,dx, \]
depends on the weight function and generalizes our previous definition, where w(x) ≡ 1. The weight function also modifies the definition of orthogonality of two eigenfunctions: They are orthogonal if their inner product ⟨u_{λ'}|u_λ⟩ = 0. The extra weight function w(x) sometimes appears as an asymptotic wave function ψ∞ that is a common factor in all solutions of a PDE such as the Schrödinger equation, for example, when the potential V(x) → 0 as x → ∞ in H = T + V. We can find ψ∞ when we set V = 0 in the Schrödinger equation.
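As a concrete illustration of this weighted inner product, consider the type I Chebyshev polynomials, whose weight is w(x) = (1 − x²)^{−1/2} on [−1, 1]. The substitution x = cos θ removes the endpoint singularity (the weight cancels the Jacobian) and reduces T_n(x) to cos nθ, so the inner product becomes an elementary integral. A minimal numerical sketch:

```python
import math

def chebyshev_inner(m, n, steps=20000):
    """Weighted inner product <T_m|T_n> = integral of
    T_m(x) T_n(x) (1 - x^2)^(-1/2) over [-1, 1].
    With x = cos(theta) and T_n(cos(theta)) = cos(n*theta), the weight
    and the Jacobian cancel, leaving the integral of
    cos(m*t) cos(n*t) over [0, pi]."""
    h = math.pi / steps
    total = 0.0
    for i in range(steps):
        t = (i + 0.5) * h            # midpoint rule, no endpoint trouble
        total += math.cos(m * t) * math.cos(n * t) * h
    return total

assert abs(chebyshev_inner(1, 2)) < 1e-6                  # orthogonal
assert abs(chebyshev_inner(3, 3) - math.pi / 2) < 1e-6    # norm pi/2, n != 0
assert abs(chebyshev_inner(0, 0) - math.pi) < 1e-6        # norm pi for n = 0
```

The step count is an arbitrary accuracy choice; the point is that orthogonality holds only with the weight w(x) included.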
Another source for w(x) may be a nonzero angular momentum barrier l(l + 1)/x² in a PDE or separated ODE, Eq. (9.65), that has a regular singularity and dominates at x → 0. In such a case the indicial equation, such as Eq. (9.87) or (9.103), shows that the wave function has x^l as an overall factor. Since the wave function enters twice in matrix elements and orthogonality relations, the weight functions in Table 10.1 come from these common factors in both radial wave functions. This is how the e^{−x} for Laguerre polynomials arises, and the x^k e^{−x} for associated Laguerre polynomials in Table 10.1.

⁴ Note that this mathematical definition of the eigenvalue differs by a sign from the usage in physics.

Table 10.1

Equation                        p(x)                q(x)            λ          w(x)
Legendre^a                      1 − x²              0               l(l+1)     1
Shifted Legendre^a              x(1 − x)            0               l(l+1)     1
Associated Legendre^a           1 − x²              −m²/(1 − x²)    l(l+1)     1
Chebyshev I                     (1 − x²)^{1/2}      0               n²         (1 − x²)^{−1/2}
Shifted Chebyshev I             [x(1 − x)]^{1/2}    0               n²         [x(1 − x)]^{−1/2}
Chebyshev II                    (1 − x²)^{3/2}      0               n(n + 2)   (1 − x²)^{1/2}
Ultraspherical (Gegenbauer)     (1 − x²)^{α+1/2}    0               n(n + 2α)  (1 − x²)^{α−1/2}
Bessel^b, 0 ≤ x ≤ a             x                   −n²/x           a²         x
Laguerre, 0 ≤ x < ∞             x e^{−x}            0               α          e^{−x}
Associated Laguerre^c           x^{k+1} e^{−x}      0               α − k      x^k e^{−x}
Hermite, −∞ < x < ∞             e^{−x²}             0               2α         e^{−x²}
Simple harmonic oscillator^d    1                   0               n²         1

^a l = 0, 1, …, −l ≤ m ≤ l are integers and −1 ≤ x ≤ 1; 0 ≤ x ≤ 1 for shifted Legendre.
^b Orthogonality of Bessel functions is rather special. Compare Section 11.2 for details. A second type of orthogonality is developed in Eq. (11.174).
^c k is a non-negative integer. For more details, see Table 10.2.
^d This will form the basis for Chapter 14, Fourier series.

Example 10.1.1 LEGENDRE'S EQUATION

Legendre's equation is given by
\[ (1 - x^2)u'' - 2xu' + n(n+1)u = 0, \qquad -1 \le x \le 1. \]  (10.9)
From Eqs. (10.1), (10.8), and (10.9),
\[ p_0(x) = 1 - x^2 = p, \quad p_1(x) = -2x = p', \quad p_2(x) = 0 = q, \quad \lambda = n(n+1), \quad w(x) = 1. \]
Recall that our series solutions of Legendre's equation (Exercise 9.5.5)⁵ diverged unless n was restricted to one of the integers. This represents a quantization of the eigenvalue λ. ■

When the equations of Chapter 9 are transformed into the self-adjoint form, we find the following values of the coefficients and parameters (Table 10.1). The coefficient p(x) is the coefficient of the second derivative of the eigenfunction. The eigenvalue λ is the parameter that is available in a term of the form λw(x)u(x); any x dependence apart from the eigenfunction becomes the weighting function w(x). If there is another term containing the eigenfunction (not its derivatives), the coefficient of the eigenfunction in this additional term is identified as q(x). If no such term is present, q(x) is zero.

⁵ Compare also Exercises 5.2.15 and 12.10.

Example 10.1.2 DEUTERON

Further insight into the concepts of eigenfunction and eigenvalue may be provided by an extremely simple model of the deuteron, a bound state of a neutron and a proton. From experiment, the binding energy is about 2 MeV ≪ Mc², with M = M_p = M_n, the common neutron and proton mass, whose small mass difference we neglect. Due to the short range of the nuclear force, the deuteron properties do not depend much on the detailed shape of the interaction potential. Thus, the neutron–proton nuclear interaction may be modeled by a spherically symmetric square well potential: V = V₀ < 0 for 0 ≤ r < a, V = 0 for r > a. The Schrödinger wave equation is
\[ -\frac{\hbar^2}{M}\nabla^2\psi + V\psi = E\psi, \]  (10.10)
where the energy eigenvalue E < 0 for a bound state. For the ground state the orbital angular momentum l = 0, because for l ≠ 0 there would be an additional positive angular momentum barrier. So, with ψ = ψ(r), we may write u(r) = rψ(r) and, using Exercise 2.5.18, the wave equation becomes
\[ \frac{d^2u}{dr^2} + k_1^2 u = 0, \]  (10.11)
with
\[ k_1^2 = \frac{M(E - V_0)}{\hbar^2} > 0 \]  (10.12)
for the interior range, 0 ≤ r < a.
For a < r < ∞, we have
\[ \frac{d^2u}{dr^2} - k_2^2 u = 0, \]  (10.13)
with
\[ k_2^2 = -\frac{ME}{\hbar^2}. \]  (10.14)
The boundary condition that ψ remain finite at r = 0 implies u(0) = 0 and
\[ u_1(r) = \sin k_1 r, \qquad 0 \le r < a. \]  (10.15)
In the range outside the potential well, we have a linear combination of the two exponentials,
\[ u_2(r) = A\exp(k_2 r) + B\exp(-k_2 r), \qquad a < r < \infty. \]  (10.16)
Continuity of particle density and current density demands that u₁(a) = u₂(a) and that u₁′(a) = u₂′(a). These joining, or matching, conditions give
\[ \sin k_1 a = A\exp(k_2 a) + B\exp(-k_2 a), \qquad k_1\cos k_1 a = k_2 A\exp(k_2 a) - k_2 B\exp(-k_2 a). \]  (10.17)
The condition that we actually have a bound proton–neutron combination is that ∫₀^∞ u²(r) dr = 1. This constraint can be met if we impose a boundary condition that ψ(r) remain finite as r → ∞, and this, in turn, means that A = 0. Dividing the preceding pair of equations (to cancel B), we obtain
\[ \tan k_1 a = -\frac{k_1}{k_2} = -\sqrt{\frac{E - V_0}{-E}}, \]  (10.18)
a transcendental equation for the energy E with only certain discrete solutions. If E is such that Eq. (10.18) can be satisfied, our solutions u₁(r) and u₂(r) can satisfy the boundary conditions. If Eq. (10.18) is not satisfied, no acceptable solution exists. The values of E for which Eq. (10.18) is satisfied are the eigenvalues; the corresponding functions u₁ and u₂ (or ψ) are the eigenfunctions. For the deuteron problem there is one (and only one) negative value of E satisfying Eq. (10.18); that is, the deuteron has one and only one bound state.

FIGURE 10.1 A deuteron eigenfunction.

Now, what happens if E does not satisfy Eq. (10.18), that is, if E ≠ E₀ is not an eigenvalue? In graphical form, imagine that E and therefore k₁ are varied slightly. For E = E₁ < E₀, k₁ is reduced and sin k₁a has not turned down enough to match exp(−k₂a). The joining conditions, Eq. (10.17), require A > 0, and the wave function goes to +∞ exponentially.
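The transcendental equation (10.18) is easily solved numerically. The sketch below uses illustrative parameters that are assumptions, not values from the text: Mc² ≈ 938.9 MeV, ħc ≈ 197.3 MeV·fm, and a trial well V₀ = −35 MeV with a = 2.1 fm. Bisection on the mismatch tan k₁a + k₁/k₂ then locates the single bound-state energy.

```python
import math

# Solve tan(k1*a) = -k1/k2, Eq. (10.18), by bisection in E.
# Units: energies in MeV, lengths in fm. Illustrative parameters
# (assumptions, not from the text):
MC2, HBARC = 938.9, 197.3      # nucleon rest energy, hbar*c in MeV*fm
V0, A = -35.0, 2.1             # trial well depth and radius

def mismatch(E):
    k1 = math.sqrt(MC2 * (E - V0)) / HBARC   # interior wave number
    k2 = math.sqrt(-MC2 * E) / HBARC         # exterior decay constant
    return math.tan(k1 * A) + k1 / k2        # zero at an eigenvalue

# Bracket the root on the branch where tan(k1*a) < 0 (pi/2 < k1*a < pi).
lo, hi = -11.0, -0.1
for _ in range(60):
    mid = 0.5 * (lo + hi)
    if mismatch(lo) * mismatch(mid) <= 0:
        hi = mid
    else:
        lo = mid
E0 = 0.5 * (lo + hi)   # bound-state energy; between -3 and -2.5 MeV here
```

For this trial well the binding energy comes out close to the experimental ~2 MeV, illustrating the text's remark that the result is insensitive to the detailed shape of the potential.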
For E = E₂ > E₀, k₁ is larger, sin k₁a peaks sooner and has descended more rapidly at r = a. The joining conditions demand A < 0, and the wave function goes to −∞ exponentially. Only for E = E₀, an eigenvalue, will the wave function have the required negative exponential asymptotic behavior (see Fig. 10.1). ■

Boundary Conditions

In the foregoing definition of eigenfunction, it was noted that the eigenfunction u_λ(x) was required to satisfy certain imposed boundary conditions. The term boundary conditions includes as a special case the concept of initial conditions. For instance, specifying the initial position x₀ and the initial velocity v₀ in some dynamical problem would correspond to the Cauchy boundary conditions. The only difference in the present usage of boundary conditions in these one-dimensional problems is that we are going to apply the conditions at both ends of the allowed range of the variable. Usually the form of the differential equation or the boundary conditions on the solutions will guarantee that, at the ends of our interval (that is, at the boundary, as suggested by Eq. (10.3)), the following products will vanish:
\[ p(x)\,v^*(x)\,\frac{du(x)}{dx}\bigg|_{x=a} = 0 \quad\text{and}\quad p(x)\,v^*(x)\,\frac{du(x)}{dx}\bigg|_{x=b} = 0. \]  (10.19)
Here u(x) and v(x) are solutions of the particular ODE (Eq. (10.8)) being considered. A reason for this particular form of Eq. (10.19) is suggested shortly. If we recall the radial wave function u of the hydrogen atom, with u(0) = 0 and du/dr ∼ e^{−kr} → 0 as r → ∞, then both boundary conditions are satisfied. Similarly, in the deuteron Example 10.1.2, sin k₁r → 0 as r → 0 and d(e^{−k₂r})/dr → 0 as r → ∞, so both boundary conditions are obeyed. We can, however, work with a somewhat less restrictive set of boundary conditions,
\[ v^* p u'\big|_{x=a} = v^* p u'\big|_{x=b}, \]  (10.20)
in which u(x) and v(x) are solutions of the differential equation corresponding to the same or to different eigenvalues.
Equation (10.20) might well be satisfied if we were dealing with a periodic physical system, such as a crystal lattice. Equations (10.19) and (10.20) are written in terms of v*, the complex conjugate. When the solutions are real, v = v* and the asterisk may be ignored. However, in Fourier exponential expansions and in quantum mechanics the functions will be complex and the complex conjugate will be needed.

Example 10.1.3 INTEGRATION INTERVAL [a, b]

For L = d²/dx², a possible eigenvalue equation is
\[ \frac{d^2}{dx^2}u(x) + n^2 u(x) = 0, \]  (10.21)
with eigenfunctions u_n = cos nx, v_m = sin mx. Equation (10.20) becomes
\[ -n\sin mx\,\sin nx\,\Big|_a^b = 0, \quad\text{or}\quad m\cos mx\,\cos nx\,\Big|_a^b = 0, \]
interchanging u_n and v_m. Since sin mx and cos nx are periodic with period 2π (for n and m integral), Eq. (10.20) is clearly satisfied if a = x₀ and b = x₀ + 2π. If a problem prescribes a different interval, the eigenfunctions and eigenvalues will change along with the boundary conditions. The functions must always be chosen so that the boundary conditions (Eq. (10.20), etc.) are satisfied. For this case (Fourier series) the usual choices are x₀ = 0, leading to (0, 2π), and x₀ = −π, leading to (−π, π). Here and throughout the following several chapters the orthogonality interval is chosen so that the boundary conditions (Eq. (10.20)) will be satisfied. The interval [a, b] and the weighting factor w(x) for the most commonly encountered second-order differential equations are listed in Table 10.2. ■

Table 10.2

Equation                        a       b       w(x)
Legendre                        −1      1       1
Shifted Legendre                 0      1       1
Associated Legendre             −1      1       1
Chebyshev I                     −1      1       (1 − x²)^{−1/2}
Shifted Chebyshev I              0      1       [x(1 − x)]^{−1/2}
Chebyshev II                    −1      1       (1 − x²)^{1/2}
Laguerre                         0      ∞       e^{−x}
Associated Laguerre              0      ∞       x^k e^{−x}
Hermite                         −∞      ∞       e^{−x²}
Simple harmonic oscillator       0      2π      1
                                −π      π       1

1. The orthogonality interval [a, b] is determined by the boundary conditions of Section 10.1.
2. The weighting function is established by putting the ODE in self-adjoint form.
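The role of the interval length in Example 10.1.3 can be verified directly: the surface term v p u' of Eq. (10.20) takes the same value at the two ends of any interval of length 2π, but generally not otherwise. A minimal sketch (the particular m, n, and x₀ are arbitrary choices):

```python
import math

# Boundary-condition check of Eq. (10.20) for u_n = cos(n x), v_m = sin(m x),
# p(x) = 1: the surface term v * p * u' must match at the two endpoints.
def surface_term(m, n, x):
    return math.sin(m * x) * (-n * math.sin(n * x))   # v * p * u'

m, n, x0 = 3, 5, 0.4                 # arbitrary integers and starting point
a, b = x0, x0 + 2 * math.pi          # interval of one full period
assert abs(surface_term(m, n, a) - surface_term(m, n, b)) < 1e-10

# On an interval that is not a full period the condition generally fails:
bad_b = x0 + 1.0
assert abs(surface_term(m, n, a) - surface_term(m, n, bad_b)) > 1e-3
```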
Hermitian Operators

We now prove an important property of the self-adjoint, second-order differential operator (Eq. (10.8)), in conjunction with solutions u(x) and v(x) that satisfy boundary conditions given by Eq. (10.20). This is motivated by applications in quantum mechanics. By integrating v* (complex conjugate) times the second-order self-adjoint differential operator L (operating on u) over the range a ≤ x ≤ b, we obtain
\[ \int_a^b v^*\mathcal{L}u\,dx = \int_a^b v^*(pu')'\,dx + \int_a^b v^*qu\,dx, \]  (10.22)
using Eq. (10.6). Integrating by parts, we have
\[ \int_a^b v^*(pu')'\,dx = v^*pu'\Big|_a^b - \int_a^b v^{*\prime}pu'\,dx. \]  (10.23)
The integrated part vanishes on application of the boundary conditions (Eq. (10.20)). Integrating the remaining integral by parts a second time, we have
\[ \int_a^b v^{*\prime}pu'\,dx = v^{*\prime}pu\Big|_a^b - \int_a^b u(pv^{*\prime})'\,dx. \]  (10.24)
Again, the integrated part vanishes in an application of Eq. (10.20). A combination of Eqs. (10.22) to (10.24) gives us
\[ \int_a^b v^*\mathcal{L}u\,dx = \int_a^b u(\mathcal{L}v)^*\,dx. \]  (10.25)
This property, given by Eq. (10.25), is expressed by saying that the operator L is Hermitian with respect to the functions u(x) and v(x), which satisfy the boundary conditions specified by Eq. (10.20). Note that if this Hermitian property follows from self-adjointness in a Hilbert space, then it includes the requirement that boundary conditions are imposed on all functions of that space.

Hermitian Operators in Quantum Mechanics

The preceding development in this section has focused on the classical second-order differential operators of mathematical physics. Generalizing our Hermitian operator theory as required in quantum mechanics, we have an extension: The operators need be neither second-order differential operators nor real. For example, the linear momentum operator p_x = −iħ(∂/∂x) will be a Hermitian operator.
We simply assume (as is customary in quantum mechanics) that the wave functions satisfy appropriate boundary conditions: vanishing sufficiently strongly at infinity or having periodic behavior (as in a crystal lattice, or unit intensity for scattering problems). The operator L is called Hermitian if
\[ \int \psi_1^*\mathcal{L}\psi_2\,d\tau = \int (\mathcal{L}\psi_1)^*\psi_2\,d\tau. \]  (10.26)
Apart from the simple extension to complex quantities, this definition is identical with Eq. (10.25). The adjoint A† of an operator A is defined by
\[ \int \psi_1^* A^\dagger \psi_2\,d\tau \equiv \int (A\psi_1)^*\psi_2\,d\tau. \]  (10.27)
This generalizes our classical, second-derivative-operator–oriented definition, Eq. (10.5). Here the adjoint is defined in terms of the resultant integral, with A† as part of the integrand. Clearly, if A = A† (self-adjoint) and satisfies the aforementioned boundary conditions, then A is Hermitian.

The expectation value of an operator L is defined as
\[ \langle L\rangle = \int \psi^*\mathcal{L}\psi\,d\tau. \]  (10.28a)
In the framework of quantum mechanics ⟨L⟩ corresponds to the result of a measurement of the physical quantity represented by L when the physical system is in a state described by the wave function ψ. If we require L to be Hermitian, it is easy to show that ⟨L⟩ is real (as would be expected from a measurement in a physical theory). Taking the complex conjugate of Eq. (10.28a), we obtain
\[ \langle L\rangle^* = \left[\int \psi^*\mathcal{L}\psi\,d\tau\right]^* = \int \psi\mathcal{L}^*\psi^*\,d\tau. \]
Rearranging the factors in the integrand, we have
\[ \langle L\rangle^* = \int (\mathcal{L}\psi)^*\psi\,d\tau. \]
Then, applying our definition of Hermitian operator, Eq. (10.26), we get
\[ \langle L\rangle^* = \int \psi^*\mathcal{L}\psi\,d\tau = \langle L\rangle, \]  (10.28b)
or ⟨L⟩ is real. It is worth noting that ψ is not necessarily an eigenfunction of L.

Exercises

10.1.1 Show that Laguerre's ODE, Eq. (13.52), may be put into self-adjoint form by multiplying by e^{−x} and that w(x) = e^{−x} is the weighting function.

10.1.2 Show that the Hermite ODE, Eq. (13.10), may be put into self-adjoint form by multiplying by e^{−x²} and that this gives w(x) = e^{−x²} as the appropriate density function.

10.1.3 Show that the Chebyshev (type I) ODE, Eq.
(13.100), may be put into self-adjoint form by multiplying by (1 − x²)^{−1/2} and that this gives w(x) = (1 − x²)^{−1/2} as the appropriate density function.

10.1.4 Show the following when the linear second-order differential equation is expressed in self-adjoint form:
(a) The Wronskian is equal to a constant divided by the initial coefficient p:
\[ W(x) = \frac{C}{p(x)}. \]
(b) A second solution is given by
\[ y_2(x) = C\,y_1(x)\int^x \frac{dt}{p(t)[y_1(t)]^2}. \]

10.1.5 U_n(x), the Chebyshev polynomial (type II), satisfies the ODE, Eq. (13.101),
\[ (1 - x^2)U_n''(x) - 3xU_n'(x) + n(n+2)U_n(x) = 0. \]
(a) Locate the singular points that appear in the finite plane, and show whether they are regular or irregular.
(b) Put this equation in self-adjoint form.
(c) Identify the complete eigenvalue.
(d) Identify the weighting function.

10.1.6 For the very special case λ = 0 and q(x) = 0 the self-adjoint eigenvalue equation becomes
\[ \frac{d}{dx}\left[p(x)\frac{du(x)}{dx}\right] = 0, \]
satisfied by
\[ \frac{du}{dx} = \frac{1}{p(x)}. \]
Use this to obtain a "second" solution of the following: (a) Legendre's equation, (b) Laguerre's equation, (c) Hermite's equation.
ANS. (a) u₂(x) = (1/2) ln[(1 + x)/(1 − x)],
(b) u₂(x) − u₂(x₀) = ∫_{x₀}^x (e^t/t) dt,
(c) u₂(x) = ∫₀^x e^{t²} dt.
These second solutions illustrate the divergent behavior usually found in a second solution.
Note. In all three cases u₁(x) = 1.

10.1.7 Given that Lu = 0 and gLu is self-adjoint, show that for the adjoint operator L̄, L̄(gu) = 0.

10.1.8 For a second-order differential operator L that is self-adjoint, show that
\[ \int_a^b [y_2\mathcal{L}y_1 - y_1\mathcal{L}y_2]\,dx = p\,(y_1'y_2 - y_1y_2')\Big|_a^b. \]

10.1.9 Show that if a function ψ is required to satisfy Laplace's equation in a finite region of space and to satisfy Dirichlet boundary conditions over the entire closed bounding surface, then ψ is unique.
Hint. One of the forms of Green's theorem, Section 1.11, will be helpful.
10.1.10 Consider the solutions of the Legendre, Chebyshev, Hermite, and Laguerre equations to be polynomials. Show that the ranges of integration that guarantee that the Hermitian operator boundary conditions will be satisfied are
(a) Legendre [−1, 1], (b) Chebyshev [−1, 1], (c) Hermite (−∞, ∞), (d) Laguerre [0, ∞).

10.1.11 Within the framework of quantum mechanics (Eqs. (10.26) and following), show that the following are Hermitian operators:
(a) momentum p = −iħ∇ ≡ −i(h/2π)∇;
(b) angular momentum L = −iħ r × ∇ ≡ −i(h/2π) r × ∇.
Hint. In Cartesian form L is a linear combination of noncommuting Hermitian operators.

10.1.12 (a) A is a non-Hermitian operator. In the sense of Eqs. (10.26) and (10.27), show that A + A† and i(A − A†) are Hermitian operators.
(b) Using the preceding result, show that every non-Hermitian operator may be written as a linear combination of two Hermitian operators.

10.1.13 U and V are two arbitrary operators, not necessarily Hermitian. In the sense of Eq. (10.27), show that (UV)† = V†U†. Note the resemblance to Hermitian adjoint matrices.
Hint. Apply the definition of adjoint operator, Eq. (10.27).

10.1.14 Prove that the product of two Hermitian operators is Hermitian (Eq. (10.26)) if and only if the two operators commute.

10.1.15 A and B are noncommuting quantum mechanical operators: AB − BA = iC. Show that C is Hermitian. Assume that appropriate boundary conditions are satisfied.

10.1.16 The operator L is Hermitian. Show that ⟨L²⟩ ≥ 0.

10.1.17 A quantum mechanical expectation value is defined by ⟨A⟩ = ∫ψ*(x)Aψ(x) dx, where A is a linear operator. Show that demanding that ⟨A⟩ be real means that A must be Hermitian — with respect to ψ(x).

10.1.18 From the definition of adjoint, Eq. (10.27), show that A†† = A in the sense that ∫ψ₁* A††ψ₂ dτ = ∫ψ₁* Aψ₂ dτ. The adjoint of the adjoint is the original operator.
Hint. The functions ψ₁ and ψ₂ of Eq. (10.27) represent a class of functions.
The subscripts 1 and 2 may be interchanged or replaced by other subscripts.

10.1.19 The Schrödinger wave equation for the deuteron (with a Woods–Saxon potential) is
\[ -\frac{\hbar^2}{2M}\nabla^2\psi + \frac{V_0}{1 + \exp[(r - r_0)/a]}\psi = E\psi. \]
Here E = −2.224 MeV, and a is a "thickness parameter," 0.4 × 10⁻¹³ cm. Expressing lengths in fermis (10⁻¹³ cm) and energies in million electron volts (MeV), we may rewrite the wave equation as
\[ \frac{d^2}{dr^2}(r\psi) + \frac{1}{41.47}\left[E - \frac{V_0}{1 + \exp((r - r_0)/a)}\right](r\psi) = 0. \]
E is assumed known from experiment. The goal is to find V₀ for a specified value of r₀ (say, r₀ = 2.1). If we let y(r) = rψ(r), then y(0) = 0 and we take y′(0) = 1. Find V₀ such that y(20.0) = 0. (This should be y(∞), but r = 20 is far enough beyond the range of nuclear forces to approximate infinity.)
ANS. For a = 0.4 and r₀ = 2.1 fm, V₀ = −34.159 MeV.

10.1.20 Determine the nuclear potential well parameter V₀ of Exercise 10.1.19 as a function of r₀ for r = 2.00(0.05)2.25 fermis. Express your results as a power law |V₀|r₀^ν = k. Determine the exponent ν and the constant k. This power-law formulation is useful for accurate interpolation.

10.1.21 In Exercise 10.1.19 it was assumed that 20 fermis was a good approximation to infinity. Check on this by calculating V₀ for rψ(r) = 0 at (a) r = 15, (b) r = 20, (c) r = 25, and (d) r = 30. Sketch your results. Take r₀ = 2.10 and a = 0.4 (fermis).

10.1.22 For a quantum particle moving in a potential well, V(x) = (1/2)mω²x², the Schrödinger wave equation is
\[ -\frac{\hbar^2}{2m}\frac{d^2\psi(x)}{dx^2} + \frac{1}{2}m\omega^2x^2\psi(x) = E\psi(x), \]
or
\[ \frac{d^2\psi(z)}{dz^2} - z^2\psi(z) = -\frac{2E}{\hbar\omega}\psi(z), \]
where z = (mω/ħ)^{1/2}x. Since this operator is even, we expect solutions of definite parity. For the initial conditions that follow, integrate out from the origin and determine the minimum constant 2E/ħω that will lead to ψ(∞) = 0 in each case. (You may take z = 6 as an approximation of infinity.)
(a) For an even eigenfunction, ψ(0) = 1, ψ′(0) = 0.
(b) For an odd eigenfunction, ψ(0) = 0, ψ′(0) = 1.
Note. Analytical solutions appear in Section 13.1.

10.2 HERMITIAN OPERATORS

Hermitian, or self-adjoint, operators with appropriate boundary conditions have three properties that are of extreme importance in physics, both classical and quantum:
1. The eigenvalues of a Hermitian operator are real.
2. A Hermitian operator possesses an orthogonal set of eigenfunctions.
3. The eigenfunctions of a Hermitian operator form a complete set.⁶

Real Eigenvalues

We proceed to prove the first two of these three properties. Let
\[ \mathcal{L}u_i + \lambda_i w u_i = 0. \]  (10.29)
Assuming the existence of a second eigenvalue and eigenfunction,
\[ \mathcal{L}u_j + \lambda_j w u_j = 0. \]  (10.30)
Then, taking the complex conjugate, we obtain
\[ \mathcal{L}^*u_j^* + \lambda_j^* w u_j^* = 0. \]  (10.31)
Here w(x) ≥ 0 is a real function, but we permit λ_k, the eigenvalues, and u_k, the eigenfunctions, to be complex. Multiplying Eq. (10.29) by u_j* and Eq. (10.31) by u_i and then subtracting, we have
\[ u_j^*\mathcal{L}u_i - u_i\mathcal{L}^*u_j^* = (\lambda_j^* - \lambda_i)\,w\,u_i u_j^*. \]  (10.32)
We integrate over the range a ≤ x ≤ b:
\[ \int_a^b u_j^*\mathcal{L}u_i\,dx - \int_a^b u_i\mathcal{L}^*u_j^*\,dx = (\lambda_j^* - \lambda_i)\int_a^b u_i u_j^* w\,dx. \]  (10.33)
Since L is Hermitian, the left-hand side vanishes by Eq. (10.26) and
\[ (\lambda_j^* - \lambda_i)\int_a^b u_i u_j^* w\,dx = 0. \]  (10.34)
If i = j, the integral cannot vanish [w(x) > 0, apart from isolated points], except in the trivial case u_i = 0. Hence the coefficient (λ_i* − λ_i) must be zero,
\[ \lambda_i^* = \lambda_i, \]  (10.35)
which says that the eigenvalue is real. Since λ_i can represent any one of the eigenvalues, this proves the first property.

⁶ This third property is not universal. It does hold for our linear, second-order differential operators in Sturm–Liouville (self-adjoint) form. Completeness is defined and discussed in Section 10.4. A proof that the eigenfunctions of our linear, second-order, self-adjoint differential equations form a complete set may be developed from the calculus of variations of Section 17.8.
This is an exact analog of the nature of the eigenvalues of real symmetric (and of Hermitian) matrices (compare Section 3.5). The analog of the spectral decomposition of a real symmetric matrix in Section 3.5, for a Hermitian operator L with a discrete set of eigenvalues λ_i, takes the form
\[ \mathcal{L} = \sum_i \lambda_i |u_i\rangle\langle u_i|, \qquad f(\mathcal{L}) = \sum_i f(\lambda_i)|u_i\rangle\langle u_i|, \]
with eigenvectors |u_i⟩ and any infinitely differentiable function f.

Real eigenvalues of Hermitian operators have a fundamental significance in quantum mechanics. In quantum mechanics the eigenvalues correspond to precisely measurable quantities, such as energy and angular momentum. With the theory formulated in terms of Hermitian operators, this proof of real eigenvalues guarantees that the theory will predict real numbers for these measurable physical quantities. In Section 17.8 it will be seen that the set of real eigenvalues has a lower bound (for nonrelativistic problems).

Orthogonal Eigenfunctions

If we now take i ≠ j and λ_i ≠ λ_j in Eq. (10.34), the integral of the product of the two different eigenfunctions must vanish:
\[ \int_a^b u_i u_j^* w\,dx = 0. \]  (10.36)
This condition, called orthogonality, is the continuum analog of the vanishing of a scalar product of two vectors.⁷ We say that the eigenfunctions u_i(x) and u_j(x) are orthogonal with respect to the weighting function w(x) over the interval [a, b]. Equation (10.36) constitutes a partial proof of the second property of our Hermitian operators. Again, the precise analogy with matrix analysis should be noted. Indeed, we can establish a one-to-one correspondence between this Sturm–Liouville theory of differential equations and the treatment of Hermitian matrices. Historically, this correspondence has been significant in establishing the mathematical equivalence of matrix mechanics developed by Heisenberg and wave mechanics developed by Schrödinger.
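Both the Hermitian property, Eq. (10.25), and the orthogonality relation, Eq. (10.36), can be checked numerically for the self-adjoint Legendre operator Lu = [(1 − x²)u']'. The sketch below is illustrative; the finite-difference stencil, step sizes, and the non-eigenfunction test function v(x) = x² are arbitrary choices. Note that p(x) = 1 − x² vanishes at x = ±1, so the boundary conditions (10.19) hold automatically.

```python
# Numerical check of Eq. (10.25) (Hermiticity) and Eq. (10.36)
# (orthogonality) for the Legendre operator L u = d/dx[(1 - x^2) u'].
P2 = lambda x: 0.5 * (3 * x**2 - 1)        # Legendre P_2, eigenvalue 6
P3 = lambda x: 0.5 * (5 * x**3 - 3 * x)    # Legendre P_3, eigenvalue 12

def L(f, x, h=1e-4):
    """Apply L f = ((1 - x^2) f')' via nested central differences."""
    g = lambda t: (1 - t * t) * (f(t + h) - f(t - h)) / (2 * h)
    return (g(x + h) - g(x - h)) / (2 * h)

def integrate(f, a=-1.0, b=1.0, steps=4000):
    h = (b - a) / steps                     # midpoint rule
    return sum(f(a + (i + 0.5) * h) for i in range(steps)) * h

# Hermiticity <v|Lu> = <Lv|u> with u = P2 and a non-eigenfunction v = x^2.
v = lambda x: x * x
lhs = integrate(lambda x: v(x) * L(P2, x))
rhs = integrate(lambda x: P2(x) * L(v, x))
assert abs(lhs - rhs) < 1e-3               # both come out to -8/5 analytically

# Orthogonality of eigenfunctions with distinct eigenvalues (w = 1):
assert abs(integrate(lambda x: P2(x) * P3(x))) < 1e-6

# Eigenvalue check in the sense of Eq. (10.8): L P2 + 6 P2 = 0.
assert abs(L(P2, 0.3) + 6 * P2(0.3)) < 1e-4
```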
Today, the two diverse approaches are merged into the theory of quantum mechanics, and the mathematical formulation that is more convenient for a particular problem is used for that problem. Actually the mathematical alternatives do not end here. Integral equations, Chapter 16, form a third equivalent and sometimes more convenient or more powerful approach.

This proof of orthogonality is not quite complete. There is a loophole, because we may have u_i ≠ u_j but still have λ_i = λ_j. Such a case is labeled degenerate. Illustrations of degeneracy are given at the end of this section. If λ_i = λ_j, the integral in Eq. (10.34) need not vanish. This means that linearly independent eigenfunctions corresponding to the same eigenvalue are not automatically orthogonal and that some other method must be sought to obtain an orthogonal set. Although the eigenfunctions in this degenerate case may not be orthogonal, they can always be made orthogonal. One method is developed in the next section. See also Eq. (4.21) for degeneracy due to symmetry.

We shall see in succeeding chapters that it is just as desirable to have a given set of functions orthogonal as it is to have an orthogonal coordinate system. We can work with nonorthogonal functions, but they are likely to prove as messy as an oblique coordinate system.

Example 10.2.1 FOURIER SERIES — ORTHOGONALITY

To continue Example 10.1.3, the eigenvalue equation, Eq. (10.21),
\[ \frac{d^2}{dx^2}y(x) + n^2 y(x) = 0, \]

⁷ From the definition of the Riemann integral,
\[ \int_a^b f(x)g(x)\,dx = \lim_{N\to\infty}\sum_{i=1}^N f(x_i)g(x_i)\,\Delta x, \]
where x₀ = a, x_N = b, and x_i − x_{i−1} = Δx. If we interpret f(x_i) and g(x_i) as the ith components of an N-component vector, then this sum (and therefore this integral) corresponds directly to a scalar product of vectors, Eq. (1.24). The vanishing of the scalar product is the condition for orthogonality of the vectors — or functions.
10.2 Hermitian Operators 637 may describe a quantum mechanical particle in a box, or perhaps a vibrating violin string, a classical harmonic oscillator with degenerate eigenfunctions — cos nx, sin nx — and eigenvalues n2 , n an integer. With n real (here taken to be integral), the orthogonality integrals become (a) x0 +2π x0 (b) (c) x0 +2π x0 x0 +2π x0 sin mx sin nx dx = Cn δnm , cos mx cos nx dx = Dn δnm , sin mx cos nx dx = 0. For an interval of 2π the preceding analysis guarantees the Kronecker delta in (a) and (b) but not the zero in (c) because (c) may involve degenerate eigenfunctions. However, inspection shows that (c) always vanishes for all integral m and n. Our Sturm–Liouville theory says nothing about the values of Cn and Dn because homogeneous ODEs have solutions whose scaling is arbitrary. Actual calculation yields + + π, n = 0, π, n = 0, Cn = Dn = 2π, n = 0. 0, n = 0, These orthogonality integrals form the basis of the Fourier series developed in Chapter 14.  Example 10.2.2 EXPANSION IN ORTHOGONAL EIGENFUNCTIONS—SQUARE WAVE The property of completeness (see Eq. (1.190) and Section 10.4) means that certain classes of functions (for example, sectionally or piecewise continuous) may be represented by a series of orthogonal eigenfunctions. Consider the square-wave shape  h   , 0 < x < π, (10.37) f (x) = 2  −h, −π < x < 0. 2 This function may be expanded in any of a variety of eigenfunctions — Legendre, Hermite, Chebyshev, and so on. The choice of eigenfunction is made on the basis of convenience or an application. To illustrate the expansion technique, let us choose the eigenfunctions of Example 10.2.1, cos nx and sin nx. The eigenfunction series is conveniently (and conventionally) written as f (x) = ∞ a0  + (am cos mx + bm sin mx). 
Upon multiplying f(t) by cos nt or sin nt and integrating, only the nth term survives, by the orthogonality integrals of Example 10.2.1, thus yielding the coefficients
\[ a_n = \frac{1}{\pi}\int_{-\pi}^{\pi} f(t)\cos nt\,dt, \qquad b_n = \frac{1}{\pi}\int_{-\pi}^{\pi} f(t)\sin nt\,dt, \qquad n = 0, 1, 2, \ldots. \]
Direct substitution of ±h/2 for f(t) yields a_n = 0, which is expected here because of the antisymmetry, f(−x) = −f(x), and
\[ b_n = \frac{h}{n\pi}(1 - \cos n\pi) = \begin{cases} 0, & n \text{ even}, \\[4pt] \dfrac{2h}{n\pi}, & n \text{ odd}. \end{cases} \]
Hence the eigenfunction (Fourier) expansion of the square wave is
\[ f(x) = \frac{2h}{\pi}\sum_{n=0}^{\infty}\frac{\sin(2n+1)x}{2n+1}. \]  (10.38)
Additional examples, using other eigenfunctions, appear in Chapters 11 and 12. ■

Degeneracy

The concept of degeneracy was introduced earlier. If N linearly independent eigenfunctions correspond to the same eigenvalue, the eigenvalue is said to be N-fold degenerate. A particularly simple illustration is provided by the eigenvalues and eigenfunctions of the classical harmonic oscillator equation of Example 10.2.1. For each eigenvalue n², there are two possible solutions: sin nx and cos nx (and any linear combination of them, n an integer). We say the eigenfunctions are degenerate or the eigenvalue is degenerate.

A more involved example is furnished by the physical system of an electron in an atom (nonrelativistic treatment, spin neglected). From the Schrödinger equation, Eq. (13.84) for hydrogen, the total energy of the electron is our eigenvalue. We may label it E_{nLM}, using the quantum numbers n, L, and M as subscripts. For each distinct set of quantum numbers (n, L, M) there is a distinct, linearly independent eigenfunction ψ_{nLM}(r, θ, φ). For hydrogen, the energy E_{nLM} is independent of L and M, reflecting the spherical (and SO(4)) symmetry of the Coulomb potential. With 0 ≤ L ≤ n − 1 and −L ≤ M ≤ L, the eigenvalue is n²-fold degenerate (including the electron spin would raise this to 2n²).
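The square-wave expansion of Example 10.2.2 can be checked numerically: partial sums of Eq. (10.38) converge to h/2 inside (0, π), and the coefficients obtained by direct integration match 2h/nπ for odd n. In the sketch below the amplitude h = 1 and the quadrature settings are arbitrary choices.

```python
import math

h = 1.0   # arbitrary square-wave amplitude

def partial_sum(x, terms):
    """Partial sum of Eq. (10.38): (2h/pi) * sum sin((2n+1)x)/(2n+1)."""
    return (2 * h / math.pi) * sum(
        math.sin((2 * n + 1) * x) / (2 * n + 1) for n in range(terms))

# The series converges to +h/2 on (0, pi); the error at x = pi/2 shrinks
# as more terms are kept.
err_10 = abs(partial_sum(math.pi / 2, 10) - h / 2)
err_1000 = abs(partial_sum(math.pi / 2, 1000) - h / 2)
assert err_1000 < err_10 < 0.05

def b(n, steps=20000):
    """Coefficient b_n = (1/pi) * integral of f(t) sin(nt) over (-pi, pi),
    computed with the midpoint rule for the square wave f = +-h/2."""
    dt = 2 * math.pi / steps
    total = 0.0
    for i in range(steps):
        t = -math.pi + (i + 0.5) * dt
        f = h / 2 if t > 0 else -h / 2
        total += f * math.sin(n * t) * dt
    return total / math.pi

assert abs(b(3) - 2 * h / (3 * math.pi)) < 1e-4   # odd n: b_n = 2h/(n pi)
assert abs(b(4)) < 1e-8                            # even n: b_n = 0
```

The slow 1/n falloff of the coefficients is characteristic of expanding a discontinuous function; near x = 0 and x = π the partial sums overshoot (the Gibbs phenomenon).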
In atoms with more than one electron, the electrostatic potential is no longer a simple r^{−1} potential. The energy depends on L as well as on n, although not on M; E_{nLM} is still (2L + 1)-fold degenerate. This degeneracy, due to the rotational invariance of the potential, may be removed by applying an external magnetic field, breaking the spherical symmetry and giving rise to the Zeeman effect.

As a rule, the eigenfunctions form a Hilbert space, that is, a complete vector space of functions with a metric defined by the inner product (see Section 10.4 for more details and examples). Often an underlying symmetry, such as rotational invariance, is causing the degeneracies. States belonging to the same energy eigenvalue then form a multiplet, or representation of the symmetry group. The powerful group-theoretical methods are treated in some detail in Chapter 4.

Exercises

10.2.1  The functions u_1(x) and u_2(x) are eigenfunctions of the same Hermitian operator but for distinct eigenvalues λ_1 and λ_2. Prove that u_1(x) and u_2(x) are linearly independent.

10.2.2  (a) The vectors e_n are orthogonal to each other: e_n · e_m = 0 for n ≠ m. Show that they are linearly independent.
(b) The functions ψ_n(x) are orthogonal to each other over the interval [a, b] and with respect to the weighting function w(x). Show that the ψ_n(x) are linearly independent.

10.2.3  Given that

$$P_1(x) = x \quad\text{and}\quad Q_0(x) = \frac{1}{2}\ln\!\left(\frac{1+x}{1-x}\right)$$

are solutions of Legendre's differential equation corresponding to different eigenvalues:
(a) Evaluate their orthogonality integral

$$\int_{-1}^{1} \frac{x}{2}\,\ln\!\left(\frac{1+x}{1-x}\right) dx.$$

(b) Explain why these two functions are not orthogonal, that is, why the proof of orthogonality does not apply.

10.2.4  T_0(x) = 1 and V_1(x) = (1 − x^2)^{1/2} are solutions of the Chebyshev differential equation corresponding to different eigenvalues. Explain, in terms of the boundary conditions, why these two functions are not orthogonal.
10.2.5  (a) Show that the first derivatives of the Legendre polynomials satisfy a self-adjoint differential equation with eigenvalue λ = n(n + 1) − 2.
(b) Show that these Legendre polynomial derivatives satisfy an orthogonality relation

$$\int_{-1}^{1} P_m'(x)\, P_n'(x)\, (1 - x^2)\, dx = 0, \qquad m \ne n.$$

Note. In Section 12.5, (1 − x^2)^{1/2} P_n'(x) will be labeled an associated Legendre polynomial, P_n^1(x).

10.2.6  A set of functions u_n(x) satisfies the Sturm–Liouville equation

$$\frac{d}{dx}\!\left[p(x)\frac{d}{dx}u_n(x)\right] + \lambda_n w(x)\, u_n(x) = 0.$$

The functions u_m(x) and u_n(x) satisfy boundary conditions that lead to orthogonality. The corresponding eigenvalues λ_m and λ_n are distinct. Prove that for appropriate boundary conditions u_m'(x) and u_n'(x) are orthogonal with p(x) as a weighting function.

10.2.7  A linear operator A has n distinct eigenvalues and n corresponding eigenfunctions: Aψ_i = λ_i ψ_i. Show that the n eigenfunctions are linearly independent. A is not necessarily Hermitian.
Hint. Assume linear dependence, that is, that $\psi_n = \sum_{i=1}^{n-1} a_i \psi_i$. Use this relation and the operator–eigenfunction equation first in one order and then in the reverse order. Show that a contradiction results.

10.2.8  (a) Show that the Liouville substitution

$$u(x) = v(\xi)\,[p(x)\,w(x)]^{-1/4}, \qquad \xi = \int_a^x \left[\frac{w(t)}{p(t)}\right]^{1/2} dt$$

transforms

$$\frac{d}{dx}\!\left[p(x)\frac{du}{dx}\right] + \left[\lambda w(x) - q(x)\right]u(x) = 0$$

into

$$\frac{d^2 v}{d\xi^2} + \left[\lambda - Q(\xi)\right]v(\xi) = 0,$$

where

$$Q(\xi) = \frac{q(x(\xi))}{w(x(\xi))} + \left[p(x(\xi))\,w(x(\xi))\right]^{-1/4} \frac{d^2}{d\xi^2}(pw)^{1/4}.$$

(b) If v_1(ξ) and v_2(ξ) are obtained from u_1(x) and u_2(x), respectively, by a Liouville substitution, show that $\int_a^b w(x)\, u_1 u_2\, dx$ is transformed into $\int_0^c v_1(\xi)\, v_2(\xi)\, d\xi$, with $c = \int_a^b [w/p]^{1/2}\, dx$.

10.2.9  The ultraspherical polynomials C_n^{(α)}(x) are solutions of the differential equation

$$\left[(1 - x^2)\frac{d^2}{dx^2} - (2\alpha + 1)x\frac{d}{dx} + n(n + 2\alpha)\right] C_n^{(\alpha)}(x) = 0.$$

(a) Transform this differential equation into self-adjoint form.
(b) Show that the C_n^{(α)}(x) are orthogonal for different n.
Specify the interval of integration and the weighting factor.
Note. Assume that your solutions are polynomials.

10.2.10  With L not self-adjoint,

$$L u_i + \lambda_i w u_i = 0 \quad\text{and}\quad \bar{L} v_j + \lambda_j w v_j = 0.$$

(a) Show that

$$\int_a^b v_j\, L u_i\, dx = \int_a^b u_i\, \bar{L} v_j\, dx,$$

provided

$$\left[u_i\, p_0\, v_j'\right]_a^b = \left[v_j\, p_0\, u_i'\right]_a^b \quad\text{and}\quad \left[u_i\, (p_1 - p_0')\, v_j\right]_a^b = 0.$$

(b) Show that the orthogonality integral for the eigenfunctions u_i and v_j becomes

$$\int_a^b u_i\, v_j\, w\, dx = 0 \qquad (\lambda_i \ne \lambda_j).$$

10.2.11  In Exercise 9.5.8 the series solution of the Chebyshev equation is found to be convergent for all eigenvalues n. Therefore n is not quantized by the argument used for Legendre's equation (Exercise 9.5.5). Calculate the sum of the indicial equation k = 0 Chebyshev series for n = ν = 0.8, 0.9, and 1.0 and for x = 0.0(0.1)0.9.
Note. The Chebyshev series recurrence relation is given in Exercise 5.2.16.

10.2.12  (a) Evaluate the n = ν = 0.9, indicial equation k = 0 Chebyshev series for x = 0.98, 0.99, and 1.00. The series converges very slowly at x = 1.00. You may wish to use double precision. Upper bounds to the error in your calculation can be set by comparison with the ν = 1.0 case, which corresponds to (1 − x^2)^{1/2}.
(b) These series solutions for eigenvalue ν = 0.9 and for ν = 1.0 are obviously not orthogonal, despite the fact that they satisfy a self-adjoint eigenvalue equation with different eigenvalues. From the behavior of the solutions in the vicinity of x = 1.00, try to formulate a hypothesis as to why the proof of orthogonality does not apply.

10.2.13  The Fourier expansion of the (asymmetric) square wave is given by Eq. (10.38). With h = 2, evaluate this series for x = 0(π/18)π/2, using the first (a) 10 terms, (b) 100 terms of the series.
Note. For 10 terms and x = π/18, or 10°, your Fourier representation has a sharp hump. This is the Gibbs phenomenon of Section 14.5. For 100 terms this hump has been shifted over to about 1°.
10.2.14  The symmetric square wave

$$f(x) = \begin{cases} 1, & |x| < \dfrac{\pi}{2},\\[4pt] -1, & \dfrac{\pi}{2} < |x| < \pi \end{cases}$$

has a Fourier expansion

$$f(x) = \frac{4}{\pi}\sum_{n=0}^{\infty} (-1)^n\, \frac{\cos(2n+1)x}{2n+1}.$$

Evaluate this series for x = 0(π/18)π/2, using the first (a) 10 terms, (b) 100 terms of the series.
Note. As in Exercise 10.2.13, the Gibbs phenomenon appears at the discontinuity. This means that a Fourier series is not suitable for precise numerical work in the vicinity of a discontinuity.

10.3 GRAM–SCHMIDT ORTHOGONALIZATION

The Gram–Schmidt orthogonalization is a method that takes a nonorthogonal set of linearly independent vectors (see Section 3.1) or functions⁸ and constructs an orthogonal set of vectors or functions over an arbitrary interval and with respect to an arbitrary weight or density factor. In the language of linear algebra, the process is equivalent to a matrix transformation relating an orthogonal set of basis vectors (functions) to a nonorthogonal set. A specific example of this matrix transformation appears in Exercise 12.2.1. Next we apply the Gram–Schmidt procedure to a set of functions. The functions involved may be real or complex. Here, for convenience, they are assumed to be real. The generalization to the complex case offers no difficulty.

Before taking up orthogonalization, we should consider normalization of functions. So far no normalization has been specified. This means that

$$\int_a^b \varphi_i^2\, w\, dx = N_i^2,$$

but no attention has been paid to the value of N_i. Since our basic equation (Eq. (10.8)) is linear and homogeneous, we may multiply our solution by any constant and it will still be a solution. We now demand that each solution φ_i(x) be multiplied by N_i^{−1}, so that the new (normalized) φ_i will satisfy

$$\int_a^b \varphi_i^2(x)\, w(x)\, dx = 1 \tag{10.39}$$

and

$$\int_a^b \varphi_i(x)\, \varphi_j(x)\, w(x)\, dx = \delta_{ij}. \tag{10.40}$$

Equation (10.39) says that we have normalized to unity. Including the property of orthogonality, we have Eq. (10.40).
Functions satisfying this equation are said to be orthonormal (orthogonal plus unit normalization). Other normalizations are certainly possible, and indeed, by historical convention, each of the special functions of mathematical physics treated in Chapters 12 and 13 will be normalized differently.

We consider three sets of functions: an original, linearly independent given set u_n(x), n = 0, 1, 2, …; an orthogonalized set ψ_n(x) to be constructed; and a final set of functions φ_n(x), which are the normalized ψ_n. The original u_n may be degenerate eigenfunctions, but this is not necessary. We shall have the following properties:

u_n(x): linearly independent, nonorthogonal, unnormalized;
ψ_n(x): linearly independent, orthogonal, unnormalized;
φ_n(x): linearly independent, orthogonal, normalized (orthonormal).

The Gram–Schmidt procedure takes the nth ψ function, ψ_n, to be u_n(x) plus an unknown linear combination of the previous φ. The presence of the new u_n(x) will guarantee linear independence. The requirement that ψ_n(x) be orthogonal to each of the previous φ yields just enough constraints to determine each of the unknown coefficients. Then the fully determined ψ_n is normalized to unity, yielding φ_n(x). Then the sequence of steps is repeated for ψ_{n+1}(x).

We start with n = 0, letting

$$\psi_0(x) = u_0(x), \tag{10.41}$$

with no "previous" φ to worry about. Then we normalize:

$$\varphi_0(x) = \frac{\psi_0(x)}{\left[\int \psi_0^2\, w\, dx\right]^{1/2}}. \tag{10.42}$$

For n = 1, let

$$\psi_1(x) = u_1(x) + a_{1,0}\,\varphi_0(x). \tag{10.43}$$

⁸ Such a set of functions might well arise from the solutions of a PDE in which the eigenvalue was independent of one or more of the constants of separation. As an example, we have the hydrogen atom problem (Sections 10.2 and 13.2). The eigenvalue (energy) is independent of both the electron orbital angular momentum and its projection on the z-axis, m. Note, however, that the origin of the set of functions is irrelevant to the Gram–Schmidt orthogonalization procedure.
We demand that ψ_1(x) be orthogonal to φ_0(x). (At this stage the normalization of ψ_1(x) is irrelevant.) This orthogonality leads to

$$\int \psi_1\, \varphi_0\, w\, dx = \int u_1\, \varphi_0\, w\, dx + a_{1,0}\int \varphi_0^2\, w\, dx = 0. \tag{10.44}$$

Since φ_0 is normalized to unity (Eq. (10.42)), we have

$$a_{1,0} = -\int u_1\, \varphi_0\, w\, dx, \tag{10.45}$$

fixing the value of a_{1,0}. Normalizing, we define

$$\varphi_1(x) = \frac{\psi_1(x)}{\left(\int \psi_1^2\, w\, dx\right)^{1/2}}. \tag{10.46}$$

Finally, we generalize so that

$$\varphi_i(x) = \frac{\psi_i(x)}{\left(\int \psi_i^2(x)\, w(x)\, dx\right)^{1/2}}, \tag{10.47}$$

where

$$\psi_i(x) = u_i + a_{i,0}\varphi_0 + a_{i,1}\varphi_1 + \cdots + a_{i,i-1}\varphi_{i-1}. \tag{10.48}$$

The coefficients a_{i,j} are given by

$$a_{i,j} = -\int u_i\, \varphi_j\, w\, dx. \tag{10.49}$$

Equation (10.49) holds for unit normalization. If some other normalization is selected,

$$\int_a^b \left[\varphi_j(x)\right]^2 w(x)\, dx = N_j^2,$$

then Eq. (10.47) is replaced by

$$\varphi_i(x) = N_i\, \frac{\psi_i(x)}{\left(\int \psi_i^2\, w\, dx\right)^{1/2}} \tag{10.47a}$$

and a_{i,j} becomes

$$a_{i,j} = -\frac{\int u_i\, \varphi_j\, w\, dx}{N_j^2}. \tag{10.49a}$$

Equations (10.48) and (10.49) may be rewritten in terms of projection operators, P_j. If we consider the φ_n(x) to form a linear vector space, then the integral in Eq. (10.49) may be interpreted as the projection of u_i into the φ_j "coordinate," or the jth component of u_i. With

$$P_j\, u_i(x) = \left[\int u_i(t)\,\varphi_j(t)\,w(t)\, dt\right] \varphi_j(x),$$

Eq. (10.48) becomes

$$\psi_i(x) = \left[1 - \sum_{j=0}^{i-1} P_j\right] u_i(x). \tag{10.48a}$$

Subtracting off the components j = 0 to i − 1 leaves ψ_i(x) orthogonal to all the previous φ_j(x).

It will be noticed that although this Gram–Schmidt procedure is one possible way of constructing an orthogonal or orthonormal set, the functions φ_i(x) are not unique. There is an infinite number of possible orthonormal sets for a given interval and a given density function. As an illustration of the freedom involved, consider two (nonparallel) vectors A and B in the xy-plane. We may normalize A to unit magnitude and then form B′ = aA + B so that B′ is perpendicular to A. By normalizing B′ we have completed the Gram–Schmidt orthogonalization for two vectors.
But any two perpendicular unit vectors, such as x̂ and ŷ, could have been chosen as our orthonormal set. Again, with an infinite number of possible rotations of x̂ and ŷ about the z-axis, we have an infinite number of possible orthonormal sets.

Example 10.3.1  LEGENDRE POLYNOMIALS BY GRAM–SCHMIDT ORTHOGONALIZATION

Let us form an orthonormal set from the set of functions u_n(x) = x^n, n = 0, 1, 2, …. The interval is −1 ≤ x ≤ 1 and the density function is w(x) = 1. In accordance with the Gram–Schmidt orthogonalization process described, u_0 = 1, hence

$$\varphi_0 = \frac{1}{\sqrt{2}}. \tag{10.50}$$

Then

$$\psi_1(x) = x + a_{1,0}\,\frac{1}{\sqrt{2}} \tag{10.51}$$

and

$$a_{1,0} = -\int_{-1}^{1} \frac{x}{\sqrt{2}}\, dx = 0 \tag{10.52}$$

by symmetry. We normalize ψ_1 to obtain

$$\varphi_1(x) = \sqrt{\frac{3}{2}}\, x. \tag{10.53}$$

Then we continue the Gram–Schmidt procedure with

$$\psi_2(x) = x^2 + a_{2,0}\,\frac{1}{\sqrt{2}} + a_{2,1}\sqrt{\frac{3}{2}}\, x, \tag{10.54}$$

where

$$a_{2,0} = -\int_{-1}^{1} \frac{x^2}{\sqrt{2}}\, dx = -\frac{\sqrt{2}}{3}, \tag{10.55}$$

$$a_{2,1} = -\int_{-1}^{1} \sqrt{\frac{3}{2}}\, x^3\, dx = 0, \tag{10.56}$$

again by symmetry. Therefore

$$\psi_2(x) = x^2 - \frac{1}{3}, \tag{10.57}$$

and, on normalizing to unity, we have

$$\varphi_2(x) = \sqrt{\frac{5}{2}} \cdot \frac{3x^2 - 1}{2}. \tag{10.58}$$

The next function, φ_3(x), becomes

$$\varphi_3(x) = \sqrt{\frac{7}{2}} \cdot \frac{5x^3 - 3x}{2}. \tag{10.59}$$

Reference to Chapter 12 will show that

$$\varphi_n(x) = \sqrt{\frac{2n+1}{2}}\, P_n(x), \tag{10.60}$$

where P_n(x) is the nth-order Legendre polynomial. Our Gram–Schmidt process provides a possible but very cumbersome method of generating the Legendre polynomials. It illustrates how a power-series expansion in u_n(x) = x^n, which is not orthogonal, can be converted into an orthogonal series. ∎

The equations for Gram–Schmidt orthogonalization tend to be ill-conditioned because of the subtractions, Eqs. (10.48) and (10.49).
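Example 10.3.1 can be replayed numerically. The sketch below (plain Python with a Simpson-rule inner product; all names and the quadrature step count are my own choices, and at this low order the ill-conditioning just mentioned is not yet a problem) applies Eqs. (10.41)–(10.49) to u_n = x^n on [−1, 1] with w = 1 and reproduces φ_2 of Eq. (10.58):

```python
# A numerical replay of Example 10.3.1: Gram-Schmidt on u_n(x) = x**n over
# [-1, 1] with w(x) = 1, following Eqs. (10.41)-(10.49).
import math

def inner(f, g, a=-1.0, b=1.0, steps=2000):
    """<f|g> = integral_a^b f(x) g(x) dx by the composite Simpson rule."""
    h = (b - a) / steps
    s = f(a) * g(a) + f(b) * g(b)
    for k in range(1, steps):
        x = a + k * h
        s += (4 if k % 2 else 2) * f(x) * g(x)
    return s * h / 3.0

def gram_schmidt(n_max):
    """Return the orthonormal functions phi_0 ... phi_{n_max}."""
    phis = []
    for n in range(n_max + 1):
        u = lambda x, n=n: x ** n
        # a_{n,j} = -<u_n|phi_j>, Eq. (10.49); psi_n as in Eq. (10.48)
        coeffs = [inner(u, p) for p in phis]
        psi = lambda x, u=u, cs=tuple(coeffs), ps=tuple(phis): (
            u(x) - sum(c * p(x) for c, p in zip(cs, ps)))
        norm = math.sqrt(inner(psi, psi))        # normalization, Eq. (10.47)
        phis.append(lambda x, psi=psi, norm=norm: psi(x) / norm)
    return phis

phis = gram_schmidt(2)
# phi_2 should agree with sqrt(5/2) * (3x^2 - 1)/2, Eq. (10.58):
print(phis[2](0.5), math.sqrt(2.5) * (3 * 0.25 - 1) / 2)
```

The same loop, with the interval and weight in `inner` changed, would generate the other families collected in Table 10.3 below (up to normalization and sign).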
A technique for avoiding this difficulty, using the polynomial recurrence relation, is discussed by Hamming.⁹

In Example 10.3.1 we have specified an orthogonality interval [−1, 1], a unit weighting function, and a set of functions x^n to be taken one at a time in increasing order. Given all these specifications, the Gram–Schmidt procedure is unique (to within a normalization factor and an overall sign, as discussed subsequently).

Our resulting orthogonal set, the Legendre polynomials P_0 up through P_n, forms a complete set for the description of polynomials of order ≤ n over [−1, 1]. This concept of completeness is taken up in detail in Section 10.4. Expansions of functions in series of Legendre polynomials are found in Section 12.3.

Orthogonal Polynomials

Example 10.3.1 has been chosen strictly to illustrate the Gram–Schmidt procedure. Although it has the advantage of introducing the Legendre polynomials, the initial functions u_n = x^n are not degenerate eigenfunctions and are not solutions of Legendre's equation. They are simply a set of functions that we have here rearranged to create an orthonormal set for the given interval and given weighting function. The fact that we obtained the Legendre polynomials is not quite black magic but a direct consequence of the choice of interval and weighting function. The use of u_n(x) = x^n, but with other choices of interval and weighting function, leads to the other families below.

Table 10.3  Orthogonal Polynomials Generated by Gram–Schmidt Orthogonalization of u_n(x) = x^n, n = 0, 1, 2, …

Polynomials            Interval         Weighting function w(x)
Legendre               −1 ≤ x ≤ 1       1
Shifted Legendre       0 ≤ x ≤ 1        1
Chebyshev I            −1 ≤ x ≤ 1       (1 − x^2)^{−1/2}
Shifted Chebyshev I    0 ≤ x ≤ 1        [x(1 − x)]^{−1/2}
Chebyshev II           −1 ≤ x ≤ 1       (1 − x^2)^{1/2}
Laguerre               0 ≤ x < ∞        e^{−x}

10.4 COMPLETENESS OF EIGENFUNCTIONS

Bessel's Inequality

Since w(x) > 0, the integrand is nonnegative. The integral vanishes by Eq. (10.62) if we have a complete set. Otherwise it is positive. Expanding the squared term, we obtain

$$\int_a^b \left[f(x)\right]^2 w(x)\, dx - 2\sum_i a_i \int_a^b f(x)\,\varphi_i(x)\,w(x)\, dx + \sum_i a_i^2 \ge 0. \tag{10.72}$$

Applying Eq. (10.64), we have

$$\int_a^b \left[f(x)\right]^2 w(x)\, dx \ge \sum_i a_i^2. \tag{10.73}$$
Hence the sum of the squares of the expansion coefficients a_i is less than or equal to the weighted integral of [f(x)]^2, the equality holding if and only if the expansion is exact, that is, if the set of functions φ_n(x) is a complete set. In later chapters, when we consider eigenfunctions that form complete sets (such as the Legendre polynomials), Eq. (10.73) with the equal sign holding will be called a Parseval relation.

Bessel's inequality has a variety of uses, including proof of the convergence of the Fourier series.

Schwarz Inequality

The frequently used Schwarz inequality is similar to the Bessel inequality. Consider the quadratic equation with unknown x:

$$\sum_{i=1}^{n} (a_i x + b_i)^2 = \sum_{i=1}^{n} a_i^2 \left(x + \frac{b_i}{a_i}\right)^2 = 0, \tag{10.74}$$

with real a_i, b_i. If b_i/a_i = c, a constant independent of the index i, then the solution is x = −c. If b_i/a_i is not a constant in i, all terms cannot vanish simultaneously for real x, so the solution must be complex. Expanding, we find that

$$x^2 \sum_{i=1}^{n} a_i^2 + 2x \sum_{i=1}^{n} a_i b_i + \sum_{i=1}^{n} b_i^2 = 0, \tag{10.75}$$

and since x is complex (or = −b_i/a_i), the quadratic formula¹³ for x leads to

$$\left(\sum_{i=1}^{n} a_i b_i\right)^2 \le \left(\sum_{i=1}^{n} a_i^2\right)\left(\sum_{i=1}^{n} b_i^2\right), \tag{10.76}$$

the equality holding when b_i/a_i equals a constant, independent of i. Once more, in terms of vectors, we have

$$(\mathbf{a} \cdot \mathbf{b})^2 = a^2 b^2 \cos^2\theta \le a^2 b^2, \tag{10.77}$$

where θ is the angle included between a and b.

The analogous Schwarz inequality for complex functions has the form

$$\left|\int_a^b f^*(x)\, g(x)\, dx\right|^2 \le \int_a^b f^*(x)\, f(x)\, dx \int_a^b g^*(x)\, g(x)\, dx, \tag{10.78}$$

the equality holding if and only if g(x) = α f(x), α being a constant. To prove this function form of the Schwarz inequality,¹⁴ consider the complex function ψ(x) = f(x) + λg(x), with λ a complex constant, where f(x) and g(x) are any two square-integrable functions (for which the integrals on the right-hand side exist).
Multiplying by the complex conjugate and integrating, we obtain

$$\int_a^b \psi^* \psi\, dx \equiv \int_a^b f^* f\, dx + \lambda \int_a^b f^* g\, dx + \lambda^* \int_a^b g^* f\, dx + \lambda \lambda^* \int_a^b g^* g\, dx \ge 0. \tag{10.79}$$

The ≥ 0 appears since ψ*ψ is nonnegative, the equal (=) sign holding only if ψ(x) is identically zero. Noting that λ and λ* are linearly independent, we differentiate with respect to one of them and set the derivative equal to zero to minimize $\int_a^b \psi^*\psi\, dx$:

$$\frac{\partial}{\partial \lambda^*} \int_a^b \psi^* \psi\, dx = \int_a^b g^* f\, dx + \lambda \int_a^b g^* g\, dx = 0.$$

This yields

$$\lambda = -\frac{\int_a^b g^* f\, dx}{\int_a^b g^* g\, dx}. \tag{10.80a}$$

Taking the complex conjugate, we obtain

$$\lambda^* = -\frac{\int_a^b f^* g\, dx}{\int_a^b g^* g\, dx}. \tag{10.80b}$$

Substituting these values of λ and λ* back into Eq. (10.79), we obtain Eq. (10.78), the Schwarz inequality.

¹³ With negative (or zero) discriminant.
¹⁴ An alternate derivation is provided by the inequality

$$\iint \left[f(x)g(y) - f(y)g(x)\right]^* \left[f(x)g(y) - f(y)g(x)\right] dx\, dy \ge 0.$$

In quantum mechanics f(x) and g(x) might each represent a state or configuration of a physical system, that is, a linear combination of wave functions. Then the Schwarz inequality gives an upper limit for the absolute value of the inner product $\int_a^b f^*(x)\,g(x)\, dx$. In some texts the Schwarz inequality is a key step in the derivation of the Heisenberg uncertainty principle.

The function notation of Eqs. (10.78) and (10.79) is relatively cumbersome. In advanced mathematical physics, and especially in quantum mechanics, it is common to use the Dirac bra-ket notation. In this notation we simply understand the range of integration, (a, b), and the presence of the weighting function w(x) ≥ 0. The Schwarz inequality then takes the elegant form

$$\left|\langle f | g \rangle\right|^2 \le \langle f | f \rangle \langle g | g \rangle. \tag{10.78a}$$

If g(x) is a normalized eigenfunction, φ_i(x), Eq. (10.78) yields (here with w(x) = 1)

$$a_i^* a_i \le \int_a^b f^*(x)\, f(x)\, dx, \tag{10.81}$$

a result that also follows from Eq. (10.73).
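Bessel's inequality, Eq. (10.73), can be watched in action for the square wave of Example 10.2.2. In the sketch below (plain Python; the variable names are mine), the orthonormal functions are taken to be φ_n(x) = sin nx/√π on (−π, π), so that a_n = √π b_n with b_n from Eq. (10.38). Since this sine set is complete for the odd function f, the partial sums of Σ a_n² climb toward ∫f² dx = πh²/2 without ever exceeding it, which is the Parseval relation:

```python
import math

h = 2.0
parseval_limit = math.pi * h * h / 2   # integral of f(x)^2 over (-pi, pi)

def a_sq(n):
    """Square of the orthonormal coefficient a_n = sqrt(pi) * b_n,
    with b_n = 2h/(n*pi) for odd n and 0 for even n, Eq. (10.38)."""
    return math.pi * (2 * h / (n * math.pi)) ** 2 if n % 2 else 0.0

partials = [sum(a_sq(n) for n in range(1, N + 1)) for N in (10, 100, 10000)]
print(partials, parseval_limit)   # increasing, bounded above by pi*h^2/2
```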
For useful representations of Dirac's delta function in terms of orthogonal sets of functions, and for the relation between closure and completeness, we refer to the relevant subsection of Section 1.15, including Exercise 1.15.16; for coordinate versus momentum representations in quantum mechanics, see Section 15.6.

Summary — Vector Spaces, Completeness

Here we summarize some properties of vector spaces, first with the vectors taken to be the familiar real vectors of Chapter 1 and then with the vectors taken to be ordinary functions. The concept of completeness has been developed for finite vector spaces (Chapter 1, Eq. (1.5)) and carries over into infinite vector spaces. For example, in three-dimensional Euclidean space every vector can be written as a linear combination of the three coordinate unit vectors (representing a basis), with the vector's Cartesian components as the expansion coefficients. Or a periodic function of an infinite vector space can be expanded in terms of the set of periodic functions sin nx, cos nx, n = 0, 1, 2, …, that form a basis of this space. Since any periodic function with reasonable properties (spelled out in Chapter 14) can be expanded in terms of these sine and cosine functions, they are complete and form a basis of such a linear function space.

1v. We shall describe our vector space with a set of n linearly independent vectors e_i, i = 1, 2, …, n. If n = 3, then e_1 = x̂, e_2 = ŷ, and e_3 = ẑ. The n e_i span the linear vector space.

1f. We shall describe our vector (function) space with a set of n linearly independent functions φ_i(x), i = 0, 1, …, n − 1. The index i starts with 0 to agree with the labeling of the classical polynomials. Here φ_i(x) is assumed to be a polynomial of degree i. The n φ_i(x) span the linear vector (function) space.

2v. The vectors in our vector space satisfy the following relations (Section 1.2; the vector components are numbers):
a. Vector addition is commutative: u + v = v + u
b. Vector addition is associative: [u + v] + w = u + [v + w]
c. There is a null vector: 0 + v = v
d. Multiplication by a scalar is
   distributive: a[u + v] = au + av,
   distributive: (a + b)u = au + bu,
   associative: a[bu] = (ab)u
e. Multiplication
   by the unit scalar: 1u = u,
   by zero: 0u = 0
f. Negative vector: (−1)u = −u.

2f. The functions in our linear function space satisfy the properties listed for vectors (substitute "function" for "vector"):

f(x) + g(x) = g(x) + f(x)
[f(x) + g(x)] + h(x) = f(x) + [g(x) + h(x)]
0 + f(x) = f(x)
a[f(x) + g(x)] = af(x) + ag(x)
(a + b)f(x) = af(x) + bf(x)
a[bf(x)] = (ab)f(x)
1 · f(x) = f(x)
0 · f(x) = 0
(−1) · f(x) = −f(x).

3v. In n-dimensional vector space an arbitrary vector c is described by its n components (c_1, c_2, …, c_n), or

$$\mathbf{c} = \sum_{i=1}^{n} c_i\, \mathbf{e}_i.$$

When the n e_i (1) are linearly independent and (2) span the n-dimensional vector space, then the e_i form a basis and constitute a complete set.

3f. In n-dimensional function space a polynomial of degree m ≤ n − 1 is described by

$$f(x) = \sum_{i=0}^{n-1} c_i\, \varphi_i(x).$$

When the n φ_i(x) (1) are linearly independent and (2) span the n-dimensional function space, then the φ_i(x) form a basis and constitute a complete set (for describing polynomials of degree m ≤ n − 1).

4v. An inner product (scalar, dot product) of a vector space is defined by

$$\mathbf{c} \cdot \mathbf{d} = \sum_{i=1}^{n} c_i\, d_i.$$
a The choice of the weighting function w(x) and the interval (a, b) follows from the differential equation satisfied by ϕi (x) and the boundary conditions — Section 10.1. In matrix terminology, Section 3.2, |g is a column vector and f | is a row vector, the adjoint of |f , where both may have infinitely many components. For example, if we expand g(x) = i gi ϕi (x), then |g has the ith component gi in a column vector and |f  has fi∗ as its ith component in a row vector. The inner product has the properties listed for vectors: a. f |g + h = f |g + f |h b. f |ag = af |g c. f |g = g|f ∗ . 5v. Orthogonality: ej · ej = 0, i = j. If the nei are not already orthogonal, the Gram–Schmidt process may be used to create an orthogonal set. 5f. Orthogonality: b ϕi |ϕj  = i = j. ϕi∗ (x)ϕj (x)w(x) dx = 0, a If the nϕi (x) are not already orthogonal, the Gram–Schmidt process (Section 10.3) may be used to create an orthogonal set. 6v. Definition of norm: 1/2  n |c| = (c · c)1/2 = . ci2 i=1 The basis vectors ei are taken to have unit norm (length) ei · ei = 1. The components of c are given by ci = ei · c, i = 1, 2, . . . , n. 6f. Definition of norm: f  = f |f 1/2 = a  f (x)2 w(x) dx b 1/2 =  n−1 i=0 |ci |2 1/2 , 10.4 Completeness of Eigenfunctions 657 Parseval’s identity. f  > 0 unless f (x) is identically zero. The basis functions ϕi (x) may be taken to have unit norm (unit normalization), ϕi  = 1. The expansion coefficients of our polynomial f (x) are given by ci = ϕi |f , i = 0, 1, . . . , n − 1. 7v. Bessel’s inequality: c·c≥  ci2 . i If the equals sign holds for all c, it indicates that the ei span the vector space; that is, they are complete. 7f. Bessel’s inequality: b    f (x)2 w(x) dx ≥ f |f  = |ci |2 . a i If the equals sign holds for all allowable f , it indicates that the ϕi (x) span the function space; that is, they are complete. 8v. Schwarz’ inequality: |c · d| ≤ |c| · |d|. The equals sign holds when c is a multiple of d. 
If the angle included between c and d is θ , then | cos θ | ≤ 1. 8f. Schwarz’ inequality:   f |g ≤ f |f 1/2 g|g1/2 = f  · g. The equals sign holds when f (x) and g(x) are linearly dependent, that is, when f (x) is a multiple of g(x). Now, let n → ∞, forming an infinite-dimensional linear vector space, l 2 . 9v. In an infinite-dimensional space our vector c is c= ∞  ∞  ci2 < ∞. ci ei . i=1 We require that i=1 The components of c are given by ci = ei · c, i = 1, 2, . . . , ∞, exactly as in a finite-dimensional vector space. Then let n → ∞, forming an infinite-dimensional vector (function) space L2 . Then L stands for Lebesgue, the superscript 2 for the quadratic norm, that is, the 2 in |f (x)|2 . Our functions need no longer be polynomials, but we do require that f (x) be at least piecewise 658 Chapter 10 Sturm–Liouville Theory — Orthogonal Functions b continuous (Dirichlet conditions for Fourier series) and that f |f  = a |f (x)|2 w(x) dx exist. This latter condition is often stated as a requirement that f (x) be square integrable. 9f. Cauchy sequence (generalized Fourier expansion): Expand f (x) = ∞ i=0 fi ϕi (x) and let fn (x) = n  fi ϕi (x). i=0 If or f (x) − fn (x) → 0 as n → ∞ 2  n    f (x) −  w(x) dx = 0, f ϕ (x) i i   n→∞ lim i=0 then we have convergence in the mean. This is analogous to the partial sum–Cauchy sequence criterion for the convergence of an infinite series, Section 5.1. If every Cauchy sequence of allowable vectors (square integrable, piecewise continuous functions) converges to a limit vector in our linear space, the space is said to be complete. Then f (x) = ∞  ci ϕi (x) (almost everywhere) i=0 in the sense of convergence in the mean. As noted before, this is a weaker requirement than pointwise convergence (fixed value of x) or uniform convergence. Expansion Coefficients For a function f its expansion coefficients are defined as ci = ϕi |f , i = 0, 1, . . . , ∞, exactly as in a finite-dimensional vector space. Hence  f (x) = ϕi |f ϕi (x). 
i A linear space (finite- or infinite-dimensional) that (1) has an inner product defined (f |g) and (2) is complete is a Hilbert space. Infinite-dimensional Hilbert space provides a natural mathematical frame-work for modern quantum mechanics. Away from quantum mechanics, Hilbert space retains its abstract mathematical power and beauty and has many uses. 10.4 Completeness of Eigenfunctions 659 Exercises 10.4.1 A function f (x) is expanded in a series of orthonormal eigenfunctions f (x) = ∞  an ϕn (x). n=0 Show that the series expansion is unique for a given set of ϕn (x). The functions ϕn (x) are being taken here as the basis vectors in an infinite-dimensional Hilbert space. 10.4.2 A function f (x) is represented by a finite set of basis functions ϕi (x), f (x) = 10.4.3 N  ci ϕi (x). i=1 Show that the components ci are unique, that no different set ci′ exists. Note. Your basis functions are automatically linearly independent. They are not necessarily orthogonal. i A function f (x) is approximated by a power series n−1 i=0 ci x over the interval [0, 1]. Show that minimizing the mean square error leads to a set of linear equations Ac = b, where Aij = 0 1 x i+j dx = 1 , i +j +1 i, j = 0, 1, 2, . . . , n − 1 and bi = 1 x i f (x) dx, i = 0, 1, 2, . . . , n − 1. 0 Note. The Aij are the elements of the Hilbert matrix of order n. The determinant of this Hilbert matrix is a rapidly decreasing function of n. For n = 5, det A = 3.7 × 10−12 and the set of equations Ac = b is becoming ill-conditioned and unstable. 10.4.4 In place of the expansion of a function F (x) given by F (x) = ∞  an ϕn (x), n=0 with an = b F (x)ϕn (x)w(x) dx, a take the finite series approximation F (x) ≈ m  n=0 cn ϕn (x). 660 Chapter 10 Sturm–Liouville Theory — Orthogonal Functions Show that the mean square error 2 b m  F (x) − cn ϕn (x) w(x) dx a n=0 is minimized by taking cn = an . Note. The values of the coefficients are independent of the number of terms in the finite series. 
This independence is a consequence of orthogonality and would not hold for a least-squares fit using powers of x. 10.4.5 From Example 10.2.2,  h   , f (x) = 2  −h, 2 (a)   0 t), according to the construction of our Green’s function, giving b b t G(x, t)ρ(x)y(x) dx. (10.115) G2 (x, t)Ly(x) dx = λ G1 (x, t)Ly(x) dx − − a t a Note that t is the upper limit for the G1 integrals and the lower limit for the G2 integrals. We are going to reduce the left-hand side of Eq. (10.115) to y(t). Then, with G(x, t) = G(t, x), we have Eq. (10.112) (with x and t interchanged). Applying Green’s theorem to the left-hand side or, equivalently, integrating by parts, we obtain    t d d − p(x) y(x) + q(x)y(x) dx G1 (t, x) dx dx a  t  x=t ∂ ′  = − G1 (x, t)p(x)y (x) x=a + G1 (x, t) p(x)y ′ (x) dx ∂x a t − G1 (x, t)q(x)y(x) dx, (10.116) a with an equivalent expression for the second integral. A second integration by parts yields t t y(x)LG1 (x, t) dx G1 (x, t)Ly(x) dx = − − a a  x=t − G1 (x, t)p(x)y ′ (x) x=a x=t  + G′1 (x, t)p(x)y(x) x=a . (10.117) The integral on the right vanishes because LG1 = 0. By combining the integrated terms with those from integrating G2 , we have    ∂ ∂ ′ ′   −p(t) G1 (t, t)y (t) − y(t) G1 (x, t) x=t − G2 (t, t)y (t) + y(t) G2 (x, t) x=t ∂x ∂x   ∂ + p(a) y ′ (a)G1 (a, t) − y(a) G1 (x, t)x=a ∂x   ∂ (10.118) − p(b) G2 (b, t)y ′ (b) − y(b) G2 (x, t)x=b . ∂x Each of the last two expressions vanishes, for G(x, t) and y(x) satisfy the same boundary conditions. The first expression, with the help of Eqs. (10.95) and (10.96), reduces to y(t). Substituting into Eq. (10.115), we have Eq. (10.112), thus completing the demonstration of the equivalence of the integral equation and the differential equation plus boundary conditions. Example 10.5.1 LINEAR OSCILLATOR As a simple example, consider the linear oscillator equation (for a vibrating string): y ′′ (x) + λy(x) = 0. 
(10.119) 10.5 Green’s Function — Eigenfunction Expansion 669 We impose the conditions y(0) = y(1) = 0, which correspond to a string clamped at both ends. Now, to construct our Green’s function, we need solutions of the homogeneous equation Ly(x) = 0, which is y ′′ (x) = 0. To satisfy the boundary conditions, we must have one solution vanish at x = 0, the other at x = 1. Such solutions (unnormalized) are u(x) = x, v(x) = 1 − x. (10.120) We find that uv ′ − vu′ = −1 or, by Eq. (10.101) with p(x) = 1, A = −1. Our Green’s function becomes + x(1 − t), 0 ≤ x < t, G(x, t) = t (1 − x), t < x ≤ 1. Hence by Eq. (10.112) our clamped vibrating string satisfies 1 y(x) = λ G(x, t)y(t) dt. (10.121) (10.122) (10.123) 0 You may show that the known solutions of Eq. (10.119), λ = n2 π 2 , y = sin nπx, do indeed satisfy Eq. (10.123). Note that our eigenvalue λ is not the wavelength.  Green’s Function and the Dirac Delta Function One more approach to the Green’s function may shed additional light on our formulation and particularly on its relation to physical problems. Let us refer once more to Poisson’s equation, this time for a point charge: ρpoint ∇ 2 ϕ(r) = − . (10.124) ε0 The Green’s function solution of this equation was developed in Section 9.7. This time let us take a one-dimensional analog Ly(x) + f (x)point = 0. (10.125) Here f (x)point refers to a unit point “charge,” or a point force. We may represent it by a number of forms, but perhaps the most convenient is   1 , t − ε < x < t + ε, f (x)point = 2ε (10.126)  0, elsewhere, which is essentially the same as Eq. (1.172). Then, integrating Eq. (10.125), we have t+ε t+ε f (x)point dx = −1 (10.127) Ly(x) dx = − t−ε t−ε 670 Chapter 10 Sturm–Liouville Theory — Orthogonal Functions from the definition of f (x). Let us examine Ly(x) more closely. We have t+ε t+ε d  p(x)y ′ (x) dx + q(x)y(x) dx t−ε dx t−ε t+ε  t+ε = p(x)y ′ (x)t−ε + q(x)y(x) dx = −1. 
(10.128) t−ε In the limit ε → 0 we may satisfy this relation by permitting y ′ (x) to have a discontinuity of −1/p(x) at x = t, y(x) itself remaining continuous.20 These, however, are just the properties used to define our Green’s function, G(x, t). In addition, we note that in the limit ε → 0, f (x)point = δ(x − t), (10.129) in which δ(x − t) is our Dirac delta function, defined in this manner in Section 1.15. Hence Eq. (10.125) has become LG(x, t) = −δ(x − t). (10.130) This is a one-dimensional version of Eq. (9.159), which we exploit for the development of Green’s functions in two and three dimensions — Section 9.7. It will be recalled that we used this relation in Section 9.7 to determine our Green’s functions. Equation (10.130) could have been expected since it is actually a consequence of our differential equation, Eq. (10.92), and Green’s function integral solution, Eq. (10.97). If we let Lx (subscript to emphasize that it operates on the x-dependence) operate on both sides of Eq. (10.97), then b Lx y(x) = Lx G(x, t)f (t) dt. a By Eq. (10.92) the left-hand side is just −f (x). On the right Lx , is independent of the variable of integration t, so we may write b ' ( Lx G(x, t) f (t) dt. −f (x) = a By definition of Dirac’s delta function, Eqs. (1.171b) and (1.183), we have Eq. (10.130). Exercises 10.5.1 Show that G(x, t) = + x, 0 ≤ x < t, t, t < x ≤ 1, is the Green’s function for the operator L = d 2 /dx 2 and the boundary conditions y(0) = 0, y ′ (1) = 0. 20 The functions p(x) and q(x) appearing in the operator L are continuous functions. With y(x) remaining continuous, q(x)y(x) dx is certainly continuous. Hence this integral over an interval 2ε (Eq. (10.128)) vanishes as ε vanishes. 10.5 Green’s Function — Eigenfunction Expansion 10.5.2 Find the Green’s function for (a) (b) 10.5.3 671 + d 2 y(x) + y(x), Ly(x) = dx 2 Ly(x) = d 2 y(x) dx 2 − y(x), y(0) = 0, y ′ (1) = 0. y(x) finite for −∞ < x < ∞. 
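Both Green's functions just discussed lend themselves to a quick numerical check. The sketch below (Python with scipy; the sample points and tolerances are arbitrary choices of mine, not from the text) verifies that the clamped-string Green's function of Example 10.5.1 reproduces y = sin nπx with λ = n²π², as in Eq. (10.123), and that the Green's function of Exercise 10.5.1 yields the solution of y'' = −1 satisfying y(0) = 0, y'(1) = 0, namely x − x²/2.

```python
import numpy as np
from scipy.integrate import quad

def G_string(x, t):
    # Green's function of Example 10.5.1, Eq. (10.122): y(0) = y(1) = 0
    return x * (1 - t) if x <= t else t * (1 - x)

def G_mixed(x, t):
    # Green's function of Exercise 10.5.1: y(0) = 0, y'(1) = 0
    return min(x, t)

def integral(G, f, x):
    # int_0^1 G(x, t) f(t) dt, split at t = x where dG/dt jumps
    lo, _ = quad(lambda t: G(x, t) * f(t), 0, x)
    hi, _ = quad(lambda t: G(x, t) * f(t), x, 1)
    return lo + hi

n = 2
lam = (n * np.pi) ** 2
for x in (0.15, 0.5, 0.85):
    # Eq. (10.123): y(x) = lam * int G y  for  y = sin(n pi x), lam = n^2 pi^2
    y = lambda t: np.sin(n * np.pi * t)
    assert abs(lam * integral(G_string, y, x) - y(x)) < 1e-7
    # Ly = -1 with these boundary conditions has solution x - x^2/2
    assert abs(integral(G_mixed, lambda t: 1.0, x) - (x - x * x / 2)) < 1e-7
```

The split of the integration range at t = x matters numerically: G is continuous there, but its derivative jumps by −1/p(t), exactly the discontinuity discussed above.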
Find the Green’s function for the operators (a) Ly(x) =   dy(x) d x . dx dx ANS. G(x, t) = (b) + − ln t, 0 ≤ x < t, − ln x, t < x ≤ 1.   d dy(x) n2 Ly(x) = x − y(x), with y(0) finite and y(1) = 0. dx dx x   n  1 x  n   , − (xt)  2n t ANS. G(x, t) =   n  1 t  n  , − (xt)  2n x 0 ≤ x < t, t < x ≤ 1. The combination of operator and interval specified in Exercise 10.5.3(a) is pathological, in that one of the endpoints of the interval (zero) is a singular point of the operator. As a consequence, the integrated part (the surface integral of Green’s theorem) does not vanish. The next four exercises explore this situation. 10.5.4 (a) (b) Show that the particular solution of  d d x y(x) = −1 dx dx is yP (x) = −x. Show that yP (x) = −x = 1 G(x, t)(−1) dt, 0 where G(x, t) is the Green’s function of Exercise 10.5.3(a). 10.5.5 Show that Green’s theorem, Eq. (1.104) in one dimension with a Sturm–Liouville-type operator (d/dt)p(t)(d/dt) replacing ∇ · ∇, may be rewritten as     b d dv(t) d du(t) u(t) p(t) − v(t) p(t) dt dt dt dt dt a  du(t) b dv(t) = u(t)p(t) − v(t)p(t) . dt dt a 672 Chapter 10 Sturm–Liouville Theory — Orthogonal Functions 10.5.6 10.5.7 Using the one-dimensional form of Green’s theorem of Exercise 10.5.5, let   d dy(t) v(t) = y(t) and p(t) = −f (t), dt dt   ∂G(x, t) d p(t) = −δ(x − t). u(t) = G(x, t) and dt ∂t Show that Green’s theorem yields t=b b  ∂ dy(t) y(x) = − y(t)p(t) G(x, t)  . G(x, t)f (t) dt + G(x, t)p(t) dt ∂t a t=a For p(t) = t, y(t) = −t, G(x, t) = + − ln t, 0≤x − 21 . Hint. Here is a chance to use series expansion and term-by-term integration. The formulas of Section 8.4 will prove useful. 11.1 Bessel Functions of the First Kind, Jν (x) (b) 691 Transform the integral in part (a) into  ν π 1 x cos(x cos θ ) sin2ν θ dθ Jν (x) = π 1/2 (ν − 21 )! 2 0  ν π x 1 = e±ix cos θ sin2ν θ dθ 1 1/2 π (ν − 2 )! 2 0  ν 1 x 1 e±ipx (1 − p 2 )ν−1/2 dp. = π 1/2 (ν − 21 )! 2 −1 These are alternate integral representations of Jν (x). 
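The Poisson-type integral representation of part (b) is easy to spot-check numerically against scipy's Bessel routines. A minimal sketch (the orders, arguments, and tolerance below are my own arbitrary choices):

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import jv, gamma

def jv_poisson(nu, x):
    # Poisson integral: J_nu(x) = (x/2)^nu / (pi^(1/2) Gamma(nu + 1/2))
    #                             * int_0^pi cos(x cos th) sin^(2 nu) th dth
    # valid for nu > -1/2
    integral, _ = quad(
        lambda th: np.cos(x * np.cos(th)) * np.sin(th) ** (2 * nu), 0, np.pi
    )
    return (x / 2) ** nu / (np.sqrt(np.pi) * gamma(nu + 0.5)) * integral

for nu, x in [(0.0, 1.0), (0.5, 3.0), (2.0, 5.5)]:
    assert abs(jv_poisson(nu, x) - jv(nu, x)) < 1e-7
```

For ν = 1/2 the integral can be done in closed form (substitute p = cos θ) and gives √(2/πx) sin x, the first spherical-Bessel case mentioned at the end of this chapter's program.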
11.1.19 (a) From   1 x ν 2 Jν (x) = t −ν−1 et−x /4t dt 2πi 2 derive the recurrence relation ν Jν (x) − Jν+1 (x). x Jν′ (x) = (b) From 1 Jν (x) = 2πi t −ν−1 e(x/2)(t−1/t) dt derive the recurrence relation Jν′ (x) = 11.1.20 Show that the recurrence relation Jn′ (x) = 1 2 1 2  Jν−1 (x) − Jν+1 (x) .  Jn−1 (x) − Jn+1 (x) follows directly from differentiation of 1 π Jn (x) = cos(nθ − x sin θ ) dθ. π 0 11.1.21 Evaluate ∞ e−ax J0 (bx) dx, a, b > 0. 0 Actually the results hold for a ≥ 0, −∞ < b < ∞. This is a Laplace transform of J0 . Hint. Either an integral representation of J0 or a series expansion will be helpful. 11.1.22 Using trigonometric forms, verify that J0 (br) = 11.1.23 (a) 1 2π 2π eibr sin θ dθ. 0 Plot the intensity (2 of Eq. (11.35)) as a function of (sin α/λ) along a diameter of the circular diffraction pattern. Locate the first two minima. 692 Chapter 11 Bessel Functions (b) What fraction of the total light intensity falls within the central maximum? Hint. [J1 (x)]2 /x may be written as a derivative and the area integral of the intensity integrated by inspection. 11.1.24 The fraction of light incident on a circular aperture (normal incidence) that is transmitted is given by 2ka 2ka 1 dx T =2 − J2 (x) J2 (x) dx. x 2ka 0 0 Here a is the radius of the aperture and k is the wave number, 2π/λ. Show that 2ka ∞ 1  1 (a) T = 1 − J2n+1 (2ka), (b) T = 1 − J0 (x) dx. ka 2ka 0 n=0 11.1.25 The amplitude U (ρ, ϕ, t) of a vibrating circular membrane of radius a satisfies the wave equation ∇2 U − 1 ∂ 2U = 0. v 2 ∂t 2 Here v is the phase velocity of the wave fixed by the elastic constants and whatever damping is imposed. (a) (b) Show that a solution is   U (ρ, ϕ, t) = Jm (kρ) a1 eimϕ + a2 e−imϕ b1 eiωt + b2 e−iωt . From the Dirichlet boundary condition, Jm (ka) = 0, find the allowable values of the wavelength λ(k = 2π/λ). Note. There are other Bessel functions besides Jm , but they all diverge at ρ = 0. This is shown explicitly in Section 11.3. 
The divergent behavior is actually implicit in Eq. (11.6). 11.1.26 Example 11.1.2 describes the TM modes of electromagnetic cavity oscillation. The transverse electric (TE) modes differ, in that we work from the z component of the magnetic induction B: ∇ 2 Bz + α 2 Bz = 0 with boundary conditions Bz (0) = Bz (l) = 0 and  ∂Bz  = 0. ∂ρ ρ=0 Show that the TE resonant frequencies are given by , 2 βmn p2 π 2 + 2 , p = 1, 2, 3, . . . . ωmnp = c 2 a l 11.1 Bessel Functions of the First Kind, Jν (x) 693 11.1.27 Plot the three lowest TM and the three lowest TE angular resonant frequencies, ωmnp , as a function of the radius/length (a/ l) ratio for 0 ≤ a/ l ≤ 1.5. Hint. Try plotting ω2 (in units of c2 /a 2 ) versus (a/ l)2 . Why this choice? 11.1.28 A thin conducting disk of radius a carries a charge q. Show that the potential is described by ∞ q sin ka ϕ(r, z) = dk, e−k|z| J0 (kr) 4πε0 a 0 k where J0 is the usual Bessel function and r and z are the familiar cylindrical coordinates. Note. This is a difficult problem. One approach is through Fourier transforms such as Exercise 15.3.11. For a discussion of the physical problem see Jackson (Classical Electrodynamics in Additional Readings). 11.1.29 Show that a 0 x m Jn (x) dx, m ≥ n ≥ 0, is integrable in terms of Bessel functions and powers of x (such as a p Jq (a)) for m + n odd; a (b) may be reduced to integrated terms plus 0 J0 (x)dx for m + n even. (a) 11.1.30 Show that 0 α0n   α0n y 1 J0 (y)y dy = 1− J0 (y) dy. α0n α0n 0 Here α0n is the nth root of J0 (y). This relation is useful (see Exercise 11.2.11): The expression on the right is easier and quicker to evaluate — and much more accurate. Taking the difference of two terms in the expression on the left leads to a large relative error. 11.1.31 The circular aperature diffraction amplitude  of Eq. (17.35) is proportional to f (z) = J1 (z)/z. The corresponding single slit diffraction amplitude is proportional to g(z) = sin z/z. 
(a) Calculate and plot f (z) and g(z) for z = 0.0(0.2)12.0. (b) Locate the two lowest values of z(z > 0) for which f (z) takes on an extreme value. Calculate the corresponding values of f (z). (c) Locate the two lowest values of z(z > 0) for which g(z) takes on an extreme value. Calculate the corresponding values of g(z). 11.1.32 Calculate the electrostatic potential of a charged disk ϕ(r, z) from the integral form of Exercise 11.1.28. Calculate the potential for r/a = 0.0(0.5)2.0 and z/a = 0.25(0.25)1.25. Why is z/a = 0 omitted? Exercise 12.3.17 is a spherical harmonic version of this same problem. 694 11.2 Chapter 11 Bessel Functions ORTHOGONALITY If Bessel’s equation, Eq. (11.22a), is divided by ρ, we see that it becomes self-adjoint, and therefore, by the Sturm–Liouville theory, Section 10.2, the solutions are expected to be orthogonal — if we can arrange to have appropriate boundary conditions satisfied. To take care of the boundary conditions for a finite interval [0, a], we introduce parameters a and ανm into the argument of Jν to get Jν (ανm ρ/a). Here a is the upper limit of the cylindrical radial coordinate ρ. From Eq. (11.22a),    2      ρ d ρ ανm ρ ν 2 ρ d2 + + Jν ανm = 0. (11.45) Jν ανm − ρ 2 Jν ανm a dρ a ρ a dρ a2 Changing the parameter ανm to ανn , we find that Jν (ανn ρ/a) satisfies      2    d ανn ρ ν 2 ρ d2 ρ ρ α + + J = 0. Jν ανn ρ 2 Jν ανn − νn ν a dρ a ρ a dρ a2 (11.45a) Proceeding as in Section 10.2, we multiply Eq. (11.45) by Jν (ανn ρ/a) and Eq. (11.45a) by Jν (ανm ρ/a) and subtract, obtaining         ρ d d d ρ ρ ρ d ρ Jν ανm − Jν ανm ρ Jν ανn Jν ανn a dρ dρ a a dρ dρ a     ρ α2 − α2 ρ Jν ανn . (11.46) = νn 2 νm ρJν ανm a a a Integrating from ρ = 0 to ρ = a, we obtain       a  a  ρ d ρ d d d ρ ρ ρ Jν ανm dρ − ρ Jν ανn dρ Jν ανm Jν ανn a dρ dρ a a dρ dρ a 0 0     2 − α2 a ρ ανn ρ νm Jν ανn ρ dρ. (11.47) = Jν ανm a a a2 0 Upon integrating by parts, we see that the left-hand side of Eq. 
(11.47) becomes       a   a     ρJν ανn ρ d Jν ανm ρ  − ρJν ανm ρ d Jν ανn ρ  .  a dρ a 0  a dρ a 0 (11.48) For ν ≥ 0 the factor ρ guarantees a zero at the lower limit, ρ = 0. Actually the lower limit on the index ν may be extended down to ν > −1, Exercise 11.2.4.12 At ρ = a, each expression vanishes if we choose the parameters ανn and ανm to be zeros, or roots of Jν ; that is, Jν (ανm ) = 0. The subscripts now become meaningful: ανm is the mth zero of Jν . With this choice of parameters, the left-hand side vanishes (the Sturm–Liouville boundary conditions are satisfied) and for m = n,    a  ρ ρ Jν ανm Jν ανn ρ dρ = 0. (11.49) a a 0 This gives us orthogonality over the interval [0, a]. 12 The case ν = −1 reverts to ν = +1, Eq. (11.8). 11.2 Orthogonality 695 Normalization The normalization integral may be developed by returning to Eq. (11.48), setting ανn = ανm + ε, and taking the limit ε → 0 (compare Exercise 11.2.2). With the aid of the recurrence relation, Eq. (11.16), the result may be written as  a  2 ρ 2 a2  Jν ανm Jν+1 (ανm ) . ρ dρ = (11.50) a 2 0 Bessel Series If we assume that the set of Bessel functions Jν (ανm ρ/a))(ν fixed, m = 1, 2, 3, . . .) is complete, then any well-behaved but otherwise arbitrary function f (ρ) may be expanded in a Bessel series (Bessel–Fourier or Fourier–Bessel)   ∞  ρ , 0 ≤ ρ ≤ a, ν > −1. (11.51) cνm Jν ανm f (ρ) = a m=1 The coefficients cνm are determined by using Eq. (11.50),   a 2 ρ ρ dρ. α cνm = 2 f (ρ)J νm ν a a [Jν+1 (ανm )]2 0 (11.52) A similar series expansion involving Jν (βνm ρ/a) with (d/dρ)Jν (βνm ρ/a)|ρ=a = 0 is included in Exercises 11.2.3 and 11.2.6(b). Example 11.2.1 ELECTROSTATIC POTENTIAL IN A HOLLOW CYLINDER From Table 9.3 of Section 9.3 (with α replaced by k), our solution of Laplace’s equation in circular cylindrical coordinates is a linear combination of  (11.53) ψkm (ρ, ϕ, z) = Jm (kρ)[am sin mϕ + bm cos mϕ] c1 ekz + c2 e−kz . 
The particular linear combination is determined by the boundary conditions to be satisfied. Our cylinder here has a radius a and a height l. The top end section has a potential distribution ψ(ρ, ϕ). Elsewhere on the surface the potential is zero.13 The problem is to find the electrostatic potential  ψ(ρ, ϕ, z) = ψkm (ρ, ϕ, z) (11.54) k,m everywhere in the interior. For convenience, the circular cylindrical coordinates are placed as shown in Fig. 11.3. Since ψ(ρ, ϕ, 0) = 0, we take c1 = −c2 = 21 . The z dependence becomes sinh kz, vanishing at z = 0. The requirement that ψ = 0 on the cylindrical sides is met by requiring the separation constant k to be αmn , (11.55) k = kmn = a 13 If ψ = 0 at z = 0, l, but ψ = 0 for ρ = a, the modified Bessel functions, Section 11.5, are involved. 696 Chapter 11 Bessel Functions where the first subscript, m, gives the index of the Bessel function, whereas the second subscript identifies the particular zero of Jm . The electrostatic potential becomes   ∞ ∞   ρ Jm αmn ψ(ρ, ϕ, z) = a m=0 n=1   z . (11.56) · [amn sin mϕ + bmn cos mϕ] · sinh αmn a Equation (11.56) is a double series: a Bessel series in ρ and a Fourier series in ϕ. At z = l, ψ = ψ(ρ, ϕ), a known function of ρ and ϕ. Therefore   ∞  ∞  ρ Jm αmn ψ(ρ, ϕ) = a m=0 n=1   l · [amn sin mϕ + bmn cos mϕ] · sinh αmn . (11.57) a The constants amn and bmn are evaluated by using Eqs. (11.49) and (11.50) and the corresponding equations for sin ϕ and cos ϕ (Example 10.2.1 and Eqs. (14.2), (14.3), (14.15) to (14.17)). We find14   −1  l amn 2 Jm+1 = 2 πa 2 sinh αmn (αmn ) bmn a    2π a ρ sin mϕ ρ dρ dϕ. (11.58) ψ(ρ, ϕ)Jm αmn · cos mϕ a 0 0 These are definite integrals, that is, numbers. Substituting back into Eq. (11.56), the series is specified and the potential ψ(ρ, ϕ, z) is determined.  Continuum Form The Bessel series, Eq. (11.51), and Exercise 11.2.6 apply to expansions over the finite interval [0, a]. If a → ∞, then the series forms may be expected to go over into integrals. 
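Before passing to that limit, the finite-interval orthogonality and normalization relations, Eqs. (11.49) and (11.50), can be confirmed numerically. A sketch with scipy (the order ν, radius a, and tolerances are arbitrary choices, not values from the text):

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import jv, jn_zeros

nu, a = 1, 2.0
alpha = jn_zeros(nu, 2)   # first two zeros alpha_{nu m} of J_nu

def overlap(m, n):
    # int_0^a J_nu(alpha_m rho/a) J_nu(alpha_n rho/a) rho drho
    f = lambda r: jv(nu, alpha[m] * r / a) * jv(nu, alpha[n] * r / a) * r
    val, _ = quad(f, 0, a)
    return val

# Eq. (11.49): orthogonality for m != n
assert abs(overlap(0, 1)) < 1e-7

# Eq. (11.50): normalization (a^2/2) [J_{nu+1}(alpha_m)]^2
for m in range(2):
    assert abs(overlap(m, m) - a**2 / 2 * jv(nu + 1, alpha[m]) ** 2) < 1e-7
```

The weight factor ρ in the integrand is the same weight that makes Bessel's equation self-adjoint, which is why it must appear in any Bessel-series coefficient formula such as Eq. (11.52).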
The discrete roots ανm become a continuous variable α. A similar situation is encountered in the Fourier series, Section 15.2. The development of the Bessel integral from the Bessel series is left as Exercise 11.2.8. For operations with a continuum of Bessel functions, Jν (αρ), a key relation is the Bessel function closure equation, ∞ 1 1 Jν (αρ)Jν (α ′ ρ)ρ dρ = δ(α − α ′ ), ν>− . (11.59) α 2 0 This may be proved by the use of Hankel transforms, Section 15.1. An alternate approach, starting from a relation similar to Eq. (10.82), is given by Morse and Feshbach, Section 6.3. A second kind of orthogonality (varying the index) is developed for spherical Bessel functions in Section 11.7. 14 If m = 0, the factor 2 is omitted (compare Eq. (14.16)). 11.2 Orthogonality 697 Exercises 11.2.1 Show that  2 2 a −b P 0 with  Jν (ax)Jν (bx)x dx = P bJν (aP )Jν′ (bP ) − aJν′ (aP )Jν (bP ) ,  d Jν (ax)x=P , d(ax)     P 2  2 2 P2  ′ ν2  Jν (aP ) + 1 − 2 2 Jν (aP ) , Jν (ax) x dx = 2 a P 0 Jν′ (aP ) = ν > −1. These two integrals are usually called the first and second Lommel integrals. Hint. We have the development of the orthogonality of the Bessel functions as an analogy. 11.2.2 Show that  a  2 ρ 2 a2  Jν+1 (ανm ) , ρ dρ = Jν ανm a 2 0 ν > −1. Here ανm is the mth zero of Jν . Hint. With ανn = ανm + ε, expand Jν [(ανm + ε)ρ/a] about ανm ρ/a by a Taylor expansion. 11.2.3 (a) (b) If βνm is the mth zero of (d/dρ)Jν (βνm ρ/a), show that the Bessel functions are orthogonal over the interval [0, a] with an orthogonality integral    a  ρ ρ Jν βνn ρ dρ = 0, m = n, ν > −1. Jν βνm a a 0 Derive the corresponding normalization integral (m = n).   2 a2 ν2  ANS. 1− 2 Jν (βνm ) , 2 βνm ν > −1. 11.2.4 Verify that the orthogonality equation, Eq. (11.49), and the normalization equation, Eq. (11.50), hold for ν > −1. Hint. Using power-series expansions, examine the behavior of Eq. (11.48) as ρ → 0. 11.2.5 From Eq. 
(11.49) develop a proof that Jν (z), ν > −1, has no complex roots (with nonzero imaginary part). Hint. (a) Use the series form of Jν (z) to exclude pure imaginary roots. ∗ . (b) Assume ανm to be complex and take ανn to be ανm 11.2.6 (a) In the series expansion f (ρ) = ∞  m=1   ρ , cνm Jν ανm a 0 ≤ ρ ≤ a, ν > −1, 698 Chapter 11 Bessel Functions with Jν (ανm ) = 0, show that the coefficients are given by   a 2 ρ cνm = 2 ρ dρ. f (ρ)Jν ανm a a [Jν+1 (ανm )]2 0 (b) In the series expansion f (ρ) = ∞  m=1   ρ , dνm Jν βνm a 0 ≤ ρ ≤ a, ν > −1, with (d/dρ)Jν (βνm ρ/a) |ρ=a = 0, show that the coefficients are given by   a ρ 2 β ρ dρ. dνm = 2 f (ρ)J νm ν 2 )[J (β )]2 a a (1 − ν 2 /βνm ν νm 0 11.2.7 A right circular cylinder has an electrostatic potential of ψ(ρ, ϕ) on both ends. The potential on the curved cylindrical surface is zero. Find the potential at all interior points. Hint. Choose your coordinate system and adjust your z dependence to exploit the symmetry of your potential. 11.2.8 For the continuum case, show that Eqs. (11.51) and (11.52) are replaced by ∞ f (ρ) = a(α)Jν (αρ) dα, 0 a(α) = α ∞ f (ρ)Jν (αρ)ρ dρ. 0 Hint. The corresponding case for sines and cosines is worked out in Section 15.2. These are Hankel transforms. A derivation for the special case ν = 0 is the topic of Exercise 15.1.1. 11.2.9 A function f (x) is expressed as a Bessel series: f (x) = ∞  an Jm (αmn x), n=1 with αmn the nth root of Jm . Prove the Parseval relation, 0 11.2.10 ∞ 2 2 1  2 an Jm+1 (αmn ) . f (x) x dx = 2 1 n=1 Prove that ∞  (αmn )−2 = n=1 1 . 4(m + 1) Hint. Expand x m in a Bessel series and apply the Parseval relation. 11.3 Neumann Functions 11.2.11 699 A right circular cylinder of length l has a potential     ρ l = 100 1 − , ψ z=± 2 a where a is the radius. The potential over the curved surface (side) is zero. Using the Bessel series from Exercise 11.2.7, calculate the electrostatic potential for ρ/a = 0.0(0.2)1.0 and z/ l = 0.0(0.1)0.5. Take a/ l = 0.5. Hint. 
From Exercise 11.1.30 you have

    ∫₀^{α0n} (1 − y/α0n) J₀(y) y dy.

Show that this equals

    (1/α0n) ∫₀^{α0n} J₀(y) dy.

Numerical evaluation of this latter form rather than the former is both faster and more accurate.
Note. For ρ/a = 0.0 and z/l = 0.5 the convergence is slow, 20 terms giving only 98.4 rather than 100.
Check value. For ρ/a = 0.4 and z/l = 0.3, ψ = 24.558.

11.3 NEUMANN FUNCTIONS, BESSEL FUNCTIONS OF THE SECOND KIND

From the theory of ODEs it is known that Bessel's equation has two independent solutions. Indeed, for nonintegral order ν we have already found two solutions and labeled them Jν(x) and J−ν(x), using the infinite series (Eq. (11.5)). The trouble is that when ν is integral, Eq. (11.8) holds and we have but one independent solution. A second solution may be developed by the methods of Section 9.6. This yields a perfectly good second solution of Bessel's equation but is not the standard form.

Definition and Series Form

As an alternate approach, we take the particular linear combination of Jν(x) and J−ν(x),

    Nν(x) = [cos νπ Jν(x) − J−ν(x)] / sin νπ.                     (11.60)

This is the Neumann function (Fig. 11.5).¹⁵ For nonintegral ν, Nν(x) clearly satisfies Bessel's equation, for it is a linear combination of the known solutions Jν(x) and J−ν(x).

FIGURE 11.5 Neumann functions N0(x), N1(x), and N2(x).

Substituting the power series, Eq. (11.6), for n → ν (given in Exercise 11.1.7) yields

    Nν(x) = −((ν − 1)!/π)(2/x)^ν + ··· ,¹⁶                        (11.61)

for ν > 0. However, for integral ν, ν = n, Eq. (11.8) applies and Eq. (11.60) becomes indeterminate. The definition of Nν(x) was chosen deliberately for this indeterminate property.

¹⁵ In AMS-55 (see footnote 4 in Chapter 5 or the Additional Readings of Chapter 8 for this reference) and in most mathematics tables, this is labeled Yν(x).
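Both the hint's integral identity and the defining combination, Eq. (11.60), can be spot-checked with scipy, whose yv is the function written Nν(x) here (Yν(x) in most tables). The orders, arguments, and tolerances below are my own arbitrary choices:

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import j0, jn_zeros, jv, yv

# Identity from Exercise 11.1.30, quoted in the hint above
a0n = jn_zeros(0, 3)[2]                      # third zero of J_0
lhs, _ = quad(lambda y: (1 - y / a0n) * j0(y) * y, 0, a0n)
rhs, _ = quad(j0, 0, a0n)
assert abs(lhs - rhs / a0n) < 1e-7

# Eq. (11.60): N_nu = (cos(nu pi) J_nu - J_{-nu}) / sin(nu pi), nu nonintegral
for nu in (0.3, 1.7):
    for x in (0.5, 2.0, 7.0):
        N = (np.cos(nu * np.pi) * jv(nu, x) - jv(-nu, x)) / np.sin(nu * np.pi)
        assert abs(N - yv(nu, x)) < 1e-8
```

For integral ν this ratio is 0/0 numerically as well as analytically, so a library evaluates Yn(x) from the limiting (l'Hôpital) form rather than from Eq. (11.60) directly.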
Again substituting the power series and evaluating Nν (x) for ν → 0 by l’Hôpital’s rule for indeterminate forms, we obtain the limiting value N0 (x) = for n = 0 and x → 0, using  2 (ln x + γ − ln 2) + O x 2 π ν!(−ν)! = πν sin πν (11.62) (11.63) from Eq. (8.32). The first and third terms in Eq. (11.62) come from using (d/dν)(x/2)ν = (x/2)ν ln(x/2), while γ comes from (d/dν)ν! for ν → 0 using Eqs. (8.38) and (8.40). For n > 0 we obtain similarly    n   2 x 1 2 x n1 Nn (x) = − (n − 1)! + ··· . (11.64) ln + ··· + π x π 2 n! 2 Equations (11.62) and (11.64) exhibit the logarithmic dependence that was to be expected. This, of course, verifies the independence of Jn and Nn . 16 Note that this limiting form applies to both integral and nonintegral values of the index ν. 11.3 Neumann Functions 701 Other Forms As with all the other Bessel functions, Nν (x) has integral representations. For N0 (x) we have 2 ∞ 2 ∞ cos(xt) N0 (x) = − dt, x > 0. cos(x cosh t) dt = − π 0 π 1 (t 2 − 1)1/2 These forms can be derived as the imaginary part of the Hankel representations of Exercise 11.4.7. The latter form is a Fourier cosine transform. To verify that Nν (x), our Neumann function (Fig. 11.5) or Bessel function of the second kind, actually does satisfy Bessel’s equation for integral n, we may proceed as follows. L’Hôpital’s rule applied to Eq. (11.60) yields  (d/dν)[cos νπJν (x) − J−ν (x)]  Nn (x) =  (d/dν) sin νπ ν=n −π sin nπJn (x) + [cos nπ∂Jν /∂ν − ∂J−ν /∂ν]|ν=n π cos nπ  1 ∂Jν (x) ∂J−ν (x)  − (−1)n =  . π ∂ν ∂ν ν=n = Differentiating Bessel’s equation for J±ν (x) with respect to ν, we have     ∂J±ν  d 2 ∂J±ν d ∂J±ν x2 2 +x + x2 − ν2 = 2νJ±ν . ∂ν dx ∂ν ∂ν dx (11.65) (11.66) Multiplying the equation for J−ν by (−1)ν , subtracting from the equation for Jν (as suggested by Eq. (11.65)), and taking the limit ν → n, we obtain x2  d2 2n  d Jn − (−1)n J−n . Nn + x Nn + x 2 − n2 Nn = 2 dx π dx (11.67) For ν = n, an integer, the right-hand side vanishes by Eq. 
(11.8) and Nn (x) is seen to be a solution of Bessel’s equation. The most general solution for any ν can therefore be written as y(x) = AJν (x) + BNν (x). (11.68) It is seen from Eqs. (11.62) and (11.64) that Nn diverges, at least logarithmically. Any boundary condition that requires the solution to be finite at the origin (as in our vibrating circular membrane (Section 11.1)) automatically excludes Nn (x). Conversely, in the absence of such a requirement, Nn (x) must be considered. To a certain extent the definition of the Neumann function Nn (x) is arbitrary. Equations (11.62) and (11.64) contain terms of the form an Jn (x). Clearly, any finite value of the constant an would still give us a second solution of Bessel’s equation. Why should an have the particular value implicit in Eqs. (11.62) and (11.64)? The answer involves the asymptotic dependence developed in Section 11.6. If Jn corresponds to a cosine wave, then Nn corresponds to a sine wave. This simple and convenient asymptotic phase relationship is a consequence of the particular admixture of Jn in Nn . 702 Chapter 11 Bessel Functions Recurrence Relations Substituting Eq. (11.60) for Nν (x) (nonintegral ν) into the recurrence relations (Eqs. (11.10) and (11.12) for Jn (x), we see immediately that Nν (x) satisfies these same recurrence relations. This actually constitutes another proof that Nν is a solution. Note that the converse is not necessarily true. All solutions need not satisfy the same recurrence relations. An example of this sort of trouble appears in Section 11.5. Wronskian Formulas From Section 9.6 and Exercise 10.1.4 we have the Wronskian formula17 for solutions of the Bessel equation, uν (x)vν′ (x) − u′ν (x)vν (x) = Aν , x (11.69) in which Aν is a parameter that depends on the particular Bessel functions uν (x) and vν (x) being considered. Aν is a constant in the sense that it is independent of x. Consider the special case uν (x) = Jν (x), vν (x) = J−ν (x), ′ Jν J−ν − Jν′ J−ν = Aν . 
x (11.70) (11.71) Since Aν is a constant, it may be identified at any convenient point, such as x = 0. Using the first terms in the series expansions (Eqs. (11.5) and (11.6)), we obtain Jν → xν , 2ν ν! Jν′ → νx ν−1 , 2ν ν! J−ν → 2ν x −ν (−ν)! ν2ν x −ν−1 . (−ν)! (11.72) 2 sin νπ −2ν =− , xν!(−ν)! πx (11.73) ′ J−ν →− Substitution into Eq. (11.69) yields ′ Jν (x)J−ν (x) − Jν′ (x)J−ν (x) = using Eq. (8.32). Note that Aν vanishes for integral ν, as it must, since the nonvanishing of the Wronskian is a test of the independence of the two solutions. By Eq. (11.73), Jn and J−n are clearly linearly dependent. Using our recurrence relations, we may readily develop a large number of alternate forms, among which are Jν J−ν+1 + J−ν Jν−1 = 2 sin νπ , πx (11.74) 17 This result depends on P (x) of Section 9.5 being equal to p ′ (x)/p(x), the corresponding coefficient of the self-adjoint form of Section 10.1. 11.3 Neumann Functions Jν J−ν−1 + J−ν Jν+1 = − Jν Nν′ − Jν′ Nν = 2 sin νπ , πx 2 , πx Jν Nν+1 − Jν+1 Nν = − 2 . πx 703 (11.75) (11.76) (11.77) Many more will be found in the references given at chapter’s end. You will recall that in Chapter 9 Wronskians were of great value in two respects: (1) in establishing the linear independence or linear dependence of solutions of differential equations and (2) in developing an integral form of a second solution. Here the specific forms of the Wronskians and Wronskian-derived combinations of Bessel functions are useful primarily to illustrate the general behavior of the various Bessel functions. Wronskians are of great use in checking tables of Bessel functions. In Section 10.5 Wronskians appeared in connection with Green’s functions. Example 11.3.1 COAXIAL WAVE GUIDES We are interested in an electromagnetic wave confined between the concentric, conducting cylindrical surfaces ρ = a and ρ = b. Most of the mathematics is worked out in Section 9.3 and Example 11.1.2. 
To go from the standing wave of these examples to the traveling wave here, we let A = iB, A = amn , B = bmn in Eq. (11.40a) and obtain Ez =  bmn Jm (γρ)e±imϕ ei(kz−ωt) . (11.78) m,n Additional properties of the components of the electromagnetic wave in the simple cylindrical wave guide are explored in Exercises 11.3.8 and 11.3.9. For the coaxial wave guide one generalization is needed. The origin, ρ = 0, is now excluded (0 < a ≤ ρ ≤ b). Hence the Neumann function Nm (γρ) may not be excluded. Ez (ρ, ϕ, z, t) becomes  bmn Jm (γρ) + cmn Nm (γρ) e±imϕ ei(kz−ωt) . (11.79) Ez = m,n With the condition Hz = 0, (11.80) we have the basic equations for a TM (transverse magnetic) wave. The (tangential) electric field must vanish at the conducting surfaces (Dirichlet boundary condition), or bmn Jm (γ a) + cmn Nm (γ a) = 0, (11.81) bmn Jm (γ b) + cmn Nm (γ b) = 0. (11.82) 704 Chapter 11 Bessel Functions These transcendental equations may be solved for γ (γmn ) and the ratio cmn /bmn . From Example 11.1.2, k 2 = ω2 µ0 ε0 − γ 2 = ω2 − γ 2. c2 (11.83) Since k 2 must be positive for a real wave, the minimum frequency that will be propagated (in this TM mode) is ω = γ c, (11.84) with γ fixed by the boundary conditions, Eqs. (11.81) and (11.82). This is the cutoff frequency of the wave guide. There is also a TE (transverse electric) mode, with Ez = 0 and Hz given by Eq. (11.79). Then we have Neumann boundary conditions in place of Eqs. (11.81) and (11.82). Finally, for the coaxial guide (not for the plain cylindrical guide, a = 0), a TEM (transverse electromagnetic) mode, Ez = Hz = 0, is possible. This corresponds to a plane wave, as in free space. The simpler cases (no Neumann functions, simpler boundary conditions) of a circular wave guide are included as Exercises 11.3.8 and 11.3.9. To conclude this discussion of Neumann functions, we introduce the Neumann function Nν (x) for the following reasons: 1. 
It is a second, independent solution of Bessel’s equation, which completes the general solution. 2. It is required for specific physical problems such as electromagnetic waves in coaxial cables and quantum mechanical scattering theory. 3. It leads to a Green’s function for the Bessel equation (Sections 9.7 and 10.5). 4. It leads directly to the two Hankel functions (Section 11.4).  Exercises 11.3.1 Prove that the Neumann functions Nn (with n an integer) satisfy the recurrence relations Nn−1 (x) + Nn+1 (x) = 2n Nn (x), x Nn−1 (x) − Nn+1 (x) = 2Nn′ (x). Hint. These relations may be proved by differentiating the recurrence relations for Jν or by using the limit form of Nν but not dividing everything by zero. 11.3.2 Show that N−n (x) = (−1)n Nn (x). 11.3.3 Show that N0′ (x) = −N1 (x). 11.3 Neumann Functions 11.3.4 705 If Y and Z are any two solutions of Bessel’s equation, show that Yν (x)Zν′ (x) − Yν′ (x)Zν (x) = Aν , x in which Aν may depend on ν but is independent of x. This is a special case of Exercise 10.1.4. 11.3.5 Verify the Wronskian formulas 2 sin νπ , πx 2 . Jν (x)Nν′ (x) − Jν′ (x)Nν (x) = πx Jν (x)J−ν+1 (x) + J−ν (x)Jν−1 (x) = 11.3.6 As an alternative to letting x approach zero in the evaluation of the Wronskian constant, we may invoke uniqueness of power series (Section 5.7). The coefficient of x −1 in the series expansion of uν (x)vν′ (x) − u′ν (x)vν (x) is then Aν . Show by series expansion that ′ (x) − J ′ (x)J (x) are each zero. the coefficients of x 0 and x 1 of Jν (x)J−ν −ν ν 11.3.7 (a) By differentiating and substituting into Bessel’s ODE, show that ∞ cos(x cosh t) dt 0 (b) is a solution. Hint. You can rearrange the final integral as ∞ ( d' x sin(x cosh t) sinh t dt. dt 0 Show that N0 (x) = − 2 π ∞ cos(x cosh t) dt 0 is linearly independent of J0 (x). 11.3.8 A cylindrical wave guide has radius r0 . 
Find the nonvanishing components of the electric and magnetic fields for (a) TM01 , transverse magnetic wave (Hz = Hρ = Eϕ = 0), (b) TE01 , transverse electric wave (Ez = Eρ = Hϕ = 0). The subscripts 01 indicate that the longitudinal component (Ez or Hz ) involves J0 and the boundary condition is satisfied by the first zero of J0 or J0′ . Hint. All components of the wave have the same factor: exp i(kz − ωt). 11.3.9 For a given mode of oscillation the minimum frequency that will be passed by a circular cylindrical wave guide (radius r0 ) is νmin = c , λc 706 Chapter 11 Bessel Functions in which λc is fixed by the boundary condition   2πr0 =0 for TMnm mode, Jn λc   ′ 2πr0 =0 for TEnm mode. Jn λc The subscript n denotes the order of the Bessel function and m indicates the zero used. Find this cutoff wavelength λc for the three TM and three TE modes with the longest cutoff wavelengths. Explain your results in terms of the graph of J0 , J1 , and J2 (Fig. 11.1). 11.3.10 Write a program that will compute successive roots of the Neumann function Nn (x), that is αns , where Nn (αns ) = 0. Tabulate the first five roots of N0 , N1 , and N2 . Check your values for the roots against those listed in AMS-55 (see Additional Readings of Chapter 8 for the full ref.). Check value. α12 = 5.42968. 11.3.11 For the case m = 0, a = 1, and b = 2, the coaxial wave guide boundary conditions lead to J0 (2x) J0 (x) − f (x) = N0 (2x) N0 (x) (Fig. 11.6). (a) Calculate f (x) for x = 0.0(0.1)10.0 and plot f (x) versus x to find the approximate location of the roots. FIGURE 11.6 f (x) of Exercise 11.3.11. 11.4 Hankel Functions (b) 707 Call a root-finding subroutine to determine the first three roots to higher precision. ANS. 3.1230, 6.2734, 9.4182. Note. The higher roots can be expected to appear at intervals whose length approaches n. Why? AMS-55 (see Additional Readings of Chapter 8 for the reference), gives an approximate formula for the roots. 
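The quoted roots, 3.1230, 6.2734, and 9.4182, can be reproduced with a standard root finder. A sketch with scipy (the search brackets are read off the plot from part (a) and are my own choice):

```python
from scipy.optimize import brentq
from scipy.special import j0, y0

def g(x):
    # equivalent root condition J_0(x) N_0(2x) - J_0(2x) N_0(x) = 0,
    # free of the N_0 poles that appear in the denominator of f(x)
    return j0(x) * y0(2 * x) - j0(2 * x) * y0(x)

# successive roots are spaced by roughly pi, like Bessel-function zeros
brackets = [(3.0, 3.3), (6.1, 6.4), (9.2, 9.6)]
roots = [brentq(g, lo, hi) for lo, hi in brackets]
for r, expected in zip(roots, (3.1230, 6.2734, 9.4182)):
    assert abs(r - expected) < 5e-4
```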
The function g(x) = J0 (x)N0 (2x) − J0 (2x)N0 (x) is much better behaved than f (x) previously discussed. 11.4 HANKEL FUNCTIONS Many authors prefer to introduce the Hankel functions by means of integral representations and then to use them to define the Neumann function Nν (z). An outline of this approach is given at the end of this section. Definitions Because we have already obtained the Neumann function by more elementary (and less (1) (2) powerful) techniques, we may use it to define the Hankel functions Hν (x) and Hν (x): Hν(1) (x) = Jν (x) + iNν (x) (11.85) Hν(2) (x) = Jν (x) − iNν (x). (11.86) and This is exactly analogous to taking e±iθ = cos θ ± i sin θ. (1) (11.87) (2) For real arguments, Hν and Hν are complex conjugates. The extent of the analogy will be seen even better when the asymptotic forms are considered (Section 11.6). Indeed, it is their asymptotic behavior that makes the Hankel functions useful. (1) (2) Series expansion of Hν (x) and Hν (x) may be obtained by combining Eqs. (11.5) and (11.63). Often only the first term is of interest; it is given by 2 2 ln x + 1 + i (γ − ln 2) + · · · , π π   (ν − 1)! 2 ν (1) Hν (x) ≈ −i + ··· , ν > 0, π x (1) H0 (x) ≈ i 2 2 ln x + 1 − i (γ − ln 2) + · · · , π π  ν (ν − 1)! 2 Hν(2) (x) ≈ i + ··· , ν > 0. π x (2) H0 (x) ≈ −i (11.88) (11.89) (11.90) (11.91) 708 Chapter 11 Bessel Functions Since the Hankel functions are linear combinations (with constant coefficients) of Jν and Nν , they satisfy the same recurrence relations (Eqs. (11.10) and (11.12)) Hν−1 (x) + Hν+1 (x) = 2ν Hν (x), x Hν−1 (x) − Hν+1 (x) = 2Hν′ (x), (11.92) (11.93) for both Hν(1) (x) and Hν(2) (x). A variety of Wronskian formulas can be developed: (1) (2) Hν(2) Hν+1 − Hν(1) Hν+1 = (1) Jν−1 Hν(1) − Jν Hν−1 = (2) Jν Hν−1 − Jν−1 Hν(2) = Example 11.4.1 4 , iπx (11.94) 2 , iπx (11.95) 2 . 
iπx (11.96) CYLINDRICAL TRAVELING WAVES As an illustration of the use of Hankel functions, consider a two-dimensional wave problem similar to the vibrating circular membrane of Exercise 11.1.25. Now imagine that the waves are generated at r = 0 and move outward to infinity. We replace our standing waves by traveling ones. The differential equation remains the same, but the boundary conditions change. We now demand that for large r the wave behave like U ∼ ei(kr−ωt) (11.97) to describe an outgoing wave. As before, k is the wave number. This assumes, for simplicity, that there is no azimuthal dependence, that is, no angular momentum, or m = 0. In (1) Sections 7.3 and 11.6, H0 (kr) is shown to have the asymptotic behavior (for r → ∞) (1) H0 (kr) ∼ eikr . (11.98) This boundary condition at infinity then determines our wave solution as (1) U (r, t) = H0 (kr)e−iωt . (11.99) This solution diverges as r → 0, which is the behavior to be expected with a source at the origin. The choice of a two-dimensional wave problem to illustrate the Hankel function H0(1) (z) is not accidental. Bessel functions may appear in a variety of ways, such as in the separation of conical coordinates. However, they enter most commonly in the radial equations from the separation of variables in the Helmholtz equation in cylindrical and in spherical polar coordinates. We have taken a degenerate form of cylindrical coordinates for this illustration. Had we used spherical polar coordinates (spherical waves), we should have encountered index ν = n + 12 , n an integer. These special values yield the spherical Bessel functions to be discussed in Section 11.7.  11.4 Hankel Functions 709 Contour Integral Representation of the Hankel Functions The integral representation (Schlaefli integral)  1 dt Jν (x) = e(x/2)(t−1/t) ν+1 2πi C t (11.100) may easily be established as a Cauchy integral for ν = n, an integer (by recognizing that the numerator is the generating function (Eq. 
(11.1)) and integrating around the origin). If ν is not an integer, the integrand is not single-valued and a cut line is needed in our complex plane. Choosing the negative real axis as the cut line and using the contour shown in Fig. 11.7, we can extend Eq. (11.100) to nonintegral ν. Substituting Eq. (11.100) into Bessel’s ODE, we can represent the combined integrand by an exact differential that vanishes as t → ∞e±iπ (compare Exercise 11.1.16). We now deform the contour so that it approaches the origin along the positive real axis, as shown in Fig. 11.8. For x > 0, this particular approach guarantees that the exact differential mentioned will vanish as t → 0 because of the e−x/2t → 0 factor. Hence each of the separate portions (∞ e−iπ to 0) and (0 to ∞ eiπ ) is a solution of Bessel’s equation. We define ∞eiπ 1 dt Hν(1) (x) = (11.101) e(x/2)(t−1/t) ν+1 , πi 0 t Hν(2) (x) = 1 πi 0 ∞e−iπ e(x/2)(t−1/t) dt t ν+1 . (11.102) These expressions are particularly convenient because they may be handled by the method (1) (2) of steepest descents (Section 7.3). Hν (x) has a saddle point at t = +i, whereas Hν (x) has a saddle point at t = −i. FIGURE 11.7 Bessel function contour. 710 Chapter 11 Bessel Functions FIGURE 11.8 Hankel function contours. The problem of relating Eqs. (11.101) and (11.102) to our earlier definition of the Hankel function (Eqs. (11.85) and (11.86)) remains. Since Eqs. (11.100) to (11.102) combined yield Jν (x) = by inspection, we need only show that Nν (x) = 1  (1) Hν (x) + Hν(2) (x) 2 (11.103) 1  (1) H (x) − Hν(2) (x) . 2i ν (11.104) This may be accomplished by the following steps: (1) With the substitutions t = eiπ /s for Hν (2) and t = e−iπ /s for Hν , we obtain (1) Hν(1) (x) = e−iνπ H−ν (x), (2) Hν(2) (x) = eiνπ H−ν (x). (11.105) (11.106) From Eqs. (11.103) (ν → −ν), (11.105), and (11.106), J−ν (x) = 1  iνπ (1) e Hν (x) + e−iνπ Hν(2) (x) . 2 (11.107) Finally substitute Jν (Eq. (11.103)) and J−ν (Eq. (11.107)) into the defining equation for Nν , Eq. 
(11.60). This leads to Eq. (11.104) and establishes the contour integrals, Eqs. (11.101) and (11.102), as the Hankel functions. Integral representations have appeared before: Eq. (8.35) for Γ(z) and various representations of Jν(z) in Section 11.1.

With these integral representations of the Hankel functions, it is perhaps appropriate to ask why we are interested in integral representations. There are at least four reasons. The first is simply aesthetic appeal. Second, the integral representations help to distinguish between two linearly independent solutions: in Fig. 11.8 the contours C1 and C2 cross different saddle points (Section 7.3), and for the Legendre functions the contour for Pn(z) (Fig. 12.11) and that for Qn(z) encircle different singular points. Third, the integral representations facilitate manipulations, analysis, and the development of relations among the various special functions. Fourth, and probably most important of all, the integral representations are extremely useful in developing asymptotic expansions. One approach, the method of steepest descents, appears in Section 7.3. A second approach, the direct expansion of an integral representation, is given in Section 11.6 for the modified Bessel function Kν(z). The same technique may be used to obtain asymptotic expansions of the confluent hypergeometric functions M and U (Exercise 13.5.13).

In conclusion, the Hankel functions are introduced here for the following reasons:
• As analogs of e^(±ix) they are useful for describing traveling waves.
• They offer an alternate (contour integral) and rather elegant definition of Bessel functions.
• Hν^(1) is used to define the modified Bessel function Kν of Section 11.5.
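Before turning to the exercises, the section's defining relations can be spot-checked numerically. A sketch assuming SciPy (whose hankel1 and hankel2 are its names for Hν^(1) and Hν^(2)):

```python
import numpy as np
from scipy.special import jv, yv, hankel1, hankel2

nu, x = 0.3, 1.7
# Definitions (11.85)-(11.86): H(1) = J + iN, H(2) = J - iN
assert np.allclose(hankel1(nu, x), jv(nu, x) + 1j * yv(nu, x))
assert np.allclose(hankel2(nu, x), jv(nu, x) - 1j * yv(nu, x))

# Wronskian (11.94): H(2)_nu H(1)_{nu+1} - H(1)_nu H(2)_{nu+1} = 4/(i pi x)
w = hankel2(nu, x) * hankel1(nu + 1, x) - hankel1(nu, x) * hankel2(nu + 1, x)
assert np.allclose(w, 4 / (1j * np.pi * x))

# Traveling-wave behavior (Example 11.4.1): for large kr,
# H0^(1)(kr) ~ sqrt(2/(pi kr)) * exp(i(kr - pi/4))
kr = 200.0
approx = np.sqrt(2 / (np.pi * kr)) * np.exp(1j * (kr - np.pi / 4))
assert abs(hankel1(0, kr) - approx) / abs(approx) < 2e-3
```

The last check makes the outgoing-wave claim of Example 11.4.1 concrete: the relative error of the leading asymptotic form falls off like 1/(8kr).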
Exercises 11.4.1 Verify the Wronskian formulas (a) (b) (c) (d) (e) (f) (g) 11.4.2 (1)′ (1) Jν (x)Hν (x) − Jν′ (x)Hν (x) = 2i πx , (2) Jν (x)Hν (x) − Jν′ (x)Hν (x) = −2i πx , ′ (1) (1) ′ Nν (x)Hν (x) − Nν (x)Hν (x) = −2 πx , (2) (2)′ ′ Nν (x)Hν (x) − Nν (x)Hν (x) = −2 πx , (1) (2)′ (1)′ (2) Hν (x)Hν (x) − Hν (x)Hν (x) = −4i πx , (2) (1) (1) (2) 4 , Hν (x)Hν+1 (x) − Hν (x)Hν+1 (x) = iπx (1) (1) 2 Jν−1 (x)Hν (x) − Jν (x)Hν−1 (x) = iπx . (2)′ Show that the integral forms (a) (b) 1 iπ 1 iπ ∞eiπ e(x/2)(t−1/t) 0C1 0 ∞e−iπ C2 dt = Hν(1) (x), t ν+1 e(x/2)(t−1/t) dt = Hν(2) (x) t ν+1 satisfy Bessel’s ODE. The contours C1 and C2 are shown in Fig. 11.8. 11.4.3 11.4.4 Using the integrals and contours given in problem 11.4.2, show that 1  (1) Hν (x) − Hν(2) (x) = Nν (x). 2i Show that the integrals in Exercise 11.4.2 may be transformed to yield 1 1 (1) ex sinh γ −νγ dγ , (b) Hν(2) (x) = ex sinh γ −νγ dγ (a) Hν (x) = πi C3 πi C4 712 Chapter 11 Bessel Functions FIGURE 11.9 Hankel function contours. (see Fig. 11.9). 11.4.5 (a) (1) Transform H0 (x), Eq. (11.101), into (1) H0 (x) = 1 iπ eix cosh s ds, C where the contour C runs from −∞ − iπ/2 through the origin of the s-plane to ∞ + iπ/2. (1) (b) Justify rewriting H0 (x) as ∞+iπ/2 2 (1) H0 (x) = eix cosh s ds. iπ 0 (c) 11.4.6 Verify that this integral representation actually satisfies Bessel’s differential equation. (The iπ/2 in the upper limit is not essential. It serves as a convergence factor. We can replace it by iaπ/2 and take the limit.) From (1) H0 (x) = show that (a) J0 (x) = 2 π ∞ 2 iπ sin(x cosh s) ds, ∞ eix cosh s ds 0 (b) 0 J0 (x) = 2 π This last result is a Fourier sine transform. 11.4.7 From (see Exercises 11.4.4 and 11.4.5) (1) H0 (x) = 2 iπ show that (a) 2 N0 (x) = − π 0 ∞ cos(x cosh s) ds. ∞ 0 eix cosh s ds 1 ∞ sin(xt) dt. √ t2 − 1 11.5 Modified Bessel Functions, Iν (x) and Kν (x) (b) N0 (x) = − 2 π ∞ 1 713 cos(xt) dt. t 2 − 1) These are the integral representations in Section 11.3 (Other Forms). 
This last result is a Fourier cosine transform.

11.5 MODIFIED BESSEL FUNCTIONS, Iν(x) AND Kν(x)

The Helmholtz equation,

∇²ψ + k²ψ = 0,

separated in circular cylindrical coordinates, leads to Eq. (11.22a), the Bessel equation. Equation (11.22a) is satisfied by the Bessel and Neumann functions Jν(kρ) and Nν(kρ) and any linear combination, such as the Hankel functions Hν^(1)(kρ) and Hν^(2)(kρ). Now, the Helmholtz equation describes the space part of wave phenomena. If instead we have a diffusion problem, then the Helmholtz equation is replaced by

∇²ψ − k²ψ = 0.   (11.108)

The analog to Eq. (11.22a) is

ρ² d²Yν(kρ)/dρ² + ρ dYν(kρ)/dρ − (k²ρ² + ν²) Yν(kρ) = 0.   (11.109)

The Helmholtz equation may be transformed into the diffusion equation by the transformation k → ik. Similarly, k → ik changes Eq. (11.22a) into Eq. (11.109) and shows that Yν(kρ) = Zν(ikρ). The solutions of Eq. (11.109) are Bessel functions of imaginary argument. To obtain a solution that is regular at the origin, we take Zν as the regular Bessel function Jν. It is customary (and convenient) to choose the normalization so that

Yν(x) = Iν(x) ≡ i^(−ν) Jν(ix).   (11.110)

(Here the variable kρ is being replaced by x for simplicity.) The extra i^(−ν) normalization cancels the i^ν from each term and leaves Iν(x) real. Often this is written as

Iν(x) = e^(−νπi/2) Jν(x e^(iπ/2)).   (11.111)

I0 and I1 are shown in Fig. 11.10.

FIGURE 11.10 Modified Bessel functions.

Series Form

In terms of infinite series this is equivalent to removing the (−1)^s sign in Eq. (11.5) and writing

Iν(x) = Σ_{s=0}^∞ [1/(s!(s+ν)!)] (x/2)^(2s+ν),   I_{−ν}(x) = Σ_{s=0}^∞ [1/(s!(s−ν)!)] (x/2)^(2s−ν).   (11.112)

For integral ν this yields

I_n(x) = I_{−n}(x).   (11.113)

Recurrence Relations

The recurrence relations satisfied by Iν(x) may be developed from the series expansions, but it is perhaps easier to work from the existing recurrence relations for Jν(x). Let us replace x by −ix and rewrite Eq.
(11.110) as Jν (x) = i ν Iν (−ix). (11.114) Then Eq. (11.10) becomes 2ν ν i Iν (−ix). x Replacing x by ix, we have a recurrence relation for Iν (x), i ν−1 Iν−1 (−ix) + i ν+1 Iν+1 (−ix) = Iν−1 (x) − Iν+1 (x) = 2ν Iν (x). x (11.115) 11.5 Modified Bessel Functions, Iν (x) and Kν (x) 715 Equation (11.12) transforms to Iν−1 (x) + Iν+1 (x) = 2Iν′ (x). (11.116) These are the recurrence relations used in Exercise 11.1.14. It is worth emphasizing that although two recurrence relations, Eqs. (11.115) and (11.116) or Exercise 11.5.7, specify the second-order ODE, the converse is not true. The ODE does not uniquely fix the recurrence relations. Equations (11.115) and (11.116) and Exercise 11.5.7 provide an example. From Eq. (11.113) it is seen that we have but one independent solution when ν is an integer, exactly as in the Bessel functions Jν . The choice of a second, independent solution of Eq. (11.108) is essentially a matter of convenience. The second solution given here is selected on the basis of its asymptotic behavior — as shown in the next section. The confusion of choice and notation for this solution is perhaps greater than anywhere else in this field.18 Many authors19 choose to define a second solution in terms of the Hankel (1) function Hν (x) by  π π Kν (x) ≡ i ν+1 Hν(1) (ix) = i ν+1 Jν (ix) + iNν (ix) . (11.117) 2 2 The factor i ν+1 makes Kν (x) real when x is real. Using Eqs. (11.60) and (11.110), we may transform Eq. (11.117) to20 π I−ν (x) − Iν (x) , (11.118) 2 sin νπ analogous to Eq. (11.60) for Nν (x). The choice of Eq. (11.117) as a definition is somewhat unfortunate in that the function Kν (x) does not satisfy the same recurrence relations as Iν (x) (compare Exercises 11.5.7 and 11.5.8). To avoid this annoyance, other authors21 have included an additional factor of cos νπ . This permits Kν to satisfy the same recurrence relations as Iν , but it has the disadvantage of making Kν = 0 for ν = 12 , 53 , 52 , . . . . 
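Equation (11.118) and the recurrence-relation contrast discussed here are easy to confirm numerically. A sketch assuming SciPy (iv and kv are its names for Iν and Kν):

```python
import numpy as np
from scipy.special import iv, kv

nu, x = 0.3, 2.1
# Eq. (11.118): K_nu = (pi/2)(I_{-nu} - I_nu)/sin(nu*pi), nu nonintegral
k_from_i = np.pi / 2 * (iv(-nu, x) - iv(nu, x)) / np.sin(nu * np.pi)
assert np.allclose(kv(nu, x), k_from_i)

# I_nu obeys (11.115); K_nu obeys the sign-flipped version (Exercise 11.5.7),
# illustrating that K_nu does NOT share the recurrence relations of I_nu.
assert np.allclose(iv(nu - 1, x) - iv(nu + 1, x), 2 * nu / x * iv(nu, x))
assert np.allclose(kv(nu - 1, x) - kv(nu + 1, x), -2 * nu / x * kv(nu, x))
```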
(1) The series expansion of Kν (x) follows directly from the series form of Hν (ix). The lowest-order terms are (cf. Eqs. (11.61) and (11.62)) Kν (x) = K0 (x) = − ln x − γ + ln 2 + · · · , Kν (x) = 2ν−1 (ν − 1)!x −ν + · · · . (11.119) Because the modified Bessel function Iν is related to the Bessel function Jν , much as sinh is related to sine, Iν and the second solution Kν are sometimes referred to as hyperbolic Bessel functions. K0 and K1 are shown in Fig. 11.10. I0 (x) and K0 (x) have the integral representations 1 π I0 (x) = cosh(x cos θ ) dθ, (11.120) π 0 K0 (x) = ∞ 0 cos(x sinh t) dt = ∞ 0 cos(xt) dt , (t 2 + 1)1/2 x > 0. 18 A discussion and comparison of notations will be found in Math. Tables Aids Comput. 1: 207–308 (1944). 19 Watson, Morse and Feshbach, Jeffreys and Jeffreys (without the π/2). 20 For integral index n we take the limit as ν → n. 21 Whittaker and Watson, see Additional Readings of Chapter 13. (11.121) 716 Chapter 11 Bessel Functions Equation (11.120) may be derived from Eq. (11.30) for J0 (x) or may be taken as a special case of Exercise 11.5.4, ν = 0. The integral representation of K0 , Eq. (11.121), is a Fourier transform and may best be derived with Fourier transforms, Chapter 15, or with Green’s functions Section 9.7. A variety of other forms of integral representations (including ν = 0) appear in the exercises. These integral representations are useful in developing asymptotic forms (Section 11.6) and in connection with Fourier transforms, Chapter 15. To put the modified Bessel functions Iν (x) and Kν (x) in proper perspective, we introduce them here because: • These functions are solutions of the frequently encountered modified Bessel equation. • They are needed for specific physical problems, such as diffusion problems. • Kν (x) provides a Green’s function, Section 9.7. • Kν (x) leads to a convenient determination of asymptotic behavior (Section 11.6). 
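The integral representations just quoted lend themselves to a quick quadrature check. A sketch assuming SciPy; for K0 it uses the exponentially decaying form of Exercise 11.5.15, which is numerically friendlier than the oscillatory integrand of Eq. (11.121):

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import i0, k0

x = 1.5
# Eq. (11.120): I0(x) = (1/pi) * integral_0^pi cosh(x cos(theta)) dtheta
val_i, _ = quad(lambda th: np.cosh(x * np.cos(th)) / np.pi, 0, np.pi)
assert np.allclose(val_i, i0(x))

# Exercise 11.5.15 (equivalent to Eq. (11.121)):
# K0(x) = integral_0^inf exp(-x cosh t) dt
val_k, _ = quad(lambda t: np.exp(-x * np.cosh(t)), 0, np.inf)
assert np.allclose(val_k, k0(x))
```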
Exercises 11.5.1 Show that e(x/2)(t+1/t) = ∞  In (x)t n , n=−∞ thus generating modified Bessel functions, In (x). 11.5.2 Verify the following identities (a) (b) (c) (d) (e) 11.5.3 (a) 1 = I0 (x) + 2 ∞  (−1)n I2n (x), n=1 ∞  ex = I0 (x) + 2 e−x = I0 (x) + 2 (−1)n In (x), n=1 ∞  cosh x = I0 (x) + 2 sinh x = 2 ∞  In (x), n=1 ∞  I2n (x), n=1 I2n−1 (x). n=1 From the generating function of Exercise 11.5.1 show that   dt 1 In (x) = exp (x/2)(t + 1/t) n+1 . 2πi t 11.5 Modified Bessel Functions, Iν (x) and Kν (x) (b) 717 For n = ν, not an integer, show that the preceding integral representation may be generalized to  dt 1 Iν (x) = exp (x/2)(t + 1/t) ν+1 . 2πi C t The contour C is the same as that for Jν (x), Fig. 11.7. 11.5.4 For ν > − 12 show that Iν (z) may be represented by  ν π z 1 Iν (z) = e±z cos θ sin2ν θ dθ 1 1/2 π (ν − 2 )! 2 0  ν 1  ν−1/2 z 1 e±zp 1 − p 2 dp = 1 1/2 2 π (ν − 2 )! −1  ν π/2 z 2 cosh(z cos θ ) sin2ν θ dθ. = 1 π 1/2 (ν − 2 )! 2 0 11.5.5 A cylindrical cavity has a radius a and height l, Fig. 11.3. The ends, z = 0 and l, are at zero potential. The cylindrical walls, ρ = a, have a potential V = V (ϕ, z). (a) Show that the electrostatic potential (ρ, ϕ, z) has the functional form (ρ, ϕ, z) = (b) ∞  ∞  m=0 n=1 Im (kn ρ) sin kn z · (amn sin mϕ + bmn cos mϕ), where kn = nπ/ l. Show that the coefficients amn and bmn are given by22    2π l 2 amn sin mϕ = dz dϕ. V (ϕ, z) sin kn z · bmn cos mϕ πlIm (kn a) 0 0 Hint. Expand V (ϕ, z) as a double series and use the orthogonality of the trigonometric functions. 11.5.6 Verify that Kν (x) is given by Kν (x) = π I−ν (x) − Iν (x) 2 sin νπ and from this show that Kν (x) = K−ν (x). 11.5.7 Show that Kν (x) satisfies the recurrence relations Kν−1 (x) − Kν+1 (x) = − 2ν Kν (x), x Kν−1 (x) + Kν+1 (x) = −2Kν′ (x). 22 When m = 0, the 2 in the coefficient is replaced by 1. 718 Chapter 11 Bessel Functions 11.5.8 If Kν = eνπi Kν , show that Kν satisfies the same recurrence relations as Iν . 
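The generating function of Exercise 11.5.1 and the identity of Exercise 11.5.2(b) can be verified directly by truncating the sums; a sketch assuming SciPy:

```python
import numpy as np
from scipy.special import iv

x, t = 1.2, 0.7
# Exercise 11.5.1: e^{(x/2)(t + 1/t)} = sum_{n=-inf}^{inf} I_n(x) t^n
lhs = np.exp(x / 2 * (t + 1 / t))
rhs = sum(iv(n, x) * t**n for n in range(-20, 21))
assert abs(lhs - rhs) < 1e-10

# Exercise 11.5.2(b), the t = 1 special case:
# e^x = I_0(x) + 2 * sum_{n>=1} I_n(x)
assert np.allclose(np.exp(x), iv(0, x) + 2 * sum(iv(n, x) for n in range(1, 30)))
```

Truncation at |n| = 20 is safe here because I_n(x) falls off like (x/2)^n / n!, overwhelming the growth of t^(-n).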
11.5.9 For ν > − 21 show that Kν (z) may be represented by  ν ∞ z π π π 1/2 e−z cosh t sinh2ν t dt, − < arg z < Kν (z) = 1 2 2 (ν − 2 )! 2 0   π 1/2 z ν ∞ −zp 2 e (p − 1)ν−1/2 dp. = (ν − 12 )! 2 1 11.5.10 Show that Iν (x) and Kν (x) satisfy the Wronskian relation 1 Iν (x)Kν′ (x) − Iν′ (x)Kν (x) = − . x This result is quoted in Section 9.7 in the development of a Green’s function. 11.5.11 If r = (x 2 + y 2 )1/2 , prove that 2 1 = r π ∞ cos(xt)K0 (yt) dt. 0 This is a Fourier cosine transform of K0 . 11.5.12 (a) Verify that I0 (x) = 1 π π cosh(x cos θ ) dθ 0 satisfies the modified Bessel equation, ν = 0. Show that this integral contains no admixture of K0 (x), the irregular second solution. (c) Verify the normalization factor 1/π . (b) 11.5.13 Verify that the integral representations 1 π z cos t e cos(nt) dt, In (z) = π 0 ∞ Kν (z) = e−z cosh t cosh(νt) dt, 0 ℜ(z) > 0, satisfy the modified Bessel equation by direct substitution into that equation. How can you show that the first form does not contain an admixture of Kn and that the second form does not contain an admixture of Iν ? How can you check the normalization? 11.5.14 Derive the integral representation In (x) = 1 π π ex cos θ cos(nθ ) dθ. 0 Hint. Start with the corresponding integral representation of Jn (x). Equation (11.120) is a special case of this representation. 11.6 Asymptotic Expansions 11.5.15 719 Show that K0 (z) = ∞ e−z cosh t dt 0 satisfies the modified Bessel equation. How can you establish that this form is linearly independent of I0 (z)? 11.5.16 Show that eax = I0 (a)T0 (x) + 2 ∞  n=1 In (a)Tn (x), −1 ≤ x ≤ 1. Tn (x) is the nth-order Chebyshev polynomial, Section 13.3. Hint. Assume a Chebyshev series expansion. Using the orthogonality and normalization of the Tn (x), solve for the coefficients of the Chebyshev series. Write a double precision subroutine to calculate In (x) to 12-decimal-place accuracy for n = 0, 1, 2, 3, . . . and 0 ≤ x ≤ 1. 
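A sketch of such a computation (Python rather than the double-precision Fortran subroutine the exercise envisions), using the series Eq. (11.112) and checking the Wronskian of Exercise 11.5.10 as well; SciPy supplies the reference values:

```python
import numpy as np
from math import factorial
from scipy.special import iv, ivp, kv, kvp

def In_series(n, x, terms=30):
    # Series (11.112): I_n(x) = sum_s (x/2)^{2s+n} / (s! (s+n)!)
    return sum((x / 2) ** (2 * s + n) / (factorial(s) * factorial(s + n))
               for s in range(terms))

for n in range(4):
    assert abs(In_series(n, 0.8) - iv(n, 0.8)) < 1e-12

# Exercise 11.5.10: I_nu(x) K_nu'(x) - I_nu'(x) K_nu(x) = -1/x
nu, x = 1.0, 2.5
w = iv(nu, x) * kvp(nu, x) - ivp(nu, x) * kv(nu, x)
assert np.allclose(w, -1 / x)
```

For 0 ≤ x ≤ 1 the series converges rapidly (the terms fall off factorially), so 30 terms far exceed 12-decimal-place accuracy.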
Check your results against the 10-place values given in AMS-55, Table 9.11, see Additional Readings of Chapter 8 for the reference. (b) Referring to Exercise 11.5.16, calculate the coefficients in the Chebyshev expansions of cosh x and of sinh x. 11.5.17 (a) 11.5.18 The cylindrical cavity of Exercise 11.5.5 has a potential along the cylinder walls: + 100 zl , 0 ≤ zl ≤ 21 , V (z) =  100 1 − zl , 12 ≤ zl ≤ 1. With the radius–height ratio a/ l = 0.5, calculate the potential for z/ l = 0.1(0.1)0.5 and ρ/a = 0.0(0.2)1.0. Check value. For z/ l = 0.3 and ρ/a = 0.8, V = 26.396. 11.6 ASYMPTOTIC EXPANSIONS Frequently in physical problems there is a need to know how a given Bessel or modified Bessel function behaves for large values of the argument, that is, the asymptotic behavior. This is one occasion when computers are not very helpful. One possible approach is to develop a power-series solution of the differential equation, as in Section 9.5, but now using negative powers. This is Stokes’ method, Exercise 11.6.5. The limitation is that starting from some positive value of the argument (for convergence of the series), we do not know what mixture of solutions or multiple of a given solution we have. The problem is to relate the asymptotic series (useful for large values of the variable) to the power-series or related definition (useful for small values of the variable). This relationship can be established by introducing a suitable integral representation and then using either the method of steepest descent, Section 7.3, or the direct expansion as developed in this section. 720 Chapter 11 Bessel Functions Expansion of an Integral Representation As a direct approach, consider the integral representation (Exercise 11.5.9)  ν ∞  ν−1/2 π 1/2 1 z Kν (z) = e−zx x 2 − 1 dx, ν>− . 1 2 (ν − 2 )! 2 1 (11.122) For the present let us take z to be real, although Eq. (11.122) may be established for −π/2 < arg z < π/2 (ℜ(z) > 0). We have three tasks: To show that Kν as given in Eq. 
(11.122) actually satisfies the modified Bessel equation (11.109). 2. To show that the regular solution Iν is absent. 3. To show that Eq. (11.122) has the proper normalization. 1. The fact that Eq. (11.122) is a solution of the modified Bessel equation may be verified by direct substitution. We obtain ∞ ν+1/2 d  −zx  2 e x −1 dx = 0, zν+1 dx 1 which transforms the combined integrand into the derivative of a function that vanishes at both endpoints. Hence the integral is some linear combination of Iν and Kν . The rejection of the possibility that this solution contains Iν constitutes Exercise 11.6.1. 3. The normalization may be verified by showing that, in the limit z → 0, Kν (z) is in agreement with Eq. (11.119). By substituting x = 1 + t/z,  ν ∞  ν−1/2 z π 1/2 e−zx x 2 − 1 dx 1 (ν − 2 )! 2 1  2   ν ∞ 2t ν−1/2 dt π 1/2 z −z −t t (11.123a) e e + = z z z2 (ν − 21 )! 2 0   ∞ π 1/2 e−z 2z ν−1/2 −t 2ν−1 = 1+ dt, (11.123b) e t t (ν − 21 )! 2ν zν 0 taking out t 2 /z2 as a factor. This substitution has changed the limits of integration to a more convenient range and has isolated the negative exponential dependence e−z . The integral in Eq. (11.123b) may be evaluated for z = 0 to yield (2ν − 1)!. Then, using the duplication formula (Section 8.4), we have lim Kν (z) = z→0 (ν − 1)!2ν−1 , zν ν > 0, (11.124) in agreement with Eq. (11.119), which thus checks the normalization.23 23 For ν → 0 the integral diverges logarithmically, in agreement with the logarithmic divergence of K (z) for z → 0 (Sec0 tion 11.5). 11.6 Asymptotic Expansions 721 Now, to develop an asymptotic series for Kν (z), we may rewrite Eq. (11.123a) as )   ∞ t ν−1/2 π e−z −t ν−1/2 1+ Kν (z) = dt (11.125) e t 2z (ν − 12 )! 0 2z (taking out 2t/z as a factor). We expand (1 + t/2z)ν−1/2 by the binomial theorem to obtain ) ∞ ∞ π e−z  (ν − 21 )! −r Kν (z) = (2z) e−t t ν+r−1/2 dt. 1 2z (ν − 12 )! r!(ν − r − )! 
0 2 r=0 (11.126) Term-by-term integration (valid for asymptotic series) yields the desired asymptotic expansion of Kν (z): )  (4ν 2 − 12 ) (4ν 2 − 12 )(4ν 2 − 32 ) π −z 1+ e + Kν (z) ∼ + ··· . (11.127) 2z 1!8z 2!(8z)2 Although the integral of Eq. (11.122), integrating along the real axis, was convergent only for −π/2 < arg z < π/2, Eq. (11.127) may be extended to −3π/2 < arg z < 3π/2. Considered as an infinite series, Eq. (11.127) is actually divergent.24 However, this series is asymptotic, in the sense that for large enough z, Kν (z) may be approximated to any fixed degree of accuracy with a small number of terms. (Compare Section 5.10 for a definition and discussion of asymptotic series.) It is convenient to rewrite Eq. (11.127) as ) π −z  Kν (z) = e Pν (iz) + iQν (iz) , (11.128) 2z where Pν (z) ∼ 1 − (µ − 1)(µ − 9) (µ − 1)(µ − 9)(µ − 25)(µ − 49) + − ··· , 2!(8z)2 4!(8z)4 Qν (z) ∼ µ − 1 (µ − 1)(µ − 9)(µ − 25) − + ··· , 1!(8z) 3!(8z)3 (11.129a) (11.129b) and µ = 4ν 2 . It should be noted that although Pν (z) of Eq. (11.129a) and Qν (z) of Eq. (11.129b) have alternating signs, the series for Pν (iz) and Qν (iz) of Eq. (11.128) have all signs positive. Finally, for z large, Pν dominates. Then with the asymptotic form of Kν (z), Eq. (11.128), we can obtain expansions for all other Bessel and hyperbolic Bessel functions by defining relations: 24 Our binomial expansion is valid only for t < 2z and we have integrated t out to infinity. The exponential decrease of the integrand prevents a disaster, but the resultant series is still only asymptotic, not convergent. By Table 9.3, z = ∞ is an essential singularity of the Bessel (and modified Bessel) equations. Fuchs’ theorem does not guarantee a convergent series and we do not get a convergent series. 722 Chapter 11 Bessel Functions 1. From π ν+1 (1) i Hν (iz) = Kν (z) 2 (11.130) we have Hν(1) (z) )     2 1 π = exp i z − ν + πz 2 2  · Pν (z) + iQν (z) , −π < arg z < 2π. 
(11.131) The second Hankel function is just the complex conjugate of the first (for real argument), )     1 π 2 (2) Hν (z) = exp −i z − ν + πz 2 2  · Pν (z) − iQν (z) , −2π < arg z < π. (11.132) An alternate derivation of the asymptotic behavior of the Hankel functions appears in Section 7.3 as an application of the method of steepest descents. (1) 3. Since Jν (z) is the real part of Hν (z) for real z, )     2 1 π Pν (z) cos z − ν + Jν (z) = πz 2 2    1 π − Qν (z) sin z − ν + , −π < arg z < π, (11.133) 2 2 holds for real z, that is, arg z = 0, π . Once Eq. (11.133) is established for real z, the relation is valid for complex z in the given range of argument. (1) The Neumann function is the imaginary part of Hν (z) for real z, or )     2 1 π Pν (z) sin z − ν + Nν (z) = πz 2 2    1 π + Qν (z) cos z − ν + , −π < arg z < π. (11.134) 2 2 Initially, this relation is established for real z, but it may be extended to the complex domain as shown. 5. Finally, the regular hyperbolic or modified Bessel function Iν (z) is given by Iν (z) = i −ν Jν (iz) (11.135) or ez  Pν (iz) − iQν (iz) , Iν (z) = √ 2πz − π π < arg z < . 2 2 (11.136) 11.6 Asymptotic Expansions FIGURE 11.11 723 Asymptotic approximation of J0 (x). This completes our determination of the asymptotic expansions. However, it is perhaps worth noting the primary characteristics. Apart from the ubiquitous z−1/2 , Jν and Nν behave as cosine and sine, respectively. The zeros are almost evenly spaced at intervals of π ; the spacing becomes exactly π in the limit as z → ∞. The Hankel functions have been defined to behave like the imaginary exponentials, and the modified Bessel functions Iν and Kν go into the positive and negative exponentials. This asymptotic behavior may be sufficient to eliminate immediately one of these functions as a solution for a physical problem. We should also note that the asymptotic series Pν (z) and Qν (z), Eqs. (11.129a) and (11.129b), terminate for ν = ±1/2, ±3/2, . . . 
and become polynomials (in negative powers of z). For these special values of ν the asymptotic approximations become exact solutions. It is of some interest to consider the accuracy of the asymptotic forms, taking just the first term, for example (Fig. 11.11), )    π 1 2 Jn (x) ≈ . (11.137) cos x − n + πx 2 2 Clearly, the condition for the validity of Eq. (11.137) is that the sine term be negligible; that is, 8x ≫ 4n2 − 1. (11.138) For n or ν > 1 the asymptotic region may be far out. As pointed out in Section 11.3, the asymptotic forms may be used to evaluate the various Wronskian formulas (compare Exercise 11.6.3). Exercises 11.6.1 In checking the normalization of the integral representation of Kν (z) (Eq. (11.122)), we assumed that Iν (z) was not present. How do we know that the integral representation (Eq. (11.122)) does not yield Kν (z) + εIν (z) with ε = 0? 724 Chapter 11 Bessel Functions FIGURE 11.12 11.6.2 (a) Modified Bessel function contours. Show that y(z) = z ν  ν−1/2 e−zt t 2 − 1 dt satisfies the modified Bessel equation, provided the contour is chosen so that  ν+1/2 e−zt t 2 − 1 (b) 11.6.3 has the same value at the initial and final points of the contour. Verify that the contours shown in Fig. 11.12 are suitable for this problem. Use the asymptotic expansions to verify the following Wronskian formulas: (a) Jν (x)J−ν−1 (x) + J−ν (x)Jν+1 (x) = −2 sin νπ/πx, (b) Jν (x)Nν+1 (x) − Jν+1 (x)Nν (x) = −2/πx, (2) (x) − Jν−1 (x)Hν(2) (x) = 2/iπx, (c) Jν (x)Hν−1 ′ (d) Iν (x)Kν (x) − Iν′ (x)Kν (x) = −1/x, (e) Iν (x)Kν+1 (x) + Iν+1 (x)Kν (x) = 1/x. 11.6.4 From the asymptotic form of Kν (z), Eq. (11.127), derive the asymptotic form of (1) Hν (z), Eq. (11.131). Note particularly the phase, (ν + 21 )π/2. 11.6.5 Stokes’ method. (a) Replace the Bessel function in Bessel’s equation by x −1/2 y(x) and show that y(x) satisfies   ν 2 − 14 ′′ y (x) + 1 − y(x) = 0. 
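Both points, the accuracy of a truncated asymptotic series and its exactness at half-odd-integral ν, can be illustrated for Kν(z), Eq. (11.127). A sketch assuming SciPy; the term recurrence below is read off from successive factors in (11.127):

```python
import numpy as np
from scipy.special import kv

def kv_asym(nu, z, terms=5):
    # Eq. (11.127): K_nu(z) ~ sqrt(pi/(2z)) e^{-z} * sum_r a_r, with
    # a_0 = 1 and a_r = a_{r-1} * (4 nu^2 - (2r-1)^2) / (r * 8z).
    a, s = 1.0, 1.0
    for r in range(1, terms):
        a *= (4 * nu**2 - (2 * r - 1) ** 2) / (r * 8 * z)
        s += a
    return np.sqrt(np.pi / (2 * z)) * np.exp(-z) * s

z = 10.0
for nu in (0, 1, 2):
    assert abs(kv_asym(nu, z) - kv(nu, z)) / kv(nu, z) < 1e-4

# For nu = 1/2 the series terminates after the first term and is exact:
# K_{1/2}(z) = sqrt(pi/(2z)) e^{-z}.
assert np.allclose(kv(0.5, z), np.sqrt(np.pi / (2 * z)) * np.exp(-z))
```

Adding terms beyond the smallest one makes a divergent asymptotic series worse, not better; five terms at z = 10 are well inside the useful range.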
x2 (b) Develop a power-series solution with negative powers of x starting with the assumed form y(x) = eix ∞  an x −n . n=0 Determine the recurrence relation giving an+1 in terms of an . Check your result against the asymptotic series, Eq. (11.131). (c) From the results of Section 7.4 determine the initial coefficient, a0 . 11.7 Spherical Bessel Functions 11.6.6 725 Calculate the first 15 partial sums of P0 (x) and Q0 (x), Eqs. (11.129a) and (11.129b). Let x vary from 4 to 10 in unit steps. Determine the number of terms to be retained for maximum accuracy and the accuracy achieved as a function of x. Specifically, how small may x be without raising the error above 3 × 10−6 ? ANS. xmin = 6. 11.6.7 11.7 (a) Using the asymptotic series (partial sums) P0 (x) and Q0 (x) determined in Exercise 11.6.6, write a function subprogram FCT(X) that will calculate J0 (x), x real, for x ≥ xmin . (b) Test your function by comparing it with the J0 (x) (tables or computer library subroutine) for x = xmin (10)xmin + 10. Note. A more accurate and perhaps simpler asymptotic form for J0 (x) is given in AMS55, Eq. (9.4.3), see Additional Readings of Chapter 8 for the reference. SPHERICAL BESSEL FUNCTIONS When the Helmholtz equation is separated in spherical coordinates, the radial equation has the form r2 d 2R dR  2 2 + k r − n(n + 1) R = 0. + 2r dr dr 2 (11.139) This is Eq. (9.65) of Section 9.3. The parameter k enters from the original Helmholtz equation, while n(n + 1) is a separation constant. From the behavior of the polar angle function (Legendre’s equation, Sections 9.5 and 12.5), the separation constant must have this form, with n a nonnegative integer. Equation (11.139) has the virtue of being selfadjoint, but clearly it is not Bessel’s equation. However, if we substitute R(kr) = Z(kr) , (kr)1/2 Equation (11.139) becomes r2    d 2Z 1 2 dZ 2 2 Z = 0, + k r − n + + r dr 2 dr 2 (11.140) which is Bessel’s equation. Z is a Bessel function of order n + 21 (n an integer). 
Because of the importance of spherical coordinates, this combination, that is, Zn+1/2 (kr) , (kr)1/2 occurs quite often. 726 Chapter 11 Bessel Functions Definitions It is convenient to label these functions spherical Bessel functions with the following defining equations: jn (x) = nn (x) = (1) hn (x) (2) hn (x) = = ) ) ) ) π Jn+1/2 (x), 2x π Nn+1/2 (x) = (−1)n+1 2x ) π J−n−1/2 (x),25 2x (11.141) π (1) H (x) = jn (x) + inn (x), 2x n+1/2 π (2) H (x) = jn (x) − inn (x). 2x n+1/2 These spherical Bessel functions (Figs. 11.13 and 11.14) can be expressed in series form by using the series (Eq. (11.5)) for Jn , replacing n with n + 12 : Jn+1/2 (x) = ∞  s=0  2s+n+1/2 x . 1 s!(s + n + 2 )! 2 (−1)s (11.142) Using the Legendre duplication formula, z!(z + 12 )! = 2−2z−1 π 1/2 (2z + 1)!, (11.143) we have jn (x) = )   ∞ π  (−1)s 22s+2n+1 (s + n)! x 2s+n+1/2 2x π 1/2 (2s + 2n + 1)!s! 2 = 2n x n s=0 ∞  (−1)s (s + n)! 2s x . s!(2s + 2n + 1)! (11.144) s=0 Now, Nn+1/2 (x) = (−1)n+1 J−n−1/2 (x) and from Eq. (11.5) we find that J−n−1/2 (x) =  2s−n−1/2 x . 1 s!(s − n − 2 )! 2 (11.145)  2s ∞ 2n π 1/2  (−1)s x . 1 2 x n+1 s!(s − n − )! 2 s=0 (11.146) ∞  s=0 (−1)s This yields nn (x) = (−1)n+1 25 This is possible because cos(n + 1 )π = 0, see Eq. (11.60). 2 11.7 Spherical Bessel Functions FIGURE 11.13 FIGURE 11.14 Spherical Bessel functions. Spherical Neumann functions. 727 728 Chapter 11 Bessel Functions The Legendre duplication formula can be used again to give nn (x) = ∞ (−1)n+1  (−1)s (s − n)! 2s x . s!(2s − 2n)! 2n x n+1 (11.147) s=0 These series forms, Eqs. (11.144) and (11.147), are useful in three ways: (1) limiting values as x → 0, (2) closed-form representations for n = 0, and, as an extension of this, (3) an indication that the spherical Bessel functions are closely related to sine and cosine. For the special case n = 0 we find from Eq. (11.144) that j0 (x) = ∞  (−1)s 2s sin x x = , (2s + 1)! x (11.148) s=0 whereas for n0 , Eq. (11.147) yields cos x . 
x From the definition of the spherical Hankel functions (Eq. (11.141)), n0 (x) = − (11.149) i 1 (sin x − i cos x) = − eix , x x i 1 (2) h0 (x) = (sin x + i cos x) = e−ix . (11.150) x x Equations (11.148) and (11.149) suggest expressing all spherical Bessel functions as combinations of sine and cosine. The appropriate combinations can be developed from the power-series solutions, Eqs. (11.144) and (11.147), but this approach is awkward. Actually the trigonometric forms are already available as the asymptotic expansion of Section 11.6. From Eqs. (11.131) and (11.129a), ) π (1) (1) H (z) hn (x) = 2z n+1/2 (1) h0 (x) = ( eiz ' Pn+1/2 (z) + iQn+1/2 (z) . (11.151) z Now, Pn+1/2 and Qn+1/2 are polynomials. This means that Eq. (11.151) is mathematically exact, not simply an asymptotic approximation. We obtain = (−i)n+1 h(1) n (z) n+1 e = (−i) = (−i)n+1 iz z eiz z n  s=0 i s (2n + 2s)!! s!(8z)s (2n − 2s)!! n  s=0 i s (n + s)! . s!(2z)s (n − s)! (11.152) Often a factor (−i)n = (e−iπ/2 )n will be combined with the eiz to give ei(z−nπ/2) . For (2) z real, jn (z) is the real part of this, nn (z) the imaginary part, and hn (z) the complex conjugate. Specifically,   i 1 (1) (11.153a) h1 (x) = eix − − 2 , x x 11.7 Spherical Bessel Functions 729   i 3 3i (1) h2 (x) = eix (11.153b) − 2− 3 , x x x sin x cos x − , x x2   3 1 3 sin x − 2 cos x, j2 (x) = − x3 x x (11.154) cos x sin x , − x x2   3 1 3 n2 (x) = − 3 − cos x − 2 sin x, x x x (11.155) j1 (x) = n1 (x) = − and so on. Limiting Values For x ≪ 1,26 Eqs. (11.144) and (11.147) yield jn (x) ≈ nn (x) ≈ xn 2n n! xn = , (2n + 1)! (2n + 1)!! (11.156) (−1)n+1 (−n)! −n−1 x · 2n (−2n)! (2n)! −n−1 x = −(2n − 1)!!x −n−1 . (11.157) 2n n! The transformation of factorials in the expressions for nn (x) employs Exercise 8.1.3. The limiting values of the spherical Hankel functions go as ±inn (x). (2) (1) The asymptotic values of jn , nn , hn , and hn may be obtained from the Bessel asymptotic forms, Section 11.6. 
We find   nπ 1 , (11.158) jn (x) ∼ sin x − x 2   nπ 1 nn (x) ∼ − cos x − , (11.159) x 2 =− n+1 h(1) n (x) ∼ (−i) n+1 h(2) n (x) ∼ i eix ei(x−nπ/2) = −i , x x e−ix e−i(x−nπ/2) =i . x x (11.160a) (11.160b) 26 The condition that the second term in the series be negligible compared to the first is actually x ≪ 2[(2n + 2)(2n + 3)/ (n + 1)]1/2 for jn (x). 730 Chapter 11 Bessel Functions The condition for these spherical Bessel forms is that x ≫ n(n + 1)/2. From these asymptotic values we see that jn (x) and nn (x) are appropriate for a description of standing (1) (2) spherical waves; hn (x) and hn (x) correspond to traveling spherical waves. If the time (1) dependence for the traveling waves is taken to be e−iωt , then hn (x) yields an outgoing (2) traveling spherical wave, hn (x) an incoming wave. Radiation theory in electromagnetism and scattering theory in quantum mechanics provide many applications. Recurrence Relations The recurrence relations to which we now turn provide a convenient way of developing the higher-order spherical Bessel functions. These recurrence relations may be derived from the series, but, as with the modified Bessel functions, it is easier to substitute into the known recurrence relations (Eqs. (11.10) and (11.12)). This gives fn−1 (x) + fn+1 (x) = 2n + 1 fn (x), x (11.161) nfn−1 (x) − (n + 1)fn+1 (x) = (2n + 1)fn′ (x). (11.162) Rearranging these relations (or substituting into Eqs. (11.15) and (11.17)), we obtain d  n+1 x fn (x) = x n+1 fn−1 (x), dx (11.163) d  −n x fn (x) = −x −n fn+1 (x). dx (11.164) (2) Here fn may represent jn , nn , h(1) n , or hn . The specific forms, Eqs. (11.154) and (11.155), may also be readily obtained from Eq. (11.164). By mathematical induction we may establish the Rayleigh formulas n    sin x n n 1 d , (11.165) jn (x) = (−1) x x dx x n n nn (x) = −(−1) x h(1) n (x)  n n = −i(−1) x n n h(2) n (x) = i(−1) x  1 d x dx  n  1 d x dx 1 d x dx  cos x , x n  n   eix , x  e−ix x . 
(11.166) (11.167)

Orthogonality

We may take the orthogonality integral for the ordinary Bessel functions (Eqs. (11.49) and (11.50)),

\[
\int_0^a J_\nu\!\left(\alpha_{\nu p}\frac{\rho}{a}\right) J_\nu\!\left(\alpha_{\nu q}\frac{\rho}{a}\right)\rho\,d\rho
= \frac{a^2}{2}\bigl[J_{\nu+1}(\alpha_{\nu p})\bigr]^2 \delta_{pq},
\tag{11.168}
\]

and substitute in the expression for $j_n$ to obtain

\[
\int_0^a j_n\!\left(\alpha_{np}\frac{\rho}{a}\right) j_n\!\left(\alpha_{nq}\frac{\rho}{a}\right)\rho^2\,d\rho
= \frac{a^3}{2}\bigl[j_{n+1}(\alpha_{np})\bigr]^2 \delta_{pq}.
\tag{11.169}
\]

Here $\alpha_{np}$ and $\alpha_{nq}$ are roots of $j_n$. This represents orthogonality with respect to the roots of the Bessel functions. An illustration of this sort of orthogonality is provided in Example 11.7.1, the problem of a particle in a sphere. Equation (11.169) guarantees orthogonality of the wave functions $j_n(r)$ for fixed $n$. (If $n$ varies, the accompanying spherical harmonic will provide orthogonality.)

Example 11.7.1 PARTICLE IN A SPHERE

An illustration of the use of the spherical Bessel functions is provided by the problem of a quantum mechanical particle in a sphere of radius $a$. Quantum theory requires that the wave function $\psi$, describing our particle, satisfy

\[
-\frac{\hbar^2}{2m}\nabla^2\psi = E\psi,
\tag{11.170}
\]

and the boundary conditions (1) $\psi(r \le a)$ remains finite, (2) $\psi(a) = 0$. This corresponds to a square-well potential $V = 0$ for $r \le a$, and $V = \infty$ for $r > a$. Here $\hbar$ is Planck's constant divided by $2\pi$, $m$ is the mass of our particle, and $E$ is its energy. Let us determine the minimum value of the energy for which our wave equation has an acceptable solution. Equation (11.170) is Helmholtz's equation with a radial part (compare Section 9.3 for separation of variables):

\[
\frac{d^2R}{dr^2} + \frac{2}{r}\frac{dR}{dr} + \left[k^2 - \frac{n(n+1)}{r^2}\right]R = 0,
\tag{11.171}
\]

with $k^2 = 2mE/\hbar^2$. Hence by Eq. (11.139), with $n = 0$,

\[
R = A\,j_0(kr) + B\,n_0(kr).
\]

We choose the orbital angular momentum index $n = 0$, for any angular dependence would raise the energy. The spherical Neumann function is rejected because of its divergent behavior at the origin.
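The rejection of the Neumann solution, and the first zero of $j_0$ that fixes the ground state below, are easy to check numerically from the closed forms $j_0(x) = \sin x / x$ and $n_0(x) = -\cos x / x$. A minimal plain-Python sketch (the sample points are arbitrary):

```python
import math

def j0(x):  # Eq. (11.148)
    return math.sin(x) / x

def n0(x):  # Eq. (11.149)
    return -math.cos(x) / x

# j_0 stays finite as x -> 0 (it tends to 1), while n_0 grows like -1/x,
# which is why the Neumann solution is discarded inside the sphere.
for x in (1e-2, 1e-4, 1e-6):
    assert abs(j0(x) - 1.0) < 1e-4
    assert abs(n0(x)) > 0.5 / x

# The first zero of j_0 sits at x = pi; it sets the minimum energy below.
assert abs(j0(math.pi)) < 1e-12
```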
To satisfy the second boundary condition (for all angles), we require

\[
ka = \frac{\sqrt{2mE}}{\hbar}\,a = \alpha,
\tag{11.172}
\]

where $\alpha$ is a root of $j_0$, that is, $j_0(\alpha) = 0$. This has the effect of limiting the allowable energies to a certain discrete set, or, in other words, application of boundary condition (2) quantizes the energy $E$. The smallest $\alpha$ is the first zero of $j_0$, $\alpha = \pi$, and

\[
E_{\min} = \frac{\pi^2\hbar^2}{2ma^2} = \frac{h^2}{8ma^2},
\tag{11.173}
\]

which means that for any finite sphere the particle energy will have a positive minimum or zero-point energy. This is an illustration of the Heisenberg uncertainty principle for $p$ with $r \le a$.

In solid-state physics, astrophysics, and other areas of physics, we may wish to know how many different solutions (energy states) correspond to energies less than or equal to some fixed energy $E_0$. For a cubic volume (Exercise 9.3.5) the problem is fairly simple. The considerably more difficult spherical case is worked out by R. H. Lambert, Am. J. Phys. 36: 417, 1169 (1968). The relevant orthogonality relation for the $j_n(kr)$ can be derived from the integral given in Exercise 11.7.23.

Another form, orthogonality with respect to the indices, may be written as

\[
\int_{-\infty}^{\infty} j_m(x)\,j_n(x)\,dx = 0, \qquad m \ne n,\quad m, n \ge 0.
\tag{11.174}
\]

The proof is left as Exercise 11.7.10. If $m = n$ (compare Exercise 11.7.11), we have

\[
\int_{-\infty}^{\infty} \bigl[j_n(x)\bigr]^2 dx = \frac{\pi}{2n+1}.
\tag{11.175}
\]

Most physical applications of orthogonal Bessel and spherical Bessel functions involve orthogonality with varying roots over an interval $[0, a]$ (Eqs. (11.168) and (11.169)) and Exercise 11.7.23 for continuous-energy eigenvalues. The spherical Bessel functions will enter again in connection with spherical waves, but further consideration is postponed until the corresponding angular functions, the Legendre functions, have been introduced.

Exercises

11.7.1 Show that if

\[
n_n(x) = \sqrt{\frac{\pi}{2x}}\,N_{n+1/2}(x),
\]

it automatically equals $(-1)^{n+1}\sqrt{\pi/(2x)}\,J_{-n-1/2}(x)$.
2x 11.7 Spherical Bessel Functions 11.7.2 733 Derive the trigonometric-polynomial forms of jn (z) and nn (z).27   [n/2] nπ  (−1)s (n + 2s)! 1 jn (z) = sin z − z 2 (2s)!(2z)2s (n − 2s)! s=0 nn (z) = s=0  [n/2]  nπ  (−1)s (n + 2s)! (−1)n+1 cos z + z 2 (2s)!(2z)2s (n − 2s)! s=0 11.7.3   [(n−1)/2]  nπ 1 (−1)s (n + 2s + 1)! cos z − , z 2 (2s + 1)!(2z)2s (n − 2s − 1)!  [(n−1)/2]   (−1)n+1 (−1)s (n + 2s + 1)! nπ sin z + . z 2 (2s + 1)!(2z)2s+1 (n − 2s − 1)! s=0 Use the integral representation of Jν (x),  ν 1  ν−1/2 1 x Jν (x) = e±ixp 1 − p 2 dp, 1 1/2 2 π (ν − 2 )! −1 to show that the spherical Bessel functions jn (x) are expressible in terms of trigonometric functions; that is, for example, sin x , x Derive the recurrence relations j0 (x) = 11.7.4 (a) j1 (x) = sin x cos x − . x x2 fn−1 (x) + fn+1 (x) = 2n + 1 fn (x), x nfn−1 (x) − (n + 1)fn+1 (x) = (2n + 1)fn′ (x) 11.7.5 (2) satisfied by the spherical Bessel functions jn (x), nn (x), h(1) n (x), and hn (x). (b) Show, from these two recurrence relations, that the spherical Bessel function fn (x) satisfies the differential equation  x 2 fn′′ (x) + 2xfn′ (x) + x 2 − n(n + 1) fn (x) = 0. Prove by mathematical induction that n n jn (x) = (−1) x  1 d x dx n  sin x x  for n an arbitrary nonnegative integer. 11.7.6 From the discussion of orthogonality of the spherical Bessel functions, show that a Wronskian relation for jn (x) and nn (x) is jn (x)n′n (x) − jn′ (x)nn (x) = 1 . x2 27 The upper limit on the summation [n/2] means the largest integer that does not exceed n/2. 734 Chapter 11 Bessel Functions 11.7.7 Verify 11.7.8 2i . x2 Verify Poisson’s integral representation of the spherical Bessel function, π zn jn (z) = n+1 cos(z cos θ ) sin2n+1 θ dθ. 2 n! 0 11.7.9 Show that ′ ′ (2) (1) (2) h(1) n (x)hn (x) − hn (x)hn (x) = − ∞ Jµ (x)Jν (x) 0 11.7.10 Derive Eq. (11.174): ∞ −∞ 11.7.11 2 sin[(µ − ν)π/2] dx = , x π µ2 − ν 2 jm (x)jn (x) dx = 0, m = n m, n ≥ 0. Derive Eq. (11.175): 11.7.12 µ + ν > −1. ∞ −∞ 2 jn (x) dx = π . 
2n + 1 Set up the orthogonality integral for jL (kr) in a sphere of radius R with the boundary condition jL (kR) = 0. The result is used in classifying electromagnetic radiation according to its angular momentum. 11.7.13 The Fresnel integrals (Fig. 11.15 and Exercise 5.10.2) occurring in diffraction theory are given by ) ) )  t )  t  2  π π π π y(t) = x(t) = cos v dv, sin v 2 dv. C t = S t = 2 2 2 2 0 0 Show that these integrals may be expanded in series of spherical Bessel functions ∞  1 s x(s) = j−1 (u)u1/2 du = s 1/2 j2n (s), 2 0 n=0 y(s) = 1 2 0 s j0 (u)u1/2 du = s 1/2 ∞  j2n+1 (s). n=0 Hint. To establish the equality of the integral and the sum, you may wish to work with their derivatives. The spherical Bessel analogs of Eqs. (11.12) and (11.14) are helpful. 11.7.14 A hollow sphere of radius a (Helmholtz resonator) contains standing sound waves. Find the minimum frequency of oscillation in terms of the radius a and the velocity of sound v. The sound waves satisfy the wave equation ∇2 ψ = 1 ∂ 2ψ v 2 ∂t 2 11.7 Spherical Bessel Functions FIGURE 11.15 735 Fresnel integrals. and the boundary condition ∂ψ = 0, ∂r r = a. This is a Neumann boundary condition. Example 11.7.1 has the same PDE but with a Dirichlet boundary condition. ANS. νmin = 0.3313v/a, 11.7.15 λmax = 3.018a. Defining the spherical modified Bessel functions (Fig. 11.16) by ) ) π 2 in (x) = In+1/2 (x), Kn+1/2 (x), kn (x) = 2x πx show that i0 (x) = sinh x , x k0 (x) = e−x . x Note that the numerical factors in the definitions of in and kn are not identical. 11.7.16 (a) Show that the parity of in (x) is (−1)n . (b) Show that kn (x) has no definite parity. 736 Chapter 11 Bessel Functions FIGURE 11.16 11.7.17 Show that the spherical modified Bessel functions satisfy the following relations: (a) (b) (c) 11.7.18 Spherical modified Bessel functions. 
in (x) = i −n jn (ix), (1) kn (x) = −i n hn (ix), d  −n x in , dx d  −n x kn , kn+1 (x) = −x n dx  n sinh x n 1 d , in (x) = x x dx x   1 d n e−x kn (x) = (−1)n x n . x dx x in+1 (x) = x n Show that the recurrence relations for in (x) and kn (x) are (a) 2n + 1 in (x), x nin−1 (x) + (n + 1)in+1 (x) = (2n + 1)in′ (x), in−1 (x) − in+1 (x) = 11.7 Spherical Bessel Functions (b) 11.7.19 2n + 1 kn (x), x nkn−1 (x) + (n + 1)kn+1 (x) = −(2n + 1)kn′ (x). kn−1 (x) − kn+1 (x) = − Derive the limiting values for the spherical modified Bessel functions (a) (b) 11.7.20 in (x) ≈ xn , (2n + 1)!! in (x) ∼ ex , 2x kn (x) ≈ kn (x) ∼ (2n − 1)!! , x n+1 e−x , x x ≪ 1. 1 x ≫ n(n + 1). 2 Show that the Wronskian of the spherical modified Bessel functions is given by in (x)kn′ (x) − in′ (x)kn (x) = − 11.7.21 737 1 . x2 A quantum particle of mass M is trapped in a “square” well of radius a. The Schrödinger equation potential is + −V0 , 0 ≤ r < a V (r) = 0, r > a. The particle’s energy E is negative (an eigenvalue). Show that the radial part of the wave function is given by jl (k1 r) for 0 ≤ r < a and kl (k2 r) for r > a. (We require that ψ(0) be finite and ψ(∞) → 0.) Here k12 = 2M(E + V0 )/h¯ 2 , k22 = −2ME/h¯ 2 , and l is the angular momentum (n in Eq. (11.139)). (b) The boundary condition at r = a is that the wave function ψ(r) and its first derivative be continuous. Show that this means   (d/dr)kl (k2 r)  (d/dr)jl (k1 r)  =   . jl (k1 r) kl (k2 r) r=a r=a (a) This equation determines the energy eigenvalues. Note. This is a generalization of Example 10.1.2. 11.7.22 The quantum mechanical radial wave function for a scattered wave is given by ψk = sin(kr + δ0 ) , kr √ where k is the wave number, k = 2mE/h¯ , and δ0 is the scattering phase shift. Show that the normalization integral is ∞ π ψk (r)ψk ′ (r)r 2 dr = δ(k − k ′ ). 2k 0 Hint. You can use a sine representation of the Dirac delta function. See Exercise 15.3.8. 
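The closed forms of Exercise 11.7.15, $i_0(x) = \sinh x / x$ and $k_0(x) = e^{-x}/x$, can be spot-checked against the Wronskian of Exercise 11.7.20, $i_n(x)k_n'(x) - i_n'(x)k_n(x) = -1/x^2$. A small numerical sketch (plain Python; the central-difference helper `deriv` is our own and is used only as a check, not as a recommended way to differentiate):

```python
import math

def i0(x):  # Exercise 11.7.15
    return math.sinh(x) / x

def k0(x):  # Exercise 11.7.15
    return math.exp(-x) / x

def deriv(f, x, h=1e-6):
    """Central-difference derivative, adequate for a numerical check."""
    return (f(x + h) - f(x - h)) / (2 * h)

x = 0.8
wronskian = i0(x) * deriv(k0, x) - deriv(i0, x) * k0(x)
assert abs(wronskian + 1.0 / x**2) < 1e-8   # Exercise 11.7.20
```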
738 Chapter 11 Bessel Functions 11.7.23 Derive the spherical Bessel function closure relation 2a 2 ∞ jn (ar)jn (br)r 2 dr = δ(a − b). π 0 Note. An interesting derivation involving Fourier transforms, the Rayleigh plane-wave expansion, and spherical harmonics has been given by P. Ugincius, Am. J. Phys. 40: 1690 (1972). 11.7.24 (a) Write a subroutine that will generate the spherical Bessel functions, jn (x), that is, will generate the numerical value of jn (x) given x and n. Note. One possibility is to use the explicit known forms of j0 and j1 and to develop the higher index jn , by repeated application of the recurrence relation. (b) Check your subroutine by an independent calculation, such as Eq. (11.154). If possible, compare the machine time needed for this check with the time required for your subroutine. 11.7.25 The wave function of a particle √ in a sphere (Example 11.7.1) with angular momentum l is ψ(r, θ, ϕ) = Ajl (( 2ME)r/h¯ )Ylm (θ, ϕ). The Ylm (θ, ϕ) is a spherical harmonic, √ described in Section 12.6. From the boundary condition ψ(a, θ, ϕ) = 0 or jl (( 2ME)a/h¯ ) = 0 calculate the 10 lowest-energy states. Disregard the m degeneracy (2l + 1 values of m for each choice of l). Check your results against AMS-55, Table 10.6, see Additional Readings for Chapter 8 for the reference. Hint. You can use your spherical Bessel subroutine and a root-finding subroutine. Check values. jl (αls ) = 0, α01 = 3.1416 α11 = 4.4934 α21 = 5.7635 α02 = 6.2832. 11.7.26 Let Example 11.7.1 be modified so that the potential is a finite V0 outside (r > a). (a) For E < V0 show that ψout (r, θ, ϕ) ∼ kl (b)   r 2M(V0 − E) . h¯ The new boundary conditions to be satisfied at r = a are ψin (a, θ, ϕ) = ψout (a, θ, ϕ), ∂ ∂ ψin (a, θ, ϕ) = ψout (a, θ, ϕ) ∂r ∂r or   1 ∂ψout  1 ∂ψin  = . ψin ∂r r=a ψout ∂r r=a For l = 0 show that the boundary condition at r = a leads to     1 1 f (E) = k cot ka − + k′ 1 + ′ = 0, ka ka √ √ where k = 2ME/h¯ and k ′ = 2M(V0 − E)/h¯ . 
(c) With $a = 4\pi\varepsilon_0\hbar^2/Me^2$ (Bohr radius) and $V_0 = 4Me^4/2\hbar^2$, compute the possible bound states $(0 < E < V_0)$.
Hint. Call a root-finding subroutine after you know the approximate location of the roots of $f(E) = 0$, $0 \le E \le V_0$.
(d) Show that when $a = 4\pi\varepsilon_0\hbar^2/Me^2$ the minimum value of $V_0$ for which a bound state exists is $V_0 = 2.4674\,Me^4/2\hbar^2$.

11.7.27 In some nuclear stripping reactions the differential cross section is proportional to $j_l(x)^2$, where $l$ is the angular momentum. The location of the maximum on the curve of experimental data permits a determination of $l$, if the location of the (first) maximum of $j_l(x)$ is known. Compute the location of the first maximum of $j_1(x)$, $j_2(x)$, and $j_3(x)$. Note. For better accuracy look for the first zero of $j_l'(x)$. Why is this more accurate than direct location of the maximum?

Additional Readings

Jackson, J. D., Classical Electrodynamics, 3rd ed. New York: J. Wiley (1999).
McBride, E. B., Obtaining Generating Functions. New York: Springer-Verlag (1971). An introduction to methods of obtaining generating functions.
Watson, G. N., A Treatise on the Theory of Bessel Functions, 2nd ed. Cambridge, UK: Cambridge University Press (1952). This is the definitive text on Bessel functions and their properties. Although difficult reading, it is invaluable as the ultimate reference.
Watson, G. N., A Treatise on the Theory of Bessel Functions, 1st ed. Cambridge, UK: Cambridge University Press (1922).
See also the references listed at the end of Chapter 13.

CHAPTER 12 LEGENDRE FUNCTIONS

12.1 GENERATING FUNCTION

Legendre polynomials appear in many different mathematical and physical situations. (1) They may originate as solutions of the Legendre ODE, which we have already encountered in the separation of variables (Section 9.3) for Laplace's equation, Helmholtz's equation, and similar ODEs in spherical polar coordinates.
(2) They enter as a consequence of a Rodrigues’ formula (Section 12.4). (3) They arise as a consequence of demanding a complete, orthogonal set of functions over the interval [−1, 1] (Gram–Schmidt orthogonalization, Section 10.3). (4) In quantum mechanics they (really the spherical harmonics, Sections 12.6 and 12.7) represent angular momentum eigenfunctions. (5) They are generated by a generating function. We introduce Legendre polynomials here by way of a generating function. Physical Basis — Electrostatics As with Bessel functions, it is convenient to introduce the Legendre polynomials by means of a generating function, which here appears in a physical context. Consider an electric charge q placed on the z-axis at z = a. As shown in Fig. 12.1, the electrostatic potential of charge q is ϕ= q 1 · 4πε0 r1 (SI units). (12.1) We want to express the electrostatic potential in terms of the spherical polar coordinates r and θ (the coordinate ϕ is absent because of symmetry about the z-axis). Using the law of cosines in Fig. 12.1, we obtain −1/2 q  2 r + a 2 − 2ar cos θ . (12.2) ϕ= 4πε0 741 742 Chapter 12 Legendre Functions FIGURE 12.1 Electrostatic potential. Charge q displaced from origin. Legendre Polynomials Consider the case of r > a or, more precisely, r 2 > |a 2 − 2ar cos θ |. The radical in Eq. (12.2) may be expanded in a binomial series and then rearranged in powers of (a/r). The Legendre polynomial Pn (cos θ ) (see Fig. 12.2) is defined as the coefficient of the nth power in  n ∞ a q  ϕ= . Pn (cos θ ) 4πε0 r r n=0 FIGURE 12.2 Legendre polynomials P2 (x), P3 (x), P4 (x), and P5 (x). (12.3) 12.1 Generating Function 743 Dropping the factor q/4πε0 r and using x and t instead of cos θ and a/r, respectively, we have ∞  −1/2  Pn (x)t n , g(t, x) = 1 − 2xt + t 2 = n=0 |t| < 1. (12.4) Equation (12.4) is our generating function formula. In the next section it is shown that |Pn (cos θ )| ≤ 1, which means that the series expansion (Eq. 
(12.4)) is convergent for |t| < 1.1 Indeed, the series is convergent for |t| = 1 except for |x| = 1. In physical applications Eq. (12.4) often appears in the vector form (see Section 9.7)  ∞  1  r< n 1 = Pn (cos θ ), (12.4a) |r1 − r2 | r> r> n=0 where r> = |r1 | r< = |r2 |  for |r1 | > |r2 |, (12.4b) r> = |r2 | r< = |r1 |  for |r2 | > |r1 |. (12.4c) and Using the binomial theorem (Section 5.6) and Exercise 8.1.15, we expand the generating function as (compare Eq. (12.33)) ∞  −1/2  n (2n)!  = 1 − 2xt + t 2 2xt − t 2 2n 2 2 (n!) n=0 =1+ ∞  (2n − 1)!!  n=1 (2n)!! n 2xt − t 2 . (12.5) For the first few Legendre polynomials, say, P0 , P1 , and P2 , we need the coefficients of t 0 , t 1 , and t 2 . These powers of t appear only in the terms n = 0, 1, and 2, and hence we may limit our attention to the first three terms of the infinite series: 0 1 2 0!  2!  4!  2xt − t 2 + 2 2xt − t 2 + 4 2xt − t 2 0 2 2 2 2 (0!) 2 (1!) 2 (2!)    3 2 1 2 t + O t3 . x − = 1t 0 + xt 1 + 2 2 Then, from Eq. (12.4) (and uniqueness of power series), 3 1 P2 (x) = x 2 − . 2 2 We repeat this limited development in a vector framework later in this section. P0 (x) = 1, P1 (x) = x, 1 Note that the series in Eq. (12.3) is convergent for r > a, even though the binomial expansion involved is valid only for √ r > (a 2 + 2ar)1/2 and cos θ = −1, or r > a(1 + 2 ). 744 Chapter 12 Legendre Functions In employing a general treatment, we find that the binomial expansion of the (2xt − t 2 )n factor yields the double series ∞ n −1/2   (2n)! n  n! (2x)n−k t k 1 − 2xt + t 2 = (−1)k t 2n 2 k!(n − k)! 2 (n!) n=0 = k=0 ∞  n  (−1)k n=0 k=0 (2n)! · (2x)n−k t n+k . 22n n!k!(n − k)! (12.6) From Eq. (5.64) of Section 5.4 (rearranging the order of summation), Eq. (12.6) becomes [n/2] ∞   −1/2  1 − 2xt + t 2 = (−1)k n=0 k=0 (2n − 2k)! · (2x)n−2k t n , 22n−2k k!(n − k)!(n − 2k)! (12.7) with the t n independent of the index k.2 Now, equating our two power series (Eqs. 
(12.4) and (12.7)) term by term, we have3 Pn (x) = [n/2]  (−1)k k=0 (2n − 2k)! x n−2k . 2n k!(n − k)!(n − 2k)! (12.8) Hence, for n even, Pn has only even powers of x and even parity (see Eq. (12.37)), and odd powers and odd parity for odd n. Linear Electric Multipoles Returning to the electric charge on the z-axis, we demonstrate the usefulness and power of the generating function by adding a charge −q at z = −a, as shown in Fig. 12.3. The FIGURE 12.3 Electric dipole. 2 [n/2] = n/2 for n even, (n − 1)/2 for n odd. 3 Equation (12.8) starts with x n . By changing the index, we can transform it into a series that starts with x 0 for n even and x 1 for n odd. These ascending series are given as hypergeometric functions in Eqs. (13.138) and (13.139), Section 13.4. 12.1 Generating Function potential becomes   1 q 1 ϕ= , − 4πε0 r1 r2 and by using the law of cosines, we have     2 −1/2 a a q 1−2 cos θ + ϕ= 4πε0 r r r  2 −1/2    a a cos θ + , − 1+2 r r 745 (12.9) (r > a). Clearly, the second radical is like the first, except that a has been replaced by −a. Then, using Eq. (12.4), we obtain  n    n  ∞ ∞ q a n a ϕ= Pn (cos θ )(−1) − Pn (cos θ ) 4πε0 r r r n=0 n=0    3  2q a a = P1 (cos θ ) + P3 (cos θ ) + ··· . 4πε0 r r r (12.10) The first term (and dominant term for r ≫ a) is 2aq P1 (cos θ ) · , (12.11) ϕ= 4πε0 r2 which is the electric dipole potential, and 2aq is the dipole moment (Fig. 12.3). This analysis may be extended by placing additional charges on the z-axis so that the P1 term, as well as the P0 (monopole) term, is canceled. For instance, charges of q at z = a and z = −a, −2q at z = 0 give rise to a potential whose series expansion starts with P2 (cos θ ). This is a linear electric quadrupole. Two linear quadrupoles may be placed so that the quadrupole term is canceled but the P3 , the octupole term, survives. Vector Expansion We consider the electrostatic potential produced by a distributed charge ρ(r2 ): 1 ρ(r2 ) 3 d r2 . 
(12.12a) ϕ(r1 ) = 4πε0 |r1 − r2 | This expression has already appeared in Sections 1.16 and 9.7. Taking the denominator of the integrand, using first the law of cosines and then a binomial expansion, yields (see Fig. 1.42) −1/2  1 (12.12b) = r12 − 2r1 · r2 + r22 |r1 − r2 |   2r1 · r2 r22 −1/2 1 1+ − = + , for r1 > r2 r1 r12 r12  3  r1 · r2 1 r22 3 (r1 · r2 )2 1 r2 1+ . = − + + O 2 2 4 r1 2 r1 2 r1 r1 r1 746 Chapter 12 Legendre Functions (For r1 = 1, r2 = t, and r1 · r2 = xt, Eq. (12.12b) reduces to the generating function, Eq. (12.4).) The first term in the square bracket, 1, yields a potential 1 1 ϕ0 (r1 ) = ρ(r2 ) d 3 r2 . (12.12c) 4πε0 r1 The integral is just the total charge. This part of the total potential is an electric monopole. The second term yields 1 r1 · ϕ1 (r1 ) = r2 ρ(r2 ) d 3 r2 , (12.12d) 4πε0 r13 where the integral is the dipole moment whose charge density ρ(r2 ) is weighted by a moment arm r2 . We have an electric dipole potential. For atomic or nuclear states of definite parity, ρ(r2 ) is an even function and the dipole integral is identically zero. The last two terms, both of order (r2 /r1 )2 , may be handled by using Cartesian coordinates: (r1 · r2 )2 = 3  i=1 x1i x2i 3  x1j x2j . j =1 Rearranging variables to take the x1 components outside the integral yields ϕ2 (r1 ) = 3  1 1  x x 3x2i x2j − δij r22 ρ(r2 ) d 3 r2 . 1i 1j 5 4πε0 2r1 (12.12e) i,j =1 This is the electric quadrupole term. We note that the square bracket in the integrand forms a symmetric, zero-trace tensor. A general electrostatic multipole expansion can also be developed by using Eq. (12.12a) for the potential ϕ(r1 ) and replacing 1/(4π|r1 − r2 |) by Green’s function, Eq. (9.187). This yields the potential ϕ(r1 ) as a (double) series of the spherical harmonics Ylm (θ1 , ϕ1 ) and Ylm (θ2 , ϕ2 ). Before leaving multipole fields, perhaps we should emphasize three points. 
• First, an electric (or magnetic) multipole is isolated and well defined only if all lowerorder multipoles vanish. For instance, the potential of one charge q at z = a was expanded in a series of Legendre polynomials. Although we refer to the P1 (cos θ ) term in this expansion as a dipole term, it should be remembered that this term exists only because of our choice of coordinates. We also have a monopole, P0 (cos θ ). • Second, in physical systems we do not encounter pure multipoles. As an example, the potential of the finite dipole (q at z = a, −q at z = −a) contained a P3 (cos θ ) term. These higher-order terms may be eliminated by shrinking the multipole to a point multipole, in this case keeping the product qa constant (a → 0, q → ∞) to maintain the same dipole moment. 12.1 Generating Function 747 • Third, the multipole theory is not restricted to electrical phenomena. Planetary configurations are described in terms of mass multipoles, Sections 12.3 and 12.6. Gravitational radiation depends on the time behavior of mass quadrupoles. (The gravitational radiation field is a tensor field. The radiation quanta, gravitons, carry two units of angular momentum.) It might also be noted that a multipole expansion is actually a decomposition into the irreducible representations of the rotation group (Section 4.2). Extension to Ultraspherical Polynomials The generating function used here, g(t, x), is actually a special case of a more general generating function, ∞  1 Cn(α) (x)t n . = 2 α (1 − 2xt + t ) (12.13) n=0 (α) The coefficients Cn (x) are the ultraspherical polynomials (proportional to the Gegen(1/2) bauer polynomials). For α = 1/2 this equation reduces to Eq. (12.4); that is, Cn (x) = Pn (x). The cases a = 0 and α = 1 are considered in Chapter 13 in connection with the Chebyshev polynomials. Exercises 12.1.1 Develop the electrostatic potential for the array of charges shown. This is a linear electric quadrupole (Fig. 12.4). 
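The multipole expansions in these exercises all rest on the generating function, Eq. (12.4). It is easy to spot-check numerically using the three-term recurrence $(2n+1)xP_n = (n+1)P_{n+1} + nP_{n-1}$ derived in Section 12.2 (Eq. (12.17)). A minimal plain-Python sketch (the test values $x = 0.6$, $t = 0.3$ are arbitrary points with $|t| < 1$):

```python
def legendre(n, x):
    """P_n(x) from (2k+1) x P_k = (k+1) P_{k+1} + k P_{k-1}, Eq. (12.17)."""
    if n == 0:
        return 1.0
    p_prev, p = 1.0, x          # P_0 and P_1
    for k in range(1, n):
        p_prev, p = p, ((2 * k + 1) * x * p - k * p_prev) / (k + 1)
    return p

# Partial sums of sum_n P_n(x) t^n must reproduce (1 - 2xt + t^2)^(-1/2):
x, t = 0.6, 0.3
series = sum(legendre(n, x) * t**n for n in range(60))
closed = (1.0 - 2.0 * x * t + t * t) ** -0.5
assert abs(series - closed) < 1e-12     # Eq. (12.4) holds numerically
```

Since $|P_n(x)| \le 1$ on $[-1, 1]$, the tail of the series is bounded by a geometric series in $t$, so 60 terms are far more than enough at $t = 0.3$.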
12.1.2 Calculate the electrostatic potential of the array of charges shown in Fig. 12.5. Here is an example of two equal but oppositely directed dipoles. The dipole contributions cancel. The octupole terms do not cancel. 12.1.3 Show that the electrostatic potential produced by a charge q at z = a for r < a is ∞   q  r n ϕ(r) = Pn (cos θ ). 4πε0 a a n=0 FIGURE 12.4 Linear electric quadrupole. 748 Chapter 12 Legendre Functions FIGURE 12.5 Linear electric octupole. FIGURE 12.6 12.1.4 Using E = −∇ϕ, determine the components of the electric field corresponding to the (pure) electric dipole potential ϕ(r) = 2aqP1 (cos θ ) . 4πε0 r 2 Here it is assumed that r ≫ a. ANS. Er = + 4aq cos θ , 4πε0 r 3 Eθ = + 2aq sin θ , 4πε0 r 3 Eϕ = 0. 12.1.5 A point electric dipole of strength p (1) is placed at z = a; a second point electric dipole of equal but opposite strength is at the origin. Keeping the product p (1) a constant, let a → 0. Show that this results in a point electric quadrupole. Hint. Exercise 12.2.5 (when proved) will be helpful. 12.1.6 A point charge q is in the interior of a hollow conducting sphere of radius r0 . The charge q is displaced a distance a from the center of the sphere. If the conducting sphere is grounded, show that the potential in the interior produced by q and the distributed induced charge is the same as that produced by q and its image charge q ′ . The image charge is at a distance a ′ = r02 /a from the center, collinear with q and the origin (Fig. 12.6). Hint. Calculate the electrostatic potential for a < r0 < a ′ . Show that the potential vanishes for r = r0 if we take q ′ = −qr0 /a. 12.1.7 Prove that nr Pn (cos θ ) = (−1)   ∂n 1 . n! ∂zn r n+1 Hint. Compare the Legendre polynomial expansion of the generating function (a → z, Fig. 12.1) with a Taylor series expansion of 1/r, where z dependence of r changes from z to z − z (Fig. 12.7). 12.1.8 By differentiation and direct substitution of the series form, Eq. 
(12.8), show that $P_n(x)$ satisfies the Legendre ODE. Note that there is no restriction upon $x$. We may have any $x$, $-\infty < x < \infty$, and indeed any $z$ in the entire finite complex plane.

FIGURE 12.7

12.1.9 The Chebyshev polynomials (type II) are generated by (Eq. (13.93), Section 13.3)

\[
\frac{1}{1 - 2xt + t^2} = \sum_{n=0}^{\infty} U_n(x) t^n.
\]

Using the techniques of Section 5.4 for transforming series, develop a series representation of $U_n(x)$.

ANS. $\displaystyle U_n(x) = \sum_{k=0}^{[n/2]} (-1)^k \frac{(n-k)!}{k!\,(n-2k)!}\,(2x)^{n-2k}$.

12.2 RECURRENCE RELATIONS AND SPECIAL PROPERTIES

Recurrence Relations

The Legendre polynomial generating function provides a convenient way of deriving the recurrence relations⁴ and some special properties. If our generating function (Eq. (12.4)) is differentiated with respect to $t$, we obtain

\[
\frac{\partial g(t,x)}{\partial t} = \frac{x - t}{(1 - 2xt + t^2)^{3/2}} = \sum_{n=0}^{\infty} n P_n(x)\, t^{n-1}.
\tag{12.14}
\]

By substituting Eq. (12.4) into this and rearranging terms, we have

\[
\bigl(1 - 2xt + t^2\bigr) \sum_{n=0}^{\infty} n P_n(x)\, t^{n-1} + (t - x) \sum_{n=0}^{\infty} P_n(x)\, t^n = 0.
\tag{12.15}
\]

The left-hand side is a power series in $t$. Since this power series vanishes for all values of $t$, the coefficient of each power of $t$ is equal to zero; that is, our power series is unique (Section 5.7). These coefficients are found by separating the individual summations and using distinctive summation indices:

\[
\sum_{m=0}^{\infty} m P_m(x)\, t^{m-1}
- \sum_{n=0}^{\infty} 2nx P_n(x)\, t^n
+ \sum_{s=0}^{\infty} s P_s(x)\, t^{s+1}
+ \sum_{s=0}^{\infty} P_s(x)\, t^{s+1}
- \sum_{n=0}^{\infty} x P_n(x)\, t^n = 0.
\tag{12.16}
\]

⁴ We can also apply the explicit series form Eq. (12.8) directly.

Now, letting $m = n + 1$, $s = n - 1$, we find

\[
(2n+1)\, x P_n(x) = (n+1) P_{n+1}(x) + n P_{n-1}(x), \qquad n = 1, 2, 3, \ldots.
\tag{12.17}
\]

This is another three-term recurrence relation, similar to (but not identical with) the recurrence relation for Bessel functions. With this recurrence relation we may easily construct the higher Legendre polynomials. If we take $n = 1$ and insert the easily found values of $P_0(x)$ and $P_1(x)$ (Exercise 12.1.7 or Eq. (12.8)), we obtain

\[
3x P_1(x) = 2 P_2(x) + P_0(x),
\tag{12.18}
\]

or

\[
P_2(x) = \tfrac{1}{2}\bigl(3x^2 - 1\bigr).
\tag{12.19}
\]

This process may be continued indefinitely; the first few Legendre polynomials are listed in Table 12.1. As cumbersome as it may appear at first, this technique is actually more efficient for a digital computer than is direct evaluation of the series (Eq. (12.8)). For greater stability (to avoid undue accumulation and magnification of round-off error), Eq. (12.17) is rewritten as

\[
P_{n+1}(x) = 2x P_n(x) - P_{n-1}(x) - \frac{x P_n(x) - P_{n-1}(x)}{n+1}.
\tag{12.17a}
\]

One starts with $P_0(x) = 1$, $P_1(x) = x$, and computes the numerical values of all the $P_n(x)$ for a given value of $x$ up to the desired $P_N(x)$. The values of $P_n(x)$, $0 \le n < N$, are available as a fringe benefit.

Table 12.1 Legendre Polynomials

P0(x) = 1
P1(x) = x
P2(x) = (1/2)(3x^2 − 1)
P3(x) = (1/2)(5x^3 − 3x)
P4(x) = (1/8)(35x^4 − 30x^2 + 3)
P5(x) = (1/8)(63x^5 − 70x^3 + 15x)
P6(x) = (1/16)(231x^6 − 315x^4 + 105x^2 − 5)
P7(x) = (1/16)(429x^7 − 693x^5 + 315x^3 − 35x)
P8(x) = (1/128)(6435x^8 − 12012x^6 + 6930x^4 − 1260x^2 + 35)

Differential Equations

More information about the behavior of the Legendre polynomials can be obtained if we now differentiate Eq. (12.4) with respect to $x$. This gives

\[
\frac{\partial g(t,x)}{\partial x} = \frac{t}{(1 - 2xt + t^2)^{3/2}} = \sum_{n=0}^{\infty} P_n'(x)\, t^n,
\tag{12.20}
\]

or

\[
\bigl(1 - 2xt + t^2\bigr) \sum_{n=0}^{\infty} P_n'(x)\, t^n - t \sum_{n=0}^{\infty} P_n(x)\, t^n = 0.
\tag{12.21}
\]

As before, the coefficient of each power of $t$ is set equal to zero and we obtain

\[
P_{n+1}'(x) + P_{n-1}'(x) = 2x P_n'(x) + P_n(x).
\tag{12.22}
\]

A more useful relation may be found by differentiating Eq. (12.17) with respect to $x$ and multiplying by 2. To this we add $(2n+1)$ times Eq. (12.22), canceling the $P_n'$ term. The result is

\[
P_{n+1}'(x) - P_{n-1}'(x) = (2n+1) P_n(x).
\tag{12.23}
\]

From Eqs.
(12.31) 1 1 , = (1 + 2t + t 2 )1/2 1 + t Pn (−1) = (−1)n . (12.32) For obtaining these results, we find that the generating function is more convenient than the explicit series form, Eq. (12.8). If we take x = 0 in Eq. (12.4), using the binomial expansion  −1/2 1 · 3 · · · (2n − 1) 2n 3 1 1 + t2 t + ··· , (12.33) = 1 − t 2 + t 4 + · · · + (−1)n 2 8 2n n! we have6 P2n (0) = (−1)n P2n+1 (0) = 0, 1 · 3 · · · (2n − 1) (2n − 1)!! (−1)n (2n)! = (−1)n = 2n n 2 n! (2n)!! 2 (n!)2 n = 0, 1, 2 . . . . (12.35) These results also follow from Eq. (12.8) by inspection. 6 The double factorial notation is defined in Section 8.1: (2n)!! = 2 · 4 · 6 · · · (2n), (2n − 1)!! = 1 · 3 · 5 · · · (2n − 1), (12.34) (−1)!! = 1. 12.2 Recurrence Relations 753 Parity Some of these results are special cases of the parity property of the Legendre polynomials. We refer once more to Eqs. (12.4) and (12.8). If we replace x by −x and t by −t, the generating function is unchanged. Hence −1/2  g(t, x) = g(−t, −x) = 1 − 2(−t)(−x) + (−t)2 = ∞  n=0 Pn (−x)(−t)n = ∞  Pn (x)t n . (12.36) n=0 Comparing these two series, we have Pn (−x) = (−1)n Pn (x); (12.37) that is, the polynomial functions are odd or even (with respect to x = 0, θ = π/2) according to whether the index n is odd or even. This is the parity,7 or reflection, property that plays such an important role in quantum mechanics. For central forces the index n is a measure of the orbital angular momentum, thus linking parity and orbital angular momentum. This parity property is confirmed by the series solution and for the special values tabulated in Table 12.1. It might also be noted that Eq. (12.37) may be predicted by inspection of Eq. (12.17), the recurrence relation. Specifically, if Pn−1 (x) and xPn (x) are even, then Pn+1 (x) must be even. Upper and Lower Bounds for Pn (cos θ) Finally, in addition to these results, our generating function enables us to set an upper limit on |Pn (cos θ )|. 
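Before turning to that bound, the stabilized recurrence, Eq. (12.17a), together with the special values (12.31), (12.32), (12.35) and the parity relation (12.37), gives a quick numerical self-check. A minimal Python sketch (the function names are mine, not the text's):

```python
from math import comb

def legendre_upward(N, x):
    """P_0(x)..P_N(x) by the stabilized upward recurrence, Eq. (12.17a):
    P_{n+1} = 2x P_n - P_{n-1} - [x P_n - P_{n-1}]/(n+1)."""
    values = [1.0, x]                     # P_0(x) = 1, P_1(x) = x
    for n in range(1, N):
        values.append(2*x*values[n] - values[n-1]
                      - (x*values[n] - values[n-1])/(n + 1))
    return values[:N + 1]                 # all lower orders come as a fringe benefit

def P_even_at_zero(n):
    """Closed form P_{2n}(0) = (-1)^n (2n)!/(2^{2n} (n!)^2), Eq. (12.35)."""
    return (-1)**n * comb(2*n, n) / 4**n
```

For example, `legendre_upward(8, 0.5)` reproduces the Table 12.1 entries evaluated at \(x = 0.5\), while `legendre_upward(12, 1.0)` returns all ones, in accord with Eq. (12.31).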
We have
\[ \left(1 - 2t\cos\theta + t^2\right)^{-1/2} = \left(1 - te^{i\theta}\right)^{-1/2}\left(1 - te^{-i\theta}\right)^{-1/2} = \left(1 + \tfrac{1}{2}te^{i\theta} + \tfrac{3}{8}t^2 e^{2i\theta} + \cdots\right)\left(1 + \tfrac{1}{2}te^{-i\theta} + \tfrac{3}{8}t^2 e^{-2i\theta} + \cdots\right), \qquad (12.38) \]
with all coefficients positive. Our Legendre polynomial, \(P_n(\cos\theta)\), still the coefficient of \(t^n\), may now be written as a sum of terms of the form
\[ \tfrac{1}{2}a_m\left(e^{im\theta} + e^{-im\theta}\right) = a_m \cos m\theta, \qquad (12.39\mathrm{a}) \]
with all the \(a_m\) positive and \(m\) and \(n\) both even or odd, so that
\[ P_n(\cos\theta) = \sum_{m=0 \text{ or } 1}^{n} a_m \cos m\theta. \qquad (12.39\mathrm{b}) \]

⁷ In spherical polar coordinates the inversion of the point \((r, \theta, \varphi)\) through the origin is accomplished by the transformation \(r \to r\), \(\theta \to \pi - \theta\), and \(\varphi \to \varphi \pm \pi\). Then \(\cos\theta \to \cos(\pi - \theta) = -\cos\theta\), corresponding to \(x \to -x\) (compare Exercise 2.5.8).

754 Chapter 12 Legendre Functions

This series, Eq. (12.39b), is clearly a maximum when \(\theta = 0\) and \(\cos m\theta = 1\). But for \(x = \cos\theta = 1\), Eq. (12.31) shows that \(P_n(1) = 1\). Therefore
\[ \left|P_n(\cos\theta)\right| \le P_n(1) = 1. \qquad (12.39\mathrm{c}) \]
A fringe benefit of Eq. (12.39b) is that it shows that our Legendre polynomial is a linear combination of \(\cos m\theta\). This means that the Legendre polynomials form a complete set for any functions that may be expanded by a Fourier cosine series (Section 14.1) over the interval \([0, \pi]\).

• In this section various useful properties of the Legendre polynomials are derived from the generating function, Eq. (12.4).
• The explicit series representation, Eq. (12.8), offers an alternate and sometimes superior approach.

Exercises

12.2.1 Given the series
\[ \alpha_0 + \alpha_2 \cos^2\theta + \alpha_4 \cos^4\theta + \alpha_6 \cos^6\theta = a_0 P_0 + a_2 P_2 + a_4 P_4 + a_6 P_6, \]
express the coefficients \(\alpha_i\) as a column vector \(\boldsymbol{\alpha}\) and the coefficients \(a_i\) as a column vector \(\mathbf{a}\) and determine the matrices \(\mathsf{A}\) and \(\mathsf{B}\) such that \(\mathsf{A}\boldsymbol{\alpha} = \mathbf{a}\) and \(\mathsf{B}\mathbf{a} = \boldsymbol{\alpha}\). Check your computation by showing that \(\mathsf{A}\mathsf{B} = 1\) (unit matrix). Repeat for the odd case
\[ \alpha_1 \cos\theta + \alpha_3 \cos^3\theta + \alpha_5 \cos^5\theta + \alpha_7 \cos^7\theta = a_1 P_1 + a_3 P_3 + a_5 P_5 + a_7 P_7. \]
Note.
Pn (cos θ ) and cosn θ are tabulated in terms of each other in AMS-55 (see Additional Readings of Chapter 8 for the complete reference). 12.2.2 By differentiating the generating function g(t, x) with respect to t, multiplying by 2t, and then adding g(t, x), show that ∞  1 − t2 (2n + 1)Pn (x)t n . = 2 3/2 (1 − 2tx + t ) n=0 This result is useful in calculating the charge induced on a grounded metal sphere by a point charge q. 12.2.3 (a) (b) Derive Eq. (12.27),  1 − x 2 Pn′ (x) = (n + 1)xPn (x) − (n + 1)Pn+1 (x). Write out the relation of Eq. (12.27) to preceding equations in symbolic form analogous to the symbolic forms for Eqs. (12.23) to (12.26). 12.2 Recurrence Relations 755 12.2.4 A point electric octupole may be constructed by placing a point electric quadrupole (pole strength p (2) in the z-direction) at z = a and an equal but opposite point electric quadrupole at z = 0 and then letting a → 0, subject to p (2) a = constant. Find the electrostatic potential corresponding to a point electric octupole. Show from the construction of the point electric octupole that the corresponding potential may be obtained by differentiating the point quadrupole potential. 12.2.5 Operating in spherical polar coordinates, show that  Pn+1 (cos θ ) ∂ Pn (cos θ ) = −(n + 1) . n+1 ∂z r r n+2 This is the key step in the mathematical argument that the derivative of one multipole leads to the next higher multipole. Hint. Compare Exercise 2.5.12. 12.2.6 From PL (cos θ ) = show that −1/2  1 ∂L   1 − 2t cos θ + t 2 t=0 L L! ∂t PL (1) = 1, 12.2.7 PL (−1) = (−1)L . Prove that Pn′ (1) =  1 d Pn (x)x=1 = n(n + 1). dx 2 12.2.8 Show that Pn (cos θ ) = (−1)n Pn (− cos θ ) by use of the recurrence relation relating Pn , Pn+1 , and Pn−1 and your knowledge of P0 and P1 . 12.2.9 From Eq. (12.38) write out the coefficient of t 2 in terms of cos nθ , n ≤ 2. This coefficient is P2 (cos θ ). 
12.2.10 Write a program that will generate the coefficients as in the polynomial form of the Legendre polynomial Pn (x) = n  as x s . s=0 12.2.11 (a) Calculate P10 (x) over the range [0, 1] and plot your results. (b) Calculate precise (at least to five decimal places) values of the five positive roots of P10 (x). Compare your values with the values listed in AMS-55, Table 25.4. (For the complete reference, see Additional Readings of Chapter 8.) 12.2.12 (a) Calculate the largest root of Pn (x) for n = 2(1)50. (b) Develop an approximation for the largest root from the hypergeometric representation of Pn (x) (Section 13.4) and compare your values from part (a) with your hypergeometric approximation. Compare also with the values listed in AMS-55, Table 25.4. (For the complete reference, see Additional Readings of Chapter 8.) 756 Chapter 12 Legendre Functions 12.2.13 12.2.14 12.3 From Exercise 12.2.1 and AMS-55, Table 22.9, develop the 6 × 6 matrix B that will transforma series of even-order Legendre polynomials through P10 (x) into a power series 5n=0 α2n x 2n . (b) Calculate A as B−1 . Check the elements of A against the values listed in AMS-55, Table 22.9. (For the complete reference, see Additional Readings of Chapter 8.) (c) By using matrix multiplication, transform some even power series 5n=0 α2n x 2n into a Legendre series. n Write a subroutine that will transform a finite power series N n=0 an x into a Legendre N series n=0 bn Pn (x). Use the recurrence relation, Eq. (12.17), and follow the technique outlined in Section 13.3 for a Chebyshev series. (a) ORTHOGONALITY Legendre’s ODE (12.28) may be written in the form d  1 − x 2 Pn′ (x) + n(n + 1)Pn (x) = 0, (12.40) dx showing clearly that it is self-adjoint. Subject to satisfying certain boundary conditions, then, it is known that the solutions Pn (x) will be orthogonal. Upon comparing Eq. (12.40) with Eqs. 
(10.6) and (10.8) we see that the weight function is \(w(x) = 1\), \(\mathcal{L} = (d/dx)(1 - x^2)(d/dx)\), \(p(x) = 1 - x^2\), and the eigenvalue is \(\lambda = n(n+1)\). The integration limits on \(x\) are \(\pm 1\), where \(p(\pm 1) = 0\). Then for \(m \ne n\), Eq. (10.34) becomes
\[ \int_{-1}^{1} P_n(x)P_m(x)\,dx = 0,{}^{8} \qquad (12.41) \]
\[ \int_{0}^{\pi} P_n(\cos\theta)P_m(\cos\theta)\sin\theta\,d\theta = 0, \qquad (12.42) \]
showing that \(P_n(x)\) and \(P_m(x)\) are orthogonal for the interval \([-1, 1]\). This orthogonality may also be demonstrated by using Rodrigues' definition of \(P_n(x)\) (compare Section 12.4, Exercise 12.4.2).

We shall need to evaluate the integral (Eq. (12.41)) when \(n = m\). Certainly it is no longer zero. From our generating function,
\[ \left(1 - 2tx + t^2\right)^{-1} = \left[\sum_{n=0}^{\infty} P_n(x)\,t^n\right]^2. \qquad (12.43) \]
Integrating from \(x = -1\) to \(x = +1\), we have
\[ \int_{-1}^{1} \frac{dx}{1 - 2tx + t^2} = \sum_{n=0}^{\infty} t^{2n} \int_{-1}^{1} \left[P_n(x)\right]^2 dx; \qquad (12.44) \]
the cross terms in the series vanish by means of Eq. (12.42). Using \(y = 1 - 2tx + t^2\), \(dy = -2t\,dx\), we obtain
\[ \int_{-1}^{1} \frac{dx}{1 - 2tx + t^2} = \frac{1}{2t}\int_{(1-t)^2}^{(1+t)^2} \frac{dy}{y} = \frac{1}{t}\ln\left(\frac{1+t}{1-t}\right). \qquad (12.45) \]
Expanding this in a power series (Exercise 5.4.1) gives us
\[ \frac{1}{t}\ln\left(\frac{1+t}{1-t}\right) = 2\sum_{n=0}^{\infty} \frac{t^{2n}}{2n+1}. \qquad (12.46) \]
Comparing power-series coefficients of Eqs. (12.44) and (12.46), we must have
\[ \int_{-1}^{1} \left[P_n(x)\right]^2 dx = \frac{2}{2n+1}. \qquad (12.47) \]
Combining Eq. (12.42) with Eq. (12.47), we have the orthonormality condition
\[ \int_{-1}^{1} P_m(x)P_n(x)\,dx = \frac{2\,\delta_{mn}}{2n+1}. \qquad (12.48) \]
We shall return to this result in Section 12.6 when we construct the orthonormal spherical harmonics.

⁸ In Section 10.4 such integrals are interpreted as inner products in a linear vector (function) space. Alternate notations are
\[ \int_{-1}^{1} \left[P_n(x)\right]^* P_m(x)\,dx \equiv \langle P_n | P_m \rangle \equiv (P_n, P_m). \]
The \(\langle\,\rangle\) form, popularized by Dirac, is common in the physics literature. The \((\,)\) form is more common in the mathematics literature.

Expansion of Functions, Legendre Series

In addition to orthogonality, the Sturm–Liouville theory implies that the Legendre polynomials form a complete set.
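The orthonormality condition, Eq. (12.48), is easy to confirm by brute-force quadrature; a sketch using the midpoint rule (the step count is an arbitrary choice of mine, and Bonnet's recurrence, Eq. (12.17), does the evaluation):

```python
def P(n, x):
    """P_n(x) by Bonnet's recurrence, Eq. (12.17)."""
    p_prev, p_cur = 1.0, x
    if n == 0:
        return p_prev
    for k in range(1, n):
        p_prev, p_cur = p_cur, ((2*k + 1)*x*p_cur - k*p_prev)/(k + 1)
    return p_cur

def overlap(m, n, steps=20000):
    """Midpoint-rule estimate of the integral of P_m P_n over [-1, 1], cf. Eq. (12.48)."""
    h = 2.0/steps
    return sum(P(m, -1.0 + (i + 0.5)*h)*P(n, -1.0 + (i + 0.5)*h)
               for i in range(steps))*h
```

The estimate reproduces \(2\delta_{mn}/(2n+1)\) to roughly the \(O(h^2)\) accuracy of the midpoint rule.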
Let us assume, then, that the series ∞  n=0 an Pn (x) = f (x) (12.49) converges in the mean (Section 10.4) in the interval [−1, 1]. This demands that f (x) and f ′ (x) be at least sectionally continuous in this interval. The coefficients an are found by multiplying the series by Pm (x) and integrating term by term. Using the orthogonality property expressed in Eqs. (12.42) and (12.48), we obtain 1 2 am = f (x)Pm (x) dx. (12.50) 2m + 1 −1 We replace the variable of integration x by t and the index m by n. Then, substituting into Eq. (12.49), we have  1  ∞  2n + 1 f (x) = f (t)Pn (t) dt Pn (x). (12.51) 2 −1 n=0 This expansion in a series of Legendre polynomials is usually referred to as a Legendre series.9 Its properties are quite similar to the more familiar Fourier series (Chapter 14). In 9 Note that Eq. (12.50) gives a as a definite integral, that is, a number for a given f (x). m 758 Chapter 12 Legendre Functions particular, we can use the orthogonality property (Eq. (12.48)) to show that the series is unique. On a more abstract (and more powerful) level, Eq. (12.51) gives the representation of f (x) in the vector space of Legendre polynomials (a Hilbert space, Section 10.4). From the viewpoint of integral transforms (Chapter 15), Eq. (12.50) may be considered a finite Legendre transform of f (x). Equation (12.51) is then the inverse transform. It may also be interpreted in terms of the projection operators of quantum theory. We may take Pm in  2m + 1 1 Pm f ≡ Pm (x) Pm (t) f (t) dt 2 −1 as an (integral) operator, ready to operate on f (t). (The f (t) would go in the square bracket as a factor in the integrand.) Then, from Eq. (12.50), Pm f = am Pm (x).10 The operator Pm projects out the mth component of the function f . Equation (12.3), which leads directly to the generating function definition of Legendre polynomials, is a Legendre expansion of 1/r1 . This Legendre expansion of 1/r1 or 1/r12 appears in several exercises of Section 12.8. 
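Eq. (12.50) translates directly into a quadrature for the coefficients. A sketch (midpoint rule again, with an ad hoc step count; the test case \(x^3 = 0.6P_1 + 0.4P_3\) follows from Table 12.1):

```python
def P(n, x):
    """P_n(x) by Bonnet's recurrence, Eq. (12.17)."""
    p_prev, p_cur = 1.0, x
    if n == 0:
        return p_prev
    for k in range(1, n):
        p_prev, p_cur = p_cur, ((2*k + 1)*x*p_cur - k*p_prev)/(k + 1)
    return p_cur

def legendre_coefficient(f, n, steps=20000):
    """a_n = (2n+1)/2 times the integral of f(x) P_n(x) over [-1, 1], Eq. (12.50)."""
    h = 2.0/steps
    total = sum(f(-1.0 + (i + 0.5)*h)*P(n, -1.0 + (i + 0.5)*h) for i in range(steps))
    return 0.5*(2*n + 1)*total*h
```

For a polynomial \(f\) the nonzero coefficients terminate at \(n = \deg f\), which makes the truncation of the Legendre series exact.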
Going beyond a Coulomb field, the 1/r12 is often replaced by a potential V (|r1 − r2 |), and the solution of the problem is again effected by a Legendre expansion. The Legendre series, Eq. (12.49), has been treated as a known function f (x) that we arbitrarily chose to expand in a series of Legendre polynomials. Sometimes the origin and nature of the Legendre series are different. In the next examples we consider unknown functions we know can be represented by a Legendre series because of the differential equation the unknown functions satisfy. As before, the problem is to determine the unknown coefficients in the series expansion. Here, however, the coefficients are not found by Eq. (12.50). Rather, they are determined by demanding that the Legendre series match a known solution at a boundary. These are boundary value problems. Example 12.3.1 EARTH’S GRAVITATIONAL FIELD An example of a Legendre series is provided by the description of the Earth’s gravitational potential U (for exterior points), neglecting azimuthal effects. With R = equatorial radius = 6378.1 ± 0.1 km GM = 62.494 ± 0.001 km2 /s2 , R we write  n+1  ∞ GM R  R U (r, θ ) = Pn (cos θ ) , an − R r r n=2 10 The dependent variables are arbitrary. Here x came from the x in P . m (12.52) 12.3 Orthogonality 759 a Legendre series. Artificial satellite motions have shown that a2 = (1,082,635 ± 11) × 10−9 , a3 = (−2,531 ± 7) × 10−9 , a4 = (−1,600 ± 12) × 10−9 . This is the famous pear-shaped deformation of the Earth. Other coefficients have been computed through n = 20. Note that P1 is omitted because the origin from which r is measured is the Earth’s center of mass (P1 would represent a displacement). More recent satellite data permit a determination of the longitudinal dependence of the Earth’s gravitational field. Such dependence may be described by a Laplace series (Section 12.6).  
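The truncated series of Example 12.3.1 is straightforward to evaluate; a sketch keeping only the quoted \(a_2, a_3, a_4\) (truncation at \(n = 4\) is my choice, and only the central values are used, uncertainties dropped):

```python
from math import cos

R = 6378.1            # equatorial radius in km (value quoted in the example)
GM_OVER_R = 62.494    # GM/R in km^2/s^2 (value quoted in the example)
A_N = {2: 1082635e-9, 3: -2531e-9, 4: -1600e-9}   # central values of a_2, a_3, a_4

def P(n, x):
    """P_n(x) by Bonnet's recurrence, Eq. (12.17)."""
    p_prev, p_cur = 1.0, x
    if n == 0:
        return p_prev
    for k in range(1, n):
        p_prev, p_cur = p_cur, ((2*k + 1)*x*p_cur - k*p_prev)/(k + 1)
    return p_cur

def U(r, theta):
    """Exterior potential of Eq. (12.52), truncated at n = 4, in km^2/s^2."""
    correction = sum(a*(R/r)**(n + 1)*P(n, cos(theta)) for n, a in A_N.items())
    return GM_OVER_R*(R/r - correction)
```

A quick sanity check: the correction terms are a small (sub-percent) perturbation on the monopole \(GM/r\), and they fall off faster with distance.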
Example 12.3.2 SPHERE IN A UNIFORM FIELD Another illustration of the use of Legendre polynomials is provided by the problem of a neutral conducting sphere (radius r0 ) placed in a (previously) uniform electric field (Fig. 12.8). The problem is to find the new, perturbed, electrostatic potential. If we call the electrostatic potential11 V , it satisfies ∇ 2 V = 0, (12.53) Laplace’s equation. We select spherical polar coordinates because of the spherical shape of the conductor. (This will simplify the application of the boundary condition at the surface of the conductor.) Separating variables and glancing at Table 9.2, we can write the unknown potential V (r, θ ) in the region outside the sphere as a linear combination of solutions: V (r, θ ) = ∞  n=0 an r n Pn (cos θ ) + ∞  n=0 bn Pn (cos θ ) . r n+1 (12.54) FIGURE 12.8 Conducting sphere in a uniform field. 11 It should be emphasized that this is not a presentation of a Legendre-series expansion of a known V (cos θ). Here we are back to boundary value problems of PDEs. 760 Chapter 12 Legendre Functions No ϕ-dependence appears because of the axial symmetry of our problem. (The center of the conducting sphere is taken as the origin and the z-axis is oriented parallel to the original uniform field.) It might be noted here that n is an integer, because only for integral n is the θ dependence well behaved at cos θ = ±1. For nonintegral n the solutions of Legendre’s equation diverge at the ends of the interval [−1, 1], the poles θ = 0, π of the sphere (compare Example 5.2.4 and Exercises 5.2.15 and 9.5.5). It is for this same reason that the second solution of Legendre’s equation, Qn , is also excluded. Now we turn to our (Dirichlet) boundary conditions to determine the unknown an and bn of our series solution, Eq. (12.54). If the original, unperturbed electrostatic field is E0 , we require, as one boundary condition, V (r → ∞) = −E0 z = −E0 r cos θ = −E0 rP1 (cos θ ). 
(12.55) Since our Legendre series is unique, we may equate coefficients of Pn (cos θ ) in Eq. (12.54) (r → ∞) and Eq. (12.55) to obtain an = 0, n>1 and n = 0, a1 = −E0 . (12.56) If an = 0 for n > 1, these terms would dominate at large r and the boundary condition (Eq. (12.55)) could not be satisfied. As a second boundary condition, we may choose the conducting sphere and the plane θ = π/2 to be at zero potential, which means that Eq. (12.54) now becomes   ∞  Pn (cos θ ) b1 b0 = 0. (12.57) bn + 2 − E0 r0 P1 (cos θ ) + V (r = r0 ) = r0 r0 r0n+1 n=2 In order that this may hold for all values of θ , each coefficient of Pn (cos θ ) must vanish.12 Hence b0 = 0, 13 bn = 0, n ≥ 2, (12.58) whereas b1 = E0 r03 . (12.59) The electrostatic potential (outside the sphere) is then V = −E0 rP1 (cos θ ) +   E0 r03 r03 . P (cos θ ) = −E rP (cos θ ) 1 − 1 0 1 r2 r3 (12.60) In Section 1.16 it was shown that a solution of Laplace’s equation that satisfied the boundary conditions over the entire boundary was unique. The electrostatic potential V , as given by Eq. (12.60), is a solution of Laplace’s equation. It satisfies our boundary conditions and therefore is the solution of Laplace’s equation for this problem. 12 Again, this is equivalent to saying that a series expansion in Legendre polynomials (or any complete orthogonal set) is unique. 13 The coefficient of P is b /r . We set b = 0 because there is no net charge on the sphere. If there is a net charge q, then 0 0 0 0 b0 = 0. 12.3 Orthogonality 761 It may further be shown (Exercise 12.3.13) that there is an induced surface charge density  ∂V  σ = −ε0 = 3ε0 E0 cos θ (12.61) ∂r r=r0 on the surface of the sphere and an induced electric dipole moment (Exercise 12.3.13) P = 4πr03 ε0 E0 . (12.62)  Example 12.3.3 ELECTROSTATIC POTENTIAL OF A RING OF CHARGE As a further example, consider the electrostatic potential produced by a conducting ring carrying a total electric charge q (Fig. 12.9). 
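Before developing the ring potential, the closed form (12.60) of Example 12.3.2, together with the surface charge density (12.61), can be spot-checked numerically; a sketch with \(E_0 = r_0 = 1\) as my normalization:

```python
from math import cos

def V(r, theta, E0=1.0, r0=1.0):
    """Potential outside the grounded sphere in a uniform field, Eq. (12.60)."""
    return -E0*r*cos(theta)*(1.0 - (r0/r)**3)

def sigma_over_eps0(theta, E0=1.0):
    """Induced surface charge density sigma/eps0 = 3 E0 cos(theta), Eq. (12.61)."""
    return 3.0*E0*cos(theta)
```

The checks worth making: \(V\) vanishes on the sphere, reduces to the uniform-field potential \(-E_0 r\cos\theta\) far away, and \(-\partial V/\partial r\) at \(r = r_0\) reproduces \(\sigma/\varepsilon_0\).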
From electrostatics (and Section 1.14) the potential ψ satisfies Laplace’s equation. Separating variables in spherical polar coordinates (compare Table 9.2), we obtain ψ(r, θ ) = ∞  n=0 cn an r n+1 Pn (cos θ ), r > a. (12.63a) Here a is the radius of the ring that is assumed to be in the θ = π/2 plane. There is no ϕ (azimuthal) dependence because of the cylindrical symmetry of the system. The terms with positive exponent in the radial dependence have been rejected because the potential must have an asymptotic behavior, ψ∼ q 1 · , 4πε0 r r ≫ a. (12.63b) The problem is to determine the coefficients cn in Eq. (12.63a). This may be done by evaluating ψ(r, θ ) at θ = 0, r = z, and comparing with an independent calculation of the FIGURE 12.9 Charged, conducting ring. 762 Chapter 12 Legendre Functions potential from Coulomb’s law. In effect, we are using a boundary condition along the zaxis. From Coulomb’s law (with all charge equidistant),  1 q θ =0 · 2 , ψ(r, θ ) = 2 1/2 r = z, 4πε0 (z + a )  2s ∞ (2s)! a q  , z > a. (12.63c) (−1)s 2s = 4πε0 z 2 (s!)2 z s=0 The last step uses the result of Exercise 8.1.15. Now, Eq. (12.63a) evaluated at θ = 0, r = z (with Pn (1) = 1), yields ψ(r, θ ) = ∞  cn n=0 an , zn+1 r = z. (12.63d) Comparing Eqs. (12.63c) and (12.63d), we get cn = 0 for n odd. Setting n = 2s, we have c2s = (2s)! q (−1)s 2s , 4πε0 2 (s!)2 and our electrostatic potential ψ(r, θ ) is given by  2s ∞ a (2s)! q  ψ(r, θ ) = P2s (cos θ ), (−1)s 2s 4πε0 r 2 (s!)2 r (12.63e) r > a. (12.63f) s=0 The magnetic analog of this problem appears in Example 12.5.3.  Exercises 12.3.1 You have constructed a set of orthogonal functions by the Gram–Schmidt process (Section 10.3), taking un (x) = x n , n = 0, 1, 2, . . . , in increasing order with w(x) = 1 and an interval −1 ≤ x ≤ 1. Prove that the nth such function constructed is proportional to Pn (x). Hint. Use mathematical induction. 
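The on-axis series (12.63c) of Example 12.3.3 can be checked against the exact Coulomb expression \(q/4\pi\varepsilon_0(z^2 + a^2)^{-1/2}\); a sketch in units of \(q/4\pi\varepsilon_0\) (the default term count is an arbitrary choice):

```python
from math import factorial

def psi_on_axis(z, a, terms=40):
    """Ring potential on the axis (theta = 0), Eq. (12.63c), in units of
    q/(4 pi eps0); the series requires z > a."""
    return sum((-1)**s*(factorial(2*s)//factorial(s)**2)/4**s * a**(2*s)/z**(2*s + 1)
               for s in range(terms))
```

The binomial-style coefficient \((2s)!/2^{2s}(s!)^2\) is computed as the exact integer \(\binom{2s}{s}\) divided by \(4^s\) to avoid premature overflow.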
12.3.2 Expand the Dirac delta function in a series of Legendre polynomials using the interval −1 ≤ x ≤ 1. 12.3.3 Verify the Dirac delta function expansions δ(1 − x) = δ(1 + x) = ∞  2n + 1 n=0 2 Pn (x) ∞  2n + 1 (−1)n Pn (x). 2 n=0 These expressions appear in a resolution of the Rayleigh plane-wave expansion (Exercise 12.4.7) into incoming and outgoing spherical waves. Note. Assume that the entire Dirac delta function is covered when integrating over [−1, 1]. 12.3 Orthogonality 12.3.4 763 Neutrons (mass 1) are being scattered by a nucleus of mass A (A > 1). In the centerof-mass system the scattering is isotropic. Then, in the laboratory system the average of the cosine of the angle of deflection of the neutron is 1 π A cos θ + 1 cos ψ = sin θ dθ. 2 2 0 (A + 2A cos θ + 1)1/2 Show, by expansion of the denominator, that cos ψ = 2/3A. 12.3.5 12.3.6 A particular function f (x) defined over the interval [−1, 1] is expanded in a Legendre series over this same interval. Show that the expansion is unique. A function f (x) is expanded in a Legendre series f (x) = ∞ n=0 an Pn (x). Show that 1 ∞   2 2an2 . f (x) dx = 2n + 1 −1 n=0 This is the Legendre form of the Fourier series Parseval identity, Exercise 14.4.2. It also illustrates Bessel’s inequality, Eq. (10.72), becoming an equality for a complete set. 12.3.7 12.3.8 Derive the recurrence relation  1 − x 2 Pn′ (x) = nPn−1 (x) − nxPn (x) from the Legendre polynomial generating function. 1 Evaluate 0 Pn (x) dx. ANS. n = 2s; n = 2s + 1; 1 for s = 0, 0 for s > 0, P2s (0)/(2s + 2) = (−1)s (2s − 1)!!/1(2s + 2)!! Hint. Use a recurrence relation to replace Pn (x) by derivatives and then integrate by inspection. Alternatively, you can integrate the generating function. 12.3.9 (a) For f (x) =  +1, −1, 0 a, (b) r < a. 12.3.17 As an extension of Example 12.3.3, find the potential ψ(r, θ ) produced by a charged conducting disk, Fig. 12.10, for r > a, the radius of the disk. 
The charge density σ (on each side of the disk) is q σ (ρ) = , ρ2 = x2 + y2. 2 4πa(a − ρ 2 )1/2 Hint. The definite integral you get can be evaluated as a beta function, Section 8.4. For more details see Section 5.03 of Smythe in Additional Readings.  2l ∞ a 1 q  l P2l (cos θ ). (−1) ANS. ψ(r, θ ) = 4πε0 r 2l + 1 r l=0 12.3.18 From the result of Exercise 12.3.17 calculate the potential of the disk. Since you are violating the condition r > a, justify your calculation. Hint. You may run into the series given in Exercise 5.2.9. 12.3.19 The hemisphere defined by r = a, 0 ≤ θ < π/2, has an electrostatic potential +V0 . The hemisphere r = a, π/2 < θ ≤ π has an electrostatic potential −V0 . Show that the potential at interior points is   ∞  4n + 3 r 2n+1 V = V0 P2n (0)P2n+1 (cos θ ) 2n + 2 a n=0  2n+1 ∞  n (4n + 3)(2n − 1)!! r P2n+1 (cos θ ). (−1) = V0 (2n + 2)!! a n=0 Hint. You need Exercise 12.3.8. 12.3.20 A conducting sphere of radius a is divided into two electrically separate hemispheres by a thin insulating barrier at its equator. The top hemisphere is maintained at a potential V0 , the bottom hemisphere at −V0 . 766 Chapter 12 Legendre Functions (a) Show that the electrostatic potential exterior to the two hemispheres is   ∞  (2s − 1)!! a 2s+2 s (−1) (4s + 3) P2s+1 (cos θ ). V (r, θ ) = V0 (2s + 2)!! r s=0 (b) Calculate the electric charge density σ on the outside surface. Note that your series diverges at cos θ = ±1, as you expect from the infinite capacitance of this system (zero thickness for the insulating barrier).  ∂V  ANS. σ = ε0 En = −ε0 ∂r r=a = ε0 V0 12.3.21 ∞  s=0 (−1)s (4s + 3) (2s − 1)!! P2s+1 (cos θ ). (2s)!! √ In the notation of Section 10.4, ϕs (x) = (2s + 1)/2Ps (x), a Legendre polynomial is renormalized to unity. Explain how |ϕs ϕs | acts as a projection operator. In particular, show that if |f  = n an′ |ϕn , then |ϕs ϕs |f  = as′ |ϕs . 12.3.22 Expand x 8 as a Legendre series. Determine the Legendre coefficients from Eq. 
(12.50), 2m + 1 1 8 x Pm (x) dx. am = 2 −1 Check your values against AMS-55, Table 22.9. (For the complete reference, see Additional Readings in Chapter 8). This illustrates the expansion of a simple function f (x). Actually if f (x) is expressed as a power series, the technique of Exercise 12.2.14 is both faster and more accurate. Hint. Gaussian quadrature can be used to evaluate the integral. 12.3.23 Calculate and tabulate the electrostatic potential created by a ring of charge, Example 12.3.3, for r/a = 1.5(0.5)5.0 and θ = 0◦ (15◦ )90◦ . Carry terms through P22 (cos θ ). Note. The convergence of your series will be slow for r/a = 1.5. Truncating the series at P22 limits you to about four-significant-figure accuracy. Check value. For r/a = 2.5 and θ = 60◦ , ψ = 0.40272(q/4πε0 r). 12.3.24 Calculate and tabulate the electrostatic potential created by a charged disk, Exercise 12.3.17, for r/a = 1.5(0.5)5.0 and θ = 0◦ (15◦ )90◦ . Carry terms through P22 (cos θ ). Check value. For r/a = 2.0 and θ = 15◦ , ψ = 0.46638(q/4πε0 r). 12.3.25 Calculate the first five (nonvanishing) coefficients in the Legendre series expansion of f (x) = 1 − |x| using Eq. (12.51) — numerical integration. Actually these coefficients can be obtained in closed form. Compare your coefficients with those obtained from Exercise 13.3.28. ANS. a0 = 0.5000, a2 = −0.6250, a4 = 0.1875, a6 = −0.1016, a8 = 0.0664. 12.4 Alternate Definitions 12.3.26 767 Calculate and tabulate the exterior electrostatic potential created by the two charged hemispheres of Exercise 12.3.20, for r/a = 1.5(0.5)5.0 and θ = 0◦ (15◦ )90◦ . Carry terms through P23 (cos θ ). Check value. For r/a = 2.0 and θ = 45◦ , V = 0.27066V0 . 12.3.27 12.4 Given f (x) = 2.0, |x| < 0.5; f (x) = 0, 0.5 < |x| < 1.0, expand f (x) in a Legendre series and calculate the coefficients an through a80 (analytically). (b) Evaluate 80 n=0 an Pn (x) for x = 0.400(0.005)0.600. Plot your results. Note. 
This illustrates the Gibbs phenomenon of Section 14.5 and the danger of trying to calculate with a series expansion in the vicinity of a discontinuity. (a) ALTERNATE DEFINITIONS OF LEGENDRE POLYNOMIALS Rodrigues’ Formula The series form of the Legendre polynomials (Eq. (12.8)) of Section 12.1 may be transformed as follows. From Eq. (12.8), Pn (x) = [n/2]  (−1)r r=0 (2n − 2r)! 2n r!(n − 2r)!(n − r)! x n−2r . (12.64) For n an integer, Pn (x) = [n/2]  (−1)r r=0  n 1 d x 2n−2r 2n r!(n − r)! dx  n  n d 1 (−1)r n! 2n−2r x = n . 2 n! dx r!(n − r)! (12.64a) r=0 Note the extension of the upper limit. The reader is asked to show in Exercise 12.4.1 that the additional terms [n/2] + 1 to n in the summation contribute nothing. However, the effect of these extra terms is to permit the replacement of the new summation by (x 2 − 1)n (binomial theorem once again) to obtain  n  2 n d 1 x −1 . Pn (x) = n 2 n! dx (12.65) This is Rodrigues’ formula. It is useful in proving many of the properties of the Legendre polynomials, such as orthogonality. A related application is seen in Exercise 12.4.3. The Rodrigues definition is extended in Section 12.5 to define the associated Legendre functions. In Section 12.7 it is used to identify the orbital angular momentum eigenfunctions. 768 Chapter 12 Legendre Functions Schlaefli Integral Rodrigues’ formula provides a means of developing an integral representation of Pn (z). Using Cauchy’s integral formula (Section 6.4)  1 f (t) f (z) = dt (12.66) 2πi t −z with we have n  f (z) = z2 − 1 ,  2 n 1 z −1 = 2πi  (t 2 − 1)n dt. t −z Differentiating n times with respect to z and multiplying by 1/2n n! gives  n 2−n 1 dn  2 (t 2 − 1)n z − 1 = Pn (z) = n dt, 2 n! dzn 2πi (t − z)n+1 (12.67) (12.68) (12.69) with the contour enclosing the point t = z. This is the Schlaefli integral. Margenau and Murphy14 use this to derive the recurrence relations we obtained from the generating function. 
The Schlaefli integral may readily be shown to satisfy Legendre’s equation by differentiation and direct substitution (Fig. 12.11). We obtain   d 2 Pn  dPn n+1 d (t 2 − 1)n+1 dt. (12.70) + n(n + 1)P − 2z = 1 − z2 n dz 2n 2πi dt (t − z)n+2 dz2 For integral n our function (t 2 − 1)n+1 /(t − z)n+2 is single-valued, and the integral around the closed path vanishes. The Schlaefli integral may also be used to define Pν (z) for nonintegral ν integrating around the points t = z, t = 1, but not crossing the cut line −1 to −∞. We could equally well encircle the points t = z and t = −1, but this would lead to FIGURE 12.11 Schlaefli integral contour. 14 H. Margenau and G. M. Murphy, The Mathematics of Physics and Chemistry, 2nd ed., Princeton, NJ: Van Nostrand (1956), Section 3.5. 12.4 Alternate Definitions 769 nothing new. A contour about t = +1 and t = −1 will lead to a second solution, Qν (z), Section 12.10. Exercises 12.4.1 Show that each term in the summation  n n  d (−1)r n! 2n−2r x dx r!(n − r)! r=[n/2]+1 vanishes (r and n integral). 12.4.2 Using Rodrigues’ formula, show that the Pn (x) are orthogonal and that 1  2 2 . Pn (x) dx = 2n + 1 −1 12.4.3 Hint. Use Rodrigues’ formula and integrate by parts. 1 Show that −1 x m Pn (x)dx = 0 when m < n. Hint. Use Rodrigues’ formula or expand x m in Legendre polynomials. 12.4.4 Show that 1 −1 x n Pn (x) dx = 2n+1 n!n! . (2n + 1)! Note. You are expected to use Rodrigues’ formula and integrate by parts, but also see if you can get the result from Eq. (12.8) by inspection. 12.4.5 Show that 1 −1 12.4.6 22n+1 (2r)!(r + n!) , (2r + 2n + 1)!(r − n)! r ≥ n. As a generalization of Exercises 12.4.4 and 12.4.5, show that the Legendre expansions of x s are (a) (b) 12.4.7 x 2r P2n (x) dx = x 2r = r  22n (4n + 1)(2r)!(r + n)! n=0 x 2r+1 = (2r + 2n + 1)!(r − n)! P2n (x), r  22n+1 (4n + 3)(2r + 1)!(r + n + 1)! n=0 (2r + 2n + 3)!(r − n)! s = 2r, P2n+1 (x), s = 2r + 1. 
A plane wave may be expanded in a series of spherical waves by the Rayleigh equation, eikr cos γ = Show that an = i n (2n + 1). ∞  n=0 an jn (kr)Pn (cos γ ). 770 Chapter 12 Legendre Functions Hint. 1. Use the orthogonality of the Pn to solve for an jn (kr). 2. Differentiate n times with respect to (kr) and set r = 0 to eliminate the rdependence. 3. Evaluate the remaining integral by Exercise 12.4.4. Note. This problem may also be treated by noting that both sides of the equation satisfy the Heemholtz equation. The equality can be established by showing that the solutions have the same behavior at the origin and also behave alike at large distances. A “by inspection” type of solution is developed in Section 9.7 using Green’s functions. 12.4.8 Verify the Rayleigh equation of Exercise 12.4.7 by starting with the following steps: (a) Differentiate with respect to (kr) to establish   an jn′ (kr)Pn (cos γ ) = i an jn (kr) cos γ Pn (cos γ ). n (b) n Use a recurrence relation to replace cos γ Pn (cos γ ) by a linear combination of Pn−1 and Pn+1 . (c) Use a recurrence relation to replace jn′ by a linear combination of jn−1 and jn+1 . 12.4.9 From Exercise 12.4.7 show that jn (kr) = 1 2i n 1 −1 eikrµ Pn (µ) dµ. This means that (apart from a constant factor) the spherical Bessel function jn (kr) is the Fourier transform of the Legendre polynomial Pn (µ). 12.4.10 The Legendre polynomials and the spherical Bessel functions are related by π 1 eiz cos θ Pn (cos θ ) sin θ dθ, n = 0, 1, 2, . . . . jn (z) = (−i)n 2 0 Verify this relation by transforming the right-hand side into π zn cos(z cos θ ) sin2n+1 θ dθ 2n+1 n! 0 and using Exercise 11.7.8. 12.4.11 By direct evaluation of the Schlaefli integral show that Pn (1) = 1. 12.4.12 Explain why the contour of the Schlaefli integral, Eq. (12.69), is chosen to enclose the points t = z and t = 1 when n → ν, not an integer. 
12.4.13 In numerical work (for example, the Gauss–Legendre quadrature) it is useful to establish that Pn (x) has n real zeros in the interior of [−1, 1]. Show that this is so. Hint. Rolle’s theorem shows that the first derivative of (x 2 − 1)2n has one zero in the interior of [−1, 1]. Extend this argument to the second, third, and ultimately the nth derivative. 12.5 Associated Legendre Functions 12.5 771 ASSOCIATED LEGENDRE FUNCTIONS When Helmholtz’s equation is separated in spherical polar coordinates (Section 9.3), one of the separated ODEs is the associated Legendre equation    dPnm (cos θ ) m2 1 d sin θ + n(n + 1) − 2 Pnm (cos θ ) = 0. sin θ dθ dθ sin θ (12.71) With x = cos θ , this becomes   2 m m2 d m 2 d P m (x) = 0. 1−x P (x) − 2x Pn (x) + n(n + 1) − dx dx 2 n 1 − x2 n (12.72) If the azimuthal separation constant m2 = 0, we have Legendre’s equation, Eq. (12.28). The regular solutions Pnm (x) (with m not necessarily zero, but an integer) are  m/2 d m v ≡ Pnm (x) = 1 − x 2 Pn (x) dx m (12.73a) with m ≥ 0 an integer. One way of developing the solution of the associated Legendre equation is to start with the regular Legendre equation and convert it into the associated Legendre equation by using multiple differentiation. These multiple differentiations are suggested by Eq. (12.73a), the generation of associated Legendre polynomials, and spherical harmonics of Section 12.6 more generally, in Section 4.3 using raising or lowering operators of Eq. (4.69) repeatedly. For their derivative form see Exercise 12.6.8. We take Legendre’s equation  1 − x 2 Pn′′ − 2xPn′ + n(n + 1)Pn = 0, (12.74) and with the help of Leibniz’ formula15 differentiate m times. The result is  1 − x 2 u′′ − 2x(m + 1)u′ + (n − m)(n + m + 1)u = 0, (12.75) where u≡ dm Pn (x). dx m (12.76) Equation (12.74) is not self-adjoint. To put it into self-adjoint form and convert the weighting function to 1, we replace u(x) by  m/2  m/2 d m Pn (x) . 
$$v(x) = \left(1 - x^2\right)^{m/2}u(x) = \left(1 - x^2\right)^{m/2}\frac{d^m P_n(x)}{dx^m}. \tag{12.73b}$$

15 Leibniz' formula for the $n$th derivative of a product is
$$\frac{d^n}{dx^n}\left[A(x)B(x)\right] = \sum_{s=0}^{n}\binom{n}{s}\frac{d^{n-s}}{dx^{n-s}}A(x)\,\frac{d^s}{dx^s}B(x), \qquad \binom{n}{s} = \frac{n!}{(n-s)!\,s!}$$
a binomial coefficient.

Solving for $u$ and differentiating, we obtain
$$u' = \left[v' + \frac{mxv}{1-x^2}\right]\left(1 - x^2\right)^{-m/2}, \tag{12.77}$$
$$u'' = \left[v'' + \frac{2mxv'}{1-x^2} + \frac{mv}{1-x^2} + \frac{m(m+2)x^2 v}{(1-x^2)^2}\right]\cdot\left(1 - x^2\right)^{-m/2}. \tag{12.78}$$
Substituting into Eq. (12.75), we find that the new function $v$ satisfies the self-adjoint ODE
$$\left(1 - x^2\right)v'' - 2xv' + \left[n(n+1) - \frac{m^2}{1-x^2}\right]v = 0, \tag{12.79}$$
which is the associated Legendre equation; it reduces to Legendre's equation when $m$ is set equal to zero. Expressed in spherical polar coordinates, the associated Legendre equation is
$$\frac{1}{\sin\theta}\frac{d}{d\theta}\left(\sin\theta\frac{dv}{d\theta}\right) + \left[n(n+1) - \frac{m^2}{\sin^2\theta}\right]v = 0. \tag{12.80}$$

Associated Legendre Polynomials

The regular solutions, relabeled $P_n^m(x)$, are
$$v \equiv P_n^m(x) = \left(1 - x^2\right)^{m/2}\frac{d^m}{dx^m}P_n(x). \tag{12.73c}$$
These are the associated Legendre functions.16 Since the highest power of $x$ in $P_n(x)$ is $x^n$, we must have $m \le n$ (or the $m$-fold differentiation will drive our function to zero). In quantum mechanics the requirement that $m \le n$ has the physical interpretation that the expectation value of the square of the $z$-component of the angular momentum is less than or equal to the expectation value of the square of the angular momentum vector $\mathbf{L}$,
$$\left\langle L_z^2 \right\rangle \le \left\langle L^2 \right\rangle \equiv \int \psi_{lm}^{*} L^2 \psi_{lm}\, d^3r.$$
From the form of Eq. (12.73c) we might expect $m$ to be nonnegative. However, if $P_n(x)$ is expressed by Rodrigues' formula, this limitation on $m$ is relaxed and we may have $-n \le m \le n$, negative as well as positive values of $m$ being permitted. These limits are consistent with those obtained by means of raising and lowering operators in Chapter 4. In particular, $|m| > n$ is ruled out. This also follows from Eq. (12.73c).
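For numerical work the stable route to $P_n^m(x)$ is recurrence rather than $m$-fold differentiation of $P_n$. The Python sketch below (an illustration, not from the text) seeds with $P_m^m = (2m-1)!!\,(1-x^2)^{m/2}$ and then steps up in $n$ using the recurrence of Eq. (12.92), rewritten as $(n-m)P_n^m = (2n-1)xP_{n-1}^m - (n+m-1)P_{n-2}^m$.

```python
import math

def assoc_legendre(n, m, x):
    """P_n^m(x) for integers n >= m >= 0 and -1 <= x <= 1.

    Seeds: P_m^m = (2m-1)!! (1-x^2)^(m/2), then P_{m+1}^m = (2m+1) x P_m^m,
    then (n-m) P_n^m = (2n-1) x P_{n-1}^m - (n+m-1) P_{n-2}^m steps up in n.
    """
    s = math.sqrt(1.0 - x * x)
    pmm = 1.0
    for k in range(1, m + 1):          # build (2m-1)!! (1-x^2)^(m/2)
        pmm *= (2 * k - 1) * s
    if n == m:
        return pmm
    prev, cur = pmm, (2 * m + 1) * x * pmm     # P_m^m and P_{m+1}^m
    for k in range(m + 2, n + 1):
        prev, cur = cur, ((2 * k - 1) * x * cur - (k + m - 1) * prev) / (k - m)
    return cur
```

Checking a few values against the closed forms of Table 12.2 (for example $P_2^1(x) = 3x(1-x^2)^{1/2}$) confirms the recurrence and its seeds.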
Using Leibniz’ differentiation formula once again, we can show (Exercise 12.5.1) that Pnm (x) and Pn−m (x) are related by Pn−m (x) = (−1)m (n − m)! m P (x). (n + m)! n (12.81) 16 Occasionally (as in AMS-55; for the complete reference, see the Additional Readings of Chapter 8), one finds the associated Legendre functions defined with an additional factor of (−1)m . This (−1)m seems an unnecessary complication at this point. It will be included in the definition of the spherical harmonics Ynm (θ, ϕ) in Section 12.6. Our definition agrees with Jackson’s Electrodynamics (see Additional Readings of Chapter 11 for this reference). Note also that the upper index m is not an exponent. 12.5 Associated Legendre Functions 773 From our definition of the associated Legendre functions Pnm (x), Pn0 (x) = Pn (x). (12.82) A generating function for the associated Legendre functions is obtained, via Eq. (12.71), from that of the ordinary Legendre polynomials: ∞  (2m)!(1 − x 2 )m/2 m Ps+m (x)t s . = 2m m!(1 − 2tx + t 2 )m+1/2 (12.83) s=0 If we drop the factor (1 − x 2 )m/2 = sinm θ from this formula and define the polynomim (x) = P m (x)(1 − x 2 )−m/2 , then we obtain a practical form of the generating als Ps+m s+m function, gm (x, t) ≡ ∞  (2m)! m Ps+m (x)t s . = m 2 m+1/2 2 m!(1 − 2tx + t ) (12.84) s=0 We can derive a recursion relation for associated Legendre polynomials that is analogous to Eqs. (12.14) and (12.17) by differentiation as follows:  ∂gm = (2m + 1)(x − t)gm (x, t). 1 − 2tx + t 2 ∂t Substituting the defining expansions for associated Legendre polynomials we get   m  m m xPs+m sPs+m (x)t s−1 = (2m + 1) t s − Ps+m t s+1 . 1 − 2tx + t 2 s s Comparing coefficients of powers of t in these power series, we obtain the recurrence relation m m m (s + 1)Ps+m+1 − (2m + 1 + 2s)xPs+m + (s + 2m)Ps+m−1 = 0. (12.85) For m = 0 and s = n this relation is Eq. (12.17). Before we can use this relation we need to initialize it, that is, relate the associated m = (2m − 1)!! 
Legendre polynomials to ordinary Legendre polynomials. We can use Pm n+1 from Eq. (12.73c). Also, since |m| ≤ n, we may set Pn = 0 and use this to obtain starting values for various recursive processes. We observe that   −1/2  1 − 2xt + t 2 g1 (x, t) = 1 − 2xt + t 2 Ps (x)t s , (12.86) = s so upon inserting Eq. (12.84) we get the recursion 1 1 Ps+1 − 2xPs1 + Ps−1 = Ps (x). More generally, we also have the identity  1 − 2xt + t 2 gm+1 (x, t) = (2m + 1)gm (x, t), (12.87) (12.88) from which we extract the recursion m+1 m+1 m+1 m Ps+m+1 − 2xPs+m + Ps+m−1 = (2m + 1)Ps+m (x), (12.89) which relates the associated Legendre polynomials with superindex m + 1 to those with m. For m = 0 we recover the initial recursion Eq. (12.87). 774 Chapter 12 Legendre Functions Table 12.2 Associated Legendre Functions P11 (x) = (1 − x 2 )1/2 = sin θ P21 (x) = 3x(1 − x 2 )1/2 = 3 cos θ sin θ P22 (x) = 3(1 − x 2 ) = 3 sin2 θ P31 (x) = 32 (5x 2 − 1)(1 − x 2 )1/2 = 23 (5 cos2 θ − 1) sin θ P32 (x) = 15x(1 − x 2 ) = 15 cos θ sin2 θ P33 (x) = 15(1 − x 2 )3/2 = 15 sin3 θ P41 (x) = 52 (7x 3 − 3x)(1 − x 2 )1/2 = 52 (7 cos3 θ − 3 cos θ) sin θ 15 2 2 2 2 P42 (x) = 15 2 (7x − 1)(1 − x ) = 2 (7 cos θ − 1) sin θ P43 (x) = 105x(1 − x 2 )3/2 = 105 cos θ sin3 θ P44 (x) = 105(1 − x 2 )2 = 105 sin4 θ Example 12.5.1 LOWEST ASSOCIATED LEGENDRE POLYNOMIALS Now we are ready to derive the entries of Table 12.2. For m = 1 and s = 0, Eq. (12.87) 1 do not occur in the definition, Eq. (12.84), of the yields P11 = 1, because P01 = 0 = P−1 associated Legendre polynomials. Multiplying by (1 − x 2 )1/2 = sin θ we get the first line of Table 12.2. For s = 1 we find, from Eq. (12.87), P21 (x) = P1 + 2xP11 = x + 2x = 3x, from which the second line of Table 12.2, 3 cos θ sin θ , follows upon multiplying by sin θ . For s = 2 we get 15 3 1 P31 (x) = P2 + 2xP21 − P11 = 3x 2 − 1 + 6x 2 − 1 = x 2 − , 2 2 2 in agreement with line 4 of Table 12.2. To get line 3 we use Eq. (12.88). 
For m = 1, s = 0, this gives P22 (x) = 3P11 (x) = 3, and multiplying by 1 − x 2 = sin2 θ reproduces line 3 of Table 12.2. For lines 5, 8, 9, Eq. (12.84) may be used, which we leave as an exercise. More m . Then generally, we use Eq. (12.89) instead of Eq. (12.87) to get a starting value of Pm m , giving (2m − 1)!!. Note that, if m = 0, Eq. (12.85) reduces to a two-term formula for Pm this is (−1)!! = 1.  Example 12.5.2 SPECIAL VALUES For x = 1 we use  ∞    −2m − 1 s 2 −m−1/2 −2m−1 t 1 − 2t + t = (1 − t) = s s=0 in Eq. (12.84) and find m Ps+m (1) = (2m)! 2m m!   −2m − 1 , s (12.90) 12.5 Associated Legendre Functions where for s = 0 and    −m s  775 =1 (−m)(−m − 1) · · · (1 − s − m) s!   = 1; for s = 1, P21 (1) = − −3 for s ≥ 1. For m = 1, s = 0 we have P11 (1) = −3 1 = 3; 0  −3 (−3)(−4) 3 1 = 6 = 2 (5 − 1), which all agree with Table 12.2. For for s = 2, P3 (1) = 2 = 2 x = 0 we can also use the binomial expansion, which we leave as an exercise.  −m s = Recurrence Relations As expected and already seen, the associated Legendre functions satisfy recurrence relations. Because of the existence of two indices instead of just one, we have a wide variety of recurrence relations:  2mx Pnm + n(n + 1) − m(m − 1) Pnm−1 = 0, (12.91) Pnm+1 − 2 1/2 (1 − x ) m m + (n − m + 1)Pn+1 , (2n + 1)xPnm = (n + m)Pn−1 (12.92) 1/2 m  m+1 m+1 Pn = Pn+1 − Pn−1 (2n + 1) 1 − x 2 m−1 = (n + m)(n + m − 1)Pn−1 m−1 − (n − m + 1)(n − m + 2)Pn+1 , (12.93)  1/2 m′ 1 m+1 1 1 − x2 Pn = Pn − (n + m)(n − m + 1)Pnm−1 . (12.94) 2 2 These relations, and many other similar ones, may be verified by use of the generating function (Eq. (12.4)), by substitution of the series solution of the associated Legendre equation (12.79) or reduction to the Legendre polynomial recurrence relations, using Eq. (12.73c). As an example of the last method, consider Eq. (12.93). It is similar to Eq. (12.23): ′ ′ (x) − Pn−1 (x). 
(2n + 1)Pn (x) = Pn+1 (12.95) Let us differentiate this Legendre polynomial recurrence relation m times to obtain (2n + 1) dm ′ dm ′ dm Pn (x) = m Pn+1 (x) − m Pn−1 (x) m dx dx dx = d m+1 d m+1 P (x) − Pn−1 (x). n+1 dx m+1 dx m+1 (12.96) Now multiplying by (1 − x 2 )(m+1)/2 and using the definition of Pn (x), we obtain the first part of Eq. (12.93). 776 Chapter 12 Legendre Functions Parity The parity relation satisfied by the associated Legendre functions may be determined by examination of the defining equation (12.73c). As x → −x, we already know that Pn (x) contributes a (−1)n . The m-fold differentiation yields a factor of (−1)m . Hence we have Pnm (−x) = (−1)n+m Pnm (x). (12.97) A glance at Table 12.2 verifies this for 1 ≤ m ≤ n ≤ 4. Also, from the definition in Eq. (12.73c), Pnm (±1) = 0, for m = 0. (12.98) Orthogonality The orthogonality of the Pnm (x) follows from the ODE, just as for the Pn (x) (Section 12.3), if m is the same for both functions. However, it is instructive to demonstrate the orthogonality by another method, a method that will also provide the normalization constant. Using the definition in Eq. (12.73c) and Rodrigues’ formula (Eq. (12.65)) for Pn (x), we find  q+m  p+m 1 1 d d (−1)m p Ppm (x)Pqm (x) dx = p+q X X q dx. (12.99) Xm p+m 2 p!q! −1 dx dx q+m −1 The function X is given by X ≡ (x 2 − 1). If p = q, let us assume that p < q. Notice that the superscript m is the same for both functions. This is an essential condition. The technique is to integrate repeatedly by parts; all the integrated parts will vanish as long as there is a factor X = x 2 − 1. Let us integrate q + m times to obtain   1 p+m (−1)m (−1)q+m 1 q d q+m m d p X dx. (12.100) X X Ppm (x)Pqm (x) dx = 2p+q p!q! dx q+m dx p+m −1 −1 The integrand on the right-hand side is now expanded by Leibniz’ formula to give   d q+m d p+m X q q+m X m p+m X p dx dx  p+m+i q+m  (q + m)!  d q+m−i q m d =X X Xp . (12.101) i!(q + m − i)! 
dx q+m−i dx p+m+i i=0 Since the term X m contains no power of x greater than x 2m , we must have q + m − i ≤ 2m (12.102) or the derivative will vanish. Similarly, p + m + i ≤ 2p. (12.103) q ≤ p, (12.104) Adding both inequalities yields 12.5 Associated Legendre Functions 777 which contradicts our assumption that p < q. Hence, there is no solution for i and the integral vanishes. The same result obviously will follow if p > q. For the remaining case, p = q, we have the single term corresponding to i = q − m. Putting Eq. (12.101) into Eq. (12.100), we have  2q   2m 1 1  m 2 d d (−1)q+2m (q + m)! m q dx. Pq (x) dx = 2q X X Xq 2 q!q!(2m)!(q − m)! −1 dx 2m dx 2q −1 (12.105) Since  m X m = x 2 − 1 = x 2m − mx 2m−2 + · · · , (12.106) d 2m m X = (2m)!, dx 2m Eq. (12.105) reduces to 1  m 2 (−1)q+2m (2q)!(q + m)! 1 q Pq (x) dx = X dx. 22q q!q!(q − m)! −1 −1 The integral on the right is just q (−1) π 0 sin2q+1 θ dθ = (−1)q 22q+1 q!q! (2q + 1)! (12.107) (12.108) (12.109) (compare Exercise 8.4.9). Combining Eqs. (12.108) and (12.109), we have the orthogonality integral, 1 2 (q + m)! Ppm (x)Pqm (x) dx = (12.110) · δpq , 2q + 1 (q − m)! −1 or, in spherical polar coordinates, π Ppm (cos θ )Pqm (cos θ ) sin θ dθ = 0 (q + m)! 2 · δpq . 2q + 1 (q − m)! (12.111) The orthogonality of the Legendre polynomials is a special case of this result, obtained by setting m equal to zero; that is, for m = 0, Eq. (12.110) reduces to Eqs. (12.47) and (12.48). In both Eqs. (12.110) and (12.111), our Sturm–Liouville theory of Chapter 10 could provide the Kronecker delta. A special calculation, such as the analysis here, is required for the normalization constant. The orthogonality of the associated Legendre functions over the same interval and with the same weighting factor as the Legendre polynomials does not contradict the uniqueness of the Gram–Schmidt construction of the Legendre polynomials, Example 10.3.1. 
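The orthogonality and normalization integral just derived can be spot-checked directly from the explicit entries of Table 12.2. The sketch below (illustrative only) integrates with a composite Simpson rule; for $p = q = 2$, $m = 1$ the predicted value is $\tfrac{2}{2q+1}\,\tfrac{(q+m)!}{(q-m)!} = \tfrac{2}{5}\cdot\tfrac{3!}{1!} = \tfrac{12}{5}$, while $p \ne q$ must give zero.

```python
import math

# Explicit entries from Table 12.2
def P21(x):  # 3x(1-x^2)^(1/2)
    return 3.0 * x * math.sqrt(1.0 - x * x)

def P31(x):  # (3/2)(5x^2-1)(1-x^2)^(1/2)
    return 1.5 * (5.0 * x * x - 1.0) * math.sqrt(1.0 - x * x)

def simpson(f, a, b, n=2000):
    """Composite Simpson rule with n (even) subintervals."""
    h = (b - a) / n
    total = f(a) + f(b)
    for i in range(1, n):
        total += (4.0 if i % 2 else 2.0) * f(a + i * h)
    return total * h / 3.0
```

Both integrands here are polynomials in $x$ (the square roots pair off), so Simpson quadrature converges essentially to machine precision.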
Table 12.2 suggests (and Section 12.4 verifies) that $\int_{-1}^{1} P_p^m(x)P_q^m(x)\,dx$ may be written as
$$\int_{-1}^{1} \mathcal{P}_p^m(x)\mathcal{P}_q^m(x)\left(1 - x^2\right)^m dx,$$
where we defined earlier
$$\mathcal{P}_p^m(x)\left(1 - x^2\right)^{m/2} = P_p^m(x).$$
The functions $\mathcal{P}_p^m(x)$ may be constructed by the Gram–Schmidt procedure with the weighting function $w(x) = (1 - x^2)^m$.

It is possible to develop an orthogonality relation for associated Legendre functions of the same lower index but different upper index. We find
$$\int_{-1}^{1} P_n^m(x)P_n^k(x)\left(1 - x^2\right)^{-1} dx = \frac{(n+m)!}{m\,(n-m)!}\,\delta_{m,k}. \tag{12.112}$$
Note that a new weighting factor, $(1 - x^2)^{-1}$, has been introduced. This relation is a mathematical curiosity. In physical problems with spherical symmetry, solutions of Eqs. (12.80) and (9.64) appear in conjunction with those of Eq. (9.61), and orthogonality of the azimuthal dependence makes the two upper indices equal and always leads to Eq. (12.111).

Example 12.5.3 MAGNETIC INDUCTION FIELD OF A CURRENT LOOP

Like the other ODEs of mathematical physics, the associated Legendre equation is likely to pop up quite unexpectedly. As an illustration, consider the magnetic induction field B and magnetic vector potential A created by a single circular current loop in the equatorial plane (Fig. 12.12). We know from electromagnetic theory that the contribution of current element $I\,d\boldsymbol{\lambda}$ to the magnetic vector potential is
$$d\mathbf{A} = \frac{\mu_0 I}{4\pi}\frac{d\boldsymbol{\lambda}}{r}. \tag{12.113}$$
(This follows from Exercise 1.14.4 and Section 9.7.) Equation (12.113), plus the symmetry of our system, shows that A has only a $\hat{\varphi}$ component and that the component is independent of $\varphi$,17
$$\mathbf{A} = \hat{\varphi}A_\varphi(r, \theta). \tag{12.114}$$

FIGURE 12.12 Circular current loop.

By Maxwell's equations,
$$\nabla \times \mathbf{H} = \mathbf{J}, \qquad \frac{\partial\mathbf{D}}{\partial t} = 0 \quad \text{(SI units)}. \tag{12.115}$$
Since
$$\mu_0 \mathbf{H} = \mathbf{B} = \nabla \times \mathbf{A}, \tag{12.116}$$
we have
$$\nabla \times (\nabla \times \mathbf{A}) = \mu_0 \mathbf{J}, \tag{12.117}$$
where J is the current density. In our problem J is zero everywhere except in the current loop.
Therefore, away from the loop, ˆ ϕ (r, θ ) = 0, ∇ × ∇ × ϕA (12.118) using Eq. (12.114). From the expression for the curl in spherical polar coordinates (Section 2.5), we obtain (Example 2.5.2)  2  ∂ Aϕ 2 ∂Aϕ 1 ∂ 2 Aϕ 1 ∂ ˆ ϕ (r, θ ) = ϕˆ − ∇ × ∇ × ϕA − (cot θ A ) − − ϕ r ∂r ∂r 2 r 2 ∂θ 2 r 2 ∂θ = 0. (12.119) Letting Aϕ (r, θ ) = R(r)(θ ) and separating variables, we have d 2R dR − n(n + 1)R = 0, +2 2 dr dr (12.120) d 2  d + n(n + 1) − 2 = 0. + cot θ 2 dθ dθ sin θ (12.121) r2 The second equation is the associated Legendre equation (12.80) with m = 1, and we may immediately write (θ ) = Pn1 (cos θ ). (12.122) The separation constant n(n + 1), n a nonnegative integer, was chosen to keep this solution well behaved. By trial, letting R(r) = r α , we find that α = n, or −n − 1. The first possibility is discarded, for our solution must vanish as r → ∞. Hence  n+1 a bn 1 Aϕn = n+1 Pn (cos θ ) = cn Pn1 (cos θ ) (12.123) r r 17 Pair off corresponding current elements I dλ(ϕ ) and I dλ(ϕ ), where ϕ − ϕ = ϕ − ϕ. 1 2 1 2 780 Chapter 12 Legendre Functions and Aϕ (r, θ ) = ∞  n=1 cn  n+1 a Pn1 (cos θ ) r (r > a). (12.124) Here a is the radius of the current loop. Since Aϕ must be invariant to reflection in the equatorial plane, by the symmetry of our problem, Aϕ (r, cos θ ) = Aϕ (r, − cos θ ), (12.125) the parity property of Pnm (cos θ ) (Eq. (12.97)) shows that cn = 0 for n even. To complete the evaluation of the constants, we may use Eq. (12.124) to calculate Bz along the z-axis (Bz = Br (r, θ = 0)) and compare with the expression obtained from the Biot–Savart law. This is the same technique as used in Example 12.3.3. We have (compare Eq. (2.47))   ∂ 1 cot θ 1 ∂Aϕ  (sin θ Aϕ ) = Aϕ + . (12.126) Br = ∇ × A r = r sin θ ∂θ r r ∂θ Using ∂Pn1 (cos θ ) dP 1 (cos θ ) 1 n(n + 1) 0 = − sin θ n = − Pn2 + Pn ∂θ d(cos θ ) 2 2 (12.127) (Eq. (12.94)) and then Eq. 
(12.91) with m = 1, Pn2 (cos θ ) − 2 cos θ 1 P (cos θ ) + n(n + 1)Pn (cos θ ) = 0, sin θ n (12.128) we obtain Br (r, θ ) = ∞  n=1 cn n(n + 1) (for all θ ). In particular, for θ = 0, Br (r, 0) = ∞  n=1 a n+1 Pn (cos θ ), r n+2 cn n(n + 1) r > a, a n+1 . r n+2 (12.129) (12.130) We may also obtain Bθ (r, θ ) = − ∞ 1 ∂(rAϕ )  a n+1 = cn n n+2 Pn1 (cos θ ), r ∂r r r > a, (12.131) n=1 The Biot–Savart law states that µ0 dλ × rˆ I (SI units). (12.132) 4π r2 We now integrate over the perimeter of our loop (radius a). The geometry is shown in Fig. 12.13. The resulting magnetic induction field is zˆ Bz , along the z-axis, with   −3/2 µ0 I a 2 a 2 −3/2 µ0 I 2  2 1 + a a + z2 Bz = . (12.133) = 2 2 z3 z2 dB = 12.5 Associated Legendre Functions FIGURE 12.13 781 Biot–Savart law applied to a circular loop. Expanding by the binomial theorem, we obtain      3 a 2 15 a 4 µ0 I a 2 1− Bz = + − ··· 2 z3 2 z 8 z  2s ∞ µ0 I a 2  s (2s + 1)!! a (−1) , z > a. = 2 z3 (2s)!! z (12.134) s=0 Equating Eqs. (12.130) and (12.134) term by term (with r = z),18 we find µ0 I , 4 µ0 I , c2 = c4 = · · · = 0. 16 (n/2)! µ0 I · cn = (−1)(n−1)/2 , n odd. 2n(n + 1) [(n − 1)/2]!( 12 )! c1 = c3 = − (12.135) Equivalently, we may write c2n+1 = (−1)n 18 The descending power series is also unique. µ0 I µ0 I (2n − 1)!! (2n)! · = (−1)n · 2n+2 n!(n + 1)! 2 (2n + 2)!! 2 (12.136) 782 Chapter 12 Legendre Functions and Aϕ (r, θ ) = Br (r, θ ) = Bθ (r, θ ) =  2n  2  ∞ a a 1 P2n+1 (cos θ ), c2n+1 r r (12.137) n=0  2n ∞ a a2  c (2n + 1)(2n + 2) P2n+1 (cos θ ), 2n+1 r r3 (12.138) n=0 ∞ a2  r3 n=0  2n a 1 P2n+1 (cos θ ). c2n+1 (2n + 1) r (12.139) These fields may be described in closed form by the use of elliptic integrals. Exercise 5.8.4 is an illustration of this approach. A third possibility is direct integration of Eq. (12.113) by expanding the denominator of the integral for Aϕ in Exercise 5.8.4 as a Legendre polynomial generating function. The current is specified by Dirac delta functions. 
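The consistency of the binomial series for the on-axis field, Eq. (12.134), with the closed Biot–Savart form, Eq. (12.133), is easy to confirm numerically. The Python sketch below (not in the original text; $\mu_0 I$ is set to 1 for convenience) sums the series using the ratio of successive terms, $-\frac{2s+3}{2s+2}(a/z)^2$.

```python
import math

def bz_closed(a, z):
    """On-axis field of a circular loop, Eq. (12.133), in units of mu0*I."""
    return 0.5 * a * a * (a * a + z * z) ** -1.5

def bz_series(a, z, smax=40):
    """Partial sum of the binomial expansion, Eq. (12.134); valid for z > a."""
    x = (a / z) ** 2
    total, term = 0.0, 1.0                        # term_0 = 1
    for s in range(smax + 1):
        total += term
        term *= -(2 * s + 3) / (2 * s + 2) * x    # (-1)^s (2s+1)!!/(2s)!! x^s
    return 0.5 * a * a / z ** 3 * total
```

For $z$ comfortably larger than $a$ the geometric factor $(a/z)^2$ makes the series converge rapidly, and forty terms agree with the closed form to full double precision.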
These methods have the advantage of yielding the constants cn directly. A comparison of magnetic current loop dipole fields and finite electric dipole fields may be of interest. For the magnetic current loop dipole, the preceding analysis gives    3 a 2 µ0 I a 2 P P + · · · , (12.140) − Br (r, θ ) = 3 1 2 r3 2 r    µ0 I a 2 1 3 a 2 1 P − Bθ (r, θ ) = P3 + · · · . (12.141) 4 r3 1 4 r From the finite electric dipole potential of Section 12.1 we have  2  qa a Er (r, θ ) = P1 + 2 P3 + · · · , r πε0 r 3  2  a qa 1 1 P + Eθ (r, θ ) = P3 + · · · . r 2πε0 r 3 1 (12.142) (12.143) The two fields agree in form as far as the leading term is concerned (r −3 P1 ), and this is the basis for calling them both dipole fields. As with electric multipoles, it is sometimes convenient to discuss point magnetic multipoles (see Fig. 12.14). For the dipole case, Eqs. (12.140) and (12.141), the point dipole is formed by taking the limit a → 0, I → ∞, with I a 2 held constant. With n a unit vector normal to the current loop (positive sense by right-hand rule, Section 1.10), the magnetic  moment m is given by m = nI πa 2 . FIGURE 12.14 Electric dipole. 12.5 Associated Legendre Functions 783 Exercises 12.5.1 Prove that Pn−m (x) = (−1)m where Pnm (x) is defined by (n − m)! m P (x), (n + m)! n m/2 d n+m  2 n 1  1 − x2 x −1 . n+m dx Hint. One approach is to apply Leibniz’ formula to (x + 1)n (x − 1)n . Pnm (x) = 12.5.2 2n n! Show that 1 P2n (0) = 0, 1 P2n+1 (0) = (−1)n (2n + 1)!! (2n + 1)! , = (−1)n (2n)!! (2n n!)2 by each of these three methods: (a) use of recurrence relations, (b) expansion of the generating function, (c) Rodrigues’ formula. 12.5.3 Evaluate Pnm (0). ANS. Pnm (0) =  (n + m)!  (−1)(n−m)/2 n , 2 ((n − m)/2)!((n + m)/2!) 0, Also, 12.5.4 (n − m)!! Show that even. n = 0, 1, 2, . . . . Derive the associated Legendre recurrence relation Pnm+1 (x) − 12.5.6 n + m odd. (n + m − 1)!! Pnm (0) = (−1)(n−m)/2 ,n + m Pnn (cos θ ) = (2n − 1)!! 
sinn θ, 12.5.5 n + m even,  2mx P m (x) + n(n + 1) − m(m − 1) Pnm−1 (x) = 0. (1 − x 2 )1/2 n Develop a recurrence relation that will yield Pn1 (x) as Pn1 (x) = f1 (x, n)Pn (x) + f2 (x, n)Pn−1 (x). Follow either (a) or (b). (a) Derive a recurrence relation of the preceding form. Give f1 (x, n) and f2 (x, n) explicitly. (b) Find the appropriate recurrence relation in print. (1) Give the source. 784 Chapter 12 Legendre Functions (2) Verify the recurrence relation. ANS. Pn1 (x) = − 12.5.7 Show that sin θ 12.5.8 nx n Pn + Pn−1 . 2 1/2 (1 − x ) (1 − x 2 )1/2 d Pn (cos θ ) = Pn1 (cos θ ). d cos θ Show that (a) (b)  m2 Pnm Pnm′ dPnm′ 2n(n + 1) (n + m)! sin θ dθ = + δnn′ , 2 dθ dθ 2n + 1 (n − m)! sin θ 0  π 1 Pn1′ dPn1 Pn dPn1′ sin θ dθ = 0. + sin θ dθ sin θ dθ 0 π  dP m n These integrals occur in the theory of scattering of electromagnetic waves by spheres. 12.5.9 As a repeat of Exercise 12.3.10, show, using associated Legendre functions, that 1  2 n! n+1 · · δm,n−1 x 1 − x 2 Pn′ (x)Pm′ (x) dx = 2n + 1 2n − 1 (n − 2)! −1 + 12.5.10 Evaluate 12.5.11 2 (n + 2)! n · · δm,n+1 . 2n + 1 2n + 3 n! π 0 sin2 θ Pn1 (cos θ ) dθ. The associated Legendre polynomial Pnm (x) satisfies the self-adjoint ODE  2 m  m2 dPnm (x) 2 d Pn (x) P m (x) = 0. + n(n + 1) − 1−x − 2x dx dx 2 1 − x2 n From the differential equations for Pnm (x) and Pnk (x) show that 1 dx = 0, Pnm (x)Pnk (x) 1 − x2 −1 for k = m. 12.5.12 Determine the vector potential of a magnetic quadrupole by differentiating the magnetic dipole potential. P 1 (cos θ ) µ0  2 I a (dz)ϕˆ 2 3 + higher-order terms. 2 r   2 3P2 (cos θ ) ˆ P21 (cos θ ) . BMQ = µ0 I a (dz) rˆ + θ r4 r4 ANS. AMQ = 12.5 Associated Legendre Functions 785 This corresponds to placing a current loop of radius a at z → dz and an oppositely directed current loop at z → −dz and letting a → 0 subject to (dz)a (dipole strength) equal constant. Another approach to this problem would be to integrate dA (Eq. 
(12.113), to expand the denominator in a series of Legendre polynomials, and to use the Legendre polynomial addition theorem (Section 12.8). 12.5.13 A single loop of wire of radius a carries a constant current I . (a) Find the magnetic induction B for r < a, θ = π/2. (b) Calculate the integral of the magnetic flux (B · dσ ) over the area of the current loop, that is  a 2π  π dϕ r dr. Bz r, θ = 2 0 0 ANS. ∞. The Earth is within such a ring current, in which I approximates millions of amperes arising from the drift of charged particles in the Van Allen belt. 12.5.14 (a) Show that in the point dipole limit the magnetic induction field of the current loop becomes µ0 m Br (r, θ ) = P1 (cos θ ), 2π r 3 µ0 m 1 Bθ (r, θ ) = P (cos θ ) 2π r 3 1 with m = I πa 2 . (b) Compare these results with the magnetic induction of the point magnetic dipole of Exercise 1.8.17. Take m = zˆ m. 12.5.15 A uniformly charged spherical shell is rotating with constant angular velocity. (a) Calculate the magnetic induction B along the axis of rotation outside the sphere. (b) Using the vector potential series of Section 12.5, find A and then B for all space outside the sphere. 12.5.16 In the liquid drop model of the nucleus, the spherical nucleus is subjected to small deformations. Consider a sphere of radius r0 that is deformed so that its new surface is given by  r = r0 1 + α2 P2 (cos θ ) . Find the area of the deformed sphere through terms of order α22 . Hint.  2 1/2 dr r sin θ dθ dϕ. dA = r 2 + dθ   ANS. A = 4πr02 1 + 45 α22 + O α23 . 786 Chapter 12 Legendre Functions Note. The area element dA follows from noting that the line element ds for fixed ϕ is given by 12.5.17   2 1/2  2 2 dr 2 1/2 2 = r + dθ. ds = r dθ + dr dθ A nuclear particle is in a spherical square well potential V (r, θ, ϕ) = 0 for 0 ≤ r < a and ∞ for r > a. The particle is described by a wave function ψ(r, θ, ϕ) which satisfies the wave equation − h¯ 2 2 ∇ ψ + V0 ψ = Eψ, 2M r < a, and the boundary condition ψ(r = a) = 0. 
Show that for the energy E to be a minimum there must be no angular dependence in the wave function; that is, ψ = ψ(r). Hint. The problem centers on the boundary condition on the radial function. 12.5.18 (a) Write a subroutine to calculate the numerical value of the associated Legendre function PN1 (x) for given values of N and x. Hint. With the known forms of P11 and P21 you can use the recurrence relation Eq. (12.92) to generate PN1 , N > 2. (b) Check your subroutine by having it calculate PN1 (x) for x = 0.0(0.5) 1.0 and N = 1(1)10. Check these numerical values against the known values of PN1 (0) and PN1 (1) and against the tabulated values of PN1 (0.5). 12.5.19 Calculate the magnetic vector potential of a current loop, Example 12.5.1. Tabulate your results for r/a = 1.5(0.5)5.0 and θ = 0◦ (15◦ )90◦ . Include terms in the series expansion, Eq. (12.137), until the absolute values of the terms drop below the leading term by a factor of 105 or more. Note. This associated Legendre expansion can be checked by comparison with the elliptic integral solution, Exercise 5.8.4. Check value. For r/a = 4.0 and θ = 20◦ , Aϕ /µ0 I = 4.9398 × 10−3 . 12.6 SPHERICAL HARMONICS In the separation of variables of (1) Laplace’s equation, (2) Helmholtz’s or the spacedependence of the electromagnetic wave equation, and (3) the Schrödinger wave equation for central force fields, ∇ 2 ψ + k 2 f (r)ψ = 0, (12.144) 12.6 Spherical Harmonics 787 the angular dependence, coming entirely from the Laplacian operator, is19   d (θ ) d 2 (ϕ) (ϕ) d sin θ + 2 + n(n + 1)(θ )(ϕ) = 0. sin θ dθ dθ sin θ dϕ 2 (12.145) Azimuthal Dependence — Orthogonality The separated azimuthal equation is 1 d 2 (ϕ) = −m2 , (ϕ) dϕ 2 (12.146) (ϕ) = e−imϕ , eimϕ , (12.147) with solutions with m integer, which satisfy the orthogonal condition 2π 0 e−im1 ϕ eim2 ϕ dϕ = 2πδm1 m2 . (12.148) Notice that it is the product ∗m1 (ϕ)m2 (ϕ) that is taken and that ∗ is used to indicate the complex conjugate function. 
This choice is not required, but it is convenient for quantum mechanical calculations. We could have used  = sin mϕ, cos mϕ (12.149) and the conditions of orthogonality that form the basis for Fourier series (Chapter 14). For applications such as describing the Earth’s gravitational or magnetic field, sin mϕ and cos mϕ would be the preferred choice (see Example 12.6.1). In electrostatics and most other physical problems we require m to be an integer in order that (ϕ) be a single-valued function of the azimuth angle. In quantum mechanics the question is much more involved: Compare the footnote in Section 9.3. By means of Eq. (12.148), 1 m = √ eimϕ 2π (12.150) is orthonormal (orthogonal and normalized) with respect to integration over the azimuth angle ϕ. 19 For a separation constant of the form n(n + 1) with n an integer, a Legendre-equation-series solution becomes a polynomial. Otherwise both series solutions diverge, Exercise 9.5.5. 788 Chapter 12 Legendre Functions Polar Angle Dependence Splitting off the azimuthal dependence, the polar angle dependence (θ ) leads to the associated Legendre equation (12.80), which is satisfied by the associated Legendre functions; that is, (θ ) = Pnm (cos θ ). To include negative values of m, we use Rodrigues’ formula, Eq. (12.65), in the definition of Pnm (cos θ ). This leads to m+n  n 1  2 m/2 d 1 − x x2 − 1 , −n ≤ m ≤ n. (12.151) n m+n 2 n! dx Pnm (cos θ ) and Pn−m (cos θ ) are related as indicated in Exercise 12.5.1. An advantage of this approach over simply defining Pnm (cos θ ) for 0 ≤ m ≤ n and requiring that Pn−m = Pnm is that the recurrence relations valid for 0 ≤ m ≤ n remain valid for −n ≤ m < 0. Normalizing the associated Legendre function by Eq. (12.110), we obtain the orthonormal functions , 2n + 1 (n − m)! m P (cos θ ), −n ≤ m ≤ n, (12.152) 2 (n + m)! n Pnm (cos θ ) = which are orthonormal with respect to the polar angle θ . Spherical Harmonics The function m (ϕ) (Eq. 
(12.150)) is orthonormal with respect to the azimuthal angle ϕ. We take the product of m (ϕ) and the orthonormal function in polar angle from Eq. (12.152) and define , 2n + 1 (n − m)! m P (cos θ )eimϕ (12.153) Ynm (θ, ϕ) ≡ (−1)m 4π (n + m)! n to obtain functions of two angles (and two indices) that are orthonormal over the spherical surface. These Ynm (θ, ϕ) are spherical harmonics, of which the first few are plotted in Fig. 12.15. The complete orthogonality integral becomes 2π π Ynm11 ∗ (θ, ϕ)Ynm22 (θ, ϕ) sin θ dθ dϕ = δn1 n2 δm1 m2 . (12.154) ϕ=0 θ=0 The extra (−1)m included in the defining equation of Ynm (θ, ϕ) deserves some comment. It is clearly legitimate, since Eq. (12.144) is linear and homogeneous. It is not necessary, but in moving on to certain quantum mechanical calculations, particularly in the quantum theory of angular momentum (Section 12.7), it is most convenient. The factor (−1)m is a phase factor, often called the Condon–Shortley phase, after the authors of a classic text on atomic spectroscopy. The effect of this (−1)m (Eq. (12.153)) and the (−1)m of Eq. (12.73c) for Pn−m (cos θ ) is to introduce an alternation of sign among the positive m spherical harmonics. This is shown in Table 12.3. The functions Ynm (θ, ϕ) acquired the name spherical harmonics first because they are defined over the surface of a sphere with θ the polar angle and ϕ the azimuth. The harmonic was included because solutions of Laplace’s equation were called harmonic functions and Ynm (cos, ϕ) is the angular part of such a solution. 12.6 Spherical Harmonics FIGURE 12.15 [ℜYlm (θ, ϕ)]2 for 0 ≤ l ≤ 3, m = 0, . . . , 3. 
789 790 Chapter 12 Legendre Functions Table 12.3 Spherical Harmonics (Condon–Shortley Phase) 1 Y00 (θ, ϕ) = √ 4π ) Y11 (θ, ϕ) = − 3 sin θeiϕ 8π ) 3 cos θ 4π ) 3 Y1−1 (θ, ϕ) = + sin θe−iϕ 8π ) 5 3 sin2 θe2iϕ Y22 (θ, ϕ) = 96π ) 5 Y21 (θ, ϕ) = − 3 sin θ cos θeiϕ 24π )   5 3 1 cos2 θ − Y20 (θ, ϕ) = 4π 2 2 ) 5 Y2−1 (θ, ϕ) = + 3 sin θ cos θe−iϕ 24π ) 5 Y2−2 (θ, ϕ) = 3 sin2 θe−2iϕ 96π Y10 (θ, ϕ) = In the framework of quantum mechanics Eq. (12.145) becomes an orbital angular momentum equation and the solution YLM (θ, ϕ) (n replaced by L, m replaced by M) is an angular momentum eigenfunction, L being the angular momentum quantum number and M the z-axis projection of L. These relationships are developed in more detail in Sections 4.3 and 12.7. Laplace Series, Expansion Theorem Part of the importance of spherical harmonics lies in the completeness property, a consequence of the Sturm–Liouville form of Laplace’s equation. This property, in this case, means that any function f (θ, ϕ) (with sufficient continuity properties) evaluated over the surface of the sphere can be expanded in a uniformly convergent double series of spherical harmonics20 (Laplace’s series):  f (θ, ϕ) = amn Ynm (θ, ϕ). (12.155) m,n If f (θ, ϕ) is known, the coefficients can be immediately found by the use of the orthogonality integral. 20 For a proof of this fundamental theorem see E. W. Hobson, The Theory of Spherical and Ellipsoidal Harmonics, New York: Chelsea (1955), Chapter VII. If f (θ, ϕ) is discontinuous we may still have convergence in the mean, Section 10.4. 12.6 Spherical Harmonics Table 12.4 Coefficienta C20 C22 S22 791 Gravity Field Coefficients, Eq. (12.156) Earth Moon Mars 1.083 × 10−3 0.16 × 10−5 −0.09 × 10−5 (0.200 ± 0.002) × 10−3 (2.4 ± 0.5) × 10−5 (0.5 ± 0.6) × 10−5 (1.96 ± 0.01) × 10−3 (−5 ± 1) × 10−5 (3 ± 1) × 10−5 a C represents an equatorial bulge, whereas C and S represent an azimuthal dependence of the 20 22 22 gravitational field. 
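The orthonormality statement, Eq. (12.154), can be checked directly from the Table 12.3 entries. Since the $\varphi$ integration contributes $2\pi\delta_{m_1 m_2}$, only the polar integral remains; the sketch below (illustrative, not from the text) writes the $m = 0$ harmonics as functions of $u = \cos\theta$ and integrates with a composite Simpson rule.

```python
import math

# Polar parts of the m = 0 entries of Table 12.3, as functions of u = cos(theta);
# the e^{i m phi} factors integrate to 2*pi*delta_{m1 m2} separately.
def Y00(u):
    return math.sqrt(1.0 / (4.0 * math.pi))

def Y10(u):
    return math.sqrt(3.0 / (4.0 * math.pi)) * u

def Y20(u):
    return math.sqrt(5.0 / (4.0 * math.pi)) * (1.5 * u * u - 0.5)

def inner(f, g, n=1000):
    """2*pi * integral_{-1}^{1} f(u) g(u) du by composite Simpson rule."""
    h = 2.0 / n
    total = f(-1.0) * g(-1.0) + f(1.0) * g(1.0)
    for i in range(1, n):
        u = -1.0 + i * h
        total += (4.0 if i % 2 else 2.0) * f(u) * g(u)
    return 2.0 * math.pi * total * h / 3.0
```

Diagonal inner products come out to 1 and off-diagonal ones to 0, as Eq. (12.154) requires; the integrands are low-order polynomials in $u$, so the quadrature is essentially exact.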
Example 12.6.1 LAPLACE SERIES — GRAVITY FIELDS The gravity fields of the Earth, the Moon, and Mars have been described by a Laplace series with real eigenfunctions:  ∞ n   ( GM R   R n+1 ' e o − Cnm Ymn (θ, ϕ) + Snm Ymn (θ, ϕ) . (12.156) U (r, θ, ϕ) = R r r n=2 m=0 e and Here M is the mass of the body and R is the equatorial radius. The real functions Ymn o Ymn are defined by e Ymn (θ, ϕ) = Pnm (cos θ ) cos mϕ, o Ymn (θ, ϕ) = Pnm (cos θ ) sin mϕ. For applications such as this, the real trigonometric forms are preferred to the imaginary exponential form of YLM (θ, ϕ). Satellite measurements have led to the numerical values shown in Table 12.4.  Exercises 12.6.1 Show that the parity of YLM (θ, ϕ) is (−1)L . Note the disappearance of any M dependence. Hint. For the parity operation in spherical polar coordinates see Exercise 2.5.8 and footnote 7 in Section 12.2. 12.6.2 Prove that YLM (0, ϕ) = 12.6.3  2L + 1 4π 1/2 δM,0 . In the theory of Coulomb excitation of nuclei we encounter YLM (π/2, 0). Show that     2L + 1 1/2 [(L − M)!(L + M)!]1/2 M π YL for L + M even, ,0 = (−1)(L+M)/2 2 4π (L − M)!!(L + M)!! =0 for L + M odd. Here (2n)!! = 2n(2n − 2) · · · 6 · 4 · 2, (2n + 1)!! = (2n + 1)(2n − 1) · · · 5 · 3 · 1. 792 Chapter 12 Legendre Functions 12.6.4 (a) Express the elements of the quadrupole moment tensor xi xj as a linear combination of the spherical harmonics Y2m (and Y00 ). Note. The tensor xi xj is reducible. The Y00 indicates the presence of a scalar component. (b) The quadrupole moment tensor is usually defined as  Qij = 3xi xj − r 2 δij ρ(r) dτ, with ρ(r) the charge density. Express the components of (3xi xj − r 2 δij ) in terms of r 2 Y2M . (c) What is the significance of the −r 2 δij term? Hint. Compare Sections 2.9 and 4.4. 12.6.5 The orthogonal azimuthal functions yield a useful representation of the Dirac delta function. Show that δ(ϕ1 − ϕ2 ) = 12.6.6 ∞  1  exp im(ϕ1 − ϕ2 ) . 
2π m=−∞ Derive the spherical harmonic closure relation ∞  +l  l=0 m=−l Ylm (θ1 , ϕ1 )Ylm∗ (θ2 , ϕ2 ) = 1 δ(θ1 − θ2 )δ(ϕ1 − ϕ2 ) sin θ1 = δ(cos θ1 − cos θ2 )δ(ϕ1 − ϕ2 ). 12.6.7 The quantum mechanical angular momentum operators Lx ± iLy are given by   ∂ ∂ iϕ , + i cot θ Lx + iLy = e ∂θ ∂ϕ   ∂ ∂ − i cot θ Lx − iLy = −e−iϕ . ∂θ ∂ϕ Show that 12.6.8 (a) (Lx + iLy )YLM (θ, ϕ) = (b) (Lx − iLy )YLM (θ, ϕ) = With L± given by (L − M)(L + M + 1)YLM+1 (θ, ϕ), (L + M)(L − M + 1)YLM−1 (θ, ϕ). L± = Lx ± iLy = ±e show that (a) Ylm = , (l + m)! (L− )l−m Yll , (2l)!(l − m)! ±iϕ  ∂ ∂ , ± i cot θ ∂θ ∂ϕ 12.7 Orbital Angular Momentum Operators (b) 12.6.9 Ylm = , 793 (l − m)! (L+ )l+m Yl−l . (2l)!(l + m)! In some circumstances it is desirable to replace the imaginary exponential of our spherical harmonic by sine or cosine. Morse and Feshbach (see the General References at book’s end) define e Ymn = Pnm (cos θ ) cos mϕ, o = Pnm (cos θ ) sin mϕ, Ymn where 0 2π 0 π 2 e or o Ymn (θ, ϕ) sin θ dθ dϕ = (n + m)! 4π , 2(2n + 1) (n − m)! = 4π n = 1, 2, . . . o for n = 0 (Y00 is undefined). These spherical harmonics are often named according to the patterns of their positive and negative regions on the surface of a sphere — zonal harmonics for m = 0, sectoral e , n = 4, m = harmonics for m = n, and tesseral harmonics for 0 < m < n. For Ymn 0, 2, 4, indicate on a diagram of a hemisphere (one diagram for each spherical harmonic) the regions in which the spherical harmonic is positive. 12.6.10 A function f (r, θ, ϕ) may be expressed as a Laplace series  f (r, θ, ϕ) = alm r l Ylm (θ, ϕ). l,m 12.7 With  sphere used to mean the average over a sphere (centered on the origin), show that   f (r, θ, ϕ) sphere = f (0, 0, 0). ORBITAL ANGULAR MOMENTUM OPERATORS Now we return to the specific orbital angular momentum operators Lx , Ly , and Lz of quantum mechanics introduced in Section 4.3. 
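The two operator statements at the heart of this section, $L_z Y_L^M = M Y_L^M$ and $L_+ \psi_{LL} = 0$, can be probed numerically before the analytic development. A minimal sketch (not from the text; assumes SciPy, and applies the explicit forms $L_z = -i\,\partial/\partial\varphi$ and $L_+ = e^{i\varphi}(\partial_\theta + i\cot\theta\,\partial_\varphi)$ by central finite differences):

```python
import numpy as np
from math import factorial
from scipy.special import lpmv

def Y(l, m, theta, phi):
    # Condon-Shortley-phased spherical harmonic built from scipy's lpmv.
    if m < 0:
        return (-1) ** (-m) * np.conj(Y(l, -m, theta, phi))
    norm = np.sqrt((2*l + 1) / (4*np.pi) * factorial(l - m) / factorial(l + m))
    return norm * lpmv(m, l, np.cos(theta)) * np.exp(1j * m * phi)

h = 1e-6   # central-difference step
def d_theta(l, m, th, ph):
    return (Y(l, m, th + h, ph) - Y(l, m, th - h, ph)) / (2*h)
def d_phi(l, m, th, ph):
    return (Y(l, m, th, ph + h) - Y(l, m, th, ph - h)) / (2*h)

th, ph = 0.9, 0.4   # arbitrary interior point

# Lz = -i d/dphi: the ratio (Lz Y)/Y should equal the eigenvalue M.
Lz_eig = (-1j * d_phi(2, 1, th, ph)) / Y(2, 1, th, ph)   # expect 1

# L+ = e^{i phi} (d/dtheta + i cot(theta) d/dphi) annihilates the
# top state Y_L^L -- the condition L+ psi_LL = 0 used in the text.
Lplus_on_top = np.exp(1j*ph) * (d_theta(2, 2, th, ph)
                                + 1j / np.tan(th) * d_phi(2, 2, th, ph))
```

Both checks are pointwise, so no integration over the sphere is needed; the finite-difference error is of order $h^2$.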
Equation (4.68) becomes Lz ψLM (θ, ϕ) = MψLM (θ, ϕ), and we want to show that ψLM (θ, ϕ) = YLM (θ, ϕ) are the eigenfunctions |LM of L2 and Lz of Section 4.3 in spherical polar coordinates, the spherical harmonics. The explicit form of Lz = −i∂/∂ϕ from Exercise 2.5.13 indicates that ψLM has a ϕ dependence of exp(iMϕ) — with M an integer to keep ψLM single-valued. And if M is an integer, then L is an integer also. To determine the θ dependence of ψLM (θ, ϕ), we proceed in two main steps: (1) the determination of ψLL (θ, ϕ) and (2) the development of ψLM (θ, ϕ) in terms of ψLL with the phase fixed by ψL0 . Let ψLM (θ, ϕ) = LM (θ )eiMϕ . (12.157) 794 Chapter 12 Legendre Functions From L+ ψLL = 0, L being the largest M, using the form of L+ given in Exercises 2.5.14 and 12.6.7, we have  i(L+1)ϕ d e (12.158) − L cot θ LL (θ ) = 0, dθ and thus ψLL (θ, ϕ) = cL sinL θ eiLϕ . (12.159) Normalizing, we obtain ∗ cL cL 2π 0 π 0 sin2L+1 θ dθ dϕ = 1. The θ integral may be evaluated as a beta function (Exercise 8.4.9) and , ) √ (2L + 1)!! (2L)! 2L + 1 = L . |cL | = 4π(2L)!! 2 L! 4π (12.160) (12.161) This completes our first step. To obtain the ψLM , M = ±L, we return to the ladder operators. From Eqs. (4.83) and (4.84) and as shown in Exercise 12.7.2 (J+ replaced by L+ and J− replaced by L− ), , (L + M)! (L− )L−M ψLL (θ, ϕ), ψLM (θ, ϕ) = (2L)!(L − M)! (12.162) , (L − M)! (L+ )L+M ψL,−L (θ, ϕ). ψLM (θ, ϕ) = (2L)!(L + M)! Again, note that the relative phases are set by the ladder operators. L+ and L− operating on LM (θ )eiMϕ may be written as  d − M cot θ LM (θ ) L+ LM (θ )eiMϕ = ei(M+1)ϕ dθ = ei(M+1)ϕ sin1+M θ L− LM (θ )e iMϕ = −e i(M−1)ϕ d sin−M LM (θ ), d(cos θ )  d + M cot θ LM (θ ) dθ = ei(M−1)ϕ sin1−M θ (12.163) d sinM θ LM (θ ). d(cos θ ) Repeating these operations n times yields (L+ )n LM (θ )eiMϕ = (−1)n ei(M+n)ϕ sinn+M θ d n sin−M θ LM (θ ) , d(cos θ )n (12.164) (L− )n LM (θ )eiMϕ d n sinM θ LM (θ ) . 
= ei(M−n)ϕ sinn−M θ d(cos θ )n 12.7 Orbital Angular Momentum Operators 795 From Eq. (12.163), ψLM (θ, ϕ) = cL , (L + M)! d L−M sin2L θ, eiMϕ sin−M θ (2L)!(L − M)! d(cos θ )L−M (12.165) and for M = −L: ψL,−L (θ, ϕ) = cL −iLϕ L d 2L e sin θ sin2L θ (2L)! d(cos θ )2L = (−1)L cL sinL θ e−iLϕ . (12.166) Note the characteristic (−1)L phase of ψL,−L relative to ψL,L . This (−1)L enters from L   L sin2L θ = 1 − x 2 = (−1)L x 2 − 1 . (12.167) Combining Eqs. (12.163), (12.163), and (12.166), we obtain L ψLM (θ, ϕ) = (−1) cL , (L − M)! d L+M sin2L θ (−1)L+M eiMϕ sinM θ . (12.168) (2L)!(L + M)! d(cos θ )L+M Equations (12.165) and (12.168) agree if ψL0 (θ, ϕ) = cL √ dL 1 sin2L θ. (2L)! (d cos θ )L (12.169) Using Rodrigues’ formula, Eq. (12.65), we have 2L L! PL (cos θ ) ψL0 (θ, ϕ) = (−1)L cL √ (2L)! ) 2L + 1 L cL PL (cos θ ). = (−1) |cL | 4π (12.170) The last equality follows from Eq. (12.161). We now demand that ψL0 (0, 0) be real and positive. Therefore ) √ (2L)! 2L + 1 . cL = (−1) |cL | = (−1) 2L L! 4π L L (12.171) With (−1)L cL /|cL | = 1, ψL0 (θ, ϕ) in Eq. (12.170) may be identified with the spherical harmonic YL0 (θ, ϕ) of Section 12.6. 796 Chapter 12 Legendre Functions When we substitute the value of (−1)L cL into Eq. (12.168), , ) √ (2L)! 2L + 1 (L − M)! (−1)L+M ψLM (θ, ϕ) = L 2 L! 4π (2L)!(L + M)! · eiMϕ sinM θ d L+M sin2L θ d(cos θ )L+M , 2L + 1 (L − M)! iMϕ = e (−1)M 4π (L + M)!   L+M  L 1  2 M/2 d 2 , 1−x · L x −1 2 L! dx L+M ) x = cos θ, M ≥ 0. (12.172) The expression in the curly bracket is identified as the associated Legendre function (Eq. (12.151), and we have ψLM (θ, ϕ) = YLM (θ, ϕ) , M 2L + 1 (L − M)! = (−1) · · P M (cos θ )eiMϕ , 4π (L + M)! L in complete agreement with Section 12.6. Then by Eq. (12.73c), script is given by  ∗ YL−M (θ, ϕ) = (−1)M YLM (θ, ϕ) . YLM M ≥ 0, (12.173) for negative super(12.174) • Our angular momentum eigenfunctions ψLM (θ, ϕ) are identified with the spherical harmonics. 
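The normalization constant of Eq. (12.161) can be confirmed by evaluating the integral of Eq. (12.160) numerically. A hedged sketch (not from the text; assumes SciPy's `quad`):

```python
import numpy as np
from math import factorial
from scipy.integrate import quad

norms = []
for L in range(6):
    # Eq. (12.161): |c_L| = sqrt((2L+1)!/(4 pi)) / (2^L L!)
    cL = np.sqrt(factorial(2*L + 1) / (4*np.pi)) / (2**L * factorial(L))
    # Eq. (12.160): |c_L|^2 * 2 pi * int_0^pi sin^{2L+1}(theta) dtheta = 1
    I, _ = quad(lambda t: np.sin(t) ** (2*L + 1), 0.0, np.pi)
    norms.append(cL**2 * 2*np.pi * I)
```

Each entry of `norms` should come out as 1, reflecting the beta-function evaluation $\int_0^\pi \sin^{2L+1}\theta\, d\theta = 2\,(2L)!!/(2L+1)!!$ quoted around Eq. (12.161).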
The phase factor $(-1)^M$ is associated with the positive values of $M$ and is seen to be a consequence of the ladder operators.

• Our development of spherical harmonics here may be considered a portion of Lie algebra — related to group theory, Section 4.3.

Exercises

12.7.1 Using the known forms of $L_+$ and $L_-$ (Exercises 2.5.14 and 12.6.7), show that
$$\int \left[Y_L^M\right]^* L_- L_+ Y_L^M \, d\Omega = \int \left[L_+ Y_L^M\right]^* \left[L_+ Y_L^M\right] d\Omega.$$

12.7.2 Derive the relations
(a) $\psi_{LM}(\theta,\varphi) = \sqrt{\dfrac{(L+M)!}{(2L)!(L-M)!}}\,(L_-)^{L-M}\psi_{LL}(\theta,\varphi)$,
(b) $\psi_{LM}(\theta,\varphi) = \sqrt{\dfrac{(L-M)!}{(2L)!(L+M)!}}\,(L_+)^{L+M}\psi_{L,-L}(\theta,\varphi)$.
Hint. Equations (4.83) and (4.84) may be helpful.

12.7.3 Derive the multiple operator equations
$$(L_+)^n \Theta_{LM}(\theta)e^{iM\varphi} = (-1)^n e^{i(M+n)\varphi}\sin^{n+M}\theta \,\frac{d^n\left[\sin^{-M}\theta\,\Theta_{LM}(\theta)\right]}{d(\cos\theta)^n},$$
$$(L_-)^n \Theta_{LM}(\theta)e^{iM\varphi} = e^{i(M-n)\varphi}\sin^{n-M}\theta \,\frac{d^n\left[\sin^{M}\theta\,\Theta_{LM}(\theta)\right]}{d(\cos\theta)^n}.$$
Hint. Try mathematical induction.

12.7.4 Show, using $(L_-)^n$, that $Y_L^{-M}(\theta,\varphi) = (-1)^M Y_L^{M*}(\theta,\varphi)$.

12.7.5 Verify by explicit calculation that
(a) $L_+ Y_1^0(\theta,\varphi) = -\sqrt{\dfrac{3}{4\pi}}\sin\theta\, e^{i\varphi} = \sqrt{2}\,Y_1^1(\theta,\varphi)$,
(b) $L_- Y_1^0(\theta,\varphi) = +\sqrt{\dfrac{3}{4\pi}}\sin\theta\, e^{-i\varphi} = \sqrt{2}\,Y_1^{-1}(\theta,\varphi)$.
The signs (Condon–Shortley phase) are a consequence of the ladder operators $L_+$ and $L_-$.

12.8 THE ADDITION THEOREM FOR SPHERICAL HARMONICS

Trigonometric Identity

In the following discussion, $(\theta_1, \varphi_1)$ and $(\theta_2, \varphi_2)$ denote two different directions in our spherical coordinate system $(x_1, y_1, z_1)$, separated by an angle $\gamma$ (Fig. 12.16). The polar angles $\theta_1$, $\theta_2$ are measured from the $z_1$-axis. These angles satisfy the trigonometric identity
$$\cos\gamma = \cos\theta_1\cos\theta_2 + \sin\theta_1\sin\theta_2\cos(\varphi_1 - \varphi_2), \tag{12.175}$$
which is perhaps most easily proved by vector methods (compare Chapter 1).

The addition theorem, then, asserts that
$$P_n(\cos\gamma) = \frac{4\pi}{2n+1}\sum_{m=-n}^{n}(-1)^m Y_n^m(\theta_1,\varphi_1)\,Y_n^{-m}(\theta_2,\varphi_2), \tag{12.176}$$
or equivalently,
$$P_n(\cos\gamma) = \frac{4\pi}{2n+1}\sum_{m=-n}^{n} Y_n^m(\theta_1,\varphi_1)\left[Y_n^m(\theta_2,\varphi_2)\right]^*.{}^{21}$$

^21 The asterisk for complex conjugation may go on either spherical harmonic.
(12.177) 798 Chapter 12 Legendre Functions In terms of the associated Legendre functions, the addition theorem is Pn (cos γ ) = Pn (cos θ1 )Pn (cos θ2 ) +2 n  (n − m)! m P (cos θ1 )Pnm (cos θ2 ) cos m(ϕ1 − ϕ2 ). (n + m)! n m=1 (12.178) Equation (12.175) is a special case of Eq. (12.178), n = 1. Derivation of Addition Theorem We now derive Eq. (12.177). Let (γ , ξ ) be the angles that specify the direction (θ1 , ϕ1 ) in a coordinate system (x2 , y2 , z2 ) whose axis is aligned with (θ2 , ϕ2 ). (Actually, the choice of the 0 azimuth angle ξ in Fig. 12.16 is irrelevant.) First, we expand Ynm (θ1 , ϕ1 ) in spherical harmonics in the (γ , ξ ) angular variables: Ynm (θ1 , ϕ1 ) = n  m σ anσ Yn (γ , ξ ). σ =−n (12.179) We write no summation over n in Eq. (12.179) because the angular momentum n of Ynm is conserved (see Section 4.3); as a spherical harmonic, Ynm (θ1 , ϕ1 ) is an eigenfunction of L2 with eigenvalue n(n + 1). FIGURE 12.16 Two directions separated by an angle γ . 12.8 Addition Theorem for Spherical Harmonics 799 m , which we get by multiplying Eq. (12.179) We need for our proof only the coefficient an0 by [Yn0 (γ , ξ )]∗ and integrating over the sphere:  ∗ m = Ynm (θ1 , ϕ1 ) Yn0 (γ , ξ ) dγ ,ξ . (12.180) an0 Similarly, we expand Pn (cos γ ) in terms of spherical harmonics Ynm (θ1 , ϕ1 ): Pn (cos γ ) =  4π 2n + 1 1/2 Yn0 (γ , ξ ) = n  bnm Ynm (θ1 , ϕ1 ), (12.181) m=−n where the bnm will, of course, depend on θ2 , ϕ2 , that is, on the orientation of the z2 -axis. Multiplying by [Ynm (θ1 , ϕ1 )]∗ and integrating with respect to θ1 and ϕ1 over the sphere, we have (12.182) bnm = Pn (cos γ )Ynm∗ (θ1 , ϕ1 ) dθ1 ,ϕ1 . In terms of spherical harmonics Eq. (12.182) becomes  1/2  ∗ 4π Yn0 (γ , ψ) Ynm (θ1 , ϕ1 ) d = bnm . 2n + 1 (12.183) Note that the subscripts have been dropped from the solid angle element d. Since the range of integration is over all solid angles, the choice of polar axis is irrelevant. Then comparing Eqs. 
(12.180) and (12.183), we see that  1/2 4π ∗ m bnm = an0 . (12.184) 2n + 1 Now we evaluate Ynm (θ2 , ϕ2 ) using the expansion of Eq. (12.179) and noting that the values of (γ , ξ ) corresponding to (θ1 , ϕ1 ) = (θ2 , ϕ2 ) are (0, 0). The result is  1/2 m 0 m 2n + 1 m Yn (θ2 , ϕ2 ) = an0 Yn (0, 0) = an0 , (12.185) 4π all terms with nonzero σ vanishing. Substituting this back into Eq. (12.184), we obtain bnm = ∗ 4π  m Yn (θ2 , ϕ2 ) . 2n + 1 (12.186) Finally, substituting this expression for bnm into the summation, Eq. (12.181) yields Eq. (12.177), thus proving our addition theorem. Those familiar with group theory will find a much more elegant proof of Eq. (12.177) by using the rotation group.22 This is Exercise 4.4.5. One application of the addition theorem is in the construction of a Green’s function for the three-dimensional Laplace equation in spherical polar coordinates. If the source is on 22 Compare M. E. Rose, Elementary Theory of Angular Momentum, New York: Wiley (1957). 800 Chapter 12 Legendre Functions the polar axis at the point (r = a, θ = 0, ϕ = 0), then, by Eq. (12.4a), ∞  1 an 1 = = Pn (cos γ ) n+1 , R |r − zˆ a| r r >a n=0 = ∞  Pn (cos γ ) n=0 rn a n+1 , r < a. (12.187) Rotating our coordinate system to put the source at (a, θ2 , ϕ2 ) and the point of observation at (r, θ1 , ϕ1 ), we obtain G(r, θ1 , ϕ1 , a, θ2 , ϕ2 ) = = = 1 R ∞  n  n=0 ∗ 4π  m an Yn (θ1 , ϕ1 ) Ynm (θ2 , ϕ2 ) n+1 , 2n + 1 r m=−n ∞  n  n=0 ∗ 4π  m rn Yn (θ1 , ϕ1 ) Ynm (θ2 , ϕ2 ) n+1 , 2n + 1 a m=−n r > a, r < a. (12.188) In Section 9.7 this argument is reversed to provide another derivation of the Legendre polynomial addition theorem. Exercises 12.8.1 In proving the addition theorem, we assumed that Ynk (θ1 , ϕ1 ) could be expanded in a series of Ynm (θ2 , ϕ2 ), in which m varied from −n to +n but n was held fixed. What arguments can you develop to justify summing only over the upper index, m, and not over the lower index, n? Hints. 
One possibility is to examine the homogeneity of the Ynm , that is, Ynm may be expressed entirely in terms of the form cosn−p θ sinp θ , or x n−p−s y p zs /r n . Another possibility is to examine the behavior of the Legendre equation under rotation of the coordinate system. 12.8.2 An atomic electron with angular momentum L and magnetic quantum number M has a wave function ψ(r, θ, ϕ) = f (r)YLM (θ, ϕ). 12.8.3 Show that the sum of the electron densities in a given complete shell is spherically ∗ symmetric; that is, L M=−L ψ (r, θ, ϕ)ψ(r, θ, ϕ) is independent of θ and ϕ. The potential of an electron at point re in the field of Z protons at points rp is =− Z e2  1 . 4πε0 |re − rp | p=1 12.8 Addition Theorem for Spherical Harmonics 801 Show that this may be written as =−   Z ∗ e2   rp L 4π  M YL (θp , ϕp ) YLM (θe , ϕe ), 4πε0 re re 2L + 1 p=1 L,M where re > rp . How should  be written for re < rp ? 12.8.4 Two protons are uniformly distributed within the same spherical volume. If the coordinates of one element of charge are (r1 , θ1 , ϕ1 ) and the coordinates of the other are (r2 , θ2 , ϕ2 ) and r12 is the distance between them, the element of energy of repulsion will be given by dψ = ρ 2 r 2 dr1 sin θ1 dθ1 dϕ1 r22 dr2 sin θ2 dθ2 dϕ2 dτ1 dτ2 = ρ2 1 . r12 r12 Here ρ= charge 3e , = volume 4πR 3 charge density, 2 r12 = r12 + r22 − 2r1 r2 cos γ . Calculate the total electrostatic energy (of repulsion) of the two protons. This calculation is used in accounting for the mass difference in “mirror” nuclei, such as O15 and N15 . 6 e2 . 5R This is double that required to create a uniformly charged sphere because we have two separate cloud charges interacting, not one charge interacting with itself (with permutation of pairs not considered). ANS. 12.8.5 Each of the two 1S electrons in helium may be described by a hydrogenic wave function  3 1/2 Z ψ(r) = e−Zr/a0 πa03 in the absence of the other electron. Here Z, the atomic number, is 2. The symbol a0 is the Bohr radius, h¯ 2 /me2 . 
Find the mutual potential energy of the two electrons, given by e2 ψ ∗ (r1 )ψ ∗ (r2 ) ψ(r1 )ψ(r2 ) d 3 r1 d 3 r2 . r12 ANS. Note. d 3 r1 = r 2 dr1 sin θ1 dθ1 dϕ1 ≡ dτ1 , 12.8.6 r12 = |r1 − r2 |. 5e2 Z . 8a0 The probability of finding a 1S hydrogen electron in a volume element r 2 dr sin θ dθ dϕ is 1 exp[−2r/a0 ]r 2 dr sin θ dθ dϕ. πa03 802 Chapter 12 Legendre Functions Find the corresponding electrostatic potential. Calculate the potential from V (r1 ) = q 4πε0 ρ(r2 ) 3 d r2 , r12 with r1 not on the z-axis. Expand r12 . Apply the Legendre polynomial addition theorem and show that the angular dependence of V (r1 ) drops out.      1 1 2r1 2r1 q + Ŵ 2, . ANS. V (r1 ) = γ 3, 4πε0 2r1 a0 a0 a0 12.8.7 A hydrogen electron in a 2P orbit has a charge distribution ρ= q 64πa05 r 2 e−r/a0 sin2 θ, where a0 is the Bohr radius, h¯ 2 /me2 . Find the electrostatic potential corresponding to this charge distribution. 12.8.8 The electric current density produced by a 2P electron in a hydrogen atom is J = ϕˆ q h¯ 32ma05 e−r/a0 r sin θ. Using A(r1 ) = µ0 4π J(r2 ) 3 d r2 , |r1 − r2 | find the magnetic vector potential produced by this hydrogen electron. Hint. Resolve into Cartesian components. Use the addition theorem to eliminate γ , the angle included between r1 and r2 . 12.8.9 (a) As a Laplace series and as an example of Eq. (1.190) (now with complex functions), show that δ(1 − 2 ) = (b) ∞  n  Ynm∗ (θ2 , ϕ2 )Ynm (θ1 , ϕ1 ). n=0 m=−n Show also that this same Dirac delta function may be written as δ(1 − 2 ) = ∞  2n + 1 n=0 4π Pn (cos γ ). Now, if you can justify equating the summations over n term by term, you have an alternate derivation of the spherical harmonic addition theorem. 
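The addition theorem lends itself to a direct numerical test: pick two arbitrary directions, form $\cos\gamma$ from the trigonometric identity (12.175), and compare $P_n(\cos\gamma)$ with the spherical-harmonic sum of Eq. (12.177). A sketch (not from the text; assumes SciPy):

```python
import numpy as np
from math import factorial
from scipy.special import lpmv, eval_legendre

def Y(l, m, theta, phi):
    # Condon-Shortley-phased spherical harmonic from scipy's lpmv.
    if m < 0:
        return (-1) ** (-m) * np.conj(Y(l, -m, theta, phi))
    norm = np.sqrt((2*l + 1) / (4*np.pi) * factorial(l - m) / factorial(l + m))
    return norm * lpmv(m, l, np.cos(theta)) * np.exp(1j * m * phi)

t1, p1 = 0.8, 0.5          # two arbitrary directions
t2, p2 = 1.9, 2.7
# Eq. (12.175): cos(gamma) from the trigonometric identity
cosg = np.cos(t1)*np.cos(t2) + np.sin(t1)*np.sin(t2)*np.cos(p1 - p2)

errors = []
for n in range(6):
    s = sum(Y(n, m, t1, p1) * np.conj(Y(n, m, t2, p2))
            for m in range(-n, n + 1))
    # Eq. (12.177): P_n(cos gamma) = 4 pi/(2n+1) * sum_m Y Y*
    errors.append(abs(eval_legendre(n, cosg) - 4*np.pi/(2*n + 1) * s))
```

The imaginary parts of the sum cancel term by term ($+m$ against $-m$), so the comparison with the real $P_n(\cos\gamma)$ is exact to rounding.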
12.9 Integrals of Three Y’s 12.9 803 INTEGRALS OF PRODUCTS OF THREE SPHERICAL HARMONICS Frequently in quantum mechanics we encounter integrals of the general form 2π π  M1  M2  M3   M1 ∗ M2 M3   YL1 YL2 YL3 = YL1 YL2 YL3 sin θ dθ dϕ 0 = , 0 (2L2 + 1)(2L3 + 1) C(L2 L3 L1 |000)C(L2 L3 L1 |M2 M3 M1 ), 4π(2L1 + 1) (12.189) in which all spherical harmonics depend on θ, ϕ. The first factor in the integrand may come from the wave function of a final state and the third factor from an initial state, whereas the middle factor may represent an operator that is being evaluated or whose “matrix element” is being determined. By using group theoretical methods, as in the quantum theory of angular momentum, we may give a general expression for the forms listed. The analysis involves the vector– addition or Clebsch–Gordan coefficients from Section 4.4, which are tabulated. Three general restrictions appear. 1. The integral vanishes unless the triangle condition of the L’s (angular momentum) is zero, |L1 − L3 | ≤ L2 ≤ L1 + L3 . 2. The integral vanishes unless M2 + M3 = M1 . Here we have the theoretical foundation of the vector model of atomic spectroscopy. M 3. Finally, the integral vanishes unless the product [YLM11 ]∗ YLM22 YL33 is even, that is, unless L1 + L2 + L3 is an even integer. This is a parity conservation law. The key to the determination of the integral in Eq. (12.189) is the expansion of the product of two spherical harmonics depending on the same angles (in contrast to the addition theorem), which are coupled by Clebsch–Gordan coefficients to angular momentum L, M, which, from its rotational transformation properties, must be proportional to YLM (θ, ϕ); that is,  M C(L2 L3 L1 |M2 M3 M1 )YLM22 (θ, ϕ)YL33 (θ, ϕ) ∼ YLM11 (θ, ϕ). M1 ,M2 For details we refer to Edmonds.23 Let us outline some of the steps of this general and powerful approach using Section 4.4. The Wigner–Eckart theorem applied to the matrix element in Eq. 
(12.189) yields  M1  M2  M3  YL1 YL2 YL3 = (−1)L2 −L3 +L1 C(L2 L3 L1 |M2 M3 M1 ) YL YL2 YL3  , · √1 (2L1 + 1) (12.190) 23 E. U. Condon and G. H. Shortley, The Theory of Atomic Spectra, Cambridge, UK: Cambridge University Press (1951); M. E. Rose, Elementary Theory of Angular Momentum, New York: Wiley (1957); A. Edmonds, Angular Momentum in Quantum Mechanics, Princeton, NJ: Princeton University Press (1957); E. P. Wigner, Group Theory and Its Applications to Quantum Mechanics (translated by J. J. Griffin), New York: Academic Press (1959). 804 Chapter 12 Legendre Functions where the double bars denote the reduced matrix element, which no longer depends on the Mi . Selection rules (1) and (2) mentioned earlier follow directly from the Clebsch– Gordan coefficient in Eq. (12.190). Next we use Eq. (12.190) for M1 = M2 = M3 = 0 in conjunction with Eq. (12.153) for m = 0, which yields  0  0  0  (−1)L2 −L3 +L1 YL1 YL2 YL3 = √ C(L2 L3 L1 |000) · YL1 YL2 YL3  2L1 + 1 ) (2L1 + 1)(2L2 + 1)(2L3 + 1) = 4π 1 1 · · PL (x)PL2 (x)PL3 (x) dx, 2 −1 1 (12.191) where x = cos θ . By elementary methods it can be shown that 1 −1 PL1 (x)PL2 (x)PL3 (x) dx = 2 C(L2 L3 L1 |000)2 . 2L1 + 1 (12.192) Substituting Eq. (12.192) into (12.191) we obtain L2 −L3 +L1 YL1 YL2 YL3  = (−1) ) C(L2 L3 L1 |000) (2L2 + 1)(2L3 + 1) . 4π (12.193) The aforementioned parity selection rule (3) above follows from Eq. (12.193) in conjunction with the phase relation C(L2 L3 L1 | − M2 , −M3 , −M1 ) = (−1)L2 +L3 −L1 C(L2 L3 L1 |M2 M3 M1 ). (12.194) Note that the vector-addition coefficients are developed in terms of the Condon–Shortley phase convention,23 in which the (−1)m of Eq. (12.153) is associated with the positive m. It is possible to evaluate many of the commonly encountered integrals of this form with the techniques already developed. The integration over azimuth may be carried out by inspection: 0 2π e−iM1 ϕ eiM2 ϕ eiM3 ϕ dϕ = 2πδM2 +M3 −M1 ,0 . 
(12.195) Physically this corresponds to the conservation of the z component of angular momentum. Application of Recurrence Relations A glance at Table 12.3 will show that the θ -dependence of YLM22 , that is, PLM22 (θ ), can be expressed in terms of cos θ and sin θ . However, a factor of cos θ or sin θ may be combined M with the YL33 factor by using the associated Legendre polynomial recurrence relations. For 12.9 Integrals of Three Y’s instance, from Eqs. (12.92) and (12.93) we get  (L − M + 1)(L + M + 1) 1/2 M YL+1 cos θ YLM = + (2L + 1)(2L + 3)  (L − M)(L + M) 1/2 M + YL−1 (2L − 1)(2L + 1)  (L + M + 1)(L + M + 2) 1/2 M+1 iϕ M e sin θ YL = − YL+1 (2L + 1)(2L + 3)  (L − M)(L − M − 1) 1/2 M+1 + YL−1 (2L − 1)(2L + 1)  (L − M + 1)(L − M + 2) 1/2 M−1 e−iϕ sin θ YLM = + YL+1 (2L + 1)(2L + 3)  (L + M)(L + M − 1) 1/2 M−1 − YL−1 . (2L − 1)(2L + 1) 805 (12.196) (12.197) (12.198) Using these equations, we obtain  (L − M + 1)(L + M + 1) 1/2 M1 ∗ M YL1 cos θ YL d = δM1 ,M δL1 ,L+1 (2L + 1)(2L + 3)  (L − M)(L + M) 1/2 + δM1 ,M δL1 ,L−1 . (12.199) (2L − 1)(2L + 1) The occurrence of the Kronecker delta (L1 , L ± 1) is an aspect of the conservation of angular momentum. Physically, this integral arises in a consideration of ordinary atomic electromagnetic radiation (electric dipole). It leads to the familiar selection rule that transitions to an atomic level with orbital angular momentum quantum number L1 can originate only from atomic levels with quantum numbers L1 − 1 or L1 + 1. The application to expressions such as quadrupole moment ∼ YLM∗ (θ, ϕ)P2 (cos θ )YLM (θ, ϕ) d is more involved but perfectly straightforward. Exercises 12.9.1 Verify (a) (b) 1 YLM (θ, ϕ)Y00 (θ, ϕ)YLM∗ (θ, ϕ) d = √ , 4π ) , 3 (L + M + 1)(L − M + 1) M∗ , d = YLM Y10 YL+1 4π (2L + 1)(2L + 3) 806 Chapter 12 Legendre Functions (c) (d) ) , (L + M + 1)(L + M + 2) , (2L + 1)(2L + 3) , 3 (L − M)(L − M − 1) M 1 M+1∗ YL Y1 YL−1 d = − . 
8π (2L − 1)(2L + 1) M+1∗ YLM Y11 YL+1 d = 3 8π ) These integrals were used in an investigation of the angular correlation of internal conversion electrons. 12.9.2 Show that (a) (b) 12.9.3     2(L + 1) , N = L + 1, (2L + 1)(2L + 3) xPL (x)PN (x) dx = 2L  −1   , N = L − 1, (2L − 1)(2L + 1)  2(L + 1)(L + 2)   , N = L + 2,    (2L + 1)(2L + 3)(2L + 5)   1  2(2L2 + 2L − 1) x 2 PL (x)PN (x) dx = , N = L,  (2L − 1)(2L + 1)(2L + 3) −1     2L(L − 1)   , N = L − 2.  (2L − 3)(2L − 1)(2L + 1) 1 Since xPn (x) is a polynomial (degree n + 1), it may be represented by the Legendre series ∞  as Ps (x). xPn (x) = s=0 (a) Show that as = 0 for s < n − 1 and s > n + 1. (b) Calculate an−1 , an , and an+1 and show that you have reproduced the recurrence relation, Eq. 12.17. Note. This argument may be put in a general form to demonstrate the existence of a three-term recurrence relation for any of our complete sets of orthogonal polynomials: xϕn = an+1 ϕn+1 + an ϕn + an−1 ϕn−1 . 12.9.4 Show that Eq. (12.199) is a special case of Eq. (12.190) and derive the reduced matrix element YL1 Y1 YL . √ 3(2L + 1) . ANS. YL1 Y1 YL  = (−1)L1 +1−L C(1LL1 |000) 4π 12.10 LEGENDRE FUNCTIONS OF THE SECOND KIND In all the analysis so far in this chapter we have been dealing with one solution of Legendre’s equation, the solution Pn (cos θ ), which is regular (finite) at the two singular points of 12.10 Legendre Functions of the Second Kind 807 the differential equation, cos θ = ±1. From the general theory of differential equations it is known that a second solution exists. We develop this second solution, Qn , with nonnegative integer n (because Qn in applications will occur in conjunction with Pn ), by a series solution of Legendre’s equation. Later a closed form will be obtained. 
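Before developing the series, it is worth noting that $Q_n$ can already be generated numerically on $(-1, 1)$ from the recurrence relation it shares with $P_n$, seeded with the closed forms $Q_0 = \tfrac{1}{2}\ln[(1+x)/(1-x)]$ and $Q_1 = xQ_0 - 1$ of Eq. (12.221). A sketch (not from the text; NumPy only):

```python
import numpy as np

def legendre_PQ(nmax, x):
    # Upward recurrence (2n+1) x f_n = (n+1) f_{n+1} + n f_{n-1},
    # shared by P_n and Q_n; only the seeds differ (Eq. (12.221)).
    P = [np.ones_like(x), np.asarray(x, dtype=float)]
    Q0 = 0.5 * np.log((1.0 + x) / (1.0 - x))
    Q = [Q0, x * Q0 - 1.0]
    for n in range(1, nmax):
        P.append(((2*n + 1) * x * P[n] - n * P[n - 1]) / (n + 1))
        Q.append(((2*n + 1) * x * Q[n] - n * Q[n - 1]) / (n + 1))
    return P, Q

x = np.linspace(-0.9, 0.9, 7)
P, Q = legendre_PQ(5, x)

# Eq. (12.222): Q_2 = P_2 Q_0 - (3/2) P_1
err_Q2 = np.max(np.abs(Q[2] - (P[2] * Q[0] - 1.5 * x)))

# Parity property: Q_n(-x) = (-1)^{n+1} Q_n(x)
Pm, Qm = legendre_PQ(5, -x)
err_parity = max(np.max(np.abs(Qm[n] - (-1) ** (n + 1) * Q[n]))
                 for n in range(6))
```

This is the computational scheme proposed in Exercise 12.10.7 below; the parity check reproduces the special value $Q_n(-z) = (-1)^{n+1} Q_n(z)$ listed at the end of this section.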
Series Solutions of Legendre's Equation

To solve
$$\frac{d}{dx}\left[(1-x^2)\frac{dy}{dx}\right] + n(n+1)y = 0 \tag{12.200}$$
we proceed as in Chapter 9, letting^24
$$y = \sum_{\lambda=0}^{\infty} a_\lambda x^{k+\lambda}, \tag{12.201}$$
with
$$y' = \sum_{\lambda=0}^{\infty} (k+\lambda)\, a_\lambda x^{k+\lambda-1}, \tag{12.202}$$
$$y'' = \sum_{\lambda=0}^{\infty} (k+\lambda)(k+\lambda-1)\, a_\lambda x^{k+\lambda-2}. \tag{12.203}$$

Substitution into the original differential equation gives
$$\sum_{\lambda=0}^{\infty} (k+\lambda)(k+\lambda-1)\, a_\lambda x^{k+\lambda-2} + \sum_{\lambda=0}^{\infty} \left[n(n+1) - 2(k+\lambda) - (k+\lambda)(k+\lambda-1)\right] a_\lambda x^{k+\lambda} = 0. \tag{12.204}$$

The indicial equation is
$$k(k-1) = 0, \tag{12.205}$$
with solutions $k = 0, 1$. We try first $k = 0$ with $a_0 = 1$, $a_1 = 0$. Then our series is described by the recurrence relation
$$(\lambda+2)(\lambda+1)\, a_{\lambda+2} + \left[n(n+1) - 2\lambda - \lambda(\lambda-1)\right] a_\lambda = 0, \tag{12.206}$$
which becomes
$$a_{\lambda+2} = -\frac{(n+\lambda+1)(n-\lambda)}{(\lambda+1)(\lambda+2)}\, a_\lambda. \tag{12.207}$$

^24 Note that $x$ may be replaced by the complex variable $z$.

Labeling this series, from Eq. (12.201), $y(x) = p_n(x)$, we have
$$p_n(x) = 1 - \frac{n(n+1)}{2!}x^2 + \frac{(n-2)n(n+1)(n+3)}{4!}x^4 + \cdots. \tag{12.208}$$

The second solution of the indicial equation, $k = 1$, with $a_0 = 0$, $a_1 = 1$, leads to the recurrence relation
$$a_{\lambda+2} = -\frac{(n+\lambda+2)(n-\lambda-1)}{(\lambda+2)(\lambda+3)}\, a_\lambda. \tag{12.209}$$

Labeling this series, from Eq. (12.201), $y(x) = q_n(x)$, we obtain
$$q_n(x) = x - \frac{(n-1)(n+2)}{3!}x^3 + \frac{(n-3)(n-1)(n+2)(n+4)}{5!}x^5 - \cdots. \tag{12.210}$$

Our general solution of Eq. (12.200), then, is
$$y_n(x) = A_n p_n(x) + B_n q_n(x), \tag{12.211}$$
provided we have convergence. From Gauss' test, Section 5.2 (see Example 5.2.4), we do not have convergence at $x = \pm 1$. To get out of this difficulty, we set the separation constant $n$ equal to an integer (Exercise 9.5.5) and convert the infinite series into a polynomial.

For $n$ a positive even integer (or zero), series $p_n$ terminates, and with a proper choice of a normalizing factor (selected to obtain agreement with the definition of $P_n(x)$ in Section 12.1),
$$P_n(x) = (-1)^{n/2}\frac{n!}{2^n\left[(n/2)!\right]^2}\, p_n(x) = (-1)^s \frac{(2s)!}{2^{2s}(s!)^2}\, p_{2s}(x) = (-1)^s \frac{(2s-1)!!}{(2s)!!}\, p_{2s}(x), \quad \text{for } n = 2s.$$
(12.212)

If $n$ is a positive odd integer, series $q_n$ terminates after a finite number of terms, and we write
$$P_n(x) = (-1)^{(n-1)/2}\frac{n!}{2^{n-1}\left\{\left[(n-1)/2\right]!\right\}^2}\, q_n(x) = (-1)^s \frac{(2s+1)!}{2^{2s}(s!)^2}\, q_{2s+1}(x) = (-1)^s \frac{(2s+1)!!}{(2s)!!}\, q_{2s+1}(x), \quad \text{for } n = 2s+1. \tag{12.213}$$

Note that these expressions hold for all real values of $x$, $-\infty < x < \infty$, and for complex values in the finite complex plane. The constants that multiply $p_n$ and $q_n$ are chosen to make $P_n$ agree with the Legendre polynomials given by the generating function.

Equations (12.208) and (12.210) may still be used with $n = \nu$, not an integer, but now the series no longer terminates, and the range of convergence becomes $-1 < x < 1$. The endpoints, $x = \pm 1$, are not included.

It is sometimes convenient to reverse the order of the terms in the series. This may be done by putting
$s = \dfrac{n}{2} - \lambda$ in the first form of $P_n(x)$, $n$ even,
$s = \dfrac{n-1}{2} - \lambda$ in the second form of $P_n(x)$, $n$ odd,
so that Eqs. (12.212) and (12.213) become
$$P_n(x) = \sum_{s=0}^{[n/2]} (-1)^s \frac{(2n-2s)!}{2^n\, s!\,(n-s)!\,(n-2s)!}\, x^{n-2s}, \tag{12.214}$$
where the upper limit is $s = n/2$ (for $n$ even) or $(n-1)/2$ (for $n$ odd). This reproduces Eq. (12.8) of Section 12.1, which is obtained directly from the generating function. This agreement with Eq. (12.8) is the reason for the particular choice of normalization in Eqs. (12.212) and (12.213).

Qn(x) Functions of the Second Kind

It will be noticed that we have used only $p_n$ for $n$ even and $q_n$ for $n$ odd (because they terminated for this choice of $n$). We may now define a second solution of Legendre's equation (Fig. 12.17) by
$$Q_n(x) = (-1)^{n/2}\frac{2^n\left[(n/2)!\right]^2}{n!}\, q_n(x) = (-1)^s \frac{(2s)!!}{(2s-1)!!}\, q_{2s}(x), \quad \text{for } n \text{ even, } n = 2s, \tag{12.215}$$
$$Q_n(x) = (-1)^{(n+1)/2}\frac{2^{n-1}\left\{\left[(n-1)/2\right]!\right\}^2}{n!}\, p_n(x) = (-1)^{s+1} \frac{(2s)!!}{(2s+1)!!}\, p_{2s+1}(x), \quad \text{for } n \text{ odd, } n = 2s+1. \tag{12.216}$$

FIGURE 12.17 Second Legendre function, $Q_n(x)$, $0 \le x < 1$.

FIGURE 12.18 Second Legendre function, $Q_n(x)$, $x > 1$.

This choice of normalizing factors forces $Q_n$ to satisfy the same recurrence relations as $P_n$. This may be verified by substituting Eqs. (12.215) and (12.216) into Eqs. (12.17) and (12.26).

Inspection of the (series) recurrence relations (Eqs. (12.207) and (12.209)), that is, by the Cauchy ratio test, shows that $Q_n(x)$ will converge for $-1 < x < 1$. If $|x| \ge 1$, these series forms of our second solution diverge. A solution in a series of negative powers of $x$ can be developed for the region $|x| > 1$ (Fig. 12.18), but we proceed to a closed-form solution that can be used over the entire complex plane (apart from the singular points $x = \pm 1$, and with care on cut lines).

Closed-Form Solutions

Frequently, a closed form of the second solution, $Q_n(z)$, is desirable. This may be obtained by the method discussed in Section 9.6. We write
$$Q_n(z) = P_n(z)\left\{A_n + B_n \int^z \frac{dx}{(1-x^2)\left[P_n(x)\right]^2}\right\}, \tag{12.217}$$
in which the constant $A_n$ replaces the evaluation of the integral at the arbitrary lower limit. Both constants, $A_n$ and $B_n$, may be determined for special cases.

For $n = 0$, Eq. (12.217) yields
$$Q_0(z) = P_0(z)\left\{A_0 + B_0 \int^z \frac{dx}{(1-x^2)\left[P_0(x)\right]^2}\right\} = A_0 + \frac{B_0}{2}\ln\frac{1+z}{1-z} = A_0 + B_0\left(z + \frac{z^3}{3} + \frac{z^5}{5} + \cdots + \frac{z^{2s+1}}{2s+1} + \cdots\right), \tag{12.218}$$
the last expression following from a Maclaurin expansion of the logarithm. Comparing this with the series solution (Eq. (12.210)),
$$Q_0(z) = q_0(z) = z + \frac{z^3}{3} + \frac{z^5}{5} + \cdots + \frac{z^{2s+1}}{2s+1} + \cdots, \tag{12.219}$$
we have $A_0 = 0$, $B_0 = 1$. Similar results follow for $n = 1$. We obtain
$$Q_1(z) = z\left\{A_1 + B_1 \int^z \frac{dx}{(1-x^2)x^2}\right\} = A_1 z + B_1\left(\frac{z}{2}\ln\frac{1+z}{1-z} - 1\right). \tag{12.220}$$
Expanding in a power series and comparing with $Q_1(z) = -p_1(z)$, we have $A_1 = 0$, $B_1 = 1$. Therefore we may write
$$Q_0(z) = \frac{1}{2}\ln\frac{1+z}{1-z}, \qquad Q_1(z) = \frac{z}{2}\ln\frac{1+z}{1-z} - 1, \qquad |z| < 1. \tag{12.221}$$

Perhaps the best way of determining the higher-order $Q_n(z)$ is to use the recurrence relation (Eq.
(12.17)), which may be verified for both x 2 < 1 and x 2 > 1 by substituting in the series forms. This recurrence relation technique yields 1 1+z 3 − P1 (z). Q2 (z) = P2 (z) ln 2 1−z 2 (12.222) Repeated application of the recurrence formula leads to 1 + z 2n − 1 2n − 5 1 Qn (z) = Pn (z) ln − Pn−1 (z) − Pn−3 (z) − · · · . 2 1−z 1·n 3(n − 1) (12.223) From the form ln[(1 + z)/(1 − z)] it will be seen that for real z these expressions hold in the range −1 < x < 1. If we wish to have closed forms valid outside this range, we need only replace ln 1+x 1−x by ln z+1 . z−1 When using the latter form, valid for large z, we take the line interval −1 ≤ x ≤ 1 as a cut line. Values of Qn (x) on the cut line are customarily assigned by the relation Qn (x) = 1 Qn (x + i0) + Qn (x − i0) , 2 (12.224) 812 Chapter 12 Legendre Functions the arithmetic average of approaches from the positive imaginary side and from the negative imaginary side. Note that for z → x > 1, z − 1 → (1 − x)e±iπ . The result is that for all z, except on the real axis, −1 ≤ x ≤ 1, we have 1 z+1 ln , 2 z−1 1 z+1 Q1 (z) = z ln − 1, 2 z−1 Q0 (z) = (12.225) (12.226) and so on. For convenient reference some special values of Qn (z) are given. Qn (1) = ∞, from the logarithmic term (Eq. (12.223)). Qn (∞) = 0. This is best obtained from a representation of Qn (x) as a series of negative powers of x, Exercise 12.10.4. 3. Qn (−z) = (−1)n+1 Qn (z). This follows from the series form. It may also be derived by using Q0 (z), Q1 (z) and the recurrence relation (Eq. (12.17)). 4. Qn (0) = 0, for n even, by (3). {[(n − 1)/2]!}2 n−1 5. Qn (0) = (−1)(n+1)/2 2 n! (2s)!! , for n odd, n = 2s + 1. = (−1)s+1 (2s + 1)!! 1. 2. This last result comes from the series form (Eq. (12.216)) with pn (0) = 1. Exercises 12.10.1 Derive the parity relation for Qn (x). 12.10.2 From Eqs. (12.212) and (12.213) show that (a) (b) P2n (x) = n (−1)n  (2n + 2s − 1)! x 2s . (−1)s 2n−1 (2s)!(n + s − 1)!(n − s)! 2 s=0 P2n+1 (x) = n (2n + 2s + 1)! 
(−1)n  x 2s+1 . (−1)s 2n (2s + 1)!(n + s)!(n − s)! 2 s=0 Check the normalization by showing that one term of each series agrees with the corresponding term of Eq. (12.8). 12.10.3 Show that (a) Q2n (x) = (−1)n 22n + 22n n  (n + s)!(n − s)! 2s+1 (−1)s x (2s + 1)!(2n − 2s)! s=0 ∞  (n + s)!(2s − 2n)! 2s+1 x , (2s + 1)!(s − n)! s=n+1 |x| < 1. 12.11 Vector Spherical Harmonics (b) Q2n+1 (x) = (−1)n+1 22n + 22n+1 12.10.4 (a) 813 n  (n + s)!(n − s)! 2s (−1)s x (2s)!(2n − 2s + 1)! s=0 ∞  (n + s)!(2s − 2n − 2)! 2s x , (2s)!(s − n − 1)! s=n+1 |x| < 1. Starting with the assumed form Qn (x) = ∞  λ=0 b−λ x k−λ , show that Qn (x) = b0 x −n−1 (b) ∞  (n + s)!(n + 2s)!(2n + 1)! s=0 s!(n!)2 (2n + 2s + 1)! x −2s . The standard choice of b0 is b0 = 2n (n!)2 . (2n + 1)! Show that this choice of bo brings this negative power-series form of Qn (x) into agreement with the closed-form solutions. 12.10.5 Verify that the Legendre functions of the second kind, Qn (x), satisfy the same recurrence relations as Pn (x), both for |x| < 1 and for |x| > 1: (2n + 1)xQn (x) = (n + 1)Qn+1 (x) + nQn−1 (x), (2n + 1)Qn (x) = Q′n+1 (x) − Q′n−1 (x). 12.10.6 (a) (b) 12.10.7 Using the recurrence relations, prove (independent of the Wronskian relation) that  n Pn (x)Qn−1 (x) − Pn−1 (x)Qn (x) = P1 (x)Q0 (x) − P0 (x)Q1 (x). By direct substitution show that the right-hand side of this equation equals 1. (a) Write a subroutine that will generate Qn (x) and Q0 through Qn−1 based on the recurrence relation for these Legendre functions of the second kind. Take x to be within (−1, 1) — excluding the endpoints. Hint. Take Q0 (x) and Q1 (x) to be known. (b) Test your subroutine for accuracy by computing Q10 (x) and comparing with the values tabulated in AMS-55 (for a complete reference, see Additional Readings in Chapter 8). 12.11 VECTOR SPHERICAL HARMONICS Most of our attention in this chapter has been directed toward solving the equations of scalar fields, such as the electrostatic potential. 
This was done primarily because the scalar fields are easier to handle than vector fields. However, with scalar field problems under firm control, more and more attention is being paid to vector field problems. 814 Chapter 12 Legendre Functions Maxwell’s equations for the vacuum, where the external current and charge densities vanish, lead to the wave (or vector Helmholtz) equation for the vector potential A. In a partial wave expansion of A in spherical polar coordinates we want to use angular eigenfunctions that are vectors. To this end we write the coordinate unit vectors xˆ , yˆ , zˆ in spherical notation (see Section 4.4), xˆ + i yˆ eˆ +1 = − √ , 2 xˆ − i yˆ eˆ −1 = √ , 2 eˆ 0 = zˆ , (12.227) so that eˆ m form a spherical tensor of rank 1. If we couple the spherical harmonics with the eˆ m to total angular momentum J using the relevant Clebsch–Gordan coefficients, we are led to the vector spherical harmonics: YJ LMJ (θ, ϕ) =  m,M C(L1J |MmMJ )YLM (θ, ϕ)ˆem . (12.228) It is obvious that they obey the orthogonality relations Y∗J LMJ (θ, ϕ) · YJ ′ L′ MJ′ (θ, ϕ) d = δJ J ′ δLL′ δMJ MM′ . (12.229) Given J , the selection rules of angular momentum coupling tell us that L can only take on the values J + 1, J , and J − 1. If we look up the Clebsch–Gordan coefficients and invert Eq. (12.228) we get rˆ YLM (θ, ϕ) = − L+1 2L + 1 1/2 YLL+1M L + 2L + 1 1/2 YLL−1M , (12.230) displaying the vector character of the Y and the orbital angular momentum contents, L + 1 and L − 1, of rˆ YLM . Under the parity operations (coordinate inversion) the vector spherical harmonics transform as YLL+1M (θ ′ , ϕ ′ ) = (−1)L+1 YLL+1M (θ, ϕ), YLL−1M (θ ′ , ϕ ′ ) = (−1)L+1 YLL−1M (θ, ϕ), ′ ′ (12.231) L YLLM (θ , ϕ ) = (−1) YLLM (θ, ϕ), where θ′ = π − θ ϕ ′ = π + ϕ. (12.232) The vector spherical harmonics are useful in a further development of the gradient (Eq. (2.46)), divergence (Eq. (2.47)) and curl (Eq. 
(2.49)) operators in spherical polar co- 12.11 Vector Spherical Harmonics 815 ordinates:    L L + 1 1/2 d F YLL+1M − ∇ F (r)YLM (θ, ϕ) = − 2L + 1 dr r 1/2  d L+1 L + F YLL−1M , + 2L + 1 dr r     L+2 L + 1 1/2 dF + F YLM (θ, ϕ), ∇ · F (r)YLL+1M (θ, ϕ) = − 2L + 1 dr r  1/2   L−1 dF L ∇ · F (r)YLL−1M (θ, ϕ) = − F YLM (θ, ϕ), 2L + 1 dr r  ∇ · F (r)YLLM (θ, ϕ) = 0, 1/2   dF L+2 L + F YLLM , ∇ × F (r)YLL+1M = i 2L + 1 dr r 1/2    dF L L − F YLL+1M ∇ × F (r)YLLM = i 2L + 1 dr r 1/2   dF L+1 L+1 + F YLL−1M , +i 2L + 1 dr r 1/2   L−1 dF L+1 ∇ × F (r)YLL−1M = i − F YLLM . 2L + 1 dr r (12.233) (12.234) (12.235) (12.236) (12.237) (12.238) (12.239) If we substitute Eq. (12.230) into the radial component rˆ ∂/∂r of the gradient operator, for example, we obtain both dF /dr terms in Eq. (12.233). For a complete derivation of Eqs. (12.233) to (12.239) we refer to the literature.25 These relations play an important role in the partial wave expansion of classical and quantum electrodynamics. The definitions of the vector spherical harmonics given here are dictated by convenience, primarily in quantum mechanical calculations, in which the angular momentum is a significant parameter. Further examples of the usefulness and power of the vector spherical harmonics will be found in Blatt and Weisskopf,25 in Morse and Feshbach (see General References book’s end), and in Jackson’s Classical Electrodynamics, 3rd ed., New York: J. Wiley & Sons (1998), which use vector spherical harmonics in a description of multipole radiation and related electromagnetic problems. • Vector spherical harmonics are developed from coupling L units of orbital angular momentum and 1 unit of spin angular momentum. An extension, coupling L units of orbital angular momentum and 2 units of spin angular momentum to form tensor spherical harmonics, is presented by Mathews.26 25 E. H. Hill, Theory of vector spherical harmonics, Am. J. Phys. 22: 211 (1954); also J. M. Blatt and V. 
Weisskopf, Theoret- ical Nuclear Physics, New York: Wiley (1952). Note that Hill assigns phases in accordance with the Condon–Shortley phase convention (Section 4.4). In Hill’s notation XLM = YLLM , VLM = YLL+1M , WLM = YLL−1M . 26 J. Mathews, Gravitational multipole radiation, in In Memoriam (H.P. Robertson, ed.), Philadelphia: Society for Industrial and Applied Mathematics (1963). 816 Chapter 12 Legendre Functions • The major application of tensor spherical harmonics is in the investigation of gravitational radiation. Exercises 12.11.1 Construct the l = 0, m = 0 and l = 1, m = 0 vector spherical harmonics. ANS. Y010 = −ˆr(4π)−1/2 Y000 = 0 Y120 = −ˆr(2π)−1/2 cos θ − θˆ (8π)−1/2 sin θ 1/2 sin θ ˆ Y110 = ϕi(3/8π) Y100 = rˆ (4π)−1/2 cos θ − θˆ (4π)−1/2 sin θ . 12.11.2 Verify that the parity of YLL+1M is (−1)L+1 , that of YLLM is (−1)L , and that of YLL−1M is (−1)L+1 . What happened to the M-dependence of the parity? Hint. rˆ and ϕˆ have odd parity; θˆ has even parity (compare Exercise 2.5.8). 12.11.3 Verify the orthonormality of the vector spherical harmonics YJ LMJ . 12.11.4 In Jackson’s Classical Electrodynamics, 3rd ed., (see Additional Readings of Chapter 11 for the reference) defines YLLM by the equation YLLM (θ, ϕ) = √ 1 LY M (θ, ϕ), L(L + 1) L in which the angular momentum operator L is given by L = −i(r × ∇). Show that this definition agrees with Eq. (12.228). 12.11.5 Show that L  M=−L Y∗LLM (θ, ϕ) · YLLM (θ, ϕ) = 2L + 1 . 4π Hint. One way is to use Exercise 12.11.4 with L expanded in Cartesian coordinates using the raising and lowering operators of Section 4.3. 12.11.6 Show that YLLM · (ˆr × YLLM ) d = 0. The integrand represents an interference term in electromagnetic radiation that contributes to angular distributions but not to total intensity. Additional Readings Hobson, E. W., The Theory of Spherical and Ellipsoidal Harmonics. New York: Chelsea (1955). 
This is a very complete reference and the classic text on Legendre polynomials and all related functions.

Smythe, W. R., Static and Dynamic Electricity, 3rd ed. New York: McGraw-Hill (1989).

See also the references listed in Sections 4.4 and 12.9 and at the end of Chapter 13.

CHAPTER 13

MORE SPECIAL FUNCTIONS

In this chapter we shall study four sets of orthogonal polynomials: Hermite, Laguerre, and Chebyshev¹ polynomials of the first and second kinds. Although these four sets are of less importance in mathematical physics than are the Bessel and Legendre functions of Chapters 11 and 12, they are used and therefore deserve attention. For example, Hermite polynomials occur in solutions of the simple harmonic oscillator of quantum mechanics and Laguerre polynomials in wave functions of the hydrogen atom. Because the general mathematical techniques duplicate those of the preceding two chapters, the development of these functions is only outlined. Detailed proofs, along the lines of Chapters 11 and 12, are left to the reader. We express these polynomials and other functions in terms of hypergeometric and confluent hypergeometric functions. To conclude the chapter, we give an introduction to Mathieu functions, which arise as solutions of ODEs and PDEs with elliptical boundary conditions.

13.1 HERMITE FUNCTIONS

Generating Functions — Hermite Polynomials

The Hermite polynomials (Fig. 13.1), $H_n(x)$, may be defined by the generating function²

$$g(x,t) = e^{-t^2 + 2tx} = \sum_{n=0}^{\infty} H_n(x)\,\frac{t^n}{n!}. \tag{13.1}$$

[Figure 13.1: Hermite polynomials.]

Recurrence Relations

Note the absence of a superscript, which distinguishes Hermite polynomials from the unrelated Hankel functions.

¹ This is the spelling choice of AMS-55 (for the complete reference see footnote 4 in Chapter 5). However, a variety of names, such as Tschebyscheff, is encountered.
² A derivation of this Hermite generating function is outlined in Exercise 13.1.1.
From the generating function we find that the Hermite polynomials satisfy the recurrence relations

$$H_{n+1}(x) = 2xH_n(x) - 2nH_{n-1}(x) \tag{13.2}$$

and

$$H_n'(x) = 2nH_{n-1}(x). \tag{13.3}$$

Equation (13.2) is obtained by differentiating the generating function with respect to $t$:

$$\frac{\partial g}{\partial t} = (-2t + 2x)e^{-t^2+2tx} = \sum_{n=0}^{\infty} H_{n+1}(x)\frac{t^n}{n!} = -2\sum_{n=0}^{\infty} H_n(x)\frac{t^{n+1}}{n!} + 2x\sum_{n=0}^{\infty} H_n(x)\frac{t^n}{n!},$$

which can be rewritten as

$$\sum_{n=0}^{\infty} \bigl[H_{n+1}(x) - 2xH_n(x) + 2nH_{n-1}(x)\bigr]\frac{t^n}{n!} = 0.$$

Because each coefficient of this power series vanishes, Eq. (13.2) is established. Similarly, differentiation with respect to $x$ leads to

$$\frac{\partial g}{\partial x} = 2te^{-t^2+2tx} = 2\sum_{n=0}^{\infty} H_n(x)\frac{t^{n+1}}{n!} = \sum_{n=0}^{\infty} H_n'(x)\frac{t^n}{n!},$$

which yields Eq. (13.3) upon shifting the summation index $n + 1 \to n$ in the middle sum.

Table 13.1 Hermite Polynomials

H0(x) = 1
H1(x) = 2x
H2(x) = 4x² − 2
H3(x) = 8x³ − 12x
H4(x) = 16x⁴ − 48x² + 12
H5(x) = 32x⁵ − 160x³ + 120x
H6(x) = 64x⁶ − 480x⁴ + 720x² − 120

The Maclaurin expansion of the generating function,

$$e^{-t^2+2tx} = \sum_{n=0}^{\infty}\frac{(2tx - t^2)^n}{n!} = 1 + 2tx - t^2 + \cdots, \tag{13.4}$$

gives $H_0(x) = 1$ and $H_1(x) = 2x$, and then the recursion Eq. (13.2) permits the construction of any $H_n(x)$ desired (integral $n$). For convenient reference the first several Hermite polynomials are listed in Table 13.1.

Special values of the Hermite polynomials follow from the generating function for $x = 0$:

$$e^{-t^2} = \sum_{n=0}^{\infty}\frac{(-t^2)^n}{n!} = \sum_{n=0}^{\infty} H_n(0)\frac{t^n}{n!},$$

that is,

$$H_{2n}(0) = (-1)^n\frac{(2n)!}{n!}, \qquad H_{2n+1}(0) = 0, \qquad n = 0, 1, \ldots. \tag{13.5}$$

We also obtain from the generating function the important parity relation

$$H_n(x) = (-1)^n H_n(-x) \tag{13.6}$$

by noting that Eq. (13.1) yields

$$g(-x,-t) = \sum_{n=0}^{\infty} H_n(-x)\frac{(-t)^n}{n!} = g(x,t) = \sum_{n=0}^{\infty} H_n(x)\frac{t^n}{n!}.$$

Alternate Representations

The Rodrigues representation of $H_n(x)$ is

$$H_n(x) = (-1)^n e^{x^2}\frac{d^n}{dx^n}e^{-x^2}.$$

Let us show this using mathematical induction as follows.
(13.7) 820 Chapter 13 More Special Functions Example 13.1.1 RODRIGUES REPRESENTATION 2 2 We rewrite the generating function as g(x, t) = ex e−(t−x) and note that ∂ −(t−x)2 ∂ 2 e = − e−(t−x) . ∂t ∂x This yields   ∂g  2 d 2  e−x , = 2x = H1 (x) = −ex = (2x − 2t)g   t=0 ∂t t=0 dx which is the initial n = 1 case. Assuming the case n of Eq. (13.7) as valid, we now use the 2 2 d d x2 e = 2xex + ex dx in operator identity dx  n n+1 2 2 d x2 d n+1 d x 2 −x 2 e − 2xe e−x = (−1) e (−1)n+1 ex dx dx n dx n+1 =− d Hn (x) + 2xHn (x) = Hn+1 (x) dx to establish the n + 1 case, with the last equality following from Eqs.(13.2) and (13.3). More directly, differentiation of the generating function n times with respect to t and then setting t equal to zero yields  n n  ∂n  2 2 ∂ 2 −(t−x)2  n x2 d Hn (x) = n e−t +2tx  = (−1)n ex e e−x . = (−1) e  t=0 t=0 ∂t ∂x n dx n  A second representation may be obtained by using the calculus of residues (Section 7.1). If we multiply Eq. (13.1) by t −m−1 and integrate around the origin in the complex t-plane, only the term with Hm (x) will survive:  m! 2 Hm (x) = (13.8) t −m−1 e−t +2tx dt. 2πi Also, from the Maclaurin expansion, Eq. (13.4), we can derive our Hermite polynomial Hn (x) in series form: Using the binomial expansion of (2x − t)ν and the index N = s + ν, ∞ ν  ν   ∞ ν   t t ν ν −t 2 +2tx = (2x)ν−s (−t)s e (2x − t) = s ν! ν! ν=0 ν=0 s=0   ∞ N [N/2]  t  N! N −s N −2s s , = (2x) (−1) s N! (N − s)! N =0 s=0 where [N/2] is the largest integer less than or equal to N/2. Writing the binomial coefficient in terms of factorials and using Eq. (13.1) we obtain HN (x) = [N/2]  (2x)N −2s (−1)s s=0 N! . s!(N − 2s)! 13.1 Hermite Functions 821 More explicitly, replacing N → n, we have 2n! 4n! (2x)n−2 + (2x)n−4 1 · 3 · · · (n − 2)!2! (n − 4)!4!   [n/2]  n s n−2s = (−2) (2x) 1 · 3 · 5 · · · (2s − 1) 2s Hn (x) = (2x)n − s=0 = [n/2]  (−1)s (2x)n−2s s=0 n! . (n − 2s)!s! (13.9) This series terminates for integral n and yields our Hermite polynomial. 
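The terminating series and the recurrence, Eq. (13.2), can be cross-checked numerically. The Python sketch below (an illustrative addition, not part of the text) builds $H_n(x)$ both ways in exact integer arithmetic:

```python
from math import factorial

def hermite_series(n, x):
    """H_n(x) from the terminating series, Eq. (13.9)."""
    return sum((-1)**s * factorial(n) // (factorial(s) * factorial(n - 2*s))
               * (2*x)**(n - 2*s)
               for s in range(n // 2 + 1))

def hermite_recurrence(n, x):
    """H_n(x) built up from H_0 = 1 and H_1 = 2x via Eq. (13.2)."""
    h_prev, h = 1, 2*x
    if n == 0:
        return h_prev
    for k in range(1, n):
        # H_{k+1} = 2x H_k - 2k H_{k-1}
        h_prev, h = h, 2*x*h - 2*k*h_prev
    return h

# Both routes reproduce Table 13.1, e.g. H_4(2) = 16*16 - 48*4 + 12 = 76
print(hermite_series(4, 2), hermite_recurrence(4, 2))
```

The parity relation, Eq. (13.6), also follows immediately from either routine, since only powers $x^{n-2s}$ of fixed parity appear.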
Orthogonality If we substitute the recursion Eq. (13.3) into Eq. (13.2) we can eliminate the index n − 1, obtaining Hn+1 (x) = 2xHn (x) − Hn′ (x), which was used already in Example 13.1.1. If we differentiate this recursion relation and substitute Eq. (13.3) for the index n + 1 we find ′ Hn+1 (x) = 2(n + 1)Hn (x) = 2Hn (x) + 2xHn′ (x) − Hn′′ (x), which can be rearranged to the second-order ODE for Hermite polynomials. Thus, the recurrence relations (Eqs. (13.2) and (13.3)) lead to the second-order ODE Hn′′ (x) − 2xHn′ (x) + 2nHn (x) = 0, (13.10) which is clearly not self-adjoint. To put the ODE in self-adjoint form, following Section 10.1, we multiply by exp(−x 2 ), Exercise 10.1.2. This leads to the orthogonality integral ∞ 2 m = n, (13.11) Hm (x)Hn (x)e−x dx = 0, −∞ with the weighting function exp(−x 2 ), a consequence of putting the ODE into self-adjoint form. The interval (−∞, ∞) is chosen to obtain the Hermitian operator boundary conditions, Section 10.1. It is sometimes convenient to absorb the weighting function into the Hermite polynomials. We may define ϕn (x) = e−x 2 /2 Hn (x), with ϕn (x) no longer a polynomial. Substitution into Eq. (13.10) yields the differential equation for ϕn (x),  ϕn′′ (x) + 2n + 1 − x 2 ϕn (x) = 0. (13.12) (13.13) This is the differential equation for a quantum mechanical, simple harmonic oscillator, which is perhaps the most important physics application of the Hermite polynomials. 822 Chapter 13 More Special Functions Equation (13.13) is self-adjoint, and the solutions ϕn (x) are orthogonal for the interval (−∞ < x < ∞) with a unit weighting function. The problem of normalizing these functions remains. Proceeding as in Section 12.3, we 2 multiply Eq. (13.1) by itself and then by e−x . This yields 2 e−x e−s 2 +2sx e−t 2 +2tx = ∞  2 e−x Hm (x)Hn (x) m,n=0 smt n . m!n! 
When we integrate this relation over x from −∞ to +∞, the cross terms of the double sum drop out because of the orthogonality property:3 ∞ ∞  2 (st)n ∞ −x 2  2 2 2 e Hn (x) dx = e−x −s +2sx−t +2tx dx n!n! −∞ −∞ n=0 ∞ 2 = e−(x−s−t) e2st dx −∞ = π 1/2 e2st = π 1/2 ∞ n  2 (st)n n=0 n! , (13.14) using Eqs. (8.6) and (8.8). By equating coefficients of like powers of st, we obtain ∞ 2 2 e−x Hn (x) dx = 2n π 1/2 n!. (13.15) −∞ Quantum Mechanical Simple Harmonic Oscillator The following development of Hermite polynomials via simple harmonic oscillator wave functions φn (x) is analogous to the use of the raising and lowering operators for angular momentum operators presented in Section 4.3. This means that we derive the eigenvalues n + 1/2 and eigenfunctions (the Hn (x)) without assuming the development that d2 2 led to Eq. (13.13). The key aspect of the eigenvalue Eq. (13.13), ( dx 2 − x )ϕn (x) = −(2n + 1)ϕn (x), is that the Hamiltonian     d d d2 d 2 −2H ≡ 2 − x = (13.16) −x + x + x, dx dx dx dx almost factorizes. Using naively a 2 − b2 = (a − b)(a + b), the basic commutator [px , x] = h¯ /i of quantum mechanics (with momentum px = (h¯ /i)d/dx) enters as a correction in Eq. (13.16). (Because px is Hermitian, d/dx is anti-Hermitian, (d/dx)† = −d/dx.) This commutator can be evaluated as follows. Imagine the differential operator d/dx acts on a wave function ϕ(x) to the right, as in Eq. (13.13), so d d (xϕ) = x ϕ + ϕ, dx dx (13.17) 3 The cross terms (m = n) may be left in, if desired. Then, when the coefficients of s α t β are equated, the orthogonality will be apparent. 13.1 Hermite Functions 823 by the product rule. Dropping the wave function ϕ from Eq. (13.17), we rewrite Eq. (13.17) as  d d d x −x ≡ , x = 1, (13.18) dx dx dx a constant, and then verify Eq. (13.16) directly by expanding the product of operators. The product form of Eq. 
(13.16), up to the constant commutator, suggests introducing the non-Hermitian operators   d 1 , aˆ ≡ √ x − dx 2 †   1 d aˆ ≡ √ x + , dx 2 (13.19) with (a) ˆ † = aˆ † , which are adjoints of each other. They obey the commutation relations    d † a, ˆ aˆ = , x = 1, [a, ˆ a] ˆ = 0 = aˆ † , aˆ † , (13.20) dx which are characteristic of these operators and straightforward to derive from Eq. (13.18) and    d d d d = 0 = [x, x] and x, =− , ,x . dx dx dx dx Returning to Eq. (13.16) and using Eq. (13.19) we rewrite the Hamiltonian as H = aˆ † aˆ + 1 1 1 = aˆ † aˆ + aˆ † aˆ + aˆ aˆ † = aˆ † aˆ + aˆ aˆ † 2 2 2 (13.21) and introduce the Hermitian number operator N = aˆ † aˆ so that H = N + 1/2. Let |n be an eigenfunction of H, H|n = λn |n, whose eigenvalue λn is unknown at this point. Now we prove the key property that N has nonnegative integer eigenvalues   1 |n = n|n, N |n = λn − 2 n = 0, 1, 2 . . . , (13.22) that is, λn = n + 1/2. Since a|n ˆ is complex conjugate to n|aˆ † , the normalization integral † ˆ ≥ 0 and is finite. From n|aˆ a|n    † 1 † ≥0 (13.23) a|n ˆ a|n ˆ = n|aˆ a|n ˆ = λn − 2 we see that N has nonnegative eigenvalues. We now show that if a|n ˆ is nonzero it is an eigenfunction with eigenvalue λn−1 = λn − 1. After normalizing a|n, ˆ this state is designated |n − 1. This is proved by the commutation relations  N, aˆ † = aˆ † , [N, a] ˆ = −a, ˆ (13.24) 824 Chapter 13 More Special Functions which follow from Eq. (13.20). These commutation relations characterize N as the number ˆ Using operator. To see this, we determine the eigenvalue of N for the states aˆ † |n and a|n. ˆ aˆ † ] = N + 1, we find that aˆ aˆ † = N + [a,    ˆ aˆ † + N |n N aˆ † |n = aˆ † aˆ aˆ † |n = aˆ † a,   1 † = aˆ † (N + 1)|n = λn + aˆ |n = (n + 1)aˆ † |n, (13.25) 2   † N a|n ˆ = aˆ aˆ − 1 a|n ˆ = a(N ˆ − 1)|n = (n − 1)a|n. ˆ In other words, N acting on aˆ † |n shows that aˆ † has raised the eigenvalue n corresponding to |n by one unit, whence its name raising, or creation, operator. 
Applying $\hat a^\dagger$ repeatedly, we can reach all higher states. There is no upper limit to the sequence of eigenvalues. Similarly, $\hat a$ lowers the eigenvalue $n$ by one unit; hence it is a lowering (or annihilation) operator. Therefore,

$$\hat a^\dagger|n\rangle \sim |n+1\rangle, \qquad \hat a|n\rangle \sim |n-1\rangle. \tag{13.26}$$

Applying $\hat a$ repeatedly, we can reach the lowest, or ground, state $|0\rangle$ with eigenvalue $\lambda_0$. We cannot step lower because $\lambda_0 \ge 1/2$. Therefore $\hat a|0\rangle \equiv 0$, suggesting we construct $\psi_0 = |0\rangle$ from the (factored) first-order ODE

$$\sqrt{2}\,\hat a\,\psi_0 = \left(\frac{d}{dx} + x\right)\psi_0 = 0. \tag{13.27}$$

Integrating

$$\frac{\psi_0'}{\psi_0} = -x, \tag{13.28}$$

we obtain

$$\ln\psi_0 = -\frac{1}{2}x^2 + \ln c_0, \tag{13.29}$$

where $c_0$ is an integration constant. The solution,

$$\psi_0(x) = c_0 e^{-x^2/2}, \tag{13.30}$$

is a Gaussian that can be normalized, with $c_0 = \pi^{-1/4}$, using the error integral, Eqs. (8.6) and (8.8). Substituting $\psi_0$ into Eq. (13.13) we find

$$\mathcal{H}|0\rangle = \left(\hat a^\dagger\hat a + \frac{1}{2}\right)|0\rangle = \frac{1}{2}|0\rangle, \tag{13.31}$$

so its energy eigenvalue is $\lambda_0 = 1/2$ and its number eigenvalue is $n = 0$, confirming the notation $|0\rangle$. Applying $\hat a^\dagger$ repeatedly to $\psi_0 = |0\rangle$, all other eigenvalues are confirmed to be $\lambda_n = n + 1/2$, proving Eq. (13.13).

The normalizations needed for Eq. (13.26) follow from Eqs. (13.25) and (13.23) and

$$\langle n|\hat a\hat a^\dagger|n\rangle = \langle n|\hat a^\dagger\hat a + 1|n\rangle = n + 1, \tag{13.32}$$

showing

$$\sqrt{n+1}\,|n+1\rangle = \hat a^\dagger|n\rangle, \qquad \sqrt{n}\,|n-1\rangle = \hat a|n\rangle. \tag{13.33}$$

Thus, the excited-state wave functions, $\psi_1$, $\psi_2$, and so on, are generated by the raising operator:

$$|1\rangle = \hat a^\dagger|0\rangle = \frac{1}{\sqrt{2}}\left(x - \frac{d}{dx}\right)\psi_0(x) = \frac{\sqrt{2}\,x}{\pi^{1/4}}\,e^{-x^2/2}, \tag{13.34}$$

yielding (and leading to the upcoming Eq. (13.38))

$$\psi_n(x) = N_n H_n(x)\,e^{-x^2/2}, \qquad N_n \equiv \pi^{-1/4}\left(2^n n!\right)^{-1/2}, \tag{13.35}$$

where the $H_n$ are the Hermite polynomials (Fig. 13.2).

As shown, the Hermite polynomials are used in analyzing the quantum mechanical simple harmonic oscillator. For a potential energy $V = \frac{1}{2}Kz^2 = \frac{1}{2}m\omega^2 z^2$ (force $\mathbf{F} = -\nabla V = -Kz\,\hat{\mathbf{z}}$), the Schrödinger wave equation is

$$-\frac{\hbar^2}{2m}\nabla^2\Psi(z) + \frac{1}{2}Kz^2\,\Psi(z) = E\,\Psi(z). \tag{13.36}$$

Our oscillating particle has mass $m$ and total energy $E$.
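The ladder relations of Eq. (13.33) lend themselves to a direct numerical check. In the sketch below (an illustrative addition, not part of the text), $\psi_n$ is built from Eq. (13.35) with the recurrence Eq. (13.2), and the derivative in $\hat a = (x + d/dx)/\sqrt{2}$, $\hat a^\dagger = (x - d/dx)/\sqrt{2}$ is approximated by a central difference:

```python
import math

def H(n, x):
    """H_n(x) by the recurrence H_{k+1} = 2x H_k - 2k H_{k-1}, Eq. (13.2)."""
    h0, h1 = 1.0, 2.0 * x
    if n == 0:
        return h0
    for k in range(1, n):
        h0, h1 = h1, 2*x*h1 - 2*k*h0
    return h1

def psi(n, x):
    """Normalized oscillator wave function, Eq. (13.35)."""
    Nn = math.pi**-0.25 / math.sqrt(2.0**n * math.factorial(n))
    return Nn * H(n, x) * math.exp(-x*x/2)

def a_dag_psi(n, x, h=1e-5):
    """(a-dagger psi_n)(x) = (x psi_n - psi_n') / sqrt(2), derivative by central difference."""
    dpsi = (psi(n, x + h) - psi(n, x - h)) / (2*h)
    return (x*psi(n, x) - dpsi) / math.sqrt(2)

def a_psi(n, x, h=1e-5):
    """(a psi_n)(x) = (x psi_n + psi_n') / sqrt(2)."""
    dpsi = (psi(n, x + h) - psi(n, x - h)) / (2*h)
    return (x*psi(n, x) + dpsi) / math.sqrt(2)

# Eq. (13.33): a-dagger psi_n = sqrt(n+1) psi_{n+1};  a psi_n = sqrt(n) psi_{n-1}
x = 0.7
for n in range(1, 5):
    print(a_dag_psi(n, x) - math.sqrt(n + 1) * psi(n + 1, x),
          a_psi(n, x) - math.sqrt(n) * psi(n - 1, x))
```

The same routine also confirms $\hat a\,\psi_0 = 0$, the defining property of the ground state, Eq. (13.27).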
By use of the abbreviations

$$x = \alpha z, \qquad \alpha^4 = \frac{mK}{\hbar^2} = \frac{m^2\omega^2}{\hbar^2}, \qquad \lambda = \frac{2E}{\hbar}\left(\frac{m}{K}\right)^{1/2} = \frac{2E}{\hbar\omega}, \tag{13.37}$$

in which $\omega$ is the angular frequency of the corresponding classical oscillator, Eq. (13.36) becomes (with $\Psi(z) = \Psi(x/\alpha) = \psi(x)$)

$$\frac{d^2\psi(x)}{dx^2} + \left(\lambda - x^2\right)\psi(x) = 0. \tag{13.38}$$

This is Eq. (13.13) with $\lambda = 2n + 1$. Hence (Fig. 13.2),

$$\psi_n(x) = 2^{-n/2}\pi^{-1/4}(n!)^{-1/2}\,e^{-x^2/2}H_n(x) \quad \text{(normalized)}. \tag{13.39}$$

Alternatively, the requirement that $n$ be an integer is dictated by the boundary conditions of the quantum mechanical system,

$$\lim_{z\to\pm\infty}\Psi(z) = 0.$$

Specifically, if $n \to \nu$, not an integer, a power-series solution of Eq. (13.13) (Exercise 9.5.6) shows that $H_\nu(x)$ will behave as $x^\nu e^{x^2}$ for large $x$. The functions $\psi_\nu(x)$ and $\Psi_\nu(z)$ will therefore blow up at infinity, and it will be impossible to normalize the wave function $\Psi(z)$. With this requirement, the energy $E$ becomes

$$E = \left(n + \frac{1}{2}\right)\hbar\omega. \tag{13.40}$$

[Figure 13.2: Quantum mechanical oscillator wave functions. The heavy bar on the x-axis indicates the allowed range of the classical oscillator with the same total energy.]

As $n$ ranges over integral values ($n \ge 0$), we see that the energy is quantized and that there is a minimum or zero point energy

$$E_{\min} = \frac{1}{2}\hbar\omega. \tag{13.41}$$

This zero point energy is an aspect of the uncertainty principle, a genuine quantum phenomenon.

In quantum mechanical problems, particularly in molecular spectroscopy, a number of integrals of the form

$$\int_{-\infty}^{\infty} x^r e^{-x^2} H_n(x) H_m(x)\,dx$$

are needed. Examples for $r = 1$ and $r = 2$ (with $n = m$) are included in the exercises at the end of this section. A large number of other examples are contained in Wilson, Decius, and Cross.⁴ In the dynamics and spectroscopy of molecules in the Born–Oppenheimer approximation, the motion of a molecule is separated into electronic, vibrational, and rotational motion.
Each vibrating atom contributes to a matrix element two Hermite polynomials, its initial state and another one to its final state. Thus, integrals of products of Hermite polynomials are needed. Example 13.1.2 THREEFOLD HERMITE FORMULA We want to calculate the following integral involving m = 3 Hermite polynomials: I3 ≡ ∞ −∞ 2 e−x HN1 (x)HN2 (x)HN3 (x) dx, (13.42) where Ni ≥ 0 are integers. The formula (due to E. C. Titchmarsh, J. Lond. Math. Soc. 23: 15 (1948), see Gradshteyn and Ryzhik, p. 838, in the Additional Readings) generalizes the m = 2 case needed for the orthogonality and normalization of Hermite polynomials. To derive it, we start with the product of three generating functions of Hermite polynomials, 2 multiply by e−x , and integrate over x in order to generate I3 : ∞ ∞ 3 3 2 2 # 2xt −t 2 Z3 ≡ e−x e−( j =1 tj −x) +2(t1 t2 +t1 t3 +t2 t3 ) dx e j j dx = −∞ = √ −∞ j =1 πe2(t1 t2 +t1 t3 +t2 t3 ) . (13.43) The last equality follows from substituting y = x − j tj and using the error integral ∞ −y 2 √ dy = π , Eqs. (8.6) and (8.8). Expanding the generating functions in terms of −∞ e Hermite polynomials we obtain ∞ N ∞  t1N1 t2N2 t3 3 2 e−x HN1 (x)HN2 (x)HN3 (x) dx Z3 = N1 !N2 !N3 ! −∞ N1 ,N2 ,N3 =0 = = ∞ √  2N (t1 t2 + t1 t3 + t2 t3 )N π N! N =0 ∞ √  2N π N! N =0  0≤ni ≤N, i ni =N N! (t1 t2 )n1 (t1 t3 )n2 (t2 t3 )n3 , n1 !n2 !n3 ! 4 E. B. Wilson, Jr., J. C. Decius, and P. C. Cross, Molecular Vibrations, New York: McGraw-Hill (1955), reprinted Dover (1980). 828 Chapter 13 More Special Functions using the polynomial expansion  m aj j =1 N =  0≤ni ≤m N! nm . a n1 · · · am n1 ! · · · nm ! 1 The powers of the foregoing tj tk become N (t1 t2 )n1 (t1 t3 )n2 (t2 t3 )n3 = t1N1 t2N2 t3 3 ; N1 = n1 + n2 , N2 = n1 + n3 , N3 = n2 + n3 . That is, from 2N = 2(n1 + n2 + n3 ) = N1 + N2 + N3 there follows 2N = 2n1 + 2N3 = 2n2 + 2N2 = 2n3 + 2N1 , so we obtain n1 = N − N3 , n2 = N − N2 , n3 = N − N1 . 
The ni are all fixed (making this case special and easy) because the Ni are fixed, and 3 2N = Ni , with N ≥ 0 an integer by parity. Hence, upon comparing the foregoing like i=1 t1 t2 t3 powers, I3 = √ π2N N1 !N2 !N3 ! , (N − N1 )!(N − N2 )!(N − N3 )! (13.44) which is the desired formula. If we order N1 ≥ N2 ≥ N3 ≥ 0, then n1 ≥ n2 ≥ n3 ≥ 0 follows, being equivalent to N − N3 ≥ N − N2 ≥ N − N1 ≥ 0, which occur in the denominators of the factorials of I3 .  Example 13.1.3 DIRECT EXPANSION OF PRODUCTS OF HERMITE POLYNOMIALS In an alternative approach, we now start again from the generating function identity ∞  N1 ,N2 =0 HN1 (x)HN2 (x) t1N1 t2N2 2 2 2 = e2x(t1 +t2 )−t1 −t2 = e2x(t1 +t2 )−(t1 +t2 ) · e2t1 t2 N1 ! N2 ! = ∞  N =0 HN (x) ∞ (t1 + t2 )N  (2t1 t2 )ν . N! ν! ν=0 13.1 Hermite Functions 829 Using the binomial expansion and then comparing like powers of t1 t2 we extract an identity due to E. Feldheim (J. Lond. Math. Soc. 13: 22 (1938)): HN1 (x)HN2 (x) = =   N1 !N2 !2ν N1 + N2 − 2ν HN1 +N2 −2ν N1 − ν ν!(N1 + N2 − 2ν)! ν=0     N2 N1 ν . (13.45) HN1 +N2 −2ν 2 ν! ν ν min(N 1 ,N2 )  0≤ν≤min(N1 ,N2 ) For ν = 0 the coefficient of HN1 +N2 is obviously unity. Special cases, such as H12 = H2 + 2, H1 H2 = H3 + 4H1 , H22 = H4 + 8H2 + 8, H1 H3 = H4 + 6H2 , can be derived from Table 13.1 and agree with the general twofold product formula. This compact formula can be generalized to products of m Hermite polynomials, and this in turn yields a new closed form result for the integral Im . Let us begin with a new result for I4 containing a product of four Hermite polynomials. Inserting the Feldheim identity for HN1 HN2 and HN3 HN4 and using orthogonality ∞ √ 2 e−x HN1 HN2 dx = π2N1 N1 !δN1 N2 −∞ for the remaining product of two Hermite polynomials yields ∞ 2 I4 = e−x HN1 HN2 HN3 HN4 dx −∞ = 2µ µ! 0≤µ≤min(N1 ,N2 );0≤ν≤min(N3 ,N4 )  N1 · µ =       ∞ 2 N4 N2 ν N3 2 ν! e−x HN1 +N2 −2µ HN3 +N4 −2ν dx ν ν µ −∞ √ N4  π2M (N3 + N4 − 2ν)!N1 !N2 !N3 !N4 ! . 
(M − N3 − N4 − ν)!(M − N1 + ν)!(M − N2 + ν)!(N3 − ν)!(N4 − ν)!ν! ν=0 (13.46) Here we use the notation M = (N1 + N2 + N3 + N4 )/2 and write the binomial coefficients explicitly, so 1 2 (N1 N2 − N3 − N4 ) = M − N3 − N4 , 1 2 (N1 − N2 + N3 + N4 ) = M − N2 , 1 2 (N2 − N1 + N3 + N4 ) = M − N1 . From orthogonality we have µ = (N1 + N2 − N3 − N4 )/2 + ν. The upper limit of ν is min(N3 , N4 , M − N1 , M − N2 ) = min(N4 , M − N1 ) and the lower limit is max(0, N3 + N4 − M) = 0, if we order N1 ≥ N2 ≥ N3 ≥ N4 . 830 Chapter 13 More Special Functions Now we return to the product expansion of m Hermite polynomials and the corresponding new result from it for Im . We prove a generalized Feldheim identity,  HN1 (x) · · · HNm (x) = HM (x)aν1 ,...,νm−1 , (13.47) ν1 ,...,νm−1 where M= m−1  i=1 (Ni − 2νi ) + Nm , by mathematical induction. Multiplying this by HNm+1 and using the Feldheim identity, we end up with the same formula for m + 1 Hermite polynomials, including the recursion relation    m−1  Nm+1 (Ni − 2νi ) + Nm+1 νm i=1 aν1 ,...,νm = aν1 ,...,νm−1 2 νm ! . νm νm Its solution is aν1 ,...,νm−1 = m−1 # i=1 Ni+1 νi   i−1 j =1 (Nj − 2νj ) + Ni νi  2νi νi !. (13.48) The limits of the summation indices are 0 ≤ ν1 ≤ min(N1 , N2 ), 0 ≤ ν2 ≤ min(N3 , N1 + N2 − 2ν1 ), . . . ,   m−2  0 ≤ νm−1 ≤ min Nm , (Ni − 2νi ) + Nm−1 . (13.49) i=1 We now apply this generalized Feldheim identity, with indices ordered as N1 ≥ N2 ≥ · · · ≥ Nm , to Im , grouping HN2 · · · HNm together and using orthogonality for the remaining product of two Hermite polynomials HN1 Hm−1 (N −2ν )+Nm . This yields N1 = i i i=2 m−1 (N − 2ν ) + N , fixing ν , and i i m m−1 i=2 Im = √ N1 π2 N1 !  m−1 # ν2 ,...,νm−1 i=2 Ni+1 νi   i−1 j =2 (Nj − 2νj ) + Ni νi  νi !2νi , (13.50) where the limits on the summation indices are 0 ≤ ν2 ≤ min(N3 , N2 ), . . . ,   m−2  (Ni − 2νi ) + Nm−1 . 
(13.51) 0 ≤ νm−1 ≤ min Nm , i=2  13.1 Hermite Functions Example 13.1.4 831 APPLICATIONS OF THE PRODUCT FORMULAS To check the expression Im for m = 3, we note that the sum i−1 j =2 with i − 1 = m − 2 = 1 in the second binomial coefficient in Im is empty (that is, zero), so only Ni = Nm−1 = N2 remains. Also, with νm−2 = ν1 the sum over the νi is that over ν2 , which is fixed by the constraint on the summation index ν2 : N1 = N2 − 2ν2 + N3 . Hence ν2 = (N2 + N3 − N1 )/2 = N − N1 , with N = (N1 + N2 + N3 )/2. That is, only the product remains in Im . The general formula for Im therefore gives √ N    √ N1 N2 π2 N1 !N2 !N3 ! N3 ν2 ν2 !2 = , I3 = π2 N1 ! ν2 ν2 (N − N1 )!(N − N2 )!(N − N3 )! which agrees with our earlier result of Example 13.1.2. The last expression is based on the following observations. The power of 2 has the exponent N1 + ν2 = N . The factorials from the binomial coefficients are N3 − ν2 = (N1 + N3 − N2 )/2 = N − N2 , N2 − ν2 = (N1 + N2 − N3 )/2 = N − N3 . Next let us consider m = 4, where we do not order the Hermite indices Ni as yet. The reason is that the general Im expression was derived with a different grouping of the Hermite polynomials than the separate calculation of I4 with which we compare. That is why we’ll have to permute the indices to get the earlier result for I4 . That is a general conclusion: Different groupings of the Hermite polynomials just give different permutations of the Hermite indices in the general result. We have two summations over ν2 and νm−1 = ν3 , which is fixed by the constraint N1 = N2 − 2ν2 + N3 − 2ν3 + N4 . Hence ν3 = 12 (N2 + N3 + N4 − N1 ) − ν2 = M − N1 − ν2 with M = 12 (N1 + N2 + N3 + N4 ). The exponent of 2 is N1 + ν2 + ν3 = M. Therefore for m = 4 the Im formula gives   N3   N4   N2   N2 − 2ν2 + N3  √ N1 ν2 !2ν2 ν3 !2ν3 I4 = π 2 N1 ! ν2 ν3 ν2 ν3 ν2 ≥0 = = =  ν2 ≥0  ν2 ≥0  ν2 ≥0 √ M π2 N1 !N2 !N3 !N4 !(N2 − 2ν2 + N3 )! ν2 !ν3 !(N2 − ν2 )!(N3 − ν2 )!(N4 − ν3 )!(N2 + N3 − 2ν2 − ν3 )! 
√ M π2 N1 !N2 !N3 !N4 !(N2 + N3 − 2ν2 )! (N2 − ν2 )!(N3 − ν2 )!(N4 − ν3 )!ν2 !ν3 !(N2 + N3 − 2ν2 − ν3 )! √ M π 2 N1 !N2 !N3 !N4 !(N2 + N3 − 2ν2 )! . ν2 !(M − N1 − ν2 )!(N3 − ν2 )!(N2 − ν2 )!(M − N2 − N3 + ν2 )!(M − N4 − ν2 )! In the last expression we have substituted ν3 and used N4 − ν3 = (N1 − N2 − N3 + N4 ) + ν2 = M − N2 − N3 + ν2 , N2 + N3 − 2ν2 − ν3 = N1 + N2 + N3 − N4 − ν2 = M − N4 − ν2 . 2 832 Chapter 13 More Special Functions The upper limit is ν2 ≤ min(N2 , N3 , M − N1 , M − N4 ), and the lower limit is ν2 ≥ max(0, N2 + N3 − M). If we make the permutation N2 ↔ N4 , ν2 → ν, then our previous I4 result is obtained with upper limit ν ≤ min(N4 , M − N1 ) = 12 (N2 + N3 + N4 − N1 ) and lower limit ν ≥ max(0, N3 + N4 − M) = 0 because N3 + N4 − N1 − N2 ≤ 0 for N1 ≥ N2 ≥ N3 ≥ N4 ≥ 0.  The Hermite polynomial product formula also applies to products of simple harmonic ∞ 2 oscillator wave functions, −∞ e−mx /2 HN1 (x) · · · HNm (x) dx, with a different exponential weight function. To evaluate such integrals we use the generalized Feldheim identity for HN2 · · · HNm in conjunction with the integral (see Gradshteyn and Ryzhik, p. 845, in the Additional Readings),     ∞ m+n+1 1 2 m+n+1  2 (m+n)/2 −a 2 x 2 1−a Ŵ Hm (x)Hn (x) dx = e 2 a 2 −∞   1−m−n a2 ; , · 2 F1 −m, −n; 2 2(a 2 − 1) instead of the standard orthogonality integral for the remaining product of two Hermite polynomials. Here the hypergeometric function is the finite sum    min(m,n)−1 ν  a2 a2 1−m−n (−m)ν (−n)ν −m, −n; ; = F 2 1 2 2(a 2 − 1) )ν 2(a 2 − 1) ν!( 1−m−n 2 ν=0 with (−m)ν = (−m)(1 − m) · · · (ν − 1 − m) and (−m)0 ≡ 1. This yields a result similar to Im but somewhat more complicated. The oscillator potential has also been employed extensively in calculations of nuclear structure (nuclear shell model) quark models of hadrons and the nuclear force. There is a second independent solution of Eq. (13.13). 
This Hermite function of the second kind is an infinite series (Sections 9.5 and 9.6) and is of no physical interest, at least not yet. Exercises 13.1.1 Assume the Hermite polynomials are known to be solutions of the differential equation (13.13). From this the recurrence relation, Eq. (13.3), and the values of Hn (0) are also known. (a) Assume the existence of a generating function g(x, t) = (b) ∞  Hn (x)t n n=0 n! . Differentiate g(x, t) with respect to x and using the recurrence relation develop a first-order PDE for g(x, t). (c) Integrate with respect to x, holding t fixed. (d) Evaluate g(0, t) using Eq. (13.5). Finally, show that  g(x, t) = exp −t 2 + 2tx . 13.1 Hermite Functions 13.1.2 833 In developing the properties of the Hermite polynomials, start at a number of different points, such as: 1. 2. 3. 4. 5. Hermite’s ODE, Eq. (13.13), Rodrigues’ formula, Eq. (13.7), Integral representation, Eq. (13.8), Generating function, Eq. (13.1), Gram–Schmidt construction of a complete set of orthogonal polynomials over (−∞, ∞) with a weighting factor of exp(−x 2 ), Section 10.3. Outline how you can go from any one of these starting points to all the other points. 13.1.3 Prove that  d 2x − dx n 1 = Hn (x). Hint. Check out the first couple of examples and then use mathematical induction. 13.1.4 13.1.5 Prove that     Hn (x) ≤ Hn (ix). Rewrite the series form of Hn (x), Eq. (13.9), as an ascending power series. ANS. H2n (x) = (−1)n n  (−1)2s (2x)2s s=0 H2n+1 (x) = (−1)n 13.1.6 n  (2n)! , (2s)!(n − s)! (−1)s (2x)2s+1 s=0 (2n + 1)! . (2s + 1)!(n − s)! (a) Expand x 2r in a series of even-order Hermite polynomials. (b) Expand x 2r+1 in a series of odd-order Hermite polynomials. r (2r)!  H2n (x) (2n)!(r − n)! 22r n=0 r H2n+1 (x) (2r + 1)!  , (b) x 2r+1 = 2r+1 (2n + 1)!(r − n)! 2 ANS. (a) x 2r = n=0 Hint. Use a Rodrigues representation and integrate by parts. 13.1.7 Show that (a) (b)  2 x 2πn!/(n/2)!, n even dx = Hn (x) exp − 0, n odd. 2 −∞  2 ∞ n even 0, x (n + 1)! 
dx = xHn (x) exp − , n odd. 2π 2 −∞ ((n + 1)/2)! ∞ r = 0, 1, 2, . . . . 834 Chapter 13 More Special Functions 13.1.8 Show that ∞ −∞ 13.1.9 2 x m e−x Hn (x) dx = 0 for m an integer, 0 ≤ m ≤ n − 1. The transition probability between two oscillator states m and n depends on ∞ 2 xe−x Hn (x)Hm (x) dx. −∞ Show that this integral equals π 1/2 2n−1 n!δm,n−1 + π 1/2 2n (n + 1)!δm,n+1 . This result shows that such transitions can occur only between states of adjacent energy levels, m = n ± 1. Hint. Multiply the generating function (Eq. (13.1)) by itself using two different sets of variables (x, s) and (x, t). Alternatively, the factor x may be eliminated by the recurrence relation, Eq. (13.2). 13.1.10 Show that ∞ 2 −x 2 x e −∞ Hn (x)Hn (x) dx = π   1 . 2 n! n + 2 1/2 n This integral occurs in the calculation of the mean-square displacement of our quantum oscillator. Hint. Use the recurrence relation, Eq. (13.2), and the orthogonality integral. 13.1.11 Evaluate ∞ −∞ 2 x 2 e−x Hn (x)Hm (x) dx in terms of n and m and appropriate Kronecker delta functions. ANS. 2n−1 π 1/2 (2n + 1)n!δnm + 2n π 1/2 (n + 2)!δn+2,m + 2n−2 π 1/2 n!δn−2,m . 13.1.12 Show that ∞ −∞ r −x 2 x e  0, Hn (x)Hn+p (x) dx = n 1/2 2 π (n + r)!, p>r p = r, with n, p, and r nonnegative integers. Hint. Use the recurrence relation, Eq. (13.2), p times. 13.1.13 (a) Using the Cauchy integral formula, develop an integral representation of Hn (x) based on Eq. (13.1) with the contour enclosing the point z = −x. ANS. Hn (x) = (b) n! x 2 e 2πi  2 e−z dz. (z + x)n+1 Show by direct substitution that this result satisfies the Hermite equation. 13.1 Hermite Functions 13.1.14 835 With ψn (x) = e−x 2 /2 Hn (x) , (2n n!π 1/2 )1/2 verify that  1 aˆ n ψn (x) = √ x + 2  1 aˆ n† ψn (x) = √ x − 2  d ψn (x) = n1/2 ψn−1 (x), dx  d ψn (x) = (n + 1)1/2 ψn+1 (x). dx Note. The usual quantum mechanical operator approach establishes these raising and lowering properties before the form of ψn (x) is known. 
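The raising and lowering relations of Exercise 13.1.14 can likewise be verified numerically. The sketch below (plain Python; a centered finite difference stands in for d/dx, and the names are illustrative) checks âψ_n(x) = (1/√2)(x + d/dx)ψ_n(x) = n^{1/2} ψ_{n−1}(x) pointwise:

```python
import math

def hermite(n, x):
    """H_n(x) by the recurrence H_{n+1} = 2x H_n - 2n H_{n-1}."""
    h0, h1 = 1.0, 2.0 * x
    if n == 0:
        return h0
    for k in range(1, n):
        h0, h1 = h1, 2.0 * x * h1 - 2.0 * k * h0
    return h1

def psi(n, x):
    """Normalized oscillator wave function e^{-x^2/2} H_n / (2^n n! pi^{1/2})^{1/2}."""
    return (math.exp(-x * x / 2.0) * hermite(n, x)
            / math.sqrt(2.0 ** n * math.factorial(n) * math.sqrt(math.pi)))

def lowered(n, x, h=1e-5):
    """(1/sqrt 2)(x + d/dx) psi_n, with d/dx from a central difference."""
    d = (psi(n, x + h) - psi(n, x - h)) / (2.0 * h)
    return (x * psi(n, x) + d) / math.sqrt(2.0)

# Exercise 13.1.14: a_hat psi_n = sqrt(n) psi_{n-1}
for x in (-1.3, 0.2, 0.9):
    assert abs(lowered(3, x) - math.sqrt(3) * psi(2, x)) < 1e-6
```

The companion raising relation, (1/√2)(x − d/dx)ψ_n = (n+1)^{1/2} ψ_{n+1}, checks out the same way with the sign of the derivative term flipped.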
13.1.15 (a) Verify the operator identity x− (b) 2 2 d x x d . = − exp exp − dx 2 dx 2 The normalized simple harmonic oscillator wave function is 2  −1/2 x ψn (x) = π 1/2 2n n! Hn (x). exp − 2 Show that this may be written as 2    1/2 n −1/2 x d n ψn (x) = π 2 n! exp − . x− dx 2 Note. This corresponds to an n-fold application of the raising operator of Exercise 13.1.14. 13.1.16 (a) Show that the simple oscillator Hamiltonian (from Eq. (13.38)) may be written as 1 d2 1 1 + x 2 = aˆ aˆ † + aˆ † aˆ . 2 dx 2 2 2 Hint. Express E in units of h¯ ω. (b) Using the creation–annihilation operator formulation of part (a), show that   1 Hψ(x) = n + ψ(x). 2 H=− This means the energy eigenvalues are E = (n + 12 )(h¯ ω), in agreement with Eq. (13.40). 13.1.17 13.1.18 Write a program that will generate the coefficients as , in the polynomial form of the Hermite polynomial Hn (x) = ns=0 as x s . A function f (x) is expanded in a Hermite series: f (x) = ∞  n=0 an Hn (x). 836 Chapter 13 More Special Functions From the orthogonality and normalization of the Hermite polynomials the coefficient an is given by ∞ 1 2 an = n 1/2 f (x)Hn (x)e−x dx. 2 π n! −∞ For f (x) = x 8 determine the Hermite coefficients an by the Gauss–Hermite quadrature. Check your coefficients against AMS-55, Table 22.12 (for the reference see footnote 4 in Chapter 5 or the General References at book’s end). 13.1.19 (a) In analogy with Exercise 12.2.13, set up the matrix of even Hermite polynomial coefficients that will transform an even Hermite series into an even power series:   1 −2 12 · · · 0 4 −48 · · ·   B= 0 16 · · ·  . 0 .. .. .. . . . ··· Extend B to handle an even polynomial series through H8 (x). Invert your matrix to obtain matrix A, which will transform an even power series (through x 8 ) into a series of even Hermite polynomials. Check the elements of A against those listed in AMS-55 (Table 22.12, in the General References at book’s end). 
(c) Finally, using matrix multiplication, determine the Hermite series equivalent to f (x) = x 8 . n Write a subroutine that will transform a finite power series, N n=0 an x , into a Hermite N series, n=0 bn Hn (x). Use the recurrence relation, Eq. (13.2). Note. Both Exercises 13.1.19 and 13.1.20 are faster and more accurate than the Gaussian quadrature, Exercise 13.1.18, if f (x) is available as a power series. (b) 13.1.20 13.1.21 Write a subroutine for evaluating Hermite polynomial matrix elements of the form ∞ 2 Mpqr = Hp (x)Hq (x)x r e−x dx, −∞ using the 10-point Gauss–Hermite quadrature (for p + q + r ≤ 19). Include a parity check and set equal to zero the integrals with odd-parity integrand. Also, check to see if r is in the range |p − q| ≤ r. Otherwise Mpqr = 0. Check your results against the specific cases listed in Exercises 13.1.9, 13.1.10, 13.1.11, and 13.1.12. 13.1.22 13.1.23 Calculate and tabulate the normalized linear oscillator wave functions  2 x −n/2 −1/4 −1/2 for x = 0.0(0.1)5.0 ψn (x) = 2 π (n!) Hn (x) exp − 2 and n = 0(1)5. If a plotting routine is available, plot your results. ∞ 2 Evaluate −∞ e−2x HN1 (x) · · · HN4 (x) dx in closed form. ∞ −2x 2 Hint. −∞ e HN1 (x)HN2 (x)HN3 (x) dx = π1 2(N1 +N2 +N3 −1)/2 · Ŵ(s − N1 )Ŵ(s − N2 ) ∞ 2 · Ŵ(s − N3 ), s = (N1 + N2 + N3 + 1)/2 or −∞ e−2x HN1 (x)HN2 (x) dx = (−1)(N1 +N2 −1)/2 2(N1 +N2 −1)/2 · Ŵ((N1 + N2 + 1)/2) may be helpful. Prove these formulas (see Gradshteyn and Ryzhik, no. 7.375 on p. 844, in the Additional Readings). 13.2 Laguerre Functions 13.2 837 LAGUERRE FUNCTIONS Differential Equation — Laguerre Polynomials If we start with the appropriate generating function, it is possible to develop the Laguerre polynomials in analogy with the Hermite polynomials. Alternatively, a series solution may be developed by the methods of Section 9.5. 
Instead, to illustrate a different technique, let us start with Laguerre’s ODE and obtain a solution in the form of a contour integral, as we did with the integral representation for the modified Bessel function Kν (x) (Section 11.6). From this integral representation a generating function will be derived. Laguerre’s ODE (which derives from the radial ODE of Schrödinger’s PDE for the hydrogen atom) is xy ′′ (x) + (1 − x)y ′ (x) + ny(x) = 0. (13.52) We shall attempt to represent y, or rather yn , since y will depend on the parameter n, a nonnegative integer, by the contour integral  1 e−xz/(1−z) dz (13.53a) yn (x) = 2πi (1 − z)zn+1 and demonstrate that it satisfies Laguerre’s ODE. The contour includes the origin but does not enclose the point z = 1. By differentiating the exponential in Eq. (13.53a) we obtain  −xz/(1−z) 1 e yn′ (x) = − dz, (13.53b) 2πi (1 − z)2 zn  1 e−xz/(1−z) yn′′ (x) = dz. (13.53c) 2πi (1 − z)3 zn−1 Substituting into the left-hand side of Eq. (13.52), we obtain   1 x 1−x n e−xz/(1−z) dz, − + 2πi (1 − z)3 zn−1 (1 − z)2 zn (1 − z)zn+1 which is equal to 1 − 2πi   d e−xz/(1−z) dz. dz (1 − z)zn (13.54) If we integrate our perfect differential around a closed contour (Fig. 13.3), the integral will vanish, thus verifying that yn (x) (Eq. (13.53a)) is a solution of Laguerre’s equation. It has become customary to define Ln (x), the Laguerre polynomial (Fig. 13.4), by5  1 e−xz/(1−z) dz. (13.55) Ln (x) = 2πi (1 − z)zn+1 5 Other definitions of L (x) are in use. The definitions here of the Laguerre polynomial L (x) and the associated Laguerre n n polynomial Lkn (x) agree with AMS-55, Chapter 22. (For the full ref. see footnote 4 in Chapter 5 or the General References at book’s end.) 838 Chapter 13 More Special Functions FIGURE 13.3 Laguerre polynomial contour. FIGURE 13.4 Laguerre polynomials. 
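The verification that the contour integral satisfies Laguerre's ODE can be mirrored numerically for any candidate solution. The sketch below (plain Python; names illustrative) takes the closed form 2!L_2(x) = x² − 4x + 2, which appears in the table of Laguerre polynomials later in this section, and checks the residual of xy″ + (1 − x)y′ + ny = 0 with centered finite differences:

```python
def L2(x):
    """The n = 2 Laguerre polynomial, (x^2 - 4x + 2)/2 (see Table 13.2)."""
    return (x * x - 4.0 * x + 2.0) / 2.0

def laguerre_ode_residual(y, n, x, h=1e-3):
    """x y'' + (1 - x) y' + n y, with centered-difference derivatives;
    these differences are exact for quadratics up to round-off."""
    d1 = (y(x + h) - y(x - h)) / (2.0 * h)
    d2 = (y(x + h) - 2.0 * y(x) + y(x - h)) / (h * h)
    return x * d2 + (1.0 - x) * d1 + n * y(x)

# The residual vanishes (to round-off) at arbitrary sample points
for x in (0.3, 1.7, 4.2):
    assert abs(laguerre_ode_residual(L2, 2, x)) < 1e-6
```

The same residual test applies unchanged to any L_n generated by the recurrence relations derived below.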
This is exactly what we would obtain from the series g(x, z) = ∞ e−xz/(1−z)  = Ln (x)zn , 1−z n=0 |z| < 1, (13.56) if we multiplied g(x, z) by z−n−1 and integrated around the origin. As in the development of the calculus of residues (Section 7.1), only the z−1 term in the series survives. On this basis we identify g(x, z) as the generating function for the Laguerre polynomials. 13.2 Laguerre Functions 839 With the transformation xz s −x = s − x, or z= , 1−z s  ex s n e−s ds, Ln (x) = 2πi (s − x)n+1 (13.57) (13.58) the new contour enclosing the point s = x in the s-plane. By Cauchy’s integral formula (for derivatives), Ln (x) = ex d n  n −x x e n! dx n (integral n), (13.59) giving Rodrigues’ formula for Laguerre polynomials. From these representations of Ln (x) we find the series form (for integral n), Ln (x) = =  (−1)n n n2 n−1 n2 (n − 1)2 n−2 x x − x + − · · · + (−1)n n! n! 1! 2! n n   (−1)m n!x m (−1)n−s n!x n−s = (n − m)!m!m! (n − s)!(n − s)!s! m=0 (13.60) s=0 and the specific polynomials listed in Table 13.2 (Exercise 13.2.1). Clearly, the definition of Laguerre polynomials in Eqs. (13.55), (13.56), (13.59), and (13.60) are equivalent. Practical applications will decide which approach is used as one’s starting point. Equation (13.59) is most convenient for generating Table 13.2, Eq. (13.56) for deriving recursion relations from which the ODE (13.52) is recovered. By differentiating the generating function in Eq. (13.56) with respect to x and z, we obtain recurrence relations for the Laguerre polynomials as follows. Using the product rule for differentiation we verify the identities (1 − z)2 ∂g = (1 − x − z)g(x, z), ∂z Table 13.2 (z − 1) ∂g = zg(x, z). 
∂x Laguerre Polynomials L0 (x) = 1 L1 (x) = −x + 1 2!L2 (x) = x 2 − 4x + 2 3!L3 (x) = −x 3 + 9x 2 − 18x + 6 4!L4 (x) = x 4 − 16x 3 + 72x 2 − 96x + 24 5!L5 (x) = −x 5 + 25x 4 − 200x 3 + 600x 2 − 600x + 120 6!L6 (x) = x 6 − 36x 5 + 450x 4 − 2400x 3 + 5400x 2 − 4320x + 720 (13.61) 840 Chapter 13 More Special Functions Writing the left-hand and right-hand sides of the first identity in terms of Laguerre polynomials using Eq. (13.56) we obtain  (n + 1)Ln+1 (x) − 2nLn (x) + (n − 1)Ln−1 (x) zn n =  (1 − x)Ln (x) − Ln−1 (x) zn . n Equating coefficients of zn yields (n + 1)Ln+1 (x) = (2n + 1 − x)Ln (x) − nLn−1 (x). (13.62) To get the second recursion relation we use both identities of Eqs. (13.61) to verify the third identity, x ∂g ∂g ∂(zg) =z −z , ∂x ∂z ∂z (13.63) which, when written similarly in terms of Laguerre polynomials, is seen to be equivalent to xL′n (x) = nLn (x) − nLn−1 (x). (13.64) Equation (13.61), modified to read Ln+1 (x) = 2Ln (x) − Ln−1 (x) − 1  (1 + x)Ln (x) − Ln−1 (x) , n+1 (13.65) for reasons of economy and numerical stability, is used for computation of numerical values of Ln (x). The computer starts with known numerical values of L0 (x) and L1 (x), Table 13.2, and works up step by step. This is the same technique discussed for computing Legendre polynomials, Section 12.2. Also, from Eq. (13.56) we find g(0, z) = ∞ ∞ n=0 n=0   1 = Ln (0)zn , zn = 1−z which yields the special values of Laguerre polynomials Ln (0) = 1. (13.66) As is seen from the form of the generating function, from the form of Laguerre’s ODE, or from Table 13.2, the Laguerre polynomials have neither odd nor even symmetry under the parity transformation x → −x. The Laguerre ODE is not self-adjoint, and the Laguerre polynomials Ln (x) do not by themselves form an orthogonal set. However, following the method of Section 10.1, if we multiply Eq. (13.52) by e−x (Exercise 10.1.1) we obtain ∞ e−x Lm (x)Ln (x) dx = δmn . 
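The step-by-step computation described above, starting from L_0 and L_1 and climbing with (n + 1)L_{n+1}(x) = (2n + 1 − x)L_n(x) − nL_{n−1}(x), is a few lines of code. The sketch below (plain Python; the function name is illustrative) reproduces Table 13.2 and the special value L_n(0) = 1:

```python
def laguerre(n, x):
    """L_n(x) by upward recurrence
    (n+1) L_{n+1} = (2n+1-x) L_n - n L_{n-1},
    seeded with L_0 = 1, L_1 = 1 - x."""
    l0, l1 = 1.0, 1.0 - x
    if n == 0:
        return l0
    for k in range(1, n):
        l0, l1 = l1, ((2 * k + 1 - x) * l1 - k * l0) / (k + 1)
    return l1

# Table 13.2: 3! L_3(x) = -x^3 + 9x^2 - 18x + 6, and L_n(0) = 1 (Eq. 13.66)
x = 2.0
assert abs(6 * laguerre(3, x) - (-x**3 + 9 * x**2 - 18 * x + 6)) < 1e-12
assert all(laguerre(n, 0.0) == 1.0 for n in range(8))
```

As with the Legendre polynomials of Section 12.2, the upward recurrence is numerically stable for x ≥ 0 and is the method of choice for machine evaluation.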
(13.67) 0 13.2 Laguerre Functions 841 This orthogonality is a consequence of the Sturm–Liouville theory, Section 10.1. The normalization follows from the generating function. It is sometimes convenient to define orthogonalized Laguerre functions (with unit weighting function) by ϕn (x) = e−x/2 Ln (x). (13.68) Our new orthonormal function, ϕn (x), satisfies the ODE   1 x ′′ ′ ϕn (x) = 0, xϕn (x) + ϕn (x) + n + − 2 4 (13.69) which is seen to have the (self-adjoint) Sturm–Liouville form. Note that the interval (0 ≤ x < ∞) was used because Sturm–Liouville boundary conditions are satisfied at its endpoints. Associated Laguerre Polynomials In many applications, particularly in quantum mechanics, we need the associated Laguerre polynomials defined by6 Lkn (x) = (−1)k dk Ln+k (x). dx k (13.70) From the series form of Ln (x) we verify that the lowest associated Laguerre polynomials are given by Lk0 (x) = 1, Lk1 (x) = −x + k + 1, Lk2 (x) = (k + 2)(k + 1) x2 − (k + 2)x + . 2 2 (13.71) In general, Lkn (x) = n  (−1)m m=0 (n + k)! xm, (n − m)!(k + m)!m! k > −1. (13.72) A generating function may be developed by differentiating the Laguerre generating function k times to yield (−1)k ∞ ∞   dk d k e−xz/(1−z) = (−1)k Lkn (x)zn+k Ln+k (x)zn+k = k k dx 1−z dx n=0 n=0 =  z 1−z k exz/(1−z) 1−z . k k 6 Some authors use Lk (x) = (d k /dx k )[L k n+k (x)]. Hence our Ln (x) = (−1) Ln+k (x). n+k 842 Chapter 13 More Special Functions From the last two members of this equation, canceling the common factor zk , we obtain ∞ e−xz/(1−z)  k = Ln (x)zn , (1 − z)k+1 |z| < 1. n=0 (13.73) From this, for x = 0, the binomial expansion  ∞  ∞   1 −k − 1 n (−z) = Lkn (0)zn = n (1 − z)k+1 n=0 n=0 yields Lkn (0) = (n + k)! . n!k! (13.74) Recurrence relations can be derived from the generating function or by differentiating the Laguerre polynomial recurrence relations. Among the numerous possibilities are (n + 1)Lkn+1 (x) = (2n + k + 1 − x)Lkn (x) − (n + k)Lkn−1 (x), x dLkn (x) = nLkn (x) − (n + k)Lkn−1 (x). 
dx (13.75) (13.76) Thus, we obtain from differentiating Laguerre’s ODE once x dLn dL′′n dL′ + L′′n − L′n + (1 − x) n + n = 0, dx dx dx and eventually from differentiating Laguerre’s ODE k times x d k ′′ d k−1 ′′ dk d k−1 ′ dk ′ L + k L + n Ln = 0. L − k L + (1 − x) dx k n dx k n dx k dx k−1 n dx k−1 n Adjusting the index n → n + k, we have the associated Laguerre ODE x dLk (x) d 2 Lkn (x) + nLkn (x) = 0. + (k + 1 − x) n 2 dx dx (13.77) When associated Laguerre polynomials appear in a physical problem it is usually because that physical problem involves Eq. (13.77). The most important application is the bound states of the hydrogen atom, which are derived in upcoming Example 13.2.1. A Rodrigues representation of the associated Laguerre polynomial Lkn (x) = ex x −k d n  −x n+k e x n! dx n (13.78) may be obtained from substituting Eq. (13.59) into Eq. (13.70). Note that all these formulas for associated Legendre polynomials Lkn (x) reduce to the corresponding expressions for Ln (x) when k = 0. 13.2 Laguerre Functions 843 The associated Laguerre equation (13.77) is not self-adjoint, but it can be put in selfadjoint form by multiplying by e−x x k , which becomes the weighting function (Section 10.1). We obtain ∞ (n + k)! δmn . e−x x k Lkn (x)Lkm (x) dx = (13.79) n! 0 Equation (13.79) shows the same orthogonality interval (0, ∞) as that for the Laguerre polynomials, but with a new weighting function we have a new set of orthogonal polynomials, the associated Laguerre polynomials. By letting ψnk (x) = e−x/2 x k/2 Lkn (x), ψnk (x) satisfies the self-adjoint ODE   x 2n + k + 1 k 2 d 2 ψnk (x) dψnk (x) ψnk (x) = 0. + − + − x (13.80) + dx 4 2 4x dx 2 The ψnk (x) are sometimes called Laguerre functions. Equation (13.67) is the special case k = 0 of Eq. (13.79). A further useful form is given by defining7 kn (x) = e−x/2 x (k+1)/2 Lkn (x). Substitution into the associated Laguerre equation yields   d 2 kn (x) 1 2n + k + 1 k 2 − 1 − kn (x) = 0. 
+ − + 4 2x dx 2 4x 2 ∞ The corresponding normalization integral 0 |kn (x)|2 dx is ∞  2 (n + k)! (2n + k + 1). e−x x k+1 Lkn (x) dx = n! 0 (13.81) (13.82) (13.83) Notice that the kn (x) do not form an orthogonal set (except with x −1 as a weighting funcµ tion) because of the x −1 in the term (2n + k + 1)/2x. (The Laguerre functions Lν (x) in which the indices ν and µ are not integers may be defined using the confluent hypergeometric functions of Section 13.5.) Example 13.2.1 THE HYDROGEN ATOM The most important application of the Laguerre polynomials is in the solution of the Schrödinger equation for the hydrogen atom. This equation is − h¯ 2 2 Ze2 ∇ ψ− ψ = Eψ, 2m 4πǫ0 r (13.84) in which Z = 1 for hydrogen, 2 for ionized helium, and so on. Separating variables, we find that the angular dependence of ψ is the spherical harmonics YLM (θ, ϕ). The radial part, R(r), satisfies the equation   Ze2 h¯ 2 1 d h¯ 2 L(L + 1) 2 dR r − R = ER. (13.85) − R+ 2 2m r dr dr 4πǫ0 r 2m r2 7 This corresponds to modifying the function ψ in Eq. (13.80) to eliminate the first derivative (compare Exercise 9.6.11). 844 Chapter 13 More Special Functions For bound states, R → 0 as r → ∞, and R is finite at the origin, r = 0. We do not consider continuum states with positive energy. Only when the latter are included do hydrogen wave functions form a complete set. By use of the abbreviations (resulting from rescaling r to the dimensionless radial variable ρ) ρ = αr with α 2 = − 8mE h¯ 2 , λ= E < 0, mZe2 2πǫ0 α h¯ 2 , (13.86) Eq. (13.85) becomes     λ 1 L(L + 1) 1 d 2 dχ(ρ) χ(ρ) = 0, ρ + − − dρ ρ 4 ρ 2 dρ ρ2 (13.87) where χ(ρ) = R(ρ/α). A comparison with Eq. (13.82) for kn (x) shows that Eq. (13.87) is satisfied by 2L+1 (ρ), ρχ(ρ) = e−ρ/2 ρ L+1 Lλ−L−1 (13.88) in which k is replaced by 2L + 1 and n by λ − L − 1, upon using 1 d 2 dχ 1 d2 (ρχ). ρ = ρ dρ 2 ρ 2 dρ dρ We must restrict the parameter λ by requiring it to be an integer n, n = 1, 2, 3, . . . 
.8 This is necessary because the Laguerre function of nonintegral n would diverge9 as ρ n eρ , which is unacceptable for our physical problem, in which lim R(r) = 0. r→∞ This restriction on λ, imposed by our boundary condition, has the effect of quantizing the energy,  2 2 e Z2m . (13.89) En = − 2 2 4πǫ 2n h¯ 0 The negative sign reflects the fact that we are dealing here with bound states (E < 0), corresponding to an electron that is unable to escape to infinity, where the Coulomb potential goes to zero. Using this result for En , we have α= me2 h2 2πǫ0 ¯ · 2Z Z = , n na0 ρ= 2Z r, na0 with a0 = 4πǫ0 h¯ 2 , me2 the Bohr radius. 8 This is the conventional notation for λ. It is not the same n as the index n in k (x). n 9 This may be shown, as in Exercise 9.5.5. (13.90) 13.2 Laguerre Functions 845 Thus, the final normalized hydrogen wave function is written as ψnLM (r, θ, ϕ) =  2Z na0 3 (n − L − 1)! 2n(n + L)! 1/2 M e−αr/2 (αr)L L2L+1 n−L−1 (αr)YL (θ, ϕ). (13.91) Regular solutions exist for n ≥ L + 1, so the lowest state with L = 1 (called a 2P state) occurs only with n = 2.  Exercises 13.2.1 Show with the aid of the Leibniz formula that the series expansion of Ln (x) (Eq. (13.60)) follows from the Rodrigues representation (Eq. (13.59)). 13.2.2 (a) Using the explicit series form (Eq. (13.60)) show that L′n (0) = −n, L′′n (0) = 21 n(n − 1). (b) Repeat without using the explicit series form of Ln (x). 13.2.3 From the generating function derive the Rodrigues representation 13.2.4 ex x −k d n  −x n+k e x . n! dx n Derive the normalization relation (Eq. (13.79)) for the associated Laguerre polynomials. Lkn (x) = 13.2.5 Expand x r in a series of associated Laguerre polynomials Lkn (x), k fixed and n ranging from 0 to r (or to ∞ if r is not an integer). Hint. The Rodrigues form of Lkn (x) will be useful. r  (−1)n Lkn (x) , 0 ≤ x < ∞. ANS. x r = (r + k)!r! (n + k)!(r − n)! 
n=0 13.2.6 e−ax Expand in a series of associated Laguerre polynomials Lkn (x), k fixed and n ranging from 0 to ∞. (a) Evaluate directly the coefficients in your assumed expansion. (b) Develop the desired expansion from the generating function. ANS. e−ax = 13.2.7 Show that 0 Hint. Note that ∞ n ∞   a 1 Lkn (x), 1+a (1 + a)1+k e−x x k+1 Lkn (x)Lkn (x) dx = n=0 (n + k)! (2n + k + 1). n! xLkn = (2n + k + 1)Lkn − (n + k)Lkn−1 − (n + 1)Lkn+1 . 0 ≤ x < ∞. 846 Chapter 13 More Special Functions 13.2.8 Assume that a particular problem in quantum mechanics has led to the ODE  2 d 2y k − 1 2n + k + 1 1 + y=0 − − 2x 4 dx 2 4x 2 for nonnegative integers n, k. Write y(x) as y(x) = A(x)B(x)C(x), with the requirement that (a) A(x) be a negative exponential giving the required asymptotic behavior of y(x) and (b) B(x) be a positive power of x giving the behavior of y(x) for 0 ≤ x ≪ 1. Determine A(x) and B(x). Find the relation between C(x) and the associated Laguerre polynomial. ANS. A(x) = e−x/2 , 13.2.9 B(x) = x (k+1)/2 , C(x) = Lkn (x). From Eq. (13.91) the normalized radial part of the hydrogenic wave function is 1/2 2L+1 3 (n − L − 1)! RnL (r) = α e−αr (αr)L Ln−L−1 (αr), 2n(n + L)! in which α = 2Z/na0 = 2Zme2 /4πε0 h¯ 2 . Evaluate (a) (b) r = ∞ rRnL (αr)RnL (αr)r 2 dr, 0  −1  r = ∞ r −1 RnL (αr)RnL (αr)r 2 dr. 0 The quantity r is the average displacement of the electron from the nucleus, whereas r −1  is the average of the reciprocal displacement. ANS. r = 13.2.10 a0  2 3n − L(L + 1) , 2  −1  = r 1 . n 2 a0 Derive the recurrence relation for the hydrogen wave function expectation values:     s + 1 s + 2  s+1  (2L + 1)2 − (s + 1)2 a02 r s−1 = 0, r − (2s + 3)a0 r s + 2 4 n s s with s ≥ −2L − 1, r  ≡ r$ . Hint. Transform Eq. (13.87) into a form analogous to Eq. (13.80). Multiply by ρ s+2 u′ − cρ s+1 u. Here u = ρ. Adjust c to cancel terms that do not yield expectation values. 13.2.11 The hydrogen wave functions, Eq. 
(13.91), are mutually orthogonal, as they should be, since they are eigenfunctions of the self-adjoint Schrödinger equation ψn∗1 L1 M1 ψn2 L2 M2 r 2 dr d = δn1 n2 δL1 L2 δM1 M2 . 13.2 Laguerre Functions 847 Yet the radial integral has the (misleading) form ∞ e−αr/2 (αr)L Ln2L+1 (αr)e−αr/2 (αr)L Ln2L+1 (αr)r 2 dr, 1 −L−1 2 −L−1 0 which appears to match Eq. (13.83) and not the associated Laguerre orthogonality relation, Eq. (13.79). How do you resolve this paradox? ANS. The parameter α is dependent on n. The first three α, previously shown, are 2Z/n1 a0 . The last three are 2Z/n2 a0 . For n1 = n2 Eq. (13.83) applies. For n1 = n2 neither Eq. (13.79) nor Eq. (13.83) is applicable. 13.2.12 A quantum mechanical analysis of the Stark effect (parabolic coordinate) leads to the ODE     du 1 m2 1 2 d ξ + Eξ + L − − F ξ u = 0. dξ dξ 2 4ξ 4 Here F is a measure of the perturbation energy introduced by an external electric field. Find the unperturbed wave functions (F = 0) in terms of associated Laguerre polynomials. √ ANS. u(ξ ) = e−εξ/2 ξ m/2 Lm p (εξ ), with ε = −2E > 0, p = L/ε − (m + 1)/2, a nonnegative integer. 13.2.13 The wave equation for the three-dimensional harmonic oscillator is h¯ 2 2 1 ∇ ψ + Mω2 r 2 ψ = Eψ. 2M 2 Here ω is the angular frequency of the corresponding classical oscillator. Show that the radial part of ψ (in spherical polar coordinates ) may be written in terms of associated Laguerre functions of argument (βr 2 ), where β = Mω/h¯ . 2 Hint. As in Exercise 13.2.8, split off radial factors of r l and e−βr /2 . The associated l+1/2 Laguerre function will have the form L1/2(n−l−1) (βr 2 ). − 13.2.14 13.2.15 13.2.16 Write a computer program that will generate the coefficients as in the polynomial form of the Laguerre polynomial Ln (x) = ns=0 as x s . n Write a computer program that will transform a finite power series N n=0 an x into a N Laguerre series n=0 bn Ln (x). Use the recurrence relation, Eq. (13.62). Tabulate L10 (x) for x = 0.0(0.1)30.0. 
This will include the 10 roots of L10 . Beyond x = 30.0, L10 (x) is monotonic increasing. If graphic software is available, plot your results. Check value. Eighth root = 16.279. 13.2.17 Determine the 10 roots of L10 (x) using root-finding software. You may use your knowledge of the approximate location of the roots or develop a search routine to look for the roots. The 10 roots of L10 (x) are the evaluation points for the 10-point Gauss–Laguerre quadrature. Check your values by comparing with AMS-55, Table 25.9. (For the reference see footnote 4 in Chapter 5 or the General References at book’s end.) 848 Chapter 13 More Special Functions 13.2.18 Calculate the coefficients of a Laguerre series expansion (Ln (x), k = 0) of the exponential e−x . Evaluate the coefficients by the Gauss–Laguerre quadrature (compare Eq. (10.64)). Check your results against the values given in Exercise 13.2.6. Note. Direct application of the Gauss–Laguerre quadrature with f (x)Ln (x)e−x gives poor accuracy because of the extra e−x . Try a change of variable, y = 2x, so that the function appearing in the integrand will be simply Ln (y/2). 13.2.19 (a) Write a subroutine to calculate the Laguerre matrix elements ∞ Lm (x)Ln (x)x p e−x dx. Mmnp = 0 Include a check of the condition |m − n| ≤ p ≤ m + n. (If p is outside this range, Mmnp = 0. Why?) Note. A 10-point Gauss–Laguerre quadrature will give accurate results for m + n + p ≤ 19. (b) Call your subroutine to calculate a variety of Laguerre matrix elements. Check Mmn1 against Exercise 13.2.7. 13.2.20 13.2.21 13.2.22 13.3 Write a subroutine to calculate the numerical value of Lkn (x) for specified values of n, k, and x. Require that n and k be nonnegative integers and x ≥ 0. Hint. Starting with known values of Lk0 and Lk1 (x), we may use the recurrence relation, Eq. (13.75), to generate Lkn (x), n = 2, 3, 4, . . . . ∞ √ 2 Show that −∞ x n e−x Hn (xy) dx = πn!Pn (y), where Pn is a Legendre polynomial. 
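Exercises 13.2.20 and 13.2.22 can be combined in one short sketch (plain Python; Z = 1 and a₀ = 1 as the exercise specifies, and all names are illustrative): L^k_n(x) is built from the recurrence of Eq. (13.75), the radial function R_{nL}(r) from Eq. (13.91), and both the normalization and the mean radius ⟨r⟩ = ½[3n² − L(L + 1)] of Exercise 13.2.9 are checked by quadrature:

```python
import math

def assoc_laguerre(n, k, x):
    """L^k_n(x) by upward recurrence, Eq. (13.75):
    (n+1) L^k_{n+1} = (2n+k+1-x) L^k_n - (n+k) L^k_{n-1}."""
    l0, l1 = 1.0, k + 1.0 - x
    if n == 0:
        return l0
    for m in range(1, n):
        l0, l1 = l1, ((2 * m + k + 1 - x) * l1 - (m + k) * l0) / (m + 1)
    return l1

def radial(n, L, r):
    """Normalized hydrogen R_{nL}(r), Eq. (13.91), with Z = 1, a0 = 1."""
    alpha = 2.0 / n                       # alpha = 2Z/(n a0)
    rho = alpha * r
    norm = math.sqrt(alpha ** 3 * math.factorial(n - L - 1)
                     / (2 * n * math.factorial(n + L)))
    return norm * math.exp(-rho / 2.0) * rho ** L \
        * assoc_laguerre(n - L - 1, 2 * L + 1, rho)

def moment(n, L, power, rmax=80.0, steps=40000):
    """Trapezoidal estimate of the integral of R_{nL}^2 r^(2+power) dr."""
    h = rmax / steps
    s = 0.0
    for i in range(steps + 1):
        r = i * h
        w = 0.5 if i in (0, steps) else 1.0
        s += w * radial(n, L, r) ** 2 * r ** (2 + power)
    return s * h

for n, L in ((1, 0), (2, 0), (2, 1), (3, 1)):
    assert abs(moment(n, L, 0) - 1.0) < 1e-6          # normalization
    mean_r = 0.5 * (3 * n * n - L * (L + 1))          # Exercise 13.2.9 (a0 = 1)
    assert abs(moment(n, L, 1) - mean_r) < 1e-4
```

Note that α = 2Z/(na₀) depends on n, which is exactly the point of Exercise 13.2.11: radial functions with different n are orthogonal through the full Schrödinger eigenvalue problem, not through Eq. (13.79) alone.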
Write a program to calculate the normalized hydrogen radial wave function ψnL (r). This is ψnLM of Eq. (13.91), omitting the spherical harmonic YLM (θ, ϕ). Take Z = 1 and a0 = 1 (which means that r is being expressed in units of Bohr radii). Accept n and L as input data. Tabulate ψnL (r) for r = 0.0(0.2)R with R taken large enough to exhibit the significant features of ψ . This means roughly R = 5 for n = 1, R = 10 for n = 2, and R = 30 for n = 3. CHEBYSHEV POLYNOMIALS In this section two types of Chebyshev polynomials are developed as special cases of ultraspherical polynomials. Their properties follow from the ultraspherical polynomial generating function. The primary importance of the Chebyshev polynomials is in numerical analysis. Generating Functions In Section 12.1 the generating function for the ultraspherical, or Gegenbauer, polynomials ∞  1 Cn(α) (x)t n , = (1 − 2xt + t 2 )α n=0 |x| < 1, |t| < 1 (13.92) was mentioned, with α = 21 giving rise to the Legendre polynomials. In this section we first take α = 1 and then α = 0 to generate two sets of polynomials known as the Chebyshev polynomials. 13.3 Chebyshev Polynomials 849 Type II (1) With α = 1 and Cn (x) = Un (x), Eq. (13.92) gives ∞  1 Un (x)t n , = 1 − 2xt + t 2 |x| < 1, n=0 |t| < 1. (13.93) These functions Un (x) generated by (1 − 2xt + t 2 )−1 are labeled Chebyshev polynomials type II. Although these polynomials have few applications in mathematical physics, one unusual application is in the development of four-dimensional spherical harmonics used in angular momentum theory. Type I With α = 0 there is a difficulty. Indeed, our generating function reduces to the constant 1. We may avoid this problem by first differentiating Eq. (13.92) with respect to t. This yields ∞  −α(−2x + 2t) nCn(α) (x)t n−1 , = (1 − 2xt + t 2 )α+1 (13.94) (α)  ∞  n Cn (x) n−1 x−t t . = 2 α (1 − 2xt + t 2 )α+1 (13.95) n=1 or n=1 (0) We define Cn (x) by (α) Cn (x) . 
α→0 α Cn(0) (x) = lim (13.96) The purpose of differentiating with respect to t was to get α in the denominator and to create an indeterminate form. Now multiplying Eq. (13.95) by 2t and adding 1 = (1 − 2xt + t 2 )/(1 − 2xt + t 2 ), we obtain ∞ n 1 − t2 C (0) (x)t n . =1+2 2 2 n 1 − 2xt + t (13.97) n=1 We define Tn (x) by + 1, n=0 n (0) (13.98) n > 0. C (x), 2 n Notice the special treatment for n = 0. This is similar to the treatment of the n = 0 term in (0) the Fourier series. Also, note that Cn is the limit indicated in Eq. (13.96) and not a literal substitution of α = 0 into the generating function series. With these new labels, Tn (x) = ∞  1 − t2 Tn (x)t n , = T (x) + 2 0 1 − 2xt + t 2 n=1 |x| ≤ 1, |t| < 1. (13.99) 850 Chapter 13 More Special Functions We call Tn (x) the type I Chebyshev polynomials. Note that the notation and spelling of the name for these functions differs from reference to reference. Here we follow the usage of AMS-55 (for the full reference see footnote 4 in Chapter 5). Differentiating the generating function (Eqs. (13.99)) with respect to t and multiplying by the denominator, 1 − 2xt + t 2 , we obtain  ∞ ∞    nTn (x)t n−1 Tn (x)t n = 1 − 2xt + t 2 −t − (t − x) T0 (x) + 2 n=1 n=1 = ∞   nTn t n−1 − 2xnTn t n + nTn t n+1 , n=1 from which the recurrence relation Tn+1 (x) − 2xTn (x) + Tn−1 (x) = 0 (13.100) follows by shifting the summation index so as to get the same power, t n , in each term and then comparing coefficients of t n . Similarly treating Eq. (13.93) we find − ∞   2(t − x) 2 nUn (x)t n−1 = 1 − 2xt + t 1 − 2xt + t 2 n=1 from which the recursion relation Un+1 (x) − 2xUn (x) + Un−1 (x) = 0 (13.101) follows upon comparing coefficients of like powers of t (see Table 13.3). Then, using the generating functions for the first few values of n and these recurrence relations for the higher-order polynomials, we get Tables 13.4 and 13.5 (see also Figs. 13.5 and 13.6). As with the Hermite polynomials, Section 13.1, the recurrence relations, Eqs. 
(13.100) and (13.101), together with the known values of T0 (x), T1 (x), U0 (x), and U1 (x), provide a convenient — that is, for a computer — means of getting the numerical value of any Tn (x0 ) or Un (x0 ), with x0 a given number. Table 13.3 Recursion relationa Pn+1 (x) = (An x + Bn )Pn (x) − Cn Pn−1 (x) Pn (x) Legendre Chebyshev I Shifted Chebyshev I Chebyshev II Shifted Chebyshev II Associated Laguerre Hermite Pn (x) Tn (x) Tn∗ (x) Un (x) Un∗ (x) (k) Ln (x) Hn (x) An Bn Cn 2n+1 n+1 0 0 −2 0 −2 1 n+1 2 4 2 4 1 − n+1 2 a P denotes any of the orthogonal polynomials. n 1 1 1 1 2n+k+1 n+1 n+k n+1 0 2n 13.3 Chebyshev Polynomials Table 13.4 Chebyshev polynomials, type I Table 13.5 Chebyshev polynomials, type II T0 = 1 T1 = x T2 = 2x 2 − 1 T3 = 4x 3 − 3x T4 = 8x 4 − 8x 2 + 1 T5 = 16x 5 − 20x 3 + 5x T6 = 32x 6 − 48x 4 + 18x 2 − 1 U0 = 1 U1 = 2x U2 = 4x 2 − 1 U3 = 8x 3 − 4x U4 = 16x 4 − 12x 2 + 1 U5 = 32x 5 − 32x 3 + 6x U6 = 64x 6 − 80x 4 + 24x 2 − 1 FIGURE 13.5 Chebyshev polynomials Tn (x). FIGURE 13.6 Chebyshev polynomials Un (x). 851 852 Chapter 13 More Special Functions Again, from the generating functions, we can obtain the special values of various polynomials: Tn (−1) = (−1)n , Tn (1) = 1, T2n (0) = (−1)n , Un (1) = n + 1, U2n (0) = (−1)n , T2n+1 (0) = 0; Un (−1) = (−1)n (n + 1), U2n+1 (0) = 0. (13.102) (13.103) For example, comparing the power series ∞ 1 − t2 1 + t  n = t + t n+1 = 2 1−t (1 − t) n=0 with Eq. (13.99) for x = 1 gives Tn (1), and for x = −1 a similar expansion of (1 − t)/(1 + t) gives Tn (−1), while replacing t → −t 2 in the first power series yields Tn (0). The power series for (1 ± t)−2 and (1 + t 2 )−1 generate the corresponding Un (±1), Un (0). The parity relations for Tn and Un follow from their generating functions, with the substitutions t → −t, x → −x, which leave them invariant; these are Tn (x) = (−1)n Tn (−x), Un (x) = (−1)n Un (−x). 
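The computational remark above is worth making concrete: T_n and U_n share the same recurrence and differ only in the n = 1 seed, so one routine serves both. The sketch below (plain Python; names illustrative) reproduces entries of Tables 13.4 and 13.5 and the special values of Eqs. (13.102) and (13.103):

```python
def cheb(n, x, kind=1):
    """T_n(x) (kind=1) or U_n(x) (kind=2) by the shared recurrence
    p_{n+1} = 2x p_n - p_{n-1}; only the n = 1 seed differs:
    T_1 = x, U_1 = 2x."""
    p0 = 1.0
    p1 = x if kind == 1 else 2.0 * x
    if n == 0:
        return p0
    for _ in range(1, n):
        p0, p1 = p1, 2.0 * x * p1 - p0
    return p1

x = 0.7
# Table 13.4: T_6 = 32x^6 - 48x^4 + 18x^2 - 1
assert abs(cheb(6, x) - (32 * x**6 - 48 * x**4 + 18 * x**2 - 1)) < 1e-12
# Table 13.5: U_6 = 64x^6 - 80x^4 + 24x^2 - 1
assert abs(cheb(6, x, kind=2) - (64 * x**6 - 80 * x**4 + 24 * x**2 - 1)) < 1e-12
# Eq. (13.103): U_n(1) = n + 1
assert all(cheb(n, 1.0, kind=2) == n + 1 for n in range(10))
```

For |x| ≤ 1 the recurrence is well conditioned, which is one reason Chebyshev expansions are the workhorse of numerical approximation.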
(13.104)

Rodrigues' representations of T_n(x) and U_n(x) are

  T_n(x) = [(−1)^n π^{1/2} (1 − x^2)^{1/2} / (2^n (n − 1/2)!)] d^n/dx^n [(1 − x^2)^{n−1/2}]   (13.105)

and

  U_n(x) = [(−1)^n (n + 1) π^{1/2} / (2^{n+1} (n + 1/2)! (1 − x^2)^{1/2})] d^n/dx^n [(1 − x^2)^{n+1/2}].   (13.106)

Recurrence Relations — Derivatives

Differentiation of the generating functions for T_n(x) and U_n(x) with respect to the variable x leads to a variety of recurrence relations involving derivatives. For example, from Eq. (13.99) we thus obtain

  (1 − 2xt + t^2) Σ_{n=1}^∞ T_n′(x) t^n = t [T_0(x) + 2 Σ_{n=1}^∞ T_n(x) t^n],

from which we extract the recursion

  2 T_{n−1}(x) = T_n′(x) − 2x T_{n−1}′(x) + T_{n−2}′(x),   (13.107)

which is the derivative of Eq. (13.100) for n → n − 1. Among the more useful recursions we thus find are

  (1 − x^2) T_n′(x) = −nx T_n(x) + n T_{n−1}(x)   (13.108)

and

  (1 − x^2) U_n′(x) = −nx U_n(x) + (n + 1) U_{n−1}(x).   (13.109)

Manipulating a variety of these recursions as in Section 12.2 for Legendre polynomials, one can eliminate the index n − 1 in favor of T_n′′ and establish that T_n(x), the Chebyshev polynomial of type I, satisfies the ODE

  (1 − x^2) T_n′′(x) − x T_n′(x) + n^2 T_n(x) = 0.   (13.110)

The Chebyshev polynomial of type II, U_n(x), satisfies

  (1 − x^2) U_n′′(x) − 3x U_n′(x) + n(n + 2) U_n(x) = 0.   (13.111)

Chebyshev polynomials may be defined starting from these ODEs, but our emphasis has been on generating functions. The ultraspherical equation

  (1 − x^2) d^2 C_n^(α)(x)/dx^2 − (2α + 1) x dC_n^(α)(x)/dx + n(n + 2α) C_n^(α)(x) = 0   (13.112)

is a generalization of these differential equations, reducing to Eq. (13.110) for α = 0 and Eq. (13.111) for α = 1 (and to Legendre's equation for α = 1/2).

Trigonometric Form

At this point in the development of the properties of the Chebyshev solutions it is beneficial to change variables, replacing x by cos θ. With x = cos θ and d/dx = (−1/sin θ)(d/dθ), we verify that

  (1 − x^2) d^2 T_n/dx^2 = d^2 T_n/dθ^2 − cot θ dT_n/dθ,   x T_n′ = −cot θ dT_n/dθ.

Adding these terms, Eq. (13.110) becomes

  d^2 T_n/dθ^2 + n^2 T_n = 0,   (13.113)

the simple harmonic oscillator equation with solutions cos nθ and sin nθ. The special values (boundary conditions at x = 0, 1) identify

  T_n = cos nθ = cos n(arccos x).   (13.114a)

A second linearly independent solution of Eqs. (13.110) and (13.113) is labeled

  V_n = sin nθ = sin n(arccos x).   (13.114b)

The corresponding solutions of the type II Chebyshev equation, Eq. (13.111), become

  U_n = sin(n + 1)θ / sin θ,   (13.115a)
  W_n = cos(n + 1)θ / sin θ.   (13.115b)

The two sets of solutions, type I and type II, are related by

  V_n(x) = (1 − x^2)^{1/2} U_{n−1}(x),   (13.116a)
  W_n(x) = (1 − x^2)^{−1/2} T_{n+1}(x).   (13.116b)

As already seen from the generating functions, T_n(x) and U_n(x) are polynomials. Clearly, V_n(x) and W_n(x) are not polynomials. From

  T_n(x) + i V_n(x) = cos nθ + i sin nθ = (cos θ + i sin θ)^n = [x + i(1 − x^2)^{1/2}]^n,   |x| ≤ 1,   (13.117)

we obtain the expansions

  T_n(x) = x^n − (n choose 2) x^{n−2}(1 − x^2) + (n choose 4) x^{n−4}(1 − x^2)^2 − · · ·   (13.118a)

and

  V_n(x) = (1 − x^2)^{1/2} [(n choose 1) x^{n−1} − (n choose 3) x^{n−3}(1 − x^2) + · · ·].   (13.118b)

From the generating functions, or from the ODEs, the power-series representations are

  T_n(x) = (n/2) Σ_{m=0}^{[n/2]} (−1)^m [(n − m − 1)! / (m!(n − 2m)!)] (2x)^{n−2m}   (13.119a)

for n ≥ 1, with [n/2] the largest integer not exceeding n/2, and

  U_n(x) = Σ_{m=0}^{[n/2]} (−1)^m [(n − m)! / (m!(n − 2m)!)] (2x)^{n−2m}.   (13.119b)

Orthogonality

If Eq. (13.110) is put into self-adjoint form (Section 10.1), we obtain w(x) = (1 − x^2)^{−1/2} as a weighting factor. For Eq. (13.111) the corresponding weighting factor is (1 − x^2)^{+1/2}.
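Because the substitution x = cos θ converts these weighted Chebyshev integrals into Fourier integrals, the orthogonality with weight (1 − x^2)^{−1/2} is easy to verify numerically. A short Python sketch (our own illustration; trapezoidal rule and function name are ours):

```python
import math

def chebyshev_overlap(m, n, panels=20000):
    """Approximate  integral_{-1}^{1} T_m(x) T_n(x) (1 - x^2)^(-1/2) dx  by
    substituting x = cos(theta), which turns it into the Fourier integral
    integral_0^pi cos(m theta) cos(n theta) d theta  (trapezoidal rule)."""
    h = math.pi / panels
    total = 0.0
    for i in range(panels + 1):
        theta = i * h
        w = 0.5 if i in (0, panels) else 1.0
        total += w * math.cos(m * theta) * math.cos(n * theta)
    return total * h

assert abs(chebyshev_overlap(3, 5)) < 1e-9                   # m != n: orthogonal
assert abs(chebyshev_overlap(4, 4) - math.pi / 2.0) < 1e-9   # m = n != 0
assert abs(chebyshev_overlap(0, 0) - math.pi) < 1e-9         # m = n = 0
```

The three outcomes reproduce the normalization pattern of the orthogonality integrals stated next.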
The resulting orthogonality integrals,  m = n,  1 0,  π 2 −1/2 , m = n = 0, (13.120) Tm (x)Tn (x) 1 − x dx =  −1 2 π, m = n = 0,  m = n,  1 0,  π 2 −1/2 , m = n = 0, (13.121) Vm (x)Vn (x) 1 − x dx =  −1 2 0, m = n = 0, 1  1/2 π (13.122) Um (x)Un (x) 1 − x 2 dx = δm,n , 2 −1 13.3 Chebyshev Polynomials 855 and 1  1/2 π Wm (x)Wn (x) 1 − x 2 dx = δm,n , 2 −1 (13.123) are a direct consequence of the Sturm–Liouville theory, Chapter 10. The normalization values may best be obtained by using x = cos θ and converting these four integrals into Fourier normalization integrals (for the half-period interval [0, π]). Exercises 13.3.1 Another Chebyshev generating function is ∞  1 − xt Xn (x)t n , = 1 − 2xt + t 2 |t| < 1. n=0 How is Xn (x) related to Tn (x) and Un (x)? 13.3.2 Given  1 − x 2 Un′′ (x) − 3xUn′ (x) + n(n + 2)Un (x) = 0, show that Vn (x) (Eq. (13.116a)) satisfies  1 − x 2 Vn′′ (x) − xVn′ (x) + n2 Vn (x) = 0, which is Chebyshev’s equation. 13.3.3 Show that the Wronskian of Tn (x) and Vn (x) is given by Tn (x)Vn′ (x) − Tn′ (x)Vn (x) = − n . (1 − x 2 )1/2 This verifies that Tn and Vn (n = 0) are independent solutions of Eq. (13.110). Conversely, for n = 0, we do not have linear independence. What happens at n = 0? Where is the “second” solution? 13.3.4 13.3.5 Show that Wn (x) = (1 − x 2 )−1/2 Tn+1 (x) is a solution of  1 − x 2 Wn′′ (x) − 3xWn′ (x) + n(n + 2)Wn (x) = 0. Evaluate the Wronskian of Un (x) and Wn (x) = (1 − x 2 )−1/2 Tn+1 (x). 13.3.6 Vn (x) = (1 − x 2 )1/2 Un−1 (x) is not defined for n = 0. Show that a second and independent solution of the Chebyshev differential equation for Tn (x) (n = 0) is V0 (x) = arccos x (or arcsin x). 13.3.7 Show that Vn (x) satisfies the same three-term recurrence relation as Tn (x) (Eq. (13.100)). 13.3.8 Verify the series solutions for Tn (x) and Un (x) (Eqs. (13.109a) and (13.119b)). 856 Chapter 13 More Special Functions 13.3.9 Transform the series form of Tn (x), Eq. (13.119a), into an ascending power series. n ANS. 
T2n (x) = (−1) n T2n+1 (x) = 13.3.10 n  (−1)m m=0 n  2n + 1 2 m=0 n ≥ 1, (−1)m+n (n + m)! (2x)2m+1 . (n − m)!(2m + 1)! Rewrite the series form of Un (x), Eq. (13.119b), as an ascending power series. ANS. U2n (x) = (−1)n n  (−1)m m=0 n  n U2n+1 (x) = (−1) 13.3.11 (n + m − 1)! (2x)2m , (n − m)!(2m)! (n + m)! (2x)2m , (n − m)!(2m)! (−1)m m=0 (n + m + 1)! (2x)2m+1 . (n − m)!(2m + 1)! Derive the Rodrigues representation of Tn (x), Tn (x) = n−1/2 (−1)n π 1/2 (1 − x 2 )1/2 d n  1 − x2 . n 1 dx 2n (n − 2 )! Hint. One possibility is to use the hypergeometric function relation   −z −a a, c − b; c; , F F (a, b; c; z) = (1 − z) 2 1 2 1 1−z with z = (1−x)/2. An alternate approach is to develop a first-order differential equation for y = (1 − x 2 )n−1/2 . Repeated differentiation of this equation leads to the Chebyshev equation. 13.3.12 (a) (b) 13.3.13 From the differential equation for Tn (in self-adjoint form) show that 1 1/2 dTm (x) dTn (x)  1 − x2 dx = 0, m = n. dx dx −1 Confirm the preceding result by showing that dTn (x) = nUn−1 (x). dx The expansion of a power of x in a Chebyshev series leads to the integral 1 dx . Imn = x m Tn (x) √ 1 − x2 −1 (a) Show that this integral vanishes for m < n. (b) Show that this integral vanishes for m + n odd. 13.3.14 Evaluate the integral Imn = 1 x m Tn (x) √ dx 1 − x2 for m ≥ n and m + n even by each of two methods: −1 13.3 Chebyshev Polynomials 857 (a) Operate with x as the variable replacing Tn by its Rodrigues representation. (b) Using x = cos θ transform the integral to a form with θ as the variable. ANS. Imn = π m! (m − n − 1)!! , (m − n)! (m + n)!! 13.3.16 Establish the following bounds, −1 ≤ x ≤ 1:    d  (a) |Un (x)| ≤ n + 1, (b)  Tn (x) ≤ n2 . dx 13.3.17 Verify the orthogonality-normalization integrals for 13.3.15 m ≥ n, m + n even. (a) Establish the following bound, −1 ≤ x ≤ 1: |Vn (x)| ≤ 1. (b) Show that Wn (x) is unbounded in −1 ≤ x ≤ 1. (a) Tm (x), Tn (x), (c) Um (x), Un (x), (b) Vm (x), Vn (x), (d) Wm (x), Wn (x). Hint. 
All these can be converted to Fourier orthogonality-normalization integrals. 13.3.18 Show whether Tm (x) and Vn (x) are or are not orthogonal over the interval [−1, 1] with respect to the weighting factor (1 − x 2 )−1/2 . (b) Um (x) and Wn (x) are or are not orthogonal over the interval [−1, 1] with respect to the weighting factor (1 − x 2 )1/2 . (a) 13.3.19 Derive (a) Tn+1 (x) + Tn−1 (x) = 2xTn (x), (b) Tm+n (x) + Tm−n (x) = 2Tm (x)Tn (x), from the “corresponding” cosine identities. 13.3.20 A number of equations relate the two types of Chebyshev polynomials. As examples show that Tn (x) = Un (x) − xUn−1 (x) and 13.3.21 Show that  1 − x 2 Un (x) = xTn+1 (x) − Tn+2 (x). dVn (x) Tn (x) = −n √ dx 1 − x2 (a) using the trigonometric forms of Vn and Tn , (b) using the Rodrigues representation. 858 Chapter 13 More Special Functions 13.3.22 Starting with x = cos θ and Tn (cos θ ) = cos nθ , expand  iθ  e + e−iθ k k x = 2 and show that      k k x = k−1 Tk (x) + + ··· , T T (x) + 2 k−4 1 k−2 2   the series in brackets terminating with mk T1 (x) for k = 2m + 1 or 12 mk T0 for k = 2m. k 1 13.3.23 Calculate and tabulate the Chebyshev functions V1 (x), V2 (x), and V3 (x) for x = −1.0(0.1)1.0. (b) A second solution of the Chebyshev differential equation, Eq. (13.100), for n = 0 is y(x) = sin−1 x. Tabulate and plot this function over the same range: −1.0(0.1)1.0. 13.3.24 Write a computer program that will generate the coefficients as in the polynomial form of the Chebyshev polynomial Tn (x) = ns=0 as x s . 13.3.25 13.3.26 13.3.27 (a) Tabulate T10 (x) for 0.00(0.01)1.00. This will include the five positive roots of T10 . If graphics software is available, plot your results. Determine the five positive roots of T10 (x) by calling a root-finding subroutine. Use your knowledge of the approximate location of these roots from Exercise 13.3.25 or write a search routine to look for the roots. 
These five positive roots (and their negatives) are the evaluation points of the 10-point Gauss–Chebyshev quadrature method.  Check values. xk = cos (2k − 1)π/20 , k = 1, 2, 3, 4, 5. Develop the following Chebyshev expansions (for [−1, 1]):  ∞   1/2 2  2 −1 1−2 1 − x2 = 4s − 1 T2s (x) . π s=1  ∞  4 +1, 0 < x ≤ 1 = (b) (−1)s (2s + 1)−1 T2s+1 (x). −1, −1 ≤ x < 0 π (a) s=0 13.3.28 (a) For the interval [−1, 1] show that |x| = = (b) ∞ 1  (2s − 3)!! + (4s + 1)P2s (x) (−1)s+1 2 (2s + 2)!! s=1 ∞ 4 1 2 + T2s (x). (−1)s+1 2 π π 4s − 1 s=1 Show that the ratio of the coefficient of T2s (x) to that of P2s (x) approaches (πs)−1 as s → ∞. This illustrates the relatively rapid convergence of the Chebyshev series. 13.4 Hypergeometric Functions 859 Hint. With the Legendre recurrence relations, rewrite xPn (x) as a linear combination of derivatives. The trigonometric substitution x = cos θ, Tn (x) = cos nθ is most helpful for the Chebyshev part. 13.3.29 Show that ∞  −2 π2 =1+2 4s 2 − 1 . 8 s=1 Hint. Apply Parseval’s identity (or the completeness relation) to the results of Exercise 13.3.28. 13.3.30 Show that ∞ 4 1 π − T2n+1 (x). 2 π (2n + 1)2 (a) cos−1 x = (b) ∞ 1 4 sin−1 x = T2n+1 (x). π (2n + 1)2 n=0 n=0 13.4 HYPERGEOMETRIC FUNCTIONS In Chapter 9 the hypergeometric equation10  x(1 − x)y ′′ (x) + c − (a + b + 1)x y ′ (x) − ab y(x) = 0 (13.124) was introduced as a canonical form of a linear second-order ODE with regular singularities at x = 0, 1, and ∞. One solution is y(x) = 2 F1 (a, b; c; x) =1+ a(a + 1)b(b + 1) x 2 a·b x + + ··· , c 1! c(c + 1) 2! c = 0, −1, −2, −3, . . . , (13.125) which is known as the hypergeometric function or hypergeometric series. The range of convergence for c > a + b is |x| < 1 and x = 1, and is x = −1 for c > a + b − 1. In terms of the often-used Pochhammer symbol, (a)n = a(a + 1)(a + 2) · · · (a + n − 1) = (a)0 = 1, (a + n − 1)! , (a − 1)! 
(13.126) the hypergeometric function becomes 2 F1 (a, b; c; x) = ∞  (a)n (b)n x n n=0 (c)n 10 This is sometimes called Gauss’ ODE. The solutions then become Gauss functions. n! . (13.127) 860 Chapter 13 More Special Functions In this form the subscripts 2 and 1 become clear. The leading subscript 2 indicates that two Pochhammer symbols appear in the numerator and the final subscript 1 indicates one Pochhammer symbol in the denominator.11 (The confluent hypergeometric function 1 F1 with one Pochhammer symbol in the numerator and one in the denominator appears in Section 13.5.) From the form of Eq. (13.125) we see that the parameter c may not be zero or a negative integer. On the other hand, if a or b equals 0 or a negative integer, the series terminates and the hypergeometric function becomes a polynomial. Many more or less elementary functions can be represented by the hypergeometric function.12 Comparing the power series we verify that ln(1 + x) = x 2 F1 (1, 1; 2; −x). For the complete elliptic integrals K and E,   π/2  2  1 1 π 2 2 2 −1/2 dθ = 2 F1 , ; 1; k , 1 − k sin θ K k = 2 2 2 0   π/2  2  1/2 1 1 π 2 2 E k = 1 − k sin θ dθ = 2 F1 , − ; 1; k . 2 2 2 0 (13.128) (13.129) (13.130) The explicit series forms and other properties of the elliptic integrals are developed in Section 5.8. The hypergeometric equation as a second-order linear ODE has a second independent solution. The usual form is y(x) = x 1−c 2 F1 (a + 1 − c, b + 1 − c; 2 − c; x), c = 2, 3, 4, . . . . (13.131) If c is an integer either the two solutions coincide or (barring a rescue by integral a or integral b) one of the solutions will blow up (see Exercise 13.4.1). In such a case the second solution is expected to include a logarithmic term. Alternate forms of the hypergeometric ODE include      d 2  1−z 1−z 2 d − (a + b + 1)z − (a + b + 1 − 2c) y y 1−z 2 dz 2 dz2   1−z − ab y = 0, (13.132) 2   d2  1 − 2c d  2 2 1 − z2 y z − 4ab y z2 = 0. 
y(z ) − (2a + 2b + 1)z + 2 z dz dz 11 The Pochhammer symbol is often useful in other expressions involving factorials, for instance, (1 − z)−a = ∞  (a)n zn /n!, n=0 12 With three parameters, a, b, and c, we can represent almost anything. |z| < 1. (13.133) 13.4 Hypergeometric Functions 861 Contiguous Function Relations The parameters a, b, and c enter in the same way as the parameter n of Bessel, Legendre, and other special functions. As we found with these functions, we expect recurrence relations involving unit changes in the parameters a, b, and c. The usual nomenclature for the hypergeometric functions, in which one parameter changes by + or −1, is a “contiguous function.” Generalizing this term to include simultaneous unit changes in more than one parameter, we find 26 functions contiguous to 2 F1 (a, b; c; x). Taking them two at a time, we can develop the formidable total of 325 equations among the contiguous functions. One typical example is '  ( (a − b) c(a + b − 1) + 1 − a 2 − b2 + (a − b)2 − 1 (1 − x) 2 F1 (a, b; c; x) = (c − a)(a − b + 1)b 2 F1 (a − 1, b + 1; c; x) (c − b)(a − b − 1)a 2 F1 (a + 1, b − 1; c; x). (13.134) Another contiguous function relation appears in Exercise 13.4.10. Hypergeometric Representations Since the ultraspherical equation (13.112) in Section 13.3 is a special case of Eq. (13.124), we see that ultraspherical functions (and Legendre and Chebyshev functions) may be expressed as hypergeometric functions. For the ultraspherical function we obtain   1−x (n + 2β)! −n, n + 2β + 1; 1 + β; (13.135) F Cnβ (x) = β 2 1 2 n!β! 2 upon comparing its ODE with Eq. (13.124) and the power-series solutions. For Legendre and associated Legendre functions we find similarly   1−x , (13.136) Pn (x) = 2 F1 −n, n + 1; 1; 2   1−x (n + m)! (1 − x 2 )m/2 . (13.137) m − n, m + n + 1; m + 1; Pnm (x) = F 2 1 (n − m)! 2m m! 2 Alternate forms are   1 1 2 (2n)! −n, n + ; ; x F 2 1 2 2 22n n!n!   1 1 2 (2n − 1)!! , = (−1)n 2 F1 −n, n + ; ; x (2n)!! 
2 2   3 3 2 n (2n + 1)! x P2n+1 (x) = (−1) 2n 2 F1 −n, n + ; ; x 2 2 2 n!n!   3 3 2 (2n + 1)!! −n, n + x. ; ; x = (−1)n F 2 1 (2n)!! 2 2 P2n (x) = (−1)n (13.138) (13.139) 862 Chapter 13 More Special Functions In terms of hypergeometric functions, the Chebyshev functions become   1 1−x , Tn (x) = 2 F1 −n, n; ; 2 2   3 1−x , Un (x) = (n + 1) 2 F1 −n, n + 2; ; 2 2   3 1−x 2 Vn (x) = n 1 − x 2 F1 −n + 1, n + 1; ; . 2 2 (13.140) (13.141) (13.142) The leading factors are determined by direct comparison of complete power series, comparison of coefficients of particular powers of the variable, or evaluation at x = 0 or 1, and so on. The hypergeometric series may be used to define functions with nonintegral indices. The physical applications are minimal. Exercises 13.4.1 (a) For c, an integer, and a and b nonintegral, show that 2 F1 (a, b; c; x) (b) and x 1−c 2 F1 (a + 1 − c, b + 1 − c; 2 − c; x) yield only one solution to the hypergeometric equation. What happens if a is an integer, say, a = −1, and c = −2? 13.4.2 Find the Legendre, Chebyshev I, and Chebyshev II recurrence relations corresponding to the contiguous hypergeometric function Eq. (13.134). 13.4.3 Transform the following polynomials into hypergeometric functions of argument x 2 . (a) T2n (x); (b) x −1 T2n+1 (x); (c) U2n (x); (d) x −1 U2n+1 (x). ANS. (a) T2n (x) = (−1)n 2 F1 (−n, n; 21 ; x 2 ). (b) x −1 T2n+1 (x) = (−1)n (2n + 1) 2 F1 (−n, n + 1; 32 ; x 2 ). (c) U2n (x) = (−1)n 2 F1 (−n, n + 1; 12 ; x 2 ). (d) x −1 U2n+1 (x) = (−1)n (2n + 2) 2 F1 (−n, n + 2; 32 ; x 2 ). 13.4.4 Derive or verify the leading factor in the hypergeometric representations of the Chebyshev functions. 13.4.5 Verify that the Legendre function of the second kind, Qν (z), is given by   ν 1 ν ν 3 −2 π 1/2 ν! , + , + 1; + ; z Qν (z) = 2 F1 2 2 2 2 2 (ν + 12 )!(2z)ν+1 |z| > 1, 13.4.6 | arg z| < π, ν = −1, −2, −3, . . . . Analogous to the incomplete gamma function, we may define an incomplete beta function by x Bx (a, b) = t a−1 (1 − t)b−1 dt. 
0 13.5 Confluent Hypergeometric Functions 863 Show that Bx (a, b) = a −1 x a 2 F1 (a, 1 − b; a + 1; x). 13.4.7 Verify the integral representation 2 F1 (a, b; c; z) = Ŵ(c) Ŵ(b)Ŵ(c − b) 1 0 t b−1 (1 − t)c−b−1 (1 − tz)−a dt. What restrictions must be placed on the parameters b and c and on the variable z? Note. The restriction on |z| can be dropped — analytic continuation. For nonintegral a the real axis in the z-plane from 1 to ∞ is a cut line. Hint. The integral is suspiciously like a beta function and can be expanded into a series of beta functions. ANS. ℜ(c) > ℜ(b) > 0, and |z| < 1. 13.4.8 Prove that 2 F1 (a, b; c; 1) = Ŵ(c)Ŵ(c − a − b) , Ŵ(c − a)Ŵ(c − b) c = 0, −1, −2, . . . c > a + b. Hint. Here is a chance to use the integral representation, Exercise 13.4.7. 13.4.9 Prove that −a 2 F1 (a, b; c; x) = (1 − x) 2 F1   −x a, c − b; c; . 1−x Hint. Try an integral representation. Note. This relation is useful in developing a Rodrigues representation of Tn (x) (compare Exercise 13.3.11). 13.4.10 Verify that 2 F1 (−n, b; c; 1) = (c − b)n . (c)n Hint. Here is a chance to use the contiguous function relation [2a − c + (b − a)x] · 2 F1 (a, b; c; x) = a(1 − x) 2 F1 (a + 1, b; c; x) − (c − a) 2 F1 (a − 1, b; c; x) and mathematical induction. Alternatively, use the integral representation and the beta function. 13.5 CONFLUENT HYPERGEOMETRIC FUNCTIONS The confluent hypergeometric equation13 xy ′′ (x) + (c − x)y ′ (x) − ay(x) = 0 13 This is often called Kummer’s equation. The solutions, then, are Kummer functions. (13.143) 864 Chapter 13 More Special Functions has a regular singularity at x = 0 and an irregular one at x = ∞. It is obtained from the hypergeometric equation of Section 13.4 by merging (by hand: x(1 − x) → x in Eq. (13.124)) two of the latter’s three singularities. One solution of the confluent hypergeometric equation is y(x) = 1 F1 (a; c; x) = M(a, c, x) =1+ a(a + 1) x 2 ax + + ··· , c 1! c(c + 1) 2! c = 0, −1, −2, . . . . 
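Both the Gauss series, Eq. (13.127), and the Kummer series above can be summed directly from Pochhammer symbols. The following Python sketch (ours, standard library only; function names are ours) checks two closed forms: Eq. (13.128), ln(1 + x) = x 2F1(1, 1; 2; −x), and the elementary sum M(1, 2, x) = (e^x − 1)/x:

```python
import math

def poch(a, n):
    """Pochhammer symbol (a)_n = a(a+1)...(a+n-1), Eq. (13.126)."""
    p = 1.0
    for k in range(n):
        p *= a + k
    return p

def hyp2f1(a, b, c, x, terms=60):
    """Partial sum of the hypergeometric series, Eq. (13.127);
    assumes |x| < 1 and c not zero or a negative integer."""
    return sum(poch(a, n) * poch(b, n) / poch(c, n) * x ** n / math.factorial(n)
               for n in range(terms))

def hyp1f1(a, c, x, terms=60):
    """Partial sum of the Kummer (confluent) series M(a, c, x)."""
    return sum(poch(a, n) / poch(c, n) * x ** n / math.factorial(n)
               for n in range(terms))

# ln(1 + x) = x * 2F1(1, 1; 2; -x), Eq. (13.128):
assert abs(0.5 * hyp2f1(1, 1, 2, -0.5) - math.log(1.5)) < 1e-12
# M(1, 2, x) sums to (e^x - 1)/x:
assert abs(hyp1f1(1, 2, 1.0) - (math.e - 1.0)) < 1e-12
```

Truncation at 60 terms is ad hoc; for |x| close to 1 the Gauss series converges slowly and more terms (or a transformation) would be needed.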
(13.144) This solution is convergent for all finite x (or complex z). In terms of the Pochhammer symbols, we have M(a, c, x) = ∞  (a)n x n n=0 (c)n n! . (13.145) Clearly, M(a, c, x) becomes a polynomial if the parameter a is 0 or a negative integer. Numerous more or less elementary functions may be represented by the confluent hypergeometric function. Examples are the error function and the incomplete gamma function (from Eq. (8.69)):   x 2 2 1 3 2 −t 2 erf(x) = 1/2 (13.146) e dt = 1/2 xM , , −x , 2 2 π π 0 x γ (a, x) = e−t t a−1 dt = a −1 x a M(a, a + 1, −x), ℜ(a) > 0. (13.147) 0 Clearly, this coincides with the first solution for c = a. The error function and the incomplete gamma function are discussed further in Section 8.5. A second solution of Eq. (13.143) is given by y(x) = x 1−c M(a + 1 − c, 2 − c, x), c = 2, 3, 4, . . . . (13.148) The standard form of the second solution of Eq. (13.143) is a linear combination of Eqs. (13.144) and (13.148):  π M(a, c, x) x 1−c M(a + 1 − c, 2 − c, x) . (13.149) − U (a, c, x) = sin πc (a − c)!(c − 1)! (a − 1)!(1 − c)! Note the resemblance to our definition of the Neumann function, Eq. (11.60). As with our Neumann function, Eq. (11.60), this definition of U (a, c, x) becomes indeterminate in this case for c an integer. An alternate form of the confluent hypergeometric equation that will be useful later is obtained by changing the independent variable from x to x 2 :   2 d 2  2 2c − 1 d  2 y x + − 4ay x = 0. (13.150) − 2x y x x dx dx 2 As with the hypergeometric functions, contiguous functions exist in which the parameters a and c are changed by ±1. Including the cases of simultaneous changes in the 13.5 Confluent Hypergeometric Functions 865 two parameters,14 we have eight possibilities. Taking the original function and pairs of the contiguous functions, we can develop a total of 28 equations.15 Integral Representations It is frequently convenient to have the confluent hypergeometric functions in integral form. 
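Before quoting the integral representations, here is a numerical spot-check (our own Python sketch; helper names, quadrature choice, and the parameter restriction a ≥ 1, c − a ≥ 1 are ours) of the Euler-type integral, Eq. (13.151), against the Kummer series:

```python
import math

def kummer_M(a, c, x, terms=60):
    """Kummer series M(a, c, x), summed via the term ratio (a+n)/(c+n) * x/(n+1)."""
    term, total = 1.0, 1.0
    for n in range(terms):
        term *= (a + n) / (c + n) * x / (n + 1)
        total += term
    return total

def kummer_M_integral(a, c, x, steps=2000):
    """Euler-type integral representation, Eq. (13.151); requires Re c > Re a > 0.
    Simpson's rule, safe as written only for a >= 1 and c - a >= 1 (so the
    integrand stays finite at the endpoints) -- a rough sketch, not production code."""
    h = 1.0 / steps
    def f(t):
        return math.exp(x * t) * t ** (a - 1.0) * (1.0 - t) ** (c - a - 1.0)
    s = f(0.0) + f(1.0)
    for i in range(1, steps):
        s += (4.0 if i % 2 else 2.0) * f(i * h)
    integral = s * h / 3.0
    return math.gamma(c) / (math.gamma(a) * math.gamma(c - a)) * integral

assert abs(kummer_M(1.0, 3.0, 0.7) - kummer_M_integral(1.0, 3.0, 0.7)) < 1e-8
```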
We find (Exercise 13.5.10) 1 Ŵ(c) M(a, c, x) = ext t a−1 (1 − t)c−a−1 dt, ℜ(c) > ℜ(a) > 0, Ŵ(a)Ŵ(c − a) 0 (13.151) ∞ 1 e−xt t a−1 (1 + t)c−a−1 dt, ℜ(x) > 0, ℜ(a) > 0. U (a, c, x) = Ŵ(a) 0 (13.152) Three important techniques for deriving or verifying integral representations are as follows: 1. Transformation of generating function expansions and Rodrigues representations: The Bessel and Legendre functions provide examples of this approach. 2. Direct integration to yield a series: This direct technique is useful for a Bessel function representation (Exercise 11.1.18) and a hypergeometric integral (Exercise 13.4.7). 3. (a) Verification that the integral representation satisfies the ODE. (b) Exclusion of the other solution. (c) Verification of normalization. This is the method used in Section 11.5 to establish an integral representation of the modified Bessel function Kν (z). It will work here to establish Eqs. (13.151) and (13.152). Bessel and Modified Bessel Functions Kummer’s first formula, M(a, c, x) = ex M(c − a, c, −x), (13.153) is useful in representing the Bessel and modified Bessel functions. The formula may be verified by series expansion or by use of an integral representation (compare Exercise 13.5.10). As expected from the form of the confluent hypergeometric equation and the character of its singularities, the confluent hypergeometric functions are useful in representing a number of the special functions of mathematical physics. For the Bessel functions,     e−ix x ν 1 (13.154) Jν (x) = M ν + , 2ν + 1, 2ix , ν! 2 2 whereas for the modified Bessel functions of the first kind,     e−x x ν 1 Iν (x) = M ν + , 2ν + 1, 2x . ν! 2 2 14 Slater refers to these as associated functions. 15 The recurrence relations for Bessel, Hermite, and Laguerre functions are special cases of these equations. (13.155) 866 Chapter 13 More Special Functions Hermite Functions The Hermite functions are given by   1 2 H2n (x) = (−1) M −n, , x , n! 2   2(2n + 1)! 
3 H2n+1 (x) = (−1)n xM −n, , x 2 , n! 2 n (2n)! (13.156) (13.157) using Eq. (13.150). Comparing the Laguerre ODE with the confluent hypergeometric equation (13.143), we have Ln (x) = M(−n, 1, x). (13.158) The constant is fixed as unity by noting Eq. (13.66) for x = 0. For the associated Laguerre functions, m (n + m)! m d Ln+m (x) = Lm M(−n, m + 1, x). (13.159) n (x) = (−1) dx m n!m! Alternate verification is obtained by comparing Eq. (13.159) with the power-series solution (Eq. (13.72) of Section 13.2). Note that in the hypergeometric form, as distinct from a Rodrigues representation, the indices n and m need not be integers, and, if they are not integers, Lm n (x) will not be a polynomial. Miscellaneous Cases There are certain advantages in expressing our special functions in terms of hypergeometric and confluent hypergeometric functions. If the general behavior of the latter functions is known, the behavior of the special functions we have investigated follows as a series of special cases. This may be useful in determining asymptotic behavior or evaluating normalization integrals. The asymptotic behavior of M(a, c, x) and U (a, c, x) may be conveniently obtained from integral representations of these functions, Eqs. (13.151) and (13.152). The further advantage is that the relations between the special functions are clarified. For instance, an examination of Eqs. (13.156), (13.157), and (13.159) suggests that the Laguerre and Hermite functions are related. The confluent hypergeometric equation (13.143) is clearly not self-adjoint. For this and other reasons it is convenient to define   1 −x/2 µ+1/2 (13.160) x M µ − k + , 2µ + 1, x . Mkµ (x) = e 2 This new function, Mkµ (x), is a Whittaker function that satisfies the self-adjoint equation  1 2 1 k ′′ 4 −µ Mkµ (x) + − + + Mkµ (x) = 0. (13.161) 4 x x2 The corresponding second solution is   1 Wkµ (x) = e−x/2 x µ+1/2 U µ − k + , 2µ + 1, x . 
2 (13.162) 13.5 Confluent Hypergeometric Functions 867 Exercises 13.5.1 Verify the confluent hypergeometric representation of the error function   2x 1 3 2 erf(x) = 1/2 M , , −x . 2 2 π 13.5.2 Show that the Fresnel integrals C(x) and S(x) of Exercise 5.10.2 may be expressed in terms of the confluent hypergeometric function as   1 3 iπx 2 C(x) + iS(x) = xM , , . 2 2 2 13.5.3 By direct differentiation and substitution verify that x −a e−t t a−1 dt = ax −a γ (a, x) y = ax 0 satisfies xy ′′ + (a + 1 + x)y ′ + ay = 0. 13.5.4 Show that the modified Bessel function of the second kind, Kν (x), is given by   1 1/2 −x ν Kν (x) = π e (2x) U ν + , 2ν + 1, 2x . 2 13.5.5 Show that the cosine and sine integrals of Section 8.5 may be expressed in terms of confluent hypergeometric functions as Ci(x) + i si(x) = −eix U (1, 1, −ix). This relation is useful in numerical computation of Ci(x) and si(x) for large values of x. 13.5.6 Verify the confluent hypergeometric form of the Hermite polynomial H2n+1 (x) (Eq. (13.157)) by showing that H2n+1 (x)/x satisfies the confluent hypergeometric equation with a = −n, c = and argument x 2 , 2(2n + 1)! H2n+1 (x) = (−1)n . (b) lim x→0 x n! (a) 13.5.7 Show that the contiguous confluent hypergeometric function equation (c − a)M(a − 1, c, x) + (2a − c + x)M(a, c, x) − aM(a + 1, c, x) = 0 leads to the associated Laguerre function recurrence relation (Eq. (13.75)). 13.5.8 Verify the Kummer transformations: (a) M(a, c, x) = ex M(c − a, c, −x) (b) U (a, c, x) = x 1−c U (a − c + 1, 2 − c, x). 3 2 868 Chapter 13 More Special Functions 13.5.9 13.5.10 Prove that (a) dn (a)n M(a, c, x) = M(a + n, b + n, x), n dx (b)n (b) dn U (a, c, x) = (−1)n (a)n U (a + n, c + n, x). dx n Verify the following integral representations: (a) (b) 1 Ŵ(c) ext t a−1 (1 − t)c−a−1 dt, ℜ(c) > ℜ(a) > 0. Ŵ(a)Ŵ(c − a) 0 ∞ 1 e−xt t a−1 (1 + t)c−a−1 dt, ℜ(x) > 0, ℜ(a) > 0. U (a, c, x) = Ŵ(a) 0 M(a, c, x) = Under what conditions can you accept ℜ(x) = 0 in part (b)? 
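Exercise 13.5.1 is easy to confirm numerically, since the standard library supplies erf. A Python sketch (ours; the series helper is our own) checks Eq. (13.146), erf(x) = (2x/√π) M(1/2, 3/2, −x^2), at several points:

```python
import math

def kummer_M(a, c, x, terms=80):
    """Kummer series M(a, c, x), summed with a term-ratio recursion."""
    term, total = 1.0, 1.0
    for n in range(terms):
        term *= (a + n) / (c + n) * x / (n + 1)
        total += term
    return total

# Eq. (13.146) / Exercise 13.5.1:  erf(x) = (2x / sqrt(pi)) * M(1/2, 3/2, -x^2)
for x in (0.3, 1.0, 1.8):
    assert abs(2.0 * x / math.sqrt(math.pi) * kummer_M(0.5, 1.5, -x * x)
               - math.erf(x)) < 1e-10
```

For large |x| the alternating series suffers cancellation; Kummer's transformation (Eq. (13.153)) is the usual remedy.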
13.5.11 From the integral representation of M(a, c, x), Exercise 13.5.10(a), show that M(a, c, x) = ex M(c − a, c, −x). Hint. Replace the variable of integration t by 1 − s to release a factor ex from the integral. 13.5.12 From the integral representation of U (a, c, x), Exercise 13.5.10(b), show that the exponential integral is given by E1 (x) = e−x U (1, 1, x). Hint. Replace the variable of integration t in E1 (x) by x(1 + s). 13.5.13 From the integral representations of M(a, c, x) and U (a, c, x) in Exercise 13.5.10 develop asymptotic expansions of (a) M(a, c, x), (b) U (a, c, x). Hint. You can use the technique that was employed with Kν (z), Section 11.6.  (1 − a)(c − a) Ŵ(c) ex 1+ + ANS. (a) c−a Ŵ(a) x 1!x  (1 − a)(2 − a)(c − a)(c − a + 1) + ··· 2!x 2   a(1 + a − c) a(a + 1)(1 + a − c)(2 + a − c) 1 + + · · · . (b) a 1 + x 1!(−x) 2!(−x)2 13.5.14 Show that the Wronskian of the two confluent hypergeometric functions M(a, c, x) and U (a, c, x) is given by MU ′ − M ′ U = − What happens if a is 0 or a negative integer? (c − 1)! ex . (a − 1)! x c 13.6 Mathieu Functions 13.5.15 869 The Coulomb wave equation (radial part of the Schrödinger equation with Coulomb potential) is  d 2y 2η L(L + 1) y = 0. + 1 − − ρ dρ 2 ρ2 Show that a regular solution y = FL (η, ρ) is given by FL (η, ρ) = CL (η)ρ L+1 e−iρ M(L + 1 − iη, 2L + 2, 2iρ). 13.5.16 (a) Show that the radial part of the hydrogen wave function, Eq. (13.81), may be written as 2L+1 e−αr/2 (αr)L Ln−L−1 (αr) = (n + L)! e−αr/2 (αr)L M(L + 1 − n, 2L + 2, αr). (n − L − 1)!(2L + 1)! It was assumed previously that the total (kinetic + potential) energy E of the electron was negative. Rewrite the (unnormalized) radial wave function for the free electron, E > 0. (b) ANS. eiαr/2 (αr)L M(L + 1 − in, 2L + 2, −iαr), outgoing wave. This representation provides a powerful alternative technique for the calculation of photoionization and recombination coefficients. 
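The asymptotic expansion of Exercise 13.5.13(a) can also be checked numerically: for large positive x, a few correction terms of M ~ [Γ(c)/Γ(a)] e^x x^{a−c}[1 + (1 − a)(c − a)/(1!x) + · · ·] already agree well with the convergent series. A hedged Python sketch (function names and sample parameters are ours):

```python
import math

def kummer_M(a, c, x, terms=300):
    """Kummer series M(a, c, x); converges for all finite x."""
    term, total = 1.0, 1.0
    for n in range(terms):
        term *= (a + n) / (c + n) * x / (n + 1)
        total += term
    return total

def kummer_M_asym(a, c, x, corrections=2):
    """Leading large-x behaviour from Exercise 13.5.13(a):
    M ~ [Gamma(c)/Gamma(a)] e^x x^(a-c) [1 + (1-a)(c-a)/(1! x) + ...]."""
    series, term = 1.0, 1.0
    for s in range(1, corrections + 1):
        term *= (s - a) * (c - a + s - 1.0) / (s * x)
        series += term
    return math.gamma(c) / math.gamma(a) * math.exp(x) * x ** (a - c) * series

# At x = 30 the two-correction asymptotic form matches the series to better than 0.1%:
exact = kummer_M(0.5, 1.5, 30.0)
assert abs(kummer_M_asym(0.5, 1.5, 30.0) / exact - 1.0) < 1e-3
```

As with any asymptotic series, adding ever more corrections eventually makes the approximation worse; the truncation here is chosen by hand.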
13.5.17 Evaluate ∞ 2  Mkµ (x) dx, (a) 0 (c) 0 ∞ 2 dx Mkµ (x) , x 1−a (b) 0 where 2µ = 0, 1, 2, . . . , k − µ − 13.6 ∞ 1 2 2 dx Mkµ (x) , x = 0, 1, 2, . . . , a > −2µ − 1. ANS. (a) (2µ)!2k. (b) (2µ)!. (c) (2µ)!(2k)a . MATHIEU FUNCTIONS When PDEs such as Laplace’s, Poisson’s, and the wave equation are solved with cylindrical or spherical boundary conditions by separating variables in polar coordinates, we find radial solutions, which are the Bessel functions of Chapter 11, and angular solutions, which are sin mϕ, cos mϕ in cylindrical cases and spherical harmonics in spherical cases. Examples are electromagnetic waves in resonant cavities, vibrating circular drumheads, and coaxial wave guides. When in such cylindrical problems the circular boundary condition becomes elliptical we are led to the angular and radial Mathieu functions, which, therefore, might be called elliptic cylinder functions. In fact, in 1868 Mathieu developed the leading terms of series solutions of the vibrating elliptical drumhead, and Whittaker and others in the early 1900s derived higher-order terms as well. 870 Chapter 13 More Special Functions Here our goal is to give an introduction to the rich and complex properties of Mathieu functions. Separation of Variables in Elliptical Coordinates Elliptical cylinder coordinates ξ, η, z, which are appropriate for elliptical boundary conditions, are expressed in rectangular coordinates as x = c cosh ξ cos η, 0 ≤ ξ < ∞, y = c sinh ξ sin η, z = z, (13.163) 0 ≤ η ≤ 2π, where the parameter 2c > 0 is the distance between the foci of the confocal ellipses described by these coordinates (Fig. 13.7). We want to show that in the limit c → 0 the foci of the ellipses coalesce to the center of circles. We work at constant z-coordinate mostly, z = 0, say. Indeed for fixed radial variable ξ =const. we can eliminate the angular variable η to obtain from Eq. 
(13.163)

  x^2/(c^2 cosh^2 ξ) + y^2/(c^2 sinh^2 ξ) = 1,   (13.164)

describing confocal ellipses centered at the origin of the x, y-plane with major and minor half-axes

  a = c cosh ξ,   b = c sinh ξ,   (13.165)

respectively. Since

  b/a = tanh ξ = (1 − 1/cosh^2 ξ)^{1/2} ≡ (1 − e^2)^{1/2},   (13.166)

the eccentricity e = 1/cosh ξ of the ellipse satisfies 0 ≤ e ≤ 1, and the distance between the foci is 2ae = 2c, providing a geometrical interpretation of the radial coordinate ξ and the parameter c.

FIGURE 13.7 Elliptical coordinates ξ, η.

As ξ → ∞, e → 0 and the ellipses become circles, which is indicated in Fig. 13.7. As ξ → 0, the ellipse becomes more elongated until, at ξ = 0, it has shrunk to the line segment between the foci. When η = const. we eliminate ξ to find confocal hyperbolas

  x^2/(c^2 cos^2 η) − y^2/(c^2 sin^2 η) = 1,   (13.167)

which are also plotted in Fig. 13.7. Differentiating the ellipse, we obtain

  (x dx)/cosh^2 ξ + (y dy)/sinh^2 ξ = 0,   (13.168)

which means that the tangent vector (dx, dy) of the ellipse is perpendicular to the vector (x/cosh^2 ξ, y/sinh^2 ξ). For the hyperbola the orthogonality condition is

  (x dx)/cos^2 η − (y dy)/sin^2 η = 0,   (13.169)

so the scalar product of the ellipse and hyperbola tangent vectors at each of their intersection points (x, y) of Eq. (13.163) obeys

  x^2/(cosh^2 ξ cos^2 η) − y^2/(sinh^2 ξ sin^2 η) = c^2 − c^2 = 0.   (13.170)

This means that these confocal ellipses and hyperbolas form an orthogonal coordinate system, in the sense of Section 2.1. To extract the scale factors h_ξ, h_η from the differentials of the elliptical coordinates,

  dx = c sinh ξ cos η dξ − c cosh ξ sin η dη,
  dy = c cosh ξ sin η dξ + c sinh ξ cos η dη,   (13.171)

we sum their squares, finding

  dx^2 + dy^2 = c^2 (sinh^2 ξ cos^2 η + cosh^2 ξ sin^2 η)(dξ^2 + dη^2)
             = c^2 (cosh^2 ξ − cos^2 η)(dξ^2 + dη^2) ≡ h_ξ^2 dξ^2 + h_η^2 dη^2   (13.172)

and yielding

  h_ξ = h_η = c (cosh^2 ξ − cos^2 η)^{1/2}.   (13.173)

Note that there is no cross term involving dξ dη, showing again that we are dealing with orthogonal coordinates.
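The confocal-ellipse relation (13.164) and the orthogonality of the coordinate curves, Eq. (13.170), can be spot-checked numerically from the tangent vectors read off Eq. (13.171). A Python sketch (ours; the value of c and the sample point are arbitrary):

```python
import math

C = 2.0   # an arbitrary value of the parameter c in Eq. (13.163)

def to_cartesian(xi, eta):
    """Elliptical coordinates -> (x, y), Eq. (13.163), in the z = 0 plane."""
    return C * math.cosh(xi) * math.cos(eta), C * math.sinh(xi) * math.sin(eta)

def tangent_vectors(xi, eta):
    """Tangent vectors along the xi-curve and the eta-curve, from Eq. (13.171)."""
    t_xi = (C * math.sinh(xi) * math.cos(eta), C * math.cosh(xi) * math.sin(eta))
    t_eta = (-C * math.cosh(xi) * math.sin(eta), C * math.sinh(xi) * math.cos(eta))
    return t_xi, t_eta

xi, eta = 0.8, 1.1
x, y = to_cartesian(xi, eta)
# (x, y) lies on the confocal ellipse of Eq. (13.164):
assert abs(x ** 2 / (C * math.cosh(xi)) ** 2
           + y ** 2 / (C * math.sinh(xi)) ** 2 - 1.0) < 1e-12
# The coordinate curves cross at right angles, the content of Eq. (13.170):
t_xi, t_eta = tangent_vectors(xi, eta)
assert abs(t_xi[0] * t_eta[0] + t_xi[1] * t_eta[1]) < 1e-12
```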
Now we are ready to derive Mathieu's differential equations.

Example 13.6.1 ELLIPTICAL DRUM

We consider vibrations of an elliptical drumhead with vertical displacement z = z(x, y, t) governed by the wave equation

  ∂^2 z/∂x^2 + ∂^2 z/∂y^2 = (1/v^2) ∂^2 z/∂t^2,   (13.174)

where the velocity squared v^2 = T/ρ, with tension T and mass density ρ, is a constant. We first separate the harmonic time dependence, writing

  z(x, y, t) = u(x, y) w(t),   (13.175)

where w(t) = cos(ωt + δ), with ω the frequency and δ a constant phase. Substituting this function z into Eq. (13.174) yields

  (1/u)(∂^2 u/∂x^2 + ∂^2 u/∂y^2) = (1/v^2)(1/w) ∂^2 w/∂t^2 = −ω^2/v^2 = −k^2 = const.,   (13.176)

that is, the two-dimensional Helmholtz equation for the displacement u. We now use Eq. (2.22) to convert the Laplacian ∇^2 to the elliptical coordinates, where we drop the z-coordinate. This gives

  ∂^2 u/∂x^2 + ∂^2 u/∂y^2 + k^2 u = (1/h_ξ^2)(∂^2 u/∂ξ^2 + ∂^2 u/∂η^2) + k^2 u = 0,   (13.177)

that is, the Helmholtz equation in elliptical ξ, η coordinates,

  ∂^2 u/∂ξ^2 + ∂^2 u/∂η^2 + c^2 k^2 (cosh^2 ξ − cos^2 η) u = 0.   (13.178)

Lastly, we separate ξ and η, writing u(ξ, η) = R(ξ)Φ(η), which yields

  (1/R) d^2 R/dξ^2 + c^2 k^2 cosh^2 ξ = c^2 k^2 cos^2 η − (1/Φ) d^2 Φ/dη^2 = λ + c^2 k^2/2,   (13.179)

where λ + c^2 k^2/2 is the separation constant. Writing cosh 2ξ, cos 2η instead of cosh^2 ξ, cos^2 η (which motivates the special form of the separation constant in Eq. (13.179)) we find the linear, second-order ODE

  d^2 R/dξ^2 − (λ − 2q cosh 2ξ) R(ξ) = 0,   q = c^2 k^2/4,   (13.180)

which is also called the radial Mathieu equation, and

  d^2 Φ/dη^2 + (λ − 2q cos 2η) Φ(η) = 0,   (13.181)

the angular, or modified, Mathieu equation. Note that the eigenvalue λ(q) is a function of the continuous parameter q in the Mathieu ODEs. It is this parameter dependence that complicates the analysis of Mathieu functions and makes them among the most difficult special functions used in physics. ■
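The role of λ as an eigenvalue fixed by periodicity can be previewed in the limit q → 0, where the angular equation (13.181) reduces to the harmonic oscillator equation. The Python sketch below (our own RK4 integrator; step count is ad hoc) shows that λ = n^2 yields a 2π-periodic solution while a generic λ does not:

```python
import math

def integrate_mathieu(lam, q, steps=4000):
    """RK4 integration of the angular Mathieu ODE, Eq. (13.181),
    Phi'' + (lam - 2 q cos 2 eta) Phi = 0, from Phi(0)=1, Phi'(0)=0 to eta = 2 pi."""
    h = 2.0 * math.pi / steps
    phi, dphi, eta = 1.0, 0.0, 0.0
    def acc(e, p):
        return -(lam - 2.0 * q * math.cos(2.0 * e)) * p
    for _ in range(steps):
        k1p, k1v = dphi, acc(eta, phi)
        k2p, k2v = dphi + 0.5 * h * k1v, acc(eta + 0.5 * h, phi + 0.5 * h * k1p)
        k3p, k3v = dphi + 0.5 * h * k2v, acc(eta + 0.5 * h, phi + 0.5 * h * k2p)
        k4p, k4v = dphi + h * k3v, acc(eta + h, phi + h * k3p)
        phi += h * (k1p + 2.0 * k2p + 2.0 * k3p + k4p) / 6.0
        dphi += h * (k1v + 2.0 * k2v + 2.0 * k3v + k4v) / 6.0
        eta += h
    return phi, dphi

# q = 0, lam = 4: the solution cos(2 eta) returns exactly to its initial data...
phi, dphi = integrate_mathieu(4.0, 0.0)
assert abs(phi - 1.0) < 1e-6 and abs(dphi) < 1e-6
# ...while a generic lam does not close up after one period:
assert abs(integrate_mathieu(3.5, 0.0)[0] - 1.0) > 1e-2
```

For q ≠ 0 the periodic solutions survive only at the discrete characteristic values λ(q) discussed next.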
Clearly, all finite points are regular points of both ODEs, while infinity is an essential singularity for both ODEs, which are of the Sturm–Liouville type (Chapter 10) with coefficient functions
\[
p \equiv 1, \qquad q(\xi) = -\lambda + 2q\cosh 2\xi, \qquad q(\eta) = \lambda - 2q\cos 2\eta. \tag{13.182}
\]
(These functions q must not be confused with the parameter q.) As a consequence, their solutions form orthogonal sets of functions. The substitution η → iξ transforms the angular to the radial Mathieu ODE, so their solutions are closely related. Using the Lindemann–Stieltjes substitution z = cos²η, dz/dη = −sin 2η, the angular Mathieu ODE is transformed into an ODE with coefficients that are algebraic in the variable z (using d/dη = (dz/dη) d/dz = −sin 2η d/dz and d²/dη² = −2 cos 2η d/dz + sin² 2η d²/dz²):
\[
4z(1-z)\frac{d^2\Phi}{dz^2} + 2(1-2z)\frac{d\Phi}{dz} + \bigl[\lambda + 2q(1-2z)\bigr]\Phi = 0. \tag{13.183}
\]
This ODE has regular singularities at z = 0 and z = 1, whereas the point at infinity is an essential singularity (Chapter 9). By comparison, the hypergeometric ODE has three regular singularities. But not all ODEs with two regular singularities and one essential singularity can be transformed into an ODE of the Mathieu type.

Example 13.6.2 THE QUANTUM PENDULUM

A plane pendulum of length l and mass m with gravitational potential V(θ) = −mgl cos θ is called a quantum pendulum if its wave function Ψ obeys the Schrödinger equation
\[
-\frac{\hbar^2}{2ml^2}\frac{d^2\Psi}{d\theta^2} + \bigl(V(\theta) - E\bigr)\Psi = 0, \tag{13.184}
\]
where the variable θ is the angular displacement from the vertical direction. (For further details and illustrations we refer to Gutiérrez-Vega et al. in the Additional Readings.) A boundary condition applies to Ψ so as to be single-valued; that is, Ψ(θ + 2π) = Ψ(θ). Substituting
\[
\theta = 2\eta, \qquad \lambda = \frac{8Eml^2}{\hbar^2}, \qquad q = -\frac{4m^2 g l^3}{\hbar^2} \tag{13.185}
\]
into the Schrödinger equation yields the angular Mathieu ODE for Ψ(2(η + π)) = Ψ(2η). ■

For many other applications involving Mathieu functions we refer to Ruby in the Additional Readings.
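The parameter dependence λ(q) emphasized above can be explored directly. As a minimal sketch (assuming SciPy is available; its `mathieu_a` and `mathieu_b` return the characteristic values belonging to the even solutions ce_m and odd solutions se_m, respectively), one can tabulate λ for a given q and check that the eigenvalues collapse to the squares of integers as q → 0, where the angular ODE reduces to the harmonic oscillator. For the quantum pendulum, the mapping of Eq. (13.185) then relates each characteristic value to an energy, E = λħ²/(8ml²).

```python
from scipy.special import mathieu_a, mathieu_b

# Characteristic values lambda(q) of the angular Mathieu ODE, Eq. (13.181):
# a_m(q) belongs to the even solution ce_m, b_m(q) to the odd solution se_m.
q = 1.0
even = [mathieu_a(m, q) for m in range(4)]
odd = [mathieu_b(m, q) for m in range(1, 4)]
print("a_m(q=1):", even)
print("b_m(q=1):", odd)

# q -> 0 limit: the ODE becomes Phi'' + lambda Phi = 0, so lambda -> m**2
for m in range(4):
    assert abs(mathieu_a(m, 1e-12) - m * m) < 1e-6
for m in range(1, 4):
    assert abs(mathieu_b(m, 1e-12) - m * m) < 1e-6
```

The printed values illustrate how the degenerate q = 0 eigenvalue m² splits into an even (a_m) and an odd (b_m) branch once q ≠ 0.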
Our main focus will be on the solutions of the angular Mathieu ODE, which has the important property that its coefficient function is periodic with period π . 874 Chapter 13 More Special Functions General Properties of Mathieu Functions In physics applications the angular Mathieu functions are required to be single-valued, that is, periodic with period 2π . Let us start with some nomenclature. Since Mathieu’s ODEs are invariant under parity (η → −η), Mathieu functions have definite parity. Those of odd parity that have period 2π and, for small q, start with sin(2n + 1)η are called se2n+1 (η, q), with n an integer, n = 0, 1, 2, . . . (se is short for sine-elliptic). Mathieu functions of odd parity and period π that start with sin 2nη for small q are called se2n (η, q), with n = 1, 2, . . . . Mathieu functions of even parity, period π that start with cos 2nη for small q are called ce2n (η, q) (ce is short for cosine-elliptic), while those with period 2π that start with cos(2n + 1)η, n = 0, 1, . . . , for small q are called ce2n+1 (η, q). In the limit where the parameter q → 0 (and the Mathieu ODE becomes the classical harmonic oscillator ODE), Mathieu functions reduce to these trigonometric functions. The periodicity condition (η + 2π) = (η) is sufficient to determine a set of eigenvalues λ in terms of q. An elementary analog of this result is the fact that a solution of the classical harmonic oscillator ODE u′′ (η) + λu(η) = 0 has period 2π if, and only if, λ = n2 is the square of an integer. Such problems will be pursued in Section 14.7 as applications of Fourier series. Example 13.6.3 RADIAL MATHIEU FUNCTIONS Upon replacing the angular elliptic variable η → iξ , the angular Mathieu ODE, Eq. (13.181), becomes the radial ODE, Eq. (13.180). This motivates the definitions of radial Mathieu functions as Ce2n+p (ξ, q) = ce2n+p (iξ, q), p = 0, 1; Se2n+p (ξ, q) = −ise2n+p (iξ, q), p = 0, 1; n = 0, 1, . . . , n = 1, 2, . . . . 
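The small-q limit described above, and the orthogonality of the angular Mathieu functions, can be spot-checked numerically. The sketch below assumes SciPy, whose `mathieu_cem`/`mathieu_sem` evaluate ce_m and se_m (taking the angle in degrees) with the Abramowitz–Stegun normalization; the value π for the diagonal inner products anticipates the normalization stated with Eq. (13.203).

```python
import numpy as np
from scipy.special import mathieu_cem, mathieu_sem
from scipy.integrate import trapezoid

# q -> 0: ce_m(eta, q) -> cos(m eta) and se_m(eta, q) -> sin(m eta), m >= 1
eta_deg = 37.0                      # SciPy takes the angle in degrees
eta = np.deg2rad(eta_deg)
assert abs(mathieu_cem(1, 1e-6, eta_deg)[0] - np.cos(eta)) < 1e-4
assert abs(mathieu_sem(2, 1e-6, eta_deg)[0] - np.sin(2 * eta)) < 1e-4

# Orthogonality on [-pi, pi]: distinct orders integrate to zero,
# equal orders to pi (checked for ce_2, ce_4 at q = 2)
grid = np.linspace(-np.pi, np.pi, 20001)
ce2 = mathieu_cem(2, 2.0, np.rad2deg(grid))[0]
ce4 = mathieu_cem(4, 2.0, np.rad2deg(grid))[0]
assert abs(trapezoid(ce2 * ce4, grid)) < 1e-5
assert abs(trapezoid(ce2 * ce2, grid) - np.pi) < 1e-5
```

The trapezoidal rule is spectrally accurate here because the integrands are smooth and periodic over the full interval.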
Because these functions are differentiable, they correspond to the regular solutions of the radial Mathieu ODE. Of course, they are no longer periodic but are oscillatory (Fig. 13.8). In physical problems involving elliptical coordinates, the radial Mathieu ODE, Eq. (13.180), plays a role corresponding to Bessel’s ODE in cylindrical geometry. Because there are four families of independent Bessel functions — the regular solutions Jn and irregular Neumann functions Nn , along with the modified Bessel functions In and Kn — we expect four kinds of radial Mathieu functions. Because of parity, the solutions split into even and odd Mathieu functions and so there are eight kinds. For q > 0, Je2n (ξ, q) = Ce2n (ξ, q), Je2n+1 (ξ, q) = Ce2n+1 (ξ, q), Jo2n (ξ, q) = Se2n (ξ, q), Jo2n+1 (ξ, q) = Se2n+1 (ξ, q), Nen (ξ, q), Non (ξ, q), regular or first kind; irregular or second kind; for q < 0, the solutions of the radial Mathieu ODE are denoted by Ien (ξ, q), Ion (ξ, q), Ken (ξ, q), Kon (ξ, q), regular or first kind, irregular or second kind 13.6 Mathieu Functions 875 FIGURE 13.8 Radial Mathieu functions: q = 1 (solid line), q = 2 (dashed line), q = 3 (dotted line). (From Gutiérrez-Vega et al., Am. J. Phys. 71: 233 (2003).) and are known as the evanescent radial Mathieu functions. Mathieu functions corresponding to the Hankel functions can be similarly defined. In Fig. 13.8 some of them are plotted. In applications such as a vibrating drumhead with elliptical boundary conditions (see Example 13.6.1), the solution can be expanded in even and odd Mathieu functions: zen ≡ Jen (ξ, q)cen (η, q) cos(ωn t), m ≥ 0, zon ≡ Jon (ξ, q)sen (η, q) cos(ωn t), m ≥ 1. They obey Dirichlet boundary conditions, zen (ξ0 , η, t) = 0 = zon (ξ0 , η, t), which hold provided the radial functions satisfy Jen (ξ0 , q) = 0 = Jon (ξ0 , q) at the elliptical boundary, where ξ = ξ0 . 
When the focal distance c → 0, the angular Mathieu functions become the conventional trigonometric functions, while the radial Mathieu functions become Bessel functions. In the case of oscillations of a confocal annular elliptic lake, the modes have to include the Mathieu functions of the second kind and are thus given by
\[
ze_n \equiv \bigl[A\,\mathrm{Je}_n(\xi, q) + B\,\mathrm{Ne}_n(\xi, q)\bigr]\mathrm{ce}_n(\eta, q)\cos(\omega_n t), \qquad n \ge 0,
\]
\[
zo_n \equiv \bigl[A\,\mathrm{Jo}_n(\xi, q) + B\,\mathrm{No}_n(\xi, q)\bigr]\mathrm{se}_n(\eta, q)\cos(\omega_n t), \qquad n \ge 1,
\]
with A, B constants. These standing-wave solutions must obey Neumann boundary conditions at the inner (ξ = ξ₀) and outer (ξ = ξ₁) elliptical boundaries; that is, the normal derivatives (a prime denotes d/dξ) of ze_n and zo_n vanish at each point of the boundaries. For even modes, we have ze′_n(ξ₀, η, t) = 0 = ze′_n(ξ₁, η, t). The implied radial constraints are similar to Eqs. (11.81) and (11.82) of Example 11.3.1. Numerical examples and plots, also for traveling waves, are given in Gutiérrez-Vega et al. in the Additional Readings. ■

For zeros of Mathieu functions, their asymptotic expansions, and a more complete listing of formulas we refer to Abramowitz and Stegun (AMS-55), Am. J. Phys. 71, Jahnke and Emde, and Gradshteyn and Ryzhik in the Additional Readings.

To illustrate and support the nomenclature, we want to show¹⁶ that there is an angular Mathieu function that is
• even in η and of period π if and only if Φ′₁(π/2) = 0;
• odd and of period π if and only if Φ₂(π/2) = 0;
• even and of period 2π if and only if Φ₁(π/2) = 0;
• odd and of period 2π if and only if Φ′₂(π/2) = 0,
where Φ₁(η), Φ₂(η) are two linearly independent solutions of the angular Mathieu ODE so that
\[
\Phi_1(0) = 1, \quad \Phi_1'(0) = 0; \qquad \Phi_2(0) = 0, \quad \Phi_2'(0) = 1. \tag{13.186}
\]
Since the Mathieu ODE is a linear second-order ODE, we know (Chapter 9) that these initial conditions are realistic. The first case just given corresponds to ce₂ₙ(η, q), with Φ′₁(π/2) = −2n sin 2nη|₍η=π/2₎ + ··· = 0 for n = 1, 2, . . . .
The second is the se₂ₙ(η, q), with Φ₂(π/2) = sin 2nη|₍π/2₎ + ··· = 0. The third case is the ce₂ₙ₊₁(η, q), with Φ₁(π/2) = cos(2n + 1)π/2 + ··· = 0. The fourth case is the se₂ₙ₊₁(η, q).

The key to the proof is Floquet’s approach to linear second-order ODEs with periodic coefficient functions, such as Mathieu’s angular ODE or the simple pendulum (Exercise 13.6.1). If Φ₁(η), Φ₂(η) are two linearly independent solutions of the ODE, any other solution Φ can be expressed as
\[
\Phi(\eta) = c_1\Phi_1(\eta) + c_2\Phi_2(\eta), \tag{13.187}
\]
with constants c₁, c₂. Now, Φ_k(η + 2π) are also solutions because such an ODE is invariant under the translation η → η + 2π, and in particular
\[
\Phi_1(\eta + 2\pi) = a_1\Phi_1(\eta) + a_2\Phi_2(\eta), \qquad
\Phi_2(\eta + 2\pi) = b_1\Phi_1(\eta) + b_2\Phi_2(\eta), \tag{13.188}
\]
with constants a_i, b_j. Substituting Eq. (13.188) into Eq. (13.187) we get
\[
\Phi(\eta + 2\pi) = (c_1 a_1 + c_2 b_1)\Phi_1(\eta) + (c_2 b_2 + c_1 a_2)\Phi_2(\eta), \tag{13.189}
\]
where the constants c_i can be chosen as solutions of the eigenvalue equations
\[
a_1 c_1 + b_1 c_2 = \lambda c_1, \qquad a_2 c_1 + b_2 c_2 = \lambda c_2. \tag{13.190}
\]
Then Floquet’s theorem states that Φ(η + 2π) = λΦ(η), where λ is a root of
\[
\begin{vmatrix} a_1 - \lambda & b_1 \\ a_2 & b_2 - \lambda \end{vmatrix} = 0. \tag{13.191}
\]
A useful corollary is obtained if we define µ and y by λ = exp(2πµ) and y(η) = exp(−µη)Φ(η), so
\[
y(\eta + 2\pi) = e^{-\mu\eta}e^{-2\pi\mu}\Phi(\eta + 2\pi) = e^{-\mu\eta}\Phi(\eta) = y(\eta). \tag{13.192}
\]
Thus, Φ(η) = e^{µη} y(η), with y a periodic function of η with period 2π.

Let us apply Floquet’s argument to the Φ_k(η + π), which are also solutions of Mathieu’s ODE because the latter is invariant under the translation η → η + π. Using the special values in Eq. (13.186) we know that
\[
\Phi_1(\eta + \pi) = \Phi_1(\pi)\Phi_1(\eta) + \Phi_1'(\pi)\Phi_2(\eta), \qquad
\Phi_2(\eta + \pi) = \Phi_2(\pi)\Phi_1(\eta) + \Phi_2'(\pi)\Phi_2(\eta), \tag{13.193}
\]
because these linear combinations of Φ_k(η) are solutions of Mathieu’s ODE with the correct values Φ_i(η + π), Φ′_i(η + π) for η = 0. Therefore,
\[
\Phi_i(\eta + \pi) = \lambda_i\Phi_i(\eta), \tag{13.194}
\]
where the λ_i are the roots of
\[
\begin{vmatrix} \Phi_1(\pi) - \lambda & \Phi_2(\pi) \\ \Phi_1'(\pi) & \Phi_2'(\pi) - \lambda \end{vmatrix} = 0. \tag{13.195}
\]
The constant term in the characteristic polynomial is given by the Wronskian
\[
W\bigl(\Phi_1(\eta), \Phi_2(\eta)\bigr) = C, \tag{13.196}
\]
a constant because the coefficient of d/dη in the angular Mathieu ODE vanishes, implying dW/dη = 0. In fact, using Eq. (13.186),
\[
W\bigl(\Phi_1(0), \Phi_2(0)\bigr) = \Phi_1(0)\Phi_2'(0) - \Phi_1'(0)\Phi_2(0) = 1
= W\bigl(\Phi_1(\pi), \Phi_2(\pi)\bigr), \tag{13.197}
\]
so the eigenvalue Eq. (13.195) for λ becomes
\[
\bigl(\Phi_1(\pi) - \lambda\bigr)\bigl(\Phi_2'(\pi) - \lambda\bigr) - \Phi_2(\pi)\Phi_1'(\pi)
= \lambda^2 - \bigl(\Phi_1(\pi) + \Phi_2'(\pi)\bigr)\lambda + 1 = 0, \tag{13.198}
\]
with λ₁ · λ₂ = 1 and λ₁ + λ₂ = Φ₁(π) + Φ′₂(π). If |λ₁| = |λ₂| = 1, then λ₁ = exp(iφ) and λ₂ = exp(−iφ), so λ₁ + λ₂ = 2 cos φ. For φ ≠ 0, π, 2π, . . . this case corresponds to |Φ₁(π) + Φ′₂(π)| < 2, where both solutions remain bounded as η → ∞ in steps of π using Eq. (13.194). These cases do not yield periodic Mathieu functions, and this is also the case when |Φ₁(π) + Φ′₂(π)| > 2. If φ = 0, that is, λ₁ = 1 = λ₂ is a double root, then the Φ_i have period π and Φ₁(π) + Φ′₂(π) = 2. If φ = π, that is, λ₁ = −1 = λ₂ is again a double root, then Φ₁(π) + Φ′₂(π) = −2 and the Φ_i have period 2π with Φ_i(η + π) = −Φ_i(η).

Because the angular Mathieu ODE is invariant under a parity transformation η → −η, it is convenient to consider solutions
\[
\Phi^e(\eta) = \tfrac{1}{2}\bigl[\Phi(\eta) + \Phi(-\eta)\bigr], \qquad
\Phi^o(\eta) = \tfrac{1}{2}\bigl[\Phi(\eta) - \Phi(-\eta)\bigr] \tag{13.199}
\]
of definite parity, which obey the same initial conditions as Φ_i. We now relabel Φ^e → Φ₁, Φ^o → Φ₂, taking Φ₁ to be even and Φ₂ to be odd under parity. These solutions of definite parity of Mathieu’s ODE are called Mathieu functions and are labeled according to our nomenclature discussed earlier. If Φ₁(η) has period π, then Φ′₁(η + π) = Φ′₁(η) also has period π but is odd under parity. Substituting η = −π/2 we obtain
\[
\Phi_1'\!\left(\frac{\pi}{2}\right) = \Phi_1'\!\left(-\frac{\pi}{2}\right) = -\Phi_1'\!\left(\frac{\pi}{2}\right),
\qquad \text{so } \Phi_1'\!\left(\frac{\pi}{2}\right) = 0. \tag{13.200}
\]
Conversely, if Φ′₁(π/2) = 0, then Φ₁(η) has period π. To see this, we use
\[
\Phi_1(\eta + \pi) = c_1\Phi_1(\eta) + c_2\Phi_2(\eta). \tag{13.201}
\]
This expansion is valid because Φ₁(η + π) is a solution of the angular Mathieu ODE.

¹⁶ See Hochstadt in the Additional Readings.
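Floquet’s construction is easy to verify numerically: integrating Φ₁, Φ₂ with the initial values of Eq. (13.186) across one period π gives the translation matrix of Eq. (13.195), whose determinant is the Wronskian (equal to 1, Eq. (13.197)), so the multipliers obey λ₁λ₂ = 1 as in Eq. (13.198). A sketch assuming SciPy, for an arbitrary (λ, q) pair:

```python
import numpy as np
from scipy.integrate import solve_ivp

lam, q = 1.0, 0.5      # generic parameter pair (not tuned to an eigenvalue)

def mathieu(eta, y):
    # y = [Phi, Phi'] for Phi'' + (lam - 2 q cos 2 eta) Phi = 0
    return [y[1], -(lam - 2.0 * q * np.cos(2.0 * eta)) * y[0]]

# Phi_1, Phi_2 with the initial values of Eq. (13.186)
sols = [solve_ivp(mathieu, (0.0, np.pi), ic, rtol=1e-11, atol=1e-12)
        for ic in ([1.0, 0.0], [0.0, 1.0])]
M = np.array([[s.y[0, -1] for s in sols],     # Phi_1(pi),  Phi_2(pi)
              [s.y[1, -1] for s in sols]])    # Phi_1'(pi), Phi_2'(pi)

# det M is the Wronskian, so det M = 1 and the Floquet multipliers
# (the eigenvalues of M) satisfy lam_1 * lam_2 = 1
assert abs(np.linalg.det(M) - 1.0) < 1e-6
mult = np.linalg.eigvals(M)
assert abs(mult[0] * mult[1] - 1.0) < 1e-6
```

Scanning |Φ₁(π) + Φ′₂(π)| (the trace of M) over λ for fixed q locates the bands where solutions stay bounded, and the trace values ±2 single out the periodic Mathieu functions.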
We now determine the coefficients c_i, setting η = −π/2, and recall that Φ₁ and Φ′₂ are even under parity, whereas Φ₂ and Φ′₁ are odd. This yields
\[
\Phi_1\!\left(\frac{\pi}{2}\right) = c_1\Phi_1\!\left(\frac{\pi}{2}\right) - c_2\Phi_2\!\left(\frac{\pi}{2}\right), \qquad
\Phi_1'\!\left(\frac{\pi}{2}\right) = -c_1\Phi_1'\!\left(\frac{\pi}{2}\right) + c_2\Phi_2'\!\left(\frac{\pi}{2}\right). \tag{13.202}
\]
Since Φ′₁(π/2) = 0, we must have Φ′₂(π/2) ≠ 0, or the Wronskian would vanish and Φ₂ ∼ Φ₁ would follow. Hence c₂ = 0 follows from the second equation and c₁ = 1 from the first. Thus, Φ₁(η + π) = Φ₁(η). The other bulleted cases listed earlier can be proved similarly.

Because the Mathieu ODEs are of the Sturm–Liouville type, Mathieu functions represent orthogonal systems of functions. So, for m, n nonnegative integers, the orthogonality relations and normalizations are
\[
\int_{-\pi}^{\pi}\mathrm{ce}_m\,\mathrm{ce}_n\,d\eta
= \int_{-\pi}^{\pi}\mathrm{se}_m\,\mathrm{se}_n\,d\eta = 0, \quad \text{if } m \neq n; \qquad
\int_{-\pi}^{\pi}\mathrm{ce}_m\,\mathrm{se}_n\,d\eta = 0;
\]
\[
\int_{-\pi}^{\pi}[\mathrm{ce}_{2n}]^2\,d\eta
= \int_{-\pi}^{\pi}[\mathrm{se}_{2n}]^2\,d\eta = \pi, \quad \text{if } n \geq 1; \qquad
\int_{-\pi}^{\pi}\bigl[\mathrm{ce}_0(\eta, q)\bigr]^2\,d\eta = \pi. \tag{13.203}
\]
If a function f(η) is periodic with period π, then it can be expanded in a series of orthogonal Mathieu functions as
\[
f(\eta) = a_0\,\mathrm{ce}_0(\eta, q)
+ \sum_{n=1}^{\infty}\bigl[a_n\,\mathrm{ce}_{2n}(\eta, q) + b_n\,\mathrm{se}_{2n}(\eta, q)\bigr] \tag{13.204}
\]
with
\[
a_n = \frac{1}{\pi}\int_{-\pi}^{\pi} f(\eta)\,\mathrm{ce}_{2n}(\eta, q)\,d\eta, \quad n \geq 0; \qquad
b_n = \frac{1}{\pi}\int_{-\pi}^{\pi} f(\eta)\,\mathrm{se}_{2n}(\eta, q)\,d\eta, \quad n \geq 1. \tag{13.205}
\]
Similar expansions exist for functions of period 2π in terms of ce₂ₙ₊₁ and se₂ₙ₊₁. Series expansions of Mathieu functions will be derived in Section 14.7.

Exercises

13.6.1 For the simple pendulum ODE of Section 5.8, apply Floquet’s method and derive the properties of its solutions similar to those marked by bullets before Eq. (13.186).

13.6.2 Derive a Mathieu function analog for the Rayleigh expansion of a plane wave for cos(k cos η cos θ) and sin(k cos η cos θ).

Additional Readings

Abramowitz, M., and I. A. Stegun, eds., Handbook of Mathematical Functions, Applied Mathematics Series 55 (AMS-55). Washington, DC: National Bureau of Standards (1964). Paperback edition, New York: Dover (1974). Chapter 22 is a detailed summary of the properties and representations of orthogonal polynomials.
Other chapters summarize properties of Bessel, Legendre, hypergeometric, and confluent hypergeometric functions and much more. Buchholz, H., The Confluent Hypergeometric Function. New York: Springer-Verlag (1953); translated (1969). Buchholz strongly emphasizes the Whittaker rather than the Kummer forms. Applications to a variety of other transcendental functions. Erdelyi, A., W. Magnus, F. Oberhettinger, and F. G. Tricomi, Higher Transcendental Functions, 3 vols. New York: McGraw-Hill (1953). Reprinted Krieger (1981). A detailed, almost exhaustive listing of the properties of the special functions of mathematical physics. Fox, L. and I. B. Parker, Chebyshev Polynomials in Numerical Analysis. Oxford: Oxford University Press (1968). A detailed, thorough, but very readable account of Chebyshev polynomials and their applications in numerical analysis. Gradshteyn, I. S., and I. M. Ryzhik, Table of Integrals, Series and Products, New York: Academic Press (1980). Gutiérrez-Vega, J. C., R. M. Rodríguez-Dagnino, M. A. Meneses-Nava and S. Chávez-Cerda, Am. J. Phys. 71: 233 (2003). Hochstadt, H., Special Functions of Mathematical Physics. New York: Holt, Rinehart and Winston (1961), reprinted Dover (1986). Jahnke, E., and F. Emde, Table of Functions. Leipzig: Teubner (1933); New York: Dover (1943). Lebedev, N. N., Special Functions and their Applications (translated by R. A. Silverman). Englewood Cliffs, NJ: Prentice-Hall (1965). Paperback, New York: Dover (1972). Luke, Y. L., The Special Functions and Their Approximations. New York: Academic Press (1969). Two volumes: Volume 1 is a thorough theoretical treatment of gamma functions, hypergeometric functions, confluent hypergeometric functions, and related functions. Volume 2 develops approximations and other techniques for numerical work. Luke, Y. L., Mathematical Functions and Their Approximations. New York: Academic Press (1975). 
This is an updated supplement to Handbook of Mathematical Functions with Formulas, Graphs and Mathematical Tables (AMS-55). 880 Chapter 13 More Special Functions Mathieu, E., J. de Math. Pures et Appl. 13: 137–203 (1868). McLachlan, N. W., Theory and Applications of Mathieu Functions. Oxford, UK: Clarendon Press (1947). Magnus, W., F. Oberhettinger, and R. P. Soni, Formulas and Theorems for the Special Functions of Mathematical Physics. New York: Springer (1966). An excellent summary of just what the title says, including the topics of Chapters 10 to 13. Rainville, E. D., Special Functions. New York: Macmillan (1960), reprinted Chelsea (1971). This book is a coherent, comprehensive account of almost all the special functions of mathematical physics that the reader is likely to encounter. Rowland, D. R., Am. J. Phys. 72: 758–766 (2004). Ruby, L., Am. J. Phys. 64: 39–44 (1996). Sansone, G., Orthogonal Functions (translated by A. H. Diamond). New York: Interscience (1959). Reprinted Dover (1991). Slater, L. J., Confluent Hypergeometric Functions. Cambridge, UK: Cambridge University Press (1960). This is a clear and detailed development of the properties of the confluent hypergeometric functions and of relations of the confluent hypergeometric equation to other ODEs of mathematical physics. Sneddon, I. N., Special Functions of Mathematical Physics and Chemistry, 3rd ed. New York: Longman (1980). Whittaker, E. T., and G. N. Watson, A Course of Modern Analysis. Cambridge, UK: Cambridge University Press, reprinted (1997). The classic text on special functions and real and complex analysis. CHAPTER 14 FOURIER SERIES 14.1 GENERAL PROPERTIES Periodic phenomena involving waves, rotating machines (harmonic motion), or other repetitive driving forces are described by periodic functions. Fourier series are a basic tool for solving ordinary differential equations (ODEs) and partial differential equations (PDEs) with periodic boundary conditions. 
Fourier integrals for nonperiodic phenomena are developed in Chapter 15. The common name for the field is Fourier analysis.

A Fourier series is defined as an expansion of a function or representation of a function in a series of sines and cosines, such as
\[
f(x) = \frac{a_0}{2} + \sum_{n=1}^{\infty} a_n\cos nx + \sum_{n=1}^{\infty} b_n\sin nx. \tag{14.1}
\]
The coefficients a₀, aₙ, and bₙ are related to the periodic function f(x) by definite integrals:
\[
a_n = \frac{1}{\pi}\int_0^{2\pi} f(x)\cos nx\,dx, \tag{14.2}
\]
\[
b_n = \frac{1}{\pi}\int_0^{2\pi} f(x)\sin nx\,dx, \qquad n = 0, 1, 2, \ldots, \tag{14.3}
\]
which are subject to the requirement that the integrals exist. Notice that a₀ is singled out for special treatment by the inclusion of the factor ½. This is done so that Eq. (14.2) will apply to all aₙ, n = 0 as well as n > 0.

The conditions imposed on f(x) to make Eq. (14.1) valid are that f(x) have only a finite number of finite discontinuities and only a finite number of extreme values, maxima, and minima in the interval [0, 2π].¹ Functions satisfying these conditions may be called piecewise regular. The conditions themselves are known as the Dirichlet conditions. Although there are some functions that do not obey these Dirichlet conditions, they may well be labeled pathological for purposes of Fourier expansions. In the vast majority of physical problems involving a Fourier series these conditions will be satisfied.

In most physical problems we shall be interested in functions that are square integrable (in the Hilbert space L² of Section 10.4). In this space the sines and cosines form a complete orthogonal set. And this in turn means that Eq. (14.1) is valid, in the sense of convergence in the mean.

Expressing cos nx and sin nx in exponential form, we may rewrite Eq. (14.1) as
\[
f(x) = \sum_{n=-\infty}^{\infty} c_n e^{inx}, \tag{14.4}
\]
in which
\[
c_n = \frac{1}{2}(a_n - ib_n), \qquad c_{-n} = \frac{1}{2}(a_n + ib_n), \qquad n > 0, \tag{14.5a}
\]
and
\[
c_0 = \frac{1}{2}a_0. \tag{14.5b}
\]

¹ These conditions are sufficient but not necessary.

Complex Variables — Abel’s Theorem

Consider a function f(z) represented by a convergent power series
\[
f(z) = \sum_{n=0}^{\infty} C_n z^n = \sum_{n=0}^{\infty} C_n r^n e^{in\theta}. \tag{14.6}
\]
This is our Fourier exponential series, Eq. (14.4). Separating real and imaginary parts we get
\[
u(r, \theta) = \sum_{n=0}^{\infty} C_n r^n\cos n\theta, \qquad
v(r, \theta) = \sum_{n=1}^{\infty} C_n r^n\sin n\theta, \tag{14.7a}
\]
the Fourier cosine and sine series. Abel’s theorem asserts that if u(1, θ) and v(1, θ) are convergent for a given θ, then
\[
u(1, \theta) + iv(1, \theta) = \lim_{r \to 1} f\bigl(re^{i\theta}\bigr). \tag{14.7b}
\]
An application of this appears as Exercise 14.1.9 and in Example 14.1.1.

Example 14.1.1 SUMMATION OF A FOURIER SERIES

Usually in this chapter we shall be concerned with finding the coefficients of the Fourier expansion of a known function. Occasionally, we may wish to reverse this process and determine the function represented by a given Fourier series. Consider the series Σₙ₌₁^∞ (1/n) cos nx, x ∈ (0, 2π). Since this series is only conditionally convergent (and diverges at x = 0), we take
\[
\sum_{n=1}^{\infty}\frac{\cos nx}{n} = \lim_{r \to 1}\sum_{n=1}^{\infty}\frac{r^n\cos nx}{n}, \tag{14.8}
\]
absolutely convergent for |r| < 1. Our procedure is to try forming power series by transforming the trigonometric functions into exponential form:
\[
\sum_{n=1}^{\infty}\frac{r^n\cos nx}{n}
= \frac{1}{2}\sum_{n=1}^{\infty}\frac{r^n e^{inx}}{n} + \frac{1}{2}\sum_{n=1}^{\infty}\frac{r^n e^{-inx}}{n}. \tag{14.9}
\]
Now, these power series may be identified as Maclaurin expansions of −ln(1 − z), z = re^{ix}, re^{−ix} (Eq. (5.95)), and
\[
\sum_{n=1}^{\infty}\frac{r^n\cos nx}{n}
= -\frac{1}{2}\bigl[\ln\bigl(1 - re^{ix}\bigr) + \ln\bigl(1 - re^{-ix}\bigr)\bigr]
= -\ln\bigl(1 + r^2 - 2r\cos x\bigr)^{1/2}. \tag{14.10}
\]
Letting r = 1 and using Abel’s theorem, we see that
\[
\sum_{n=1}^{\infty}\frac{\cos nx}{n}
= -\ln(2 - 2\cos x)^{1/2}
= -\ln\left(2\sin\frac{x}{2}\right), \qquad x \in (0, 2\pi).^2 \tag{14.11}
\]
Both sides of this expression diverge as x → 0 and 2π. ■

Completeness

The problem of establishing completeness may be approached in a number of different ways. One way is to transform the trigonometric Fourier series into exponential form and to compare it with a Laurent series.
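The Abel-regularized identity Eq. (14.10) and its r → 1 limit Eq. (14.11) can be spot-checked numerically; a NumPy sketch in which finite partial sums stand in for the infinite series:

```python
import numpy as np

x = 1.0                                   # any point in (0, 2*pi)
n = np.arange(1, 2_000_001)

# Abel-regularized sum at r < 1, Eq. (14.10): absolutely convergent
r = 0.9
lhs = np.sum(r**n * np.cos(n * x) / n)
rhs = -0.5 * np.log(1.0 + r * r - 2.0 * r * np.cos(x))
assert abs(lhs - rhs) < 1e-10

# r -> 1 limit, Eq. (14.11): only conditionally convergent, so the
# partial sum approaches -ln(2 sin(x/2)) slowly (error ~ 1/N)
partial = np.sum(np.cos(n * x) / n)
closed = -np.log(2.0 * np.sin(x / 2.0))
assert abs(partial - closed) < 1e-3
```

The contrast between the two tolerances illustrates why the Abel limit is the natural way to assign a value to the conditionally convergent series.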
If we expand f(z) in a Laurent series³ (assuming f(z) is analytic),
\[
f(z) = \sum_{n=-\infty}^{\infty} d_n z^n. \tag{14.12}
\]
On the unit circle z = e^{iθ} and
\[
f(z) = f\bigl(e^{i\theta}\bigr) = \sum_{n=-\infty}^{\infty} d_n e^{in\theta}. \tag{14.13}
\]

FIGURE 14.1 Fourier representation of sawtooth wave.

The Laurent expansion on the unit circle (Eq. (14.13)) has the same form as the complex Fourier series (Eq. (14.12)), which shows the equivalence between the two expansions. Since the Laurent series as a power series has the property of completeness, we see that the Fourier functions e^{inx} form a complete set. There is a significant limitation here. Laurent series and complex power series cannot handle discontinuities such as a square wave or the sawtooth wave of Fig. 14.1, except on the circle of convergence.

The theory of vector spaces provides a second approach to the completeness of the sines and cosines. Here completeness is established by the Weierstrass theorem for two variables.

The Fourier expansion and the completeness property may be expected, for the functions sin nx, cos nx, e^{inx} are all eigenfunctions of a self-adjoint linear ODE,
\[
y'' + n^2 y = 0. \tag{14.14}
\]
We obtain orthogonal eigenfunctions for different values of the eigenvalue n for the interval [0, 2π] that satisfy the boundary conditions in the Sturm–Liouville theory (Chapter 10). Different eigenfunctions for the same eigenvalue n are orthogonal. We have
\[
\int_0^{2\pi}\sin mx\,\sin nx\,dx = \begin{cases}\pi\delta_{mn}, & m \neq 0,\\[2pt] 0, & m = 0,\end{cases} \tag{14.15}
\]
\[
\int_0^{2\pi}\cos mx\,\cos nx\,dx = \begin{cases}\pi\delta_{mn}, & m \neq 0,\\[2pt] 2\pi, & m = n = 0,\end{cases} \tag{14.16}
\]
\[
\int_0^{2\pi}\sin mx\,\cos nx\,dx = 0 \quad \text{for all integral } m \text{ and } n. \tag{14.17}
\]
Note that any interval x₀ ≤ x ≤ x₀ + 2π will be equally satisfactory. Frequently, we shall use x₀ = −π to obtain the interval −π ≤ x ≤ π. For the complex eigenfunctions e^{±inx}, orthogonality is usually defined in terms of the complex conjugate of one of the two factors,
\[
\int_0^{2\pi}\bigl(e^{imx}\bigr)^* e^{inx}\,dx = 2\pi\delta_{mn}. \tag{14.18}
\]

² The limits may be shifted to [−π, π] (and x ≠ 0) using |x| on the right-hand side.
³ Section 6.5.
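The orthogonality integrals, Eqs. (14.15)–(14.17), including the special m = 0 cases, can be confirmed by direct quadrature; a sketch assuming SciPy:

```python
import numpy as np
from scipy.integrate import quad

two_pi = 2.0 * np.pi
for m in range(3):
    for n in range(3):
        ss = quad(lambda t: np.sin(m * t) * np.sin(n * t), 0.0, two_pi)[0]
        cc = quad(lambda t: np.cos(m * t) * np.cos(n * t), 0.0, two_pi)[0]
        sc = quad(lambda t: np.sin(m * t) * np.cos(n * t), 0.0, two_pi)[0]
        # Eq. (14.17): sine-cosine products always integrate to zero
        assert abs(sc) < 1e-7
        # Eq. (14.15): pi * delta_mn, except that sin(0*x) vanishes identically
        assert abs(ss - (np.pi if m == n and m > 0 else 0.0)) < 1e-7
        # Eq. (14.16): pi * delta_mn, with 2*pi for m = n = 0
        expected = two_pi if m == n == 0 else (np.pi if m == n else 0.0)
        assert abs(cc - expected) < 1e-7
```

The m = 0 exceptions are exactly why a₀ carries the factor ½ in Eq. (14.1).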
This agrees with the treatment of the spherical harmonics (Section 12.6).

Sturm–Liouville Theory

The Sturm–Liouville theory guarantees the validity of Eq. (14.1) (for functions satisfying the Dirichlet conditions) and, by use of the orthogonality relations, Eqs. (14.15), (14.16), and (14.17), allows us to compute the expansion coefficients aₙ, bₙ, as shown in Eqs. (14.2) and (14.3). Substituting Eqs. (14.2) and (14.3) into Eq. (14.1), we write our Fourier expansion as
\[
f(x) = \frac{1}{2\pi}\int_0^{2\pi} f(t)\,dt
+ \frac{1}{\pi}\sum_{n=1}^{\infty}\left[\cos nx\int_0^{2\pi} f(t)\cos nt\,dt
+ \sin nx\int_0^{2\pi} f(t)\sin nt\,dt\right]
= \frac{1}{2\pi}\int_0^{2\pi} f(t)\,dt
+ \frac{1}{\pi}\sum_{n=1}^{\infty}\int_0^{2\pi} f(t)\cos n(t - x)\,dt, \tag{14.19}
\]
the first (constant) term being the average value of f(x) over the interval [0, 2π]. Equation (14.19) offers one approach to the development of the Fourier integral and Fourier transforms, Section 15.1.

Another way of describing what we are doing here is to say that f(x) is part of an infinite-dimensional Hilbert space, with the orthogonal cos nx and sin nx as the basis. (They can always be renormalized to unity if desired.) The statement that cos nx and sin nx (n = 0, 1, 2, . . .) span this Hilbert space is equivalent to saying that they form a complete set. Finally, the expansion coefficients aₙ and bₙ correspond to the projections of f(x), with the integral inner products (Eqs. (14.2) and (14.3)) playing the role of the dot product of Section 1.3. These points are outlined in Section 10.4.

Example 14.1.2 SAWTOOTH WAVE

An idea of the convergence of a Fourier series and the error in using only a finite number of terms in the series may be obtained by considering the expansion of
\[
f(x) = \begin{cases} x, & 0 \leq x < \pi,\\ x - 2\pi, & \pi < x \leq 2\pi.\end{cases} \tag{14.20}
\]
This is a sawtooth wave, and for convenience we shall shift our interval from [0, 2π] to [−π, π]. In this interval we have f(x) = x. Using Eqs. (14.2) and (14.3), we show the expansion to be
\[
f(x) = x = 2\left[\sin x - \frac{\sin 2x}{2} + \frac{\sin 3x}{3} - \cdots
+ (-1)^{n+1}\frac{\sin nx}{n} + \cdots\right]. \tag{14.21}
\]
Figure 14.1 shows f(x) for 0 ≤ x < π for the sum of 4, 6, and 10 terms of the series. Three features deserve comment.

1. There is a steady increase in the accuracy of the representation as the number of terms included is increased.
2. All the curves pass through the midpoint, f(x) = 0, at x = π.
3. In the vicinity of x = π there is an overshoot that persists and shows no sign of diminishing.

As a matter of incidental interest, setting x = π/2 in Eq. (14.21) provides an alternate derivation of Leibniz’ formula, Exercise 5.7.6. ■

Behavior of Discontinuities

The behavior of the sawtooth wave f(x) at x = π is an example of a general rule that at a finite discontinuity the series converges to the arithmetic mean. For a discontinuity at x = x₀ the series yields
\[
f(x_0) = \tfrac{1}{2}\bigl[f(x_0 + 0) + f(x_0 - 0)\bigr], \tag{14.22}
\]
the arithmetic mean of the right and left approaches to x = x₀. A general proof using partial sums, as in Section 14.5, is given by Jeffreys and Jeffreys and by Carslaw (see the Additional Readings). The proof may be simplified by the use of Dirac delta functions — Exercise 14.5.1. The overshoot of the sawtooth wave just before x = π in Fig. 14.1 is an example of the Gibbs phenomenon, discussed in Section 14.5.

Exercises

14.1.1 A function f(x) (quadratically integrable) is to be represented by a finite Fourier series. A convenient measure of the accuracy of the series is given by the integrated square of the deviation,
\[
\Delta_p = \int_0^{2\pi}\left[f(x) - \frac{a_0}{2}
- \sum_{n=1}^{p}(a_n\cos nx + b_n\sin nx)\right]^2 dx.
\]
Show that the requirement that Δₚ be minimized, that is,
\[
\frac{\partial \Delta_p}{\partial a_n} = 0, \qquad \frac{\partial \Delta_p}{\partial b_n} = 0,
\]
for all n, leads to choosing aₙ and bₙ as given in Eqs. (14.2) and (14.3).
Note. Your coefficients aₙ and bₙ are independent of p. This independence is a consequence of orthogonality and would not hold for powers of x, fitting a curve with polynomials.

14.1.2 In the analysis of a complex waveform (ocean tides, earthquakes, musical tones, etc.) it might be more convenient to have the Fourier series written as
\[
f(x) = \frac{a_0}{2} + \sum_{n=1}^{\infty}\alpha_n\cos(nx - \theta_n).
\]
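The convergence of the partial sums of Eq. (14.21), and the Leibniz formula obtained from it at x = π/2, are easy to check numerically; a NumPy sketch:

```python
import numpy as np

def sawtooth_partial(x, terms):
    # Partial sum of Eq. (14.21): 2 * sum (-1)**(n+1) sin(n x)/n
    n = np.arange(1, terms + 1)
    return 2.0 * np.sum((-1.0) ** (n + 1) * np.sin(n * x) / n)

# Convergence toward f(x) = x at an interior point (away from the jump)
x = 1.0
errors = [abs(sawtooth_partial(x, N) - x) for N in (10, 100, 10000)]
print(errors)
assert errors[-1] < 1e-2

# x = pi/2 in Eq. (14.21) gives Leibniz' formula: pi/4 = 1 - 1/3 + 1/5 - ...
k = np.arange(1, 1_000_001)
leibniz = np.sum((-1.0) ** (k + 1) / (2 * k - 1))
assert abs(4.0 * leibniz - np.pi) < 1e-5
```

Repeating the same experiment at x close to π would show the persistent Gibbs overshoot discussed below, rather than this steady decrease of the error.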
it might be more convenient to have the Fourier series written as f (x) = ∞ a0  + αn cos(nx − θn ). 2 n=1 14.1 General Properties 887 Show that this is equivalent to Eq. (14.1) with an = αn cos θn , αn2 = an2 + bn2 , bn = αn sin θn , tan θn = bn /an . Note. The coefficients αn2 as a function of n define what is called the power spectrum. The importance of αn2 lies in their invariance under a shift in the phase θn . 14.1.3 A function f (x) is expanded in an exponential Fourier series f (x) = If f (x) is real, 14.1.4 Assuming that f (x) = f ∗ (x), π −π [f (x)]2 dx ∞  cn einx . n=−∞ what restriction is imposed on the coefficients cn ? is finite, show that lim am = 0, m→∞ lim bm = 0. m→∞ Hint. Integrate [f (x) − sn (x)]2 , where sn (x) is the nth partial sum, and use Bessel’s inequality, π Section 10.4. For our finite interval πthe assumption that f (x) is square integrable ( −π |f (x)|2 dx is finite) implies that −π |f (x)| dx is also finite. The converse does not hold. 14.1.5 Apply the summation technique of this section to show that +1 ∞  0 0. dω = e−ax , 2 + a2 2a ω 0 (a) These results are also obtained by contour integration (Exercise 7.1.14). 15.3.5 Find the Fourier transform of the triangular pulse (Fig. 15.4). +  h 1 − a|x| , |x| < a1 , f (x) = 0, |x| > a1 . Note. This function provides another delta sequence with h = a and a → ∞. 15.3 Fourier Transforms — Inversion Theorem FIGURE 15.4 15.3.6 943 Triangular pulse. Define a sequence δn (x) = + n, |x| 1 2n , 1 2n . (This is Eq. (1.172).) Express δn (x) as a Fourier integral (via the Fourier integral theorem, inverse transform, etc.). Finally, show that we may write ∞ 1 e−ikx dk. δ(x) = lim δn (x) = n→∞ 2π −∞ 15.3.7 Using the sequence show that  n δn (x) = √ exp −n2 x 2 , π δ(x) = 1 2π ∞ −∞ e−ikx dk. Note. Remember that δ(x) is defined in terms of its behavior as part of an integrand (Section 1.15), especially Eqs. (1.178) and (1.179). 
15.3.8 Derive sine and cosine representations of δ(t − x) that are comparable to the exponential representation, Eq. (15.21d). 2 ∞ 2 ∞ sin ωt sin ωx dω, cos ωt cos ωx dω. ANS. π 0 π 0 15.3.9 In a resonant cavity an electromagnetic oscillation of frequency ω0 dies out as A(t) = A0 e−ω0 t/2Q e−iω0 t , t > 0. (Take A(t) = 0 for t < 0.) The parameter Q is a measure of the ratio of stored energy to energy loss per cycle. Calculate the frequency distribution of the oscillation, a ∗ (ω)a(ω), where a(ω) is the Fourier transform of A(t). Note. The larger Q is, the sharper your resonance line will be. ANS. a ∗ (ω)a(ω) = A20 1 . 2π (ω − ω0 )2 + (ω0 /2Q)2 944 Chapter 15 Integral Transforms 15.3.10 Prove that h¯ 2πi ∞ −∞ e−iωt dω = E0 − iŴ/2 − h¯ ω +   exp − 2Ŵth¯ exp − iEh¯0 t , 0, t > 0, t < 0. This Fourier integral appears in a variety of problems in quantum mechanics: WKB barrier penetration, scattering, time-dependent perturbation theory, and so on. Hint. Try contour integration. 15.3.11 Verify that the following are Fourier integral transforms of one another: (a) (b) ) 1 2 ·√ , 2 π a − x2 0, 0,) |x| < a, and J0 (ay), |x| > a, |x| < a, 1 2 − , |x| > a, and N0 (a|y|), √ 2 π x + a2 )  1 π ·√ (c) and K0 a|y| . 2 x 2 + a2 (d) Can you suggest why I0 (ay) is not included in this list? Hint. J0 , N0 , and K0 may be transformed most easily by using an exponential representation, reversing the order of integration, and employing the Dirac delta function exponential representation (Section 15.2). These cases can be treated equally well as Fourier cosine transforms. Note. The K0 relation appears as a consequence of a Green’s function equation in Exercise 9.7.14. 15.3.12 A calculation of the magnetic field of a circular current loop in circular cylindrical coordinates leads to the integral ∞ cos kz k K1 (ka) dk. 0 Show that this integral is equal to 2(z2 πa . + a 2 )3/2 Hint. Try differentiating Exercise 15.3.11(c). 
15.3.13 As an extension of Exercise 15.3.11, show that ∞ ∞ (a) J0 (y) dy = 1, (b) N0 (y) dy = 0, 0 15.3.14 0 (c) 0 ∞ K0 (y) dy = π . 2 The Fourier integral, Eq. (15.18), has been held meaningless for f (t) = cos αt. Show that the Fourier integral can be extended to cover f (t) = cos αt by use of the Dirac delta function. 15.3 Fourier Transforms — Inversion Theorem 15.3.15 945 Show that ∞ 0 sin ka J0 (kρ) dk =  −1/2 , a2 − ρ 2 0, ρ < a, ρ > a. Here a and ρ are positive. The equation comes from the determination of the distribution of charge on an isolated conducting disk, radius a. Note that the function on the right has an infinite discontinuity at ρ = a. Note. A Laplace transform approach appears in Exercise 15.10.8. 15.3.16 The function f (r) has a Fourier exponential transform, 1 1 f (r)eik·r d 3 r = . g(k) = 3/2 (2π) (2π)3/2 k 2 Determine f (r). Hint. Use spherical polar coordinates in k-space. ANS. f (r) = 1 . 4πr 15.3.17 (a) Calculate the Fourier exponential transform of f (x) = e−a|x| . (b) Calculate the inverse transform by employing the calculus of residues (Section 7.1). 15.3.18 Show that the following are Fourier transforms of each other i n Jn (t) and )  −1/2  2 Tn (x) 1 − x 2 ,  π 0, |x| < 1, |x| > 1. Tn (x) is the nth-order Chebyshev polynomial. Hint. With Tn (cos θ ) = cos nθ , the transform of Tn (x)(1 − x 2 )−1/2 leads to an integral representation of Jn (t). 15.3.19 Show that the Fourier exponential transform of f (µ) =  |µ| ≤ 1, |µ| > 1 Pn (µ), 0, is (2i n /2π)jn (kr). Here Pn (µ) is a Legendre polynomial and jn (kr) is a spherical Bessel function. 15.3.20 Show that the three-dimensional Fourier exponential transform of a radially symmetric function may be rewritten as a Fourier sine transform: 1 (2π)3/2 ∞ −∞ f (r)e ik·r 1 d x= k 3 ) 2 π 0 ∞ rf (r) sin kr dr. 
946 Chapter 15 Integral Transforms 15.3.21 15.4 Show that f (x) = x −1/2 is a self-reciprocal under both Fourier cosine and sine transforms; that is, ) 2 ∞ −1/2 x cos xt dx = t −1/2 , π 0 ) 2 ∞ −1/2 x sin xt ds = t −1/2 . π 0 ∞ (b) Use the preceding results to evaluate the Fresnel integrals 0 cos(y 2 ) dy and ∞ 2 0 sin(y ) dy. (a) FOURIER TRANSFORM OF DERIVATIVES In Section 15.1, Fig. 15.1 outlines the overall technique of using Fourier transforms and inverse transforms to solve a problem. Here we take an initial step in solving a differential equation — obtaining the Fourier transform of a derivative. Using the exponential form, we determine that the Fourier transform of f (x) is ∞ 1 g(ω) = √ f (x)eiωx dx (15.37) 2π −∞ and for df (x)/dx 1 g1 (ω) = √ 2π ∞ −∞ df (x) iωx e dx. dx (15.38) Integrating Eq. (15.38) by parts, we obtain ∞ ∞ iω eiωx  g1 (ω) = √ f (x) −√ f (x)eiωx dx. −∞ 2π 2π −∞ (15.39) If f (x) vanishes4 as x → ±∞, we have g1 (ω) = −iω g(ω); (15.40) that is, the transform of the derivative is (−iω) times the transform of the original function. This may readily be generalized to the nth derivative to yield gn (ω) = (−iω)n g(ω), (15.41) provided all the integrated parts vanish as x → ±∞. This is the power of the Fourier transform, the reason it is so useful in solving (partial) differential equations. The operation of differentiation has been replaced by a multiplication in ω-space. 4 Apart from cases such as Exercise 15.3.6, f (x) must vanish as x → ±∞ in order for the Fourier transform of f (x) to exist. 15.4 Fourier Transform of Derivatives Example 15.4.1 947 WAVE EQUATION This technique may be used to advantage in handling PDEs. To illustrate the technique, let us derive a familiar expression of elementary physics. An infinitely long string is vibrating freely. The amplitude y of the (small) vibrations satisfies the wave equation ∂ 2y 1 ∂ 2y = . 
∂x 2 v 2 ∂t 2 (15.42) y(x, 0) = f (x), (15.43) We shall assume an initial condition where f is localized, that is, approaches zero at large x. Applying our Fourier transform in x, which means multiplying by eiαx and integrating over x, we obtain ∞ 2 1 ∞ ∂ 2 y(x, t) iαx ∂ y(x, t) iαx e dx = e dx (15.44) ∂x 2 v 2 −∞ ∂t 2 −∞ or (−iα)2 Y (α, t) = Here we have used 1 Y (α, t) = √ 2π 1 ∂ 2 Y (α, t) . v2 ∂t 2 ∞ y(x, t)eiαx dx (15.45) (15.46) −∞ and Eq. (15.41) for the second derivative. Note that the integrated part of Eq. (15.39) vanishes: The wave has not yet gone to ±∞ because it is propagating forward in time, and there is no source at infinity because f (±∞) = 0. Since no derivatives with respect to α appear, Eq. (15.45) is actually an ODE — in fact, the linear oscillator equation. This transformation, from a PDE to an ODE, is a significant achievement. We solve Eq. (15.45) subject to the appropriate initial conditions. At t = 0, applying Eq. (15.43), Eq. (15.46) reduces to ∞ 1 Y (α, 0) = √ f (x)eiαx dx = F (α). (15.47) 2π −∞ The general solution of Eq. (15.45) in exponential form is Y (α, t) = F (α)e±ivαt . Using the inversion formula (Eq. (15.23)), we have ∞ 1 y(x, t) = √ Y (α, t)e−iαx dα, 2π −∞ (15.48) (15.49) and, by Eq. (15.48), 1 y(x, t) = √ 2π ∞ −∞ F (α)e−iα(x∓vt) dα. (15.50) 948 Chapter 15 Integral Transforms Since f (x) is the Fourier inverse transform of F (α), y(x, t) = f (x ∓ vt), (15.51) corresponding to waves advancing in the +x- and −x-directions, respectively. The particular linear combinations of waves is given by the boundary condition of Eq. (15.43) and some other boundary condition, such as a restriction on ∂y/∂t.  The accomplishment of the Fourier transform here deserves special emphasis. • Our Fourier transform converted a PDE into an ODE, where the “degree of transcendence” of the problem was reduced. In Section 15.9 Laplace transforms are used to convert ODEs (with constant coefficients) into algebraic equations. 
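The derivative rule, Eq. (15.40), can be spot-checked by direct quadrature under the convention $g(\omega) = (2\pi)^{-1/2}\int f(x)e^{i\omega x}\,dx$ used above. A sketch for the Gaussian $f(x) = e^{-x^2/2}$, chosen because it decays fast enough that the integrated (boundary) terms vanish:

```python
import numpy as np

x = np.linspace(-20.0, 20.0, 200_001)
dx = x[1] - x[0]
f  = np.exp(-x**2 / 2)        # test function; its transform is exp(-w^2/2)
df = -x * np.exp(-x**2 / 2)   # analytic derivative f'(x)

def trap(y):
    """Trapezoid rule on the fixed x grid."""
    return (y.sum() - 0.5 * (y[0] + y[-1])) * dx

errs = []
for w in (0.5, 1.0, 2.0):
    g  = trap(f  * np.exp(1j * w * x)) / np.sqrt(2 * np.pi)  # Eq. (15.37)
    g1 = trap(df * np.exp(1j * w * x)) / np.sqrt(2 * np.pi)  # Eq. (15.38)
    errs.append(abs(g1 - (-1j) * w * g))                     # Eq. (15.40): g1 = -i w g
max_err = max(errs)
```

Differentiation in $x$-space really does become multiplication by $-i\omega$ in transform space, to quadrature accuracy.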
Again, the degree of transcendence is reduced. The problem is simplified — as outlined in Fig. 15.1. Example 15.4.2 HEAT FLOW PDE To illustrate another transformation of a PDE into an ODE, let us Fourier transform the heat flow partial differential equation ∂ 2ψ ∂ψ = a2 2 , ∂t ∂x where the solution ψ(x, t) is the temperature in space as a function of time. By taking the Fourier transform of both sides of this equation (note that here only ω is the transform variable conjugate to x because t is the time in the heat flow PDE), where ∞ 1 (ω, t) = √ ψ(x, t)eiωx dx, 2π −∞ this yields an ODE for the Fourier transform  of ψ in the time variable t, ∂(ω, t) = −a 2 ω2 (ω, t). ∂t Integrating we obtain ln  = −a 2 ω2 t + ln C, or  = Ce−a 2 ω2 t , where the integration constant C may still depend on ω and, in general, is determined by initial conditions. In fact, C = (ω, 0) is the initial spatial distribution of , so it is given by the transform (in x) of the initial distribution of ψ, namely, ψ(x, 0). Putting this solution back into our inverse Fourier transform, this yields ∞ 1 2 2 ψ(x, t) = √ C(ω)e−iωx e−a ω t dω. 2π −∞ For simplicity, we here take C ω-independent (assuming a delta-function initial temperature distribution) and integrate by completing the square in ω, as in Example 15.1.1, 15.4 Fourier Transform of Derivatives 949 making appropriate changes of variables and parameters (a 2 → a 2 t, ω → x, t → −ω). This yields the particular solution of the heat flow PDE,   C x2 ψ(x, t) = √ exp − 2 , 4a t a 2t which appears as a clever guess in Chapter 8. In effect, we have shown that ψ is the inverse Fourier transform of C exp(−a 2 ω2 t).  Example 15.4.3 INVERSION OF PDE Derive a Fourier integral for the Green’s function G0 of Poisson’s PDE, which is a solution of ∇ 2 G0 (r, r′ ) = −δ(r − r′ ). Once G0 is known, the general solution of Poisson’s PDE, ∇ 2  = −4πρ(r) of electrostatics, is given as (r) = G0 (r, r′ )4πρ(r′ ) d 3 r ′ . 
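The closed form obtained in Example 15.4.2, $\psi(x,t) = \dfrac{C}{a\sqrt{2t}}\exp\!\left(-\dfrac{x^2}{4a^2t}\right)$, can be double-checked symbolically. A SymPy sketch ($C$, $a$, $t$, $x$ treated as positive symbols; $C$ is the book's integration constant, and any constant multiple of the heat kernel works, so only the PDE itself is checked):

```python
import sympy as sp

x, t, a, C = sp.symbols('x t a C', positive=True)

# Candidate solution from Example 15.4.2
psi = C / (a * sp.sqrt(2 * t)) * sp.exp(-x**2 / (4 * a**2 * t))

# Heat-flow PDE: d psi/dt - a^2 d^2 psi/dx^2 should vanish identically
residual = sp.simplify(sp.diff(psi, t) - a**2 * sp.diff(psi, x, 2))
```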
Applying ∇ 2 to  and using the PDE the Green’s function satisfies, we check that ∇ 2 (r) = ∇ 2 G0 (r, r′ )4πρ(r′ ) d 3 r ′ = − δ(r − r′ )4πρ(r′ ) d 3 r ′ = −4πρ(r). Now we use the Fourier transform of G0 , which is g0 , and of that of the δ function, writing 3 3 ip·(r−r′ ) d p 2 ip·(r−r′ ) d p = − e . ∇ g0 (p)e (2π)3 (2π)3 Because the integrands of equal Fourier integrals must be the same (almost) everywhere, which follows from the inverse Fourier transform, and with ′ ′ ∇eip·(r−r ) = ipeip·(r−r ) , this yields −p2 g0 (p) = −1. Therefore, application of the Laplacian to a Fourier integral f (r) corresponds to multiplying its Fourier transform g(p) by −p2 . Substituting this solution into the inverse Fourier transform for G0 gives d 3p 1 ′ ′ . G0 (r, r ) = eip·(r−r ) = (2π)3 p2 4π|r − r′ | We can verify the last part of this result by applying ∇ 2 to G0 again and recalling from 1 ′ Chapter 1 that ∇ 2 |r−r ′ | = −4πδ(r − r ). 950 Chapter 15 Integral Transforms The inverse Fourier transform can be evaluated using polar coordinates, exploiting the spherical symmetry of p2 . For simplicity, we write R = r − r′ and call θ the angle between R and p, 2π ∞ 1 3 ip·R d p ipR cos θ e dϕ e d cos θ dp = p2 0 −1 0 2π ∞ dp ipR cos θ 1 4π ∞ sin pR e dp = =  cos θ=−1 iR 0 p R 0 p 4π ∞ sin pR 2π 2 = d(pR) = , R 0 pR R ∞ where θ and ϕ are the angles of p and 0 sinx x dx = π2 , from Example 7.1.4. Dividing by (2π)3 , we obtain G0 (R) = 1/(4πR), as claimed. An evaluation of this Fourier transform by contour integration is given in Example 9.7.2.  Exercises 15.4.1 The one-dimensional Fermi age equation for the diffusion of neutrons slowing down in some medium (such as graphite) is ∂ 2 q(x, τ ) ∂q(x, τ ) . = ∂τ ∂x 2 Here q is the number of neutrons that slow down, falling below some given energy per second per unit volume. The Fermi age, τ , is a measure of the energy loss. 
If $q(x, 0) = S\delta(x)$, corresponding to a plane source of neutrons at $x = 0$, emitting $S$ neutrons per unit area per second, derive the solution
$$q = S\,\frac{e^{-x^2/4\tau}}{\sqrt{4\pi\tau}}.$$
Hint. Replace $q(x, \tau)$ with
$$p(k, \tau) = \frac{1}{\sqrt{2\pi}}\int_{-\infty}^{\infty} q(x, \tau)\,e^{ikx}\,dx.$$
This is analogous to the diffusion of heat in an infinite medium.

15.4.2 Equation (15.41) yields $g_2(\omega) = -\omega^2 g(\omega)$ for the Fourier transform of the second derivative of $f(x)$. The condition $f(x) \to 0$ for $x \to \pm\infty$ may be relaxed slightly. Find the least restrictive condition for the preceding equation for $g_2(\omega)$ to hold.
$$\text{ANS.}\quad \left[\frac{df(x)}{dx} - i\omega f(x)\right]e^{i\omega x}\,\Bigg|_{-\infty}^{\infty} = 0.$$

15.4.3 The one-dimensional neutron diffusion equation with a (plane) source is
$$-D\,\frac{d^2\varphi(x)}{dx^2} + K^2 D\,\varphi(x) = Q\delta(x),$$
where $\varphi(x)$ is the neutron flux, $Q\delta(x)$ is the (plane) source at $x = 0$, and $D$ and $K^2$ are constants. Apply a Fourier transform. Solve the equation in transform space. Transform your solution back into $x$-space.
$$\text{ANS.}\quad \varphi(x) = \frac{Q}{2KD}\,e^{-|Kx|}.$$

15.4.4 For a point source at the origin, the three-dimensional neutron diffusion equation becomes
$$-D\nabla^2\varphi(\mathbf{r}) + K^2 D\,\varphi(\mathbf{r}) = Q\delta(\mathbf{r}).$$
Apply a three-dimensional Fourier transform. Solve the transformed equation. Transform the solution back into $\mathbf{r}$-space.

15.4.5 (a) Given that $F(\mathbf{k})$ is the three-dimensional Fourier transform of $f(\mathbf{r})$ and $F_1(\mathbf{k})$ is the three-dimensional Fourier transform of $\nabla f(\mathbf{r})$, show that
$$F_1(\mathbf{k}) = (-i\mathbf{k})F(\mathbf{k}).$$
This is a three-dimensional generalization of Eq. (15.40).
(b) Show that the three-dimensional Fourier transform of $\nabla\cdot\nabla f(\mathbf{r})$ is
$$F_2(\mathbf{k}) = (-i\mathbf{k})^2 F(\mathbf{k}).$$
Note. Vector $\mathbf{k}$ is a vector in the transform space. In Section 15.6 we shall have $\hbar\mathbf{k} = \mathbf{p}$, linear momentum.

15.5 CONVOLUTION THEOREM

We shall employ convolutions to solve differential equations, to normalize momentum wave functions (Section 15.6), and to investigate transfer functions (Section 15.7). Let us consider two functions $f(x)$ and $g(x)$ with Fourier transforms $F(t)$ and $G(t)$, respectively.
We define the operation
$$f * g \equiv \frac{1}{\sqrt{2\pi}}\int_{-\infty}^{\infty} g(y)\,f(x - y)\,dy \tag{15.52}$$
as the convolution of the two functions $f$ and $g$ over the interval $(-\infty, \infty)$. This form of an integral appears in probability theory in the determination of the probability density of two random, independent variables. Our solution of Poisson's equation, Eq. (9.148), may be interpreted as a convolution of a charge distribution, $\rho(\mathbf{r}_2)$, and a weighting function, $(4\pi\varepsilon_0 |\mathbf{r}_1 - \mathbf{r}_2|)^{-1}$. In other works this is sometimes referred to as the Faltung, to use the German term for "folding."5

FIGURE 15.5 (figure: $f(y)$ and $f(x - y)$ for $f(y) = e^{-y}$; see footnote 5)

We now transform the integral in Eq. (15.52) by introducing the Fourier transforms:
$$\int_{-\infty}^{\infty} g(y)\,f(x - y)\,dy = \frac{1}{\sqrt{2\pi}}\int_{-\infty}^{\infty} g(y)\int_{-\infty}^{\infty} F(t)\,e^{-it(x - y)}\,dt\,dy
= \frac{1}{\sqrt{2\pi}}\int_{-\infty}^{\infty} F(t)\left[\int_{-\infty}^{\infty} g(y)\,e^{ity}\,dy\right]e^{-itx}\,dt
= \int_{-\infty}^{\infty} F(t)G(t)\,e^{-itx}\,dt, \tag{15.53}$$
interchanging the order of integration and transforming $g(y)$. This result may be interpreted as follows: The Fourier inverse transform of a product of Fourier transforms is the convolution of the original functions, $f * g$. For the special case $x = 0$ we have
$$\int_{-\infty}^{\infty} F(t)G(t)\,dt = \int_{-\infty}^{\infty} f(-y)g(y)\,dy. \tag{15.54}$$
The minus sign in $-y$ suggests that modifications be tried. We now do this with $g^*$ instead of $g$ using a different technique.

Parseval's Relation

Results analogous to Eqs. (15.53) and (15.54) may be derived for the Fourier sine and cosine transforms (Exercises 15.5.1 and 15.5.3). Equation (15.54) and the corresponding sine and cosine convolutions are often labeled Parseval's relations by analogy with Parseval's theorem for Fourier series (Chapter 14, Exercise 14.4.2).

5. For $f(y) = e^{-y}$, $f(y)$ and $f(x - y)$ are plotted in Fig. 15.5. Clearly, $f(y)$ and $f(x - y)$ are mirror images of each other in relation to the vertical line $y = x/2$; that is, we could generate $f(x - y)$ by folding over $f(y)$ on the line $y = x/2$.
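The convolution theorem has an exact discrete analogue that is easy to test: the inverse DFT of a product of DFTs equals the circular convolution of the two sequences (up to the DFT's own normalization; the $\sqrt{2\pi}$ of Eq. (15.52) has no discrete counterpart). A NumPy sketch:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 64
f = rng.standard_normal(n)
g = rng.standard_normal(n)

# Circular convolution computed directly from its definition
direct = np.array([sum(f[(k - m) % n] * g[m] for m in range(n))
                   for k in range(n)])

# Convolution theorem: same result via the product of DFTs
via_fft = np.fft.ifft(np.fft.fft(f) * np.fft.fft(g)).real

max_err = np.max(np.abs(direct - via_fft))
```

This is also why FFT-based convolution is the practical route for long sequences: the direct sum costs $O(n^2)$, the transform route $O(n\log n)$.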
15.5 Convolution Theorem The Parseval relation6,7 ∞ −∞ ∗ F (ω)G (ω) dω = ∞ −∞ f (t)g ∗ (t) dt 953 (15.55) may be derived elegantly using the Dirac delta function representation, Eq. (15.21d). We have ∞ ∞ ∞ ∞ 1 1 ∗ −iωt f (t)g (t) dt = F (ω)e dω · √ G∗ (x)eixt dx dt, √ 2π −∞ 2π −∞ −∞ −∞ (15.56) with attention to the complex conjugation in the G∗ (x) to g ∗ (t) transform. Integrating over t first, and using Eq. (15.21d), we obtain ∞ ∞ ∞ G∗ (x)δ(x − ω) dx dω F (ω) f (t)g ∗ (t) dt = −∞ = −∞ ∞ −∞ −∞ F (ω)G∗ (ω) dω, (15.57) our desired Parseval relation. If f (t) = g(t), then the integrals in the Parseval relation are normalization integrals (Section 10.4). Equation (15.57) guarantees that if a function f (t) is normalized to unity, its transform F (ω) is likewise normalized to unity. This is extremely important in quantum mechanics as developed in the next section. It may be shown that the Fourier transform is a unitary operation (in the Hilbert space L2 , square integrable functions). The Parseval relation is a reflection of this unitary property — analogous to Exercise 3.4.26 for matrices. In Fraunhofer diffraction optics the diffraction pattern (amplitude) appears as the transform of the function describing the aperture (compare Exercise 15.5.5). With intensity proportional to the square of the amplitude the Parseval relation implies that the energy passing through the aperture seems to be somewhere in the diffraction pattern — a statement of the conservation of energy. Parseval’s relations may be developed independently of the inverse Fourier transform and then used rigorously to derive the inverse transform. Details are given by Morse and Feshbach,8 Section 4.8 (see also Exercise 15.5.4). Exercises 15.5.1 Work out the convolution equation corresponding to Eq. (15.53) for (a) Fourier sine transforms ∞  1 ∞ g(y) f (y + x) + f (y − x) dy = Fs (s)Gs (s) cos sx ds, 2 0 0 where f and g are odd functions. 6 Note that all arguments are positive, in contrast to Eq. (15.54). 
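The Parseval relation, Eq. (15.55), likewise survives discretization: with the unitary ("ortho") normalization the DFT preserves the sum of $|f|^2$, which is the discrete counterpart of the normalization-preserving property discussed above. A NumPy sketch:

```python
import numpy as np

rng = np.random.default_rng(1)
f = rng.standard_normal(256) + 1j * rng.standard_normal(256)

# Unitary DFT: with norm='ortho' the transform preserves the 2-norm,
# mirroring the continuous Parseval relation.
F = np.fft.fft(f, norm='ortho')

time_norm = np.sum(np.abs(f)**2)
freq_norm = np.sum(np.abs(F)**2)
```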
7 Some authors prefer to restrict Parseval’s name to series and refer to Eq. (15.55) as Rayleigh’s theorem. 8 P. M. Morse and H. Feshbach, Methods of Theoretical Physics, New York: McGraw-Hill (1953). 954 Chapter 15 Integral Transforms (b) Fourier cosine transforms ∞  1 ∞ g(y) f (y + x) + f (x − y) dy = Fc (s)Gc (s) cos sx ds, 2 0 0 where f and g are even functions. 15.5.2 F (ρ) and G(ρ) are the Hankel transforms of f (r) and g(r), respectively (Exercise 15.1.1). Derive the Hankel transform Parseval relation: ∞ ∞ ∗ F (ρ)G(ρ)ρ dρ = f ∗ (r)g(r)r dr. 0 15.5.3 0 Show that for both Fourier sine and Fourier cosine transforms Parseval’s relation has the form ∞ ∞ f (y)g(y) dy. F (t)G(t) dt = 0 0 15.5.4 Starting from Parseval’s relation (Eq. (15.54)), let g(y) = 1, 0 ≤ y ≤ α, and zero elsewhere. From this derive the Fourier inverse transform (Eq. (15.23)). Hint. Differentiate with respect to α. 15.5.5 (a) A rectangular pulse is described by f (x) =  1, 0, |x| < a, |x| > a. Show that the Fourier exponential transform is ) 2 sin at . F (t) = π t This is the single-slit diffraction problem of physical optics. The slit is described by f (x). The diffraction pattern amplitude is given by the Fourier transform F (t). (b) Use the Parseval relation to evaluate ∞ sin2 t dt. 2 −∞ t This integral may also be evaluated by using the calculus of residues, Exercise 7.1.12. ANS. (b) π. 15.5.6 Solve Poisson’s equation, ∇ 2 ψ(r) = −ρ(r)/ε0 , by the following sequence of operations: (a) Take the Fourier transform of both sides of this equation. Solve for the Fourier transform of ψ(r). (b) Carry out the Fourier inverse transform by using a three-dimensional analog of the convolution theorem, Eq. (15.53). 15.6 Momentum Representation 15.5.7 955 (a) Given f (x) = 1 − |x/2|, −2 ≤ x ≤ 2, and zero elsewhere, show that the Fourier transform of f (x) is )   2 sin t 2 . F (t) = π t (b) Using the Parseval relation, evaluate ∞ −∞ sin t t 4 dt. 2π . 
3 With F (t) and G(t) the Fourier transforms of f (x) and g(x), respectively, show that ∞ ∞     f (x) − g(x)2 dx = F (t) − G(t)2 dt. ANS. (b) 15.5.8 −∞ −∞ If g(x) is an approximation to f (x), the preceding relation indicates that the mean square deviation in t-space is equal to the mean square deviation in x-space. 15.5.9 Use the Parseval relation to evaluate ∞ ∞ ω2 dω dω (a) , (b) . 2 2 2 2 2 2 −∞ (ω + a ) −∞ (ω + a ) Hint. Compare Exercise 15.3.4. ANS. (a) 15.6 π π , (b) . 3 2a 2a MOMENTUM REPRESENTATION In advanced dynamics and in quantum mechanics, linear momentum and spatial position occur on an equal footing. In this section we shall start with the usual space distribution and derive the corresponding momentum distribution. For the one-dimensional case our wave function ψ(x) has the following properties: ψ ∗ (x)ψ(x) dx is the probability density of finding a quantum particle between x and x + dx, and ∞ 2. ψ ∗ (x)ψ(x) dx = 1 (15.58) 1. −∞ corresponds to probability unity. 3. In addition, we have x = ∞ −∞ ψ ∗ (x)xψ(x) dx (15.59) for the average position of the particle along the x-axis. This is often called an expectation value. We want a function g(p) that will give the same information about the momentum: 956 Chapter 15 Integral Transforms g ∗ (p)g(p) dp is the probability density that our quantum particle has a momentum between p and p + dp. ∞ g ∗ (p)g(p) dp = 1. (15.60) 2. 1. −∞ p = 3. ∞ −∞ g ∗ (p)p g(p) dp. (15.61) As subsequently shown, such a function is given by the Fourier transform of our space function ψ(x). Specifically,9 ∞ 1 g(p) = √ ψ(x)e−ipx/h¯ dx, (15.62) 2π h¯ −∞ ∞ 1 ∗ g (p) = √ ψ ∗ (x)eipx/h¯ dx. (15.63) 2π h¯ −∞ The corresponding three-dimensional momentum function is 1 g(p) = (2π h) ¯ 3/2 ∞ ψ(r)e−ir·p/h¯ d 3 r. −∞ To verify Eqs. (15.62) and (15.63), let us check on properties 2 and 3. Property 2, the normalization, is automatically satisfied as a Parseval relation, Eq. (15.55). 
If the space function ψ(x) is normalized to unity, the momentum function g(p) is also normalized to unity. To check on property 3, we must show that ∞ ∞ h¯ d p = ψ(x) dx, (15.64) g ∗ (p)pg(p) dp = ψ ∗ (x) i dx −∞ −∞ where (h¯ /i)(d/dx) is the momentum operator in the space representation. We replace the momentum functions by Fourier-transformed space functions, and the first integral becomes 1 2π h¯ ∞ ′ pe−ip(x−x )/h¯ ψ ∗ (x ′ )ψ(x) dp dx ′ dx. (15.65) −∞ Now we use the plane-wave identity pe −ip(x−x ′ )/h¯  h¯ −ip(x−x ′ )/h¯ d , − e = dx i 9 The h may be avoided by using the wave number k, p = k h (and p = kh), so ¯ ¯ ¯ ϕ(k) = An example of this notation appears in Section 16.1. 1 (2π )1/2 ψ(x)e−ikx dx. (15.66) 15.6 Momentum Representation 957 with p a constant, not an operator. Substituting into Eq. (15.65) and integrating by parts, holding x ′ and p constant, we obtain  ∞ ∞ 1 h¯ d ′ p = (15.67) e−ip(x−x )/h¯ dp · ψ ∗ (x ′ ) ψ(x) dx ′ dx. 2π h¯ −∞ i dx −∞ Here we assume ψ(x) vanishes as x → ±∞, eliminating the integrated part. Using the Dirac delta function, Eq. (15.21c), Eq. (15.67) reduces to Eq. (15.64) to verify our momentum representation. Alternatively, if the integration over p is done first in Eq. (15.65), leading to ∞ ′ pe−ip(x−x )/h¯ dp = 2πi h¯ 2 δ ′ (x − x ′ ), −∞ and using Exercise 1.15.9, we can do the integration over x, which causes ψ(x) to become −dψ(x ′ )/dx ′ . The remaining integral over x ′ is the right-hand side of Eq. (15.64). Example 15.6.1 HYDROGEN ATOM The hydrogen atom ground state10 may be described by the spatial wave function   1 1/2 −r/a0 e , (15.68) ψ(r) = πa03 a0 being the Bohr radius, 4πε0 h¯ 2 /me2 . We now have a three-dimensional wave function. The transform corresponding to Eq. (15.62) is 1 ψ(r)e−ip·r/h¯ d 3 r. (15.69) g(p) = (2π h) ¯ 3/2 Substituting Eq. (15.68) into Eq. (15.69) and using 8πa , (15.70) e−ar+ib·r d 3 r = 2 (a + b2 )2 we obtain the hydrogenic momentum wave function, 3/2 g(p) = 23/2 a0 h¯ 5/2 . 
π (a02 p 2 + h¯ 2 )2 (15.71) Such momentum functions have been found useful in problems like Compton scattering from atomic electrons, the wavelength distribution of the scattered radiation, depending on the momentum distribution of the target electrons. The relation between the ordinary space representation and the momentum representation may be clarified by considering the basic commutation relations of quantum mechanics. We go from a classical Hamiltonian to the Schrödinger wave equation by requiring that momentum p and position x not commute. Instead, we require that [p, x] ≡ px − xp = −i h. ¯ (15.72) 10 See E. V. Ivash, A momentum representation treatment of the hydrogen atom problem. Am. J. Phys. 40: 1095 (1972) for a momentum representation treatment of the hydrogen atom l = 0 states. 958 Chapter 15 Integral Transforms For the multidimensional case, Eq. (15.72) is replaced by [pi , xj ] = −i hδ ¯ ij . (15.73) The Schrödinger (space) representation is obtained by using x →x: pi → −i h¯ ∂ , ∂xi replacing the momentum by a partial space derivative. We see that [p, x]ψ(x) = −i hψ(x). ¯ (15.74) However, Eq. (15.72) can equally well be satisfied by using xj → i h¯ p → p: ∂ . ∂pj This is the momentum representation. Then [p, x]g(p) = −i hg(p). ¯ (15.75) Hence the representation (x) is not unique; (p) is an alternate possibility. In general, the Schrödinger representation (x) leading to the Schrödinger wave equation is more convenient because the potential energy V is generally given as a function of position V (x, y, z). The momentum representation (p) usually leads to an integral equation (compare Chapter 16 for the pros and cons of the integral equations). For an exception, consider the harmonic oscillator.  Example 15.6.2 HARMONIC OSCILLATOR The classical Hamiltonian (kinetic energy + potential energy = total energy) is H (p, x) = 1 p2 + kx 2 = E, 2m 2 (15.76) where k is the Hooke’s law constant. 
In the Schrödinger representation we obtain h¯ 2 d 2 ψ(x) 1 2 (15.77) + kx ψ(x) = Eψ(x). 2m dx 2 2 √ For total energy E equal to (k/m)h¯ /2 there is an unnormalized solution (Section 13.1), − ψ(x) = e−( √ mk/2h¯ )x 2 . (15.78) The momentum representation leads to h¯ 2 k d 2 g(p) p2 g(p) − = Eg(p). 2m 2 dp 2 (15.79) ) (15.80) Again, for E= k h¯ m2 15.6 Momentum Representation 959 the momentum wave equation (15.79) is satisfied by the unnormalized g(p) = e−p √ ¯ mk) . 2 /(2h (15.81) Either representation, space or momentum (and an infinite number of other possibilities), may be used, depending on which is more convenient for the particular problem under attack. The demonstration that g(p) is the momentum wave function corresponding to Eq. (15.78) — that it is the Fourier inverse transform of Eq. (15.78) — is left as Exercise 15.6.3.  Exercises 15.6.1 15.6.2 The function eik·r describes a plane wave of momentum p = h¯ k normalized to unit density. (Time dependence of e−iωt is assumed.) Show that these plane-wave functions satisfy an orthogonality relation  ik·r ∗ ik′ ·r e dx dy dz = (2π)3 δ(k − k′ ). e An infinite plane wave in quantum mechanics may be represented by the function ′ ψ(x) = eip x/h¯ . Find the corresponding momentum distribution function. Note that it has an infinity and that ψ(x) is not normalized. 15.6.3 A linear quantum oscillator in its ground state has a wave function ψ(x) = a −1/2 π −1/4 e−x 2 /2a 2 . Show that the corresponding momentum function is g(p) = a 1/2 π −1/4 h¯ −1/2 e−a 15.6.4 2 p 2 /2h2 ¯ . The nth excited state of the linear quantum oscillator is described by ψn (x) = a −1/2 2−n/2 π −1/4 (n!)−1/2 e−x 2 /2a 2 Hn (x/a), where Hn (x/a) is the nth Hermite polynomial, Section 13.1. As an extension of Exercise 15.6.3, find the momentum function corresponding to ψn (x). Hint. ψn (x) may be represented by (aˆ † )n ψ0 (x), where aˆ † is the raising operator, Exercise 13.1.14 to 13.1.16. 
15.6.5 A free particle in quantum mechanics is described by a plane wave ψk (x, t) = ei[kx−(h¯ k 2 /2m)t] . Combining waves of adjacent momentum with an amplitude weighting factor ϕ(k), we form a wave packet ∞ 2 ϕ(k)ei[kx−(h¯ k /2m)t] dk. (x, t) = −∞ 960 Chapter 15 Integral Transforms (a) Solve for ϕ(k) given that (x, 0) = e−x (b) 2 /2a 2 . Using the known value of ϕ(k), integrate to get the explicit form of (x, t). Note that this wave packet diffuses or spreads out with time. 2 2 e−{x /2[a +(i h¯ /m)t]} ANS. (x, t) = . 2 )]1/2 [1 + (i ht/ma ¯ Note. An interesting discussion of this problem from the evolution operator point of view is given by S. M. Blinder, Evolution of a Gaussian wave packet, Am. J. Phys. 36: 525 (1968). 15.6.6 Find the time-dependent momentum wave function g(k, t) corresponding to (x, t) of Exercise 15.6.5. Show that the momentum wave packet g ∗ (k, t)g(k, t) is independent of time. 15.6.7 The deuteron, Example 10.1.2, may be described reasonably well with a Hulthén wave function ψ(r) = A[e−αr − e−βr ]/r, with A, α, and β constants. Find g(p), the corresponding momentum function. Note. The Fourier transform may be rewritten as Fourier sine and cosine transforms or as a Laplace transform, Section 15.8. 15.6.8 The nuclear form factor F (k) and the charge distribution ρ(r) are three-dimensional Fourier transforms of each other: 1 ρ(r)eik·r d 3 r. F (k) = (2π)3/2 If the measured form factor is −3/2 F (k) = (2π)  k2 1+ 2 a −1 , find the corresponding charge distribution. ANS. ρ(r) = 15.6.9 a 2 e−ar . 4π r Check the normalization of the hydrogen momentum wave function 3/2 g(p) = 15.6.10 by direct evaluation of the integral 23/2 a0 h¯ 5/2 π (a02 p 2 + h¯ 2 )2 g ∗ (p)g(p) d 3 p. With ψ(r) a wave function in ordinary space and ϕ(p) the corresponding momentum function, show that 15.7 Transfer Functions (a) (b) 1 (2π h) ¯ 3/2 1 (2π h) ¯ 3/2 961 rψ(r)e−ir·p/h¯ d 3 r = i h¯ ∇ p ϕ(p), r2 ψ(r)e−r·p/h¯ d 3 r = (i h¯ ∇ p )2 ϕ(p). Note. 
∇ p is the gradient in momentum space: xˆ ∂ ∂ ∂ + yˆ + zˆ . ∂px ∂py ∂pz These results may be extended to any positive integer power of r and therefore to any (analytic) function that may be expanded as a Maclaurin series in r. 15.6.11 The ordinary space wave function ψ(r, t) satisfies the time-dependent Schrödinger equation ∂ψ(r, t) h¯ 2 2 =− ∇ ψ + V (r)ψ. ∂t 2m Show that the corresponding time-dependent momentum wave function satisfies the analogous equation, i h¯ p2 ∂ϕ(p, t) = ϕ + V (i h¯ ∇ p )ϕ. ∂t 2m Note. Assume that V (r) may be expressed by a Maclaurin series and use Exercise 15.6.10. V (i h¯ ∇ p ) is the same function of the variable i h¯ ∇ p that V (r) is of the variable r. i h¯ 15.6.12 The one-dimensional time-independent Schrödinger wave equation is h¯ 2 d 2 ψ(x) + V (x)ψ(x) = Eψ(x). 2m dx 2 For the special case of V (x) an analytic function of x, show that the corresponding momentum wave equation is   p2 d g(p) + g(p) = Eg(p). V i h¯ dp 2m − Derive this momentum wave equation from the Fourier transform, Eq. (15.62), and its inverse. Do not use the substitution x → i h(d/dp) directly. ¯ 15.7 TRANSFER FUNCTIONS A time-dependent electrical pulse may be regarded as built-up as a superposition of plane waves of many frequencies. For angular frequency ω we have a contribution F (ω)eiωt . Then the complete pulse may be written as ∞ 1 F (ω)eiωt dω. f (t) = 2π −∞ (15.82) 962 Chapter 15 Integral Transforms FIGURE 15.6 Servomechanism or a stereo amplifier. Because the angular frequency ω is related to the linear frequency ν by ω ν= , 2π it is customary to associate the entire 1/2π factor with this integral. But if ω is a frequency, what about the negative frequencies? The negative ω may be looked on as a mathematical device to avoid dealing with two functions (cos ωt and sin ωt) separately (compare Section 14.1). Because Eq. (15.82) has the form of a Fourier transform, we may solve for F (ω) by writing the inverse transform, ∞ F (ω) = f (t)e−iωt dt. 
(15.83) −∞ Equation (15.83) represents a resolution of the pulse f (t) into its angular frequency components. Equation (15.82) is a synthesis of the pulse from its components. Consider some device, such as a servomechanism or a stereo amplifier (Fig. 15.6), with an input f (t) and an output g(t). For an input of a single frequency ω, fω (t) = eiωt , the amplifier will alter the amplitude and may also change the phase. The changes will probably depend on the frequency. Hence gω (t) = ϕ(ω)fω (t). (15.84) This amplitudes- and phase-modifying function ϕ(ω) is called a transfer function. It usually will be complex: ϕ(ω) = u(ω) + iv(ω), (15.85) where the functions u(ω) and v(ω) are real. In Eq. (15.84) we assume that the transfer function ϕ(ω) is independent of input amplitude and of the presence or absence of any other frequency components. That is, we are assuming a linear mapping of f (t) onto g(t). Then the total output may be obtained by integrating over the entire input, as modified by the amplifier ∞ 1 g(t) = ϕ(ω)F (ω)eiωt dω. (15.86) 2π −∞ The transfer function is characteristic of the amplifier. Once the transfer function is known (measured or calculated), the output g(t) can be calculated for any input f (t). Let us consider ϕ(ω) as the Fourier (inverse) transform of some function (t): ∞ ϕ(ω) = (t)e−iωt dt. (15.87) −∞ 15.7 Transfer Functions 963 Then Eq. (15.86) is the Fourier transform of two inverse transforms. From Section 15.5 we obtain the convolution ∞ f (τ )(t − τ ) dτ. (15.88) g(t) = −∞ Interpreting Eq. (15.88), we have an input — a “cause” — f (τ ), modified by (t − τ ), producing an output — an “effect” — g(t). Adopting the concept of causality — that the cause precedes the effect — we must require τ < t. We do this by requiring (t − τ ) = 0, τ > t. (15.89) Then Eq. (15.88) becomes g(t) = t −∞ f (τ )(t − τ ) dτ. (15.90) The adoption of Eq. (15.89) has profound consequences here and equivalently in dispersion theory, Section 7.2. 
Significance of (t) To see the significance of , let f (τ ) be a sudden impulse starting at τ = 0, f (τ ) = δ(τ ), where δ(τ ) is a Dirac delta distribution on the positive side of the origin. Then Eq. (15.90) becomes t g(t) = δ(τ )(t − τ ) dτ, −∞  (t), g(t) = 0, (15.91) t > 0, t < 0. This identifies (t) as the output function corresponding to a unit impulse at t = 0. Equation (15.91) also serves to establish that (t) is real. Our original transfer function gives the steady-state output corresponding to a unit-amplitude single-frequency input. (t) and ϕ(ω) are Fourier transforms of each other. From Eq. (15.87) we now have ∞ ϕ(ω) = (t)e−iωt dt, (15.92) 0 with the lower limit set equal to zero by causality (Eq. (15.89)). With (t) real from Eq. (15.91) we separate real and imaginary parts and write ∞ u(ω) = (t) cos ωt dt, 0 v(ω) = − 0 ∞ (15.93) (t) sin ωt dt, ω > 0. 964 Chapter 15 Integral Transforms From this we see that the real part of ϕ(ω), u(ω), is even, whereas the imaginary part of ϕ(ω), v(ω), is odd: u(−ω) = u(ω), v(−ω) = −v(ω). Compare this result with Exercise 15.3.1. Interpreting Eq. (15.93) as Fourier cosine and sine transforms, we have 2 ∞ u(ω) cos ωt dω (t) = π 0 2 ∞ =− v(ω) sin ωt dω, t > 0. π 0 Combining Eqs. (15.93) and (15.94), we obtain   ∞ ∞ 2 ′ ′ ′ u(ω ) cos ω t dω dt, sin ωt v(ω) = − π 0 0 (15.94) (15.95) showing that if our transfer function has a real part, it will also have an imaginary part (and vice versa). Of course, this assumes that the Fourier transforms exist, thus excluding cases such as (t) = 1. The imposition of causality has led to a mutual interdependence of the real and imaginary parts of the transfer function. The reader should compare this with the results of the dispersion theory of Section 7.2, also involving causality. It may be helpful to show that the parity properties of u(ω) and v(ω) require (t) to vanish for negative t. Inverting Eq. (15.87), we have ∞   1 (t) = u(ω) + iv(ω) cos ωt + i sin ωt dω. 
(15.96) 2π −∞ With u(ω) even and v(ω) odd, Eq. (15.96) becomes 1 ∞ 1 ∞ u(ω) cos ωt dω − v(ω) sin ωt dω. (t) = π 0 π 0 From Eq. (15.94), 0 ∞ u(ω) cos ωt dω = − ∞ v(ω) sin ωt dω, t > 0. 0 If we reverse the sign of t, sin ωt reverses sign and, from Eq. (15.97), (t) = 0, t 0, 11 This is sometimes called a one-sided Laplace transform; the integral from −∞ to +∞ is referred to as a two-sided Laplace transform. Some authors introduce an additional factor of s. This extra s appears to have little advantage and continually gets in the way (compare Jeffreys and Jeffreys, Section 14.13 — see the Additional Readings — for additional comments). Generally, we take s to be real and positive. It is possible to have s complex, provided ℜ(s) > 0. 966 Chapter 15 Integral Transforms then L{1} = ∞ 0 1 e−st dt = , s for s > 0. (15.102) Again, let F (t) = ekt , t > 0. The Laplace transform becomes ∞ ' ( L ekt = e−st ekt dt = 0 1 , s −k for s > k. (15.103) Using this relation, we obtain the Laplace transform of certain other functions. Since cosh kt = we have 1  kt e + e−kt , 2 sinh kt = 1  kt e − e−kt , 2   1 1 s 1 + = 2 , 2 s −k s +k s − k2   k 1 1 1 = 2 − , L{sinh kt} = 2 s −k s +k s − k2 (15.104) L{cosh kt} = (15.105) both valid for s > k. We have the relations cos kt = cosh ikt, sin kt = −i sinh ikt. (15.106) Using Eqs. (15.105) with k replaced by ik, we find that the Laplace transforms are L{cos kt} = s , s 2 + k2 (15.107) k L{sin kt} = 2 , s + k2 both valid for s > 0. Another derivation of this last transform is given in the next section. ∞ Note that lims→0 L{sin kt} = 1/k. The Laplace transform assigns a value of 1/k to 0 sin kt dt. Finally, for F (t) = t n , we have ∞ ' n( L t = e−st t n dt, 0 which is just the factorial function. Hence ' ( n! L t n = n+1 , s s > 0, n > −1. (15.108) Note that in all these transforms we have the variable s in the denominator — negative powers of s. In particular, lims→∞ f (s) = 0. 
If f(s) involved positive powers of s (lim_{s→∞} f(s) → ∞), then no inverse transform would exist.

Inverse Transform

There is little importance to these operations unless we can carry out the inverse transform, as in Fourier transforms. That is, with

$$\mathcal{L}\{F(t)\} = f(s), \qquad \text{then} \qquad \mathcal{L}^{-1}\{f(s)\} = F(t). \tag{15.109}$$

This inverse transform is not unique. Two functions F₁(t) and F₂(t) may have the same transform, f(s). However, in this case

$$F_1(t) - F_2(t) = N(t),$$

where N(t) is a null function (Fig. 15.7), indicating that

$$\int_0^{t_0} N(t)\,dt = 0 \qquad \text{for all positive } t_0.$$

This result is known as Lerch's theorem. Therefore to the physicist and engineer N(t) may almost always be taken as zero, and the inverse operation becomes unique.

The inverse transform can be determined in various ways.

• A table of transforms can be built up and used to carry out the inverse transformation, exactly as a table of logarithms can be used to look up antilogarithms. The preceding transforms constitute the beginnings of such a table. For a more complete set of Laplace transforms see the upcoming Table 15.2 or AMS-55, Chapter 29 (see footnote 4 in Chapter 5 for the reference). Employing partial fraction expansions and the various operational theorems considered in succeeding sections facilitates use of the tables. There is some justification for suspecting that these tables are probably of more value in solving textbook exercises than in solving real-world problems.
• A general technique for L⁻¹ will be developed in Section 15.12 by using the calculus of residues.
• For the difficulties and the possibilities of a numerical approach (numerical inversion), we refer to the Additional Readings.

FIGURE 15.7 A possible null function.

Partial Fraction Expansion

Utilization of a table of transforms (or inverse transforms) is facilitated by expanding f(s) in partial fractions.
Frequently f(s), our transform, occurs in the form g(s)/h(s), where g(s) and h(s) are polynomials with no common factors, g(s) being of lower degree than h(s). If the factors of h(s) are all linear and distinct, then by the method of partial fractions we may write

$$f(s) = \frac{c_1}{s-a_1} + \frac{c_2}{s-a_2} + \cdots + \frac{c_n}{s-a_n}, \tag{15.110}$$

where the cᵢ are independent of s. The aᵢ are the roots of h(s). If any one of the roots, say a₁, is multiple (occurring m times), then f(s) has the form

$$f(s) = \frac{c_{1,m}}{(s-a_1)^m} + \frac{c_{1,m-1}}{(s-a_1)^{m-1}} + \cdots + \frac{c_{1,1}}{s-a_1} + \sum_{i=2}^{n}\frac{c_i}{s-a_i}. \tag{15.111}$$

Finally, if one of the factors is quadratic, (s² + ps + q), then the numerator, instead of being a simple constant, will have the form

$$\frac{as+b}{s^2+ps+q}.$$

There are various ways of determining the constants introduced. For instance, in Eq. (15.110) we may multiply through by (s − aᵢ) and obtain

$$c_i = \lim_{s\to a_i}(s-a_i)f(s). \tag{15.112}$$

In elementary cases a direct solution is often the easiest.

Example 15.8.1 PARTIAL FRACTION EXPANSION

Let

$$f(s) = \frac{k^2}{s(s^2+k^2)} = \frac{c}{s} + \frac{as+b}{s^2+k^2}. \tag{15.113}$$

Putting the right-hand side over a common denominator and equating like powers of s in the numerator, we obtain

$$\frac{k^2}{s(s^2+k^2)} = \frac{c(s^2+k^2)+s(as+b)}{s(s^2+k^2)}; \tag{15.114}$$

that is, c + a = 0 (s²), b = 0 (s¹), and ck² = k² (s⁰). Solving these, we have

$$c = 1, \qquad b = 0, \qquad a = -1,$$

giving

$$f(s) = \frac{1}{s} - \frac{s}{s^2+k^2}, \tag{15.115}$$

and, by Eqs. (15.102) and (15.107),

$$\mathcal{L}^{-1}\{f(s)\} = 1 - \cos kt. \tag{15.116}\quad\blacksquare$$

Example 15.8.2 A STEP FUNCTION

As one application of Laplace transforms, consider the evaluation of

$$F(t) = \int_0^{\infty}\frac{\sin tx}{x}\,dx. \tag{15.117}$$

Suppose we take the Laplace transform of this definite (and improper) integral:

$$\mathcal{L}\left\{\int_0^{\infty}\frac{\sin tx}{x}\,dx\right\} = \int_0^{\infty} e^{-st}\int_0^{\infty}\frac{\sin tx}{x}\,dx\,dt. \tag{15.118}$$

Now, interchanging the order of integration (which is justified),¹² we get

$$\int_0^{\infty}\frac{1}{x}\left[\int_0^{\infty} e^{-st}\sin tx\,dt\right]dx = \int_0^{\infty}\frac{dx}{s^2+x^2}, \tag{15.119}$$

since the factor in square brackets is just the Laplace transform of sin tx. From the integral tables,

$$\int_0^{\infty}\frac{dx}{s^2+x^2} = \frac{1}{s}\tan^{-1}\frac{x}{s}\Big|_0^{\infty} = \frac{\pi}{2s} = f(s). \tag{15.120}$$

By Eq. (15.102) we carry out the inverse transformation to obtain

$$F(t) = \frac{\pi}{2}, \qquad t > 0, \tag{15.121}$$

in agreement with an evaluation by the calculus of residues (Section 7.1). It has been assumed that t > 0 in F(t). For F(−t) we need note only that sin(−tx) = −sin tx, giving F(−t) = −F(t). Finally, if t = 0, F(0) is clearly zero. Therefore

$$\int_0^{\infty}\frac{\sin tx}{x}\,dx = \frac{\pi}{2}\bigl[2u(t)-1\bigr] = \begin{cases}\ \ \dfrac{\pi}{2}, & t > 0,\\[2pt]\ \ 0, & t = 0,\\[2pt] -\dfrac{\pi}{2}, & t < 0.\end{cases} \tag{15.122}$$

Note that ∫₀^∞ (sin tx/x) dx, taken as a function of t, describes a step function (Fig. 15.8), a step of height π at t = 0. This is consistent with Eq. (1.174). ■

FIGURE 15.8 F(t) = ∫₀^∞ (sin tx/x) dx, a step function.

The technique in the preceding example was to (1) introduce a second integration, the Laplace transform, (2) reverse the order of integration and integrate, and (3) take the inverse Laplace transform. There are many opportunities where this technique of reversing the order of integration can be applied and prove useful. Exercise 15.8.6 is a variation of this.

¹² See, in the Additional Readings, Jeffreys and Jeffreys (1966), Chapter 1 (uniform convergence of integrals).

Exercises

15.8.1 Prove that

$$\lim_{s\to\infty} s f(s) = \lim_{t\to +0} F(t).$$

Hint. Assume that F(t) can be expressed as F(t) = Σ_{n=0}^∞ aₙtⁿ.

15.8.2 Show that

$$\frac{1}{\pi}\lim_{s\to 0}\mathcal{L}\{\cos xt\} = \delta(x).$$

15.8.3 Verify that

$$\mathcal{L}\left\{\frac{\cos at - \cos bt}{b^2-a^2}\right\} = \frac{s}{(s^2+a^2)(s^2+b^2)}, \qquad a^2 \ne b^2.$$

15.8.4 Using partial fraction expansions, show that, for a ≠ b,

$$\text{(a)}\quad \mathcal{L}^{-1}\left\{\frac{1}{(s+a)(s+b)}\right\} = \frac{e^{-at}-e^{-bt}}{b-a},$$
$$\text{(b)}\quad \mathcal{L}^{-1}\left\{\frac{s}{(s+a)(s+b)}\right\} = \frac{a e^{-at}-b e^{-bt}}{a-b}.$$

15.8.5 Using partial fraction expansions, show that, for a² ≠ b²,

$$\text{(a)}\quad \mathcal{L}^{-1}\left\{\frac{1}{(s^2+a^2)(s^2+b^2)}\right\} = -\frac{1}{a^2-b^2}\left[\frac{\sin at}{a}-\frac{\sin bt}{b}\right],$$
$$\text{(b)}\quad \mathcal{L}^{-1}\left\{\frac{s^2}{(s^2+a^2)(s^2+b^2)}\right\} = \frac{1}{a^2-b^2}\{a\sin at - b\sin bt\}.$$

15.8.6 The electrostatic potential of a charged conducting disk is known to have the general form (circular cylindrical coordinates)

$$\Psi(\rho,z) = \int_0^{\infty} e^{-k|z|}J_0(k\rho)f(k)\,dk,$$

with f(k) unknown. At large distances (z → ∞) the potential must approach the Coulomb potential Q/4πε₀z. Show that

$$\lim_{k\to 0} f(k) = \frac{q}{4\pi\varepsilon_0}.$$

Hint. You may set ρ = 0 and assume a Maclaurin expansion of f(k) or, using e^{−kz}, construct a delta sequence.

15.8.7 Show that

$$\text{(a)}\quad \int_0^{\infty}\frac{\cos s}{s^{\nu}}\,ds = \frac{\pi}{2(\nu-1)!\cos(\nu\pi/2)}, \qquad 0 < \nu < 1,$$
$$\text{(b)}\quad \int_0^{\infty}\frac{\sin s}{s^{\nu}}\,ds = \frac{\pi}{2(\nu-1)!\sin(\nu\pi/2)}, \qquad 0 < \nu < 2.$$

Why is ν restricted to (0, 1) for (a), to (0, 2) for (b)? These integrals may be interpreted as Fourier transforms of s^{−ν} and as Mellin transforms of sin s and cos s.
Hint. Replace s^{−ν} by a Laplace transform integral: L{t^{ν−1}}/(ν−1)!. Then integrate with respect to s. The resulting integral can be treated as a beta function (Section 8.4).

15.8.8 A function F(t) can be expanded in a power series (Maclaurin); that is,

$$F(t) = \sum_{n=0}^{\infty} a_n t^n.$$

Then

$$\mathcal{L}\{F(t)\} = \int_0^{\infty} e^{-st}\sum_{n=0}^{\infty} a_n t^n\,dt = \sum_{n=0}^{\infty} a_n\int_0^{\infty} e^{-st}t^n\,dt.$$

Show that f(s), the Laplace transform of F(t), contains no powers of s greater than s⁻¹. Check your result by calculating L{δ(t)}, and comment on this fiasco.

15.8.9 Show that the Laplace transform of M(a, c, x) is

$$\mathcal{L}\{M(a,c,x)\} = \frac{1}{s}\,{}_2F_1\!\left(a,1;c;\frac{1}{s}\right).$$

15.9 LAPLACE TRANSFORM OF DERIVATIVES

Perhaps the main application of Laplace transforms is in converting differential equations into simpler forms that may be solved more easily. It will be seen, for instance, that coupled differential equations with constant coefficients transform to simultaneous linear algebraic equations.
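Before applying the partial-fraction machinery to differential equations, Eq. (15.112) is worth a numerical sanity check. The sketch below (plain Python; the test function 1/[(s+1)(s+2)] and the step size eps are illustrative choices, not from the text) extracts each coefficient by evaluating (s − aᵢ)f(s) just off the simple pole, then confirms the expansion:

```python
def coeff(f, a, eps=1e-7):
    # Eq. (15.112): c_i = lim_{s -> a_i} (s - a_i) f(s),
    # approximated by stepping a small eps off the (simple) pole a_i.
    return eps * f(a + eps)

f = lambda s: 1.0 / ((s + 1.0) * (s + 2.0))
c1 = coeff(f, -1.0)   # expect +1
c2 = coeff(f, -2.0)   # expect -1

# The expansion f(s) = c1/(s+1) + c2/(s+2) should reproduce f(s):
s = 0.5
expansion = c1 / (s + 1.0) + c2 / (s + 2.0)
print(c1, c2, expansion, f(s))
```

With c₁ = 1 and c₂ = −1, inverting term by term via Eq. (15.103) gives F(t) = e^{−t} − e^{−2t}, which is Exercise 15.8.4(a) with a = 1, b = 2.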
Let us transform the first derivative of F(t):

$$\mathcal{L}\{F'(t)\} = \int_0^{\infty} e^{-st}\frac{dF(t)}{dt}\,dt.$$

Integrating by parts, we obtain

$$\mathcal{L}\{F'(t)\} = e^{-st}F(t)\Big|_0^{\infty} + s\int_0^{\infty} e^{-st}F(t)\,dt = s\,\mathcal{L}\{F(t)\} - F(0). \tag{15.123}$$

Strictly speaking, F(0) = F(+0),¹³ and dF/dt is required to be at least piecewise continuous for 0 ≤ t < ∞. Naturally, both F(t) and its derivative must be such that the integrals do not diverge. Incidentally, Eq. (15.123) provides another proof of Exercise 15.8.8. An extension gives

$$\mathcal{L}\{F^{(2)}(t)\} = s^2\mathcal{L}\{F(t)\} - sF(+0) - F'(+0), \tag{15.124}$$
$$\mathcal{L}\{F^{(n)}(t)\} = s^n\mathcal{L}\{F(t)\} - s^{n-1}F(+0) - \cdots - F^{(n-1)}(+0). \tag{15.125}$$

The Laplace transform, like the Fourier transform, replaces differentiation with multiplication. In the following examples, ODEs become algebraic equations. Here is the power and the utility of the Laplace transform. But see Example 15.10.3 for what may happen if the coefficients are not constant. Note how the initial conditions, F(+0), F′(+0), and so on, are incorporated into the transform.

Equation (15.124) may be used to derive L{sin kt}. We use the identity

$$-k^2\sin kt = \frac{d^2}{dt^2}\sin kt. \tag{15.126}$$

Then, applying the Laplace transform operation, we have

$$-k^2\mathcal{L}\{\sin kt\} = \mathcal{L}\left\{\frac{d^2}{dt^2}\sin kt\right\} = s^2\mathcal{L}\{\sin kt\} - s\sin(0) - \frac{d}{dt}\sin kt\Big|_{t=0}. \tag{15.127}$$

Since sin(0) = 0 and (d/dt) sin kt|_{t=0} = k,

$$\mathcal{L}\{\sin kt\} = \frac{k}{s^2+k^2}, \tag{15.128}$$

verifying Eq. (15.107).

¹³ Zero is approached from the positive side.

Example 15.9.1 SIMPLE HARMONIC OSCILLATOR

As a physical example, consider a mass m oscillating under the influence of an ideal spring, spring constant k. As usual, friction is neglected. Then Newton's second law becomes

$$m\frac{d^2X(t)}{dt^2} + kX(t) = 0; \tag{15.129}$$

also, we take as initial conditions X(0) = X₀, X′(0) = 0. Applying the Laplace transform, we obtain

$$m\mathcal{L}\left\{\frac{d^2X}{dt^2}\right\} + k\mathcal{L}\{X(t)\} = 0, \tag{15.130}$$

and by use of Eq. (15.124) this becomes

$$ms^2x(s) - msX_0 + kx(s) = 0, \tag{15.131}$$

$$x(s) = X_0\frac{s}{s^2+\omega_0^2}, \qquad \text{with } \omega_0^2 \equiv \frac{k}{m}. \tag{15.132}$$

From Eq. (15.107) this is seen to be the transform of cos ω₀t, which gives

$$X(t) = X_0\cos\omega_0 t, \tag{15.133}$$

as expected. ■

Example 15.9.2 EARTH'S NUTATION

A somewhat more involved example is the nutation of the Earth's poles (force-free precession). If we treat the Earth as a rigid (oblate) spheroid, the Euler equations of motion reduce to

$$\frac{dX}{dt} = -aY, \qquad \frac{dY}{dt} = +aX, \tag{15.134}$$

where a ≡ [(I_z − I_x)/I_z]ω_z, X = ω_x, Y = ω_y, with angular velocity vector ω = (ω_x, ω_y, ω_z) (Fig. 15.9), I_z the moment of inertia about the z-axis and I_y = I_x the moment of inertia about the x- (or y-)axis. The z-axis coincides with the axis of symmetry of the Earth. It differs from the axis for the Earth's daily rotation, ω, by some 15 meters, measured at the poles. Transformation of these coupled differential equations yields

$$sx(s) - X(0) = -ay(s), \qquad sy(s) - Y(0) = ax(s). \tag{15.135}$$

Combining to eliminate y(s), we have

$$s^2x(s) - sX(0) + aY(0) = -a^2x(s),$$

or

$$x(s) = X(0)\frac{s}{s^2+a^2} - Y(0)\frac{a}{s^2+a^2}. \tag{15.136}$$

FIGURE 15.9

Hence

$$X(t) = X(0)\cos at - Y(0)\sin at. \tag{15.137}$$

Similarly,

$$Y(t) = X(0)\sin at + Y(0)\cos at. \tag{15.138}$$

This is seen to be a rotation of the vector (X, Y) counterclockwise (for a > 0) about the z-axis with angle θ = at and angular velocity a. A direct interpretation may be found by choosing the time axis so that Y(0) = 0. Then

$$X(t) = X(0)\cos at, \qquad Y(t) = X(0)\sin at, \tag{15.139}$$

which are the parametric equations for rotation of (X, Y) in a circular orbit of radius X(0), with angular velocity a in the counterclockwise sense. In the case of the Earth's angular velocity, vector X(0) is about 15 meters, whereas a, as defined here, corresponds to a period (2π/a) of some 300 days. Actually, because of departures from the idealized rigid body assumed in setting up Euler's equations, the period is about 427 days.¹⁴

If in Eq. (15.134) we set X(t) = L_x, Y(t) = L_y, where L_x and L_y are the x- and y-components of the angular momentum L, a = −g_L B_z, g_L is the gyromagnetic ratio, and B_z is the magnetic field (along the z-axis), then Eq. (15.134) describes the Larmor precession of charged bodies in a uniform magnetic field B_z. ■

¹⁴ D. Menzel, ed., Fundamental Formulas of Physics, Englewood Cliffs, NJ: Prentice-Hall (1955), reprinted, 2nd ed., Dover (1960), p. 695.

Dirac Delta Function

For use with differential equations one further transform is helpful, that of the Dirac delta function:¹⁵

$$\mathcal{L}\{\delta(t-t_0)\} = \int_0^{\infty} e^{-st}\delta(t-t_0)\,dt = e^{-st_0}, \qquad t_0 \ge 0, \tag{15.140}$$

and for t₀ = 0,

$$\mathcal{L}\{\delta(t)\} = 1, \tag{15.141}$$

where it is assumed that we are using a representation of the delta function such that

$$\int_0^{\infty}\delta(t)\,dt = 1, \qquad \delta(t) = 0, \quad t > 0. \tag{15.142}$$

As an alternate method, δ(t) may be considered the limit as ε → 0 of F(t), where

$$F(t) = \begin{cases}0, & t < 0,\\ \varepsilon^{-1}, & 0 < t < \varepsilon,\\ 0, & t > \varepsilon.\end{cases} \tag{15.143}$$

By direct calculation,

$$\mathcal{L}\{F(t)\} = \frac{1-e^{-\varepsilon s}}{\varepsilon s}. \tag{15.144}$$

Taking the limit of the integral (instead of the integral of the limit), we have

$$\lim_{\varepsilon\to 0}\mathcal{L}\{F(t)\} = 1,$$

or Eq. (15.141), L{δ(t)} = 1. This delta function is frequently called the impulse function because it is so useful in describing impulsive forces, that is, forces lasting only a short time.

¹⁵ Strictly speaking, the Dirac delta function is undefined. However, the integral over it is well defined. This approach is developed in Section 1.16 using delta sequences.

Example 15.9.3 IMPULSIVE FORCE

Newton's second law for an impulsive force acting on a particle of mass m becomes

$$m\frac{d^2X}{dt^2} = P\delta(t), \tag{15.145}$$

where P is a constant. Transforming, we obtain

$$ms^2x(s) - msX(0) - mX'(0) = P. \tag{15.146}$$

For a particle starting from rest, X′(0) = 0.¹⁶ We shall also take X(0) = 0. Then

$$x(s) = \frac{P}{ms^2}, \tag{15.147}$$

and

$$X(t) = \frac{P}{m}t, \tag{15.148}$$

$$\frac{dX(t)}{dt} = \frac{P}{m}, \qquad \text{a constant.} \tag{15.149}$$

The effect of the impulse Pδ(t) is to transfer (instantaneously) P units of linear momentum to the particle.

A similar analysis applies to the ballistic galvanometer. The torque on the galvanometer is given initially by kι, in which ι is a pulse of current and k is a proportionality constant. Since ι is of short duration, we set

$$k\iota = kq\,\delta(t), \tag{15.150}$$

where q is the total charge carried by the current ι. Then, with I the moment of inertia,

$$I\frac{d^2\theta}{dt^2} = kq\,\delta(t), \tag{15.151}$$

and, transforming as before, we find that the effect of the current pulse is a transfer of kq units of angular momentum to the galvanometer. ■

¹⁶ This should be X′(+0). To include the effect of the impulse, consider that the impulse will occur at t = ε and let ε → 0.

Exercises

15.9.1 Use the expression for the transform of a second derivative to obtain the transform of cos kt.

15.9.2 A mass m is attached to one end of an unstretched spring, spring constant k (Fig. 15.10). At time t = 0 the free end of the spring experiences a constant acceleration a, away from the mass. Using Laplace transforms,
(a) find the position x of m as a function of time;
(b) determine the limiting form of x(t) for small t.

FIGURE 15.10 Spring.

$$\text{ANS. (a)}\quad x = \frac12 at^2 - \frac{a}{\omega^2}(1-\cos\omega t), \qquad \omega^2 = \frac{k}{m},$$
$$\text{(b)}\quad x = \frac{a\omega^2}{4!}t^4, \qquad \omega t \ll 1.$$

15.9.3 Radioactive nuclei decay according to the law

$$\frac{dN}{dt} = -\lambda N,$$

N being the concentration of a given nuclide and λ being the particular decay constant. This equation may be interpreted as stating that the rate of decay is proportional to the number of these radioactive nuclei present. They all decay independently. In a radioactive series of n different nuclides, starting with N₁,

$$\frac{dN_1}{dt} = -\lambda_1 N_1,$$
$$\frac{dN_2}{dt} = \lambda_1 N_1 - \lambda_2 N_2, \quad \text{and so on,}$$
$$\frac{dN_n}{dt} = \lambda_{n-1}N_{n-1}, \quad \text{stable.}$$

Find N₁(t), N₂(t), N₃(t), n = 3, with N₁(0) = N₀, N₂(0) = N₃(0) = 0.

$$\text{ANS.}\quad N_1(t) = N_0 e^{-\lambda_1 t}, \qquad N_2(t) = N_0\frac{\lambda_1}{\lambda_2-\lambda_1}\bigl(e^{-\lambda_1 t}-e^{-\lambda_2 t}\bigr),$$
$$N_3(t) = N_0\left[1 - \frac{\lambda_2}{\lambda_2-\lambda_1}e^{-\lambda_1 t} + \frac{\lambda_1}{\lambda_2-\lambda_1}e^{-\lambda_2 t}\right].$$

Find an approximate expression for N₂ and N₃, valid for small t when λ₁ ≈ λ₂.

$$\text{ANS.}\quad N_2 \approx N_0\lambda_1 t, \qquad N_3 \approx N_0\frac{\lambda_1\lambda_2 t^2}{2}.$$

Find approximate expressions for N₂ and N₃, valid for large t, when (a) λ₁ ≫ λ₂, (b) λ₁ ≪ λ₂.

$$\text{ANS. (a)}\quad N_2 \approx N_0 e^{-\lambda_2 t}, \qquad N_3 \approx N_0\bigl(1-e^{-\lambda_2 t}\bigr), \qquad \lambda_1 t \gg 1.$$
$$\text{(b)}\quad N_2 \approx N_0\frac{\lambda_1}{\lambda_2}e^{-\lambda_1 t}, \qquad N_3 \approx N_0\bigl(1-e^{-\lambda_1 t}\bigr), \qquad \lambda_2 t \gg 1.$$

15.9.4 The formation of an isotope in a nuclear reactor is given by

$$\frac{dN_2}{dt} = nv\sigma_1 N_{10} - \lambda_2 N_2(t) - nv\sigma_2 N_2(t).$$

Here the product nv is the neutron flux, neutrons per cubic centimeter times centimeters per second mean velocity; σ₁ and σ₂ (cm²) are measures of the probability of neutron absorption by the original isotope, concentration N₁₀, which is assumed constant, and by the newly formed isotope, concentration N₂, respectively. The radioactive decay constant for the isotope is λ₂.
(a) Find the concentration N₂ of the new isotope as a function of time.
(b) If the original element is Eu¹⁵³, σ₁ = 400 barns = 400 × 10⁻²⁴ cm², σ₂ = 1000 barns = 1000 × 10⁻²⁴ cm², and λ₂ = 1.4 × 10⁻⁹ s⁻¹. If N₁₀ = 10²⁰ and (nv) = 10⁹ cm⁻² s⁻¹, find N₂, the concentration of Eu¹⁵⁴, after one year of continuous irradiation. Is the assumption that N₁ is constant justified?

15.9.5 In a nuclear reactor Xe¹³⁵ is formed as both a direct fission product and a decay product of I¹³⁵, half-life 6.7 hours. The half-life of Xe¹³⁵ is 9.2 hours. Because Xe¹³⁵ strongly absorbs thermal neutrons, thereby "poisoning" the nuclear reactor, its concentration is a matter of great interest. The relevant equations are

$$\frac{dN_I}{dt} = \gamma_I\varphi\sigma_f N_U - \lambda_I N_I,$$
$$\frac{dN_X}{dt} = \lambda_I N_I + \gamma_X\varphi\sigma_f N_U - \lambda_X N_X - \varphi\sigma_X N_X.$$

Here N_I is the concentration of I¹³⁵, and N_X, N_U are the corresponding concentrations of Xe¹³⁵ and U²³⁵.
Assume N_U = constant, and

γ_I = yield of I¹³⁵ per fission = 0.060,
γ_X = yield of Xe¹³⁵ direct from fission = 0.003,
λ_I = I¹³⁵ decay constant = ln 2 / t₁/₂ (λ_X, the Xe¹³⁵ decay constant, is defined the same way),
σ_f = thermal neutron fission cross section for U²³⁵,
σ_X = thermal neutron absorption cross section for Xe¹³⁵ = 3.5 × 10⁶ barns = 3.5 × 10⁻¹⁸ cm²
(σ_I, the absorption cross section of I¹³⁵, is negligible),
φ = neutron flux = neutrons/cm³ × mean velocity (cm/s).

(a) Find N_X(t) in terms of the neutron flux φ and the product σ_f N_U.
(b) Find N_X(t → ∞).
(c) After N_X has reached equilibrium, the reactor is shut down, φ = 0. Find N_X(t) following shutdown. Notice the increase in N_X, which may for a few hours interfere with starting the reactor up again.

15.10 OTHER PROPERTIES

Substitution

If we replace the parameter s by s − a in the definition of the Laplace transform (Eq. (15.99)), we have

$$f(s-a) = \int_0^{\infty} e^{-(s-a)t}F(t)\,dt = \int_0^{\infty} e^{-st}e^{at}F(t)\,dt = \mathcal{L}\{e^{at}F(t)\}. \tag{15.152}$$

Hence the replacement of s with s − a corresponds to multiplying F(t) by e^{at}, and conversely. This result can be used to good advantage in extending our table of transforms. From Eq. (15.107) we find immediately that

$$\mathcal{L}\{e^{at}\sin kt\} = \frac{k}{(s-a)^2+k^2}, \qquad \mathcal{L}\{e^{at}\cos kt\} = \frac{s-a}{(s-a)^2+k^2}, \qquad s > a. \tag{15.153}$$

Example 15.10.1 DAMPED OSCILLATOR

These expressions are useful when we consider an oscillating mass with damping proportional to the velocity. Equation (15.129), with such damping added, becomes

$$mX''(t) + bX'(t) + kX(t) = 0, \tag{15.154}$$

in which b is a proportionality constant. Let us assume that the particle starts from rest at X(0) = X₀, X′(0) = 0. The transformed equation is

$$m\bigl[s^2x(s) - sX_0\bigr] + b\bigl[sx(s) - X_0\bigr] + kx(s) = 0, \tag{15.155}$$

and

$$x(s) = X_0\frac{ms+b}{ms^2+bs+k}. \tag{15.156}$$

This may be handled by completing the square of the denominator:

$$s^2 + \frac{b}{m}s + \frac{k}{m} = \left(s+\frac{b}{2m}\right)^2 + \left(\frac{k}{m}-\frac{b^2}{4m^2}\right). \tag{15.157}$$

If the damping is small, b² < 4km, the last term is positive and will be denoted by ω₁²:

$$x(s) = X_0\frac{s+b/m}{(s+b/2m)^2+\omega_1^2} = X_0\frac{s+b/2m}{(s+b/2m)^2+\omega_1^2} + X_0\frac{(b/2m\omega_1)\,\omega_1}{(s+b/2m)^2+\omega_1^2}. \tag{15.158}$$

By Eq. (15.153),

$$X(t) = X_0 e^{-(b/2m)t}\left[\cos\omega_1 t + \frac{b}{2m\omega_1}\sin\omega_1 t\right] = X_0\frac{\omega_0}{\omega_1}e^{-(b/2m)t}\cos(\omega_1 t - \varphi), \tag{15.159}$$

where

$$\tan\varphi = \frac{b}{2m\omega_1}, \qquad \omega_0^2 = \frac{k}{m}.$$

Of course, as b → 0, this solution goes over to the undamped solution (Section 15.9). ■

RLC Analog

It is worth noting the similarity between this damped simple harmonic oscillation of a mass on a spring and an RLC circuit (resistance, inductance, and capacitance) (Fig. 15.11). At any instant the sum of the potential differences around the loop must be zero (Kirchhoff's law, conservation of energy). This gives

$$L\frac{dI}{dt} + RI + \frac{1}{C}\int^t I\,dt = 0. \tag{15.160}$$

Differentiating the current I with respect to time (to eliminate the integral), we have

$$L\frac{d^2I}{dt^2} + R\frac{dI}{dt} + \frac{1}{C}I = 0. \tag{15.161}$$

If we replace I(t) with X(t), L with m, R with b, and C⁻¹ with k, then Eq. (15.161) is identical with the mechanical problem. It is but one example of the unification of diverse branches of physics by mathematics. A more complete discussion will be found in Olson's book.¹⁷

FIGURE 15.11 RLC circuit.

¹⁷ H. F. Olson, Dynamical Analogies, New York: Van Nostrand (1943).

Translation

This time let f(s) be multiplied by e^{−bs}, b > 0:

$$e^{-bs}f(s) = e^{-bs}\int_0^{\infty} e^{-st}F(t)\,dt = \int_0^{\infty} e^{-s(t+b)}F(t)\,dt. \tag{15.162}$$

Now let t + b = τ. Equation (15.162) becomes

$$e^{-bs}f(s) = \int_b^{\infty} e^{-s\tau}F(\tau-b)\,d\tau = \int_0^{\infty} e^{-s\tau}F(\tau-b)u(\tau-b)\,d\tau, \tag{15.163}$$

where u(τ − b) is the unit step function. This relation is often called the Heaviside shifting theorem (Fig. 15.12).

FIGURE 15.12 Translation.

Since F(t) is assumed to be equal to zero for t < 0, F(τ − b) = 0 for 0 ≤ τ < b. Therefore we can extend the lower limit to zero without changing the value of the integral.
Then, noting that τ is only a variable of integration, we obtain

$$e^{-bs}f(s) = \mathcal{L}\{F(t-b)\}. \tag{15.164}$$

Example 15.10.2 ELECTROMAGNETIC WAVES

The electromagnetic wave equation with E = E_y or E_z, a transverse wave propagating along the x-axis, is

$$\frac{\partial^2 E(x,t)}{\partial x^2} - \frac{1}{v^2}\frac{\partial^2 E(x,t)}{\partial t^2} = 0. \tag{15.165}$$

Transforming this equation with respect to t, we get

$$\frac{\partial^2}{\partial x^2}\mathcal{L}\{E(x,t)\} - \frac{s^2}{v^2}\mathcal{L}\{E(x,t)\} + \frac{s}{v^2}E(x,0) + \frac{1}{v^2}\frac{\partial E(x,t)}{\partial t}\Big|_{t=0} = 0. \tag{15.166}$$

If we have the initial conditions E(x, 0) = 0 and

$$\frac{\partial E(x,t)}{\partial t}\Big|_{t=0} = 0,$$

then

$$\frac{\partial^2}{\partial x^2}\mathcal{L}\{E(x,t)\} = \frac{s^2}{v^2}\mathcal{L}\{E(x,t)\}. \tag{15.167}$$

The solution (of this ODE) is

$$\mathcal{L}\{E(x,t)\} = c_1 e^{-(s/v)x} + c_2 e^{+(s/v)x}. \tag{15.168}$$

The "constants" c₁ and c₂ are obtained by additional boundary conditions. They are constant with respect to x but may depend on s. If our wave remains finite as x → ∞, L{E(x, t)} will also remain finite. Hence c₂ = 0. If E(0, t) is denoted by F(t), then c₁ = f(s) and

$$\mathcal{L}\{E(x,t)\} = e^{-(s/v)x}f(s). \tag{15.169}$$

From the translation property (Eq. (15.164)) we find immediately that

$$E(x,t) = \begin{cases}F\!\left(t-\dfrac{x}{v}\right), & t \ge \dfrac{x}{v},\\[4pt] 0, & t < \dfrac{x}{v}.\end{cases} \tag{15.170}$$

Differentiation and substitution into Eq. (15.165) verifies Eq. (15.170). Our solution represents a wave (or pulse) moving in the positive x-direction with velocity v. Note that for x > vt the region remains undisturbed; the pulse has not had time to get there. If we had wanted a signal propagated along the negative x-axis, c₁ would have been set equal to 0 and we would have obtained

$$E(x,t) = \begin{cases}F\!\left(t+\dfrac{x}{v}\right), & t \ge -\dfrac{x}{v},\\[4pt] 0, & t < -\dfrac{x}{v},\end{cases} \tag{15.171}$$

a wave along the negative x-axis. ■

Derivative of a Transform

When F(t), which is at least piecewise continuous, and s are chosen so that e^{−st}F(t) converges exponentially for large s, the integral

$$\int_0^{\infty} e^{-st}F(t)\,dt$$

is uniformly convergent and may be differentiated (under the integral sign) with respect to s. Then

$$f'(s) = \int_0^{\infty}(-t)e^{-st}F(t)\,dt = \mathcal{L}\{-tF(t)\}. \tag{15.172}$$

Continuing this process, we obtain

$$f^{(n)}(s) = \mathcal{L}\{(-t)^nF(t)\}. \tag{15.173}$$

All the integrals so obtained will be uniformly convergent because of the decreasing exponential behavior of e^{−st}F(t). This same technique may be applied to generate more transforms. For example,

$$\mathcal{L}\{e^{kt}\} = \int_0^{\infty} e^{-st}e^{kt}\,dt = \frac{1}{s-k}, \qquad s > k. \tag{15.174}$$

Differentiating with respect to s (or with respect to k), we obtain

$$\mathcal{L}\{te^{kt}\} = \frac{1}{(s-k)^2}, \qquad s > k. \tag{15.175}$$

Example 15.10.3 BESSEL'S EQUATION

An interesting application of a differentiated Laplace transform appears in the solution of Bessel's equation with n = 0. From Chapter 11 we have

$$x^2y''(x) + xy'(x) + x^2y(x) = 0. \tag{15.176}$$

Dividing by x and substituting t = x and F(t) = y(x) to agree with the present notation, we see that the Bessel equation becomes

$$tF''(t) + F'(t) + tF(t) = 0. \tag{15.177}$$

We need a regular solution, in particular F(0) = 1. From Eq. (15.177) with t = 0, F′(+0) = 0. Also, we assume that our unknown F(t) has a transform. Transforming and using Eqs. (15.123), (15.124), and (15.172), we have

$$-\frac{d}{ds}\bigl[s^2f(s)-s\bigr] + sf(s) - 1 - \frac{df}{ds} = 0. \tag{15.178}$$

Rearranging Eq. (15.178), we obtain

$$\bigl(s^2+1\bigr)f'(s) + sf(s) = 0, \tag{15.179}$$

a first-order ODE. By integration,

$$\frac{df}{f} = -\frac{s\,ds}{s^2+1}, \tag{15.180}$$

which may be rewritten as

$$\ln f(s) = -\tfrac12\ln\bigl(s^2+1\bigr) + \ln C, \tag{15.181}$$

$$f(s) = \frac{C}{\sqrt{s^2+1}}. \tag{15.182}$$

To make use of Eq. (15.108), we expand f(s) in a series of negative powers of s, convergent for s > 1:

$$f(s) = \frac{C}{s}\left(1+\frac{1}{s^2}\right)^{-1/2} = \frac{C}{s}\left[1 - \frac{1}{2s^2} + \frac{1\cdot 3}{2^2\cdot 2!\,s^4} - \cdots + \frac{(-1)^n(2n)!}{(2^n n!)^2 s^{2n}} + \cdots\right]. \tag{15.183}$$

Inverting, term by term, we obtain

$$F(t) = C\sum_{n=0}^{\infty}\frac{(-1)^n t^{2n}}{(2^n n!)^2}. \tag{15.184}$$

When C is set equal to 1, as required by the initial condition F(0) = 1, F(t) is just J₀(t), our familiar Bessel function of order zero. Hence

$$\mathcal{L}\{J_0(t)\} = \frac{1}{\sqrt{s^2+1}}. \tag{15.185}$$

Note that we assumed s > 1.
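Equation (15.185) can be verified numerically: build J₀(t) from its power series, Eq. (15.184) with C = 1, and integrate. The sketch below (plain Python; the choices s = 2, the truncation point T = 20, and the quadrature resolution are illustrative, and the raw series is trustworthy in double precision only for the moderate arguments used here):

```python
import math

def j0(t):
    # Bessel series, Eq. (15.184) with C = 1: sum (-1)^n t^{2n} / (2^n n!)^2
    term, acc = 1.0, 1.0
    for n in range(1, 200):
        term *= -(t * t) / (4.0 * n * n)   # ratio of consecutive terms
        acc += term
        if abs(term) < 1e-18:
            break
    return acc

def laplace_j0(s, T=20.0, n=20000):
    # Simpson rule for integral_0^T e^{-st} J0(t) dt; tail is O(e^{-sT})
    h = T / n
    acc = 0.0
    for i in range(n + 1):
        t = i * h
        w = 1 if i in (0, n) else (4 if i % 2 else 2)
        acc += w * math.exp(-s * t) * j0(t)
    return acc * h / 3.0

s = 2.0
print(laplace_j0(s), 1.0 / math.sqrt(s * s + 1.0))   # both near 1/sqrt(5)
```

The two printed values agree closely, and they continue to do so for any s safely above the oscillation scale, consistent with the claim that Eq. (15.185) actually holds for all s > 0.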
The proof for s > 0 is left as a problem. It is worth noting that this application was successful and relatively easy because we took n = 0 in Bessel's equation. This made it possible to divide out a factor of x (or t). If this had not been done, the terms of the form t²F(t) would have introduced a second derivative of f(s). The resulting equation would have been no easier to solve than the original one. When we go beyond linear ODEs with constant coefficients, the Laplace transform may still be applied, but there is no guarantee that it will be helpful. The application to Bessel's equation, n = 0, will be found in the references. Alternatively, we can show that

$$\mathcal{L}\{J_n(at)\} = \frac{a^{-n}\bigl(\sqrt{s^2+a^2}-s\bigr)^n}{\sqrt{s^2+a^2}} \tag{15.186}$$

by expressing Jₙ(t) as an infinite series and transforming term by term. ■

Integration of Transforms

Again, with F(t) at least piecewise continuous and x large enough so that e^{−xt}F(t) decreases exponentially (as x → ∞), the integral

$$f(x) = \int_0^{\infty} e^{-xt}F(t)\,dt \tag{15.187}$$

is uniformly convergent with respect to x. This justifies reversing the order of integration in the following equation:

$$\int_s^b f(x)\,dx = \int_0^{\infty}dt\int_s^b e^{-xt}F(t)\,dx = \int_0^{\infty}\frac{F(t)}{t}\bigl(e^{-st}-e^{-bt}\bigr)\,dt, \tag{15.188}$$

on integrating with respect to x. The lower limit s is chosen large enough so that f(s) is within the region of uniform convergence. Now letting b → ∞, we have

$$\int_s^{\infty} f(x)\,dx = \int_0^{\infty} e^{-st}\frac{F(t)}{t}\,dt = \mathcal{L}\left\{\frac{F(t)}{t}\right\}, \tag{15.189}$$

provided that F(t)/t is finite at t = 0 or diverges less strongly than t⁻¹ (so that L{F(t)/t} will exist).

Limits of Integration: Unit Step Function

The actual limits of integration for the Laplace transform may be specified with the (Heaviside) unit step function

$$u(t-k) = \begin{cases}0, & t < k,\\ 1, & t > k.\end{cases}$$

For instance,

$$\mathcal{L}\{u(t-k)\} = \int_k^{\infty} e^{-st}\,dt = \frac{1}{s}e^{-ks}.$$

A rectangular pulse of width k and unit height is described by F(t) = u(t) − u(t − k). Taking the Laplace transform, we obtain

$$\mathcal{L}\{u(t)-u(t-k)\} = \int_0^k e^{-st}\,dt = \frac{1}{s}\bigl(1-e^{-ks}\bigr).$$

The unit step function is also used in Eq. (15.163) and could be invoked in Exercise 15.10.13.

Exercises

15.10.1 Solve Eq. (15.154), which describes a damped simple harmonic oscillator, for X(0) = X₀, X′(0) = 0, and
(a) b² = 4km (critically damped), (b) b² > 4km (overdamped).

$$\text{ANS. (a)}\quad X(t) = X_0 e^{-(b/2m)t}\left(1+\frac{b}{2m}t\right).$$

15.10.2 Solve Eq. (15.154), which describes a damped simple harmonic oscillator, for X(0) = 0, X′(0) = v₀, and
(a) b² < 4km (underdamped), (b) b² = 4km (critically damped), (c) b² > 4km (overdamped).

$$\text{ANS. (a)}\quad X(t) = \frac{v_0}{\omega_1}e^{-(b/2m)t}\sin\omega_1 t, \qquad \text{(b)}\quad X(t) = v_0 t e^{-(b/2m)t}.$$

15.10.3 The motion of a body falling in a resisting medium may be described by

$$m\frac{d^2X(t)}{dt^2} = mg - b\frac{dX(t)}{dt}$$

when the retarding force is proportional to the velocity. Find X(t) and dX(t)/dt for the initial conditions

$$X(0) = \frac{dX}{dt}\Big|_{t=0} = 0.$$

15.10.4 Ringing circuit. In certain electronic circuits, resistance, inductance, and capacitance are placed in the plate circuit in parallel (Fig. 15.13). A constant voltage is maintained across the parallel elements, keeping the capacitor charged. At time t = 0 the circuit is disconnected from the voltage source. Find the voltages across the parallel elements R, L, and C as a function of time. Assume R to be large.
Hint. By Kirchhoff's laws

$$I_R + I_C + I_L = 0 \qquad \text{and} \qquad E_R = E_C = E_L,$$

where

$$E_R = I_R R, \qquad E_C = \frac{q_0}{C} + \frac{1}{C}\int_0^t I_C\,dt, \qquad E_L = L\frac{dI_L}{dt},$$

q₀ = initial charge of capacitor. With the DC impedance of L = 0, let I_L(0) = I₀, E_L(0) = 0. This means q₀ = 0.

FIGURE 15.13 Ringing circuit.

15.10.5 With J₀(t) expressed as a contour integral, apply the Laplace transform operation, reverse the order of integration, and thus show that

$$\mathcal{L}\{J_0(t)\} = \bigl(s^2+1\bigr)^{-1/2}, \qquad \text{for } s > 0.$$

15.10.6 Develop the Laplace transform of Jₙ(t) from L{J₀(t)} by using the Bessel function recurrence relations.
Hint. Here is a chance to use mathematical induction.
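The two step-function transforms just derived can be confirmed numerically. The sketch below (plain Python; the values of s, k, and the quadrature parameters are illustrative choices) integrates e^{−st} against u(t − k) and against the rectangular pulse u(t) − u(t − k):

```python
import math

def simpson(f, a, b, n=10000):
    # Composite Simpson rule on [a, b] (n must be even)
    h = (b - a) / n
    acc = f(a) + f(b)
    for i in range(1, n):
        acc += (4 if i % 2 else 2) * f(a + i * h)
    return acc * h / 3.0

s, k = 1.5, 0.8
e = lambda t: math.exp(-s * t)

# L{u(t-k)} = integral_k^inf e^{-st} dt, truncated at T = 30
step = simpson(e, k, 30.0)
# L{u(t) - u(t-k)} = integral_0^k e^{-st} dt
pulse = simpson(e, 0.0, k)

print(step, math.exp(-k * s) / s)          # both near e^{-ks}/s
print(pulse, (1 - math.exp(-k * s)) / s)   # both near (1 - e^{-ks})/s
```

Because the step function merely moves the limits of integration, no special handling of the discontinuity is needed: the quadrature simply starts (or stops) at t = k.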
15.10.7 A calculation of the magnetic field of a circular current loop in circular cylindrical coordinates leads to the integral

$$\int_0^{\infty} e^{-kz}\,k\,J_1(ka)\,dk, \qquad \Re(z) \ge 0.$$

Show that this integral is equal to a/(z² + a²)^{3/2}.

15.10.8 The electrostatic potential of a point charge q at the origin in circular cylindrical coordinates is

$$\frac{q}{4\pi\varepsilon_0}\int_0^{\infty} e^{-kz}J_0(k\rho)\,dk = \frac{q}{4\pi\varepsilon_0}\cdot\frac{1}{(\rho^2+z^2)^{1/2}}, \qquad \Re(z) \ge 0.$$

From this relation show that the Fourier cosine and sine transforms of J₀(kρ) are

$$\text{(a)}\quad \sqrt{\frac{\pi}{2}}\,F_c\{J_0(k\rho)\} = \int_0^{\infty}J_0(k\rho)\cos k\zeta\,dk = \begin{cases}\bigl(\rho^2-\zeta^2\bigr)^{-1/2}, & \rho > \zeta,\\ 0, & \rho < \zeta,\end{cases}$$

$$\text{(b)}\quad \sqrt{\frac{\pi}{2}}\,F_s\{J_0(k\rho)\} = \int_0^{\infty}J_0(k\rho)\sin k\zeta\,dk = \begin{cases}0, & \rho > \zeta,\\ \bigl(\zeta^2-\rho^2\bigr)^{-1/2}, & \rho < \zeta.\end{cases}$$

Hint. Replace z by z + iζ and take the limit as z → 0.

15.10.9 Show that

$$\mathcal{L}\{I_0(at)\} = \bigl(s^2-a^2\bigr)^{-1/2}, \qquad s > a.$$

15.10.10 Verify the following Laplace transforms:

$$\text{(a)}\quad \mathcal{L}\{j_0(at)\} = \mathcal{L}\left\{\frac{\sin at}{at}\right\} = \frac{1}{a}\cot^{-1}\frac{s}{a},$$

(b) L{n₀(at)} does not exist,

$$\text{(c)}\quad \mathcal{L}\{i_0(at)\} = \mathcal{L}\left\{\frac{\sinh at}{at}\right\} = \frac{1}{2a}\ln\frac{s+a}{s-a} = \frac{1}{a}\coth^{-1}\frac{s}{a},$$

(d) L{k₀(at)} does not exist.

15.10.11 Develop a Laplace transform solution of Laguerre's equation

$$tF''(t) + (1-t)F'(t) + nF(t) = 0.$$

Note that you need a derivative of a transform and a transform of derivatives. Go as far as you can with n; then (and only then) set n = 0.

15.10.12 Show that the Laplace transform of the Laguerre polynomial Lₙ(at) is given by

$$\mathcal{L}\{L_n(at)\} = \frac{(s-a)^n}{s^{n+1}}, \qquad s > 0.$$

15.10.13 Show that

$$\mathcal{L}\{E_1(t)\} = \frac{1}{s}\ln(s+1), \qquad s > 0,$$

where

$$E_1(t) = \int_t^{\infty}\frac{e^{-\tau}}{\tau}\,d\tau = \int_1^{\infty}\frac{e^{-xt}}{x}\,dx.$$

E₁(t) is the exponential-integral function.

15.10.14 (a) From Eq. (15.189) show that

$$\int_0^{\infty} f(x)\,dx = \int_0^{\infty}\frac{F(t)}{t}\,dt,$$

provided the integrals exist.
(b) From the preceding result show that

$$\int_0^{\infty}\frac{\sin t}{t}\,dt = \frac{\pi}{2},$$

in agreement with Eqs. (15.122) and (7.56).

15.10.15 (a) Show that

$$\mathcal{L}\left\{\frac{\sin kt}{t}\right\} = \cot^{-1}\frac{s}{k}.$$

(b) Using this result (with k = 1), prove that

$$\mathcal{L}\{\mathrm{si}(t)\} = -\frac{1}{s}\tan^{-1}s,$$

where

$$\mathrm{si}(t) = -\int_t^{\infty}\frac{\sin x}{x}\,dx, \qquad \text{the sine integral.}$$

15.10.16 If F(t) is periodic (Fig. 15.14) with a period a so that F(t + a) = F(t) for all t ≥ 0, show that

$$\mathcal{L}\{F(t)\} = \frac{\int_0^a e^{-st}F(t)\,dt}{1-e^{-as}},$$

with the integration now over only the first period of F(t).

FIGURE 15.14 Periodic function.

15.10.17 Find the Laplace transform of the square wave (period a) defined by

$$F(t) = \begin{cases}1, & 0 < t < a/2,\\ 0, & a/2 < t < a.\end{cases}$$

$$\text{ANS.}\quad f(s) = \frac{1}{s}\cdot\frac{1-e^{-as/2}}{1-e^{-as}}.$$

15.10.18 Show that

$$\text{(a)}\ \mathcal{L}\{\cosh at\cos at\} = \frac{s^3}{s^4+4a^4}, \qquad \text{(b)}\ \mathcal{L}\{\cosh at\sin at\} = \frac{as^2+2a^3}{s^4+4a^4},$$
$$\text{(c)}\ \mathcal{L}\{\sinh at\cos at\} = \frac{as^2-2a^3}{s^4+4a^4}, \qquad \text{(d)}\ \mathcal{L}\{\sinh at\sin at\} = \frac{2a^2s}{s^4+4a^4}.$$

15.10.19 Show that

$$\text{(a)}\quad \mathcal{L}^{-1}\bigl\{\bigl(s^2+a^2\bigr)^{-2}\bigr\} = \frac{1}{2a^3}\sin at - \frac{1}{2a^2}t\cos at,$$
$$\text{(b)}\quad \mathcal{L}^{-1}\bigl\{s\bigl(s^2+a^2\bigr)^{-2}\bigr\} = \frac{t}{2a}\sin at,$$
$$\text{(c)}\quad \mathcal{L}^{-1}\bigl\{s^2\bigl(s^2+a^2\bigr)^{-2}\bigr\} = \frac{1}{2a}\sin at + \frac{t}{2}\cos at,$$
$$\text{(d)}\quad \mathcal{L}^{-1}\bigl\{s^3\bigl(s^2+a^2\bigr)^{-2}\bigr\} = \cos at - \frac{a}{2}t\sin at.$$

15.10.20 Show that

$$\mathcal{L}\bigl\{\bigl(t^2-k^2\bigr)^{-1/2}u(t-k)\bigr\} = K_0(ks).$$

Hint. Try transforming an integral representation of K₀(ks) into the Laplace transform integral.

15.10.21 The Laplace transform

$$\int_0^{\infty} e^{-xs}\,x\,J_0(x)\,dx = \frac{s}{(s^2+1)^{3/2}}$$

may be rewritten as

$$\frac{1}{s^2}\int_0^{\infty} e^{-y}\,y\,J_0\!\left(\frac{y}{s}\right)dy = \frac{s}{(s^2+1)^{3/2}},$$

which is in Gauss–Laguerre quadrature form. Evaluate this integral for s = 1.0, 0.9, 0.8, ..., decreasing s in steps of 0.1 until the relative error rises to 10 percent. (The effect of decreasing s is to make the integrand oscillate more rapidly per unit length of y, thus decreasing the accuracy of the numerical quadrature.)

15.10.22 (a) Evaluate

$$\int_0^{\infty} e^{-kz}\,k\,J_1(ka)\,dk$$

by the Gauss–Laguerre quadrature. Take a = 1 and z = 0.1(0.1)1.0.
(b) From the analytic form, Exercise 15.10.7, calculate the absolute error and the relative error.
15.11 CONVOLUTION (FALTUNGS) THEOREM

One of the most important properties of the Laplace transform is that given by the convolution, or Faltungs, theorem.¹⁸ We take two transforms,
$$f_1(s) = L\bigl\{F_1(t)\bigr\} \qquad\text{and}\qquad f_2(s) = L\bigl\{F_2(t)\bigr\},\tag{15.190}$$
and multiply them together. To avoid complications when changing variables, we hold the upper limits finite:
$$f_1(s)f_2(s) = \lim_{a\to\infty}\int_0^a e^{-sx}F_1(x)\,dx\int_0^{a-x} e^{-sy}F_2(y)\,dy.\tag{15.191}$$
The upper limits are chosen so that the area of integration, shown in Fig. 15.15a, is the shaded triangle, not the square. If we integrate over a square in the $xy$-plane, we have a parallelogram in the $tz$-plane, which simply adds complications. This modification is permissible because the two integrands are assumed to decrease exponentially. In the limit $a\to\infty$, the integral over the unshaded triangle will give zero contribution.
Substituting $x = t - z$, $y = z$, the region of integration is mapped into the triangle shown in Fig. 15.15b. To verify the mapping, map the vertices: $t = x + y$, $z = y$. Using Jacobians to transform the element of area, we have
$$dx\,dy = \begin{vmatrix}\dfrac{\partial x}{\partial t} & \dfrac{\partial y}{\partial t}\\[6pt] \dfrac{\partial x}{\partial z} & \dfrac{\partial y}{\partial z}\end{vmatrix}\,dt\,dz = \begin{vmatrix}1 & 0\\ -1 & 1\end{vmatrix}\,dt\,dz,\tag{15.192}$$
or $dx\,dy = dt\,dz$. With this substitution Eq. (15.191) becomes
$$f_1(s)f_2(s) = \lim_{a\to\infty}\int_0^a e^{-st}\int_0^t F_1(t-z)F_2(z)\,dz\,dt = L\left\{\int_0^t F_1(t-z)F_2(z)\,dz\right\}.\tag{15.193}$$

FIGURE 15.15 Change of variables, (a) xy-plane, (b) zt-plane.

¹⁸An alternate derivation employs the Bromwich integral (Section 15.12). This is Exercise 15.12.3.

For convenience this integral is represented by the symbol
$$\int_0^t F_1(t-z)F_2(z)\,dz \equiv F_1 * F_2\tag{15.194}$$
and referred to as the convolution, closely analogous to the Fourier convolution (Section 15.5). If we substitute $w = t - z$, we find
$$F_1 * F_2 = F_2 * F_1,\tag{15.195}$$
showing that the relation is symmetric.
Carrying out the inverse transform, we also find
$$L^{-1}\bigl\{f_1(s)f_2(s)\bigr\} = \int_0^t F_1(t-z)F_2(z)\,dz.\tag{15.196}$$
This can be useful in the development of new transforms or as an alternative to a partial fraction expansion. One immediate application is in the solution of integral equations (Section 16.2). Since the upper limit, $t$, is variable, this Laplace convolution is useful in treating Volterra integral equations. The Fourier convolution with fixed (infinite) limits would apply to Fredholm integral equations.

Example 15.11.1 DRIVEN OSCILLATOR WITH DAMPING

As one illustration of the use of the convolution theorem, let us return to the mass $m$ on a spring, with damping and a driving force $F(t)$. The equation of motion ((15.129) or (15.154)) now becomes
$$mX''(t) + bX'(t) + kX(t) = F(t).\tag{15.197}$$
Initial conditions $X(0) = 0$, $X'(0) = 0$ are used to simplify this illustration, and the transformed equation is
$$ms^2x(s) + bs\,x(s) + kx(s) = f(s),\tag{15.198}$$
or
$$x(s) = \frac{f(s)}{m}\cdot\frac{1}{(s+b/2m)^2+\omega_1^2},\tag{15.199}$$
where $\omega_1^2 \equiv k/m - b^2/4m^2$, as before.
By the convolution theorem (Eq. (15.193) or (15.196)),
$$X(t) = \frac{1}{m\omega_1}\int_0^t F(t-z)\,e^{-(b/2m)z}\sin\omega_1 z\,dz.\tag{15.200}$$
If the force is impulsive, $F(t) = P\,\delta(t)$,¹⁹
$$X(t) = \frac{P}{m\omega_1}\,e^{-(b/2m)t}\sin\omega_1 t.\tag{15.201}$$
$P$ represents the momentum transferred by the impulse, and the constant $P/m$ takes the place of an initial velocity $X'(0)$.
If $F(t) = F_0\sin\omega t$, Eq. (15.200) may be used, but a partial fraction expansion is perhaps more convenient. With
$$f(s) = \frac{F_0\,\omega}{s^2+\omega^2},$$
Eq. (15.199) becomes
$$x(s) = \frac{F_0\,\omega}{m}\cdot\frac{1}{s^2+\omega^2}\cdot\frac{1}{(s+b/2m)^2+\omega_1^2} = \frac{F_0\,\omega}{m}\left[\frac{a's+b'}{s^2+\omega^2}+\frac{c's+d'}{(s+b/2m)^2+\omega_1^2}\right].\tag{15.202}$$
The coefficients $a'$, $b'$, $c'$, and $d'$ are independent of $s$. Direct calculation shows
$$\frac{1}{a'} = -\left[\frac{b}{m}\,\omega^2 + \frac{m}{b}\bigl(\omega_0^2-\omega^2\bigr)^2\right],$$
$$\frac{1}{b'} = \frac{1}{\omega_0^2-\omega^2}\left[\frac{b^2}{m^2}\,\omega^2 + \bigl(\omega_0^2-\omega^2\bigr)^2\right].$$
Since $c'$ and $d'$ will lead to exponentially decreasing terms (transients), they will be discarded here.

¹⁹Note that $\delta(t)$ lies inside the interval $[0,t]$.
Carrying out the inverse operation, we find for the steady-state solution
$$X(t) = \frac{F_0}{\bigl[b^2\omega^2 + m^2(\omega_0^2-\omega^2)^2\bigr]^{1/2}}\,\sin(\omega t - \varphi),\tag{15.203}$$
where
$$\tan\varphi = \frac{b\omega}{m(\omega_0^2-\omega^2)}.$$
Differentiating the denominator, we find that the amplitude has a maximum when
$$\omega^2 = \omega_0^2 - \frac{b^2}{2m^2} = \omega_1^2 - \frac{b^2}{4m^2}.\tag{15.204}$$
This is the resonance condition.²⁰ At resonance the amplitude becomes $F_0/b\omega_1$, showing that the mass $m$ goes into infinite oscillation at resonance if damping is neglected ($b = 0$).
It is worth noting that we have had three different characteristic frequencies:
$$\omega_2^2 = \omega_0^2 - \frac{b^2}{2m^2},$$
resonance for forced oscillations, with damping;
$$\omega_1^2 = \omega_0^2 - \frac{b^2}{4m^2},$$
free oscillation frequency, with damping; and
$$\omega_0^2 = \frac{k}{m},$$
free oscillation frequency, no damping. They coincide only if the damping is zero.

²⁰The amplitude (squared) has the typical resonance denominator, the Lorentz line shape, Exercise 15.3.9.

Returning to Eqs. (15.197) and (15.199), Eq. (15.197) is our ODE for the response of a dynamical system to an arbitrary driving force. The final response clearly depends on both the driving force and the characteristics of our system. This dual dependence is separated in the transform space. In Eq. (15.199) the transform of the response (output) appears as the product of two factors, one describing the driving force (input) and the other describing the dynamical system. This latter part, which modifies the input and yields the output, is often called a transfer function. Specifically, $[(s+b/2m)^2+\omega_1^2]^{-1}$ is the transfer function corresponding to this damped oscillator.
The concept of a transfer function is of great use in the field of servomechanisms. Often the characteristics of a particular servomechanism are described by giving its transfer function. The convolution theorem then yields the output signal for a particular input signal.
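The convolution solution, Eq. (15.200), can be compared directly with the steady-state formula, Eq. (15.203). The sketch below (plain Python; the parameter values and the trapezoid grid are arbitrary numerical choices) evaluates the convolution integral for $F(t) = F_0\sin\omega t$ at a time large enough that the transient factor $e^{-(b/2m)t}$ has died out:

```python
import math

m, b, k, F0, w = 1.0, 0.4, 4.0, 1.0, 1.3        # arbitrary test parameters
w0 = math.sqrt(k / m)                            # free frequency, no damping
w1 = math.sqrt(k / m - b**2 / (4 * m**2))        # free frequency, with damping

def X(t, n=20_000):
    """Convolution solution, Eq. (15.200), by the trapezoid rule."""
    h = t / n
    total = 0.0
    for i in range(n + 1):
        z = i * h
        g = math.sin(w * (t - z)) * math.exp(-b * z / (2 * m)) * math.sin(w1 * z)
        total += (0.5 if i in (0, n) else 1.0) * g
    return F0 * total * h / (m * w1)

# steady-state amplitude and phase from Eq. (15.203)
amp = F0 / math.sqrt(b**2 * w**2 + m**2 * (w0**2 - w**2) ** 2)
phi = math.atan2(b * w, m * (w0**2 - w**2))
t = 40.0                                         # e^{-bt/2m} ~ 3e-4 by now
print(X(t), amp * math.sin(w * t - phi))         # nearly equal
```

The residual difference is the decayed transient (the discarded $c'$, $d'$ terms) plus the quadrature error, both small at this $t$.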
Exercises

15.11.1 From the convolution theorem show that
$$\frac{1}{s}f(s) = L\left\{\int_0^t F(x)\,dx\right\},$$
where $f(s) = L\{F(t)\}$.

15.11.2 If $F(t) = t^a$ and $G(t) = t^b$, $a>-1$, $b>-1$:
(a) Show that the convolution
$$F*G = t^{a+b+1}\int_0^1 y^a(1-y)^b\,dy.$$
(b) By using the convolution theorem, show that
$$\int_0^1 y^a(1-y)^b\,dy = \frac{a!\,b!}{(a+b+1)!}.$$
This is the Euler formula for the beta function (Eq. (8.59a)).

15.11.3 Using the convolution integral, calculate
$$L^{-1}\left\{\frac{s}{(s^2+a^2)(s^2+b^2)}\right\}, \qquad a^2 \ne b^2.$$

15.11.4 An undamped oscillator is driven by a force $F_0\sin\omega t$. Find the displacement as a function of time. Notice that it is a linear combination of two simple harmonic motions, one with the frequency of the driving force and one with the frequency $\omega_0$ of the free oscillator. (Assume $X(0) = X'(0) = 0$.)
$$\text{ANS. } X(t) = \frac{F_0/m}{\omega^2-\omega_0^2}\left(\frac{\omega}{\omega_0}\sin\omega_0 t - \sin\omega t\right).$$

Other exercises involving the Laplace convolution appear in Section 16.2.

15.12 INVERSE LAPLACE TRANSFORM

Bromwich Integral

We now develop an expression for the inverse Laplace transform $L^{-1}$ appearing in the equation
$$F(t) = L^{-1}\bigl\{f(s)\bigr\}.\tag{15.205}$$
One approach lies in the Fourier transform, for which we know the inverse relation. There is a difficulty, however. Our Fourier transformable function had to satisfy the Dirichlet conditions. In particular, we required that
$$\lim_{\omega\to\infty}G(\omega) = 0\tag{15.206}$$
so that the infinite integral would be well defined.²¹ Now we wish to treat functions $F(t)$ that may diverge exponentially. To surmount this difficulty, we extract an exponential factor, $e^{\gamma t}$, from our (possibly) divergent Laplace function and write
$$F(t) = e^{\gamma t}G(t).\tag{15.207}$$
If $F(t)$ diverges as $e^{\alpha t}$, we require $\gamma$ to be greater than $\alpha$ so that $G(t)$ will be convergent. Now, with $G(t) = 0$ for $t < 0$ and otherwise suitably restricted so that it may be represented by a Fourier integral (Eq. (15.20)),
$$G(t) = \frac{1}{2\pi}\int_{-\infty}^{\infty}e^{iut}\,du\int_0^\infty G(v)e^{-iuv}\,dv.\tag{15.208}$$
Using Eq.
(15.207), we may rewrite Eq. (15.208) as
$$F(t) = \frac{e^{\gamma t}}{2\pi}\int_{-\infty}^{\infty}e^{iut}\,du\int_0^\infty F(v)e^{-\gamma v}e^{-iuv}\,dv.\tag{15.209}$$
Now, with the change of variable,
$$s = \gamma + iu,\tag{15.210}$$
the integral over $v$ is thrown into the form of a Laplace transform,
$$\int_0^\infty F(v)e^{-sv}\,dv = f(s);\tag{15.211}$$
$s$ is now a complex variable, and $\Re(s)\ge\gamma$ to guarantee convergence. Notice that the Laplace transform has mapped a function specified on the positive real axis onto the complex plane, $\Re(s)\ge\gamma$.²²
With $\gamma$ as a constant, $ds = i\,du$. Substituting Eq. (15.211) into Eq. (15.209), we obtain
$$F(t) = \frac{1}{2\pi i}\int_{\gamma-i\infty}^{\gamma+i\infty}e^{st}f(s)\,ds.\tag{15.212}$$
Here is our inverse transform. We have rotated the line of integration through 90° (by using $ds = i\,du$). The path has become an infinite vertical line in the complex plane, the constant $\gamma$ having been chosen so that all the singularities of $f(s)$ are on the left-hand side (Fig. 15.16).

FIGURE 15.16 Singularities of $e^{st}f(s)$.

Equation (15.212), our inverse transformation, is usually known as the Bromwich integral, although sometimes it is referred to as the Fourier–Mellin theorem or Fourier–Mellin integral. This integral may now be evaluated by the regular methods of contour integration (Chapter 7). If $t > 0$, the contour may be closed by an infinite semicircle in the left half-plane. Then by the residue theorem (Section 7.1)
$$F(t) = \sum\,\text{residues} \qquad (\text{included for } \Re(s)<\gamma).\tag{15.213}$$
Possibly this means of evaluation with $\Re(s)$ ranging through negative values seems paradoxical in view of our previous requirement that $\Re(s)\ge\gamma$. The paradox disappears when we recall that the requirement $\Re(s)\ge\gamma$ was imposed to guarantee convergence of the Laplace transform integral that defined $f(s)$.

²¹If delta functions are included, $G(\omega)$ may be a cosine. Although this does not satisfy Eq. (15.206), $G(\omega)$ is still bounded.
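Before turning to examples, note that Eq. (15.212) can also be sanity-checked by brute-force numerical integration along the line $\Re(s)=\gamma$. The sketch below (plain Python; the values of γ, the cutoff U, and the step count are arbitrary numerical choices) recovers $\sinh at$ from $f(s) = a/(s^2-a^2)$:

```python
import cmath, math

def bromwich(f, t, gamma=2.0, U=2000.0, n=400_000):
    """Trapezoid-rule estimate of (1/2*pi*i) * integral of e^{st} f(s) ds
    along s = gamma + iu, truncated to |u| <= U; ds = i du cancels the i."""
    h = 2 * U / n
    total = 0j
    for i in range(n + 1):
        s = complex(gamma, -U + i * h)
        wgt = 0.5 if i in (0, n) else 1.0
        total += wgt * cmath.exp(s * t) * f(s)
    return (total * h / (2 * math.pi)).real

a = 1.0
F = bromwich(lambda s: a / (s * s - a * a), t=1.0)
print(F, math.sinh(1.0))      # close agreement
```

Here γ = 2 places the integration line to the right of the poles at s = ±1, exactly as the text requires.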
Once $f(s)$ is obtained, we may then proceed to exploit its properties as an analytic function in the complex plane wherever we choose.²³ In effect we are employing analytic continuation to get $L\{F(t)\}$ in the left half-plane, exactly as the recurrence relation for the factorial function was used to extend the Euler integral definition (Eq. (8.5)) to the left half-plane.
Perhaps a pair of examples may clarify the evaluation of Eq. (15.212).

²²For a derivation of the inverse Laplace transform using only real variables, see C. L. Bohn and R. W. Flynn, Real variable inversion of Laplace transforms: An application in plasma physics. Am. J. Phys. 46: 1250 (1978).
²³In numerical work $f(s)$ may well be available only for discrete real, positive values of $s$. Then numerical procedures are indicated. See Krylov and Skoblya in the Additional Readings.

Example 15.12.1 INVERSION VIA CALCULUS OF RESIDUES

If $f(s) = a/(s^2-a^2)$, then
$$e^{st}f(s) = \frac{a\,e^{st}}{s^2-a^2} = \frac{a\,e^{st}}{(s+a)(s-a)}.\tag{15.214}$$
The residues may be found by using Exercise 6.6.1 or various other means. The first step is to identify the singularities, the poles. Here we have one simple pole at $s = a$ and another simple pole at $s = -a$. By Exercise 6.6.1, the residue at $s = a$ is $\tfrac12 e^{at}$ and the residue at $s = -a$ is $-\tfrac12 e^{-at}$. Then
$$\sum\,\text{residues} = \tfrac12\bigl(e^{at}-e^{-at}\bigr) = \sinh at = F(t),\tag{15.215}$$
in agreement with Eq. (15.105).

Example 15.12.2

If
$$f(s) = \frac{1-e^{-as}}{s},$$
then $e^{s(t-a)}$ grows exponentially for $t < a$ on the semicircle in the left-hand $s$-plane, so contour integration and the residue theorem are not applicable. However, we can evaluate the integral explicitly as follows. We let $\gamma\to 0$ and substitute $s = iy$, so
$$F(t) = \frac{1}{2\pi i}\int_{\gamma-i\infty}^{\gamma+i\infty}e^{st}f(s)\,ds = \frac{1}{2\pi}\int_{-\infty}^{\infty}\bigl(e^{iyt}-e^{iy(t-a)}\bigr)\frac{dy}{iy}.\tag{15.216}$$
Using the Euler identity, only the sines survive that are odd in $y$, and we obtain
$$F(t) = \frac{1}{2\pi}\int_{-\infty}^{\infty}\left[\frac{\sin ty}{y}-\frac{\sin(t-a)y}{y}\right]dy.\tag{15.217}$$
If $k > 0$, then $\int_0^\infty \frac{\sin ky}{y}\,dy$ gives $\pi/2$, and it gives $-\pi/2$ if $k < 0$. As a consequence, $F(t) = 0$ if $t > a > 0$ and if $t < 0$. If $0 < t < a$, then $F(t) = 1$. This can be written compactly in terms of the Heaviside unit step function $u(t)$ as follows:
$$F(t) = u(t) - u(t-a) = \begin{cases}0, & t<0,\\ 1, & 0<t<a,\\ 0, & t>a,\end{cases}\tag{15.218}$$
a step function of unit height and length $a$ (Fig. 15.17).

FIGURE 15.17 Finite-length step function u(t) − u(t − a).

Two general comments may be in order. First, these two examples hardly begin to show the usefulness and power of the Bromwich integral. It is always available for inverting a complicated transform when the tables prove inadequate.
Second, this derivation is not presented as a rigorous one. Rather, it is given more as a plausibility argument, although it can be made rigorous. The determination of the inverse transform is somewhat similar to the solution of a differential equation. It makes little difference how you get the solution. Guess at it if you want. The solution can always be checked by substitution back into the original differential equation. Similarly, $F(t)$ can (and, to check on careless errors, should) be checked by determining whether, by Eq. (15.99),
$$L\bigl\{F(t)\bigr\} = f(s).$$
Two alternate derivations of the Bromwich integral are the subjects of Exercises 15.12.1 and 15.12.2.
As a final illustration of the use of the Laplace inverse transform, we have some results from the work of Brillouin and Sommerfeld (1914) in electromagnetic theory.

Example 15.12.3 VELOCITY OF ELECTROMAGNETIC WAVES IN A DISPERSIVE MEDIUM

The group velocity $u$ of traveling waves is related to the phase velocity $v$ by the equation
$$u = v - \lambda\frac{dv}{d\lambda}.\tag{15.219}$$
Here $\lambda$ is the wavelength. In the vicinity of an absorption line (resonance), $dv/d\lambda$ may be sufficiently negative so that $u > c$ (Fig. 15.18).
The question immediately arises whether a signal can be transmitted faster than $c$, the velocity of light in vacuum. This question, which assumes that such a group velocity is meaningful, is of fundamental importance to the theory of special relativity.
We need a solution to the wave equation
$$\frac{\partial^2\psi}{\partial x^2} = \frac{1}{v^2}\frac{\partial^2\psi}{\partial t^2},\tag{15.220}$$
corresponding to a harmonic vibration starting at the origin at time zero. Since our medium is dispersive, $v$ is a function of the angular frequency. Imagine, for instance, a plane wave, angular frequency $\omega$, incident on a shutter at the origin. At $t = 0$ the shutter is (instantaneously) opened, and the wave is permitted to advance along the positive $x$-axis.

FIGURE 15.18 Optical dispersion.

Let us then build up a solution starting at $x = 0$. It is convenient to use the Cauchy integral formula, Eq. (6.43),
$$\psi(0,t) = \frac{1}{2\pi i}\oint\frac{e^{-izt}}{z-z_0}\,dz = e^{-iz_0t}$$
(for a contour encircling $z = z_0$ in the positive sense). Using $s = -iz$ and $z_0 = \omega$, we obtain
$$\psi(0,t) = \frac{1}{2\pi i}\int_{\gamma-i\infty}^{\gamma+i\infty}\frac{e^{st}}{s+i\omega}\,ds = \begin{cases}0, & t<0,\\ e^{-i\omega t}, & t>0.\end{cases}\tag{15.221}$$
To be complete, the loop integral is along the vertical line $\Re(s) = \gamma$ and an infinite semicircle, as shown in Fig. 15.19. The location of the infinite semicircle is chosen so that the integral over it vanishes. This means a semicircle in the left half-plane for $t > 0$, and the residue is enclosed. For $t < 0$ we pick the right half-plane, and no singularity is enclosed. The fact that this is just the Bromwich integral may be verified by noting that
$$F(t) = \begin{cases}0, & t<0,\\ e^{-i\omega t}, & t>0,\end{cases}\tag{15.222}$$
and applying the Laplace transform. The transformed function $f(s)$ becomes
$$f(s) = \frac{1}{s+i\omega}.\tag{15.223}$$

FIGURE 15.19 Possible closed contours.

Our Cauchy–Bromwich integral provides us with the time dependence of a signal leaving the origin at $t = 0$. To include the space dependence, we note that $e^{s(t-x/v)}$ satisfies the wave equation.
With this as a clue, we replace $t$ by $t - x/v$ and write a solution:
$$\psi(x,t) = \frac{1}{2\pi i}\int_{\gamma-i\infty}^{\gamma+i\infty}\frac{e^{s(t-x/v)}}{s+i\omega}\,ds.\tag{15.224}$$
It was seen in the derivation of the Bromwich integral that our variable $s$ replaces the $\omega$ of the Fourier transformation. Hence the wave velocity $v$ may become a function of $s$, that is, $v(s)$. Its particular form need not concern us here. We need only the property
$$v \le c \qquad\text{and}\qquad \lim_{|s|\to\infty}v(s) = \text{constant}, \; c.\tag{15.225}$$
This is suggested by the asymptotic behavior of the curve on the right side of Fig. 15.18.²⁴
Evaluating Eq. (15.224) by the calculus of residues, we may close the path of integration by a semicircle in the right half-plane, provided
$$t - \frac{x}{c} < 0.$$
Hence
$$\psi(x,t) = 0, \qquad t-\frac{x}{c} < 0,\tag{15.226}$$
which means that the velocity of our signal cannot exceed the velocity of light in the vacuum, $c$. This simple but very significant result was extended by Sommerfeld and Brillouin to show just how the wave advanced in the dispersive medium.

²⁴Equation (15.225) follows rigorously from the theory of anomalous dispersion. See also the Kronig–Kramers optical dispersion relations of Section 7.2.

Summary — Inversion of Laplace Transform

• Direct use of tables, Table 15.2, and references; use of partial fractions (Section 15.8) and the operational theorems of Table 15.1.
• Bromwich integral, Eq. (15.212), and the calculus of residues.
• Numerical inversion, see the Additional Readings.

Table 15.1 Laplace Transform Operations

1. Laplace transform: $f(s) = L\{F(t)\} = \int_0^\infty e^{-st}F(t)\,dt$ — (15.99)
2. Transform of derivative: $sf(s) - F(+0) = L\{F'(t)\}$ — (15.123); $s^2f(s) - sF(+0) - F'(+0) = L\{F''(t)\}$ — (15.124)
3. Transform of integral: $\frac{1}{s}f(s) = L\left\{\int_0^t F(x)\,dx\right\}$ — (Exercise 15.11.1)
4. Substitution: $f(s-a) = L\{e^{at}F(t)\}$ — (15.152)
5. Translation: $e^{-bs}f(s) = L\{F(t-b)\}$ — (15.164)
6. Derivative of transform: $f^{(n)}(s) = L\{(-t)^nF(t)\}$ — (15.173)
7. Integral of transform: $\int_s^\infty f(x)\,dx = L\{F(t)/t\}$ — (15.189)
8. Convolution: $f_1(s)f_2(s) = L\left\{\int_0^t F_1(t-z)F_2(z)\,dz\right\}$ — (15.193)
9. Inverse transform, Bromwich integral: $\frac{1}{2\pi i}\int_{\gamma-i\infty}^{\gamma+i\infty}e^{st}f(s)\,ds = F(t)$ — (15.212)

Exercises

15.12.1 Derive the Bromwich integral from Cauchy's integral formula.
Hint. Apply the inverse transform $L^{-1}$ to
$$f(s) = \frac{1}{2\pi i}\lim_{\alpha\to\infty}\int_{\gamma-i\alpha}^{\gamma+i\alpha}\frac{f(z)}{s-z}\,dz,$$
where $f(z)$ is analytic for $\Re(z)\ge\gamma$.

15.12.2 Starting with
$$\frac{1}{2\pi i}\int_{\gamma-i\infty}^{\gamma+i\infty}e^{st}f(s)\,ds,$$
show that by introducing
$$f(s) = \int_0^\infty e^{-sz}F(z)\,dz,$$
we can convert one integral into the Fourier representation of a Dirac delta function. From this derive the inverse Laplace transform.

15.12.3 Derive the Laplace transformation convolution theorem by use of the Bromwich integral.

15.12.4 Find
$$L^{-1}\left\{\frac{s}{s^2-k^2}\right\}$$
(a) by a partial fraction expansion.
(b) Repeat, using the Bromwich integral.

Table 15.2 Laplace Transforms

    f(s)                                          F(t)                Limitation          Equation
 1. $1$                                           $\delta(t)$         Singularity at +0   (15.141)
 2. $1/s$                                         $1$                 $s>0$               (15.102)
 3. $n!/s^{n+1}$                                  $t^n$               $s>0$, $n>-1$       (15.108)
 4. $1/(s-k)$                                     $e^{kt}$            $s>k$               (15.103)
 5. $1/(s-k)^2$                                   $te^{kt}$           $s>k$               (15.175)
 6. $s/(s^2-k^2)$                                 $\cosh kt$          $s>k$               (15.105)
 7. $k/(s^2-k^2)$                                 $\sinh kt$          $s>k$               (15.105)
 8. $s/(s^2+k^2)$                                 $\cos kt$           $s>0$               (15.107)
 9. $k/(s^2+k^2)$                                 $\sin kt$           $s>0$               (15.107)
10. $(s-a)/[(s-a)^2+k^2]$                         $e^{at}\cos kt$     $s>a$               (15.153)
11. $k/[(s-a)^2+k^2]$                             $e^{at}\sin kt$     $s>a$               (15.153)
12. $(s^2-k^2)/(s^2+k^2)^2$                       $t\cos kt$          $s>0$               (Exercise 15.10.19)
13. $2ks/(s^2+k^2)^2$                             $t\sin kt$          $s>0$               (Exercise 15.10.19)
14. $(s^2+a^2)^{-1/2}$                            $J_0(at)$           $s>0$               (15.185)
15. $(s^2-a^2)^{-1/2}$                            $I_0(at)$           $s>a$               (Exercise 15.10.9)
16. $\frac{1}{a}\cot^{-1}\frac{s}{a}$             $j_0(at)$           $s>0$               (Exercise 15.10.10)
17. $\frac{1}{2a}\ln\frac{s+a}{s-a} = \frac{1}{a}\coth^{-1}\frac{s}{a}$   $i_0(at)$      $s>a$               (Exercise 15.10.10)
18. $(s-a)^n/s^{n+1}$                             $L_n(at)$           $s>0$               (Exercise 15.10.12)
19. $\frac{1}{s}\ln(s+1)$                         $E_1(x) = -\operatorname{Ei}(-x)$   $s>0$   (Exercise 15.10.13)
20. $(\ln s)/s$                                   $-\ln t - \gamma$   $s>0$               (Exercise 15.12.9)

A more extensive table of Laplace transforms appears in Chapter 29 of AMS-55 (see footnote 4 in Chapter 5 for the reference).

15.12.5 Find
$$L^{-1}\left\{\frac{k^2}{s(s^2+k^2)}\right\}$$
(a) by using a partial fraction expansion.
(b) Repeat using the convolution theorem.
(c) Repeat using the Bromwich integral.
$$\text{ANS. } F(t) = 1 - \cos kt.$$

15.12.6 Use the Bromwich integral to find the function whose transform is $f(s) = s^{-1/2}$. Note that $f(s)$ has a branch point at $s = 0$. The negative $x$-axis may be taken as a cut line.
$$\text{ANS. } F(t) = (\pi t)^{-1/2}.$$

15.12.7 Show that
$$L^{-1}\bigl\{(s^2+1)^{-1/2}\bigr\} = J_0(t)$$
by evaluation of the Bromwich integral.
Hint. Convert your Bromwich integral into an integral representation of $J_0(t)$. Figure 15.20 shows a possible contour.

FIGURE 15.20 A possible contour for the inversion of J₀(t).

15.12.8 Evaluate the inverse Laplace transform
$$L^{-1}\bigl\{(s^2-a^2)^{-1/2}\bigr\}$$
by each of the following methods:
(a) Expansion in a series and term-by-term inversion.
(b) Direct evaluation of the Bromwich integral.
(c) Change of variable in the Bromwich integral: $s = (a/2)(z+z^{-1})$.

15.12.9 Show that
$$L^{-1}\left\{\frac{\ln s}{s}\right\} = -\ln t - \gamma,$$
where $\gamma = 0.5772\ldots$, the Euler–Mascheroni constant.

15.12.10 Evaluate the Bromwich integral for
$$f(s) = \frac{s}{(s^2+a^2)^2}.$$

15.12.11 Heaviside expansion theorem. If the transform $f(s)$ may be written as a ratio
$$f(s) = \frac{g(s)}{h(s)},$$
where $g(s)$ and $h(s)$ are analytic functions, $h(s)$ having simple, isolated zeros at $s = s_i$, show that
$$F(t) = L^{-1}\left\{\frac{g(s)}{h(s)}\right\} = \sum_i \frac{g(s_i)}{h'(s_i)}\,e^{s_it}.$$
Hint. See Exercise 6.6.2.

15.12.12 Using the Bromwich integral, invert $f(s) = s^{-2}e^{-ks}$. Express $F(t) = L^{-1}\{f(s)\}$ in terms of the (shifted) unit step function $u(t-k)$.
$$\text{ANS. } F(t) = (t-k)\,u(t-k).$$

15.12.13 You have a Laplace transform:
$$f(s) = \frac{1}{(s+a)(s+b)}, \qquad a\ne b.$$
Invert this transform by each of three methods:
(a) Partial fractions and use of tables.
(b) Convolution theorem.
(c) Bromwich integral.
$$\text{ANS. } F(t) = \frac{e^{-bt}-e^{-at}}{a-b}, \qquad a\ne b.$$

Additional Readings

Champeney, D. C., Fourier Transforms and Their Physical Applications. New York: Academic Press (1973). Fourier transforms are developed in a careful, easy-to-follow manner. Approximately 60% of the book is devoted to applications of interest in physics and engineering.

Erdelyi, A., W. Magnus, F. Oberhettinger, and F. G. Tricomi, Tables of Integral Transforms, 2 vols. New York: McGraw-Hill (1954). This text contains extensive tables of Fourier sine, cosine, and exponential transforms, Laplace and inverse Laplace transforms, Mellin and inverse Mellin transforms, Hankel transforms, and other, more specialized integral transforms.

Hanna, J. R., Fourier Series and Integrals of Boundary Value Problems. Somerset, NJ: Wiley (1990). This book is a broad treatment of the Fourier solution of boundary value problems. The concepts of convergence and completeness are given careful attention.

Jeffreys, H., and B. S. Jeffreys, Methods of Mathematical Physics, 3rd ed. Cambridge, UK: Cambridge University Press (1972).

Krylov, V. I., and N. S. Skoblya, Handbook of Numerical Inversion of Laplace Transform. Jerusalem: Israel Program for Scientific Translations (1969).

Lepage, W. R., Complex Variables and the Laplace Transform for Engineers. New York: McGraw-Hill (1961); New York: Dover (1980). A complex variable analysis that is carefully developed and then applied to Fourier and Laplace transforms. It is written to be read by students, but intended for the serious student.

McCollum, P. A., and B. F. Brown, Laplace Transform Tables and Theorems. New York: Holt, Rinehart and Winston (1965).

Miles, J. W., Integral Transforms in Applied Mathematics. Cambridge, UK: Cambridge University Press (1971).
This is a brief but interesting and useful treatment for the advanced undergraduate. It emphasizes applications rather than abstract mathematical theory.

Papoulis, A., The Fourier Integral and Its Applications. New York: McGraw-Hill (1962). This is a rigorous development of Fourier and Laplace transforms and has extensive applications in science and engineering.

Roberts, G. E., and H. Kaufman, Table of Laplace Transforms. Philadelphia: Saunders (1966).

Sneddon, I. N., Fourier Transforms. New York: McGraw-Hill (1951), reprinted, Dover (1995). A detailed comprehensive treatment, this book is loaded with applications to a wide variety of fields of modern and classical physics.

Sneddon, I. N., The Use of Integral Transforms. New York: McGraw-Hill (1972). Written for students in science and engineering in terms they can understand, this book covers all the integral transforms mentioned in this chapter as well as in several others. Many applications are included.

Van der Pol, B., and H. Bremmer, Operational Calculus Based on the Two-sided Laplace Integral, 3rd ed. Cambridge, UK: Cambridge University Press (1987). Here is a development based on the integral range −∞ to +∞, rather than the useful 0 to ∞. Chapter V contains a detailed study of the Dirac delta function (impulse function).

Wolf, K. B., Integral Transforms in Science and Engineering. New York: Plenum Press (1979). This book is a very comprehensive treatment of integral transforms and their applications.

CHAPTER 16 INTEGRAL EQUATIONS

16.1 INTRODUCTION

With the exception of the integral transforms of the last chapter, we have been considering relations between the unknown function $\varphi(x)$ and one or more of its derivatives. We now proceed to investigate equations containing the unknown function within an integral. As with differential equations, we shall confine our attention to linear relations, linear integral equations.
Integral equations are classified in two ways:
• If the limits of integration are fixed, we call the equation a Fredholm equation; if one limit is variable, it is a Volterra equation.
• If the unknown function appears only under the integral sign, we label it first kind. If it appears both inside and outside the integral, it is labeled second kind.

Definitions

Symbolically, we have the Fredholm equation of the first kind,
$$f(x) = \int_a^b K(x,t)\varphi(t)\,dt;\tag{16.1}$$
the Fredholm equation of the second kind, with $\lambda$ being the eigenvalue,
$$\varphi(x) = f(x) + \lambda\int_a^b K(x,t)\varphi(t)\,dt;\tag{16.2}$$
the Volterra equation of the first kind,
$$f(x) = \int_a^x K(x,t)\varphi(t)\,dt;\tag{16.3}$$
and the Volterra equation of the second kind,
$$\varphi(x) = f(x) + \int_a^x K(x,t)\varphi(t)\,dt.\tag{16.4}$$
In all four cases $\varphi(t)$ is the unknown function. $K(x,t)$, which we call the kernel, and $f(x)$ are assumed to be known. When $f(x) = 0$, the equation is said to be homogeneous.
Why do we bother about integral equations? After all, the differential equations have done a rather good job of describing our physical world so far. There are several reasons for introducing integral equations here. We have placed considerable emphasis on the solution of differential equations subject to particular boundary conditions. For instance, the boundary condition at $r = 0$ determines whether the Neumann function $N_n(r)$ is present when Bessel's equation is solved. The boundary condition for $r\to\infty$ determines whether $I_n(r)$ is present in our solution of the modified Bessel equation. The integral equation relates the unknown function not only to its values at neighboring points (derivatives) but also to its values throughout a region, including the boundary. In a very real sense the boundary conditions are built into the integral equation rather than imposed at the final stage of the solution. It can be seen in Section 10.5, where kernels are constructed, that the form of the kernel depends on the values on the boundary.
The integral equation, then, is compact and may turn out to be a more convenient or powerful form than the differential equation. Mathematical problems such as existence, uniqueness, and completeness may often be handled more easily and elegantly in integral form. Moreover, whether or not we like it, there are some problems, such as some diffusion and transport phenomena, that cannot be represented by differential equations. If we wish to solve such problems, we are forced to handle integral equations. Finally, an integral equation may also appear as a matter of deliberate choice based on convenience or the need for the mathematical power of an integral equation formulation.

Example 16.1.1 MOMENTUM REPRESENTATION IN QUANTUM MECHANICS

The Schrödinger equation (in ordinary space representation) is
$$-\frac{\hbar^2}{2m}\nabla^2\psi(\mathbf r) + V(\mathbf r)\psi(\mathbf r) = E\psi(\mathbf r),\tag{16.5}$$
or
$$\bigl(\nabla^2+a^2\bigr)\psi(\mathbf r) = v(\mathbf r)\psi(\mathbf r),\tag{16.6}$$
where
$$a^2 = \frac{2m}{\hbar^2}E, \qquad v(\mathbf r) = \frac{2m}{\hbar^2}V(\mathbf r).\tag{16.7}$$
If we generalize Eq. (16.6) to
$$\bigl(\nabla^2+a^2\bigr)\psi(\mathbf r) = \int v(\mathbf r,\mathbf r')\psi(\mathbf r')\,d^3r',\tag{16.8}$$
then, for the special case of
$$v(\mathbf r,\mathbf r') = v(\mathbf r')\,\delta(\mathbf r-\mathbf r'),\tag{16.9}$$
a local interaction, Eq. (16.8) reduces to Eq. (16.6).
Consider the Fourier transform pair $\psi$ and $\Phi$ (compare footnote 9 in Section 15.6):
$$\Phi(\mathbf k) = \frac{1}{(2\pi)^{3/2}}\int\psi(\mathbf r)e^{-i\mathbf k\cdot\mathbf r}\,d^3r, \qquad \psi(\mathbf r) = \frac{1}{(2\pi)^{3/2}}\int\Phi(\mathbf k)e^{i\mathbf k\cdot\mathbf r}\,d^3k,\tag{16.10}$$
with the abbreviation
$$\frac{\mathbf p}{\hbar} = \mathbf k \qquad\text{(wave number)}\tag{16.11}$$
for momentum. Multiplying Eq. (16.8) by the plane wave $e^{-i\mathbf k\cdot\mathbf r}$ and integrating over $\mathbf r$, we obtain
$$\int e^{-i\mathbf k\cdot\mathbf r}\bigl(\nabla^2+a^2\bigr)\psi(\mathbf r)\,d^3r = \int d^3r\,e^{-i\mathbf k\cdot\mathbf r}\int v(\mathbf r,\mathbf r')\psi(\mathbf r')\,d^3r'.\tag{16.12}$$
Note that the $\nabla^2$ on the left operates only on the $\psi(\mathbf r)$. Integrating the left-hand side by parts and substituting Eq. (16.10) for $\psi(\mathbf r')$ on the right, we get
$$\bigl(-k^2+a^2\bigr)\int\psi(\mathbf r)e^{-i\mathbf k\cdot\mathbf r}\,d^3r = (2\pi)^{3/2}\bigl(-k^2+a^2\bigr)\Phi(\mathbf k) = \frac{1}{(2\pi)^{3/2}}\iiint v(\mathbf r,\mathbf r')\Phi(\mathbf k')e^{-i(\mathbf k\cdot\mathbf r-\mathbf k'\cdot\mathbf r')}\,d^3r'\,d^3r\,d^3k'.\tag{16.13}$$
If we use
$$f(\mathbf k,\mathbf k') = \frac{1}{(2\pi)^{3}}\iint v(\mathbf r,\mathbf r')e^{-i(\mathbf k\cdot\mathbf r-\mathbf k'\cdot\mathbf r')}\,d^3r'\,d^3r,\tag{16.14}$$
Eq.
(16.13) becomes
$$\bigl(-k^2+a^2\bigr)\Phi(\mathbf k) = \int f(\mathbf k,\mathbf k')\Phi(\mathbf k')\,d^3k',\tag{16.15}$$
a Fredholm equation of the second kind in which the parameter $a^2$ corresponds to the eigenvalue. For our special but important case of local interaction, application of Eq. (16.9) leads to
$$f(\mathbf k,\mathbf k') = f(\mathbf k-\mathbf k').\tag{16.16}$$
This is our momentum representation, equivalent to an ordinary static interaction potential in coordinate space. Our momentum wave function $\Phi(\mathbf k)$ satisfies the integral equation (16.15). It must be emphasized that all through here we have assumed that the required Fourier integrals exist. For a harmonic oscillator potential, $V(r) = r^2$, the required integrals would not exist. Equation (16.10) would lead to divergent oscillations, and we would have no Eq. (16.15).

Transformation of a Differential Equation into an Integral Equation

Often we find that we have a choice. The physical problem may be represented by a differential or an integral equation. Let us assume that we have the differential equation and wish to transform it into an integral equation. Starting with a linear second-order ODE
$$y'' + A(x)y' + B(x)y = g(x)\tag{16.17}$$
with initial conditions
$$y(a) = y_0, \qquad y'(a) = y_0',$$
we integrate to obtain
$$y'(x) = -\int_a^x A(t)y'(t)\,dt - \int_a^x B(t)y(t)\,dt + \int_a^x g(t)\,dt + y_0'.\tag{16.18}$$
Integrating the first integral on the right by parts yields
$$y'(x) = -A(x)y(x) - \int_a^x\bigl(B-A'\bigr)y(t)\,dt + \int_a^x g(t)\,dt + A(a)y_0 + y_0'.\tag{16.19}$$
Notice how the initial conditions are being absorbed into our new version. Integrating a second time, we obtain
$$y(x) = -\int_a^x A(t)y(t)\,dt - \int_a^x du\int_a^u\bigl[B(t)-A'(t)\bigr]y(t)\,dt + \int_a^x du\int_a^u g(t)\,dt + \bigl[A(a)y_0+y_0'\bigr](x-a) + y_0.\tag{16.20}$$
To transform this equation into a neater form, we use the relation
$$\int_a^x du\int_a^u f(t)\,dt = \int_a^x(x-t)f(t)\,dt.\tag{16.21}$$
This may be verified by differentiating both sides. Since the derivatives are equal, the original expressions can differ only by a constant. Letting $x\to a$, the constant vanishes and Eq.
(16.21) is established. Applying it to Eq. (16.20), we obtain
$$y(x) = -\int_a^x\Bigl\{A(t)+(x-t)\bigl[B(t)-A'(t)\bigr]\Bigr\}y(t)\,dt + \int_a^x(x-t)g(t)\,dt + \bigl[A(a)y_0+y_0'\bigr](x-a) + y_0.\tag{16.22}$$
If we now introduce the abbreviations
$$K(x,t) = (t-x)\bigl[B(t)-A'(t)\bigr] - A(t),$$
$$f(x) = \int_a^x(x-t)g(t)\,dt + \bigl[A(a)y_0+y_0'\bigr](x-a) + y_0,\tag{16.23}$$
Eq. (16.22) becomes
$$y(x) = f(x) + \int_a^x K(x,t)y(t)\,dt,\tag{16.24}$$
which is a Volterra equation of the second kind. This reformulation as a Volterra integral equation offers certain advantages in investigating questions of existence and uniqueness.

Example 16.1.2 LINEAR OSCILLATOR EQUATION

As an illustration, consider the linear oscillator equation
$$y'' + \omega^2y = 0\tag{16.25}$$
with
$$y(0) = 0, \qquad y'(0) = 1.$$
This yields (compare with Eq. (16.17))
$$A(x) = 0, \qquad B(x) = \omega^2, \qquad g(x) = 0.$$
Substituting into Eq. (16.22) (or Eqs. (16.23) and (16.24)), we find that the integral equation becomes
$$y(x) = x + \omega^2\int_0^x(t-x)y(t)\,dt.\tag{16.26}$$
• This integral equation, Eq. (16.26), is equivalent to the original differential equation plus the initial conditions. A check shows that each form is indeed satisfied by $y(x) = (1/\omega)\sin\omega x$.

Let us reconsider the linear oscillator equation (16.25) but now with the boundary conditions
$$y(0) = 0, \qquad y(b) = 0.$$
Since $y'(0)$ is not given, we must modify the procedure. The first integration gives
$$y' = -\omega^2\int_0^x y\,dt + y'(0).\tag{16.27}$$
Integrating a second time and again using Eq. (16.21), we have
$$y = -\omega^2\int_0^x(x-t)y(t)\,dt + y'(0)\,x.\tag{16.28}$$
To eliminate the unknown $y'(0)$, we now impose the condition $y(b) = 0$. This gives
$$\omega^2\int_0^b(b-t)y(t)\,dt = b\,y'(0).\tag{16.29}$$
Substituting this back into Eq. (16.28), we obtain
$$y(x) = -\omega^2\int_0^x(x-t)y(t)\,dt + \frac{x}{b}\,\omega^2\int_0^b(b-t)y(t)\,dt.\tag{16.30}$$

FIGURE 16.1

Now let us break the interval $[0,b]$ into two intervals, $[0,x]$ and $[x,b]$. Since
$$\frac{x}{b}(b-t) - (x-t) = \frac{t}{b}(b-x),\tag{16.31}$$
we find
$$y(x) = \omega^2\int_0^x\frac{t}{b}(b-x)\,y(t)\,dt + \omega^2\int_x^b\frac{x}{b}(b-t)\,y(t)\,dt.\tag{16.32}$$
Finally, if we define a kernel (Fig. 16.1)
$$K(x,t) = \begin{cases}\dfrac{t}{b}(b-x), & t<x,\\[6pt] \dfrac{x}{b}(b-t), & t>x,\end{cases}\tag{16.33}$$
we have
$$y(x) = \omega^2\int_0^b K(x,t)y(t)\,dt,\tag{16.34}$$
a homogeneous Fredholm equation of the second kind.
Our new kernel, $K(x,t)$, has some interesting properties.
1. It is symmetric, $K(x,t) = K(t,x)$.
2. It is continuous, in the sense that
$$\left.\frac{t}{b}(b-x)\right|_{t=x} = \left.\frac{x}{b}(b-t)\right|_{t=x}.$$
3. Its derivative with respect to $t$ is discontinuous. As $t$ increases through the point $t = x$, there is a discontinuity of $-1$ in $\partial K(x,t)/\partial t$.
According to these properties in Section 9.7 we identify $K(x,t)$ as a Green's function.
1. In the transformation of a linear, second-order ODE into an integral equation, the initial or boundary conditions play a decisive role. If we have initial conditions (only one end of our interval), the differential equation transforms into a Volterra integral equation. For the case of the linear oscillator equation with boundary conditions (both ends
$$y(x)=\omega^{2}\int_{0}^{x}\frac{t}{b}(b-x)\,y(t)\,dt+\omega^{2}\int_{x}^{b}\frac{x}{b}(b-t)\,y(t)\,dt. \tag{16.32}$$
Finally, if we define a kernel (Fig. 16.1)
$$K(x,t)=\begin{cases}\dfrac{t}{b}(b-x), & t<x,\\[6pt] \dfrac{x}{b}(b-t), & t>x,\end{cases} \tag{16.33}$$
we have
$$y(x)=\omega^{2}\int_{0}^{b}K(x,t)\,y(t)\,dt, \tag{16.34}$$
a homogeneous Fredholm equation of the second kind.

FIGURE 16.1

Our new kernel, $K(x,t)$, has some interesting properties.

1. It is symmetric, $K(x,t)=K(t,x)$.
2. It is continuous, in the sense that
$$\frac{t}{b}(b-x)\Big|_{t=x}=\frac{x}{b}(b-t)\Big|_{t=x}.$$
3. Its derivative with respect to $t$ is discontinuous. As $t$ increases through the point $t=x$, there is a discontinuity of $-1$ in $\partial K(x,t)/\partial t$.

On the basis of these properties (Section 9.7) we identify $K(x,t)$ as a Green's function.

1. In the transformation of a linear, second-order ODE into an integral equation, the initial or boundary conditions play a decisive role. If we have initial conditions (only one end of our interval), the differential equation transforms into a Volterra integral equation. For the case of the linear oscillator equation with boundary conditions (both ends of our interval), the differential equation leads to a Fredholm integral equation with a kernel that will be a Green's function.
2. Note that the reverse transformation (integral equation to differential equation) is not always possible. There exist integral equations for which no corresponding differential equation is known.

Exercises

16.1.1 Starting with the ODE, integrate twice and derive the Volterra integral equation corresponding to
(a) $y''(x)-y(x)=0$; $\quad y(0)=0$, $y'(0)=1$.
ANS. $y(x)=\displaystyle\int_{0}^{x}(x-t)\,y(t)\,dt+x$.
(b) $y''(x)-y(x)=0$; $\quad y(0)=1$, $y'(0)=-1$.
ANS. $y(x)=\displaystyle\int_{0}^{x}(x-t)\,y(t)\,dt-x+1$.
Check your results with Eq. (16.23).

16.1.2 Derive a Fredholm integral equation corresponding to
$$y''(x)-y(x)=0,\qquad y(1)=1,\quad y(-1)=1,$$
(a) by integrating twice,
(b) by forming the Green's function.
ANS. $y(x)=1-\displaystyle\int_{-1}^{+1}K(x,t)\,y(t)\,dt$,
$$K(x,t)=\begin{cases}\tfrac{1}{2}(1-x)(t+1), & x>t,\\ \tfrac{1}{2}(1-t)(x+1), & x<t.\end{cases}$$
16.1.3 (a) Starting with the given answers of Exercise 16.1.1, differentiate and recover the original ODEs and the boundary conditions. (b) Repeat for Exercise 16.1.2.

16.1.4 The general second-order linear ODE with constant coefficients is
$$y''(x)+a_{1}y'(x)+a_{2}y(x)=0.$$
Given the boundary conditions $y(0)=y(1)=0$, integrate twice and develop the integral equation
$$y(x)=\int_{0}^{1}K(x,t)\,y(t)\,dt,$$
with
$$K(x,t)=\begin{cases}a_{2}\,t(1-x)+a_{1}(x-1), & t<x,\\ a_{2}\,x(1-t)+a_{1}x, & x<t.\end{cases}$$
Note that $K(x,t)$ is symmetric and continuous if $a_{1}=0$. How is this related to self-adjointness of the ODE?

16.1.5 Verify that
$$\int_{a}^{x}\!du\int_{a}^{u}f(t)\,dt=\int_{a}^{x}(x-t)\,f(t)\,dt$$
for all $f(t)$ (for which the integrals exist).

16.1.6 Given $\varphi(x)=x-\displaystyle\int_{0}^{x}(t-x)\,\varphi(t)\,dt$, solve this integral equation by converting it to an ODE (plus boundary conditions) and solving the ODE (by inspection).

16.1.7 Show that the homogeneous Volterra equation of the second kind,
$$\psi(x)=\lambda\int_{0}^{x}K(x,t)\,\psi(t)\,dt,$$
has no solution (apart from the trivial $\psi=0$).
Hint. Develop a Maclaurin expansion of $\psi(x)$. Assume $\psi(x)$ and $K(x,t)$ are differentiable with respect to $x$ as needed.

16.2 INTEGRAL TRANSFORMS, GENERATING FUNCTIONS

Whereas systematic methods for solving linear ODEs were developed in Chapter 9, there is no comparably general method available for solving integral equations. However, certain special cases may be treated with our integral transforms (Chapter 15). For convenience these are listed here.

If
$$\psi(x)=\frac{1}{\sqrt{2\pi}}\int_{-\infty}^{\infty}e^{ixt}\,\varphi(t)\,dt,$$
then
$$\varphi(x)=\frac{1}{\sqrt{2\pi}}\int_{-\infty}^{\infty}e^{-ixt}\,\psi(t)\,dt \qquad\text{(Fourier)}. \tag{16.35}$$
If
$$\psi(x)=\int_{0}^{\infty}e^{-xt}\,\varphi(t)\,dt,$$
then
$$\varphi(x)=\frac{1}{2\pi i}\int_{\gamma-i\infty}^{\gamma+i\infty}e^{xt}\,\psi(t)\,dt \qquad\text{(Laplace)}. \tag{16.36}$$
If
$$\psi(x)=\int_{0}^{\infty}t^{x-1}\,\varphi(t)\,dt,$$
then
$$\varphi(x)=\frac{1}{2\pi i}\int_{\gamma-i\infty}^{\gamma+i\infty}x^{-t}\,\psi(t)\,dt \qquad\text{(Mellin)}. \tag{16.37}$$
If
$$\psi(x)=\int_{0}^{\infty}t\,\varphi(t)\,J_{\nu}(xt)\,dt,$$
then
$$\varphi(x)=\int_{0}^{\infty}t\,\psi(t)\,J_{\nu}(xt)\,dt \qquad\text{(Hankel)}. \tag{16.38}$$
Actually the usefulness of the integral transform technique extends a bit beyond these four rather specialized forms.

Example 16.2.1 FOURIER TRANSFORM SOLUTION

Let us consider a Fredholm equation of the first kind with a kernel of the general type $k(x-t)$,
$$f(x)=\int_{-\infty}^{\infty}k(x-t)\,\varphi(t)\,dt, \tag{16.39}$$
in which $\varphi(t)$ is our unknown function. Assuming that the needed transforms exist, we apply the Fourier convolution theorem (Section 15.5) to obtain
$$f(x)=\int_{-\infty}^{\infty}K(\omega)\,\Phi(\omega)\,e^{-i\omega x}\,d\omega. \tag{16.40}$$
The functions $K(\omega)$, $\Phi(\omega)$, and $F(\omega)$ are the Fourier transforms of $k(x)$, $\varphi(x)$, and $f(x)$, respectively. Taking the Fourier transform of both sides of Eq. (16.40), by Eq. (16.35) we have
$$K(\omega)\,\Phi(\omega)=\frac{1}{2\pi}\int_{-\infty}^{\infty}f(x)\,e^{i\omega x}\,dx=\frac{F(\omega)}{\sqrt{2\pi}}. \tag{16.41}$$
Then
$$\Phi(\omega)=\frac{1}{\sqrt{2\pi}}\cdot\frac{F(\omega)}{K(\omega)}, \tag{16.42}$$
and, using the inverse Fourier transform, we have
$$\varphi(x)=\frac{1}{2\pi}\int_{-\infty}^{\infty}\frac{F(\omega)}{K(\omega)}\,e^{-i\omega x}\,d\omega. \tag{16.43}$$
For a rigorous justification of this result one can follow Morse and Feshbach (1953; see the Additional Readings) across complex planes. An extension of this transform solution appears as Exercise 16.2.1.

Example 16.2.2 GENERALIZED ABEL EQUATION, CONVOLUTION THEOREM

The generalized Abel equation is
$$f(x)=\int_{0}^{x}\frac{\varphi(t)}{(x-t)^{\alpha}}\,dt,\qquad 0<\alpha<1, \tag{16.44}$$
with $f(x)$ known, $\varphi(t)$ unknown. Taking the Laplace transform of both sides of this equation, we obtain
$$L\{f(x)\}=L\Bigl\{\int_{0}^{x}\frac{\varphi(t)}{(x-t)^{\alpha}}\,dt\Bigr\}=L\{x^{-\alpha}\}\,L\{\varphi(x)\}, \tag{16.45}$$
the last step following by the Laplace convolution theorem (Section 15.11). Then
$$L\{\varphi(x)\}=\frac{s^{1-\alpha}\,L\{f(x)\}}{(-\alpha)!}. \tag{16.46}$$
Dividing by $s$,¹ we obtain
$$\frac{1}{s}L\{\varphi(x)\}=\frac{s^{-\alpha}\,L\{f(x)\}}{(-\alpha)!}=\frac{L\{x^{\alpha-1}\}\,L\{f(x)\}}{(\alpha-1)!\,(-\alpha)!}. \tag{16.47}$$
Combining the factorials (Eq. (8.32)) and applying the Laplace convolution theorem again, we discover that
$$\frac{1}{s}L\{\varphi(x)\}=\frac{\sin\pi\alpha}{\pi}\,L\Bigl\{\int_{0}^{x}\frac{f(t)}{(x-t)^{1-\alpha}}\,dt\Bigr\}. \tag{16.48}$$
Inverting with the aid of Exercise 15.11.1, we get
$$\int_{0}^{x}\varphi(t)\,dt=\frac{\sin\pi\alpha}{\pi}\int_{0}^{x}\frac{f(t)}{(x-t)^{1-\alpha}}\,dt, \tag{16.49}$$
and finally, by differentiating,
$$\varphi(x)=\frac{\sin\pi\alpha}{\pi}\,\frac{d}{dx}\int_{0}^{x}\frac{f(t)}{(x-t)^{1-\alpha}}\,dt. \tag{16.50}$$

¹ $s^{1-\alpha}$ does not have an inverse for $0<\alpha<1$.

Generating Functions

Occasionally the reader may encounter integral equations that involve generating functions. Suppose we have the admittedly special case
$$f(x)=\int_{-1}^{1}\frac{\varphi(t)}{(1-2xt+x^{2})^{1/2}}\,dt,\qquad -1\le x\le 1. \tag{16.51}$$
We notice two important features:
1. $(1-2xt+x^{2})^{-1/2}$ generates the Legendre polynomials.
2. $[-1,1]$ is the orthogonality interval for the Legendre polynomials.

If we now expand the denominator (property 1) and assume that our unknown $\varphi(t)$ may be written as a series of these same Legendre polynomials, $\varphi(t)=\sum_{n}a_{n}P_{n}(t)$, then
$$f(x)=\int_{-1}^{1}\sum_{n=0}^{\infty}a_{n}P_{n}(t)\sum_{r=0}^{\infty}P_{r}(t)\,x^{r}\,dt. \tag{16.52}$$
Utilizing the orthogonality of the Legendre polynomials (property 2), we obtain
$$f(x)=\sum_{r=0}^{\infty}\frac{2a_{r}}{2r+1}\,x^{r}. \tag{16.53}$$
We may identify the $a_{n}$ by differentiating $n$ times and then setting $x=0$:
$$\frac{f^{(n)}(0)}{n!}=\frac{2a_{n}}{2n+1}. \tag{16.54}$$
Hence
$$\varphi(t)=\sum_{n=0}^{\infty}\frac{2n+1}{2}\,\frac{f^{(n)}(0)}{n!}\,P_{n}(t). \tag{16.55}$$
Similar results may be obtained with the other generating functions (compare Exercise 7.1.6).

• This technique of expanding in a series of special functions is not always applicable. It is worth a try whenever the expansion is possible (and convenient) and the interval is appropriate.

Exercises

16.2.1 The kernel of a Fredholm equation of the second kind,
$$\varphi(x)=f(x)+\lambda\int_{-\infty}^{\infty}K(x,t)\,\varphi(t)\,dt,$$
is of the form $k(x-t)$.² Assuming that the required transforms exist, show that
$$\varphi(x)=\frac{1}{\sqrt{2\pi}}\int_{-\infty}^{\infty}\frac{F(t)\,e^{-ixt}}{1-\sqrt{2\pi}\,\lambda K(t)}\,dt.$$
$F(t)$ and $K(t)$ are the Fourier transforms of $f(x)$ and $k(x)$, respectively.

16.2.2 The kernel of a Volterra equation of the first kind,
$$f(x)=\int_{0}^{x}K(x,t)\,\varphi(t)\,dt,$$

² This kernel and a range $0\le x<\infty$ are the characteristics of integral equations of the Wiener–Hopf type.
Details will be found in Chapter 8 of Morse and Feshbach (1953); see the Additional Readings. 1016 Chapter 16 Integral Equations has the form k(x − t). Assuming that the required transforms exist, show that γ +i∞ F (s) xs 1 e ds. ϕ(x) = 2πi γ −i∞ K(s) F (s) and K(s) are the Laplace transforms of f (x) and k(x), respectively. 16.2.3 The kernel of a Volterra equation of the second kind, x ϕ(x) = f (x) + λ K(x, t)ϕ(t) dt, 0 has the form k(x − t). Assuming that the required transforms exist, show that γ +i∞ F (s) 1 exs ds. ϕ(x) = 2πi γ −i∞ 1 − λK(s) 16.2.4 Using the Laplace transform solution (Exercise 16.2.3), solve (a) (b) ϕ(x) = x + ϕ(x) = x − x (t − x)ϕ(t) dt. 0 0 ANS. ϕ(x) = sin x. x (t − x)ϕ(t) dt. ANS. ϕ(x) = sinh x. Check your results by substituting back into the original integral equations. 16.2.5 Reformulate the equations of Example 16.2.1 (Eqs. (16.39) to (16.43)), using Fourier cosine transforms. 16.2.6 Given the Fredholm integral equation, 2 e−x = ∞ −∞ 2 e−(x−t) ϕ(t) dt, apply the Fourier convolution technique of Example 16.2.1 to solve for ϕ(t). 16.2.7 Solve Abel’s equation, f (x) = 0 x ϕ(t) dt, (x − t)α 0 < α < 1, by the following method: Multiply both sides by (z − x)α−1 and integrate with respect to x over the range 0 ≤ x ≤ z. (b) Reverse the order of integration and evaluate the integral on the right-hand side (with respect to x) by the beta function. (a) 16.2 Integral Transforms, Generating Functions Note. 16.2.8 z t 1017 dx π = B(1 − α, α) = (−α)!(α − 1)! = . sin πα (z − x)1−α (x − t)α Given the generalized Abel equation with f (x) = 1, x ϕ(t) dt, 0 < α < 1, 1= α 0 (x − t) solve for ϕ(t) and verify that ϕ(t) is a solution of the given equation. ANS. ϕ(t) = 16.2.9 sin πα α−1 t . π 2 A Fredholm equation of the first kind has a kernel e−(x−t) : ∞ 2 f (x) = e−(x−t) ϕ(t) dt. −∞ Show that the solution is ∞ 1  f (n) (0) ϕ(x) = √ Hn (x), 2n n! π π=0 in which Hn (x) is an nth-order Hermite polynomial. 
16.2.10 Solve the integral equation f (x) = 1 −1 ϕ(t) dt, (1 − 2xt + x 2 )1/2 −1 ≤ x ≤ 1, for the unknown function ϕ(t) if (a) f (x) = x 2s , (b) f (x) = x 2s+1 . 4s + 1 4s + 3 P2s (t), (b) ϕ(t) = P2s+1 (t). 2 2 A Kirchhoff diffraction theory analysis of a laser leads to the integral equation v(r2 ) = γ K(r1 , r2 )v(r1 ) dA. ANS. (a) ϕ(t) = 16.2.11 The unknown, v(r1 ), gives the geometric distribution of the radiation field over one mirror surface; the range of integration is over the surface of that mirror. For square confocal spherical mirrors the integral equation becomes −iγ eikb a a −(ik/b)(x1 x2 +y1 y2 ) e v(x1 , y1 ) dx1 dy1 , v(x2 , y2 ) = λb −a −a in which b is the centerline distance between the laser mirrors. This can be put in a somewhat simpler form by the substitutions kxi2 = ξi2 , b (a) kyi2 = ηi2 , b and ka 2 2πa 2 = = α2 . b λb Show that the variables separate and we get two integral equations. 1018 Chapter 16 Integral Equations Show that the new limits, ±α, may be approximated by ±∞ for a mirror dimension a ≫ λ. (c) Solve the resulting integral equations. (b) 16.3 NEUMANN SERIES, SEPARABLE (DEGENERATE) KERNELS Many and probably most integral equations cannot be solved by the specialized integral transform techniques of the preceding section. Here we develop three rather general techniques for solving integral equations. The first, due largely to Neumann, Liouville, and Volterra, develops the unknown function ϕ(x) as a power series in λ, where λ is a given constant. The method is applicable whenever the series converges. The second method is somewhat restricted because it requires that the two variables appearing in the kernel K(x, t) be separable. However, there are two major rewards: (1) The relation between an integral equation and a set of simultaneous linear algebraic equations is shown explicitly, and (2) the method leads to eigenvalues and eigenfunctions—in close analogy to Section 3.5. 
Third, a technique for numerical solution of Fredholm equations of both the first and second kind is outlined. The problem posed by ill-conditioned matrices is emphasized.

Neumann Series

We solve a linear integral equation of the second kind by successive approximations; our integral equation is the Fredholm equation,
$$\varphi(x)=f(x)+\lambda\int_{a}^{b}K(x,t)\,\varphi(t)\,dt, \tag{16.56}$$
in which $f(x)\ne 0$. If the upper limit of the integral is a variable (Volterra equation), the following development will still hold, but with minor modifications. Let us try (there is no guarantee that it will work) to approximate our unknown function by
$$\varphi(x)\approx\varphi_{0}(x)=f(x). \tag{16.57}$$
This choice is not mandatory. If you can make a better guess, go ahead and guess. The choice here is equivalent to assuming that the integral term, or the constant $\lambda$, is small. To improve this first crude approximation, we feed $\varphi_{0}(x)$ back into the integral, Eq. (16.56), and get
$$\varphi_{1}(x)=f(x)+\lambda\int_{a}^{b}K(x,t)\,f(t)\,dt. \tag{16.58}$$
Repeating this process of substituting the new $\varphi_{n}(x)$ back into Eq. (16.56), we develop the sequence
$$\varphi_{2}(x)=f(x)+\lambda\int_{a}^{b}K(x,t_{1})\,f(t_{1})\,dt_{1}+\lambda^{2}\int_{a}^{b}\!\!\int_{a}^{b}K(x,t_{1})\,K(t_{1},t_{2})\,f(t_{2})\,dt_{2}\,dt_{1} \tag{16.59}$$
and
$$\varphi_{n}(x)=\sum_{i=0}^{n}\lambda^{i}u_{i}(x), \tag{16.60}$$
where
$$u_{0}(x)=f(x),\qquad u_{1}(x)=\int_{a}^{b}K(x,t_{1})\,f(t_{1})\,dt_{1},$$
$$u_{2}(x)=\int_{a}^{b}\!\!\int_{a}^{b}K(x,t_{1})\,K(t_{1},t_{2})\,f(t_{2})\,dt_{2}\,dt_{1},\qquad\ldots$$
$$u_{n}(x)=\int\!\cdots\!\int K(x,t_{1})\,K(t_{1},t_{2})\cdots K(t_{n-1},t_{n})\,f(t_{n})\,dt_{n}\cdots dt_{1}. \tag{16.61}$$
We expect that our solution $\varphi(x)$ will be
$$\varphi(x)=\lim_{n\to\infty}\varphi_{n}(x)=\lim_{n\to\infty}\sum_{i=0}^{n}\lambda^{i}u_{i}(x), \tag{16.62}$$
provided that our infinite series converges. We may conveniently check the convergence by the Cauchy ratio test (Section 5.2), noting that
$$\bigl|\lambda^{n}u_{n}(x)\bigr|\le|\lambda|^{n}\cdot|f|_{\max}\cdot|K|_{\max}^{n}\cdot|b-a|^{n}, \tag{16.63}$$
using $|f|_{\max}$ to represent the maximum value of $|f(x)|$ in the interval $[a,b]$ and $|K|_{\max}$ to represent the maximum value of $|K(x,t)|$ in its domain in the $x,t$-plane. We have convergence if
$$|\lambda|\cdot|K|_{\max}\cdot|b-a|<1. \tag{16.64}$$
Note that the bound in Eq. (16.63) is being used as a comparison series: if it converges, our actual series must converge. If this condition is not satisfied, we may or may not have convergence; a more sensitive test is then required. Of course, even if the Neumann series diverges, there still may be a solution obtainable by another method.

To see what has been done with this iterative manipulation, we may find it helpful to rewrite the Neumann series solution, Eq. (16.59), in operator form. We start by rewriting Eq. (16.56) as
$$\varphi=\lambda K\varphi+f,$$
where $K$ represents the integral operator $\int_{a}^{b}K(x,t)\,[\ ]\,dt$. Solving for $\varphi$, we obtain
$$\varphi=(1-\lambda K)^{-1}f.$$
Binomial expansion leads back to Eq. (16.59). The convergence of the Neumann series is a demonstration that the inverse operator $(1-\lambda K)^{-1}$ exists.

Example 16.3.1 NEUMANN SERIES SOLUTION

To illustrate the Neumann method, we consider the integral equation
$$\varphi(x)=x+\frac{1}{2}\int_{-1}^{1}(t-x)\,\varphi(t)\,dt. \tag{16.65}$$
To start the Neumann series, we take
$$\varphi_{0}(x)=x. \tag{16.66}$$
Then
$$\varphi_{1}(x)=x+\frac{1}{2}\int_{-1}^{1}(t-x)\,t\,dt=x+\frac{1}{2}\Bigl[\frac{1}{3}t^{3}-\frac{1}{2}t^{2}x\Bigr]_{-1}^{1}=x+\frac{1}{3}.$$
Substituting $\varphi_{1}(x)$ back into Eq. (16.65), we get
$$\varphi_{2}(x)=x+\frac{1}{2}\int_{-1}^{1}(t-x)\,t\,dt+\frac{1}{2}\int_{-1}^{1}(t-x)\,\frac{1}{3}\,dt=x+\frac{1}{3}-\frac{x}{3}.$$
Continuing this process of substituting back into Eq. (16.65), we obtain
$$\varphi_{3}(x)=x+\frac{1}{3}-\frac{x}{3}-\frac{1}{3^{2}},$$
and by induction
$$\varphi_{2n}(x)=x+\sum_{s=1}^{n}(-1)^{s-1}3^{-s}-x\sum_{s=1}^{n}(-1)^{s-1}3^{-s}. \tag{16.67}$$
Letting $n\to\infty$, we get
$$\varphi(x)=\frac{3x}{4}+\frac{1}{4}. \tag{16.68}$$
This solution can (and should) be checked by substituting back into the original equation, Eq. (16.65).

It is interesting to note that our series converged easily even though Eq. (16.64) is not satisfied in this particular case. Actually Eq. (16.64) is a rather crude upper bound on $\lambda$. It can be shown that a necessary and sufficient condition for the convergence of our series solution is that $|\lambda|<|\lambda_{e}|$, where $\lambda_{e}$ is the eigenvalue of smallest magnitude of the corresponding homogeneous equation $[f(x)=0]$.
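The convergence of the Neumann iterates in Example 16.3.1 is easy to confirm numerically. A minimal sketch (the uniform trapezoidal grid, its size, and the iteration count are arbitrary illustrative choices, not from the text):

```python
import numpy as np

# Example 16.3.1: phi(x) = x + (1/2) * Integral_{-1}^{1} (t - x) phi(t) dt, Eq. (16.65)
n = 2001
x = np.linspace(-1.0, 1.0, n)
w = np.full(n, x[1] - x[0])          # trapezoidal quadrature weights
w[0] *= 0.5
w[-1] *= 0.5
tx = x[None, :] - x[:, None]         # kernel K(x, t) = t - x; t varies along axis 1

phi = x.copy()                       # phi_0(x) = f(x) = x
for _ in range(60):                  # Neumann iteration: phi <- f + lambda * K phi
    phi = x + 0.5 * (tx * phi[None, :]) @ w

exact = (3.0 * x + 1.0) / 4.0        # closed-form limit, Eq. (16.68)
print(np.max(np.abs(phi - exact)))   # small; limited only by quadrature error
```

Since $|\lambda|=1/2$ is below $|\lambda_{e}|=\sqrt{3}/2$, the iterates contract geometrically, and the discrete fixed point matches Eq. (16.68) to quadrature accuracy.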
For this particular example $|\lambda_{e}|=\sqrt{3}/2$. Clearly, $|\lambda|=\tfrac{1}{2}<|\lambda_{e}|=\sqrt{3}/2$.

One approach to the calculation of time-dependent perturbations in quantum mechanics starts with the integral equation for the evolution operator
$$U(t,t_{0})=1-\frac{i}{\hbar}\int_{t_{0}}^{t}V(t_{1})\,U(t_{1},t_{0})\,dt_{1}. \tag{16.69a}$$
Iteration leads to
$$U(t,t_{0})=1-\frac{i}{\hbar}\int_{t_{0}}^{t}V(t_{1})\,dt_{1}+\Bigl(\frac{-i}{\hbar}\Bigr)^{2}\int_{t_{0}}^{t}\!\!\int_{t_{0}}^{t_{1}}V(t_{1})\,V(t_{2})\,dt_{2}\,dt_{1}+\cdots. \tag{16.69b}$$
The evolution operator is obtained as a series of multiple integrals of the perturbing potential $V(t)$, closely analogous to the Neumann series, Eq. (16.60). For $V=V_{0}$, independent of $t$, the evolution operator becomes (see Exercise 3.4.13; replace $t\to\Delta t$ and construct $U$ from products of $T(t+\Delta t,t)$ as in Eq. (4.26))
$$U(t,t_{0})=\exp\Bigl[-\frac{i}{\hbar}(t-t_{0})V_{0}\Bigr].$$
A second and similar relationship between the Neumann series and quantum mechanics appears when the Schrödinger wave equation for scattering is reformulated as an integral equation. The first term in a Neumann series solution is the incident (unperturbed) wave. The second term is the first-order Born approximation, Eq. (9.203b) of Section 9.7.

The Neumann method may also be applied to Volterra integral equations of the second kind, Eq. (16.4) or Eq. (16.56) with the fixed upper limit, $b$, replaced by the variable $x$. In the Volterra case the Neumann series converges for all $\lambda$ as long as the kernel is square integrable.

Separable Kernel

The technique of replacing our integral equation by simultaneous algebraic equations may also be used whenever our kernel $K(x,t)$ is separable, in the sense that
$$K(x,t)=\sum_{j=1}^{n}M_{j}(x)\,N_{j}(t), \tag{16.70}$$
where $n$, the upper limit of the sum, is finite. Such kernels are sometimes called degenerate. Our class of separable kernels includes all polynomials and many of the elementary transcendental functions; for instance,
$$\cos(t-x)=\cos t\cos x+\sin t\sin x. \tag{16.70a}$$
If Eq. (16.70) is satisfied, substitution into the Fredholm equation of the second kind, Eq. (16.2), yields
$$\varphi(x)=f(x)+\lambda\sum_{j=1}^{n}M_{j}(x)\int_{a}^{b}N_{j}(t)\,\varphi(t)\,dt, \tag{16.71}$$
interchanging integration and summation. Now, each integral with respect to $t$ is a constant,
$$c_{j}=\int_{a}^{b}N_{j}(t)\,\varphi(t)\,dt. \tag{16.72}$$
Hence Eq. (16.71) becomes
$$\varphi(x)=f(x)+\lambda\sum_{j=1}^{n}c_{j}M_{j}(x). \tag{16.73}$$
This gives us $\varphi(x)$, our solution, once the constants $c_{j}$ have been determined. Equation (16.73) further tells us the form of $\varphi(x)$: $f(x)$ plus a linear combination of the $x$-dependent factors of the separable kernel.

We may find the $c_{i}$ by multiplying Eq. (16.73) by $N_{i}(x)$ and integrating to eliminate the $x$-dependence. Use of Eq. (16.72) yields
$$c_{i}=b_{i}+\lambda\sum_{j=1}^{n}a_{ij}c_{j}, \tag{16.74}$$
where
$$b_{i}=\int_{a}^{b}N_{i}(x)\,f(x)\,dx,\qquad a_{ij}=\int_{a}^{b}N_{i}(x)\,M_{j}(x)\,dx. \tag{16.75}$$
It is perhaps helpful to write Eq. (16.74) in matrix form, with $A=(a_{ij})$:
$$\mathbf{b}=\mathbf{c}-\lambda A\mathbf{c}=(1-\lambda A)\mathbf{c}, \tag{16.76a}$$
or³
$$\mathbf{c}=(1-\lambda A)^{-1}\mathbf{b}. \tag{16.76b}$$
Equation (16.76a) is equivalent to the set of simultaneous linear algebraic equations
$$(1-\lambda a_{11})c_{1}-\lambda a_{12}c_{2}-\lambda a_{13}c_{3}-\cdots=b_{1},$$
$$-\lambda a_{21}c_{1}+(1-\lambda a_{22})c_{2}-\lambda a_{23}c_{3}-\cdots=b_{2},$$
$$-\lambda a_{31}c_{1}-\lambda a_{32}c_{2}+(1-\lambda a_{33})c_{3}-\cdots=b_{3}, \tag{16.77}$$
and so on. If our integral equation is homogeneous $[f(x)=0]$, then $\mathbf{b}=0$. To get a solution, we set the determinant of the coefficients of the $c_{i}$ equal to zero,
$$|1-\lambda A|=0, \tag{16.78}$$
exactly as in Section 3.5. The roots of Eq. (16.78) yield our eigenvalues. Substituting into $(1-\lambda A)\mathbf{c}=0$, we find the $c_{i}$, and then Eq. (16.73) gives our solution.

³ Notice the similarity to the operator form of the Neumann series.

Example 16.3.2

To illustrate this technique for determining eigenvalues and eigenfunctions of the homogeneous Fredholm equation, we consider the case
$$\varphi(x)=\lambda\int_{-1}^{1}(t+x)\,\varphi(t)\,dt. \tag{16.79}$$
Here (compare with Eqs. (16.71) and (16.77))
$$M_{1}(x)=1,\quad N_{1}(t)=t,\qquad M_{2}(x)=x,\quad N_{2}(t)=1.$$
Equation (16.75) yields
$$a_{11}=a_{22}=0,\qquad a_{12}=\tfrac{2}{3},\qquad a_{21}=2;\qquad b_{1}=b_{2}=0.$$
Equation (16.78), our secular equation, becomes
$$\begin{vmatrix}1 & -\dfrac{2\lambda}{3}\\[6pt] -2\lambda & 1\end{vmatrix}=0. \tag{16.80}$$
Expanding, we obtain
$$1-\frac{4\lambda^{2}}{3}=0,\qquad \lambda=\pm\frac{\sqrt{3}}{2}. \tag{16.81}$$
Substituting the eigenvalues $\lambda=\pm\sqrt{3}/2$ into Eq. (16.76), we have
$$c_{1}\mp\frac{c_{2}}{\sqrt{3}}=0. \tag{16.82}$$
Finally, with the choice $c_{1}=1$, Eq. (16.73) gives
$$\varphi_{1}(x)=\frac{\sqrt{3}}{2}\bigl(1+\sqrt{3}\,x\bigr),\qquad \lambda=\frac{\sqrt{3}}{2}, \tag{16.83}$$
$$\varphi_{2}(x)=-\frac{\sqrt{3}}{2}\bigl(1-\sqrt{3}\,x\bigr),\qquad \lambda=-\frac{\sqrt{3}}{2}. \tag{16.84}$$
Since our equation is homogeneous, the normalization of $\varphi(x)$ is arbitrary.

If the kernel is not separable in the sense of Eq. (16.70), there is still the possibility that it may be approximated by a kernel that is separable. Then we can get the exact solution of an approximate equation, one that approximates the original equation. The solution of the separable approximate-kernel problem can then be checked by substituting back into the original, unseparable-kernel problem.

Numerical Solution

There is extensive literature on the numerical solution of integral equations, and much of it concerns special techniques for certain situations. One method of fair generality is the replacement of the single integral equation by a set of simultaneous algebraic equations; again, matrix techniques are invoked. This simultaneous algebraic-equation matrix approach is applied here to two different cases. For the homogeneous Fredholm equation of the second kind this method works well. For the Fredholm equation of the first kind the method is a disaster. First we deal with the disaster.

We consider the Fredholm integral equation of the first kind,
$$f(x)=\int_{a}^{b}K(x,t)\,\varphi(t)\,dt, \tag{16.84a}$$
with $f(x)$ and $K(x,t)$ known and $\varphi(t)$ unknown. The integral can be evaluated (in principle) by quadrature techniques. For maximum accuracy the Gaussian method is recommended (if the kernel is continuous and has continuous derivatives).
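The first-kind "disaster" announced above is easy to exhibit: discretize Eq. (16.84a) with Gauss-Legendre quadrature and inspect the condition number of the resulting matrix. A minimal sketch (the kernel $e^{xt}$ of Exercise 16.3.18(a) and the 12-point rule are arbitrary illustrative choices):

```python
import numpy as np

# Nystrom matrix B_ik = A_k * K(x_i, t_k) for the first-kind kernel K(x, t) = exp(x t)
n = 12
t, A = np.polynomial.legendre.leggauss(n)
t = (t + 1.0) / 2.0                 # nodes mapped from [-1, 1] to [0, 1]
A = A / 2.0                         # weights rescaled by the interval length
B = np.exp(np.outer(t, t)) * A[None, :]

print(np.linalg.cond(B))            # astronomically large: inversion amplifies
                                    # small errors in f by this factor
```

The singular values of a smooth first-kind kernel decay rapidly toward machine precision, so inverting $B$ multiplies data noise enormously, exactly the loss of significant figures described in the text.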
The numerical quadrature replaces the integral by a summation,
$$f(x_{i})=\sum_{k=1}^{n}A_{k}\,K(x_{i},t_{k})\,\varphi(t_{k}), \tag{16.84b}$$
with $A_{k}$ the quadrature coefficients. We abbreviate $f(x_{i})$ as $f_{i}$, $\varphi(t_{k})$ as $\varphi_{k}$, and $A_{k}K(x_{i},t_{k})$ as $B_{ik}$. In effect we are changing from a function description to a vector-matrix description, with the $n$ components of the vector $(f_{i})$ defined as the values of the function at the $n$ discrete points $[f(x_{i})]$. Equation (16.84b) becomes
$$f_{i}=\sum_{k=1}^{n}B_{ik}\,\varphi_{k},$$
a matrix equation. Inverting $(B_{ik})$, we obtain
$$\varphi(t_{k})=\varphi_{k}=\sum_{i=1}^{n}\bigl(B^{-1}\bigr)_{ki}\,f_{i}, \tag{16.84c}$$
and Eq. (16.84a) is solved, in principle. In practice, the quadrature coefficient-kernel matrix is often "ill-conditioned" (with respect to inversion). This means that in the inversion process small (numerical) errors are multiplied by large factors. In the inversion process all significant figures may be lost, and Eq. (16.84c) becomes numerical nonsense. This disaster should not be entirely unexpected. Integration is essentially a smoothing operation: $f(x)$ is relatively insensitive to local variation of $\varphi(t)$. Conversely, $\varphi(t)$ may be exceedingly sensitive to small changes in $f(x)$. Small errors in $f(x)$ or in $B^{-1}$ are magnified, and accuracy disappears. This same behavior shows up in attempts to invert Laplace transforms numerically.

When the quadrature-matrix technique is applied to the integral equation eigenvalue problem, the symmetric-kernel homogeneous Fredholm equation of the second kind,⁴
$$\lambda\varphi(x)=\int_{a}^{b}K(x,t)\,\varphi(t)\,dt, \tag{16.84d}$$
the technique is far more successful. Replacing the integral by a set of simultaneous algebraic equations (numerical quadrature), we have
$$\lambda\varphi_{i}=\sum_{k=1}^{n}A_{k}K_{ik}\,\varphi_{k}, \tag{16.84e}$$
with $\varphi_{i}=\varphi(x_{i})$, as before. The points $x_{i}$, $i=1,2,\ldots,n$, are taken to be the same (numerically) as the $t_{k}$, $k=1,2,\ldots,n$, so $K_{ik}$ will be symmetric. The system is symmetrized by multiplying by $A_{i}^{1/2}$, so that
$$\lambda\bigl(A_{i}^{1/2}\varphi_{i}\bigr)=\sum_{k=1}^{n}\bigl(A_{i}^{1/2}K_{ik}A_{k}^{1/2}\bigr)\bigl(A_{k}^{1/2}\varphi_{k}\bigr). \tag{16.84f}$$
Replacing $A_{i}^{1/2}\varphi_{i}$ by $\psi_{i}$ and $A_{i}^{1/2}K_{ik}A_{k}^{1/2}$ by $S_{ik}$, we obtain
$$\lambda\boldsymbol{\psi}=S\boldsymbol{\psi}, \tag{16.84g}$$
with $S$ symmetric (since the kernel $K(x,t)$ was assumed symmetric). Of course, $\boldsymbol{\psi}$ has components $\psi_{i}=\psi(x_{i})$. Equation (16.84g) is our matrix eigenvalue equation, Eq. (3.136).

⁴ The eigenvalue $\lambda$ has been written on the left side, multiplying the eigenfunction, as is customary in matrix analysis (Section 3.5). In this form $\lambda$ will take on a maximum value.

The eigenvalues are readily obtained by calling a canned eigenroutine.⁵ For kernels such as those of Exercise 16.3.15 and using a 10-point Gauss-Legendre quadrature, the eigenroutine determines the largest eigenvalue to within about 0.5 percent for the cases where the kernel has discontinuities in its derivatives. If the derivatives are continuous, the accuracy is much better. Linz⁶ has described an interesting variational refinement in the determination of $\lambda_{\max}$ to high accuracy. The key to his method is Exercise 17.8.7. The components of the eigenfunction vector are obtained from Eq. (16.84d) with $\varphi(t_{k})$ now known and $\varphi_{i}=\varphi(x_{i})$ generated as required. (The $x_{i}$ are no longer tied to the $t_{k}$.)

Exercises

16.3.1 Using the Neumann series, solve
(a) $\varphi(x)=1-2\displaystyle\int_{0}^{x}t\,\varphi(t)\,dt$,
(b) $\varphi(x)=x+\displaystyle\int_{0}^{x}(t-x)\,\varphi(t)\,dt$,
(c) $\varphi(x)=x-\displaystyle\int_{0}^{x}(t-x)\,\varphi(t)\,dt$.
ANS. (a) $\varphi(x)=e^{-x^{2}}$.

16.3.2 Solve the equation
$$\varphi(x)=x+\frac{1}{2}\int_{-1}^{1}(t+x)\,\varphi(t)\,dt$$
by the separable kernel method. Compare with the Neumann method solution of Section 16.3.
ANS. $\varphi(x)=\tfrac{1}{2}(3x+1)$.

16.3.3 Find the eigenvalues and eigenfunctions of
$$\varphi(x)=\lambda\int_{-1}^{1}(t-x)\,\varphi(t)\,dt.$$

16.3.4 Find the eigenvalues and eigenfunctions of
$$\varphi(x)=\lambda\int_{0}^{2\pi}\cos(x-t)\,\varphi(t)\,dt.$$
ANS. $\lambda_{1}=\lambda_{2}=\dfrac{1}{\pi}$, $\quad\varphi(x)=A\cos x+B\sin x$.

⁵ See W. H. Press, B. P. Flannery, S. A. Teukolsky, and W. T. Vetterling, Numerical Recipes, 2nd ed., Cambridge, UK: Cambridge University Press (1992), Chapter 11, for details, references, and computer codes.
The symbolic software Mathematica and Maple also include matrix functions for computing eigenvalues and eigenvectors. 6 P. Linz, On the numerical computation of eigenvalues and eigenvectors of symmetric integral equations. Math. Comput. 24: 905 (1970). 1026 Chapter 16 Integral Equations 16.3.5 Find the eigenvalues and eigenfunctions of 1 y(x) = λ (x − t)2 y(t) dt. −1 Hint. This problem may be treated by the separable kernel method or by a Legendre expansion. 16.3.6 If the separable kernel technique of this section is applied to a Fredholm equation of the first kind (Eq. (16.1)), show that Eq. (16.76) is replaced by c = A−1 b. In general the solution for the unknown ϕ(t) is not unique. 16.3.7 Solve ψ(x) = x + 1 (1 + xt)ψ(t) dt 0 by each of the following methods: (a) (b) (c) 16.3.8 the Neumann series technique, the separable kernel technique, educated guessing. Use the separable kernel technique to show that π cos x sin tψ(t) dt ψ(x) = λ 0 has no solution (apart from the trivial ψ = 0). Explain this result in terms of separability and symmetry. 16.3.9 Solve ϕ(x) = 1 + λ2 0 x (x − t)ϕ(t) dt by each of the following methods: (a) reduction to an ODE (find the boundary conditions), (b) the Neumann series, (c) the use of Laplace transforms. ANS. ϕ(x) = cosh λx. 16.3.10 (a) (b) In Eq. (16.69a) take V = V0 , independent of t. Without using Eq. (16.69b), show that Eq. (16.69a) leads directly to  i U (t − t0 ) = exp − (t − t0 )V0 . h¯ Repeat for Eq. (16.69b) without using Eq. (16.69a). 16.3 Neumann Series, Separable (Degenerate) Kernels 16.3.11 16.3.12 1027 1 Given ϕ(x) = λ 0 (1 + xt)ϕ(t) dt, solve for the eigenvalues and the eigenfunctions by the separable kernel technique. Knowing the form of the solutions can be a great advantage, for the integral equation 1 (1 + xt)ϕ(t) dt, ϕ(x) = λ 0 assume ϕ(x) to have the form 1 + bx. Substitute into the integral equation. Integrate and solve for b and λ. 
16.3.13 The integral equation ϕ(x) = λ 1 J0 (αxt)ϕ(t) dt, 0 J0 (α) = 0, is approximated by ϕ(x) = λ 0 1 1 − x 2 t 2 ϕ(t) dt. Find the minimum eigenvalue λ and the corresponding eigenfunction ϕ(t) of the approximate equation. ANS. λmin = 1.112486, 16.3.14 ϕ(x) = 1 − 0.303337x 2 . You are given the integral equation ϕ(x) = λ 1 sin πxtϕ(t) dt. 0 Approximate the kernel by K(x, t) = 4xt (1 − xt) ≈ sin πxt. Find the positive eigenvalue and the corresponding eigenfunction for the approximate integral equation. Note. For K(x, t) = sin πxt, λ = 1.6334. ANS. λ = 1.5678, ϕ(x) = x −√0.6955x 2 √ (λ+ = 31 − 4, λ− = − 31 − 4). 16.3.15 The equation f (x) = has a degenerate kernel K(x, t) = (a) n b K(x, t)ϕ(t) dt a i=1 Mi (x)Ni (t). Show that this integral equation has no solution unless f (x) can be written as f (x) = with the fi constants. n  i=1 fi Mi (x), 1028 Chapter 16 Integral Equations (b) Show that to any solution ϕ(x) we may add ψ(x), provided ψ(x) is orthogonal to all Ni (x): b Ni (x)ψ(x) dx = 0 for all i. a 16.3.16 Using numerical quadrature, convert 1 ϕ(x) = λ J0 (αxt)ϕ(t) dt, 0 J0 (α) = 0, to a set of simultaneous linear equations. (a) (b) Find the minimum eigenvalue λ. Determine ϕ(x) at discrete values of x and plot ϕ(x) versus x. Compare with the approximate eigenfunction of Exercise 16.3.13. ANS. (a) λmin = 1.14502. 16.3.17 Using numerical quadrature, convert ϕ(x) = λ 1 sin πxtϕ(t) dt 0 to a set of simultaneous linear equations. (a) (b) Find the minimum eigenvalue λ. Determine ϕ(x) at discrete values of x and plot ϕ(x) versus x. Compare with the approximate eigenfunction of Exercise 16.3.14. ANS. (a) λmin = 1.6334. 16.3.18 Given a homogeneous Fredholm equation of the second kind 1 λϕ(x) = K(x, t)ϕ(t) dt. 0 (a) Calculate the largest eigenvalue λ0 . Use the 10-point Gauss–Legendre quadrature technique. For comparison the eigenvalues listed by Linz are given as λexact . (b) Tabulate ϕ(xk ), where the xk are the 10 evaluation points in [0, 1]. 
(c) Tabulate the ratio 1 1 K(x, t)ϕ(t) dt for x = xk . λ0 ϕ(x) 0 This is the test of whether or not you really have a solution. (a) K(x, t) = ext . ANS. λexact = 1.35303. 16.4 Hilbert–Schmidt Theory (b) K(x, t) = +1 2 x(2 − t), 1 2 t (2 − x), 1029 x < t, x > t. ANS. λexact = 0.24296. (c) (d) K(x, t) = |x − t|. K(x, t) = + ANS. λexact = 0.34741. x, x < t, t, x > t. ANS. λexact = 0.40528. Note. (1) The evaluation points xi of Gauss–Legendre quadrature for [−1, 1] may be linearly transformed into [0, 1],  xi [0, 1] = 12 xi [−1, 1] + 1 . Then the weighting factors Ai are reduced in proportion to the length of the interval: Ai [0, 1] = 21 Ai [−1, 1]. 16.3.19 Using the matrix variational technique of Exercise 17.8.7, refine your calculation of the eigenvalue of Exercise 16.3.18(c) [K(x, t) = |x − t|]. Try a 40 × 40 matrix. Note. Your matrix should be symmetric so that the (unknown) eigenvectors will be orthogonal. ANS. (40-point Gauss–Legendre quadrature) 0.34727. 16.4 HILBERT–SCHMIDT THEORY Symmetrization of Kernels This is the development of the properties of linear integral equations (Fredholm type) with symmetric kernels: K(x, t) = K(t, x). (16.85) Before plunging into the theory, we note that some important nonsymmetric kernels can be symmetrized. If we have the equation b ϕ(x) = f (x) + λ K(x, t)ρ(t)ϕ(t) dt, (16.86) a the total kernel is actually K(x, t)ρ(t), clearly not symmetric if K(x, t) alone is symmetric. √ However, if we multiply Eq. (16.86) by ρ(x) and substitute ρ(x)ϕ(x) = ψ(x), (16.87) 1030 Chapter 16 Integral Equations we obtain ψ(x) = ρ(x)f (x) + λ a K(x, t) ρ(x)ρ(t) ψ(t) dt, b (16.88) √ with a symmetric total kernel K(x, t) ρ(x)ρ(t). We shall meet ρ(x) later as a positive weighting factor in this integral equation Sturm–Liouville theory. Orthogonal Eigenfunctions We now focus on the homogeneous Fredholm equation of the second kind: b ϕ(x) = λ K(x, t)ϕ(t) dt. (16.89) a We assume that the kernel K(x, t) is symmetric and real. 
Perhaps one of the first questions we might ask about the equation is: “Does it make sense?” or more precisely, “Does an eigenvalue λ satisfying this equation exist?” With the aid of the Schwarz and Bessel inequalities, Chapter 10 and Courant and Hilbert (Chapter III, Section 4 — see the Additional Readings) show that if K(x, t) is continuous, there is at least one such eigenvalue and possibly an infinite number of them. We show that the eigenvalues, λ, are real and that the corresponding eigenfunctions, ϕi (x), are orthogonal. Let λi , λj be two different eigenvalues and ϕi (x), ϕj (x) be the corresponding eigenfunctions. Equation (16.89) then becomes b ϕi (x) = λi K(x, t)ϕi (t) dt, (16.90a) a ϕj (x) = λj b K(x, t)ϕj (t) dt. (16.90b) a If we multiply Eq. (16.90a) by λj ϕj (x) and Eq. (16.90b) by λi ϕi (x) and then integrate with respect to x, the two equations become7 b b b λj ϕi (x)ϕj (x) dx = λi λj K(x, t)ϕi (t)ϕj (x) dt dx, (16.91a) a λi a b a ϕi (x)ϕj (x) dx = λi λj a b a b K(x, t)ϕj (t)ϕi (x) dt dx, (16.91b) a Since we have demanded that K(x, t) by symmetric, Eq. (16.91b) may be rewritten as b b b λi ϕi (x)ϕj (x) dx = λi λj K(x, t)ϕi (t)ϕj (x) dt dx. (16.92) a a a Subtracting Eq. (16.92) from Eq. (16.91a), we obtain b (λj − λi ) ϕi (x)ϕj (x) dx = 0. a 7 We assume that the necessary integrals exist. For an example of a simple pathological case, see Exercise 16.4.3. (16.93) 16.4 Hilbert–Schmidt Theory 1031 This has the same form as Eq. (10.34) in the Sturm–Liouville theory. Since λi = λj , b ϕi (x)ϕj (x) dx = 0, i = j, (16.94) a proving orthogonality. Note that with a real symmetric kernel, no complex conjugates are involved in Eq. (16.94). For the self-adjoint or Hermitian kernel, see Exercise 16.4.1. If the eigenvalue λi is degenerate,8 the eigenfunctions for that particular eigenvalue may be orthogonalized by the Gram–Schmidt method (Section 10.3). Our orthogonal eigenfunctions may, of course, be normalized, and we assume that this has been done. 
The result is b ϕi (x)ϕj (x) dx = δij . (16.95) a To demonstrate that the λi are real, we need to admit complex conjugates. Taking the complex conjugate of Eq. (16.90a), we have b ϕi∗ (x) = λ∗i K(x, t)ϕi∗ (t) dt, (16.96) a provided the kernel K(x, t) is real. Now, using Eq. (16.96) instead of Eq. (16.90b), we see that the analysis leads to b ϕi∗ (x)ϕi (x) dx = 0. (16.97) (λ∗i − λi ) a This time the integral cannot vanish (unless we have the trivial solution, ϕi (x) = 0) and λ∗i = λi , (16.98) or λi , our eigenvalue, is real. This is the third time we have passed this way, first with Hermitian matrices, then with Sturm–Liouville (self-adjoint) ODEs, and now with Hilbert–Schmidt integral equations. The correspondence between the Hermitian matrices and the self-adjoint ODEs shows up in physics as the two outstanding formulations of quantum mechanics — the Heisenberg matrix approach and the Schrödinger differential operator approach. In Section 17.8 and Exercise 17.7.6 we shall explore further the correspondence between the Hilbert–Schmidt symmetric kernel integral equations and the Sturm–Liouville self-adjoint differential equations. The eigenfunctions of our integral equations form a complete set,9 in the sense that any function g(x) that can be generated by the integral g(x) = K(x, t)h(t) dt, (16.99) 8 If more than one distinct eigenfunction corresponds to the same eigenvalue (satisfying Eq. (16.89)), that eigenvalue is said to be degenerate (see Chapters 3 and 4). 9 For a proof of this statement, see Courant and Hilbert (1953), Chapter III, Section 5, in the Additional Readings. 1032 Chapter 16 Integral Equations in which h(t) is any piecewise continuous function, can be represented by a series of eigenfunctions, g(x) = ∞  an ϕn (x). (16.100) n=1 The series converges uniformly and absolutely. Let us extend this to the kernel K(x, t) by asserting that K(x, t) = ∞  an ϕn (t), (16.101) n=1 and an = an (x). Substituting into the original integral equation (Eq. 
(16.89)) and using the orthogonality integral, we obtain

    ϕ_i(x) = λ_i a_i(x).    (16.102)

Therefore, for our homogeneous Fredholm equation of the second kind, the kernel may be expressed in terms of the eigenfunctions and eigenvalues by

    K(x, t) = Σ_{n=1}^∞ ϕ_n(x) ϕ_n(t) / λ_n    (zero not an eigenvalue).    (16.103)

Here we have a bilinear expansion, linear in ϕ_n(x) and linear in ϕ_n(t). Similar bilinear expansions appear in Section 9.7.

It is possible that the expansion given by Eq. (16.101) may not exist. As an illustration of the sort of pathological behavior that may occur, you are invited to apply this analysis to

    ϕ(x) = λ ∫_0^∞ e^{−xt} ϕ(t) dt

(compare Exercise 16.4.3).

It should be emphasized that this Hilbert–Schmidt theory is concerned with the establishment of properties of the eigenvalues (real) and eigenfunctions (orthogonality, completeness), properties that may be of great interest and value. The Hilbert–Schmidt theory does not solve the homogeneous integral equation for us any more than the Sturm–Liouville theory of Chapter 10 solved the ODEs. The solutions of the integral equation come from Sections 16.2 and 16.3 (including numerical analysis).

Nonhomogeneous Integral Equation

We need a solution of the nonhomogeneous equation

    ϕ(x) = f(x) + λ ∫_a^b K(x, t) ϕ(t) dt.    (16.104)

Let us assume that the solutions of the corresponding homogeneous integral equation are known:

    ϕ_n(x) = λ_n ∫_a^b K(x, t) ϕ_n(t) dt,    (16.105)

the solution ϕ_n(x) corresponding to the eigenvalue λ_n. We expand both ϕ(x) and f(x) in terms of this set of eigenfunctions:

    ϕ(x) = Σ_{n=1}^∞ a_n ϕ_n(x)    (a_n unknown),    (16.106)
    f(x) = Σ_{n=1}^∞ b_n ϕ_n(x)    (b_n known).    (16.107)

Substituting into Eq. (16.104), we obtain

    Σ_{n=1}^∞ a_n ϕ_n(x) = Σ_{n=1}^∞ b_n ϕ_n(x) + λ ∫_a^b K(x, t) Σ_{n=1}^∞ a_n ϕ_n(t) dt.    (16.108)

By interchanging the order of integration and summation, we may evaluate the integral by Eq. (16.105), and we get

    Σ_{n=1}^∞ a_n ϕ_n(x) = Σ_{n=1}^∞ b_n ϕ_n(x) + λ Σ_{n=1}^∞ (a_n / λ_n) ϕ_n(x).
(16.109)

If we multiply by ϕ_i(x) and integrate from x = a to x = b, the orthogonality of our eigenfunctions leads to

    a_i = b_i + λ a_i / λ_i.    (16.110)

This can be rewritten as

    a_i = b_i + λ b_i / (λ_i − λ),    (16.111)

which brings us to our solution

    ϕ(x) = f(x) + λ Σ_{i=1}^∞ [∫_a^b f(t) ϕ_i(t) dt / (λ_i − λ)] ϕ_i(x).    (16.112)

Here it is assumed that the eigenfunctions ϕ_i(x) are normalized to unity. Note that if f(x) = 0, there is no solution unless λ = λ_i. This means that our homogeneous equation has no solution (except the trivial ϕ(x) = 0) unless λ is an eigenvalue, λ_i.

In the event that λ for the nonhomogeneous equation (16.104) is equal to one of the eigenvalues λ_p of the homogeneous equation, our solution (Eq. (16.112)) blows up. To repair the damage we return to Eq. (16.110) and give the value

    a_p = b_p + λ_p a_p / λ_p = b_p + a_p    (16.113)

special attention. Clearly, a_p drops out and is no longer determined by b_p, whereas b_p = 0. This implies that ∫ f(x) ϕ_p(x) dx = 0; that is, f(x) is orthogonal to the eigenfunction ϕ_p(x). If this is not the case, we have no solution.

Equation (16.111) still holds for i ≠ p, so we multiply by ϕ_i(x) and sum over i (i ≠ p) to obtain

    ϕ(x) = f(x) + a_p ϕ_p(x) + λ_p Σ_{i≠p} [∫_a^b f(t) ϕ_i(t) dt / (λ_i − λ_p)] ϕ_i(x).    (16.114)

In this solution the a_p remains as an undetermined constant.10

Exercises

16.4.1 In the Fredholm equation

    ϕ(x) = λ ∫_a^b K(x, t) ϕ(t) dt

the kernel K(x, t) is self-adjoint or Hermitian: K(x, t) = K*(t, x). Show that

(a) the eigenfunctions are orthogonal, in the sense that

    ∫_a^b ϕ_m*(x) ϕ_n(x) dx = 0,    m ≠ n (λ_m ≠ λ_n),

(b) the eigenvalues are real.

16.4.2 Solve the integral equation

    ϕ(x) = x + (1/2) ∫_{−1}^{1} (t + x) ϕ(t) dt

(compare Exercise 16.3.2) by the Hilbert–Schmidt method.
Note. The application of the Hilbert–Schmidt technique here is somewhat like using a shotgun to kill a mosquito, especially when the equation can be solved quickly by expanding in Legendre polynomials.
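Exercise 16.4.2 also makes a concrete test of Eq. (16.112). The sketch below (my own illustration, assuming NumPy) computes quadrature eigenpairs of the symmetrized kernel matrix and assembles the eigenfunction-expansion solution; for this kernel the two eigenpairs are λ = ±√3 with normalized ϕ = (√3 x ± 1)/2, and the resulting ϕ(x) = (3x + 1)/2 can be checked by direct substitution into the integral equation.

```python
import numpy as np

def hilbert_schmidt_solve(f, kernel, lam, a=-1.0, b=1.0, n=40, tol=1e-10):
    """Solve phi(x) = f(x) + lam * int_a^b K(x,t) phi(t) dt via the
    eigenfunction expansion, Eq. (16.112), using quadrature eigenpairs."""
    x, w = np.polynomial.legendre.leggauss(n)
    x = 0.5 * (b - a) * x + 0.5 * (b + a)      # nodes mapped to [a, b]
    w = 0.5 * (b - a) * w
    s = np.sqrt(w)
    S = s[:, None] * kernel(x[:, None], x[None, :]) * s[None, :]
    mu, V = np.linalg.eigh(S)                  # operator eigenvalues mu_i = 1/lam_i
    keep = np.abs(mu) > tol                    # drop the numerical null space
    phis = V[:, keep] / s[:, None]             # eigenfunctions at the nodes, unit norm
    lam_i = 1.0 / mu[keep]
    bi = phis.T @ (w * f(x))                   # b_i = int f(t) phi_i(t) dt, Eq. (16.107)
    return x, f(x) + lam * phis @ (bi / (lam_i - lam))

# Exercise 16.4.2: phi(x) = x + (1/2) int_{-1}^{1} (t + x) phi(t) dt
x, phi = hilbert_schmidt_solve(lambda x: x, lambda x, t: 0.5 * (x + t), lam=1.0)
print(np.max(np.abs(phi - (3 * x + 1) / 2)))   # agreement to near machine precision
```

Because the kernel here has rank 2, the quadrature eigenpairs are exact and the expansion terminates after two terms.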
16.4.3 Solve the Fredholm integral equation ϕ(x) = λ ∞ e−xt ϕ(t) dt. 0 Note. A series expansion of the kernel e−xt would permit a separable kernel-type solution (Section 16.3), except that the series is infinite. This suggests an infinite number of eigenvalues and eigenfunctions. If you stop with ϕ(x) = x −1/2 , λ = π −1/2 , 10 This is like the inhomogeneous linear ODE. We may add to its solution any constant times a solution of the corresponding homogeneous ODE. 16.4 Hilbert–Schmidt Theory 1035 you will have missed most of the solutions. Show that the normalization integrals of the eigenfunctions do not exist. A basic reason for this anomalous behavior is that the range of integration is infinite, making this a “singular” integral equation. 16.4.4 Given y(x) = x + λ (a) (b) 1 xty(t) dt. 0 Determine y(x) as a Neumann series. Find the range of λ for which your Neumann series solution is convergent. Compare with the value obtained from |λ| · |K|max < 1. (c) Find the eigenvalue and the eigenfunction of the corresponding homogeneous integral equation. (d) By the separable kernel method show that the solution is y(x) = (e) 16.4.5 3x . 3−λ Find y(x) by the Hilbert–Schmidt method. In Exercise 16.3.4, K(x, t) = cos(x − t). The (unnormalized) eigenfunctions are cos x and sin x. (a) Show that there is a function h(t) such that K(x, s), considered as a function of s alone, may be written as 2π K(x, s) = K(s, t)h(t) dt. 0 (b) Show that K(x, t) may be expanded as K(x, t) = 16.4.6 2  ϕn (x)ϕn (t) n=1 λn . 1 The integral equation ϕ(x) = λ 0 (1+xt)ϕ(t) dt has eigenvalues λ1 = 0.7889 and λ2 = 15.211 and eigenfunctions ϕ1 = 1 + 0.5352x and ϕ2 = 1 − 1.8685x. (a) (b) (c) Show that these eigenfunctions are orthogonal over the interval [0, 1]. Normalize the eigenfunctions to unity. Show that K(x, t) = ϕ1 (x)ϕ1 (t) ϕ2 (x)ϕ2 (t) + . λ1 λ2 1036 Chapter 16 Integral Equations ANS. (b) ϕ1 (x) = 0.7831 + 0.4191x ϕ2 (x) = 1.8403 − 3.4386x. 
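The data quoted in Exercise 16.4.6 make a convenient numerical check of the orthonormality relation (16.95) and the bilinear expansion (16.103). A short sketch, assuming NumPy; the helper name is mine:

```python
import numpy as np

def trap(f, x):                       # simple trapezoidal rule
    return float(np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(x)))

# Normalized eigen-data quoted in Exercise 16.4.6 for K(x,t) = 1 + x*t on [0, 1]
lams = (0.7889, 15.211)
phis = (lambda u: 0.7831 + 0.4191 * u,
        lambda u: 1.8403 - 3.4386 * u)

x = np.linspace(0.0, 1.0, 401)
X, T = np.meshgrid(x, x)

K_series = sum(p(X) * p(T) / l for p, l in zip(phis, lams))   # Eq. (16.103)
err = np.max(np.abs(K_series - (1.0 + X * T)))                # vs. the exact kernel
ortho = trap(phis[0](x) * phis[1](x), x)                      # Eq. (16.95), i != j
norm1 = trap(phis[0](x) ** 2, x)                              # Eq. (16.95), i == j
print(err, ortho, norm1)   # all consistent with Eqs. (16.95) and (16.103)
```

The residuals are at the level of the four significant figures quoted in the exercise, as they should be.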
16.4.7 An alternate form of the solution to the nonhomogeneous integral equation, Eq. (16.104), is ∞  bi λi ϕ(x) = ϕi (x). λi − λ i=1 (a) (b) 16.4.8 Derive this form without using Eq. (16.112). Show that this form and Eq. (16.112) are equivalent. (a) Show that the eigenfunctions of Exercise 16.3.5 are orthogonal. (b) Show that the eigenfunctions of Exercise 16.3.11 are orthogonal. Additional Readings Bocher, M., An Introduction to the Study of Integral Equations, Cambridge Tracts in Mathematics and Mathematical Physics, No. 10. New York: Hafner (1960). This is a helpful introduction to integral equations. Cochran, J. A., The Analysis of Linear Integral Equations. New York: McGraw-Hill (1972). This is a comprehensive treatment of linear integral equations which is intended for applied mathematicians and mathematical physicists. It assumes a moderate to high level of mathematical competence on the part of the reader. Courant, R., and D. Hilbert, Methods of Mathematical Physics, Vol.1 (English edition). New York: Interscience (1953). This is one of the classic works of mathematical physics. Originally published in German in 1924, the revised English edition is an excellent reference for a rigorous treatment of integral equations, Green’s functions, and a wide variety of other topics on mathematical physics. Golberg, M. A., ed., Solution Methods of Integral Equations. New York: Plenum Press (1979). This is a set of papers from a conference on integral equations. The initial chapter is excellent for up-to-date orientation and a wealth of references. Kanval, R. P., Linear Integral Equations. New York: Academic Press (1971), reprinted, Birkhäuser (1996). This book is a detailed but readable treatment of a variety of techniques for solving linear integral equations. Morse, P. M., and H. Feshbach, Methods of Theoretical Physics. New York: McGraw-Hill (1953). 
Chapter 7 is a particularly detailed, complete discussion of Green's functions from the point of view of mathematical physics. Note, however, that Morse and Feshbach frequently choose a source of 4πδ(r − r′) in place of our δ(r − r′). Considerable attention is devoted to bounded regions.

Muskhelishvili, N. I., Singular Integral Equations, 2nd ed. New York: Dover (1992).

Stakgold, I., Green's Functions and Boundary Value Problems. New York: Wiley (1979).

CHAPTER 17 CALCULUS OF VARIATIONS

Uses of the Calculus of Variations

We now address problems where we search for a function or curve, rather than a value of some variable, that makes a given quantity stationary, usually an energy or action integral. Because a function is varied, these problems are called variational. Variational principles, such as D'Alembert's and Hamilton's, have been developed in classical mechanics, and Lagrangian techniques occur in quantum mechanics and field theory, for example, Fermat's principle of the shortest optical path in electrodynamics. Before plunging into this rather different branch of mathematical physics, let us summarize some of its uses in both physics and mathematics.

1. In existing physical theories:
   a. Unification of diverse areas of physics using energy as a key concept.
   b. Convenience in analysis — Lagrange equations, Section 17.3.
   c. Elegant treatment of constraints, Section 17.7.

2. Starting point for new, complex areas of physics and engineering. In general relativity the geodesic is taken as the minimum path of a light pulse or the free-fall path of a particle in curved Riemannian space (see geodesics in Section 2.10). Variational principles appear in quantum field theory. Variational principles have been applied extensively in control theory.

3. Mathematical unification. Variational analysis provides a proof of the completeness of the Sturm–Liouville eigenfunctions, Chapter 10, and establishes a lower bound for the eigenvalues.
Similar results follow for the eigenvalues and eigenfunctions of the Hilbert–Schmidt integral equation, Section 16.4. 4. Calculation techniques, Section 17.8. Calculation of the eigenfunctions and eigenvalues of the Sturm–Liouville equation. Integral equation eigenfunctions and eigenvalues may be calculated using numerical quadrature and matrix techniques, Section 16.3. 1037 1038 17.1 Chapter 17 Calculus of Variations A DEPENDENT AND AN INDEPENDENT VARIABLE Concept of Variation The calculus of variations involves problems in which the quantity to be minimized (or maximized) appears as a stationary integral, a functional, because a function y(x, α) needs to be determined from a class described by an infinitesimal parameter α. As the simplest case, let x2 f (y, yx , x) dx. (17.1) J= x1 Here J is the quantity that takes on a stationary value. Under the integral sign, f is a known function of the indicated variables x and α, as are y(x, α), yx (x, α) ≡ ∂y(x, α)/∂x, but the dependence of y on x (and α) is not yet known; that is, y(x) is unknown. This means that although the integral is from x1 to x2 , the exact path of integration is not known (Fig. 17.1). We are to choose the path of integration through points (x1 , y1 ) and (x2 , y2 ) to minimize J . Strictly speaking, we determine stationary values of J : minima, maxima, or saddle points. In most cases of physical interest the stationary value will be a minimum. This problem is considerably more difficult than the corresponding problem of a function y(x) in differential calculus. Indeed, there may be no solution. In differential calculus the minimum is determined by comparing y(x0 ) with y(x), where x ranges over neighboring points. Here we assume the existence of an optimum path, that is, an acceptable path for which J is stationary, and then compare J for our (unknown) optimum path with that obtained from neighboring paths. In Fig. 17.1 two possible paths are shown. (There are an infinite number of possibilities.) 
The difference between these two for a given x is called the variation of y, δy, and is conveniently described by introducing a new function, η(x), to define the arbitrary deformation of the path and a scale factor, α, to give the magnitude of the variation. The function η(x) is arbitrary except for two restrictions. First, η(x1 ) = η(x2 ) = 0, FIGURE 17.1 A varied path. (17.2) 17.1 A Dependent and an Independent Variable 1039 which means that all varied paths must pass through the fixed endpoints. Second, as will be seen shortly, η(x) must be differentiable; that is, we may not use η(x) = 1, x = x0 , = 0, x = x0 , (17.3) but we can choose η(x) to have a form similar to the functions used to represent the Dirac delta function (Chapter 1) so that η(x) differs from zero only over an infinitesimal region.1 Then, with the path described by α and η(x), y(x, α) = y(x, 0) + αη(x) (17.4) δy = y(x, α) − y(x, 0) = αη(x). (17.5) and Let us choose y(x, α = 0) as the unknown path that will minimize J . Then y(x, α) for nonzero α describes a neighboring path. In Eq. (17.1), J is now a function2 of our parameter α: x2  f y(x, α), yx (x, α), x dx, (17.6) J (α) = x1 and our condition for an extreme value is that  ∂J (α) = 0, ∂α α=0 (17.7) analogous to the vanishing of the derivative dy/dx in differential calculus. Now, the α-dependence of the integral is contained in y(x, α) and yx (x, α) = (∂/∂x)y(x, α). Therefore3  x2 ∂f ∂y ∂f ∂yx ∂J (α) dx. (17.8) = + ∂α ∂y ∂α ∂yx ∂α x1 From Eq. (17.4), ∂y(x, α) = η(x), ∂α dη(x) ∂yx (x, α) = , ∂α dx (17.9) (17.10) so Eq. (17.8) becomes ∂J (α) = ∂α x2  ∂f x1 ∂y η(x) +  ∂f dη(x) dx. ∂yx dx (17.11) 1 Compare H. Jeffreys and B. S. Jeffreys, Methods of Mathematical Physics, 3rd ed., Cambridge, UK: Cambridge University Press (1966), Chapter 10, for a more complete discussion of this point. 2 Technically, J is a functional of y, y , but a function of α depending on the functions y(x, α) and y (x, α) : J [y(x, α), x x yx (x, α)]. 
3 Note that y and y are being treated as independent variables. x 1040 Chapter 17 Calculus of Variations Integrating the second term by parts to get η(x) as a common and arbitrary nonvanishing factor, we obtain  x2 x2 dη(x) ∂f ∂f x2 d ∂f dx = η(x) dx. (17.12) − η(x) dx ∂y ∂y  dx ∂y x1 x x x1 x1 x The integrated part vanishes by Eq. (17.2), and Eq. (17.11) becomes  x2 ∂f d ∂f η(x) dx = 0. − ∂y dx ∂yx x1 (17.13) In this form α has been set equal to zero, corresponding to the solution path, and, in effect, is no longer part of the problem. Occasionally we will see Eq. (17.13) multiplied by δα, which gives, upon using η(x)δα = δy,   x2  ∂J ∂f d ∂f δy dx = δα − = δJ = 0. (17.14) ∂y dx ∂y ∂α α=0 x x1 Since η(x) is arbitrary, we may choose it to have the same sign as the bracketed expression in Eq. (17.13) whenever the latter differs from zero. Hence the integrand is always nonnegative. Equation (17.13), our condition for the existence of a stationary value, can then be satisfied only if the bracketed term itself is zero almost everywhere. The condition for our stationary value is thus a PDE,4 d ∂f ∂f − = 0, ∂y dx ∂yx (17.15) known as the Euler equation, which can be expressed in various other forms. Sometimes solutions are missed when they are not twice differentiable, as required by Eq. (17.15). An example is Goldschmidt’s discontinuous solution of Section 17.2. It is clear that Eq. (17.15) must be satisfied for J to take on a stationary value, that is, for Eq. (17.14) to be satisfied. Equation (17.15) is necessary, but it is by no means sufficient.5 Courant and Robbins (1996; see the Additional Readings) illustrate this very nicely by considering the distance over a sphere between points on the sphere, A and B, Fig. 17.2. Path (1), a great circle, is found from Eq. (17.15). But path (2), the remainder of the great circle through points A and B, also satisfies the Euler equation. 
Path (2) is a maximum, but only if we demand that it be a great circle and then only if we make less than one circuit; that is, path (2) +n complete revolutions is also a solution. If the path is not required to be a great circle, any deviation from (2) will increase the length. This is hardly the property of a local maximum, and that is why it is important to check the properties of solutions of Eq. (17.15) to see if they satisfy the physical conditions of the given problem. 4 It is important to watch the meaning of ∂/∂x and d/dx closely. For example, if f = f [y(x), y , x], x df ∂f ∂f dy ∂f d 2 y . = + + dx ∂x ∂y dx ∂yx dx 2 The first term on the right gives the explicit x-dependence. The second and third terms give the implicit x-dependence via y and yx . 5 For a discussion of sufficiency conditions and the development of the calculus of variations as a part of mathematics, see G. M. Ewing, Calculus of Variations with Applications, New York: Norton (1969). Sufficiency conditions are also covered by Sagan (in the Additional Readings at the end of this chapter). 17.1 A Dependent and an Independent Variable 1041 FIGURE 17.2 Stationary paths over a sphere. Example 17.1.1 OPTICAL PATH NEAR EVENT HORIZON OF A BLACK HOLE Determine the optical path in an atmosphere where the velocity of light increases in proportion to the height, v(y) = y/b, with b > 0 some parameter describing the light speed. So v = 0 at y = 0, which simulates the conditions at the surface of a black hole, called its event horizon, where the gravitational force is so strong that the velocity of light goes to zero, thus even trapping light. Because light takes the shortest time, the variational problem takes the form t2 2 dx + dy 2 ds t = =b dt = minimum. dt = v y t1 Here v = ds/dt = y/b is the velocity of light in this environment, the y coordinate being the height. A look at the variational functional suggests choosing y as the independent variable because x does not appear in the integrand. 
We can bring dy outside the radical and change the role of x and y in J of Eq. (17.1) and the resulting Euler equation. With x = x(y), x ′ = dx/dy, we obtain √ ′2 x +1 dy = minimum, b y and the Euler equation becomes ∂f d ∂f − = 0. ∂x dy ∂x ′ Since ∂f/∂x = 0, this can be integrated, giving x′ = C1 = const., √ y x ′2 + 1 or  x ′2 = C12 y 2 x ′ 2 + 1 . Separating dx and dy in this first-order ODE we find the integral x y C1 y dy  , dx = 1 − C12 y 2 1042 Chapter 17 Calculus of Variations FIGURE 17.3 Circular optical path in medium. which yields x + C2 = −1 1 − C12 y 2 , C1 or (x + C2 )2 + y 2 = 1 . C12 This is a circular light path with center on the x-axis along the event horizon. (See Fig. 17.3.) This example may be adapted to a mirage (Fata Morgana) in a desert with hot air near the ground and cooler air aloft (the index of refraction changes with height in cool versus hot air), thus changing the velocity law from v = y/b → v0 − y/b. In this case, the circular light path is no longer convex with center on the x-axis, but becomes concave.  Alternate Forms of Euler Equations One other form (Exercise 17.1.1), which is often useful, is   d ∂f ∂f = 0. − f − yx ∂x dx ∂yx (17.16) In problems in which f = f (y, yx ), that is, in which x does not appear explicitly, Eq. (17.16) reduces to   ∂f d f − yx = 0, (17.17) dx ∂yx or ∂f f − yx = constant. (17.18) ∂yx Example 17.1.2 Missing Dependent Variables Consider the variational problem f (˙r) dt = minimum. Here r is absent from the integrand. Therefore the Euler equations become d ∂f = 0, dt ∂ x˙ d ∂f = 0, dt ∂ y˙ d ∂f = 0, dt ∂ z˙ 17.1 A Dependent and an Independent Variable 1043 with r = (x, y, z), so fr˙ = c = const. Solving these three equations for the three unknowns x, ˙ y, ˙ z˙ yields r˙ = c1 = const. Integrating this constant velocity gives r = c1 t + c2 . The solutions are straight lines, despite the general nature of the function f . 
A physical example illustrating this case is the propagation of light in a crystal, where the velocity of light depends on the (crystal) directions but not on the location in the crystal, because a crystal is an anisotropic homogeneous medium. The variational problem √ 2 ds r˙ = dt = minimum v v(˙r) has the form of our example. Note that t need not be the time, but it parameterizes the light path.  Exercises 17.1.1 For dy/dx ≡ yx = 0, show the equivalence of the two forms of Euler’s equation: d ∂f ∂f − =0 ∂x dx ∂yx and 17.1.2   ∂f d ∂f f − yx = 0. − ∂y dx ∂yx Derive Euler’s equation by expanding the integrand of x2  J (α) = f y(x, α), yx (x, α), x dx x1 in powers of α, using a Taylor (Maclaurin) expansion with y and yx as the two variables (Section 5.6). Note. The stationary condition is ∂J (α)/∂α = 0, evaluated at α = 0. The terms quadratic in α may be useful in establishing the nature of the stationary solution (maximum, minimum, or saddle point). 17.1.3 Find the Euler equation corresponding to Eq. (17.15) if f = f (yxx , yx , y, x).     d ∂f ∂f ∂f d2 − + = 0, ANS. 2 dx ∂yx ∂y dx ∂yxx η(x1 ) = η(x2 ) = 0, ηx (x1 ) = ηx (x2 ) = 0. 17.1.4 The integrand f (y, yx , x) of Eq. (17.1) has the form f (y, yx , x) = f1 (x, y) + f2 (x, y)yx . (a) Show that the Euler equation leads to ∂f1 ∂f2 − = 0. ∂y ∂x (b) What does this imply for the dependence of the integral J upon the choice of path? 1044 Chapter 17 Calculus of Variations 17.1.5 Show that the condition that J= f (x, y) dx has a stationary value (a) (b) leads to f (x, y) independent of y and yields no information about any x-dependence. We get no (continuous, differentiable) solution. To be a meaningful variational problem, dependence on y or higher derivatives is essential. Note. The situation will change when constraints are introduced (compare Exercise 17.7.7). 
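The variational machinery of Eqs. (17.4) to (17.7) can also be tested numerically, here on the arc-length functional J[y] = ∫ (1 + y_x²)^(1/2) dx with a straight trial path and the deformation η(x) = sin πx (a toy choice of mine, admissible because it vanishes at both endpoints): ∂J/∂α vanishes at α = 0, while neighboring paths give larger J. A sketch assuming NumPy:

```python
import numpy as np

# Varied path y(x, a) = y0(x) + a*eta(x), Eq. (17.4), with eta(0) = eta(1) = 0
x = np.linspace(0.0, 1.0, 2001)
y0 = 2.0 * x                      # straight line from (0, 0) to (1, 2)
eta = np.sin(np.pi * x)           # admissible deformation

def J(a):
    """Discretized arc-length functional J[y(x, a)] = int sqrt(1 + y_x^2) dx."""
    y = y0 + a * eta
    yx = np.gradient(y, x)
    f = np.sqrt(1.0 + yx**2)
    return float(np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(x)))  # trapezoidal rule

h = 1e-4
dJ = (J(h) - J(-h)) / (2 * h)     # dJ/da at a = 0, Eq. (17.7)
print(dJ)                          # approximately 0
print(J(0.3) > J(0.0), J(-0.3) > J(0.0))   # True True: the line is a minimum
```

This is the discrete counterpart of Eq. (17.7): the first variation vanishes on the solution path, and the second-order terms are positive for this functional.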
17.2 APPLICATIONS OF THE EULER EQUATION Example 17.2.1 STRAIGHT LINE Perhaps the simplest application of the Euler equation is in the determination of the shortest distance between two points in the Euclidean xy-plane. Since the element of distance is  1/2  1/2 ds = (dx)2 + (dy)2 = 1 + yx2 dx, (17.19) the distance J may be written as J= x2 ,y2 x1 ,y1 ds = Comparison with Eq. (17.1) shows that x2 x1  1/2 1 + yx2 dx. 1/2  f (y, yx , x) = 1 + yx2 . Substituting into Eq. (17.16), we obtain  d 1 − = 0, dx (1 + yx2 )1/2 or 1 = C, a constant. (1 + yx2 )1/2 (17.20) (17.21) (17.22) (17.23) This is satisfied by yx = a, a second constant, (17.24) and y = ax + b, (17.25) which is the familiar equation for a straight line. The constants a and b are chosen so that the line passes through the two points (x1 , y1 ) and (x2 , y2 ). Hence the Euler equation 17.2 Applications of the Euler Equation 1045 predicts that the shortest6 distance between two fixed points in Euclidean space is a straight line.  The generalization of this in curved four-dimensional space–time leads to the important concept of the geodesic in general relativity (see Section 2.10). Example 17.2.2 SOAP FILM As a second illustration (Fig. 17.4), consider two parallel coaxial wire circles to be connected by a surface of minimum area that is generated by revolving a curve y(x) about the x-axis. The curve is required to pass through fixed endpoints (x1 , y1 ) and (x2 , y2 ). The variational problem is to choose the curve y(x) so that the area of the resulting surface will be a minimum. For the element of area shown in Fig. 17.4,  1/2 dA = 2πy ds = 2πy 1 + yx2 dx. (17.26) The variational equation is then J= Neglecting the 2π , we obtain x2 x1 1/2  2πy 1 + yx2 dx. 1/2  f (y, yx , x) = y 1 + yx2 . FIGURE 17.4 Surface of rotation — soap film problem. 6 Technically, we have a stationary value. From the α 2 terms it can be identified as a minimum (Exercise 17.2.2). 
(17.27) (17.28) 1046 Chapter 17 Calculus of Variations Since ∂f/∂x = 0, we may apply Eq. (17.18) directly and get 1/2  − yyx2 y 1 + yx2 or 1 = c1 , (1 + yx2 )1/2 y = c1 . (1 + yx2 )1/2 (17.29) (17.30) Squaring, we get y2 = c12 1 + yx2 2 with c12 ≤ ymin , (17.31) and (yx )−1 = This may be integrated to give dx c1 . = dy y 2 − c12 (17.32) y + c2 . c1 (17.33)   x − c2 , y = c1 cosh c1 (17.34) x = c1 cosh−1 Solving for y, we have and again c1 and c2 are determined by requiring the hyperbolic cosine to pass through the points (x1 , y1 ) and (x2 , y2 ). Our “minimum”-area surface is a special case of a catenary of revolution, or a catenoid.  Soap Film — Minimum Area This calculus of variations contains many pitfalls for the unwary. (Remember, the Euler equation is a necessary condition assuming a differentiable solution. The sufficiency conditions are quite involved. See the Additional Readings for details.) Respect for some of these hazards may be developed by considering a specific physical problem, for example, a minimum-area problem with (x1 , y1 ) = (−x0 , 1), (x2 , y2 ) = (+x0 , 1). The minimum surface is a soap film stretched between the two rings of unit radius at x = ±x0 . The problem is to predict the curve y(x) assumed by the soap film. By referring to Eq. (17.34), we find that c2 = 0 by the symmetry of the problem about x = 0. Then     x x0 , c1 cosh = 1. (17.34a) y = c1 cosh c1 c1 If we take x0 = 1 2 we obtain a transcendental equation for c1 , viz.   1 . 1 = c1 cosh 2c1 (17.35) 17.2 Applications of the Euler Equation 1047 We find that this equation has two solutions: c1 = 0.2350, leading to a “deep” curve, and c1 = 0.8483, leading to a “flat” curve. Which curve is assumed by the soap film? Before answering this question, consider the physical situation with the rings moved apart so that x0 = 1. Then Eq. (17.34a) becomes   1 , (17.36) 1 = c1 cosh c1 which has no real solutions. 
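The two roots of Eq. (17.35), and the absence of a real root for Eq. (17.36), are easy to confirm with a simple bisection. A sketch in Python; the bracketing intervals are my own choices, read off the sign changes of the residual:

```python
import math

def bisect(f, a, b, tol=1e-12):
    """Bisection root-finder; f must change sign on [a, b]."""
    fa = f(a)
    while b - a > tol:
        m = 0.5 * (a + b)
        if fa * f(m) <= 0.0:
            b = m
        else:
            a, fa = m, f(m)
    return 0.5 * (a + b)

# Eq. (17.34a) for unit rings: residual g(c1) = c1*cosh(x0/c1) - 1
g = lambda c1, x0=0.5: c1 * math.cosh(x0 / c1) - 1.0

deep = bisect(g, 0.05, 0.5)   # "deep" curve, c1 near the quoted 0.2350
flat = bisect(g, 0.5, 1.0)    # "flat" curve, c1 near the quoted 0.8483
print(round(deep, 4), round(flat, 4))

# For x0 = 1 the residual stays positive for every c1: Eq. (17.36) has no real root.
print(min(g(c / 1000.0, 1.0) for c in range(2, 3001)) > 0.0)   # True
```

The scan in the last line is crude but sufficient: the minimum of c1 cosh(1/c1) is about 1.509, comfortably above 1.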
The physical significance is that as the unit-radius rings were moved out from the origin, a point was reached at which the soap film could no longer maintain the same horizontal force over each vertical section. Stable equilibrium was no longer possible. The soap film broke (irreversible process) and formed a circular film over each ring (with a total area of 2π = 6.2832 . . .). This is the Goldschmidt discontinuous solution. The next question is: How large may x0 be and still give a real solution for Eq. (17.34a)?7 Letting c1−1 = p, Eq. (17.34a) becomes p = cosh px0 . (17.37) To find x0 max we could solve for x0 (as in Eq. (17.33)) and then differentiate with respect to p. Finally, with an eye on Fig. 17.5, dx0 /dp would be set equal to zero. Alternatively, direct differentiation of Eq. (17.37) with respect to p yields  dx0 sinh px0 . 1 = x0 + p dp FIGURE 17.5 Solutions of Eq. (17.34a) for unit-radius rings at x = ±x0 . 7 From a numerical point of view it is easier to invert the problem. Pick a value of c and solve for x . Equation (17.34a) becomes 1 0 x0 = c1 cosh−1 (1/c1 ). This has numerical solutions in the range 0 < c1 ≤ 1. 1048 Chapter 17 Calculus of Variations The requirement that dx0 /dp vanish leads to 1 = x0 sinh px0 . (17.38) Equations (17.37) and (17.38) may be combined to form px0 = coth px0 , (17.39) px0 = 1.1997. (17.40) with the root Substituting into Eq. (17.37) or (17.38), we obtain p = 1.810, c1 = 0.5524 (17.41) and x0 max = 0.6627. (17.42) Returning to the question of the solution of Eq. (17.35) that describes the soap film, let us calculate the area corresponding to each solution. We have x0 1/2  4π x0 2 A = 4π y 1 + yx2 dx = y dx (by Eq. (17.30)) c1 0 0     x0  x 2 2x0 2x0 cosh + . (17.43) dx = πc12 sinh = 4πc1 c1 c1 c1 0 For x0 = 21 , Eq. (17.35) leads to c1 = 0.2350 → A = 6.8456, c1 = 0.8483 → A = 5.9917, showing that the former can at most be only a local minimum. 
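The numbers in Eqs. (17.40) to (17.42) and the two areas above follow from a few lines of arithmetic. The sketch below uses a fixed-point iteration u → coth u for Eq. (17.39), my own choice, justified because the map is a contraction near the root, and then evaluates the area formula, Eq. (17.43):

```python
import math

# Root of p*x0 = coth(p*x0), Eq. (17.39), by fixed-point iteration
u = 1.0
for _ in range(200):
    u = 1.0 / math.tanh(u)          # u -> coth(u), a contraction near the root
p = math.cosh(u)                    # Eq. (17.37): p = cosh(p*x0)
x0_max = u / p                      # largest ring separation with a catenoid
c1 = 1.0 / p

def area(c1, x0):
    """Catenoid area, Eq. (17.43): A = pi*c1^2*(sinh(2x0/c1) + 2x0/c1)."""
    return math.pi * c1**2 * (math.sinh(2.0 * x0 / c1) + 2.0 * x0 / c1)

print(round(u, 4), round(p, 3), round(x0_max, 4), round(c1, 4))
# 1.1997 1.81 0.6627 0.5524  (Eqs. (17.40)-(17.42))
print(area(0.2351, 0.5), area(0.8483, 0.5))
# deep vs. flat catenoid areas for x0 = 1/2, close to the quoted 6.8456 and 5.9917
```

The Goldschmidt value 2π = 6.2832 falls between the two, which is exactly the comparison made in the text.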
A more detailed investigation (compare Bliss, Calculus of Variations, Chapter IV) shows that this surface is not even a local minimum. For x0 = 21 , the soap film will be described by the flat curve  y = 0.8483 cosh  x . 0.8483 (17.44) This flat or shallow catenoid (catenary of revolution) will be an absolute minimum for 0 ≤ x0 < 0.528. However, for 0.528 < x < 0.6627 its area is greater than that of the Goldschmidt discontinuous solution (6.2832) and it is only a relative minimum (Fig. 17.6). For an excellent discussion of both the mathematical problems and experiments with soap films, we refer to Courant and Robbins (1996) in the Additional Readings at the end of the chapter. 17.2 Applications of the Euler Equation FIGURE 17.6 1049 Catenoid area (unit-radius rings at x = ±x0 ). Exercises 17.2.1 A soap film is stretched across the space between two rings of unit radius centered at ±x0 on the x-axis and perpendicular to the x-axis. Using the solution developed in Section 17.2, set up the transcendental equations for the condition that x0 is such that the area of the curved surface of rotation equals the area of the two rings (Goldschmidt discontinuous solution). Solve for x0 (Fig. 17.7). 17.2.2 In Example 17.2.1, expand J [y(x, α)] − J [y(x, 0)] in powers of α. The term linear in α leads to the Euler equation and to the straight-line solution, Eq. (17.25). Investigate FIGURE 17.7 Surface of rotation. 1050 Chapter 17 Calculus of Variations the α 2 term and show that the stationary value of J , the straight-line distance, is a minimum. 17.2.3 (a) Show that the integral J= (b) 17.2.4 x2 f (y, yx , x) dx, x1 with f = y(x), has no extreme values. If f (y, yx , x) = y 2 (x), find a discontinuous solution similar to the Goldschmidt solution for the soap film problem. Fermat’s principle of optics states that a light ray will follow the path y(x) for which x2 ,y2 n(y, x) ds x1 ,y1 is a minimum when n is the index of refraction. 
For y2 = y1 = 1, −x1 = x2 = 1, find the ray path if (a) 17.2.5 n = ey , (b) n = a(y − y0 ), y > y0 . A frictionless particle moves from point A on the surface of the Earth to point B by sliding through a tunnel. Find the differential equation to be satisfied if the transit time is to be a minimum. Note. Assume the Earth to be nonrotating sphere of uniform density. ANS. (Eq. (17.15)): rϕϕ (r 3 − ra 2 ) + rϕ2 (2a 2 − r 2 ) + a 2 r 2 = 0, r(ϕ = 0) = r0 , rϕ (ϕ = 0) = 0, r(ϕ = ϕA ) = a, r(ϕ = ϕB ) = a. Eq. (17.18): rϕ2 = r 2 −r02 . The solution of these equations is a hypocycloid, genera 2 −r 2 1 radius 2 (a − r0 ) rolling inside the circle of radius a. You might like a2 r 2 r02 · ated by a circle of to show that the transit time is t =π (a 2 − r02 )1/2 . (ag)1/2 For details see P. W. Cooper, Am. J. Phys. 34: 68 (1966); G. Veneziano et al., ibid., pp. 701–704. 17.2.6 A ray of light follows a straight-line path in a first homogeneous medium, is refracted at an interface, and then follows a new straight-line path in the second medium. Use Fermat’s principle of optics to derive Snell’s law of refraction: n1 sin θ1 = n2 sin θ2 . Hint. Keep the points (x1 , y1 ) and (x2 , y2 ) fixed and vary x0 to satisfy Fermat (Fig. 17.8). This is not an Euler equation problem. (The light path is not differentiable at x0 .) 17.2 Applications of the Euler Equation FIGURE 17.8 17.2.7 1051 Snell’s law. A second soap film configuration for the unit-radius rings at x = ±x0 consists of a circular disk, radius a, in the x = 0 plane and two catenoids of revolution, one joining the disk and each ring. One catenoid may be described by   x + c3 . y = c1 cosh c1 Impose boundary conditions at x = 0 and x = x0 . Although not necessary, it is convenient to require that the catenoids form an angle of 120◦ where they join the central disk. Express this third boundary condition in mathematical terms. (c) Show that the total area of catenoids plus central disk is    2x0 2x0 2 . 
A = c1 sinh + 2c3 + c1 c1 (a) (b) 17.2.8 Note. Although this soap film configuration is physically realizable and stable, the area is larger than that of the simple catenoid for all ring separations for which both films exist.      1 = c1 cosh x0 + c3 dy c1 ANS. (a) = tan 30◦ = sinh c3 . (b)  dx  a = c cosh c , 1 3 For the soap film described in Exercise 17.2.7, find (numerically) the maximum value of x0 . Note. This calls for a pocket calculator with hyperbolic functions or a table of hyperbolic cotangents. ANS. x0 max = 0.4078. 1052 Chapter 17 Calculus of Variations 17.2.9 Find the root of px0 = coth px0 (Eq. (17.39)) and determine the corresponding values of p and x0 (Eqs. (17.41) and (17.42)). Calculate your values to five significant figures. 17.2.10 For the two-ring soap film problem of this section calculate and tabulate x0 , p, p −1 , and A, the soap film area for px0 = 0.00(0.02)1.30. 17.2.11 Find the value of x0 (to five significant figures) that leads to a soap film area, Eq. (17.43), equal to 2π , the Goldschmidt discontinuous solution. ANS. x0 = 0.52770. 17.2.12 17.3 Find the curve of quickest descent from (0, 0) to (x0 , y0 ) for a particle sliding under gravity and without friction. Show that the ratio of times taken by the particle along a straight line joining the two points compared to along the curve of quickest descent is (1 + 4/π 2 )1/2 . Hint. Take y to increase downwards. Apply Eq. (17.18) to obtain yx2 = (1 − c2 y)/c2 y, where c is an integration constant. Then make the substitution y = (sin2 ϕ/2)/c2 to parametrize the cycloid and take (x0 , y0 ) = (π/2c2 , 1/c2 ). SEVERAL DEPENDENT VARIABLES Our original variational problem, Eq. (17.1), may be generalized in several respects. In this section we consider the integrand f to be a function of several dependent variables y1 (x), y2 (x), y3 (x), . . . , all of which depend on x, the independent variable. 
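The numerical claims quoted in the soap-film discussion above — the shallow catenoid c = 0.8483 for rings at x0 = 1/2, the comparison against the Goldschmidt value 2π, and the limiting separation x0 = 0.6627 obtained from the root of px0 = coth px0 (Exercise 17.2.9) — can be checked with a few lines of code. A minimal sketch in Python (the bisection brackets are hand-chosen for these particular parameter values):

```python
import math

# (a) Shallow catenoid through unit rings at x = +/-0.5:
#     solve c*cosh(x0/c) = 1 for the larger root (text: c = 0.8483).
x0 = 0.5
f = lambda c: c * math.cosh(x0 / c) - 1.0
lo, hi = 0.5, 1.0            # f(0.5) < 0 < f(1.0) for this x0
for _ in range(100):
    mid = 0.5 * (lo + hi)
    lo, hi = (mid, hi) if f(mid) < 0 else (lo, mid)
c = 0.5 * (lo + hi)

# Catenoid area 2*pi*(integral of y*sqrt(1+y'^2) dx), in closed form
area = math.pi * c * c * math.sinh(2 * x0 / c) + 2 * math.pi * c * x0
goldschmidt = 2 * math.pi    # the two flat unit disks, 6.2832

# (b) Maximum ring separation: root of u = coth(u), u = p*x0
g = lambda u: u - math.cosh(u) / math.sinh(u)
lo, hi = 1.0, 2.0            # g changes sign on [1, 2]
for _ in range(100):
    mid = 0.5 * (lo + hi)
    lo, hi = (lo, mid) if g(mid) > 0 else (mid, hi)
u = 0.5 * (lo + hi)          # u = p*x0, about 1.1997
x0_max = u / math.cosh(u)    # since p = cosh(p*x0); text: 0.6627
```

The computed area at x0 = 1/2, about 5.99, is indeed below the Goldschmidt value 2π ≈ 6.2832, consistent with Fig. 17.6.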
In Section 17.4 f again will contain only one unknown function y, but y will be a function of several independent variables (over which we integrate). In Section 17.5 these two generalizations are combined. In Section 17.7 the stationary value is restricted by one or more constraints. For more than one dependent variable, Eq. (17.1) becomes x2  J= f y1 (x), y2 (x), . . . , yn (x), y1x (x), y2x (x), . . . , ynx (x), x dx. (17.45) x1 As in Section 17.1, we determine the extreme value of J by comparing neighboring paths. Let yi (x, α) = yi (x, 0) + αηi (x), i = 1, 2, . . . , n, (17.46) with the ηi independent of one another but subject to the restrictions discussed in Section 17.1. By differentiating Eq. (17.45) with respect to α and setting α = 0, since Eq. (17.7) still applies, we obtain  x2  ∂f ∂f (17.47) ηi + ηix dx = 0, ∂yi ∂yix x1 i the subscript x denoting partial differentiation with respect to x; that is, yix = ∂yi /∂x, and so on. Again, each of the terms (∂f/∂yix )ηix is integrated by parts. The integrated part vanishes and Eq. (17.47) becomes  x2  ∂f d ∂f ηi dx = 0. (17.48) − ∂yi dx ∂yix x1 i 17.3 Several Dependent Variables 1053 Since the ηi are arbitrary and independent of one another,8 each of the terms in the sum must vanish independently. We have ∂f ∂f d − = 0, ∂yi dx ∂(∂yi /∂x) i = 1, 2, . . . , n, (17.49) a whole set of Euler equations, each of which must be satisfied for an extreme value. Hamilton’s Principle The most important application of Eq. (17.45) occurs when the integrand f is taken to be a Lagrangian L. The Langrangian (for nonrelativistic systems; see Exercise 17.3.5 for a relativistic particle) is defined as the difference of kinetic and potential energies of a system: L ≡ T − V. (17.50) Using time as an independent variable instead of x and xi (t) as the dependent variables, we get x → t, yi → xi (t), yix → x˙i (t); xi (t) is the location and x˙i = dxi /dt is the velocity of particle i as a function of time. 
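The machinery of Eqs. (17.45)–(17.49) can be made concrete with a small numerical experiment (an illustrative choice, not from the text): take f = (1/2)(y1x² + y2x²), whose Euler equations give y1'' = y2'' = 0, i.e., straight lines; discretize J; and confirm that every endpoint-preserving variation raises J above its straight-line value.

```python
import math

# Discretized J = integral of (1/2)(y1x^2 + y2x^2) dx on [0, 1];
# the Euler equations (17.49) give straight lines as the extremals.
def J(y1, y2, h):
    s = 0.0
    for i in range(len(y1) - 1):
        s += 0.5 * (((y1[i+1] - y1[i]) / h) ** 2
                    + ((y2[i+1] - y2[i]) / h) ** 2) * h
    return s

n = 200
h = 1.0 / n
x = [i * h for i in range(n + 1)]
line1 = [xi for xi in x]            # y1 = x
line2 = [2 * xi for xi in x]        # y2 = 2x
J0 = J(line1, line2, h)             # = (1/2)(1^2 + 2^2) = 2.5 exactly

# The deviations eta_i vanish at both endpoints, as Section 17.1 requires
varied = []
for a in (0.1, -0.2, 0.3):
    v1 = [y + a * math.sin(math.pi * xi) for y, xi in zip(line1, x)]
    v2 = [y + a * math.sin(2 * math.pi * xi) for y, xi in zip(line2, x)]
    varied.append(J(v1, v2, h))
```

Because the straight-line slopes are constant, the cross terms telescope to zero and every varied J exceeds J0 by a strictly positive α² contribution, mirroring Exercise 17.2.2.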
The equation δJ = 0 is then a mathematical statement of Hamilton’s principle of classical mechanics, t2 L(x1 , x2 , . . . , xn , x˙1 , x˙2 , . . . , x˙n ; t) dt = 0. (17.51) δ t1 In words, Hamilton’s principle asserts that the motion of the system from time t1 to t2 is such that the time integral of the Lagrangian L, or action, has a stationary value. The resulting Euler equations are usually called the Lagrangian equations of motion, d ∂L ∂L − = 0. dt ∂ x˙i ∂xi (17.52) These Lagrangian equations can be derived from Newton’s equations of motion, and Newton’s equations can be derived from Lagrange’s. The two sets of equations are equally “fundamental.” The Lagrangian formulation has advantages over the conventional Newtonian laws. Whereas Newton’s equations are vector equations, we see that Lagrange’s equations involve only scalar quantities. The coordinates x1 , x2 , . . . need not be any standard set of coordinates or lengths. They can be selected to match the conditions of the physical problem. The Lagrange equations are invariant with respect to the choice of coordinate system. Newton’s equations (in component form) are not manifestly invariant. Exercise 2.5.10 shows what happens to F = ma resolved in spherical polar coordinates. 8 For example, we could set η = η = η = · · · = 0, eliminating all but one term of the sum, and then treat η exactly as in 2 3 4 1 Section 17.1. 1054 Chapter 17 Calculus of Variations Exploiting the concept of energy, we may easily extend the Lagrangian formulation from mechanics to diverse fields, such as electrical networks and acoustical systems. Extensions to electromagnetism appear in the exercises. The result is a unity of otherwise-separate areas of physics. In the development of new areas, the quantization of Lagrangian particle mechanics provided a model for the quantization of electromagnetic fields and led to the gauge theory of quantum electrodynamics. 
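Hamilton's principle, Eq. (17.51), can likewise be tested numerically. A sketch under assumed parameters (a unit-mass harmonic oscillator with x(0) = 0, x(1) = 1; since the elapsed time is less than the half period π, the classical path here is an actual minimum, not merely stationary):

```python
import math

# Discretized action S = integral of (T - V) dt for a unit-mass
# harmonic oscillator, L = x'^2/2 - x^2/2, fixed endpoints.
def action(xs, dt):
    S = 0.0
    for i in range(len(xs) - 1):
        v = (xs[i+1] - xs[i]) / dt
        xm = 0.5 * (xs[i] + xs[i+1])        # midpoint value for V
        S += dt * (0.5 * v * v - 0.5 * xm * xm)
    return S

n = 400
dt = 1.0 / n
t = [i * dt for i in range(n + 1)]
classical = [math.sin(ti) / math.sin(1.0) for ti in t]  # solves x'' = -x
S0 = action(classical, dt)

# Endpoint-preserving variations all raise the action for this system
S_varied = [action([x + a * math.sin(math.pi * ti)
                    for x, ti in zip(classical, t)], dt)
            for a in (0.05, -0.1, 0.2)]
```

Every varied path costs more action than x(t) = sin t / sin 1, which is the numerical content of δ∫L dt = 0 for this example.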
One of the most valuable advantages of the Hamilton principle — Lagrange equation formulation — is the ease in seeing a relation between a symmetry and a conservation law. As an example, let xi = ϕ, an azimuthal angle. If our Lagrangian is independent of ϕ (that is, if ϕ is an ignorable coordinate), there are two consequences: (1) the conservation or invariance of a component of angular momentum and (2) from Eq. (17.52) ∂L/∂ ϕ˙ = constant. Similarly, invariance under translation leads to conservation of linear momentum. Noether’s theorem is a generalization of this invariance (symmetry) — the conservation law relation. Example 17.3.1 MOVING PARTICLE — CARTESIAN COORDINATES Consider Eq. (17.50), which describes one particle with kinetic energy T = 12 mx˙ 2 (17.53) and potential energy V (x), in which, as usual, the force is given by the negative gradient of the potential, F (x) = − dV (x) . dx (17.54) From Eq. (17.52), d ∂(T − V ) (mx) ˙ − = mx¨ − F (x) = 0, dt ∂x which is Newton’s second law of motion. Example 17.3.2 (17.55)  MOVING PARTICLE — CIRCULAR CYLINDRICAL COORDINATES Now let us describe a moving particle in cylindrical coordinates of the xy-plane, that is, z = 0. The kinetic energy is   (17.56) T = 12 m x˙ 2 + y˙ 2 = 12 m ρ˙ 2 + ρ 2 ϕ˙ 2 , and we take V = 0 for simplicity. The transformation of x˙ 2 + y˙ 2 into circular cylindrical coordinates could be carried out by taking x(ρ, ϕ) and y(ρ, ϕ), Eq. (2.28), and differentiating with respect to time and squaring. It is much easier to interpret x˙ 2 + y˙ 2 as v 2 and just write down the components ˆ ρ /dt) = ρˆ ρ, of v as ρ(ds ˙ and so on. (The dsρ is an increment of length, ρ changing by dρ, ϕ remaining constant. See Sections 2.1 and 2.4.) The Lagrangian equations yield d d 2 mρ ϕ˙ = 0. (mρ) ˙ − mρ ϕ˙ 2 = 0, (17.57) dt dt 17.3 Several Dependent Variables 1055 The second equation is a statement of conservation of angular momentum. 
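The conservation statement just derived is easy to verify in a simulation. A sketch (the central potential V = −1/ρ and all initial conditions are arbitrary illustrative choices): integrate the motion in Cartesian coordinates with velocity Verlet and confirm that the momentum conjugate to the ignorable angle, mρ²φ̇ = m(x vy − y vx), stays constant along the trajectory.

```python
import math

# Velocity-Verlet integration of planar motion in a central potential
# V = -1/rho (illustrative choice).  phi is ignorable, so the conjugate
# momentum L_z = m*(x*vy - y*vx) should be conserved.
m, dt = 1.0, 1e-3
x, y, vx, vy = 1.0, 0.0, 0.0, 1.2       # arbitrary bound-orbit data

def acc(x, y):
    r3 = (x * x + y * y) ** 1.5
    return -x / r3, -y / r3             # a = -grad V / m, with m = 1

Lz0 = m * (x * vy - y * vx)
ax, ay = acc(x, y)
for _ in range(5000):
    vx += 0.5 * dt * ax; vy += 0.5 * dt * ay
    x += dt * vx;        y += dt * vy
    ax, ay = acc(x, y)
    vx += 0.5 * dt * ax; vy += 0.5 * dt * ay
Lz = m * (x * vy - y * vx)
```

Because each half-kick applies an acceleration parallel to the position vector, this integrator preserves the angular momentum to machine precision, a discrete echo of ∂L/∂φ̇ = constant.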
The first may be interpreted as radial acceleration^9 equated to centrifugal force. In this sense the centrifugal force is a real force. It is of some interest that this interpretation of centrifugal force as a real force is supported by the general theory of relativity.

Exercises

17.3.1 (a) Develop the equations of motion corresponding to L = (1/2)m(ẋ² + ẏ²).
(b) In what sense do your solutions minimize the integral ∫_{t1}^{t2} L dt? Compare the result for your solution with x = const., y = const.

17.3.2 From the Lagrangian equations of motion, Eq. (17.52), show that a system in stable equilibrium has a minimum potential energy.

17.3.3 Write out the Lagrangian equations of motion of a particle in spherical coordinates for potential V equal to a constant. Identify the terms corresponding to (a) centrifugal force and (b) Coriolis force.

17.3.4 The spherical pendulum consists of a mass on a wire of length l, free to move in polar angle θ and azimuth angle ϕ (Fig. 17.9; FIGURE 17.9 Spherical pendulum).
(a) Set up the Lagrangian for this physical system.
(b) Develop the Lagrangian equations of motion.

17.3.5 Show that the Lagrangian

L = m0 c² (1 − √(1 − v²/c²)) − V(r)

leads to a relativistic form of Newton's second law of motion,

(d/dt)[ m0 vi / √(1 − v²/c²) ] = Fi,

in which the force components are Fi = −∂V/∂xi.

17.3.6 The Lagrangian for a particle with charge q in an electromagnetic field described by scalar potential ϕ and vector potential A is

L = (1/2)mv² − qϕ + qA · v.

Find the equation of motion of the charged particle.
Hint. (d/dt)Aj = ∂Aj/∂t + Σi (∂Aj/∂xi)ẋi. The dependence of the force fields E and B upon the potentials ϕ and A is developed in Section 1.13 (compare Exercise 1.13.10).
ANS. mẍi = q[E + v × B]i.

17.3.7 Consider a system in which the Lagrangian is given by

L(qi, q̇i) = T(qi, q̇i) − V(qi),

where qi and q̇i represent sets of variables.

^9 Here is a second method of attacking Exercise 2.4.8.
The potential energy V is independent of velocity and neither T nor V has any explicit time dependence. (a) Show that   d  ∂L q˙j − L = 0. dt ∂ q˙j j (b) The constant quantity  j q˙j ∂L −L ∂ q˙j defines the Hamiltonian H . Show that under the preceding assumed conditions, H = T + V , the total energy. Note. The kinetic energy T is a quadratic function of the q˙i . 17.4 SEVERAL INDEPENDENT VARIABLES Sometimes the integrand f of Eq. (17.1) will contain one unknown function, u, that is a function of several independent variables, u = u(x, y, z), for the three-dimensional case, for example. Equation (17.1) becomes J= f [u, ux , uy , uz , x, y, z] dx dy dz, (17.58) ux = ∂u/∂x, and so on. The variational problem is to find the function u(x, y, z) for which J is stationary,  ∂J  = 0. (17.59) δJ = δα  ∂α α=0 17.4 Several Independent Variables 1057 Generalizing Section 17.1, we let u(x, y, z, α) = u(x, y, z, 0) + αη(x, y, z), (17.60) ux (x, y, z, α) = ux (x, y, z, 0) + αηx , (17.61) where u(x, y, z, α = 0) represents the (unknown) function for which Eq. (17.59) is satisfied, whereas again η(x, y, z) is the arbitrary deviation that describes the varied function u(x, y, z, α). This deviation η(x, y, z) is required to be differentiable and to vanish at the endpoints. Then from Eq. (17.60), and similarly for uy and uz . Differentiating the integral Eq. (17.58) with respect to the parameter α and then setting α = 0, we obtain    ∂f ∂f ∂f ∂f ∂J  (17.62) η + = η + η + η x y z dx dy dz = 0. ∂α  ∂u ∂u ∂u ∂u x α=0 y z Again, we integrate each of the terms (∂f/∂ui )ηi by parts. The integrated part vanishes at the endpoints (because the deviation η is required to go to zero at the endpoints) and   ∂f ∂ ∂f ∂ ∂f ∂ ∂f η(x, y, z) dx dy dz = 0.10 − (17.63) − − ∂u ∂x ∂ux ∂y ∂uy ∂z ∂uz Since the variation η(x, y, z) is arbitrary, the term in large parentheses is set equal to zero. This yields the Euler equation for (three) independent variables, ∂f ∂ ∂f ∂ ∂f ∂ ∂f − − − = 0. 
∂y ∂x ∂ux ∂y ∂uy ∂z ∂uz Example 17.4.1 (17.64) LAPLACE’S EQUATION An example of this sort of variational problem is provided by electrostatics. The energy of an electrostatic field is energy density = 21 εE2 , (17.65) in which E is the usual electrostatic force field. In terms of the static potential ϕ, energy density = 12 ε(∇ϕ)2 . (17.66) Now let us impose the requirement that the electrostatic energy (associated with the field) in a given volume be a minimum. (Boundary conditions on E and ϕ must still be satisfied.) We have the volume integral11  2 J= (∇ϕ)2 dx dy dz = ϕx + ϕy2 + ϕz2 dx dy dz. (17.67) 10 Recall that ∂/∂x is a partial derivative, where y and z are held constant. But ∂/∂x also acts on implicit x-dependence as well as on explicit x-dependence. In this sense, for example,   ∂ ∂2f ∂2f ∂2f ∂2f ∂f ∂2f + ux + 2 uxx + uxy + uxz . = ∂x ∂ux ∂x∂ux ∂u∂ux ∂uy ∂ux ∂uz ∂ux ∂ux 11 The subscript x indicates the x-partial derivative, not an x-component. 1058 Chapter 17 Calculus of Variations With f (ϕ, ϕx , ϕy , ϕz , x, y, z) = ϕx2 + ϕy2 + ϕz2 , (17.68) the function ϕ replacing the u of Eq. (17.64), Euler’s equation (Eq. (17.64)) yields −2(ϕxx + ϕyy + ϕzz ) = 0, (17.69) ∇ 2 ϕ(x, y, z) = 0, (17.70) or which is Laplace’s equation of electrostatics. Closer investigation shows that this stationary value is indeed a minimum. Thus the demand that the field energy be minimized leads to Laplace’s PDE.  Exercises 17.4.1 The Lagrangian for a vibrating string (small-amplitude vibrations) is 1 2 1 2 L= 2 ρut − 2 τ ux dx, where ρ is the (constant) linear mass density and τ is the (constant) tension. The xintegration is over the length of the string. Show that application of Hamilton’s principle to the Lagrangian density (the integrand), now with two independent variables, leads to the classical wave equation ∂ 2u ρ ∂ 2u = . τ ∂t 2 ∂x 2 17.4.2 17.5 Show that the stationary value of the total energy of the electrostatic field of Example 17.4.1 is a minimum. Hint. Use Eq. 
(17.61) and investigate the α 2 terms. SEVERAL DEPENDENT AND INDEPENDENT VARIABLES In some cases our integrand f contains more than one dependent variable and more than one independent variable. Consider  f = f p(x, y, z), px , py , pz , q(x, y, z), qx , qy , qz , r(x, y, z), rx , ry , rz , x, y, z . (17.71) We proceed as before with p(x, y, z, α) = p(x, y, z, 0) + αξ(x, y, z), q(x, y, z, α) = q(x, y, z, 0) + αη(x, y, z), r(x, y, z, α) = r(x, y, z, 0) + αζ (x, y, z), (17.72) and so on. 17.5 Several Dependent and Independent Variables 1059 Keeping in mind that ξ, η, and ζ are independent of one another, as were the ηi in Section 17.3, the same differentiation and then integration by parts leads to ∂f ∂ ∂f ∂ ∂f ∂ ∂f − − = 0, − ∂p ∂x ∂px ∂y ∂py ∂z ∂pz (17.73) with similar equations for functions q and r. Replacing p, q, r, . . . with yi and x, y, z, . . . with xy , we can put Eq. (17.73) in a more compact form:  ∂  ∂f  ∂f − = 0, i = 1, 2, . . . , (17.73a) ∂yi ∂xj ∂yij j in which yij ≡ ∂yi . ∂xj An application of Eq. (17.73) appears in Section 17.7. Relation to Physics The calculus of variations as developed so far provides an elegant description of a wide variety of physical phenomena. The physics includes classical mechanics in Section 17.3; relativistic mechanics, Exercise 17.3.5; electrostatics, Example 17.4.1; and electromagnetic theory in Exercise 17.5.1. The convenience should not be minimized, but at the same time we should be aware that in these cases the calculus of variations has only provided an alternate description of what was already known. The situation does change with incomplete theories. • If the basic physics is not yet known, a postulated variational principle can be a useful starting point. Exercise 17.5.1 The Lagrangian (per unit volume) of an electromagnetic field with a charge density ρ is given by   1 1 2 L= ε0 E2 − B − ρϕ + ρv · A. 2 µ0 Show that Lagrange’s equations lead to two of Maxwell’s equations. 
(The remaining two are a consequence of the definition of E and B in terms of A and ϕ.) This Lagrange density comes from a scalar expression in Section 4.6. Hint. Take A1 , A2 , A3 , and ϕ as dependent variables, x, y, z, and t as independent variables. E and B are given in terms of A and ϕ by Eq. (4.142) and Eq. (1.88). 1060 17.6 Chapter 17 Calculus of Variations LAGRANGIAN MULTIPLIERS In this section the concept of a constraint is introduced. To simplify the treatment, the constraint appears as a simple function rather than as an integral. In this section we are not concerned with the calculus of variations, but in Section 17.7 the constraints, with our newly developed Lagrangian multipliers, are incorporated into the calculus of variations. Consider a function of three independent variables, f (x, y, z). For the function f to be a maximum (or extreme),12 df = 0. (17.74) The necessary and sufficient condition for this is ∂f ∂f ∂f = = = 0, ∂x ∂y ∂z (17.75) in which df = ∂f ∂f ∂f dx + dy + dz. ∂x ∂y ∂z (17.76) Often in physical problems the variables x, y, z are subject to constraints so that they are no longer all independent. It is possible, at least in principle, to use each constraint to eliminate one variable and to proceed with a new and smaller set of independent variables. The use of Lagrangian multipliers is an alternate technique that may be applied when this elimination of variables is inconvenient or undesirable. Let our equation of constraint be ϕ(x, y, z) = 0, (17.77) from which z(x, y) may be extracted if x, y are taken as the independent coordinates. Returning to Eq. (17.74), Eq. (17.75) no longer follows because there are now only two independent variables, so dz is no longer arbitrary. From the total differential dϕ = 0, we then obtain ∂ϕ ∂ϕ ∂ϕ dx + dy (17.78) − dz = ∂z ∂x ∂y and therefore df = assuming ϕz = obtain df + λ dϕ = ∂ϕ ∂z    ∂f ∂f ∂ϕ ∂ϕ dx + dy + λ dx + dx , ∂x ∂y ∂x ∂x λ=− fz , ϕz = 0. Thus, we may add Eq. (17.76) and a multiple of Eq. 
(17.78) to      ∂f ∂f ∂ϕ ∂ϕ ∂ϕ ∂f dx + dy + dz = 0. +λ +λ +λ ∂x ∂x ∂y ∂y ∂z ∂z (17.79) In other words, our Lagrangian multiplier λ is chosen so that ∂f ∂ϕ +λ = 0, ∂z ∂z 12 Including a saddle point. (17.80) 17.6 Lagrangian Multipliers assuming that ∂ϕ/∂z = 0. Equation (17.79) now becomes     ∂f ∂f ∂ϕ ∂ϕ dx + dy = 0. +λ +λ ∂x ∂x ∂y ∂y 1061 (17.81) However, now dx and dy are arbitrary and the quantities in parentheses must vanish: ∂f ∂ϕ +λ = 0, ∂x ∂x ∂f ∂ϕ +λ = 0. ∂y ∂y (17.82) When Eqs. (17.80) and (17.82) are satisfied, df = 0 and f is an extremum. Notice that there are now four unknowns: x, y, z, and λ. The fourth equation is, of course, the constraint Eq. (17.77). We want only x, y, and z, so λ need not be determined. For this reason λ is sometimes called Lagrange’s undetermined multiplier. This method will fail if all the coefficients of λ vanish at the extremum, ∂ϕ/∂x, ∂ϕ/∂y, ∂ϕ/∂z = 0. It is then impossible to solve for λ. Note that from the form of Eqs. (17.80) and (17.82), we could identify f as the function taking an extreme value subject to ϕ, the constraint, or we could identify f as the constraint and ϕ as the function. If we have a set of constraints ϕk , then Eqs. (17.80) and (17.82) become  ∂ϕk ∂f + = 0, λk ∂xi ∂xi i = 1, 2, . . . , n, k with a separate Lagrange multiplier λk for each ϕk . Example 17.6.1 PARTICLE IN A BOX As an example of the use of Lagrangian multipliers, consider the quantum mechanical problem of a particle (mass m) in a box. The box is a rectangular parallelepiped with sides a, b, and c. The ground-state energy of the particle is given by   1 1 h2 1 . (17.83) + + E= 8m a 2 b2 c2 We seek the shape of the box that will minimize the energy E, subject to constraint that the volume is constant, V (a, b, c) = abc = k. (17.84) With f (a, b, c) = E(a, b, c) and ϕ(a, b, c) = abc − k = 0, we obtain ∂E ∂ϕ h2 +λ =− + λbc = 0. ∂a ∂a 4ma 3 Also, − h2 + λac = 0, 4mb3 − h2 + λab = 0. 
4mc3 (17.85) 1062 Chapter 17 Calculus of Variations Multiplying the first of these expressions by a, the second by b, and the third by c, we have λabc = h2 h2 h2 = = . 4ma 2 4mb2 4mc2 (17.86) Therefore our solution is a = b = c, a cube. (17.87)  Notice that λ has not been determined but follows from Eq. (17.86). Example 17.6.2 CYLINDRICAL NUCLEAR REACTOR A further example is provided by the nuclear reactor theory. Suppose a (thermal) nuclear reactor is to have the shape of a right circular cylinder of radius R and height H . Neutron diffusion theory supplies a constraint: ϕ(R, H ) =  2.4048 R 2 +  π H 2 = constant.13 (17.88) We wish to minimize the volume of the reactor vessel, f (R, H ) = πR 2 H. (17.89) Application of Eq. (17.82) leads to ∂f ∂ϕ (2.4048)2 = 0, +λ = 2πRH − 2λ ∂R ∂R R3 ∂ϕ π2 ∂f +λ = πR 2 − 2λ 3 = 0. ∂H ∂H H (17.90) By multiplying the first of these equations by R/2 and the second by H , we obtain πR 2 H = λ 2π 2 (2.4048)2 =λ 2 , 2 R H (17.91) or height √ 2πR = 1.847R, H= 2.4048 (17.92) for the minimum-volume right-circular cylindrical reactor. Strictly speaking, we have found only an extremum. Its identification as a minimum follows from a consideration of the original equations.  13 2.4048 . . . is the lowest root of Bessel function J (R) (compare Section 11.1). 0 17.6 Lagrangian Multipliers 1063 Exercises The following problems are to be solved by using Lagrangian multipliers. 17.6.1 The ground-state energy of a quantum particle of mass m in a pillbox (right-circular cylinder) is given by   h¯ 2 (2.4048)2 π2 E= , + 2m R2 H2 in which R is the radius and H is the height of the pillbox. Find the ratio of R to H that will minimize the energy for a fixed volume. 17.6.2 Find the ratio of R (radius) to H (height) that will minimize the total surface area of a right-circular cylinder of fixed volume. 17.6.3 The U.S. Post Office limits first class mail to Canada to a total of 36 inches, length plus girth. 
Using a Lagrange multiplier, find the maximum volume and the dimensions of a (rectangular parallelepiped) package subject to this constraint. 17.6.4 A thermal nuclear reactor is subject to the constraint  2  2  2 π π π ϕ(a, b, c) = + + = B 2, a b c a constant. Find the ratios of the sides of the rectangular parallelepiped reactor of minimum volume. ANS. a = b = c, cube. 17.6.5 For a lens of focal length f , the object distance p and the image distance q are related by 1/p + 1/q = 1/f . Find the minimum object–image distance (p + q) for fixed f . Assume real object and image (p and q both positive). 17.6.6 You have an ellipse (x/a)2 + (y/b)2 = 1. Find the inscribed rectangle of maximumarea. Show that the ratio of the area of the maximum-area rectangle to the area of the ellipse is 2/π = 0.6366. 17.6.7 A rectangular parallelepiped is inscribed in an ellipsoid of semiaxes a, b, and c. Maximize the volume of the inscribed rectangular parallelepiped. Show that the ratio of the √ maximum volume to the volume of the ellipsoid is 2/π 3 ≈ 0.367. 17.6.8 A deformed sphere has a radius given by r = r0 {α0 + α2 P2 (cos θ )}, where α0 ≈ 1 and |α2 | ≪ |α0 |. From Exercise 12.5.16 the area and volume are         4πr03 3 4 α2 2 3 α2 2 A = 4πr02 α02 1 + , V= . a0 1 + 5 α0 3 5 α0 Terms of order α23 have been neglected. With the constraint that the enclosed volume be held constant, that is, V = 4πr03 /3, show that the bounding surface of minimum area is a sphere (α0 = 1, α2 = 0). (b) With the constraint that the area of the bounding surface be held constant, that is, A = 4πr02 , show that the enclosed volume is a maximum when the surface is a sphere. (a) 1064 Chapter 17 Calculus of Variations 17.6.9 Find the maximum value of the directional derivative of ϕ(x, y, z), dϕ ∂ϕ ∂ϕ ∂ϕ = cos α + cos β + cos γ , ds ∂x ∂y ∂z subject to the constraint cos2 α + cos2 β + cos2 γ = 1. ANS.  dϕ ds  = |∇ϕ|. 
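The multiplier result of Example 17.6.2, H = 1.847R, can be cross-checked by direct constrained minimization: eliminate H through the constraint and minimize V(R) numerically. A sketch (B = 1 is an arbitrary normalization; the search bracket assumes the minimum lies below R = 10):

```python
import math

# Example 17.6.2: minimize V = pi*R^2*H subject to
# (2.4048/R)^2 + (pi/H)^2 = B^2.  Eliminate H via the constraint,
# then minimize over R by ternary search.
B = 1.0
j0 = 2.4048                               # first zero of J_0

def volume(R):
    H = math.pi / math.sqrt(B * B - (j0 / R) ** 2)
    return math.pi * R * R * H

lo, hi = j0 + 1e-6, 10.0                  # V -> infinity at both ends
for _ in range(300):
    m1, m2 = lo + (hi - lo) / 3, hi - (hi - lo) / 3
    if volume(m1) < volume(m2):
        hi = m2
    else:
        lo = m1
R = 0.5 * (lo + hi)
H = math.pi / math.sqrt(B * B - (j0 / R) ** 2)
ratio = H / R                             # Eq. (17.92) predicts 1.847
```

The search converges to H/R = √2 π / 2.4048 ≈ 1.8475, confirming Eq. (17.92) without ever introducing the multiplier λ explicitly.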
Note concerning the following exercises: In a quantum mechanical system there are gi distinct quantum states between energies Ei and Ei + dEi . The problem is to describe how ni particles are distributed among these states subject to two constraints: (a) fixed number of particles,  i (b) fixed total energy,  i 17.6.10 ni = n. ni Ei = E. For identical particles obeying the Pauli exclusion principle, the probability of a given arrangement is # gi ! WF D = . ni !(gi − ni )! i Show that maximizing WF D , subject to a fixed number of particles and fixed total energy, leads to gi ni = λ +λ E . e 1 2 i +1 With λ1 = −E0 /kT and λ2 = 1/kT , this yields Fermi–Dirac statistics. Hint. Try working with ln W and using Stirling’s formula, Section 8.3. The justification for differentiation with respect to ni is that we are dealing here with a large number of particles, ni /ni ≪ 1. 17.6.11 For identical particles but no restriction on the number in a given state, the probability of a given arrangement is # (ni + gi − 1)! WBE = . ni !(gi − 1)! i Show that maximizing WBE , subject to a fixed number of particles and fixed total energy, leads to gi . ni = λ +λ E e 1 2 i −1 With λ1 = −E0 /kT and λ2 = 1/kT , this yields Bose–Einstein statistics. Note. Assume that gi ≫ 1. 17.7 Variation with Constraints 17.6.12 17.7 1065 Photons satisfy WBE and the constraint that total energy is constant. They clearly do not satisfy the fixed-number constraint. Show that eliminating the fixed-number constraint leads to the foregoing result but with λ1 = 0. VARIATION WITH CONSTRAINTS As in the preceding sections, we seek the path that will make the integral   ∂yi , xj dxj J = f yi , ∂xj (17.93) stationary. This is the general case in which xj represents a set of independent variables and yi a set of dependent variables. Again, δJ = 0. (17.94) Now, however, we introduce one or more constraints. This means that the yi are no longer independent of each other. 
Not all the ηi may be varied arbitrarily, and Eqs. (17.62) and (17.73a) would not apply. The constraint may have the form ϕk (yi , xj ) = 0, (17.95) as in Section 17.6. In this case we may multiply by a function of xj , say, λk (xj ), and integrate over the same range as in Eq. (17.93) to obtain λk (xj )ϕk (yi , xj ) dxj = 0. (17.96) Then clearly (17.97) Alternatively, the constraint may appear in the form of an integral ϕk (yi , ∂yi /∂xj , xj ) dxj = constant. (17.98) δ λk (xj )ϕk (yi , xj ) dxj = 0. We may introduce any constant Lagrangian multiplier, and again Eq. (17.97) follows — now with λ a constant. In either case, by adding Eqs. (17.94) and (17.97), possibly with more than one constraint, we obtain     ∂yi f yi , δ λk ϕk (yi , xj ) dxj = 0. (17.99) , xj + ∂xj k The Lagrangian multiplier λk may depend on xj when ϕ(yi , xj ) is given in the form of Eq. (17.95). Treating the entire integrand as a new function,   ∂yi g yi , , xj , ∂xj 1066 Chapter 17 Calculus of Variations we obtain    ∂yi , xj = f + λk ϕk . g yi , ∂xj (17.100) k If we have N yi (i = 1, 2, . . . , N ) and m constraints (k = 1, 2, . . . m), then N − m of the ηi may be taken as arbitrary. For the remaining m ηi , the λ may, in principle, be chosen so that the remaining Euler–Lagrange equations are satisfied, completely analogous to Eq. (17.80). The result is that our composite function g must satisfy the usual Euler– Lagrange equations, ∂g  ∂ ∂g = 0, (17.101) − ∂yi ∂xj (∂yi /∂xj ) j with one such equation for each dependent variable yi (compare Eqs. (17.64) and (17.73)). These Euler equations and the equations of constraint are then solved simultaneously to find the function yielding a stationary value. Lagrangian Equations In the absence of constraints, Lagrange’s equations of motion (Eq. (17.52)) were found to be14 d ∂L ∂L − = 0, dt ∂ q˙i ∂qi with t (time) the one independent variable and qi (t) (particle positions) a set of dependent variables. 
Usually the generalized coordinates qi are chosen to eliminate the forces of constraint, but this is not necessary and not always desirable. In the presence of (holonomic) constraints, ϕk = 0, Hamilton’s principle is   L(qi , q˙i , t) + δ λk (t)ϕk (qi , t) dt = 0, (17.102) k and the constrained Lagrangian equations of motion are d ∂L ∂L  aik λk . − = dt ∂ q˙i ∂qi (17.103) k Usually ϕk = ϕk (qi , t), independent of the generalized velocities q˙i . In this case the coefficient aik is given by aik = ∂ϕk . ∂qi (17.104) Then aik λk (no summation) represents the force of the kth constraint in the qi -direction, appearing in Eq. (17.103) in exactly the same way as −∂V /∂qi . 14 The symbol q is customary in classical mechanics. It serves to emphasize that the variable is not necessarily a Cartesian variable (and not necessarily a length). 17.7 Variation with Constraints 1067 FIGURE 17.10 Simple pendulum. Example 17.7.1 SIMPLE PENDULUM To illustrate, consider the simple pendulum, a mass m constrained by a wire of length l to swing in an arc (Fig. 17.10). In the absence of the one constraint ϕ1 = r − l = 0 (17.105) there are two generalized coordinates r and θ (motion in vertical plane). The Lagrangian is  (17.106) L = T − V = 21 m r˙ 2 + r 2 θ˙ 2 + mgr cos θ, taking the potential V to be zero when the pendulum is horizontal, θ = π/2. By Eq. (17.103) the equations of motion are or d ∂L ∂L − = λ1 , dt ∂ r˙ ∂r d ∂L ∂L =0 − dt ∂ θ˙ ∂θ (ar1 = 1, aθ1 = 0), d (m˙r ) − mr θ˙ 2 − mg cos θ = λ1 , dt d 2 mr θ˙ + mgr sin θ = 0. dt (17.107) (17.108) Substituting in the equation of constraint (r = l, r˙ = 0), we have ml θ˙ 2 + mg cos θ = −λ1 , ml 2 θ¨ + mgl sin θ = 0. (17.109) The second equation may be solved for θ (t) to yield simple harmonic motion if the amplitude is small (sin θ ∼ θ ), whereas the first equation expresses the tension in the wire in terms of θ and θ˙ . Note that since the equation of constraint, Eq. (17.105), is in the form of Eq. 
(17.95), the Lagrange multiplier λ may be (and here is) a function of t (or of θ).

FIGURE 17.11 A particle sliding on a cylindrical surface.

Example 17.7.2 SLIDING OFF A LOG

Closely related to this is the problem of a particle sliding on a cylindrical surface. The object is to find the critical angle θc at which the particle flies off from the surface. This critical angle is the angle at which the radial force of constraint goes to zero (Fig. 17.11). We have

L = T − V = (1/2)m(ṙ² + r²θ̇²) − mgr cos θ   (17.110)

and the one equation of constraint,

ϕ1 = r − l = 0.   (17.111)

Proceeding as in Example 17.7.1 with ar1 = 1,

m r̈ − m r θ̇² + mg cos θ = λ1(θ),
m r² θ̈ + 2m r ṙ θ̇ − mgr sin θ = 0,   (17.112)

in which the constraining force λ1(θ) is a function of the angle θ.^15 Since r = l, r̈ = ṙ = 0, Eq. (17.112) reduces to

−ml θ̇² + mg cos θ = λ1(θ),   (17.113a)
ml² θ̈ − mgl sin θ = 0.   (17.113b)

Differentiating Eq. (17.113a) with respect to time and remembering that

df(θ)/dt = θ̇ df(θ)/dθ,   (17.114)

we obtain

−2ml θ̈ − mg sin θ = dλ1(θ)/dθ.   (17.115)

Using Eq. (17.113b) to eliminate the θ̈ term and then integrating, we have

λ1(θ) = 3mg cos θ + C.   (17.116)

Since

λ1(0) = mg,   (17.117)

C = −2mg.   (17.118)

The particle m will stay on the surface as long as the force of constraint is nonnegative, that is, as long as the surface has to push outward on the particle:

λ1(θ) = 3mg cos θ − 2mg ≥ 0.   (17.119)

The critical angle lies where λ1(θc) = 0, the force of constraint going to zero. From Eq. (17.119), cos θc = 2/3, or

θc = 48° 11′   (17.120)

^15 Note that λ1 is the radial force exerted by the cylinder on the particle. Consideration of the physical problem shows that λ1 must depend on the angle θ. We permitted λ = λ(t). Now we are replacing the time dependence by an (unknown) angular dependence using θ = θ(t).
At this angle (neglecting all friction) our particle takes off. It must be admitted that this result can be obtained more easily by considering a varying centripetal force furnished by the radial component of the gravitational force. The example was chosen to illustrate the use of Lagrange’s undetermined multiplier without confusing the reader with a complicated physical system.  Example 17.7.3 THE SCHRÖDINGER WAVE EQUATION As a final illustration of a constrained minimum, let us find the Euler equations for a quantum mechanical problem δ ψ ∗ (x, y, z)H ψ(x, y, z) dx dy dz = 0, (17.121) with the normalization constraint ψ ∗ ψ dx dy dz = 1. (17.122) Equation (17.121) is a statement that the energy of the system is stationary, H being the quantum mechanical Hamiltonian for a particle of mass m, a differential operator, h¯ 2 2 ∇ + V (x, y, z). (17.123) 2m Equation (17.122) is a bound-state constraint, ψ is the usual wave function, a dependent variable, and ψ ∗ , its complex conjugate, is treated as a second16 dependent variable. The integrand in Eq. (17.121) involves second derivatives, which can be converted to first derivatives by integrating by parts:  2  ∂ψ ∗ ∂ψ ∗ ∂ψ  ∗∂ ψ − ψ dx = ψ dx. (17.124) ∂x  ∂x ∂x ∂x 2 H =− We assume either periodic boundary conditions (as in the Sturm–Liouville theory, Chapter 10) or that the volume of integration is so large that ψ and ψ ∗ vanish rapidly 16 Compare Section 6.1. 1070 Chapter 17 Calculus of Variations enough17 at the boundary. Then the integrated part vanishes and Eq. (17.121) may be rewritten as  2 h¯ ∗ ∗ δ (17.125) ∇ψ · ∇ψ + V ψ ψ dx dy dz = 0. 2m The function g of Eq. (17.100) is g= h¯ 2 ∇ψ ∗ · ∇ψ + V ψ ∗ ψ − λψ ∗ ψ 2m h¯ 2 ∗ (ψ ψx + ψy∗ ψy + ψz∗ ψz ) + V ψ ∗ ψ − λψ ∗ ψ, (17.126) 2m x again using the subscript x to denote ∂/∂x. For yi = ψ ∗ , Eq. (17.101) becomes = ∂ ∂g ∂ ∂g ∂ ∂g ∂g − − − = 0. ∗ ∗ ∗ ∂ψ ∂x ∂ψx ∂y ∂ψy ∂z ∂ψz∗ This yields V ψ − λψ − h¯ 2 (ψxx + ψyy + ψzz ) = 0, 2m or h¯ 2 2 (17.127) ∇ ψ + V ψ = λψ. 
Reference to Eq. (17.123) enables us to identify λ physically as the energy of the quantum mechanical system. With this interpretation, Eq. (17.127) is the celebrated Schrödinger wave equation.

This variational approach is more than just a matter of academic curiosity. It provides a very powerful method of obtaining approximate solutions of the wave equation (Rayleigh–Ritz variational method, Section 17.8).

Exercises

17.7.1 A particle, mass m, is on a frictionless horizontal surface. It is constrained to move so that θ = ωt (rotating radial arm, no friction). With the initial conditions t = 0, \dot r = 0, r = r_0:
(a) find the radial position as a function of time.
  ANS. r(t) = r_0 \cosh \omega t.
(b) find the force exerted on the particle by the constraint.
  ANS. F^{(c)} = 2m\dot r \omega = 2m r_0 \omega^2 \sinh \omega t.

^17 For example, \lim_{r \to \infty} r\psi(r) = 0.

17.7.2 A point mass m is moving over a flat, horizontal, frictionless plane. The mass is constrained by a string to move radially inward at a constant rate. Using plane polar coordinates (ρ, φ),

  \rho = \rho_0 - kt.

(a) Set up the Lagrangian.
(b) Obtain the constrained Lagrange equations.
(c) Solve the φ-dependent Lagrange equation to obtain ω(t), the angular velocity. What is the physical significance of the constant of integration that you get from your "free" integration?
(d) Using the ω(t) from part (c), solve the ρ-dependent (constrained) Lagrange equation to obtain λ(t). In other words, explain what is happening to the force of constraint as ρ → 0.

17.7.3 A flexible cable is suspended from two fixed points. The length of the cable is fixed. Find the curve that will minimize the total gravitational potential energy of the cable.
  ANS. Hyperbolic cosine.

17.7.4 A fixed volume of water is rotating in a cylinder with constant angular velocity ω. Find the curve of the water surface that will minimize the total potential energy of the water in the combined gravitational–centrifugal force field.
  ANS. Parabola.
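The answer quoted for Exercise 17.7.1(a) can be verified by direct integration of the radial equation m\ddot r = m r \omega^2, which follows once the constraint θ = ωt is imposed. A small illustrative sketch (naming is my own), using a classical fourth-order Runge–Kutta step:

```python
import math

def rk4_radial(omega, r0, t_end, steps=10000):
    """Integrate r'' = omega^2 * r with r(0) = r0, r'(0) = 0 by classical RK4."""
    h = t_end / steps
    r, v = r0, 0.0
    deriv = lambda r, v: (v, omega ** 2 * r)   # state derivatives (r', v')
    for _ in range(steps):
        k1 = deriv(r, v)
        k2 = deriv(r + h / 2 * k1[0], v + h / 2 * k1[1])
        k3 = deriv(r + h / 2 * k2[0], v + h / 2 * k2[1])
        k4 = deriv(r + h * k3[0], v + h * k3[1])
        r += h / 6 * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0])
        v += h / 6 * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1])
    return r

omega, r0, t = 1.0, 1.0, 2.0
print(rk4_radial(omega, r0, t))     # numerical solution at t = 2
print(r0 * math.cosh(omega * t))    # ANS of Exercise 17.7.1: r0 cosh(wt)
```

The two printed values agree to high accuracy, confirming r(t) = r_0 cosh ωt.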
17.7.5 (a) Show that, for a fixed-length perimeter, the figure with maximum area is a circle.
(b) Show that, for a fixed area, the curve with minimum perimeter is a circle.
Hint. The radius of curvature R is given by

  R = \frac{(r^2 + r_\theta^2)^{3/2}}{r^2 + 2r_\theta^2 - r r_{\theta\theta}}.

Note. The problems of this section, variation subject to constraints, are often called isoperimetric. The term arose from problems of maximizing area subject to a fixed perimeter — as in Exercise 17.7.5(a).

17.7.6 Show that requiring J, given by

  J = \int_a^b \left[ p(x) y_x^2 - q(x) y^2 \right] dx,

to have a stationary value subject to the normalizing condition

  \int_a^b y^2 w(x)\, dx = 1

leads to the Sturm–Liouville equation of Chapter 10:

  \frac{d}{dx}\left( p \frac{dy}{dx} \right) + qy + \lambda w y = 0.

Note. The boundary condition

  \left. p y_x y \right|_a^b = 0

is used in Section 10.1 in establishing the Hermitian property of the operator.

17.7.7 Show that requiring J, given by

  J = \int_a^b \int_a^b K(x, t) \varphi(x) \varphi(t)\, dx\, dt,

to have a stationary value subject to the normalizing condition

  \int_a^b \varphi^2(x)\, dx = 1

leads to the Hilbert–Schmidt integral equation, Eq. (16.89).
Note. The kernel K(x, t) is symmetric.

17.8 RAYLEIGH–RITZ VARIATIONAL TECHNIQUE

Exercise 17.7.6 opens up a relation between the calculus of variations and eigenfunction–eigenvalue problems. We may rewrite the expression of Exercise 17.7.6 as

  F[y(x)] = \frac{\int_a^b (p y_x^2 - q y^2)\, dx}{\int_a^b y^2 w\, dx},    (17.128)

in which the constraint appears in the denominator as a normalizing condition. After the unconstrained minimum of F has been found, y can be normalized without changing the stationary value of F, because stationary values of J correspond to stationary values of F. Then, from Exercise 17.7.6, when y(x) is such that J and F take on a stationary value, the optimum function y(x) satisfies the Sturm–Liouville equation

  \frac{d}{dx}\left( p \frac{dy}{dx} \right) + qy + \lambda w y = 0,    (17.129)

with λ the eigenvalue (not a Lagrange multiplier). Integrating the first term in the numerator of Eq.
(17.128) by parts and using the boundary condition

  \left. p y_x y \right|_a^b = 0,    (17.130)

we obtain

  F[y(x)] = -\int_a^b y \left[ \frac{d}{dx}\left( p \frac{dy}{dx} \right) + qy \right] dx \bigg/ \int_a^b y^2 w\, dx.    (17.131)

Then, substituting in Eq. (17.129), the stationary values of F[y(x)] are given by

  F[y(x)] = \lambda_n,    (17.132)

with λ_n the eigenvalue corresponding to the eigenfunction y_n. Equation (17.132), with F given by either Eq. (17.128) or Eq. (17.131), forms the basis of the Rayleigh–Ritz method for the computation of eigenfunctions and eigenvalues.

Ground State Eigenfunction

Suppose that we seek to compute the ground-state eigenfunction y_0 and eigenvalue^18 λ_0 of some complicated atomic or nuclear system. The classical example, for which no exact solution exists, is the helium atom problem. The eigenfunction y_0 is unknown, but we shall assume we can make a pretty good guess at an approximate function y, so mathematically we may write^19

  y = y_0 + \sum_{i=1}^{\infty} c_i y_i.    (17.133)

The c_i are small quantities. (How small depends on how good our guess was.) The y_i are orthonormalized eigenfunctions (also unknown), and therefore our trial function y is not normalized. Substituting the approximate function y into Eq. (17.131) and noting that

  \int_a^b y_i \left[ \frac{d}{dx}\left( p \frac{dy_j}{dx} \right) + q y_j \right] dx = -\lambda_i \delta_{ij},    (17.134)

we obtain

  F[y(x)] = \frac{\lambda_0 + \sum_{i=1}^{\infty} c_i^2 \lambda_i}{1 + \sum_{i=1}^{\infty} c_i^2}.    (17.135)

Here we have taken the eigenfunctions to be orthonormal — since they are solutions of the Sturm–Liouville equation, Eq. (17.129). We also assume that y_0 is nondegenerate. Now, if we replace \sum_i c_i^2 \lambda_i \to \sum_i c_i^2 \lambda_0 + \sum_i c_i^2 (\lambda_i - \lambda_0), we obtain

  F[y(x)] = \lambda_0 + \frac{\sum_{i=1}^{\infty} c_i^2 (\lambda_i - \lambda_0)}{1 + \sum_{i=1}^{\infty} c_i^2}.    (17.136)

Equation (17.136) contains two important results.

• Whereas the error in the eigenfunction y was O(c_i), the error in λ is only O(c_i^2). Even a poor approximation of the eigenfunctions may yield an accurate calculation of the eigenvalue.
• If λ_0 is the lowest eigenvalue (ground state), then, since λ_i − λ_0 > 0,

  F[y(x)] = \lambda \ge \lambda_0,    (17.137)

and our approximation is always on the high side, becoming lower and converging on λ_0 as our approximate eigenfunction y improves (c_i → 0). Note that Eq. (17.137) is a direct consequence of Eq. (17.135). More directly, F[y(x)] in Eq. (17.135) is a positively weighted average of the λ_i and therefore must be no smaller than the smallest λ_i, to wit, λ_0.

In practical problems in quantum mechanics, y often depends on parameters that may be varied to minimize F and thereby improve the estimate of the ground-state energy λ_0. This is the "variational method" discussed in quantum mechanics texts.

^18 This means that λ_0 is the lowest eigenvalue. It is clear from Eq. (17.128) that if p(x) ≥ 0 and q(x) ≤ 0 (compare Table 10.1), then F[y(x)] has a lower bound and this lower bound is nonnegative. Recall from Section 10.1 that w(x) ≥ 0.
^19 We are guessing at the form of the function. The normalization is irrelevant.

Example 17.8.1 VIBRATING STRING

A vibrating string, clamped at x = 0 and 1, satisfies the eigenvalue equation

  \frac{d^2 y}{dx^2} + \lambda y = 0    (17.138)

and the boundary condition y(0) = y(1) = 0. For this simple example we recognize immediately that y_0(x) = \sin \pi x (unnormalized) and λ_0 = π². But let us try out the Rayleigh–Ritz technique. With one eye on the boundary conditions, we try

  y(x) = x(1 - x).    (17.139)

Then, with p = 1 and w = 1, Eq. (17.128) yields

  F[y(x)] = \frac{\int_0^1 (1 - 2x)^2\, dx}{\int_0^1 x^2 (1 - x)^2\, dx} = \frac{1/3}{1/30} = 10.    (17.140)

This result, λ = 10, is a fairly good approximation (1.3% error)^20 of λ_0 = π² = 9.8696. You may have noted that y(x), Eq. (17.139), is not normalized to unity. The denominator in F[y(x)] compensates for the lack of unit normalization. F may also be calculated from Eq. (17.131), since Eq. (17.130) is satisfied by the y of Eq. (17.139).
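The quotient in Eq. (17.140) is easy to confirm numerically. The sketch below (an illustration with my own naming, not from the text) evaluates F[y] for the trial function y = x(1 − x) by Simpson's rule:

```python
import math

def simpson(f, a, b, n=1000):
    """Composite Simpson's rule on [a, b] with n (even) subintervals."""
    h = (b - a) / n
    s = f(a) + f(b)
    for i in range(1, n):
        s += f(a + i * h) * (4 if i % 2 else 2)
    return s * h / 3

# Trial function y = x(1 - x) and its derivative y' = 1 - 2x
y  = lambda x: x * (1 - x)
yp = lambda x: 1 - 2 * x

# Rayleigh quotient F[y] of Eq. (17.128) with p = 1, q = 0, w = 1
F = simpson(lambda x: yp(x) ** 2, 0, 1) / simpson(lambda x: y(x) ** 2, 0, 1)

print(F)              # close to 10, within quadrature error
print(math.pi ** 2)   # exact lowest eigenvalue, 9.8696...
```

As the text notes, the estimate lies about 1.3% above the exact eigenvalue π², on the high side as Eq. (17.137) requires.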
In the usual scientific calculation the eigenfunction would be improved by introducing more terms and adjustable parameters, such as

  y = x(1 - x) + a_2 x^2 (1 - x)^2.    (17.141)

It is convenient to have the additional terms orthogonal, but it is not necessary. The parameter a_2 is adjusted to minimize F[y(x)]. In this case, choosing a_2 = 1.1353 drives F[y(x)] down to 9.8697, very close to the correct eigenvalue.

Exercises

17.8.1 From Eq. (17.128) develop in detail the argument for when λ ≥ 0 or λ < 0. Explain the circumstances under which λ = 0, and illustrate with several examples.

17.8.2 An unknown function satisfies the differential equation

  y'' + \left( \frac{\pi}{2} \right)^2 y = 0

and the boundary conditions y(0) = 1, y(1) = 0.
(a) Calculate the approximation λ = F[y_{trial}] for y_{trial} = 1 - x^2.
(b) Compare with the exact eigenvalue.
  ANS. (a) λ = 2.5, (b) λ/λ_{exact} = 1.013.

17.8.3 In Exercise 17.8.2 use a trial function y = 1 - x^n.
(a) Find the value of n that will minimize F[y_{trial}].
(b) Show that the optimum value of n drives the ratio λ/λ_{exact} down to 1.003.
  ANS. (a) n = 1.7247.

17.8.4 A quantum mechanical particle in a sphere (Example 11.7.1) satisfies

  \nabla^2 \psi + k^2 \psi = 0,

with k² = 2mE/ℏ². The boundary condition is that ψ(r = a) = 0, where a is the radius of the sphere. For the ground state [where ψ = ψ(r)] try an approximate wave function

  \psi_a(r) = 1 - \left( \frac{r}{a} \right)^2

and calculate an approximate eigenvalue k_a^2.
Hint. To determine p(r) and w(r), put your equation in self-adjoint form (in spherical polar coordinates).
  ANS. k_a^2 = \frac{10.5}{a^2}, \qquad k_{exact}^2 = \frac{\pi^2}{a^2}.

^20 The closeness of the fit may be checked by a Fourier sine expansion (compare Exercise 14.2.3) over the half-interval [0, 1] or, equivalently, over the interval [−1, 1], with y(x) taken to be odd. Because of the even symmetry relative to x = 1/2, only odd-n terms appear:

  y(x) = x(1 - x) = \frac{8}{\pi^3} \left[ \sin \pi x + \frac{\sin 3\pi x}{3^3} + \frac{\sin 5\pi x}{5^3} + \cdots \right].
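The one-parameter minimization of Eq. (17.141) in Example 17.8.1 can also be sketched numerically. In this illustrative snippet (naming is my own), F[y] is evaluated by Simpson's rule and minimized over a_2 by a simple grid scan:

```python
import math

def simpson(f, a, b, n=200):
    """Composite Simpson's rule on [a, b] with n (even) subintervals."""
    h = (b - a) / n
    s = f(a) + f(b)
    for i in range(1, n):
        s += f(a + i * h) * (4 if i % 2 else 2)
    return s * h / 3

def F(a2):
    """Rayleigh quotient for y = x(1-x) + a2 x^2 (1-x)^2 (p = 1, q = 0, w = 1)."""
    y  = lambda x: x * (1 - x) + a2 * (x * (1 - x)) ** 2
    yp = lambda x: (1 - 2 * x) * (1 + 2 * a2 * x * (1 - x))  # chain rule
    return simpson(lambda x: yp(x) ** 2, 0, 1) / simpson(lambda x: y(x) ** 2, 0, 1)

# Scan a2 on a grid over [0.9, 1.4] and keep the minimizing value
a_best = min([0.9 + 0.001 * i for i in range(501)], key=F)
print(a_best, F(a_best))   # roughly a2 ≈ 1.13, F ≈ 9.8697, just above pi^2
```

The minimum sits on an extremely flat plateau in a_2, which is why even a coarse scan reproduces the eigenvalue estimate to five significant figures.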
17.8.5 The wave equation for the quantum mechanical oscillator may be written as

  \frac{d^2 \psi(x)}{dx^2} + (\lambda - x^2) \psi(x) = 0,

with λ = 1 for the ground state [Eq. (13.18)]. Take

  \psi_{trial} = \begin{cases} 1 - \dfrac{x^2}{a^2}, & x^2 \le a^2, \\ 0, & x^2 > a^2, \end{cases}

for the ground-state wave function (with a² an adjustable parameter) and calculate the corresponding ground-state energy. How much error do you have?
Note. Your parabola is really not a very good approximation to a Gaussian exponential. What improvements can you suggest?

17.8.6 The Schrödinger equation for a central potential may be written as

  \mathcal{L} u(r) + \frac{\hbar^2 l(l+1)}{2Mr^2} u(r) = E u(r).

The l(l + 1) term, the angular momentum barrier, comes from splitting off the angular dependence (Section 9.3). Treating this term as a perturbation, use your variational technique to show that E > E_0, where E_0 is the energy eigenvalue of \mathcal{L} u_0 = E_0 u_0 corresponding to l = 0. This means that the minimum-energy state will have l = 0, zero angular momentum.
Hint. You can expand u(r) as u_0(r) + \sum_{i=1}^{\infty} c_i u_i, where \mathcal{L} u_i = E_i u_i, E_i > E_0.

17.8.7 In the matrix eigenvector–eigenvalue equation

  A r_i = \lambda_i r_i,

where A is an n × n Hermitian matrix, assume for simplicity that its n real eigenvalues (Section 3.5) are distinct, λ_1 being the largest. If r is an approximation to r_1,

  r = r_1 + \sum_{i=2}^{n} \delta_i r_i,

show that

  r^\dagger A r \le \lambda_1 r^\dagger r

and that the error in λ_1 is of the order |δ_i|². Take |δ_i| ≪ 1.
Hint. The n r_i form a complete orthogonal set spanning the n-dimensional (complex) space.

17.8.8 The variational solution of Example 17.8.1 may be refined by taking y = x(1 − x) + a_2 x^2(1 − x)^2. Using numerical quadrature, calculate λ_{approx} = F[y(x)], Eq. (17.128), for a fixed value of a_2. Vary a_2 to minimize λ. Calculate the value of a_2 that minimizes λ and calculate λ itself, both to five significant figures. Compare your eigenvalue λ with π².

Additional Readings

Bliss, G. A., Calculus of Variations.
The Mathematical Association of America. LaSalle, IL: Open Court Publishing Co. (1925). As one of the older texts, this is still a valuable reference for details of problems such as minimum-area problems.

Courant, R., and H. Robbins, What Is Mathematics? 2nd ed. New York: Oxford University Press (1996). Chapter VII contains a fine discussion of the calculus of variations, including soap-film solutions to minimum-area problems.

Lanczos, C., The Variational Principles of Mechanics, 4th ed. Toronto: University of Toronto Press (1970), reprinted, Dover (1986). This book is a very complete treatment of variational principles and their applications to the development of classical mechanics.

Sagan, H., Boundary and Eigenvalue Problems in Mathematical Physics. New York: Wiley (1961), reprinted, Dover (1989). This delightful text could also be listed as a reference for Sturm–Liouville theory, Legendre and Bessel functions, and Fourier series. Chapter 1 is an introduction to the calculus of variations, with applications to mechanics. Chapter 7 picks up the calculus of variations again and applies it to eigenvalue problems.

Sagan, H., Introduction to the Calculus of Variations. New York: McGraw-Hill (1969), reprinted, Dover (1983). This is an excellent introduction to the modern theory of the calculus of variations, which is more sophisticated and complete than his 1961 text. Sagan covers sufficiency conditions and relates the calculus of variations to problems of space technology.

Weinstock, R., Calculus of Variations. New York: McGraw-Hill (1952); New York: Dover (1974). A detailed, systematic development of the calculus of variations and applications to Sturm–Liouville theory and physical problems in elasticity, electrostatics, and quantum mechanics.

Yourgrau, W., and S. Mandelstam, Variational Principles in Dynamics and Quantum Theory, 3rd ed. Philadelphia: Saunders (1968); New York: Dover (1979).
This is a comprehensive, authoritative treatment of variational principles. The discussions of the historical development and the many metaphysical pitfalls are of particular interest.

CHAPTER 18 NONLINEAR METHODS AND CHAOS

Our mind would lose itself in the complexity of the world if that complexity were not harmonious; like the short-sighted, it would only see the details, and would be obliged to forget each of these details before examining the next, because it would be incapable of taking in the whole. The only facts worthy of our attention are those which introduce order into this complexity and so make it accessible to us.
HENRI POINCARÉ

18.1 INTRODUCTION

The origin of nonlinear dynamics goes back to the work of the renowned French mathematician Henri Poincaré on celestial mechanics at the turn of the twentieth century. Classical mechanics is, in general, nonlinear in its dependence on the coordinates of the particles and on the velocities; one example is vibration with a nonlinear restoring force. The Navier–Stokes equations are nonlinear, which makes hydrodynamics difficult to handle. For almost four centuries, however, following the lead of Galileo, Newton, and others, physicists focused on the predictable, effectively linear responses of classical systems, which usually have both linear and nonlinear properties. Poincaré was the first to understand the possibility of completely irregular, or "chaotic," behavior of solutions of nonlinear differential equations that are characterized by an extreme sensitivity to initial conditions: Given slightly different initial conditions, arising from errors in measurements for example, solutions can grow exponentially apart with time, so the system soon becomes effectively unpredictable, or "chaotic." This property of chaos, often called the "butterfly effect," will be discussed in Section 18.3.
Since the rediscovery of this effect by Lorenz in meteorology in the early 1960s, the field of nonlinear dynamics has grown tremendously, and nonlinear dynamics and chaos theory have now entered the mainstream of physics.

Numerous examples of nonlinear systems have been found to display irregular behavior. Surprisingly, order, in the sense of quantitative similarities as universal properties, or other regularities, may arise spontaneously in chaos; as a first example, Feigenbaum's universal numbers α and δ will appear in Section 18.2. Dynamical chaos is not a rare phenomenon but is ubiquitous in nature. Examples include the irregular shapes of clouds, coastlines, and other landscapes, which are examples of fractals, to be discussed in Section 18.3, as well as turbulent flow of fluids, water dripping from a faucet, and, of course, the weather. The damped, driven pendulum is among the simplest systems displaying chaotic motion.

Necessary conditions for chaotic motion in dynamical systems described by first-order differential equations are

• at least three dynamical variables, and
• one or more nonlinear terms coupling two or several of them.

As in classical mechanics, the space of the time-dependent dynamical variables of a system of coupled differential equations is called its phase space. In such deterministic systems, trajectories in phase space are not allowed to cross. If they did, the system would have a choice at each intersection and would not be deterministic. In two dimensions such nonlinear systems allow only for fixed points. An example is a damped pendulum, whose second-order equation of motion, \ddot\theta = f(\dot\theta, \theta), can be written as two first-order equations,

  \omega = \dot\theta, \qquad \dot\omega = f(\omega, \theta),

involving just two dynamical variables, ω(t) and θ(t). In the undamped case, there will be only periodic motion and equilibrium points.
With three or more dynamical variables (for example, the damped, driven pendulum, again written as first-order coupled ODEs), more complicated nonintersecting trajectories are possible. These include chaotic motion, called deterministic chaos. A central theme in chaos is the evolution of complex forms from the repetition of simple but nonlinear operations; this is being recognized as a fundamental organizing principle of nature. While nonlinear differential equations are a natural place in physics for chaos to occur, the mathematically simpler iteration of nonlinear functions provides a quicker entry to chaos theory, which we will pursue first, in Section 18.2. In this context, chaos already arises in certain nonlinear functions of a single variable.

18.2 THE LOGISTIC MAP

The nonlinear one-dimensional iteration, or difference equation,

  x_{n+1} = \mu x_n (1 - x_n), \qquad x_n \in [0, 1], \quad 1 < \mu < 4,    (18.1)

is called the logistic map. It is patterned after the nonlinear differential equation dx/dt = µx(1 − x), used by P. F. Verhulst in 1845 to model the development of a breeding population whose generations do not overlap. The density of the population at time n is x_n. The linear term simulates the birth rate and the nonlinear term the death rate of the species in a constant environment controlled by the parameter µ. The quadratic function f_µ(x) = µx(1 − x) is chosen because it has one maximum in the interval [0, 1] and is zero at the endpoints, f_µ(0) = 0 = f_µ(1). The maximum at x_m = 1/2
In a rather qualitative sense the simple logistic map of Eq. (18.1) is representative of many dynamical systems in biology, chemistry, and physics. Figure (18.1) shows a plot of fµ (x) = µx(1 − x) along with the diagonal and a series of points (x0 , x1 , . . .) called a cycle. To construct a cycle for a fixed value of µ (= 2 in Fig. 18.1), we choose some x0 ∈ [0, 1] [x0 = 0.1 in Eq. (18.1)]. The vertical line through x0 intersects the curve fµ (x) at x1 = fµ (x0 ) (= 0.18 in Fig. 18.1). Proceeding horizontally from x1 leads us to x1 on the diagonal. Going vertically from the abscissa x1 gives x2 = fµ (x1 ) on the curve (x2 = 0.2952 in Fig. 18.1), etc. That is, straight vertical lines show the intersections with the curve fµ and horizontal lines convert fµ (xi ) = xi+1 to the next abscissa. For any initial value x0 with 0 < x0 < 1, the xi converge toward the fixed point x ∗ , or attractor [= (0.5, 0.5) in Fig. 18.1]: fµ (x ∗ ) = µx ∗ (1 − x ∗ ) = x ∗ , i.e., x ∗ = 1 − 1 . µ (18.3) The interval (0, 1) defines a basin of attraction for the fixed point x ∗ . The attractor x ∗ is stable provided the slope |fµ′ (x ∗ )| = |2 − µ| < 1, or 1 < µ < 3. This can be seen from a Taylor expansion of an iteration near the attractor: xn+1 = fµ (xn ) = fµ (x ∗ ) + fµ′ (x ∗ )(xn − x ∗ ) + · · · , i.e., xn+1 − x ∗ = fµ′ (x ∗ ), xn − x ∗ 1082 Chapter 18 Nonlinear Methods and Chaos FIGURE 18.2 Part of the bifurcation plot for the logistic map: fixed points x ∗ versus µ. upon dropping all higher-order terms. Thus, if |fµ′ (x ∗ )| < 1, the next iterate, xn+1 , lies closer to x ∗ than does xn , implying convergence to and stability of the fixed point. However, if |fµ′ (x ∗ )| > 1, xn+1 moves farther from x ∗ than does xn implying divergence and instability. Given the continuity of fµ′ in µ, the fixed point and its properties persist when the parameter (here µ) is slightly varied. For µ > 1 and x0 < 0 or x0 > 1, it is easy to verify graphically or analytically that the xi → −∞. 
The origin, x = 0, is a repellent fixed point, since f_µ'(0) = µ > 1 and the iterates move away from it. Since f_µ'(1) = −µ, the point x = 1 is a repellor for µ > 1. When f_µ'(x*) = µ(1 − 2x*) = 2 − µ = −1 is reached, for µ = 3, two new fixed points occur, shown as the two branches in Fig. 18.2 as µ increases beyond the value 3. They can be located by solving

  x_2^* = f_\mu(f_\mu(x_2^*)) = \mu^2 x_2^* (1 - x_2^*) \left[ 1 - \mu x_2^* (1 - x_2^*) \right]

for x_2^*. Here it is convenient to abbreviate f^{(1)}(x) = f_µ(x), f^{(2)}(x) = f_µ(f_µ(x)) for the second iterate, etc. Now we drop the common factor x_2^* and then reduce the remaining third-order polynomial to second order by recalling that a fixed point of f_µ is also a fixed point of f^{(2)}, because f_µ(f_µ(x*)) = f_µ(x*) = x*. So x_2^* = x* is one solution. Factoring it out, we obtain the quadratic polynomial:

  0 = \mu^2 \left[ 1 - (\mu + 1)x_2^* + 2\mu (x_2^*)^2 - \mu (x_2^*)^3 \right] - 1
    = (\mu - 1 - \mu x_2^*) \left[ \mu + 1 - \mu(\mu + 1)x_2^* + \mu^2 (x_2^*)^2 \right].

The roots of the quadratic polynomial are

  x_2^* = \frac{1}{2\mu} \left[ \mu + 1 \pm \sqrt{(\mu + 1)(\mu - 3)} \right],

which are the two branches in Fig. 18.2 for µ > 3, starting at x_2^* = 2/3. This shows that both fixed points bifurcate at the same value of µ. Each x_2^* is a point of period 2 and is invariant under two iterations of the map f_µ. The iterates oscillate between both branches of fixed points x_2^*. A point x_0 is defined as a periodic point of period n for f_µ if f^{(n)}(x_0) = x_0 but f^{(i)}(x_0) ≠ x_0 for 0 < i < n. Thus, for 3 < µ < 3.45 (see Fig. 18.2) the stable attractor bifurcates, or splits, into two fixed points x_2^*. The bifurcation at µ = 3, where the doubling occurs, is called a pitchfork bifurcation because of its characteristic (rounded-Y) shape. A bifurcation is a sudden change in the evolution of the system, such as a splitting of one curve into two curves. As µ increases beyond 3, the derivative df^{(2)}/dx decreases from unity to −1.
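The period-2 roots can be verified directly. The sketch below (illustrative; naming is my own) computes both branches for a sample µ > 3 and checks that each is returned to itself after two iterations of the map, but not after one:

```python
import math

def f(x, mu):
    """One step of the logistic map, Eq. (18.1)."""
    return mu * x * (1 - x)

mu = 3.2
# Period-2 fixed points x2* = [mu + 1 ± sqrt((mu + 1)(mu - 3))] / (2 mu)
disc = math.sqrt((mu + 1) * (mu - 3))
roots = [(mu + 1 + disc) / (2 * mu), (mu + 1 - disc) / (2 * mu)]

for x2 in roots:
    # Each root satisfies f(f(x2)) = x2, while f(x2) is the *other* root
    print(x2, f(x2, mu), f(f(x2, mu), mu))
```

Running this shows the iterates oscillating between the two branches, exactly as described in the text.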
For µ = 1 + √6 ≈ 3.44949, which can be derived from

  \frac{df^{(2)}}{dx}\bigg|_{x=x^*} = -1, \qquad f^{(2)}(x^*) = x^*,

each branch of fixed points bifurcates again, so that x_4^* = f^{(4)}(x_4^*), that is, has period 4. For µ = 1 + √6 these are x_4^* = 0.43996 and x_4^* = 0.849938. With increasing period doublings it becomes impossible to obtain analytic solutions. The iterations are better done numerically on a programmable pocket calculator or a personal computer, whose rapid improvement (computer-driven graphics, in particular) and wide distribution since the 1970s have accelerated the development of chaos theory.

The sequence of bifurcations continues with ever longer periods until we reach µ_∞ = 3.5699456..., where an infinite number of bifurcations occur. Near bifurcation points, fluctuations, rounding errors in initial conditions, etc., play an increasing role, because the system has to choose between two possible branches and becomes much more sensitive to small perturbations. In the present case the x_n never repeat. The bands of fixed points x* begin forming a continuum (shown dark in Fig. 18.2); this is where chaos starts.

This increasing period doubling is the route to chaos for the logistic map, and it is characterized by a universal constant δ, called a Feigenbaum number. If the first bifurcation occurs at µ_1 = 3, the second at µ_2 = 3.45, ..., then the ratio of spacings between the µ_n converges to δ:

  \lim_{n\to\infty} \frac{\mu_n - \mu_{n-1}}{\mu_{n+1} - \mu_n} = \delta = 4.66920161\ldots.    (18.4)

From the bifurcation plot in Fig. 18.2 we obtain

  \frac{\mu_2 - \mu_1}{\mu_3 - \mu_2} = \frac{3.45 - 3.00}{3.54 - 3.45} = 5.0

as a first approximation for the dimensionless δ. The corresponding critical-period-2^n points x_n^* lead to another universal and dimensionless quantity:

  \lim_{n\to\infty} \frac{x_n^* - x_{n-1}^*}{x_{n+1}^* - x_n^*} = \alpha = 2.5029\ldots.    (18.5)

Again reading off Fig. 18.2's approximate values for the x_n^*, we obtain

  \frac{0.44 - 0.67}{0.37 - 0.44} = 3.3

as a first approximation for α.

The Feigenbaum number δ is universal for the route to chaos via period doublings for all maps with a quadratic maximum similar to the logistic map. It is an example of order in chaos. Experience shows that its validity is even wider, including two-dimensional (dissipative) systems and twice continuously differentiable functions with subharmonic bifurcations.^1 When a map behaves like |x − x_m|^{1+ε} near its maximum x_m, for some ε between 0 and 1, the Feigenbaum number depends on the exponent ε; thus δ(ε) varies between δ(1), given in Eq. (18.4) for quadratic maps, and δ(0) = 2 for ε = 0.^2

Exercises

18.2.1 Show that x* = 1 is a nontrivial fixed point of the map x_{n+1} = x_n exp[r(1 − x_n)], with slope 1 − r, so that the equilibrium is stable if 0 < r < 2.

18.2.2 Draw a bifurcation diagram for the exponential map of Exercise 18.2.1 for r > 1.9.

18.2.3 Determine the fixed points of the cubic map

  x_{n+1} = a x_n^3 + (1 - a) x_n

for 0 < a < 4 and 0 < x_n < 1.

18.2.4 Write the time-delayed logistic map x_{n+1} = µx_n(1 − x_{n−1}) as a two-dimensional map,

  x_{n+1} = \mu x_n (1 - y_n), \qquad y_{n+1} = x_n,

and determine some of its fixed points.

18.2.5 Show that the second bifurcation for the logistic map, which leads to cycles of period 4, is located at µ = 1 + √6.

18.2.6 Construct a nonlinear iteration function with Feigenbaum δ in the interval 2 < δ < 4.6692....

18.2.7 Determine the Feigenbaum δ for (a) the exponential map of Exercise 18.2.1, (b) some cubic map of Exercise 18.2.3, (c) the time-delayed logistic map of Exercise 18.2.4.

18.2.8 Repeat Exercise 18.2.7 for Feigenbaum's α instead of δ.

18.2.9 Find numerically the first four points µ for period doubling of the logistic map, and then obtain the first two approximations to the Feigenbaum δ. Compare with Fig. 18.2 and Eq. (18.4).

18.2.10 Find numerically the values of µ where the cycles of period 1, 3, 4, 5, 6 begin and then where they become unstable.
  Check values. For period 3, µ = 3.8284; period 4, µ = 3.9601; period 5, µ = 3.7382; period 6, µ = 3.6265.

18.2.11 Repeat Exercise 18.2.9 for Feigenbaum's α.

^1 More details and computer codes for the logistic map are given by G. L. Baker and J. P. Gollub, Chaotic Dynamics: An Introduction, Cambridge, UK: Cambridge University Press (1990).
^2 For other maps and a discussion of the fascinating history of how chaos became again a hot research topic, see D. Holton and R. M. May in The Nature of Chaos (T. Mullin, ed.), Oxford, UK: Clarendon Press (1993), Section 5, p. 95; and Gleick's Chaos (1987) — see the Additional Readings.

18.3 SENSITIVITY TO INITIAL CONDITIONS AND PARAMETERS

Lyapunov Exponents

In Section 18.2 we described how, as we approach the period-doubling accumulation parameter value µ_∞ = 3.5699... from below, the period n + 1 of cycles (x_0, x_1, ..., x_n) with x_{n+1} = x_0 gets longer. It is also easy to check that the distances

  d_n = \left| f^{(n)}(x_0 + \varepsilon) - f^{(n)}(x_0) \right|    (18.6)

grow as well, for small ε > 0. From experience with chaotic behavior we find that this distance increases exponentially with n → ∞; that is, d_n/ε = e^{\lambda n}, or

  \lambda = \frac{1}{n} \ln\left( \frac{|f^{(n)}(x_0 + \varepsilon) - f^{(n)}(x_0)|}{\varepsilon} \right),    (18.7)

where λ is a Lyapunov exponent for the cycle. For ε → 0 we may rewrite Eq. (18.7) in terms of derivatives as

  \lambda = \frac{1}{n} \ln\left| \frac{df^{(n)}(x_0)}{dx} \right| = \frac{1}{n} \sum_{i=0}^{n-1} \ln\left| f'(x_i) \right|,    (18.8)

using the chain rule of differentiation for df^{(n)}(x)/dx, where

  \frac{df^{(2)}(x_0)}{dx} = \frac{df_\mu}{dx}\bigg|_{x=f_\mu(x_0)} \frac{df_\mu}{dx}\bigg|_{x=x_0} = f_\mu'(x_1) f_\mu'(x_0)    (18.9)

and f_µ' = df_µ/dx, etc. Our Lyapunov exponent has been calculated at the point x_0, and Eq. (18.8) is exact for one-dimensional maps. As a measure of the sensitivity of the system to changes in initial conditions, one point is not enough to determine λ in higher-dimensional dynamical systems in general, where the motion often is bounded, so the d_n cannot go to ∞.
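Equation (18.8) is straightforward to apply to the logistic map. The sketch below (illustrative; names are my own) averages ln|f'(x_i)| along a trajectory after discarding a transient. For µ = 4 the known result is λ = ln 2, while for µ = 2.5 the stable fixed point gives λ < 0:

```python
import math

def lyapunov(mu, x0=0.3, transient=1000, n=100000):
    """Average Lyapunov exponent of the logistic map via Eq. (18.8)."""
    x = x0
    for _ in range(transient):          # discard transient iterations
        x = mu * x * (1 - x)
    total = 0.0
    for _ in range(n):
        total += math.log(abs(mu * (1 - 2 * x)))   # ln|f'(x_i)|
        x = mu * x * (1 - x)
    return total / n

print(lyapunov(2.5))   # negative: stable fixed point, no chaos
print(lyapunov(4.0))   # close to ln 2 = 0.6931...: fully developed chaos
```

The sign of the result is the diagnostic described in the text: λ < 0 for finite-period cycles, λ > 0 in the chaotic regime.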
In such cases we repeat the procedure for several points on the trajectory and average over them. This way we obtain the average Lyapunov exponent for the sample. This average value is often called, and taken as, the Lyapunov exponent.

The Lyapunov exponent λ is a quantitative measure of chaos: A one-dimensional iterated function similar to the logistic map has chaotic cycles (x_0, x_1, ...) for the parameter µ if the average Lyapunov exponent is positive for that value of µ. Any such initial point x_0 is called a strange or chaotic attractor (the shaded region in Fig. 18.2). For cycles of finite period, λ is negative. This is the case for µ < 3, for µ < µ_∞, and even in the periodic window at µ ∼ 3.627 inside the chaotic region of Fig. 18.2. At bifurcation points, λ = 0. For µ > µ_∞ the Lyapunov exponent is positive, except in the periodic windows, where λ < 0, and λ grows with µ. In other words, the system becomes more chaotic as the control parameter µ increases. In the chaos region of the logistic map there is a scaling law for the average Lyapunov exponent (we do not derive it),

  \lambda(\mu) = \lambda_0 (\mu - \mu_\infty)^{\ln 2/\ln \delta},    (18.10)

where ln 2/ln δ ∼ 0.445, δ is the universal Feigenbaum number of Section 18.2, and λ_0 is a constant. This relation, Eq. (18.10), is reminiscent of a physical observable at a (second-order) phase transition. The exponent in Eq. (18.10) is a universal number; the Lyapunov exponent plays the role of an order parameter, while µ − µ_∞ is the analog of T − T_c, where T_c is the critical temperature at which the phase transition occurs.

Fractals

In dissipative chaotic systems (but rarely in conservative Hamiltonian systems), new geometric objects with intricate shapes often appear that are called fractals because of their noninteger dimension. Fractals are irregular geometric objects whose dimension is typically not integral and that exist at many scales, so that their smaller parts resemble their larger parts.
Intuitively, a fractal is a set that is (approximately) self-similar under magnification. A set of attracting points with noninteger dimension is called a strange attractor.

We need a quantitative measure of dimensionality in order to describe fractals. Unfortunately, there are several definitions, with usually different numerical values, none of which has yet become a standard. For strictly self-similar sets, one measure suffices. More complicated (for instance, only approximately self-similar) sets require more measures for their complete description. The simplest is the box-counting dimension, due to Kolmogorov and Hausdorff. For a one-dimensional set, we cover the curve by line segments of length R. In two dimensions the boxes are squares of area R², in three dimensions cubes of volume R³, etc. Then we count the number N(R) of boxes needed to cover the set. Letting R go to zero, we expect N to scale as N(R) ∼ R^{−d}. Taking the logarithm, the box-counting dimension is defined as

  d \equiv \lim_{R \to 0} \frac{\ln N(R)}{\ln(1/R)}.    (18.11)

For example, in a two-dimensional space a single point is covered by one square, so ln N(R) = 0 and d = 0. A finite set of isolated points also has dimension d = 0. For a differentiable curve of length L, N(R) ∼ L/R as R → 0, so d = 1 from Eq. (18.11), as expected.

Let us now construct a more irregular set, the Koch curve. We start with a line segment of unit length (Fig. 18.3) and remove the middle third. Then we replace it with two segments of length 1/3, which form a triangle (Fig. 18.3). We iterate this procedure with each segment ad infinitum. The resulting Koch curve is infinitely long and is nowhere differentiable, because of the infinitely many discontinuous changes of slope. At the nth step each line segment has length R_n = 3^{−n} and there are N(R_n) = 4^n segments. Hence its dimension is d = ln 4/ln 3 = 1.26..., which is more than a curve but less than a surface.
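For the strictly self-similar Koch curve the box-counting ratio in Eq. (18.11) is already exact at every finite stage, since N(R_n) = 4^n and R_n = 3^{−n}. A small sketch (naming is my own) makes the point:

```python
import math

def koch_dimension(n):
    """Box-counting estimate ln N / ln(1/R) at stage n of the Koch curve."""
    segments = 4 ** n        # N(R_n): each segment spawns four at the next stage
    length = 3.0 ** (-n)     # R_n: each stage shrinks segments by a factor of 3
    return math.log(segments) / math.log(1.0 / length)

for n in (1, 2, 5, 10):
    print(n, koch_dimension(n))   # constant at ln 4 / ln 3 = 1.2618...
```

Because ln(4^n)/ln(3^n) = ln 4/ln 3 for every n, no limit is actually needed here; that is what "strictly self-similar" buys.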
Because the Koch curve results from iteration of the first step, it is strictly self-similar. For the logistic map the box-counting dimension at a period–doubling accumulation point µ∞ is 0.5388 . . . , which is a universal number for iterations of functions in one variable with a quadratic maximum. To see roughly how this comes about, consider the pairs of line segments originating from successive bifurcation points for a given parameter µ in the chaos regime (see Fig. 18.2). Imagine removing the interior space from the chaotic bands. When we go to the next bifurcation, the relevant scale parameter is α = 2.5029 . . . from Eq. (18.5). Suppose we need 2n line segments of length R to cover 2n bands. In the 18.3 Sensitivity to Initial Conditions and Parameters 1087 FIGURE 18.3 Construction of the Koch curve by iterations. next stage then we need 2n+1 segments of length R/α to cover the bands. This yields a dimension d = − ln(2n /2n+1 )/ ln α = 0.4498 . . . . This crude estimate can be improved by taking into account that the width between neighboring pairs of line segments differs by 1/α (see Fig. 18.2). The improved estimate, 0.543, is closer to 0.5388. . . . This example suggests that when the fractal set does not have a simple self-similar structure, then the box-counting dimension depends on the box-construction method. Finally, we turn to the beautiful fractals that are surprisingly easy to generate and whose color pictures had considerable impact. For complex c = a + ib, the corresponding quadratic complex map involving the complex variable z = x + iy, zn+1 = zn2 + c, (18.12) looks deceptively simple, but the equivalent two-dimensional map in terms of the real variables xn+1 = xn2 − yn2 + a, yn+1 = 2xn yn + b (18.13) reveals already more of its complexity. 
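Eqs. (18.12) and (18.13) can be compared directly, since Python has complex arithmetic built in. A minimal sketch (function names are illustrative): iterate both forms from the same start and confirm they agree term by term; the same loop also gives an escape-time count of the kind used to color Mandelbrot pictures.

```python
def step_complex(z, c):
    return z * z + c                           # Eq. (18.12)

def step_real(x, y, a, b):
    return x * x - y * y + a, 2 * x * y + b    # Eq. (18.13)

def escape_time(c, R=2.0, max_iter=100):
    """Iterations until |z_n| > R from z_0 = 0; None if the orbit stays bounded."""
    z = 0j
    for m in range(max_iter):
        z = step_complex(z, c)
        if abs(z) > R:
            return m + 1
    return None

# the two forms agree term by term
z, (x, y) = 0.1 + 0.2j, (0.1, 0.2)
c, (a, b) = -0.4 + 0.6j, (-0.4, 0.6)
for _ in range(10):
    z = step_complex(z, c)
    x, y = step_real(x, y, a, b)
    assert abs(z - complex(x, y)) < 1e-12

print(escape_time(1.0))    # c = 1 escapes quickly
print(escape_time(-1.0))   # c = -1 stays bounded (period-2 cycle 0, -1)
```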
This map forms the basis for some of Mandelbrot’s beautiful multicolor fractal pictures (we refer the reader to Mandelbrot (1988) and Peitgen and Richter (1986) in the Additional Readings), and it has been found to generate rather intricate shapes for various c ≠ 0. For example, the Julia set of a map zn+1 = F(zn) is defined as the set of all its repelling fixed or periodic points. Thus it forms the boundary between initial conditions of a two-dimensional iterated map leading to iterates that diverge and those that stay within some finite region of the complex plane. For the case c = 0 and F(z) = z², the Julia set can be shown to be just a circle about the origin of the complex plane. Yet, just by adding a constant c ≠ 0, the Julia set becomes fractal. For instance, for c = −1 one finds a fractal necklace with infinitely many loops (see Devaney (1989) in the Additional Readings). While the Julia set is drawn in the complex plane, the Mandelbrot set is constructed in the two-dimensional parameter space c = (a, b) = a + bi. It is constructed as follows. Starting from the initial value z0 = 0 = (0, 0), one searches Eq. (18.12) for parameter values c so that the iterates {zn} do not diverge to ∞. Each color outside the fractal boundary of the Mandelbrot set represents a given number of iterations m, say, needed for the zn to go beyond a specified absolute (real) value R, |zm| > R > |zm−1|. For real parameter value c = a, the resulting map, xn+1 = xn² + a, is equivalent to the logistic map, with period-doubling bifurcations (see Section 18.2) as a increases on the real axis inside the Mandelbrot set.

Exercises

18.3.1 Use a programmable pocket calculator (or a personal computer with BASIC or FORTRAN or symbolic software such as Mathematica or Maple) to obtain the iterates xᵢ of an initial 0 < x0 < 1 and f′µ(xᵢ) for the logistic map. Then calculate the Lyapunov exponent for cycles of period 2, 3, . . .
of the logistic map for 2 < µ < 3.7. Show that for µ < µ∞ the Lyapunov exponent λ is 0 at bifurcation points and negative elsewhere, while for µ > µ∞ it is positive except in periodic windows. Hint. See Fig. 9.3 of Hilborn (1994) in the Additional Readings. 18.3.2 Consider the map xn+1 = F (xn ) with F (x) = + a + bx, x < 1, c + dx, x > 1, for b > 0 and d < 0. Show that its Lyapunov exponent is positive when b > 1, d < −1. Plot a few iterations in the (xn+1 , xn ) plane. 18.4 NONLINEAR DIFFERENTIAL EQUATIONS In Section 18.1 we mentioned nonlinear differential equations (abbreviated as NDEs) as the natural place in physics for chaos to occur, but continued with the simpler iteration of nonlinear functions of one variable (maps). Here we briefly address the much broader area of NDEs and the far greater complexity in the behavior of their solutions. However, maps and systems of solutions of NDEs are closely related. The latter can often be analyzed in terms of discrete maps. One prescription is the so-called Poincaré section of a system of NDE solutions. Placing a plane transverse into a trajectory (of a solution of a NDE), it intersects the plane in a series of points at increasing discrete times, for example, in Fig. 18.4 (x(t1 ), y(t1 )) = (x1 , y1 ), (x2 , y2 ), . . . , which are recorded and graphically or numerically analyzed for fixed points, period-doubling bifurcations, etc. This method is useful when solutions of NDEs are obtained numerically in computer simulations so that one can generate Poincaré sections at various locations and with different orientations, with further analysis leading to two-dimensional iterated maps xn+1 = F1 (xn , yn ), yn+1 = F2 (xn , yn ) (18.14) stored by the computer. Extracting the functions Fj analytically or graphically is not always easy, though. Let us start with a few classical examples of NDEs. In Chapter 9 we have already discussed the soliton solution of the nonlinear Korteweg–de Vries PDE, Eq. (9.11). 
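The Poincaré-section prescription can be tried on a system whose return map is known in closed form: for the weakly damped oscillator ẍ + 2aẋ + x = 0, successive intersections with the section {y = ẋ = 0, x > 0} obey x_{n+1} = b·x_n with b = e^{−2πa/√(1−a²)}. A sketch (assuming Python and a hand-rolled RK4 stepper; names are illustrative):

```python
import math

def rk4(f, s, dt):
    """One classical Runge-Kutta step for an autonomous system s' = f(s)."""
    k1 = f(s)
    k2 = f([a + 0.5 * dt * b for a, b in zip(s, k1)])
    k3 = f([a + 0.5 * dt * b for a, b in zip(s, k2)])
    k4 = f([a + dt * b for a, b in zip(s, k3)])
    return [a + dt * (p + 2 * q + 2 * r + w) / 6.0
            for a, p, q, r, w in zip(s, k1, k2, k3, k4)]

def section_points(a=0.05, dt=0.001, t_max=40.0):
    """Record x at each downward crossing of the section {y = xdot = 0, x > 0}."""
    f = lambda s: [s[1], -2.0 * a * s[1] - s[0]]   # xdot = y, ydot = -2a*y - x
    x, y, t = 1.0, 0.0, 0.0
    pts = [x]
    while t < t_max:
        xn, yn = rk4(f, [x, y], dt)
        if y > 0.0 and yn <= 0.0 and xn > 0.0:     # trajectory pierces the section
            pts.append(xn)
        x, y, t = xn, yn, t + dt
    return pts

pts = section_points()
b = math.exp(-2 * math.pi * 0.05 / math.sqrt(1 - 0.05 ** 2))
print([q / p for p, q in zip(pts, pts[1:])])   # each ratio close to b = 0.73...
```

Here the map F of Eq. (18.14) degenerates to a one-dimensional linear map, which is exactly why this example makes a clean test of the section bookkeeping.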
FIGURE 18.4 Schematic of a Poincaré section.

Exercise

18.4.1 For the damped harmonic oscillator ẍ + 2aẋ + x = 0, consider the Poincaré section {x > 0, y = ẋ = 0}. Take 0 < a ≪ 1 and show that the map is given by xn+1 = b xn with b < 1. Find an estimate for b.

Bernoulli and Riccati Equations

Bernoulli equations are also nonlinear, having the form

y′(x) = p(x)y(x) + q(x)[y(x)]^n,    (18.15)

where p and q are real functions and n ≠ 0, 1 to exclude first-order linear ODEs. If we substitute

u(x) = [y(x)]^{1−n},    (18.16)

then Eq. (18.15) becomes a first-order linear ODE,

u′ = (1 − n)y^{−n} y′ = (1 − n)[p(x)u(x) + q(x)],    (18.17)

which we can solve as described in Section 9.2. Riccati equations are quadratic in y(x):

y′ = p(x)y² + q(x)y + r(x),    (18.18)

where p ≠ 0 to exclude linear ODEs and r ≠ 0 to exclude Bernoulli equations. There is no general method for solving Riccati equations. However, when a special solution y0(x) of Eq. (18.18) is known by a guess or inspection, then one can write the general solution in the form y = y0 + u, with u satisfying the Bernoulli equation

u′ = pu² + (2py0 + q)u,    (18.19)

because substitution of y = y0 + u into Eq. (18.18) removes r(x) from Eq. (18.18). Just as for Riccati equations, there are no general methods for obtaining exact solutions of other nonlinear ODEs. It is more important to develop methods for finding the qualitative behavior of solutions. In Chapter 9 we mentioned that power-series solutions of ODEs exist except (possibly) at regular or essential singularities, which are directly given by local analysis of the coefficient functions of the ODE. Such local analysis provides us with the asymptotic behavior of solutions as well.

Fixed and Movable Singularities, Special Solutions

Solutions of NDEs also have such singular points, independent of the initial or boundary conditions; these are called fixed singularities.
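The Bernoulli substitution can be checked on a concrete case. A sketch (assuming Python; function names are illustrative): for y′ = y − y² (p = 1, q = −1, n = 2), the substitution u = y^{1−n} = 1/y gives the linear ODE u′ = 1 − u, and the resulting closed form agrees with a direct numerical integration of the nonlinear equation.

```python
import math

def bernoulli_exact(t, y0):
    """Solve y' = y - y**2 (p = 1, q = -1, n = 2) via u = 1/y, for which
    the Bernoulli substitution gives the linear ODE u' = (1-n)(p*u + q) = 1 - u."""
    u0 = 1.0 / y0
    u = 1.0 + (u0 - 1.0) * math.exp(-t)
    return 1.0 / u

def euler_direct(y0, t, n=50000):
    """Integrate the nonlinear equation directly as a cross-check."""
    y, dt = y0, t / n
    for _ in range(n):
        y += dt * (y - y * y)
    return y

for t in (0.5, 1.0, 2.0):
    print(t, bernoulli_exact(t, 0.2), euler_direct(0.2, t))   # agree closely
```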
In addition they may have spontaneous, or movable, singularities that vary with the initial or boundary conditions. They complicate the (asymptotic) analysis of NDEs. This point is illustrated by a comparison of the linear ODE y′ + y = 0, x −1 (18.20) which has the obvious regular singularity at x = 1, with the NDE y ′ = y 2 . Both have the same solution with initial condition y(0) = 1, namely, y(x) = 1/(1 − x). For y(0) = 2, though, the pole in the (obvious, but check) solution y(x) = 2/(1 − 2x) of the NDE has moved to x = 1/2. For a second-order ODE we have a complete description of (the asymptotic behavior of) its solutions when (that of) two linearly independent solutions are known. For NDEs there may still be special solutions whose asymptotic behavior is not obtainable from two independent solutions. This is another characteristic property of NDEs, which we illustrate again by an example. The general solution of the NDE y ′′ = yy ′ /x is given by y(x) = 2c1 tan(c1 ln x + c2 ) − 1, (18.21) where ci are integration constants. An obvious (check it) special solution is y = c3 = constant, which cannot be obtained from Eq. (18.20) for any choice of the parameters c1 , c2 . Note that using the substitution x = et , Y (t) = y(et ) so that x dy/dx = dY/dt, we obtain the ODE Y ′′ = Y ′ (Y + 1). This ODE can be integrated once to give Y ′ = 21 Y 2 + Y + c with c = 2(c12 + 1/4) an integration constant, and again according to Section 9.2 to lead to the solution of Eq. (18.21). 18.4 Nonlinear Differential Equations 1091 Autonomous Differential Equations Differential equations that do not explicitly contain the independent variable, taken to be the time t here, are called autonomous. 
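The movable pole can be seen numerically. A sketch (assuming Python; forward Euler is deliberately crude here): integrating y′ = y² until the solution blows up locates the pole near x = 1/c, which moves with the initial condition y(0) = c.

```python
def pole_location(c, dx=1e-5, cap=1e6):
    """Integrate y' = y**2 from y(0) = c until y exceeds cap.  The exact
    solution y = c/(1 - c*x) has a movable pole at x = 1/c, which shifts
    with the initial condition."""
    x, y = 0.0, c
    while y < cap:
        y += dx * y * y      # forward Euler; crude, but enough to find the pole
        x += dx
    return x

print(pole_location(1.0))   # near 1.0
print(pole_location(2.0))   # near 0.5
```

By contrast, the regular singularity of the linear equation (18.20) sits at x = 1 no matter what initial condition is chosen.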
Verhulst’s NDE ẏ = dy/dt = µy(1 − y), which we encountered briefly in Section 18.2 as motivation for the logistic map, is a special case of this wide and important class of ODEs.³ For one dependent variable y(t) they can be written as

ẏ = f(y),    (18.22a)

and for several dependent variables as a system

ẏᵢ = fᵢ(y1, y2, . . . , yn), i = 1, 2, . . . , n,    (18.22b)

with sufficiently differentiable functions f, fᵢ. A solution of Eq. (18.22b) is a curve or trajectory y(t) for n = 1 and in general a trajectory (y1(t), y2(t), . . . , yn(t)) in an n-dimensional (so-called) phase space. As discussed already in Section 18.1, two trajectories cannot cross because of the uniqueness of the solutions of ODEs. Clearly, solutions of the algebraic system

fᵢ(y1, y2, . . . , yn) = 0    (18.23)

are special points in phase space, where the position vector (y1, y2, . . . , yn) does not move on the trajectory; they are called critical (or fixed) points. It turns out that a local analysis of solutions near critical points leads to an understanding of the global behavior of the solutions.

First let us look at a simple example. For Verhulst’s ODE, f(y) = µy(1 − y) = 0 gives y = 0 and y = 1 as the critical points. For the logistic map, y = 0 and y = 1 are repellent fixed points because df/dy(0) = µ at y = 0 and df/dy(1) = −µ at y = 1 for µ > 1. A local analysis near y = 0 suggests neglecting the y² term and solving ẏ = µy instead. Integrating dy/y = µ dt gives ln y = µt + ln c, so the solution y(t) = ce^{µt} diverges as t → ∞, and y = 0 is a repellent critical point. (Note that for µ < 0 the critical point y = 0 would be attracting, leading to a converging y ∼ e^{µt} solution.) Similarly, at y = 1, integrating dy/(1 − y) = µ dt leads to y(t) = 1 − ce^{−µt} → 1 for t → ∞. Hence y = 1 is an attracting critical point. Because the ODE is separable, its general solution is given by

∫ dy/[y(1 − y)] = ∫ [1/y + 1/(1 − y)] dy = ln[y/(1 − y)] = µt + ln c.

Hence y(t) = ce^{µt}/(1 + ce^{µt}) for t → ∞ converges to 1, thus confirming the local analysis. This example motivates us to look next at the properties of fixed points in more detail. For an arbitrary function f, it is easy to see that

• in one dimension, fixed points yᵢ with f(yᵢ) = 0 divide the y-axis into dynamically separate intervals because, given an initial value in one of the intervals, the trajectory y(t) will stay there, for it cannot go beyond either fixed point where ẏ = 0.

³ Solutions of nonautonomous equations can be much more complicated.

FIGURE 18.5 Fixed points: (a) repellor, (b) sink.

If f′(y0) > 0 at the fixed point y0 where f(y0) = 0, then at y0 + ε for ε > 0 sufficiently small, ẏ = f′(y0)ε + O(ε²) > 0 in a neighborhood to the right of y0, so the trajectory y(t) keeps moving to the right, away from the fixed point y0. To the left of y0, ẏ = −f′(y0)ε + O(ε²) < 0, so the trajectory moves away from the fixed point here as well. Hence,

• a fixed point [with f(y0) = 0] at y0 with f′(y0) > 0, as shown in Fig. 18.5a, repels trajectories; that is, all trajectories move away from the critical point: · · · ← · → · · · ; it is a repellor.

Similarly, we see that

• a fixed point at y0 with f′(y0) < 0, as shown in Fig. 18.5b, attracts trajectories; that is, all trajectories converge toward the critical point y0: · · · → · ← · · · ; it is a sink or node.

Let us now consider the remaining case, when also f′(y0) = 0. Assume first that f″(y0) > 0. Then at y0 + ε to the right of the fixed point y0, ẏ = f″(y0)ε²/2 + O(ε³) > 0, so the trajectory moves away from the fixed point there, while to the left it moves closer to y0. In other words, we have a saddle point. For f″(y0) < 0, the sign of ẏ is reversed, so we deal again with a saddle point, with the motion to the right of y0 toward the fixed point and to the left away from it.
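The sign tests above translate directly into code. A sketch (assuming Python; the function name and tolerances are illustrative): classify a one-dimensional fixed point from numerical derivatives of f, using Verhulst's f(y) = µy(1 − y) as the test case.

```python
def classify_fixed_point(f, y0, h=1e-6, tol=1e-4):
    """Classify a 1-D fixed point of ydot = f(y) by numerical derivatives."""
    d1 = (f(y0 + h) - f(y0 - h)) / (2 * h)            # f'(y0)
    if d1 > tol:
        return "repellor"
    if d1 < -tol:
        return "sink"
    d2 = (f(y0 + h) - 2 * f(y0) + f(y0 - h)) / h**2   # f''(y0)
    return "saddle" if abs(d2) > tol else "degenerate"

mu = 2.0
f = lambda y: mu * y * (1.0 - y)      # Verhulst's equation
print(classify_fixed_point(f, 0.0))   # repellor (f' = mu > 0)
print(classify_fixed_point(f, 1.0))   # sink (f' = -mu < 0)
print(classify_fixed_point(lambda y: y**2, 0.0))   # saddle (f' = 0, f'' > 0)
```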
Let us summarize the local behavior of trajectories near such a fixed point y0 : We have a • a saddle point at y0 when f (y0 ) = 0, and f ′ (y0 ) = 0, as shown in Fig. 18.6a,b corresponding to the cases where (a) f ′′ (y0 ) > 0 and trajectories on one side of the critical point, converge toward it and diverge from it on the other side: · · · → · → · · · ; and (b) f ′′ (y0 ) < 0. Here the direction is simply reversed compared to (a). Figure 18.6(c) shows the cases where f ′′ (y0 ) = 0. So far we have ignored the additional dependence of f (y) on one or more parameters, such as µ for the logistic map. When a critical point maintains its properties qualitatively as we adjust a parameter slightly, we call it structurally stable. This is reasonable because structurally unstable objects are unlikely to occur in reality because noise and other neglected degrees of freedom act as perturbations on the system that effectively prevent such unstable points from being observed. Let us now look at fixed points from this point of view. Upon varying such a control parameter slightly we deform the function f , or we 18.4 Nonlinear Differential Equations FIGURE 18.6 1093 Saddle points. may just shift f up or down or sideways in Fig. 18.5 a bit. This will move a little the location y0 of the fixed point with f (y0 ) = 0, but maintain the sign of f ′ (y0 ). Thus, both sinks and repellors are stable, while a saddle point in general is not. For example, shifting f in Fig. 18.6a down a bit creates two fixed points, one a sink and the other a repellor, and removes the saddle point. Since two conditions must be satisfied at a saddle point, they are less common and important, being unstable with respect to variations of parameters. However, they mark the border between different types of dynamics and are useful and meaningful for the global analysis of the dynamics. We are now ready to consider the richer, but more complicated, higher-dimensional cases. 
Local and Global Behavior in Higher Dimensions In two or more dimensions we start the local analysis at a fixed point (y10 , y20 , . . .) with y˙i = fi (y10 , y20 , . . .) = 0 using the same Taylor expansion of the fi in Eq. (18.22b) as for the one-dimensional case. Retaining only the first-order derivatives, this approach linearizes the coupled NDEs of Eq. (18.22b) and reduces their solution to linear algebra as follows. We abbreviate the constant derivatives at the fixed point as a matrix F with elements  ∂fi  fij ≡ . (18.24) ∂yj (y 0 ,y 0 ,··· ) 1 2 1094 Chapter 18 Nonlinear Methods and Chaos In contrast to the standard linear algebra in Chapter 3, however, F is neither symmetric nor Hermitian in general. As a result, its eigenvalues may not be real. If we shift the fixed point to the origin and call the shifted coordinates xi = yi − yi0 , then the coupled NDEs of Eq. (18.22b) become  x˙i = fij xj , (18.25) j that is, coupled linear ODEs with constant coefficients. We solve Eq. (18.25) with the standard exponential Ansatz,  xi (t) = cij eλj t , (18.26) j with constant exponents λj and a constant matrix C of coefficients cij , so cj = (cij , i = 1, 2, . . .) forms the j th column vector of C. Substituting Eq. (18.26) into Eq. (18.25) yields a linear combination of exponential functions,   fik ckj eλj t , (18.27) cij λj eλj t = j j,k which are independent if λi = λj . This is the general case on which we focus, while degeneracies where two or more λ are equal require special treatment similar to saddle points in one dimension. Comparing coefficients of exponential functions with the same exponent yields the linear eigenvalue equations  fik ckj = λj cij , or Fcj = λj cj . (18.28) k A nontrivial solution comprising the eigenvalue λj and eigenvector cj of the homogeneous linear equations (18.28) requires λj to be a root of the secular equation (compare with Section 3.5): det(F − λ · 1) = 0. (18.29) Equation (18.28) means that C diagonalizes F, so we can write Eq. 
(18.28) also as

C⁻¹FC = diag(λ1, λ2, . . .).    (18.30)

In the new but in general nonorthogonal coordinates ξj, defined by Cξ = x, we have a fixed point for each direction ξj, as ξ̇j = λj ξj, where the λj play the role of f′(y0) in the one-dimensional case. The λ are characteristic exponents and complex numbers in general. This is seen by substituting x = Cξ into Eq. (18.25) in conjunction with Eqs. (18.28) and (18.30). Thus, this solution represents the independent combination of one-dimensional fixed points, one for each component of ξ and each independent of the other components. In two dimensions, for λ1 < 0 and λ2 < 0, then, we have a sink in all directions. When both λ are greater than 0, we have a repellor in all directions.

Example 18.4.1 STABLE SINK

The coupled ODEs ẋ = −x, ẏ = −x − 3y have an equilibrium point at the origin. The solutions have the form x(t) = c11 e^{λ1 t}, y(t) = c21 e^{λ1 t} + c22 e^{λ2 t}, so the eigenvalue λ1 = −1 results from λ1 c11 = −c11, and the solution is x = c11 e^{−t}. The determinant of Eq. (18.29),

| −1 − λ    0      |
| −1        −3 − λ |  = (1 + λ)(3 + λ) = 0,

yields the eigenvalues λ1 = −1, λ2 = −3. Because both are negative, we have a stable sink at the origin. The ODE for y gives the linear relations λ1 c21 = −c11 − 3c21 = −c21, λ2 c22 = −3c22, from which we infer 2c21 = −c11, or c21 = −c11/2. Because the general solution will contain two constants, it is given by

x(t) = c11 e^{−t},  y(t) = −(c11/2) e^{−t} + c22 e^{−3t}.

As the time t → ∞, we have y ∼ −x/2 and x → 0 and y → 0, while for t → −∞, y ∼ x³ and x, y → ±∞. The motion toward the sink is indicated by arrows in Fig. 18.7. To find the orbit, we eliminate the independent variable, t, and find the cubics:

y = −x/2 + (c22/c11³) x³.

When both λ are greater than 0, we have a repellor. In this case the motion is away from the fixed point.
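For 2×2 systems the eigenvalue step can be done from the trace and determinant alone. A sketch (assuming Python; function names are illustrative) that classifies the fixed point at the origin for derivative matrices of the kind appearing in the examples of this section; a complex pair signals a spiral, discussed later in the section.

```python
import cmath

def eigen2(F):
    """Eigenvalues of a 2x2 matrix from its trace and determinant."""
    (a, b), (c, d) = F
    tr, det = a + d, a * d - b * c
    disc = cmath.sqrt(tr * tr - 4 * det)
    return (tr + disc) / 2, (tr - disc) / 2

def classify(F):
    l1, l2 = eigen2(F)
    if abs(l1.imag) > 1e-12:                   # complex pair: a spiral point
        return "spiral repellor" if l1.real > 0 else "spiral sink"
    r1, r2 = l1.real, l2.real
    if r1 > 0 and r2 > 0:
        return "repellor"
    if r1 < 0 and r2 < 0:
        return "sink"
    return "saddle point"

print(classify([[-1, 0], [-1, -3]]))   # stable sink: lambda = -1, -3
print(classify([[-2, -1], [-1, 2]]))   # saddle: lambda = +-sqrt(5)
print(classify([[-1, 3], [-3, 2]]))    # spiral repellor: 1/2 +- i*sqrt(27)/2
```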
However, when the λ have different signs, we have a saddle point, that is, a combination of a sink in one dimension and a repellor in the other. This type of behavior generalizes to higher dimensions. Example 18.4.2 SADDLE POINT The coupled ODEs x˙ = −2x − y, y˙ = −x + 2y have a fixed point at the origin. The solutions have the form x(t) = c11 eλ1 t + c12 eλ2 t , y(t) = c21 eλ1 t + c22 eλ2 t . √ The eigenvalues λ = ± 5 are determined from    −2 − λ −1   = λ2 − 5 = 0.   −1 2 − λ 1096 Chapter 18 Nonlinear Methods and Chaos FIGURE 18.7 Stable sink. Substituting the general solutions into the ODEs yields the linear equations √ √ λ1 c21 = −c11 + 2c21 = 5c21 , λ1 c11 = −2c11 − c21 = 5c11 , √ √ λ2 c12 = −2c12 − c22 = − 5c12 , λ2 c22 = −c12 + 2c22 = − 5c22 , or √ √ ( 5 + 2)c11 = −c21 , ( 5 − 2)c12 = c22 , √ √ ( 5 − 2)c21 = −c11 , ( 5 + 2)c22 = c12 , √ √ so c21 = −(2 + 5)c11 , c22 = ( 5 − 2)c12 . The family of solutions depends on two parameters, c11 , c12 . For large time t → ∞, the √ √ positive exponent prevails and y ∼ have y = ( 5 − 2)x. These straight lines are the −( 5 + 2)x, while for t → −∞ we √ √ asymptotes of the orbits. Because −( 5 + 2)( 5 − 2) = −1 they are orthogonal. We find the orbits by eliminating the independent variable, t, as follows. Substituting the c2j we write √ √ √ √ √  y + 2x = −c11 e 5t + c12 e− 5t . so y = −2x − 5 c11 e 5t − c12 e− 5t , √ 5 Now we add and subtract the solution x(t) to get √ 1 √ (y + 2x) + x = 2c12 e− 5t , 5 √ 1 √ (y + 2x) − x = −2c11 e 5t , 5 18.4 Nonlinear Differential Equations 1097 FIGURE 18.8 Saddle point. which we multiply to obtain 1 (y + 2x)2 − x 2 = −4c12 c11 = const. 5 The resulting quadratic form, y 2 + 4xy − x 2 = const., is a hyperbola because of the negative sign. The hyperbola is rotated in the sense that its asymptotes are not aligned with the x, y-axes (Fig. 18.8). Its orientation is given by the direction of the asymptotes that we found earlier. 
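That the orbits are hyperbolas can be verified by integration. A sketch (assuming Python and a hand-rolled RK4 stepper): the quadratic form y² + 4xy − x² stays constant along a trajectory of ẋ = −2x − y, ẏ = −x + 2y, even as the point runs off along the unstable direction.

```python
def rk4(f, s, dt):
    """One classical Runge-Kutta step for an autonomous system s' = f(s)."""
    k1 = f(s)
    k2 = f([a + 0.5 * dt * b for a, b in zip(s, k1)])
    k3 = f([a + 0.5 * dt * b for a, b in zip(s, k2)])
    k4 = f([a + dt * b for a, b in zip(s, k3)])
    return [a + dt * (p + 2 * q + 2 * r + w) / 6.0
            for a, p, q, r, w in zip(s, k1, k2, k3, k4)]

f = lambda s: [-2 * s[0] - s[1], -s[0] + 2 * s[1]]   # the saddle-point system
Q = lambda x, y: y * y + 4 * x * y - x * x           # hyperbolic orbit invariant

x, y = 1.0, 0.5
q0 = Q(x, y)
for _ in range(2000):                # integrate to t = 2
    x, y = rk4(f, [x, y], 0.001)
print(q0, Q(x, y))   # Q is constant on each hyperbolic orbit
```

One can confirm the invariance by hand: dQ/dt = 2yẏ + 4(ẋy + xẏ) − 2xẋ vanishes identically when ẋ and ẏ are substituted from the ODEs.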
Alternatively we could find the direction of minimal distance from the origin, proceeding as follows. We set the x and y derivatives of f + g ≡ x 2 + y 2 + (y 2 + 4xy − x 2 ) equal to zero, where  is the Lagrange multiplier for the hyperbolic constraint. The four branches of hyperbolas correspond to the different signs of the parameters c11 and c12 . Figure 18.8 is plotted for the cases c11 = ±1, c12 = ±2.  However, a new kind of behavior arises for a pair of complex conjugate eigenvalues λ1,2 = ρ ± iκ. If we write the complex solutions ξ1,2 = exp(ρt ± iκt) in real variables ξ+ = (ξ1 + ξ2 )/2, ξ− = (ξ1 − ξ2 )/2i upon using the Euler identity exp(ix) = cos x + i sin x (see Section 6.1), ξ+ = exp(ρt) cos(κt), ξ− = exp(ρt) sin(κt) (18.31) describe a trajectory that spirals inward to the fixed point at the origin for ρ < 0, a spiral node, and spirals away from the fixed point for ρ > 0, a spiral repellor. 1098 Chapter 18 Nonlinear Methods and Chaos Example 18.4.3 SPIRAL FIXED POINT The coupled ODEs x˙ = −x + 3y, y˙ = −3x + 2y have a fixed point at the origin and solutions of the form x(t) = c11 eλ1 t + c12 eλ2 t , y(t) = c21 eλ1 t + c22 eλ2 t . The exponents λ1,2 are solutions of    −1 − λ 3   = (1 + λ)(λ − 2) + 9 = 0,  −3 2 − λ √ or λ2 − λ + 7 = 0. The eigenvalues are complex conjugate, λ = 1/2 ± i 27/2, so we deal with a spiral fixed point at the origin (a repellor because 1/2 > 0). Substituting the general solutions into the ODEs yields the linear equations λ1 c11 = −c11 + 3c21 , or λ2 c12 = −c12 + 3c22 , (λ1 + 1)c11 = 3c21 , (λ2 + 1)c12 = 3c22 , λ1 c21 = −3c11 + 2c21 , λ2 c22 = −3c12 + 2c22 , (λ1 − 2)c21 = −3c11 , (λ2 − 2)c22 = −3c12 , which, using the values of λ1,2 , imply the family of curves √ √  x(t) = et/2 c11 ei 27t/2 + c12 e−i 27t/2 , √ √ √ x 27 t/2  y(t) = + e i c11 ei 27t/2 − c12 e−i 27t/2 , 2 6 which depends on two parameters, c11 , c12 . To simplify we can separate real and imaginary parts of x(t) and y(t) using the Euler identity eix = cos x +i sin x. 
It is equivalent, but more convenient, to choose c11 = c12 = c/2 and rescale t → 2t, so with the Euler identity we have √ √ √ 27 t x t ce sin( 27t). x(t) = ce cos( 27t), y(t) = − 2 6 Here we can eliminate t and find the orbit   x 2  t 2 4 2 y− = ce . x + 3 2 For fixed t this is the positive definite quadratic form x 2 − xy + y 2 = const., that is, an ellipse. But there is no ellipse in the solutions because t is not fixed. Nonetheless, it is useful to find its orientation. We proceed as follows. With  the Lagrange multiplier for the elliptical constraint we seek the directions of maximal and minimal distance from the origin, forming  f (x, y) + g(x, y) ≡ x 2 + y 2 +  x 2 − xy + y 2 18.4 Nonlinear Differential Equations 1099 FIGURE 18.9 Spiral point. and setting ∂(f + g) = 2x + 2x − y = 0, ∂x ∂(f + g) = 2y + 2y − x = 0. ∂y From 2( + 1)x = y, 2( + 1)y = x we obtain the directions x  2( + 1) = = , y 2( + 1)  or 2 + 38  + 34 = 0. This yields the values  = −2/3, −2 and the directions y = ±x. In other words, our ellipse is centered at the origin and rotated by 45◦ . As we vary the independent variable, t, the size of the ellipse changes, so we get the rotated spiral shown in Fig. 18.9 for c = 1.  In the special case when ρ = 0 in Eq. (18.31), the circular trajectory is called a cycle. When trajectories near it are attracted as time goes on, it is called a limit cycle, representing periodic motion for autonomous systems. 1100 Chapter 18 Nonlinear Methods and Chaos FIGURE 18.10 Example 18.4.4 Center. CENTER OR CYCLE The undamped linear harmonic oscillator ODE x¨ + ω2 x = 0 can be written as two coupled ODEs: x˙ = −ωy, y˙ = ωx. Integrating the resulting ODE xx ˙ + yy ˙ = 0 yields the circular orbits x 2 + y 2 = const., which define a center at the origin and are shown in Fig. 18.10. The solutions can be parameterized as x = R cos t, y = R sin t, where R is the radius parameter. They correspond to the complex conjugate eigenvalues λ1,2 = ±iω. 
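The circular orbits of the center can be confirmed by direct integration. A sketch (assuming Python and a hand-rolled RK4 stepper): for ẋ = −ωy, ẏ = ωx the squared radius x² + y² is conserved along the numerical trajectory, so the origin is a center rather than a spiral point.

```python
def rk4(f, s, dt):
    """One classical Runge-Kutta step for an autonomous system s' = f(s)."""
    k1 = f(s)
    k2 = f([a + 0.5 * dt * b for a, b in zip(s, k1)])
    k3 = f([a + 0.5 * dt * b for a, b in zip(s, k2)])
    k4 = f([a + dt * b for a, b in zip(s, k3)])
    return [a + dt * (p + 2 * q + 2 * r + w) / 6.0
            for a, p, q, r, w in zip(s, k1, k2, k3, k4)]

omega = 2.0
f = lambda s: [-omega * s[1], omega * s[0]]   # xdot = -omega*y, ydot = omega*x
x, y = 1.0, 0.0
for _ in range(5000):                          # integrate to t = 5
    x, y = rk4(f, [x, y], 0.001)
print(x * x + y * y)   # stays at the initial value 1: a center, not a spiral
```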
We can check them if we write the general solution as x(t) = c11 eλ1 t + c12 eλ2 t , y(t) = c21 eλ1 t + c22 eλ2 t . Then the eigenvalues follow from    −λ −ω  2 2    ω −λ  = λ + ω = 0 = 0.  Another classic attractor is quasiperiodic motion, such as the trajectory x(t) = A1 sin(ω1 t + b1 ) + A2 sin(ω2 t + b2 ), (18.32) 18.4 Nonlinear Differential Equations 1101 where the ratio ω1 /ω2 is an irrational number. Such combined oscillations occur as solutions of a damped anharmonic oscillator (Van der Pol nonautonomous system) x¨ + 2γ x˙ + ω22 x + βx 3 = f cos(ω1 t). (18.33) In three dimensions, when there is a positive characteristic exponent and an attracting complex conjugate pair in the other directions, we have a spiral saddle point as a new feature. Conversely, a negative characteristic exponent in conjunction with a repelling pair also gives rise to a spiral saddle point, where trajectories spiral out in two dimensions but are attracted in a third direction. In general, when some form of damping (or dissipation of energy) is present, the transients decay and the system settles either in equilibrium, that is, a single point, or in periodic or quasiperiodic motion. Chaotic motion in dissipative systems is now recognized as a fourth state, and its attractors are often called strange. In dissipative systems, initial conditions are not important because trajectories end up on some attractor. They are crucial in Hamiltonian systems. In nonintegrable Hamiltonian systems, chaos may also occur, and then it is called conservative chaos. We refer to Chapter 8 of Hilborn (1994) in the Additional Readings for this more complicated topic. For the driven damped pendulum when trajectories near a center (closed orbit) are attracted to it as time goes on, this closed orbit is defined as a limit cycle, representing periodic motion for autonomous systems. 
A damped pendulum usually spirals into the origin (the position at rest); that is, the origin is a spiral fixed point in its phase space. When we turn on a driving force, then the system formally becomes nonautonomous, because of its explicit time dependence, but also more interesting. In this case, we can call the explicit time in a sinusoidal driving force a new variable, ϕ, where ω0 is a fixed rate, in the equation of motion ω˙ + γ ω + sin θ = f sin ϕ, ω = θ˙ , ϕ = ω0 t. Then we increase the dimension of our phase space by 1 (adding one variable, ϕ) because ϕ˙ = ω0 = const., but we keep the coupled ODEs autonomous. This driven damped pendulum has trajectories that cross a closed orbit in phase space and spiral back to it; it is called limit cycle. This happens for a range of strength f of the driving force, the control parameter of the system. As we increase f , the phase space trajectories go through several neighboring limit cycles and eventually become aperiodic and chaotic. Such closed limit cycles are called Hopf bifurcations of the pendulum on its road to chaos, after the mathematician E. Hopf, who generalized Poincaré’s results on such bifurcations to higher dimensions of phase space. Such spiral sinks or saddle points cannot occur in one dimension, but we might ask if they are stable when they occur in higher dimensions. An answer is given by the Poincaré– Bendixson theorem, which says that either trajectories (in the finite region to be specified in a moment) are attracted to a fixed point as time goes on, or they approach a limit cycle provided the relevant two-dimensional subsystem stays inside a finite region, that is, does not diverge there as t → ∞. For a proof we refer to Hirsch and Smale (1974) and Jackson (1989) in the Additional Readings. In general, when some form of damping is present, the transients decay and the system settles either in equilibrium, that is, a single point, or in periodic or quasiperiodic motion. 
Chaotic motion is now recognized as a fourth state, and its attractors are often strange. 1102 Chapter 18 Nonlinear Methods and Chaos Exercise 18.4.2 Show that the (Rössler) coupled ODEs x˙1 = −x2 − x3 , x˙2 = x1 + a1 x2 , x˙3 = a2 + (x1 − a3 )x3 (a) have two fixed points for a2 = 2, a3 = 4, and 0 < a1 < 2, (b) have a spiral repellor at the origin, and (c) have a spiral chaotic attractor for a1 = 0.398. Dissipation in Dynamical Systems Dissipative forces often involve velocities, that is, first-order time derivatives, such as friction (for example, for the damped oscillator). Let us look for a measure of dissipation, that is, how a small area A = c1,2 ξ1 ξ2 at a fixed point shrinks or expands, first in two dimensions for simplicity. Here c1,2 ≡ sin(ξˆ1 , ξˆ2 ), involving the sine of the characteristic directions, is a time-independent angular factor that takes into account the nonorthogonality of the characteristic directions ξˆ1 and ξˆ2 of Eq. (18.28). If we take the time derivative of ˙ j = λj ξj , we obtain, A and use ξ˙j = λj ξj of the characteristic coordinates, implying ξ to lowest order in the ξj , A˙ = c1,2 [ξ1 λ2 ξ2 + ξ2 λ1 ξ1 ] = c1,2 ξ1 ξ2 (λ1 + λ2 ). (18.34) In the limit ξj → 0, we find from Eq. (18.34) that the rate A˙ = λ1 + λ2 = trace(F) = ∇ · f|y0 , A (18.35) with f = (f1 , f2 ) the vector of time evolution functions of Eq. (18.22b). Note that the timeindependent sine of the angle between ξ1 and ξ2 drops out of the rate. The generalization to higher dimensions is obvious. Moreover, in n dimensions,  λi . (18.36) trace(F) = i This trace formula follows from the invariance of the secular polynomial in Eq. (18.29) under a linear transformation, Cξ = x in particular, and it is a result of its determinental form using the product theorem for determinants (see Section 3.2), viz.    −1 det(F − λ · 1) det(C) = det C−1 (F − λ · 1)C det(F − λ · 1) = det(C) n  # = det C−1 FC − λ · 1 = (λi − λ). (18.37) i=1 Here the product form comes about by substituting Eq. 
(18.30). Now, trace(F) is the coefficient of (−λ)^{n−1} upon expanding det(F − λ · 1) in powers of λ, while it is Σᵢ λᵢ from the product form Πᵢ(λᵢ − λ), which proves Eq. (18.36). Clearly, according to Eqs. (18.35) and (18.36),

• it is the sign (and more precisely the trace) of the characteristic exponents of the derivative matrix at the fixed point that determines whether there is expansion or shrinkage of areas and volumes in higher dimensions near a critical point.

In summary then, Eq. (18.35) states that dissipation requires ∇ · f(y) ≠ 0, where ẏj = fj, and does not occur in Hamiltonian systems, where ∇ · f = 0. Moreover, in two or more dimensions, there are the following global possibilities:

• The trajectory may describe a closed orbit (cycle).
• The trajectory may approach a closed orbit (spiraling inward or outward toward the orbit) as t → ∞. In this case we have a limit cycle.

The local behavior of a trajectory near a critical point is also more varied in general than in one dimension: At a stable critical point all trajectories may approach the critical point along straight lines or spiral inward (toward the spiral node) or may follow a more complicated path. If all time-reversed trajectories move toward the critical point in spirals as t → −∞, then the critical point is a divergent spiral point, or spiral repellor. When some trajectories approach the critical point while others move away from it, then it is called a saddle point. When all trajectories form closed orbits about the critical point, it is called a center.

Bifurcations in Dynamical Systems

A bifurcation is a sudden change in dynamics for specific parameter values, such as the birth of a node-repellor pair of fixed points or their disappearance upon adjusting a control parameter; that is, the motions before and after the bifurcation are topologically different.
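Equation (18.36) is easy to spot-check. A sketch (assuming Python; `eigen2` is an illustrative helper): for 2×2 derivative matrices like those of the preceding examples, trace(F) equals λ1 + λ2, the area expansion or shrinkage rate of Eq. (18.35), even when the eigenvalues are a complex conjugate pair.

```python
import cmath

def eigen2(F):
    """Eigenvalues of a 2x2 matrix from its trace and determinant."""
    (a, b), (c, d) = F
    tr, det = a + d, a * d - b * c
    disc = cmath.sqrt(tr * tr - 4 * det)
    return (tr + disc) / 2, (tr - disc) / 2

for F in ([[-1, 0], [-1, -3]], [[-2, -1], [-1, 2]], [[-1, 3], [-3, 2]]):
    l1, l2 = eigen2(F)
    trace = F[0][0] + F[1][1]
    # div f = trace(F) = lambda1 + lambda2: net expansion (>0) or shrinkage (<0)
    print(trace, (l1 + l2).real)
```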
At a bifurcation point, not only are solutions unstable when one or more parameters are changed slightly, but the character of the bifurcation in phase space or in the parameter manifold may change. Thus we are dealing with fairly sudden events of nonlinear dynamics. Rather sudden changes from regular to random behavior of trajectories are characteristic of bifurcations, as is sensitive dependence on initial conditions: Nearby initial conditions can lead to very different long-term behavior. If a bifurcation does not change qualitatively with parameter adjustments, it is called structurally stable. Note that structurally unstable bifurcations are unlikely to occur in reality, because noise and other neglected degrees of freedom act as perturbations on the system that effectively eliminate unstable bifurcations from our view. Bifurcations (such as doublings in maps) are important as one among many routes to chaos. Others are sudden changes in trajectories associated with several critical points, called global bifurcations. Often they involve changes in basins of attraction and/or other global structures. The theory of global bifurcations is fairly complicated and is still in its infancy at present. Bifurcations that are linked to sudden changes in the qualitative behavior of dynamical systems at a single fixed point are called local bifurcations. More specifically, a change in stability occurs in parameter space where the real part of a characteristic exponent of the fixed point alters its sign, that is, moves from attracting to repelling trajectories, or vice versa. The center–manifold theorem says that at a local bifurcation only those degrees of freedom matter that are involved with characteristic exponents going to zero: $\Re\lambda_i = 0$. Locating the set of these points is the first step in a bifurcation analysis. Another step consists in cataloguing the types of bifurcations in dynamical systems, to which we turn next.
The conventional normal forms of dynamical equations represent a start at classifying bifurcations. For systems with one parameter (that is, a one-dimensional center manifold) we write the general case of NDE as follows:
$$\dot{x} = \sum_{j=0}^{\infty} a_j^{(0)} x^j + c\sum_{j=0}^{\infty} a_j^{(1)} x^j + c^2\sum_{j=0}^{\infty} a_j^{(2)} x^j + \cdots, \tag{18.38}$$
where the superscript on the $a_j^{(m)}$ denotes the power of the parameter $c$ with which they are associated. One-dimensional iterated nonlinear maps, such as the logistic map of Section 18.2 (which occur in Poincaré sections of nonlinear dynamical systems), can be classified similarly, viz.
$$x_{n+1} = \sum_{j=0}^{\infty} a_j^{(0)} x_n^j + c\sum_{j=0}^{\infty} a_j^{(1)} x_n^j + c^2\sum_{j=0}^{\infty} a_j^{(2)} x_n^j + \cdots. \tag{18.39}$$
Thus, one of the simplest NDEs with a bifurcation is
$$\dot{x} = x^2 - c, \tag{18.40}$$
which corresponds to all $a_j^{(m)} = 0$ except for $a_2^{(0)} = 1$ and $a_0^{(1)} = -1$. For $c > 0$, there are two fixed points (recall, $\dot{x} = 0$), $x_\pm = \pm\sqrt{c}$, with characteristic exponents $2x_\pm$, so $x_-$ is a node and $x_+$ is a repellor. For $c < 0$ there are no fixed points. Therefore, as $c \to 0$ the fixed-point pair disappears suddenly; that is, the parameter value $c = 0$ is a repellor–node bifurcation that is structurally unstable. The corresponding complex map (with $c \to -c$) generates the fractal Julia and Mandelbrot sets discussed in Section 18.3. A pitchfork bifurcation occurs for the undamped (nondissipative, and a special case of the Duffing) oscillator with a cubic anharmonicity,
$$\ddot{x} + ax + bx^3 = 0, \qquad b > 0. \tag{18.41}$$
It has a continuous frequency spectrum and is, among other things, a model for a ball bouncing between two walls. When the control parameter $a > 0$, there is only one fixed point, a node at $x = 0$, while for $a < 0$ there are two more nodes, at $x_\pm = \pm\sqrt{-a/b}$. Thus, we have a pitchfork bifurcation of the node at the origin into a saddle point at the origin and two nodes at $x_\pm \neq 0$. In terms of a potential formulation, $V(x) = ax^2/2 + bx^4/4$ is a single well for $a > 0$ but a double well (with a maximum at $x = 0$) for $a < 0$.
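The repellor–node bifurcation of Eq. (18.40) is simple enough to tabulate directly. The short sketch below (an illustration with ad hoc names, not part of the text) lists the fixed points of $\dot{x} = x^2 - c$ and classifies each by the sign of its characteristic exponent $2x_\pm$:

```python
import numpy as np

def fixed_points(c):
    """Fixed points of x' = x**2 - c with their characteristic exponents 2*x."""
    if c < 0:
        return []                       # no real fixed points below the bifurcation
    xs = [-np.sqrt(c), np.sqrt(c)]
    # exponent < 0: node (attracting), exponent > 0: repellor
    return [(x, 2 * x) for x in xs]

for c in (1.0, 0.25, -0.25):
    print(c, fixed_points(c))
```

As $c$ passes through zero from above, the node–repellor pair merges and disappears, which is the structurally unstable bifurcation described in the text.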
When a pair of complex conjugate characteristic exponents $\rho \pm i\kappa$ crosses from a spiral node ($\rho < 0$) to a repelling spiral ($\rho > 0$) and periodic motion (a limit cycle) emerges, we call the qualitative change a Hopf bifurcation. Hopf bifurcations occur in the quasiperiodic route to chaos that will be discussed in the next section, on chaos. In a global analysis we piece together the motions near various critical points, such as nodes and bifurcations, into bundles of trajectories that flow more or less together in two dimensions. (This geometric view is the current mode of analyzing solutions of dynamical systems.) But this flow is no longer collective in three dimensions, where trajectories in general diverge from each other, because chaotic motion is possible that typically fills the plane of a Poincaré section with points.

Chaos in Dynamical Systems

Our previous summaries of the intricate and complicated features of dynamical systems due to nonlinearities in one and two dimensions do not include chaos, although some of them, such as bifurcations, are sometimes precursors to chaos. In three- or higher-dimensional NDEs, chaotic motion may occur, often when a constant of the motion (an energy integral for NDEs defined by a Hamiltonian, for example) restricts the trajectories to a finite volume in phase space and when there are no critical points. Another characteristic signal of chaos is when, for each trajectory, there are nearby ones, some of which move away from it while others approach it with increasing time. The notion of exponential divergence of nearby trajectories is made quantitative by the Lyapunov exponent $\lambda$ (see Section 18.3 for more details) of iterated maps of Poincaré sections associated with the dynamical system. If two nearby trajectories are at a distance $d_0$ at time $t = 0$ but diverge to a distance $d(t)$ at a later time $t$, then $d(t) \approx d_0 e^{\lambda t}$ holds.
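The exponential-divergence rate can be estimated for a one-dimensional iterated map by averaging $\ln|f'(x_n)|$ along an orbit, a standard estimator (not spelled out in the text; the logistic map here stands in for a Poincaré map, and all names are ad hoc):

```python
import math

def lyapunov_logistic(mu, x0=0.3, n=100000, burn=1000):
    """Estimate the Lyapunov exponent of x -> mu*x*(1-x) by averaging ln|f'(x)|."""
    x = x0
    for _ in range(burn):                 # discard the transient
        x = mu * x * (1 - x)
    s = 0.0
    for _ in range(n):
        s += math.log(abs(mu * (1 - 2 * x)))   # f'(x) = mu*(1 - 2x)
        x = mu * x * (1 - x)
    return s / n

print(lyapunov_logistic(4.0))   # positive (chaotic); the exact value is ln 2
print(lyapunov_logistic(3.2))   # negative: a stable period-2 cycle
```

A positive exponent signals the sensitive dependence on initial conditions discussed above; a negative one signals convergence onto a periodic attractor.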
Thus, by analyzing the series of points, that is, the iterated maps generated on Poincaré sections, one can study routes to chaos of three-dimensional dynamical systems. This is the key method for studying chaos. As one varies the location and orientation of the Poincaré plane, a fixed point on it is often recognized to originate from a limit cycle in the three-dimensional phase space, whose structural stability can be checked there. For example, attracting limit cycles show up as nodes in Poincaré sections, repelling limit cycles as repellors of Poincaré maps, and saddle cycles as saddle points of the associated Poincaré maps. Three or more dimensions of phase space are required for chaos to occur because of the interplay of the necessary conditions we just discussed, viz.
• bounded trajectories (often the case for Hamiltonian systems),
• exponential divergence of nearby trajectories (guaranteed by positive Lyapunov exponents of the corresponding Poincaré maps),
• no intersection of trajectories.
The last condition is obeyed by deterministic systems in particular, as we discussed in Section 18.1. A surprising feature of chaos, mentioned in Section 18.1, is how prevalent it is and how universal the routes to chaos often are, despite the overwhelming variety of NDEs. An example of spatially complex patterns in classical mechanics is the planar pendulum, whose one-dimensional equation of motion,
$$I\frac{d\theta}{dt} = L, \qquad \frac{dL}{dt} = -lmg\sin\theta, \tag{18.42}$$
is nonlinear in the dynamic variable $\theta(t)$. Here $I$ is the moment of inertia, $l$ is the distance to the center of mass, $m$ is the mass, and $g$ is the gravitational acceleration constant. When all parameters in Eq. (18.42) are constant in time and space, the solutions are given in terms of elliptic integrals (see Section 5.8) and no chaos exists. However, a pendulum under a periodic external force can exhibit chaotic dynamics, for example, for the Lagrangian
$$L = \frac{m}{2}\dot{\mathbf{r}}^2 - mg(l - z), \qquad (x - x_0)^2 + y^2 + z^2 = l^2, \tag{18.43}$$
$$x_0 = \varepsilon l\cos\omega t.$$
(18.44)
(See Moon (1992) in the Additional Readings.) Good candidates for chaos are multiple-well potential problems,
$$\frac{d^2\mathbf{r}}{dt^2} + \nabla V(\mathbf{r}) = \mathbf{F}\!\left(\mathbf{r}, \frac{d\mathbf{r}}{dt}, t\right), \tag{18.45}$$
where $\mathbf{F}$ represents dissipative and/or driving forces. Another classic example is rigid-body rotation, whose nonlinear three-dimensional Euler equations are familiar, viz.
$$I_1\frac{d\omega_1}{dt} = (I_2 - I_3)\,\omega_2\omega_3 + M_1,$$
$$I_2\frac{d\omega_2}{dt} = (I_3 - I_1)\,\omega_1\omega_3 + M_2, \tag{18.46}$$
$$I_3\frac{d\omega_3}{dt} = (I_1 - I_2)\,\omega_1\omega_2 + M_3.$$
Here the $I_j$ are the principal moments of inertia and $\boldsymbol{\omega}$ is the angular velocity, with components $\omega_j$ about the body-fixed principal axes. Even free rigid-body rotation can be chaotic, for its nonlinear couplings and three-dimensional form satisfy all requirements for chaos to occur (see Section 18.1). A rigid-body example of chaos in our solar system is the chaotic tumbling of Hyperion, one of Saturn's moons, which is highly nonspherical. It is a world where the Saturn rise and set is so irregular as to be unpredictable. Another is Halley's comet, whose orbit is perturbed by Jupiter and Saturn. In general, when three or more celestial bodies interact gravitationally, stochastic dynamics are possible. Note, though, that computer simulations over large time intervals are required to ascertain chaotic dynamics in the solar system. For more details on chaos in such conservative Hamiltonian systems we refer to Chapter 8 of Hilborn (1994) in the Additional Readings.

Exercise 18.4.3 Construct a Poincaré map for the Duffing oscillator in Eq. (18.41).

Routes to Chaos in Dynamical Systems

Let us now look at some routes to chaos. The period-doubling route to chaos is exemplified by the logistic map in Section 18.2, and the universal Feigenbaum numbers $\alpha$, $\delta$ are its quantitative features, along with Lyapunov exponents. It is common in dynamical systems. It may begin with limit-cycle (periodic) motion that shows up as a fixed point in a Poincaré section.
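Returning briefly to the Euler equations (18.46): in the torque-free case $M_j = 0$ they can be integrated numerically, and the two classical invariants, the rotational kinetic energy and the magnitude of the angular momentum, serve as a check on the integration. The sketch below (a minimal illustration with an assumed Runge–Kutta step and illustrative moments of inertia, not part of the text) spins a body near its unstable middle axis:

```python
import numpy as np

I = np.array([1.0, 2.0, 3.0])          # illustrative principal moments of inertia

def euler_rhs(w):
    """Torque-free Euler equations (18.46), M = 0."""
    w1, w2, w3 = w
    return np.array([(I[1] - I[2]) * w2 * w3 / I[0],
                     (I[2] - I[0]) * w3 * w1 / I[1],
                     (I[0] - I[1]) * w1 * w2 / I[2]])

def rk4_step(w, h):
    """One classical fourth-order Runge-Kutta step."""
    k1 = euler_rhs(w)
    k2 = euler_rhs(w + h / 2 * k1)
    k3 = euler_rhs(w + h / 2 * k2)
    k4 = euler_rhs(w + h * k3)
    return w + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

w = np.array([0.1, 1.0, 0.1])          # spin close to the unstable middle axis
E0 = 0.5 * np.dot(I, w**2)             # rotational kinetic energy
L0 = np.linalg.norm(I * w)             # magnitude of angular momentum
for _ in range(5000):
    w = rk4_step(w, 0.01)
# Both invariants should be preserved to integrator accuracy
print(0.5 * np.dot(I, w**2) - E0, np.linalg.norm(I * w) - L0)
```

The tumbling about the middle axis that this trajectory exhibits is the classical instability underlying the chaotic rotation mentioned for Hyperion.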
The limit cycle may have originated in a bifurcation from a node or some other fixed point. As a control parameter changes, the fixed point of the Poincaré map splits into two points; that is, the limit cycle has a characteristic exponent going through zero, from attracting to repelling, say. The periodic motion now has a period twice as long as before, etc. We refer to Chapter 11 of Barger and Olsson (1995) in the Additional Readings for period-doubling plots of Poincaré sections for the Duffing equation (18.41) with a periodic external force. Another example of period doubling is a forced oscillator with friction (see Helleman in Cvitanovic (1989) in the Additional Readings). The quasiperiodic route to chaos is also quite common in dynamical systems, starting, for example, from a time-independent node, a fixed point. If we adjust a control parameter, the system undergoes a Hopf bifurcation to periodic motion corresponding to a limit cycle in phase space. With further change of the control parameter, a second frequency appears. If the frequency ratio is an irrational number, the trajectories are quasiperiodic, eventually covering the surface of a torus in phase space; that is, quasiperiodic orbits never close or repeat. Further changes of the control parameter may lead to a third frequency or directly to chaotic motion. Bands of chaotic motion can alternate with quasiperiodic motion in parameter space. An example of such a dynamical system is a periodically driven pendulum. A third route to chaos goes via intermittency, where the dynamical system switches between two qualitatively different motions at fixed control parameters. For example, at the beginning, periodic motion alternates with an occasional burst of chaotic motion. With a change of the control parameter, the chaotic bursts typically lengthen until, eventually, no periodic motion remains.
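The intermittency route is conveniently explored on the logistic map near its period-3 window (this anticipates Exercise 18.4.4 below). The sketch that follows is an illustration, not part of the text: at $\mu = 3.8319$, just above the tangent bifurcation at $\mu = 1 + 2\sqrt{2} \approx 3.8284$, the orbit settles onto a stable period-3 cycle, while slightly below that value one observes intermittent chaotic bursts:

```python
import math

def orbit(mu, x0=0.3, burn=20000, n=9):
    """Iterate the logistic map and return a few post-transient values."""
    x = x0
    for _ in range(burn):
        x = mu * x * (1 - x)
    out = []
    for _ in range(n):
        out.append(x)
        x = mu * x * (1 - x)
    return out

mu_c = 1 + 2 * math.sqrt(2)          # tangent bifurcation to period 3
xs = orbit(3.8319)                   # just above mu_c: stable 3-cycle
print(round(mu_c, 4), [round(v, 4) for v in xs])
```

The printed values repeat with period 3; plotting the orbit for $\mu$ slightly below $\mu_c$ shows the long laminar stretches interrupted by chaotic bursts described in the text.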
The chaotic parts are irregular and do not resemble each other, but one needs to check for a positive Lyapunov exponent to demonstrate chaos. Intermittencies of various types are common features of turbulent states in fluid dynamics. The Lorenz coupled NDEs also show intermittency.

Exercise 18.4.4 Plot the intermittency region of the logistic map at $\mu = 3.8319$. What is the period of the cycles? What happens at $\mu = 1 + 2\sqrt{2}$?
ANS. There is a tangent bifurcation to period-3 cycles.

Additional Readings
Amann, H., Ordinary Differential Equations: An Introduction to Nonlinear Analysis. New York: de Gruyter (1990).
Baker, G. L., and J. P. Gollub, Chaotic Dynamics: An Introduction, 2nd ed. Cambridge, UK: Cambridge University Press (1996).
Barger, V. D., and M. G. Olsson, Classical Mechanics, 2nd ed. New York: McGraw-Hill (1995).
Bender, C. M., and S. A. Orszag, Advanced Mathematical Methods for Scientists and Engineers. New York: McGraw-Hill (1978), Chapter 4 in particular.
Bergé, P., Y. Pomeau, and C. Vidal, Order within Chaos. New York: Wiley (1987).
Cvitanovic, P., ed., Universality in Chaos, 2nd ed. Bristol, UK: Adam Hilger (1989).
Devaney, R. L., An Introduction to Chaotic Dynamical Systems. Menlo Park, CA: Benjamin/Cummings; 2nd ed., Perseus (1989).
Earnshaw, J. C., and D. Haughey, Lyapunov exponents for pedestrians. Am. J. Phys. 61: 401 (1993).
Gleick, J., Chaos. New York: Penguin Books (1987).
Hilborn, R. C., Chaos and Nonlinear Dynamics. New York: Oxford University Press (1994).
Hirsch, M. W., and S. Smale, Differential Equations, Dynamical Systems, and Linear Algebra. New York: Academic Press (1974).
Infeld, E., and G. Rowlands, Nonlinear Waves, Solitons and Chaos. Cambridge, UK: Cambridge University Press (1990).
Jackson, E. A., Perspectives of Nonlinear Dynamics. Cambridge, UK: Cambridge University Press (1989).
Jordan, D. W., and P. Smith, Nonlinear Ordinary Differential Equations, 2nd ed.
Oxford, UK: Oxford University Press (1987).
Lyapunov, A. M., The General Problem of the Stability of Motion. Bristol, PA: Taylor & Francis (1992).
Mandelbrot, B. B., The Fractal Geometry of Nature. San Francisco: W. H. Freeman, reprinted (1988).
Moon, F. C., Chaotic and Fractal Dynamics. New York: Wiley (1992).
Peitgen, H.-O., and P. H. Richter, The Beauty of Fractals. New York: Springer (1986).
Sachdev, P. L., Nonlinear Differential Equations and their Applications. New York: Marcel Dekker (1991).
Tufillaro, N. B., T. Abbott, and J. Reilly, An Experimental Approach to Nonlinear Dynamics and Chaos. Redwood City, CA: Addison-Wesley (1992).

CHAPTER 19 PROBABILITY

Probabilities arise in many problems dealing with random events or with large numbers of particles defining random variables. An event is called random if it is practically impossible to predict from the initial state. This includes those cases where we have merely incomplete information about the initial states and/or the dynamics, as in statistical mechanics, where we may know the energy of the system, which corresponds to very many possible microscopic configurations, preventing us from predicting individual outcomes. Often the average properties of many similar events are predictable, as in quantum theory. This is why probability theory can be, and has been, developed. Random variables are involved when data depend on chance, such as weather reports and stock prices. The theory of probability describes mathematical models of chance processes in terms of probability distributions of random variables, which describe how some "random events" are more likely than others.
In this sense probability is a measure of our ignorance, giving quantitative meaning to qualitative statements such as "It will probably rain tomorrow" and "I'm unlikely to draw the heart queen." Probabilities are of fundamental importance in quantum mechanics and statistical mechanics and are applied in meteorology, economics, games, and many other areas of daily life. To a mathematician, probabilities are based on axioms, but we will discuss here practical ways of calculating probabilities for random events. Because experiments in the sciences are always subject to errors, theories of errors and their propagation involve probabilities. In statistics we deal with the applications of probability theory to experimental data.

19.1 DEFINITIONS, SIMPLE PROPERTIES

All possible mutually exclusive¹ outcomes of an experiment that is subject to chance represent the events (or points) of the sample space S. For example, each time we toss a coin we give the trial a number i = 1, 2, . . . and observe the outcomes $x_i$. Here the sample consists of two events, heads and tails, and the $x_i$ represent a discrete random variable that takes on one of two values, heads or tails. When two coins are tossed, the sample contains the events two heads, one head and one tail, and two tails; the number of heads is a good value to assign to the random variable, so the possible values are 2, 1, and 0. There are four equally probable outcomes, of which one has value 2, two have value 1, and one has value 0. So the probabilities of the three values of the random variable are 1/4 for two heads (value 2), 1/4 for no heads (value 0), and 1/2 for value 1. In other words, we define the theoretical probability P of an event denoted by the point $x_i$ of the sample as
$$P(x_i) \equiv \frac{\text{number of outcomes of event } x_i}{\text{total number of all events}}. \tag{19.1}$$
An experimental definition applies when the total number of events is not well defined (or is difficult to obtain) or equally likely outcomes do not always occur. Then
$$P(x_i) \equiv \frac{\text{number of times event } x_i \text{ occurs}}{\text{total number of trials}} \tag{19.2}$$
is more appropriate. A large, thoroughly mixed pile of black and white sand grains of the same size and in equal proportions is a relevant example, because it is impractical to count them all. But we can count the grains in a small sample volume that we pick. This way we can check that white and black grains turn up with roughly equal probability 1/2, provided we put back each sample and mix the pile again. It is found that the larger the sample volume, the smaller the spread about 1/2 will be. The more trials we run, the closer the average occurrence of all trial counts will be to 1/2. We could even pick single grains and check whether the probability 1/4 of picking two black grains in a row equals that of two white grains, etc. There are lots of statistics questions we can pursue. Thus, piles of colored sand provide for instructive experiments. The following axioms are self-evident.
• Probabilities satisfy $0 \le P \le 1$. Probability 1 means certainty; probability 0 means impossibility.
• The entire sample has probability 1. For example, drawing an arbitrary card has probability 1.
• The probabilities for mutually exclusive events add. The probability of getting one head in two coin tosses is 1/4 + 1/4 = 1/2, because it is 1/4 for head first and then tail, plus 1/4 for tail first and then head.

¹ This means that given that one particular event did occur, the others could not have occurred.

Example 19.1.1 PROBABILITY FOR A OR B
What is the probability of drawing² a club or a jack from a shuffled deck of cards? Because there are 52 cards in a deck, each being equally likely, 13 cards for each suit and 4 jacks, there are 13 clubs including the club jack, and 3 other jacks; that is, there are 16 possible cards out of 52, giving the probability (13 + 3)/52 = 16/52 = 4/13.
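The frequency definition, Eq. (19.2), can be illustrated by simulation in the spirit of the sand-pile experiment above. The sketch below is an illustration only (the fair coin stands in for equal proportions of black and white grains; names are ad hoc); it shows the spread about 1/2 shrinking as the number of trials grows:

```python
import random

random.seed(1)   # fixed seed so the run is reproducible

def head_frequency(n):
    """Relative frequency of heads in n fair-coin trials, Eq. (19.2)."""
    heads = sum(random.random() < 0.5 for _ in range(n))
    return heads / n

for n in (100, 10000, 1000000):
    print(n, head_frequency(n))   # the spread about 1/2 shrinks as n grows
```

This is the empirical counterpart of the statement that more trials drive the observed frequency toward the theoretical probability.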
² These are examples of non-mutually exclusive events.

If we represent the sample space by a set S of points, then events are subsets A, B, . . . of S, denoted A ⊂ S, etc. Two sets A, B are equal if A is contained in B, A ⊂ B, and B is contained in A, B ⊂ A. The union A ∪ B consists of all points (events) that are in A or B or both (see Fig. 19.1). The intersection A ∩ B consists of all points that are in both A and B. If A and B have no common points, their intersection is the empty set, A ∩ B = ∅, which has no elements (events). The set of points in A that are not in the intersection of A and B is denoted A − A ∩ B, defining a subtraction of sets. If we take the club suit in Example 19.1.1 as set A and the four jacks as set B, then their union comprises all clubs and jacks, and their intersection is the club jack only. Each subset A has its probability P(A) ≥ 0. In terms of these set-theory concepts and notations, the probability laws we just discussed become: $0 \le P(A) \le 1$; the entire sample space has $P(S) = 1$; and the probability of the union A ∪ B of mutually exclusive events is the sum, $P(A \cup B) = P(A) + P(B)$ when $A \cap B = \emptyset$. The addition rule for probabilities of arbitrary sets is given by the following theorem.

ADDITION RULE:
$$P(A \cup B) = P(A) + P(B) - P(A \cap B). \tag{19.3}$$

To prove this, we decompose the union into two mutually exclusive sets, $A \cup B = A \cup (B - B \cap A)$, subtracting the intersection of A and B from B before joining them. Their probabilities are $P(A)$ and $P(B) - P(B \cap A)$, which we add. We could also have decomposed $A \cup B = (A - A \cap B) \cup B$, from which our theorem follows similarly by adding these probabilities, $P(A \cup B) = [P(A) - P(A \cap B)] + P(B)$. Note that $A \cap B = B \cap A$ (see Fig. 19.1). Sometimes the rules and definitions of probabilities that we have discussed so far are not sufficient, however.
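The addition rule (19.3) is easy to verify directly on the deck of Example 19.1.1 by modeling events as sets. This is an illustration, not part of the text; the set names mirror those used above:

```python
from fractions import Fraction
from itertools import product

suits = ["clubs", "diamonds", "hearts", "spades"]
ranks = ["2", "3", "4", "5", "6", "7", "8", "9", "10", "J", "Q", "K", "A"]
deck = set(product(ranks, suits))            # 52 equally likely cards

A = {card for card in deck if card[1] == "clubs"}   # the 13 clubs
B = {card for card in deck if card[0] == "J"}       # the 4 jacks

def P(event):
    """Theoretical probability, Eq. (19.1), on the uniform deck."""
    return Fraction(len(event), len(deck))

# Addition rule (19.3): P(A or B) = P(A) + P(B) - P(A and B)
assert P(A | B) == P(A) + P(B) - P(A & B)
print(P(A | B))   # 4/13, as in Example 19.1.1
```

The single subtracted term is the club jack, the one card counted twice when the clubs and the jacks are simply added.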
FIGURE 19.1 The shaded area gives the intersection A ∩ B, corresponding to the "A and B" events; the dashed line encloses A ∪ B, corresponding to the "A or B" events.

Example 19.1.2 CONDITIONAL PROBABILITY
A simple example consists of a box of 10 identical red and 20 identical blue pens, arranged in random order, from which we remove pens successively, that is, without putting them back. Suppose we draw a red pen first, event A. That will happen with probability P(A) = 10/30 = 1/3 if the pens are thoroughly mixed up. The conditional probability P(B|A) of drawing a blue pen in the next round, event B, however, will depend on the fact that we drew a red pen in the first round. It is given by 20/29. There are 10 · 20 possible sample points (red/blue pen events) in two rounds, and the sample has 30 · 29 events, so the combined probability is
$$\frac{10}{30}\cdot\frac{20}{29} = \frac{10\cdot 20}{30\cdot 29} = \frac{20}{87}.$$
In general, the combined probability P(A, B) that A and B happen (in this order) is given by the product of the probability that A happens, P(A), and the probability that B happens if A does, P(B|A):
$$P(A, B) = P(A)P(B|A). \tag{19.4}$$
In other words, the conditional probability P(B|A) is given by the ratio
$$P(B|A) = \frac{P(A, B)}{P(A)}. \tag{19.5}$$
If the conditional probability P(B|A) = P(B) is independent of A, then the events A and B are called independent, and the combined probability
$$P(A \cap B) = P(A)P(B) \tag{19.6}$$
is simply the product of both probabilities.

Example 19.1.3 SCHOLASTIC APTITUDE TESTS
Colleges and universities rely on the verbal and mathematics SAT scores, among others, as predictors of a student's success in passing courses and graduating. A research university is known to admit mostly students with a combined verbal and mathematics score above 1400 points. The graduation rate is 95%; that is, 5% drop out or transfer elsewhere.
Of those who graduate, 97% have an SAT score of more than 1400 points, while 80% of those who drop out have an SAT score below 1400. Suppose a student has an SAT score below 1400. What is his/her probability of graduating? Let A be the cases with an SAT score below 1400, B those above 1400 (mutually exclusive events with P(A) + P(B) = 1), and C those students who graduate. That is, we want to know the conditional probabilities P(C|A) and P(C|B). To apply Eq. (19.5) we need P(A) and P(B). There are 3% of students with scores below 1400 among those who graduate (95%) and 80% among those 5% who do not graduate, so
$$P(A) = 0.03\cdot 0.95 + \tfrac{4}{5}\cdot 0.05 = 0.0685, \qquad P(B) = 0.97\cdot 0.95 + \tfrac{1}{5}\cdot 0.05 = 0.9315,$$
and also $P(C \cap A) = 0.03\cdot 0.95 = 0.0285$ and $P(C \cap B) = 0.97\cdot 0.95 = 0.9215$. Here the combined probabilities $P(C, A) = P(C \cap A)$ and $P(C, B) = P(C \cap B)$, as C and A (and C and B) are parts of the same sample space. Therefore,
$$P(C|A) = \frac{P(C \cap A)}{P(A)} = \frac{0.0285}{0.0685} \approx 41.6\%, \qquad P(C|B) = \frac{P(C \cap B)}{P(B)} = \frac{0.9215}{0.9315} \approx 98.9\%;$$
that is, a little less than 42% is the probability for a student with a score below 1400 to graduate at this particular university.

As a corollary to the definition of a conditional probability, Eq. (19.5), we compare $P(A|B) = P(A \cap B)/P(B)$ and $P(B|A) = P(A \cap B)/P(A)$, which leads to the following theorem.

BAYES THEOREM:
$$P(A|B) = \frac{P(A)}{P(B)}\,P(B|A). \tag{19.7}$$

This can be generalized to the following.

THEOREM: If the random events $A_i$ with probabilities $P(A_i) > 0$ are mutually exclusive and their union represents the entire sample S, then an arbitrary random event $B \subset S$ has the probability
$$P(B) = \sum_{i=1}^{n} P(A_i)\,P(B|A_i).$$

FIGURE 19.2 The shaded area B is composed of mutually exclusive subsets of B belonging also to $A_1$, $A_2$, $A_3$, where the $A_i$ are mutually exclusive.
(19.8)

This decomposition law resembles the expansion of a vector into a basis of unit vectors defining the components of the vector. The relation follows from the obvious decomposition $B = \bigcup_i (B \cap A_i)$, Fig. 19.2, which implies $P(B) = \sum_i P(B \cap A_i)$ for the probabilities, because the components $B \cap A_i$ are mutually exclusive. For each $i$, we know from Eq. (19.5) that $P(B \cap A_i) = P(A_i)P(B|A_i)$, which proves the theorem.

Counting of Permutations and Combinations

Counting particles in samples can help us find probabilities, as in statistical mechanics. If we have n different molecules, let us ask in how many ways we can arrange them in a row, that is, permute them. This number is defined as the number of their permutations. Thus, by definition, the order matters in permutations. There are n choices for picking the first molecule, n − 1 for the second, etc. Altogether there are n! permutations of n different molecules or objects. Generalizing this, suppose there are n people but only k < n chairs to seat them. In how many ways can we seat k people in the chairs? Counting as before, we get
$$n(n-1)\cdots(n-k+1) = \frac{n!}{(n-k)!}$$
for the number of permutations of n different objects, k at a time. We now consider the number of combinations of objects, where their order is irrelevant by definition. For example, three letters a, b, c can be combined, two letters at a time, in $3 = 3!/2!$ ways: ab, ac, bc. If letters can be repeated, then we add the pairs aa, bb, cc and have six combinations. Thus, a combination of different particles differs from a permutation in that their order does not matter. Combinations occur with repetition (the mathematician's way of treating indistinguishable objects) and without, where no two sets contain the same particles. The number of different combinations of n particles, k at a time and without repetitions, is given by the binomial coefficient
$$\frac{n(n-1)\cdots(n-k+1)}{k!} = \binom{n}{k}.$$
If repetition is allowed, then the number is
$$\binom{n+k-1}{k}.$$
In the number $n!/(n-k)!$ of permutations of n particles, k at a time, we have to divide out the number k! of permutations of the groups of k particles, because their order does not matter in a combination. This proves the first claim. The second one is shown by mathematical induction. In statistical mechanics, we ask in how many ways we can put n particles in k boxes so that there will be $n_i$ (distinguishable) particles in the ith box, without regard to order in each box, with $\sum_{i=1}^{k} n_i = n$. Counting as before, there are n choices for selecting the first particle, n − 1 for picking the second, etc., but the $n_1!$ permutations within the first box are discounted, the $n_2!$ permutations within the second box are disregarded, etc. Therefore the number of combinations is
$$\frac{n!}{n_1!\,n_2!\cdots n_k!}, \qquad n_1 + n_2 + \cdots + n_k = n.$$
In statistical mechanics, particles that obey
• Maxwell–Boltzmann (MB) statistics are distinguishable, without restriction on their number in each state;
• Bose–Einstein (BE) statistics are indistinguishable, with no restriction on the number of particles in each quantum state;
• Fermi–Dirac (FD) statistics are indistinguishable, with at most one particle per state.
For example, putting three particles in four boxes, there are $4^3$ equally likely arrangements in the MB case, because each particle can be put into any box in four ways, giving a total of $4^3$ choices. For BE statistics, the number of combinations with repetitions is $\binom{3+4-1}{3} = 20$. For FD statistics, it is $\binom{4}{3} = 4$. More generally, for MB statistics the number of distinct arrangements of n particles among k states (boxes) is $k^n$, for BE statistics it is $\binom{n+k-1}{n}$, and for FD statistics it is $\binom{k}{n}$.

Exercises

19.1.1 A card is drawn from a shuffled deck. What is the probability that it is (a) black, (b) a red nine, or (c) a queen of spades?
19.1.2 Find the probability of drawing two kings from a shuffled deck of cards (a) if the first card is put back before the second is drawn, and (b) if the first card is not put back after being drawn.
19.1.3 When two fair dice are thrown, what is the probability of (a) observing a number less than 4, or (b) a number greater than or equal to 4 but less than 6?
19.1.4 Rolling three fair dice, what is the probability of obtaining six points?
19.1.5 Determine the probability P(A ∩ B ∩ C) in terms of P(A), P(B), P(C), etc.
19.1.6 Determine, directly or by mathematical induction, the probability of a distribution of N (Maxwell–Boltzmann) particles in k boxes with $N_1$ in box 1, $N_2$ in box 2, . . . , $N_k$ in the kth box, for any numbers $N_j \ge 1$ with $N_1 + N_2 + \cdots + N_k = N$, k < N. Repeat this for Fermi–Dirac and Bose–Einstein particles.
19.1.7 Show that P(A ∪ B ∪ C) = P(A) + P(B) + P(C) − P(A ∩ B) − P(A ∩ C) − P(B ∩ C) + P(A ∩ B ∩ C).
19.1.8 Determine the probability that a positive integer n ≤ 100 is divisible by a prime number p ≤ 100. Verify your result for p = 3, 5, 7.
19.1.9 Put two particles obeying Maxwell–Boltzmann (Fermi–Dirac, or Bose–Einstein) statistics in three boxes. How many ways are there in each case?

19.2 RANDOM VARIABLES

Each time we toss a die, we give the trial a number i = 1, 2, . . . and observe the points $x_i$ = 1, 2, 3, 4, 5, or 6, each with probability 1/6. If i denotes the trial number, then $x_i$ is a discrete random variable that takes the discrete values from 1 to 6 with the definite probability $P(x_i) = 1/6$.
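As a cross-check of the counting rules of Section 19.1 (and of Exercise 19.1.9 above), the three statistics can be enumerated directly with itertools. This sketch is an illustration, not part of the text, for two particles in three boxes:

```python
from itertools import product, combinations, combinations_with_replacement

n, k = 2, 3          # two particles, three boxes, as in Exercise 19.1.9
boxes = range(k)

mb = len(list(product(boxes, repeat=n)))                  # distinguishable: k**n
be = len(list(combinations_with_replacement(boxes, n)))   # indistinguishable, repeats allowed
fd = len(list(combinations(boxes, n)))                    # indistinguishable, at most 1 per box
print(mb, be, fd)    # 9 6 3
```

These match the general formulas $k^n = 3^2 = 9$, $\binom{n+k-1}{n} = \binom{4}{2} = 6$, and $\binom{k}{n} = \binom{3}{2} = 3$.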
Example 19.2.1 DISCRETE RANDOM VARIABLE
If we toss two dice and record the sum of the points shown in each trial, then this sum is also a discrete random variable. It takes on the value 2 when both dice show 1, with probability (1/6)²; the value 3 when one die has 1 and the other 2, hence with probability (1/6)² + (1/6)² = 1/18; the value 4 when both dice have 2 or one has 1 and the other 3, so with probability (1/6)² + (1/6)² + (1/6)² = 1/12; the value 5 with probability 4(1/6)² = 1/9; the value 6 with probability 5/36; the value 7 with the maximum probability, 6(1/6)² = 1/6; and so on, up to the value 12, when both dice show 6 points, with probability (1/6)². This probability distribution is symmetric about 7. The symmetry is obvious from Fig. 19.3 and becomes visible algebraically when we write the rising and falling linear parts as
$$P(x) = \frac{x-1}{36} = \frac{6-(7-x)}{36}, \qquad x = 2, 3, \ldots, 7,$$
$$P(x) = \frac{13-x}{36} = \frac{6+(7-x)}{36}, \qquad x = 7, 8, \ldots, 12.$$

FIGURE 19.3 Probability distribution P(x) of the sum of points when two dice are tossed.

In summary, then:
• The different values $x_i$ that a random variable X assumes denote and distinguish the events in the sample space of an experiment; each event occurs by chance with a probability $P(X = x_i) = p_i \ge 0$ that is a function of the random variable X. A random variable $X(e_i) = x_i$ is defined on the sample space, that is, for the events $e_i \in S$.
• We define the probability density f(x) of a continuous random variable X as
$$P(x \le X \le x + dx) = f(x)\,dx; \tag{19.9}$$
that is, f(x) dx is the probability that X lies in the interval $x \le X \le x + dx$. For f(x) to be a probability density, it has to satisfy $f(x) \ge 0$ and $\int f(x)\,dx = 1$. The generalization to probability distributions depending on several random variables is straightforward. Quantum physics abounds in examples.
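The two-dice distribution of Example 19.2.1 can be enumerated exactly, which confirms both the individual probabilities and the symmetry about 7. This is an illustration only; the names are ad hoc:

```python
from fractions import Fraction
from itertools import product

# Exact distribution of the sum of two fair dice, as in Example 19.2.1
counts = {}
for d1, d2 in product(range(1, 7), repeat=2):
    s = d1 + d2
    counts[s] = counts.get(s, 0) + 1
P = {s: Fraction(c, 36) for s, c in counts.items()}

assert P[7] == Fraction(1, 6)                          # the maximum probability
assert all(P[s] == P[14 - s] for s in range(2, 13))    # symmetry about 7
assert all(P[s] == Fraction(s - 1, 36) for s in range(2, 8))   # rising linear part
print(P[2], P[7], P[12])
```

Enumerating the 36 equally likely outcomes reproduces the rising part $(x-1)/36$ and the falling part $(13-x)/36$ quoted above.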
Example 19.2.2 CONTINUOUS RANDOM VARIABLE: HYDROGEN ATOM

Quantum mechanics gives the probability |ψ|² d³r of finding a 1s electron of a hydrogen atom in the volume³ d³r, where ψ = N e^{−r/a} is the wave function, normalized so that

1 = ∫ |ψ|² dV = 4πN² ∫₀^∞ e^{−2r/a} r² dr = πa³N²,

with dV = r² dr d cos θ dϕ the volume element and a the Bohr radius. The radial integral is found by repeated integration by parts or by rescaling it to the gamma function,

∫₀^∞ e^{−2r/a} r² dr = (a/2)³ ∫₀^∞ e^{−x} x² dx = (a³/8) Γ(3) = a³/4.

Here all points in space constitute the sample space and represent three random variables, but the probability density |ψ|² in this case depends only on the radial variable because of the spherical symmetry of the 1s state. A measure for the size of the H atom is given by the average radial distance of the electron from the proton at the center, which in quantum mechanics is called the expectation value:

⟨1s|r|1s⟩ = ∫ r|ψ|² dV = 4πN² ∫₀^∞ r e^{−2r/a} r² dr = (3/2)a.

We shall define this concept for arbitrary probability distributions shortly.  ■

• A random variable that takes only discrete values x₁, x₂, ..., x_n with probabilities p₁, p₂, ..., p_n, respectively, is called a discrete random variable, so Σ_i p_i = 1: if an "experiment" or trial is performed, some outcome must occur, with unit probability.

• If the values comprise a continuous range a ≤ x ≤ b, then we deal with a continuous random variable, whose probability distribution may or may not be a continuous function as well.

³ Note that |ψ|² 4πr² dr gives the probability for the electron to be found between r and r + dr, at any angle.

When we measure a quantity x n times, obtaining the values x_j, we define the average value

x̄ = (1/n) Σ_{j=1}^n x_j    (19.10)

of the trials, also called the mean or expectation value, where this formula assumes that every observed value x_j is equally likely and occurs with probability 1/n.
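As a numerical cross-check of Example 19.2.2 (a sketch, not from the text; the Simpson integrator and the cutoff at r = 30a are ad hoc choices), the normalization and the expectation value ⟨r⟩ = 3a/2 can be verified by quadrature with a = 1:

```python
import math

a = 1.0                               # Bohr radius, set to 1
N2 = 1.0 / (math.pi * a**3)           # N^2 from 4*pi*N^2*(a^3/4) = 1

def simpson(f, lo, hi, steps=2000):
    """Composite Simpson rule; steps must be even."""
    h = (hi - lo) / steps
    s = f(lo) + f(hi)
    for i in range(1, steps):
        s += f(lo + i * h) * (4 if i % 2 else 2)
    return s * h / 3.0

# Radial norm integral and the mean radial distance <1s|r|1s>.
norm = simpson(lambda r: 4 * math.pi * N2 * r**2 * math.exp(-2 * r / a), 0.0, 30 * a)
mean_r = simpson(lambda r: 4 * math.pi * N2 * r**3 * math.exp(-2 * r / a), 0.0, 30 * a)
```

The exponential tail beyond r = 30a contributes about e^{−60} and is negligible at this accuracy.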
This connection is the key link of experimental data with probability theory. This observation and practical experience suggest defining the mean value of a discrete random variable X as

⟨X⟩ ≡ Σ_i x_i p_i    (19.11)

and that of a continuous random variable characterized by a probability density f(x) as

⟨X⟩ = ∫ x f(x) dx.    (19.12)

These are linear averages. Other notations in the literature are X̄ and E(X).

The use of the arithmetic mean x̄ of n measurements as the average value is suggested by simplicity and plain experience, assuming equal probability for each x_i again. But why do we not consider the geometric mean

x_g = (x₁ · x₂ · ··· · x_n)^{1/n}    (19.13)

or the harmonic mean x_h determined by the relation

1/x_h = (1/n)(1/x₁ + 1/x₂ + ··· + 1/x_n)    (19.14)

or that value x̃ that minimizes the sum of absolute deviations Σ_i |x_i − x̃|? Here the x_i are taken to increase monotonically. When we plot O(x) = Σ_{i=1}^{2n+1} |x_i − x| for an odd number of points, as in Fig. 19.4a, we realize that it has a minimum at its central value, while for an even number of points E(x) = Σ_{i=1}^{2n} |x_i − x| is flat in its central region, as shown in Fig. 19.4b. These properties make these functions unacceptable for determining average values. Instead, when we minimize the sum of quadratic deviations,

Σ_{i=1}^n (x − x_i)² = minimum,    (19.15)

setting the derivative equal to zero yields 2 Σ_i (x − x_i) = 0, or

x = (1/n) Σ_i x_i ≡ x̄,

that is, the arithmetic mean. It has another important property: If we denote by v_i = x_i − x̄ the deviations, then Σ_i v_i = 0; that is, the sum of positive deviations equals the sum of negative deviations. This principle of minimizing the quadratic sum of deviations, called the method of least squares, is due to C. F. Gauss, among others.

FIGURE 19.4 (a) Σ_{i=1}^3 |x_i − x| for an odd number of points; (b) Σ_{i=1}^4 |x_i − x| for an even number of points.
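For a concrete feel (an illustration, not from the text), the three candidate averages can be compared on a small sample; for positive, unequal data the harmonic mean is smallest and the arithmetic mean largest:

```python
import math

xs = [2.0, 4.0, 8.0]

arithmetic = sum(xs) / len(xs)                 # minimizes quadratic deviations, Eq. (19.15)
geometric = math.prod(xs) ** (1.0 / len(xs))   # Eq. (19.13)
harmonic = len(xs) / sum(1.0 / x for x in xs)  # Eq. (19.14)

# The deviations from the arithmetic mean sum to zero, as noted in the text.
assert abs(sum(x - arithmetic for x in xs)) < 1e-12
```

For this sample the geometric mean is exactly (2·4·8)^{1/3} = 4.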
How closely the mean value fits a set of data points depends on the spread of the individual measurements about this mean. Again, we reject the average sum of absolute deviations Σ_{i=1}^n |x_i − x̄|/n as a measure of the spread because it selects the central measurement as the best value for no good reason. A more appropriate definition of the spread is the square root of the average of the squared deviations from the mean, the standard deviation

σ = sqrt( (1/n) Σ_{i=1}^n (x_i − x̄)² ),

where the square root is motivated by dimensional analysis.

Example 19.2.3 STANDARD DEVIATION OF MEASUREMENTS

From the measurements x₁ = 7, x₂ = 9, x₃ = 10, x₄ = 11, x₅ = 13 we extract x̄ = 10 for the mean value and σ = sqrt((9 + 1 + 0 + 1 + 9)/4) = 2.2361 for the standard deviation, or spread, using the experimental sample formula with n − 1 in the denominator (see Eq. (19.64)) because the probabilities are not known.  ■

There is yet another interpretation of the standard deviation, in terms of the sum of squares of measurement differences:

(1/2) Σ_{i,k=1}^n (x_i − x_k)² = (1/2) Σ_{i,k} (x_i² + x_k² − 2 x_i x_k) = n Σ_i x_i² − (Σ_i x_i)² = n²σ².

19.4 POISSON DISTRIBUTION

For n > 0 the probability P_n(t + dt) is composed of two mutually exclusive events: (i) n particles are emitted in the time t, none in dt, and (ii) n − 1 particles are emitted in the time t, one in dt. Therefore

P_n(t + dt) = P_n(t) P₀(dt) + P_{n−1}(t) P₁(dt).

Here we substitute the probability of observing one particle, P₁(dt) = µ dt, and no particle, P₀(dt) = 1 − P₁(dt), in the time dt. This yields

P_n(t + dt) = P_n(t)(1 − µ dt) + P_{n−1}(t) µ dt.

So, after rearranging and dividing by dt, we get

dP_n(t)/dt = lim_{dt→0} [P_n(t + dt) − P_n(t)]/dt = µ P_{n−1}(t) − µ P_n(t).    (19.47)

For n = 0 this differential recursion relation simplifies, because there is no particle in times t and dt, giving

dP₀(t)/dt = −µ P₀(t).    (19.48)

The ODE says that particles have a constant decay probability and that decay removes them from the distribution. This ODE integrates to P₀(t) = e^{−µt} if the probability that no particle is emitted during a zero time interval, P₀(0) = 1, is used. Here P₀(0) = 1 means no decay takes place at t ≤ 0.
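The differential recursion (19.47) can also be checked numerically (an illustration with an ad hoc step size, not part of the text): starting from P₀(0) = 1, a simple Euler integration reproduces the closed-form solution P_n(t) = (µt)ⁿ e^{−µt}/n! derived from Eqs. (19.47)–(19.48).

```python
import math

mu, t_end = 1.0, 2.0
dt, nmax = 1e-4, 30
steps = int(round(t_end / dt))

# Euler integration of dPn/dt = mu*(P_{n-1} - P_n), truncated at n = nmax.
P = [1.0] + [0.0] * nmax              # P0(0) = 1, Pn(0) = 0 for n > 0
for _ in range(steps):
    P = [P[n] + dt * mu * ((P[n - 1] if n > 0 else 0.0) - P[n])
         for n in range(nmax + 1)]

# Closed form (mu t)^n e^{-mu t} / n! evaluated at t = t_end.
closed = [(mu * t_end) ** n * math.exp(-mu * t_end) / math.factorial(n)
          for n in range(nmax + 1)]
err = max(abs(p - c) for p, c in zip(P, closed))
```

The first-order Euler error scales with dt, so the agreement improves as the step is refined.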
Now we go back to Eq. (19.47) for n = 1,

Ṗ₁ = µ(e^{−µt} − P₁),    P₁(0) = 0,    (19.49)

and solve the homogeneous equation, which is the same for P₁ as Eq. (19.48). This yields P₁(t) = µ₁ e^{−µt}. Then we solve the inhomogeneous ODE (Eq. (19.49)) by varying the constant µ₁, finding µ̇₁ = µ, so P₁(t) = µt e^{−µt}. The general solution is

P_n(t) = ((µt)ⁿ/n!) e^{−µt},    (19.50)

as may be confirmed by substitution into Eq. (19.47) and verification of the initial conditions P_n(0) = 0, n > 0. This is an example of the Poisson distribution. The Poisson distribution is defined by the probabilities

p(n) = (µⁿ/n!) e^{−µ},    X = n = 0, 1, 2, ...,    (19.51)

and is exhibited in Fig. 19.6. The random variable X is discrete. The probabilities are properly normalized because e^{−µ} Σ_{n=0}^∞ µⁿ/n! = 1. The mean value and variance,

⟨X⟩ = e^{−µ} Σ_{n=1}^∞ n µⁿ/n! = µ e^{−µ} Σ_{n=0}^∞ µⁿ/n! = µ,
σ² = ⟨X²⟩ − ⟨X⟩² = µ(µ + 1) − µ² = µ,    (19.52)

follow from the characteristic function

⟨e^{itX}⟩ = Σ_{n=0}^∞ e^{itn−µ} µⁿ/n! = e^{−µ} Σ_{n=0}^∞ (µe^{it})ⁿ/n! = e^{µ(e^{it}−1)}

by differentiation and setting t = 0, using Eq. (19.17).

FIGURE 19.6 Poisson distribution compared with binomial distribution.

A Poisson distribution becomes a good approximation of the binomial distribution for a large number n of trials and small probability p ∼ µ/n, µ a constant.

THEOREM: In the limit n → ∞ and p → 0, so that the mean value np → µ stays finite, the binomial distribution becomes a Poisson distribution.

To prove this theorem, we apply Stirling's formula (Chapter 8), n! ∼ √(2πn)(n/e)ⁿ for large n, to the factorials in Eq. (19.42), keeping x finite while n → ∞. This yields, for n → ∞,

n!/(n − x)! ∼ (n/e)ⁿ ((n − x)/e)^{−(n−x)} = (n/e)ˣ (1 + x/(n − x))^{n−x} ∼ (n/e)ˣ eˣ ∼ nˣ,

and, for n → ∞, p → 0, with np → µ,

(1 − p)^{n−x} ∼ (1 − pn/n)ⁿ ∼ (1 − µ/n)ⁿ ∼ e^{−µ}.

Finally, pˣ nˣ → µˣ, so altogether

[n!/(x!(n − x)!)] pˣ (1 − p)^{n−x} → (µˣ/x!) e^{−µ},    n → ∞,    (19.53)

which is a Poisson distribution for the random variable X = x with 0 ≤ x < ∞. This limit theorem is a particular example of the laws of large numbers.

Table 19.1

i:     0    1    2    3    4    5    6    7    8    9   10
n_i:  57  203  383  525  532  408  273  139   45   27   16

Exercises

19.4.1 Radioactive decays are governed by the Poisson distribution. In a Rutherford–Geiger experiment the number of emitted α particles is counted in n = 2608 time intervals of 7.5 seconds each. In Table 19.1, n_i is the number of time intervals in which i particles were emitted. Determine the average number λ of emitted particles, and compare the n_i of Table 19.1 with n p_i computed from the Poisson distribution with mean value λ.

19.4.2 Derive the standard deviation of a Poisson distribution of mean value µ.

19.4.3 The number of α decay particles of a radium sample is counted per minute for 40 hours. The total number is 5000. How many 1-minute intervals are expected to have (a) 2, (b) 5 α particles?

19.4.4 For a radioactive sample, 10 decays are counted on average in 100 seconds. Use the Poisson distribution to estimate the probability of counting 3 decays in 10 seconds.

19.4.5 ²³⁸U has a half-life of 4.51 × 10⁹ years. Its decay series ends with the stable lead isotope ²⁰⁶Pb. The ratio of the number of ²⁰⁶Pb to ²³⁸U atoms in a rock sample is measured as 0.0058. Estimate the age of the rock, assuming that all the lead in the rock comes from the initial decay of the ²³⁸U, which determines the rate of the entire decay process because the subsequent steps take place far more rapidly. Hint. The decay constant λ in the decay law N(t) = N e^{−λt} is related to the half-life T by T = ln 2/λ.
ANS. 3.8 × 10⁷ years.

19.4.6 The probability of hitting a target in one shot is known to be 20%. If five shots are fired independently, what is the probability of striking the target at least once?

19.4.7 A piece of uranium is known to contain the isotopes ²³⁵₉₂U and ²³⁸₉₂U as well as 0.80 g of ²⁰⁶₈₂Pb per gram of uranium.
Estimate the age of the piece (and thus Earth) in years. Hint. Assume the lead comes only from the ²³⁸₉₂U. Use the decay constant from Exercise 19.4.5.

19.5 GAUSS' NORMAL DISTRIBUTION

The bell-shaped Gauss distribution is defined by the probability density

f(x) = (1/(σ√(2π))) exp(−(x − µ)²/(2σ²)),    −∞ < x < ∞,    (19.54)

with mean value µ and variance σ². It is by far the most important continuous probability distribution and is displayed in Fig. 19.7.

It is properly normalized because, substituting y = (x − µ)/(σ√2), we obtain

(1/(σ√(2π))) ∫_{−∞}^∞ e^{−(x−µ)²/2σ²} dx = (1/√π) ∫_{−∞}^∞ e^{−y²} dy = (2/√π) ∫₀^∞ e^{−y²} dy = 1.

Similarly, substituting y = x − µ, we see that

⟨X⟩ − µ = ∫_{−∞}^∞ ((x − µ)/(σ√(2π))) e^{−(x−µ)²/2σ²} dx = ∫_{−∞}^∞ (y/(σ√(2π))) e^{−y²/2σ²} dy = 0,

the integrand being odd in y, so the integral over y > 0 cancels that over y < 0. Similarly we check that the standard deviation is σ. From the normal distribution, by the substitution y = (x − ⟨X⟩)/σ,

P(|X − ⟨X⟩| > kσ) = P(|(X − ⟨X⟩)/σ| > k) = P(|Y| > k)
= √(2/π) ∫_k^∞ e^{−y²/2} dy = (2/√π) ∫_{k/√2}^∞ e^{−z²} dz = erfc(k/√2),

we can evaluate the integral for k = 1, 2, 3 and thus extract the following numerical relations for a normally distributed random variable:

P(|X − ⟨X⟩| ≥ σ) ∼ 0.3173,    P(|X − ⟨X⟩| ≥ 2σ) ∼ 0.0455,    P(|X − ⟨X⟩| ≥ 3σ) ∼ 0.0027,    (19.55)

the last of which is interesting to compare with Chebychev's inequality (see Eq. (19.21)), which gives ≤ 1/9 for an arbitrary probability distribution instead of ∼ 0.0027 for the 3σ rule of the normal distribution.

FIGURE 19.7 Normal Gauss distribution for mean value zero and various standard deviations, h = 1/(σ√2).

ADDITION THEOREM: If the random variables X, Y have the same normal distribution, that is, the same mean value and variance, then Z = X + Y has a normal distribution with twice the mean value and twice the variance of X and Y.
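Before turning to the proof, the numbers in Eq. (19.55) are easy to reproduce (an illustration, not from the text), since the two-sided tail probability is exactly erfc(k/√2):

```python
import math

# Two-sided tail probabilities of the normal distribution, Eq. (19.55):
# P(|X - <X>| >= k*sigma) = erfc(k / sqrt(2)) for k = 1, 2, 3.
tails = {k: math.erfc(k / math.sqrt(2.0)) for k in (1, 2, 3)}
```

This uses the complementary error function from the standard library rather than any numerical integration.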
To prove this theorem, we take the Gauss density as

f(x) = (1/√(2π)) e^{−x²/2},    with (1/√(2π)) ∫_{−∞}^∞ e^{−x²/2} dx = 1,

without loss of generality. Then the probability density of (X, Y) is the product

f(x, y) = (1/√(2π)) e^{−x²/2} · (1/√(2π)) e^{−y²/2} = (1/(2π)) e^{−(x²+y²)/2}.

Also, Eq. (17.36) gives the density for Z = X + Y as

g(z) = ∫_{−∞}^∞ (1/√(2π)) e^{−x²/2} (1/√(2π)) e^{−(x−z)²/2} dx.

Completing the square in the exponent,

2x² − 2xz + z² = (√2 x − z/√2)² + z²/2,

we obtain

g(z) = (1/(2π)) e^{−z²/4} ∫_{−∞}^∞ exp(−½ (√2 x − z/√2)²) dx.

Using the substitution u = x − z/2, we find that the integral transforms into

∫_{−∞}^∞ exp(−½ (√2 x − z/√2)²) dx = ∫_{−∞}^∞ e^{−u²} du = √π,

so the density for Z = X + Y is

g(z) = (1/(2√π)) e^{−z²/4},    (19.56)

which means it has mean value zero and variance 2, twice that of X and Y.

In a special limit the discrete Poisson probability distribution is closely related to the continuous Gauss distribution. This limit theorem is another example of the laws of large numbers, which are often dominated by the bell-shaped normal distribution.

THEOREM: For large n and mean value µ, the Poisson distribution approaches a Gauss distribution.

To prove this theorem for n → ∞, we approximate the factorial in the Poisson probability p(n) of Eq. (19.51) by Stirling's asymptotic formula (see Chapter 8),

n! ∼ √(2nπ) (n/e)ⁿ,    n → ∞,

and choose the deviation v = n − µ from the mean value as the new variable. We let the mean value µ → ∞ and treat v/µ as small but v²/µ as finite. Substituting n = µ + v and expanding the logarithm in a Maclaurin series, keeping two terms, we obtain

ln p(n) = −µ + n ln µ − n ln n + n − ln √(2nπ)
= (µ + v) ln µ − (µ + v) ln(µ + v) + v − ln √(2π(µ + v))
= (µ + v) ln(1 − v/(µ + v)) + v − ln √(2πµ)
= (µ + v) (−v/(µ + v) − v²/(2(µ + v)²)) + v − ln √(2πµ)
∼ −v²/(2µ) − ln √(2πµ),

replacing µ + v → µ because |v| ≪ µ.
Exponentiating this result, we find that for large n and µ

p(n) → (1/√(2πµ)) e^{−v²/2µ},    (19.57)

which is a Gauss distribution of the continuous variable v with mean value 0 and standard deviation σ = √µ.

In a special limit the discrete binomial probability distribution is also closely related to the continuous Gauss distribution. This limit theorem is another example of the laws of large numbers.

THEOREM: In the limit n → ∞, so that the mean value np → ∞, the binomial distribution becomes Gauss' normal distribution.

Recall from Section 19.4 that, when np → µ < ∞, the binomial distribution becomes a Poisson distribution. Instead of the large number x of successes in n trials, we use the deviation v = x − pn from the (large) mean value pn as our new continuous random variable, under the condition that |v| ≪ pn but v²/n is finite as n → ∞. Thus, we replace x by v + pn and n − x by qn − v in the factorials of Eq. (19.42), f(x) → W(v) as n → ∞, and then apply Stirling's formula. This yields

W(v) = pˣ q^{n−x} n^{n+1/2} e^{−n+x+(n−x)} / (√(2π) (v + pn)^{x+1/2} (qn − v)^{n−x+1/2}).

Here we factor out the dominant powers of n and cancel powers of p and q to find

W(v) = (1/√(2πpqn)) (1 + v/(pn))^{−(v+pn+1/2)} (1 − v/(qn))^{−(qn−v+1/2)}.

In terms of the logarithm we have

ln W(v) = ln(1/√(2πpqn)) − (v + pn + 1/2) ln(1 + v/(pn)) − (qn − v + 1/2) ln(1 − v/(qn))
= ln(1/√(2πpqn)) − (v + pn + 1/2)(v/(pn) − v²/(2p²n²) + ···) − (qn − v + 1/2)(−v/(qn) − v²/(2q²n²) + ···)
= ln(1/√(2πpqn)) − (v²/n)(1/(2p) + 1/(2q)) − (v/n)(1/(2p) − 1/(2q)) + ···,

where v/n → 0, v²/n is finite, and

1/(2p) + 1/(2q) = (p + q)/(2pq) = 1/(2pq).

Neglecting higher orders in v/n, such as v²/(p²n²) and v²/(q²n²), we find the large-n limit

W(v) = (1/√(2πpqn)) e^{−v²/2pqn},    (19.58)

which is a Gaussian distribution in the deviations x − pn, with mean value 0 and standard deviation σ = √(npq).
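The theorem can be seen at work numerically (an illustration with arbitrarily chosen n and p, not from the text): near the mean, the exact binomial probabilities and the Gaussian (19.58) already agree to within a few percent for n = 500.

```python
import math

n, p = 500, 0.4
q = 1.0 - p

def binom(x):
    """Exact binomial probability of x successes in n trials."""
    return math.comb(n, x) * p**x * q**(n - x)

def gauss(x):
    """Gaussian approximation (19.58) in the deviation v = x - p*n."""
    v = x - p * n
    return math.exp(-v * v / (2.0 * p * q * n)) / math.sqrt(2.0 * math.pi * p * q * n)

# Compare near the mean, where the approximation is claimed to hold.
center = int(p * n)
max_rel_err = max(abs(binom(x) - gauss(x)) / binom(x)
                  for x in range(center - 20, center + 21))
```

Farther out in the tails the relative error grows, consistent with the remark that the theorem covers only the central part of the bell shape.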
The large mean value pn (and the discarded terms) restricts the validity of the theorem to the central part of the Gaussian bell shape, excluding the tails.

Exercises

19.5.1 What is the probability for a normally distributed random variable to differ by more than 4σ from its mean value? Compare your result with the corresponding one from Chebychev's inequality. Explain the difference in your own words.

19.5.2 Let X₁, X₂, ..., X_n be independent normal random variables with the same mean x̄ and variance σ². Show that (Σ_i X_i/n − x̄) √n/σ is normal with mean zero and variance 1.

19.5.3 An instructor grades a final exam of a large undergraduate class, obtaining the mean value of points m and the variance σ². Assuming a normal distribution for the number M of points, he defines a grade F when M < m − 3σ/2, D when m − 3σ/2 < M < m − σ/2, C when m − σ/2 < M < m + σ/2, B when m + σ/2 < M < m + 3σ/2, and A when M > m + 3σ/2. What are the percentages of As and Fs; Bs and Ds; Cs? Redesign the cutoffs so that there are equal percentages of As and Fs (5%), 25% Bs and Ds, and 40% Cs.

19.5.4 If the random variable X is normal with mean value 29 and standard deviation 3, what are the distributions of 2X − 1 and 3X + 2?

19.5.5 For a normal distribution of mean value m and variance σ², find the distance r such that half the area under the bell shape is between m − r and m + r.

19.6 STATISTICS

In statistics, probability theory is applied to the evaluation of data from random experiments, or to samples, in order to test some hypothesis, because the data have random fluctuations due to lack of complete control over the experimental conditions. Typically one attempts to estimate the mean value and variance of the distributions from which the samples derive, and to generalize properties valid for a sample to the rest of the events at a prescribed confidence level. Any assumption about an unknown probability distribution is called a statistical hypothesis.
The concepts of tests and confidence intervals are among the most important developments of statistics.

Error Propagation

When we measure a quantity x repeatedly, obtaining the values x_j at random, or select a sample for testing, we determine the mean value (see Eq. (19.10)) and the variance,

x̄ = (1/n) Σ_{j=1}^n x_j,    σ² = (1/n) Σ_{j=1}^n (x_j − x̄)²,

the latter as a measure for the error, or spread, from the mean value x̄. We can write x_j = x̄ + e_j, where the error e_j is the deviation from the mean value, and we know that Σ_j e_j = 0. (See the discussion after Eq. (19.15).)

Now suppose we want to determine a known function f(x) from these measurements; that is, we have a set f_j = f(x_j) from the measurements of x. Substituting x_j = x̄ + e_j and forming the mean value, we obtain

f̄ = (1/n) Σ_j f(x_j) = (1/n) Σ_j f(x̄ + e_j)
= f(x̄) + f′(x̄) (1/n) Σ_j e_j + f″(x̄) (1/2n) Σ_j e_j² + ···
= f(x̄) + ½ σ² f″(x̄) + ···,    (19.59)

so the average value f̄ is f(x̄) in lowest order, as expected. But in second order there is a correction given by half the variance with the scale factor f″(x̄). It is interesting to compare this correction of the mean value with the average spread of the individual f_j from the mean value f̄, the variance of f. To lowest order this is given by the average of the sum of squares of the deviations, in which we approximate f_j ≈ f̄ + f′(x̄)e_j, yielding

σ²(f) ≡ (1/n) Σ_j (f_j − f̄)² = (f′(x̄))² (1/n) Σ_j e_j² = (f′(x̄))² σ².    (19.60)

In summary we may formulate, somewhat symbolically,

f(x̄ ± σ) = f(x̄) ± f′(x̄) σ

as the simplest form of error propagation by a function of one measured variable.

For a function f(x_j, y_k) of two measured quantities x_j = x̄ + u_j, y_k = ȳ + v_k, we obtain similarly

f̄ = (1/rs) Σ_{j=1}^r Σ_{k=1}^s f_{jk} = (1/rs) Σ_j Σ_k f(x̄ + u_j, ȳ + v_k)
= f(x̄, ȳ) + f_x (1/r) Σ_j u_j + f_y (1/s) Σ_k v_k + ···,

where Σ_j u_j = 0 = Σ_k v_k, so again f̄ = f(x̄, ȳ) in lowest order.
Here

f_x = ∂f(x̄, ȳ)/∂x,    f_y = ∂f(x̄, ȳ)/∂y    (19.61)

denote partial derivatives. The sum of squares of the deviations from the mean value is given by

Σ_{j=1}^r Σ_{k=1}^s (f_{jk} − f̄)² = Σ_{j,k} (u_j f_x + v_k f_y)² = s f_x² Σ_j u_j² + r f_y² Σ_k v_k²,

because Σ_{j,k} u_j v_k = Σ_j u_j Σ_k v_k = 0. Therefore the variance is

σ²(f) = (1/rs) Σ_{j,k} (f_{jk} − f̄)² = f_x² σ_x² + f_y² σ_y²,    (19.62)

with f_x, f_y from Eq. (19.61), and

σ_x² = (1/r) Σ_j u_j²,    σ_y² = (1/s) Σ_k v_k²

are the variances of the x and y data points. Symbolically, the error propagation for a function of two measured variables may be summarized as

f(x̄ ± σ_x, ȳ ± σ_y) = f(x̄, ȳ) ± sqrt(f_x² σ_x² + f_y² σ_y²).

As an application and generalization of the last result, we now calculate the error of the mean value x̄ = (1/n) Σ_{j=1}^n x_j of a sample of n individual measurements x_j, each with spread σ. In this case the partial derivatives are given by f_x = 1/n = f_y = ··· and σ_x = σ = σ_y = ···. Thus, our last error propagation rule tells us that errors of a sum of variables add quadratically, so the uncertainty of the arithmetic mean is given by

σ̄ = sqrt(n σ²/n²) = σ/√n,    (19.63)

decreasing with the number of measurements n.

As the number n of measurements increases, we expect the arithmetic mean x̄ to converge to some true value x. Let x̄ differ from x by α, and let v_j = x_j − x be the true deviations; then

Σ_j v_j² = Σ_j (x_j − x)² = Σ_j e_j² + nα².

Taking into account the error of the arithmetic mean, we determine the spread of the individual points about the unknown true mean value to be

σ² = (1/n) Σ_j v_j² = (1/n) Σ_j e_j² + α².

According to our earlier discussion leading to Eq. (19.63), α² = σ²/n. As a result,

σ² = (1/n) Σ_j e_j² + σ²/n,

from which the standard deviation of a sample in statistics follows:

σ = sqrt( Σ_j (x_j − x̄)²/(n − 1) ),    (19.64)

with n − 1 being the number of control measurements of the sample. This modified mean error includes the expected error in the arithmetic mean.
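The 1/√n shrinkage of Eq. (19.63) is easy to see in a simulation (an illustration with arbitrary parameters, not from the text): the empirical spread of many sample means matches σ/√n.

```python
import math
import random

random.seed(42)
sigma, n, trials = 2.0, 25, 20000

# Draw many samples of size n from a normal "measurement" distribution
# and record each sample's arithmetic mean.
means = []
for _ in range(trials):
    sample = [random.gauss(0.0, sigma) for _ in range(n)]
    means.append(sum(sample) / n)

m = sum(means) / trials
spread = math.sqrt(sum((x - m) ** 2 for x in means) / trials)
predicted = sigma / math.sqrt(n)      # Eq. (19.63): 2.0 / 5 = 0.4
```

The Monte Carlo fluctuation of `spread` itself is of order predicted/√(2·trials), far below the tolerance used here.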
Because the spread is not well defined when there is no comparison measurement, that is, when n = 1, the variance in statistics is sometimes defined by Eq. (19.64), in which the number n of measurements is replaced by the number n − 1 of control measurements.

Fitting Curves to Data

Suppose we have a sample of measurements y_j (for example, of a particle moving freely, that is, with no force) taken at known times t_j (which are taken to be practically free of errors; that is, the time t is an ordinary independent variable) that we expect to be linearly related as y = at, our hypothesis. We want to fit this line to the data. First we minimize the sum of deviations Σ_j (a t_j − y_j)² to determine the slope parameter a, also called the regression coefficient, using the method of least squares. Differentiating with respect to a, we obtain

2 Σ_j (a t_j − y_j) t_j = 0,

from which

a = Σ_j t_j y_j / Σ_j t_j²    (19.65)

follows. Note that the numerator is built like a sample covariance, the scalar product of the variables t, y of the sample.

FIGURE 19.8 Straight-line fit to data points (t_j, y_j) with t_j known, y_j measured.

As shown in Fig. 19.8, the measured values y_j do not lie on the line as a rule. They have the spread (or root mean square deviation from the fitted line)

σ = sqrt( Σ_j (y_j − a t_j)²/(n − 1) ).

Alternatively, let the y_j values be known (without error) while the t_j are measurements. As suggested by Fig. 19.9, in this case we need to interchange the roles of t and y and fit the line t = by to the data points. We minimize Σ_j (b y_j − t_j)², set the derivative with respect to b equal to zero, and find similarly the slope parameter

b = Σ_j t_j y_j / Σ_j y_j².    (19.66)

FIGURE 19.9 Straight-line fit to data points (t_j, y_j) with y_j known, t_j measured.

In case both t_j and y_j have errors (we take t and y to have the same units), we have to minimize the sum of squares of the deviations of both variables and fit to the parameterization t sin α − y cos α = 0, where t and y occur on an equal footing. As displayed in Fig. 19.10a, this means geometrically that the line has to be drawn so that the sum of the squares of the distances d_j of the points (t_j, y_j) from the line becomes a minimum. (See Fig. 19.10b and Chapter 1.) Here d_j = t_j sin α − y_j cos α, so Σ_j d_j² = minimum must be solved for the angle α. Setting the derivative with respect to the angle equal to zero,

Σ_j (t_j sin α − y_j cos α)(t_j cos α + y_j sin α) = 0,

yields

sin α cos α Σ_j (t_j² − y_j²) − (cos²α − sin²α) Σ_j t_j y_j = 0.

Therefore the angle of the straight-line fit is given by

tan 2α = 2 Σ_j t_j y_j / Σ_j (t_j² − y_j²).    (19.67)

FIGURE 19.10 (a) Straight-line fit to data points (t_j, y_j). (b) Geometry of deviations u_j, v_j, d_j.

This least-squares fitting applies when the measurement errors are unknown. It allows assigning at least some kind of error bar to the measured points. Recall that we did not use errors for the points. Our parameter a (or α) is most likely to reproduce the data under these circumstances. More precisely, the least-squares method is a maximum-likelihood estimate of the fitted parameters when it is reasonable to assume that the errors are independent and normally distributed with the same deviation for all points. This fairly strong assumption can be relaxed in "weighted" least-squares fits, called chi-square fits.⁴ (See also Example 19.6.1.)

The χ² Distribution

This distribution is typically applied to fits of a curve y(t, a, ...) with parameters a, ... to data t_j using the method of least squares involving the weighted sum of squares of deviations; that is,

χ² = Σ_{j=1}^N [ (y_j − y(t_j, a, ...)) / Δy_j ]²

is minimized, where N is the number of points and r is the number of adjusted parameters a, .... This quadratic merit function gives more weight to points with small measurement uncertainties Δy_j.
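For instance (an illustration, not from the text; the data points are those used later in Example 19.6.1, ignoring the measurement errors), the unweighted least-squares slope of Eq. (19.65) and the spread about the fitted line are:

```python
ts = [1.0, 2.0, 3.0]
ys = [0.8, 1.5, 3.0]

# Eq. (19.65): a = sum(t*y) / sum(t^2).
a = sum(t * y for t, y in zip(ts, ys)) / sum(t * t for t in ts)

# Root-mean-square spread of the points about the line, with n - 1 = 2
# control measurements in the denominator.
resid2 = sum((y - a * t) ** 2 for t, y in zip(ts, ys))
spread = (resid2 / (len(ts) - 1)) ** 0.5
```

Here a = 12.8/14 ≈ 0.914, in agreement with the maximum-likelihood value quoted in Example 19.6.1.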
We represent each point by a normally distributed random variable X with zero mean value and variance σ² = 1, the latter in view of the weights in the χ² function. In a first step we determine the probability density for the random variable Y = X² of a single point, which takes only positive values. Assuming a zero mean value is no loss of generality because, if ⟨X⟩ = m ≠ 0, we would consider the shifted variable X − m, whose mean value is zero. We show that if X has the Gauss normal density

f(x) = (1/(σ√(2π))) e^{−x²/2σ²},    −∞ < x < ∞,

then the probability of the random variable Y is zero if y ≤ 0, and

P(Y < y) = P(X² < y) = P(−√y < X < √y)    if y > 0.

From the continuous normal distribution P(y) = ∫_{−∞}^y f(x) dx, we obtain the probability density g(y) by differentiation:

g(y) = d/dy [P(√y) − P(−√y)] = (1/(2√y)) [f(√y) + f(−√y)]
= (1/(σ√(2πy))) e^{−y/2σ²},    y > 0.    (19.68)

This density, ∼ e^{−y/2σ²}/√y, corresponds to the integrand of the Euler integral of the gamma function. Such a probability distribution,

g(y) = y^{p−1} e^{−y/2σ²} / (Γ(p) (2σ²)^p),

is called a gamma distribution with parameters p and σ. Its characteristic function for our case, p = 1/2, is proportional to the Fourier transform

⟨e^{itY}⟩ = (1/(σ√(2π))) ∫₀^∞ e^{−y(1/(2σ²) − it)} dy/√y = (2/(σ√(2π))) ∫₀^∞ e^{−x²(1/(2σ²) − it)} dx = (1 − 2itσ²)^{−1/2},

where y = x².

⁴ For more details, see Chapter 14 of Press et al. in the Additional Readings of Chapter 9.

Since the χ² sample function contains a sum of squares, we need the following theorem.

ADDITION THEOREM for the gamma distributions: If the independent random variables Y₁ and Y₂ have a gamma distribution with p = 1/2 and the same σ, then Y₁ + Y₂ has a gamma distribution with p = 1.

Since Y₁ and Y₂ are independent, the product of their densities (Eq. (19.36)) generates the characteristic function

⟨e^{it(Y₁+Y₂)}⟩ = ⟨e^{itY₁} e^{itY₂}⟩ = ⟨e^{itY₁}⟩⟨e^{itY₂}⟩ = (1 − 2itσ²)^{−1}.    (19.69)

Now we come to the second step.
We assess the quality of the fit by the random variable Y = Σ_{j=1}^n X_j², where n = N − r is the number of degrees of freedom for N data points and r fitted parameters. The independent random variables X_j are taken to be normally distributed with the (sample) variance σ². (In our case r = 1 and σ = 1.) The χ² analysis does not really test the assumptions of normality and independence, but if these are not approximately valid, there will be many outlying points in the fit. The addition theorem gives the probability density (Fig. 19.11)

g_n(y) = y^{n/2−1} e^{−y/2σ²} / (2^{n/2} σⁿ Γ(n/2)),    y > 0,

and g_n(y) = 0 if y < 0, which is the χ² distribution corresponding to n degrees of freedom. Its characteristic function is

⟨e^{itY}⟩ = (1 − 2itσ²)^{−n/2}.

Differentiating and setting t = 0, we obtain its mean value and variance:

⟨Y⟩ = nσ²,    σ²(Y) = 2nσ⁴.    (19.70)

FIGURE 19.11 χ² probability density g_n(y).

Table 19.2 χ² Distribution

n    v = 0.8   v = 0.7   v = 0.5   v = 0.3   v = 0.2   v = 0.1
1     0.064     0.148     0.455     1.074     1.642     2.706
2     0.446     0.713     1.386     2.408     3.219     4.605
3     1.005     1.424     2.366     3.665     4.642     6.251
4     1.649     2.195     3.357     4.878     5.989     7.779
5     2.343     3.000     4.351     6.064     7.289     9.236
6     3.070     3.828     5.348     7.231     8.558    10.645

Entries are χ_v for the probabilities v = P(χ² ≥ χ_v²) = (1/(2^{n/2} Γ(n/2))) ∫_{χ_v²}^∞ e^{−y/2} y^{(n/2)−1} dy for σ = 1.

Tables give values for the χ² probability for n degrees of freedom,

P(χ² ≥ y₀) = (1/(2^{n/2} σⁿ Γ(n/2))) ∫_{y₀}^∞ y^{n/2−1} e^{−y/2σ²} dy,

for σ = 1 and y₀ > 0. To use Table 19.2 for σ ≠ 1, rescale y₀ = v₀σ² so that P(χ² ≥ v₀σ²) corresponds to P(χ² ≥ v₀) of Table 19.2. The following example will illustrate the whole process.

Example 19.6.1

Let us apply the χ² function to the fit in Fig. 19.8. The measured points (t_j, y_j ± Δy_j), with errors Δy_j, are (1, 0.8 ± 0.1), (2, 1.5 ± 0.05), (3, 3 ± 0.2). For comparison, the maximum-likelihood fit, Eq. (19.65), gives

a = (1·0.8 + 2·1.5 + 3·3)/(1 + 4 + 9) = 12.8/14 = 0.914.

Minimizing instead

χ² = Σ_j [(y_j − a t_j)/Δy_j]²

gives

0 = ∂χ²/∂a = −2 Σ_j t_j (y_j − a t_j)/(Δy_j)²,

or

a = [Σ_j t_j y_j/(Δy_j)²] / [Σ_j t_j²/(Δy_j)²].

In our case

a = (1·0.8/0.1² + 2·1.5/0.05² + 3·3/0.2²) / (1²/0.1² + 2²/0.05² + 3²/0.2²) = 1505/1925 = 0.782,

dominated by the middle point, which has the smallest error, Δy₂ = 0.05. The error propagation formula (Eq. (19.62)) gives us the variance σ_a² of the estimate of a,

σ_a² = Σ_j (∂a/∂y_j)² (Δy_j)² = Σ_j (Δy_j)² [t_j/(Δy_j)²]² / [Σ_k t_k²/(Δy_k)²]² = 1 / [Σ_j t_j²/(Δy_j)²],

using

∂a/∂y_j = [t_j/(Δy_j)²] / [Σ_k t_k²/(Δy_k)²].

For our case, σ_a = 1/√1925 = 0.023; that is, our slope parameter is a = 0.782 ± 0.023.

To estimate the quality of this fit of a, we compute the χ² probability that the two independent (control) points miss the fit by two standard deviations; that is, on average each point misses by one standard deviation. We apply the χ² distribution to the fit involving N = 3 data points and r = 1 parameter, that is, for n = 3 − 1 = 2 degrees of freedom. From Eq. (19.70) the χ² distribution has mean value 2 and variance 4. A rule of thumb is that χ² ≈ n for a reasonably good fit. Then P(χ² ≥ 2) ∼ 0.496 is read off Table 19.2, where we interpolate between P(χ² ≥ 1.386²) = 0.50 and P(χ² ≥ 2.408²) = 0.30 as follows:

P(χ² ≥ 2) = P(χ² ≥ 1.386²) − [(√2 − 1.386)/(2.408 − 1.386)] [P(χ² ≥ 1.386²) − P(χ² ≥ 2.408²)]
= 0.5 − 0.02 · 0.2 = 0.496.

Thus the χ² probability that, on average, each point misses by one standard deviation is nearly 50% and fairly large.  ■

Our next goal is to compute a confidence interval for the slope parameter of our fit. A confidence interval for an a priori unknown parameter of some distribution (for example, a, determined by our fit) is an interval that contains a not with certainty but with a high probability p, the confidence level, which we can choose. Such an interval is computed for a given sample. Such an analysis involves the Student t distribution.
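The arithmetic of Example 19.6.1 is compact enough to script (reproduced here as an illustration, not part of the text):

```python
import math

ts = [1.0, 2.0, 3.0]
ys = [0.8, 1.5, 3.0]
dys = [0.1, 0.05, 0.2]                 # measurement errors Delta y_j

# Weighted (chi-square) slope and its error, as in Example 19.6.1.
num = sum(t * y / dy**2 for t, y, dy in zip(ts, ys, dys))   # 1505
den = sum(t * t / dy**2 for t, dy in zip(ts, dys))          # 1925
a = num / den                                               # 0.782
sigma_a = 1.0 / math.sqrt(den)                              # 0.023
```

The middle point, with the smallest error, carries most of the weight, pulling the slope below the unweighted value 0.914.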
The Student t Distribution

Because we always compute the arithmetic mean of measured points, we now consider the sample function

X̄ = (1/n) Σ_{j=1}^n X_j,

where the random variables X_j are assumed independent, with a normal distribution of the same mean value m and variance σ². The addition theorem for the Gauss distribution tells us that X₁ + ··· + X_n has mean value nm and variance nσ². Therefore (X₁ + ··· + X_n)/n is normal with mean value m and variance nσ²/n² = σ²/n. The probability density of the variable X̄ − m is the Gauss distribution

f̄(x̄ − m) = (√n/(σ√(2π))) exp(−n(x̄ − m)²/(2σ²)).    (19.71)

The key problem solved by the Student t distribution is to provide estimates for the mean value m when σ is not known, in terms of a sample function whose distribution is independent of σ. To this end we define the rescaled sample function (traditionally called) t:

t = ((X̄ − m)/S) √(n − 1),    S² = (1/n) Σ_{j=1}^n (X_j − X̄)².    (19.72)

It can be shown that t and S are independent random variables. Following the arguments leading to the χ² distribution, the density of the denominator variable S is given by the gamma distribution

d(s) = n^{(n−1)/2} s^{n−2} e^{−ns²/2σ²} / (2^{(n−3)/2} Γ((n−1)/2) σ^{n−1}).    (19.73)

The probability for the ratio Z = X/Y of two independent random variables X, Y with densities f̄ and d as given by Eqs. (19.71) and (19.73) is (Eq. (19.40))

R(z) = ∫_{−∞}^z ∫_{−∞}^∞ f(yz′) d(y) |y| dy dz′,    (19.74)

so the variable V = (X̄ − m)/S has the density

r(v) = ∫₀^∞ (√n/(σ√(2π))) exp(−n v² s²/(2σ²)) · [n^{(n−1)/2} s^{n−2} e^{−ns²/2σ²} / (2^{(n−3)/2} Γ((n−1)/2) σ^{n−1})] · s ds
= (n^{n/2} / (σⁿ √π 2^{(n−2)/2} Γ((n−1)/2))) ∫₀^∞ e^{−ns²(v²+1)/2σ²} s^{n−1} ds.

Here we substitute z = s² and obtain

r(v) = (n^{n/2} / (σⁿ √π 2^{n/2} Γ((n−1)/2))) ∫₀^∞ e^{−nz(v²+1)/2σ²} z^{(n−2)/2} dz.

Now we substitute Γ(1/2) = √π, define the parameter

a = n(v² + 1)/(2σ²),

and transform the integral into Γ(n/2)/a^{n/2} to find

r(v) = Γ(n/2) / (√π Γ((n−1)/2) (v² + 1)^{n/2}),    −∞ < v < ∞.
FIGURE 19.12 Student $t$ probability density $g_n(y)$ for $n = 3$.

Table 19.3 Student t Distribution

  p:     0.8    0.9    0.95   0.975   0.99   0.999
  n = 1  1.38   3.08   6.31   12.7    31.8   318.3
  n = 2  1.06   1.89   2.92    4.30    6.96   22.3
  n = 3  0.98   1.64   2.35    3.18    4.54   10.2
  n = 4  0.94   1.53   2.13    2.78    3.75    7.17
  n = 5  0.92   1.48   2.02    2.57    3.36    5.89

Entries are the values $C$ in $P(C) = K_n \int_{-\infty}^{C} \bigl(1 + \frac{t^2}{n}\bigr)^{-(n+1)/2}\, dt = p$; $n$ is the number of degrees of freedom.

Finally we rescale this expression to the variable $t$ in Eq. (19.72) with the density (Fig. 19.12)
$$g(t) = \frac{\Gamma(n/2)}{\sqrt{\pi(n-1)}\, \Gamma(\frac{n-1}{2})\, \bigl(1 + \frac{t^2}{n-1}\bigr)^{n/2}}, \qquad -\infty < t < \infty, \tag{19.75}$$
for the Student $t$ distribution, which manifestly does not depend on $m$ or $\sigma$. The probability for $t_1 < t < t_2$ is given by the integral
$$P(t_1, t_2) = \frac{\Gamma(n/2)}{\sqrt{\pi(n-1)}\, \Gamma(\frac{n-1}{2})} \int_{t_1}^{t_2} \frac{dt}{\bigl(1 + \frac{t^2}{n-1}\bigr)^{n/2}}, \tag{19.76}$$
and $P(z) \equiv P(-\infty, z)$ is tabulated. (See Table 19.3, for example.) Also, $P(-\infty, \infty) = 1$ and $P(-z) = 1 - P(z)$, because the integrand in Eq. (19.76) is even in $t$, so
$$\int_{-\infty}^{-z} \frac{dt}{\bigl(1 + \frac{t^2}{n-1}\bigr)^{n/2}} = \int_{z}^{\infty} \frac{dt}{\bigl(1 + \frac{t^2}{n-1}\bigr)^{n/2}}$$
and
$$\int_{z}^{\infty} \frac{dt}{\bigl(1 + \frac{t^2}{n-1}\bigr)^{n/2}} = \int_{-\infty}^{\infty} \frac{dt}{\bigl(1 + \frac{t^2}{n-1}\bigr)^{n/2}} - \int_{-\infty}^{z} \frac{dt}{\bigl(1 + \frac{t^2}{n-1}\bigr)^{n/2}}.$$
Multiplying this by the factor preceding the integral in Eq. (19.76) yields $P(-z) = 1 - P(z)$.

In the following example we show how to apply the Student $t$ distribution to our fit of Example 19.6.1.

Example 19.6.2 CONFIDENCE INTERVAL

Here we want to determine a confidence interval for the slope $a$ in the linear $y = at$ fit of Fig. 19.8. We assume
• first, that the sample points $(t_j, y_j)$ are random and independent, and
• second, that for each fixed value $t$, the random variable $Y$ is normal with mean $\mu(t) = at$ and variance $\sigma^2$ independent of $t$.
These values $y_j$ are measurements of the random variable $Y$, but we will regard them as single measurements of the independent random variables $Y_j$ with the same normal distribution as $Y$ (whose variance we do not know). We choose a confidence level, $p = 95\%$, say.
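Table 19.3 can be spot-checked by integrating the Student $t$ density numerically. In the sketch below (plain Python; the integration scheme and the $-300$ lower cutoff are my own choices) the density is written in terms of the number of degrees of freedom $\nu$, i.e., with $n = \nu + 1$ in the notation of Eq. (19.75), which matches the $n$ used in the table:

```python
import math

def g(t, nu):
    """Student t density with nu degrees of freedom
    (Eq. (19.75) with sample size n = nu + 1)."""
    c = math.gamma((nu + 1) / 2) / (math.sqrt(math.pi * nu) * math.gamma(nu / 2))
    return c * (1 + t * t / nu) ** (-(nu + 1) / 2)

def P(z, nu, lo=-300.0, steps=300_000):
    """P(z): integral of g from -infinity (cut off at lo) to z, trapezoidal rule."""
    h = (z - lo) / steps
    total = 0.5 * (g(lo, nu) + g(z, nu))
    for k in range(1, steps):
        total += g(lo + k * h, nu)
    return total * h

# Row n = 2 of Table 19.3: C = 2.92 at p = 0.95 and C = 4.30 at p = 0.975
print(round(P(2.92, 2), 3))   # ≈ 0.95
print(round(P(4.30, 2), 3))   # ≈ 0.975
```

For $\nu = 2$ there is even a closed form, $P(z) = \tfrac{1}{2} + z/(2\sqrt{2+z^2})$, against which the numerical values agree to the tabulated precision.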
Then the Student probability is $P(-C, C) = P(C) - P(-C) = p = -1 + 2P(C)$, hence
$$P(C) = \frac{1}{2}(1 + p),$$
using $P(-C) = 1 - P(C)$, and
$$P(C) = \frac{1}{2}(1 + p) = 0.975 = K_n \int_{-\infty}^{C} \left(1 + \frac{t^2}{n}\right)^{-(n+1)/2} dt,$$
where $K_n$ is the factor preceding the integral in Eq. (19.76). Now we determine a solution $C = 4.3$ from Table 19.3 of Student's $t$ distribution, with $n = N - r = 3 - 1 = 2$ the number of degrees of freedom, noting that $(1 + p)/2$ corresponds to $p$ in Table 19.3. Then we compute $A = C\sigma_a/\sqrt{N}$ for sample size $N = 3$. The confidence interval is given by $a - A \leq a \leq a + A$ at the $p = 95\%$ confidence level. From the $\chi^2$ analysis of Example 19.6.1 we use the slope $a = 0.782$ and variance $\sigma_a^2 = 0.023^2$, so $A = \frac{4.3 \cdot 0.023}{\sqrt{3}} = 0.057$, and the confidence interval is determined by $a - A = 0.782 - 0.057 = 0.725$, $a + A = 0.839$, or $0.725 < a < 0.839$ at the 95% confidence level. Compared to $\sigma_a$, the uncertainty of $a$ has increased due to the high confidence level. A look at Table 19.3 shows that a decrease in the confidence level $p$ reduces the uncertainty interval, and increasing the number of degrees of freedom $n$ would also lower the range of uncertainty.

Exercises

19.6.1 Let $\Delta A$ be the error of a measurement of $A$, etc. Use error propagation to show that
$$\left(\frac{\sigma(C)}{C}\right)^2 = \left(\frac{\sigma(A)}{A}\right)^2 + \left(\frac{\sigma(B)}{B}\right)^2$$
holds for the product $C = AB$ and the ratio $C = A/B$.

19.6.2 Find the mean value and standard deviation of the sample of measurements $x_1 = 6.0$, $x_2 = 6.5$, $x_3 = 5.9$, $x_5 = 6.2$. If the point $x_6 = 6.1$ is added to the sample, how does the change affect the mean value and standard deviation?

19.6.3 (a) Carry out a $\chi^2$ analysis of the fit of case b in Fig. 19.9, assuming the same errors for the $t_i$, $\Delta t_i = \Delta y_i$, as for the $\Delta y_i$ used in the $\chi^2$ analysis of the fit in Fig. 19.8. (b) Determine the confidence interval at the 95% confidence level.

19.6.4 If $x_1, x_2, \ldots$
, $x_n$ are a sample of measurements with mean value given by the arithmetic mean $\bar{x}$, and the corresponding random variables $X_j$ that take the values $x_j$ with the same probability are independent and have mean value $\mu$ and variance $\sigma^2$, show that $\langle \bar{x} \rangle = \mu$ and $\sigma^2(\bar{x}) = \sigma^2/n$. If $\bar{\sigma}^2 = \frac{1}{n}\sum_j (x_j - \bar{x})^2$ is the sample variance, show that $\langle \bar{\sigma}^2 \rangle = \frac{n-1}{n}\sigma^2$.

Additional Readings

Kreyszig, E., Introductory Mathematical Statistics: Principles and Methods. New York: Wiley (1970).
Suhir, E., Applied Probability for Engineers and Scientists. New York: McGraw-Hill (1997).
Papoulis, A., Probability, Random Variables, and Stochastic Processes, 3rd ed. New York: McGraw-Hill (1991).
Ross, S. M., A First Course in Probability, 5th ed., Vol. A. New York: Prentice-Hall (1997).
Ross, S. M., Introduction to Probability Models, 7th ed. New York: Academic Press (2000).
Ross, S. M., Introduction to Probability and Statistics for Engineers and Scientists, 2nd ed. New York: Academic Press (1999).
Chung, K. L., A Course in Probability Theory, Revised, 3rd ed. New York: Academic Press (2000).
Devore, J. L., Probability and Statistics for Engineering and the Sciences, 5th ed. New York: Duxbury Press (1999).
Montgomery, D. C., and G. C. Runger, Applied Statistics and Probability for Engineers, 2nd ed. New York: Wiley (1998).
DeGroot, M. H., Probability and Statistics, 2nd ed. New York: Addison-Wesley (1986).
Bevington, P. R., and D. K. Robinson, Data Reduction and Error Analysis for the Physical Sciences, 3rd ed. New York: McGraw-Hill (2003).

General References

Additional, more specialized references are listed at the end of each chapter.

1. E. T. Whittaker and G. N. Watson, A Course of Modern Analysis, 4th ed. Cambridge, UK: Cambridge University Press (1962), paperback. Although this is the oldest of the references (original edition 1902), it still is the classic reference. It leans strongly toward pure mathematics, as of 1902, with full mathematical rigor.
2. P. M. Morse and H.
Feshbach, Methods of Theoretical Physics, 2 vols. New York: McGraw-Hill (1953). This work presents the mathematics of much of theoretical physics in detail but at a rather advanced level. It is recommended as the outstanding source of information for supplementary reading and advanced study.
3. H. Jeffreys and B. S. Jeffreys, Methods of Mathematical Physics, 3rd ed. Cambridge, UK: Cambridge University Press (1972). This is a scholarly treatment of a wide range of mathematical analysis, in which considerable attention is paid to mathematical rigor. Applications are to classical physics and to geophysics.
4. R. Courant and D. Hilbert, Methods of Mathematical Physics, Vol. 1 (1st English ed.). New York: Wiley (Interscience) (1953). As a reference book for mathematical physics, it is particularly valuable for existence theorems and discussions of areas such as eigenvalue problems, integral equations, and calculus of variations.
5. F. W. Byron, Jr., and R. W. Fuller, Mathematics of Classical and Quantum Physics. Reading, MA: Addison-Wesley; reprinted, Dover (1992). This is an advanced text that presupposes a moderate knowledge of mathematical physics.
6. C. M. Bender and S. A. Orszag, Advanced Mathematical Methods for Scientists and Engineers. New York: McGraw-Hill (1978).
7. Handbook of Mathematical Functions with Formulas, Graphs, and Mathematical Tables, Applied Mathematics Series-55 (AMS-55). Washington, DC: National Bureau of Standards, U.S. Department of Commerce; reprinted, Dover (1974). As a tremendous compilation of just what the title says, this is an extremely useful reference.
hypergeometric representation, 864 error propagation, 1138–40 essential (irregular) singular point, 563 Euler angles, 202–3 Euler equation, 1040 alternate forms of, 1042 applications of, 1044–52 soap film, 1045–46 soap film — minimum area, 1046–49 straight line, 1044–45 Euler identity, 224 product formula, 382–84 Euler integrals, 502 Euler–Maclaurin integration formula, 380–82, 517 Euler–Mascheroni constant, 330 Euler product for Riemann Zeta function, 382–84 event horizon, 1041 exact differential equations, 545–47 exact differentials, see thermodynamics expansion, see also Laurent expansion; Taylor’s expansion pole, of meromorphic functions, 461 product, of entire function, 462–63 of series, 372–73 expansion coefficients, 658 expansion of functions, Legendre series, 757–58 expectation value, 630, 955–56, 1117–18 exponential forms, 471–82 Bernoulli numbers, 473–74 factorial function, 472–73 1161 exponential function, of Maclaurin theorem, 354–55 exponential integral, 527–30 exponential of diagonal matrix, 225–26 exponential transform, 938 exterior derivative, 307–9 extreme or stationary value, 36, 881, 1038–40 F Factorial function Ŵ(1 + s) asymptotic form of, 494–95 complex argument, 499–506 contour integrals, 505 digamma function, 510 double factorial notation, 505 Gamma functional relation, 503 infinite product, 501 integral (Euler) representation, 500 Legendre duplication formula, 503 Maclaurin expansion, 512 polygamma functions, 511 steepest descent asymptotic formula, 495 Stirling’s (formula) series, 516–18 factorial notation, 503–5 faithful group, 243 Faraday’s law, 66–68 fast Fourier transform, 917 Feigenbaum number, 1083–84 Fermage equation, 950 Fermat’s principle, 157, 1041, 1050 Fermi–Dirac statistics, 1064, 1115 field equations, 142 finite wave train, 940–41 first-order differential equations, 543–53 exact differential equations, 545–47 linear first-order ODEs, 547–50 separable variables, 544–45 fixed and movable singularities, special solutions, 1090 
Floquet’s theorem, 877 force as gradient of potentials, 36 forced classical oscillators, 457–60 force field, central, see central force field Fourier–Bessel series, 695 Fourier expansions of Mathieu functions, 919–29 integral equations and Fourier series for Mathieu functions, 919–23 leading coefficients for ce0 , 926–29 leading coefficients of se1 , 923–26 Fourier integral theorem, 937 development of, 936–38 exponential form, 937 Fourier–Mellin integral, 995 Fourier representation, of Dirac delta function, 90 Fourier series, 881–930 advantages, uses of, 888–92 change of interval, 890–91 completeness, see general properties of Fourier series convergence, 893, 903–10 differentiation, see general properties of Fourier series discontinuous functions, 888 exercises, 891–92 integration, see general properties of Fourier series periodic functions, 888–90 applications of, 892–903 exercises, 898–903 full-wave rectifier, 893–94 infinite series, Riemann Zeta function, 894–98 square wave — high frequencies, 892–93 discrete Fourier transform, 914–19 discrete Fourier transform, 915–16 discrete Fourier transform — aliasing, 916–17 exercises, 918–19 fast Fourier transform, 917 limitations, 916 orthogonality over discrete points, 914–15 Fourier expansions of Mathieu functions, 919–29 exercises, 929 integral equations and Fourier series for Mathieu functions, 919–23 leading coefficients for ce0 , 926–29 leading coefficients of se1 , 923–26 general properties, 881–88 behavior of discontinuities, 886 completeness, 883–84 complex variables — Abel’s theorem, 882 exercises, 886–88 sawtooth wave, 885–86 square wave, 892 Sturm–Liouville theory, 885 summation of a Fourier series, 882–83 Gibbs phenomenon, 910–14 calculation of overshoot, 912–13 exercises, 913–14 square wave, 911–12 summation of series, 910 orthogonality, 636–37 properties of, 903–10 convergence, 903 differentiation, 904–5 exercises, 905–10 integration, 904 summation of, 882–83 Fourier transform, of Gaussian, 932
Fourier transform of derivatives, 946–51 heat flow PDE, 948–49 inversion of PDE, 949–50 wave equation, 947–48 Fourier transforms, 486–87, 931–32 aliasing, 916 convolution (Faltungs) theorem, 951 delta function derivation, 937 transfer functions, 961 Fourier transforms — inversion theorem, 938–46 cosine transform, 939 exponential transform, 938 fast Fourier transform, 917 finite wave train, 940–41 Fourier integral, 931, 936–39 momentum space representation, 955 sine transform, 939–40 uncertainty principle, 941 Fourier transform solution, 1013 fractals, 1086–88 fractional order, 427 Fraunhofer diffraction, Bessel function, 680–82 Fredholm equation, 1005–6 Frobenius’ method, series solutions, 565–78 Fuchs’ theorem, 573 full-wave rectifier, 893–94 functional equation, Riemann Zeta function, 896 Gamma function, 500 functions, 817–80, see also analytic functions Chebyshev polynomials, 848–59 exercises, 855–59 generating functions, 848 orthogonality, 854–55 recurrence relations — derivatives, 852–53 trigonometric form, 853–54 type I, 849–52 type II, 849 of complex variable, 408–13 confluent hypergeometric functions, 863–69 Bessel and modified Bessel functions, 865 exercises, 867–69 Hermite functions, 866 integral representations, 865 miscellaneous cases, 866 entire, 415, 451, 462–63 exponential, of Maclaurin theorem, 354–55 factorial, 472–73 Hermite functions, 817–36 alternate representations, 819 applications of the product formulas, 831–32 direct expansion of products of Hermite polynomials, 828–30 exercises, 832–36 generating functions — Hermite polynomials, 817–18 orthogonality, 821–22 quantum mechanical simple harmonic oscillator, 822–27 recurrence relations, 818–19 Rodrigues’ representation, 820–21 threefold Hermite formula, 827–28 hypergeometric functions, 859–63 contiguous function relations, 861 exercises, 862–63 hypergeometric representations, 861–62 Laguerre functions, 837–48 associated Laguerre polynomials, 841–43 differential equation — Laguerre
polynomials, 837–41 exercises, 845–48 hydrogen atom, 843–45 Mathieu functions, 869–79 elliptical drum, 872–73 exercises, 879 general properties of Mathieu functions, 874 quantum pendulum, 873 radial Mathieu functions, 874–79 separation of variables in elliptical coordinates, 870–71 of matrices, 224–26 meromorphic, 451, 461–62, 478 multivalent, and branch points, 447–50 rotation of, 251 series of, 348–52 Abel’s test, 351 exercises, 352 overview, 348 uniform and nonuniform convergence, 348–49 Weierstrass M test, 349–50 G gamma distribution, 1143 gamma function, see also factorial function, 499–533 beta function, 520–26 definite integrals, alternate forms, 521–22 derivation of Legendre duplication formula, 522–23 exercises, 523–26 incomplete beta function, 523 verification of πα/sin πα relation, 522 definitions, simple properties, 499–510 definite integral (Euler), 500–501 double factorial notation, 505 exercises, 506–10 factorial notation, 503–5 infinite limit (Euler), 499–500 infinite product (Weierstrass), 501–3 integral representation, 505–6 digamma and polygamma functions, 510–16 Catalan’s constant, 513 digamma functions, 510–11 exercises, 513–16 Maclaurin expansion, computation, 512 polygamma function, 511–12 series summation, 512 incomplete gamma functions and related functions, 527–33 error integrals, 530 exercises, 530–33 exponential integral, 527–30 of infinite product, 398–99 Stirling’s series, 516–20 derivation from Euler–Maclaurin integration formula, 517 exercises, 518–20 Stirling’s series, 518 gauge covariant derivative, 76 theory, 259 transformation, 76 gauge theory, 76, 241 Gauss elimination, 170–72 Gauss’ differential equation, 312–13, 318, 614–15, 617–18 hypergeometric differential equation, 576, 859–62, 873 Gauss error integral, 500 asymptotic expansion, 530 Gauss’ fundamental theorem of algebra, 428, 463 Gauss–Jordan matrix inversion, 185–87 Gauss’ law, 52, 79–83, 594 Gauss’ normal distribution, 1134–38 Gauss’ notation, 503 Gauss–Seidel
iteration technique, 172 Gauss–Seidel method, 226 Gauss’ test, 332–33, 357 Legendre series, 333 Gauss’ theorem, 60–64 alternate forms of, 62–64 exercises, 62–64 overview, 62 overview, 60–61 pullbacks, 312–13 Gegenbauer polynomials, see ultraspherical polynomials general parabolic solution, 540 general properties, 881–88 behavior of discontinuities, 886 completeness, 883–84 complex variables — Abel’s theorem, 882 of Mathieu functions, 874 sawtooth wave, 885–86 Sturm–Liouville theory, 885 summation of a Fourier series, 882–83 general tensors, 151–60, see also Christoffel symbols covariant derivative, 156 exercises, 158–60 geodesics and parallel transport, 157–60 metric tensor, 151–54 overview, 151 generating function, 741–49, 848, 1014–15 associated Laguerre polynomials, 624–25, 841–45 associated Legendre functions, 771–82, 788 associated Legendre polynomials, 773 Bernoulli numbers, 376–89, 473–74, 517 Bernoulli polynomials, 379–80 Bessel functions, modified, 711, 713–19, 723–24, 865 Chebyshev polynomials, 848–59 extension to ultraspherical polynomials, 747 Hermite polynomials, 817–30 for integral order, 675–77 Laguerre polynomials, 647, 837–41, 843–45 Legendre polynomials, 742–44 linear electric multipoles, 744–45 physical basis — electrostatics, 741–42 ultraspherical polynomials, 747 vector expansion, 745–47 generators of continuous groups, 246–61, see also rotations; SU(2) exercises, 260–61 overview, 246–50 geodesic equation, 157 geodesics, 157–60 geometrical interpretation of gradient, 35–38 integration by parts of, 36–37 of potential, force as, 36 geometric series, 322–23 Gibbs phenomenon, 886, 910–14 calculation of overshoot, 912–13 square wave, 911–12 summation of series, 910 global behavior, 1093–1101 Goursat proof of Cauchy’s integral theorem, 421–23 gradient in Cartesian coordinates, 37, 51, 113, 134 in circular cylindrical coordinates, 118 in curvilinear coordinates, 110 in spherical polar coordinates, 126 gradient, curvilinear coordinates, 110
gradient, ∇, 32–38 as differential vector operator, 110 of dot product, 46 exercises, 37–38 geometrical interpretation, 35–38 integration by parts of, 36–37 of potential, force as, 36 integral definitions of divergence, curl, and, 58–59 overview, 32–34 of potential, 34 Gram–Schmidt orthogonalization, 642–49 Gram–Schmidt procedure, 173–74 overview, 173–74 vectors by orthogonalization, 174 gravitational potentials, 72 great circle, 1040 Green’s function, 662–74 construction of, one dimension, 598, 599, 663–65, 670 construction of, two dimension, 597–98 construction of, three dimension, 597–98 and Dirac delta function, 669–70 eigenfunction, eigenvalue equation, 667–68 eigenfunction expansion, 662–82 electrostatic analog, 592, 665 form of, 596–98 Helmholtz, 598 Helmholtz equation, 662 integral — differential equation, 665–67 Laplace operator, 598 circular cylindrical expansion, 601–6 spherical polar expansion, 598–600 linear oscillator, 668–69 modified Helmholtz, 598 nonhomogeneous equation, 592–610 one-dimensional, 663–65 Poisson’s equation, 669–70 symmetry of, 595–96 Green’s theorem, 61–62, 593 Gregory series, 362, 368 ground state eigenfunction, 1073 group theory, 241–320, see also angular momentum; differential forms; generators of continuous groups; homogeneous Lorentz group character, 184 definition of, 242–43 discrete, 291–93 classes and character, 293 crystallographic point and space, 299–300 dihedral, Dn , 299 exercises, 300–304 subgroups and cosets, 293–94 threefold symmetry axis, 296–99 twofold symmetry axis, 294–96 faithfulness, 243 homomorphic, 243 homomorphism and isomorphism, 243–45 homomorphism SU(2)–SO(3), 252–6 isomorphic, 243 Lorentz covariance of Maxwell’s equations, 283–91 electromagnetic invariants, 288 exercises, 289–91 overview, 283–86 transformation of E and B, 287–88 overview, 241–42 permutation groups, 301–3 quantum chromodynamics (QCD), 258 reducible and irreducible representations, 245–46 special unitary group SU(2), 251 vierergruppe,
292–93, 296 Gutzwiller’s trace formula, 898 H Hadamard product, 208 Hamilton–Jacobi equation, 539 Hamilton’s principle, 1053–54 Hankel functions and Lagrange equations of motion, 493–94, 707–13 asymptotic forms, 493–94, 723 contour integral representation of the Hankel functions, 709–11 cylindrical traveling waves, 708 definition, by Neumann function, 707–8 definitions, 707–8 series expansion, 707, 715 spherical, 728–29 Wronskian formulas, 708 Hankel transforms, 933 harmonic functions, 539 harmonic oscillator, 822–27, 958–59 harmonics, see also spherical harmonics, vector spherical harmonics harmonic series, 323–24 Hausdorff, 225, 249, 1086 heat flow PDE, 948–49 Heaviside expansion theorem, 442, 1003 Heaviside shifting theorem, 981 Heaviside unit step function, 93, 996 Heisenberg uncertainty principle, 732, 941 Helmholtz diffusion equation, 536, 537 Helmholtz equation, 556, 557, 613 Bessel function, 683–84, 725 Green’s function, 662 spherical coordinates, 725 Helmholtz operators, 598 Helmholtz’s theorem, 95–101 exercises, 100–101 overview, 95–96 Hermite functions, 817–36, 866 alternate representations, 819 applications of the product formulas, 831–32 confluent hypergeometric representation, 866 direct expansion of products of Hermite polynomials, 828–30 generating functions — Hermite polynomials, 817–18 Gram–Schmidt construction, 642 orthogonality, 821–22 quantum mechanical simple harmonic oscillator, 822–27 recurrence relations, 818–19 Rodrigues representation, 820–21 threefold Hermite formula, 827–28 Hermite polynomials direct expansion of products of, 828–30 generating functions, 817–18 orthogonality integral, 821 recurrence relations, 818–19 Rodrigues representation, 820–21 Hermitian matrices, 184, 209 and matrix diagonalization, 219–21 unitary and, 208–15 exercises, 212–15 overview, 208–9 Pauli and Dirac, 209–12 Hermitian matrices, anti-, 221–23 eigenvalues degenerate, 223 and eigenvectors of real symmetric matrices, 221–22 overview, 221 Hermitian
operators, 629–30, 634–42 completeness of eigenfunctions, 649–58 degeneracy, 638 expansion in orthogonal eigenfunctions — square wave, 637 Fourier series — orthogonality, 636–37 integration interval, 628–29 orthogonal eigenfunctions, 636 properties of, 634–38 in quantum mechanics, 630 real eigenvalues, 634–35 Hilbert matrix, determinant, 235 Hilbert–Schmidt theory, 1029–36 nonhomogeneous integral equation, 1032–34 orthogonal eigenfunctions, 1030–32 symmetrization of kernels, 1029–30 Hilbert space, 7, 535, 629, 638, 658, 885 Hilbert transforms, 483 Hodge operator, 314–19 cross product of vectors, 315 Laplacian in orthogonal coordinates, 316 Maxwell’s equations, 316–20 overview, 314–15 holomorphic functions (analytic or regular functions), 415 homogeneous equations, 165–66, 565 homogeneous Lorentz group, 278–83 exercises, 283 kinematics and dynamics in Minkowski space–time, 280–83 overview, 278–80 homomorphic group, 243 homomorphism, 243–45 overview, 243 rotations, 244–45 SU(2) and SO(3), 252–56 hooks, 275–76 Hubble’s law, 7 hydrogen atom, 843–45, 957–58 Schrödinger’s wave equation, 843 hyperbolic PDEs, 538 hypercharge, 257 hypergeometric equation alternate forms, 860, 864 second independent solution, 860 singularities, 564, 859, 864, 865, 873 hypergeometric functions, 859–63 contiguous function relations, 861 hypergeometric representations, 861–62 I ill-conditioned matrices, 234–35 imaginary part, 407 impulsive force, 975–76 incomplete gamma function, 389–92 confluent hypergeometric representation, 500, 864 recurrence relations, 399, 512 independence, linear, 173, 579–81, 643, 665, 703 of solutions of ordinary differential equations, 579–81 of vectors, 173, 579 indicial equation, 566 inertia matrix, moment of, 215–16 infinite limit (Euler), 499–500 infinite product (Weierstrass), 499–500, 501–3 infinite products, 378, 383, 396–99, 499, 501–3 convergence, 397–98 cosine, 398–99 entire functions, 462 gamma function, 398–99 sine, 398–99 infinite series,
321–401, see also alternating series; power series; Taylor’s expansion algebra of, 342–48 alternating series, 342–43 convergence, 342–45 convergence: absolute, 342–44 convergence: Cauchy integral, 327–29 convergence: Cauchy root, 326 convergence: comparison, 325–26 convergence: conditional, Leibniz criterion, 344 convergence: D’Alembert ratio, 326–27 convergence: Gauss’, 332–33 convergence: improvement of, 345 convergence: Kummer’s, 330–32 convergence: Maclaurin integral, 327–30 convergence: Raabe’s, 332 convergence: tests of, 325–35 convergence: uniform, 348–51, 363–64 divergence of squares, 344 double series, 345–47 exercises, 347–48 overview, 342–44 rearrangement of double, 345–48 asymptotic series, 389–96 cosine and sine integrals, 392–93 definition of, 393–94 exercises, 394–96 incomplete gamma function, 389–92 overview, 389 Bernoulli numbers, 376–89 Euler–Maclaurin integration formula, 380–82 exercises, 385–89 improvement of convergence, 385 overview, 376–79 polynomials, 379–80 Riemann Zeta function, 382–84 elliptic integrals, 370–76 definitions of, 372 exercises, 374–76 limiting values, 374 overview, 370 period of simple pendulum, 370–71 series expansion, 372–73 of functions, 348–52 Abel’s test, 351 exercises, 352 overview, 348 uniform and nonuniform convergence, 348–49 Weierstrass M test, 349–50 fundamental concepts, 321–25 addition and subtraction of, 324–25 exercises, 325 geometric, 322–23 harmonic, 323–24 overview, 321–22 power series, 363–66, 578–79 products of, 396–401 convergence of, 397–98 exercises, 399–401 overview, 396–97 sine, cosine, and gamma functions, 398–99 Riemann’s theorem, 894–98 infinity, see singularity, pole, essential singularity inhomogeneous linear equations, 166–67 inhomogeneous ordinary differential equation (ODE), Green’s function solutions, 663–64 inner product and matrix multiplication, 179–81 integral definitions of gradient, divergence, and curl, 58–59 integral — differential equation, 665–67 integral equations, 1005–36
and Fourier series for Mathieu functions, 919–23 Fredholm equations, 1005, 1007, 1010, 1013, 1018, 1021–24, 1030–32 Hilbert–Schmidt theory, 1029–36 exercises, 1034–36 nonhomogeneous integral equation, 1032–34 orthogonal eigenfunctions, 1030–32 symmetrization of kernels, 1029–30 integral transforms, generating functions, 1012–18 exercises, 1015–18 Fourier transform solution, 1013 generalized Abel equation, convolution theorem, 1014 generating functions, 1014–15 introduction, 1005–12 definitions, 1005–6 exercises, 1011 linear oscillator equation, 1009–11 momentum representation in quantum mechanics, 1006–7 transformation of a differential equation into an integral equation, 1008–9 Neumann series, separable (degenerate) kernels, 1018–29 exercises, 1025–29 Neumann series, 1018–19 Neumann series solution, 1020–21 numerical solution, 1023–25 separable kernel, 1021–22 Volterra equations, 991, 1005–6, 1009–11, 1021 integral form, Neumann functions, 701 integral representations, 505–6, 679–80 for Dirac delta function, 90 expansion of, 720–23 integrals, see also Cauchy (Maclaurin) integral test; definite integrals; elliptic integrals contour integration, 463, 471, 503, 522, 603 differentiation of, 590 evaluation of, 810 Lebesgue, 649, 657 line, 55–56, 65–67, 440 of products of three spherical harmonics, 803–6 Riemann, 55, 60–61, 605, 636 Stieltjes, 86, 873 surface, 56–57 volume, 57–58 integral test, Cauchy, see Cauchy (Maclaurin) integral test integral transforms, 931–1004 convolution (Faltungs) theorem, 990–94 driven oscillator with damping, 991–93 exercises, 993 convolution theorem, 951–55 exercises, 953–55 Parseval’s relation, 952–53 development of the Fourier integral, 936–38 Dirac delta function derivation, 937–38 Fourier integral — exponential form, 937 Fourier transform, 931–32 Fourier transform of derivatives, 946–51 heat flow PDE, 948–49 inversion of PDE, 949–50 wave equation, 947–48 Fourier transform of Gaussian, 932 Fourier transforms — inversion theorem,
938–46 cosine transform, 939 exercises, 942–46 exponential transform, 938 finite wave train, 940–41 sine transform, 939–40 uncertainty principle, 941 Fourier transforms of derivatives, 950–51 generating functions, 1012–18 Fourier transform solution, 1013 generalized Abel equation, convolution theorem, 1014 generating functions, 1014–15 integral transforms, 931–35 exercises, 934–35 Fourier transform, 931–32 Fourier transform of Gaussian, 932 Laplace, Mellin, and Hankel transforms, 933 linearity, 933–34 inverse Laplace transform, 994–1003 Bromwich integral, 994–95 exercises, 1000–1003 inversion via calculus of residues, 996 summary — inversion of Laplace transform, 999–1000 velocity of electromagnetic waves in a dispersive medium, 997–99 Laplace, Mellin, and Hankel transforms, 933 Laplace transform of derivatives, 971–78 Dirac delta function, 975 Earth’s nutation, 973–74 exercises, 976–78 impulsive force, 975–76 simple harmonic oscillator, 973 Laplace transforms, 965–71 definition, 965 elementary functions, 965–66 exercises, 970–71 inverse transform, 967–68 partial fraction expansion, 968–69 step function, 969–70 linearity, 933–34 momentum representation, 955–61 exercises, 959–61 harmonic oscillator, 958–59 hydrogen atom, 957–58 other properties, 979–89 Bessel’s equation, 983–84 damped oscillator, 979–80 derivative of a transform, 982–83 electromagnetic waves, 981–82 exercises, 985–89 integration of transforms, 984 limits of integration — unit step function, 985 RLC analog, 980–81 substitution, 979 translation, 981 transfer functions, 961–64 exercises, 964 significance of (t), 963–64 integration, 904, see also path-dependent work Euler–Maclaurin formula, 380–82 by parts of curl, 47 by parts of divergence, 40 by parts of gradient, 36 of power series, 364 simple pole on contour of, 468–69 of transforms, 984 of vectors, 54–60 exercises, 59–60 overview, 54–56 integration interval [a, b], 628–29 interpolating polynomials, 194 interpretation, see geometrical
interpretation of gradient; physical interpretation of divergence interitem, 1111 invariance of scalar product under rotations, 15–17 invariants, electromagnetic, 288 inverse Laplace transform, 994–1003 Bromwich integral, 994–95 inversion via calculus of residues, 996 summary — inversion of Laplace transform, 999–1000 velocity of electromagnetic waves in a dispersive medium, 997–99 inverse matrix, 200 inverse operator, 934, 1019 inverse transform, 967–68 inversion, 445–46 matrix, 184–87 Gauss–Jordan, 185–87 overview, 184–85 of PDE, 949–50 of power series, 366 via calculus of residues, 996 irreducible representations, 245–46 irreducible tensors, 149–51 irregular (essential) singular point, 563 irregular sign changes, series with, 341–42 irregular singularities, 572–73 irrotational, 45 isomorphic group, 243 isomorphism, 243–45 overview, 243 rotations, 244–45 isospin, SU(2), 256–60 isospin I, 257 J Jacobian, 107–8 parity transformation, 146 Jacobians for polar coordinates, 108–10 Jacobi–Anger expansion, 687 Jacobi identity, 248 Jacobi technique, 226 Jordan’s lemma, 468 Julia set, 1087 K kaon decay, 282–83 Kepler’s laws of planetary motion, 116–17 kinematics and dynamics in Minkowski space–time, 280–83 Kirchhoff diffraction theory, 426 Klein–Gordon equation, 537 Korteweg–deVries equation, 542 Kronecker delta, 10, 136–37 Kronecker product, 181 Kronig–Kramers optical dispersion relations, 484, 485 Kummer’s equation, see confluent hypergeometric equation Kummer’s test, 330–32 L ladder operators, approach to orbital angular momentum, 262–64 Lagrange’s equations, 1066 Lagrangian, 1053–54 Lagrangian equations, 1066–67 Lagrangian multipliers, 1060–65 cylindrical nuclear reactor, 1062 particle in a box, 1061–62 Index Laguerre functions, 837–48 associated Laguerre polynomials, 841–43 differential equation — Laguerre polynomials, 837–41 hydrogen atom, 843–45 Laguerre polynomials, 624 associated, 624–25 confluent hypergeometric representation, 866 generating function, 841–42 
integral representation, 843 orthogonality, 843 recurrence relations, 842 Rodrigues’ representation, 767, 842 confluent hypergeometric representation, 866 differential equation, 837–41 generating function, 837–39 Gram–Schmidt construction, 647 orthogonality, 624, 647 recurrence relations, 840, 842 Rodrigues’ formula, 839 Schrödinger’s wave equation, 843 self-adjoint form, 624, 840 singularities, 564 Laplace, Mellin, and Hankel transforms, 933 Laplace function, 598 Laplace’s equation, 536, 1057–58 Bessel functions, 695 Legendre polynomials, 760, 761 solutions, 50–51, 96, 443, 452, 539, 559–60 Laplace series expansion theorem, 790–91 gravity fields, 791 Laplace transforms, 965–71 convolution theorem, 521, 1000, 1003, 1014 definition, 965 of derivatives, 971–78 Dirac delta function, 975 Earth’s nutation, 973–74 impulsive force, 975–76 simple harmonic oscillator, 973 elementary functions, 965–66 inverse transform, 967–68 partial fraction expansion, 968–69 step function, 969–70 table of transforms, 967–68, 979 translation, 1000 Laplacian in Cartesian coordinates, 51–52, 554–55 in circular cylindrical coordinates, 119 development by minors, 167–68 in orthogonal coordinates, 316 of potentials, 50–51 spherical polar coordinates, 126 as tensor derivative operator, 161–62 Laurent expansion, 430–38, 466, 472 analytic continuation, 432–34 exercises, 437–38 Schwarz reflection principle, 431–32 Taylor expansion, 430–31 law of cosines, 16–17, 118, 745 leading coefficients for ce0 , 926–29 leading coefficients of se1 , 923–26 least squares, method of, 1119 Legendre duplication formula, derivation of, 522–23 Legendre equation, Maxwell’s equation, 779 self-adjoint form, 623, 625 Legendre functions, 741–816 addition theorem, 798 addition theorem for spherical harmonics, 797–802 derivation of addition theorem, 798–800 exercises, 800–802 trigonometric identity, 797–98 alternate definitions of Legendre polynomials, 767–70 exercises, 769–70 Rodrigues’ formula, 767 Schlaefli
integral, 768–69 associated, 772 associated Legendre functions, 771–86 associated Legendre polynomials, 772–74 equation, 558, 771–72, 778–82, 788 Fourier transform, 770 Gram–Schmidt construction, 789 hypergeometric representation, 861 lowest associated Legendre polynomials, 774 magnetic induction field of a current loop, 778–82 orthogonality, 776–78 parity, 776 poles, 760, 782 recurrence relations, 775 Rodrigues’ formula, 772–73 Schlaefli integral, 768 second kind, 806–12 self-adjoint form, 771–72 special values, 774–75 generating function, 741–49 exercises, 747–49 extension to ultraspherical polynomials, 747 Legendre polynomials, 742–44 linear electric multipoles, 744–45 physical basis — electrostatics, 741–42 vector expansion, 745–47 integrals of products of three spherical harmonics, 803–6 application of recurrence relations, 804–5 exercises, 805–6 orbital angular momentum operators, 793–97 orbital angular momentum operators, 796–97 orthogonality, 756–67 Earth’s gravitational field, 758–59 electrostatic potential of a ring of charge, 761–62 exercises, 762–66 expansion of functions, Legendre series, 757–58 polarization of dielectric, 764 ring of electric charge, 761–62 sphere in a uniform field, 759–61 recurrence relations and special properties, 749–56 differential equations, 751–52 exercises, 754–56 parity, 753 recurrence relations, 749–50 special values, 752 sphere in uniform electric field, 759–61 upper and lower bounds for Pn (cos θ), 753–4 of second kind, 806–13 closed-form solutions, 810–12 exercises, 812–13 Qn (x) functions of the second kind, 809–10 series solutions of Legendre’s equation, 807–9 spherical harmonics, 786–93 azimuthal dependence — orthogonality, 787 exercises, 791–93 Laplace series, expansion theorem, 790–91 Laplace series — gravity fields, 791 polar angle dependence, 788 spherical harmonics, 788–90 vector spherical harmonics, 813–16 Legendre polynomials, 644–46, 741, 742–44 associated generating function, 773 recurrence
relations, 775 generating function, 743 by Gram–Schmidt orthogonalization, 644–46 Laplace’s equation, 760, 761 orthogonality integral, 777 recurrence relations, 749–50 Rodrigues’ formula, 761 Schlaefli integral, 768 Legendre’s duplication formula, 503 Legendre’s equation, 625 differential form, 752 Legendre’s equation self-adjoint form, 625, 771 Legendre’s equation series solutions of, 807–9 Legendre series, 333, 807–9 recurrence relations, 807–8 Leibniz criterion, 339–40 formula for differentiating an integral, 590, 776 formula for differentiating a product, 771 Lerch’s theorem, 967 Levi-Civita symbol, 146–47 L’Hôpital’s rule, 365 Lie groups and algebras, 243, 248, 264–66 limits of integration — unit step function, 985 limits to values of elliptic integrals, 374 linear electric multipoles, 744–45 linear equations homogeneous, 165–66 inhomogeneous, 165–66 linear independence of solutions, 581–83 linearity, 933–34 linearly dependent solutions, 549 linearly independent solutions, 549 linear operator, 87, 176, 208, 535, 622, 650 differential operator, 42–43, 249, 261, 285, 304, 307, 554, 569, 629, 634, 664 integral operator, 768, 1019 linear oscillator Green’s function, 668–69 linear oscillator equation, 1009–11 linear transformation law, 152 line integrals, 55 Liouville’s theorem, 428 liquid drop model, 785 logistic map, 1080–84 Lommel integrals, 697 Lorentz covariance of Maxwell’s equations, 283–91 electromagnetic invariants, 288 exercises, 289–91 overview, 283–86 transformation of E and B, 287–88 Lorentz–Fitzgerald contraction, 148 Lorentz gauge, 52 Lorentz group, see homogeneous Lorentz group lowering operator, 263 lowest associated Legendre polynomials, 774 Lyapunov exponents, 1085–86 M Maclaurin expansion, series computation, 512 Maclaurin integral test, 327–30 Riemann Zeta function, 329–30 Maclaurin theorem, 354–55 exponential function, 354–55 logarithm, 355 overview, 354 Madelung constant, 347 magnetic, 19, 46, 51, 66–67, 69, 74–76, 96, 100, 127–28, 144–45, 
283–84, 288, 306, 311, 317, 447, 537, 638, 685, 703–5, 746, 778–82, 974 magnetic field constant (B field), 44 magnetic flux across an oriented surface, 306 magnetic induction field of current loop, 778–82 magnetic moments, 145 magnetic vector potentials, 44, 74–76, 127–28 Mandelbrot set, 1087–88 mapping complex variables, 443–51 branch points and multivalent functions, 447–50 exercises, 450–51 inversion, 445–46 overview, 443 rotation, 444 translation, 443–44 conformal, 451–54 exercises, 453–54 Mathieu equation angular, 872 modified, 872 radial, 872 Mathieu functions, 869–79, 921 elliptical drum, 872–73 Fourier expansions of, 919–29 integral equations and Fourier series for Mathieu functions, 919–23 leading coefficients for ce0 , 926–29 leading coefficients of se1 , 923–26 general properties of Mathieu functions, 874 quantum pendulum, 873 radial Mathieu functions, 874–79 separation of variables in elliptical coordinates, 870–71 matrices, 165–239, see also determinants; diagonal matrices; orthogonal matrices addition and subtraction, 178–79 adjoint, 208 angular momentum matrices, 253, 273 anticommuting sets, 236 antihermitian, 221–23, 231 antisymmetric, 204 definition, 176 diagonalization, 215–26 direct product, 181–82 equality, 178 Euler angle rotation, 202–3 exercises, 187–95 Gauss–Jordan matrix inversion technique, 185 Hermitian and unitary, 208–15 exercises, 212–15 overview, 208–9 Pauli and Dirac, 209–12 inversion of, 184–87 Gauss–Jordan, 185–87 overview, 184–85 ladder operators, 263 matrix multiplication, 176, 179–81 moment of inertia, 215–16, 220 multiplication, 179–80 direct product, 181–82 inner product, 179–81 by scalar, 179 normal, 231–39 exercises, 236–39 ill-conditioned systems, 234–35 normal modes of vibration, 233–34 overview, 231–32 null matrix, 178 orthogonal matrix, 201, 206, 209 overview, 176–78 product theorem, 181 quaternions, 204, 212 rank, 178 relation to tensor, 206 representation, 177, 184, 205, 208, 212 self-adjoint, 209
similarity transformation, 205 skew-symmetric, 204 symmetric, 204 traces, 139, 183–84 transposition, 177 unitary, 209 vector transformation law, 198 Maxwell’s equations, 51, 284, 316–20 derivation of wave equations, 52 dual transformation, 290 Gauss’ law, 52–53 Legendre equation, 779 Lorentz covariance of, 283–91 electromagnetic invariants, 288 exercises, 289–91 overview, 283–86 transformation of E and B, 287–88 Oersted’s law, 52–53 mean value theorem, 353 Mellin transforms, 897, 933 meromorphic functions, 439, 461 integral of, 466 pole expansion of, 461 metric, curvilinear coordinates, 105 metric tensor, 151–54 Christoffel symbols as derivatives of, 155–56 minimal substitution, 76 Minkowski space, 136, 278–79 Minkowski space–time, see kinematics and dynamics in Minkowski space–time minor, 168 minors, Laplacian development by, 167–68 missing dependent variables, 1042–43 Mittag-Leffler theorem, 461 mixed tensor, 136, 139, 154 modes of vibration, normal, 233–34 modified Bessel functions, 713–19 asymptotic expansion, 711, 719 Fourier transform, 716 generating function, 709 integral representation, 720–23 Laplace transform, 933 recurrence relations, 714–16 series form, 714 modified Helmholtz operator, 598 modified Mathieu equation, 872 modulus, 406 moment of inertia matrix, 215–16 momentum, see angular momentum; orbital angular momentum momentum representation, 955–61 harmonic oscillator, 958–59 hydrogen atom, 957–58 Schrödinger wave equation, 957–58 momentum representation in quantum mechanics, 1006–7 monopole, 745–46 Morera’s theorem, 427–28 motion area law for planetary, 116–19 equations of, 142 moving particle Cartesian coordinates, 1054 circular cylindrical coordinates, 1054–55 multiplet, 245 multiplication of matrices direct product, 181–82 of vectors, 182 inner product, 179–81 by scalar, 179 multiply connected regions, 423–24 multipole expansion, electrostatic, 599 multivalent functions, 447–50 multivalued function, 409 mutually exclusive, 1109–10 N
Navier–Stokes equations, 119 Neumann boundary conditions, 543 Neumann functions, 511 asymptotic form, 602–3 Bessel functions of second kind, 699–707 coaxial wave guides, 703–4 definition and series form, 699–700 other forms, 701 recurrence relations, 702 Wronskian formulas, 702–3 Fourier transform, 602 Hankel function definition, 707–8 integral form, 701 recurrence relations, 702 spherical, 507, 727 Wronskian formulas, 702–3 Neumann functions, integral form, 701 Neumann problem, 617 Neumann series, 1018–19 Neumann series, separable (degenerate) kernels, 1018–29 numerical solution, 1023–25 separable kernel, 1021–22 Neumann series solution, 1020–21 neutron diffusion theory, 369, 951 Newton’s second laws, 130, 233, 370, 973, 1054 node, spiral, 1097, 1103–4 non-Cartesian tensors, 140 nonessential (regular) singular point, 563 nonhomogeneous equation — Green’s function, 592–610 circular cylindrical coordinate expansion, 601–2 form of Green’s functions, 596–98 Legendre polynomial addition theorem, 600–601 spherical polar coordinate expansion, 598–600 symmetry of Green’s function, 595–96 nonhomogeneous integral equation, 1032–34 nonlinear differential equations (NDEs), 1088–89 autonomous differential equations, 1091–93 Bernoulli and Riccati equations, 1089–90 bifurcations in dynamical systems, 1103–4 center or cycle, 1100–1101 chaos in dynamical systems, 1105–6 dissipation in dynamical systems, 1102–3 fixed and movable singularities, special solutions, 1090 local and global behavior in higher dimensions, 1093–94 routes to chaos in dynamical systems, 1106–7 saddle point, 1095–97 spiral fixed point, 1098–1100 stable sink, 1095 nonlinear methods and chaos, 1079–1108 introduction, 1079–80 logistic map, 1080–84 exercises, 1084 nonlinear differential equations (NDEs), 1088–89 autonomous differential equations, 1091–93 Bernoulli and Riccati equations, 1089–90 bifurcations in dynamical systems, 1103–4 center or cycle, 1100–1101 chaos in dynamical systems, 1105–6 dissipation in 
dynamical systems, 1102–3 exercises, 1089, 1102, 1106, 1107 Index fixed and movable singularities, special solutions, 1090 local and global behavior in higher dimensions, 1093–94 routes to chaos in dynamical systems, 1106–7 saddle point, 1095–97 spiral fixed point, 1098–1100 stable sink, 1095 sensitivity to initial conditions and parameters, 1085–88 exercises, 1088 fractals, 1086–88 Lyapunov exponents, 1085–86 nonuniform convergence, 348–49 normalization, 695 normal matrices, 231–39 exercises, 236–39 ill-conditioned, 234–35 normal modes of vibration, 233–34 overview, 231–32 null matrix, 178 number operator, 823 numbers, Bernoulli, see Bernoulli numbers numerical solution, 1023–25 nutation, 973–74 O Oersted’s law, 52, 66–68 Olbers’ paradox, 337 operators, differential vector, see differential vector operators optical dispersion, 484–85 optical path near event horizon of black hole, 1041–42 orbital angular momentum, 251, 261–66 exercises, 266 ladder operator approach, 262–64 Lie groups and algebras, 264–66 Lie groups and operators, order of, 264–65 operators, 793–97 overview, 261 rotation of, 251 order 2 branch points, 440–42 order parameter, 1086 ordinary differential equations (ODEs), linear first-order, 547–50 oriented surface, magnetic flux across, 306 orthogonal coordinates Laplacian in, 316 in R3 , 103–10 exercises, 109–10 Jacobians for polar coordinates, 108 overview, 103–8 orthogonal eigenfunctions, 636, 637, 1030–32 1173 orthogonal functions, representation of Dirac delta function by, 88–89 orthogonal groups, 243 orthogonal group SO(3), 254, 256 orthogonality, 694–99, 731, 776–78, 821–22, 854–55 Bessel series, 695 continuum form, 696 curvilinear coordinates, 104 Earth’s gravitational field, 758–59 electrostatic potential in hollow cylinder, 695–96 electrostatic potential of ring of charge, 761–62 expansion of functions, Legendre series, 757–58 Fourier series, 636–37 Fourier series: Hilbert–Schmidt integral equations, 919–29 normalization, 695 over discrete 
points, 914–15 sphere in a uniform field, 759–61 Sturm–Liouville differential equations, 1031 of vectors, 14 orthogonality condition, 10, 198 orthogonality integral Hermite polynomials, 821 Legendre polynomials, 777 spherical harmonics, 788 orthogonality relations, spherical harmonics, 814 orthogonalization, Gram–Schmidt, 173, 642–7 orthogonal matrices, 195–208, 209 applications to vectors, 197–98 direction cosines, 196–97 Euler angles, 202–3 exercises, 206–8 inverse, 200 overview, 195–96 relation to tensors, 206 symmetry properties, 203–5 ˜ 200–202 transpose matrix, A, two-dimensional conditions for, 199–200 orthonormal functions, 642–44 polynomials, 646–47 vectors, 174 oscillator damping, 979–80, 991–93, 1101 driven, 457, 565, 991–93 forced classical, 457–60 harmonic, 822–27 integral equation for, 991–93 Laplace transform solution, 991–93 linear, 668–69 momentum space wave function, 569 self-adjoint equation, 679 series solution of differential equation, 568–69 1174 Index singularities in harmonic oscillator differential equation, 564 oscillatory series, 322 overshoot, calculation of, 912–13 P parabolic PDEs, 538 parallelogram addition law, 2–3 parallel transport, 157–60 parity, 753, 776, 939 Bessel functions, 687, 735 Chebyshev functions, 569 differential operator, 569 Fourier cosine, sine transforms, 939 Hermite functions, 569 Legendre functions, 569 Legendre functions, associated, 776 Legendre functions, second kind, 812 spherical harmonics, 776, 791 vector spherical harmonics, 814 parity transformation, 143 Parseval relation, 485–86, 952–53 partial differential equations (PDEs), 535–43 bicharacteristics of, 538 boundary conditions, 542–43 characteristics of, 538–41 classes of, 538–41 elliptic, 538 examples of, 536–38 harmonic functions, 539 hyperbolic, 538 introduction, 535–36 inversion of, 949–50 nonlinear, 541–42 parabolic, 538 partial fraction expansion, 968–69 partial sum approximation, 390 particle in a box, 1061–62 in a sphere, 731–32 particle motion 
Cartesian coordinates, 1054 circular cylindrical coordinates, 1054–55 quantum mechanical, 827 in rectangular box, 1061–62 in right circular cylinder, 1062 in sphere, 731–32 path-dependent work, 56–60 integral definitions of gradient, divergence, and curl, 58–59 overview, 56 surface integrals, 56–57 volume integrals, 57–58 Pauli matrices, 209–12 pendulums, period of simple, 370–71 periodic functions, 888–90 boundary conditions, 636, 883–5 permutations and combinations, counting of, 1114–15 phase of a complex function, 409 phase of a complex number, 409 phase space, 88, 1080, 1091 physical interpretation of divergence, 40–42 pi, π , 219, 379, 396–400, 586 Leibniz formula, 590, 776, 886 Wallis formula, 399 pion photoproduction threshold, 282–83 pi-sine relation, verification of, 522 planetary motion, area law for, 116–19 Pochhammer symbol, 859–60, 864 Poincaré section, 1088–89, 1104–6 point and space groups, crystallographic, 299–300 point source equation, 662 Poisson distribution, 1130–33 Poisson’s equation, 536 and Gauss’ law, 81–83 Green’s function, 669–70 polar angle dependence, 788 polar coordinates, Jacobians for, 108–10, see also spherical polar coordinates polar vectors, 143–44 pole expansion of meromorphic functions, 461 poles, 439 simple, on contour of integration, 468–69 polygamma functions, 511–12 Catalan’s constant, 513 polynomials, Bernoulli, 379–80 potential energy, 309 potentials, 68–79, see also thermodynamics gradient of, 34 force as, 36 Laplacian of, 50–51 overview, 68 scalar, 68–72 centrifugal, 72 gravitational, 72 overview, 68–71 vector, 73–79 of constant B field, 44 exercises, 77–79 magnetic, 74–76 potential theory conservative force, 69 electrostatic potential, 593 scalar potential, 70 vector potential, 73, 311 power series, 363–70 continuity, 364 convergence, 363 uniform and absolute, 363 Index differentiation and integration, 364 exercises, 366–70 inversion of, 366 overview, 363 uniqueness theorem, 364–65 L’Hôpital’s rule, 365 prime numbers, 
379, 382–83, 897 prime number theorem, asymptotic, 897–8 primitive, 898 principal axis, 216 principal value, 409 probability, 1109–51 binomial distribution, 1128–30 exercises, 1129 repeated tosses of dice, 1128–30 definitions, simple properties, 1109–15 conditional probability, 1112 counting of permutations and combinations, 1114–15 exercises, 1115 probability for A or B, 1110–11 scholastic aptitude tests, 1112–14 Gauss’ normal distribution, 1134–38 exercises, 1137–38 Poisson distribution, 1130–33 exercises, 1133 random variables, 1116–28 continuous random variable: hydrogen atom, 1117–19 discrete random variable, 1116–17 exercises, 1127–28 repeated draws of cards, 1123–26 standard deviation of measurements, 1119–23 sum, product, and ratio of random variables, 1126–27 statistics, 1138 χ 2 distribution, 1143–45 confidence interval, 1149 error propagation, 1138–40 exercises, 1150 fitting curves to data, 1140–43 student t distribution, 1146–49 probability for A or B, 1110–11 product convergence theorem, 344 products, see also cross product; direct product; dot products; scalars expansion of entire functions, 462–63 of infinite series, 396–401 convergence of infinite, 397–98 exercises, 399–401 overview, 396–97 sine, cosine, and gamma functions, 398–99 product theorem, 181 projection operators, 644 projections, of vectors, 4, 12 1175 pseudoscalars, 146 pseudotensors, 142–51 dual tensors, 147–48 exercises, 149–51 irreducible tensors, 149–51 Levi-Civita symbol, 146–47 overview, 142–46 pseudovectors, 146 pullbacks, 309–13 Gauss’ theorem, differential form, 312–13 overview, 309–10 Stokes’ theorem, 310–11 differential form, 311 Q QCD (quantum chromodynamics), 259 Qn (x) functions of the second kind, 809–10 quadrupole, 149, 745–47, 805 quantization, 624–25, 1054 quantum chromodynamics (QCD), 259 quantum mechanical scattering, 469–71 quantum mechanical simple harmonic oscillator, 822–27 quantum mechanics, sum rules, 484 angular momentum, 189–92, 251–6, 261–4, 266–70 
configuration space representation, 957–58 expectation values, 263 hydrogen atom, 245–46 momentum representation, 1006–7 Schrödinger representation, 1006–7 quantum pendulum, 873 quasiperiodic, 1100–1101, 1107 quaternions, 189, 204, 212 quotient rule, 141–42 equations of motion and field equations, 142 exercises, 142 overview, 141–42 R R3 , orthogonal coordinates in, see orthogonal coordinates Raabe’s test, 332 radial Mathieu equation, 872 radial Mathieu functions, 874–79 radioactive decay, 553, 977–78, 1129, 1130–31, 1133 raising operator, 263 random variables, 1116–28 continuous random variable: hydrogen atom, 1117–19 discrete random variable, 1116–17 standard deviation of measurements, 1119–23 sum, product, and ratio of random variables, 1126–27 1176 Index rank, 264 of matrices, 178 of tensor, 133 rapidity, 280 rational approximations, 345 ratio test, Cauchy, d’Alembert, 326 Rayleigh formulas, 730 Rayleigh–Ritz variational technique, 1072–76 ground state eigenfunction, 1073 vibrating string, 1074 real part, 407 rearrangement of double series, 345–48 reciprocal, 916 reciprocal lattice, 27 reciprocity principle, 665 rectifier, full-wave, 893–94 recurrence relations, 567, 714–16, 749–50, 775, 818–19 application of, 804–5 associated Legendre polynomials, 775 Bernoulli numbers, 377 Bessel functions, 677 Bessel functions, spherical, 730 Chebyshev polynomials, 850 confluent hypergeometric functions, 866 derivatives, 852–53 exponential integral, 529 factorial function, gamma, 504, 506, 529, 995 Hankel functions, 708 Hermite polynomials, 818–19 hypergeometric functions, 861 Laguerre functions, associated, 842 Legendre polynomials, 749–50 Legendre series, 807–8 modified Bessel functions, 714–16 Neumann functions, 702 polygamma functions, 512 and special properties, 749–56 differential equations, 751–52 parity, 753 recurrence relations, 749–50 special values, 752 upper and lower bounds for Pn (cos θ), 753–54 spherical Bessel functions, 730 reducible representations, 245–46 
reflection principle, Schwarz, 431–32 regression coefficient, 1140–41 regular (nonessential) singular point, 563 regular functions (holomorphic or analytic functions), 415 regular singularities, 572–73 relations, see dispersion relations relativistic energy, 356–57 repeated draws of cards, 1123–26 repellant, 1082 repellor, 1092 representation fundamental, 259–60, 274–75 irreducible, 246, 263, 265, 268, 274, 276, 293, 298 reducible, 246 residue theorem, 455–56, 472; see also calculus of residues resonant cavity, 682–85 Riccati equation, 1089–90 Riemann–Christoffel curvature tensor, 138 Riemann integral, 60–61 Riemann manifold, 313–14 Riemann’s theorem, 344 Riemann surface, 448–49 Riemann Zeta function, 329–30, 334–39 and Bernoulli numbers, 382–84 Fourier series evaluation, 329 table of values, 382 infinite series, 894–98 Riesz’ theorem, 177 RLC analog, 980–81 RL circuit, 549–50 Rodrigues’ formula, 767 Laguerre polynomials, 839 associated, 767, 842 Rodrigues representation, 820–21 Hermite polynomials, 820–21 root diagram, 264 root test, Cauchy, 326 rotations, 176, 444 of coordinate axes, 7–12 exercises, 12 vectors and vector space, 11–12 of coordinates, 199 of functions and orbital angular momentum, 251 groups SO(2) and SO(3), 250 invariance of scalar product under, 15–17 isomorphic and homomorphic, 244–45 Rouché’s theorem, 463 routes to chaos in dynamical systems, 1106–7 S saddle points (steepest descent method), 489–97, 1092, 1095–97, 1103 analytic landscape, 489–90 asymptotic forms of factorial function Γ, 494–95 of Hankel function Hν(1)(s), 493–94 exercises, 496–97 factorial function Γ(z), 494–95 Hankel function Hν(1), 493–94 overview, 489 sample space, 1109–10 sawtooth wave, 885–86 scalar potential, 33–34, 70, 536 scalar quantities, 1 scalars, 7, 133 multiplication of matrices by, 179 potentials, 68–72 centrifugal, 72 gravitational, 72 overview, 68–71 products, 12–17 exercises, 17 invariance of under rotations, 15–17 overview, 12–15 triple, 25–27
scattering, quantum mechanical, 469–71 Schlaefli integral, 709, 768–69 Legendre polynomials, 768 Schmidt orthogonalonization, see Gram–Schmidt scholastic aptitude tests, 1112–14 Schrödinger’s wave equation, 624 degeneracy of, 638 hydrogen atom, 843 Schrödinger wave equation, 76, 537, 1069–70 momentum space representation, 957–58 variational derivations, 1069–70 Schur’s lemma, 265 Schwarz inequality, 652–54 generalized, 661 Schwarz reflection principle, 431–32 second-rank tensors, 135–36 section, see Poincaré section secular equation, 218 selection rules, 265 self-adjoint eigenvalue equations, 595 self-adjoint matrices, 209 self-adjoint ODEs, 622–34 boundary conditions, 627–28 deuteron, 626–27 eigenfunctions, eigenvalues, 624–25 Hermitian operators, 629 Hermitian operators in quantum mechanics, 630 integration interval [a, b], 628–29 Legendre’s equation, 625 self-adjoint operator, 623, 630 Sturm–Liouville theory, differential equations, 622–30 semiconvergent series, 391 sensitivity to initial conditions and parameters, 1085–88 fractals, 1086–88 Lyapunov exponents, 1085–86 separable kernel, 1021–22 separable variables, 544–45 separation of variables in elliptical coordinates, 870–71 1177 series, see infinite series series approach, 570–71 Bessel’s equation, limitations of, 570–71 Chebyshev, 338, 569 Hermite, 567 hypergeometric, 569, 859, 862 incomplete beta, 523 Laguerre, 569 Legendre, 333, 507, 569, 757–62 shifted polynomials, 836 Chebyshev, 850 Legendre, 629 ultraspherical, 338 series form, 714 of second solution, 583–85 series solutions — Frobenius’ method, 565–78 expansion about x0 , 569 Fuchs’ theorem, 573 limitations of series approach — Bessel’s equation, 570–71 regular and irregular singularities, 572–73 symmetry of solutions, 569 sign changes, series with alternating, irregular, 341–42 similarity transformation, 205 simple harmonic oscillator, 973 simple pendulum, 1067–68 simple pole on contour of integration, 468–69 sine confluent hypergeometric 
representation, 393 functions of infinite products, 398–99 integrals in asymptotic series, 392–93 sine transform, 939–40 singularities, 438–43 branch points, 440–42 of order 2, 440–42 exercises, 442–43 fixed, 1090 Laurent series, 439 movable, 1090 overview, 438 poles, 439 singular points, 562–65 essential (irregular), 563 irregular (essential), 563 nonessential (regular), 563 regular (nonessential), 563 sink, 1092–96, 1101 sin x infinite product representation, 378, 392–93, 398, 439, 468–69, 583, 679–80, 728–29, 730, 885, 889, 892, 911, 950, 1021, 1097–98 power series, 358, 406, 410 skewsymmetric matrices, 204 1178 Index Slater determinant, 274 small oscillations, 370–71 SO(2) rotation groups, 250 SO(3) Clebsch–Gordan coefficients, 267–70 homomorphism, 252–56 rotation groups, 250 soap film, 1045–46 soap film — minimum area, 1046–49 solenoidal, 42 soliton solutions, 542 space, vector, see vectors space and point groups, crystallographic, 299–300 space–time, Minkowski, see kinematics and dynamics in Minkowski space–time special unitary groups SU(2), Pauli spin matrices, 189, 203–4, 250–66, 267–70, 274–76 SU(3), Gell-Mann matrices, 212, 256–60, 265–66, 274–76 SU(n), Young tableaux, 274–76 special unitary group SU(2), 252 special values, 774–75 spectral decomposition, 219, 225, 635 sphere in a uniform field, 759–61 spheres, total charge inside, 88 spherical Bessel functions, 725–39 asymptotic values, 729 definitions, 726–29 limiting values, 729–30 recurrence relations, 730 spherical components, 271 spherical coordinates, Helmholtz equation, 725 spherical harmonics, 264, 786–93 addition theorem for, 797–802 derivation of addition theorem, 798–800 trigonometric identity, 797–98 angular momentum operators, 793 azimuthal dependence — orthogonality, 787 Condon–Shortley phase conventions, 270 integrals of, 804 ladder operators, 796–97 Laplace series, expansion theorem, 790–91 orthogonality integral, 788 orthogonality relations, 814 polar angle dependence, 788 spherical 
harmonics, 788–90 vector spherical harmonics, 813–16 spherical polar coordinates, 123–33, 557–60 exercises, 128–33 expansion, 598–600 magnetic vector potential, 127–28 ∇, ∇·, ∇× for central force, 127 overview, 123–26 unit vectors, 123 spherical symmetry, 616 spherical tensor operator, 271 spherical tensors, 271–74 spherical waves, Bessel functions, 730 spinors, 138–39 exercises, 138–39 overview, 138 spinor wave functions, 212 spiral fixed point, 1098–1100 spiral node, 1097, 1103 spiral repellor, 1097, 1103 square integrable, 485, 487, 653, 658, 882 squares of series, divergent, 344 square wave, 911–12 square wave — high frequencies, 892–93 stable sink, 1095 standard deviation, 1119, 1140 standard deviation of measurements, 1119–23 stark effect, 576, 847 statistical hypothesis, 1138 statistics, 1138 χ 2 distribution, 1143–45 confidence interval, 1149 error propagation, 1138–40 fitting curves to data, 1140–43 student t distribution, 1146–49 steepest descent, method of, 489–96 factorial function, 494–95 Hankel functions, 493–94 modified Bessel functions, 720 step function, 969–70 Stirling’s expansion, 495 Stirling’s series, 516–20 derivation from Euler–Maclaurin integration formula, 517 Stokes’ theorem, 64–68 alternate forms of, 66–68 Oersted’s and Faraday’s laws, 66–68 overview, 66 on differential forms, 313–14 Riemann manifold, 313–14 exercises, 67–68 overview, 64–65 proof, 420–21 pullbacks, 310–11 straight line, 1044–45 strange attractor, 1085, 1086 string, Lagrangian of a vibrational, 1058, 1071 structure constants, 248, 251, 259 student t distribution, 1146–49 Sturm–Liouville theory, 885 Sturm–Liouville theory — orthogonal functions, 621–74 completeness of egenfunctions, 649–61 Bessel’s inequality, 651–52 Index expansion coefficients, 658 Schwarz inequality, 652–54 summary — vector spaces, completeness, 654–58 Sturm–Liouville theory — orthogonal functions completeness of Eigenfunction, 659–61 Gram–Schmidt orthogonalization, 642–49 exercises, 647–49 Legendre 
polynomials by Gram–Schmidt orthogonalization, 644–46 Green’s function — eigenfunction expansion, 662–74 eigenfunction, eigenvalue equation, 667–68 exercises, 670–74 Green’s function and the Dirac delta function, 669–70 Green’s function integral — differential equation, 665–67 Green’s functions — one-dimensional, 663–65 linear ocillator, 668–69 Hermitian operators, 634–42 degeneracy, 638 exercises, 639–42 expansion in orthogonal eigenfunctions — square wave, 637 Fourier series — orthogonality, 636–37 orthogonal eigenfunctions, 636 real eigenvalues, 634–35 self-adjoint ODEs, 622–34 boundary conditions, 627–28 deuteron, 626–27 eigenfunctions, eigenvalues, 624–25 exercises, 631–34 Hermitian operators, 629 Hermitian operators in quantum mechanics, 630 integration interval [a, b], 628–29 Legendre’s equation, 625 SU(2) Clebsch–Gordan coefficients, 267–70 isospin and SU(3) flavor symmetry, 256–60 and SO(3) homomorphism, 252–56 SU(3) flavor symmetry, 256–60 subgroups and cosets, 293–94 substitution, 979 subtraction of matrices, 178–79 of series, 324–25 of sets, 1111 of tensors, 136 sum, product, and ratio of random variables, 1126–27 summation convention, 136–37, 139 summation of series, 910 1179 sum rules, 484 SU(n), young tableaux for, 274–78 superposition principle for homogenous ODEs, PDEs, 536 surface integrals, 56–57 symmetric matrices, 204 symmetric tensor, 137 symmetrization of kernels, 1029–30 symmetry, 889 axes threefold, 296–99 twofold, 294–96 cylindrical, 617 properties of orthogonal matrices, 203–5 relations, 484 of solutions, 569 spherical, 616 SU(3) flavor, 256–60 of tensors, 137 T tableaux for SU(n), Young, 274–78 Taylor’s expansion, 352–63, 430–31 binomial theorem, 356–57 relativistic energy, 356–57 exercises, 358–63 Maclaurin theorem, 354–55 exponential function, 354–55 logarithm, 355 overview, 354 multiple variables, 358 overview, 352–54 tensor analysis contravariant tensor, 135–36, 153 contravariant vector, 134–35, 139, 152–54, 158 covariant tensor, 
135–36, 158 covariant vector, 134–35, 139–40, 152–54, 156, 158 definition, 133–36 displacement, 158 isotropic tensor, 137 non-Cartesian tensors, 140 parallel transport, 158 scalar quantity, 8, 15, 57, 134, 149, 179 spherical components, 271 spherical tensor operator, 271 symmetry–asymmetry, 137 tensor density, see pseudotensors tensor derivative operators curl, 162–63 divergence, 160–61 exercises, 162–63 Laplacian, 161–62 overview, 160 1180 Index tensors, see also derivative operators, tensor; direct product; general tensors; pseudotensors; quotient rule; spinors general tensors, 151–60, see also Christoffel symbols covariant derivative, 156 exercises, 158–60 geodesics and parallel transport, 157–60 metric tensor, 151–54 overview, 151 relation to orthogonal matrices, 206 spherical, 271–74 vector analysis in, 133–63 addition and subtraction of, 136 contraction, 139 overview, 133–35 second-rank, 135–36 summation convention, 136–37 symmetry–antisymmetry, 137 thermodynamics, 72–79 exact differentials, 72–76 overview, 72–73 vector potential, 73–79 exercises, 77–79 magnetic, 74–76 Thomas precession, 280 threefold Hermite formula, 827–28 threefold symmetry axis, 296–99 time-dependent diffusion equation, 536 time-independent diffusion equation, 536 Titchmarsh theorem, 487 trace, 139 trace formula, 224–25 Gutzwiller’s, 898 traces of matrices, 183–84 trajectory, 38, 1088, 1091–92, 1097, 1099, 1100, 1103, 1105 transfer function, 962 transfer functions, 961–64 significance of (t), 963–64 transform, derivative of, 982–83; see also cosines; exponential; Fourier; Fourier–Bessel; Hankel; Laplace; Mellin; sine transformation law, 134 transformation of differential equation into integral equation, 1008–9 transformation of E and B, Lorentz, 287–88 translation, 443–44, 981 transport, parallel, 157–60 ˜ 200–202 transpose matrix, A, transposition, 177 triangle inequalities, 406 triangle rule, 268–69 trigonometric form, 853–54 trigonometric identity, 797–98 triple scalar products, 25–27, 
165–66 triple vector products, 27–29, 46 BAC–CAB rule, 28, 46, 51 exercises, 27–32 overview, 27 Tschebyscheff, see Chebyshev two-dimensional conditions for orthogonal matrices, 199–200 twofold symmetry axis, 294–96 U ultraspherical polynomials, extension to, 747 equation, 853, 861 self-adjoint form, 854 uncertainty principle in quantum theory, 941 uniform convergence, 348–49, 363 union of sets, 1111 uniqueness theorem, 364–65 descending power series, 781 inverse operator, 181 Laurent expansion, 433 of power series, 364–65 uniqueness theorem, L’Hôpital’s rule, 365 unitary groups, 243 unitary matrices, see also Hermitian matrices algebra, 210–12 Heaviside, 93, 981, 985, 996 ring, 180, 187 unit element of group, 293 unit step function, 191 vector space, 208 unit vectors, 123 Cartesian coordinates, 5 circular cylindrical, 116 orthogonality relation, 651 spherical polar, 201–2 spherical polar coordinates, 126 upper and lower bounds for Pn (cos θ), 753–4 V values, limiting, of elliptic integrals, 374 variables, see also complex variables dependent, 1052–58 Hamilton’s Principle, 1053–54 Laplace’s equation, 1057–58 moving particle — Cartesian coordinates, 1054 moving particle — circular cylindrical coordinates, 1054–55 dependent and independent, 1038–44, 1058–59 alternate forms of Euler equations, 1042 concept of variation, 1038–41 missing dependent variables, 1042–43 optical path near event horizon of a black hole, 1041–42 Index multiple, of Taylor’s expansion, 358 separation of, 554–62 variance, 1120 variation, concept of, 1038–41 variation of the constant, 548 variations, see also calculus of variations variation with constraints, 1065–72 Lagrangian equations, 1066–67 Schrödinger wave equation, 1069–70 simple pendulum, 1067–68 sliding off a log, 1068–69 vector analysis parallelogram addition law, 2–3 reciprocal lattice, 27 rotation of coordinates, 205 transformation law, 134 vector definition, 8–9 representation of, 9 vector expansion, 745–47 vector field, 7 vector 
integrals line integrals, 55 surface integrals, 56–57 volume integrals, 57 vector potential, 73, 311 vector product, 5, 11, 18–22, 25–29, 43–44, 46, 147, 272 vector quantities, 1 vectors, 1–101, see also curved coordinates and vectors; divergence, ∇; gradient, ∇; integration; potentials; rotations; Stokes’ theorem; tensors applications of orthogonal matrices to, 197–98 components, 8, 11, 153, 654 contravariant, 134–35, 139, 152–54, 158 covariant, 134–35, 139, 152–54, 156, 158 cross product of, 315 curl, ∇×, 43–49 of central force field, 44–46 exercises, 47–49 gradient of dot product, 46 integration by parts of, 47 overview, 43 potential of constant B field, 44 definitions and elementary approach, 1–7 exercises, 6–7 differential vector operators, 110–14 curl, 112–13 divergence, 111–12 exercises, 113–14 gradient, 110 overview, 110 Dirac delta function, 83–95 exercises, 91–95 integral representations for, 90–95 1181 overview, 83–87 phase space, 88 representation by orthogonal functions, 88–89 total charge inside sphere, 88 direction, 5 direct product of, 182 elementary approach to, 1–7 elementary approach to, exercises, 6–7 Gauss’ law, 79–83 exercises, 82–83 Poisson’s equation, 81–83 Gauss’ theorem, 60–64 alternate forms of, 62 exercises, 62–64 Green’s theorem, 61–62 overview, 60–61 by Gram–Schmidt orthogonalization, 174–76 Helmholtz’s theorem, 95–101 exercises, 100–101 overview, 95–96 irrotational, 45–46 linear dependence of, 172–73 normal, 15, 56 or cross product, 18–22 exercises, 22–25 orthogonal, 108, 173 overview, 1 potentials, magnetic, 127–28 scalar or dot product, 12–17 exercises, 17 invariance of under rotations, 15–16 serve as a basis, 5 space, 11–12 exercises, 12 successive applications of ∇, 49–54 electromagnetic wave equation, 51–53 exercises, 53–54 Laplacian of potential, 50–51 overview, 49–50 triangle law of addition, 1–2 triple product of, 27–29 exercises, 27–32 triple scalar product, 25–27 vector space, linear space, 7, 11–12, 173, 177, 208, 219, 
247–48, 314, 638, 644, 650, 654–55, 657–58, 884 vector spaces, completeness, 654–58 vector spherical harmonics, 813–16 vector transformation law, 21 velocity of electromagnetic waves in a dispersive medium, 997–99 vibrating string, 1074 vibration, normal modes of, 233–34 vierergruppe, 292–93, 296 Volterra equation, 1005–6 volume integrals, 57–58 von Staudt–Clausen theorem, 379 W Wallis’ formula, 399 wave diffusion equation (Helmholtz diffusion equation), 536, 537 wave equation, 947–48 anomalous dispersion, 999 derivation from Maxwell’s equation, 52 Fourier transform solution, 947–48 Laplace transform solution, 948 wave equation, electromagnetic, 51–53 wave functions, 624 spinor, 212 wave guides, coaxial, Bessel functions, 703–4 Weierstrass infinite-product form of Γ(z), 499–500 Weierstrass M test, 349–50 weight diagram, 257, 258, 260, 265 weight vectors, 265 Whittaker functions, 866, 919 Wigner–Eckart theorem, 273 WKB expansion, 394 work, potential, 34, 70–72 Wronskian, 550 Wronskian determinant, 579–80 Wronskian formulas, 702–3 absence of third solution, 587, 589 Bessel functions, 702–4 Bessel functions, spherical, 601, 723 Chebyshev functions, 582–83, 591 confluent hypergeometric functions, 868 Green’s function, construction of, 592, 703 linear independence of functions, 579–80, 665, 703 second solution of differential equations, 583–87 solutions of self-adjoint differential equation, 702 Y Young tableaux for SU(n), 274–77 Z zero-point energy, 732, 826 zeros, Bessel function, 682 zeta function, see Riemann Zeta function
https://emedicine.medscape.com/article/220869-overview
Lymphogranuloma Venereum (LGV): Background, Pathophysiology, Epidemiology

Updated: Jan 03, 2025 | Author: John L Kiley, MD; Chief Editor: Pranatharthi Haran Chandrasekar, MBBS, MD

Overview
Background

Lymphogranuloma venereum (LGV) is a sexually transmitted infection caused by specific serovars of Chlamydia trachomatis that are able to invade and replicate in regional lymph nodes. This distinguishes LGV from other chlamydial infections and leads to more severe clinical manifestations. Infections caused by LGV serotypes L1, L2, and L3 can present with genital ulcers followed by painful inguinal and/or femoral lymphadenopathy, which may be the primary clinical symptoms. LGV also can involve the rectum, leading to ulceration, proctocolitis, and symptoms such as rectal pain, discharge, and bleeding, often resembling gastrointestinal conditions. If left untreated, LGV can lead to complications such as disfiguring genital ulcers and lymphatic obstruction. The infection is more commonly diagnosed in men, particularly among men who have sex with men (MSM), and is increasingly reported in high-income countries. [4, 5, 6, 7, 8, 9] Prompt diagnosis, appropriate treatment, and ongoing monitoring are essential to prevent long-term sequelae and reduce transmission of LGV in affected populations. Medical providers should remain vigilant in recognizing and managing LGV, especially in populations experiencing an increase in cases.

Pathophysiology

Chlamydia trachomatis is an obligate intracellular bacterium. Of the 15 known clinical serotypes, only the L1, L2, and L3 serotypes cause LGV. These serotypes are more virulent and invasive than other chlamydial serotypes. Infection occurs after direct contact with the skin or mucous membranes of an infected partner; the organism does not penetrate intact skin. The organism travels via lymphatics to regional lymph nodes, where it replicates within macrophages and causes systemic disease. Although transmission is predominantly sexual, cases of transmission through laboratory accidents, fomites, and nonsexual contact have been reported.
The L2b serovar has been identified as playing a more important role than previously recognized. After 92 cases of LGV were diagnosed in the Netherlands among MSM, Schachter et al evaluated rectal swab samples obtained between 1979 and 1985 from patients infected with HIV in San Francisco and between 2000 and 2005 in Amsterdam. The study revealed the same serovar circulating among patients with HIV and LGV during 1979-1985, indicating that the L2b serovar had been present and unrecognized for many years.

LGV occurs in 3 stages. The first stage, which often goes unrecognized, consists of a rapidly healing, painless genital papule or pustule. The second stage, consisting of painful, often unilateral inguinal lymphadenopathy, occurs 2-6 weeks after the primary lesion. The third stage, which is more common in women and MSM, may have a delayed presentation and is characterized by proctocolitis.

Epidemiology

Frequency

United States

LGV historically has been a rare disease in developed countries but is best thought of as a re-emerging STI. Since 2003, sporadic outbreaks of LGV proctitis have been reported among MSM in North America, Europe, and Australia. [11, 12] In the United States, however, the precise prevalence is unknown for a combination of reasons: LGV as such is not a nationally notifiable condition (aside from the requirement to report chlamydial infections); some states attempt to track cases of LGV based on syndrome reporting, but most laboratories lack the capability to either grow or serotype Chlamydia; and no universal surveillance data exist for this disease. Twenty-four states still mandate reporting of LGV cases to the US Centers for Disease Control and Prevention (CDC), which provides limited data on disease prevalence. Rates of LGV declined after 1972, with 113 known cases reported to the CDC in 1997. In November 2004, the CDC began offering assistance to test for LGV in the United States.
Between November 2004 and January 2006, LGV was identified in 180 specimens, with 27 specimens identified as obtained from MSM. Of note, chlamydia rates have not decreased since 2012. A study published in 2011 reporting LGV surveillance data from multiple sites in the United States found that less than 1% of rectal swab samples from MSM that were positive for C trachomatis tested positive for LGV. Much as in the United Kingdom, increasing reports of clusters of LGV transmission have occurred in the United States, particularly among MSM patients.

International

LGV is an uncommon disease, although it may account for 2-10% of patients with genital ulcer disease in selected areas of India and Africa. The disease most commonly is found in areas of the Caribbean, Central America, Southeast Asia, and Africa. Since 2003, however, the emergence of documented LGV infections, mostly among MSM but also in women, has prompted increased surveillance and reporting of this disease in developed countries. [18, 19] Proctitis is re-emerging as a presentation of LGV in developed countries. After a cluster of 92 cases was identified in the Netherlands between 2003 and 2004 (where fewer than 5 cases had been reported yearly), many countries began active surveillance for LGV, and an increasing number of cases has been identified. Evidence exists that among MSM, LGV is endemic in the United Kingdom; between 2004 and 2008, LGV was documented in 854 isolates by the National Reference Center there. [22, 23, 24, 25, 26, 27, 28] Evidence supporting the endemicity of LGV in the United Kingdom continues to be published, particularly from the LGV Enhanced Surveillance system, which collected data from 2004-2010. Over this period, 1370 cases were reported, of which 28 were in females, heterosexual males, or persons of unknown risk group, with the rest in MSM.
Of note, nearly 80% of the patients with a single episode of LGV also were living with HIV; further complicating this rise in cases is evidence from the United Kingdom that a higher percentage of these cases may be asymptomatic. An Austrian study found that LGV represented 23% of rectal C trachomatis infections in MSM and that 45% of LGV cases were asymptomatic. A French study of universal testing of C trachomatis-positive anorectal samples found that the proportion of asymptomatic LGV was high and increased from 36.1% in 2020 to 52.4% in 2022. A study in Australia demonstrated an increase in the proportion of asymptomatic LGV cases when universal screening of C trachomatis-positive rectal samples in men older than 25 years replaced screening only in the setting of symptoms or HIV positivity. A recent study of MSM seen by proctologists in Moscow, Russia, found that 37.3% were C trachomatis positive; of those, 68.8% were LGV positive.

Mortality/Morbidity

With appropriate treatment, the disease is easily eradicated. Death is a rare complication but may result from small bowel obstruction or perforation secondary to rectal scarring. Morbidity is common, especially during the third stage of the disease, and includes such conditions as proctocolitis, perirectal fissures, abscesses, strictures, and rectal stenosis. A chronic inflammatory response may lead to hyperplasia of the intestinal and perirectal lymphatics, causing lymphorrhoids, which are similar to hemorrhoids. Strictures and fistulous tracts may lead to chronic lymphatic obstruction, resulting in elephantiasis, thickening or fibrosis of the labia, and edema or gross distortion of the penis and scrotum. Reports show an association between adenocarcinoma (primarily rectal adenocarcinoma) and chronic untreated LGV.
Race

In North America and Europe, most reported cases of LGV have been identified among White males infected with HIV who acquired the condition after having sex with other men following travel to, or residence in, endemic areas, and typically after having multiple anonymous sexual contacts.

Sex

LGV is an STI and probably affects both sexes equally, although it is more commonly reported in men. This predilection may be because early manifestations of LGV are more apparent in men and thus are diagnosed more readily. Men typically present with the acute form of the disease, whereas women often present later, after developing complications from late disease. Most cases in Europe and North America have been identified among White, frequently HIV-positive MSM patients presenting with proctitis. [27, 34, 35, 36]

Age

LGV may affect any age but has a peak incidence in the sexually active population aged 15-40 years.

Prognosis

With prompt and appropriate antibiotic therapy, the prognosis is excellent, and patients typically make a full recovery. Patients must be informed that reinfection and relapses may occur.

Patient Education

Counsel patients to avoid high-risk sexual activities, to use condoms, and to avoid sexual intercourse with high-risk partners.

References

Morris SR. Sexually Transmitted Infections (STIs). In: Porter RE, ed. The Merck Manual of Diagnosis and Therapy. Rahway, NJ: Merck & Co Inc; Reviewed/Revised January 2023. [Full Text].

Soni S, Srirajaskanthan R, Lucas SB, Alexander S, Wong T, White JA. Lymphogranuloma venereum proctitis masquerading as inflammatory bowel disease in 12 homosexual men. Aliment Pharmacol Ther. 2010 Jul. 32:59-65. [QxMD MEDLINE Link]. [Full Text].

CDC. Sexually Transmitted Infections: Treatment Guidelines, 2021 - Lymphogranuloma Venereum (LGV). US Centers for Disease Control and Prevention. July 22, 2021; Accessed: October 4, 2024.
Beigi RH. Lymphogranuloma Venereum. In: Beigi RH, ed. Sexually Transmitted Diseases. 1st ed. West Sussex, UK: John Wiley & Sons, Ltd; 2012. 49-52. [Full Text].

Oud EV, de Vrieze NH, de Meij A, de Vries HJ. Pitfalls in the diagnosis and management of inguinal lymphogranuloma venereum: important lessons from a case series. Sex Transm Infect. 2014 Jun. 90(4):279-82. [QxMD MEDLINE Link].

[Guideline] de Vries HJ, Zingoni A, Kreuter A, Moi H, White JA. 2013 European guideline on the management of lymphogranuloma venereum. J Eur Acad Dermatol Venereol. 2014 Mar 24. [QxMD MEDLINE Link].

de Vrieze NH, de Vries HJ. Lymphogranuloma venereum among men who have sex with men: an epidemiological and clinical review. Expert Rev Anti Infect Ther. 2014 Jun. 12(6):697-704. [QxMD MEDLINE Link].

Nieuwenhuis RF, Ossewaarde JM, Götz HM, Dees J, Thio HB, Thomeer MG, et al. Resurgence of lymphogranuloma venereum in Western Europe: an outbreak of Chlamydia trachomatis serovar L2 proctitis in The Netherlands among men who have sex with men. Clin Infect Dis. 2004 Oct 1. 39(7):996-1003. [QxMD MEDLINE Link].

Rönn M, Hughes G, White P, Simms I, Ison C, Ward H. Characteristics of LGV repeaters: analysis of LGV surveillance data. Sex Transm Infect. 2014 Jun. 90(4):275-8. [QxMD MEDLINE Link].

Schachter J. Confirming positive results of nucleic acid amplification tests (NAATs) for Chlamydia trachomatis: all NAATs are not created equal. J Clin Microbiol. 2005. 43:1372-1373.

Kapoor S. Re-emergence of lymphogranuloma venereum. J Eur Acad Dermatol Venereol. 2008 Apr. 22:409-16. [QxMD MEDLINE Link]. [Full Text].

White JA. Manifestations and management of lymphogranuloma venereum. Curr Opin Infect Dis. 2009 Feb. 22:57-66. [QxMD MEDLINE Link].

de Voux A, Kent JB, Macomber K, Krzanowski K, Jackson D, Starr T, et al. Notes from the Field: Cluster of Lymphogranuloma Venereum Cases Among Men Who Have Sex with Men - Michigan, August 2015-April 2016. MMWR Morb Mortal Wkly Rep. 2016 Sep 2.
65(34):920-1. [QxMD MEDLINE Link].

CDC. Sexually Transmitted Infections Surveillance 2022: Chlamydia - Rates of Reported Cases by Sex, United States, 2013-2022. cdc.gov. Available at 30 January 2024.

Hardick J, Quinn N, Eshelman S, Piwowar-Manning E, Cummings V, Marsigila VC, et al. O3-S6.04 Multi-site screening for lymphogranuloma venereum (LGV) in the USA. Sex Transm Infect. 2011. 87(Suppl 1). [Full Text].

de Voux A, Kent JB, Macomber K, Krzanowski K, Jackson D, Starr T, et al. Notes from the Field: Cluster of Lymphogranuloma Venereum Cases Among Men Who Have Sex with Men - Michigan, August 2015-April 2016. MMWR Morb Mortal Wkly Rep. 2016 Sep 2. 65(34):920-1. [QxMD MEDLINE Link].

Wolff K, Lowell G, Stephen K, et al. Lymphogranuloma Venereum. In: Fitzpatrick's Dermatology in General Medicine. 7th ed. United States: McGraw-Hill; 2008. Vol 1: Chapter 203. [Full Text].

Martin-Iguacel R, Llibre JM, Nielsen H, Heras E, Matas L, Lugo R, et al. Lymphogranuloma venereum proctocolitis: a silent endemic disease in men who have sex with men in industrialised countries. Eur J Clin Microbiol Infect Dis. 2010 Aug. 29:917-25. [QxMD MEDLINE Link]. [Full Text].

Vanousova D, Zakouzka H, Jilich D, et al. First detection of Chlamydia trachomatis LGV biovar in the Czech Republic, 2010-2011. Eurosurveillance. 2011. 17: article 2. [Full Text].

López-Vicente J, Rodríguez-Alcalde D, Hernández-Villalba L, Moreno-Sánchez D, Lumbreras-Cabrera M, Barros-Aguado C, et al. Proctitis as the clinical presentation of lymphogranuloma venereum, a re-emerging disease in developed countries. Rev Esp Enferm Dig. 2014 Jan. 106(1):59-62. [QxMD MEDLINE Link].

CDC. Lymphogranuloma venereum among men who have sex with men - Netherlands, 2003-2004. MMWR Morb Mortal Wkly Rep. 2004. 53:985-988. [QxMD MEDLINE Link]. [Full Text].

Stary G, Stary A. Lymphogranuloma venereum outbreak in Europe. J Dtsch Dermatol Ges. 2008 Nov. 6(11):935-40. [QxMD MEDLINE Link].
Gomes JP, Nunes A, Florindo C, Ferreira MA, Santo I, Azevedo J, et al. Lymphogranuloma venereum in Portugal: unusual events and new variants during 2007. Sex Transm Dis. 2009 Feb. 36(2):88-91. [QxMD MEDLINE Link].

Sethi G, Allason-Jones E, Richens J, Annan NT, Hawkins D, Ekbote A, et al. Lymphogranuloma venereum presenting as genital ulceration and inguinal syndrome in men who have sex with men in London, United Kingdom. Sex Transm Infect. 2008 Dec 9. [QxMD MEDLINE Link].

Robertson A, Azariah S, Bromhead C, Tabrizi S, Blackmore T. Case report: lymphogranuloma venereum in New Zealand. Sex Health. 2008 Dec. 5(4):369-70. [QxMD MEDLINE Link].

Cusini M, Boneschi V, Arancio L, Ramoni S, Venegoni L, Gaiani F, et al. Lymphogranuloma Venereum: the Italian experience. Sex Transm Infect. 2008 Nov 26. [QxMD MEDLINE Link].

Ward H, Alexander S, Carder C, Dean G, French P, Ivens D, et al. The prevalence of Lymphogranuloma venereum (LGV) infection in men who have sex with men: results of a multi-centre case finding study. Sex Transm Infect. 2009 Feb 15. [QxMD MEDLINE Link].

Acknowledgement: Maria Jose Borrego. ESSTI_ALERT: LGV Cases Reported in Denmark and Portugal. ESSTI/Health Protection Agency. June 2007.

Saxon C, Hughes G, Ison C, UK LGV Case-Finding Group. Asymptomatic Lymphogranuloma Venereum in Men who Have Sex with Men, United Kingdom. Emerg Infect Dis. 2016 Jan. 22(1):112-116. [QxMD MEDLINE Link].

Chromy D, Sadoghi B, Gasslitter I, Skocic M, Okoro A, Grabmeier-Pfistershammer K, et al. Asymptomatic lymphogranuloma venereum is commonly found among men who have sex with men in Austria. J Dtsch Dermatol Ges. 2024 Mar. 22(3):389-397. [QxMD MEDLINE Link].

Peuchant O, Laurier-Nadalié C, Albucher L, Balcon C, Dolzy A, Hénin N, et al. Anorectal lymphogranuloma venereum among men who have sex with men: a 3-year nationwide survey, France, 2020 to 2022. Euro Surveill. 2024 May. 29(19):277-9. [QxMD MEDLINE Link].
Hughes Y, Chen MY, Fairley CK, Hocking JS, Williamson D, Ong JJ, et al. Universal lymphogranuloma venereum (LGV) testing of rectal chlamydia in men who have sex with men and detection of asymptomatic LGV. Sex Transm Infect. 2022 Dec. 98(8):582-585. [QxMD MEDLINE Link].

Tyulenev YA, Guschin AE, Titov IS, Frigo NV, Potekaev NN, Unemo M. First reported lymphogranuloma venereum cases in Russia discovered in men who have sex with men attending proctologists. Int J STD AIDS. 2022 Apr. 33(5):456-461. [QxMD MEDLINE Link].

Tinmouth J, Gilmour MW, Kovacs C, Kropp R, Mitterni L, Rachlis A, et al. Is there a reservoir of sub-clinical lymphogranuloma venereum and non-LGV Chlamydia trachomatis infection in men who have sex with men?. Int J STD AIDS. 2008 Dec. 19(12):805-9. [QxMD MEDLINE Link].

de Vries HJ, van der Bij AK, Fennema JS, Smit C, de Wolf F, Prins M, et al. Lymphogranuloma venereum proctitis in men who have sex with men is associated with anal enema use and high-risk behavior. Sex Transm Dis. 2008 Feb. 35(2):203-8. [QxMD MEDLINE Link].

Savage EJ, van de Laar MJ, Gallay A, van der Sande M, Hamouda O, Sasse A, et al. Lymphogranuloma venereum in Europe, 2003-2008. Euro Surveill. 2009 Dec. 14:48. [QxMD MEDLINE Link]. [Full Text].

McLean CA, Stoner BP, Workowski KA. Treatment of lymphogranuloma venereum. Clin Infect Dis. 2007 Apr 1. 44 Suppl 3:S147-52. [QxMD MEDLINE Link].

Zenilman J, Shahmanesh M. Laboratory Interventions. In: Sexually Transmitted Infections: Diagnosis, Management, and Treatment. Sudbury, MA: Jones & Bartlett Learning, LLC; 2012. Chap 19.

Frickmann H, Essig A, Poppert S. Identification of lymphogranuloma venereum-associated Chlamydia trachomatis serovars by fluorescence in situ hybridisation: a proof-of-principle analysis. Trop Med Int Health. 2014 Apr. 19(4):427-30. [QxMD MEDLINE Link].

[Guideline] Centers for Disease Control and Prevention (CDC) Sexually Transmitted Infection (STI) Guideline, 2021: Lymphogranuloma Venereum (LGV). www.cdc.gov.
Available at 22 July 2021.

Simons R, Candfield S, French P, White JA. Observed Treatment Responses to Short-Course Doxycycline Therapy for Rectal Lymphogranuloma Venereum in Men Who Have Sex With Men. Sex Transm Dis. 2018 Jun. 45(6):406-408. [QxMD MEDLINE Link].

Bilinska J, Artykov R, White J. Effective Treatment of Lymphogranuloma Venereum With a 7-Day Course of Doxycycline. Sex Transm Dis. 2024 Jul 1. 51(7):504-507. [QxMD MEDLINE Link].

Oud EV, de Vrieze NH, de Meij A, de Vries HJ. Pitfalls in the diagnosis and management of inguinal lymphogranuloma venereum: important lessons from a case series. Sex Transm Infect. 2014 Jun. 90(4):279-82. [QxMD MEDLINE Link].

Klausner J, Hook E III. Lymphogranuloma Venereum. In: CURRENT Diagnosis & Treatment of Sexually Transmitted Diseases. 1st ed. United States: McGraw-Hill; 2007. Chapter 17.

Annan NT, Sullivan AK, Nori A, Naydenova P, Alexander S, McKenna A, et al. Rectal chlamydia: a reservoir of undiagnosed infection in men who have sex with men. Sex Transm Infect. 2009 Jun. 85:176-9. [QxMD MEDLINE Link]. [Full Text].

[Guideline] de Vries HJC, de Barbeyrac B, de Vrieze NHN, Viset JD, White JA, Vall-Mayans M, et al. 2019 European guideline on the management of lymphogranuloma venereum. J Eur Acad Dermatol Venereol. 2019 Oct. 33(10):1821-1828. [QxMD MEDLINE Link]. [Full Text].

[Guideline] World Health Organization. WHO Guidelines for the Treatment of Chlamydia trachomatis. World Health Organization. 2016. [QxMD MEDLINE Link]. [Full Text].

Peters RPH, Maduna L, Kock MM, McIntyre JA, Klausner JD, Medina-Marino A. Single-Dose Azithromycin for Genital Lymphogranuloma Venereum Biovar Chlamydia trachomatis Infection in HIV-Infected Women in South Africa: An Observational Study. Sex Transm Dis. 2021 Feb 1. 48(2):e15-e17. [QxMD MEDLINE Link].

Blanco JL, Fuertes I, Bosch J, De Lazzari E, Gonzalez-Cordón A, Vergara A, et al. Effective treatment of Lymphogranuloma venereum proctitis with Azithromycin. Clin Infect Dis.
2021 Jan 19. [QxMD MEDLINE Link].

Shiferaw W, Martin BM, Dean JA, et al. A systematic review and meta-analysis of sexually transmitted infections and blood-borne viruses in travellers. J Travel Med. 2024 Jun 3. 31(4). [QxMD MEDLINE Link]. [Full Text].

Tables

Table 1. Recommended and Alternative Treatment Regimens

| Treatment Regimen | Dosage | Duration |
| --- | --- | --- |
| Recommended: Doxycycline | 100 mg orally, 2 times/day | 21 days |
| Alternative: Azithromycin | 1 g orally, once weekly | 3 weeks |
| Alternative: Erythromycin base | 500 mg orally, 4 times/day | 21 days |

Table 2. WHO Recommendations for Treatment of Lymphogranuloma Venereum

| Criteria | Treatment Recommendation |
| --- | --- |
| Adults and adolescents with LGV | Doxycycline 100 mg orally twice daily for 21 days is preferred over azithromycin 1 g orally weekly for 3 weeks |
| When doxycycline is contraindicated | Azithromycin 1 g orally weekly for 3 weeks should be provided as an alternative |
| When neither treatment is available | Erythromycin 500 mg orally 4 times a day for 21 days is an alternative option |
| Pregnancy | Doxycycline is contraindicated in pregnant women |
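The regimen-selection logic in the WHO recommendations above can be sketched as a small branching function. This is an illustrative sketch only, not clinical software; the function and parameter names are invented for this example.

```python
# Illustrative sketch of the WHO LGV regimen-selection logic summarized in
# Table 2 above. NOT clinical software; all names here are invented.

def select_lgv_regimen(doxycycline_contraindicated: bool,
                       azithromycin_available: bool,
                       pregnant: bool) -> str:
    """Return the preferred LGV regimen per the WHO table above."""
    # Doxycycline is preferred, but it is contraindicated in pregnancy.
    if not doxycycline_contraindicated and not pregnant:
        return "doxycycline 100 mg orally twice daily for 21 days"
    if azithromycin_available:
        return "azithromycin 1 g orally once weekly for 3 weeks"
    return "erythromycin 500 mg orally 4 times daily for 21 days"

print(select_lgv_regimen(False, True, False))  # doxycycline regimen
print(select_lgv_regimen(True, True, False))   # azithromycin alternative
print(select_lgv_regimen(True, False, False))  # erythromycin fallback
```

The branching mirrors the table's order of preference: doxycycline first, azithromycin when doxycycline cannot be used, and erythromycin when neither is available.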
Contributor Information and Disclosures

Author: John L Kiley, MD, Associate Program Director, SAUSHEC Infectious Disease Fellowship Program, Department of Medicine, Infectious Disease Service, Brooke Army Medical Center/San Antonio Military Medical Center, San Antonio Uniformed Services Health Education Consortium; Assistant Professor, Department of Medicine, Uniformed Services University of the Health Sciences; Adjoint Assistant Professor of Medicine, University of Texas Health Science Center at San Antonio, Long School of Medicine. John L Kiley, MD is a member of the following medical societies: American College of Physicians, Armed Forces Infectious Diseases Society, Infectious Diseases Society of America. Disclosure: Nothing to disclose.

Coauthor(s): Mark T Derasmo, MD, Infectious Disease Specialist, David Grant Medical Center. Mark T Derasmo, MD is a member of the following medical societies: Infectious Diseases Society of America, American College of Physicians. Disclosure: Nothing to disclose.

Specialty Editor Board: Francisco Talavera, PharmD, PhD, Adjunct Assistant Professor, University of Nebraska Medical Center College of Pharmacy; Editor-in-Chief, Medscape Drug Reference. Disclosure: Received salary from Medscape for employment.
Charles V Sanders, MD Edgar Hull Professor and Chairman, Department of Internal Medicine, Professor of Microbiology, Immunology and Parasitology, Louisiana State University School of Medicine in New Orleans; Medical Director, Medicine Hospital Center, Charity Hospital and Medical Center of Louisiana at New Orleans; Consulting Staff, Ochsner Medical Center Charles V Sanders, MD is a member of the following medical societies: Alliance for the Prudent Use of Antibiotics, Alpha Omega Alpha, American Association for Physician Leadership, American Association for the Advancement of Science, American Association of University Professors, American Clinical and Climatological Association, American College of Physicians, American Federation for Medical Research, American Geriatrics Society, American Lung Association, American Medical Association, American Society for Microbiology, American Thoracic Society, American Venereal Disease Association, Association for Professionals in Infection Control and Epidemiology, Association of American Medical Colleges, Association of American Physicians, Association of Professors of Medicine, Infectious Disease Society for Obstetrics and Gynecology, Infectious Diseases Society of America, Louisiana State Medical Society, Orleans Parish Medical Society, Royal Society of Medicine, Sigma Xi, The Scientific Research Honor Society, Society of General Internal Medicine, Southeastern Clinical Club, Southern Medical Association, Southern Society for Clinical Investigation, Southwestern Association of Clinical Microbiology, The Foundation for AIDS Research Disclosure: Receives royalties from Baxter International for: Takeda-receives royalties; UpToDate-receives royalties. 
Chief Editor Pranatharthi Haran Chandrasekar, MBBS, MD Professor, Chief of Infectious Disease, Department of Internal Medicine, Wayne State University School of Medicine Pranatharthi Haran Chandrasekar, MBBS, MD is a member of the following medical societies: American College of Physicians, American Society for Microbiology, International Immunocompromised Host Society, Infectious Diseases Society of America Disclosure: Nothing to disclose. Additional Contributors Pamela Arsove, MD, FACEP Associate Residency Director, Department of Emergency Medicine, Hofstra Northshore Long Island Jewish School of Medicine; Attending Physician, Department of Emergency Medicine, Long Island Jewish Medical Center; Assistant Professor, Department of Emergency Medicine, Northshore Long Island Jewish School of Medicine Pamela Arsove, MD, FACEP is a member of the following medical societies: American College of Emergency Physicians, American Medical Association, Phi Beta Kappa, Society for Academic Emergency Medicine Disclosure: Nothing to disclose. Barbara Edwards, MD Associate Physician, Division of Infectious Diseases, Department of Medicine, Long Island Jewish Medical Center; Assistant Professor, Department of Medicine, Albert Einstein College of Medicine of Yeshiva University Barbara Edwards, MD is a member of the following medical societies: American College of Physicians, Infectious Diseases Society of America, Society for Healthcare Epidemiology of America Disclosure: Nothing to disclose. Acknowledgements Kenneth C Earhart, MD Deputy Head, Disease Surveillance Program, United States Naval Medical Research Unit #3 Kenneth C Earhart, MD is a member of the following medical societies: American College of Physicians, American Society of Tropical Medicine and Hygiene, Infectious Diseases Society of America, and Undersea and Hyperbaric Medical Society Disclosure: Nothing to disclose. 
Alexandre F Migala, DO, Staff Physician, Department of Emergency Medicine, Denton Regional Medical Center. Disclosure: Nothing to disclose.

Copyright © 1994-2025 by WebMD LLC. This website also contains material copyrighted by 3rd parties.
3993
https://www.stroke.org/-/media/Files/Affiliates/MWA/MN-Chest-Pain-ACS-Tool-Kit.pdf
Minnesota Chest Pain / Acute Coronary Syndrome "Tool-Kit"

Patient with chest pain or potential acute coronary syndrome:
• STEMI: follow the MN STEMI Guideline
• Non-STEMI: follow the MN Non-STEMI Guideline
• Chest pain: follow the MN ED Chest Pain Guideline

Final Draft: June 12th, 2018. This ACS/Chest Pain "Tool-Kit" was created with coordination from the Minnesota Department of Health, in conjunction with the American Heart Association Minnesota Mission:Lifeline™ Workgroup. This information is intended only as a guideline. Please use your best judgement or newly published literature in the treatment of patients. The Minnesota Department of Health is not responsible for inaccuracies contained herein. No responsibility is assumed for damages or liabilities arising from accuracy, content error, or omission.

Table of Contents
1. Cover - Minnesota Chest Pain / Acute Coronary Syndrome Tool-Kit
2. Table of Contents
3. Minnesota STEMI Inter-Facility Transfer Guideline, page 1 of 2
4. Minnesota STEMI Inter-Facility Transfer Guideline, page 2 of 2
5. Minnesota EMS STEMI Transport Guideline, page 1 of 2
6. Minnesota EMS STEMI Transport Guideline, page 2 of 2
7. Minnesota EMS STEMI Transport Flowchart
8. Minnesota Non-STEMI Guideline
9. Minnesota Non-STEMI Flowchart
10. Minnesota ED Chest Pain Protocol
11. Minnesota ED Chest Pain Flowchart
12. Minnesota Low Risk Chest Pain Shared Decision-Making Tool
13. Minnesota Moderate Risk Chest Pain Shared Decision-Making Tool
14. Minnesota High Risk Chest Pain Shared Decision-Making Tool
15. Who Needs a 12-Lead ECG?
(Symptom and Age Algorithm)

Minnesota Non-STEMI Guideline - Flowchart

Patient meets any of the following criteria:
• HEART Score of 7-10
• ST depression or dynamic T-wave inversion strongly suspicious for ischemia
• Otherwise identified non-ST elevation acute coronary syndrome

Initial management:
• Admit to CCU or appropriate unit with cardiac telemetry (may require transfer)
• Consider Cardiology consult
• Start adjunctive treatments (as indicated/if no contraindications):
  o Aspirin 324 mg PO (give suppository if unable to take PO)
  o Ticagrelor 180 mg PO or Clopidogrel 600 mg PO (loading doses); Prasugrel 60 mg PO could also be considered, but note contraindications
  o Heparin 60 Units/kg (max 4,000 Units) IV bolus
  o Heparin 12 Units/kg/hr (max 1,000 Units/hr) IV infusion
  o Other medications as indicated per institutional AMI order set

Choose treatment strategy:

Early Invasive Strategy (Cath Lab)
• Prepare for cath lab; transfer if necessary by ground ambulance (air transfers should be reserved for STEMI)
• Insert 2 large-bore peripheral saline-lock IVs in the left arm
• Continue adjunctive treatments as above
• Consult Cardiology for additional treatments (i.e., beta-blocker, nitroglycerin, morphine, O2, etc.)

Ischemia-Guided Strategy (Medical Therapy)
• Continue adjunctive treatments as indicated
• Continue serum troponins q 3 hours x 3
• Continue serial ECGs; repeat PRN for recurring or worsening symptoms
• Obtain cardiac imaging study; consult Cardiology for appropriate test (i.e., echocardiography, CTA, radionuclide, etc.)

Then assess: Is therapy effective?
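The weight-based heparin dosing in the adjunctive treatment list above (60 Units/kg IV bolus capped at 4,000 Units; 12 Units/kg/hr infusion capped at 1,000 Units/hr) amounts to a simple capped calculation, sketched below. This is an illustrative arithmetic sketch only, not clinical software; the function names are invented for this example.

```python
# Illustrative sketch of the capped weight-based heparin dosing quoted in the
# Non-STEMI guideline above. NOT clinical software; names are invented.

def heparin_bolus_units(weight_kg: float) -> float:
    """60 Units/kg IV bolus, capped at 4,000 Units."""
    return min(60 * weight_kg, 4000)

def heparin_infusion_units_per_hr(weight_kg: float) -> float:
    """12 Units/kg/hr IV infusion, capped at 1,000 Units/hr."""
    return min(12 * weight_kg, 1000)

# A 70 kg patient: 60 x 70 = 4,200 exceeds the cap, so the bolus is 4,000 Units;
# the infusion is 12 x 70 = 840 Units/hr, which is under its cap.
print(heparin_bolus_units(70))            # 4000
print(heparin_infusion_units_per_hr(70))  # 840
```

The caps are what make this a guideline rule rather than plain proportional dosing: above roughly 67 kg the bolus saturates at 4,000 Units, and above roughly 83 kg the infusion saturates at 1,000 Units/hr.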
Late Hospital/Posthospital Care • Aspirin 81 mg PO once daily • ACE inhibitor or ARB • Beta-Blocker • High Intensity Statin • P2Y12 inhibitor per Cardiology - Up to 12 months if medically treated - At least 12 months if treated with drug eluting stent (DES) ° Ticagrelor 90 mg PO twice daily or ° Clopidogrel 75 mg once daily or ° Prasugrel 10 mg PO once daily (5 mg if ≤ 60 kg) • If switching to a different P2Y12 inhibitor, consider a full loading dose at the time the next dose would be due - Don’t Forget Cardiac Rehab Referral !!! Is CABG Appropriate? YES Final Draft 6.12.18 Assess Criteria for Early Invasive Strategy (Cath Lab) • High-risk features and patient a candidate for invasive angiography (PCI)? • Persistent or recurrent symptoms? • New ST-segment depression and positive serum Troponin(s)? • Depressed LV functional study that suggests multi-vessel CAD? • Hemodynamic instability or VT? Urgent Angiography For Intended PCI CABG Surgery • Continue Aspirin therapy • Consult CT Surgeon about stopping other therapies and timing - (i.e. when to hold antiplatelet) Yes Reconsider Invasive Strategy if Appropriate No No 9 10 Minnesota ED Chest Pain Protocol ...for patients presenting to an emergency department with chest pain (or equivalent) symptoms of a potential acute coronary syndrome (ACS) Patient with Chest Pain or Potential ACS Triaged in ED Stat 12-Lead ECG & IV Troponin Drawn Is ECG (+) for STEMI? Activate STEMI Alert & Follow STEMI Process Yes Follow Process for Intermediate/High Risk NSTE-ACS (Hospitalize) Wait for 1st Troponin Result Calculate HEART Score After 1st Troponin Result No Is HEART Score 0 - 3 & Initial Troponin Negative ? Use Low Risk Shared Decision Making Tool: Inform Patient That at This Point There Is Less Than 2% Chance (1.7%) of Adverse Cardiac Event in the next 4-6 weeks Does Patient Want a 2nd ECG & Troponin at Hour 2-3 of ED Stay? Is 2nd ECG & 2nd Troponin Negative? 
Refer to Low Risk Shared Decision Making Document: Inform Patient There Is Now Less Than 1% Chance of Adverse Cardiac Event in the next 4 weeks

Repeat ECG Immediately if Symptoms Change
Evolving Changes Noted on Repeated ECG?

Is HEART Score 4-6 & Initial Troponin Negative?
Yes: Continue to obtain serial ECGs and Troponins at hour 3 and 6 of stay; evaluate need for continued admission, or Stress Test within 72 hours

Discharge to Home & Follow Up With Provider Within 1 Week (No Stress Test)

Clarify Admission Disposition or Discharge to Home. Schedule Stress Test Within 72 Hours if Appropriate.

Follow Process for Intermediate/High Risk NSTE-ACS (Hospitalize) if HEART Score is 7-10 or Initial Troponin is Positive

Does Patient Want a 3rd ECG & Troponin at Hour 6 of ED or Hospital Stay?
Serial ECGs & Troponins all Negative?

Is ECG (+) for STEMI?

Consider Admission Under Observation vs. Inpatient Status (In Hospital or Observation Unit)

Optional Use of Moderate Risk Shared Decision Making Tool: Inform Patient That at This Point There May Be More Than a 13% Chance of Adverse Cardiac Event in the next 4-6 weeks

Optional Use of High Risk Shared Decision Making Tool: Inform Patient That at This Point There May Be More Than a 50% Chance of Adverse Cardiac Event in the next 4-6 weeks
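The risk bands quoted in the protocol above (HEART 0-3 with negative troponin: less than 2% (1.7%); HEART 4-6: may be more than 13%; HEART 7-10 or positive initial troponin: may be more than 50%, hospitalize) can be summarized in a small lookup. This helper is an illustrative sketch of the flowchart's branching, not a clinical tool:

```python
def heart_risk_branch(heart_score, initial_troponin_positive):
    """Map a HEART score and initial troponin result to the protocol
    branch and the quoted 4-6 week adverse cardiac event risk.
    Sketch of the flowchart only; not for clinical use."""
    if heart_score >= 7 or initial_troponin_positive:
        return ("Follow Process for Intermediate/High Risk NSTE-ACS (Hospitalize)",
                "may be more than 50%")
    if heart_score >= 4:
        return ("Serial ECGs/Troponins; consider admission or stress test within 72 hours",
                "may be more than 13%")
    return ("Low Risk Shared Decision Making", "less than 2% (1.7%)")

print(heart_risk_branch(2, False))
```

Note that a positive initial troponin routes to the hospitalize branch regardless of the HEART score, mirroring the "HEART Score is 7-10 Or Initial Troponin is Positive" box.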
3994
https://www.makerlessons.com/coretechs/simple-machines/inclined-plane
INCLINED PLANE
SIMPLE MACHINES: INCLINED PLANE

The inclined plane, or ramp, is a classic simple machine: a flat surface slanted at an angle, with one end higher than the other, used to help move things up or down. Inclined planes make it easier to move loads because you don't have to lift them straight up; instead, you push or pull them along the slanted surface. But there's a trade-off: you have to move the object over a longer distance. When you use an inclined plane, you need less force than when lifting something straight up, but because you're moving it along a longer path, it can take more time. How much easier it is to move something with an inclined plane depends on how long the slope is compared to how high it is. Even though you're doing the same amount of work, the inclined plane lets you do it with less force spread out over a longer distance. This makes it a handy tool for moving heavy things without needing as much muscle power. So, the next time you see a ramp, remember how it helps make lifting and lowering objects easier by spreading the effort out over a longer distance.

History of the Inclined Plane

The inclined plane, a fundamental and enduring concept in engineering and construction, has a rich history spanning millennia. From its early origins in ancient civilizations to its integral role in modern industry, the inclined plane has been utilized in a myriad of applications. It has facilitated the movement of heavy objects, simplified construction tasks, and contributed to technological progress. Let's explore the fascinating history of this simple yet ingenious mechanical device.

Early Civilizations: The concept of the inclined plane likely emerged independently in various ancient civilizations, including Mesopotamia, Egypt, and China.
Archaeological evidence suggests that ramps were used by these cultures for construction purposes, such as building pyramids, ziggurats, and other monumental structures.

Ancient Egypt: In ancient Egypt, ramps were extensively used in the construction of pyramids. Large stone blocks were transported and lifted into position using inclined planes constructed of earth, mudbrick, or stone. These ramps allowed workers to move heavy objects with relative ease by reducing the amount of vertical lifting required.

Ancient Greece and Rome: The Greeks and Romans further developed the use of inclined planes in engineering and construction. They employed ramps in the construction of temples, theaters, aqueducts, and other architectural marvels. The Greeks also recognized the mathematical principles underlying the mechanical advantage of inclined planes, laying the groundwork for later scientific understanding.

Medieval and Renaissance Europe: In medieval and Renaissance Europe, inclined planes continued to be utilized in various applications, including mining, agriculture, and transportation. Ramps were used in mining operations to transport ore and other materials from underground mines to the surface. In agriculture, they were employed for irrigation purposes and to facilitate the movement of goods.

Industrial Revolution: The Industrial Revolution marked a significant milestone in the history of the inclined plane. With advancements in engineering and manufacturing, inclined planes became integral components of machinery and transportation systems. They were used in factories and mills for moving materials and products along assembly lines and conveyor belts.

Modern Applications: Inclined planes remain an essential tool in contemporary engineering, construction, and transportation.
They are used in various industries for loading and unloading cargo, facilitating accessibility in buildings and infrastructure, and providing mechanical advantage in machinery and equipment design. Throughout history, the inclined plane has played a crucial role in human civilization, enabling the movement of heavy objects, simplifying construction and engineering tasks, and contributing to technological progress. Its versatility and simplicity make it one of the enduring principles of mechanical engineering.

Mechanical Advantage (MA)

The mechanical advantage (MA) of a simple machine is defined as the ratio of the output force exerted on the load to the input force applied. For the inclined plane, the output load force is just the gravitational force of the load object on the plane, its weight Fw. The input force is the force Fi exerted on the object, parallel to the plane, to move it up the plane. The mechanical advantage is therefore MA = Fw / Fi. The MA of an ideal inclined plane without friction is sometimes called the ideal mechanical advantage (IMA), while the MA when friction is included is called the actual mechanical advantage (AMA). The mechanical advantage of an inclined plane depends on its slope, meaning its gradient or steepness. The smaller the slope, the larger the mechanical advantage and the smaller the force needed to raise a given weight. A plane's slope (S) is equal to the difference in height between its two ends, or "rise", divided by its horizontal length, or "run". It can also be expressed by the angle the plane makes with the horizontal, θ; for a frictionless plane, IMA = 1 / sin θ.

Common Uses

Inclined planes are widely used in the form of loading ramps to load and unload goods on trucks, ships, and planes. Wheelchair ramps are used to allow people in wheelchairs to get over vertical obstacles without exceeding their strength. Escalators and slanted conveyor belts are also forms of the inclined plane.
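The mechanical-advantage relations in the section above (MA = Fw / Fi, slope as rise over run, and the frictionless case IMA = 1 / sin θ) can be checked with a short calculation. This is an illustrative sketch; the function name and sample numbers are not from the page:

```python
import math

def inclined_plane_ima(rise, run):
    """Ideal (frictionless) mechanical advantage of an inclined plane,
    from the rise/run definition of slope: IMA = slope length / rise,
    which equals 1 / sin(theta)."""
    slope_length = math.hypot(rise, run)   # length of the slanted surface
    theta = math.atan2(rise, run)          # angle with the horizontal
    ima = slope_length / rise
    # Sanity check: the two equivalent forms agree.
    assert math.isclose(ima, 1 / math.sin(theta))
    return ima

# A 1 m rise over a 3 m run: a gentler slope gives a larger advantage.
print(round(inclined_plane_ima(1, 3), 2))   # 3.16
```

Doubling the run while keeping the rise fixed lowers the force needed but lengthens the push, matching the trade-off described above.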
Inclined planes also allow heavy fragile objects, including humans, to be safely lowered down a vertical distance by using the normal force of the plane to reduce the gravitational force. Aircraft evacuation slides allow people to rapidly and safely reach the ground from the height of a passenger airliner. Other inclined planes are built into permanent structures. Roads for vehicles and railroads have inclined planes in the form of gradual slopes, ramps, and causeways to allow vehicles to surmount vertical obstacles such as hills without losing traction on the road surface. Similarly, pedestrian paths and sidewalks have gentle ramps to limit their slope, to ensure that pedestrians can keep traction. Inclined planes are also used as entertainment for people to slide down in a controlled way, in playground slides, water slides, ski slopes, and skateboard parks.

Copyright ©2024 Maker Lessons LLC, Makerlessons.com
3995
https://gre.magoosh.com/definitions/braggadocio
braggadocio Definition - Magoosh GRE

Definition for braggadocio
noun – A boasting fellow; a braggart.
noun – Empty boasting; brag: as, “tiresome braggadocio.”
noun – A braggart; a boaster; a swaggerer.
noun – Empty boasting; mere brag; pretension.
noun – A braggart.
noun – Empty boasting.
noun – vain and empty boasting

More definitions and example sentences on Wordnik
3996
https://www.picmonic.com/api/v3/picmonics/2683/pdf
Streptococcus agalactiae

Streptococcus agalactiae, commonly called group B streptococci (GBS), is a gram-positive coccus that can cause serious disease in newborns. This organism can be differentiated from other gram-positive cocci because it is beta-hemolytic, catalase-negative, and bacitracin-resistant. This organism also produces CAMP factor, which causes the area of hemolysis formed by beta hemolysin from Staphylococcus aureus to be enlarged. This organism can colonize the vagina of women and cause serious disease in newborns as they pass through the vaginal canal during delivery. Common neonatal manifestations include pneumonia, meningitis, and septicemia. Therefore, pregnant women are routinely screened for the presence of GBS in the vagina at 35-37 weeks, and women with positive cultures can receive intrapartum prophylactic treatment with IV penicillin during delivery.

Characteristics

Group B Streptococci (B) [Bee Stripper]: This bacterium is characterized by the presence of group B Lancefield antigen and is commonly called group B Streptococci (GBS).

Gram-Positive [Graham-cracker Positive-angel]: This organism stains positive on Gram stain due to the thick peptidoglycan layer, which absorbs crystal violet.

Cocci [Cockeyed]: This bacterium has a spherical shape.

Beta-Hemolytic [Beta-fish in Petri-dish]: Strep agalactiae typically produces large zones of beta-hemolysis, which is the complete lysis of red cells in the blood culture media.

Bacitracin-Resistant [Resisting Bass wearing Resistance-bandana]: Bacitracin can be used to distinguish Strep agalactiae from other beta-hemolytic Streptococci, like Strep pyogenes. Streptococcus pyogenes is bacitracin-sensitive, while Streptococcus agalactiae is bacitracin-resistant.

Catalase-Negative [Negative-cat]: Characteristically, Streptococcus agalactiae is catalase-negative, meaning it does not produce the enzyme catalase. This enzyme allows the bacterium to convert hydrogen peroxide to water and oxygen.
This characteristic helps distinguish Streptococci from catalase-positive Staphylococci.

Pyrrolidonyl Arylamidase (PYR) Negative [Negative Pyro]: Streptococcus agalactiae does not have activity of the enzyme pyrrolidonyl arylamidase. Thus, it produces a negative test that results in an orange or yellow color of the reagent. Streptococcus agalactiae is known to be pyrrolidonyl arylamidase-negative and serves as a negative control in this test.

Polysaccharide Capsule [Polly-sack Capsule]: An important virulence factor of this organism is its capsule, composed of polysaccharides. These bacterial capsules surround the bacterial cell and enhance the bacteria's ability to cause disease.

Hippurate Positive (+) [Positive Hippie-pirate]: The hippurate hydrolysis test is used to detect a bacteria's ability to hydrolyze hippurate into glycine and benzoic acid. This test serves as a presumptive identification test for Gardnerella vaginalis, Campylobacter jejuni, Listeria monocytogenes, and group B streptococci.

Produces CAMP Factor [Camping-tent]: A CAMP test is frequently used to identify group B streptococci based on their formation of CAMP factor. CAMP factor enlarges the area of hemolysis formed by beta-hemolysin from Staphylococcus aureus.

Enlarges Area of Hemolysis by S. aureus [Staff of Oreos]: A CAMP test is frequently used to identify group B streptococci based on their formation of CAMP factor. CAMP factor enlarges the area of hemolysis formed by beta-hemolysin from Staphylococcus aureus.

Disease Mainly in Babies [Baby]: S. agalactiae is commonly transferred to neonates during passage through the birth canal and can cause serious infections in infants, including pneumonia, meningitis, and sepsis.

Meningitis [Men-in-tights]: GBS infection in newborns can cause inflammation of the meninges. However, S. agalactiae neonatal meningitis typically does not present with the characteristic sign of a stiff neck.
Instead, infants typically present with nonspecific symptoms of fever, vomiting, and irritability. Hearing loss can be a long-term sequela.

Pneumonia [Nude-Mona]: This organism can invade the alveolar and pulmonary epithelial cells of infants when inhaled during vaginal delivery. Newborns are especially susceptible to infection due to the lack of alveolar macrophages.

Sepsis [Sepsis-snake]: This organism is a major cause of bacterial sepsis in newborns. Early onset sepsis is typically accompanied by pneumonia, while onset after seven days is accompanied more often by meningitis.

Colonizes Vagina [Vagina-violet]: S. agalactiae is a member of the GI normal flora in some people and can spread to secondary sites, including the vagina in approximately 20% of individuals. Colonization of the vagina is important clinically because it can be transferred to neonates during passage through the birth canal and cause serious infections.

Treatment

Screen Pregnant Women at 35-37 Weeks [Screen-door and Pregnant-woman with 35-37]: Pregnant individuals are routinely screened for the presence of S. agalactiae (GBS) in the vagina at 35-37 weeks. Individuals with positive cultures can receive intrapartum prophylactic treatment with IV penicillin during delivery.

Penicillin [Pencil-villain]: Individuals with positive cultures can receive intrapartum prophylactic treatment with IV penicillin during delivery.

Sign up for FREE at picmonic.com
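The screening window described above (routine GBS culture at 35-37 weeks' gestation) reduces to a simple range check. This helper and its name are illustrative only, not part of the study material:

```python
def gbs_screening_due(gestational_weeks):
    """True when gestational age falls in the routine GBS screening
    window of 35-37 weeks described above. A positive culture leads to
    intrapartum prophylactic IV penicillin during delivery."""
    return 35 <= gestational_weeks <= 37

print(gbs_screening_due(36))   # True
print(gbs_screening_due(30))   # False
```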
3997
https://chem.libretexts.org/Courses/University_of_Alberta_Augustana_Campus/AUCHE_252_-_Organic_Chemistry_II/03%3A_Alcohols_and_Ethers/3.08%3A_Conversion_of_Alcohols_to_Alkyl_Halides_with_SOCl2_and_PBr3
3.8: Conversion of Alcohols to Alkyl Halides with SOCl2 and PBr3

Last updated: Feb 1, 2024
Page ID: 470177
Layne Morsch, University of Illinois Springfield

The most common methods for converting 1º- and 2º-alcohols to the corresponding chloro and bromo alkanes (i.e., replacement of the hydroxyl group) are treatments with thionyl chloride and phosphorus tribromide, respectively. These reagents are generally preferred over the use of concentrated HX due to the harsh acidity of these hydrohalic acids and the carbocation rearrangements associated with their use. Synthetic organic chemists, when they want to convert an alcohol into a better leaving group, have several methods to choose from. One common strategy is to convert the alcohol into an alkyl chloride or bromide, using thionyl chloride or phosphorus tribromide:

Drawbacks to using PBr3 and SOCl2

Despite their general usefulness, phosphorus tribromide and thionyl chloride have shortcomings. Hindered 1º- and 2º-alcohols react sluggishly with the former, and may form rearrangement products, as noted in the following equation. Below, an abbreviated mechanism for the reaction is displayed. The initially formed trialkylphosphite ester may be isolated if the HBr byproduct is scavenged by base. In the presence of HBr, a series of acid-base and SN2 reactions take place, along with the transient formation of carbocation intermediates. Rearrangement (pink arrows) of the carbocations leads to isomeric products. Reaction of thionyl chloride with chiral 2º-alcohols has been observed to proceed with either inversion or retention. In the presence of a base such as pyridine, the intermediate chlorosulfite ester reacts to form a "pyridinium" salt, which undergoes a relatively clean SN2 reaction to the inverted chloride.
In ether and similar solvents the chlorosulfite reacts with retention of configuration, presumably by way of a tight or intimate ion pair. This is classified as an SNi reaction (nucleophilic substitution internal). The carbocation partner in the ion pair may also rearrange. These reactions are illustrated by the following equations. An alternative explanation for the retention of configuration, involving an initial solvent molecule displacement of the chlorosulfite group (as SO2 and chloride anion), followed by chloride ion displacement of the solvent moiety, has been suggested. In this case, two inversions lead to retention.

Example 1: Conversion of Alcohols to Alkyl Chlorides

There's one important thing to note here: see the stereochemistry? It's been inverted. (white lie alert – see below) That's an important difference between SOCl2 and TsCl, which leaves the stereochemistry alone. We'll get to the root cause of that in a moment, but in the meantime, can you think of a mechanism which results in inversion of configuration at carbon?

Formation of Alkyl Chlorides

Since the reaction proceeds through a backside attack (SN2), there is inversion of configuration at the carbon. The conversion of carboxylic acids to acid chlorides is similar, but proceeds through a [1,2]-addition of chloride ion to the carbonyl carbon followed by [1,2]-elimination to give the acid chloride.

Formation of Alkyl Bromides

The PBr3 reaction is thought to involve two successive SN2-like steps. Notice that these reactions result in inversion of stereochemistry in the resulting alkyl halide.

Organic Chemistry With a Biological Emphasis by Tim Soderberg (University of Minnesota, Morris); William Reusch, Professor Emeritus (Michigan State U.), Virtual Textbook of Organic Chemistry
3998
https://www.sciencedirect.com/topics/medicine-and-dentistry/hemoglobin-c-disease
Hemoglobin C Disease

In subject area: Medicine and Dentistry

Hemoglobin C disease is defined as a condition that manifests milder symptoms compared to sickle cell disease, characterized by mild splenomegaly and hemolysis, without the occurrence of vasoocclusive crises. (AI generated definition based on: Rodak's Hematology (Sixth Edition), 2020)

Hypochromic and Hemolytic Anemias
2021, Atlas of Diagnostic Hematology. Meenakshi Garg Bansal, Genevieve Marie Crane

Hemoglobin C

Hemoglobin C is a slightly less common structural hemoglobin variant, with the highest frequency in West Africa, potentially above 15% [40]. Homozygous inheritance appears to convey full resistance against malaria, with partial protection in heterozygotes, and with a less severe phenotype in homozygotes than is seen with sickle cell disease. This allele may have originated independently in Southeast Asia. In the United States, approximately 2% of Americans of African descent have hemoglobin C trait and 1 in 6000 have hemoglobin C disease. It may co-occur with β-thalassemia or the HbS mutation. As with HbS, the HbC mutation is also in codon 6 of the β-globin gene. This results in a conversion from glutamic acid to lysine. Crystals can be formed in vivo, which result in decreased deformability and reduced red blood cell life span. These red blood cells also demonstrate an increased loss of K+, resulting in dehydrated, spherocytic cells with increased hemoglobin concentration. This red blood cell dehydration is also marked in HbSC disease [41]. Patients with hemoglobin C disease may have mild to moderate splenomegaly, cholelithiasis, or other signs of chronic hemolysis. Pain crises are not a feature, and life span is not decreased.
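The codon 6 substitutions discussed above change the net charge of the β chain, which is why the variants separate on electrophoresis: glutamic acid (side-chain charge −1 at physiologic pH) is replaced by lysine (+1) in Hb C, a net change of +2 per chain, while the well-known β6 Glu→Val substitution of Hb S swaps in a neutral residue, a net change of +1. A tiny sketch of that arithmetic (the dictionary and function names are illustrative):

```python
# Side-chain charges at physiologic pH for the residues discussed in the text
SIDE_CHAIN_CHARGE = {"Glu": -1, "Lys": +1, "Val": 0}

def net_charge_change(wild_type, mutant):
    """Net charge change per beta chain for a single amino acid substitution."""
    return SIDE_CHAIN_CHARGE[mutant] - SIDE_CHAIN_CHARGE[wild_type]

print(net_charge_change("Glu", "Lys"))  # Hb C (beta-6 Glu->Lys): 2
print(net_charge_change("Glu", "Val"))  # Hb S (beta-6 Glu->Val): 1
```

The larger positive shift of Hb C is consistent with its distinct migration position relative to Hb A and Hb S described in the text.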
Laboratory values generally demonstrate a mild to moderate hemolytic anemia with hemoglobin in the range of 10 to 11 g/dL and potentially increased indirect bilirubin. On peripheral smear, HbC is characterized by target cells, spherocytes, and characteristic crystals, particularly in patients who are splenectomized. The red blood cells may be microcytic and hyperchromic with mild reticulocytosis. HbC can readily be distinguished from HbA and HbS by electrophoresis or liquid chromatography owing to the charge difference from the substituted lysine (see Fig. 3.6). In contrast to HbS, HbC does not differ in solubility compared with HbA, but in the homozygous state, HbC forms crystals when the red blood cells are incubated in hypertonic saline.

Hemoglobinopathies (structural defects in hemoglobin)
2020, Rodak's Hematology (Sixth Edition). Tim R. Randolph

Hemoglobin C

Hb C was the next hemoglobinopathy after Hb S to be described and in the United States is found almost exclusively in the African American population. Spaet and Ranney reported this disease in the homozygous state (Hb CC) in 1953 [8].

Prevalence, etiology, and pathophysiology

Hb C is found in 17% to 28% of people of West African extraction and in 2% to 3% of African Americans [4]. It is the most common nonsickling variant encountered in the United States and the third most common in the world [4]. Hb C is defined by the structural formula α2β2(6Glu→Lys), in which lysine replaces glutamic acid in position 6 of the β chain. Lysine has a +1 charge and glutamic acid has a −1 charge, so the result of this substitution is a net change in charge of +2, which has a different structural effect on the hemoglobin molecule and electrophoresis mobility pattern compared with the Hb S substitution. Similar to Hb S, Hb C forms polymers intracellularly.
Unlike Hb S polymers, however, Hb C polymers differ in structure and form under high oxygen tension. Hb S polymers are long and thin, whereas the polymers in Hb C form a short, thick crystal within the RBCs. As a result of electrostatic interactions between the positively charged β6-lysyl side chain and the negatively charged adjacent groups, Hb C is less soluble than Hb A in RBCs and crystallizes in the oxygenated state. Band 3 within the RBC membrane serves as a nucleation center for Hb C crystal formation [83]. The shorter Hb C crystal does not alter RBC shape to the extent that Hb S does, so there is less splenic sequestration and hemolysis.

Clinical features

Homozygous hemoglobin C disease (Hb CC) manifests as a milder disease compared with SCD. Mild splenomegaly and hemolysis may be present. In addition, vasoocclusive crises do not occur. Heterozygous hemoglobin C trait (Hb AC) is asymptomatic.

Laboratory diagnosis

A mild to moderate, normochromic, normocytic anemia occurs in homozygous Hb C disease. Occasionally, some microcytosis and mild hypochromia may be present. There is a marked increase in the number of target cells and a slight to moderate increase in the number of reticulocytes (2% to 3%), and nucleated RBCs may be present in peripheral blood. Hexagonal crystals of Hb C form within RBCs and may be seen on the peripheral blood film (Figure 24.10). Many crystals appear extracellularly with no evidence of a cell membrane. In some cells, hemoglobin is concentrated within the boundary of the crystal. Crystals are densely stained, vary in size, and appear oblong with pyramid-shaped or pointed ends. These crystals may be seen on wet preparations by washing RBCs and resuspending them in a solution of sodium citrate or hypertonic saline [12,83]. Hb C yields a negative result on the hemoglobin solubility test, and definitive diagnosis is made using electrophoresis, HPLC, or nucleic acid testing. No Hb A is present in Hb CC disease.
In addition, Hb C is present at levels of greater than 90%, with Hb F at less than 7% and Hb A2 at approximately 2%. In Hb AC trait, about 60% Hb A and 30% Hb C are present. On alkaline hemoglobin electrophoresis, Hb C migrates in the same position as Hb A2, Hb E, and Hb O-Arab (Figure 24.7). Hb C is separated from these other hemoglobins on citrate agar electrophoresis at an acid pH (Figure 24.7).

Treatment and prognosis

No specific treatment is required. This disorder becomes problematic only if infection occurs, if splenomegaly becomes painful, or if mild chronic hemolysis leads to gallbladder disease. Genetic counseling is recommended because Hb C in combination with Hb S results in SCD and can have severe symptoms similar to those of Hb SS. (From Rodak's Hematology, Sixth Edition, 2020, Tim R. Randolph)

Chapter: Hematology and coagulation, from Self-Assessment Q&A in Clinical Laboratory Science, III (2021), Zane D. Amenhotep

What is the best interpretation of the hemoglobin gel electrophoresis?
a. : hemoglobin S trait
b. : hemoglobin S disease
c. : hemoglobin C trait
d. : hemoglobin C disease
e. : hemoglobin SC disease

12. : A 57-year-old female presents with incidentally discovered leukocytosis (WBC > 90 × 10³/μL). Her peripheral blood smear reveals many immature granulocytes, including a relative expansion in the number of myelocytes, 5% basophils, and 2% blasts. Her hemoglobin concentration was slightly low and her platelet count was slightly increased. Which of the following diagnoses best fits the described blood smear morphology, and what is the likely molecular finding?
a. : acute promyelocytic leukemia with PML-RARA
b. : chronic myelogenous leukemia, BCR-ABL1 positive
c. : chronic neutrophilic leukemia with CSF3R mutation
d. : essential thrombocythemia with JAK2 V617F
e. : polycythemia vera with JAK2 exon 12 mutation

13. : A 19-year-old female college student presents with a sore throat and excessive malaise.
Her complete blood count reveals moderate leukocytosis (WBC count 24 × 10³/μL), and a review of her peripheral blood smear reveals a variety of atypical lymphocytes, including some large forms with variable nuclear borders and abundant deeply basophilic cytoplasm. Both monocytoid and plasmacytoid lymphocytes are noted. Which diagnosis fits best with this morphologic pattern?
a. : acute myeloid leukemia
b. : diffuse large B cell lymphoma
c. : chronic lymphocytic leukemia
d. : Epstein-Barr virus, infectious mononucleosis
e. : chronic myeloid leukemia

14. : All of the following are true of G6PD deficiency EXCEPT…
a. : Heinz bodies, which are denatured globin chains, may form in this condition.
b. : Oxidative stress may provoke a hemolytic episode.
c. : Antimalarial drugs are a well-described possible trigger of hemolytic episodes.
d. : The optimal time for quantitative measurement of G6PD is during an acute hemolytic episode.
e. : The ascorbate cyanide test is not specific for G6PD deficiency.

15. : A 9-year-old male with sickle cell disease presents with progressive fatigue, pallor, and shortness of breath. A complete blood count reveals a hemoglobin measurement of 4.2 g/dL, down from 7.4 g/dL two weeks ago. The patient's absolute reticulocyte count is low at 5 × 10³/μL. What is the likely explanation for the significant drop in this patient's hemoglobin?
a. : spurious result
b. : myelodysplastic syndrome
c. : transient erythroblastopenia of childhood
d. : parvovirus infection
e. : iron deficiency

16. : Which of the following combinations of blood and bone marrow findings is compatible with a deficiency of cobalamin (vitamin B12)?
a. : hypersegmented neutrophils, macrocytic anemia, hypercellular bone marrow
b. : hyposegmented neutrophils, macrocytic anemia, hypocellular bone marrow
c. : hyposegmented neutrophils, macrocytic anemia, hypercellular bone marrow
d. : hypersegmented neutrophils, microcytic anemia, hypercellular bone marrow
e. : hypersegmented neutrophils, microcytic anemia, hypocellular bone marrow

17. : A 5-year-old female with splenomegaly, anemia, and mild jaundice presents for evaluation. Her peripheral blood smear shows many red blood cells that are smaller in diameter than normal RBCs, lack central pallor, and stain more darkly red. Which of the following statements is true regarding this condition?
a. : In an osmotic fragility test, this patient's red blood cells (RBCs) would show a decreased osmotic fragility relative to normal RBCs.
b. : The Ham's (acid hemolysin) test would be helpful in confirming the diagnosis.
c. : Flow cytometry would have no role in making the diagnosis for this condition.
d. : This condition may involve abnormalities related to ankyrin, spectrin, or band 3.
e. : This condition is always severe.

18. : Which of the following porphyrias is commonly associated with both photosensitivity and hemolytic anemia?
a. : acute intermittent porphyria (AIP)
b. : congenital erythropoietic porphyria (CEP)
c. : variegate porphyria (VP)
d. : hereditary coproporphyria (HCP)
e. : X-linked porphyria

19. : A 32-year-old female presents with fatigue and no prior bleeding history. A complete blood count is performed and, although there is no evidence of anemia, the initial run on the hematology analyzer reveals unexpected thrombocytopenia. (From Self-Assessment Q&A in Clinical Laboratory Science, III, 2021, Zane D. Amenhotep)

Review article: Laboratory Medicine, from Physician Assistant Clinics (2019), James A. Van Rhee, MS, PA-C

Hemoglobin Disorders

Sickle cell anemia is an inherited disorder caused by a point mutation leading to a substitution of valine for glutamic acid in position 6 of the beta chain of hemoglobin. Membrane abnormalities from sickling and oxidative damage caused by hemoglobin S, along with impaired deformability of sickle cells, lead to splenic trapping and removal of cells. Some degree of intravascular hemolysis occurs as well. Sickle cell disease is more common in certain ethnic groups, including people of African descent (including African Americans), Hispanic Americans from Central and South America, and people of Middle Eastern, Asian, Indian, and Mediterranean descent. Sickle cell disease symptoms typically appear by 4 months of age. Symptoms include pallor, jaundice, bone pain, edema, and recurrent painful episodes and chronic organ disease secondary to vaso-occlusion. Laboratory testing reveals sickle cells on the peripheral blood smear, and hemoglobin electrophoresis reveals a predominance of hemoglobin S. Treatment consists of immunizations for S pneumoniae, Haemophilus influenzae, hepatitis B, and influenza. Hemoglobin C disease is one-fourth as frequent as sickle cell disease among African Americans. Laboratory testing reveals mild anemia and mild reticulocytosis. The predominant RBC abnormality on the peripheral smear is an abundance of target cells, and crystal-containing cells also may be seen (Fig. 8). Splenomegaly may be the only physical finding, and the frequency of acute painful episodes is approximately half that found in sickle cell disease, with a life expectancy 2 decades longer.18 Significant morbidity, however, can occur. The incidence of fatal bacterial infection is less than in sickle cell anemia, but there is an increased risk of S pneumoniae and H influenzae infection.
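The two point mutations described in this section (valine for glutamic acid in Hb S, lysine for glutamic acid in Hb C, both at β-chain position 6) can be illustrated with a minimal codon lookup. The three-entry table below covers only the codons relevant here and is an illustrative sketch, not a full genetic code:

```python
# Minimal codon table covering only the beta-globin codon 6 changes discussed here
CODON_TO_AA = {"GAG": "Glu", "GTG": "Val", "AAG": "Lys"}

def name_substitution(ref_codon, alt_codon, position=6):
    """Return a protein-level name such as 'Glu6Val' for a single codon change."""
    return f"{CODON_TO_AA[ref_codon]}{position}{CODON_TO_AA[alt_codon]}"

print(name_substitution("GAG", "GTG"))  # Hb S: Glu6Val
print(name_substitution("GAG", "AAG"))  # Hb C: Glu6Lys
```

Both variants alter the same GAG codon, which is why Hb S and Hb C alleles are, well, alleles: a single base change at the second position yields valine (Hb S), and one at the first position yields lysine (Hb C).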
(From Physician Assistant Clinics, 2019, James A. Van Rhee, MS, PA-C)

Chapter: Nonneoplastic Hematological Disorders, from Self-Assessment Questions for Clinical Molecular Genetics (2019), Haiying Meng

Answers

1. : A. Hemoglobin is the protein molecule in red blood cells that carries oxygen from the lungs to the body's tissues and returns carbon dioxide from the tissues back to the lungs. Hemoglobin is made up of four globin chains that are connected together. The normal adult hemoglobin (Hb) contains two alpha-globin chains and two beta-globin chains (HbA, α2β2). In fetuses and infants, beta-globin chains are not common and the hemoglobin is made up of two alpha chains and two gamma chains (HbF, α2γ2). As the infant grows, the gamma chains are gradually replaced by beta chains, forming the adult hemoglobin structure. Each globin chain contains an important iron-containing porphyrin compound termed heme. Embedded within the heme compound is an iron atom that is vital in transporting oxygen and carbon dioxide in our blood. The iron contained in hemoglobin is also responsible for the red color of blood. Hemoglobin also plays an important role in maintaining the shape of the red blood cells. Hemoglobin A2 (HbA2, α2δ2) is a normal variant of hemoglobin A that consists of two alpha and two delta chains and is found at low levels in normal human blood. Hemoglobin A2 may be increased in beta thalassemia or in people who are heterozygous for the beta thalassemia gene. Therefore, hemoglobin A has α2β2.

2. : B. Hemoglobin is the protein molecule in red blood cells that carries oxygen from the lungs to the body's tissues and returns carbon dioxide from the tissues back to the lungs. Hemoglobin is made up of four globin chains that are connected together. The normal adult hemoglobin (Hb) contains two alpha-globin chains and two beta-globin chains (HbA: α2β2).
In fetuses and infants, beta chains are not common and the hemoglobin is made up of two alpha chains and two gamma chains (HbF, α2γ2). As the infant grows, the gamma chains are gradually replaced by beta chains, forming the adult hemoglobin structure. Each globin chain contains an important iron-containing porphyrin compound called "heme." Embedded within the heme compound is an iron atom that is vital in transporting oxygen and carbon dioxide in the blood. The iron contained in hemoglobin is also responsible for the red color of blood. Hemoglobin also plays an important role in maintaining the shape of the red blood cells. Hemoglobin A2 (HbA2, α2δ2) is a normal variant of hemoglobin A that consists of two alpha and two delta chains and is found at low levels in normal human blood. Hemoglobin A2 may be increased in beta thalassemia or in people who are heterozygous for the beta thalassemia gene. Therefore, hemoglobin F has α2γ2.

3. : C. Hemoglobin is the protein molecule in red blood cells that carries oxygen from the lungs to the body's tissues and returns carbon dioxide from the tissues back to the lungs. Hemoglobin is made up of four globin chains that are connected together. The normal adult hemoglobin (Hb) contains two alpha-globin chains and two beta-globin chains (HbA, α2β2). In fetuses and infants, beta chains are not common and the hemoglobin molecule is made up of two alpha chains and two gamma chains (HbF, α2γ2). As the infant grows, the gamma chains are gradually replaced by beta chains, forming the adult hemoglobin structure. Each globin chain contains an important iron-containing porphyrin compound called "heme." Embedded within the heme compound is an iron atom that is vital in transporting oxygen and carbon dioxide in our blood. The iron contained in hemoglobin is also responsible for the red color of blood. Hemoglobin also plays an important role in maintaining the shape of the red blood cells.
Hemoglobin A2 (HbA2, α2δ2) is a normal variant of hemoglobin A that consists of two alpha and two delta chains and is found at low levels in normal human blood. Hemoglobin A2 may be increased in beta thalassemia or in people who are heterozygous for the beta-thalassemia gene. Therefore, hemoglobin A2 has α2δ2.

4. : A. Sickle cell anemia is more common in certain populations and ethnicities. It is most common in people of African descent, affecting 1 of every 375 African American infants. It is also common in people whose families come from South or Central America (especially Panama), the Caribbean islands, Mediterranean countries (such as Turkey, Greece, and Italy), India, and Saudi Arabia. Therefore, sickle cell disease is more common in African Americans than in the other populations in the list.

5. : D. All 50 U.S. states provide universal newborn screening (NBS) for sickle cell disease (SCD) because of its high morbidity and mortality rates. The primary purpose of screening is to identify infants with SCD, the most prevalent disorder included in neonatal screening panels. Screening also identifies infants with other hemoglobinopathies, hemoglobinopathy carriers, and, in some states, infants with alpha-thalassemia syndromes. The majority of NBS programs perform isoelectric focusing electrophoresis (IFE) of an eluate of dried blood spots. A few programs use HPLC, DNA testing, or cellulose acetate electrophoresis as the initial screening method. Specimens with abnormal screening results are retested using a second, complementary electrophoretic technique, HPLC, citrate agar, IEF, or DNA-based assays (not Sanger sequencing). Infants with hemoglobins that suggest SCD or other clinically significant hemoglobinopathies require confirmatory testing of a separate blood sample by age 6 weeks. The sensitivity and specificity of current screening methods are excellent, and 99% of U.S. infants at highest risk for sickle cell disease are born in states with universal screening. Sanger sequencing is a relatively more expensive and time-consuming test compared with IFE, HPLC, and the others mentioned in the question. Therefore, Sanger sequencing may be used to confirm the diagnosis, but it is not used as a screening test.

6. : F. In the human genome, there are two copies of hemoglobin alpha (HBA1 and HBA2), and one copy of hemoglobin beta (HBB). Sickle cell disease (SCD) is a group of autosomal recessive nonneoplastic hematological disorders associated with pathogenic variants in HBB and defined by the presence of predominantly hemoglobin S (HbS), or HbS along with other Hb variants that allow for HbS polymerization. Normal adult human hemoglobin is a heterotetramer composed of two α-hemoglobin chains and two β-hemoglobin chains. Hemoglobin S results from a single nucleotide variant in HBB, changing the sixth amino acid in the β-hemoglobin chain from glutamic acid to valine (Glu6Val). Sickle cell anemia (homozygous HbSS) accounts for 60%–70% of SCD in the United States. Other forms of SCD result from coinheritance of HbS with other abnormal β-globin chain variants, the most common forms being sickle hemoglobin C disease (HbSC) and two types of sickle β thalassemia (HbSβ+ thalassemia and HbSβ° thalassemia). Rarer forms result from coinheritance of other Hb variants such as D-Punjab, O-Arab, and E. The diagnosis of SCD is established by demonstrating the presence of significant quantities of HbS by isoelectric focusing (IEF), cellulose acetate electrophoresis, high-performance liquid chromatography (HPLC), or (less commonly) DNA analysis. Targeted mutation analysis is used to identify the common pathogenic variants of HBB associated with hemoglobin S, hemoglobin C, and additional rarer pathogenic variants. HBB sequence analysis may be used to detect pathogenic variants associated with β-thalassemia hemoglobin variants.
Gel electrophoresis or HPLC can differentiate these disorders from heterozygous carriers of the HbS pathogenic variant (HbAS). Therefore, SCD is caused by a mutation in the HBB gene.

7. : D. Sickle cell disease (SCD) is an inherited autosomal recessive disorder. Since the wife's brother had SCD, the wife had a 2/3 chance of being a carrier. SCD is most common in people of African descent and affects 1 of every 375 African American infants. It is also common in people whose families come from South or Central America (especially Panama), the Caribbean islands, Mediterranean countries (such as Turkey, Greece, and Italy), India, and Saudi Arabia, but it is not common in European Caucasians. Therefore, the chance of the husband being a carrier would be very low (<1%), and the chance of the couple's firstborn child having SCD would be less than 1% even though the wife had a 2/3 chance of being a carrier.

8. : D. Fetal hemoglobin (hemoglobin F, HbF, α2γ2) is the main oxygen-transporting protein in the human fetus during the last 7 months of development in the uterus and persists in the newborn until roughly 6 months of age. HbF differs most from adult hemoglobin (HbA) in that it is able to bind oxygen with greater affinity than HbA, giving the developing fetus better access to oxygen from the maternal bloodstream. When fetal hemoglobin production is switched off after birth, normal children begin producing HbA. Children with sickle cell disease (SCD) instead begin producing a defective form of hemoglobin, hemoglobin S (HbS). The defective red blood cells have a greater tendency to lead to vaso-occlusive episodes. As long as HbF remains the predominant form of hemoglobin after birth, the number of painful episodes remains low in patients with SCD. In older infants the amount of HbS will increase as HbF decreases. By 2 years of age, the amount of HbS and HbF stabilizes.
By high-performance liquid chromatography (HPLC), most patients with SCD will have HbS and HbF, but no HbA. Hemoglobin A2 (HbA2) is a normal variant of hemoglobin A that consists of two alpha and two delta chains (α2δ2) and is found at low levels in normal human blood (1.5%–3.1% of all Hb molecules in adults) and is increased in people with SCD. It is also known that homozygous HbS accounts for 60%–70% of SCD in the United States. Other forms of SCD result from coinheritance of HbS with other abnormal β-globin-chain variants, the most common forms being sickle hemoglobin C disease (HbSC) and two types of sickle β thalassemia (HbSβ+ thalassemia and HbSβ° thalassemia). Rarer forms result from coinheritance of other Hb variants such as D-Punjab, O-Arab, and E. Therefore, HbS and HbF are the most common hemoglobins detectable by HPLC in infants with HbSS, but others may appear (HbA2 is not identifiable because of its very low concentration in patients' blood).

9. : D. Fetal hemoglobin (hemoglobin F, HbF, or α2γ2) is the main oxygen-transporting protein in the human fetus during the last 7 months of development in the uterus and persists in the newborn until roughly 6 months old. HbF differs most from adult hemoglobin (HbA) in that it is able to bind oxygen with greater affinity than HbA, giving the developing fetus better access to oxygen from the maternal bloodstream. When fetal hemoglobin production is switched off after birth, normal children begin producing HbA. Children with sickle cell disease (SCD) instead begin producing a defective form of hemoglobin, hemoglobin S (HbS). The defective red blood cells have a greater tendency to lead to vaso-occlusive episodes. As long as HbF remains the predominant form of hemoglobin after birth, the number of painful episodes remains low in patients with SCD. In older infants, the amount of HbS will increase as HbF decreases.
Hemoglobin A2 (HbA2) is a normal variant of hemoglobin A that consists of two alpha and two delta chains (α2δ2) and is found at low levels in normal human blood (1.5%–3.1% of all hemoglobin molecules in adults) and is increased in people with SCD. Since this patient had sickle β°-thalassemia (HbSβ°-thalassemia), he did not have beta chains with which to synthesize HbA. Therefore, it would be most likely that he only had HbS and HbF in his blood (HbA2 is not identifiable because of its very low concentration in patients' blood).

10. : D. Fetal hemoglobin (hemoglobin F, HbF, or α2γ2) is the main oxygen-transporting protein in the human fetus during the last 7 months of development in the uterus and persists in the newborn until roughly 6 months of age. HbF differs most from adult hemoglobin (HbA) in that it is able to bind oxygen with greater affinity than HbA, giving the developing fetus better access to oxygen from the maternal bloodstream. When fetal hemoglobin production is switched off after birth, normal children begin producing HbA. Children with sickle cell disease (SCD) instead begin producing a defective form of hemoglobin, hemoglobin S (HbS). The defective red blood cells have a greater tendency to lead to vaso-occlusive episodes. As long as HbF remains the predominant form of hemoglobin after birth, the number of painful episodes remains low in patients with SCD. In older infants, the amount of HbS will increase as HbF decreases. Hemoglobin A2 (HbA2) is a normal variant of hemoglobin A that consists of two alpha and two delta chains (α2δ2) and is found at low levels in normal human blood (1.5%–3.1% of all hemoglobin molecules in adults) and is increased in people with SCD. Since this patient had homozygous HbS, he did not have beta chains with which to synthesize HbA. Therefore, it would be most likely that he only had HbS and HbF in his blood (HbA2 is not identifiable because of its very low concentration in patients' blood).

11. : C.
Fetal hemoglobin (hemoglobin F, HbF, or α2γ2) is the main oxygen-transporting protein in the human fetus during the last 7 months of development in the uterus and persists in the newborn until roughly 6 months of age. HbF differs most from adult hemoglobin (HbA) in that it is able to bind oxygen with greater affinity than HbA, giving the developing fetus better access to oxygen from the maternal bloodstream. When fetal hemoglobin production is switched off after birth, normal children begin producing HbA. Children with sickle cell disease (SCD) instead begin producing a defective form of hemoglobin, hemoglobin S (HbS). The defective red blood cells have a greater tendency to lead to vaso-occlusive episodes. As long as HbF remains the predominant form of hemoglobin after birth, the number of painful episodes remains low in patients with SCD. In older infants, the amount of HbS will increase as HbF decreases. Hemoglobin A2 (HbA2) is a normal variant of hemoglobin A that consists of two alpha and two delta chains (α2δ2) and is found at low levels in normal human blood (1.5%–3.1% of all hemoglobin molecules in adults) and is increased in people with SCD. Since this patient had sickle β+-thalassemia (HbSβ+-thalassemia), he had some beta chains with which to synthesize HbA. Therefore, it would be most likely that he had HbS, HbF, and HbA in his blood (HbA2 is not identifiable because of its very low concentration in patients' blood).

12. : B. Fetal hemoglobin (hemoglobin F, HbF, or α2γ2) is the main oxygen-transporting protein in the human fetus during the last 7 months of development in the uterus and persists in the newborn until roughly 6 months of age. HbF differs most from adult hemoglobin (HbA) in that it is able to bind oxygen with greater affinity than HbA, giving the developing fetus better access to oxygen from the maternal bloodstream. When fetal hemoglobin production is switched off after birth, normal children begin producing HbA.
Children with sickle cell disease (SCD) instead begin producing a defective form of hemoglobin, hemoglobin S (HbS). The defective red blood cells have a greater tendency to lead to vaso-occlusive episodes. As long as HbF remains the predominant form of hemoglobin after birth, the number of painful episodes remains low in patients with SCD. In older infants, the amount of HbS will increase as HbF decreases. Hemoglobin A2 (HbA2) is a normal variant of hemoglobin A that consists of two alpha and two delta chains (α2δ2) and is found at low levels in normal human blood (1.5%–3.1% of all hemoglobin molecules in adults) and is increased in people with SCD. Since this patient had sickle hemoglobin C disease (HbSC), he did not have normal beta chains with which to synthesize HbA. Therefore, it would be most likely that he had HbS, HbF, and HbC in his blood (HbA2 is not identifiable because of its very low concentration in patients' blood).

13. : A. Sickle cell disease (SCD) is characterized by intermittent vaso-occlusive events and chronic hemolytic anemia. Vaso-occlusive events result in tissue ischemia, leading to acute and chronic pain as well as damage to any organ in the body, including the bones, lungs, liver, kidneys, brain, eyes, and joints. Dactylitis (pain and/or swelling of the hands or feet) in infants and young children is often the earliest manifestation of SCD. SCD is an autosomal recessive disorder encompassing a group of symptomatic diseases associated with pathogenic variants in the HBB gene on 11p15.4 and defined by the presence of predominantly hemoglobin S (HbS), or HbS along with other Hb variants that allow for HbS polymerization. Normal adult hemoglobin is a heterotetramer composed of two α-hemoglobin chains and two β-hemoglobin chains (HbA, α2β2). HbS results from a single nucleotide variant in HBB, p.Glu6Val. Sickle cell anemia (homozygous HbSS) accounts for 60%–70% of SCD in the United States.
Other forms of SCD result from coinheritance of HbS with other abnormal β-globin chain variants, the most common forms being sickle hemoglobin C disease (HbSC) and two types of sickle β thalassemia (HbSβ+ thalassemia and HbSβ° thalassemia). If a patient has two copies of the HbS variant, he/she does not have normal β-globin (HBB) protein products. Therefore, no HbA (α2β2) can be synthesized in this patient.

14. : C. Sickle cell disease (SCD) is characterized by intermittent vaso-occlusive events and chronic hemolytic anemia. Vaso-occlusive events result in tissue ischemia, leading to acute and chronic pain as well as organ damage that can affect any organ in the body, including the bones, lungs, liver, kidneys, brain, eyes, and joints. Dactylitis (pain and/or swelling of the hands or feet) in infants and young children is often the earliest manifestation of SCD. SCD is an autosomal recessive disorder encompassing a group of symptomatic diseases associated with pathogenic variants in the HBB gene on 11p15.4 and defined by the presence of predominantly hemoglobin S (HbS), or HbS along with other Hb variants that allow for HbS polymerization. Normal adult hemoglobin is a heterotetramer composed of two α-hemoglobin chains and two β-hemoglobin chains (HbA, α2β2). HbS results from a single nucleotide variant in HBB, p.Glu6Val. Sickle cell anemia (homozygous HbSS) accounts for 60%–70% of SCD in the United States. Other forms of SCD result from coinheritance of HbS with other abnormal β-globin chain variants, the most common forms being sickle hemoglobin C disease (HbSC) and two types of sickle β thalassemia (HbSβ+ thalassemia and HbSβ° thalassemia). Patients with β+ thalassemia have reduced levels of normal β-globin chains. Patients with β° thalassemia have no β-globin chain synthesis. Therefore, the newborn in this question would most likely have HbSβ+ thalassemia since he/she had HbF, HbS, and HbA.

15. : D.
Sickle cell disease (SCD) is an autosomal recessive disease encompassing a group of symptomatic disorders associated with pathogenic variants in the HBB gene on 11p15.4 and defined by the presence of predominantly hemoglobin S (HbS), or HbS along with other Hb variants that allow for HbS polymerization. Normal adult hemoglobin is a heterotetramer composed of two α-hemoglobin chains and two β-hemoglobin chains (HbA, α2β2). HbS results from a single nucleotide variant in HBB, p.Glu6Val. Sickle cell anemia (homozygous HbSS) accounts for 60%–70% of SCD in the United States. Other forms of SCD result from coinheritance of HbS with other abnormal β-globin chain variants, the most common forms being sickle hemoglobin C disease (HbSC) and two types of sickle β thalassemia (HbSβ+ thalassemia and HbSβ° thalassemia). Patients with β+ thalassemia have reduced levels of normal β-globin chains. Patients with β° thalassemia have no β-globin chain synthesis. The newborn in this question had only HbF and HbS, so he might have sickle cell anemia (homozygous HbSS) or sickle β° thalassemia. The p.Glu6Lys variant in the HBB gene results in HbC. The p.Glu26Lys variant in the HBB gene results in HbE. Therefore, it would be most likely that he/she was homozygous for the p.Glu6Val variant related to HbS in HBB.

16. : B. Sickle cell disease (SCD) is an autosomal recessive disease encompassing a group of symptomatic disorders associated with pathogenic variants in the HBB gene on 11p15.4 and defined by the presence of predominantly hemoglobin S (HbS), or HbS along with other Hb variants that allow for HbS polymerization. Normal adult hemoglobin is a heterotetramer composed of two α-hemoglobin chains and two β-hemoglobin chains (HbA, α2β2). HbS results from a single nucleotide variant in HBB, p.Glu6Val. Sickle cell anemia (homozygous HbSS) accounts for 60%–70% of SCD in the United States.
Other forms of SCD result from coinheritance of HbS with other abnormal β-globin chain variants, the most common forms being sickle hemoglobin C disease (HbSC) and two types of sickle β thalassemia (HbSβ+ thalassemia and HbSβ° thalassemia). Patients with β+ thalassemia have reduced levels of normal β-globin chains. Patients with β° thalassemia have no β-globin chain synthesis. The newborn in this question had only HbF and HbS, so he might have sickle cell anemia (homozygous HbSS) or sickle β° thalassemia. The p.Glu6Lys variant results in HbC. The p.Glu26Lys variant results in HbE. The p.Glu121Lys variant results in HbO. Therefore, it would be most likely that he/she was homozygous for the p.Glu6Val variant in HBB.

17. : D. Sickle cell disease (SCD) is an autosomal recessive disease encompassing a group of symptomatic disorders associated with pathogenic variants in the HBB gene on 11p15.4 and defined by the presence of predominantly hemoglobin S (HbS), or HbS along with other Hb variants that allow for HbS polymerization. Normal adult hemoglobin is a heterotetramer composed of two α-hemoglobin chains and two β-hemoglobin chains (HbA, α2β2). HbS results from a single nucleotide variant in HBB, p.Glu6Val. Sickle cell anemia (homozygous HbSS) accounts for 60% to 70% of SCD in the United States. Other forms of SCD result from coinheritance of HbS with other abnormal β-globin chain variants, the most common forms being sickle hemoglobin C disease (HbSC) and two types of sickle β-thalassemia (HbSβ+ thalassemia and HbSβ° thalassemia). Patients with β+ thalassemia have reduced levels of normal β-globin chains. Patients with β° thalassemia have no β-globin chain synthesis. This newborn had HbF, HbS, and HbC, so he has sickle hemoglobin C disease (HbSC). The p.Glu6Lys variant results in HbC. Therefore, it would be most likely that he/she had compound heterozygous p.Glu6Val and p.Glu6Lys variants in HBB.

18. : B.
Sickle cell disease (SCD) is an autosomal recessive disease encompassing a group of symptomatic disorders associated with pathogenic variants in the HBB gene on 11p15.4 and defined by the presence of predominantly hemoglobin S (HbS), or HbS along with other Hb variants that allow for HbS polymerization. Normal adult hemoglobin is a heterotetramer composed of two α-hemoglobin chains and two β-hemoglobin chains (HbA, α2β2). HbS results from a single nucleotide variant in HBB, p.Glu6Val. Sickle cell anemia (homozygous HbSS) accounts for 60%–70% of SCD in the United States. Other forms of SCD result from coinheritance of HbS with other abnormal β-globin chain variants, the most common forms being sickle hemoglobin C disease (HbSC) and two types of sickle β thalassemia (HbSβ+ thalassemia and HbSβ° thalassemia). Patients with β+ thalassemia have reduced levels of normal β-globin chains. Patients with β° thalassemia have no β-globin chain synthesis. This newborn had HbF, HbS, and HbC, so he has sickle hemoglobin C disease (HbSC). The p.Glu6Lys variant results in HbC. The p.Glu26Lys variant results in HbE. Therefore, it would be most likely that he/she had compound heterozygous p.Glu6Val and p.Glu6Lys variants in HBB.

19. : F. Sickle cell disease (SCD) is an autosomal recessive disease encompassing a group of symptomatic disorders associated with pathogenic variants in the HBB gene on 11p15.4 and defined by the presence of predominantly hemoglobin S (HbS), or HbS along with other Hb variants that allow for HbS polymerization. Normal adult hemoglobin is a heterotetramer composed of two α-hemoglobin chains and two β-hemoglobin chains (HbA, α2β2). HbS results from a single nucleotide variant in HBB, p.Glu6Val. Sickle cell anemia (homozygous HbSS) accounts for 60%–70% of SCD in the United States.
Other forms of SCD result from coinheritance of HbS with other abnormal β-globin chain variants, the most common forms being sickle hemoglobin C disease (HbSC) and two types of sickle β thalassemia (HbSβ+ thalassemia and HbSβ° thalassemia). Patients with β+ thalassemia have reduced levels of normal β-globin chains. Patients with β° thalassemia have no β-globin chain synthesis. Most SCD results from the following variants:

- Hemoglobin S (HbS; Glu6Val pathogenic variant)
- Hemoglobin C (HbC; Glu6Lys pathogenic variant)
- Hemoglobin D (D-Punjab; Glu121Gln pathogenic variant)
- Hemoglobin E (HbE; Glu26Lys pathogenic variant)
- Hemoglobin O (O-Arab; Glu121Lys pathogenic variant)

This newborn had only HbF, HbS, and HbE, so he has sickle hemoglobin E disease (HbSE). Therefore, it would be most likely that he/she had compound heterozygous p.Glu6Val and p.Glu26Lys variants in HBB. 20. : D. Sickle cell disease (SCD) is an autosomal recessive disease encompassing a group of symptomatic disorders associated with pathogenic variants in the HBB gene on 11p15.4 and defined by the presence of predominantly hemoglobin S (HbS), or HbS along with other Hb variants that allow for HbS polymerization. Normal adult hemoglobin is a heterotetramer composed of two α-hemoglobin chains and two β-hemoglobin chains (HbA, α2β2). HbS results from a single nucleotide variant in HBB, p.Glu6Val. Sickle cell anemia (homozygous HbSS) accounts for 60%–70% of SCD in the United States. Other forms of SCD result from coinheritance of HbS with other abnormal β-globin chain variants, the most common forms being sickle hemoglobin C disease (HbSC) and two types of sickle β thalassemia (HbSβ+ thalassemia and HbSβ° thalassemia). Patients with β+ thalassemia have reduced levels of normal β-globin chains.
Patients with β° thalassemia have no β-globin chain synthesis. Most SCD results from the following variants:

- Hemoglobin S (HbS; Glu6Val pathogenic variant)
- Hemoglobin C (HbC; Glu6Lys pathogenic variant)
- Hemoglobin D (D-Punjab; Glu121Gln pathogenic variant)
- Hemoglobin E (HbE; Glu26Lys pathogenic variant)
- Hemoglobin O (O-Arab; Glu121Lys pathogenic variant)

This newborn had only HbF, HbC, and HbE, so he has hemoglobin CE disease (compound heterozygosity for HbC and HbE) rather than a sickle syndrome. Therefore, it would be most likely that he/she had compound heterozygous p.Glu6Lys and p.Glu26Lys variants in HBB. 21. : C. All 50 U.S. states provide universal newborn screening (NBS) for sickle cell disease (SCD) to identify infants with SCD because of the high morbidity and mortality rate. Screening also identifies infants with other hemoglobinopathies, hemoglobinopathy carriers, and in some states, infants with alpha-thalassemia syndromes. The majority of NBS programs perform isoelectric focusing (IEF) of an eluate of dried blood spots. A few programs use high-performance liquid chromatography (HPLC), DNA testing, or cellulose acetate electrophoresis as the initial screening method. Specimens with abnormal screening results are retested using a second, complementary electrophoretic technique, HPLC, citrate agar, IEF, or DNA-based assay. Infants with hemoglobins that suggest SCD or other clinically significant hemoglobinopathies require confirmatory testing of a separate blood sample within 6 weeks. The sensitivity and specificity of current NBS methods are excellent, and 99% of U.S. infants at highest risk for SCD are born in states with universal screening. Therefore, it would be most likely that the complementary test was done within 6 weeks. 22. : B.
Sickle cell disease (SCD) is an autosomal recessive disease encompassing a group of symptomatic disorders associated with pathogenic variants in the HBB gene on 11p15.4 and defined by the presence of predominantly hemoglobin S (HbS), or HbS along with other Hb variants that allow for HbS polymerization. Normal adult hemoglobin is a heterotetramer composed of two α-hemoglobin chains and two β-hemoglobin chains (HbA, α2β2). HbS results from a single nucleotide variant in HBB, p.Glu6Val. Sickle cell anemia (homozygous HbSS) accounts for 60%–70% of SCD in the United States. Patients with sickle cell anemia (homozygous HbSS) have no normal β-globin chain synthesis. Other forms of SCD result from coinheritance of HbS with other abnormal β-globin chain variants, the most common forms being sickle hemoglobin C disease (HbSC) and two types of sickle β thalassemia (HbSβ+ thalassemia and HbSβ° thalassemia). Patients with β+ thalassemia have reduced levels of normal β-globin chains. Patients with β° thalassemia have no β-globin chain synthesis. This newborn had only HbF and HbS, so he may have sickle cell anemia (homozygous HbSS) or sickle β° thalassemia. Therefore, it would be most likely that he was offered a molecular genetic study of the HBB gene to distinguish sickle cell anemia (homozygous HbSS) from sickle β° thalassemia as the next step in the workup for a diagnosis. 23. : B. Sickle cell disease (SCD) is an autosomal recessive disease encompassing a group of symptomatic disorders associated with pathogenic variants in the HBB gene on 11p15.4 and defined by the presence of predominantly hemoglobin S (HbS), or HbS along with other Hb variants that allow for HbS polymerization. Normal adult hemoglobin is a heterotetramer composed of two α-hemoglobin chains and two β-hemoglobin chains (HbA, α2β2). HbS results from a single nucleotide variant in HBB, p.Glu6Val. Sickle cell anemia (homozygous HbSS) accounts for 60%–70% of SCD in the United States.
Patients with sickle cell anemia (homozygous HbSS) have no normal β-globin chain synthesis. Other forms of SCD result from coinheritance of HbS with other abnormal β-globin chain variants, the most common forms being sickle hemoglobin C disease (HbSC) and two types of sickle β thalassemia (HbSβ+ thalassemia and HbSβ° thalassemia). Patients with β+ thalassemia have reduced levels of normal β-globin chains. Patients with β° thalassemia have no β-globin chain synthesis. Regardless of the outcome of testing in the newborn period, additional testing that should be done at about age 1 year (once HbF levels have fallen) includes a CBC, reticulocyte count, some type of electrophoresis or high-performance liquid chromatography (HPLC), a measure of iron status, and an inclusion body preparation with brilliant cresyl blue (BCB) stain. Together, these help to determine whether there is a coexisting thalassemia component, and if so, whether it is α thalassemia or β thalassemia. This is important for genetic counseling and for providing insight into disease-specific outcomes. Therefore, it is appropriate for this patient to have a checkup at 1 year if there are no acute symptoms. 24. : C. Fetal hemoglobin (HbF, α2γ2) is the main oxygen-transporting protein in the human fetus during the last 7 months of development in the uterus and persists in the newborn until roughly 6 months of age. Functionally, HbF differs most from adult hemoglobin (HbA, α2β2) in that it is able to bind oxygen with greater affinity than the adult form, giving the developing fetus better access to oxygen from the mother’s bloodstream. In newborns, HbF (α2γ2) is nearly completely replaced by HbA (α2β2) by approximately 6 months after birth, except in a few thalassemia cases, in which there may be a delay in cessation of HbF production until 3–5 years of age.
In adults, HbF (α2γ2) production can be reactivated pharmacologically by hydroxycarbamide (Hydrea), which is useful in the treatment of diseases such as sickle cell disease (SCD). It is the only FDA-approved therapy for sickle cell disease. Hemoglobin A2 (HbA2, α2δ2) is a normal variant of HbA that consists of two alpha and two delta chains and is found at low levels in normal human blood. HbA2 may be increased in beta thalassemia or in people who are heterozygous for the beta-thalassemia gene. HbA2 exists in small amounts in all adult humans (1.5%–3.1% of all hemoglobin molecules) and is increased in people with SCD. Hemoglobin Portland (Hb Portland, ζ2γ2) exists at low levels during embryonic and fetal life and is composed of two zeta chains and two gamma chains. Therefore, it would be most likely that HbF was induced pharmacologically to treat SCD in this adolescent. 25. : C. Fetal hemoglobin (HbF, α2γ2) is the main oxygen-transporting protein in the human fetus during the last 7 months of development in the uterus and persists in the newborn until roughly 6 months of age. Functionally, HbF differs most from adult hemoglobin (HbA, α2β2) in that it is able to bind oxygen with greater affinity than the adult form, giving the developing fetus better access to oxygen from the mother’s bloodstream. In newborns, HbF (α2γ2) is nearly completely replaced by HbA (α2β2) by approximately 6 months postnatally, except in a few thalassemia cases in which there may be a delay in cessation of HbF production until 3–5 years of age. In adults, HbF (α2γ2) production can be reactivated pharmacologically by hydroxycarbamide (Hydrea), which is useful in the treatment of diseases such as sickle cell disease (SCD). It is the only FDA-approved therapy for sickle cell disease. Hemoglobin A2 (HbA2, α2δ2) is a normal variant of HbA that consists of two alpha and two delta chains and is found at low levels in normal human blood.
HbA2 may be increased in beta thalassemia or in people who are heterozygous for the beta-thalassemia gene. HbA2 exists in small amounts in all adult humans (1.5%–3.1% of all Hb molecules) and is increased in people with SCD. Hemoglobin Portland (Hb Portland, ζ2γ2) exists at low levels during embryonic and fetal life and is composed of two zeta chains and two gamma chains. Therefore, it would be most likely that α2γ2 was induced pharmacologically to treat SCD in this adolescent. 26. : C. Fetal hemoglobin (HbF, α2γ2) is the main oxygen-transporting protein in the human fetus during the last 7 months of development in the uterus and persists in the newborn until roughly 6 months of age. Functionally, HbF differs most from adult hemoglobin (HbA, α2β2) in that it is able to bind oxygen with greater affinity than the adult form, giving the developing fetus better access to oxygen from the mother’s bloodstream. In newborns, HbF is nearly completely replaced by HbA (α2β2) by approximately 6 months after birth, except in a few thalassemia cases in which there may be a delay in cessation of HbF production until 3–5 years of age. In adults, HbF (α2γ2) production can be reactivated pharmacologically by hydroxycarbamide (Hydrea), which is useful in the treatment of diseases such as sickle cell disease (SCD). It is the only FDA-approved therapy for sickle cell disease. All-trans retinoic acid (ATRA) is the first-line drug for patients with acute promyelocytic leukemia (APML, APL). Vemurafenib (Zelboraf) and dabrafenib (Tafinlar) are drugs for patients with BRAF V600E mutation–positive metastatic melanoma, thyroid cancer, lung cancer, and so forth. Gleevec (imatinib) is the medicine for patients with t(9;22)-positive chronic myeloid leukemia (CML). Therefore, it would be most likely that the physician prescribed hydroxyurea to treat SCD in this adolescent. 27. : A.
Sickle cell disease (SCD) encompasses a group of symptomatic disorders associated with pathogenic variants in HBB and is defined by the presence of predominantly hemoglobin S (HbS), or HbS along with other Hb variants that allow for HbS polymerization. Normal human hemoglobin is a heterotetramer composed of two α-hemoglobin chains and two β-hemoglobin chains. Hemoglobin S results from a single nucleotide variant in HBB, resulting in the sixth amino acid in the β-hemoglobin chain changing from glutamic acid to valine (p.Glu6Val). In hemoglobin SS disease, HbS is the predominant hemoglobin. The alpha chain is normal. The disease-causing mutation exists in the beta chain, giving the molecule the structure α2βS2. Patients with hemoglobin SS disease have no normal β-globin chain synthesis. Hemoglobin SC disease is caused by a copy of the hemoglobin S gene inherited from one parent and a hemoglobin C gene inherited from the other. The hemoglobin C molecule disturbs the red-cell metabolism only slightly. However, the disturbance is enough to allow the deleterious effects of the hemoglobin S to be manifested. On average, patients with hemoglobin SC disease have milder symptoms than do those with hemoglobin SS disease, and have more severe symptoms than do those with homozygous hemoglobin C disease. A number of other syndromes exist that involve a hemoglobin S compound heterozygous state. They are less common than hemoglobin SC disease. Hemoglobin SE disease is caused by a copy of the hemoglobin S gene inherited from one parent and a copy of the hemoglobin E gene inherited from the other. People with hemoglobin E disease have a mild hemolytic anemia and mild splenomegaly. Hemoglobin E trait is benign. Hemoglobin E is extremely common in Southeast Asia and in some areas equals hemoglobin A in frequency. The expression of a single hemoglobin S gene normally produces no problem (e.g., sickle cell trait).
Sickle/beta thalassemia is caused by a copy of the hemoglobin S gene inherited from one parent and a copy of the beta-thalassemia gene inherited from the other. The severity of the condition is determined to a large extent by the quantity of normal hemoglobin produced by the beta-thalassemia gene. If the gene produces no normal hemoglobin (β° thalassemia), the condition is virtually identical to SCD. Patients with sickle/β° thalassemia have no normal β-globin chain synthesis. Some patients have a gene that produces a small amount of normal hemoglobin, called β+ thalassemia. Patients with sickle/β+ thalassemia have an amount of hemoglobin A that depends on the level of function of the β+-thalassemia gene. The severity of the condition is dampened when significant quantities of normal hemoglobin are produced by the β+-thalassemia gene. Sickle/beta thalassemia is the most common sickle syndrome seen in people of Mediterranean descent (Italian, Greek, and Turkish). Beta thalassemia is quite common in this region, and the sickle cell mutation occurs in some sections of these countries. Therefore, HbS and HbS (HbSS) is the most severe form of SCD in the list. 28. : D. Sickle cell disease (SCD) encompasses a group of symptomatic disorders associated with pathogenic variants in HBB and defined by the presence of predominantly hemoglobin S (HbS), or HbS along with other Hb variants that allow for HbS polymerization. Normal human hemoglobin is a heterotetramer composed of two α-hemoglobin chains and two β-hemoglobin chains. Hemoglobin S results from a single nucleotide variant in HBB, resulting in the sixth amino acid in the β-hemoglobin chain changing from glutamic acid to valine (p.Glu6Val). In hemoglobin SS disease, HbS is the predominant hemoglobin. The alpha chain is normal. The disease-causing mutation exists in the beta chain, giving the molecule the structure α2βS2. Patients with hemoglobin SS disease have no normal β-globin chain synthesis.
Hemoglobin SC disease is caused by a copy of the hemoglobin S gene inherited from one parent and a hemoglobin C gene inherited from the other. The hemoglobin C molecule disturbs the red-cell metabolism only slightly. However, the disturbance is enough to allow the deleterious effects of the hemoglobin S to be manifested. On average, patients with hemoglobin SC disease have milder symptoms than do those with hemoglobin SS disease and have more severe symptoms than do those with homozygous hemoglobin C disease. A number of other syndromes exist that involve a hemoglobin S compound heterozygous state. They are less common than hemoglobin SC disease. Hemoglobin SE disease is caused by a copy of the hemoglobin S gene inherited from one parent and a copy of the hemoglobin E gene inherited from the other. People with hemoglobin E disease have a mild hemolytic anemia and mild splenomegaly. Hemoglobin E trait is benign. Hemoglobin E is extremely common in Southeast Asia and in some areas equals hemoglobin A in frequency. The expression of a single hemoglobin S gene normally produces no problem (e.g., sickle cell trait). Sickle/beta thalassemia is caused by a copy of the hemoglobin S gene inherited from one parent and a copy of the beta-thalassemia gene inherited from the other. The severity of the condition is determined to a large extent by the quantity of normal hemoglobin produced by the beta-thalassemia gene. If the gene produces no normal hemoglobin (β° thalassemia), the condition is virtually identical to SCD. Patients with sickle/β° thalassemia have no normal β-globin chain synthesis. Some patients have a gene that produces a small amount of normal hemoglobin, called β+ thalassemia. Patients with sickle/β+ thalassemia have an amount of hemoglobin A that depends on the level of function of the β+-thalassemia gene. The severity of the condition is dampened when significant quantities of normal hemoglobin are produced by the β+-thalassemia gene.
Sickle/beta thalassemia is the most common sickle syndrome seen in people of Mediterranean descent (Italian, Greek, and Turkish). Beta thalassemia is quite common in this region, and the sickle cell mutation occurs in some sections of these countries. Therefore, hemoglobin S and β° thalassemia (HbSβ°) is the most severe form of SCD in the list. 29. : A. Sickle cell disease (SCD) encompasses a group of symptomatic disorders associated with pathogenic variants in HBB and defined by the presence of predominantly hemoglobin S (HbS), or HbS along with other Hb variants that allow for HbS polymerization. Normal human hemoglobin is a heterotetramer composed of two α-hemoglobin chains and two β-hemoglobin chains. Hemoglobin S results from a single nucleotide variant in HBB, resulting in the sixth amino acid in the β-hemoglobin chain changing from glutamic acid to valine (p.Glu6Val). In hemoglobin SS disease, HbS is the predominant hemoglobin. The alpha chain is normal. The disease-producing mutation exists in the beta chain, giving the molecule the structure α2βS2. Patients with hemoglobin SS disease have no normal β-globin chain synthesis. Hemoglobin SC disease is caused by a copy of the hemoglobin S gene inherited from one parent and a hemoglobin C gene inherited from the other. The hemoglobin C molecule disturbs the red-cell metabolism only slightly. However, the disturbance is enough to allow the deleterious effects of hemoglobin S to be manifested. On average, patients with hemoglobin SC disease have milder symptoms than do those with hemoglobin SS disease and have more severe symptoms than do those with homozygous hemoglobin C disease. A number of other syndromes exist that involve a hemoglobin S compound heterozygous state. They are less common than hemoglobin SC disease. Hemoglobin SE disease is caused by a copy of the hemoglobin S gene inherited from one parent and a copy of the hemoglobin E gene inherited from the other.
People with hemoglobin E disease have a mild hemolytic anemia and mild splenomegaly. Hemoglobin E trait is benign. Hemoglobin E is extremely common in Southeast Asia and in some areas equals hemoglobin A in frequency. The expression of a single hemoglobin S gene normally produces no problem (e.g., sickle cell trait). Sickle/beta thalassemia is caused by a copy of the hemoglobin S gene inherited from one parent and a copy of the beta-thalassemia gene inherited from the other. The severity of the condition is determined to a large extent by the quantity of normal hemoglobin produced by the beta-thalassemia gene. If the gene produces no normal hemoglobin (β° thalassemia), the condition is virtually identical to SCD. Patients with sickle/β° thalassemia have no normal β-globin chain synthesis. Some patients have a gene that produces a small amount of normal hemoglobin, called β+ thalassemia. Patients with sickle/β+ thalassemia have an amount of hemoglobin A that depends on the level of function of the β+-thalassemia gene. The severity of the condition is dampened when significant quantities of normal hemoglobin are produced by the β+-thalassemia gene. On average, patients with hemoglobin S and β° thalassemia with α-thalassemia trait have reduced clinical severity as compared with patients with hemoglobin S and β° thalassemia due to less of an imbalance between alpha and beta hemoglobins. Therefore, hemoglobin S and S (HbSS) is the most severe form of SCD in the list. 30. : C. Sickle cell disease (SCD) encompasses a group of symptomatic disorders associated with pathogenic variants in HBB and defined by the presence of predominantly hemoglobin S (HbS), or HbS along with other Hb variants that allow for HbS polymerization. Normal adult human hemoglobin is a heterotetramer composed of two α-hemoglobin chains and two β-hemoglobin chains.
Hemoglobin S results from a single nucleotide variant in HBB, resulting in the sixth amino acid in the β-hemoglobin chain changing from glutamic acid to valine (p.Glu6Val). Sickle cell anemia (homozygous HbSS) accounts for 60%–70% of sickle cell disease in the United States. Other forms of sickle cell disease result from coinheritance of HbS with other abnormal β-globin chain variants, the most common forms being sickle hemoglobin C disease (HbSC) and two types of sickle β thalassemia (HbSβ+ thalassemia and HbSβ° thalassemia). Even when only one parent has the sickle cell trait, a couple still may have children with SCD. They can have children with compound heterozygous forms of SCD, such as hemoglobin SC disease (HbSC), sickle/beta thalassemia (HbSβ° or HbSβ+), and hemoglobin SE disease (HbSE). In this case, a negative test for the presence of the HBB Glu6Val mutation for HbS does not exclude the presence of other abnormal beta hemoglobins, which may lead to an asymptomatic thalassemia trait in the father. Glucose-6-phosphate dehydrogenase (G6PD) deficiency and hereditary elliptocytosis are both common among African Americans. But they are not caused by mutations in hemoglobin genes. The alpha-thalassemia trait and Hemoglobin Constant Spring (HbCS) trait are both abnormalities of the alpha-globin, rather than the beta-globin, locus. Among the possible answers, only the beta-thalassemia trait is an abnormality of the beta globin. Therefore, it would be most likely that the father had the beta-thalassemia trait, so that the child in question had sickle/beta thalassemia. 31. : D. Alpha thalassemia (α thalassemia) has two clinically significant forms: hemoglobin Barts hydrops fetalis (Hb Barts) syndrome and hemoglobin H (HbH) disease (see the table below). Hb Barts syndrome is the more severe form. Death usually occurs in the neonatal period. A 5-year-old patient with alpha thalassemia most likely has HbH disease. Hemoglobin patterns in HbH disease are as shown in the table.
| Hemoglobin type | Structure | Normal, birth | Normal, adult | HbH disease | Hydrops fetalis |
| --- | --- | --- | --- | --- | --- |
| HbA | α2β2 | 20%–25% | 96%–98% | 60%–90% | NA |
| HbF | α2γ2 | 75%–80% | <1% | <1.0% | NA |
| HbA2 | α2δ2 | 0.5% | 2%–3% | <2.0% | NA |
| HbH | β4 | NA | NA | 5%–30% (adult); 20%–40% (birth) | NA |
| Hb Bart | γ4 | NA | NA | 2%–5% | 100% |

Therefore, it would be most likely that HbH increased significantly in this patient. 32. : C. Beta thalassemia (β thalassemia) is characterized by reduced synthesis of the hemoglobin subunit beta resulting in microcytic hypochromic anemia. Patients with β thalassemia have decreased amounts of HbA and increased amounts of hemoglobin F (HbF) in peripheral-blood samples after age 12 months. Therefore, it would be most likely that HbF increased significantly in this patient. 33. : C. Each parent had a 2/3 chance of being a carrier of sickle cell disease (SCD), since each had a sibling with SCD. The prior probability that both parents are carriers is 2/3 × 2/3 = 4/9. By Bayesian analysis, the posterior probability that both parents are carriers is 1/81 (see the table below). Therefore, the risk of SCD in their next child would be 1/81 × 1/4 = 1/324.

| | Carrier | Noncarrier |
| --- | --- | --- |
| Prior probability | 2/3 × 2/3 = 4/9 | 1 − (2/3 × 2/3) = 5/9 |
| Conditional probability | (1/4)³ = 1/64 | 1 |
| Joint probability | 4/9 × 1/64 = 1/144 | 5/9 × 1 = 80/144 |
| Posterior probability | (1/144)/(1/144 + 80/144) = 1/81 | 80/81 |

34. : B. Alpha thalassemia is a nonneoplastic hematological disorder that reduces the production of hemoglobin because of the impaired production of 1, 2, 3, or 4 alpha-globin chains, leading to a relative excess of beta-globin chains. The degree of impairment is based on which clinical phenotype is present (how many chains are affected).
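The arithmetic in the Bayesian analysis of answer 33 can be double-checked with a short script. This is a sketch that reproduces the printed figures with exact fractions; the conditioning event behind the (1/4)³ term comes from the question stem, which is not shown here:

```python
from fractions import Fraction

# Prior: each unaffected parent of an affected sibling has a 2/3 carrier risk,
# so P(both parents are carriers) = 2/3 * 2/3 = 4/9.
prior_both = Fraction(2, 3) * Fraction(2, 3)

# Conditional probabilities as printed in the table.
cond_both = Fraction(1, 4) ** 3      # (1/4)^3 = 1/64
cond_not = Fraction(1, 1)

# Joint and posterior probabilities.
joint_both = prior_both * cond_both            # 4/9 * 1/64 = 1/144
joint_not = (1 - prior_both) * cond_not        # 5/9 = 80/144
posterior_both = joint_both / (joint_both + joint_not)

print(posterior_both)                          # 1/81
print(posterior_both * Fraction(1, 4))         # risk for the next child: 1/324
```

Running it confirms the posterior probability of 1/81 and the recurrence risk of 1/81 × 1/4 = 1/324.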
Testing for α thalassemia includes hematological testing of red-blood-cell indices, peripheral-blood smear, supravital staining to detect RBC inclusion bodies, and qualitative and quantitative hemoglobin analysis. HBA1, the gene encoding α1-globin, and HBA2, the gene encoding α2-globin, are the two genes most commonly associated with α thalassemia. Molecular genetic testing of HBA1 and HBA2 detects deletions in about 90% and point mutations in about 10% of affected individuals. Chromosome microarray analysis (CMA) is used to detect copy-number gain or loss; the ACMG has recommended it as the first-line test for individuals with multiple congenital anomalies, developmental delay, intellectual disability, and autism. Multiplex ligation-dependent probe amplification (MLPA) is a time-efficient technique to detect genomic deletions and insertions bigger than single-nucleotide variants and in/dels, but smaller than what CMA can detect. Next-generation sequencing (NGS) is a high-throughput test to sequence multiple genes at the same time. It is an appropriate test for Fanconi anemia, hearing loss, and other disorders. Sanger sequencing is the most appropriate molecular test for single-gene disorders when most pathogenic variants are single nucleotide variants and in/dels. It is an appropriate test for Gaucher disease, Wiskott–Aldrich syndrome (WAS), and other disorders. Exonic and whole-gene deletions/duplications in HBA1 and HBA2 are not readily detectable by sequence analysis of the coding and flanking intronic regions of genomic DNA. The most common form of α+ thalassemia seen in the United States is due to the -α3.7 deletion, which is too small to be detected by CMA. Therefore, MLPA would most likely be used for the alpha-thalassemia test in this patient. 35. : A.
Sickle cell disease (SCD) is an autosomal recessive disorder of hemoglobin in which β-subunit genes have a missense mutation that substitutes valine for glutamic acid at amino acid 6 (p.Glu6Val). The p.Glu6Val mutation in β globin decreases the solubility of deoxygenated hemoglobin and causes it to form a gelatinous network of stiff fibrous polymers that distort red blood cells, giving them a sickle shape. Therefore, in patients with SCD, the missense p.Glu6Val mutation is responsible for the unique properties of HbS. 36. : A. Sickle cell disease (SCD) encompasses a group of symptomatic disorders associated with pathogenic variants in HBB and defined by the presence of predominantly hemoglobin S (HbS), or HbS along with other Hb variants that allow for HbS polymerization. Normal human hemoglobin is a heterotetramer composed of two α-hemoglobin chains and two β-hemoglobin chains. Hemoglobin S results from a single nucleotide variant in HBB, changing the sixth amino acid in the β-hemoglobin chain from glutamic acid to valine (p.Glu6Val). In deoxygenated sickle hemoglobin, an interaction between the p.Glu6Val residue and the complementary regions on adjacent molecules can result in the formation of highly ordered insoluble molecular polymers that aggregate and distort the shape of the red blood cells, making them brittle and poorly deformable, increasing adherence to the endothelium. This can lead to veno-occlusion and potentially decreased tissue perfusion and ischemia. Therefore, a missense mutation in HBB leads to decreased elasticity of red blood cells in SCD. 37. : A. All four alpha-globin alleles are deleted or inactivated in Hb Barts syndrome. Deletion or dysfunction of three alleles results in HbH disease. Alpha° thalassemia results from deletion or dysfunction of two alleles, and α+ thalassemia results from deletion or dysfunction of one allele. Therefore, 1 out of 4 alpha alleles in HBA1 and HBA2 is not functional in patients with α+ thalassemia.
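The allele-count relationships described in answers 37 through 40 can be summarized in a small lookup. This is an illustrative sketch (the dictionary and function names are hypothetical, not from the source):

```python
# Maps the number of deleted or inactivated alpha-globin alleles (out of the
# four carried on HBA1 and HBA2) to the classical alpha-thalassemia phenotype.
ALPHA_THALASSEMIA_PHENOTYPES = {
    0: "normal",
    1: "alpha+ thalassemia (silent carrier)",
    2: "alpha0 thalassemia (trait)",
    3: "HbH disease",
    4: "Hb Barts hydrops fetalis syndrome",
}

def alpha_thal_phenotype(nonfunctional_alleles: int) -> str:
    """Return the phenotype for 0-4 nonfunctional alpha-globin alleles."""
    if nonfunctional_alleles not in ALPHA_THALASSEMIA_PHENOTYPES:
        raise ValueError("a person carries exactly four alpha-globin alleles")
    return ALPHA_THALASSEMIA_PHENOTYPES[nonfunctional_alleles]

print(alpha_thal_phenotype(3))  # HbH disease
```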
38. : B. All four alpha-globin alleles are deleted or inactivated in Hb Barts syndrome. Deletion or dysfunction of three alleles results in HbH disease (3/4). Alpha° thalassemia results from deletion or dysfunction of two alleles (2/4), and α+ thalassemia results from deletion or dysfunction of one allele (1/4). Therefore, 2 out of 4 alpha alleles in HBA1 and HBA2 are not functional in patients with α° thalassemia. 39. : C. All four alpha-globin alleles are deleted or inactivated in Hb Barts syndrome (4/4). Deletion or dysfunction of three alleles results in HbH disease (3/4). Alpha° thalassemia results from deletion or dysfunction of two alleles, and α+ thalassemia results from deletion or dysfunction of one allele. Therefore, 3 out of 4 alpha alleles in HBA1 and HBA2 are not functional in patients with HbH disease. 40. : D. All four alpha-globin alleles are deleted or inactivated in Hb Barts syndrome. Deletion or dysfunction of three alleles results in HbH disease (3/4). Alpha° thalassemia results from deletion or dysfunction of two alleles, and α+ thalassemia results from deletion or dysfunction of one allele. Therefore, all 4 alpha alleles in HBA1 and HBA2 are not functional in patients with Hb Barts syndrome. 41. : A. HBA1, encoding α1-globin, and HBA2, encoding α2-globin, are the two genes associated with α thalassemia. They are localized to the telomeric region of chromosome 16p13.3 in a cluster containing the embryonically expressed HBZ encoding ζ globin and a cis-acting regulatory element, HS-40, located 40 kb upstream of HBZ (see Fig. 6.2). Deletions are the most common cause of alpha thalassemia. The most common form of alpha thalassemia seen in the United States is due to the -α3.7 deletion and is present in approximately 30% of African Americans. Therefore, deletion is most common in patients with alpha thalassemia in comparison with other types of variants. 42. : A. Alpha thalassemia is a nonneoplastic hematological disorder involving the genes HBA1 and HBA2.
HBA1, encoding α1-globin, and HBA2, encoding α2-globin, are localized to the telomeric region of chromosome 16p13.3 in a cluster. Molecular genetic testing of HBA1 and HBA2 detects deletions in about 90% and point mutations in about 10% of affected individuals. The most common deletions for alpha thalassemia remove one (-α3.7 and -α4.2) or both (--Med and --SEA) of the α genes, as do the unusual deletions α-ZF and (αα)™. Therefore, one of the four alpha-globin gene copies was deleted in the wife, since she was an -α3.7 carrier. 43. : A. Alpha thalassemia is a nonneoplastic hematological disorder involving the genes HBA1 and HBA2. HBA1, encoding α1-globin, and HBA2, encoding α2-globin, are localized to the telomeric region of chromosome 16p in a cluster. Molecular genetic testing of HBA1 and HBA2 detects deletions in about 90% and point mutations in about 10% of affected individuals. The most common deletions for α thalassemia remove one (-α3.7 and -α4.2) or both (--Med and --SEA) of the α genes, as do the unusual deletions α-ZF and (αα)™. Therefore, one of the four alpha-globin gene copies was deleted in the wife, since she was an -α4.2 carrier. 44. : D. Alpha thalassemia (α thalassemia) is a nonneoplastic hematological disorder involving the genes HBA1 and HBA2. HBA1, encoding α1-globin, and HBA2, encoding α2-globin, are localized to the telomeric region of chromosome 16p13.3 in a cluster. Molecular genetic testing of HBA1 and HBA2 detects deletions in about 90% and point mutations in about 10% of affected individuals. The most common deletions for α thalassemia remove one (-α3.7 and -α4.2) or both (--Med, --SEA, --FIL, and --THAI) α genes, as do the unusual deletions α-ZF and (αα)™. All four α alleles are deleted or dysfunctional (inactivated) in patients with Hb Barts hydrops fetalis syndrome. Hemoglobin H (HbH) disease is a result of deletion or dysfunction of three of four α-globin alleles.
Two of four copies of alpha-globin chains were deleted in the wife, since she was a --THAI carrier. And one of four copies of alpha-globin chains was deleted in the husband, since he was an -α4.2 carrier. Therefore, the couple’s firstborn child would have a 25% chance of having HbH disease, but not Hb Barts (<1%). 45. : C. Alpha thalassemia is a nonneoplastic hematological disorder involving the genes HBA1 and HBA2. HBA1, encoding α1-globin, and HBA2, encoding α2-globin, are localized to the telomeric region of chromosome 16p in a cluster. Molecular genetic testing of HBA1 and HBA2 detects deletions in about 90% and point mutations in about 10% of affected individuals. The most common deletions for α thalassemia remove one (-α3.7 and -α4.2) or both (--MED, --SEA, --FIL, and --THAI) α genes, as do the unusual deletions α-ZF and (αα)™. All four α alleles are deleted or dysfunctional (inactivated) in patients with hemoglobin Barts hydrops fetalis syndrome. Hemoglobin H (HbH) disease is a result of deletion or dysfunction of three of four alpha-globin alleles. Two of four copies of alpha-globin chains were deleted in the wife, since she was a --THAI carrier. And one out of four copies of alpha-globin chains was deleted in the husband, since he was an -α4.2 carrier. Therefore, the couple’s firstborn child would have a 25% chance of having HbH, but not Hb Barts (<1%). 46. : E. HBA1, encoding α1-globin, and HBA2, encoding α2-globin, are the two genes associated with α thalassemia. They are localized to the telomeric region of chromosome 16p in a cluster containing the embryonically expressed HBZ encoding ζ globin and a cis-acting regulatory element, HS-40, located 40 kb upstream of HBZ (see Fig. 6.2). The most common nondeletion mutation, which is frequently seen in Southeast Asia, is Hb Constant Spring (HbCS, c.427T>C). It is a mutation of the termination codon of HBA2, leading to an elongated protein chain.
This mutation leads to the production of an α-globin chain elongated by 31 amino acids. HbCS is produced in very small amounts because its mRNA is unstable. Heterozygotes for HbCS and other rare elongated variants, along with the presence of the Hb variant, produce the α°-thalassemia phenotype. Therefore, HbCS is a nonstop mutation. 47. : B. HBA1, encoding α1-globin, and HBA2, encoding α2-globin, are the two genes associated with α thalassemia. They are localized to the telomeric region of chromosome 16p in a cluster containing the embryonically expressed HBZ encoding ζ globin and a cis-acting regulatory element, HS-40, located 40 kb upstream of HBZ (see Fig. 6.2). The most common nondeletion mutation, which is frequently seen in Southeast Asia, is HbCS (c.427T>C). It is a mutation of the termination codon of HBA2, leading to an elongated protein chain. This mutation leads to the production of an α-globin chain elongated by 31 amino acids. HbCS is produced in very small amounts because its mRNA is unstable. Heterozygotes for HbCS and other rare elongated variants, along with the presence of the Hb variant, produce the α° thalassemia phenotype. Hemoglobin gamma (HBG), hemoglobin delta (HBD), and hemoglobin beta (HBB) are located on 11p15.4 from terminal to proximal, respectively. Point mutations in HBB are more common than deletions in beta (β) thalassemia, so sequence analysis may detect 99% of the pathogenic mutations in HBB for β thalassemia. Therefore, HbCS is a mutation in HBA2. 48. : E. Beta thalassemia (β thalassemia) is characterized by reduced synthesis of the hemoglobin subunit beta (hemoglobin beta chain) that results in microcytic hypochromic anemia, an abnormal peripheral-blood smear with nucleated red blood cells, and reduced amounts of hemoglobin A (HbA) on hemoglobin analysis. Individuals with beta thalassemia major have severe anemia and hepatosplenomegaly. They usually come to medical attention within the first 2 years of life.
Beta thalassemias are inherited in an autosomal recessive manner. Most patients have single-nucleotide mutations in HBB. Sequencing analysis of HBB may identify 99% of pathogenic variants. Therefore, single-nucleotide mutation is the most common type of variant in patients with beta thalassemia. 49. : A. Hb Constant Spring (HbCS) is the most common nondeletion mutation in Southeast Asia. It is a mutation of the termination codon of HBA2, c.427T>C, which leads to an elongated protein chain. This mutation leads to the production of an α-globin chain elongated by 31 amino acids. HbCS is produced in very small amounts because its mRNA is unstable. HbF increases in patients with beta thalassemia, or disorders caused by mutations in HBB, such as sickle cell disease (SCD). But in patients with alpha thalassemia, such as HbCS/HbCS, HbF does not increase. Therefore, HbCS/HbCS would be in the differential list for the diagnosis in this patient, since her HbA and HbF were in the normal range. 50. : C. In fetuses and infants, the hemoglobin is made up of two alpha chains and two gamma chains (HbF: α2γ2). As the infant grows, the gamma chains are gradually replaced by beta chains, forming the adult hemoglobin HbA (α2β2). HbF increases in diseases associated with mutations in the HBB gene, such as beta thalassemia (β°/β°, β°/β+, β+/β+), and sickle cell disease (SCD), but not in alpha thalassemia. The absence of beta globin is referred to as beta zero (β°) thalassemia. Other HBB gene mutations allow some beta globin to be produced but in reduced amounts. A reduced amount of beta globin is called beta plus (β+) thalassemia. Therefore, β°/β° is associated with a more significant HbF increase than the others. 51. : A. Normally, the majority of adult hemoglobin (HbA) is composed of two α-globin and two β-globin chains arranged into a heterotetramer.
In thalassemia, patients have defects in either the α- or β-globin chain, causing production of abnormal red blood cells. In α thalassemias, production of the α-globin chain is affected, while in β thalassemia, production of the β-globin chain is affected. In sickle cell disease (SCD), the mutation is specific to β globin. In newborns with Hb Barts syndrome, all four α-globin alleles are deleted or inactivated. So no HbA (α2β2) can be made in patients with Hb Barts syndrome. HbA2 (α2δ2) and HbF (α2γ2) cannot increase to compensate for the lack of HbA owing to the lack of the α-globin chain. Patients with Hb Barts syndrome usually die in the neonatal period. Beta thalassemias are due to mutations in the HBB gene on chromosome 11. The severity of the disease depends on the nature of the mutation. Mutations are characterized as β° or β thalassemia major if they prevent any formation of β chains. Otherwise, they are characterized as either β+ or β thalassemia intermedia if they allow some β-chain formation. In patients with mutations in the HBB gene, such as beta thalassemia (β°/β°, β°/β+, β+/β+), and SCD, HbF increases, but not in alpha thalassemia. The increased HbF results in relatively milder symptoms in β thalassemia than in α thalassemia. Therefore, Hb Barts has the earliest onset and the worst prognosis among all the hemoglobinopathies listed. 52. : A. Thalassemia syndromes are the commonest monogenic diseases in the world. The worldwide distribution of inherited alpha thalassemia corresponds to areas of malaria exposure. Alpha thalassemia is common in sub-Saharan Africa, the Mediterranean Basin, the Middle East, South Asia, and Southeast Asia, and different genetic subtypes have variable frequencies in each of these areas. The epidemiology of alpha thalassemia in the United States reflects this global distribution pattern.
Deletions are the most common type of variant in alpha thalassemia, while single-nucleotide mutations are more common in patients with beta thalassemia. The most common form of α+ thalassemia seen in the United States is due to the -α3.7 deletion, a single alpha-globin gene deletion, and is present in approximately 30% of African Americans. The highest frequency (0.30–0.40) of the -α3.7 allele (causing α+ thalassemia) has been observed in the equatorial belt, including Nigeria, Ivory Coast, and Kenya. And --SEA is one of the common deletions of HBA1 and HBA2 in Southeast Asia, as the name indicates. Hemoglobin Constant Spring (HbCS) is an unstable α-globin variant causing α-thalassemia phenotypes. The sequence variant Hb Constant Spring (HbCS, c.427T>C) is a mutation of the termination codon of HBA2, leading to an elongated protein chain. It is the most common nondeletion mutation of the alpha-globin genes in alpha thalassemia. Alpha thalassemia retardation-16 (ATR-16) syndrome, a contiguous-gene deletion syndrome, results from a large deletion on the short arm of chromosome 16 from band 16p13.3 to the terminus, which removes HBA1 and HBA2 together with other flanking genes. Among the few reported individuals with deletion of 16p, microcephaly and short stature were variable; IQ ranged from 53 to 76. Facial features are distinctive; talipes equinovarus (club foot) is common, as are hypospadias and cryptorchidism in males. Typically, hematological features are those of the α-thalassemia trait reflecting deletion of HBA1 and HBA2 in cis configuration, such as --/αα. The deletion may be de novo or inherited from a parent who carries a balanced chromosome rearrangement. Worldwide, it is not as common as alpha thalassemia. Therefore, it is most likely that the patient had the -α3.7 allele for alpha thalassemia. 53. : B. Alpha thalassemia typically results from deletions involving the HBA1 and HBA2 genes.
They are localized to the telomeric region of chromosome 16p in a cluster with HBZ and HS-40. Alpha-thalassemia silent carrier (α+ thalassemia) results from a deletion or nondeletion mutation that inactivates one of the two α-globin genes (HBA1 or HBA2) on one chromosome (1/4). Alpha-thalassemia trait results from deletion or inactivation of two α-globin alleles (--/αα in cis configuration or -α/-α in trans configuration) (2/4). Alpha thalassemia is usually inherited in an autosomal recessive manner. HbH disease results from the deletion or inactivation of three α-globin alleles (3/4), usually as a result of a compound heterozygous state for α° thalassemia and α+ thalassemia. Therefore, the couple’s firstborn child will have a 25% chance of having HbH disease, a 25% chance of having α° thalassemia, a 25% chance of having α+ thalassemia, and a 25% chance of being unaffected and not a carrier. Once an at-risk child is known to be unaffected, the risk of his/her having either α° thalassemia or α+ thalassemia is 2/3. 54. : A. Alpha thalassemia typically results from deletions involving the HBA1 and HBA2 genes. They are localized to the telomeric region of chromosome 16p in a cluster with HBZ and HS-40. Alpha-thalassemia silent carrier (α+ thalassemia) results from a deletion or nondeletion mutation that inactivates one of the two α-globin genes (HBA1 or HBA2) on one chromosome (1/4). Alpha-thalassemia trait (α° thalassemia) results from the deletion or inactivation of two α-globin alleles (--/αα in cis configuration or -α/-α in trans configuration) (2/4). Alpha thalassemia is usually inherited in an autosomal recessive manner. Hb Barts syndrome results from deletion or inactivation of all four α-globin alleles (4/4).
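The 25%-per-outcome figures quoted in these answers follow from enumerating the two equally likely chromosome-16 haplotypes each parent can transmit. A minimal Python sketch of that enumeration (the haplotype labels and helper name are illustrative, not from the text):

```python
from itertools import product
from collections import Counter

# Illustrative sketch: each chromosome-16 haplotype is labeled by its
# functional alpha-globin genes: "aa" = both, "a-" = one, "--" = none.
def offspring_risks(parent1, parent2):
    """Enumerate the four equally likely haplotype combinations."""
    counts = Counter(
        "/".join(sorted((h1, h2)))  # "--/aa" and "aa/--" are the same genotype
        for h1, h2 in product(parent1, parent2)
    )
    total = sum(counts.values())
    return {genotype: n / total for genotype, n in counts.items()}

# The answer-53 cross: an alpha-zero trait parent (--/aa) with an
# alpha-plus silent-carrier parent (written "a-"/"aa" here).
risks = offspring_risks(("--", "aa"), ("a-", "aa"))
# "--/a-" = HbH disease, "--/aa" = alpha-zero trait,
# "a-/aa" = alpha-plus carrier, "aa/aa" = unaffected noncarrier;
# each outcome has probability 1/4.

# Conditional risk once HbH disease is excluded: 2 of the 3 remaining
# equally likely outcomes are carriers, giving the quoted 2/3.
p_carrier = (risks["--/aa"] + risks["a-/aa"]) / (1 - risks["--/a-"])
```

The same enumeration with two --/aa parents reproduces the 1/4 Hb Barts risk discussed in the next answer.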
The couple’s child would have a 25% chance of having Hb Barts disease, a 25% chance of having α° thalassemia, a 25% chance of having α+ thalassemia, and a 25% chance of being unaffected and not a carrier. If an at-risk child is known to be unaffected, the risk of his/her having either α° or α+ thalassemia is 2/3. Therefore, the couple’s firstborn child will have a <1% chance of having Hb Barts syndrome. 55. : C. Alpha thalassemia typically results from deletions involving the HBA1 and HBA2 genes. HBA1 and HBA2 are localized to the telomeric region of chromosome 16p in a cluster with HBZ and HS-40. Hemoglobin H (HbH) disease results from deletion or inactivation of three α-globin alleles, usually as a result of a compound heterozygous state for α° thalassemia and α+ thalassemia (3/4). Patients with HbH disease usually have decreased HbA, increased Hb Barts and HbH, normal percentages of HbA2 and HbF, and RBC inclusion bodies. Therefore, this child would most likely have increased Hb Barts by 2 years of age. 56. : A. Alpha thalassemia typically results from deletions involving the HBA1 and HBA2 genes. HBA1 and HBA2 are localized to the telomeric region of chromosome 16p in a cluster with HBZ and HS-40. Hemoglobin H (HbH) disease results from deletion or inactivation of three α-globin alleles, usually as a result of a compound heterozygous state for α° thalassemia and α+ thalassemia (3/4). Patients with HbH disease usually have decreased HbA, increased Hb Barts and HbH, normal percentages of HbA2 and HbF, and RBC inclusion bodies. Therefore, this child most likely would have decreased HbA by 2 years of age. 57. : B. Alpha thalassemia typically results from deletions involving the HBA1 and HBA2 genes. HBA1 and HBA2 are localized to the telomeric region of chromosome 16p in a cluster with HBZ and HS-40.
HbH disease results from deletion or inactivation of three α-globin alleles, usually as a result of a compound heterozygous state for α° thalassemia and α+ thalassemia (3/4). Patients with HbH disease usually have decreased HbA, increased Hb Barts and HbH, normal percentages of HbA2 and HbF, and RBC inclusion bodies. Therefore, this child would most likely have stable HbA2 by 2 years of age. 58. : C. Alpha thalassemia typically results from deletions involving the HBA1 and HBA2 genes. HBA1 and HBA2 are localized to the telomeric region of chromosome 16p in a cluster with HBZ and HS-40. HbH disease results from deletion or inactivation of three α-globin alleles, usually as a result of a compound heterozygous state for α° thalassemia and α+ thalassemia (3/4). Patients with HbH disease usually have decreased HbA, increased Hb Barts and HbH, normal percentages of HbA2 and HbF, and RBC inclusion bodies. Therefore, this child would most likely have stable HbF by 2 years of age. 59. : B. Alpha thalassemia typically results from deletions involving the HBA1 and HBA2 genes. HBA1 and HBA2 are localized to the telomeric region of chromosome 16p in a cluster with HBZ and HS-40. α°-Thalassemia trait carriers have two copies of dysfunctional α-globin chains due to deletion(s) and/or mutation(s). If a fetus has four dysfunctional (inactivated) α-globin chains, he/she has Hb Barts hydrops fetalis syndrome, which is the most severe clinical form of α thalassemia. Affected fetuses are either stillborn or die soon after birth. Red cells with Hb Barts have an extremely high oxygen affinity and are incapable of effective tissue oxygen delivery. The clinical features are severe anemia, marked hepatosplenomegaly, diffuse edema, heart failure, and extramedullary erythropoiesis. Patients with Hb Barts syndrome have only Hb Barts. Therefore, this future child would most likely have Hb Barts at birth. 60. : C.
Alpha thalassemia typically results from deletions involving the HBA1 and HBA2 genes localized to the telomeric region of chromosome 16p in a cluster with HBZ encoding ζ globin and a cis-acting regulatory element, HS-40, located 40 kb upstream of HBZ. Alpha-thalassemia silent carrier (α+ thalassemia) results from a deletion or nondeletion mutation that inactivates one of the two α-globin genes (HBA1 or HBA2) on one chromosome (1/4 alleles). Carriers of α+ thalassemia may have a completely silent hematologic phenotype or may present with a moderate, thalassemia-like hematologic picture, such as reduced MCV and MCH, but normal HbA2 and HbF. Alpha-thalassemia trait (α° thalassemia) results from deletion or inactivation of two α-globin alleles (--/αα in cis configuration or -α/-α in trans configuration) (2/4 alleles). Carriers of α° thalassemia show microcytosis (low MCV), hypochromia (low MCH), normal percentages of HbA2 and HbF, and RBC inclusion bodies. Patients are considered to be α-thalassemia carriers without symptoms. HbH disease results from deletion or inactivation of three α-globin alleles, usually as a result of a compound heterozygous state for α° thalassemia and α+ thalassemia (3/4 alleles). Patients with HbH disease (chronic microcytic, hypochromic hemolytic anemia of variable severity) usually are symptomatic. Hemoglobin Barts hydrops fetalis (Hb Barts) syndrome, the most severe form of α thalassemia, is characterized by fetal onset of generalized edema, ascites, pleural and pericardial effusions, and severe hypochromic anemia, in the absence of ABO or Rh blood group incompatibility. It is usually detected by ultrasonography at 22–28 weeks’ gestation and can be suspected in an at-risk pregnancy at 13–14 weeks’ gestation when increased nuchal thickness, possible placental thickness, increased middle cerebral artery velocity, and increased cardiothoracic ratio are present. Death in the neonatal period is almost inevitable.
All four α-globin alleles are deleted or dysfunctional (inactivated, 4/4 alleles). Therefore, patients with HbH disease usually are symptomatic. 61. : B. Fetal hemoglobin (α2γ2), composed of two alpha and two gamma subunits, is the primary hemoglobin present during fetal development. Around the time of birth, gamma expression is diminished as β-globin expression is increased. Individuals with alpha thalassemia have deletions/mutations that impair α-globin production. Fetal hemoglobin (α2γ2) cannot be produced effectively, which affects fetal development significantly. Therefore, alpha thalassemia is identifiable in utero. Individuals with beta thalassemia have mutations that impair beta-globin production. Production of fetal hemoglobin (α2γ2) is not affected by inactivation of the beta-globin gene. However, production of adult hemoglobin HbA (α2β2) is impaired by the inactivation of the beta-globin gene. Therefore, individuals with beta thalassemia begin to show symptoms as the proportion of fetal hemoglobin HbF drops without a concomitant rise in adult hemoglobin HbA (α2β2) after birth. 62. : C. Beta thalassemia (β thalassemia) is caused by mutations in the HBB gene on the short arm of chromosome 11 (11p15.4), and is inherited in an autosomal recessive fashion. The severity of the disease depends on the nature of the mutation. Impaired HBB expression over time leads to decreased beta-chain synthesis. The body’s inability to construct new beta chains leads to the underproduction of HbA, with a secondary increase in the production of HbF. Therefore, an individual with beta thalassemia has an increased level of HbF. 63. : A. Beta thalassemia (β thalassemia) is caused by mutations in the HBB gene on the short arm of chromosome 11 (11p15.4) and is inherited in an autosomal recessive fashion. The severity of the disease depends on the nature of the mutation. Impaired HBB expression over time leads to decreased beta-chain synthesis.
The body’s inability to construct new beta chains leads to the underproduction of HbA, with a secondary increase in the production of HbF. Therefore, an individual with beta thalassemia has a decreased level of HbA. 64. : C. HbCS is caused by a mutation in the stop codon of the α2-globin gene, HBA2, that results in poor expression (1% of normal) of an α globin, which has 31 additional amino acids. Hemoglobin Constant Spring (HbCS) is the most common nondeletional α-thalassemia mutation and is an important cause of HbH-like disease in Southeast Asia. The quantity of hemoglobin in the cells is low for two reasons. First, the messenger RNA for hemoglobin Constant Spring is unstable; some is degraded prior to protein synthesis. Second, the Constant Spring alpha-chain protein is itself unstable. The result is a thalassemic phenotype. The designation Constant Spring derives from the isolation of the hemoglobin variant in a family of ethnic Chinese background from the Constant Spring district of Jamaica. Therefore, HbCS is a stop codon mutation in HBA2. 65. : B. HbCS is caused by a mutation in the stop codon of the α2-globin gene, HBA2, that results in poor expression (1% of normal) of an α globin, which has 31 additional amino acids. Hemoglobin Constant Spring (HbCS) is the most common nondeletional α-thalassemia mutation and is an important cause of HbH-like disease in Southeast Asia. There are two copies of the α-globin gene, HBA1 and HBA2, tandemly located on the short arm of chromosome 16. There is no HBA3 in the human genome. Beta thalassemia (β thalassemia) is caused by mutations in the HBB gene on chromosome 11 and is inherited in an autosomal recessive fashion. There is only one HBB gene in humans. Therefore, HbCS is a stop codon mutation in HBA2. 66. : A.
If the couple are both carriers of the α°-thalassemia deletion mutation in which both HBA1 and HBA2 as well as HBZ are deleted (such as αα/--FIL or αα/--THAI), they are not at risk of having offspring with Hb Barts hydrops fetalis syndrome because homozygotes for such mutations are lost shortly after conception as a miscarriage. Therefore, this couple will be at risk for multiple miscarriages. 67. : B. If the couple are both carriers of a deletion involving both HBA1 and HBA2, but only one of them has a deletion that extends into HBZ (such as αα/--SEA and αα/--FIL), the couple is at risk of having offspring with Hb Barts hydrops fetalis syndrome because the single HBZ in the fetus produces sufficient ζ globin for fetal development. Therefore, this couple will be at risk for having offspring with Hb Barts hydrops fetalis syndrome. 68. : C. Two tandem genes for alpha globin (HBA1 and HBA2) are located on the short arm of chromosome 16 (16p13.3). Hemoglobin H (HbH) disease results from deletion or inactivation of three of four α-globin alleles, usually as a result of a compound heterozygous state for α° thalassemia (α-/α-, or αα/--) and α+ thalassemia (αα/α-). HbH disease is characterized by chronic microcytic, hypochromic hemolytic anemia. The severity mainly correlates with the severity of the α° thalassemia. Patients with HbH disease usually have decreased HbA, increased Hb Barts and HbH, normal percentages of HbA2 and HbF, and RBC inclusion bodies. Therefore, this couple will be at risk for having offspring with HbH disease. 69. : A. Two tandem genes for alpha globin (HBA1 and HBA2) are located on the short arm of chromosome 16 (16p13.3) with HBZ. If a couple are both carriers of an α°-thalassemia deletion mutation (αα/--), each of their offspring has a 1/4 risk of having Hb Barts hydrops fetalis syndrome. Therefore, this couple will be at risk for having offspring with Hb Barts hydrops fetalis syndrome. 70. : C.
Alpha thalassemia is a nonneoplastic hematological disorder that reduces the production of hemoglobin. Hemoglobin is the protein in red blood cells that carries oxygen to cells throughout the body. Alpha thalassemia typically results from deletions involving the HBA1 and HBA2 genes on 16p13.3 (see the figure below). The mRNAs produced by HBA1 and HBA2 have identical coding regions and can be distinguished only by their 3′ UTRs. Reciprocal recombination between the Z boxes, which are 3.7 kb apart, or between the X boxes, 4.2 kb apart, gives rise to chromosomes with a single α-globin gene. The two resulting α-thalassemia mutations are referred to as the 3.7-kb rightward deletion (-α3.7) and the 4.2-kb leftward deletion (-α4.2), respectively (see Fig. 6.2). Therefore, the HBA2 gene is most likely deleted in the wife. 71. : C. Alpha thalassemia is a nonneoplastic hematological disorder that reduces the production of hemoglobin. Hemoglobin is the protein in red blood cells that carries oxygen to cells throughout the body. Alpha thalassemia typically results from deletions involving the HBA1 and HBA2 genes on 16p13.3 (see the figure below). The mRNAs produced by HBA1 and HBA2 have identical coding regions and can be distinguished only by their 3′ UTRs. Reciprocal recombination between the Z boxes, which are 3.7 kb apart, or between the X boxes, 4.2 kb apart, gives rise to chromosomes with a single α-globin gene. The two resulting α-thalassemia mutations are referred to, respectively, as the 3.7-kb rightward deletion (-α3.7) and the 4.2-kb leftward deletion (-α4.2) (see Fig. 6.2). Therefore, the HBA2 gene is most likely deleted in the wife. 72. : A. Alpha thalassemia (α thalassemia) is an autosomal recessive disease with two clinically significant forms: hemoglobin Barts hydrops fetalis (Hb Barts) syndrome and hemoglobin H (HbH) disease. HBA1, the gene encoding α1-globin, and HBA2, the gene encoding α2-globin, are the two genes associated with α thalassemia.
Molecular genetic testing of HBA1 and HBA2 detects deletions in about 90% and point mutations in about 10% of affected individuals. The six common deletions are -α3.7, -α4.2, −−SEA, −−FIL, −−MED, and −−THAI. Therefore, the detection rate of the six common deletions is approximately 90%. 73. : C. Alpha thalassemia (α thalassemia) is an autosomal recessive disease with two clinically significant forms: hemoglobin Barts hydrops fetalis (Hb Barts) syndrome and hemoglobin H (HbH) disease. HBA1, the gene encoding α1-globin, and HBA2, the gene encoding α2-globin, are the two genes associated with α thalassemia. Molecular genetic testing of HBA1 and HBA2 detects deletions in about 90% and point mutations in about 10% of affected individuals. The six common deletions are -α3.7, -α4.2, −−SEA, −−FIL, −−MED, and −−THAI. Therefore, the residual risk for this couple of having a child with α thalassemia is 1/10 × 1/10 × 1/4 = 1/400. 74. : E. The clinical manifestations of alpha thalassemia (α thalassemia) depend on the degree of α-globin chain deficiency relative to β-globin production. The different α-thalassemia mutations vary widely in severity. From most to least severe, respectively, these are nondeletion HBA2, -α3.7 (because of a compensatory increase of the α-globin gene output from the remaining HBA1), and nondeletion HBA1. For the -α4.2 deletion, evidence is inconclusive for a compensatory increase in the expression of the remaining α gene. The most common nondeletion mutation, which is frequently seen in Southeast Asia, is Hb Constant Spring (HbCS, c.427T>C). It is a mutation of the termination codon of HBA2, leading to an elongated protein chain. This mutation leads to the production of an α-globin chain elongated by 31 amino acids. HbCS is produced in very small amounts because its mRNA is unstable. The phenotype of α thalassemia may be modified by triplication or quadruplication of the α-globin genes on one chromosome.
Therefore, p.Asp75Gly in HBA1 may be predicted to have milder symptoms than the remaining choices if the effects of HBB expression and extra copies of HBA1/HBA2 are not considered. 75. : C. The clinical manifestations of alpha thalassemia (α thalassemia) depend on the degree of α-globin chain deficiency relative to β-globin production. The different α-thalassemia mutations vary widely in severity. From most to least severe, respectively, these are nondeletion HBA2, -α3.7 (because of a compensatory increase of the α-globin gene output from the remaining HBA1), and nondeletion HBA1. For the -α4.2 deletion, evidence is inconclusive for a compensatory increase in the expression of the remaining α gene. The most common nondeletion mutation, which is frequently seen in Southeast Asia, is Hb Constant Spring (HbCS, c.427T>C). It is a mutation of the termination codon of HBA2, leading to an elongated protein chain. This mutation leads to the production of an α-globin chain elongated by 31 amino acids. HbCS is produced in very small amounts because its mRNA is unstable. The phenotype of α thalassemia may be modified by triplication or quadruplication of the α-globin genes on one chromosome. Therefore, Hb Constant Spring may be predicted to have more severe symptoms than the remaining choices if the effects of HBB expression and extra copies of HBA1/HBA2 are not considered. 76. : D. The clinical manifestations of alpha thalassemia (α thalassemia) depend on the degree of α-globin chain deficiency relative to β-globin production. The six most common deletions for α thalassemia are -α3.7, -α4.2, −−SEA, −−FIL, −−MED, and −−THAI. The -α4.2 deletion removes the entire HBA2. The -α3.7 deletion gives rise to a hybrid HBA2/HBA1 gene. The remaining four common deletions (−−SEA, −−FIL, −−MED, and −−THAI) involve both HBA1 and HBA2, and some extend into HBZ. The different α-thalassemia mutations vary widely in severity.
From most to least severe, respectively, these are nondeletion HBA2, -α3.7 (because of a compensatory increase of the α-globin gene output from the remaining HBA1), and nondeletion HBA1. For the -α4.2 deletion, evidence is inconclusive for a compensatory increase in the expression of the remaining α-globin gene. The most common nondeletion mutation, which is frequently seen in Southeast Asia, is Hb Constant Spring (HbCS, c.427T>C). It is a mutation of the termination codon of HBA2, leading to an elongated protein chain. This mutation leads to the production of an α-globin chain elongated by 31 amino acids. HbCS is produced in very small amounts because its mRNA is unstable. The phenotype of α thalassemia may be modified by triplication or quadruplication of the α-globin genes on one chromosome. Therefore, −−SEA may be predicted to have more severe symptoms than the remaining choices if the effects of HBB expression and extra copies of HBA1/HBA2 are not considered. 77. : D. Alpha thalassemia is an autosomal recessive nonneoplastic hematological disorder characterized by a reduced production of hemoglobin. Hemoglobin is the protein in red blood cells that carries oxygen to cells throughout the body. Alpha thalassemia typically results from deletions involving the HBA1 and HBA2 genes. HBA1, encoding α1-globin, and HBA2, encoding α2-globin, are the two genes associated with α thalassemia. They are localized to the telomeric region of chromosome 16p in a cluster containing the embryonically expressed HBZ encoding ζ globin and a cis-acting regulatory element, HS-40, located 40 kb upstream of HBZ. Hemoglobin Barts hydrops fetalis (Hb Barts) syndrome is characterized by fetal onset of generalized edema, ascites, pleural and pericardial effusions, and severe hypochromic anemia, in the absence of ABO or Rh blood group incompatibility.
It is usually detected by ultrasonography at 22–28 weeks’ gestation and can be suspected in an at-risk pregnancy at 13–14 weeks’ gestation when increased nuchal thickness, possible placental thickness, increased middle cerebral artery velocity, and increased cardiothoracic ratio are present. Death in the neonatal period is almost inevitable. Patients with Hb Barts syndrome have only hemoglobin Barts (Hb Barts). All four α-globin alleles are deleted (--/--) or dysfunctional (inactivated). The newborn in this question had hemoglobin Barts hydrops fetalis (Hb Barts) syndrome (--/--). The mother was apparently healthy. Therefore, it is most likely that the genotype of the mother was --/αα. 78. : D. Alpha thalassemia is an autosomal recessive nonneoplastic hematological disorder characterized by reduced production of hemoglobin. Hemoglobin is the protein in red blood cells that carries oxygen to cells throughout the body. Alpha thalassemia typically results from deletions involving the HBA1 and HBA2 genes. HBA1, encoding α1-globin, and HBA2, encoding α2-globin, are the two genes associated with α thalassemia. They are localized to the telomeric region of chromosome 16p in a cluster containing the embryonically expressed HBZ encoding ζ globin and a cis-acting regulatory element, HS-40, located 40 kb upstream of HBZ. Hemoglobin H (HbH) disease results from deletion or inactivation of three of four α-globin alleles, usually as a result of a compound heterozygous state for α° thalassemia (--/αα) and α+ thalassemia (-α/αα). Hb Barts syndrome results from deletion or inactivation of all four α-globin alleles (--/--), usually as a result of the homozygous state for α° thalassemia (--/αα).
In Southeast Asian (in particular) and Mediterranean populations, HbH disease and hemoglobin Barts (γ4) are common because of the frequent coinheritance of an allele lacking both α-globin genes (--/αα) and another allele lacking one α-globin gene (-α/αα). The child in this question had hemoglobin H (HbH) disease (--/-α). Therefore, the mother most likely carried the familial --/αα alleles, because her sister’s unborn child had the --/-- genotype. Her husband most likely has the -α/αα genotype, since he is apparently healthy with no history of thalassemia. 79. : B. Alpha thalassemia is an autosomal recessive nonneoplastic hematological disorder characterized by reduced production of hemoglobin. Hemoglobin is the protein in red blood cells that carries oxygen to cells throughout the body. Alpha thalassemia typically results from deletions involving the HBA1 and HBA2 genes. HBA1, encoding α1-globin, and HBA2, encoding α2-globin, are the two genes associated with α thalassemia. They are localized to the telomeric region of chromosome 16p in a cluster containing the embryonically expressed HBZ encoding ζ globin and a cis-acting regulatory element, HS-40, located 40 kb upstream of HBZ. Hemoglobin H (HbH) disease results from deletion or inactivation of three of four α-globin alleles, usually as a result of a compound heterozygous state for α° thalassemia (--/αα) and α+ thalassemia (-α/αα). Hb Barts syndrome results from the deletion or inactivation of all four α-globin alleles (--/--), usually as a result of the homozygous state for α° thalassemia (--/αα). In Southeast Asian (in particular) and Mediterranean populations, HbH disease and hemoglobin Barts (γ4) are common because of the frequent coinheritance of an allele lacking both α-globin genes (--/αα) and another allele lacking one α-globin gene (-α/αα). The child in this question has hemoglobin H (HbH) disease (--/-α).
Therefore, the mother most likely carried the familial --/αα alleles, because her sister’s unborn child had the --/-- genotype. Her husband most likely has the -α/αα genotype, since he is apparently healthy with no history of thalassemia. 80. : D. Thalassemia is a nonneoplastic hematological disorder that reduces the production of hemoglobin. Hemoglobin is the protein in red blood cells that carries oxygen to cells throughout the body. Alpha thalassemia typically results from deletions involving the HBA1 and HBA2 genes. HBA1, encoding α1-globin, and HBA2, encoding α2-globin, are the two genes associated with α thalassemia. They are localized to the telomeric region of chromosome 16p (16p13.3) in a cluster containing the embryonically expressed HBZ encoding ζ globin and a cis-acting regulatory element, HS-40, located 40 kb upstream of HBZ. The hemoglobin gamma (HBG), hemoglobin delta (HBD), and hemoglobin beta (HBB) genes are located on the short arm of chromosome 11 (11p15.4) from terminal to proximal, respectively. Beta thalassemia is more common than thalassemias due to variants in HBG and HBD. In beta (β) thalassemia, point mutations in HBB are more common than deletions. Sequence analysis may detect 99% of the pathogenic mutations in HBB for β thalassemia. Alpha and beta thalassemias are more common than delta or gamma thalassemias. Therefore, it is most likely that HBB is mutated in the wife. 81. : D. Beta thalassemia (β thalassemia) is characterized by reduced synthesis of the hemoglobin subunit beta that results in microcytic hypochromic anemia, an abnormal peripheral-blood smear with nucleated red blood cells, and reduced amounts of hemoglobin A (HbA) on hemoglobin analysis. Beta thalassemia, an autosomal recessive disease, results from variants in the HBB gene on 11p15.4.
Some variants in HBB may cause hemoglobinopathies other than β thalassemia, such as:
• Hemoglobin S (HbS; Glu6Val pathogenic variant)
• Hemoglobin C (HbC; Glu6Lys pathogenic variant)
• Hemoglobin D (D-Punjab; Glu121Gln pathogenic variant)
• Hemoglobin O (O-Arab; Glu121Lys pathogenic variant)
• Hemoglobin E (HbE; Glu26Lys pathogenic variant)
The p.Lys8ValfsTer13 and p.Ser9ValfsTer13 mutations are frameshift variants in HBB that cause β thalassemia. Therefore, in this case it would be most likely that BJ was compound heterozygous for p.Ser9ValfsTer13 and p.Glu26Lys, which results in hemoglobin E/β thalassemia. 82. : E. Beta thalassemia (β thalassemia) is characterized by reduced synthesis of the hemoglobin subunit beta that results in microcytic hypochromic anemia, an abnormal peripheral-blood smear with nucleated red blood cells, and reduced amounts of hemoglobin A (HbA) on hemoglobin analysis. Beta thalassemia, an autosomal recessive disease, results from variants in the HBB gene on 11p15.4. Some variants in HBB may cause hemoglobinopathies other than β thalassemia, such as:
• Hemoglobin S (HbS; Glu6Val pathogenic variant)
• Hemoglobin C (HbC; Glu6Lys pathogenic variant)
• Hemoglobin D (D-Punjab; Glu121Gln pathogenic variant)
• Hemoglobin O (O-Arab; Glu121Lys pathogenic variant)
• Hemoglobin E (HbE; Glu26Lys pathogenic variant)
The p.Lys8ValfsTer13 and p.Ser9ValfsTer13 mutations are frameshift variants in HBB that cause β thalassemia. Therefore, in this case it would be most likely that Marina was compound heterozygous for p.Lys8ValfsTer13 and p.Ser9ValfsTer13, which results in β thalassemia major. 83. : E. Beta thalassemia (β thalassemia) is characterized by reduced synthesis of the hemoglobin subunit beta that results in microcytic hypochromic anemia, an abnormal peripheral-blood smear with nucleated red blood cells, and reduced amounts of hemoglobin A (HbA) on hemoglobin analysis. Beta thalassemia, an autosomal recessive disease, results from variants in the HBB gene on 11p15.4.
Some variants in HBB may cause hemoglobinopathy other than β thalassemia, such as: : Hemoglobin S (HbC; Glu6Val pathogenic variant) : Hemoglobin C (HbC; Glu6Lys pathogenic variant) : Hemoglobin D (D-Punjab; Glu121Gln pathogenic variant) : Hemoglobin O (O-Arab; Glu121Lys pathogenic variant) : Hemoglobin E (HbE; Glu26Lys pathogenic variant) Nonsense, frameshift, or sometimes splicing mutations usually result in β° thalassemia alleles (complete absence of hemoglobin subunit beta production). Pathogenic variants in the promoter area (either the CACCC or the TATA box), the polyadenylation signal, or the 5′–3′ untranslated region, or by splicing abnormalities usually results in β+ thalassemia alleles (residual output of globin beta chains). c.-138C>A is a transcriptional mutant in the proximal CACC box of the promoter region. c.91+6T>C and p.Ala27Ser are splicing variants, mild or silent HBB pathogenic variants for β thalassemia. The c.\6C>G mutation is a mild pathogenic variant in 3′ UTR of HBB. The p.Lys8ValfsTer13 and p.Ser9ValfsTer13 mutations are nonsense variants in HBB. Therefore, in this case it would be most likely that Marina had compound heterozygous p.Lys8ValfsTer13 and p.Ser9ValfsTer13, resulting in β thalassemia major. 84. : A. Beta thalassemia (β thalassemia) is characterized by reduced synthesis of the hemoglobin subunit beta that results in microcytic hypochromic anemia, an abnormal peripheral-blood smear with nucleated red blood cells, and reduced amounts of hemoglobin A (HbA) on hemoglobin analysis. Beta thalassemia, an autosomal recessive disease, results from the variants in the HBB gene on 11p15.4. β thalassemia may be divided into thalassemia major and thalassemia minor. Individuals with thalassemia major have severe anemia and hepatosplenomegaly; they usually come to medical attention within the first 2 years of life. Without treatment, affected children have severe failure to thrive and shortened life expectancy. 
Individuals with thalassemia intermedia or minor present later and have milder anemia that only rarely requires transfusion. Nonsense, frameshift, or sometimes splicing mutations usually result in β° thalassemia alleles (complete absence of hemoglobin subunit beta production). Pathogenic variants in the promoter area (either the CACCC or the TATA box), the polyadenylation signal, or the 5′ or 3′ untranslated regions, or splicing abnormalities, usually result in β+ thalassemia alleles (residual output of globin beta chains). The p.Lys8ValfsTer13 and p.Ser9ValfsTer13 mutations are frameshift variants in HBB. HbH and Hb Barts are different forms of α thalassemia. Therefore, it would be most likely that this patient had β thalassemia major. 85. : A. Beta thalassemia (β thalassemia) is characterized by reduced synthesis of the hemoglobin subunit beta that results in microcytic hypochromic anemia, an abnormal peripheral-blood smear with nucleated red blood cells, and reduced amounts of hemoglobin A (HbA) on hemoglobin analysis. Beta thalassemia is an autosomal recessive disease that results from variants in the HBB gene on 11p15.4. Some variants in HBB may cause hemoglobinopathies other than β thalassemia, such as:
• Hemoglobin S (HbS; Glu6Val pathogenic variant)
• Hemoglobin C (HbC; Glu6Lys pathogenic variant)
• Hemoglobin D (D-Punjab; Glu121Gln pathogenic variant)
• Hemoglobin O (O-Arab; Glu121Lys pathogenic variant)
• Hemoglobin E (HbE; Glu26Lys pathogenic variant)
β thalassemia may be divided into thalassemia major and thalassemia minor. Individuals with thalassemia major have severe anemia and hepatosplenomegaly; they usually come to medical attention within the first 2 years of life. Without treatment, affected children have severe failure to thrive and shortened life expectancy. Individuals with thalassemia intermedia or minor present later and have milder anemia that only rarely requires transfusion.
Nonsense, frameshift, or sometimes splicing mutations usually result in β° thalassemia alleles (complete absence of hemoglobin subunit beta production). Pathogenic variants in the promoter area (either the CACCC or the TATA box), the polyadenylation signal, or the 5′ or 3′ untranslated regions, or splicing abnormalities, usually result in β+ thalassemia alleles (residual output of globin beta chains). The c.-138C>A mutation is a transcriptional mutant in the proximal CACC box of the promoter region. The c.-81A>G mutation is a mild pathogenic variant in the TATA box. The p.Lys8ValfsTer13 and p.Ser9ValfsTer13 mutations are frameshift variants in HBB. In this case, the patient has β thalassemia minor, since she had HbA (which also rules out HbS, HbC, HbD, and HbE). Therefore, it is most likely that the patient is compound heterozygous for c.-138C>A and c.-81A>G, which results in β thalassemia minor. 86. : A. Beta thalassemia (β thalassemia) is characterized by reduced synthesis of the hemoglobin subunit beta that results in microcytic hypochromic anemia, an abnormal peripheral-blood smear with nucleated red blood cells, and reduced amounts of hemoglobin A (HbA) on hemoglobin analysis. Beta thalassemia is an autosomal recessive disease that results from variants in the HBB gene on 11p15.4. Some variants in HBB may cause hemoglobinopathies other than β thalassemia, such as:
• Hemoglobin S (HbS; Glu6Val pathogenic variant)
• Hemoglobin C (HbC; Glu6Lys pathogenic variant)
• Hemoglobin D (D-Punjab; Glu121Gln pathogenic variant)
• Hemoglobin E (HbE; Glu26Lys pathogenic variant)
• Hemoglobin O (O-Arab; Glu121Lys pathogenic variant)
Nonsense, frameshift, or sometimes splicing mutations usually result in β° thalassemia alleles (complete absence of hemoglobin subunit beta production).
Pathogenic variants in the promoter area (either the CACCC or the TATA box), the polyadenylation signal, or the 5′ or 3′ untranslated regions, or splicing abnormalities, usually result in β+ thalassemia alleles (residual output of beta-globin chains). The p.Ala27Ser mutation is a splicing variant, a mild or silent HBB pathogenic variant for β thalassemia. The c.-138C>A mutation is a transcriptional mutant in the proximal CACC box of the promoter region. The p.Lys8ValfsTer13 and p.Ser9ValfsTer13 mutations are frameshift variants in HBB. In this case, the patient has β thalassemia minor with 30% residual HbA. Therefore, it is most likely that the patient is compound heterozygous for c.-138C>A and p.Ala27Ser, which results in β thalassemia minor. 87. : B. Beta thalassemia (β thalassemia) is characterized by reduced synthesis of the hemoglobin subunit beta that results in microcytic hypochromic anemia, an abnormal peripheral-blood smear with nucleated red blood cells, and reduced amounts of hemoglobin A (HbA) on hemoglobin analysis. Beta thalassemia is an autosomal recessive disease that results from variants in the HBB gene on 11p15.4. β thalassemia may be divided into thalassemia major and thalassemia minor. Individuals with thalassemia major have severe anemia and hepatosplenomegaly; they usually come to medical attention within the first 2 years of life. Without treatment, affected children have severe failure to thrive and shortened life expectancy. Individuals with thalassemia intermedia or minor present later and have milder anemia that only rarely requires transfusion. Nonsense, frameshift, or sometimes splicing mutations usually result in β° thalassemia alleles (complete absence of hemoglobin subunit beta production). Pathogenic variants in the promoter area (either the CACCC or the TATA box), the polyadenylation signal, or the 5′ or 3′ untranslated regions, or splicing abnormalities, usually result in β+ thalassemia alleles (residual output of beta-globin chains).
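The β°/β+ allele logic used across answers 83–87 can be sketched as a small lookup-and-combine routine. This is an illustrative simplification, not from the source: the variant-to-class mapping below is taken from the statements in the text, but real genotype–phenotype correlation in β thalassemia is considerably more nuanced, and the phenotype labels returned here are rough categories only.

```python
# Illustrative sketch (not a clinical tool): classify the HBB variants
# discussed above as beta0 (no beta-globin output) or beta+ (residual
# output), then combine two alleles into a rough phenotype category.
ALLELE_CLASS = {
    "p.Lys8ValfsTer13": "beta0",   # frameshift
    "p.Ser9ValfsTer13": "beta0",   # frameshift
    "c.-138C>A":        "beta+",   # proximal CACC-box promoter variant
    "c.-81A>G":         "beta+",   # TATA-box variant
    "p.Ala27Ser":       "beta+",   # mild splicing variant
}

def rough_phenotype(allele1, allele2):
    """Combine two HBB alleles into a rough beta thalassemia category."""
    classes = sorted((ALLELE_CLASS[allele1], ALLELE_CLASS[allele2]))
    if classes == ["beta0", "beta0"]:
        return "beta thalassemia major (no HbA)"
    if classes == ["beta+", "beta+"]:
        return "beta thalassemia minor/intermedia (residual HbA)"
    return "beta0/beta+ (variable, often severe)"

print(rough_phenotype("p.Lys8ValfsTer13", "p.Ser9ValfsTer13"))
print(rough_phenotype("c.-138C>A", "c.-81A>G"))
```

Run on the genotypes from the questions, the sketch reproduces the text's conclusions: two frameshift (β°) alleles give major disease with no HbA, while two promoter/splicing (β+) alleles leave residual HbA and a milder phenotype.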
The c.-138C>A mutation is a transcriptional mutant in the proximal CACC box of the promoter region. The c.-81A>G mutation is a mild pathogenic variant in the TATA box. HbH and Hb Barts are different forms of α thalassemia associated with pathogenic variants in HBA1 and/or HBA2. This patient has 35% residual HbA, and she has compound heterozygous mutations in the promoter region of HBB for β thalassemia. Therefore, β thalassemia minor is the most appropriate diagnosis in this patient. 88. : D. Beta thalassemia major causes patients to have no HbA (α2β2) but persistent HbF (α2γ2). Fetal hemoglobin (HbF, α2γ2) is the main oxygen transport protein in the human fetus during the last 7 months of development in the uterus and persists in the newborn until roughly age 6 months. It constitutes approximately 60%–80% of total hemoglobin in the full-term newborn and then is gradually replaced by HbA. Hemoglobin A (HbA, α2β2) is the most common human hemoglobin tetramer in adults, comprising over 97% of the total red-blood-cell hemoglobin. Homozygous p.Glu6Val in HBB results in sickle cell disease. Hemoglobin Constant Spring (HbCS) is a variant in which a mutation in HBA2 produces an α-globin chain that is abnormally long. The -α20.5 deletion removes HBA2 and part of HBA1 and produces an α° thalassemia allele. This full-term Pakistani patient had α globin, as indicated by the presence of HbF (α2γ2). The absence of any adult hemoglobin (HbA) likely indicates beta thalassemia major caused by homozygous mutations in HBB, that is, two null beta alleles. β° thalassemia alleles (complete absence of hemoglobin subunit beta production) result from nonsense, frameshift, or (sometimes) splicing mutations. Therefore, it would be most likely that this patient had beta thalassemia major with homozygous p.Gln39Ter in HBB. 89. : D. Hemoglobin E (HbE) results from a thalassemic structural variant in HBB. It is characterized by the presence of an abnormal structure as well as a biosynthetic defect.
The nucleotide substitution at codon 26, producing the HbE variant (α2β2, p.Glu26Lys), activates a potential cryptic RNA splice site, resulting in alternative splicing at this position. Because the normal donor site has to compete with this new site, the level of normally spliced β messenger RNA carrying the HbE mutation is reduced, resulting in the clinical phenotype of a mild form of β thalassemia. A homozygous state for HbE results in a mild hemolytic microcytic anemia. Compound heterozygosity for β thalassemia and HbE results in a wide range of often severe, but sometimes mild or even clinically asymptomatic, clinical phenotypes. Therefore, the βE mutation reduces the quantity of the structurally abnormal HbE synthesized by creating an abnormal cryptic splice site. 90. : B. Wilson disease is a disorder of copper metabolism that can present with hepatic, neurologic, or psychiatric disturbances, or a combination of these, in individuals ranging from 3 to over 50 years of age. Kayser–Fleischer rings, frequently present, result from copper deposition in Descemet’s membrane of the cornea and reflect a high degree of copper storage in the body. Wilson disease, an autosomal recessive disease, results from mutations in the ATP7B gene. Variants in ATP7A result in X-linked recessive Menkes disease. Variants in HFE are associated with hereditary hemochromatosis. ATP8A and ATP8B are not recognized genes in Homo sapiens. Therefore, ATP7B would most likely be sequenced in order to confirm the diagnosis in this patient. 91. : C. Hereditary hemochromatosis, Menkes disease, and Wilson disease are all caused by dysfunction of metal metabolism. Menkes disease is an X-linked recessive disorder, while hereditary hemochromatosis and Wilson disease are autosomal recessive disorders. Germline pathogenic variants in ATP7B result in Wilson disease. Germline pathogenic variants in ATP7A result in Menkes disease.
Germline pathogenic variants in HFE result in hereditary hemochromatosis. The c.3207C>A (p.His1069Gln) mutation is the common mutation in ATP7B in populations of European origin and is associated with neurologic or hepatic disease and a mean age at onset of about 20 years. The c.2333G>T (p.Arg778Leu) mutation in exon 8 is the most common mutation in the Asian population, found at a high frequency in all Chinese and ethnically related populations studied. Therefore, compound heterozygous c.2333G>T (p.Arg778Leu) and c.3207C>A (p.His1069Gln) mutations in ATP7B result in Wilson disease. 92. : B. Wilson disease is a disorder of copper metabolism that can present with hepatic, neurologic, or psychiatric disturbances, or a combination of these, in individuals ranging from 3 to over 50 years of age. Kayser–Fleischer rings, frequently present, result from copper deposition in Descemet’s membrane of the cornea and reflect a high degree of copper storage in the body. Wilson disease, an autosomal recessive disease, results from mutations in the ATP7B gene. This patient’s parents were obligate carriers. Therefore, this female patient’s sister had a 25% (1/4) risk of being affected, too. 93. : C. Wilson disease is a disorder of copper metabolism that can present with hepatic, neurologic, or psychiatric disturbances, or a combination of these, in individuals ranging from 3 to over 50 years of age. Kayser–Fleischer rings, frequently present, result from copper deposition in Descemet’s membrane of the cornea and reflect a high degree of copper storage in the body. Wilson disease, an autosomal recessive disease, results from mutations in the ATP7B gene. This patient’s parents were obligate carriers. The 12-year-old sister did not have symptoms, but her copper level was not tested. An elevated copper level could not be ruled out, but she remained asymptomatic. Therefore, this female patient’s sister had a 50% (1/2) chance of being a silent carrier. 94. : D.
Wilson disease is a disorder of copper metabolism that can present with hepatic, neurologic, or psychiatric disturbances, or a combination of these, in individuals ranging from 3 to over 50 years of age. Kayser–Fleischer rings, frequently present, result from copper deposition in Descemet’s membrane of the cornea and reflect a high degree of copper storage in the body. Wilson disease, an autosomal recessive disease, results from mutations in the ATP7B gene. This patient’s parents were obligate carriers. The 12-year-old sister did not have symptoms and did not have an elevated copper level. So there was no indication that the sister had Wilson disease. Therefore, given that she is unaffected, this female patient’s sister had a 2/3 chance of being a silent carrier. 95. : B. Hereditary hemochromatosis, Menkes disease, and Wilson disease are all caused by dysfunction of metal metabolism. Menkes disease is X-linked recessive, while hereditary hemochromatosis and Wilson disease are autosomal recessive disorders. Germline pathogenic variants in ATP7A result in Menkes disease. Germline pathogenic variants in ATP7B result in Wilson disease. Germline pathogenic variants in HFE result in hereditary hemochromatosis. Therefore, it would be most likely that this newborn boy had Menkes disease caused by the p.Ser637Leu mutation in ATP7A. 96. : A. Infants with classic Menkes disease appear healthy until age 2–3 months, when loss of developmental milestones, hypotonia, seizures, and failure to thrive occur. The diagnosis is usually suspected when infants exhibit typical neurological changes and concomitant characteristic changes of the hair (short, sparse, coarse, twisted, and often lightly pigmented). Temperature instability and hypoglycemia may be present in the neonatal period. Death usually occurs by age 3 years.
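The autosomal recessive risk figures used in answers 92–94 above (1/4 risk of being affected, 1/2 prior chance of being a carrier, and 2/3 chance of being a carrier given that a sib is unaffected) all follow from enumerating the cross between two carrier parents. A minimal, purely illustrative sketch:

```python
from itertools import product
from fractions import Fraction

# Both parents are obligate carriers (Aa) of an autosomal recessive
# condition. Enumerate the four equally likely offspring genotypes.
offspring = [a + b for a, b in product("Aa", repeat=2)]  # AA, Aa, aA, aa

# Prior risk of an affected (aa) child: 1/4.
p_affected = Fraction(offspring.count("aa"), len(offspring))
print(p_affected)

# For a sib known to be unaffected, condition on genotype != aa:
# of the three remaining equally likely genotypes, two are carriers.
unaffected = [g for g in offspring if g != "aa"]
carriers = [g for g in unaffected if "a" in g]
print(Fraction(len(carriers), len(unaffected)))
```

Without the phenotype information (as in answer 93, where the sister was not tested and carrier status was assessed directly), the unconditioned carrier chance is 2/4 = 1/2; conditioning on "unaffected" removes the aa outcome and raises it to 2/3, as in answer 94.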
Menkes disease, an X-linked recessive disease, results from mutations in the ATP7A gene. Germline pathogenic variants in ATP7B result in autosomal recessive Wilson disease. Germline pathogenic variants in HFE are associated with hereditary hemochromatosis. ATP8A and ATP8B are not recognized as genes in Homo sapiens. Therefore, ATP7A would most likely be sequenced in order to confirm the diagnosis in this patient. 97. : B. Hemochromatosis is a group of nonneoplastic hematological disorders classified by the age at onset and other factors, such as genetic cause and mode of inheritance. HFE-associated hereditary hemochromatosis (HFE-HH) is the most common form of the disorder. Ferroportin-related hereditary hemochromatosis is an adult-onset disorder. Men with these two types typically develop symptoms between the ages of 40 and 60, and women usually develop symptoms after menopause. HFE-HH is an autosomal recessive disease characterized by excessive storage of iron in the liver, skin, pancreas, heart, joints, and testes. In untreated individuals, early symptoms may include abdominal pain, weakness, lethargy, and weight loss. The risk of cirrhosis is significantly increased when the serum ferritin is higher than 1000 ng/mL. Other findings may include a progressive increase in skin pigmentation, diabetes mellitus, congestive heart failure and/or arrhythmias, arthritis, and hypogonadism. Clinical HFE-HH is more common in men than in women. The diagnosis of clinical HFE-HH in individuals with clinical findings consistent with HFE-HH and the diagnosis of biochemical HFE-HH are typically based on findings of an elevated transferrin-iron saturation of 45% or higher, a serum ferritin concentration above the upper limit of normal (>300 ng/mL in men and >200 ng/mL in women), and two HFE-HH-causing mutations on confirmatory HFE molecular genetic testing. Juvenile hereditary hemochromatosis is a juvenile-onset autosomal recessive disease.
Iron accumulation begins early in life, and symptoms may begin to appear in childhood. By age 20, decreased or absent secretion of sex hormones is evident. Females usually begin menstruation in a normal manner, but menses stop after a few years. Males may experience delayed puberty or sex hormone deficiency symptoms such as impotence. If the disorder is untreated, heart disease is evident by age 30. Variants in two genes, HJV and HAMP, result in juvenile hemochromatosis. HJV (HFE2) (locus name HFE2A), encoding hemojuvelin, accounts for more than 90% of cases. HAMP (HEPC) (locus name HFE2B), encoding hepcidin, accounts for fewer than 10% of cases. TFR2-related hereditary hemochromatosis (TFR2-HH) is an autosomal recessive disease. Onset of TFR2-HH is usually intermediate between HFE-HH and juvenile hereditary hemochromatosis. Symptoms of TFR2-HH generally begin before age 30. TFR2-HH is characterized by increased intestinal iron absorption resulting in iron accumulation in the liver, heart, pancreas, and endocrine organs. The age at onset is earlier than in HFE-associated HH. Some individuals present in the second decade, and others present as adults with fatigue and arthralgia and/or organ involvement, including liver cirrhosis, diabetes mellitus, and arthropathy. In other individuals, TFR2-HH may not be progressive even if untreated. Ferroportin-related hereditary hemochromatosis is an autosomal dominant late-onset iron overload disorder. Iron storage affects reticuloendothelial rather than parenchymal cells. It is caused by mutations in SLC40A1, encoding ferroportin. This patient and his brother were diagnosed in their 50s. The disease seems to be inherited in an autosomal recessive manner, since the parents are related and are apparently healthy. Therefore, HFE most likely harbors germline pathogenic variant(s) in this patient. 98. : E.
Hemochromatosis is a group of nonneoplastic hematological disorders classified by the age at onset and other factors, such as genetic cause and mode of inheritance. HFE-associated hereditary hemochromatosis (HFE-HH) is the most common form of the disorder. Ferroportin-related hereditary hemochromatosis is an adult-onset disorder. Men with these two types typically develop symptoms between the ages of 40 and 60, and women usually develop symptoms after menopause. HFE-HH is an autosomal recessive disease characterized by excessive storage of iron in the liver, skin, pancreas, heart, joints, and testes. In untreated individuals, early symptoms may include abdominal pain, weakness, lethargy, and weight loss. The risk of cirrhosis is significantly increased when the serum ferritin is higher than 1000 ng/mL. Other findings may include progressive increase in skin pigmentation, diabetes mellitus, congestive heart failure and/or arrhythmias, arthritis, and hypogonadism. Clinical HFE-HH is more common in men than in women. The diagnosis of clinical HFE-HH in individuals with clinical findings consistent with HFE-HH and the diagnosis of biochemical HFE-HH are typically based on finding an elevated transferrin-iron saturation of 45% or higher, a serum ferritin concentration above the upper limit of normal (i.e., >300 ng/mL in men and >200 ng/mL in women), and two HFE-HH-causing mutations on confirmatory HFE molecular genetic testing. Juvenile hereditary hemochromatosis is a juvenile-onset autosomal recessive disease. Iron accumulation begins early in life, and symptoms may begin to appear in childhood. By age 20, decreased or absent secretion of sex hormones is evident. Females usually begin menstruation in a normal manner, but menses stop after a few years. Males may experience delayed puberty or sex hormone deficiency symptoms such as impotence. If the disorder is untreated, heart disease is evident by age 30.
Variants in two genes, HJV and HAMP, result in juvenile hemochromatosis. HJV (HFE2) (locus name HFE2A), encoding hemojuvelin, accounts for more than 90% of cases. HAMP (HEPC) (locus name HFE2B), encoding hepcidin, accounts for fewer than 10% of cases. TFR2-related hereditary hemochromatosis (TFR2-HH) is an autosomal recessive disease. Onset of TFR2-HH is usually intermediate between HFE-HH and juvenile hereditary hemochromatosis. Symptoms of TFR2-HH generally begin before age 30. TFR2-HH is characterized by increased intestinal iron absorption resulting in iron accumulation in the liver, heart, pancreas, and endocrine organs. The age at onset is earlier than in HFE-associated HH. Some individuals present in the second decade, and others present as adults with fatigue and arthralgia and/or organ involvement, including liver cirrhosis, diabetes mellitus, and arthropathy. In other individuals, TFR2-HH may not be progressive even if untreated. Ferroportin-related hereditary hemochromatosis is an autosomal dominant late-onset iron overload disorder. Iron storage affects reticuloendothelial rather than parenchymal cells. It is caused by mutations in SLC40A1, encoding ferroportin. This patient and his brother were diagnosed in their 50s. The disease seems to be inherited in an autosomal recessive manner, since the parents are related and are apparently healthy. Therefore, SLC40A1, associated with autosomal dominant hemochromatosis, is the gene LEAST likely to harbor germline pathogenic variant(s) in this patient. 99. : C. Juvenile hemochromatosis is characterized by onset of severe iron overload occurring typically in the first to third decades of life. Prominent clinical features include hypogonadotropic hypogonadism, cardiomyopathy, arthropathy, and liver fibrosis or cirrhosis. Juvenile hemochromatosis is inherited in an autosomal recessive manner.
The two genes in which mutations are known to cause juvenile hemochromatosis are HJV, encoding hemojuvelin, accounting for more than 90% of cases, and HAMP, encoding hepcidin, accounting for fewer than 10% of cases. Clinically serious iron loading resulting in congestive heart failure at age 21 is most likely to be seen in juvenile hemochromatosis. The more common hereditary hemochromatosis, HFE-associated hereditary hemochromatosis, does not generally result in major clinical manifestations until at least the fourth decade of life. Ferroportin-related hereditary hemochromatosis is an adult-onset disease. It can result in severe iron loading manifestations, but it is autosomal dominant, and often the transferrin saturation is normal despite substantial iron overload. TFR2-related hereditary hemochromatosis has a similar presentation to HFE-HH; the age at onset is earlier than in HFE-HH, and progression is slower than in juvenile HH. It is caused by homozygous or compound heterozygous mutations in TFR2, which encodes transferrin receptor 2. Therefore, HJV, associated with juvenile hemochromatosis, most likely harbors germline pathogenic variant(s) in this patient. 100. : A. HFE-associated hereditary hemochromatosis (HFE-HH) is an autosomal recessive nonneoplastic hematological disorder characterized by excessive storage of iron in the liver, skin, pancreas, heart, joints, and testes. In untreated individuals, early symptoms may include abdominal pain, weakness, lethargy, and weight loss. The risk of cirrhosis is significantly increased when the serum ferritin is higher than 1000 ng/mL. Other findings may include a progressive increase in skin pigmentation, diabetes mellitus, congestive heart failure and/or arrhythmias, arthritis, and hypogonadism. Clinical HFE-HH is more common in men than in women.
The diagnosis of clinical HFE-HH in individuals with clinical findings consistent with HFE-HH and the diagnosis of biochemical HFE-HH are typically based on finding an elevated transferrin-iron saturation of 45% or higher, a serum ferritin concentration above the upper limit of normal (>300 ng/mL in men and >200 ng/mL in women), and two HFE-HH-causing mutations on confirmatory HFE molecular genetic testing. At least 28 distinct pathogenic variants in HFE have been reported. The two missense variants that account for the vast majority of disease-causing alleles in the population are p.Cys282Tyr and p.His63Asp. In addition, p.Ser65Cys has been seen in combination with p.Cys282Tyr in individuals with iron overload. Homozygosity for p.Cys282Tyr is found in 60%–90% of individuals with HFE-HH; compound heterozygosity for p.Cys282Tyr and p.His63Asp is found in 3%–8%. Therefore, homozygous p.Cys282Tyr would most likely be Jon’s molecular test result. 101. : B. Approximately 87% of individuals of European origin with HFE-hereditary hemochromatosis (HFE-HH) are either homozygous for the p.Cys282Tyr mutation or compound heterozygous for the p.Cys282Tyr and p.His63Asp mutations. Three phenotypes of HFE-HH are now recognized:
• Clinical HFE-HH (individuals with end-organ damage [e.g., advanced cirrhosis, cardiac failure, skin pigment changes, or diabetes] secondary to iron storage). Alcohol consumption worsens the symptoms in HFE-HH.
• Biochemical HFE-HH (individuals with evidence of iron overload as determined by transferrin-iron saturation and serum ferritin concentration only).
• Nonexpressing p.Cys282Tyr homozygotes (p.Cys282Tyr homozygotes without clinical or biochemical evidence of iron overload, such as a normal serum ferritin concentration).
Several large-scale screening studies in general populations have demonstrated that most individuals homozygous for the p.Cys282Tyr mutation do not have clinical HFE-HH.
However, a significant proportion of individuals with this genotype (especially males) have biochemical HFE-HH, though not all. In the absence of unbiased data, penetrance of clinical end points of iron overload was reported to be as low as 2% in a large study. Currently, no test can predict whether an individual homozygous for p.Cys282Tyr will develop clinical HFE-HH. The penetrance for p.Cys282Tyr/p.His63Asp compound heterozygotes is lower; only approximately 0.5%–2.0% of such individuals develop clinical evidence of iron overload. Therefore, this patient had up to a 6% risk of developing clinical symptoms of hemochromatosis in his lifetime. 102. : A. Approximately 87% of individuals of European origin with HFE-hereditary hemochromatosis (HFE-HH) are either homozygous for the p.Cys282Tyr mutation or compound heterozygous for the p.Cys282Tyr and p.His63Asp mutations. Three phenotypes of HFE-HH are now recognized: • Clinical HFE-HH (individuals with end-organ damage [e.g., advanced cirrhosis, cardiac failure, skin pigment changes, or diabetes] secondary to iron storage). Alcohol consumption worsens the symptoms in HFE-HH. • Biochemical HFE-HH (individuals with evidence of iron overload as determined by transferrin-iron saturation and serum ferritin concentration only). • Nonexpressing p.Cys282Tyr homozygotes (p.Cys282Tyr homozygotes without clinical or biochemical evidence of iron overload, such as a normal serum ferritin concentration). Several large-scale screening studies in the general population have demonstrated that most individuals homozygous for the p.Cys282Tyr mutation do not have clinical HFE-HH. However, a significant proportion of individuals with this genotype (especially males) have biochemical HFE-HH, though not all. In the absence of unbiased data, penetrance of clinical end points of iron overload was reported to be as low as 2% in a large study.
Currently, no test can predict whether an individual homozygous for p.Cys282Tyr will develop clinical HFE-HH. The penetrance for p.Cys282Tyr/p.His63Asp compound heterozygotes is lower; only approximately 0.5%–2.0% of such individuals develop clinical evidence of iron overload. Therefore, this patient had about a 0.5% risk of developing clinical symptoms of hemochromatosis in his lifetime. 103. : A. HFE-associated hereditary hemochromatosis (HFE-HH) is characterized by inappropriately high absorption of iron by the gastrointestinal mucosa. Clinical symptoms of HFE-HH are related to excessive storage of iron in the liver, skin, pancreas, heart, joints, and testes. In untreated individuals, early symptoms may include abdominal pain, weakness, lethargy, and weight loss. The risk of cirrhosis is significantly increased when the serum ferritin is higher than 1000 ng/mL. Other findings may include a progressive increase in skin pigmentation, diabetes mellitus, congestive heart failure and/or arrhythmias, arthritis, and hypogonadism. Clinical HFE-HH is more common in men than in women. HFE-HH is inherited in an autosomal recessive manner. Approximately 87% of individuals of European origin with HFE-HH are either homozygous for the p.Cys282Tyr (C282Y) mutation or compound heterozygous for the p.Cys282Tyr and p.His63Asp (H63D) mutations. Homozygotes for p.Cys282Tyr have a greater risk for iron overload than p.Cys282Tyr/p.His63Asp compound heterozygotes. Therefore, it is most likely that the patient is homozygous for p.Cys282Tyr. 104. : B. Hemophilia A is an X-linked recessive nonneoplastic hematological disorder characterized by a deficiency in factor VIII clotting activity that results in prolonged oozing after injuries, tooth extractions, or surgery and delayed or recurrent bleeding prior to complete wound healing. The diagnosis is established in individuals with low factor VIII clotting activity in the presence of a normal von Willebrand factor (VWF) level.
Severe hemophilia A is indicated by a factor VIII activity level below 1%–2%. Carrier females are usually asymptomatic but have a 50% chance of transmitting the F8 pathogenic variant in each pregnancy. Sons who inherit the pathogenic variant will be affected; daughters who inherit the pathogenic variant will be carriers. Therefore, since the mother is a carrier, the couple’s next child would have a 1/2 × 1/2 = 1/4 risk of having hemophilia A. 105. : C. Hemophilia A is an X-linked recessive nonneoplastic hematological disorder characterized by a deficiency in factor VIII clotting activity that results in prolonged oozing after injuries, tooth extractions, or surgery and delayed or recurrent bleeding prior to complete wound healing. The diagnosis is established in individuals with low factor VIII clotting activity in the presence of a normal von Willebrand factor (VWF) level. Severe hemophilia A is indicated by a factor VIII activity level below 1%–2%. Carrier females are usually asymptomatic but have a 50% chance of transmitting the F8 pathogenic variant in each pregnancy. Sons who inherit the pathogenic variant will be affected; daughters who inherit the pathogenic variant will be carriers. Therefore, since the mother is a carrier, the couple’s next son would have a 1/2 risk of having hemophilia A. 106. : D. Hemophilia A is an X-linked recessive nonneoplastic hematological disorder characterized by a deficiency in factor VIII clotting activity that results in prolonged oozing after injuries, tooth extractions, or surgery and delayed or recurrent bleeding prior to complete wound healing. The diagnosis is established in individuals with low factor VIII clotting activity in the presence of a normal von Willebrand factor (VWF) level. Severe hemophilia A is indicated by a factor VIII activity level below 1%–2%. F8 is the only gene in which pathogenic variants are known to cause hemophilia A.
One-third to one-half of affected males have no family history of hemophilia, which may be due to a de novo variant in the affected male or his mother. In this case, the nature of the variant is not revealed, so the severity of symptoms cannot be predicted. Immediate treatment of bleeding, or prophylaxis, is always necessary for patients with hemophilia A. Hemophilia B is an X-linked recessive disorder of factor IX due to variants in F9. Therefore, choice D is correct. 107. : B. Hemophilia B is an X-linked recessive disorder characterized by a deficiency in factor IX clotting activity that results in prolonged oozing after injuries, tooth extractions, or surgery and delayed or recurrent bleeding prior to complete wound healing. The diagnosis is established in individuals with low factor IX clotting activity in the presence of a normal von Willebrand factor (VWF) level. Severe hemophilia B is indicated by a factor IX activity level below 1%–2%. Molecular genetic testing of F9, the gene encoding factor IX, identifies pathogenic variants in more than 99% of individuals with hemophilia B. Approximately 50% of affected males have no family history of hemophilia B. Hemophilia B is not a self-limited disease; immediate treatment of bleeding, or prophylaxis, is always necessary. Hemophilia A is an X-linked recessive disorder of factor VIII due to variants in F8. In this case, both the patient and the mother carried the same variant in F9. Each sibling of the patient would have a 25% chance of being affected. Therefore, choice B is correct. 108. : C. Hemophilia A is an X-linked recessive disorder of factor VIII due to variants in F8. An F8 intron 22-A inversion is identified in nearly half of families with severe hemophilia A. An intron 1 inversion is identified in 2%–5% of individuals with severe hemophilia A. Therefore, an inversion would most likely be detected in this patient. 109. : D.
Sanger sequencing by capillary electrophoresis, a chain-termination method of DNA sequencing, has been a powerful technique in molecular biology. DNA is replicated in the presence of dideoxynucleotides (ddNTPs). These bases stop the replication process when they are incorporated into the growing strand of DNA, resulting in short DNA strands of varying lengths. These short DNA strands are ordered by size; by reading the end letters from the shortest to the longest piece, the whole sequence of the original DNA is revealed. Sanger sequencing of the target region in F8 may reveal whether a specific single-nucleotide variant, such as the nonsense mutation in this case, is present in an individual. Next-generation sequencing (NGS) refers to non-Sanger-based high-throughput DNA sequencing technologies. Millions or billions of DNA strands can be sequenced in parallel, yielding substantially more throughput. NGS parallelizes the sequencing process, producing thousands or millions of sequences concurrently while lowering the cost of DNA sequencing beyond what is possible with standard dye-terminator methods. NGS may be used to detect SNPs and very small in/dels, such as a 5-bp deletion, but it cannot detect deletion or duplication of one or more exons or of the entire gene. It is not the test of choice to detect a known single-nucleotide variant, such as a nonsense mutation, in family members. Real-time polymerase chain reaction is a laboratory technique of molecular biology based on the polymerase chain reaction (PCR). It monitors the amplification of a targeted DNA molecule at each cycle of PCR. When the DNA is in the log-linear phase of amplification, the amount of fluorescence increases above the background. Quantitative PCR may be used to quantify gene expression. Multiplex ligation-dependent probe amplification (MLPA) permits multiple targets to be amplified with only a single primer pair. Each probe consists of two oligonucleotides that recognize adjacent target sites on the DNA.
One probe oligonucleotide contains the sequence recognized by the forward primer; the other contains the sequence recognized by the reverse primer. Only when both probe oligonucleotides are hybridized to their respective targets can they be ligated into a complete probe. The advantage of splitting the probe into two parts is that only the ligated oligonucleotides, not the unbound probe oligonucleotides, are amplified. If the probes were not split in this way, the primer sequences at either end would cause the probes to be amplified regardless of their hybridization to the template DNA, and the amplification product would not be dependent on the number of target sites present in the sample DNA. Each complete probe has a unique length, so its resulting amplicons can be separated and identified by (capillary) electrophoresis. Because the forward primer used for probe amplification is fluorescently labeled, each amplicon generates a fluorescent peak that can be detected by a capillary sequencer. By comparing the peak pattern obtained on a given sample with that obtained on various reference samples, the relative quantity of each amplicon can be determined. This ratio is a measure of the ratio in which the target sequence is present in the sample DNA. MLPA may be used to detect SNPs and CNVs, such as in diagnosing patients with Duchenne muscular dystrophy. This patient’s factor VIII clotting activity was lower than 1% with a normal and functional von Willebrand factor level, so he had a severe form of hemophilia A (factor VIII deficiency). Hemophilia A is an X-linked disease resulting from variants in F8 on Xq28. Therefore, Sanger sequencing would be the test of choice for targeted testing of the nonsense mutation in family members. 110. : B. Hemophilia A is an X-linked disease that results from variants in the F8 gene for factor VIII on Xq28.
An intron 22-A inversion (14–20 kb) is identified in nearly half of families with severe hemophilia A, but it is not identified in families with moderate or mild hemophilia A. This inversion can be detected by multiple techniques (e.g., long-range PCR, inverse PCR). An intron 1 inversion (1 kb) is identified in 2%–5% of individuals with severe hemophilia A and has not been described in families with moderate or mild hemophilia A. The mutation detection rate in individuals with hemophilia A who do not have one of the two common inversions varies from 75% to 98%, depending on the testing methods used. In the severe form of the disease the mutation detection rate is even lower, about 55%. A factor VIII level less than 1% of normal is classified as severe hemophilia A. For individuals with severe hemophilia A, targeted molecular genetic testing is generally performed first to identify the intron 22 or intron 1 inversion. The incidence of the intron 22 inversion is approximately 50% in severe HA patients, without a significant ethnic difference. The intron 22 inversion is also a high-risk factor for inhibitor formation; thus, it has drawn special attention as a hotspot of F8 mutation. In a previous report, HA patients with the intron 22 inversion exhibited an inhibitor prevalence of >22%. For this reason, tests for the intron 22 inversion have been the primary step of F8 mutation profiling. If the inversion test is negative, sequence analysis of the 26 exons of F8 is performed. Deletion/duplication analyses are considered last. This patient’s factor VIII clotting activity was lower than 1% with a normal and functional von Willebrand factor level, so he had a severe form of hemophilia A (factor VIII deficiency). Therefore, PCR would most likely be used as the first-line test to detect inversions in order to confirm the diagnosis in this patient. 111. : A.
Hemophilia A is characterized by prolonged oozing after injuries, tooth extractions, or surgery and delayed or recurrent bleeding prior to complete wound healing. The diagnosis is established in individuals with low factor VIII clotting activity in the presence of a normal von Willebrand factor (VWF) level. Severe hemophilia A is indicated by a factor VIII activity level below 1%–2%. Hemophilia A is inherited in an X-linked manner. Carrier females have a 50% chance of transmitting the F8 pathogenic variant in each pregnancy. Sons who inherit the pathogenic variant will be affected; daughters who inherit the pathogenic variant will be carriers. F8 is located on Xq28 and has 26 exons. Single-nucleotide variants leading to new stop codons are essentially all associated with a severe phenotype, as are most frameshift mutations. Therefore, the unborn boy would most likely have severe hemophilia A. 112. : B. All the genes listed in the question are associated with hemophilia. The family had X-linked hemophilia, which narrows it down to F8 and F9. Factor IX Leyden is an unusual F9 variant caused by point mutations in the F9 promoter. It is associated with very low levels of factor IX and severe hemophilia during childhood, but spontaneous resolution at puberty as factor IX levels nearly normalize. Therefore, it would be most likely that F9 was mutated in the family. 113. : A. Hemophilia B is a very heterogeneous disease both in clinical severity and at the molecular level. Point mutations, particularly missense mutations, are the most frequent changes that affect the F9 gene. Missense mutations in the promoter region possibly disrupt the recognition sequences for several specific gene regulatory proteins and result in reduced transcription of coagulation factor IX. This situation gives rise to a specific hemophilia B phenotype: hemophilia B Leyden.
If there is a mutation in the hepatocyte nuclear factor 4 (HNF-4) binding site in the 5′ UTR, an unidentified protein presumably binds to this site and an androgen-responsive element exerts its effect through this protein–protein interaction. So at puberty, during which androgen levels increase, the severity of the phenotype decreases to a mild level. Research supports that other transcription factors may also contribute to the recovery of hemophilia at puberty. The intron 22-A inversion and intron 1 inversion of F8 are the common pathogenic variants for hemophilia A. p.Glu117stop of F11 is a common pathogenic variant for hemophilia C in Ashkenazi Jews. Therefore, it would be most likely that the mother/fetus had c.-6G>A in the 5′ UTR region of F9. 114. : E. The hemophilia is in the maternal family. The mother most likely is a carrier because of the positive family history. The husband may carry a mutation, too, but the likelihood is no greater than for any male in the general population. Predictive testing and testing of isolated carriers are difficult. Therefore, the brother of the mother should be tested first in order to detect the familial mutation, since he was the proband (index case) of this family for hemophilia. 115. : A. If a mother has a son with hemophilia but no other affected relatives, her prior risk of being a carrier depends on the type of mutation. Point mutations and the common F8 inversions almost always arise in male meiosis. As a result, 98% of mothers of a male with one of these mutations carry a new mutation derived from their father (the maternal grandfather of the affected male). In contrast, deletion mutations usually arise during female meiosis. Therefore, most likely the grandfather passed on the mutation to the mother owing to a new mutation. 116. : G. If a mother has a son with hemophilia but no other affected relatives, her prior risk of being a carrier depends on the type of mutation.
Point mutations and the common F8 inversions almost always arise in male meiosis. As a result, 98% of mothers of a male with one of these mutations carry the mutation owing to a new mutation in their father (the maternal grandfather of the affected male). In contrast, deletion mutations usually arise during female meiosis. Therefore, in this case it would be most likely that the mother passed on the mutation to the boy owing to a new mutation. 117. : A. If a mother has a son with hemophilia but no other affected relatives, her prior risk of being a carrier depends on the type of mutation. Point mutations and the common F8 inversions almost always arise in male meiosis. As a result, 98% of mothers of a male with one of these mutations carry the mutation owing to a new mutation in their father (the maternal grandfather of the affected male). In contrast, deletion mutations usually arise during female meiosis. Therefore, it would be most likely that the grandfather passed on the mutation to the mother owing to a new mutation. 118. : A. All the genes listed in the question are associated with hemophilia. F8-associated hemophilia A is the most common type of hemophilia, and it is inherited in an X-linked recessive manner. Hemophilia B, caused by variants in F9, is an X-linked recessive disease, too, but the prevalence/incidence of hemophilia B is lower than that of hemophilia A. F13, F13B, and vWF are associated with autosomal recessive hemophilia. Therefore, it would be most likely that F8 harbors a germline pathogenic variant in the family. 119. : A. F8, F9, F11, F13, F13B, and vWF are all associated with hemophilia, but F2 is not; F2 is associated with thrombophilia. Therefore, it is most likely that F2 was not in this NGS panel for hemophilia. 120. : D. Hemophilia B is an X-linked recessive disease. The diagnosis of hemophilia B is established in individuals with low factor IX clotting activity.
Molecular genetic testing of F9, the gene encoding factor IX, identifies pathogenic variants in more than 99% of individuals with hemophilia B. Linkage analysis is possibly appropriate if no mutation is detected by sequencing and deletion/duplication analysis of F9. It may be used to track an unidentified F9 disease-causing allele in a family and to identify the person in whom a de novo mutation originated. In this case, it was important to know whether the wife’s maternal uncle, the affected member, had the SNP. If the wife’s maternal uncle did not have the SNP, the SNP was not linked with the hemophilia B phenotype in this family. Therefore, the wife's uncle, the affected member, should be evaluated next for the SNP in this family. 121. : B. Promoters, which can be about 100–1000 bp long, are usually located upstream of the transcription start sites of genes on the DNA. Promoters initiate transcription of a particular gene. A variant at the exon–intron boundary most likely changes splicing. A dominant negative effect is usually caused by a variant in an exon; truncating variants are usually caused by variants in exons as well. Therefore, if it is pathogenic, the 5-bp deletion in the promoter region most likely alters transcription of factor VIII mRNA, which causes the factor VIII deficiency in this patient. 122. : A. Hemophilia A is the second most common form of hereditary coagulopathy; it is caused by a deficiency in factor VIII (F8) clotting activity. The diagnosis of hemophilia A is established in individuals with low factor VIII clotting activity in the presence of a normal von Willebrand factor (VWF) level. Hemophilia A results from variants in F8 on the X chromosome. Molecular genetic testing of F8 identifies pathogenic variants in as many as 98% of individuals with hemophilia A. The birth prevalence of hemophilia A is approximately 1:4000–1:5000 live male births worldwide.
von Willebrand disease (VWD) is the most common form of hereditary coagulopathy; it is caused by deficient or defective plasma von Willebrand factor (VWF), a large multimeric glycoprotein that plays a pivotal role in primary hemostasis by mediating platelet hemostatic function and stabilizing blood coagulation factor VIII (F8). The bleeding history may become more apparent with increasing age. The infant had an increased PTT in the context of a normal PT and bleeding time, and a normal and functional von Willebrand factor level, which rules out the diagnosis of VWD. VWD affects 0.1%–1% of the population; one in 10,000 seeks tertiary care referral. Hemophilia B is the third most common form of hereditary coagulopathy; it is caused by a deficiency in factor IX (F9) clotting activity. The diagnosis of hemophilia B is established in individuals with low factor IX clotting activity. Hemophilia B results from variants in F9 on the X chromosome. Molecular genetic testing of F9 identifies pathogenic variants in more than 99% of individuals with hemophilia B. The birth prevalence of hemophilia B is approximately 1 in 20,000 live male births worldwide. Hemophilia C is a rare form of hereditary coagulopathy that is caused by a deficiency in factor XI (F11) clotting activity. It predominantly occurs in Jews of Ashkenazi descent. It is the fourth most common coagulation disorder after von Willebrand disease and hemophilias A and B. Hemophilia C is inherited in an autosomal recessive manner. It is distinguished from hemophilia A and B by the fact that it does not lead to bleeding into the joints. The diagnosis of hemophilia C is established in individuals with low factor XI (F11) clotting activity. Hemophilia C was described first in two sisters and a maternal uncle of an American Jewish family. All three bled after dental extractions, and the sisters also bled after tonsillectomy.
In the United States, it is thought to affect 1 in 100,000 of the adult population, a rate that makes hemophilia A 10 times more common than hemophilia C. In Israel, the estimated rate of heterozygosity is 8%. Factor XIII deficiency occurs exceedingly rarely; it causes a severe bleeding tendency. Most cases are due to mutations in the A subunit gene (located on chromosome 6p25-p24). Umbilical cord bleeding is common in factor XIII deficiency, reported in almost 80% of cases. Up to 30% of patients sustain a spontaneous intracranial hemorrhage. The diagnosis is established in individuals with low factor XIII (F13) clotting activity. The mutation of F13 is inherited in an autosomal recessive manner. The birth prevalence of factor XIII deficiency is approximately 1 in 5 million live births worldwide. This patient was a male infant. He had an increased PTT in the context of a normal PT and bleeding time, which ruled out factor XIII deficiency. Coagulation factor assays ruled out VWD because his VWF activity was normal. A male relative on his mother’s side had “bleeding problems,” too, and his mother was apparently healthy without a bleeding history. This raises suspicion for an X-linked coagulopathy. Therefore, it would be most likely that this patient had hemophilia A caused by a germline pathogenic variant in F8, since hemophilia A is the second most common form of hereditary coagulopathy. 123. : C. Hemophilia C is a rare form of hereditary coagulopathy, which is caused by a deficiency in factor XI (F11) clotting activity. It predominantly occurs in Jews of Ashkenazi descent. It is the fourth most common coagulation disorder after von Willebrand disease and hemophilias A and B. Hemophilia C is inherited in an autosomal recessive manner. It is distinguished from hemophilia A and B by the fact that it does not lead to bleeding into the joints. The diagnosis of hemophilia C is established in individuals with low factor XI (F11) clotting activity.
Hemophilia C was described first in two sisters and a maternal uncle of an American Jewish family. All three bled after dental extractions, and the sisters also bled after tonsillectomy. In the United States, it is thought to affect 1 in 100,000 of the adult population, a rate that makes hemophilia A 10 times more common than hemophilia C. In Israel, the estimated rate of heterozygosity is 8%. von Willebrand disease (VWD) is the most common form of hereditary coagulopathy; it is caused by deficient or defective plasma von Willebrand factor (VWF), a large multimeric glycoprotein that plays a pivotal role in primary hemostasis by mediating platelet hemostatic function and stabilizing blood coagulation factor VIII (F8). The bleeding history may become more apparent with increasing age. The infant had an increased PTT in the context of a normal PT and bleeding time and a normal and functional von Willebrand factor level; this rules out the diagnosis of VWD. VWD affects 0.1%–1% of the population; one in 10,000 seeks tertiary care referral. Hemophilia A is the second most common form of hereditary coagulopathy; it is caused by a deficiency in factor VIII (F8) clotting activity. The diagnosis of hemophilia A is established in individuals with low factor VIII clotting activity in the presence of a normal von Willebrand factor (VWF) level. Hemophilia A results from variants in the F8 gene on the X chromosome. Molecular genetic testing of F8 identifies pathogenic variants in as many as 98% of individuals with hemophilia A. The birth prevalence of hemophilia A is approximately 1:4000–1:5000 live male births worldwide. Hemophilia B is the third most common form of hereditary coagulopathy; it is caused by a deficiency in factor IX (F9) clotting activity. The diagnosis of hemophilia B is established in individuals with low factor IX clotting activity. Hemophilia B results from variants in F9 on the X chromosome.
Molecular genetic testing of F9 identifies pathogenic variants in more than 99% of individuals with hemophilia B. The birth prevalence of hemophilia B is approximately 1 in 20,000 live male births worldwide. Factor XIII deficiency occurs exceedingly rarely, causing a severe bleeding tendency. Most cases are due to mutations in the A subunit gene (located on chromosome 6p25-p24). Umbilical cord bleeding is common in factor XIII deficiency, reported in almost 80% of cases. Up to 30% of patients sustain a spontaneous intracranial hemorrhage. The diagnosis is established in individuals with low factor XIII (F13) clotting activity. The mutation of F13 is inherited in an autosomal recessive manner. The birth prevalence of factor XIII deficiency is approximately 1 in 5 million live births worldwide. This patient was the only person in the family who had the bleeding problem. He had an increased PTT in the context of a normal PT and bleeding time, which ruled out factor XIII deficiency. Coagulation factor assays ruled out VWD because his VWF activity was normal. He was of Ashkenazi Jewish descent. Therefore, it would be most likely that he had hemophilia C caused by mutations in F11, since the estimated prevalence is 6.4/1000 among Ashkenazi Jews. 124. : D. Factor XIII deficiency occurs exceedingly rarely; it causes a tendency for severe bleeding. Most of these deficiencies are due to mutations in the A subunit gene (located on chromosome 6p25-p24). Umbilical cord bleeding is common in factor XIII deficiency, reported in almost 80% of cases. Up to 30% of patients sustain a spontaneous intracranial hemorrhage. The diagnosis is established in individuals with low factor XIII (F13) clotting activity. The mutation of F13 is inherited in an autosomal recessive manner. The birth prevalence of factor XIII deficiency is approximately 1 in 5 million live births worldwide.
It is extremely important to diagnose factor XIII deficiency because these patients tend to develop intracranial hemorrhages. von Willebrand disease (VWD) is the most common form of hereditary coagulopathy; it is caused by deficient or defective plasma von Willebrand factor (VWF), a large multimeric glycoprotein that plays a pivotal role in primary hemostasis by mediating platelet hemostatic function and stabilizing blood coagulation factor VIII (F8). A bleeding history may become more apparent with increasing age. The infant had an increased PTT in the context of a normal PT and bleeding time and a normal and functional von Willebrand factor level, which rules out the diagnosis of VWD. VWD affects 0.1%–1% of the population; one in 10,000 seeks tertiary care referral. Hemophilia A is the second most common form of hereditary coagulopathy; it is caused by a deficiency in factor VIII (F8) clotting activity. The diagnosis of hemophilia A is established in individuals with low factor VIII clotting activity in the presence of a normal von Willebrand factor (VWF) level. Hemophilia A results from variants in the F8 gene on the X chromosome. Molecular genetic testing of F8 identifies pathogenic variants in as many as 98% of individuals with hemophilia A. The birth prevalence of hemophilia A is approximately 1:4000–1:5000 live male births worldwide. Hemophilia B is the third most common form of hereditary coagulopathy; it is caused by a deficiency in factor IX (F9) clotting activity. The diagnosis of hemophilia B is established in individuals with low factor IX clotting activity. Hemophilia B results from variants in F9 on the X chromosome. Molecular genetic testing of F9 identifies pathogenic variants in more than 99% of individuals with hemophilia B. The birth prevalence of hemophilia B is approximately 1 in 20,000 live male births worldwide.
Hemophilia C is a rare form of hereditary coagulopathy, which is caused by a deficiency in factor XI (F11) clotting activity. It predominantly occurs in Jews of Ashkenazi descent. It is the fourth most common coagulation disorder after von Willebrand disease and hemophilias A and B. Hemophilia C is inherited in an autosomal recessive manner. It is distinguished from hemophilia A and B by the fact that it does not lead to bleeding into the joints. The diagnosis of hemophilia C is established in individuals with low factor XI (F11) clotting activity. Hemophilia C was described first in two sisters and a maternal uncle of an American Jewish family. All three bled after dental extractions, and the sisters also bled after tonsillectomy. In the United States, it is thought to affect 1 in 100,000 of the adult population, a rate that makes hemophilia A 10 times more common than hemophilia C. In Israel, the estimated rate of heterozygosity is 8%. This patient is the only person in the family with the bleeding problem. His prothrombin time, partial thromboplastin time, and thrombin time are all normal, which ruled out factor VIII and factor XI deficiencies. Coagulation factor assays ruled out VWD because his VWF activity is normal. Therefore, it is most likely that the patient in this question has factor XIII deficiency. 125. : D. TAT codes for tyrosine; TAG, TAA, and TGA are stop codons. Therefore, the single-nucleotide variant most likely creates a premature stop codon, and the resulting truncated protein C causes the protein C deficiency in this patient. 126. : D. TAT codes for tyrosine; TAG, TAA, and TGA are stop codons. Therefore, the single-nucleotide variant most likely creates a premature stop codon, and the resulting truncated protein C causes the protein C deficiency in this patient.
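The single-nucleotide reasoning behind answers 125 and 126 can be checked mechanically. The following minimal Python sketch (the helper function is hypothetical, written only for illustration) enumerates every codon reachable from the tyrosine codon TAT by one substitution and reports which of them are stop codons:

```python
# Stop codons in the standard genetic code
STOP_CODONS = {"TAG", "TAA", "TGA"}

def single_substitutions(codon):
    """Yield every codon reachable from `codon` by one nucleotide change."""
    for i, base in enumerate(codon):
        for alt in "ACGT":
            if alt != base:
                yield codon[:i] + alt + codon[i + 1:]

# TAT (tyrosine) is one substitution away from two of the three stop codons;
# reaching TGA would require changes at two positions.
reachable_stops = STOP_CODONS & set(single_substitutions("TAT"))
print(sorted(reachable_stops))  # → ['TAA', 'TAG']
```

This confirms the logic above: a single T>A or T>G change at the third position of TAT produces a stop codon (TAA or TAG), which would prematurely terminate translation of PROC.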
127. : D. TAT codes for tyrosine. TAG, TAA, and TGA are stop codons. Therefore, the single-nucleotide variant most likely converts a tyrosine codon into a stop codon, which terminates translation prematurely. Therefore, the variant in PROC leads to a truncated protein C, which results in protein C deficiency in this patient. 128. : B. There are two commonly recognized polymorphic variants in the MTHFR gene (which encodes methylenetetrahydrofolate reductase): the “thermolabile” variant c.665C>T (p.Ala222Val), historically more commonly referred to as C677T, and the c.1286A>C (p.Glu429Ala) variant. Both are missense variants known to decrease enzyme activity. Reduced enzyme activity of MTHFR is a genetic risk factor for hyperhomocysteinemia, especially in the presence of low serum folate levels. Mild-to-moderate hyperhomocysteinemia has been identified as a risk factor for venous thrombosis and has been associated with other cardiovascular diseases, such as coronary artery disease. Hyperhomocysteinemia is multifactorial, involving a combination of genetic, physiological, and environmental factors. Several enzymes with vitamin B cofactors—including vitamin B6, vitamin B12, and folate—are involved in regulating homocysteine levels. Individuals who are MTHFR polymorphism homozygotes may have hyperhomocysteinemia, usually to a mild or moderate degree of uncertain clinical significance. Because MTHFR polymorphism is only one of many factors contributing to the overall clinical picture, the utility of this testing in thromboembolic disease is currently ambiguous. The American Congress of Obstetricians and Gynecologists does not recommend the measurement of homocysteine or MTHFR polymorphisms in the evaluation of the etiology of venous thromboembolism. Therefore, homozygosity for the MTHFR C677T polymorphism is associated with an increased risk for hyperhomocysteinemia, but not the other conditions listed in the question. 129. : C.
There are two commonly recognized polymorphic variants in the MTHFR gene (which encodes methylenetetrahydrofolate reductase): the “thermolabile” variant c.665C>T (p.Ala222Val), historically more commonly referred to as C677T, and the c.1286A>C (p.Glu429Ala) variant; both are missense changes that are known to decrease enzyme activity. Reduced enzyme activity of MTHFR is a genetic risk factor for hyperhomocysteinemia, especially in the presence of low serum folate levels. Hyperhomocysteinemia is multifactorial, involving a combination of genetic, physiological, and environmental factors. Individuals who are MTHFR polymorphism homozygotes may have hyperhomocysteinemia, usually to a mild or moderate degree of uncertain clinical significance. Mild-to-moderate hyperhomocysteinemia has been identified as a risk factor for venous thrombosis and has been associated with other cardiovascular diseases, such as coronary artery disease. However, there is growing evidence that MTHFR polymorphism testing has minimal clinical utility and, therefore, should not be ordered as a part of a routine evaluation for thrombophilia. The American Congress of Obstetricians and Gynecologists does not recommend the measurement of homocysteine or MTHFR polymorphisms in the evaluation of the etiology of venous thromboembolism. In its practice guidelines, the ACMG recommends: • : "MTHFR polymorphism genotyping should not be ordered as part of the clinical evaluation for thrombophilia or recurrent pregnancy loss. • : MTHFR polymorphism genotyping should not be ordered for at-risk family members. • : A clinical geneticist who serves as a consultant for a patient in whom an MTHFR polymorphism(s) has been identified should ensure that the patient has received a thorough and appropriate evaluation for his or her symptoms. • : If the patient is homozygous for the “thermolabile” variant c.665C>T, the geneticist may order a fasting total plasma homocysteine, if not previously ordered, to provide more accurate counseling.
• : MTHFR status does not change the recommendation that women of childbearing age should take the standard dose of folic acid supplementation to reduce the risk of neural tube defects as per the general population guidelines." Therefore, a clinical geneticist should ensure that a patient with an MTHFR polymorphism(s) has received a thorough and appropriate evaluation for his or her thrombophilia symptoms according to the ACMG recommendation. 130. : D. In its practice guidelines, the ACMG recommends: • : "MTHFR polymorphism genotyping should not be ordered as part of the clinical evaluation for thrombophilia or recurrent pregnancy loss. • : MTHFR polymorphism genotyping should not be ordered for at-risk family members. • : A clinical geneticist who serves as a consultant for a patient in whom an MTHFR polymorphism(s) has been identified should ensure that the patient has received a thorough and appropriate evaluation for his or her symptoms. • : If the patient is homozygous for the “thermolabile” variant c.665C>T, the geneticist may order a fasting total plasma homocysteine, if not previously ordered, to provide more accurate counseling. • : MTHFR status does not change the recommendation that women of childbearing age should take the standard dose of folic acid supplementation to reduce the risk of neural tube defects as per the general population guidelines." In the ACMG guideline it also states, “A fasting total plasma homocysteine level may be obtained in any patient who is homozygous for the ‘thermolabile’ variant, in order to provide more information for counseling.
For the purpose of laboratory interpretation, it should be noted that total homocysteine levels increase with age and are lower in the pregnant population.” “Patients who are homozygous for the ‘thermolabile’ variant with normal plasma homocysteine can be reassured that there is currently no evidence of increased risk for venous thromboembolism or recurrent pregnancy loss related to their MTHFR status, common reasons for which clinical testing is done. There is a lack of evidence of an association between homozygosity for the c.665C>T ‘thermolabile’ variant and mortality, from cardiovascular disease or otherwise.” “In individuals who have a known thrombophilia, such as factor V Leiden or prothrombin c.*97G>A, most available studies support the contention that MTHFR genotype status does not alter their thrombotic risk to a clinically significant degree.” Therefore, if the patient is homozygous for the “thermolabile” variant c.665C>T, the geneticist may order a fasting total plasma homocysteine, if not previously ordered, to provide more accurate counseling according to the ACMG recommendation. 131. : B. Homozygosity for the MTHFR C677T polymorphism can be associated with elevated homocysteine levels but, by itself, is not a risk factor for deep-vein thrombosis (DVT). Therefore, there was no clinical indication for testing the MTHFR polymorphism in this patient’s family members. In 2013, the American College of Medical Genetics (ACMG) published a practice guideline regarding clinical MTHFR testing. It states, “MTHFR polymorphism testing is frequently ordered by physicians as part of the clinical evaluation for thrombophilia. It was previously hypothesized that reduced enzyme activity of MTHFR led to mild hyperhomocysteinemia, which led to an increased risk for venous thromboembolism, coronary heart disease, and recurrent pregnancy loss.
Recent meta-analyses have disproven an association between hyperhomocysteinemia and risk for coronary heart disease and between MTHFR polymorphism status and risk for venous thromboembolism. There is growing evidence that MTHFR polymorphism testing has minimal clinical utility and, therefore, should not be ordered as a part of a routine evaluation for thrombophilia.” Therefore, most individuals with a homozygous MTHFR C677T polymorphism will not develop a DVT in their lifetime. 132. : C. Prothrombin-related thrombophilia is an autosomal dominant disorder characterized by venous thromboembolism (VTE) that manifests most commonly in adults as deep vein thrombosis (DVT) in the legs or pulmonary embolism. The clinical expression of prothrombin-related thrombophilia is variable. Many individuals heterozygous or homozygous for the 20210G>A (G20210A or c.*97G>A) allele in F2 never develop thrombosis. The relative risk for DVT in adults heterozygous for the 20210G>A allele is increased 2- to 5-fold. In children, the relative risk for thrombosis is increased 3- to 4-fold. A 20210G>A heterozygosity has at most a modest effect on recurrence risk after a first episode. Homozygosity for this allele confers a higher risk for thrombosis than heterozygosity, although the magnitude is not well defined. Whereas 20210G>A homozygotes may develop thrombosis more frequently and at a younger age, the risk is much lower than that associated with homozygous protein C deficiency or homozygous protein S deficiency. Numerous reports of asymptomatic 20210G>A homozygotes emphasize the contribution of other genetic and acquired risk factors to thrombosis. Current evidence suggests that a 20210G>A heterozygosity has at most a modest effect on recurrence risk after initial treatment of a first VTE. Although the data are conflicting, the majority of more recent studies found no increase in risk.
An evidence review by the Evaluation of Genomic Applications in Practice and Prevention (EGAPP) concluded that a 20210G>A heterozygosity is not predictive of VTE recurrence. The clinical circumstances of the first event (provoked or unprovoked), individual characteristics such as male sex, and global hemostasis tests (e.g., d-dimer) are more important determinants of recurrence. Therefore, most individuals with a heterozygous F2 G20210A mutation will not develop a venous thromboembolism in their lifetime. 133. : C. The risk for deep-vein thrombosis (DVT) is increased 3- to 8-fold in factor V Leiden heterozygotes and 9- to 80-fold in homozygotes. Current evidence suggests that a heterozygous factor V Leiden mutation has at most a modest effect on recurrence risk after initial treatment of a first VTE. Therefore, most individuals with a heterozygous factor V Leiden will not develop a venous thromboembolism in their lifetime. 134. : A. In this patient with a first episode of unprovoked VTE, male sex is an important consideration in determining the need for long-term anticoagulation. Other risk factors for recurrent VTE are a positive d-dimer test while on vitamin K antagonists and a positive d-dimer test 4 weeks after discontinuation of vitamin K antagonists. Current evidence suggests that a heterozygous factor V Leiden mutation (c.1691G>A) has a modest effect on recurrence risk after initial treatment of a first VTE. MTHFR and prothrombin (factor II) 20210G>A mutations are not associated with an increased risk for recurrent VTE. Therefore, a heterozygous c.1691G>A mutation in F5 increases this patient’s risk for recurrent thrombosis. 135. : C. In this patient with a first episode of unprovoked VTE, male sex is an important consideration in determining the need for long-term anticoagulation.
Other risk factors for recurrent VTE are a positive d-dimer test while on vitamin K antagonists and a positive d-dimer test 4 weeks after discontinuation of vitamin K antagonists. Current evidence suggests that a heterozygous factor V Leiden mutation (c.1691G>A, p.Arg506Gln) has a modest effect on recurrence risk after initial treatment of a first VTE. MTHFR and prothrombin (factor II) 20210G>A (c.*97G>A) mutations are not associated with an increased risk for recurrent VTE. Although a history of VTE in a first-degree relative increases the likelihood of a first VTE in patients, a positive family history has never been independently associated with a greater chance of recurrence. Therefore, a heterozygous C677T (c.665C>T) variant in MTHFR does NOT increase this patient’s risk for recurrent thrombosis. 136. : B. Both the factor V Leiden mutation and the prothrombin 20210G>A mutation exhibit semidominant expression in that both heterozygotes and homozygotes are at increased risk of occurrence/recurrence of venous thrombosis. The relative risk for venous thrombosis associated with the factor V Leiden mutation in the absence of other acquired or environmental predispositions is approximately 4- to 7-fold for heterozygotes and 80-fold for homozygotes. The relative risk for venous thrombosis associated with the prothrombin 20210G>A mutation in the absence of other acquired or environmental predispositions is approximately 2- to 4-fold for heterozygotes. Therefore, this patient would have a 5-fold increased risk for venous thrombosis as compared with individuals without the mutation. 137. : E. Both the factor V Leiden mutation and the prothrombin 20210G>A mutation exhibit semidominant expression in that both heterozygotes and homozygotes are at increased risk of occurrence/recurrence of venous thrombosis.
The relative risk for venous thrombosis associated with the factor V Leiden mutation in the absence of other acquired or environmental predispositions is approximately 4- to 7-fold for heterozygotes and 80-fold for homozygotes. The relative risk for venous thrombosis associated with the prothrombin 20210G>A mutation in the absence of other acquired or environmental predispositions is approximately 2- to 4-fold for heterozygotes. Therefore, this patient would have an 80-fold increased risk for venous thrombosis as compared with individuals without the mutation. 138. : B. The relative risk for venous thromboembolism (VTE) is increased 2- to 5-fold in 20210G>A (c.*97G>A) heterozygotes. In a meta-analysis of 79 studies, a 20210G>A heterozygosity in F2 was associated with a 3-fold increased risk for VTE. Therefore, this patient would have a 3-fold increased risk for venous thrombosis as compared with individuals without the mutation. 139. : C. Individuals with a heterozygous factor V Leiden variant and a heterozygous prothrombin 20210G>A mutation have a 20-fold increased risk of having a venous thrombosis as compared with individuals without either mutation. Between 1% and 10% of symptomatic carriers of the factor V Leiden mutation also carry the prothrombin 20210G>A mutation. These individuals have a 50- to 80-fold relative risk of thrombosis, similar to that of homozygotes for the factor V Leiden mutation. A prothrombin 20210G>A allele was 4- to 5-fold more common in symptomatic factor V Leiden homozygotes with VTE than in controls with no thrombotic history. Therefore, this patient would have a 20-fold increased risk for venous thrombosis as compared with individuals without the two mutations. 140. : C. The Leiden allele of the factor V gene contains a G-to-A substitution at nucleotide 1691, producing a missense mutation that substitutes glutamine for arginine at amino acid residue 506 (p.R506Q) in the protein product.
The R506Q site is one of the activated protein C (APC) cleavage sites in the factor Va molecule. Therefore, one of the APC cleavage sites in factor Va is abolished in patients with the factor V Leiden allele, so activated protein C cannot efficiently cleave and inactivate factor Va. 141. : E. The prothrombin 20210G>A mutation is located in the 3′ untranslated region of the factor II gene. It represents a gain-of-function mutation, causing increased cleavage-site recognition, increased 3′ end processing, and increased mRNA accumulation and protein synthesis. The 20210G>A mutation is reported to be present in about 2% of the general population, with an increased frequency (3.0%) in southern Europeans and a decreased frequency (1.7%) in northern Europeans. It is very rare among those of Asian and African descent. Clinical sensitivity can be defined as the proportion of individuals who have had (or will have) deep-vein thrombosis and who have at least one prothrombin 20210G>A mutation. The clinical sensitivity of the prothrombin 20210G>A mutation varies between 5% and 19%. Clinical specificity can be defined as the proportion of individuals who do not have or will not develop deep-vein thrombosis and do not have any known mutations in the prothrombin gene. Low penetrance of the prothrombin 20210G>A mutation is the main reason why clinical specificity is less than 100%. Analytic error is possible but likely to be a much smaller factor in clinical false-positive test results. Therefore, low penetrance of the prothrombin 20210G>A mutation is the main reason why clinical specificity is less than 100% according to the ACMG recommendation. 142. : C. Factor V is a protein of the coagulation cascade. It is synthesized primarily in the liver. The protein circulates in the bloodstream in an inactive form until the coagulation system is activated by an injury that damages blood vessels. When coagulation factor V is activated, it interacts with coagulation factor X.
The active forms of these two coagulation factors (factor Va and factor Xa) form a complex that converts the important coagulation protein prothrombin (factor II) to its active form, thrombin. Thrombin then converts the protein fibrinogen into fibrin, which is the material that forms the clot. Therefore, activated factor V (factor Va), in complex with factor Xa, cleaves prothrombin (factor II) to thrombin (factor IIa). 143. : E. The factor V Leiden allele contains a G>A substitution at nucleotide 1691 (c.1691G>A), producing a missense mutation that substitutes glutamine for arginine at amino acid residue 506 (p.R506Q) in the protein product. The p.R506Q site is one of the activated protein C (APC) cleavage sites in the factor Va molecule. Factor V Leiden accounts for 90%–95% of cases of APC resistance. Protein S is a cofactor to protein C in the inactivation of factors Va and VIIIa, but protein S does not interact with factor Va directly. The factor V Leiden mutation is most prevalent in the United States and European Caucasian populations. The factor V Leiden mutation is found in 5.27% of Caucasian Americans and is progressively less common in Hispanic Americans (2.21% heterozygotes), Native Americans (1.25% heterozygotes), African Americans (1.23% heterozygotes), and Asian Americans (0.45% heterozygotes). Clinical sensitivity can be defined as the proportion of individuals who have had (or will have) deep-vein thrombosis and who have at least one factor V Leiden allele. Clinical sensitivity is equivalent to the detection rate. Overall, the clinical sensitivity of the factor V Leiden mutation is between 20% and 50%. In the ACMG Consensus Statement on Factor V Leiden Mutation Testing, factor V Leiden testing is predominantly used and recommended for diagnostic purposes in individuals with clinical symptoms of venous thrombosis or with recurrent pregnancy loss.
While predictive testing in asymptomatic individuals and in relatives of known factor V Leiden carriers is technically possible, its clinical utility for that purpose is markedly hampered by the low penetrance of the mutations and the appreciable risks inherent in prophylactic anticoagulant therapy. Therefore, factor V Leiden testing is recommended for diagnostic purposes in individuals with recurrent pregnancy loss according to the Consensus Statement. 144. : B. In the ACMG Consensus Statement on Factor V Leiden Mutation Testing, factor V Leiden testing is predominantly used and recommended for diagnostic purposes in individuals with clinical symptoms of venous thrombosis or with recurrent pregnancy loss. While predictive testing in asymptomatic individuals and in relatives of known factor V Leiden or prothrombin 20210G>A carriers is technically possible, its clinical utility for that purpose is markedly hampered by the low penetrance of the mutations and the appreciable risks inherent in prophylactic anticoagulant therapy. Routine testing is not recommended for patients with a personal or family history of arterial thrombotic disorders (e.g., acute coronary syndromes or stroke) except for the special situation of myocardial infarction in young female smokers. Testing may be worthwhile for young patients (<50 years of age) who develop acute arterial thrombosis in the absence of other risk factors for atherosclerotic arterial occlusive disease. Therefore, factor V Leiden testing is recommended for diagnostic purposes in individuals with recurrent pregnancy loss according to the Consensus Statement. 145. : B. Both the factor V Leiden mutation (c.1691G>A, p.R506Q) and the prothrombin 20210G>A (c.*97G>A) mutation exhibit semidominant expression in that both heterozygotes and homozygotes are at increased risk of occurrence/recurrence of venous thrombosis.
The relative risk for venous thrombosis associated with the factor V Leiden mutation in the absence of other acquired or environmental predispositions is approximately 4- to 7-fold for heterozygotes and 80-fold for homozygotes. The relative risk for venous thrombosis associated with the prothrombin 20210G>A mutation in the absence of other acquired or environmental predispositions is approximately 2- to 4-fold for heterozygotes. Data from several studies strongly suggest that the pathogenesis of venous thromboembolism is multifactorial and requires interactions between both inherited and acquired risk factors. Heterozygosity for the factor V Leiden or prothrombin 20210G>A mutation alone may be a relatively weak risk factor unless a second genetic risk factor or an acquired factor, such as older age, also exists. Therefore, the penetrance of the factor V Leiden mutation and the prothrombin 20210G>A mutation is low for venous thromboembolism (incomplete penetrance: most individuals with a mutation will not develop a venous thrombosis). 146. : C. Venous thrombosis is a panethnic multifactorial disorder. Identifiable genetic factors are present in 25% of unselected patients, including defects in coagulation factor inhibition and impaired clot lysis. Factor V (F5) Leiden occurs in 12%–14% of patients with VTE, prothrombin (F2) mutations in 6%–18%, and deficiency of antithrombin III (SERPINC1), protein C (PROC), or protein S (PROS1) in 5%–15%. Factor VIII (F8) is associated with hemophilia but not thrombosis. Therefore, factor VIII (F8) would most likely NOT be a part of the genetic evaluation for thrombosis in this patient. 147. : C. The factor V Leiden (c.1691G>A, p.Arg506Gln) mutation has a prevalence of 2%–15% in general Caucasian populations, but it is rare in Hispanic Americans (2.2%), Asian Americans (0.45%), African Americans (1.2%), and Native Americans (1.25%). Therefore, the FV Leiden mutation is more common in Caucasians than in the other ethnic groups listed in the question. 148.
: B. The factor V Leiden (c.1691G>A, p.Arg506Gln) mutation removes the preferred site for protein C proteolysis of activated factor V. Slowed inactivation of activated factor V predisposes carriers to thrombophilia. The risk is higher for individuals with homozygous FV Leiden mutations. Therefore, the F5 Leiden mutation is a gain-of-function mutation. 149. : A. The activated form of protein C proteolytically inactivates factor Va. Mutations in protein C (PROC) slow the inactivation of factor Va, which predisposes carriers to thrombophilia. Therefore, mutations in PROC are loss-of-function mutations. 150. : A. Acute intermittent porphyria (AIP) is inherited in an autosomal dominant manner and is caused by mutations in the HMBS gene. All individuals with AIP have an approximately 50% reduction in PBG deaminase enzymatic activity. Therefore, the woman’s child will have a 50% chance of inheriting the HMBS mutation from her. However, her child most likely will never have an episode; this clinically latent state is seen in about 90% of patients. Her child has a 10% chance of having clinical manifestations, according to the population data. Acute attacks, which may be provoked by certain drugs, alcoholic beverages, endocrine factors, calorie restriction, stress, and infections, usually resolve within 2 weeks. Attacks, which are very rare before puberty, are more common in women than in men. The main use of molecular genetic testing of an individual with biochemically proven AIP is to identify a pathogenic variant for the molecular investigation of the individual’s family (i.e., cascade screening). Therefore, the child will have approximately a 90% chance of being healthy, with either two normal copies of HMBS or one normal copy of HMBS. 151. : A. Acute intermittent porphyria (AIP) is inherited in an autosomal dominant manner. About 1% of probands may have a de novo mutation.
Sibs and offspring of individuals with an HMBS pathogenic variant are at 50% risk of inheriting the HMBS pathogenic variant. However, because penetrance is low, the likelihood of an individual with an inherited HMBS pathogenic variant having an acute attack is small. The penetrance for clinical manifestations of an HMBS disease-causing mutation is not accurately known. In one study of Swiss patients, 52% of relatives ascertained through cascade screening were found to have “typical” clinical symptoms with increased ALA and PBG and decreased HMBS activity. However, most reviews written by experienced porphyria specialists quote a penetrance of 10%–20%, by which they imply an acute attack (acute abdominal pain ± associated autonomic, motor, or CNS symptoms) leading to a hospital admission for medical management. Population surveys suggest a lower figure. The minimum prevalence of disease-specific HMBS mutations in France is 597 per million inhabitants. The prevalence of overt AIP in France was recently reported as 5.5 in 1 million, indicating a penetrance of about 1%. Therefore, it is fair to estimate that this patient’s risk of having episodes in her lifetime is up to 20%. 152. : D. Acute intermittent porphyria (AIP) is an autosomal dominant nonneoplastic hematological disorder resulting from half-normal activity of the enzyme hydroxymethylbilane synthase (HMBS). The long-term complications are chronic renal failure, hepatocellular carcinoma (HCC), and hypertension. Therefore, this patient is at risk for hepatocellular carcinoma. 153. : A. Porphyrias are a group of rare inherited or acquired disorders of certain enzymes that normally participate in the production of porphyrins and heme. They manifest with either neurological complications or skin problems or occasionally both. This PhD student has autosomal dominant acute intermittent porphyria (AIP) resulting from variants in the HMBS gene, which lead to deficiency of hydroxymethylbilane synthase (HMBS).
AIP is characterized clinically by life-threatening acute neurovisceral attacks of severe abdominal pain without peritoneal signs, often accompanied by nausea, vomiting, tachycardia, and hypertension. Attacks may be complicated by hyponatremia and neurological findings, such as mental changes, convulsions, and peripheral neuropathy, that may progress to respiratory paralysis. Acute attacks, which may be provoked by certain drugs, alcoholic beverages, endocrine factors, calorie restriction, stress, and infections, usually resolve within 2 weeks. Acute attacks of porphyria are associated with an increased urinary concentration of porphobilinogen (PBG) and delta-aminolevulinic acid (ALA). The total fecal porphyrin concentration and coproporphyrin isomer ratio are normal. Plasma porphyrin fluorescence emission scanning either shows a peak around 619 nm or is normal. Most symptomatic individuals with AIP have only one or a few attacks in their lifetime; the majority of those with a pathogenic variant remain asymptomatic throughout their lives. About 5% (mainly women) have recurrent attacks (defined as >4 attacks/year) that may persist for years. Other long-term complications are chronic renal failure, hepatocellular carcinoma (HCC), and hypertension. Attacks, which are very rare before puberty, are more common in women than in men. All individuals with a heterozygous genetic change in HMBS are at risk of developing acute attacks. However, most never have symptoms and are said to have latent (or presymptomatic) AIP. A germline heterozygous UROD pathogenic variant results in type II porphyria cutanea tarda (type II PCT), characterized by blistering over the dorsal aspects of the hands and other sun-exposed areas of skin, skin friability after minor trauma, facial hypertrichosis and hyperpigmentation, severe thickening of affected skin areas (pseudoscleroderma), and increased risk for hepatocellular carcinoma (HCC).
A germline heterozygous pathogenic variant in PPOX, encoding the mitochondrial enzyme protoporphyrinogen oxidase (PPOX), results in variegate porphyria (VP), characterized by a cutaneous porphyria with chronic blistering skin lesions and an acute porphyria with severe episodic neurovisceral symptoms. Identification of gain-of-function germline pathogenic variants in ALAS2 on the X chromosome, the gene encoding erythroid-specific 5-aminolevulinate synthase 2, is diagnostic for X-linked protoporphyria, characterized in affected males by cutaneous photosensitivity (usually beginning in infancy or childhood) that results in tingling, burning, pain, and itching within minutes of sun/light exposure and may be accompanied by swelling and redness. Vesicular lesions are uncommon. Pain, which may seem out of proportion to the visible skin lesions, may persist for hours or days after the initial phototoxic reaction. Photosensitivity usually remains for life. Identification of a germline heterozygous pathogenic variant in CPOX (encoding the enzyme coproporphyrinogen-III oxidase) is diagnostic for hereditary coproporphyria (HCP), characterized by low-grade pain starting in the abdomen and slowly increasing over a period of days (not hours), with nausea progressing to vomiting. Biallelic UROS germline pathogenic variants, or on rare occasion a hemizygous germline pathogenic variant in the X-linked gene GATA1, result in congenital erythropoietic porphyria (CEP), which is characterized in most individuals by severe cutaneous photosensitivity at birth or in early infancy with blistering and increased friability of the skin over light-exposed areas. Therefore, a molecular genetic test of HMBS would most likely be positive in this patient. 154. : C. Porphyrias are a group of rare inherited or acquired disorders of certain enzymes that normally participate in the production of porphyrins and heme. They manifest with either neurological complications or skin problems or occasionally both.
Acute intermittent porphyria (AIP) results from germline heterozygous pathogenic variants in HMBS. Type II porphyria cutanea tarda (type II PCT) results from heterozygous pathogenic variants in UROD. X-linked protoporphyria (XLP) results from pathogenic gain-of-function variants in ALAS2. Hereditary coproporphyria (HCP) results from heterozygous germline pathogenic variants in CPOX. Congenital erythropoietic porphyria (CEP) results from biallelic germline pathogenic variants in UROS or hemizygous pathogenic variants in the X-linked gene GATA1. The (GAA)n repeat sequence in intron 1 of the FXN gene is associated with autosomal recessive Friedreich ataxia, but not with porphyrias. Therefore, it would be most likely that FXN was not in the next-generation sequencing (NGS) panel for porphyrias. 155. : C. X-linked protoporphyria (XLP) is characterized in affected males by cutaneous photosensitivity (usually beginning in infancy or childhood) that results in tingling, burning, pain, and itching within minutes of sun/light exposure and may be accompanied by swelling and redness. Vesicular lesions are uncommon. Pain may persist for hours or days after the initial phototoxic reaction. Photosensitivity usually remains for life. The phenotype in heterozygous females ranges from asymptomatic to as severe as affected males. Detection of markedly increased free erythrocyte protoporphyrin and zinc-chelated erythrocyte protoporphyrin is the most sensitive biochemical diagnostic test for XLP. Identification of gain-of-function germline pathogenic variants in ALAS2, the gene encoding erythroid-specific 5-aminolevulinate synthase 2, confirmed the diagnosis. The mother was an obligate carrier. Therefore, the recurrence risk in this family would be 1/2 (the chance of inheriting the mutated X chromosome) × 1/2 (the chance of the child being a boy) = 1/4, or 25%. 156. : D.
X-linked protoporphyria (XLP) is characterized in affected males by cutaneous photosensitivity (usually beginning in infancy or childhood) that results in tingling, burning, pain, and itching within minutes of sun/light exposure and may be accompanied by swelling and redness. Vesicular lesions are uncommon. Pain may persist for hours or days after the initial phototoxic reaction. Photosensitivity usually remains for life. The phenotype in heterozygous females ranges from asymptomatic to as severe as in affected males. Detection of markedly increased free erythrocyte protoporphyrin and zinc-chelated erythrocyte protoporphyrin is the most sensitive biochemical diagnostic test for XLP. X-linked protoporphyria is caused by mutations of the ALAS2 gene and is inherited as an X-linked dominant trait. Males often develop a severe form of the disorder while females may not develop any symptoms (asymptomatic) or may develop a form as severe as that seen in males. The ALAS2 gene is located on the short arm of the X chromosome (Xp11.21). The gene encodes a protein known as erythroid-specific 5-aminolevulinate synthase 2. Mutations of the ALAS2 gene lead to the overproduction of this enzyme, which, in turn, results in elevated levels of the chemical protoporphyrin. Protoporphyrin abnormally accumulates in certain tissues of the body, especially the blood, liver, and skin. The symptoms of X-linked protoporphyria develop because of this abnormal accumulation of protoporphyrin. For example, when protoporphyrins absorb energy from sunlight, they enter an excited state (photoactivation), and this abnormal activation results in the characteristic damage to the skin. Accumulation of protoporphyrins in the liver causes toxic damage to the liver and may contribute to the formation of gallstones.
Protoporphyrin is formed within red blood cells in the bone marrow and then enters the blood plasma, which carries it to the skin, where it can be photoactivated by sunlight and cause damage. Therefore, it would be most likely that the patient had a gain-of-function germline pathogenic variant in ALAS2, the gene encoding erythroid-specific 5-aminolevulinate synthase 2. 157. : D. Hereditary hemorrhagic telangiectasia (HHT) is an autosomal dominant condition characterized by the presence of multiple arteriovenous malformations (AVMs) that lack intervening capillaries and result in direct connections between arteries and veins. Although HHT is a developmental disorder and infants are occasionally severely affected, in most people the features are age-dependent and the diagnosis is not suspected until adolescence or later. Small AVMs (or telangiectases) close to the surface of the skin and mucous membranes often rupture and bleed after slight trauma. The most common clinical manifestation is spontaneous and recurrent nosebleeds (epistaxis) beginning on average at age 12 years. Approximately 25% of individuals with HHT have GI bleeding, which most commonly begins after age 50 years. Large AVMs often cause symptoms when they occur in the brain, liver, or lungs; complications from bleeding or shunting may be sudden and catastrophic. Germline pathogenic variants in the ENG, ACVRL1 (ALK1), SMAD4, and GDF2 genes are associated with HHT. Molecular genetic testing of all four genes may detect pathogenic variants in approximately 80%–90% of individuals who meet unequivocal clinical diagnostic criteria for HHT. Ataxia telangiectasia (AT) is an autosomal recessive chromosomal breakage syndrome characterized by progressive cerebellar gait beginning between ages 1 and 4 years, oculomotor apraxia, choreoathetosis, telangiectasias of the conjunctivae, immunodeficiency, frequent infections, and an increased risk for malignancy, particularly leukemia and lymphoma.
Patients with AT are hypersensitive to ionizing radiation and have an increased susceptibility to cancer, usually leukemia or lymphoma. Hemophilia A is an X-linked recessive nonneoplastic hematological disorder characterized by a deficiency in factor VIII clotting activity that results in prolonged oozing after injuries, tooth extractions, or surgery and delayed or recurrent bleeding prior to complete wound healing. HFE-hereditary hemochromatosis is an autosomal recessive disease characterized by excessive storage of iron in the liver, skin, pancreas, heart, joints, and testes. In untreated individuals, early symptoms may include abdominal pain, weakness, lethargy, and weight loss. Therefore, it would be most likely that this patient had HHT if she had a genetic condition. 158. : D. Hereditary hemorrhagic telangiectasia (HHT) is an autosomal dominant condition characterized by the presence of multiple arteriovenous malformations (AVMs) that lack intervening capillaries and result in direct connections between arteries and veins. Although HHT is a developmental disorder and infants are occasionally severely affected, in most people the features are age-dependent and the diagnosis is not suspected until adolescence or later. Small AVMs (or telangiectases) close to the surface of the skin and mucous membranes often rupture and bleed after slight trauma. The most common clinical manifestation is spontaneous and recurrent nosebleeds (epistaxis) beginning on average at age 12 years. Approximately 25% of individuals with HHT have GI bleeding, which most commonly begins after age 50 years. Large AVMs often cause symptoms when they occur in the brain, liver, or lungs; complications from bleeding or shunting may be sudden and catastrophic. Germline pathogenic variants in the ENG, ACVRL1 (ALK1), SMAD4, and GDF2 genes are associated with HHT.
Molecular genetic testing of all four genes may detect pathogenic variants in approximately 80%–90% of individuals who meet unequivocal clinical diagnostic criteria for HHT. Ataxia telangiectasia (AT) is an autosomal recessive chromosomal breakage syndrome characterized by progressive cerebellar gait beginning between ages 1 and 4 years, oculomotor apraxia, choreoathetosis, telangiectasias of the conjunctivae, immunodeficiency, frequent infections, and an increased risk for malignancy, particularly leukemia and lymphoma. Patients with AT are hypersensitive to ionizing radiation and have an increased susceptibility to cancer, usually leukemia or lymphoma. Germline pathogenic variants in the ATM gene are associated with AT. Hemophilia A is an X-linked recessive nonneoplastic hematological disorder characterized by a deficiency in factor VIII clotting activity that results in prolonged oozing after injuries, tooth extractions, or surgery and delayed or recurrent bleeding prior to complete wound healing. Germline pathogenic variants in the F8 gene are associated with hemophilia A. HFE-hereditary hemochromatosis is an autosomal recessive disease characterized by excessive storage of iron in the liver, skin, pancreas, heart, joints, and testes. In untreated individuals, early symptoms may include abdominal pain, weakness, lethargy, and weight loss. Germline pathogenic variants in the HFE gene are associated with HFE-hereditary hemochromatosis. Therefore, the ENG gene would most likely be included in the sequencing assay ordered for this patient in order to rule out genetic etiologies. 159. : D. Hereditary hemorrhagic telangiectasia (HHT) is an autosomal dominant condition characterized by the presence of multiple arteriovenous malformations (AVMs) that lack intervening capillaries and result in direct connections between arteries and veins.
Although HHT is a developmental disorder and infants are occasionally severely affected, in most people the features are age-dependent and the diagnosis is not suspected until adolescence or later. Small AVMs (or telangiectases) close to the surface of the skin and mucous membranes often rupture and bleed after slight trauma. The most common clinical manifestation is spontaneous and recurrent nosebleeds (epistaxis) beginning on average at age 12 years. Approximately 25% of individuals with HHT have GI bleeding, which most commonly begins after age 50 years. Large AVMs often cause symptoms when they occur in the brain, liver, or lungs; complications from bleeding or shunting may be sudden and catastrophic. Germline pathogenic variants in the ENG, ACVRL1 (ALK1), SMAD4, and GDF2 genes are associated with HHT. Molecular genetic testing of all four genes may detect pathogenic variants in approximately 80%–90% of individuals who meet unequivocal clinical diagnostic criteria for HHT. Ataxia telangiectasia (AT) is an autosomal recessive chromosomal breakage syndrome characterized by progressive cerebellar gait beginning between ages 1 and 4 years, oculomotor apraxia, choreoathetosis, telangiectasias of the conjunctivae, immunodeficiency, frequent infections, and an increased risk for malignancy, particularly leukemia and lymphoma. Patients with AT are hypersensitive to ionizing radiation and have an increased susceptibility to cancer, usually leukemia or lymphoma. Germline pathogenic variants in the ATM gene are associated with AT. Hemophilia A is an X-linked recessive nonneoplastic hematological disorder characterized by a deficiency in factor VIII clotting activity that results in prolonged oozing after injuries, tooth extractions, or surgery and delayed or recurrent bleeding prior to complete wound healing. Germline pathogenic variants in the F8 gene are associated with hemophilia A.
HFE-hereditary hemochromatosis is an autosomal recessive disease characterized by excessive storage of iron in the liver, skin, pancreas, heart, joints, and testes. In untreated individuals, early symptoms may include abdominal pain, weakness, lethargy, and weight loss. Germline pathogenic variants in the HFE gene are associated with HFE-hereditary hemochromatosis. Therefore, the SMAD4 gene would most likely be included in the sequencing assay ordered for this patient in order to rule out genetic etiologies. 160. : E. Hereditary hemorrhagic telangiectasia (HHT) is an autosomal dominant condition characterized by the presence of multiple arteriovenous malformations (AVMs) that lack intervening capillaries and result in direct connections between arteries and veins. Although HHT is a developmental disorder and infants are occasionally severely affected, in most people the features are age-dependent and the diagnosis is not suspected until adolescence or later. Small AVMs (or telangiectases) close to the surface of the skin and mucous membranes often rupture and bleed after slight trauma. The most common clinical manifestation is spontaneous and recurrent nosebleeds (epistaxis) beginning on average at age 12 years. Approximately 25% of individuals with HHT have GI bleeding, which most commonly begins after age 50 years. Large AVMs often cause symptoms when they occur in the brain, liver, or lungs; complications from bleeding or shunting may be sudden and catastrophic. Germline pathogenic variants in the ENG, ACVRL1 (ALK1), SMAD4, and GDF2 genes are associated with HHT. Molecular genetic testing of all four genes may detect pathogenic variants in approximately 80%–90% of individuals who meet unequivocal clinical diagnostic criteria for HHT. Chromosome breakage study is the diagnostic test for Fanconi anemia. Chromosome microarray analysis (CMA) is the first-line test for individuals with multiple congenital anomalies, developmental delay, intellectual disability, and autism.
CMA study is used to detect copy-number gains or losses. The ACMG recommends CMA as the first-line test for individuals with multiple congenital anomalies, developmental delay, intellectual disability, and autism. Methylation study is used to identify epigenetic changes in the genome, such as methylation study for Prader–Willi/Angelman syndromes. Multiplex ligation-dependent probe amplification (MLPA) is a time-efficient technique to detect genomic deletions and insertions bigger than single-nucleotide variants and in/dels, but smaller than what CMA can detect. Next-generation sequencing (NGS) is a high-throughput test to sequence multiple genes at the same time. It is an appropriate test for Fanconi anemia, hearing loss, and other conditions. Sanger sequencing is the most appropriate molecular test for single-gene disorders when most pathogenic variants are single-nucleotide variants and in/dels. It is an appropriate test for Gaucher disease, Wiskott–Aldrich syndrome, and other conditions. Targeted mutation analysis is used to identify specific mutations, which is a cost-effective test when there is a founder effect in a population. Targeted mutation analysis is also commonly used to diagnose family members after the mutation is identified in the proband. It is an appropriate test for cystic fibrosis (CF) carrier screening. Therefore, a next-generation sequencing (NGS) assay including at least the four genes would most likely be used for the genetic test ordered for this patient in order to rule out HHT. 161. : B. Hereditary hemorrhagic telangiectasia (HHT) is an autosomal dominant condition characterized by the presence of multiple arteriovenous malformations (AVMs) that lack intervening capillaries and result in direct connections between arteries and veins. Although HHT is a developmental disorder and infants are occasionally severely affected, in most people the features are age-dependent and the diagnosis is not suspected until adolescence or later.
Small AVMs (or telangiectases) close to the surface of the skin and mucous membranes often rupture and bleed after slight trauma. The most common clinical manifestation is spontaneous and recurrent nosebleeds (epistaxis) beginning on average at age 12 years. Approximately 25% of individuals with HHT have GI bleeding, which most commonly begins after age 50 years. Large AVMs often cause symptoms when they occur in the brain, liver, or lungs; complications from bleeding or shunting may be sudden and catastrophic. Germline pathogenic variants in the ENG, ACVRL1 (ALK1), SMAD4, and GDF2 genes are associated with HHT. Molecular genetic testing of all four genes may detect pathogenic variants in approximately 80%–90% of individuals who meet unequivocal clinical diagnostic criteria for HHT. Therefore, the patient’s unborn child would have a 1/2 chance of inheriting the familial deletion in the ENG gene for HHT. 162. : B. Immune dysregulation, polyendocrinopathy, enteropathy, X-linked (IPEX) syndrome, autoimmune lymphoproliferative syndrome (ALPS), X-linked lymphoproliferative disease (XLP), and Wiskott–Aldrich syndrome (WAS) are all congenital nonneoplastic hematological disorders characterized by immunodeficiency. Von Willebrand disease (vWD) is the most common hereditary coagulation abnormality described in humans; it is caused by pathogenic variants in vWF on 12p13.2. IPEX is characterized by systemic autoimmunity, typically beginning in the first year of life. Presentation is most commonly the clinical triad of watery diarrhea, eczematous dermatitis, and endocrinopathy (most commonly insulin-dependent diabetes mellitus). Most children have other autoimmune phenomena including Coombs-positive anemia, autoimmune thrombocytopenia, autoimmune neutropenia, and tubular nephropathy. Without aggressive immunosuppression or bone marrow transplantation, the majority of affected males die within the first 1–2 years of life from metabolic derangements or sepsis.
A few with a milder phenotype have survived into the second or third decade of life. There are no specific laboratory tests to confirm the diagnosis. Molecular analysis of the FOXP3 gene on Xp11.23 is required for the diagnosis. FOXP3 is the only gene in which germline pathogenic variants are known to cause IPEX syndrome. ALPS, caused by defective lymphocyte homeostasis, is characterized by nonmalignant lymphoproliferation such as lymphadenopathy, hepatosplenomegaly with or without hypersplenism, autoimmune disease mostly directed toward blood cells, and lifelong increased risk for both Hodgkin and non-Hodgkin lymphoma. Heterozygous or homozygous (compound heterozygous) germline pathogenic variants in FAS, FASLG, and CASP10 result in ALPS. XLP is caused by germline pathogenic variants in SH2D1A and XIAP (BIRC4). The three most commonly recognized phenotypes of SH2D1A-related XLP are hemophagocytic lymphohistiocytosis (HLH) associated with Epstein–Barr virus (EBV) infection (58% of individuals), dysgammaglobulinemia (31%), and lymphoproliferative disorders (malignant lymphoma) (30%). Manifestations of SH2D1A-related XLP can also occur in the absence of EBV. XIAP-related XLP also presents with HLH (often associated with EBV) or dysgammaglobulinemia, but no cases of lymphoma have been described to date. Wiskott–Aldrich syndrome (WAS) is an X-linked disorder caused by germline pathogenic variants in WAS on Xp11.23, which is characterized by thrombocytopenia, eczema, and combined immunodeficiency. Therefore, it was most likely that this deceased patient had IPEX. 163. : B. FOXP3 is the only gene in which germline pathogenic variants are known to cause immune dysregulation, polyendocrinopathy, enteropathy, X-linked (IPEX) syndrome. Male patients usually have neonatal enteropathy and neonatal polyendocrinopathy. Female carriers of FOXP3 pathogenic variants are generally healthy.
However, female carriers with mild symptoms have occasionally been reported, owing to the pattern of X-chromosome inactivation. Therefore, the unborn sister of the patient in this question would most likely not have any symptoms even if she inherited the familial pathogenic variant. 164. : D. Immune dysregulation, polyendocrinopathy, enteropathy, X-linked (IPEX) syndrome is characterized by systemic autoimmunity, typically beginning in the first year of life. Presentation is most commonly the clinical triad of watery diarrhea, eczematous dermatitis, and endocrinopathy (most commonly insulin-dependent diabetes mellitus). Most children have other autoimmune phenomena, including Coombs-positive anemia, autoimmune thrombocytopenia, autoimmune neutropenia, and tubular nephropathy. Without aggressive immunosuppression or bone marrow transplantation, the majority of affected males die within the first 1–2 years of life from metabolic derangements or sepsis; a few with a milder phenotype have survived into the second or third decade of life. There are no specific laboratory tests to confirm the diagnosis. FOXP3 is the only gene in which mutations are known to cause IPEX syndrome. Molecular analysis of the FOXP3 gene on Xp11.23 is required for the diagnosis. Heterozygous or homozygous (compound heterozygous) pathogenic variants in FAS, FASLG, and CASP10 result in autoimmune lymphoproliferative syndrome (ALPS). ALPS, which is caused by defective lymphocyte homeostasis, is characterized by nonmalignant lymphoproliferation such as lymphadenopathy, hepatosplenomegaly with or without hypersplenism, autoimmune disease mostly directed toward blood cells, and a lifelong increased risk for both Hodgkin and non-Hodgkin lymphoma. This patient had late-onset IPEX syndrome presenting with a severe phenotype with aggressive autoimmune-associated symptoms, which led to his death. Therefore, FOXP3 would most likely harbor a germline pathogenic variant for IPEX syndrome in this patient. 165. : B.
Immune dysregulation, polyendocrinopathy, enteropathy, X-linked (IPEX) syndrome is characterized by systemic autoimmunity, typically beginning in the first year of life. Presentation is most commonly the clinical triad of watery diarrhea, eczematous dermatitis, and endocrinopathy (most commonly insulin-dependent diabetes mellitus). Most children have other autoimmune phenomena, including Coombs-positive anemia, autoimmune thrombocytopenia, autoimmune neutropenia, and tubular nephropathy. Without aggressive immunosuppression or bone marrow transplantation, the majority of affected males die within the first 1–2 years of life from metabolic derangements or sepsis; a few with a milder phenotype have survived into the second or third decade of life. There are no specific laboratory tests to confirm the diagnosis. FOXP3 is the only gene in which mutations are known to cause IPEX syndrome. Molecular analysis of the FOXP3 gene on Xp11.23 is required for the diagnosis. Sanger sequencing analysis of all exons, exon–intron boundaries, and the first polyadenylation site of FOXP3 detects mutations in approximately 25% of males with clinical symptoms suggestive of IPEX syndrome. Research has suggested the possibility of an additional autosomal locus. Among the males who lack FOXP3 mutations, approximately half have low FOXP3 mRNA expression levels and low numbers of FOXP3-expressing cells in peripheral blood, suggesting that defects in other genes or gene products, possibly in the same pathway as FOXP3, may cause a similar phenotype. This patient had late-onset IPEX syndrome, presenting with a severe phenotype with aggressive autoimmune-associated symptoms, which led to his death. Therefore, there was a 25% chance that Sanger sequencing analysis of FOXP3 would confirm the diagnosis in this patient. 166. : B.
Autoimmune lymphoproliferative syndrome (ALPS), caused by defective lymphocyte homeostasis, is characterized by nonmalignant lymphoproliferation such as lymphadenopathy, hepatosplenomegaly with or without hypersplenism, autoimmune disease mostly directed toward blood cells, and lifelong increased risk for both Hodgkin and non-Hodgkin lymphoma. Heterozygous or homozygous (compound heterozygous) germline pathogenic variants in FAS, FASLG, and CASP10 result in ALPS. ALPS-FAS is the most common and best-characterized type of ALPS, which is caused by heterozygous germline mutations in FAS. Therefore, FAS would most likely harbor a pathogenic variant in this patient. 167. : D. Autoimmune lymphoproliferative syndrome (ALPS), caused by defective lymphocyte homeostasis, is characterized by nonmalignant lymphoproliferation such as lymphadenopathy, hepatosplenomegaly with or without hypersplenism, autoimmune disease mostly directed toward blood cells, and lifelong increased risk for both Hodgkin and non-Hodgkin lymphoma. Heterozygous or homozygous (compound heterozygous) germline pathogenic variants in FAS, FASLG, and CASP10 result in ALPS. ALPS-FAS is the most common and best-characterized type of ALPS, which is caused by heterozygous germline mutations in FAS. Immune dysregulation, polyendocrinopathy, enteropathy, X-linked (IPEX) syndrome is characterized by systemic autoimmunity, typically beginning in the first year of life. Presentation is most commonly the clinical triad of watery diarrhea, eczematous dermatitis, and endocrinopathy (most commonly insulin-dependent diabetes mellitus). Molecular analysis of the FOXP3 gene on Xp11.23 is required for the diagnosis. FOXP3 is the only gene in which germline pathogenic variants are known to cause IPEX syndrome. Therefore, it would be most likely that FOXP3 did not harbor a pathogenic variant in this patient. 168. : D.
Autoimmune lymphoproliferative syndrome (ALPS), caused by defective lymphocyte homeostasis, is characterized by nonmalignant lymphoproliferation such as lymphadenopathy, hepatosplenomegaly with or without hypersplenism, autoimmune disease mostly directed toward blood cells, and lifelong increased risk for both Hodgkin and non-Hodgkin lymphoma. Heterozygous or homozygous (compound heterozygous) pathogenic variants in FAS, FASLG, and CASP10 result in ALPS. ALPS-FAS is the most common and best-characterized type of ALPS, associated with heterozygous germline mutations in FAS. Approximately 20%–25% of individuals with ALPS currently lack a genetic diagnosis. Therefore, there was a 75% chance that Sanger sequencing analysis of FAS, FASLG, and CASP10 would confirm the diagnosis in this patient. 169. : A. Immune dysregulation, polyendocrinopathy, enteropathy, X-linked (IPEX) syndrome, autoimmune lymphoproliferative syndrome (ALPS), X-linked lymphoproliferative disease (XLP), and Wiskott–Aldrich syndrome (WAS) are all congenital nonneoplastic hematological disorders characterized by immunodeficiency. Von Willebrand disease (vWD) is the most common hereditary coagulation abnormality described in humans; it is caused by pathogenic variants in vWF on 12p13.2. ALPS, caused by defective lymphocyte homeostasis, is characterized by nonmalignant lymphoproliferation such as lymphadenopathy, hepatosplenomegaly with or without hypersplenism, autoimmune disease mostly directed toward blood cells, and lifelong increased risk for both Hodgkin and non-Hodgkin lymphoma. Heterozygous or homozygous (compound heterozygous) germline pathogenic variants in FAS, FASLG, and CASP10 result in ALPS. IPEX is characterized by systemic autoimmunity, typically beginning in the first year of life. Presentation is most commonly the clinical triad of watery diarrhea, eczematous dermatitis, and endocrinopathy (most commonly insulin-dependent diabetes mellitus).
Most children have other autoimmune phenomena, including Coombs-positive anemia, autoimmune thrombocytopenia, autoimmune neutropenia, and tubular nephropathy. There are no specific laboratory tests to confirm the diagnosis. FOXP3 is the only gene in which mutations are known to cause IPEX syndrome. Molecular analysis of the FOXP3 gene on Xp11.23 is required for the diagnosis. XLP is caused by mutations in SH2D1A and XIAP (BIRC4). The three most commonly recognized phenotypes of SH2D1A-related XLP are hemophagocytic lymphohistiocytosis (HLH) associated with Epstein–Barr virus (EBV) infection (58% of individuals), dysgammaglobulinemia (31%), and lymphoproliferative disorders (malignant lymphoma) (30%). Manifestations of SH2D1A-related XLP can also occur in the absence of EBV. XIAP-related XLP also presents with HLH (often associated with EBV) or dysgammaglobulinemia, but no cases of lymphoma have been described to date. Wiskott–Aldrich syndrome (WAS) is an X-linked disorder caused by pathogenic variants in WAS on Xp11.23, which is characterized by thrombocytopenia, eczema, and combined immunodeficiency. Therefore, it was most likely that the deceased patient had ALPS. 170. : E. Autoimmune lymphoproliferative syndrome (ALPS), caused by defective lymphocyte homeostasis, is characterized by nonmalignant lymphoproliferation such as lymphadenopathy, hepatosplenomegaly with or without hypersplenism, autoimmune disease mostly directed toward blood cells, and lifelong increased risk for both Hodgkin and non-Hodgkin lymphoma. Heterozygous or homozygous (compound heterozygous) germline pathogenic variants in FAS, FASLG, and CASP10 result in ALPS. In general, ALPS is an autosomal dominant disorder; however, patients with biallelic germline pathogenic variants have been seen. The patient in this question had a heterozygous pathogenic variant in FAS, which resulted in symptoms related to ALPS. His mother, who had the same familial pathogenic variant, was asymptomatic.
The risk of developing ALPS-related complications in the unborn sister depends on the nature of the mutation, as well as the presence of other as-yet incompletely understood genetic or environmental factors. Therefore, it was unclear if the unborn sister of the proband would develop symptoms. 171. : B. Wiskott–Aldrich syndrome (WAS) is an X-linked recessive immunodeficiency characterized by thrombocytopenia, eczema, and recurrent infections. WAS on Xp11.23 is the only gene associated with WAS. Female carriers of a WAS pathogenic variant rarely have significant clinical symptoms and generally have no immunologic or biochemical markers of the disorder; however, mild thrombocytopenia is noted in a small proportion. Therefore, it was most likely that the unborn sister of the proband would not develop WAS symptoms. 172. : E. Wiskott–Aldrich syndrome (WAS) is an X-linked recessive immunodeficiency characterized by thrombocytopenia, eczema, and recurrent infections. WAS on Xp11.23 is the only gene in which pathogenic variants are known to cause WAS. Sanger sequencing of WAS may identify approximately 95% of pathogenic variants, and del/dup analysis may identify about 5% of pathogenic variants. Therefore, there was more than a 99% chance that Sanger sequencing analysis and del/dup analysis of the WAS gene would confirm the diagnosis in this patient. 173. : C. X-linked severe combined immunodeficiency (XSCID) is a combined cellular and humoral immunodeficiency caused by pathogenic variants in IL2RG. Germline pathogenic variants in FAS result in autoimmune lymphoproliferative syndrome (ALPS). Germline pathogenic variants in FOXP3 result in immune dysregulation, polyendocrinopathy, enteropathy, X-linked (IPEX) syndrome. IL1RG is not a known gene. Therefore, IL2RG is responsible for XSCID according to the publication in Cell in 1993. 174. : A. X-linked severe combined immunodeficiency (XSCID) is an immunodeficiency disorder in which the body produces very few T cells and NK cells.
In the absence of T-cell help, B cells become defective but are not depleted. Therefore, the number of B cells does not decrease in patients with XSCID. 175. : B. X-linked severe combined immunodeficiency (XSCID) is a combined cellular and humoral immunodeficiency caused by pathogenic variants in IL2RG. If the therapeutic DNA integrates at a sensitive spot in the genome (for example, in a tumor suppressor gene), the therapy could induce a tumor. This has occurred in clinical trials for XSCID patients, in which hematopoietic stem cells were transduced with a corrective transgene using a retrovirus, and this led to the development of T-cell leukemia in 3 of 20 patients. Insertion of the IL2RG gene near the LMO2 gene may activate the LMO2 gene, which is a known oncogene. The precise role of LMO2 activation in the development of T-cell leukemia is unclear. Therefore, patients with XSCID tend to develop T-cell leukemia after gene therapy. 176. : A. Adenosine deaminase (ADA) deficiency is one form of an autosomal recessive SCID (severe combined immunodeficiency), a disorder that affects the immune system. ADA deficiency is very rare but very dangerous, because a malfunctioning immune system leaves the body open to infection from bacteria and viruses. Gene therapy attempts to treat genetic diseases at the molecular level by correcting what is wrong with defective genes. A 4-year-old girl with ADA deficiency became the first gene therapy patient on September 14, 1990, at the NIH Clinical Center. White blood cells were taken from her, and the normal genes for making adenosine deaminase were inserted into them. The corrected cells were infused back into her. Dr. W. French Anderson helped develop this landmark clinical trial when he worked at the National Heart, Lung, and Blood Institute. Therefore, adenosine deaminase deficiency was treated with gene therapy for the first time in 1990.
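The Mendelian risk arithmetic used in several of the answers above (the 1/2 × 1/2 = 1/4 X-linked calculation in answer 155, and the 1/2 transmission risk for an autosomal dominant condition in answer 161) can be sketched in a few lines. This is a minimal illustration only; the function names are invented for this sketch and do not come from the book.

```python
from fractions import Fraction

# Illustrative helpers (names are hypothetical, not from the book).

def xlinked_risk_affected_son():
    """Risk that the next child is an affected boy when the mother
    carries an X-linked pathogenic variant (as in answer 155)."""
    p_inherit_variant = Fraction(1, 2)  # mother passes the mutated X
    p_male = Fraction(1, 2)             # chance the child is a boy
    return p_inherit_variant * p_male

def autosomal_dominant_risk():
    """Risk that a child of an affected heterozygous parent inherits
    the familial variant (as in answer 161, HHT)."""
    return Fraction(1, 2)

print(xlinked_risk_affected_son())  # 1/4
print(autosomal_dominant_risk())    # 1/2
```

Working with `Fraction` rather than floats keeps the counseling figures exact (1/4 rather than 0.25), which matches how recurrence risks are usually quoted.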
Book: Self-Assessment Questions for Clinical Molecular Genetics (2019), Haiying Meng

Chapter: Splenectomy for Conditions Other Than Trauma. In: Shackelford's Surgery of the Alimentary Tract, 2 Volume Set (Eighth Edition), 2019. Rory L. Smoot, ... David M. Nagorney

Hemoglobinopathies

Sickle cell disease includes SS, hemoglobin SC disease, and the sickle β-thalassemias. The inherited point mutation in the β-globin gene leads to an abnormal β-chain, forming a hemoglobin with decreased solubility in its deoxygenated form. Pathogenesis of sickle disease results from abnormal polymerization of hemoglobin S with low cellular oxygen content. Exponential propagation of this process stiffens and distorts erythrocytes. Further compounding factors include abnormal endothelial adhesion, formation of heterocellular aggregates, dysregulation of nitric oxide–mediated vasodilation, and local inflammation. All of these factors lead to slowed RBC transit and their entrapment in the vasculature and in the spleen.55 Microvascular occlusion results, and sickle patients suffer from end-organ damage of the eyes, kidneys, subcutaneous tissue, and bone. Splenic sequestration occurs when the RBC is trapped in the enlarged spleen, which then undergoes autoinfarction; it is observed in 7% to 30% of SS patients between 2 and 5 years of life. Acute manifestation, known as acute splenic sequestration crisis (ASSC), is potentially fatal. Patients present with profound acute anemia (decrease in hemoglobin by >2 g/dL), reticulocytosis, and thrombocytopenia. Acute therapy requires resuscitation by RBC transfusions. However, recurrence carries a 20% mortality rate and can occur in 50% of those who survive ASSC.55 As a means to prevent future ASSC, elective splenectomy has been indicated in children older than 2 or 3 years of age after the first episode of ASSC.
The operative mortality is 7%, and 5-year mortality is 3.4%.56,57 The risk of postsplenectomy sepsis is approximately 2% in this patient population but increases substantially if splenectomy is performed before 4 years of age.58–60 More recently, partial splenectomy has been compared with total splenectomy in pediatric patients and has demonstrated no difference in postsplenectomy hemoglobin levels between the groups, underscoring a decision-making process for splenectomy that focuses on minimizing complications and improving quality of life. Importantly, in this multiinstitutional observational study, no difference was found in the rate of postsplenectomy sepsis between partial and total splenectomy.51 Splenectomy (partial or total) has not been proven to increase survival, but its benefits include reduced transfusion dependency, relief of pain from splenomegaly, and treatment of splenic abscesses resulting from splenic infarctions.56,61 Patients with thalassemia major (homozygous β-thalassemia) synthesize insufficient β-globin; the resulting unbalanced α-chains precipitate and deform erythrocytes. They typically depend on multiple transfusions to maintain a hemoglobin level greater than 10 g/dL. When complications of hypersplenism develop, as measured by a transfusion requirement of greater than 250 mL/kg per year and iron overload, splenectomy is indicated.62 Splenectomy reduces the requirements for both transfusions and deferoxamine (an iron chelator) in 32% of patients.63 More than 80% of children with thalassemia regain normal weight and growth rates after splenectomy.64 The risk for overwhelming postsplenectomy sepsis (OPSS) is high in this patient population, approximately 10% in the long term.65 Therefore splenectomy is usually delayed until after 6 to 8 years of age.
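The hypersplenism criteria for thalassemia major quoted above (transfusion requirement greater than 250 mL/kg per year together with iron overload) can be sketched as a simple check. This is an illustrative sketch only; the function name and the reduction of iron overload to a boolean are assumptions, not a clinical decision tool.

```python
def thalassemia_splenectomy_indicated(transfusion_ml_per_kg_per_year: float,
                                      iron_overload: bool) -> bool:
    """Hypothetical helper mirroring the text's hypersplenism criteria for
    thalassemia major: transfusion requirement >250 mL/kg per year AND
    iron overload. Illustrative only, not clinical guidance."""
    return transfusion_ml_per_kg_per_year > 250 and iron_overload

# Example: a patient requiring 300 mL/kg/year with iron overload meets
# both criteria; one requiring 200 mL/kg/year does not.
```

Note that both criteria must be met; either one alone does not satisfy the quoted indication.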
Partial splenectomy has been advocated in younger children,66 and laparoscopic splenectomy is feasible in these patients.67

Chapter: Hypochromic and Hemolytic Anemias
2021, Atlas of Diagnostic Hematology, Meenakshi Garg Bansal, Genevieve Marie Crane

Congenital Structural Hemoglobin Disorders
In addition to thalassemias, mutations that cause structural alterations of the globin gene are included in the broad category of hemoglobinopathies. In these cases the amount of hemoglobin produced may be normal, but hemolysis results from unstable protein and the resultant aggregates or degradation products. More than 700 structural hemoglobin variants have been described, of which hemoglobins S, C, and E are the most frequently detected, with HbS, HbC, and HbSC variants being significantly associated with increased hemolysis.30 HbE, although the most common structural variant, is clinically innocuous unless present in combination with β-thalassemia, as it is also produced at a reduced rate.30

Hemoglobin S
Sickle cell disease (homozygous S hemoglobin) is a severe hemolytic anemia with a markedly shortened red blood cell life span.29 Sickle cell trait affects approximately 8% of African Americans, with a lower frequency in Americans of Hispanic descent (<1%).36 The disease is caused by a mutation in the codon of the sixth amino acid of the β-globin gene, resulting in the substitution of valine for glutamic acid. Sickling disease results from homozygous inheritance of this mutation or from compound heterozygosity with another structurally altered hemoglobin (e.g., HbC, HbD, HbOArab, HbE) or with reduced β-globin production; in either case, disease arises from polymerization of HbS in the deoxygenated state.
The disease can be ameliorated by increased HbF concentrations, which help prevent sickling, or concurrent α-thalassemia, which reduces hemoglobin concentration. HbF is heterogeneously distributed between red blood cells, particularly following hydroxyurea administration, and red blood cells containing HbF demonstrate improved survival compared with those that do not.37 Hemoglobin S polymerization results in distortion of the red blood cell membrane and alters the ability of the cell to maintain ion balance, which leads to dehydration. Only a fraction of the cells become irreversibly sickled; these are the cells seen on a peripheral smear. Sickled cells are more likely to adhere to endothelium or become trapped in the microvasculature, leading to occlusion, further ischemia, and increased sickling. This vicious cycle can lead to a sickle cell crisis manifesting as severe pain, which may in some cases be precipitated by cold, dehydration, or infection. This may occur in any tissue, but the chest (acute chest syndrome), lower back, and extremities are common sites. The patient may present with symptoms of an acute abdomen from intraabdominal ischemia. Priapism is another common complication requiring urgent intervention.38 These acute episodes may be followed by a hyperhemolytic crisis as irreversibly sickled cells are cleared. Sickling can also result in intravascular hemolysis with release of substances that alter normal vascular function.39 Intravascular cell-free hemoglobin can scavenge nitric oxide, resulting in increased risk for vasoocclusion.38 Arginase released from red blood cells may also reduce L-arginine availability to endothelial cells for nitric oxide production (Fig. 3.8). In addition to hemolysis, sequestration crises may occur, with pooling of red blood cells in the spleen or liver, a decrease in hemoglobin on the order of 3 g/dL below the patient's baseline, and signs of hypoxia and hypovolemic shock.
It is less of an issue outside of early childhood because most patients ultimately undergo splenic autoinfarction. The HbS trait is typically asymptomatic, but it appears to confer resistance to severe forms of malaria. This may occur through increased sickling and removal of infected red blood cells, but the precise mechanism is unclear. Although sickle cell disease is caused by a well-defined mutation, the clinical presentation can be heterogeneous owing to other genetic and environmental factors. HbS is also seen together with other structurally abnormal hemoglobins, including HbD and HbC. Coinheritance of HbD and HbS results in a severe phenotype similar to homozygous HbS, whereas HbSC has a milder phenotype. HbS can also be inherited with thalassemias. In sickle β-thalassemia, the amount of HbA varies based on the allele inherited (β0 versus β+), but such patients generally have anemia and complications from sickle cells (Fig. 3.9). Diagnosis can be made based on a combination of laboratory tests, examination of the peripheral smear, and hemoglobin electrophoresis (see Fig. 3.6). Typically, patients with sickle cell disease have anemia (Hb typically between 5 and 11 g/dL), increased lactate dehydrogenase, increased indirect bilirubin, increased serum ferritin, elevated erythropoietin, and decreased serum haptoglobin. A decreased haptoglobin favors a component of intravascular hemolysis as opposed to primary destruction by the reticuloendothelial system, but levels may vary because it is also an acute-phase reactant. The peripheral smear demonstrates irreversibly sickled cells and increased reticulocytes (Fig. 3.10). Neutrophil and platelet counts may also be elevated. Traditional hemoglobin electrophoresis or high-pressure liquid chromatography can be used to identify HbS (see Fig. 3.6). Solubility testing, red blood cell sickling assays, and genetic studies may also be used.
Hemoglobin C
Hemoglobin C is a slightly less common structural hemoglobin variant, with the highest frequency in West Africa, potentially above 15%.40 Homozygous inheritance appears to convey full resistance against malaria, with partial protection in heterozygotes, and the homozygous phenotype is less severe than that of sickle cell disease. This allele may have originated independently in Southeast Asia. In the United States, approximately 2% of Americans of African descent have hemoglobin C trait and 1 in 6000 have hemoglobin C disease. It may co-occur with β-thalassemia or the HbS mutation. As with HbS, the HbC mutation is also in codon 6 of the β-globin gene. This results in a conversion from glutamic acid to lysine. Crystals can be formed in vivo, which result in decreased deformability and reduced red blood cell life span. These red blood cells also demonstrate an increased loss of K+, resulting in dehydrated, spherocytic cells with increased hemoglobin concentration. This red blood cell dehydration is also marked in HbSC disease.41 Patients with hemoglobin C disease may have mild to moderate splenomegaly, cholelithiasis, or other signs of chronic hemolysis. Pain crises are not a feature, and life span is not decreased. Laboratory values generally demonstrate a mild to moderate hemolytic anemia with hemoglobin in the range of 10 to 11 g/dL and potentially increased indirect bilirubin. On peripheral smear, HbC is characterized by target cells, spherocytes, and characteristic crystals, particularly in patients who are splenectomized. The red blood cells may be microcytic and hyperchromic with mild reticulocytosis. HbC can readily be distinguished from HbA and HbS by electrophoresis or liquid chromatography owing to the charge difference from the substituted lysine (see Fig. 3.6).
In contrast to HbS, HbC does not differ in solubility compared with HbA, but in the homozygous state, HbC forms crystals when the red blood cells are incubated in hypertonic saline.

Unstable Hemoglobins
Unstable hemoglobins may lead to a congenital hemolytic anemia with Heinz bodies visible from hemoglobin precipitation on a peripheral smear. These anemias are thus sometimes referred to as "congenital Heinz body hemolytic anemias." They are predominantly sporadic, often resulting from mutations that destabilize the association between the heme group and globin. Alternatively, mutations may affect the formation of globin dimers, tetramers, or the protein secondary structure.42 The resultant clinical presentation is variable but often worsens in the setting of oxidant stress. Laboratory findings demonstrate evidence of hemolytic anemia, including increased reticulocytes, elevated lactate dehydrogenase, and decreased haptoglobin. A peripheral smear typically demonstrates increased anisopoikilocytosis, Heinz bodies, and basophilic stippling. Electrophoresis and liquid chromatography may demonstrate abnormal hemoglobins, and solubility tests can be performed to demonstrate an unstable variant.

Membrane Defects
Membrane defects cause hemolysis through the inability of the erythrocyte to withstand the normal shear stress and distortion required for migration through sinusoids and microvasculature.

Hereditary Spherocytosis
Hereditary spherocytosis is the most common inherited hemolytic anemia in northern Europeans; it affects approximately 1 in 2000 people in North America, with an equal incidence in males and females.43 It may result from genetic defects in a number of different membrane proteins, but with a common theme of disrupting so-called vertical connections linking the lipid bilayer via integral membrane proteins to the membrane skeleton. When this linkage is disrupted, there is destabilization of the lipid bilayer and release of portions of membrane as microvesicles.
This membrane loss ultimately results in a decreased surface area-to-volume ratio in the erythroid cells and spherocytosis.43 Defects in the membrane cytoskeletal and linker proteins spectrin, ankyrin, or protein 4.2 result in band 3–containing microvesicles. Defects in the integral membrane protein band 3 can be distinguished by the absence of this protein from the vesicles. These abnormal red blood cells undergo further membrane loss or entrapment in the spleen, generating microspherocytes. There the spherocytes also become more dehydrated, potentially owing to the acidic environment. The reduced surface area also leads to abnormal cation permeability, leading to an increased requirement for the sodium–potassium adenosine triphosphatase (Na+-K+ ATPase) pump, increased glycolysis, and red blood cell dehydration.44 Although some of the abnormal cells return to circulation, destruction/phagocytosis in the spleen is the main cause of hemolysis. Clinically, hereditary spherocytosis is characterized by anemia, jaundice, gallstones, and splenomegaly.43 Severity of the disease may vary from asymptomatic to transfusion dependent. Combination with sickle trait or HbSC disease can increase the risk of splenic infarct or acute splenic sequestration, although the incidence of this combination appears to be rare.45 Diagnosis includes examination of the blood film for spherocytosis and reticulocytosis, together with testing for increased osmotic fragility (Fig. 3.11). This is combined with information from the family history, physical examination, and laboratory values. Numerous other entities, particularly autoimmune hemolytic anemias, also demonstrate spherocytes. The direct antiglobulin test result should be negative; compared with autoimmune hemolytic anemia, the reticulocyte count is usually lower and the hemoglobin concentration is higher.
Congenital dyserythropoietic anemia type II (CDA II) may also demonstrate a positive family history and increased red blood cell osmotic fragility, but will also have an abnormal bone marrow with erythroid hyperplasia and binucleate or multinucleate erythroblasts.43 Flow cytometry can also be used as a diagnostic tool. The eosin-5′-maleimide flow cytometry test uses a fluorescent dye that binds to transmembrane proteins including band 3, Rh-related proteins, and CD47 and can be used to detect erythrocytes with abnormal/decreased membrane.46 However, membrane proteins may also be decreased in hereditary elliptocytosis, hereditary pyropoikilocytosis, CDA II, and the setting of some enzymatic defects. Molecular diagnosis remains challenging because of the multiple potential genes that have been implicated. Analysis of the relative abundance of red blood cell proteins by sodium dodecylsulfate–polyacrylamide gel electrophoresis (SDS-PAGE) may be of value to first identify the defective protein for further study.1

Hereditary Elliptocytosis
Hereditary elliptocytosis (HE) is characterized by elliptical/oval erythrocytes. Related membrane defects are found in hereditary pyropoikilocytosis (HPP), in which red blood cells are unusually sensitive to thermal injury. This disease is seen worldwide with relatively higher frequencies in individuals of African and Mediterranean descent, potentially related to protective factors against malaria.47 Prevalence in North America is estimated at 1 in 2000. HE typically results from failure of red blood cells to maintain (or regain) the biconcave shape following distortion in the vasculature, owing to defects in horizontal protein interactions. Horizontal interactions are required to stabilize the membrane skeleton, in contrast to the vertical interactions that stabilize the association with the lipid bilayer, which are defective in hereditary spherocytosis. Both result in a decreased ability of red blood cells to withstand shear stress.
Defects have been found in α- or β-spectrin, protein 4.1, and glycophorin C, but most HE variants involve the spectrin gene, including all cases of hereditary pyropoikilocytosis.1,47 Spectrin is the main structural protein of the membrane skeleton, and mutations typically impair the ability of spectrin dimers to self-associate. In the case of HPP, spectrin deficiency results in impairment of both horizontal and vertical spectrin associations, with the generation of microspherocytes and a more severe phenotype1 (Fig. 3.12). The clinical presentation of HE is often asymptomatic but may include hemolytic anemia, jaundice, and splenomegaly.47 By contrast, patients with HPP are severely affected because they carry homozygous or compound heterozygous spectrin mutations, resulting in both defective self-association and partial deficiency of spectrin.48 Diagnosis includes characteristic findings on the blood film of cigar-shaped elliptocytes, which are normochromic and normocytic and vary in percentage without a direct correlation to symptoms of hemolysis.1 In addition, reticulocytes (generally <5%), along with spherocytes, fragmented cells, and stomatocytes, may be present. HPP is characterized by a more severe phenotype with extreme poikilocytosis and microspherocytosis with a low measured MCV (see Fig. 3.12). SDS-PAGE or two-dimensional gel electrophoresis may be used to identify defects in spectrin. A peripheral smear from a patient with severe burns (Fig. 3.13) is also shown for comparison. The differential diagnosis on a peripheral smear may include Southeast Asian ovalocytosis, but this process is not associated with hemolysis. Here, hyperstable, rigid erythroid cells are produced as a result of an abnormal band 3 protein that is unable to transport ions and also restricts membrane movement.
The mutation is embryonic lethal in homozygous form but appears to confer resistance to several forms of malarial parasites.49 The genetic defect is found at increased prevalence in endemic areas. The peripheral smear demonstrates at least 20% ovalocytes, some with a central slit.1

Osmotic Defects
Several inherited causes of abnormal erythrocyte hydration have been described and may result in increased hemolysis. Together they are referred to as the hereditary stomatocytosis syndromes. Stomatocytosis results from the overhydration of red blood cells owing to a net increase in Na+ or K+ cations. Xerocytes form as a result of dehydration. Either results in decreased red blood cell survival.

Hereditary Xerocytosis
Hereditary xerocytosis (HX) is an autosomal dominant disorder caused by increased K+ efflux without a concurrent increase in Na+ ions and is the most common of the dehydrated red blood cell disorders.50 Missense mutations have been found in the gene encoding the Piezo1 protein, which is a mechanosensory molecule associated with stretch-activated cation channels. Mutations in the adenosine triphosphate (ATP)-binding cassette transporter ABCB6 have been associated with familial pseudohyperkalemia, which appears to lie on a spectrum with HX: clinical symptoms are absent, but an abnormal red blood cell morphology is present.51 The clinical presentation of HX is variable but may include pseudohyperkalemia (caused by leakage of K+ from erythrocytes after blood is drawn), jaundice, cholelithiasis, pulmonary arteriolar thrombosis, and perinatal edema.51 The hemolytic anemia is generally compensated. Patients may be anemic, but supranormal hemoglobin is also possible. Iron overload is common. The peripheral smear may demonstrate spiculated cells and target cells. Stomatocytes are not common. Red blood cell indices show an increased MCHC reflecting dehydration, and often a mildly increased MCV. Red blood cells are more resistant to osmotic lysis.
Hereditary Stomatocytosis
Hereditary stomatocytosis/hydrocytosis is an exceedingly rare autosomal dominant disorder with a prevalence of less than 1 in 1 million individuals. It is characterized by a passive leak of Na+ into erythrocytes, causing overhydration. The Na+-K+ ATPase pump is overactivated but cannot compensate, ultimately resulting in exhaustion of glycolysis and metabolic disruption of the cell.52 The resultant erythrocytes have increased osmotic fragility. Genetic defects have been described in Rh-associated glycoprotein, whose mutated form induces a more pronounced cation leak when expressed in Xenopus eggs. The red blood cells are also deficient in the membrane protein stomatin, which acts as a switch converting the glucose transporter 1 (GLUT-1) to a transporter for dehydroascorbic acid. The dehydroascorbic acid is then metabolized to ascorbate inside the cell. The absence of stomatin in hereditary stomatocytosis may enable greater glucose transport to fuel the glycolytic needs, but also intracellular ascorbate depletion. The clinical presentation is moderate to severe anemia, often with jaundice, cholelithiasis, and splenomegaly, with a tendency for iron overload. The peripheral smear demonstrates macrocytosis with stomatocytosis (up to 50% of red blood cells); the cells also demonstrate a decreased MCHC.1 The macrocytosis may be marked, with an MCV up to 150 fL. Of note, acquired stomatocytosis may be seen in acute alcoholism, in certain chemotherapy regimens, and in long-distance runners immediately after a race.1,53

Acquired Osmotic Defects
Plasma hypo-osmolarity and resultant hyponatremia, when severe, such as with errors in the dialysate fluid used in hemodialysis, may cause red blood cell swelling, decreased deformability, and hemolysis at lower shear forces than normal red blood cells tolerate.54 Symptoms may not be specific, but the condition can be life threatening.
Enzyme Deficiencies
Erythrocyte enzyme deficiencies may lead to hereditary nonspherocytic hemolytic anemias, of which glucose-6-phosphate dehydrogenase (G6PD) deficiency is the most common. Prevalence in northern Europeans is approximately 1 in 1000, but it may affect upward of 20% of the population in some regions, including sub-Saharan Africa, and approximately 50% of Kurdish Jews. This distribution is consistent with a selective advantage conferred by resistance to malaria. Coexistence of G6PD deficiency may also decrease the severity of sickle cell disease, perhaps through the accelerated disappearance of older red blood cells, which demonstrate a higher fraction of irreversibly sickled cells.55 G6PD deficiency is X-linked and thus affects hemizygous males and homozygous females. The normal allele is designated G6PD B. More than 180 variants have been described,56 varying in both regional distribution and severity. In the most common variants, G6PD A− and G6PD Mediterranean, the enzyme shows decreased stability but is present at normal or near-normal levels. Mutations causing decreased activity or altered kinetics have also been described. Overall, most mutations are missense mutations altering a single amino acid (85%), with mutations in exons 6, 10, and 13, which encode substrate-binding regions, being most frequent.56 The G6PD A− variant is the most common deficient variant in individuals of African descent, with red blood cells containing 5% to 15% of normal enzymatic activity. There is an age-dependent decline in red blood cell G6PD activity, so older red blood cells are more severely affected. The G6PD Mediterranean variant is most common in Mediterranean regions and results in barely detectable enzymatic activity. As a result, young red blood cells are also susceptible to hemolysis, and there is a more severe phenotype.
Clinical presentation varies based on the mutation, with hemolysis being precipitated by drugs, stress, infection, or foods such as fava beans in certain individuals. This results from a reduced ability to handle the production of free radicals and to protect sulfhydryl groups from oxidation. Reduced glutathione (GSH) may be oxidized to the oxidized glutathione (GSSG) disulfide form or glutathione may be complexed to hemoglobin in a mixed disulfide bond. Oxidative damage causes denaturation and precipitation of hemoglobin and other stromal proteins as Heinz bodies. Cells with Heinz bodies are less able to traverse the splenic red pulp and may be eliminated. Increased hemolysis in the setting of infection is not as well characterized, but it may relate to hydrogen peroxide production by leukocytes.57 The most serious sequela is the development of icterus neonatorum, although this is often not accompanied by obvious hemolytic changes and may be influenced by concurrent defects in bilirubin metabolism.58 Diagnosis may be made using screening tests or a quantitative assay for enzyme activity along with DNA testing to identify specific mutations. Enzyme levels may be falsely elevated after a hemolytic episode because of the presence of young RBCs/reticulocytes with higher/normal enzyme levels. Caution should be exercised with interpreting screening test results and enzyme activity testing results after an acute hemolytic episode. Heinz bodies may also be seen in the peripheral smear in the setting of drug-induced hemolysis, along with fragmentation and spherocytosis when severe. In the absence of hemolysis, red blood cells in G6PD deficiency appear normal. However, varying degrees of reticulocytosis and anemia may be detected. 
G6PD mutations are graded based on World Health Organization (WHO) criteria into a series of classes (I–V), with class I the most severely affected (baseline chronic hemolysis exacerbated by precipitating causes) and class V denoting increased enzymatic activity.59 Class II shows less than 10% residual activity, and class III is moderately deficient, with 10% to 60% activity. Class IV shows normal enzymatic activity. Other enzyme deficiencies that may lead to hemolytic anemia include pyruvate kinase (see later text), glucose phosphate isomerase, triosephosphate isomerase, pyrimidine 5′-nucleotidase, glutathione synthetase, glutathione reductase, and hexokinase.60 Overall, red blood cell morphology is of little utility in differentiating these disorders, but pyrimidine 5′-nucleotidase deficiency is characterized by basophilic stippling. Also, numerous enzyme deficiencies do not appear to cause hemolysis, such as defects in carbonic anhydrase or lactate dehydrogenase.

Pyruvate Kinase Deficiency
Pyruvate kinase (PK) is a key regulatory enzyme in glycolysis and the most common cause of nonspherocytic hemolytic anemia owing to defects in glycolysis. It has a prevalence of approximately 50 cases per million in the Caucasian population.60 More than 160 mutations have been reported, the majority of which are missense mutations. Major metabolic disturbances that result from PK deficiency include ATP depletion and increased 2,3-bisphosphoglycerate production, but the precise mechanism for the shortened red blood cell life span remains unknown. The increased 2,3-bisphosphoglycerate results in a decreased oxygen affinity of hemoglobin (a right shift of the oxygen-hemoglobin dissociation curve) and thus partially ameliorates the effects of the anemia.
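The WHO grading of G6PD variants described above can be sketched as a simple lookup over the stated boundaries (class I, chronic hemolysis; class II, <10% residual activity; class III, 10% to 60%; class IV, normal; class V, increased). The function below is a hypothetical illustration; the exact handling of boundary values and the 100% cutoff for "normal" are simplifying assumptions, not part of the WHO criteria.

```python
def g6pd_who_class(residual_activity_pct: float,
                   chronic_hemolysis: bool = False) -> str:
    """Illustrative mapping of residual G6PD activity to the WHO class
    (I-V) quoted in the text. Boundary handling is an assumption;
    not a diagnostic tool."""
    if chronic_hemolysis:
        return "I"    # severe deficiency with chronic hemolysis
    if residual_activity_pct < 10:
        return "II"   # severe deficiency, episodic hemolysis
    if residual_activity_pct <= 60:
        return "III"  # moderate deficiency
    if residual_activity_pct <= 100:
        return "IV"   # normal activity
    return "V"        # increased activity
```

For example, a variant with 5% residual activity and no baseline hemolysis would fall in class II under this sketch, whereas the same activity with chronic hemolysis would be class I.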
The clinical picture may vary from well-compensated hemolysis to hydrops fetalis or neonatal death and may vary significantly among patients harboring the same mutation.60

Acquired Causes of Hemolytic Anemia
Autoimmune Hemolytic Anemia
Autoimmune hemolytic anemia may develop as a result of underlying autoimmune disease (particularly systemic lupus erythematosus), infection, or malignancy, in association with certain drugs, or without any identifiable underlying cause. Approximately 50% of cases are idiopathic. The estimated incidence is approximately 1 in 80,000, and the anemia is generally mild to moderate without the need for transfusion. Autoimmune hemolytic anemia is characterized by antibodies against red blood cell antigens that result in increased red blood cell destruction. In most cases (80% to 90%) the antibodies are of the immunoglobulin (Ig) G type and bind to red blood cells at physiologic temperatures (warm antibody). Cold-reactive antibodies can also cause disease; these antibodies can be divided into two types: cold agglutinins (typically IgM) and cold hemolysins (typically IgG). Cold agglutinins directly bind red blood cells. Cold hemolysins, such as the Donath-Landsteiner autoantibody, cause paroxysmal cold hemoglobinuria (discussed further in the following text). In rare cases, there is a mixture of both cold and warm antibody–mediated hemolytic anemia. Of the secondary causes of autoimmune hemolytic anemia, lymphoid malignancies, particularly chronic lymphocytic leukemia, are the most common and account for the majority of cases of cold agglutinin disease. They are also a common source of warm antibody–mediated hemolysis. The hemolytic disease may predate the diagnosis of the malignancy.61 When warm antibody autoimmune hemolytic anemia is seen in combination with immune thrombocytopenia, it is termed Evans syndrome (Fig.
3.14).62 Warm antibody autoimmune hemolytic anemia may occur at any age, with a peak around the seventh decade, and the majority of cases occur sporadically. The older peak age may relate to the association with underlying lymphoproliferative disorders. Presenting symptoms are generally related to the anemia itself. With severe cases, patients may present with fever, pallor, jaundice, or heart failure.62 In addition to idiopathic and lymphoproliferative causes, cold agglutinin disease may be associated with infectious mononucleosis, Mycoplasma pneumoniae, and chickenpox in some pediatric cases.62 Cold agglutinin disease is much less common than warm antibody–mediated disease, at approximately 14 cases per 1 million people. Women are more frequently affected than men.1 Diagnosis is based on the demonstration of antibodies and/or complement on the red blood cell surface, as indicated by a positive result on direct antiglobulin testing (DAT, also known as the Coombs test), which results in red blood cell agglutination. In warm antibody–mediated hemolysis, the antibody remains bound to the red blood cell. In cold antibody–mediated hemolysis, only the complement remains bound. Testing with more specific agents after the Coombs test (which specifically measures IgG and complement) can be used to identify the specific pattern. In certain instances, the direct antiglobulin test result may be negative despite the presence of autoimmune-mediated hemolysis, so-called Coombs-negative autoimmune hemolytic anemia.63 This may be caused by IgG binding below the sensitivity of the assay, a low-affinity IgG, or sensitization to hemolysis by IgA or IgM alone without the fixation of complement. Modified wash methods or specific testing for IgA and IgM antibodies may help identify these cases when there is a high index of suspicion.63 The peripheral smear demonstrates polychromasia and frequent spherocytes (Fig. 3.15).
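The direct antiglobulin test patterns described in this section (warm antibody: IgG remains bound to the red blood cell; cold antibody: only complement remains bound) can be summarized as a small lookup. The function and its labels below are hypothetical simplifications for illustration only; real DAT interpretation uses specific antisera and requires clinical correlation.

```python
def dat_pattern(igg_positive: bool, complement_positive: bool) -> str:
    """Illustrative summary of direct antiglobulin (Coombs) test patterns
    as described in the text. Simplified assumption: IgG-only suggests a
    warm antibody, complement-only a cold antibody."""
    if igg_positive and not complement_positive:
        return "warm antibody pattern (IgG bound)"
    if complement_positive and not igg_positive:
        return "cold antibody pattern (complement only)"
    if igg_positive and complement_positive:
        return "mixed or warm-with-complement pattern"
    return "DAT-negative (consider Coombs-negative AIHA if suspicion is high)"
```

As the text notes, a negative result does not exclude autoimmune hemolysis, which is why the last branch still flags the Coombs-negative possibility.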
In some cases, red blood cell fragments, nucleated red cells, or erythrophagocytosis by monocytes may be present.62 The special cases of paroxysmal cold hemoglobinuria and drug-induced hemolysis are discussed in more detail later.

Paroxysmal Cold Hemoglobinuria
Paroxysmal cold hemoglobinuria is rare. Historically it had been associated with syphilis64 (tertiary or congenital), leading to recurrent massive hemolysis after cold exposure. The hemolysis on rewarming is often intravascular.65 More common now is a self-limited form that may follow viral illnesses in young adults and children and present with fever, chills, and red-brown urine.65 A chronic, idiopathic form also occurs.

Drug-Induced Hemolysis
Multiple mechanisms of drug-induced hemolysis have been described. Although proven cases are quite rare, estimated at 1 in 1 million, mild to moderate cases of hemolysis may be underrecognized.66 There may be direct induction of autoantibodies by a drug (e.g., fludarabine), in which case the drug need not be present in the in vitro reaction to induce hemolysis, but the process will remit after discontinuation of the drug.66 Alternatively, the drug may bind directly to proteins on the red cell surface, attracting antidrug antibodies and increasing red cell phagocytosis.66 The prototype is penicillin, which binds covalently to red blood cell proteins. Noncovalent binding to red blood cell proteins may also occur, creating the potential for formation of antibodies against drug–membrane protein complexes (IgM or IgG) and the potential for significant intravascular hemolysis, resulting in renal failure or even death.66 The exact mechanism, however, remains somewhat controversial. The most common drugs to cause this are antibiotics (cefotetan, ceftriaxone, piperacillin), but at least 125 drugs have been implicated; methyldopa historically has been one of the most frequently involved.
Drug-related nonimmunologic protein adsorption on red blood cells may also lead to a positive Coombs test result without causing hemolysis.1

Alloimmune Hemolytic Disease of the Fetus and Newborn
Alloimmune hemolytic disease of the fetus and newborn results from transplacental transmission of maternal antibodies that bind to fetal red blood cells as a result of paternally inherited antigens. Antibodies may develop owing to prior maternal-fetal hemorrhage or transfusions, or they may be naturally occurring (anti-A, anti-B).67 Rhesus D sensitization is historically the most common cause, but it can now largely be prevented by treatment of those at risk with Rh immunoglobulin. Unfortunately, it remains a significant problem in parts of the developing world.67 More than 50 red blood cell antigens have been associated with alloimmunization, and prophylactic therapy to prevent antibody formation is not available for antigens other than RhD. Incidence varies between ethnic groups, with antibody screening tests detecting antibodies to Rh or other minor red blood cell antigens in up to 0.4% of pregnant women.1 Clinical presentation in the newborn varies, but it is typically characterized by anemia, jaundice, and hepatosplenomegaly. Without intervention in RhD alloimmunization, hydrops may develop in utero prior to 34 weeks' gestation. Anti-Kell alloimmunization may result in a spectrum from mild anemia to hydrops fetalis, and ABO-related alloimmunization demonstrates jaundice as the prominent feature.1,68 Diagnosis can begin with screening for maternal antibodies against red blood cell antigens. For infants, the direct antiglobulin test with IgG reagent may demonstrate sensitization, but it must be correlated with the clinical presentation. The peripheral smear demonstrates polychromasia and anisopoikilocytosis (Fig. 3.16).
Fragmentation/Microangiopathic Hemolytic Anemia

Increased intravascular hemolysis may be caused by mechanical disruption in the vasculature or partial vascular occlusions.69 This can occur in the context of systemic illness, with platelet clumps forming in the microvasculature, as in disseminated intravascular coagulation, thrombotic thrombocytopenic purpura, hemolytic uremic syndrome, and preeclampsia/eclampsia. Destruction may also be increased in the setting of vascular malformations, such as the giant hemangiomas of Kasabach-Merritt syndrome, and mechanical destruction can increase with prosthetic valve malfunction or dialysis. Patients may have clinical symptoms of increased intravascular hemolysis, such as pallor, icterus, or dark urine. Laboratory values show signs of hemolysis and should prompt investigation for a potential underlying disorder. The peripheral smear shows fragmentation of red blood cells (schistocytosis) with poikilocytosis and reticulocytosis. Platelets are frequently decreased (Fig. 3.17).

Paroxysmal Nocturnal Hemoglobinuria

Paroxysmal nocturnal hemoglobinuria (PNH) results from an acquired mutation of hematopoietic stem cells in the PIGA gene, located on the X chromosome, which is required for synthesis of the glycophosphatidylinositol (GPI) moiety that anchors certain proteins to the cell surface. Such GPI-linked proteins include the complement regulatory proteins CD55 and CD59. As a result, the red blood cell progeny of these stem cells are subject to increased complement-mediated hemolysis. The prevalence is not known with certainty, particularly because improved methods have made it possible to detect very low-level clonal involvement in otherwise asymptomatic patients. The disease nonetheless appears to be rare, with a prevalence of less than 1 case per 200,000 people and a peak in the third and fourth decades. The size of the abnormal clone is an important determinant of the clinical manifestations.
In addition to hemolysis, clinical manifestations may include thrombophilia or pancytopenia caused by marrow failure, with presenting symptoms including fatigue, smooth muscle dystonias, or venous thrombosis at an unusual site (e.g., hepatic, cerebral, mesenteric, or dermal veins).70,71 The paroxysmal hemoglobinuria for which the disease is named, caused by exacerbations of intravascular hemolysis, is a presenting symptom in approximately 25% of cases.70 Hemosiderinuria is also common and may result in concurrent iron deficiency. The relationship between PNH and bone marrow failure is complex, and PNH clones may arise in association with autoimmune-mediated aplastic anemia.71 Indeed, owing to the lack of certain cell surface proteins, autoimmune processes may positively select for PNH clones, and PNH clones may expand over time in the setting of autoimmune aplastic anemia71 (Fig. 3.18). PNH clones may also arise in the setting of low-risk myelodysplastic syndrome. Diagnostic evaluation for PNH should be considered in the setting of nonspherocytic, Coombs-negative intravascular hemolysis. Original methods relied on erythrocyte-based assays, including the acidified serum lysis test (Ham test), the sucrose hemolysis test, and the complement lysis assay, but these are poorly quantitative and pale in sensitivity and specificity next to flow cytometric evaluation of cell surface GPI-linked proteins. The clone size may be underestimated if only red blood cells are evaluated, given the selective lysis of red blood cells and the possibility that the patient has been transfused. Because PNH is a primary stem cell defect and multiple proteins are GPI-linked, clone size can also be estimated in blood granulocytes and monocytes.
Standard GPI-linked proteins measured include CD55 and CD59 on glycophorin A–positive red blood cells, CD24 on CD15-positive granulocytes, and CD14 on CD33-positive monocytes.72 Sensitivity and specificity of diagnosis are further improved with the use of a fluorescein-labeled proaerolysin variant (FLAER); proaerolysin is a secreted prototoxin produced by the bacterium Aeromonas hydrophila that binds selectively to the GPI anchor.71 The assay uses an inactive variant and is less sensitive to the maturational state of the cells.72 If positive, a bone marrow biopsy is warranted to assess for underlying myelodysplastic syndrome (MDS) or aplastic anemia.

From: Atlas of Diagnostic Hematology (2021), Meenakshi Garg Bansal and Genevieve Marie Crane, chapter "Hypochromic and Hemolytic Anemias."

Structural Hemoglobin Disorders

In addition to thalassemias, mutations that cause structural alterations of the globin gene are included in the broad category of hemoglobinopathies. In these cases the amount of hemoglobin produced may be normal, but hemolysis results from unstable protein and the resultant aggregates or degradation products.
More than 700 structural hemoglobin variants have been described, of which hemoglobins S, C, and E are most frequently detected; the HbS, HbC, and HbSC variants are significantly associated with increased hemolysis.30 HbE, although the most common structural variant, is clinically innocuous unless present in combination with β-thalassemia, as it is also produced at a reduced rate.30

Hemoglobin S

Sickle cell disease (homozygous S hemoglobin) is a severe hemolytic anemia with a markedly shortened red blood cell life span.29 Sickle cell trait affects approximately 8% of African Americans, with a lower frequency in Americans of Hispanic descent (<1%).36 The disease is caused by a mutation in the codon of the sixth amino acid of the β-globin gene, resulting in the substitution of valine for glutamic acid. Sickling disease results from homozygous inheritance of this mutation, or from compound heterozygosity with another structurally altered hemoglobin (e.g., HbC, HbD, HbO-Arab, HbE) or with reduced β-globin production, owing to polymerization of HbS in the deoxygenated state. The disease can be ameliorated by increased HbF concentrations, which help prevent sickling, or by concurrent α-thalassemia, which reduces hemoglobin concentration. HbF is heterogeneously distributed between red blood cells, particularly following hydroxyurea administration, and red blood cells containing HbF demonstrate improved survival compared with those that do not.37 Hemoglobin S polymerization distorts the red blood cell membrane and alters the ability of the cell to maintain ion balance, which leads to dehydration. Only a fraction of the cells are irreversibly sickled; these are the cells seen on a peripheral smear. Sickled cells are more likely to adhere to endothelium or become trapped in the microvasculature, leading to occlusion, further ischemia, and increased sickling.
This vicious cycle can lead to a sickle cell crisis manifesting as severe pain, which may in some cases be precipitated by cold, dehydration, or infection. It may occur in any tissue, but the chest (acute chest syndrome), lower back, and extremities are common sites. The patient may present with symptoms of an acute abdomen from intraabdominal ischemia. Priapism is another common complication requiring urgent intervention.38 These acute episodes may be followed by a hyperhemolytic crisis as irreversibly sickled cells are cleared. Sickling can also result in intravascular hemolysis with release of substances that alter normal vascular function.39 Intravascular cell-free hemoglobin can scavenge nitric oxide, resulting in increased risk for vasoocclusion.38 Arginase released from red blood cells may also reduce the L-arginine available to endothelial cells for nitric oxide production (Fig. 3.8). In addition to hemolysis, sequestration crises may occur, with pooling of red blood cells in the spleen or liver, a decrease in hemoglobin on the order of 3 g/dL below the patient’s baseline, and signs of hypoxia and hypovolemic shock. This is less of an issue beyond early childhood because most patients ultimately undergo splenic autoinfarction. The HbS trait is typically asymptomatic, but it appears to confer resistance to severe forms of malaria. This may occur through increased sickling and removal of infected red blood cells, but the precise mechanism is unclear. Although sickle cell disease is caused by a well-defined mutation, the clinical presentation can be heterogeneous owing to other genetic and environmental factors. HbS is also seen together with other structurally abnormal hemoglobins, including HbD and HbC. Coinheritance of HbD and HbS results in a severe phenotype similar to homozygous HbS, whereas HbSC has a milder phenotype. HbS can also be inherited with thalassemias.
In sickle β-thalassemia, the amount of HbA varies based on the allele inherited (β0 versus β+), but such patients generally have anemia and complications from sickling (Fig. 3.9). Diagnosis can be made from a combination of laboratory tests, examination of the peripheral smear, and hemoglobin electrophoresis (see Fig. 3.6). Patients with sickle cell disease typically have anemia (Hb usually between 5 and 11 g/dL), increased lactate dehydrogenase, increased indirect bilirubin, increased serum ferritin, elevated erythropoietin, and decreased serum haptoglobin. A decreased haptoglobin favors a component of intravascular hemolysis as opposed to primary destruction by the reticuloendothelial system, but levels may vary because haptoglobin is also an acute-phase reactant. The peripheral smear demonstrates irreversibly sickled cells and increased reticulocytes (Fig. 3.10). Neutrophil and platelet counts may also be elevated. Traditional hemoglobin electrophoresis or high-pressure liquid chromatography can be used to identify HbS (see Fig. 3.6). Solubility testing, red blood cell sickling assays, and genetic studies may also be used.

Hemoglobin C

Hemoglobin C is a slightly less common structural hemoglobin variant, with the highest frequency in West Africa, where it may exceed 15%.40 Homozygous inheritance appears to confer full resistance against malaria, with partial protection in heterozygotes, and the homozygous phenotype is less severe than that of sickle cell disease. The allele may also have originated independently in Southeast Asia. In the United States, approximately 2% of Americans of African descent have hemoglobin C trait and 1 in 6000 have hemoglobin C disease. It may co-occur with β-thalassemia or the HbS mutation. As with HbS, the HbC mutation lies in codon 6 of the β-globin gene; here it results in a conversion from glutamic acid to lysine. Crystals can form in vivo, which decreases deformability and reduces red blood cell life span.
These red blood cells also demonstrate an increased loss of K+, resulting in dehydrated, spherocytic cells with increased hemoglobin concentration. This red blood cell dehydration is also marked in HbSC disease.41 Patients with hemoglobin C disease may have mild to moderate splenomegaly, cholelithiasis, or other signs of chronic hemolysis. Pain crises are not a feature, and life span is not decreased. Laboratory values generally demonstrate a mild to moderate hemolytic anemia, with hemoglobin in the range of 10 to 11 g/dL and potentially increased indirect bilirubin. On peripheral smear, HbC is characterized by target cells, spherocytes, and characteristic crystals, particularly in splenectomized patients. The red blood cells may be microcytic and hyperchromic with mild reticulocytosis. HbC can readily be distinguished from HbA and HbS by electrophoresis or liquid chromatography owing to the charge difference from the substituted lysine (see Fig. 3.6). In contrast to HbS, HbC does not differ in solubility from HbA, but in the homozygous state HbC forms crystals when the red blood cells are incubated in hypertonic saline.

Unstable Hemoglobins

Unstable hemoglobins may lead to a congenital hemolytic anemia, with Heinz bodies from precipitated hemoglobin visible on a peripheral smear. These anemias are thus sometimes referred to as “congenital Heinz body hemolytic anemias.” They are predominantly sporadic, often resulting from mutations that destabilize the association between the heme group and globin. Alternatively, mutations may affect the formation of globin dimers or tetramers or the protein’s secondary structure.42 The resultant clinical presentation is variable but often worsens in the setting of oxidant stress. Laboratory findings demonstrate evidence of hemolytic anemia, including increased reticulocytes, elevated lactate dehydrogenase, and decreased haptoglobin.
A peripheral smear typically demonstrates increased anisopoikilocytosis, Heinz bodies, and basophilic stippling. Electrophoresis and liquid chromatography may demonstrate abnormal hemoglobins, and solubility tests can be performed to demonstrate an unstable variant.

Membrane Defects

Membrane defects cause hemolysis through the inability of the erythrocyte to withstand the normal shear stress and distortion required for migration through sinusoids and the microvasculature.