2400
https://www.internationaljournalssrg.org/IJVS/paper-details?Id=22
Histological Structure of the Gills and Respiratory Surface Area of the Trachurus Mediterraneus Caught from the Coast of Misurata City

SSRG International Journal of Veterinary Science (IJVS), vol. 11, no. 1, pp. 1-5, 2025. © 2025 by SSRG - IJVS Journal.
Authors: Esmail Mohamed alhemmali, Tahani Riheel Abdulwahid Misbah, Amna Abushalla, Khawla Esbaga, Aisha Nasif
DOI: 10.14445/24550868/IJVS-V11I1P101

How to cite: Esmail Mohamed alhemmali, Tahani Riheel Abdulwahid Misbah, Amna Abushalla, Khawla Esbaga, Aisha Nasif, "Histological Structure of the Gills and Respiratory Surface Area of the Trachurus Mediterraneus Caught from the Coast of Misurata City," SSRG International Journal of Veterinary Science, vol. 11, no. 1, pp. 1-5, 2025.

Abstract: The gills supply the body with the oxygen needed for the physiological processes occurring in fish. Their efficiency varies between species, and gill surface area is one factor that determines a fish's efficiency and activity level. In May 2024, a rotary microtome was used to prepare tissue sections of the gills of Trachurus mediterraneus caught along the coast of Misurata city, in order to study the histological structure and estimate the respiratory surface area. Macroscopic examination revealed the shape of the gill arches, with an average of 131.75±0.5 gill filaments arranged in an orderly manner on the dorsal surface of the gill arch, with an average length of 38.75±1.25 µm. The average number of primary lamellae was 229.5±57.44, with an average length of 2.21±0.43 µm. Histologically, the gill lamellae are covered by two layers of epithelial cells, while the secondary lamellae are covered by a single layer of epithelial cells, with supporting cells within their structure. The gill filaments are supported by cartilage tissue and contain several blood sinuses and lacunae. The absolute and relative respiratory surface areas of the gills of T. mediterraneus were 148.5888 m² and 0.425319, respectively. The histological findings indicate that the studied fish possess cartilage supporting the gill filaments and numerous blood sinuses. The primary lamellae are lined with stratified epithelial cells, while the secondary lamellae are lined with simple epithelial cells. Pillar cells are also present, supporting the secondary lamellae and enclosing many blood spaces. The histological structure and micrometric measurements play a crucial role in understanding the physiological activity of fish, which is oxygen-dependent, and the structural and histological adaptations of the gill filaments.

Keywords: Fish, Gills, Filaments, Gill arches, Pillar cells.
2401
https://math.stackexchange.com/questions/53279/rotation-by-180-angle
rotation by 180 angle

Asked Jul 23, 2011; modified 8 years, 8 months ago; viewed 10k times.

Question (dato datuashvili): In general I know that if we rotate $(x, y)$ about the origin through $180^\circ$ we get the image $(-x, -y)$. But suppose we rotate not about the origin but about some other point $(a, b)$ — is the result the rotation about the origin plus or minus $(a, b)$? For example, take the point $A(3, 27)$ and rotate it by $180^\circ$ around the point $(2, -1)$. Rotating $(3, 27)$ about the origin by $180^\circ$ gives $(-3, -27)$, but how does $(2, -1)$ enter the result?

Tags: geometry, analytic-geometry

Comments:
- J. M. ain't a mathematician (Jul 23, 2011): The idea is to translate to the origin, rotate, and then undo the translation...
- André Nicolas (Jul 23, 2011): The general idea even has a name: Transform, Solve, Transform Back.

Answer (Américo Tavares, 2 votes): You can make a translation of axes so that $(2,-1)$ becomes the new origin. The new axes are $X = x - 2$, $Y = y + 1$. Then you compute the new coordinates of $A$: $(3-2, 27+1) = (1, 28)$. The symmetric point $A'$ with respect to $(X, Y) = (0, 0)$ is thus $A'(-1, -28)$ in the $XY$-coordinate system, or $A'(-1+2, -28-1) = (1, -29)$ in the original $xy$-coordinate system.

Answer (Dennis Gulko, 2 votes): You can first move the point $(2,-1)$ to the origin by adding $(-2,1)$ to all the points of the plane. Now the point $A$ goes to $(1,28)$. Now rotate: you get $(-1,-28)$. Now return back: the image is $(-1,-28)+(2,-1)=(1,-29)$. That's your result.
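Both answers apply the same translate-rotate-translate-back recipe. As a minimal sketch (the function name and the general-angle parameter are my own illustration, not from the thread), it could look like this in Python:

```python
import math

def rotate_about(point, center, angle_deg):
    """Rotate `point` about `center` by `angle_deg` degrees:
    translate the center to the origin, rotate, translate back."""
    x, y = point[0] - center[0], point[1] - center[1]   # translate
    t = math.radians(angle_deg)
    xr = x * math.cos(t) - y * math.sin(t)              # rotate about origin
    yr = x * math.sin(t) + y * math.cos(t)
    return (xr + center[0], yr + center[1])             # translate back

# The example from the question: A(3, 27) rotated 180 degrees about (2, -1).
print(rotate_about((3, 27), (2, -1), 180))  # -> (1.0, -29.0) up to float rounding
```

For the special case of 180°, the rotation reduces to `(2a - x, 2b - y)`, which gives the same `(1, -29)`.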
2402
https://www.youtube.com/watch?v=nJIMpyov72U
Summations 4: Proving the Arithmetic Sum Formula with Induction — Professor Painter. Posted: 22 Feb 2022.

Transcript: The first summation formula that we are going to look at is what's called the arithmetic summation. It's the first thing we looked at: adding up the numbers from 1 to n, so this is 1 + 2 + 3 + 4 and so on until we get to n. There's a classic story around this: that Newton, or Gauss, or whoever everyone's favorite mathematician is, was asked in elementary school to add up the numbers from one to a hundred, or one to a thousand — it changes based on the story — and this was given to him as just some busy work by a teacher who really didn't want to teach. However, he was very clever, identified this formula, and finished within seconds, as opposed to taking a full class period. It's almost certainly not true — it's like a fable of mathematics that everybody tells each other — but it's a neat little story to explain why this formula is kind of nice: it allows you to add up lots of values very quickly. This formula I actually do use sometimes in my regular life; the other ones maybe not so much, but this one does show up sometimes, even at the grocery store.

So, little anecdote aside, how are we going to prove this? It's stated as a proposition, and we're going to actually prove that it is correct: for any n in the positive integers, this formula is true. That might tickle your senses and say we're probably going to use induction here, since we're proving something that's going to be true for every single positive integer. Because we weren't told directly to use induction, I should probably tell the reader, so I'm going to write: we prove this via mathematical induction.

To prove things with induction, the first step is always to prove the base case. We're proving this for all n in the positive integers, so the base case is n = 1. For that we have the sum from i = 1 to 1 of i — that says starting at 1 and going to 1, add up the value i — so this is just equal to 1. Furthermore, let's compare that to our desired quantity on the right, which is n times (n + 1) divided by 2: that's 1 times 2 divided by 2, which is 1. So the base case holds.

We now make our inductive assumption: assume that P(n) is true, meaning the formula holds for n, so the sum from i = 1 to n of i equals n(n + 1)/2. We then want to show that P(n + 1) is true, which is the formula for n + 1: the sum from i = 1 to n + 1 of i equals (n + 1)(n + 2)/2, the right-hand side of the formula with n + 1 plugged in. Like we've seen with many inductive proofs in the past, we're going to start with one side of the thing we want to show and manipulate it, trying to use the inductive hypothesis somewhere along the way.

So let us consider the summation from i = 1 to n + 1 of i. I can write this as (n + 1), which is the biggest term in the summation, plus the sum from i = 1 to n of i. This says: add up 1 to n, and then afterwards add n + 1. This is in fact the same summation we had before; from the definition in our very first video, this is exactly what the definition tells us to do: you can peel off one term at a time in a summation, recursively. That's what we're going to do here. This seems like a good idea, because now I have the sum from i = 1 to n of i, which is exactly, verbatim, what my inductive assumption knows about. So I'm going to use my inductive assumption now: the (n + 1) remains as written, and I replace the summation with the fraction our inductive hypothesis assumed was true. Because I used my inductive assumption, I want to note that somewhere, so off to the side I justify this step with "by inductive assumption" (or "hypothesis" — both are valid ways to write it).

And now we have an algebra problem: some expression that I want to show is equal to (n + 1)(n + 2)/2. I'm going to factor an (n + 1) out of both terms, because I notice both terms have an (n + 1); that gives (n + 1) times (1 + n/2). Alternatively, I could have gotten a common denominator first — both approaches are equally valid. Next I get a common denominator between those two pieces, and this equals (n + 1)(2 + n)/2. This is written in a funny form, so I rewrite it just to make sure it looks identical to the thing I wanted to prove: it equals (n + 1)(n + 2)/2, which was our end goal. So P(n + 1) is true, because we started with the left-hand side of the thing we wanted to show, did algebra, and used our inductive hypothesis to show the right-hand side is equivalent to it. So the result follows; we close with "thus the result follows by the principle of mathematical induction," and that's the end of our proof.

One of the primary reasons we learned induction in this class was to be able to prove things like this. We're going to use summations so much going forward that we really want formulas for these well-known summations. This one is called the arithmetic summation; it will show up lots of times in this class and is a very useful formula — sometimes even in your regular, everyday life when you're trying to add things up.
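For reference, the argument in the video can be set down compactly in LaTeX (my transcription of the proof as spoken, not the video's own notation):

```latex
\begin{proof}
We prove $\sum_{i=1}^{n} i = \frac{n(n+1)}{2}$ for all $n \in \mathbb{Z}^{+}$
by induction.
\textbf{Base case} ($n=1$): $\sum_{i=1}^{1} i = 1 = \frac{1 \cdot 2}{2}$.
\textbf{Inductive step}: assume $\sum_{i=1}^{n} i = \frac{n(n+1)}{2}$. Then
\[
  \sum_{i=1}^{n+1} i
    = (n+1) + \sum_{i=1}^{n} i
    = (n+1) + \frac{n(n+1)}{2}
    = (n+1)\left(1 + \frac{n}{2}\right)
    = \frac{(n+1)(n+2)}{2},
\]
so the formula holds for $n+1$, and the result follows by the principle of
mathematical induction.
\end{proof}
```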
2403
https://www.accountingtools.com/articles/sinking-fund-method-of-depreciation.html
Sinking fund method of depreciation — AccountingTools
April 16, 2025 / Steven Bragg

What is the Sinking Fund Method of Depreciation?
The sinking fund method of depreciation is used when an organization wants to set aside enough cash to pay for a replacement asset when the current asset reaches the end of its useful life. As depreciation is incurred, a matching amount of cash is invested, with the interest proceeds deposited into an asset replacement fund. The interest deposited into this fund is also invested. By the time a replacement asset is needed, the funds needed to make the acquisition have accumulated in the associated fund.

When to Use the Sinking Fund Method
This approach is most applicable in industries with a large fixed asset base, which are constantly providing for future asset replacements in a highly organized manner. It is also most applicable to long-term, established industries in which the same assets are likely to be replaced over and over again.

Advantages of the Sinking Fund Method
The sinking fund method of depreciation offers several strategic advantages, particularly for long-term financial planning and asset management. One of its key benefits is that it ensures an organization systematically sets aside funds to replace an asset at the end of its useful life, reducing the financial burden of large, one-time capital expenditures. By investing the depreciation amounts and reinvesting the interest earned, the organization builds a dedicated reserve that grows over time, aligning cash flow with future asset needs. This method promotes fiscal discipline, enhances liquidity planning, and helps prevent disruptions in operations due to a lack of funds for critical asset replacements. It is especially useful for capital-intensive businesses that rely heavily on long-lived equipment or infrastructure.

Disadvantages of the Sinking Fund Method
There are several disadvantages associated with the sinking fund method:

Multiple funds needed.
The sinking fund method requires a separate asset replacement fund for each asset, so it can result in an unusually complex amount of accounting. For this reason, it is usually applied only to a few of the most expensive assets.

Varying investment rates. Investment rates will vary over the life of the asset, so the amount accumulated in the fund will probably not match the asset's original cost. If it turns out to be lower than the replacement cost, the business will need to find funding for the disparity.

Varying replacement cost. The replacement cost of the asset may have changed (up or down) over its life, so the funded amount may exceed or fall short of the actual purchase requirement.

Related Articles: Annuity Method of Depreciation; Overview of Depreciation; Retirement Method of Depreciation.
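The article describes the mechanics but gives no numbers. The standard sinking fund factor A = C·i / ((1 + i)^n − 1) (a textbook annuity formula, not one stated in the article) turns a target replacement cost into the equal periodic deposit. A small illustrative sketch with made-up figures:

```python
def sinking_fund_payment(target, rate, periods):
    """Equal end-of-period deposit that grows to `target` after `periods`
    periods at interest `rate` per period: A = C * i / ((1 + i)**n - 1)."""
    return target * rate / ((1 + rate) ** periods - 1)

# Hypothetical example: accumulate $100,000 over 10 years at 5% annual interest.
payment = sinking_fund_payment(100_000, 0.05, 10)
print(f"annual deposit: {payment:,.2f}")  # about 7,950.46

# Sanity check: simulate the fund with interest reinvested each year.
balance = 0.0
for _ in range(10):
    balance = balance * 1.05 + payment   # prior balance earns interest, then deposit
print(f"fund after 10 years: {balance:,.2f}")  # ~100,000.00
```

The simulation mirrors the article's description: each period the fund earns interest, the interest stays invested, and a fixed deposit is added, so the balance reaches the replacement cost exactly at the end of the asset's life.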
2404
https://www.birmingham.ac.uk/Documents/college-eps/college/stem/Student-Summer-Education-Internships/Proof-and-Reasoning.pdf
Proofs and Mathematical Reasoning
University of Birmingham

Author: Agata Stefanowicz
Supervisors: Joe Kyle, Michael Grove
September 2014
© University of Birmingham 2014

Contents
1 Introduction
2 Mathematical language and symbols
  2.1 Mathematics is a language
  2.2 Greek alphabet
  2.3 Symbols
  2.4 Words in mathematics
3 What is a proof?
  3.1 Writer versus reader
  3.2 Methods of proofs
  3.3 Implications and if and only if statements
4 Direct proof
  4.1 Description of method
  4.2 Hard parts?
  4.3 Examples
  4.4 Fallacious "proofs"
  4.5 Counterexamples
5 Proof by cases
  5.1 Method
  5.2 Hard parts?
  5.3 Examples of proof by cases
6 Mathematical Induction
  6.1 Method
  6.2 Versions of induction
  6.3 Hard parts?
  6.4 Examples of mathematical induction
7 Contradiction
  7.1 Method
  7.2 Hard parts?
  7.3 Examples of proof by contradiction
8 Contrapositive
  8.1 Method
  8.2 Hard parts?
  8.3 Examples
9 Tips
  9.1 What common mistakes do students make when trying to present the proofs?
  9.2 What are the reasons for mistakes?
  9.3 Advice to students for writing good proofs
  9.4 Friendly reminder
10 Sets
  10.1 Basics
  10.2 Subsets and power sets
  10.3 Cardinality and equality
  10.4 Common sets of numbers
  10.5 How to describe a set?
  10.6 More on cardinality
  10.7 Operations on sets
  10.8 Theorems
11 Functions
  11.1 Image and preimage
  11.2 Composition of the functions
  11.3 Special functions
  11.4 Injectivity, surjectivity, bijectivity
  11.5 Inverse function
  11.6 Even and odd functions
  11.7 Exercises
12 Appendix

Foreword
Talk to any group of lecturers about how their students handle proof and reasoning when presenting mathematics and you will soon hear a long list of 'improvements' they would wish for. And yet, if no one has ever explained clearly, in simple but rigorous terms, what is expected, it is hardly a surprise that this is a regular comment. The project that Agata Stefanowicz worked on at the University of Birmingham over the summer of 2014 had as its aim clarifying and codifying views of staff on these matters, and then using these as the basis of an introduction to the basic methods of proof and reasoning in a single document that might help new (and indeed continuing) students to gain a deeper understanding of how we write good proofs and present clear and logical mathematics. Through a judicious selection of examples and techniques, students are presented with instructive examples and straightforward advice on how to improve the way they produce and present good mathematics. An added feature that further enhances the written text is the use of linked video files that offer the reader the experience of 'live' mathematics developed by an expert. And Chapter 9, which looks at common mistakes that are made when students present proofs, should be compulsory reading for every student of mathematics. We are confident that, regardless of ability, all students will find something to improve their study of mathematics within the pages that follow. But this will be doubly true if they engage with the problems by trying them as they go through this guide.
Michael Grove & Joe Kyle
September 2014

Acknowledgements
I would like to say a big thank you to the Mathematics Support Centre team for the opportunity to work on an interesting project and for the help and advice from the very first day. Special gratitude goes to Dr Joe Kyle for his detailed comments on my work and tips on creating the document.
Thank you also to Michael Grove for his cheerful supervision, fruitful brainstorming conversations and many ideas on improving the document. I cannot forget to mention Dr Simon Goodwin and Dr Corneliu Hoffman; thank you for your time and friendly advice. The document would not be the same without help from the lecturers at the University of Birmingham who took part in my survey - thank you all. Finally, thank you to my fellow interns, Heather Collis, Allan Cunningham, Mano Sivantharajah and Rory Whelan, for making the internship an excellent experience.

1 Introduction
From the first day at university you will hear mention of writing mathematics in a good style and using "proper English". You will probably start wondering what the whole deal with words is, when you just wanted to work with numbers. If, on top of this scary welcome talk, you get a number of definitions and theorems thrown at you in your first week, where most of them include strange notions that you cannot completely make sense of - do not worry! It is important to notice how big a difference there is between mathematics at school and at university. Before the start of the course, many of us visualise really hard differential equations, long calculations and numbers many digits long. Most of us will be struck seeing theorems like "a × 0 = 0". Now, while this is obvious to everybody, mathematicians are the ones who will not take things for granted and would like to see the proof. This booklet is intended to give the gist of mathematics at university, present the language used and the methods of proof. A number of examples will be given, which should be a good resource for further study and an extra exercise in constructing your own arguments. We will start by introducing the mathematical language and symbols before moving on to the serious matter of writing mathematical proofs. Each theorem is followed by "notes", which are thoughts on the topic, intended to give a deeper idea of the statement. You will find that some proofs are missing steps, and the purple notes will hopefully guide you to complete the proof yourself. If stuck, you can watch the videos, which should explain the argument step by step. Most of the theorems presented, some easier and others more complicated, are discussed in the first year of a mathematics course. The last two chapters give the basics of sets and functions, as well as presenting plenty of examples for the reader's practice.

2 Mathematical language and symbols

2.1 Mathematics is a language
Mathematics at school gives us good basics; in a country where mathematical language is spoken, after GCSEs and A-Levels we would be able to introduce ourselves, buy a train ticket or order a pizza. To have a fluent conversation, however, a lot of work still needs to be done. Mathematics at university is going to surprise you. First, you will need to learn the language to be able to communicate clearly with others. This section will provide the "grammar notes", i.e. the commonly used symbols and notation, so that you can start writing your mathematical statements in a good style. And as with any other foreign language, "practice makes perfect", so take advantage of any extra exercises, which over time will make you fluent in the mathematical world.

2.2 Greek alphabet
Greek alphabet - upper and lower cases and the names of the letters (see Table 1).
2.3 Symbols
Writing proofs is much more efficient if you get used to the simple symbols that save us writing long sentences (very useful during fast-paced lectures!). Below you will find the basic list, with the symbols on the left and their meaning on the right-hand side, which should be a good start to exploring further mathematics. Note that these are useful shorthands when you need to note ideas down quickly. In general, though, when writing your own proofs, your lecturers will advise you to use words instead of the fancy notation - especially at the beginning, until you are totally comfortable with statements of the form "if..., then...". When reading mathematical books you will notice that the word "implies" appears more often than the symbol ⟹.

Table 1: Greek letters
A α alpha    B β beta    Γ γ gamma    Δ δ delta
E ε epsilon  Z ζ zeta    H η eta      Θ θ theta
I ι iota     K κ kappa   Λ λ lambda   M µ mu
N ν nu       Ξ ξ xi      O o omicron  Π π pi
P ρ rho      Σ σ sigma   T τ tau      Υ υ upsilon
Φ φ phi      X χ chi     Ψ ψ psi      Ω ω omega

• Quantifiers
∀ (universal quantifier) - for all
∃ (existential quantifier) - there exists

• Symbols in set theory
∪ - union
∩ - intersection
⊆ - subset
⊂ or ⊊ - proper subset
◦ - composition of functions

• Common symbols used when writing proofs and definitions
⟹ - implies
⟺ - if and only if
:= - is defined as
≡ - is equivalent to
: or | - such that
∴ - therefore
↯ - contradiction
■ or □ - end of proof

2.4 Words in mathematics
Many symbols presented above are useful tools in writing mathematical statements, but nothing more than a convenient shorthand. You must always remember that a good proof should also include words. As mentioned at the beginning of the paper, "correct English" (or any other language in which you are literate) is as important as the symbols and numbers when writing mathematics. Since it is important to present proofs clearly, it is good to add an explanation of what is happening at each step using full sentences. A whole page with just numbers and symbols, without a single word, will nearly always be an example of a bad proof!

Tea or coffee? Mathematical language, though it uses the "correct English" mentioned earlier, differs slightly from our everyday communication. The classic example is a joke about a mathematician who, asked whether they would like tea or coffee, answers simply "yes". This is because "or" in mathematics is inclusive: the set "A or B" consists of everything that is in A or in B (or both), so its elements comprise those in A together with those in B. On the other hand, each element of "A and B" must be in both A and B.

Exercise 2.1. Question: There are 3 spoons, 4 forks and 4 knives on the table. What fraction of the utensils are forks OR knives? Answer: "Forks or knives" means that we consider both of these sets. We have 4 of each, so there are 8 together. Therefore forks or knives constitute 8/11 of all the utensils. If we were asked what fraction of the utensils are "forks AND knives", the answer would be 0, since no utensil is both a fork and a knife. Please refer to section 10, where the operations on sets are explained in detail. The notions "or" and "and" are illustrated on Venn diagrams, which should help to understand them better.
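The inclusive "or" maps directly onto set union; here is a tiny Python check of Exercise 2.1 (the variable names are my own, not the booklet's):

```python
from fractions import Fraction

spoons = {"s1", "s2", "s3"}
forks = {"f1", "f2", "f3", "f4"}
knives = {"k1", "k2", "k3", "k4"}
utensils = spoons | forks | knives           # 11 utensils in total

forks_or_knives = forks | knives             # inclusive "or" = union
forks_and_knives = forks & knives            # "and" = intersection

print(Fraction(len(forks_or_knives), len(utensils)))   # 8/11
print(Fraction(len(forks_and_knives), len(utensils)))  # 0
```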
3 What is a proof?
"The search for a mathematical proof is the search for a knowledge which is more absolute than the knowledge accumulated by any other discipline." - Simon Singh

A proof is a sequence of logical statements, one implying another, which gives an explanation of why a given statement is true. Previously established theorems may be used to deduce the new ones; one may also refer to axioms, which are the starting points, "rules" accepted by everyone. Mathematical proof is absolute, which means that once a theorem is proved, it is proved for ever. Until proven, though, the statement is never accepted as true. Writing proofs is the essence of mathematics studies. You will notice very quickly that from day one at university, lecturers will be very thorough with their explanations. Every word will be defined, notation clearly presented and each theorem proved. We learn how to construct logical arguments and what a good proof looks like. It is not easy, though, and requires practice; therefore it is always tempting for students to learn theorems and apply them, leaving the proofs behind. This is a really bad habit (and does not pay off during final examinations!); instead, go through the proofs given in lectures and textbooks, understand them and ask for help whenever you are stuck. There are a number of methods which can be used to prove statements, some of which will be presented in the next sections. Hard and tiring at the beginning, constructing proofs gives a lot of satisfaction when the end is reached successfully.

3.1 Writer versus reader
Kevin Houston in his book suggests thinking of a proof as a small "battle" between the reader and the writer. At the beginning of your mathematics studies you will often be the reader, learning the proofs given by your lecturers or found in textbooks. You should take an active attitude, which means working through the given proof with pen and paper. Reading proofs is not easy and may get boring if you just try to read them like a novel, comfortable on your sofa at half concentration. Probably the most important part is to question everything the writer tells you. Treat it as an argument between yourself and the author of the proof, and ask them "why?" at each step of their reasoning. When it comes to writing your own proof, the final version should be clear and have no gaps in understanding. Here, a good idea is to imagine someone else questioning each of the steps you present. The argument should flow and have enough explanation that the reader will find the answer to every "why?" they might ask.

3.2 Methods of proofs
There are many techniques that can be used to prove statements. It is often not obvious at the beginning which one to use, although with a bit of practice we may be able to give an "educated guess" and hopefully reach the required conclusion. It is important to notice that there is no one ideal proof - a theorem can be established using different techniques, and none of them will be better or worse (as long as they are all valid). For example, in "Proofs from the Book", we may find six different proofs of the infinity of primes (one of which is presented in section 7). Go ahead and master the techniques - you might discover a passion for pure mathematics! We can divide the techniques presented in this document into two groups: direct proofs and indirect proofs.
Direct proof assumes a given hypothesis, or any other known statement, and then logically deduces a conclusion. Indirect proof, also called proof by contradiction, assumes the hypothesis (if given) together with a negation of the conclusion to reach a contradictory statement. It is often equivalent to proof by contrapositive, though it is subtly different (see the examples). Both direct and indirect proofs may also include additional tools to reach the required conclusions, namely proof by cases or mathematical induction.

3.3 Implications and if and only if statements
"If our hypothesis is about anything and everything and not about one or more particular things, then our deductions constitute mathematics. Thus mathematics may be defined as the subject in which we never know what we are talking about, nor whether what we are saying is true." - Bertrand Russell

The formula A ⟹ B means "A implies B" or "if A then B", where A and B are two statements. Saying A ⟹ B indicates that whenever A is accepted, we must also accept B. The important point is that the direction of the implication should not be mixed up! When A ⟹ B, the argument goes from A to B, so if A holds, then B does too (we cannot have A without B). On the other hand, when B is accepted, it does not have to happen that A is also accepted (so we can have B without A). This can be illustrated by the following example: it is raining ⟹ it is cloudy. Now, if the first statement is true (so it is raining), then we automatically accept that it is also cloudy. However, it does not work the other way round; the fact that it is cloudy does not imply rain. Notice further that accepted does not mean true! We have that if it is raining, then it is cloudy, and we accept both statements, but we do not know whether they are actually true (we might have a nice sunny day!). Also, the genuineness of the second statement gives no information about whether the first statement is true or not. It may happen that a false statement leads to a truth via a number of implications!

"If and only if", often abbreviated "iff", is expressed mathematically as A ⟺ B and means that if A holds, then B also holds, and vice versa. To prove theorems of this form, we must show the implications in both directions, so the proof splits into two parts - showing that "A ⟹ B" and that "B ⟹ A". The proof of the statement "it is raining ⟺ it is cloudy" requires us to show that whenever it is raining, it is cloudy, and that whenever it is cloudy, it is raining.

Necessary and sufficient. A ⟹ B means that A is sufficient for B; A ⟸ B means that A is necessary for B; A ⟺ B means that A is both necessary and sufficient for B.

4 Direct proof

4.1 Description of method
Direct proof is probably the easiest approach to establishing theorems, as it does not require knowledge of any special techniques. The argument is constructed using a series of simple statements, where each one should follow directly from the previous one. It is important not to miss out any steps, as this may lead to a gap in reasoning. To prove the hypothesis, one may use axioms as well as previously established statements from other theorems. Propositions of the form A ⟹ B are shown to be valid by starting at A, writing down what the hypothesis means, and consequently approaching B using correct implications.

4.2 Hard parts?
• it is tempting to skip simple steps, but in mathematics nothing is "obvious" - all steps of the reasoning must be included;
• not enough explanation; "I know what I mean" is no good - the reader must know what you mean in order to follow your argument;
• it is hard to find a starting point for proofs of theorems which seem "obvious" - we often forget about the axioms.

4.3 Examples
Below you will find theorems from various areas of mathematics. Some of them, and the techniques used, may be new to the reader. To help with understanding, the proofs are preceded by "rough notes" which should give a little introduction to the reasoning and show the thought process.

Theorem 4.1. Let n and m be integers. Then
i. if n and m are both even, then n + m is even,
ii. if n and m are both odd, then n + m is even,
iii. if one of n and m is even and the other is odd, then n + m is odd.

Rough notes. This is a warm-up theorem to make us comfortable with writing mathematical arguments. Start with the hypothesis, which tells you that both n and m are even integers (for part i.). Use your knowledge about even and odd numbers, writing them in the forms 2k or 2k + 1 for some integer k.

Proof. i. If n and m are even, then there exist integers k and j such that n = 2k and m = 2j. Then n + m = 2k + 2j = 2(k + j), and since k, j ∈ ℤ, we have (k + j) ∈ ℤ. ∴ n + m is even.
ii. and iii. are left to the reader as an exercise. □

Theorem 4.2. Let n ∈ ℕ, n > 1. If n is not prime, then 2^n − 1 is not prime.

Rough notes. Notice that this statement gives us a starting point: we know what it means to be prime, so it is reasonable to begin by writing n as a product of two natural numbers, n = a × b. To find the next step, we have to "play" with the numbers until we get an expression of the required form. We are looking at 2^{ab} − 1 and we want to factorise it. We know the identity t^m − 1 = (t − 1)(1 + t + t² + ⋯ + t^{m−1}). Apply this identity with t = 2^b and m = a to obtain 2^{ab} − 1 = (2^b − 1)(1 + 2^b + 2^{2b} + ⋯ + 2^{(a−1)b}). Always keep in mind where you are trying to get to - useful advice here!

Proof. Since n is not prime, ∃a, b ∈ ℕ such that n = a × b with 1 < a, b < n. Let x = 2^b − 1 and y = 1 + 2^b + 2^{2b} + ⋯ + 2^{(a−1)b}. Then

xy = (2^b − 1)(1 + 2^b + 2^{2b} + ⋯ + 2^{(a−1)b})   (substituting for x and y)
   = 2^b + 2^{2b} + ⋯ + 2^{ab} − 1 − 2^b − 2^{2b} − ⋯ − 2^{(a−1)b}   (multiplying out the brackets)
   = 2^{ab} − 1   (cancelling like terms)
   = 2^n − 1.   (as n = ab)

Now notice that since 1 < b < n, we have 1 < 2^b − 1 < 2^n − 1, so 1 < x < 2^n − 1. Therefore x is a nontrivial factor, hence 2^n − 1 is not a prime number. □

Note: it is NOT true that if n ∈ ℕ is prime, then 2^n − 1 is prime; see the counterexample to this statement in section 4.5.

Proposition 4.3. Let x, y, z ∈ ℤ. If x + y = x + z, then y = z.

Rough notes. The proof of this proposition is an example of an axiomatic proof, i.e. a proof that refers explicitly to the axioms. To prove statements of the simplest form, like the one above, we need to find a starting point; referring to the axioms is often a good idea.

Proof.
x + y = x + z
⟹ (−x) + (x + y) = (−x) + (x + z)   (by the existence of the additive inverse)
⟹ ((−x) + x) + y = ((−x) + x) + z   (by the associativity of addition)
⟹ (x + (−x)) + y = (x + (−x)) + z   (by the commutativity of addition)
⟹ 0 + y = 0 + z   (by the existence of the additive inverse)
⟹ y = z.   (by the existence of zero) □
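Returning to Theorem 4.2, its factorisation is easy to check numerically; here is a small sketch (my own illustration, not part of the booklet):

```python
def mersenne_factor(a, b):
    """For composite n = a*b (a, b > 1), return the factor pair of 2**n - 1
    given by the identity 2**(a*b) - 1 = (2**b - 1) * sum(2**(i*b) for i < a)."""
    x = 2 ** b - 1
    y = sum(2 ** (i * b) for i in range(a))
    return x, y

# n = 4 = 2*2: 2**4 - 1 = 15 = 3 * 5.
x, y = mersenne_factor(2, 2)
assert x * y == 2 ** 4 - 1 and 1 < x < 2 ** 4 - 1

# n = 15 = 3*5: 2**15 - 1 = 32767 = 31 * 1057.
x, y = mersenne_factor(3, 5)
print(x, y, x * y == 2 ** 15 - 1)  # 31 1057 True
```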
Proposition 4.4. ∀x ∈ ℤ, 0 × x = x × 0 = 0.

Rough notes. A striking theorem seen in a second year lecture! We have all known it since our early years, though now is the time to prove it. Again, we will refer to the axioms.

Proof.
x × 0 = x × (0 + 0)   (because 0 = 0 + 0)
⟹ x × 0 = x × 0 + x × 0   (by distributivity)
⟹ 0 + (x × 0) = x × 0 + x × 0   (by the existence of zero)
⟹ 0 = x × 0.   (by cancellation)
Similarly, 0 = 0 × x (try to prove it yourself!). □

Theorem 4.5. (n² + 5)/(n + 1) → ∞ as n → ∞.

Rough notes. Before we proceed, we need to recap the definitions.

Definition 4.1. A sequence (of real numbers) is a function from ℕ to ℝ.

Definition 4.2. A sequence (a_n) of real numbers tends to infinity if, given any A > 0, ∃N ∈ ℕ such that a_n > A whenever n > N.

The above definitions are the key to proving the statement. We follow their structure: we take A as given and try to find N such that a_n > A whenever n > N. Proving statements of this form is not very hard, but it requires practice to get the expression into the required form. We will "play" with the fraction, bounding it from below, which will prove that it tends to infinity.

Proof. Let a_n := (n² + 5)/(n + 1) and let A > 0 be given. Observe that

a_n = (n² + 5)/(n + 1)
    ≥ n²/(n + 1)   (find an expression smaller than a_n by dropping the 5 in the numerator)
    ≥ n²/(n + n)   (decrease the expression by increasing the denominator; holds as n ∈ ℕ)
    = n²/(2n)
    = n/2,   (cancelling n in the numerator and denominator)

and n/2 > A provided that n > 2A. So let N be any natural number larger than 2A. Then if n > N, we have a_n ≥ n/2 > N/2 > A. Therefore a_n tends to infinity. □

Lemma 4.6 (Gibbs' Lemma). Assume (p₁, ..., p_m) is a probability distribution and let (r₁, ..., r_m) be such that r_i > 0 ∀i and Σᵢ r_i ≤ 1. Then

Σᵢ p_i log(1/p_i) ≤ Σᵢ p_i log(1/r_i).

(The source states this inequality with the opposite sign, but its own proof below establishes the direction given here, which is the standard one: entropy is at most cross-entropy.)

Rough notes. You will probably not see this lemma until year 3, although a year 1 student should be able to prove it, as it requires only manipulation of logs and a simple claim. Notice that we want to show the equivalent statement:

Σᵢ p_i (log(1/p_i) − log(1/r_i)) ≤ 0.

Then the laws of logs and a simple claim take us to the statement. CLAIM: for x > 0, ln x ≤ x − 1. [Figure: the line y = x − 1 lies above the curve y = ln(x) for all x > 0.] The formal proof of the claim is given in the appendix. We now have all the tools needed to prove the lemma. See how it works step by step and then check whether you can do it yourself!

Proof. We want to show that Σᵢ p_i (log(1/p_i) − log(1/r_i)) ≤ 0. Write

Σᵢ p_i (log(1/p_i) − log(1/r_i))
  = Σᵢ p_i (−log p_i + log r_i)   (log(a⁻¹) = −log a)
  = Σᵢ p_i log(r_i/p_i)   (log a + log b = log(ab))
  = (1/ln m) Σᵢ p_i ln(r_i/p_i)   (using log_m a = ln a / ln m; m is the base of the logarithm)
  ≤ (1/ln m) Σᵢ p_i (r_i/p_i − 1)   (by the claim)
  = (1/ln m) Σᵢ (r_i − p_i)   (multiplying out the brackets)
  = (1/ln m) (Σᵢ r_i − Σᵢ p_i)   (the sums are ≤ 1 and = 1 respectively, by the assumptions)
  ≤ 0. □
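A quick numerical check of the lemma (my own sketch, not the booklet's; it uses natural log, so both sides are simply rescaled by 1/ln m):

```python
import math
import random

random.seed(0)

# Random probability distribution p and a sub-distribution r (positive, sum <= 1).
m = 6
w = [random.random() for _ in range(m)]
p = [x / sum(w) for x in w]                  # sums to 1
r = [x * 0.9 for x in p]                     # positive, sums to 0.9 <= 1
random.shuffle(r)                            # any pairing with p is allowed

entropy_like = sum(pi * math.log(1 / pi) for pi in p)
cross_like = sum(pi * math.log(1 / ri) for pi, ri in zip(p, r))
print(entropy_like <= cross_like + 1e-12)    # True: the inequality holds
```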
4.4 Fallacious "proofs"

Example 4.7. Study the sequence of statements below and try to find what went wrong. You can find the answer in the footnote¹. We "prove" that 1 = 2.

a = b
⟹ a² = ab
⟹ a² + a² = a² + ab
⟹ 2a² = a² + ab
⟹ 2a² − 2ab = a² + ab − 2ab
⟹ 2a² − 2ab = a² − ab
⟹ 2(a² − ab) = a² − ab
⟹ 2 = 1.

Example 4.8. This example is similar to the previous one (again, we "prove" that 2 = 1), although it does not contain the same mistake. Try to find what goes wrong here; again, the solution is given at the bottom of the page².

−2 = −2
⟹ 4 − 6 = 1 − 3
⟹ 4 − 6 + 9/4 = 1 − 3 + 9/4
⟹ (2 − 3/2)² = (1 − 3/2)²
⟹ 2 − 3/2 = 1 − 3/2
⟹ 2 = 1.

Example 4.9. Below we present the classic mistake of assuming true the statement which is yet to be proved. The task is to prove that the statement √2 + √6 < √15 is true.

√2 + √6 < √15
⟹ (√2 + √6)² < 15
⟹ 8 + 2√12 < 15
⟹ 2√12 < 7
⟹ 48 < 49.

¹ Notice that we assumed a = b, so (a² − ab) = 0 and hence we cannot cancel this expression! The last step is not correct, hence the "proof" is not valid.
² The problem occurs when we take the square root of both sides. Remember that the square root function returns a non-negative output, even when its input comes from a negative number raised to a power: |x| = √(x²) and therefore x = ±√(x²). You can check that taking the left-hand side negative, we actually arrive at a true statement.

It may seem that the above argument is correct, since we have reached a true statement (48 < 49), but this is not the case. It is important to remember that "statement X ⟹ true statement" does NOT mean that statement X is necessarily true! We assumed that √2 + √6 < √15 is true, whereas this is what we need to prove. Therefore our implications are going in the wrong direction (go back to the section about implications if you are still confused). A valid proof would be of the form "true statement ⟹ statement X", showing that X is true. The "proof" above is not correct, but it is not totally useless: check whether you can reverse the implications to obtain the proof we are looking for. Note that it is all right to write arguments in the wrong direction when finding the proof, but not when writing it in its final form. The reversed implications would give a valid argument; however, presented in its final form it might make a reader wonder where the idea of starting from "48 < 49" came from (it looks pretty random). Generally, the easier approach would be proof by contradiction (see section 7).

4.5 Counterexamples
Keeping in mind the little "writer - reader battle", we should be sceptical about any presented statement and try to find a counterexample which will disprove the conjecture. It may happen that the theorem is true, so it is not obvious in which direction to go - trying to prove or disprove? One counterexample is enough to say that a statement is not true, even if there are many examples in its favour.

Example 4.10. Conjecture: let n ∈ ℕ and suppose that n is prime. Then 2^n − 1 is prime. Counterexample: when n = 11, 2¹¹ − 1 = 2047 = 23 × 89.

Example 4.11. Conjecture: every positive integer is equal to the sum of two integer squares. Counterexample: n = 3. All integer squares apart from (−1)², 0², 1² are greater than 3, so we need only consider the situation where one of the squares is 0 or 1. Neither 3 − 1 nor 3 − 0 is an integer square. Hence the result.

Example 4.12. Conjecture: every man is Chinese. Counterexample: it suffices to find at least one man who is not Chinese.
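Counterexample hunting mechanises well; here is a short search for Example 4.11 (my own sketch, not from the booklet):

```python
import math

def is_sum_of_two_squares(n):
    """Return True if n = a**2 + b**2 for some integers a, b (0 allowed)."""
    for a in range(math.isqrt(n) + 1):
        b2 = n - a * a
        if math.isqrt(b2) ** 2 == b2:
            return True
    return False

# The smallest counterexamples to "every positive integer is a sum of
# two integer squares":
print([n for n in range(1, 25) if not is_sum_of_two_squares(n)])
# -> [3, 6, 7, 11, 12, 14, 15, 19, 21, 22, 23, 24]
```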
5 Proof by cases

5.1 Method
Proof by cases is sometimes also called proof by exhaustion, because the aim is to exhaust all possibilities. The problem is split into parts and then each one is considered separately. During lectures you will usually see proofs containing two or three cases, but there is no upper limit on their number. For example, the first proof of the Four Colour Theorem used 1936 cases. Over time, mathematicians managed to reduce this number to over 600 - still a lot! It is often very useful to split a problem into many small problems. Be aware, though: the more cases, the more room for error. You must be careful to cover all the possibilities, otherwise the proof is useless!

5.2 Hard parts?
• split the problem wisely - it is sometimes not obvious how to divide the problem into cases;
• a big number of cases may result in skipping one of them in the proof - make sure each possibility is included in your reasoning!

5.3 Examples of proof by cases

Theorem 5.1. The square of any integer is of the form 3k or 3k + 1.

Rough notes. This is a simple example of a proof where at some point it is easier to split the problem into two cases and consider them separately - otherwise it would be hard to reach a conclusion. Start by expressing an integer a as 3q + r (q, r ∈ ℤ) and then square it. Then split the problem and show that the statement holds in both cases.

Proof. We know that every integer³ can be written in one of the forms 3q, 3q + 1 or 3q + 2. So let a = 3q + r, where q ∈ ℤ and r ∈ {0, 1, 2}. Then

a² = (3q + r)² = 9q² + 6qr + r² = 3(3q² + 2qr) + r², where (3q² + 2qr) ∈ ℤ as q, r ∈ ℤ.

So let k := 3q² + 2qr, k ∈ ℤ. We have a² = 3k + r². Now,
case I: if r = 0 or r = 1, we are done;
case II: if r = 2, then r² = 4 and a² = 3k + 4 = 3k + 3 + 1 = 3(k + 1) + 1, which is of the required form. □

³ The proof is given in the section "Examples of Mathematical Induction".

Theorem 5.2. Let n ∈ ℤ. Then n² + n is even.

Rough notes. To show that the expression is even, it may be helpful to consider the cases where n is even and where n is odd - what does each mean? Click here to see a video example.
• CASE I: n is even (express this mathematically);
• CASE II: n is odd; now simple algebra should bring us to the required conclusion.

Proof. Exercise for the reader.

Theorem 5.3 (Triangle Inequality). Suppose x, y ∈ ℝ. Then |x + y| ≤ |x| + |y|.

Notes. To split the proof into small problems, we need to recall the modulus function, which is defined by cases:

|x| = x for x ≥ 0, and |x| = −x for x < 0.

Then, using the definition, carefully substitute x or −x for |x|, depending on the case. The triangle inequality is a very useful tool in proving many statements, so it is worth studying the proof and memorising the inequality - you will see it a lot in the future.

Proof.
Case I: x ≥ 0, y ≥ 0. By the definition, |x| = x and |y| = y, and x + y ≥ 0. So |x + y| = x + y = |x| + |y|.
Case II: x < 0, y < 0. So |x| = −x, |y| = −y, and x + y < 0. So |x + y| = −(x + y) = (−x) + (−y) = |x| + |y|.
Case III: one of x and y is non-negative and the other is negative. Without loss of generality, assume x ≥ 0 (so |x| = x) and y < 0 (so |y| = −y). Now we need to split the problem into two subcases:
i. x + y ≥ 0: then |x + y| = x + y ≤ x + (−y) = |x| + |y|;
ii. x + y < 0: then |x + y| = −x + (−y) ≤ x + (−y) = |x| + |y|. □
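Proof by cases translates naturally into an exhaustive check over residues; here is a tiny sketch for Theorem 5.1 (mine, not the booklet's):

```python
# Theorem 5.1 by exhaustion over residues: (3q + r)**2 mod 3 depends only on r,
# so checking r in {0, 1, 2} covers every integer.
for r in range(3):
    print(r, (r * r) % 3)   # squares are 0 or 1 mod 3, never 2
# 0 0
# 1 1
# 2 1

# Spot check over a concrete range of integers.
assert all((n * n) % 3 in (0, 1) for n in range(-100, 101))
```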
When constructing a proof by induction, you need to present the statement P(n) and then follow three simple steps (simple in the sense that they can be described easily; they might be very complicated for some examples, especially the induction step):

• INDUCTION BASE: check that P(1) is true, i.e. the statement holds for n = 1;
• INDUCTION HYPOTHESIS: assume P(k) is true, i.e. the statement holds for n = k;
• INDUCTION STEP: show that if P(k) holds, then P(k + 1) also does.

We finish the proof with the conclusion: “since P(1) is true and P(k) ⟹ P(k + 1), the statement P(n) holds by the Principle of Mathematical Induction”.

Domino effect. Induction is often compared to toppling dominoes. When we push the first domino, all consecutive ones will also fall (provided each domino is close enough to its neighbour); similarly, with P(1) being true, it can be shown by induction that P(2), P(3), P(4), and so on, will also be true. Hence we prove P(n) for infinitely many n.

6.2 Versions of induction

Principle of Strong Mathematical Induction. Let P(n) be an infinite collection of statements and let r ∈ N. Suppose that
(i) P(r) is true, and
(ii) for every k ≥ r, if P(j) holds for all r ≤ j ≤ k, then P(k + 1) holds.
Then P(n) is true for all n ∈ N with n ≥ r.

Changing the base step. There are different variants of Mathematical Induction, all useful in slightly different situations. We may, for example, prove a statement which fails for the first couple of values of n, but can be proved for all natural numbers n greater than some r ∈ N. We then change the base step of the Principle of Mathematical Induction to “check that P(r) is true, for some r ∈ N” and continue with the induction hypothesis and induction step for the values greater than or equal to r.

More assumptions. In the hypothesis step, we are allowed to assume P(n) for more values of n than just one. Sometimes, to be able to show that the statement P(k + 1) is true, you may have to use both P(k) and P(k − 1), so assume that both of them are true. In this case the induction base will consist of checking P(1) and P(2). It may also happen that we can deduce P(k + 1) once we have assumed that all of P(1), P(2), …, P(k) hold.

Mixture. The most complicated case combines the last two, so that we start the induction base at some r ∈ N and then prove that P(r), P(r+1), P(r+2), …, P(k−1), P(k) together imply P(k+1). Then by induction P(n) is true for all natural numbers n ≥ r.

6.3 Hard parts?

• the induction hypothesis looks like we are assuming something that needs to be proved;
• it is easy to get confused and get the inductive step wrong. Ethan Bloch gives an example of a “proof” by induction which fails to be true - see exercise 6.5.

6.4 Examples of mathematical induction

Example 6.1. Show that 2^(3n+1) + 5 is always a multiple of 7.

Notes. This is a typical statement which can be proved by induction. We start by checking that it holds for n = 1. Then, if we are able to show that P(k) ⟹ P(k + 1), we know the statement is true by induction.

Proof. The statement is P(n): 2^(3n+1) + 5 is a multiple of 7.

• BASE (n = 1): 2^(3×1+1) + 5 = 2⁴ + 5 = 16 + 5 = 21 = 7 × 3, so P(1) holds.
• INDUCTION HYPOTHESIS: assume that P(k) is true, so 2^(3k+1) + 5 is a multiple of 7, k ∈ N.
• INDUCTION STEP: now we want to show that P(k) ⟹ P(k + 1), where P(k + 1) states that 2^(3(k+1)+1) + 5 = 2^(3k+4) + 5 is a multiple of 7. We know from the induction hypothesis that 2^(3k+1) + 5 is a multiple of 7, so we can write

2^(3k+1) + 5 = 7x for some x ∈ Z
⟹ (2^(3k+1) + 5) × 2³ = 7x × 2³   (multiplying by 2³)
⟹ 2^(3k+4) + 40 = 56x
⟹ 2^(3k+4) + 5 = 56x − 35   (subtracting 35 from both sides)
⟹ 2^(3k+4) + 5 = 7(8x − 5), where 8x − 5 ∈ Z.

So 2^(3k+4) + 5 is a multiple of 7 (P(k + 1) holds), provided that P(k) is true. We have shown that P(1) holds and that if P(k) is true, then P(k + 1) is also true. Hence, by the Principle of Mathematical Induction, P(n) holds for all natural n.
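As a quick sanity check of Example 6.1 (numerical evidence only; the induction argument above is the proof), one can test the divisibility claim for many values of n:

    # Numerical evidence for Example 6.1: 2^(3n+1) + 5 is a multiple of 7.
    for n in range(1, 101):
        assert (2 ** (3 * n + 1) + 5) % 7 == 0
    print("2^(3n+1) + 5 is divisible by 7 for n = 1, ..., 100")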
Theorem 6.2 (De Moivre's Theorem). If n ∈ N and θ ∈ R, then [cos(θ) + i sin(θ)]ⁿ = cos(nθ) + i sin(nθ).

Notes. The well-known De Moivre's Theorem can easily be proved using Mathematical Induction. We show it is true for n = 2, remembering the rule for the product of complex numbers:

z₁ × z₂ = r₁(cos θ₁ + i sin θ₁) × r₂(cos θ₂ + i sin θ₂)
= r₁r₂((cos θ₁ cos θ₂ − sin θ₁ sin θ₂) + i(sin θ₁ cos θ₂ + cos θ₁ sin θ₂))
= r₁r₂(cos(θ₁ + θ₂) + i sin(θ₁ + θ₂)).

Then we assume the statement is true for n = k and use this assumption to show it holds for n = k + 1. In the induction step we will need the trigonometric identities:

cos(A + B) = cos A cos B − sin A sin B,
sin(A + B) = sin A cos B + cos A sin B.

Proof. The theorem is true for n = 1 trivially.
• BASE (n = 2): z² = (cos θ + i sin θ)² = cos 2θ + i sin 2θ, by the product rule above.
• INDUCTION HYPOTHESIS: assume it is true for n = k, so [cos(θ) + i sin(θ)]ᵏ = cos(kθ) + i sin(kθ).
• INDUCTION STEP: now,

(cos θ + i sin θ)^(k+1) = (cos θ + i sin θ)ᵏ (cos θ + i sin θ)
= (cos(kθ) + i sin(kθ))(cos θ + i sin θ)   (by the induction hypothesis)
= cos(kθ) cos θ + i cos(kθ) sin θ + i cos θ sin(kθ) − sin(kθ) sin θ   (multiplying out the brackets)
= cos(kθ + θ) + i sin(kθ + θ)   (by the trigonometric identities)
= cos((k + 1)θ) + i sin((k + 1)θ)   (factoring out θ).

Hence we have shown that P(1) and P(2) hold and that P(k) ⟹ P(k + 1) for all k ≥ 2. Therefore P(n) is true for all n ≥ 2 by Mathematical Induction (and for n = 1 trivially).

Proposition 6.3. Let a_(n+1) = (a_n² + 6)/5 and a₁ = 5/2. Then (a_n) is decreasing.

Notes. We defined a sequence earlier; here is the definition of a decreasing sequence.

Definition 6.1. A sequence (a_n) is decreasing if a_(n+1) ≤ a_n for all n ∈ N.

We will use the definition to prove the statement. Notice that we need to show a_(n+1) ≤ a_n for all n - this should immediately bring induction to mind. As always, we start by checking the base (here for n = 1) and then we assume that P(n) is true for n = k. The hard part is usually the induction step, although it is not very complicated here: this time we want to show that a_(k+2) ≤ a_(k+1) using our previous assumption.

Proof. We will show that the statement P(n): a_(n+1) ≤ a_n holds for all n.

• BASE: a₂ = (1/5)((5/2)² + 6) = (1/5)(25/4 + 6) = 49/20. Note that a₂ = 49/20 < 5/2 = a₁. Hence P(1) holds.
• HYPOTHESIS: suppose that for some k ≥ 1, a_(k+1) ≤ a_k.
• INDUCTION STEP: since all terms of the sequence are positive, a_(k+1) ≤ a_k gives a_(k+1)² ≤ a_k², and so

a_(k+2) = (a_(k+1)² + 6)/5 ≤ (a_k² + 6)/5 = a_(k+1).

Hence a_(k+2) ≤ a_(k+1).

Since P(1) is true and P(k) ⟹ P(k + 1), it follows by Mathematical Induction that the sequence is decreasing.

Exercise 6.4. For which positive integers n is 2ⁿ < n! ?

Notes. Notice that this is a different kind of example from the ones presented above. Here you must find the right n first and then show that the statement actually holds. You may want to check the first couple of values of n, as in the sketch below, and then formulate the statement P(n) to which you can apply induction. Structure your proof as above; the notes on the side should also help.
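One way to “check the first couple of values” is to let a machine tabulate them (an exploratory Python sketch of our own; finding the pattern is not the same as proving it):

    from math import factorial

    # Exploring Exercise 6.4: for which positive integers n is 2^n < n! ?
    # The output suggests the inequality first holds at n = 4 and then keeps
    # holding, which is exactly what the induction proof should establish.
    for n in range(1, 9):
        print(n, 2 ** n, factorial(n), 2 ** n < factorial(n))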
Click here to see a video example.

Proof. First, let us find the value of n from which we will prove the statement. (Does the statement hold for n = 1, 2, 3, …? Have you found the n from which it is true?)
Let P(n): 2ⁿ < n! be the statement. We will show that it holds for all n ≥ …
• BASE STEP (show P(n) holds for the smallest possible n)
• INDUCTION HYPOTHESIS P(k): … (state the assumption for P(k))
• INDUCTION STEP (keep in mind what you are trying to prove - it helps to note it on the side) (hint: notice that 2 < k + 1 for all k > 1)
• CONCLUSION (finish the proof by writing the conclusion)

Exercise 6.5. Use the Principle of Mathematical Induction to show that

(⋃_(k=1)^(n) A_k)′ = ⋂_(k=1)^(n) A_k′ for all n ≥ 2.

The above theorem is one of De Morgan's laws for an arbitrary collection of subsets (see the section on sets for De Morgan's laws in the case of two subsets). Below there are two examples of first-year students' approaches to proving this theorem by mathematical induction. Have a look at their work (figures 1 and 2). Can you see which student's work gained more marks, and why? Are all the steps of the induction correct?

Figure 1: Proof by induction attempted by student A. Figure 2: Proof by induction attempted by student B. [Images of the students' handwritten work omitted.]

The two “proofs”, both written by first-year students, are a good example of why induction is hard. The fact that an argument looks as if it contains all the required steps, like base, hypothesis and induction step, does not make it a correct proof. We will now analyse both arguments and point out where the problems are. You have probably spotted already that the proof written by student B is much better. It received 4 out of 5 marks, because the argument flows well and contains all the required parts of induction, though the induction step needs more explanation. You must also watch the details closely: the statement is proved for n ≥ 2, so this should be mentioned in the conclusion. Student A received 0 marks for their work; the details are discussed below.

STUDENT A:
• the argument starts with the wrong statement - we want to state P(n) at the beginning, so prove the equality in terms of n;
• De Morgan's laws are necessary to show that the base step holds, and it should be checked for n = 2, because we are proving the statement for all n ≥ 2;
• the hypothesis is stated clearly;
• the induction step is not stated properly - steps are missing when deducing P(k + 1);
• the conclusion is incorrect, as it was not in fact shown that P(k) ⟹ P(k + 1).

STUDENT B:
• it is not necessary to check P(1), since we are proving P(n) for n ≥ 2;
• the induction step is missing one line of argument; we should not “jump” to the required form straight away but show all the reasoning (the highlighted step is missing);
• in the conclusion “P(k) ⟹ P(k + 1) true ∀k ∈ N”, we should also add “k ≥ 2”.

Exercise 6.6. In Bloch's book we read an argument which clearly fails at some point. It is hard to detect the mistake, though, and it seems that the induction is correct. See if you can spot the problem - the answer is given in a footnote⁴.

Proof. P(n): in any collection of n horses, all of them have the same colour. Since there is a finite number of horses in the world, the statement means that all horses in the world have the same colour!

• BASE (n = 1): P(1) clearly holds, as in any group of only 1 horse it is trivially true that “all horses have the same colour”.
• HYPOTHESIS: now we assume that P(k) holds, so in any group of k horses it is true that all of them have the same colour.
• INDUCTION STEP: now imagine a collection of k + 1 horses; let us call it {H₁, H₂, …, H_(k+1)}. If we take the first k of them, then by the induction hypothesis we know that they are all of the same colour. We may also consider another set of k horses, {H₂, H₃, …, H_(k+1)}, which, again, are all of the same colour. Since {H₁, H₂, …, H_k} and {H₂, H₃, …, H_(k+1)} each consist of horses of a single colour, we may deduce that all k + 1 horses have the same colour. Hence all horses have the same colour.

Since P(1) holds and P(k) ⟹ P(k + 1), we have that P(n) is true for all natural n by the Principle of Mathematical Induction.

⁴The problem lies in the inductive step, which fails for one particular value: the passage from P(1) to P(2). The two groups {H₁} and {H₂} have no horse in common, so there is nothing linking their colours - the sketch below makes the failure explicit.
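A small Python sketch (our own illustration) shows where the horse argument breaks: the induction step silently assumes the two groups of k horses overlap, which is false exactly when k = 1.

    # The induction step needs the two groups of k horses to share at least
    # one horse, so that their colours are linked. For k = 1 (deducing P(2)
    # from P(1)) the overlap is empty, and the step fails.
    def overlap(k):
        first = set(range(1, k + 1))    # {H1, ..., Hk}
        second = set(range(2, k + 2))   # {H2, ..., Hk+1}
        return first & second

    print(overlap(1))  # set() - empty: the step from P(1) to P(2) fails
    print(overlap(3))  # {2, 3} - for k >= 2 the groups do overlap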
7 Contradiction

7.1 Method

Proof by contradiction is a very powerful technique, but the method itself is simple to understand. When trying to prove statement A ⟹ statement B, assume that A is true and that “not B” is true, and try to reach a contradiction. The method is often used in proofs of existence theorems, that is, statements of the form “there is no x such that…”. Here, instead of proving that something does not exist, we can assume that it does and try to reach nonsense. We finish the proof with the word “contradiction!”; some people prefer the lightning symbol or a double cross (see section 2 on symbols) to indicate that they have reached the contradiction.

7.2 Hard parts?

• when proving more complex theorems, it is easy to get confused and make a mistake. Then we arrive at a contradiction which does not come from the original assumption but from an error in the middle of the proof.

7.3 Examples of proof by contradiction

Theorem 7.1. Let a be a rational number and b irrational. Then
i. a + b is irrational;
ii. if a ≠ 0, then ab is also irrational.

Notes. First of all we need to recall what it means to be rational (can be expressed as a fraction) or irrational (cannot be expressed as a fraction). If we want to show that a + b is irrational, we do not really have a general way to describe it directly. Instead, we may assume the opposite, express it as a rational number (which is easy to do in general) and show that this leads to a contradiction. Notice how big a role the definitions play when constructing the proof!

Proof. i. Suppose that a + b is rational, so a + b := m/n. Now, as a is rational, we can write it as a := p/q. So

b = (a + b) − a = m/n − p/q = (mq − pn)/(nq),

hence b is rational, which contradicts the assumption.
ii. Left as an exercise.

Exercise 7.2. Prove that √2 + √6 < √15.

Notes. We have seen this statement before as an example of a “fallacious proof”, and now we will show how to prove such statements using contradiction. It has been shown how easy it is to fall into the trap of assuming that the statement is true and then arguing from there; this mistake is popular probably because there seems to be no other sensible “starting point”. To avoid assuming what has to be proved, it is better to suppose the opposite of the given statement. If we reach a contradiction, then our assumption was wrong and the statement is proven true. Before writing the argument, a quick numerical look can reassure us that the statement is plausible, as in the sketch below.
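The following Python snippet (evidence only, emphatically NOT a proof) compares the two sides numerically:

    from math import sqrt

    # A numerical sanity check before attempting Exercise 7.2.
    print(sqrt(2) + sqrt(6))  # approx. 3.8637...
    print(sqrt(15))           # approx. 3.8730...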
Proof. Assume for a contradiction that √2 + √6 ≥ √15. Then

√2 + √6 ≥ √15
⟹ (√2 + √6)² ≥ 15
⟹ 8 + 2√12 ≥ 15
⟹ 2√12 ≥ 7
⟹ 48 ≥ 49.

The last statement is clearly not true, hence we have reached a contradiction. Therefore we have proved that √2 + √6 < √15.

Theorem 7.3. Let f : A → B and g : B → C be functions. If g ∘ f is bijective, then f is injective and g is surjective.

Notes. Refer to the section on functions to recall that “bijective” means “injective and surjective” - have a look at the definitions in section 11, because again these will help to construct the proof. Now, you may want to start with the statement “g ∘ f is bijective” and work from there; however, it may be hard to conclude that f is injective and g is surjective. The quickest way to reach the required statement is to assume the opposite and try to reach a contradiction.

Proof. Suppose the statement does not hold, so f is not injective or g is not surjective. Let us consider both cases.

• f is not injective, which means that there exist a, a′ ∈ A with a ≠ a′ such that f(a) = f(a′). Now (g ∘ f)(a) = g(f(a)) = g(f(a′)) = (g ∘ f)(a′), so we have (g ∘ f)(a) = (g ∘ f)(a′) but a ≠ a′. So g ∘ f is not injective, hence it is not bijective.
• g is not surjective, which means that there exists c ∈ C such that for all b ∈ B, g(b) ≠ c. Moreover, g ∘ f is surjective, so there exists a ∈ A such that (g ∘ f)(a) = c. Now, if b = f(a), then g(b) = c, which is a contradiction!

Both cases lead us to a contradiction, hence we may conclude that if g ∘ f is bijective, then f is injective and g is surjective.

Theorem 7.4. There are infinitely many primes.

Notes. Having seen many theorems and proofs, let us try to prove the famous theorem stating the infinitude of primes. The comments should guide you to writing the statements in mathematical language - do not worry if you don't get it first time; a final proof often needs changing and polishing a few times! Click here to see a video example.

Proof. Suppose for a contradiction that… (write the statement)
Since… (insert the statement), we can list them: … (list the primes - you need to pick a suitable notation)
Now, consider the number n, which is not a prime. (You can define this number in many different ways, but you need a number which is not a prime (consider multiplying all the listed primes together - then n is greater than any of them) and that leads us to the contradiction (this is the tricky part) - we may want to come back to this point later, because the next lines of the argument should help us pick an appropriate n here.)
Since n is not a prime,… (What does this tell us? Write down what it means mathematically that n is not a prime. Think about a factor of n - what if we take the smallest possible one?)
Take the smallest possible factor, so it is prime. Now it follows that there exists z ∈ Z such that

n = z × …,   (1)

hence

z = n/… = … .   (2)

At this stage we can get the contradiction to our assumption, but it depends on our choice of n. Let us come back to the definition of n - what would guarantee it? What did we assume about the factor of n? What assumptions did we state at equation (1)? Try to change n slightly if your argument does not reach the contradiction yet.
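If you get stuck, the computational idea behind Euclid's classical argument may help (a Python sketch of our own, illustrating the construction rather than replacing the proof): given any finite list of primes, the number one more than their product is divisible by none of them, so its smallest factor greater than 1 is a prime missing from the list.

    # Given a finite list of primes, produce a prime not on the list.
    def new_prime(primes):
        n = 1
        for p in primes:
            n *= p
        n += 1                 # n leaves remainder 1 on division by each p
        d = 2
        while n % d != 0:      # the smallest factor > 1 is necessarily prime
            d += 1
        return d

    print(new_prime([2, 3, 5, 7, 11, 13]))  # 30031 = 59 x 509, so prints 59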
8 Contrapositive

8.1 Method

Proof by contrapositive is in essence a “submethod” of proof by contradiction. The argument begins in the same way in both cases, by assuming the opposite of the statement. So when showing that statement A ⟹ statement B, we assume “not B”, but this time we argue to arrive at “not A”. The trick here is the fact that the statements “A ⟹ B” and “not B ⟹ not A” are equivalent. To understand this relationship better, you may want to read more about the Wason selection task, a logic puzzle formulated by Peter Wason in 1966. The example below is based on his work, and it illustrates very well the reasoning used in proof by contrapositive.

Imagine four cards placed on the table, each with a letter on one face and a number on the other:

A  D  9  5

You are given the rule: “if A is on a card, then 5 is on its other side”. Now, the task is to indicate which card(s) need to be turned over to check whether the rule holds. While most people give the automatic response “A and 5”, the correct answer is “A and 9”. Notice that according to our rule, if the card shows “A” on one face, it must have “5” on the other. The rule, however, does not say anything about the card showing “5”! Hence only checking “A” and “9” can test the rule:

• if A does not have 5 on the other side, the rule is broken;
• if D has (or does not have) 5 on the other side, it does not tell us anything;
• if 5 has (or does not have) A on the other side, again the rule is not broken;
• if 9 does have A on the other side, the rule is broken.

8.2 Hard parts?

• the method itself requires knowing that “A ⟹ B” is equivalent to “not B ⟹ not A”;
• as with proof by contradiction, the theorems being proved may be complex, and it is easy to make mistakes and arrive at an incorrect conclusion.

8.3 Examples

Theorem 8.1. Let n ∈ Z. If n² is odd, then n is odd.

Notes. A direct proof would not really work here, because writing n² in the form 2k + 1 (for some k ∈ Z) does not really help us continue. Neither is contradiction useful, as we cannot find the required conclusion. In this case we need a special technique: the contrapositive. This means that we start by assuming the opposite of the statement (here: “n is NOT odd”) and then use the fact that A ⟹ B is equivalent to “not B ⟹ not A”. In this example statement A is “n² is odd” and statement B is “n is odd”.

Proof. Let n be even (which is “not B”).
⟹ n = 2k, k ∈ Z
⟹ n² = (2k)² = 4k² = 2 × 2k²
⟹ n² is even.
(Up to this point we proceed as we would in a proof by contradiction, although the conclusion there would be “n even ⟹ n² even”, which is not what we are trying to prove.)
So we have proved that n even ⟹ n² even. Using the contrapositive, we conclude that n² not even (i.e. odd) ⟹ n not even (i.e. odd), which proves the statement.

Theorem 8.2. If mn is odd, then m and n are odd.

Notes. Representing mn as an odd integer does not really give us any tips on how to carry on with the proof. A much easier approach is to assume the opposite of the second part of the statement and see where we arrive - is it a contradiction, or the opposite of the first part of the statement? We have the tools to reach a conclusion either way, so let us see where the assumption takes us. Click here to see a video example.

Proof. Assume that …, so m and n are … (assume “not B”)
So m = 2 × …, … ∈ Z, and n = 2 × …, … ∈ Z.
Now mn = … × … = … (substitute for m and n)
Hence we conclude that … (you should arrive at “not A”)
So, by the contrapositive, … (remember: “not B ⟹ not A” ≡ “A ⟹ B”)
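The equivalence the whole method rests on can itself be verified exhaustively, since A and B each take only two truth values. A minimal Python sketch (our own illustration):

    from itertools import product

    # "A implies B" and "not B implies not A" have identical truth tables.
    def implies(p, q):
        return (not p) or q

    for a, b in product([False, True], repeat=2):
        assert implies(a, b) == implies(not b, not a)
    print("A => B is equivalent to (not B) => (not A) in every case")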
9 Tips

“Proofs are hard to create - but there is hope.” K. Houston

This section has been created with the help of lecturers from the University of Birmingham who took part in a short survey, so that the problems and common mistakes in areas of pure mathematics could be identified. All the quotations throughout this section come from that questionnaire. Many good tips have also been found in the books listed in the references, and the reader is strongly advised to look into that literature. It is good to find out what the common mistakes are, to watch out for them closely, and to try to avoid them.

Writing proofs is hard. It often requires a good amount of time and paper before the neat proof is finally produced. You will find yourself crossing things out and changing your work many times before you reach what is required. Making mistakes is natural, and it is better to think of them as a “step in learning to write proofs. Writing proofs in pure mathematics university courses is different from what has been done during school, so it will take some time to get used to doing this properly.”

9.1 What common mistakes do students make when trying to present proofs?

Misunderstanding of definitions. The absolute number one problem mentioned by most lecturers taking part in the survey is the fact that students do not learn definitions. It is important to keep on top of the terminology - there will be a lot of it, so it will be hard to learn it all if you leave it for later!

“(Students) do not know where to start because they do not know the definitions of the objects they are working with.”

Not enough words. Lecturers are waiting for your words! Prioritising symbols over words, especially at the beginning of mathematics studies, is a common mistake, because it seems as if a page with lots of symbols “looks clever”.

“I have found it common in particular for first year students not to explain the steps in their arguments, as if they think they are not allowed to use words, only symbols.”

Lack of understanding.

“When a student gets to a point in a proof that they cannot proceed from, often the conclusion of the result follows immediately, and it is clear that the student does not understand the necessary missing arguments.”

“They are trying to memorise the proofs rather than understand them.”

Incorrect steps. Although your argument should start at the beginning and then lead to the final statement, while constructing the proof you may want to look at the conclusion and imagine how it may be arrived at from the hypothesis. You may then be able to reverse the steps to produce a good proof - though not always, so be careful! Another idea is to work from both ends (both from the beginning and from the end) and “meet” somewhere in the middle. This is all allowed while constructing the proof, but remember to then produce a neat final version with all steps well explained. Good knowledge of the different methods of proof is also essential.

“Contradiction arguments often have incorrect conclusions; induction arguments often start at the incorrect point, and the induction hypothesis is often abused.”

9.2 What are the reasons for mistakes?

Again we come back to definitions, as incorrect arguments may come from the fact that students “do not appreciate the importance of mathematical definitions”. Sometimes we lack motivation and sometimes ability, hence practice is key to success. Mistakes may also come from the fact that “school maths does not prepare students adequately in how to present a mathematical argument”.
A good method to develop mathematical reasoning is to create your own examples on which you can practise writing proofs. How many teachers challenge their students in this way, though? How much is there of understanding maths, and how much of getting the required grade? “Additionally, students are often asked questions which have ‘nice looking answers’ and the final steps of a calculation can be somewhat guessed to obtain an answer that looks correct. However, it is very difficult to guess parts of a proof correctly.” It seems that there is no “magic fix for this, other than practice and feedback”. Finally, though, “this should not necessarily be considered a mistake rather a step in learning. Writing proofs in pure mathematics university courses is different from what has been done during school, so it will take some time to get used to doing this properly”.

9.3 Advice to students for writing good proofs

Here are the tips, collected together, which come from the lecturers at the University of Birmingham. Have a look at what they think and what they are looking for when marking your work!

“Often you'll need to do some rough work to figure out how your proof should go. Some of the time, you will be able to get a good idea of how a proof should go by noticing that it can be similar to the proof of something else.”

“Understand every line that you write, and do not make bogus claims.”

“Understanding the proofs in the lectures/lecture notes as a way of understanding the type of reasoning that is involved in a proof.”

“How you lay out a proof is at least as important as the content (it's not ok if ‘it's all in there somewhere’).”

“Try to make your proofs easier to follow by including brief phrases where appropriate. For example, rather than writing (statement A) implies (statement B), it may be more enlightening to write (statement B) follows from (statement A) because…”

“Try to write out many of them, understanding every step. Ask others about unclear points. Try to follow proofs in class or in books, and ask about unclear points.”

9.4 Friendly reminder

“The importance of proofs goes well beyond a university degree. It is eventually about using reason in everyday life. This could contribute to solving major and global problems.”

You have seen many methods of proof presented in the previous sections, and they are all used in different areas of mathematics. It has been underlined many times that writing proofs is not easy, but with a lot of practice and an open mind, pure mathematics is not as scary. Here are some final tips to keep in your head when starting the next proof. Good luck!

• Experiment! If one method does not work, try a different one. Lots of practice allows for an “educated guess” in the future;
• do not start with what you are trying to prove;
• use correct English with full punctuation;
• begin by outlining what is assumed and what needs to be proved; do not skip this step!
• remove initial working when writing up the final version of the proof, but include all steps of reasoning.
10 Sets

10.1 Basics

To be able to write well in mathematical language, you will need at least the basics of set theory. Most lecturers will assume that you know it, or will include it in the “zero chapter” of their lecture notes, so you must know all these definitions and symbols before you start working on your first proofs.

Definition 10.1. A set is a collection of objects, which are called the elements or members of the set.

Sets are usually denoted by capital letters and their members by lower-case letters. We usually write all elements in curly brackets. The notation A = {a, b, c} means that the set A consists of 3 elements: a, b and c. We can say that the element a belongs to the set A, written a ∈ A, or that d is not a member of A, written d ∉ A.

Example 10.1. Examples of sets: B = {1, 2, 3}, C = {0}, D = {x, y, ∅}, E = {{red, white}, blue}, F = {−1, 0, cos π, 1}.

Note: the order of the elements in a set does NOT matter, i.e. {2, 3, 1} is exactly the same as the set B in the example given above. Observe also the difference between the sets E = {{red, white}, blue} and {red, white, blue}. Here blue is an element of both sets, but red ∉ E; instead, {red, white} ∈ E. So a set can be an element of another set. The symbol ∅ represents the empty set, which contains no elements. Note that {∅} is NOT an empty set!

10.2 Subsets and power sets

Definition 10.2. If A is a subset of B (written A ⊆ B), then all elements of A are also elements of B:

A ⊆ B ⇔ ∀x ∈ A, x ∈ B.

So A is “contained” in B. If you want to say that A is NOT a subset of B, write A ⊈ B.

Example 10.2. {1, 10} ⊆ {1, 10, 100}, {1000, 10} ⊈ {1, 10, 100}, ∅ ⊆ {∅}. Notice that the empty set is a subset of any set.

To show that A ⊆ B, you need to show that every element of A is also an element of B. We will present a very simple theorem and its proof. It should give you an idea of how to structure your argument. Note the amount of words used!

Theorem 10.3. Let A, B and C be sets. Then
(a) A ⊆ A;
(b) if A ⊆ B and B ⊆ C, then A ⊆ C.

Proof. (a) Start by choosing an arbitrary element a ∈ A (we mean the A on the left-hand side of “A ⊆ A”). Then it follows that a ∈ A (the A on the right-hand side). We chose a arbitrarily, hence the argument holds for all a ∈ A. So, by the definition of a subset, A ⊆ A.
(b) Let a ∈ A. Then, since A ⊆ B, it follows that a ∈ B. Since B ⊆ C, we have that a ∈ C. So a ∈ A implies a ∈ C. Therefore A ⊆ C.

If A is a subset of B but they are not equal, then we say that A is a proper subset of B and write A ⊂ B (or A ⊊ B). To show that A is a proper subset of B, you need to show that A ⊆ B and find at least one element of B which is not an element of A.

Example 10.4. {a, b, c} ⊂ {a, b, c, d}; {1, 2, 3} ⊄ {1, 2, 3}, but {1, 2, 3} ⊆ {1, 2, 3}.

Definition 10.3. The power set of a set A consists of all subsets of A and is usually denoted by P(A) (some writers use 2^A). The power set of A = {x, y, ∅} is

P(A) = {∅, {x}, {y}, {∅}, {x, y}, {x, ∅}, {y, ∅}, {x, y, ∅}}.
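For finite sets the power set can be enumerated mechanically, which is a good way to build intuition for Definition 10.3. A minimal Python sketch (our own illustration; frozenset is used because ordinary sets cannot contain sets):

    from itertools import combinations

    # All subsets of a finite set, gathered by size 0, 1, ..., |s|.
    def power_set(s):
        return {frozenset(c) for r in range(len(s) + 1)
                             for c in combinations(s, r)}

    P = power_set({"x", "y"})
    print(len(P))  # 4 subsets: {}, {x}, {y}, {x, y}; in general 2^|A|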
10.3 Cardinality and equality

Definition 10.4. In mathematics, the cardinality of a set A (written card(A) or |A|) is a measure of the “number of elements of the set”⁵.

Example 10.5. If A = {a, b, c}, then |A| = 3.

Example 10.6. Important to notice: |{∅}| = 1, while |∅| = 0. Similarly, |{0}| ≠ 0, but |{0}| = 1.

Notice that repetitions are ignored when we are counting the members of a set. The convention is to list each element only once; however, as you can see in Example 10.1, the same number can be written in different forms: F = {−1, 0, 1, cos π} = {−1, 0, 1}, as cos π = −1. Hence |F| = 3.

⁵This works for finite sets - see section 10.6 for the cardinality of infinite sets.

Definition 10.5. Two sets A and B are equal when they have exactly the same elements, i.e. every element of A is an element of B and every element of B is an element of A. So

A = B ⇔ A ⊆ B and B ⊆ A.

Example 10.7. {−1, 0, 1} = {−1, 0, 1, cos π}; {2, 3, 3, 3, 3, 2, 3, 2, 2} = {2, 3}; {2, 3, 4} = {4, 2, 3}.

To show that two sets A and B are equal, pick an arbitrary x ∈ A and show that x ∈ B, and vice versa.

Exercise 10.8. Let A = {1, 2, 3, 4} and B = {x : x ∈ N, x² < 17}, where N is the set of natural numbers. Show that A = B.

Answer. To prove the equality of the sets, we must show that for every x, x ∈ B ⟹ x ∈ A (B ⊆ A) and x ∈ A ⟹ x ∈ B (A ⊆ B). If x ∈ B, then x² < 17, which implies x < √17, and therefore x ≤ 4. Since x is a positive integer, for every x ∈ B we have 0 < x ≤ 4. Hence x ∈ B ⟹ x ∈ A. Now assume x ∈ A, so x ∈ {1, 2, 3, 4}. To prove that x ∈ B, it suffices to show that the largest element of A satisfies x² < 17; it is then also true for the smaller values, since they all belong to N. Since for all x ∈ A, x² ≤ 4² = 16 < 17, we have x ∈ A ⟹ x ∈ B.

Exercise 10.9. Show that {(cos t, sin t) : t ∈ R} = {(x, y) : x² + y² = 1}.

Answer. Let A = {(cos t, sin t) : t ∈ R} and B = {(x, y) : x² + y² = 1}. To show that A = B we need to show that A ⊆ B and B ⊆ A. Let x = cos t and y = sin t. Then x² + y² = cos² t + sin² t = 1, because cos² t + sin² t = 1 is a known identity. Hence A ⊆ B. Now, to show that B ⊆ A, we appeal to geometry. Let (x, y) ∈ B, so x² + y² = 1 and (x, y) lies on the unit circle.

[Figure: the unit circle, with the point (x, y) at angle t. Diagram omitted.]

Therefore we have cos t = x and sin t = y. As x² + y² = 1, substituting for x and y gives cos² t + sin² t = 1, and hence we have shown that B ⊆ A, and so A = B.

10.4 Common sets of numbers

The commonly used sets of numbers are:
• the set of natural numbers, N = {1, 2, 3, 4, …}. Careful! Some mathematicians include 0 in N, others do not. Your lecturers will usually tell you at the beginning which version they mean by N;
• the set of integers, Z = {…, −3, −2, −1, 0, 1, 2, 3, …};
• the set of rational numbers, Q = {m/n : m, n ∈ Z, n > 0};
• the set of real numbers R, which is the union⁶ of the rational numbers Q and the irrational numbers (those which cannot be expressed as a fraction, for example log 2, √2, π, e).

Notice that each set is a subset of the next, in the following order: N ⊂ Z ⊂ Q ⊂ R.

⁶See section 10.7.

10.5 How to describe a set?

As mentioned before, we write the elements of a set inside curly brackets. The elements should be defined clearly, for example {x ∈ R : 2 < x < 3}. Notice that it is often tempting to write just {2 < x < 3}, but this is meaningless! Naturally, x being a natural number rather than a real number makes a big difference! Hence, make sure that you define your sets properly. This may give rise to a very long and scary-looking expression with lots of notation, but there will be no room for ambiguity.

We can represent intervals on the real number line using set notation. Let a, b ∈ R be any two numbers with a < b. Then we can define the following:

closed interval: [a, b] = {x ∈ R : a ≤ x ≤ b}
open interval: (a, b) = {x ∈ R : a < x < b}
half-open intervals: [a, b) = {x ∈ R : a ≤ x < b}, (a, b] = {x ∈ R : a < x ≤ b}
infinite intervals: [a, ∞) = {x ∈ R : a ≤ x}, (a, ∞) = {x ∈ R : a < x}, (−∞, b] = {x ∈ R : x ≤ b}, (−∞, b) = {x ∈ R : x < b}, R = (−∞, ∞)

It is important to understand that ∞ is not a real number! It is a symbol representing the fact that the interval goes on forever; infinity is a concept rather than a big numerical value. Notice that no interval is closed at ∞ or −∞, as there is no number there to be included in the real line.

10.6 More on cardinality

We have seen earlier the definition stating that the cardinality of a set is a measure of the “number of elements of the set”.
This is indeed the case when the sets, say A and B, are finite, i.e. they consist of a finite number of elements. Then the notion |A| = |B| means equality of integers. Do not confuse it with A = B, which means that not just the number of elements is the same, but the elements themselves are equal. A problem arises when we consider infinite sets. The number of their elements is not finite, so we cannot count them and arrive at a finite number. In this case we say that two sets A and B have the same cardinality whenever we can match, or pair off, the elements of the set A with the elements of the set B. Put more exactly, two sets A and B have the same cardinality whenever there is a bijection⁷ between A and B.

⁷One-to-one correspondence or bijection - see section 11.

For example, given the set A = {1, 2, 3}, its cardinality equals 3 (card(A) = 3). It is more complicated to talk about the cardinalities of the sets of, say, real or natural numbers. This automatically brings up an interesting discussion about infinity and its different sizes (!). One of the most striking theorems of set theory was proved in 1878 by Georg Cantor, who stated that card(R) = card(R²). The proof of this powerful statement is quite simple (see the appendix), but who would have thought that a straight line has as many points as the plane?

10.7 Operations on sets

Below we present the basic operations on sets, which can be nicely illustrated with Venn diagrams (omitted here).

• Union: A ∪ B = {x : x ∈ A or x ∈ B}.
• Intersection: A ∩ B = {x : x ∈ A and x ∈ B}. Note: when A ∩ B = ∅, A and B are said to be disjoint.
• Complement: A\B = {x ∈ A : x ∉ B}. This notation is sometimes called the relative complement of B in A. We also distinguish the universal complement, or simply the complement, U\A (denoted by A′ or Aᶜ): Aᶜ = {x : x ∉ A}. Here U is the universal set, which contains all objects under consideration, including itself. Notice that U must be specified (which is very often omitted) for Aᶜ to be well-defined⁸.

⁸Well-defined means that the expression is unambiguous, so it has a unique value or interpretation assigned to it.

• Cartesian product: A × B = {(a, b) : a ∈ A and b ∈ B}. Note that here (a, b) is not an open interval but an ordered pair, which is a pair of elements written in a particular order. This means that (x, y) and (y, x) represent different ordered pairs, unless x = y. Sometimes x and y are called coordinates, with x being the first and y the second.

10.8 Theorems

In this section we present statements and proofs of simple theorems, which are a good warm-up for the more complicated material of set theory.

Theorem 10.10. For any sets A, B and C, the following statements are true:
i. A ∪ A = A;
ii. A ∪ B = B ∪ A (commutative law);
iii. A ∪ (B ∪ C) = (A ∪ B) ∪ C (associative law);
iv. A ⊂ A ∪ B and B ⊂ A ∪ B;
v. A ∪ ∅ = A.

Proof.
i. Take any x in A. Then the statement “x ∈ A or x ∈ A” is also trivially true (for all x, x ∈ A ⇔ (x ∈ A or x ∈ A)). Therefore the statement holds.
ii. (x ∈ A or x ∈ B) ⇔ (x ∈ B or x ∈ A). Trivially, the statements are equivalent, hence (ii) is true.
iii. We start from the left-hand side of the statement and arrive at the right-hand side, using the definition of the union of sets. For all x: x ∈ A ∪ (B ∪ C) ⇔ (x ∈ A or x ∈ (B ∪ C)) ⇔ (x ∈ A or (x ∈ B or x ∈ C)) ⇔ ((x ∈ A or x ∈ B) or x ∈ C) ⇔ (x ∈ (A ∪ B) or x ∈ C) ⇔ x ∈ (A ∪ B) ∪ C.
iv. This follows from the definition of the union of sets: x ∈ A ∪ B means x ∈ A or x ∈ B. Hence A ⊂ A ∪ B and B ⊂ A ∪ B.
v. x ∈ A ∪ ∅ ⇔ (x ∈ A or x ∈ ∅). But ∅ does not contain any elements, hence x ∈ A ∪ ∅ ⇔ x ∈ A.

The next theorem is similar to the previous one, but it deals with intersections of sets. The proofs are not provided, and the reader is strongly encouraged to mimic the above arguments to prove the following statements.

Theorem 10.11. For any sets A, B and C, the following statements hold:
i. A ∩ A = A;
ii. A ∩ B = B ∩ A (commutative law);
iii. A ∩ (B ∩ C) = (A ∩ B) ∩ C (associative law);
iv. A ∩ B ⊂ A and A ∩ B ⊂ B;
v. A ∩ ∅ = ∅.

The next result provides the rules for taking unions and intersections of three different sets.

Theorem 10.12 (Distributive laws for sets). For any three sets A, B and C:
i. A ∩ (B ∪ C) = (A ∩ B) ∪ (A ∩ C);
ii. A ∪ (B ∩ C) = (A ∪ B) ∩ (A ∪ C).

When working with sets, it is a good idea to picture the statement first, using Venn diagrams, to convince yourself that the theorem is in fact plausible. Remember that the diagrams are not proofs, and a formal argument needs to follow. Draw separate pictures, one for the left-hand side and the other for the right-hand side of the statement. You should get exactly the same shaded regions, which can reassure you that the theorem is correct. Then you can start your formal proof.

Figure 3: A ∩ (B ∪ C). Figure 4: (A ∩ B) ∪ (A ∩ C). [Venn diagrams omitted.]

Proof.
i. x ∈ A ∩ (B ∪ C) ⇔ x ∈ A and x ∈ (B ∪ C) ⇔ x ∈ A and (x ∈ B or x ∈ C) ⇔ (x ∈ A and x ∈ B) or (x ∈ A and x ∈ C) ⇔ x ∈ (A ∩ B) or x ∈ (A ∩ C) ⇔ x ∈ (A ∩ B) ∪ (A ∩ C).
ii. This part is left to the reader as an exercise.

Three of the operations on sets described earlier (intersection, union and complement) were related to each other in the 19th century by the British mathematician Augustus De Morgan. We present his theorem below.

Theorem 10.13 (De Morgan's Laws). Let A and B be sets. Then the following statements are true:
i. (A ∩ B)′ = A′ ∪ B′;
ii. (A ∪ B)′ = A′ ∩ B′.

The proof is not provided, but it is an excellent exercise, so the reader is strongly encouraged to try writing it.
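While writing that proof, it can be reassuring to spot-check the laws on random finite sets first. The Python sketch below (our own illustration; random checking is evidence, not a proof) tests both identities inside a small universal set U:

    import random

    # Spot-checking De Morgan's laws (Theorem 10.13) with A, B subsets of U.
    U = set(range(10))
    for _ in range(1000):
        A = {x for x in U if random.random() < 0.5}
        B = {x for x in U if random.random() < 0.5}
        assert U - (A & B) == (U - A) | (U - B)   # (A n B)' = A' u B'
        assert U - (A | B) == (U - A) & (U - B)   # (A u B)' = A' n B'
    print("De Morgan's laws held in 1000 random trials")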
11 Functions

A function can be defined by specifying three elements: a set of inputs (called the domain), a set of outputs plus possibly extra elements⁹ (called the codomain), and a rule assigning each element of the domain to a unique member of the codomain. Formally, functions can be defined in many ways, for example in terms of sets, as below.

⁹See the difference between the codomain and the range, discussed in section 11.1.

Definition 11.1 (Function). Let A and B be sets. A function (also called a map) f from A to B, written f : A → B, is a subset F ⊆ A × B such that for each a ∈ A there is one and only one pair of the form (a, b) in F.

For example, we may have the set of real numbers R as the domain and the codomain, and the rule f(x) = x². In this case, we define the function in the following way: let f : R → R be given by f(x) = x². Writing just f(x) = x² on its own does not describe the function fully; it is tempting to state “f(x) = x² is a function”, but you need to specify the domain and codomain as well. Notice that they are a really important part of our definition of a function. Imagine that we changed the codomain from R to Z; the resulting function would be completely different! Two functions are equal only if all three elements are the same.

Many authors of mathematical books, for example E. Bloch, will point out the difference between f and f(x). This is one of those things that is often dismissed as “unimportant” by students and needs to be pointed out at the beginning, since it may lead to misunderstandings in the future. Informally, you may still find people referring to “the function x²”, but this is really an abuse of notation. Notice how we defined the function above: “f : R → R” indicates that the name of the function is f, NOT f(x).

11.1 Image and preimage

There is an important difference between the codomain and the image, sometimes also called the range. It is common to mix these two terms and interchange them when describing functions. You can think of the codomain as the “target set” of a function, into which all the outputs are constrained to fall. It may, however, include elements which are not the output of the function for any element of the domain. The image of the function, in contrast, is the set of values that the function actually produces. The formal definition is given below.

Definition 11.2 (Image). Let f : A → B be a function from the set A to the set B and let X be a subset of A. The image of a subset X ⊆ A under f is the subset f(X) ⊆ B defined by

f(X) = {b ∈ B : b = f(a) for some a ∈ X}.

The image f(A) of the entire domain is called the image of f.

We also use the term “image” when talking about a single element of a set. Letting f : A → B be a function from A to B, we can find the image of any element a ∈ A under f: the value f(a) = b gives us the “output”, i.e. the element of the codomain B which is the image of the point a.

So far we have only looked at a function as a process of inputting values and receiving output values. We can also reverse this operation.

Definition 11.3 (Inverse image, preimage). Let f be a function from A to B. The preimage or inverse image of a set Y ⊆ B under f is the subset of A defined by

f⁻¹(Y) = {a ∈ A : f(a) ∈ Y}.

In various mathematical books you may find slightly different notation for the image and preimage of a function: for the image of f : A → B, some authors use f∗(A) instead of our f(A), and the preimage is denoted by f∗(B) instead of f⁻¹(B). Although the notation used here is more common and should be sufficient for students, Bloch calls such a writing style an “abuse of notation, which is a technically incorrect way of writing something that everyone understands what it means anyway, and tends not to cause problems”. He argues that the notation f⁻¹(B) does not imply that the function f⁻¹ exists, which sends us to the topic of inverse functions, discussed later in this document.

Examples. Let us give some examples of what we have discussed so far.

• The first, classical example when talking about images and preimages is the function f : R → R given by f(x) = x². Notice that the image of 3 is 9, but the preimage of 9 is {−3, 3}.
• Consider the function g : {1, 2, 3, 4} → {red, white, blue} defined by g(x) = red if x = 1 or 2, and g(x) = white otherwise. Then the image of the function is the set {red, white}. The preimage of red is g⁻¹({red}) = {1, 2}, and g⁻¹({red, blue}) = {1, 2} as well. Notice that g⁻¹({blue}) = ∅. (This example is worked through in code below.)
• Let h : R → R, h(x) = eˣ. Here the codomain is defined to be R, but the image of the function is the set (0, ∞).
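For a finite function, image and preimage can be computed directly. A minimal Python sketch (our own illustration), modelling the function g of the second example as a dictionary:

    # Images and preimages (Definitions 11.2 and 11.3) for a finite function.
    g = {1: "red", 2: "red", 3: "white", 4: "white"}

    def image(f, X):
        return {f[x] for x in X}

    def preimage(f, Y):
        return {x for x in f if f[x] in Y}

    print(image(g, {1, 2, 3, 4}))   # {'red', 'white'} - the image of g
    print(preimage(g, {"red"}))     # {1, 2}
    print(preimage(g, {"blue"}))    # set() - the empty set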
11.2 Composition of functions

We can combine functions in many different ways using the composition of functions.

Definition 11.4. Let f : A → B and g : B → C be functions. The composition of f and g is the function g ∘ f : A → C defined by (g ∘ f)(x) = g(f(x)) for all x ∈ A.

This means that for any element of the domain (a ∈ A), we can apply the function f and then the function g to get an output ((g ∘ f)(a) ∈ C) of the function g ∘ f. To be able to combine two functions, the codomain of the first one must equal the domain of the second one. Be careful not to mix up the order of applying the functions, as commutativity does not hold here. The following example illustrates this fact.

Example 11.1. Let f : R → R and g : R → R be functions given by f(x) = x³ and g(x) = x + 5. Then (g ∘ f)(2) = g(f(2)) = g(2³) = g(8) = 13, but (f ∘ g)(2) = f(g(2)) = f(2 + 5) = f(7) = 7³ = 343.

Notice that in the above example we are able to form both g ∘ f and f ∘ g, since the domains and codomains are equal. In the situation where f : A → B and g : B → C, it is impossible to construct f ∘ g unless A = C!

11.3 Special functions

Among the many different functions, we can distinguish a special function which always returns the same value that was used as the input. We call such a function the identity function; it is defined by f : A → A with f(x) = x and is usually denoted id_A. A constant function is another specific function. It is of the form f(x) = c, where c is the constant returned by the function regardless of the input value.

11.4 Injectivity, surjectivity, bijectivity

Definition 11.5 (Injectivity, one-to-one function). An injective function maps distinct points of the domain to distinct points of the codomain. Let f be a function whose domain is D. If f is injective, then for all a, b ∈ D, a ≠ b ⇔ f(a) ≠ f(b).

You can often judge whether a function is injective or not just by looking at its graph: every element of the image should be hit by exactly one member of the domain. Remember that although a picture is a helpful tool, it does not replace a formal argument!

Figure 5: f(x) = x³ (injective). Figure 6: g(x) = x² (not injective). [Graphs omitted.]

To show the injectivity (or non-injectivity) of a function f, we use the definition. We let x₁, x₂ ∈ R and start by assuming that f(x₁) = f(x₂). If the function is injective, we should arrive at the statement “x₁ = x₂”; otherwise it is not injective.

Let f : R → R, f(x) = x³. Assume that f(x₁) = f(x₂). Then

x₁³ = x₂³ ⟹ x₁³ − x₂³ = 0 ⟹ (x₁ − x₂)(x₁² + x₁x₂ + x₂²) = 0.

The second factor, x₁² + x₁x₂ + x₂², is positive unless x₁ = x₂ = 0, so in every case x₁ = x₂. Hence this function is injective.

Let g : R → R, g(x) = x². Assume that g(x₁) = g(x₂). Then

x₁² = x₂² ⟹ x₁² − x₂² = 0 ⟹ (x₁ − x₂)(x₁ + x₂) = 0 ⟹ x₁ = ±x₂.

Hence g is not injective. Note that it is possible to make g(x) = x² injective by restricting the domain to R⁺ (the positive real numbers).

Definition 11.6 (Surjectivity, onto function). Let f be a function f : A → B. Then f is surjective ⇔ for any element b ∈ B there exists an element a ∈ A such that f(a) = b.

Note that f may send more than one element of A to the same element of B. The image of a surjective function is equal to its codomain, so every element of B has at least one element of A assigned to it. From Figure 5 we can see that the function f(x) = x³ is surjective; however, g(x) = x² is not. To make g surjective, we need to change the codomain to [0, ∞).

Definition 11.7 (Bijectivity, one-to-one correspondence¹⁰). A bijective function f : A → B is a mapping between two sets A and B which is both injective and surjective. Each element of the set A is paired with exactly one element of the set B, and each element of the set B is paired with exactly one element of the set A.

¹⁰Note that a one-to-one function (injection) and a one-to-one correspondence (bijection) are two different concepts and should not be confused.
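For functions between finite sets, both properties can be decided by direct inspection, which makes the definitions concrete. A minimal Python sketch (our own illustration), representing a finite function as a dictionary:

    # Injectivity: no two inputs share an output.
    def is_injective(f):
        values = list(f.values())
        return len(values) == len(set(values))

    # Surjectivity: every element of the codomain is an output.
    def is_surjective(f, codomain):
        return set(f.values()) == set(codomain)

    square = {x: x * x for x in [-2, -1, 0, 1, 2]}
    print(is_injective(square))              # False: -1 and 1 collide
    print(is_surjective(square, {0, 1, 4}))  # True for codomain {0, 1, 4}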
Exercise 11.2. Study the examples below. Decide whether they are surjective, injective or bijective. If not, how can you restrict the domain or codomain to make them bijective?
1. f : R → R, f(x) = sin x;
2. g : Z → Z, g(x) = eˣ;
3. h : R⁺ → R, h(x) = x² − 5.

11.5 Inverse function

For any bijective function f, we can find an inverse function f⁻¹, which reverses the process. While the function f takes an input x and returns the output f(x) = y, the inverse function takes y back to x, i.e. f⁻¹(y) = x. Notice also that the composition of the function with its inverse gives the identity function¹¹: f⁻¹(f(x)) = x = f(f⁻¹(x)). Notice that the function f(x) = x² will have an inverse (or not) depending on the domain (which will need to be restricted so that f is bijective).

¹¹See the theorem at the end of the chapter.

11.6 Even and odd functions

Definition 11.8. A function f : X → Y is even if
• x ∈ X implies −x ∈ X, and
• f(x) = f(−x) for all x ∈ X.

Even functions are symmetric with respect to the y-axis. You can check that the function g(x) = x² (fig. 6) is an even function, but f(x) = x³ (fig. 5) is not.

Definition 11.9. A function f : X → Y is odd if
• x ∈ X implies −x ∈ X, and
• f(−x) = −f(x) for all x ∈ X.

Odd functions are symmetric with respect to the origin, which means that they remain unchanged when reflected across both the x-axis and the y-axis. From the graphs you can see that the function in fig. 5 is odd, but the one in fig. 6 is not.

11.7 Exercises

Theorem 11.3. Let f : A → B, g : B → C and h : C → D be functions. Then:
1. if f has an inverse, then the inverse is unique;
2. if f and g are injective, then g ∘ f is also injective;
3. if f and g are surjective, then g ∘ f is also surjective;
4. if f and g are bijective, then g ∘ f is also bijective;
5. (h ∘ g) ∘ f = h ∘ (g ∘ f) (associative law);
6. f ∘ id_A = f and id_B ∘ f = f (identity law).

The above statements are a good exercise for practising proofs about functions and are left for the reader to try.

References

Ethan D. Bloch, “Proofs and Fundamentals: A First Course in Abstract Mathematics”, Birkhäuser, Boston, 2000.
Kevin Houston, “How to Think Like a Mathematician: A Companion to Undergraduate Mathematics”, Cambridge University Press, Cambridge, 2009.
Martin Liebeck, “A Concise Introduction to Pure Mathematics”, CRC Press, Taylor & Francis Group, Boca Raton, 2011.
Richard Bornat, “Proof and Disproof in Formal Logic”, Oxford University Press, New York, 2005.
Kam-Tim Leung, Doris Lai-Chue Chen, “Elementary Set Theory”, Hong Kong University Press, Hong Kong, 1967.
Martin Aigner, Günter M. Ziegler, “Proofs from THE BOOK”, Springer-Verlag, Berlin, 2001.

12 Appendix

Theorem 12.1. card(R) = card(R²).

Notes. Here the question is: how is it actually possible to match an element of the real line with an element of the plane? We will present the idea of the proof, considering the interval I = [0, 1] and the unit square Q = [0, 1]² instead of the whole real line and the plane.

Proof. Consider a point (x, y) ∈ Q in the unit square and write the decimal expansions of x and y:

x = 0.x₁x₂x₃x₄x₅…  and  y = 0.y₁y₂y₃y₄y₅…

Now, to find a point of I corresponding to (x, y), we use the above expansions of x and y, creating a point z as shown below:

z = 0.x₁y₁x₂y₂x₃y₃…

Note that this approach would work perfectly were it not for the fact that 0.abc0999… = 0.abc1000…. So we could represent the point (x, y) ∈ Q using two different decimal expansions of x and y, which would result in two distinct points z and z′ (z, z′ ∈ I). Hence the mapping is not one-to-one. To avoid this situation, we do not allow decimal expansions with an infinite tail of nines (for example 0.04999… is forbidden). Now we can be sure that the point z ∈ I is unique for each pair (x, y).
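The interleaving trick is easy to experiment with on finite decimal strings. A minimal Python sketch (our own illustration; real decimal expansions are of course infinite):

    # Interleave the digits of two decimals: (0.x1x2..., 0.y1y2...) -> 0.x1y1x2y2...
    def interleave(x_digits, y_digits):
        return "0." + "".join(a + b for a, b in zip(x_digits, y_digits))

    # The pair (0.1415..., 0.7182...) is sent to the single number below.
    print(interleave("1415", "7182"))  # 0.17411852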
Theorem 12.2. Suppose that {v₁, v₂, v₃} is a linearly independent set of vectors. Then {v₁, v₃} is also a linearly independent set.

Notes. The proof is not complete, and the reader is encouraged to fill in the gaps. First, recall that finitely many vectors v₁, v₂, …, v_r are linearly dependent if there exist scalars (not all zero) such that a₁v₁ + a₂v₂ + ⋯ + a_r v_r = 0. If the vectors are not linearly dependent, then we say they are linearly independent. Now, the statement is fairly simple, but how do we prove it? If we remember the relation (A ⟹ B) ≡ (not B ⟹ not A), then the second form is easy to show. Click here to see a video example.

Proof. Assume that {v₁, v₃} is … (assume the opposite)
So we can write …v₁ + …v₃ = … for some scalars …, … ∈ R. (fill in the dots using your knowledge of dependent vectors)
Now notice that …v₁ + …v₂ + …v₃ = 0, hence we have that v₁, v₂ and v₃ are ….
So we have proved that… (write the conclusion), hence by the contrapositive we have that ….

Theorem 12.3. ln x ≤ x − 1 for x > 0.

Proof. We will use the series expansion of the exponential function (the proof of the expansion is not provided, as it is commonly covered in the first year of studies):

eˣ = 1 + x + x²/2! + x³/3! + x⁴/4! + …   (∗)

We first show that eˣ ≥ 1 + x for all x ∈ R. From the expansion (∗), this is clear for x ≥ 0. Also from (∗), if 0 ≤ x < 1,

eˣ = 1 + x + x²/2! + x³/3! + … ≤ 1 + x + x² + x³ + … = 1/(1 − x).

Therefore, if −1 < x < 0, applying this bound to −x gives

e⁻ˣ ≤ 1/(1 − (−x)) = 1/(1 + x),

and taking reciprocals (both sides being positive), eˣ ≥ 1 + x for −1 < x < 0. Finally, for x ≤ −1, eˣ is positive and 1 + x ≤ 0, so eˣ ≥ 1 + x for x ≤ −1. Therefore eˣ ≥ 1 + x for all x ∈ R.

Now let x = ln u (where u > 0). Then

e^(ln u) ≥ 1 + ln u, so u ≥ 1 + ln u.

Hence ln u ≤ u − 1 for all u > 0.
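As a final illustration (a numerical spot-check of our own, not part of the proof), the inequality can be sampled across several values of x, with equality visible at x = 1:

    import math

    # Spot-checking Theorem 12.3: ln(x) <= x - 1 for a sweep of x > 0.
    for x in [0.1, 0.5, 1.0, 2.0, 10.0]:
        print(x, math.log(x), x - 1, math.log(x) <= x - 1)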
2405
https://cdn.who.int/media/docs/default-source/documents/ddi/indicatorworkinggroup/glossary_of_terms_june-2024.pdf?sfvrsn=ac699779_1
WHO Glossary of Health Data, Statistics and Public Health Indicators
June 2024

Contents

A: ACCESSIBILITY; ACCOUNTABILITY; ACCURACY; ADMINISTRATIVE AREA; AGE GROUPINGS RECOMMENDED BY WHO; AGE-SPECIFIC MORTALITY RATE; ANONYMITY
B: BIOSPECIMEN; BIRTH REGISTRATION DATA; BIRTH REGISTRATION FACILITY
C: CAUSES OF DEATH; CHI-SQUARE (χ²) TEST; CIVIL REGISTRATION AND VITAL STATISTICS (CRVS); COMPLETENESS OF REPORTING; COMPOSITE INDICATOR; CONCURRENT VALIDITY; CONFIDENTIALITY; CONFOUNDING; CONSTRUCT VALIDITY; CONTENT VALIDITY; CONTINUUM OF CARE; CONVERGENT VALIDITY; CORE INDICATOR; CORRELATION ANALYSIS; COUNT; COVARIATES; COVERAGE; CREDIBILITY; CRITERION VALIDITY
D: DATA COLLECTION LEVEL; DATA COLLECTION METHOD; DATA CUSTODIAN; DATA INFORMATION PYRAMID; DATA INPUTS; DATA LIFE CYCLE; DATA PROVIDER; DATA QUALITY; DATA QUALITY ASSESSMENT; DATA QUALITY ASSURANCE (DQA); DATA SOURCE; DATA TRIANGULATION; DEATH REGISTRATION DATA; DEATH REGISTRATION FACILITY; DEMOGRAPHIC SURVEILLANCE SYSTEM; DENOMINATOR; DESCRIPTIVE ANALYSIS; DIGITAL HEALTH; DISAGGREGATION; DISEASE SURVEILLANCE SYSTEM; DOMAIN
E: ECOLOGICAL ANALYSIS; ECOLOGICAL FALLACY; EPIDEMIC INTELLIGENCE; ESTIMATION METHOD; EVALUATION; EVENT-BASED SURVEILLANCE; EXTERNAL CONSISTENCY OF DATA; EXTERNAL RESPONSIVENESS
F: FEASIBLE; FOCAL POINT
G: GENDER; GEOCODED; GEOGRAPHIC INFORMATION SYSTEMS (GIS); GEOSPATIAL ANALYSIS; GEOSPATIAL DATA; GLOBAL DATABASE; GLOBAL HEALTH ESTIMATES (GHE); GOLD STANDARD; GRANULARITY
H: HEALTH CARE ADMINISTRATIVE DATA; HEALTH ESTIMATES; HEALTH FACILITY CENSUS; HEALTH FACILITY SURVEY; HEALTH IMPACT ASSESSMENT; HEALTH INDICATOR; HEALTH INEQUALITY; HEALTH INEQUITY; HEALTH INFORMATION SYSTEM; HEALTH RECORD; HEALTH SURVEY; HEAPING OF DATA
I: IMPACT INDICATOR; IMPUTATION; INCIDENCE RATE; INDICATOR-BASED SURVEILLANCE; INDICATOR CLASSIFICATION; INDICATOR DEFINITION; INPUT INDICATOR; INTEGRITY OF DATA; INTERACTION; INTERNAL CONSISTENCY OF DATA; INTERNAL RESPONSIVENESS; INTERNATIONAL CLASSIFICATION OF DISEASES (ICD); INTERNATIONAL HEALTH REGULATIONS (IHR); INTERNATIONAL STANDARDS; INTEROPERABILITY
J: JOINT EXTERNAL EVALUATION (JEE)
K: KAPPA STATISTIC (K)
L: LINKAGE
M: MASTER FACILITY LIST (MFL); MATHEMATICAL OR STATISTICAL MODELS; MEAN; MEASURE; MEASUREMENT; MEASUREMENT LEVEL; MEASUREMENT METHOD; MEDIAN; MEDICAL CERTIFICATION OF CAUSE OF DEATH (MCCD); METADATA; METHOD OF AGGREGATE ESTIMATION; METHODOLOGICAL SOUNDNESS; MICRODATA; MINISTRY OF HEALTH (MoH); MONITORING; MONITORING & EVALUATION (M&E) FRAMEWORK; MORBIDITY DATA; MORTALITY CODER; MORTALITY DATA
N: NATIONAL HEALTH STRATEGIC PLAN; NATIONAL STATISTICS OFFICE (NSO); NATIONALLY REPRESENTATIVE; NEGATIVE PREDICTIVE VALUE (NPV); NOTIFIABLE CONDITIONS; NUMERATOR
O: ODDS; OUTCOME INDICATOR; OUTPUT INDICATOR
P: PERCENTAGE; PERIODICITY; POPULATION-BASED SURVEY; POPULATION CENSUS; POSITIVE PREDICTIVE VALUE (PPV); POST-ENUMERATION SURVEY (PES); PREDICTIVE VALIDITY; PREFERRED DATA SOURCES; PREVALENCE RATE; PRIMARY DATA; PRIMARY DATA SOURCES; PROCESS INDICATOR; PROCESS OF VALIDATION; PROCESSED HEALTH DATA; PROPORTION; PROXY HEALTH INDICATOR; PUBLIC HEALTH SURVEILLANCE SYSTEM; PUBLICLY AVAILABLE; PUNCTUALITY; P VALUE
R: RATE; RATIO; RATIONALE; RAW HEALTH DATA; RECORD LINKAGE; REGISTRAR; REGISTRATION FORM; REGRESSION; REGULAR ASSESSMENT; RELEVANCE; RELIABILITY OF DATA; REPORTING SITE; REPRESENTATIVE; RESPONSE RATE; RESPONSIVENESS; ROUTINE HEALTH INFORMATION SYSTEM (RHIS)
S: SAMPLE; SAMPLE SIZE; SAMPLING ERROR; SECONDARY DATA SOURCES; SENSITIVITY ANALYSIS; SEX; STAKEHOLDER; STANDARD OPERATING PROCEDURES (SOP); STATISTICAL DATA; STATISTICAL METHODS; STATISTICAL OUTPUT; STATISTICAL SIGNIFICANCE; STRATIFICATION; STRUCTURAL VALIDITY; SUB-NATIONAL; SURVEILLANCE; SURVEY; SURVEY DATA; SUSTAINABILITY; SUSTAINABLE DEVELOPMENT GOAL (SDG)
T: TARGET POPULATION; TIMELINESS; TRACER INDICATOR; TREND
U: UNCERTAINTY MEASURE; UNDERCOUNT RATE; UNDERSTANDABLE/SIMPLICITY; UNIT OF MEASURE; USEFULNESS/UTILITY
V: VALIDITY; VERBAL AUTOPSY; VITAL EVENT; VITAL STATISTICS
W: WEIGHTING
WHO Glossary of Health Data, Statistics and Public Health Indicators

These definitions are considered in the context of WHO's technical work on health data, statistics and public health indicators.

A

ACCESSIBILITY
Definition: The ease with which users can find, retrieve, understand, and use data.

ACCOUNTABILITY
Definition: Answerability or legal responsibility for identifying and removing obstacles and barriers to health services. This should include responding to findings from monitoring and evaluation.

ACCURACY
Definition: The degree of closeness of estimates to the true values.

ADMINISTRATIVE AREA
Definition: The clusters that are administered by a departmental area. For example: county, district, province, state, sub-national, national.

AGE GROUPINGS RECOMMENDED BY WHO
Definition: Age groups for data analysis that capture a time interval representing a developmental stage in the life course of a human. WHO proposes the following age groups for analysis:
• 0-6 days (early neonates)
• 7-27 days (late neonates)
• 28-364 days (post-natal infants)
• 1-4 years (young children)
• 5-9 years (older children)
• 10-14 years (young adolescents)
• 15-19 years (older adolescents)
• 20-24 years (young adults)
• 25-59 years (adults), in five-year age groups
• 60-99 years (older adults), in five-year age groups
• 100+ years (older adults)

AGE-SPECIFIC MORTALITY RATE
Definition: A mortality rate limited to a particular age group. The numerator is the number of deaths in that age group; the denominator is the number of persons in that age group in the population.

ANONYMITY
Definition: The condition of being anonymous.

B

BIOSPECIMEN
Definition: A biological sample.

BIRTH REGISTRATION DATA
Definition: Data collected during birth registration, including place of birth, sex, etc.

BIRTH REGISTRATION FACILITY
Definition: A registration desk that receives birth notifications, validates the information, and enters it into the central registration IT system.

C

CAUSES OF DEATH
Definition: All those diseases, morbid conditions, or injuries which either resulted in or contributed to death, and the circumstances of the accident or violence which produced any such injuries. It does not include symptoms or modes of dying such as cardiac arrest.

CHI-SQUARE (χ²) TEST
Definition: The chi-square test of independence (also known as the Pearson chi-square test, or simply the chi-square) is one of the most useful statistics for testing hypotheses when the variables are nominal, as often happens in clinical research. Unlike most statistics, the chi-square (χ²) can provide information not only on the significance of any observed differences, but also on exactly which categories account for any differences found. (A worked sketch follows the CIVIL REGISTRATION AND VITAL STATISTICS entry below.)

CIVIL REGISTRATION AND VITAL STATISTICS (CRVS)
Definition: The continuous, permanent, compulsory, and universal recording of the occurrence and characteristics of vital events pertaining to the population, as provided through decree or regulation in accordance with the legal requirements in each country. Vital statistics include events like birth, marriage, divorce, adoption, death, and cause of death. Civil registration is the universal recording of the occurrence and characteristics of those vital events pertaining to a specific population.
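[Illustrative note, not part of the WHO glossary] A minimal sketch of the chi-square test of independence defined above, using Python with the scipy library (an assumption; any statistics package would do). The 2x2 table of counts is invented for illustration.

```python
# Chi-square test of independence on an illustrative 2x2 table:
# exposure (rows) versus disease outcome (columns). Counts are made up.
from scipy.stats import chi2_contingency

observed = [[30, 70],   # exposed:   30 cases, 70 non-cases
            [15, 85]]   # unexposed: 15 cases, 85 non-cases

chi2, p_value, dof, expected = chi2_contingency(observed)
print(f"chi2 = {chi2:.3f}, p = {p_value:.4f}, dof = {dof}")
# Comparing 'observed' with 'expected' cell by cell shows which
# categories account for any differences found.
```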
COMPLETENESS OF REPORTING
Definition: Reflects the percentage of reporting units that have provided data. This could be the completeness of facility reporting, district reporting or, globally, the completeness of reporting from countries.

COMPOSITE INDICATOR
Definition: An index composed of several indicators within a health topic to represent that topic; a composite indicator may combine indicators from across several health topics to represent a broader concept, such as universal health coverage.

CONCURRENT VALIDITY
Definition: Concurrent validity is one approach to criterion validity that estimates individual performance on different tests at approximately the same time.

CONFIDENTIALITY
Definition: Refers to the obligation to maintain confidentiality agreements that limit access to, or restrict the disclosure of, certain types of information.

CONFOUNDING
Definition: The distortion of a measure of the effect of an exposure on an outcome due to the association of the exposure with other factors that influence the occurrence of the outcome. A confounder is an additional variable related to both the independent and dependent variables that distorts the relationship between them.

CONSTRUCT VALIDITY
Definition: The extent to which the measure "behaves" in a way consistent with theoretical hypotheses and how well scores on the measurement are indicative of the theoretical construct, providing confidence in the meaningfulness and relevance of the results obtained using that measurement. Construct validity is often used for process indicators.

CONTENT VALIDITY
Definition: The degree to which an assessment instrument is relevant to, and representative of, the targeted construct it is designed to measure.

CONTINUUM OF CARE
Definition: Describes the pathway for measurement across the life course, including reproductive health, pregnancy, childbirth, postnatal care for mothers and newborns, and prevention and treatment measures for children, adolescents, adults, and older people.

CONVERGENT VALIDITY
Definition: How closely the indicator is related to other variables and other measures of the same construct. This approach is used when a gold standard does not exist. Convergent validity is often used for impact indicators.

CORE INDICATOR
Definition: Core indicators may be defined in collaboration with all key stakeholders (e.g., ministry of health (MoH), national statistics office (NSO), other relevant ministries, professional organizations, subnational experts, and major disease-focused programmes), and depend on the priority monitoring requirements related to health and health-related SDGs, among other health priorities.

CORRELATION ANALYSIS
Definition: Examination of the strength and direction of linear relationships between two continuous variables. (See the sketch after the CREDIBILITY entry below.)

COUNT
Definition: A count gives the number of occurrences of the event(s) being studied, within a specified time and at a specified place. This is the absolute frequency and indicates the impact of a disease in precise numerical terms.

COVARIATES
Definition: Data, including non-health data, which are used in a statistical model to improve the estimation of the health indicator of interest. These data are population-specific and are available for every population included in the analysis. A common covariate is gross domestic product per capita.

COVERAGE
Definition: The extent to which the real, observed population matches the ideal or normative population.

CREDIBILITY
Definition: The confidence that users place in the statistics.
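[Illustrative note, not part of the WHO glossary] A minimal sketch of the correlation analysis defined above, computing a Pearson correlation with scipy; the two series of district values are invented for illustration.

```python
# Pearson correlation between two continuous variables measured on the
# same units (here, hypothetical districts). All values are made up.
from scipy.stats import pearsonr

immunization_coverage = [62, 71, 78, 80, 85, 90, 93]   # % per district
under5_mortality      = [95, 88, 74, 70, 61, 52, 48]   # per 1000 live births

r, p = pearsonr(immunization_coverage, under5_mortality)
# r ranges from -1 to +1; its sign gives the direction of the linear
# relationship and its magnitude the strength.
print(f"r = {r:.2f}, p = {p:.4f}")
```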
CRITERION VALIDITY
Definition: The extent to which the measurement correlates with an external criterion of the phenomenon under study; ideally, a gold standard.

D

DATA COLLECTION LEVEL
Definition: The specific setting that the indicator is designed to measure/monitor, e.g., national, sub-national, facility, or community.

DATA COLLECTION METHOD
Definition: A description of all methods used for data collection. This description should include, when applicable, the sample frame used, the questions used to collect the data, the type of interview, the dates/duration of fieldwork, the sample size, and the response rate.

DATA CUSTODIAN
Definition: Data custodians are agencies responsible for managing the use, disclosure and protection of source data used in a statistical data integration project. Data custodians collect and hold information on behalf of a data provider (defined as an individual, household, business, or other organisation which supplies data either for statistical or administrative purposes). The role of data custodians may also extend to producing source data, in addition to their role as a holder of datasets.

DATA INFORMATION PYRAMID
Definition: A schematic way of looking at the number of data items to be collected at each level of the health system, allowing each level to gather data of importance and relevance to its daily work while avoiding excessive data collection where no action is taken. The pyramid illustrates how most data are collected at its base, in the health facility, where most health service action takes place. Data are processed, filtered, and streamlined into data sets that are then passed up the health system.

DATA INPUTS
Definition: All numerical inputs to mathematical or statistical models that are used to generate global health estimates. Model inputs may include raw health data, processed health data, covariates, and other parameters.

DATA LIFE CYCLE
Definition: The main steps of the data life cycle include data collection, entry and recording, storage, processing and analysis, presentation and visualization, interpretation, sharing and dissemination, retention and archiving, maintenance and quality assurance, and disposal.

DATA PROVIDER
Definition: Data providers consist of the individuals and organizations who are responsible, whether formally or informally, for making data accessible to others. Sometimes a data provider may simply be the producer of those data. In other cases, data may be deposited in a repository, centre, or archive that has the responsibility of disseminating the data.

DATA QUALITY
Definition: Data quality is a set of standards that data should reach to be usable. Quality data must encompass the following characteristics: relevance, credibility, accuracy, timeliness, punctuality, methodological soundness, coherence and accessibility.

DATA QUALITY ASSESSMENT
Definition: The analysis or evaluation of data to determine its accuracy, completeness, consistency, and other quality attributes against predefined criteria. The primary goal of data quality assessment is to evaluate the quality of data that already exists within a system, database, or dataset. It helps identify issues and areas for improvement in the current state of data quality. (A minimal sketch follows the DQA entry below.)

DATA QUALITY ASSURANCE (DQA)
Definition: A proactive and systematic process that focuses on preventing errors and ensuring that data meet predefined quality standards throughout their lifecycle. The main goal of DQA is to guarantee the overall quality of data from the point of their creation or acquisition to their eventual use.
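[Illustrative note, not part of the WHO glossary] A minimal data quality assessment sketch in Python: a completeness check and a crude plausibility check against predefined criteria. The facility reports, field names and thresholds are all invented assumptions for illustration.

```python
# Hypothetical monthly facility reports; "anc1_visits" and the 5000-visit
# plausibility threshold are illustrative assumptions, not WHO criteria.
reports = [
    {"facility": "A", "month": "2024-01", "anc1_visits": 120},
    {"facility": "B", "month": "2024-01", "anc1_visits": None},   # missing
    {"facility": "C", "month": "2024-01", "anc1_visits": 20400},  # implausible
]

expected_units = 3
received = [r for r in reports if r["anc1_visits"] is not None]
completeness = 100 * len(received) / expected_units          # % of units reporting
outliers = [r for r in received if r["anc1_visits"] > 5000]  # crude plausibility rule

print(f"completeness of reporting: {completeness:.0f}%")
print("values flagged for review:", [r["facility"] for r in outliers])
```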
DATA SOURCE
Definition: A description of all actual and recommended sources of data. This description should include, when applicable, any changes of the data source over time, details of the denominator (if from a different source) and any other relevant information related to the origin of the source or indicator. Similar details should be given for administrative sources. Definitions of primary, secondary, and preferred data sources are provided separately.

DATA TRIANGULATION
Definition: The analysis of data from three or more sources obtained by different methods. Findings can be corroborated, and the weakness or bias of any of the methods or data sources can be compensated for by the strengths of another, thereby increasing the validity and reliability of the results.

DEATH REGISTRATION DATA
Definition: Data collected during death registration, including cause of death, sex, occupation, etc.

DEATH REGISTRATION FACILITY
Definition: A registration desk that receives death notifications, validates the information, and enters it into the central registration IT system.

DEMOGRAPHIC SURVEILLANCE SYSTEM
Definition: Demographic surveillance systems are longitudinal data collection platforms that track births, deaths, migrations, and socioeconomic and health circumstances over time in established geographic areas.

DENOMINATOR
Definition: The total population of interest in a specified population; the lower portion of a fraction used to calculate a rate or ratio. In a rate, the denominator is usually the population (or population experience, as in person-years, etc.) at risk.

DESCRIPTIVE ANALYSIS
Definition: The process of using and analyzing summary statistics that quantitatively describe or summarize features of a specific population.

DIGITAL HEALTH
Definition: Digital health is the systematic application of information and communication technologies, computer science, and data to support informed decision-making by individuals, the health workforce and health systems, in order to strengthen resilience to disease and improve health and wellness.

DISAGGREGATION
Definition: The breakdown of observations to a more detailed level when finer details are required and made possible by the codes given to the primary observations. In health this often includes sex, age, wealth quintile, education level, place of residence and occupation. (See the sketch after the ECOLOGICAL ANALYSIS entry below.)

DISEASE SURVEILLANCE SYSTEM
Definition: The infrastructure for ongoing systematic collection, analysis, and interpretation of outcome-specific disease data for use in planning, implementing, and evaluating public health policies and practices. A communicable disease surveillance system serves two key functions: early warning of potential threats to public health, and programme monitoring functions which may be disease-specific or multi-disease in nature.

DOMAIN
Definition: Categorization of indicators based on factors such as health status, risk factors, service coverage and health systems. The list includes a selection of priority indicators relating to these four domains.

E

ECOLOGICAL ANALYSIS
Definition: Analysis of the relationship between a health indicator and a health determinant or exposure at a population level.
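[Illustrative note, not part of the WHO glossary] A minimal sketch of disaggregation as defined above: an indicator broken down by sex and place of residence. The records and groupings are invented for illustration.

```python
# Disaggregating an immunization indicator by sex and residence.
# All records are made up; real disaggregation uses the codes attached
# to the primary observations.
from collections import defaultdict

records = [
    {"sex": "F", "residence": "urban", "immunized": 1},
    {"sex": "F", "residence": "rural", "immunized": 0},
    {"sex": "M", "residence": "urban", "immunized": 1},
    {"sex": "M", "residence": "rural", "immunized": 1},
    {"sex": "F", "residence": "rural", "immunized": 1},
]

groups = defaultdict(list)
for rec in records:
    groups[(rec["sex"], rec["residence"])].append(rec["immunized"])

for (sex, residence), values in sorted(groups.items()):
    print(f"{sex}/{residence}: {100 * sum(values) / len(values):.0f}% immunized")
```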
ECOLOGICAL FALLACY
Definition: An erroneous inference that may occur because an association observed between variables on an aggregate level does not necessarily represent or reflect the association that exists at an individual level; a causal relationship that exists on a group level or among groups may not exist among the group's individuals.

EPIDEMIC INTELLIGENCE
Definition: A system to detect, verify, investigate, and respond to early warning signals.

ESTIMATION METHOD
Definition: The estimation method is the steps taken to generate estimates: an explanation of how the indicator is calculated, including mathematical formulas and descriptive information on computations made on the source data to produce the indicator (including adjustments and weighting). This explanation should also highlight cases in which mixed sources are used or where the calculation has changed over time (i.e., discontinuities in the series). [Synonym: method of computation]

EVALUATION
Definition: A process that attempts to determine as systematically and objectively as possible the relevance, effectiveness, and impact of activities in the light of their objectives.

EVENT-BASED SURVEILLANCE
Definition: Event-based surveillance is the organized and rapid capture of information about events that are a potential risk to public health. This information can be rumours and other ad-hoc reports transmitted through formal channels (i.e., established routine reporting systems) and informal channels (i.e., reports from the media, health workers and nongovernmental organizations).

EXTERNAL CONSISTENCY OF DATA
Definition: An assessment of the level of agreement between two sources of data measuring the same health indicator. The two sources of data that are usually compared are data flowing through the HMIS or the programme-specific information system and data from a periodic population-based survey. (See the sketch after the GEOSPATIAL DATA entry below.)

EXTERNAL RESPONSIVENESS
Definition: Reflects the extent to which change in a measure relates to corresponding change in a reference measure of clinical or health status.

F

FEASIBLE
Definition: Refers to the availability of data to measure the indicator: the data are available from existing health data at a reasonable cost and/or can be collected without additional burden.

FOCAL POINT
Definition: The designated person to respond on behalf of a Member State or technical group during data collection or a country consultation process.

G

GENDER
Definition: Gender refers to the characteristics of women, men, girls, and boys that are socially constructed. This includes norms, behaviours and roles associated with being a woman, man, girl, or boy, as well as relationships with each other. As a social construct, gender varies from society to society and can change over time.

GEOCODED
Definition: Geocoding is the process of transforming a description of a location (such as a pair of coordinates, an address, or the name of a place) to a location on the earth's surface.

GEOGRAPHIC INFORMATION SYSTEMS (GIS)
Definition: A geographic information system (GIS) is a system that creates, manages, analyzes, and maps all types of data.

GEOSPATIAL ANALYSIS
Definition: The use of geospatial data and statistical techniques to uncover patterns, relationships, and trends.

GEOSPATIAL DATA
Definition: Data about objects, events or other features that have a location on the surface of the earth.
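[Illustrative note, not part of the WHO glossary] A minimal sketch of an external consistency check as defined above: the same coverage indicator compared between a routine (HMIS) source and a population-based survey. The districts, figures and the 10% review threshold are invented assumptions.

```python
# Compare one indicator across two sources per district. A ratio far
# from 1.0 flags the pair for review; the threshold here is illustrative.
hmis_coverage   = {"district_1": 0.92, "district_2": 0.75, "district_3": 0.97}
survey_coverage = {"district_1": 0.88, "district_2": 0.74, "district_3": 0.81}

for district in hmis_coverage:
    ratio = hmis_coverage[district] / survey_coverage[district]
    flag = "review" if abs(ratio - 1) > 0.10 else "consistent"
    print(f"{district}: HMIS/survey ratio = {ratio:.2f} ({flag})")
```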
GLOBAL DATABASE
Definition: For health and health-related indicators, the Global Health Observatory (GHO) is the preferred global database, presenting the latest available data at global, regional, and national level. The GHO data repository is WHO's gateway to health-related statistics, providing access to over 1000 indicators on priority health topics including mortality and burden of disease.

GLOBAL HEALTH ESTIMATES (GHE)
Definition: WHO's estimates of death and disability globally, by region and country, available by age, sex, and cause. These provide key insights on mortality and morbidity trends and are a powerful tool to support informed decision-making on health policy and resource allocation.

GOLD STANDARD
Definition: A reference measure or criterion against which other assessments are compared.

GRANULARITY
Definition: Granularity is the level of detail of the data.

H

HEALTH CARE ADMINISTRATIVE DATA
Definition: Information primarily collected for the purpose of record-keeping, which is subsequently used to produce statistics. These data are generated at every encounter with the health care system, whether through a visit to a clinic, a diagnostic procedure, an admission to a hospital or receipt of a prescription.

HEALTH ESTIMATES
Definition: Quantitative population-level estimates (including global, regional, national, or subnational estimates) of health indicators, including indicators of health status, such as estimates of total and cause-specific mortality, incidence and prevalence of diseases, injuries, and disability and functioning; and indicators of health determinants, including health behaviours and health exposures.

HEALTH FACILITY CENSUS
Definition: A periodic enumeration of all public and private healthcare facilities within a country, collecting information about the facilities and the services they provide.

HEALTH FACILITY SURVEY
Definition: A periodic enumeration of a representative sample of public and private healthcare facilities within a country, collecting information about the facilities and the services they provide.

HEALTH IMPACT ASSESSMENT
Definition: Health impact assessment is a combination of procedures, methods, and tools by which a policy, programme, product, or service may be judged concerning its effects on the health of the population and the distribution of those effects within the population.

HEALTH INDICATOR
Definition: A measurable quantity that can be used to describe a population's health or its determinants. Health indicators can be categorized into domains: health status (e.g., life expectancy, HIV prevalence), risk factors (e.g., childhood stunting, prevalence of smoking), service coverage (e.g., immunization coverage rate), or health systems (e.g., hospital bed density, death registration coverage).

HEALTH INEQUALITY
Definition: A measured difference in health between population subgroups. Health inequalities can be measured and monitored. (See the sketch after the HEALTH INEQUITY entry below.)

HEALTH INEQUITY
Definition: Unfair, avoidable, or remediable differences in health among groups of people. In some cases, the absence of a difference between groups (that is, a situation of equality) might be considered inequitable. Health inequity is rooted in the unfair distribution of, and access to, power, wealth, and other social resources, and is linked to forms of disadvantage that are socially produced, such as poverty, discrimination, and lack of access to services or goods.
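[Illustrative note, not part of the WHO glossary] A minimal sketch of two simple ways a health inequality, as defined above, can be expressed between subgroups: an absolute difference and a relative ratio. The coverage figures are invented.

```python
# Absolute and relative inequality between two wealth-quintile subgroups
# for an illustrative coverage indicator. Values are made up.
coverage_richest_quintile = 0.90   # e.g., skilled birth attendance
coverage_poorest_quintile = 0.55

absolute_difference = coverage_richest_quintile - coverage_poorest_quintile
relative_ratio = coverage_richest_quintile / coverage_poorest_quintile

print(f"difference: {absolute_difference * 100:.0f} percentage points")
print(f"ratio: {relative_ratio:.2f}")
```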
HEALTH INFORMATION SYSTEM
Definition: A system that collects data from health and other relevant sectors, analyses the data and ensures their overall quality, relevance, and timeliness, and converts the data into information for health-related decision-making. It has four key functions: (i) data generation, (ii) compilation, (iii) analysis and synthesis, and (iv) communication and use. A solid health information system will be capable of generating reliable data from hospitals, outpatient services, reportable disease registries, cancer registries and other relevant sources.

HEALTH RECORD
Definition: Records that contain diagnoses and treatment, medications, allergies, immunizations, as well as radiology images and laboratory results. Health records contribute to tracking a patient's medical history.

HEALTH SURVEY
Definition: A survey that is designed to gather information about health (physical and mental) and health-related factors. Health surveys generally include measures of risk factors, health behaviours, and non-health determinants or correlates of health, such as socioeconomic status. The range of measures that can be included is wide and varies by survey. Age, sex/gender, and race/ethnicity are the basic demographic variables included in health surveys. Socioeconomic determinants of health include education, income, geographic region, and urbanicity of residence.

HEAPING OF DATA
Definition: The tendency of reported values, such as age at disease diagnosis or the date of other events, to cluster on specific values, rounded up or down to the nearest convenient figure (e.g., birthweight reported as 2000 g or 2500 g, or values ending in "00" or "50").

I

IMPACT INDICATOR
Definition: Measures long-term outcomes that programmes are designed to affect, including decreases in mortality and morbidity.

IMPUTATION
Definition: Data imputation is a method for retaining the majority of a dataset's data and information by substituting a different value for missing data. (See the sketch after the INTERACTION entry below.)

INCIDENCE RATE
Definition: The frequency of new events or cases of a disease (or deaths or other health conditions) occurring in a specified population during a specified time period.

INDICATOR-BASED SURVEILLANCE
Definition: Routine reporting of cases of disease, including notifiable disease surveillance systems, sentinel surveillance, and laboratory-based surveillance. Indicator-based surveillance commonly comes from health care facilities and can be regularly reported.

INDICATOR CLASSIFICATION
Definition: The level of measurement provided by the indicator; this can be one of five levels, starting with input (lowest) and moving through process, output, outcome and finally impact (highest).

INDICATOR DEFINITION
Definition: How the indicator is measured, including numerators, denominators, data type and disaggregation in common use. The indicator definition should be unambiguous and expressed in universally applicable terms.

INPUT INDICATOR
Definition: Measures the human and financial resources, physical facilities, equipment, and operational policies that enable programme activities to be implemented. This includes health financing, health workforce, health infrastructure, and health information and governance.

INTEGRITY OF DATA
Definition: Data have integrity when the system used to generate them is protected from deliberate bias or manipulation for political or personal reasons.

INTERACTION
Definition: When the relationship between two variables depends on the value of another variable (also referred to as effect modification).
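[Illustrative note, not part of the WHO glossary] A minimal sketch of the simplest imputation strategy consistent with the definition above: replacing missing observations with the mean of the observed values so the rest of the dataset can be retained. The values are invented; real pipelines often use more sophisticated methods.

```python
# Mean imputation: substitute the mean of observed values for missing ones.
values = [2.1, 3.4, None, 2.8, None, 3.0]   # None marks missing observations

observed = [v for v in values if v is not None]
mean = sum(observed) / len(observed)

imputed = [v if v is not None else mean for v in values]
print(imputed)   # [2.1, 3.4, 2.825, 2.8, 2.825, 3.0]
```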
INTERNAL CONSISTENCY OF DATA
Definition: An assessment of the level of agreement between two variables in the same source documents or document flow. Typically, this involves assessing the reporting accuracy of selected indicators through the review of source documents in health facilities and district offices. This element of internal consistency is measured by a data verification exercise (e.g., answering the question: is the observed relationship between the indicators, as reflected in the reported data, that which we would expect?).

INTERNAL RESPONSIVENESS
Definition: Characterizes the ability of a measure to change over a prespecified time frame.

INTERNATIONAL CLASSIFICATION OF DISEASES (ICD)
Definition: WHO's International Statistical Classification of Diseases and Related Health Problems (ICD) is a structured translation of each medical condition into an alphanumeric code, which allows for the harmonization and comparison of mortality statistics across time and location. The ICD is the gold-standard coding system for reporting cause-of-death data.

INTERNATIONAL HEALTH REGULATIONS (IHR)
Definition: The IHR is an instrument of international law that is legally binding on 196 countries, including the 194 WHO Member States. The IHR grew out of the response to deadly epidemics that once overran Europe. They create rights and obligations for countries, including the requirement to report public health events.

INTERNATIONAL STANDARDS
Definition: A set of standards defined and agreed at the international level on the ethics, design, implementation, confidentiality, analysis and dissemination of information, surveys, services, etc.

INTEROPERABILITY
Definition: The ability of different applications to access, exchange, integrate and use data in a coordinated manner through the use of shared application interfaces and standards, within and across organizational, regional, and national boundaries, to provide timely and seamless portability of information and optimize health outcomes.

J

JOINT EXTERNAL EVALUATION (JEE)
Definition: A voluntary, collaborative, multisectoral process to assess country capacities to prevent, detect and rapidly respond to public health risks.

K

KAPPA STATISTIC (K)
Definition: A measure of agreement between categorical measures. (See the sketch after the MEDIAN entry below.)

L

LINKAGE
Definition: Linkage is the process of combining information from different sources or datasets.

M

MASTER FACILITY LIST (MFL)
Definition: The unique, complete, up-to-date, and uniquely coded list of all active and prior health facilities in the country, officially curated by the mandated agency. At a minimum, the MFL includes a unique ID, location, type, and name for each facility.

MATHEMATICAL OR STATISTICAL MODELS
Definition: A statistical model is a mathematical model that embodies a set of statistical assumptions.

MEAN
Definition: The average of a set of values.

MEASURE
Definition: Refers to the procedure of applying a reference scale to a variable or set of variables.

MEASUREMENT
Definition: Refers to the extent, dimension, quantity, etc. of an attribute.

MEASUREMENT LEVEL
Definition: The specific setting that the indicator is designed to measure/monitor; e.g., global, national, sub-national, facility, household, community, patient level.

MEASUREMENT METHOD
Definition: How the data from data sources are used; this can be processing or other types of analyses that make use of the indicator.

MEDIAN
Definition: The middle point of a set of ordered numbers; half of the values are higher than the median, and half of the values are lower.
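[Illustrative note, not part of the WHO glossary] A minimal sketch of one common variant of the kappa statistic defined above, Cohen's kappa for two raters, computed from first principles: observed agreement corrected for the agreement expected by chance. The ratings are invented.

```python
# Cohen's kappa for agreement between two raters' categorical codes.
rater_a = ["yes", "yes", "no", "no", "yes", "no", "yes", "no"]
rater_b = ["yes", "no",  "no", "no", "yes", "no", "yes", "yes"]

n = len(rater_a)
observed_agreement = sum(a == b for a, b in zip(rater_a, rater_b)) / n

# Chance agreement: product of each rater's marginal proportions per category.
categories = set(rater_a) | set(rater_b)
expected_agreement = sum(
    (rater_a.count(c) / n) * (rater_b.count(c) / n) for c in categories
)

kappa = (observed_agreement - expected_agreement) / (1 - expected_agreement)
print(f"observed = {observed_agreement:.2f}, "
      f"expected = {expected_agreement:.2f}, kappa = {kappa:.2f}")
```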
MEDICAL CERTIFICATION OF CAUSE OF DEATH (MCCD)
Definition: Medical certification of cause of death involves confirmation of death, external examination of the body and ascertainment of the circumstances and cause of death.

METADATA
Definition: Data that define or describe other data. They are the information needed to explain and understand the data or values being presented.

METHOD OF AGGREGATE ESTIMATION
Definition: A description of the methodology, including any mathematical formulas, used for the calculation of regional/global aggregates from country values.

METHODOLOGICAL SOUNDNESS
Definition: The application of the available international standards, guidelines, and good practices in the production of data.

MICRODATA
Definition: Granular data that may include individual-level information (anonymized or not).

MINISTRY OF HEALTH (MoH)
Definition: The government institution that is in charge of all aspects of health. This may have different names at country level, but its main function is to safeguard the health of the population.

MONITORING
Definition: The systematic and routine collection of information to assess performance and progress towards specific targets over an established period of time.

MONITORING & EVALUATION (M&E) FRAMEWORK
Definition: The structure necessary to support analysis and perform monitoring and evaluation activities. A framework can establish and maintain a set of global and country indicators to support strategic thinking, operational tracking and real-time monitoring.

MORBIDITY DATA
Definition: The information registered on the state of being symptomatic or unhealthy due to a disease or health-related condition.

MORTALITY CODER
Definition: A mortality coder is the trained person who reviews the medical conditions and events reported on MCCD forms to determine the underlying cause of death and assign mortality codes using ICD rules and principles.

MORTALITY DATA
Definition: The information registered when a death occurs.

N

NATIONAL HEALTH STRATEGIC PLAN
Definition: A national health strategic plan is the set of priorities to achieve key milestones that will have impact beyond the health sector. There will be medium-term and long-term expected outcomes and a concrete and realistic allocation of resources to implement the activities within a clear timeframe. The national health strategic plan will concretize priorities; keep the focus on the medium and long term without deviating from an optimal path; integrate the health sector; help focus the policy dialogue on health priorities; and guide operational planning, resource allocation and health sector monitoring and evaluation.

NATIONAL STATISTICS OFFICE (NSO)
Definition: The government agency or institution responsible for collecting, analyzing, using, and disseminating statistical data related to a country.

NATIONALLY REPRESENTATIVE
Definition: A survey that uses design methods and standardized criteria scalable to the national context, using a sub-sample that represents the target population in terms of age, sex, urban/rural and other categories of interest.

NEGATIVE PREDICTIVE VALUE (NPV)
Definition: Negative predictive value is the proportion of cases giving negative test results who are truly healthy. It is the ratio of subjects truly diagnosed as negative to all those who had negative test results (including patients who were incorrectly diagnosed as healthy). This characteristic can predict how likely it is for someone to truly be healthy, in the case of a negative test result. (See the sketch after the PREFERRED DATA SOURCES entry below.)
NOTIFIABLE CONDITIONS
Definition: A disease that, when diagnosed, requires health providers (usually by law) to report to state or local public health officials. Notifiable diseases are of public interest by reason of their contagiousness, severity, or frequency.

NUMERATOR
Definition: Count of values captured by the indicator in a specified population. The upper portion of a fraction used to calculate a rate or ratio.

O
ODDS
Definition: The numerator is the proportion of the event of interest, and the denominator is the proportion of the non-event. The numerator and denominator are thus complementary proportions (p/(1−p)).

OUTCOME INDICATOR
Definition: Measures whether the program is achieving the expected effects/changes in the short, intermediate, and long term. Some programs refer to their longest-term/most distal outcome indicators as impact indicators. This usually includes coverage of interventions and risk factors and behaviours.

OUTPUT INDICATOR
Definition: Measures the results of the processes in terms of service access, availability, quality and safety and health security.

P
PERCENTAGE
Definition: Number or ratio that can be expressed as a fraction of 100.

PERIODICITY
Definition: Data can be compiled continuously in systems such as civil registries, cancer registries and surveillance systems of reportable diseases. Data can also be compiled periodically, which is to say at regular intervals, or without predefined periodicity at a particular point in time.

POPULATION BASED SURVEY
Definition: A descriptive cross-sectional epidemiological study that is useful for calculating the prevalence of self-reported events or events measured during the investigation, generally employing a representative sample from the population of interest.

POPULATION CENSUS
Definition: A population census is the total process of planning, collecting, compiling, evaluating, disseminating, and analyzing demographic, economic and social data at the smallest geographic level pertaining, at a specified time, to all persons in a country or in a well-delimited part of a country.

POSITIVE PREDICTIVE VALUE (PPV)
Definition: Positive predictive value is the proportion of cases giving positive test results who are truly diseased. It is the ratio of patients truly diagnosed as positive to all those who had positive test results (including healthy subjects who were incorrectly diagnosed as diseased). This characteristic predicts how likely it is for someone to truly have the disease in the case of a positive test result.

POST ENUMERATION SURVEY (PES)
Definition: The purpose of the Post-Enumeration Survey is to measure the accuracy of the census by independently surveying a sample of the population. The survey estimates the proportion of people and housing units potentially missed or counted erroneously in the census.

PREDICTIVE VALIDITY
Definition: The degree to which predictions are confirmed by facts, expressed in terms of the ability to predict future outcomes or events. Predictive validity is often used for impact indicators.

PREFERRED DATA SOURCES
Definition: Recommended sources of data. These include the civil registration and vital statistics system; national population-based surveys; routine facility information systems; health facility assessments; administrative data sources; human resources information systems; key informant surveys; and indicators from other sources, including modelling.
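The PPV and NPV definitions reduce to simple ratios from a two-by-two table of test results against true status. A minimal Python sketch, with invented counts:

```python
# Predictive values from a 2x2 test-vs-disease table (invented counts).
tp, fp = 90, 30    # test positive: true positives, false positives
fn, tn = 10, 870   # test negative: false negatives, true negatives

ppv = tp / (tp + fp)  # P(diseased | positive test)
npv = tn / (tn + fn)  # P(healthy  | negative test)
print(f"PPV = {ppv:.2f}, NPV = {npv:.3f}")  # PPV = 0.75, NPV = 0.989
```

Note that NPV is high here largely because the condition is rare in this hypothetical table: predictive values depend on prevalence in the tested population, not only on the test itself.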
PREVALENCE RATE
Definition: The number of existing cases of a disease or other health event divided by the number of persons in the population at a specified time. Each individual is observed on a single occasion, at which time the individual's status with respect to the event in question is ascertained.

PRIMARY DATA
Definition: Primary data comes from country health information systems (including administrative reporting, household surveys, etc.). Data is reported as is, or with modest adjustment.

PRIMARY DATA SOURCES
Definition: Primary source data provide direct evidence about an event. Data collection can take different forms, whether through a population census or national or local research (typically sample-based or non-sample-based surveys). The creation of an information system to achieve specific objectives will generate primary data. In other words, indicators are said to be based on primary data if the data source was created to achieve a specific purpose.

PROCESS INDICATOR
Definition: Measures the program's activities and outputs (direct products/deliverables of the activities). Together, measures of activities and outputs indicate whether the program is being implemented as planned (e.g., health workforce training, constructing a health facility, the process of registering births and deaths).

PROCESS OF VALIDATION
Definition: A description of how the indicator was validated to assess how accurately it measures what it is intended to measure. This glossary includes some methods for validation.

PROCESSED HEALTH DATA
Definition: Health statistics that have been calculated from raw health data, but which are not the result of synthesizing multiple data sources. Examples of processing raw health data include cleaning data by removing implausible values, calculating an indicator with an algorithm, or adjusting a statistic for bias.

PROPORTION
Definition: The size, number, or amount of one thing or group as compared to the size, number, or amount of another, when the numerator is a subset of the denominator. A proportion tends to be expressed as a percentage (%). It is the observed relative frequency of an event and provides an estimate of probability. It should be noted that, according to the frequentist approach, the probability of an event occurring is given by the relative frequency of the event over the long term (in infinite attempts or repetitions of the experiment).

PROXY HEALTH INDICATOR
Definition: An indicator that stands in for another indicator or topic that is difficult to measure or for which data are limited.

PUBLIC HEALTH SURVEILLANCE SYSTEM
Definition: The system that systematically collects, analyzes, and interprets health-related data essential to the planning, implementation and evaluation of public health practice.

PUBLICLY AVAILABLE
Definition: Information in any form that is generally accessible, without restriction, to the public.

PUNCTUALITY
Definition: The time lag between the release date of data and the target date on which they were scheduled for release as announced in an official release calendar.

P VALUE
Definition: The probability that a test statistic would be as extreme as or more extreme than observed if the null hypothesis were true.

R
RATE
Definition: A rate is an expression of the frequency with which an event occurs in a defined population, usually in a specified period of time.
The components of a rate are the numerator, the denominator, the specified time in which events occur, and usually a multiplier, a power of 10, that converts the rate from a fraction or decimal to a whole number. The numerator is the absolute number of occurrences of the event being studied in a specified time. The denominator is the reference population (or population being studied) at the same time.

RATIO
Definition: The result of dividing one quantity by another without regard for details such as a time dimension. Rates, proportions, and percentages are all types of ratios. The distinction between a proportion and a ratio is that, whereas the numerator of a proportion is included in the population defined by the denominator, this is not necessarily so for a ratio, which expresses the relationship of two separate and distinct quantities, neither of which is included in the other.

RATIONALE
Definition: Importance of the indicator for public health response.

RAW HEALTH DATA
Definition: Measurements derived from primary data collection with no adjustments or corrections.

RECORD LINKAGE
Definition: The methodology of bringing together corresponding records from two or more files or finding duplicates within files.

REGISTRAR
Definition: The local civil registrar is the official authorized by law to register the occurrence of vital events and to represent the legal authority of government in the field of civil registration.

REGISTRATION FORM
Definition: The registration form is the paper or electronic format that is used to register a vital event.

REGRESSION
Definition: A statistical technique that relates a dependent variable to one or more independent (explanatory) variables.

REGULAR ASSESSMENT
Definition: Frequent evaluation that may be done at established periods (weekly, monthly, yearly, etc.).

RELEVANCE
Definition: The degree to which the data meet user needs. Indicators must provide information that is appropriate and useful for guiding policies and programmes as well as for decision-making.

RELIABILITY OF DATA
Definition: The degree to which the results obtained by a measurement/procedure can be replicated. Consistency of the data when collected repeatedly using the same procedures and under the same circumstances. [synonym: replicability]

REPORTING SITE
Definition: Health facilities designated by authorities to mandatorily report cases of diseases. If available, public and private reporting sites should be identified.

REPRESENTATIVE
Definition: The ability of the indicator to accurately describe the occurrence of a health-related event over time and its distribution in the population by place and person. This involves the absence of selection bias with respect to the population that the indicator is intended to represent.

RESPONSE RATE
Definition: In survey research, the response rate, also known as completion rate or return rate, is the number of people who answered the survey divided by the number of people in the sample.

RESPONSIVENESS
Definition: The indicator's ability to detect changes over time in response to interventions, treatments, or natural progression of the condition. [consult External responsiveness and Internal responsiveness]

ROUTINE HEALTH INFORMATION SYSTEM (RHIS)
Definition: Systems that record data generated at public and private health facilities, institutions and community-level healthcare posts and clinics, and use these data for analysis, monitoring and reporting at regular intervals.
The data give a picture of health services, health interventions and health resources. Most of the data are gathered by healthcare providers as they go about their work. The sources of those data are generally records from these institutions.

S
SAMPLE
Definition: A sample is a subset of a population.

SAMPLE SIZE
Definition: The number of people upon which a disaggregated (subgroup) estimate is based; that is, the denominator used to calculate a disaggregated estimate.

SAMPLING ERROR
Definition: The difference between the sample statistic and the population parameter. Sampling errors can be estimated by general methods such as bootstrapping or by specific methods incorporating assumptions about the true population distribution.

SECONDARY DATA SOURCES
Definition: Data that were originally collected for other purposes. The data from these existing sources are considered secondary. Although these sources were not created for the purpose at hand, they facilitate the development of the required indicators. Data from a census, research, information system, etc. are secondary source data.

SENSITIVITY ANALYSIS
Definition: Sensitivity analysis is a systematic approach to evaluate how the variation in the output of a system or model can be attributed to different sources of variation in its inputs. It involves examining the sensitivity of the model's outcomes or outputs to changes in individual input parameters, providing insights into the relative importance of each parameter in influencing the overall results.

SEX
Definition: Biological sex (male/female).

STAKEHOLDER
Definition: Interested party, group or organization that may affect, be affected by, or perceive itself to be affected by a decision, activity or outcome of a project or programme.

STANDARD OPERATING PROCEDURES (SOP)
Definition: Procedures that are documented to guarantee that they are standardized and followed.

STATISTICAL DATA
Definition: Data that have been organized for analysis, interpretation, and representation through statistical analysis.

STATISTICAL METHODS
Definition: Various techniques employed in data analysis for validation studies.

STATISTICAL OUTPUT
Definition: Numerical data relating to an aggregate of individuals or entities.

STATISTICAL SIGNIFICANCE
Definition: A mathematical measure of the probability that a result is due to chance (that is, consistent with the null hypothesis).

STRATIFICATION
Definition: The process of sorting data into defined segments or groups. This method can be used when sampling a population for a survey, or in analysis to control for confounding.

STRUCTURAL VALIDITY
Definition: The degree to which scores of an indicator are an adequate reflection of the dimensionality of the construct to be measured; it involves examining the construct's underlying structure or dimensionality. A measurement instrument with strong structural validity should demonstrate that its items align logically with the theoretical framework of the construct, ensuring that it effectively captures the intended concepts or traits.

SUB-NATIONAL
Definition: County, district, state levels.

SURVEILLANCE
Definition: The continuous, systematic collection, analysis, interpretation, and dissemination of data needed for the planning, implementation, and evaluation of public health actions. Some examples are public health surveillance systems, indicator-based surveillance, disease surveillance systems, demographic surveillance systems, etc.
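The SAMPLING ERROR entry above mentions bootstrapping as a general estimation method; the idea fits in a few lines of Python. The data values below are invented for illustration:

```python
import random

random.seed(1)
sample = [4, 7, 2, 9, 5, 6, 3, 8, 5, 7]  # invented measurements

# Bootstrap estimate of the sampling error (standard error) of the mean:
# resample with replacement many times and look at the spread of the means.
means = []
for _ in range(5000):
    resample = [random.choice(sample) for _ in sample]
    means.append(sum(resample) / len(resample))

grand = sum(means) / len(means)
se = (sum((m - grand) ** 2 for m in means) / (len(means) - 1)) ** 0.5
print(se)  # about 0.67, close to the analytic standard error (~0.70)
```

The appeal of the bootstrap is that the same loop works for statistics (medians, ratios, kappa) whose sampling error has no simple closed form.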
SURVEY
Definition: A survey is a structured list of questions that collects data on a specific population.

SURVEY DATA
Definition: A survey is an investigation of the characteristics of a given population by means of collecting data from a sample of that population and estimating their characteristics through the systematic use of statistical methodology.

SUSTAINABILITY
Definition: The source's potential to remain relevant and be of the quality needed to generate information over time. This depends not only on the periodicity of the data collection, but also on the availability of the financial resources needed to sustain that source of data, the presence of a legal framework, political will, among other factors.

SUSTAINABLE DEVELOPMENT GOAL (SDG)
Definition: The Sustainable Development Goals (SDGs) are 17 global objectives that were agreed by Members of the United Nations and aim to transform our world. They are a call to action to end poverty and inequality, protect the planet, and ensure that all people enjoy health, justice, and prosperity. It is critical that no one is left behind.

T
TARGET POPULATION
Definition: A set of elements about which information is wanted and estimates are required.

TIMELINESS
Definition: When the data are quickly available and accessible for use. In the context of data quality, the degree to which reports are submitted on time according to established deadlines. Timeliness involves the availability and reliability of the data at the time they are needed to construct the indicators. Thus, timely produced indicators provide better opportunities for making health-related decisions.

TRACER INDICATOR
Definition: A highly specified indicator chosen as an example to represent a broader health topic.

TREND
Definition: Multiple standardized measurements in time demonstrating how the values have increased, remained the same or decreased.

U
UNCERTAINTY MEASURE
Definition: Measures that indicate the level of certainty around a point estimate and quantify the imprecision. Common measures of certainty include the confidence interval (CI), standard deviation (SD), credible interval (CrI), uncertainty interval (UI), standard error, etc.

UNDERCOUNT RATE
Definition: The net undercount of the census is the difference between the number of persons counted in the census and the number of people who should have been counted.

UNDERSTANDABLE/SIMPLICITY
Definition: Whether the indicator is presented in a clear, concise, and easily comprehensible way. The indicator must be understood by those responsible for taking action and, specifically, by those responsible for decision-making. [synonym: comprehensible]

UNIT OF MEASURE
Definition: Unit of measure in which the indicator is presented; e.g., deaths per 1000 live births; US$; liters per person per year. Note: Percentage is not considered a unit of measure, and indicators that are presented as percentages should have the unit of measure field filled in as "N/A" (not applicable).

USEFULNESS/UTILITY
Definition: Whether the indicator is useful for program improvement and policy issues.

V
VALIDITY
Definition: The ability of an indicator to measure what it is intended to measure (absence of distortions, bias, or systematic errors). The most relevant biases are those related to selection of the study population and the quality of the information compiled. The data source should include the variables needed to develop the indicator.
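As a concrete instance of the UNCERTAINTY MEASURE entry, here is a minimal Python sketch of a 95% confidence interval for a proportion using the common normal approximation; the counts are invented:

```python
# 95% confidence interval for a proportion (normal approximation),
# one common way to quantify the uncertainty around a point estimate.
events, n = 120, 1_000
p = events / n
se = (p * (1 - p) / n) ** 0.5          # standard error of the proportion
lo, hi = p - 1.96 * se, p + 1.96 * se  # 1.96: 97.5th percentile of N(0, 1)
print(f"{p:.3f} (95% CI {lo:.3f}-{hi:.3f})")  # 0.120 (95% CI 0.100-0.140)
```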
VERBAL AUTOPSY
Definition: Verbal autopsy (VA) is a method used to determine the cause of death through interviews with the deceased person's next of kin or caregivers. These interviews involve a standardized questionnaire to gather details on symptoms, medical history, and the circumstances leading to death. Healthcare professionals or algorithms then analyze this information to identify the likely cause of death. The primary goal of verbal autopsy is to describe the causes of death at the community or population level in areas where medical certification of deaths does not exist or is not yet well established.

VITAL EVENT
Definition: Some vital events that are captured through civil registration and vital statistics systems are: birth, adoption, marriage, divorce, migration, death.

VITAL STATISTICS
Definition: The systematic record of vital events such as birth, marriage, divorce, adoption, death, and cause of death to generate data and statistics.

W
WEIGHTING
Definition: Weighting is a statistical technique used to adjust data to reflect the known population profile. It is used to balance out any significant variance between the actual and target profiles. Weighted analyses of population-based surveys allow us to generalize findings to a larger or more general population. This approach aims to provide unbiased estimates of descriptive statistics or model parameters of the population of interest, which may be a general population or a major population subgroup. Incorporating weights in the analyses can be crucial to achieve statistically valid, representative population-based findings in surveys and to make adjustments for sampling errors.
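The WEIGHTING entry can be illustrated with the simplest possible case, a weighted mean across two strata; all figures below are invented:

```python
# Survey weighting: adjust a sample toward the known population profile.
# Two strata; urban respondents were over-sampled relative to the population.
values  = {"urban": 62.0, "rural": 48.0}   # stratum means from the sample
sampled = {"urban": 700,  "rural": 300}    # respondents per stratum
popn    = {"urban": 0.40, "rural": 0.60}   # known population shares

unweighted = sum(values[s] * sampled[s] for s in values) / sum(sampled.values())
weighted = sum(values[s] * popn[s] for s in values)
print(unweighted, round(weighted, 1))  # 57.8 vs 53.6
```

The unweighted mean leans toward the over-sampled urban stratum; re-weighting by the known population shares corrects that imbalance.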
2406
https://texasgateway.org/resource/85-image-formation-lenses
Learning Objectives
By the end of this section, you will be able to do the following:
List the rules for ray tracing for thin lenses
Illustrate the formation of images using the technique of ray tracing
Determine the power of a lens given the focal length
The information presented in this section supports the following AP® learning objectives and science practices:
6.E.5.1 The student is able to use quantitative and qualitative representations and models to analyze situations and solve problems about image formation occurring due to the refraction of light through thin lenses. (S.P. 1.4, 2.2)
6.E.5.2 The student is able to plan data collection strategies, perform data analysis and evaluation of evidence, and refine scientific questions about the formation of images due to refraction for thin lenses. (S.P. 3.2, 4.1, 5.1, 5.2, 5.3)
Lenses are found in a huge array of optical instruments, ranging from a simple magnifying glass to the eye to a camera's zoom lens. In this section, we will use the law of refraction to explore the properties of lenses and how they form images.
The word lens derives from the Latin word for a lentil bean, the shape of which is similar to the convex lens in Figure 8.20. The convex lens shown has been shaped so that all light rays that enter it parallel to its axis cross one another at a single point on the opposite side of the lens. The axis is defined to be a line normal to the lens at its center, as shown in Figure 8.20. Such a lens is called a converging (or convex) lens for the converging effect it has on light rays. An expanded view of the path of one ray through the lens is shown, to illustrate how the ray changes direction both as it enters and as it leaves the lens. Since the index of refraction of the lens is greater than that of air, the ray moves towards the perpendicular as it enters and away from the perpendicular as it leaves. This is in accordance with the law of refraction. Due to the lens's shape, light is thus bent toward the axis at both surfaces. The point at which the rays cross is defined to be the focal point F of the lens. The distance from the center of the lens to its focal point is defined to be the focal length f of the lens. Figure 8.21 shows how a converging lens, such as that in a magnifying glass, can converge the nearly parallel light rays from the sun to a small spot.
Figure 8.20 Rays of light entering a converging lens parallel to its axis converge at its focal point F. (Ray 2 lies on the axis of the lens.) The distance from the center of the lens to the focal point is the lens's focal length f. An expanded view of the path taken by ray 1 shows the perpendiculars and the angles of incidence and refraction at both surfaces.
Converging or Convex Lens
The lens in which light rays that enter it parallel to its axis cross one another at a single point on the opposite side, with a converging effect, is called a converging lens.
Focal Point F
The point at which the light rays cross is called the focal point F of the lens.
Focal Length f
The distance from the center of the lens to its focal point is called the focal length f.
Figure 8.21 Sunlight focused by a converging magnifying glass can burn paper. Light rays from the sun are nearly parallel and cross at the focal point of the lens. The more powerful the lens, the closer to the lens the rays will cross.
The greater the effect a lens has on light rays, the more powerful it is said to be. For example, a powerful converging lens will focus parallel light rays closer to itself and will have a smaller focal length than a weak lens. The light will also focus into a smaller and more intense spot for a more powerful lens. The power P of a lens is defined to be the inverse of its focal length. In equation form, this is
8.20 P = 1/f.
Power P
The power P of a lens is defined to be the inverse of its focal length. In equation form, this is
8.21 P = 1/f,
where f is the focal length of the lens, which must be given in meters (and not cm or mm). The power of a lens P has the unit diopters (D), provided that the focal length is given in meters. That is, 1 D = 1/m, or 1 m⁻¹. (Note that this power (optical power, actually) is not the same as power in watts. It is a concept related to the effect of optical devices on light.) Optometrists prescribe common spectacles and contact lenses in units of diopters.
Example 8.5 What is the Power of a Common Magnifying Glass?
Suppose you take a magnifying glass out on a sunny day and you find that it concentrates sunlight to a small spot 8.00 cm away from the lens. What are the focal length and power of the lens?
Strategy
The situation here is the same as those shown in Figure 8.20 and Figure 8.21. The sun is so far away that the sun's rays are nearly parallel when they reach Earth. The magnifying glass is a convex, or converging, lens, focusing the nearly parallel rays of sunlight. Thus the focal length of the lens is the distance from the lens to the spot, and its power is the inverse of this distance (in m).
Solution
The focal length of the lens is the distance from the center of the lens to the spot, given to be 8.00 cm. Thus,
8.22 f = 8.00 cm.
To find the power of the lens, we must first convert the focal length to meters; then, we substitute this value into the equation for power. This gives
8.23 P = 1/f = 1/(0.0800 m) = 12.5 D.
Discussion
This is a relatively powerful lens. The power of a lens in diopters should not be confused with the familiar concept of power in watts. It is an unfortunate fact that the word power is used for two completely different concepts. If you examine a prescription for eyeglasses, you will note lens powers given in diopters. If you examine the label on a motor, you will note energy consumption rate given as a power in watts.
Figure 8.22 shows a concave lens and the effect it has on rays of light that enter it parallel to its axis (the path taken by ray 2 in the figure is the axis of the lens). The concave lens is a diverging lens, because it causes the light rays to bend away, diverge, from its axis. In this case, the lens has been shaped so that all light rays entering it parallel to its axis appear to originate from the same point, F, defined to be the focal point of a diverging lens. The distance from the center of the lens to the focal point is again called the focal length f of the lens. Note that the focal length and power of a diverging lens are defined to be negative. For example, if the distance to F in Figure 8.22 is 5.00 cm, then the focal length is f = –5.00 cm and the power of the lens is P = –20 D. An expanded view of the path of one ray through the lens is shown in the figure to illustrate how the shape of the lens, together with the law of refraction, causes the ray to follow its particular path and be diverged.
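Because P = 1/f is a plain reciprocal once f is in meters, the sign convention is easy to check numerically. A minimal Python sketch (the helper name is ours, not from the text), using the 12.5 D magnifying glass of Example 8.5 and the –20 D diverging lens just described:

```python
# P = 1/f, with f strictly in meters; the sign of f carries through to P.
def power_diopters(focal_length_m):
    return 1.0 / focal_length_m

print(power_diopters(0.0800))   # 12.5 (D): the magnifying glass of Example 8.5
print(power_diopters(-0.0500))  # -20.0 (D): the diverging lens described above
```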
Figure 8.22 Rays of light entering a diverging lens parallel to its axis are diverged, and all appear to originate at its focal point F. The dashed lines are not rays—they indicate the directions from which the rays appear to come. The focal length f of a diverging lens is negative. An expanded view of the path taken by ray 1 shows the perpendiculars and the angles of incidence and refraction at both surfaces.
Diverging Lens
A lens that causes the light rays to bend away from its axis is called a diverging lens.
As noted in the initial discussion of the law of refraction in The Law of Refraction, the paths of light rays are exactly reversible. This means that the direction of the arrows could be reversed for all of the rays in Figure 8.20 and Figure 8.22. For example, if a point light source is placed at the focal point of a convex lens, as shown in Figure 8.23, parallel light rays emerge from the other side.
Figure 8.23 A small light source, like a light bulb filament, placed at the focal point of a convex lens results in parallel rays of light emerging from the other side. The paths are exactly the reverse of those shown in Figure 8.20. This technique is used in lighthouses and sometimes in traffic lights to produce a directional beam of light from a source that emits light in all directions.
Ray Tracing and Thin Lenses
Ray tracing is the technique of determining or following, tracing, the paths that light rays take. For rays passing through matter, the law of refraction is used to trace the paths. Here we use ray tracing to help us understand the action of lenses in situations ranging from forming images on film to magnifying small print to correcting nearsightedness. While ray tracing for complicated lenses, such as those found in sophisticated cameras, may require computer techniques, there is a set of simple rules for tracing rays through thin lenses. A thin lens is defined to be one whose thickness allows rays to refract, as illustrated in Figure 8.20, but does not allow properties such as dispersion and aberrations. An ideal thin lens has two refracting surfaces, but the lens is thin enough to assume that light rays bend only once. A thin symmetrical lens has two focal points, one on either side and both at the same distance from the lens. (See Figure 8.24.) Another important characteristic of a thin lens is that light rays through its center are deflected by a negligible amount, as seen in Figure 8.25.
Thin Lens
A thin lens is defined to be one whose thickness allows rays to refract but does not allow properties such as dispersion and aberrations.
Take-Home Experiment: A Visit to the Optician
Look through your eyeglasses, or those of a friend, backward and forward and comment on whether they act like thin lenses.
Figure 8.24 Thin lenses have the same focal length on either side. (a) Parallel light rays entering a converging lens from the right cross at its focal point on the left. (b) Parallel light rays entering a diverging lens from the right seem to come from the focal point on the right.
Figure 8.25 The light ray through the center of a thin lens is deflected by a negligible amount and is assumed to emerge parallel to its original path (shown as a shaded line). Using paper, pencil, and a straight edge, ray tracing can accurately describe the operation of a lens.
The rules for ray tracing for thin lenses are based on the illustrations already discussed:
1. A ray entering a converging lens parallel to its axis passes through the focal point F of the lens on the other side. (See rays 1 and 3 in Figure 8.20.)
2. A ray entering a diverging lens parallel to its axis seems to come from the focal point F. (See rays 1 and 3 in Figure 8.22.)
3. A ray passing through the center of either a converging or a diverging lens does not change direction. (See Figure 8.25, and see ray 2 in Figure 8.20 and Figure 8.22.)
4. A ray entering a converging lens through its focal point exits parallel to its axis. (The reverse of rays 1 and 3 in Figure 8.20.)
5. A ray that enters a diverging lens by heading toward the focal point on the opposite side exits parallel to the axis. (The reverse of rays 1 and 3 in Figure 8.22.)
Rules for Ray Tracing
1. A ray entering a converging lens parallel to its axis passes through the focal point F of the lens on the other side.
2. A ray entering a diverging lens parallel to its axis seems to come from the focal point F.
3. A ray passing through the center of either a converging or a diverging lens does not change direction.
4. A ray entering a converging lens through its focal point exits parallel to its axis.
5. A ray that enters a diverging lens by heading toward the focal point on the opposite side exits parallel to the axis.
Image Formation by Thin Lenses
In some circumstances, a lens forms an obvious image, such as when a movie projector casts an image onto a screen. In other cases, the image is less obvious. Where, for example, is the image formed by eyeglasses? We will use ray tracing for thin lenses to illustrate how they form images, and we will develop equations to describe the image formation quantitatively.
Consider an object some distance away from a converging lens, as shown in Figure 8.26. To find the location and size of the image formed, we trace the paths of selected light rays originating from one point on the object, in this case the top of the person's head. The figure shows three rays from the top of the object that can be traced using the ray tracing rules given above. Rays leave this point going in many directions, but we concentrate on only a few with paths that are easy to trace. The first ray is one that enters the lens parallel to its axis and passes through the focal point on the other side (rule 1). The second ray passes through the center of the lens without changing direction (rule 3). The third ray passes through the nearer focal point on its way into the lens and leaves the lens parallel to its axis (rule 4). The three rays cross at the same point on the other side of the lens. The image of the top of the person's head is located at this point. All rays that come from the same point on the top of the person's head are refracted in such a way as to cross at the point shown. Rays from another point on the object, such as her belt buckle, will also cross at another common point, forming a complete image, as shown. Although three rays are traced in Figure 8.26, only two are necessary to locate the image. It is best to trace rays for which there are simple ray tracing rules. Before applying ray tracing to other situations, let us consider the example shown in Figure 8.26 in more detail.
Figure 8.26 Ray tracing is used to locate the image formed by a lens.
Rays originating from the same point on the object are traced—the three chosen rays each follow one of the rules for ray tracing, so that their paths are easy to determine. The image is located at the point where the rays cross. In this case, a real image—one that can be projected on a screen—is formed.
The image formed in Figure 8.26 is a real image, meaning that it can be projected. That is, light rays from one point on the object actually cross at the location of the image and can be projected onto a screen, a piece of film, or the retina of an eye, for example. Figure 8.27 shows how such an image would be projected onto film by a camera lens. This figure also shows how a real image is projected onto the retina by the lens of an eye. Note that the image is there whether it is projected onto a screen or not.
Real Image
The image in which light rays from one point on the object actually cross at the location of the image and can be projected onto a screen, a piece of film, or the retina of an eye is called a real image.
Figure 8.27 Real images can be projected. (a) A real image of the person is projected onto film. (b) The converging nature of the multiple surfaces that make up the eye result in the projection of a real image on the retina.
Several important distances appear in Figure 8.26. We define do to be the object distance, the distance of an object from the center of a lens. Image distance di is defined to be the distance of the image from the center of a lens. The height of the object and height of the image are given the symbols ho and hi, respectively. Images that appear upright relative to the object have heights that are positive and those that are inverted have negative heights. Using the rules of ray tracing and making a scale drawing with paper and pencil, like that in Figure 8.26, we can accurately describe the location and size of an image. But the real benefit of ray tracing is in visualizing how images are formed in a variety of situations. To obtain numerical information, we use a pair of equations that can be derived from a geometric analysis of ray tracing for thin lenses. The thin lens equations are
8.24 1/do + 1/di = 1/f
and
8.25 hi/ho = −di/do = m.
We define the ratio of image height to object height (hi/ho) to be the magnification m. The minus sign in the equation above will be discussed shortly. The thin lens equations are broadly applicable to all situations involving thin lenses, and thin mirrors, as we will see later. We will explore many features of image formation in the following worked examples.
Image Distance
The distance of the image from the center of the lens is called image distance.
Thin Lens Equations and Magnification
8.26 1/do + 1/di = 1/f
8.27 hi/ho = −di/do = m
Example 8.6 Finding the Image of a Light Bulb Filament by Ray Tracing and by the Thin Lens Equations
A clear glass light bulb is placed 0.750 m from a convex lens having a 0.500 m focal length, as shown in Figure 8.28. Use ray tracing to get an approximate location for the image. Then use the thin lens equations to calculate (a) the location of the image and (b) its magnification. Verify that ray tracing and the thin lens equations produce consistent results.
Figure 8.28 A light bulb placed 0.750 m from a lens having a 0.500 m focal length produces a real image on a poster board as discussed in the example above. Ray tracing predicts the image location and size.
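Before working through the solution, note that rules 1 and 3 can be applied with coordinates rather than a ruler: intersect the ray refracted through the focal point with the undeviated central ray. A short Python sketch (the function name and setup are ours, and it assumes a single thin lens on the y-axis) applied to the numbers of this example:

```python
# Ray tracing with coordinates: object tip at (-do, h), thin lens at x = 0.
def trace_image(do, f, h=1.0):
    # Ray 1: travels parallel to the axis, then is refracted through (f, 0).
    slope1, intercept1 = -h / f, h
    # Ray 2 (rule 3): passes undeviated through the center of the lens.
    slope2, intercept2 = -h / do, 0.0
    # The refracted rays intersect at the image tip (undefined if do == f).
    x = (intercept2 - intercept1) / (slope1 - slope2)
    return x, slope2 * x  # (image distance di, image height hi)

di, hi = trace_image(do=0.750, f=0.500)
print(di, hi)  # 1.5 -2.0: the same result the thin lens equation gives below
```

A negative x from the same intersection signals a virtual image on the object's side of the lens, which is exactly the case 2 situation discussed later.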
Strategy and Concept
Since the object is placed farther away from a converging lens than the focal length of the lens, this situation is analogous to those illustrated in Figure 8.26 and Figure 8.27. Ray tracing to scale should produce similar results for di. Numerical solutions for di and m can be obtained using the thin lens equations, noting that do = 0.750 m and f = 0.500 m.
Solutions
(Ray tracing) The ray tracing to scale in Figure 8.28 shows two rays from a point on the bulb's filament crossing about 1.50 m on the far side of the lens. Thus, the image distance di is about 1.50 m. Similarly, the image height based on ray tracing is greater than the object height by about a factor of 2, and the image is inverted. Thus m is about –2. The minus sign indicates that the image is inverted.
The thin lens equations can be used to find di from the given information:
8.28 1/do + 1/di = 1/f.
Rearranging to isolate di gives
8.29 1/di = 1/f − 1/do.
Entering known quantities gives a value for 1/di:
8.30 1/di = 1/(0.500 m) − 1/(0.750 m) = 0.667/m.
This must be inverted to find di:
8.31 di = (1/0.667) m = 1.50 m.
Note that another way to find di is to rearrange the equation
8.32 1/di = 1/f − 1/do.
This yields the equation for the image distance as
8.33 di = f do / (do − f).
Note that there is no inverting here.
The thin lens equations can be used to find the magnification m, since both di and do are known. Entering their values gives
8.34 m = –di/do = –(1.50 m)/(0.750 m) = –2.00.
Discussion
Note that the minus sign causes the magnification to be negative when the image is inverted. Ray tracing and the use of the thin lens equations produce consistent results. The thin lens equations give the most precise results, being limited only by the accuracy of the given information. Ray tracing is limited by the accuracy with which you can draw, but it is highly useful both conceptually and visually.
Real images, such as the one considered in the previous example, are formed by converging lenses whenever an object is farther from the lens than its focal length. This is true for movie projectors, cameras, and the eye. We shall refer to these as case 1 images. A case 1 image is formed when do > f and f is positive, as in Figure 8.29(a). A summary of the three cases or types of image formation appears at the end of this section.
A different type of image is formed when an object, such as a person's face, is held close to a convex lens. The image is upright and larger than the object, as seen in Figure 8.29(b), and so the lens is called a magnifier. If you slowly pull the magnifier away from the face, you will see that the magnification steadily increases until the image begins to blur. Pulling the magnifier even farther away produces an inverted image as seen in Figure 8.29(a). The distance at which the image blurs, and beyond which it inverts, is the focal length of the lens. To use a convex lens as a magnifier, the object must be closer to the converging lens than its focal length. This is called a case 2 image. A case 2 image is formed when do < f and f is positive.
Figure 8.29 (a) When a converging lens is held farther away from the face than the lens's focal length, an inverted image is formed. This is a case 1 image. Note that the image is in focus but the face is not, because the image is much closer to the camera taking this photograph than the face.
(credit: DaMongMan, Flickr) (b) A magnified image of a face is produced by placing it closer to the converging lens than its focal length. This is a case 2 image. (Casey Fleser, Flickr)
Figure 8.30 uses ray tracing to show how an image is formed when an object is held closer to a converging lens than its focal length. Rays coming from a common point on the object continue to diverge after passing through the lens, but all appear to originate from a point at the location of the image. The image is on the same side of the lens as the object and is farther away from the lens than the object. This image, like all case 2 images, cannot be projected and, hence, is called a virtual image. Light rays only appear to originate at a virtual image; they do not actually pass through that location in space. A screen placed at the location of a virtual image will receive only diffuse light from the object, not focused rays from the lens. Additionally, a screen placed on the opposite side of the lens will receive rays that are still diverging, and so no image will be projected on it. We can see the magnified image with our eyes, because the lens of the eye converges the rays into a real image projected on our retina. Finally, we note that a virtual image is upright and larger than the object, meaning that the magnification is positive and greater than 1.
Figure 8.30 Ray tracing predicts the image location and size for an object held closer to a converging lens than its focal length. Ray 1 enters parallel to the axis and exits through the focal point on the opposite side, while ray 2 passes through the center of the lens without changing path. The two rays continue to diverge on the other side of the lens, but both appear to come from a common point, locating the upright, magnified, virtual image. This is a case 2 image.
Virtual Image
An image that is on the same side of the lens as the object and cannot be projected on a screen is called a virtual image.
Example 8.7 Image Produced by a Magnifying Glass
Suppose the book page in Figure 8.30(a) is held 7.50 cm from a convex lens of focal length 10.0 cm, such as a typical magnifying glass might have. What magnification is produced?
Strategy and Concept
We are given that do = 7.50 cm and f = 10.0 cm, so we have a situation where the object is placed closer to the lens than its focal length. We therefore expect to get a case 2 virtual image with a positive magnification that is greater than 1. Ray tracing produces an image like that shown in Figure 8.30, but we will use the thin lens equations to get numerical solutions in this example.
Solution
To find the magnification m, we use the magnification equation, m = –di/do. We do not have a value for di, so we must first find the location of the image using the thin lens equation. The procedure is the same as followed in the preceding example, where do and f were known. Rearranging the thin lens equation to isolate di gives
8.35 1/di = 1/f − 1/do.
Entering known values, we obtain a value for 1/di:
8.36 1/di = 1/(10.0 cm) − 1/(7.50 cm) = −0.0333/cm.
This must be inverted to find di:
8.37 di = −(1/0.0333) cm = −30.0 cm.
Now the magnification equation can be used to find the magnification m, since both di and do are known. Entering their values gives
8.38 m = −di/do = −(−30.0 cm)/(7.50 cm) = 4.00.
Discussion
A number of results in this example are true of all case 2 images, as well as being consistent with Figure 8.30.
Magnification is indeed positive, as predicted, meaning the image is upright. The magnification is also greater than 1, meaning that the image is larger than the object—in this case, by a factor of 4. Note that the image distance is negative. This means the image is on the same side of the lens as the object. Thus the image cannot be projected and is virtual. Negative values of di occur for virtual images. The image is farther from the lens than the object, since the image distance is greater in magnitude than the object distance. The location of the image is not obvious when you look through a magnifier. In fact, since the image is bigger than the object, you may think the image is closer than the object. But the image is farther away, a fact that is useful in correcting farsightedness, as we shall see in a later section.
A third type of image is formed by a diverging or concave lens. Try looking through eyeglasses meant to correct nearsightedness. (See Figure 8.31.) You will see an image that is upright but smaller than the object. This means that the magnification is positive but less than 1. The ray diagram in Figure 8.32 shows that the image is on the same side of the lens as the object and, hence, cannot be projected—it is a virtual image. Note that the image is closer to the lens than the object. This is a case 3 image, formed for any object by a negative focal length or diverging lens.
Figure 8.31 A car viewed through a concave or diverging lens looks upright. This is a case 3 image. (Daniel Oines, Flickr)
Figure 8.32 Ray tracing predicts the image location and size for a concave or diverging lens. Ray 1 enters parallel to the axis and is bent so that it appears to originate from the focal point. Ray 2 passes through the center of the lens without changing path. The two rays appear to come from a common point, locating the upright image. This is a case 3 image, which is closer to the lens than the object and smaller in height.
Example 8.8 Image Produced by a Concave Lens
Suppose an object such as a book page is held 7.50 cm from a concave lens of focal length –10.0 cm. Such a lens could be used in eyeglasses to correct pronounced nearsightedness. What magnification is produced?
Strategy and Concept
This example is identical to the preceding one, except that the focal length is negative for a concave or diverging lens. The method of solution is thus the same, but the results are different in important ways.
Solution
To find the magnification m, we must first find the image distance di using the thin lens equation
8.39 1/di = 1/f − 1/do,
or its alternative rearrangement
8.40 di = f do / (do − f).
We are given that f = –10.0 cm and do = 7.50 cm. Entering these yields a value for 1/di:
8.41 1/di = 1/(−10.0 cm) − 1/(7.50 cm) = −0.2333/cm.
This must be inverted to find di:
8.42 di = −(1/0.2333) cm = −4.29 cm,
or
8.43 di = (7.50)(−10.0)/(7.50 − (−10.0)) cm = −75.0/17.5 cm = −4.29 cm.
Now the magnification equation can be used to find the magnification m, since both di and do are known. Entering their values gives
8.44 m = −di/do = −(−4.29 cm)/(7.50 cm) = 0.571.
Discussion
A number of results in this example are true of all case 3 images, as well as being consistent with Figure 8.32. Magnification is positive, as predicted, meaning the image is upright. The magnification is also less than 1, meaning the image is smaller than the object—in this case, a little over half its size. The image distance is negative, meaning the image is on the same side of the lens as the object. The image is virtual.
The image is closer to the lens than the object, since the image distance is smaller in magnitude than the object distance. The location of the image is not obvious when you look through a concave lens. In fact, since the image is smaller than the object, you may think it is farther away. But the image is closer than the object, a fact that is useful in correcting nearsightedness, as we shall see in a later section.
Table 8.2 summarizes the three types of images formed by single thin lenses. These are referred to as case 1, 2, and 3 images. Convex, converging, lenses can form either real or virtual images (cases 1 and 2, respectively), whereas concave, diverging, lenses can form only virtual images (always case 3). Real images are always inverted, but they can be either larger or smaller than the object. For example, a slide projector forms an image larger than the slide, whereas a camera makes an image smaller than the object being photographed. Virtual images are always upright and cannot be projected. Virtual images are larger than the object only in case 2, where a convex lens is used. The virtual image produced by a concave lens is always smaller than the object—a case 3 image. We can see and photograph virtual images only by using an additional lens to form a real image.

| Type | Formed When | Image Type | di | m |
| Case 1 | f positive, do > f | real | positive | negative |
| Case 2 | f positive, do < f | virtual | negative | positive, m > 1 |
| Case 3 | f negative | virtual | negative | positive, m < 1 |

Table 8.2 Three Types of Images Formed By Thin Lenses
In Image Formation by Mirrors, we shall see that mirrors can form exactly the same types of images as lenses.
Take-Home Experiment: Concentrating Sunlight
Find several lenses and determine whether they are converging or diverging. In general, those that are thicker near the edges are diverging and those that are thicker near the center are converging. On a bright sunny day, take the converging lenses outside and try focusing the sunlight onto a piece of paper. Determine the focal lengths of the lenses. Be careful because the paper may start to burn, depending on the type of lens you have selected.
Problem-Solving Strategies for Lenses
Step 1. Examine the situation to determine that image formation by a lens is involved.
Step 2. Determine whether ray tracing, the thin lens equations, or both are to be employed. A sketch is very useful even if ray tracing is not specifically required by the problem. Write symbols and values on the sketch.
Step 3. Identify exactly what needs to be determined in the problem (identify the unknowns).
Step 4. Make a list of what is given or can be inferred from the problem as stated (identify the knowns). It is helpful to determine whether the situation involves a case 1, 2, or 3 image. While these are just names for types of images, they have certain characteristics (given in Table 8.2) that can be of great use in solving problems.
Step 5. If ray tracing is required, use the ray tracing rules listed near the beginning of this section.
Step 6. Most quantitative problems require the use of the thin lens equations. These are solved in the usual manner by substituting knowns and solving for unknowns. Several worked examples serve as guides.
Step 7. Check to see if the answer is reasonable: Does it make sense? If you have identified the type of image (case 1, 2, or 3), you should assess whether your answer is consistent with the type of image, magnification, and so on.
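The thin lens equation and Table 8.2 translate directly into a small helper that both solves for the image and names the case. A Python sketch (the function name is ours), checked against the three worked examples:

```python
# Thin lens equation di = f*do/(do - f), m = -di/do, plus the Table 8.2 cases.
def lens_image(do, f):
    di = f * do / (do - f)
    m = -di / do
    if f > 0 and do > f:
        case = "case 1: real, inverted"
    elif f > 0:
        case = "case 2: virtual, upright, enlarged"
    else:
        case = "case 3: virtual, upright, reduced"
    return di, m, case

print(lens_image(0.750, 0.500))  # di = 1.50 m, m = -2.00: Example 8.6
print(lens_image(7.50, 10.0))    # di = -30.0 cm, m = 4.00: Example 8.7
print(lens_image(7.50, -10.0))   # di ~ -4.29 cm, m ~ 0.571: Example 8.8
```

This is also a handy form of Step 7: if the returned case does not match what the geometry of the problem implies, a sign error has crept in somewhere.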
Misconception Alert We do not realize that light rays are coming from every part of the object, passing through every part of the lens, and all can be used to form the final image. We generally feel the entire lens, or mirror, is needed to form an image. Actually, half a lens will form the same, though a fainter, image.
2407
https://askfilo.com/math-question-answers/find-the-perpendicular-distance-from-the-origin-of-the-perpendicular-from-the
[Solved] Find the perpendicular distance from the origin of the perpendicular from the point (1, 2) upon the straight line x − √3 y + 4 = 0
Maths XI (RD Sharma) · Class 11 · Topic: Straight Lines
Question
Find the perpendicular distance from the origin of the perpendicular drawn from the point (1, 2) upon the straight line x − √3 y + 4 = 0.
Text solution (Verified)
The equation of any line perpendicular to x − √3 y + 4 = 0 is √3 x + y + λ = 0.
This line passes through (1, 2).
∴ √3 + 2 + λ = 0 ⇒ λ = −√3 − 2
Substituting the value of λ, we get √3 x + y − √3 − 2 = 0.
Let d be the perpendicular distance from the origin to the line √3 x + y − √3 − 2 = 0:
d = |√3(0) + (0) − √3 − 2| / √(3 + 1) = (√3 + 2)/2
Hence, the required perpendicular distance is (√3 + 2)/2.
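As a quick numerical check, the same distance follows from the standard point-to-line formula d = |ax₀ + by₀ + c| / √(a² + b²) evaluated at the origin; a short Python verification:

```python
from math import sqrt

# The perpendicular from (1, 2) lies along sqrt(3)*x + y - sqrt(3) - 2 = 0;
# distance from the origin (0, 0) to that line:
a, b, c = sqrt(3), 1.0, -sqrt(3) - 2
d = abs(a * 0 + b * 0 + c) / sqrt(a**2 + b**2)
print(d, (sqrt(3) + 2) / 2)  # both print 1.8660..., i.e. (sqrt(3) + 2)/2
```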
2408
https://www.cuemath.com/questions/what-are-the-x-intercepts-of-y-2x2-3x-20/
What are the x-intercepts of y = 2x² - 3x - 20?

Solution: The given function y = 2x² - 3x - 20 is a quadratic function and represents a parabola opening upwards because the coefficient of x² is positive. The x-coordinate of its vertex is given by x = -b/2a. Here b = -3 and a = 2, therefore x = -(-3)/((2)(2)) = 3/4. The y-coordinate of the vertex is y = 2(3/4)² - 3(3/4) - 20 = 2(9/16) - 9/4 - 20 = 9/8 - 9/4 - 20 = -9/8 - 20 = -169/8. The vertex is (3/4, -169/8) or (0.75, -21.125).

The x-intercepts can be determined by either factorizing the given equation or determining its roots:
y = 2x² - 3x - 20
y = 2x² - 8x + 5x - 20
y = 2x(x - 4) + 5(x - 4) = (x - 4)(2x + 5)
To determine the zeros of the function we put y = 0, and therefore (x - 4)(2x + 5) = 0. Hence x - 4 = 0 ⇒ x = 4, and 2x + 5 = 0 ⇒ x = -5/2. Thus the x-intercepts are (4, 0) and (-5/2, 0).

Summary: The x-intercepts of y = 2x² - 3x - 20 are (4, 0) and (-5/2, 0).
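As a cross-check (an addition, not part of the original page), the quadratic formula gives the same zeros and the corrected vertex values computed above; a minimal Python sketch:

```python
import math

# y = 2x^2 - 3x - 20, so a = 2, b = -3, c = -20
a, b, c = 2, -3, -20

disc = b * b - 4 * a * c                 # discriminant = 9 + 160 = 169
x1 = (-b + math.sqrt(disc)) / (2 * a)    # 4.0
x2 = (-b - math.sqrt(disc)) / (2 * a)    # -2.5
print(x1, x2)                            # x-intercepts (4, 0) and (-5/2, 0)

xv = -b / (2 * a)                        # 0.75
yv = a * xv**2 + b * xv + c              # -21.125 = -169/8
print(xv, yv)                            # vertex (3/4, -169/8)
```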
2409
https://courses.lumenlearning.com/suny-osbiology2e/chapter/characteristics-of-protists/
23. Protists Characteristics of Protists Learning Objectives By the end of this section, you will be able to do the following: Describe the cell structure characteristics of protists Describe the metabolic diversity of protists Describe the life cycle diversity of protists There are over 100,000 described living species of protists, and it is unclear how many undescribed species may exist. Since many protists live as commensals or parasites in other organisms and these relationships are often species-specific, there is a huge potential for protist diversity that matches the diversity of their hosts. Because the name “protist” serves as a catchall term for eukaryotic organisms that are not animal, plant, or fungi, it is not surprising that very few characteristics are common to all protists. On the other hand, familiar characteristics of plants and animals are foreshadowed in various protists. Cell Structure The cells of protists are among the most elaborate of all cells. Multicellular plants, animals, and fungi are embedded among the protists in eukaryotic phylogeny. In most plants and animals and some fungi, complexity arises out of multicellularity, tissue specialization, and subsequent interaction because of these features. Although a rudimentary form of multicellularity exists among some of the organisms labelled as “protists,” those that have remained unicellular show how complexity can evolve in the absence of true multicellularity, with the differentiation of cellular morphology and function. A few protists live as colonies that behave in some ways as a group of free-living cells and in other ways as a multicellular organism. Some protists are composed of enormous, multinucleate, single cells that look like amorphous blobs of slime, or in other cases, like ferns. In some species of protists, the nuclei are different sizes and have distinct roles in protist cell function. Single protist cells range in size from less than a micrometer to three meters in length to hectares! Protist cells may be enveloped by animal-like cell membranes or plant-like cell walls. Others are encased in glassy silica-based shells or wound with pellicles of interlocking protein strips. The pellicle functions like a flexible coat of armor, preventing the protist from being torn or pierced without compromising its range of motion. Metabolism Protists exhibit many forms of nutrition and may be aerobic or anaerobic. Those that store energy by photosynthesis belong to a group of photoautotrophs and are characterized by the presence of chloroplasts. Other protists are heterotrophic and consume organic materials (such as other organisms) to obtain nutrition. Amoebas and some other heterotrophic protist species ingest particles by a process called phagocytosis, in which the cell membrane engulfs a food particle and brings it inward, pinching off an intracellular membranous sac, or vesicle, called a food vacuole (Figure 1). In some protists, food vacuoles can be formed anywhere on the body surface, whereas in others, they may be restricted to the base of a specialized feeding structure. The vesicle containing the ingested particle, the phagosome, then fuses with a lysosome containing hydrolytic enzymes to produce a phagolysosome, and the food particle is broken down into small molecules that can diffuse into the cytoplasm and be used in cellular metabolism. Undigested remains ultimately are expelled from the cell via exocytosis. Figure 1. Phagocytosis.
The stages of phagocytosis include the engulfment of a food particle, the digestion of the particle using hydrolytic enzymes contained within a lysosome, and the expulsion of undigested materials from the cell. Subtypes of heterotrophs, called saprobes, absorb nutrients from dead organisms or their organic wastes. Some protists can function as mixotrophs, obtaining nutrition by photoautotrophic or heterotrophic routes, depending on whether sunlight or organic nutrients are available. Motility The majority of protists are motile, but different types of protists have evolved varied modes of movement (Figure 2). Some protists have one or more flagella, which they rotate or whip. Others are covered in rows or tufts of tiny cilia that they beat in a coordinated manner to swim. Still others form cytoplasmic extensions called pseudopodia anywhere on the cell, anchor the pseudopodia to a substrate, and pull themselves forward. Some protists can move toward or away from a stimulus, a movement referred to as taxis. For example, movement toward light, termed phototaxis, is accomplished by coupling their locomotion strategy with a light-sensing organ. Figure 2. Locomotor organelles in protists. Protists use various methods for transportation. (a) Paramecium waves hair-like appendages called cilia to propel itself. (b) Amoeba uses lobe-like pseudopodia to anchor itself to a solid surface and pull itself forward. (c) Euglena uses a whip-like tail called a flagellum to propel itself. Life Cycles Protists reproduce by a variety of mechanisms. Most undergo some form of asexual reproduction, such as binary fission, to produce two daughter cells. In protists, binary fission can be divided into transverse or longitudinal, depending on the axis of orientation; sometimes Paramecium exhibits this method. Some protists such as the true slime molds exhibit multiple fission and simultaneously divide into many daughter cells. Others produce tiny buds that go on to divide and grow to the size of the parental protist. Sexual reproduction, involving meiosis and fertilization, is common among protists, and many protist species can switch from asexual to sexual reproduction when necessary. Sexual reproduction is often associated with periods when nutrients are depleted or environmental changes occur. Sexual reproduction may allow the protist to recombine genes and produce new variations of progeny, some of which may be better suited to surviving changes in a new or changing environment. However, sexual reproduction is often associated with resistant cysts that are a protective, resting stage. Depending on the habitat of the species, the cysts may be particularly resistant to temperature extremes, desiccation, or low pH. This strategy allows certain protists to “wait out” stressors until their environment becomes more favorable for survival or until they are carried (such as by wind, water, or transport on a larger organism) to a different environment, because cysts exhibit virtually no cellular metabolism. Protist life cycles range from simple to extremely elaborate. Certain parasitic protists have complicated life cycles and must infect different host species at different developmental stages to complete their life cycle. Some protists are unicellular in the haploid form and multicellular in the diploid form, a strategy employed by animals. Other protists have multicellular stages in both haploid and diploid forms, a strategy called alternation of generations, analogous to that used by plants.
Habitats Nearly all protists exist in some type of aquatic environment, including freshwater and marine environments, damp soil, and even snow. Several protist species are parasites that infect animals or plants. A few protist species live on dead organisms or their wastes, and contribute to their decay. Section Summary Protists are extremely diverse in terms of their biological and ecological characteristics, partly because they are an artificial assemblage of phylogenetically unrelated groups. Protists display highly varied cell structures, several types of reproductive strategies, virtually every possible type of nutrition, and varied habitats. Most single-celled protists are motile, but these organisms use diverse structures for transportation. Review Questions Protists that have a pellicle are surrounded by ______________. silica dioxide calcium carbonate carbohydrates proteins Show Solution D Protists with the capabilities to perform photosynthesis and to absorb nutrients from dead organisms are called ______________. photoautotrophs mixotrophs saprobes heterotrophs Show Solution B Which of these locomotor organs would likely be the shortest? a flagellum a cilium an extended pseudopod a pellicle Show Solution B Alternation of generations describes which of the following? The haploid form can be multicellular; the diploid form is unicellular. The haploid form is unicellular; the diploid form can be multicellular. Both the haploid and diploid forms can be multicellular. Neither the haploid nor the diploid forms can be multicellular. Show Solution C The amoeba E. histolytica is a pathogen that forms liver abscesses in infected individuals. Its metabolic classification is most likely ______. Anaerobic heterotroph Mixotroph Aerobic phototroph Phagocytic autotroph Show Solution A Free Response Explain in your own words why sexual reproduction can be useful if a protist’s environment changes. Show Solution The ability to perform sexual reproduction allows protists to recombine their genes and produce new variations of progeny that may be better suited to the new environment. In contrast, asexual reproduction generates progeny that are clones of the parent. Giardia lamblia is a cyst-forming protist parasite that causes diarrhea if ingested. Given this information, against what type(s) of environments might G. lamblia cysts be particularly resistant? Show Solution As an intestinal parasite, Giardia cysts would be exposed to low pH in the stomach acids of its host. To survive this environment and reach the intestine, the cysts would have to be resistant to acidic conditions. Explain how the definition of protists ensures that the kingdom Protista includes a wide diversity of cellular structures. Provide an example of two different structures that perform the same function for their respective protists. Show Solution Protists are defined as any eukaryotes that do not fall into the Plantae, Fungi, or Animal Kingdoms.
Since the unifying characteristics describe what they are NOT, rather than what they are, Protista can include almost any cellular/organism organization. Possible examples of structure variety: Barrier to exterior world: cell wall, plasma membrane, pellicle. Locomotion: flagella, cilia, pseudopodia.

Glossary
mixotroph: organism that can obtain nutrition by autotrophic or heterotrophic means, usually facultatively
pellicle: outer cell covering composed of interlocking protein strips that function like a flexible coat of armor, preventing cells from being torn or pierced without compromising their range of motion
phagolysosome: cellular body formed by the union of a phagosome containing the ingested particle with a lysosome that contains hydrolytic enzymes

CC licensed content, shared previously: Biology 2e. Provided by: OpenStax. License: CC BY: Attribution.
2410
https://ltcconline.net/greenl/courses/102/ExponentialsLogarithms/properties_of_logarithms.htm
Properties of Logarithms

Section 9.0B

Find x for log₃ 3^2 = x: rewrite as 3^x = 3^2, therefore x = 2.
log₅ 25 = x: 5^x = 25, so x = 2.
ln e^3 = x: e^x = e^3, so x = 3.
log 10 = x: 10^x = 10, so x = 1.

Inverse properties: log 10^x = x and ln e^x = x. Do not confuse log with ln: log e^x ≠ x and ln 10^x ≠ x.

More inverses (the functions undo each other): (√x)^2 = x, e^(ln x) = x, 10^(log x) = x.

Example 1: Solving exponential equations.
a) 10^x = 3. Take the common log of both sides: log 10^x = log 3. By the inverse property, x = log 3 = 0.4771212... (use a calculator).
b) 2e^(3x) = 5. The base is e, but before taking ln of both sides, isolate e^(3x): e^(3x) = 5/2. Take ln of both sides: ln e^(3x) = ln(5/2). By the inverse property, 3x = ln(5/2). Solve for x: x = ln(5/2)/3 = 0.305430244...

Property: the exponent becomes a multiplier. log A^r = r log A and ln A^r = r ln A.
Proof: Let log A = y; rewrite as 10^y = A. Raise both sides to the power r: (10^y)^r = A^r, i.e. 10^(ry) = A^r. Take the log of both sides: log 10^(ry) = log A^r. By the inverse property, ry = log A^r. But y = log A (given), therefore log A^r = r log A.

Example 3: Solve 3^(2x) = 5. The base is neither 10 nor e, but we can still take the common log of both sides and use the multiplier rule: log 3^(2x) = log 5, so 2x log 3 = log 5, and x = log 5 / (2 log 3) = 0.73248676...

Division becomes subtraction property: log(A/B) = log A − log B and ln(A/B) = ln A − ln B. Note: log A / log B ≠ log A − log B.
Proof: Let a = log A and b = log B. Rewrite: 10^a = A and 10^b = B. Divide: A/B = 10^a / 10^b. By a property of exponents, A/B = 10^(a − b). Take the log of both sides: log(A/B) = log 10^(a − b). By the inverse property, log(A/B) = a − b. By the given, log(A/B) = log A − log B.

Ex 5: Rewrite a) log(5x/3) without a fraction: log 5x − log 3. b) log 3 − log 2x as a single log: log(3/(2x)).

Multiplication becomes addition property: log(AB) = log A + log B and ln(AB) = ln A + ln B. Extra credit: prove this property.

Exercise 6: Rewrite a) log 4x as separate logs: log 4 + log x. b) log 5 + log 3x as a single log: log 15x.

Exercise 7: Using all three properties (multiplier, division, addition), solve log x + log 5 = 3. Addition property: log 5x = 3. Rewrite into exponential form: 10^3 = 5x. Simplify: 1000 = 5x. Solve for x: x = 1000/5 = 200.
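These properties are easy to sanity-check numerically (an illustrative addition, not part of the original notes); a minimal Python sketch using only the standard library:

```python
import math

log, ln = math.log10, math.log  # common log and natural log

# Inverse properties: log 10^x = x and ln e^x = x
assert math.isclose(log(10 ** 2.7), 2.7)
assert math.isclose(ln(math.e ** 2.7), 2.7)

# Exponent becomes multiplier: log A^r = r log A
A, B, r = 7.0, 11.0, 3.2
assert math.isclose(log(A ** r), r * log(A))

# Division becomes subtraction; multiplication becomes addition
assert math.isclose(log(A / B), log(A) - log(B))
assert math.isclose(log(A * B), log(A) + log(B))

# The worked examples from the text
print(log(3))                  # 0.47712125..., solves 10^x = 3
print(ln(5 / 2) / 3)           # 0.30543024..., solves 2e^(3x) = 5
print(log(5) / (2 * log(3)))   # 0.73248676..., solves 3^(2x) = 5
print(10 ** 3 / 5)             # 200.0, solves log x + log 5 = 3
```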
2411
https://www.chegg.com/homework-help/questions-and-answers/calculating-true-driving-costs-lab-estimate-much-costs-operate-vehicle-per-day-per-mile-pe-q53454018
Question: Calculating Your True Driving Costs. In this lab, we will estimate how much it costs you to operate your vehicle on a per-day, per-mile and per-year basis. Follow these steps to acquire and analyze some data about your driving expenses. 1. You must start with a full tank of gas, so go to the gas station and fill your tank up all the way. While you're still at

How can I answer the question for a good report?

Vehicle: Ford Expedition 2007. Engine size: 2WD, 8 cyl, 5.4 L. Mileage: 14 MPG city, 20 MPG highway.
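The lab text is truncated above, but the posted vehicle data is enough to sketch the fuel-only part of the calculation. In the Python sketch below, every input other than the two MPG figures is a made-up assumption (gas price, annual mileage, city/highway split), so treat it as a template rather than an answer:

```python
# Fuel-only cost sketch for the 2007 Ford Expedition described above.
MPG_CITY, MPG_HIGHWAY = 14.0, 20.0   # from the question
CITY_SHARE = 0.6                     # assumed 60% city / 40% highway miles
GAS_PRICE = 3.50                     # assumed USD per gallon
MILES_PER_YEAR = 12_000              # assumed annual mileage

# Gallons per mile, weighted by the miles driven in each mode
gallons_per_mile = CITY_SHARE / MPG_CITY + (1 - CITY_SHARE) / MPG_HIGHWAY
cost_per_mile = gallons_per_mile * GAS_PRICE

print(f"fuel cost per mile: ${cost_per_mile:.3f}")                        # ~$0.22
print(f"fuel cost per year: ${cost_per_mile * MILES_PER_YEAR:,.0f}")      # ~$2,640
print(f"fuel cost per day:  ${cost_per_mile * MILES_PER_YEAR / 365:.2f}")
```

A full report would add fixed costs (insurance, registration, depreciation, maintenance) on top of this fuel figure.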
2412
https://www.loc.gov/exhibits/creating-the-united-states/convention-and-ratification.html
Convention and Ratification - Creating the United States | Exhibitions - Library of Congress

Creating the United States: Convention and Ratification

When delegates to the Constitutional Convention began to assemble at Philadelphia in May 1787, they quickly resolved to replace rather than merely revise the Articles of Confederation. Although James Madison is known as the “father of the constitution,” George Washington’s support gave the convention its hope of success. Division of power between branches of government and between the federal and state governments, slavery, trade, taxes, foreign affairs, representation, and even the procedure to elect a president were just a few of the contentious issues. Diverging plans, strong egos, regional demands, and states’ rights made solutions difficult. Five months of debate, compromise, and creative strategies produced a new constitution creating a federal republic with a strong central government, leaving most of the power with the state governments. Ten months of public and private debate were required to secure ratification by the minimum nine states. Even then Rhode Island and North Carolina held out until after the adoption of a Bill of Rights.

“For we are sent hither to consult not contend, with each other; and Declaration of a fix’d Opinion, and of determined Resolutions never to change it, neither enlighten nor convince us.” Benjamin Franklin, Speech in Congress, June 11, 1787

Philadelphia, Birthplace of the Constitution

Philadelphia, the largest city in the American colonies, and its adjacent rural areas are depicted on this 1752 map. The first illustration of the city’s State House, later called Independence Hall, dominates the upper portion of the map. The map also identifies the owners of many individual properties. Philadelphia was, in essence, the capital of the United States during the Revolutionary War, and the State House was home to the second Continental Congress and the Federal Convention of 1787.

A Map of Philadelphia and Parts Adjacent with a Perspective View of the State House. Philadelphia: N[icholas] Scull and G[eorge] Heap; L[awrence] Hebert sculpt., 1752. Hand colored engraved map. Geography and Map Division, Library of Congress (055.01.00) [Digital ID# ct000294]

Discover! The Virginia Plan

The Virginia delegates to the Constitutional Convention, led by James Madison (1751–1836) and George Washington (1732–1799), prepared a plan of government that provided for proportional representation in a bicameral (two-house) legislature and a strong national government with veto power over state laws.
Virginia’s governor, Edmund Randolph (1753–1813), who ultimately refused to sign the Constitution, presented the plan to the convention on May 29, 1787. The plan, designed to protect the interests of the large states in a strong, national republic, became the basis for debate.

“The Virginia Plan of Government” in James Madison’s notes. Notes of Debates in the Federal Constitutional Convention, May 29, 1787. Manuscript. James Madison Papers, Manuscript Division, Library of Congress (056.01.02) [Digital ID# us0056_01]

“The Virginia Plan of Government” in James Madison’s notes on the Constitutional Convention, May 29, 1787. Manuscript. James Madison Papers, Manuscript Division, Library of Congress (056.01.01) [Digital ID# us0056_01p01]

The Virginia Plan of Government, May 1787. Manuscript in the hand of George Washington. George Washington Papers, Manuscript Division, Library of Congress (56.00.00) [Digital ID#s us0056, us0056_1, us0056_2]

William Paterson Defends New Jersey Plan

William Paterson (1745–1806) presented a plan of government to the Convention that came to be called the “New Jersey Plan.” Paterson wanted to retain a unicameral (one-house) legislature with equal votes of states and have the national legislature elect the executive. This plan maintained the form of government under the Articles of Confederation while adding powers to raise revenue and regulate commerce and foreign affairs.

William Paterson. Notes for Speeches in Convention, June 16, 1787. Manuscript. William Paterson Papers, Manuscript Division, Library of Congress (59.01.00) [Digital ID# us0059_01p1]

Discover! The New Jersey Plan

The New Jersey delegates to the Constitutional Convention, led by William Paterson (1745–1806), proposed an alternative to the Virginia Plan on June 15, 1787. The New Jersey Plan was designed to protect the security and power of the small states by limiting each state to one vote in Congress, as under the Articles of Confederation. Its acceptance would have doomed plans for a strong national government and minimally altered the Articles of Confederation.

“The New Jersey Plan of Government” in James Madison. Notes of Debates in the Federal Constitutional Convention, June 15, 1787. Manuscript. James Madison Papers, Manuscript Division, Library of Congress (057.01.02) [Digital ID#s us0057_01p2, us0057_01p01, us0057_01]

The New Jersey Plan of Government, June 1787. Manuscript in the hand of George Washington. George Washington Papers, Manuscript Division, Library of Congress (57.00.01) [Digital ID#s us0057, us0057_1, us0057_2]

Madison Responds to Paterson’s New Jersey Plan

William Paterson’s New Jersey Plan proposed a unicameral (one-house) legislature with equal votes of states and an executive elected by a national legislature. This plan maintained the form of government under the Articles of Confederation while adding powers to raise revenue and regulate commerce and foreign affairs. James Madison commented on Paterson’s proposed plan in his journal that he maintained during the course of the proceedings.
Madison’s notes, which he refined nightly, have become the most important contemporary record of the debates in the Convention.

James Madison. Notes of Debates in the Federal Constitutional Convention, June 16, 1787. Manuscript. James Madison Papers, Manuscript Division, Library of Congress (059.00.02) [Digital ID# us0059p3]

Pennsylvania State House in Philadelphia

The Pennsylvania State House (known today as “Independence Hall”) in Philadelphia was the site of American government during the revolutionary and early national years. The national Congress held most of its sessions there from 1775 to 1800. Within its walls the Declaration of Independence was adopted, and the Constitution of the United States was debated, drafted, and signed. This print depicts the back of the building, with citizens and Native Americans walking on the lawn.

William Birch & Son. “Back of the State House, Philadelphia,” from The City of Philadelphia in the State of Pennsylvania, North America, As it Appeared in the Year 1800. Etching. Philadelphia: 1800, restrike printed in 1840. Marian S. Carson Collection, Prints and Photographs Division, Library of Congress (055.02.00) [Digital ID# ppmsca-24335]

Convention Rejects Franklin’s Proposed Daily Prayer

Responding to the divisive tension among the delegates that threatened to jeopardize the purpose of the Constitutional Convention, Benjamin Franklin proposed that a clergyman lead a daily prayer to provide divine guidance in resolving differences. The delegates declined the proposal, citing the numerous religious sects represented in the Convention and a lack of funds to pay a chaplain.

Benjamin Franklin. Draft of a speech, June 28, 1787. Manuscript. Benjamin Franklin Papers, Manuscript Division, Library of Congress (058.01.02) [Digital ID# us0058_01p2]

Franklin Soothes Anger

When delegates at the Federal Constitutional Convention became frustrated and angry because of the contentious issue of proportional representation in the new national legislature, Benjamin Franklin (1706–1790) urged “great Coolness and Temper.” James Wilson (1742–1798) from Pennsylvania, reading Franklin’s speech, told the delegates “we are sent here to consult, not to contend, with each other.” As the eldest delegate at the convention, Franklin acted on several occasions to restore harmony and good humor to the proceedings.

Benjamin Franklin. Draft speech, [June 28, 1787]. Manuscript. Benjamin Franklin Papers, Manuscript Division, Library of Congress (058.01.01) [Digital ID#s us0058_01p2, us0058, us0058_1, us0058_01p1]

“Great Compromise” Saves the Convention

By mid-July the representation issue had the Constitutional Convention teetering on the brink of dissolution. Finally, delegates made a “great compromise,” to create a bicameral (two-house) legislature with the states having equal representation in the upper house or senate and the people having proportional representation in the lower house, where all money bills were to originate.
James Madison’s notes on the Constitutional Convention, July 16, 1787. Manuscript. James Madison Papers, Manuscript Division, Library of Congress (59) [Digital ID#s us0059tt_1, us0059tt_2, us0059tt_3]

Committee of Detail

John Rutledge (1739–1800) of South Carolina chaired the five-member Committee of Detail assigned on July 23, 1787, to take the nineteen resolutions adopted by the Convention, a plan presented by South Carolina delegate Charles Pinckney (1757–1824), and the rejected New Jersey Plan, as the basis for producing a draft constitution. The Committee of Detail draft boldly refocused the convention. The multiple annotations by Alexander Hamilton (1757–1804) of New York illustrate the hard work remaining for the delegates.

Draft United States Constitution: Report of the Committee of Detail, ca. August 6, 1787. Printed document with annotations by Charles Cotesworth Pinckney. Charles Cotesworth Pinckney Family Papers, Manuscript Division, Library of Congress (061.03.00) [Digital ID# us0061_03]

Draft United States Constitution: Report of the Committee of Detail, ca. August 6, 1787. Printed document with annotations by James Madison. James Madison Papers, Manuscript Division, Library of Congress (61.02.00) [Digital ID# us0061_02]

Draft United States Constitution: Report of the Committee of Detail, August 6–September 8, 1787. Printed document with annotations by Alexander Hamilton. Alexander Hamilton Papers, Manuscript Division, Library of Congress (61.01.00) [Digital ID# us0061_01]

Draft United States Constitution: Report of the Committee of Detail, ca. August 6, 1787. Printed document with annotations by Convention Secretary William Jackson. William Johnson Papers, Manuscript Division, Library of Congress (61) [Digital ID# us0061]

Report of the Committee of Style

The Committee of Style, chaired by William Samuel Johnson (1727–1819) working with James Madison (1751–1836), Rufus King (1755–1827), and Alexander Hamilton, gave the Constitution its substance. Gouverneur Morris (1752–1816), a delegate from Pennsylvania, is credited with providing the preamble phrase “We the people of the United States, in order to form a more perfect union”—a dramatic change from the opening of the previous version. This simple phrase anchored the new national government in the consent of the people rather than a confederation of states.

Draft United States Constitution: Reports of the Committee of Style, September 8–15, 1787. Printed document with annotations by Charles Cotesworth Pinckney. Charles Cotesworth Pinckney Family Papers, Manuscript Division, Library of Congress (062.04.01) [Digital ID#s us0062_04, us0062_04p1, us0062_04p2, us0062_04p3]

Draft United States Constitution: Report of the Committee of Style, September 8–15, 1787. Printed document with annotations by James Madison. James Madison Papers, Manuscript Division, Library of Congress (062.03.00) [Digital ID#s us0062_03p1, us0062_03p2, us0062_03p3, us0062_03p4]

Draft United States Constitution: Report of the Committee of Style, September 8–15, 1787. Printed document with annotations by Convention Secretary William Jackson.
William Samuel Johnson Papers, Manuscript Division, Library of Congress (62.02.00) [Digital ID#s us0062_02p1, us0062_02p2, us0062_02p3, us0062_02p4]

Draft United States Constitution: Report of the Committee of Style, September 8–15, 1787. Printed document with annotations by Alexander Hamilton. Alexander Hamilton Papers, Manuscript Division, Library of Congress (62.01.00) [Digital ID#s us0062_01p1, us0062_01p2, us0062_01p3, us0062_01p4]

Draft United States Constitution: Report of the Committee of Style, September 8–12, 1787. Printed document with annotations by George Washington and Convention Secretary William Jackson. George Washington Papers, Manuscript Division, Library of Congress (62) [Digital ID#s us0062, us0062_1, us0062_2, us0062_3]

Washington’s Frustrations at the Convention

George Washington, president of the Federal Constitutional Convention, revealed few of the personal conflicts and compromises of the delegates in his daily diary. However, even the unflappable Washington exposed his frustrations when he noted on September 17, 1787, that all delegates to the convention had signed the Constitution except “Govr. [Edmund] Randolph and Colo. [George] Mason from Virginia & Mr. [Elbridge] Gerry from Massachusetts.”

George Washington diary entry, September 17, 1787. Manuscript. George Washington Papers, Manuscript Division, Library of Congress (063.01.00) [Digital ID#s us0063_01, us0063]

Discover! Opposition to the Constitution

As the convention concluded, George Mason (1725–1792) continued to fear an ultra-national constitution and the absence of a bill of rights. On the eve of the Constitution’s adoption on September 17, 1787, Mason noted these major objections on his copy of the Committee of Style draft. Mason sent copies of his objections to friends, from whence they soon appeared in the press.

George Mason. “Objections to the Constitution of Government Formed by the Convention,” ca. September 17, 1787. Manuscript document. George Washington Papers, Manuscript Division, Library of Congress (64.00.01) [Digital ID#s us0064_1, us0064]

“Monarchy or a Republic?”

As the Constitutional Convention adjourned, “a woman [Mrs. Eliza Powell] asks Dr. Franklin well Doctor what have we got a republic or a monarchy? A republic replied the Doctor if you can keep it.” Although this story recorded by James McHenry (1753–1816), a delegate from Maryland, is probably fictitious, people wondered just what kind of government was called for in the new constitution.

James McHenry. Diary, September 18, 1787. Manuscript. James McHenry Papers, Manuscript Division, Library of Congress (63.02.00) [Digital ID# us0063_02p1]

Early Optimism for Acceptance of the New Constitution

Samuel Powel (1739–1793), a Philadelphia political leader, reflects the early optimism for quick acceptance of the new federal Constitution. Such optimism proved premature as Anti-Federalist opponents of the Constitution mounted stiff opposition in key states, such as New York, Massachusetts, and Virginia, but its proponents ultimately prevailed.
Letter from Samuel Powel to George Washington, November 13, 1787. Manuscript. George Washington Papers, Manuscript Division, Library of Congress (67.01.00) [Digital ID# us0067_01p1]

Jefferson’s Concern about Method of Electing President

Because they were serving as American ministers abroad during the constitutional debates, John Adams and Thomas Jefferson were not involved in the Constitutional Convention. Neither saw major flaws in the new constitution. However, Jefferson thought that the legislature would be too restricted and greatly feared that the manner of electing the president would weaken the office. Jefferson asserted that the United States president “seems a bad edition of a Polish King,” a reference to the custom in eighteenth-century Poland of electing kings, which undercut royal authority.

Letter from Thomas Jefferson to John Adams, November 13, 1787. Manuscript. Thomas Jefferson Papers, Manuscript Division, Library of Congress (67) [Digital ID# us0067]

Conflict in Ratification of the Constitution

The process of state ratification of the United States Constitution was a divisive one. This satirical, eighteenth-century engraving touches on some of the major issues in Connecticut politics on the eve of ratification. The two rival factions shown are the “Federals,” supporters of the Constitution who represented the trading interests and were for tariffs on imports, and the “Antifederals,” those committed to agrarian interests and more receptive to paper money issues. Although drawn to portray events in Connecticut, the concepts could be applied throughout the nation.

Amos Doolittle. The Looking Glass for 1787. [New Haven]: 1787. Engraving with watercolor. Prints and Photographs Division, Library of Congress (68) [Digital ID# ppmsca-17522]

Discover! Madison Defends Constitution

In the ensuing debate over adoption of the Constitution, James Madison teamed with Alexander Hamilton and John Jay of New York to write a masterful dissection and analysis of the system of government presented in the Constitution. The eighty-five articles were originally published in New York newspapers as arguments aimed at anti-Federal forces in that state, but their intended scope was far larger. Madison’s Federalist No. X explains what an expanding republic might do if it accepted the basic premise of majority rule, a balanced government of three separate branches, and a commitment to balance all the diverse interests through a system of checks and balances.

Publius (pseudonym for James Madison). The Federalist. No. X in the New York Daily Advertiser, November 22, 1787. Serial and Government Publications Division (68.03.00) [Digital ID# vc6.7a]

The Federalist: A Collection of Essays, Written in Favour of the New Constitution. 2 vols. New York: J. and A. McLean, 1788.
Thomas Jefferson Library, Rare Book and Special Collections Division, Library of Congress (66) [Digital ID# us0066, us0066_1, us0066_2, us0066_3]

The Federalist Papers

The Federalist Papers were a series of eighty-five newspaper essays published anonymously but in fact written in defense of the Constitution by James Madison, John Jay (1745–1829), and Alexander Hamilton. The essays were collected and published as a two-volume work. This edition was once owned by Hamilton’s wife, Elizabeth Schuyler, whose sister gave it to Thomas Jefferson. As his notes indicate, Jefferson attempted to determine the authorship of each essay.

The Federalist: A Collection of Essays, Written in Favour of the New Constitution. 2 vols. New York: J. and A. McLean, 1788. Thomas Jefferson Library, Rare Book and Special Collections Division, Library of Congress (66.00.01) [Digital ID# vc127]

The Federalist: A Collection of Essays, Written in Favour of the New Constitution. 2 vols. New York: J. and A. McLean, 1788. Thomas Jefferson Library, Rare Book and Special Collections Division, Library of Congress (66) [Digital ID# us0066, us0066_1, us0066_2, us0066_3]

James Madison Defends the Constitution

The Federalist Papers, a series of eighty-five newspaper essays published anonymously, were in fact written in defense of the Constitution by James Madison, John Jay (1745–1829), and Alexander Hamilton. In this essay, Madison argues against the criticism that a republic cannot govern a large territory. “A democracy consequently will be confined to a small spot,” wrote Madison, but “A republic may be expanded over a large region.”

[James Madison]. Number XIV. The Federalist: A Collection of Essays, Written in Favour of the New Constitution. 2 vols. New York: J. and A. McLean, 1788. Rare Book and Special Collections Division, Library of Congress (066.02.00) [Digital ID# us0066_02]

Alexander Hamilton Defends the New Constitution

The Federalist Papers, a series of eighty-five newspaper essays published anonymously, were in fact written in defense of the Constitution by James Madison (1751–1836), John Jay (1745–1829), and Alexander Hamilton (1755–1804). In this essay Hamilton opens his argument in support of a strong executive branch with: “the election of the president is pretty well guarded. I venture somewhat further; and hesitate not to affirm, that if the manner of it be not perfect, it is at least excellent. It unites in an eminent degree all the advantages; the union of which was to be desired.” This collected volume was owned and annotated by James Madison.

[Alexander Hamilton]. Number LXVIII. The Federalist: A Collection of Essays, Written in Favour of the New Constitution. 2 vols. New York: J. and A. McLean, 1788. Rare Book and Special Collections Division, Library of Congress (66.01.00) [Digital ID# us0066_01]

Federal Constitution Ratified by Virginia

Before the newly proposed Constitution could become the supreme law of the United States, it would require the ratification of nine states.
New Hampshire and Virginia became the ninth and tenth states to approve the document. Supporters of the Constitution used these state ratifications to pressure the remaining states to approve and join the establishment of the new federal republic. New York followed suit in July 1788, but Rhode Island and North Carolina did not ratify until after the formation of the new government in 1789.

“Ratification of the New Constitution by the Convention of Virginia” in Supplement to the Independent Journal, July 2, 1788. New York: J. and A. McLean. Broadside. Constitutional Convention Broadside Collection, Rare Book and Special Collections Division, Library of Congress (071.03.00) [Digital ID# us0071_03]

New York Parade to Support the New Federal Constitution

On July 23, 1788, a New York City parade of ten divisions of artisans and professionals, preceded by the firing of ten guns, was launched to pressure the New York Ratification Convention. Just days later New York became the eleventh state to ratify the new federal Constitution on July 26, 1788.

Order of procession, in honor of the Constitution of the United States . . . by order of the Committee of Arrangements, Richard Platt, chairman, July 23. New York: 1788. Printed broadside. Rare Book and Special Collections Division, Library of Congress (68.01.00) [Digital ID# us0068_02]
2413
https://nrich.maths.org/tags/decision-mathematics-and-combinatorics
Decision mathematics and combinatorics | NRICH

Decision mathematics and combinatorics. Resources can be filtered by Type, Keystage, and Challenge level. Topics and resource counts:
Flows in a network: 1
Euler's formula: 4
Topology: 28
Combinations: 153
Permutations: 18
Logic: 16
Combinatorics: 56
Optimisation: 11
Networks/graph theory: 45
Critical path analysis: 1
Algorithms: 32
Matching and allocation problems: 1
Zero sum game: 1
Game theory: 2
Linear programming: 0

NRICH is part of the family of activities in the Millennium Mathematics Project.
2414
https://www.youtube.com/watch?v=0_thR8VMFow
2-digit Subtraction with Regrouping using the Base Ten Strategy - Grade 2 Math in Action
Math (and More) in Minutes. 2,600 subscribers. 9 likes. 370 views. Posted: 4 Dec 2022.

Description: Hi Families! Today we will learn how to solve a 2-digit subtraction equation using the base ten strategy.

Transcript: Hi families! Today we are learning how to do two-digit subtraction with regrouping, so here we go. In subtraction, step one, what do we do? Draw the bigger number. How do we draw 56? We only put five in a column, then you start a new column. Okay, then we have to ask ourselves: can we subtract the ones? No, we cannot take away seven ones from six ones. So what do we do? Yes, we trade a ten. 16 ones, good job. Cross that one out so we know that one's not there anymore. Okay, now we could do our subtraction, right? Okay, how many ones do we have to take away? Seven. Count with me: one, two, three, four, five, six, seven. And how many tens do we take away? Three: one, two, three. How many tens do we have left, and how many ones? You have to remember your ones from 56 also. Let's count: one, two, three, four, five, six, seven, eight, nine. So our answer is 19. Good job!
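The trade-a-ten step in the video maps directly onto code; a minimal Python sketch of the same base-ten strategy, using the 56 − 37 example worked in the lesson:

```python
def subtract_with_regrouping(minuend, subtrahend):
    """Two-digit subtraction using the base-ten (trade-a-ten) strategy."""
    tens, ones = divmod(minuend, 10)
    sub_tens, sub_ones = divmod(subtrahend, 10)

    # Can we subtract the ones? If not, trade one ten for ten ones.
    if ones < sub_ones:
        tens -= 1
        ones += 10   # e.g. 6 ones become 16 ones for 56 - 37

    return (tens - sub_tens) * 10 + (ones - sub_ones)

print(subtract_with_regrouping(56, 37))  # 19, matching the video
```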
2415
https://ocw.tau.edu.ng/courses/mathematics/18-700-linear-algebra-fall-2013/
Linear Algebra | Mathematics | MIT OpenCourseWare

18.700 Linear Algebra. Instructor: David Vogan. As Taught In: Fall 2013. Level: Undergraduate.

Gram–Schmidt process, which is a method for orthonormalising a set of vectors in an inner product space. (Image from public domain.)

Course Features: Assignments: problem sets (no solutions).

Course Description: This course offers a rigorous treatment of linear algebra, including vector spaces, systems of linear equations, bases, linear independence, matrices, determinants, eigenvalues, inner products, quadratic forms, and canonical forms of matrices. Compared with 18.06 Linear Algebra, more emphasis is placed on theory and proofs.

Other Versions: OCW Scholar Version: 18.06SC Linear Algebra (Fall 2011). Archived versions: 18.700 Linear Algebra (Fall 2005).

Cite This Course: David Vogan. 18.700 Linear Algebra. Fall 2013. Massachusetts Institute of Technology: MIT OpenCourseWare. License: Creative Commons BY-NC-SA. For more information about using these materials and the Creative Commons license, see the site's Terms of Use.
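The image caption above mentions the Gram–Schmidt process; as an illustrative aside (not course material), a minimal classical Gram–Schmidt sketch in Python under the standard dot product:

```python
import math

def gram_schmidt(vectors):
    """Orthonormalise linearly independent vectors (classical Gram-Schmidt)."""
    basis = []
    for v in vectors:
        w = list(v)
        for q in basis:
            # Remove the component of v along each orthonormal vector found so far.
            proj = sum(vi * qi for vi, qi in zip(v, q))
            w = [wi - proj * qi for wi, qi in zip(w, q)]
        norm = math.sqrt(sum(wi * wi for wi in w))  # normalise the remainder
        basis.append([wi / norm for wi in w])
    return basis

print(gram_schmidt([[3.0, 1.0], [2.0, 2.0]]))
# [[0.9486..., 0.3162...], [-0.3162..., 0.9486...]]
```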
2416
https://www.collinsdictionary.com/us/dictionary/english/haggard
Definition of 'haggard'

haggard (hægərd) adjective. Someone who looks haggard has a tired expression and shadows under their eyes, especially because they are ill or have not had enough sleep. He was pale and a bit haggard. Synonyms: gaunt, wasted, drawn, thin. Collins COBUILD Advanced Learner’s Dictionary. Copyright © HarperCollins Publishers

haggard in American English (ˈhæɡərd) adjective. Origin: MFr hagard, untamed, untamed hawk. 1. falconry: designating a hawk captured after reaching maturity. 2. untamed; unruly; wild. 3. a. wild-eyed b. having a wild, wasted, worn look, as from sleeplessness, grief, or illness; gaunt; drawn. noun 4. falconry: a haggard hawk. Webster’s New World College Dictionary, 5th Digital Edition. Copyright © 2025 HarperCollins Publishers. Derived forms: ˈhaggardly adverb; ˈhaggardness noun.

Haggard (Sir H(enry) Rider) in American English (ˈhæɡərd) 1856–1925; Eng. writer, esp. of adventure novels. Webster’s New World College Dictionary, 5th Digital Edition. Copyright © 2025 HarperCollins Publishers.

haggard in American English (ˈhæɡərd) adjective 1. having a gaunt, wasted, or exhausted appearance, as from prolonged suffering, exertion, or anxiety; worn: the haggard faces of the tired troops. 2. wild; wild-looking: haggard eyes. 3. Falconry (esp. of a hawk caught after it has attained adult plumage): untamed. noun 4. Falconry: a wild or untamed hawk caught after it has assumed adult plumage. SYNONYMS 1. emaciated, drawn, hollow-eyed. ANTONYMS 1. robust. Most material © 2005, 1997, 1991 by Penguin Random House LLC. Modified entries © 2019 by Penguin Random House LLC and HarperCollins Publishers Ltd. Derived forms: haggardly adverb; haggardness noun. Word origin: [1560–70; orig., wild female hawk. See hag1, -ard]

haggard in British English 1 (ˈhæɡəd) adjective 1. careworn or gaunt, as from lack of sleep, anxiety, or starvation. 2. wild or unruly. 3. (of a hawk) having reached maturity in the wild before being caught. noun 4. falconry: a hawk that has reached maturity before being caught. Compare eyas, passage hawk. Collins English Dictionary. Copyright © HarperCollins Publishers. Derived forms: haggardly adverb; haggardness noun. Word origin: C16: from Old French hagard wild; perhaps related to hedge.

haggard in British English 2 (ˈhæɡərd) noun (in Ireland and the Isle of Man) an enclosure beside a farmhouse in which crops are stored. Collins English Dictionary.
Copyright © HarperCollins Publishers
Word origin: C16: related to Old Norse heygarthr, from hey hay + garthr yard

Haggard in British English (ˈhæɡəd) noun
Sir (Henry) Rider, 1856–1925, British author of romantic adventure stories, including King Solomon's Mines (1885)
Collins English Dictionary. Copyright © HarperCollins Publishers

Examples of 'haggard' in a sentence
These examples have been automatically selected and may contain sensitive content that does not reflect the opinions or policies of Collins, or its parent company HarperCollins.
The result is a jerky flickering film of an increasingly haggard man. The Guardian (2015)
The faces still in the room are wan and haggard. The Guardian (2018)
Not too haggard from being holed up sick in his hotel room? The Guardian (2019)
Many of the traditional heavyweights look a little haggard. The Guardian (2019)
A couple of haggard sports journalists squinted unsmilingly at their notepads. The Guardian (2018)
When she finally surfaced, she looked haggard and tearful. Times, Sunday Times (2008)
His pale, haggard face struck her painfully. Elizabeth Gaskell, North and South (1855)
When her husband stood opposite to her, she saw that his face was more haggard. George Eliot, Middlemarch (1872)
But for a man renowned for looking young for his age, he appeared haggard and pale for the first time. The Sun (2014)
His face is haggard; his fingers are gnarled and spindly; and across his lap rests the long stick he now uses to help him walk. Andrew Bridgeford, 1066: and the Hidden History of the Bayeux Tapestry (2004)

Trends of 'haggard': usage over time available from the Google Books Ngram Viewer.
2417
https://www.sciencedirect.com/science/article/abs/pii/S0385814620300870
Suppurative cervical lymphadenitis in adult: An analysis of predictors for surgical drainage
Auris Nasus Larynx, Volume 47, Issue 5, October 2020, Pages 887-894
Chonticha Srivanitchapoom, Kedsaraporn Yata

Abstract

Objective
Lymphadenitis can be treated successfully by empirical antibiotic therapy. However, inflamed lymph nodes can progress into an abscess with a local and/or systemic reaction, which requires more complex treatment strategies. The study aims to analyze possible predictors of abscess formation within inflamed nodes that requires surgical drainage.

Materials and Methods
We retrospectively enrolled 241 patients with acute or sub-acute cervical lymphadenitis. Demographic data, including lymph node characteristics, management, and final diagnosis, were recorded. Predictors of abscess formation within the lymph node that required surgical drainage were evaluated using univariate and multivariate analysis. Patient and lymph node characteristics that differentiated suppurative cervical lymphadenitis (SCL) from other forms of lymphadenitis were also analyzed.

Results
There were 41 cases of SCL, 173 cases of uncomplicated cervical lymphadenitis, and 27 cases of tuberculous cervical lymphadenitis (TBLN). The abscess was surgically drained in 39 patients, while 2 patients received a needle aspiration. In 9 patients, SCL complications included cellulitis of the neck soft tissue, supraglottic swelling, internal jugular vein thrombosis, and sepsis. Two patients were diagnosed with melioidosis and actinomycosis after drainage. Multivariate analysis showed that an immunocompromised host, male sex, and prior inadequate treatment were predictors for surgical drainage. TBLN patients had manifestations similar to those of SCL patients; however, affected nodes in SCL patients were singular, painful, and showed fluctuation.

Conclusions
Following SCL diagnosis, abscess drainage and appropriate antibiotic treatment should be considered. Aspiration or surgical drainage can be effective in certain patients. Pathogen isolation and tissue biopsy should be performed to ensure accurate diagnosis and antibiotic selection. In addition, TBLN and melioidosis should be considered, especially in endemic areas.

Introduction
Lymphadenitis is defined as inflammation of the lymph nodes, usually initiated by an infectious agent [1,2]. Moreover, lymphadenopathy can co-occur with other pre-existing disorders such as malignancy or systemic disease. The infected lymph node can progress to pyogenic inflammation through necrosis and liquefaction, known as suppurative lymphadenitis [1,5]. This can affect the surrounding soft tissue and/or cause a systemic reaction [2,5], particularly in the head and neck area, which is abundant in lymph nodes and other vital structures. This leads to serious complications such as internal jugular vein (IJV) thrombosis and carotid artery aneurysm. Deep neck space infection can also further complicate cervical lymphadenitis. Lymphadenitis can generally be treated empirically using antibiotics [1,2].
However, in pyogenic cases, patients require both abscess drainage and antibiotic treatment for successful outcomes [1,5]. The incidence of lymphadenitis is highest in children aged 4–8 years, after which lymphoid tissue tends to atrophy as part of normal physiology [1,2,5]. Therefore, previous studies have mostly focused on pediatric patients [1,2,5]. Nevertheless, lymphadenitis affects all age groups, including adults. Therefore, this study focuses on acute and sub-acute cervical lymphadenitis in adults to analyze possible predictors of abscess formation within inflamed nodes that require surgical drainage. We also compare the clinical presentation of lymphadenitis and the characteristics of infected lymph nodes between patients with suppurative lymphadenitis and cervical lymphadenitis.

Section snippets

Material and methods
The retrospective study was conducted at the otolaryngology unit and was reported in line with the STROBE guidelines. The protocol of the investigation was approved by the Institutional Review Board. Adult patients (age ≥ 15 years) diagnosed with cervical lymphadenitis between January 2014 and February 2019 were enrolled. All patients had acute (≤2 weeks) or sub-acute (>2 to ≤6 weeks) lymphadenitis. Cervical lymphadenitis was defined as lymphadenopathy in the neck area with signs …

Results
A total of 241 adult patients matched the inclusion criteria. Patients comprised 84 (34.9%) males and 157 (65.1%) females, with a mean age of 40.92 ± 17.94 years (range 15–89). SCL was diagnosed in 41 patients, UCCL in 173 patients, and TBLN in 27 patients. All patient demographic data and lymph node characteristics are summarized in Table 1. In the SCL group, a higher number of patients had underlying disease and a history of primary care treatment than in the other 2 groups. Among 22 patients …

Discussion
To our knowledge, this is the first study to investigate SCL in adults. The etiology of abscess formation within the lymph node includes pyogenic bacterial infection, BCG vaccination, and disorders of the immune system such as interleukin-12 deficiency, interferon-γ receptor deficiency, and chronic neutropenia [5,10]. Similarly, a significant number of immunocompromised hosts were found in this study. Pus can be evaluated by a US or CT scan. The benefits of US include preventing …

Conclusion
The majority of adults with lymphadenitis have UCCL, which can be treated empirically by first-line (penicillin, first-generation cephalosporin, macrolide) and second-line antibiotics (amoxicillin/clavulanic acid or clindamycin) based on evidence of infection. Re-evaluation of treatment strategies should be prompt upon lack of improvement. SCL is the second most common type of lymphadenitis in adults. Following SCL diagnosis, patients should receive adequate pus drainage and proper antibiotic …

Declaration of Competing Interest
None

Funding
This research did not receive any specific grant from funding agencies in the public, commercial, or not-for-profit sectors.

References (26)
J.R. Gosche et al., "Acute, subacute, and chronic cervical lymphadenitis in children," Semin Pediatr Surg (2006)
N. Al-Dajani et al., "Cervical lymphadenitis, suppurative parotitis, thyroiditis, and infected cysts," Infect Dis Clin North Am (2007)
N.A. Quinn et al., "Pediatric lateral neck infections - computed tomography vs ultrasound on initial evaluation," Int J Pediatr Otorhinolaryngol (2018)
H. Kato et al.,
"Necrotic cervical nodes: usefulness of diffusion-weighted MR imaging in the differentiation of suppurative lymphadenitis from malignancy," Eur J Radiol (2013)
A.A. Kimia et al., "Predictors of a drainable suppurative adenitis among children presenting with cervical adenopathy," Am J Emerg Med (2019)
M.W. Sauer et al., "Acute neck infections in children: who is likely to undergo surgical drainage?" Am J Emerg Med (2013)
L. Neff et al., "Microbiology and antimicrobial treatment of pediatric cervical lymphadenitis requiring surgical intervention," Int J Pediatr Otorhinolaryngol (2013)
N.R. Lindquist et al., "Pediatric acute unilateral suppurative lymphadenitis: the role of antibiotic susceptibilities at a large tertiary pediatric care center," Int J Pediatr Otorhinolaryngol (2019)
S.S. Chao et al., "Tuberculous and nontuberculous cervical lymphadenitis: a clinical review," Otolaryngol Head Neck Surg (2002)
M. Singh et al., "Hodgkin's lymphoma masquerading as suppurative lymphadenitis," J Cytol (2017)
J.M. Kim et al., "Cooccurrence of metastatic papillary thyroid carcinoma and salmonella induced neck abscess in a cervical lymph node," Case Rep Med (2017)
I.P. Fraser, "Suppurative lymphadenitis," Curr Infect Dis Rep (2009)
T. Kato et al., "Microbial extracranial aneurysm of the internal carotid artery: complication of cervical lymphadenitis," Ann Otol Rhinol Laryngol (1999)

© 2020 Oto-Rhino-Laryngological Society of Japan Inc. Published by Elsevier B.V. All rights reserved.
2418
https://www.dcpehvpm.org/E-Content/Stat/E%20L%20Lehaman.pdf
Theory of Point Estimation, Second Edition
E.L. Lehmann, George Casella
Springer

Springer Texts in Statistics
Advisors: George Casella, Stephen Fienberg, Ingram Olkin

E.L. Lehmann, Department of Statistics, University of California, Berkeley, Berkeley, CA 94720, USA
George Casella, Department of Statistics, University of Florida, Gainesville, FL 32611-8545, USA

Editorial Board: George Casella (University of Florida), Stephen Fienberg (Carnegie Mellon University), Ingram Olkin (Stanford University)

Library of Congress Cataloging-in-Publication Data:
Lehmann, E.L. (Erich Leo), 1917–
Theory of point estimation. — 2nd ed. / E.L. Lehmann, George Casella.
p. cm. — (Springer texts in statistics)
Includes bibliographical references and index.
ISBN 0-387-98502-6 (hardcover : alk. paper)
1. Fix-point estimation. I. Casella, George. II. Title. III. Series.
QA276.8.L43 1998 519.5′44—dc21 98-16687

Printed on acid-free paper.
© 1998 Springer-Verlag New York, Inc. All rights reserved.
To our children
Stephen, Barbara, and Fia
Benjamin and Sarah
ELL, GC

Preface to the Second Edition

Since the publication in 1983 of Theory of Point Estimation, much new work has made it desirable to bring out a second edition. The inclusion of the new material has increased the length of the book from 500 to 600 pages; of the approximately 1000 references, about 25% have appeared since 1983. The greatest change has been the addition to the sparse treatment of Bayesian inference in the first edition. This includes the addition of new sections on Equivariant, Hierarchical, and Empirical Bayes, and on their comparisons. Other major additions deal with new developments concerning the information inequality and simultaneous and shrinkage estimation. The Notes at the end of each chapter now provide not only bibliographic and historical material but also introductions to recent developments in point estimation and other related topics which, for space reasons, it was not possible to include in the main text. The problem sections also have been greatly expanded. On the other hand, to save space, most of the discussion in the first edition on robust estimation (in particular L, M, and R estimators) has been deleted. This topic is the subject of two excellent books by Hampel et al. (1986) and Staudte and Sheather (1990). Other than subject matter changes, there have been some minor modifications in the presentation. For example, all of the references are now collected together at the end of the text, examples are listed in a Table of Examples, and equations are referenced by section and number within a chapter and by chapter, section, and number between chapters. The level of presentation remains the same as that of TPE. Students with a thorough course in theoretical statistics (from texts such as Bickel and Doksum 1977 or Casella and Berger 1990) would be well prepared. The second edition of TPE is a companion volume to Testing Statistical Hypotheses, Second Edition (TSH2). Between them, they provide an account of classical statistics from a unified point of view. Many people contributed to TPE2 with advice, suggestions, proofreading, and problem solving.
We are grateful to the efforts of John Kimmel for overseeing this project; to Matt Briggs, Lynn Eberly, Rich Levine, and Sam Wu for proofreading and problem solving; to Larry Brown, Anirban DasGupta, Persi Diaconis, Tom DiCiccio, Roger Farrell, Leslaw Gajek, Jim Hobert, Chuck McCulloch, Elias Moreno, Christian Robert, Andrew Rukhin, Bill Strawderman, and Larry Wasserman for discussions and advice on countless topics; and to June Meyermann for transcribing most of TPE to LaTeX. Lastly, we thank Andy Scherrer for repairing the near-fatal hard disk crash and Marty Wells for the almost infinite number of times he provided us with needed references.

E. L. Lehmann, Berkeley, California
George Casella, Ithaca, New York
March 1998

Preface to the First Edition

This book is concerned with point estimation in Euclidean sample spaces. The first four chapters deal with exact (small-sample) theory, and their approach and organization parallel those of the companion volume, Testing Statistical Hypotheses (TSH). Optimal estimators are derived according to criteria such as unbiasedness, equivariance, and minimaxity, and the material is organized around these criteria. The principal applications are to exponential and group families, and the systematic discussion of the rich body of (relatively simple) statistical problems that fall under these headings constitutes a second major theme of the book.

A theory of much wider applicability is obtained by adopting a large-sample approach. The last two chapters are therefore devoted to large-sample theory, with Chapter 5 providing a fairly elementary introduction to asymptotic concepts and tools. Chapter 6 establishes the asymptotic efficiency, in sufficiently regular cases, of maximum likelihood and related estimators, and of Bayes estimators, and presents a brief introduction to the local asymptotic optimality theory of Hajek and LeCam. Even in these two chapters, however, attention is restricted to Euclidean sample spaces, so that estimation in sequential analysis, stochastic processes, and function spaces, in particular, is not covered.

The text is supplemented by numerous problems. These and references to the literature are collected at the end of each chapter. The literature, particularly when applications are included, is so enormous and spread over the journals of so many countries and so many specialties that complete coverage did not seem feasible. The result is a somewhat inconsistent coverage which, in part, reflects my personal interests and experience.

It is assumed throughout that the reader has a good knowledge of calculus and linear algebra. Most of the book can be read without more advanced mathematics (including the sketch of measure theory which is presented in Section 1.2 for the sake of completeness) if the following conventions are accepted.

1. A central concept is that of an integral such as ∫f dP or ∫f dµ. This covers both the discrete and continuous case. In the discrete case, ∫f dP becomes Σf(xi)P(xi), where P(xi) = P(X = xi), and ∫f dµ becomes Σf(xi). In the continuous case, ∫f dP and ∫f dµ become, respectively, ∫f(x)p(x) dx and ∫f(x) dx. Little is lost (except a unified notation and some generality) by always making these substitutions.

2. When specifying a probability distribution, P, it is necessary to specify not only the sample space X, but also the class of sets over which P is to be defined.
In nearly all examples, X will be a Euclidean space and B a large class of sets, the so-called Borel sets, which in particular includes all open and closed sets. The references to B can be ignored with practically no loss in the understanding of the statistical aspects.

A forerunner of this book appeared in 1950 in the form of mimeographed lecture notes taken by Colin Blyth during a course I taught at Berkeley; they subsequently provided a text for the course until the stencils gave out. Some sections were later updated by Michael Stuart and Fritz Scholz. Throughout the process of converting this material into a book, I greatly benefited from the support and advice of my wife, Juliet Shaffer. Parts of the manuscript were read by Rudy Beran, Peter Bickel, Colin Blyth, Larry Brown, Fritz Scholz, and Geoff Watson, all of whom suggested many improvements. Sections 6.7 and 6.8 are based on material provided by Peter Bickel and Chuck Stone, respectively. Very special thanks are due to Wei-Yin Loh, who carefully read the complete manuscript at its various stages and checked all the problems. His work led to the corrections of innumerable errors and to many other improvements. Finally, I should like to thank Ruth Suzuki for her typing, which by now is legendary, and Sheila Gerber for her expert typing of many last-minute additions and corrections.

E.L. Lehmann, Berkeley, California, March 1983

Contents

Preface to the Second Edition
Preface to the First Edition
List of Tables
List of Figures
List of Examples
Table of Notation
1 Preparations: The Problem; Measure Theory and Integration; Probability Theory; Group Families; Exponential Families; Sufficient Statistics; Convex Loss Functions; Convergence in Probability and in Law; Problems; Notes
2 Unbiasedness: UMVU Estimators; Continuous One- and Two-Sample Problems; Discrete Distributions; Nonparametric Families; The Information Inequality; The Multiparameter Case and Other Extensions; Problems; Notes
3 Equivariance: First Examples; The Principle of Equivariance; Location-Scale Families; Normal Linear Models; Random and Mixed Effects Models; Exponential Linear Models; Finite Population Models; Problems; Notes
4 Average Risk Optimality: Introduction; First Examples; Single-Prior Bayes; Equivariant Bayes; Hierarchical Bayes; Empirical Bayes; Risk Comparisons; Problems; Notes
5 Minimaxity and Admissibility: Minimax Estimation; Admissibility and Minimaxity in Exponential Families; Admissibility and Minimaxity in Group Families; Simultaneous Estimation; Shrinkage Estimators in the Normal Case; Extensions; Admissibility and Complete Classes; Problems; Notes
6 Asymptotic Optimality: Performance Evaluations in Large Samples; Asymptotic Efficiency; Efficient Likelihood Estimation; Likelihood Estimation: Multiple Roots; The Multiparameter Case; Applications; Extensions; Asymptotic Efficiency of Bayes Estimators; Problems; Notes
References
Author Index
Subject Index
Table of Notation

The following notation will be used throughout the book. We present this list for easy reference.

Random variable: X, Y (uppercase).
Sample space: X, Y (uppercase script Roman letters).
Parameter: θ, λ (lowercase Greek letters).
Parameter space: Ω (uppercase script Greek letters).
Realized values (data): x, y (lowercase).
Distribution function (cdf): F(x), F(x|θ), P(x|θ), Fθ(x), Pθ(x); continuous or discrete.
Density function (pdf): f(x), f(x|θ), p(x|θ), fθ(x), pθ(x); the notation is "generic," i.e., don't assume f(x|y) = f(x|z).
Prior distribution: Λ(γ), Λ(γ|λ).
Prior density: π(γ), π(γ|λ); may be improper.
Probability triple: (X, P, B), the sample space, probability distribution, and sigma-algebra of sets.
Vector: h = (h1, . . . , hn) = {hi}; boldface signifies vectors.
Matrix: H = {hij} = ||hij||; uppercase signifies matrices.
Special matrices and vectors: I (identity matrix), 1 (vector of ones), J = 11′ (matrix of ones).
Dot notation: hi· = (1/J) Σj=1..J hij, the average across the dotted subscript.
Gradient: ∇h(x) = (∂h(x)/∂x1, . . . , ∂h(x)/∂xn) = {∂h(x)/∂xi}, the vector of partial derivatives.
Hessian: ∇∇h(x) = {∂²h(x)/∂xi∂xj}, the matrix of second partial derivatives.
Jacobian: {∂hi(x)/∂xj}, the matrix of derivatives.
Laplacian: Σi ∂²h(x)/∂xi², the sum of second derivatives.
Euclidean norm: |x| = (Σ xi²)^(1/2).
Indicator function: IA(x), I(x ∈ A), or I(x < a); equals 1 if x ∈ A, 0 otherwise.
Big "Oh," little "oh": O(n), o(n) or Op(n), op(n); as n → ∞, O(n)/n → constant and o(n)/n → 0; the subscript p denotes "in probability."

CHAPTER 1
Preparations

1 The Problem

Statistics is concerned with the collection of data and with their analysis and interpretation. We shall not consider the problem of data collection in this book but shall take the data as given and ask what they have to tell us. The answer depends not only on the data, on what is being observed, but also on background knowledge of the situation; the latter is formalized in the assumptions with which the analysis is entered. There have, typically, been three principal lines of approach:

Data analysis. Here, the data are analyzed on their own terms, essentially without extraneous assumptions. The principal aim is the organization and summarization of the data in ways that bring out their main features and clarify their underlying structure.

Classical inference and decision theory. The observations are now postulated to be the values taken on by random variables which are assumed to follow a joint probability distribution, P, belonging to some known class P.
Frequently, the distributions are indexed by a parameter, say θ (not necessarily real-valued), taking values in a set Ω, so that

P = {Pθ, θ ∈ Ω}.   (1.1)

The aim of the analysis is then to specify a plausible value for θ (this is the problem of point estimation), or at least to determine a subset of Ω of which we can plausibly assert that it does, or does not, contain θ (estimation by confidence sets or hypothesis testing). Such a statement about θ can be viewed as a summary of the information provided by the data and may be used as a guide to action.

Bayesian analysis. In this approach, it is assumed in addition that θ is itself a random variable (though unobservable) with a known distribution. This prior distribution (specified according to the problem) is modified in light of the data to determine a posterior distribution (the conditional distribution of θ given the data), which summarizes what can be said about θ on the basis of the assumptions made and the data.

These three methods of approach permit increasingly strong conclusions, but they do so at the price of assumptions which are correspondingly more detailed and possibly less reliable. It is often desirable to use different formulations in conjunction; for example, by planning a study (e.g., determining sample size) under rather detailed assumptions but performing the analysis under a weaker set which appears more trustworthy. In practice, it is often useful to model a problem in a number of different ways. One may then be satisfied if there is reasonable agreement among the conclusions; in the contrary case, a closer examination of the different sets of assumptions will be indicated.

In this book, Chapters 2, 3, and 5 will be primarily concerned with the second formulation, Chapter 4 with the third. Chapter 6 considers a large-sample treatment of both. (A book-length treatment of the first formulation is Tukey's classic Exploratory Data Analysis, or the more recent book by Hoaglin, Mosteller, and Tukey 1985, which includes the interesting approach of Diaconis 1985.) Throughout the book, we shall try to specify what is meant by a "best" statistical procedure for a given problem and to develop methods for determining such procedures. Ideally, this would involve a formal decision-theoretic evaluation of the problem resulting in an optimal procedure. Unfortunately, there are difficulties with this approach, partially caused by the fact that there is no unique, convincing definition of optimality. Compounding this lack of consensus about optimality criteria is that there is also no consensus about the evaluation of such criteria. For example, even if it is agreed that squared error loss is a reasonable criterion, the method of evaluation, be it Bayesian, frequentist (the classical approach of averaging over repeated experiments), or conditional, must then be agreed upon. Perhaps even more serious is the fact that the optimal procedure and its properties may depend very heavily on the precise nature of the assumed probability model (1.1), which often rests on rather flimsy foundations. It therefore becomes important to consider the robustness of the proposed solution under deviations from the model. Some aspects of robustness, from both Bayesian and frequentist perspectives, will be taken up in Chapters 4 and 5. The discussion so far has been quite general; let us now specialize to point estimation.
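As a minimal numerical illustration of the Bayesian updating described above (my example, not from the text), consider the standard conjugate beta-binomial model: a Beta(a, b) prior on a success probability θ, updated by k successes in n trials. The hyperparameters and data below are hypothetical; the posterior mean is the Bayes point estimate under squared error loss, anticipating (1.13):

```python
# Conjugate beta-binomial update: prior Beta(a, b) on theta,
# observe k successes in n Bernoulli(theta) trials.
a, b = 2.0, 2.0   # hypothetical prior hyperparameters
n, k = 10, 7      # hypothetical data

# The posterior is Beta(a + k, b + n - k); no numerical integration needed.
post_a, post_b = a + k, b + (n - k)
print(f"posterior mean = {post_a / (post_a + post_b):.3f}")  # 0.643 here
```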
In terms of the model (1.1), suppose that g is a real-valued function defined over Ω and that we would like to know the value of g(θ) (which may, of course, be θ itself). Unfortunately, θ, and hence g(θ), is unknown. However, the data can be used to obtain an estimate of g(θ), a value that one hopes will be close to g(θ).

Point estimation is one of the most common forms of statistical inference. One measures a physical quantity in order to estimate its value; surveys are conducted to estimate the proportion of voters favoring a candidate or viewers watching a television program; agricultural experiments are carried out to estimate the effect of a new fertilizer, and clinical experiments to estimate the improved life expectancy or cure rate resulting from a medical treatment. As a prototype of such an estimation problem, consider the determination of an unknown quantity by measuring it.

Example 1.1 The measurement problem. A number of measurements are taken of some quantity, for example, a distance (or temperature), in order to obtain an estimate of the quantity θ being measured. If the n measured values are x1, . . . , xn, a common recommendation is to estimate θ by their mean

x̄ = (x1 + · · · + xn)/n.

The idea of averaging a number of observations to obtain a more precise value seems so commonplace today that it is difficult to realize it has not always been in use. It appears to have been introduced only toward the end of the seventeenth century (see Plackett, 1958). But why should the observations be combined in just this way? The following are two properties of the mean, which were used in early attempts to justify this procedure.

(i) An appealing approximation to the true value being measured is the value a for which the sum of squared differences Σ(xi − a)² is a minimum. That this least squares estimate of θ is x̄ is seen from the identity

Σ(xi − a)² = Σ(xi − x̄)² + n(x̄ − a)²,   (1.2)

since the first term on the right side does not involve a and the second term is minimized by a = x̄. (For the history of least squares, see Eisenhart 1964, Plackett 1972, Harter 1974–1976, and Stigler 1981. Least squares estimation will be discussed in a more general setting in §3.4.)

(ii) The least squares estimate defined in (i) is the value minimizing the sum of the squared residuals, the residuals being the differences between the observations xi and the estimated value. Another approach is to ask for the value a for which the sum of the residuals is zero, so that the positive and negative residuals are in balance. The condition on a is

Σ(xi − a) = 0,   (1.3)

and this again immediately leads to a = x̄. (That the two conditions lead to the same answer is, of course, obvious since (1.3) expresses that the derivative of (1.2) with respect to a is zero.)

These two principles clearly belong to the first (data analytic) level mentioned at the beginning of the section. They derive the mean as a reasonable descriptive measure of the center of the observations, but they cannot justify x̄ as an estimate of the true value θ since no explicit assumption has been made connecting the observations xi with θ. To establish such a connection, let us now assume that the xi are the observed values of n independent random variables which have a common distribution depending on θ. Eisenhart (1964) attributes the crucial step of introducing such probability models for this purpose to Simpson (1755).
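A quick numerical check of properties (i) and (ii), as a minimal sketch assuming nothing beyond the identities (1.2) and (1.3); the data values are made up for illustration:

```python
import numpy as np

x = np.array([2.1, 1.9, 2.4, 2.0, 2.2])   # hypothetical measurements
xbar = x.mean()

# (i) Identity (1.2): sum (x_i - a)^2 = sum (x_i - xbar)^2 + n*(xbar - a)^2,
#     so the sum of squares is smallest at a = xbar.
for a in (xbar - 0.1, xbar, xbar + 0.1):
    lhs = np.sum((x - a) ** 2)
    rhs = np.sum((x - xbar) ** 2) + len(x) * (xbar - a) ** 2
    assert np.isclose(lhs, rhs)
    print(f"a = {a:.3f}: sum of squares = {lhs:.4f}")  # minimized at a = xbar

# (ii) The residuals about the mean balance, as in (1.3): sum (x_i - xbar) = 0.
print(np.isclose(np.sum(x - xbar), 0.0))  # True
```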
More specifically, we shall assume that Xi = θ + Ui, where the measurement error Ui is distributed according to a distribution F symmetric about 0, so that the Xi are symmetrically distributed about θ with distribution

P(Xi ≤ x) = F(x − θ).   (1.4)

In terms of this model, can we now justify the idea that the mean provides a more precise value than a single observation? The second of the approaches mentioned at the beginning of the section (classical inference) suggests the following kind of consideration. If the X's are independent and have a finite variance σ², the variance of the mean X̄ is σ²/n; the expected squared difference between X̄ and θ is therefore only 1/n of what it is for a single observation. However, if the X's have a Cauchy distribution, the distribution of X̄ is the same as that of a single Xi (Problem 1.6), so that nothing is gained by taking several measurements and then averaging them. Whether X̄ is a reasonable estimator of θ thus depends on the nature of the Xi. ∥

This example suggests that the formalization of an estimation problem involves two basic ingredients:

(a) A real-valued function g defined over a parameter space Ω, whose value at θ is to be estimated; we shall call g(θ) the estimand. [In Example 1.1, g(θ) = θ.]

(b) A random observable X (typically vector-valued) taking on values in a sample space X according to a distribution Pθ, which is known to belong to a family P as stated in (1.1). [In Example 1.1, X = (X1, . . . , Xn), where the Xi are independently, identically distributed (iid) and their distribution is given by (1.4). The observed value x of X constitutes the data.]

The problem is the determination of a suitable estimator.

Definition 1.2 An estimator is a real-valued function δ defined over the sample space. It is used to estimate an estimand, g(θ), a real-valued function of the parameter.

Of course, it is hoped that δ(X) will tend to be close to the unknown g(θ), but such a requirement is not part of the formal definition of an estimator. The value δ(x) taken on by δ(X) for the observed value x of X is the estimate of g(θ), which will be our "educated guess" for the unknown value.

One could adopt a slightly more restrictive definition than Definition 1.2. In applications, it is often desirable to restrict δ to possible values of g(θ), for example, to be positive when g takes on only positive values, to be integer-valued when g is, and so on. For the moment, however, it is more convenient not to impose this additional restriction.

The estimator δ is to be close to g(θ), and since δ(X) is a random variable, we shall interpret this to mean that it will be close on the average. To make this requirement precise, it is necessary to specify a measure of the average closeness of (or distance from) an estimator to g(θ). Examples of such measures are

P(|δ(X) − g(θ)| < c) for some c > 0   (1.5)

and

E|δ(X) − g(θ)|^p for some p > 0.   (1.6)

(Of these, we want the first to be large and the second to be small.) If g and δ take on only positive values, one may be interested in E|δ(X)/g(θ) − 1|^p, which suggests generalizing (1.6) to

κ(θ) E|δ(X) − g(θ)|^p.   (1.7)

Quite generally, suppose that the consequences of estimating g(θ) by a value d are measured by L(θ, d). Of the loss function L, we shall assume that

L(θ, d) ≥ 0 for all θ, d   (1.8)

and

L[θ, g(θ)] = 0 for all θ,   (1.9)

so that the loss is zero when the correct value is estimated.
The accuracy, or rather inaccuracy, of an estimator δ is then measured by the risk function

R(θ, δ) = Eθ{L[θ, δ(X)]},   (1.10)

the long-term average loss resulting from the use of δ. One would like to find a δ which minimizes the risk for all values of θ. As stated, this problem has no solution. For, by (1.9), it is possible to reduce the risk at any given point θ0 to zero by making δ(x) equal to g(θ0) for all x. There thus exists no uniformly best estimator, that is, no estimator which simultaneously minimizes the risk for all values of θ, except in the trivial case that g(θ) is constant.

One way of avoiding this difficulty is to restrict the class of estimators by ruling out estimators that too strongly favor one or more values of θ at the cost of neglecting other possible values. This can be achieved by requiring the estimator to satisfy some condition which enforces a certain degree of impartiality. One such condition requires that the bias Eθ[δ(X)] − g(θ), sometimes called the systematic error, of the estimator δ be zero, that is, that

Eθ[δ(X)] = g(θ) for all θ ∈ Ω.   (1.11)

This condition of unbiasedness ensures that, in the long run, the amounts by which δ over- and underestimates g(θ) will balance, so that the estimated value will be correct "on the average." A somewhat similar condition is obtained by considering not the amount but only the frequency of over- and underestimation. This leads to the condition

Pθ[δ(X) < g(θ)] = Pθ[δ(X) > g(θ)]   (1.12)

or, slightly more generally, to the requirement that g(θ) be a median of δ(X) for all values of θ. To distinguish it from this condition of median-unbiasedness, (1.11) is called mean-unbiasedness if there is a possibility of confusion. Mean-unbiased estimators, due to Gauss and perhaps the most classical of all frequentist constructions, are treated in Chapter 2. There, we will also consider performance assessments that naturally arise from unbiasedness considerations. [A more general unbiasedness concept, of which (1.11) and (1.12) are special cases, will be discussed in Section 3.1.]

A different impartiality condition can be formulated when symmetries are present in a problem. It is then natural to require a corresponding symmetry to hold for the estimator. The resulting condition of equivariance will be explored in Chapter 3 and will also play a role in the succeeding chapters.

In many important situations, unbiasedness and equivariance lead to estimators that are uniformly best among the estimators satisfying these restrictions. Nevertheless, the applicability of both conditions is limited. There is an alternative approach which is more generally applicable. Instead of seeking an estimator which minimizes the risk uniformly in θ, one can more modestly ask that the risk function be low only in some overall sense. Two natural global measures of the size of the risk are the average

∫ R(θ, δ) w(θ) dθ   (1.13)

for some weight function w, and the maximum of the risk function

supθ R(θ, δ).   (1.14)

The estimator minimizing (1.13) (discussed in Chapter 4) formally coincides with the Bayes estimator when θ is assumed to be a random variable with probability density w. Minimizing (1.14) leads to the minimax estimator, which will be considered in Chapter 5.

The formulation of an estimation problem in a concrete situation along the lines described in this chapter requires specification of the probability model (1.1) and of a measure of inaccuracy L(θ, d).
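To make the risk function (1.10) concrete, here is a small Monte Carlo sketch (my illustration, not from the text) that approximates the squared-error risk of the sample mean and the sample median in the location model Xi = θ + Ui, for normal versus Cauchy errors, echoing the point of Example 1.1 that the mean's behavior depends on the error distribution:

```python
import numpy as np

rng = np.random.default_rng(0)
theta, n, reps = 0.0, 25, 20_000   # true value, sample size, Monte Carlo replications

def risk(estimator, errors):
    """Approximate R(theta, delta) = E[(delta(X) - theta)^2] by simulation."""
    x = theta + errors           # reps-by-n matrix: each row is one sample
    d = estimator(x, axis=1)     # the estimate computed from each sample
    return np.mean((d - theta) ** 2)

for name, draw in [("normal", rng.standard_normal),
                   ("Cauchy", rng.standard_cauchy)]:
    u = draw((reps, n))
    print(f"{name} errors: mean risk = {risk(np.mean, u):.3f}, "
          f"median risk = {risk(np.median, u):.3f}")

# Under normal errors the mean has the smaller risk; under Cauchy errors the
# mean's simulated risk is huge and unstable (its distribution matches that of
# a single observation), while the median remains well behaved.
```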
In the measurement problem of Example 1.1 and its generalizations to linear models, it is frequently reasonable to assume that the measurement errors are approximately normally distributed. In other situations, the assumptions underlying a binomial or Poisson distribution may be appropri-ate. Thus, knowledge of the circumstances and previous experience with similar situations will often suggest a particular parametric family P of distributions. If such information is not available, one may instead adopt a nonparametric model, which requires only very general assumptions such as independence or symmetry but does not lead to a particular parametric family of distributions. As a compro-mise between these two approaches, one may be willing to assume that the true distribution, though not exactly following a particular parametric form, lies within a stated distance of some parametric family. For a theory of such neighborhood models see, for example, Huber (1981) or TSH2, Section 9.3. The choice of an appropriate model requires judgment and utilizes experience; it is also affected by considerations of convenience. Analogous considerations for choice of the loss function L appear to be much more difficult. The most common fate of a point estimate (for example, of the distance of a star or the success probability of an operation) is to wind up in a research report or paper. It is likely to be used on different occasions and in various settings for a variety of purposes which cannot be foreseen at the time the estimate is made. Under these circumstances, one wants the estimator to be accurate, but just what measure of accuracy should be used is fairly arbitrary. This was recognized very clearly by Laplace (1820) and Gauss (1821), who compared the estimation of an unknown quantity, on the basis of observations with random errors, with a game of chance and the error in the estimated value with the loss resulting from such a game. Gauss proposed the square of the error as a measure of loss or inaccuracy. Should someone object to this specification as arbitrary, he writes, he is in complete agreement. He defends his choice by an appeal to mathematical simplicity and convenience. Among the infinite variety of possible functions for the purpose, the square is the simplest and is therefore preferable. When estimates are used to make definite decisions (for example, to determine the amount of medication to be given a patient or the size of an order that a store 1.2 ] MEASURE THEORY AND INTEGRATION 7 should place for some goods), it is sometimes possible to specify the loss function by the consequences of various errors in the estimate. A general discussion of the distinction between inference and decision problems is given by Blyth (1970) and Barnett (1982). Actually, it turns out that much of the general theory does not require a detailed specification of the loss function but applies to large classes of such functions, in particular to loss functions L(θ, d), which are convex in d. [For example, this includes (1.7) with p ≥1 but not with p < 1. It does not include (1.5)]. We shall develop here the theory for suitably general classes of loss functions whenever the cost in complexity is not too high. However, in applications to specific examples — and these form a large part of the subject — the choice of squared error as loss has the twofold advantage of ease of computation and of leading to estimators that can be obtained explicitly. 
For these reasons, in the examples we shall typically take the loss to be squared error. Theoretical statistics builds on many different branches of mathematics, from set theory and algebra to analysis and probability. In this chapter, we will present an overview of some of the most relevant topics needed for the statistical theory to follow. 2 Measure Theory and Integration A convenient framework for theoretical statistics is measure theory in abstract spaces. The present section will sketch (without proofs) some of the principal concepts, results, and notational conventions of this theory. Such a sketch should provide sufficient background for a comfortable understanding of the ideas and results and the essentials of most of the proofs in this book. A fuller account of measure theory can be found in many standard books, for example, Halmos (1950), Rudin (1966), Dudley (1989), and Billingsley (1995). The most natural example of a “measure” is that of the length, area, or volume of sets in one-, two-, or three-dimensional Euclidean space. As in these special cases, a measure assigns non-negative (not necessarily finite) values to sets in some space X. A measure µ is thus a set function; the value it assigns to a set A will be denoted by µ(A). In generalization of the properties of length, area, and volume, a measure will be required to be additive, that is, to satisfy µ(A ∪B) = µ(A) + µ(B) when A, B are disjoint, (2.1) where A ∪B denotes the union of A and B. From (2.1), it follows immediately by induction that additivity extends to any finite union of disjoint sets. The measures with which we shall be concerned will be required to satisfy the stronger condition of sigma-additivity, namely that µ  ∞ i=1 Ai = ∞ i=1 µ(Ai) (2.2) for any countable collection of disjoint sets. 8 PREPARATIONS [ 1.2 The domain over which a measure µ is defined is a class of subsets of X. It would seem easiest to assume that this is the class of all subsets of X. Unfortunately, it turns out that typically it is not possible to give a satisfactory definition of the measures of interest for all subsets of X in such a way that (2.2) holds. [Such a negative statement holds in particular for length, area, and volume (see, for example, Halmos (1950), p. 70) but not for the measure µ of Example 2.1 below.] It is therefore necessary to restrict the definition of µ to a suitable class of subsets of X. This class should contain the whole space X as a member, and for any set A also its complement X −A. In view of (2.2), it should also contain the union of any countable collection of sets of the class. A class of sets satisfying these conditions is called a σ-field or σ-algebra. It is easy to see that if A1, A2, . . . are members of a σ-field A, then so are their union and intersection (Problem 2.1). If A is a σ-field of subsets of a space X, then (X, A) is said to be a measurable space and the sets A of A to be measurable. A measure µ is a nonnegative set function defined over a σ-field A and satisfying (2.2). If µ is a measure defined over a measurable space (X, A), the triple (X, A, µ) is called a measure space. A measure is σ-finite if there exist sets Ai in A whose union is X and such that µ(Ai) < ∞. All measures with which we shall be concerned in this book are σ-finite, and we shall therefore use the term measure to mean a σ-finite measure. The following are two important examples of measure spaces. Example 2.1 Counting measure. Let X be countable and A the class of all subsets of X. 
For any A in A, let µ(A) be the number of points of A if A is finite, and µ(A) = ∞ otherwise. This measure µ is called counting measure. That µ is σ-finite is obvious. ∥
Example 2.2 Lebesgue measure. Let X be n-dimensional Euclidean space En, and let A be the smallest σ-field containing all open rectangles
A = {(x1, . . . , xn) : ai < xi < bi}, −∞ < ai < bi < ∞. (2.3)
We shall then say that (X, A) is Euclidean. The members of A are called Borel sets. This is a very large class which contains, among others, all open and all closed subsets of X. There exists a (unique) measure µ, defined over A, which assigns to (2.3) the measure
µ(A) = (b1 − a1) · · · (bn − an), (2.4)
that is, its volume; µ is called Lebesgue measure. ∥
The intuitive meaning of measure suggests that any subset of a set of measure zero should again have measure zero. If (X, A, µ) is a measure space, it may, however, happen that a subset of a set in A which has measure zero is not in A and hence not measurable. This difficulty can be remedied by the process of completion. Consider the class B of all sets B = A ∪ C where A is in A and C is a subset of a set in A having measure zero. Then, B is a σ-field (Problem 2.7). If µ′ is defined over B by µ′(B) = µ(A), µ′ agrees with µ over A, and (X, B, µ′) is called the completion of the measure space (X, A, µ). When the process of completion is applied to Example 2.2, so that X is Euclidean and A is the class of Borel sets, the resulting larger class B is the class of Lebesgue measurable sets. The measure µ′ defined over B, which agrees with Lebesgue measure over the Borel sets, is also called Lebesgue measure.
A third principal concept needed in addition to σ-field and measure is that of the integral of a real-valued function f with respect to a measure µ. However, before defining this integral, it is necessary to specify a suitable class of functions f. This will be done in three steps.
First, consider the class of real-valued functions s called simple, which take on only a finite number of values, say a1, . . . , am, and for which the sets
Ai = {x : s(x) = ai} (2.5)
belong to A. An important special case of a simple function is the indicator IA of a set A in A, defined by
IA(x) = I(x ∈ A) = 1 if x ∈ A, 0 if x ∉ A. (2.6)
If the set A is an interval, for example (a, b], the indicator function of the interval may be written in the alternate form I(a < x ≤ b).
Second, let s1, s2, . . . be a nondecreasing sequence of non-negative simple functions and let
f(x) = lim_{n→∞} sn(x). (2.7)
Note that this limit exists since for every x, the sequence s1(x), s2(x), . . . is nondecreasing, but that f(x) may be infinite. A function with domain X and range [0, ∞), that is, non-negative and finite valued, will be called A-measurable or, for short, measurable if there exists a nondecreasing sequence of non-negative simple functions such that (2.7) holds for all x ∈ X.
Third, for an arbitrary function f, define its positive and negative part by
f+(x) = max(f(x), 0), f−(x) = −min(f(x), 0),
so that f+ and f− are both non-negative and f = f+ − f−. Then a function with domain X and range (−∞, ∞) will be called measurable if both its positive and its negative parts are measurable.
The measurable functions constitute a very large class which has a simple alternative characterization. It can be shown that a real-valued function f is A-measurable if and only if, for every Borel set B on the real line, the set {x : f(x) ∈ B} is in A. It follows from the definition of Borel sets that it is enough to check that {x : f(x) < b} is in A for every b. This shows in particular that if (X, A) is Euclidean and f is continuous, then f is measurable. As another important class, consider functions taking on a countable number of values. If f takes on distinct values a1, a2, . . . on sets A1, A2, . . ., it is measurable if and only if Ai ∈ A for all i.
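The approximating sequence in (2.7) can be made completely explicit. The following sketch is an illustration, not part of the text; it assumes numpy and uses the standard dyadic construction s_n = min(⌊2^n f⌋/2^n, n), checking numerically that these simple functions increase to f from below.

```python
import numpy as np

def s_n(f_vals, n):
    """Dyadic simple-function approximation: takes only finitely many values."""
    return np.minimum(np.floor((2.0**n) * f_vals) / (2.0**n), n)

x = np.linspace(0.0, 3.0, 7)
f_vals = x**2                       # an arbitrary non-negative function

for n in (1, 2, 4, 8, 16):
    approx = s_n(f_vals, n)
    assert np.all(approx <= f_vals)          # approximation is always from below
    print(n, np.max(f_vals - approx))        # error -> 0 once n exceeds max f
```

Each s_n takes finitely many values on Borel sets, the sequence is nondecreasing in n, and the pointwise error is at most 2^{−n} once the cap n exceeds f(x), which is the content of (2.7).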
The integral can now be defined in three corresponding steps.
(i) For a non-negative simple function s taking on values ai on the sets Ai, define
∫ s dµ = Σ ai µ(Ai), (2.8)
where a µ(A) is to be taken as zero when a = 0 and µ(A) = ∞.
(ii) For a non-negative measurable function f given by (2.7), define
∫ f dµ = lim_{n→∞} ∫ sn dµ. (2.9)
Here, the limit on the right side exists since the fact that the functions sn are nondecreasing implies the same for the numbers ∫ sn dµ. The definition (2.9) is meaningful because it can be shown that if {sn} and {s′n} are two nondecreasing sequences with the same limit function, their integrals also will have the same limit. Thus, the value of ∫ f dµ is independent of the particular sequence used in (2.7). The definitions (2.8) and (2.9) do not preclude the possibility that ∫ s dµ or ∫ f dµ is infinite. A non-negative measurable function is integrable (with respect to µ) if ∫ f dµ < ∞.
(iii) An arbitrary measurable function f is said to be integrable if its positive and negative parts are integrable, and its integral is then defined by
∫ f dµ = ∫ f+ dµ − ∫ f− dµ. (2.10)
Important special cases of this definition are obtained by taking, for µ, the measures defined in Examples 2.1 and 2.2.
Example 2.3 Continuation of Example 2.1. If X = {x1, x2, . . .} and µ is counting measure, it is easily seen from (2.8) through (2.10) that ∫ f dµ = Σ f(xi). ∥
Example 2.4 Continuation of Example 2.2. If µ is Lebesgue measure, then ∫ f dµ exists whenever the Riemann integral (the integral taught in calculus courses) exists, and the two agree. However, the integral defined in (2.8) through (2.10) exists for many functions for which the Riemann integral is not defined. A simple example is the function f for which f(x) = 1 or 0, as x is rational or irrational. It follows from (2.22) below that the integral of f with respect to Lebesgue measure is zero; on the other hand, f is not Riemann integrable (Problem 2.11). ∥
In analogy with the customary notation for the Riemann integral, it will frequently be convenient to write the integral (2.10) as ∫ f(x) dµ(x). This is especially true when f is given by an explicit formula.
The integral defined above has the properties one would expect of it. In particular, for any real numbers c1, . . . , cm and any integrable functions f1, . . . , fm, Σ ci fi is also integrable and
∫ (Σ ci fi) dµ = Σ ci ∫ fi dµ. (2.11)
Also, if f is measurable and g integrable and if 0 ≤ f ≤ g, then f is also integrable, and
∫ f dµ ≤ ∫ g dµ. (2.12)
We shall often be dealing with statements that hold except on a set of measure zero. If a statement holds for all x in X − N where µ(N) = 0, the statement is said to hold a.e. (almost everywhere) µ (or a.e. if the measure µ is clear from the context).
It is sometimes required to know when
f(x) = lim fn(x), or more generally when
f(x) = lim fn(x) (a.e. µ), (2.13)
implies that
∫ f dµ = lim ∫ fn dµ. (2.14)
Here is a sufficient condition.
Theorem 2.5 (Dominated Convergence) If the fn are measurable and satisfy (2.13), and if there exists an integrable function g such that |fn(x)| ≤g(x) for all x, (2.15) then the fn and f are integrable and (2.14) holds. The following is another useful result concerning integrals of sequences of functions. Lemma 2.6 (Fatou) If {fn} is a sequence of non-negative measurable functions, then  lim inf n→∞fn dµ ≤lim inf n→∞  fndµ (2.16) with the reverse inequality holding for limsup. Recall that the liminf and limsup of a sequence of numbers are, respectively, the smallest and largest limit points that can be obtained through subsequences. See Problems 2.5 and 2.6. As a last extension of the concept of integral, define  A f dµ =  IAf dµ (2.17) when the integral on the right exists. It follows in particular from (2.8) and (2.17) that  A dµ = µ(A). (2.18) Obviously such properties as (2.11) and (2.12) continue to hold when is replaced by A. 12 PREPARATIONS [ 1.2 It is often useful to know under what conditions an integrable function f satisfies  A f dµ = 0. (2.19) This will clearly be the case when either f = 0 on A (2.20) or µ(A) = 0. (2.21) More generally, it will be the case whenever f = 0 a.e. on A, (2.22) that is, f is zero except on a subset of A having measure zero. Conversely, if f is a.e. non-negative on A,  A f dµ = 0 ⇒f = 0 a.e. on A, (2.23) and if f is a.e. positive on A, then  A f dµ = 0 ⇒µ(A) = 0. (2.24) Note that, as a special case of (2.22), if f and g are integrable functions differing only on a set of measure zero, that is, if f = g (a.e. µ), then  f dµ =  g dµ. It is a consequence that functions can never be determined by their integrals uniquely but at most up to sets of measure zero. For a non-negative integrable function f , let us now consider ν(A) =  A f dµ (2.25) as a set function defined over A. Then ν is non-negative, σ-finite, and σ-additive and hence a measure over (X, A). If µ and ν are two measures defined over the same measurable space (X, A), it is a question of central importance whether there exists a function f such that (2.25) holds for all A ∈A. By (2.21), a necessary condition for such a representation is clearly that µ(A) = 0 ⇒ν(A) = 0. (2.26) When (2.26) holds, ν is said to be absolutely continuous with respect to µ. It is a surprising and basic fact known as the Radon-Nikodym theorem that (2.26) is not only necessary but also sufficient for the existence of a function f satisfying (2.25) for all A ∈A. The resulting function f is called the Radon-Nikodym derivative of ν with respect to µ. This f is not unique because it can be changed on a set of µ-measure zero without affecting the integrals (2.25). However, it is unique a.e. µ 1.3 ] PROBABILITY THEORY 13 in the sense that if g is any other integrable function satisfying (2.25), then f = g (a.e. µ). It is a useful consequence of this result that  A f dµ = 0 for all A ∈A implies that f = 0 (a.e. µ). The last theorem on integration we require is a form of Fubini’s theorem which essentially states that in a repeated integral of a non-negative function, the order of integration is immaterial. To make this statement precise, define the Cartesian product A × B of any two sets A, B as the set of all ordered pairs (a, b) with a ∈A, b ∈B. Let (X, A, µ) and (Y, B, ν) be two measure spaces, and define A×B to be the smallest σ-field containing all sets A×B with A ∈A and B ∈B. Then there exists a unique measure λ over A × B which to any product set A × B assigns the measure µ(A) · ν(B). 
The measure λ is called the product measure of µ and ν and is denoted by µ × ν. Example 2.7 Borel sets. If X and Y are Euclidean spaces Em and En and A and B the σ-fields of Borel sets of X and Y respectively, then X × Y is Euclidean space Em+n, and A × B is the class of Borel sets of X × Y. If, in addition, µ and ν are Lebesgue measure on (X, A) and (Y, B), then µ × ν is Lebesgue measure on (X × Y, A × B). ∥ An integral with respect to a product measure generalizes the concept of a double integral. The following theorem, which is one version of Fubini’s theorem, states conditions under which a double integral is equal to a repeated integral and under which it is permitted to change the order of integration in a repeated integral. Theorem 2.8 (Fubini) Let (X, A, µ) and (Y, B, ν) be measure spaces and let f be a non-negative A × B-measurable function defined on X × Y. Then  X  Y f (x, y)dν(y)  dµ(x) =  Y  X f (x, y)dµ(x)  dν(y) (2.27) =  X×Y f d(µ × ν). Here, the first term is the repeated integral in which f is first integrated for fixed x with respect to ν, and then the result with respect to µ. The inner integrals of the first two terms in (2.27) are, of course, not defined unless f (x, y), for fixed values of either variable, is a measurable function of the other. Fortunately, under the assumptions of the theorem, this is always the case. Similarly, existence of the outer integrals requires the inner integrals to be measurable functions of the variable that has not been integrated. This condition is also satisfied. 3 Probability Theory For work in statistics, the most important application of measure theory is its specialization to probability theory. A measure P defined over a measure space 14 PREPARATIONS [ 1.3 (X, A) satisfying P(X) = 1 (3.1) is a probability measure (or probability distribution), and the value P(A) it assigns to A is the probability of A. If P is absolutely continuous with respect to a measure µ with Radon-Nikodym derivative, p, so that P(A) =  A p dµ, (3.2) p is called the probability density of P with respect to µ. Such densities are, of course, determined only up to sets of µ-measure zero. We shall be concerned only with situations in which X is Euclidean, and typi-cally the distributions will either be discrete (in which case µ can be taken to be counting measure) or absolutely continuous with respect to Lebesgue measure. Statistical problems are concerned not with single probability distributions but with families of such distributions P = {Pθ, θ ∈ } (3.3) defined over a common measurable space (X, A). When all the distributions of P are absolutely continuous with respect to a common measure µ, as will usually be the case, the family P is said to be dominated (by µ). Most of the examples with which we shall deal belong to one or the other of the following two cases. (i) The discrete case. Here, X is a countable set, A is the class of subsets of X, and the distributions of P are dominated by counting measure. (ii) The absolutely continuous case. Here, X is a Borel subset of a Euclidean space, A is the class of Borel subsets of X, and the distributions of P are dominated by Lebesgue measure over (X, A). It is one of the advantages of the general approach of this section that it includes both these cases, as well as mixed situations such as those arising with censored data (see Problem 3.8). When dealing with a family P of distributions, the most relevant null-set concept is that of a P-null set, that is, of a set N satisfying P(N) = 0 for all P ∈P. 
(3.4)
If a statement holds except on a set N satisfying (3.4), we shall say that the statement holds (a.e. P). If P is dominated by µ, then
µ(N) = 0 (3.5)
implies (3.4). When the converse also holds, µ and P are said to be equivalent.
To bring the customary probabilistic framework and terminology into consonance with that of measure theory, it is necessary to define the concepts of random variable and random vector. A random variable is the mathematical representation of some real-valued aspect of an experiment with uncertain outcome. The experiment may be represented by a space E, and the full details of its possible outcomes by the points e of E. The frequencies with which outcomes can be expected to fall into different subsets E of E (assumed to form a σ-field B) are given by a probability distribution over (E, B). A random variable is then a real-valued function X defined over E. Since we wish the probabilities of the events X ≤ a to be defined, the function X must be measurable, and the probability
FX(a) = P(X ≤ a) (3.6)
is simply the probability of the set {e : X(e) ≤ a}. The function FX defined through (3.6) is the cumulative distribution function (cdf) of X.
It is convenient to digress here briefly in order to define another concept of absolute continuity. A real-valued function f on (−∞, ∞) is said to be absolutely continuous if given any ε > 0, there exists δ > 0 such that for each finite collection of disjoint bounded open intervals (ai, bi),
Σ(bi − ai) < δ implies Σ|f(bi) − f(ai)| < ε. (3.7)
A connection with the earlier concept of absolute continuity of one measure with respect to another is established by the fact that a cdf F on the real line is absolutely continuous if and only if the probability measure it generates is absolutely continuous with respect to Lebesgue measure. Any absolutely continuous function is continuous (Problem 3.2), but the converse does not hold. In particular, there exist continuous cumulative distribution functions which are not absolutely continuous and therefore do not have a probability density with respect to Lebesgue measure. Such distributions are rather pathological and play little role in statistics.
If not just one but n real-valued aspects of an experiment are of interest, these are represented by a measurable vector-valued function (X1, . . . , Xn) defined over E, with the joint cdf
FX(a1, . . . , an) = P[X1 ≤ a1, . . . , Xn ≤ an] (3.8)
being the probability of the event
{e : X1(e) ≤ a1, . . . , Xn(e) ≤ an}. (3.9)
The cdf (3.8) determines the probabilities of (X1, . . . , Xn) falling into any Borel set A, and these agree with the probabilities of the events {e : [X1(e), . . . , Xn(e)] ∈ A}.
From this description of the mathematical model, one might expect the starting point for modeling a specific situation to be the measurable space (E, B) and a family P of probability distributions defined over it. However, the statistical analysis of an experiment is typically not based on a full description of the experimental outcome (which would, for example, include the smallest details concerning all experimental subjects) represented by the points e of E. More often, the starting point is a set of observations, represented by a random vector X = (X1, . . . , Xn), with all other aspects of the experiment being ignored.
The specification of the model will therefore begin with X, the data; the measurable space (X, A) in which X takes on its values, the sample space; and a family P of probability distributions to which the distribution of X is known to belong. Real-valued or vector-valued 16 PREPARATIONS [ 1.4 measurable functions T = (T1, . . . , Tk) of X are called statistics; in particular, estimators are statistics. The change of starting point from (E, B) to (X, A) requires clarification of two definitions: (1) In order to avoid reference to (E, B), it is convenient to require T to be a measurable function over (X, A) rather than over (E, B). Measurability with respect to the original (E, B) is then an automatic consequence (Problem 3.3). (2) Analogously, the expectation of a real-valued integrable T is originally defined as  T [X(e)]dP(e). However, it is legitimate to calculate it instead from the formula E(T ) =  T (x)dPX(X) where PX denotes the probability distribution of X. As a last concept, we mention the support of a distribution P on (X, A). It is the set of all points x for which P(A) > 0 for all open rectangles A [defined by (2.3)] which contain x. Example 3.1 Support. Let X be a random variable with distribution P and cdf F, and suppose the support of P is a finite interval I with end points a and b. Then, I must be the closed interval [a, b] and F is strictly increasing on [a, b] (Problem 3.4). ∥ If P and Q are two probability measures on (X, A) and are equivalent (i.e., each is absolutely continuous with respect to the other), then they have the same support; however, the converse need not be true (Problems 3.6 and 3.7). Having outlined the mathematical foundation on which the statistical develop-ments of the later chapters are based, we shall from now on ignore it as far as possible and instead concentrate on the statistical issues. In particular, we shall pay little or no attention to two technical difficulties that occur throughout. (i) The estimators that will be derived are statistics and hence need to be measur-able. However, we shall not check that this requirement is satisfied. In specific examples, it is usually obvious. In more general constructions, it will be tac-itly understood that the conclusion holds only if the estimator in question is measurable. In practice, the sets and functions in these constructions usually turn out to be measurable although verification of their measurability can be quite difficult. (ii) Typically, the estimators are also required to be integrable. This condition will not be as universally satisfied in our examples as measurability and will therefore be checked when it seems important to do so. In other cases, it will again be tacitly assumed. 4 Group Families The two principal families of models with which we shall be concerned in this book are exponential families and group families. Between them, these families cover 1.4 ] GROUP FAMILIES 17 many of the more common statistical models. In this and the next section, we shall discuss these families and some of their properties, together with some of the more important special cases. More details about these and other special distributions can be found in the four-volume reference work on statistical distributions by Johnson and Kotz (1969-1972), and the later editions by Johnson, Kotz, and Kemp (1992) and Johnson, Kotz, and Balakrishnan (1994,1995). 
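As a numerical preview of the group-family construction developed in the next paragraphs (a sketch, not from the text, assuming numpy; the particular constants are arbitrary choices), transforming draws from one fixed distribution generates an entire family of distributions, here the normal location-scale family of Example 4.1 below.

```python
import numpy as np

rng = np.random.default_rng(4)
u = rng.standard_normal(100_000)          # U has the fixed distribution F = N(0, 1)

for a, b in [(0.0, 1.0), (2.0, 1.0), (0.0, 3.0), (-1.0, 0.5)]:
    x = a + b * u                         # a member of the location-scale family
    print(f"a={a:+.1f} b={b:.1f}  mean={x.mean():+.3f}  sd={x.std():.3f}")
# Every N(a, b^2) arises this way; no randomness beyond U itself is needed.
```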
One of the main reasons for the central role played by these two families in statistics is that in each of them, it is possible to effect a great simplification of the data. In an exponential family, there exists a fixed (usually rather small) number of statistics to which the data can be reduced without loss of information, regardless of the sample size. In a group family, the simplification stems from the fact that the different distributions of the family play a highly symmetric role. This symmetry in the basic structure again leads essentially to a reduction of the dimension of the data since it is then natural to impose a corresponding symmetry requirement on the estimator.
A group family of distributions is a family obtained by subjecting a random variable with a fixed distribution to a suitable family of transformations.
Example 4.1 Location-scale families. Let U be a random variable with a fixed distribution F. If a constant a is added to U, the resulting variable
X = U + a (4.1)
has distribution
P(X ≤ x) = F(x − a). (4.2)
The totality of distributions (4.2), for fixed F and as a varies from −∞ to ∞, is said to constitute a location family. Analogously, a scale family is generated by the transformations
X = bU, b > 0, (4.3)
and has the form
P(X ≤ x) = F(x/b). (4.4)
Combining these two types of transformations into
X = a + bU, b > 0, (4.5)
one obtains the location-scale family
P(X ≤ x) = F((x − a)/b). (4.6)
In applications of these families, F usually has a density f with respect to Lebesgue measure. The density of (4.6) is then given by
(1/b) f((x − a)/b). (4.7)
Table 4.1 exhibits several such densities, which will be used in the sequel. ∥
Table 4.1. Location-Scale Families (−∞ < a < ∞, b > 0)
Density                                     Support                 Name                  Notation
[1/(√(2π) b)] e^{−(x−a)²/2b²}               −∞ < x < ∞              Normal                N(a, b²)
[1/(2b)] e^{−|x−a|/b}                       −∞ < x < ∞              Double exponential    DE(a, b)
(b/π) / [b² + (x−a)²]                       −∞ < x < ∞              Cauchy                C(a, b)
(1/b) e^{−(x−a)/b} / [1 + e^{−(x−a)/b}]²    −∞ < x < ∞              Logistic              L(a, b)
(1/b) e^{−(x−a)/b} I_[a,∞)(x)               a < x < ∞               Exponential           E(a, b)
(1/b) I_(a−b/2, a+b/2)(x)                   a − b/2 < x < a + b/2   Uniform               U(a − b/2, a + b/2)
In each of (4.1), (4.3), and (4.5), the class of transformations has the following two properties.
(i) Closure under composition. Application of a 1:1 transformation g1 from X to X followed by another, g2, results in a new such transformation called the composition of g1 with g2 and denoted by g2 · g1. For the transformation (4.1), addition first of a1 and then of a2 results in the addition of a1 + a2. For (4.3), multiplication by b1 and then by b2 is equivalent to multiplication by b2 · b1. The composition rule for (4.5) is slightly more complicated. First transforming u to x = a1 + b1u and then the result to y = a2 + b2x results in the transformation
y = a2 + b2(a1 + b1u) = (a2 + b2a1) + b2b1u. (4.8)
A class J of transformations is said to be closed under composition if g1 ∈ J, g2 ∈ J implies that g2 · g1 ∈ J. We have just shown that the three classes of transformations,
(4.1) with −∞ < a < ∞,
(4.3) with 0 < b, (4.9)
(4.5) with −∞ < a < ∞, 0 < b,
are all closed with respect to composition. On the other hand, the class (4.1) with |a| < 1 is not, since U + 1/2 and U + 2/3 are both members of the class but their composition is not.
(ii) Closure under inversion. Given any 1:1 transformation x′ = gx, let g−1, the inverse of g, denote the transformation which undoes what g did, that is, takes x′ back to x so that x = g−1x′.
For the transformation which adds a, the inverse subtracts a; the inverse in (4.3) of multiplication by b is division by b; and the inverse of a + bu is (x −a)/b. A class J is said to be closed under inversion if g ∈J implies g−1 ∈J . The three classes listed in (4.9) are all closed under inversion. On the other hand, (4.1) with 0 ≤a is not. 1.4 ] GROUP FAMILIES 19 The structure of the class of transformations possessing these properties is a special case of a more general mathematical object, simply called a group. Definition 4.2 A set G of elements is called a group if it satisfies the following four conditions. (i) There is defined an operation, group multiplication, which with any two el-ements a, b ∈G associates an element c of G. The element c is called the product of a and b and is denoted ab. (ii) Group multiplication obeys the associative law (ab)c = a(bc). (iii) There exists an element e ∈G, called the identity, such that ae = ea = a for all a ∈G. (iv) For each element a ∈G, there exists an element a−1, its inverse, such that aa−1 = a−1a = e. Both the identity element and the inverse a−1 of any element a can be shown to be unique. The groups of primary interest in statistics are transformation groups. Definition 4.3 A class G of transformations is called a transformation group if it is closed under both composition and inversion. It is straightforward to verify (Problem 4.4) that a transformation group is, in fact, a group. In particular, note that the identity transformation x ≡x is a member of any transformation group G since g ∈G implies g−1 ∈G and hence g−1g ∈G, and by definition, g−1g is the identity. Note also that the inverse (g−1)−1 of g−1 is g, so that gg−1 is also the identity. A transformation group G which satisfies g2 · g1 = g1 · g2 for all g1, g2 ∈G is called commutative. The first two groups of transformations of (4.9) are commutative, but the third is not. Example 4.4 Continuation of Example 4.1. The group families (4.2), (4.4), and (4.6) generalize easily to the case that U is a vector U = (U1, . . . , Un), if one defines U + a = (U1 + a, . . . , Un + a) and bU = (bU1, . . . , bUn). (4.10) This covers in particular the case that X1, . . . , Xn are iid according to one of the previous families, for example, one of the densities of Table 4.1. Larger group families are obtained in the same way by letting U + a = (U1 + a1, . . . , Un + an) and bU = (b1U1, . . . , bnUn). (4.11) ∥ 20 PREPARATIONS [ 1.4 Example 4.5 Multivariate normal distribution. As a more special but very im-portant example, suppose next that U = (U1, . . . , Up) where the Ui are indepen-dently distributed as N(0, 1) and let    X1 . . . Xp   =    a1 . . . ap   + B    U1 . . . Up    (4.12) where B is nonsingular p × p matrix. The resulting family of distributions in p-space is the family of nonsingular p-variate normal distributions. If the three columns of (4.12) are denoted by X, a, and U, respectively,1 (4.12) can be written as X = a + BU. (4.13) From this equation, it is seen that the covariance matrix of X are given by E(X) = a and = E[(X −a)(X −a)′] = BB′. (4.14) To obtain the density of X, write the density of U as 1 ( √ 2π)p e−(1/2)u′u. Now U = B−1(X −a) and the Jacobian of the linear transformation (4.13) is just the determinant |B| of B. Thus, by the usual formula for transforming densities, the density of X is seen to be |B|−1 ( √ 2π)p e−(x−a)′−1(x−a)/2. 
(4.15) For the case p = 2, this reduces to (Problem 4.6) 1 2πστ  1 −ρ2 e−[(x−ξ)2/σ 2−2ρ(x−ξ)(y−η)/στ+(y−η)2/τ 2]/2(1−ρ2) (4.16) where we write (x, y) for (x1, x2) and (ξ, η) for (a1, a2), and where σ 2 = var(X), τ 2 = var(Y), and ρστ = cov(X, Y). ∥ There is a difference between the transformation groups (4.1), (4.3), and (4.5), on the one hand, and (4.13), on the other. In the first three cases, different transfor-mations of the group lead to different distributions. This is not true of (4.13) since the distributions of a1 + B1U and a2 + B2U coincide provided a1 = a2 and B1B′ 1 = B2B′ 2. This occurs when a1 = a2 and (B−1 2 B1)(B−1 2 B1)′ is the identity matrix, that is, when B−1 2 B1 is orthogonal. The same family of distributions can therefore be generated by restricting the matrices B in (4.13) to belong to a smaller group. In particular, it is enough to let G be the group of lower triangular matrices, in which all elements above the main diagonal are zero (Problems 4.7 - 4.9). 1 When it is not likely to cause confusion, we shall use U and so on to denote both the vector and the column with elements Ui. 1.4 ] GROUP FAMILIES 21 Example 4.6 The linear model. Let us next consider a different generalization of a location-scale family. As before, let U = (U1, . . . , Un) have a fixed joint distribution and consider the transformations Xi = ai + bUi, i = 1, . . . , n, (4.17) where the translation vector a = (a1, . . . , an) is restricted to be in some s-dimen-sional linear subspace of n-space, that is, to satisfy a set of linear equations ai = s j=1 dijβj (i = 1, . . . , n). (4.18) Here, the dij are fixed (without loss of generality the matrix D = (dij) is assumed to be of rank s) and the βj are arbitrary. The most important case of this model is that in which the U’s are iid as N(0, 1). The joint distribution of the X’s is then given by 1 ( √ 2πb)n exp  −1 2b2 (xi −ai)2  (4.19) with a ranging over . ∥ We shall next consider a number of models in which the groups (and hence the resulting families of distributions) are much larger than in the situations discussed so far. Example 4.7 A nonparametric iid family. Let U1, . . . , Un be n independent random variables with a fixed continuous common distribution, say N(0, 1), whose support is the whole real line, and let G be the class of all transformations Xi = g(Ui) (4.20) where g is any continuous, strictly increasing function satisfying lim u→−∞g(u) = −∞, lim u→∞g(u) = ∞. (4.21) This class constitutes a group. The Xi are again iid with common distribution, say Fg. The class {Fg : g ∈G} is the class of all continuous distributions whose support is (−∞, ∞), that is, the class of all distributions whose cdf is continuous and strictly increasing on (−∞, ∞). In this example, one may wish to impose on g the additional restriction of differentiability for all µ. The resulting family of distributions will be as before but restricted to have probability density with respect to Lebesgue measure. ∥ Many variations of this basic example are of interest, we shall mention only a few. Example 4.8 Symmetric distributions. Consider the situation of Example 4.7 but with g restricted to be odd, that is, to satisfy g(−u) = −g(u) for all u. This leads to the class of all distributions whose support is the whole real line and which are symmetric with respect to the origin. If instead we let Xi = g(ui) + a, −∞< a < ∞, the resulting class is that of all distributions whose support is the real line and which are symmetric with the point a of symmetry being specified. 
∥ 22 PREPARATIONS [ 1.4 Example 4.9 Continuation of Example 4.7. In Example 4.7, replace N(0, 1) as the initial distribution of the Ui with the uniform distribution on (0, 1), and let G be the class of all strictly increasing continuous functions g on (0, 1) satisfying g(0) = 0, g(1) = 1. If, then, Xi = a + bg(Ui) with −∞< a < ∞, 0 < b, the resulting group family is that of all continuous distributions whose support is an interval. ∥ The examples of group families considered so far are of two types. In Examples 4.1 - 4.6, the distributions within a family were naturally indexed by a relatively small number of parameters (a and b in Example 4.1; the elements of the matrix B and the vector a in Example 4.4; the quantities b and β1, . . . , βs in Example 4.6). On the other hand, in Examples 4.7 - 4.9, the distribution of the Xi was fairly unrestricted, subject only to conditions such as independence, identity of distribution, nature of support, continuity, and symmetry. The next example is the prototype of a third kind of model arising in survey sampling. Example 4.10 Sampling from a finite population. To motivate this model, con-sider a finite population of N elements (or subjects) to each of which is attached a real number (for example, the age or income of the subject) and an identifying label. A random sample of n elements drawn from this population constitutes the observations. Let the observed values and labels be (X1, J1), . . . , (Xn, Jn). The following group family provides a possible model for this situation. Let v1, . . . , vN be any fixed N real numbers, and let the pairs (U1, J1), . . ., (Un, Jn) be n of the pairs (v1, 1), . . . , (vN, N) selected at random, that is, in such a way that all N n  possible choices of n pairs are equally likely. Finally, let G be the group of transformations X1 = U1 + aJ1, . . . , Xn = Un + aJn (4.22) where the N-tuple (a1, . . . , , aN) ranges over all possible N-tuples −∞< a1, a2, . . . , aN < ∞. If we put yi = vi + ai, then the pairs (X1, J1), . . . , (Xn, Jn) are a random sample from the population (y1, 1), . . . , (yN, N), the y values being arbitrary. This example can be extended in a number of ways. In particular, the sampling method, reflecting some knowledge concerning the population of y values, may be more complex. In stratified sampling, for instance, the population of N is divided into, say, s subpopulations of N1, . . . , Ns members (Ni = N) and a sample of ni is drawn at random from the ith subpopulation (Problem 4.12). This and some other sampling schemes will be considered in Section 3.7. A different modification places some restrictions on the y’s such as 0 < yi < ∞, or 0 < yi < 1 (Problem 4.11). ∥ It was stated at the beginning of the section that in a group family, the differ-ent members of the family play a highly symmetric role. However, the general construction of such a family P as the distributions of gU, where U has a fixed 1.5 ] EXPONENTIAL FAMILIES 23 distribution P0 and g ranges over a group G of transformations, appears to single out the distribution P0 of U (which is a member of P since the identity transfor-mation is a member of G) as the starting point of the construction. This asymmetry is only apparent. Let P1 be any distribution of P other than P0 and consider the family P′ of distributions of gV as g ranges over G, where V has distribution P1. Since P1 is an element of P, there exists an element g0 of G for which g0U is distributed according to P1. 
Thus, g0U can play the role of V , and P′ is the family of distributions of gg0U as g ranges over G. However, as g ranges over G, so does gg0 (Problem 4.5), so that the family of distributions of gg0U, g ∈G, is the same as the family of P of gU, g ∈G. A group family is thus independent of which of its members is taken as starting distribution. If one cannot find a group generating a given family P of distributions, the question arises whether such a group exists, that is, whether P is a group family. In principle, the answer is easy. For the sake of simplicity, suppose that P is a family of univariate distributions with continuous and strictly increasing cumula-tive distribution functions. Let F0 and F be two such cdf’s and suppose that U is distributed according to F0. Then, if g is strictly increasing, g(U) is distributed ac-cording to F if and only if g = F −1(F0) (Problem 4.14). Thus, the transformations generating the family must be the transformations {F −1(F0), F ∈P}. (4.23) The family P will be a group family if and only if the transformations (4.23) form a group, that is, are closed under composition and inversion. In specific situations, the calculations needed to check this requirement may not be easy. For an important class of problems, the question has been settled by Borges and Pfanzagl (1965). 5 Exponential Families A family {Pθ} of distributions is said to form an s-dimensional exponential family if the distributions Pθ have densities of the form pθ(x) = exp  s i=1 ηi(θ)Ti(x) −B(θ) h(x) (5.1) with respect to some common measure µ. Here, the ηi and B are real-valued functions of the parameters and the Ti are real-valued statistics, and x is a point in the sample space X, the support of the density. Frequently, it is more convenient to use the ηi as the parameters and write the density in the canonical form p(x|η) = exp  s i=1 ηiTi(x) −A(η) h(x). (5.2) It should be noted that the form (5.2) is not unique. We can, for example, multiply ηi by a constant c if, at the same time, Ti is replaced by Ti/c. More generally, we can make linear transformations of the η’s and T ’s. Both (5.1) and (5.2) are redundant in that the factor h(x) could be absorbed into µ. The reason for not doing so is that it is then usually possible to take µ to be 24 PREPARATIONS [ 1.5 either Lebesgue measure or counting measure rather than having to define a more elaborate measure. The function p given by (5.2) is non-negative and is therefore a probability density with respect to the given µ, provided its integral with respect to µ equals 1. A constant A(η) for which this is the case exists if and only if  eηiTi(x)h(x)dµ(x) < ∞. (5.3) The set F of points η = (η1, . . . , ηs) for which (5.3) holds is called the natural parameter space of the family (5.2) and η is called the natural parameter. It is not difficult to see that F is convex (TSH2, Section 2.7, Lemma 7). In most applications, it turns out to be open, but this need not be the case (Problem 5.1). In the parametrization (5.1), the natural parameter space is the set of θ values for which [η1(θ), . . . , ηs(θ)] is in F. Example 5.1 Normal family. If X has the N(ξ, σ 2) distribution, then θ = (ξ, σ 2) and the density with respect to Lebesgue measure is pθ(x) = exp  ξ σ 2 x − 1 2σ 2 x2 −ξ 2 2σ 2  1 √ 2πσ , a two-parameter exponential family with natural parameters (η1, η2) = (ξ/σ 2, −1/2σ 2) and natural parameter space ℜ× (−∞, 0). ∥ Some other examples of one- and two-parameter exponential families are shown in Table 5.1. If the statistics T1, . . . 
, Ts satisfy linear constraints, the number s of terms in the exponent of (5.1) can be reduced. Unless this is done, the parameters ηi are statistically meaningless; they are unidentifiable (see Problem 5.2). Definition 5.2 If X is distributed according to pθ, then θ is said to be unidentifiable on the basis of X if there exist θ1 ̸= θ2 for which Pθ1 = Pθ2. A reduction is also possible when the η’s satisfy a linear constraint. In the latter case, the natural parameter space will be a convex set which lies in a linear subspace of dimension less than s. If the representation (5.2) is minimal in the sense that neither the T ’s nor the η’s satisfy a linear constraint, the natural parameter space will then be a convex set in Es containing an open s-dimensional rectangle. If (5.2) is minimal and the parameter space contains an s-dimensional rectangle, the family (5.2) is said to be of full rank. Example 5.3 Multinomial. In n independent trials with s + 1 possible outcomes, let the probability of the ith outcome be pi in each trial. If Xi denotes the number of trials resulting in outcome i (i = 0, 1, . . . , s), then the joint distribution of the X’s is the multinomial distribution M(p0, . . . , ps; n) P(X0 = x0, . . . , Xs = xs) = n! x0! · · · xs!px0 0 . . . pxs s , (5.4) which can be rewritten as exp(x0 log p0 + · · · + xs log ps)h(x). 1.5 ] EXPONENTIAL FAMILIES 25 Table 5.1. Some One- and Two-Parameter Exponential Families Density∗ Name Notation Support 1 H(a)ba xa−1e−x/a Gamma(a, b) H(a, b) 0 < x < ∞ 1 H(f/2)2f/2 xf/2−1e−x/2 Chi-squared(f ) χ2 f 0 < x < ∞ H(a + b) H(a)H(b)xa−1(1 −x)b−1 Beta(a, b) B(a, b) 0 < x < 1 px(1 −p)n−x Bernoulli(p) b(p) x = 0, 1  n x  px(1 −p)n−x Binomial(p, n) b(p, n) x = 0, 1, . . . , n 1 x!λxe−λ Poisson(λ) P(λ) x = 0, 1, . . . m + x −1 m −1  pmqx Negative binomial(p, m) Nb(p, m) x = 0, 1, . . . ∗The density of the first three distributions is with respect to Lebesgue measure, and that of the last four with respect to counting measure. Since the xi add up to n, this can be reduced to exp[n log p0 + x1 log(p1/p0) + · · · + xs log(ps/p0)]h(x). (5.5) This is an s-dimensional exponential family with ηi = log(pi/p0), A(η) = −n log p0 = n log  1 + s i=1 eηi . (5.6) The natural parameter space is the set of all (η1, . . . , ηs) with −∞< ηi < ∞. ∥ In the normal family of Example 5.1, it might be the case that the mean and the variance are related. [Such a model can be useful in data analysis, where the variance may be modeled as a power of the mean (see, for example, Snedecor and Cochran 1989, Section 15.10).] In such cases, when the natural parameters of the distribution are related in a nonlinear way, we say that (5.1) or (5.2) forms a curved exponential family (see Note 10.6). Example 5.4 Curved normal family. For the normal family of Example 5.1, assume that ξ = σ, so that pθ(x) = exp 1 ξ x − 1 2ξ 2 x2 −1 2  1 √ 2πξ , ξ > 0. (5.7) 26 PREPARATIONS [ 1.5 Although this is formally a two-parameter exponential family with natural param-eter ( 1 ξ , −1 2ξ 2 ), this parameter is, in fact, generated by the single parameter ξ. The two-dimensional parameter ( 1 ξ , −1 2ξ 2 ) lies on a curve in ℜ2, making (5.7) a curved exponential family. ∥ The underlying parameter in a curved exponential family need not be one di-mensional. The following is an example in which it is two dimensional. Example 5.5 Logit model. Let Xi be independent b(pi, ni), i = 1, . . . , m, so that their joint distribution is P(X1 = x1, . . . , Xm = xm) = m i=1 ni xi  pxi i (1 −pi)ni−xi. 
This can be written as
exp{ Σ_{i=1}^m xi log[pi/(1 − pi)] } ∏_{i=1}^m (ni choose xi)(1 − pi)^{ni}, (5.8)
an m-dimensional exponential family with natural parameters ηi = log[pi/(1 − pi)], i = 1, . . . , m. The quantity log[p/(1 − p)] is known as the logit of p. If the η's satisfy
ηi = α + βzi, i = 1, . . . , m, (5.9)
for known covariates zi, the model only contains the two parameters α and β, and (5.8) becomes a curved exponential family (see Note 10.6). ∥
Note that the parameter space of an s-dimensional curved exponential family cannot contain an s-dimensional rectangle, so a curved exponential family is not of full rank. Nevertheless, as long as the T's are not rank deficient, a curved exponential family shares many of the following properties of a full rank family. (An exception is completeness of the sufficient statistic, discussed in the next section.) A more detailed treatment can be found in Brown (1986a) or Barndorff-Nielsen and Cox (1994).
Let X and Y be independently distributed according to s-dimensional exponential families (not necessarily full rank) with densities
exp[Σ ηi Ti(x) − A(η)] h(x) and exp[Σ ηi Ui(y) − C(η)] k(y) (5.10)
with respect to measures µ and ν over (X, A) and (Y, B), respectively. Then, the joint distribution of X, Y is again an exponential family, and by induction, the result extends to the joint distribution of more than two factors. The most important special case is that of iid random variables Xi, each distributed according to (5.1): The exponential structure is preserved under random sampling. The joint density of X = (X1, . . . , Xn) is
exp{ Σ ηi(θ) T′i(x) − nB(θ) } h(x1) · · · h(xn) (5.11)
with T′i(x) = Σ_{j=1}^n Ti(xj).
Example 5.6 Normal sample. Let Xi (i = 1, . . . , n) be iid according to N(ξ, σ²). Then, the joint density of X1, . . . , Xn with respect to Lebesgue measure in En is
exp{ (ξ/σ²) Σ xi − (1/2σ²) Σ xi² − (n/2σ²) ξ² } · 1/(√(2π) σ)^n. (5.12)
As in the case n = 1 (Example 5.1), this constitutes a two-parameter exponential family with natural parameters (ξ/σ², −1/2σ²). ∥
Example 5.7 Bivariate normal. Suppose that (Xi, Yi), i = 1, . . . , n, is a sample from the bivariate normal density (4.16). Then, it is seen that the joint density of the n pairs is a five-parameter exponential density with statistics
T1 = Σ Xi, T2 = Σ Xi², T3 = Σ XiYi, T4 = Σ Yi, T5 = Σ Yi².
This example easily generalizes to the p-variate case (Problem 5.3). ∥
A useful property of exponential families is given by the following theorem, which is proved, for example, in TSH2 (Chapter 2, Theorem 9) and in Barndorff-Nielsen (1978, Section 7.1).
Theorem 5.8 For any integrable function f and any η in the interior of F, the integral
∫ f(x) exp[Σ ηi Ti(x)] h(x) dµ(x) (5.13)
is continuous and has derivatives of all orders with respect to the η's, and these can be obtained by differentiating under the integral sign.
As an application, differentiate the identity
∫ exp[Σ ηi Ti(x) − A(η)] h(x) dµ(x) = 1
with respect to ηj to find
Eη(Tj) = ∂A(η)/∂ηj. (5.14)
Differentiating (5.14), in turn, with respect to ηk leads to
cov(Tj, Tk) = ∂²A(η)/∂ηj ∂ηk. (5.15)
(For the corresponding formulas in terms of (5.1), see Problem 5.6.)
Example 5.9 Continuation of Example 5.3. From (5.6), (5.14), and (5.15), one easily finds for the multinomial variables of Example 5.3 that (Problem 5.15)
E(Xi) = npi, cov(Xj, Xk) = npj(1 − pj) if k = j, and cov(Xj, Xk) = −npjpk if k ≠ j. (5.16) ∥
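The identities (5.14) and (5.15) are easy to check numerically. The following sketch is illustrative only and assumes numpy; it replaces exact differentiation by central finite differences and uses the binomial family of Example 5.11 below, where T = X, η = log[p/(1 − p)], and A(η) = n log(1 + e^η).

```python
import numpy as np

n, p = 10, 0.3
eta = np.log(p / (1 - p))
A = lambda e: n * np.log1p(np.exp(e))   # A(eta) for the binomial family

h = 1e-5  # central finite differences in place of exact derivatives
dA = (A(eta + h) - A(eta - h)) / (2 * h)
d2A = (A(eta + h) - 2 * A(eta) + A(eta - h)) / h**2

print(dA, n * p)                # E(X)   = n p         -> 3.0
print(d2A, n * p * (1 - p))     # var(X) = n p (1 - p) -> 2.1
```

Both derivatives of A(η) reproduce the familiar binomial mean and variance, as (5.14) and (5.15) require.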
As will be discussed in the next section, in an exponential family the statistics T = (T1, . . . , Ts) carry all the information about η or θ contained in the data, so that all statistical inferences concerning these parameters will be based on the T's. For this reason, we shall frequently be interested in calculating not only the first two moments of the T's given by (5.14) and (5.15) but also some of the higher moments
α_{r1,...,rs} = E(T1^{r1} · · · Ts^{rs}) (5.17)
and central moments
µ_{r1,...,rs} = E{[T1 − E(T1)]^{r1} · · · [Ts − E(Ts)]^{rs}}. (5.18)
A tool that often facilitates such calculations is the moment generating function
MT(u1, . . . , us) = E(e^{u1T1+···+usTs}). (5.19)
If MT exists in some neighborhood Σ ui² < δ of the origin, then all moments α_{r1,...,rs} exist and are the coefficients in the expansion of MT as a power series
MT(u1, . . . , us) = Σ_{(r1,...,rs)} α_{r1,...,rs} u1^{r1} · · · us^{rs}/(r1! · · · rs!). (5.20)
As an alternative, it is sometimes more convenient to calculate, instead, the cumulants κ_{r1,...,rs}, defined as the coefficients in the expansion of the cumulant generating function
KT(u1, . . . , us) = log MT(u1, . . . , us) (5.21)
= Σ_{(r1,...,rs)} κ_{r1,...,rs} u1^{r1} · · · us^{rs}/(r1! · · · rs!).
From the cumulants, the moments can be determined by formal comparison of the two power series (see, for example, Cramér 1946a, p. 186, or Stuart and Ord 1987, Chapter 3). For s = 1, one finds, for example (Problem 5.7),
α1 = κ1, α2 = κ2 + κ1², α3 = κ3 + 3κ1κ2 + κ1³, (5.22)
α4 = κ4 + 3κ2² + 4κ1κ3 + 6κ1²κ2 + κ1⁴.
For exponential families, the moment and cumulant generating functions can be expressed rather simply as follows.
Theorem 5.10 If X is distributed with density (5.2), then for any η in the interior of F, the moment and cumulant generating functions MT(u) and KT(u) of the T's exist in some neighborhood of the origin and are given by
KT(u) = A(η + u) − A(η) (5.23)
and
MT(u) = e^{A(η+u)}/e^{A(η)}, (5.24)
respectively.
Frequently, the calculation of moments becomes particularly easy when they can be represented as the sum of independent terms. We shall illustrate this with two examples for the case s = 1.
(a) Suppose X = X1 + · · · + Xn, where the Xi are independent with moment and cumulant generating functions MXi(u) and KXi(u), respectively. Then
MX(u) = E[e^{u(X1+···+Xn)}] = MX1(u) · · · MXn(u)
and therefore
KX(u) = Σ_{i=1}^n KXi(u).
From the definition of cumulants, it then follows that
κr = Σ_{i=1}^n κir (5.25)
where κir is the rth cumulant of Xi.
(b) The situation is also very simple for low central moments. If ξi = E(Xi), σi² = var(Xi), and the Xi are independent, one easily finds (Problem 5.7)
var(Σ Xi) = Σ σi², E[Σ(Xi − ξi)]³ = Σ E(Xi − ξi)³, (5.26)
E[Σ(Xi − ξi)]⁴ = Σ E(Xi − ξi)⁴ + 6 Σ_{i<j} σi²σj².
For the case of identical components with ξi = ξ, σi² = σ², this reduces to
var(Σ Xi) = nσ², E[Σ(Xi − ξ)]³ = n E(X1 − ξ)³, (5.27)
E[Σ(Xi − ξ)]⁴ = n E(X1 − ξ)⁴ + 3n(n − 1)σ⁴.
The following are a few of the many important special cases of exponential families and some of their moments. Additional examples are given in the problems; see also Johansen (1979), Brown (1986a), or Hoffmann-Jørgensen (1994, Chapter 12).
Example 5.11 Binomial moments. Let X have the binomial distribution b(p, n) so that for x = 0, 1, . . . , n,
P(X = x) = (n choose x) p^x q^{n−x} (0 < p < 1; q = 1 − p). (5.28)
This is the special case of the multinomial distribution (5.4) with s = 1. The probability (5.28) can be rewritten as
(n choose x) e^{x log(p/q) + n log q},
which defines an exponential family, with µ being counting measure over the points x = 0, 1, . . . , n and with η = log(p/q), A(η) = n log(1 + e^η).
(5.29)
From (5.24) and (5.29), one finds that (Problem 5.8)
MX(u) = (q + pe^u)^n. (5.30)
An easy way to obtain the expectation and the first three central moments of X is to use the fact that X arises as the number of successes in n Bernoulli trials with success probability p, and hence that X = Σ Xi, where Xi is 1 or 0, as the ith trial is or is not a success. From (5.27) and the moments of Xi, one then finds (Problem 5.8)
E(X) = np, E(X − np)³ = npq(q − p), (5.31)
var(X) = npq, E(X − np)⁴ = 3(npq)² + npq(1 − 6pq). ∥
Example 5.12 Poisson moments. A random variable X has the Poisson distribution P(λ) if
P(X = x) = (λ^x/x!) e^{−λ}, x = 0, 1, . . . ; λ > 0. (5.32)
Writing this as an exponential family in canonical form, we find
η = log λ, A(η) = λ = e^η (5.33)
and hence
KX(u) = λ(e^u − 1), MX(u) = e^{λ(e^u−1)}, (5.34)
so that, in particular, κr = λ for all r. The expectation and first three central moments are given by (Problem 5.9)
E(X) = λ, E(X − λ)³ = λ, (5.35)
var(X) = λ, E(X − λ)⁴ = λ + 3λ². ∥
Example 5.13 Normal moments. Let X have the normal distribution N(ξ, σ²) with density
[1/(√(2π) σ)] e^{−(x−ξ)²/2σ²} (5.36)
with respect to Lebesgue measure. For fixed σ, this is a one-parameter exponential family with
η = ξ/σ² and A(η) = η²σ²/2 + constant. (5.37)
It is thus seen that
MX(u) = e^{ξu + σ²u²/2}, (5.38)
and hence in particular that
E(X) = ξ. (5.39)
Since the distribution of X − ξ is N(0, σ²), the central moments µr of X are simply the moments αr of N(0, σ²), which are obtained from the moment generating function M0(u) = e^{σ²u²/2} to be
µ_{2r+1} = 0, µ_{2r} = 1 · 3 · · · (2r − 1)σ^{2r}, r = 1, 2, . . . . (5.40) ∥
Example 5.14 Gamma moments. A random variable X has the gamma distribution Γ(α, b) if its density is
[1/(Γ(α) b^α)] x^{α−1} e^{−x/b}, x > 0, α > 0, b > 0, (5.41)
with respect to Lebesgue measure on (0, ∞). Here, b is a scale parameter, whereas α is called the shape parameter of the distribution. For α = f/2 (f an integer) and b = 2, this is the χ²-distribution χ²_f with f degrees of freedom. For fixed shape parameter α, (5.41) is a one-parameter exponential family with η = −1/b and A(η) = α log b = −α log(−η). Thus, the moment and cumulant generating functions are seen to be
MX(u) = (1 − bu)^{−α} and KX(u) = −α log(1 − bu), u < 1/b. (5.42)
From the first of these formulas, one finds
E(X^r) = α(α + 1) · · · (α + r − 1) b^r = [Γ(α + r)/Γ(α)] b^r (5.43)
and hence (Problem 5.17)
E(X) = αb, E(X − αb)³ = 2αb³, (5.44)
var(X) = αb², E(X − αb)⁴ = (3α² + 6α)b⁴. ∥
Another approach to moment calculations is to use an identity of Charles Stein, which was given a thorough treatment by Hudson (1978). Stein's identity is primarily used to establish minimaxity of estimators, but it is also useful in moment calculations.
Lemma 5.15 (Stein's identity) If X is distributed with density (5.2) and g is any differentiable function such that E|g′(X)| < ∞, then
E{ [h′(X)/h(X) + Σ_{i=1}^s ηi T′i(X)] g(X) } = −E g′(X), (5.45)
provided the support of X is (−∞, ∞). If the support of X is the bounded interval (a, b), then (5.45) holds if exp{Σ ηi Ti(x)} h(x) → 0 as x → a or b.
The proof of the lemma is quite straightforward and is based on integration by parts (Problem 5.18). We illustrate its use in the normal case.
Example 5.16 Stein's identity for the normal. If X ∼ N(µ, σ²), then (5.45) becomes
E{g(X)(X − µ)} = σ² E g′(X).
This immediately shows that E(X) = µ (take g(x) = 1) and E(X²) = σ² + µ² (take g(x) = x). Higher-order moments are equally easy to calculate (Problem 5.18). ∥
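As a quick numerical companion to Example 5.16 (a Monte Carlo sketch, not from the text, assuming numpy; the parameter values and sample size are arbitrary), one can check the identity E{g(X)(X − µ)} = σ² E g′(X) directly for g(x) = x².

```python
import numpy as np

rng = np.random.default_rng(2)
mu, sigma = 1.5, 2.0
x = mu + sigma * rng.standard_normal(1_000_000)

lhs = np.mean(x**2 * (x - mu))       # E[g(X)(X - mu)] with g(x) = x^2
rhs = sigma**2 * np.mean(2 * x)      # sigma^2 * E[g'(X)], since g'(x) = 2x
print(lhs, rhs)                      # both approach 2 * sigma^2 * mu = 12
```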
Not only are the moments of the statistics $T_i$ appearing in (5.1) and (5.2) of interest but also the family of distributions of the $T$'s. This turns out again to be an exponential family.

Theorem 5.17 If $X$ is distributed according to an exponential family with density (5.1) with respect to a measure $\mu$ over $(\mathcal{X}, \mathcal{A})$, then $T = (T_1, \ldots, T_s)$ is distributed according to an exponential family with density

$$\exp\Bigl[\sum \eta_i t_i - A(\eta)\Bigr] k(t) \tag{5.46}$$

with respect to a measure $\nu$ over $E^s$.

For a proof, see, for example, TSH2, Section 2.7, Lemma 8.

Let us now apply this theorem to the case of two independent exponential families with densities (5.10). Then it follows from Theorem 5.17 that $(T_1 + U_1, \ldots, T_s + U_s)$ is also distributed according to an $s$-dimensional exponential family, and by induction, this result extends to the sum of more than two independent terms. In particular, let $X_1, \ldots, X_n$ be independently distributed, each according to a one-parameter exponential family with density

$$\exp[\eta T_i(x_i) - A_i(\eta)]\, h_i(x_i). \tag{5.47}$$

Then, the sum $\sum_{i=1}^n T_i(X_i)$ is again distributed according to a one-parameter exponential family. In fact, the sum of independent Poisson or normal variables again has a distribution of the same type, and the same is true for a sum of independent binomial variables with common $p$, or a sum of independent gamma variables $\Gamma(\alpha_i, b)$ with common $b$.

The normal distributions $N(\xi, \sigma^2)$ for fixed $\sigma$ constitute both a one-parameter exponential family (Example 5.13) and a location family (Table 4.1). It is natural to ask whether there are any other families that enjoy this double advantage. Another example is obtained by putting $X = \log Y$, where $Y$ has the gamma distribution $\Gamma(\alpha, b)$ given by (5.41), and where the location parameter $\theta$ is $\theta = \log b$. Since multiplication of a random variable by a constant $c \neq 0$ preserves both the exponential and location structure, a more general example is provided by the random variable $c \log Y$ for any $c \neq 0$. It was shown by Dynkin (1951) and Ferguson (1962) that the cases in which $X$ is normal or is equal to $c \log Y$, with $Y$ being gamma, provide the only examples of exponential location families.

The $\Gamma(\alpha, b)$ distribution, with known shape parameter $\alpha$, constitutes an example of an exponential scale family. Another example of an exponential scale family is provided by the inverse Gaussian distribution (see Problem 5.22), which has been extensively studied by Tweedie (1957). For a general treatment of these and other results relating exponential and group families, see Barndorff-Nielsen et al. (1992) or Barndorff-Nielsen (1988).

6 Sufficient Statistics

The starting point of a statistical analysis, as formulated in the preceding sections, is a random observable $X$ taking on values in a sample space $\mathcal{X}$, and a family of possible distributions of $X$. It often turns out that some part of the data carries no information about the unknown distribution and that $X$ can therefore be replaced by some statistic $T = T(X)$ (not necessarily real-valued) without loss of information. A statistic $T$ is said to be sufficient for $X$, or for the family $\mathcal{P} = \{P_\theta, \theta \in \Omega\}$ of possible distributions of $X$, or for $\theta$, if the conditional distribution of $X$ given $T = t$ is independent of $\theta$ for all $t$. This definition is not quite precise and we shall return to it later in this section. However, consider first in what sense a sufficient statistic $T$ contains all the information about $\theta$ contained in $X$.
For that purpose, suppose that an investigator reports the value of $T$, but on being asked for the full data, admits that they have been discarded. In an effort at reconstruction, one can use a random mechanism (such as a pseudo-random number generator) to obtain a random quantity $X'$ distributed according to the conditional distribution of $X$ given $t$. (This would not be possible, of course, if the conditional distribution depended on the unknown $\theta$.) Then the unconditional distribution of $X'$ is the same as that of $X$, that is, $P_\theta(X' \in A) = P_\theta(X \in A)$ for all $A$, regardless of the value of $\theta$. Hence, from a knowledge of $T$ alone, it is possible to construct a quantity $X'$ which is completely equivalent to the original $X$. Since $X$ and $X'$ have the same distribution for all $\theta$, they provide exactly the same information about $\theta$ (for example, the estimators $\delta(X)$ and $\delta(X')$ have identical distributions for any $\theta$). In this sense, a sufficient statistic provides a reduction of the data without loss of information.

This property holds, of course, only as long as attention is restricted to the model $\mathcal{P}$ and no distributions outside $\mathcal{P}$ are admitted as possibilities. Thus, in particular, restriction to $T$ is not appropriate when testing the validity of $\mathcal{P}$.

The construction of $X'$ is, in general, effected with the help of an independent random mechanism. An estimator $\delta(X')$ depends, therefore, not only on $T$ but also on this mechanism. It is thus not an estimator as defined in Section 1, but a randomized estimator. Quite generally, if $X$ is the basic random observable, a randomized estimator of $g(\theta)$ is a rule which assigns to each possible outcome $x$ of $X$ a random variable $Y(x)$ with a known distribution. When $X = x$, an observation of $Y(x)$ will be taken and will constitute the estimate of $g(\theta)$. The risk, defined by (1.10), of the resulting estimator is then

$$\int_{\mathcal{X}} \left[ \int_{\mathcal{Y}} L(\theta, y)\, dP^{Y|X=x}(y) \right] dP_\theta^X(x),$$

where the probability measure in the inside integral does not depend on $\theta$. With this representation, the operational significance of sufficiency can be formally stated as follows.

Theorem 6.1 Let $X$ be distributed according to $P_\theta \in \mathcal{P}$ and let $T$ be sufficient for $\mathcal{P}$. Then, for any estimator $\delta(X)$ of $g(\theta)$, there exists a (possibly randomized) estimator based on $T$ which has the same risk function as $\delta(X)$.

Proof. Let $X'$ be constructed as above, so that $\delta(X')$ is a (possibly randomized) estimator depending on the data only through $T$. Since $\delta(X)$ and $\delta(X')$ have the same distribution, they also have the same risk function. ✷

Example 6.2 Poisson sufficient statistic. Let $X_1, X_2$ be independent Poisson variables with common expectation $\lambda$, so that their joint distribution is

$$P(X_1 = x_1, X_2 = x_2) = \frac{\lambda^{x_1 + x_2}}{x_1!\, x_2!} e^{-2\lambda}.$$

Then, the conditional distribution of $X_1$ given $X_1 + X_2 = t$ is given by

$$P(X_1 = x_1 \mid X_1 + X_2 = t) = \frac{\lambda^t e^{-2\lambda}/x_1!(t - x_1)!}{\sum_{y=0}^t \lambda^t e^{-2\lambda}/y!(t - y)!} = \frac{1}{x_1!(t - x_1)!} \left[ \sum_{y=0}^t \frac{1}{y!(t - y)!} \right]^{-1}.$$

Since this is independent of $\lambda$, so is the conditional distribution given $t$ of $(X_1, X_2 = t - X_1)$, and hence $T = X_1 + X_2$ is a sufficient statistic for $\lambda$. To see how to reconstruct $(X_1, X_2)$ from $T$, note that

$$\sum_{y=0}^t \frac{1}{y!(t - y)!} = \frac{2^t}{t!},$$

so that

$$P(X_1 = x_1 \mid X_1 + X_2 = t) = \binom{t}{x_1} \left(\frac{1}{2}\right)^{x_1} \left(\frac{1}{2}\right)^{t - x_1},$$

that is, the conditional distribution of $X_1$ given $t$ is the binomial distribution $b(1/2, t)$ corresponding to $t$ trials with success probability $1/2$. Let $X_1'$ and $X_2' = t - X_1'$ be respectively the number of heads and the number of tails in $t$ tosses with a fair coin. Then, the joint conditional distribution of $(X_1', X_2')$ given $t$ is the same as that of $(X_1, X_2)$ given $t$. ∥
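The conditional calculation can be illustrated by simulation. The following sketch (a Python/NumPy illustration of ours, with $\lambda$ and $t$ chosen arbitrarily) conditions simulated Poisson pairs on $X_1 + X_2 = t$ and compares the empirical conditional distribution of $X_1$ with the $b(1/2, t)$ probabilities; no trace of $\lambda$ should remain.

```python
import numpy as np
from math import comb

rng = np.random.default_rng(2)
lam, t, reps = 3.0, 6, 2_000_000
X1, X2 = rng.poisson(lam, reps), rng.poisson(lam, reps)
x1_given_t = X1[X1 + X2 == t]                # keep only draws with T = t
emp = np.bincount(x1_given_t, minlength=t + 1) / len(x1_given_t)
binom = np.array([comb(t, x) / 2**t for x in range(t + 1)])
print(np.round(emp, 3))    # empirical conditional distribution of X1 given t
print(np.round(binom, 3))  # b(1/2, t) probabilities
```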
Example 6.3 Sufficient statistic for a uniform distribution. Let $X_1, \ldots, X_n$ be independently distributed according to the uniform distribution $U(0, \theta)$. Let $T$ be the largest of the $n$ $X$'s, and consider the conditional distribution of the remaining $n - 1$ $X$'s given $t$. Thinking of the $n$ variables as $n$ points on the real line, it is intuitively obvious, and not difficult to see formally (Problem 6.2), that the remaining $n - 1$ points (after the largest is fixed at $t$) behave like $n - 1$ points selected at random from the interval $(0, t)$. Since this conditional distribution is independent of $\theta$, $T$ is sufficient. Given only $T = t$, it is obvious how to reconstruct the original sample: Select $n - 1$ points at random on $(0, t)$. ∥

Example 6.4 Sufficient statistic for a symmetric distribution. Suppose that $X$ is normally distributed with mean zero and unknown variance $\sigma^2$ (or more generally that $X$ is symmetrically distributed about zero). Then, given that $|X| = t$, the only two possible values of $X$ are $\pm t$, and by symmetry, the conditional probability of each is $1/2$. The conditional distribution of $X$ given $t$ is thus independent of $\sigma$, and $T = |X|$ is sufficient. In fact, a random variable $X'$ with the same distribution as $X$ can be obtained from $T$ by tossing a fair coin and letting $X' = T$ or $-T$ as the coin falls heads or tails. ∥
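A simulation of Example 6.3 makes the reconstruction concrete. In the sketch below (a Python illustration of ours, with arbitrary $\theta$ and $n$), a sample $X'$ is rebuilt from $T = X_{(n)}$ alone by scattering $n - 1$ points at random on $(0, T)$; statistics computed from $X$ and from $X'$ should then agree in distribution.

```python
import numpy as np

rng = np.random.default_rng(3)
theta, n, reps = 2.0, 5, 200_000
X = rng.uniform(0, theta, size=(reps, n))
T = X.max(axis=1)
# Reconstruction: keep T as one point and scatter n-1 fresh points on (0, T).
rest = rng.uniform(0, 1, size=(reps, n - 1)) * T[:, None]
Xprime = np.hstack([rest, T[:, None]])
# X and X' should agree in distribution; compare a couple of statistics.
print(X.mean(), Xprime.mean())  # both ~ theta/2
print(np.median(X, axis=1).mean(), np.median(Xprime, axis=1).mean())
```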
The definition of sufficiency given at the beginning of the section depends on the concept of conditional probability, and this, unfortunately, is not capable of a treatment which is both general and elementary. Difficulties arise when $P_\theta(T = t) = 0$, so that the conditioning event has probability zero. The definition of conditional probability can then be changed at one or more values of $t$ (in fact, at any set of $t$ values which has probability zero) without affecting the distribution of $X$, which is the result of combining the distribution of $T$ with the conditional distribution of $X$ given $T$.

In elementary treatments of probability theory, the conditional probability $P(X \in A \mid t)$ is considered for fixed $t$ as defining the conditional distribution of $X$ given $T = t$. A more general approach can be obtained by a change of viewpoint, namely by considering $P(X \in A \mid t)$ for fixed $A$ as a function of $t$, defined in such a way that in combination with the distribution of $T$, it leads back to the distribution of $X$. (See TSH2, Chapter 2, Section 4 for details.) This provides a justification, for instance, of the assignment of conditional probabilities in Example 6.4 and Example 6.10. In the same way, the conditional expectation $\eta(t) = E[\delta(X)|t]$ can be defined in such a way that

$$E\eta(T) = E\delta(X), \tag{6.1}$$

that is, so that the expected value of the conditional expectation is equal to the unconditional expectation.

Conditional expectation essentially satisfies the usual laws of expectation. However, since it is only determined up to sets of probability zero, these laws can only hold a.e. More specifically, we have with probability 1

$$E[af(X) + bg(X)|t] = aE[f(X)|t] + bE[g(X)|t]$$
and
$$E[b(T)f(X)|t] = b(t)E[f(X)|t]. \tag{6.2}$$

As just discussed, the functions $P(A|t)$ are not uniquely defined, and the question arises whether determinations exist which, for each fixed $t$, define a conditional probability. It turns out that this is not always possible. [See Romano and Siegel (1986), who give an example due to Ash (1972). A more detailed treatment is Blackwell and Ryll-Nardzewski (1963).] It is possible when the sample space is Euclidean, as will be the case throughout most of this book (see TSH2, Chapter 2, Section 5). When this is the case, a statistic $T$ can be defined to be sufficient if there exists a determination of the conditional distribution functions of $X$ given $t$ which is independent of $\theta$.

The determination of sufficient statistics by means of the definition is inconvenient since it requires, first, guessing a statistic $T$ that might be sufficient and, then, checking whether the conditional distribution of $X$ given $t$ is independent of $\theta$. However, for dominated families, that is, when the distributions have densities with respect to a common measure, there is a simple criterion for sufficiency.

Theorem 6.5 (Factorization Criterion) A necessary and sufficient condition for a statistic $T$ to be sufficient for a family $\mathcal{P} = \{P_\theta, \theta \in \Omega\}$ of distributions of $X$ dominated by a $\sigma$-finite measure $\mu$ is that there exist non-negative functions $g_\theta$ and $h$ such that the densities $p_\theta$ of $P_\theta$ satisfy

$$p_\theta(x) = g_\theta[T(x)]\, h(x) \quad (\text{a.e. } \mu). \tag{6.3}$$

Proof. See TSH2, Section 2.6, Theorem 8 and Corollary 1. ✷

Example 6.6 Continuation of Example 6.2. Suppose that $X_1, \ldots, X_n$ are iid according to a Poisson distribution with expectation $\lambda$. Then

$$P_\lambda(X_1 = x_1, \ldots, X_n = x_n) = \lambda^{\sum x_i} e^{-n\lambda} \Big/ \prod x_i!.$$

This satisfies (6.3) with $T = \sum X_i$, which is therefore sufficient. ∥

Example 6.7 Normal sufficient statistic. Let $X_1, \ldots, X_n$ be iid as $N(\xi, \sigma^2)$ so that their joint density is

$$p_{\xi,\sigma}(x) = \frac{1}{(\sqrt{2\pi}\,\sigma)^n} \exp\left( -\frac{1}{2\sigma^2} \sum x_i^2 + \frac{\xi}{\sigma^2} \sum x_i - \frac{n}{2\sigma^2} \xi^2 \right). \tag{6.4}$$

Then it follows from the factorization criterion that $T = (\sum X_i^2, \sum X_i)$ is sufficient for $\theta = (\xi, \sigma^2)$. Sometimes it is more convenient to replace $T$ by the equivalent statistic $T' = (\bar{X}, S^2)$, where $\bar{X} = \sum X_i/n$ and $S^2 = \sum (X_i - \bar{X})^2 = \sum X_i^2 - n\bar{X}^2$. The two representations are equivalent in that they identify the same points of the sample space, that is, $T(x) = T(y)$ if and only if $T'(x) = T'(y)$. ∥

Example 6.8 Continuation of Example 6.3. The joint density of a sample $X_1, \ldots, X_n$ from $U(0, \theta)$ is

$$p_\theta(x) = \frac{1}{\theta^n} \prod_{i=1}^n I(0 < x_i)\, I(x_i < \theta) \tag{6.5}$$

where the indicator function $I(\cdot)$ is defined in (2.6). Now

$$\prod_{i=1}^n I(0 < x_i)\, I(x_i < \theta) = I(x_{(n)} < \theta) \prod_{i=1}^n I(0 < x_i)$$

where $x_{(n)}$ is the largest of the $x$ values. It follows from Theorem 6.5 that $X_{(n)}$ is sufficient, as had been shown directly in Example 6.3. ∥

As a final illustration, consider Example 6.4 from the present point of view.

Example 6.9 Continuation of Example 6.4. If $X$ is distributed as $N(0, \sigma^2)$, the density of $X$ is

$$\frac{1}{\sqrt{2\pi}\,\sigma} e^{-x^2/2\sigma^2},$$

which depends on $x$ only through $x^2$, so that (6.3) holds with $T(x) = x^2$. As always, of course, there are many equivalent statistics such as $|X|$, $X^4$, or $e^{X^2}$. ∥

Quite generally, two statistics, $T = T(X)$ and $T' = T'(X)$, will be said to be equivalent (with respect to a family $\mathcal{P}$ of distributions of $X$) if each is a function of the other a.e. $\mathcal{P}$, that is, if there exists a $\mathcal{P}$-null set $N$ and functions $f$ and $g$ such that $T(x) = f[T'(x)]$ and $T'(x) = g[T(x)]$ for all $x \notin N$. Two such statistics carry the same amount of information.

Example 6.10 Sufficiency of order statistics. Let $X = (X_1, \ldots, X_n)$ be iid according to an unknown continuous distribution $F$ and let $T = (X_{(1)}, \ldots, X_{(n)})$, where $X_{(1)} < \cdots < X_{(n)}$ denotes the ordered observations, the so-called order statistics. By the continuity assumption, the $X$'s are distinct with probability 1. Given $T$, the only possible values for $X$ are the $n!$
vectors (X(i1), · · · , X(in)), and by symmetry, each of these has conditional probability 1/n! The conditional dis-tribution is thus independent of F, and T is sufficient. In fact, a random vector X′ with the same distribution as X can be obtained from T by labeling the n coordinates of T at random. Equivalent to T is the statistic U = (U1, . . . , Un) 1.6 ] SUFFICIENT STATISTICS 37 where U1 = Xi, U2 = XiXj (i ̸= j), . . . , Un = X1 · · · Xn, and also the statistic V = (V1, . . . , Vn) where Vk = Xk 1 + · · · + Xk n (Problem 6.9). ∥ Equivalent forms of a sufficient statistic reduce the data to the same extent. There may, however, also exist sufficient statistics which provide different degrees of reduction. Example 6.11 Different sufficient statistics. Let X1, . . . , Xn be iid as N(0, σ 2) and consider the statistics T1(X) = (X1, . . . , Xn), T2(X) = (X2 1, . . . , X2 n), T3(X) = (X2 1 + · · · + X2 m, X2 m+1 + · · · + X2 n), T4(X) = X2 1 + · · · + X2 n. These are all sufficient (Problem 6.5), with Ti providing increasing reduction of the data as i increases. ∥ It follows from the interpretation of sufficiency given at the beginning of this section that if T is sufficient and T = H(U), then U is also sufficient. Knowledge of U implies knowledge of T and hence permits reconstruction of the original data. Furthermore, T provides a greater reduction of the data than U unless H is 1:1, in which case T and U are equivalent. A sufficient statistic T is said to be minimal if of all sufficient statistics it provides the greatest possible reduction of the data, that is, if for any sufficient statistic U there exists a function H such that T = H(U) (a.e. P). Minimal sufficient statistics can be shown to exist under weak assumptions (see, for example, Bahadur, 1954), but exceptions are possible (Pitcher 1957, Landers and Rogge 1972). Minimal sufficient statistics exist, in particular if the basic measurable space is Euclidean in the sense of Example 2.2 and the family P of distributions is dominated (Bahadur 1957). It is typically fairly easy to construct a minimal sufficient statistic. For the sake of simplicity, we shall restrict attention to the case that the distributions of P all have the same support (but see Problems 6.11 - 6.17). Theorem 6.12 Let P be a finite family with densities pi, i = 0, 1, . . . , k, all having the same support. Then, the statistic T (X) = p1(X) p0(X), p2(X) p0(X), . . . , pk(X) p0(X)  (6.6) is minimal sufficient. The proof is an easy consequence of the following corollary of Theorem 6.5 (Problem 6.6). Corollary 6.13 Under the assumptions of Theorem 6.5, a necessary and sufficient condition for a statistic U to be sufficient is that for any fixed θ and θ0, the ratio pθ(x)/pθ0(x) is a function only of U(x). Proof of Theorem 6.12. The corollary states that U is a sufficient statistic for P if and only if T is a function of U, and this proves T to be minimal. ✷ 38 PREPARATIONS [ 1.6 Theorem 6.12 immediately extends to the case that P is countable. Generaliza-tions to uncountable families are also possible (see Lehmann and Scheff´ e 1950, Dynkin 1951, and Barndorff-Nielsen, Hoffmann-Jorgensen, and Pedersen 1976), but must contend with measure-theoretic difficulties. In most applications, min-imal sufficient statistics can be obtained for uncountable families by combining Theorem 6.12 with the following lemma. 
Lemma 6.14 If $\mathcal{P}$ is a family of distributions with common support and $\mathcal{P}_0 \subset \mathcal{P}$, and if $T$ is minimal sufficient for $\mathcal{P}_0$ and sufficient for $\mathcal{P}$, it is minimal sufficient for $\mathcal{P}$.

Proof. If $U$ is sufficient for $\mathcal{P}$, it is also sufficient for $\mathcal{P}_0$, and hence $T$ is a function of $U$. ✷

Example 6.15 Location families. As an application, let us now determine minimal sufficient statistics for a sample $X_1, \ldots, X_n$ from a location family $\mathcal{P}$, that is, when

$$p_\theta(x) = f(x_1 - \theta) \cdots f(x_n - \theta), \tag{6.7}$$

where $f$ is assumed to be known. By Example 6.10, sufficiency permits the rather trivial reduction to the order statistics for all $f$. However, this reduction uses only the iid assumption and neither the special structure (6.7) nor the knowledge of $f$. To illustrate the different possibilities that arise when this knowledge is utilized, we shall take for $f$ the six densities of Table 4.1, each with $b = 1$.

(i) Normal. If $\mathcal{P}_0$ consists of the two distributions $N(\theta_0, 1)$ and $N(\theta_1, 1)$, it follows from Theorem 6.12 that the minimal sufficient statistic for $\mathcal{P}_0$ is $T(X) = p_{\theta_1}(X)/p_{\theta_0}(X)$, which is equivalent to $\bar{X}$. Since $\bar{X}$ is sufficient for $\mathcal{P} = \{N(\theta, 1), -\infty < \theta < \infty\}$ by the factorization criterion, it is minimal sufficient.

(ii) Exponential. If the $X$'s are distributed as $E(\theta, 1)$, it is easily seen that $X_{(1)}$ is minimal sufficient (Problem 6.17).

(iii) Uniform. For a sample from $U(\theta - 1/2, \theta + 1/2)$, the minimal sufficient statistic is $(X_{(1)}, X_{(n)})$ (Problem 6.16).

In these three instances, sufficiency was able to reduce the original $n$-dimensional data to one or two dimensions. Such extensive reductions are not possible for the remaining three distributions of Table 4.1.

(iv) Logistic. The joint density of a sample from $L(\theta, 1)$ is

$$p_\theta(x) = \prod \exp[-(x_i - \theta)] \big/ \{1 + \exp[-(x_i - \theta)]\}^2. \tag{6.8}$$

Consider a subfamily $\mathcal{P}_0$ consisting of the distributions (6.8) with $\theta_0 = 0$ and $\theta_1, \ldots, \theta_k$. Then by Theorem 6.12, the minimal sufficient statistic for $\mathcal{P}_0$ is $T(X) = [T_1(X), \ldots, T_k(X)]$, where

$$T_j(x) = e^{n\theta_j} \prod_{i=1}^n \left[ \frac{1 + e^{-x_i}}{1 + e^{-x_i + \theta_j}} \right]^2. \tag{6.9}$$

We shall now show that for $k = n + 1$, $T(X)$ is equivalent to the order statistics, that is, that $T(x) = T(y)$ if and only if $x = (x_1, \ldots, x_n)$ and $y = (y_1, \ldots, y_n)$ have the same order statistics, which means that one is a permutation of the other. The equation $T_j(x) = T_j(y)$ is equivalent to

$$\prod \left[ \frac{1 + \exp(-x_i)}{1 + \exp(-x_i + \theta_j)} \right]^2 = \prod \left[ \frac{1 + \exp(-y_i)}{1 + \exp(-y_i + \theta_j)} \right]^2,$$

and hence $T(x) = T(y)$ to

$$\prod_{i=1}^n \frac{1 + \xi u_i}{1 + u_i} = \prod_{i=1}^n \frac{1 + \xi v_i}{1 + v_i} \quad\text{for } \xi = \xi_1, \ldots, \xi_{n+1}, \tag{6.10}$$

where $\xi_j = e^{\theta_j}$, $u_i = e^{-x_i}$, and $v_i = e^{-y_i}$. Now the left- and right-hand sides of (6.10) are polynomials in $\xi$ of degree $n$ which agree for $n + 1$ values of $\xi$ if and only if the coefficients of $\xi^r$ agree for all $r = 0, 1, \ldots, n$. For $r = 0$, this implies $\prod(1 + u_i) = \prod(1 + v_i)$, so that (6.10) reduces to $\prod(1 + \xi u_i) = \prod(1 + \xi v_i)$ for $\xi = \xi_1, \ldots, \xi_{n+1}$, and hence for all $\xi$. It follows that $\prod(\eta + u_i) = \prod(\eta + v_i)$ for all $\eta$, so that these two polynomials in $\eta$ have the same roots. Since this is equivalent to the $x$'s and $y$'s having the same order statistics, the proof is complete.

Similar arguments show that in the Cauchy and double exponential cases, too, the order statistics are minimal sufficient (Problem 6.10). This is, in fact, the typical situation for location families, examples (i) through (iii) being happy exceptions. ∥

As a second application of Theorem 6.12 and Lemma 6.14, let us determine minimal sufficient statistics for exponential families.
Corollary 6.16 (Exponential Families) Let $X$ be distributed with density (5.2). Then, $T = (T_1, \ldots, T_s)$ is minimal sufficient provided the family (5.2) satisfies one of the following conditions:

(i) It is of full rank.

(ii) The parameter space contains $s + 1$ points $\eta^{(j)}$ $(j = 0, \ldots, s)$ which span $E^s$, in the sense that they do not belong to a proper affine subspace of $E^s$.

Proof. That $T$ is sufficient follows immediately from Theorem 6.5. To prove minimality under assumption (i), let $\mathcal{P}_0$ be a subfamily consisting of $s + 1$ distributions $\eta^{(j)} = (\eta_1^{(j)}, \ldots, \eta_s^{(j)})$, $j = 0, 1, \ldots, s$. Then, the minimal sufficient statistic for $\mathcal{P}_0$ is equivalent to

$$\left( \sum_i (\eta_i^{(1)} - \eta_i^{(0)}) T_i(X),\ \ldots,\ \sum_i (\eta_i^{(s)} - \eta_i^{(0)}) T_i(X) \right),$$

which is equivalent to $T = [T_1(X), \ldots, T_s(X)]$, provided the $s \times s$ matrix $\|\eta_i^{(j)} - \eta_i^{(0)}\|$ is nonsingular. A subfamily $\mathcal{P}_0$ for which this condition is satisfied exists under the assumption of full rank. The proof of minimality under assumption (ii) is similar. ✷

It is seen from this result that the sufficient statistics $T$ of Examples 6.6 and 6.7 are minimal. The following example illustrates the applicability of part (ii).

Example 6.17 Minimal sufficiency in curved exponential families. Let $X_1, X_2, \ldots, X_n$ have joint density (6.4), but, as in Example 5.4, assume that $\xi = \sigma$, so the parameter space is the curve of Figure 10.1 (see Note 10.6). The statistic $T = (\sum X_i, \sum X_i^2)$ is sufficient, and it is also minimal by Corollary 6.16. To see this, recall that the natural parameter is $\eta = (1/\xi, -1/2\xi^2)$, and choose

$$\eta^{(0)} = \left(1, -\tfrac{1}{2}\right),\quad \eta^{(1)} = \left(2, -\tfrac{1}{8}\right),\quad \eta^{(2)} = \left(3, -\tfrac{1}{18}\right),$$

and note that the $2 \times 2$ matrix

$$\begin{pmatrix} 2 - 1 & 3 - 1 \\ -\tfrac{1}{8} + \tfrac{1}{2} & -\tfrac{1}{18} + \tfrac{1}{2} \end{pmatrix}$$

has rank 2 and is invertible. In contrast, suppose that the parameters are restricted according to $\xi = \sigma^2$, another curved exponential family. This defines an affine subspace (with zero curvature) and the sufficient statistic $T$ is no longer minimal (Problem 6.20). ∥

Let $X_1, \ldots, X_n$ be iid, each with density (5.2), assumed to be of full rank. Then, the joint distribution of the $X$'s is again full-rank exponential, with $T = (T_1^*, \ldots, T_s^*)$ where $T_i^* = \sum_{j=1}^n T_i(X_j)$. This shows that in a sample from the exponential family (5.2), the data can be reduced to an $s$-dimensional sufficient statistic, regardless of the sample size.

The reduction of a sample to a smaller number of sufficient statistics greatly simplifies the statistical analysis, and it is therefore interesting to ask what other families permit such a reduction. The dimensionality of a sufficient statistic is a property which differs from those considered so far, in that it depends not only on the sets of points of the sample space for which the statistic takes on the same value but also on these values; that is, the dimensionality may not be the same for different representations of a sufficient statistic (see, for example, Denny, 1964, 1969). To make the concept of dimensionality meaningful, let us call $T$ a continuous $s$-dimensional sufficient statistic over a Euclidean sample space $\mathcal{X}$ if the assumptions of Theorem 6.5 hold, if $T(x) = [T_1(x), \ldots, T_s(x)]$ where $T$ is continuous, and if the factorization (6.3) holds not only a.e. but for all $x \in \mathcal{X}$.

Theorem 6.18 Suppose $X_1, \ldots, X_n$ are real-valued iid according to a distribution with density $f_\theta(x_i)$ with respect to Lebesgue measure, which is continuous in $x_i$ and whose support for all $\theta$ is an interval $I$. Suppose that for the joint density of X = (X1, . . .
, Xn) pθ(x) = fθ(x1) · · · fθ(xn) there exists a continuous k-dimensional sufficient statistic. Then (i) if k = 1, there exist functions η1, B and h such that (5.1) holds; (ii) k > 1, and if the densities fθ(xi) have continuous partial derivatives with respect to xi, then there exist functions ηi, B and h such that (5.1) holds with s ≤k. For a proof of this result, see Barndorff-Nielsen and Pedersen (1968). A corre-sponding problem for the discrete case is considered by Andersen (1970a). This theorem states essentially that among “smooth” absolutely continuous fam-ilies of distributions with fixed support, exponential families are the only ones that permit dimensional reduction of the sample through sufficiency. It is crucial for 1.6 ] SUFFICIENT STATISTICS 41 this result that the support of the distributions Pθ is independent of θ. In the con-trary case, a simple example of a family possessing a one-dimensional sufficient statistic for any sample size is provided by the uniform distribution (Example 6.3). The Dynkin-Ferguson theorem mentioned at the end of the last section and Theorem 6.18 state roughly that (a) the only location families which are one-dimensional exponential families are the normal and log of gamma distributions and (b) only exponential families permit reduction of the data through sufficiency. Together, these results appear to say that the only location families with fixed support in which a dimensional reduction of the data is possible are the normal and log of gamma families. This is not quite correct, however, because a location family — although it is a one-dimensional family — may also be a curved exponential family. Example 6.19 Location/curved exponential family. Let X1, . . . , Xn be iid with joint density (with respect to Lebesgue measure) C exp  − n i=1 (xi −θ)4 (6.11) = C exp(−nθ4) exp(4θ3xi −6θ2x2 i + 4θx3 i −x4 i ). According to (5.1), this is a three-dimensional exponential family, and it provides an example of a location family with a three-dimensional sufficient statistic sat-isfying all the assumptions of Theorem 6.18. This is a curved exponential family with parameter space = {(θ1, θ2, θ3) : θ1 = θ3 3 , θ2 = θ2 3 }, a curved subset of three-dimensional space. ∥ The tentative conclusion, which had been reached just before Example 6.19 and which was contradicted by this example, is nevertheless basically correct. Typically, a location family with fixed support (−∞, ∞) will not constitute even a curved exponential family and will, therefore, not permit a dimensional reduction of the data without loss of information. Example 6.15 shows that the degree of reduction that can be achieved through sufficiency is extremely variable, and an interesting question is, what characterizes the situations in which sufficiency leads to a substantial reduction of the data? The ability of a sufficient statistic to achieve such a reduction appears to be related to the amount of ancillary information it contains. A statistic V (X) is said to be ancillary if its distribution does not depend on θ, and first-order ancillary if its expectation Eθ[V (X)] is constant, independent of θ. An ancillary statistic by itself contains no information about θ, but minimal sufficient statistics may still contain much ancillary material. In Example 6.15(iv), for instance, the differences X(n) −X(i)(i = 1, . . . , n −1) are ancillary despite the fact that they are functions of the minimal sufficient statistics (X(1), . . . , X(n)). Example 6.20 Location ancillarity. 
Example 6.15(iv) is a particular case of a location family. Quite generally, when sampling from any location family, the differences Xi −Xj, i ̸= j, are ancillary statistics. Similarly, when sampling from scale families, ratios are ancillary. See Problem 6.34 for details. ∥ 42 PREPARATIONS [ 1.6 A sufficient statistic T appears to be most successful in reducing the data if no nonconstant function of T is ancillary or even first-order ancillary, that is, if Eθ[f (T )] = c for all θ ∈ implies f (t) = c (a.e. P). By subtracting c, this condition is seen to be equivalent to Eθ[f (T )] = 0 for all θ ∈ implies f (t) = 0 (a.e. P) (6.12) where P = {Pθ, θ ∈ }. A statistic T satisfying (6.12) is said to be complete. As will be seen later, completeness brings with it substantial simplifications of the statistical situation. Since complete sufficient statistics are particularly effective in reducing the data, it is not surprising that a complete sufficient statistic is always minimal. Proofs are given in Lehmann and Scheff´ e (1950), Bahadur (1957), and Schervish (1995); see also Problem 6.29. What happens to the ancillary statistics when the minimal sufficient statistic is complete is shown by the following result. Theorem 6.21 (Basu’s Theorem) If T is a complete sufficient statistic for the family P = {Pθ, θ ∈ }, then any ancillary statistic V is independent of T . Proof. If V is ancillary, the probability pA = P(V ∈A) is independent of θ for all A. Let ηA(t) = P(V ∈A|T = t). Then, Eθ[ηA(T )] = pA and, hence, by completeness, ηA(t) = pA(a.e. P). This establishes the independence of V and T . ✷ We conclude this section by examining some complete and incomplete families through examples. Theorem 6.22 If X is distributed according to the exponential family (5.2) and the family is of full rank, then T = [T1(X), . . . , Ts(X)] is complete. For a proof, see TSH2 Section 4.3, Theorem 1; Barndorff-Nielsen (1978), Lemma 8.2.; or Brown (1986a), Theorem 2.12. Example 6.23 Completeness in some one-parameter families. We give some examples of complete one-parameter families of distributions. (i) Theorem 6.22 proves completeness of (a) X for the binomial family {b(p, n), 0 < p < 1} (b) X for the Poisson family {P(λ), 0 < λ} (ii) Uniform. Let X1, . . . , Xn be iid according to the uniform distribution U(0, θ), 0 < θ. It was seen in Example 6.3 that T = X(n) is sufficient for θ. To see that T is complete, note that P(T ≤t) = tn/θn, 0 < t < θ, so that T has probability density pθ(t) = ntn−1/θn, 0 < t < θ. (6.13) 1.6 ] SUFFICIENT STATISTICS 43 Suppose Eθf (T ) = 0 for all θ, and let f + and f −be its positive and negative parts, respectively. Then,  θ 0 tn−1f +(t) dt =  θ 0 tn−1f −(t) dt for all θ. It follows that  A tn−1f +(t) dt =  A tn−1f −(t) dt for all Borel sets A, and this implies f = 0 a.e. (iii) Exponential. Let Y1, . . . , Yn be iid according to the exponential distribution E(η, 1). If Xi = e−Yi and θ = e−η, then X1, . . . , Xn iid as U(0, θ) (Problem 6.28), and it follows from (ii) that X(n) or, equivalently, Y(1) is sufficient and complete. ∥ Example 6.24 Completeness in some two-parameter families. (i) Normal N(ξ, σ 2). Theorem 6.22 proves completeness of ( ¯ X, S2) of Example 6.7 in the normal family {N(ξ, σ 2), −∞< ξ < ∞, 0 < σ}. (ii) Exponential E(a, b). Let X1, . . . , Xn be iid according to the exponential distribution E(a, b), −∞< a < ∞, 0 < b, and let T1 = X(1), T2 = [Xi −X(1)]. 
Then, $(T_1, T_2)$ are independently distributed as $E(a, b/n)$ and $\tfrac{b}{2}\chi^2_{2n-2}$, respectively (Problem 6.18), and they are jointly sufficient and complete. Sufficiency follows from the factorization criterion. To prove completeness, suppose that $E_{a,b}[f(T_1, T_2)] = 0$ for all $a, b$. Then if

$$g(t_1, b) = E_b[f(t_1, T_2)], \tag{6.14}$$

we have that for any fixed $b$,

$$\int_a^\infty g(t_1, b)\, e^{-nt_1/b}\, dt_1 = 0 \quad\text{for all } a.$$

It follows from Example 6.23(iii) that $g(t_1, b) = 0$, except on a set $N_b$ of $t_1$ values which has Lebesgue measure zero and which may depend on $b$. Then, by Fubini's theorem, for almost all $t_1$ we have $g(t_1, b) = 0$ a.e. in $b$. Since the densities of $T_2$ constitute an exponential family, $g(t_1, b)$ by (6.14) is a continuous function of $b$ for any fixed $t_1$. It follows that for almost all $t_1$, $g(t_1, b) = 0$, not only a.e. but for all $b$. Applying completeness of $T_2$ to (6.14), we see that for almost all $t_1$, $f(t_1, t_2) = 0$ a.e. in $t_2$. Thus, finally, $f(t_1, t_2) = 0$ a.e. with respect to Lebesgue measure in the $(t_1, t_2)$ plane. [For measurability aspects which have been ignored in this proof, see Lehmann and Scheffé (1955, Theorem 7.1).] ∥

Example 6.25 Minimal sufficient but not complete.

(i) Location uniform. Let $X_1, \ldots, X_n$ be iid according to $U(\theta - 1/2, \theta + 1/2)$, $-\infty < \theta < \infty$. Here, $T = (X_{(1)}, X_{(n)})$ is minimal sufficient (Problem 6.16). On the other hand, $T$ is not complete since $X_{(n)} - X_{(1)}$ is ancillary. For example, $E_\theta[X_{(n)} - X_{(1)} - (n-1)/(n+1)] = 0$ for all $\theta$.

(ii) Curved normal family. In the curved exponential family derived from the $N(\xi, \sigma^2)$ family with $\xi = \sigma$, we have seen (Example 6.17) that the statistic $T = (\sum x_i, \sum x_i^2)$ is minimal sufficient. However, it is not complete since there exists a function $f(T)$ satisfying (6.12). This follows from the fact that we can find unbiased estimators for $\xi$ based on either $\sum X_i$ or $\sum X_i^2$ (see Problem 6.21). ∥

We close this section with an illustration of sufficiency and completeness in logit dose-response models.

Example 6.26 Completeness in the logit model. For the model of Example 5.5, where the $X_i$ are independent $b(p_i, n_i)$, $i = 1, \ldots, m$, that is,

$$P(X_1 = x_1, \ldots, X_m = x_m) = \prod_{i=1}^m \binom{n_i}{x_i} p_i^{x_i} (1 - p_i)^{n_i - x_i}, \tag{6.15}$$

it can be shown that $X = (X_1, \ldots, X_m)$ is minimal sufficient. The natural parameters are the logits $\eta_i = \log[p_i/(1 - p_i)]$, $i = 1, \ldots, m$ [see (5.8)], and if the $p_i$'s are unrestricted, the minimal sufficient statistic is also complete (Problem 6.23). ∥

Example 6.27 Dose-response model. Suppose $n_i$ subjects are each given dose level $d_i$ of a drug, $i = 1, 2$, and that $d_1 < d_2$. The response of each subject is either 0 or 1, independent of the others, and the probability of a successful response is $p_i = \eta_\theta(d_i)$. The joint distribution of the response vector $X = (X_1, X_2)$ is

$$p_\theta(x) = \prod_{i=1}^2 \binom{n_i}{x_i} [\eta_\theta(d_i)]^{x_i} [1 - \eta_\theta(d_i)]^{n_i - x_i}. \tag{6.16}$$

Note the similarity to the model (6.15). The statistic $X$ is minimal sufficient in the model (6.16), and remains so if $\eta_\theta(d_i)$ has the form

$$\eta_\theta(d_i) = 1 - e^{-\theta d_i},\quad d_1 = 1,\ d_2 = 2,\ n_1 = 2,\ n_2 = 1. \tag{6.17}$$

However, it is not complete since

$$E_\theta[I(X_1 = 0) - I(X_2 = 0)] = 0. \tag{6.18}$$

If instead of (6.17), we assume that $\eta_\theta(d_i)$ is given by

$$\eta_\theta(d_i) = 1 - e^{-\theta_1 d_i - \theta_2 d_i^2},\quad i = 1, 2, \tag{6.19}$$

where $d_1/d_2$ is an irrational number, then $X$ is a complete sufficient statistic. These models are special cases of those examined by Messig and Strawderman (1993), who establish conditions for minimal sufficiency and completeness in a large class of dose-response models. ∥
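The identity (6.18) can be verified directly for the design (6.17), since $P_\theta(X_1 = 0) = (e^{-\theta})^2 = e^{-2\theta} = P_\theta(X_2 = 0)$. The following sketch (a Python illustration of ours) checks this over a grid of $\theta$ values.

```python
import numpy as np

# Model (6.16) with eta_theta(d) = 1 - exp(-theta d) and the design (6.17):
# dose d1 = 1 given to n1 = 2 subjects, dose d2 = 2 given to n2 = 1 subject.
for theta in [0.1, 0.5, 1.0, 2.0]:
    P_X1_is_0 = np.exp(-theta * 1) ** 2  # both trials at dose 1 fail
    P_X2_is_0 = np.exp(-theta * 2) ** 1  # the single trial at dose 2 fails
    print(theta, P_X1_is_0 - P_X2_is_0)  # 0 (up to rounding) for every theta
```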
Table 7.1. Convex Functions

    Function φ            Interval (a, b)
    (i)   |x|             −∞ < x < ∞
    (ii)  x^2             −∞ < x < ∞
    (iii) x^p, p ≥ 1      0 < x
    (iv)  1/x^p, p > 0    0 < x
    (v)   e^x             −∞ < x < ∞
    (vi)  −log x          0 < x < ∞

7 Convex Loss Functions

The property of convexity and the associated property of concavity play an important role in point estimation. In particular, the point estimation problem outlined in Section 1 simplifies in a number of ways when the loss function $L(\theta, d)$ is a convex function of $d$.

Definition 7.1 A real-valued function $\phi$ defined over an open interval $I = (a, b)$ with $-\infty \le a < b \le \infty$ is convex if for any $a < x < y < b$ and any $0 < \gamma < 1$

$$\phi[\gamma x + (1 - \gamma)y] \le \gamma\phi(x) + (1 - \gamma)\phi(y). \tag{7.1}$$

The function is said to be strictly convex if strict inequality holds in (7.1) for all indicated values of $x$, $y$, and $\gamma$. A function $\phi$ is concave on $(a, b)$ if $-\phi$ is convex.

Convexity is a very strong condition which implies, for example, that $\phi$ is continuous in $(a, b)$ and has a left and right derivative at every point of $(a, b)$. Proofs of these properties and of the other properties of convex functions stated in the following without proof can be found, for example, in Hardy, Littlewood, and Pólya (1934), Rudin (1966), Roberts and Varberg (1973), or Dudley (1989).

Determination of whether or not a function is convex is often easy with the help of the following two criteria.

Theorem 7.2 (i) If $\phi$ is defined and differentiable on $(a, b)$, then a necessary and sufficient condition for $\phi$ to be convex is that

$$\phi'(x) \le \phi'(y) \quad\text{for all } a < x < y < b. \tag{7.2}$$

The function is strictly convex if and only if the inequality (7.2) is strict for all $x < y$.

(ii) If, in addition, $\phi$ is twice differentiable, then the necessary and sufficient condition (7.2) is equivalent to

$$\phi''(x) \ge 0 \quad\text{for all } a < x < b, \tag{7.3}$$

with strict inequality sufficient (but not necessary) for strict convexity.

Example 7.3 Convex functions. From these criteria, it is easy to see that the functions of Table 7.1 are convex over the indicated intervals. In all these cases, $\phi$ is strictly convex, except in (i) and in (iii) with $p = 1$. ∥
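The defining inequality (7.1) can also be spot-checked numerically. The sketch below (a Python illustration of ours; the sampling interval $(0.1, 10)$ is an arbitrary choice on which all four functions are defined) tests (7.1) on random triples for some of the functions of Table 7.1; zero violations is evidence, not a proof.

```python
import numpy as np

rng = np.random.default_rng(4)
funcs = {"|x|": np.abs, "x**2": np.square, "1/x": lambda x: 1 / x,
         "-log x": lambda x: -np.log(x)}
x, y = rng.uniform(0.1, 10, 100_000), rng.uniform(0.1, 10, 100_000)
g = rng.uniform(0, 1, 100_000)
for name, phi in funcs.items():
    # Count violations of phi(g*x + (1-g)*y) <= g*phi(x) + (1-g)*phi(y).
    bad = phi(g * x + (1 - g) * y) > g * phi(x) + (1 - g) * phi(y) + 1e-12
    print(name, "violations:", int(bad.sum()))
```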
In general, a convex function is strictly convex unless it is linear over some subinterval of $(a, b)$ (Problems 7.1 and 7.6). A basic property of convex functions is contained in the following theorem.

Theorem 7.4 Let $\phi$ be a convex function defined on $I = (a, b)$ and let $t$ be any fixed point in $I$. Then, there exists a straight line

$$y = L(x) = c(x - t) + \phi(t) \tag{7.4}$$

through the point $[t, \phi(t)]$ such that

$$L(x) \le \phi(x) \quad\text{for all } x \in I. \tag{7.5}$$

By definition, a function $\phi$ is convex if the value of the function at the weighted average of two points does not exceed the weighted average of its values at these two points. By induction, this is easily generalized to the average of any finite number of points (Problem 7.8). In fact, the inequality also holds for the weighted average of any infinite set of points, and in this general form, it is known as Jensen's inequality. The weighted average of $\phi$ with respect to the weight function $\Lambda$ is represented by

$$\int_I \phi\, d\Lambda \tag{7.6}$$

where $\Lambda$ is a measure with $\Lambda(I) = 1$. In the particular case that $\Lambda$ assigns measure $\gamma$ and $1 - \gamma$ to the points $x$ and $y$, respectively, this reduces to the right side of (7.1). It is convenient to interpret (7.6) as the expected value of $\phi(X)$, where $X$ is a random variable taking on values in $I$ according to the probability distribution $\Lambda$.

Theorem 7.5 (Jensen's Inequality) If $\phi$ is a convex function defined over an open interval $I$, and $X$ is a random variable with $P(X \in I) = 1$ and finite expectation, then

$$\phi[E(X)] \le E[\phi(X)]. \tag{7.7}$$

If $\phi$ is strictly convex, the inequality is strict unless $X$ is a constant with probability 1.

Proof. Let $y = L(x)$ be the equation of the line which satisfies (7.5) and for which $L(t) = \phi(t)$ when $t = E(X)$. Then,

$$E[\phi(X)] \ge E[L(X)] = L[E(X)] = \phi[E(X)], \tag{7.8}$$

which proves (7.7). If $\phi$ is strictly convex, the inequality in (7.5) is strict for all $x \ne t$, and hence the inequality in (7.8) is strict unless $\phi(X) = E[\phi(X)]$ with probability 1. ✷

Note that the theorem does not exclude the possibility that $E[\phi(X)] = \infty$.

Corollary 7.6 If $X$ is a nonconstant positive random variable with finite expectation, then

$$\frac{1}{E(X)} < E\left(\frac{1}{X}\right) \tag{7.9}$$

and

$$E(\log X) < \log[E(X)]. \tag{7.10}$$

Example 7.7 Entropy distance. For density functions $f$ and $g$, we define the entropy distance between $f$ and $g$, with respect to $f$ (also known as the Kullback-Leibler information of $g$ at $f$, or the Kullback-Leibler distance between $g$ and $f$), as

$$E_f[\log(f(X)/g(X))] = \int \log[f(x)/g(x)]\, f(x)\, dx. \tag{7.11}$$

Corollary 7.6 shows that

$$E_f[\log(f(X)/g(X))] = -E_f[\log(g(X)/f(X))] \ge -\log[E_f(g(X)/f(X))] = 0, \tag{7.12}$$

and hence that the entropy distance is always non-negative, and equals zero if $f = g$. Note that inequality (7.12) also establishes

$$E_f \log[g(X)] \le E_f \log[f(X)], \tag{7.13}$$

which plays an important role in the theory of the EM algorithm of Section 6.4. Entropy distance was explored by Kullback (1968); for an exposition of its properties see, for example, Brown (1986a). Entropy distance has, more recently, found many uses in Bayesian analysis; see, e.g., Berger (1985) or Robert (1994a), and Section 4.5. ∥

In Theorem 6.1, it was seen that if $T$ is a sufficient statistic, then for any statistical procedure there exists an equivalent procedure (i.e., having the same risk function) based only on $T$. We shall now show that in estimation with a strictly convex loss function, a much stronger statement is possible: Given any estimator $\delta(X)$ which is not a function of $T$, there exists a better estimator depending only on $T$.

Theorem 7.8 (Rao-Blackwell Theorem) Let $X$ be a random observable with distribution $P_\theta \in \mathcal{P} = \{P_{\theta'}, \theta' \in \Omega\}$, and let $T$ be sufficient for $\mathcal{P}$. Let $\delta$ be an estimator of an estimand $g(\theta)$, and let the loss function $L(\theta, d)$ be a strictly convex function of $d$. Then, if $\delta$ has finite expectation and risk, $R(\theta, \delta) = EL[\theta, \delta(X)] < \infty$, and if

$$\eta(t) = E[\delta(X)|t], \tag{7.14}$$

the risk of the estimator $\eta(T)$ satisfies

$$R(\theta, \eta) < R(\theta, \delta) \tag{7.15}$$

unless $\delta(X) = \eta(T)$ with probability 1.

Proof. In Theorem 7.5, let $\phi(d) = L(\theta, d)$, let $\delta = \delta(X)$, and let $X$ have the conditional distribution $P^{X|t}$ of $X$ given $T = t$. Then

$$L[\theta, \eta(t)] < E\{L[\theta, \delta(X)]|t\}$$

unless $\delta(X) = \eta(T)$ with probability 1. Taking the expectation on both sides of this inequality yields (7.15), unless $\delta(X) = \eta(T)$ with probability 1. ✷

Some points concerning this result are worth noting.

1. Sufficiency of $T$ is used in the proof only to ensure that $\eta(T)$ does not depend on $\theta$ and hence is an estimator.

2. If the loss function is convex but not strictly convex, the theorem remains true provided the inequality sign in (7.15) is replaced by $\le$. Even in that case, the theorem still provides information beyond the results of Section 6 because it shows that the particular estimator $\eta(T)$ is at least as good as $\delta(X)$.

3. The theorem is not true if the convexity assumption is dropped. Examples illustrating this fact will be given in Chapters 2 and 5.
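The variance reduction promised by the Rao-Blackwell theorem is easy to exhibit in a standard example (our choice of illustration, not the text's): for $X_1, \ldots, X_n$ iid Poisson($\lambda$), $\delta = I(X_1 = 0)$ is unbiased for $g(\lambda) = e^{-\lambda}$, and conditioning on the sufficient statistic $T = \sum X_i$ gives $\eta(T) = E[\delta \mid T] = (1 - 1/n)^T$. The Python sketch below compares the two.

```python
import numpy as np

rng = np.random.default_rng(5)
lam, n, reps = 2.0, 10, 500_000
X = rng.poisson(lam, size=(reps, n))
T = X.sum(axis=1)
delta = (X[:, 0] == 0).astype(float)  # unbiased for g(lambda) = exp(-lambda)
eta = (1 - 1 / n) ** T                # E[delta | T]: the conditioned estimator
print("target   :", np.exp(-lam))
print("means    :", delta.mean(), eta.mean())  # both approximately unbiased
print("variances:", delta.var(), eta.var())    # eta's variance is much smaller
```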
In Section 6, randomized estimators were introduced, and such estimators may be useful, for example, in reducing the maximum risk (see Chapter 5, Example 5.1.8), but this can never be the case when the loss function is convex. Corollary 7.9 Given any randomized estimator of g(θ), there exists a nonran-domized estimator which is uniformly better if the loss function is strictly convex and at least as good when it is convex. Proof. Note first that a randomized estimator can be obtained as a nonrandomized estimator δ∗(X, U), where X and U are independent and U is uniformly distributed on (0, 1). This is achieved by observing X = x and then using U to construct the distribution of Y given X = x, where Y = Y(x) is the random variable employed in the definition of a randomized estimator (Problem 7.10). To prove the theorem, we therefore need to show that given any estimator δ∗(X, U) of g(θ), there exists an estimator δ(X), depending on X only, which has uniformly smaller risk. However, this is an immediate consequence of the Rao-Blackwell theorem since for the observations (X, U), the statistic X is sufficient. For δ(X), one can therefore take the conditional expectation of δ∗(X, U) given X. ✷ An estimator δ is said to be inadmissible if there exists another estimator δ′ which dominates it (that is, such that R(θ, δ′) ≤R(θ, δ) for all θ, with strict inequality for some θ) and admissible if no such estimator δ′ exists. If the loss function L is strictly convex, it follows from Corollary 7.9 that every admissible estimator must be nonrandomized. Another property of admissible estimators in the strictly convex loss case is provided by the following uniqueness result. Theorem 7.10 If L is strictly convex and δ is an admissible estimator of g(θ), and if δ′ is another estimator with the same risk function, that is, satisfying R(θ, δ) = R(θ, δ′) for all θ, then δ′ = δ with probability 1. Proof. If δ∗= 1 2(δ + δ′), then R(θ, δ∗) < 1 2[R(θ, δ) + R(θ, δ′)] = R(θ, δ) (7.16) 1.7 ] CONVEX LOSS FUNCTIONS 49 unless δ = δ′ with probability 1, and (7.16) contradicts the admissibility of δ. ✷ The preceding considerations can be extended to the situation in which the estimand g(θ) = [g1(θ), . . ., gk(θ)] and the estimator δ(X) = [δ1(X), . . ., δk(X)] are vector-valued. Definition 7.11 For any two points x = (x1, . . . , xk) and y = (y1, . . . , yk) in Ek, define γ x+(1−γ )y to be the point with coordinates γ xi +(1−γ )yi, i = 1, . . . , k. (i) A set S in Ek is convex if for any x, y ∈S, the points γ x + (1 −γ )y, 0 < γ < 1 are also in S. (Geometrically, this means that the line segment connecting any two points in S lies in S.) (ii) A real-valued function φ defined over an open convex set S in Ek is convex if (7.1) holds with x and y replaced by x and y; it is strictly convex if the inequality is strict for all x and y. Example 7.12 Convex combination. If φj is a convex function of a real variable defined over an interval Ij for each j = 1, . . . , k, then for any positive constants a1, . . . , ak φ(x) = ajφj(xj) (7.17) is a convex function defined over the k-dimensional rectangle with sides I1, . . . , Ik; it is strictly convex, provided φ1, . . . , φk are all strictly convex. This example implies, in particular, that the loss function L(θ, d) = ai[di −gi(θ)]2 (7.18) is strictly convex. ∥ A useful criterion to determine whether a given function φ is convex is the following generalization of (7.3). Theorem 7.13 Let φ be defined over an open convex set S in Ek and twice differ-entiable in S. 
Then, a necessary and sufficient condition for φ to be convex is that the k × k matrix with ijth element ∂2φ(x1, . . . , xk)/∂xi∂xj, which is known as the Hessian matrix, is positive semidefinite; if the matrix is positive definite, then φ is strictly convex. Example 7.14 Quadratic loss. Consider the loss function L(θ, d) = aij[di −gi(θ)][dj −gj(θ)]. (7.19) Since ∂2L/∂di∂dj = aij, L is strictly convex, provided the matrix ||aij|| is positive definite. ∥ Let us now consider some consequences of adopting a convex loss function in a location model. In Section 1, it was pointed out that there exists a unique number a minimizing (xi −a)2, namely ¯ x, and that the minimizing value of n i=1|xi −a| is either unique (when n is odd) or the minimizing values constitute an interval. This interval structure of the minimizing values does not hold, for example, when minimizing √|xi −a|. In the case n = 2, for instance, there exist two minimizing 50 PREPARATIONS [ 1.7 values, a = x1 and a = x2 (Problem 7.12). This raises the general question of the set of values a minimizing ρ(xi −a), which, in turn, is a special case of the following problem. Let X be a random variable and L(θ, d) = ρ(d −θ) a loss function, with ρ even. Then, what can be said about the set of values a minimizing E[ρ(X−a)]? This specializes to the earlier case if X takes on the values x1, . . . , xn with probabilities 1/n each. Theorem 7.15 Let ρ be a convex function defined on (−∞, ∞) and X a random variable such that φ(a) = E[ρ(X −a)] is finite for some a. If ρ is not monotone, φ(a) takes on its minimum value and the set on which this value is taken is a closed interval. If ρ is strictly convex, the minimizing value is unique. The proof is based on the following lemma. Lemma 7.16 Let φ be a convex function on (−∞, ∞) which is bounded below and suppose that φ is not monotone. Then, φ takes on its minimum value; the set S on which this value is taken on is a closed interval and is a single point when φ is strictly convex. Proof. Since φ is convex and not monotone, it tends to ∞as x →±∞. Since φ is also continuous, it takes on its minimizing value. That S is an interval follows from convexity and that it is closed follows from continuity. ✷ Proof of Theorem 7.15. By the lemma, it is enough to prove that φ is (strictly) convex and not monotone. That φ is not monotone follows from that fact that φ(a) →∞as a →±∞. This latter property of φ is a consequence of the facts that X −a tends in probability to ∓∞as a →±∞and that ρ(t) →∞as t →±∞. (Strict) convexity of φ follows from the corresponding property of ρ. ✷ Example 7.17 Squared error loss. Let ρ(t) = t2 and suppose that E(X2) < ∞. Since ρ is strictly convex, if follows that φ(a) has a unique minimizing value. If E(X) = µ, which by assumption is finite, we have, in fact, φ(a) = E(X −a)2 = E(X −µ)2 + (µ −a)2, (7.20) which shows that φ(a) is a minimum if and only if a = µ. ∥ Example 7.18 Absolute error loss. Let ρ(t) = |t| and suppose that E|X| < ∞. Since ρ is convex but not strictly convex, it follows from Theorem 7.15 that φ(a) takes on its minimum value and that the set S of minimizing values is a closed interval. The set S is, in fact, the set of medians of X (Problems 1.7 and 1.8). ∥ The following is a useful consequence of Theorem 7.15 (see also Problem 7.27). Corollary 7.19 Under the assumptions of Theorem 7.15, suppose that ρ is even and X is symmetric about µ. Then, φ(a) attains its minimum at a = µ. Proof. By Theorem 7.15 the minimum is taken on. 
If $\mu + c$ is a minimizing value, so is $\mu - c$ and so, therefore, are all values $a$ between $\mu - c$ and $\mu + c$, which includes $a = \mu$. ✷

Now consider an example in which $\rho$ is not convex.

Example 7.20 Nonconvex loss. Let $\rho(t) = 1$ if $|t| \ge k$ and $\rho(t) = 0$ otherwise. Minimizing $\phi(a)$ is then equivalent to maximizing $\psi(a) = P(|X - a| < k)$. Consider the following two special cases (Problem 7.22):

(i) The distribution of $X$ has a probability density (with respect to Lebesgue measure) which is continuous, unimodal, and such that $f(x)$ decreases strictly as $x$ moves away from the mode in either direction. Then, there exists a unique value $a$ for which $f(a - k) = f(a + k)$, and this is the unique maximizing value of $\psi(a)$.

(ii) Suppose that $f$ is even and U-shaped with $f(x)$ attaining its maximum at $x = \pm A$ and $f(x) = 0$ for $|x| > A$. Then, $\psi(a)$ attains its maximum at the two points $a = -A + k$ and $a = A - k$. ∥

Convex loss functions have been seen to lead to a number of simplifications of estimation problems. One may wonder, however, whether such loss functions are likely to be realistic. If $L(\theta, d)$ represents not just a measure of inaccuracy but a real (for example, financial) loss, one may argue that all such losses are bounded: once you have lost all, you cannot lose any more. On the other hand, if $d$ can take on all values in $(-\infty, \infty)$ or $(0, \infty)$, no nonconstant bounded function can be convex (Problem 7.18). Unfortunately, bounded loss functions with unbounded $d$ can lead to completely unreasonable estimators (see, for example, Theorem 2.1.15). The reason is roughly that arbitrarily large errors can then be committed with essentially no additional penalty and their leverage used to unfair advantage. Perhaps convex loss functions result in more reasonable estimators because the large penalties they exact for large errors compensate for the unrealistic assumption of unbounded $d$: They make such values so expensive that the estimator will try hard to avoid them.

The most widely used loss function is squared error

$$L(\theta, d) = [d - g(\theta)]^2 \tag{7.21}$$

or, slightly more generally, weighted squared error

$$L(\theta, d) = w(\theta)[d - g(\theta)]^2. \tag{7.22}$$

Since these are strictly convex in $d$, the simplifications represented by Theorem 7.8, Corollary 7.9, and Theorem 7.10 are valid in these cases. The most slowly growing even convex loss function is absolute error

$$L(\theta, d) = |d - g(\theta)|. \tag{7.23}$$

The faster the loss function increases, the more attention it pays to extreme values of the estimators and hence to outlying observations, so that the performance of the resulting estimators is strongly influenced by the tail behavior of the assumed distribution of the observable random variables. As a consequence, fast-growing loss functions lead to estimators that tend to be sensitive to the assumptions made about this tail behavior, and these assumptions typically are based on little information and thus are not very reliable.

It turns out that the estimators produced by squared error loss often are uncomfortably sensitive in this respect. On the other hand, absolute error appears to go too far in leading to estimators which discard all but the central observations. For many important problems, the most appealing results are obtained from the use of loss functions which lie between (7.21) and (7.23). One interesting class of such loss functions, due to Huber (1964), puts

$$L(\theta, d) = \begin{cases} [d - g(\theta)]^2 & \text{if } |d - g(\theta)| \le k \\ 2k|d - g(\theta)| - k^2 & \text{if } |d - g(\theta)| \ge k. \end{cases} \tag{7.24}$$
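As a concrete illustration (ours, not the text's), the loss (7.24) can be written in a few lines; the two branches meet smoothly at $|d - g(\theta)| = k$, where both the values ($k^2$) and the derivatives ($2k$) agree.

```python
import numpy as np

def huber_loss(d, g, k=1.5):
    """Huber's loss (7.24): quadratic within k of the target, linear beyond.
    The cutoff k = 1.5 here is an arbitrary illustrative choice."""
    r = np.abs(d - g)
    return np.where(r <= k, r**2, 2 * k * r - k**2)

r = np.linspace(-4, 4, 9)
print(huber_loss(r, 0.0))  # equals r**2 on [-k, k]; the two pieces meet at k**2
```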
This agrees with (7.21) for $|d - g(\theta)| \le k$, but above $k$ and below $-k$, it replaces the parabola with straight lines joined to the parabola so as to make the function continuous and continuously differentiable (Problem 7.21). The Huber loss functions are convex but not strictly convex. An alternative family, which also interpolates between (7.21) and (7.23) and which is strictly convex, is

$$L(\theta, d) = |d - g(\theta)|^p,\quad 1 < p < 2. \tag{7.25}$$

It is a disadvantage of both (7.24) and (7.25) that the resulting estimators, even in fairly simple problems, cannot be obtained in closed form and hence are more difficult to grasp intuitively and to interpret. This may account at least in part for the fact that squared error is the most commonly used loss function or measure of accuracy and that the classic estimators in most situations are the ones derived on this basis. As indicated at the end of Section 1, we shall develop the theory here under the more general assumption of convex loss functions (which, in practice, does not appear to be a serious limitation), but we shall work most examples for the conventional squared error loss. The issue of the robustness of the resulting estimators, which requires going outside the assumed model, will not be treated in detail here. References for further study of robustness include Huber (1981), Hampel et al. (1986), and Staudte and Sheather (1990).

With some care, the properties of convex and concave functions generalize to multivariate situations. For example, Theorem 7.4 generalizes to the following supporting hyperplane theorem for convex functions.

Theorem 7.21 Let $\phi$ be a convex function defined over an open convex set $S$ in $E^k$ and let $t$ be any point in $S$. Then, there exists a hyperplane

$$y = L(x) = \sum c_i(x_i - t_i) + \phi(t) \tag{7.26}$$

through the point $[t, \phi(t)]$ such that

$$L(x) \le \phi(x) \quad\text{for all } x \in S. \tag{7.27}$$

Jensen's inequality (Theorem 7.5) generalizes in the obvious way. The only changes that are needed are replacement of the interval $I$ by an open convex set $S$, of the random variable $X$ by a random vector $X$ satisfying $P(X \in S) = 1$, and of the expectation $E(X)$ by the expectation vector $E(X) = [E(X_1), \ldots, E(X_k)]$. For the resulting modification of the inequality (7.7) to be meaningful, it is necessary to know that $E(X)$ is in $S$ so that $\phi[E(X)]$ is defined.

Lemma 7.22 If $X$ is a random vector with $P(X \in S) = 1$, where $S$ is an open convex set in $E^k$, and if $E(X)$ exists, then $E(X) \in S$.

A formal proof is given by Ferguson (1967, p. 74). Here, we shall give only a sketch. Suppose that $k = 2$, and suppose that $\xi = E(X)$ is not in $S$. Then, Theorem 7.21 guarantees the existence of a line $a_1 x_1 + a_2 x_2 = b$ through the point $(\xi_1, \xi_2)$ such that $S$ lies entirely on one side of the line. By a rotation of the plane, it can be assumed without loss of generality that the equation of the line is $x_2 = \xi_2$ and that $S$ lies above this line so that $P(X_2 > \xi_2) = 1$. It follows that $E(X_2) > \xi_2$, which is a contradiction.

The notions of convexity and concavity can also be extended to the multidimensional case in a slightly different way, one that examines the behavior of the function when it is averaged over spheres instead of over pairs of points.

Definition 7.23 A continuous function $f : R^k \to R$ is superharmonic at a point $x_0 \in R^k$ if, for every $r > 0$, the average of $f$ over the surface of the sphere $S_r(x_0) = \{x : \|x - x_0\| = r\}$ is less than or equal to $f(x_0)$. The function $f$ is superharmonic in $R^k$ if it is superharmonic at each $x_0 \in R^k$. (See Problem 7.15 for an extension.)
If we denote the average of f over the surface of the sphere by Ax0(f ), we thus define f to be superharmonic, harmonic, or subharmonic, depending on whether Ax0(f ) is less than or equal to, equal to, or greater than or equal to f , respectively. These definitions are analogous to those of convexity and concavity, but here we taketheaverageoverthesurfaceofasphere.(Notethatinonedimension,thesphere reduces to two points, so superharmonic and concave are the same property.) The following characterization of superharmonicity, which is akin to that of Theorem 7.13, is typically easier to check than the definition. (For a proof, see Helms 1969). Theorem 7.24 If f : Rk →R is twice differentiable, then f is superharmonic in Rk if and only if for all x ∈Rk, k i=1 ∂2 ∂x2 i f (x) ≤0. (7.28) If Equation (7.28) is an equality, then f is harmonic, and if the inequality is reversed, then f is subharmonic. Example 7.25 Subharmonic functions. Some multivariate analogs of the con-vex functions in Example 7.3 are subharmonic. For example, if f (x1, . . . , xk) = k i=1 xp i then k i=1 ∂2 ∂x2 i f (x) = k i=1 p(p −1)xp−2 i . This function is subharmonic if p ≥1 and xi > 0, or if p ≥2 is an even integer. Problem 7.14 considers some other multivariate functions. ∥ Example 7.26 Subharmonic loss. The loss function of Example 7.14, given in Equation (7.19), has second derivative ∂2L/∂d2 i = aii. Thus, it is subharmonic if, and only if,  i aii ≥0. This is a weaker condition than that needed for multidi-mensional convexity. ∥ The property of superharmonicity is useful in the theory of minimax point estimation, as will be seen in Section 5.6. 54 PREPARATIONS [ 1.8 8 Convergence in Probability and in Law Thus far, our preparations have centered on “small-sample” aspects, that is, we have considered the sample size n as being fixed. However, it is often fruitful to consider a sequence of situations in which n tends to infinity. If the given sample size is sufficiently large, the limit behavior may provide an important complement to the small-sample behavior, and often discloses properties of estimators that are masked by complications inherent in small-sample calculations. In preparation for a study of such large-sample asymptotics in Chapter 6, we here present some of the necessary tools. In particular, we review the probabilistic foundations necessary to derive the limiting behavior of estimators. It turns out that under rather weak assumptions, the limit distribution of many estimators is normal and hence depends only on a mean and a variance. This mitigates the effect of the underlying assumptions because the results become less dependent on the model and the loss function. We consider a sample X = (X1, . . . , Xn) as a member of a sequence corre-sponding to n = 1, 2 (or, more generally, n0, n0 + 1, . . .) and obtain the limiting behavior of estimator sequences as n →∞. Mathematically, the results are thus limit theorems. In applications, the limiting results (particularly the asymptotic variances) are used as approximations to the situation obtaining for the actual finite n. A weakness of this approach is that, typically, no good estimates are available for the accuracy of the approximation. However, we can obtain at least some idea of the accuracy by numerical checks for selected values of n. Suppose for a moment that X1, . . . , Xn are iid according to a distribution Pθ, θ ∈ , and that the estimand is g(θ). 
As $n$ increases, more and more information about $\theta$ becomes available, and one would expect that for sufficiently large values of $n$, it would typically be possible to estimate $g(\theta)$ very closely. If $\delta_n = \delta_n(X_1, \ldots, X_n)$ is a reasonable estimator, of course, it cannot be expected to be close to $g(\theta)$ for every sample point $(x_1, \ldots, x_n)$ since the values of a particular sample may always be atypical (e.g., a fair coin may fall heads in 1000 successive spins). What one can hope for is that $\delta_n$ will be close to $g(\theta)$ with high probability. This idea is captured in the following definitions, which do not assume the random variables to be iid.

Definition 8.1 A sequence of random variables $Y_n$ defined over sample spaces $(\mathcal{Y}_n, \mathcal{B}_n)$ tends in probability to a constant $c$ ($Y_n \xrightarrow{P} c$) if for every $a > 0$,
$$P[|Y_n - c| \ge a] \to 0 \quad \text{as } n \to \infty. \tag{8.1}$$
A sequence of estimators $\delta_n$ of $g(\theta)$ is consistent if for every $\theta \in \Omega$,
$$\delta_n \xrightarrow{P_\theta} g(\theta). \tag{8.2}$$

The following condition, which assumes the existence of second moments, frequently provides a convenient method for proving consistency.

Theorem 8.2 Let $\{\delta_n\}$ be a sequence of estimators of $g(\theta)$ with mean squared error $E[\delta_n - g(\theta)]^2$.
(i) If
$$E[\delta_n - g(\theta)]^2 \to 0 \quad \text{for all } \theta, \tag{8.3}$$
then $\delta_n$ is consistent for estimating $g(\theta)$.
(ii) Equivalent to (8.3), $\delta_n$ is consistent if
$$b_n(\theta) \to 0 \quad \text{and} \quad \mathrm{var}_\theta(\delta_n) \to 0 \quad \text{for all } \theta, \tag{8.4}$$
where $b_n$ is the bias of $\delta_n$.
(iii) In particular, $\delta_n$ is consistent if it is unbiased for each $n$ and if
$$\mathrm{var}_\theta(\delta_n) \to 0 \quad \text{for all } \theta. \tag{8.5}$$

The proof follows from Chebychev's Inequality (see Problem 8.1).

Example 8.3 Consistency of the mean. Let $X_1, \ldots, X_n$ be iid with expectation $E(X_i) = \xi$ and variance $\sigma^2 < \infty$. Then, $\bar{X}$ is an unbiased estimator of $\xi$ with variance $\sigma^2/n$, and hence is consistent by Theorem 8.2(iii). Actually, it was proved by Khinchin (see, for example, Feller 1968, Chapter X, Sections 1 and 2) that consistency of $\bar{X}$ already follows from the existence of the expectation, so that the assumption of finite variance is not needed. ∥

Note. The statement that $\bar{X}$ is consistent is shorthand for the fuller assertion that the sequence of estimators $\bar{X}_n = (X_1 + \cdots + X_n)/n$ is consistent. This type of shorthand is very common and will be used here. However, the full meaning should be kept in mind.

Example 8.4 Consistency of $S^2$. Let $X_1, \ldots, X_n$ be iid with finite variance $\sigma^2$. Then, the unbiased estimator $S_n^2 = \sum(X_i - \bar{X})^2/(n-1)$ is a consistent estimator of $\sigma^2$. To see this, assume without loss of generality that $E(X_i) = 0$, and note that
$$S_n^2 = \frac{n}{n-1}\left[\frac{1}{n}\sum X_i^2 - \bar{X}^2\right].$$
By Example 8.3, $\sum X_i^2/n \xrightarrow{P} \sigma^2$ and $\bar{X}^2 \xrightarrow{P} 0$. Since $n/(n-1) \to 1$, it follows from Problem 8.4 that $S_n^2 \xrightarrow{P} \sigma^2$. (See also Problem 8.5.) ∥

Example 8.5 Markov chains. As an illustration of a situation involving dependent random variables, consider a two-state Markov chain. The variables $X_1, X_2, \ldots$ each take on the values 0 and 1, with the joint distribution determined by the initial probability $P(X_1 = 1) = p_1$ and the transition probabilities
$$P(X_{i+1} = 1|X_i = 0) = \pi_0, \qquad P(X_{i+1} = 1|X_i = 1) = \pi_1,$$
of which we shall assume $0 < \pi_0, \pi_1 < 1$. For such a chain, the probability $p_k = P(X_k = 1)$ typically depends on $k$ and the initial probability $p_1$ (but see Problem 8.10). However, as $k \to \infty$, $p_k$ tends to a limit $p$, which is independent of $p_1$. It is easy to see what the value of $p$ must be. Consider the recurrence relation
$$p_{k+1} = p_k\pi_1 + (1 - p_k)\pi_0 = p_k(\pi_1 - \pi_0) + \pi_0. \tag{8.6}$$
If
$$p_k \to p, \tag{8.7}$$
this implies
$$p = \frac{\pi_0}{1 - \pi_1 + \pi_0}. \tag{8.8}$$
To prove (8.7), it is only necessary to iterate (8.6) starting with $k = 1$ to find (Problem 8.6)
$$p_k = (p_1 - p)(\pi_1 - \pi_0)^{k-1} + p. \tag{8.9}$$
Since $|\pi_1 - \pi_0| < 1$, the result follows.

For estimating $p$ after $n$ trials, the natural estimator is $\bar{X}_n$, the frequency of ones in these trials. Since $E(\bar{X}_n) = (p_1 + \cdots + p_n)/n$, it follows from (8.7) that $E(\bar{X}_n) \to p$ (Problem 8.7), so that the bias of $\bar{X}_n$ tends to zero. Consistency of $\bar{X}_n$ will therefore follow if we can show that $\mathrm{var}(\bar{X}_n) \to 0$. Now,
$$\mathrm{var}(\bar{X}_n) = \sum_{i=1}^{n} \sum_{j=1}^{n} \mathrm{cov}(X_i, X_j)/n^2.$$
As $n \to \infty$, this average of $n^2$ terms will go to zero if $\mathrm{cov}(X_i, X_j) \to 0$ sufficiently fast as $|j - i| \to \infty$. The covariance of $X_i$ and $X_j$ can be obtained by a calculation similar to that leading to (8.9) and satisfies
$$|\mathrm{cov}(X_i, X_j)| \le M|\pi_1 - \pi_0|^{j-i}. \tag{8.10}$$
From (8.10), one finds that $\mathrm{var}(\bar{X}_n)$ is of order $1/n$ and hence that $\bar{X}_n$ is consistent (Problem 8.11).

Instead of $p$, one may be interested in estimating $\pi_0$ and $\pi_1$ themselves. Again, it turns out that the natural estimator $N_{01}/(N_{00} + N_{01})$ for $\pi_0$, where $N_{0j}$ is the number of pairs $(X_i, X_{i+1})$ with $X_i = 0$, $X_{i+1} = j$, $j = 0, 1$, is consistent.

Consider, on the other hand, the estimation of $p_1$. It does not appear that observations beyond the first provide any information about $p_1$, and one would therefore not expect to be able to estimate $p_1$ consistently. To obtain a formal proof, suppose for a moment that the $\pi$'s are known, so that $p_1$ is the only unknown parameter. If a consistent estimator $\delta_n$ exists for the original problem, then $\delta_n$ will continue to be consistent under this additional assumption. However, when the $\pi$'s are known, $X_1$ is a sufficient statistic for $p_1$ and the problem reduces to that of estimating a success probability from a single trial. That a consistent estimator of $p_1$ cannot exist under these circumstances follows from the definition of consistency. ∥

When $X_1, \ldots, X_n$ are iid according to a distribution $P_\theta$, $\theta \in \Omega$, consistent estimators of real-valued functions of $\theta$ will exist in most of the situations we shall encounter (see, for example, Problem 8.8). There is, however, an important exception. Suppose the $X$'s are distributed according to $F(x_i - \theta)$ where $F$ is $N(\xi, \sigma^2)$, with $\theta$, $\xi$, and $\sigma^2$ unknown. Then, no consistent estimator of $\theta$ exists. To see this, note that the $X$'s are distributed as $N(\xi + \theta, \sigma^2)$. Thus, $\bar{X}$ is consistent for estimating $\xi + \theta$, but $\xi$ and $\theta$ cannot be estimated separately because they are not uniquely defined; they are unidentifiable (see Definition 5.2). More precisely, for $X \sim P_{\theta,\xi}$, there exist pairs $(\theta_1, \xi_1)$ and $(\theta_2, \xi_2)$ with $\theta_1 \ne \theta_2$ for which $P_{\theta_1,\xi_1} = P_{\theta_2,\xi_2}$, showing the parameter $\theta$ to be unidentifiable. A parameter that is unidentifiable cannot be estimated consistently since $\delta(X_1, \ldots, X_n)$ cannot simultaneously be close to both $\theta_1$ and $\theta_2$.

Consistency is too weak a property to be of much interest in itself. It tells us that for large $n$, the error $\delta_n - g(\theta)$ is likely to be small but not whether the order of the error is $1/n$, $1/\sqrt{n}$, $1/\log n$, and so on. To obtain an idea of the rate of convergence of a consistent estimator $\delta_n$, consider the probability
$$P_n(a) = P\left[|\delta_n - g(\theta)| \le \frac{a}{k_n}\right]. \tag{8.11}$$
If $k_n$ is bounded, then $P_n(a) \to 1$. On the other hand, if $k_n \to \infty$ sufficiently fast, $P_n(a) \to 0$. This suggests that for a given $a > 0$, there might exist an intermediate sequence $k_n \to \infty$ for which $P_n(a)$ tends to a limit strictly between 0 and 1. This will be the case for most of the estimators with which we are concerned.
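As a quick numerical illustration of (8.11) (a sketch, not from the text), take $\delta_n = \bar{X}_n$ for iid $N(0, 1)$ observations, $g(\theta) = 0$, and $k_n = \sqrt{n}$. For normal data the probability is in fact exact for every $n$: $P_n(a) = P(|Z| \le a) = 2\Phi(a) - 1$, a limit strictly between 0 and 1, whereas a bounded $k_n$ would drive $P_n(a)$ to 1 and a faster-growing $k_n$ would drive it to 0:

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(1)

# P_n(a) of (8.11) with delta_n = sample mean of iid N(0,1) data and k_n = sqrt(n).
a, reps = 1.0, 2000
for n in [16, 64, 256, 1024, 4096]:
    xbar = rng.standard_normal((reps, n)).mean(axis=1)
    print(n, np.mean(np.abs(xbar) <= a / np.sqrt(n)))

# The simulated values should hover around the limit 2*Phi(a) - 1 (about 0.683):
print("2*Phi(1) - 1 =", 2 * norm.cdf(a) - 1)
```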
Commonly, there will exist a sequence $k_n \to \infty$ and a limit function $H$ which is a continuous cdf such that for all $a$,
$$P\{k_n[\delta_n - g(\theta)] \le a\} \to H(a) \quad \text{as } n \to \infty. \tag{8.12}$$
We shall then say that the error $|\delta_n - g(\theta)|$ tends to zero at rate $1/k_n$. The rate, of course, is not uniquely determined by this definition. If $1/k_n$ is a possible rate, so is $1/k_n'$ for any sequence $k_n'$ for which $k_n'/k_n$ tends to a finite nonzero limit. On the other hand, if $k_n'$ tends to $\infty$ more slowly (or faster) than $k_n$, that is, if $k_n'/k_n \to 0$ (or $\infty$), then $k_n'[\delta_n - g(\theta)]$ tends in probability to zero (or $\infty$) (Problem 8.12).

One can think of the normalizing constants $k_n$ in (8.12) in another way. If $\delta_n$ is consistent, the errors $\delta_n - g(\theta)$ tend to zero as $n \to \infty$. Multiplication by constants $k_n$ tending to infinity magnifies these minute errors; it acts as a microscope. If (8.12) holds, then $k_n$ is just the right degree of magnification to give a well-focused picture of the behavior of the errors.

We formalize (8.12) in the following definition.

Definition 8.6 Suppose that $\{Y_n\}$ is a sequence of random variables with cdf $H_n(a) = P(Y_n \le a)$ and that there exists a cdf $H$ such that
$$H_n(a) \to H(a) \tag{8.13}$$
at all points $a$ at which $H$ is continuous. Then, we shall say that the distribution functions $H_n$ converge weakly to $H$, and that the $Y_n$ have the limit distribution $H$, or converge in law to any random variable $Y$ with distribution $H$. This will be denoted by $Y_n \xrightarrow{L} Y$ or by $\mathcal{L}(Y_n) \to H$. We may also say that $Y_n$ tends in law to $H$ and write $Y_n \to H$.

The crucial assumption in (8.13) is that $H(-\infty) = 0$ and $H(+\infty) = 1$, that is, that no probability mass escapes to $\pm\infty$ (see Problem 1.37). The following example illustrates the reason for requiring (8.13) to hold only for the continuity points of $H$.

Example 8.7 Degenerate limit distribution. (i) Let $Y_n$ be normally distributed with mean zero and variance $\sigma_n^2$ where $\sigma_n \to 0$ as $n \to \infty$. (ii) Let $Y_n$ be a random variable taking on the value $1/n$ with probability 1. In both cases, it seems natural to say that $Y_n$ tends in law to a random variable $Y$ which takes on the value 0 with probability 1. The cdf $H(a)$ of $Y$ is zero for $a < 0$ and 1 for $a \ge 0$. The cdf $H_n(a)$ of $Y_n$ in both (i) and (ii) tends to $H(a)$ for all $a \ne 0$, but not for $a = 0$ (Problem 8.14). ∥

An important property of weak convergence is given by the following theorem. Its proof, and those of Theorems 8.9-8.12, can be found in most texts on probability theory. See, for example, Billingsley (1995, Section 25).

Theorem 8.8 The sequence $Y_n$ converges in law to $Y$ if and only if $E[f(Y_n)] \to E[f(Y)]$ for every bounded continuous real-valued function $f$.

A basic tool for obtaining the limit distribution of many estimators of interest is the central limit theorem (CLT), of which the following is the simplest case.

Theorem 8.9 (Central Limit Theorem) Let $X_i$ ($i = 1, \ldots, n$) be iid with $E(X_i) = \xi$ and $\mathrm{var}(X_i) = \sigma^2 < \infty$. Then, $\sqrt{n}(\bar{X} - \xi)$ tends in law to $N(0, \sigma^2)$ and hence $\sqrt{n}(\bar{X} - \xi)/\sigma$ to the standard normal distribution $N(0, 1)$.

The usefulness of this result is greatly extended by Theorems 8.10 and 8.12 below.

Theorem 8.10 If $Y_n \xrightarrow{L} Y$, and $A_n$ and $B_n$ tend in probability to $a$ and $b$, respectively, then $A_n + B_nY_n \xrightarrow{L} a + bY$.

When $Y_n$ converges to a distribution $H$, it is often required to evaluate probabilities of the form $P(Y_n \le y_n)$ where $y_n \to y$, and one may hope that these probabilities will tend to $H(y)$.

Corollary 8.11 If $Y_n \xrightarrow{L} H$, and $y_n$ converges to a continuity point $y$ of $H$, then $P(Y_n \le y_n) \to H(y)$.

Proof.
$P(Y_n \le y_n) = P[Y_n + (y - y_n) \le y]$ and the result follows from Theorem 8.10 with $B_n = 1$ and $A_n = y - y_n$. ✷

The following widely used result is often referred to as the delta method.

Theorem 8.12 (Delta Method) If
$$\sqrt{n}[T_n - \theta] \xrightarrow{L} N(0, \tau^2), \tag{8.14}$$
then
$$\sqrt{n}[h(T_n) - h(\theta)] \xrightarrow{L} N(0, \tau^2[h'(\theta)]^2), \tag{8.15}$$
provided $h'(\theta)$ exists and is not zero.

Proof. Consider the Taylor expansion of $h(T_n)$ around $h(\theta)$:
$$h(T_n) = h(\theta) + (T_n - \theta)[h'(\theta) + R_n], \tag{8.16}$$
where $R_n \to 0$ as $T_n \to \theta$. It follows from (8.14) that $T_n \to \theta$ in probability and hence that $R_n \to 0$ in probability. The result now follows by applying Theorem 8.10 to $\sqrt{n}[h(T_n) - h(\theta)]$. ✷

Example 8.13 Limit of binomial. Let $X_i$, $i = 1, 2, \ldots$, be independent Bernoulli ($p$) random variables and let $T_n = \frac{1}{n}\sum_{i=1}^n X_i$. Then by the CLT (Theorem 8.9),
$$\sqrt{n}(T_n - p) \xrightarrow{L} N[0, p(1-p)] \tag{8.17}$$
since $E(T_n) = p$ and $\mathrm{var}(T_n) = p(1-p)$. Suppose now that we are interested in the large sample behavior of the estimate $T_n(1 - T_n)$ of the variance $h(p) = p(1-p)$. Since $h'(p) = 1 - 2p$, it follows from Theorem 8.12 that
$$\sqrt{n}[T_n(1 - T_n) - p(1-p)] \xrightarrow{L} N[0, (1 - 2p)^2 p(1-p)] \tag{8.18}$$
for $p \ne 1/2$. ∥

When the dominant term in the Taylor expansion (8.16) vanishes [as it does at $p = 1/2$ in (8.18)], it is natural to carry the expansion one step further to obtain
$$h(T_n) = h(\theta) + (T_n - \theta)h'(\theta) + \tfrac{1}{2}(T_n - \theta)^2[h''(\theta) + R_n],$$
where $R_n \to 0$ in probability as $T_n \to \theta$, or, since $h'(\theta) = 0$,
$$h(T_n) - h(\theta) = \tfrac{1}{2}(T_n - \theta)^2[h''(\theta) + R_n]. \tag{8.19}$$
In view of (8.14), the distribution of $[\sqrt{n}(T_n - \theta)]^2$ tends to a nondegenerate limit distribution, namely (after division by $\tau^2$) to a $\chi^2$-distribution with 1 degree of freedom, and hence
$$n(T_n - \theta)^2 \xrightarrow{L} \tau^2 \cdot \chi_1^2. \tag{8.20}$$
The same argument as that leading to (8.15), but with $h'(\theta) = 0$ and $h''(\theta) \ne 0$, establishes the following theorem.

Theorem 8.14 If $\sqrt{n}[T_n - \theta] \xrightarrow{L} N(0, \tau^2)$ and if $h'(\theta) = 0$, then
$$n[h(T_n) - h(\theta)] \xrightarrow{L} \tfrac{1}{2}\tau^2 h''(\theta)\chi_1^2 \tag{8.21}$$
provided $h''(\theta)$ exists and is not zero.

Example 8.15 Continuation of Example 8.13. For $h(p) = p(1-p)$, we have, at $p = 1/2$, $h'(1/2) = 0$, $h''(1/2) = -2$, and $\tau^2 = p(1-p) = 1/4$. Hence, from Theorem 8.14, at $p = 1/2$,
$$n\left[T_n(1 - T_n) - \tfrac{1}{4}\right] \xrightarrow{L} -\tfrac{1}{4}\chi_1^2. \tag{8.22}$$
Although (8.22) might at first appear strange, note that $T_n(1 - T_n) \le 1/4$, so the left side is always negative. An equivalent form for (8.22) is
$$4n\left[\tfrac{1}{4} - T_n(1 - T_n)\right] \xrightarrow{L} \chi_1^2. \quad ∥$$

The typical behavior of estimator sequences as sample sizes tend to infinity is that suggested by Theorem 8.12; that is, if $\delta_n$ is the estimator of $g(\theta)$ based on $n$ observations, one may expect that $\sqrt{n}[\delta_n - g(\theta)]$ will tend to a normal distribution with mean zero and variance, say $\tau^2(\theta)$. It is in this sense that the large-sample behavior of such estimators can be studied without reference to a specific loss function. The asymptotic behavior of $\delta_n$ is governed solely by $\tau^2(\theta)$ since knowledge of $\tau^2(\theta)$ determines the probability of the error $\sqrt{n}[\delta_n - g(\theta)]$ lying in any given interval. In particular, $\tau^2(\theta)$ provides a basis for the large-sample comparison of different estimators. Contrast this to the finite-sample situation where, for example, if estimators are compared in terms of their risk, one estimator might be best in terms of absolute error, another for squared error, and still another in terms of a higher power of the error or the probability of falling within a stated distance of the true value. This cannot happen here, as $\tau^2(\theta)$ provides the basis for all large-sample evaluations.
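The limits (8.18) and (8.22) are easy to check by simulation. The following Python sketch (illustrative, not from the text) compares the empirical variance of $\sqrt{n}[T_n(1 - T_n) - p(1-p)]$ with $(1 - 2p)^2 p(1-p)$ for $p \ne 1/2$, and the first two moments of $4n[\frac{1}{4} - T_n(1 - T_n)]$ with those of $\chi_1^2$ (mean 1, variance 2):

```python
import numpy as np

rng = np.random.default_rng(2)
n, reps = 2000, 50_000

# Check (8.18): for p != 1/2, sqrt(n)[T_n(1-T_n) - p(1-p)] ~ N(0, (1-2p)^2 p(1-p)).
p = 0.3
T = rng.binomial(n, p, size=reps) / n
z = np.sqrt(n) * (T * (1 - T) - p * (1 - p))
print(z.var(), (1 - 2 * p) ** 2 * p * (1 - p))   # both approximately 0.0336

# Check (8.22): at p = 1/2, 4n[1/4 - T_n(1-T_n)] ~ chi^2 with 1 df.
T = rng.binomial(n, 0.5, size=reps) / n
w = 4 * n * (0.25 - T * (1 - T))
print(w.mean(), w.var())                          # approximately 1 and 2
```

Note that $4n[\frac{1}{4} - T_n(1 - T_n)] = [2\sqrt{n}(T_n - \frac{1}{2})]^2$, the square of an asymptotically standard normal variable, which is the content of (8.22).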
It is straightforward to generalize the preceding theorems to functions of several means. The expansion (8.16) is replaced by the corresponding Taylor theorem in several variables. Although the following theorem starts in a multivariate setting, the conclusion is univariate.

Theorem 8.16 Let $(X_{1\nu}, \ldots, X_{s\nu})$, $\nu = 1, \ldots, n$, be $n$ independent $s$-tuples of random variables with $E(X_{i\nu}) = \xi_i$ and $\mathrm{cov}(X_{i\nu}, X_{j\nu}) = \sigma_{ij}$. Let $\bar{X}_i = \sum_\nu X_{i\nu}/n$, and suppose that $h$ is a real-valued function of $s$ arguments with continuous first partial derivatives. Then,
$$\sqrt{n}[h(\bar{X}_1, \ldots, \bar{X}_s) - h(\xi_1, \ldots, \xi_s)] \xrightarrow{L} N(0, v^2), \qquad v^2 = \sum_i \sum_j \sigma_{ij} \frac{\partial h}{\partial \xi_i} \cdot \frac{\partial h}{\partial \xi_j},$$
provided $v^2 > 0$.

Proof. See Problem 8.20. ✷

Example 8.17 Asymptotic distribution of $S^2$. As an illustration of Theorem 8.16, consider the asymptotic distribution of $S^2 = \sum(Z_\nu - \bar{Z})^2/n$ where the $Z$'s are iid. Without loss of generality, suppose that $E(Z_\nu) = 0$, $E(Z_\nu^2) = \sigma^2$. Since $S^2 = (1/n)\sum Z_\nu^2 - \bar{Z}^2$, Theorem 8.16 applies with $X_{1\nu} = Z_\nu^2$, $X_{2\nu} = Z_\nu$, $h(x_1, x_2) = x_1 - x_2^2$, $\xi_2 = 0$, and $\xi_1 = \mathrm{var}(Z_\nu) = \sigma^2$. Thus, $\sqrt{n}(S^2 - \sigma^2) \xrightarrow{L} N(0, v^2)$ where $v^2 = \mathrm{var}(Z_\nu^2)$. ∥

We conclude this section by considering the multivariate case and extending some of the basic probability results for random variables to vectors of random variables. The definitions of convergence in probability and in law generalize very naturally as follows.

Definition 8.18 A sequence of random vectors $\mathbf{Y}_n = (Y_{1n}, \ldots, Y_{rn})$, $n = 1, 2, \ldots$, tends in probability toward a constant vector $\mathbf{c} = (c_1, \ldots, c_r)$ if $Y_{in} \xrightarrow{P} c_i$ for each $i = 1, \ldots, r$, and it converges in law (or weakly) to a random vector $\mathbf{Y}$ with cdf $H$ if
$$H_n(\mathbf{a}) \to H(\mathbf{a}) \tag{8.23}$$
at all continuity points $\mathbf{a}$ of $H$, where
$$H_n(\mathbf{a}) = P[Y_{1n} \le a_1, \ldots, Y_{rn} \le a_r] \tag{8.24}$$
is the cdf of $\mathbf{Y}_n$.

Theorem 8.8 extends to the present case.

Theorem 8.19 The sequence $\{\mathbf{Y}_n\}$ converges in law to $\mathbf{Y}$ if and only if $E[f(\mathbf{Y}_n)] \to E[f(\mathbf{Y})]$ for every bounded continuous real-valued $f$.

[For a proof of this and Theorem 8.20, see Billingsley (1995, Section 29).]

Weak convergence of $\mathbf{Y}_n$ to $\mathbf{Y}$ does not imply
$$P(\mathbf{Y}_n \in A) \to P(\mathbf{Y} \in A) \tag{8.25}$$
for all sets $A$ for which these probabilities are defined since this is not even true for the set $A$ defined by $y_1 \le a_1, \ldots, y_r \le a_r$ unless $H$ is continuous at $\mathbf{a}$.

Theorem 8.20 The sequence $\{\mathbf{Y}_n\}$ converges in law to $\mathbf{Y}$ if and only if (8.25) holds for all sets $A$ for which the probabilities in question are defined and for which the boundary of $A$ has probability zero under the distribution of $\mathbf{Y}$.

As in the one-dimensional case, the central limit theorem provides a basic tool for multivariate asymptotic theory.

Theorem 8.21 (Multivariate CLT) Let $\mathbf{X}_\nu = (X_{1\nu}, \ldots, X_{r\nu})$ be iid with mean vector $\xi = (\xi_1, \ldots, \xi_r)$ and covariance matrix $\Sigma = \|\sigma_{ij}\|$, and let $\bar{X}_{in} = (X_{i1} + \cdots + X_{in})/n$. Then,
$$[\sqrt{n}(\bar{X}_{1n} - \xi_1), \ldots, \sqrt{n}(\bar{X}_{rn} - \xi_r)]$$
tends in law to the multivariate normal distribution with mean vector 0 and covariance matrix $\Sigma$.

As a last result, we mention a generalization of Theorem 8.16.

Theorem 8.22 Suppose that
$$[\sqrt{n}(Y_{1n} - \theta_1), \ldots, \sqrt{n}(Y_{rn} - \theta_r)]$$
tends in law to the multivariate normal distribution with mean vector 0 and covariance matrix $\Sigma$, and suppose that $h_1, \ldots, h_r$ are $r$ real-valued functions of $\theta = (\theta_1, \ldots, \theta_r)$, defined and continuously differentiable in a neighborhood $\omega$ of the parameter point $\theta$ and such that the matrix $B = \|\partial h_i/\partial \theta_j\|$ of partial derivatives is nonsingular in $\omega$. Then,
$$[\sqrt{n}[h_1(\mathbf{Y}_n) - h_1(\theta)], \ldots, \sqrt{n}[h_r(\mathbf{Y}_n) - h_r(\theta)]]$$
tends in law to the multivariate normal distribution with mean vector 0 and with covariance matrix $B\Sigma B'$.

Proof. See Problem 8.27. ✷

9 Problems

Section 1
1.1 If $(x_1, y_1), \ldots, (x_n, y_n)$ are $n$ points in the plane, determine the best fitting line $y = \alpha + \beta x$ in the least squares sense, that is, determine the values $\alpha$ and $\beta$ that minimize $\sum[y_i - (\alpha + \beta x_i)]^2$.

1.2 Let $X_1, \ldots, X_n$ be uncorrelated random variables with common expectation $\theta$ and variance $\sigma^2$. Then, among all linear estimators $\sum \alpha_i X_i$ of $\theta$ satisfying $\sum \alpha_i = 1$, the mean $\bar{X}$ has the smallest variance.

1.3 In the preceding problem, minimize the variance of $\sum \alpha_i X_i$ ($\sum \alpha_i = 1$)
(a) When the variance of $X_i$ is $\sigma^2/\alpha_i$ ($\alpha_i$ known).
(b) When the $X_i$ have common variance $\sigma^2$ but are correlated with common correlation coefficient $\rho$.
(For generalizations of these results see, for example, Watson 1967 and Kruskal 1968.)

1.4 Let $X$ and $Y$ have common expectation $\theta$, variances $\sigma^2$ and $\tau^2$, and correlation coefficient $\rho$. Determine the conditions on $\sigma$, $\tau$, and $\rho$ under which
(a) $\mathrm{var}(X) < \mathrm{var}[(X + Y)/2]$.
(b) The value of $\alpha$ that minimizes $\mathrm{var}[\alpha X + (1 - \alpha)Y]$ is negative.
Give an intuitive explanation of your results.

1.5 Let $X_i$ ($i = 1, 2$) be independently distributed according to the Cauchy densities $C(a_i, b_i)$. Then, $X_1 + X_2$ is distributed as $C(a_1 + a_2, b_1 + b_2)$. [Hint: Transform to new variables $Y_1 = X_1 + X_2$, $Y_2 = X_2$.]

1.6 If $X_1, \ldots, X_n$ are iid as $C(a, b)$, the distribution of $\bar{X}$ is again $C(a, b)$. [Hint: Prove by induction, using Problem 1.5.]

1.7 A median of $X$ is any value $m$ such that $P(X \le m) \ge 1/2$ and $P(X \ge m) \ge 1/2$.
(a) Show that this is equivalent to $P(X < m) \le 1/2$ and $P(X > m) \le 1/2$.
(b) Show that the set of medians is always a closed interval $m_0 \le m \le m_1$.

1.8 If $\varphi(a) = E|X - a| < \infty$ for some $a$, show that $\varphi(a)$ is minimized by any median of $X$. [Hint: If $m_0 \le m \le m_1$ (in the notation of Problem 1.7) and $m_1 < c$, then
$$E|X - c| - E|X - m| = (c - m)[P(X \le m) - P(X > m)] + 2\int_{m < x \le c} (c - x)\,dF(x) \ge 0.]$$

1.11 (a) If the estimators $\delta_1$ and $\delta_2$ of $\theta$ are such that $\delta_i - \theta$ has a density $f_i$ that is continuous at 0, and if $f_1(0) > f_2(0)$, then $P[|\delta_1 - \theta| < c] > P[|\delta_2 - \theta| < c]$ for some $c > 0$ and hence $\delta_1$ will be closer to $\theta$ than $\delta_2$ with respect to the measure (1.5).
(b) Let $X$, $Y$ be independently distributed with common continuous symmetric density $f$, and let $\delta_1 = X$, $\delta_2 = (X + Y)/2$. The inequality in part (a) will hold provided $2\int f^2(x)\,dx < f(0)$ (Edgeworth 1883, Stigler 1980).

1.12 (a) Let $f(x) = \frac{1}{2}(k - 1)/(1 + |x|)^k$, $k \ge 2$. Show that $f$ is a probability density and that all its moments of order $< k - 1$ are finite.
(b) The density of part (a) satisfies the inequality of Problem 1.11(b).

1.13 (a) If $X$ is binomial $b(p, n)$, show that
$$E\left|\frac{X}{n} - p\right| = 2\binom{n-1}{k-1} p^k (1-p)^{n-k+1} \quad \text{for } \frac{k-1}{n} \le p \le \frac{k}{n}.$$
(b) Graph the risk function of part (a) for $n = 4$ and $n = 5$.
[Hint: For (a), use the identity
$$\binom{n}{x}(x - np) = n\left[\binom{n-1}{x-1}(1-p) - \binom{n-1}{x}p\right], \quad 1 \le x \le n.$$
(Johnson 1957–1958, and Blyth 1980).]

Section 2

2.1 If $A_1, A_2, \ldots$ are members of a $\sigma$-field $\mathcal{A}$ (the $A$'s need not be disjoint), so are their union and intersection.

2.2 For any $a < b$, the following sets are Borel sets: (a) $\{x : a < x\}$ and (b) $\{x : a \le x \le b\}$.

2.3 Under the assumptions of Problem 2.1, let
$$\underline{A} = \liminf A_n = \{x : x \in A_n \text{ for all except a finite number of } n\text{'s}\},$$
$$\bar{A} = \limsup A_n = \{x : x \in A_n \text{ for infinitely many } n\}.$$
Then, $\underline{A}$ and $\bar{A}$ are in $\mathcal{A}$.

2.4 Show that
(a) If $A_1 \subset A_2 \subset \cdots$, then $\underline{A} = \bar{A} = \cup A_n$.
(b) If $A_1 \supset A_2 \supset \cdots$, then $\underline{A} = \bar{A} = \cap A_n$.

2.5 For any sequence of real numbers $a_1, a_2, \ldots$, show that the set of all limit points of subsequences is closed. The smallest and largest such limit point (which may be infinite) are denoted by $\liminf a_k$ and $\limsup a_k$, respectively.

2.6 Under the assumptions of Problems 2.1 and 2.3, show that $I_{\underline{A}}(x) = \liminf I_{A_k}(x)$ and $I_{\bar{A}}(x) = \limsup I_{A_k}(x)$, where $I_A(x)$ denotes the indicator of the set $A$.
2.7 Let $(\mathcal{X}, \mathcal{A}, \mu)$ be a measure space and let $\mathcal{B}$ be the class of all sets $A \cup C$ with $A \in \mathcal{A}$ and $C$ a subset of a set $A' \in \mathcal{A}$ with $\mu(A') = 0$. Show that $\mathcal{B}$ is a $\sigma$-field.

2.8 If $f$ and $g$ are measurable functions, so are (i) $f + g$, and (ii) $\max(f, g)$.

2.9 If $f$ is integrable with respect to $\mu$, so is $|f|$, and
$$\left|\int f\,d\mu\right| \le \int |f|\,d\mu.$$
[Hint: Express $|f|$ in terms of $f^+$ and $f^-$.]

2.10 Let $\mathcal{X} = \{x_1, x_2, \ldots\}$, $\mu$ = counting measure on $\mathcal{X}$, and $f$ integrable. Then $\int f\,d\mu = \sum f(x_i)$. [Hint: Suppose, first, that $f \ge 0$ and let $s_n(x)$ be the simple function, which is $f(x)$ for $x = x_1, \ldots, x_n$, and 0 otherwise.]

2.11 Let $f(x) = 1$ or 0 as $x$ is rational or irrational. Show that the Riemann integral of $f$ does not exist.

Section 3

3.1 Let $X$ have a standard normal distribution and let $Y = 2X$. Determine whether
(a) the cdf $F(x, y)$ of $(X, Y)$ is continuous.
(b) the distribution of $(X, Y)$ is absolutely continuous with respect to Lebesgue measure in the $(x, y)$ plane.

3.2 Show that any function $f$ which satisfies (3.7) is continuous.

3.3 Let $X$ be a measurable transformation from $(\mathcal{E}, \mathcal{B})$ to $(\mathcal{X}, \mathcal{A})$ (i.e., such that for any $A \in \mathcal{A}$, the set $\{e : X(e) \in A\}$ is in $\mathcal{B}$), and let $Y$ be a measurable transformation from $(\mathcal{X}, \mathcal{A})$ to $(\mathcal{Y}, \mathcal{C})$. Then, $Y[X(e)]$ is a measurable transformation from $(\mathcal{E}, \mathcal{B})$ to $(\mathcal{Y}, \mathcal{C})$.

3.4 In Example 3.1, show that the support of $P$ is $[a, b]$ if and only if $F$ is strictly increasing on $[a, b]$.

3.5 Let $S$ be the support of a distribution on a Euclidean space $(\mathcal{X}, \mathcal{A})$. Then, (i) $S$ is closed; (ii) $P(S) = 1$; (iii) $S$ is the intersection of all closed sets $C$ with $P(C) = 1$.

3.6 If $P$ and $Q$ are two probability measures over the same Euclidean space which are equivalent, then they have the same support.

3.7 Let $P$ and $Q$ assign probabilities
$$P: \; P\left(X = \tfrac{1}{n}\right) = p_n > 0, \quad n = 1, 2, \ldots \quad \left(\textstyle\sum p_n = 1\right),$$
$$Q: \; P(X = 0) = \tfrac{1}{2}; \quad P\left(X = \tfrac{1}{n}\right) = q_n > 0, \quad n = 1, 2, \ldots \quad \left(\textstyle\sum q_n = \tfrac{1}{2}\right).$$
Then, show that $P$ and $Q$ have the same support but are not equivalent.

3.8 Suppose $X$ and $Y$ are independent random variables with $X \sim E(\lambda, 1)$ and $Y \sim E(\mu, 1)$. It is impossible to obtain direct observations of $X$ and $Y$. Instead, we observe the random variables $Z$ and $W$, where
$$Z = \min\{X, Y\} \quad \text{and} \quad W = \begin{cases} 1 & \text{if } Z = X \\ 0 & \text{if } Z = Y. \end{cases}$$
Find the joint distribution of $Z$ and $W$ and show that they are independent. (The $X$ and $Y$ variables are censored, a situation that often arises in medical experiments. Suppose that $X$ measures survival time from some treatment, and the patient leaves the survey for some unrelated reason. We do not get a measurement on $X$, but only a lower bound.)

Section 4

4.1 If the distributions of a positive random variable $X$ form a scale family, show that the distributions of $\log X$ form a location family.

4.2 If $X$ is distributed according to the uniform distribution $U(0, \theta)$, show that the distribution of $-\log X$ is exponential.

4.3 Let $U$ be uniformly distributed on $(0, 1)$ and consider the variables $X = U^\alpha$, $0 < \alpha$. Show that this defines a group family, and determine the density of $X$.

4.4 Show that a transformation group is a group.

4.5 If $g_0$ is any element of a group $G$, show that as $g$ ranges over $G$ so does $gg_0$.

4.6 Show that for $p = 2$, the density (4.15) specializes to (4.16).

4.7 Show that the family of transformations (4.12) with $B$ nonsingular and lower triangular forms a group $G$.

4.8 Show that the totality of nonsingular multivariate normal distributions can be obtained by the subgroup $G$ of (4.12) described in Problem 4.7.
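As an aside to Problems 4.7 and 4.8, the construction they describe is easy to exhibit numerically. The following Python sketch (illustrative values of $a$ and $\Sigma$; not part of the problem set) realizes a nonsingular multivariate normal as $X = a + BU$ with $B$ lower triangular, obtained here from the Cholesky decomposition:

```python
import numpy as np

rng = np.random.default_rng(3)

# Illustrative target: a nonsingular covariance matrix and a mean vector.
Sigma = np.array([[4.0, 1.0], [1.0, 2.0]])
a = np.array([1.0, -2.0])

# Problems 4.7/4.8: the nonsingular lower-triangular matrices form a group,
# and X = a + B U with U ~ N(0, I) has distribution N(a, B B').  The Cholesky
# factor is the lower-triangular B with B B' = Sigma.
B = np.linalg.cholesky(Sigma)
U = rng.standard_normal((100_000, 2))
X = a + U @ B.T

print(np.cov(X, rowvar=False))  # approximately Sigma
print(X.mean(axis=0))           # approximately a
```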
4.9 In the preceding problem, show that $G$ can be replaced by the subgroup $G_0$ of lower triangular matrices $B = (b_{ij})$, in which the diagonal elements $b_{11}, \ldots, b_{pp}$ are all positive, but that no proper subgroup of $G_0$ will suffice.

4.10 Show that the family of all continuous distributions whose support is an interval with positive lower end point is a group family. [Hint: Let $U$ be uniformly distributed on the interval $(2, 3)$ and let $X = b[g(U)]^a$ where $a, b > 0$ and where $g$ is continuous and 1:1 from $(2, 3)$ to $(2, 3)$.]

4.11 Find a modification of the transformation group (4.22) which generates a random sample from a population $\{y_1, \ldots, y_N\}$ where the $y$'s, instead of being arbitrary, are restricted to (a) be positive and (b) satisfy $0 < y_i < 1$.

4.12 Generalize the transformation group of Example 4.10 to the case of $s$ populations $\{y_{ij}, j = 1, \ldots, N_i\}$, $i = 1, \ldots, s$, with a random sample of size $n_i$ being drawn from the $i$th population.

4.13 Let $U$ be a positive random variable, and let $X = bU^{1/c}$, $b > 0$, $c > 0$.
(a) Show that this defines a group family.
(b) If $U$ is distributed as $E(0, 1)$, then $X$ is distributed according to the Weibull distribution with density
$$\frac{c}{b}\left(\frac{x}{b}\right)^{c-1} e^{-(x/b)^c}, \quad x > 0.$$

4.14 If $F$ and $F_0$ are two continuous, strictly increasing cdf's on the real line, and if the cdf of $U$ is $F_0$ and $g$ is strictly increasing, show that the cdf of $g(U)$ is $F$ if and only if $g = F^{-1}(F_0)$.

4.15 The following two families of distributions are not group families:
(a) The class of binomial distributions $b(p, n)$, with $n$ fixed and $0 < p < 1$.
(b) The class of Poisson distributions $P(\lambda)$, $0 < \lambda$.
[Hint: (a) How many 1:1 transformations are there taking the set of integers $\{0, 1, \ldots, n\}$ into itself?]

4.16 Let $X_1, \ldots, X_r$ have a multivariate normal distribution with $E(X_i) = \xi_i$ and with covariance matrix $\Sigma$. If $X$ is the column matrix with elements $X_i$ and $B$ is an $r \times r$ matrix of constants, then $BX$ has a multivariate normal distribution with mean $B\xi$ and covariance matrix $B\Sigma B'$.

Section 5

5.1 Determine the natural parameter space of (5.2) when $s = 1$, $T_1(x) = x$, $\mu$ is Lebesgue measure, and $h(x)$ is (i) $e^{-|x|}$ and (ii) $e^{-|x|}/(1 + x^2)$.

5.2 Suppose in (5.2), $s = 2$ and $T_2(x) = T_1(x)$. Explain why it is impossible to estimate $\eta_1$. [Hint: Compare the model with that obtained by putting $\eta_1' = \eta_1 + c$, $\eta_2' = \eta_2 - c$.]

5.3 Show that the distribution of a sample from the $p$-variate normal density (4.15) constitutes an $s$-dimensional exponential family. Determine $s$ and identify the functions $\eta_i$, $T_i$, and $B$ of (5.1).

5.4 Efron (1975) gives very general definitions of curvature, which generalize (10.1) and (10.2). For the $s$-dimensional family (5.1) with covariance matrix $\Sigma_\theta$, if $\theta$ is a scalar, define the statistical curvature to be $\gamma_\theta = \left(|M_\theta|/m_{11}^3\right)^{1/2}$ where
$$M_\theta = \begin{pmatrix} m_{11} & m_{12} \\ m_{21} & m_{22} \end{pmatrix} = \begin{pmatrix} \dot{\eta}_\theta' \Sigma_\theta \dot{\eta}_\theta & \dot{\eta}_\theta' \Sigma_\theta \ddot{\eta}_\theta \\ \ddot{\eta}_\theta' \Sigma_\theta \dot{\eta}_\theta & \ddot{\eta}_\theta' \Sigma_\theta \ddot{\eta}_\theta \end{pmatrix},$$
with $\eta(\theta) = \{\eta_i(\theta)\}$, $\dot{\eta}(\theta) = \{\eta_i'(\theta)\}$, and $\ddot{\eta}(\theta) = \{\eta_i''(\theta)\}$. Calculate the curvature of the family (see Example 6.19)
$$C \exp\left[-\sum_{i=1}^n (x_i - \theta)^m\right]$$
for $m = 2, 3, 4$. Are the values of $\gamma_\theta$ ordered in the way you expected them to be?

5.5 Let $(X_1, X_2)$ have a bivariate normal distribution with mean vector $\xi = (\xi_1, \xi_2)$ and identity covariance matrix. In each of the following situations, verify the curvature $\gamma_\theta$ of the family.
(a) $\xi = (\theta, \theta)$, $\gamma_\theta = 0$.
(b) $\xi = (\theta_1, \theta_2)$, $\theta_1^2 + \theta_2^2 = r^2$, $\gamma_\theta = 1/r$.

5.6 In the density (5.1),
(a) For $s = 1$, show that $E_\theta[T(X)] = B'(\theta)/\eta'(\theta)$ and
$$\mathrm{var}_\theta[T(X)] = \frac{B''(\theta)}{[\eta'(\theta)]^2} - \frac{\eta''(\theta)B'(\theta)}{[\eta'(\theta)]^3}.$$
(b) For $s > 1$, show that $E_\theta[T(X)] = J^{-1}\nabla B$ where $J$ is the Jacobian matrix defined by $J = \{\partial \eta_j/\partial \theta_i\}$ and $\nabla B$ is the gradient vector $\nabla B = \{\frac{\partial}{\partial \theta_i} B(\theta)\}$.
(See Johnson, Ladalla, and Liu (1979) for a general treatment of these identities.)

5.7 Verify the relations (a) (5.22) and (b) (5.26).

5.8 For the binomial distribution (5.28), verify (a) the moment generating function (5.30) and (b) the moments (5.31).

5.9 For the Poisson distribution (5.32), verify the moments (5.35).

5.10 In a Bernoulli sequence of trials with success probability $p$, let $X + m$ be the number of trials required to achieve $m$ successes.
(a) Show that the distribution of $X$, the negative binomial distribution, is as given in Table 5.1.
(b) Verify that the negative binomial probabilities add up to 1 by expanding
$$\left(\frac{1}{p} - \frac{q}{p}\right)^{-m} = p^m(1 - q)^{-m}.$$
(c) Show that the distributions of (a) constitute a one-parameter exponential family.
(d) Show that the moment generating function of $X$ is $M_X(u) = p^m/(1 - qe^u)^m$.
(e) Show that $E(X) = mq/p$ and $\mathrm{var}(X) = mq/p^2$.
(f) By expanding $K_X(u)$, show that the first four cumulants of $X$ are $k_1 = mq/p$, $k_2 = mq/p^2$, $k_3 = mq(1 + q)/p^3$, and $k_4 = mq(1 + 4q + q^2)/p^4$.

5.11 In the preceding problem, let $X_i + 1$ be the number of trials required after the $(i-1)$st success has been obtained until the next success occurs. Use the fact that $X = \sum_{i=1}^m X_i$ to find an alternative derivation of the mean and variance in part (e).

5.12 A discrete random variable with probabilities
$$P(X = x) = a(x)\theta^x/C(\theta), \quad x = 0, 1, \ldots; \; a(x) \ge 0; \; \theta > 0,$$
is a power series distribution. This is an exponential family (5.1) with $s = 1$, $\eta = \log \theta$, and $T = X$. The moment generating function is $M_X(u) = C(\theta e^u)/C(\theta)$.

5.13 Show that the binomial, negative binomial, and Poisson distributions are special cases of the power series distribution of Problem 5.12, and determine $\theta$ and $C(\theta)$.

5.14 The distribution of Problem 5.12 with $a(x) = 1/x$ and $C(\theta) = -\log(1 - \theta)$, $x = 1, 2, \ldots$; $0 < \theta < 1$, is the logarithmic series distribution. Show that the moment generating function is $\log(1 - \theta e^u)/\log(1 - \theta)$, and determine $E(X)$ and $\mathrm{var}(X)$.

5.15 For the multinomial distribution (5.4), verify the moment formulas (5.16).

5.16 As an alternative to using (5.14) and (5.15), obtain the moments (5.16) by representing each $X_i$ as a sum of $n$ indicators, as was done in (5.5).

5.17 For the gamma distribution (5.41),
(a) verify the formulas (5.42), (5.43), and (5.44);
(b) show that (5.43), with the middle term deleted, holds not only for all positive integers $r$ but for all real $r > -\alpha$.

5.18 (a) Prove Lemma 5.15. (Use integration by parts.)
(b) By choosing $g(x)$ to be $x^2$ and $x^3$, use the Stein Identity to calculate the third and fourth moments of the $N(\mu, \sigma^2)$ distribution.

5.19 Using Lemma 5.15:
(a) Derive the form of the identity for $X \sim$ Gamma($\alpha, b$) and use it to verify the moments given in (5.44).
(b) Derive the form of the identity for $X \sim$ Beta($a, b$), and use it to verify that $E(X) = a/(a+b)$ and $\mathrm{var}(X) = ab/[(a+b)^2(a+b+1)]$.

5.20 As an alternative to the approach of Problem 5.19(b) for calculating the moments of $X \sim B(a, b)$, a general formula for $EX^k$ (similar to equation (5.43)) can be derived. Do so, and use it to verify the mean and variance of $X$ given in Problem 5.19. [Hint: Write $EX^k$ as the integral of $x^{c-1}(1-x)^{d-1}$ and use the constant $B(c, d)$ of Table 5.1. Note that a similar approach will work for many other distributions, including the $\chi^2$, Student's $t$, and $F$ distributions.]
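The formula asked for in Problem 5.20 is $EX^k = B(a + k, b)/B(a, b)$, since $x^k \cdot x^{a-1}(1-x)^{b-1}$ integrates to $B(a + k, b)$. A brief numerical check of the resulting mean and variance (a sketch, not part of the problem set):

```python
from scipy.special import beta as B

# E X^k = B(a + k, b) / B(a, b) for X ~ Beta(a, b); illustrative parameters.
a, b = 2.5, 4.0
EX  = B(a + 1, b) / B(a, b)
EX2 = B(a + 2, b) / B(a, b)

print(EX, a / (a + b))                                   # mean
print(EX2 - EX**2, a * b / ((a + b)**2 * (a + b + 1)))   # variance
```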
5.21 The Stein Identity can also be applied to discrete exponential families, as shown by Hudson (1978) and generalized by Hwang (1982a). If $X$ takes values in $N = \{0, 1, \ldots\}$ with probability function $p_\theta(x) = \exp[\theta x - B(\theta)]h(x)$, then for any $g : N \to \Re$ with $E_\theta|g(X)| < \infty$, we have the identity
$$Eg(X) = e^{-\theta} E\{t(X)g(X - 1)\}$$
where $t(0) = 0$ and $t(x) = h(x-1)/h(x)$ for $x > 0$.
(a) Prove the identity.
(b) Use the identity to calculate the first four moments of the binomial distribution (5.31).
(c) Use the identity to calculate the first four moments of the Poisson distribution (5.35).

5.22 The inverse Gaussian distribution, $IG(\lambda, \mu)$, has density function
$$\sqrt{\frac{\lambda}{2\pi}}\, e^{(\lambda\mu)^{1/2}}\, x^{-3/2}\, e^{-\frac{1}{2}\left(\frac{\lambda}{x} + \mu x\right)}, \quad x > 0, \; \lambda, \mu > 0.$$
(a) Show that this density constitutes an exponential family.
(b) Show that this density is a scale family (as defined in Example 4.1).
(c) Show that the statistics $\bar{X} = (1/n)\sum x_i$ and $S^* = \sum(1/x_i - 1/\bar{x})$ are complete sufficient statistics.
(d) Show that $\bar{X} \sim IG(n\lambda, n\mu)$ and $S^* \sim (1/\lambda)\chi^2_{n-1}$.
Note: Together with the normal and gamma distributions, the inverse Gaussian completes the trio of families that are both an exponential and a group family of distributions. This fact plays an important role in distribution theory based on saddlepoint approximations (Daniels 1983) or likelihood theory (Barndorff-Nielsen 1983).

5.23 In Example 5.14, show that
(a) $\chi_1^2$ is the distribution of $Y^2$ where $Y$ is distributed as $N(0, 1)$;
(b) $\chi_n^2$ is the distribution of $Y_1^2 + \cdots + Y_n^2$ where the $Y_i$ are independent $N(0, 1)$.

5.24 Determine the values $\alpha$ for which the density (5.41) is (a) a decreasing function of $x$ on $(0, \infty)$ and (b) increasing for $x < x_0$ and decreasing for $x > x_0$ ($0 < x_0$). In case (b), determine the mode of the density.

5.25 A random variable $X$ has the Pareto distribution $P(c, k)$ if its cdf is $1 - (k/x)^c$, $x > k > 0$, $c > 0$.
(a) The distributions $P(c, 1)$ constitute a one-parameter exponential family (5.2) with $\eta = -c$ and $T = \log X$.
(b) The statistic $T$ is distributed as $E(\log k, 1/c)$.
(c) The family $P(c, k)$ ($0 < k$, $0 < c$) is a group family.

5.26 If $(X, Y)$ is distributed according to the bivariate normal distribution (4.16) with $\xi = \eta = 0$:
(a) Show that the moment generating function of $(X, Y)$ is
$$M_{X,Y}(u_1, u_2) = e^{[u_1^2\sigma^2 + 2\rho\sigma\tau u_1 u_2 + u_2^2\tau^2]/2}.$$
(b) Use (a) to show that $\mu_{12} = \mu_{21} = 0$, $\mu_{11} = \rho\sigma\tau$, $\mu_{13} = 3\rho\sigma\tau^3$, $\mu_{31} = 3\rho\sigma^3\tau$, $\mu_{22} = (1 + 2\rho^2)\sigma^2\tau^2$.

5.27 (a) If $X$ is a random column vector with expectation $\xi$, then the covariance matrix of $X$ is $\mathrm{cov}(X) = E[(X - \xi)(X - \xi)']$.
(b) If the density of $X$ is (4.15), then $\xi = a$ and $\mathrm{cov}(X) = \Sigma$.

5.28 (a) Let $X$ be distributed with density $p_\theta(x)$ given by (5.1), and let $A$ be any fixed subset of the sample space. Then, the distributions of $X$ truncated on $A$, that is, the distributions with density $p_\theta(x)I_A(x)/P_\theta(A)$, again constitute an exponential family.
(b) Give an example in which the natural parameter space of the original exponential family is a proper subset of the natural parameter space of the truncated family.

5.29 If $X_i$ are independently distributed according to $\Gamma(\alpha_i, b)$, show that $\sum X_i$ is distributed as $\Gamma(\sum \alpha_i, b)$. [Hint: Method 1. Prove it first for the sum of two gamma variables by a transformation to new variables $Y_1 = X_1 + X_2$, $Y_2 = X_1/X_2$ and then use induction. Method 2. Obtain the moment generating function of $\sum X_i$ and use the fact that a distribution is uniquely determined by its moment generating function, when the latter exists for at least some $u \ne 0$.]
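A quick simulation check of Problem 5.29 (a sketch with illustrative parameters, not part of the problem set):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)

# If X_i ~ Gamma(alpha_i, b) independently (same scale b),
# then sum X_i ~ Gamma(sum alpha_i, b).
alphas, b = [0.5, 1.5, 3.0], 2.0
total = sum(rng.gamma(shape=a, scale=b, size=100_000) for a in alphas)

# Kolmogorov-Smirnov test against Gamma(sum alphas, b); the distance
# should be small and the p-value large.
print(stats.kstest(total, stats.gamma(a=sum(alphas), scale=b).cdf))
```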
5.30 When the $X_i$ are independently distributed according to Poisson distributions $P(\lambda_i)$, find the distribution of $\sum X_i$.

5.31 Let $X_1, \ldots, X_n$ be independently distributed as $\Gamma(\alpha, b)$. Show that the joint distribution is a two-parameter exponential family and identify the functions $\eta_i$, $T_i$, and $B$ of (5.1).

5.32 If $Y$ is distributed as $\Gamma(\alpha, b)$, determine the distribution of $c \log Y$ and show that for fixed $\alpha$ and varying $b$ it defines an exponential family.

5.33 Morris (1982, 1983b) investigated the properties of natural exponential families with quadratic variance functions. There are only six such families: normal, binomial, gamma, Poisson, negative binomial, and the lesser-known generalized hyperbolic secant distribution, which is the density of $X = \frac{1}{\pi}\log\left(\frac{Y}{1-Y}\right)$ when $Y \sim \text{Beta}\left(\frac{1}{2} + \frac{\theta}{\pi}, \frac{1}{2} - \frac{\theta}{\pi}\right)$, $|\theta| < \frac{\pi}{2}$.
(a) Find the density of $X$, and show that it constitutes an exponential family.
(b) Find the mean and variance of $X$, and show that the variance equals $1 + \mu^2$, where $\mu$ is the mean.
Subsequent work on quadratic and other power variance families has been done by Bar-Lev and Enis (1986, 1988), Bar-Lev and Bshouty (1989), and Letac and Mora (1990).

Section 6

6.1 Extend Example 6.2 to the case that $X_1, \ldots, X_r$ are independently distributed with Poisson distributions $P(\lambda_i)$ where $\lambda_i = a_i\lambda$ ($a_i > 0$, known).

6.2 Let $X_1, \ldots, X_n$ be iid according to a distribution $F$ and probability density $f$. Show that the conditional distribution given $X_{(i)} = a$ of the $i - 1$ values to the left of $a$ and the $n - i$ values to the right of $a$ is that of $i - 1$ variables distributed independently according to the probability density $f(x)/F(a)$ and $n - i$ variables distributed independently with density $f(x)/[1 - F(a)]$, respectively, with the two sets being (conditionally) independent of each other.

6.3 Let $f$ be a positive integrable function over $(0, \infty)$, and let $p_\theta(x)$ be the density over $(0, \theta)$ defined by $p_\theta(x) = c(\theta)f(x)$ if $0 < x < \theta$, and 0 otherwise. If $X_1, \ldots, X_n$ are iid with density $p_\theta$, show that $X_{(n)}$ is sufficient for $\theta$.

6.4 Let $f$ be a positive integrable function defined over $(-\infty, \infty)$ and let $p_{\xi,\eta}(x)$ be the probability density defined by $p_{\xi,\eta}(x) = c(\xi, \eta)f(x)$ if $\xi < x < \eta$, and 0 otherwise. If $X_1, \ldots, X_n$ are iid with density $p_{\xi,\eta}$, show that $(X_{(1)}, X_{(n)})$ is sufficient for $(\xi, \eta)$.

6.5 Show that each of the statistics $T_1$–$T_4$ of Example 6.11 is sufficient.

6.6 Prove Corollary 6.13.

6.7 Let $X_1, \ldots, X_m$ and $Y_1, \ldots, Y_n$ be independently distributed according to $N(\xi, \sigma^2)$ and $N(\eta, \tau^2)$, respectively. Find the minimal sufficient statistics for these cases:
(a) $\xi, \eta, \sigma, \tau$ are arbitrary: $-\infty < \xi, \eta < \infty$, $0 < \sigma, \tau$.
(b) $\sigma = \tau$ and $\xi, \eta, \sigma$ are arbitrary.
(c) $\xi = \eta$ and $\xi, \sigma, \tau$ are arbitrary.

6.8 Let $X_1, \ldots, X_n$ be iid according to $N(\sigma, \sigma^2)$, $0 < \sigma$. Find a minimal set of sufficient statistics.

6.9 (a) If $(x_1, \ldots, x_n)$ and $(y_1, \ldots, y_n)$ have the same elementary symmetric functions
$$\sum x_i = \sum y_i, \quad \sum_{i \ne j} x_i x_j = \sum_{i \ne j} y_i y_j, \quad \ldots, \quad x_1 \cdots x_n = y_1 \cdots y_n,$$
then the $y$'s are a permutation of the $x$'s.
(b) In the notation of Example 6.10, show that $U$ is equivalent to $V$. [Hint: Compare the coefficients and the roots of the polynomials $P(x) = \prod(x - u_i)$ and $Q(x) = \prod(x - v_i)$.]

6.10 Show that the order statistics are minimal sufficient for the location family (6.7) when $f$ is the density of
(a) the double exponential distribution $D(0, 1)$.
(b) the Cauchy distribution $C(0, 1)$.

6.11 Prove the following generalization of Theorem 6.12 to families without common support.
Theorem 9.1 Let $\mathcal{P}$ be a finite family with densities $p_i$, $i = 0, \ldots, k$, and for any $x$, let $S(x)$ be the set of pairs of subscripts $(i, j)$ for which $p_i(x) + p_j(x) > 0$. Then, the statistic
$$T(X) = \left\{\frac{p_j(X)}{p_i(X)}, \; i < j \text{ and } (i, j) \in S(X)\right\}$$
is minimal sufficient. Here, $p_j(x)/p_i(x) = \infty$ if $p_i(x) = 0$ and $p_j(x) > 0$.

6.12 In Problem 6.11 it is not enough to replace $p_i(X)$ by $p_0(X)$. To see this, let $k = 2$ and $p_0 = U(-1, 0)$, $p_1 = U(0, 1)$, and $p_2(x) = 2x$, $0 < x < 1$.

6.13 Let $k = 1$ and $P_i = U(i, i + 1)$, $i = 0, 1$.
(a) Show that a minimal sufficient statistic for $\mathcal{P} = \{P_0, P_1\}$ is $T(X) = i$ if $i < X < i + 1$, $i = 0, 1$.
(b) Let $X_1$ and $X_2$ be iid according to a distribution from $\mathcal{P}$. Show that each of the two statistics $T_1 = T(X_1)$ and $T_2 = T(X_2)$ is sufficient for $(X_1, X_2)$.
(c) Show that $T(X_1)$ and $T(X_2)$ are equivalent.

6.14 In Lemma 6.14, show that the assumption of common support can be replaced by the weaker assumption that every $\mathcal{P}_0$-null set is also a $\mathcal{P}$-null set so that (a.e. $\mathcal{P}_0$) is equivalent to (a.e. $\mathcal{P}$).

6.15 Let $X_1, \ldots, X_n$ be iid according to a distribution from $\mathcal{P} = \{U(0, \theta), \theta > 0\}$, and let $\mathcal{P}_0$ be the subfamily of $\mathcal{P}$ for which $\theta$ is rational. Show that every $\mathcal{P}_0$-null set in the sample space is also a $\mathcal{P}$-null set.

6.16 Let $X_1, \ldots, X_n$ be iid according to a distribution from a family $\mathcal{P}$. Show that $T$ is minimal sufficient in the following cases:
(a) $\mathcal{P} = \{U(0, \theta), \theta > 0\}$; $T = X_{(n)}$.
(b) $\mathcal{P} = \{U(\theta_1, \theta_2), -\infty < \theta_1 < \theta_2 < \infty\}$; $T = (X_{(1)}, X_{(n)})$.
(c) $\mathcal{P} = \{U(\theta - 1/2, \theta + 1/2), -\infty < \theta < \infty\}$; $T = (X_{(1)}, X_{(n)})$.

6.17 Solve the preceding problem for the following cases:
(a) $\mathcal{P} = \{E(\theta, 1), -\infty < \theta < \infty\}$; $T = X_{(1)}$.
(b) $\mathcal{P} = \{E(0, b), 0 < b\}$; $T = \sum X_i$.
(c) $\mathcal{P} = \{E(a, b), -\infty < a < \infty, 0 < b\}$; $T = (X_{(1)}, \sum[X_i - X_{(1)}])$.

6.18 Show that the statistics $X_{(1)}$ and $\sum[X_i - X_{(1)}]$ of Problem 6.17(c) are independently distributed as $E(a, b/n)$ and $b\,\Gamma(n - 1, 1)$, respectively. [Hint: If $a = 0$ and $b = 1$, the variables $Y_i = (n - i + 1)[X_{(i)} - X_{(i-1)}]$, $i = 2, \ldots, n$, are iid as $E(0, 1)$.]

6.19 Show that the sufficient statistics of (i) Problem 6.3 and (ii) Problem 6.4 are minimal sufficient.

6.20 (a) Show that in the $N(\theta, \theta)$ curved exponential family, the sufficient statistic $T = (\sum x_i, \sum x_i^2)$ is not minimal.
(b) For the density of Example 6.19, show that $T = (\sum x_i, \sum x_i^2, \sum x_i^3)$ is a minimal sufficient statistic.

6.21 For the situation of Example 6.25(ii), find an unbiased estimator of $\xi$ based on $\sum X_i$, and another based on $\sum X_i^2$; hence, deduce that $T = (\sum X_i, \sum X_i^2)$ is not complete.

6.22 For the situation of Example 6.26, show that $X$ is minimal sufficient and complete.

6.23 For the situation of Example 6.27:
(a) Show that $X = (X_1, X_2)$ is minimal sufficient for the family (6.16) with restriction (6.17).
(b) Establish (6.18), and hence that the minimal sufficient statistic of part (a) is not complete.

6.24 (Messig and Strawderman 1993) Show that for the general dose-response model
$$p_\theta(x) = \prod_{i=1}^m \binom{n_i}{x_i} [\eta_\theta(d_i)]^{x_i} [1 - \eta_\theta(d_i)]^{n_i - x_i},$$
the statistic $X = (X_1, X_2, \ldots, X_m)$ is minimal sufficient if there exist vectors $\theta_1, \theta_2, \ldots, \theta_m$ such that the $m \times m$ matrix
$$P = \left\|\log\left[\frac{\eta_{\theta_j}(d_i)}{1 - \eta_{\theta_j}(d_i)} \Big/ \frac{\eta_{\theta_0}(d_i)}{1 - \eta_{\theta_0}(d_i)}\right]\right\|$$
is invertible. (Hint: Theorem 6.12.)

6.25 Let $(X_i, Y_i)$, $i = 1, \ldots, n$, be iid according to the uniform distribution over a set $R$ in the $(x, y)$ plane and let $\mathcal{P}$ be the family of distributions obtained by letting $R$ range over a class $\mathcal{R}$ of sets $R$. Determine a minimal sufficient statistic for the following cases:
(a) $\mathcal{R}$ is the set of all rectangles $a_1 < x < a_2$, $b_1 < y < b_2$, $-\infty < a_1 < a_2 < \infty$, $-\infty < b_1 < b_2 < \infty$.
(b) $\mathcal{R}'$ is the subset of $\mathcal{R}$ for which $a_2 - a_1 = b_2 - b_1$.
(c) $\mathcal{R}''$ is the subset of $\mathcal{R}'$ for which $a_2 - a_1 = b_2 - b_1 = 1$.

6.26 Solve the preceding problem if
(a) $\mathcal{R}$ is the set of all triangles with sides parallel to the $x$ axis, the $y$ axis, and the line $y = x$, respectively.
(b) $\mathcal{R}'$ is the subset of $\mathcal{R}$ in which the sides parallel to the $x$ and $y$ axes are equal.

6.27 Formulate a general result of which Problems 6.25(a) and 6.26(a) are special cases.

6.28 If $Y$ is distributed as $E(\eta, 1)$, the distribution of $X = e^{-Y}$ is $U(0, e^{-\eta})$. (This result is useful in the computer generation of random variables; see Problem 4.4.14.)

6.29 If a minimal sufficient statistic exists, a necessary condition for a sufficient statistic to be complete is for it to be minimal. [Hint: Suppose that $T = h(U)$ is minimal sufficient and $U$ is complete. To show that $U$ is equivalent to $T$, note that otherwise there exists $\psi$ such that $\psi(U) \ne \eta[h(U)]$ with positive probability where $\eta(t) = E[\psi(U)|t]$.]

6.30 Show that the minimal sufficient statistics $T = (X_{(1)}, X_{(n)})$ of Problem 6.16(b) are complete. [Hint: Use the approach of Example 6.24.]

6.31 For each of the following problems, determine whether the minimal sufficient statistic is complete:
(a) Problem 6.7(a)-(c);
(b) Problem 6.25(a)-(c);
(c) Problem 6.26(a) and (b).

6.32 (a) Show that if $\mathcal{P}_0$, $\mathcal{P}_1$ are two families of distributions such that $\mathcal{P}_0 \subset \mathcal{P}_1$ and every null set of $\mathcal{P}_0$ is also a null set of $\mathcal{P}_1$, then a sufficient statistic $T$ that is complete for $\mathcal{P}_0$ is also complete for $\mathcal{P}_1$.
(b) Let $\mathcal{P}_0$ be the class of binomial distributions $b(p, n)$, $0 < p < 1$, $n$ = fixed, and let $\mathcal{P}_1 = \mathcal{P}_0 \cup \{Q\}$ where $Q$ is the Poisson distribution with expectation 1. Then $\mathcal{P}_0$ is complete but $\mathcal{P}_1$ is not.

6.33 Let $X_1, \ldots, X_n$ be iid each with density $f(x)$ (with respect to Lebesgue measure), which is unknown. Show that the order statistics are complete. [Hint: Use Problem 6.32(a) with $\mathcal{P}_0$ the class of distributions of Example 6.15(iv). Alternatively, let $\mathcal{P}_0$ be the exponential family with density
$$C(\theta_1, \ldots, \theta_n)\, e^{-\theta_1 \sum x_i - \theta_2 \sum x_i^2 - \cdots - \theta_n \sum x_i^n - \sum x_i^{2n}}.]$$

6.34 Suppose that $X_1, \ldots, X_n$ are an iid sample from a location-scale family with distribution function $F((x - a)/b)$.
(a) If $b$ is known, show that the differences $(X_1 - X_i)/b$, $i = 2, \ldots, n$, are ancillary.
(b) If $a$ is known, show that the ratios $(X_1 - a)/(X_i - a)$, $i = 2, \ldots, n$, are ancillary.
(c) If neither $a$ nor $b$ is known, show that the quantities $(X_1 - X_i)/(X_2 - X_i)$, $i = 3, \ldots, n$, are ancillary.

6.35 Use Basu's theorem to prove independence of the following pairs of statistics:
(a) $\bar{X}$ and $\sum(X_i - \bar{X})^2$ where the $X$'s are iid as $N(\xi, \sigma^2)$.
(b) $X_{(1)}$ and $\sum[X_i - X_{(1)}]$ in Problem 6.18.

6.36 (a) Under the assumptions of Problem 6.18, the ratios $Z_i = [X_{(n)} - X_{(i)}]/[X_{(n)} - X_{(n-1)}]$, $i = 1, \ldots, n - 2$, are independent of $\{X_{(1)}, \sum[X_i - X_{(1)}]\}$.
(b) Under the assumptions of Problems 6.16(b) and 6.30, the ratios $Z_i = [X_{(i)} - X_{(1)}]/[X_{(n)} - X_{(1)}]$, $i = 2, \ldots, n - 1$, are independent of $(X_{(1)}, X_{(n)})$.

6.37 Under the assumptions of Theorem 6.5, let $A$ be any fixed set in the sample space, $P_\theta^*$ the distribution $P_\theta$ truncated on $A$, and $\mathcal{P}^* = \{P_\theta^*, \theta \in \Omega\}$. Then prove
(a) if $T$ is sufficient for $\mathcal{P}$, it is sufficient for $\mathcal{P}^*$.
(b) if, in addition, $T$ is complete for $\mathcal{P}$, it is also complete for $\mathcal{P}^*$.
Generalizations of this result were derived by Tukey in the 1940s and also by Smith (1957). The analogous problem for observations that are censored rather than truncated is discussed by Bhattacharyya, Johnson, and Mehrotra (1977).
6.38 If $X_1, \ldots, X_n$ are iid as $B(a, b)$,
(a) Show that $[\prod X_i, \prod(1 - X_i)]$ is minimal sufficient for $(a, b)$.
(b) Determine the minimal sufficient statistic when $a = b$.

Section 7

7.1 Verify the convexity of the functions (i)-(vi) of Example 7.3.

7.2 Show that $x^p$ is concave over $(0, \infty)$ if $0 < p < 1$.

7.3 Give an example showing that a convex function need not be continuous on a closed interval.

7.4 If $\varphi$ is convex on $(a, b)$ and $\psi$ is convex and nondecreasing on the range of $\varphi$, show that the function $\psi[\varphi(x)]$ is convex on $(a, b)$.

7.5 Prove or disprove by counterexample each of the following statements. If $\varphi$ is convex on $(a, b)$, then so is (i) $e^{\varphi(x)}$ and (ii) $\log \varphi(x)$ if $\varphi > 0$.

7.6 Show that if equality holds in (7.1) for some $0 < \gamma < 1$, then $\varphi$ is linear on $[x, y]$.

7.7 Establish the following lemma, which is useful in examining the risk functions of certain estimators. (For further discussion, see Casella 1990.)

Lemma 9.2 Let $r : [0, \infty) \to [0, \infty)$ be concave. Then, (i) $r(t)$ is nondecreasing and (ii) $r(t)/t$ is nonincreasing.

7.8 Prove Jensen's inequality for the case that $X$ takes on the values $x_1, \ldots, x_n$ with probabilities $\gamma_1, \ldots, \gamma_n$ ($\sum \gamma_i = 1$) directly from (7.1) by induction over $n$.

7.9 A slightly different form of the Rao-Blackwell theorem, which applies only to the variance of an estimator rather than any convex loss, can be established without Jensen's inequality.
(a) For any estimator $\delta(x)$ with $\mathrm{var}[\delta(X)] < \infty$, and any statistic $T$, show that
$$\mathrm{var}[\delta(X)] = \mathrm{var}[E(\delta(X)|T)] + E[\mathrm{var}(\delta(X)|T)].$$
(b) Based on the identity in part (a), formulate and prove a Rao-Blackwell type theorem for variances.
(c) The identity in part (a) plays an important role in both theoretical and applied statistics. For example, explain how Equation (1.2) can be interpreted as a special case of this identity.

7.10 Let $U$ be uniformly distributed on $(0, 1)$, and let $F$ be a distribution function on the real line.
(a) If $F$ is continuous and strictly increasing, show that $F^{-1}(U)$ has distribution function $F$.
(b) For arbitrary $F$, show that $F^{-1}(U)$ continues to have distribution function $F$. [Hint: Take $F^{-1}$ to be any nondecreasing function such that $F^{-1}[F(x)] = x$ for all $x$ for which there exists no $x' \ne x$ with $F(x') = F(x)$.]

7.11 Show that the $k$-dimensional sphere $\sum_{i=1}^k x_i^2 \le c$ is convex.

7.12 Show that $f(a) = \sqrt{|x - a|} + \sqrt{|y - a|}$ is minimized by $a = x$ and $a = y$.

7.13 (a) Show that $\varphi(x) = e^{\sum x_i}$ is convex by showing that its Hessian matrix is positive semidefinite.
(b) Show that the result of Problem 7.4 remains valid if $\varphi$ is a convex function defined over an open convex set in $E_k$.
(c) Use (b) to obtain an alternative proof of the result of part (a).

7.14 Determine whether the following functions are super- or subharmonic:
(a) $\sum_{i=1}^k x_i^p$, $p < 1$, $x_i > 0$.
(b) $e^{-\sum_{i=1}^k x_i^2}$.
(c) $\log\left(\prod_{i=1}^k x_i\right)$.

7.15 A function is lower semicontinuous at the point $y$ if $f(y) \le \liminf_{x \to y} f(x)$. The definition of superharmonic can be extended from continuous to lower semicontinuous functions.
(a) Show that a continuous function is lower semicontinuous.
(b) The function $f(x) = I(a < x < b)$ is superharmonic on $(-\infty, \infty)$.
(c) For an estimator $d$ of $\theta$, show that the loss function
$$L(\theta, d) = \begin{cases} 0 & \text{if } |d - \theta| \le k \\ 2 & \text{if } |d - \theta| > k \end{cases}$$
is subharmonic.

7.16 (a) If $f : \Re^p \to \Re$ is superharmonic, then $\phi(f(\cdot))$ is also superharmonic, where $\phi : \Re \to \Re$ is a twice-differentiable increasing concave function.
(b) If $h$ is superharmonic, then $h^*(x) = \int g(x - y)h(y)\,dy$ is also superharmonic, where $g(\cdot)$ is a density.
(c) If $h_\gamma$ is superharmonic, then so is $h^*(x) = \int h_\gamma(x)\,dG(\gamma)$ where $G(\gamma)$ is a distribution function.
(Assume that all necessary integrals exist, and that derivatives may be taken inside the integrals.)

7.17 Use the convexity of the function $\varphi$ of Problem 7.13 to show that the natural parameter space of the exponential family (5.2) is convex.

7.18 Show that if $f$ is defined and bounded over $(-\infty, \infty)$ or $(0, \infty)$, then $f$ cannot be convex (unless it is constant).

7.19 Show that $\varphi(x, y) = -\sqrt{xy}$ is convex over $x > 0$, $y > 0$.

7.20 If $f$ and $g$ are real-valued functions such that $f^2$, $g^2$ are measurable with respect to the $\sigma$-finite measure $\mu$, prove the Schwarz inequality
$$\left(\int fg\,d\mu\right)^2 \le \int f^2\,d\mu \int g^2\,d\mu.$$
[Hint: Write $\int fg\,d\mu = E_Q(f/g)\int g^2\,d\mu$, where $Q$ is the probability measure with $dQ = g^2\,d\mu/\int g^2\,d\mu$, and apply Jensen's inequality with $\varphi(x) = x^2$.]

7.21 Show that the loss functions (7.24) are continuously differentiable.

7.22 Prove the statements made in Example 7.20(i) and (ii).

7.23 Let $f$ be a unimodal density symmetric about 0, and let $L(\theta, d) = \rho(d - \theta)$ be a loss function with $\rho$ nondecreasing on $(0, \infty)$ and symmetric about 0.
(a) The function $\varphi(a) = E[\rho(X - a)]$ defined in Theorem 7.15 takes on its minimum at 0.
(b) If $S_a = \{x : [\rho(x + a) - \rho(x - a)][f(x + a) - f(x - a)] \ne 0\}$, then $\varphi(a)$ takes on its unique minimum value at $a = 0$ if and only if there exists $a_0$ such that $\varphi(a_0) < \infty$, and $\mu(S_a) > 0$ for all $a$. [Hint: Note that $\varphi(0) \le \frac{1}{2}[\varphi(2a) + \varphi(-2a)]$, with strict inequality holding if and only if $\mu(S_a) > 0$ for all $a$.]

7.24 (a) Suppose that $f$ and $\rho$ satisfy the assumptions of Problem 7.23 and that $f$ is strictly decreasing on $[0, \infty)$. Then, if $\varphi(a_0) < \infty$ for some $a_0$, $\varphi(a)$ has a unique minimum at zero unless there exists $c \le d$ such that $\rho(0) = c$ and $\rho(x) = d$ for all $x \ne 0$.
(b) If $\rho$ is symmetric about 0, strictly increasing on $[0, \infty)$, and $\varphi(a_0) < \infty$ for some $a_0$, then $\varphi(a)$ has a unique minimum at 0 for all symmetric unimodal $f$.
[Problems 7.23 and 7.24 were communicated by Dr. W.Y. Loh.]

7.25 Let $\rho$ be a real-valued function satisfying $0 \le \rho(t) \le M < \infty$ and $\rho(t) \to M$ as $t \to \pm\infty$, and let $X$ be a random variable with a continuous probability density $f$. Then $\varphi(a) = E[\rho(X - a)]$ attains its minimum. [Hint: Show that (a) $\varphi(a) \to M$ as $a \to \pm\infty$ and (b) $\varphi$ is continuous. Here, (b) follows from the fact (see, for example, TSH2, Appendix, Section 2) that if $f_n$, $n = 1, 2, \ldots$, and $f$ are probability densities such that $f_n(x) \to f(x)$ a.e., then $\int \psi f_n \to \int \psi f$ for any bounded $\psi$.]

7.26 Let $\varphi$ be a strictly convex function defined over an interval $I$ (finite or infinite). If there exists a value $a_0$ in $I$ minimizing $\varphi(a)$, then $a_0$ is unique.

7.27 Generalize Corollary 7.19 to the case where $X$ and $\mu$ are vectors.

Section 8

8.1 (a) Prove Chebychev's Inequality: For any random variable $X$ and non-negative function $g(\cdot)$,
$$P(g(X) \ge \varepsilon) \le \frac{1}{\varepsilon} Eg(X) \quad \text{for every } \varepsilon > 0.$$
(In many statistical applications, it is useful to take $g(x) = (x - a)^2/b^2$ for some constants $a$ and $b$.)
(b) Prove Lemma 9.3. [Hint: Apply Chebychev's Inequality.]

Lemma 9.3 A sufficient condition for $Y_n$ to converge in probability to $c$ is that $E(Y_n - c)^2 \to 0$.

8.2 To see that the converse of Theorem 8.2 does not hold, let $X_1, \ldots, X_n$ be iid with $E(X_i) = \theta$, $\mathrm{var}(X_i) = \sigma^2 < \infty$, and let $\delta_n = \bar{X}$ with probability $1 - \varepsilon_n$ and $\delta_n = A_n$ with probability $\varepsilon_n$. If $\varepsilon_n$ and $A_n$ are constants satisfying $\varepsilon_n \to 0$ and $\varepsilon_n A_n \to \infty$, then $\delta_n$ is consistent for estimating $\theta$, but $E(\delta_n - \theta)^2$ does not tend to zero.

8.3 Suppose $\rho(x)$ is an even function, nondecreasing and non-negative for $x \ge 0$ and positive for $x > 0$.
Then, $E\{\rho[\delta_n - g(\theta)]\} \to 0$ for all $\theta$ implies that $\delta_n$ is consistent for estimating $g(\theta)$.

8.4 (a) If $A_n$, $B_n$, and $Y_n$ tend in probability to $a$, $b$, and $y$, respectively, then $A_n + B_nY_n$ tends in probability to $a + by$.
(b) If $A_n$ takes on the constant value $a_n$ with probability 1 and $a_n \to a$, then $A_n \to a$ in probability.

8.5 Referring to Example 8.4, show that $c_nS_n^2 \xrightarrow{P} \sigma^2$ for any sequence of constants $c_n \to 1$. In particular, the MLE $\hat{\sigma}^2 = \frac{n-1}{n}S_n^2$ is a consistent estimator of $\sigma^2$.

8.6 Verify Equation (8.9).

8.7 If $\{a_n\}$ is a sequence of real numbers tending to $a$, and if $b_n = (a_1 + \cdots + a_n)/n$, then $b_n \to a$.

8.8 (a) If $\delta_n$ is consistent for $\theta$, and $g$ is continuous, then $g(\delta_n)$ is consistent for $g(\theta)$.
(b) Let $X_1, \ldots, X_n$ be iid as $N(\theta, 1)$, and let $g(\theta) = 0$ if $\theta \ne 0$ and $g(0) = 1$. Find a consistent estimator of $g(\theta)$.

8.9 (a) In Example 8.5, find $\mathrm{cov}(X_i, X_j)$ for any $i \ne j$.
(b) Verify (8.10).

8.10 (a) In Example 8.5, find the value of $p_1$ for which $p_k$ becomes independent of $k$.
(b) If $p_1$ has the value given in (a), then for any integers $i_1 < \cdots < i_r$ and $k$, the joint distribution of $X_{i_1}, \ldots, X_{i_r}$ is the same as that of $X_{i_1+k}, \ldots, X_{i_r+k}$. [Hint: Do not calculate, but use the definition of the chain.]

8.11 Suppose $X_1, \ldots, X_n$ have a common mean $\xi$ and variance $\sigma^2$, and that $\mathrm{cov}(X_i, X_j) = \rho_{j-i}$. For estimating $\xi$, show that:
(a) $\bar{X}$ is not consistent if $\rho_{j-i} = \rho \ne 0$ for all $i \ne j$;
(b) $\bar{X}$ is consistent if $|\rho_{j-i}| \le M\gamma^{j-i}$ with $|\gamma| < 1$.
[Hint: (a) Note that $\mathrm{var}(\bar{X}) > 0$ for all sufficiently large $n$ requires $\rho \ge 0$, and determine the distribution of $\bar{X}$ in the multivariate normal case.]

8.12 Suppose that $k_n[\delta_n - g(\theta)]$ tends in law to a continuous limit distribution $H$. Prove that:
(a) If $k_n'/k_n \to d \ne 0$ or $\infty$, then $k_n'[\delta_n - g(\theta)]$ also tends to a continuous limit distribution.
(b) If $k_n'/k_n \to 0$ or $\infty$, then $k_n'[\delta_n - g(\theta)]$ tends in probability to zero or infinity, respectively.
(c) If $k_n \to \infty$, then $\delta_n \to g(\theta)$ in probability.

8.13 Show that if $Y_n \to c$ in probability, then it tends in law to a random variable $Y$ which is equal to $c$ with probability 1.

8.14 (a) In Example 8.7(i) and (ii), $Y_n \to 0$ in probability. Show that:
(b) If $H_n$ denotes the distribution function of $Y_n$ in Example 8.7(i) and (ii), then $H_n(a) \to 0$ for all $a < 0$ and $H_n(a) \to 1$ for all $a > 0$.
(c) Determine $\lim H_n(0)$ for Example 8.7(i) and (ii).

8.15 If $T_n > 0$ satisfies $\sqrt{n}[T_n - \theta] \xrightarrow{L} N(0, \tau^2)$, find the limiting distribution of (a) $\sqrt{T_n}$ and (b) $\log T_n$ (suitably normalized).

8.16 If $T_n$ satisfies $\sqrt{n}[T_n - \theta] \xrightarrow{L} N(0, \tau^2)$, find the limiting distribution of (a) $T_n^2$, (b) $\log|T_n|$, (c) $1/T_n$, and (d) $e^{T_n}$ (suitably normalized).

8.17 Variance stabilizing transformations are transformations for which the resulting statistic has an asymptotic variance that is independent of the parameters of interest. For each of the following cases, find the asymptotic distribution of the transformed statistic and show that it is variance stabilizing.
(a) $T_n = \frac{1}{n}\sum_{i=1}^n X_i$, $X_i \sim$ Poisson($\lambda$), $h(T_n) = \sqrt{T_n}$.
(b) $T_n = \frac{1}{n}\sum_{i=1}^n X_i$, $X_i \sim$ Bernoulli($p$), $h(T_n) = \arcsin\sqrt{T_n}$.

8.18 (a) The function $v(\cdot)$ is a variance stabilizing transformation if the estimator $v(T_n)$ has asymptotic variance $\tau^2(\theta)[v'(\theta)]^2 = c$, where $c$ is a constant independent of $\theta$.
(b) For any positive integer $n$, find the variance stabilizing transformation if $\tau^2(\theta) = \theta^n$. In particular, be careful of the important case $n = 2$.
[A variance stabilizing transformation (if it exists) is the solution of a differential equation resulting from the Delta Method approximation of the variance of an estimator (Theorem 8.12) and is not a function of the distribution of the statistic (other than the fact that the distribution will determine the form of the variance). The transformations of part (b) are known as the Box-Cox family of power transformations and play an important role in applied statistics. For more details and interesting discussions, see Bickel and Doksum 1981, Box and Cox 1982, and Hinkley and Runger 1984.]

8.19 Serfling (1980, Section 3.1) remarks that the following variations of Theorem 8.12 can be established. Show that:
(a) If $h$ is differentiable in a neighborhood of $\theta$, and $h'$ is continuous at $\theta$, then $h'(\theta)$ may be replaced by $h'(T_n)$ to obtain
$$\frac{\sqrt{n}[h(T_n) - h(\theta)]}{\tau h'(T_n)} \xrightarrow{L} N(0, 1).$$
(b) Furthermore, if $\tau^2$ is a continuous function of $\theta$, say $\tau^2(\theta)$, it can be replaced by $\tau^2(T_n)$ to obtain
$$\frac{\sqrt{n}[h(T_n) - h(\theta)]}{\tau(T_n)h'(T_n)} \xrightarrow{L} N(0, 1).$$

8.20 Prove Theorem 8.16. [Hint: Under the assumptions of the theorem we have the Taylor expansion
$$h(x_1, \ldots, x_s) = h(\xi_1, \ldots, \xi_s) + \sum (x_i - \xi_i)\left[\frac{\partial h}{\partial \xi_i} + R_i\right]$$
where $R_i \to 0$ as $x_i \to \xi_i$.]

8.21 A sequence of numbers $R_n$ is said to be $o(1/k_n)$ as $n \to \infty$ if $k_nR_n \to 0$ and to be $O(1/k_n)$ if there exist $M$ and $n_0$ such that $|k_nR_n| < M$ for all $n > n_0$ or, equivalently, if $k_nR_n$ is bounded.
(a) If $R_n = o(1/k_n)$, then $R_n = O(1/k_n)$.
(b) $R_n = O(1)$ if and only if $R_n$ is bounded.
(c) $R_n = o(1)$ if and only if $R_n \to 0$.
(d) If $R_n$ is $O(1/k_n)$ and $k_n'/k_n$ tends to a finite limit, then $R_n$ is $O(1/k_n')$.

8.22 (a) If $R_n$ and $R_n'$ are both $O(1/k_n)$, so is $R_n + R_n'$.
(b) If $R_n$ and $R_n'$ are both $o(1/k_n)$, so is $R_n + R_n'$.

8.23 Suppose $k_n'/k_n \to \infty$.
(a) If $R_n = O(1/k_n)$ and $R_n' = O(1/k_n')$, then $R_n + R_n' = O(1/k_n)$.
(b) If $R_n = o(1/k_n)$ and $R_n' = o(1/k_n')$, then $R_n + R_n' = o(1/k_n)$.

8.24 A sequence of random variables $Y_n$ is bounded in probability if given any $\varepsilon > 0$, there exist $M$ and $n_0$ such that $P(|Y_n| > M) < \varepsilon$ for all $n > n_0$. Show that if $Y_n$ converges in law, then $Y_n$ is bounded in probability.

8.25 In generalization of the notation $o$ and $O$, let us say that $Y_n = o_p(1/k_n)$ if $k_nY_n \to 0$ in probability and that $Y_n = O_p(1/k_n)$ if $k_nY_n$ is bounded in probability. Show that the results of Problems 8.21–8.23 continue to hold if $o$ and $O$ are replaced by $o_p$ and $O_p$.

8.26 Let $(X_n, Y_n)$ have a bivariate normal distribution with means $E(X_n) = E(Y_n) = 0$, variances $E(X_n^2) = E(Y_n^2) = 1$, and with correlation coefficient $\rho_n$ tending to 1 as $n \to \infty$.
(a) Show that $(X_n, Y_n) \xrightarrow{L} (X, Y)$ where $X$ is $N(0, 1)$ and $P(X = Y) = 1$.
(b) If $S = \{(x, y) : x = y\}$, show that (8.25) does not hold.

8.27 Prove Theorem 8.22. [Hint: Make a Taylor expansion as in the proof of Theorem 8.12 and use Problem 4.16.]

10 Notes

10.1 Fubini's Theorem
Theorem 2.8, called variously Fubini's or Tonelli's theorem, is often useful in mathematical statistics. A variant of Theorem 2.8 allows $f$ to be nonpositive, but requires an integrability condition (Billingsley 1995, Section 18). Dudley (1989) refers to Theorem 2.8 as the Tonelli-Fubini theorem and recounts an interesting history in which Lebesgue played a role. Apparently, Fubini's first published proof of this theorem was incorrect and was later corrected by Tonelli, using results of Lebesgue.

10.2 Sufficiency
The concept of sufficiency is due to Fisher (1920). (For some related history, see Stigler 1973.) In his fundamental paper of 1922, Fisher introduced the term sufficiency and stated the factorization criterion.
The criterion was rediscovered by Neyman (1935) and was proved for general dominated families by Halmos and Savage (1949). The theory of minimal sufficiency was initiated by Lehmann and Scheffé (1950) and Dynkin (1951). Further generalizations are given by Bahadur (1954) and Landers and Rogge (1972). Yamada and Morimoto (1992) review the topic.

Theorem 7.8 with squared error loss is due to Rao (1945) and Blackwell (1947). It was extended to the $p$th power of the error ($p \geq 1$) by Barankin (1950) and to arbitrary convex loss functions by Hodges and Lehmann (1950).

10.3 Exponential Families
One-parameter exponential families, as the only (regular) families of distributions for which there exists a one-dimensional sufficient statistic, were also introduced by Fisher (1934). His result was generalized to more than one dimension by Darmois (1935), Koopman (1936), and Pitman (1936). (Their contributions are compared by Barankin and Maitra (1963).) Another discussion of this theorem with reference to the literature is given, for example, by Hipp (1974). Comprehensive treatments of exponential families are provided by Barndorff-Nielsen (1978) and Brown (1986a); a more mathematical treatment is given in Hoffmann-Jørgensen (1994). Statistical aspects are emphasized in Johansen (1979).

10.4 Ancillarity
To illustrate his use of ancillary statistics, group families were introduced by Fisher (1934). (For more information on ancillarity, see Buehler 1982, or the review article by Lehmann and Scholz 1992.) Ancillary statistics, and more general notions of ancillarity, have played an important role in developing inference in both group families and curved exponential families, the latter having connections to the field of "small-sample asymptotics," where it is shown how to obtain highly accurate asymptotic approximations, based on ancillaries and saddlepoints.

For example, as curved exponential families are not of full rank, it is typical that a minimal sufficient statistic is not complete. One might hope that an $s$-dimensional sufficient statistic could be split into a $d$-dimensional sufficient piece and an $(s-d)$-dimensional ancillary piece. Although this cannot always be done, useful decompositions can be found. Such endeavors lie at the heart of conditional inference techniques. Good introductions to these topics can be found in Reid (1988), Field and Ronchetti (1990), Hinkley, Reid, and Snell (1991), Barndorff-Nielsen and Cox (1994), and Reid (1995).

10.5 Completeness
Completeness was introduced by Lehmann and Scheffé (1950). Theorem 6.21 is due to Basu (1955b, 1958). Although there is no converse to Basu's theorem as stated here, some alternative definitions and converse results are discussed by Lehmann (1981).

There are alternate versions of Theorem 6.22, which relate completeness in exponential families to having full rank. This is partially due to the fact that a full or full-rank exponential family can be defined in alternate ways. For example, referring to (5.1), if we define $\Omega$ as the index set of the densities $p_\theta(x)$, that is, we consider the family of densities $\{p_\theta(x), \theta \in \Omega\}$, then Brown (1986a, Section 1.1) defines the exponential family to be full if $\Omega = \mathcal{F}$, where $\mathcal{F}$ is the natural parameter space [see (5.3)]. But this property is not needed for completeness. As Brown (1986a, Theorem 2.12) states, as long as the interior of $\Omega$ is nonempty (that is, $\Omega$ contains an open set), the family $\{p_\theta(x), \theta \in \Omega\}$ is complete.
Another definition of a full exponential model is given by Barndorff-Nielsen and Cox (1994, Section 1.3), which requires that the statistics $T_1, \ldots, T_s$ not be linearly dependent.

In nonparametric families, the property of completeness, and the determination of complete sufficient statistics, continues to be investigated. See, for example, Mandelbaum and Rüschendorf (1987) and Mattner (1992, 1993, 1994). For example, building on the work of Fraser (1954) and Mandelbaum and Rüschendorf (1987), Mattner (1994) showed that the order statistics are complete for the family of densities $\mathcal{P}$, in cases such as
(a) $\mathcal{P} = \{$all probability measures on the real line with unimodal densities with respect to Lebesgue measure$\}$;
(b) $\mathcal{P} = \{(1-t)P + tQ : P \in \mathcal{P}, Q \in \mathcal{Q}(P), t \in [0, \epsilon]\}$, where $\epsilon$ is fixed and, for each $P \in \mathcal{P}$, $P$ is absolutely continuous with respect to the complete and convex family $\mathcal{Q}(P)$.

10.6 Curved Exponential Families
The theory of curved exponential families was initiated by Efron (1975, 1978), who applied the ideas of plane curvature and arc length to better understand the structure of exponential families. Curved exponential families have been extensively studied since then. (See, for example, Brown 1986a, Chapter 3; Barndorff-Nielsen 1988; McCullagh and Nelder 1989; Barndorff-Nielsen and Cox 1994, Section 2.10.) Here, we give some details in a two-dimensional case; extensions to higher dimensions are reasonably straightforward (Problem 5.4).

For the exponential family (5.1), with $s = 2$, the parameter is $(\eta_1(\theta), \eta_2(\theta))$, where $\theta$ is an underlying parameter which indexes the parameter space. If $\theta$ itself is a one-dimensional parameter, then the parameter space is a curve in two dimensions, a subset of the full two-dimensional space. Assuming that the $\eta_i$'s have at least two derivatives as functions of $\theta$, the parameter space is a one-dimensional differentiable manifold, a differentiable curve. (See Amari et al. 1987 or Murray and Rice 1993 for an introduction to differential geometry and statistics.)

Figure 10.1. The curve $\eta(\tau) = (\eta_1(\tau), \eta_2(\tau)) = (\tau, -\frac{1}{2}\tau^2)$. The curvature $\gamma_\tau$ is the instantaneous rate of change of the angle $a(\tau)$ of the gradient $\nabla\eta(\tau)$ with respect to the arc length $s(\tau)$. The tangent vector $\nabla\eta(\tau)$ and the unit normal vector $N(\tau) = [-\eta_2'(\tau), \eta_1'(\tau)]/[ds_\tau/d\tau]$ provide a moving frame of reference.

Example 10.1 Curvature. For the exponential family (5.7), let $\tau = 1/\xi$, so the parameter space is the curve $\eta(\tau) = (\tau, -\frac{1}{2}\tau^2)$, as shown in Figure 10.1. The direction of the curve $\eta(\tau)$, at any point $\tau$, is measured by the derivative vector (the gradient)
$$\nabla\eta(\tau) = (\eta_1'(\tau), \eta_2'(\tau)) = (1, -\tau).$$
At each $\tau$ we can assign an angular value
$$a(\tau) = \text{polar angle of the normalized gradient vector} = \text{polar angle of } \frac{(\eta_1'(\tau), \eta_2'(\tau))}{[(\eta_1')^2 + (\eta_2')^2]^{1/2}},$$
which measures how the curve "bends." The curvature, $\gamma_\tau$, is a measure of the rate of change of this angle as a function of the arc length $s(\tau)$, where $s(\tau) = \int_0^\tau |\nabla\eta(t)|\,dt$. Thus,
$$\gamma_\tau = \lim_{\delta\tau \to 0} \frac{a(\tau + \delta\tau) - a(\tau)}{s(\tau + \delta\tau) - s(\tau)} = \frac{da(\tau)}{ds(\tau)}; \qquad (10.1)$$
see Figure 10.1. An application of calculus will show that
$$\gamma_\tau = \frac{\eta_1'\eta_2'' - \eta_2'\eta_1''}{[(\eta_1')^2 + (\eta_2')^2]^{3/2}}, \qquad (10.2)$$
so for the exponential family (5.7), we have $\gamma_\tau = -(1 + \tau^2)^{-3/2}$. ∥

For the most part, we are only concerned with $|\gamma_\tau|$, as the sign merely gives the direction of parameterization, and the magnitude gives the degree of curvature. As might be expected, lines have zero curvature and circles have constant curvature.
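Formula (10.2) is simple enough to verify with a computer algebra system. A minimal Python/SymPy sketch (assuming the curve of Example 10.1; the symbol names are ours):

```python
import sympy as sp

# Curvature of eta(tau) = (tau, -tau**2/2) via formula (10.2).
tau = sp.symbols('tau', real=True)
eta1, eta2 = tau, -tau**2 / 2
d1, d2 = sp.diff(eta1, tau), sp.diff(eta2, tau)       # first derivatives
dd1, dd2 = sp.diff(d1, tau), sp.diff(d2, tau)         # second derivatives
gamma = (d1 * dd2 - d2 * dd1) / (d1**2 + d2**2)**sp.Rational(3, 2)
print(sp.simplify(gamma))   # -(tau**2 + 1)**(-3/2), as in Example 10.1
```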
The curvature of a circle is equal to the reciprocal of the radius, which leads to calling $1/|\gamma_\tau|$ the radius of curvature. Definitions of arc length, and so forth, extend naturally beyond two dimensions. (See Problems 5.5 and 5.4.)

10.7 Large Deviation Theory
Limit theorems such as Theorem 1.8.12 refer to sequences of situations as $n \to \infty$. However, in a given problem, one is dealing with a specific large value of $n$. Any particular situation can be embedded in many different sequences, which lead to different approximations. Suppose, for example, that it is desired to find an approximate value for
$$P(|T_n - g(\theta)| \geq a) \qquad (10.3)$$
when $n = 100$ and $a = 0.2$. If $\sqrt{n}[T_n - g(\theta)]$ is asymptotically normally distributed as $N(0, 1)$, one might want to put $a = c/\sqrt{n}$ (so that $c = 2$) and consider (10.3) as a member of the sequence
$$P\left(|T_n - g(\theta)| \geq \frac{2}{\sqrt{n}}\right) \approx 2[1 - \Phi(2)]. \qquad (10.4)$$
Alternatively, one could keep $a = 0.2$ fixed and consider (10.3) as a member of the sequence
$$P(|T_n - g(\theta)| \geq 0.2). \qquad (10.5)$$
Since $T_n - g(\theta) \to 0$, this sequence of probabilities tends to zero, and in fact does so at a very fast rate. In this approach, the normal approximation is no longer useful (it only tells us that (10.5) $\to 0$ as $n \to \infty$). The study of the limiting behavior of sequences such as (10.5) is called large deviation theory.

An exposition of large deviation theory is given by Bahadur (1971). Books on large deviation theory include those by Kester (1985) and Bucklew (1990). Much research has been done on this topic, and applications to various aspects of point estimation can be found in Fu (1982), Kester and Kallenberg (1986), Sieders and Dzhaparidze (1987), and Pfanzagl (1990).

We would, of course, like to choose the approximation that comes closer to the true value. It seems plausible that for values of (10.3) not extremely close to 0 and for moderate sample sizes, (10.4) would tend to do better than that obtained from the sequence (10.5). Some numerical comparisons in the context of hypothesis testing can be found in Groeneboom and Oosterhoff (1981); other applications in testing are considered in Barron (1989).

CHAPTER 2
Unbiasedness

1 UMVU Estimators

It was pointed out in Section 1.1 that estimators with uniformly minimum risk typically do not exist, and restricting attention to estimators showing some degree of impartiality was suggested as one way out of this difficulty. As a first such restriction, we shall study the condition of unbiasedness in the present chapter.

Definition 1.1 An estimator $\delta(x)$ of $g(\theta)$ is unbiased if
$$E_\theta[\delta(X)] = g(\theta) \text{ for all } \theta \in \Omega. \qquad (1.1)$$

When used repeatedly, an unbiased estimator in the long run will estimate the right value "on the average." This is an attractive feature, but insistence on unbiasedness can lead to problems. To begin with, unbiased estimators of $g$ may not exist.

Example 1.2 Nonexistence of unbiased estimator. Let $X$ be distributed according to the binomial distribution $b(p, n)$ and suppose that $g(p) = 1/p$. Then, unbiasedness of an estimator $\delta$ requires
$$\sum_{k=0}^n \delta(k)\binom{n}{k}p^kq^{n-k} = g(p) \text{ for all } 0 < p < 1. \qquad (1.2)$$
That no such $\delta$ exists can be seen, for example, from the fact that as $p \to 0$, the left side tends to $\delta(0)$ and the right side to $\infty$. Yet, estimators of $1/p$ exist which (for $n$ not too small) are close to $1/p$ with high probability. For example, since $X/n$ tends to be close to $p$, $n/X$ (with some adjustment when $X = 0$) will tend to be close to $1/p$. ∥
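A small simulation illustrates the closing remark of Example 1.2. In this Python sketch (the adjustment $\max(X, 1)$ at $X = 0$ is one arbitrary choice, and all parameter values are illustrative), $n/X$ lands close to $1/p$ with high probability even though no unbiased estimator exists:

```python
import numpy as np

# Example 1.2: estimating 1/p by n/X, with an ad hoc adjustment at X = 0.
rng = np.random.default_rng(1)
n, p = 100, 0.3
X = rng.binomial(n, p, size=50000)
est = n / np.maximum(X, 1)
print(est.mean(), 1 / p)   # close to 1/p = 3.33, though never exactly unbiased
```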
If there exists an unbiased estimator of $g$, the estimand $g$ will be called U-estimable. (Some authors call such an estimand "estimable," but this conveys the false impression that any $g$ not possessing this property cannot be accurately estimated.) Even when $g$ is U-estimable, there is no guarantee that any of its unbiased estimators is desirable in other ways, and one may instead still prefer to use an estimator that does have some bias. On the other hand, a large bias is usually considered a drawback, and special methods of bias reduction have been developed for such cases.

Example 1.3 The jackknife. A general method for bias reduction was initiated by Quenouille (1949, 1956) and later named the jackknife by Tukey (1958). Let $T(\mathbf{x})$ be an estimator of a parameter $\tau(\theta)$ based on a sample $\mathbf{x} = (x_1, \ldots, x_n)$ and satisfying $E[T(\mathbf{x})] = \tau(\theta) + O(\frac{1}{n})$. Define $\mathbf{x}_{(-i)}$ to be the vector of sample values excluding $x_i$. Then, the jackknifed version of $T(\mathbf{x})$ is
$$T_J(\mathbf{x}) = nT(\mathbf{x}) - \frac{n-1}{n}\sum_{i=1}^n T(\mathbf{x}_{(-i)}). \qquad (1.3)$$
It can be shown that $E[T_J(\mathbf{x})] = \tau(\theta) + O(\frac{1}{n^2})$, so the bias has been reduced (Stuart and Ord 1991, Section 17.10; see also Problem 1.4). ∥

Although unbiasedness is an attractive condition, after a best unbiased estimator has been found, its performance should be investigated and the possibility not ruled out that a slightly biased estimator with much smaller risk might exist (see, for example, Sections 5.5 and 5.6).

The motive for introducing unbiasedness was the hope that within the class of unbiased estimators, there would exist an estimator with uniformly minimum risk. In the search for such an estimator, a natural approach is to minimize the risk for some particular value $\theta_0$ and then see whether the result is independent of $\theta_0$. To this end, the following obvious characterization of the totality of unbiased estimators is useful.

Lemma 1.4 If $\delta_0$ is any unbiased estimator of $g(\theta)$, the totality of unbiased estimators is given by $\delta = \delta_0 - U$, where $U$ is any unbiased estimator of zero, that is, it satisfies
$$E_\theta(U) = 0 \text{ for all } \theta \in \Omega.$$

To illustrate this approach, suppose the loss function is squared error. The risk of an unbiased estimator $\delta$ is then just the variance of $\delta$. Restricting attention to estimators $\delta_0$, $\delta$, and $U$ with finite variance, we have, if $\delta_0$ is unbiased,
$$\operatorname{var}(\delta) = \operatorname{var}(\delta_0 - U) = E(\delta_0 - U)^2 - [g(\theta)]^2,$$
so that the variance of $\delta$ is minimized by minimizing $E(\delta_0 - U)^2$.

Example 1.5 Locally best unbiased estimation. Let $X$ take on the values $-1, 0, 1, \ldots$ with probabilities (Problem 1.1)
$$P(X = -1) = p, \quad P(X = k) = q^2p^k, \quad k = 0, 1, \ldots, \qquad (1.4)$$
where $0 < p < 1$ and $q = 1 - p$, and consider the problems of estimating (a) $p$ and (b) $q^2$. Simple unbiased estimators of $p$ and $q^2$ are, respectively,
$$\delta_0 = \begin{cases} 1 & \text{if } X = -1 \\ 0 & \text{otherwise} \end{cases} \qquad\text{and}\qquad \delta_1 = \begin{cases} 1 & \text{if } X = 0 \\ 0 & \text{otherwise.} \end{cases}$$
It is easily checked that $U$ is an unbiased estimator of zero if and only if [Problem 1.1(b)]
$$U(k) = -kU(-1) \text{ for } k = 0, 1, \ldots \qquad (1.5)$$
or, equivalently, if $U(k) = ak$ for all $k = -1, 0, 1, \ldots$ and some $a$. The problem of determining the unbiased estimator which minimizes the variance at $p_0$ thus reduces to that of determining the value of $a$ which minimizes
$$\sum P(X = k)[\delta_i(k) - ak]^2. \qquad (1.6)$$
The minimizing values of $a$ are (Problem 1.2)
$$a_0^* = -p_0\Big/\left[p_0 + q_0^2\sum_{k=1}^\infty k^2p_0^k\right] \qquad\text{and}\qquad a_1^* = 0$$
in cases (a) and (b), respectively. Since $a_1^*$ does not depend on $p_0$, the estimator $\delta_1^* = \delta_1 - a_1^*X = \delta_1$ minimizes the variance among all unbiased estimators not only when $p = p_0$ but for all values of $p$. On the other hand, $\delta_0^* = \delta_0 - a_0^*X$ does depend on $p_0$, and it therefore only minimizes the variance at $p = p_0$. ∥
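Returning to Example 1.3, the effect of (1.3) can be seen concretely. In the following Python sketch (illustrative, not part of the text), jackknifing the plug-in variance estimator $\frac{1}{n}\sum(x_i - \bar{x})^2$, whose bias is $-\sigma^2/n = O(1/n)$, reproduces the unbiased sample variance exactly:

```python
import numpy as np

def jackknife(T, x):
    # T_J(x) = n T(x) - ((n-1)/n) * sum_i T(x_(-i)), as in (1.3)
    n = len(x)
    loo = [T(np.delete(x, i)) for i in range(n)]   # leave-one-out values
    return n * T(x) - (n - 1) / n * sum(loo)

rng = np.random.default_rng(2)
x = rng.normal(size=25)
plugin = lambda v: np.mean((v - v.mean())**2)      # bias -sigma^2/n
print(jackknife(plugin, x), np.var(x, ddof=1))     # agree exactly here
```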
The properties possessed by $\delta_0^*$ and $\delta_1^*$ are characterized more generally by the following definition.

Definition 1.6 An unbiased estimator $\delta(x)$ of $g(\theta)$ is the uniform minimum variance unbiased (UMVU) estimator of $g(\theta)$ if
$$\operatorname{var}_\theta\delta(x) \leq \operatorname{var}_\theta\delta'(x) \text{ for all } \theta \in \Omega,$$
where $\delta'(x)$ is any other unbiased estimator of $g(\theta)$. The estimator $\delta(x)$ is locally minimum variance unbiased (LMVU) at $\theta = \theta_0$ if $\operatorname{var}_{\theta_0}\delta(x) \leq \operatorname{var}_{\theta_0}\delta'(x)$ for any other unbiased estimator $\delta'(x)$.

In terms of Definition 1.6, we have shown in Example 1.5 that $\delta_1^*$ is UMVU and that $\delta_0^*$ is LMVU. Since $\delta_0^*$ depends on $p_0$, no UMVU estimator exists in this case. Notice that the definition refers to "the" UMVU estimator, since UMVU estimators are unique (see Problem 1.12).

The existence, uniqueness, and characterization of LMVU estimators have been investigated by Barankin (1949) and Stein (1950). Interpreting $E(\delta_0 - U)^2$ as the distance between $\delta_0$ and $U$, the minimizing $U^*$ can be interpreted as the projection of $\delta_0$ onto the linear space $\mathcal{U}$ formed by the unbiased estimators $U$ of zero. The desired results then follow from the projection theorem of linear space theory (see, for example, Bahadur 1957, and Luenberger 1969).

The relationship of unbiased estimators of $g(\theta)$ with unbiased estimators of zero can be helpful in characterizing and determining UMVU estimators when they exist. Note that if $\delta(X)$ is an unbiased estimator of $g(\theta)$, then so is $\delta(X) + aU(X)$ for any constant $a$ and any unbiased estimator $U$ of zero, and that
$$\operatorname{var}_{\theta_0}[\delta(X) + aU(X)] = \operatorname{var}_{\theta_0}\delta(X) + a^2\operatorname{var}_{\theta_0}U(X) + 2a\operatorname{cov}_{\theta_0}(U(X), \delta(X)).$$
If $\operatorname{cov}_\theta(U(X), \delta(X)) \neq 0$ for some $\theta = \theta_0$, we shall show below that there exists a value of $a$ for which $\operatorname{var}_{\theta_0}[\delta(X) + aU(X)] < \operatorname{var}_{\theta_0}\delta(X)$. As a result, the covariance with unbiased estimators of zero is the key to characterizing the situations in which a UMVU estimator exists. In the statement of the following theorem, attention will be restricted to estimators with finite variance, since otherwise the problem of minimizing the variance does not arise. The class of estimators $\delta$ with $E_\theta\delta^2 < \infty$ for all $\theta$ will be denoted by $W$.

Theorem 1.7 Let $X$ have distribution $P_\theta$, $\theta \in \Omega$, let $\delta$ be an estimator in $W$, and let $\mathcal{U}$ denote the set of all unbiased estimators of zero which are in $W$. Then, a necessary and sufficient condition for $\delta$ to be a UMVU estimator of its expectation $g(\theta)$ is that
$$E_\theta(\delta U) = 0 \text{ for all } U \in \mathcal{U} \text{ and all } \theta \in \Omega. \qquad (1.7)$$
(Note: Since $E_\theta(U) = 0$ for all $U \in \mathcal{U}$, it follows that $E_\theta(\delta U) = \operatorname{cov}_\theta(\delta, U)$, so that (1.7) is equivalent to the condition that $\delta$ is uncorrelated with every $U \in \mathcal{U}$.)

Proof.
(a) Necessity. Suppose $\delta$ is UMVU for estimating its expectation $g(\theta)$. Fix $U \in \mathcal{U}$, $\theta \in \Omega$, and for arbitrary real $\lambda$, let $\delta' = \delta + \lambda U$. Then, $\delta'$ is also an unbiased estimator of $g(\theta)$, so that $\operatorname{var}_\theta(\delta + \lambda U) \geq \operatorname{var}_\theta(\delta)$ for all $\lambda$. Expanding the left side, we see that
$$\lambda^2\operatorname{var}_\theta U + 2\lambda\operatorname{cov}_\theta(\delta, U) \geq 0 \text{ for all } \lambda,$$
a quadratic in $\lambda$ with real roots $\lambda = 0$ and $\lambda = -2\operatorname{cov}_\theta(\delta, U)/\operatorname{var}_\theta(U)$. It will therefore take on negative values unless $\operatorname{cov}_\theta(\delta, U) = 0$.
(b) Sufficiency. Suppose $E_\theta(\delta U) = 0$ for all $U \in \mathcal{U}$. To show that $\delta$ is UMVU, let $\delta'$ be any unbiased estimator of $E_\theta(\delta)$. If $\operatorname{var}_\theta\delta' = \infty$, there is nothing to prove, so assume $\operatorname{var}_\theta\delta' < \infty$. Then, $\delta - \delta' \in \mathcal{U}$ (Problem 1.8), so that $E_\theta[\delta(\delta - \delta')] = 0$ and hence $E_\theta(\delta^2) = E_\theta(\delta\delta')$. Since $\delta$ and $\delta'$ have the same expectation, $\operatorname{var}_\theta\delta = \operatorname{cov}_\theta(\delta, \delta')$, and from the covariance inequality (Problem 1.5), we conclude that $\operatorname{var}_\theta(\delta) \leq \operatorname{var}_\theta(\delta')$. ✷
The proof of Theorem 1.7 shows that condition (1.7), if required only for $\theta = \theta_0$, is necessary and sufficient for an estimator $\delta$ with $E_{\theta_0}(\delta^2) < \infty$ to be LMVU at $\theta_0$. This result also follows from the characterization of the LMVU estimator as $\delta = \delta_0 - U^*$, where $\delta_0$ is any unbiased estimator of $g$ and $U^*$ is the projection of $\delta_0$ onto $\mathcal{U}$. Interpreting the equation $E_{\theta_0}(\delta U) = 0$ as orthogonality of $\delta$ and $U$, the projection $U^*$ has the property that $\delta = \delta_0 - U^*$ is orthogonal to $\mathcal{U}$, that is, $E_{\theta_0}(\delta U) = 0$ for all $U \in \mathcal{U}$. If the estimator is to be UMVU, this relation must hold for all $\theta$.

Example 1.8 Continuation of Example 1.5. As an application of Theorem 1.7, let us determine the totality of UMVU estimators in Example 1.5. In view of (1.5) and (1.7), a necessary and sufficient condition for $\delta$ to be UMVU for its expectation is
$$E_p(\delta X) = 0 \text{ for all } p, \qquad (1.8)$$
that is, for $\delta X$ to be in $\mathcal{U}$ and hence to satisfy (1.5). This condition reduces to
$$k\delta(k) = k\delta(-1) \text{ for } k = 0, 1, 2, \ldots,$$
which is satisfied provided
$$\delta(k) = \delta(-1) \text{ for } k = 1, 2, \ldots \qquad (1.9)$$
with $\delta(0)$ being arbitrary. If we put $\delta(-1) = a$, $\delta(0) = b$, the expectation of such a $\delta$ is $g(p) = bq^2 + a(1 - q^2)$, and $g(p)$ is therefore seen to possess a UMVU estimator with finite variance if and only if it is of the form $a + cq^2$. ∥

It is interesting to note, although we shall not prove it here, that Theorem 1.7 typically, but not always, holds not only for squared error but for general convex loss functions. This result follows from a theorem of Bahadur (1957). For details, see Padmanabhan (1970) and Linnik and Rukhin (1971).

Constants are always UMVU estimators of their expectations since the variance of a constant is zero. (If $\delta$ is a constant, (1.7) is of course trivially satisfied.) Deleting the constants from consideration, three possibilities remain concerning the set of UMVU estimators.

Case 1. No nonconstant U-estimable function has a UMVU estimator.

Example 1.9 Nonexistence of UMVU estimator. Let $X_1, \ldots, X_n$ be a sample from a discrete distribution which assigns probability $1/3$ to each of the points $\theta - 1$, $\theta$, $\theta + 1$, and let $\theta$ range over the integers. Then, no nonconstant function of $\theta$ has a UMVU estimator (Problem 1.9). A continuous version of this example is provided by a sample from the uniform distribution $U(\theta - 1/2, \theta + 1/2)$; see Lehmann and Scheffé (1950, 1955, 1956). (For additional examples, see Section 2.3.) ∥

Case 2. Some, but not all, nonconstant U-estimable functions have UMVU estimators. Example 1.5 provides an instance of this possibility.

Case 3. Every U-estimable function has a UMVU estimator.

A condition for this to be the case is suggested by the Rao-Blackwell theorem (Theorem 1.7.8). If $T$ is a sufficient statistic for the family $\mathcal{P} = \{P_\theta, \theta \in \Omega\}$ and $g(\theta)$ is U-estimable, then any unbiased estimator $\delta$ of $g(\theta)$ which is not a function of $T$ is improved by its conditional expectation given $T$, say $\eta(T)$. Furthermore, $\eta(T)$ is again an unbiased estimator of $g(\theta)$ since, by (6.1), $E_\theta[\eta(T)] = E_\theta[\delta(X)]$.

Lemma 1.10 Let $X$ be distributed according to a distribution from $\mathcal{P} = \{P_\theta, \theta \in \Omega\}$, and let $T$ be a complete sufficient statistic for $\mathcal{P}$. Then, every U-estimable function $g(\theta)$ has one and only one unbiased estimator that is a function of $T$. (Here, uniqueness, of course, means that any two such functions agree a.e. $\mathcal{P}$.)

Proof. That such an unbiased estimator exists was established just preceding the statement of Lemma 1.10.
If $\delta_1$ and $\delta_2$ are two unbiased estimators of $g(\theta)$, their difference $f(T) = \delta_1(T) - \delta_2(T)$ satisfies
$$E_\theta f(T) = 0 \text{ for all } \theta \in \Omega,$$
and hence, by the completeness of $T$, $\delta_1(T) = \delta_2(T)$ a.e. $\mathcal{P}$, as was to be proved. ✷

So far, attention has been restricted to squared error loss. However, the Rao-Blackwell theorem applies to any convex loss function, and the preceding argument therefore establishes the following result.

Theorem 1.11 Let $X$ be distributed according to a distribution in $\mathcal{P} = \{P_\theta, \theta \in \Omega\}$, and suppose that $T$ is a complete sufficient statistic for $\mathcal{P}$.
(a) For every U-estimable function $g(\theta)$, there exists an unbiased estimator that uniformly minimizes the risk for any loss function $L(\theta, d)$ which is convex in its second argument; therefore, this estimator in particular is UMVU.
(b) The UMVU estimator of (a) is the unique unbiased estimator which is a function of $T$; it is the unique unbiased estimator with minimum risk, provided its risk is finite and $L$ is strictly convex in $d$.

It is interesting to note that under mild conditions, the existence of a complete sufficient statistic is not only sufficient but also necessary for Case 3. This result, which is due to Bahadur (1957), will not be proved here.

Corollary 1.12 If $\mathcal{P}$ is an exponential family of full rank given by (5.1), then the conclusions of Theorem 1.11 hold with $\theta = (\theta_1, \ldots, \theta_s)$ and $T = (T_1, \ldots, T_s)$.

Proof. This follows immediately from Theorem 1.6.22. ✷

Theorem 1.11 and its corollary provide best unbiased estimators for large classes of problems, some of which will be discussed in the next three sections. For the sake of simplicity, these estimators will be referred to as being UMVU, but it should be kept in mind that their optimality is not tied to squared error as loss; in fact, they minimize the risk for any convex loss function.

Sometimes we happen to know an unbiased estimator $\delta$ of $g(\theta)$ which is a function of a complete sufficient statistic. The theorem then states that $\delta$ is UMVU. Suppose, for example, that $X_1, \ldots, X_n$ are iid according to $N(\xi, \sigma^2)$ and that the estimand is $\sigma^2$. The standard unbiased estimator of $\sigma^2$ is then $\delta = \sum(X_i - \bar{X})^2/(n-1)$. Since this is a function of the complete sufficient statistic $T = (\sum X_i, \sum(X_i - \bar{X})^2)$, $\delta$ is UMVU.

Barring such fortunate accidents, two systematic methods are available for deriving UMVU estimators through Theorem 1.11.

Method One: Solving for δ. If $T$ is a complete sufficient statistic, the UMVU estimator of any U-estimable function $g(\theta)$ is uniquely determined by the set of equations
$$E_\theta\delta(T) = g(\theta) \text{ for all } \theta \in \Omega. \qquad (1.10)$$

Example 1.13 Binomial UMVU estimator. Suppose that $T$ has the binomial distribution $b(p, n)$ and that $g(p) = pq$. Then, (1.10) becomes
$$\sum_{t=0}^n \binom{n}{t}\delta(t)p^tq^{n-t} = pq \text{ for all } 0 < p < 1. \qquad (1.11)$$
If $\rho = p/q$, so that $p = \rho/(1+\rho)$ and $q = 1/(1+\rho)$, (1.11) can be rewritten as
$$\sum_{t=0}^n \binom{n}{t}\delta(t)\rho^t = \rho(1+\rho)^{n-2} = \sum_{t=1}^{n-1}\binom{n-2}{t-1}\rho^t \quad (0 < \rho < \infty).$$
A comparison of the coefficients on the left and right sides leads to
$$\delta(t) = \frac{t(n-t)}{n(n-1)}. \qquad ∥$$
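The result of Example 1.13 can be confirmed by summing against the binomial probabilities. A Python sketch (parameter values are arbitrary):

```python
from math import comb

# Example 1.13: delta(t) = t(n - t)/(n(n - 1)) is unbiased for pq, T ~ b(p, n).
n = 7
for p in [0.2, 0.5, 0.9]:
    q = 1 - p
    mean = sum(comb(n, t) * p**t * q**(n - t) * t * (n - t) / (n * (n - 1))
               for t in range(n + 1))
    print(mean, p * q)   # equal up to floating-point error
```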
Method Two: Conditioning. If $\delta(X)$ is any unbiased estimator of $g(\theta)$, it follows from Theorem 1.11 that the UMVU estimator can be obtained as the conditional expectation of $\delta(X)$ given $T$. For this derivation, it does not matter which unbiased estimator $\delta$ is being conditioned; one can thus choose $\delta$ so as to make the calculation of $\delta'(T) = E[\delta(X)|T]$ as easy as possible.

Example 1.14 UMVU estimator for a uniform distribution. Suppose that $X_1, \ldots, X_n$ are iid according to the uniform distribution $U(0, \theta)$ and that $g(\theta) = \theta/2$. Then, $T = X_{(n)}$, the largest of the $X$'s, is a complete sufficient statistic. Since $E(X_1) = \theta/2$, the UMVU estimator of $\theta/2$ is $E[X_1|X_{(n)} = t]$. If $X_{(n)} = t$, then $X_1 = t$ with probability $1/n$, and $X_1$ is uniformly distributed on $(0, t)$ with the remaining probability $(n-1)/n$ (see Problem 1.6.2). Hence,
$$E[X_1|t] = \frac{1}{n}\cdot t + \frac{n-1}{n}\cdot\frac{t}{2} = \frac{n+1}{n}\cdot\frac{t}{2}.$$
Thus, $[(n+1)/n]\cdot T/2$ and $[(n+1)/n]\cdot T$ are the UMVU estimators of $\theta/2$ and $\theta$, respectively. ∥

The existence of UMVU estimators under the assumptions of Theorem 1.11 was proved there for convex loss functions. That the situation tends to be very different without convexity of the loss is seen from the following result of Basu (1955a).

Theorem 1.15 Let the loss function $L(\theta, d)$ for estimating $g(\theta)$ be bounded, say $L(\theta, d) \leq M$, and assume that $L[\theta, g(\theta)] = 0$ for all $\theta$, that is, the loss is zero when the estimated value coincides with the true value. Suppose that $g$ is U-estimable and let $\theta_0$ be an arbitrary value of $\theta$. Then, there exists a sequence of unbiased estimators $\delta_n$ for which $R(\theta_0, \delta_n) \to 0$.

Proof. Since $g(\theta)$ is U-estimable, there exists an unbiased estimator $\delta(X)$. For any $0 < \pi < 1$, let
$$\delta'_\pi(x) = \begin{cases} g(\theta_0) & \text{with probability } 1 - \pi \\ \frac{1}{\pi}[\delta(x) - g(\theta_0)] + g(\theta_0) & \text{with probability } \pi. \end{cases}$$
Then, $\delta'_\pi$ is unbiased for all $\pi$ and all $\theta$, since
$$E_\theta(\delta'_\pi) = (1-\pi)g(\theta_0) + \frac{\pi}{\pi}[g(\theta) - g(\theta_0)] + \pi g(\theta_0) = g(\theta).$$
The risk $R(\theta_0, \delta'_\pi)$ at $\theta_0$ is $(1-\pi)\cdot 0$ plus $\pi$ times the expected loss of $(1/\pi)[\delta(X) - g(\theta_0)] + g(\theta_0)$, so that $R(\theta_0, \delta'_\pi) \leq \pi M$. As $\pi \to 0$, it is seen that $R(\theta_0, \delta'_\pi) \to 0$. ✷

This result implies that for bounded loss functions, no uniformly minimum-risk-unbiased or even locally minimum-risk-unbiased estimator exists except in trivial cases, since at each $\theta_0$, the risk can be made arbitrarily small even by unbiased estimators. [Basu (1955a) proved this fact for a more general class of nonconvex loss functions.]

The proof lends support to the speculation of Section 1.7 that the difficulty with nonconvex loss functions stems from the possibility of arbitrarily large errors, since as $\pi \to 0$, the error $|\delta_\pi(x) - g(\theta_0)| \to \infty$. It is the leverage of these large but relatively inexpensive errors which nullifies the restraining effect of unbiasedness.

This argument applies not only to the limiting case of unbounded errors but also, although to a correspondingly lesser degree, to the case of finite large errors. In the latter situation, convex loss functions receive support from a large-sample consideration. To fix ideas, suppose the observations consist of $n$ iid variables $X_1, \ldots, X_n$. As $n$ increases, the error in estimating a given value $g(\theta)$ will decrease and tend to zero as $n \to \infty$. (See Section 1.8 for a precise statement.) Thus, essentially only the local behavior of the loss function near the true value $g(\theta)$ is relevant. If the loss function is smooth, its Taylor expansion about $d = g(\theta)$ gives
$$L(\theta, d) = a(\theta) + b(\theta)[d - g(\theta)] + c(\theta)[d - g(\theta)]^2 + R,$$
where the remainder $R$ becomes negligible as the error $|d - g(\theta)|$ becomes sufficiently small. If the loss is zero when $d = g(\theta)$, then $a$ must be zero, so that $b(\theta)[d - g(\theta)]$ becomes the dominating term for small errors. The condition $L(\theta, d) \geq 0$ for all $\theta$ then implies $b(\theta) = 0$ and hence
$$L(\theta, d) = c(\theta)[d - g(\theta)]^2 + R.$$
Minimizing the risk for large $n$ thus becomes essentially equivalent to minimizing $E[\delta(X) - g(\theta)]^2$, which justifies not only a convex loss function but even squared error.
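As a numerical complement to Example 1.14, the following Python simulation sketch (sample size, $\theta$, and seed are arbitrary choices) shows the downward bias of the MLE $X_{(n)}$ and the unbiasedness of $[(n+1)/n]X_{(n)}$:

```python
import numpy as np

# Example 1.14: [(n+1)/n] * X_(n) is unbiased for theta; X_(n) itself is not.
rng = np.random.default_rng(3)
theta, n, reps = 5.0, 10, 100000
xmax = rng.uniform(0, theta, size=(reps, n)).max(axis=1)
print(xmax.mean(), ((n + 1) / n * xmax).mean(), theta)  # biased vs ~ 5.0
```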
Not only the loss function but also other important aspects of the behavior of estimators and the comparison of different estimators greatly simplify for large samples, as will be discussed in Chapter 6.

The difficulty which bounded loss functions present for the theory of unbiased estimation is not encountered by a different unbiasedness concept, that of median unbiasedness mentioned in Section 1.1. For estimating $g(\theta)$ in a multiparameter exponential family, it turns out that uniformly minimum risk median unbiased estimators exist for any loss function $L$ for which $L(\theta, d)$ is a nondecreasing function of $d$ as $d$ moves in either direction away from $g(\theta)$. A detailed version of this result can be found in Pfanzagl (1979). We shall not discuss the theory of median unbiased estimation here since the methods required belong to the theory of confidence intervals rather than that of point estimation (see TSH2, Section 3.5).

2 Continuous One- and Two-Sample Problems

The problem of estimating an unknown quantity $\theta$ from $n$ measurements of $\theta$ was considered in Example 1.1.1 as the prototype of an estimation problem. It was formalized by assuming that the $n$ measurements are iid random variables $X_1, \ldots, X_n$ with common distribution belonging to the location family
$$P_\theta(X_i \leq x) = F(x - \theta). \qquad (2.1)$$
The problem takes different forms according to the assumptions made about $F$. Some possibilities are the following:
(a) $F$ is completely specified.
(b) $F$ is specified except for an unknown scale parameter. In this case, (2.1) will be replaced by a location-scale family. It will then be convenient to denote the location parameter by $\xi$ rather than $\theta$ (to reserve $\theta$ for the totality of unknown parameters) and hence to write the family as
$$P_\theta(X_i \leq x) = F\left(\frac{x - \xi}{\sigma}\right). \qquad (2.2)$$
Here, it will be of interest to estimate both $\xi$ and $\sigma$.
(c) The distribution of the $X$'s is only approximately given by Equation (2.1) or (2.2) with a specified $F$. What is meant by "approximately" leads to the topic of robust estimation.
(d) $F$ is known to be symmetric about 0 (so that the $X$'s are symmetrically distributed about $\theta$ or $\xi$) but is otherwise unknown.
(e) $F$ is unknown except that it has finite variance; the estimand is $\xi = E(X_i)$.
In all these models, $F$ is assumed to be continuous.

A treatment of Problems (a) and (b) for an arbitrary known $F$ is given in Chapter 3 from the point of view of equivariance. In the present section, we shall be concerned with unbiased estimation of $\theta$ or $(\xi, \sigma)$ in Problems (a) and (b), and some of their generalizations, for some special distributions, particularly for the case that $F$ is normal or exponential. Problems (c), (d), and (e) all fall under the general heading of robust and nonparametric statistics (Huber 1981; Hampel et al. 1986; Staudte and Sheather 1990). We will not attempt a systematic treatment of these topics here, but will touch upon some points through examples. For example, Problem (e) will be considered in Section 2.4.

The following three examples are concerned with the normal one-sample problem, that is, with estimation problems arising when $X_1, \ldots, X_n$ are distributed with joint density (2.3).

Example 2.1 Estimating polynomials of a normal variance. Let $X_1, \ldots, X_n$ be distributed with joint density
$$\frac{1}{(\sqrt{2\pi}\sigma)^n}\exp\left[-\frac{1}{2\sigma^2}\sum(x_i - \xi)^2\right], \qquad (2.3)$$
and assume, to begin with, that only one of the parameters is unknown.
If $\sigma$ is known, it follows from Theorem 1.6.22 that the sample mean $\bar{X}$ is a complete sufficient statistic, and since $E(\bar{X}) = \xi$, $\bar{X}$ is the UMVU estimator of $\xi$. More generally, if $g(\xi)$ is any U-estimable function of $\xi$, there exists a unique unbiased estimator $\delta(\bar{X})$ based on $\bar{X}$, and it is UMVU. If, in particular, $g(\xi)$ is a polynomial of degree $r$, $\delta(\bar{X})$ will also be a polynomial of that degree, which can be determined inductively for $r = 2, 3, \ldots$ (Problem 2.1).

If $\xi$ is known, (2.3) is a one-parameter exponential family with $S^2 = \sum(X_i - \xi)^2$ being a complete sufficient statistic. Since $Y = S^2/\sigma^2$ is distributed as $\chi^2_n$ independently of $\sigma^2$, it follows that
$$E\left(\frac{S^r}{\sigma^r}\right) = \frac{1}{K_{n,r}},$$
where $K_{n,r}$ is a constant, and hence that
$$K_{n,r}S^r \qquad (2.4)$$
is UMVU for $\sigma^r$. Recall from Example 1.5.14, with $a = n/2$, $b = 2$, and with $r/2$ in place of $r$, that
$$E\left(\frac{S^r}{\sigma^r}\right) = E\left[(\chi^2_n)^{r/2}\right] = \frac{\Gamma[(n+r)/2]}{\Gamma(n/2)}2^{r/2},$$
so that
$$K_{n,r} = \frac{\Gamma(n/2)}{2^{r/2}\Gamma[(n+r)/2]}. \qquad (2.5)$$
As a check, note that for $r = 2$, $K_{n,2} = 1/n$, and hence $E(S^2) = n\sigma^2$. Formula (2.5) is established in Example 1.5.14 only for $r > 0$. It is, however, easy to see (Problem 1.5.19) that it holds whenever
$$n > -r, \qquad (2.6)$$
but that the $(r/2)$th moment of $\chi^2_n$ does not exist when $n \leq -r$.

We are now in a position to consider the more realistic case in which both parameters are unknown. Then, by Example 1.6.24, $\bar{X}$ and $S^2 = \sum(X_i - \bar{X})^2$ jointly are complete sufficient statistics for $(\xi, \sigma^2)$. This shows that $\bar{X}$ continues to be UMVU for $\xi$. Since $\operatorname{var}(\bar{X}) = \sigma^2/n$, estimation of $\sigma^2$ is, of course, also of great importance. Now, $S^2/\sigma^2$ is distributed as $\chi^2_{n-1}$, and it follows from (2.4), with $n$ replaced by $n - 1$ and the new definition of $S^2$, that
$$K_{n-1,r}S^r \qquad (2.7)$$
is UMVU for $\sigma^r$ provided $n > -r + 1$; thus, in particular, $S^2/(n-1)$ is UMVU for $\sigma^2$.

Sometimes, it is of interest to measure $\xi$ in $\sigma$-units and hence to estimate $g(\xi, \sigma) = \xi/\sigma$. Now, $\bar{X}$ is UMVU for $\xi$ and $K_{n-1,-1}/S$ for $1/\sigma$. Since $\bar{X}$ and $S$ are independent, it follows that $K_{n-1,-1}\bar{X}/S$ is unbiased for $\xi/\sigma$ and hence UMVU, provided $n - 1 > 1$, that is, $n > 2$.

If we next consider calculating the variance of $K_{n-1,-1}\bar{X}/S$ or, more generally, calculating the variance of UMVU estimators of polynomial functions of $\xi$ and $\sigma$, we are led to calculating the moments $E(\bar{X}^k)$ and $E(S^k)$ for all $k = 1, 2, \ldots$. This is investigated in Problems 2.4-2.6. ∥
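A simulation sketch of (2.4) and (2.5) in Python (the case of known $\xi = 0$ and arbitrary parameter values is assumed; the log-gamma function is used for numerical stability):

```python
import numpy as np
from scipy.special import gammaln

# Example 2.1: K_{n,r} of (2.5) makes K_{n,r} S^r unbiased for sigma^r,
# where S^2 = sum (X_i - xi)^2 and xi = 0 is known.
def K(n, r):
    return np.exp(gammaln(n / 2) - gammaln((n + r) / 2) - (r / 2) * np.log(2))

rng = np.random.default_rng(4)
n, r, sigma = 8, 3, 2.0
S = np.sqrt((rng.normal(0, sigma, size=(200000, n))**2).sum(axis=1))
print((K(n, r) * S**r).mean(), sigma**r)   # both ~ 8.0
```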
An unbiased estimator $\delta$ of $p$ is the indicator of the event $X_1 \leq u$. Since $\bar{X}$ is a complete sufficient statistic, the UMVU estimator of $p = P(X_1 \leq u) = \Phi(u - \xi)$ is therefore
$$E[\delta|\bar{X}] = P[X_1 \leq u|\bar{X}].$$
To evaluate this probability, use the fact that $X_1 - \bar{X}$ is independent of $\bar{X}$. This follows from Basu's theorem (Theorem 1.6.21) since $X_1 - \bar{X}$ is ancillary. (Such applications of Basu's theorem can be simplified when invariance is present; the theory and some interesting illustrations are discussed by Eaton and Morris 1970.) Hence,
$$P[X_1 \leq u|\bar{x}] = P[X_1 - \bar{X} \leq u - \bar{x}|\bar{x}] = P[X_1 - \bar{X} \leq u - \bar{x}],$$
and the computation of a conditional probability has been replaced by that of an unconditional one. Now, $X_1 - \bar{X}$ is distributed as $N(0, (n-1)/n)$, so that
$$P[X_1 - \bar{X} \leq u - \bar{x}] = \Phi\left(\sqrt{\frac{n}{n-1}}(u - \bar{x})\right), \qquad (2.11)$$
which is the UMVU estimator of $p$.

Closely related to the problem of estimating $p$, which is the cdf $F(u) = P[X_1 \leq u] = \Phi(u - \xi)$ of $X_1$ evaluated at $u$, is that of estimating the probability density at $u$: $g(\xi) = \phi(u - \xi)$. We shall now show that the UMVU estimator of the probability density $g(\xi) = p^{X_1}_\xi(u)$ of $X_1$ evaluated at $u$ is the conditional density of $X_1$ given $\bar{X}$ evaluated at $u$, $\delta(\bar{X}) = p^{X_1|\bar{X}}(u)$. Since this is a function of $\bar{X}$, it is only necessary to check that $\delta$ is unbiased. This can be shown by differentiating the UMVU estimator of the cdf after justifying the required interchange of differentiation and integration, or as follows. Note that the joint density of $X_1$ and $\bar{X}$ is $p^{X_1|\bar{X}}(u)p^{\bar{X}}_\xi(x)$ and that the marginal density is therefore
$$p^{X_1}_\xi(u) = \int_{-\infty}^\infty p^{X_1|\bar{X}}(u)\,p^{\bar{X}}_\xi(x)\,dx.$$
This equation states precisely that $\delta(\bar{X})$ is an unbiased estimator of $g(\xi)$. Differentiating (2.11) with respect to $u$, we see that the derivative
$$\frac{d}{du}P[X_1 \leq u|\bar{X}] = p^{X_1|\bar{X}}(u) = \sqrt{\frac{n}{n-1}}\,\phi\left(\sqrt{\frac{n}{n-1}}(u - \bar{X})\right)$$
(where $\phi$ is the standard normal density) is the UMVU estimator of $p^{X_1}_\xi(u)$.

Suppose now that both $\xi$ and $\sigma$ are unknown. Then, exactly as in the case $\sigma = 1$, the UMVU estimators of $P[X_1 \leq u] = \Phi((u - \xi)/\sigma)$ and of the density $p^{X_1}(u) = (1/\sigma)\phi((u - \xi)/\sigma)$ are given, respectively, by $P[X_1 \leq u|\bar{X}, S]$ and the conditional density of $X_1$ given $\bar{X}$ and $S$ evaluated at $u$, where $S^2 = \sum(X_i - \bar{X})^2$. To replace the conditional distribution with an unconditional one, note that $(X_1 - \bar{X})/S$ is ancillary and therefore, by Basu's theorem, independent of $(\bar{X}, S)$. It follows, as in the earlier case, that
$$P[X_1 \leq u|\bar{x}, s] = P\left[\frac{X_1 - \bar{X}}{S} \leq \frac{u - \bar{x}}{s}\right] \qquad (2.12)$$
and that
$$p^{X_1|\bar{x},s}(u) = \frac{1}{s}f\left(\frac{u - \bar{x}}{s}\right), \qquad (2.13)$$
where $f$ is the density of $(X_1 - \bar{X})/S$. A straightforward calculation (Problem 2.10) gives
$$f(z) = \frac{\Gamma\left(\frac{n-1}{2}\right)}{\Gamma\left(\frac{1}{2}\right)\Gamma\left(\frac{n-2}{2}\right)}\sqrt{\frac{n}{n-1}}\left(1 - \frac{nz^2}{n-1}\right)^{(n/2)-2} \quad\text{if } 0 < |z| < \sqrt{\frac{n-1}{n}} \qquad (2.14)$$
and zero elsewhere. The estimator (2.13) is obtained by substitution of (2.14), and the estimator (2.12) is obtained by integrating the density $f$. ∥
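The unbiasedness of (2.11), and the bias of the naive plug-in estimator $\Phi(u - \bar{X})$, can be checked by simulation. A Python sketch (the values of $n$, $\xi$, and $u$ are illustrative):

```python
import numpy as np
from scipy.stats import norm

# Example 2.2 with sigma = 1 known: Phi(sqrt(n/(n-1))(u - xbar)) is unbiased
# for p = Phi(u - xi); the plug-in Phi(u - xbar) is not.
rng = np.random.default_rng(5)
n, xi, u, reps = 4, 0.0, 1.0, 200000
xbar = rng.normal(xi, 1, size=(reps, n)).mean(axis=1)
umvu = norm.cdf(np.sqrt(n / (n - 1)) * (u - xbar))
plug = norm.cdf(u - xbar)
print(umvu.mean(), plug.mean(), norm.cdf(u - xi))   # ~ 0.841 vs biased
```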
We shall next consider two extensions of the normal one-sample model. The first extension is concerned with the two-sample problem, in which there are two independent groups of observations, each with a model of this type, but corresponding to different conditions or representing measurements of two different quantities, so that the parameters of the two models are not the same. The second extension deals with the multivariate situation of $n$ $p$-tuples of observations $(X_{1\nu}, \ldots, X_{p\nu})$, $\nu = 1, \ldots, n$, with $(X_{1\nu}, \ldots, X_{p\nu})$ representing measurements of $p$ different characteristics of the $\nu$th subject.

Example 2.3 The normal two-sample problem. Let $X_1, \ldots, X_m$ and $Y_1, \ldots, Y_n$ be independently distributed according to normal distributions $N(\xi, \sigma^2)$ and $N(\eta, \tau^2)$, respectively.

(a) Suppose that $\xi$, $\eta$, $\sigma$, $\tau$ are completely unknown. Then, the joint density
$$\frac{1}{(\sqrt{2\pi})^{m+n}\sigma^m\tau^n}\exp\left[-\frac{1}{2\sigma^2}\sum(x_i - \xi)^2 - \frac{1}{2\tau^2}\sum(y_j - \eta)^2\right] \qquad (2.15)$$
constitutes an exponential family for which the four statistics
$$\bar{X},\ \bar{Y},\ S^2_X = \sum(X_i - \bar{X})^2,\ S^2_Y = \sum(Y_j - \bar{Y})^2$$
are sufficient and complete. The UMVU estimators of $\xi$ and $\sigma^r$ are therefore $\bar{X}$ and $K_{m-1,r}S^r_X$, as in Example 2.1, and those of $\eta$ and $\tau^r$ are given by the corresponding formulas. In the present model, interest tends to focus on comparing parameters from the two distributions. The UMVU estimator of $\eta - \xi$ is $\bar{Y} - \bar{X}$, and that of $\tau^r/\sigma^r$ is the product of the UMVU estimators of $\tau^r$ and $1/\sigma^r$.

(b) Sometimes, it is possible to assume that $\sigma = \tau$. Then, $\bar{X}$, $\bar{Y}$, and $S^2 = \sum(X_i - \bar{X})^2 + \sum(Y_j - \bar{Y})^2$ are complete sufficient statistics [Problem 1.6.35(a)], and the natural unbiased estimators of $\xi$, $\eta$, $\sigma^r$, $\eta - \xi$, and $(\eta - \xi)/\sigma$ are all UMVU (Problem 2.11).

(c) As a third possibility, suppose that $\eta = \xi$ but that $\sigma$ and $\tau$ are not known to be equal, and that it is desired to estimate the common mean $\xi$. This might arise, for example, when two independent sets of measurements of the same quantity are available. The statistics $T = (\bar{X}, \bar{Y}, S^2_X, S^2_Y)$ are then minimal sufficient (Problem 1.6.7), but they are no longer complete since $E(\bar{Y} - \bar{X}) = 0$. If $\sigma^2/\tau^2 = \gamma$ is known, the best unbiased linear combination of $\bar{X}$ and $\bar{Y}$ is $\delta_\gamma = \alpha\bar{X} + (1 - \alpha)\bar{Y}$, where
$$\alpha = \frac{\tau^2/n}{\sigma^2/m + \tau^2/n}$$
(Problem 2.12). Since, in this case, $T' = (\sum X^2_i + \gamma\sum Y^2_j,\ \sum X_i + \gamma\sum Y_j)$ is a complete sufficient statistic (Problem 2.12) and $\delta_\gamma$ is a function of $T'$, $\delta_\gamma$ is UMVU. When $\sigma^2/\tau^2$ is unknown, a UMVU estimator of $\xi$ does not exist (Problem 2.13), but one can first estimate $\alpha$, and then estimate $\xi$ by $\hat{\xi} = \hat{\alpha}\bar{X} + (1 - \hat{\alpha})\bar{Y}$. It is easy to see that $\hat{\xi}$ is unbiased provided $\hat{\alpha}$ is a function of only $S^2_X$ and $S^2_Y$ (Problem 2.13), for example, if $\sigma^2$ and $\tau^2$ in $\alpha$ are replaced by $S^2_X/(m-1)$ and $S^2_Y/(n-1)$. The problem of finding a good estimator of $\xi$ has been considered by various authors, among them Graybill and Deal (1959), Hogg (1960), Seshadri (1963), Zacks (1966), Brown and Cohen (1974), Cohen and Sackrowitz (1974), Rubin and Weisberg (1975), Rao (1980), Berry (1987), Kubokawa (1987), Loh (1991), and George (1991). ∥

It is interesting to note that the nonexistence of a UMVU estimator holds not only for $\xi$ but for any U-estimable function of $\xi$. This fact, for which no easy proof is available, was established by Unni (1978, 1981) using the results of Kagan and Palamadov (1968).

In cases (a) and (b), the difference $\eta - \xi$ provides one comparison between the distributions of the $X$'s and $Y$'s. An alternative measure of the superiority (if large values of the variables are desirable) of the $Y$'s over the $X$'s is the probability $p = P(X < Y)$. The UMVU estimator of $p$ can be obtained as in Example 2.2 as $P(X_1 < Y_1|\bar{X}, \bar{Y}, S^2_X, S^2_Y)$ and $P(X_1 < Y_1|\bar{X}, \bar{Y}, S^2)$ in cases (a) and (b), respectively (Problem 2.14). In case (c), the problem disappears since then $p = 1/2$.
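For case (c), the following Python simulation sketch (a Graybill-Deal-type weighting; all parameter values are arbitrary) illustrates that $\hat{\xi}$ remains unbiased when $\hat{\alpha}$ depends only on $S^2_X$ and $S^2_Y$:

```python
import numpy as np

# Example 2.3(c): alpha estimated from the sample variances only, so
# xi_hat = alpha*Xbar + (1 - alpha)*Ybar stays unbiased for xi.
rng = np.random.default_rng(6)
xi, sigma, tau, m, n, reps = 1.0, 1.0, 3.0, 8, 12, 100000
X = rng.normal(xi, sigma, size=(reps, m))
Y = rng.normal(xi, tau, size=(reps, n))
vX = X.var(axis=1, ddof=1) / m          # estimates sigma^2/m
vY = Y.var(axis=1, ddof=1) / n          # estimates tau^2/n
alpha = vY / (vX + vY)
print((alpha * X.mean(axis=1) + (1 - alpha) * Y.mean(axis=1)).mean())  # ~ 1.0
```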
Example 2.4 The multivariate normal one-sample problem. Suppose that $(X_i, Y_i, \ldots)$, $i = 1, \ldots, n$, are observations of $p$ characteristics on a random sample of $n$ subjects from a large population, so that the $n$ $p$-vectors can be assumed to be iid. We shall consider the case that their common distribution is a $p$-variate normal distribution (Example 1.4.5) and begin with the case $p = 2$. The joint probability density of the $(X_i, Y_i)$ is then
$$\left(\frac{1}{2\pi\sigma\tau\sqrt{1-\rho^2}}\right)^n \exp\left\{-\frac{1}{2(1-\rho^2)}\left[\frac{1}{\sigma^2}\sum(x_i - \xi)^2 - \frac{2\rho}{\sigma\tau}\sum(x_i - \xi)(y_i - \eta) + \frac{1}{\tau^2}\sum(y_i - \eta)^2\right]\right\} \qquad (2.16)$$
where $E(X_i) = \xi$, $E(Y_i) = \eta$, $\operatorname{var}(X_i) = \sigma^2$, $\operatorname{var}(Y_i) = \tau^2$, and $\operatorname{cov}(X_i, Y_i) = \rho\sigma\tau$, so that $\rho$ is the correlation coefficient between $X_i$ and $Y_i$.

The bivariate family (2.16) constitutes a five-parameter exponential family of full rank, and the set of sufficient statistics
$$T = (\bar{X}, \bar{Y}, S^2_X, S^2_Y, S_{XY}) \quad\text{where}\quad S_{XY} = \sum(X_i - \bar{X})(Y_i - \bar{Y}) \qquad (2.17)$$
is therefore complete. Since the marginal distributions of the $X_i$ and $Y_i$ are $N(\xi, \sigma^2)$ and $N(\eta, \tau^2)$, the UMVU estimators of $\xi$ and $\sigma^2$ are $\bar{X}$ and $S^2_X/(n-1)$, and those of $\eta$ and $\tau^2$ are given by the corresponding formulas. The statistic $S_{XY}/(n-1)$ is an unbiased estimator of $\rho\sigma\tau$ (Problem 2.15) and is therefore the UMVU estimator of $\operatorname{cov}(X_i, Y_i)$.

For the correlation coefficient $\rho$, the natural estimator is the sample correlation coefficient
$$R = S_{XY}\Big/\sqrt{S^2_XS^2_Y}. \qquad (2.18)$$
However, $R$ is not unbiased, since it can be shown [see, for example, Stuart and Ord (1987, Section 16.32)] that
$$E(R) = \rho\left[1 - \frac{1-\rho^2}{2n} + O\left(\frac{1}{n^2}\right)\right]. \qquad (2.19)$$
By implementing Method One of Section 2.1, together with some results from the theory of Laplace transforms, Olkin and Pratt (1958) derived a function $G(R)$ of $R$ which is unbiased and hence UMVU. It is given by
$$G(r) = rF\left(\frac{1}{2}, \frac{1}{2}; \frac{n-1}{2}; 1 - r^2\right),$$
where $F(a, b; c; x)$ is the hypergeometric function
$$F(a, b; c; x) = \sum_{k=0}^\infty \frac{\Gamma(a+k)\Gamma(b+k)\Gamma(c)}{\Gamma(a)\Gamma(b)\Gamma(c+k)}\frac{x^k}{k!} = \frac{\Gamma(c)}{\Gamma(b)\Gamma(c-b)}\int_0^1 \frac{t^{b-1}(1-t)^{c-b-1}}{(1-tx)^a}\,dt.$$
Calculation of $G(r)$ is facilitated by using a computer algebra program. Alternatively, by substituting in the above series expansion, one can derive the approximation
$$G(r) = r\left[1 + \frac{1-r^2}{2(n-1)} + O\left(\frac{1}{n^2}\right)\right],$$
which is quite accurate.

These results extend easily to the general multivariate case. Let us change notation and denote by $(X_{1\nu}, \ldots, X_{p\nu})$, $\nu = 1, \ldots, n$, a sample from a nonsingular $p$-variate normal distribution with means $E(X_{i\nu}) = \xi_i$ and covariances $\operatorname{cov}(X_{j\nu}, X_{k\nu}) = \sigma_{jk}$. Then, the density of the $X$'s is
$$\frac{|\Theta|^{n/2}}{(2\pi)^{pn/2}}\exp\left[-\frac{1}{2}\sum\theta_{jk}S'_{jk}\right] \qquad (2.20)$$
where
$$S'_{jk} = \sum_{\nu=1}^n (X_{j\nu} - \xi_j)(X_{k\nu} - \xi_k) \qquad (2.21)$$
and where $\Theta = (\theta_{jk})$ is the inverse of the covariance matrix $(\sigma_{jk})$. This is a full-rank exponential family, for which the $p + \binom{p}{2} = \frac{1}{2}p(p+1)$ statistics
$$X_{i\cdot} = \sum_\nu X_{i\nu}/n \quad (i = 1, \ldots, p) \quad\text{and}\quad S_{jk} = \sum_\nu (X_{j\nu} - X_{j\cdot})(X_{k\nu} - X_{k\cdot})$$
are complete. Since the marginal distributions of the $X_{j\nu}$ and of the pairs $(X_{j\nu}, X_{k\nu})$ are univariate and bivariate normal, respectively, it follows from Example 2.1 and the earlier part of the present example that $X_{i\cdot}$ is UMVU for $\xi_i$ and $S_{jk}/(n-1)$ for $\sigma_{jk}$. Also, the UMVU estimators of the correlation coefficients $\rho_{jk} = \sigma_{jk}/\sqrt{\sigma_{jj}\sigma_{kk}}$ are just those obtained from the bivariate distributions of the $(X_{j\nu}, X_{k\nu})$.

The UMVU estimator of the square of the multiple correlation coefficient of one of the $p$ coordinates with the other $p - 1$ was obtained by Olkin and Pratt (1958). The problem of estimating a multivariate normal probability density has been treated by Ghurye and Olkin (1969); see also Gatsonis 1984. ∥
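Since $G(r)$ involves a hypergeometric function, a scientific library evaluates it directly. A Python sketch comparing the exact $G(r)$ with the series approximation (the library routine hyp2f1 computes $F(a, b; c; x)$; the value of $n$ is arbitrary):

```python
import numpy as np
from scipy.special import hyp2f1

# Olkin-Pratt unbiased correlation estimate of Example 2.4.
def G(r, n):
    return r * hyp2f1(0.5, 0.5, (n - 1) / 2, 1 - r**2)

n = 10
for r in [0.0, 0.3, 0.7, 0.95]:
    approx = r * (1 + (1 - r**2) / (2 * (n - 1)))
    print(r, G(r, n), approx)    # exact vs first-order approximation
```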
Results quite analogous to those found in Examples 2.1-2.3 obtain when the normal density (2.3) is replaced by the exponential density
$$\frac{1}{b^n}\exp\left[-\frac{1}{b}\sum(x_i - a)\right], \quad x_i > a. \qquad (2.22)$$
Despite its name, this two-parameter family does not constitute an exponential family since its support changes with $a$. However, for fixed $a$, it constitutes a one-parameter exponential family with parameter $1/b$.

Example 2.5 The exponential one-sample problem. Suppose, first, that $b$ is known. Then, $X_{(1)}$ is sufficient for $a$ and complete (Example 1.6.24). The distribution of $n[X_{(1)} - a]/b$ is the standard exponential distribution $E(0, 1)$, and the UMVU estimator of $a$ is $X_{(1)} - (b/n)$ (Problem 2.17). On the other hand, when $a$ is known, the distribution (2.22) constitutes a one-parameter exponential family with complete sufficient statistic $\sum(X_i - a)$. Since $2\sum(X_i - a)/b$ is distributed as $\chi^2_{2n}$, it is seen that $\sum(X_i - a)/n$ is the UMVU estimator of $b$ (Problem 2.17).

When both parameters are unknown, $X_{(1)}$ and $\sum[X_i - X_{(1)}]$ are jointly sufficient and complete (Example 1.6.27). Since they are independently distributed, $n[X_{(1)} - a]/b$ as $E(0, 1)$ and $2\sum[X_i - X_{(1)}]/b$ as $\chi^2_{2(n-1)}$ (Problem 1.6.18), it follows that (Problem 2.18)
$$\frac{1}{n-1}\sum[X_i - X_{(1)}] \quad\text{and}\quad X_{(1)} - \frac{1}{n(n-1)}\sum[X_i - X_{(1)}] \qquad (2.23)$$
are UMVU for $b$ and $a$, respectively. It is also easy to obtain the UMVU estimators of $a/b$ and of the critical value $u$ for which $P(X_1 \leq u)$ has a given value $p$. If, instead, $u$ is given, the UMVU estimator of $P(X_1 \leq u)$ can be found in analogy with the normal case (Problems 2.19 and 2.20). Finally, the two-sample problems corresponding to Example 2.3(a) and (b) can be handled very similarly to the normal case (Problems 2.21-2.23). ∥

An important aspect of estimation theory is the comparison of different estimators. As competitors of UMVU estimators, we shall now consider maximum likelihood estimators (ML estimators; see Section 6.2). This comparison is of interest both because of the widespread use of the ML estimator and because of its asymptotic optimality (which will be discussed in Chapter 6). If a distribution is specified by a parameter $\theta$ (which need not be real-valued), the ML estimator of $\theta$ is that value $\hat{\theta}$ of $\theta$ which maximizes the probability or probability density. The ML estimator of $g(\theta)$ is defined to be $g(\hat{\theta})$.

Example 2.6 Comparing UMVU and ML estimators. Let $X_1, \ldots, X_n$ be iid according to the normal distribution $N(\xi, \sigma^2)$. Then, the joint density of the $X$'s is given by (2.3), and it is easily seen that the ML estimators of $\xi$ and $\sigma^2$ are (Problem 2.26)
$$\hat{\xi} = \bar{X} \quad\text{and}\quad \hat{\sigma}^2 = \frac{1}{n}\sum(X_i - \bar{X})^2. \qquad (2.24)$$
Within the framework of this example, one can illustrate the different possible relationships between UMVU and ML estimators.

(a) When the estimand $g(\xi, \sigma)$ is $\xi$, then $\bar{X}$ is both the ML estimator and the UMVU estimator, so in this case the two estimators coincide.

(b) Let $\sigma$ be known, say $\sigma = 1$, and let $g(\xi, \sigma)$ be the probability $p = \Phi(u - \xi)$ considered in Example 2.2 (see also Example 3.1.13). The UMVU estimator is $\Phi[\sqrt{n/(n-1)}(u - \bar{X})]$, whereas the ML estimator is $\Phi(u - \bar{X})$. Since the ML estimator is biased (by completeness, there can be only one unbiased function of $\bar{X}$), the comparison should be based on the mean squared error (rather than the variance)
$$R_\delta(\xi, \sigma) = E[\delta - g(\xi, \sigma)]^2 \qquad (2.25)$$
as risk. Such a comparison was carried out by Zacks and Even (1966), who found that neither estimator is uniformly better than the other. For $n = 4$, for example, the UMVU estimator is better when $|u - \xi| > 1.3$ or, equivalently, when $p < .1$ or $p > .9$, whereas for the remaining values the ML estimator has smaller mean squared error.
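The Zacks-Even type of comparison is easy to reproduce approximately by simulation. A Python sketch (with $n = 4$, $\sigma = 1$, $\xi = 0$; the replication count and seed are arbitrary) estimates the two mean squared errors as $u - \xi$ varies:

```python
import numpy as np
from scipy.stats import norm

# Example 2.6(b): MSE of the UMVU and ML estimators of p = Phi(u - xi).
rng = np.random.default_rng(7)
n, reps = 4, 200000
for d in [0.0, 1.0, 2.0]:               # d = u - xi, with xi = 0
    xbar = rng.normal(0, 1 / np.sqrt(n), size=reps)
    p = norm.cdf(d)
    umvu = norm.cdf(np.sqrt(n / (n - 1)) * (d - xbar))
    ml = norm.cdf(d - xbar)
    print(d, ((umvu - p)**2).mean(), ((ml - p)**2).mean())
    # ML wins for small |d|; UMVU wins for |d| > 1.3, as in the text
```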
This example raises the question whether there are situations in which the ML estimator is either uniformly better or worse than its UMVU competitor. The following two simple examples illustrate these possibilities.

(c) If $\xi$ and $\sigma^2$ are both unknown, the UMVU estimator and the ML estimator of $\sigma^2$ are, respectively, $S^2/(n-1)$ and $S^2/n$, where $S^2 = \sum(X_i - \bar{X})^2$. Consider the general class of estimators $cS^2$. An easy calculation (Problem 2.28) shows that
$$E(cS^2 - \sigma^2)^2 = \sigma^4\left[(n^2 - 1)c^2 - 2(n-1)c + 1\right]. \qquad (2.26)$$
For any given $c$, this risk function is proportional to $\sigma^4$. The risk functions corresponding to different values of $c$ therefore do not intersect, but one lies entirely above the other. The right side of (2.26) is minimized by $c = 1/(n+1)$. Since the values $c = 1/(n-1)$ and $c = 1/n$, corresponding to the UMVU and ML estimators, respectively, lie on the same side of $1/(n+1)$ with $1/n$ being closer, and the risk function is quadratic in $c$, it follows that the ML estimator has uniformly smaller risk than the UMVU estimator, but that the ML estimator, in turn, is dominated by $S^2/(n+1)$. (For further discussion of this problem, see Section 3.3.)

(d) Suppose that $\sigma^2$ is known and let the estimand be $\xi^2$. Then, the ML estimator is $\bar{X}^2$, and the UMVU estimator is $\bar{X}^2 - \sigma^2/n$ (Problem 2.1). That the risk of the ML estimator is uniformly larger follows from the following lemma. ∥

Lemma 2.7 Let the risk be expected squared error. If $\delta$ is an unbiased estimator of $g(\theta)$ and if $\delta^* = \delta + b$, where the bias $b$ is independent of $\theta$, then $\delta^*$ has uniformly larger risk than $\delta$; in fact,
$$R_{\delta^*}(\theta) = R_\delta(\theta) + b^2.$$

For small sample sizes, both the UMVU and ML estimators can be unsatisfactory. One unpleasant possible feature of UMVU estimators is illustrated by the estimation of $\xi^2$ in the normal case [Problem 2.5; Example 2.6(d)]. The UMVU estimator is $\bar{X}^2 - \sigma^2/n$ when $\sigma$ is known, and $\bar{X}^2 - S^2/n(n-1)$ when it is unknown. In either case, the estimator can take on negative values although the estimand is known to be non-negative. Except when $\xi = 0$ or $n$ is small, the probability of such values is not large, but when they do occur, they cause some embarrassment. The difficulty can be avoided, and at the same time the risk of the estimator improved, by replacing the estimator by zero whenever it is negative. This idea is developed further in Sections 4.7 and 5.6. It is also the case that most of these problems disappear in large samples, as we will see in Chapter 6.

The examples of this section are fairly typical and suggest that the difference between the two estimators tends to be small. For samples from the exponential families, which constitute the main area of application of UMVU estimation, it has, in fact, been shown under suitable regularity assumptions that the UMVU and ML estimators are asymptotically equivalent as the sample size tends to infinity, so that the UMVU estimator shares the asymptotic optimality of the ML estimator. (For an exact statement and counterexamples, see Portnoy 1977b.)
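Formula (2.26) makes the comparison in part (c) immediate. A small Python sketch evaluating the bracketed risk factor at the three values of $c$ (with $\sigma = 1$ and an arbitrary $n$):

```python
# Example 2.6(c): risk of c*S^2 per (2.26), with sigma = 1.
n = 10
risk = lambda c: (n**2 - 1) * c**2 - 2 * (n - 1) * c + 1
for c in [1 / (n - 1), 1 / n, 1 / (n + 1)]:
    print(c, risk(c))   # risks decrease: UMVU > ML > S^2/(n+1)
```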
3 Discrete Distributions

The distributions considered in the preceding section were all continuous. We shall now treat the corresponding problems for some of the basic discrete distributions.

Example 3.1 Binomial UMVU estimators. In the simplest instance of a one-sample problem with qualitative rather than quantitative "measurements," the observations are dichotomous: cure or no cure, satisfactory or defective, yes or no. The two outcomes will be referred to generically as success or failure.

The results of $n$ independent such observations with common success probability $p$ are conveniently represented by random variables $X_i$ which are 1 or 0 as the $i$th case or "trial" is a success or failure. Then, $P(X_i = 1) = p$, and the joint distribution of the $X$'s is given by
$$P(X_1 = x_1, \ldots, X_n = x_n) = p^{\sum x_i}q^{n - \sum x_i} \quad (q = 1 - p). \qquad (3.1)$$
This is a one-parameter exponential family, and $T = \sum X_i$, the total number of successes, is a complete sufficient statistic. Since $E(X_i) = E(\bar{X}) = p$ and $\bar{X} = T/n$, it follows that $T/n$ is the UMVU estimator of $p$. Similarly, $\sum(X_i - \bar{X})^2/(n-1) = T(n-T)/n(n-1)$ is the UMVU estimator of $\operatorname{var}(X_i) = pq$ (Problem 3.1; see also Example 1.13).

The distribution of $T$ is the binomial distribution $b(p, n)$, and it was pointed out in Example 1.2 that $1/p$ is not U-estimable on the basis of $T$, and hence not in the present situation. In fact, it follows from Equation (1.2) that a function $g(p)$ can be U-estimable only if it is a polynomial of degree $\leq n$. To see that every such polynomial is actually U-estimable, it is enough to show that $p^m$ is U-estimable for every $m \leq n$. This can be established, and the UMVU estimator determined, by Method One of Section 1 (Problem 3.2).

An alternative approach utilizes Method Two. The quantity $p^m$ is the probability
$$p^m = P(X_1 = \cdots = X_m = 1),$$
and its UMVU estimator is therefore given by
$$\delta(t) = P[X_1 = \cdots = X_m = 1|T = t].$$
This probability is 0 if $t < m$. For $t \geq m$, $\delta(t)$ is the probability of obtaining $m$ successes in the first $m$ trials and $t - m$ successes in the remaining $n - m$ trials, divided by $P(T = t)$, and hence it is
$$p^m\binom{n-m}{t-m}p^{t-m}q^{n-t}\Big/\binom{n}{t}p^tq^{n-t},$$
or
$$\delta(T) = \frac{T(T-1)\cdots(T-m+1)}{n(n-1)\cdots(n-m+1)}. \qquad (3.2)$$
Since this expression is zero when $T = 0, \ldots, m-1$, it is seen that $\delta(T)$, given by (3.2) for all $T = 0, 1, \ldots, n$, is the UMVU estimator of $p^m$. This proves that $g(p)$ is U-estimable on the basis of $n$ binomial trials if and only if it is a polynomial of degree $\leq n$. ∥

Consider now the estimation of $1/p$, for which no unbiased estimator exists. This problem arises, for example, when estimating the size of certain animal populations. Suppose that a lake contains an unknown number $N$ of some species of fish. A random sample of size $k$ is caught, tagged, and released again. Somewhat later, a random sample of size $n$ is obtained and the number $X$ of tagged fish in the sample is noted. (This is the capture-recapture method; see, for example, George and Robert, 1992.) If, for the sake of simplicity, we assume that each caught fish is immediately returned to the lake (or, alternatively, that $N$ is very large compared to $n$), the $n$ fish in this sample constitute $n$ binomial trials with probability $p = k/N$ of success (i.e., obtaining a tagged fish). The population size $N$ is therefore equal to $k/p$. We shall now discuss a sampling scheme under which $1/p$, and hence $k/p$, is U-estimable.
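The estimator (3.2) can be checked exactly by summing against the binomial distribution. A Python sketch (parameter values arbitrary):

```python
from math import comb

# Example 3.1: delta(T) = T(T-1)...(T-m+1) / [n(n-1)...(n-m+1)] is
# unbiased for p^m when T ~ b(p, n).
def delta(t, n, m):
    num = den = 1
    for j in range(m):
        num *= (t - j)          # vanishes automatically when t < m
        den *= (n - j)
    return num / den

n, m, p = 6, 3, 0.4
mean = sum(comb(n, t) * p**t * (1 - p)**(n - t) * delta(t, n, m)
           for t in range(n + 1))
print(mean, p**m)   # equal up to floating-point error
```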
(3.4) It is seen from (3.4) that δ(Y) = (Y + m)/m, the reciprocal of the proportion of successes, is an unbiased estimator of 1/p. The full data in the present situation are not Y but also include the positions in which the m successes occur. However, Y is a sufficient statistic (Problem 3.6), and it is complete since (3.3) is an exponential family. As a function of Y, δ(Y) is thus the unique unbiased estimator of 1/p; based on the full data, it is UMVU. 102 UNBIASEDNESS [ 2.3 It is interesting to note that 1/(1 −p) is not U-estimable with the present sampling scheme, for suppose δ(Y) is an unbiased estimator so that pm ∞ y=0 δ(y) m + y −1 m −1  (1 −p)y = 1/(1 −p) for all 0 < p < 1. The left side is a power series which converges for all 0 < p < 1, and hence converges and is continuous for all |p| < 1. As p →1, the left side therefore tends to δ(0) while the right side tends to infinity. Thus, the assumed δ does not exist. (For the estimation of pr, see Problem 3.4.) ∥ The situations described in Examples 3.1 and 3.2 are special cases of sequential binomial sampling in which the number of trials is allowed to depend on the observations. The outcome of such sampling can be represented as a random walk in the plane. The walk starts at (0, 0) and moves a unit to the right or up as the first trial is a success or failure. From the resulting point (1, 0) or (0, 1), it again moves a unit to the right or up, and continues in this way until the sampling plan tells it to stop. A stopping rule is thus defined by a set B of points, a boundary, at which sampling stops. We require B to satisfy (x,y)∈B P(x, y) = 1 (3.5) since otherwise there is positive probability that sampling will go on indefinitely. A stopping rule that satisfies (3.5) is called closed. Any particular sample path ending in (x, y) has probability pxqy, and the prob-ability of a path ending in any particular point (x, y) is therefore P(x, y) = N(x, y)pxqy, (3.6) where N(x, y) denotes the number of paths along which the random walk can reach the point (x, y). As illustrations, consider the plans of Examples 3.1 and 3.2. (a) In Example 3.1, B is the set of points (x, y) satisfying x +y = n, x = 0, . . . , n, and for any (x, y) ∈B, we have N(x, y) = n x  . (b) In Example 3.2, B is the set of points (x, y) with x = m; y = 0, 1, . . . , and for any such point N(x, y) = m + y −1 y  . The observations in sequential binomial sampling are represented by the sample path, and it follows from (3.6) and the factorization criterion that the coordinates (X, Y) of the stopping point in which the path terminates constitute a sufficient statistic. This can also be seen from the definition of sufficiency, since the condi-tional probability of any given sample path given that it ends in (x, y) is pxqx N(x, y)pxqy = 1 N(x, y), which is independent of p. 2.3 ] DISCRETE DISTRIBUTIONS 103 Example 3.3 Sequential estimation of binomial p. For any closed sequential binomial sampling scheme, an unbiased estimator of p depending only on the sufficient statistic (X, Y) can be found in the following way. A simple unbiased estimator is δ = 1 if the first trial is a success and δ = 0 otherwise. Application of the Rao-Blackwell theorem then leads to δ′(X, Y) = E[δ|(X, Y)] = P[1st trial = success|(X, Y)] as an unbiased estimator depending only on (X, Y). If the point (1, 0) is a stopping point, then δ′ = δ and nothing is gained. In all other cases, δ′ will have a smaller variance than δ. 
An easy calculation [Problem 3.8(a)] shows that

$$\delta'(x, y) = N'(x, y)/N(x, y) \tag{3.7}$$

where $N'(x, y)$ is the number of paths possible under the sampling scheme which pass through $(1, 0)$ and terminate in $(x, y)$. ∥

More generally, if $(a, b)$ is any accessible point, that is, if it is possible under the given sampling plan to reach $(a, b)$, the quantity $p^a q^b$ is U-estimable, and an unbiased estimator depending only on $(X, Y)$ is given by (3.7), where $N'(x, y)$ now stands for the number of paths passing through $(a, b)$ and terminating in $(x, y)$ [Problem 3.8(b)].

The estimator (3.7) will be UMVU for any sampling plan for which the sufficient statistic $(X, Y)$ is complete. To describe conditions under which this is the case, let us call an accessible point that is not in $B$ a continuation point. A sampling plan is called simple if the set of continuation points $C_t$ on each line segment $x + y = t$ is an interval or the empty set. A plan is called finite if the number of accessible points is finite.

Example 3.4 Two sampling plans.

(a) Let $a$, $b$, and $m$ be three positive integers with $a < b < m$. Continue observation until either $a$ successes or $a$ failures have been obtained. If this does not happen during the first $m$ trials, continue until either $b$ successes or $b$ failures have been obtained. This sampling plan is simple and finite.

(b) Continue until both at least $a$ successes and $a$ failures have been obtained. This plan is neither simple nor finite, but it is closed (Problem 3.10). ∥

Theorem 3.5 A necessary and sufficient condition for a finite sampling plan to be complete is that it is simple.

We shall here only prove sufficiency. [For a proof of necessity, see Girshick, Mosteller, and Savage 1946.] If the restriction to finite plans is dropped, simplicity is no longer sufficient (Problem 3.9). Another necessary condition in that case is stated in Problem 3.13. This condition, together with simplicity, is also sufficient. (For a proof, see Lehmann and Stein 1950.) For the following proof, it may be helpful to consider a diagram of plan (a) of Example 3.4.

Proof of sufficiency. Suppose there exists a nonzero function $\delta(X, Y)$ whose expectation is zero for all $p$ ($0 < p < 1$). Let $t_0$ be the smallest value of $t$ for which there exists a boundary point $(x_0, y_0)$ on $x + y = t_0$ such that $\delta(x_0, y_0) \ne 0$. Since the continuation points on $x + y = t_0$ (if any) form an interval, they all lie on the same side of $(x_0, y_0)$. Suppose, without loss of generality, that $(x_0, y_0)$ lies to the left and above $C_{t_0}$, and let $(x_1, y_1)$ be that boundary point on $x + y = t_0$ above $C_{t_0}$ with $\delta(x_1, y_1) \ne 0$ which has the smallest $x$-coordinate. Then, all boundary points with $\delta(x, y) \ne 0$ satisfy $t \ge t_0$ and $x \ge x_1$. It follows that for all $0 < p < 1$,

$$E[\delta(X, Y)] = N(x_1, y_1)\,\delta(x_1, y_1)\,p^{x_1} q^{t_0 - x_1} + p^{x_1 + 1} R(p) = 0,$$

where $R(p)$ is a polynomial in $p$. Dividing by $p^{x_1}$ and letting $p \to 0$, we see that $\delta(x_1, y_1) = 0$, which is a contradiction. ✷

Fixed binomial sampling satisfies the conditions of the theorem but, there (and for inverse binomial sampling), completeness follows already from the fact that it leads to a full-rank exponential family (1.5.1) with $s = 1$. An example in which this is not the case is curtailed binomial sampling, in which sampling is continued as long as $X < a$, $Y < b$, and $X + Y < n$ ($a, b < n$) and is stopped as soon as one of the three boundaries is reached (Problem 3.11). Double sampling and curtailed double sampling provide further applications of the theory. (See Girshick, Mosteller, and Savage 1946; see also Kremers 1986.)
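To make the path counts $N(x, y)$ and $N'(x, y)$ concrete, here is a small Python sketch (ours, not from the text; the plan and the values of $a$, $b$, $n$ are arbitrary) that enumerates paths for the curtailed plan just described and prints the Rao-Blackwellized estimator (3.7) at each boundary point.

```python
# Path-counting sketch for curtailed binomial sampling: continue while
# X < a, Y < b, and X + Y < n.  N(x, y) counts paths from (0, 0);
# N'(x, y) counts paths forced through (1, 0), as in (3.7).

def path_counts(a, b, n, start):
    """Count random-walk paths from `start` to every reachable point,
    moving only out of continuation points (x < a, y < b, x + y < n)."""
    counts = {start: 1}
    # A path enters (x, y) from (x-1, y) or (x, y-1), both with smaller x+y,
    # so points can be processed in order of t = x + y.
    for t in range(start[0] + start[1] + 1, n + 1):
        for x in range(0, min(a, t) + 1):
            y = t - x
            if y > b:
                continue
            c = 0
            for px, py in [(x - 1, y), (x, y - 1)]:
                # the predecessor must be a continuation point to move on
                if (px, py) in counts and px < a and py < b and px + py < n:
                    c += counts[(px, py)]
            if c:
                counts[(x, y)] = c
    return counts

a, b, n = 3, 4, 6
N = path_counts(a, b, n, (0, 0))
Nprime = path_counts(a, b, n, (1, 0))   # unique path reaches (1, 0) first

def is_boundary(x, y):
    return not (x < a and y < b and x + y < n)

for (x, y), c in sorted(N.items()):
    if is_boundary(x, y):
        print((x, y), "delta' =", Nprime.get((x, y), 0) / c)
```

For instance, the boundary point $(3, 0)$ (three straight successes) yields $\delta' = 1$ and $(0, 4)$ yields $\delta' = 0$, as the conditional-probability interpretation of $\delta'$ requires.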
The discrete distributions considered so far were all generated by binomial trials. A large class of examples is obtained by considering one-parameter exponential families (1.5.2) in which $T(x)$ is integer-valued. Without loss of generality, we shall take $T(x)$ to be $x$ and the distribution of $X$ to be given by

$$P(X = x) = e^{\eta x - B(\eta)} a(x). \tag{3.8}$$

Putting $\theta = e^\eta$, we can write (3.8) as

$$P(X = x) = a(x)\theta^x / C(\theta), \quad x = 0, 1, \ldots,\ \theta > 0. \tag{3.9}$$

For any function $a(x)$ for which $\sum a(x)\theta^x < \infty$ for some $\theta > 0$, this is a family of power series distributions (Problems 1.5.14–1.5.16). The binomial distribution $b(p, n)$ is obtained from (3.9) by putting $a(x) = \binom{n}{x}$ for $x = 0, 1, \ldots, n$, and $a(x) = 0$ otherwise; $\theta = p/q$ and $C(\theta) = (\theta + 1)^n$. The negative binomial distribution with $a(x) = \binom{m+x-1}{m-1}$, $\theta = q$, and $C(\theta) = (1-\theta)^{-m}$ is another example.

The family (3.9) is clearly complete. If $a(x) > 0$ for all $x = 0, 1, \ldots$, then $\theta^r$ is U-estimable for any positive integer $r$, and its unique unbiased estimator is obtained by solving the equations

$$\sum_{x=0}^{\infty} \delta(x) a(x) \theta^x = \theta^r \cdot C(\theta) \quad\text{for all } \theta \in \Omega.$$

Since $\sum a(x)\theta^x = C(\theta)$, comparison of the coefficients of $\theta^x$ yields

$$\delta(x) = \begin{cases} 0 & \text{if } x = 0, \ldots, r-1 \\ a(x-r)/a(x) & \text{if } x \ge r. \end{cases} \tag{3.10}$$

Suppose, next, that $X_1, \ldots, X_n$ are iid according to a power series family (3.9). Then, $X_1 + \cdots + X_n$ is sufficient for $\theta$, and its distribution is given by the following lemma.

Lemma 3.6 The distribution of $T = X_1 + \cdots + X_n$ is the power series family

$$P(T = t) = \frac{A(t, n)\,\theta^t}{[C(\theta)]^n}, \tag{3.11}$$

where $A(t, n)$ is the coefficient of $\theta^t$ in the power series expansion of $[C(\theta)]^n$.

Proof. By definition,

$$P(T = t) = \frac{\theta^t \sum_t a(x_1) \cdots a(x_n)}{[C(\theta)]^n}$$

where $\sum_t$ indicates that the summation extends over all $n$-tuples of nonnegative integers $(x_1, \ldots, x_n)$ with $x_1 + \cdots + x_n = t$. If

$$B(t, n) = \sum\nolimits_t a(x_1) \cdots a(x_n), \tag{3.12}$$

the distribution of $T$ is given by (3.11) with $B(t, n)$ in place of $A(t, n)$. On the other hand,

$$[C(\theta)]^n = \left[\sum_{x=0}^{\infty} a(x)\theta^x\right]^n,$$

and for any $t = 0, 1, \ldots$, the coefficient of $\theta^t$ in the expansion of the right side as a power series in $\theta$ is just $B(t, n)$. Thus, $B(t, n) = A(t, n)$, and this completes the proof. ✷

It follows from the lemma that $T$ is complete and from (3.10) that the UMVU estimator of $\theta^r$ on the basis of a sample of $n$ is

$$\delta(t) = \begin{cases} 0 & \text{if } t = 0, \ldots, r-1 \\ \dfrac{A(t-r, n)}{A(t, n)} & \text{if } t \ge r. \end{cases} \tag{3.13}$$

Consider, next, the problem of estimating the probability distribution of $X$ from a sample $X_1, \ldots, X_n$. The estimand can be written as $g(\theta) = P_\theta(X_1 = x)$, and the UMVU estimator is therefore given by

$$\delta(t) = P[X_1 = x \mid X_1 + \cdots + X_n = t] = \frac{P(X_1 = x)\,P(X_2 + \cdots + X_n = t - x)}{P(T = t)}.$$

In the present case, this reduces to

$$\delta(t) = \frac{a(x)\,A(t-x, n-1)}{A(t, n)}, \quad n > 1,\ 0 \le x \le t. \tag{3.14}$$

Example 3.7 Poisson UMVU estimation. The Poisson distribution, shown in Table 1.5.1, arises as a limiting case of the binomial distribution for large $n$ and small $p$, and more generally as the number of events occurring in a fixed time period when the events are generated by a Poisson process. The distribution $P(\theta)$ of a Poisson variable with expectation $\theta$ is given by (3.9) with

$$a(x) = \frac{1}{x!}, \qquad C(\theta) = e^\theta. \tag{3.15}$$

Thus,

$$[C(\theta)]^n = e^{n\theta} \qquad\text{and}\qquad A(t, n) = \frac{n^t}{t!}. \tag{3.16}$$

The UMVU estimator of $\theta^r$ is therefore, by (3.13), equal to

$$\delta(t) = \frac{t(t-1)\cdots(t-r+1)}{n^r} \tag{3.17}$$

for all $t \ge r$. Since the right side is zero for $t = 0, \ldots, r-1$, formula (3.17) holds for all $t$. The UMVU estimator of $P_\theta(X = x)$ is given by (3.14), which, by (3.16), becomes

$$\delta(t) = \binom{t}{x}\left(\frac{1}{n}\right)^x\left(\frac{n-1}{n}\right)^{t-x}, \quad x = 0, 1, \ldots, t.$$

For varying $x$, this is the binomial distribution $b(1/n, t)$.
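The closed forms (3.16) and (3.17) are easy to verify numerically. The following Python sketch (ours, with arbitrary values of $n$, $r$, and $\theta$) sums $\delta(T)$ against the exact Poisson pmf of $T = \sum X_i$ and recovers $\theta^r$.

```python
# Check that delta(T) = T(T-1)...(T-r+1)/n^r from (3.17) is unbiased for
# theta^r when T ~ Poisson(n * theta); n, r, theta are arbitrary choices.
from math import exp

def delta(t, n, r):
    val = 1.0
    for j in range(r):
        val *= t - j
    return val / n**r

n, r, theta = 5, 3, 1.2
mu = n * theta
expectation, pmf = 0.0, exp(-mu)     # pmf of T at t = 0
for t in range(200):                 # truncate far in the Poisson tail
    expectation += delta(t, n, r) * pmf
    pmf *= mu / (t + 1)              # recursion pmf(t+1) = pmf(t) * mu/(t+1)
print(expectation, theta**r)         # agree to rounding error
```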
In some situations, Poisson variables are observed only when they are positive. For example, suppose that we have a sample from a truncated Poisson distribution (truncated on the left at 0) with probability function

$$P(X = x) = \frac{1}{e^\theta - 1}\,\frac{\theta^x}{x!}, \quad x = 1, 2, \ldots. \tag{3.18}$$

This is a power series distribution with $a(x) = 1/x!$ if $x \ge 1$, $a(0) = 0$, and $C(\theta) = e^\theta - 1$. For any values of $t$ and $n$, the UMVU estimator $\delta(t)$ of $\theta$, for example, can now be obtained from (3.13). (See Problems 3.18–3.22; for further discussion, see Tate and Goen 1958.) ∥

We next consider some multiparameter situations.

Example 3.8 Multinomial UMVU estimation. Let $(X_0, X_1, \ldots, X_s)$ have the multinomial distribution (1.5.4). As was seen in Example 1.5.3, this is an $s$-parameter exponential family, with $(X_1, \ldots, X_s)$ or $(X_0, X_1, \ldots, X_s)$ constituting a complete sufficient statistic. [Recall that $X_0 = n - (X_1 + \cdots + X_s)$.] Since $E(X_i) = np_i$, it follows that $X_i/n$ is the UMVU estimator of $p_i$. To obtain the UMVU estimator of $p_ip_j$, note that one unbiased estimator is $\delta = 1$ if the first trial results in outcome $i$ and the second trial in outcome $j$, and $\delta = 0$ otherwise. The UMVU estimator of $p_ip_j$ is therefore

$$E(\delta \mid X_0, \ldots, X_s) = \frac{(n-2)!\,X_iX_j}{X_0!\cdots X_s!} \Big/ \frac{n!}{X_0!\cdots X_s!} = \frac{X_iX_j}{n(n-1)}. \quad ∥$$

In the application of multinomial models, the probabilities $p_0, \ldots, p_s$ are frequently subject to additional restrictions, so that the number of independent parameters is less than $s$. In general, such a restricted family will not constitute a full-rank exponential family, but may be a curved exponential family. There are, however, important exceptions. Simple examples are provided by certain contingency tables.

Example 3.9 Two-way contingency tables. A number $n$ of subjects is drawn at random from a population sufficiently large that the drawings can be considered to be independent. Each subject is classified according to two characteristics: $A$, with possible outcomes $A_1, \ldots, A_I$, and $B$, with possible outcomes $B_1, \ldots, B_J$. [For example, students might be classified as being male or female ($I = 2$) and according to their average performance (A, B, C, D, or F; $J = 5$).] The probability that a subject has properties $(A_i, B_j)$ will be denoted by $p_{ij}$ and the number of such subjects in the sample by $n_{ij}$. The joint distribution of the $IJ$ variables $n_{ij}$ is an unrestricted multinomial distribution with $s = IJ - 1$, and the results of the sample can be represented in an $I \times J$ table, such as Table 3.1.

Table 3.1. $I \times J$ Contingency Table

              B_1    ...    B_J   | Total
    A_1       n_11   ...    n_1J  | n_{1+}
     .          .             .   |   .
    A_I       n_I1   ...    n_IJ  | n_{I+}
    Total     n_{+1} ...    n_{+J}| n

From Example 3.8, it follows that the UMVU estimator of $p_{ij}$ is $n_{ij}/n$.

A special case of Table 3.1 arises when $A$ and $B$ are independent, that is, when $p_{ij} = p_{i+}p_{+j}$ where $p_{i+} = p_{i1} + \cdots + p_{iJ}$ and $p_{+j} = p_{1j} + \cdots + p_{Ij}$. The joint probability of the $IJ$ cell counts then reduces to

$$\frac{n!}{\prod_{i,j} n_{ij}!}\,\prod_i p_{i+}^{n_{i+}}\,\prod_j p_{+j}^{n_{+j}}.$$

This is an $(I + J - 2)$-parameter exponential family with the complete sufficient statistics $(n_{i+}, n_{+j})$, $i = 1, \ldots, I$, $j = 1, \ldots, J$, or, equivalently, $i = 1, \ldots, I-1$, $j = 1, \ldots, J-1$. In fact, $(n_{1+}, \ldots, n_{I+})$ and $(n_{+1}, \ldots, n_{+J})$ are independent, with multinomial distributions $M(p_{1+}, \ldots, p_{I+}; n)$ and $M(p_{+1}, \ldots, p_{+J}; n)$, respectively (Problem 3.27), and the UMVU estimators of $p_{i+}$, $p_{+j}$, and $p_{ij} = p_{i+}p_{+j}$ are, therefore, $n_{i+}/n$, $n_{+j}/n$, and $n_{i+}n_{+j}/n^2$, respectively. ∥
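The contrast between the restricted and unrestricted estimators of $p_{ij}$ can be seen in a short simulation. The following Python sketch is ours (the cell probabilities and $n$ are arbitrary choices): under independence, the UMVU estimator $n_{i+}n_{+j}/n^2$ typically tracks the true $p_{ij}$ more steadily than the raw cell frequency $n_{ij}/n$, since it pools information across the margins.

```python
# Simulation sketch for Example 3.9: under independence p_ij = p_{i+} p_{+j},
# compare the unrestricted estimator n_ij/n with the UMVU n_{i+} n_{+j} / n^2.
import random

random.seed(2)
p_row, p_col = [0.2, 0.8], [0.5, 0.3, 0.2]   # arbitrary marginal probabilities
I, J, n = 2, 3, 2000

counts = [[0] * J for _ in range(I)]
for _ in range(n):
    i = random.choices(range(I), weights=p_row)[0]
    j = random.choices(range(J), weights=p_col)[0]
    counts[i][j] += 1

row = [sum(counts[i]) for i in range(I)]                       # n_{i+}
col = [sum(counts[i][j] for i in range(I)) for j in range(J)]  # n_{+j}
for i in range(I):
    for j in range(J):
        print((i, j),
              p_row[i] * p_col[j],          # true p_ij
              counts[i][j] / n,             # unrestricted estimator
              row[i] * col[j] / n**2)       # UMVU under independence
```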
When studying the relationship between two characteristics $A$ and $B$, one may find $A$ and $B$ to be dependent although no mechanism appears to exist through which either factor could influence the other. An explanation is sometimes found in the dependence of both factors on a common third factor, $C$, a phenomenon known as spurious correlation. The following example describes a model for this situation.

Example 3.10 Conditional independence in a three-way table. In the situation of Example 3.9, suppose that each subject is also classified according to a third factor $C$ as $C_1, \ldots,$ or $C_K$. [The third factor for the students of Example 3.9 might be their major (History, Physics, etc.).] Consider this situation under the assumption that, conditionally given $C_k$ ($k = 1, \ldots, K$), the characteristics $A$ and $B$ are independent, so that

$$p_{ijk} = p_{++k}\, p_{i+|k}\, p_{+j|k} \tag{3.19}$$

where $p_{i+|k}$, $p_{+j|k}$, and $p_{ij|k}$ denote the probability of the subject having properties $A_i$, $B_j$, or $(A_i, B_j)$, respectively, given that it has property $C_k$. After some simplification, the joint probability of the $IJK$ cell counts $n_{ijk}$ is seen to be proportional to (Problem 3.28)

$$\prod_{i,j,k}\left(p_{++k}\, p_{i+|k}\, p_{+j|k}\right)^{n_{ijk}} = \prod_k\left[p_{++k}^{n_{++k}} \prod_i p_{i+|k}^{n_{i+k}} \prod_j p_{+j|k}^{n_{+jk}}\right]. \tag{3.20}$$

This is an exponential family of dimension $(K-1) + K(I + J - 2) = K(I + J - 1) - 1$ with complete sufficient statistics

$$T = \{(n_{++k}, n_{i+k}, n_{+jk}),\ i = 1, \ldots, I,\ j = 1, \ldots, J,\ k = 1, \ldots, K\}.$$

Since the expectation of any cell count is $n$ times the probability of that cell, the UMVU estimators of $p_{++k}$, $p_{i+k}$, and $p_{+jk}$ are $n_{++k}/n$, $n_{i+k}/n$, and $n_{+jk}/n$, respectively. ∥

Consider, now, the estimation of the probability $p_{ijk}$. The unbiased estimator $\delta_0 = n_{ijk}/n$, which is UMVU in the unrestricted model, is not a function of $T$ and hence is no longer UMVU. The relationship (3.19) suggests the estimator

$$\delta_1 = \frac{n_{++k}}{n}\cdot\frac{n_{i+k}}{n_{++k}}\cdot\frac{n_{+jk}}{n_{++k}},$$

which is a function of $T$. It is easy to see (Problem 3.30) that $\delta_1$ is unbiased and hence is UMVU. (For additional results concerning the estimation of the parameters of this model, see Cohen 1981 or Davis 1989.)

To conclude this section, an example is provided in which the UMVU estimator fails completely.

Example 3.11 Misbehaved UMVU estimator. Let $X$ have the Poisson distribution $P(\theta)$ and let $g(\theta) = e^{-a\theta}$, where $a$ is a known constant. The condition of unbiasedness of an estimator $\delta$ leads to

$$\sum \frac{\delta(x)\,\theta^x}{x!} = e^{(1-a)\theta} = \sum \frac{(1-a)^x\theta^x}{x!}$$

and hence to

$$\delta(X) = (1-a)^X. \tag{3.21}$$

Suppose $a = 3$. Then, $g(\theta) = e^{-3\theta}$, and one would expect an estimator which decreases from 1 to 0 as $X$ goes from 0 to infinity. The ML estimator $e^{-3X}$ meets this expectation. On the other hand, the unique unbiased estimator $\delta(x) = (-2)^x$ oscillates wildly between positive and negative values and appears to bear no relation to the problem at hand. (A possible explanation for this erratic behavior is suggested in Lehmann (1983).) It is interesting to see that the difficulty disappears if the sample size is increased. If $X_1, \ldots, X_n$ are iid according to $P(\theta)$, then $T = \sum X_i$ is a sufficient statistic and has the Poisson $P(n\theta)$ distribution. The condition of unbiasedness now becomes

$$\sum \frac{\delta(t)(n\theta)^t}{t!} = e^{(n-a)\theta} = \sum \frac{(n-a)^t\theta^t}{t!}$$

and the UMVU estimator is

$$\delta(T) = \left(1 - \frac{a}{n}\right)^T. \tag{3.22}$$

This is quite reasonable as soon as $n > a$. ∥
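The pathology of (3.21), and its repair by (3.22), can be displayed directly. The Python sketch below is ours (the values of $a$, $\theta$, and $n$ are arbitrary): both estimators are exactly unbiased, yet the single-observation estimator takes the useless values $1, -2, 4, -8, \ldots$

```python
# The unique unbiased estimator (3.21) of exp(-3*theta) from one Poisson
# observation is (-2)^X: wildly oscillating, yet with the correct expectation.
from math import exp

def expectation_of(delta, mu, terms=200):
    """Exact E[delta(X)] for X ~ Poisson(mu), truncated far in the tail."""
    total, pmf = 0.0, exp(-mu)
    for x in range(terms):
        total += delta(x) * pmf
        pmf *= mu / (x + 1)
    return total

a, theta = 3, 0.8
print(expectation_of(lambda x: (1 - a)**x, theta), exp(-a * theta))  # unbiased
print([(1 - a)**x for x in range(6)])   # 1, -2, 4, -8, 16, -32: useless values

# With n iid observations, T ~ Poisson(n*theta) and (3.22) gives the sensible
# estimator (1 - a/n)^T once n > a.
n = 10
print(expectation_of(lambda t: (1 - a / n)**t, n * theta), exp(-a * theta))
```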
4 Nonparametric Families

Section 2.2 was concerned with continuous parametric families of distributions such as the normal, uniform, or exponential distributions, and Section 2.3 with discrete parametric families such as the binomial and Poisson distributions. We now turn to nonparametric families in which no specific form is assumed for the distribution.

We begin with the one-sample problem in which $X_1, \ldots, X_n$ are iid with distribution $F \in \mathcal{F}$. About the family $\mathcal{F}$, we shall make only rather general assumptions, for example, that it is the family of distributions $F$ which have a density, or are continuous, or have first moments, and so on. The estimand $g(F)$ might, for example, be $E(X_i) = \int x\,dF(x)$, or $\operatorname{var} X_i$, or $P(X_i \le a) = F(a)$.

It was seen in Problem 1.6.33 that for the family $\mathcal{F}_0$ of all probability densities, the order statistics $X_{(1)} < \cdots < X_{(n)}$ constitute a complete sufficient statistic, and the hint given there shows that this result remains valid if $\mathcal{F}_0$ is further restricted by requiring the existence of some moments.² (For an alternative proof, see TSH2, Section 4.3. Also, Bell, Blackwell, and Breiman (1960) show the result is valid for the family of all continuous distributions.) An estimator $\delta(X_1, \ldots, X_n)$ is a function of the order statistics if and only if it is symmetric in its $n$ arguments. For families $\mathcal{F}$ for which the order statistics are complete, there can therefore exist at most one symmetric unbiased estimator of any estimand, and this is UMVU. Thus, to find the UMVU estimator of any U-estimable $g(F)$, it suffices to find a symmetric unbiased estimator.

² The corresponding problem in which the values of some moments (or expectations of other functions) are given is treated by Hoeffding (1977) and N. Fisher (1982).

Example 4.1 Estimating the distribution function. Let $g(F) = P(X \le a) = F(a)$, $a$ known. The natural estimator is the number of $X$'s which are $\le a$, divided by $n$. The number of such $X$'s is the outcome of $n$ binomial trials with success probability $F(a)$, so that this estimator is unbiased for $F(a)$. Since it is also symmetric, it is the UMVU estimator. This can be paraphrased by saying that the empirical cumulative distribution function is the UMVU estimator of the unknown true cumulative distribution function.

Note. In the normal case of Section 2.2, it was possible to find unbiased estimators not only of $P(X \le u)$ but also of the probability density $p_X(u)$ of $X$. No unbiased estimator of the density exists for the family $\mathcal{F}_0$. For proofs, see Rosenblatt 1956 and Bickel and Lehmann 1969; for further discussion of the problem of estimating a nonparametric density, see Rosenblatt 1971, the books by Devroye and Györfi (1985), Silverman (1986), or Wand and Jones (1995), and the review article by Izenman (1991). Nonparametric density estimation is an example of what Liu and Brown (1993) call singular problems, which pose problems for unbiased estimation. See Note 8.3. ∥

Example 4.2 Nonparametric UMVU estimation of a mean. Let us now further restrict $\mathcal{F}_0$, the class of all distributions $F$ having a density, by adding the condition $E|X| < \infty$, and let $g(F) = \int x f(x)\,dx$. Since $\bar X$ is symmetric and unbiased for $g(F)$, $\bar X$ is UMVU. An alternative proof of this result is obtained by noting that $X_1$ is unbiased for $g(F)$. The UMVU estimator is then found by conditioning on the order statistics: $E[X_1 \mid X_{(1)}, \ldots, X_{(n)}]$. But, given the order statistics, $X_1$ assumes each value with probability $1/n$. Hence, the above conditional expectation is equal to $(1/n)\sum X_{(i)} = \bar X$.
In Section 2.2, it was shown that $\bar X$ is UMVU for estimating $E(X_i) = \xi$ in the family of normal distributions $N(\xi, \sigma^2)$; now it is seen to be UMVU in the family of all distributions that have a probability density and finite expectation. Which of these results is stronger? The uniformity makes the nonparametric result appear much stronger. This is counteracted, however, by the fact that the condition of unbiasedness is much more restrictive in that case. Thus, the number of competitors which the UMVU estimator "beats" for such a wide class of distributions is quite small (see Problem 4.1). It is interesting in this connection to note that, for a family intermediate between the two considered here, the family of all symmetric distributions having a probability density, $\bar X$ is not UMVU (Problem 4.4; see also Bickel and Lehmann 1975–1979). ∥

Example 4.3 Nonparametric UMVU estimation of a variance. Let $g(F) = \operatorname{var} X$. Then $\sum(X_i - \bar X)^2/(n-1)$ is symmetric and unbiased, and hence is UMVU. ∥

Example 4.4 Nonparametric UMVU estimation of a second moment. Let $g(F) = \xi^2$, where $\xi = EX$. Now, $\sigma^2 = E(X^2) - \xi^2$ and a symmetric unbiased estimator of $E(X^2)$ is $\sum X_i^2/n$. Hence, the UMVU estimator of $\xi^2$ is $\sum X_i^2/n - \sum(X_i - \bar X)^2/(n-1)$.

An alternative derivation of this result is obtained by noting that $X_1X_2$ is unbiased for $\xi^2$. The UMVU estimator of $\xi^2$ can thus be found by conditioning: $E[X_1X_2 \mid X_{(1)}, \ldots, X_{(n)}]$. But, given the order statistics, the pair $\{X_1, X_2\}$ assumes the value of each pair $\{X_{(i)}, X_{(j)}\}$, $i \ne j$, with probability $1/n(n-1)$. Hence, the above conditional expected value is

$$\frac{1}{n(n-1)}\sum_{i \ne j} X_iX_j,$$

which is equivalent to the earlier result. ∥

Consider, now, quite generally a function $g(F)$ which is U-estimable in $\mathcal{F}_0$. Then, there exists an integer $m \le n$ and a function $\delta(X_1, \ldots, X_m)$ which is unbiased for $g(F)$. We can assume without loss of generality that $\delta$ is symmetric in its $m$ arguments; otherwise, it can be symmetrized. Then, the estimator

$$\frac{1}{\binom{n}{m}} \sum_{(i_1, \ldots, i_m)} \delta(X_{i_1}, \ldots, X_{i_m}) \tag{4.1}$$

is UMVU for $g(F)$; here, the sum is over all $m$-tuples $(i_1, \ldots, i_m)$ from the integers $1, 2, \ldots, n$ with $i_1 < \cdots < i_m$. That this estimator is UMVU follows from the facts that it is symmetric and that each of the $\binom{n}{m}$ summands has expectation $g(F)$. The class of statistics (4.1), called U-statistics, was studied by Hoeffding (1948), who, in particular, gave conditions for their asymptotic normality; for further work on U-statistics, see Serfling 1980, Staudte and Sheather 1990, Lee 1990, or Koroljuk and Borovskich 1994.

Two problems suggest themselves:

(a) What kind of functions $g(F)$ have unbiased estimators, that is, are U-estimable?

(b) If a functional $g(F)$ has an unbiased estimator, what is the smallest number of observations for which an unbiased estimator exists? We shall call this smallest number the degree of $g(F)$.

(For the case that $F$ assigns positive probability only to the two values 0 and 1, these questions are answered in the preceding section.)

Example 4.5 Degree of the variance. Let $g(F)$ be the variance $\sigma^2$ of $F$. Then $g(F)$ has an unbiased estimator in the subset $\mathcal{F}_0'$ of $\mathcal{F}_0$ with $E_FX^2 < \infty$ and $n = 2$ observations, since $\sum(X_i - \bar X)^2/(n-1) = \frac{1}{2}(X_2 - X_1)^2$ is unbiased for $\sigma^2$. Hence, the degree of $\sigma^2$ is $\le 2$. Furthermore, since in the normal case with unknown mean there is no unbiased estimator of $\sigma^2$ based on only one observation (Problem 2.7), there is no such estimator within the class $\mathcal{F}_0'$. It follows that the degree of $\sigma^2$ is 2. ∥
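For a concrete instance of (4.1), take $m = 2$ and the symmetric kernel $\delta(x_1, x_2) = (x_1 - x_2)^2/2$ of Example 4.5. The Python sketch below (ours; the simulated data are an arbitrary normal sample) computes the resulting U-statistic and checks the algebraic identity that it coincides with the usual sample variance.

```python
# U-statistic (4.1) with kernel delta(x1, x2) = (x1 - x2)^2 / 2, unbiased
# for the variance; it reproduces the sample variance exactly.
from itertools import combinations
import random

def u_statistic(data, kernel, m):
    combos = list(combinations(data, m))   # all m-subsets, i1 < ... < im
    return sum(kernel(*c) for c in combos) / len(combos)

random.seed(0)
x = [random.gauss(0, 2) for _ in range(30)]
u = u_statistic(x, lambda a, b: (a - b)**2 / 2, m=2)

mean = sum(x) / len(x)
s2 = sum((xi - mean)**2 for xi in x) / (len(x) - 1)
print(u, s2)   # identical up to rounding
```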
We shall now give another proof that the degree of $\sigma^2$ in this example is greater than 1, to illustrate a method that is of more general applicability for problems of this type. Let $g$ be any estimand that is of degree 1 in $\mathcal{F}_0'$. Then, there exists $\delta$ such that $\int \delta(x)\,dF(x) = g(F)$ for all $F \in \mathcal{F}_0'$. Fix two arbitrary distributions $F_1$ and $F_2$ in $\mathcal{F}_0'$ with $F_1 \ne F_2$, and let $F = \alpha F_1 + (1-\alpha)F_2$, $0 \le \alpha \le 1$. Then,

$$g[\alpha F_1 + (1-\alpha)F_2] = \alpha\int\delta(x)\,dF_1(x) + (1-\alpha)\int\delta(x)\,dF_2(x). \tag{4.2}$$

Now, $\alpha F_1 + (1-\alpha)F_2$ is also in $\mathcal{F}_0'$, and as a function of $\alpha$, the right-hand side is linear in $\alpha$. Thus, the only $g$'s that can be of degree 1 are those for which the left-hand side is linear in $\alpha$.

Now, consider $g(F) = \sigma^2_F = E(X^2) - [EX]^2$. In this case,

$$\sigma^2_{\alpha F_1 + (1-\alpha)F_2} = \alpha E(X_1^2) + (1-\alpha)E(X_2^2) - [\alpha EX_1 + (1-\alpha)EX_2]^2 \tag{4.3}$$

where $X_i$ is distributed according to $F_i$. The coefficient of $\alpha^2$ on the right-hand side is seen to be $-[E(X_2) - E(X_1)]^2$. Since this is not zero for all $F_1, F_2 \in \mathcal{F}_0'$, the right-hand side is not linear in $\alpha$, and it follows that $\sigma^2$ is not of degree 1. ∥

Generalizing (4.2), we see that if $g(F)$ is of degree $m$, then

$$g[\alpha F_1 + (1-\alpha)F_2] = \int\cdots\int \delta(x_1, \ldots, x_m)\,d[\alpha F_1(x_1) + (1-\alpha)F_2(x_1)]\cdots d[\alpha F_1(x_m) + (1-\alpha)F_2(x_m)]$$

is a polynomial of degree at most $m$, $\qquad(4.4)$

which is thus a necessary condition for $g$ to be estimable with $m$ observations. Conditions for (4.4) to be also sufficient are given by Bickel and Lehmann (1969). Condition (4.4) may also be useful for proving that there exists no value of $n$ for which a functional $g(F)$ has an unbiased estimator.

Example 4.6 Nonexistence of unbiased estimator. Let $g(F) = \sigma$. Then $g[\alpha F_1 + (1-\alpha)F_2]$ is the square root of the right-hand side of (4.3). Since this quadratic in $\alpha$ is not a perfect square for all $F_1, F_2 \in \mathcal{F}_0'$, it follows that its square root is not a polynomial. Hence $\sigma$ does not have an unbiased estimator for any fixed number $n$ of observations. ∥

Let us now turn from the one-sample to the two-sample problem. Let $X_1, \ldots, X_m$ and $Y_1, \ldots, Y_n$ be independently distributed according to distributions $F$ and $G \in \mathcal{F}_0$. Then the order statistics $X_{(1)} < \cdots < X_{(m)}$ and $Y_{(1)} < \cdots < Y_{(n)}$ are sufficient and complete (Problem 4.5). A statistic $\delta$ is a function of these order statistics if and only if $\delta$ is symmetric in the $X_i$'s and separately symmetric in the $Y_j$'s.

Example 4.7 Two-sample UMVU estimator. Let $h(F, G) = E(Y) - E(X)$. Then $\bar Y - \bar X$ is unbiased for $h(F, G)$. Since it is a function of the complete sufficient statistic, it is UMVU. ∥

The concept of degree runs into difficulty in the present case. Smallest values $m_0$ and $n_0$ are sought for which a given functional $h(F, G)$ has an unbiased estimator. One possibility is to find the smallest $m$ for which there exists an $n$ such that $h(F, G)$ has an unbiased estimator, and to let $m_0$ and $n_0$ be the smallest values so determined. This procedure is not symmetric in $m$ and $n$. However, it can be shown that if the reverse procedure is used, the same minimum values are obtained. [See Bickel and Lehmann (1969).]

As a last illustration, let us consider the bivariate nonparametric problem. Let $(X_1, Y_1), \ldots, (X_n, Y_n)$ be iid according to a distribution $F \in \mathcal{F}$, the family of all bivariate distributions having a probability density. In analogy with the order statistics in the univariate case, the set of pairs

$$T = \{[X_{(1)}, Y_{j_1}], \ldots, [X_{(n)}, Y_{j_n}]\},$$

that is, the $n$ pairs $(X_i, Y_i)$ ordered according to the value of their first coordinate, constitutes a sufficient statistic.
An equivalent statistic is

$$T' = \{[X_{i_1}, Y_{(1)}], \ldots, [X_{i_n}, Y_{(n)}]\},$$

that is, the set of pairs $(X_i, Y_i)$ ordered according to the value of the second coordinate. Here, as elsewhere, the only aspect of $T$ that matters is the set of points to which $T$ assigns a constant value. In the present case, these are the $n!$ points that can be obtained from the given point $[(X_1, Y_1), \ldots, (X_n, Y_n)]$ by permuting the $n$ pairs. As in the univariate case, the conditional probability of each of these permutations given $T$ or $T'$ is $1/n!$. Also, as in the univariate case, $T$ is complete (Problem 4.10). An estimator $\delta$ is a function of the complete sufficient statistic if and only if $\delta$ is invariant under permutation of the $n$ pairs. Hence, any such function is the unique UMVU estimator of its expectation.

Example 4.8 U-estimation of covariance. The estimator $\sum(X_i - \bar X)(Y_i - \bar Y)/(n-1)$ is UMVU for $\operatorname{cov}(X, Y)$ (Problem 4.8). ∥

5 The Information Inequality

The principal applications of UMVU estimators are to exponential families, as illustrated in Sections 2.2–2.3. When a UMVU estimator does not exist, the variance $V_L(\theta_0)$ of the LMVU estimator at $\theta_0$ is the smallest variance that an unbiased estimator can achieve at $\theta_0$. This establishes a useful benchmark against which to measure the performance of a given unbiased estimator $\delta$. If the variance of $\delta$ is close to $V_L(\theta)$ for all $\theta$, not much further improvement is possible. Unfortunately, the function $V_L(\theta)$ is usually difficult to determine. Instead, in this section, we shall derive some lower bounds which are typically not sharp [i.e., lie below $V_L(\theta)$] but are much simpler to calculate. One of the resulting inequalities for the variance, the information inequality, will be used in Chapter 5 as a tool for minimax estimation. However, its most important role is in Chapter 6, where it provides insight and motivation for the theory of asymptotically efficient estimators.

For any estimator $\delta$ of $g(\theta)$ and any function $\psi(x, \theta)$ with a finite second moment, the covariance inequality (Problem 1.5) states that

$$\operatorname{var}(\delta) \ge \frac{[\operatorname{cov}(\delta, \psi)]^2}{\operatorname{var}(\psi)}. \tag{5.1}$$

In general, this inequality is not helpful since the right side also involves $\delta$. However, when $\operatorname{cov}(\delta, \psi)$ depends on $\delta$ only through $E_\theta(\delta) = g(\theta)$, (5.1) does provide a lower bound for the variance of all unbiased estimators of $g(\theta)$. The following result is due to Blyth (1974).

Theorem 5.1 A necessary and sufficient condition for $\operatorname{cov}(\delta, \psi)$ to depend on $\delta$ only through $g(\theta)$ is that for all $\theta$

$$\operatorname{cov}(U, \psi) = 0 \quad\text{for all } U \in \mathcal{U}, \tag{5.2}$$

where $\mathcal{U}$ is the class of statistics defined in Theorem 1.1, that is,

$$\mathcal{U} = \{U : E_\theta U = 0,\ E_\theta U^2 < \infty \text{ for all } \theta \in \Omega\}.$$

Proof. To say that $\operatorname{cov}(\delta, \psi)$ depends on $\delta$ only through $g(\theta)$ is equivalent to saying that for any two estimators $\delta_1$ and $\delta_2$ with $E_\theta\delta_1 = E_\theta\delta_2$ for all $\theta$, we have $\operatorname{cov}(\delta_1, \psi) = \operatorname{cov}(\delta_2, \psi)$. The proof of the theorem is then easily established by writing

$$\operatorname{cov}(\delta_1, \psi) - \operatorname{cov}(\delta_2, \psi) = \operatorname{cov}(\delta_1 - \delta_2, \psi) = \operatorname{cov}(U, \psi) \tag{5.3}$$

and noting that, therefore, $\operatorname{cov}(\delta_1, \psi) = \operatorname{cov}(\delta_2, \psi)$ for all $\delta_1$ and $\delta_2$ if and only if $\operatorname{cov}(U, \psi) = 0$ for all $U$. ✷

Example 5.2 Hammersley-Chapman-Robbins inequality. Suppose $X$ is distributed with density $p_\theta = p(x, \theta)$, and, for the moment, suppose that $p(x, \theta) > 0$ for all $x$. If $\theta$ and $\theta + W$ are two values for which $g(\theta) \ne g(\theta + W)$, then the function

$$\psi(x, \theta) = \frac{p(x, \theta + W)}{p(x, \theta)} - 1 \tag{5.4}$$

satisfies the conditions of Theorem 5.1 since

$$E_\theta(\psi) = 0 \tag{5.5}$$

and hence

$$\operatorname{cov}(U, \psi) = E(\psi U) = E_{\theta+W}(U) - E_\theta(U) = 0.$$

In fact, $\operatorname{cov}(\delta, \psi) = E_\theta(\delta\psi) = g(\theta + W) - g(\theta)$, so that (5.1) becomes

$$\operatorname{var}(\delta) \ge \frac{[g(\theta + W) - g(\theta)]^2}{E_\theta\left[\dfrac{p(X, \theta + W)}{p(X, \theta)} - 1\right]^2}. \tag{5.6}$$

Since this inequality holds for all $W$, it also holds when the right side is replaced by its supremum over $W$. The resulting lower bound is due to Hammersley (1950) and Chapman and Robbins (1951). ∥
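For a case in which the denominator of (5.6) is available in closed form, take $X \sim P(\theta)$ and $g(\theta) = \theta$; a direct summation gives $E_\theta[p(X, \theta+W)/p(X, \theta) - 1]^2 = e^{W^2/\theta} - 1$, so that the bound becomes $W^2/(e^{W^2/\theta} - 1)$, which increases to $\theta$ as $W \to 0$. The Python sketch below is ours (the value of $\theta$ and the grid of $W$'s are arbitrary) and simply tabulates this bound.

```python
# Hammersley-Chapman-Robbins bound (5.6) for estimating theta from
# X ~ Poisson(theta), using the closed form exp(W^2/theta) - 1 for the
# denominator (requires theta + W > 0).  The W -> 0 limit is theta.
from math import exp

def hcr_bound(theta, W):
    return W**2 / (exp(W**2 / theta) - 1)

theta = 2.0
for W in [1.0, 0.5, 0.1, 0.01]:
    print(W, hcr_bound(theta, W))
print("information bound:", theta)   # the limiting (W -> 0) value
```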
In this inequality, the assumption of a common support for the distributions $p_\theta$ can be somewhat relaxed. If $S(\theta)$ denotes the support of $p_\theta$, (5.6) will be valid provided $S(\theta + W)$ is contained in $S(\theta)$. In taking the supremum over $W$, attention must then be restricted to the values of $W$ for which this condition holds.

When certain regularity conditions are satisfied, a classic inequality is obtained by letting $W \to 0$ in (5.4). The inequality (5.6) is unchanged if (5.4) is replaced by

$$\frac{p_{\theta+W} - p_\theta}{W}\cdot\frac{1}{p_\theta},$$

which tends to $[(\partial/\partial\theta)p_\theta]/p_\theta$ as $W \to 0$, provided $p_\theta$ is differentiable with respect to $\theta$. This suggests as an alternative to (5.4)

$$\psi(x, \theta) = \frac{\partial}{\partial\theta}p(x, \theta)\Big/ p(x, \theta). \tag{5.7}$$

Since for any $U \in \mathcal{U}$, clearly $(d/d\theta)E_\theta(U) = 0$, $\psi$ will satisfy (5.2), provided $E_\theta(U) = \int U p_\theta\,d\mu$ can be differentiated with respect to $\theta$ under the integral sign for all $U \in \mathcal{U}$. To obtain the resulting lower bound, let $p'_\theta = \partial p_\theta/\partial\theta$, so that

$$\operatorname{cov}(\delta, \psi) = \int \delta\, p'_\theta\,d\mu.$$

If differentiation under the integral sign is permitted in $\int \delta p_\theta\,d\mu = g(\theta)$, it then follows that

$$\operatorname{cov}(\delta, \psi) = g'(\theta) \tag{5.8}$$

and hence

$$\operatorname{var}(\delta) \ge \frac{[g'(\theta)]^2}{\operatorname{var}\left(\dfrac{\partial}{\partial\theta}\log p(X, \theta)\right)}. \tag{5.9}$$

The assumptions required for this inequality will be stated more formally in Theorem 5.15, where we will pay particular attention to requirements on the estimator. Pitman (1979, Chapter 5) provides an interesting interpretation of the inequality and a discussion of the regularity assumptions.

The function $\psi$ defined by (5.7) is the relative rate at which the density $p_\theta$ changes at $x$. The average of the square of this rate is denoted by

$$I(\theta) = E_\theta\left[\frac{\partial}{\partial\theta}\log p(X, \theta)\right]^2 = \int\left(\frac{p'_\theta}{p_\theta}\right)^2 p_\theta\,d\mu. \tag{5.10}$$

It is plausible that the greater this expectation is at a given value $\theta_0$, the easier it is to distinguish $\theta_0$ from neighboring values $\theta$, and, therefore, the more accurately $\theta$ can be estimated at $\theta = \theta_0$. (Under suitable assumptions, this surmise turns out to be correct for large samples; see Chapter 6.) The quantity $I(\theta)$ is called the information (or the Fisher information) that $X$ contains about the parameter $\theta$.

It is important to realize that $I(\theta)$ depends on the particular parametrization chosen. In fact, if $\theta = h(\xi)$ and $h$ is differentiable, the information that $X$ contains about $\xi$ is

$$I^*(\xi) = I[h(\xi)]\cdot[h'(\xi)]^2. \tag{5.11}$$

When different parametrizations are considered in a single problem, the notation $I(\theta)$ is inadequate; however, it suffices for most applications.

To obtain alternative expressions for $I(\theta)$ that are sometimes more convenient, let us make the following assumptions:

(a) $\Omega$ is an open interval (finite, infinite, or semi-infinite).

(b) The distributions $P_\theta$ have common support, so that without loss of generality the set $A = \{x : p_\theta(x) > 0\}$ is independent of $\theta$. $\qquad(5.12)$

(c) For any $x$ in $A$ and $\theta$ in $\Omega$, the derivative $p'_\theta(x) = \partial p_\theta(x)/\partial\theta$ exists and is finite.

Lemma 5.3 (a) If (5.12) holds, and the derivative with respect to $\theta$ of the left side of

$$\int p_\theta(x)\,d\mu(x) = 1 \tag{5.13}$$

can be obtained by differentiating under the integral sign, then

$$E_\theta\left[\frac{\partial}{\partial\theta}\log p_\theta(X)\right] = 0 \tag{5.14}$$

and

$$I(\theta) = \operatorname{var}_\theta\left[\frac{\partial}{\partial\theta}\log p_\theta(X)\right]. \tag{5.15}$$
(b) If, in addition, the second derivative with respect to $\theta$ of $\log p_\theta(x)$ exists for all $x$ and $\theta$ and the second derivative with respect to $\theta$ of the left side of (5.13) can be obtained by differentiating twice under the integral sign, then

$$I(\theta) = -E_\theta\left[\frac{\partial^2}{\partial\theta^2}\log p_\theta(X)\right]. \tag{5.16}$$

Proof. (a) Equation (5.14) is derived by differentiating (5.13), and (5.15) follows from (5.10) and (5.14).

(b) We have

$$\frac{\partial^2}{\partial\theta^2}\log p_\theta(x) = \frac{\frac{\partial^2}{\partial\theta^2}p_\theta(x)}{p_\theta(x)} - \left[\frac{\frac{\partial}{\partial\theta}p_\theta(x)}{p_\theta(x)}\right]^2,$$

and the result follows by taking the expectation of both sides. ✷

Let us now calculate $I(\theta)$ for some of the families discussed in Sections 1.4 and 1.5. We first look at exponential families with $s = 1$, given in Equation (1.5.1), and derive a relationship between some unbiased estimators and information.

Theorem 5.4 Let $X$ be distributed according to the exponential family (1.5.1) with $s = 1$, and let

$$\tau(\theta) = E_\theta(T), \tag{5.17}$$

the so-called mean-value parameter. Then,

$$I[\tau(\theta)] = \frac{1}{\operatorname{var}_\theta(T)}. \tag{5.18}$$

Proof. From Equation (5.15), the amount of information that $X$ contains about $\theta$, $I(\theta)$, is

$$I(\theta) = \operatorname{var}_\theta\left[\frac{\partial}{\partial\theta}\log p_\theta(X)\right] = \operatorname{var}_\theta[\eta'(\theta)T(X) - B'(\theta)] = [\eta'(\theta)]^2\operatorname{var}(T), \tag{5.19}$$

using (1.5.1). Now, from (5.11), the information $I[\tau(\theta)]$ that $X$ contains about $\tau(\theta)$ is given by

$$I[\tau(\theta)] = \frac{I(\theta)}{[\tau'(\theta)]^2} = \left[\frac{\eta'(\theta)}{\tau'(\theta)}\right]^2\operatorname{var}(T). \tag{5.20}$$

Finally, using the fact that $\tau(\theta) = B'(\theta)/\eta'(\theta)$ (Problem 1.5.6), we have

$$\operatorname{var}(T) = \frac{B''(\theta) - \eta''(\theta)\tau(\theta)}{[\eta'(\theta)]^2} = \frac{\tau'(\theta)}{\eta'(\theta)}, \tag{5.21}$$

and substituting (5.21) into (5.20) yields (5.18). ✷

If we combine Equations (5.11) and (5.19), then for any differentiable function $h(\theta)$, we have

$$I[h(\theta)] = \left[\frac{\eta'(\theta)}{h'(\theta)}\right]^2\operatorname{var}(T). \tag{5.22}$$

Example 5.5 Information in a gamma variable. Let $X \sim \Gamma(\alpha, \beta)$, where we assume that $\alpha$ is known. The density is given by

$$p_\beta(x) = \frac{1}{\Gamma(\alpha)\beta^\alpha}\,x^{\alpha-1}e^{-x/\beta} = e^{(-1/\beta)x - \alpha\log\beta}\,h(x) \tag{5.23}$$

with $h(x) = x^{\alpha-1}/\Gamma(\alpha)$. In this parametrization, $\eta(\beta) = -1/\beta$, $T(x) = x$, and $B(\beta) = \alpha\log\beta$. Thus, $E(T) = \alpha\beta$, $\operatorname{var}(T) = \alpha\beta^2$, and the information in $X$ about $\alpha\beta$ is $I(\alpha\beta) = 1/\alpha\beta^2$.

If we are instead interested in the information in $X$ about $\beta$, then we can reparametrize (5.23) using $\eta(\beta) = -\alpha/\beta$ and $T(x) = x/\alpha$. From (5.22), we have, quite generally, that

$$I[ch(\theta)] = \frac{1}{c^2}\,I[h(\theta)],$$

so the information in $X$ about $\beta$ is $I(\beta) = \alpha/\beta^2$. ∥

Table 5.1 gives $I[\tau(\theta)]$ for a number of special cases. Qualitatively, $I[\tau(\theta)]$ given by (5.18) behaves as one would expect. Since $T$ is the UMVU estimator of its expectation $\tau(\theta)$, the variance of $T$ is a measure of the difficulty of estimating $\tau(\theta)$. Thus, the reciprocal of the variance measures the ease with which $\tau(\theta)$ can be estimated and, in this sense, the information $X$ contains about $\tau(\theta)$.

Table 5.1. $I[\tau(\theta)]$ for Some Exponential Families

    Distribution    Parameter τ(θ)    I[τ(θ)]
    N(ξ, σ²)        ξ                 1/σ²
    N(ξ, σ²)        σ²                1/2σ⁴
    b(p, n)         p                 n/pq
    P(λ)            λ                 1/λ
    Γ(α, β)         β                 α/β²

Example 5.6 Information in a normal variable. Consider the case of the $N(\xi, \sigma^2)$ distribution with $\sigma$ known, when the interest is in estimation of $\xi^2$. The density is given by

$$p_\xi(x) = \frac{1}{\sqrt{2\pi}\,\sigma}e^{-\frac{1}{2\sigma^2}(x-\xi)^2} = e^{\eta(\xi)T(x) - B(\xi)}\,h(x)$$

with $\eta(\xi) = \xi$, $T(x) = x/\sigma^2$, $B(\xi) = \frac{1}{2}\xi^2/\sigma^2$, and $h(x) = e^{-x^2/2\sigma^2}/\sqrt{2\pi}\,\sigma$. The information in $X$ about $h(\xi) = \xi^2$ is given by

$$I(\xi^2) = \left[\frac{\eta'(\xi)}{h'(\xi)}\right]^2\operatorname{var}(T) = \frac{1}{4\xi^2\sigma^2}.$$

Note that we could have equivalently defined $\eta(\xi) = \xi/\sigma^2$, $T(x) = x$, and arrived at the same answer. ∥
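Entries of Table 5.1 can be verified directly from definition (5.10). The Python sketch below (ours; $p$ and $n$ are arbitrary) computes $I(p)$ for $X \sim b(p, n)$ by summing the squared score against the binomial pmf, and compares the result with the table's $n/pq$.

```python
# Check of the binomial entry in Table 5.1: compute I(p) from definition
# (5.10) for X ~ b(p, n) and compare with n/(pq).
from math import comb

p, n = 0.3, 12
info = 0.0
for x in range(n + 1):
    score = x / p - (n - x) / (1 - p)    # d/dp log p_p(x); constants cancel
    pmf = comb(n, x) * p**x * (1 - p)**(n - x)
    info += score**2 * pmf
print(info, n / (p * (1 - p)))           # both equal 57.142857...
```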
Example 5.7 Information about a function of a Poisson parameter. Suppose that $X$ has the Poisson $P(\lambda)$ distribution, so that $I[\lambda]$, the information $X$ contains about $\lambda = E(X)$, is $1/\lambda$. For $\eta(\lambda) = \log\lambda$, which is an increasing function of $\lambda$, $I[\log\lambda] = \lambda$. Thus, the information in $X$ about $\lambda$ is inversely proportional to that about $\log\lambda$. In particular, for large values of $\lambda$, it seems that the parameter $\log\lambda$ can be estimated quite accurately, although the converse is true for $\lambda$. This conclusion is correct and is explained by the fact that $\log\lambda$ changes very slowly when $\lambda$ is large. Hence, for large $\lambda$, even a large error in the estimate of $\lambda$ will lead to only a small error in $\log\lambda$, whereas the situation is reversed for $\lambda$ near zero, where $\log\lambda$ changes very rapidly. It is interesting to note that there exists a function of $\lambda$ [namely $h(\lambda) = \sqrt\lambda$] whose behavior is intermediate between that of $h(\lambda) = \lambda$ and $h(\lambda) = \log\lambda$, in that the amount of information $X$ contains about it is constant, independent of $\lambda$ (Problem 5.6). ∥

As a second class of distributions for which to evaluate $I(\theta)$, consider location families with density

$$f(x - \theta) \quad (x, \theta \text{ real-valued}) \tag{5.24}$$

where $f(x) > 0$ for all $x$. Conditions (5.12) are satisfied provided the derivative $f'(x)$ of $f(x)$ exists for all values of $x$. It is seen that $I(\theta)$ is independent of $\theta$ and given by (Problem 5.14)

$$I_f = \int_{-\infty}^{\infty}\frac{[f'(x)]^2}{f(x)}\,dx. \tag{5.25}$$

Table 5.2 shows $I_f$ for a number of distributions (defined in Table 1.4.1).

Table 5.2. $I_f$ for Some Standard Distributions

    Distribution    N(0, 1)    L(0, 1)    C(0, 1)    DE(0, 1)
    I_f             1          1/3        1/2        1

Actually, the double exponential density does not satisfy the stated assumptions since $f'(x)$ does not exist at $x = 0$. However, (5.25) is valid under the slightly weaker assumption that $f$ is absolutely continuous [see (1.3.7)], which does hold in the double exponential case. For this and the extensions below, see, for example, Huber 1981, Section 4.4. On the other hand, it does not hold when $f$ is the uniform density on (0, 1) since $f$ is then not continuous and hence, a fortiori, not absolutely continuous. It turns out that whenever $f$ is not absolutely continuous, it is natural to put $I_f$ equal to $\infty$. For the uniform distribution, for example, it is easier by an order of magnitude to estimate $\theta$ (see Problem 5.33) than for any of the distributions listed in Table 5.2, and it is thus reasonable to assign to $I_f$ the value $\infty$. This should be contrasted with the fact that $f'(x) = 0$ for all $x \ne 0, 1$, so that formal application of (5.25) leads to the incorrect value 0.

When (5.24) is replaced by

$$\frac{1}{b}\,f\left(\frac{x - \theta}{b}\right), \tag{5.26}$$

the amount of information about $\theta$ becomes (Problem 5.14)

$$\frac{I_f}{b^2} \tag{5.27}$$

with $I_f$ given by (5.25).
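When no closed form for (5.25) is at hand, the integral can be evaluated numerically. The following Python sketch is ours (grid limits and step count are arbitrary choices): it approximates $I_f$ for the logistic density by trapezoidal quadrature and recovers the value $1/3$ of Table 5.2.

```python
# Numerical evaluation of (5.25) for the logistic density
# f(x) = e^{-x} / (1 + e^{-x})^2; Table 5.2 gives I_f = 1/3.
from math import exp

def f(x):
    return exp(-x) / (1 + exp(-x))**2

def fprime(x, h=1e-6):
    return (f(x + h) - f(x - h)) / (2 * h)   # central difference

lo, hi, steps = -30.0, 30.0, 60_000
h = (hi - lo) / steps
total = 0.0
for i in range(steps + 1):
    x = lo + i * h
    w = 0.5 if i in (0, steps) else 1.0      # trapezoidal end weights
    total += w * fprime(x)**2 / f(x)
print(total * h)                             # approximately 0.3333
```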
The information about $\theta$ contained in independent observations is, as one would expect, additive. This is stated formally in the following result.

Theorem 5.8 Let $X$ and $Y$ be independently distributed with densities $p_\theta$ and $q_\theta$, respectively, with respect to measures $\mu$ and $\nu$ satisfying (5.12) and (5.14). If $I_1(\theta)$, $I_2(\theta)$, and $I(\theta)$ are the information about $\theta$ contained in $X$, $Y$, and $(X, Y)$, respectively, then

$$I(\theta) = I_1(\theta) + I_2(\theta). \tag{5.28}$$

Proof. By definition,

$$I(\theta) = E\left[\frac{\partial}{\partial\theta}\log p_\theta(X) + \frac{\partial}{\partial\theta}\log q_\theta(Y)\right]^2,$$

and the result follows from the fact that the cross-product term is zero by (5.14). ✷

Corollary 5.9 If $X_1, \ldots, X_n$ are iid, satisfy (5.12) and (5.14), and each has information $I(\theta)$, then the information in $X = (X_1, \ldots, X_n)$ is $nI(\theta)$.

Let us now return to the inequality (5.9) and proceed to a formal statement of when it holds. If (5.12), and hence (5.15), holds, then the denominator of the right side of (5.9) can be replaced by $I(\theta)$. The result is the following version of the information inequality.

Theorem 5.10 (The Information Inequality) Suppose $p_\theta$ is a family of densities with dominating measure $\mu$ for which (5.12) and (5.14) hold, and that $I(\theta) > 0$. Let $\delta$ be any statistic with

$$E_\theta(\delta^2) < \infty \tag{5.29}$$

for which the derivative with respect to $\theta$ of $E_\theta(\delta)$ exists and can be obtained by differentiating under the integral sign, that is,

$$\frac{d}{d\theta}E_\theta(\delta) = \int \delta\,\frac{\partial}{\partial\theta}p_\theta\,d\mu. \tag{5.30}$$

Then

$$\operatorname{var}_\theta(\delta) \ge \frac{\left[\dfrac{\partial}{\partial\theta}E_\theta(\delta)\right]^2}{I(\theta)}. \tag{5.31}$$

Proof. The result follows from (5.9) and Lemma 5.3, and is seen directly by differentiating (5.30) and then applying (5.1). ✷

If $\delta$ is an estimator of $g(\theta)$, with $E_\theta(\delta) = g(\theta) + b(\theta)$ where $b(\theta)$ is the bias of $\delta$, then (5.31) becomes

$$\operatorname{var}_\theta(\delta) \ge \frac{[b'(\theta) + g'(\theta)]^2}{I(\theta)}, \tag{5.32}$$

which provides a lower bound for the variance of any estimator in terms of its bias and $I(\theta)$. If $\delta = \delta(X)$ where $X = (X_1, \ldots, X_n)$ and if the $X$'s are iid, then by Corollary 5.9

$$\operatorname{var}_\theta(\delta) \ge \frac{[b'(\theta) + g'(\theta)]^2}{nI_1(\theta)} \tag{5.33}$$

where $I_1(\theta)$ is the information about $\theta$ contained in $X_1$. Inequalities (5.32) and (5.33) will be useful in Chapter 5.

Unlike $I(\theta)$, which changes under reparametrization, the lower bound (5.31), and hence the bounds (5.32) and (5.33), does not. Let $\theta = h(\xi)$ with $h$ differentiable. Then,

$$\frac{\partial}{\partial\xi}E_{h(\xi)}(\delta) = \frac{\partial}{\partial\theta}E_\theta(\delta)\cdot h'(\xi),$$

and the result follows from (5.11). (See Problem 5.20.)

The lower bound (5.31) for $\operatorname{var}_\theta(\delta)$ typically is not sharp. In fact, under suitable regularity conditions, it is attained if and only if $p_\theta(x)$ is an exponential family (1.5.1) with $s = 1$ and $T(x) = \delta(x)$ (see Problem 5.17). However, (5.1) is based on the Cauchy-Schwarz inequality, which has a well-known condition for equality (see Problems 5.2 and 5.19). The bound (5.31) will be attained by an estimator if and only if

$$\delta = a\left[\frac{\partial}{\partial\theta}\log p_\theta(x)\right] + b \tag{5.34}$$

for some constants $a$ and $b$ (which may depend on $\theta$).

Example 5.11 Binomial attainment of information bound. For the binomial distribution $X \sim b(p, n)$, we have

$$\delta = a\left[\frac{\partial}{\partial p}[x\log p + (n-x)\log(1-p)]\right] + b = a\left[\frac{x - np}{p(1-p)}\right] + b$$

with $E\delta = b$ and $\operatorname{var}\delta = na^2/p(1-p)$. This is the only form of function for which the information inequality bound (5.31) can be attained. The function $\delta$ is an estimator only if $a = p(1-p)$ and $b = np$. This yields $\delta = X$, $E\delta = np$, and $\operatorname{var}(\delta) = np(1-p)$. Thus, $X$ is the only unbiased estimator that achieves the information inequality bound (5.31). ∥

Many authors have presented general necessary and sufficient conditions for attainment of the bound (5.31) (Wijsman 1973, Joshi 1976, Müller-Funk et al. 1989). The following theorem is adapted from Müller-Funk et al.

Theorem 5.12 (Attainment) Suppose (5.12) holds, and $\delta$ is a statistic with $\operatorname{var}_\theta\delta < \infty$ for all $\theta \in \Omega$. Then $\delta$ attains the lower bound

$$\operatorname{var}_\theta\delta = \left[\frac{\partial}{\partial\theta}E_\theta\delta\right]^2\Big/ I(\theta) \quad\text{for all } \theta \in \Omega$$

if and only if there exists a continuously differentiable function $\phi(\theta)$ such that

$$p_\theta(x) = C(\theta)\,e^{\phi(\theta)\delta(x)}\,h(x)$$

is a density with respect to a dominating measure $\mu$ for suitably chosen $C(\theta)$ and $h(x)$; that is, $p_\theta(x)$ constitutes an exponential family. Moreover, if $E_\theta\delta = g(\theta)$, then $\delta$ and $g$ satisfy

$$\delta(x) = \frac{g'(\theta)}{I(\theta)}\,\frac{\partial}{\partial\theta}\log p_\theta(x) + g(\theta), \tag{5.35}$$

$$g(\theta) = -\frac{C'(\theta)}{C(\theta)\phi'(\theta)}, \qquad\text{and}\qquad I(\theta) = \phi'(\theta)g'(\theta).$$

Note that the function $\delta$ specified in (5.35) may depend on $\theta$. In such a case, $\delta$ is not an estimator, and there is no estimator that attains the information bound.
Example 5.13 Poisson attainment of information bound. Suppose $X$ is a discrete random variable with probability function that is absolutely continuous with respect to $\mu = $ counting measure, and satisfies $E_\lambda X = \operatorname{var}_\lambda(X) = \lambda$. If $X$ attains the information inequality bound, then

$$\lambda = \left[\frac{\partial}{\partial\lambda}E_\lambda(X)\right]^2\Big/ I(\lambda),$$

so from Theorem 5.12, $\phi'(\lambda) = 1/\lambda$ and the distribution of $X$ must be

$$p_\lambda(x) = C(\lambda)\,e^{(\log\lambda)x}\,h(x).$$

Since $g(\lambda) = \lambda = -\lambda C'(\lambda)/C(\lambda)$, it follows that $C(\lambda) = e^{-\lambda}$, which implies $h(x) = 1/x!$, and $p_\lambda(x)$ is the Poisson distribution. ∥

Some improvements over (5.31) are available when the inequality is not attained. These will be briefly mentioned at the end of the next section.

Theorem 5.10 restricts the information inequality to estimators $\delta$ satisfying (5.29) and (5.30). The first of these conditions imposes no serious restriction, since an estimator with infinite variance satisfies (5.31) automatically. However, it is desirable to replace (5.30) by a condition (on the densities $p_\theta$) not involving $\delta$, so that (5.31) will then hold for all $\delta$. Such conditions will be given in Theorem 5.15 below, with a more detailed discussion of alternatives given in Note 8.6.

In reviewing the argument leading to (5.9), the conditions that were needed on the estimator $\delta(x)$ were

$$\text{(a) } E_\theta[\delta^2(X)] < \infty \ \text{ for all } \theta, \qquad \text{(b) } \frac{\partial}{\partial\theta}E_\theta[\delta(X)] = \int \delta(x)\,\frac{\partial}{\partial\theta}p_\theta(x)\,d\mu(x) = g'(\theta). \tag{5.36}$$

The key point is to find a way to ensure that $\operatorname{cov}(\delta, \psi) = (\partial/\partial\theta)E_\theta\delta$, and hence that (5.30) holds. Consider the following argument, in which one of the steps is not immediately justified. For $q_\theta(x) = \partial\log p_\theta(x)/\partial\theta$, write

$$\operatorname{cov}(\delta, q_\theta) = \int\delta(x)\left[\frac{\partial}{\partial\theta}\log p_\theta(x)\right]p_\theta(x)\,dx = \int\delta(x)\left[\lim_{W\to0}\frac{p_{\theta+W}(x) - p_\theta(x)}{Wp_\theta(x)}\right]p_\theta(x)\,dx$$

$$\stackrel{?}{=} \lim_{W\to0}\int\delta(x)\left[\frac{p_{\theta+W}(x) - p_\theta(x)}{Wp_\theta(x)}\right]p_\theta(x)\,dx = \lim_{W\to0}\frac{E_{\theta+W}\delta(X) - E_\theta\delta(X)}{W} = \frac{\partial}{\partial\theta}E_\theta\delta(X). \tag{5.37}$$

Thus, (5.30) will hold provided the interchange of limit and integral is valid. A simple condition for this is given in the following lemma.

Lemma 5.14 Assume that (5.12)(a) and (5.12)(b) hold, and let $\delta$ be any estimator for which $E_\theta\delta^2 < \infty$. Let $q_\theta(x) = \partial\log p_\theta(x)/\partial\theta$ and, for some $\varepsilon > 0$, let $b_\theta$ be a function that satisfies $E_\theta b_\theta^2(X) < \infty$ and

$$\left|\frac{p_{\theta+W}(x) - p_\theta(x)}{Wp_\theta(x)}\right| \le b_\theta(x) \quad\text{for all } |W| < \varepsilon. \tag{5.38}$$

Then $E_\theta q_\theta(X) = 0$ and

$$\frac{\partial}{\partial\theta}E_\theta\delta(X) = E_\theta[\delta(X)q_\theta(X)] = \operatorname{cov}_\theta(\delta, q_\theta), \tag{5.39}$$

and thus (5.30) holds.

Proof. Since

$$\left|\delta(x)\,\frac{p_{\theta+W}(x) - p_\theta(x)}{Wp_\theta(x)}\right| \le |\delta(x)|\,b_\theta(x)$$

and

$$E_\theta[|\delta(X)|\,b_\theta(X)] \le \{E_\theta[\delta(X)^2]\}^{1/2}\{E_\theta[b_\theta(X)^2]\}^{1/2} < \infty,$$

it follows from the Dominated Convergence Theorem (Theorem 1.2.5) that the interchange of limit and integral in (5.37) is valid. ✷

An immediate consequence of Lemma 5.14 is the following theorem.

Theorem 5.15 Suppose $p_\theta(x)$ is a family of densities with dominating measure $\mu$ satisfying (5.12), $I(\theta) > 0$, and there exists a function $b_\theta$ and $\varepsilon > 0$ for which (5.38) holds. If $\delta$ is any statistic for which $E_\theta(\delta^2) < \infty$, then the information inequality (5.31) will hold.

We note that condition (5.38) is similar to what is known as a Lipschitz condition, which imposes a smoothness constraint on a function by bounding the left side of (5.38) by a constant. It is satisfied for many families of densities (see Problem 5.27), including, of course, the exponential family. We give one illustration here.

Example 5.16 Integrability. Suppose that $X \sim f(x - \theta)$, where $f$ is Student's $t$ distribution with $m$ degrees of freedom. It is not immediately obvious that this family of densities satisfies (5.14), so we cannot directly apply Theorem 5.10. We leave the general case to Problem 5.27(b) and show here that the Cauchy family ($m = 1$), with density

$$p_\theta(x) = \frac{1}{\pi}\,\frac{1}{1 + (x - \theta)^2},$$

satisfies (5.38).
The left side of (5.38) is

$$\left|\frac{1}{W}\left[\frac{1 + (x-\theta)^2}{1 + (x - W - \theta)^2} - 1\right]\right| = \left|\frac{1}{W}\cdot\frac{1 + (x-\theta)^2 - 1 - (x - W - \theta)^2}{1 + (x - W - \theta)^2}\right| = \left|\frac{1}{W}\cdot\frac{2W(x - \theta) - W^2}{1 + (x - W - \theta)^2}\right|$$

$$\le \frac{2|x - W - \theta|}{1 + (x - W - \theta)^2} + \frac{|W|}{1 + (x - W - \theta)^2} \le 2 + \varepsilon.$$

Here the last inequality follows from the facts that $|W| < \varepsilon$ and $|x|/(1 + x^2) \le 1$ for all $x$. Condition (5.38) therefore holds with $b_\theta(x) = 2 + \varepsilon$, which verifies the information inequality (5.31) for the Cauchy case. ∥

As a consequence of Theorem 5.15, note

Corollary 5.17 If (5.38) holds, then (5.14) is valid.

Proof. Putting $\delta(x) = 1$ in (5.39), we have that

$$0 = \frac{d}{d\theta}(1) = \int\frac{\partial}{\partial\theta}p_\theta\,d\mu = E_\theta\left[\frac{\partial}{\partial\theta}\log p_\theta(X)\right]. \qquad ✷$$

6 The Multiparameter Case and Other Extensions

In discussing the information inequality, we have so far assumed that $\theta$ is real-valued. To extend the inequalities of the preceding section to the multiparameter case, we begin by generalizing the inequality (5.1) to one involving several functions $\psi_i$ ($i = 1, \ldots, r$). This extension also provides a tool for sharpening the inequality (5.31).

Theorem 6.1 For any unbiased estimator $\delta$ of $g(\theta)$ and any functions $\psi_i(x, \theta)$ with finite second moments, we have

$$\operatorname{var}(\delta) \ge \gamma'C^{-1}\gamma, \tag{6.1}$$

where $\gamma' = (\gamma_1, \ldots, \gamma_r)$ and $C = \|C_{ij}\|$ are defined by

$$\gamma_i = \operatorname{cov}(\delta, \psi_i), \qquad C_{ij} = \operatorname{cov}(\psi_i, \psi_j). \tag{6.2}$$

The right side of (6.1) will depend on $\delta$ only through $g(\theta) = E_\theta(\delta)$, provided each of the functions $\psi_i$ satisfies (5.2).

Proof. For any constants $a_1, \ldots, a_r$, it follows from (5.1) that

$$\operatorname{var}(\delta) \ge \frac{\left[\operatorname{cov}\left(\delta, \sum a_i\psi_i\right)\right]^2}{\operatorname{var}\left(\sum a_i\psi_i\right)}, \tag{6.3}$$

and direct calculation shows

$$\operatorname{cov}\left(\delta, \sum a_i\psi_i\right) = \sum a_i\gamma_i = a'\gamma, \qquad \operatorname{var}\left(\sum a_i\psi_i\right) = a'Ca. \tag{6.4}$$

Since (6.3) is true for any vector $a$, from (6.4) and (5.1) we have

$$\operatorname{var}(\delta) \ge \max_a\frac{[a'\gamma]^2}{a'Ca} = \gamma'C^{-1}\gamma,$$

where we use the fact (see Problem 6.2) that if $P$ is an $r \times r$ matrix and $p$ an $r \times 1$ column vector such that $P = pp'$, then

$$\max_a\frac{a'Pa}{a'Qa} = \text{largest eigenvalue of } Q^{-1}P = p'Q^{-1}p. \tag{6.5}$$

✷

As the first and principal application of (6.1), we shall extend the information inequality (5.31) to the multiparameter case. Let $X$ be distributed with density $p_\theta$, $\theta \in \Omega$, with respect to $\mu$, where $\theta$ is vector-valued, say $\theta = (\theta_1, \ldots, \theta_s)$. Suppose that (5.12)(a) and (b) hold and, in addition,

(c) For any $x$ in $A$, $\theta$ in $\Omega$, and $i = 1, \ldots, s$, the derivative $\partial p_\theta(x)/\partial\theta_i$ exists and is finite. $\qquad(6.6)$

In a generalization of (5.10), define the information matrix as the $s \times s$ matrix

$$I(\theta) = \|I_{ij}(\theta)\| \tag{6.7}$$

where

$$I_{ij}(\theta) = E_\theta\left[\frac{\partial}{\partial\theta_i}\log p_\theta(X)\cdot\frac{\partial}{\partial\theta_j}\log p_\theta(X)\right]. \tag{6.8}$$

If (6.6) holds and the derivative with respect to each $\theta_i$ of the left side of (5.13) can be obtained by differentiating under the integral sign, one obtains, as in Lemma 5.3,

$$E\left[\frac{\partial}{\partial\theta_i}\log p_\theta(X)\right] = 0 \tag{6.9}$$

and

$$I_{ij}(\theta) = \operatorname{cov}\left[\frac{\partial}{\partial\theta_i}\log p_\theta(X),\ \frac{\partial}{\partial\theta_j}\log p_\theta(X)\right]. \tag{6.10}$$

Being a covariance matrix, $I(\theta)$ is positive semidefinite, and positive definite unless the $(\partial/\partial\theta_i)\log p_\theta(X)$, $i = 1, \ldots, s$, are affinely dependent (and hence, by (6.9), linearly dependent).

If, in addition to satisfying (6.6) and (6.9), the density $p_\theta$ also has second derivatives $\partial^2 p_\theta(x)/\partial\theta_i\partial\theta_j$ for all $i$ and $j$, there is, in generalization of (5.16), an alternative expression for $I_{ij}(\theta)$ which is often more convenient (Problem 6.4),

$$I_{ij}(\theta) = -E\left[\frac{\partial^2}{\partial\theta_i\partial\theta_j}\log p_\theta(X)\right]. \tag{6.11}$$

In the multiparameter situation with $\theta = (\theta_1, \ldots, \theta_s)$, Theorem 5.8 and Corollary 5.9 continue to hold with only the obvious changes; that is, information matrices for independent observations are additive.
To see how an information matrix changes under reparametrization, suppose that

$$\theta_i = h_i(\xi_1, \ldots, \xi_s), \quad i = 1, \ldots, s, \tag{6.12}$$

and let $J$ be the matrix

$$J = \left\|\frac{\partial\theta_j}{\partial\xi_i}\right\|. \tag{6.13}$$

Let the information matrix for $(\xi_1, \ldots, \xi_s)$ be $I^*(\xi) = \|I^*_{ij}(\xi)\|$ where

$$I^*_{ij}(\xi) = E\left[\frac{\partial}{\partial\xi_i}\log p_{\theta(\xi)}(X)\cdot\frac{\partial}{\partial\xi_j}\log p_{\theta(\xi)}(X)\right]. \tag{6.14}$$

Then, it is seen from the chain rule for differentiating a function of several variables that (Problem 6.7)

$$I^*_{ij}(\xi) = \sum_k\sum_l I_{kl}(\theta)\,\frac{\partial\theta_k}{\partial\xi_i}\,\frac{\partial\theta_l}{\partial\xi_j} \tag{6.15}$$

and hence that

$$I^*(\xi) = JIJ'. \tag{6.16}$$

In generalization of Theorem 5.4, let us now calculate $I(\theta)$ for multiparameter exponential families.

Theorem 6.2 Let $X$ be distributed according to the exponential family (1.5.1) and let

$$\tau_i = ET_i(X), \quad i = 1, \ldots, s, \tag{6.17}$$

the mean-value parametrization. Then,

$$I(\tau) = C^{-1} \tag{6.18}$$

where $C$ is the covariance matrix of $(T_1, \ldots, T_s)$.

Proof. It is easiest to work with the natural parametrization (1.5.2), which is equivalent. By (6.10) and (1.5.15), the information in $X$ about the natural parameter $\eta$ is

$$I^*(\eta) = \left\|\frac{\partial^2}{\partial\eta_j\partial\eta_k}A(\eta)\right\| = \|\operatorname{cov}(T_j, T_k)\| = C.$$

Furthermore, (1.5.14) shows that $\tau_j = (\partial/\partial\eta_j)A(\eta)$ and, hence, (6.13) shows that

$$J = \left\|\frac{\partial\tau_j}{\partial\eta_i}\right\| = C.$$

Thus, from (6.16),

$$C = I^*(\eta) = JI(\tau)J' = CI(\tau)C,$$

which implies (6.18). ✷

Example 6.3 Multivariate normal information matrix. Let $(X_1, \ldots, X_p)$ have a multivariate normal distribution with mean 0 and covariance matrix $\Sigma = \|\sigma_{ij}\|$, so that by (1.4.15) the density is proportional to $e^{-\sum\eta_{ij}x_ix_j/2}$ where $\|\eta_{ij}\| = \Sigma^{-1}$. Since $E(X_iX_j) = \sigma_{ij}$, we find from (6.18) that the information matrix of the $\sigma_{ij}$ is

$$I(\Sigma) = C^{-1}, \tag{6.19}$$

where $C$ is the covariance matrix of the products $X_iX_j$. ∥

Example 6.4 Exponential family information matrices. Table 6.1 gives $I(\theta)$ for three two-parameter exponential families, where $\psi(\alpha) = \Gamma'(\alpha)/\Gamma(\alpha)$ and $\psi'(\alpha) = d\psi(\alpha)/d\alpha$ are, respectively, the digamma and trigamma functions (Problem 6.5). ∥

Table 6.1. Three Information Matrices

$$N(\xi, \sigma^2):\quad I(\xi, \sigma) = \begin{pmatrix} 1/\sigma^2 & 0 \\ 0 & 2/\sigma^2 \end{pmatrix}$$

$$\Gamma(\alpha, \beta):\quad I(\alpha, \beta) = \begin{pmatrix} \psi'(\alpha) & 1/\beta \\ 1/\beta & \alpha/\beta^2 \end{pmatrix}$$

$$B(\alpha, \beta):\quad I(\alpha, \beta) = \begin{pmatrix} \psi'(\alpha) - \psi'(\alpha+\beta) & -\psi'(\alpha+\beta) \\ -\psi'(\alpha+\beta) & \psi'(\beta) - \psi'(\alpha+\beta) \end{pmatrix}$$

Example 6.5 Information in location-scale families. For the location-scale families with density $(1/\theta_2)f((x - \theta_1)/\theta_2)$, $\theta_2 > 0$, $f(x) > 0$ for all $x$, the elements of the information matrix are (Problem 6.5)

$$I_{11} = \frac{1}{\theta_2^2}\int\left[\frac{f'(y)}{f(y)}\right]^2 f(y)\,dy, \qquad I_{22} = \frac{1}{\theta_2^2}\int\left[\frac{yf'(y)}{f(y)} + 1\right]^2 f(y)\,dy, \tag{6.20}$$

and

$$I_{12} = \frac{1}{\theta_2^2}\int y\left[\frac{f'(y)}{f(y)}\right]^2 f(y)\,dy. \tag{6.21}$$

The covariance term $I_{12}$ is zero whenever $f$ is symmetric about the origin. ∥
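The first entry of Table 6.1 is easy to confirm by simulation. The Python sketch below is ours (the values of $\xi$, $\sigma$, and the sample size are arbitrary): it estimates the expectations (6.8) by Monte Carlo using the exact scores of the $N(\xi, \sigma^2)$ density in the $(\xi, \sigma)$ parametrization.

```python
# Monte Carlo estimate of the information matrix (6.8) for N(xi, sigma^2)
# in the (xi, sigma) parametrization; Table 6.1 gives diag(1/sigma^2, 2/sigma^2).
import random

xi, sigma = 1.0, 2.0
random.seed(1)
n = 200_000
I11 = I12 = I22 = 0.0
for _ in range(n):
    x = random.gauss(xi, sigma)
    s1 = (x - xi) / sigma**2                   # d/dxi    log p
    s2 = -1 / sigma + (x - xi)**2 / sigma**3   # d/dsigma log p
    I11 += s1 * s1; I12 += s1 * s2; I22 += s2 * s2
print(I11 / n, I12 / n, I22 / n)               # ~ 0.25, ~ 0, ~ 0.5
```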
Let us now generalize Theorems 5.10 and 5.15 to the multiparameter case in which $\theta = (\theta_1, \ldots, \theta_s)$. For convenience, we state the generalizations in one theorem.

Theorem 6.6 (Multiparameter Information Inequality) Suppose that (6.6) holds and $I(\theta)$ is positive definite. Let $\delta$ be any statistic for which $E_\theta(|\delta|^2) < \infty$ and either

(i) for $i = 1, \ldots, s$, $(\partial/\partial\theta_i)E_\theta\delta$ exists and can be obtained by differentiating under the integral sign, or

(ii) there exist functions $b_\theta^{(i)}$, $i = 1, \ldots, s$, with $E_\theta[b_\theta^{(i)}(X)]^2 < \infty$ that satisfy

$$\left|\frac{p_{\theta + W\epsilon_i}(x) - p_\theta(x)}{Wp_\theta(x)}\right| \le b_\theta^{(i)}(x) \quad\text{for all } W,$$

where $\epsilon_i \in \mathbb{R}^s$ is the unit vector with 1 in the $i$th position and zeros elsewhere.

Then, $E_\theta[(\partial/\partial\theta_i)\log p_\theta(X)] = 0$ and

$$\operatorname{var}_\theta(\delta) \ge \alpha'I^{-1}(\theta)\alpha \tag{6.22}$$

where $\alpha'$ is the row matrix with $i$th element

$$\alpha_i = \frac{\partial}{\partial\theta_i}E_\theta[\delta(X)]. \tag{6.23}$$

Proof. If the functions $\psi_i$ of Theorem 6.1 are taken to be $\psi_i = (\partial/\partial\theta_i)\log p_\theta(X)$, (6.22) follows from (6.1) and (6.10). ✷

If $\delta$ is an estimator of $g(\theta)$ and $b(\theta)$ is its bias, then (6.23) reduces to

$$\alpha_i = \frac{\partial}{\partial\theta_i}[b(\theta) + g(\theta)]. \tag{6.24}$$

It is interesting to compare the lower bound (6.22) with the corresponding bound when the $\theta$'s other than $\theta_i$ are known. By Theorem 5.15, the latter is equal to $[(\partial/\partial\theta_i)E_\theta(\delta)]^2/I_{ii}(\theta)$. This is the bound obtained by setting $a = \epsilon_i$ in (6.4), where $\epsilon_i$ is the $i$th unit vector. For example, if the $\alpha$'s other than $\alpha_i$ are zero, then the only nonzero element of the vector $\alpha$ of (6.22) is $\alpha_i$. Since (6.22) was obtained by maximizing (6.4), comparing the two bounds shows

$$I_{ii}^{-1}(\theta) \le \|I^{-1}(\theta)\|_{ii}. \tag{6.25}$$

(See Problem 6.10 for a different derivation.) The two sides of (6.25) are equal if

$$I_{ij}(\theta) = 0 \quad\text{for all } j \ne i, \tag{6.26}$$

as is seen from the definition of the inverse of a matrix, and, in fact, (6.26) is also necessary for equality in (6.25) (Problem 6.10). When (6.26) holds, the parameters are said to be orthogonal. This is illustrated by the first matrix in Table 6.1: there, the information bound for one of the parameters is independent of whether the other parameter is known. This is not the case, however, in the second and third situations in Table 6.1, where the value of one parameter affects the information for another. Some implications of these results for estimation will be taken up in Section 6.6. (Cox and Reid (1987) discuss methods for obtaining parameter orthogonality, and some of its consequences; see also Barndorff-Nielsen and Cox 1994.)

In a manner analogous to the one-parameter case, it can be shown that the information inequality bound is attained only if $\delta(x)$ has the form

$$\delta(x) = g(\theta) + [\nabla g(\theta)]'I(\theta)^{-1}[\nabla\log p_\theta(x)], \tag{6.27}$$

where $E\delta = g(\theta)$, $\nabla g(\theta) = \{(\partial/\partial\theta_i)g(\theta),\ i = 1, 2, \ldots, s\}$, and $\nabla\log p_\theta(x) = \{(\partial/\partial\theta_i)\log p_\theta(x),\ i = 1, 2, \ldots, s\}$. It is also the case, analogous to Theorem 5.12, that if the bound is attainable, then the underlying family of distributions constitutes an exponential family (Joshi 1976; Fabian and Hannan 1977; Müller-Funk et al. 1989).

The information inequalities (5.31) and (6.22) have been extended in a number of directions, some of which are briefly sketched in the following.

(a) When the lower bound is not sharp, it can usually be improved by considering not only the derivatives $\psi_i$ but also higher derivatives:

$$\psi_{i_1,\ldots,i_s} = \frac{1}{p_\theta(x)}\,\frac{\partial^{i_1+\cdots+i_s}p_\theta(x)}{\partial\theta_1^{i_1}\cdots\partial\theta_s^{i_s}}. \tag{6.28}$$

It is then easy to generalize (5.31) and (6.22) to obtain a lower bound based on any given set $S$ of the $\psi$'s. Assume (6.6) with (c) replaced by the corresponding assumption for all the derivatives needed for the set $S$, and suppose that the covariance matrix $K(\theta)$ of the given set of $\psi$'s is positive definite. Then, (6.1) yields the Bhattacharyya inequality

$$\operatorname{var}_\theta(\delta) \ge \alpha'K^{-1}(\theta)\alpha \tag{6.29}$$

where $\alpha'$ is the row matrix with elements

$$\frac{\partial^{i_1+\cdots+i_s}}{\partial\theta_1^{i_1}\cdots\partial\theta_s^{i_s}}E_\theta\delta(X) = \operatorname{cov}(\delta, \psi_{i_1,\ldots,i_s}). \tag{6.30}$$

It is also seen that equality holds in (6.29) if and only if $\delta$ is a linear function of the $\psi$'s in $S$ (Problem 6.12). The problem of whether the Bhattacharyya bounds become sharp as $s \to \infty$ has been investigated for some one-parameter cases by Blight and Rao (1974).

(b) A different kind of extension avoids the need for regularity conditions by considering differences instead of derivatives. (See Hammersley 1950, Chapman and Robbins 1951, Kiefer 1952, Fraser and Guttman 1952, Fend 1959, Sen and Ghosh 1976, Chatterji 1982, and Klaassen 1984, 1985.)
(c) Applications of the inequality to the sequential case, in which the number of observations is not a fixed integer but a random variable, say $N$, determined from the observations, are provided by Wolfowitz (1947), Blackwell and Girshick (1947), and Seth (1949). Under suitable regularity conditions, (5.33) then continues to hold with $n$ replaced by $E_\theta(N)$; see also Simons 1980, Govindarajulu and Vincze 1989, and Stefanov 1990.

(d) Other extensions include arbitrary convex loss functions (Kozek 1976); weighted loss functions (Mikulski and Monsour 1988); the case that $g$ and $\delta$ are vector-valued (Rao 1945, Cramér 1946b, Seth 1949, Shemyakin 1987, and Rao 1992); nonparametric problems (Vincze 1992); location problems (Klaassen 1984); and density estimation (Brown and Farrell 1990).

7 Problems

Section 1

1.1 Verify (a) that (1.4) defines a probability distribution and (b) condition (1.5).

1.2 In Example 1.5, show that $a_i^*$ minimizes (1.6) for $i = 0, 1$, and simplify the expression for $a_0^*$. [Hint: $\sum\kappa p^{\kappa-1}$ and $\sum\kappa(\kappa-1)p^{\kappa-2}$ are the first and second derivatives of $\sum p^\kappa = 1/q$.]

1.3 Let $X$ take on the values $-1, 0, 1, 2, 3$ with probabilities $P(X = -1) = 2pq$ and $P(X = k) = p^kq^{3-k}$ for $k = 0, 1, 2, 3$.

(a) Check that this is a probability distribution.

(b) Determine the LMVU estimator at $p_0$ of (i) $p$ and (ii) $pq$, and decide for each whether it is UMVU.

1.4 For a sample of size $n$, suppose that the estimator $T(x)$ of $\tau(\theta)$ has expectation

$$E[T(X)] = \tau(\theta) + \sum_{k=1}^{\infty}\frac{a_k}{n^k},$$

where $a_k$ may depend on $\theta$ but not on $n$.

(a) Show that the expectation of the jackknife estimator $T_J$ of (1.3) is

$$E[T_J(X)] = \tau(\theta) - \frac{a_2}{n^2} + O(1/n^3).$$

(b) Show that if $\operatorname{var} T \sim c/n$ for some constant $c$, then $\operatorname{var} T_J \sim c'/n$ for some constant $c'$. Thus, the jackknife will reduce bias and not increase variance. A second-order jackknife can be defined by jackknifing $T_J$, and this will result in further bias reduction, but may not maintain a variance of the same order (Robson and Whitlock 1964; see also Thorburn 1976 and Note 8.3).

1.5 (a) Any two random variables $X$ and $Y$ with finite second moments satisfy the covariance inequality

$$[\operatorname{cov}(X, Y)]^2 \le \operatorname{var}(X)\cdot\operatorname{var}(Y).$$

(b) The inequality in part (a) is an equality if and only if there exist constants $a$ and $b$ for which $P(X = aY + b) = 1$.

[Hint: Part (a) follows from the Schwarz inequality (Problem 1.7.20) with $f = X - E(X)$ and $g = Y - E(Y)$.]

1.6 An alternative proof of the Schwarz inequality is obtained by noting that

$$\int(f + \lambda g)^2\,dP = \int f^2\,dP + 2\lambda\int fg\,dP + \lambda^2\int g^2\,dP \ge 0$$

for all $\lambda$, so that this quadratic in $\lambda$ has at most one root.

1.7 Suppose $X$ is distributed on (0, 1) with probability density

$$p_\theta(x) = (1 - \theta) + \theta/(2\sqrt{x}) \quad\text{for all } 0 < x < 1,\ 0 \le \theta \le 1.$$

Show that there does not exist an LMVU estimator of $\theta$. [Hint: Let $\delta(x) = a[x^{-1/2} + b]$ for $c < x < 1$ and $\delta(x) = 0$ for $0 < x < c$. There exist values $a$, $b$, and $c$ such that $E_0(\delta) = 0$ and $E_1(\delta) = 1$ (and hence $\delta$ is unbiased) and that $E_0(\delta^2)$ is arbitrarily close to zero (Stein 1950).]

1.8 If $\delta$ and $\delta'$ have finite variance, so does $\delta' - \delta$. [Hint: Problem 1.5.]

1.9 In Example 1.9, (a) determine all unbiased estimators of zero; (b) show that no nonconstant estimator is UMVU.

1.10 If estimators are restricted to the class of linear estimators, characterization of best unbiased estimators is somewhat easier. Although the following is a consequence of Theorem 1.7, it should be established without using that theorem. Let $X_{p\times1}$ satisfy $E(X) = B\psi$ and $\operatorname{var}(X) = I$, where $B_{p\times r}$ is known and $\psi_{r\times1}$ is unknown. A linear estimator is an estimator of the form $a'X$, where $a_{p\times1}$ is a known vector.
We are concerned with the class of estimators $\mathcal{D} = \{\delta(x) : \delta(x) = a'x \text{ for some known vector } a\}$.
(a) For a known vector $c$, show that the estimators in $\mathcal{D}$ that are unbiased estimators of $c'\psi$ satisfy $a'B = c'$.
(b) Let $\mathcal{D}_c = \{\delta(x) : \delta(x) = a'x,\ a'B = c'\}$ be the class of linear unbiased estimators of $c'\psi$. Show that the best linear unbiased estimator (BLUE) of $c'\psi$, the linear unbiased estimator with minimum variance, is $\delta^*(x) = a^{*\prime}x$, where $a^{*\prime} = a'B(B'B)^{-1}B' = c'(B'B)^{-1}B'$ and $a^{*\prime}B = c'$, with variance $\operatorname{var}(\delta^*) = c'(B'B)^{-1}c$.
(c) Let $\mathcal{D}_0 = \{\delta(x) : \delta(x) = a'x,\ a'B = 0\}$ be the class of linear unbiased estimators of zero. Show that if $\delta \in \mathcal{D}_0$, then $\operatorname{cov}(\delta, \delta^*) = 0$.
(d) Hence, establish the analog of Theorem 1.7 for linear estimators:
Theorem. An estimator $\delta^* \in \mathcal{D}_c$ satisfies $\operatorname{var}(\delta^*) = \min_{\delta\in\mathcal{D}_c}\operatorname{var}(\delta)$ if and only if $\operatorname{cov}(\delta^*, U) = 0$, where $U$ is any estimator in $\mathcal{D}_0$.
(e) Show that the results here can be directly extended to the case of $\operatorname{var}(X) = \Sigma$, where $\Sigma_{p\times p}$ is a known matrix, by considering the transformed problem with $X^* = \Sigma^{-1/2}X$ and $B^* = \Sigma^{-1/2}B$.

1.11 Use Theorem 1.7 to find UMVU estimators of some of the $\eta_\theta(d_i)$ in the dose-response model (1.6.16), with the restriction (1.6.17) (Messig and Strawderman 1993). Let the classes $W$ and $U$ be defined as in Theorem 1.7.
(a) Show that an estimator $U \in U$ if and only if $U(x_1, x_2) = a[I(x_1 = 0) - I(x_2 = 0)]$ for an arbitrary finite constant $a$.
(b) Using part (a) and (1.7), show that an estimator $\delta$ is UMVU for its expectation only if it is of the form
$$\delta(x_1, x_2) = aI_{(0,0)}(x_1, x_2) + bI_{\{(0,1),(1,0),(2,0)\}}(x_1, x_2) + cI_{(1,1)}(x_1, x_2) + dI_{(2,1)}(x_1, x_2),$$
where $a$, $b$, $c$, and $d$ are arbitrary constants.
(c) Show that there does not exist a UMVU estimator of $\eta_\theta(d_1) = 1 - e^{-\theta}$, but the UMVU estimator of $\eta_\theta(d_2) = 1 - e^{-2\theta}$ is $\delta(x_1, x_2) = 1 - \frac{1}{2}[I(x_1 = 0) + I(x_2 = 0)]$.
(d) Show that the LMVU estimator of $1 - e^{-\theta}$ is
$$\delta(x_1, x_2) = \frac{x_1}{2} + \frac{1}{2(1 + e^{-\theta})}[I(x_1 = 0) - I(x_2 = 0)].$$

1.12 Show that if $\delta(X)$ is a UMVU estimator of $g(\theta)$, it is the unique UMVU estimator of $g(\theta)$. (Do not assume completeness, but rather use the covariance inequality and the conditions under which it is an equality.)

1.13 If $\delta_1$ and $\delta_2$ are in $W$ and are UMVU estimators of $g_1(\theta)$ and $g_2(\theta)$, respectively, then $a_1\delta_1 + a_2\delta_2$ is also in $W$ and is UMVU for estimating $a_1g_1(\theta) + a_2g_2(\theta)$, for any real $a_1$ and $a_2$.

1.14 Completeness of $T$ is not only sufficient but also necessary so that every $g(\theta)$ that can be estimated unbiasedly has only one unbiased estimator that is a function of $T$.

1.15 Suppose $X_1, \ldots, X_n$ are iid Poisson($\lambda$).
(a) Show that $\bar{X}$ is the UMVU estimator for $\lambda$.
(b) For $S^2 = \sum_{i=1}^{n}(X_i - \bar{X})^2/(n-1)$, we have $ES^2 = E\bar{X} = \lambda$. To directly establish that $\operatorname{var} S^2 > \operatorname{var}\bar{X}$, prove that $E(S^2|\bar{X}) = \bar{X}$.
Note: The identity $E(S^2|\bar{X}) = \bar{X}$ shows how completeness can be used in calculating conditional expectations.

1.16 (a) If $X_1, \ldots, X_n$ are iid (not necessarily normal) with $\operatorname{var}(X_i) = \sigma^2 < \infty$, show that $\delta = \sum(X_i - \bar{X})^2/(n-1)$ is an unbiased estimator of $\sigma^2$.
(b) If the $X_i$ take on the values 1 and 0 with probabilities $p$ and $q = 1 - p$, the estimator $\delta$ of (a) depends only on $T = \sum X_i$ and hence is UMVU for estimating $\sigma^2 = pq$. Compare this result with that of Example 1.13.

1.17 If $T$ has the binomial distribution $b(p, n)$ with $n > 3$, use Method 1 to find the UMVU estimator of $p^3$.

1.18 Let $X_1, \ldots, X_n$ be iid according to the Poisson distribution $P(\lambda)$. Use Method 1 to find the UMVU estimator of (a) $\lambda^k$ for any positive integer $k$ and (b) $e^{-\lambda}$.

1.19 Let $X_1, \ldots, X_n$ be distributed as in Example 1.14. Use Method 1 to find the UMVU estimator of $\theta^k$ for any integer $k > -n$.
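Unbiasedness claims of this kind are easy to sanity-check by simulation. The following is a minimal Monte Carlo sketch (assuming NumPy; the sample size and parameter value are arbitrary choices); it checks the estimator $(1 - 1/n)^T$ of $e^{-\lambda}$, with $T = \sum X_i$, which answers Problem 1.18(b) and reappears as the case $b = 1$ of Problem 3.23:

```python
import numpy as np

rng = np.random.default_rng(0)
lam, n, reps = 2.0, 5, 200_000

# T = sum of the X's is sufficient; T ~ Poisson(n * lam).
T = rng.poisson(n * lam, size=reps)
delta = (1.0 - 1.0 / n) ** T          # candidate unbiased estimator of e^{-lam}

print(delta.mean(), np.exp(-lam))     # both approximately 0.1353
```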
1.20 Solve Problem 1.18(b) by Method 2, using the fact that an unbiased estimator of $e^{-\lambda}$ is $\delta = 1$ if $X_1 = 0$, and $\delta = 0$ otherwise.

1.21 In $n$ binomial trials, let $X_i = 1$ or 0 as the $i$th trial is a success or failure, and let $T = \sum X_i$. Solve Problem 1.17 by Method 2, using the fact that an unbiased estimator of $p^3$ is $\delta = 1$ if $X_1 = X_2 = X_3 = 1$, and $\delta = 0$ otherwise.

1.22 Let $X$ take on the values 1 and 0 with probability $p$ and $q$, respectively, and assume that $1/4 < p < 3/4$. Consider the problem of estimating $p$ with loss function $L(p, d) = 1$ if $|d - p| \geq 1/4$, and 0 otherwise. Let $\delta^*$ be the randomized estimator which is $Y_0$ or $Y_1$ when $X = 0$ or 1, where $Y_0$ and $Y_1$ are distributed as $U(-1/2, 1/2)$ and $U(1/2, 3/2)$, respectively.
(a) Show that $\delta^*$ is unbiased.
(b) Compare the risk function of $\delta^*$ with that of $X$.

Section 2

2.1 If $X_1, \ldots, X_n$ are iid as $N(\xi, \sigma^2)$ with $\sigma^2$ known, find the UMVU estimator of (a) $\xi^2$, (b) $\xi^3$, and (c) $\xi^4$. [Hint: To evaluate the expectation of $\bar{X}^k$, write $\bar{X} = Y + \xi$, where $Y$ is $N(0, \sigma^2/n)$, and expand $E(Y + \xi)^k$.]

2.2 Solve the preceding problem when $\sigma$ is unknown.

2.3 In Example 2.1 with $\sigma$ known, let $\delta = \sum c_i X_i$ be any linear estimator of $\xi$. If $\delta$ is biased, show that its risk $E(\delta - \xi)^2$ is unbounded. [Hint: If $\sum c_i = 1 + k$, the risk is $\geq k^2\xi^2$.]

2.4 Suppose, as in Example 2.1, that $X_1, \ldots, X_n$ are iid as $N(\xi, \sigma^2)$, with one of the parameters known, and that the estimand is a polynomial in $\xi$ or $\sigma$. Then, the UMVU estimator is a polynomial in $\bar{X}$ or $S^2 = \sum(X_i - \xi)^2$. The variance of any such polynomial can be estimated if one knows the moments $E(\bar{X}^k)$ and $E(S^k)$ for all $k = 1, 2, \ldots$. To determine $E(\bar{X}^k)$, write $\bar{X} = Y + \xi$, where $Y$ is distributed as $N(0, \sigma^2/n)$. Show that
(a)
$$E(\bar{X}^k) = \sum_{r=0}^{k}\binom{k}{r}\xi^{k-r}E(Y^r) \quad\text{with}\quad E(Y^r) = \begin{cases} (r-1)(r-3)\cdots 3\cdot 1\,(\sigma^2/n)^{r/2} & \text{when } r \geq 2 \text{ is even},\\ 0 & \text{when } r \text{ is odd}.\end{cases}$$
(b) As an example, consider the UMVU estimator $S^2/n$ of $\sigma^2$. Show that $E(S^4) = n(n+2)\sigma^4$ and
$$\operatorname{var}\left(\frac{S^2}{n}\right) = \frac{2\sigma^4}{n},$$
and that the UMVU estimator of this variance is $2S^4/[n^2(n+2)]$.

2.5 In Example 2.1, when both parameters are unknown, show that the UMVU estimator of $\xi^2$ is given by $\delta = \bar{X}^2 - S^2/[n(n-1)]$, where now $S^2 = \sum(X_i - \bar{X})^2$.

2.6 (a) Determine the variance of the estimator of Problem 2.5.
(b) Find the UMVU estimator of the variance in part (a).

2.7 If $X$ is a single observation from $N(\xi, \sigma^2)$, show that no unbiased estimator $\delta$ of $\sigma^2$ exists when $\xi$ is unknown. [Hint: For fixed $\sigma = a$, $X$ is a complete sufficient statistic for $\xi$, and $E[\delta(X)] = a^2$ for all $\xi$ implies $\delta(x) = a^2$ a.e.]

2.8 Let $X_i$, $i = 1, \ldots, n$, be independently distributed as $N(\alpha + \beta t_i, \sigma^2)$, where $\alpha$, $\beta$, and $\sigma^2$ are unknown and the $t$'s are known constants that are not all equal. Find the UMVU estimators of $\alpha$ and $\beta$.

2.9 In Example 2.2 with $n = 1$, the UMVU estimator of $p$ is the indicator of the event $X_1 \leq u$, whether $\sigma$ is known or unknown.

2.10 Verify Equation (2.14), the density of $(X_1 - \bar{X})/S$ in normal sampling. [The UMVU estimator in (2.13) is used by Kiefer (1977) as an example of his estimated confidence approach.]

2.11 Assuming (2.15) with $\sigma = \tau$, determine the UMVU estimators of $\sigma^2$ and $(\eta - \xi)/\sigma$.

2.12 Assuming (2.15) with $\eta = \xi$ and $\sigma^2/\tau^2 = \gamma$, show that when $\gamma$ is known:
(a) $T'$ defined in Example 2.3(iii) is a complete sufficient statistic;
(b) $\delta_\gamma$ is UMVU for $\xi$.

2.13 Show that in the preceding problem with $\gamma$ unknown,
(a) a UMVU estimator of $\xi$ does not exist;
(b) the estimator $\hat{\xi}$ is unbiased under the conditions stated in Example 2.3.
[Hint: (i) Problem 2.12(b) and the fact that $\delta_\gamma$ is unbiased for $\xi$ even when $\sigma^2/\tau^2 \neq \gamma$. (ii) Condition on $(S_X, S_Y)$.]

2.14 For the model (2.15), find the UMVU estimator of $P(X_1 < Y_1)$ when (a) $\sigma = \tau$ and (b) $\sigma$ and $\tau$ are arbitrary. [Hint: Use the conditional density (2.13) of $X_1$ given $\bar{X}, S^2_X$ and that of $Y_1$ given $\bar{Y}, S^2_Y$ to determine the conditional density of $Y_1 - X_1$ given $\bar{X}, \bar{Y}, S^2_X$, and $S^2_Y$.]

2.15 If $(X_1, Y_1), \ldots, (X_n, Y_n)$ are iid according to any bivariate distribution with finite second moments, show that $S_{XY}/(n-1)$ given by (2.17) is an unbiased estimator of $\operatorname{cov}(X_i, Y_i)$.

2.16 In a sample of size $N = n + k + l$, some of the observations are missing. Assume that $(X_i, Y_i)$, $i = 1, \ldots, n$, are iid according to the bivariate normal distribution (2.16), and that $U_1, \ldots, U_k$ and $V_1, \ldots, V_l$ are independent $N(\xi, \sigma^2)$ and $N(\eta, \tau^2)$, respectively.
(a) Show that the minimal sufficient statistics are complete when $\xi$ and $\eta$ are known but not when they are unknown.
(b) When $\xi$ and $\eta$ are known, find the UMVU estimators for $\sigma^2$, $\tau^2$, and $\rho\sigma\tau$, and suggest reasonable unbiased estimators for these parameters when $\xi$ and $\eta$ are unknown.

2.17 For the family (2.22), show that the UMVU estimator of $a$ when $b$ is known and the UMVU estimator of $b$ when $a$ is known are as stated in Example 2.5. [Hint: Problem 6.18.]

2.18 Show that the estimators (2.23) are UMVU. [Hint: Problem 1.6.18.]

2.19 For the family (2.22) with $b = 1$, find the UMVU estimator of $P(X_1 \geq u)$ and of the density $e^{-(u-a)}$ of $X_1$ at $u$. [Hint: Obtain the estimator $\delta(X_{(1)})$ of the density by applying Method 2 of Section 2.1 and then the estimator of the probability by integration. Alternatively, one can first obtain the estimator of the probability as $P(X_1 \geq u | X_{(1)})$, using the fact that $X_1 - X_{(1)}$ is ancillary and that, given $X_{(1)}$, $X_1$ is either equal to $X_{(1)}$ or distributed as $E(X_{(1)}, 1)$.]

2.20 Find the UMVU estimator of $P(X_1 \geq u)$ for the family (2.22) when both $a$ and $b$ are unknown.

2.21 Let $X_1, \ldots, X_m$ and $Y_1, \ldots, Y_n$ be independently distributed as $E(a, b)$ and $E(a', b')$, respectively.
(a) If $a$, $b$, $a'$, and $b'$ are completely unknown, $X_{(1)}$, $Y_{(1)}$, $\sum[X_i - X_{(1)}]$, and $\sum[Y_j - Y_{(1)}]$ jointly are sufficient and complete.
(b) Find the UMVU estimators of $a' - a$ and $b'/b$.

2.22 In the preceding problem, suppose that $b' = b$.
(a) Show that $X_{(1)}$, $Y_{(1)}$, and $\sum[X_i - X_{(1)}] + \sum[Y_j - Y_{(1)}]$ are sufficient and complete.
(b) Find the UMVU estimators of $b$ and $(a' - a)/b$.

2.23 In Problem 2.21, suppose that $a' = a$.
(a) Show that the complete sufficient statistic of Problem 2.21(a) is still minimal sufficient but no longer complete.
(b) Show that a UMVU estimator of the common value $a$ does not exist.
(c) Suggest a reasonable unbiased estimator of $a$.

2.24 Let $X_1, \ldots, X_n$ be iid according to the uniform distribution $U(\xi - b, \xi + b)$. If $\xi$ and $b$ are both unknown, find the UMVU estimators of $\xi$, $b$, and $\xi/b$. [Hint: Problem 1.6.30.]

2.25 Let $X_1, \ldots, X_m$ and $Y_1, \ldots, Y_n$ be iid as $U(0, \theta)$ and $U(0, \theta')$, respectively. If $n > 1$, determine the UMVU estimator of $\theta/\theta'$.

2.26 Verify the ML estimators given in (2.24).

2.27 In Example 2.6(b), show that
(a) the bias of the ML estimator is 0 when $\xi = u$;
(b) at $\xi = u$, the ML estimator has smaller expected squared error than the UMVU estimator.
[Hint: In (b), note that $u - \bar{X}$ is always closer to 0 than $\frac{n}{n-1}(u - \bar{X})$.]

2.28 Verify (2.26).

2.29 Under the assumptions of Lemma 2.7, show that:
(a) If $b$ is replaced by any random variable $B$ which is independent of $X$ and not 0 with probability 1, then $R_\delta(\theta) < R_{\delta^*}(\theta)$.
(b) If squared error is replaced by any loss function of the form $L(\theta, d) = \rho(d - \theta)$ and $\delta$ is risk unbiased with respect to $L$, then $R_\delta(\theta) < R_{\delta^*}(\theta)$.

Section 3

3.1 (a) In Example 3.1, show that $\sum(X_i - \bar{X})^2 = T(n - T)/n$.
(b) The variance of $T(n - T)/[n(n-1)]$ in Example 3.1 is $(pq/n)[(q - p)^2 + 2pq/(n-1)]$.

3.2 If $T$ is distributed as $b(p, n)$, find an unbiased estimator $\delta(T)$ of $p^m$ ($m \leq n$) by Method 1, that is, using (1.10). [Hint: Example 1.13.]

3.3 (a) Use the method leading to (3.2) to find the UMVU estimator $\pi_k(T)$ of
$$P[X_1 + \cdots + X_m = k] = \binom{m}{k}p^k q^{m-k} \quad (m \leq n).$$
(b) For fixed $t$ and varying $k$, show that the $\pi_k(t)$ are the probabilities of a hypergeometric distribution.

3.4 If $Y$ is distributed according to (3.3), use Method 1 of Section 2.1
(a) to show that the UMVU estimator of $p^r$ ($r < m$) is
$$\delta(y) = \frac{(m-r+y-1)(m-r+y-2)\cdots(m-r)}{(m+y-1)(m+y-2)\cdots m},$$
and hence in particular that the UMVU estimators of $1/p$, $1/p^2$, and $p$ are, respectively, $(m+y)/m$, $(m+y)(m+y+1)/[m(m+1)]$, and $(m-1)/(m+y-1)$;
(b) to determine the UMVU estimator of $\operatorname{var}(Y)$;
(c) to show how to calculate the UMVU estimator $\delta$ of $\log p$.

3.5 Consider the scheme in which binomial sampling is continued until at least $a$ successes and $b$ failures have been obtained. Show how to calculate a reasonable estimator of $\log(p/q)$. [Hint: To obtain an unbiased estimator of $\log p$, modify the UMVU estimator $\delta$ of Problem 3.4(c).]

3.6 If binomial sampling is continued until $m$ successes have been obtained, let $X_i$ ($i = 1, \ldots, m$) be the number of failures between the $(i-1)$st and $i$th success.
(a) The $X_i$ are iid according to the geometric distribution $P(X_i = x) = pq^x$, $x = 0, 1, \ldots$.
(b) The statistic $Y = \sum X_i$ is sufficient for $(X_1, \ldots, X_m)$ and has the distribution (3.3).

3.7 Suppose that binomial sampling is continued until the number of successes equals the number of failures.
(a) This rule is closed if $p = 1/2$ but not otherwise.
(b) If $p = 1/2$ and $N$ denotes the number of trials required, $E(N) = \infty$.

3.8 Verify Equation (3.7) with the appropriate definition of $N'(x, y)$ (a) for the estimation of $p$ and (b) for the estimation of $p^a q^b$.

3.9 Consider sequential binomial sampling with the stopping points $(0, 1)$ and $(2, y)$, $y = 0, 1, \ldots$.
(a) Show that this plan is closed and simple.
(b) Show that $(X, Y)$ is not complete by finding a nontrivial unbiased estimator of zero.

3.10 In Example 3.4(ii), (a) show that the plan is closed but not simple, (b) show that $(X, Y)$ is not complete, and (c) evaluate the unbiased estimator (3.7) of $p$.

3.11 Curtailed single sampling. Let $a$, $b$, and $n$ be three positive integers with $a, b < n$. Continue observation until either $a$ successes, $b$ failures, or $n$ observations have been obtained. Determine the UMVU estimator of $p$.

3.12 For any sequential binomial sampling plan, the coordinates $(X, Y)$ of the end point of the sample path are minimal sufficient.

3.13 Consider any closed sequential binomial sampling plan with a set $B$ of stopping points, and let $B'$ be the set $B \cup \{(x_0, y_0)\}$, where $(x_0, y_0)$ is a point not in $B$ that has positive probability of being reached under plan $B$. Show that the sufficient statistic $T = (X, Y)$ is not complete for the sampling plan which has $B'$ as its set of stopping points. [Hint: For any point $(x, y) \in B$, let $N(x, y)$ and $N'(x, y)$ denote the number of paths to $(x, y)$ when the set of stopping points is $B$ and $B'$, respectively, and let $N(x_0, y_0) = 0$, $N'(x_0, y_0) = 1$. Then, the statistic $1 - [N(X, Y)/N'(X, Y)]$ has expectation 0 under $B'$ for all values of $p$.]
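The inverse binomial sampling plan of Problem 3.6 lends itself to the same kind of numerical check. This hedged sketch (assuming NumPy; the parameter values are arbitrary) verifies that the estimator $(m-1)/(m+Y-1)$ of Problem 3.4(a) is unbiased for $p$:

```python
import numpy as np

rng = np.random.default_rng(1)
m, p, reps = 4, 0.3, 400_000

# Y = number of failures before the m-th success (inverse binomial sampling).
Y = rng.negative_binomial(m, p, size=reps)

# Candidate UMVU estimator of p from Problem 3.4(a): (m-1)/(m+Y-1).
delta = (m - 1) / (m + Y - 1)
print(delta.mean(), p)   # both approximately 0.3
```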
3.14 For any sequential binomial sampling plan under which the point $(1, 1)$ is reached with positive probability but is not a stopping point, find an unbiased estimator of $pq$ depending only on $(X, Y)$. Evaluate this estimator for (a) a sample of fixed size $n > 2$ and (b) inverse binomial sampling.

3.15 Use (3.3) to determine $A(t, n)$ in (3.11) for the negative binomial distribution with $m = n$, and evaluate the estimators (3.13) of $q^r$, and (3.14).

3.16 Consider $n$ binomial trials with success probability $p$, and let $r$ and $s$ be two positive integers with $r + s < n$. To the boundary $x + y = n$, add the boundary point $(r, s)$; that is, if the number of successes in the first $r + s$ trials is exactly $r$, the process is stopped and the remaining $n - (r + s)$ trials are not performed.
(a) Show that $U$ is an unbiased estimator of zero if and only if $U(k, n-k) = 0$ for $k = 0, 1, \ldots, r-1$ and $k = n-s+1, n-s+2, \ldots, n$, and $U(k, n-k) = c_k U(r, s)$ for $k = r, \ldots, n-s$, where the $c$'s are given constants $\neq 0$.
(b) Show that $\delta$ is the UMVU estimator of its expectation if and only if $\delta(k, n-k) = \delta(r, s)$ for $k = r, \ldots, n-s$.

3.17 Generalize the preceding problem to the case that two points $(r_1, s_1)$ and $(r_2, s_2)$ with $r_i + s_i < n$ are added to the boundary. Assume that these two points are such that all $n+1$ points $x + y = n$ remain boundary points. [Hint: Distinguish the three cases that the intervals $(r_1, s_1)$ and $(r_2, s_2)$ are (i) mutually exclusive, (ii) one contained in the other, and (iii) overlapping but neither contained in the other.]

3.18 If $X$ has the Poisson distribution $P(\theta)$, show that $1/\theta$ does not have an unbiased estimator.

3.19 If $X_1, \ldots, X_n$ are iid according to (3.18), the Poisson distribution truncated on the left at 0, find the UMVU estimator of $\theta$ when (a) $n = 1$ and (b) $n = 2$.

3.20 Let $X_1, \ldots, X_n$ be a sample from the Poisson distribution truncated on the left at 0, i.e., with sample space $\mathcal{X} = \{1, 2, 3, \ldots\}$.
(a) For $t = \sum x_i$, the UMVU estimator of $\lambda$ is (Tate and Goen 1958)
$$\hat{\lambda} = \frac{C^n_{t-1}}{C^n_t}\,t, \quad\text{where}\quad C^n_t = \frac{(-1)^n}{n!}\sum_{k=0}^{n}\binom{n}{k}(-1)^k k^t$$
is a Stirling number of the second kind.
(b) An alternative form of the UMVU estimator is
$$\hat{\lambda} = \frac{t}{n}\left[1 - \frac{C^{n-1}_{t-1}}{C^n_t}\right].$$
[Hint: Establish the identity $C^n_t = C^{n-1}_{t-1} + nC^n_{t-1}$.]
(c) The Cramér-Rao lower bound for the variance of unbiased estimators of $\lambda$ is $\lambda(1 - e^{-\lambda})^2/[n(1 - e^{-\lambda} - \lambda e^{-\lambda})]$, and it is not attained by the UMVU estimator. (It is, however, the asymptotic variance of the ML estimator.)

3.21 Suppose that $X$ has the Poisson distribution truncated on the right at $a$, so that it has the conditional distribution of $Y$ given $Y \leq a$, where $Y$ is distributed as $P(\lambda)$. Show that $\lambda$ does not have an unbiased estimator.

3.22 For the negative binomial distribution truncated at zero, evaluate the estimators (3.13) and (3.14) for $m = 1, 2$, and 3.

3.23 If $X_1, \ldots, X_n$ are iid $P(\lambda)$, consider estimation of $e^{-b\lambda}$, where $b$ is known.
(a) Show that $\delta^* = (1 - b/n)^T$, with $T = \sum X_i$, is the UMVU estimator of $e^{-b\lambda}$.
(b) For $b > n$, describe the behavior of $\delta^*$, and suggest why it might not be a reasonable estimator. (The probability $e^{-b\lambda}$, for $b > n$, is that of an "unobservable" event, in that it can be interpreted as the probability of no occurrence in a time interval of length $b$. A number of such situations are described and analyzed in Lehmann (1983), where it is suggested that, in these problems, no reasonable estimator may exist.)
3.24 If $X_1, \ldots, X_n$ are iid according to the logarithmic series distribution of Problem 1.5.14, evaluate the estimators (3.13) and (3.14) for $n = 1, 2$, and 3.

3.25 For the multinomial distribution of Example 3.8,
(a) show that $p_0^{r_0}\cdots p_s^{r_s}$ has an unbiased estimator provided $r_0, \ldots, r_s$ are nonnegative integers with $\sum r_i \leq n$;
(b) find the totality of functions that can be estimated unbiasedly;
(c) determine the UMVU estimator of the estimand of (a).

3.26 In Example 3.9 when $p_{ij} = p_{i+}p_{+j}$, determine the variances of the two unbiased estimators $\delta_0 = n_{ij}/n$ and $\delta_1 = n_{i+}n_{+j}/n^2$ of $p_{ij}$, and show directly that $\operatorname{var}(\delta_0) > \operatorname{var}(\delta_1)$ for all $n > 1$.

3.27 In Example 3.9, show that independence of $A$ and $B$ implies that $(n_{1+}, \ldots, n_{I+})$ and $(n_{+1}, \ldots, n_{+J})$ are independent with multinomial distributions as stated.

3.28 Verify (3.20).

3.29 Let $X$, $Y$, and $g$ be such that $E[g(X, Y)|y]$ is independent of $y$. Then, $E[f(Y)g(X, Y)] = E[f(Y)]E[g(X, Y)]$, and hence $f(Y)$ and $g(X, Y)$ are uncorrelated, for all $f$.

3.30 In Example 3.10, show that the estimator $\delta_1$ of $p_{ijk}$ is unbiased for the model (3.20). [Hint: Problem 3.29.]

Section 4

4.1 Let $X_1, \ldots, X_n$ be iid with distribution $F$.
(a) Characterize the totality of functions $f(X_1, \ldots, X_n)$ which are unbiased estimators of zero for the class $\mathcal{F}_0$ of all distributions $F$ having a density.
(b) Give one example of a nontrivial unbiased estimator of zero when (i) $n = 2$ and (ii) $n = 3$.

4.2 Let $\mathcal{F}$ be the class of all univariate distribution functions $F$ that have a probability density function $f$ and finite $m$th moment.
(a) Let $X_1, \ldots, X_n$ be independently distributed with common distribution $F \in \mathcal{F}$. For $n \geq m$, find the UMVU estimator of $\xi^m$, where $\xi = \xi(F) = EX_i$.
(b) Show that for the case that $P(X_i = 1) = p$, $P(X_i = 0) = q$, $p + q = 1$, the estimator of (a) reduces to (3.2).

4.3 In the preceding problem, show that $1/\operatorname{var}_F X_i$ does not have an unbiased estimator for any $n$.

4.4 Let $X_1, \ldots, X_n$ be iid with distribution $F \in \mathcal{F}$, where $\mathcal{F}$ is the class of all symmetric distributions with a probability density. There exists no UMVU estimator of the center of symmetry $\theta$ of $F$ (if unbiasedness is required only for the distributions $F$ for which the expectation of the estimator exists). [Hint: The UMVU estimator of $\theta$ when $F$ is $U(\theta - 1/2, \theta + 1/2)$, which was obtained in Problem 2.24, is unbiased for all $F \in \mathcal{F}$; so is $\bar{X}$.]

4.5 If $X_1, \ldots, X_m$ and $Y_1, \ldots, Y_n$ are independently distributed according to $F$ and $G \in \mathcal{F}_0$, defined in Problem 4.1, the order statistics $X_{(1)} < \cdots < X_{(m)}$ and $Y_{(1)} < \cdots < Y_{(n)}$ are sufficient and complete. [Hint: For completeness, generalize the second proof suggested in Problem 6.33.]

4.6 Under the assumptions of the preceding problem, find the UMVU estimator of $P(X_i < Y_j)$.

4.7 Under the assumptions of Problem 4.5, let $\xi = EX_i$ and $\eta = EY_j$. Show that $\xi^2\eta^2$ possesses an unbiased estimator if and only if $m \geq 2$ and $n \geq 2$.

4.8 Let $(X_1, Y_1), \ldots, (X_n, Y_n)$ be iid $F \in \mathcal{F}$, where $\mathcal{F}$ is the family of all distributions with probability density and finite second moments. Show that $\delta(X, Y) = \sum(X_i - \bar{X})(Y_i - \bar{Y})/(n-1)$ is UMVU for $\operatorname{cov}(X, Y)$.

4.9 Under the assumptions of the preceding problem, find the UMVU estimator of
(a) $P(X_i \leq Y_i)$;
(b) $P(X_i \leq X_j \text{ and } Y_i \leq Y_j)$, $i \neq j$.

4.10 Let $(X_1, Y_1), \ldots, (X_n, Y_n)$ be iid with $F \in \mathcal{F}$, where $\mathcal{F}$ is the family of all bivariate densities. Show that the sufficient statistic $T$, which generalizes the order statistics to the bivariate case, is complete. [Hint: Generalize the second proof suggested in Problem 6.33.
As an exponential family for $(X, Y)$, take the densities proportional to $e^{Q(x,y)}$, where
$$Q(x, y) = (\theta_{01}x + \theta_{10}y) + (\theta_{02}x^2 + \theta_{11}xy + \theta_{20}y^2) + \cdots + (\theta_{0n}x^n + \cdots + \theta_{n0}y^n) - x^{2n} - y^{2n}.]$$

Section 5

5.1 Under the assumptions of Problem 1.3, determine for each $p_1$ the value $LV(p_1)$ of the LMVU estimator of $p$ at $p_1$, and compare the function $LV(p)$, $0 < p < 1$, with the variance $V_{p_0}(p)$ of the estimator which is LMVU at (a) $p_0 = 1/3$ and (b) $p_0 = 1/2$.

5.2 Determine the conditions under which equality holds in (5.1).

5.3 Verify $I(\theta)$ for the distributions of Table 5.1.

5.4 If $X$ is normal with mean zero and standard deviation $\sigma$, determine $I(\sigma)$.

5.5 Find $I(p)$ for the negative binomial distribution.

5.6 If $X$ is distributed as $P(\lambda)$, show that the information it contains about $\sqrt{\lambda}$ is independent of $\lambda$.

5.7 Verify the following statements, asserted by Basu (1988, Chapter 1), which illustrate the relationship between information, sufficiency, and ancillarity. Suppose that we let $I(\theta) = E_\theta[-(\partial^2/\partial\theta^2)\log f(X|\theta)]$ be the information in $X$ about $\theta$, and let $J(\theta) = E_\theta[-(\partial^2/\partial\theta^2)\log g(T|\theta)]$ be the information about $\theta$ contained in a statistic $T$, where $g(\cdot|\theta)$ is the density function of $T$. Define $\lambda(\theta) = I(\theta) - J(\theta)$, a measure of the information lost by using $T$ instead of $X$. Under suitable regularity conditions, show that
(a) $\lambda(\theta) \geq 0$ for all $\theta$;
(b) $\lambda(\theta) = 0$ if and only if $T$ is sufficient for $\theta$;
(c) if $Y$ is ancillary but $(T, Y)$ is sufficient, then $I(\theta) = E_\theta[J(\theta|Y)]$, where
$$J(\theta|y) = E_\theta\left[-\frac{\partial^2}{\partial\theta^2}\log h(T|y, \theta)\,\Big|\,Y = y\right]$$
and $h(t|y, \theta)$ is the conditional density of $T$ given $Y = y$.
(Basu's "regularity conditions" are mainly concerned with the interchange of integration and differentiation. Assume any such interchanges are valid.)

5.8 Find a function of $\theta$ for which the amount of information is independent of $\theta$:
(a) for the gamma distribution $\Gamma(\alpha, \beta)$ with $\alpha$ known and with $\theta = \beta$;
(b) for the binomial distribution $b(p, n)$ with $\theta = p$.

5.9 For inverse binomial sampling (see Example 3.2):
(a) Show that the best unbiased estimator of $p$ is given by $\delta^*(Y) = (m-1)/(Y + m - 1)$.
(b) Show that the information contained in $Y$ about $p$ is $I(p) = m/[p^2(1-p)]$.
(c) Show that $\operatorname{var}\delta^* > 1/I(p)$.
(The estimator $\delta^*$ can be interpreted as the success rate if we ignore the last trial, which we know must be a success.)

5.10 Show that (5.13) can be differentiated by differentiating under the integral sign when $p_\theta(x)$ is given by (5.24), for each of the distributions of Table 5.2. [Hint: Form the difference quotient and apply the dominated convergence theorem.]

5.11 Verify the entries of Table 5.2.

5.12 Evaluate (5.25) when $f$ is the density of Student's $t$-distribution with $\nu$ degrees of freedom. [Hint: Use the fact that
$$\int_{-\infty}^{\infty}\frac{dx}{(1+x^2)^k} = \frac{\Gamma(1/2)\,\Gamma(k - 1/2)}{\Gamma(k)}.]$$

5.13 For the distribution with density (5.24), show that $I(\theta)$ is independent of $\theta$.

5.14 Verify (a) formula (5.25) and (b) formula (5.27).

5.15 For the location $t$ density, calculate the information inequality bound for unbiased estimators of $\theta$.

5.16 (a) For the scale family with density $(1/\theta)f(x/\theta)$, $\theta > 0$, the amount of information a single observation $X$ has about $\theta$ is
$$\frac{1}{\theta^2}\int\left[\frac{yf'(y)}{f(y)} + 1\right]^2 f(y)\,dy.$$
(b) Show that the information $X$ contains about $\xi = \log\theta$ is independent of $\theta$.
(c) For the Cauchy distribution $C(0, \theta)$, $I(\theta) = 1/(2\theta^2)$.

5.17 If $p_\theta(x)$ is given by (1.5.1) with $s = 1$ and $T(x) = \delta(x)$, show that $\operatorname{var}[\delta(X)]$ attains the lower bound (5.31) and is the only estimator to do so. [Hint: Use (5.18) and (1.5.15).]
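Computations such as those called for in Problems 5.12 and 5.15 can be checked numerically. The sketch below (assuming NumPy and SciPy; it uses the closed form $(\nu+1)/(\nu+3)$ for the location Fisher information of the $t_\nu$ density as the comparison value, a standard result not derived in the text) integrates the squared score against the density:

```python
import numpy as np
from scipy import integrate, stats

# Location Fisher information I = E[(d/dx log f(x))^2] for a Student t density.
nu = 5.0

def score_sq(x):
    # d/dx log f(x) = -(nu + 1) x / (nu + x^2) for the t_nu density
    return ((nu + 1) * x / (nu + x**2)) ** 2 * stats.t.pdf(x, df=nu)

info, _ = integrate.quad(score_sq, -np.inf, np.inf)
print(info, (nu + 1) / (nu + 3))   # both approximately 0.75
```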
5.18 Show that if a given function $g(\theta)$ has an unbiased estimator, there exists an unbiased estimator $\delta$ which for all $\theta$ values attains the lower bound (5.1) for some $\psi(x, \theta)$ satisfying (5.2) if and only if $g(\theta)$ has a UMVU estimator $\delta_0$. [Hint: By Theorem 5.1, $\psi(x, \theta) = \delta_0(x)$ satisfies (5.2). For any other unbiased $\delta$, $\operatorname{cov}(\delta - \delta_0, \delta_0) = 0$ and hence $\operatorname{var}(\delta_0) = [\operatorname{cov}(\delta, \delta_0)]^2/\operatorname{var}(\delta_0)$, so that $\psi = \delta_0$ provides an attainable bound.] (Blyth 1974.)

5.19 Show that if $E_\theta\delta = g(\theta)$ and $\operatorname{var}(\delta)$ attains the information inequality bound (5.31), then
$$\delta(x) = g(\theta) + \frac{g'(\theta)}{I(\theta)}\,\frac{\partial}{\partial\theta}\log p_\theta(x).$$

5.20 If $E_\theta\delta = g(\theta)$, the information inequality lower bound is $IB(\theta) = [g'(\theta)]^2/I(\theta)$. If $\theta = h(\xi)$, where $h$ is differentiable, show that $IB(\xi) = IB(\theta)$.

5.21 (Liu and Brown 1993) Let $X$ be an observation from the normal mixture density
$$p_\theta(x) = \frac{1}{2\sqrt{2\pi}}\left[e^{-(1/2)(x-\theta)^2} + e^{-(1/2)(x+\theta)^2}\right], \quad \theta \in \Omega,$$
where $\Omega$ is any neighborhood of zero. Thus, the random variable $X$ is either $N(\theta, 1)$ or $N(-\theta, 1)$, each with probability 1/2. Show that $\theta = 0$ is a singular point; that is, if there exists an unbiased estimator of $\theta$, it will have infinite variance at $\theta = 0$.

5.22 Let $X_1, \ldots, X_n$ be a sample from the Poisson($\lambda$) distribution truncated on the left at 0, i.e., with sample space $\mathcal{X} = \{1, 2, 3, \ldots\}$ (see Problem 3.20). Show that the Cramér-Rao lower bound for the variance of unbiased estimators of $\lambda$ is
$$\frac{\lambda(1 - e^{-\lambda})^2}{n(1 - e^{-\lambda} - \lambda e^{-\lambda})}$$
and is not attained by the UMVU estimator. (It is, however, the asymptotic variance of the ML estimator.)

5.23 Let $X_1, \ldots, X_n$ be iid according to a density $p(x, \theta)$ which is positive for all $x$. Then, the variance of any unbiased estimator $\delta$ of $\theta$ satisfies
$$\operatorname{var}_{\theta_0}(\delta) \geq \frac{(\theta - \theta_0)^2}{\left[\int_{-\infty}^{\infty}\frac{[p(x, \theta)]^2}{p(x, \theta_0)}\,dx\right]^n - 1}, \quad \theta \neq \theta_0.$$
[Hint: Direct consequence of (5.6).]

5.24 If $X_1, \ldots, X_n$ are iid as $N(\theta, \sigma^2)$, where $\sigma$ is known and $\theta$ is known to have one of the values $0, \pm 1, \pm 2, \ldots$, the inequality of the preceding problem shows that any unbiased estimator $\delta$ of the restricted parameter $\theta$ satisfies
$$\operatorname{var}_{\theta_0}(\delta) \geq \frac{\Delta^2}{e^{n\Delta^2/\sigma^2} - 1}, \quad \Delta \neq 0,$$
where $\Delta = \theta - \theta_0$, and hence $\sup_{\Delta\neq 0}\operatorname{var}_{\theta_0}(\delta) \geq 1/[e^{n/\sigma^2} - 1]$.

5.25 Under the assumptions of the preceding problem, let $\bar{X}^*$ be the integer closest to $\bar{X}$.
(a) The estimator $\bar{X}^*$ is unbiased for the restricted parameter $\theta$.
(b) There exist positive constants $a$ and $b$ such that, for all sufficiently large $n$, $\operatorname{var}_\theta(\bar{X}^*) \leq ae^{-bn}$ for all integers $\theta$.
[Hint: (b) One finds $P(\bar{X}^* = k) = \int_{I_k}\phi(t)\,dt$, where $I_k$ is the interval $((k - \theta - 1/2)\sqrt{n}/\sigma,\ (k - \theta + 1/2)\sqrt{n}/\sigma)$, and hence
$$\operatorname{var}(\bar{X}^*) \leq 4\sum_{k=1}^{\infty} k\left[1 - \Phi\left(\left(k - \frac{1}{2}\right)\frac{\sqrt{n}}{\sigma}\right)\right].$$
The result follows from the fact that for all $y > 0$, $1 - \Phi(y) \leq \phi(y)/y$. See, for example, Feller 1968, Chapter VII, Section 1. Note that $h(y) = \phi(y)/(1 - \Phi(y))$ is the hazard function for the standard normal distribution, so we have $h(y) \geq y$ for all $y > 0$; $(1 - \Phi(y))/\phi(y)$ is also known as Mills' ratio (see Stuart and Ord 1987, Section 5.38). Efron and Johnstone (1990) relate the hazard function to the information inequality.]
Note. The surprising results of Problems 5.23-5.25, showing a lower bound and variance which decrease exponentially, are due to Hammersley (1950), who shows that, in fact,
$$\operatorname{var}(\bar{X}^*) \sim \sqrt{\frac{8\sigma^2}{\pi n}}\,e^{-n/8\sigma^2} \quad\text{as } \frac{n}{\sigma^2}\to\infty.$$
Further results concerning the estimation of restricted parameters and properties of $\bar{X}^*$ are given in Khan (1973), Ghosh (1974), Ghosh and Meeden (1978), and Kojima, Morimoto, and Takeuchi (1982).

5.26 Kiefer inequality.
(a) Let $X$ have density (with respect to $\mu$) $p(x, \theta)$ which is $> 0$ for all $x$, and let $\Lambda_1$ and $\Lambda_2$ be two distributions on the real line with finite first moments.
Then, any unbiased estimator $\delta$ of $\theta$ satisfies
$$\operatorname{var}(\delta) \geq \frac{\left[\int\Delta\,d\Lambda_1(\Delta) - \int\Delta\,d\Lambda_2(\Delta)\right]^2}{\int\psi^2(x, \theta)\,p(x, \theta)\,d\mu(x)},$$
where
$$\psi(x, \theta) = \frac{\int_{\Omega_\theta} p(x, \theta + \Delta)\,[d\Lambda_1(\Delta) - d\Lambda_2(\Delta)]}{p(x, \theta)}$$
with $\Omega_\theta = \{\Delta : \theta + \Delta \in \Omega\}$.
(b) If $\Lambda_1$ and $\Lambda_2$ assign probability 1 to $\Delta = 0$ and $\Delta$, respectively, the inequality reduces to (5.6) with $g(\theta) = \theta$.
[Hint: Apply (5.1).] (Kiefer 1952.)

5.27 Verify directly that the following families of densities satisfy (5.38):
(a) the exponential family of (1.5.1), $p_\eta(x) = h(x)e^{\eta T(x) - A(\eta)}$;
(b) the location $t$ family of Example 5.16;
(c) the logistic density of Table 1.4.1.

5.28 Extend condition (5.38) to vector-valued parameters, and show that it is satisfied by the exponential family (1.5.1) for $s > 1$.

5.29 Show that assumption (5.36(b)) implies (5.38), so that Theorem 5.15 is, in fact, a corollary of Theorem 5.10.

5.30 Show that (5.38) is satisfied if either of the following is true:
(a) $|\partial\log p_\theta/\partial\theta|$ is bounded;
(b) $\dfrac{1}{p_\theta(x)}\,\dfrac{p_{\theta+\Delta}(x) - p_\theta(x)}{\Delta} \to \dfrac{\partial\log p_\theta(x)}{\partial\theta}$ uniformly.

5.31 (a) Show that if (5.38) holds, then the family of densities is strongly differentiable (see Note 8.6).
(b) Show that weak differentiability is implied by strong differentiability.

5.32 Brown and Gajek (1990) give two different sufficient conditions for (8.2) to hold, which are given below. Show that each implies (8.2). (Note that, in the progression from (a) to (b), the conditions become weaker, thus more widely applicable and harder to check.)
(a) For some $B < \infty$,
$$E_{\theta_0}\left[\frac{\partial^2}{\partial\theta^2}\,\frac{p_\theta(X)}{p_{\theta_0}(X)}\right]^2 < B$$
for all $\theta$ in a neighborhood of $\theta_0$.
(b) If $p^*_t(x) = (\partial/\partial\theta)p_\theta(x)|_{\theta=t}$, then
$$\lim_{\Delta\to 0} E_{\theta_0}\left[\frac{p^*_{\theta_0+\Delta}(X) - p^*_{\theta_0}(X)}{p_{\theta_0}(X)}\right]^2 = 0.$$

5.33 Let $\mathcal{F}$ be the class of all unimodal symmetric densities or, more generally, densities symmetric around zero and satisfying $f(x) \leq f(0)$ for all $x$. Show that
$$\min_{f\in\mathcal{F}}\int x^2 f(x)\,dx = \frac{1}{12},$$
and that the minimum is attained by the uniform$(-\frac12, \frac12)$ distribution. Thus, the uniform distribution has minimum variance among symmetric unimodal distributions. (See Example 4.8.6 for large-sample properties of the scale uniform.) [Hint: The side condition $\int f(x)\,dx = 1$, together with the method of undetermined multipliers, yields an equivalent problem, minimization of $\int(x^2 - a^2)f(x)\,dx$, where $a$ is chosen to satisfy the constraint. A Neyman-Pearson type argument will now work.]

Section 6

6.1 For any random variables $(\psi_1, \ldots, \psi_s)$, show that the matrices $\|E\psi_i\psi_j\|$ and $C = \|\operatorname{cov}(\psi_i, \psi_j)\|$ are positive semidefinite.

6.2 In this problem, we establish some facts about eigenvalues and eigenvectors of square matrices. (For a more general treatment, see, for example, Marshall and Olkin 1979, Chapter 20.) We use the facts that a scalar $\lambda$ is an eigenvalue of the $n\times n$ symmetric matrix $A$ if there exists an $n\times 1$ vector $p$, the corresponding eigenvector, satisfying $Ap = \lambda p$, and that if $A$ is nonsingular, there are $n$ eigenvalues with corresponding linearly independent eigenvectors.
(a) Show that $A = P'D_\lambda P$, where $D_\lambda$ is a diagonal matrix of eigenvalues of $A$ and $P$ is an $n\times n$ matrix whose rows are the corresponding eigenvectors and which satisfies $P'P = PP' = I$, the identity matrix.
(b) Show that $\max_x \dfrac{x'Ax}{x'x} = $ largest eigenvalue of $A$.
(c) If $B$ is a nonsingular symmetric matrix with eigenvector-eigenvalue representation $B = Q'D_\beta Q$, then $\max_x \dfrac{x'Ax}{x'Bx} = $ largest eigenvalue of $A^* = D_\beta^{-1/2}QAQ'D_\beta^{-1/2}$, where $D_\beta^{-1/2}$ is a diagonal matrix whose elements are the reciprocals of the square roots of the eigenvalues of $B$.
(d) For any square matrices $C$ and $D$, show that the eigenvalues of the matrix $CD$ are the same as the eigenvalues of the matrix $DC$, and hence that $\max_x \dfrac{x'Ax}{x'Bx} = $ largest eigenvalue of $AB^{-1}$.
(e) If $A = aa'$, where $a$ is an $n\times 1$ vector ($A$ is thus a rank-one matrix), then $\max_x \dfrac{x'aa'x}{x'Bx} = a'B^{-1}a$.
[Hint: For part (b), show that
$$\frac{x'Ax}{x'x} = \frac{y'D_\lambda y}{y'y} = \frac{\sum_i \lambda_i y_i^2}{\sum_i y_i^2},$$
where $y = Px$, and hence the maximum is achieved at the vector $y$ that is 1 at the coordinate of the largest eigenvalue and zero everywhere else.]
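Parts (d) and (e) of Problem 6.2 are easy to illustrate numerically. A minimal sketch (assuming NumPy; the matrices are randomly generated and purely illustrative):

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical symmetric positive definite B and a rank-one A = a a'.
a = rng.standard_normal(3)
M = rng.standard_normal((3, 3))
B = M @ M.T + 3 * np.eye(3)
A = np.outer(a, a)

# Part (d): max_x x'Ax / x'Bx equals the largest eigenvalue of A B^{-1}.
lam_max = np.linalg.eigvals(A @ np.linalg.inv(B)).real.max()

# Part (e): for rank-one A, that eigenvalue is a' B^{-1} a.
print(lam_max, a @ np.linalg.inv(B) @ a)   # agree to rounding error
```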
6.3 An alternative proof of Theorem 6.1 uses the method of Lagrange (or undetermined) multipliers. Show that, for fixed $\gamma$, the maximum value of $a'\gamma$, subject to the constraint that $a'Ca = 1$, is obtained by the solutions to
$$\frac{\partial}{\partial a_i}\left\{a'\gamma - \frac{1}{2}\lambda[a'Ca - 1]\right\} = 0,$$
where $\lambda$ is the undetermined multiplier. (The solution is $a = \pm C^{-1}\gamma/\sqrt{\gamma'C^{-1}\gamma}$.)

6.4 Prove (6.11) under the assumptions of the text.

6.5 Verify (a) the information matrices of Table 6.1 and (b) Equations (6.15) and (6.16).

6.6 If $p(x) = (1-\varepsilon)\phi(x - \xi) + (\varepsilon/\tau)\phi[(x - \xi)/\tau]$, where $\phi$ is the standard normal density, find $I(\varepsilon, \xi, \tau)$.

6.7 Verify the expressions (6.20) and (6.21).

6.8 Let
$$A = \begin{pmatrix} A_{11} & A_{12} \\ A_{21} & A_{22} \end{pmatrix}$$
be a partitioned matrix with $A_{22}$ square and nonsingular, and let
$$B = \begin{pmatrix} I & -A_{12}A_{22}^{-1} \\ 0 & I \end{pmatrix}.$$
Show that $|A| = |A_{11} - A_{12}A_{22}^{-1}A_{21}|\cdot|A_{22}|$.

6.9 (a) Let
$$A = \begin{pmatrix} a & b' \\ b & C \end{pmatrix},$$
where $a$ is a scalar and $b$ a column vector, and suppose that $A$ is positive definite. Show that $|A| \leq a|C|$, with equality holding if and only if $b = 0$.
(b) More generally, if the matrix $A$ of Problem 6.8 is positive definite, show that $|A| \leq |A_{11}|\cdot|A_{22}|$, with equality holding if and only if $A_{12} = 0$.
[Hint: Transform $A_{11}$ and the positive semidefinite $A_{12}A_{22}^{-1}A_{21}$ simultaneously to diagonal form.]

6.10 (a) Show that if the matrix $A$ is positive definite, then for any vector $x$, $(x'Ax)(x'A^{-1}x) \geq (x'x)^2$.
(b) Show that, in the notation of Theorem 6.6 and the following discussion,
$$\frac{\left[\frac{\partial}{\partial\theta_i}E_\theta\delta\right]^2}{I_{ii}(\theta)} = \frac{(\epsilon_i'\alpha)^2}{\epsilon_i'I(\theta)\epsilon_i},$$
and that if $\alpha = (0, \ldots, 0, \alpha_i, 0, \ldots, 0)$, then $\alpha'I(\theta)^{-1}\alpha = (\epsilon_i'\alpha)^2\,\epsilon_i'I(\theta)^{-1}\epsilon_i$; hence establish (6.25).

6.11 Prove that (6.26) is necessary for equality in (6.25). [Hint: Problem 6.9(a).]

6.12 Prove the Bhattacharyya inequality (6.29), and show that the condition for equality is as stated.

8 Notes

8.1 Unbiasedness and Information

The concept of unbiasedness as "lack of systematic error" in the estimator was introduced by Gauss (1821) in his work on the theory of least squares. It has continued as a basic assumption in the developments of this theory since then. The amount of information that a data set contains about a parameter was introduced by Edgeworth (1908, 1909) and was developed more systematically by Fisher (1922 and later papers). The first version of the information inequality, and hence connections with unbiased estimation, appears to have been given by Fréchet (1943). Early extensions and rediscoveries are due to Darmois (1945), Rao (1945), and Cramér (1946b). The designation "information inequality," which replaced the earlier "Cramér-Rao inequality," was proposed by Savage (1954).

8.2 UMVU Estimators

The first UMVU estimators were obtained by Aitken and Silverstone (1942) in the situation in which the information inequality yields the same result (Problem 5.17). UMVU estimators as unique unbiased functions of a suitable sufficient statistic were derived in special cases by Halmos (1946) and Kolmogorov (1950) and were pointed out as a general fact by Rao (1947). An early use of Method 1 for determining such unbiased estimators is due to Tweedie (1947).
The concept of completeness was defined, its implications for unbiased estimation developed, and Theorem 1.7 obtained, in Lehmann and Scheffé (1950, 1955, 1956). Theorem 1.11 has been used to determine UMVU estimators in many special cases. Some applications include those of Abbey and David (1970, exponential distribution), Ahuja (1972, truncated Poisson), Bhattacharyya et al. (1977, censored data), Bickel and Lehmann (1969, convex loss), Varde and Sathe (1969, truncated exponential), Brown and Cohen (1974, common mean), Downton (1973, $P(X \leq Y)$), Woodward and Kelley (1977, $P(X \leq Y)$), Iwase (1983, inverse Gaussian), and Kremers (1986, sum-quota sampling).

8.3 Existence of Unbiased Estimators

Doss and Sethuraman (1989) show that the process of bias reduction may not always be the wisest course. If an estimand $g(\theta)$ does not have an unbiased estimator, and one tries to reduce the bias of a biased estimator $\delta$, they show that as the bias goes to zero, $\operatorname{var}(\delta) \to \infty$ (see Problem 1.4). This result has implications for bias-reduction procedures such as the jackknife and the bootstrap. (For an introduction to the jackknife and the bootstrap, see Efron and Tibshirani 1993 or Shao and Tu 1995.) In particular, Efron and Tibshirani (1993, Section 10.6) discuss some practical implications of bias reduction, where they urge caution in its use, as large increases in standard errors can result.

Liu and Brown (1993) call a problem singular if there exists no unbiased estimator with finite variance. More precisely, if $\mathcal{F}$ is a family of densities, then if a problem is singular, there will be at least one member of $\mathcal{F}$, called a singular point, where any unbiased estimator of a parameter (or functional) will have infinite variance. There are many examples of singular problems, both in parametric and nonparametric estimation, with nonparametric density estimation being, perhaps, the best known. Two particularly simple examples of singular problems are provided by Example 1.2 (estimation of $1/p$ in a binomial problem) and Problem 5.21 (a mixture estimation problem).

8.4 Geometry of the Information Inequality

[Figure 8.1: Illustration of the information inequality.]

The information inequality can be interpreted as, and a proof can be based on, the fact that the length of the hypotenuse of a right triangle exceeds the length of each side. For two vectors $t$ and $q$, define $\langle t, q\rangle = t'q$, with $\langle t, t\rangle = |t|^2$. For the triangle in Figure 8.1, using the fact that the cosine of the angle between $t$ and $q$ is $\cos(t, q) = t'q/(|t||q|)$ and the fact that the hypotenuse is the longest side, we have
$$|t| > |t|\cos(t, q) = |t|\,\frac{\langle t, q\rangle}{|t||q|} = \frac{\langle t, q\rangle}{|q|}.$$
If we define $\langle X, Y\rangle = E[(X - EX)(Y - EY)]$ for random variables $X$ and $Y$, applying the above inequality with this definition results in the covariance inequality (5.1), which, in turn, leads to the information inequality. See Fabian and Hannan (1977) for a rigorous development.

8.5 Fisher Information and the Hazard Function

Efron and Johnstone (1990) investigate an identity between the Fisher information number and the hazard function $h$, defined by
$$h_\theta(x) = \lim_{\Delta\to 0}\Delta^{-1}P(x \leq X < x + \Delta\,|\,X \geq x) = \frac{f_\theta(x)}{1 - F_\theta(x)},$$
where $f_\theta$ and $F_\theta$ are the density and distribution function of the random variable $X$, respectively. The hazard function represents the conditional survival rate given survival up to time $x$ and plays an important role in survival analysis. (See, for example, Kalbfleisch and Prentice 1980, Cox and Oakes 1984, Fleming and Harrington 1991.) Efron and Johnstone show that
$$I(\theta) = \int_{-\infty}^{\infty}\left[\frac{\partial}{\partial\theta}\log f_\theta(x)\right]^2 f_\theta(x)\,dx = \int_{-\infty}^{\infty}\left[\frac{\partial}{\partial\theta}\log h_\theta(x)\right]^2 f_\theta(x)\,dx.$$
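Before turning to the interpretation of this identity, here is a hedged numerical check (assuming NumPy and SciPy) for the $N(\theta, 1)$ location family, where $I(\theta) = 1$, the hazard is $h(u) = \phi(u)/[1 - \Phi(u)]$ with $u = x - \theta$, and a direct calculation gives $(\partial/\partial\theta)\log h_\theta(x) = u - h(u)$:

```python
import numpy as np
from scipy import integrate, stats

# Check that E[(U - h(U))^2] = 1 for U ~ N(0, 1), i.e., that the hazard form
# of the identity recovers the normal location Fisher information.
def integrand(u):
    h = stats.norm.pdf(u) / stats.norm.sf(u)   # standard normal hazard
    return (u - h) ** 2 * stats.norm.pdf(u)

val, _ = integrate.quad(integrand, -8, 8)
print(val)   # approximately 1.0
```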
They then interpret this identity and discuss its implications for, and connections with, survival analysis and statistical curvature of hazard models, among other things. They also note that this identity can be derived as a consequence of the more general result of James (1986), who showed that if $b(\cdot)$ is a continuous function of the random variable $X$, then $\operatorname{var}[b(X)] = E[b(X) - \bar{b}(X)]^2$, where $\bar{b}(x) = E[b(X)\,|\,b(X) > x]$, as long as the expectations exist.

8.6 Weak and Strong Differentiability

Research into determining necessary and sufficient conditions for the applicability of the information inequality bound has a long history (see, for example, Blyth and Roberts 1972, Fabian and Hannan 1977, Ibragimov and Has'minskii 1981, Section 1.7, Müller-Funk et al. 1989, Brown and Gajek 1990). What has resulted is a condition on the density sufficient to ensure (5.29). The precise condition needed was presented by Fabian and Hannan (1977), who call it weak differentiability. The function $p_{\theta+\Delta}(x)/p_\theta(x)$ is weakly differentiable at $\theta$ if there is a measurable function $q$ such that
$$\lim_{\Delta\to 0}\int h(x)\left\{\Delta^{-1}\left[\frac{p_{\theta+\Delta}(x)}{p_\theta(x)} - 1\right] - q(x)\right\}p_\theta(x)\,d\mu(x) = 0 \qquad (8.1)$$
for all $h(\cdot)$ such that $\int h^2(x)p_\theta(x)\,d\mu(x) < \infty$. Weak differentiability is actually equivalent (necessary and sufficient) to the existence of a function $q_\theta(x)$ such that $(\partial/\partial\theta)E_\theta\delta = E_\theta(\delta q_\theta)$. Hence, it can replace condition (5.38) in Theorem 5.15.

Since weak differentiability is often difficult to verify, Brown and Gajek (1990) introduce the more easily verifiable condition of strong differentiability, which implies weak differentiability and thus can also replace condition (5.38) in Theorem 5.15 (Problem 5.31). The function $p_{\theta+\Delta}(x)/p_\theta(x)$ is strongly differentiable at $\theta = \theta_0$ with derivative $q_{\theta_0}(x)$ if
$$\lim_{\Delta\to 0}\int\left\{\Delta^{-1}\left[\frac{p_{\theta+\Delta}(x)}{p_\theta(x)} - 1\right] - q_{\theta_0}(x)\right\}^2 p_{\theta_0}(x)\,d\mu(x) = 0. \qquad (8.2)$$
These variations of the usual definition of differentiability are well suited to the information inequality problem. In fact, consider the expression in the square brackets in (8.1). If the limit of this expression exists, it is $q_\theta(x) = \partial\log p_\theta(x)/\partial\theta$. Of course, existence of this limit does not, by itself, imply condition (8.2); such an implication requires an integrability condition. Brown and Gajek (1990) detail a number of easier-to-check conditions that imply (8.2) (see Problem 5.32). Fabian and Hannan (1977) remark that if (8.1) holds and $\partial\log p_\theta(x)/\partial\theta$ exists, then it must be the case that $q_\theta(x) = \partial\log p_\theta(x)/\partial\theta$. However, the existence of one does not imply the existence of the other.

CHAPTER 3

Equivariance

1 First Examples

In Section 1.1, the principle of unbiasedness was introduced as an impartiality restriction to eliminate estimators such as $\delta(X) \equiv g(\theta_0)$, which would give very low risk for some parameter values at the expense of very high risk for others. As was seen in Sections 2.2-2.4, in many important situations there exists within the class of unbiased estimators a member that is uniformly better for any convex loss function than any other unbiased estimator. In the present chapter, we shall use symmetry considerations as the basis for another such impartiality restriction, with a somewhat different domain of applicability.

Example 1.1 Estimating binomial p. Consider $n$ binomial trials with unknown probability $p$ ($0 < p < 1$) of success, which we wish to estimate with loss function $L(p, d)$, for example, $L(p, d) = (d - p)^2$ or $L(p, d) = (d - p)^2/[p(1 - p)]$.
If $X_i$, $i = 1, \ldots, n$, is 1 or 0 as the $i$th trial is a success or failure, the joint distribution of the $X$'s is
$$P(x_1, \ldots, x_n) = p^{\sum x_i}(1 - p)^{\sum(1 - x_i)}.$$
Suppose now that another statistician interchanges the definitions of success and failure. For this worker, the probability of success is
$$p' = 1 - p \qquad (1.1)$$
and the indicator of success or failure on the $i$th trial is
$$X_i' = 1 - X_i. \qquad (1.2)$$
The joint distribution of the $X_i'$ is $P(x_1', \ldots, x_n') = p'^{\sum x_i'}(1 - p')^{\sum(1 - x_i')}$ and hence satisfies
$$P(x_1', \ldots, x_n') = P(x_1, \ldots, x_n). \qquad (1.3)$$
In the new terminology, the estimated value $d'$ of $p'$ is
$$d' = 1 - d, \qquad (1.4)$$
and the loss resulting from its use is $L(p', d')$. The loss functions suggested at the beginning of the example (and, in fact, most loss functions that we would want to employ in this situation) satisfy
$$L(p, d) = L(p', d'). \qquad (1.5)$$
Under these circumstances, the problem of estimating $p$ with loss function $L$ is said to be invariant under the transformations (1.1), (1.2), and (1.4). This invariance is an expression of the complete symmetry of the estimation problem with respect to the outcomes of success and failure.

Suppose now that in the above situation, we had decided to use $\delta(x)$, where $x = (x_1, \ldots, x_n)$, as an estimator of $p$. Then, the formal identity of the primed and unprimed problems suggests that we should use
$$\delta(x') = \delta(1 - x_1, \ldots, 1 - x_n) \qquad (1.6)$$
to estimate $p' = 1 - p$. On the other hand, it is natural to estimate $1 - p$ by 1 minus the estimator of $p$, i.e., by
$$1 - \delta(x). \qquad (1.7)$$
It seems desirable that these two estimators should agree and hence that
$$\delta(x') = 1 - \delta(x). \qquad (1.8)$$
An estimator satisfying (1.8) will be called equivariant under the transformations (1.1), (1.2), and (1.4). Note that the standard estimator $\sum X_i/n$ satisfies (1.8).

The arguments for (1.6) and (1.7) as estimators of $1 - p$ are of a very different nature. The appropriateness of (1.6) depends entirely on the symmetry of the situation. It would continue to be suitable if it were known, for example, that $\frac13 < p < \frac23$, but not if, say, $\frac14 < p < \frac12$. In fact, in the latter case, $\delta(X)$ would typically be chosen to be $< \frac12$ for all $X$, and hence $\delta(X')$ would be entirely unsuitable as an estimator of $1 - p$, which is known to be $> \frac12$. More generally, (1.6) would cease to be appropriate if any prior information about $p$ is available which is not symmetric about $\frac12$. In contrast, the argument leading to (1.7) is quite independent of any symmetry assumptions, but simply reflects the fact that if $\delta(X)$ is a reasonable estimator of a parameter $\theta$ (that is, is likely to be close to $\theta$), then $1 - \delta(X)$ is reasonable as an estimator of $1 - \theta$. ∥

We shall postpone giving a general definition of equivariance to the next section; in the remainder of the present section, we formulate this concept and explore its implications for the special case of location problems.

Let $X = (X_1, \ldots, X_n)$ have joint distribution with probability density
$$f(x - \xi) = f(x_1 - \xi, \ldots, x_n - \xi), \quad -\infty < \xi < \infty, \qquad (1.9)$$
where $f$ is known and $\xi$ is an unknown location parameter. Suppose that for the problem of estimating $\xi$ with loss function $L(\xi, d)$, we have found a satisfactory estimator $\delta(X)$. In analogy with the transformations (1.2) and (1.1) of the observations $X_i$ and the parameter $p$ in Example 1.1, consider the transformations
$$X_i' = X_i + a \qquad (1.10)$$
and
$$\xi' = \xi + a. \qquad (1.11)$$
The joint density of $X' = (X_1', \ldots, X_n')$ can be written as $f(x' - \xi') = f(x_1' - \xi', \ldots, x_n' - \xi')$, so that, in analogy with (1.3), we have by (1.10) and (1.11)
$$f(x' - \xi') = f(x - \xi) \quad\text{for all } x \text{ and } \xi. \qquad (1.12)$$
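The shift-equivariance property that will be formalized below in (1.18) is easy to check numerically for standard estimators. A minimal sketch (assuming NumPy and SciPy; the data and the shift $a$ are arbitrary):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
x = rng.standard_normal(9)
a = 2.7

# delta(x + a) = delta(x) + a holds for the mean, the median, and any weighted
# average of order statistics with weights summing to one (here, as one
# illustration, a 20%-trimmed mean).
for delta in (np.mean, np.median, lambda v: stats.trim_mean(v, 0.2)):
    assert np.isclose(delta(x + a), delta(x) + a)

# By contrast, a shrunken mean 0.9 * xbar is not location equivariant.
print(np.isclose(0.9 * np.mean(x + a), 0.9 * np.mean(x) + a))   # False
```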
The estimated value $d'$ of $\xi'$ is
$$d' = d + a, \qquad (1.13)$$
and the loss resulting from its use is $L(\xi', d')$. In analogy with (1.5), we require $L$ to satisfy $L(\xi', d') = L(\xi, d)$ and hence
$$L(\xi + a, d + a) = L(\xi, d). \qquad (1.14)$$
A loss function $L$ satisfies (1.14) for all values of $a$ if and only if it depends only on the difference $d - \xi$, that is, if it is of the form
$$L(\xi, d) = \rho(d - \xi). \qquad (1.15)$$
That (1.15) implies (1.14) is obvious. The converse follows by putting $a = -\xi$ in (1.14) and letting $\rho(d - \xi) = L(0, d - \xi)$.

We can formalize these considerations in the following definition.

Definition 1.2 A family of densities $f(x|\xi)$, with parameter $\xi$, and a loss function $L(\xi, d)$ are location invariant if, respectively, $f(x'|\xi') = f(x|\xi)$ and $L(\xi, d) = L(\xi', d')$ whenever $\xi' = \xi + a$ and $d' = d + a$. If both the densities and the loss function are location invariant, the problem of estimating $\xi$ is said to be location invariant under the transformations (1.10), (1.11), and (1.13).

As in Example 1.1, this invariance is an expression of symmetry. Quite generally, symmetry in a situation can be characterized by its lack of change under certain transformations. After a transformation, the situation looks exactly as it did before. In the present case, the transformations in question are the shifts (1.10), (1.11), and (1.13), which leave both the density (1.12) and the loss function (1.14) unchanged.

Suppose now that in the original (unprimed) problem, we had decided to use $\delta(X)$ as an estimator of $\xi$. Then, the formal identity of the primed and unprimed problems suggests that we should use
$$\delta(X') = \delta(X_1 + a, \ldots, X_n + a) \qquad (1.16)$$
to estimate $\xi' = \xi + a$. On the other hand, it is natural to estimate $\xi + a$ by adding $a$ to the estimator of $\xi$, i.e., by
$$\delta(X) + a. \qquad (1.17)$$
As before, it seems desirable that these two estimators should agree and hence that
$$\delta(X_1 + a, \ldots, X_n + a) = \delta(X_1, \ldots, X_n) + a \quad\text{for all } a. \qquad (1.18)$$

Definition 1.3 An estimator satisfying (1.18) will be called equivariant under the transformations (1.10), (1.11), and (1.13), or location equivariant.¹

All the usual estimators of a location parameter are location equivariant. This is the case, for example, for the mean, the median, or any weighted average of the order statistics (with weights adding up to one). The MLE $\hat{\xi}$ is also equivariant since, if $\hat{\xi}$ maximizes $f(x - \xi)$, then $\hat{\xi} + a$ maximizes $f(x - \xi - a)$.

As was the case in Example 1.1, the arguments for (1.16) and (1.17) as estimators of $\xi + a$ are of a very different nature. The appropriateness of (1.16) results from the invariance of the situation under shifts. It would not be suitable for an estimator of $\xi + a$, for example, if it were known that $0 < \xi < 1$. Then, $\delta(X)$ would typically take only values between 0 and 1, and hence $\delta(X')$ would be disastrous as an estimate of $\xi + a$ if $a > 1$. In contrast, the argument leading to (1.17) is quite independent of any equivariance arguments, but simply reflects the fact that if $\delta(X)$ is a reasonable estimator of a parameter $\xi$, then $\delta(X) + a$ is reasonable for estimating $\xi + a$.

The following theorem states an important set of properties of location equivariant estimators.

Theorem 1.4 Let $X$ be distributed with density (1.9), and let $\delta$ be equivariant for estimating $\xi$ with loss function (1.15). Then, the bias, risk, and variance of $\delta$ are all constant (i.e., do not depend on $\xi$).

Proof. Note that if $X$ has density $f(x)$ (i.e., $\xi = 0$), then $X + \xi$ has density (1.9).
Thus, the bias can be written as
$$b(\xi) = E_\xi[\delta(X)] - \xi = E_0[\delta(X + \xi)] - \xi = E_0[\delta(X)],$$
which does not depend on $\xi$. The proofs for risk and variance are analogous (Problem 1.1). ✷

¹ Some authors have called such estimators invariant, which could suggest that the estimator remains unchanged, rather than changing in a prescribed way. We will reserve that term for functions that do remain unchanged, such as those satisfying (1.20).

Theorem 1.4 has an important consequence. Since the risk of any equivariant estimator is independent of $\xi$, the problem of uniformly minimizing the risk within this class of estimators is replaced by the much simpler problem of determining the equivariant estimator for which this constant risk is smallest.

Definition 1.5 In a location invariant estimation problem, if a location equivariant estimator exists which minimizes the constant risk, it is called the minimum risk equivariant (MRE) estimator.

Such an estimator will typically exist, and is often unique, although in rare cases there could be a sequence of estimators whose risks decrease to a value not assumed.

To derive an explicit expression for the MRE estimator, let us begin by finding a representation of the most general location equivariant estimator.

Lemma 1.6 If $\delta_0$ is any equivariant estimator, then a necessary and sufficient condition for $\delta$ to be equivariant is that
$$\delta(x) = \delta_0(x) + u(x), \qquad (1.19)$$
where $u(x)$ is any function satisfying
$$u(x + a) = u(x) \quad\text{for all } x, a. \qquad (1.20)$$

Proof. Assume first that (1.19) and (1.20) hold. Then, $\delta(x + a) = \delta_0(x + a) + u(x + a) = \delta_0(x) + a + u(x) = \delta(x) + a$, so that $\delta$ is equivariant. Conversely, if $\delta$ is equivariant, let $u(x) = \delta(x) - \delta_0(x)$. Then,
$$u(x + a) = \delta(x + a) - \delta_0(x + a) = \delta(x) + a - \delta_0(x) - a = u(x),$$
so that (1.19) and (1.20) hold. ✷

To complete the representation, we need a characterization of the functions $u$ satisfying (1.20).

Lemma 1.7 A function $u$ satisfies (1.20) if and only if it is a function of the differences $y_i = x_i - x_n$ ($i = 1, \ldots, n-1$) when $n \geq 2$ and, for $n = 1$, if and only if it is a constant.

Proof. The proof is essentially the same as that of (1.15). ✷

Note that the function $u(\cdot)$, which is invariant, is only a function of the ancillary statistic $(y_1, \ldots, y_{n-1})$ (see Section 1.6). Hence, by itself, it does not carry any information about the parameter $\xi$. The connection between invariance and ancillarity is not coincidental. (See Lehmann and Scholz 1992, and Problems 2.11 and 2.12.)

Combining Lemmas 1.6 and 1.7 gives the following characterization of equivariant estimators.

Theorem 1.8 If $\delta_0$ is any equivariant estimator, then a necessary and sufficient condition for $\delta$ to be equivariant is that there exists a function $v$ of $n - 1$ arguments for which
$$\delta(x) = \delta_0(x) - v(y) \quad\text{for all } x. \qquad (1.21)$$

Example 1.9 Location equivariant estimators based on one observation. Consider the case $n = 1$. Then, it follows from Theorem 1.8 that the only equivariant estimators are $X + c$ for some constant $c$. ∥

We are now in a position to determine the equivariant estimator with minimum risk.

Theorem 1.10 Let $X = (X_1, \ldots, X_n)$ be distributed according to (1.9), let $Y_i = X_i - X_n$ ($i = 1, \ldots, n-1$) and $Y = (Y_1, \ldots, Y_{n-1})$. Suppose that the loss function is given by (1.15) and that there exists an equivariant estimator $\delta_0$ of $\xi$ with finite risk. Assume that for each $y$ there exists a number $v(y) = v^*(y)$ which minimizes
$$E_0\{\rho[\delta_0(X) - v(y)]\,|\,y\}. \qquad (1.22)$$
Then, a location equivariant estimator $\delta$ of $\xi$ with minimum risk exists and is given by
$$\delta^*(X) = \delta_0(X) - v^*(Y).$$
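Before proving Theorem 1.10, it may help to see Theorem 1.4 in action: the bias and risk of an equivariant estimator do not change as $\xi$ varies. A hedged Monte Carlo sketch (assuming NumPy; the Laplace location family, sample size, and $\xi$ values are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(4)
n, reps = 7, 100_000

# The median is location equivariant, so its bias and squared-error risk
# should be the same at every value of xi (Theorem 1.4).
for xi in (-3.0, 0.0, 5.0):
    x = rng.laplace(loc=xi, size=(reps, n))
    d = np.median(x, axis=1)
    print(xi, (d - xi).mean(), ((d - xi) ** 2).mean())  # bias ~ 0, risk constant
```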
Proof. By Theorem 1.8, the MRE estimator is found by determining $v$ so as to minimize
$$R_\xi(\delta) = E_\xi\{\rho[\delta_0(X) - v(Y) - \xi]\}.$$
Since the risk is independent of $\xi$, it suffices to minimize
$$R_0(\delta) = E_0\{\rho[\delta_0(X) - v(Y)]\} = \int E_0\{\rho[\delta_0(X) - v(y)]\,|\,y\}\,dP_0(y).$$
The integral is minimized by minimizing the integrand, and hence (1.22), for each $y$. Since $\delta_0$ has finite risk, $E_0\{\rho[\delta_0(X)]\,|\,y\} < \infty$ (a.e. $P_0$), and the minimization of (1.22) is meaningful. The result now follows from the assumptions of the theorem. ✷

Corollary 1.11 Under the assumptions of Theorem 1.10, suppose that $\rho$ is convex and not monotone. Then, an MRE estimator of $\xi$ exists; it is unique if $\rho$ is strictly convex.

Proof. Theorems 1.10 and 1.7.15. ✷

Corollary 1.12 Under the assumptions of Theorem 1.10:
(i) if $\rho(d - \xi) = (d - \xi)^2$, then
$$v^*(y) = E_0[\delta_0(X)|y]; \qquad (1.23)$$
(ii) if $\rho(d - \xi) = |d - \xi|$, then $v^*(y)$ is any median of $\delta_0(X)$ under the conditional distribution of $X$ given $y$.

Proof. Examples 1.7.17 and 1.7.18. ✷

Example 1.13 Continuation of Example 1.9. For the case $n = 1$, if $X$ has finite risk, the arguments of Theorem 1.10 and Corollary 1.11 show that the MRE estimator is $X - v^*$, where $v^*$ is any value minimizing
$$E_0[\rho(X - v)]. \qquad (1.24)$$
In particular, the MRE estimator is $X - E_0(X)$ and $X - \operatorname{med}_0(X)$ when the loss is squared error and absolute error, respectively.

Suppose, now, that $X$ is symmetrically distributed about $\xi$. Then, for any $\rho$ which is convex and even, it follows from Corollary 1.7.19 that (1.24) is minimized by $v = 0$, so that $X$ is MRE. Under the same assumptions, if $n = 2$, the MRE estimator is $(X_1 + X_2)/2$ (Problem 1.3). ∥

The existence of MRE estimators is, of course, not restricted to convex loss functions. As an important class of nonconvex loss functions, consider the case that $\rho$ is bounded.

Corollary 1.14 Under the assumptions of Example 1.13, suppose that $0 \leq \rho(t) \leq M$ for all values of $t$, that $\rho(t) \to M$ as $t \to \pm\infty$, and that the density $f$ of $X$ is continuous a.e. Then, an MRE estimator of $\xi$ exists.

Proof. See Problem 1.8. ✷

Example 1.15 MRE under 0-1 loss. Suppose that
$$\rho(d - \xi) = \begin{cases} 1 & \text{if } |d - \xi| > k, \\ 0 & \text{otherwise}. \end{cases}$$
Then, $v$ will minimize (1.24), provided it maximizes
$$P_0\{|X - v| \leq k\}. \qquad (1.25)$$
Suppose that the density $f$ is symmetric about 0. If $f$ is unimodal, then $v = 0$ and the MRE estimator of $\xi$ is $X$. On the other hand, suppose that $f$ is U-shaped, say $f(x)$ is zero for $|x| > c > k$ and is strictly increasing for $0 < x < c$. Then, there are two values of $v$ maximizing (1.25), namely $v = c - k$ and $v = -c + k$; hence, $X - c + k$ and $X + c - k$ are both MRE. ∥

Example 1.16 Normal. Let $X_1, \ldots, X_n$ be iid according to $N(\xi, \sigma^2)$, where $\sigma$ is known. If $\delta_0 = \bar{X}$ in Theorem 1.10, it follows from Basu's theorem that $\delta_0$ is independent of $Y$ and hence that $v(y) = v$ is a constant determined by minimizing (1.24) with $\bar{X}$ in place of $X$. Thus, $\bar{X}$ is MRE for all convex and even $\rho$. It is also MRE for many nonconvex loss functions, including that of Example 1.15. ∥

This example has an interesting implication concerning a "least favorable" property of the normal distribution.

Theorem 1.17 Let $\mathcal{F}$ be the class of all univariate distributions $F$ that have a density $f$ (w.r.t. Lebesgue measure) and fixed finite variance, say $\sigma^2 = 1$. Let $X_1, \ldots, X_n$ be iid with density $f(x_i - \xi)$, $\xi = E(X_i)$, and let $r_n(F)$ be the risk of the MRE estimator of $\xi$ with squared error loss. Then, $r_n(F)$ takes on its maximum value over $\mathcal{F}$ when $F$ is normal.

Proof. The MRE estimator in the normal case is $\bar{X}$ with risk $E(\bar{X} - \xi)^2 = 1/n$.
Since this is the risk of $\bar{X}$ regardless of $F$, the MRE estimator for any other $F$ must have risk $\leq 1/n$, and this completes the proof. ✷

For $n \geq 3$, the normal distribution is, in fact, the only one for which $r_n(F) = 1/n$. Since the MRE estimator is unique, this will follow if the normal distribution can be shown to be the only one whose MRE estimator is $\bar{X}$. From Corollary 1.12, it is seen that the MRE estimator is $\bar{X} - E_0[\bar{X}|Y]$ and, hence, is $\bar{X}$ if and only if $E_0[\bar{X}|Y] = 0$. It was proved by Kagan, Linnik, and Rao (1965, 1973) that this last equation holds if and only if $F$ is normal.

Example 1.18 Exponential. Let $X_1, \ldots, X_n$ be iid according to the exponential distribution $E(\xi, b)$ with $b$ known. If $\delta_0 = X_{(1)}$ in Theorem 1.10, it again follows from Basu's theorem that $\delta_0$ is independent of $Y$ and hence that $v(y) = v$ is determined by minimizing
$$E_0[\rho(X_{(1)} - v)]. \qquad (1.26)$$
(a) If the loss is squared error, the minimizing value is $v = E_0[X_{(1)}] = b/n$, and hence the MRE estimator is $X_{(1)} - (b/n)$.
(b) If the loss is absolute error, the minimizing value is $v = b(\log 2)/n$ (Problem 1.4).
(c) If the loss function is that of Example 1.15, then $v$ is the center of the interval $I$ of length $2k$ which maximizes $P_{\xi=0}[X_{(1)} \in I]$. Since for $\xi = 0$ the density of $X_{(1)}$ is decreasing on $(0, \infty)$, $v = k$, and the MRE estimator is $X_{(1)} - k$. See Problem 1.5 for another comparison. ∥

Example 1.19 Uniform. Let $X_1, \ldots, X_n$ be iid according to the uniform distribution $U(\xi - \frac12 b, \xi + \frac12 b)$, with $b$ known, and suppose the loss function $\rho$ is convex and even. For $\delta_0$, take $[X_{(1)} + X_{(n)}]/2$, where $X_{(1)} < \cdots < X_{(n)}$ denote the ordered $X$'s. To find $v(y)$ minimizing (1.22), consider the conditional distribution of $\delta_0$ given $y$. This distribution depends on $y$ only through the differences $X_{(i)} - X_{(1)}$, $i = 2, \ldots, n$. By Basu's theorem, the pair $(X_{(1)}, X_{(n)})$ is independent of the ratios $Z_i = [X_{(i)} - X_{(1)}]/[X_{(n)} - X_{(1)}]$, $i = 2, \ldots, n-1$ (Problem 1.6.36(b)). Therefore, the conditional distribution of $\delta_0$ given the differences $X_{(i)} - X_{(1)}$, which is equivalent to the conditional distribution of $\delta_0$ given $X_{(n)} - X_{(1)}$ and the $Z$'s, depends only on $X_{(n)} - X_{(1)}$. However, the conditional distribution of $\delta_0$ given $V = X_{(n)} - X_{(1)}$ is symmetric about 0 (when $\xi = 0$; Problem 1.2). It follows, therefore, as in Example 1.13, that the MRE estimator of $\xi$ is $[X_{(1)} + X_{(n)}]/2$, the midrange. ∥

When the loss is squared error, the MRE estimator
$$\delta^*(X) = \delta_0(X) - E[\delta_0(X)|Y] \qquad (1.27)$$
can be evaluated more explicitly.

Theorem 1.20 Under the assumptions of Theorem 1.10, with $L(\xi, d) = (d - \xi)^2$, the estimator (1.27) is given by
$$\delta^*(x) = \frac{\int_{-\infty}^{\infty} u f(x_1 - u, \ldots, x_n - u)\,du}{\int_{-\infty}^{\infty} f(x_1 - u, \ldots, x_n - u)\,du}, \qquad (1.28)$$
and in this form, it is known as the Pitman estimator of $\xi$.

Proof. Let $\delta_0(X) = X_n$. To compute $E_0(X_n|y)$ (which exists by Problem 1.21), make the change of variables $y_i = x_i - x_n$ ($i = 1, \ldots, n-1$), $y_n = x_n$. The Jacobian of the transformation is 1. The joint density of the $Y$'s is therefore
$$p_Y(y_1, \ldots, y_n) = f(y_1 + y_n, \ldots, y_{n-1} + y_n, y_n),$$
and the conditional density of $Y_n$ given $y = (y_1, \ldots, y_{n-1})$ is
$$\frac{f(y_1 + y_n, \ldots, y_{n-1} + y_n, y_n)}{\int f(y_1 + t, \ldots, y_{n-1} + t, t)\,dt}.$$
It follows that
$$E_0[X_n|y] = E_0[Y_n|y] = \frac{\int t f(y_1 + t, \ldots, y_{n-1} + t, t)\,dt}{\int f(y_1 + t, \ldots, y_{n-1} + t, t)\,dt}.$$
This can be reexpressed in terms of the $x$'s as
$$E_0[X_n|y] = \frac{\int t f(x_1 - x_n + t, \ldots, x_{n-1} - x_n + t, t)\,dt}{\int f(x_1 - x_n + t, \ldots, x_{n-1} - x_n + t, t)\,dt}$$
or, finally, by making the change of variables $u = x_n - t$, as
$$E_0[X_n|y] = x_n - \frac{\int u f(x_1 - u, \ldots, x_n - u)\,du}{\int f(x_1 - u, \ldots, x_n - u)\,du}.$$
Example 1.21 Continuation of Example 1.19. As an illustration of (1.28), let us apply it to the situation of Example 1.19. Then,
$$f(x_1 - \xi, \ldots, x_n - \xi) = \begin{cases} b^{-n} & \text{if } \xi - \frac{b}{2} \le x_{(1)} \le x_{(n)} \le \xi + \frac{b}{2} \\ 0 & \text{otherwise,} \end{cases}$$
where $b$ is known. The Pitman estimator is therefore given by
$$\delta^*(x) = \int_{x_{(n)} - b/2}^{x_{(1)} + b/2} u\, du \left[\int_{x_{(n)} - b/2}^{x_{(1)} + b/2} du\right]^{-1} = \frac{1}{2}[x_{(1)} + x_{(n)}],$$
which agrees with the result of Example 1.19. ∥

For most densities, the integrals in (1.28) are difficult to evaluate. The following example illustrates the MRE estimator for one more case.

Example 1.22 Double exponential. Let $X_1, \ldots, X_n$ be iid with double exponential distribution $DE(\xi, 1)$, so that their joint density is $(1/2^n)\exp(-\sum|x_i - \xi|)$. It is enough to evaluate the integrals in (1.28) over the set where $x_1 < \cdots < x_n$. If $x_k < \xi < x_{k+1}$,
$$\sum |x_i - \xi| = \sum_{i=k+1}^{n}(x_i - \xi) - \sum_{i=1}^{k}(x_i - \xi) = \sum_{i=k+1}^{n} x_i - \sum_{i=1}^{k} x_i + (2k - n)\xi.$$
The integration then leads to two sums, in both the numerator and the denominator of the Pitman estimator. The resulting expression is the desired estimator. ∥

So far, the estimator $\delta$ has been assumed to be nonrandomized. Let us now consider the role of randomized estimators in equivariant estimation. Recall from the proof of Corollary 1.7.9 that a randomized estimator can be obtained as a nonrandomized estimator $\delta(X, W)$ depending on $X$ and an independent random variable $W$ with known distribution. For such an estimator, the equivariance condition (1.18) becomes
$$\delta(X + a, W) = \delta(X, W) + a \quad \text{for all } a.$$
There is no change in Theorem 1.4, and Lemma 1.6 remains valid with (1.20) replaced by $u(x + a, w) = u(x, w)$ for all $x$, $w$, and $a$. The proof of Lemma 1.7 shows that this condition holds if and only if $u$ is a function of $y$ and $w$ only, so that, finally, in generalization of (1.21), an estimator $\delta(X, W)$ is equivariant if and only if it is of the form
$$\delta(X, W) = \delta_0(X, W) - v(Y, W). \qquad (1.29)$$
Applying the proof of Theorem 1.10 to (1.29), we see that the risk is minimized by choosing for $v(y, w)$ the function minimizing $E_0\{\rho[\delta_0(X, w) - v(y, w)] \mid y, w\}$. Since the starting $\delta_0$ can be any equivariant estimator, let it be nonrandomized, that is, not dependent on $W$. Since $X$ and $W$ are independent, it then follows that the minimizing $v(y, w)$ will not involve $w$, so that the MRE estimator (if it exists) will be nonrandomized.

Suppose now that $T$ is a sufficient statistic for $\xi$. Then, $X$ can be represented as $(T, W)$, where $W$ has a known distribution (see Section 1.6), and any estimator $\delta(X)$ can be viewed as a randomized estimator based on $T$. The above argument then suggests that an MRE estimator can always be chosen to depend on $T$ only. However, the argument does not apply, since the family $\{P_\xi^T, -\infty < \xi < \infty\}$ need no longer be a location family. Let us therefore add the assumption that $T = (T_1, \ldots, T_r)$, where the $T_i = T_i(X)$ are real-valued and equivariant, that is, satisfy
$$T_i(x + a) = T_i(x) + a \quad \text{for all } x \text{ and } a. \qquad (1.30)$$
Under this assumption, the distributions of $T$ do constitute a location family. To see this, let $V = X - \xi$, so that $V$ is distributed with density $f(v_1, \ldots, v_n)$. Then, $T_i(X) = T_i(V + \xi) = T_i(V) + \xi$, and this defines a location family. The earlier argument therefore applies, and under assumption (1.30), an MRE estimator can be found which depends only on $T$. (For a general discussion of the relationship of invariance and sufficiency, see Hall, Wijsman, and Ghosh 1965, Basu 1969, Berk 1972a, Landers and Rogge 1973, Arnold 1985, Kariya 1989, and Ramamoorthi 1990.)
In Examples 1.16, 1.18, and 1.19, the sufficient statistics $\bar X$, $X_{(1)}$, and $(X_{(1)}, X_{(n)})$, respectively, satisfy (1.30), and the previous remark provides an alternative derivation of the MRE estimators in these examples.

It is interesting to compare the results of the present section with those on unbiased estimation in Chapter 2. It was found there that when a UMVU estimator exists, it typically minimizes the risk for all convex loss functions, but that for bounded loss functions, not even a locally minimum risk unbiased estimator can be expected to exist. In contrast:
(a) An MRE estimator typically exists not only for convex loss functions but even when the loss function is not so restricted.
(b) On the other hand, even for convex loss functions, the MRE estimator often varies with the loss function.
(c) Randomized estimators need not be considered in equivariant estimation since there are always uniformly better nonrandomized ones.
(d) Unlike UMVU estimators, which are frequently inadmissible, the Pitman estimator is admissible under mild assumptions (Stein 1959, and Section 5.4).
(e) The principal area of application of UMVU estimation is that of exponential families, and these have little overlap with location families (see Section 1.5).
(f) For location families, UMVU estimators typically do not exist. (For specific results in this direction, see Bondesson 1975.)

Let us next consider whether MRE estimators are unbiased.

Lemma 1.23 Let the loss function be squared error.
(a) If $\delta(X)$ is any equivariant estimator with constant bias $b$, then $\delta(X) - b$ is equivariant, unbiased, and has smaller risk than $\delta(X)$.
(b) The unique MRE estimator is unbiased.
(c) If a UMVU estimator exists and is equivariant, it is MRE.

Proof. Part (a) follows from Lemma 2.2.7; (b) and (c) are immediate consequences of (a). ∎

That an MRE estimator need not be unbiased for general loss functions is seen from Example 1.18 with absolute error as loss. Some light is thrown on the possible failure of MRE estimators to be unbiased by the following decision-theoretic definition of unbiasedness, which depends on the loss function $L$.

Definition 1.24 An estimator $\delta$ of $g(\theta)$ is said to be risk-unbiased if it satisfies
$$E_\theta L[\theta, \delta(X)] \le E_\theta L[\theta', \delta(X)] \quad \text{for all } \theta' \ne \theta. \qquad (1.31)$$
If one interprets $L(\theta, d)$ as measuring how far the estimated value $d$ is from the estimand $g(\theta)$, then (1.31) states that, on the average, $\delta$ is at least as close to the true value $g(\theta)$ as it is to any false value $g(\theta')$.

Example 1.25 Mean-unbiasedness. If the loss function is squared error, (1.31) becomes
$$E_\theta[\delta(X) - g(\theta')]^2 \ge E_\theta[\delta(X) - g(\theta)]^2 \quad \text{for all } \theta' \ne \theta. \qquad (1.32)$$
Suppose that $E_\theta(\delta^2) < \infty$ and that $E_\theta(\delta) \in \Omega_g$ for all $\theta$, where $\Omega_g = \{g(\theta) : \theta \in \Omega\}$. [The latter condition is, of course, automatically satisfied when $\Omega = (-\infty, \infty)$ and $g(\theta) = \theta$, as is the case when $\theta$ is a location parameter.] Then, the left side of (1.32) is minimized by $g(\theta') = E_\theta\delta(X)$ (Example 1.7.17), and the condition of risk-unbiasedness therefore reduces to the usual unbiasedness condition
$$E_\theta\delta(X) = g(\theta). \qquad (1.33)$$
∥

Example 1.26 Median-unbiasedness. If the loss function is absolute error, (1.31) becomes
$$E_\theta|\delta(X) - g(\theta')| \ge E_\theta|\delta(X) - g(\theta)| \quad \text{for all } \theta' \ne \theta. \qquad (1.34)$$
By Example 1.7.18, the left side of (1.34) is minimized by any median of $\delta(X)$. It follows that (1.34) reduces to the condition
$$\mathrm{med}_\theta\,\delta(X) = g(\theta), \qquad (1.35)$$
that is, $g(\theta)$ is a median of $\delta(X)$, provided $E_\theta|\delta| < \infty$ and $\Omega_g$ contains a median of $\delta(X)$ for all $\theta$. An estimator $\delta$ satisfying (1.35) is called median-unbiased. ∥
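The connection between these notions can be checked by simulation. By Theorem 1.27 below, an MRE estimator is risk-unbiased, so under absolute error it must be median-unbiased in the sense of (1.35). The following sketch (ours, not from the text; parameter values arbitrary) verifies this for the estimator $X_{(1)} - b(\log 2)/n$ of Example 1.18(b).

```python
import numpy as np

rng = np.random.default_rng(1)
n, b, xi, reps = 4, 2.0, 1.5, 200_000

x = xi + rng.exponential(scale=b, size=(reps, n))
delta = x.min(axis=1) - b * np.log(2) / n    # MRE under absolute error

# med(X(1)) = xi + b*log(2)/n, so the median of delta should be xi itself:
print(np.median(delta), "vs", xi)
```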
Theorem 1.27 If $\delta$ is MRE for estimating $\xi$ in model (1.9) with loss function (1.15), then it is risk-unbiased.

Proof. Condition (1.31) now becomes
$$E_\xi\rho[\delta(X) - \xi'] \ge E_\xi\rho[\delta(X) - \xi] \quad \text{for all } \xi' \ne \xi,$$
or, if without loss of generality we put $\xi = 0$,
$$E_0\rho[\delta(X) - a] \ge E_0\rho[\delta(X)] \quad \text{for all } a.$$
That this holds is an immediate consequence of the fact that $\delta(X) = \delta_0(X) - v^*(Y)$, where $v^*(y)$ minimizes (1.22). ∎

2 The Principle of Equivariance

In the present section, we shall extend the invariance considerations of the binomial situation of Example 1.1 and the location families (1.9) to the general situation in which the probability model remains invariant under a suitable group of transformations.

Let $X$ be a random observable taking on values in a sample space $\mathcal X$ according to a probability distribution from the family
$$\mathcal P = \{P_\theta, \theta \in \Omega\}. \qquad (2.1)$$
Denote by $C$ a class of 1:1 transformations $g$ of the sample space onto itself.

Definition 2.1
(i) If $g$ is a 1:1 transformation of the sample space onto itself, if for each $\theta$ the distribution of $X' = gX$ is again a member of $\mathcal P$, say $P_{\theta'}$, and if, as $\theta$ traverses $\Omega$, so does $\theta'$, then the probability model (2.1) is invariant under the transformation $g$.
(ii) If (i) holds for each member of a class of transformations $C$, then the model (2.1) is invariant under $C$.

A class of transformations that leaves a probability model invariant can always be assumed to be a group. To see this, let $G = G(C)$ be the set of all compositions (defined in Section 1.4) of a finite number of transformations $g_1^{\pm 1} \cdots g_m^{\pm 1}$ with $g_1, \ldots, g_m \in C$, where each of the exponents can be $+1$ or $-1$ and where the elements $g_1, \ldots, g_m$ need not be distinct. Then, any element $g \in G$ leaves (2.1) invariant, and $G$ is a group (Problem 2.1), the group generated by $C$.

Example 2.2 Location family.
(a) Consider the location family (1.9) and the group of transformations $X' = X + a$, which was already discussed in (1.10) and Example 4.1. It is seen from (1.12) that if $X$ is distributed according to (1.9) with $\theta = \xi$, then $X' = X + a$ has the density (1.9) with $\theta' = \xi' = \xi + a$, so that the model (1.9) is preserved under these transformations.
(b) Suppose now that, in addition, $f$ has the symmetry property
$$f(-x) = f(x) \qquad (2.2)$$
where $-x = (-x_1, \ldots, -x_n)$, and consider the transformation $x' = -x$. The density of $X'$ is
$$f(-x_1' - \xi, \ldots, -x_n' - \xi) = f(x_1' - \xi', \ldots, x_n' - \xi') \quad \text{if } \xi' = -\xi.$$
Thus, model (1.9) is invariant under the transformations $x' = -x$, $\xi' = -\xi$, and hence under the group consisting of this transformation and the identity (Problem 2.2). This is not true, however, if $f$ does not satisfy (2.2). If, for example, $X_1, \ldots, X_n$ are iid according to the exponential distribution $E(\xi, 1)$, then the variables $-X_1, \ldots, -X_n$ no longer have an exponential distribution. ∥

Let $\{gX, g \in G\}$ be a group of transformations of the sample space which leave the model invariant. If $gX$ has the distribution $P_{\theta'}$, then $\theta' = \bar g\theta$ defines a function which maps $\Omega$ onto $\Omega$, and the transformation $\bar g$ is 1:1, provided the distributions $P_\theta$, $\theta \in \Omega$, are distinct (Problem 2.3). It is easy to see that the transformations $\bar g$ then also form a group, which will be denoted by $\bar G$ (Problem 2.4). From the definition of $\bar g\theta$, it follows that
$$P_\theta(gX \in A) = P_{\bar g\theta}(X \in A) \qquad (2.3)$$
where the subscript on the left side indicates the distribution of $X$, not that of $gX$. More generally, for a function $\psi$ whose expectation is defined,
$$E_\theta[\psi(gX)] = E_{\bar g\theta}[\psi(X)]. \qquad (2.4)$$
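Relation (2.4) is easily illustrated by Monte Carlo. The sketch below (ours) takes the location group $gX = X + a$ acting on a $N(\xi, 1)$ observation, with an arbitrary test function $\psi$ and arbitrary values of $\xi$ and $a$.

```python
import numpy as np

rng = np.random.default_rng(2)
xi, a, reps = 0.7, 1.3, 500_000
psi = lambda x: np.cos(x) + x**2 / 10        # any integrable test function

left  = psi(rng.normal(xi, 1.0, reps) + a).mean()    # E_xi[psi(gX)]
right = psi(rng.normal(xi + a, 1.0, reps)).mean()    # E_{gbar(xi)}[psi(X)]
print(left, right)   # agree up to Monte Carlo error
```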
We have now generalized the transformations (1.10) and (1.11), and it remains to consider (1.13). This last generalization is most easily introduced by an example.

Example 2.3 Two-sample location family. Let $X = (X_1, \ldots, X_m)$ and $Y = (Y_1, \ldots, Y_n)$, and suppose that $(X, Y)$ has the joint density
$$f(x - \xi, y - \eta) = f(x_1 - \xi, \ldots, x_m - \xi, y_1 - \eta, \ldots, y_n - \eta). \qquad (2.5)$$
This model remains invariant under the transformations
$$g(x, y) = (x + a, y + b), \quad \bar g(\xi, \eta) = (\xi + a, \eta + b). \qquad (2.6)$$
Consider the problem of estimating
$$\Delta = \eta - \xi. \qquad (2.7)$$
If the transformed variables are denoted by $x' = x + a$, $y' = y + b$, $\xi' = \xi + a$, $\eta' = \eta + b$, then $\Delta$ is transformed into $\Delta' = \Delta + (b - a)$. Hence, an estimated value $d$, when expressed in the new coordinates, becomes
$$d' = d + (b - a). \qquad (2.8)$$
For the problem to remain invariant, we require, analogously to (1.14), that the loss function $L(\xi, \eta; d)$ satisfy
$$L[\xi + a, \eta + b; d + (b - a)] = L(\xi, \eta; d). \qquad (2.9)$$
It is easy to see (Problem 2.5) that this is the case if and only if $L$ depends only on the difference $(\eta - \xi) - d$, that is, if
$$L(\xi, \eta; d) = \rho(\Delta - d). \qquad (2.10)$$
Suppose, next, that instead of estimating $\eta - \xi$, the problem is that of estimating $h(\xi, \eta) = \xi^2 + \eta^2$. Under the transformations (2.6), $h(\xi, \eta)$ is transformed into $(\xi + a)^2 + (\eta + b)^2$. This does not lead to an analog of (2.8), since the transformed value does not depend on $(\xi, \eta)$ only through $h(\xi, \eta)$. Thus, the form of the function to be estimated plays a crucial role in invariance considerations. ∥

Now, consider the general problem of estimating $h(\theta)$ in model (2.1), which is assumed to be invariant under the transformations $X' = gX$, $\theta' = \bar g\theta$, $g \in G$. The additional assumption required is that, for any given $\bar g$, $h(\bar g\theta)$ depends on $\theta$ only through $h(\theta)$, that is,
$$h(\theta_1) = h(\theta_2) \text{ implies } h(\bar g\theta_1) = h(\bar g\theta_2). \qquad (2.11)$$
The common value of $h(\bar g\theta)$ for all $\theta$'s to which $h$ assigns the same value will then be denoted by
$$h(\bar g\theta) = g^* h(\theta). \qquad (2.12)$$
If $H$ is the set of values taken on by $h(\theta)$ as $\theta$ ranges over $\Omega$, the transformations $g^*$ are 1:1 from $H$ onto itself [Problem 2.8(a)]. As $\bar g$ ranges over $\bar G$, the transformations $g^*$ form a group $G^*$ (Problem 2.6). The estimated value $d$ of $h(\theta)$, when expressed in the new coordinates, becomes
$$d' = g^* d. \qquad (2.13)$$
Since the problems of estimating $h(\theta)$ in terms of $(X, \theta, d)$ or $h(\theta')$ in terms of $(X', \theta', d')$ represent the same physical situation expressed in a new coordinate system, the loss function should satisfy $L(\theta', d') = L(\theta, d)$. This leads to the following definition.

Definition 2.4 If the probability model (2.1) is invariant under $g$, the loss function $L$ satisfies
$$L(\bar g\theta, g^* d) = L(\theta, d), \qquad (2.14)$$
and $h(\theta)$ satisfies (2.11), then the problem of estimating $h(\theta)$ with loss function $L$ is invariant under $g$.

In this discussion, it was tacitly assumed that the set $D$ of possible decisions coincides with $H$. This need not, however, be the case. In Chapter 2, for example, estimators of a variance were permitted (with some misgiving) to take on negative values. In the more general case that $H$ is a subset of $D$, one can take the condition that (2.14) holds for all $\theta$ as the definition of $g^* d$. If $L(\theta, d) = L(\theta, d')$ for all $\theta$ implies $d = d'$, as is typically the case, $g^* d$ is uniquely defined by this condition, and $g^*$ is 1:1 from $D$ onto itself [Problem 2.8(b)].
In an invariant estimation problem, if $\delta$ is the estimator that we would like to use to estimate $h(\theta)$, there are two natural ways of estimating $g^* h(\theta)$, the estimand $h(\theta)$ expressed in the transformed system. One of these generalizes the estimators (1.6) and (1.16), and the other the estimators (1.6) and (1.17), of the preceding section.

1. Functional Equivariance. Quite generally, if we have decided to use $\delta(X)$ to estimate $h(\theta)$, it is natural to use $\varphi[\delta(X)]$ as the estimator of $\varphi[h(\theta)]$, for any function $\varphi$. If, for example, $\delta(X)$ is used to estimate the length $\theta$ of the edge of a cube, it is natural to estimate the volume $\theta^3$ of the cube by $[\delta(X)]^3$. Hence, if $d$ is the estimated value of $h(\theta)$, then $g^* d$ should be the estimated value of $g^* h(\theta)$. Applying this to $\varphi = g^*$ leads to
$$g^*\delta(X) \text{ as the estimator of } g^* h(\theta) \qquad (2.15)$$
when $\delta(X)$ is used to estimate $h(\theta)$.

2. Formal Invariance. Invariance under the transformations $g$, $\bar g$, and $g^*$ of the estimation of $h(\theta)$ means that the problem of estimating $h(\theta)$ in terms of $X$, $\theta$, and $d$ and that of estimating $g^* h(\theta)$ in terms of $X'$, $\theta'$, and $d'$ are formally the same and should therefore be treated the same. In generalization of (1.6) and (1.16), this means that we should use
$$\delta(X') = \delta(gX) \text{ to estimate } g^*[h(\theta)] = h(\bar g\theta). \qquad (2.16)$$
It seems desirable that these two principles should lead to the same estimator, and hence that
$$\delta(gX) = g^*\delta(X). \qquad (2.17)$$

Definition 2.5 In an invariant estimation problem, an estimator $\delta(X)$ is said to be equivariant if it satisfies (2.17) for all $g \in G$.

As was discussed in Section 1, the arguments for (2.15) and (2.16) are of a very different nature. The appropriateness of (2.16) results from the symmetries exhibited by the situation, represented mathematically by the invariance of the problem under the transformations $g \in G$. It gives expression to the idea that if some symmetries are present in an estimation problem, the estimators should possess the corresponding symmetries. It follows that (2.16) is no longer appropriate if the symmetry is invalidated by asymmetric prior information; if, for example, $\theta$ is known to be restricted to a subset $\omega$ of the parameter space $\Omega$ for which $\bar g\omega \ne \omega$, as was the case mentioned at the end of Example 1.1.1 and after Definition 1.3. In contrast, the argument leading to (2.15) is quite independent of any symmetry assumptions and simply reflects the fact that if $\delta(X)$ is a reasonable estimator of, say, $\theta$, then $\varphi[\delta(X)]$ is a reasonable estimator of $\varphi(\theta)$.

Example 2.6 Continuation of Example 2.3. In Example 2.3, $h(\xi, \eta) = \eta - \xi$, and by (2.8), $g^* d = d + (b - a)$. It follows that (2.17) becomes
$$\delta(x + a, y + b) = \delta(x, y) + b - a. \qquad (2.18)$$
If $\delta_0(X)$ and $\delta_0'(Y)$ are location equivariant estimators of $\xi$ and $\eta$, respectively, then $\delta(X, Y) = \delta_0'(Y) - \delta_0(X)$ is an equivariant estimator of $\eta - \xi$. ∥

The following theorem generalizes Theorem 1.4 to the present situation.

Theorem 2.7 If $\delta$ is an equivariant estimator in a problem which is invariant under a transformation $g$, then the risk function of $\delta$ satisfies
$$R(\bar g\theta, \delta) = R(\theta, \delta) \quad \text{for all } \theta. \qquad (2.19)$$

Proof. By definition, $R(\bar g\theta, \delta) = E_{\bar g\theta} L[\bar g\theta, \delta(X)]$. It follows from (2.4) that the right side is equal to
$$E_\theta L[\bar g\theta, \delta(gX)] = E_\theta L[\bar g\theta, g^*\delta(X)] = R(\theta, \delta).$$
∎

Looking back on Section 1, we see that the crucial fact underlying the success of the invariance approach was the constancy of the risk function of any equivariant estimator. Theorem 2.7 suggests the following simple condition for this property to obtain.
A group $G$ of transformations of a space is said to be transitive if for any two points there is a transformation in $G$ taking the first point into the second.

Corollary 2.8 Under the assumptions of Theorem 2.7, if $\bar G$ is transitive over the parameter space $\Omega$, then the risk function of any equivariant estimator is constant, that is, independent of $\theta$.

When the risk function of every equivariant estimator is constant, the best equivariant estimator (MRE) is obtained by minimizing that constant, so that a uniformly minimum risk equivariant estimator will then typically exist. In such problems, alternative characterizations of the best equivariant estimator can be obtained (see Problems 2.11 and 2.12). Berk (1967a) and Kariya (1989) provide a rigorous treatment, taking account of the associated measurability problems. A Bayesian approach to the derivation of best equivariant estimators is treated in Section 4.4.

Example 2.9 Conclusion of Example 2.3. In this example, $\theta = (\xi, \eta)$ and $\bar g\theta = (\xi + a, \eta + b)$. This group of transformations is transitive over $\Omega$ since, given any two points $(\xi, \eta)$ and $(\xi', \eta')$, there exist $a$ and $b$ such that $\xi + a = \xi'$ and $\eta + b = \eta'$. The MRE estimator can now be obtained in exact analogy to Section 3.1 (Problems 1.13 and 1.14). ∥

The estimation problem treated in Section 1 was greatly simplified by the fact that it was possible to dispense with randomized estimators. The corresponding result holds quite generally when $\bar G$ is transitive. If an estimator $\delta$ exists which is MRE among all nonrandomized estimators, it is then also MRE when randomization is permitted. To see this, note that a randomized estimator can be represented as $\delta'(X, W)$, where $W$ is independent of $X$ and has a known distribution, and that it is equivariant if $\delta'(gX, W) = g^*\delta'(X, W)$. Its risk is again constant, and for any $\theta = \theta_0$ it is equal to $E[h(W)]$, where $h(w) = E_{\theta_0}\{L[\theta_0, \delta'(X, w)]\}$. This risk is minimized by minimizing $h(w)$ for each $w$. However, by assumption, $h(w)$ is minimized by the nonrandomized estimator $\delta'(X, w) = \delta(X)$, and hence the MRE estimator can be chosen to be nonrandomized. The corresponding result need not hold when $\bar G$ is not transitive. A counterexample is given in Example 5.1.8.

Definition 2.10 For a group $G$ of transformations of $\Omega$, two points $\theta_1, \theta_2 \in \Omega$ are equivalent if there exists a $g \in G$ such that $g\theta_1 = \theta_2$. The totality of points equivalent to a given point (and hence to each other) is called an orbit of $G$. The group $G$ is transitive over $\Omega$ if it has only one orbit.

For the most part, we will consider transitive groups; however, there are some groups of interest that are not transitive.

Example 2.11 Binomial transformation group. Let $X \sim$ binomial$(n, p)$, $0 < p < 1$, and consider the group of transformations
$$gX = n - X, \quad \bar g p = 1 - p.$$
The orbits are the pairs $(p, 1 - p)$. The group is not transitive. ∥

Example 2.12 Orbits of a scale group. Let $X_1, \ldots, X_n$ be iid $N(\mu, \sigma^2)$, both parameters unknown, and consider estimation of $\sigma^2$. The model remains invariant under the scale group
$$gX_i = aX_i, \quad \bar g(\mu, \sigma^2) = (a\mu, a^2\sigma^2), \quad a > 0.$$
We shall now show that $(\mu_1, \sigma_1^2)$ and $(\mu_2, \sigma_2^2)$ lie on the same orbit if and only if $\mu_1/\sigma_1 = \mu_2/\sigma_2$. On the one hand, suppose that $\mu_1/\sigma_1 = \mu_2/\sigma_2$. Then, $\mu_2/\mu_1 = \sigma_2/\sigma_1 = a$, say, and $\mu_2 = a\mu_1$, $\sigma_2^2 = a^2\sigma_1^2$. On the other hand, if $\mu_2 = a\mu_1$ and $\sigma_2^2 = a^2\sigma_1^2$, then $\mu_2/\mu_1 = a$ and $\sigma_2/\sigma_1 = a$, so that $\mu_1/\sigma_1 = \mu_2/\sigma_2$. Thus, the values of $\tau = \mu/\sigma$ can be used to label the orbits of $\bar G$. ∥
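Example 2.12 can be used to illustrate numerically that the risk of an equivariant estimator is constant on orbits (the corollary that follows). The sketch below (ours, not from the text) uses the scale equivariant estimator $\sum x_i^2/(n+2)$ and the invariant loss $(d/\sigma^2 - 1)^2$; the parameter points $(1, 1)$ and $(2, 2)$ lie on the same orbit $\tau = 1$, while $(2, 1)$ lies on the orbit $\tau = 2$. All numerical settings are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(3)
n, reps = 5, 400_000

def risk(mu, sigma):
    x = rng.normal(mu, sigma, size=(reps, n))
    d = (x ** 2).sum(axis=1) / (n + 2)          # equivariant: d(ax) = a^2 d(x)
    return np.mean((d / sigma ** 2 - 1) ** 2)   # invariant loss

print(risk(1, 1), risk(2, 2))   # same orbit (tau = 1): risks agree
print(risk(2, 1))               # different orbit (tau = 2): risk differs
```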
The following corollary is a straightforward consequence of Theorem 2.7.

Corollary 2.13 Under the assumptions of Theorem 2.7, the risk function of any equivariant estimator is constant on the orbits of $\bar G$.

Proof. See Problem 2.15. ∎

In Section 1.4, group families were introduced as families of distributions generated by subjecting a random variable with a fixed distribution to a group of transformations. Consider now a family of distributions $\mathcal P = \{P_\theta, \theta \in \Omega\}$ which remains invariant under a group $G$ for which $\bar G$ is transitive over $\Omega$ and $g_1 \ne g_2$ implies $\bar g_1 \ne \bar g_2$. Let $\theta_0$ be any fixed element of $\Omega$. Then, $\mathcal P$ is exactly the group family of distributions of $\{gX, g \in G\}$ when $X$ has distribution $P_{\theta_0}$. Conversely, let $\mathcal P$ be the group family of the distributions of $gX$ as $g$ varies over $G$, when $X$ has a fixed distribution $P$, so that $\mathcal P = \{P_g, g \in G\}$. Then, $g$ can serve as the parameter $\theta$ and $G$ as the parameter space. In this notation, the starting distribution $P$ becomes $P_e$, where $e$ is the identity transformation. Thus, a family of distributions remains invariant under a transitive group of transformations of the sample space if and only if it is a group family.

When an estimation problem is invariant under a group of transformations and an MRE estimator exists, this seems the natural estimator to use; of the various principles we shall consider, equivariance, where it applies, is perhaps the most convincing. Yet, even this principle can run into difficulties. The following example illustrates the possibility of a problem remaining invariant under two different groups, $G_1$ and $G_2$, which lead to two different MRE estimators $\delta_1$ and $\delta_2$.

Example 2.14 Counterexample. Let the pairs $(X_1, X_2)$ and $(Y_1, Y_2)$ be independent, each with a bivariate normal distribution with mean zero. Let their covariance matrices be $\Sigma = [\sigma_{ij}]$ and $\Delta\Sigma = [\Delta\sigma_{ij}]$, $\Delta > 0$, and consider the problem of estimating $\Delta$. Let $G_1$ be the group of transformations
$$X_1' = a_1 X_1 + a_2 X_2, \qquad Y_1' = c(a_1 Y_1 + a_2 Y_2),$$
$$X_2' = b X_2, \qquad\qquad\quad\; Y_2' = c b Y_2. \qquad (2.20)$$
Then, $(X_1', X_2')$ and $(Y_1', Y_2')$ will again be independent, and each will have a bivariate normal distribution with zero mean. If the covariance matrix of $(X_1', X_2')$ is $\Sigma'$, that of $(Y_1', Y_2')$ is $\Delta'\Sigma'$, where $\Delta' = c^2\Delta$ (Problem 2.16). Thus, $G_1$ leaves the model invariant. If $h(\Sigma, \Delta) = \Delta$, (2.11) clearly holds, (2.12) and (2.13) become
$$\Delta' = c^2\Delta, \quad d' = c^2 d, \qquad (2.21)$$
respectively, and a loss function $L(\Delta, d)$ satisfies (2.14) provided $L(c^2\Delta, c^2 d) = L(\Delta, d)$. This condition holds if and only if $L$ is of the form
$$L(\Delta, d) = \rho(d/\Delta). \qquad (2.22)$$
[For the necessity of (2.22), see Problem 2.10.] An estimator $\delta$ of $\Delta$ is equivariant under the above transformations if
$$\delta(x', y') = c^2\delta(x, y). \qquad (2.23)$$
We shall now show that (2.23) holds if and only if
$$\delta(x, y) = k\,\frac{y_2^2}{x_2^2} \quad \text{for some value of } k \text{ a.e.} \qquad (2.24)$$
It is enough to prove this for the reduced sample space in which the matrix $\begin{pmatrix} x_1 & x_2 \\ y_1 & y_2 \end{pmatrix}$ is nonsingular and in which both $x_2$ and $y_2$ are $\ne 0$, since the rest of the sample space has probability zero. Let $G_1'$ be the subgroup of $G_1$ consisting of the transformations (2.20) with $b = c = 1$. The condition of equivariance under these transformations reduces to
$$\delta(x', y') = \delta(x, y). \qquad (2.25)$$
This is satisfied whenever $\delta$ depends only on $x_2$ and $y_2$, since $x_2' = x_2$ and $y_2' = y_2$. To see that this condition is also necessary for (2.25), suppose that $\delta$ satisfies (2.25), and let $(x_1', x_2; y_1', y_2)$ and $(x_1, x_2; y_1, y_2)$ be any two points in the reduced sample space which have the same second coordinates.
Then, there exist $a_1$ and $a_2$ such that
$$x_1' = a_1 x_1 + a_2 x_2, \quad y_1' = a_1 y_1 + a_2 y_2,$$
that is, there exists $g \in G_1'$ for which $g(x, y) = (x', y')$, and hence $\delta$ depends only on $x_2, y_2$. Consider now any $\delta'(x_2, y_2)$. To be equivariant under the full group $G_1$, $\delta'$ must satisfy
$$\delta'(bx_2, cby_2) = c^2\delta'(x_2, y_2). \qquad (2.26)$$
For $x_2 = y_2 = 1$, this condition becomes $\delta'(b, cb) = c^2\delta'(1, 1)$ and hence reduces to (2.24) with $x_2 = b$, $y_2 = bc$, and $k = \delta'(1, 1)$. This shows that (2.24) is necessary for $\delta$ to be equivariant; that it is sufficient is obvious.

The best equivariant estimator under $G_1$ is thus $k^* Y_2^2/X_2^2$, where $k^*$ is a value which minimizes
$$E_\Delta\,\rho\!\left(\frac{k Y_2^2}{\Delta X_2^2}\right) = E_1\,\rho\!\left(\frac{k Y_2^2}{X_2^2}\right).$$
Such a minimizing value will typically exist. Suppose, for example, that the loss is 1 if $|d - \Delta|/\Delta > 1/2$ and zero otherwise. Then, $k^*$ is obtained by maximizing
$$P_1\left\{\left|k\frac{Y_2^2}{X_2^2} - 1\right| < \frac{1}{2}\right\} = P_1\left\{\frac{1}{2k} < \frac{Y_2^2}{X_2^2} < \frac{3}{2k}\right\}.$$
As $k \to 0$ or $\infty$, this probability tends to zero, and a maximizing value therefore exists and can be determined from the distribution of $Y_2^2/X_2^2$ when $\Delta = 1$. Exactly the same argument applies if $G_1$ is replaced by the group $G_2$ of transformations
$$X_1' = b X_1, \qquad\qquad\quad\; Y_1' = c b Y_1,$$
$$X_2' = a_1 X_1 + a_2 X_2, \qquad Y_2' = c(a_1 Y_1 + a_2 Y_2),$$
and leads to the MRE estimator $k^* Y_1^2/X_1^2$. See Problems 2.19 and 2.20. ∥

In the location case, it turned out (Theorem 1.27) that an MRE estimator is always risk-unbiased. The extension of this result to the general case requires some assumptions.

Theorem 2.15 If $\bar G$ is transitive and $G^*$ commutative, then an MRE estimator is risk-unbiased.

Proof. Let $\delta$ be MRE and $\theta, \theta' \in \Omega$. Then, by the transitivity of $\bar G$, there exists $\bar g \in \bar G$ such that $\theta = \bar g\theta'$, and hence
$$E_\theta L[\theta', \delta(X)] = E_\theta L[\bar g^{-1}\theta, \delta(X)] = E_\theta L[\theta, g^*\delta(X)].$$
Now, if $\delta(X)$ is equivariant, so is $g^*\delta(X)$ (Problem 2.18), and, therefore, since $\delta$ is MRE,
$$E_\theta L[\theta, g^*\delta(X)] \ge E_\theta L[\theta, \delta(X)],$$
which completes the proof. ∎

Transitivity of $\bar G$ will usually [but not always; see Example 2.16(a) below] hold when an MRE estimator exists. On the other hand, commutativity of $G^*$ imposes a severe restriction. That the theorem need not be valid if either condition fails is shown by the following example.

Example 2.16 Counterexample. Let $X$ be $N(\xi, \sigma^2)$ with both parameters unknown, let the estimand be $\xi$, and let the loss function be
$$L(\xi, \sigma; d) = (d - \xi)^2/\sigma^2. \qquad (2.27)$$
(a) The problem remains invariant under the group $G_1$: $gx = x + c$. It follows from Section 1 that $X$ is MRE under $G_1$. However, $X$ is not risk-unbiased (Problem 2.19). Here, $\bar G_1$ is the group of transformations $\bar g(\xi, \sigma) = (\xi + c, \sigma)$, which is clearly not transitive. If the loss function is replaced by $(d - \xi)^2$, the problem still remains invariant under $G_1$; $X$ remains equivariant but is now risk-unbiased by Example 1.25. Transitivity of $\bar G$ is thus not necessary for the conclusion of Theorem 2.15.
(b) When the loss function is given by (2.27), the problem also remains invariant under the larger group $G_2$: $gx = ax + c$, $0 < a$. Since $X$ is equivariant under $G_2$ and MRE under $G_1$, it is also MRE under $G_2$. However, as stated in (a), $X$ is not risk-unbiased with respect to (2.27). Here, $G_2^*$ is the group of transformations $g^* d = ad + c$, and this is not commutative (Problem 2.19). ∥

The location problem considered in Section 1 provides an important example in which the assumptions of Theorem 2.15 are satisfied, and Theorem 1.27 is the specialization of Theorem 2.15 to that case. The scale problem, which will be considered in Section 3, provides another illustration.
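For part (a) of Example 2.16, the failure of risk-unbiasedness can be made explicit by a short computation (ours, in the spirit of Problem 2.19):

```latex
% Risk-unbiasedness (1.31) for delta(X) = X under the loss (2.27) would require
E_{\xi,\sigma}\!\left[\frac{(X - \xi')^2}{\sigma'^2}\right]
   = \frac{\sigma^2 + (\xi - \xi')^2}{\sigma'^2}
   \;\ge\;
E_{\xi,\sigma}\!\left[\frac{(X - \xi)^2}{\sigma^2}\right] = 1
\quad \text{for all } (\xi', \sigma') \neq (\xi, \sigma).
% Taking xi' = xi and sigma' > sigma makes the left side sigma^2/sigma'^2 < 1,
% so the condition fails and X is not risk-unbiased.
```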
We shall not attempt to generalize to the present setting the characterization of equivariant estimators which was obtained for the location case in Theorem 1.8. Some results in this direction, taking account also of the associated measurability problems, can be found in Eaton (1989) and Wijsman (1990). Instead, we shall consider in the next section some other extensions of the problem treated in Section 1.

We close this section by exhibiting a family $\mathcal P$ of distributions for which there exists no group leaving $\mathcal P$ invariant (except the trivial group consisting of the identity only).

Theorem 2.17 Let $X$ be distributed according to the power series distribution [see (2.3.9)]
$$P(X = k) = c_k\theta^k h(\theta); \quad k = 0, 1, \ldots, \; 0 < \theta < \infty. \qquad (2.28)$$
If $c_k > 0$ for all $k$, then there does not exist a transformation $gx = g(x)$ leaving the family (2.28) invariant except the identity transformation $g(x) = x$ for all $x$.

Proof. Suppose $Y = g(X)$ is a transformation leaving (2.28) invariant, and let $g(k) = a_k$ and $\bar g\theta = \mu$. Then, $P_\theta(X = k) = P_\mu(Y = a_k)$ and hence
$$c_k\theta^k h(\theta) = c_{a_k}\mu^{a_k} h(\mu). \qquad (2.29)$$
Replacing $k$ by $k + 1$ and dividing the resulting equation by (2.29), we see that
$$\frac{c_{k+1}}{c_k}\,\theta = \frac{c_{a_{k+1}}}{c_{a_k}}\,\mu^{a_{k+1} - a_k}. \qquad (2.30)$$
Replacing $k$ by $k + 1$ in (2.30) and dividing the resulting equation by (2.30) shows that $\mu^{a_{k+2} - a_{k+1}}$ is proportional to $\mu^{a_{k+1} - a_k}$ for all $0 < \mu < \infty$, and hence that $a_{k+2} - a_{k+1} = a_{k+1} - a_k$. If we denote this common value by $\Delta$, we get
$$a_k = a_0 + k\Delta \quad \text{for } k = 0, 1, 2, \ldots. \qquad (2.31)$$
Invariance of the model requires the set (2.31) to be a permutation of the set $\{0, 1, 2, \ldots\}$. This implies that $\Delta > 0$ and hence that $a_0 = 0$ and $\Delta = 1$, that is, $a_k = k$ and $g$ is the identity. ∎

Example 2.11 shows that this result no longer holds if $c_k = 0$ for $k$ exceeding some $k_0$; see Problem 2.28.

3 Location-Scale Families

The location model discussed in Section 1 provides a good introduction to the ideas of equivariance, but it is rarely realistic. Even when it is reasonable to assume the form of the density $f$ in (1.9) to be known, it is usually desirable to allow the model to contain an unknown scale parameter. The standard normal model, according to which $X_1, \ldots, X_n$ are iid as $N(\xi, \sigma^2)$, is the most common example of such a location-scale model. In this section, we apply some of the general principles developed in Section 2 to location-scale models, as well as to some other group models.

As preparation for the analysis of these models, we begin with the case, which is also of interest in its own right, in which the only unknown parameter is a scale parameter. Let $X = (X_1, \ldots, X_n)$ have joint probability density
$$\frac{1}{\tau^n} f\!\left(\frac{x}{\tau}\right) = \frac{1}{\tau^n} f\!\left(\frac{x_1}{\tau}, \ldots, \frac{x_n}{\tau}\right), \quad \tau > 0, \qquad (3.1)$$
where $f$ is known and $\tau$ is an unknown scale parameter. This model remains invariant under the transformations
$$X_i' = bX_i, \quad \tau' = b\tau \quad \text{for } b > 0. \qquad (3.2)$$
The estimand of primary interest is $h(\tau) = \tau^r$. Since $h$ is strictly monotone, (2.11) is vacuously satisfied. Transformations (3.2) induce the transformations
$$h(\tau) \to b^r\tau^r = b^r h(\tau) \quad \text{and} \quad d' = b^r d, \qquad (3.3)$$
and the loss function $L$ is invariant under these transformations provided
$$L(b\tau, b^r d) = L(\tau, d). \qquad (3.4)$$
This is the case if and only if it is of the form (Problem 3.1)
$$L(\tau, d) = \gamma\!\left(\frac{d}{\tau^r}\right). \qquad (3.5)$$
Examples are
$$L(\tau, d) = \frac{(d - \tau^r)^2}{\tau^{2r}} \quad \text{and} \quad L(\tau, d) = \frac{|d - \tau^r|}{\tau^r}, \qquad (3.6)$$
but not squared error. An estimator $\delta$ of $\tau^r$ is equivariant under (3.2), or scale equivariant, provided
$$\delta(bX) = b^r\delta(X). \qquad (3.7)$$
All the usual estimators of $\tau$ are scale equivariant; for example, the standard deviation $\sqrt{\sum(X_i - \bar X)^2/(n-1)}$, the mean deviation $\sum|X_i - \bar X|/n$, the range, and the maximum likelihood estimator [Problem 3.1(b)].
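The equivariance property (3.7) of these estimators is immediate to verify, either by hand or numerically; the following sketch (ours, with $r = 1$ and arbitrary data and scale factor) checks it for the first three.

```python
import numpy as np

rng = np.random.default_rng(4)
x, b = rng.exponential(1.0, size=8), 3.7

estimators = {
    "std dev":  lambda v: np.sqrt(((v - v.mean()) ** 2).sum() / (len(v) - 1)),
    "mean dev": lambda v: np.abs(v - v.mean()).mean(),
    "range":    lambda v: v.max() - v.min(),
}
for name, d in estimators.items():
    print(name, np.isclose(d(b * x), b * d(x)))   # True: d(bx) = b*d(x)
```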
Since the group $\bar G$ of transformations $\tau' = b\tau$, $b > 0$, is transitive over $\Omega$, the risk of any equivariant estimator is constant by Corollary 2.8, so that one can expect an MRE estimator to exist. To derive it, we first characterize the totality of equivariant estimators.

Theorem 3.1 Let $X$ have density (3.1), and let $\delta_0(X)$ be any scale equivariant estimator of $\tau^r$. Let
$$z_i = \frac{x_i}{x_n} \;(i = 1, \ldots, n-1) \quad \text{and} \quad z_n = \frac{x_n}{|x_n|}, \qquad (3.8)$$
and let $z = (z_1, \ldots, z_n)$. Then, a necessary and sufficient condition for $\delta$ to satisfy (3.7) is that there exists a function $w(z)$ such that
$$\delta(x) = \frac{\delta_0(x)}{w(z)}.$$

Proof. Analogous to Lemma 1.6, a necessary and sufficient condition for $\delta$ to satisfy (3.7) is that it is of the form $\delta(x) = \delta_0(x)/u(x)$, where (Problem 3.4)
$$u(bx) = u(x) \quad \text{for all } x \text{ and all } b > 0. \qquad (3.9)$$
It remains to show that (3.9) holds if and only if $u$ depends on $x$ only through $z$. Note here that $z$ is defined when $x_n \ne 0$ and, hence, with probability 1. That any function of $z$ satisfies (3.9) is obvious. Conversely, if (3.9) holds, then
$$u(x_1, \ldots, x_n) = u\!\left(\frac{x_1}{x_n}, \ldots, \frac{x_{n-1}}{x_n}, \frac{x_n}{|x_n|}\right);$$
hence, $u$ does depend only on $z$, as was to be proved. ∎

Example 3.2 Scale equivariant estimator based on one observation. Suppose that $n = 1$. Then, the most general estimator satisfying (3.7) is of the form $X^r/w(Z)$, where $Z = X/|X|$ is $\pm 1$ as $X$ is $\ge$ or $\le 0$, so that
$$\delta(X) = \begin{cases} AX^r & \text{if } X > 0 \\ BX^r & \text{if } X < 0, \end{cases}$$
$A$ and $B$ being two arbitrary constants. ∥

Let us now determine the MRE estimator for a general scale family.

Theorem 3.3 Let $X$ be distributed according to (3.1) and let $Z$ be given by (3.8). Suppose that the loss function is given by (3.5) and that there exists an equivariant estimator $\delta_0$ of $\tau^r$ with finite risk. Assume that for each $z$, there exists a number $w(z) = w^*(z)$ which minimizes
$$E_1\{\gamma[\delta_0(X)/w(z)] \mid z\}. \qquad (3.10)$$
Then, an MRE estimator $\delta^*$ of $\tau^r$ exists and is given by
$$\delta^*(X) = \frac{\delta_0(X)}{w^*(Z)}. \qquad (3.11)$$
The proof parallels that of Theorem 1.10.

Corollary 3.4 Under the assumptions of Theorem 3.3, suppose that $\rho(v) = \gamma(e^v)$ is convex and not monotone. Then, an MRE estimator of $\tau^r$ exists; it is unique if $\rho$ is strictly convex.

Proof. By replacing $\gamma(w)$ by $\rho(\log w)$ [with $\rho(-\infty) = \gamma(0)$], the result essentially reduces to that of Corollary 1.11. This argument requires that $\delta \ge 0$, which can be assumed without loss of generality (Problem 3.2). ∎

Example 3.5 Standardized power loss. Consider the loss function
$$L(\tau, d) = \frac{|d - \tau^r|^p}{\tau^{pr}} = \left|\frac{d}{\tau^r} - 1\right|^p = \gamma\!\left(\frac{d}{\tau^r}\right) \qquad (3.12)$$
with $\gamma(v) = |v - 1|^p$. Then, $\rho$ is strictly convex for $v > 0$, provided $p \ge 1$ (Problem 3.5). Under the assumptions of Theorem 3.3, if we set
$$\gamma\!\left(\frac{d}{\tau^r}\right) = \frac{(d - \tau^r)^2}{\tau^{2r}}, \qquad (3.13)$$
then (Problem 3.10)
$$\delta^*(X) = \frac{\delta_0(X)\,E_1[\delta_0(X) \mid Z]}{E_1[\delta_0^2(X) \mid Z]}; \qquad (3.14)$$
if
$$\gamma\!\left(\frac{d}{\tau^r}\right) = \frac{|d - \tau^r|}{\tau^r}, \qquad (3.15)$$
then $\delta^*(X)$ is given by (3.11), with $w^*(Z)$ any scale median of $\delta_0(X)$ under the conditional distribution of $X$ given $Z$ with $\tau = 1$, that is, with $w^*(z)$ satisfying
$$E_1\{\delta_0(X)\,I[\delta_0(X) \ge w^*(z)] \mid z\} = E_1\{\delta_0(X)\,I[\delta_0(X) \le w^*(z)] \mid z\} \qquad (3.16)$$
(Problems 3.7 and 3.10). ∥

Example 3.6 Continuation of Example 3.2. Suppose that $n = 1$ and $X > 0$ with probability 1. Then, the arguments of Theorem 3.3 and Example 3.5 show that if $X^r$ has finite risk, the MRE estimator of $\tau^r$ is $X^r/w^*$, where $w^*$ is any value minimizing
$$E_1[\gamma(X^r/w)]. \qquad (3.17)$$
In particular, the MRE estimator is
$$X^r E_1(X^r)/E_1(X^{2r}) \qquad (3.18)$$
when the loss is (3.13), and it is $X^r/w^*$, where $w^*$ is any scale median of $X^r$ for $\tau = 1$, when the loss is (3.15). ∥
Example 3.7 MRE for normal variance, known mean. Let $X_1, \ldots, X_n$ be iid according to $N(0, \sigma^2)$ and consider the estimation of $\sigma^2$. For $\delta_0 = \sum X_i^2$, it follows from Basu's theorem that $\delta_0$ is independent of $Z$ and hence that $w^*(z) = w^*$ is a constant, determined by minimizing (3.17) with $\sum X_i^2$ in place of $X^r$. For the loss function (3.13) with $r = 2$, the MRE estimator turns out to be $\sum X_i^2/(n + 2)$ [Equation (2.2.26) or Problem 3.7]. ∥

Quite generally, when the loss function is (3.13), the MRE estimator of $\tau^r$ is given by
$$\delta^*(x) = \frac{\int_0^\infty v^{n+r-1} f(vx_1, \ldots, vx_n)\, dv}{\int_0^\infty v^{n+2r-1} f(vx_1, \ldots, vx_n)\, dv}, \qquad (3.19)$$
and in this form, it is known as the Pitman estimator of $\tau^r$. The proof parallels that of Theorem 1.20 (Problem 3.16).

The loss function (3.13) satisfies
$$\lim_{d \to \infty} L(\tau, d) = \infty \quad \text{but} \quad \lim_{d \to 0} L(\tau, d) = 1,$$
so that it assigns much heavier penalties to overestimation than to underestimation. An alternative to the loss functions (3.13) and (3.15), first introduced by Stein (James and Stein, 1961) and known as Stein's loss, is given by
$$L_s(\tau, d) = \frac{d}{\tau^r} - \log\frac{d}{\tau^r} - 1. \qquad (3.20)$$
For this loss, $\lim_{d \to \infty} L_s(\tau, d) = \lim_{d \to 0} L_s(\tau, d) = \infty$; it is thus somewhat more evenhanded. For another justification of (3.20), see Brown (1968, 1990b) and also Dey and Srinivasan (1985). The change in the estimator (3.14) when (3.13) is replaced by (3.20) is shown in the following corollary.

Corollary 3.8 Under the assumptions of Theorem 3.3, if the loss function is given by (3.20), the MRE estimator $\delta_s^*$ of $\tau^r$ is uniquely given by
$$\delta_s^* = \delta_0(X)/E_1[\delta_0(X) \mid z]. \qquad (3.21)$$

Proof. Problem 3.19. ∎

In light of the above discussion about the skewness of the loss function, it is interesting to compare $\delta_s^*$ of (3.21) with $\delta^*$ of (3.14). It is clear that $\delta_s^* \ge \delta^*$ if and only if $E_1[\delta_0^2(X) \mid Z] \ge \{E_1[\delta_0(X) \mid Z]\}^2$, which is always the case. Thus, $L_s$ results in an estimator which is larger.

Example 3.9 Normal scale estimation under Stein's loss. For the situation of Example 3.7, with $r = 2$, the MRE estimator under Stein's loss is $\delta_s^*(x) = \sum X_i^2/n$ (the maximum likelihood estimator), which is always larger than $\delta^* = \sum X_i^2/(n + 2)$, the MRE estimator under the loss (3.13). Brown (1968) explores the loss function $L_s$ further and shows that it is the only scale invariant loss function for which the UMVU estimator is also the MRE estimator. ∥

So far, the estimator $\delta$ has been assumed to be nonrandomized. Since $\bar G$ is transitive over $\Omega$, it follows from the result proved in the preceding section that randomized estimators need not be considered. It is further seen, as for the corresponding result in the location case, that if a sufficient statistic $T$ exists which permits a representation $T = (T_1, \ldots, T_r)$ with $T_i(bX) = bT_i(X)$ for all $b > 0$, then an MRE estimator can be found which depends only on $T$. Illustrations are provided by Example 3.7 and Problem 3.12, with $T = (\sum X_i^2)^{1/2}$ and $T = X_{(n)}$, respectively. When the loss function is (3.13), it follows from the factorization criterion that the MRE estimator (3.19) depends only on $T$.

Since the group $\tau' = b\tau$, $b > 0$, is transitive and the group $d' = b^r d$ is commutative, Theorem 2.15 applies and an MRE estimator is always risk-unbiased, although the MRE estimators of Examples 3.7 and 3.9 are not unbiased in the sense of Chapter 2. See also Problem 3.12.

Example 3.10 Risk-unbiasedness. If the loss function is (3.13), the condition of risk-unbiasedness reduces to
$$E_\tau[\delta^2(X)] = \tau^r E_\tau[\delta(X)]. \qquad (3.22)$$
Given any scale equivariant estimator $\delta_0(X)$ of $\tau^r$, there exists a value of $c$ for which $c\delta_0(X)$ satisfies (3.22), and for this value, $c\delta_0(X)$ has uniformly smaller risk than $\delta_0(X)$ unless $c = 1$ (Problem 3.21). If the loss function is (3.15), the condition of risk-unbiasedness requires that $E_\tau|\delta(X) - a|/a$ be minimized by $a = \tau^r$. From Example 3.5, for this loss function, risk-unbiasedness is equivalent to the condition that the estimand $\tau^r$ be equal to the scale median of $\delta(X)$. ∥
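The constants in Examples 3.7 and 3.9 can be recovered by simulation. Since $\sum X_i^2/\sigma^2 \sim \chi_n^2$, the risk of $c\sum X_i^2$ under (3.13) is $E(cW - 1)^2$ and under (3.20) it is $E[cW - \log(cW) - 1]$ with $W \sim \chi_n^2$; minimizing over a grid of $c$ should return approximately $1/(n+2)$ and $1/n$, respectively. A sketch (ours; grid and sample sizes arbitrary):

```python
import numpy as np

rng = np.random.default_rng(5)
n, reps = 6, 400_000
w = rng.chisquare(n, reps)            # sum(X_i^2)/sigma^2 ~ chi^2_n

cs = np.linspace(0.05, 0.4, 400)
sq_risk    = [np.mean((c * w - 1) ** 2) for c in cs]
stein_risk = [np.mean(c * w - np.log(c * w) - 1) for c in cs]

print("argmin, squared:", cs[np.argmin(sq_risk)],    "theory:", 1 / (n + 2))
print("argmin, Stein:  ", cs[np.argmin(stein_risk)], "theory:", 1 / n)
```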
Let us now turn to location-scale families, where the density of $X = (X_1, \ldots, X_n)$ is given by
$$\frac{1}{\tau^n} f\!\left(\frac{x_1 - \xi}{\tau}, \ldots, \frac{x_n - \xi}{\tau}\right) \qquad (3.23)$$
with both parameters unknown. Consider first the estimation of $\tau^r$ with loss function (3.5). This problem remains invariant under the transformations
$$X_i' = a + bX_i, \quad \xi' = a + b\xi, \quad \tau' = b\tau \quad (b > 0), \qquad (3.24)$$
and $d' = b^r d$, and an estimator $\delta$ of $\tau^r$ is equivariant under this group if
$$\delta(a + bX) = b^r\delta(X). \qquad (3.25)$$
Consider first only a change in location,
$$X_i' = X_i + a, \qquad (3.26)$$
which takes $\xi$ into $\xi' = \xi + a$ but leaves $\tau$ unchanged. By (3.25), $\delta$ must then satisfy
$$\delta(x + a) = \delta(x), \qquad (3.27)$$
that is, remain invariant. By Lemma 1.7, condition (3.27) holds if and only if $\delta$ is a function only of the differences $y_i = x_i - x_n$. The joint density of the $Y$'s is
$$\frac{1}{\tau^n}\int_{-\infty}^{\infty} f\!\left(\frac{y_1 + t}{\tau}, \ldots, \frac{y_{n-1} + t}{\tau}, \frac{t}{\tau}\right) dt = \frac{1}{\tau^{n-1}}\int_{-\infty}^{\infty} f\!\left(\frac{y_1}{\tau} + u, \ldots, \frac{y_{n-1}}{\tau} + u, u\right) du. \qquad (3.28)$$
Since this density has the structure (3.1) of a scale family, Theorem 3.3 applies and provides the estimator that uniformly minimizes the risk among all estimators satisfying (3.25). It follows from Theorem 3.3 that such an MRE estimator of $\tau^r$ is given by
$$\delta(X) = \frac{\delta_0(Y)}{w^*(Z)} \qquad (3.29)$$
where $\delta_0(Y)$ is any finite-risk scale equivariant estimator of $\tau^r$ based on $Y = (Y_1, \ldots, Y_{n-1})$, where $Z = (Z_1, \ldots, Z_{n-1})$ with
$$Z_i = \frac{Y_i}{Y_{n-1}} \;(i = 1, \ldots, n-2) \quad \text{and} \quad Z_{n-1} = \frac{Y_{n-1}}{|Y_{n-1}|}, \qquad (3.30)$$
and where $w^*(Z)$ is any number minimizing
$$E_{\tau=1}\{\gamma[\delta_0(Y)/w(Z)] \mid Z\}. \qquad (3.31)$$

Example 3.11 MRE for normal variance, unknown mean. Let $X_1, \ldots, X_n$ be iid according to $N(\xi, \sigma^2)$ and consider the estimation of $\sigma^2$ with loss function (3.13), $r = 2$. By Basu's theorem, $(\bar X, \sum(X_i - \bar X)^2)$ is independent of $Z$. If $\delta_0 = \sum(X_i - \bar X)^2$, then $\delta_0$ is equivariant under (3.24) and independent of $Z$. Hence, $w^*(z) = w^*$ in (3.29) is a constant, determined by minimizing (3.17) with $\sum(X_i - \bar X)^2$ in place of $X^r$. Since $\sum(X_i - \bar X)^2$ has the distribution of the $\delta_0$ of Example 3.7 with $n - 1$ in place of $n$, the MRE estimator for the loss function (3.13) with $r = 2$ is $\sum(X_i - \bar X)^2/(n + 1)$. ∥

Example 3.12 Uniform. Let $X_1, \ldots, X_n$ be iid according to $U(\xi - \frac{1}{2}\tau, \xi + \frac{1}{2}\tau)$, and consider the problem of estimating $\tau$ with loss function (3.13), $r = 1$. By Basu's theorem, $(X_{(1)}, X_{(n)})$ is independent of $Z$. If $\delta_0$ is the range $R = X_{(n)} - X_{(1)}$, it is equivariant under (3.24) and independent of $Z$. It follows from (3.18) with $r = 1$ that (Problem 3.22)
$$\delta^*(X) = \frac{n + 2}{n}\,R. \; ∥$$
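The divisors in Examples 3.11 and 3.12 can be checked the same way (a sketch of ours; all numerical settings arbitrary). For Example 3.11, $S^2/\sigma^2 \sim \chi_{n-1}^2$ with $S^2 = \sum(X_i - \bar X)^2$; for Example 3.12, we take $\tau = 1$ and search over multiples of the range.

```python
import numpy as np

rng = np.random.default_rng(6)
reps, n = 400_000, 6

# Example 3.11: among c*S^2, risk E[(c*S^2/sigma^2 - 1)^2] is least at c = 1/(n+1).
s2 = rng.chisquare(n - 1, reps)                 # S^2/sigma^2 ~ chi^2_{n-1}
cs = np.linspace(0.05, 0.5, 400)
risks = [np.mean((c * s2 - 1) ** 2) for c in cs]
print(cs[np.argmin(risks)], "theory:", 1 / (n + 1))

# Example 3.12: among c*R, R the range of n uniforms, the best c is (n+2)/n.
u = rng.uniform(0, 1, size=(reps, n))           # tau = 1
r = u.max(axis=1) - u.min(axis=1)
cs = np.linspace(0.8, 1.8, 400)
risks = [np.mean((c * r - 1) ** 2) for c in cs]
print(cs[np.argmin(risks)], "theory:", (n + 2) / n)
```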
Since the group $\xi' = a + b\xi$, $\tau' = b\tau$ is transitive and the group $d' = b^r d$ is commutative, it follows (as in the pure scale case) that an MRE estimator is always risk-unbiased.

The principle of equivariance seems to suggest that we should invoke as much invariance as possible and hence use the largest group $G$ of transformations leaving the problem invariant. Such a group may have the disadvantage of restricting the class of eligible estimators too much. (See, for example, Problem 2.7.) To increase the number of available estimators, we may then want to restrict attention to a subgroup $G_0$ of $G$. Since estimators that are equivariant under $G$ are automatically also equivariant under $G_0$, invariance under $G_0$ alone will leave us with a larger choice, which may enable us to obtain improved risk performance.

For estimating the scale parameter in a location-scale family, a natural subgroup of (3.24) is obtained by setting $a = 0$, which reduces (3.24) to the scale group
$$X_i' = bX_i, \quad \xi' = b\xi, \quad \tau' = b\tau \quad (b > 0), \qquad (3.32)$$
and $d' = b^r d$. An estimator $\delta$ of $\tau^r$ is equivariant under this group if $\delta(bX) = b^r\delta(X)$, as in (3.7). Application of Theorem 3.1 shows that the equivariant estimators are of the form
$$\delta(x) = \frac{\delta_0(x)}{w(z)} \qquad (3.33)$$
where $\delta_0$ is any scale equivariant estimator and $w(z)$ is a function of $z_i = x_i/x_n$, $i = 1, \ldots, n-1$, and $z_n = x_n/|x_n|$. However, we cannot now apply Theorem 3.3 to obtain the MRE estimator, because the group is no longer transitive (Example 2.12), and the risk of equivariant estimators is no longer constant. We can, however, go further in special cases, such as in the following example.

Example 3.13 More normal variance estimation. If $X_1, \ldots, X_n$ are iid as $N(\xi, \tau^2)$, with both parameters unknown, then it was shown in Example 3.11 that $\delta_0(x) = \sum(x_i - \bar x)^2/(n + 1) = s^2/(n + 1)$ is MRE under the location-scale group (3.24) for the loss function (3.13) with $r = 2$. Now consider the scale group (3.32). Of course, $\delta_0$ is equivariant under this group, but so are the estimators $\delta(x) = \varphi(\bar x/s)s^2$ for some function $\varphi(\cdot)$ (Problem 3.24). Stein (1964) showed that $\varphi(\bar x/s) = \min\{(n+1)^{-1}, (n+2)^{-1}(1 + n\bar x^2/s^2)\}$ produces a uniformly better estimator than $\delta_0$, and Brewster and Zidek (1974) found the best scale equivariant estimator. See Example 5.2.15 and Problem 5.2.14 for more details. ∥

In the location-scale family (3.23), we have so far considered only the estimation of $\tau^r$; let us now take up the problem of estimating the location parameter $\xi$. The transformations (3.24) of the sample space and parameter space remain the same, but the transformations of the decision space now become $d' = a + bd$. A loss function $L(\xi, \tau; d)$ is invariant under these transformations if and only if it is of the form
$$L(\xi, \tau; d) = \rho\!\left(\frac{d - \xi}{\tau}\right). \qquad (3.34)$$
That any such loss function is invariant is obvious. Conversely, suppose that $L$ is invariant and that $(\xi, \tau; d)$ and $(\xi', \tau'; d')$ are two points with $(d' - \xi')/\tau' = (d - \xi)/\tau$. Putting $b = \tau'/\tau$ and $a = \xi' - b\xi$, one has $d' = a + bd$, $\xi' = a + b\xi$, and $\tau' = b\tau$; hence $L(\xi', \tau'; d') = L(\xi, \tau; d)$, as was to be proved. Equivariance in the present case becomes
$$\delta(a + bx) = a + b\delta(x), \quad b > 0. \qquad (3.35)$$
Since $\bar G$ is transitive over the parameter space, the risk of any equivariant estimator is constant, so that an MRE estimator can be expected to exist. In some special cases, the MRE estimator reduces to that derived in Section 1 with $\tau$ known, as follows. For fixed $\tau$, write
$$g_\tau(x_1, \ldots, x_n) = \frac{1}{\tau^n} f\!\left(\frac{x_1}{\tau}, \ldots, \frac{x_n}{\tau}\right) \qquad (3.36)$$
so that (3.23) becomes
$$g_\tau(x_1 - \xi, \ldots, x_n - \xi). \qquad (3.37)$$

Lemma 3.14 Suppose that for the location family (3.37) and loss function (3.34), there exists an MRE estimator $\delta^*$ of $\xi$ with respect to the transformations (1.10) and (1.11), and that
(a) $\delta^*$ is independent of $\tau$, and
(b) $\delta^*$ satisfies (3.35).
Then $\delta^*$ minimizes the risk among all estimators satisfying (3.35).

Proof. Suppose $\delta$ is any other estimator which satisfies (3.35) and hence, a fortiori, is equivariant with respect to the transformations (1.10) and (1.11), and suppose that the value $\tau$ of the scale parameter is known. It follows from the assumptions about $\delta^*$ that, for this $\tau$, the risk of $\delta^*$ does not exceed that of $\delta$. Since this is true for all values of $\tau$, the result follows. ∎
Example 3.15 MRE for normal mean. Let $X_1, \ldots, X_n$ be iid as $N(\xi, \tau^2)$, both parameters being unknown. Then, it follows from Example 1.16 that $\delta^* = \bar X$ for any loss function $\rho[(d - \xi)/\tau]$ for which $\rho$ satisfies the assumptions of Example 1.16. Since (a) and (b) of Lemma 3.14 hold for this $\delta^*$, it is the MRE estimator of $\xi$ under the transformations (3.24). ∥

Example 3.16 Uniform location parameter. Let $X_1, \ldots, X_n$ be iid as $U(\xi - \frac{1}{2}\tau, \xi + \frac{1}{2}\tau)$. Then, analogously to Example 3.15, it follows from Example 1.19 that $[X_{(1)} + X_{(n)}]/2$ is MRE for the loss functions of Example 3.15. ∥

Unfortunately, the MRE estimators of Section 1 typically do not satisfy the assumptions of Lemma 3.14. This is the case, for instance, with the estimators of Examples 1.18 and 1.22. To derive the MRE estimator without these assumptions, let us first characterize the totality of equivariant estimators.

Theorem 3.17 Let $\delta_0$ be any estimator of $\xi$ satisfying (3.35), and let $\delta_1$ be any estimator of $\tau$ taking on positive values only and satisfying
$$\delta_1(a + bx) = b\delta_1(x) \quad \text{for all } b > 0 \text{ and all } a. \qquad (3.38)$$
Then, $\delta$ satisfies (3.35) if and only if it is of the form
$$\delta(x) = \delta_0(x) - w(z)\delta_1(x) \qquad (3.39)$$
where $z$ is given by (3.30).

Proof. Analogous to Lemma 1.6, it is seen that $\delta$ satisfies (3.35) if and only if it is of the form
$$\delta(x) = \delta_0(x) - u(x)\delta_1(x), \qquad (3.40)$$
where
$$u(a + bx) = u(x) \quad \text{for all } b > 0 \text{ and all } a \qquad (3.41)$$
(Problem 3.26). That (3.41) holds if and only if $u$ depends on $x$ only through $z$ follows from Lemma 1.7 and Theorem 3.1. ∎

An argument paralleling that of Theorem 1.10 now shows that the MRE estimator of $\xi$ is $\delta(X) = \delta_0(X) - w^*(Z)\delta_1(X)$, where for each $z$, $w^*(z)$ is any number minimizing
$$E_{0,1}\{\rho[\delta_0(X) - w^*(z)\delta_1(X)] \mid z\}. \qquad (3.42)$$
Here, $E_{0,1}$ indicates that the expectation is evaluated at $\xi = 0$, $\tau = 1$. If, in particular,
$$\rho\!\left(\frac{d - \xi}{\tau}\right) = \frac{(d - \xi)^2}{\tau^2}, \qquad (3.43)$$
it is easily seen that $w^*(z)$ is
$$w^*(z) = E_{0,1}[\delta_0(X)\delta_1(X) \mid z]/E_{0,1}[\delta_1^2(X) \mid z]. \qquad (3.44)$$

Example 3.18 Exponential. Let $X_1, \ldots, X_n$ be iid according to the exponential distribution $E(\xi, \tau)$. If $\delta_0(X) = X_{(1)}$ and $\delta_1(X) = \sum[X_i - X_{(1)}]$, it follows from Example 1.6.24 that $(\delta_0, \delta_1)$ are jointly independent of $Z$ and are also independent of each other. Then (Problem 3.25),
$$w^*(z) = w^* = \frac{E[\delta_0(X)\delta_1(X)]}{E[\delta_1^2(X)]} = \frac{1}{n^2},$$
and the MRE estimator of $\xi$ is therefore
$$\delta^*(X) = X_{(1)} - \frac{1}{n^2}\sum[X_i - X_{(1)}].$$
When the best location equivariant estimator is not also scale equivariant, its risk is, of course, smaller than that of the MRE under (3.35). Some numerical values of the increase that results from the additional requirement are given for a number of situations by Hoaglin (1975). ∥

For the loss function (3.43), no risk-unbiased estimator $\delta$ exists, since this would require that, for all $\xi$, $\xi'$, $\tau$, and $\tau'$,
$$\frac{1}{\tau^2} E_{\xi,\tau}[\delta(X) - \xi]^2 \le \frac{1}{\tau'^2} E_{\xi,\tau}[\delta(X) - \xi']^2, \qquad (3.45)$$
which is clearly impossible. Perhaps (3.45) is too strong and should be required only when $\tau' = \tau$. It then reduces to (1.32) with $\theta = (\xi, \tau)$ and $g(\theta) = \xi$, and this weakened form of (3.45) reduces to the classical unbiasedness condition $E_{\xi,\tau}[\delta(X)] = \xi$. A UMVU estimator of $\xi$ exists in Example 3.18 (Problem 2.2.18), but it is
$$\delta(X) = X_{(1)} - \frac{1}{n(n-1)}\sum[X_i - X_{(1)}]$$
rather than $\delta^*(X)$, and the latter is not unbiased (Problem 3.27).
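The comparison between the MRE and UMVU estimators of Example 3.18 is easy to simulate (a sketch of ours; parameter values arbitrary). Since the UMVU estimator is itself of the equivariant form (3.39) with a constant $w$, its risk under (3.43) must be at least that of $\delta^*$.

```python
import numpy as np

rng = np.random.default_rng(7)
n, xi, tau, reps = 5, 2.0, 1.5, 400_000

x = xi + rng.exponential(tau, size=(reps, n))
x1 = x.min(axis=1)
s = x.sum(axis=1) - n * x1                  # sum(X_i - X(1))

mre  = x1 - s / n**2                        # Example 3.18
umvu = x1 - s / (n * (n - 1))               # unbiased, but not MRE
loss = lambda d: np.mean((d - xi) ** 2 / tau ** 2)
print("MRE risk :", loss(mre))
print("UMVU risk:", loss(umvu), " (larger)")
```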
4 Normal Linear Models

Having developed the theory of unbiased estimation in Chapter 2 and of equivariant estimation in the first three sections of the present chapter, we shall now apply these results to some important classes of statistical models. One of the most widely used bodies of statistical techniques, comprising particularly the analysis of variance, regression, and the analysis of covariance, is formalized in terms of linear models, which will be defined and illustrated in the following. The examples, however, are not enough to give an idea of the full richness of the applications. For a more complete treatment, see, for example, the classic book by Scheffé (1959), or Seber (1977), Arnold (1981), Searle (1987), or Christensen (1987).

Consider the problem of investigating the effect of a number of different factors on a response. Typically, each factor can occur in a number of different forms or at a number of different levels. Factor levels can be qualitative or quantitative. Three possibilities arise, corresponding to three broad categories of linear models:
(a) All factor levels qualitative.
(b) All factor levels quantitative.
(c) Some factors of each kind.

Example 4.1 One-way layout. A simple illustration of category (a) is provided by the one-way layout, in which a single factor occurs at a number of qualitatively different levels. For example, we may wish to study the effect on performance of a number of different textbooks or the effect on weight loss of a number of diets. If $X_{ij}$ denotes the response of the $j$th subject receiving treatment $i$, it is often reasonable to assume that the $X_{ij}$ are independently distributed as
$$X_{ij} \sim N(\xi_i, \sigma^2), \quad j = 1, \ldots, n_i; \; i = 1, \ldots, s. \qquad (4.1)$$
Estimands that may be of interest are $\xi_i$ and $\xi_i - (1/s)\sum_{j=1}^{s}\xi_j$. ∥

Example 4.2 A simple regression model. As an example of type (b), consider the time required to memorize a list of words. If the number of words presented to the $i$th subject and the time it takes the subject to learn the words are denoted by $t_i$ and $X_i$, respectively, one might assume that, for the range of $t$'s of interest, the $X$'s are independently distributed as
$$X_i \sim N(\alpha + \beta t_i + \gamma t_i^2, \sigma^2) \qquad (4.2)$$
where $\alpha$, $\beta$, and $\gamma$ are the unknown regression coefficients, which are to be estimated. This would turn into an example of the third type if there were several groups of subjects. One might, for example, wish to distinguish between women and men or to see how learning ability is influenced by the form of the word list (whether it is handwritten, typed, or printed). The model might then become
$$X_{ij} \sim N(\alpha_i + \beta_i t_{ij} + \gamma_i t_{ij}^2, \sigma^2) \qquad (4.3)$$
where $X_{ij}$ is the response of the $j$th subject in the $i$th group. Here, the group is a qualitative factor and the length of the list a quantitative one. ∥

The general linear model, which covers all three cases, assumes that $X_i$ is distributed as
$$N(\xi_i, \sigma^2), \quad i = 1, \ldots, n, \qquad (4.4)$$
where the $X_i$ are independent and $(\xi_1, \ldots, \xi_n) \in \Pi_\Omega$, an $s$-dimensional linear subspace of $E_n$ $(s < n)$.

It is convenient to reduce this model to a canonical form by means of an orthogonal transformation
$$Y = XC \qquad (4.5)$$
where we shall use $Y$ to denote both the vector with components $(Y_1, \ldots, Y_n)$ and the row matrix $(Y_1 \cdots Y_n)$. If $\eta_i = E(Y_i)$, the $\eta$'s and $\xi$'s are related by
$$\eta = \xi C \qquad (4.6)$$
where $\eta = (\eta_1, \ldots, \eta_n)$ and $\xi = (\xi_1, \ldots, \xi_n)$. To find the distribution of the $Y$'s, note that the joint density of
$X_1, \ldots, X_n$ is
$$\frac{1}{(\sqrt{2\pi}\sigma)^n}\exp\left[-\frac{1}{2\sigma^2}\sum(x_i - \xi_i)^2\right],$$
that $\sum(x_i - \xi_i)^2 = \sum(y_i - \eta_i)^2$ since $C$ is orthogonal, and that the Jacobian of the transformation is 1. Hence, the joint density of $Y_1, \ldots, Y_n$ is
$$\frac{1}{(\sqrt{2\pi}\sigma)^n}\exp\left[-\frac{1}{2\sigma^2}\sum(y_i - \eta_i)^2\right].$$
The $Y$'s are therefore independent normal with $Y_i \sim N(\eta_i, \sigma^2)$, $i = 1, \ldots, n$.

If $c_i'$ denotes the $i$th column of $C$, the desired form is obtained by choosing the $c_i$ so that the first $s$ columns $c_1', \ldots, c_s'$ span $\Pi_\Omega$. Then, $\xi \in \Pi_\Omega$ if and only if $\xi$ is orthogonal to the last $n - s$ columns of $C$. Since $\eta = \xi C$, it follows that
$$\xi \in \Pi_\Omega \iff \eta_{s+1} = \cdots = \eta_n = 0. \qquad (4.7)$$
In terms of the $Y$'s, the model (4.4) thus becomes
$$Y_i \sim N(\eta_i, \sigma^2), \; i = 1, \ldots, s, \quad \text{and} \quad Y_j \sim N(0, \sigma^2), \; j = s+1, \ldots, n. \qquad (4.8)$$
As $(\xi_1, \ldots, \xi_n)$ varies over $\Pi_\Omega$, $(\eta_1, \ldots, \eta_s)$ varies unrestrictedly over $E_s$, while $\eta_{s+1} = \cdots = \eta_n = 0$. In this canonical model, $Y_1, \ldots, Y_s$ and $S^2 = \sum_{j=s+1}^{n} Y_j^2$ are complete sufficient statistics for $(\eta_1, \ldots, \eta_s, \sigma^2)$.

Theorem 4.3
(a) The UMVU estimators of $\sum_{i=1}^{s}\lambda_i\eta_i$ (where the $\lambda$'s are known constants) and of $\sigma^2$ are $\sum_{i=1}^{s}\lambda_i Y_i$ and $S^2/(n - s)$, respectively. (Here, UMVU is used in the strong sense of Section 2.1.)
(b) Under the transformations
$$Y_i' = Y_i + a_i \;(i = 1, \ldots, s); \quad Y_j' = Y_j \;(j = s+1, \ldots, n);$$
$$\eta_i' = \eta_i + a_i \;(i = 1, \ldots, s); \quad d' = d + \sum_{i=1}^{s} a_i\lambda_i,$$
and with loss function $L(\eta, d) = \rho(d - \sum\lambda_i\eta_i)$, where $\rho$ is convex and even, the UMVU estimator $\sum_{i=1}^{s}\lambda_i Y_i$ is also the MRE estimator of $\sum_{i=1}^{s}\lambda_i\eta_i$.
(c) Under the loss function $(d - \sigma^2)^2/\sigma^4$, the MRE estimator of $\sigma^2$ is $S^2/(n - s + 2)$.

Proof.
(a) Since $\sum_{i=1}^{s}\lambda_i Y_i$ and $S^2/(n - s)$ are unbiased and are functions of the complete sufficient statistics, they are UMVU.
(b) The condition of equivariance is that
$$\delta(Y_1 + c_1, \ldots, Y_s + c_s, Y_{s+1}, \ldots, Y_n) = \delta(Y_1, \ldots, Y_s, Y_{s+1}, \ldots, Y_n) + \sum_{i=1}^{s}\lambda_i c_i,$$
and the result follows from Problem 2.27.
(c) This follows essentially from Example 3.7 (see Problem 4.3). ∎

It would be more convenient to have the estimators expressed in terms of the original variables $X_1, \ldots, X_n$ rather than the transformed variables $Y_1, \ldots, Y_n$. For this purpose, we introduce the following definition. Let $\xi = (\xi_1, \ldots, \xi_n)$ be any vector in $\Pi_\Omega$. Then, the least squares estimators (LSE) $(\hat\xi_1, \ldots, \hat\xi_n)$ of $(\xi_1, \ldots, \xi_n)$ are those estimators which minimize $\sum_{i=1}^{n}(X_i - \xi_i)^2$ subject to the condition $\xi \in \Pi_\Omega$.

Theorem 4.4 Under the model (4.4), the UMVU estimator of $\sum_{i=1}^{n}\gamma_i\xi_i$ is $\sum_{i=1}^{n}\gamma_i\hat\xi_i$.

Proof. By Theorem 4.3 (and the completeness of $Y_1, \ldots, Y_s$ and $S^2$), it suffices to show that $\sum_{i=1}^{n}\gamma_i\hat\xi_i$ is a linear function of $Y_1, \ldots, Y_s$ and that it is unbiased for $\sum_{i=1}^{n}\gamma_i\xi_i$. Now,
$$\sum_{i=1}^{n}(X_i - \xi_i)^2 = \sum_{i=1}^{n}[Y_i - E(Y_i)]^2 = \sum_{i=1}^{s}(Y_i - \eta_i)^2 + \sum_{j=s+1}^{n} Y_j^2. \qquad (4.9)$$
The right side is minimized by $\hat\eta_i = Y_i$ $(i = 1, \ldots, s)$, and the left side is minimized by $\hat\xi_1, \ldots, \hat\xi_n$. Hence,
$$(Y_1 \cdots Y_s\; 0 \cdots 0) = (\hat\xi_1 \cdots \hat\xi_n)C = \hat\xi C,$$
so that $\hat\xi = (Y_1 \cdots Y_s\; 0 \cdots 0)C^{-1}$. It follows that each $\hat\xi_i$, and therefore $\sum_{i=1}^{n}\gamma_i\hat\xi_i$, is a linear function of $Y_1, \ldots, Y_s$. Furthermore,
$$E(\hat\xi) = E[(Y_1 \cdots Y_s\; 0 \cdots 0)C^{-1}] = (\eta_1 \cdots \eta_s\; 0 \cdots 0)C^{-1} = \xi.$$
Thus, each $\hat\xi_i$ is unbiased for $\xi_i$; consequently, $\sum_{i=1}^{n}\gamma_i\hat\xi_i$ is unbiased for $\sum_{i=1}^{n}\gamma_i\xi_i$. ∎

It is interesting to note that each of the two quite different equations $X = (Y_1 \cdots Y_n)C^{-1}$ and $\hat\xi = (Y_1 \cdots Y_s\; 0 \cdots 0)C^{-1}$ leads to $\xi = (\eta_1 \cdots \eta_s\; 0 \cdots 0)C^{-1}$ by taking expectations.
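The canonical reduction is easy to carry out numerically. The following sketch (ours, not from the text) builds an orthogonal $C$ for a small one-way layout via a QR decomposition, so that the first $s$ columns span $\Pi_\Omega$, and checks the identity $\sum(X_i - \hat\xi_i)^2 = S^2$ established below as (4.11); the design and parameter values are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(8)

# Design: n = 6 observations, s = 2 groups of 3; columns of D span Pi_Omega.
D = np.array([[1, 1, 1, 0, 0, 0],
              [0, 0, 0, 1, 1, 1]], dtype=float).T          # n x s
n, s = D.shape

Q, _ = np.linalg.qr(D, mode="complete")   # first s columns span Pi_Omega
C = Q                                     # orthogonal: y = x @ C

x = rng.normal(0.0, 1.0, n) + D @ np.array([1.0, 3.0])     # mean in Pi_Omega
y = x @ C
S2 = (y[s:] ** 2).sum()                   # sum of the last n-s squared Y's

xi_hat = D @ np.linalg.lstsq(D, x, rcond=None)[0]          # least squares fit
print(S2, ((x - xi_hat) ** 2).sum())      # equal, as in (4.11)
print("UMVU sigma^2:", S2 / (n - s), "  MRE:", S2 / (n - s + 2))
```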
Let us next reinterpret the equivariance considerations of Theorem 4.3 in terms of the original variables. It is necessary first to specify the group of transformations leaving the problem invariant. The transformations of $Y$-space defined in Theorem 4.3(b), in terms of the $X$'s, become $X_i' = X_i + b_i$, $i = 1, \ldots, n$, but the $b_i$ are not arbitrary, since the problem remains invariant only if $\xi' = \xi + b \in \Pi_\Omega$; that is, the $b_i$ must satisfy $b = (b_1, \ldots, b_n) \in \Pi_\Omega$. Theorem 4.3(b) thus becomes the following corollary.

Corollary 4.5 Under the transformations
$$X' = X + b \quad \text{with } b \in \Pi_\Omega, \qquad (4.10)$$
$\sum_{i=1}^{n}\gamma_i\hat\xi_i$ is MRE for estimating $\sum_{i=1}^{n}\gamma_i\xi_i$ with the loss function $\rho(d - \sum\gamma_i\xi_i)$, provided $\rho$ is convex and even.

To obtain the UMVU and MRE estimators of $\sigma^2$ in terms of the $X$'s, it is only necessary to reexpress $S^2$. From the minimization of the two sides of (4.9), it is seen that
$$\sum_{i=1}^{n}(X_i - \hat\xi_i)^2 = \sum_{j=s+1}^{n} Y_j^2 = S^2. \qquad (4.11)$$
In terms of the $X$'s, the UMVU and MRE estimators of $\sigma^2$ given in Theorem 4.3 are therefore $\sum(X_i - \hat\xi_i)^2/(n - s)$ and $\sum(X_i - \hat\xi_i)^2/(n - s + 2)$, respectively. Let us now illustrate these results.

Example 4.6 Continuation of Example 4.1. Let $X_{ij}$ be independent $N(\xi_i, \sigma^2)$, $j = 1, \ldots, n_i$, $i = 1, \ldots, s$. To find the UMVU or MRE estimator of a linear function of the $\xi_i$, it is only necessary to find the least squares estimators $\hat\xi_i$. Minimizing
$$\sum_{i=1}^{s}\sum_{j=1}^{n_i}(X_{ij} - \xi_i)^2 = \sum_{i=1}^{s}\left[\sum_{j=1}^{n_i}(X_{ij} - X_{i\cdot})^2 + n_i(X_{i\cdot} - \xi_i)^2\right],$$
we see that
$$\hat\xi_i = X_{i\cdot} = \frac{1}{n_i}\sum_{j=1}^{n_i} X_{ij}.$$
From (4.11), the UMVU estimator of $\sigma^2$ in the present case is seen to be
$$\hat\sigma^2 = \sum_{i=1}^{s}\sum_{j=1}^{n_i}(X_{ij} - X_{i\cdot})^2\Big/(n - s),$$
where $n = \sum n_i$ is the total sample size. ∥

Example 4.7 Simple linear regression. Let $X_i$ be independent $N(\xi_i, \sigma^2)$, $i = 1, \ldots, n$, with $\xi_i = \alpha + \beta t_i$, the $t_i$ known and not all equal. Here, $\Pi_\Omega$ is spanned by the vectors $(1, \ldots, 1)$ and $(t_1, \ldots, t_n)$, so that the dimension of $\Pi_\Omega$ is $s = 2$. The least squares estimators of the $\xi_i$ are obtained by minimizing $\sum_{i=1}^{n}(X_i - \alpha - \beta t_i)^2$ with respect to $\alpha$ and $\beta$. It is easily seen that, for any $i$ and $j$ with $t_i \ne t_j$,
$$\beta = \frac{\xi_j - \xi_i}{t_j - t_i}, \quad \alpha = \frac{t_j\xi_i - t_i\xi_j}{t_j - t_i}, \qquad (4.12)$$
and that $\hat\beta$ and $\hat\alpha$ are given by the same functions of $\hat\xi_i$ and $\hat\xi_j$ (Problem 4.4). Hence, $\hat\alpha$ and $\hat\beta$ are the best unbiased and equivariant estimators of $\alpha$ and $\beta$, respectively. Note that the representation of $\alpha$ and $\beta$ in terms of the $\xi_i$'s is not unique: any two values $\xi_i$ and $\xi_j$ with $t_i \ne t_j$ determine $\alpha$ and $\beta$ and thus all the $\xi$'s. The reason, of course, is that the vectors $(\xi_1, \ldots, \xi_n)$ lie in a two-dimensional linear subspace of $n$-space. ∥

Example 4.7 is a special case of the model specified by the equation
$$\xi = \theta A \qquad (4.13)$$
where $\theta = (\theta_1 \cdots \theta_s)$ are $s$ unknown parameters and $A$ is a known $s \times n$ matrix of rank $s$, the so-called full-rank model. In Example 4.7, $\theta = (\alpha, \beta)$ and
$$A = \begin{pmatrix} 1 & \cdots & 1 \\ t_1 & \cdots & t_n \end{pmatrix}.$$
The least squares estimators of the $\xi_i$ in (4.13) are obtained by minimizing
$$\sum_{i=1}^{n}[X_i - \xi_i(\theta)]^2$$
with respect to $\theta$. The minimizing values $\hat\theta_i$ are the LSEs of the $\theta_i$, and the LSEs of the $\xi_i$ are given by
$$\hat\xi = \hat\theta A. \qquad (4.14)$$
Theorems 4.3 and 4.4 establish that the various optimality results apply to the estimators of the $\xi_i$ and their linear combinations. The following theorem shows that they also apply to the estimators of the $\theta$'s and their linear functions.

Theorem 4.8 Let $X_i \sim N(\xi_i, \sigma^2)$, $i = 1, \ldots, n$, be independent, and let $\xi$ satisfy (4.13) with $A$ of rank $s$. Then, the least squares estimator $\hat\theta$ of $\theta$ is a linear function of the $\hat\xi_i$ and hence has the optimality properties established in Theorems 4.3 and 4.4 and Corollary 4.5.

Proof. It need only be shown that $\theta$ is a linear function of $\xi$; then, by (4.13) and (4.14), $\hat\theta$ is the corresponding linear function of $\hat\xi$. Assume without loss of generality that the first $s$ columns of $A$ are linearly independent, and form the corresponding nonsingular $s \times s$ submatrix $A^*$. Then, $(\xi_1 \cdots \xi_s) = (\theta_1 \cdots \theta_s)A^*$, so that $(\theta_1 \cdots \theta_s) = (\xi_1 \cdots \xi_s)A^{*-1}$, and this completes the proof. ∎
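A numerical sketch (ours) of the full-rank model for the simple linear regression of Example 4.7: the LSE $\hat\theta = (\hat\alpha, \hat\beta)$ is computed by minimizing $\sum[X_i - \xi_i(\theta)]^2$, and the UMVU estimator of $\sigma^2$ is formed as in (4.11) with $s = 2$. The data values are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(9)
t = np.array([1.0, 2.0, 3.0, 4.0, 6.0])
n = len(t)
A = np.vstack([np.ones(n), t])             # s x n design, rank s = 2

alpha, beta, sigma = 0.5, 1.2, 0.3
x = alpha + beta * t + rng.normal(0, sigma, n)

theta_hat, *_ = np.linalg.lstsq(A.T, x, rcond=None)  # minimizes sum (x - theta A)^2
xi_hat = theta_hat @ A                               # fitted means, (4.14)
print("alpha_hat, beta_hat:", theta_hat)
print("UMVU sigma^2:", ((x - xi_hat) ** 2).sum() / (n - 2))   # S^2/(n - s)
```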
Assume without loss of generality that the first $s$ columns of $A$ are linearly independent, and form the corresponding nonsingular $s \times s$ submatrix $A^*$. Then, $(\xi_1 \cdots \xi_s) = (\theta_1 \cdots \theta_s)A^*$, so that $(\theta_1 \cdots \theta_s) = (\xi_1 \cdots \xi_s)A^{*-1}$, and this completes the proof. ✷

Typical examples in which $\xi$ is given in terms of (4.13) are polynomial regressions such as $\xi_i = \alpha + \beta t_i + \gamma t_i^2$, or regression in more than one variable such as $\xi_i = \alpha + \beta t_i + \gamma u_i$, where the $t$'s and $u$'s are given and $\alpha$, $\beta$, and $\gamma$ are the unknown parameters. Or there might be several regression lines with a common slope, say $\xi_{ij} = \alpha_i + \beta t_{ij}$ ($j = 1, \dots, n_i$; $i = 1, \dots, a$), and so on.

The full-rank model does not always provide the most convenient parametrization; for reasons of symmetry, it is often preferable to use the model (4.13) with more parameters than are needed. Before discussing such models more fully, let us illustrate the resulting difficulties on a trivial example. Suppose that $\xi_i = \xi$ for all $i$ and that we put $\xi_i = \lambda + \mu$. Such a model does not define $\lambda$ and $\mu$ uniquely but only their sum. One can then either let this ambiguity remain but restrict attention to clearly defined functions such as $\lambda + \mu$, or one can remove the ambiguity by placing an additional restriction on $\lambda$ and $\mu$, such as $\mu - \lambda = 0$, $\mu = 0$, or $\lambda = 0$.

More generally, let us suppose that the model is given by
$$\xi = \theta A \qquad (4.15)$$
where $A$ is a $t \times n$ matrix of rank $s < t$. To define the $\theta$'s uniquely, (4.15) is supplemented by side conditions
$$\theta B = 0 \qquad (4.16)$$
chosen so that the set of equations (4.15) and (4.16) has a unique solution $\theta$ for every $\xi \in \Omega$.

Example 4.9 Unbalanced one-way layout. Consider the one-way layout of Example 4.1, with $X_{ij}$ ($j = 1, \dots, n_i$; $i = 1, \dots, s$) independent normal variables with means $\xi_i$ and variance $\sigma^2$. When the principal concern is a comparison of the $s$ treatments or populations, one is interested in the differences of the $\xi$'s and may represent these by means of the differences between the $\xi_i$ and some mean value $\mu$, say $\alpha_i = \xi_i - \mu$. The model then becomes
$$\xi_i = \mu + \alpha_i, \quad i = 1, \dots, s, \qquad (4.17)$$
which expresses the $s$ $\xi$'s in terms of $s + 1$ parameters. To specify the parameters, an additional restriction is required, for example,
$$\sum_i \alpha_i = 0. \qquad (4.18)$$
Adding the $s$ equations (4.17) and using (4.18), one finds
$$\mu = \frac{\sum_i \xi_i}{s} = \bar\xi \qquad (4.19)$$
and hence
$$\alpha_i = \xi_i - \bar\xi. \qquad (4.20)$$
The quantity $\alpha_i$ measures the effect of the $i$th treatment. Since $X_{i\cdot}$ is the least squares estimator of $\xi_i$, the UMVU estimators of $\mu$ and the $\alpha$'s are
$$\hat\mu = \frac{\sum_i X_{i\cdot}}{s} = \sum_{i,j} \frac{X_{ij}}{s\,n_i} \quad\text{and}\quad \hat\alpha_i = X_{i\cdot} - \hat\mu. \qquad (4.21)$$
When the sample sizes $n_i$ are not all equal, a possible disadvantage of this representation is that the vectors of the coefficients of the $X_{ij}$ in the $\hat\alpha_i$ are not orthogonal to the corresponding vector of coefficients of $\hat\mu$ [Problem 4.7(a)]. As a result, $\hat\mu$ is not independent of the $\hat\alpha_i$. Also, when the $\alpha_i$ are known to be zero, the estimator of $\mu$ is no longer given by (4.21) (Problem 4.8). For these reasons, the side condition (4.18) is sometimes replaced by
$$\sum_i n_i\alpha_i = 0, \qquad (4.22)$$
which leads to
$$\mu = \frac{\sum_i n_i\xi_i}{N} = \tilde\xi \quad \Big(N = \sum_i n_i\Big) \qquad (4.23)$$
and hence
$$\alpha_i = \xi_i - \tilde\xi. \qquad (4.24)$$
Although the $\alpha_i$ of (4.22) seems to be a less natural measure of the effect of the $i$th treatment, the resulting UMVU estimators $\hat{\hat\alpha}_i$ and $\hat{\hat\mu}$ have the orthogonality property not possessed by the estimators (4.21) [Problem 4.7(b)]. The side conditions (4.18) and (4.22), of course, agree when the $n_i$ are all equal. ∥
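The two parametrizations of Example 4.9 are easily compared numerically. The following sketch computes the estimates (4.21) and their analogs under the side condition (4.22); the unbalanced data are made up for illustration.

```python
# Sketch of Example 4.9: estimates of mu and alpha_i under the side
# conditions (4.18) and (4.22). Data below are hypothetical.
import numpy as np

groups = [np.array([4.1, 3.9, 4.4]),            # n_1 = 3
          np.array([5.2, 5.0, 5.5, 5.1, 4.9]),  # n_2 = 5
          np.array([3.0, 3.2])]                 # n_3 = 2
xbar = np.array([g.mean() for g in groups])     # X_i., the LSEs of xi_i
n_i = np.array([len(g) for g in groups])
s, N = len(groups), n_i.sum()

# Side condition (4.18): sum alpha_i = 0, giving (4.21)
mu_hat = xbar.mean()
alpha_hat = xbar - mu_hat

# Side condition (4.22): sum n_i alpha_i = 0, giving (4.23)-(4.24)
mu_tilde = (n_i * xbar).sum() / N
alpha_tilde = xbar - mu_tilde

print(mu_hat, alpha_hat)      # the two sets of estimates differ
print(mu_tilde, alpha_tilde)  # unless the n_i are all equal
```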
The following theorem shows that the conclusion of Theorem 4.8 continues to hold when the $\theta$'s are defined by (4.15) and (4.16) instead of (4.13).

Theorem 4.10 Let $X_i$ be independent $N(\xi_i, \sigma^2)$, $i = 1, \dots, n$, with $\xi \in \Omega$, an $s$-dimensional linear subspace of $E_n$. Suppose that $(\theta_1, \dots, \theta_t)$ are uniquely determined by (4.15) and (4.16), where $A$ is of rank $s < t$ and $B$ is of rank $k$. Then, $k = t - s$, and the optimality results of Theorem 4.4 and Corollary 4.5 apply to the parameters $\theta_1, \dots, \theta_t$ and their least squares estimators $\hat\theta_1, \dots, \hat\theta_t$.

Proof. Let $\hat\theta_1, \dots, \hat\theta_t$ be the LSEs of $\theta_1, \dots, \theta_t$, that is, the values that minimize
$$\sum_{i=1}^n [X_i - \xi_i(\theta)]^2$$
subject to (4.15) and (4.16). It must be shown, as in the proof of Theorem 4.8, that the $\hat\theta_i$'s are linear functions of $\hat\xi_1, \dots, \hat\xi_n$, and that the $\theta_i$'s are the same functions of $\xi_1, \dots, \xi_n$. Without loss of generality, suppose that the $\theta$'s are numbered so that the last $k$ columns of $B$ are linearly independent. Then, one can solve for $\theta_{t-k+1}, \dots, \theta_t$ in terms of $\theta_1, \dots, \theta_{t-k}$, obtaining the unique solution
$$\theta_j = L_j(\theta_1, \dots, \theta_{t-k}) \quad\text{for } j = t-k+1, \dots, t. \qquad (4.25)$$
Substituting into $\xi = \theta A$ gives $\xi = (\theta_1 \cdots \theta_{t-k})A^*$ for some matrix $A^*$, with $(\theta_1, \dots, \theta_{t-k})$ varying freely in $E_{t-k}$. Since each $\xi \in \Omega$ uniquely determines $\theta$, in particular the value $\xi = 0$ has the unique solution $\theta = 0$, so that $(\theta_1 \cdots \theta_{t-k})A^* = 0$ has only the solution $\theta_1 = \cdots = \theta_{t-k} = 0$. This implies that $A^*$ has rank $t - k$. On the other hand, since $\xi$ ranges over a linear space of dimension $s$, it follows that $t - k = s$ and, hence, that $k = t - s$. The situation is now reduced to that of Theorem 4.8 with $\xi$ a linear function of $t - k = s$ freely varying $\theta$'s, so the earlier result applies to $\theta_1, \dots, \theta_{t-k}$. Finally, the remaining parameters $\theta_{t-k+1}, \dots, \theta_t$ and their LSEs are determined by (4.25), and this completes the proof. ✷

Example 4.11 Two-way layout. A typical illustration of the above approach is provided by a two-way layout. This arises in the investigation of the effect of two factors on a response. In a medical situation, for example, one of the factors might be the kind of treatment (e.g., surgical, nonsurgical, or no treatment at all), the other the severity of the disease. Let $X_{ijk}$ denote the response of the $k$th subject to which factor 1 is applied at level $i$ and factor 2 at level $j$. We assume that the $X_{ijk}$ are independently, normally distributed with means $\xi_{ij}$ and common variance $\sigma^2$. To avoid the complications of Example 4.9, we shall suppose that each treatment combination $(i, j)$ is applied to the same number of subjects. If the numbers of levels of the two factors are $I$ and $J$, respectively, the model is thus
$$X_{ijk} \sim N(\xi_{ij}, \sigma^2),\quad i = 1, \dots, I;\ j = 1, \dots, J;\ k = 1, \dots, m. \qquad (4.26)$$
This model is frequently parametrized by
$$\xi_{ij} = \mu + \alpha_i + \beta_j + \gamma_{ij} \qquad (4.27)$$
with the side conditions
$$\sum_i \alpha_i = \sum_j \beta_j = \sum_i \gamma_{ij} = \sum_j \gamma_{ij} = 0. \qquad (4.28)$$
It is easily seen that (4.27) and (4.28) uniquely determine $\mu$ and the $\alpha$'s, $\beta$'s, and $\gamma$'s. Using a dot to denote averaging over the indicated subscript, we find by averaging (4.27) over both $i$ and $j$ and separately over $i$ and over $j$ that $\xi_{\cdot\cdot} = \mu$, $\xi_{i\cdot} = \mu + \alpha_i$, $\xi_{\cdot j} = \mu + \beta_j$, and hence that
$$\mu = \xi_{\cdot\cdot},\quad \alpha_i = \xi_{i\cdot} - \xi_{\cdot\cdot},\quad \beta_j = \xi_{\cdot j} - \xi_{\cdot\cdot}, \qquad (4.29)$$
and
$$\gamma_{ij} = \xi_{ij} - \xi_{i\cdot} - \xi_{\cdot j} + \xi_{\cdot\cdot}. \qquad (4.30)$$
Thus, $\alpha_i$ is the average effect (averaged over the levels of the second factor) of the first factor at level $i$, and $\beta_j$ is the corresponding effect of the second factor at level $j$.
The quantity $\gamma_{ij}$ can be written as
$$\gamma_{ij} = (\xi_{ij} - \xi_{\cdot\cdot}) - [(\xi_{i\cdot} - \xi_{\cdot\cdot}) + (\xi_{\cdot j} - \xi_{\cdot\cdot})]. \qquad (4.31)$$
It is therefore the difference between the joint effect of the two treatments at levels $i$ and $j$, respectively, and the sum of the separate effects $\alpha_i + \beta_j$. The quantity $\gamma_{ij}$ is called the interaction of the two factors when they are at levels $i$ and $j$, respectively.

The UMVU estimators of these various effects follow immediately from Theorem 4.3 and Example 4.6. That example shows that the UMVU estimator of $\xi_{ij}$ is $X_{ij\cdot}$, and the associated estimators of the various parameters are thus
$$\hat\mu = X_{\cdot\cdot\cdot},\quad \hat\alpha_i = X_{i\cdot\cdot} - X_{\cdot\cdot\cdot},\quad \hat\beta_j = X_{\cdot j\cdot} - X_{\cdot\cdot\cdot}, \qquad (4.32)$$
and
$$\hat\gamma_{ij} = X_{ij\cdot} - X_{i\cdot\cdot} - X_{\cdot j\cdot} + X_{\cdot\cdot\cdot}. \qquad (4.33)$$
The UMVU estimator of $\sigma^2$ is
$$\frac{1}{IJ(m-1)}\sum (X_{ijk} - X_{ij\cdot})^2. \qquad (4.34)$$ ∥

These results for the two-way layout easily generalize to other factorial experiments, that is, experiments concerning the joint effect of several factors, provided the numbers of observations at the various combinations of factor levels are equal. Theorems 4.8 and 4.10, of course, apply without this restriction, but then the situation is less simple.

Model (4.4) assumes that the random variables $X_i$ are independently normally distributed with common unknown variance $\sigma^2$ and means $\xi_i$, which are subject to certain linear restrictions. We shall now consider some models that retain the linear structure but drop the assumption of normality.

(i) A very simple treatment is possible if one is willing to restrict attention to unbiased estimators that are linear functions of the $X_i$ and to squared error loss. Suppose we retain from (4.4) only the assumptions about the first and second moments of the $X_i$, namely
$$E(X_i) = \xi_i,\ \xi \in \Omega,\quad \mathrm{var}(X_i) = \sigma^2,\quad \mathrm{cov}(X_i, X_j) = 0 \text{ for } i \ne j. \qquad (4.35)$$
Thus, both the normality and independence assumptions are dropped.

Theorem 4.12 (Gauss' Theorem on Least Squares) Under assumptions (4.35), $\sum_{i=1}^n \gamma_i\hat\xi_i$ of Theorem 4.4 is UMVU among all linear estimators of $\sum_{i=1}^n \gamma_i\xi_i$.

Proof. The estimator is still unbiased, since the expectations of the $X_i$ are the same under (4.35) as under (4.4). Let $\sum_{i=1}^n c_iX_i$ be any other linear unbiased estimator of $\sum_{i=1}^n \gamma_i\xi_i$. Since $\sum_{i=1}^n \gamma_i\hat\xi_i$ is UMVU in the normal case, and variances of linear functions of the $X_i$ depend only on first and second moments, it follows that
$$\mathrm{var}\Big(\sum_{i=1}^n \gamma_i\hat\xi_i\Big) \le \mathrm{var}\Big(\sum_{i=1}^n c_iX_i\Big).$$
Hence, $\sum_{i=1}^n \gamma_i\hat\xi_i$ is UMVU among linear unbiased estimators. ✷

Corollary 4.13 Under the assumptions (4.35) and with squared error loss, $\sum_{i=1}^n \gamma_i\hat\xi_i$ is MRE with respect to the transformations (4.10) among all linear equivariant estimators of $\sum_{i=1}^n \gamma_i\xi_i$.

Proof. This follows from the argument of Lemma 1.23, since $\sum_{i=1}^n \gamma_i\hat\xi_i$ is UMVU and equivariant. ✷

Theorem 4.12, which is also called the Gauss-Markov theorem, has been extensively generalized (see, for example, Rao 1976; Harville 1976, 1981; Kariya 1985). We shall consider some extensions of this theorem in the next section. On the other hand, the following result, due to Shaffer (1991), shows a direction in which the theorem does not extend. If, in (4.35), we adopt the parametrization $\xi = \theta A$ for some $s \times n$ matrix $A$, there are some circumstances in which it is reasonable to assume that $A$ also has a distribution (for example, if the data $(X, A)$ are obtained from a sample of units, rather than $A$ being a preset design matrix as is the case in many experiments). The properties of the resulting least squares estimator, however, will vary according to what is assumed about both the distribution of $A$ and the distribution of $X$.
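Before turning to that result, here is a small numerical sketch of the balanced two-way estimates (4.32)–(4.34); the data are simulated and the dimensions are arbitrary assumptions.

```python
# Sketch of Example 4.11: effect estimates (4.32)-(4.33) and the variance
# estimator (4.34) in a balanced two-way layout. All sizes are assumptions.
import numpy as np

rng = np.random.default_rng(2)
I, J, m = 3, 4, 5
X = rng.normal(loc=10.0, scale=2.0, size=(I, J, m))    # X[i, j, k]

Xij = X.mean(axis=2)        # X_{ij.}
Xi_ = Xij.mean(axis=1)      # X_{i..}
X_j = Xij.mean(axis=0)      # X_{.j.}
Xbar = Xij.mean()           # X_{...}

mu_hat = Xbar
alpha_hat = Xi_ - Xbar                                  # (4.32)
beta_hat = X_j - Xbar
gamma_hat = Xij - Xi_[:, None] - X_j[None, :] + Xbar    # (4.33)
sigma2_hat = ((X - Xij[:, :, None]) ** 2).sum() / (I * J * (m - 1))  # (4.34)
print(mu_hat, alpha_hat, beta_hat, sigma2_hat)
```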
Note that in the following theorem, all expectations are over the joint distribution of $X$ and $A$.

Theorem 4.14 Under assumptions (4.35), with $\xi = \theta A$, the following hold.
(a) If $(X, A)$ are jointly multivariate normal with all parameters unknown, then the least squares estimator $\sum \gamma_i\hat\xi_i$ is the UMVU estimator of $\sum \gamma_i\xi_i$.
(b) If the distribution of $A$ is unknown, then the least squares estimator $\sum \gamma_i\hat\xi_i$ is UMVU among all linear estimators of $\sum \gamma_i\xi_i$.
(c) If $E(AA')$ is known, no best linear unbiased estimator of $\sum \gamma_i\xi_i$ exists.

Proof. Part (a) follows from the fact that the least squares estimator is a function of the complete sufficient statistic. Part (b) can be proved by showing that if $\sum \gamma_i\hat\xi_i$ is unconditionally unbiased then it is conditionally unbiased, and hence Theorem 4.12 applies. For this purpose, one can use a variation of Problem 1.6.33, where it was shown that the order statistics are complete sufficient. Finally, part (c) follows from the fact that the extra information about the variance of $A$ can often be used to improve any unbiased estimator. See Problems 4.16–4.18 for details. ✷

The formulation of the regression problem in Theorem 4.14, in which the rows of $A$ are sometimes referred to as "random regressors," has other interesting implications. If $A$ is ancillary, the distribution of $A$ and hence $E(AA')$ are known, and so we have a situation where the distribution of an ancillary statistic will affect the properties of an estimator. This paradox was investigated by Brown (1990a), who established some interesting relationships between ancillarity and admissibility (see Problems 5.7.31 and 5.7.32).

For estimating $\sigma^2$, it is natural to restrict attention to unbiased quadratic (rather than linear) estimators $Q$ of $\sigma^2$. Among these, does the estimator $S^2/(n-s)$, which is UMVU in the normal case, continue to minimize the variance? Under mild additional restrictions—for example, invariance under the transformations (4.10) or restriction to $Q$'s taking on only positive values—it turns out that this is true in some cases (for instance, in Example 4.15 below when the $n_i$ are equal) but not in others. For details, see Searle et al. (1992, Section 11.3).

Example 4.15 Quadratic unbiased estimators. Let $X_{ij}$ ($j = 1, \dots, n_i$; $i = 1, \dots, s$) be independently distributed with means $E(X_{ij}) = \xi_i$ and common variance and fourth moment $\sigma^2 = E(X_{ij} - \xi_i)^2$ and $\beta = E(X_{ij} - \xi_i)^4/\sigma^4$, respectively. Consider estimators of $\sigma^2$ of the form $Q = \sum \lambda_i S_i^2$, where $S_i^2 = \sum_j (X_{ij} - X_{i\cdot})^2$ and $\sum \lambda_i(n_i - 1) = 1$, so that $Q$ is an unbiased estimator of $\sigma^2$. Then, the variance of $Q$ is minimized (Problem 4.19) when the $\lambda$'s are proportional to $1/[\alpha_i(\beta - 3) + 2]$, where $\alpha_i = (n_i - 1)/n_i$. The standard choice of the $\lambda_i$ (which is to make them equal) is, therefore, best if either the $n_i$ are equal or $\beta = 3$, which is the case when the $X_{ij}$ are normal. ∥
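The optimal weights of Example 4.15 are simple to evaluate. A minimal sketch, with assumed sample sizes and an assumed fourth-moment ratio $\beta$:

```python
# Sketch of Example 4.15: variance-minimizing weights lambda_i proportional
# to 1/[alpha_i(beta - 3) + 2], normalized so that sum lambda_i (n_i - 1) = 1.
# The values of n_i and beta below are illustrative assumptions.
import numpy as np

n_i = np.array([3, 5, 10])
beta = 6.0                                  # e.g., a heavy-tailed error law
alpha_i = (n_i - 1) / n_i
w = 1.0 / (alpha_i * (beta - 3.0) + 2.0)    # unnormalized weights
lam = w / np.sum(w * (n_i - 1))             # enforce the unbiasedness constraint
print(lam, np.sum(lam * (n_i - 1)))         # the second value is 1

# With beta = 3 (the normal case) the weights reduce to the equal choice:
print(1.0 / np.sum(n_i - 1))
```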
(ii) Let us now return to the model obtained from (4.4) by dropping the assumption of normality but without restricting attention to linear estimators. More specifically, we shall assume that

$X_1, \dots, X_n$ are random variables such that the variables $X_i - \xi_i$ are iid with a common distribution $F$ which has expectation zero and an otherwise unknown probability density $f$, (4.36)

and such that (4.13) holds with $A$ an $s \times n$ matrix of rank $s$. In Section 2.4, we found that for the case $\xi_i = \theta$, the LSE $\bar X$ of $\theta$ is UMVU in this nonparametric model.

To show that the corresponding result does not generally hold when $\xi$ is given by (4.13), consider the two-way layout of Example 4.11 and the estimation of
$$\alpha_i = \xi_{i\cdot} - \xi_{\cdot\cdot} = \frac{1}{IJ}\sum_{j=1}^I \sum_{k=1}^J (\xi_{ik} - \xi_{jk}). \qquad (4.37)$$
To avoid calculations, suppose that $F$ is $t_2$, the $t$-distribution with 2 degrees of freedom. Then, the least squares estimators have infinite variance. On the other hand, let $\tilde X_{ij}$ be the median of the observations $X_{ijv}$, $v = 1, \dots, m$. Then $\tilde X_{ik} - \tilde X_{jk}$ is an unbiased estimator of $\xi_{ik} - \xi_{jk}$, so that $\delta = (1/IJ)\sum\sum (\tilde X_{ik} - \tilde X_{jk})$ is an unbiased estimator of $\alpha_i$. Furthermore, if $m \ge 3$, the $\tilde X_{ij}$ have finite variance and so, therefore, does $\delta$. (A finite sum of random variables, each with finite variance, has finite variance.) This shows that the least squares estimators of the $\alpha_i$ are not UMVU when $F$ is unknown. The same argument applies to the $\beta$'s and $\gamma$'s.

The situation is quite different for the estimation of $\mu$. Let $U$ be the class of unbiased estimators of $\mu$ in model (4.27) with $F$ unknown, and let $U'$ be the corresponding class of unbiased estimators when the $\alpha$'s, $\beta$'s, and $\gamma$'s are all zero. Then, clearly, $U \subset U'$; furthermore, it follows from Section 2.4 that $X_{\cdot\cdot\cdot}$ uniformly minimizes the variance within $U'$. Since $X_{\cdot\cdot\cdot}$ is a member of $U$, it uniformly minimizes the variance within $U$ and, hence, is UMVU for $\mu$ in model (4.27) when $F$ is unknown. For a more detailed discussion of this problem, see Anderson (1962).

(iii) Instead of assuming the density $f$ in (4.36) to be unknown, we may be interested in the case in which $f$ is known but not normal. The model then remains invariant under the transformations
$$X_v' = X_v + \sum_{j=1}^s a_{jv}\gamma_j,\quad -\infty < \gamma_1, \dots, \gamma_s < \infty. \qquad (4.38)$$
Since $E(X_v') = \sum_j a_{jv}(\theta_j + \gamma_j)$, the induced transformations in the parameter space are given by
$$\theta_j' = \theta_j + \gamma_j \quad (j = 1, \dots, s). \qquad (4.39)$$
The problem of estimating $\theta_j$ remains invariant under the transformations (4.38), (4.39), and
$$d' = d + \gamma_j \qquad (4.40)$$
for any loss function of the form $\rho(d - \theta_j)$, and an estimator $\delta$ of $\theta_j$ is equivariant with respect to these transformations if it satisfies
$$\delta(X') = \delta(X) + \gamma_j. \qquad (4.41)$$
Since (4.39) is transitive over the parameter space, the risk of any equivariant estimator is constant, and an MRE estimator of $\theta_j$ can be found by generalizing Theorems 1.8 and 1.10 to the present situation (see Verhagen 1961).

(iv) Important extensions to random and mixed effects models, and to general exponential families, will be taken up in the next two sections.

5 Random and Mixed Effects Models

In many applications of linear models, the effects of the various factors $A, B, C, \dots$, which were considered to be unknown constants in Section 3.4, are, instead, random. One then speaks of a random effects model (or Model II); in contrast, the corresponding model of Section 3.4 is a fixed effects model (or Model I). If both fixed and random effects occur, the model is said to be mixed.

Example 5.1 Random effects one-way layout. Suppose that, as a measure of quality control, an auto manufacturer tests a sample of new cars, observing for each car the mileage achieved on a number of occasions on a gallon of gas. Suppose $X_{ij}$ is the mileage of the $i$th car on the $j$th occasion, at time $t_{ij}$, with all the $t_{ij}$ selected at random and independently of each other. This would have been modeled in Example 4.1 as $X_{ij} = \mu + \alpha_i + U_{ij}$, where the $U_{ij}$ are independent $N(0, \sigma^2)$.
Such a model would be appropriate if these particular cars were the object of study and a replication of the experiment thus consisted of a number of test runs by the same cars. However, the manufacturer is interested in the performance of the thousands of cars to be produced that year and, for this reason, has drawn a random sample of cars for the test. A replication of the experiment would start by drawing a new sample. The effect of the $i$th car is therefore a random variable, and the model becomes
$$X_{ij} = \mu + A_i + U_{ij} \quad (j = 1, \dots, n_i;\ i = 1, \dots, s). \qquad (5.1)$$
Here and in what follows, the populations being sampled are assumed to be large enough that independence and normality of the unobservable random variables $A_i$ and $U_{ij}$ can be assumed as a reasonable approximation. Without loss of generality, one can put $E(A_i) = E(U_{ij}) = 0$ since the means can be absorbed into $\mu$. The variances will be denoted by $\mathrm{var}(A_i) = \sigma_A^2$ and $\mathrm{var}(U_{ij}) = \sigma^2$.

The $X_{ij}$ are dependent, and their joint distribution, and hence the estimation of $\sigma_A^2$ and $\sigma^2$, is greatly simplified if the model is assumed to be balanced, that is, to satisfy $n_i = n$ for all $i$. In that case, in analogy with the transformation (4.5), let each set $(X_{i1}, \dots, X_{in})$ be subjected to an orthogonal transformation to $(Y_{i1}, \dots, Y_{in})$ such that $Y_{i1} = \sqrt{n}\,X_{i\cdot}$. An additional orthogonal transformation is made from $(Y_{11}, \dots, Y_{s1})$ to $(Z_{11}, \dots, Z_{s1})$ such that $Z_{11} = \sqrt{s}\,Y_{\cdot 1}$, whereas for $j > 1$, we put $Z_{ij} = Y_{ij}$. Unlike the $X_{ij}$, the $Y_{ij}$ and $Z_{ij}$ are all independent (Problem 5.1). They are normal with means
$$E(Z_{11}) = \sqrt{sn}\,\mu, \qquad E(Z_{ij}) = 0 \text{ if } i > 1 \text{ or } j > 1,$$
and variances
$$\mathrm{var}(Z_{i1}) = \sigma^2 + n\sigma_A^2, \qquad \mathrm{var}(Z_{ij}) = \sigma^2 \text{ for } j > 1,$$
so that the joint density of the $Z$'s is proportional to
$$\exp\left[-\frac{1}{2(\sigma^2 + n\sigma_A^2)}\left\{(Z_{11} - \sqrt{sn}\,\mu)^2 + S_A^2\right\} - \frac{1}{2\sigma^2}S^2\right] \qquad (5.2)$$
with
$$S_A^2 = \sum_{i=2}^s Z_{i1}^2 = n\sum_{i=1}^s (X_{i\cdot} - X_{\cdot\cdot})^2, \qquad S^2 = \sum_{i=1}^s \sum_{j=2}^n Z_{ij}^2 = \sum_{i=1}^s \sum_{j=1}^n (X_{ij} - X_{i\cdot})^2.$$
This is a three-parameter exponential family with
$$\eta_1 = \frac{\mu}{\sigma^2 + n\sigma_A^2}, \qquad \eta_2 = \frac{1}{\sigma^2 + n\sigma_A^2}, \qquad \eta_3 = \frac{1}{\sigma^2}. \qquad (5.3)$$
The variance of $X_{ij}$ is $\mathrm{var}(X_{ij}) = \sigma^2 + \sigma_A^2$, and we are interested in estimating the variance components $\sigma_A^2$ and $\sigma^2$. Since
$$E\left[\frac{S_A^2}{s-1}\right] = \sigma^2 + n\sigma_A^2 \quad\text{and}\quad E\left[\frac{S^2}{s(n-1)}\right] = \sigma^2,$$
it follows that
$$\hat\sigma^2 = \frac{S^2}{s(n-1)} \quad\text{and}\quad \hat\sigma_A^2 = \frac{1}{n}\left[\frac{S_A^2}{s-1} - \frac{S^2}{s(n-1)}\right] \qquad (5.4)$$
are UMVU estimators of $\sigma^2$ and $\sigma_A^2$, respectively. The UMVU estimator of the ratio $\sigma_A^2/\sigma^2$ is
$$\frac{1}{n}\left[\frac{K_{f,-2}}{s(n-1)}\cdot\frac{\hat\sigma^2 + n\hat\sigma_A^2}{\hat\sigma^2} - 1\right],$$
where $K_{f,-2}$ is given by (2.2.5) with $f = s(n-1)$ (Problem 5.3). Typically, the only linear subspace of the $\eta$'s of interest here is the trivial one defined by $\sigma_A^2 = 0$, which corresponds to $\eta_2 = \eta_3$ and to the case in which the $sn$ $X_{ij}$ are iid as $N(\mu, \sigma^2)$. ∥

Example 5.2 Random effects two-way layout. In analogy to Example 4.11, consider the random effects two-way layout
$$X_{ijk} = \mu + A_i + B_j + C_{ij} + U_{ijk} \qquad (5.5)$$
where the unobservable random variables $A_i$, $B_j$, $C_{ij}$, and $U_{ijk}$ are independently normally distributed with zero mean and with variances $\sigma_A^2$, $\sigma_B^2$, $\sigma_C^2$, and $\sigma^2$, respectively. We shall restrict attention to the balanced case $i = 1, \dots, I$, $j = 1, \dots, J$, and $k = 1, \dots, n$. As in the preceding example, a linear transformation leads to independent normal variables $Z_{ijk}$ with means $E(Z_{111}) = \sqrt{IJn}\,\mu$ and 0 for all other $Z$'s, and with variances
$$\mathrm{var}(Z_{111}) = nJ\sigma_A^2 + nI\sigma_B^2 + n\sigma_C^2 + \sigma^2,$$
$$\mathrm{var}(Z_{i11}) = nJ\sigma_A^2 + n\sigma_C^2 + \sigma^2, \quad i > 1,$$
$$\mathrm{var}(Z_{1j1}) = nI\sigma_B^2 + n\sigma_C^2 + \sigma^2, \quad j > 1, \qquad (5.6)$$
$$\mathrm{var}(Z_{ij1}) = n\sigma_C^2 + \sigma^2, \quad i, j > 1,$$
$$\mathrm{var}(Z_{ijk}) = \sigma^2, \quad k > 1.$$
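Before continuing with the two-way case, the one-way estimators (5.4) of Example 5.1 can be checked in a few lines of simulation; all numerical values below are assumptions.

```python
# Simulation sketch of Example 5.1: UMVU variance-component estimators (5.4)
# in the balanced one-way random-effects model.
import numpy as np

rng = np.random.default_rng(3)
s, n = 8, 6
mu, sigma_A, sigma = 5.0, 2.0, 1.0
X = mu + rng.normal(0, sigma_A, (s, 1)) + rng.normal(0, sigma, (s, n))

Xi = X.mean(axis=1)                        # X_{i.}
S2A = n * np.sum((Xi - X.mean()) ** 2)
S2 = np.sum((X - Xi[:, None]) ** 2)

sigma2_hat = S2 / (s * (n - 1))            # UMVU of sigma^2
sigma2A_hat = (S2A / (s - 1) - sigma2_hat) / n   # UMVU of sigma_A^2
print(sigma2_hat, sigma2A_hat)             # compare with 1.0 and 4.0
```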
As an example in which the model (5.5) might arise, consider a reliability study of blood counts, in which blood samples from each of $J$ patients are divided into $nI$ subsamples, of which $n$ are sent to each of $I$ laboratories. The study is not concerned with these particular patients and laboratories, which, instead, are assumed to be random samples from suitable patient and laboratory populations. From (5.5), it follows that $\mathrm{var}(X_{ijk}) = \sigma_A^2 + \sigma_B^2 + \sigma_C^2 + \sigma^2$. The terms on the right are the variance components due to laboratories, patients, the interaction between the two, and the subsamples from a patient.

The joint distribution of the $Z_{ijk}$ constitutes a five-parameter exponential family with the complete set of sufficient statistics (Problem 5.9)
$$S_A^2 = \sum_{i=2}^I Z_{i11}^2 = nJ\sum_{i=1}^I (X_{i\cdot\cdot} - X_{\cdot\cdot\cdot})^2,$$
$$S_B^2 = \sum_{j=2}^J Z_{1j1}^2 = nI\sum_{j=1}^J (X_{\cdot j\cdot} - X_{\cdot\cdot\cdot})^2,$$
$$S_C^2 = \sum_{i=2}^I \sum_{j=2}^J Z_{ij1}^2 = n\sum_{i=1}^I \sum_{j=1}^J (X_{ij\cdot} - X_{i\cdot\cdot} - X_{\cdot j\cdot} + X_{\cdot\cdot\cdot})^2, \qquad (5.7)$$
$$S^2 = \sum_{i=1}^I \sum_{j=1}^J \sum_{k=2}^n Z_{ijk}^2 = \sum_{i=1}^I \sum_{j=1}^J \sum_{k=1}^n (X_{ijk} - X_{ij\cdot})^2,$$
and $Z_{111} = \sqrt{IJn}\,X_{\cdot\cdot\cdot}$. From the expectations of these statistics, one finds the UMVU estimators of the variance components $\sigma^2$, $\sigma_C^2$, $\sigma_A^2$, and $\sigma_B^2$ to be
$$\hat\sigma^2 = \frac{S^2}{IJ(n-1)}, \qquad \hat\sigma_C^2 = \frac{1}{n}\left[\frac{S_C^2}{(I-1)(J-1)} - \hat\sigma^2\right],$$
$$\hat\sigma_A^2 = \frac{1}{nJ}\left[\frac{S_A^2}{I-1} - n\hat\sigma_C^2 - \hat\sigma^2\right], \qquad \hat\sigma_B^2 = \frac{1}{nI}\left[\frac{S_B^2}{J-1} - n\hat\sigma_C^2 - \hat\sigma^2\right].$$

A submodel of (5.5), which is sometimes appropriate, is the additive model corresponding to the absence of the interaction terms $C_{ij}$ and hence to the assumption $\sigma_C^2 = 0$. If $\eta_1 = \mu/\mathrm{var}(Z_{111})$, $1/\eta_2 = nJ\sigma_A^2 + n\sigma_C^2 + \sigma^2$, $1/\eta_3 = nI\sigma_B^2 + n\sigma_C^2 + \sigma^2$, $1/\eta_4 = n\sigma_C^2 + \sigma^2$, and $1/\eta_5 = \sigma^2$, this assumption is equivalent to $\eta_4 = \eta_5$ and thus restricts the $\eta$'s to a linear subspace. The submodel constitutes a four-parameter exponential family, with the complete set of sufficient statistics $Z_{111}$, $S_A^2$, $S_B^2$, and $S'^2 = S_C^2 + S^2 = \sum\sum\sum (X_{ijk} - X_{i\cdot\cdot} - X_{\cdot j\cdot} + X_{\cdot\cdot\cdot})^2$. The UMVU estimators of the variance components $\sigma_A^2$, $\sigma_B^2$, and $\sigma^2$ are now easily obtained as before (Problem 5.10).

Another submodel of (5.5) which is of interest is obtained by setting $\sigma_B^2 = 0$, thus eliminating the $B_j$ terms from (5.5). However, this model, which corresponds to the linear subspace $\eta_3 = \eta_4$, does not arise naturally in the situations leading to (5.5), as illustrated by the laboratory example. These situations are characterized by a crossed design, in which each of the $I$ $A$-units (laboratories) is observed in combination with each of the $J$ $B$-units (patients). On the other hand, the model without the $B$ terms arises naturally in the very commonly occurring nested design illustrated in the following example. ∥

Example 5.3 Two nested random factors. For the two factors $A$ and $B$, suppose that each of the units corresponding to different values of $i$ (i.e., different levels of $A$) is itself a collection of smaller units from which the values of $B$ are drawn. Thus, the $A$ units might be hospitals, schools, or farms that constitute a random sample from a population of such units, from each of which a random sample of patients, students, or trees is drawn. On each of the latter, a number of observations is taken (for example, a number of blood counts, grades, or weights of a sample of apples). The resulting model [with a slight change of notation from (5.5)] may be written as
$$X_{ijk} = \mu + A_i + B_{ij} + U_{ijk}. \qquad (5.8)$$
Here, the $A$'s, $B$'s, and $U$'s are again assumed to be independent normal with zero means and variances $\sigma_A^2$, $\sigma_B^2$, and $\sigma^2$, respectively. In the balanced case ($i = 1, \dots, I$; $j = 1, \dots, J$; $k = 1, \dots, n$),
a linear transformation produces independent variables with means $E(Z_{111}) = \sqrt{IJn}\,\mu$ and 0 for all other $Z$'s, and variances
$$\mathrm{var}(Z_{i11}) = \sigma^2 + n\sigma_B^2 + Jn\sigma_A^2 \quad (i = 1, \dots, I),$$
$$\mathrm{var}(Z_{ij1}) = \sigma^2 + n\sigma_B^2 \quad (j > 1),$$
$$\mathrm{var}(Z_{ijk}) = \sigma^2 \quad (k > 1).$$
The joint distribution of the $Z$'s constitutes a four-parameter exponential family with the complete set of sufficient statistics
$$S_A^2 = \sum_{i=2}^I Z_{i11}^2 = Jn\sum_{i=1}^I (X_{i\cdot\cdot} - X_{\cdot\cdot\cdot})^2,$$
$$S_B^2 = \sum_{i=1}^I \sum_{j=2}^J Z_{ij1}^2 = n\sum_{i=1}^I \sum_{j=1}^J (X_{ij\cdot} - X_{i\cdot\cdot})^2, \qquad (5.9)$$
$$S^2 = \sum_{i=1}^I \sum_{j=1}^J \sum_{k=2}^n Z_{ijk}^2 = \sum_{i=1}^I \sum_{j=1}^J \sum_{k=1}^n (X_{ijk} - X_{ij\cdot})^2,$$
and $Z_{111} = \sqrt{IJn}\,X_{\cdot\cdot\cdot}$, and the UMVU estimators of the variance components can be obtained as before (Problem 5.12). ∥

The models illustrated in Examples 5.2 and 5.3 extend in a natural way to more than two factors, and in the balanced cases, the UMVU estimators of the variance components are easily derived.

The estimation of variance components described above suffers from two serious difficulties.

(i) The UMVU estimators of all the variance components except $\sigma^2$ can take on negative values with probabilities as high as .5 and even in excess of that value (Problems 5.5–5.7) (and, correspondingly, their expected squared errors are quite unsatisfactory; see Klotz, Milton, and Zacks 1969). The interpretation of such negative values either as indications that the associated components are negligible (which is sometimes formalized by estimating them to be zero) or that the model is incorrect is not always convincing, because negative values do occur even when the model is correct and the components are positive. An alternative possibility, here and throughout this section, is to fall back on maximum likelihood estimation or restricted MLEs (REML estimates), obtained by maximizing the likelihood after first reducing the data through location invariance (Thompson, 1962; Corbeil and Searle, 1976). Although these methods have no small-sample justification, they are equivalent to a noninformative prior Bayesian solution (Searle et al. 1992; see also Example 2.7). Alternatively, there is an approach due to Hartung (1981), who minimizes bias subject to non-negativity, or Pukelsheim (1981) and Mathew (1984), who find non-negative unbiased estimates of variance.

(ii) Models as simple as those obtained in Examples 5.1–5.3 are not available when the layout is not balanced. The joint density of the $X$'s can then be obtained by noting that they are linear functions of normal variables and thus have a joint multivariate normal distribution. To obtain it, one need only write down the covariance matrix of the $X$'s and invert it. The result is an exponential family which typically is not complete unless the model is balanced. (This is illustrated for the one-way layout in Problem 5.4.) UMVU estimators cannot be expected in this case (see Pukelsheim 1981). A characterization of U-estimable functions permitting UMVU estimators is given by Unni (1978). Two general methods for the estimation of variance components have been developed in some detail: maximum and restricted maximum likelihood, and the minimum norm quadratic unbiased estimation (Minque) introduced by Rao (1970). Surveys of the area are given by Searle (1971b), Harville (1977), and Kleffe (1977). More detailed introductions can be found, for example, in the books by Rao and Kleffe (1988), Searle et al. (1992), and Burdick and Graybill (1992).
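Difficulty (i) is easy to exhibit by simulation: when $\sigma_A^2$ is small relative to $\sigma^2$, the estimator (5.4) of $\sigma_A^2$ is negative in a sizable fraction of samples. A hedged sketch, with assumed values:

```python
# Sketch illustrating difficulty (i): frequency of negative values of the
# UMVU estimator (5.4) of sigma_A^2 in the balanced one-way model.
import numpy as np

rng = np.random.default_rng(4)
s, n = 5, 4
sigma_A, sigma = 0.2, 1.0          # small between-group component (assumed)
neg, reps = 0, 10_000
for _ in range(reps):
    X = rng.normal(0, sigma_A, (s, 1)) + rng.normal(0, sigma, (s, n))
    Xi = X.mean(axis=1)
    S2A = n * np.sum((Xi - X.mean()) ** 2)
    S2 = np.sum((X - Xi[:, None]) ** 2)
    # sigma_A^2-hat < 0 is equivalent to S2A/(s-1) < S2/(s(n-1)):
    if S2A / (s - 1) < S2 / (s * (n - 1)):
        neg += 1
print("fraction of negative estimates:", neg / reps)
```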
So far, the models we have considered have had factors that were either all fixed or all random. We now look at an example of a mixed model, which contains both types of factors.

Example 5.4 Mixed effects model. In Example 5.3, it was assumed that the hospitals, schools, or farms were obtained as a random sample from a population of such units. Let us now suppose that it is only these particular hospitals that are of interest (perhaps it is the set of all hospitals in the city), whereas the patients continue to be drawn at random from these hospitals. Instead of (5.8), we shall assume that the observations are given by
$$X_{ijk} = \mu + \alpha_i + B_{ij} + U_{ijk} \quad \Big(\sum_i \alpha_i = 0\Big). \qquad (5.10)$$
A transformation very similar to the earlier one (Problem 5.14) now leads to independent normal variables $W_{ijk}$ with joint density proportional to
$$\exp\left[-\frac{1}{2(\sigma^2 + n\sigma_B^2)}\left\{\sum_i \big(w_{i11} - \sqrt{Jn}\,(\mu + \alpha_i)\big)^2 + S_B^2\right\} - \frac{1}{2\sigma^2}S^2\right] \qquad (5.11)$$
with $S_B^2$ and $S^2$ given by (5.9), and with $W_{i11} = \sqrt{Jn}\,X_{i\cdot\cdot}$. This is an exponential family with the complete set of sufficient statistics $X_{i\cdot\cdot}$, $S_B^2$, and $S^2$. The UMVU estimators of $\sigma_B^2$ and $\sigma^2$ are the same as in Example 5.3, whereas the UMVU estimator of $\alpha_i$ is $X_{i\cdot\cdot} - X_{\cdot\cdot\cdot}$, as it would be if the $B$'s were fixed. ∥

Thus far in this section, our focus has been the estimation of the variance components in random and mixed effects models. There is, however, another important estimation target in these models, the random effects themselves. This presents a somewhat different problem than is considered in the rest of this book, as the estimand is now a random variable rather than a fixed parameter. However, the theory of UMVU estimation has a fairly straightforward extension to the present case. We illustrate this in the following example.

Example 5.5 Best prediction of random effects. Consider, once more, the random effects model (5.1), where the value $\alpha_i$ of $A_i$, the effect on gas mileage, could itself be of interest. Since $\alpha_i$ is the realized value of a random variable rather than a fixed parameter, it is common to speak of prediction of $\alpha_i$ rather than estimation of $\alpha_i$. To avoid identifiability problems, we will, in fact, predict $\mu + \alpha_i$ rather than $\alpha_i$. If $\delta(X)$ is a predictor, then under squared error loss we have
$$E[\delta(X) - (\mu + \alpha_i)]^2 = E[\delta(X) \pm E(\mu + \alpha_i|X) - (\mu + \alpha_i)]^2$$
$$= E[\delta(X) - E(\mu + \alpha_i|X)]^2 + E[E(\mu + \alpha_i|X) - (\mu + \alpha_i)]^2. \qquad (5.12)$$
As we have no control over the second term on the right side of (5.12), we only need be concerned with minimization of the first term. (In this sense, prediction of a random variable is the same as estimation of its conditional expected value.) Under the normality assumptions of Example 5.1,
$$E(\mu + \alpha_i|X) = \frac{n\sigma_A^2}{n\sigma_A^2 + \sigma^2}\bar X_{i\cdot} + \frac{\sigma^2}{n\sigma_A^2 + \sigma^2}\mu. \qquad (5.13)$$
Assuming the variances known, we set
$$\delta(X) = \frac{n\sigma_A^2}{n\sigma_A^2 + \sigma^2}\bar X_{i\cdot} + \frac{\sigma^2}{n\sigma_A^2 + \sigma^2}\delta'(X)$$
and choose $\delta'(X)$ to minimize $E[\delta'(X) - \mu]^2$. The UMVU estimator of $\mu$ is $\bar X_{\cdot\cdot}$, and the UMVU predictor of $\mu + \alpha_i$ is
$$\frac{n\sigma_A^2}{n\sigma_A^2 + \sigma^2}\bar X_{i\cdot} + \frac{\sigma^2}{n\sigma_A^2 + \sigma^2}\bar X_{\cdot\cdot}. \qquad (5.14)$$
As we will see in Chapter 4, this predictor is also a Bayes estimator in a hierarchical model (which is another way of thinking of the model (5.1); see Searle et al. 1992, Chapter 9, and Problem 4.7.15). Although we have assumed normality, optimality of (5.14) continues if the distributional assumptions are relaxed, similar to (4.35). Under such relaxed assumptions, (5.14) continues to be best among linear unbiased predictors (Problem 5.17). Harville (1976) has formulated and proved a Gauss-Markov-type theorem for a general mixed model. ∥
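The predictor (5.14) and its shrinkage effect can be illustrated numerically. The sketch below assumes known variance components and simulated balanced data; all values are assumptions.

```python
# Sketch of Example 5.5: the predictor (5.14) of mu + alpha_i with known
# variance components, compared with the raw group means.
import numpy as np

rng = np.random.default_rng(5)
s, n = 10, 5
mu, sigma_A, sigma = 3.0, 1.5, 1.0
alpha = rng.normal(0, sigma_A, s)
X = mu + alpha[:, None] + rng.normal(0, sigma, (s, n))

Xi = X.mean(axis=1)                                 # group means
Xbar = X.mean()                                     # UMVU estimator of mu
w = n * sigma_A**2 / (n * sigma_A**2 + sigma**2)    # shrinkage weight in (5.13)
pred = w * Xi + (1 - w) * Xbar                      # predictor (5.14)

print(np.mean((pred - (mu + alpha)) ** 2))          # squared prediction error
print(np.mean((Xi - (mu + alpha)) ** 2))            # typically larger
```

Comparing the two printed values shows the gain from shrinking the group means toward the overall mean, which is also the Bayesian interpretation mentioned above.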
6 Exponential Linear Models

The great success of the linear models described in the previous sections suggests the desirability of extending these models beyond the normal case. A natural generalization combines a general exponential family with the structure of a linear model and will often result in exponential linear models in terms of new parameters [see, for example, (5.2) and (5.3)]. However, the models in this section are discrete and do not arise from normal theory. Equivariance tends to play little role in the resulting models; they are therefore somewhat out of place in this chapter. But certain analogies with normal linear models make it convenient to present them here.

(i) Contingency Tables

Suppose that the underlying exponential family is the set of multinomial distributions (1.5.4), which may be written as
$$\exp\left(\sum_{i=0}^s x_i \log p_i\right) h(x), \qquad (6.1)$$
and that a linear structure is imposed on the parameters $\eta_i = \log p_i$. Expositions of the resulting theory of log linear models can be found in the books by Agresti (1990), Christensen (1990), Santner and Duffy (1990), and Everitt (1992). Diaconis (1988, Chapter 9) shows how a combination of exponential family theory and group representations leads naturally to log linear models.

The models have close formal similarities with the corresponding normal models, and a natural linear subspace of the $\log p_i$ often corresponds to a natural restriction on the $p$'s. In particular, since sums of the $\log p$'s correspond to products of the $p$'s, a subspace defined by setting suitable interaction terms equal to zero is often equivalent to certain independence properties in the multinomial model.

The exponential family (6.1) is not of full rank since the $p$'s must add up to 1. A full-rank form is
$$\exp\left(\sum_{i=1}^s x_i \log(p_i/p_0)\right) h(x). \qquad (6.2)$$
If we let
$$\eta_i' = \log\frac{p_i}{p_0} = \eta_i - \eta_0, \qquad (6.3)$$
we see that arbitrary linear functions of the $\eta_i'$ correspond to arbitrary contrasts (i.e., functions of the differences) of the $\eta_i$. From Example 2.3.8, it follows that $(X_1, \dots, X_s)$ or $(X_0, X_1, \dots, X_s)$ is sufficient and complete for (6.2) and hence also for (6.1). In applications, we shall find (6.1) the more convenient form to use.

If the $\eta$'s are required to satisfy $r$ independent linear restrictions $\sum_j a_{ij}\eta_j = b_i$ ($i = 1, \dots, r$), the resulting distributions will form an exponential family of rank $s - r$, and the associated minimal sufficient statistics $T$ will continue to be complete. Since $E(X_i/n) = p_i$, the probabilities $p_i$ are always U-estimable; their UMVU estimators can be obtained as the conditional expectations of $X_i/n$ given $T$. If $\hat p_i$ is the UMVU estimator of $p_i$, a natural estimator of $\eta_i$ is $\hat\eta_i = \log \hat p_i$, but, of course, this is no longer unbiased. In fact, no unbiased estimator of $\eta_i$ exists, because only polynomials of the $p_i$ can be U-estimable (Problem 2.3.25). When $\hat p_i$ is also the MLE of $p_i$, $\hat\eta_i$ is the MLE of $\eta_i$. However, the MLE $\hat{\hat p}_i$ does not always coincide with the UMVU estimator $\hat p_i$. An example of this possibility, with $\log p_i = \alpha + \beta t_i$ ($t$'s known; $\alpha$ and $\beta$ unknown), is given by Haberman (1974, Example 1.16, p. 29; Example 3.3, p. 60). It is a disadvantage of the $\hat p_i$ in this case that, unlike the $\hat{\hat p}_i$, they do not always satisfy the restrictions of the model; that is, for some values of the $X$'s, no $\alpha$ and $\beta$ exist for which $\log \hat p_i = \alpha + \beta t_i$. Typically, if $\hat p_i \ne \hat{\hat p}_i$, the difference between the two is moderate.
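The bias of $\hat\eta_i = \log \hat p_i$ is easily seen by simulation. In the sketch below (assumed $p$ and $n$), draws with empty cells are simply discarded — a crude device that already hints at the smoothing recommendation discussed next.

```python
# Sketch: eta_hat = log(X_i/n) is biased for eta_i = log p_i; no unbiased
# estimator exists. The multinomial parameters below are assumptions.
import numpy as np

rng = np.random.default_rng(6)
p = np.array([0.5, 0.3, 0.2])
n, reps = 30, 20_000
counts = rng.multinomial(n, p, size=reps)
p_hat = counts / n
mask = (p_hat > 0).all(axis=1)             # drop draws with an empty cell
print(np.log(p_hat[mask]).mean(axis=0))    # noticeably below...
print(np.log(p))                           # ...the true values log p_i
```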
For estimating the $\eta_i$, Goodman (1970) has recommended in some cases applying the estimators not to the cell frequencies $X_i/n$ but to $(X_i + 1/2)/n$, in order to decrease the bias of the MLE. This procedure also avoids difficulties that may arise when some of the cell counts are zero. (See also Bishop, Fienberg, and Holland 1975, Chapter 12.)

Example 6.1 Two-way contingency table. Consider the situation of Example 2.3.9, in which $n$ subjects are classified according to two characteristics $A$ and $B$ with possible outcomes $A_1, \dots, A_I$ and $B_1, \dots, B_J$. If $n_{ij}$ is the number of subjects with properties $A_i$ and $B_j$, the joint distribution of the $n_{ij}$ can be written as
$$\frac{n!}{\prod_{i,j} n_{ij}!}\exp\left(\sum n_{ij}\xi_{ij}\right), \qquad \xi_{ij} = \log p_{ij}.$$
Write $\xi_{ij} = \mu + \alpha_i + \beta_j + \gamma_{ij}$ as in Example 4.11, with the side conditions (4.28). This implies no restrictions since any $IJ$ numbers $\xi_{ij}$ can be represented in this form. The $p_{ij}$ must, of course, satisfy $\sum p_{ij} = 1$, and the $\xi_{ij}$ must therefore satisfy $\sum \exp \xi_{ij} = 1$. This equation determines $\mu$ as a function of the $\alpha$'s, $\beta$'s, and $\gamma$'s, which are free subject only to (4.28). The UMVU estimators of the $p_{ij}$ were seen in Example 2.3.9 to be $n_{ij}/n$. ∥

In Example 4.11 (normal two-way layout), it is sometimes reasonable to suppose that all the $\gamma_{ij}$ (the interactions) are zero. In the present situation, this corresponds exactly to the assumption that the characteristics $A$ and $B$ are independent, that is, that $p_{ij} = p_{i+}p_{+j}$ (Problem 6.1). The UMVU estimator of $p_{ij}$ is now $n_{i+}n_{+j}/n^2$.

Example 6.2 Conditional independence in a three-way table. In Example 2.3.10, it was assumed that the subjects are classified according to three characteristics $A$, $B$, and $C$ and that conditionally, given outcome $C$, the two characteristics $A$ and $B$ are independent. If $\xi_{ijk} = \log p_{ijk}$ and $\xi_{ijk}$ is written as
$$\xi_{ijk} = \mu + \alpha_i^A + \alpha_j^B + \alpha_k^C + \alpha_{ij}^{AB} + \alpha_{ik}^{AC} + \alpha_{jk}^{BC} + \alpha_{ijk}^{ABC}$$
with the $\alpha$'s subject to the usual restrictions and with $\mu$ determined by the fact that the $p_{ijk}$ add up to 1, it turns out that the conditional independence of $A$ and $B$ given $C$ is equivalent to the vanishing of both the three-way interactions $\alpha_{ijk}^{ABC}$ and the $A,B$-interactions $\alpha_{ij}^{AB}$ (Problem 6.2). The UMVU estimators of the $p_{ijk}$ in this model were obtained in Example 2.3.10. ∥

(ii) Independent Binomial Experiments

The submodels considered in Examples 5.2–5.4 and 6.1–6.2 corresponded to natural assumptions about the variances or probabilities in question. However, in general, the assumption of linearity in the $\eta$'s made at the beginning of this section is rather arbitrary and is dictated by mathematical convenience rather than by meaningful structural assumptions. We shall now consider a particularly simple class of problems in which this linearity assumption is inconsistent with more customary assumptions. Agreement with these assumptions can be obtained by not insisting on a linear structure for the parameters $\eta_i$ themselves but permitting a linear structure for a suitable function of the $\eta$'s.

The problems are concerned with a number of independent random variables $X_i$ having the binomial distributions $b(p_i, n_i)$. Suppose the $X$'s have been obtained from some unobservable variables $Z_i$, distributed independently as $N(\zeta_i, \sigma^2)$, by setting
$$X_i = \begin{cases} 0 & \text{if } Z_i \le u \\ 1 & \text{if } Z_i > u. \end{cases} \qquad (6.4)$$
Then
$$p_i = P(Z_i > u) = \Phi\left(\frac{\zeta_i - u}{\sigma}\right) \qquad (6.5)$$
and hence
$$\zeta_i = u + \sigma\Phi^{-1}(p_i). \qquad (6.6)$$
Now consider a two-way layout for the $Z$'s in which the effects are additive, as in Example 4.11.
The subspace of the $\zeta_{ij}$ ($i = 1, \dots, I$; $j = 1, \dots, J$) defining this model is characterized by the fact that the interactions satisfy
$$\gamma_{ij} = \zeta_{ij} - \zeta_{i\cdot} - \zeta_{\cdot j} + \zeta_{\cdot\cdot} = 0, \qquad (6.7)$$
which, by (6.6), implies that
$$\Phi^{-1}(p_{ij}) - \frac{1}{J}\sum_j \Phi^{-1}(p_{ij}) - \frac{1}{I}\sum_i \Phi^{-1}(p_{ij}) + \frac{1}{IJ}\sum_i\sum_j \Phi^{-1}(p_{ij}) = 0. \qquad (6.8)$$
The "natural" linear subspace of the parameter space for the $Z$'s thus translates into a linear subspace in terms of the parameters $\Phi^{-1}(p_{ij})$ for the $X$'s, and by (6.6) the corresponding fact is true quite generally for subspaces defined in terms of differences of the $\zeta$'s. On the other hand, the joint distribution of the $X$'s is proportional to
$$\exp\left(\sum x_i \log\frac{p_i}{q_i}\right) h(x), \qquad (6.9)$$
and the natural parameters of this exponential family are $\eta_i = \log(p_i/q_i)$. The restrictions (6.8) are not linear in the $\eta$'s, and the minimal sufficient statistics for the exponential family (6.9) with the restrictions (6.8) are not complete.

It is interesting to ask whether there exists a distribution $F$ for the underlying variables $Z_i$ such that a linear structure for the $\zeta_i$ will result in a linear structure for $\eta_i = \log(p_i/q_i)$ when the $p_i$ and the $\zeta_i$ are linked by the equation
$$q_i = P(Z_i \le u) = F(u - \zeta_i) \qquad (6.10)$$
instead of by (6.5). Then, $\zeta_i = u - F^{-1}(q_i)$, so that linear functions of the $\zeta_i$ correspond to linear functions of the $F^{-1}(q_i)$ and hence of $\log(p_i/q_i)$, provided
$$F^{-1}(q_i) = a - b\log\frac{p_i}{q_i}. \qquad (6.11)$$
Suppressing the subscript $i$ and putting $x = a - b\log(p/q)$, we see that (6.11) is equivalent to
$$q = F(x) = \frac{1}{1 + e^{-(x-a)/b}}, \qquad (6.12)$$
which is the cdf of the logistic distribution $L(a, b)$, whose density is shown in Table 2.3.1.

Inferences based on the assumption of linearity in $\Phi^{-1}(p_i)$ and in $\log(p_i/q_i) = F^{-1}(p_i)$, with $F$ given by (6.12) where, without loss of generality, we can take $a = 0$, $b = 1$, are known as probit and logit analysis, respectively, and are widely used analysis techniques. For more details and many examples, see Cox (1970), Bishop, Fienberg, and Holland (1975), or Agresti (1990). As is shown by Cox (p. 28), the two analyses may often be expected to give very similar results, provided the $p$'s are not too close to 0 or 1. The probit model can also be viewed as a special case of a threshold model, a model in which it is only observed whether a random variable exceeds a threshold (Finney 1971). For the calculation of the MLEs in this model, see Problem 6.4.16.

The outcomes of $s$ independent binomial experiments can be represented by a $2 \times s$ contingency table, as in Table 3.3.1, with $I = 2$ and $J = s$, and the outcomes $A_1$ and $A_2$ corresponding to success and failure, respectively. The column totals $n_{+1}, \dots, n_{+s}$ are simply the $s$ sample sizes and are, therefore, fixed in the present model. In fact, this is the principal difference between the present model and that assumed for a $2 \times J$ table in Example 2.3.9. The case of $s$ independent binomials arises in the situation of that example if the $n$ subjects, instead of being drawn at random from the population at large, are obtained by drawing $n_{+j}$ subjects from the subpopulation having property $B_j$, for $j = 1, \dots, s$.

A $2 \times J$ contingency table, with fixed column totals and with the distribution of the cell counts given by independent binomials, occurs not only in its own right through the sampling of $n_{+1}, \dots, n_{+J}$ subjects from categories $B_1, \dots, B_J$, respectively, but also in the multinomial situation of Example 6.1 with $I = 2$, as the conditional distribution of the cell counts given the column totals. This relationship leads to an apparent paradox.
In the conditional model, the UMVU estimator of the probability $p_j = p_{1j}/(p_{1j} + p_{2j})$ of success, given that the subject is in $B_j$, is $\delta_j = n_{1j}/n_{+j}$. Since $\delta_j$ satisfies
$$E(\delta_j|B_j) = p_j, \qquad (6.13)$$
it appears also to satisfy $E(\delta_j) = p_j$ and hence to be an unbiased estimator of $p_{1j}/(p_{1j} + p_{2j})$ in the original multinomial model. On the other hand, an easy extension of the argument of Example 3.3.1 (see Problem 2.3.25) shows that, in this model, only polynomials in the $p_{ij}$ can be U-estimable, and the ratio in question clearly is not a polynomial. The explanation lies in the tacit assumption made in (6.13) that $n_{+j} > 0$ and in the fact that $\delta_j$ is not defined when $n_{+j} = 0$. To ensure at least one observation in $B_j$, one needs a sampling scheme under which an arbitrarily large number of observations is possible. For such a scheme, the U-estimability of $p_{1j}/(p_{1j} + p_{2j})$ would no longer be surprising.

It is clear from the discussion leading to (6.8) that the generalization of normal linear models to models linear in the natural parameters $\eta_i$ of an exponential family is too special and that, instead, linear spaces in suitable functions of the $\eta_i$ should be permitted. Because in exponential families the parameters of primary interest often are the expectations $\theta_i = E(T_i)$ [for example, in (6.9), the $p_i = E(X_i)$], generalized linear models are typically defined by restricting the parameters to lie in a space defined by linear conditions on $v(\theta_i)$ [or, in some cases, $v_i(\theta_i)$] for a suitable link function $v$ (linking the $\theta$'s with the linear space). A theory of such models was developed by Dempster (1971) and Nelder and Wedderburn (1972), who, in particular, discuss maximum likelihood estimation of the parameters. Further aspects are treated in Wedderburn (1976) and in Pregibon (1980). For a comprehensive treatment of these generalized linear models, see the book by McCullagh and Nelder (1989), an essential reference on this topic; an introductory treatment is provided by Dobson (1990). A generalized linear interactive modeling (GLIM) package has been developed by Baker and Nelder (1983a,b). The GLIM package has proved invaluable in implementing these methods and has been at the center of much of the research and modeling (see, for example, Aitken et al. 1989).
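Cox's observation on the closeness of the two analyses can be made concrete by comparing the two cdfs after matching scales. The sketch below uses the variance-matching constant $\sqrt{3}/\pi$, one common (and here assumed) choice of scaling.

```python
# Sketch: the logistic cdf (6.12), scaled to unit variance, is uniformly close
# to the standard normal cdf, so probit and logit analyses tend to agree
# unless p is near 0 or 1.
import math
import numpy as np

b = math.sqrt(3) / math.pi                   # logistic scale with unit variance
xs = np.linspace(-4, 4, 81)
probit = np.array([0.5 * (1 + math.erf(x / math.sqrt(2))) for x in xs])  # Phi(x)
logit = 1 / (1 + np.exp(-xs / b))            # F(x) of (6.12) with a = 0
print(float(np.max(np.abs(probit - logit)))) # small maximum discrepancy
```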
7 Finite Population Models

In the location-scale models of Sections 3.1 and 3.3, and the more general linear models of Sections 3.4 and 3.5, observations are measurements that are subject to random errors. The parameters to be estimated are the true values of the quantities being measured, or differences and other linear functions of these values, and the variance of the measurement errors. We shall now consider a class of problems in which the measurements are assumed to be without error, but in which the observations are nevertheless random because the subjects (or objects) being observed are drawn at random from a finite population.

Problems of this kind occur whenever one wishes to estimate the average income, days of work lost to illness, reading level, or the proportion of a population supporting some measure or candidate. The elements being sampled need not be human but may be trees, food items, financial records, schools, and so on. We shall consider here only the simplest sampling schemes. For a fuller account of the principal methods of sampling, see, for example, Cochran (1977); a systematic treatment of the more theoretical aspects is given by Cassel, Särndal, and Wretman (1977) and Särndal, Swensson, and Wretman (1992).

The prototype of the problems to be considered is the estimation of a population average on the basis of a simple random sample from that population. In order to draw a random sample, one needs to be able to identify the members of the population. Telephone subscribers, for example, can conveniently be identified by the page and position on the page, trees by their coordinates, and students in a class by their names or by the row and number of their seat. In general, a list or other identifying description of the members of the population is called a frame. To represent the sampling frame, suppose that $N$ population elements are labeled $1, \dots, N$; in addition, a value $a_i$ (the quantity of interest) is associated with the element $i$. (This notation is somewhat misleading because, in any realization of the model, the $a$'s will simply be $N$ real numbers without identifying subscripts.) For the purpose of estimating $\bar a = \sum_{i=1}^N a_i/N$, a sample of size $n$ is drawn in order, one element after another, without replacement. It is a simple random sample if all $N(N-1)\cdots(N-n+1)$ possible $n$-tuples are equally likely.

The data resulting from such a sampling process consist of the $n$ labels of the sampled elements and the associated $a$ values, in the order in which they were drawn, say
$$X = \{(I_1, Y_1), \dots, (I_n, Y_n)\} \qquad (7.1)$$
where the $I$'s denote the labels and the $Y$'s the associated $a$ values, $Y_k = a_{I_k}$. The unknown aspect of the situation, which as usual we shall denote by $\theta$, is the set of population $a$ values of the $N$ elements,
$$\theta = \{(1, a_1), \dots, (N, a_N)\}. \qquad (7.2)$$

In the classic approach to sampling, the labels are discarded. Let us for a moment follow this approach, so that what remains of the data is the set of $n$ observed $a$ values: $Y_1, \dots, Y_n$. Under simple random sampling, the order statistics $Y_{(1)} \le \cdots \le Y_{(n)}$ are then sufficient. To obtain UMVU estimators of $\bar a$ and other functions of the $a$'s, one needs to know whether this sufficient statistic is complete. The answer depends on the parameter space $\Omega$, which we have not yet specified. It frequently seems reasonable to assume that the set $V$ of possible values is the same for each of the $a$'s and does not depend on the values taken on by the other $a$'s. (This would not be the case, for example, if the $a$'s were the grades obtained by the students in a class which is being graded "on the curve.") The parameter space $\Omega$ is then the set of all $\theta$'s given by (7.2) with $(a_1, \dots, a_N)$ in the Cartesian product
$$V \times V \times \cdots \times V. \qquad (7.3)$$
Here, $V$ may, for example, be the set of all real numbers, all positive real numbers, or all positive integers. Or it may just be the set $V = \{0, 1\}$, representing a situation in which there are only two kinds of elements—those who vote yes or no, which are satisfactory or defective, and so on.

Theorem 7.1 If the parameter space is given by (7.3), the order statistics $Y_{(1)}, \dots, Y_{(n)}$ are complete.

Proof. Denote by $s$ an unordered sample of $n$ elements and by $Y_{(1)}(s, \theta), \dots, Y_{(n)}(s, \theta)$ its $a$ values in increasing size. Then, the expected value of any estimator $\delta$ depending only on the order statistics is
$$E_\theta\{\delta[Y_{(1)}, \dots, Y_{(n)}]\} = \sum P(s)\,\delta[Y_{(1)}(s, \theta), \dots, Y_{(n)}(s, \theta)], \qquad (7.4)$$
where the summation extends over all $\binom{N}{n}$ possible samples, and where, for simple random sampling, $P(s) = 1/\binom{N}{n}$ for all $s$. We need to show that
$$E_\theta\{\delta[Y_{(1)}, \dots, Y_{(n)}]\} = 0 \quad\text{for all } \theta \in \Omega \qquad (7.5)$$
implies that $\delta[y_{(1)}, \dots, y_{(n)}] = 0$ for all $y_{(1)} \le \cdots \le y_{(n)}$.

Let us begin by considering (7.5) for all parameter points $\theta$ for which $(a_1, \dots, a_N)$
is of the form $(a, \dots, a)$, $a \in V$. Then, (7.5) reduces to $\sum_s P(s)\,\delta(a, \dots, a) = 0$ for all $a$, which implies $\delta(a, \dots, a) = 0$. Next, suppose that $N - 1$ elements in $\theta$ are equal to $a$, and one is equal to $b > a$. Now, (7.5) will contain two kinds of terms: those corresponding to samples consisting of $n$ $a$'s and those in which the sample contains $b$, and (7.5) becomes
$$p\,\delta(a, \dots, a) + q\,\delta(a, \dots, a, b) = 0,$$
where $p$ and $q$ are known numbers $\ne 0$. Since the first term has already been shown to be zero, it follows that $\delta(a, \dots, a, b) = 0$. Continuing inductively, we see that $\delta(a, \dots, a, b, \dots, b) = 0$ for any $k$ $a$'s and $n - k$ $b$'s, $k = 0, \dots, n$. As the next stage in the induction argument, consider $\theta$'s of the form $(a, \dots, a, b, c)$ with $a < b < c$, then $\theta$'s of the form $(a, \dots, a, b, b, c)$, and so on, showing successively that $\delta(a, \dots, a, b, c)$, $\delta(a, \dots, a, b, b, c), \dots$ are equal to zero. Continuing in this way, we see that $\delta[y_{(1)}, \dots, y_{(n)}] = 0$ for all possible $(y_{(1)}, \dots, y_{(n)})$, and this proves completeness. ✷

It is interesting to note the following:
(a) No use has been made of the assumption of simple random sampling, so that the result is valid also for other sampling methods for which the probabilities $P(s)$ are known and positive for all $s$.
(b) The result need not be true for other parameter spaces (Problem 7.1).

Corollary 7.2 On the basis of the sample values $Y_1, \dots, Y_n$, a UMVU estimator exists for any U-estimable function of the $a$'s, and it is the unique unbiased estimator $\delta(Y_1, \dots, Y_n)$ that is symmetric in its $n$ arguments.

Proof. The result follows from Theorem 2.1.11 and the fact that a function of $y_1, \dots, y_n$ depends only on $y_{(1)}, \dots, y_{(n)}$ if and only if it is symmetric in its $n$ arguments (see Section 2.4). ✷

Example 7.3 UMVU estimation in simple random sampling. If the sampling method is simple random sampling and the estimand is $\bar a$, the sample mean $\bar Y$ is clearly unbiased since $E(Y_i) = \bar a$ for all $i$ (Problem 7.2). Since $\bar Y$ is symmetric in $Y_1, \dots, Y_n$, it is UMVU, and among unbiased estimators it minimizes the risk for any convex loss function. The variance of $\bar Y$ is (Problem 7.3)
$$\mathrm{var}(\bar Y) = \frac{N-n}{N-1}\cdot\frac{\tau^2}{n} \qquad (7.6)$$
where
$$\tau^2 = \frac{1}{N}\sum (a_i - \bar a)^2 \qquad (7.7)$$
is the population variance. To obtain an unbiased estimator of $\tau^2$, note that (Problem 7.3)
$$E\left[\frac{1}{n-1}\sum (Y_i - \bar Y)^2\right] = \frac{N}{N-1}\tau^2. \qquad (7.8)$$
Thus, $[(N-1)/N(n-1)]\sum_{i=1}^n (Y_i - \bar Y)^2$ is unbiased for $\tau^2$, and because it is symmetric in its $n$ arguments, it is UMVU. ∥

If the sampling method is sequential, the stopping rule may add an additional complication.

Example 7.4 Sum-quota sampling. Suppose that each $Y_i$ has associated with it a cost $C_i$, a positive random variable, and sampling is continued until $\nu$ observations are taken, where $\sum_{i=1}^{\nu-1} C_i < Q < \sum_{i=1}^{\nu} C_i$, with $Q$ being a specified quota. (Note the similarity to inverse binomial sampling, as discussed in Example 2.3.2.) Under this sampling scheme, Pathak (1976) showed that $\bar Y_{\nu-1} = \frac{1}{\nu-1}\sum_{i=1}^{\nu-1} Y_i$ is an unbiased estimator of the population average $\bar a$ (Problem 7.4). Note that Pathak's estimator drops the terminal observation $Y_\nu$, which tends to be upwardly biased. As a consequence, Pathak's estimator can be improved upon. This was done by Kremers (1986), who showed the following:
(a) $T = \{(C_1, Y_1), \dots, (C_\nu, Y_\nu)\}$ is complete sufficient.
(b) Conditional on $T$, $\{(C_1, Y_1), \dots, (C_{\nu-1}, Y_{\nu-1})\}$ are exchangeable (Problem 7.5).
Under these conditions, the estimator
$$\hat a = \bar Y - (\bar Y_{[\nu]} - \bar Y)/(\nu - 1) \qquad (7.9)$$
is UMVU if $\nu > 1$, where $\bar Y_{[\nu]}$ is the mean of all of the observations that could have been the terminal observation; that is, $\bar Y_{[\nu]}$ is the mean of all the $Y_j$'s in the set
$$\Big\{(c_j, y_j) : \sum_{i \ne j} c_i < Q,\ j = 1, \dots, \nu\Big\}. \qquad (7.10)$$
See Problem 7.6. ∥

So far, we have ignored the labels. That Theorem 7.1 and Corollary 7.2 no longer hold when the labels are included in the data is seen by the following result.

Theorem 7.5 Given any sampling scheme of fixed size $n$ which assigns to the sample $s$ a known probability $P(s)$ (which may depend on the labels but not on the $a$ values of the sample), given any U-estimable function $g(\theta)$, and given any pre-assigned parameter point $\theta_0 = \{(1, a_{10}), \dots, (N, a_{N0})\}$, there exists an unbiased estimator $\delta^*$ of $g(\theta)$ with variance
$$\mathrm{var}_{\theta_0}(\delta^*) = 0.$$

Proof. Let $\delta$ be any unbiased estimator of $g(\theta)$, which may depend on both labels and $y$ values, say $\delta(s) = \delta[(i_1, y_1), \dots, (i_n, y_n)]$, and let
$$\delta_0(s) = \delta[(i_1, a_{i_1 0}), \dots, (i_n, a_{i_n 0})].$$
Note that $\delta_0$ depends on the labels whether or not $\delta$ does and thus would not be available if the labels had been discarded. Let
$$\delta^*(s) = \delta(s) - \delta_0(s) + g(\theta_0).$$
Since $E_\theta(\delta) = g(\theta)$ and $E_\theta(\delta_0) = g(\theta_0)$, it is seen that $\delta^*$ is unbiased for estimating $g(\theta)$. When $\theta = \theta_0$, $\delta^* = g(\theta_0)$ and is thus a constant. Its variance is therefore zero, as was to be proved. ✷

To see under what circumstances the labels are likely to be helpful and when it is reasonable to discard them, let us consider an example.

Example 7.6 Informative labels. Suppose the population is a class of several hundred students. A random sample is drawn, and each of the sampled students is asked to provide a numerical evaluation of the instructor. (Such a procedure may be more accurate than distributing reaction sheets to the whole class, if for the much smaller sample it is possible to obtain a considerably higher rate of response.) Suppose that the frame is an alphabetically arranged class list and that the label is the number of the student on this list. Typically, one would not expect this label to carry any useful information, since the place of a name in the alphabet does not usually shed much light on the student's attitude toward the instructor. (Of course, there may be exceptional circumstances that vitiate this argument.) On the other hand, suppose the students are seated alphabetically. In a large class, the students sitting in front may have the advantage of hearing and seeing better, receiving more attention from the instructor, and being less likely to read the campus newspaper or fall asleep. Their attitude could thus be affected by the place of their name in the alphabet, and thus the labels could carry some information. ∥
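The construction in the proof of Theorem 7.5 is easy to carry out explicitly. The following sketch (population values and $\theta_0$ assumed) verifies both the unbiasedness of $\delta^*$ and its zero variance at $\theta_0$.

```python
# Sketch of Theorem 7.5: starting from the sample mean as an unbiased
# estimator of a-bar, delta* = delta - delta0 + g(theta0) remains unbiased
# yet is constant at the chosen theta0. All population values are assumptions.
import numpy as np

rng = np.random.default_rng(7)
N, n = 12, 4
a = rng.uniform(0, 10, N)         # the true a-values (theta)
a0 = np.full(N, 5.0)              # the pre-assigned parameter point theta_0
abar0 = a0.mean()                 # g(theta_0) for g = population average

def delta_star(labels, y_vals):
    # delta = mean of observed a's; delta0 re-evaluates delta at theta_0,
    # which requires the labels
    return y_vals.mean() - a0[labels].mean() + abar0

# Unbiasedness under simple random sampling (Monte Carlo):
ests = [delta_star(lab := rng.choice(N, n, replace=False), a[lab])
        for _ in range(20_000)]
print(np.mean(ests), a.mean())    # approximately equal

# At theta = theta_0 the estimator is constant, hence has zero variance:
print(delta_star(np.arange(n), a0[:n]))   # always equals abar0
```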
We shall discuss two ways of formalizing the idea that the labels can reasonably be discarded if they appear to be unrelated to the associated $a$ values.

(i) Invariance. Consider the transformations of the parameter and sample space obtained by an arbitrary permutation of the labels:
$$\bar g\theta = \{(j(1), a_1), \dots, (j(N), a_N)\}, \qquad gX = \{(j(I_1), Y_1), \dots, (j(I_n), Y_n)\}. \qquad (7.11)$$
The estimand $\bar a$ [or, more generally, any function $h(a_1, \dots, a_N)$ that is symmetric in the $a$'s] is unchanged by these transformations, so that $g^*d = d$, and a loss function $L(\theta, d)$ is invariant if it depends on $\theta$ only through the $a$'s (in fact, as a symmetric function of the $a$'s) and not the labels. [For estimating $\bar a$, such a loss function would typically be of the form $\rho(d - \bar a)$.]

Since $g^*d = d$, an estimator $\delta$ is equivariant if it satisfies the condition
$$\delta(gX) = \delta(X) \quad\text{for all } g \text{ and } X. \qquad (7.12)$$
In this case, equivariance thus reduces to invariance. Condition (7.12) holds if and only if the estimator $\delta$ depends only on the observed $Y$ values and not on the labels. Combining this result with Corollary 7.2, we see that for any U-estimable function $h(a_1, \dots, a_N)$, the estimator of Corollary 7.2 uniformly minimizes the risk, for any convex loss function that does not depend on the labels, among all estimators of $h$ which are both unbiased and invariant.

The appropriateness of the principle of equivariance, which permits restricting consideration to equivariant (in the present case, invariant) estimators, depends on the assumption that the transformations (7.11) leave the problem invariant. This is clearly not the case when there is a relationship between the labels and the associated $a$ values, for example, when low $a$ values tend to be associated with low labels and high $a$ values with high labels, since permutation of the labels will destroy this relationship. Equivariance considerations therefore justify discarding the labels if, in our judgment, the problem is symmetric in the labels, that is, unchanged under any permutation of the labels.

(ii) Random labels. Sometimes, it is possible to adopt a slightly different formulation of the model which makes an appeal to equivariance unnecessary. Suppose that the labels have been assigned at random, that is, so that all $N!$ possible assignments are equally likely. Then, the observed $a$ values $Y_1, \dots, Y_n$ are sufficient. To see this, note that given these values, any $n$ labels $(I_1, \dots, I_n)$ associated with them are equally likely, so that the conditional distribution of $X$ given $(Y_1, \dots, Y_n)$ is independent of $\theta$. In this model, the estimators of Corollary 7.2 are, therefore, UMVU without any further restriction.

Of course, the assumption of random labeling is legitimate only if the labels really were assigned at random rather than in some systematic way such as alphabetically or first come, first labeled. In the latter cases, rather than incorporating a very shaky assumption into the model, it seems preferable to invoke equivariance when it comes to the analysis of the data, with the implied admission that we believe the labels to be unrelated to the $a$ values but without denying that a hidden relationship may exist.

Simple random sampling tends to be inefficient unless the population being sampled is fairly homogeneous with respect to the $a$'s. To see this, suppose that $a_1 = \cdots = a_{N_1} = a$ and $a_{N_1+1} = \cdots = a_{N_1+N_2} = b$ ($N_1 + N_2 = N$). Then (Problem 7.3),
$$\mathrm{var}(\bar Y) = \frac{N-n}{N-1}\cdot\frac{\gamma(1-\gamma)}{n}(b-a)^2 \qquad (7.13)$$
where $\gamma = N_1/N$. On the other hand, suppose that the subpopulations $\Pi_i$, consisting of the $a$'s and $b$'s, respectively, can be identified and that one observation $X_i$ is taken from each of the $\Pi_i$ ($i = 1, 2$). Then $X_1 = a$ and $X_2 = b$, and $(N_1X_1 + N_2X_2)/N = \bar a$ is an unbiased estimator of $\bar a$ with variance zero.

This suggests that rather than taking a simple random sample from a heterogeneous population $\Pi$, one should try to divide $\Pi$ into more homogeneous subpopulations $\Pi_i$, called strata, and sample each of the strata separately. Human populations are frequently stratified by such factors as age, gender, socioeconomic background, severity of disease, or by administrative units such as schools, hospitals, counties, voting districts, and so on.
Suppose that the population O has been partitioned into s strata O₁, ..., O_s of sizes N₁, ..., N_s and that independent simple random samples of size nᵢ are taken from each Oᵢ (i = 1, ..., s). If $a_{ij}$ (j = 1, ..., Nᵢ) denote the a values in the ith stratum, the parameter is now θ = (θ₁, ..., θ_s), where $\theta_i = \{(1, a_{i1}), \dots, (N_i, a_{iN_i});\ i\}$, and the observations are X = (X₁, ..., X_s), where $X_i = \{(K_{i1}, Y_{i1}), \dots, (K_{in_i}, Y_{in_i});\ i\}$. Here, $K_{ij}$ is the label of the jth element drawn from Oᵢ and $Y_{ij}$ is its a value. It is now easy to generalize the optimality results for simple random sampling to stratified sampling.

Theorem 7.7 Let the $Y_{ij}$ (j = 1, ..., nᵢ), ordered separately for each i, be denoted by $Y_{i(1)} < \cdots < Y_{i(n_i)}$. On the basis of the $Y_{ij}$ (i.e., without the labels), these ordered sample values are sufficient. They are also complete if the parameter space Ωᵢ for θᵢ is of the form $V_i \times \cdots \times V_i$ (Nᵢ factors) and the overall parameter space is Ω = Ω₁ × ··· × Ω_s. (Note that the value sets Vᵢ may be different for different strata.)

The proof is left to the reader (Problem 7.8).

It follows from Theorem 7.7 that, on the basis of the Y's, a UMVU estimator exists for any U-estimable function of the a's and that it is the unique unbiased estimator $\delta(Y_{11}, \dots, Y_{1n_1}; Y_{21}, \dots, Y_{2n_2}; \dots)$ which is symmetric in its first n₁ arguments, symmetric in its second set of n₂ arguments, and so forth.

Example 7.8 UMVU estimation in stratified random sampling. Suppose that we let $\bar a_{\cdot\cdot} = \sum a_{ij}/N$ be the average of the a's for the population O. If $\bar a_{i\cdot}$ is the average of the a's in Oᵢ, then $\bar Y_{i\cdot}$ is unbiased for estimating $\bar a_{i\cdot}$ and hence
$$\delta = \sum \frac{N_i \bar Y_{i\cdot}}{N} \tag{7.14}$$
is an unbiased estimator of $\bar a_{\cdot\cdot}$. Since δ is symmetric in each of the s subsamples, it is UMVU for $\bar a_{\cdot\cdot}$ on the basis of the Y's. From (7.6) and the independence of the $\bar Y_{i\cdot}$'s, it is seen that
$$\mathrm{var}(\delta) = \sum \frac{N_i^2}{N^2} \cdot \frac{N_i - n_i}{N_i - 1} \cdot \frac{1}{n_i}\,\tau_i^2, \tag{7.15}$$
where $\tau_i^2$ is the population variance of Oᵢ, and from (7.8), one can read off the UMVU estimator of (7.15). ∥

Discarding the labels within each stratum (but not the strata labels) can again be justified by invariance considerations if these labels appear to be unrelated to the associated a values. Permutation of the labels within each stratum then leaves the problem invariant, and the condition of equivariance reduces to the invariance condition (7.12). In the present situation, an estimator again satisfies (7.12) if and only if it does not depend on the within-strata labels. The estimator (7.14), and other estimators which are UMVU when these labels are discarded, are therefore also UMVU invariant without this restriction.

A central problem in stratified sampling is the choice of the sample sizes nᵢ. This is a design question and hence outside the scope of this book (but see Hedayat and Sinha 1991). We only mention that a natural choice is proportional allocation, in which the sample sizes nᵢ are proportional to the population sizes Nᵢ. If the τᵢ are known, the best possible choice, in the sense of minimizing the approximate variance
$$\sum \left(N_i^2 \tau_i^2 / n_i N^2\right), \tag{7.16}$$
is the Tschuprow-Neyman allocation with nᵢ proportional to Nᵢτᵢ (Problem 7.11).
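A small sketch of these two allocation rules, with made-up stratum sizes and standard deviations (nothing below comes from the text beyond formula (7.16)); it confirms numerically that the Neyman allocation never does worse than proportional allocation in the approximate variance:

```python
# Proportional vs. Tschuprow-Neyman allocation for the approximate variance
# (7.16); stratum sizes N_i and standard deviations tau_i are illustrative.
N_i   = [50, 30, 20]
tau_i = [4.0, 1.0, 9.0]
n = 12
N = sum(N_i)

weights = [Ni * ti for Ni, ti in zip(N_i, tau_i)]
neyman = [n * w / sum(weights) for w in weights]   # n_i proportional to N_i * tau_i
prop   = [n * Ni / N for Ni in N_i]                # n_i proportional to N_i
# (Both allocations are left fractional here; in practice they must be rounded.)

def approx_var(alloc):
    # formula (7.16): sum of N_i^2 * tau_i^2 / (n_i * N^2)
    return sum(Ni**2 * ti**2 / (ni * N**2)
               for Ni, ti, ni in zip(N_i, tau_i, alloc))

print(approx_var(neyman), approx_var(prop))   # Neyman allocation is never larger
```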
Stratified sampling, in addition to providing greater precision for the same total sample size than simple random sampling, often has the advantage of being administratively more convenient, which may mean that a larger sample size is possible on the same budget. Administrative convenience is the principal advantage of a third sampling method, cluster sampling, which we shall consider next. The population is divided into K clusters of sizes M₁, ..., M_K. A simple random sample of k clusters is taken, and the a values of all the elements in the sampled clusters are obtained. The clusters might, for example, be families or city blocks. A field worker obtaining information about one member of a family can often obtain the same information for all the members at relatively little additional cost.

An important special case of cluster sampling is systematic sampling. Suppose the items on a conveyor belt or the cards in a card catalog are being sampled. The easiest way of drawing a sample in these cases, and in many situations in which the sampling is being done in the field, is to take every rth element, where r is some positive integer. To inject some randomness into the process, the starting point is chosen at random. Here, there are r clusters consisting of the items labeled {1, r + 1, 2r + 1, ...}, {2, r + 2, 2r + 2, ...}, ..., {r, 2r, 3r, ...}, of which one is chosen at random, so that K = r and k = 1.

In general, let the elements of the ith cluster be $\{a_{i1}, \dots, a_{iM_i}\}$ and let $u_i = \sum_{j=1}^{M_i} a_{ij}$ be the total for the ith cluster. We shall be interested in estimating some function of the u's, such as the population average $\bar a_{\cdot\cdot} = \sum u_i / \sum M_i$. Of the $a_{ij}$, we shall assume that the vector of values $(a_{i1}, \dots, a_{iM_i})$ belongs to some set $W_i$ (which may, but need not, be of the form V × ··· × V) and that $(a_{11}, \dots, a_{1M_1}; a_{21}, \dots, a_{2M_2}; \dots) \in W_1 \times \cdots \times W_K$. The observations consist of the labels of the clusters included in the sample, together with the full set of labels and values of the elements of each such cluster:
$$X = \big\{\big[i_1;\ (1, a_{i_1,1}), (2, a_{i_1,2}), \dots\big];\ \big[i_2;\ (1, a_{i_2,1}), (2, a_{i_2,2}), \dots\big];\ \dots\big\}.$$

Let us begin the reduction of the statistical problem with invariance considerations. Clearly, the problem remains invariant under permutations of the labels within each cluster, and this reduces the observation to
$$X' = \big\{\big[i_1, (a_{i_1,1}, \dots, a_{i_1,M_{i_1}})\big];\ \big[i_2, (a_{i_2,1}, \dots, a_{i_2,M_{i_2}})\big];\ \dots\big\}$$
in the sense that an estimator is invariant under these permutations if and only if it depends on X only through X′.

The next group is different from any we have encountered so far. Consider any transformation taking $(a_{i1}, \dots, a_{iM_i})$ into $(a'_{i1}, \dots, a'_{iM_i})$, i = 1, ..., K, where the $a'_{ij}$ are arbitrary, except that they must satisfy (a) $(a'_{i1}, \dots, a'_{iM_i}) \in W_i$ and (b) $\sum_{j=1}^{M_i} a'_{ij} = u_i$. Note that for some vectors $(a_{i1}, \dots, a_{iM_i})$, there may be no such transformation except the identity; for others, there may be just the identity and one other; and so on, depending on the nature of $W_i$. It is clear that these transformations leave the problem invariant, provided both the estimand and the loss function depend on the a's only through the u's. Since the estimand remains unchanged, the same should then be true for δ, which therefore should satisfy
$$\delta(gX') = \delta(X') \tag{7.17}$$
for all these transformations. It is easy to see (Problem 7.17) that δ satisfies (7.17) if and only if δ depends on X′ only through the observed cluster labels, cluster sizes, and the associated cluster totals, that is, only on
$$X'' = \{(i_1, u_{i_1}, M_{i_1}), \dots, (i_k, u_{i_k}, M_{i_k})\} \tag{7.18}$$
and the order in which the clusters were drawn.
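The reduction to X′′, and the equal-cluster-size estimator Ȳ/M discussed next, are both easy to exercise on a toy example. A minimal sketch with hypothetical clusters (the values, cluster sizes, and sample size are assumptions for illustration only):

```python
# Reduction of sampled clusters to X'' = (label, total, size), and an exact
# unbiasedness check of Ybar/M when all clusters have equal size M.
from itertools import combinations

clusters = {1: [2.0, 3.0], 2: [5.0, 1.0], 3: [4.0, 4.0], 4: [0.0, 6.0]}
K, k, M = len(clusters), 2, 2
a_bar = sum(sum(v) for v in clusters.values()) / (K * M)   # population average

def to_X2(sample_labels):
    # summarize each sampled cluster by its label, total u_i, and size M_i
    return [(i, sum(clusters[i]), len(clusters[i])) for i in sample_labels]

print(to_X2([1, 3]))   # e.g. [(1, 5.0, 2), (3, 8.0, 2)]

# Ybar/M, where Ybar is the mean of the sampled cluster totals, averaged over
# all equally likely samples of k clusters, equals the population average:
ests = [sum(sum(clusters[i]) for i in s) / (k * M)
        for s in combinations(clusters, k)]
print(sum(ests) / len(ests), a_bar)   # the two values agree
```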
This differs from the set of observations we would obtain in a simple random sample from the collection
$$\{(1, u_1), \dots, (K, u_K)\} \tag{7.19}$$
through the additional observations provided by the cluster sizes. For the estimation of the population average or total, this information may be highly relevant, and the choice of estimator must depend on the relationship between Mᵢ and uᵢ.

The situation does, however, reduce to that of simple random sampling from (7.19) under the additional assumption that the cluster sizes Mᵢ are equal, say Mᵢ = M, where M can be assumed to be known. This is the case, either exactly or as a very close approximation, for systematic sampling, and also in certain applications to industrial, commercial, or agricultural sampling, for example, when the clusters are cartons of eggs or other packages or boxes containing a fixed number of items. From the discussion of simple random sampling, we know that the average Ȳ of the observed u values is then the UMVU invariant estimator of ū = Σuᵢ/K, and hence that Ȳ/M is UMVU invariant for estimating $\bar a_{\cdot\cdot}$. The variance of the estimator is easily obtained from (7.6) with τ² = Σ(uᵢ − ū)²/K.

In stratified sampling, it is desirable to have the strata as homogeneous as possible: The more homogeneous a stratum, the smaller the sample size it requires. The situation is just the reverse in cluster sampling, where the whole cluster will be observed in any case. The more homogeneous a cluster, the less benefit is derived from these observations: "If you have seen one, you have seen them all." Thus, it is desirable to have the clusters as heterogeneous as possible. For example, families, for some purposes, constitute good clusters by being both administratively convenient and heterogeneous with respect to age and variables related to age.

The advantages of stratified sampling apply not only to the sampling of single elements but equally to the sampling of clusters. Stratified cluster sampling consists of drawing a simple random sample of clusters from each stratum and combining the estimates of the strata averages or totals in the obvious way. The resulting estimator is again UMVU invariant, provided the cluster sizes are constant within each stratum, although they may differ from one stratum to the next. (For a more detailed discussion of stratified cluster sampling, see, for example, Kish 1965.)

To conclude this section, we shall briefly indicate two ways in which the equivariance considerations in the present section differ from those in the rest of the chapter.

(i) In all of the present applications, the transformations leave the estimand unchanged rather than transforming it into a different value, and the condition of equivariance then reduces to the invariance condition δ(gX) = δ(X). Correspondingly, the group Ḡ is not transitive over the parameter space, and a UMRE estimator cannot be expected to exist. To obtain an optimal estimator, one has to invoke unbiasedness in addition to invariance. (For an alternative optimality property, see Section 5.4.)

(ii) Instead of starting with transformations of the sample space which would then induce transformations of the parameter space, we inverted the order and began by transforming θ, thereby inducing transformations of X. This does not involve a new approach but was simply more convenient than the usual order. To see how to present the transformations in the usual order, let us consider the sample space as the totality of possible samples s, together with the labels and values of their elements. Suppose, for example, that the transformations are permutations of the labels.
Since the same elements appear in many different samples, one must ensure that the transformations g of the samples are consistent, that is, that the transform of an element is independent of the particular sample in which it appears. If a transformation has this property, it will define a permutation of all the labels in the population and hence a transformation ḡ of θ. Starting with g or ḡ thus leads to the same result; the latter is more convenient because it provides the required consistency property automatically.

8 Problems

Section 1

1.1 Prove the parts of Theorem 1.4 relating to (a) risk and (b) variance.

1.2 In model (1.9), suppose that n = 2 and that f satisfies f(−x₁, −x₂) = f(x₂, x₁). Show that the distribution of (X₁ + X₂)/2 given X₂ − X₁ = y is symmetric about 0. Note that if X₁ and X₂ are iid according to a distribution which is symmetric about 0, the above equation holds.

1.3 If X₁ and X₂ are distributed according to (1.9) with n = 2 and f satisfying the assumptions of Problem 1.2, and if ρ is convex and even, then the MRE estimator of ξ is (X₁ + X₂)/2.

1.4 Under the assumptions of Example 1.18, show that (a) E[X₍₁₎] = b/n and (b) med[X₍₁₎] = b log 2/n.

1.5 For each of the three loss functions of Example 1.18, compare the risk of the MRE estimator to that of the UMVU estimator.

1.6 If T is a sufficient statistic for the family (1.9), show that the estimator (1.28) is a function of T only. [Hint: Use the factorization theorem.]

1.7 Let Xᵢ (i = 1, 2, 3) be independently distributed with density f(xᵢ − ξ), and let δ = X₁ if X₃ > 0 and δ = X₂ if X₃ ≤ 0. Show that the estimator δ of ξ has constant risk for any invariant loss function, but δ is not location equivariant.

1.8 Prove Corollary 1.14. [Hint: Show that (a) φ(v) = E₀ρ(X − v) → M as v → ±∞ and (b) φ is continuous; (b) follows from the fact (see TSH2, Appendix Section 2) that if $f_n$, n = 1, 2, ..., and f are probability densities such that $f_n(x) \to f(x)$ a.e., then $\int \psi f_n \to \int \psi f$ for any bounded ψ.]

1.9 Let X₁, ..., X_n be distributed as in Example 1.19 and let the loss function be that of Example 1.15. Determine the totality of MRE estimators and show that the midrange is one of them.

1.10 Consider the loss function
$$\rho(t) = \begin{cases} -At & \text{if } t < 0 \\ Bt & \text{if } t \ge 0 \end{cases} \qquad (A, B \ge 0).$$
If X is a random variable with density f and distribution function F, show that Eρ(X − v) is minimized by any v satisfying F(v) = B/(A + B).

1.11 In Example 1.16, find the MRE estimator of ξ when the loss function is given by Problem 1.10.

1.12 Show that an estimator δ(X) of g(θ) is risk-unbiased with respect to the loss function of Problem 1.10 if $F_\theta[g(\theta)] = B/(A + B)$, where $F_\theta$ is the cdf of δ(X) under θ.

1.13 Suppose X₁, ..., X_m and Y₁, ..., Y_n have joint density f(x₁ − ξ, ..., x_m − ξ; y₁ − η, ..., y_n − η) and consider the problem of estimating W = η − ξ. Explain why it is desirable for the loss function L(ξ, η; d) to be of the form ρ(d − W) and for an estimator δ of W to satisfy δ(x + a, y + b) = δ(x, y) + (b − a).

1.14 Under the assumptions of the preceding problem, prove the equivalents of Theorems 1.4–1.17 and Corollaries 1.11–1.14 for estimators satisfying the restriction.

1.15 In Problem 1.13, determine the totality of estimators satisfying the restriction when m = n = 1.

1.16 In Problem 1.13, suppose the X's and Y's are independently normally distributed with known variances σ² and τ². Find conditions on ρ under which the MRE estimator is Ȳ − X̄.
1.17 In Problem 1.13, suppose the X's and Y's are independently distributed as E(ξ, 1) and E(η, 1), respectively, and that m = n. Find conditions on ρ under which the MRE estimator of W is Y₍₁₎ − X₍₁₎.

1.18 In Problem 1.13, suppose that X and Y are independent and that the loss function is squared error. If ξ̂ and η̂ are the MRE estimators of ξ and η, respectively, the MRE estimator of W is η̂ − ξ̂.

1.19 Suppose the X's and Y's are distributed as in Problem 1.17 but with m ≠ n. Determine the MRE estimator of W when the loss is squared error.

1.20 For any density f of X = (X₁, ..., X_n), the probability of the set $A = \{x : 0 < \int_{-\infty}^{\infty} f(x - u)\,du < \infty\}$ is 1. [Hint: With probability 1, the integral in question is equal to the marginal density of Y = (Y₁, ..., Y_{n−1}) where Yᵢ = Xᵢ − X_n, and P[0 < g(Y) < ∞] = 1 holds for any probability density g.]

1.21 Under the assumptions of Theorem 1.10, if there exists an equivariant estimator δ₀ of ξ with finite expected squared error, show that (a) E₀(|X_n| | Y) < ∞ with probability 1; (b) the set $B = \{x : \int |u| f(x - u)\,du < \infty\}$ has probability 1. [Hint: (a) E|δ₀| < ∞ implies E(|δ₀| | Y) < ∞ with probability 1, and hence E[|δ₀ − v(Y)| | Y] < ∞ with probability 1 for any v(Y). (b) P(B) = 1 if and only if E(|X_n| | Y) < ∞ with probability 1.]

1.22 Let δ₀ be location equivariant and let U be the class of all functions u satisfying (1.20) and such that u(X) is an unbiased estimator of zero. Then, δ₀ is MRE if and only if cov[δ₀, u(X)] = 0 for all u ∈ U. (Note the analogy with Theorem 2.1.7.) [Communicated by P. Bickel.]

Section 2

2.1 Show that the class G(C) is a group.

2.2 In Example 2.2(ii), show that the transformations x′ = −x together with the identity transformation form a group.

2.3 Let {gX, g ∈ G} be a group of transformations that leave the model (2.1) invariant. If the distributions $P_\theta$, θ ∈ Ω, are distinct, show that the induced transformations ḡ are 1 : 1 transformations of Ω. [Hint: To show that ḡθ₁ = ḡθ₂ implies θ₁ = θ₂, use the fact that $P_{\theta_1}(A) = P_{\theta_2}(A)$ for all A implies θ₁ = θ₂.]

2.4 Under the assumptions of Problem 2.3, show that (a) the transformations ḡ satisfy $\overline{g_2 g_1} = \bar g_2 \cdot \bar g_1$ and $(\bar g)^{-1} = \overline{(g^{-1})}$; (b) the transformations ḡ corresponding to g ∈ G form a group; (c) establish (2.3) and (2.4).

2.5 Show that a loss function satisfies (2.9) if and only if it is of the form (2.10).

2.6 (a) The transformations g* defined by (2.12) satisfy (g₂g₁)* = g₂* · g₁* and (g*)⁻¹ = (g⁻¹)*. (b) If G is a group leaving (2.1) invariant and G* = {g*, g ∈ G}, then G* is a group.

2.7 Let X be distributed as N(ξ, σ²), −∞ < ξ < ∞, 0 < σ, and let h(ξ, σ) = σ². The problem is invariant under the transformations x′ = ax + c, 0 < a, −∞ < c < ∞. Show that the only equivariant estimator is δ(X) ≡ 0.

2.8 Show that: (a) If (2.11) holds, the transformations g* defined by (2.12) are 1 : 1 from H onto itself. (b) If L(θ, d) = L(θ, d′) for all θ implies d = d′, then g* defined by (2.14) is unique and is a 1 : 1 transformation from D onto itself.

2.9 If θ is the true temperature in degrees Celsius, then θ′ = ḡθ = θ + 273 is the true temperature in degrees Kelvin. Given an observation X in degrees Celsius: (a) Show that an estimator δ(X) is functionally equivariant if it satisfies δ(x) + a = δ(x + a) for all a. (b) Suppose our estimator is δ(x) = (ax + bθ₀)/(a + b), where x is the observed temperature in degrees Celsius, θ₀ is a prior guess at the temperature, and a and b are constants.
Show that for a constant K, δ(x + K) ≠ δ(x) + K, so δ does not satisfy the principle of functional equivariance. (c) Show that the estimators of part (b) will not satisfy the principle of formal invariance.

2.10 To illustrate the difference between functional equivariance and formal invariance, consider the following. To estimate the amount of electric power obtainable from a stream, one could use the estimate δ(x) = c min{100, x − 20}, where x is the stream flow in m³/sec, 100 m³/sec is the capacity of the pipe leading to the turbine, and 20 m³/sec is the flow reduction necessary to avoid harming the trout. The constant c, in kilowatts/(m³/sec), converts the flow to a kilowatt estimate. (a) If measurements were, instead, made in liters and watts, so that g(x) = 1000x and ḡ(θ) = 1000θ, show that functional equivariance leads to the estimate ḡ(δ(x)) = c min{10⁵, g(x) − 20,000}. (b) The principle of formal invariance leads to the estimate δ(g(x)). Show that this estimator is not a reasonable estimate of wattage. (Communicated by L. LeCam.)

2.11 In an invariant probability model, write X = (T, W), where T is sufficient for θ and W is ancillary. (a) If the group operation is transitive, show that any invariant statistic must be ancillary. (b) What can you say about the invariance of an ancillary statistic?

2.12 In an invariant estimation problem, write X = (T, W), where T is sufficient for θ and W is ancillary. If the group of transformations is transitive, show: (a) The best equivariant estimator δ* is the solution to $\min_d E_\theta[L(\theta, d(x)) \mid W = w]$. (b) If e is the identity element of the group (g⁻¹g = e), then δ* = δ*(t, w) can be found by solving, for each w, $\min_d E_e\{L[e, d(T, w)] \mid W = w\}$.

2.13 For the situation of Example 2.11: (a) Show that the class of transformations is a group. (b) Show that equivariant estimators must satisfy δ(n − x) = 1 − δ(x). (c) Show that, using an invariant loss, the risk of an equivariant estimator is symmetric about p = 1/2.

2.14 For the situation of Example 2.12: (a) Show that the class of transformations is a group. (b) Show that estimators of the form φ(x̄/s²)s², where x̄ = (1/n)Σxᵢ and s² = Σ(xᵢ − x̄)², are equivariant, where φ is an arbitrary function. (c) Show that, using an invariant loss function, the risk of an equivariant estimator is a function only of τ = µ/σ.

2.15 Prove Corollary 2.13.

2.16 (a) If g is the transformation (2.20), determine ḡ. (b) In Example 2.12, show that (2.22) is not only sufficient for (2.14) but also necessary.

2.17 (a) In Example 2.12, determine the smallest group G containing both G₁ and G₂. (b) Show that the only estimator that is invariant under G is δ(X, Y) ≡ 0.

2.18 If δ(X) is an equivariant estimator of h(θ) under a group G, then so is g*δ(X), with g* defined by (2.12) and (2.13), provided G* is commutative.

2.19 Show that: (a) In Example 2.14(i), X is not risk-unbiased. (b) The group of transformations ax + c of the real line (0 < a, −∞ < c < ∞) is not commutative.

2.20 In Example 2.14, determine the totality of equivariant estimators of W under the smallest group G containing G₁ and G₂.

2.21 Let θ be real-valued and h strictly increasing, so that (2.11) is vacuously satisfied. If L(θ, d) is the loss resulting from estimating θ by d, suppose that the loss resulting from estimating θ′ = h(θ) by d′ = h(d) is M(θ′, d′) = L[θ, h⁻¹(d′)]. Show that: (a) If the problem of estimating θ with loss function L is invariant under G, then so is the problem of estimating h(θ) with loss function M.
(b) If δ is equivariant under G for estimating θ with loss function L, show that h[δ(X)] is equivariant for estimating h(θ) with loss function M. (c) If δ is MRE for θ with L, then h[δ(X)] is MRE for h(θ) with M.

2.22 If δ(X) is MRE for estimating ξ in Example 2.2(i) with loss function ρ(d − ξ), state an optimum property of $e^{\delta(X)}$ as an estimator of $e^{\xi}$.

2.23 Let $X_{ij}$, j = 1, ..., nᵢ, i = 1, ..., s, and W be distributed according to a density of the form
$$\Big[\prod_{i=1}^{s} f_i(\mathbf{x}_i - \xi_i)\Big]\, h(w)$$
where $\mathbf{x}_i - \xi_i = (x_{i1} - \xi_i, \dots, x_{in_i} - \xi_i)$, and consider the problem of estimating θ = Σcᵢξᵢ with loss function L(ξ₁, ..., ξ_s; d) = ρ(d − θ). Show that: (a) This problem remains invariant under the transformations $X'_{ij} = X_{ij} + a_i$, $\xi'_i = \xi_i + a_i$, θ′ = θ + Σaᵢcᵢ, d′ = d + Σaᵢcᵢ. (b) An estimator δ of θ is equivariant under these transformations if δ(x₁ + a₁, ..., x_s + a_s, w) = δ(x₁, ..., x_s, w) + Σaᵢcᵢ.

2.24 Generalize Theorem 1.4 to the situation of Problem 2.23.

2.25 If δ₀ is any equivariant estimator of θ in Problem 2.23, and if $y_i = (x_{i1} - x_{in_i}, x_{i2} - x_{in_i}, \dots, x_{i,n_i-1} - x_{in_i})$, show that the most general equivariant estimator of θ is of the form δ(x₁, ..., x_s, w) = δ₀(x₁, ..., x_s, w) − v(y₁, ..., y_s, w).

2.26 (a) Generalize Theorem 1.10 and Corollary 1.12 to the situation of Problems 2.23 and 2.25. (b) Show that the MRE estimators of (a) can be chosen to be independent of W.

2.27 Suppose that the variables $X_{ij}$ in Problem 2.23 are independently distributed as N(ξᵢ, σ²), where σ is known. Show that: (a) The MRE estimator of θ is then ΣcᵢX̄ᵢ − v*, where X̄ᵢ = (X_{i1} + ··· + X_{in_i})/nᵢ and where v* minimizes (1.24) with X = ΣcᵢX̄ᵢ. (b) If ρ is convex and even, the MRE estimator of θ is ΣcᵢX̄ᵢ. (c) The results of (a) and (b) remain valid when σ is unknown and the distribution of W depends on σ (but not on the ξ's).

2.28 Show that the transformation of Example 2.11 and the identity transformation are the only transformations leaving the family of binomial distributions invariant.

Section 3

3.1 (a) A loss function L satisfies (3.4) if and only if it satisfies (3.5) for some γ. (b) The sample standard deviation, the mean deviation, the range, and the MLE of τ all satisfy (3.7) with r = 1.

3.2 Show that if δ(X) is scale invariant, so is δ*(X), defined to be δ(X) if δ(X) ≥ 0 and 0 otherwise, and that the risk of δ* is no larger than that of δ for any loss function (3.5) for which γ(v) is nonincreasing for v ≤ 0.

3.3 Show that the bias of any equivariant estimator of $\tau^r$ in (3.1) is proportional to $\tau^r$.

3.4 A necessary and sufficient condition for δ to satisfy (3.7) is that it is of the form δ = δ₀/u with δ₀ and u satisfying (3.7) and (3.9), respectively.

3.5 The function ρ of Corollary 3.4 with γ defined in Example 3.5 is strictly convex for p ≥ 1.

3.6 Let X be a positive random variable. Show that: (a) If EX² < ∞, then the value of c that minimizes E(X/c − 1)² is c = EX²/EX. (b) If Y has the gamma distribution Γ(α, 1), then the value of w minimizing E[(Y/w) − 1]² is w = α + 1.

3.7 Let X be a positive random variable. (a) If EX < ∞, then the value of c that minimizes E|X/c − 1| is a solution of E[X I(X ≤ c)] = E[X I(X ≥ c)], which is known as a scale median. (b) Let Y have a χ²-distribution with f degrees of freedom. Then, the minimizing value is w = f + 2. [Hint: (b) Example 1.5.9.]

3.8 Under the assumptions of Problem 3.7(a), the set of scale medians of X is an interval. If f(x) > 0 for all x > 0, the scale median of X is unique.
3.9 Determine the scale median of X when the distribution of X is (a) U(0, θ) and (b) E(0, b).

3.10 Under the assumptions of Theorem 3.3: (a) Show that the MRE estimator under the loss (3.13) is given by (3.14). (b) Show that the MRE estimator under the loss (3.15) is given by (3.11), where w*(z) is any scale median of δ₀(x) under the distribution of X|Z. [Hint: Problem 3.7.]

3.11 Let X₁, ..., X_n be iid according to the uniform distribution U(0, θ). (a) Show that the complete sufficient statistic X₍ₙ₎ is independent of Z [given by Equation (3.8)]. (b) For the loss function (3.13) with r = 1, the MRE estimator of θ is X₍ₙ₎/w, with w = (n + 1)/(n + 2). (c) For the loss function (3.15) with r = 1, the MRE estimator of θ is $2^{1/(n+1)} X_{(n)}$.

3.12 Show that the MRE estimators of Problem 3.11, parts (b) and (c), are risk-unbiased but not mean-unbiased.

3.13 In Example 3.7, find the MRE estimator of var(X₁) when the loss function is (a) (3.13) and (b) (3.15) with r = 2.

3.14 Let X₁, ..., X_n be iid according to the exponential distribution E(0, τ). Determine the MRE estimator of τ for the loss functions (a) (3.13) and (b) (3.15) with r = 1.

3.15 In the preceding problem, find the MRE estimator of var(X₁) when the loss function is (3.13) with r = 2.

3.16 Prove formula (3.19).

3.17 Let X₁, ..., X_n be iid, each with density (2/τ)[1 − (x/τ)], 0 < x < τ. Determine the MRE estimator (3.19) of $\tau^r$ when (a) n = 2, (b) n = 3, and (c) n = 4.

3.18 In the preceding problem, find var(X₁) and its MRE estimator for n = 2, 3, 4 when the loss function is (3.13) with r = 2.

3.19 (a) Show that the loss function $L_s$ of (3.20) is convex and invariant under scale transformations. (b) Prove Corollary 3.8. (c) Show that for the situation of Example 3.7, if the loss function is $L_s$, then the UMVU estimator is also the MRE estimator.

3.20 Let X₁, ..., X_n be iid from the distribution N(θ, θ²). (a) Show that this probability model is closed under scale transformations. (b) Show that the MLE is equivariant. [The MRE estimator is obtainable from Theorem 3.3 but does not have a simple form. See Eaton 1989 and Robert 1991, 1994a for more details. Gleser and Healy (1976) consider a similar problem using squared error loss.]

3.21 (a) If δ₀ satisfies (3.7) and cδ₀ satisfies (3.22), show that cδ₀ cannot be unbiased in the sense of satisfying E(cδ₀) ≡ $\tau^r$. (b) Prove the statement made in Example 3.10.

3.22 Verify the estimator δ* of Example 3.12.

3.23 If G is a group, a subset G₀ of G is a subgroup of G if G₀ is a group under the group operation of G. (a) Show that the scale group (3.32) is a subgroup of the location-scale group (3.24). (b) Show that any estimator of $\tau^r$ that is equivariant under (3.24) is also equivariant under (3.32); hence, in a problem that is equivariant under (3.32), the best scale equivariant estimator is at least as good as the best location-scale equivariant estimator. (c) Explain why, in general, if G₀ is a subgroup of G, one can expect equivariance under G₀ to produce better estimators than equivariance under G.

3.24 For the situation of Example 3.13: (a) Show that an estimator is equivariant if and only if it can be written in the form φ(x̄/s)s². (b) Show that the risk of an equivariant estimator is a function only of ξ/τ.

3.25 If X₁, ..., X_n are iid according to E(ξ, τ), determine the MRE estimator of τ for the loss functions (a) (3.13) and (b) (3.15) with r = 1, and the MRE estimator of ξ for the loss function (3.43).
3.26 Show that δ satisfies (3.35) if and only if it satisfies (3.40) and (3.41).

3.27 Determine the bias of the estimator δ*(X) of Example 3.18.

3.28 Lele (1993) uses invariance in the study of morphometrics, the quantitative analysis of biological forms. In the analysis of a biological object, one measures data X on k specific points called landmarks, where each landmark is typically two- or three-dimensional. Here we will assume that the landmark is two-dimensional (as is a picture), so X is a k × 2 matrix. A model for X is X = (M + Y)H + t, where $M_{k \times 2}$ is the mean form of the object, t is a fixed translation vector, and H is a 2 × 2 matrix that rotates the vector X. The random variable $Y_{k \times 2}$ is a matrix normal random variable; that is, each column of Y is distributed as N(0, Σ_k), a k-variate normal random variable, and each row is distributed as N(0, Σ_d), a bivariate normal random variable. (a) Show that X is a matrix normal random variable with columns distributed as $N_k(MH_j, \Sigma_k)$ and rows distributed as $N_2(M_iH, H'\Sigma_d H)$, where $H_j$ is the jth column of H and $M_i$ is the ith row of M. (b) For estimation of the shape of a biological form, the parameters of interest are M, Σ_k, and Σ_d, with t and H being nuisance parameters. Show that, even if there were no nuisance parameters, Σ_k or Σ_d is not identifiable. (c) It is usually assumed that the (1, 1) element of either Σ_k or Σ_d is equal to 1. Show that this makes the model identifiable. (d) The form of a biological object is considered an inherent property of the form (a baby has the same form as an adult) and should not be affected by rotations, reflections, or translations. This is summarized by the transformation X′ = XP + b, where P is a 2 × 2 orthogonal matrix (P′P = I) and b is a k × 1 vector. (See Note 9.3 for a similar group.) Suppose we observe n landmarks X₁, ..., X_n. Define the Euclidean distance between two matrices A and B to be $D(A, B) = \big[\sum_{ij}(a_{ij} - b_{ij})^2\big]^{1/2}$, and let the n × n matrix F have (i, j)th element $f_{ij} = D(X_i, X_j)$. Show that F is invariant under this group, that is, F(X′) = F(X). (Lele (1993) notes that F is, in fact, maximal invariant.)

3.29 In (9.1), show that the group X′ = AX + b induces the group µ′ = Aµ + b, Σ′ = AΣA′.

3.30 For the situation of Note 9.3, consider the equivariant estimation of µ. (a) Show that an invariant loss is of the form L(µ, Σ, δ) = L((µ − δ)′Σ⁻¹(µ − δ)). (b) The equivariant estimators are of the form X̄ + c, with c = 0 yielding the MRE estimator.

3.31 For X₁, ..., X_n iid as $N_p(\mu, \Sigma)$, the cross-products matrix S is defined by
$$S = \{S_{ij}\} = \Big\{\sum_{k=1}^{n} (x_{ik} - \bar x_i)(x_{jk} - \bar x_j)\Big\}$$
where $\bar x_i = (1/n)\sum_{k=1}^n x_{ik}$. Show that, for Σ = I, (a) $E_I[\mathrm{tr}\,S] = E_I \sum_{i=1}^p \sum_{k=1}^n (X_{ik} - \bar X_i)^2 = p(n - 1)$; (b) $E_I[\mathrm{tr}\,S^2] = E_I \sum_{i=1}^p \sum_{j=1}^p \big\{\sum_{k=1}^n (X_{ik} - \bar X_i)(X_{jk} - \bar X_j)\big\}^2 = (n - 1)p(n + p)$. [These are straightforward, although somewhat tedious, calculations involving the chi-squared distribution. Alternatively, one can use the fact that S has a Wishart distribution (see, for example, Anderson 1984) and use the properties of that distribution.]

3.32 For the situation of Note 9.3: (a) Show that equivariant estimators of Σ are of the form cS, where S is the cross-products matrix and c is a constant. (b) Show that $E_I\{\mathrm{tr}[(cS - I)'(cS - I)]\}$ is minimized by $c = E_I\mathrm{tr}\,S / E_I\mathrm{tr}\,S^2$. [Hint: For part (a), use a generalization of Theorem 3.3; see the argument leading to (3.29), and Example 3.11.]

3.33 For the estimation of Σ in Note 9.3: (a) Show that the loss function in (9.2) is invariant.
(b) Show that Stein's loss, $L(\delta, \Sigma) = \mathrm{tr}(\delta\Sigma^{-1}) - \log|\delta\Sigma^{-1}| - p$, where |A| is the determinant of A, is an invariant loss with MRE estimator S/n. (c) Show that a loss L(δ, Σ) is an invariant loss if and only if it can be written as a function of the eigenvalues of δΣ⁻¹. [The univariate version of Stein's loss was seen in (3.20) and Example 3.9. Stein (1956b) and James and Stein (1961) used the multivariate version of the loss. See also Dey and Srinivasan 1985 and Dey et al. 1987.]

3.34 Let X₁, ..., X_m and Y₁, ..., Y_n have joint density
$$\frac{1}{\sigma^m \tau^n}\, f\!\left(\frac{x_1}{\sigma}, \dots, \frac{x_m}{\sigma};\ \frac{y_1}{\tau}, \dots, \frac{y_n}{\tau}\right),$$
and consider the problem of estimating $\theta = (\tau/\sigma)^r$ with loss function L(σ, τ; d) = γ(d/θ). This problem remains invariant under the transformations $X'_i = aX_i$, $Y'_j = bY_j$, σ′ = aσ, τ′ = bτ, and $d' = (b/a)^r d$ (a, b > 0), and an estimator δ is equivariant under these transformations if $\delta(a\mathbf{x}, b\mathbf{y}) = (b/a)^r \delta(\mathbf{x}, \mathbf{y})$. Generalize Theorems 3.1 and 3.3, Corollary 3.4, and (3.19) to the present situation.

3.35 Under the assumptions of the preceding problem and with loss function (d − θ)²/θ², determine the MRE estimator of θ in the following situations: (a) m = n = 1, and X and Y are independently distributed as Γ(α, σ²) and Γ(β, τ²), respectively (α, β known). (b) X₁, ..., X_m and Y₁, ..., Y_n are independently distributed as N(0, σ²) and N(0, τ²), respectively. (c) X₁, ..., X_m and Y₁, ..., Y_n are independently distributed as U(0, σ) and U(0, τ), respectively.

3.36 Generalize the results of Problem 3.34 to the case that the joint density of X and Y is
$$\frac{1}{\sigma^m \tau^n}\, f\!\left(\frac{x_1 - \xi}{\sigma}, \dots, \frac{x_m - \xi}{\sigma};\ \frac{y_1 - \eta}{\tau}, \dots, \frac{y_n - \eta}{\tau}\right).$$

3.37 Obtain the MRE estimator of $\theta = (\tau/\sigma)^r$ with the loss function of Problem 3.35 when the density of Problem 3.36 specializes to
$$\frac{1}{\sigma^m \tau^n}\, \prod_i f\!\left(\frac{x_i - \xi}{\sigma}\right) \prod_j f\!\left(\frac{y_j - \eta}{\tau}\right)$$
and f is (a) normal, (b) exponential, or (c) uniform.

3.38 In the model of Problem 3.37 with τ = σ, discuss the equivariant estimation of W = η − ξ with loss function (d − W)²/σ² and obtain explicit results for the three distributions of that problem.

3.39 Suppose in Problem 3.37 that an MRE estimator δ* of W = η − ξ under the transformations $X'_i = a + bX_i$ and $Y'_j = a + bY_j$, b > 0, exists when the ratio τ/σ = c is known and that δ* is independent of c. Show that δ* is MRE also when σ and τ are completely unknown, despite the fact that the induced group of transformations of the parameter space is not transitive.

3.40 Let $f(t) = \frac{1}{\pi}\frac{1}{1 + t^2}$ be the Cauchy density, and consider the location-scale family
$$\mathcal{F} = \left\{\frac{1}{\sigma} f\!\left(\frac{x - \mu}{\sigma}\right) : -\infty < \mu < \infty,\ 0 < \sigma < \infty\right\}.$$
(a) Show that this probability model is invariant under the transformation x′ = 1/x. (b) If µ′ = µ/(µ² + σ²) and σ′ = σ/(µ² + σ²), show that $P_{\mu,\sigma}(X \in A) = P_{\mu',\sigma'}(X' \in A)$; that is, if X has the Cauchy density with location parameter µ and scale parameter σ, then X′ has the Cauchy density with location parameter µ/(µ² + σ²) and scale parameter σ/(µ² + σ²). (c) Explain why this group of transformations of the sample and parameter spaces does not lead to an invariant estimation problem. [See McCullagh (1992) for a full development of this model, where it is suggested that the complex plane provides a more appropriate parameter space.]

3.41 Let (Xᵢ, Yᵢ), i = 1, ..., n, be distributed as independent bivariate normal random variables with mean (µ, 0) and covariance matrix
$$\begin{pmatrix} \sigma_{11} & \sigma_{12} \\ \sigma_{21} & \sigma_{22} \end{pmatrix}.$$
(a) Show that the probability model is invariant under the transformations (x′, y′) = (a + bx, by), (µ′, σ′₁₁, σ′₁₂, σ′₂₂) = (a + bµ, b²σ₁₁, b²σ₁₂, b²σ₂₂).
(b) Using the loss function L(µ, d) = (µ − d)²/σ₁₁, show that this is an invariant estimation problem and that equivariant estimators must be of the form δ = x̄ + ψ(u₁, u₂, u₃)ȳ, where u₁ = Σ(xᵢ − x̄)²/ȳ², u₂ = Σ(yᵢ − ȳ)²/ȳ², and u₃ = Σ(xᵢ − x̄)(yᵢ − ȳ)/ȳ². (c) Show that if δ has a finite second moment, then it is unbiased for estimating µ. Its risk function is a function of σ₁₁/σ₂₂ and σ₁₂/σ₂₂. (d) If the ratio σ₁₂/σ₂₂ is known, show that X̄ − (σ₁₂/σ₂₂)Ȳ is the MRE estimator of µ. [This problem illustrates the technique of covariance adjustment. See Berry 1987.]

3.42 Suppose we let X₁, ..., X_n be a sample from an exponential distribution $f(x \mid \mu, \sigma) = (1/\sigma)e^{-(x - \mu)/\sigma} I(x \ge \mu)$. The exponential distribution is useful in reliability theory, and a parameter of interest is often a quantile, that is, a parameter of the form µ + bσ, where b is known. Show that, under quadratic loss, the MRE estimator of µ + bσ is δ₀ = x₍₁₎ + (b − 1/n)(x̄ − x₍₁₎), where x₍₁₎ = minᵢ xᵢ. [Rukhin and Strawderman (1982) show that δ₀ is inadmissible and exhibit a class of improved estimators.]

Section 4

4.1 (a) Suppose Xᵢ ∼ N(ξᵢ, σ²) with ξᵢ = α + βtᵢ. If the first column of the matrix C leading to the canonical form (4.7) is (1/√n, ..., 1/√n)′, find the second column of C. (b) If Xᵢ ∼ N(ξᵢ, σ²) with ξᵢ = α + βtᵢ + γtᵢ², and the first two columns of C are those of (a), find the third column under the simplifying assumptions Σtᵢ = 0, Σtᵢ² = 1. [Note: The orthogonal polynomials that are progressively built up in this way are frequently used to simplify regression analysis.]

4.2 Write out explicit expressions for the transformations (4.10) when Ω is given by (a) ξᵢ = α + βtᵢ and (b) ξᵢ = α + βtᵢ + γtᵢ².

4.3 Use Problem 3.10 to prove (iii) of Theorem 4.3.

4.4 (a) In Example 4.7, determine α̂, β̂, and hence ξ̂ᵢ by minimizing Σ(Xᵢ − α − βtᵢ)². (b) Verify the expressions (4.12) for α and β, and the corresponding expressions for α̂ and β̂.

4.5 In Example 4.2, find the UMVU estimators of α, β, γ, and σ² when Σtᵢ = 0 and Σtᵢ² = 1.

4.6 Let $X_{ij}$ be independent N($\xi_{ij}$, σ²) with $\xi_{ij} = \alpha_i + \beta t_{ij}$. Find the UMVU estimators of the αᵢ and β.

4.7 (a) In Example 4.9, show that the vectors of the coefficients in the α̂ᵢ are not orthogonal to the vector of the coefficients of µ̂. (b) Show that the conclusion of (a) is reversed if α̂ᵢ and µ̂ are replaced by $\hat{\hat\alpha}_i$ and $\hat{\hat\mu}$.

4.8 In Example 4.9, find the UMVU estimator of µ when the αᵢ are known to be zero, and compare it with µ̂.

4.9 The coefficient vectors of the $X_{ijk}$ given by (4.32) for µ̂, α̂ᵢ, and β̂ⱼ are orthogonal to the coefficient vectors for the γ̂ᵢⱼ given by (4.33).

4.10 In the model defined by (4.26) and (4.27), determine the UMVU estimators of αᵢ, βⱼ, and σ² under the assumption that the γᵢⱼ are known to be zero.

4.11 (a) In Example 4.11, show that
$$\sum (X_{ijk} - \mu - \alpha_i - \beta_j - \gamma_{ij})^2 = S^2 + S_\mu^2 + S_\alpha^2 + S_\beta^2 + S_\gamma^2$$
where $S^2 = \sum(X_{ijk} - \bar X_{ij\cdot})^2$, $S_\mu^2 = IJm(\bar X_{\cdot\cdot\cdot} - \mu)^2$, $S_\alpha^2 = Jm\sum(\bar X_{i\cdot\cdot} - \bar X_{\cdot\cdot\cdot} - \alpha_i)^2$, and $S_\beta^2$, $S_\gamma^2$ are defined analogously. (b) Use the decomposition of (a) to show that the least squares estimators of µ, αᵢ, ... are given by (4.32) and (4.33). (c) Show that the error sum of squares S² is equal to $\sum(X_{ijk} - \hat\xi_{ij})^2$ and hence, in the canonical form, to $\sum_{j=s+1}^{n} Y_j^2$.

4.12 (a) Show how the decomposition in Problem 4.11(a) must be modified when it is known that the γᵢⱼ are zero. (b) Use the decomposition of (a) to solve Problem 4.10.
4.13 Let $X_{ijk}$ (i = 1, ..., I; j = 1, ..., J; k = 1, ..., K) be N($\xi_{ijk}$, σ²) with $\xi_{ijk} = \mu + \alpha_i + \beta_j + \gamma_k$, where Σαᵢ = Σβⱼ = Σγₖ = 0. Express µ, αᵢ, βⱼ, and γₖ in terms of the ξ's and find their UMVU estimators. Viewed as a special case of (4.4), what is the value of s?

4.14 Extend the results of the preceding problem to the model $\xi_{ijk} = \mu + \alpha_i + \beta_j + \gamma_k + \delta_{ij} + \varepsilon_{ik} + \lambda_{jk}$, where $\sum_i \delta_{ij} = \sum_j \delta_{ij} = \sum_i \varepsilon_{ik} = \sum_k \varepsilon_{ik} = \sum_j \lambda_{jk} = \sum_k \lambda_{jk} = 0$.

4.15 In the preceding problem, if it is known that the λ's are zero, determine whether the UMVU estimators of the remaining parameters remain unchanged.

4.16 (a) Show that under assumptions (4.35), if ξ = θA, then the least squares estimate of θ is xA′(AA′)⁻¹. (b) If (X, A) is multivariate normal with all parameters unknown, show that the least squares estimator of part (a) is a function of the complete sufficient statistic and, hence, prove part (a) of Theorem 4.14.

4.17 A generalization of the order statistics to vectors is given by the following definition.

Definition 8.1 The cⱼ-order statistics of a sample of vectors are the vectors arranged in increasing order according to their jth components.

Let Xᵢ, i = 1, ..., n, be an iid sample of p × 1 vectors, and let X = (X₁, ..., X_n) be a p × n matrix. (a) If the distribution of Xᵢ is completely unknown, show that, for any j = 1, ..., p, the cⱼ-order statistics of (X₁, ..., X_n) are complete sufficient. (That is, the vectors X₁, ..., X_n are ordered according to their jth coordinate.) (b) Let $Y_{1 \times n}$ be a random variable with unknown distribution (possibly different from that of Xᵢ). Form the (p + 1) × n matrix $\binom{x}{y}$, and for any j = 1, ..., p, calculate the cⱼ-order statistics based on the columns of $\binom{x}{y}$. Show that these cⱼ-order statistics are sufficient. [Hint: See Problem 1.6.33 and also TSH2, Chapter 4, Problem 12.] (c) Use parts (a) and (b) to prove Theorem 4.14(b). [Hint: Part (b) implies that only a symmetric function of (X, A) need be considered, and part (a) implies that an unconditionally unbiased estimator must also be conditionally unbiased. Theorem 4.12 then applies.]

4.18 The proof of Theorem 4.14(c) is based on two results. Establish that: (a) For large values of θ, the unconditional variance of a linear unbiased estimator will be greater than that of the least squares estimator. (b) For θ = 0, the variance of XA′(AA′)⁻¹ is greater than that of XA′[E(AA′)]⁻¹. [You may use the fact that E[(AA′)⁻¹] − [E(AA′)]⁻¹ is a positive definite matrix (Marshall and Olkin 1979; Shaffer 1991). This is a multivariate extension of Jensen's inequality.] (c) Parts (a) and (b) imply that no best linear unbiased estimator of Σγᵢξᵢ exists if E(AA′) is known.

4.19 (a) Under the assumptions of Example 4.15, find the variance of ΣλᵢSᵢ². (b) Show that the variance of (a) is minimized by the values stated in the example.

4.20 In the linear model (4.4), a function Σcᵢξᵢ with Σcᵢ = 0 is called a contrast. Show that a linear function Σdᵢξᵢ is a contrast if and only if it is translation invariant, that is, satisfies Σdᵢ(ξᵢ + a) = Σdᵢξᵢ for all a, and hence if and only if it is a function of the differences ξᵢ − ξⱼ.

4.21 Determine which of the following are contrasts: (a) The regression coefficients α, β, or γ of (4.2). (b) The parameters µ, αᵢ, βⱼ, or γᵢⱼ of (4.27). (c) The parameters µ or αᵢ of (4.23) and (4.24).

Section 5

5.1 In Example 5.1: (a) Show that the joint density of the $Z_{ij}$ is given by (5.2). (b) Obtain the joint multivariate normal density of the $X_{ij}$ directly by evaluating their covariance matrix and then inverting it.
[Hint: The covariance matrix of X₁₁, ..., X₁ₙ; ...; X_{s1}, ..., X_{sn} has the block-diagonal form
$$\Sigma = \begin{pmatrix} \Sigma_1 & 0 & \cdots & 0 \\ 0 & \Sigma_2 & \cdots & 0 \\ \vdots & & \ddots & \vdots \\ 0 & 0 & \cdots & \Sigma_s \end{pmatrix}$$
where each Σᵢ is an n × n matrix with a value aᵢ for all diagonal elements and a value bᵢ for all off-diagonal elements. For the inversion of Σᵢ, see the next problem.]

5.2 Let A = ($a_{ij}$) be a nonsingular n × n matrix with $a_{ii} = a$ and $a_{ij} = b$ for all i ≠ j. Determine the elements of A⁻¹. [Hint: Assume that A⁻¹ = ($c_{ij}$) with $c_{ii} = c$ and $c_{ij} = d$ for all i ≠ j, calculate c and d as the solutions of the two linear equations $\sum_j a_{1j}c_{j1} = 1$ and $\sum_j a_{1j}c_{j2} = 0$, and check the product AC.]

5.3 Verify the UMVU estimator of σ²_A/σ² given in Example 5.1.

5.4 Obtain the joint density of the $X_{ij}$ in Example 5.1 in the unbalanced case in which j = 1, ..., nᵢ, with the nᵢ not all equal, and determine a minimal set of sufficient statistics (which depends on the number of distinct values of nᵢ).

5.5 In the balanced one-way layout of Example 5.1, determine lim P(σ̂²_A < 0) as n → ∞ for σ²_A/σ² = 0, 0.2, 0.5, 1, and s = 3, 4, 5, 6. [Hint: The limit of the probability can be expressed as a probability for a $\chi^2_{s-1}$ variable.]

5.6 In the preceding problem, calculate values of P(σ̂²_A < 0) for finite n. When would you expect negative estimates to be a problem? [The probability P(σ̂²_A < 0), which involves an F random variable, can also be expressed using the incomplete beta function, whose values are readily available through either extensive tables or computer packages. Searle et al. (1992, Section 3.5d) look at this problem in some detail.]

5.7 The following problem shows that in Examples 5.1–5.3, every unbiased estimator of the variance components (except σ²) takes on negative values. (For some related results, see Pukelsheim 1981.) Let X have distribution P ∈ 𝒫 and suppose that T is a complete sufficient statistic for 𝒫. If g(P) is any U-estimable function defined over 𝒫 and its UMVU estimator η(T) takes on negative values with probability > 0, then show that this is true of every unbiased estimator of g(P). [Hint: For any unbiased estimator δ, recall that E(δ|T) = η(T).]

5.8 Modify the car illustration of Example 5.1 so that it illustrates (5.5).

5.9 In Example 5.2, define a linear transformation of the $X_{ijk}$ leading to the joint distribution of the $Z_{ijk}$ stated in connection with (5.6), and verify the complete sufficient statistics (5.7).

5.10 In Example 5.2, obtain the UMVU estimators of the variance components σ²_A, σ²_B, and σ² when σ²_C = 0, and compare them to those obtained without this assumption.

5.11 For the $X_{ijk}$ given in (5.8), determine a transformation taking them to variables $Z_{ijk}$ with the distribution stated in Example 5.3.

5.12 In Example 5.3, obtain the UMVU estimators of the variance components σ²_A, σ²_B, and σ².

5.13 In Example 5.3, obtain the UMVU estimators of σ²_A and σ² when σ²_B = 0, so that the B terms in (5.8) drop out, and compare them with those of Problem 5.12.

5.14 In Example 5.4: (a) Give a transformation taking the variables $X_{ijk}$ into the $W_{ijk}$ with density (5.11). (b) Obtain the UMVU estimators of µ, αᵢ, σ²_B, and σ².

5.15 A general class of models containing linear models of Types I and II and mixed models as special cases assumes that the 1 × n observation vector X is normally distributed with mean θA as in (4.13) and with covariance matrix $\sum_{i=1}^{m} \gamma_i V_i$, where the γ's are the components of variance and the Vᵢ's are known symmetric positive semidefinite n × n matrices.
Show that the following models are of this type, and in each case specify the γ's and V's: (a) (5.1); (b) (5.5); (c) (5.5) without the terms $C_{ij}$; (d) (5.8); (e) (5.10).

5.16 Consider a nested three-way layout with $X_{ijkl} = \mu + a_i + b_{ij} + c_{ijk} + U_{ijkl}$ (i = 1, ..., I; j = 1, ..., J; k = 1, ..., K; l = 1, ..., n) in the versions (a) $a_i = \alpha_i$, $b_{ij} = \beta_{ij}$, $c_{ijk} = \gamma_{ijk}$; (b) $a_i = \alpha_i$, $b_{ij} = \beta_{ij}$, $c_{ijk} = C_{ijk}$; (c) $a_i = \alpha_i$, $b_{ij} = B_{ij}$, $c_{ijk} = C_{ijk}$; (d) $a_i = A_i$, $b_{ij} = B_{ij}$, $c_{ijk} = C_{ijk}$; where the α's, β's, and γ's are unknown constants defined uniquely by the usual conventions, and the A's, B's, C's, and U's are unobservable random variables, independently normally distributed with means zero and with variances σ²_A, σ²_B, σ²_C, and σ². In each case, transform the $X_{ijkl}$ to independent variables $Z_{ijkl}$ and obtain the UMVU estimators of the unknown parameters.

5.17 For the situation of Example 5.5, relax the assumption of normality to assume only that Aᵢ and $U_{ij}$ have zero means and finite second moments. Show that among all linear estimators (of the form $\sum c_{ij}x_{ij}$, $c_{ij}$ known), the UMVU estimator of µ + αᵢ (the best linear predictor) is given by (5.14). [This is a Gauss-Markov theorem for prediction in mixed models. See Harville (1976) for generalizations.]

Section 6

6.1 In Example 6.1, show that γᵢⱼ = 0 for all i, j is equivalent to $p_{ij} = p_{i+}p_{+j}$. [Hint: $\gamma_{ij} = \xi_{ij} - \xi_{i\cdot} - \xi_{\cdot j} + \xi_{\cdot\cdot} = 0$ implies $p_{ij} = a_ib_j$ and hence $p_{i+} = ca_i$ and $p_{+j} = b_j/c$ for suitable aᵢ, bⱼ, and c > 0.]

6.2 In Example 6.2, show that the conditional independence of A, B given C is equivalent to $\alpha^{ABC}_{ijk} = \alpha^{AB}_{ij} = 0$ for all i, j, and k.

6.3 In Example 6.1, show that the conditional distribution of the vectors ($n_{i1}$, ..., $n_{iJ}$) given the values of $n_{i+}$ (i = 1, ..., I) is that of I independent vectors with multinomial distributions $M(p_{1|i}, \dots, p_{J|i};\ n_{i+})$, where $p_{j|i} = p_{ij}/p_{i+}$.

6.4 Show that the distribution of the preceding problem also arises in Example 6.1 when the n subjects, rather than being drawn from the population at large, are randomly drawn: $n_{1+}$ from Category A₁, ..., $n_{I+}$ from Category A_I.

6.5 An application of log linear models in genetics is through the Hardy-Weinberg model of mating. If a parent population contains alleles A, a with frequencies p and 1 − p, then standard random mating assumptions will result in offspring with genotypes AA, Aa, and aa with frequencies θ₁ = p², θ₂ = 2p(1 − p), and θ₃ = (1 − p)². (a) Give the full multinomial model for this situation, and show how the Hardy-Weinberg model is a non-full-rank submodel. (b) For a sample X₁, ..., X_n of n offspring, find the minimal sufficient statistic. [See Brown (1986a) for a more detailed development of this model.]

6.6 A city has been divided into I major districts and the ith district into Jᵢ subdistricts, all of which have populations of roughly equal size. From the police records for a given year, a random sample of n robberies is obtained. Write the joint multinomial distribution of the numbers $n_{ij}$ of robberies in subdistrict (i, j) for this nested two-way layout as proportional to $e^{\sum n_{ij}\xi_{ij}}$ with $\xi_{ij} = \mu + \alpha_i + \beta_{ij}$, where $\sum_i \alpha_i = \sum_j \beta_{ij} = 0$, and show that the assumption βᵢⱼ = 0 for all i, j is equivalent to the assumption that $p_{ij} = p_{i+}/J_i$ for all i, j.

6.7 Instead of a sample of fixed size n in the preceding problem, suppose the observations consist of all robberies taking place within a given time period, so that n is the value taken on by a random variable N.
Suppose that N has a Poisson distribution with unknown expectation λ and that the conditional distribution of the $n_{ij}$ given N = n is the distribution assumed for the $n_{ij}$ in the preceding problem. Find the UMVU estimator of λ$p_{ij}$ and show that no unbiased estimator of $p_{ij}$ exists. [Hint: See the following problem.]

6.8 Let N be an integer-valued random variable with distribution $P_\theta(N = n) = P_\theta(n)$, n = 0, 1, ..., for which N is complete. Given N = n, let X have the binomial distribution b(p, n) for n > 0, with p unknown, and let X = 0 when n = 0. For the observations (N, X): (a) Show that (N, X) is complete. (b) Determine the UMVU estimator of $pE_\theta(N)$. (c) Show that no unbiased estimator of any function g(p) exists if $P_\theta(0) > 0$ for some θ. (d) Determine the UMVU estimator of p if $P_\theta(0) = 0$ for all θ.

Section 7

7.1 (a) Consider a population {a₁, ..., a_N} with the parameter space defined by the restriction a₁ + ··· + a_N = A (known). A simple random sample of size n is drawn in order to estimate τ². Assuming the labels to have been discarded, show that Y₍₁₎, ..., Y₍ₙ₎ are not complete. (b) Show that Theorem 7.1 need not remain valid when the parameter space is of the form V₁ × V₂ × ··· × V_N. [Hint: Let N = 2, n = 1, V₁ = {1, 2}, V₂ = {3, 4}.]

7.2 If Y₁, ..., Y_n are the sample values obtained in a simple random sample of size n from the finite population (7.2), then (a) E(Yᵢ) = ā, (b) var(Yᵢ) = τ², and (c) cov(Yᵢ, Yⱼ) = −τ²/(N − 1).

7.3 Verify equations (a) (7.6), (b) (7.8), and (c) (7.13).

7.4 For the situation of Example 7.4: (a) Show that $E\bar Y_{\nu-1} = E\big[\frac{1}{\nu-1}\sum_{i=1}^{\nu-1} Y_i\big] = \bar a$. (b) Show that $\big[\frac{1}{\nu-1} - \frac{1}{N}\big]\frac{1}{\nu-2}\sum_{i=1}^{\nu-1}(Y_i - \bar Y_{\nu-1})^2$ is an unbiased estimator of var($\bar Y_{\nu-1}$). [Pathak (1976) proved (a) by first showing that EY₁ = ā and then that E(Y₁ | T₀) = $\bar Y_{\nu-1}$. To avoid trivialities, Pathak also assumes that Cᵢ + Cⱼ < Q for all i, j, so that at least three observations are taken.]

7.5 Random variables X₁, ..., X_n are exchangeable if any permutation of X₁, ..., X_n has the same distribution. (a) If X₁, ..., X_n are iid, distributed as Bernoulli(p), show that given $\sum_{i=1}^n X_i = t$, X₁, ..., X_n are exchangeable (but not independent). (b) For the situation of Example 7.4, show that given T = {(C₁, X₁), ..., (C_ν, X_ν)}, the ν − 1 preterminal observations are exchangeable. The idea of exchangeability is due to de Finetti (1974), who proved a theorem that characterizes the distribution of exchangeable random variables as mixtures of iid random variables. Exchangeable random variables play a large role in Bayesian statistics; see Bernardo and Smith 1994 (Sections 4.2 and 4.3).

7.6 For the situation of Example 7.4, assuming that (a) and (b) hold: (a) Show that â of (7.9) is UMVU for ā. (b) Defining $S^2 = \sum_{i=1}^{\nu}(Y_i - \bar Y)^2/(\nu - 1)$, show that
$$\hat\sigma^2 = S^2 - \frac{MS_{[\nu]}\,\frac{\nu}{\nu - 1} - S^2}{\nu - 2}$$
is UMVU for τ² of (7.7), where $MS_{[\nu]}$ is the variance of the observations in the set (7.10). [Kremers (1986) uses conditional expectation arguments (Rao-Blackwellization) and completeness to establish these results. He also assumes that at least n₀ observations are taken. To avoid trivialities, we can assume n₀ ≥ 3.]

7.7 In simple random sampling with labels discarded, show that a necessary condition for h(a₁, ..., a_N) to be U-estimable is that h is symmetric in its N arguments.

7.8 Prove Theorem 7.7.
7.9 Show that the approximate variance (7.16) for stratified sampling with nᵢ = nNᵢ/N (proportional allocation) is never greater than the corresponding approximate variance τ²/n for simple random sampling with the same total sample size.

7.10 Let $V_p$ be the exact variance (7.15) and $V_r$ the corresponding variance for simple random sampling given by (7.6) with n = Σnᵢ, N = ΣNᵢ, nᵢ/n = Nᵢ/N, and $\tau^2 = \sum(a_{ij} - \bar a_{\cdot\cdot})^2/N$. (a) Show that
$$V_r - V_p = \frac{N - n}{n(N - 1)N}\left[\sum N_i(\bar a_{i\cdot} - \bar a_{\cdot\cdot})^2 - \frac{1}{N}\sum \frac{N - N_i}{N_i - 1}\, N_i\tau_i^2\right].$$
(b) Give an example in which $V_r < V_p$.

7.11 The approximate variance (7.16) for stratified sampling with a total sample size n = n₁ + ··· + n_s is minimized when nᵢ is proportional to Nᵢτᵢ.

7.12 For sampling designs where the inclusion probability $\pi_i = \sum_{s : i \in s} P(s)$ of including the ith sample value Yᵢ is known, a frequently used estimator of the population total is the Horvitz-Thompson (1952) estimator $\delta_{HT} = \sum_i Y_i/\pi_i$. (a) Show that $\delta_{HT}$ is an unbiased estimator of the population total (a numerical sketch of this property is given after this problem list). (b) The variance of $\delta_{HT}$ is given by
$$\mathrm{var}(\delta_{HT}) = \sum_i Y_i^2\left(\frac{1}{\pi_i} - 1\right) + \sum_{i \neq j} Y_iY_j\left(\frac{\pi_{ij}}{\pi_i\pi_j} - 1\right),$$
where the $\pi_{ij}$ are the second-order inclusion probabilities $\pi_{ij} = \sum_{s : i,j \in s} P(s)$. Note that it is necessary to know the labels in order to calculate $\delta_{HT}$; thus, Theorem 7.5 precludes any overall optimality properties. See Hedayat and Sinha 1991 (Chapters 2 and 3) for a thorough treatment of $\delta_{HT}$.

7.13 Suppose that an auxiliary variable is available for each element of the population (7.2), so that θ = {(1, a₁, b₁), ..., (N, a_N, b_N)}. If Y₁, ..., Y_n and Z₁, ..., Z_n denote the values of a and b observed in a simple random sample of size n, and Ȳ and Z̄ denote their averages, then
$$\mathrm{cov}(\bar Y, \bar Z) = E(\bar Y - \bar a)(\bar Z - \bar b) = \frac{N - n}{nN(N - 1)}\sum(a_i - \bar a)(b_i - \bar b).$$

7.14 Under the assumptions of Problem 7.13, if B = b₁ + ··· + b_N is known, an alternative unbiased estimator of ā is
$$\left(\frac{1}{n}\sum_{i=1}^{n}\frac{Y_i}{Z_i}\right)\bar b + \frac{n(N - 1)}{(n - 1)N}\left[\bar Y - \left(\frac{1}{n}\sum_{i=1}^{n}\frac{Y_i}{Z_i}\right)\bar Z\right].$$
[Hint: Use the facts that $E(Y_1/Z_1) = (1/N)\sum(a_i/b_i)$ and that, by the preceding problem,
$$E\left[\frac{1}{n - 1}\sum\frac{Y_i}{Z_i}(Z_i - \bar Z)\right] = \frac{1}{N - 1}\sum\frac{a_i}{b_i}(b_i - \bar b).]$$

7.15 In connection with cluster sampling, consider a set W of vectors (a₁, ..., a_M) and the totality G of transformations taking (a₁, ..., a_M) into (a′₁, ..., a′_M) such that (a′₁, ..., a′_M) ∈ W and Σa′ᵢ = Σaᵢ. Give examples of W such that for any real number a₁ there exist a₂, ..., a_M with (a₁, ..., a_M) ∈ W and such that (a) G consists of the identity transformation only; (b) G consists of the identity and one other element; (c) G is transitive over W.

7.16 For cluster sampling with unequal cluster sizes Mᵢ, Problem 7.14 provides an alternative estimator of ā, with Mᵢ in place of bᵢ. Show that this estimator reduces to Ȳ if b₁ = ··· = b_N and hence when the Mᵢ are equal.

7.17 Show that (7.17) holds if and only if δ depends only on X′′, defined by (7.18).
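As a complement to Problem 7.12(a), here is a minimal numerical check of the unbiasedness of the Horvitz-Thompson estimator under a small unequal-probability design; the three population values and the design probabilities below are assumptions made up for the illustration.

```python
# Unbiasedness of the Horvitz-Thompson estimator sum(Y_i / pi_i) for the
# population total, under a hypothetical unequal-probability design.
a = {1: 4.0, 2: 7.0, 3: 9.0}                        # population values
design = {(1, 2): 0.5, (1, 3): 0.3, (2, 3): 0.2}    # P(s) for each possible sample s

# first-order inclusion probabilities: pi_i = sum of P(s) over samples containing i
pi = {i: sum(P for s, P in design.items() if i in s) for i in a}

E_HT = sum(P * sum(a[i] / pi[i] for i in s) for s, P in design.items())
print(E_HT, sum(a.values()))   # both print 20.0: delta_HT is unbiased
```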
9 Notes

9.1 History

The theory of equivariant estimation of location and scale parameters is due to Pitman (1939), and the first general discussions of equivariant estimation were provided by Peisakoff (1950) and Kiefer (1957). The concept of risk-unbiasedness (but not the term) and its relationship to equivariance were given in Lehmann (1951). The linear models of Section 3.4 and Theorem 4.12 are due to Gauss. The history of both is discussed in Seal (1967); see also Stigler 1981. The generalization to exponential linear models was introduced by Dempster (1971) and Nelder and Wedderburn (1972).

The notions of Functional Equivariance and Formal Invariance, discussed in Section 3.2, have been discussed by other authors, sometimes under different names. Functional Equivariance is called the Principle of Rational Invariance by Berger (1985, Section 6.1), Measurement Invariance by Casella and Berger (1990, Section 7.2.4), and Parameter Invariance by Dawid (1983). Schervish (1995, Section 6.2.2) argues that this principle is really only a reparametrization of the problem and has nothing to do with invariance. This is almost in agreement with the principle of functional equivariance; however, it is still the case that when reparametrizing, one must be careful to properly reparametrize the estimator, density, and loss function, which is part of the prescription of an invariant problem. This type of invariance is commonly illustrated by the example that if δ measures temperature in degrees Celsius, then (9/5)δ + 32 should be used to measure temperature in degrees Fahrenheit (see Problems 2.9 and 2.10). What we have called Formal Invariance was also called by that name in Casella and Berger (1990), but was called the Invariance Principle by Berger (1985) and Context Invariance by Dawid (1983).

9.2 Subgroups

The idea of improving an MRE estimator by imposing equivariance only under a subgroup was used by Stein (1964), Brown (1968), and Brewster and Zidek (1974) to find improved estimators of a normal variance. Stein's 1964 proof is also discussed in detail by Maatta and Casella (1990), who give a history of decision-theoretic variance estimation. The proof of Stein (1964) contains key ideas that were further developed by Brown (1968) and led to Brewster and Zidek (1974) finding the best equivariant estimator of the form (2.33). [See Problem 2.14.]

9.3 General Linear Group

The general linear group (also called the full linear group) is an example of a group that can be thought of as a multivariate extension of the location-scale group. Let X₁, ..., X_n be iid according to a p-variate normal distribution $N_p(\mu, \Sigma)$, and define X as the p × n matrix (X₁, ..., X_n) and X̄ as the p × 1 vector of row means (X̄₁, ..., X̄_p)′. Consider the group of transformations
$$X' = AX + b, \qquad \mu' = A\mu + b, \qquad \Sigma' = A\Sigma A', \tag{9.1}$$
where A is a p × p nonsingular matrix and b is a p × 1 vector. [The group of real p × p nonsingular matrices, with matrix multiplication as the group operation, is called the general linear group, denoted by $Gl_p$ (see Eaton 1989 for a further development). The group (9.1) adds a location component.]

Consider now the estimation of Σ. (The estimation of µ is left to Problem 3.30.) An invariant loss function, analogous to squared error loss, is of the form
$$L(\Sigma, \delta) = \mathrm{tr}[\Sigma^{-1}(\delta - \Sigma)\Sigma^{-1}(\delta - \Sigma)] = \mathrm{tr}[\Sigma^{-1/2}\delta\Sigma^{-1/2} - I]^2, \tag{9.2}$$
where tr[·] is the trace of a matrix (see Eaton 1989, Example 6.2, or Olkin and Selliah 1977). It can be shown that equivariant estimators are of the form cS, where $S = (X - \bar X\mathbf{1}')(X - \bar X\mathbf{1}')'$, with 1 an n × 1 vector of 1's and c a constant, is the cross-products matrix (Problem 3.31). Since the group is transitive, the MRE estimator is given by the value of c that minimizes
$$E_I L(I, cS) = E_I \mathrm{tr}(cS - I)'(cS - I), \tag{9.3}$$
that is, the risk with Σ = I. Since $E_I \mathrm{tr}(cS - I)'(cS - I) = c^2 E_I \mathrm{tr}\,S^2 - 2c\, E_I \mathrm{tr}\,S + p$, the minimizing c is given by $c = E_I \mathrm{tr}\,S / E_I \mathrm{tr}\,S^2$. Note that, for p = 1, this reduces to the best equivariant estimator under quadratic loss in the scalar case. Other equivariant losses, such as Stein's loss (3.20), can be handled in a similar manner. See Problems 3.29–3.33 for details.
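The minimizing constant is easy to evaluate by simulation. The following minimal sketch (the dimension p, sample size n, and replication count are arbitrary choices) estimates $c = E_I \mathrm{tr}\,S / E_I \mathrm{tr}\,S^2$ by Monte Carlo and compares it with 1/(n + p), the value this ratio takes according to a standard Wishart-moment calculation (at p = 1 it reduces to the familiar 1/(n + 1) of the scalar case):

```python
# Monte Carlo estimate of c = E_I[tr S] / E_I[tr S^2] for the equivariant
# estimator cS of Sigma in Note 9.3; p, n, and reps are illustrative.
import numpy as np

rng = np.random.default_rng(0)
p, n, reps = 3, 10, 20000

tr_S = tr_S2 = 0.0
for _ in range(reps):
    X = rng.standard_normal((p, n))           # n observations from N_p(0, I)
    Xc = X - X.mean(axis=1, keepdims=True)    # subtract row means
    S = Xc @ Xc.T                             # cross-products matrix
    tr_S += np.trace(S)
    tr_S2 += np.trace(S @ S)

print(tr_S / tr_S2, 1 / (n + p))   # the estimate should be close to 1/(n + p)
```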
9.4 Finite Populations

Estimation in finite populations has, until recently, been developed largely outside the mainstream of statistics. The books by Cassel, Särndal, and Wretman (1977) and Särndal, Swensson, and Wretman (1992) constitute important efforts at a systematic presentation of this topic within the framework of theoretical statistics. The first steps in this direction were taken by Neyman (1934) and by Blackwell and Girshick (1954). The need to consider the labels as part of the data was first emphasized by Godambe (1955). Theorem 7.1 is due to Watson (1964) and Royall (1968), and Theorem 7.5 to Basu (1971).

CHAPTER 4

Average Risk Optimality

1 Introduction

So far, we have been concerned with finding estimators which minimize the risk $R(\theta,\delta)$ at every value of $\theta$. This was possible only by restricting the class of estimators to be considered through an impartiality requirement such as unbiasedness or equivariance. We shall now drop such restrictions, admitting all estimators into competition, but shall then have to be satisfied with a weaker optimality property than uniformly minimum risk. We shall look for estimators that make the risk function $R(\theta,\delta)$ small in some overall sense. Two such optimality properties will be considered: minimizing the (weighted) average risk for some suitable non-negative weight function, and minimizing the maximum risk. The second (minimax) approach will be taken up in Chapter 5; the present chapter is concerned with the first of these approaches, the problem of minimizing
$$r(\Lambda,\delta) = \int R(\theta,\delta)\,d\Lambda(\theta) \qquad (1.1)$$
where we shall assume that the weights represented by $\Lambda$ add up to 1, that is,
$$\int d\Lambda(\theta) = 1, \qquad (1.2)$$
so that $\Lambda$ is a probability distribution. An estimator $\delta$ minimizing (1.1) is called a Bayes estimator with respect to $\Lambda$.

The problem of determining such Bayes estimators arises in a number of different contexts.

(i) As Mathematical Tools

Bayes estimators play a central role in Wald's decision theory. It is one of the main results of this theory that, in any given statistical problem, attention can be restricted to Bayes solutions and suitable limits of Bayes solutions; given any other procedure $\delta$, there exists a procedure $\delta'$ in this class such that $R(\theta,\delta') \le R(\theta,\delta)$ for all values of $\theta$. (In view of this result, it is not surprising that Bayes estimators provide a tool for solving minimax problems, as will be seen in the next chapter.)

(ii) As a Way of Utilizing Past Experience

It is frequently reasonable to treat the parameter $\theta$ of a statistical problem as the realization of a random variable with known distribution rather than as an unknown constant. Suppose, for example, that we wish to estimate the probability of a penny showing heads when spun on a flat surface. So far, we would have considered $n$ spins of the penny as a set of $n$ binomial trials with an unknown probability $p$ of showing heads. Suppose, however, that we have had considerable experience with spinning pennies, experience which has perhaps provided us with approximate values of $p$ for a large number of similar pennies. If we believe this experience to be relevant to the present penny, it might be reasonable to represent this past knowledge as a probability distribution for $p$, the approximate shape of which is suggested by the earlier data. This is not as unlike the modeling we have done in the earlier sections as it may seem at first sight.
When assuming that the random variables representing the outcomes of our experiments have normal, Poisson, exponential distributions, and so on, we also draw on past experience. Furthermore, we also realize that these models are in no sense exact but, at best, represent reasonable approximations. There is the difference that in the earlier models we assumed only the shape of the distribution to be known but not the values of the parameters, whereas now we extend our model to include a specification of the prior distribution. However, this is a difference in degree rather than in kind and may be quite reasonable if the past experience is sufficiently extensive.

A difficulty, of course, is the assumption that past experience is relevant to the present case. Perhaps the mint has recently changed its manufacturing process, and the present coin, although it looks like the earlier ones, has totally different spinning properties. Similar kinds of judgment are required also for the models considered earlier. In addition, the conclusions derived from statistical procedures are typically applied not only to the present situation or population but also to those in the future, and extrastatistical judgment is again required in deciding how far such extrapolation is justified.

The choice of the prior distribution $\Lambda$ is typically made, like that of the distributions $P_\theta$, by combining experience with convenience. When we make the assumption that the amount of rainfall has a gamma distribution, we probably do not do so because we really believe this to be the case but because the gamma family is a two-parameter family which seems to fit such data reasonably well and which is mathematically very convenient. Analogously, we can obtain a prior distribution by starting with a flexible family that is mathematically easy to handle and selecting a member from this family which approximates our past experience. Such an approach, in which the model incorporates a prior distribution for $\theta$ to reflect past experience, is useful in fields in which a large amount of past experience is available. It can be brought to bear, for example, in many applications in agriculture, education, business, and medicine.

There are important differences between the modeling of the distributions $P_\theta$ and that of $\Lambda$. First, we typically have a number of observations from $P_\theta$ and can use these to check the assumed form of the distribution. Such a check of $\Lambda$ is not possible on the basis of one experiment, because the value of $\theta$ under study represents only a single observation from this distribution. A second difference concerns the meaning of a replication of the experiment. In the models preceding this section, a replication would consist of drawing another set of observations from $P_\theta$ with the same value of $\theta$. In the model of the present section, we would replicate the experiment by first drawing another value, $\theta'$, of $\Theta$ from $\Lambda$ and then a set of observations from $P_{\theta'}$. It might be argued that the sampling of the $\theta$ values (the choice of penny, for example) may be even more haphazard and less well controlled than the choice of subjects for an experiment or study, which assumes these subjects to be a random sample from the population of interest. However, it could also be argued that the assumption of a fixed value of $\theta$ is often unrealistic. As we will see, the Bayesian approaches of robust and hierarchical analysis attempt to address these problems.
(iii) As a Description of a State of Mind

A formally similar approach is adopted by the so-called Bayesian school, which interprets $\Lambda$ as expressing the subjective feeling about the likelihood of different $\theta$ values. In the presence of a large amount of previous experience, the chosen $\Lambda$ would often be close to that made under (ii), but the subjective approach can be applied even when little or no prior knowledge is available. In the latter case, for example, the prior distribution $\Lambda$ then models the state of ignorance about $\theta$. The subjective Bayesian uses the observations $X$ to modify prior beliefs. After $X = x$ has been observed, the belief about $\theta$ is expressed by the posterior (i.e., conditional) distribution of $\Theta$ given $x$. Detailed discussions of this approach, which we shall not pursue here, can be found, for example, in books by Savage (1954), Lindley (1965), de Finetti (1970, 1974), Box and Tiao (1973), Novick and Jackson (1974), Berger (1985), Bernardo and Smith (1994), Robert (1994a), and Gelman et al. (1995).

A note on notation: In Bayesian (as in frequentist) arguments, it is important to keep track of which variables are being conditioned on. Thus, the density of $X$ will be denoted by $X \sim f(x|\theta)$. Prior distributions will typically be denoted by $\Lambda$ or $\Pi$, with their density functions being $\pi(\theta|\lambda)$ or $\gamma(\lambda)$, where $\lambda$ is another parameter (sometimes called a hyperparameter). From these distributions, we often calculate conditional distributions, such as that of $\theta$ given $x$ and $\lambda$, or of $\lambda$ given $x$ (called posterior distributions). These typically have densities, denoted by $\pi(\theta|x,\lambda)$ or $\gamma(\lambda|x)$. We will also be interested in marginal distributions such as $m(x|\lambda)$. To illustrate,
$$\pi(\theta|x,\lambda) = \frac{f(x|\theta)\pi(\theta|\lambda)}{m(x|\lambda)}, \quad\text{where}\quad m(x|\lambda) = \int f(x|\theta)\pi(\theta|\lambda)\,d\theta.$$
It is convenient to use boldface to denote vectors, for example, $\mathbf{x} = (x_1,\dots,x_n)$, so we can write $f(\mathbf{x}|\theta)$ for the sample density $f(x_1,\dots,x_n|\theta)$.

The determination of a Bayes estimator is, in principle, quite simple. First, consider the situation before any observations are taken. Then, $\Theta$ has distribution $\Lambda$, and the Bayes estimator of $g(\Theta)$ is any number $d$ minimizing $E\,L(\Theta,d)$. Once the data have been obtained and are given by the observed value $x$ of $X$, the prior distribution $\Lambda$ of $\Theta$ is replaced by the posterior, that is, conditional, distribution of $\Theta$ given $x$, and the Bayes estimator is any number $\delta(x)$ minimizing the posterior risk $E\{L[\Theta,\delta(x)]\,|\,x\}$. The following is a precise statement of this result, where, as usual, measurability considerations are ignored.

Theorem 1.1 Let $\Theta$ have distribution $\Lambda$, and given $\Theta = \theta$, let $X$ have distribution $P_\theta$. Suppose, in addition, the following assumptions hold for the problem of estimating $g(\Theta)$ with non-negative loss function $L(\theta,d)$.
(a) There exists an estimator $\delta_0$ with finite risk.
(b) For almost all $x$, there exists a value $\delta_\Lambda(x)$ minimizing
$$E\{L[\Theta,\delta(x)]\,|\,X = x\}. \qquad (1.3)$$
Then, $\delta_\Lambda(X)$ is a Bayes estimator.

Proof. Let $\delta$ be any estimator with finite risk. Then, (1.3) is finite a.e. since $L$ is non-negative. Hence,
$$E\{L[\Theta,\delta_\Lambda(x)]\,|\,X = x\} \le E\{L[\Theta,\delta(x)]\,|\,X = x\} \quad\text{a.e.},$$
and the result follows by taking the expectation of both sides. ✷

[For a discussion of some measurability aspects and more detail when $L(\theta,d) = \rho(d-\theta)$, see DeGroot and Rao 1963. Brown and Purves (1973) provide a general treatment.]

Corollary 1.2 Suppose the assumptions of Theorem 1.1 hold.
(a) If $L(\theta,d) = [d - g(\theta)]^2$, then
$$\delta_\Lambda(x) = E[g(\Theta)|x] \qquad (1.4)$$
and, more generally, if
$$L(\theta,d) = w(\theta)[d - g(\theta)]^2, \qquad (1.5)$$
then
$$\delta_\Lambda(x) = \frac{\int w(\theta)g(\theta)\,d\Lambda(\theta|x)}{\int w(\theta)\,d\Lambda(\theta|x)} = \frac{E[w(\Theta)g(\Theta)|x]}{E[w(\Theta)|x]}. \qquad (1.6)$$
(b) If $L(\theta,d) = |d - g(\theta)|$, then $\delta_\Lambda(x)$ is any median of the conditional distribution of $g(\Theta)$ given $x$.
(c) If
$$L(\theta,d) = \begin{cases} 0 & \text{when } |d-\theta| \le c \\ 1 & \text{when } |d-\theta| > c, \end{cases} \qquad (1.7)$$
then $\delta_\Lambda(x)$ is the midpoint of the interval $I$ of length $2c$ which maximizes $P[\Theta \in I\,|\,x]$.

Proof. To prove part (a), note that by Theorem 1.1, the Bayes estimator is obtained by minimizing
$$E\{[g(\Theta) - \delta(x)]^2\,|\,x\}. \qquad (1.8)$$
By assumption (a) of Theorem 1.1, there exists $\delta_0(x)$ for which (1.8) is finite for almost all values of $x$, and it then follows from Example 1.7.17 that (1.8) is minimized by (1.4). The proofs of the other parts are completely analogous. ✷

Example 1.3 Poisson. The parameter $\theta$ of a Poisson($\theta$) distribution is both the mean and the variance of the distribution. Although squared error loss $L_0(\theta,\delta) = (\theta-\delta)^2$ is often preferred for the estimation of a mean, some type of scaled squared error loss, for example, $L_k(\theta,\delta) = (\theta-\delta)^2/\theta^k$, may be more appropriate for the estimation of a variance. If $X_1,\dots,X_n$ are iid Poisson($\theta$), and $\theta$ has the Gamma($a,b$) prior distribution, then the posterior distribution is
$$\pi(\theta|\bar x) = \text{Gamma}\left(a + n\bar x,\ \frac{b}{nb+1}\right)$$
and the Bayes estimator under $L_k$ is given by (see Problem 1.1)
$$\delta_k(\bar x) = \frac{E(\theta^{1-k}|\bar x)}{E(\theta^{-k}|\bar x)} = \frac{b}{nb+1}(n\bar x + a - k) \quad\text{for } a - k > 0.$$
Thus, the choice of loss function can have a large effect on the resulting Bayes estimator. ∥
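A direct numerical check of this example is easy to carry out: draw from the gamma posterior, form the ratio of posterior moments defining $\delta_k$, and compare with the closed form. The data and the values of $a$, $b$, and $k$ below are hypothetical.

```python
# Check of Example 1.3: under L_k, the Bayes estimator is
# E(theta^(1-k)|x)/E(theta^(-k)|x) = [b/(nb+1)] (n*xbar + a - k).
import numpy as np

rng = np.random.default_rng(1)
x = np.array([2, 0, 3, 1, 2])                # hypothetical Poisson counts
n, a, b, k = len(x), 2.0, 1.5, 1.0

shape, scale = a + x.sum(), b / (n * b + 1)  # posterior Gamma parameters
theta = rng.gamma(shape, scale, size=1_000_000)

mc = np.mean(theta ** (1 - k)) / np.mean(theta ** (-k))
closed = scale * (shape - k)                 # [b/(nb+1)](n xbar + a - k)
print(mc, closed)                            # agree up to Monte Carlo error
```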
It is frequently important to know whether a Bayes solution is unique. The following are sufficient conditions for this to be the case.

Corollary 1.4 If the loss function $L(\theta,d)$ is squared error, or more generally, if it is strictly convex in $d$, a Bayes solution $\delta_\Lambda$ is unique (a.e. $\mathcal{P}$), where $\mathcal{P}$ is the class of distributions $P_\theta$, provided
(a) the average risk of $\delta_\Lambda$ with respect to $\Lambda$ is finite, and
(b) if $Q$ is the marginal distribution of $X$ given by
$$Q(A) = \int P_\theta(X \in A)\,d\Lambda(\theta),$$
then a.e. $Q$ implies a.e. $\mathcal{P}$.

Proof. For squared error, it follows from Corollary 1.2 that any Bayes estimator $\delta_\Lambda(x)$ with finite risk must satisfy (1.4) except on a set $N$ of $x$ values with $Q(N) = 0$. For general strictly convex loss functions, the result follows by the same argument from Problem 1.7.26. ✷

As an example of a case in which condition (b) does not hold, let $X$ have the binomial distribution $b(p,n)$, $0 \le p \le 1$, and suppose that $\Lambda$ assigns probability $1/2$ to each of the values $p = 0$ and $p = 1$. Then, any estimator $\delta(X)$ of $p$ with $\delta(0) = 0$ and $\delta(n) = 1$ is Bayes. On the other hand, condition (b) is satisfied when the parameter space is an open set which is the support of $\Lambda$ and the probability $P_\theta(X \in A)$ is continuous in $\theta$ for every $A$. To see this, note that $Q(N) = 0$ implies $P_\theta(N) = 0$ (a.e. $\Lambda$) by (1.2.23). If there exists $\theta_0$ with $P_{\theta_0}(N) > 0$, there exists a neighborhood $\omega$ of $\theta_0$ in which $P_\theta(N) > 0$. By the support assumption, $\Lambda(\omega) > 0$, and this contradicts the assumption that $P_\theta(N) = 0$ (a.e. $\Lambda$).

Three different aspects of the performance of a Bayes estimator, or of any other estimator $\delta$, may be of interest in the present model. These are (a) the Bayes risk (1.1); (b) the risk function $R(\theta,\delta)$ of Section 1.1 [Equation (1.1.10)], that is, the frequentist risk, which is now the conditional risk of $\delta(X)$ given $\theta$; and (c) the posterior risk given $x$, which is defined by (1.3). For the determination of the Bayes estimator, the relevant criterion is, of course, (a). However, consideration of (b), the conditional risk given $\theta$, as a function of $\theta$ provides an important safeguard against an inappropriate choice of $\Lambda$ (Berger 1985, Section 4.7.5). Finally, consideration of (c) is of interest primarily to the Bayesian. From the Bayesian point of view, the posterior distribution of $\Theta$ given $x$ summarizes the investigator's belief about $\theta$ in the light of the observation, and hence the posterior risk is the only measure of accuracy that is of interest. The possibility of evaluating the risk function (b) of $\delta_\Lambda$ suggests still another use of Bayes estimators.

(iv) As a General Method for Generating Reasonable Estimators

Postulating some plausible distribution $\Lambda$ provides a method for generating interesting estimators which can then be studied in the conventional way. A difficulty with this approach is, of course, the choice of $\Lambda$. Methodologies have been developed to deal with this difficulty which sometimes incorporate frequentist measures to assess the choice of $\Lambda$. These methods tend to first select not a single prior distribution but a family of priors, often indexed by a parameter (a so-called hyperparameter). The family should be chosen so as to balance appropriateness, flexibility, and mathematical convenience. From it, a plausible member is selected to obtain an estimator for consideration. The following are some examples of these approaches, which will be discussed in Sections 4.4 and 4.5.

• Empirical Bayes. The parameters of the prior distribution are themselves estimated from the data.
• Hierarchical Bayes. The parameters of the prior distribution are, in turn, modeled by another distribution, sometimes called a hyperprior distribution.
• Robust Bayes. The performance of an estimator is evaluated for each member of the prior class, with the goal of finding an estimator that performs well (is robust) for the entire class.

Another possibility leading to a particular choice of $\Lambda$ corresponds to the third interpretation (iii), in which the state of mind can be described as "ignorance." One would then select for $\Lambda$ a noninformative prior which tries (in the spirit of invariance) to treat all parameter values equitably. Such an approach was developed by Jeffreys (1939, 1948, 1961), who, on the basis of invariance considerations, suggests as noninformative prior for $\theta$ a density that is proportional to $\sqrt{|I(\theta)|}$, where $|I(\theta)|$ is the determinant of the information matrix. A good account of this approach with many applications is given by Berger (1985), Robert (1994a), and Bernardo and Smith (1994). Note 9.6 has a further discussion.

Example 1.5 Binomial. Suppose that $X$ has the binomial distribution $b(p,n)$. A two-parameter family of prior distributions for $p$ which is flexible, and for which the calculation of the conditional distribution is particularly simple, is the family of beta distributions $B(a,b)$. These densities can take on a variety of shapes (see Problem 1.2), and we note for later reference that the expectation and variance of a random variable $p$ with density $B(a,b)$ are (Problem 1.5.19)
$$E(p) = \frac{a}{a+b} \quad\text{and}\quad \operatorname{var}(p) = \frac{ab}{(a+b)^2(a+b+1)}. \qquad (1.9)$$
To determine the Bayes estimator of a given estimand $g(p)$, let us first obtain the conditional distribution (posterior distribution) of $p$ given $x$. The joint density of $X$ and $p$ is
$$\binom{n}{x}\frac{\Gamma(a+b)}{\Gamma(a)\Gamma(b)}\,p^{x+a-1}(1-p)^{n-x+b-1}.$$
The conditional density of $p$ given $x$ is obtained by dividing by the marginal of $x$, which is a function of $x$ alone (Problem 2.1).
Thus, the conditional density of $p$ given $x$ has the form
$$C(a,b,x)\,p^{x+a-1}(1-p)^{n-x+b-1}. \qquad (1.10)$$
Again, this is recognized to be a beta distribution, with parameters
$$a' = a+x, \qquad b' = b+n-x. \qquad (1.11)$$
Let us now determine the Bayes estimator of $g(p) = p$ when the loss function is squared error. By (1.4), this is
$$\delta_\Lambda(x) = E(p|x) = \frac{a'}{a'+b'} = \frac{a+x}{a+b+n}. \qquad (1.12)$$
It is interesting to compare this Bayes estimator with the usual estimator $X/n$. Before any observations are taken, the estimator from the Bayesian approach is the expectation of the prior: $a/(a+b)$. Once $X$ has been observed, the standard non-Bayesian (for example, UMVU) estimator is $X/n$. The estimator $\delta_\Lambda(X) = (a+X)/(a+b+n)$ lies between these two. In fact,
$$\frac{a+X}{a+b+n} = \frac{a+b}{a+b+n}\cdot\frac{a}{a+b} + \frac{n}{a+b+n}\cdot\frac{X}{n} \qquad (1.13)$$
is a weighted average of $a/(a+b)$, the estimator of $p$ before any observations are taken, and $X/n$, the estimator without consideration of a prior.

The estimator (1.13) can be considered as a modification of the standard estimator $X/n$ in the light of the prior information about $p$ expressed by (1.9), or as a modification of the prior estimator $a/(a+b)$ in the light of the observation $X$. From this point of view, it is interesting to notice what happens as $a$ and $b \to \infty$ with the ratio $b/a$ kept fixed. Then, the estimator (1.12) tends in probability to $a/(a+b)$; that is, the prior information is so overwhelming that it essentially determines the estimator. The explanation is, of course, that in this case the beta distribution $B(a,b)$ concentrates essentially all its mass at $a/(a+b)$ [the variance in (1.9) tends toward 0], so that the value of $p$ is taken to be essentially known and is not influenced by $X$. ("Don't confuse me with the facts!") On the other hand, if $a$ and $b$ are fixed but $n \to \infty$, it is seen from (1.12) that $\delta_\Lambda$ essentially coincides with $X/n$. This is the case in which the information provided by $X$ overwhelms the initial information contained in the prior distribution.

The UMVU estimator $X/n$ corresponds to the case $a = b = 0$. However, $B(0,0)$ is no longer a probability distribution since $\int_0^1 [1/p(1-p)]\,dp = \infty$. Even with such an improper distribution (that is, a distribution with infinite mass), it is possible formally to calculate a posterior distribution given $x$. This possibility will be considered in Example 2.8. ∥
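The shrinkage identity (1.13) is worth seeing numerically; the prior parameters and data below are hypothetical.

```python
# The Bayes estimate (1.12) and its weighted-average form (1.13) coincide.
a, b = 3.0, 5.0          # Beta(a, b) prior
n, x = 20, 12            # n trials, x successes

bayes = (a + x) / (a + b + n)
w = (a + b) / (a + b + n)                         # weight on the prior mean
print(bayes, w * a / (a + b) + (1 - w) * x / n)   # both equal 15/28
```

As $n$ grows with $a$ and $b$ fixed, the weight $w$ tends to 0, reproducing the limiting behavior described above.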
This may be a good time to discuss a question facing the reader of this book. Throughout, the theory is illustrated with examples which are either completely formal (that is, without any context) or stated in terms of some vaguely described situation in which such an example might arise. In either case, what is assumed is a model and, in the present section, a prior distribution. Where do these assumptions come from, and how should they be interpreted? "Let $X$ have a binomial distribution $b(p,n)$ and let $p$ be distributed according to a beta distribution $B(a,b)$." Why binomial and why beta?

The assumptions underlying the binomial distribution are (i) independence of the $n$ trials and (ii) constancy of the success probability $p$ throughout the series. In practice, it is rare for either of these two assumptions to hold exactly; consecutive trials typically exhibit some dependence, and success probabilities tend to change over time (as in Example 1.8.5). Nevertheless, they are often reasonable approximations and may serve as idealizations in a wide variety of situations arising in the real world. Similarly, approximate normality may often be satisfied, to a reasonable degree, according to some version of the central limit theorem or on the basis of past experience.

Let us next turn to the assumption of a beta prior for $p$. This leads to an estimator which, due to its simplicity, is highly prized for a variety of reasons. But simplicity of the solution is of little use if the problem is based on assumptions which bear no resemblance to reality. Subjective Bayesians, even though perhaps unable to state their prior precisely, will typically have an idea of its shape: It may be bimodal, unimodal (symmetric or skewed), or it may be L- or U-shaped. In the first of these cases, a beta prior would be inappropriate, since no beta distribution has more than one mode. However, by proper choice of the parameters $a$ and $b$, a beta distribution can accommodate itself to each of the other possibilities mentioned (Problem 1.2), and thus can represent a considerable variety of prior shapes.

The modeling of subjective priors discussed in the preceding paragraph corresponds to the third of the four interpretations of the Bayes formalism mentioned at the beginning of the section. A very different approach is suggested by the fourth interpretation, where formal priors are used simply as a method of generating a reasonable estimator. A standard choice in this case is to treat all parameter values equally (which corresponds to a subjective prior modeling ignorance). In the nineteenth century, the preferred choice for this purpose in the binomial case was the uniform distribution for $p$ over $(0,1)$, which is the beta distribution with $a = b = 1$. As an alternative, the Jeffreys prior corresponding to $a = b = 1/2$ (see the discussion preceding Example 1.5) has the advantage of being invariant under a change of parameters (Schervish 1995, Section 2.3.4). The prior density in this case is proportional to $[p(1-p)]^{-1/2}$, which is U-shaped. It is difficult to imagine many real situations in which an investigator believes that the unknown $p$ is equally likely to be close to 0 or to 1. In this case, the fourth interpretation would therefore lead to very different priors from those of the third interpretation.

2 First Examples

In constructing Bayes estimators as functions of the posterior density, some choices are made (such as the choice of prior and loss function). These choices will ultimately affect the properties of the estimators, including not only risk performance (such as bias and admissibility) but also more fundamental considerations (such as sufficiency). In this section, we look at a number of examples to illustrate these points.

Example 2.1 Sequential binomial sampling. Consider a sequence of binomial trials with a stopping rule, as in Section 3.3. Let $X$, $Y$, and $N$ denote, respectively, the number of successes, the number of failures, and the total number of trials at the moment sampling stops. The probability of any sample path is then $p^x(1-p)^y$, and we shall again suppose that $p$ has the prior distribution $B(a,b)$. What now is the posterior distribution of $p$ given $X$ and $Y$ (or, equivalently, $X$ and $N = X+Y$)? The calculation in Example 1.5 shows that, as in the fixed sample size case, it is the beta distribution with parameters $a'$ and $b'$ given by (1.11), so that, in particular, the Bayes estimator of $p$ is given by (1.12) regardless of the stopping rule. ∥
Of course, there are stopping rules which affect even Bayesian inference (for example, "stop when the posterior probability of an event is greater than .9"). However, if the stopping rule is a function only of the data, then the Bayes inference will be independent of it. These so-called proper stopping rules, and other aspects of inference under stopping rules, are discussed in detail by Berger and Wolpert (1988, Section 4.2). See also Problem 2.2 for another illustration. Thus, Example 2.1 illustrates a quite general feature of Bayesian inference: The posterior distribution does not depend on the sampling rule but only on the likelihood of the observed results.

Example 2.2 Normal mean. Let $X_1,\dots,X_n$ be iid as $N(\theta,\sigma^2)$, with $\sigma$ known, and let the estimand be $\theta$. As a prior distribution for $\Theta$, we shall assume the normal distribution $N(\mu,b^2)$. The joint density of $\Theta$ and $\mathbf{X} = (X_1,\dots,X_n)$ is then proportional to
$$f(\mathbf{x},\theta) = \exp\left[-\frac{1}{2\sigma^2}\sum_{i=1}^n (x_i-\theta)^2\right]\exp\left[-\frac{1}{2b^2}(\theta-\mu)^2\right]. \qquad (2.1)$$
To obtain the posterior distribution of $\Theta|\mathbf{x}$, the joint density is divided by the marginal density of $\mathbf{X}$, so that the posterior distribution has the form $C(\mathbf{x})f(\mathbf{x},\theta)$. If $C(\mathbf{x})$ is used generically to denote any function of $\mathbf{x}$ not involving $\theta$, the posterior density of $\Theta|\mathbf{x}$ is
$$C(\mathbf{x})\,e^{-\frac{1}{2}\theta^2\left[\frac{n}{\sigma^2}+\frac{1}{b^2}\right]+\theta\left[\frac{n\bar x}{\sigma^2}+\frac{\mu}{b^2}\right]} = C(\mathbf{x})\exp\left\{-\frac{1}{2}\left(\frac{n}{\sigma^2}+\frac{1}{b^2}\right)\left[\theta^2 - 2\theta\,\frac{n\bar x/\sigma^2+\mu/b^2}{n/\sigma^2+1/b^2}\right]\right\}.$$
This is recognized to be the normal density with mean
$$E(\Theta|\mathbf{x}) = \frac{n\bar x/\sigma^2 + \mu/b^2}{n/\sigma^2 + 1/b^2} \qquad (2.2)$$
and variance
$$\operatorname{var}(\Theta|\mathbf{x}) = \frac{1}{n/\sigma^2 + 1/b^2}. \qquad (2.3)$$
When the loss is squared error, the Bayes estimator of $\theta$ is given by (2.2), which can be rewritten as
$$\delta_\Lambda(\mathbf{x}) = \frac{n/\sigma^2}{n/\sigma^2+1/b^2}\,\bar x + \frac{1/b^2}{n/\sigma^2+1/b^2}\,\mu, \qquad (2.4)$$
and by Corollary 1.7.19, this result remains true for any loss function $\rho(d-\theta)$ for which $\rho$ is convex and even. This shows $\delta_\Lambda$ to be a weighted average of the standard estimator $\bar X$ and the mean $\mu$ of the prior distribution, the latter being the Bayes estimator before any observations are taken. As $n \to \infty$ with $\mu$ and $b$ fixed, $\delta_\Lambda(X)$ becomes essentially the estimator $\bar X$, and $\delta_\Lambda(X) \to \theta$ in probability. As $b \to 0$, $\delta_\Lambda(X) \to \mu$ in probability, as is to be expected when the prior becomes more and more concentrated about $\mu$. As $b \to \infty$, $\delta_\Lambda(X)$ essentially coincides with $\bar X$, which again is intuitively reasonable. These results are analogous to those in the binomial case. See Problem 2.3. ∥
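The posterior quantities (2.2)-(2.4) translate directly into code; the data and hyperparameters below are hypothetical.

```python
# Posterior mean and variance for Example 2.2 (normal mean, normal prior).
import numpy as np

x = np.array([1.2, 0.7, 2.1, 1.5])   # hypothetical N(theta, sigma^2) sample
sigma2, mu, b2 = 1.0, 0.0, 4.0       # known sigma^2; prior N(mu, b^2)
n, xbar = len(x), x.mean()

precision = n / sigma2 + 1 / b2
post_mean = (n * xbar / sigma2 + mu / b2) / precision   # eq. (2.2)
post_var = 1 / precision                                # eq. (2.3)

w = (n / sigma2) / precision                            # weight in eq. (2.4)
print(post_mean, w * xbar + (1 - w) * mu, post_var)
```

Letting b2 grow large in this sketch reproduces the limit $\delta_\Lambda \to \bar x$ discussed next.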
It was seen above that $\bar X$ is the limit of the Bayes estimators as $b \to \infty$. As $b \to \infty$, the prior density tends to Lebesgue measure. Since the Fisher information $I(\theta)$ of a location parameter is constant, this is actually the Jeffreys prior mentioned under (iv) earlier in the section. It is easy to check that the posterior distribution calculated from this improper prior is a proper distribution as soon as an observation has been taken. This is not surprising; since $X$ is normally distributed about $\theta$ with variance $\sigma^2$, even a single observation provides a good idea of the position of $\theta$. As in the binomial case, the question arises whether $\bar X$ is the Bayes solution also with respect to a proper prior $\Lambda$. This question is answered for both cases by the following theorem.

Theorem 2.3 Let $\Theta$ have a distribution $\Lambda$, and let $P_\theta$ denote the conditional distribution of $X$ given $\theta$. Consider the estimation of $g(\theta)$ when the loss function is squared error. Then, no unbiased estimator $\delta(X)$ can be a Bayes solution unless
$$E[\delta(X) - g(\Theta)]^2 = 0, \qquad (2.5)$$
where the expectation is taken with respect to variation in both $X$ and $\Theta$.

Proof. Suppose $\delta(X)$ is a Bayes estimator and is unbiased for estimating $g(\theta)$. Since $\delta(X)$ is Bayes and the loss is squared error, $\delta(X) = E[g(\Theta)|X]$ with probability 1. Since $\delta(X)$ is unbiased, $E[\delta(X)|\theta] = g(\theta)$ for all $\theta$. Conditioning on $X$ and using (1.6.2) leads to
$$E[g(\Theta)\delta(X)] = E\{\delta(X)E[g(\Theta)|X]\} = E[\delta^2(X)].$$
Conditioning instead on $\Theta$, we find
$$E[g(\Theta)\delta(X)] = E\{g(\Theta)E[\delta(X)|\Theta]\} = E[g^2(\Theta)].$$
It follows that
$$E[\delta(X) - g(\Theta)]^2 = E[\delta^2(X)] + E[g^2(\Theta)] - 2E[\delta(X)g(\Theta)] = 0,$$
as was to be proved. ✷

Let us now apply this result to the case where $\delta(\mathbf{x})$ is the sample mean.

Example 2.4 Sample means. If $X_i$, $i = 1,\dots,n$, are iid with $E(X_i) = \theta$ and $\operatorname{var} X_i = \sigma^2$ (independent of $\theta$), then the risk of $\bar X$ (given $\theta$) is $R(\theta,\bar X) = E(\bar X-\theta)^2 = \sigma^2/n$. For any proper prior distribution $\Lambda$ on $\Theta$, $E(\bar X-\Theta)^2 = \sigma^2/n \neq 0$, so (2.5) cannot be satisfied and, by Theorem 2.3, $\bar X$ is not a Bayes estimator. This argument applies to any distribution for which the variance of $\bar X$ is independent of $\theta$, such as the $N(\theta,\sigma^2)$ distribution of Example 2.2.

However, if the variance is a function of $\theta$, the situation is different. If $\operatorname{var} X_i = v(\theta)$, then (2.5) will hold only if
$$\frac{1}{n}\int v(\theta)\,d\Lambda(\theta) = 0 \qquad (2.6)$$
for some proper prior $\Lambda$. If $v(\theta) > 0$ (a.e. $\Lambda$), then (2.6) cannot hold. For example, if $X_1,\dots,X_n$ are iid Bernoulli($p$) random variables, then the risk function of the sample mean $\delta = \sum X_i/n$ is
$$E\left(\frac{\sum X_i}{n} - p\right)^2 = \frac{p(1-p)}{n},$$
and the left side of (2.5) is therefore
$$\frac{1}{n}\int_0^1 p(1-p)\,d\Lambda(p).$$
The integral is zero if and only if $\Lambda$ assigns probability 1 to the set $\{0,1\}$. For such a distribution $\Lambda$, $\delta(0) = 0$ and $\delta(n) = 1$, and any estimator satisfying this condition is a Bayes estimator for such a $\Lambda$. Hence, in particular, $X/n$ is a Bayes estimator. Of course, if $\Lambda$ is true, then the values $X = 1,2,\dots,n-1$ are never observed. Thus, $X/n$ is Bayes only in a rather trivial sense. ∥

Extensions and discussion of other consequences of Theorem 2.3 can be found in Bickel and Blackwell (1967), Noorbaloochi and Meeden (1983), and Bickel and Mallows (1988). See Problem 2.4.

The beta and normal prior distributions in the binomial and normal cases are the so-called conjugate families of prior distributions. These are frequently defined as distributions with densities proportional to the density of $P_\theta$. It has been pointed out by Diaconis and Ylvisaker (1979) that this definition is ambiguous; they show that in the above examples and, more generally, in the case of exponential families, conjugate priors can be characterized by the fact that the resulting Bayes estimators are linear in $X$. They also extend the weighted-average representation (1.13) of the Bayes estimator to general exponential families. For one-parameter exponential families, MacEachern (1993) gives an alternate characterization of conjugate priors based on the requirement that the posterior mean lie "in between" the prior mean and the sample mean. As another example of the use of conjugate priors, consider the estimation of a normal variance.

Example 2.5 Normal variance, known mean. Let $X_1,\dots,X_n$ be iid according to $N(0,\sigma^2)$, so that the joint density of the $X_i$ is $C\tau^r e^{-\tau\sum x_i^2}$, where $\tau = 1/2\sigma^2$ and $r = n/2$. As conjugate prior for $\tau$, we take the gamma density $\Gamma(g,1/\alpha)$, noting that, by (1.5.44),
$$E(\tau) = \frac{g}{\alpha}, \quad E(\tau^2) = \frac{g(g+1)}{\alpha^2}, \quad E\left(\frac{1}{\tau}\right) = \frac{\alpha}{g-1}, \quad E\left(\frac{1}{\tau^2}\right) = \frac{\alpha^2}{(g-1)(g-2)}. \qquad (2.7)$$
Writing $y = \sum x_i^2$, we see that the posterior density of $\tau$ given the $x_i$ is $C(y)\tau^{r+g-1}e^{-\tau(\alpha+y)}$, which is $\Gamma[r+g,\ 1/(\alpha+y)]$.
If the loss is squared error, the Bayes estimator of $2\sigma^2 = 1/\tau$ is the posterior expectation of $1/\tau$, which by (2.7) is $(\alpha+y)/(r+g-1)$. The Bayes estimator of $\sigma^2 = 1/2\tau$ is therefore
$$\frac{\alpha+Y}{n+2g-2}. \qquad (2.8)$$
In the present situation, we might instead prefer to work with the scale-invariant loss function
$$\frac{(d-\sigma^2)^2}{\sigma^4}, \qquad (2.9)$$
which leads to the Bayes estimator (Problem 2.6)
$$\frac{E(1/\sigma^2\,|\,\mathbf{x})}{E(1/\sigma^4\,|\,\mathbf{x})} = \frac{E(\tau\,|\,\mathbf{x})}{2E(\tau^2\,|\,\mathbf{x})}, \qquad (2.10)$$
and hence, by (2.7), after some simplification, to
$$\frac{\alpha+Y}{n+2g+2}. \qquad (2.11)$$
Since the Fisher information for $\sigma$ is proportional to $1/\sigma^2$ (Table 2.5.1), the Jeffreys prior density in the present case is proportional to the improper density $1/\sigma$, which induces for $\tau$ the density $(1/\tau)\,d\tau$. This corresponds to the limiting case $\alpha = 0$, $g = 0$, and hence by (2.8) and (2.11) to the Bayes estimators $Y/(n-2)$ and $Y/(n+2)$ for squared error and the loss function (2.9), respectively. The first of these has uniformly larger risk than the second, which is MRE. ∥

We next consider two examples involving more than one parameter.

Example 2.6 Normal variance, unknown mean. Suppose we let $X_1,\dots,X_n$ be iid as $N(\theta,\sigma^2)$ and consider the Bayes estimation of $\theta$ and $\sigma^2$ when the prior assigns to $\tau = 1/2\sigma^2$ the distribution $\Gamma(g,1/\alpha)$, as in Example 2.5, and takes $\theta$ to be independent of $\tau$ with (for the sake of simplicity) the uniform improper prior $d\theta$ corresponding to $b = \infty$ in Example 2.2. Then, the joint posterior density of $(\theta,\tau)$ is proportional to
$$\tau^{r+g-1}e^{-\tau[\alpha+z+n(\bar x-\theta)^2]} \qquad (2.12)$$
where $z = \sum(x_i-\bar x)^2$ and $r = n/2$. By integrating out $\theta$, it is seen that the posterior distribution of $\tau$ is $\Gamma[r+g-1/2,\ 1/(\alpha+z)]$ (Problem 1.12). In particular, for $\alpha = g = 0$, the Bayes estimators of $\sigma^2 = 1/2\tau$ are $Z/(n-3)$ and $Z/(n+1)$ for squared error and the loss function (2.9), respectively. To see that the Bayes estimator of $\theta$ is $\bar X$ regardless of the values of $\alpha$ and $g$, it is enough to notice that the posterior density of $\theta$ is symmetric about $\bar x$ (Problem 2.9; see also Problem 2.10). ∥
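Example 2.5 can be checked by simulation from the gamma posterior; the data and the hyperparameters $g$ and $\alpha$ below are hypothetical.

```python
# Check of Example 2.5: the posterior of tau = 1/(2 sigma^2) is
# Gamma(r + g, 1/(alpha + y)) with y = sum x_i^2, r = n/2, and the Bayes
# estimate of sigma^2 under squared error is (alpha + y)/(n + 2g - 2).
import numpy as np

rng = np.random.default_rng(2)
x = rng.normal(0.0, 1.3, size=12)        # hypothetical N(0, sigma^2) sample
n, g, alpha = len(x), 3.0, 2.0
y, r = np.sum(x**2), n / 2

tau = rng.gamma(r + g, 1.0 / (alpha + y), size=1_000_000)  # posterior draws
print(np.mean(1.0 / (2.0 * tau)),        # Monte Carlo E(sigma^2 | x)
      (alpha + y) / (n + 2 * g - 2))     # closed form (2.8)
```

Replacing the posterior mean of $1/(2\tau)$ by the moment ratio in (2.10) gives the analogous check of (2.11).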
A problem for which the theories of Chapters 2 and 3 do not lead to a satisfactory solution is that of components of variance. The following example treats the simplest case from the present point of view.

Example 2.7 Random effects one-way layout. In the model (3.5.1), suppose for the sake of simplicity that $\mu$ and $Z_{11}$ have been eliminated, either by invariance or by assigning to $\mu$ the uniform prior on $(-\infty,\infty)$. In either case, this restricts the problem to the remaining $Z$'s, with joint density proportional to
$$\frac{1}{\sigma^{s(n-1)}(\sigma^2+n\sigma_A^2)^{(s-1)/2}}\exp\left[-\frac{1}{2(\sigma^2+n\sigma_A^2)}\sum_{i=2}^s z_{i1}^2 - \frac{1}{2\sigma^2}\sum_{i=1}^s\sum_{j=2}^n z_{ij}^2\right]. \qquad (2.13)$$
The most natural noninformative prior postulates $\sigma$ and $\sigma_A$ to be independent with improper densities $1/\sigma$ and $1/\sigma_A$, respectively. Unfortunately, however, in this case, the posterior distribution of $(\sigma,\sigma_A)$ continues to be improper, so that the calculation of a posterior expectation is meaningless (Problem 2.12). Instead, let us consider the Jeffreys prior $\Lambda$, which has the improper density $(1/\sigma)(1/\tau)$ but with $\tau^2 = \sigma^2 + n\sigma_A^2$, so that the density is zero for $\tau < \sigma$. (For a discussion of the appropriateness of this and related priors, see Hill 1965, Stone and Springer 1965, Tiao and Tan 1965, Box and Tiao 1973, Hobert 1993, and Hobert and Casella 1996.) The posterior distribution is then proper (Problem 2.11). The resulting Bayes estimator $\delta_\Lambda$ of $\sigma_A^2$ is obtained by Klotz, Milton, and Zacks (1969), who compare it with the more traditional estimators discussed in Example 5.5. Since the risk of $\delta_\Lambda$ is quite unsatisfactory, Portnoy (1971) replaces squared error by the scale-invariant loss function $(d-\sigma_A^2)^2/(\sigma^2+n\sigma_A^2)^2$ and shows the resulting estimator to be
$$\delta'_\Lambda = \frac{1}{2n}\left[\frac{S_A^2}{a} - \frac{S^2}{c-a-1} + \frac{c-1}{ca(c-a-1)}\,(S_A^2+S^2)\,F(R)\right] \qquad (2.14)$$
where $c = \frac{1}{2}(sn+1)$, $a = \frac{1}{2}(s+3)$, $R = S^2/(S_A^2+S^2)$, and
$$F(R) = \int_0^1 \frac{v^a}{[R+v(1-R)]^{c+1}}\,dv.$$
Portnoy's risk calculations suggest that $\delta'_\Lambda$ is a satisfactory estimator of $\sigma_A^2$ for his loss function, or equivalently for squared error loss. The estimation of $\sigma^2$ is analogous. ∥

Let us next examine the connection between Bayes estimation, sufficiency, and the likelihood function. Recall that if $(X_1,X_2,\dots,X_n)$ has density $f(x_1,\dots,x_n|\theta)$, the likelihood function is defined by $L(\theta|\mathbf{x}) = L(\theta|x_1,\dots,x_n) = f(x_1,\dots,x_n|\theta)$. If we observe $T = t$, where $T$ is sufficient for $\theta$, then
$$f(x_1,\dots,x_n|\theta) = L(\theta|\mathbf{x}) = g(t|\theta)h(\mathbf{x}),$$
where the function $h(\cdot)$ does not depend on $\theta$. For any prior distribution with density $\pi(\theta)$, the posterior distribution is then
$$\pi(\theta|\mathbf{x}) = \frac{f(x_1,\dots,x_n|\theta)\pi(\theta)}{\int f(x_1,\dots,x_n|\theta')\pi(\theta')\,d\theta'} = \frac{L(\theta|\mathbf{x})\pi(\theta)}{\int L(\theta'|\mathbf{x})\pi(\theta')\,d\theta'} = \frac{g(t|\theta)\pi(\theta)}{\int g(t|\theta')\pi(\theta')\,d\theta'} \qquad (2.15)$$
so $\pi(\theta|\mathbf{x}) = \pi(\theta|t)$; that is, $\pi(\theta|\mathbf{x})$ depends on $\mathbf{x}$ only through $t$, and the posterior distribution of $\theta$ is the same whether we compute it on the basis of $\mathbf{x}$ or of $t$. As an illustration, in Example 2.2, rather than starting with (2.1), we could use the fact that the sufficient statistic is $\bar X \sim N(\theta,\sigma^2/n)$ and, starting from
$$f(\bar x|\theta) \propto e^{-\frac{n}{2\sigma^2}(\bar x-\theta)^2}e^{-\frac{1}{2b^2}(\theta-\mu)^2},$$
arrive at the same posterior distribution for $\theta$ as before. Thus, Bayesian measures that are computed from posterior distributions are functions of the data only through the likelihood function and, hence, are functions of a minimal sufficient statistic.

Bayes estimators were defined in (1.1) with respect to a proper distribution $\Lambda$. It is useful to extend this definition to the case where $\Lambda$ is a measure satisfying
$$\int d\Lambda(\theta) = \infty, \qquad (2.16)$$
a so-called improper prior. It may then still be the case that (1.3) is finite for each $x$, so that the Bayes estimator can formally be defined.

Example 2.8 Improper prior Bayes. For the situation of Example 1.5, where $X \sim b(p,n)$, the Bayes estimator under a beta($a,b$) prior is given by (1.12). For $a = b = 0$, this estimator is $x/n$, the sample mean, but the prior density $\pi(p)$ is proportional to $p^{-1}(1-p)^{-1}$, and hence is improper. The posterior distribution in this case is
$$\frac{\binom{n}{x}p^{x-1}(1-p)^{n-x-1}}{\int_0^1 \binom{n}{x}p^{x-1}(1-p)^{n-x-1}\,dp} = \frac{\Gamma(n)}{\Gamma(x)\Gamma(n-x)}\,p^{x-1}(1-p)^{n-x-1} \qquad (2.17)$$
which is a proper posterior distribution if $1 \le x \le n-1$, with $x/n$ the posterior mean. When $x = 0$ or $x = n$, the posterior density (2.17) is no longer proper. However, for any estimator $\delta(x)$ that satisfies $\delta(0) = 0$ and $\delta(n) = 1$, the posterior expected loss (1.3) is finite and minimized at $\delta(x) = x/n$ (see Problem 2.16 and Example 2.4). Thus, even though the resulting posterior distribution is not proper for all values of $x$, $\delta(x) = x/n$ can be considered a Bayes estimator. ∥

This example suggests the following definition.

Definition 2.9 An estimator $\delta_\pi(x)$ is a generalized Bayes estimator with respect to a measure $\pi(\theta)$ (even if it is not a proper probability distribution) if the posterior expected loss $E\{L(\Theta,\delta(X))\,|\,X = x\}$ is minimized at $\delta = \delta_\pi$ for all $x$.

As we will see, generalized Bayes estimators play an important part in point estimation optimality, since they often may be optimal under both Bayesian and frequentist criteria.
There is one other useful variant of a Bayes estimator, a limit of Bayes estimators.

Definition 2.10 A nonrandomized estimator $\delta(x)$ is a limit of Bayes estimators if there exists a sequence of proper priors $\pi_\nu$ and Bayes estimators $\delta_{\pi_\nu}$ such that $\delta_{\pi_\nu}(x) \to \delta(x)$ a.e. [with respect to the density $f(x|\theta)$] as $\nu \to \infty$. (For randomized estimators, the convergence can only be in distribution; see Ferguson 1967, Section 1.8, or Brown 1986a, Appendix.)

Example 2.11 Limit of Bayes estimators. In Example 2.8, it was seen that the binomial estimator $X/n$ is Bayes with respect to an improper prior. We shall now show that it is also a limit of Bayes estimators. This follows since
$$\lim_{a\to 0,\ b\to 0}\frac{a+x}{a+b+n} = \frac{x}{n} \qquad (2.18)$$
and the beta($a,b$) prior is proper if $a > 0$, $b > 0$. ∥

From a Bayesian view, estimators that are limits of Bayes estimators are somewhat more desirable than generalized Bayes estimators. This is because, by construction, a limit of Bayes estimators must be close to a proper Bayes estimator. In contrast, a generalized Bayes estimator may not be close to any proper Bayes estimator (see Problem 2.15).

3 Single-Prior Bayes

As discussed at the end of Section 1, the prior distribution is typically selected from a flexible family of prior densities indexed by one or more parameters. Instead of denoting the prior by $\Lambda$, as was done in Section 1, we shall now denote its density by $\pi(\theta|\gamma)$, where the parameter $\gamma$ can be real- or vector-valued. (Hence, we are implicitly assuming that the prior $\pi$ is absolutely continuous with respect to a dominating measure $\mu(\theta)$, which, unless specified otherwise, is taken to be Lebesgue measure.) We can then write a Bayes model in a general form as
$$X|\theta \sim f(x|\theta), \qquad \Theta|\gamma \sim \pi(\theta|\gamma). \qquad (3.1)$$
Thus, conditionally on $\theta$, $X$ has sampling density $f(x|\theta)$, and conditionally on $\gamma$, $\Theta$ has prior density $\pi(\theta|\gamma)$. From this model, we calculate the posterior distribution $\pi(\theta|x,\gamma)$, from which all Bayesian answers would come. The exact manner in which we deal with the parameter $\gamma$ or, more generally, the prior distribution $\pi(\theta|\gamma)$, will lead us to different types of Bayes analyses.

In this section, we assume that the functional form of the prior, and the value of $\gamma$, is known, so that we have one completely specified prior. (To emphasize that point, we will sometimes write $\gamma = \gamma_0$.) Given a loss function $L(\theta,d)$, we then look for the estimator that minimizes
$$\int L(\theta,d(x))\,\pi(\theta|x,\gamma_0)\,d\theta, \qquad (3.2)$$
where $\pi(\theta|x,\gamma_0) = f(x|\theta)\pi(\theta|\gamma_0)/\int f(x|\theta)\pi(\theta|\gamma_0)\,d\theta$. The calculation of single-prior Bayes estimators has already been illustrated in Section 2. Here is another example.

Example 3.1 Scale uniform. For estimation in the model
$$X_i|\theta \sim U(0,\theta),\ i = 1,\dots,n, \qquad \frac{1}{\theta}\,\Big|\,a,b \sim \text{Gamma}(a,b),\ a,b \text{ known}, \qquad (3.3)$$
sufficiency allows us to work only with the density of $Y = \max_i X_i$, which is given by $g(y|\theta) = ny^{n-1}/\theta^n$, $0 < y < \theta$. We then calculate the single-prior Bayes estimator of $\theta$ under squared error loss. By (4.1.4), this is the posterior mean, given by
$$E(\Theta|y,a,b) = \frac{\int_y^\infty \theta\,\frac{1}{\theta^{n+a+1}}\,e^{-1/\theta b}\,d\theta}{\int_y^\infty \frac{1}{\theta^{n+a+1}}\,e^{-1/\theta b}\,d\theta}. \qquad (3.4)$$
Although the ratio of integrals is not expressible in any simple form, its calculation is not difficult. See Problem 3.1 for details. ∥
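The integrals in (3.4) are easily evaluated numerically. Substituting $t = 1/\theta$ maps both integrals over $(y,\infty)$ to integrals over $(0,1/y)$, which a simple quadrature rule handles well. The data and hyperparameters below are hypothetical (np.trapezoid is np.trapz on NumPy versions before 2.0).

```python
# Numerical evaluation of the posterior mean (3.4) in Example 3.1,
# after the substitution t = 1/theta.
import numpy as np

x = np.array([0.8, 2.3, 1.1, 3.0, 2.7])   # hypothetical U(0, theta) sample
n, a, b = len(x), 2.0, 1.0                # prior: 1/theta ~ Gamma(a, b)
y = x.max()

t = np.linspace(1e-12, 1 / y, 200_001)
num = np.trapezoid(t ** (n + a - 2) * np.exp(-t / b), t)  # numerator of (3.4)
den = np.trapezoid(t ** (n + a - 1) * np.exp(-t / b), t)  # denominator
print(num / den)      # E(theta | y, a, b); necessarily exceeds y = max x_i
```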
In general, the Bayes estimator under squared error loss is given by
$$E(\Theta|x) = \frac{\int \theta f(x|\theta)\pi(\theta)\,d\theta}{\int f(x|\theta)\pi(\theta)\,d\theta} \qquad (3.5)$$
where $X \sim f(x|\theta)$ is the observed random variable and $\Theta \sim \pi(\theta)$ is the parameter of interest. While there is a certain appeal about expression (3.5), it can be difficult to work with. It is therefore important to find conditions under which it can be simplified. Such simplification is useful for two somewhat related purposes.

(i) Implementation. If a Bayes solution is deemed appropriate, and we want to implement it, we must be able to calculate (3.5). Thus, we need reasonably straightforward, and general, methods of evaluating these integrals.

(ii) Performance. By construction, a Bayes estimator minimizes the posterior expected loss and, hence, the Bayes risk. Often, however, we are interested in its performance, and perhaps optimality, under other measures. For example, we might examine its mean squared error (or, more generally, its risk function) in looking for admissible or minimax estimators. We also might examine Bayesian measures using other priors, in an investigation of Bayesian robustness.

These latter considerations tend to lead us to look for either manageable expressions for, or accurate approximations to, the integrals in (3.5). On the other hand, the considerations in (i) are more numerical (or computational) in nature, leading us to algorithms that ease the computational burden. However, even this path can involve statistical considerations, and it often gives us insight into the performance of our estimators.

A simplification of (3.5) is possible when dealing with independent prior distributions. If $X_i \sim f(x|\theta_i)$, $i = 1,\dots,n$, are independent, and the prior is $\pi(\theta_1,\dots,\theta_n) = \prod_i \pi(\theta_i)$, then the posterior mean of $\theta_i$ satisfies
$$E(\theta_i|x_1,\dots,x_n) = E(\theta_i|x_i); \qquad (3.6)$$
that is, the Bayes estimator of $\theta_i$ depends on the data only through $x_i$.

Although the simplification provided by (3.6) may prove useful, at this level of generality it is impossible to go further. However, for exponential families, evaluation of (3.5) is sometimes possible through alternate representations of Bayes estimators. Suppose the distribution of $X = (X_1,\dots,X_n)$ is given by the multiparameter exponential family (see (1.5.2)), that is,
$$p_\eta(x) = \exp\left[\sum_{i=1}^s \eta_i T_i(x) - A(\eta)\right]h(x). \qquad (3.7)$$
Then, we can express the Bayes estimator as a function of partial derivatives with respect to $x$. The following theorem presents a general formula for the needed posterior expectation.

Theorem 3.2 If $X$ has density (3.7) and $\eta$ has prior density $\pi(\eta)$, then for $j = 1,\dots,n$,
$$E\left[\sum_{i=1}^s \eta_i\frac{\partial T_i(x)}{\partial x_j}\,\Big|\,x\right] = \frac{\partial}{\partial x_j}\log m(x) - \frac{\partial}{\partial x_j}\log h(x), \qquad (3.8)$$
where $m(x) = \int p_\eta(x)\pi(\eta)\,d\eta$ is the marginal distribution of $X$. Alternatively, the posterior expectation can be expressed in matrix form as
$$E(\mathcal{T}\eta\,|\,x) = \nabla\log m(x) - \nabla\log h(x), \qquad (3.9)$$
where $\mathcal{T} = \{\partial T_i/\partial x_j\}$.

Proof. Noting that $\partial\exp\{\sum_i\eta_iT_i\}/\partial x_j = [\sum_i\eta_i(\partial T_i/\partial x_j)]\exp\{\sum_i\eta_iT_i\}$, we can write
$$E\left[\sum_i\eta_i\frac{\partial T_i(x)}{\partial x_j}\,\Big|\,x\right] = \frac{1}{m(x)}\int\left[\sum_i\eta_i\frac{\partial T_i}{\partial x_j}\right]e^{\sum_i\eta_iT_i-A(\eta)}h(x)\pi(\eta)\,d\eta$$
$$= \frac{1}{m(x)}\int\left(\frac{\partial}{\partial x_j}e^{\sum_i\eta_iT_i}\right)e^{-A(\eta)}h(x)\pi(\eta)\,d\eta$$
$$= \frac{1}{m(x)}\int\left[\frac{\partial}{\partial x_j}\left(e^{\sum_i\eta_iT_i}h(x)\right) - \frac{\partial h(x)}{\partial x_j}\,e^{\sum_i\eta_iT_i}\right]e^{-A(\eta)}\pi(\eta)\,d\eta \qquad (3.10)$$
$$= \frac{1}{m(x)}\frac{\partial}{\partial x_j}\int e^{\sum_i\eta_iT_i-A(\eta)}h(x)\pi(\eta)\,d\eta - \frac{\partial h(x)/\partial x_j}{h(x)}\cdot\frac{1}{m(x)}\int e^{\sum_i\eta_iT_i-A(\eta)}h(x)\pi(\eta)\,d\eta$$
$$= \frac{\partial m(x)/\partial x_j}{m(x)} - \frac{\partial h(x)/\partial x_j}{h(x)},$$
where, in the third equality, we have used the product rule
$$\left(\frac{\partial}{\partial x_j}e^{\sum_i\eta_iT_i}\right)h(x) = \frac{\partial}{\partial x_j}\left(e^{\sum_i\eta_iT_i}h(x)\right) - e^{\sum_i\eta_iT_i}\frac{\partial h(x)}{\partial x_j}.$$
In the fourth equality, we have interchanged the order of integration and differentiation (justified by Theorem 1.5.8) and used the definition of $m(x)$. Finally, rewriting the ratios as derivatives of logarithms, the last expression can be written as (3.8). ✷
Although it may appear that this theorem merely shifts the calculation from one integral [the posterior expectation (3.5)] to another [the marginal $m(x)$ of (3.8)], the shift brings advantages which will be seen throughout the remainder of this section (and beyond). These advantages stem from the facts that the calculation of the derivatives of $\log m(x)$ is often feasible and that, with the estimator expressed as (3.8), risk calculations may be simplified. Theorem 3.2 simplifies further when $T_i(x) = x_i$.

Corollary 3.3 If $X = (X_1,\dots,X_p)$ has the density
$$p_\eta(x) = e^{\sum_{i=1}^p \eta_ix_i - A(\eta)}h(x) \qquad (3.11)$$
and $\eta$ has prior density $\pi(\eta)$, the Bayes estimator of $\eta$ under the loss $L(\eta,\delta) = \sum_i(\eta_i-\delta_i)^2$ is given by
$$E(\eta_i|x) = \frac{\partial}{\partial x_i}\log m(x) - \frac{\partial}{\partial x_i}\log h(x). \qquad (3.12)$$
Proof. Problem 3.3. ✷

Example 3.4 Multiple normal model. For
$$X_i|\theta_i \sim N(\theta_i,\sigma^2),\ i = 1,\dots,p,\ \text{independent}, \qquad \Theta_i \sim N(\mu,\tau^2),\ i = 1,\dots,p,\ \text{independent},$$
where $\sigma^2$, $\tau^2$, and $\mu$ are known, we have $\eta_i = \theta_i/\sigma^2$, and the Bayes estimator of $\theta_i$ is
$$E(\Theta_i|x) = \sigma^2E(\eta_i|x) = \sigma^2\left[\frac{\partial}{\partial x_i}\log m(x) - \frac{\partial}{\partial x_i}\log h(x)\right] = \frac{\tau^2}{\sigma^2+\tau^2}\,x_i + \frac{\sigma^2}{\sigma^2+\tau^2}\,\mu,$$
since
$$\frac{\partial}{\partial x_i}\log m(x) = \frac{\partial}{\partial x_i}\log e^{-\frac{1}{2(\sigma^2+\tau^2)}\sum_i(x_i-\mu)^2} = -\frac{x_i-\mu}{\sigma^2+\tau^2}$$
and
$$\frac{\partial}{\partial x_i}\log h(x) = \frac{\partial}{\partial x_i}\log e^{-\frac{1}{2}\sum x_i^2/\sigma^2} = -\frac{x_i}{\sigma^2}. \qquad ∥$$

An application of the representation (3.12) is to the comparison of the risk of the Bayes estimator with the risk of the best unbiased estimator.

Theorem 3.5 Under the assumptions of Corollary 3.3, the risk of the Bayes estimator (3.12), under the sum of squared error loss, is
$$R[\eta,E(\eta|X)] = R[\eta,-\nabla\log h(X)] + \sum_{i=1}^p E\left[2\frac{\partial^2}{\partial X_i^2}\log m(X) + \left(\frac{\partial}{\partial X_i}\log m(X)\right)^2\right]. \qquad (3.13)$$
Proof. By an application of Stein's identity (Lemma 1.5.15; see Problem 3.4), it is straightforward to establish that, for the situation of Corollary 3.3,
$$E_\eta\left[-\frac{\partial}{\partial X_i}\log h(X)\right] = -\int\left(\frac{\partial}{\partial x_i}\log h(x)\right)p_\eta(x)\,dx = \eta_i.$$
Hence, if we write $\nabla\log h(x) = \{\partial/\partial x_i\ \log h(x)\}$,
$$-E_\eta\nabla\log h(X) = \eta. \qquad (3.14)$$
Thus, $-\nabla\log h(X)$ is an unbiased estimator of $\eta$, with risk
$$R[\eta,-\nabla\log h(X)] = E_\eta\sum_{i=1}^p\left(\eta_i + \frac{\partial}{\partial X_i}\log h(X)\right)^2 = E_\eta|\eta + \nabla\log h(X)|^2, \qquad (3.15)$$
which can also be further evaluated using Stein's identity (see Problem 3.4). Returning to (3.12), the risk of the Bayes estimator is given by
$$R[\eta,E(\eta|X)] = E_\eta\sum_{i=1}^p[\eta_i - E(\eta_i|X)]^2 = E_\eta\sum_{i=1}^p\left[\eta_i - \left(\frac{\partial}{\partial X_i}\log m(X) - \frac{\partial}{\partial X_i}\log h(X)\right)\right]^2$$
$$= R[\eta,-\nabla\log h(X)] - 2\sum_{i=1}^p E_\eta\left[\left(\eta_i + \frac{\partial}{\partial X_i}\log h(X)\right)\frac{\partial}{\partial X_i}\log m(X)\right] + \sum_{i=1}^p E_\eta\left(\frac{\partial}{\partial X_i}\log m(X)\right)^2. \qquad (3.16)$$
An application of Stein's identity to the middle term leads to
$$E_\eta\left[\left(\eta_i + \frac{\partial}{\partial X_i}\log h(X)\right)\frac{\partial}{\partial X_i}\log m(X)\right] = -E_\eta\left[\frac{\partial^2}{\partial X_i^2}\log m(X)\right],$$
which establishes (3.13). ✷

From (3.13), we see that if the second term is negative, then the Bayes estimator of $\eta$ will have smaller risk than the unbiased estimator $-\nabla\log h(X)$ (which is best unbiased if the family is complete). We will exploit the representation (3.13) in Chapter 5, but now just give a simple example.

Example 3.6 Continuation of Example 3.4. To evaluate the risk of the Bayes estimator, we also calculate
$$\frac{\partial^2}{\partial x_i^2}\log m(x) = -\frac{1}{\sigma^2+\tau^2},$$
and hence, from (3.13),
$$R[\eta,E(\eta|X)] = R[\eta,-\nabla\log h(X)] - \frac{2p}{\sigma^2+\tau^2} + \sum_i E_\eta\left(\frac{X_i-\mu}{\sigma^2+\tau^2}\right)^2. \qquad (3.17)$$
The best unbiased estimator of $\eta_i = \theta_i/\sigma^2$ is $-\frac{\partial}{\partial X_i}\log h(X) = X_i/\sigma^2$, with risk $R(\eta,-\nabla\log h(X)) = p/\sigma^2$. If $\theta_i = \mu$ for each $i$, then the Bayes estimator has smaller risk, whereas its risk grows without bound as $|\theta_i-\mu| \to \infty$ for any $i$ (Problem 3.6). ∥
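The representation (3.12) can be verified numerically by differentiating $\log m$ with a finite difference, since the marginal in Example 3.4 is available in closed form; the constants below are hypothetical.

```python
# Numerical check of E(eta_i|x) = d/dx_i log m(x) - d/dx_i log h(x)
# in Example 3.4 (normalizing constants are omitted: they do not depend on x).
import numpy as np

sigma2, tau2, mu = 1.0, 2.0, 0.5
x = np.array([0.3, -1.2, 2.0])

log_m = lambda x: np.sum(-0.5 * (x - mu) ** 2 / (sigma2 + tau2))  # marginal
log_h = lambda x: np.sum(-0.5 * x ** 2 / sigma2)                  # carrier h

i, eps = 0, 1e-6
e = np.zeros_like(x); e[i] = eps
d_m = (log_m(x + e) - log_m(x - e)) / (2 * eps)
d_h = (log_h(x + e) - log_h(x - e)) / (2 * eps)

theta_post = sigma2 * (d_m - d_h)                     # sigma^2 * E(eta_i|x)
direct = tau2 / (sigma2 + tau2) * x[i] + sigma2 / (sigma2 + tau2) * mu
print(theta_post, direct)     # agree up to finite-difference error
```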
We close this section by noting that in exponential families there is a general expression for the conjugate prior distribution, and that use of this conjugate prior results in a simple expression for the posterior mean. For the density
$$p_\eta(x) = e^{\eta x - A(\eta)}h(x), \qquad -\infty < x < \infty, \qquad (3.18)$$
the conjugate prior family is
$$\pi(\eta|k,\mu) = c(k,\mu)\,e^{k\eta\mu - kA(\eta)}, \qquad (3.19)$$
where $\mu$ can be thought of as a prior mean and $k$ is inversely related to a prior variance (see Problem 3.9). If $X_1,\dots,X_n$ is a sample from $p_\eta(x)$ of (3.18), the posterior distribution resulting from (3.19) is
$$\pi(\eta|\mathbf{x},k,\mu) \propto \left[e^{n\eta\bar x - nA(\eta)}\right]\left[e^{k\eta\mu - kA(\eta)}\right] = e^{\eta(n\bar x+k\mu) - (n+k)A(\eta)} \qquad (3.20)$$
which is of the same form as (3.19) with $\mu' = (n\bar x+k\mu)/(n+k)$ and $k' = n+k$. Thus, using Problem 3.9,
$$E(A'(\eta)|\mathbf{x},k,\mu) = \frac{n\bar x+k\mu}{n+k}. \qquad (3.21)$$
As $E(X|\eta) = A'(\eta)$, we see that the posterior mean is a convex combination of the sample and prior means.

Example 3.7 Conjugate gamma. Let $X_1,\dots,X_n$ be iid as Gamma($a,b$), where $a$ is known. This is in the exponential family form with $\eta = -1/b$ and $A(\eta) = -a\log(-\eta)$. If we use the conjugate prior distribution (3.19) for $\eta$, then
$$E(A'(\eta)|\mathbf{x}) = E\left(-\frac{a}{\eta}\,\Big|\,\bar x\right) = \frac{n\bar x+k\mu}{n+k}.$$
The resulting Bayes estimator of $b$ under squared error loss is
$$E(b|\mathbf{x}) = \frac{1}{a}\left(\frac{n\bar x+k\mu}{n+k}\right). \qquad (3.22)$$
This is the Bayes estimator based on an inverted gamma prior for $b$ (see Problem 3.10). ∥

Using the conjugate prior (3.19) will not generally lead to simplifications in (3.9) and is, therefore, not helpful in obtaining expressions for estimators of the natural parameter. However, there is often more interest in estimating the mean parameter than the natural parameter.
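For a numerical check of Example 3.7, note that with $t = -\eta = 1/b$, the conjugate prior (3.19) makes $t$ a gamma variable, so posterior draws of $b = 1/t$ are easy to generate. The data and hyperparameters below are hypothetical, and the gamma parametrization used is NumPy's (shape, scale).

```python
# Check of (3.22): under the conjugate prior, t = 1/b has a
# Gamma(k'a + 1, 1/(k' mu')) posterior with k' = n + k and
# mu' = (n xbar + k mu)/(n + k), so that E(b|x) = mu'/a.
import numpy as np

rng = np.random.default_rng(3)
a, b_true = 2.5, 1.8
x = rng.gamma(a, b_true, size=15)      # Gamma(a, b) data, a known
n, k, mu = len(x), 4.0, 2.0            # prior hyperparameters

k_post = n + k
mu_post = (n * x.mean() + k * mu) / k_post
t = rng.gamma(k_post * a + 1, 1 / (k_post * mu_post), size=1_000_000)
print(np.mean(1 / t), mu_post / a)     # Monte Carlo E(b|x) vs. (3.22)
```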
4 Equivariant Bayes

Definition 3.2.4 specified what is meant by an estimation problem being invariant under a transformation $g$ of the sample space and the induced transformations $\bar g$ and $g^*$ of the parameter and decision spaces, respectively. In such a situation, when considering Bayes estimation, it is natural to select a prior distribution which is also invariant. Recall that a group family is a family of distributions which is invariant under a group $G$ of transformations for which $\bar G$ is transitive over the parameter space. We shall say that a prior distribution $\Lambda$ for $\theta$ is invariant with respect to $\bar G$ if the distribution of $\bar g\theta$ is also $\Lambda$ for all $\bar g \in \bar G$; that is, if for all $\bar g \in \bar G$ and all measurable $B$,
$$P(\bar g\theta \in B) = P(\theta \in B) \qquad (4.1)$$
or, equivalently,
$$\Lambda(\bar g^{-1}B) = \Lambda(B). \qquad (4.2)$$
Suppose now that such a $\Lambda$ exists and that the Bayes solution $\delta_\Lambda$ with respect to it is unique. By (4.1), any $\delta$ then satisfies
$$\int R(\theta,\delta)\,d\Lambda(\theta) = \int R(\bar g\theta,\delta)\,d\Lambda(\theta). \qquad (4.3)$$
Now
$$R(\bar g\theta,\delta) = E_{\bar g\theta}\{L[\bar g\theta,\delta(X)]\} = E_\theta\{L[\bar g\theta,\delta(gX)]\} = E_\theta\{L[\theta,g^{*-1}\delta(gX)]\}. \qquad (4.4)$$
Here, the second equality follows from (3.2.4) (invariance of the model) and the third from (3.2.14) (invariance of the loss). On substituting this last expression into the right side of (4.3), we see that if $\delta(x)$ minimizes (4.3), so does the estimator $g^{*-1}\delta(gx)$. Hence, if the Bayes estimator is unique, the two must coincide. By (3.2.17), this appears to prove $\delta$ to be equivariant. However, at this point, a technical difficulty arises. Uniqueness can be asserted only up to null sets, that is, sets $N$ with $P_\theta(N) = 0$ for all $\theta$. Moreover, the set $N$ may depend on $g$. An estimator $\delta$ satisfying
$$\delta(x) = g^{*-1}\delta(gx) \quad\text{for all } x \notin N_g \qquad (4.5)$$
where $P_\theta(N_g) = 0$ for all $\theta$, is said to be almost equivariant. We have therefore proved the following result.

Theorem 4.1 Suppose that an estimation problem is invariant under a group $\bar G$ and that there exists a distribution $\Lambda$ over $\Theta$ such that (4.2) holds for all (measurable) subsets $B$ of $\Theta$ and all $\bar g \in \bar G$. Then, if the Bayes estimator $\delta_\Lambda$ is unique, it is almost equivariant.

Example 4.2 Equivariant binomial. Suppose we are interested in estimating $p$ under squared error loss, where $X \sim$ binomial($n,p$). A common group of transformations which leaves the problem invariant is
$$gX = n - X, \qquad \bar gp = 1 - p.$$
For a prior $\Lambda$ to satisfy (4.1), we must have
$$P(\bar gp \le t) = P(p \le t) \quad\text{for all } t. \qquad (4.6)$$
If $\Lambda$ has density $\gamma(p)$, then (4.1) implies
$$\int_0^t \gamma(p)\,dp = \int_0^t \gamma(1-p)\,dp \quad\text{for all } t, \qquad (4.7)$$
which, upon differentiating, requires $\gamma(t) = \gamma(1-t)$ for all $t$ and, hence, that $\gamma(t)$ must be symmetric about $t = 1/2$. It then follows that, for example, a Bayes rule under a symmetric beta prior is equivariant. See Problem 4.1. ∥

The existence of a proper invariant prior distribution is rather special. More often, the invariant measure for $\theta$ will be improper (if it exists at all), and the situation is then more complicated. In particular, (i) the integral (4.3) may not be finite, so that the argument leading to Theorem 4.1 is no longer valid, and (ii) it becomes necessary to distinguish between left- and right-invariant measures. These complications require a level of group-theoretic sophistication that we do not assume. However, for the case of location-scale groups, we can develop the theory in sufficient detail. [For a more comprehensive treatment of invariant Haar measures, see Berger 1985 (Section 6.6), Robert 1994a (Section 7.4), or Schervish 1995 (Section 6.2). A general development of the theory of group invariance, and its application to statistics, is given by Eaton (1989) and Wijsman (1990).]

To discuss invariant prior distributions, or more generally invariant measures, over the parameter space, we begin by considering invariant measures over groups. (See Section 1.4 for some of the basics.) Let $G$ be a group and $\mathcal{L}$ a $\sigma$-field of measurable subsets of $G$, and for a set $B$ in $\mathcal{L}$, let $Bh = \{gh : g \in B\}$ and $gB = \{gh : h \in B\}$. Then, a measure $\Lambda$ over $(G,\mathcal{L})$ is a right-invariant Haar measure if
$$\Lambda(Bh) = \Lambda(B) \quad\text{for all } B \in \mathcal{L},\ h \in G \qquad (4.8)$$
and a left-invariant Haar measure if
$$\Lambda(gB) = \Lambda(B) \quad\text{for all } B \in \mathcal{L},\ g \in G. \qquad (4.9)$$
In our examples, measures satisfying (4.8) or (4.9) exist, have densities, and are unique up to multiplication by a positive constant. We will now look at some location-scale examples.

Example 4.3 Location group. For $x = (x_1,\dots,x_n)$ in a Euclidean sample space, consider the transformations
$$gx = (x_1+g,\dots,x_n+g), \qquad -\infty < g < \infty, \qquad (4.10)$$
with the composition (group operation)
$$g \circ h = g + h, \qquad (4.11)$$
which was already discussed in Sections 1 and 2. Here, $G$ is the set of real numbers $g$, and for $\mathcal{L}$, we can take the Borel sets. The sets $Bh$ and $gB$ are
$$Bh = \{g+h : g \in B\} \quad\text{and}\quad gB = \{g+h : h \in B\}$$
and satisfy
$$Bg = gB \qquad (4.12)$$
since $g \circ h = h \circ g$. When (4.12) holds, the group operation is said to be commutative; groups with this property are called Abelian. For an Abelian group, if a measure is right invariant, it is also left invariant and vice versa, and it will then simply be called invariant. In the present case, Lebesgue measure is invariant, since it assigns the same measure to a set $B$ on the line as to the set obtained by translating $B$ by any fixed amount $g$ or $h$ to the right or left. (Abelian groups are a special case of unimodular groups, the type of group for which the left- and right-invariant measures agree;
see Wijsman 1990, Chapter 7, for details.) ∥

There is a difference between transformations acting on parameters and on elements of a group. In the first case, we know what we mean by $\bar g\theta$, but $\theta\bar g$ makes no sense. On the other hand, for group elements, multiplication is possible both on the left and on the right. For this reason, a prior distribution $\Lambda$ (a measure over the parameter space) satisfying (4.1), which only requires left-invariance, is said to be invariant. However, for measures over a group, one must distinguish between left- and right-invariance and call such measures invariant only if they are both left and right invariant.

Example 4.4 Scale group. For $x = (x_1,\dots,x_n)$, consider the transformations
$$gx = (gx_1,\dots,gx_n), \qquad 0 < g < \infty,$$
that is, multiplication of each coordinate by the same positive number $g$, with the composition $g \circ h = g \times h$. The sets $Bh$ and $gB$ are obtained by multiplying each element of $B$ by $h$ on the right and by $g$ on the left, respectively. Since $gh = hg$, the group is Abelian, and the concepts of left- and right-invariance coincide. An invariant measure is given by the density
$$\frac{1}{g}\,dg. \qquad (4.13)$$
To see this, note that
$$\Lambda(B) = \int_B \frac{1}{g}\,dg = \int_{Bh}\frac{h}{g'}\cdot\frac{dg'}{h} = \int_{Bh}\frac{1}{g'}\,dg' = \Lambda(Bh),$$
where the second equality follows by making the change of variables $g' = gh$. ∥

Example 4.5 Location-scale group. As a last and somewhat more complicated example, consider the group of transformations
$$gx = (ax_1+b,\dots,ax_n+b), \qquad 0 < a < \infty,\ -\infty < b < \infty.$$
If $g = (a,b)$ and $h = (c,d)$, we have $hx = cx+d$ and $ghx = a(cx+d)+b = acx+(ad+b)$. So, the composition rule is
$$(a,b) \circ (c,d) = (ac,\ ad+b). \qquad (4.14)$$
Since $(c,d) \circ (a,b) = (ac,\ cb+d)$, it is seen that the group operation is not commutative, and we shall therefore have to distinguish between left and right Haar measures. We shall now show that these are given, respectively, by the densities
$$\frac{1}{a^2}\,da\,db \quad\text{and}\quad \frac{1}{a}\,da\,db. \qquad (4.15)$$
To show left-invariance of the first of these, note that the transformation $g(h) = g \circ h$ takes $h = (c,d)$ into
$$(c' = ac,\ d' = ad+b), \qquad (4.16)$$
so that
$$gB = \{(ac,\ ad+b) : (c,d) \in B\}. \qquad (4.17)$$
If $\Lambda$ has density $dc\,dd/c^2$, then, changing variables according to (4.16),
$$\Lambda(B) = \int_B \frac{1}{c^2}\,dc\,dd = \int_{gB}\frac{a^2}{(c')^2}\left|\frac{\partial(c,d)}{\partial(c',d')}\right|dc'\,dd', \qquad (4.18)$$
where
$$\frac{\partial(c',d')}{\partial(c,d)} = \begin{vmatrix} a & 0 \\ 0 & a \end{vmatrix} = a^2$$
is the Jacobian of the transformation (4.16). The right side of (4.18) therefore reduces to
$$\int_{gB}\frac{1}{(c')^2}\,dc'\,dd' = \Lambda(gB),$$
and this proves (4.9).

To prove the right-invariance of the density $da\,db/a$, consider the transformation $h(g) = g \circ h$, taking $g = (a,b)$ into
$$(a' = ac,\ b' = ad+b). \qquad (4.19)$$
We then have
$$\Lambda(B) = \int_B \frac{1}{a}\,da\,db = \int_{Bh}\frac{c}{a'}\left|\frac{\partial(a,b)}{\partial(a',b')}\right|da'\,db'. \qquad (4.20)$$
The Jacobian of the transformation (4.19) is
$$\frac{\partial(a',b')}{\partial(a,b)} = \begin{vmatrix} c & d \\ 0 & 1 \end{vmatrix} = c,$$
which shows that the right side of (4.20) is equal to $\Lambda(Bh)$. ∥

We introduced invariant measures over groups as a tool for defining measures over the parameter space that, in some sense, share these invariance properties. For this purpose, consider a measure $\Lambda$ over a group $\bar G$ that is transitive over $\Theta$. Then, $\Lambda$ induces a measure $\Lambda'$ by the relation
$$\Lambda'(\omega) = \Lambda\{\bar g \in \bar G : \bar g\theta_0 \in \omega\}, \qquad (4.21)$$
where $\omega$ is any subset of $\Theta$ and $\theta_0$ is any given point of $\Theta$. A disadvantage of this definition is the fact that the resulting measure $\Lambda'$ will typically depend on $\theta_0$, so that it is not uniquely defined by this construction. However, this difficulty disappears when $\Lambda$ is right invariant.
Lemma 4.6 If $\bar G$ is transitive over $\Omega$, and $\nu$ is a right-invariant measure over $\bar G$, then $\nu'$ defined by (4.21) is independent of $\theta_0$.

Proof. If $\theta_1$ is any other point of $\Omega$, there exists (by the assumption of transitivity) an element $\bar h$ of $\bar G$ such that $\theta_1 = \bar h\theta_0$. Let

$$\nu''(\omega) = \nu\{\bar g \in \bar G : \bar g\theta_1 \in \omega\}$$

and let $B$ be the subset of $\bar G$ given by $B = \{\bar g : \bar g\theta_0 \in \omega\}$. Then,

$$\{\bar g : \bar g\bar h \in B\} = \{\bar g\bar h^{-1} : \bar g \in B\} = B\bar h^{-1}$$

and

$$\nu''(\omega) = \nu\{\bar g : \bar g\bar h\theta_0 \in \omega\} = \nu\{\bar g\bar h^{-1} : \bar g\theta_0 \in \omega\} = \nu(B\bar h^{-1}) = \nu(B) = \nu'(\omega),$$

where the next-to-last equality follows from the fact that $\nu$ is right invariant. ✷

Example 4.7 Continuation of Example 4.3. The group $G$ of Example 4.3, given by (4.10), (4.11), and (1.2) of Section 1, induces on $\Omega = \{\eta : -\infty < \eta < \infty\}$ the transformation $\bar g\eta = \eta + g$ and, as we saw in Example 4.3, Lebesgue measure $\nu$ is both right and left invariant over $\bar G$. For any point $\eta_0$ and any subset $\omega$ of $\Omega$, we find

$$\nu'(\omega) = \nu\{\bar g \in \bar G : \eta_0 + g \in \omega\} = \nu\{\bar g \in \bar G : g \in \omega - \eta_0\},$$

where $\omega - \eta_0$ denotes the set $\omega$ translated by an amount $\eta_0$. Since the Lebesgue measure of $\omega - \eta_0$ is the same as that of $\omega$, it follows that $\nu'$ is Lebesgue measure over $\Omega$, regardless of the choice of $\eta_0$.

Let us now determine the Bayes estimator for this prior measure for $\eta$ when the loss function is squared error. By (1.6), the Bayes estimator of $\eta$ is then (Problem 4.2)

$$\delta(x) = \frac{\int u\, f(x_1 - u, \ldots, x_n - u)\,du}{\int f(x_1 - u, \ldots, x_n - u)\,du}. \tag{4.22}$$

This is the Pitman estimator (3.1.28) of Chapter 3, which in Theorem 1.20 of that chapter was seen to be the MRE estimator of $\eta$. ∥

Example 4.8 Continuation of Example 4.4. The scale group $G$, given in Example 4.4 and by (3.2) of Section 3.3, induces on $\Omega = \{\tau : 0 < \tau < \infty\}$ the transformations

$$\bar g\tau = g \times \tau \tag{4.23}$$

and, as we saw in Example 4.4, the measure $\nu$ with density $\frac{1}{g}\,dg$ is both left and right invariant over $\bar G$. For any point $\tau_0$ and any subset $\omega$ of $\Omega$, we find

$$\nu'(\omega) = \nu\{\bar g \in \bar G : g\tau_0 \in \omega\} = \nu\{\bar g \in \bar G : g \in \omega/\tau_0\},$$

where $\omega/\tau_0$ denotes the set of values in $\omega$, each divided by $\tau_0$. The change of variables $g' = \tau_0 g$ shows that

$$\int_{\omega/\tau_0} \frac{1}{g}\,dg = \int_\omega \frac{\tau_0}{g'}\,\frac{dg}{dg'}\,dg' = \int_\omega \frac{1}{g'}\,dg',$$

and, hence, that

$$\nu'(\omega) = \int_\omega \frac{d\tau}{\tau}.$$

Let us now determine the Bayes estimator of $\tau$ for this prior distribution when the loss function is

$$L(\tau, d) = \frac{(d - \tau)^2}{\tau^2}.$$

This turns out to be the Pitman estimator (3.19) of Section 3.3 with $r = 1$,

$$\delta(x) = \frac{\int_0^\infty v^n f(vx_1, \ldots, vx_n)\,dv}{\int_0^\infty v^{n+1} f(vx_1, \ldots, vx_n)\,dv}, \tag{4.24}$$

which is also MRE (Problems 3.3.17 and 4.3). ∥
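Both (4.22) and (4.24) are one-dimensional integrals and can be evaluated by quadrature. The following is a minimal sketch, assuming standard normal errors in the location case (where the Pitman estimator reduces to $\bar x$) and standard exponential errors in the scale case (where (4.24) reduces to $\sum x_i/(n + 1)$); the data are illustrative:

```python
import numpy as np
from scipy.integrate import quad

x = np.array([1.3, 0.4, 2.1, 0.9, 1.7])  # illustrative data
n = len(x)

# Pitman location estimator (4.22) for N(0,1) errors; it should equal the mean.
f = lambda u: np.exp(-0.5 * np.sum((x - u) ** 2))
num, _ = quad(lambda u: u * f(u), -np.inf, np.inf)
den, _ = quad(f, -np.inf, np.inf)
print(num / den, x.mean())  # the two values agree

# Pitman scale estimator (4.24) for standard exponential errors f(z) = exp(-z);
# here the closed form is sum(x) / (n + 1).
g = lambda v: np.exp(-v * np.sum(x))
num, _ = quad(lambda v: v ** n * g(v), 0, np.inf)
den, _ = quad(lambda v: v ** (n + 1) * g(v), 0, np.inf)
print(num / den, x.sum() / (n + 1))  # again the two values agree
```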
Example 4.9 Continuation of Example 4.5. The location-scale family of distributions (3.23) of Section 3.3 remains invariant under the transformations $gx : x_i' = ax_i + b$, $0 < a < \infty$, $-\infty < b < \infty$, which induce in the parameter space $\Omega = \{(\eta, \tau) : -\infty < \eta < \infty,\ 0 < \tau\}$ the transformations

$$\eta' = a\eta + b, \quad \tau' = a\tau. \tag{4.25}$$

It was seen in Example 4.5 that the left and right Haar measures $\nu_1$ and $\nu_2$ over the group $G = \{g = (a, b) : 0 < a < \infty,\ -\infty < b < \infty\}$ with group operation (4.14) are given by the densities

$$\frac{1}{a^2}\,da\,db \quad\text{and}\quad \frac{1}{a}\,da\,db, \tag{4.26}$$

respectively. Let us now determine the corresponding measures over $\Omega$ induced via (4.21). If we describe the elements $\bar g$ of this group by $(a, b)$, then for any measure $\nu$ over $\bar G$ and any parameter point $(\eta_0, \tau_0)$, the induced measure $\nu'$ over $\Omega$ is given by

$$\nu'(\omega) = \nu\{(a, b) : (a\eta_0 + b,\ a\tau_0) \in \omega\}. \tag{4.27}$$

Since a measure over the Borel sets is determined by its values over open rectangles, it is enough to calculate $\nu'(\omega)$ for

$$\omega : \eta_1 < \eta < \eta_2,\quad \tau_1 < \tau < \tau_2. \tag{4.28}$$

If, furthermore, we assume that $\nu$ has a density $\lambda$, then

$$\nu'(\omega) = \nu\left\{(a, b) : \eta_1 - a\eta_0 < b < \eta_2 - a\eta_0,\ \frac{\tau_1}{\tau_0} < a < \frac{\tau_2}{\tau_0}\right\} = \int_{\tau_1/\tau_0}^{\tau_2/\tau_0}\left[\int_{\eta_1 - a\eta_0}^{\eta_2 - a\eta_0} \lambda(a, b)\,db\right] da.$$

In this integral, let us now change variables from $(a, b)$ to $a' = a\tau_0$, $b' = b + a\eta_0$. The Jacobian of the transformation is

$$\left|\frac{\partial(a', b')}{\partial(a, b)}\right| = \begin{vmatrix} \tau_0 & 0 \\ \eta_0 & 1 \end{vmatrix} = \tau_0,$$

and we therefore find

$$\nu'(\omega) = \int_{\tau_1}^{\tau_2}\int_{\eta_1}^{\eta_2} \lambda'(a', b')\,da'\,db', \quad\text{with } a = \frac{a'}{\tau_0},\ b = b' - \frac{a'}{\tau_0}\eta_0.$$

We can therefore take

$$\lambda'(a', b') = \frac{1}{\tau_0}\,\lambda\!\left(\frac{a'}{\tau_0},\ b' - \frac{a'}{\tau_0}\eta_0\right).$$

Now consider the following two cases:

(i) For $\lambda(a, b) = \dfrac{1}{a}$,

$$\lambda'(a', b') = \frac{1}{\tau_0}\cdot\frac{\tau_0}{a'} = \frac{1}{a'}; \tag{4.29}$$

(ii) for $\lambda(a, b) = \dfrac{1}{a^2}$,

$$\lambda'(a', b') = \frac{1}{\tau_0}\cdot\frac{\tau_0^2}{(a')^2} = \frac{\tau_0}{(a')^2}, \tag{4.30}$$

which is proportional to $1/(a')^2$. [That the right-invariant density (4.29) induces a measure free of $(\eta_0, \tau_0)$, while the left-invariant density (4.30) does so only up to a constant factor, is in accordance with Lemma 4.6; since Haar measures are determined only up to a positive constant, we may take the induced densities to be $1/a'$ and $1/(a')^2$, respectively.]

The Bayes estimators of $\eta$ and $\tau$ corresponding to (4.29) are (Problem 4.4)

$$\hat\eta = \frac{\int_{-\infty}^\infty \int_0^\infty \frac{u}{v^{n+3}}\, f\!\left(\frac{x_1 - u}{v}, \ldots, \frac{x_n - u}{v}\right) dv\,du}{\int_{-\infty}^\infty \int_0^\infty \frac{1}{v^{n+3}}\, f\!\left(\frac{x_1 - u}{v}, \ldots, \frac{x_n - u}{v}\right) dv\,du} \tag{4.31}$$

and

$$\hat\tau = \frac{\int_{-\infty}^\infty \int_0^\infty \frac{1}{v^{n+2}}\, f\!\left(\frac{x_1 - u}{v}, \ldots, \frac{x_n - u}{v}\right) dv\,du}{\int_{-\infty}^\infty \int_0^\infty \frac{1}{v^{n+3}}\, f\!\left(\frac{x_1 - u}{v}, \ldots, \frac{x_n - u}{v}\right) dv\,du}. \tag{4.32}$$

These turn out to be the MRE estimators of $\eta$ and $\tau$ under the loss functions $(d - \eta)^2/\tau^2$ and $(d - \tau)^2/\tau^2$, respectively. ∥

The treatment of these three examples extends to a number of other important cases (see, for example, Problems 4.6 and 4.7) and suggests that the Bayes estimator with respect to the measure induced by the right Haar measure over $\bar G$ is equivariant and is, in fact, MRE. For conditions under which these conclusions are valid, see Berger 1985 (Section 6.6), Robert 1994a (Section 7.4), or Schervish 1995 (Section 6.2); a special case is treated in Section 5.4. It is also worth noting that if a Haar measure $\nu$ over a group $G$ is finite, that is, $\nu(G) < \infty$, then the left and right Haar measures coincide.

At the beginning of the section, we defined invariance of a prior distribution by (4.1) and (4.2), and the same equations define invariance of a measure over $\Omega$ even if it is improper. We shall now consider whether the measure $\nu'$ induced over $\Omega$ by a left- or right-invariant Haar measure is invariant in this sense.

Example 4.10 Invariance of induced measures. We look at the location-scale groups and consider invariance of the induced measures.

(i) Location group. We saw in Example 4.7 that the left and right Haar measures $\nu$ coincide in this case and that $\nu'$ is Lebesgue measure, which clearly satisfies (4.2).

(ii) Scale group. Again, the left and right Haar measures $\nu$ coincide, and by Example 4.8, $\nu'(\omega) = \int_\omega d\tau/\tau$. Since $\bar g^{-1}(\omega) = \omega/\bar g$, as in Example 4.4,

$$\nu'[\bar g^{-1}(\omega)] = \int_{\omega/\bar g} \frac{d\tau}{\tau} = \int_\omega \frac{d\tau'}{\tau'} = \nu'(\omega)$$

(by the change of variables $\tau' = \bar g\tau$), so that $\nu'$ is invariant.

(iii) Location-scale group. Here, the densities induced by the left- and right-invariant Haar measures are given by (4.30) and (4.29), respectively. Calculations similar to those of Example 4.9 show that the former is invariant but the latter is not. ∥

The general situation is described in the following result.

Theorem 4.11 Under the assumptions of Theorem 4.1, the measure $\nu'$ over $\Omega$ induced by a measure $\nu$ over $\bar G$ is invariant provided $\nu$ is left invariant.

Proof. For any $\omega$ and $\theta_0 \in \Omega$, let $B = \{\bar h \in \bar G : \bar h\theta_0 \in \omega\}$, so that $\nu'(\omega) = \nu(B)$. Then, $\bar gB = \{\bar g\bar h : \bar h\theta_0 \in \omega\}$ and

$$\nu'(\bar g\omega) = \nu(\bar h : \bar h\theta_0 \in \bar g\omega) = \nu(\bar h : \bar g^{-1}\bar h\theta_0 \in \omega) = \nu(\bar g\bar h : \bar h\theta_0 \in \omega) = \nu(\bar gB).$$

Thus, $\nu'(\bar g\omega) = \nu'(\omega)$ if and only if $\nu(\bar gB) = \nu(B)$, and it follows that $\nu'$ is invariant if and only if $\nu$ is left invariant. ✷
Note that Theorem 4.11 does not contradict the remark made after Example 4.9 to the effect that the Bayes estimator under the prior measure induced by the right-invariant Haar measure is equivariant. A Bayes estimator can be equivariant under a prior measure $\Lambda$ even if $\Lambda$ is not invariant (see Problem 4.8).

When there are no groups leaving the given family of distributions invariant, no Haar measure is available to serve as a noninformative prior. In such situations, transformations that utilize some (perhaps arbitrary) structure of the parameter space may sometimes be used to deduce a form for a "noninformative" prior (Villegas 1990). A discussion of these approaches is given by Berger 1985 (Section 3.3); see also Bernardo and Smith 1994 (Section 5.6.2).

5 Hierarchical Bayes

In a hierarchical Bayes model, rather than specifying the prior distribution as a single function, we specify it in a hierarchy. Thus, we place another level on the model (3.1) and write

$$X|\theta \sim f(x|\theta),\qquad \Theta|\gamma \sim \pi(\theta|\gamma),\qquad \Gamma \sim \psi(\gamma), \tag{5.1}$$

where we assume that $\psi(\cdot)$ is known and does not depend on any other unknown hyperparameters (as the parameters of a prior are sometimes called). Note that we could continue this hierarchical modeling and add more stages to the model, but this is not often done in practice.

The class of models (5.1) appears to be more general than the class (3.1) since in (3.1), $\gamma$ has a fixed value, but in (5.1), it is permitted to have an arbitrary probability distribution. However, this appearance is deceptive. Since $\pi(\theta|\gamma)$ in (3.1) can be any fixed distribution, we can, in particular, take for it $\pi(\theta) = \int \pi(\theta|\gamma)\psi(\gamma)\,d\gamma$, which reduces the hierarchical model (5.1) to the single-prior model (3.1). However, there is a conceptual and practical advantage to the hierarchical model, in that it allows us to model relatively complicated situations through a series of simpler steps; that is, both $\pi(\theta|\gamma)$ and $\psi(\gamma)$ may be of a simple form (even conjugate), while $\pi(\theta)$ may be more complex. Moreover, there is often a computational advantage to hierarchical modeling. We will illustrate both of these points in this section.

It is also interesting to note that this process can be reversed. Starting from the single-prior model (3.1), we can look for a decomposition of the prior $\pi(\theta)$ of the form $\pi(\theta) = \int \pi(\theta|\gamma)\psi(\gamma)\,d\gamma$ and thus create the hierarchy (5.1). Such modeling, known as hidden Markov models, hidden mixtures, or deconvolution, has proved very useful (Churchill 1989; Robert 1994a, Section 9.3; Robert and Casella 1998).

Given a loss function $L(\theta, d)$, we would then determine the estimator that minimizes

$$\int L(\theta, d(x))\,\pi(\theta|x)\,d\theta, \tag{5.2}$$

where $\pi(\theta|x) = \int f(x|\theta)\pi(\theta|\gamma)\psi(\gamma)\,d\gamma \big/ \int\!\!\int f(x|\theta)\pi(\theta|\gamma)\psi(\gamma)\,d\theta\,d\gamma$. Note also that

$$\pi(\theta|x) = \int \pi(\theta|x, \gamma)\,\pi(\gamma|x)\,d\gamma, \tag{5.3}$$

where $\pi(\gamma|x)$ is the posterior distribution of $\Gamma$, unconditional on $\theta$. We may then write (5.2) as

$$\int L(\theta, d(x))\,\pi(\theta|x)\,d\theta = \int\left[\int L(\theta, d(x))\,\pi(\theta|x, \gamma)\,d\theta\right]\pi(\gamma|x)\,d\gamma, \tag{5.4}$$

which shows that the hierarchical Bayes estimator can be thought of as a mixture of single-prior Bayes estimators. (See Problems 5.1 and 5.2.)

Hierarchical models allow easier modeling of prior distributions with "flatter" tails, which can lead to Bayes estimators with more desirable frequentist properties. This latter end is often achieved by taking $\psi(\cdot)$ to be improper (see, for example, Berger and Robert 1990, or Berger and Strawderman 1996).
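The reduction of (5.1) to a single-prior model can be seen by simulation. A minimal sketch, assuming the illustrative hierarchy $\theta|\gamma \sim N(\gamma, 1)$, $\gamma \sim N(0, 1)$, which collapses to the single prior $\pi(\theta) = N(0, 2)$:

```python
import numpy as np
rng = np.random.default_rng(0)

# theta | gamma ~ N(gamma, 1) and gamma ~ N(0, 1) collapse to theta ~ N(0, 2):
# draws made through the hierarchy match the moments of the mixture prior.
gamma = rng.normal(0.0, 1.0, size=100_000)
theta = rng.normal(gamma, 1.0)
print(theta.mean(), theta.var())  # approximately 0 and 2, the moments of N(0, 2)
```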
Example 5.1 Conjugate normal hierarchy. Starting with the normal distribution and modeling each stage with a conjugate prior yields the hierarchy

$$X_i|\theta \sim N(\theta, \sigma^2),\ \sigma^2 \text{ known},\ i = 1, \ldots, n,\qquad \Theta|\tau^2 \sim N(0, \tau^2),\qquad \frac{1}{\tau^2} \sim \text{Gamma}(a, b),\ a, b \text{ known}. \tag{5.5}$$

The hierarchical Bayes estimator of $\theta$ under squared error loss is

$$E(\Theta|x) = \int\!\!\int \theta\,\pi(\theta|x, \tau^2)\,d\theta\,\pi(\tau^2|x)\,d\tau^2 = \int \frac{n\tau^2}{n\tau^2 + \sigma^2}\,\bar x\,\pi(\tau^2|x)\,d\tau^2 = E[E(\Theta|x, \tau^2)], \tag{5.6}$$

which is the expectation of the single-prior Bayes estimator with respect to the density $\pi(\tau^2|x)$. (See Problem 5.3 for the form of the posterior distributions.) Although there is no explicit form for $E(\Theta|x)$, its calculation is not particularly difficult. ∥

It is interesting to note that even though a conjugate prior was used at each stage of the model (5.5), the resulting Bayes estimator is not from a conjugate prior (the prior $\pi(\theta|a, b) = \int \pi(\theta|\tau)\psi(\tau|a, b)\,d\tau$ is not conjugate) and is not expressible in a simple form. Such an occurrence is commonplace in hierarchical Bayes analysis and leads to greater reliance on numerical methods.

Example 5.2 Conjugate normal hierarchy, continued. As a special case of the model (5.5), consider the model

$$X_i|\theta \sim N(\theta, \sigma^2),\ i = 1, \ldots, p,\ \text{independent},\qquad \Theta|\tau^2 \sim N(0, \tau^2),\qquad \frac{1}{\tau^2} \sim \text{Gamma}\left(\frac{\nu}{2}, \frac{2}{\nu}\right). \tag{5.7}$$

This leads to a Student's $t$-prior distribution on $\Theta$, and a posterior mean

$$E[\Theta|\bar x] = \frac{\int_{-\infty}^\infty \theta\,(1 + \theta^2/\nu)^{-\frac{\nu+1}{2}}\,e^{-\frac{p}{2\sigma^2}(\theta - \bar x)^2}\,d\theta}{\int_{-\infty}^\infty (1 + \theta^2/\nu)^{-\frac{\nu+1}{2}}\,e^{-\frac{p}{2\sigma^2}(\theta - \bar x)^2}\,d\theta}, \tag{5.8}$$

which is not expressible in a simple form. Numerical evaluation of (5.8) is simple, so calculation of this hierarchical Bayes estimator poses no problem in practice. However, evaluation of the mean squared error or Bayes risk of (5.8) presents a more substantial task. ∥

In the preceding example, the hierarchical Bayes estimator was expressible as a ratio of integrals which easily yielded to either direct calculation or simple approximation. There are other cases, however, in which a straightforward hierarchical model can lead to very difficult problems in the evaluation of a Bayes estimator.

Example 5.3 Beta-binomial hierarchy. A generalization of the standard beta-binomial hierarchy is

$$X|p \sim \text{binomial}(p, n),\qquad p|a, b \sim \text{beta}(a, b),\qquad (a, b) \sim \psi(a, b), \tag{5.9}$$

leading to the posterior mean

$$E(p|x) = \frac{\int_0^1 p^{x+1}(1 - p)^{n-x}\,\pi(p)\,dp}{\int_0^1 p^x(1 - p)^{n-x}\,\pi(p)\,dp}, \tag{5.10}$$

where

$$\pi(p) = \int\!\!\int \frac{\Gamma(a + b)}{\Gamma(a)\Gamma(b)}\,p^{a-1}(1 - p)^{b-1}\,\psi(a, b)\,da\,db. \tag{5.11}$$

For almost any choice of $\psi(a, b)$, calculation of (5.11), and hence of (5.10), is quite difficult: there can be difficulties with numerical integration, simulation, and approximation alike. Moreover, if $\psi(a, b)$ is chosen to be improper, as is typical in such hierarchies, the propriety of $\pi(p|x)$ is not easy to verify (and often does not obtain). George et al. (1993) provide algorithms for calculating expressions such as (5.10), and Hobert (1994) establishes conditions for the propriety of some resulting posterior distributions. ∥

To overcome the difficulties in computing hierarchical Bayes estimators, we need to establish either easy-to-use formulas or good approximations in order to further investigate their risk optimality. The approximation issue will be addressed in the next section. In the remainder of the present section, we consider the evaluation of (3.5) using theory based on the limiting behavior of Markov chains (see Note 9.4 for a brief discussion). Although this theory does not result in a simple expression for the Bayes estimator in general, it usually allows us to write expressions such as (5.6) as a limit of simple estimators. (Technically, these computations are not approximations, as they are exact in the limit. However, since they involve only a finite number of computations, we think of them as approximations, realizing that any order of precision can be achieved.)
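Before turning to Markov chain methods, note that ratio-of-integrals expressions like (5.8) yield directly to quadrature. A minimal sketch, assuming $\sigma^2 = 1$ and $p = 1$ (the scaling under which the entries of Table 6.2 in Section 6 appear to have been computed):

```python
import numpy as np
from scipy.integrate import quad

sigma2, p, nu, xbar = 1.0, 1, 2.0, 5.0  # illustrative values

def integrand(theta, power):
    prior = (1 + theta**2 / nu) ** (-(nu + 1) / 2)          # Student-t(nu) kernel
    lik = np.exp(-p / (2 * sigma2) * (theta - xbar) ** 2)   # likelihood of xbar
    return theta**power * prior * lik

num, _ = quad(integrand, -np.inf, np.inf, args=(1,))
den, _ = quad(integrand, -np.inf, np.inf, args=(0,))
print(num / den)  # E(Theta | xbar); compare the nu = 2, xbar = 5 entry of Table 6.2
```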
The resulting techniques, collectively known as Markov chain Monte Carlo (MCMC) techniques (see Tanner 1996, Gilks et al. 1996, or Robert and Casella 1998), can greatly facilitate the calculation of a hierarchical Bayes estimator. One of the most popular of these methods is the Gibbs sampler [brought to statistical prominence by Gelfand and Smith (1990)], which we now illustrate.

Starting with the hierarchy (5.1), suppose we are interested in calculating the posterior distribution $\pi(\theta|x)$ (or $E(\Theta|x)$, or some other feature of the posterior distribution). From (5.1), we calculate the full conditionals

$$\Theta|x, \gamma \sim \pi(\theta|x, \gamma),\qquad \Gamma|x, \theta \sim \pi(\gamma|x, \theta), \tag{5.12}$$

which are the posterior distributions of each parameter conditional on all others. If, for $i = 1, 2, \ldots, M$, random variables are generated according to

$$\Theta_i|x, \gamma_{i-1} \sim \pi(\theta|x, \gamma_{i-1}),\qquad \Gamma_i|x, \theta_i \sim \pi(\gamma|x, \theta_i), \tag{5.13}$$

this defines a Markov chain $(\Theta_i, \Gamma_i)$. It follows from the theory of such chains (see Note 9.4) that there exist distributions $\pi(\theta|x)$ and $\pi(\gamma|x)$ such that

$$\Theta_i \xrightarrow{L} \Theta \sim \pi(\theta|x) \quad\text{and}\quad \Gamma_i \xrightarrow{L} \Gamma \sim \pi(\gamma|x) \tag{5.14}$$

as $i \to \infty$, and

$$\frac{1}{M}\sum_{i=1}^M h(\Theta_i) \to E(h(\Theta)|x) = \int h(\theta)\,\pi(\theta|x)\,d\theta \tag{5.15}$$

as $M \to \infty$. (A full development of this theory is given in Meyn and Tweedie (1993). See also Resnick 1992 for an introduction to Markov chains and Robert 1994a for more applications to Bayesian calculation.) It follows from (5.15) that for $\Theta_i$ generated according to (5.13), we have

$$\frac{1}{M}\sum_{i=1}^M \Theta_i \to E(\Theta|x), \tag{5.16}$$

the hierarchical Bayes estimator. (Problems 5.8-5.11 develop some of the more practical aspects of this theory.)

Example 5.4 Poisson hierarchy with Gibbs sampling. As an example of a Poisson hierarchy (see also Example 6.6), consider

$$X|\lambda \sim \text{Poisson}(\lambda),\qquad \Lambda|b \sim \text{Gamma}(a, b),\ a \text{ known},\qquad \frac{1}{b} \sim \text{Gamma}(k, \tau), \tag{5.17}$$

leading to the full conditionals

$$\Lambda|x, b \sim \text{Gamma}\left(a + x,\ \frac{b}{1 + b}\right),\qquad \frac{1}{b}\Big|x, \lambda \sim \text{Gamma}\left(a + k,\ \frac{\tau}{1 + \lambda\tau}\right). \tag{5.18}$$

Recall that in this hierarchy, $\pi(\lambda|x)$ is not expressible in a simple form. However, if we simulate from (5.18), we obtain a sequence $\{\lambda_i\}$ satisfying

$$\frac{1}{M}\sum_{i=1}^M h(\lambda_i) \to \int h(\lambda)\,\pi(\lambda|x)\,d\lambda = E[h(\Lambda)|x]. \tag{5.19}$$

Alternatively, we could use a $\{b_i\}$ sequence and calculate

$$\frac{1}{M}\sum_{i=1}^M \pi(\lambda|x, b_i) \to \int \pi(\lambda|x, b)\,\pi(b|x)\,db = \pi(\lambda|x). \tag{5.20}$$ ∥

The Gibbs sampler actually yields two methods of calculating the same quantity. For example, from the hierarchy (5.1), using the full conditionals of (5.12) and the iterations in (5.13), we could estimate $E(h(\Theta)|x)$ by

$$\text{(i)}\quad \frac{1}{M}\sum_{i=1}^M h(\Theta_i) \to \int h(\theta)\,\pi(\theta|x)\,d\theta = E(h(\Theta)|x)$$

or by

$$\text{(ii)}\quad \frac{1}{M}\sum_{i=1}^M E[h(\Theta)|x, \Gamma_i] \to \int E(h(\Theta)|x, \gamma)\,\pi(\gamma|x)\,d\gamma = E(h(\Theta)|x). \tag{5.21}$$

Implementation of the Gibbs sampler is most effective when the full conditionals are easy to work with, and in such cases, it is often possible to calculate $E[h(\Theta)|x, \Gamma_i]$ in a simple form, so that (5.21)(ii) is a viable option. To see that it is superior to (5.21)(i), write $E(h(\Theta)|x) = E[E(h(\Theta)|x, \Gamma)]$ and apply the Rao-Blackwell theorem (see Problem 5.12).

Example 5.5 Gibbs point estimation. To calculate the hierarchical Bayes estimator of $\lambda$ in Example 5.4, we use

$$\frac{1}{M}\sum_{i=1}^M E(\Lambda|x, b_i) = \frac{1}{M}\sum_{i=1}^M \frac{b_i}{1 + b_i}(a + x)$$

rather than $(1/M)\sum_{i=1}^M \lambda_i$. Analogously, the posterior density $\pi(\lambda|x)$ can be calculated as

$$\hat\pi(\lambda|x) = \frac{1}{M}\sum_{i=1}^M \pi(\lambda|x, b_i) = \frac{\lambda^{a+x-1}}{M\,\Gamma(a + x)}\sum_{i=1}^M \left(\frac{1 + b_i}{b_i}\right)^{a+x} e^{-\lambda\frac{1 + b_i}{b_i}}. \quad ∥$$
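A minimal implementation of the Gibbs iteration (5.13) for the hierarchy (5.17), using the full conditionals (5.18); $a$, $k$, $\tau$, and the observed $x$ are illustrative, and burn-in is omitted for brevity:

```python
import numpy as np
rng = np.random.default_rng(1)

a, k, tau, x = 2.0, 1.0, 1.0, 4  # illustrative values
M = 20_000
lam, b = 1.0, 1.0  # arbitrary starting values
lam_draws, rb_terms = [], []
for _ in range(M):
    lam = rng.gamma(a + x, b / (1 + b))                # Lambda | x, b, from (5.18)
    b = 1.0 / rng.gamma(a + k, tau / (1 + lam * tau))  # 1/b | x, lambda, inverted
    lam_draws.append(lam)
    rb_terms.append(b / (1 + b) * (a + x))             # E(Lambda | x, b), Example 5.5
print(np.mean(lam_draws))  # ergodic average, as in (5.16)
print(np.mean(rb_terms))   # Rao-Blackwellized estimate, as in (5.21)(ii)
```

The second average is the Rao-Blackwellized estimate of Example 5.5; by the argument following (5.21), it is the preferred one.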
The actual implementation of the Gibbs sampler relies on Monte Carlo techniques to simulate random variables from the distributions in (5.13). Very efficient algorithms for such simulations are available, and Robert (1994a, Appendix B) catalogs a number of them. There are also full developments in Devroye (1985) and Ripley (1987). (See Problems 5.14 and 5.15.) For many problems, the simulation step is straightforward to implement on a computer, so we can take $M$ as large as we like. This makes it possible for the approximations to achieve any desired precision, with the only limiting factor being computer time. (In this sense, we are doing exact calculations.) Many applications of these techniques are given in Tanner (1996).

As a last example, consider the calculation of the hierarchical Bayes estimator of Example 5.2.

Example 5.6 Normal hierarchy. From (5.5), we have the set of full conditionals

$$\theta|\bar x, \tau^2 \sim N\left(\frac{n\tau^2}{n\tau^2 + \sigma^2}\,\bar x,\ \frac{\sigma^2\tau^2}{\sigma^2 + n\tau^2}\right),\qquad \frac{1}{\tau^2}\Big|\bar x, \theta \sim \text{Gamma}\left(a + \frac{1}{2},\ \left(\frac{\theta^2}{2} + \frac{1}{b}\right)^{-1}\right). \tag{5.22}$$

Note that, conditional on $\theta$, $\tau^2$ is independent of $\bar x$. Both of these conditional distributions are easy to simulate from, and we can thus use the Gibbs sampler to generate a chain $(\Theta_i, \tau_i^2)$, $i = 1, \ldots, M$, from (5.22). This yields the approximation

$$\tilde E(\Theta|\bar x) = \frac{1}{M}\sum_{i=1}^M E(\Theta|\bar x, \tau_i^2) = \frac{1}{M}\sum_{i=1}^M \frac{n\tau_i^2}{n\tau_i^2 + \sigma^2}\,\bar x \to E(\Theta|\bar x) \tag{5.23}$$

as $M \to \infty$. ∥
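A sketch of the corresponding sampler for Example 5.6, alternating between the two full conditionals in (5.22) and accumulating the average (5.23); the values of $\sigma^2$, $n$, $a$, $b$, and $\bar x$ are illustrative:

```python
import numpy as np
rng = np.random.default_rng(2)

sigma2, n, a, b, xbar = 1.0, 10, 2.0, 2.0, 1.5  # illustrative values
M = 20_000
tau2 = 1.0  # arbitrary starting value
estimates = []
for _ in range(M):
    w = n * tau2 / (n * tau2 + sigma2)
    theta = rng.normal(w * xbar, np.sqrt(sigma2 * tau2 / (sigma2 + n * tau2)))
    tau2 = 1.0 / rng.gamma(a + 0.5, 1.0 / (theta**2 / 2 + 1 / b))  # from (5.22)
    estimates.append(w * xbar)      # E(Theta | xbar, tau_i^2), as in (5.23)
print(np.mean(estimates))           # approximates E(Theta | xbar)
```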
As mentioned before, one of the purposes of specifying a model in a hierarchy is to make it possible to model more complicated phenomena in a sequence of less complicated steps. In addition, the ordering in the hierarchy allows us both to order the importance of the parameters and to incorporate some of our uncertainty about the prior specification. To be precise, in the model

$$X|\theta \sim f(x|\theta),\qquad \Theta|\lambda \sim \pi(\theta|\lambda),\qquad \Lambda \sim \psi(\lambda), \tag{5.24}$$

we tend to be more exacting in our specification of $\pi(\theta|\lambda)$, and less so in our specification of $\psi(\lambda)$. Indeed, in many cases, $\psi(\lambda)$ is taken to be "flat" or "noninformative" (for example, $\psi(\lambda) =$ Lebesgue measure). In practice, this leads to heavier-tailed prior distributions $\pi(\theta)$, with the resulting Bayes estimators being more robust (Berger and Robert 1990, Fourdrinier et al. 1996; see also Example 5.6.7).

One way of studying the effect that the stages of the hierarchy (5.24) have on each other is to examine, for each parameter, the information contained in its posterior distribution relative to its prior distribution. In effect, this measures how much the data can tell us about the parameter, with respect to the prior distribution. To measure this information, we can use Kullback-Leibler information (recall Example 1.7.7), which is also known by the longer, and more appropriate, name Kullback-Leibler information for discrimination between two densities. For densities $f$ and $g$, it is defined by

$$K[f, g] = \int \log\left[\frac{f(t)}{g(t)}\right] f(t)\,dt. \tag{5.25}$$

The interpretation is that as $K[f, g]$ gets larger, it becomes easier to discriminate between the densities $f$ and $g$; that is, there is more information for discrimination.

From the model (5.24), we can assess the information between the data and the parameter by calculating $K[\pi(\theta|x), \pi(\theta)]$, where

$$\pi(\theta) = \int \pi(\theta|\lambda)\,\psi(\lambda)\,d\lambda,\qquad \pi(\theta|x) = \frac{f(x|\theta)\pi(\theta)}{\int f(x|\theta)\pi(\theta)\,d\theta} = \frac{f(x|\theta)\pi(\theta)}{m(x)}. \tag{5.26}$$

By comparison, the information between the data and the hyperparameter is measured by $K[\pi(\lambda|x), \psi(\lambda)]$, where

$$\pi(\lambda|x) = \frac{\int f(x|\theta)\,\pi(\theta|\lambda)\,\psi(\lambda)\,d\theta}{m(x)} = \frac{m(x|\lambda)\,\psi(\lambda)}{m(x)}. \tag{5.27}$$

An important result about the two measures of information in (5.26) and (5.27) is contained in the following theorem.

Theorem 5.7 For the model (5.24),

$$K[\pi(\lambda|x), \psi(\lambda)] < K[\pi(\theta|x), \pi(\theta)]. \tag{5.28}$$

From (5.28), we see that the distribution of the data has less effect on hyperpriors than on priors or, turning things around, that the posterior distribution of a hyperparameter is less affected by changes in the prior than the posterior distribution of a parameter. This provides justification for the belief that parameters deeper in the hierarchy have less effect on the inference.

Proof of Theorem 5.7. By definition,

$$K[\pi(\lambda|x), \psi(\lambda)] = \int \pi(\lambda|x)\log\left[\frac{\pi(\lambda|x)}{\psi(\lambda)}\right] d\lambda = \int \frac{\pi(\lambda|x)}{\psi(\lambda)}\log\left[\frac{\pi(\lambda|x)}{\psi(\lambda)}\right]\psi(\lambda)\,d\lambda. \tag{5.29}$$

Now, note that

$$\frac{\pi(\lambda|x)}{\psi(\lambda)} = \int \frac{f(x|\theta)}{m(x)}\,\pi(\theta|\lambda)\,d\theta \tag{5.30}$$

or, more succinctly, $\pi(\lambda|x)/\psi(\lambda) = E[f(x|\theta)/m(x)]$, where the expectation is taken with respect to $\pi(\theta|\lambda)$. We now apply Jensen's inequality to (5.29), using the fact that the function $x\log x$ is convex for $x > 0$, which leads to

$$\frac{\pi(\lambda|x)}{\psi(\lambda)}\log\left[\frac{\pi(\lambda|x)}{\psi(\lambda)}\right] = E\left[\frac{f(x|\theta)}{m(x)}\right]\log E\left[\frac{f(x|\theta)}{m(x)}\right] \le E\left\{\frac{f(x|\theta)}{m(x)}\log\left[\frac{f(x|\theta)}{m(x)}\right]\right\} = \int \frac{f(x|\theta)}{m(x)}\log\left[\frac{f(x|\theta)}{m(x)}\right]\pi(\theta|\lambda)\,d\theta. \tag{5.31}$$

Substituting back into (5.29), we have

$$K[\pi(\lambda|x), \psi(\lambda)] \le \int\!\!\int \frac{f(x|\theta)}{m(x)}\log\left[\frac{f(x|\theta)}{m(x)}\right]\pi(\theta|\lambda)\,\psi(\lambda)\,d\theta\,d\lambda. \tag{5.32}$$

We now (of course) interchange the order of integration and notice that

$$\int \frac{f(x|\theta)}{m(x)}\,\pi(\theta|\lambda)\,\psi(\lambda)\,d\lambda = \frac{f(x|\theta)}{m(x)}\,\pi(\theta) = \pi(\theta|x). \tag{5.33}$$

Substitution into (5.32), together with the fact that $f(x|\theta)/m(x) = \pi(\theta|x)/\pi(\theta)$, yields (5.28). ✷

Thus, hierarchical modeling allows us to be less concerned about the exact form of $\psi(\lambda)$. This frees the modeler to choose a $\psi(\lambda)$ that yields other good properties without unduly compromising the Bayesian interpretation of the model. For example, as we will see in Chapter 5, $\psi(\lambda)$ can be chosen to yield hierarchical Bayes estimators with reasonable frequentist performance. A full development of information measures and hierarchical models is given by Goel and DeGroot (1979, 1981); see also Problems 5.16-5.19.

Theorem 5.7 shows how information acts within the levels of a hierarchy, but it does not address the perhaps more basic question of assessing the information provided by a prior distribution in a particular model. Information measures, such as $K[f, g]$, can also be the basis for answering this latter question. If $X \sim f(x|\theta)$ and $\Theta \sim \pi(\theta)$, then prior distributions that have a large effect on $\pi(\theta|x)$ should produce small values of $K[\pi(\theta|x), \pi(\theta)]$, since the prior and posterior distributions will be close together. Alternatively, prior distributions that have a small effect on $\pi(\theta|x)$ should produce large values of $K[\pi(\theta|x), \pi(\theta)]$, as the posterior will mainly reflect the sampling density. Thus, we may seek a prior $\pi(\theta)$ that produces the maximum value of $K[\pi(\theta|x), \pi(\theta)]$. We can consider such a prior to have the least influence on $f(x|\theta)$ and, hence, to be a default, or noninformative, prior.

The above is an informal description of the approach to the construction of a reference prior, initiated by Bernardo (1979) and further developed and formalized by Berger and Bernardo (1989, 1992). [See also Robert 1994a, Section 3.4.] This theory is quite involved, but approximations due to Clarke and Barron (1990, 1994) and Clarke and Wasserman (1993) shed some interesting light on the problem.
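Returning for a moment to Theorem 5.7, the inequality (5.28) is easy to check numerically on a small discrete hierarchy, where all of the integrals become finite sums. A minimal sketch with illustrative distributions:

```python
import numpy as np
from scipy.stats import binom

# Discrete toy version of (5.24): lambda in {0, 1}, theta in {0.2, 0.5, 0.8},
# X ~ binomial(5, theta); all of the probabilities below are illustrative.
psi = np.array([0.3, 0.7])                      # psi(lambda)
pi_t_l = np.array([[0.6, 0.3, 0.1],             # pi(theta | lambda = 0)
                   [0.1, 0.3, 0.6]])            # pi(theta | lambda = 1)
thetas = np.array([0.2, 0.5, 0.8])
x = 4
f = binom.pmf(x, 5, thetas)                     # f(x | theta)

pi_t = psi @ pi_t_l                             # pi(theta), as in (5.26)
m_x_l = pi_t_l @ f                              # m(x | lambda)
m = psi @ m_x_l                                 # m(x)
post_t = f * pi_t / m                           # pi(theta | x)
post_l = m_x_l * psi / m                        # pi(lambda | x), as in (5.27)

K = lambda p, q: np.sum(p * np.log(p / q))      # Kullback-Leibler (5.25)
print(K(post_l, psi), "<", K(post_t, pi_t))     # the inequality (5.28) holds
```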
First, we cannot directly use $K[\pi(\theta|x), \pi(\theta)]$ to derive a prior distribution, because it is a function of $x$. We thus consider its expected value with respect to the marginal distribution of $X$, the Shannon information

$$S(\pi) = \int K[\pi(\theta|x), \pi(\theta)]\,m_\pi(x)\,dx, \tag{5.34}$$

where $m_\pi(x) = \int f(x|\theta)\pi(\theta)\,d\theta$ is the marginal distribution. The reference prior is the distribution that maximizes $S(\pi)$. The following theorem is due to Clarke and Barron (1990).

Theorem 5.8 Let $X_1, \ldots, X_n$ be an iid sample from $f(x|\theta)$, and let $S_n(\pi)$ denote the Shannon information of the sample. Then, as $n \to \infty$,

$$S_n(\pi) = \frac{k}{2}\log\frac{n}{2\pi e} + \int \pi(\theta)\log\frac{|I(\theta)|^{1/2}}{\pi(\theta)}\,d\theta + o(1), \tag{5.35}$$

where $k$ is the dimension of $\theta$ and $I(\theta)$ is the Fisher information

$$I(\theta) = -E\left[\frac{\partial^2}{\partial\theta^2}\log f(X|\theta)\right].$$

As the integral in the expansion (5.35) is the only term involving the prior $\pi(\theta)$, maximizing that integral will maximize the expansion. Provided that $|I(\theta)|^{1/2}$ is integrable, Corollary 1.7.6 shows that $\pi(\theta) = |I(\theta)|^{1/2}$ is the appropriate choice. This is the Jeffreys prior, which was discussed in Section 4.1.

Example 5.9 Binomial reference prior. For $X_1, \ldots, X_n$ iid as Bernoulli($\theta$), we have

$$I(\theta) = -E\left[\frac{\partial^2}{\partial\theta^2}\log f(X|\theta)\right] = \frac{n}{\theta(1 - \theta)}, \tag{5.36}$$

which yields the Jeffreys prior $\pi(\theta) \propto [\theta(1 - \theta)]^{-1/2}$. This is also the prior that maximizes the integral in $S_n(\pi)$ and, in that sense, imparts the least information on $f(x|\theta)$. A formal reference prior derivation also shows that the Jeffreys prior is the reference prior. ∥

In problems where there are no nuisance parameters, the Jeffreys and reference priors agree, even when they are improper. In fact, the reference prior approach was developed to deal with the nuisance parameter problem, as the Fisher information approach gave no clear-cut guidelines on how to proceed in that case. Reference prior derivations for nuisance parameter problems are given by Berger and Bernardo (1989, 1992a, 1992b) and Polson and Wasserman (1990). See also Clarke and Wasserman (1993) for an expansion similar to (5.35) that is valid in the nuisance parameter case.

6 Empirical Bayes

Another generalization of single-prior Bayes estimation, empirical Bayes estimation, falls outside of the formal Bayesian paradigm. However, it has proven to be an effective technique for constructing estimators that perform well under both Bayesian and frequentist criteria. One reason for this, as we will see, is that empirical Bayes estimators tend to be more robust against misspecification of the prior distribution.

The starting point is again the model (3.1), but we now treat $\gamma$ as an unknown parameter of the model, which also needs to be estimated. Thus, we now have two parameters to estimate, necessitating at least two observations. We begin with the Bayes model

$$X_i|\theta \sim f(x|\theta),\ i = 1, \ldots, p,\qquad \Theta|\gamma \sim \pi(\theta|\gamma), \tag{6.1}$$

and calculate the marginal distribution of $X$, with density

$$m(x|\gamma) = \int \prod_i f(x_i|\theta)\,\pi(\theta|\gamma)\,d\theta. \tag{6.2}$$

Based on $m(x|\gamma)$, we obtain an estimate $\hat\gamma(x)$ of $\gamma$. It is most common to take $\hat\gamma(x)$ to be the MLE of $\gamma$, but this is not essential. We now substitute $\hat\gamma(x)$ for $\gamma$ in $\pi(\theta|\gamma)$ and determine the estimator that minimizes the empirical posterior loss

$$\int L(\theta, \delta(x))\,\pi(\theta|x, \hat\gamma(x))\,d\theta. \tag{6.3}$$

This minimizing estimator is the empirical Bayes estimator. An alternative definition is obtained by substituting $\hat\gamma(x)$ for $\gamma$ in the Bayes estimator. Although, mathematically, this is equivalent to the definition given here (see Problem 6.1), it is statistically more satisfying to define the empirical Bayes estimator as the minimizer of the empirical posterior loss (6.3).
Example 6.1 Normal empirical Bayes. To calculate an empirical Bayes estimator for the model (5.7) of Example 5.2, rather than integrate over the prior for $\tau^2$, we estimate $\tau^2$. We determine the marginal distribution of $X$ (see Problem 6.4),

$$m(x|\tau^2) = \int \prod_{i=1}^n f(x_i|\theta)\,\pi(\theta|\tau^2)\,d\theta = \frac{e^{-\frac{1}{2\sigma^2}\sum(x_i - \bar x)^2}}{(2\pi\sigma^2)^{n/2}}\,\frac{1}{(2\pi\tau^2)^{1/2}}\int_{-\infty}^\infty e^{-\frac{n}{2\sigma^2}(\bar x - \theta)^2}\,e^{-\frac{\theta^2}{2\tau^2}}\,d\theta$$
$$= \frac{1}{(2\pi)^{n/2}}\,\frac{1}{\sigma^n}\left(\frac{\sigma^2}{\sigma^2 + n\tau^2}\right)^{1/2} e^{-\frac{1}{2}\left[\frac{\sum(x_i - \bar x)^2}{\sigma^2} + \frac{n\bar x^2}{\sigma^2 + n\tau^2}\right]}. \tag{6.4}$$

(Note the similarity to the density (2.13) in the one-way random effects model.) From this density, we can now estimate $\tau^2$ using maximum likelihood (or some other estimation method). Recalling that we are assuming $\sigma^2$ known, we find the MLE of $\sigma^2 + n\tau^2$ to be

$$\widehat{\sigma^2 + n\tau^2} = \max\{\sigma^2,\ n\bar x^2\}.$$

Substituting into the single-prior Bayes estimator, we obtain the empirical Bayes estimator

$$E(\Theta|\bar x, \hat\tau) = \left(1 - \frac{\sigma^2}{\widehat{\sigma^2 + n\tau^2}}\right)\bar x = \left(1 - \frac{\sigma^2}{\max\{\sigma^2, n\bar x^2\}}\right)\bar x. \tag{6.5}$$ ∥

It is tempting to ask whether the empirical Bayes estimator is ever a Bayes estimator; that is, can we consider $\pi(\theta|x, \hat\gamma(x))$ to be a "legitimate" posterior density, in that it can be derived from a real prior distribution? The answer is yes, but the prior distribution that leads to such a posterior may sometimes not be proper (see Problem 6.2).

We next consider an example that illustrates the type of situation in which empirical Bayes estimation is particularly useful.

Example 6.2 Empirical Bayes binomial. Empirical Bayes estimation is best suited to situations in which there are many problems that can be modeled simultaneously in a common way. For example, suppose that there are $K$ different groups of patients, where each group has $n$ patients. Each group is given a different treatment for the same illness, and in the $k$th group, we count $X_k$, $k = 1, \ldots, K$, the number of successful treatments out of $n$. Since the groups receive different treatments, we expect different success rates; however, since we are treating the same illness, these rates should be somewhat related to each other. These considerations suggest the hierarchy

$$X_k \sim \text{binomial}(p_k, n),\qquad p_k \sim \text{beta}(a, b),\quad k = 1, \ldots, K, \tag{6.6}$$

where the $K$ groups are tied together by the common prior distribution. As in Example 1.5, the single-prior Bayes estimator of $p_k$ under squared error loss is

$$\delta^\pi(x_k) = E(p_k|x_k, a, b) = \frac{a + x_k}{a + b + n}. \tag{6.7}$$

In Example 1.5, $a$ and $b$ are assumed known and all calculations are straightforward. In the empirical Bayes model, however, we consider these hyperparameters unknown and estimate them. To construct an empirical Bayes estimator, we first calculate the marginal distribution

$$m(x|a, b) = \int_0^1\cdots\int_0^1 \prod_{k=1}^K \binom{n}{x_k} p_k^{x_k}(1 - p_k)^{n - x_k}\,\frac{\Gamma(a + b)}{\Gamma(a)\Gamma(b)}\,p_k^{a-1}(1 - p_k)^{b-1}\,dp_k$$
$$= \prod_{k=1}^K \binom{n}{x_k}\frac{\Gamma(a + b)\Gamma(a + x_k)\Gamma(n - x_k + b)}{\Gamma(a)\Gamma(b)\Gamma(a + b + n)}, \tag{6.8}$$

a product of beta-binomial distributions. We now proceed with maximum likelihood estimation of $a$ and $b$ based on (6.8). Although the MLEs $\hat a$ and $\hat b$ are not expressible in closed form, we can calculate them numerically and construct the empirical Bayes estimator

$$\delta^{\hat\pi}(x_k) = E(p_k|x_k, \hat a, \hat b) = \frac{\hat a + x_k}{\hat a + \hat b + n}. \tag{6.9}$$

The Bayes risk of $E(p_k|x_k, \hat a, \hat b)$ is only slightly higher than that of the Bayes estimator (6.7), and it is given in Table 6.1. For comparison, we also include the Bayes risk of the unbiased estimator $x/n$. The first three rows correspond to a prior mean of $1/2$, with decreasing prior variance. Notice how the risk of the empirical Bayes estimator falls between that of the Bayes estimator and that of $X/n$. ∥
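The marginal maximum likelihood step of Example 6.2 is a routine numerical optimization. A minimal sketch, maximizing the logarithm of (6.8) over $(a, b)$; the success counts are illustrative, and the binomial coefficients, which do not involve $a$ or $b$, are dropped:

```python
import numpy as np
from scipy.special import betaln
from scipy.optimize import minimize

n = 20
x = np.array([3, 15, 8, 18, 5, 12, 16, 4, 10, 14])  # illustrative counts, K = 10

def neg_log_marginal(par):
    a, b = np.exp(par)  # optimize on the log scale to keep a, b > 0
    # log of (6.8) up to terms free of (a, b): sum of log beta-binomial kernels
    return -np.sum(betaln(a + x, b + n - x) - betaln(a, b))

res = minimize(neg_log_marginal, x0=[0.0, 0.0])
a_hat, b_hat = np.exp(res.x)
delta_eb = (a_hat + x) / (a_hat + b_hat + n)  # empirical Bayes estimates (6.9)
print(a_hat, b_hat)
print(delta_eb)
```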
As Example 6.2 illustrates, and as we will see later in this chapter (Section 7), the Bayes risk performance of the empirical Bayes estimator is often "robust"; that is, its Bayes risk is reasonably close to that of the Bayes estimator no matter what values the hyperparameters attain.

Table 6.1. Bayes Risks for the Bayes, Empirical Bayes, and Unbiased Estimators of Example 6.2, where K = 10 and n = 20

    Prior Parameters              Bayes Risk
    a     b      δπ of (6.7)   δπ̂ of (6.9)    x/n
    2     2        .0833          .0850      .1000
    6     6        .0721          .0726      .1154
    20    20       .0407          .0407      .1220
    3     1        .0625          .0641      .0750
    9     3        .0541          .0565      .0865
    30    10       .0305          .0326      .0915

We next turn to the case of exponential families, and we find that a number of the expressions developed in Section 3 are useful in evaluating empirical Bayes estimators. In particular, we find an interesting representation for the risk under squared error loss. For the situation of Corollary 3.3, using a prior $\pi(\eta|\lambda)$, where $\lambda$ is a hyperparameter, the Bayes estimator of (3.12) becomes

$$E(\eta_i|x, \lambda) = \frac{\partial}{\partial x_i}\log m(x|\lambda) - \frac{\partial}{\partial x_i}\log h(x), \tag{6.10}$$

where $m(x|\lambda) = \int p_\eta(x)\,\pi(\eta|\lambda)\,d\eta$ is the marginal distribution. Simply substituting an estimate $\hat\lambda(x)$ of $\lambda$ into (6.10) yields the empirical Bayes estimator

$$E(\eta_i|x, \hat\lambda) = \frac{\partial}{\partial x_i}\log m(x|\lambda)\Big|_{\lambda = \hat\lambda(x)} - \frac{\partial}{\partial x_i}\log h(x). \tag{6.11}$$

If $\hat\lambda$ is, in fact, the MLE of $\lambda$ based on $m(x|\lambda)$, then the empirical Bayes estimator has an alternative representation.

Theorem 6.3 For the situation of Corollary 3.3, with prior distribution $\pi(\eta|\lambda)$, suppose $\hat\lambda(x)$ is the MLE of $\lambda$ based on $m(x|\lambda)$. Then, the empirical Bayes estimator is

$$E(\eta_i|x, \hat\lambda) = \frac{\partial}{\partial x_i}\log m(x|\hat\lambda(x)) - \frac{\partial}{\partial x_i}\log h(x). \tag{6.12}$$

Proof. Recall from calculus that if $f(\cdot, \cdot)$ and $g(\cdot)$ are differentiable functions, then

$$\frac{d}{dx}f(x, g(x)) = g'(x)\,\frac{\partial}{\partial y}f(x, y)\Big|_{y = g(x)} + \frac{\partial}{\partial x}f(x, y)\Big|_{y = g(x)}. \tag{6.13}$$

Applying this to $m(x|\hat\lambda(x))$ shows that

$$\frac{\partial}{\partial x_i}\log m(x|\hat\lambda(x)) = \frac{\partial\hat\lambda(x)}{\partial x_i}\,\frac{\partial}{\partial\lambda}\log m(x|\lambda)\Big|_{\lambda = \hat\lambda(x)} + \frac{\partial}{\partial x_i}\log m(x|\lambda)\Big|_{\lambda = \hat\lambda(x)} = \frac{\partial}{\partial x_i}\log m(x|\lambda)\Big|_{\lambda = \hat\lambda(x)},$$

because $(\partial/\partial\lambda)\log m(x|\lambda)$ is zero at $\lambda = \hat\lambda(x)$. Hence, the empirical Bayes estimator is equal to (6.12). ✷

Thus, for estimating the natural parameter of an exponential family, the empirical Bayes estimator (using the marginal MLE) can be expressed in the same form as a formal Bayes estimator. Here we use the adjective formal to signify a mathematical equivalence, as the function $m(x|\hat\lambda(x))$ may not correspond to a proper marginal density. See Bock (1988) for some interesting results and variations on these estimators.

Example 6.4 Normal empirical Bayes, µ unknown. Consider the estimation of $\theta_i$ in the model of Example 3.4,

$$X_i|\theta_i \sim N(\theta_i, \sigma^2),\quad i = 1, \ldots, p,\ \text{independent}, \tag{6.14}$$
$$\Theta_i \sim N(\mu, \tau^2),\quad i = 1, \ldots, p,\ \text{independent}, \tag{6.15}$$

where $\mu$ is unknown. We can use Theorem 6.3 to calculate the empirical Bayes estimator, giving

$$E(\Theta_i|x, \hat\mu) = \sigma^2\left[\frac{\partial}{\partial x_i}\log m(x|\hat\mu) - \frac{\partial}{\partial x_i}\log h(x)\right],$$

where $\hat\mu$ is the MLE of $\mu$ from

$$m(x|\mu) = \frac{1}{[2\pi(\sigma^2 + \tau^2)]^{p/2}}\,e^{-\frac{1}{2(\sigma^2 + \tau^2)}\sum(x_i - \mu)^2}.$$

Hence, $\hat\mu = \bar x$ and

$$\frac{\partial}{\partial x_i}\log m(x|\hat\mu) = \frac{\partial}{\partial x_i}\left[-\frac{1}{2(\sigma^2 + \tau^2)}(x_i - \bar x)^2\right].$$

This yields the empirical Bayes estimator

$$E(\Theta_i|x, \hat\mu) = \frac{\tau^2}{\sigma^2 + \tau^2}\,x_i + \frac{\sigma^2}{\sigma^2 + \tau^2}\,\bar x,$$

which is the Bayes estimator under the prior $\pi(\theta|\mu = \bar x)$.

An advantage of the form (6.12) is that it allows us to represent the risk of the empirical Bayes estimator in the form specified by (3.13). The risk of the empirical Bayes estimator (6.12) is given by

$$R[\eta, E(\eta|X, \hat\lambda(X))] = R[\eta, -\nabla\log h(X)] + \sum_{i=1}^p E_\eta\left[2\,\frac{\partial^2}{\partial X_i^2}\log m(X|\hat\lambda(X)) + \left(\frac{\partial}{\partial X_i}\log m(X|\hat\lambda(X))\right)^2\right]. \tag{6.16}$$
Using the MLE $\hat\mu(\bar x) = \bar x$, differentiating the log of $m(x|\hat\mu(x))$, and substituting into (6.16) shows (Problem 6.10) that

$$R[\eta, E\{\eta|X, \hat\mu(X)\}] = \frac{p}{\sigma^2} - \frac{2(p - 1)^2}{p(\sigma^2 + \tau^2)} + \frac{p - 1}{p(\sigma^2 + \tau^2)^2}\sum_{i=1}^p E_\eta(X_i - \bar X)^2. \quad ∥$$

Table 6.2. Values of the Hierarchical Bayes (HB) Estimate (5.8) and the Empirical Bayes (EB) Estimate (6.5)

              Value of x̄
    ν        .5      2       5       10
    2        .27    1.22    4.36    9.69
    10       .26    1.07    3.34    8.89
    30       .25    1.02    2.79    7.30
    ∞        .25    1.00    2.50    5.00
    EB       0      1.50    4.80    9.90

As mentioned at the beginning of this section, empirical Bayes estimators can also be useful as approximations to hierarchical Bayes estimators. Since we often have simpler expressions for the empirical Bayes estimator, if its behavior is close to that of the hierarchical Bayes estimator, it becomes a reasonable substitute (see, for example, Kass and Steffey 1989).

Example 6.5 Hierarchical Bayes approximation. Examples 5.2 and 6.1 both consider the same model; in Example 5.2, the hierarchical Bayes estimator (5.8) averages over the hyperparameter, while in Example 6.1, the empirical Bayes estimator (6.5) estimates the hyperparameter. The small numerical comparison in Table 6.2 suggests that the empirical Bayes estimator is a reasonable, but not exceptional, approximation to the hierarchical Bayes estimator. The approximation of hierarchical Bayes by empirical Bayes is best for small values of $\nu$ [defined in (5.7)] and deteriorates as $\nu \to \infty$. At $\nu = \infty$, the hierarchical Bayes estimator becomes a Bayes estimator under a $N(0, 1)$ prior (see Problem 6.11). Notice that, even though (6.5) provides us with a simple expression for an estimator, it still requires some work to evaluate the mean squared error, or Bayes risk, of (6.5). However, it is important to do so to obtain an overall picture of the performance of the estimator (Problem 6.12). ∥

Although the (admittedly naive) approximation in Example 6.5 is not very accurate, there are other situations where the empirical Bayes estimator, or slight modifications thereof, can provide a good approximation to the hierarchical Bayes estimator. We now look at some of these situations. For the general hierarchical model (5.1), the Bayes estimator under squared error loss is

$$E(\Theta|x) = \int \theta\,\pi(\theta|x)\,d\theta, \tag{6.17}$$

which can be written as

$$E(\Theta|x) = \int\!\!\int \theta\,\pi(\theta|x, \gamma)\,\pi(\gamma|x)\,d\gamma\,d\theta = \int E(\Theta|x, \gamma)\,\pi(\gamma|x)\,d\gamma, \tag{6.18}$$

where

$$\pi(\gamma|x) = \frac{m(x|\gamma)\,\psi(\gamma)}{m(x)} \tag{6.19}$$

with

$$m(x|\gamma) = \int f(x|\theta)\,\pi(\theta|\gamma)\,d\theta,\qquad m(x) = \int m(x|\gamma)\,\psi(\gamma)\,d\gamma. \tag{6.20}$$

Now, suppose that $\pi(\gamma|x)$ is quite peaked around its mode $\hat\gamma_\pi$. We might then consider approximating $E(\Theta|x)$ by $E(\Theta|x, \hat\gamma_\pi)$. Moreover, if $\psi(\gamma)$ is relatively flat compared to $m(x|\gamma)$, we would expect $\pi(\gamma|x) \approx m(x|\gamma)$ and $\hat\gamma_\pi \approx \hat\gamma$, the marginal MLE. In such a case, $E(\Theta|x, \hat\gamma_\pi)$ would be close to the empirical Bayes estimator $E(\Theta|x, \hat\gamma)$, and hence the empirical Bayes estimator is a good approximation to the hierarchical Bayes estimator (Equation 5.4.2).

Example 6.6 Poisson hierarchy. Although we might expect the empirical Bayes and hierarchical Bayes estimators to be close if the hyperparameter has a flat-tailed prior, they will generally not be equal unless that prior is improper. Consider the model

$$X_i \sim \text{Poisson}(\lambda_i),\quad i = 1, \ldots, p,\ \text{independent}, \tag{6.21}$$
$$\lambda_i \sim \text{Gamma}(a, b),\quad i = 1, \ldots, p,\ \text{independent},\ a \text{ known}.$$

The marginal distribution of $X_i$ is

$$m(x_i|b) = \int_0^\infty \frac{e^{-\lambda_i}\lambda_i^{x_i}}{x_i!}\,\frac{1}{\Gamma(a)b^a}\,\lambda_i^{a-1}e^{-\lambda_i/b}\,d\lambda_i = \frac{\Gamma(x_i + a)}{x_i!\,\Gamma(a)}\,\frac{1}{b^a}\left(1 + \frac{1}{b}\right)^{-(x_i + a)} = \binom{x_i + a - 1}{a - 1}\left(\frac{b}{b + 1}\right)^{x_i}\left(\frac{1}{b + 1}\right)^a,$$

a negative binomial distribution.
Thus,

$$m(x|b) = \left[\prod_{i=1}^p \binom{x_i + a - 1}{a - 1}\right]\left(\frac{b}{b + 1}\right)^{\sum x_i}\left(\frac{1}{b + 1}\right)^{pa} \tag{6.22}$$

and the marginal MLE of $b$ is $\hat b = \bar x/a$. From (6.21), the Bayes estimator is

$$E(\lambda_i|x_i, b) = \frac{b}{b + 1}(a + x_i) \tag{6.23}$$

and, hence, the empirical Bayes estimator is

$$E(\lambda_i|x_i, \hat b) = \frac{\bar x}{\bar x + a}(a + x_i). \tag{6.24}$$

If we add a prior $\psi(b)$ to the hierarchy (6.21), the hierarchical Bayes estimator can be written as

$$E(\lambda_i|x) = \int E(\lambda_i|x_i, b)\,\pi(b|x)\,db, \tag{6.25}$$

where

$$\pi(b|x) = \frac{\left(\frac{b}{b + 1}\right)^{p\bar x}\left(\frac{1}{b + 1}\right)^{pa}\psi(b)}{\int_0^\infty \left(\frac{b}{b + 1}\right)^{p\bar x}\left(\frac{1}{b + 1}\right)^{pa}\psi(b)\,db}. \tag{6.26}$$

From examination of the hierarchy (6.21), a natural choice of $\psi(b)$ might be an inverted gamma, as this would be conjugate for $\lambda_i$. However, such priors do not lead to a simple expression for $E(\lambda_i|\bar x)$ (although they may lead to good estimators). In general, however, we are less concerned that the hyperprior reflect reality (which is a concern for the prior), since the hyperprior tends to have less influence on the ultimate inference (Theorem 5.7). Thus, we will often base the choice of the hyperprior on convenience. Let us, therefore, choose as the prior for $b$ an $F$-distribution,

$$\psi(b) \propto \frac{b^{\alpha - 1}}{(1 + b)^{\alpha + \beta}}, \tag{6.27}$$

which is equivalent to putting a beta($\alpha, \beta$) prior on $b/(1 + b)$. The denominator of $\pi(b|x)$ in (6.26) is

$$\int_0^\infty \left(\frac{b}{b + 1}\right)^{p\bar x}\left(\frac{1}{b + 1}\right)^{pa}\frac{b^{\alpha - 1}}{(1 + b)^{\alpha + \beta}}\,db = \int_0^1 t^{p\bar x + \alpha - 1}(1 - t)^{pa + \beta - 1}\,dt \quad\left(t = \frac{b}{1 + b}\right) \tag{6.28}$$
$$= \frac{\Gamma(p\bar x + \alpha)\Gamma(pa + \beta)}{\Gamma(p\bar x + pa + \alpha + \beta)},$$

and (6.23), (6.26), and (6.28) lead to the hierarchical Bayes estimator

$$E(\lambda_i|x) = \int E(\lambda_i|x_i, b)\,\pi(b|x)\,db = \frac{\Gamma(p\bar x + pa + \alpha + \beta)}{\Gamma(p\bar x + \alpha)\Gamma(pa + \beta)}\,(a + x_i)\int_0^\infty \left(\frac{b}{b + 1}\right)^{p\bar x + 1}\left(\frac{1}{b + 1}\right)^{pa}\frac{b^{\alpha - 1}}{(1 + b)^{\alpha + \beta}}\,db$$
$$= \left[\frac{\Gamma(p\bar x + pa + \alpha + \beta)}{\Gamma(p\bar x + \alpha)\Gamma(pa + \beta)}\right]\left[\frac{\Gamma(p\bar x + \alpha + 1)\Gamma(pa + \beta)}{\Gamma(p\bar x + pa + \alpha + \beta + 1)}\right](a + x_i) = \left(\frac{p\bar x + \alpha}{p\bar x + pa + \alpha + \beta}\right)(a + x_i). \tag{6.29}$$

The hierarchical Bayes estimator will therefore be equal to the empirical Bayes estimator when $\alpha = \beta = 0$. This makes $\psi(b) \propto 1/b$ an improper prior. However, the calculation of $\pi(b|x)$ from (6.26) and of $E(\lambda_i|x)$ from (6.29) will still be valid. [This model was considered by Deely and Lindley (1981), who termed it Bayes empirical Bayes.]

To further see how the empirical Bayes estimator approximates the hierarchical Bayes estimator, write

$$\frac{p\bar x + \alpha}{p\bar x + pa + \alpha + \beta} = \frac{\bar x}{\bar x + a} - \frac{\beta}{p(\bar x + a)} + \frac{pa\alpha + pa\beta + \alpha\beta}{p^2(\bar x + a)^2} - \frac{2a\alpha\beta}{p^2(\bar x + a)^3} + \cdots.$$

This shows that the empirical Bayes estimator is the leading term in an expansion of the hierarchical Bayes estimator, and we can write

$$E(\lambda_i|x) = E(\lambda_i|x_i, \hat b) + O\left(\frac{1}{p}\right). \tag{6.30}$$

Estimators of the form (6.29) are similar to those developed by Clevenson and Zidek (1975) for the estimation of Poisson means. The Clevenson-Zidek estimators, which have $a = 0$ in (6.29), are minimax estimators of $\lambda$ (see Section 5.7). ∥
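Since both (6.24) and (6.29) are in closed form, the closeness asserted in (6.30) is easy to inspect directly. A minimal sketch with illustrative values of $a$, $\alpha$, $\beta$, and the data:

```python
import numpy as np

a, alpha, beta = 2.0, 1.0, 1.0                  # illustrative hyperparameters
x = np.array([3, 7, 5, 2, 6, 4, 8, 3, 5, 7])    # illustrative counts, p = 10
p, xbar = len(x), x.mean()

w_eb = xbar / (xbar + a)                                        # factor in (6.24)
w_hb = (p * xbar + alpha) / (p * xbar + p * a + alpha + beta)   # factor in (6.29)
print(w_eb, w_hb)        # close for moderate p; equal when alpha = beta = 0
print(w_eb * (a + x))    # empirical Bayes estimates
print(w_hb * (a + x))    # hierarchical Bayes estimates
```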
If interest centers on obtaining an approximation to a hierarchical Bayes estimator, a more direct route would be to look for an accurate approximation to the integral in (6.17). When such an approximation coincides with the empirical Bayes estimator, we can safely consider the empirical Bayes estimator as an approximate hierarchical Bayes estimator.

Example 6.7 Continuation of Example 5.2. In Example 5.2, the hierarchical Bayes estimator (5.8) was approximated by the empirical Bayes estimator (6.5). If, instead, we seek a direct approximation to (5.8), we might start with the Taylor expansion of $(1 + \theta^2/\nu)^{-(\nu+1)/2}$ around $\bar x$,

$$\frac{1}{(1 + \theta^2/\nu)^{(\nu+1)/2}} = \frac{1}{(1 + \bar x^2/\nu)^{(\nu+1)/2}} - \frac{\nu + 1}{\nu}\,\frac{\bar x}{(1 + \bar x^2/\nu)^{(\nu+3)/2}}\,(\theta - \bar x) + O[(\theta - \bar x)^2]; \tag{6.31}$$

using this in the numerator and denominator of (5.8) yields the approximation (Problem 6.15)

$$\hat E(\Theta|x) = \left[1 - \frac{(\nu + 1)\sigma^2}{p(\nu + \bar x^2)}\right]\bar x + O\left(\frac{1}{p^{3/2}}\right). \tag{6.32}$$

Notice that the approximation is equal to the empirical Bayes estimator if $\nu = 0$, an extremely flat prior! The approximation (6.32) is better than the empirical Bayes estimator for large values of $\nu$, but worse for small values of $\nu$. ∥

The approximation (6.32) is a special case of a Laplace approximation (Tierney and Kadane 1986). The idea behind the approximation is to carry out a Taylor series expansion of the integrand around an MLE, which can be summarized as

$$\int b(\lambda)\,e^{-nh(\lambda)}\,d\lambda \doteq b(\hat\lambda)\sqrt{\frac{2\pi}{n\,h''(\hat\lambda)}}\,e^{-nh(\hat\lambda)}. \tag{6.33}$$

Here, $h(\hat\lambda)$ is the unique minimum of $h(\lambda)$; that is, $\hat\lambda$ is the MLE based on a likelihood proportional to $e^{-nh(\lambda)}$. (See Problem 6.17 for details.) In applying (6.33) to a representation like (6.18), we obtain

$$E(\Theta|x) = \int E(\Theta|x, \lambda)\,\pi(\lambda|x)\,d\lambda = \int E(\Theta|x, \lambda)\,e^{n\log\pi(\lambda|x)^{1/n}}\,d\lambda \doteq \left[\frac{\sqrt{2\pi}\,\pi(\hat\lambda|x)}{\left(-\frac{\partial^2}{\partial\lambda^2}\log\pi(\lambda|x)\big|_{\lambda=\hat\lambda}\right)^{1/2}}\right]E(\Theta|x, \hat\lambda), \tag{6.34}$$

where $\hat\lambda$ is the mode of $\pi(\lambda|x)$. Thus, $E(\Theta|x, \hat\lambda)$ in (6.34) will be the empirical Bayes estimator if $\pi(\lambda|x) \propto m(x|\lambda)$, that is, if $\psi(\lambda) = 1$. Moreover, the expression in square brackets in (6.34) is equal to 1 if $\pi(\lambda|x)$ is normal with mean $\hat\lambda$ and variance equal to the inverse of the observed Fisher information (see Problem 6.17).
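The approximation (6.33) can be illustrated on an integral with a known value. A minimal sketch, assuming $b(\lambda) = 1$ and $h(\lambda) = \lambda - \log\lambda$, for which $\hat\lambda = 1$, $h''(\hat\lambda) = 1$, and the exact integral is $\Gamma(n + 1)/n^{n+1}$:

```python
import numpy as np
from scipy.integrate import quad

# Laplace approximation (6.33) with b(lambda) = 1, h(lambda) = lambda - log(lambda):
# exp(-n h(lambda)) = lambda^n exp(-n lambda), whose integral is Gamma(n+1)/n^(n+1).
n = 20
exact, _ = quad(lambda lam: np.exp(-n * (lam - np.log(lam))), 0, np.inf)
laplace = np.sqrt(2 * np.pi / n) * np.exp(-n)  # b(1) sqrt(2 pi / (n h''(1))) e^{-n h(1)}
print(exact, laplace, laplace / exact)         # ratio near 1, improving as n grows
```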
Both the hierarchical and the empirical Bayes approaches are generalizations of single-prior Bayes analysis. In each case, we generalize the single prior to a class of priors. Hierarchical Bayes then averages over this class, whereas empirical Bayes chooses a representative member. Moreover, we have considered the functional forms of the prior distributions to be known; that is, even though $\theta$ and $\gamma$ are unknown, $\pi(\theta|\gamma)$ and $\psi(\gamma)$ are known.

Another generalization of single-prior Bayes analysis is robust Bayes analysis, where the class of priors is treated differently. Rather than summarizing over the class, we allow the prior distribution to vary through it and examine the behavior of the Bayes procedures as the prior varies. Moreover, the assumption of knowledge of the functional form is relaxed. Typically, a hierarchy like (3.1) is used, and a class of distributions for $\pi(\cdot|\cdot)$ is specified. For example, a popular class of prior distributions for $\Theta$ is given by an $\varepsilon$-contamination class

$$\mathcal{O} = \{\pi(\theta|\lambda) : \pi(\theta|\lambda) = (1 - \varepsilon)\pi_0(\theta|\lambda) + \varepsilon q(\theta),\ q \in \mathcal{Q}\}, \tag{6.35}$$

where $\pi_0(\theta|\lambda)$ is a specified prior (sometimes called the root prior) and $q$ is any distribution in a class $\mathcal{Q}$. [Here, $\mathcal{Q}$ is sometimes taken to be the class of all distributions, but more restrictive classes can often provide estimators and posterior distributions with desirable properties; see, for example, Berger and Berliner 1986. Also, Mattner (1994) showed that for densities specified in the form of $\varepsilon$-contamination classes, the order statistics are complete; see Note 1.10.5.] Using (6.35), we then proceed in a formal Bayesian way and derive estimators based on minimizing the posterior expected loss resulting from a prior $\pi \in \mathcal{O}$, say $\pi^*$. The resulting estimator, say $\delta_{\pi^*}$, is evaluated using measures that range over all $\pi \in \mathcal{O}$, to assess the robustness of $\delta_{\pi^*}$ against misspecification of the prior.

For example, one might consider robustness using the posterior expected loss, or robustness using the Bayes risk. In the latter case, we might look at (Berger 1985, Section 4.7.5)

$$\sup_{\pi \in \mathcal{O}} r(\pi, \delta) \tag{6.36}$$

and, perhaps, choose an estimator $\delta$ that minimizes this quantity. If the loss is squared error, then for any estimator $\delta$, we can write (Problem 6.2)

$$r(\pi, \delta) = r(\pi, \delta_\pi) + E(\delta - \delta_\pi)^2, \tag{6.37}$$

where $\delta_\pi$ is the Bayes estimator under $\pi$. From (6.37), we see that a robust Bayes estimator is one that is "close" to the Bayes estimators $\delta_\pi$ for all $\pi \in \mathcal{O}$. An ultimate goal of robust Bayes analysis is to find a prior $\pi^* \in \mathcal{O}$ for which $\delta_{\pi^*}$ can be considered robust.

Example 6.8 Continuation of Example 3.1. To obtain a robust Bayes estimator of $\theta$, consider the class of priors

$$\mathcal{O} = \{\pi : \pi(\theta) = (1 - \varepsilon)\pi_0(\theta|\tau_0) + \varepsilon q(\theta)\}, \tag{6.38}$$

where $\pi_0(\theta|\tau_0)$ is the $N(0, \tau_0^2)$ density, $\tau_0$ is specified, and $q(\theta) = \int \pi(\theta|\tau^2)\,\pi(\tau^2|a, b)\,d\tau^2$, as in Problem 6.3(a). The posterior density corresponding to a prior $\pi \in \mathcal{O}$ is given by

$$\pi(\theta|x) = \lambda(x)\,\pi_0(\theta|\bar x, \tau_0) + (1 - \lambda(x))\,q(\theta|x, a, b), \tag{6.39}$$

where $\lambda(x)$ is given by

$$\lambda(x) = \frac{(1 - \varepsilon)\,m_{\pi_0}(\bar x|\tau_0)}{(1 - \varepsilon)\,m_{\pi_0}(\bar x|\tau_0) + \varepsilon\,m_q(x|a, b)} \tag{6.40}$$

(see Problem 5.3). Using (6.39) and (6.40), the Bayes estimator of $\theta$ under squared error loss is

$$E(\Theta|x, \tau_0, a, b) = \lambda(x)\,E(\Theta|\bar x, \tau_0) + (1 - \lambda(x))\,E(\Theta|x, a, b), \tag{6.41}$$

a convex combination of the single-prior and hierarchical Bayes estimators, with weights dependent on the marginal distributions. A robust Bayes analysis would proceed to evaluate the behavior (i.e., the robustness) of this estimator as $\pi$ ranges through $\mathcal{O}$. ∥

7 Risk Comparisons

In this concluding section, we look in somewhat more detail at the Bayes risk performance of some Bayes, empirical Bayes, and hierarchical Bayes estimators. We will also examine these risks under different prior assumptions, in the spirit of robust Bayes analysis.

Example 7.1 The James-Stein estimator. Let $X$ have a $p$-variate normal distribution with mean $\theta$ and covariance matrix $\sigma^2 I$, where $\sigma^2$ is known; $X \sim N_p(\theta, \sigma^2 I)$. We want to estimate $\theta$ under sum-of-squared-errors loss

$$L[\theta, \delta(x)] = |\theta - \delta(x)|^2 = \sum_{i=1}^p (\theta_i - \delta_i(x))^2,$$

using a prior distribution $\Theta \sim N(0, \tau^2 I)$, where $\tau^2$ is assumed known. The Bayes estimator of $\theta$ is $\delta^\tau(x) = [\tau^2/(\sigma^2 + \tau^2)]\,x$, the vector of componentwise Bayes estimates. It is straightforward to calculate its Bayes risk,

$$r(\tau, \delta^\tau) = \frac{p\sigma^2\tau^2}{\sigma^2 + \tau^2}. \tag{7.1}$$

An empirical Bayes approach to this problem would replace $\tau^2$ with an estimate from the marginal distribution of $X$,

$$m(x|\tau^2) = \frac{1}{[2\pi(\sigma^2 + \tau^2)]^{p/2}}\,e^{-\frac{1}{2(\sigma^2 + \tau^2)}\sum x_i^2}. \tag{7.2}$$

Although, for the most part, we have used maximum likelihood to estimate the hyperparameters in empirical Bayes estimators, unbiased estimation provides an alternative. Using the unbiased estimator of $\tau^2/(\sigma^2 + \tau^2)$, the empirical Bayes estimator is (Problem 7.1)

$$\delta^{JS}(x) = \left(1 - \frac{(p - 2)\sigma^2}{|x|^2}\right)x, \tag{7.3}$$

the James-Stein estimator. ∥

This estimator was discovered by Stein (1956b) and later shown by James and Stein (1961) to have a smaller mean squared error than the maximum likelihood estimator $X$ for all $\theta$. Its empirical Bayes derivation can be found in Efron and Morris (1972a). Since the James-Stein estimator (or any empirical Bayes estimator) cannot attain as small a Bayes risk as the Bayes estimator, it is of interest to see how much larger its Bayes risk $r(\tau, \delta^{JS})$ will be. This, in effect, tells us the penalty we pay for estimating $\tau^2$.
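The dominance of $\delta^{JS}$ over $X$ asserted above is easy to see in simulation. A minimal sketch at one arbitrary, illustrative value of $\theta$:

```python
import numpy as np
rng = np.random.default_rng(3)

p, sigma2, reps = 10, 1.0, 50_000       # illustrative values
theta = np.full(p, 2.0)                 # an arbitrary true mean vector
X = rng.normal(theta, np.sqrt(sigma2), size=(reps, p))
shrink = 1 - (p - 2) * sigma2 / np.sum(X**2, axis=1, keepdims=True)  # (7.3)
mse_js = np.mean(np.sum((shrink * X - theta) ** 2, axis=1))
mse_x = np.mean(np.sum((X - theta) ** 2, axis=1))
print(mse_js, mse_x)  # the James-Stein risk is smaller; mse_x is about p sigma^2
```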
As a first step, we must calculate $r(\tau, \delta^{JS})$, which is made easier by first obtaining an unbiased estimator of the risk $R(\theta, \delta^{JS})$. The integration over $\theta$ then becomes simple, since the integrand becomes constant in $\theta$. Recall Theorem 3.5, which gave an expression for the risk of a Bayes estimator of the form (3.12). In the normal case, we can apply the theorem to a fairly wide class of estimators to get an unbiased estimator of the risk.

Corollary 7.2 Let $X \sim N_p(\theta, \sigma^2 I)$, and let the estimator $\delta$ be of the form $\delta(x) = x - g(x)$, where $g(x) = \{g_i(x)\}$ is differentiable. If $E_\theta|(\partial/\partial X_i)g_i(X)| < \infty$ for $i = 1, \ldots, p$, then

$$R(\theta, \delta) = E_\theta|\theta - \delta(X)|^2 = p\sigma^2 + E_\theta|g(X)|^2 - 2\sigma^2\sum_{i=1}^p E_\theta\,\frac{\partial}{\partial X_i}g_i(X). \tag{7.4}$$

Hence,

$$\hat R(\delta(x)) = p\sigma^2 + |g(x)|^2 - 2\sigma^2\sum_{i=1}^p \frac{\partial}{\partial x_i}g_i(x) \tag{7.5}$$

is an unbiased estimator of the risk $R(\theta, \delta)$.

Proof. In the notation of Theorem 3.5, in the normal case $-(\partial/\partial x_i)\log h(x) = x_i/\sigma^2$, and the result now follows by identifying $g(x)$ with $\nabla\log m(x)$, and some calculation. See Problem 7.2. ✷

For the James-Stein estimator (7.3), we have $g(x) = (p - 2)\sigma^2 x/|x|^2$; hence,

$$R(\theta, \delta^{JS}) = p\sigma^2 + E_\theta\left[\frac{(p - 2)^2\sigma^4}{|X|^2}\right] - 2\sigma^2\sum_{i=1}^p E_\theta\left[\frac{\partial}{\partial X_i}\,\frac{(p - 2)\sigma^2 X_i}{|X|^2}\right]$$
$$= p\sigma^2 + (p - 2)^2\sigma^4\,E_\theta\frac{1}{|X|^2} - 2(p - 2)\sigma^4\sum_{i=1}^p E_\theta\left[\frac{|X|^2 - 2X_i^2}{|X|^4}\right] = p\sigma^2 - (p - 2)^2\sigma^4\,E_\theta\frac{1}{|X|^2}, \tag{7.6}$$

so $\hat R(\delta^{JS}(x)) = p\sigma^2 - (p - 2)^2\sigma^4/|x|^2$.

Example 7.3 Bayes risk of the James-Stein estimator. Under the model of Example 7.1, the Bayes risk of $\delta^{JS}$ is

$$r(\tau, \delta^{JS}) = \int R(\theta, \delta^{JS})\,\pi(\theta)\,d\theta = \int_{\mathcal{X}}\!\int \left[p\sigma^2 - \frac{(p - 2)^2\sigma^4}{|x|^2}\right]f(x|\theta)\,\pi(\theta)\,dx\,d\theta = \int_{\mathcal{X}}\left\{\int\left[p\sigma^2 - \frac{(p - 2)^2\sigma^4}{|x|^2}\right]\pi(\theta|x)\,d\theta\right\} m(x)\,dx,$$

where we have used (7.6) and changed the order of integration. Since the integrand does not depend on $\theta$, the inner integral is trivially equal to the bracketed expression, and

$$r(\tau, \delta^{JS}) = p\sigma^2 - (p - 2)^2\sigma^4\,E\frac{1}{|X|^2}. \tag{7.7}$$

Here, the expected value is over the marginal distribution of $X$ (in contrast to (7.6), where the expectation is over the conditional distribution of $X$ given $\theta$). Since, marginally, $E\left[\frac{p - 2}{|X|^2}\right] = \frac{1}{\sigma^2 + \tau^2}$, we have

$$r(\tau, \delta^{JS}) = p\sigma^2 - \frac{(p - 2)\sigma^4}{\sigma^2 + \tau^2} = \frac{p\sigma^2\tau^2}{\sigma^2 + \tau^2} + \frac{2\sigma^4}{\sigma^2 + \tau^2} = r(\tau, \delta^\tau) + \frac{2\sigma^4}{\sigma^2 + \tau^2}. \tag{7.8}$$

Here, the second term represents the increase in Bayes risk that arises from estimating $\tau^2$. ∥

It is remarkable that $\delta^{JS}$ has a reasonable Bayes risk for any value of $\tau^2$, although the latter is unknown to the experimenter. This establishes a degree of Bayesian robustness of the empirical Bayes estimator. Of course, the increase in risk is a function of $\sigma^2$ and can be quite large if $\sigma^2$ is large. Perhaps a more interesting comparison is obtained by looking at the relative increase in risk,

$$\frac{r(\tau, \delta^{JS}) - r(\tau, \delta^\tau)}{r(\tau, \delta^\tau)} = \frac{2}{p}\,\frac{\sigma^2}{\tau^2}.$$

We see that the relative increase is an increasing function of the ratio of the sample-to-prior variance and goes to 0 as $\sigma^2/\tau^2 \to 0$. Thus, the risk of the empirical Bayes estimator approaches that of the Bayes estimator as the sampling information becomes infinitely better than the prior information.

Example 7.4 Bayesian robustness of the James-Stein estimator. To further explore the robustness of the James-Stein estimator, consider what happens to the Bayes risk if the prior used to calculate the Bayes estimator differs from the prior used to evaluate the Bayes risk (a classic concern of robust Bayesians). For the model in Example 7.1, suppose we specify a value of $\tau$, say $\tau_0$. The Bayes estimator $\delta^{\tau_0}$ is given by $\delta^{\tau_0}(x_i) = [\tau_0^2/(\tau_0^2 + \sigma^2)]\,x_i$. When evaluating the Bayes risk, suppose we let the prior variance take on any value $\tau^2$, not necessarily equal to $\tau_0^2$.
Then, the Bayes risk of $\delta^{\tau_0}$ is (Problem 7.4)

$$r(\tau, \delta^{\tau_0}) = p\sigma^2\left(\frac{\tau_0^2}{\tau_0^2 + \sigma^2}\right)^2 + p\tau^2\left(\frac{\sigma^2}{\tau_0^2 + \sigma^2}\right)^2, \tag{7.9}$$

which is equal to the single-prior Bayes risk (7.1) when $\tau_0 = \tau$. However, as $\tau^2 \to \infty$, $r(\tau, \delta^{\tau_0}) \to \infty$, whereas $r(\tau, \delta^\tau) \to p\sigma^2$. In contrast, the Bayes risk of $\delta^{JS}$, given in (7.8), is valid for all $\tau$, with $r(\tau, \delta^{JS}) \to p\sigma^2$ as $\tau^2 \to \infty$. Thus, the Bayes risk of $\delta^{JS}$ remains finite for any prior in the class, demonstrating robustness. ∥

In constructing an empirical Bayes estimator in Example 7.1, the use of unbiased estimation of the hyperparameters led to the James-Stein estimator. If, instead, we had used maximum likelihood, the resulting empirical Bayes estimator would have been (Problem 7.1)

$$\delta^+(x) = \left(1 - \frac{p\sigma^2}{|x|^2}\right)^+ x, \tag{7.10}$$

where $(a)^+ = \max\{0, a\}$. Such estimators are known as positive-part Stein estimators. A problem with the empirical Bayes estimator (7.3) is that when $|x|^2$ is small (less than $(p - 2)\sigma^2$), the estimator has the "wrong sign"; that is, the signs of the components of $\delta^{JS}$ will be opposite to those of the Bayes estimator $\delta^\tau$. This does not happen with the estimator (7.10), and as a result, estimators like (7.10) tend to have improved Bayes risk performance.

Estimators such as (7.3) and (7.10) are called shrinkage estimators, since they tend to shrink the estimator $X$ toward 0, the shrinkage target. Actually, of the two, only (7.10) completely succeeds in this effort, since the shrinkage factor $1 - (p - 2)\sigma^2/|x|^2$ may take on negative (and even very large negative) values. Nevertheless, the terminology is used to cover (7.3) as well. The following theorem is due to Efron and Morris (1973a).

Theorem 7.5 Let $X \sim N_p(\theta, \sigma^2 I)$ and $\Theta \sim N_p(0, \tau^2 I)$, with loss function $L(\theta, \delta) = |\theta - \delta|^2$. If $\delta(x)$ is an estimator of the form $\delta(x) = [1 - B(x)]\,x$ and if $\delta^+(x) = [1 - B(x)]^+\,x$, then

$$r(\tau, \delta) \ge r(\tau, \delta^+),$$

with strict inequality if $P_\theta(\delta(X) \ne \delta^+(X)) > 0$.

Proof. For any estimator $\delta(x)$, the posterior expected loss is given by

$$E[L(\theta, \delta(x))|x] = \int \sum_{i=1}^p (\theta_i - \delta_i(x))^2\,\pi(\theta|x)\,d\theta = \int \sum_{i=1}^p \left[(\theta_i - E(\theta_i|x))^2 + (E(\theta_i|x) - \delta_i(x))^2\right]\pi(\theta|x)\,d\theta, \tag{7.11}$$

where we have added $\pm E(\theta_i|x)$ and expanded the square, noting that the cross-term is zero. Equation (7.11) can then be written as

$$E[L(\theta, \delta(x))|x] = \sum_{i=1}^p \text{var}(\theta_i|x) + \sum_{i=1}^p [E(\theta_i|x) - \delta_i(x)]^2. \tag{7.12}$$

As the first term in (7.12) does not depend on the particular estimator, the difference in posterior expected loss between $\delta$ and $\delta^+$ is

$$E[L(\theta, \delta(x))|x] - E[L(\theta, \delta^+(x))|x] = \sum_{i=1}^p \left\{[E(\theta_i|x) - \delta_i(x)]^2 - [E(\theta_i|x)]^2\right\} I(B(x) > 1), \tag{7.13}$$

since the estimators are identical when $B(x) \le 1$, and $\delta^+(x) = 0$ when $B(x) > 1$. However, since $E(\theta_i|x) = [\tau^2/(\sigma^2 + \tau^2)]\,x_i$, it follows that when $B(x) > 1$,

$$\left(\frac{\tau^2}{\sigma^2 + \tau^2}\,x_i - \delta_i(x)\right)^2 > \left(\frac{\tau^2}{\sigma^2 + \tau^2}\,x_i\right)^2.$$

Thus, (7.13) is nonnegative for all $x$ (and positive when $B(x) > 1$), and the result follows by taking expectations. ✷

In view of results like Theorem 7.5 and other risk results in Chapter 5 (see Theorem 5.5.4), the positive-part Stein estimator

$$\delta^+(x) = \left(1 - \frac{(p - 2)\sigma^2}{|x|^2}\right)^+ x \tag{7.14}$$

is preferred to the ordinary James-Stein estimator (7.3). Moreover, Theorem 7.5 generalizes to the entire exponential family (Problem 7.8). It also supports the use of maximum likelihood estimation in empirical Bayes constructions.

The good Bayes risk performance of empirical Bayes estimators is not restricted to the normal case, nor to squared error loss. We next look at Bayes and empirical Bayes estimation in the Poisson case.
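Theorem 7.5 can likewise be illustrated by simulating from the Bayes model. A minimal sketch comparing the Bayes risks of (7.3) and (7.14); the values of $p$, $\sigma^2$, and $\tau^2$ are illustrative, with $\tau^2$ taken small so that the positive-part modification is frequently active:

```python
import numpy as np
rng = np.random.default_rng(4)

p, sigma2, tau2, reps = 5, 1.0, 0.5, 100_000  # illustrative values
theta = rng.normal(0, np.sqrt(tau2), size=(reps, p))  # draws from the prior
X = rng.normal(theta, np.sqrt(sigma2))                # draws from the model
B = (p - 2) * sigma2 / np.sum(X**2, axis=1, keepdims=True)
loss_js = np.sum(((1 - B) * X - theta) ** 2, axis=1)              # (7.3)
loss_pp = np.sum((np.clip(1 - B, 0, None) * X - theta) ** 2, axis=1)  # (7.14)
print(loss_js.mean(), loss_pp.mean())  # the positive-part Bayes risk is smaller
```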
Example 7.6 Poisson Bayes and empirical Bayes estimation. Recall the Poisson model of Example 6.6:

$$X_i \sim \text{Poisson}(\lambda_i),\quad i = 1, \ldots, p,\ \text{independent},\qquad \lambda_i \sim \text{Gamma}(a, b). \tag{7.15}$$

For the estimation of $\lambda_i$ under the loss

$$L_k(\lambda, \delta) = \sum_{i=1}^p \frac{1}{\lambda_i^k}(\lambda_i - \delta_i)^2, \tag{7.16}$$

the Bayes estimator (see Example 1.3) is

$$\delta_i^k(x) = \frac{b}{b + 1}(x_i + a - k). \tag{7.17}$$

The posterior expected loss of $\delta_i^k(x) = \delta_i^k(x_i)$ is

$$E\left\{\frac{1}{\lambda_i^k}\left[\lambda_i - \delta_i^k(x_i)\right]^2\Big|x_i\right\} = \frac{1}{\Gamma(a + x_i)\left(\frac{b}{b+1}\right)^{a + x_i}}\int_0^\infty (\lambda_i - \delta_i^k)^2\,\lambda_i^{a + x_i - k - 1}\,e^{-\lambda_i\frac{b + 1}{b}}\,d\lambda_i, \tag{7.18}$$

since the posterior distribution of $\lambda_i|x_i$ is Gamma$\left(a + x_i, \frac{b}{b+1}\right)$. Evaluating the integral in (7.18) gives

$$E\left\{\frac{1}{\lambda_i^k}\left[\lambda_i - \delta_i^k(x_i)\right]^2\Big|x_i\right\} = \frac{\Gamma(a + x_i - k)}{\Gamma(a + x_i)}\left(\frac{b}{b + 1}\right)^{2 - k}(a + x_i - k). \tag{7.19}$$

To evaluate the Bayes risk $r(k, \delta^k)$, we next sum (7.19) with respect to the marginal distribution of $X_i$, which is negative binomial$\left(a, \frac{1}{b+1}\right)$. For $k = 0$ and $k = 1$, we obtain

$$r(0, \delta^0) = p\,\frac{ab^2}{b + 1} \qquad\text{and}\qquad r(1, \delta^1) = p\,\frac{b}{b + 1}.$$

See Problems 7.10 and 7.11 for details. For the model (7.15) with loss function $L_k(\lambda, \delta)$ of (7.16), an empirical Bayes estimator can be derived (similar to (6.24); see Example 6.6) as

$$\delta_{ki}^{EB}(x) = \frac{\bar x}{\bar x + a}(x_i + a - k). \tag{7.20}$$

We shall now consider the risk of the estimator $\delta^{EB}$. For the loss function (7.16), we can actually evaluate the risk of a more general estimator than $\delta^{EB}$. The coordinatewise posterior expected loss of an estimator of the form $\delta_i^\varphi = \varphi(\bar x)(x_i + a - k)$ is

$$E\left\{\frac{1}{\lambda_i^k}\left[\lambda_i - \delta_i^\varphi\right]^2\Big|x\right\} = \frac{1}{\Gamma(a + x_i)\left(\frac{b}{b+1}\right)^{a + x_i}}\int_0^\infty \left[\lambda_i - \delta_i^\varphi(x)\right]^2\,\lambda_i^{a + x_i - k - 1}\,e^{-\lambda_i\frac{b + 1}{b}}\,d\lambda_i = \frac{\Gamma(a + x_i - k)}{\Gamma(a + x_i)}\left(\frac{b}{b + 1}\right)^{-k} E\left[(\lambda_i - \delta_i^\varphi(x))^2\,|x\right], \tag{7.21}$$

where the expectation is over the random variable $\lambda_i$ with distribution Gamma$\left(a + x_i - k, \frac{b}{b+1}\right)$. Using the same technique as in the proof of Theorem 7.5 [see (7.12)], we add $\pm\delta_i^k(x_i) = \frac{b}{b+1}(a + x_i - k)$ in (7.21) to get

$$E\left\{\frac{1}{\lambda_i^k}\left[\lambda_i - \delta_i^\varphi\right]^2\Big|x\right\} = \frac{\Gamma(a + x_i - k)}{\Gamma(a + x_i)}\left(\frac{b}{b + 1}\right)^{2 - k}(a + x_i - k) + \frac{\Gamma(a + x_i - k)}{\Gamma(a + x_i)}\left(\frac{b}{b + 1}\right)^{-k}\left[\frac{b}{b + 1} - \varphi(\bar x)\right]^2(a + x_i - k)^2. \tag{7.22}$$

The first term in (7.22) is the posterior expected loss of the Bayes estimator, and the second term reflects the penalty for estimating $b$. Evaluation of the Bayes risk, which involves summing over $x_i$, is somewhat involved (see Problem 7.11). Instead, Table 7.1 provides a few numerical comparisons. Specifically, it shows the Bayes risks for the Bayes ($\delta^k$), empirical Bayes ($\delta^{EB}$), and unbiased ($X$) estimators of Example 6.6, based on observing $p$ independent Poisson variables, for the loss function (7.16) with $k = 1$. The gamma parameters are chosen so that the prior mean equals 10 and the prior variances are 5 ($a = 20$, $b = .5$), 10 ($a = 10$, $b = 1$), and 25 ($a = 4$, $b = 2.5$). It is seen that the empirical Bayes estimator attains a reasonable Bayes risk reduction over that of $X$ and, in some cases, comes quite close to the optimum.

Table 7.1. Comparisons of Some Bayes Risks for Model (7.15)

    p = 5
    Prior var.   δᵏ of (7.17)   δEB of (7.20)      X
    5                1.67           2.28         5.00
    10               2.50           2.99         5.00
    25               3.57           3.84         5.00

    p = 20
    Prior var.   δᵏ of (7.17)   δEB of (7.20)      X
    5                6.67           7.31        20.00
    10              10.00          10.51        20.00
    25              14.29          14.52        20.00

∥
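The entries of Table 7.1 can be reproduced by Monte Carlo. A minimal sketch for the cell $p = 5$, prior variance 10 ($a = 10$, $b = 1$), with $k = 1$:

```python
import numpy as np
rng = np.random.default_rng(5)

a, b, p, k, reps = 10.0, 1.0, 5, 1, 200_000
lam = rng.gamma(a, b, size=(reps, p))   # draws from the Gamma(a, b) prior
x = rng.poisson(lam)                    # Poisson counts given lambda
xbar = x.mean(axis=1, keepdims=True)

def bayes_risk(delta):
    # Monte Carlo estimate of the Bayes risk under the loss (7.16)
    return np.mean(np.sum((lam - delta) ** 2 / lam**k, axis=1))

print(bayes_risk(b / (b + 1) * (x + a - k)))         # Bayes rule (7.17): about 2.50
print(bayes_risk(xbar / (xbar + a) * (x + a - k)))   # EB rule (7.20): about 2.99
print(bayes_risk(x))                                 # unbiased X: about 5.00
```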
As a final example of Bayes risk performance, we turn now to the analysis of variance. Here, we shall consider only the one-way layout (Examples 4.1, 4.6, and 4.9) in detail. Other situations and generalizations are illustrated in the problems (Problems 7.17 and 7.18).

Example 7.7 Empirical Bayes analysis of variance. In the one-way layout (considered earlier in Example 3.4.9 from the point of view of equivariance), we have

$$X_{ij} \sim N(\xi_i, \sigma^2),\ j = 1, \ldots, n_i;\ i = 1, \ldots, s, \qquad \xi_i = \mu + \alpha_i,\ i = 1, \ldots, s, \qquad (7.23)$$

where we assume that $\sum_i \alpha_i = 0$ to ensure the identifiability of the parameters. With this restriction, the parameterization in terms of $\mu$ and $\alpha_i$ is equivalent to that in terms of $\xi_i$, with the latter parameterization (the so-called cell means model; see Searle 1987) being computationally more friendly. As interest often lies in estimation of, and testing hypotheses about, the differences of the $\alpha_i$'s, which are equivalent to the differences of the $\xi_i$'s, we will use the $\xi_i$ version of the model. We will also specialize to the balanced case where all $n_i$'s are equal. The more general case requires some (often much) extra effort (see Problems 7.16 and 7.19).

As an illustration, consider an experiment to assess the effect of linseed oil meal on the digestibility of food by steers. The measurements are a digestibility coefficient, and there are five treatments, representing different amounts of linseed oil meal added to the feed (approximately 1, 2, 3, 4, and 5 kg/animal/day; see Hsu 1982 for more details). The variable $X_{ij}$ of (7.23) is the $j$th digestibility measurement in the $i$th treatment group, where $\xi_i$ is the true coefficient of digestibility of that group.

Perhaps the most common hypothesis about the $\xi_i$'s is

$$H_0: \xi_1 = \xi_2 = \cdots = \xi_s = \mu,\ \mu \text{ unknown}. \qquad (7.24)$$

This specifies that the means are equal and, hence, the treatment groups are equivalent in that they each result in the same (unknown) mean level of digestibility. This hypothesis can be thought of as specifying a submodel where all of the $\xi$'s are equal, which suggests expanding (7.23) into the hierarchical model

$$X_{ij}\,|\,\xi_i \sim N(\xi_i, \sigma^2),\ j = 1, \ldots, n,\ i = 1, \ldots, s,\ \text{independent}, \qquad \xi_i\,|\,\mu \sim N(\mu, \tau^2),\ i = 1, \ldots, s,\ \text{independent}. \qquad (7.25)$$

The model (7.25) is obtained from (7.24) by allowing some variation around the prior mean, $\mu$, in the form of a normal distribution. In analogy to (4.2.4), the Bayes estimator of $\xi_i$ is

$$\delta^B(\bar x_i) = \frac{\sigma^2}{\sigma^2 + n\tau^2}\mu + \frac{n\tau^2}{\sigma^2 + n\tau^2}\bar x_i. \qquad (7.26)$$

Calculation of an empirical Bayes estimator is straightforward. Since the marginal distribution of $\bar X_i$ is

$$\bar X_i \sim N\left(\mu, \frac{\sigma^2}{n} + \tau^2\right),\ i = 1, \ldots, s,$$

the MLE of $\mu$ is $\bar x = \sum_i\sum_j x_{ij}/ns$, and the resulting empirical Bayes estimator is

$$\delta_i^{EB} = \frac{\sigma^2}{\sigma^2 + n\tau^2}\bar x + \frac{n\tau^2}{\sigma^2 + n\tau^2}\bar x_i. \qquad (7.27)$$

Note that $\delta^{EB}$ is a linear combination of $\bar X_i$, the UMVU estimator under the full model, and $\bar X$, the UMVU estimator under the submodel that specifies $\xi_1 = \cdots = \xi_s$.

If we drop the assumption that $\tau^2$ is known, we can estimate $(\sigma^2/n + \tau^2)^{-1}$ by the unbiased estimator $(s-3)/\sum_i(\bar x_i - \bar x)^2$ and obtain the empirical Bayes estimator

$$\delta_i^L = \bar x + \left(1 - \frac{(s-3)\sigma^2/n}{\sum_j(\bar x_j - \bar x)^2}\right)(\bar x_i - \bar x), \qquad (7.28)$$

which was first derived by Lindley (1962) and examined in detail by Efron and Morris (1972a, 1972b, 1973a, 1973b). Calculation of the Bayes risk of $\delta^L$ proceeds as in Example 7.3, and leads to

$$r(\xi, \delta^L) = s\,\frac{\sigma^2}{n} - (s-3)^2\left(\frac{\sigma^2}{n}\right)^2 E\left[\left(\sum_{i=1}^s (\bar X_i - \bar X)^2\right)^{-1}\right] = r(\xi, \delta^B) + \frac{3(\sigma^2/n)^2}{\sigma^2/n + \tau^2}, \qquad (7.29)$$

where $\sum_{i=1}^s(\bar X_i - \bar X)^2 \sim (\sigma^2/n + \tau^2)\chi^2_{s-1}$ and $r(\xi, \delta^B)$ is the risk of the Bayes estimator (7.26). See Problem 7.14 for details.

If we compare (7.29) to (7.8), we see that the Bayes risk performance of $\delta^L$, where we have estimated the value of $\mu$, is similar to that of $\delta^{JS}$, where we assume that the value of $\mu$ is known. The difference is that $\delta^L$ pays an extra penalty for estimating the point that is the shrinkage target. For $\delta^{JS}$, the target is assumed known and taken to be 0, while $\delta^L$ estimates it by $\bar X$. The penalty for this is that the factor in the term added to the Bayes risk is increased from 2 in (7.8) to 3 in (7.29). In general, if we shrink to a $k$-dimensional subspace, this factor is $2 + k$. ∥
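The following is a minimal sketch of Lindley's estimator (7.28) in Python with NumPy. The group sizes, variances, and prior mean below are illustrative assumptions; the script simulates one balanced one-way layout and compares the summed squared error of the group means with that of $\delta^L$.

```python
import numpy as np

rng = np.random.default_rng(2)
s, n, sigma2, tau2, mu = 8, 5, 1.0, 2.0, 10.0

# One-way layout: X_ij ~ N(xi_i, sigma^2), with xi_i ~ N(mu, tau^2)
xi = rng.normal(mu, np.sqrt(tau2), s)
x = rng.normal(xi[:, None], np.sqrt(sigma2), (s, n))

xbar_i = x.mean(axis=1)   # group means
xbar = x.mean()           # grand mean

# Lindley's estimator (7.28): shrink group means toward the grand mean;
# (s - 3)(sigma^2/n) / sum (xbar_i - xbar)^2 estimates the shrinkage weight.
shrink = 1 - (s - 3) * (sigma2 / n) / np.sum((xbar_i - xbar) ** 2)
delta_L = xbar + shrink * (xbar_i - xbar)

print(np.sum((xbar_i - xi) ** 2), np.sum((delta_L - xi) ** 2))
```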
More general submodels can also be incorporated in empirical Bayes analyses, and in many cases, the resulting estimators retain good Bayes risk performance.

Example 7.8 Analysis of variance with regression submodel. Another common hypothesis (or submodel) in the analysis of variance is that of a linear trend in the means, which was considered earlier in Example 3.4.7 and can be written as the null hypothesis

$$H_0: \xi_i = \alpha + \beta t_i,\ i = 1, \ldots, s,\ \alpha \text{ and } \beta \text{ unknown},\ t_i \text{ known}.$$

For the situation of Example 7.7, this hypothesis would assert that the effect of the quantity of linseed oil meal on digestibility is linear. (We know that as the quantity of linseed oil meal increases, the coefficient of digestibility decreases. But we do not know if this relationship is linear.) In analogy with (7.25), we can translate the hypothesis into the hierarchical model

$$X_{ij}\,|\,\xi_i \sim N(\xi_i, \sigma^2),\ j = 1, \ldots, n,\ i = 1, \ldots, s, \qquad \xi_i\,|\,\alpha, \beta \sim N(\alpha + \beta t_i, \tau^2),\ i = 1, \ldots, s. \qquad (7.30)$$

Again, the hypothesis models the prior mean of the $\xi_i$'s, and we allow variation around this prior mean in the form of a normal distribution. Using squared error loss, the Bayes estimator of $\xi_i$ is

$$\delta_i^B = \frac{\sigma^2}{\sigma^2 + n\tau^2}(\alpha + \beta t_i) + \frac{n\tau^2}{\sigma^2 + n\tau^2}\bar X_i. \qquad (7.31)$$

For an empirical Bayes estimator, we calculate the marginal distribution of $\bar X_i$,

$$\bar X_i \sim N\left(\alpha + \beta t_i,\ \frac{\sigma^2}{n} + \tau^2\right),\ i = 1, \ldots, s,$$

and estimate $\alpha$ and $\beta$ by

$$\hat\alpha = \bar X - \hat\beta\,\bar t, \qquad \hat\beta = \frac{\sum_i(\bar X_i - \bar X)(t_i - \bar t)}{\sum_i(t_i - \bar t)^2},$$

the UMVU estimators of $\alpha$ and $\beta$ (Section 3.4). This yields the empirical Bayes estimator

$$\delta_i^{EB1} = \frac{\sigma^2}{\sigma^2 + n\tau^2}(\hat\alpha + \hat\beta t_i) + \frac{n\tau^2}{\sigma^2 + n\tau^2}\bar X_i. \qquad (7.32)$$

If $\tau^2$ is unknown, we can, in analogy to Example 7.7, use the fact that, marginally, $E[(s-4)/\sum_i(\bar X_i - \hat\alpha - \hat\beta t_i)^2] = (\sigma^2/n + \tau^2)^{-1}$ to construct the estimator

$$\delta_i^{EB2} = \hat\alpha + \hat\beta t_i + \left(1 - \frac{(s-4)\sigma^2/n}{\sum_j(\bar X_j - \hat\alpha - \hat\beta t_j)^2}\right)(\bar X_i - \hat\alpha - \hat\beta t_i) \qquad (7.33)$$

with Bayes risk

$$r(\tau, \delta^{EB2}) = s\,\frac{\sigma^2}{n} - (s-4)^2\left(\frac{\sigma^2}{n}\right)^2 E\left[\left(\sum_{i=1}^s(\bar X_i - \hat\alpha - \hat\beta t_i)^2\right)^{-1}\right] = r(\xi, \delta^B) + \frac{4(\sigma^2/n)^2}{\sigma^2/n + \tau^2}, \qquad (7.34)$$

where $r(\xi, \delta^B)$ is the risk of the Bayes estimator $\delta^B$ of (7.31). See Problem 7.14 for details. Notice that here we shrank the estimator toward a two-dimensional submodel, and the factor in the second term of the Bayes risk is $4$ ($= 2 + k$). We also note that for $\delta^{EB2}$, as well as $\delta^{JS}$ and $\delta^L$, the Bayes risk approaches that of $\delta^\pi$ as $n \to \infty$. ∥
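A minimal sketch of the regression-submodel estimator (7.33), in Python with NumPy. The covariate values and the true $\alpha$, $\beta$ below are hypothetical, chosen only to exercise the formulas; the script fits the submodel by least squares and shrinks the group means toward the fitted line.

```python
import numpy as np

rng = np.random.default_rng(3)
s, n, sigma2, tau2 = 10, 6, 1.0, 0.5
t = np.arange(1.0, s + 1)      # known covariate values (assumed)
alpha, beta = 2.0, -0.3        # hypothetical submodel parameters

xi = rng.normal(alpha + beta * t, np.sqrt(tau2))
x = rng.normal(xi[:, None], np.sqrt(sigma2), (s, n))
xbar_i = x.mean(axis=1)

# Least squares (UMVU) fit of the submodel mean alpha + beta * t_i
bhat = np.sum((xbar_i - xbar_i.mean()) * (t - t.mean())) / np.sum((t - t.mean()) ** 2)
ahat = xbar_i.mean() - bhat * t.mean()
resid = xbar_i - (ahat + bhat * t)

# Empirical Bayes estimator (7.33): shrink toward the fitted line
shrink = 1 - (s - 4) * (sigma2 / n) / np.sum(resid ** 2)
delta_EB2 = (ahat + bhat * t) + shrink * resid

print(np.sum((xbar_i - xi) ** 2), np.sum((delta_EB2 - xi) ** 2))
```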
In both Examples 7.7 and 7.8, empirical Bayes estimators provide a means for attaining reasonable Bayes risk performance if $\sigma^2/\tau^2$ is not too large, yet do not require full specification of a prior distribution. An obvious limitation of these results, however, is the dimension of the submodel. The ordinary James-Stein estimator shrinks toward the point 0 [see Equation (7.3)], or any specified point (Problem 7.6), and hence toward a submodel (subspace) of dimension zero. In the analysis of variance, Example 7.7, the subspace of the submodel has dimension 1, $\{(\xi_1, \ldots, \xi_s): \xi_i = \mu,\ i = 1, \ldots, s\}$, and in Example 7.8, it has dimension 2, $\{(\xi_1, \ldots, \xi_s): \xi_i = \alpha + \beta t_i,\ i = 1, \ldots, s\}$. In general, the empirical Bayes strategies developed here will only work if the dimension of the submodel, $r$, is more than two below that of the full model, $s$; that is, $s - r > 2$. This is a technical requirement, as the marginal distribution of interest is $\chi^2_{s-r}$, and estimation is problematic if $s - r \le 2$. The reason for this difficulty is the need to calculate the expectation $E(1/\chi^2_{s-r})$, which is infinite if $s - r \le 2$. (See Problem 7.6; also see Problem 6.12 for an attempt at empirical Bayes if $s - r \le 2$.)

In light of Theorem 7.5, we can improve the empirical Bayes estimators of Examples 7.7 and 7.8 by using their positive-part versions. Moreover, Problem 7.8 shows that such an improvement will hold throughout the entire exponential family. Thus, the strategy of taking a positive part should always be employed in these cases of empirical Bayes estimation.

Finally, we note that Examples 7.7 and 7.8 can be greatly generalized. One can handle unequal $n_i$, unequal variances, full covariance matrices, general linear submodels, and more. In some cases, the algebra can become somewhat overwhelming, and details about the performance of the estimators may become obscured. We examine a number of these cases in Problems 7.16–7.18.

8 Problems

Section 1

1.1 Verify the expressions for $\pi(\lambda|\bar x)$ and $\delta^k(\bar x)$ in Example 1.3.

1.2 Give examples of pairs of values $(a, b)$ for which the beta density $B(a, b)$ is (a) decreasing, (b) increasing, (c) increasing for $p < p_0$ and decreasing for $p > p_0$, and (d) decreasing for $p < p_0$ and increasing for $p > p_0$.

1.3 In Example 1.5, if $p$ has the improper prior density $1/[p(1-p)]$, show that the posterior density of $p$ given $x$ is proper, provided $0 < x < n$.

1.4 In Example 1.5, find the Jeffreys prior for $p$ and the associated Bayes estimator $\delta$.

1.5 For the estimator $\delta$ of Problem 1.4, (a) calculate the bias and maximum bias; (b) calculate the expected squared error and compare it with that of the UMVU estimator.

1.6 In Example 1.5, find the Bayes estimator $\delta$ of $p(1-p)$ when $p$ has the prior $B(a, b)$.

1.7 For the situation of Example 1.5, the UMVU estimator of $p(1-p)$ is $\delta' = [x(n-x)]/[n(n-1)]$ (see Example 2.3.1 and Problem 2.3.1). (a) Compare the estimator $\delta$ of Problem 1.6 with the UMVU estimator $\delta'$. (b) Compare the expected squared error of the estimator of $p(1-p)$ for the Jeffreys prior in Example 1.5 with that of $\delta'$.

1.8 In analogy with Problem 1.2, determine the possible shapes of the gamma density $\Gamma(g, 1/\alpha)$, $\alpha, g > 0$.

1.9 Let $X_1, \ldots, X_n$ be iid according to the Poisson distribution $P(\lambda)$, and let $\lambda$ have a gamma distribution $\Gamma(g, \alpha)$. (a) For squared error loss, show that the Bayes estimator $\delta_{\alpha,g}$ of $\lambda$ has a representation analogous to (1.1.13). (b) What happens to $\delta_{\alpha,g}$ as (i) $n \to \infty$, (ii) $\alpha \to \infty$, $g \to 0$, or both?

1.10 For the situation of the preceding problem, solve the two parts corresponding to Problem 1.5(a) and (b).

1.11 In Problem 1.9, if $\lambda$ has the improper prior density $d\lambda/\lambda$ (corresponding to $\alpha = g = 0$), under what circumstances is the posterior distribution proper?

1.12 Solve the problems analogous to Problems 1.9 and 1.10 when the observations consist of a single random variable $X$ having a negative binomial distribution $Nb(p, m)$, $p$ has the beta prior $B(a, b)$, and the estimand is (a) $p$ and (b) $1/p$.

Section 2

2.1 Referring to Example 1.5, suppose that $X$ has the binomial distribution $b(p, n)$ and the family of prior distributions for $p$ is the family of beta distributions $B(a, b)$. (a) Show that the marginal distribution of $X$ is the beta-binomial distribution with mass function

$$\binom{n}{x}\frac{\Gamma(a+b)}{\Gamma(a)\Gamma(b)}\,\frac{\Gamma(x+a)\Gamma(n-x+b)}{\Gamma(n+a+b)}.$$

(b) Show that the mean and variance of the beta-binomial are given by

$$EX = \frac{na}{a+b} \quad\text{and}\quad \operatorname{var} X = n\left(\frac{a}{a+b}\right)\left(\frac{b}{a+b}\right)\left(\frac{a+b+n}{a+b+1}\right).$$

[Hint: For part (b), the identities $EX = E[E(X|p)]$ and $\operatorname{var} X = \operatorname{var}[E(X|p)] + E[\operatorname{var}(X|p)]$ are helpful.]
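The beta-binomial moments of Problem 2.1(b) are easy to verify by simulation. A minimal sketch in Python with NumPy (parameter values are arbitrary illustrations):

```python
import numpy as np

rng = np.random.default_rng(4)
n, a, b, reps = 12, 3.0, 5.0, 500000

# Simulate the marginal of Problem 2.1: p ~ Beta(a, b), then X | p ~ Bin(n, p)
p = rng.beta(a, b, reps)
x = rng.binomial(n, p)

mean_thy = n * a / (a + b)
var_thy = n * (a / (a + b)) * (b / (a + b)) * (a + b + n) / (a + b + 1)
print(x.mean(), mean_thy)  # should agree closely
print(x.var(), var_thy)    # should agree closely
```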
2.2 For the situation of Example 2.1, Lindley and Phillips (1976) give a detailed account of the effect of stopping rules, which we can illustrate as follows. Let $X$ be the number of successes in $n$ Bernoulli trials with success probability $p$. (a) Suppose that the number of Bernoulli trials performed is a prespecified number $n$, so that we have the binomial sampling model, $P(X = x) = \binom{n}{x}p^x(1-p)^{n-x}$, $x = 0, 1, \ldots, n$. Calculate the Bayes risk of the Bayes estimator (1.1.12) and the UMVU estimator of $p$. (b) Suppose that the number of Bernoulli trials performed is a random variable $N$. The value $N = n$ was obtained when a prespecified number, $x$, of successes was observed, so that we have the negative binomial sampling model, $P(N = n) = \binom{n-1}{x-1}p^x(1-p)^{n-x}$, $n = x, x+1, \ldots$. Calculate the Bayes risk of the Bayes estimator and the UMVU estimator of $p$. (c) Calculate the mean squared errors of all three estimators under each model. If it is unknown which sampling mechanism generated the data, which estimator do you prefer overall?

2.3 Show that the estimator (2.2.4) tends in probability (a) to $\theta$ as $n \to \infty$, (b) to $\mu$ as $b \to 0$, and (c) to $\theta$ as $b \to \infty$.

2.4 Bickel and Mallows (1988) further investigate the relationship between unbiasedness and Bayes, specifying conditions under which these properties cannot hold simultaneously. In addition, they show that if a prior distribution is improper, then a posterior mean can be unbiased. Let $X \sim \frac{1}{\theta}f(x/\theta)$, $x > 0$, where $\int_0^\infty t f(t)\,dt = 1$, and let $\pi(\theta)\,d\theta = \frac{1}{\theta^2}\,d\theta$, $\theta > 0$. (a) Show that $E(X|\theta) = \theta$, so $X$ is unbiased. (b) Show that $\pi(\theta|x) = \frac{x^2}{\theta^3}f(x/\theta)$ is a proper density. (c) Show that $E(\theta|x) = x$, and hence the posterior mean is unbiased.

2.5 DasGupta (1994) presents an identity relating the Bayes risk to bias, which illustrates that a small bias can help achieve a small Bayes risk. Let $X \sim f(x|\theta)$ and $\theta \sim \pi(\theta)$. The Bayes estimator under squared error loss is $\delta^\pi = E(\theta|x)$. Show that the Bayes risk of $\delta^\pi$ can be written

$$r(\pi, \delta^\pi) = \int\int_{\mathcal X} [\theta - \delta^\pi(x)]^2 f(x|\theta)\pi(\theta)\,dx\,d\theta = \int \theta\, b(\theta)\pi(\theta)\,d\theta,$$

where $b(\theta) = E[\delta^\pi(X)|\theta] - \theta$ is the bias of $\delta^\pi$.

2.6 Verify the estimator (2.2.10).

2.7 In Example 2.6, verify that the posterior distribution of $\tau$ is $\Gamma(r + g - 1/2,\ 1/(\alpha + z))$.

2.8 In Example 2.6 with $\alpha = g = 0$, show that the posterior distribution given the $X$'s of $\sqrt{n}(\theta - \bar X)/\sqrt{Z/(n-1)}$ is Student's $t$-distribution with $n - 1$ degrees of freedom.

2.9 In Example 2.6, show that the posterior distribution of $\theta$ is symmetric about $\bar x$ when the joint prior of $\theta$ and $\sigma$ is of the form $h(\sigma)\,d\sigma\,d\theta$, where $h$ is an arbitrary probability density on $(0, \infty)$.

2.10 Rukhin (1978) investigates the situation when the Bayes estimator is the same for every loss function in a certain set of loss functions, calling such estimators universal Bayes estimators. For the case of Example 2.6, using the prior of the form of Problem 2.9, show that $\bar X$ is the Bayes estimator under every even loss function.
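Problem 2.4 above can also be checked numerically. The sketch below (Python with NumPy and SciPy; the choice $f(t) = e^{-t}$, which satisfies $\int_0^\infty t f(t)\,dt = 1$, is an assumption made for illustration) computes the posterior mean under the improper prior $1/\theta^2$ by quadrature and confirms $E(\theta|x) = x$.

```python
import numpy as np
from scipy.integrate import quad

# Problem 2.4 with f(t) = e^{-t}: X | theta ~ (1/theta) e^{-x/theta}, prior 1/theta^2
x = 2.7  # an arbitrary observed value
joint = lambda th: (1.0 / th) * np.exp(-x / th) * th**-2
num, _ = quad(lambda th: th * joint(th), 0, np.inf)
den, _ = quad(joint, 0, np.inf)
print(num / den, x)  # the posterior mean equals the observation x
```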
2.11 Let $X$ and $Y$ be independently distributed according to distributions $P_\xi$ and $Q_\eta$, respectively. Suppose that $\xi$ and $\eta$ are real-valued and independent according to some prior distributions $\Lambda$ and $\Lambda'$. If, with squared error loss, $\delta_\Lambda$ is the Bayes estimator of $\xi$ on the basis of $X$, and $\delta'_{\Lambda'}$ is that of $\eta$ on the basis of $Y$, (a) show that $\delta'_{\Lambda'} - \delta_\Lambda$ is the Bayes estimator of $\eta - \xi$ on the basis of $(X, Y)$; (b) if $\eta > 0$ and $\delta^*_{\Lambda'}$ is the Bayes estimator of $1/\eta$ on the basis of $Y$, show that $\delta_\Lambda \cdot \delta^*_{\Lambda'}$ is the Bayes estimator of $\xi/\eta$ on the basis of $(X, Y)$.

2.12 For the density (2.2.13) and improper prior $(d\sigma/\sigma)\cdot(d\sigma_A/\sigma_A)$, show that the posterior distribution of $(\sigma, \sigma_A)$ continues to be improper.

2.13 (a) In Example 2.7, obtain the Jeffreys prior distribution of $(\sigma, \tau)$. (b) Show that for the prior of part (a), the posterior distribution of $(\sigma, \tau)$ is proper.

2.14 Verify the Bayes estimator (2.2.14).

2.15 Let $X \sim N(\theta, 1)$ and $L(\theta, \delta) = (\theta - \delta)^2$. (a) Show that $X$ is the limit of the Bayes estimators $\delta^{\pi_n}$, where $\pi_n$ is $N(0, n)$. Hence, $X$ is both generalized Bayes and a limit of Bayes estimators. (b) For the prior measure $\pi(\theta) = e^{a\theta}$, $a > 0$, show that the generalized Bayes estimator is $X + a$. (c) For $a > 0$, show that there is no sequence of proper priors for which $\delta^{\pi_n} \to X + a$.

[This example is due to Farrell; see Kiefer 1966. Heath and Sudderth (1989), building on the work of Stone (1976), showed that inferences from this model are incoherent, and established when generalized Bayes estimators will lead to coherent (that is, noncontradictory) inferences. Their work is connected to the theory of "approximable by proper priors," developed by Stein (1965) and Stone (1965, 1970, 1976), which shows when generalized Bayes estimators can be looked upon as Bayes estimators.]

2.16 (a) For the situation of Example 2.8, verify that $\delta(x) = x/n$ is a generalized Bayes estimator. (b) If $X \sim N(\theta, 1)$ and $L(\theta, \delta) = (\theta - \delta)^2$, show that $X$ is generalized Bayes under the improper prior $\pi(\theta) = 1$.

Section 3

3.1 For the situation of Example 3.1: (a) Verify that the Bayes estimator will only depend on the data through $Y = \max_i X_i$. (b) Show that $E(\theta|y, a, b)$ can be expressed as

$$E(\theta|y, a, b) = \frac{1}{b(n+a-1)}\,\frac{P(\chi^2_{2(n+a-1)} < 2/(by))}{P(\chi^2_{2(n+a)} < 2/(by))},$$

where $\chi^2_\nu$ is a chi-squared random variable with $\nu$ degrees of freedom. (In this form, the estimator is particularly easy to calculate, as many computer packages will have the chi-squared distribution built in.)

3.2 Let $X_1, \ldots, X_n$ be iid from $\text{Gamma}(a, b)$, where $a$ is known. (a) Verify that the conjugate prior for the natural parameter $\eta = -1/b$ is equivalent to an inverted gamma prior on $b$. (b) Using the prior in part (a), find the Bayes estimator under the losses (i) $L(b, \delta) = (b - \delta)^2$ and (ii) $L(b, \delta) = (1 - \delta/b)^2$. (c) Express the estimator in part (b)(i) in the form (3.3.9). Can the same be done for the estimator in part (b)(ii)?

3.3 (a) Prove Corollary 3.3. (b) Verify the calculation of the Bayes estimator in Example 3.4.

3.4 Using Stein's identity (Lemma 1.5.15), show that if $X_i \sim p_{\eta_i}(x)$ of (3.3.7), then

$$E_\eta(-\nabla \log h(X)) = \eta, \qquad R(\eta, -\nabla\log h(X)) = \sum_{i=1}^p E_\eta\left[-\frac{\partial^2}{\partial X_i^2}\log h(X)\right].$$

3.5 (a) If $X_i \sim \text{Gamma}(a, b)$, $i = 1, \ldots, p$, independent with $a$ known, calculate $-\nabla\log h(x)$ and its expected value. (b) Apply the results of part (a) to the situation where $X_i \sim N(0, \sigma_i^2)$, $i = 1, \ldots, p$, independent. Does it lead to an unbiased estimator of $\sigma_i^2$? [Note: For part (b), squared error loss on the natural parameter $1/\sigma^2$ leads to the loss $L(\sigma^2, \delta) = (\sigma^2\delta - 1)^2/\sigma^4$ for estimation of $\sigma^2$.] (c) If $X_i$ has density $\frac{\tan(a_i\pi)}{\pi}x^{a_i}(1-x)^{-1}$, $0 < x < 1$, $i = 1, \ldots, p$, independent, evaluate $-\nabla\log h(X)$ and show that it is an unbiased estimator of $a = (a_1, \ldots, a_p)$.
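The unbiasedness claim of Problems 3.4 and 3.5(a) is easy to check by simulation in the gamma case. Writing the $\text{Gamma}(a, b)$ density in natural form $e^{\eta x}h(x)c(\eta)$ with $\eta = -1/b$ and $h(x) = x^{a-1}$ gives $-\frac{d}{dx}\log h(x) = -(a-1)/x$, which should average to $\eta$. A minimal sketch in Python with NumPy (parameter values are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(5)
a, b, reps = 4.0, 2.0, 1000000

# For Gamma(a, b) in natural form with eta = -1/b and h(x) = x^{a-1},
# the statistic -(d/dx) log h(x) = -(a-1)/x should be unbiased for eta.
x = rng.gamma(shape=a, scale=b, size=reps)
est = -(a - 1) / x
print(est.mean(), -1.0 / b)  # both approximately -0.5
```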
3.6 For the situation of Example 3.6: (a) Show that if $\delta$ is a Bayes estimator of $\theta$, then $\delta' = \delta/\sigma^2$ is a Bayes estimator of $\eta$, and hence $R(\theta, \delta) = \sigma^4 R(\eta, \delta')$. (b) Show that the risk of the Bayes estimator of $\eta$ is given by

$$\frac{p\tau^4}{\sigma^2(\sigma^2+\tau^2)^2} + \left(\frac{\sigma^2}{\sigma^2+\tau^2}\right)^2\sum_i a_i^2,$$

where $a_i = \eta_i - \mu/\sigma^2$. (c) If $\sum_i a_i^2 = k$, a fixed constant, then the minimum risk is attained at $\eta_i = \mu/\sigma^2 + \sqrt{k/p}$.

3.7 If $X$ has the distribution $p_\theta(x)$ of (1.5.1), show that, similar to Theorem 3.2, $E(\dot T\eta(\theta)\,|\,x) = \nabla\log m_\pi(x) - \nabla\log h(x)$, where $\dot T = \{\partial T_i/\partial x_j\}$ is the Jacobian of $T$.

3.8 (a) Use Stein's identity (Lemma 1.5.15) to show that if $X_i \sim p_{\eta_i}(x)$ of (3.3.18), then

$$E_\eta(-\nabla\log h(X)) = \sum_i \eta_i E_\eta\left[\frac{\partial}{\partial X_j}T_i(X)\right].$$

(b) If the $X_i$ are iid from a gamma distribution $\text{Gamma}(a, b)$, where the shape parameter $a$ is known, use part (a) to find an unbiased estimator of $1/b$. (c) If the $X_i$ are iid from a $\text{beta}(a, b)$ distribution, can the identity in part (a) be used to obtain an unbiased estimator of $a$ when $b$ is known, or an unbiased estimator of $b$ when $a$ is known?

3.9 For the natural exponential family $p_\eta(x)$ of (3.3.7) and the conjugate prior $\pi(\eta|k, \mu)$ of (3.3.19), establish that: (a) $E(X) = A'(\eta)$ and $\operatorname{var} X = A''(\eta)$, where the expectation is with respect to the sampling density $p_\eta(x)$. (b) $E[A'(\eta)] = \mu$ and $\operatorname{var}[A'(\eta)] = (1/k)E[A''(\eta)]$, where the expectation is with respect to the prior distribution. [The results in part (b) enable us to think of $\mu$ as a prior mean and $k$ as a prior sample size.]

3.10 For each of the following situations, write the density in the form (3.3.7), and identify the natural parameter. Obtain the Bayes estimator of $A'(\eta)$ using squared error loss and the conjugate prior. Express your answer in terms of the original parameters. (a) $X \sim \text{binomial}(p, n)$, (b) $X \sim \text{Poisson}(\lambda)$, and (c) $X \sim \text{Gamma}(a, b)$, $a$ known.

3.11 For the situation of Problem 3.9, if $X_1, \ldots, X_n$ are iid as $p_\eta(x)$ and the prior is the conjugate $\pi(\eta|k, \mu)$, then the posterior distribution is $\pi(\eta \,|\, k+n, \frac{k\mu + n\bar x}{k+n})$.

3.12 If $X_1, \ldots, X_n$ are iid from a one-parameter exponential family, the Bayes estimator of the mean, under squared error loss using a conjugate prior, is of the form $a\bar X + b$ for constants $a$ and $b$. (a) If $EX_i = \mu$ and $\operatorname{var} X_i = \sigma^2$, then no matter what the distribution of the $X_i$'s, the mean squared error is $E[(a\bar X + b) - \mu]^2 = a^2\operatorname{var}\bar X + [(a-1)\mu + b]^2$. (b) If $\mu$ is unbounded, then no estimator of the form $a\bar X + b$ can have bounded mean squared error for $a \ne 1$. (c) Can a conjugate-prior Bayes estimator in an exponential family have bounded mean squared error? [This problem shows why conjugate-prior Bayes estimators are considered "non-robust."]

Section 4

4.1 For the situation of Example 4.2: (a) Show that the Bayes rule under a $\text{beta}(\alpha, \alpha)$ prior is equivariant. (b) Show that the Bayes rule under any prior that is symmetric about 1/2 is equivariant.

4.2 The Bayes estimator of $\eta$ in Example 4.7 is given by (4.22).

4.3 The Bayes estimator of $\tau$ in Example 4.5 is given by (4.22).

4.4 The Bayes estimators of $\eta$ and $\tau$ in Example 4.9 are given by (4.31) and (4.32). (Recall Corollary 1.2.)

4.5 For each of the following situations, find a group $G$ that leaves the model invariant and determine left- and right-invariant measures over $G$. The joint density of $X = (X_1, \ldots, X_n)$ and $Y = (Y_1, \ldots, Y_n)$ and the estimand are (a) $f(x - \eta, y - \zeta)$, estimand $\eta - \zeta$; (b) $f\left(\frac{x-\eta}{\sigma}, \frac{y-\zeta}{\tau}\right)$, estimand $\tau/\sigma$; (c) $f\left(\frac{x-\eta}{\tau}, \frac{y-\zeta}{\tau}\right)$, $\tau$ unknown, estimand $\eta - \zeta$.

4.6 For each of the situations of Problem 4.5, determine the MRE estimator if the loss is squared error with a scaling that makes it invariant.
4.7 For each of the situations of Problem 4.5: (a) Determine the measure on the parameter space induced by the right-invariant Haar measure over $\bar G$; (b) Determine the Bayes estimator with respect to the measure found in part (a), and show that it coincides with the MRE estimator.

4.8 In Example 4.9, show that the estimator

$$\hat\tau(x) = \frac{\int\int \frac{1}{v^r}\, f\left(\frac{x_1-u}{v}, \ldots, \frac{x_n-u}{v}\right) dv\, du}{\int\int \frac{1}{v^{r+1}}\, f\left(\frac{x_1-u}{v}, \ldots, \frac{x_n-u}{v}\right) dv\, du}$$

is equivariant under scale changes; that is, it satisfies $\hat\tau(cx) = c\,\hat\tau(x)$ for all values of $r$ for which the integrals in $\hat\tau(x)$ exist.

4.9 If $\nu$ is a left-invariant measure over $G$, show that $\nu^*$ defined by $\nu^*(B) = \nu(B^{-1})$ is right invariant, where $B^{-1} = \{g^{-1}: g \in B\}$. [Hint: Express $\nu^*(Bg)$ and $\nu^*(B)$ in terms of $\nu$.]

4.10 There is a correspondence between Haar measures and Jeffreys priors in the location and scale cases. (a) Show that in the location parameter case, the Jeffreys prior is equal to the invariant Haar measure. (b) Show that in the scale parameter case, the Jeffreys prior is equal to the invariant Haar measure. (c) Show that in the location-scale case, the Jeffreys prior is equal to the left-invariant Haar measure. [Part (c) is a source of some concern because, as mentioned in Section 4.4 (see the discussion following Example 4.9), the best-equivariant rule is Bayes against the right-invariant Haar measure (if it exists).]

4.11 For the model (3.3.23), find a measure $\nu$ in the $(\xi, \tau)$ plane which remains invariant under the transformations (3.3.24).

The next three problems contain a more formal development of left- and right-invariant Haar measures.

4.12 A measure $\nu$ over a group $G$ is said to be right invariant if it satisfies $\nu(Bg) = \nu(B)$ and left invariant if it satisfies $\nu(gB) = \nu(B)$. Note that if $G$ is commutative, the two definitions agree. (a) If the elements $g \in G$ are real numbers ($-\infty < g < \infty$) and group composition is $g_2 \cdot g_1 = g_1 + g_2$, the measure $\nu$ defined by $\nu(B) = \int_B dx$ (i.e., Lebesgue measure) is both left and right invariant. (b) If the elements $g \in G$ are the positive real numbers, and composition of $g_2$ and $g_1$ is multiplication of the two numbers, the measure $\nu$ defined by $\nu(B) = \int_B (1/y)\, dy$ is both left and right invariant.

4.13 If the elements $g \in G$ are pairs of real numbers $(a, b)$, $b > 0$, corresponding to the transformations $gx = a + bx$, group composition by (1.4.8) is $(a_2, b_2)\cdot(a_1, b_1) = (a_2 + a_1b_2, b_1b_2)$. Of the measures defined by

$$\nu(B) = \int\int_B \frac{1}{y}\, dx\, dy \quad\text{and}\quad \nu(B) = \int\int_B \frac{1}{y^2}\, dx\, dy,$$

the first is right but not left invariant, and the second is left but not right invariant.

4.14 The four densities defining the measures $\nu$ of Problems 4.12 and 4.13 ($dx$, $(1/y)dy$, $(1/y)dx\,dy$, $(1/y^2)dx\,dy$) are the only densities (up to multiplicative constants) for which $\nu$ has the stated invariance properties in the situations of these problems. [Hint: In each case, consider the equation $\int_B \pi(\theta)\,d\theta = \int_{gB}\pi(\theta)\,d\theta$. In the right integral, make the transformation to the new variable or variables $\theta' = g^{-1}\theta$. If $J$ is the Jacobian of this transformation, it follows that $\int_B[\pi(\theta) - J\pi(g\theta)]\,d\theta = 0$ for all $B$ and, hence, that $\pi(\theta) = J\pi(g\theta)$ for all $\theta$ except in a null set $N_g$. The proof of Theorem 4 in Chapter 6 of TSH2 shows that $N_g$ can be chosen independent of $g$. This proves in Problem 4.12(a) that for all $\theta \notin N$, $\pi(\theta) = \pi(\theta + c)$, and hence that $\pi(\theta) = \text{constant}$ a.e. The other three cases can be treated analogously.]

Section 5

5.1 For the model (3.3.1), let $\pi(\theta|x, \lambda)$ be a single-prior Bayes posterior and $\pi(\theta|x)$ be a hierarchical Bayes posterior.
Show that

$$\pi(\theta|x) = \int \pi(\theta|x, \lambda)\,\pi(\lambda|x)\, d\lambda, \quad\text{where}\quad \pi(\lambda|x) = \frac{\int f(x|\theta)\pi(\theta|\lambda)\gamma(\lambda)\, d\theta}{\int\int f(x|\theta)\pi(\theta|\lambda)\gamma(\lambda)\, d\theta\, d\lambda}.$$

5.2 For the situation of Problem 5.1, show that: (a) $E(\theta|x) = E[E(\theta|x, \lambda)]$; (b) $\operatorname{var}(\theta|x) = E[\operatorname{var}(\theta|x, \lambda)] + \operatorname{var}[E(\theta|x, \lambda)]$; and hence that $\pi(\theta|x)$ will tend to have a larger variance than $\pi(\theta|x, \lambda_0)$.

5.3 For the model (3.3.3), show that: (a) The marginal prior of $\theta$, unconditional on $\tau^2$, is given by

$$\pi(\theta) = \frac{\Gamma(a + \frac{1}{2})}{\sqrt{2\pi}\,\Gamma(a)\,b^a}\left(\frac{1}{\frac{1}{b} + \frac{\theta^2}{2}}\right)^{a+1/2},$$

which for $a = \nu/2$ and $b = 2/\nu$ is Student's $t$-distribution with $\nu$ degrees of freedom. (b) The marginal posterior of $\tau^2$ is given by

$$\pi(\tau^2|\bar x) = \frac{\left(\frac{\sigma^2\tau^2}{\sigma^2+\tau^2}\right)^{1/2} e^{-\frac{1}{2}\frac{\bar x^2}{\sigma^2+\tau^2}}\,\frac{1}{(\tau^2)^{a+3/2}}\, e^{-1/b\tau^2}}{\int_0^\infty \left(\frac{\sigma^2\tau^2}{\sigma^2+\tau^2}\right)^{1/2} e^{-\frac{1}{2}\frac{\bar x^2}{\sigma^2+\tau^2}}\,\frac{1}{(\tau^2)^{a+3/2}}\, e^{-1/b\tau^2}\, d\tau^2}.$$

5.4 Albert and Gupta (1985) investigate theory and applications of the hierarchical model

$$X_i|\theta_i \sim b(\theta_i, n),\ i = 1, \ldots, p,\ \text{independent}, \qquad \theta_i|\eta \sim \text{beta}[k\eta, k(1-\eta)],\ k \text{ known}, \qquad \eta \sim \text{Uniform}(0, 1).$$

(a) Show that

$$E(\theta_i|x) = \frac{n}{n+k}\,\frac{x_i}{n} + \left(\frac{k}{n+k}\right)E(\eta|x), \qquad \operatorname{var}(\theta_i|x) = \frac{k^2}{(n+k)(n+k+1)}\operatorname{var}(\eta|x).$$

[Note that $E(\eta|x)$ and $\operatorname{var}(\eta|x)$ are not expressible in a simple form.] (b) Unconditionally on $\eta$, the $\theta_i$'s have conditional covariance

$$\operatorname{cov}(\theta_i, \theta_j|x) = \left(\frac{k}{n+k}\right)^2\operatorname{var}(\eta|x),\ i \ne j.$$

(c) Ignoring the prior distribution of $\eta$, show how to construct an empirical Bayes estimator of $\theta_i$. (Again, this is not expressible in a simple form.) [Albert and Gupta (1985) actually consider a more general model than given here, and show how to approximate the Bayes solution. They apply their model to a problem of nonresponse in mail surveys.]

5.5 (a) Analogous to Problem 1.7.9, establish that for any random variables $X$, $Y$, and $Z$, $\operatorname{cov}(X, Y) = E[\operatorname{cov}(X, Y|Z)] + \operatorname{cov}[E(X|Z), E(Y|Z)]$. (b) For the hierarchy $X_i|\theta_i \sim f(x|\theta_i)$, $i = 1, \ldots, p$, independent; $\theta_i|\lambda \sim \pi(\theta_i|\lambda)$, $i = 1, \ldots, p$, independent; $\lambda \sim \gamma(\lambda)$, show that $\operatorname{cov}(\theta_i, \theta_j|x) = \operatorname{cov}[E(\theta_i|x, \lambda), E(\theta_j|x, \lambda)]$. (c) If $E(\theta_i|x, \lambda) = g(x_i) + h(\lambda)$, $i = 1, \ldots, p$, where $g(\cdot)$ and $h(\cdot)$ are known, then $\operatorname{cov}(\theta_i, \theta_j|x) = \operatorname{var}[E(\theta_i|x, \lambda)]$. [Part (c) points to what can be considered a limitation in the applicability of some hierarchical models: they imply a positive correlation structure in the posterior distribution.]

5.6 The one-way random effects model of Example 2.7 (see also Examples 3.5.1 and 3.5.5) can be written as the hierarchical model

$$X_{ij}|\mu, \alpha_i \sim N(\mu + \alpha_i, \sigma^2),\ j = 1, \ldots, n,\ i = 1, \ldots, s, \qquad \alpha_i \sim N(0, \sigma_A^2),\ i = 1, \ldots, s.$$

If, in addition, we specify that $\mu \sim \text{Uniform}(-\infty, \infty)$, show that the Bayes estimator of $\mu + \alpha_i$ under squared error loss is given by (3.5.13), the UMVU predictor of $\mu + \alpha_i$.

5.7 Referring to Example 6.6: (a) Using the prior distribution $\gamma(b)$ given in (5.6.27), show that the mode of the posterior distribution $\pi(b|x)$ is $\hat b = (p\bar x + \alpha - 1)/(pa + \beta - 1)$, and hence the empirical Bayes estimator based on this $\hat b$ does not equal the hierarchical Bayes estimator (5.6.29). (b) Show that if we estimate $b/(b+1)$ using its posterior expectation $E[b/(b+1)|x]$, then the resulting empirical Bayes estimator is equal to the hierarchical Bayes estimator.

5.8 The method of Monte Carlo integration allows the calculation of (possibly complicated) integrals by using (possibly simple) generations of random variables. (a) To calculate $\int h(x)f_X(x)\,dx$, generate a sample $X_1, \ldots, X_m$, iid, from $f_X(x)$. Then, $\frac{1}{m}\sum_{i=1}^m h(x_i) \to \int h(x)f_X(x)\,dx$ as $m \to \infty$. (b) If it is difficult to generate a random variable from $f_X(x)$, then generate pairs of random variables $Y_i \sim f_Y(y)$, $X_i \sim f_{X|Y}(x|y_i)$.
Then, $\frac{1}{m}\sum_{i=1}^m h(x_i) \to \int h(x)f_X(x)\,dx$ as $m \to \infty$. [Show that if $X$ is generated according to $Y \sim f_Y(y)$ and $X \sim f_{X|Y}(x|Y)$, then $P(X \le a) = \int_{-\infty}^a f_X(x)\,dx$.] (c) If it is difficult to generate as in part (b), then generate

$$X_{mi} \sim f_{X|Y}(x|y_{m,i-1}), \qquad Y_{mi} \sim f_{Y|X}(y|x_{mi}),$$

for $i = 1, \ldots, K$ and $m = 1, \ldots, M$. Show that: (i) For each $m$, $\{X_{mi}\}$ is a Markov chain. If it is also an ergodic Markov chain, $X_{mi} \xrightarrow{L} X$ as $i \to \infty$, where $X$ has the stationary distribution of the chain. (ii) If the stationary distribution of the chain is $f_X(x)$, then

$$\frac{1}{M}\sum_{m=1}^M h(x_{mK}) \to \int h(x)f_X(x)\,dx \quad\text{as } K, M \to \infty.$$

[This is the basic theory behind the Gibbs sampler. For each $K$, we have generated independent random variables $X_{mK}$, $m = 1, \ldots, M$, where $X_{mK}$ is distributed according to $f_{X|Y}(x|y_{m,K-1})$. It is also the case that for each $m$ and large $k$, $X_{mk}$ is approximately distributed according to $f_X(x)$, although the variables are not now independent. The advantages and disadvantages of these computational schemes (one long chain vs. many short chains) are debated in Gelman and Rubin 1992; see also Geyer and Thompson 1992 and Smith and Roberts 1992. The prevailing consensus leans toward one long chain.]

5.9 To understand the convergence of the Gibbs sampler, let $(X, Y) \sim f(x, y)$, and define $k(x, x') = \int f_{X|Y}(x|y)f_{Y|X}(y|x')\,dy$. (a) Show that the function $h^*(\cdot)$ that solves $h^*(x) = \int k(x, x')h^*(x')\,dx'$ is $h^*(x) = f_X(x)$, the marginal distribution of $X$. (b) Write down the analogous integral equation that is solved by $f_Y(y)$. (c) Define a sequence of functions recursively by $h_{i+1}(x) = \int k(x, x')h_i(x')\,dx'$, where $h_0(x)$ is arbitrary but satisfies $\sup_x |h_0(x)/h^*(x)| < \infty$. Show that

$$\int |h_{i+1}(x) - h^*(x)|\,dx < \int |h_i(x) - h^*(x)|\,dx$$

and, hence, $h_i(x)$ converges to $h^*(x)$. [The method of part (c) is called successive substitution. When there are two variables in the Gibbs sampler, it is equivalent to data augmentation (Tanner and Wong 1987). Even if the variables are vector-valued, the above results establish convergence. If the original vector of variables contains more than two variables, then a more general version of this argument is needed (Gelfand and Smith 1990).]

5.10 A direct Monte Carlo implementation of substitution sampling is provided by the data augmentation algorithm (Tanner and Wong 1987). If we define

$$h_{i+1}(x) = \int\left[\int f_{X|Y}(x|y)f_{Y|X}(y|x')\,dy\right]h_i(x')\,dx',$$

then from Problem 5.9, $h_i(x) \to f_X(x)$ as $i \to \infty$. (a) To calculate $h_{i+1}$ using Monte Carlo integration: (i) Generate $X'_j \sim h_i(x')$, $j = 1, \ldots, J$. (ii) Generate, for each $x'_j$, $Y_{jk} \sim f_{Y|X}(y|x'_j)$, $k = 1, \ldots, K$. (iii) Calculate $\hat h_{i+1}(x) = \frac{1}{J}\sum_{j=1}^J \frac{1}{K}\sum_{k=1}^K f_{X|Y}(x|y_{jk})$. Then, $\hat h_{i+1}(x) \to h_{i+1}(x)$ as $J, K \to \infty$, and hence the data augmentation algorithm converges. (b) To implement (a)(i), we must be able to generate a random variable from a mixture distribution. Show that if $f_Y(y) = \sum_{i=1}^n a_i g_i(y)$, $\sum a_i = 1$, then the algorithm (i) select $g_i$ with probability $a_i$, (ii) generate $Y \sim g_i$, produces a random variable with distribution $f_Y$. Hence, show how to implement step (a)(i) by generating random variables from $f_{X|Y}$. Tanner and Wong (1987) note that this algorithm will work even if $J = 1$, which yields the approximation $\hat h_{i+1}(x) = \frac{1}{K}\sum_{k=1}^K f_{X|Y}(x|y_k)$, identical to the Gibbs sampler. The data augmentation algorithm can also be seen as an application of the process of multiple imputation (Rubin 1976, 1987; Little and Rubin 1987).
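A minimal working example of the two-stage Gibbs scheme of Problems 5.8(c)–5.10, in Python with NumPy. The conditional pair used here, $X|Y \sim \text{Bin}(n, Y)$ and $Y|X \sim \text{Beta}(x+a, n-x+b)$, is an assumed illustration whose stationary marginal for $X$ is the beta-binomial of Problem 2.1, so the output can be checked against the known mean $na/(a+b)$.

```python
import numpy as np

rng = np.random.default_rng(6)
n, a, b = 10, 2.0, 4.0
M, K = 5000, 50  # number of chains, iterations per chain

# Two-stage Gibbs sampler: X | Y ~ Bin(n, Y), Y | X ~ Beta(X + a, n - X + b).
# The stationary marginal of X is beta-binomial(n, a, b) with mean n*a/(a+b).
x_final = np.empty(M)
for m in range(M):
    y = rng.uniform()          # arbitrary starting value
    for _ in range(K):
        x = rng.binomial(n, y)
        y = rng.beta(x + a, n - x + b)
    x_final[m] = x             # keep the final draw from each chain

print(x_final.mean(), n * a / (a + b))  # both approximately 3.33
```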
5.11 Successive substitution sampling can be implemented via the Gibbs sampler in the following way. From Problem 5.8(c), we want to calculate

$$h_M(x) = \frac{1}{M}\sum_{m=1}^M k(x|x_{mK}) = \frac{1}{M}\sum_{m=1}^M \int f_{X|Y}(x|y)f_{Y|X}(y|x_{mK})\,dy.$$

(a) Show that $h_M(x) \to f_X(x)$ as $M \to \infty$. (b) Given $x_{mK}$, a Monte Carlo approximation to $h_M(x)$ is

$$\hat h_M(x) = \frac{1}{M}\sum_{m=1}^M \frac{1}{J}\sum_{j=1}^J f_{X|Y}(x|y_{mj}), \quad\text{where } Y_{mj} \sim f_{Y|X}(y|x_{mK}),$$

and $\hat h_M(x) \to h_M(x)$ as $J \to \infty$. (c) Hence, as $M, J \to \infty$, $\hat h_M(x) \to f_X(x)$. [This is the Gibbs sampler, which is usually implemented with $J = 1$.]

5.12 For the situation of Example 5.6, show that (a) $E\left[\frac{1}{M}\sum_{i=1}^M \theta_i\right] = E\left[\frac{1}{M}\sum_{i=1}^M E(\theta|x, \tau_i)\right]$, (b) $\operatorname{var}\left[\frac{1}{M}\sum_{i=1}^M \theta_i\right] \ge \operatorname{var}\left[\frac{1}{M}\sum_{i=1}^M E(\theta|x, \tau_i)\right]$. (c) Discuss when equality might hold in (b). Can you give an example?

5.13 Show that for the hierarchy (5.5.1), the posterior distributions $\pi(\theta|x)$ and $\pi(\lambda|x)$ satisfy

$$\pi(\theta|x) = \int\left[\int \pi(\theta|x, \lambda)\pi(\lambda|x, \theta')\,d\lambda\right]\pi(\theta'|x)\,d\theta', \qquad \pi(\lambda|x) = \int\left[\int \pi(\lambda|x, \theta)\pi(\theta|x, \lambda')\,d\theta\right]\pi(\lambda'|x)\,d\lambda',$$

and, hence, are stationary points of the Markov chains in (5.5.13).

5.14 Starting from a uniform random variable $U \sim \text{Uniform}(0, 1)$, it is possible to construct many random variables through transformations. (a) Show that $-\log U \sim \exp(1)$. (b) Show that $-\sum_{i=1}^n \log U_i \sim \text{Gamma}(n, 1)$, where $U_1, \ldots, U_n$ are iid as $U(0, 1)$. (c) Let $X \sim \text{Exp}(a, b)$. Write $X$ as a function of $U$. (d) Let $X \sim \text{Gamma}(n, \beta)$, $n$ an integer. Write $X$ as a function of $U_1, \ldots, U_n$, iid as $U(0, 1)$.

5.15 Starting with a $U(0, 1)$ random variable, the transformations of Problem 5.14 will not get us normal random variables, or gamma random variables with noninteger shape parameters. One way of doing this is to use the Accept-Reject Algorithm (Ripley 1987, Section 3.2), an algorithm for simulating $X \sim f(x)$: (i) Generate $Y \sim g(y)$, $U \sim U(0, 1)$, independent. (ii) Calculate $\rho(Y) = \frac{1}{M}\frac{f(Y)}{g(Y)}$, where $M = \sup_t f(t)/g(t)$. (iii) If $U < \rho(Y)$, set $X = Y$; otherwise return to (i). (a) Show that the algorithm will generate $X \sim f(x)$. (b) Starting with $Y \sim \exp(1)$, show how to generate $X \sim N(0, 1)$. (c) Show how to generate a gamma random variable with a noninteger shape parameter.

5.16 Consider the normal hierarchical model

$$X|\theta_1 \sim N(\theta_1, \sigma_1^2), \quad \theta_1|\theta_2 \sim N(\theta_2, \sigma_2^2), \quad \ldots, \quad \theta_{k-1}|\theta_k \sim N(\theta_k, \sigma_k^2),$$

where $\sigma_i^2$, $i = 1, \ldots, k$, are known. (a) Show that the posterior distribution of $\theta_i$ ($1 \le i \le k-1$) is $\pi(\theta_i|x, \theta_k) = N(\alpha_i x + (1 - \alpha_i)\theta_k, \tau_i^2)$, where

$$\tau_i^2 = \frac{\left(\sum_{j=1}^i \sigma_j^2\right)\left(\sum_{j=i+1}^k \sigma_j^2\right)}{\sum_{j=1}^k \sigma_j^2} \quad\text{and}\quad \alpha_i = \frac{\tau_i^2}{\sum_{j=1}^i \sigma_j^2}.$$

(b) Find an expression for the Kullback-Leibler information $K[\pi(\theta_i|x, \theta_k), \pi(\theta_i|\theta_k)]$ and show that it is a decreasing function of $i$.

5.17 The original proof of Theorem 5.7 (Goel and DeGroot 1981) used Rényi's entropy function (Rényi 1961)

$$R_\alpha(f, g) = \frac{1}{\alpha - 1}\log\int f^\alpha(x)g^{1-\alpha}(x)\,d\mu(x),$$

where $f$ and $g$ are densities, $\mu$ is a dominating measure, and $\alpha$ is a constant, $\alpha \ne 1$. (a) Show that $R_\alpha(f, g)$ satisfies $R_\alpha(f, g) > 0$ and $R_\alpha(f, f) = 0$. (b) Show that Theorem 5.7 holds if $R_\alpha(f, g)$ is used instead of $K[f, g]$. (c) Show that $\lim_{\alpha\to 1} R_\alpha(f, g) = K[f, g]$, and provide another proof of Theorem 5.7.

5.18 The Kullback-Leibler information $K[f, g]$ (5.5.25) is not symmetric in $f$ and $g$, and a modification, called the divergence, remedies this. Define $J[f, g]$, the divergence between $f$ and $g$, to be $J[f, g] = K[f, g] + K[g, f]$. Show that, analogous to Theorem 5.7, $J[\pi(\lambda|x), \gamma(\lambda)] < J[\pi(\theta|x), \pi(\theta)]$.

5.19 Goel and DeGroot (1981) define a Bayesian analog of Fisher information [see (2.5.10)] as

$$I[\pi(\theta|x)] = \int\left(\frac{\frac{\partial}{\partial x}\pi(\theta|x)}{\pi(\theta|x)}\right)^2 d\theta,$$

the information that $x$ has about the posterior distribution. As in Theorem 5.7, show that $I[\pi(\lambda|x)] < I[\pi(\theta|x)]$, again showing that the influence of $\lambda$ is less than that of $\theta$.
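Here is a minimal sketch of the Accept-Reject construction of Problem 5.15(b), in Python with NumPy: the half-normal density is dominated by the $\exp(1)$ proposal with $M = \sqrt{2e/\pi}$, and a random sign turns the accepted draws into $N(0, 1)$ variables. (The seed and sample size are arbitrary.)

```python
import numpy as np

rng = np.random.default_rng(7)
M = np.sqrt(2 * np.e / np.pi)  # sup of f/g for half-normal f and exp(1) proposal g

def standard_normal(rng):
    """Accept-Reject: half-normal from an exp(1) proposal, then a random sign."""
    while True:
        y = rng.exponential(1.0)
        u = rng.uniform()
        rho = np.sqrt(2 / np.pi) * np.exp(y - y * y / 2) / M  # f(y) / (M g(y))
        if u < rho:
            return y if rng.uniform() < 0.5 else -y

draws = np.array([standard_normal(rng) for _ in range(20000)])
print(draws.mean(), draws.var())  # approximately 0 and 1
```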
5.20 Each of $m$ spores has a probability $\tau$ of germinating. Of the $r$ spores that germinate, each has probability $\omega$ of bending in a particular direction. If $s$ bend in the particular direction, a probability model to describe this process is the bivariate binomial, with mass function

$$f(r, s|\tau, \omega, m) = \binom{m}{r}\tau^r(1-\tau)^{m-r}\binom{r}{s}\omega^s(1-\omega)^{r-s}.$$

(a) Show that the Jeffreys prior is $\pi_J(\tau, \omega) = (1-\tau)^{-1/2}\omega^{-1/2}(1-\omega)^{-1/2}$. (b) If $\tau$ is considered a nuisance parameter, the reference prior is $\pi_R(\tau, \omega) = \tau^{-1/2}(1-\tau)^{-1/2}\omega^{-1/2}(1-\omega)^{-1/2}$. Compare the posterior means $E(\omega|r, s, m)$ under both the Jeffreys and reference priors. Is one more appropriate? (c) What is the effect of the different priors on the posterior variance? [Priors for the bivariate binomial have been considered by Crowder and Sweeting (1989), Polson and Wasserman (1990), and Clark and Wasserman (1993), who propose a reference/Jeffreys trade-off prior.]

5.21 Let $\mathcal F = \{f(x|\theta);\ \theta \in \Theta\}$ be a family of probability densities. The Kullback-Leibler information for discrimination between two densities in $\mathcal F$ can be written

$$\psi(\theta_1, \theta_2) = \int f(x|\theta_1)\log\left[\frac{f(x|\theta_1)}{f(x|\theta_2)}\right]dx.$$

Recall that the gradient of $\psi$ is $\nabla\psi = \{(\partial/\partial\theta_i)\psi\}$ and the Hessian is $\nabla\nabla\psi = \{(\partial^2/\partial\theta_i\partial\theta_j)\psi\}$. (a) If integration and differentiation can be interchanged, show that $\nabla\psi(\theta, \theta) = 0$ and $\det[\nabla\nabla\psi(\theta, \theta)] = I(\theta)$, where $I(\theta)$ is the Fisher information of $f(x|\theta)$. (b) George and McCulloch (1993) argue that choosing $\pi(\theta) = (\det[\nabla\nabla\psi(\theta, \theta)])^{1/2}$ is an appealing least informative choice of prior. What justification can you give for this?

Section 6

6.1 For the model (3.3.1), show that $\delta_\lambda(x)|_{\lambda = \hat\lambda} = \delta_{\hat\lambda}(x)$, where the Bayes estimator $\delta_\lambda(x)$ minimizes $\int L[\theta, d(x)]\pi(\theta|x, \lambda)\,d\theta$ and the empirical Bayes estimator $\delta_{\hat\lambda}(x)$ minimizes $\int L[\theta, d(x)]\pi(\theta|x, \hat\lambda)\,d\theta$.

6.2 This problem will investigate conditions under which an empirical Bayes estimator is a Bayes estimator. Expression (6.6.3) is a true posterior expected loss if $\pi(\theta|x, \hat\lambda(x))$ is a true posterior. From the hierarchy $X|\theta \sim f(x|\theta)$, $\theta|\lambda \sim \pi(\theta|\lambda)$, define the joint distribution of $X$ and $\theta$ to be $(X, \theta) \sim g(x, \theta) = f(x|\theta)\pi(\theta|\hat\lambda(x))$, where $\pi(\theta|\hat\lambda(x))$ is obtained by substituting $\hat\lambda(x)$ for $\lambda$ in $\pi(\theta|\lambda)$. (a) Show that, for this joint density, the formal Bayes estimator is equivalent to the empirical Bayes estimator from the hierarchical model. (b) If $f(\cdot|\theta)$ and $\pi(\cdot|\lambda)$ are proper densities, then $\int g(x, \theta)\,d\theta < \infty$. However, $\int\int g(x, \theta)\,dx\,d\theta$ need not be finite.

6.3 For the model (6.3.1), the Bayes estimator $\delta_\lambda(x)$ minimizes $\int L(\theta, d(x))\pi(\theta|x, \lambda)\,d\theta$ and the empirical Bayes estimator $\delta_{\hat\lambda}(x)$ minimizes $\int L(\theta, d(x))\pi(\theta|x, \hat\lambda(x))\,d\theta$. Show that $\delta_\lambda(x)|_{\lambda = \hat\lambda(x)} = \delta_{\hat\lambda}(x)$.

6.4 For the situation of Example 6.1: (a) Show that

$$\int_{-\infty}^\infty e^{-(n/2\sigma^2)(\bar x - \theta)^2} e^{-(1/2)\theta^2/\tau^2}\,d\theta = \sqrt{2\pi}\left(\frac{\sigma^2\tau^2}{\sigma^2 + n\tau^2}\right)^{1/2} e^{-(n/2)\bar x^2/(\sigma^2 + n\tau^2)}$$

and, hence, establish (6.6.4). (b) Verify that the marginal MLE of $\sigma^2 + n\tau^2$ is $n\bar x^2$ and that the empirical Bayes estimator is given by (6.6.5).

6.5 Referring to Example 6.2: (a) Show that the Bayes risk, $r(\pi, \delta^\pi)$, of the Bayes estimator (6.6.7) is given by

$$r(\pi, \delta^\pi) = kE[\operatorname{var}(p_k|x_k)] = \frac{kab}{(a+b)(a+b+1)(a+b+n)}.$$

(b) Show that the Bayes risk of the unbiased estimator $X/n = (X_1/n, \ldots, X_k/n)$ is given by

$$r(\pi, X/n) = \frac{kab}{n(a+b+1)(a+b)}.$$
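The two Bayes risks in Problem 6.5 can be checked per coordinate by simulation. A minimal sketch in Python with NumPy (parameter values are arbitrary; $k = 1$ coordinate is simulated, so the formulas below are used without the factor $k$):

```python
import numpy as np

rng = np.random.default_rng(8)
n, a, b, reps = 15, 2.0, 3.0, 1000000

# Per-coordinate Bayes risks: p ~ Beta(a, b), X | p ~ Bin(n, p)
p = rng.beta(a, b, reps)
x = rng.binomial(n, p)

mse_bayes = np.mean(((a + x) / (a + b + n) - p) ** 2)
mse_unb = np.mean((x / n - p) ** 2)

print(mse_bayes, a * b / ((a + b) * (a + b + 1) * (a + b + n)))
print(mse_unb, a * b / (n * (a + b + 1) * (a + b)))
```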
6.6 Extend Theorem 6.3 to the case of Theorem 3.2; that is, if $X$ has density (3.3.7) and $\eta$ has prior density $\pi(\eta|\gamma)$, then the empirical Bayes estimator is

$$E\left[\sum_i \eta_i\frac{\partial T_i(x)}{\partial x_j}\,\Big|\,x, \hat\gamma(x)\right] = \frac{\partial}{\partial x_j}\log m(x|\hat\gamma(x)) - \frac{\partial}{\partial x_j}\log h(x),$$

where $m(x|\gamma)$ is the marginal distribution of $X$ and $\hat\gamma(x)$ is the marginal MLE of $\gamma$.

6.7 (a) For $p_\eta(x)$ of (1.5.2), show that for any prior distribution $\pi(\eta|\lambda)$ that depends on a hyperparameter $\lambda$, the empirical Bayes estimator is given by

$$E\left[\sum_{i=1}^s \eta_i\frac{\partial}{\partial x_j}T_i(x)\,\Big|\,x, \hat\lambda\right] = \frac{\partial}{\partial x_j}\log m_\pi(x|\hat\lambda(x)) - \frac{\partial}{\partial x_j}\log h(x),$$

where $m_\pi(x) = \int p_\theta(x)\pi(\theta)\,d\theta$. (b) If $X$ has the distribution $p_\theta(x)$ of (1.5.1), show that a similar formula holds, that is, $E(\dot T\eta(\theta)|\hat\lambda) = \nabla\log m_\pi(x|\hat\lambda) - \nabla\log h(x)$, where $\dot T = \{\partial T_i/\partial x_j\}$ is the Jacobian of $T$ and $\nabla a$ is the gradient vector of $a$, that is, $\nabla a = \{\partial a/\partial x_i\}$.

6.8 For each of the following situations, write the empirical Bayes estimator of the natural parameter (under squared error loss) in the form (6.6.12), using the marginal likelihood estimator of the hyperparameter $\lambda$. Evaluate the expressions as far as possible. (a) $X_i \sim N(0, \sigma_i^2)$, $i = 1, \ldots, p$, independent; $1/\sigma_i^2 \sim \text{Exponential}(\lambda)$. (b) $X_i \sim N(\theta_i, 1)$, $i = 1, \ldots, p$, independent; $\theta_i \sim DE(0, \lambda)$.

6.9 Strawderman (1992) shows that the James-Stein estimator can be viewed as an empirical Bayes estimator in an arbitrary location family. Let $X_{p\times 1} \sim f(x - \theta)$, with $EX = \theta$ and $\operatorname{var} X = \sigma^2 I$. Let the prior be $\theta \sim f^{*n}$, the $n$-fold convolution of $f$ with itself. [The convolution of $f$ with itself is $f^{*2}(x) = \int f(x-y)f(y)\,dy$. The $n$-fold convolution is $f^{*n}(x) = \int f^{*(n-1)}(x-y)f(y)\,dy$.] Equivalently, let $U_i \sim f$, $i = 0, \ldots, n$, iid, $\theta = \sum_1^n U_i$, and $X = U_0 + \theta$. (a) Show that the Bayes rule against squared error loss is $\frac{n}{n+1}x$. Note that $n$ is a prior parameter. (b) Show that $|X|^2/(p\sigma^2)$ is an unbiased estimator of $n + 1$, and hence that an empirical Bayes estimator of $\theta$ is given by $\delta^{EB} = [1 - (p\sigma^2/|x|^2)]x$.

6.10 Show for the hierarchy of Example 3.4, where $\sigma^2$ and $\tau^2$ are known but $\mu$ is unknown, that: (a) The empirical Bayes estimator of $\theta_i$, based on the marginal MLE of $\mu$, is $\frac{\tau^2}{\sigma^2+\tau^2}X_i + \frac{\sigma^2}{\sigma^2+\tau^2}\bar X$. (b) The risk, under sum-of-squared-errors loss, of the empirical Bayes estimator from part (a) is

$$p\sigma^2 - \frac{2(p-1)\sigma^4}{\sigma^2+\tau^2} + \left(\frac{\sigma^2}{\sigma^2+\tau^2}\right)^2\sum_{i=1}^p E(X_i - \bar X)^2.$$

(c) The minimum risk of the empirical Bayes estimator is attained when all $\theta_i$'s are equal. [Hint: Show that $\sum_{i=1}^p E[(X_i - \bar X)^2] = \sum_{i=1}^p(\theta_i - \bar\theta)^2 + (p-1)\sigma^2$.]

6.11 For $E(\theta|x)$ of (5.5.8), show that as $\nu \to \infty$, $E(\theta|x) \to [p/(p + \sigma^2)]\bar x$, the Bayes estimator under a $N(0, 1)$ prior.

6.12 (a) Show that the empirical Bayes estimator $\delta^{EB}(\bar x) = (1 - \sigma^2/\max\{\sigma^2, p\bar x^2\})\bar x$ of (6.6.5) has bounded mean squared error. (b) Show that a variation of $\delta^{EB}(\bar x)$ of part (a), $\delta_v(\bar x) = [1 - \sigma^2/(v + p\bar x^2)]\bar x$, also has bounded mean squared error. (c) For $\sigma^2 = \tau^2 = 1$, plot the risk functions of the estimators of parts (a) and (b). [Thompson (1968a, 1968b) investigated the mean squared error properties of estimators like those in part (b). Although such estimators have smaller mean squared error than $\bar x$ for small values of $\theta$, they always have larger mean squared error for larger values of $\theta$.]

6.13 (a) For the hierarchy (5.5.7), with $\sigma^2 = 1$ and $p = 10$, evaluate the Bayes risk $r(\pi, \delta^\pi)$ of the Bayes estimator (5.5.8) for $\nu = 2$, 5, and 10. (b) Calculate the Bayes risk of the estimator $\delta_v$ of Problem 6.12(b). Find a value of $v$ that yields a good approximation to the risk of the hierarchical Bayes estimator. Compare it to the Bayes risk of the empirical Bayes estimator of Problem 6.12(a).
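A minimal sketch of the risk comparison in Problem 6.12(c), in Python with NumPy. It assumes the sampling model $\bar X \mid \theta \sim N(\theta, \sigma^2/p)$ and tabulates simulated mean squared errors of $\delta^{EB}$ and $\delta_v$ at a few values of $\theta$ (print statements rather than a plot, to stay self-contained); both should beat $\sigma^2/p$ near $\theta = 0$ and exceed it for larger $\theta$.

```python
import numpy as np

rng = np.random.default_rng(9)
p, sigma2, v, reps = 10, 1.0, 1.0, 50000

def mse(theta, estimator):
    xbar = rng.normal(theta, np.sqrt(sigma2 / p), reps)  # xbar | theta
    return np.mean((estimator(xbar) - theta) ** 2)

d_eb = lambda xb: (1 - sigma2 / np.maximum(sigma2, p * xb**2)) * xb  # 6.12(a)
d_v  = lambda xb: (1 - sigma2 / (v + p * xb**2)) * xb                # 6.12(b)

for theta in [0.0, 0.5, 1.0, 2.0, 4.0]:
    print(theta, mse(theta, d_eb), mse(theta, d_v), sigma2 / p)
```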
6.14 Referring to Example 6.6, show that the empirical Bayes estimator is also a hierarchical Bayes estimator using the prior $\gamma(b) = 1/b$.

6.15 The Taylor series approximation to the estimator (5.5.8) is carried out in a number of steps. Show that: (a) Using a first-order Taylor expansion around the point $\bar x$, we have

$$\frac{1}{(1 + \theta^2/\nu)^{(\nu+1)/2}} = \frac{1}{(1 + \bar x^2/\nu)^{(\nu+1)/2}} - \frac{\nu+1}{\nu}\,\frac{\bar x}{(1 + \bar x^2/\nu)^{(\nu+3)/2}}(\theta - \bar x) + R(\theta - \bar x),$$

where the remainder, $R(\theta - \bar x)$, satisfies $R(\theta - \bar x)/(\theta - \bar x)^2 \to 0$ as $\theta \to \bar x$. (b) The remainder in part (a) also satisfies

$$\int_{-\infty}^\infty R(\theta - \bar x)\,e^{-\frac{p}{2\sigma^2}(\theta - \bar x)^2}\,d\theta = O(1/p^{3/2}).$$

(c) The numerator and denominator of (5.5.8) can be written

$$\int_{-\infty}^\infty \frac{1}{(1 + \theta^2/\nu)^{(\nu+1)/2}}\,e^{-\frac{p}{2\sigma^2}(\theta - \bar x)^2}\,d\theta = \frac{\sqrt{2\pi\sigma^2/p}}{(1 + \bar x^2/\nu)^{(\nu+1)/2}} + O\left(\frac{1}{p^{3/2}}\right)$$

and

$$\int_{-\infty}^\infty \frac{\theta}{(1 + \theta^2/\nu)^{(\nu+1)/2}}\,e^{-\frac{p}{2\sigma^2}(\theta - \bar x)^2}\,d\theta = \frac{\sqrt{2\pi\sigma^2/p}}{(1 + \bar x^2/\nu)^{(\nu+1)/2}}\left[1 - \frac{(\sigma^2/p)(\nu+1)/\nu}{1 + \bar x^2/\nu}\right]\bar x + O\left(\frac{1}{p^{3/2}}\right),$$

which yields (5.6.32).

6.16 For the situation of Example 6.7: (a) Calculate the values of the approximation (5.6.32) for the values of Table 6.2. Are there situations where the estimator (5.6.32) is clearly preferred over the empirical Bayes estimator (5.6.5) as an approximation to the hierarchical Bayes estimator (5.5.8)? (b) Extend the argument of Problem 6.15 to calculate the next term in the expansion and, hence, obtain a more accurate approximation to the hierarchical Bayes estimator (5.5.8). For the values of Table 6.2, is this new approximation to (5.5.8) preferable to (5.6.5) and (5.6.32)?

6.17 (a) Show that if $b(\cdot)$ has a bounded second derivative, then

$$\int b(\lambda)e^{-nh(\lambda)}\,d\lambda = b(\hat\lambda)\sqrt{\frac{2\pi}{n h''(\hat\lambda)}}\,e^{-nh(\hat\lambda)} + O\left(\frac{1}{n^{3/2}}\right),$$

where $\hat\lambda$ is the unique minimum of $h(\lambda)$, $h''(\hat\lambda) \ne 0$, and $nh(\hat\lambda) \to$ constant as $n \to \infty$. [Hint: Expand both $b(\cdot)$ and $h(\cdot)$ in Taylor series around $\hat\lambda$, up to second-order terms. Then, do the term-by-term integration.] This is the Laplace approximation for an integral. For refinements and other developments of this approximation in Bayesian inference, see Tierney and Kadane 1986; Tierney, Kass, and Kadane 1989; and Robert 1994a (Section 9.2.3). (b) For the hierarchical model (5.5.1), the posterior mean can be approximated by

$$E(\theta|x) = e^{-nh(\hat\lambda)}\left(\frac{2\pi}{n h''(\hat\lambda)}\right)^{1/2} E(\theta|x, \hat\lambda) + O\left(\frac{1}{n^{3/2}}\right),$$

where $h = -\frac{1}{n}\log\pi(\lambda|x)$ and $\hat\lambda$ is the mode of $\pi(\lambda|x)$, the posterior distribution of $\lambda$. (c) If $\pi(\lambda|x)$ is the normal distribution with mean $\hat\lambda$ and variance $\sigma^2 = [-(\partial^2/\partial\lambda^2)\log\pi(\lambda|x)|_{\lambda=\hat\lambda}]^{-1}$, then $E(\theta|x) = E(\theta|x, \hat\lambda) + O(1/n^{3/2})$. (d) Show that the situation in part (c) arises from the hierarchy $X_i|\theta_i \sim N(\theta_i, \sigma^2)$, $\theta_i|\lambda \sim N(\lambda, \tau^2)$, $\lambda \sim \text{Uniform}(-\infty, \infty)$.

6.18 (a) Apply the Laplace approximation (5.6.33) to obtain an approximation to the hierarchical Bayes estimator of Example 6.6. (b) Compare the approximation from part (a) with the empirical Bayes estimator (5.6.24). Which is a better approximation to the hierarchical Bayes estimator?

6.19 Apply the Laplace approximation (5.6.33) to the hierarchy of Example 6.7 and show that the resulting approximation to the hierarchical Bayes estimator is given by (5.6.32).
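The Laplace approximation of Problem 6.17(a) can be illustrated with an integral that has a closed form. Taking $b(\lambda) = 1$ and $h(\lambda) = \lambda - \log\lambda$ (minimized at $\hat\lambda = 1$, with $h(1) = 1$ and $h''(1) = 1$) gives $\int_0^\infty e^{-nh(\lambda)}\,d\lambda = \Gamma(n+1)/n^{n+1}$, which the approximation recovers up to a relative error that vanishes as $n$ grows. A minimal sketch in Python with NumPy and SciPy:

```python
import numpy as np
from scipy.special import gammaln

# Laplace approximation for I_n = int_0^inf exp(-n h(lam)) d lam with
# h(lam) = lam - log(lam): hat_lam = 1, h(1) = 1, h''(1) = 1, and the
# exact value is Gamma(n+1) / n^{n+1}.
for n in [5, 20, 100]:
    exact = np.exp(gammaln(n + 1) - (n + 1) * np.log(n))
    laplace = np.sqrt(2 * np.pi / n) * np.exp(-n)
    print(n, exact, laplace, laplace / exact)  # ratio tends to 1
```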
6.20 (a) Verify (6.6.37): under squared error loss, $r(\pi, \delta) = r(\pi, \delta^\pi) + E(\delta - \delta^\pi)^2$. (b) For $X \sim \text{binomial}(p, n)$, $L(p, \delta) = (p - \delta)^2$, and the class of priors $\Pi = \{\pi: \pi = \text{beta}(a, b),\ a > 0,\ b > 0\}$, determine whether $\hat p = x/n$ or $\delta_0 = (a_0 + x)/(a_0 + b_0 + n)$ is more robust, according to (6.6.37). (c) Is there an estimator of the form $(c + x)/(c + d + n)$ that you would consider more robust, in the sense of (6.6.37), than either estimator in part (b)? [In part (b), for fixed $n$ and $(a_0, b_0)$, calculate the Bayes risk of $\hat p$ and $\delta_0$ for a number of $(a, b)$ pairs.]

6.21 (a) Establish (6.6.39) and (6.6.40) for the class of priors given by (6.6.38). (b) Show that the Bayes estimator based on a prior $\pi(\theta)$ in the class (6.6.38), under squared error loss, is given by (6.6.41).

Section 7

7.1 For the situation of Example 7.1: (a) The empirical Bayes estimator of $\theta$, using an unbiased estimate of $\tau^2/(\sigma^2+\tau^2)$, is

$$\delta^{EB} = \left(1 - \frac{(p-2)\sigma^2}{|x|^2}\right)x,$$

the James-Stein estimator. (b) The empirical Bayes estimator of $\theta$, using the marginal MLE of $\tau^2/(\sigma^2+\tau^2)$, is

$$\delta^{EB} = \left(1 - \frac{p\sigma^2}{|x|^2}\right)^+ x,$$

which resembles the positive-part Stein estimator.

7.2 Establish Corollary 7.2. Be sure to verify that the conditions on $g(x)$ are sufficient to allow the integration-by-parts argument. [Stein (1973, 1981) develops these representations in the normal case.]

7.3 The derivation of an unbiased estimator of the risk (Corollary 7.2) can be extended to a more general model in the exponential family, the model of Corollary 3.3, where $X = (X_1, \ldots, X_p)$ has the density $p_\eta(x) = e^{\sum_{i=1}^p \eta_i x_i - A(\eta)}h(x)$. (a) The Bayes estimator of $\eta$, under squared error loss, is $E(\eta_i|x) = \frac{\partial}{\partial x_i}\log m(x) - \frac{\partial}{\partial x_i}\log h(x)$. Show that the risk of $E(\eta|X)$ has unbiased estimator

$$\sum_{i=1}^p\left\{\frac{\partial^2}{\partial x_i^2}\left[2\log m(x) - \log h(x)\right] + \left(\frac{\partial}{\partial x_i}\log m(x)\right)^2\right\}.$$

[Hint: Theorem 3.5 and Problem 3.4.] (b) Show that the risk of the empirical Bayes estimator $E(\eta_i|x, \hat\lambda) = \frac{\partial}{\partial x_i}\log m(x|\hat\lambda(x)) - \frac{\partial}{\partial x_i}\log h(x)$ of Theorem 6.3 has unbiased estimator

$$\sum_{i=1}^p\left\{\frac{\partial^2}{\partial x_i^2}\left[2\log m(x|\hat\lambda(x)) - \log h(x)\right] + \left(\frac{\partial}{\partial x_i}\log m(x|\hat\lambda(x))\right)^2\right\}.$$

(c) Use the results of part (b) to derive an unbiased estimator of the risk of the positive-part Stein estimator of (7.10).

7.4 Verify (7.9), the expression for the Bayes risk of $\delta_{\tau_0}$. (Problem 3.12 may be helpful.)

7.5 A general version of the empirical Bayes estimator (7.3) is given by

$$\delta^c(x) = \left(1 - \frac{c\sigma^2}{|x|^2}\right)x,$$

where $c$ is a positive constant. (a) Use Corollary 7.2 to verify that

$$E_\theta|\theta - \delta^c(X)|^2 = p\sigma^2 + c\sigma^4[c - 2(p-2)]\,E_\theta\frac{1}{|X|^2}.$$

(b) Show that the Bayes risk, under $\theta \sim N_p(0, \tau^2 I)$, is given by

$$r(\pi, \delta^c) = \sigma^2\left[p + \frac{c\sigma^2}{\sigma^2+\tau^2}\left(\frac{c}{p-2} - 2\right)\right]$$

and is minimized by choosing $c = p - 2$.

7.6 For the model $X|\theta \sim N_p(\theta, \sigma^2 I)$, $\theta|\tau^2 \sim N_p(\mu, \tau^2 I)$, show that: (a) The empirical Bayes estimator, using an unbiased estimator of $\tau^2/(\sigma^2+\tau^2)$, is the Stein estimator

$$\delta_i^{JS}(x) = \mu_i + \left(1 - \frac{(p-2)\sigma^2}{\sum_j(x_j - \mu_j)^2}\right)(x_i - \mu_i).$$

(b) If $p \ge 3$, the Bayes risk, under squared error loss, of $\delta^{JS}$ is $r(\tau, \delta^{JS}) = r(\tau, \delta_\tau) + 2\sigma^4/(\sigma^2+\tau^2)$, where $r(\tau, \delta_\tau)$ is the Bayes risk of the Bayes estimator. (c) If $p < 3$, the Bayes risk of $\delta^{JS}$ is infinite. [Hint: Show that if $Y \sim \chi^2_m$, then $E(1/Y) < \infty$ if and only if $m \ge 3$.]

7.7 For the model $X|\theta \sim N_p(\theta, \sigma^2 I)$, $\theta|\tau^2 \sim N_p(\mu, \tau^2 I)$, the Bayes risk of the ordinary Stein estimator

$$\delta_i(x) = \mu_i + \left(1 - \frac{(p-2)\sigma^2}{\sum_j(x_j - \mu_j)^2}\right)(x_i - \mu_i)$$

is uniformly larger than that of its positive-part version

$$\delta_i^+(x) = \mu_i + \left(1 - \frac{(p-2)\sigma^2}{\sum_j(x_j - \mu_j)^2}\right)^+(x_i - \mu_i).$$
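The Bayes risk formula of Problem 7.5(b) is easy to confirm by simulation, including the optimality of $c = p - 2$. A minimal sketch in Python with NumPy (dimensions and variances are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(10)
p, sigma2, tau2, reps = 8, 1.0, 2.0, 200000

# Bayes risk of delta_c(x) = (1 - c sigma^2/|x|^2) x under theta ~ N_p(0, tau^2 I)
theta = rng.normal(0, np.sqrt(tau2), (reps, p))
x = theta + rng.normal(0, np.sqrt(sigma2), (reps, p))
r2 = np.sum(x**2, axis=1)

for c in [p - 4, p - 2, p]:
    delta = (1 - c * sigma2 / r2)[:, None] * x
    mc = np.mean(np.sum((delta - theta)**2, axis=1))
    formula = sigma2 * (p + (c * sigma2 / (sigma2 + tau2)) * (c / (p - 2) - 2))
    print(c, mc, formula)  # Monte Carlo risk matches; minimized at c = p - 2
```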
7.8 Theorem 7.5 holds in greater generality than just the normal distribution. Suppose $X$ is distributed according to the multivariate version of the exponential family $p_\eta(x)$ of (3.3.7), $p_\eta(x) = e^{\eta' x - A(\eta)}h(x)$, $-\infty < x_i < \infty$, and a multivariate conjugate prior distribution [generalizing (3.3.19)] is used. (a) Show that $E(X|\eta) = \nabla A(\eta)$. (b) If $\mu = 0$ in the prior distribution (see (3.3.19)), show that $r(\tau, \delta) \ge r(\tau, \delta^+)$, where $\delta(x) = [1 - B(x)]x$ and $\delta^+(x) = [1 - B(x)]^+x$. (c) If $\mu \ne 0$, the estimator $\delta(x)$ would be modified to $\mu + \delta(x - \mu)$. Establish a result similar to part (b) for this estimator. [Hint: For part (b), the proof of Theorem 7.5, modified to use the Bayes estimator $E(\nabla A(\eta)|x, k, \mu)$ as in (3.3.21), will work.]

7.9 (a) For the model (7.15), show that the marginal distribution of $X_i$ is negative binomial$(a, \frac{1}{b+1})$; that is,

$$P(X_i = x) = \binom{a+x-1}{x}\left(\frac{b}{b+1}\right)^x\left(\frac{1}{b+1}\right)^a,$$

with $EX_i = ab$ and $\operatorname{var} X_i = ab(b+1)$. (b) If $X_1, \ldots, X_m$ are iid according to the negative binomial distribution in part (a), show that the conditional distribution of $X_j \mid \sum_1^m X_i = t$ is the negative hypergeometric distribution, given by

$$P\left(X_j = x \,\Big|\, \sum_1^m X_i = t\right) = \frac{\binom{a+x-1}{x}\binom{(m-1)a+t-x-1}{t-x}}{\binom{ma+t-1}{t}},$$

with $EX_j = t/m$ and $\operatorname{var} X_j = (m-1)t(ma+t)/[m^2(ma+1)]$.

7.10 For the situation of Example 7.6: (a) Show that the Bayes estimator under the loss $L_k(\lambda, \delta)$ of (7.16) is given by (7.17). (b) Verify (7.19) and (7.20). (c) Evaluate the Bayes risks $r(0, \delta^1)$ and $r(1, \delta^0)$. Which estimator, $\delta^0$ or $\delta^1$, is more robust?

7.11 For the situation of Example 7.6, evaluate the Bayes risk of the empirical Bayes estimator (7.20) for $k = 0$ and 1. Which values of the unknown hyperparameter $b$ are least, and which are most, favorable to the empirical Bayes estimator? [Hint: Using the posterior expected loss (7.22) and Problem 7.9(b), the Bayes risk can be expressed as an expectation of a function of $X_i$ only. Further simplification seems unlikely.]

7.12 Consider a hierarchical Bayes estimator for the Poisson model (7.15) with loss (7.16). Using the distribution (5.6.27) for the hyperparameter $b$, show that the Bayes estimator is

$$\left(\frac{p\bar x + \alpha - k}{p\bar x + pa + \alpha + \beta - k}\right)(a + x_i - k).$$

[Hint: Show that the Bayes estimator is $E(\lambda_i^{1-k}|x)/E(\lambda_i^{-k}|x)$ and that

$$E(\lambda_i^r|x) = \frac{\Gamma(p\bar x + pa + \alpha + \beta)\,\Gamma(p\bar x + \alpha + r)}{\Gamma(p\bar x + \alpha)\,\Gamma(p\bar x + pa + \alpha + \beta + r)}\,\frac{\Gamma(a + x_i + r)}{\Gamma(a + x_i)}.]$$

7.13 Prove the following two matrix results, which are useful in calculating estimators from multivariate hierarchical models: (a) For any vector $a$ of the form $a = (I - \frac{1}{s}J)b$, $\mathbf 1'a = \sum a_i = 0$. (b) If $B$ is an idempotent matrix (that is, $B^2 = B$) and $a$ is a scalar, then $(I + aB)^{-1} = I - \frac{a}{1+a}B$.

7.14 For the situation of Example 7.7: (a) Show how to derive the empirical Bayes estimator $\delta^L$ of (7.28). (b) Verify the Bayes risk (7.29) of $\delta^L$. For the situation of Example 7.8: (c) Show how to derive the empirical Bayes estimator $\delta^{EB2}$ of (7.33). (d) Verify the Bayes risk (7.34) of $\delta^{EB2}$.

7.15 The empirical Bayes estimator (7.27) can also be derived as a hierarchical Bayes estimator. Consider the hierarchical model

$$X_{ij}|\xi_i \sim N(\xi_i, \sigma^2),\ j = 1, \ldots, n,\ i = 1, \ldots, s, \qquad \xi_i|\mu \sim N(\mu, \tau^2),\ i = 1, \ldots, s, \qquad \mu \sim \text{Uniform}(-\infty, \infty),$$

where $\sigma^2$ and $\tau^2$ are known. (a) Show that the Bayes estimator, with respect to squared error loss, is

$$E(\xi_i|x) = \frac{\sigma^2}{\sigma^2 + n\tau^2}E(\mu|x) + \frac{n\tau^2}{\sigma^2 + n\tau^2}\bar x_i,$$

where $E(\mu|x)$ is the posterior mean of $\mu$. (b) Establish that $E(\mu|x) = \bar x = \sum_{ij}x_{ij}/ns$. [This can be done by evaluating the expectation directly, or by showing that the posterior distribution of $\xi_i|x$ is

$$\xi_i|x \sim N\left(\frac{\sigma^2}{\sigma^2 + n\tau^2}\bar x + \frac{n\tau^2}{\sigma^2 + n\tau^2}\bar x_i,\ \frac{\sigma^2}{n(\sigma^2 + n\tau^2)}\left(n\tau^2 + \frac{\sigma^2}{s}\right)\right).$$

Note that the $\xi_i$'s are not independent a posteriori. In fact,

$$\xi|x \sim N_s\left(\frac{n\tau^2}{\sigma^2 + n\tau^2}M\bar{\mathbf x},\ \frac{\sigma^2\tau^2}{\sigma^2 + n\tau^2}M\right),$$

where $\bar{\mathbf x} = (\bar x_1, \ldots, \bar x_s)'$ and $M = I + (\sigma^2/n\tau^2)J$.]
(c) Show that the empirical Bayes estimator (7.32) can also be derived as a hierarchical Bayes estimator, by appending the specification $(\alpha, \beta) \sim \text{Uniform}(\Re^2)$ [that is, $\pi(\alpha, \beta) = d\alpha\, d\beta$, $-\infty < \alpha, \beta < \infty$] to the hierarchy (7.30).

7.16 Generalization of model (7.23) to the case of unequal $n_i$ is, perhaps, not as straightforward as one might expect. Consider the generalization

$$X_{ij}|\xi_i \sim N(\xi_i, \sigma^2),\ j = 1, \ldots, n_i,\ i = 1, \ldots, s, \qquad \xi_i|\mu \sim N(\mu, \tau_i^2),\ i = 1, \ldots, s.$$

We also make the assumption that $\tau_i^2 = \tau^2/n_i$. Show that: (a) The above model is equivalent to $Y \sim N_s(\lambda, \sigma^2 I)$, $\lambda \sim N_s(Z\mu, \tau^2 I)$, where $Y_i = \sqrt{n_i}\,\bar X_i$, $\lambda_i = \sqrt{n_i}\,\xi_i$, and $Z = (\sqrt{n_1}, \ldots, \sqrt{n_s})'$. (b) The Bayes estimator of $\xi_i$, using squared error loss, is

$$\frac{\sigma^2}{\sigma^2+\tau^2}\mu + \frac{\tau^2}{\sigma^2+\tau^2}\bar x_i.$$

(c) The marginal distribution of $Y$ is $Y \sim N_s(Z\mu, (\sigma^2+\tau^2)I)$, and an empirical Bayes estimator of $\xi_i$ is

$$\delta_i^{EB} = \bar x + \left(1 - \frac{(s-3)\sigma^2}{\sum_i n_i(\bar x_i - \bar x)^2}\right)(\bar x_i - \bar x),$$

where $\bar x_i = \sum_j x_{ij}/n_i$ and $\bar x = \sum_i n_i\bar x_i/\sum_i n_i$. [Without the assumption that $\tau_i^2 = \tau^2/n_i$, one cannot get a simple empirical Bayes estimator. If $\tau_i^2 = \tau^2$, likelihood estimation can be used to get an estimate of $\tau^2$ to be used in the empirical Bayes estimator. This is discussed by Morris (1983a).]

7.17 (Empirical Bayes estimation in a general case.) A general version of the hierarchical models of Examples 7.7 and 7.8 is

$$X|\xi \sim N_s(\xi, \sigma^2 I), \qquad \xi|\beta \sim N_s(Z\beta, \tau^2 I),$$

where $\sigma^2$ and $Z_{s\times r}$, of rank $r$, are known, and $\tau^2$ and $\beta_{r\times 1}$ are unknown. Under this model, show that: (a) The Bayes estimator of $\xi$, under squared error loss, is

$$E(\xi|x, \beta) = \frac{\sigma^2}{\sigma^2+\tau^2}Z\beta + \frac{\tau^2}{\sigma^2+\tau^2}x.$$

(b) Marginally, the distribution of $X|\beta$ is $X|\beta \sim N_s(Z\beta, (\sigma^2+\tau^2)I)$. (c) Under the marginal distribution in part (b),

$$E[(Z'Z)^{-1}Z'X] = E\hat\beta = \beta, \qquad E\left[\frac{s-r-2}{|X - Z\hat\beta|^2}\right] = \frac{1}{\sigma^2+\tau^2},$$

and, hence, an empirical Bayes estimator of $\xi$ is

$$\delta^{EB} = Z\hat\beta + \left(1 - \frac{(s-r-2)\sigma^2}{|x - Z\hat\beta|^2}\right)(x - Z\hat\beta).$$

(d) The Bayes risk of $\delta^{EB}$ is $r(\tau, \delta_\tau) + (r+2)\sigma^4/(\sigma^2+\tau^2)$, where $r(\tau, \delta_\tau)$ is the risk of the Bayes estimator.

7.18 (Hierarchical Bayes estimation in a general case.) In a manner similar to the previous problem, we can derive hierarchical Bayes estimators for the model

$$X|\xi \sim N_s(\xi, \sigma^2 I), \qquad \xi|\beta \sim N_s(Z\beta, \tau^2 I), \qquad \beta \sim \text{Uniform}(\Re^r),$$

where $\sigma^2$ and $Z_{s\times r}$, of rank $r$, are known and $\tau^2$ is unknown. (a) The prior distribution of $\xi$, unconditional on $\beta$, is proportional to

$$\pi(\xi) = \int_{\Re^r}\pi(\xi|\beta)\,d\beta \propto e^{-\frac{1}{2}\frac{\xi'(I-H)\xi}{\tau^2}},$$

where $H = Z(Z'Z)^{-1}Z'$ is the projection from $\Re^s$ onto the $r$-dimensional column space of $Z$. [Hint: Establish that $(\xi - Z\beta)'(\xi - Z\beta) = \xi'(I-H)\xi + [\beta - (Z'Z)^{-1}Z'\xi]'Z'Z[\beta - (Z'Z)^{-1}Z'\xi]$ to perform the integration over $\beta$.] (b) Show that

$$\xi|x \sim N_s\left(\frac{\tau^2}{\sigma^2+\tau^2}Mx,\ \frac{\sigma^2\tau^2}{\sigma^2+\tau^2}M\right),$$

where $M = I + (\sigma^2/\tau^2)H$, and hence that the Bayes estimator is given by

$$\frac{\sigma^2}{\sigma^2+\tau^2}Hx + \frac{\tau^2}{\sigma^2+\tau^2}x,$$

where $Z\hat\beta = Hx$. [Hint: Establish that

$$\frac{1}{\tau^2}\xi'(I-H)\xi + \frac{1}{\sigma^2}(x-\xi)'(x-\xi) = \frac{\sigma^2+\tau^2}{\sigma^2\tau^2}\left[\xi - \frac{\tau^2}{\sigma^2+\tau^2}Mx\right]'M^{-1}\left[\xi - \frac{\tau^2}{\sigma^2+\tau^2}Mx\right] + \frac{1}{\sigma^2+\tau^2}x'(I-H)x,$$

where $M^{-1} = I - \frac{\sigma^2}{\sigma^2+\tau^2}H$.] (c) Marginally, $X'(I-H)X \sim (\sigma^2+\tau^2)\chi^2_{s-r}$. This leads us to the empirical Bayes estimator

$$Hx + \left(1 - \frac{(s-r-2)\sigma^2}{x'(I-H)x}\right)(x - Hx),$$

which is equal to the empirical Bayes estimator of Problem 7.17(c). [The model in this and the previous problem can be substantially generalized. For example, both $\sigma^2 I$ and $\tau^2 I$ can be replaced by full, positive definite matrices. At the cost of an increase in the complexity of the matrix calculations and the loss of simple answers, hierarchical and empirical Bayes estimators can be computed.
The covariances, either scalar or matrix, can also be unknown, and inverted gamma (or inverted Wishart) prior distributions can be accommodated. Calculations can be implemented via the Gibbs sampler. Note that these generalizations encompass the "unequal $n_i$" case (see Problem 7.16), but there are no simple solutions for this case. Many of these estimators also possess a minimax property, which will be discussed in Chapter 5.]

7.19 As noted by Morris (1983a), an analysis of variance-type hierarchical model, with unequal $n_i$, will yield closed-form empirical Bayes estimators if the prior variances are proportional to the sampling variances. Show that, for the model

$$X_{ij}|\xi_i \sim N(\xi_i, \sigma^2),\ j = 1, \ldots, n_i,\ i = 1, \ldots, s, \qquad \xi|\beta \sim N_s(Z\beta, \tau^2 D^{-1}),$$

where $\sigma^2$ and $Z_{s\times r}$, of full rank $r$, are known, $\tau^2$ is unknown, and $D = \operatorname{diag}(n_1, \ldots, n_s)$, an empirical Bayes estimator is given by

$$\delta^{EB} = Z\hat\beta + \left(1 - \frac{(s-r-2)\sigma^2}{(\bar x - Z\hat\beta)'D(\bar x - Z\hat\beta)}\right)(\bar x - Z\hat\beta),$$

with $\bar x_i = \sum_j x_{ij}/n_i$, $\bar x = \{\bar x_i\}$, and $\hat\beta = (Z'DZ)^{-1}Z'D\bar x$.

7.20 An entertaining (and unjustifiable) result which abuses a hierarchical Bayes calculation yields the following derivation of the James-Stein estimator. Let $X \sim N_p(\theta, I)$ and $\theta|\tau^2 \sim N_p(0, \tau^2 I)$. (a) Verify that conditional on $\tau^2$, the posterior and marginal distributions are given by

$$\pi(\theta|x, \tau^2) = N_p\left(\frac{\tau^2}{\tau^2+1}x,\ \frac{\tau^2}{\tau^2+1}I\right), \qquad m(x|\tau^2) = N_p[0, (\tau^2+1)I].$$

(b) Show that, taking $\pi(\tau^2) = 1$, $-1 < \tau^2 < \infty$, we have

$$\int\int_{\Re^p}\theta\,\pi(\theta|x, \tau^2)m(x|\tau^2)\,d\theta\,d\tau^2 = \frac{x}{(2\pi)^{p/2}(|x|^2)^{p/2-1}}\left[\Gamma\left(\frac{p-2}{2}\right)2^{(p-2)/2} - \frac{\Gamma(p/2)\,2^{p/2}}{|x|^2}\right]$$

and

$$\int\int_{\Re^p}\pi(\theta|x, \tau^2)m(x|\tau^2)\,d\theta\,d\tau^2 = \frac{1}{(2\pi)^{p/2}(|x|^2)^{p/2-1}}\,\Gamma\left(\frac{p-2}{2}\right)2^{(p-2)/2},$$

and hence

$$E(\theta|x) = \left(1 - \frac{p-2}{|x|^2}\right)x.$$

(c) Explain some implications of the result in part (b) and why it cannot be true. [Try to reconcile it with (3.3.12).] (d) Why are the calculations in part (b) unjustified?

9 Notes

9.1 History

Following the basic paper by Bayes (published posthumously in 1763), Laplace initiated a widespread use of Bayes procedures, particularly with noninformative priors (for example, in his paper of 1774 and the fundamental book of 1820; see Stigler 1983, 1986). However, Laplace also employed non-Bayesian methods, without always making a clear distinction. A systematic theory of statistical inference based on noninformative (locally invariant) priors, generalizing and refining Laplace's approach, was developed by Jeffreys in his book on probability theory (1st edition 1939, 3rd edition 1961). A corresponding subjective theory owes its modern impetus to the work of deFinetti (for example, 1937, 1970) and that of L. J. Savage, particularly in his book on the Foundations of Statistics (1954). The idea of selecting an appropriate prior from the conjugate family was put forward by Raiffa and Schlaifer (1961). Interest in Bayes procedures (although not from a Bayesian point of view) also received support from Wald's result (for example, 1950) that all admissible procedures are either Bayes or limiting Bayes (see Section 5.8). Bayesian attitudes and approaches are continually developing, with some of the most influential work done by Good (1965), DeGroot (1970), Zellner (1971), deFinetti (1974), Box and Tiao (1973), Berger (1985), and Bernardo and Smith (1994). An account of criticisms of the Bayesian approach can be found in Rothenberg (1977) and Berger (1985, Section 4.12).
Robert (1994a, Chapter 10) provides a defense of "The Bayesian Choice."

9.2 Modeling
A general Bayesian treatment of linear models is given by Lindley and Smith (1972); the linear mixed model is given a Bayesian treatment in Searle et al. (1992, Chapter 9); sampling from a finite population is discussed from a Bayesian point of view by Ericson (1969) (see also Godambe 1982); a Bayesian approach to contingency tables is developed by Lindley (1964), Good (1965), and Bloch and Watson (1967) (see also Bishop, Fienberg, and Holland 1975 and Leonard 1972). The theory of Bayes estimation in exponential families is given a detailed development by Bernardo and Smith (1994). The fact that the resulting posterior expectations are convex combinations of sample and prior means is a characterization of this situation (Diaconis and Ylvisaker 1979, Goel and DeGroot 1980, MacEachern 1993). Extensions to nonlinear and generalized linear models are given by Eaves (1983) and Albert (1988). In particular, for the generalized linear model, Ibrahim and Laud (1991) and Natarajan and McCulloch (1995) examine conditions for the propriety of posterior densities resulting from improper priors.

9.3 Computing
One reason why interest in Bayesian methods has flourished is the great strides made in Bayesian computing. The fundamental work of Geman and Geman (1984) (which built on that of Metropolis et al. 1953 and Hastings 1970) influenced Gelfand and Smith (1990) to write a paper that sparked new interest in Bayesian methods, statistical computing, algorithms, and stochastic processes through the use of computing algorithms such as the Gibbs sampler and the Metropolis-Hastings algorithm. Elementary introductions to these topics can be found in Casella and George (1992) and Chib and Greenberg (1995). More detailed and advanced treatments are given in Tierney (1994), Robert (1994b), Gelman et al. (1995), and Tanner (1996).

9.4 The Ergodic Theorem
The general theorem about the convergence of (5.15) in a Markov chain is known as the Ergodic Theorem; the name was coined by Boltzmann when investigating the behavior of gases (see Dudley 1989, p. 217). A sequence $X_0, X_1, X_2, \ldots$ is called ergodic if the limit of $\sum_{i=1}^n X_i/n$ is independent of the initial value $X_0$. The ergodic theorem for stationary sequences, those for which $(X_{j_1}, \ldots, X_{j_k})$ has the same distribution as $(X_{j_1+r}, \ldots, X_{j_k+r})$ for all $r = 1, 2, \ldots$, is an assertion of the equality of time and space averages and holds in some generality (Dudley 1989, Section 8.4; Billingsley 1995, Section 24). As the importance of this theorem gave it wider applicability, the term "ergodic" has come to be applied in many situations and is often associated with Markov chains. In statistical practice, the usefulness of Markov chains for computations, and the importance of the limit being independent of the starting values, has brought the study of the ergodic behavior of Markov chains into prominence for statisticians. Good entries to the classical theory of Markov chains can be found in Feller (1968), Kemeny and Snell (1976), Resnick (1992), Ross (1985), or the more advanced treatment by Meyn and Tweedie (1993). In the context of estimation, the papers by Tierney (1994) and Robert (1995) provide detailed introductions to the relevant Markov chain theory. Athreya, Doss, and Sethuraman (1996) rigorously develop limit theorems for Markov chains arising in Gibbs sampling-type situations.
We are mainly concerned with Markov chains $X_0, X_1, \ldots$ that have an invariant distribution $F$, satisfying
$\int_A dF(x) = \int P(X_{n+1} \in A \mid X_n = x)\, dF(x)$ for every measurable set $A$.
The chain is called irreducible if all sets with positive probability under the invariant distribution can be reached at some point by the chain. Such an irreducible chain is also recurrent (Tierney 1994, Section 3.1). A recurrent chain is one that visits every set infinitely often (i.o.) or, more importantly, a recurrent chain tends not to "drift off" to infinity. Formally, an irreducible Markov chain is recurrent if, for each $A$ with $\int_A dF(x) > 0$, we have $P(X_k \in A \text{ i.o.} \mid X_0 = x_0) > 0$ for all $x_0$, and equal to 1 for almost all $x_0$ $(F)$. If $P(X_k \in A \text{ i.o.} \mid X_0 = x_0) = 1$ for all $x_0$, the chain is called Harris recurrent. Finally, if the invariant distribution $F$ has finite mass (as it will in most of the cases we consider here), the chain is positive recurrent; otherwise, it is null recurrent. The Markov chain is periodic if, for some integer $m \ge 2$, there exists a collection of disjoint sets $\{A_1, \ldots, A_m\}$ for which $P(X_{k+1} \in A_{j+1} \mid X_k \in A_j) = 1$ for all $j = 1, \ldots, m$ (mod $m$); that is, the chain periodically travels through the sets $A_1, \ldots, A_m$. If no such collection of sets exists, the chain is aperiodic. The relationship between these Markov chain properties and their consequences is summarized in the following theorem, based on Theorem 1 of Tierney (1994).

Theorem 9.1 Suppose that the Markov chain $X_0, X_1, \ldots$ is irreducible with invariant distribution $F$ satisfying $\int dF(x) = 1$. Then, the Markov chain is positive recurrent and $F$ is the unique invariant distribution. If the Markov chain is also aperiodic, then for almost all $x_0$ $(F)$,
$\sup_A \left|P(X_k \in A \mid X_0 = x_0) - \int_A dF(x)\right| \to 0.$
If the chain is Harris recurrent, the convergence occurs for all $x_0$.

It is common to call a Markov chain ergodic if it is positive Harris recurrent and aperiodic. For such chains, we have the following version of the ergodic theorem.

Theorem 9.2 Let $X_0, X_1, X_2, \ldots$ be an ergodic Markov chain with invariant distribution $F$. Then, for any function $h$ with $\int |h(x)|\, dF(x) < \infty$,
$\dfrac{1}{n}\sum_{i=1}^n h(X_i) \to \int h(x)\, dF(x)$ almost everywhere $(F)$.
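As a concrete illustration of Theorem 9.2, the following sketch runs a Gibbs sampler whose invariant distribution $F$ is a standard bivariate normal with correlation $\rho$ (a stock example, not one taken from the text; all names and numerical values are illustrative) and checks that the time average of $h(x) = x^2$ approaches $\int h\, dF = 1$.

```python
import numpy as np

# Gibbs sampler for a standard bivariate normal with correlation rho:
# X | Y = y ~ N(rho*y, 1 - rho^2),  Y | X = x ~ N(rho*x, 1 - rho^2).
# The chain is ergodic, so (1/n) sum h(X_i) -> integral of h dF.
rng = np.random.default_rng(1)
rho, n = 0.9, 200_000
sd = np.sqrt(1 - rho ** 2)

x, y = 0.0, 0.0      # arbitrary starting value; the limit does not depend on it
h_sum = 0.0
for _ in range(n):
    x = rng.normal(rho * y, sd)
    y = rng.normal(rho * x, sd)
    h_sum += x ** 2  # h(x) = x^2; integral of h dF = Var(X) = 1

print(h_sum / n)     # should be close to 1
```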
9.5 Parametric and Nonparametric Empirical Bayes
The empirical Bayes analysis considered in Section 6 is sometimes referred to as parametric empirical Bayes, to distinguish it from the empirical Bayes methodology developed by Robbins (1955), which could be called nonparametric. In nonparametric empirical Bayes analysis, no functional form is assumed for the prior distribution; instead, a nonparametric estimator of the prior is built up and the resulting empirical Bayes mean is calculated. Robbins showed that, as the sample size goes to infinity, it is possible to achieve the same Bayes risk as that achieved by the true Bayes estimator. Much research has been done in this area (see, for example, Van Ryzin and Susarla 1977, Susarla 1982, Robbins 1983, and Maritz and Lwin 1989). Due to the nature of this approach, its optimality properties tend to occur in large samples, with the parametric empirical Bayes approach being more suited for estimation in finite-sample problems.
Parametric empirical Bayes methods also have a long history, with major developments evolving in the sequence of papers by Efron and Morris (1971, 1972a, 1972b, 1973a, 1973b, 1975, 1976a, 1976b), where the connection with minimax estimation is explored. The theory and applications of empirical Bayes methods are given by Morris (1983a); a more comprehensive treatment is found in Carlin and Louis (1996). Less technical introductions are given by Casella (1985a, 1992a).

9.6 Robust Bayes
Robust Bayesian methods were effectively coalesced into a practical methodology by Berger (1984). Since then, there has been a great deal of research on this topic. (See, for example, Berger and Berliner 1986, Wasserman 1989, 1990, Sivaganesan and Berger 1989, DasGupta 1991, Lavine 1991a, 1991b, and the review papers by Berger 1990b, 1994 and Wasserman 1994.) The idea of using a class of priors is similar to the gamma-minimax approach, first developed by Robbins (1951, 1964) and Good (1952). In this approach, robustness over the class is usually not the issue; rather, the objective is the construction of an estimator that is minimax over the class (see Problem 5.1.2).

CHAPTER 5
Minimaxity and Admissibility

1 Minimax Estimation
At the beginning of Chapter 4, we introduced two ways in which the risk function $R(\theta, \delta)$ can be minimized in some overall sense: minimizing a weighted-average risk and minimizing the maximum risk. The first of these approaches was the concern of Chapter 4; in the present chapter, we shall consider the second.

Definition 1.1 An estimator $\delta^M$ of $\theta$ which minimizes the maximum risk, that is, which satisfies
$\inf_\delta \sup_\theta R(\theta, \delta) = \sup_\theta R(\theta, \delta^M),$ (1.1)
is called a minimax estimator.

The problem of finding the estimator $\delta^M$ which minimizes the maximum risk is often difficult. Thus, unlike what happened in UMVU, equivariant, and Bayes estimation, we shall not be able to determine minimax estimators for large classes of problems but, rather, will treat problems individually (see Section 5.4).

Example 1.2 A first example. As we will see (Example 2.17), the Bayes estimators of Example 4.1.5, given by (4.1.12), that is,
$\delta_\Lambda(x) = \dfrac{a + x}{a + b + n},$ (1.2)
are admissible. Their risk functions are, therefore, incomparable, as they all must cross (or coincide). As an illustration, consider the group of three estimators $\delta_{\pi_i}$, $i = 1, \ldots, 3$, Bayes estimators from beta(1, 3), beta(2, 2), and beta(3, 1) priors, respectively. Based on this construction, each $\delta_{\pi_i}$ will be preferred if it is thought that the true value of the parameter is close to its prior mean (1/4, 1/2, 3/4, respectively). Alternatively, one might choose $\delta_{\pi_2}$, since it can be shown that $\delta_{\pi_2}$ has the smallest maximum risk among the three estimators being considered (see Problem 1.1). Although $\delta_{\pi_2}$ is minimax among these three estimators, it is not minimax overall. See Problems 1.2 and 1.3 for an alternative definition of minimaxity where the class of estimators is restricted. ∥

As pointed out in Section 4.1(i), and suggested by Example 1.2, Bayes estimators provide a tool for solving minimax problems. Thus, Bayesian considerations are helpful when choosing an optimal frequentist estimator. Viewed in this light, there is a synthesis of the two approaches: the Bayesian approach provides a means of constructing an estimator that has optimal frequentist properties. This synthesis highlights important features of both the Bayesian and frequentist approaches. The Bayesian paradigm is well suited for the construction of possibly optimal estimators, but is less well suited for their evaluation.
The frequentist paradigm is complementary: it is well suited for risk evaluations, but less well suited for construction. It is important to view these two approaches, and hence the contents of Chapters 4 and 5, as complementary rather than adversarial; together they provide a rich set of tools and techniques for the statistician.
If we want to apply this idea to the determination of minimax estimators, we must ask ourselves: For what prior distribution $\Lambda$ is the Bayes solution $\delta_\Lambda$ likely to be minimax? A minimax procedure, by minimizing the maximum risk, tries to do as well as possible in the worst case. One might, therefore, expect that the minimax estimator would be Bayes for the worst possible distribution. To make this concept precise, let us denote the average risk (Bayes risk) of the Bayes solution $\delta_\Lambda$ by
$r_\Lambda = r(\Lambda, \delta_\Lambda) = \int R(\theta, \delta_\Lambda)\, d\Lambda(\theta).$ (1.3)

Definition 1.3 A prior distribution $\Lambda$ is least favorable if $r_\Lambda \ge r_{\Lambda'}$ for all prior distributions $\Lambda'$. This is the prior distribution which causes the statistician the greatest average loss.

The following theorem provides a simple condition for a Bayes estimator $\delta_\Lambda$ to be minimax.

Theorem 1.4 Suppose that $\Lambda$ is a distribution on $\Omega$ such that
$r(\Lambda, \delta_\Lambda) = \int R(\theta, \delta_\Lambda)\, d\Lambda(\theta) = \sup_\theta R(\theta, \delta_\Lambda).$ (1.4)
Then:
(i) $\delta_\Lambda$ is minimax.
(ii) If $\delta_\Lambda$ is the unique Bayes solution with respect to $\Lambda$, it is the unique minimax procedure.
(iii) $\Lambda$ is least favorable.

Proof. (i) Let $\delta$ be any other procedure. Then,
$\sup_\theta R(\theta, \delta) \ge \int R(\theta, \delta)\, d\Lambda(\theta) \ge \int R(\theta, \delta_\Lambda)\, d\Lambda(\theta) = \sup_\theta R(\theta, \delta_\Lambda).$
(ii) This follows by replacing $\ge$ with $>$ in the second inequality of the proof of (i).
(iii) Let $\Lambda'$ be some other distribution of $\theta$. Then,
$r_{\Lambda'} = \int R(\theta, \delta_{\Lambda'})\, d\Lambda'(\theta) \le \int R(\theta, \delta_\Lambda)\, d\Lambda'(\theta) \le \sup_\theta R(\theta, \delta_\Lambda) = r_\Lambda.$ ✷

Condition (1.4) states that the average of $R(\theta, \delta_\Lambda)$ is equal to its maximum. This will be the case when the risk function is constant or, more generally, when $\Lambda$ assigns probability 1 to the set on which the risk function takes on its maximum value. The following minimax characterizations are variations and simplifications of this requirement.

Corollary 1.5 If a Bayes solution $\delta_\Lambda$ has constant risk, then it is minimax.

Proof. If $\delta_\Lambda$ has constant risk, (1.4) clearly holds. ✷

Corollary 1.6 Let $\omega$ be the set of parameter points at which the risk function of $\delta_\Lambda$ takes on its maximum, that is,
$\omega = \{\theta : R(\theta, \delta_\Lambda) = \sup_{\theta'} R(\theta', \delta_\Lambda)\}.$ (1.5)
Then, $\delta_\Lambda$ is minimax if and only if
$\Lambda(\omega) = 1.$ (1.6)
This can be rephrased by saying that a sufficient condition for $\delta_\Lambda$ to be minimax is that there exists a set $\omega$ such that $\Lambda(\omega) = 1$ and $R(\theta, \delta_\Lambda)$ attains its maximum at all points of $\omega$. (1.7)

Example 1.7 Binomial. Suppose that $X$ has the binomial distribution $b(p, n)$ and that we wish to estimate $p$ with squared error loss. To see whether $X/n$ is minimax, note that its risk function $p(1-p)/n$ has a unique maximum at $p = 1/2$. To apply Corollary 1.6, we would need a prior distribution $\Lambda$ for $p$ which assigns probability 1 to $p = 1/2$. The corresponding Bayes estimator is $\delta_\Lambda(X) \equiv 1/2$, not $X/n$. Thus, if $X/n$ is minimax, the approach suggested by Corollary 1.6 does not work in the present case. It is, in fact, easy to see that $X/n$ is not minimax (Problem 1.9).
To determine a minimax estimator by the method of Theorem 1.4, let us utilize the result of Example 4.1.5 and try a beta distribution for $\Lambda$. If $\Lambda$ is $B(a, b)$, the Bayes estimator is given by (4.1.12), and its risk function is
$\dfrac{1}{(a + b + n)^2}\{np(1 - p) + [a(1 - p) - bp]^2\}.$ (1.8)
Corollary 1.5 suggests seeing whether there exist values of $a$ and $b$ for which the risk function (1.8) is constant.
Setting the coefficients of $p^2$ and $p$ in (1.8) equal to zero shows that (1.8) is constant if and only if
$(a + b)^2 = n$ and $2a(a + b) = n.$ (1.9)
Since $a$ and $b$ are positive, $a + b = \sqrt{n}$ and, hence,
$a = b = \tfrac{1}{2}\sqrt{n}.$ (1.10)
It follows that the estimator
$\delta_\Lambda = \dfrac{X + \frac{1}{2}\sqrt{n}}{n + \sqrt{n}} = \dfrac{X}{n}\cdot\dfrac{\sqrt{n}}{1 + \sqrt{n}} + \dfrac{1}{2}\cdot\dfrac{1}{1 + \sqrt{n}}$ (1.11)
is constant risk Bayes and, hence, minimax. Because of the uniqueness of the Bayes estimator (4.1.4), it is seen that (1.11) is the unique minimax estimator of $p$. Of course, the estimator (1.11) is biased (Problem 1.10) because $X/n$ is the only unbiased estimator that is a function of $X$.
A comparison of its risk, which is
$r_n = E(\delta_\Lambda - p)^2 = \dfrac{1}{4}\cdot\dfrac{1}{(1 + \sqrt{n})^2},$ (1.12)
with the risk function
$R_n(p) = p(1 - p)/n$ (1.13)
of $X/n$ shows that (Problem 1.11) $r_n < R_n(p)$ in an interval $I_n = (1/2 - c_n,\, 1/2 + c_n)$ and $r_n > R_n(p)$ outside $I_n$. For small values of $n$, $c_n$ is close to 1/2, so that the minimax estimator is better (and, in fact, substantially better) for most of the range of $p$. However, as $n \to \infty$, $c_n \to 0$ and $I_n$ shrinks toward the point 1/2. Furthermore, $\sup_p R_n(p)/r_n = R_n(1/2)/r_n \to 1$, so that even at $p = 1/2$, where the comparison is least favorable to $X/n$, the improvement achieved by the minimax estimator is negligible. Thus, for large and even moderate $n$, $X/n$ is the better of the two estimators. In the limit as $n \to \infty$ (although not for any finite $n$), $X/n$ dominates the minimax estimator. Problems for which such a subminimax sequence does not exist are discussed by Ghosh (1964).
The present example illustrates an asymmetry between parts (ii) and (iii) of Theorem 1.4. Part (ii) asserts the uniqueness of the minimax estimator, whereas no such claim is made in part (iii) for the least favorable $\Lambda$. In the present case, it follows from (4.1.4) that for any $\Lambda$, the Bayes estimator of $p$ is
$\delta_\Lambda(x) = \dfrac{\int_0^1 p^{x+1}(1 - p)^{n-x}\, d\Lambda(p)}{\int_0^1 p^x(1 - p)^{n-x}\, d\Lambda(p)}.$ (1.14)
Expansion of $(1 - p)^{n-x}$ in powers of $p$ shows that $\delta_\Lambda(x)$ depends on $\Lambda$ only through its first $n + 1$ moments. This shows, in particular, that the least favorable distribution is not unique in the present case. Any prior distribution with the same first $n + 1$ moments gives the same Bayes solution and, hence, by Theorem 1.4, is least favorable (Problem 1.13).
Viewed as a loss function, squared error may be unrealistic when estimating $p$, since in many situations an error of fixed size seems much more serious for values of $p$ close to 0 or 1 than for values near 1/2. To take account of this difficulty, let
$L(p, d) = \dfrac{(d - p)^2}{p(1 - p)}.$ (1.15)
With this loss function, $X/n$ becomes a constant risk estimator and is seen to be a Bayes estimator with respect to the uniform distribution on (0, 1), and hence a minimax estimator. It is interesting to note that with (1.15), the risk function of the estimator (1.11) is unbounded. This indicates how strongly the minimax property can depend on the loss function. ∥
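The comparison of $r_n$ with $R_n(p)$ in Example 1.7 is easy to carry out numerically. A small sketch (the values of $n$ are illustrative), which solves $p(1-p)/n = r_n$ for the endpoints $1/2 \pm c_n$ of $I_n$:

```python
import numpy as np

# Minimax estimator (1.11) has constant risk r_n = 1/(4(1+sqrt(n))^2);
# X/n has risk R_n(p) = p(1-p)/n.  Setting them equal gives
# p^2 - p + n*r_n = 0, whose roots are 1/2 +- c_n.
for n in [4, 16, 100, 10_000]:
    r_n = 1 / (4 * (1 + np.sqrt(n)) ** 2)
    c_n = np.sqrt(1 - 4 * n * r_n) / 2
    print(f"n={n:6d}  r_n={r_n:.6f}  R_n(1/2)={0.25 / n:.6f}  c_n={c_n:.4f}")
```

As the output shows, $c_n$ shrinks toward 0 and $R_n(1/2)/r_n$ toward 1 as $n$ grows, in line with the discussion above.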
When the loss function is convex in $d$, as was the case in Example 1.7, it follows from Corollary 1.7.9 that attention may be restricted to nonrandomized estimators. The next example shows that this is no longer true when the convexity assumption is dropped.

Example 1.8 Randomized minimax estimator. In the preceding example, suppose that the loss is zero when $|d - p| \le \alpha$ and is one otherwise, where $\alpha < 1/[2(n + 1)]$. Since any nonrandomized $\delta(X)$ can take on at most $n + 1$ distinct values, the maximum risk of any such $\delta$ is then equal to 1. To exhibit a randomized estimator with a smaller maximum risk, consider the extreme case in which the estimator of $p$ does not depend on the data at all but is a random variable $U$, uniformly distributed on (0, 1). The resulting risk function is
$R(p, U) = 1 - P(|U - p| \le \alpha)$ (1.16)
and it is easily seen that the maximum of (1.16) is $1 - \alpha < 1$ (Problem 1.14). ∥

The loss function in this example was chosen to make the calculations easy, but the possibility of reducing the maximum risk through randomization exists also for other nonconvex loss functions. In particular, for the problem of Example 1.7 with loss function $|d - p|^r$ ($0 < r < 1$), it can be proved that no nonrandomized estimator can be minimax (Hodges and Lehmann 1950).

Example 1.9 Difference of two binomials. Consider the case of two independent variables $X$ and $Y$ with distributions $b(p_1, m)$ and $b(p_2, n)$, respectively, and the problem of estimating $p_2 - p_1$ with squared error loss. We shall now obtain the minimax estimator when $m = n$; no solution is known when $m \ne n$.
The derivation of the estimator in Example 4.1.5 suggests that in the present case, too, the minimax estimator might be a linear estimator $aX + bY + k$ with constant risk. However, it is easy to see (Problem 1.18) that such a minimax estimator does not exist. Still hoping for a linear estimator, we shall therefore try to apply Corollary 1.6. Before doing so, let us simplify the hoped-for solutions by an invariance consideration. The problem remains invariant under the transformation
$(X', Y') = (Y, X)$, $(p_1', p_2') = (p_2, p_1)$, $d' = -d,$ (1.17)
and an estimator $\delta(X, Y)$ is equivariant under this transformation provided $\delta(Y, X) = -\delta(X, Y)$, and hence if $(a + b)(x + y) + 2k = 0$ for all $x, y$. This leads to the condition $a + b = k = 0$ and, therefore, to an estimator of the form
$\delta(X, Y) = c(Y - X).$ (1.18)
As will be seen in Section 5.4 (see Theorem 4.1 and the discussion following it), if a problem remains invariant under a finite group $G$ and if a minimax estimator exists, then there exists an equivariant minimax estimator. In our search for a linear minimax estimator, we may therefore restrict attention to estimators of the form (1.18).
Application of Corollary 1.6 requires determination of the set $\omega$ of pairs $(p_1, p_2)$ for which the risk of (1.18) takes on its maximum. The risk of (1.18) is
$R_c(p_1, p_2) = E[c(Y - X) - (p_2 - p_1)]^2 = c^2 n[p_1(1 - p_1) + p_2(1 - p_2)] + (cn - 1)^2(p_2 - p_1)^2.$
Taking partial derivatives with respect to $p_1$ and $p_2$ and setting the resulting expressions equal to 0 leads to the two equations
$[2(cn - 1)^2 - 2c^2 n]p_1 - 2(cn - 1)^2 p_2 = -c^2 n,$
$-2(cn - 1)^2 p_1 + [2(cn - 1)^2 - 2c^2 n]p_2 = -c^2 n.$ (1.19)
Typically, these equations have a unique solution, say $(p_1^0, p_2^0)$, which is the point of maximum risk. Application of Corollary 1.6 would then have $\Lambda$ assign probability 1 to the point $(p_1^0, p_2^0)$, and the associated Bayes estimator would be $\delta(X, Y) \equiv p_2^0 - p_1^0$, whose risk does not have a maximum at $(p_1^0, p_2^0)$. This impasse does not occur if the two equations (1.19) are linearly dependent. This will be the case only if $c^2 n = 2(cn - 1)^2$, and hence if
$c = \dfrac{\sqrt{2n}}{n}\cdot\dfrac{1}{\sqrt{2n} \pm 1}.$ (1.20)
Now, a Bayes estimator (4.1.4) does not take on values outside the convex hull of the range of the estimand, which in the present case is $(-1, 1)$. This rules out the minus sign in the denominator of $c$. Substituting (1.20) with the plus sign into (1.19) reduces these two equations to the single equation
$p_1 + p_2 = 1.$ (1.21)
The hoped-for minimax estimator is thus
$\delta(X, Y) = \dfrac{\sqrt{2n}}{n(\sqrt{2n} + 1)}(Y - X).$ (1.22)
We have shown (and it is easily verified directly, see Problem 1.19) that in the $(p_1, p_2)$ plane, the risk of this estimator takes on its maximum value at all points of the line segment (1.21) with $0 < p_1 < 1$, which therefore is the conjectured $\omega$ of Corollary 1.6. It remains to show that (1.22) is the Bayes estimator of a prior distribution $\Lambda$ which assigns probability 1 to the set (1.21).
Let us now confine attention to this subset and note that $p_1 + p_2 = 1$ implies $p_2 - p_1 = 2p_2 - 1$. The following lemma reduces the problem of estimating $2p_2 - 1$ to that of estimating $p_2$.

Lemma 1.10 Let $\delta$ be a Bayes (respectively, UMVU, minimax, admissible) estimator of $g(\theta)$ for squared error loss. Then, $a\delta + b$ is Bayes (respectively, UMVU, minimax, admissible) for $ag(\theta) + b$.

Proof. This follows immediately from the fact that $R(ag(\theta) + b, a\delta + b) = a^2 R(g(\theta), \delta)$. ✷

For estimating $p_2$, we have, in the present case, $n$ binomial trials with parameter $p = p_2$ and $n$ binomial trials with parameter $p = p_1 = 1 - p_2$. If we interchange the meanings of "success" and "failure" in the latter $n$ trials, we have $2n$ binomial trials with success probability $p_2$, resulting in $Y + (n - X)$ successes. According to Example 1.7, the estimator
$\dfrac{Y + n - X}{2n}\cdot\dfrac{\sqrt{2n}}{1 + \sqrt{2n}} + \dfrac{1}{2}\cdot\dfrac{1}{1 + \sqrt{2n}}$
is unique Bayes for $p_2$. Applying Lemma 1.10 and collecting terms, we see that the estimator (1.22) is unique Bayes for estimating $p_2 - p_1 = 2p_2 - 1$ on $\omega$. It now follows from the properties of this estimator and Corollary 1.5 that $\delta$ is minimax for estimating $p_2 - p_1$. It is interesting that $\delta(X, Y)$ is not the difference of the minimax estimators for $p_2$ and $p_1$; this is unlike the behavior of UMVU estimators.
That $\delta(X, Y)$ is the unique Bayes (and hence minimax) estimator for $p_2 - p_1$, even when attention is not restricted to $\omega$, follows from the remark after Corollary 4.1.4. It is only necessary to observe that the subsets of the sample space which have positive probability are the same whether $(p_1, p_2)$ is in $\omega$ or not.
The comparison of the minimax estimator (1.22) with the UMVU estimator $(Y - X)/n$ gives results similar to those in the case of a single $p$. In particular, the UMVU estimator is again much better for large $m = n$ (Problem 1.20). ∥

Equation (1.4) implies that a least favorable distribution exists. When such a distribution does not exist, Theorem 1.4 is not applicable. Consider, for example, the problem of estimating the mean $\theta$ of a normal distribution with known variance. Since all possible values of $\theta$ play a completely symmetrical role, in the sense that none is easier to estimate than any other, it is natural to conjecture that the least favorable distribution is "uniform" on the real line, that is, that the least favorable distribution is Lebesgue measure. This is the Jeffreys prior and, in this case, is not a proper distribution. There are two ways in which the approach of Theorem 1.4 can be generalized to include such improper priors.
(a) As was seen in Section 4.1, it may turn out that the posterior distribution given $x$ is a proper distribution. One can then compute the expectation $E[g(\Theta) \mid x]$ for this distribution, a generalized Bayes estimator, and hope that it is the desired estimator. This approach is discussed, for example, by Sacks (1963), Brown (1971), and Berger and Srinivasan (1978).
(b) Alternatively, one can approximate the improper prior distribution with a sequence of proper distributions (for example, Lebesgue measure by the uniform distributions on $(-N, N)$, $N = 1, 2, \ldots$) and generalize the concept of a least favorable distribution to that of a least favorable sequence. We shall here follow the second approach.

Definition 1.11 A sequence of prior distributions $\{\Lambda_n\}$ is least favorable if, for every prior distribution $\Lambda$, we have
$r_\Lambda \le r = \lim_{n\to\infty} r_{\Lambda_n},$ (1.23)
where
$r_{\Lambda_n} = \int R(\theta, \delta_{\Lambda_n})\, d\Lambda_n(\theta)$ (1.24)
is the Bayes risk under $\Lambda_n$.

Theorem 1.12 Suppose that $\{\Lambda_n\}$ is a sequence of prior distributions with Bayes risks $r_{\Lambda_n}$ satisfying (1.23), and that $\delta$ is an estimator for which
$\sup_\theta R(\theta, \delta) = r.$ (1.25)
Then (i) $\delta$ is minimax and (ii) the sequence $\{\Lambda_n\}$ is least favorable.

Proof. (i) Suppose $\delta'$ is any other estimator. Then,
$\sup_\theta R(\theta, \delta') \ge \int R(\theta, \delta')\, d\Lambda_n(\theta) \ge r_{\Lambda_n},$
and this holds for every $n$. Hence, $\sup_\theta R(\theta, \delta') \ge \sup_\theta R(\theta, \delta)$, and $\delta$ is minimax.
(ii) If $\Lambda$ is any distribution, then
$r_\Lambda = \int R(\theta, \delta_\Lambda)\, d\Lambda(\theta) \le \int R(\theta, \delta)\, d\Lambda(\theta) \le \sup_\theta R(\theta, \delta) = r.$
This completes the proof. ✷

This theorem is less satisfactory than Theorem 1.4 in two respects. First, even if the Bayes estimators $\delta_{\Lambda_n}$ are unique, it is not possible to conclude that $\delta$ is the unique minimax estimator. The reason is that the second inequality in the second line of the proof of (i), which is strict when $\delta_{\Lambda_n}$ is unique Bayes, becomes weak under the limit operation. The other difficulty is that, in order to check condition (1.25), it is necessary to evaluate $r$ and hence the Bayes risks $r_{\Lambda_n}$. This evaluation is often easy when the $\Lambda_n$ are conjugate priors. Alternatively, the following lemma sometimes helps.

Lemma 1.13 If $\delta_\Lambda$ is the Bayes estimator of $g(\theta)$ with respect to $\Lambda$ and if
$r_\Lambda = E[\delta_\Lambda(X) - g(\Theta)]^2$ (1.26)
is its Bayes risk, then
$r_\Lambda = \int \text{var}[g(\Theta) \mid x]\, dP(x).$ (1.27)
In particular, if the posterior variance of $g(\Theta) \mid x$ is independent of $x$, then
$r_\Lambda = \text{var}[g(\Theta) \mid x].$ (1.28)

Proof. The right side of (1.26) is equal to $\int E\{[g(\Theta) - \delta_\Lambda(x)]^2 \mid x\}\, dP(x)$, and the result follows from (4.5.2). ✷

Example 1.14 Normal mean. Let $X = (X_1, \ldots, X_n)$, with the $X_i$ iid according to $N(\theta, \sigma^2)$. Let the estimand be $\theta$, the loss squared error, and suppose, at first, that $\sigma^2$ is known. We shall prove that $\bar X$ is minimax by finding a sequence of Bayes estimators satisfying (1.23) with $r = \sigma^2/n$. As prior distribution for $\theta$, let us try the conjugate normal distribution $N(\mu, b^2)$. Then, it follows from Example 4.2.2 that the Bayes estimator is
$\delta_\Lambda(x) = \dfrac{n\bar x/\sigma^2 + \mu/b^2}{n/\sigma^2 + 1/b^2}.$ (1.29)
The posterior variance is given by (4.2.3) and is independent of $x$, so that
$r_\Lambda = \dfrac{1}{n/\sigma^2 + 1/b^2}.$ (1.30)
As $b \to \infty$, $r_\Lambda \uparrow \sigma^2/n$, and this completes the proof of the fact that $\bar X$ is minimax.
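A quick numerical check of this limit ($\sigma^2$ and $n$ chosen arbitrarily for illustration):

```python
import numpy as np

# Bayes risk (1.30) under the N(mu, b^2) prior, r = 1/(n/sigma^2 + 1/b^2),
# increases to sigma^2/n (the risk of X-bar) as b -> infinity.
sigma2, n = 2.0, 10
for b in [1, 10, 100, 1000]:
    r = 1 / (n / sigma2 + 1 / b ** 2)
    print(f"b={b:5d}  Bayes risk={r:.6f}")
print("limit sigma^2/n =", sigma2 / n)
```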
Suppose, now, that $\sigma^2$ is unknown. It follows from the result just proved that the maximum risk of every estimator will be infinite unless $\sigma^2$ is bounded. We shall therefore assume that
$\sigma^2 \le M.$ (1.31)
Under this restriction, the maximum risk of $\bar X$ is
$\sup_{(\theta, \sigma^2)} E(\bar X - \theta)^2 = \dfrac{M}{n}.$
That $\bar X$ is minimax subject to (1.31) is then an immediate consequence of Lemma 1.15 below. It is interesting to note that although the boundedness condition (1.31) was required for the minimax problem to be meaningful, the minimax estimator does not, in fact, depend on the value of $M$. An alternative modification, when $\sigma^2$ is unknown, is to consider the loss function
$L(\theta, \delta) = \dfrac{(\theta - \delta)^2}{\sigma^2}.$ (1.32)
For this loss function, the risk of $\bar X$ is bounded, and $\bar X$ is again minimax (Problem 1.21). ∥

We now prove a lemma which is helpful in establishing minimaxity in nonparametric situations.

Lemma 1.15 Let $X$ be a random quantity with distribution $F$, and let $g(F)$ be a functional defined over a set $\mathcal{F}_1$ of distributions $F$. Suppose that $\delta$ is a minimax estimator of $g(F)$ when $F$ is restricted to some subset $\mathcal{F}_0$ of $\mathcal{F}_1$. Then, if
$\sup_{F\in\mathcal{F}_0} R(F, \delta) = \sup_{F\in\mathcal{F}_1} R(F, \delta),$ (1.33)
$\delta$ is minimax also when $F$ is permitted to vary over $\mathcal{F}_1$.

Proof. If an estimator $\delta'$ existed with smaller sup risk over $\mathcal{F}_1$ than $\delta$, it would also have smaller sup risk over $\mathcal{F}_0$ and thus contradict the minimax property of $\delta$ over $\mathcal{F}_0$. ✷

Example 1.16 Nonparametric mean. Let $X_1, \ldots, X_n$ be iid with distribution $F$ and finite expectation $\theta$, and consider the problem of estimating $\theta$ with squared error loss. If the maximum risk of every estimator of $\theta$ is infinite, the minimax problem is meaningless. To rule this out, we shall consider two possible restrictions on $F$:
(a) bounded variance, $\text{var}_F(X_i) \le M < \infty$; (1.34)
(b) bounded range, $-\infty < a < X_i < b < \infty$. (1.35)
Under (a), it is easy to see that $\bar X$ is minimax by applying Lemma 1.15, with $\mathcal{F}_1$ the family of all distributions $F$ satisfying (1.34) and $\mathcal{F}_0$ the family of normal distributions satisfying (1.34). Then, $\bar X$ is minimax for $\mathcal{F}_0$ by Example 1.14. Since (1.33) holds with $\delta = \bar X$, it follows that $\bar X$ is minimax for $\mathcal{F}_1$. We shall see in the next section that it is, in fact, the unique minimax estimator of $\theta$.
To find a minimax estimator of $\theta$ under (b), suppose without loss of generality that $a = 0$ and $b = 1$, and let $\mathcal{F}_1$ denote the class of distributions $F$ with $F(1) - F(0) = 1$. It seems plausible in the present case that a least favorable distribution over $\mathcal{F}_1$ would concentrate on those distributions $F \in \mathcal{F}_1$ which are as spread out as possible, that is, which put all their mass on the points 0 and 1. But these are just binomial distributions with $n = 1$. If this conjecture is correct, the minimax estimator of $\theta$ should reduce to (1.11) when all the $X_i$ are 0 or 1, with $X$ in (1.11) given by $X = \sum X_i$. This suggests the estimator
$\delta(X_1, \ldots, X_n) = \dfrac{\sqrt{n}}{1 + \sqrt{n}}\bar X + \dfrac{1}{2}\cdot\dfrac{1}{1 + \sqrt{n}},$ (1.36)
and we shall now prove that (1.36) is, indeed, a minimax estimator of $\theta$. Let $\mathcal{F}_0$ denote the set of distributions $F$ according to which
$P(X_i = 0) = 1 - p, \quad P(X_i = 1) = p, \quad 0 < p < 1.$
To make the minimax estimation of ¯ a meaningful, restrictions on the a’s are needed. In analogy to (1.34) and (1.35), we shall consider the following cases: (a) Bounded population variance 1 N (ai −¯ a)2 ≤M; (1.40) (b) Bounded range, 0 ≤ai ≤1, (1.41) to which the more general case a ≤ai ≤b can always be reduced. The loss function will be squared error, and for the time being, we shall ignore the labels. It will be seen in Section 5.4 that the minimax results remain valid when the labels are included in the data. Example 1.17 Simple random sampling. We begin with case (b) and consider first the special case in which all the values of a are either 1 or 0, say D equal to 320 MINIMAXITY AND ADMISSIBILITY [ 5.1 1, N −D equal to 0. The total number X of 1’s in the sample is then a sufficient statistic and has the hypergeometric distribution P(X = x) = D x  N −D n −x  )N n  (1.42) where max[0, n −(N −D)] ≤x ≤min(n, D) (Problem 1.28) and where D can take on the values 0, 1, . . . , N. The estimand is ¯ a = D/N, and, following the method of Example 4.1.5, one finds that αX/n + β with α = 1 1 + N−n n(N−1) , β = 1 2(1 −α) (1.43) is a linear estimator with constant risk (Problem 1.29). That (1.43) is minimax is then a consequence of the fact that it is the Bayes estimator of D/N with respect to the prior distribution P(D = d) =  1 0 N d  pdqN−d H(a + b) H(a)H(b)pa−1qb−1dp, (1.44) where a = b = β α/n −1/N . (1.45) It is easily checked that as N →∞, (1.43) →(1.11) and (1.45) →1/2√n, as one would expect since the hypergeometric distribution then tends toward the binomial. The special case just treated plays the same role as a tool for the problem of estimating ¯ a subject to (1.41) that the binomial case played in Example 1.16. To show that δ = α ¯ Y + β (1.46) is minimax, it is only necessary to check that E(δ −¯ a)2 = α2 var( ¯ Y) + [β + (α −1)¯ a]2 (1.47) takes on its maximum when all the values of a are 0 or 1, and this is seen as in Example 1.16 (Problem 1.31). Unfortunately, δ shares the poor risk properties of the binomial minimax estimator for all but very small n. The minimax estimator of ¯ a subject to (1.40), as might be expected from Exam-ple 1.16, is ¯ Y. For a proof of this result, which will not be given here, see Bickel and Lehmann (1981) or Hodges and Lehmann (1981). ∥ As was seen in Examples 1.7 and 1.8, minimax estimators can be quite unsat-isfactory over a large part of the parameter space. This is perhaps not surprising since, as a Bayes estimator with respect to a least favorable prior, a minimax esti-mator takes the most pessimistic view possible. This is illustrated by Example 1.7, in which the least favorable prior, B(an, bn) with an = bn = √n/2, concentrates nearly its entire attention on the neighborhood of p = 1/2 for which accurate esti-mation of p is most difficult. On the other hand, a Bayes estimator corresponding to a personal prior may expose the investigator to a very high maximum risk, which 5.1 ] MINIMAX ESTIMATION 321 may well be realized if the prior has badly misjudged the situation. It is possible to avoid the worst consequences of both these approaches through a compromise which permits the use of personal judgment and yet provides adequate protection against unacceptably high risks. Suppose that M is the maximum risk of the minimax estimator. 
Then, one may be willing to consider estimators whose maximum risk exceeds M, if the excess is controlled, say, if R(θ, δ) ≤M(1 + ε) for all θ (1.48) where ε is the proportional increase in risk that one is willing to tolerate. A re-stricted Bayes estimator is then obtained by minimizing, subject to (1.48), the average risk (4.1.1) for the prior  of one’s choice. Such restricted Bayes estimators are typically quite difficult to calculate. There is,however,oneclassofsituationsinwhichtheevaluationistrivial:Ifthemaximum risk of the unrestricted Bayes estimator satisfies (1.48), it, of course, coincides with the restricted Bayes estimator. This possibility is illustrated by the following example. Example 1.18 Binomial restricted Bayes estimator. In Example 4.1.5, suppose we believe p to be near zero (it may, for instance, be the prob-ability of a rarely occurring disease or accident). As a prior distribution for p, we therefore take B(1, b) with a fairly high value of b. The Bayes estimator (4.11.12) is then δ = (X + 1)/(n + b + 1) and its risk is E(δ −p)2 = np(1 −p) + [(1 −p) −bp]2 [n + b + 1]2 . (1.49) At p = 1, the risk is [b/(n + b + 1)]2, which for fixed n and sufficiently large b can be arbitrarily close to 1, while the constant risk of the minimax estimator is only 1/4(1 + √n)2. On the other hand, for fixed b, an easy calculation shows that (Problem 1.32). 4(1 + √n)2 sup R(p, δ) →1 as n →∞. For any given b and ε > 0, δ will therefore satisfy (1.48) for sufficiently large values of n. ∥ A quite different, and perhaps more typical, situation is illustrated by the normal case. Example 1.19 Normal. If in the situation of Example 4.2.2, without loss of gen-erality, we put σ = 1 and µ = 0, the Bayes estimator (4.2.2) reduces to c ¯ X with c = nb2/(1+nb2). Since its risk function is unbounded for all n, while the minimax risk is 1/n, no such Bayes estimator can be restricted Bayes. As a compromise, Efron and Morris (1971) propose an estimator of the form δ =    ¯ x + M if ¯ x < −M/(1 −c) c ¯ x if |¯ x| ≤M/(1 −c) ¯ x −M if ¯ x > M/(1 −c) (1.50) for 0 ≤c ≤1. The risk of these estimators is bounded (Problem 1.33) with maximum risk tending toward 1/n as M →0. On the other hand, for large M 322 MINIMAXITY AND ADMISSIBILITY [ 5.2 values, (1.50) is close to the Bayes estimator. Although (1.50) is not the exact optimum solution of the restricted Bayes problem, Efron and Morris (1971) and Marazzi (1980) show it to be close to optimal. ∥ 2 Admissibility and Minimaxity in Exponential Families It was seen in Example 2.2.6 that a UMVU estimator δ need not be admissible. If a biased estimator δ′ has uniformly smaller risk, the choice between δ and δ′ is not clear-cut: One must balance the advantage of unbiasedness against the drawback of larger risk. The situation is, however, different for minimax estimators. If δ′ dominates a minimax estimator δ, then δ′ is also minimax and, thus, definitely preferred. It is, therefore, particularly important to ascertain whether a proposed minimax estimator is admissible. In the present section, we shall obtain some admissibility results (and in the process, some minimax results) for exponential families, and in the next section, we shall consider the corresponding problem for group families. To prove inadmissibility of an estimator δ, it is sufficient to produce an estimator δ′ which dominates it. An example was given in Lemma 2.2.7. The following is another instance. 
2 Admissibility and Minimaxity in Exponential Families
It was seen in Example 2.2.6 that a UMVU estimator $\delta$ need not be admissible. If a biased estimator $\delta'$ has uniformly smaller risk, the choice between $\delta$ and $\delta'$ is not clear-cut: one must balance the advantage of unbiasedness against the drawback of larger risk. The situation is, however, different for minimax estimators. If $\delta'$ dominates a minimax estimator $\delta$, then $\delta'$ is also minimax and, thus, definitely preferred. It is, therefore, particularly important to ascertain whether a proposed minimax estimator is admissible. In the present section, we shall obtain some admissibility results (and, in the process, some minimax results) for exponential families, and in the next section, we shall consider the corresponding problem for group families.
To prove inadmissibility of an estimator $\delta$, it is sufficient to produce an estimator $\delta'$ which dominates it. An example was given in Lemma 2.2.7. The following is another instance.

Lemma 2.1 Let the range of the estimand $g(\theta)$ be an interval with endpoints $a$ and $b$, and suppose that the loss function $L(\theta, d)$ is positive when $d \ne g(\theta)$ and zero when $d = g(\theta)$, and that, for any fixed $\theta$, $L(\theta, d)$ is increasing as $d$ moves away from $g(\theta)$ in either direction. Then, any estimator $\delta$ taking on values outside the closed interval $[a, b]$ with positive probability is inadmissible.

Proof. $\delta$ is dominated by the estimator $\delta'$, which is $a$ or $b$ when $\delta < a$ or $\delta > b$, respectively, and which otherwise is equal to $\delta$. ✷

Example 2.2 Randomized response. The following is a survey technique sometimes used when delicate questions are being asked. Suppose, for example, that the purpose of a survey is to estimate the proportion $p$ of students who have ever cheated on an exam. Then, the following strategy may be used. With probability $a$ (known), the student is asked the question "Have you ever cheated on an exam?", and with probability $1 - a$, the question "Have you always been honest on exams?" The survey taker does not know which question the student answers, so the answer cannot incriminate the respondent (hence, honesty is encouraged). If a sample of $n$ students is questioned in this way, the number of positive responses is a binomial random variable $X^* \sim b(p^*, n)$ with
$p^* = ap + (1 - a)(1 - p),$ (2.1)
where $p$ is the probability of cheating, and
$\min\{a, 1 - a\} < p^* < \max\{a, 1 - a\}.$ (2.2)
For estimating the probability $p = [p^* - (1 - a)]/(2a - 1)$, the method of moments estimator $\tilde p = [\hat p^* - (1 - a)]/(2a - 1)$ is inadmissible by Lemma 2.1. The MLE of $p$, which equals $\tilde p$ when $\hat p^*$ falls in the interval specified in (2.2) and takes on the corresponding endpoint value otherwise, is also inadmissible, although this fact does not follow directly from Lemma 2.1. [Inadmissibility of the MLE of $p$ follows from Moors (1981); see also Hoeffding 1982 and Chaudhuri and Mukerjee 1988.] ∥
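A small simulation makes the point of Example 2.2 concrete (the values of $a$, $p$, $n$, and the replication count are illustrative): the method of moments estimator falls outside $[0, 1]$ with appreciable probability, and the clipped version has smaller risk, as Lemma 2.1 guarantees.

```python
import numpy as np

# Randomized response (Example 2.2): X* ~ b(p*, n), p* = a*p + (1-a)(1-p).
rng = np.random.default_rng(5)
a, p, n, reps = 0.7, 0.05, 100, 200_000

p_star = a * p + (1 - a) * (1 - p)             # (2.1)
x_star = rng.binomial(n, p_star, reps)
mom = (x_star / n - (1 - a)) / (2 * a - 1)     # method of moments estimator
mle = np.clip(mom, 0.0, 1.0)                   # clipped version
print("P(mom outside [0,1]):", np.mean((mom < 0) | (mom > 1)))
print("risk mom:", np.mean((mom - p) ** 2), " risk clipped:", np.mean((mle - p) ** 2))
```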
, Xn be indepen-dent, each distributed according to a N(θ, σ 2), with σ 2 known. In the preceding section, ¯ X was seen to be minimax for estimating θ. Is it admissible? Instead of attacking this question directly, we shall consider the admissibility of an arbitrary linear function a ¯ X + b. From Example 2.2, it follows that the unique Bayes estimator with respect to the normal prior for θ with mean µ and variance τ 2 is nτ 2 σ 2 + nτ 2 ¯ X + σ 2 σ 2 + nτ 2 µ (2.4) and that the associated Bayes risk is finite (Problem 2.2). It follows that a ¯ X + b is unique Bayes and hence admissible whenever 1 Uniqueness here means that any two Bayes estimators differ only on a set N with Pθ(N) = 0 for all θ. 324 MINIMAXITY AND ADMISSIBILITY [ 5.2 0 < a < 1. (2.5) ∥ To see what can be said about other values of a, we shall now prove an inadmis-sibility result for linear estimators, which is quite general and in particular does not require the assumption of normality. Theorem 2.6 Let X be a random variable with mean θ and variance σ 2. Then, aX + b is an inadmissible estimator of θ under squared error loss whenever (i) a > 1, or (ii) a < 0, or (iii) a = 1 and b ̸= 0. Proof. The risk of aX + b is ρ(a, b) = E(aX + b −θ)2 = a2σ 2 + [(a −1)θ + b]2. (2.6) (i) If a > 1, then ρ(a, b) ≥a2σ 2 > σ 2 = ρ(1, 0) so that aX + b is dominated by X. (ii) If a < 0, then (a −1)2 > 1 and hence ρ(a, b) ≥[(a −1)θ + b]2 = (a −1)2  θ + b a −1 2 >  θ + b a −1 2 = ρ  0, − b a −1  . Thus, aX + b is dominated by the constant estimator δ ≡−b/(a −1). (iii) In this case, aX + b = X + b is dominated by X (see Lemma 2.2.7). ✷ Example 2.7 Continuation of Example 2.5. Combining the results of Example 2.5 and Theorem 2.6, we see that the estimator a ¯ X + b is admissible in the strip 0 < a < 1 in the (a, b) plane, that it is inadmissible to the left (a < 0) and to the right (a > 1). The left boundary a = 0 corresponds to the constant estimators δ = b which are admissible since δ = b is the only estimator with zero risk at θ = b. Finally, the right boundary a = 1 is inadmissible by (iii) of Theorem 2.6, with the possible exception of the point a = 1, b = 0. ∥ We have thus settled the admissibility of a ¯ X + b for all cases except ¯ X itself, which was the estimator of primary interest. In the next example, we shall prove that ¯ X is indeed admissible. Example 2.8 Admissibility of ¯ X. The admissibility of ¯ X for estimating the mean of a normal distribution is not only of great interest in itself but can also be regarded as the starting point of many other admissibility investigations. For this reason, we shall now give two proofs of this fact—they represent two principal methods for proving admissibility and are seen particularly clearly in this example because of its great simplicity. 5.2 ] ADMISSIBILITY AND MINIMAXITY IN EXPONENTIAL FAMILIES 325 First Proof of Admissibility (the Limiting Bayes Method). Suppose that ¯ X is not admissible, and without loss of generality, assume that σ = 1. Then, there exists δ∗such that R(θ, δ∗) ≤1 n for all θ, R(θ, δ∗) < 1 n for at least some θ. Now, R(θ, δ) is a continuous function of θ for every δ so that there exists ε > 0 and θ0 < θ1 such that R(θ, δ∗) < 1 n −ε for all θ0 < θ < θ1. Let r∗ τ be the average risk of δ∗with respect to the prior distribution τ = N(0, τ 2), and let rτ be the Bayes risk, that is, the average risk of the Bayes solution with respect to τ. Then, by (1.30) with σ = 1 and τ in place of b, 1 n −r∗ τ 1 n −rτ = 1 √ 2πτ ∞ −∞ 1 n −R(θ, δ∗) ! 
e−θ2/2τ 2 dθ 1 n − τ 2 1+nτ 2 ≥n(1 + nτ 2)ε τ √ 2π  θ1 θ0 e−θ2/2τ 2 dθ. The integrand converges monotonically to 1 as τ →∞. By the Lebesgue mono-tone convergence theorem (TSH2, Theorem 2.2.1), the integral therefore converges to θ1 −θ0, and, hence, as τ 2 →∞, 1/n −r∗ τ 1/n −rτ →∞. Thus, there exists τ0 such that r∗ τ0 < rτ0, which contradicts the fact that rτ0 is the Bayes risk for τ0. This completes the proof. A more general version of this approach, known as Blyth’s method, will be given in Theorem 7.13. Second Proof of Admissibility (the Information Inequality Method). Another use-ful tool for establishing admissibility is based on the information inequality and solutions to a differential inequality, a method due to Hodges and Lehmann (1951). It follows from the information inequality (2.5.33) and the fact that R(θ, δ) = E(δ −θ)2 = varθ(δ) + b2(θ), where b(θ) is the bias of δ, that R(θ, δ) ≥[1 + b′(θ)]2 nI(θ) + b2(θ), (2.7) where the first term on the right is the information inequality variance bound for estimators with expected value θ +b(θ). Note that, in the present case with σ 2 = 1, I(θ) = 1 from Table 2.5.1. 326 MINIMAXITY AND ADMISSIBILITY [ 5.2 Suppose, now, that δ is any estimator satisfying R(θ, δ) ≤1 n for all θ (2.8) and hence [1 + b′(θ)]2 n + b2(θ) ≤R(θ, δ) ≤1 n for all θ. (2.9) We shall then show that (2.9) implies b(θ) ≡0, (2.10) that is, that δ is unbiased. (i) Since |b(θ)| ≤1/√n, the function b is bounded. (ii) From the fact that 1 + 2b′(θ) + [b′(θ)]2 ≤1, it follows that b′(θ) ≤0, so that b is nonincreasing. (iii) We shall show, next, that there exists a sequence of values θi tending to ∞ and such that b′(θi) →0. Suppose that b′(θ) were bounded away from 0 as θ →∞, say b′(θ) ≤−ε for all θ > θ0. Then b(θ) cannot be bounded as θ →∞, which contradicts (i). (iv) Analogously, it is seen that there exists a sequence of values θi →−∞and such that b′(θi) →0 (Problem 2.3). Inequality (2.9) together with (iii) and (iv) shows that b(θ) →0 as θ →±∞, and (2.10) now follows from (ii). Since (2.10) implies that b(θ) = b′(θ) = 0 for all θ, it implies by (2.7) that R(θ, δ) ≥1 n for all θ and hence that R(θ, δ) ≡1 n. This proves that ¯ X is admissible and minimax. That it is, in fact, the only minimax estimator is an immediate consequence of Theorem 1.7.10. For another application of this second method of proof, see Problem 2.7. ∥ Admissibility (hence, minimaxity) of ¯ X holds not only for squared error loss but for large classes of loss functions L(θ, d) = ρ(d −θ). In particular, it holds if ρ(t) is nondecreasing as t moves away from 0 in either direction and satisfies the growth condition  |t|ρ(2|t|)φ(t) dt < ∞, with the only exceptions being the loss functions ρ(0) = a, ρ(t) = b for |t| ̸= 0, a < b. This result2 follows from Brown (1966, Theorem 2.1.1); it is also proved under somewhat stronger conditions in H´ ajek (1972). 2 Communicated by L. Brown. 5.2 ] ADMISSIBILITY AND MINIMAXITY IN EXPONENTIAL FAMILIES 327 Example 2.9 Truncated normal mean. In Example 2.8, suppose it is known that θ > θ0. Then, it follows from Lemma 2.1 that ¯ X is no longer admissible. However, assuming that σ 2 = 1 and using the method of the second proof of Example 2.8, it is easy to show that ¯ X continues to be minimax. If it were not, there would exist an estimator δ and an ε > 0 such that R(θ, δ) ≤1 n −ε for all θ > θ0 and hence [1 + b′(θ)]2 n + b2(θ) ≤1 n −ε for all θ > θ0. As a consequence, b(θ) would be bounded and satisfy b′(θ) ≤−εn/2 for all θ > θ0, and these two statements are contradictory. 
This example provides an instance in which the minimax estimator is not unique and the constant risk estimator ¯ X is inadmissible. A uniformly better estimator which a fortiori is also minimax is max(θ0, ¯ X), but it, too, is inadmissible [see Sacks (1963), in which a characterization of all admissible estimators is given]. Admissible minimax estimators in this case were found by Katz (1961) and Sacks (1963); see also Gupta and Rohatgi 1980. If θ is further restricted to satisfy a ≤θ ≤b, ¯ X is not only inadmissible but also no longer minimax. If ¯ X were minimax, the same would be true of its improvement, the MLE δ∗(X) =    a if ¯ X < a ¯ X if a ≤¯ X ≤b b if ¯ X > b, so that sup a≤θ≤b R(θ, δ∗) = sup a≤θ≤b R(θ, ¯ X) = 1 n. However, R(θ, δ∗) < R(θ, ¯ X) = 1/n for all a ≤θ ≤b. Furthermore, R(θ, δ∗) is a continuous function of θ and hence takes on its maximum at some point a ≤θ0 ≤b. Thus, sup a≤θ≤b R(θ, δ∗) = R(θ0, δ∗) < 1 n, which provides a contradiction. It follows from Wald’s general decision theory (see Section 5.8) that in the present situation, there exists a probability distribution  over [a, b] which satisfies (1.4) and (1.6). We shall now prove that the associated set ω of (1.5) consists of a finite number of points. Suppose the contrary were true. Then, ω contains an infinite sequence of points with a limit point. Since R(θ, δ) is constant over these points and since it is an analytic function of θ, it follows that R(θ, δ) is constant, not only in [a, b] but for all θ. Example 2.8 then shows that δ = ¯ X, which is in contradiction to the fact that ¯ X is not minimax for the present problem. To simplify matters, and without losing generality [Problem 2.9(a)], we can take a = −m and b = m, and, thus, consider θ to be restricted to the interval [−m, m]. To determine a minimax estimator, let us consider the form of a least favorable prior. Since  is concentrated on a finite number of points, it is reasonable to suspect 328 MINIMAXITY AND ADMISSIBILITY [ 5.2 Figure 2.1. Risk functions of bounded mean estimators for m = 1.056742, n = 1. that these points would be placed at a distances neither too close together nor too far apart, where “close” is relative to the standard deviation of the density of X. (If the points are either much closer together or much further apart, then the prior might be giving us information.) One might therefore conjecture that the number of points in ω increases with m, and for small m, look at the Bayes estimator for the two-point prior  that puts mass 1/2 at ±m. The Bayes estimator, against squared error loss, is [Problem 2.9(b)] δ(¯ x) = m tanh(mn¯ x) (2.11) where tanh(·) is the hyperbolic tangent function. For m ≤1.05/√n, Corollary 1.6 can be used to show that δ is minimax and provides a substantial risk decrease over ¯ x. Moreover, for m < 1/√n, δ also dominates the MLE δ∗[Problem 2.9(c)]. This is illustrated in Figure 2.1, where we have taken m to be the largest value for which (2.11) is minimax. Note that the risk of δ is equal at θ = 0 and θ = m. As m increases, so does the number of points in ω. The range of values of m, for which the associated Bayes estimators is minimax, was established by Casella and Strawderman (1981) for 2- and 3-point priors and Kempthorne (1988a, 1988b) for 4-point priors. Some interesting results concerning  and δ, for large m, are 5.2 ] ADMISSIBILITY AND MINIMAXITY IN EXPONENTIAL FAMILIES 329 given by Bickel (1981). 
An alternative estimator, the Bayes estimator against a uniform prior on [−m, m], was studied by Gatsonis et al. (1987) and shown to perform reasonably when compared to δ and to dominate δ∗for |θ| ≤m/√n. Many of these results were discovered independently by Zinzius (1981, 1982), who derived minimax estimators for θ restricted to the interval [0, c], where c is known and small. ∥ Example 2.10 Linear minimax risk. Suppose that in Example 2.9 we decide to restrict attention to linear estimators δ(a,b) = a ¯ X + b because of their simplicity. With σ 2 = 1, from the proof of Theorem 2.6 [see also Problem 4.3.12(a)], R(θ, a ¯ X + b) = a2 var ¯ X + [(a −1)θ + b]2 = a2/n + [(a −1)θ + b]2, and from Theorem 2.6, we only need consider 0 ≤a ≤1. It is straightforward to establish (Problem 2.10) that max θ∈[−m,m] R(θ, a ¯ X + b) = max{R(−m, a ¯ X + b), R(m, a ¯ X + b)} and that δ∗= a∗¯ X, with a∗= m2/( 1 n + m2) is minimax among linear estimators. Donoho et al. (1990) provide bounds on the ratio of the linear minimax risk to the minimax risk. They show that, surprisingly, this ratio is approximately 1.25 and, hence, that the linear minimax estimators may sometimes be reasonable substitutes for the full minimax estimators. ∥ Example 2.11 Linear model. Consider the general linear model of Section 3.4 and suppose we wish to estimate some linear function of the ξ’s. Without loss of generality, we can assume that the model is expressed in the canonical form (4.8) so that Y1, . . . , Yn are independent, normal, with common variance σ 2, and E(Yi) = ηi (i = 1, . . . , s); E(Ys+1) = · · · = E(Yn) = 0. The estimand can be taken to be η1. If Y2, . . . , Yn were not present, it would follow from Example 2.8 that Y1 is admissible for estimating η1. It is obvious from the Rao-Blackwell theorem (Theorem 1.7.8 ) that the presence of Ys+1, . . . , Yn cannot affect this result. The following lemma shows that, as one would expect, the same is true for Y2, . . . , Ys. ∥ Lemma 2.12 Let X and Y be independent (possibly vector-valued) with distribu-tions Fξ and Gη, respectively, where ξ and η vary independently. Then, if δ(X) is admissible for estimating ξ when Y is not present, it continues to be so in the presence of Y. Proof. Suppose, to the contrary, that there exists an estimator T (X, Y) satisfying R(ξ, η; T ) ≤R(ξ; δ) for all ξ, η, R(ξ0, η0; T ) < R(ξ0; δ) for some ξ0, η0. Consider the case in which it is known that η = η0. Then, δ(X) is admissible on the basis of X and Y (Problem 2.11). On the other hand, R(ξ, η0; T ) ≤R(ξ; δ) for all ξ, R(ξ0, η0; T ) < R(ξ0; δ) for some ξ0, 330 MINIMAXITY AND ADMISSIBILITY [ 5.2 and this is a contradiction. ✷ The examples so far have been concerned with normal means. Let us now turn to the estimation of a normal variance. Example 2.13 Normal variance. Under the assumptions of Example 4.2.5, let us consider the admissibility, using squared error loss, of linear estimators aY + b of 1/τ = 2σ 2. The Bayes solutions α + Y r + g −1, (2.12) derived there for the prior distributions H(g, 1/α), appear to prove admissibility of aY + b with 0 < a < 1 r −1, 0 < b. (2.13) In particular, this includes the estimators (1/r)Y + b for any b > 0. On the other hand, it follows from (2.7) that E(Y) = r/τ, so that (1/r)Y is an unbiased estimator of 1/τ, and hence from Lemma 2.2.7, that (1/r)Y +b is inadmissible for any b > 0. What went wrong? Conditions (i) and (ii) of Corollary 4.1.4 indicate two ways in which the unique-ness (hence, admissibility) of a Bayes estimator may be violated. 
The second of these clearly does not apply here since the gamma prior assigns positive density to all values τ > 0. This leaves the first possibility as the only visible suspect. Let us, therefore, consider the Bayes risk of the estimator (2.12). Given τ, we find [by adding and subtracting the expectation of Y/(g + r −1)], that E  Y + α g + r −1 −1 τ 2 = 1 (g + r −1)2  1 τ 2 −  α −g −1 τ 2 . The Bayes risk will therefore be finite if and only if E(1/τ 2) < ∞, where the expectation is taken with respect to the prior and, hence, if and only if g > 2. Applying this condition to (2.12), we see that admissibility has not been proved for the region (2.13), as seemed the case originally, but only for the smaller region 0 < a < 1 r + 1, 0 < b. (2.14) In fact, it is not difficult to prove inadmissibility for all a > 1/(r + 1) (Problem 2.12), whereas for a < 0 and for b < 0, it, of course, follows from Lemma 2.1. The left boundary a = 0 of the strip (2.14) is admissible as it was in Example 2.5; the bottom boundary b = 0 was seen to be inadmissible for any positive a ̸= 1/(r + 1) in Example 2.2.6. This leaves in doubt only the point a = b = 0, which is inadmissible (Problem 2.13), and the right boundary, corresponding to the estimators 1 r + 1Y + b, 0 ≤b < ∞. (2.15) Admissibility of (2.15) for b = 0 was first proved by Karlin (1958), who considered the case of general one-parameter exponential families. His proof was extended to other values of b by Ping (1964) and Gupta (1966). We shall follow Ping’s proof, 5.2 ] ADMISSIBILITY AND MINIMAXITY IN EXPONENTIAL FAMILIES 331 which uses the second method of Example 2.3, whereas Karlin (1958) and Stone (1967) employed the first method. ∥ Let X have probability density pθ(x) = β(θ)eθT (x) (θ, T real-valued) (2.16) with respect to µ and let be the natural parameter space. Then, is an interval, with endpoints, say, θ and ¯ θ (−∞≤θ ≤¯ θ ≤∞) (see Section 1.5). For estimating Eθ(T ), the estimator aT + b is inadmissible if a < 0 or a > 1 and is a constant for a = 0. To state Karlin’s sufficient condition in the remaining cases, it is convenient to write the estimator as δλ,γ (x) = 1 1 + λT + γ λ 1 + λ, (2.17) with 0 ≤λ < ∞corresponding to 0 < a ≤1. Theorem 2.14 (Karlin’s Theorem) Under the above assumptions, a sufficient condition for the admissibility of the estimator (2.17) for estimating g(θ) = Eθ(T ) with squared error loss is that the integral of e−γ λθ[β(θ)]−λ diverges at θ and ¯ θ; that is, that for some (and hence for all) θ < θ0 < ¯ θ, the two integrals  θ∗ θ0 e−γ λθ [β(θ)]λ dθ and  θ0 θ∗ e−γ λθ [β(θ)]λ dθ (2.18) tend to infinity as θ∗tends to ¯ θ and θ, respectively. Proof. It is seen from (1.5.14) and (1.5.15) that g(θ) = Eθ(T ) = −β′(θ) β(θ) (2.19) and g′(θ) = varθ(T ) = I(θ), (2.20) where I(θ) is the Fisher information defined in (2.5.10). For any estimator δ(X), we have Eθ[δ(X) −g(θ)]2 = varθ[δ(X)] + b2(θ) ≥[g′(θ) + b′(θ)]2 I(θ) + b2(θ) [information inequality] (2.21) = [I(θ) + b′(θ)]2 I(θ) + b2(θ) where b(θ) = Eθ[δ(X)] −g(θ) is the bias of δ(x). If δ = δλ,γ of (2.17), then its bias is bλ,γ (θ) = λ 1+λ[γ −g(θ)] with b′(θ) = −λ 1+λg′(θ) and Eθ[δλ,γ (X) −g(θ)]2 = Eθ T + γ λ 1 + λ −g(θ) 2 = I(θ) (1 + λ)2 + λ2[g(θ) −γ ]2 (1 + λ)2 (2.22) 332 MINIMAXITY AND ADMISSIBILITY [ 5.2 = [I(θ) + b′ λ,γ (θ)]2 I(θ) + b2 λ,γ (θ). Thus, for estimators (2.17), the information inequality risk bound is an equality. Now, suppose that δ0 satisfies Eθ[δλ,γ (X) −g(θ)]2 ≥Eθ[δ0(X) −g(θ)]2 for all θ. 
Denote the bias of $\delta_0$ by $b_0(\theta)$, apply inequality (2.21) to the right side of (2.23), and apply Equation (2.22) to the left side of (2.23) to get

$$b^2_{\lambda,\gamma}(\theta) + \frac{[I(\theta) + b'_{\lambda,\gamma}(\theta)]^2}{I(\theta)} \ge b^2_0(\theta) + \frac{[I(\theta) + b'_0(\theta)]^2}{I(\theta)}. \tag{2.24}$$

If $h(\theta) = b_0(\theta) - b_{\lambda,\gamma}(\theta)$, (2.24) reduces to

$$h^2(\theta) - \frac{2\lambda}{1 + \lambda}h(\theta)[g(\theta) - \gamma] + \frac{2}{1 + \lambda}h'(\theta) + \frac{[h'(\theta)]^2}{I(\theta)} \le 0, \tag{2.25}$$

which implies

$$h^2(\theta) - \frac{2\lambda}{1 + \lambda}h(\theta)[g(\theta) - \gamma] + \frac{2}{1 + \lambda}h'(\theta) \le 0. \tag{2.26}$$

Finally, let $\kappa(\theta) = h(\theta)\beta^{\lambda}(\theta)e^{\gamma\lambda\theta}$. Differentiation of $\kappa(\theta)$ and use of (2.19) reduces (2.26) to (Problem 2.7)

$$\kappa^2(\theta)\beta^{-\lambda}(\theta)e^{-\gamma\lambda\theta} + \frac{2}{1 + \lambda}\kappa'(\theta) \le 0. \tag{2.27}$$

We shall now show that (2.27) with $\lambda \ge 0$ implies that $\kappa(\theta) \ge 0$ for all $\theta$. Suppose, to the contrary, that $\kappa(\theta_0) < 0$ for some $\theta_0$. Then, $\kappa(\theta) < 0$ for all $\theta \ge \theta_0$ since $\kappa'(\theta) < 0$, and for $\theta > \theta_0$, we can write (2.27) as

$$\frac{d}{d\theta}\left[\frac{1}{\kappa(\theta)}\right] \ge \frac{1 + \lambda}{2}\beta^{-\lambda}(\theta)e^{-\gamma\lambda\theta}.$$

Integrating both sides from $\theta_0$ to $\theta^*$ leads to

$$\frac{1}{\kappa(\theta^*)} - \frac{1}{\kappa(\theta_0)} \ge \frac{1 + \lambda}{2}\int_{\theta_0}^{\theta^*}\beta^{-\lambda}(\theta)e^{-\gamma\lambda\theta}\,d\theta.$$

As $\theta^* \to \bar{\theta}$, the right side tends to infinity, and this provides a contradiction since the left side is $< -1/\kappa(\theta_0)$. Similarly, $\kappa(\theta) \le 0$ for all $\theta$. It follows that $\kappa(\theta)$ and, hence, $h(\theta)$ is zero for all $\theta$. This shows that for all $\theta$, equality holds in (2.25), (2.24), and, thus, (2.23). This proves the admissibility of (2.17). ✷

Under some additional restrictions, it is shown by Diaconis and Ylvisaker (1979) that when the sufficient condition of Theorem 2.14 holds, $aX + b$ is Bayes with respect to a proper prior distribution (a member of the conjugate family) and has finite risk. This, of course, implies that it is admissible.

Karlin (1958) conjectured that the sufficient condition of Theorem 2.14 is also necessary for the admissibility of (2.17). Despite further work on this problem (Morton and Raghavachari 1966; Stone 1967; Joshi 1969a), this conjecture has not yet been settled. See Brown (1986a) for further discussion.

Let us now see whether Theorem 2.14 settles the admissibility of $Y/(r + 1)$, which was left open in Example 2.13.

Example 2.15 Continuation of Example 2.13. The density of Example 4.2.5 is of the form (2.16) with

$$\theta = -r\tau, \quad \beta(\theta) = \left(-\frac{\theta}{r}\right)^{r}, \quad T(X) = \frac{Y}{r}, \quad \underline{\theta} = -\infty, \quad \bar{\theta} = 0.$$

Here, the parameterization is chosen so that $E_\theta[T(X)] = 1/\tau$ coincides with the estimand of Example 2.13. An estimator

$$\frac{1}{1 + \lambda}\cdot\frac{Y}{r} + \frac{\gamma\lambda}{1 + \lambda} \tag{2.28}$$

is therefore admissible, provided the integrals

$$\int_{-\infty}^{-c} e^{-\gamma\lambda\theta}\left(-\frac{\theta}{r}\right)^{-r\lambda} d\theta = C\int_{c}^{\infty} e^{\gamma\lambda\theta}\,\theta^{-r\lambda}\,d\theta \quad\text{and}\quad \int_{0}^{c} e^{\gamma\lambda\theta}\,\theta^{-r\lambda}\,d\theta$$

are both infinite. The conditions for the first integral to be infinite are that either $\gamma = 0$ and $r\lambda \le 1$, or $\gamma\lambda > 0$. For the second integral, the factor $e^{\gamma\lambda\theta}$ plays no role, and the condition is simply $r\lambda \ge 1$. Combining these conditions, we see that the estimator (2.28) is admissible if either (a) $\gamma = 0$ and $\lambda = \frac{1}{r}$, or (b) $\lambda \ge \frac{1}{r}$ and $\gamma > 0$ (since $r > 0$). If we put

$$a = \frac{1}{(1 + \lambda)r} \quad\text{and}\quad b = \frac{\gamma\lambda}{1 + \lambda},$$

it follows that $aY + b$ is admissible if either (a′) $b = 0$ and $a = \frac{1}{1 + r}$, or (b′) $b > 0$ and $0 < a \le \frac{1}{1 + r}$.

The first of these results settles the one case that was left in doubt in Example 2.13; the second confirms the admissibility of the interior of the strip (2.14), which had already been established in that example. The admissibility of $Y/(r + 1)$ for estimating $1/\tau = 2\sigma^2$ means that

$$\frac{1}{2r + 2}Y = \frac{1}{n + 2}Y \tag{2.29}$$

is admissible for estimating $\sigma^2$. The estimator (2.29) is the MRE estimator for $\sigma^2$ found in Section 3.3 (Example 3.3.7). A small risk computation illustrating the divisor $n + 2$ is sketched below. ∥
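The following sketch (our own computation, not from the text) identifies $Y$ with $\sum X_i^2$ for a sample from $N(0, \sigma^2)$, as in the reading of (2.29) taken up in Example 2.16 below, and evaluates the exact risk of estimators $c\sum X_i^2$ of $\sigma^2$.

```python
import numpy as np

# With S = sum(X_i^2) ~ sigma^2 * chi^2_n, we have E(S) = n sigma^2 and
# Var(S) = 2n sigma^4, so the risk of cS under squared error is
#   E(cS - sigma^2)^2 = [c^2 (n^2 + 2n) - 2cn + 1] * sigma^4,
# minimized at c = 1/(n + 2), in line with (2.29).
def risk(c, n, sigma2=1.0):
    return (c**2 * (n**2 + 2 * n) - 2 * c * n + 1) * sigma2**2

n = 10  # arbitrary sample size
for label, c in [("1/n (unbiased)", 1 / n),
                 ("1/(n+1)", 1 / (n + 1)),
                 ("1/(n+2) (MRE, admissible)", 1 / (n + 2))]:
    print(f"{label:28s} risk = {risk(c, n):.5f}")
# risk(1/(n+2)) = 2/(n+2), strictly smaller than risk(1/n) = 2/n.
```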
Example 2.16 Normal variance, unknown mean. Admissibility of the estimator $\sum X_i^2/(n + 2)$ when the $X$'s are from $N(0, \sigma^2)$ naturally raises the corresponding question for

$$\frac{\sum(X_i - \bar{X})^2}{n + 1}, \tag{2.30}$$

the MRE estimator of $\sigma^2$ when the $X$'s are from $N(\xi, \sigma^2)$ with $\xi$ unknown (Example 3.3.11). The surprising answer, due to Stein (1964), is that (2.30) is not admissible (see Examples 3.3.13 and 5.2.15). An estimator with uniformly smaller risk is

$$\delta_s = \min\left[\frac{\sum(X_i - \bar{X})^2}{n + 1},\ \frac{\sum X_i^2}{n + 2}\right]. \tag{2.31}$$

The estimator (2.30) is MRE under the location-scale group, that is, among estimators that satisfy

$$\delta(ax_1 + b, \ldots, ax_n + b) = a^2\delta(x_1, \ldots, x_n). \tag{2.32}$$

To search for a better estimator of $\sigma^2$ than (2.30), consider the larger class of estimators that are only scale invariant. These are the estimators of $\sigma^2$ that satisfy (2.32) with $b = 0$ and are of the form

$$\delta(\bar{x}, s) = \varphi(\bar{x}/s)s^2. \tag{2.33}$$

The estimator $\delta_s$ is of this form. As a motivation for $\delta_s$, suppose that it is thought a priori likely, but by no means certain, that $\xi = 0$. One might then wish to test the hypothesis $H: \xi = 0$ by the usual $t$-test. If

$$\frac{|\sqrt{n}\,\bar{X}|}{\sqrt{\sum(X_i - \bar{X})^2/(n - 1)}} < c, \tag{2.34}$$

one would accept $H$ and correspondingly estimate $\sigma^2$ by $\sum X_i^2/(n + 2)$; in the contrary case, $H$ would be rejected and $\sigma^2$ estimated by (2.30). For the value $c = \sqrt{(n - 1)/(n + 1)}$, it is easily checked that (2.34) is equivalent to

$$\frac{1}{n + 2}\sum X_i^2 < \frac{1}{n + 1}\sum(X_i - \bar{X})^2, \tag{2.35}$$

and the resulting estimator then reduces to (2.31).

While (2.30) is inadmissible, it is clear that no substantial improvement is possible, since $\sum(X_i - \bar{X})^2/\sigma^2$ has the same distribution as $\sum(X_i - \xi)^2/\sigma^2$ with $n$ replaced by $n - 1$, so that ignorance of $\xi$ can be compensated for by one additional observation. Rukhin (1987) shows that the maximum relative risk improvement is approximately 4%. A small simulation illustrating the domination of $\delta_s$ over (2.30) is sketched below. ∥
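The following sketch is our own Monte Carlo comparison (not from the text); the sample size, mean values, and replication count are arbitrary choices.

```python
import numpy as np

# Monte Carlo comparison of the MRE estimator (2.30) with Stein's
# estimator (2.31), for sigma^2 = 1 and several values of the unknown
# mean xi.  The improvement is largest near xi = 0 and vanishes as
# |xi| grows, consistent with Rukhin's ~4% bound on the maximum gain.
rng = np.random.default_rng(0)
n, reps = 10, 200_000

for xi in [0.0, 0.2, 0.5, 1.0]:
    x = rng.normal(xi, 1.0, size=(reps, n))
    s2 = ((x - x.mean(axis=1, keepdims=True)) ** 2).sum(axis=1)  # sum (X_i - Xbar)^2
    t2 = (x ** 2).sum(axis=1)                                    # sum X_i^2
    mre = s2 / (n + 1)                       # estimator (2.30)
    stein = np.minimum(mre, t2 / (n + 2))    # estimator (2.31)
    print(xi, ((mre - 1) ** 2).mean(), ((stein - 1) ** 2).mean())
```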
Let us now return to Theorem 2.14 and apply it to the binomial case as another illustration.

Example 2.17 Binomial. Let $X$ have the binomial distribution $b(p, n)$, which we shall write as

$$P(X = x) = \binom{n}{x}(1 - p)^n e^{(x/n)\,n\log[p/(1-p)]}. \tag{2.36}$$

Putting $\theta = n\log[p/(1 - p)]$, we have $\beta(\theta) = (1 - p)^n = [1 + e^{\theta/n}]^{-n}$ and

$$g(\theta) = E_\theta\left(\frac{X}{n}\right) = p = \frac{e^{\theta/n}}{1 + e^{\theta/n}}.$$

Furthermore, as $p$ ranges from 0 to 1, $\theta$ ranges from $\underline{\theta} = -\infty$ to $\bar{\theta} = +\infty$. The integral in question is then

$$\int e^{-\gamma\lambda\theta}(1 + e^{\theta/n})^{\lambda n}\,d\theta, \tag{2.37}$$

and the estimator $X/[n(1 + \lambda)] + \gamma\lambda/(1 + \lambda)$ is admissible, provided this integral diverges at both $-\infty$ and $+\infty$. If $\lambda < 0$, the integrand is $\le e^{-\gamma\lambda\theta}$ and the integral cannot diverge at both limits, whereas for $\lambda = 0$, the integral does diverge at both limits. Suppose, therefore, that $\lambda > 0$. Near $+\infty$, the dominating term (which is also a lower bound) is $\int e^{-\gamma\lambda\theta + \lambda\theta}\,d\theta$, which diverges provided $\gamma \le 1$. At the other end, we have

$$\int_{-\infty}^{-c} e^{-\gamma\lambda\theta}(1 + e^{\theta/n})^{\lambda n}\,d\theta = \int_{c}^{\infty} e^{\gamma\lambda\theta}\left(1 + \frac{1}{e^{\theta/n}}\right)^{\lambda n} d\theta.$$

The factor in parentheses does not affect the convergence or divergence of this integral, which therefore diverges if and only if $\gamma\lambda \ge 0$. The integral will therefore diverge at both limits, provided

$$\lambda > 0 \text{ and } 0 \le \gamma \le 1, \quad\text{or}\quad \lambda = 0. \tag{2.38}$$

With $a = 1/(1 + \lambda)$ and $b = \gamma\lambda/(1 + \lambda)$, this condition is seen to be equivalent (Problem 2.7) to

$$0 < a \le 1, \quad 0 \le b, \quad a + b \le 1. \tag{2.39}$$

The estimator is, of course, also admissible when $a = 0$ and $0 \le b \le 1$, and it is easy to see that it is inadmissible for the remaining values of $a$ and $b$ (Problem 2.8). The region of admissibility is, therefore, the closed triangle $\{(a, b) : a \ge 0, b \ge 0, a + b \le 1\}$. ∥

Theorem 2.14 provides a simple condition for the admissibility of $T$ as an estimator of $E_\theta(T)$.

Corollary 2.18 If the natural parameter space of (2.16) is the whole real line, so that $\underline{\theta} = -\infty$, $\bar{\theta} = \infty$, then $T$ is admissible for estimating $E_\theta(T)$ with squared error loss.

Proof. With $\lambda = 0$ and $\gamma = 1$, the two integrals (2.18) clearly tend toward infinity as $\theta^* \to \pm\infty$. ✷

The condition of this corollary is satisfied by the normal (variance known), binomial, and Poisson distributions, but not in the gamma or negative binomial case (Problem 2.25).

The starting point of this section was the question of admissibility of some minimax estimators. In the opposite direction, it is sometimes possible to use the admissibility of an estimator to prove that it is minimax.

Lemma 2.19 If an estimator has constant risk and is admissible, it is minimax.

Proof. If it were not, another estimator would have smaller maximum risk and, hence, uniformly smaller risk. ✷

This lemma, together with Corollary 2.18, yields the following minimax result.

Corollary 2.20 Under the assumptions of Corollary 2.18, $T$ is the unique minimax estimator of $g(\theta) = E_\theta(T)$ for the loss function $[d - g(\theta)]^2/\operatorname{var}_\theta(T)$.

Proof. For this loss function, $T$ is a constant risk estimator which is admissible by Corollary 2.18 and unique by Theorem 1.7.10. ✷

A companion to Lemma 2.19 allows us to deduce admissibility from unique minimaxity.

Lemma 2.21 If an estimator is unique minimax, it is admissible.

Proof. If it were not admissible, another estimator would dominate it in risk and, hence, would be minimax. ✷

Example 2.22 Binomial admissible minimax estimator. If $X$ has the binomial distribution $b(p, n)$, then, by Corollary 2.20, $X/n$ is the unique minimax estimator of $p$ for the loss function $(d - p)^2/pq$ (which was seen in Example 1.7). By Lemma 2.21, $X/n$ is admissible for this loss function. A numerical check of its constant risk is sketched below. ∥
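A quick check of Example 2.22 (our own computation, not from the text): since $E(X/n - p)^2 = pq/n$, the weighted risk is exactly $1/n$ for every $p$.

```python
import numpy as np
from scipy.stats import binom

# Under the loss (d - p)^2 / (p q), the risk of X/n is constant in p.
n = 12  # arbitrary
for p in [0.05, 0.3, 0.5, 0.9]:
    x = np.arange(n + 1)
    pmf = binom.pmf(x, n, p)
    risk = np.sum(pmf * (x / n - p) ** 2) / (p * (1 - p))
    print(p, risk)   # prints 1/n = 0.0833... for each p
```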
The estimation of a normal variance with unknown mean provided a surprising example of a reasonable estimator which is inadmissible. We shall conclude this section with an example of a totally unreasonable estimator that is admissible.

Example 2.23 Two binomials. Let $X$ and $Y$ be independent binomial random variables with distributions $b(p, m)$ and $b(\pi, n)$, respectively. It was shown by Makani (1977) that a necessary and sufficient condition for

$$a\frac{X}{m} + b\frac{Y}{n} + c \tag{2.40}$$

to be admissible for estimating $p$ with squared error loss is that either

$$0 \le a < 1, \quad 0 \le c \le 1, \quad 0 \le a + c \le 1, \quad 0 \le b + c \le 1, \quad 0 \le a + b + c \le 1, \tag{2.41}$$

or

$$a = 1 \text{ and } b = c = 0. \tag{2.42}$$

We shall now prove the sufficiency part, which is the result of interest; for necessity, see Problem 2.21. Suppose there exists another estimator $\delta(X, Y)$ with risk uniformly at least as small as that of (2.40), so that

$$E\left[a\frac{X}{m} + b\frac{Y}{n} + c - p\right]^2 \ge E[\delta(X, Y) - p]^2 \quad\text{for all } p.$$

Then

$$\sum_{x=0}^{m}\sum_{k=0}^{n}\left[a\frac{x}{m} + b\frac{k}{n} + c - p\right]^2 P(X = x, Y = k) \ge \sum_{x=0}^{m}\sum_{k=0}^{n}[\delta(x, k) - p]^2 P(X = x, Y = k). \tag{2.43}$$

Letting $\pi \to 0$, this leads to

$$\sum_{x=0}^{m}\left[a\frac{x}{m} + c - p\right]^2 P(X = x) \ge \sum_{x=0}^{m}[\delta(x, 0) - p]^2 P(X = x) \quad\text{for all } p.$$

However, $a(X/m) + c$ is admissible by Example 2.17; hence, $\delta(x, 0) = a(x/m) + c$ for all $x = 0, 1, \ldots, m$. The terms in (2.43) with $k = 0$ therefore cancel. The remaining terms contain a common factor $\pi$, which can also be canceled, and one can now proceed as before. Continuing in this way by induction over $k$, one finds at the $(k + 1)$st stage that

$$\sum_{x=0}^{m}\left[a\frac{x}{m} + b\frac{k}{n} + c - p\right]^2 P(X = x) \ge \sum_{x=0}^{m}[\delta(x, k) - p]^2 P(X = x) \quad\text{for all } p.$$

However, $aX/m + bk/n + c$ is admissible by Example 2.17 since $a + b\frac{k}{n} + c \le 1$ and, hence, $\delta(x, k) = a\frac{x}{m} + b\frac{k}{n} + c$ for all $x$. This shows that (2.43) implies

$$\delta(x, y) = a\frac{x}{m} + b\frac{y}{n} + c \quad\text{for all } x \text{ and } y$$

and, hence, that (2.40) is admissible.

Putting $a = 0$ in (2.40), we see that estimators of the form $b(Y/n) + c$ ($0 \le c \le 1$, $0 \le b + c \le 1$) are admissible for estimating $p$, despite the fact that only the distribution of $X$ depends on $p$ and that $X$ and $Y$ are independent. This paradoxical result suggests that admissibility is an extremely weak property. While it is somewhat embarrassing for an estimator to be inadmissible, the fact that it is admissible in no way guarantees that it is a good or even halfway reasonable estimator. ∥

The result of Example 2.23 is not isolated. An exactly analogous result holds in the Poisson case (Problem 2.22), and a very similar one, due to Brown, holds for normal distributions (see Example 7.2); that an exactly analogous example is not possible in the normal case follows from Cohen (1965a).

3 Admissibility and Minimaxity in Group Families

The two preceding sections dealt with minimax estimators and their admissibility in exponential families. Let us now consider the corresponding problems for group families. As was seen in Section 3.2, in these families there typically exists an MRE estimator $\delta_0$ for any invariant loss function, and it is a constant risk estimator. If $\delta_0$ is also a Bayes estimator, it is minimax by Corollary 1.5 and admissible if it is unique Bayes.

Recall Theorem 4.4.1, where it was shown that a Bayes estimator under an invariant prior is (almost) equivariant. It follows that under the assumptions of that theorem, there exists an almost equivariant estimator which is admissible. Furthermore, it turns out that under very weak additional assumptions, given any almost equivariant estimator $\delta$, there exists an equivariant estimator $\delta'$ which differs from $\delta$ only on a fixed null set $N$. The existence of such a $\delta'$ is obvious in the simplest case, that of a finite group. We shall not prove it here for more general groups (a precise statement and proof can be found in TSH2, Section 6.5, Theorem 4). Since $\delta$ and $\delta'$ then have the same risk function, this establishes the existence of an equivariant estimator that is admissible.

Theorem 4.4.1 does not require $\bar{G}$ to be transitive over $\Omega$. If we add the assumption of transitivity, we get a stronger result.

Theorem 3.1 Under the conditions of Theorem 4.4.1, if $\bar{G}$ is transitive over $\Omega$, then the MRE estimator is admissible and minimax.

The crucial assumption in this approach is the existence of an invariant prior distribution. The following example illustrates the rather trivial case in which the group is finite.

Example 3.2 Finite group. Let $X_1, \ldots, X_n$ be iid according to the normal distribution $N(\xi, 1)$. Then, the problem of estimating $\xi$ with squared error loss remains invariant under the two-element group $G$, which consists of the identity transformation $e$ and the transformation

$$g(x_1, \ldots, x_n) = (-x_1, \ldots, -x_n); \quad \bar{g}\xi = -\xi; \quad g^*d = -d.$$

In the present case, any distribution $\Lambda$ for $\xi$ which is symmetric with respect to the origin is invariant. Under the conditions of Theorem 4.4.1, it follows that for any such $\Lambda$, there is a version of the Bayes solution which is equivariant, that is, which satisfies

$$\delta(-x_1, \ldots, -x_n) = -\delta(x_1, \ldots, x_n).$$

The group $\bar{G}$ in this case is, of course, not transitive over $\Omega$. ∥

As an example in which $G$ is not finite, we shall consider the following version of the location problem on the circle.
Example 3.3 Circular location family. Let $U_1, \ldots, U_n$ be iid on $(0, 2\pi)$ according to a distribution $F$ with density $f$. We shall interpret these variables as $n$ points chosen at random on the unit circle according to $F$. Suppose that each point is translated on the circle by an amount $\theta$ ($0 \le \theta < 2\pi$); that is, the new positions are those obtained by rotating the circle by an amount $\theta$. When a value $U_i + \theta$ exceeds $2\pi$, it is, of course, replaced by $U_i + \theta - 2\pi$. The resulting values are the observations $X_1, \ldots, X_n$. It is then easily seen (Problem 3.2) that the density of $X_i$ is

$$\begin{cases} f(x_i - \theta + 2\pi) & \text{when } 0 < x_i < \theta, \\ f(x_i - \theta) & \text{when } \theta < x_i < 2\pi. \end{cases} \tag{3.1}$$

This can also be written as

$$f(x_i - \theta)I(\theta < x_i) + f(x_i - \theta + 2\pi)I(x_i < \theta), \tag{3.2}$$

where $I(a < b)$ is 1 when $a < b$, and 0 otherwise.

If we straighten the circle to a straight-line segment of length $2\pi$, we can also represent this family of distributions in the following form. Select $n$ points at random on $(0, 2\pi)$ according to $F$. Cut the line segment at an arbitrary point $\theta$ ($0 < \theta < 2\pi$). Place the upper segment so that its endpoints are $(0, 2\pi - \theta)$ and the lower segment so that its endpoints are $(2\pi - \theta, 2\pi)$, and denote the coordinates of the $n$ points in their new positions by $X_1, \ldots, X_n$. Then, the density of $X_i$ is given by (3.1).

As an illustration of how such a family of distributions might arise, suppose that in a study of gestation in rats, $n$ rats are impregnated by artificial insemination at a given time, say at midnight on day zero. The observations are the $n$ times $Y_1, \ldots, Y_n$ to birth, recorded as the number of days plus a fractional day. It is assumed that the $Y$'s are iid according to $G(y - \eta)$, where $G$ is known and $\eta$ is an unknown location parameter. A scientist who is interested in the time of day at which births occur abstracts from the data the fractional parts $X_i' = Y_i - [Y_i]$. The variables $X_i = 2\pi X_i'$ have a distribution of the form (3.1), where $\theta$ is $2\pi$ times the fractional part of $\eta$.

Let us now return to (3.2) and consider the problem of estimating $\theta$. The model as originally formulated remains invariant under rotations of the circle. To represent these transformations formally, consider for any real number $a$ the unique number $a^*$, $0 \le a^* < 2\pi$, for which $a = 2\kappa\pi + a^*$ ($\kappa$ an integer). Then, the group $G$ of rotations can be represented by

$$x_i' = (x_i + c)^*, \quad \theta' = (\theta + c)^*, \quad d' = (d + c)^*.$$

A loss function $L(\theta, d)$ remains invariant under $G$ if and only if it is of the form $L(\theta, d) = \rho[(d - \theta)^*]$ (Problem 3.3). Typically, one would want it to depend only on

$$(d - \theta)^{**} = \min\{(d - \theta)^*,\ (2\pi - (d - \theta))^*\},$$

which is the difference between $d$ and $\theta$ along the smaller of the two arcs connecting them. Thus, the loss might be $[(d - \theta)^{**}]^2$ or $|(d - \theta)^{**}|$. It is important to notice that neither of these is convex (Problem 3.4).

The group $G$ is transitive over $\Omega$, and an invariant distribution for $\theta$ is the uniform distribution over $(0, 2\pi)$. By applying an obvious extension of the construction (3.20) or (3.21) below, one obtains an admissible equivariant (and, hence, constant risk) Bayes estimator, which a fortiori is also minimax. If the loss function is not convex in $d$, only the extension of (3.20) is available, and the equivariant Bayes procedure may be randomized. A small simulation of the wrapped model (3.1) is sketched below. ∥
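The following sketch is our own illustration (not from the text); the triangular base density $f$ and the value of $\theta$ are arbitrary choices.

```python
import numpy as np

# Simulate the circular location model and check that rotated samples
# follow the wrapped density (3.2):
#   f(x - theta) I(theta < x) + f(x - theta + 2*pi) I(x < theta).
rng = np.random.default_rng(1)
two_pi = 2 * np.pi

def f(u):  # assumed base density on (0, 2*pi): triangular, peaked at 0
    return np.where((u > 0) & (u < two_pi), 2 * (two_pi - u) / two_pi**2, 0.0)

theta = 2.0
u = rng.triangular(0.0, 0.0, two_pi, size=100_000)  # draws from f
x = (u + theta) % two_pi                            # rotated observations

hist, edges = np.histogram(x, bins=100, range=(0, two_pi), density=True)
centers = 0.5 * (edges[:-1] + edges[1:])
for g in np.linspace(0.3, two_pi - 0.3, 5):
    wrapped = f(g - theta) * (g > theta) + f(g - theta + two_pi) * (g < theta)
    print(g, wrapped, hist[np.argmin(abs(centers - g))])  # empirical ~ (3.2)
```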
Let us next turn to the question of the admissibility and minimaxity of MRE estimators which are Bayes solutions with respect to improper priors. We begin with the location parameter case.

Example 3.4 Location family on the line. Suppose that $X = (X_1, \ldots, X_n)$ has density

$$f(\mathbf{x} - \theta) = f(x_1 - \theta, \ldots, x_n - \theta), \tag{3.3}$$

and let $G$ and $\bar{G}$ be the groups of translations $x_i' = x_i + a$ and $\theta' = \theta + a$. The parameter space is the real line, and from Example 4.4.3, the invariant measure on $\Omega$ is the measure $\nu$ which to any interval $I$ assigns its length, that is, Lebesgue measure. Since the measure $\nu$ is improper, we proceed as in Section 4.3 and look for a generalized Bayes estimator. The posterior density of $\theta$ given $x$ is given by

$$\frac{f(\mathbf{x} - \theta)}{\int f(\mathbf{x} - \theta)\,d\theta}. \tag{3.4}$$

This quantity is non-negative, and its integral with respect to $\theta$ is equal to 1. It therefore defines a proper distribution for $\theta$, and by Section 4.3, the generalized Bayes estimator of $\theta$, with loss function $L$, is obtained by minimizing the posterior expected loss

$$\int L[\theta, \delta(\mathbf{x})]f(\mathbf{x} - \theta)\,d\theta \Big/ \int f(\mathbf{x} - \theta)\,d\theta. \tag{3.5}$$

For the case that $L$ is squared error, the minimizing value of $\delta(\mathbf{x})$ is the expectation of $\theta$ under (3.4), which was seen to be the Pitman estimator (3.1.28) in Example 4.4.7. The agreement of the estimator minimizing (3.5) with that obtained in Section 3.1 of course holds also for all other invariant loss functions.

Up to this point, the development here is completely analogous to that of Example 3.3. However, since $\nu$ is not a probability distribution, Theorem 4.4.1 is not applicable, and we cannot conclude that the Pitman estimator is admissible or even minimax. A numerical sketch of the Pitman estimator is given below. ∥
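The following sketch evaluates the Pitman estimator by quadrature (our own illustration, not from the text; the Laplace base density and the data values are arbitrary choices).

```python
import numpy as np

# Pitman (generalized Bayes) estimator under squared error:
#   delta(x) = Int u f(x - u) du / Int f(x - u) du,
# approximated on a fine grid of candidate locations u.
def pitman(x, logf, grid):
    ll = np.array([logf(x - u).sum() for u in grid])  # log joint density
    w = np.exp(ll - ll.max())                         # posterior (3.4), unnormalized
    return np.sum(grid * w) / np.sum(w)

logf = lambda z: -np.abs(z) - np.log(2.0)             # Laplace density
x = np.array([0.3, -0.9, 1.7, 0.4, 0.1])
grid = np.linspace(x.min() - 10, x.max() + 10, 20001)
print(pitman(x, logf, grid))   # Pitman estimate of theta
```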
The minimax character of the Pitman estimator was established in the normal case in Example 1.14 by the use of a least favorable sequence of prior distributions. We shall now consider the minimax and admissibility properties of MRE estimators more generally in group families, beginning with the case of a general location family.

Theorem 3.5 Suppose $X = (X_1, \ldots, X_n)$ is distributed according to the density (3.3) and that the Pitman estimator $\delta^*$ given by (1.28) has finite variance. Then, $\delta^*$ is minimax for squared error loss.

Proof. As in Example 1.14, we shall utilize Theorem 1.12, and for this purpose, we require a least favorable sequence of prior distributions. In view of the discussion at the beginning of Example 3.4, one would expect a sequence of priors that approximates Lebesgue measure to be suitable. The sequence of normal distributions with variance tending toward infinity used in Example 1.14 was of this kind. Here, it will be more convenient to use instead a sequence of uniform densities

$$\pi_T(u) = \begin{cases} 1/2T & \text{if } |u| < T, \\ 0 & \text{otherwise,} \end{cases} \tag{3.6}$$

with $T$ tending to infinity. If $\delta_T$ is the Bayes estimator with respect to (3.6) and $r_T$ its Bayes risk, the minimax character of $\delta^*$ will follow if it can be shown that $r_T$ tends to the constant risk $r^* = E_0\,\delta^{*2}(X)$ of $\delta^*$ as $T \to \infty$. Since $r_T \le r^*$ for all $T$, it is enough to show

$$\liminf r_T \ge r^*. \tag{3.7}$$

We begin by establishing the lower bound for $r_T$:

$$r_T \ge (1 - \varepsilon)\inf_{\substack{a \le -\varepsilon T \\ b \ge \varepsilon T}} E_0\,\delta_{a,b}^2(X), \tag{3.8}$$

where $\varepsilon$ is any number between 0 and 1, and $\delta_{a,b}$ is the Bayes estimator with respect to the uniform prior on $(a, b)$, so that, in particular, $\delta_T = \delta_{-T,T}$. Then, for any $c$ (Problem 3.7),

$$\delta_{a,b}(x + c) = \delta_{a-c,b-c}(x) + c \tag{3.9}$$

and hence

$$E_\theta[\delta_{-T,T}(X) - \theta]^2 = E_0[\delta_{-T-\theta,T-\theta}(X)]^2.$$

It follows that for any $0 < \varepsilon < 1$,

$$r_T = \frac{1}{2T}\int_{-T}^{T} E_0[\delta_{-T-\theta,T-\theta}(X)]^2\,d\theta \ge (1 - \varepsilon)\inf_{|\theta| \le (1-\varepsilon)T} E_0[\delta_{-T-\theta,T-\theta}(X)]^2.$$

Since $-T - \theta \le -\varepsilon T$ and $T - \theta \ge \varepsilon T$ when $|\theta| \le (1 - \varepsilon)T$, this implies (3.8). Next, we show that

$$\liminf_{T \to \infty} r_T \ge E_0\Big[\liminf_{\substack{a \to -\infty \\ b \to \infty}} \delta_{a,b}^2(X)\Big], \tag{3.10}$$

where the $\liminf$ on the right side is defined as the smallest limit point of all sequences $\delta_{a_n,b_n}^2(X)$ with $a_n \to -\infty$ and $b_n \to \infty$. To see this, note that for any function $h$ of two real arguments, one has (Problem 3.8)

$$\liminf_{T \to \infty}\Big[\inf_{\substack{a \le -T \\ b \ge T}} h(a, b)\Big] = \liminf_{\substack{a \to -\infty \\ b \to \infty}} h(a, b). \tag{3.11}$$

Taking the $\liminf$ of both sides of (3.8), and using (3.11) and Fatou's Lemma (Lemma 1.2.6), proves (3.10). We shall, finally, show that as $a \to -\infty$ and $b \to \infty$,

$$\delta_{a,b}(X) \to \delta^*(X) \quad\text{with probability 1.} \tag{3.12}$$

From this, it follows that the right side of (3.10) is $r^*$, which will complete the proof. The limit (3.12) is seen from the fact that

$$\delta_{a,b}(x) = \int_a^b u f(x - u)\,du \Big/ \int_a^b f(x - u)\,du$$

and that, by Problems 3.1.20 and 3.1.21, the set of points $x$ for which

$$0 < \int_{-\infty}^{\infty} f(x - u)\,du < \infty \quad\text{and}\quad \int_{-\infty}^{\infty} |u|f(x - u)\,du < \infty$$

has probability 1. ✷

Theorem 3.5 is due to Girshick and Savage (1951), who proved it somewhat more generally without assuming a probability density and under the sole assumption that there exists an estimator (not necessarily equivariant) with finite risk. The streamlined proof given here is due to Peter Bickel.

Of course, one would like to know whether the constant risk minimax estimator $\delta^*$ is admissible. This question was essentially settled by Stein (1959). We state without proof the following special case of his result.

Theorem 3.6 If $X_1, \ldots, X_n$ are independently distributed with common probability density $f(x - \theta)$, and if there exists an equivariant estimator $\delta_0$ of $\theta$ for which $E_0|\delta_0(X)|^3 < \infty$, then the Pitman estimator $\delta^*$ is admissible under squared error loss.

It was shown by Perng (1970) that this admissibility result need not hold when the third-moment condition is dropped.

In Example 3.4, we have, so far, restricted attention to squared error loss. Admissibility of the MRE estimator has been proved for large classes of loss functions by Farrell (1964), Brown (1966), and Brown and Fox (1974b). A key assumption is the uniqueness of the MRE estimator. An early counterexample when that assumption does not hold was given by Blackwell (1951). A general inadmissibility result in the case of nonuniqueness is due to Farrell (1964).

Examples 3.3 and 3.4 involved a single parameter $\theta$. That an MRE estimator of $\theta$ may be inadmissible in the presence of nuisance parameters, when the corresponding estimator of $\theta$ with known values of the nuisance parameters is admissible, is illustrated by the estimator (2.30). Other examples of this type have been studied by Brown (1968), Zidek (1973), and Berger (1976bc), among others. An important illustration of the inadmissibility of the MRE estimator of a vector-valued parameter constitutes the principal subject of the next two sections.

Even when the best equivariant estimator is not admissible, it may still be, and frequently is, minimax. Conditions for an MRE estimator to be minimax are given by Kiefer (1957) and Robert (1994a, Section 7.5) (see Note 9.3). The general treatment of admissibility and minimaxity of MRE estimators is beyond the scope of this book. However, roughly speaking, MRE estimators will typically not be admissible except in the simplest situations, but they have a much better chance of being minimax.

The difference can be seen by comparing Example 1.14 and the proof of Theorem 3.5 with the first admissibility proof of Example 2.8.
If there exists an invariant 5.3 ] ADMISSIBILITY AND MINIMAXITY IN GROUP FAMILIES 343 measure over the parameter space of the group family (or equivalently over the group, see Section 3.2 ) which can be suitably approximated by a sequence of probability distributions, one may hope that the corresponding Bayes estimators will tend to the MRE estimator and Theorem 3.5 will become applicable. In com-parison, the corresponding proof in Example 2.8 is much more delicate because it depends on the rate of convergence of the risks (this is well illustrated by the attempted admissibility proof at the beginning of the next section). As a contrast to Theorem 3.5, we shall now give some examples in which the MRE estimator is not minimax. Example 3.7 MRE not minimax. Consider once more the estimation of W in Example 3.2.12 with loss 1 when |d −W|/W > 1/2, and 0 otherwise. The problem remains invariant under the group G of transformations X′ 1 = a1X1 + a2X2, Y ′ 1 = c(a1Y1 + a2Y2), X′ 2 = b1X1 + b2X2, Y ′ 2 = c(b1Y1 + b2Y2) with a1b2 ̸= a2b1 and c > 0. The only equivariant estimator is δ(x, y) ≡0 and its risk is 1 for all values of W. On the other hand, the risk of the estimator k∗Y 2 2 /X2 2 obtained in Example 3.2.12 is clearly less than 1. ∥ Example 3.8 A random walk.3 Consider a walk in the plane. The walker at each step goes one unit either right, left, up, or down and these possibilities will be denoted by a, a−, b, and b−, respectively. Such a walk can be represented by a finite “path” such as bba−b−a−a−a−a−. In reporting a path, we shall, however, cancel any pair of successive steps which re-verse each other, such as a−a or bb−. The resulting set of all finite paths constitutes the parameter space . A typical element of will be denoted by θ = π1 · · · πm, its length by l(θ) = m. Being a parameter, θ (as well as m) is assumed to be unknown. What is observed is the path X obtained from θ by adding one more step, which is taken in one of the four possible directions at random, that is, with probability 1/4 each. If this last step is πm+1, we have X = θπm+1 if πm and πm+1 do not cancel each other, π1 · · · πm−1 otherwise. A special case occurs if θ or X, after cancellation, reduce to a path of length 0; this happens, for example, if θ = a−and the random step leading to X is a. The resulting path will then be denoted by e. The problem is to estimate θ, having observed X = x; the loss will be 1 if the estimated path δ(x) is ̸= θ, and 0 if δ(x) = θ. If we observe X to be x = π1 · · · πk, 3 A more formal description of this example is given in TSH2 [Chapter 1, Problem 11(ii)]. See also Diaconis (1988) for a general treatment of random walks on groups. 344 MINIMAXITY AND ADMISSIBILITY [ 5.3 the natural estimate is δ0(x) = π1 · · · πk−1. An exception occurs when x = e. In that case, which can arise only when l(θ) = 1, let us arbitrarily put δ0(e) = a. The estimator defined in this way clearly satisfies R(θ, δ0) ≤1 4 for all θ. Now, consider the transformations that modify the paths θ, x, and δ(x) by having each preceded by an initial segment π−r · · · π−1 on the left, so that, for example, θ = π1 · · · πm is transformed into ¯ gθ = π−r · · · π−1π1 · · · πm where, of course, some cancellations may occur. The group G is obtained by con-sidering the addition in this manner of all possible initial path segments. Equivari-ance of an estimator δ under this group is expressed by the condition δ(π−r · · · π−1x) = π−r · · · π−1δ(x) (3.13) for all x and all π−r · · · π−1, r = 1, 2 . . .. 
This implies, in particular, that

$$\delta(\pi_{-r}\cdots\pi_{-1}) = \pi_{-r}\cdots\pi_{-1}\delta(e), \tag{3.14}$$

and this condition is sufficient as well as necessary for $\delta$ to be equivariant because (3.14) implies that

$$\pi_{-r}\cdots\pi_{-1}\delta(x) = \pi_{-r}\cdots\pi_{-1}x\,\delta(e) = \delta(\pi_{-r}\cdots\pi_{-1}x).$$

Since $\bar{G}$ is clearly transitive over $\Omega$, the risk function of any equivariant estimator is constant. Let us now determine the MRE estimator. Suppose that $\delta(e) = \pi_{10}\cdots\pi_{k0}$, so that by (3.14), $\delta(x) = x\pi_{10}\cdots\pi_{k0}$. The only possibility of $\delta(x)$ being equal to $\theta$ occurs when $\pi_{10}$ cancels the last element of $x$. The best choice for $k$ is clearly $k = 1$, and the choice of $\pi_{10}$ (fixed or random) is then immaterial; in any case, the probability of cancellation with the last element of $X$ is 1/4, so that the risk of the MRE estimator (which is not unique) is 3/4. Comparison with $\delta_0$ shows that a best equivariant estimator in this case is not only not admissible but not even minimax. ∥

The following example, in which the MRE estimator is again not minimax but where $G$ is simply the group of translations on the real line, is due to Blackwell and Girshick (1954).

Example 3.9 Discrete location family. Let $X = U + \theta$, where $U$ takes on the values $1, 2, \ldots$ with probabilities $P(U = k) = p_k$. We observe $x$ and wish to estimate $\theta$ with loss function

$$L(\theta, d) = \begin{cases} d - \theta & \text{if } d > \theta, \\ 0 & \text{if } d \le \theta. \end{cases} \tag{3.15}$$

The problem remains invariant under arbitrary translation of $X$, $\theta$, and $d$ by the same amount. It follows from Section 3.1 that the only equivariant estimators are those of the form $X - c$. The risk of such an estimator, which is constant, is given by

$$\sum_{k > c}(k - c)p_k. \tag{3.16}$$

If the $p_k$ tend to 0 sufficiently slowly, an equivariant estimator will have infinite risk. This is the case, for example, when

$$p_k = \frac{1}{k(k + 1)} \tag{3.17}$$

(Problem 3.11). The reason is that there is a relatively large probability of substantially overestimating $\theta$, for which there is a heavy penalty. This suggests a deliberate policy of grossly underestimating $\theta$, for which, by (3.15), there is no penalty. One possible such estimator (which, of course, is not equivariant) is

$$\delta(x) = x - M|x|, \quad M > 1, \tag{3.18}$$

and it is not hard to show that its maximum risk is finite (Problem 3.12). The divergence of (3.16) under (3.17) is illustrated numerically below. ∥
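A quick check of the divergence claim (our own computation, not from the text): the partial sums of (3.16) under (3.17) grow without bound, roughly like the harmonic series.

```python
import numpy as np

# Partial sums of the risk (3.16) with p_k = 1/(k(k+1)) and c = 1.
c = 1
for K in [10**2, 10**3, 10**4, 10**5]:
    k = np.arange(c + 1, K + 1)
    partial = np.sum((k - c) / (k * (k + 1.0)))
    print(K, partial)   # grows roughly like log(K): the risk is infinite
```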
The ideas of the present section have relevance beyond the transitive case for which they were discussed so far. If $\bar{G}$ is not transitive, we can no longer ask whether the uniform minimum risk equivariant (UMRE) estimator is minimax, since a UMRE estimator will then typically not exist. Instead, we can ask whether there exists a minimax estimator which is equivariant. Similarly, the question of the admissibility of the UMRE estimator can be rephrased by asking whether an estimator which is admissible among equivariant estimators is also admissible within the class of all estimators.

The conditions for affirmative answers to these two questions are essentially the same as in the transitive case. In particular, the answer to both questions is affirmative when $G$ is finite. A proof along the lines of Theorem 4.1 is possible but not very convenient, because it would require a characterization of all admissible (within the class of equivariant estimators) equivariant estimators as Bayes solutions with respect to invariant prior distributions. Instead, we shall utilize the fact that for every estimator $\delta$, there exists an equivariant estimator whose average risk (to be defined below) is no worse than that of $\delta$.

Let the elements of the finite group $G$ be $g_1, \ldots, g_N$ and consider the estimators

$$\delta_i(x) = g_i^{*-1}\delta(g_ix). \tag{3.19}$$

When $\delta$ is equivariant, of course, $\delta_i(x) = \delta(x)$ for all $i$. Consider the randomized estimator $\delta^*$ for which

$$\delta^*(x) = \delta_i(x) \text{ with probability } 1/N \text{ for each } i = 1, \ldots, N, \tag{3.20}$$

and, assuming the set $D$ of possible decisions to be convex, the estimator

$$\delta^{**}(x) = \frac{1}{N}\sum_{i=1}^{N}\delta_i(x), \tag{3.21}$$

which, for given $x$, is the expected value of $\delta^*(x)$. Then, $\delta^{**}(x)$ is equivariant, and so is $\delta^*(x)$ in the sense that $g^{*-1}\delta^*(gx)$ again is equal to $\delta_i(x)$ with probability $1/N$ for each $i$ (Problem 3.13). For these two estimators, it is easy to prove (Problem 3.14) that:

(i) for any loss function $L$,

$$R(\theta, \delta^*) = \frac{1}{N}\sum R(\bar{g}_i\theta, \delta), \tag{3.22}$$

and

(ii) for any loss function $L(\theta, d)$ which is convex in $d$,

$$R(\theta, \delta^{**}) \le \frac{1}{N}\sum R(\bar{g}_i\theta, \delta). \tag{3.23}$$

From (3.22) and (3.23), it follows immediately that

$$\sup_\theta R(\theta, \delta^*) \le \sup_\theta R(\theta, \delta) \quad\text{and}\quad \sup_\theta R(\theta, \delta^{**}) \le \sup_\theta R(\theta, \delta),$$

which proves the existence of an equivariant minimax estimator, provided a minimax estimator exists.

Suppose, next, that $\delta_0$ is admissible among all equivariant estimators. If $\delta_0$ is not admissible within the class of all estimators, it is dominated by some $\delta$. Let $\delta^*$ and $\delta^{**}$ be as above. Then, (3.22) and (3.23) imply that $\delta^*$ and $\delta^{**}$ dominate $\delta_0$, which is a contradiction.

Of the two constructions, $\delta^{**}$ has the advantage of not requiring randomization, whereas $\delta^*$ has the advantage of greater generality since it does not require $L$ to be convex. Both constructions easily generalize to groups that admit an invariant measure which is finite (Problems 4.4.12-4.4.14). Further exploration of the relationship of equivariance to admissibility and the minimax property leads to the Hunt-Stein theorem (see Note 9.3). The averaging construction (3.21) is illustrated in the sketch below.
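The following sketch is our own illustration (not from the text) of the construction (3.19)-(3.21) for the two-element sign group of Example 3.2; the starting estimator is an arbitrary non-equivariant choice.

```python
import numpy as np

# Group averaging over G = {identity, x -> -x}.  Starting from a
# deliberately non-equivariant estimator delta, the average (3.21) is
# equivariant: delta**(-x) = -delta**(x).
def delta(x):                      # arbitrary non-equivariant estimator
    return x.mean() + 0.3

def delta_star_star(x):            # (3.21) with g1 = identity, g2 = -1
    d1 = delta(x)                  # g1^{*-1} delta(g1 x)
    d2 = -delta(-x)                # g2^{*-1} delta(g2 x)
    return 0.5 * (d1 + d2)         # here this reduces to x.mean()

x = np.array([0.4, -1.2, 0.7])
print(delta_star_star(x), -delta_star_star(-x))  # equal: equivariance holds
```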
4 Simultaneous Estimation

So far, we have been concerned with the estimation of a single real-valued parameter $g(\theta)$. However, one may wish to estimate several parameters simultaneously, for example, several physiological constants of a patient, several quality characteristics of an industrial or agricultural product, or several dimensions of musical ability. One is then dealing with a vector-valued estimand $g(\theta) = [g_1(\theta), \ldots, g_r(\theta)]$ and a vector-valued estimator $\delta = (\delta_1, \ldots, \delta_r)$. A natural generalization of squared error as a measure of accuracy is

$$\sum[\delta_i - g_i(\theta)]^2, \tag{4.1}$$

a sum of squared error losses, which we shall often simply call squared error loss. More generally, we shall consider loss functions $L(\theta, \delta)$, where $\delta = (\delta_1, \ldots, \delta_r)$, and then denote the risk of an estimator $\delta$ by

$$R(\theta, \delta) = E_\theta L[\theta, \delta(X)]. \tag{4.2}$$

Another generalization of expected squared error loss is the matrix $\mathbf{R}(\theta, \delta)$ whose $(i, j)$th element is

$$E\{[\delta_i(X) - g_i(\theta)][\delta_j(X) - g_j(\theta)]\}. \tag{4.3}$$

We shall say that $\delta$ is more concentrated about $g(\theta)$ than $\delta'$ if

$$\mathbf{R}(\theta, \delta') - \mathbf{R}(\theta, \delta) \tag{4.4}$$

is positive semidefinite (but not identically zero). This definition differs from that based on (4.2) by providing only a partial ordering of estimators, since (4.4) may be neither positive nor negative semidefinite.

Lemma 4.1
(i) $\delta$ is more concentrated about $g(\theta)$ than $\delta'$ if and only if

$$E\Big\{\sum k_i[\delta_i(X) - g_i(\theta)]\Big\}^2 \le E\Big\{\sum k_i[\delta_i'(X) - g_i(\theta)]\Big\}^2 \tag{4.5}$$

for all constants $k_1, \ldots, k_r$.
(ii) In particular, if $\delta$ is more concentrated about $g(\theta)$ than $\delta'$, then

$$E[\delta_i(X) - g_i(\theta)]^2 \le E[\delta_i'(X) - g_i(\theta)]^2 \quad\text{for all } i. \tag{4.6}$$

(iii) If $R(\theta, \delta) \le R(\theta, \delta')$ for all convex loss functions, then $\delta$ is more concentrated about $g(\theta)$ than $\delta'$.

Proof. (i) If $E\{\sum k_i[\delta_i(X) - g_i(\theta)]\}^2$ is expressed as a quadratic form in the $k_i$, its matrix is $\mathbf{R}(\theta, \delta)$. (ii) This is a special case of (i). (iii) This follows from the fact that $\{\sum k_i[d_i - g_i(\theta)]\}^2$ is a convex function of $d = (d_1, \ldots, d_r)$. ✷

Let us now consider the extension of some of the earlier theory to the case of simultaneous estimation of several parameters.

(1) The Rao-Blackwell theorem (Theorem 1.7.8). The proof of this theorem shows that its results remain valid when $\delta$ and $g$ are vector-valued. In particular, for any convex loss function, the risk of any estimator is reduced by taking its expectation given a sufficient statistic. It follows that for such loss functions, one can dispense with randomized estimators. Also, Lemma 4.1 shows that an estimator $\delta$ is always less concentrated about $g(\theta)$ than the expectation of $\delta(X)$ given a sufficient statistic.

(2) Unbiased estimation. In the vector-valued case, an estimator $\delta$ of $g(\theta)$ is said to be unbiased if

$$E_\theta[\delta_i(X)] = g_i(\theta) \quad\text{for all } i \text{ and } \theta. \tag{4.7}$$

For unbiased estimators, the concentration matrix $\mathbf{R}$ defined by (4.3) is just the covariance matrix of $\delta$. From the Rao-Blackwell theorem, it follows, as in Theorem 2.1.11 for the case $r = 1$, that if $L$ is convex and if a complete sufficient statistic $T$ exists, then any U-estimable $g$ has a unique unbiased estimator depending only on $T$. This estimator uniformly minimizes the risk among all unbiased estimators and, thus, is also more concentrated about $g(\theta)$ than any other unbiased estimator.

(3) Equivariant estimation. The definitions and concepts of Section 3.2 apply without changes. They are illustrated by the following example, which will be considered in more detail later in the section.

Example 4.2 Several normal means. Let $X = (X_1, \ldots, X_r)$, with the $X_i$ independently distributed as $N(\theta_i, 1)$, and consider the problem of estimating the vector mean $\theta = (\theta_1, \ldots, \theta_r)$ with squared error loss. This problem remains invariant under the group $G_1$ of translations

$$gX = (X_1 + a_1, \ldots, X_r + a_r), \quad \bar{g}\theta = (\theta_1 + a_1, \ldots, \theta_r + a_r), \quad g^*d = (d_1 + a_1, \ldots, d_r + a_r). \tag{4.8}$$

The only equivariant estimators are those of the form

$$\delta(X) = (X_1 + c_1, \ldots, X_r + c_r), \tag{4.9}$$

and an easy generalization of Example 3.1.16 shows that $X$ is the MRE estimator of $\theta$.

The problem also remains invariant under the group $G_2$ of orthogonal transformations

$$gX = XH, \quad \bar{g}\theta = \theta H, \quad g^*d = dH, \tag{4.10}$$

where $H$ is an orthogonal $r \times r$ matrix. An estimator $\delta$ is equivariant if and only if it is of the form (Problem 4.1)

$$\delta(X) = u(X)\cdot X, \tag{4.11}$$

where $u(X)$ is any scalar satisfying

$$u(XH) = u(X) \text{ for all orthogonal } H \text{ and all } X \tag{4.12}$$

and, hence, is an arbitrary function of $\sum X_i^2$ (Problem 4.2). The group $\bar{G}_2$ defined by (4.10) is not transitive over the parameter space, and a UMRE estimator of $\theta$, therefore, cannot be expected. ∥

(4) Bayes estimators. The following result frequently makes it possible to reduce Bayes estimation of a vector-valued estimand to that of its components.

Lemma 4.3 Suppose that $\delta_i^*(X)$ is the Bayes estimator of $g_i(\theta)$ when $\theta$ has the prior distribution $\Lambda$ and the loss is squared error. Then, $\delta^* = (\delta_1^*, \ldots, \delta_r^*)$ is more concentrated about $g(\theta)$ in the Bayes sense that it minimizes

$$E\Big[\sum k_i(\delta_i(X) - g_i(\theta))\Big]^2 = E\Big[\sum k_i\delta_i(X) - \sum k_ig_i(\theta)\Big]^2 \tag{4.13}$$

for all $k_i$, where the expectation is taken over both $\theta$ and $X$.

Proof. The result follows from the fact that the estimator $\sum k_i\delta_i(X)$ minimizing (4.13) is

$$E\Big[\sum k_ig_i(\theta)\,\Big|\,X\Big] = \sum k_iE[g_i(\theta)|X] = \sum k_i\delta_i^*(X). \qquad ✷$$
Example 4.4 Multinomial Bayes. Let $X = (X_0, \ldots, X_s)$ have the multinomial distribution $M(n; p_0, \ldots, p_s)$, and consider the Bayes estimation of the vector $p = (p_0, \ldots, p_s)$ when the prior distribution of $p$ is the Dirichlet distribution $\Lambda$ with density

$$\frac{\Gamma(a_0 + \cdots + a_s)}{\Gamma(a_0)\cdots\Gamma(a_s)}\,p_0^{a_0 - 1}\cdots p_s^{a_s - 1} \quad \Big(a_i > 0,\ 0 \le p_i \le 1,\ \sum p_i = 1\Big). \tag{4.14}$$

The Bayes estimator of $p_i$ for squared error loss is (Problem 4.3)

$$\delta_i(X) = \frac{a_i + X_i}{\sum a_j + n}, \tag{4.15}$$

and by Lemma 4.3, the estimator $[\delta_0(X), \ldots, \delta_s(X)]$ is then most concentrated in the Bayes sense. As a check, note that $\sum\delta_i(X) = 1$ as, of course, it must be, since $\Lambda$ assigns probability 1 to $\sum p_i = 1$. ∥

(5) Minimax estimators. In generalization of the binomial minimax problem treated in Example 1.7, let us now determine the minimax estimator of $(p_0, \ldots, p_s)$ for the multinomial model of Example 4.4.

Example 4.5 Multinomial minimax. Suppose the loss function is squared error. In light of Example 1.7, one might guess that a least favorable distribution is the Dirichlet distribution (4.14) with $a_0 = \cdots = a_s = a$. The Bayes estimator (4.15) reduces to

$$\delta_i(X) = \frac{a + X_i}{(s + 1)a + n}. \tag{4.16}$$

The estimator $\delta(X)$ with components (4.16) has constant risk over the support of (4.14), provided $a = \sqrt{n}/(s + 1)$, and for this value of $a$, $\delta(X)$ is therefore minimax by Corollary 1.5. [Various versions of this problem are discussed by Steinhaus (1957), Trybula (1958), Rutkowska (1977), and Olkin and Sobel (1979).] A numerical check of the constant risk is sketched below. ∥
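The following sketch verifies the constant risk claim of Example 4.5 by exact enumeration (our own computation, not from the text; the values of $n$, $s$, and the probability vectors are arbitrary choices).

```python
import itertools
import numpy as np
from scipy.stats import multinomial

# The estimator (4.16) with a = sqrt(n)/(s+1) has the same risk
# sum_i E(delta_i - p_i)^2 at every p in the interior of the simplex.
n, s = 8, 2
a = np.sqrt(n) / (s + 1)

def risk(p):
    total = 0.0
    for x in itertools.product(range(n + 1), repeat=s):
        x = np.array(x)
        if x.sum() > n:
            continue
        x_full = np.append(x, n - x.sum())
        prob = multinomial.pmf(x_full, n, p)
        delta = (a + x_full) / ((s + 1) * a + n)   # estimator (4.16)
        total += prob * np.sum((delta - np.array(p)) ** 2)
    return total

for p in [(1/3, 1/3, 1/3), (0.1, 0.2, 0.7), (0.5, 0.25, 0.25)]:
    print(p, risk(p))    # the same value for every p
```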
Example 4.6 Independent experiments. Suppose the components $X_i$ of $X = (X_1, \ldots, X_r)$ are independently distributed according to distributions $P_{\theta_i}$, where the $\theta_i$ vary independently over $\Omega_i$, so that the parameter space for $\theta = (\theta_1, \ldots, \theta_r)$ is $\Omega = \Omega_1 \times \cdots \times \Omega_r$. Suppose, further, that for the $i$th component problem of estimating $\theta_i$ with squared error loss, $\Lambda_i$ is least favorable for $\theta_i$, and the minimax estimator $\delta_i$ is the Bayes solution with respect to $\Lambda_i$, satisfying condition (1.5) with $\omega_i = \omega_{\Lambda_i}$. Then, $\delta = (\delta_1, \ldots, \delta_r)$ is minimax for estimating $\theta$ with squared error loss. This follows from the facts that (i) $\delta$ is a Bayes estimator with respect to the prior distribution $\Lambda$ for $\theta$, according to which the components $\theta_i$ are independently distributed with distributions $\Lambda_i$; (ii) $\Lambda(\omega) = 1$, where $\omega = \omega_1 \times \cdots \times \omega_r$; and (iii) the set of points $\theta$ at which $R(\theta, \delta)$ attains its maximum is exactly $\omega$.

The analogous result holds if the component minimax estimators $\delta_i$ are not Bayes solutions with respect to least favorable priors but have been obtained through a least favorable sequence by Theorem 1.12. As an example, suppose that $X_i$ ($i = 1, \ldots, r$) are independently distributed as $N(\theta_i, 1)$. Then, it follows that $(X_1, \ldots, X_r)$ is minimax for estimating $(\theta_1, \ldots, \theta_r)$ with squared error loss. ∥

The extensions so far have brought no great surprises. The results for general $r$ were fairly straightforward generalizations of those for $r = 1$. This will no longer always be the case for the last topic to be considered.

(6) Admissibility. The multinomial minimax estimator (4.16) was seen to be a unique Bayes estimator and, hence, is admissible. To investigate the admissibility of the minimax estimator $X$ for the case of $r$ normal means considered at the end of Example 4.6, one might try the argument suggested following Theorem 4.1. It was seen in Example 4.2 that the problem under consideration remains invariant under the group $G_1$ of translations and the group $G_2$ of orthogonal transformations, given by (4.8) and (4.10), respectively. Of these, $G_1$ is transitive; if there existed an invariant probability distribution over $G_1$, the remark following Theorem 4.1 would lead to an admissible estimator, hopefully $X$. However, the measures $c\nu$, where $\nu$ is Lebesgue measure, are the only invariant measures (Problem 4.14), and they are not finite. Let us instead consider $G_2$. An invariant probability distribution over $G_2$ does exist (TSH2, Example 6 of Chapter 9). However, the approach now fails because $\bar{G}_2$ is not transitive. Equivariant estimators do not necessarily have constant risk, and, in fact, in the present case, a UMRE estimator does not exist (Strawderman 1971).

Since neither of these two attempts works, let us try the limiting Bayes method (Example 2.8, first proof) instead, which was successful in the case $r = 1$. For the sake of convenience, we shall take the loss to be the average squared error,

$$L(\theta, d) = \frac{1}{r}\sum(d_i - \theta_i)^2. \tag{4.17}$$

If $X$ is not admissible, there exists an estimator $\delta^*$, a number $\varepsilon > 0$, and intervals $(\theta_{i0}, \theta_{i1})$ such that

$$R(\theta, \delta^*) \le 1 \text{ for all } \theta, \qquad R(\theta, \delta^*) < 1 - \varepsilon \text{ for } \theta \text{ satisfying } \theta_{i0} < \theta_i < \theta_{i1} \text{ for all } i.$$

A computation analogous to that of Example 2.8 now shows that

$$\frac{1 - r^*_\tau}{1 - r_\tau} \ge \frac{\varepsilon(1 + \tau^2)}{(\sqrt{2\pi}\,\tau)^r}\int_{\theta_{10}}^{\theta_{11}}\cdots\int_{\theta_{r0}}^{\theta_{r1}}\exp\Big(-\sum\theta_i^2/2\tau^2\Big)\,d\theta_1\cdots d\theta_r. \tag{4.18}$$

Unfortunately, the factor preceding the integral no longer tends to infinity when $r > 1$, and so this proof breaks down too.

It was shown by Stein (1956b) that $X$ is, in fact, no longer admissible when $r \ge 3$, although admissibility continues to hold for $r = 2$. (A limiting Bayes proof will work for $r = 2$, although not with normal priors; see Problem 4.5.) For $r \ge 3$, there are many different estimators whose risk is uniformly less than that of $X$.

To produce an improved estimator, Stein (1956b) gave a "large $r$ and $|\theta|$" argument based on the observation that, with high probability, the true $\theta$ is in the sphere $\{\theta : |\theta|^2 \le |x|^2 - r\}$. Since the usual estimator $X$ is approximately the same size as $\theta$, it will almost certainly be outside of this sphere. Thus, we should cut down the estimator $X$ to bring it inside the sphere. Stein argues that $X$ should be cut down by a factor of $(|X|^2 - r)/|X|^2 = 1 - r/|X|^2$, and as a more general form, he considers the class of estimators

$$\delta(x) = [1 - h(|x|^2)]x, \tag{4.19}$$

with particular emphasis on the special case

$$\delta(x) = \Big(1 - \frac{r}{|x|^2}\Big)x. \tag{4.20}$$

See Problem 4.6 for details. Later, James and Stein (1961) established the complete dominance of (4.20) over $X$, and (4.20) remains the basic underlying form of almost all improved estimators. In particular, the appearance of the squared term in the shrinkage factor is essential for optimality (Brown 1971; Berger 1976a; Berger 1985, Section 8.9.4).

Since Stein (1956b) and James and Stein (1961), the proof of domination of the estimator (4.20) over the maximum likelihood estimator, $X$, has undergone many modifications and updates. More recent proofs are based on the representation of Corollary 4.7.2 and can be made to apply to cases other than the normal. We defer treatment of this topic until Section 5.6. At present, we only make some remarks about the estimator (4.20) and the following modification due to James and Stein (1961). Let

$$\delta_i = \mu_i + \Big(1 - \frac{r - 2}{|x - \mu|^2}\Big)(x_i - \mu_i), \tag{4.21}$$

where $\mu = (\mu_1, \ldots, \mu_r)$ are given numbers and

$$|x - \mu| = \Big[\sum(x_i - \mu_i)^2\Big]^{1/2}. \tag{4.22}$$

A Monte Carlo illustration of the resulting risk improvement is sketched below.
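The following sketch is our own simulation (not from the text); the dimension, the values of $|\theta|$, and the replication count are arbitrary choices.

```python
import numpy as np

# Monte Carlo comparison of X with the James-Stein estimator (4.21)
# (mu = 0) under the average squared error loss (4.17), r = 10.
rng = np.random.default_rng(2)
r, reps = 10, 100_000

for norm_theta in [0.0, 2.0, 5.0, 10.0]:
    theta = np.zeros(r)
    theta[0] = norm_theta                 # the risk depends only on |theta|
    x = rng.normal(theta, 1.0, size=(reps, r))
    shrink = 1.0 - (r - 2) / (x**2).sum(axis=1)
    js = shrink[:, None] * x              # estimator (4.21) with mu = 0
    risk_x = ((x - theta) ** 2).sum(axis=1).mean() / r        # close to 1
    risk_js = ((js - theta) ** 2).sum(axis=1).mean() / r
    print(norm_theta, risk_x, risk_js)    # risk_js < 1; largest gain at 0
```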
A motivation for the general structure of the estimator (4.21) can be obtained by using arguments similar to the empirical Bayes arguments in Examples 4.7.7 and 4.7.8 (see also Problems 4.7.6 and 4.7.7). Suppose, a priori, it was thought likely, though not certain, that $\theta_i = \mu_i$ ($i = 1, \ldots, r$). Then, it might be reasonable first to test $H: \theta_1 = \mu_1, \ldots, \theta_r = \mu_r$ and to estimate $\theta$ by $\mu$ when $H$ is accepted and by $X$ otherwise. The best acceptance region has the form $|x - \mu| \le C$, so that the estimator becomes

$$\delta = \begin{cases} \mu & \text{if } |x - \mu| \le C, \\ x & \text{if } |x - \mu| > C. \end{cases} \tag{4.23}$$

A smoother approach is provided by an estimator with components of the form

$$\delta_i = \psi(|x - \mu|)x_i + [1 - \psi(|x - \mu|)]\mu_i, \tag{4.24}$$

where $\psi$, instead of being two-valued as in (4.23), is a function increasing continuously from $\psi(0) = 0$ to $\psi(\infty) = 1$. The estimator (4.21) is of the form (4.24) (although with $\psi(0) = -\infty$), but the argument given above provides no explanation of the particular choice for $\psi$. We note, however, that many hierarchical Bayes estimators (such as that of Example 5.2) result in estimators of this form. We will return to this question in Section 5.6.

For the case of unknown $\sigma$, the estimator corresponding to (4.23) has been investigated by Sclove, Morris, and Radhakrishnan (1972). They show that it does not provide a uniform improvement over $X$ and that its risk is uniformly greater than that of the corresponding James-Stein estimator. Although these so-called pretest estimators tend not to be optimal, they have been the subject of considerable research (see, for example, Sen and Saleh 1985, 1987).

Unlike $X$, the estimator $\delta$ is, of course, biased. An aspect that in some circumstances is disconcerting is the fact that the estimator of $\theta_i$ depends not only on $X_i$ but also on the other (independent) $X$'s. Do we save enough in risk to make up for these drawbacks? To answer this, we take a closer look at the risk function. Under the loss (4.17), it will be shown in Theorem 5.1 that the risk function of the estimator (4.21) can be written as

$$R(\theta, \delta) = 1 - \frac{r - 2}{r}E_\theta\Big[\frac{r - 2}{|X - \mu|^2}\Big]. \tag{4.25}$$

Thus, $\delta$ has uniformly smaller risk than the constant-risk estimator $X$ when $r \ge 3$, and, in particular, $\delta$ is then minimax by Example 4.6. More detailed information can be obtained from the fact that $|X - \mu|^2$ has a noncentral $\chi^2$-distribution with noncentrality parameter $\lambda = \sum(\theta_i - \mu_i)^2$ and that, therefore, the risk function (4.25) is an increasing function of $\lambda$ (see TSH2, Chapter 3, Lemma 2 and Chapter 7, Problem 4 for details). The risk function tends to 1 as $\lambda \to \infty$ and takes on its minimum value at $\lambda = 0$. For this value, $|X - \mu|^2$ has a $\chi^2$-distribution with $r$ degrees of freedom, and it follows from Example 2.1 that (Problem 4.7)

$$E\Big[\frac{1}{|X - \mu|^2}\Big] = \frac{1}{r - 2}$$

and hence $R(\mu, \delta) = 2/r$. Particularly for large values of $r$, the savings over the risk of $X$ (which is equal to 1 for all $\theta$) can therefore be substantial (see Bondar 1987 for further discussion).

We thus have the surprising result that $X$ is not only inadmissible when $r \ge 3$ but that even substantial risk savings are possible. This is the case not only for squared error loss but also for a wide variety of loss functions which in a suitable way combine the losses resulting from the $r$ component problems. In particular, Brown (1966, Theorem 3.1.1) proves that $X$ is inadmissible for $r \ge 3$ when $L(\theta, d) = \rho(d - \theta)$, where $\rho$ is a convex function satisfying, in addition to some mild conditions, the requirement that the $r \times r$ matrix $R$ with $(i, j)$th element

$$E_0\Big[X_i\frac{\partial}{\partial X_j}\rho(X)\Big] \tag{4.26}$$

is nonsingular. Here, the derivative in (4.26) is replaced by zero whenever it does not exist.

Example 4.7 A variety of loss functions. Consider the following loss functions $\rho_1, \ldots, \rho_4$:
$$\rho_1(t) = \sum\nu_it_i^2 \ (\text{all } \nu_i > 0); \quad \rho_2(t) = \max_it_i^2; \quad \rho_3(t) = t_1^2; \quad \rho_4(t) = \frac{1}{r}\Big(\sum t_i\Big)^2.$$

All four are convex, and $R$ is nonsingular for $\rho_1$ and $\rho_2$ but singular for $\rho_3$ and $\rho_4$ (Problem 4.8). For $r \ge 3$, it follows from Brown's theorem that $X$ is inadmissible for $\rho_1$ and $\rho_2$. On the other hand, it is admissible for $\rho_3$ and $\rho_4$ (Problem 4.10). ∥

Other ways in which the admissibility of $X$ depends on the loss function are indicated by the following example (Brown 1980b), in which $L(\theta, d)$ is not of the form $\rho(d - \theta)$.

Example 4.8 Admissibility of X. Let $X_i$ ($i = 1, \ldots, r$) be independently distributed as $N(\theta_i, 1)$ and consider the estimation of $\theta$ with loss function

$$L(\theta, d) = \sum_{i=1}^{r}\frac{v(\theta_i)}{\sum v(\theta_j)}(\theta_i - d_i)^2. \tag{4.27}$$

Then the following results hold:
(i) When $v(t) = e^{kt}$ ($k \ne 0$), $X$ is inadmissible if and only if $r \ge 2$.
(ii) When $v(t) = (1 + t^2)^{k/2}$,
(a) $X$ is admissible for $k < 1$, $1 \le r < (2 - k)/(1 - k)$, and for $k \ge 1$, all $r$;
(b) $X$ is inadmissible for $k < 1$, $r > (2 - k)/(1 - k)$.
Parts (i) and (ii)(b) will not be proved here. For the proof of (ii)(a), see Problem 4.11. ∥

In the formulations considered so far, the loss function in some way combines the losses resulting from the different component problems. Suppose, however, that the problems of estimating $\theta_1, \ldots, \theta_r$ are quite unrelated and that it is important to control the error in each of them. It might then be of interest to minimize

$$\max_i\Big[\sup_{\theta_i}E(\delta_i - \theta_i)^2\Big]. \tag{4.28}$$

It is easy to see that $X$ is the unique estimator minimizing (4.28) and is admissible from this point of view. This follows from the fact that $X_i$ is the unique estimator for which

$$\sup_{\theta_i}E(\delta_i - \theta_i)^2 \le 1.$$

[On the other hand, it follows from Example 4.7 that $X$ is inadmissible for $r \ge 3$ when $L(\theta, d) = \max_i(d_i - \theta_i)^2$.]

The performance measure (4.28) is not a risk function in the sense defined in Chapter 1 because it is not the expected value of some loss but the maximum of a number of such expectations. An interesting way of looking at such a criterion was proposed by Brown (1975) [see also Bock 1975; Shinozaki 1980, 1984]. Brown considers a family $\mathcal{L}$ of loss functions $L$, with the thought that it is not clear which of these loss functions will be most appropriate. (It may not be clear how the data will be used, or they may be destined for multiple uses. In this connection, see also Rao 1977.) If

$$R_L(\theta, \delta) = E_\theta L[\theta, \delta(X)], \tag{4.29}$$

Brown defines $\delta$ to be admissible with respect to the class $\mathcal{L}$ if there exists no $\delta'$ such that

$$R_L(\theta, \delta') \le R_L(\theta, \delta) \quad\text{for all } L \in \mathcal{L} \text{ and all } \theta,$$

with strict inequality holding for at least one $L = L_0$ and $\theta = \theta_0$. The argument following (4.28) shows that $X$ is admissible when $\mathcal{L}$ contains the $r$ loss functions $L_i(\theta, d) = (d_i - \theta_i)^2$, $i = 1, \ldots, r$, and hence, in particular, when $\mathcal{L}$ is the class of all loss functions

$$\sum_{i=1}^{r}c_i(\delta_i - \theta_i)^2, \quad 0 \le c_i < \infty. \tag{4.30}$$

On the other hand, Brown shows that if the ratios of the weights $c_i$ to each other are bounded,

$$c_i/c_j < K, \quad i, j = 1, \ldots, r, \tag{4.31}$$

then, no matter how large $K$, the estimator $X$ is inadmissible with respect to the class $\mathcal{L}$ of loss functions (4.30) satisfying (4.31). Similar results persist in even more general settings, such as when $\mathcal{L}$ is not restricted to squared error loss. See Hwang (1985), Brown and Hwang (1989), and Problem 4.14.

The above considerations make it clear that the choice between $X$ and competitors such as (4.21) must depend on the circumstances. (In this connection, see also Robinson 1979a, 1979b.) A more detailed discussion of some of these issues will be given in the next section.
5 Shrinkage Estimators in the Normal Case

The simultaneous consideration of a number of similar estimation problems involving independent variables and parameters $(X_i, \theta_i)$ often occurs in repetitive situations in which it may be reasonable to view the $\theta$'s themselves as random variables. This leads to the Bayesian approach of Examples 4.7.1, 4.7.7, and 4.7.8. In the simplest normal case, we assume, as in Example 4.7.1, that the $X_i$ are independent normal with mean $\theta_i$ and variance $\sigma^2$, and that the $\theta_i$'s are also normal, say with mean $\xi$ and variance $A$ (previously denoted by $\tau^2$); that is,

$$X \sim N_r(\theta, \sigma^2I) \quad\text{and}\quad \theta \sim N_r(\xi, AI).$$

This model has some similarity to the Model II version of the one-way layout considered in Section 3.5. There, however, interest centered on the variances $\sigma^2$ and $A$, while we now wish to estimate the $\theta_i$'s.

To simplify the problem, we shall begin by assuming that $\sigma$ and $\xi$ are known, say $\sigma = 1$ and $\xi = 0$, so that only $A$ and $\theta$ are unknown. The empirical Bayes arguments of Example 4.7.1 led to the estimator

$$\delta_i^{\hat\pi} = (1 - \hat{B})x_i, \tag{5.1}$$

where $\hat{B} = (r - 2)/\sum x_i^2$, that is, the James-Stein estimator (4.21) with $\mu = 0$. We now prove that, as previously claimed, the risk function of (5.1) is given by (4.25). However, we shall do so for the more general estimator (5.1) with $\hat{B} = c(r - 2)/\sum x_i^2$, where $c$ is a positive constant. (The value $c = 1$ minimizes both the Bayes risk (Problem 4.7.5) and the frequentist risk among estimators of this form.)

Theorem 5.1 Let $X_i$, $i = 1, \ldots, r$ ($r > 2$), be independent, with distributions $N(\theta_i, 1)$, and let the estimator $\delta_c$ of $\theta$ be given by

$$\delta_c(x) = \Big(1 - c\,\frac{r - 2}{|x|^2}\Big)x, \quad |x|^2 = \sum x_j^2. \tag{5.2}$$

Then, the risk function of $\delta_c$, with loss function (4.17), is

$$R(\theta, \delta_c) = 1 - \frac{(r - 2)^2}{r}E_\theta\Big[\frac{c(2 - c)}{|X|^2}\Big]. \tag{5.3}$$

Proof. From Theorem 4.7.2, using the loss function (4.17), the risk of $\delta_c$ is

$$R(\theta, \delta_c) = 1 + \frac{1}{r}E_\theta|g(X)|^2 - \frac{2}{r}\sum_{i=1}^{r}E_\theta\frac{\partial}{\partial X_i}g_i(X), \tag{5.4}$$

where $g_i(x) = c(r - 2)x_i/|x|^2$ and $|g(x)|^2 = c^2(r - 2)^2/|x|^2$. Differentiation shows

$$\frac{\partial}{\partial x_i}g_i(x) = \frac{c(r - 2)}{|x|^4}[|x|^2 - 2x_i^2] \tag{5.5}$$

and hence

$$\sum_{i=1}^{r}\frac{\partial}{\partial x_i}g_i(x) = \frac{c(r - 2)}{|x|^4}\sum_{i=1}^{r}[|x|^2 - 2x_i^2] = \frac{c(r - 2)}{|x|^2}(r - 2),$$

and substitution into (5.4) gives

$$R(\theta, \delta_c) = 1 + \frac{1}{r}E_\theta\Big[\frac{c^2(r - 2)^2}{|X|^2}\Big] - \frac{2}{r}E_\theta\Big[\frac{c(r - 2)^2}{|X|^2}\Big] = 1 - \frac{(r - 2)^2}{r}E_\theta\Big[\frac{c(2 - c)}{|X|^2}\Big]. \qquad ✷$$

Note that

$$E_\theta\Big[\frac{1}{|X|^2}\Big] \le E_0\Big[\frac{1}{|X|^2}\Big], \tag{5.6}$$

so $R(\theta, \delta_c) < \infty$ only if the latter expectation is finite, which occurs when $r \ge 3$ (see Problem 5.2).

From the expression (5.3) for the risk, we immediately get the following results.

Corollary 5.2 The estimator $\delta_c$ defined by (5.2) dominates $X$ ($\delta_c = X$ when $c = 0$), provided $0 < c < 2$ and $r \ge 3$.

Proof. For these values, $c(2 - c) > 0$ and, hence, $R(\theta, \delta_c) < 1$ for all $\theta$. Note that $R(\theta, \delta_c) = R(\theta, X)$ for $c = 2$. ✷

Corollary 5.3 The James-Stein estimator $\delta$, which equals $\delta_c$ with $c = 1$, dominates all estimators $\delta_c$ with $c \ne 1$.

Proof. The factor $c(2 - c)$ takes on its maximum value 1 if and only if $c = 1$. ✷

For $c = 1$, formula (5.3) verifies the risk formula (4.25). A numerical evaluation of (5.3) for several values of $c$ is sketched below.
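The following sketch is our own evaluation of (5.3) (not from the text), using the Poisson-mixture representation of the noncentral chi-squared distribution of $|X|^2$; the values of $r$ and $|\theta|^2$ are arbitrary choices.

```python
import numpy as np
from scipy.stats import poisson

# Evaluate the risk formula (5.3).  With |X|^2 ~ noncentral chi^2 with
# r df and noncentrality lam = |theta|^2, conditioning on K ~ Poisson(lam/2)
# gives |X|^2 | K ~ chi^2_{r+2K}, and E[1/chi^2_{nu}] = 1/(nu - 2).
def e_inv_chi2(r, lam, kmax=500):
    k = np.arange(kmax)
    return np.sum(poisson.pmf(k, lam / 2) / (r + 2 * k - 2))

def risk(c, r, lam):
    return 1 - (r - 2) ** 2 / r * c * (2 - c) * e_inv_chi2(r, lam)

r, lam = 6, 4.0
for c in [0.0, 0.5, 1.0, 1.5, 2.0]:
    print(c, risk(c, r, lam))   # minimized at c = 1; equals 1 at c = 0 and 2
```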
Since the James-Stein estimator dominates all estimators $\delta_c$ with $c \ne 1$, one might hope that it is admissible. However, unfortunately, this is not the case, as is shown by the following theorem, which strengthens Theorem 4.7.5 by extending the comparison from the average (Bayes) risk to the risk function.

Theorem 5.4 Let $\delta$ be any estimator of the form (5.1) with $\hat{B}$ any strictly decreasing function of $\sum x_i^2$, and suppose that

$$P_\theta(\hat{B} > 1) > 0. \tag{5.7}$$

Then, $R(\theta, \hat{\delta}) < R(\theta, \delta)$, where

$$\hat{\delta}_i = \max[(1 - \hat{B}), 0]\,x_i. \tag{5.8}$$

Proof. By (4.17),

$$R(\theta, \delta) - R(\theta, \hat{\delta}) = \frac{1}{r}\sum_{i=1}^{r}\Big\{E_\theta\big(\delta_i^2 - \hat{\delta}_i^2\big) - 2\theta_iE_\theta\big(\delta_i - \hat{\delta}_i\big)\Big\}.$$

To show that the expression in braces is always $> 0$, calculate the expectations by first conditioning on $\hat{B}$. For any value $\hat{B} \le 1$, we have $\delta_i = \hat{\delta}_i$, so it is enough to show that the right side is positive when conditioned on any value $\hat{B} = b > 1$. Since in that case $\hat{\delta}_i = 0$, it is finally enough to show that for any $b > 1$,

$$\theta_iE_\theta[\delta_i \mid \hat{B} = b] = \theta_i(1 - b)E_\theta(X_i \mid \hat{B} = b) \le 0$$

and hence that $\theta_iE_\theta(X_i \mid \hat{B} = b) \ge 0$. Now, $\hat{B} = b$ is equivalent to $|X|^2 = c$ for some $c$ and hence to $X_1^2 = c - (X_2^2 + \cdots + X_r^2)$. Conditioning further on $X_2, \ldots, X_r$, we find that

$$E_\theta(\theta_1X_1 \mid |X|^2 = c, x_2, \ldots, x_r) = \theta_1E_\theta(X_1 \mid X_1^2 = y^2) = \frac{\theta_1y\,(e^{\theta_1y} - e^{-\theta_1y})}{e^{\theta_1y} + e^{-\theta_1y}},$$

where $y = [c - (x_2^2 + \cdots + x_r^2)]^{1/2}$. This is an increasing function of $|\theta_1y|$, which is zero when $\theta_1y = 0$, and this completes the proof. ✷

Theorem 5.4 shows, in particular, that the James-Stein estimator ($\delta_c$ with $c = 1$) is dominated by another minimax estimator,

$$\delta_i^+ = \Big(1 - \frac{r - 2}{|x|^2}\Big)^+x_i, \tag{5.9}$$

where $(\cdot)^+$ indicates that the quantity in parentheses is replaced by 0 whenever it is negative. We shall call $(\cdot)^+ = \max[(\cdot), 0]$ the positive part of $(\cdot)$. The risk functions of the ordinary and positive-part Stein estimators are compared in Figure 5.1.

[Figure 5.1. Risk functions of the ordinary and positive-part Stein estimators, for r = 4.]

Unfortunately, it can be shown that even $\delta^+$ is inadmissible because it is not smooth enough to be either Bayes or limiting Bayes, as we will see in Section 5.7. However, as suggested by Efron and Morris (1973a, Section 5), the positive-part estimator $\delta^+$ is difficult to dominate, and it took another twenty years until a dominating estimator was found by Shao and Strawderman (1994). A small simulation comparing $\delta^+$ with the James-Stein estimator is sketched below.
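The following sketch is our own simulation (not from the text); the dimension and the values of $|\theta|$ are arbitrary choices.

```python
import numpy as np

# Positive-part estimator (5.9) versus the ordinary James-Stein
# estimator, r = 4, average squared error loss (4.17).
rng = np.random.default_rng(3)
r, reps = 4, 200_000

for norm_theta in [0.0, 1.0, 3.0]:
    theta = np.zeros(r); theta[0] = norm_theta
    x = rng.normal(theta, 1.0, size=(reps, r))
    factor = 1.0 - (r - 2) / (x**2).sum(axis=1)
    js = factor[:, None] * x
    jsp = np.maximum(factor, 0.0)[:, None] * x     # positive part (5.9)
    print(norm_theta,
          ((js - theta) ** 2).sum(axis=1).mean() / r,
          ((jsp - theta) ** 2).sum(axis=1).mean() / r)  # jsp never worse
```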
Unfortunately, it can be shown that even $\delta^+$ is inadmissible because it is not smooth enough to be either Bayes or limiting Bayes, as we will see in Section 5.7. However, as suggested by Efron and Morris (1973a, Section 5), the positive-part estimator $\delta^+$ is difficult to dominate, and it took another twenty years until a dominating estimator was found by Shao and Strawderman (1994).

There exist, in fact, many admissible minimax estimators, but they are of a more general form than (5.1) or (5.9). To obtain such an estimator, we state the following generalization of Corollary 5.2, due to Baranchik (1970) (see also Strawderman 1971 and Efron and Morris 1976a). The proof is left to Problem 5.4.

Theorem 5.5 For $X \sim N_r(\theta, I)$, $r \ge 3$, and loss $L(\theta, d) = \frac1r\sum(d_i - \theta_i)^2$, an estimator of the form
\[ \delta_i = \left(1 - c(|x|)\frac{r-2}{|x|^2}\right)x_i \tag{5.10} \]
is minimax provided (i) $0 \le c(\cdot) \le 2$ and (ii) the function $c$ is nondecreasing.

It is interesting to note how very different the situation for $r \ge 3$ is from the one-dimensional problem discussed in Sections 5.2 and 5.3. There, minimax estimators were unique (although recall Example 2.9); here, they constitute a rich collection. It follows from Theorem 5.4 that the estimators (5.10) are inadmissible whenever $c(|x|)/|x|^2 > 1/(r-2)$ with positive probability. On the other hand, the family (5.10) does contain some admissible members, as is shown by the following example, due to Strawderman (1971).

Example 5.6 Proper Bayes minimax. Let $X_i$ be independent normal with mean $\theta_i$ and unit variance, and suppose that the $\theta_i$'s are themselves random variables with the following two-stage prior distribution. For a fixed value of $\lambda$, let the $\theta_i$ be iid according to $N[0, \lambda^{-1}(1-\lambda)]$. In addition, suppose that $\lambda$ itself is the value of a random variable $\Lambda$ with density $(1-a)\lambda^{-a}$, $0 \le a < 1$. We therefore have the hierarchical model
\[ X \sim N_r(\theta, I), \qquad \theta \sim N_r(0, \lambda^{-1}(1-\lambda) I), \qquad \Lambda \sim (1-a)\lambda^{-a}, \ 0 < \lambda < 1,\ 0 \le a < 1. \tag{5.11} \]
Here, for illustration, we take $a = 0$ so that $\Lambda$ has the uniform distribution $U(0, 1)$. A straightforward calculation (Problem 5.5) shows that the Bayes estimator $\delta$, under squared error loss (4.17), is given by (5.10) with
\[ c(|x|) = \frac{1}{r-2}\left[ r + 2 - \frac{2\exp(-\tfrac12|x|^2)}{\int_0^1 \lambda^{r/2}\exp(-\lambda|x|^2/2)\,d\lambda} \right]. \tag{5.12} \]
It follows from Problem 5.4 that $E(\Lambda \mid x) = (r-2)c(|x|)/|x|^2$ and hence that $c(|x|) \ge 0$. On the other hand, $c(|x|) \le (r+2)/(r-2)$ and hence $c(|x|) \le 2$ provided $r \ge 6$. It remains to show that $c(|x|)$ is nondecreasing or, equivalently, that
\[ \int_0^1 \lambda^{r/2}\exp\!\left[\tfrac12|x|^2(1-\lambda)\right]d\lambda \]
is nondecreasing in $|x|$. This is obvious since $0 < \lambda < 1$. Thus, the estimator (5.10) with $c(|x|)$ given by (5.12) is a proper Bayes (and admissible) minimax estimator for $r \ge 6$. ∥
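The factor (5.12) is easy to examine numerically. The following sketch (not from the text; it uses scipy quadrature and assumes $a = 0$ and $r = 6$) evaluates $c(|x|)$ on a grid and checks the three facts used above: $c \ge 0$, $c \le 2$, and $c$ nondecreasing.

```python
# A numerical sketch of the Strawderman factor (5.12), checking the minimaxity
# conditions of Theorem 5.5 (0 <= c <= 2 and c nondecreasing) for r >= 6.
import numpy as np
from scipy.integrate import quad

def c_factor(t, r):               # t = |x|
    integral = quad(lambda lam: lam**(r/2) * np.exp(-lam*t**2/2), 0.0, 1.0)[0]
    return (r + 2 - 2*np.exp(-t**2/2)/integral) / (r - 2)

r = 6
vals = [c_factor(t, r) for t in np.linspace(0.1, 10, 40)]
print(min(vals) >= 0, max(vals) <= 2, np.all(np.diff(vals) >= -1e-9))
# expected output: True True True
```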
Although neither the James-Stein estimator nor its positive-part version (5.9) is admissible, it appears that no substantial improvements over the latter are possible (see Example 7.3). We shall now turn to some generalizations of these estimators, where we no longer require equal variances or equal weights in the loss function.

We first look at the case where the covariance matrix is no longer $\sigma^2 I$, but may be any positive definite matrix $\Sigma$. Conditions for minimaxity of the estimator
\[ \delta(x) = \left(1 - \frac{c(|x|^2)}{|x|^2}\right)x \tag{5.13} \]
will now involve this covariance matrix.

Theorem 5.7 For $X \sim N_r(\theta, \Sigma)$ with $\Sigma$ known, an estimator of the form (5.13) is minimax against the loss $L(\theta, \delta) = |\theta - \delta|^2$, provided
(i) $0 \le c(|x|^2) \le 2[\operatorname{tr}(\Sigma)/\lambda_{\max}(\Sigma)] - 4$,
(ii) the function $c(\cdot)$ is nondecreasing,
where $\operatorname{tr}(\Sigma)$ denotes the trace of the matrix $\Sigma$ and $\lambda_{\max}(\Sigma)$ denotes its largest eigenvalue.

Note that the covariance matrix must satisfy $\operatorname{tr}(\Sigma)/\lambda_{\max}(\Sigma) > 2$ for $\delta$ to be different from $X$. If $\Sigma = I$, then $\operatorname{tr}(\Sigma)/\lambda_{\max}(\Sigma) = r$, so this is the dimensionality restriction in another guise. Bock (1975) (see also Brown 1975) shows that $X$ is unique minimax among spherically symmetric estimators if $\operatorname{tr}(\Sigma)/\lambda_{\max}(\Sigma) < 2$. (An estimator is said to be spherically symmetric if it is equivariant under orthogonal transformations. Such estimators were characterized by (4.11), to which (5.13) is equivalent.)

When the bound on $c(\cdot)$ is displayed in terms of the covariance matrix, we get some idea of the types of problems in which we can expect improvement from shrinkage estimators. The condition $\operatorname{tr}(\Sigma) > 2\lambda_{\max}(\Sigma)$ will be satisfied when the eigenvalues of $\Sigma$ are not too different (see Problem 5.10). If this condition is not satisfied, then estimators which allow different coordinatewise shrinkage are needed to obtain minimaxity (see Notes 9.6).

Proof of Theorem 5.7. The risk of $\delta(x)$ is
\[ R(\theta, \delta) = E_\theta\!\left[(\theta - \delta(X))'(\theta - \delta(X))\right] = E_\theta\!\left[(\theta - X)'(\theta - X)\right] - 2E_\theta\!\left[\frac{c(|X|^2)}{|X|^2}X'(\theta - X)\right] + E_\theta\!\left[\frac{c^2(|X|^2)}{|X|^2}\right] \tag{5.14} \]
where $E_\theta(\theta - X)'(\theta - X) = \operatorname{tr}(\Sigma)$, the trace of the matrix $\Sigma$, is the minimax risk (Problem 5.8). We can now apply integration by parts (see Problem 5.9) to write
\[ R(\theta, \delta) = \operatorname{tr}(\Sigma) + E_\theta\!\left[\frac{c(|X|^2)}{|X|^2}\left\{(c(|X|^2) + 4)\frac{X'\Sigma X}{|X|^2} - 2\operatorname{tr}(\Sigma)\right\}\right] - 4E_\theta\!\left[c'(|X|^2)\frac{X'\Sigma X}{|X|^2}\right]. \tag{5.15} \]
Since $c'(\cdot) \ge 0$, an upper bound on $R(\theta, \delta)$ results by dropping the last term. We then note [see Equation (2.6.5)]
\[ \frac{x'\Sigma x}{|x|^2} = \frac{x'\Sigma x}{x'x} \le \lambda_{\max}(\Sigma) \tag{5.16} \]
and the result follows. □

Theorem 5.5 can also be extended by considering more general loss functions in which the coordinates may have different weights.

Example 5.8 Loss functions in the one-way layout. In the one-way layout we observe
\[ X_{ij} \sim N(\xi_i, \sigma^2_i), \quad j = 1, \dots, n_i, \ i = 1, \dots, r, \tag{5.17} \]
where the usual estimator of $\xi_i$ is $\bar X_i = \sum_j X_{ij}/n_i$. If the assumptions of Theorem 5.7 are satisfied, then the estimator $\bar X = (\bar X_1, \dots, \bar X_r)$ can be improved. If the $\xi_i$'s represent mean responses for different treatments, for example, crop yields from different fertilization treatments or mean responses from different drug therapies, it may be unrealistic to penalize the estimation of each coordinate by the same amount. In particular, if one drug is uncommonly expensive or if a fertilization treatment is quite difficult to apply, this could be reflected in the loss function. ∥

The situation described in Example 5.8 can be generalized to
\[ X \sim N(\theta, \Sigma), \qquad L(\theta, \delta) = (\theta - \delta)'Q(\theta - \delta), \tag{5.18} \]
where both $\Sigma$ and $Q$ are positive definite matrices. We again ask under what conditions the estimator (5.13) is a minimax estimator of $\theta$. Before answering this question, we first note that, without loss of generality, we can consider one of $\Sigma$ or $Q$ to be the identity (see Problem 5.11). Hence, we take $\Sigma = I$ in the following theorem, whose proof is left to Problem 5.12; see also Problem 5.13 for a more general result.

Theorem 5.9 Let $X \sim N(\theta, I)$. An estimator of the form (5.13) is minimax against the loss $L(\theta, \delta) = (\theta - \delta)'Q(\theta - \delta)$, provided
(i) $0 \le c(|x|^2) \le 2[\operatorname{tr}(Q)/\lambda_{\max}(Q)] - 4$,
(ii) the function $c(\cdot)$ is nondecreasing.

Theorem 5.9 can also be viewed as a robustness result, since we have shown $\delta$ to be minimax against any $Q$ which provides an upper bound for $c(|x|^2)$ in (i). This is in the same spirit as the results of Brown (1975), mentioned in Section 5.5. (See Problem 5.14.)

Thus far, we have been mainly concerned with one form of shrinkage estimator, the estimator (5.13). We shall now obtain a more general class of minimax estimators by writing $\delta$ as $\delta(x) = x - g(x)$ and utilizing the resulting expression (5.4) for the risk of $\delta$. As first noted by Stein, (5.4) can be combined with the identities derived in Section 4.3 (for the Bayes estimator in an exponential family) to obtain a set of sufficient conditions for minimaxity in terms of the condition of superharmonicity of the marginal distribution (see Section 1.7).

In particular, for the case of $X \sim N(\theta, I)$, we can, by Corollary 3.3, write a Bayes estimator of $\theta$ as
\[ E(\theta_i \mid x) = \frac{\partial}{\partial x_i}\log m(x) - \frac{\partial}{\partial x_i}\log h(x) \tag{5.19} \]
with $-\partial\log h(x)/\partial x_i = x_i$ (Problem 5.17), so that the Bayes estimators are of the form
\[ \delta(x) = x + \nabla\log m(x). \tag{5.20} \]

Theorem 5.10 If $X \sim N_r(\theta, I)$, the risk, under squared error loss, of the estimator (5.20) is given by
\[ R(\theta, \delta) = 1 + \frac{4}{r}E_\theta\!\left[\sum_{i=1}^r \frac{(\partial^2/\partial x_i^2)\sqrt{m(X)}}{\sqrt{m(X)}}\right] = 1 + \frac{4}{r}E_\theta\!\left[\frac{\nabla^2\sqrt{m(X)}}{\sqrt{m(X)}}\right] \tag{5.21} \]
where $\nabla^2 f = \sum_i(\partial^2/\partial x_i^2)f$ is the Laplacian of $f$.

Proof. The $i$th component of the estimator (5.20) is
\[ \delta_i(x) = x_i + (\nabla\log m(x))_i = x_i + \frac{\partial}{\partial x_i}\log m(x) = x_i + \frac{m'_i(x)}{m(x)} \]
where, for simplicity of notation, we write $m'_i(x) = (\partial/\partial x_i)m(x)$. In the risk identity (5.4), set $g_i(x) = -m'_i(x)/m(x)$ to obtain
\[ R(\theta, \delta) = 1 + \frac1r E_\theta\sum_{i=1}^r\left[\frac{m'_i(X)}{m(X)}\right]^2 + \frac2r E_\theta\sum_{i=1}^r \frac{\partial}{\partial x_i}\left[\frac{m'_i(X)}{m(X)}\right] = 1 + \frac1r\sum_{i=1}^r E_\theta\!\left[\frac{2m''_i(X)}{m(X)} - \left(\frac{m'_i(X)}{m(X)}\right)^{\!2}\right] \tag{5.22} \]
where $m''_i(x) = (\partial^2/\partial x_i^2)m(x)$, and the second expression follows from straightforward differentiation and gathering of terms. Finally, notice the differentiation identity
\[ \frac{\partial^2}{\partial x_i^2}[g(x)]^{1/2} = \frac{g''_i(x)}{2[g(x)]^{1/2}} - \frac{[g'_i(x)]^2}{4[g(x)]^{3/2}}. \tag{5.23} \]
Using (5.23), we can rewrite the risk (5.22) in the form (5.21). □

The form of the risk function (5.21) shows that the estimator (5.20) is minimax (provided all expectations are finite) if $\nabla^2[m(x)]^{1/2} \le 0$, and hence, by Theorem 1.7.24, if $[m(x)]^{1/2}$ is superharmonic.
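The representation (5.20) can be made concrete with a conjugate prior for which the answer is known in closed form. In the sketch below (an illustration, not the book's computation), $\theta \sim N_r(0, \tau^2 I)$, so $m$ is the $N_r(0, (1+\tau^2)I)$ density and (5.20) must reduce to the linear Bayes rule $[\tau^2/(1+\tau^2)]x$; the gradient is taken by finite differences.

```python
# A sketch of delta(x) = x + grad log m(x) for a conjugate normal prior.
import numpy as np

r, tau2 = 5, 2.0
x = np.array([1.0, -0.5, 2.0, 0.3, -1.2])

log_m = lambda x: -0.5 * (x**2).sum() / (1 + tau2)   # log marginal, up to a constant
eps, grad = 1e-6, np.zeros(r)
for i in range(r):                                   # finite-difference gradient
    e = np.zeros(r); e[i] = eps
    grad[i] = (log_m(x + e) - log_m(x - e)) / (2 * eps)

print(x + grad)                     # equals tau2/(1+tau2) * x
print(tau2 / (1 + tau2) * x)
```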
We, therefore, have established the following class of minimax estimators.

Corollary 5.11 Under the conditions of Theorem 5.10, if $E_\theta\{\nabla^2\sqrt{m(X)}/\sqrt{m(X)}\} < \infty$ and $[m(x)]^{1/2}$ is a superharmonic function, then $\delta(x) = x + \nabla\log m(x)$ is minimax.

A useful consequence of Corollary 5.11 follows from the fact that superharmonicity of $m(x)$ implies that of $[m(x)]^{1/2}$ (Problem 1.7.16) and is often easier to verify.

Corollary 5.12 Under the conditions of Theorem 5.10, if $E_\theta|\nabla m(X)/m(X)|^2 < \infty$, $E_\theta|\nabla^2 m(X)/m(X)| < \infty$, and $m(x)$ is a superharmonic function, then $\delta(x) = x + \nabla\log m(x)$ is minimax.

Proof. From the second expression in (5.22), we see that
\[ R(\theta, \delta) \le 1 + \frac2r\sum_{i=1}^r E_\theta\!\left[\frac{m''_i(X)}{m(X)}\right], \tag{5.24} \]
which is $\le 1$ if $m(x)$ is superharmonic. □

Example 5.13 Superharmonicity of the marginal. For the model in (5.11) of Example 5.6, we have
\[ m(x) \propto \int_0^1 \lambda^{(r/2)-a}e^{-\frac{\lambda}{2}|x|^2}\,d\lambda \]
and
\[ \sum_{i=1}^r \frac{\partial^2}{\partial x_i^2}m(x) \propto \int_0^1 \lambda^{(r/2)-a+1}[\lambda|x|^2 - r]e^{-\frac{\lambda}{2}|x|^2}\,d\lambda = \frac{1}{(|x|^2)^{(r/2)-a+2}}\int_0^{|x|^2} t^{(r/2)-a+1}[t - r]e^{-t/2}\,dt. \tag{5.25} \]
Thus, a sufficient condition for $m(x)$ to be superharmonic is
\[ \int_0^{|x|^2} t^{(r/2)-a+1}[t - r]e^{-t/2}\,dt \le 0. \tag{5.26} \]
From Problem 5.18, we have
\[ \int_0^{|x|^2} t^{(r/2)-a+1}[t - r]e^{-t/2}\,dt \le \int_0^\infty t^{(r/2)-a+1}[t - r]e^{-t/2}\,dt = \Gamma\!\left(\frac r2 - a + 2\right)2^{(r/2)-a+2}\,[-2a + 4], \tag{5.27} \]
so $m(x)$ is superharmonic if $-2a + 4 \le 0$, or $a \ge 2$. For the choice $a = 2$, the Strawderman estimator is (Problem 5.5)
\[ \delta(x) = \left(1 - \frac{r-2}{|x|^2}\,\frac{P(\chi^2_r \le |x|^2)}{P(\chi^2_{r-2} \le |x|^2)}\right)x, \]
which resembles the positive-part Stein estimator. Note that this estimator is not a proper Bayes estimator, as the prior distribution is improper. ∥
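For completeness, here is a sketch of the $a = 2$ estimator just displayed, with the chi-squared probabilities computed by scipy; the data vector and function name are illustrative.

```python
# A sketch of the generalized Bayes (a = 2) estimator from Example 5.13, whose
# shrinkage factor involves the ratio P(chi^2_r <= |x|^2) / P(chi^2_{r-2} <= |x|^2).
import numpy as np
from scipy.stats import chi2

def strawderman(x):
    r, s = len(x), (x**2).sum()
    h = (r - 2) / s * chi2.cdf(s, df=r) / chi2.cdf(s, df=r - 2)
    return (1 - h) * x

x = np.array([0.5, -1.0, 2.0, 0.2, 1.5, -0.7])
print(strawderman(x))    # shrinks x toward 0, less aggressively for large |x|
```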
There are many other characterizations of superharmonic functions that lead to different versions of minimax theorems. A most useful one, noticed by Stein (1981), is the following.

Corollary 5.14 Under the conditions of Theorem 5.10 and Corollary 5.11, if the prior $\pi(\theta)$ is superharmonic, then $\delta(x) = x + \nabla\log m(x)$ is minimax.

Proof. The marginal density can be written as
\[ m(x) = \int \varphi_r(x - \theta)\pi(\theta)\,d\theta, \tag{5.28} \]
where $\varphi_r(x - \theta)$ is the $r$-variate normal density. From Problem 1.7.16, the superharmonicity of $m(x)$ follows. □

Example 5.15 Superharmonic prior. The hierarchical Bayes estimators of Faith (1978) (Problem 5.7) are based on the multivariate t prior
\[ \pi(\theta) \propto \left(b + \frac{|\theta|^2}{2}\right)^{-(a + r/2)}. \tag{5.29} \]
It is straightforward to verify that this prior is superharmonic if $a \le -1$, allowing a simple verification of minimaxity of an estimator that can only be expressed as an integral. The superharmonic condition, although sometimes difficult to verify, has often proved helpful not only in establishing minimaxity but also in understanding what types of prior distributions may lead to minimax Bayes estimators. See Note 9.7 for further discussion. ∥

We close this section with an examination of componentwise risk. For $X_i \sim N(\theta_i, 1)$, independent, and risk function
\[ \bar R(\theta, \delta) = \frac1r\sum R(\theta_i, \delta_i) \tag{5.30} \]
with $R(\theta_i, \delta_i) = E(\delta_i - \theta_i)^2$, it was seen in Section 5.3 that it is not possible to find a $\delta_i$ for which $R(\theta_i, \delta_i)$ is uniformly better than $R(\theta_i, X_i) = 1$. Thus, the improvement in the average risk can be achieved only through increasing some of the component risks, and it becomes of interest to consider the maximum possible component risk
\[ \max_i \sup_{\theta_i} R(\theta_i, \delta_i). \tag{5.31} \]
For given $\lambda = \sum\theta_j^2$, it can be shown (Baranchik 1964) that (5.31) attains its maximum when all but one of the $\theta_i$'s are zero, say $\theta_2 = \cdots = \theta_r = 0$, $\theta_1 = \sqrt\lambda$, and that this maximum risk $\rho_r(\lambda)$, as a function of $\lambda$, increases from a minimum of $2/r$ at $\lambda = 0$ to a maximum and then decreases and tends to 1 as $\lambda \to \infty$; see Figure 5.2.

[Figure 5.2. Maximum component risk $\rho_r(\lambda)$ of the ordinary James-Stein estimator, and the componentwise risk of $X$, the UMVU estimator, for $r = 4$.]

The values of $\max_\lambda \rho_r(\lambda)$ and the value $\lambda_r$ at which the maximum is attained, shown in Table 5.1, are given by Efron and Morris (1972a). The table suggests that shrinkage estimators will typically not be appropriate when the component problems concern different clients. No one wants his or her blood test subjected to the possibility of large errors in order to improve a laboratory's average performance.

Table 5.1. Maximum Component Risk

    r              |  3   |  5   |  10  |  20  |  30  |  ∞
    λ_r            | 2.49 | 2.85 | 3.62 | 4.80 | 5.75 |  
    ρ_r(λ_r)       | 1.24 | 1.71 | 2.93 | 5.40 | 7.89 | r/4

To get a feeling for the behavior of the James-Stein estimator (4.21) with $\mu = 0$ in a situation in which most of the $\theta_i$'s are at or near zero (representing the standard or normal situation of no effect) but a few relatively large $\theta_i$'s are present, consider the 20-component model $X_i \sim N(\theta_i, 1)$, $i = 1, \dots, 20$, where the vector $\theta = (\theta_1, \dots, \theta_{20})$ is taken to have one of three configurations:

(a) $\theta_1 = \cdots = \theta_{19} = 0$, $\theta_{20} = 2, 3, 4$,
(b) $\theta_1 = \cdots = \theta_{18} = 0$, $\theta_{19} = i$, $\theta_{20} = j$, $2 \le i \le j \le 4$,
(c) $\theta_1 = \cdots = \theta_{17} = 0$, $\theta_{18} = i$, $\theta_{19} = j$, $\theta_{20} = k$, $2 \le i \le j \le k \le 4$.

The resulting shrinkage factor, $1 - (r-2)/|x|^2$, by which the observation is multiplied to obtain the estimators $\delta_i$ of $\theta_i$, has expected value
\[ E\left[1 - \frac{r-2}{|X|^2}\right] = 1 - (r-2)\,E\!\left[\frac{1}{\chi^2_r(\lambda)}\right] = 1 - (r-2)\sum_{k=0}^\infty \frac{e^{-\lambda/2}(\lambda/2)^k}{(r + 2k - 2)\,k!} \tag{5.32} \]
where $\lambda = |\theta|^2$ (see Problem 5.23). Its values are given in Table 5.2.

Table 5.2. Expected Value of the Shrinkage Factor

    (a) θ's ≠ 0 |  2  |  3  |  4
        Factor  | .17 | .37 | .46
        λ       |  4  |  9  | 16

    (b) θ19θ20  | 22  | 23  | 33  | 24  | 34  | 44
        Factor  | .29 | .40 | .49 | .51 | .57 | .63
        λ       |  8  | 13  | 18  | 20  | 25  | 32

    (c) θ18θ19θ20 | 222 | 223 | 224 | 233 | 234 | 244 | 333 | 334 | 344 | 444
        Factor    | .38 | .47 | .56 | .54 | .61 | .66 | .59 | .64 | .69 | .78
        λ         | 12  | 17  | 24  | 22  | 29  | 36  | 27  | 34  | 41  | 48

To see the effect of the shrinkage explicitly, suppose, for example, that the observation $X_{20}$ corresponding to $\theta_{20} = 2$ turned out to be 2.71. The modified estimate ranges from $2.71 \times .17 = .46$ (when $\theta_1 = \cdots = \theta_{19} = 0$) to $2.71 \times .66 = 1.79$ (when $\theta_1 = \cdots = \theta_{17} = 0$, $\theta_{18} = \theta_{19} = 4$).

What is seen here can be summarized roughly as follows:
(i) If all the $\theta_i$'s are at or fairly close to zero, then the James-Stein estimator will reduce the $X$'s very substantially in absolute value and thereby typically will greatly improve the accuracy of the estimated values.
(ii) If there are some very large $\theta_i$'s or a substantial number of moderate ones, the factor by which the $X$'s are multiplied will not be very far from 1, and the modification will not have a great effect.
Neither of these situations causes much of a problem: In (ii), the modification presents an unnecessary but not particularly harmful complication; in (i), it is clearly very beneficial. The danger arises in the following intermediate situation.
(iii) Most of the $\theta_i$'s are close to zero, but there are a few moderately large $\theta_i$'s (of the order of two to four standard deviations, say). These represent the cases in which something is going on, about which we will usually want to know. However, in these cases, the estimated values are heavily shrunk toward the norm, with the resulting risk of their being found "innocent by association."
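Entries of Table 5.2 can be reproduced directly from the series in (5.32); the sketch below (illustrative; $r = 20$ and the configurations in (a)) truncates the series once the terms are negligible.

```python
# A sketch computing the expected shrinkage factor (5.32) for lambda = |theta|^2.
from math import exp, factorial

def shrink_factor(lam, r=20, kmax=100):
    s = sum(exp(-lam/2) * (lam/2)**k / ((r + 2*k - 2) * factorial(k))
            for k in range(kmax))
    return 1 - (r - 2) * s

for lam in [4, 9, 16]:             # configurations (a): theta_20 = 2, 3, 4
    print(lam, round(shrink_factor(lam), 2))   # ~ .17, .37, .46 as in the table
```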
If one is interested in minimizing the average risk (5.30) but is concerned about the possibility of large component risks, a compromise is possible along the lines of restricted Bayes estimation mentioned in Section 5.2. One can impose an upper bound on the maximum component risk, say 10% or 25% above the minimax risk of 1 (when $\sigma = 1$). Subject to this restriction, one can then try to minimize the average risk, for example, in the sense of obtaining a Bayes or empirical Bayes solution. Approximations to such an approach have been developed by Efron and Morris (1971, 1972a), Berger (1982a, 1988b), Bickel (1983, 1984), and Kempthorne (1988a, 1988b). See Example 6.7 for an illustration.

The results discussed in this and the preceding section for the simultaneous estimation of normal means have been extended, particularly to various members of the exponential family and to general location parameter families, with and without nuisance parameters. The next section contains a number of illustrations.

6 Extensions

The estimators of the previous section have all been constructed for the case of the estimation of $\theta$ based on observing $X \sim N_r(\theta, I)$. The applicability of shrinkage estimation, now often referred to as Stein estimation, goes far beyond this case. In this section, through examples, we will try to illustrate some of the wide-ranging applicability of the "Stein effect," that is, the ability to improve individual estimates by using ensemble information.

Also, the shrinkage estimators previously considered were designed to obtain the greatest risk improvement in a specified region of the parameter space. For example, the maximum risk improvement of (5.10) occurs at $\theta_1 = \theta_2 = \cdots = \theta_r = 0$, while that of (4.21) occurs at $\theta_1 = \mu_1, \theta_2 = \mu_2, \dots, \theta_r = \mu_r$. In the next three examples, we look at modifications of Stein estimators that shrink toward adaptively chosen targets, that is, targets selected by the data. By doing so, it is hoped that a maximal risk improvement will be realized. Although we only touch upon the topic of selecting a shrinkage target, the literature is vast. See Note 9.7 for some references.

The first two examples examine estimators that we have seen before, in the context of empirical Bayes analysis of variance and regression (Examples 4.7.7 and 4.7.8). These estimators shrink toward subspaces of the parameter space rather than specified points. Moreover, we can allow the data to help choose the specific shrinkage target. We now establish minimaxity of such estimators.

Example 6.1 Shrinking toward a common mean. In problems where it is thought that there is some similarity between components, a reasonable choice of shrinkage target may be the linear subspace where all the components are equal. This was illustrated in Example 4.7.7, where the estimator (4.7.28) shrunk the coordinates toward an estimated common mean value rather than a specified point. For the average squared error loss $L(\theta, \delta) = \frac1r|\theta - \delta|^2$, the estimator (4.7.28), with coordinates
\[ \delta^L_i(x) = \bar x + \left(1 - \frac{c(r-3)}{\sum_j(x_j - \bar x)^2}\right)(x_i - \bar x), \tag{6.1} \]
has risk given by
\[ R(\theta, \delta^L) = 1 + \frac{(r-3)^2}{r}\,E_\theta\!\left[\frac{c(c-2)}{\sum_j(X_j - \bar X)^2}\right]. \tag{6.2} \]
Hence, $\delta^L$ is minimax if $r \ge 4$ and $c \le 2$. The minimum risk is attained at $\theta$ values that satisfy $\sum(\theta_i - \bar\theta)^2 = 0$, that is, where $\theta_1 = \theta_2 = \cdots = \theta_r$. Moreover, the best value of $c$ is $c = 1$, which results in a minimum risk of $3/r$. This is greater than the minimum of $2/r$ (for the case of a known value of $\theta$) but is attained on a larger set. See Problems 6.1 and 6.2. ∥
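A minimal implementation sketch of (6.1) (with $c = 1$; the data are made up) shows the mechanics: each coordinate is pulled toward the grand mean.

```python
# A sketch of the common-mean shrinkage estimator (6.1); minimax for r >= 4.
import numpy as np

def shrink_to_mean(x, c=1.0):
    r, xbar = len(x), x.mean()
    resid = x - xbar
    return xbar + (1 - c*(r - 3)/(resid**2).sum()) * resid

x = np.array([3.1, 2.7, 3.5, 2.9, 3.0, 8.0])
print(shrink_to_mean(x))    # coordinates are pulled toward x.mean()
```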
Example 6.2 Shrinking toward a linear subspace. The estimator $\delta^L$ given by (6.1) shrinks toward the subspace of the parameter space defined by
\[ \mathcal L = \{\theta : \theta_1 = \theta_2 = \cdots = \theta_r\} = \left\{\theta : \tfrac1r J\theta = \theta\right\} \tag{6.3} \]
where $J$ is a matrix of 1's, $J = \mathbf 1\mathbf 1'$. Another useful submodel, which is a generalization of (6.3), is
\[ \theta_i = \alpha + \beta t_i \tag{6.4} \]
where the $t_i$'s are known but $\alpha$ and $\beta$ are unknown. This corresponds to the (sub)model of a linear trend in the means (see Example 4.7.8). If we define
\[ T = \begin{pmatrix} 1 & 1 & \cdots & 1 \\ t_1 & t_2 & \cdots & t_r \end{pmatrix}', \tag{6.5} \]
then the $\theta_i$'s satisfying (6.4) constitute the subspace
\[ \mathcal L = \{\theta : T^*\theta = \theta\}, \tag{6.6} \]
where $T^* = T(T'T)^{-1}T'$ is the matrix that projects any vector $\theta$ into the subspace. (Such projection matrices are symmetric and idempotent, that is, they satisfy $(T^*)^2 = T^*$.)

The models (6.3) and (6.6) suggest what the more general situation would look like when the target is a linear subspace defined by
\[ \mathcal L_K = \{\theta : K\theta = \theta, \ K \text{ idempotent of rank } s\}. \tag{6.7} \]
If we shrink toward the MLE of $\theta \in \mathcal L_K$, which is given by $\hat\theta^K = Kx$, the resulting Stein estimator is
\[ \delta^K(x) = \hat\theta^K + \left(1 - \frac{r - s - 2}{|x - \hat\theta^K|^2}\right)(x - \hat\theta^K) \tag{6.8} \]
and is minimax provided $r - s > 2$. (See Problem 6.3.) More general linear restrictions are possible: one can take $\mathcal L = \{\theta : H\theta = m\}$, where $H_{s\times r}$ and $m_{s\times 1}$ are specified (see Casella and Hwang 1987). ∥

Example 6.3 Combining biased and unbiased estimators. Green and Strawderman (1991) show how the Stein effect can be used to combine biased and unbiased estimators, and they apply their results to a problem in forestry.

An important attribute of a forest stand (a homogeneous group of trees) is the basal area per acre, $B$, defined as the sum of the cross-sectional areas 4.5 feet above the ground of all trees. Regression models exist that predict $\log B$ as a function of stand age, number of trees per acre, and the average height of the dominant trees in the stand. The average prediction, $Y$, from the regression is a biased estimator of $B$. Green and Strawderman investigated how to combine this estimator with $X$, the sample mean basal area from a small sample of trees, to obtain an improved estimator of $B$.

They formulated the problem in the following way. Suppose $X \sim N_r(\theta, \sigma^2 I)$ and $Y \sim N_r(\theta + \xi, \tau^2 I)$, independent, where $\sigma^2$ and $\tau^2$ are known, and the loss function is $L(\theta, \delta) = |\theta - \delta|^2/r\sigma^2$. Thus, $\xi$ is an unknown nuisance parameter. The estimator
\[ \delta^c(x, y) = y + \left(1 - \frac{c(r-2)\sigma^2}{|x - y|^2}\right)(x - y) \tag{6.9} \]
is a minimax estimator of $\theta$ if $0 \le c \le 2$, which follows from noting that
\[ R(\theta, \delta^c) = 1 - \sigma^2\frac{(r-2)^2}{r}\,E\!\left[\frac{c(2-c)}{|X - Y|^2}\right] \tag{6.10} \]
and that the minimax risk is 1. If $\xi = 0$ and, hence, $Y$ is also an unbiased estimator of $\theta$, then the optimal linear combined estimator
\[ \delta^{comb}(x, y) = \frac{\tau^2 x + \sigma^2 y}{\sigma^2 + \tau^2} \tag{6.11} \]
dominates $\delta^1(x, y)$ in risk. However, the risk of $\delta^{comb}$ becomes unbounded as $|\xi| \to \infty$, whereas that of $\delta^1$ is bounded by 1. (See Problems 6.5 and 6.6.) ∥
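Before taking up the case of unknown variance, here is a sketch of the subspace-shrinkage rule (6.8) for the linear-trend target (6.4)-(6.6); the combination rule (6.9) has the same structure, with $y$ in place of the projection. The design and data below are illustrative.

```python
# A sketch of (6.8): project x onto the trend subspace via T* = T(T'T)^{-1}T',
# then Stein-shrink the residual toward that projection.
import numpy as np

def shrink_to_subspace(x, T):
    K = T @ np.linalg.inv(T.T @ T) @ T.T          # projection matrix, rank s
    r, s = len(x), np.linalg.matrix_rank(K)
    theta_hat = K @ x                             # MLE within the subspace
    resid = x - theta_hat
    return theta_hat + (1 - (r - s - 2)/(resid**2).sum()) * resid

t = np.arange(8.0)
T = np.column_stack([np.ones(8), t])              # design (6.5): intercept + trend
x = 0.5 + 0.3*t + np.random.default_rng(2).normal(0, 1, 8)
print(shrink_to_subspace(x, T))                   # requires r - s > 2 (here 8 - 2 > 2)
```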
The next example looks at the important case of unknown variance.

Example 6.4 Unknown variance. The James-Stein estimator (4.21), which shrinks $X$ toward a given point $\mu$, was obtained under the assumption that $X \sim N(\theta, I)$. We shall now generalize this estimator to the situations, first, that the common variance of the $X_i$ has a known value $\sigma^2$ and then that $\sigma^2$ is unknown.

In the first case, the problem can be reduced to that with unit variance by considering the variables $X_i/\sigma$, estimating the means $\theta_i/\sigma$, and then multiplying the estimator of $\theta_i/\sigma$ by $\sigma$ to obtain an estimator of $\theta_i$. This argument leads to replacing (4.21) by
\[ \delta_i = \mu_i + \left(1 - \frac{r-2}{|x - \mu|^2/\sigma^2}\right)(x_i - \mu_i), \tag{6.12} \]
where $|x - \mu|^2 = \sum(x_i - \mu_i)^2$, with risk function (see Problem 6.7)
\[ R(\theta, \delta) = \sigma^2\left[1 - \frac{r-2}{r}\,E_\theta\!\left(\frac{r-2}{|X - \mu|^2/\sigma^2}\right)\right]. \tag{6.13} \]
Suppose now that $\sigma^2$ is unknown. We shall then suppose that there exists a random variable $S^2$, independent of $X$ and such that $S^2/\sigma^2 \sim \chi^2_\nu$, and we shall in (6.12) replace $\sigma^2$ by $\hat\sigma^2 = kS^2$, where $k$ is a positive constant. The estimator is then modified to
\[ \delta_i = \mu_i + \left(1 - \frac{r-2}{|x - \mu|^2/\hat\sigma^2}\right)(x_i - \mu_i) = \mu_i + \left(1 - \frac{r-2}{|x - \mu|^2/\sigma^2}\cdot\frac{\hat\sigma^2}{\sigma^2}\right)(x_i - \mu_i). \tag{6.14} \]
The conditional risk of $\delta$ given $\hat\sigma$ is given by (5.4) with $|x - \mu|^2$ replaced by $|x - \mu|^2/\sigma^2$ and $c = \hat\sigma^2/\sigma^2$. Because of the independence of $S^2$ and $|X - \mu|^2$, we thus have
\[ R(\theta, \delta) = 1 - \frac{(r-2)^2}{r}\,E_\theta\!\left[\frac{\sigma^2}{|X - \mu|^2}\right]E\!\left[2k\frac{S^2}{\sigma^2} - k^2\!\left(\frac{S^2}{\sigma^2}\right)^{\!2}\right]. \tag{6.15} \]
Now, $E(S^2/\sigma^2) = \nu$ and $E(S^2/\sigma^2)^2 = \nu(\nu + 2)$, so that the second expectation is
\[ E\!\left[2k\frac{S^2}{\sigma^2} - k^2\!\left(\frac{S^2}{\sigma^2}\right)^{\!2}\right] = 2k\nu - k^2\nu(\nu + 2). \tag{6.16} \]
This is positive (making the estimator minimax) if $k \le 2/(\nu + 2)$, and (6.16) is maximized at $k = 1/(\nu + 2)$, where its value is $\nu/(\nu + 2)$. The best choice of $k$ in $\hat\sigma^2$ thus leads to using the estimator
\[ \hat\sigma^2 = S^2/(\nu + 2) \tag{6.17} \]
and the risk of the resulting estimator is
\[ R(\theta, \delta) = 1 - \frac{\nu}{\nu + 2}\,\frac{(r-2)^2}{r}\,E_\theta\!\left[\frac{\sigma^2}{|X - \mu|^2}\right]. \tag{6.18} \]
The improvement in risk over $X$ is thus reduced from that of (4.25) by a factor of $\nu/(\nu + 2)$. (See Problems 6.7–6.11.) ∥
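A sketch of (6.14) with the optimal choice (6.17); it assumes, as in the example, an independent $S^2$ with $S^2/\sigma^2 \sim \chi^2_\nu$ (for instance, a residual sum of squares). The numbers are illustrative.

```python
# A sketch of the unknown-variance James-Stein rule (6.14), with the variance
# estimate sigma^2_hat = S^2/(nu+2) from (6.17).
import numpy as np

def js_unknown_variance(x, S2, nu, mu=None):
    r = len(x)
    mu = np.zeros(r) if mu is None else mu
    sigma2_hat = S2 / (nu + 2)                       # (6.17)
    resid = x - mu
    return mu + (1 - (r - 2) * sigma2_hat / (resid**2).sum()) * resid

rng = np.random.default_rng(3)
x = rng.normal(1.0, 2.0, size=10)                    # sigma = 2, unknown to the rule
S2 = 2.0**2 * rng.chisquare(df=15)                   # S^2 with nu = 15
print(js_unknown_variance(x, S2, nu=15))
```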
For distributions other than the normal, Strawderman (1974) determined minimax Stein estimators for the following situation.

Example 6.5 Mixture of normals. Suppose that, given $\sigma$, the vector $X$ is distributed as $N(\theta, \sigma^2 I)$, and that $\sigma$ is a random variable with distribution $G$, so that the density of $X$ is
\[ f(|x - \theta|) = \frac{1}{(2\pi)^{r/2}}\int_0^\infty e^{-(1/2\sigma^2)|x-\theta|^2}\sigma^{-r}\,dG(\sigma), \tag{6.19} \]
a scale mixture of normals, including, in particular, the multivariate Student's t-distribution. Since $E(|X - \theta|^2 \mid \sigma) = r\sigma^2$, it follows that with loss function $L(\theta, \delta) = |\theta - \delta|^2/r$, the risk of the estimator $X$ is $E(\sigma^2)$. On the other hand, the risk of the estimator
\[ \delta(x) = \left(1 - \frac{c}{|x|^2}\right)x \tag{6.20} \]
is given by
\[ E_\theta L[\theta, \delta(X)] = E_\sigma E_{\theta\mid\sigma}L[\theta, \delta(X)]. \tag{6.21} \]
Calculations similar to those in the proofs of Theorems 4.7.2 and 5.1 show that
\[ E_{\theta\mid\sigma}L[\theta, \delta(X)] = \sigma^2 - \frac1r\left[2c(r-2) - \frac{c^2}{\sigma^2}\right]E_{\theta\mid\sigma}\!\left[\frac{\sigma^2}{|X|^2}\right] \]
and, hence,
\[ R(\theta, \delta) = E_\sigma\sigma^2 - \frac1r\,E_\sigma\!\left\{\left[2c(r-2) - \frac{c^2}{\sigma^2}\right]E_{\theta\mid\sigma}\!\left[\frac{\sigma^2}{|X|^2}\right]\right\}. \tag{6.22} \]
An upper bound on the risk can be obtained from the following lemma, whose proof is left to Problem 6.13.

Lemma 6.6 Let $Y$ be a random variable, and $g(y)$ and $h(y)$ any functions for which $E[g(Y)]$, $E[h(Y)]$, and $E[g(Y)h(Y)]$ exist. Then:
(a) If one of the functions $g(\cdot)$ and $h(\cdot)$ is nonincreasing and the other is nondecreasing,
\[ E[g(Y)h(Y)] \le E[g(Y)]\,E[h(Y)]. \]
(b) If both functions are either nondecreasing or nonincreasing,
\[ E[g(Y)h(Y)] \ge E[g(Y)]\,E[h(Y)]. \]

Returning to the risk function (6.22), we see that $[2c(r-2) - c^2/\sigma^2]$ is an increasing function of $\sigma^2$, and $E_{\theta\mid\sigma}(\sigma^2/|X|^2)$ is also an increasing function of $\sigma^2$. (This latter statement follows from the fact that, given $\sigma^2$, $|X|^2/\sigma^2$ has a noncentral $\chi^2$-distribution with noncentrality parameter $|\theta|^2/\sigma^2$, and that, therefore, as was pointed out following (4.25), the expectation is increasing in $\sigma^2$.) Therefore, by Lemma 6.6,
\[ E_\sigma\!\left\{\left[2c(r-2) - \frac{c^2}{\sigma^2}\right]E_{\theta\mid\sigma}\!\left[\frac{\sigma^2}{|X|^2}\right]\right\} \ge E_\sigma\!\left[2c(r-2) - \frac{c^2}{\sigma^2}\right]E_\sigma E_{\theta\mid\sigma}\!\left[\frac{\sigma^2}{|X|^2}\right]. \]
Hence, $\delta(x)$ will dominate $x$ if $2c(r-2) - c^2 E_\sigma(1/\sigma^2) \ge 0$, or
\[ 0 < c < 2(r-2)\big/E_\sigma(1/\sigma^2) = 2\big/E_0|X|^{-2}, \]
where $E_0|X|^{-2}$ is the expectation when $\theta = 0$ (see Problem 6.12). If $f(|x - \theta|)$ is the normal density $N(\theta, I)$, then $E_0|X|^{-2} = (r-2)^{-1}$, and we are back to a familiar condition. The interesting fact is that, for a wide class of scale mixtures of normals, $E_0|X|^{-2} \le (r-2)^{-1}$, or equivalently $E_\sigma(1/\sigma^2) \le 1$. This holds, for example, if $1/\sigma^2 \sim \chi^2_\nu/\nu$, so that $f(|x - \theta|)$ is multivariate Student's t. This implies a type of robustness of the estimator (6.20); that is, for $0 \le c \le 2(r-2)$, $\delta(X)$ dominates $X$ under a multivariate t-distribution and hence retains its minimax property (see Problem 6.12). ∥

Bayes estimators minimize the average risk under the prior, but the maximum of their risk functions can be large and even infinite. On the other hand, minimax estimators often have relatively large Bayes risk under many priors. The following example, due to Berger (1982a), shows how it is sometimes possible to construct estimators having good Bayes risk properties (with respect to a given prior), while at the same time being minimax. The resulting estimator is a compromise between a Bayes estimator and a Stein estimator.

Example 6.7 Bayesian robustness. For $X \sim N_r(\theta, \sigma^2 I)$ and $\theta \sim \pi = N_r(0, \tau^2 I)$, the Bayes estimator against squared error loss is
\[ \delta^\pi(x) = \frac{\tau^2}{\sigma^2 + \tau^2}\,x \tag{6.23} \]
with Bayes risk $r(\pi, \delta^\pi) = r\sigma^2\tau^2/(\sigma^2 + \tau^2)$. However, $\delta^\pi$ is not minimax and, in fact, has unbounded risk (Problem 4.3.12). The Stein estimator
\[ \delta^c(x) = \left(1 - \frac{c(r-2)\sigma^2}{|x|^2}\right)x \tag{6.24} \]
is minimax if $0 \le c \le 2$, but its Bayes risk
\[ r(\pi, \delta^c) = r(\pi, \delta^\pi) + \frac{\sigma^4}{\sigma^2 + \tau^2}\,[r + c(c-2)(r-2)], \]
at the best value $c = 1$, is $r(\pi, \delta') = r(\pi, \delta^\pi) + 2\sigma^4/(\sigma^2 + \tau^2)$.

To construct a minimax estimator with small Bayes risk, consider the compromise estimator
\[ \delta^R(x) = \begin{cases} \delta^\pi(x) & \text{if } |x|^2 < c(r-2)(\sigma^2 + \tau^2) \\ \delta^c(x) & \text{if } |x|^2 \ge c(r-2)(\sigma^2 + \tau^2). \end{cases} \tag{6.25} \]
This estimator is minimax if $0 \le c \le 2$ (Problem 6.14). If $|x|^2 > c(r-2)(\sigma^2 + \tau^2)$, the data do not support the prior specification and we, therefore, put $\delta^R = \delta^c$; if $|x|^2 < c(r-2)(\sigma^2 + \tau^2)$, we tend to believe that the data support the prior specification, since $\frac{|x|^2}{\sigma^2 + \tau^2} \sim \chi^2_r$, and we are, therefore, willing to gamble on $\pi$ and put $\delta^R = \delta^\pi$. The Bayes risk of $\delta^R$ is
\[ r(\pi, \delta^R) = E|\theta - \delta^\pi(X)|^2 I\!\left[|X|^2 < c(r-2)(\sigma^2 + \tau^2)\right] + E|\theta - \delta^c(X)|^2 I\!\left[|X|^2 \ge c(r-2)(\sigma^2 + \tau^2)\right], \tag{6.26} \]
where the expectation is over the joint distribution of $X$ and $\theta$. Adding $\pm\delta^\pi$ to the second term in (6.26) yields
\[ r(\pi, \delta^R) = E|\theta - \delta^\pi(X)|^2 + E|\delta^\pi(X) - \delta^c(X)|^2 I\!\left[|X|^2 \ge c(r-2)(\sigma^2 + \tau^2)\right] = r(\pi, \delta^\pi) + E|\delta^\pi(X) - \delta^c(X)|^2 I\!\left[|X|^2 \ge c(r-2)(\sigma^2 + \tau^2)\right]. \tag{6.27} \]
Here, we have used the fact that, marginally, $|X|^2/(\sigma^2 + \tau^2) \sim \chi^2_r$. We can write (6.27) as (see Problem 6.14)
\[ r(\pi, \delta^R) = r(\pi, \delta^\pi) + \frac{1}{r-2}\,\frac{\sigma^4}{\sigma^2 + \tau^2}\,E[Y - c(r-2)]^2 I[Y > c(r-2)], \tag{6.28} \]
where $Y \sim \chi^2_{r-2}$. An upper bound on (6.28) is obtained by dropping the indicator function, which gives
\[ r(\pi, \delta^R) \le r(\pi, \delta^\pi) + \frac{1}{r-2}\,\frac{\sigma^4}{\sigma^2 + \tau^2}\,E[Y - c(r-2)]^2 = r(\pi, \delta^\pi) + [r + c(c-2)(r-2)]\frac{\sigma^4}{\sigma^2 + \tau^2} = r(\pi, \delta^c). \tag{6.29} \]
This shows that $\delta^R$ has smaller Bayes risk than $\delta^c$ while remaining minimax. Since $E(Y - a)^2 I(Y > a)$ is a decreasing function of $a$ (Problem 6.14), the value $c = 2$ minimizes (6.28) and therefore, among the estimators (6.25), determines the minimax estimator with minimum Bayes risk. However, for $c = 2$, $\delta^c$ has the same (constant) risk as $X$, so we are trading optimal Bayes risk for minimal frequentist risk improvement over $X$, the constant risk minimax estimator. Thus, it may be better to choose $c = 1$, which gives optimal frequentist risk performance and still provides good Bayes risk reduction over $\delta'$. Table 6.1 shows the relative Bayes risk
\[ r^* = \frac{r(\pi, \delta^R) - r(\pi, \delta^\pi)}{r(\pi, \delta^c) - r(\pi, \delta^\pi)} \]
for $c = 1$.

Table 6.1. Values of $r^*$, the Relative Bayes Risk Savings of $\delta^R$ over $\delta^c$, with $c = 1$

    r   |  3   |  4   |  5   |  7   |  10  |  20
    r*  | .801 | .736 | .699 | .660 | .629 | .587

For other approaches to this "compromise" decision problem, see Bickel (1983, 1984), Kempthorne (1988a, 1988b), and DasGupta and Rubin (1988). ∥
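A sketch of the compromise rule (6.25), with $\sigma^2 = 1$, $\tau^2 = 4$, and $c = 1$ assumed for illustration:

```python
# A sketch of Berger's compromise rule (6.25): use the Bayes estimator when |x|^2
# is consistent with the prior (marginally |X|^2/(sigma^2+tau^2) ~ chi^2_r),
# otherwise fall back on the minimax Stein estimator.
import numpy as np

def delta_R(x, sigma2=1.0, tau2=4.0, c=1.0):
    r, s = len(x), (x**2).sum()
    if s < c * (r - 2) * (sigma2 + tau2):
        return tau2 / (sigma2 + tau2) * x            # Bayes rule (6.23)
    return (1 - c * (r - 2) * sigma2 / s) * x        # Stein rule (6.24)

x = np.array([0.2, -0.4, 0.1, 0.3, -0.2, 0.5])
print(delta_R(x))    # small |x|^2: the Bayes branch is used
```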
Thus far, we have considered only continuous distributions, but the Stein effect continues to hold also in discrete families. Minimax proofs in discrete families have developed along two different lines. The first method, due to Clevenson and Zidek (1975), is illustrated by the following result.

Theorem 6.8 Let $X_i \sim$ Poisson$(\lambda_i)$, $i = 1, \dots, r$, $r \ge 2$, be independent, and let the loss be given by
\[ L(\lambda, \delta) = \sum_{i=1}^r (\lambda_i - \delta_i)^2/\lambda_i. \tag{6.30} \]
The estimator
\[ \delta^{CZ}(x) = \left(1 - \frac{c(\sum x_i)}{\sum x_i + b}\right)x \tag{6.31} \]
is minimax if (i) $c(\cdot)$ is nondecreasing, (ii) $0 \le c(\cdot) \le 2(r-1)$, (iii) $b \ge r - 1$.

Recall (Corollary 2.20) that the usual minimax estimator here is $X$, with constant risk $r$. Note also that, in contrast to the normal-squared-error-loss case, by (ii) there exist positive values of $c$ for which $\delta^{CZ}$ is minimax provided $r \ge 2$.

Proof. If $Z = \sum X_i$, the risk of $\delta^{CZ}$ can be written as
\[ R(\lambda, \delta^{CZ}) = E\sum_{i=1}^r \frac{1}{\lambda_i}\left[\lambda_i - X_i + \frac{c(Z)X_i}{Z + b}\right]^2 = r + 2E\!\left[\frac{c(Z)}{Z+b}\sum_{i=1}^r \frac{X_i(\lambda_i - X_i)}{\lambda_i}\right] + E\!\left[\frac{c^2(Z)}{(Z+b)^2}\sum_{i=1}^r \frac{X_i^2}{\lambda_i}\right]. \tag{6.32} \]
Let us first evaluate the expectations conditional on $Z$. The distribution of $X_i \mid Z$ is multinomial with $E(X_i \mid Z) = Z\lambda_i/\Lambda$ and $\operatorname{var}(X_i \mid Z) = Z(\lambda_i/\Lambda)(1 - \lambda_i/\Lambda)$, where $\Lambda = \sum\lambda_i$. Hence,
\[ E\!\left[\sum_{i=1}^r \frac{X_i(\lambda_i - X_i)}{\lambda_i}\,\middle|\, Z\right] = \frac{Z}{\Lambda}\,[\Lambda - (Z + r - 1)], \qquad E\!\left[\sum_{i=1}^r \frac{X_i^2}{\lambda_i}\,\middle|\, Z\right] = \frac{Z}{\Lambda}\,(Z + r - 1), \tag{6.33} \]
and, so, after some rearrangement of terms,
\[ R(\lambda, \delta^{CZ}) = r + E\left\{\frac{c(Z)Z}{(Z+b)\Lambda}\left[2(\Lambda - Z) - 2(r-1) + c(Z)\frac{Z + r - 1}{Z + b}\right]\right\}. \tag{6.34} \]
Now, if $b \ge r - 1$, so that $z + r - 1 \le z + b$, and $c(z) \le 2(r-1)$, we have
\[ -2(r-1) + c(z)\frac{z + r - 1}{z + b} \le -2(r-1) + c(z) \le 0, \]
so the risk of $\delta^{CZ}$ is bounded above by
\[ R(\lambda, \delta^{CZ}) \le r + 2E\!\left[\frac{c(Z)Z}{(Z+b)\Lambda}(\Lambda - Z)\right]. \]
But this last expectation is the product of an increasing and a decreasing function of $Z$; hence, by Lemma 6.6,
\[ E\!\left[\frac{c(Z)Z}{(Z+b)\Lambda}(\Lambda - Z)\right] \le E\!\left[\frac{c(Z)Z}{(Z+b)\Lambda}\right]E(\Lambda - Z) = 0, \tag{6.35} \]
since $Z \sim$ Poisson$(\Lambda)$. Hence, $R(\lambda, \delta^{CZ}) \le r$, and $\delta^{CZ}$ is minimax. □
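A simulation sketch of Theorem 6.8 (the $\lambda$'s are illustrative; $c(z) \equiv r - 1$ and $b = r - 1$, which satisfy (i)-(iii)): the simulated risk under the loss (6.30) stays below the minimax value $r$.

```python
# A sketch of the Clevenson-Zidek estimator (6.31) and its normalized risk.
import numpy as np

rng = np.random.default_rng(4)
lam = np.array([0.5, 1.0, 2.0, 0.2, 3.0])
r, n = len(lam), 100_000

X = rng.poisson(lam, size=(n, r)).astype(float)
Z = X.sum(axis=1)
cz = (1 - (r - 1) / (Z + r - 1))[:, None] * X        # c(z) = b = r - 1

risk = ((lam - cz)**2 / lam).sum(axis=1).mean()
print(risk, r)     # simulated risk < r
```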
If we recall Example 4.6.6 [in particular, Equation (4.6.29)], we see a similarity between $\delta^{CZ}$ and the hierarchical Bayes estimators derived there. It is interesting to note that $\delta^{CZ}$ is also a Bayes estimator (Clevenson and Zidek 1975; see Problem 6.15).

The above method of proof, which relies on being able to evaluate the conditional distribution of $X_i \mid \sum X_i$ and the marginal distribution of $\sum X_i$, works for other discrete families, in particular the negative binomial and the binomial (where $n$ is the parameter to be estimated). (See Problem 6.16.) However, there exists a more powerful method (similar to that of Stein's lemma) which is based on the following lemma, due to Hudson (1978) and Hwang (1982a). The proof is left to Problem 6.17.

Lemma 6.9 Let $X_i$, $i = 1, \dots, r$, be independent with probabilities
\[ p_i(x \mid \theta_i) = c_i(\theta_i)h_i(x)\theta_i^x, \quad x = 0, 1, \dots, \tag{6.36} \]
that is, $p_i(x \mid \theta_i)$ is in the exponential family. Then, for any real-valued function $g(x)$ with $E_\theta|g(X)| < \infty$, and any number $m$ for which $g(x) = 0$ when $x_i < m$,
\[ E_\theta\!\left[\theta_i^m g(X)\right] = E_\theta\!\left[g(X - me_i)\frac{h_i(X_i - m)}{h_i(X_i)}\right] \tag{6.37} \]
where $e_i$ is the unit vector with $i$th coordinate equal to 1 and the rest equal to 0.

The principal application of Lemma 6.9 is to find an unbiased estimator of the risk of estimators of the form $X + g(X)$, analogous to that of Corollary 4.7.2.

Theorem 6.10 Let $X_1, \dots, X_r$ be independently distributed according to (6.36), and let $\delta^0(x) = \{h_i(x_i - 1)/h_i(x_i)\}$ [the estimator whose $i$th coordinate is $h_i(x_i - 1)/h_i(x_i)$] be the UMVU estimator of $\theta$. For the loss function
\[ L_m(\theta, \delta) = \sum_{i=1}^r \theta_i^{m_i}(\theta_i - \delta_i)^2, \tag{6.38} \]
where $m = (m_1, \dots, m_r)$ are known numbers, the risk of the estimator $\delta(x) = \delta^0(x) + g(x)$ is given by
\[ R(\theta, \delta) = R(\theta, \delta^0) + E_\theta D(X) \tag{6.39} \]
with
\[ D(x) = \sum_{i=1}^r\left\{2\,\frac{h_i(x_i - m_i - 1)}{h_i(x_i)}\left[g_i(x - m_ie_i - e_i) - g_i(x - m_ie_i)\right] + \frac{h_i(x_i - m_i)}{h_i(x_i)}\,g_i^2(x - m_ie_i)\right\}. \tag{6.40} \]

Proof. Write
\[ R(\theta, \delta) = R(\theta, \delta^0) - 2E_\theta\sum_{i=1}^r \theta_i^{m_i}g_i(X)\left[\theta_i - \frac{h_i(X_i - 1)}{h_i(X_i)}\right] + E_\theta\sum_{i=1}^r \theta_i^{m_i}g_i^2(X) \]
and apply Lemma 6.9 (see Problem 6.17). □

Hwang (1982a) established general conditions on $g(x)$ for which $D(x) \le 0$, leading to improved estimators of $\theta$ (see also Ghosh et al. 1983, Tsui 1979a, 1979b, and Hwang 1982b). We will only look at some examples.

Example 6.11 Improved estimation for independent Poissons. The Clevenson-Zidek estimator (6.31) dominates $X$ (and is minimax) for the loss $L_{-1}(\theta, \delta)$ of (6.38); however, Theorem 6.8 does not cover squared error loss, $L_0(\theta, \delta)$. For this loss, if $X_i \sim$ Poisson$(\lambda_i)$, independent, the risk of an estimator $\delta(x) = x + g(x)$ is given by (6.39) with $\delta^0 = x$ and
\[ D(x) = \sum_{i=1}^r\left\{2x_i[g_i(x) - g_i(x - e_i)] + g_i^2(x)\right\}. \tag{6.41} \]
The estimator with
\[ g_i(x) = \frac{-c(x)\,k(x_i)}{\sum_{j=1}^r k(x_j)k(x_j + 1)}, \qquad k(x) = \sum_{l=1}^x \frac1l, \tag{6.42} \]
and $c(x)$ nondecreasing in each coordinate with $0 \le c(x) \le 2[\#(x_i\text{'s} > 1) - 2]$ satisfies $D(x) \le 0$ and hence dominates $x$ under $L_0$. (The notation $\#(a_i\text{'s} > b)$ denotes the number of $a_i$'s that are greater than $b$.)

For the loss function $L_{-1}(\theta, \delta)$, the situation is somewhat easier, and the estimator $x + g(x)$, with
\[ g_i(x) = \frac{-c(x - e_i)\,x_i}{\sum_{j=1}^r x_j}, \tag{6.43} \]
where $c(\cdot)$ is nondecreasing with $0 \le c(\cdot) \le 2(r - 1)$, will satisfy $D(x) \le 0$ and, hence, is minimax for $L_{-1}$. Note that (6.43) includes the Clevenson-Zidek estimator as a special case. (See Problem 6.18.) ∥
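The identity (6.37) is easy to check numerically in the Poisson case, where $h_i(x) = 1/x!$ and $h_i(x-1)/h_i(x) = x$; the sketch below (with $m = 1$ and an arbitrary bounded $g$, both illustrative) compares the two sides by simulation.

```python
# A numerical sketch of the Hudson-Hwang identity (6.37) for a single Poisson
# coordinate: with m = 1 it reads E[theta * g(X)] = E[X * g(X - 1)]
# (g of a negative argument is multiplied by X = 0, so it never contributes).
import numpy as np

rng = np.random.default_rng(5)
theta, n = 1.7, 1_000_000
X = rng.poisson(theta, size=n)

g = lambda x: 1.0 / (1.0 + x.astype(float)**2)          # any bounded g
print(theta * g(X).mean(), (X * g(X - 1)).mean())       # the two sides agree
```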
As might be expected, these improved estimators, which shrink toward 0, perform best and give the greatest risk improvement when the $\theta_i$'s are close to zero and, more generally, when they are close together. Numerical studies (Clevenson and Zidek 1975, Tsui 1979a, 1979b, Hudson and Tsui 1981) quantify this improvement, which can be substantial. Other estimators, which shrink toward other targets in the parameter space, can optimize the region of greatest risk reduction (see, for example, Ghosh et al. 1983, Hudson 1985).

Just as the minimaxity of Stein estimators carried over from the normal distribution to mixtures of normals, minimaxity carries over from the Poisson to mixtures of Poissons, for example, the negative binomial distribution (see Example 4.6.6).

Example 6.12 Improved negative binomial estimation. For $X_1, \dots, X_r$ independent negative binomial random variables with distribution
\[ p_i(x \mid \theta_i) = \binom{t_i + x - 1}{x}\theta_i^x(1 - \theta_i)^{t_i}, \quad x = 0, 1, \dots, \tag{6.44} \]
the UMVU estimator of $\theta_i$ is $\delta^0_i(x_i) = x_i/(x_i + t_i - 1)$ (where $\delta^0_i(0) = 0$ for $t_i = 1$). Using Theorem 6.10, this estimator can be improved. For example, for the loss $L_{-1}(\theta, \delta)$ of (6.38), the estimator $\delta^0(x) + g(x)$, with
\[ g_i(x) = \frac{-c(x - e_i)\,x_i}{\sum_{j=1}^r (x_j^2 + x_j)} \tag{6.45} \]
and $c(\cdot)$ nondecreasing with $0 \le c \le 2(r-2)/\min_i\{t_i\}$, satisfies $D(x) \le 0$ and, hence, has uniformly smaller risk than $\delta^0(x)$. Similar results can be obtained for other loss functions (see Problem 6.19). Surprisingly, however, similar domination results do not hold for the MLE $\hat\theta_i = x_i/(x_i + t_i)$. Chow (1990) has shown that the MLE is admissible in all dimensions (see also Example 7.14). ∥

Finally, we turn to a situation where the Stein effect fails to yield improved estimators.

Example 6.13 Multivariate binomial. For $X_i \sim b(\theta_i, n_i)$, $i = 1, \dots, r$, independent, that is, with distribution
\[ p_i(x \mid \theta_i) = \binom{n_i}{x}\theta_i^x(1 - \theta_i)^{n_i - x}, \quad x = 0, 1, \dots, n_i, \tag{6.46} \]
it seems reasonable to expect that estimators of the form $x + g(x)$ exist that dominate the UMVU estimator $x$. This expectation is partially based on the fact that (6.46) is a discrete exponential family. However, Johnson (1971) showed that such estimators do not exist in the binomial problem for squared error loss (see Example 7.23 and Problem 7.28).

Theorem 6.14 If $k_i(\theta_i)$, $i = 1, \dots, r$, are continuous functions and $\delta_i(x_i)$ is an admissible estimator of $k_i(\theta_i)$ under squared error loss, then $(\delta_1(x_1), \dots, \delta_r(x_r))$ is an admissible estimator of $(k_1(\theta_1), \dots, k_r(\theta_r))$ under sum-of-squared-error loss.

Thus, there is no "Stein effect" in the binomial problem. In particular, as $X_i$ is an admissible estimator of $\theta_i$ under squared error loss (Example 2.16), $X$ is an admissible estimator of $\theta$. ∥

It turns out that the absence of the Stein effect is not a property of the binomial distribution, but rather a result of the finiteness of the sample space (Gutmann 1982a; see also Brown 1981). See Note 9.7 for further discussion.

7 Admissibility and Complete Classes

In Section 1.7, we defined the admissibility of an estimator; the definition can be formally stated as follows.

Definition 7.1 An estimator $\delta = \delta(X)$ of $\theta$ is admissible [with respect to the loss function $L(\theta, \delta)$] if there exists no estimator $\delta'$ that satisfies
\[ \text{(i) } R(\theta, \delta') \le R(\theta, \delta) \text{ for all } \theta, \qquad \text{(ii) } R(\theta, \delta') < R(\theta, \delta) \text{ for some } \theta, \tag{7.1} \]
where $R(\theta, \delta) = E_\theta L(\theta, \delta)$. If such an estimator $\delta'$ exists, then $\delta$ is inadmissible. When a pair of estimators $\delta$ and $\delta'$ satisfy (7.1), $\delta'$ is said to dominate $\delta$.

Although admissibility is a desirable property, it is a very weak requirement. This is illustrated by Example 2.23, where an admissible estimator was completely unreasonable since it used no information from the relevant distribution. Here is another example (from Makani 1977, who credits it to L.D. Brown).

Example 7.2 Unreasonable admissible estimator. Let $X_1$ and $X_2$ be independent random variables, $X_i$ distributed as $N(\theta_i, 1)$, and consider the estimation of $\theta_1$ with loss function $L((\theta_1, \theta_2), \delta) = (\theta_1 - \delta)^2$. Then, $\delta = \operatorname{sign}(X_2)$ is an admissible estimator of $\theta_1$, although its distribution does not depend on $\theta_1$. The result is established by showing that $\delta$ cannot be simultaneously beaten at $(\theta_1, \theta_2) = (1, \theta_2)$ and $(-1, \theta_2)$. (See Problem 7.1.) ∥

Conversely, there exist inadmissible decision rules that perform quite well.

Example 7.3 The positive-part Stein estimator. For $X \sim N_r(\theta, I)$, the positive-part Stein estimator
\[ \delta^+(x) = \left(1 - \frac{r-2}{|x|^2}\right)^{\!+} x \tag{7.2} \]
is a good estimator of $\theta$ under squared error loss, being both difficult to improve upon and difficult to dominate. However, as was pointed out by Baranchik (1964), it is not admissible.
(This follows from Theorem 7.17, as $\delta^+$ is not smooth enough to be a generalized Bayes estimator.) Thus, there exists an estimator that uniformly dominates it. How much better can such a dominating estimator be? Efron and Morris (1973a, Section 5) show that $\delta^+$ is "close" to a Bayes rule (Problem 7.2). Brown (1988; see also Moore and Brook 1978), writing
\[ R(\theta, \delta^+) = E_\theta\!\left[\frac1r\sum_{i=1}^r(\theta_i - \delta^+_i(X))^2\right] = E_\theta D_{\delta^+}(X), \tag{7.3} \]
where
\[ D_{\delta^+}(x) = 1 + \frac{m^2(x)|x|^2}{r} - \frac2r\left\{(r-2)m(x) + 2I[m(x) = 1]\right\} \]
with $m(x) = \min\{1, (r-2)/|x|^2\}$ (see Corollary 4.7.2), proves that no estimator $\delta$ exists for which $D_\delta(x) \le D_{\delta^+}(x)$ for all $x$. These observations imply that the inadmissible $\delta^+$ behaves similarly to a Bayes rule and has a risk that is close to that of an admissible estimator. ∥
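The quantity $D_{\delta^+}$ above is an unbiased estimate of the risk, which can be checked by simulation; the sketch below (illustrative $\theta$, $r = 6$) compares the average of $D_{\delta^+}(X)$ with the simulated risk.

```python
# A simulation sketch of the unbiased risk estimate in (7.3) for the
# positive-part estimator delta+ = (1 - m(x)) x, m(x) = min{1, (r-2)/|x|^2}.
import numpy as np

rng = np.random.default_rng(6)
r, n = 6, 400_000
theta = np.array([0.0, 0.0, 0.0, 1.0, 1.0, 2.0])

X = rng.normal(theta, 1.0, size=(n, r))
s = (X**2).sum(axis=1)
m = np.minimum(1.0, (r - 2)/s)
delta = (1 - m)[:, None] * X

risk = ((delta - theta)**2).sum(axis=1).mean() / r
D = 1 + m**2 * s / r - (2.0/r) * ((r - 2)*m + 2*(m == 1.0))
print(risk, D.mean())    # agree up to Monte Carlo error
```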
However, since admissibility generally is a desirable property, it is of interest to determine the totality of admissible estimators.

Definition 7.4 A class $C$ of estimators is complete if for any $\delta$ not in $C$ there exists an estimator $\delta'$ in $C$ such that (7.1) holds; $C$ is essentially complete if for any $\delta$ not in $C$ there exists an estimator $\delta'$ in $C$ such that (7.1)(i) holds.

It follows from this definition that any estimator outside a complete class is inadmissible. If $C$ is essentially complete, an estimator $\delta$ outside of $C$ may be admissible, but there will then exist an estimator $\delta'$ in $C$ with the same risk function. It is therefore reasonable, in the search for an optimal estimator, to restrict attention to a complete or essentially complete class. The following result provides two examples of such classes.

Lemma 7.5 (i) If $C$ is the class of all (including randomized) estimators based on a sufficient statistic, then $C$ is essentially complete.
(ii) If the loss function $L(\theta, d)$ is convex in $d$, then the class of nonrandomized estimators is complete.

Proof. These results are immediate consequences of Theorem 1.6.1 and Corollary 1.7.9. □

Although a complete class contains all admissible estimators, it may also contain many inadmissible ones. (This is, for example, the case for the two complete classes of Lemma 7.5.) A complete class is most useful if it is as small as possible.

Definition 7.6 A complete class $C$ of estimators is minimal complete if no proper subset of $C$ is complete.

Lemma 7.7 If a minimal complete class $C$ exists, then it is exactly the class of all admissible estimators.

Proof. It is clear that $C$ contains all admissible rules, so we only need to prove that it cannot contain any inadmissible ones. Let $\delta \in C$ and suppose that $\delta$ is inadmissible. Then, there is a $\delta' \in C$ that dominates it, and, hence, the class $C \setminus \{\delta\}$ ($C$ with the estimator $\delta$ removed) is a complete class. This contradicts the fact that $C$ is minimal complete. □

Note that Lemma 7.7 requires the existence of a minimal complete class. The following example illustrates the possibility that a minimal complete class may not exist. (For another example, see Blackwell and Girshick 1954, Problem 5.2.1.)

Example 7.8 Nonexistence of a minimal complete class. Let $X$ be normally distributed as $N(\theta, 1)$ and consider the problem of estimating $\theta$ with loss function
\[ L(\theta, d) = \begin{cases} d - \theta & \text{if } \theta < d \\ 0 & \text{if } \theta \ge d. \end{cases} \]
Then, if $\delta(x) \le \delta'(x)$ for all $x$, we have $R(\theta, \delta) \le R(\theta, \delta')$, with strict inequality if $P_\theta[\delta(X) < \delta'(X)] > 0$. Many complete classes exist in this situation. For example, if $\delta_0$ is any estimator of $\theta$, then the class of all estimators with $\delta(x) < \delta_0(x)$ for some $x$ is complete (Problem 7.4). We shall now show that there exists no minimal complete class. Suppose $C$ is minimal complete and $\delta_0$ is any member of $C$. Then, some estimator $\delta_1$ dominating $\delta_0$ must also lie in $C$. If not, there would be no members of $C$ left to dominate such estimators, and $C$ would not be complete. On the other hand, if $\delta_1$ dominates $\delta_0$, and $\delta_1$ and $\delta_0$ are both in $C$, the class $C$ is not minimal, since $\delta_0$ could be removed without disturbing completeness. ∥

Despite this example, the minimal complete class typically coincides with the class of admissible estimators, and the search for a minimal complete class is therefore equivalent to the determination of all admissible estimators. The following results are concerned with these two related aspects, admissibility and completeness, and with the relation of both to Bayes estimators. Theorem 2.4 showed that any unique Bayes estimator is admissible. The following result replaces the uniqueness assumption by some other conditions.

Theorem 7.9 For a possibly vector-valued parameter $\theta$, suppose that $\delta^\pi$ is a Bayes estimator having finite Bayes risk with respect to a prior density $\pi$ which is positive for all $\theta$, and that the risk function of every estimator $\delta$ is a continuous function of $\theta$. Then, $\delta^\pi$ is admissible.

Proof. If $\delta^\pi$ is not admissible, there exists an estimator $\delta$ such that $R(\theta, \delta) \le R(\theta, \delta^\pi)$ for all $\theta$ and $R(\theta, \delta) < R(\theta, \delta^\pi)$ for some $\theta$. It then follows from the continuity of the risk functions that $R(\theta, \delta) < R(\theta, \delta^\pi)$ for all $\theta$ in some open subset $\Omega_0$ of the parameter space, and hence that
\[ \int R(\theta, \delta)\pi(\theta)\,d\theta < \int R(\theta, \delta^\pi)\pi(\theta)\,d\theta, \]
which contradicts the definition of $\delta^\pi$. □

A basic assumption in this theorem is the continuity of all risk functions. The following example provides an important class of situations for which this assumption holds.

Example 7.10 Exponential families have continuous risks. Suppose that we let $p(x \mid \eta)$ be the exponential family of (1.5.2). Then, it follows from Theorem 1.5.8 that for any loss function $L(\eta, \delta)$ for which $R(\eta, \delta) = E_\eta L(\eta, \delta)$ is finite, $R(\eta, \delta)$ is continuous. (See Problem 7.6.) ∥

There are many characterizations of problems in which all risk functions are continuous. With assumptions on both the loss function and the density, theorems can be established to assert the continuity of risks. (See Problem 7.7 for a set of conditions involving boundedness of the loss function.) The following theorem, which we present without proof, is based on a set of assumptions that are often satisfied in practice.

Theorem 7.11 Consider the estimation of $\theta$ with loss $L(\theta, \delta)$, where $X \sim f(x \mid \theta)$ has monotone likelihood ratio and $f(x \mid \theta)$ is continuous in $\theta$ for each $x$. If the loss function $L(\theta, \delta)$ satisfies
(i) $L(\theta, \delta)$ is continuous in $\theta$ for each $\delta$,
(ii) $L$ is decreasing in $\delta$ for $\delta < \theta$ and increasing in $\delta$ for $\delta > \theta$,
(iii) there exist functions $a$ and $b$, which are bounded on all bounded subsets of the parameter space, such that for all $\delta$
\[ L(\theta, \delta) \le a(\theta, \theta')L(\theta', \delta) + b(\theta, \theta'), \]
then the estimators with finite-valued, continuous risk functions $R(\theta, \delta) = E_\theta L(\theta, \delta)$ form a complete class.

Theorems similar to Theorem 7.11 can be found in Ferguson (1967, Section 3.7), Brown (1986a), Berk, Brown, and Cohen (1981), Berger (1985, Section 8.8), or Robert (1994a, Section 6.2.1). Also see Problem 7.9 for another version of this theorem. The assumptions on the loss are relatively simple to check. In fact, assumptions (i) and (ii) are almost self-evident, whereas (iii) will be satisfied by most interesting loss functions.

Example 7.12 Squared error loss.
For $L(\theta, \delta) = (\theta - \delta)^2$, we have
\[ (\theta - \delta)^2 = (\theta - \theta' + \theta' - \delta)^2 = (\theta - \theta')^2 + 2(\theta - \theta')(\theta' - \delta) + (\theta' - \delta)^2. \tag{7.4} \]
Now, since $2xy \le x^2 + y^2$,
\[ 2(\theta - \theta')(\theta' - \delta) \le (\theta - \theta')^2 + (\theta' - \delta)^2 \]
and, hence,
\[ (\theta - \delta)^2 \le 2(\theta - \theta')^2 + 2(\theta' - \delta)^2, \]
so condition (iii) is satisfied with $a(\theta, \theta') = 2$ and $b(\theta, \theta') = 2(\theta - \theta')^2$. ∥

Since most problems that we will be interested in satisfy the conditions of Theorem 7.11, we need now consider only estimators with finite-valued continuous risks. Restriction to continuous risks, in turn, allows us to utilize the method of proving admissibility that we previously saw in Example 2.8. (But note that this restriction can be relaxed somewhat; see Gajek 1983.) The following theorem extends the admissibility of Bayes estimators to sequences of Bayes estimators.

Theorem 7.13 (Blyth's Method) Suppose that the parameter space $\Omega \subset \Re^r$ is open, and estimators with continuous risk functions form a complete class. Let $\delta$ be an estimator with a continuous risk function, and let $\{\pi_n\}$ be a sequence of (possibly improper) prior measures such that
(a) $r(\pi_n, \delta) < \infty$ for all $n$,
(b) for any nonempty open set $\Omega_0 \subset \Omega$, there exist constants $B > 0$ and $N$ such that $\int_{\Omega_0}\pi_n(\theta)\,d\theta \ge B$ for all $n \ge N$,
(c) $r(\pi_n, \delta) - r(\pi_n, \delta^{\pi_n}) \to 0$ as $n \to \infty$.
Then, $\delta$ is an admissible estimator.

Proof. Suppose $\delta$ is inadmissible, so that there exists $\delta'$ with $R(\theta, \delta') \le R(\theta, \delta)$, with strict inequality for some $\theta$. By the continuity of the risk functions, this implies that there exist a set $\Omega_0$ and $\varepsilon > 0$ such that $R(\theta, \delta) - R(\theta, \delta') > \varepsilon$ for $\theta \in \Omega_0$. Hence, for all $n \ge N$,
\[ r(\pi_n, \delta) - r(\pi_n, \delta') > \varepsilon\int_{\Omega_0}\pi_n(\theta)\,d\theta \ge \varepsilon B, \tag{7.5} \]
and since $r(\pi_n, \delta^{\pi_n}) \le r(\pi_n, \delta')$, condition (c) cannot hold. □

Note that condition (b) prevents the possibility that, as $n \to \infty$, all the mass of $\pi_n$ escapes to $\infty$. This is similar to the requirement of tightness of a family of measures (see Chung 1974, Section 4.4, or Billingsley 1995, Section 25). [It is possible to combine conditions (b) and (c) into one condition involving a ratio (Problem 7.12), which is how Blyth's method was applied in Example 2.8.]

Example 7.14 Admissible negative binomial MLE. As we stated in Example 6.12 (but did not prove), the MLE of a negative binomial success probability is admissible under squared error loss. We can now prove this result using Theorem 7.13. Let $X$ have the negative binomial distribution
\[ p(x \mid \theta) = \binom{r + x - 1}{x}\theta^x(1 - \theta)^r, \quad 0 < \theta < 1. \tag{7.6} \]
The ML estimator of $\theta$ is $\delta_0(x) = x/(x + r)$. To use Blyth's method, we need a sequence of priors $\pi$ for which the Bayes risks $r(\pi, \delta^\pi)$ get close to the Bayes risk of $\delta_0$. When $\theta$ has the beta prior $\pi = B(a, b)$, the Bayes estimator is the posterior mean $\delta^\pi = (x + a)/(x + r + a + b)$. Since $\delta^\pi(x) \to \delta_0(x)$ as $a, b \to 0$, it is reasonable to try a sequence of priors $B(a, b)$ with $a, b \to 0$. It is straightforward to calculate the posterior expected losses
\[ E\{[\delta^\pi(x) - \theta]^2 \mid x\} = \frac{(x + a)(r + b)}{(x + r + a + b)^2(x + r + a + b + 1)}, \tag{7.7} \]
\[ E\{[\delta_0(x) - \theta]^2 \mid x\} = \frac{(bx - ar)^2}{(x + r)^2(x + r + a + b)^2} + E\{[\delta^\pi(x) - \theta]^2 \mid x\}, \]
and hence the difference is
\[ D(x) = \frac{(bx - ar)^2}{(x + r)^2(x + r + a + b)^2}. \tag{7.8} \]
Before proceeding further, we must check that the priors satisfy condition (b) of Theorem 7.13. [The normalized priors will not, since, for example, the probability of the interval $(\varepsilon, 1 - \varepsilon)$ under $B(a, b)$ tends to zero as $a, b \to 0$.] Since we are letting $a, b \to 0$, we only need consider $0 < a, b < 1$. We then have, for any $0 < \varepsilon < \varepsilon' < 1$, as $a, b \to 0$,
\[ \int_\varepsilon^{\varepsilon'}\theta^{a-1}(1 - \theta)^{b-1}\,d\theta \;\uparrow\; \int_\varepsilon^{\varepsilon'}\frac{d\theta}{\theta(1 - \theta)} = \log\left[\frac{1-\varepsilon}{\varepsilon}\,\frac{\varepsilon'}{1-\varepsilon'}\right], \tag{7.9} \]
so the integrals remain bounded away from 0, satisfying condition (b).
To compute the Bayes risk, we next need the marginal distribution of $X$, which is given by
\[ P(X = x) = \frac{\Gamma(r + x)}{\Gamma(x + 1)\Gamma(r)}\,\frac{\Gamma(r + b)\Gamma(x + a)}{\Gamma(r + x + a + b)}\,\frac{\Gamma(a + b)}{\Gamma(a)\Gamma(b)}, \tag{7.10} \]
the beta-Pascal distribution. Hence, the difference in Bayes risks, using the unnormalized priors, is
\[ \sum_{x=0}^\infty \frac{\Gamma(r + x)}{\Gamma(x + 1)\Gamma(r)}\,\frac{\Gamma(r + b)\Gamma(x + a)}{\Gamma(r + x + a + b)}\,D(x), \tag{7.11} \]
which we must show goes to 0 as $a, b \to 0$. Note first that the $x = 0$ term in (7.11) is
\[ \frac{\Gamma(r + b)\Gamma(a + 1)}{\Gamma(r)\Gamma(r + a + b)}\,\frac{a}{(r + a + b)^2} \to 0 \quad\text{as } a \to 0. \]
Also, for $x \ge 1$ and $a \le 1$,
\[ \frac{\Gamma(r + x)}{\Gamma(x + 1)}\,\frac{\Gamma(x + a)}{\Gamma(r + x + a + b)} \le 1, \]
so it is sufficient to show $\sum_{x=1}^\infty D(x) \to 0$ as $a, b \to 0$. From (7.8), using the facts that
\[ \sup_{x\ge 0}\frac{(bx - ar)^2}{(x + r)^2} = \max\{a^2, b^2\} \quad\text{and}\quad \frac{1}{(x + r + a + b)^2} \le \frac{1}{x^2}, \]
we have
\[ \sum_{x=1}^\infty D(x) \le \max\{a^2, b^2\}\sum_{x=1}^\infty \frac{1}{x^2} \to 0 \quad\text{as } a, b \to 0, \]
establishing the admissibility of the ML estimator of $\theta$. ∥
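The final limit can also be seen numerically: the sketch below (illustrative; $r = 3$, $a = b$, sum truncated at $x = 5000$) evaluates the Bayes-risk difference (7.11) with scipy's gammaln and shows it decreasing toward 0.

```python
# A numerical sketch of the last step of Example 7.14: the Bayes-risk difference
# (7.11) under the unnormalized B(a, b) priors goes to 0 as a, b -> 0.
import numpy as np
from scipy.special import gammaln

def risk_diff(a, b, r=3, xmax=5000):
    x = np.arange(xmax + 1.0)
    logw = (gammaln(r + x) - gammaln(x + 1) - gammaln(r)
            + gammaln(r + b) + gammaln(x + a) - gammaln(r + x + a + b))
    D = (b*x - a*r)**2 / ((x + r)**2 * (x + r + a + b)**2)
    return (np.exp(logw) * D).sum()

for a in [0.5, 0.1, 0.01, 0.001]:
    print(a, risk_diff(a, a))    # decreases toward 0
```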
Theorem 7.13 shows that one of the sufficient conditions for an estimator to be admissible is that its Bayes risk is approachable by a sequence of Bayes risks of Bayes estimators. It would be convenient if it were possible to replace the risks by the estimators themselves. That this is not the case can be seen from the fact that the normal sample mean in three or more dimensions is not admissible although it is the limit of Bayes estimators. However, under certain conditions the converse is true: every admissible estimator is a limit of Bayes estimators. (The remaining material of this section is of a somewhat more advanced nature; it is sketched here to give the reader some idea of these developments and to serve as an introduction to the literature.) We present, but do not prove, the following necessary conditions for admissibility. (This is essentially Theorem 4A.12 of Brown (1986a); see his Appendix to Chapter 4 for a detailed proof.)

Theorem 7.15 Let $X \sim f(x \mid \theta)$ be a density relative to a $\sigma$-finite measure $\nu$, such that $f(x \mid \theta) > 0$ for all $x \in \mathcal X$, $\theta \in \Omega$. Let the loss function $L(\theta, \delta)$ be continuous, strictly convex in $\delta$ for every $\theta$, and satisfy
\[ \lim_{|\delta|\to\infty}L(\theta, \delta) = \infty \quad\text{for all } \theta \in \Omega. \]
Then, to every admissible procedure $\delta(x)$ there corresponds a sequence $\pi_n$ of prior distributions with support on a finite set (and hence with finite Bayes risk) for which
\[ \delta^{\pi_n}(x) \to \delta(x) \quad\text{a.e. } (\nu), \tag{7.12} \]
where $\delta^{\pi_n}$ is the Bayes estimator for $\pi_n$.

As an immediate corollary to Theorem 7.15, we have the following complete class theorem.

Corollary 7.16 Under the assumptions of Theorem 7.15, the class of all estimators $\delta(x)$ that satisfy (7.12) is complete.

For exponential families, the assumptions of Theorem 7.15 are trivially satisfied, so limits of Bayes estimators are a complete class. More importantly, if $X$ has a density in the $r$-variate exponential family, and if $\delta^\pi$ is a limit of Bayes estimators $\delta^{\pi_n}$, then a subsequence of measures $\{\pi_{n'}\}$ can be found such that $\pi_{n'} \to \pi$ and $\delta^\pi$ is generalized Bayes against $\pi$. Such a result was originally developed by Sacks (1963) and extended by Brown (1971) and Berger and Srinivasan (1978) to the following theorem.

Theorem 7.17 Under the assumptions of Theorem 7.15, if the densities of $X$ constitute an $r$-variate exponential family, then any admissible estimator is a generalized Bayes estimator. Thus, the generalized Bayes estimators form a complete class.

Further characterizations of generalized Bayes estimators were given by Strawderman and Cohen (1971) and Berger and Srinivasan (1978). See Berger (1985) for more details. Note that it is not the case that all generalized Bayes estimators are admissible. Farrell (1964) gave examples of inadmissible generalized Bayes estimators in location problems, in particular $X \sim N(\theta, 1)$ with $\pi(\theta) = e^\theta$. (See also Problem 4.2.15.) Thus, it is of interest to determine conditions under which generalized Bayes estimators are admissible. We do so in the following examples, where we look at a number of characterizations of admissible estimators in specific situations. Although these characterizations have all been derived using the tools (or their generalizations) that have been described here, in some cases the exact derivations are complex. We begin with a fundamental identity.

Example 7.18 Brown's identity. In order to understand what types of estimators are admissible, it would be helpful if the convergence of risk functions in Blyth's method were more explicitly dependent on the convergence of the estimators. Brown (1971) gave an identity that makes this connection clearer. Let $X \sim N_r(\theta, I)$ and $L(\theta, \delta) = |\theta - \delta|^2$, and for a given prior $\pi(\theta)$, let $\delta^\pi(x) = x + \nabla\log m_\pi(x)$ be the Bayes estimator, where $m_\pi(x) = \int f(x \mid \theta)\pi(\theta)\,d\theta$ is the marginal density. Suppose that $\delta_g(x) = x + \nabla\log m_g(x)$ is another estimator. First note that
\[ r(\pi, \delta_g) - r(\pi, \delta^\pi) = E\left|\delta^\pi(X) - \delta_g(X)\right|^2 \tag{7.13} \]
(see Problem 7.16); hence, we have the identity
\[ r(\pi, \delta_g) - r(\pi, \delta^\pi) = \int\left|\nabla\log m_\pi(x) - \nabla\log m_g(x)\right|^2 m_\pi(x)\,dx. \tag{7.14} \]
We now have the estimator explicitly in the integral, but we must develop (7.14) a bit further to be more useful in helping to decide admissibility. Two paths have been taken.

On the first, we note that if we were going to use (7.14) to establish the admissibility of $\delta_g$, we might replace the prior $\pi(\cdot)$ with a sequence $\pi_n(\cdot)$. However, it would be more useful to have the measure of integration not depend on $n$ [since $m_\pi(x)$ would now equal $m_{\pi_n}(x)$]. To this end, write $k_n(x) = m_{\pi_n}(x)/m_g(x)$, and
\[ r(\pi_n, \delta_g) - r(\pi_n, \delta^{\pi_n}) = \int\left|\nabla\log\!\left[m_{\pi_n}(x)/m_g(x)\right]\right|^2 m_{\pi_n}(x)\,dx = \int\frac{|\nabla k_n(x)|^2}{k_n(x)}\,m_g(x)\,dx, \tag{7.15} \]
where the second equality follows from differentiation (see Problem 7.16), and now the integration measure does not depend on $n$. Thus, if we could apply Lebesgue's dominated convergence theorem, then $|\nabla k_n(x)|^2/k_n(x) \to 0$ would imply the admissibility of $\delta_g$. This is the path taken by Brown (1971), who established a relationship between (7.15) and the behavior of a diffusion process in $r$ dimensions and then gave necessary and sufficient conditions for the admissibility of $\delta_g$. For example, the admissibility of the sample mean in one and two dimensions is linked to the recurrence of a random walk in one and two dimensions, and its inadmissibility in three or more dimensions is linked to the transience of the walk. This is an interesting and fruitful approach, but to pursue it fully requires the development of properties of diffusions, which we will not do here. [Johnstone 1984 (see also Brown and Farrell 1985) developed similar theorems for the Poisson distribution (Problem 7.25), and Eaton (1992) investigated another related stochastic process; the review paper by Rukhin (1995) provides an excellent entry into the mathematics of this literature.]

Another path, developed in Brown and Hwang (1982), starts with the estimator $\delta_g$ and constructs a sequence $g_n \to g$ that leads to a simplified condition for the convergence of (7.14) to zero, and uses Blyth's method to establish admissibility. Although they prove their theorem for exponential families, we shall only state it here for the normal distribution.
(See Problem 7.19 for a more general statement.) ∥

Theorem 7.19 Let $X \sim N_r(\theta, I)$ and $L(\theta, \delta) = |\theta - \delta|^2$. Let $\delta_g(x) = x + \nabla\log m_g(x)$, where $m_g(x) = \int f(x \mid \theta)g(\theta)\,d\theta$. Assume that $g(\cdot)$ satisfies
(a) $\displaystyle\int_{\{\theta:|\theta|>1\}}\frac{g(\theta)}{|\theta|^2\max\{\log|\theta|, \log 2\}^2}\,d\theta < \infty$,
(b) $\displaystyle\int\frac{|\nabla g(\theta)|^2}{g(\theta)}\,d\theta < \infty$,
(c) $\sup\{R(\theta, \delta_g) : \theta \in K\} < \infty$ for all compact sets $K \subset \Omega$.
Then, $\delta_g(x)$ is admissible.

Proof. The proof follows from (7.14) by taking the sequence of priors $g_n(\theta) \to g(\theta)$, where $g_n(\theta) = h_n^2(\theta)g(\theta)$ and
\[ h_n(\theta) = \begin{cases} 1 & \text{if } |\theta| \le 1 \\[2pt] 1 - \dfrac{\log(|\theta|)}{\log(n)} & \text{if } 1 < |\theta| \le n \\[2pt] 0 & \text{if } |\theta| > n \end{cases} \tag{7.16} \]
for $n = 2, 3, \dots$. See Problem 7.18 for details. □

Example 7.20 Multivariate normal mean. The conditions of Theorem 7.19 relate to the tails of the prior, which are crucial in determining whether the integrals are finite. Priors with polynomial tails, that is, priors of the form $g(\theta) = 1/|\theta|^k$, have received a lot of attention. Perhaps the reason for this is that, using a Laplace approximation (4.6.33), we can write
\[ \delta_g(x) = x + \nabla\log m_g(x) = x + \frac{\nabla m_g(x)}{m_g(x)} \approx x + \frac{\nabla g(x)}{g(x)} = \left(1 - \frac{k}{|x|^2}\right)x. \tag{7.17} \]
See Problem 7.20 for details. Now what can we say about the admissibility of $\delta_g$? For $g(\theta) = 1/|\theta|^k$, condition (a) of Theorem 7.19 becomes, upon transforming to polar coordinates,
\[ \int_{\{\theta:|\theta|>1\}}\frac{g(\theta)}{|\theta|^2\max\{\log|\theta|, \log 2\}^2}\,d\theta = \int_0^{2\pi}\sin^{r-2}\beta\,d\beta \int_1^\infty \frac{t^{r-1}}{t^{k+2}\max\{\log(t), \log 2\}^2}\,dt \tag{7.18} \]
where $t = |\theta|$ and $\beta$ is a vector of direction cosines. The integral over $\beta$ is finite, and if we ignore the log term, a sufficient condition for the integral over $t$ to be finite is
\[ \int_1^\infty \frac{1}{t^{k-r+3}}\,dt < \infty, \tag{7.19} \]
which is satisfied if $k > r - 2$. If we keep the log term and work a little harder, condition (a) can be verified for $k \ge r - 2$ (see Problem 7.22). ∥

Example 7.21 Continuation of Example 7.20. The characterization of admissible estimators by Brown (1971) goes beyond that of Theorem 7.19, as he was able to establish both necessary and sufficient conditions. Here is an example of these results. Using a spherically symmetric prior (see, for example, Corollary 4.3.3), all generalized Bayes estimators are of the form
\[ \delta^\pi(x) = x + \nabla\log m(x) = (1 - h(|x|))x. \tag{7.20} \]
The estimator $\delta^\pi$ is
(a) inadmissible if there exist $\varepsilon > 0$ and $M < \infty$ such that
\[ 0 \le h(|x|) < \frac{r - 2 - \varepsilon}{|x|^2} \quad\text{for } |x| > M, \]
(b) admissible if $h(|x|)|x|$ is bounded and there exists $M < \infty$ such that
\[ 1 \ge h(|x|) \ge \frac{r-2}{|x|^2} \quad\text{for } |x| > M. \]
It is interesting how the factor $r - 2$ appears again and supports its choice as the optimal constant in the James-Stein estimator (even though that estimator is not generalized Bayes, and hence inadmissible). Bounds such as these are called semitail upper bounds by Hwang (1982b), who further developed their applicability. For Strawderman's estimator (see Problem 5.5), we have
\[ h(|x|) = \frac{r - 2a + 2}{|x|^2}\,\frac{P(\chi^2_{r-2a+4} \le |x|^2)}{P(\chi^2_{r-2a+2} \le |x|^2)} \tag{7.21} \]
and it is admissible (Problem 7.23) as long as $r - 2a + 2 \ge r - 2$, that is, $a \le 2$. ∥

Now that we have a reasonably complete picture of the types of estimators that are admissible estimators of a normal mean, it is interesting to see how the admissibility conditions fit in with minimaxity conditions. To do so requires the development of some general necessary conditions for minimaxity. This was first done by Berger (1976a), who derived conditions for an estimator to be tail minimax.

Example 7.22 Tail minimaxity. Let $X \sim N_r(\theta, I)$ and $L(\theta, \delta) = |\theta - \delta|^2$.
Example 7.21 Continuation of Example 7.20. The characterization of admissible estimators by Brown (1971) goes beyond that of Theorem 7.19, as he was able to establish both necessary and sufficient conditions. Here is an example of these results. Using a spherically symmetric prior (see, for example, Corollary 4.3.3), all generalized Bayes estimators are of the form

δ^π(x) = x + ∇ log m(x) = (1 − h(|x|))x.    (7.20)

The estimator δ^π is
(a) inadmissible if there exist ε > 0 and M < ∞ such that 0 ≤ h(|x|) < (r − 2 − ε)/|x|² for |x| > M,
(b) admissible if h(|x|)|x| is bounded and there exists M < ∞ such that 1 ≥ h(|x|) ≥ (r − 2)/|x|² for |x| > M.

It is interesting how the factor r − 2 appears again and supports its choice as the optimal constant in the James-Stein estimator (even though that estimator is not generalized Bayes, and hence is inadmissible). Bounds such as these are called semitail upper bounds by Hwang (1982b), who further developed their applicability. For Strawderman's estimator (see Problem 5.5), we have

h(|x|) = [(r − 2a + 2)/|x|²] · P(χ²_{r−2a+4} ≤ |x|²)/P(χ²_{r−2a+2} ≤ |x|²),    (7.21)

and it is admissible (Problem 7.23) as long as r − 2a + 2 ≥ r − 2, or r ≥ 1 and a ≤ 2. ∥

Now that we have a reasonably complete picture of the types of estimators that are admissible estimators of a normal mean, it is interesting to see how the admissibility conditions fit in with minimaxity conditions. To do so requires the development of some general necessary conditions for minimaxity. This was first done by Berger (1976a), who derived conditions for an estimator to be tail minimax.

Example 7.22 Tail minimaxity. Let X ∼ N_r(θ, I) and L(θ, δ) = |θ − δ|². Since the estimator X is minimax with constant risk R(θ, X), another estimator δ is tail minimax if there exists M > 0 such that R(θ, δ) ≤ R(θ, X) for all |θ| > M. (Berger investigated tail minimaxity for much more general situations than are considered here, including non-normal distributions and nonquadratic loss.) Since tail minimaxity is a necessary condition for minimaxity, it can help us see which admissible estimators have the possibility of also being minimax. An interesting characterization of h(|x|) of (7.20) is obtained if admissibility is considered together with tail minimaxity. Using a risk representation similar to (5.4), the risk of δ(x) = [1 − h(|x|)]x is

R(θ, δ) = r + E_θ[|X|²h²(|X|) − 2rh(|X|) − 4|X|²h′(|X|)].    (7.22)

If we now use a Laplace approximation on the expectation in (7.22), we have

E_θ[|X|²h²(|X|) − 2rh(|X|) − 4|X|²h′(|X|)] ≈ |θ|²h²(|θ|) − 2rh(|θ|) − 4|θ|²h′(|θ|) = B(θ).    (7.23)

By carefully working with the error terms in the Laplace approximation, Berger showed that the error of approximation was o(|θ|⁻²), that is,

R(θ, δ) = r + B(θ) + o(|θ|⁻²).    (7.24)

In order to ensure that the estimator is tail minimax, we must be able to ensure that B(θ) + o(|θ|⁻²) < 0 for sufficiently large |θ|. This would occur if, for some ε > 0, B(θ) ≤ −ε|θ|⁻² for sufficiently large |θ|, that is,

|θ|²h²(|θ|) − 2rh(|θ|) − 4|θ|²h′(|θ|) ≤ −ε/|θ|²    (7.25)

for sufficiently large |θ|. Now, for δ(x) = [1 − h(|x|)]x to be admissible, we must have h(|x|) ≥ (r − 2)/|x|². Since |x|h(|x|) must be bounded, this suggests that, for large |x|, we could have h(|x|) ≈ k/|x|^{2α} for some α, 1/2 ≤ α ≤ 1. We now show that for δ(x) to be minimax, it is necessary that α = 1. For h(|x|) = k/|x|^{2α}, (7.25) becomes

(k/|θ|^{2α})[k/|θ|^{2α−2} − 2(r − 2α)] ≤ −ε/|θ|² for |θ| > M,    (7.26)

which, for r ≥ 3, cannot be satisfied if 1/2 ≤ α < 1. Thus, the only possible admissible minimax estimators are those for which h(|x|) ≈ k/|x|², with r − 2 ≤ k ≤ 2(r − 2). ∥
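The role of α in Example 7.22 can be seen by simulation. In this hedged sketch (r = 5 and k = r − 2 are illustrative choices), the risk of δ(x) = (1 − k/|x|^{2α})x is estimated by Monte Carlo along a ray with |θ| growing: with α = 1 the risk stays below the minimax value r, while with α = 1/2 it settles above r, so that estimator cannot be tail minimax.

```python
import numpy as np

rng = np.random.default_rng(1)
r, k, n = 5, 3, 400_000                 # k = r - 2, an illustrative choice

def mc_risk(theta, alpha):
    # Monte Carlo risk of delta(x) = (1 - k/|x|^(2*alpha)) x at a fixed theta
    x = theta + rng.normal(size=(n, r))
    norm = np.sqrt(np.sum(x * x, axis=1, keepdims=True))
    delta = (1.0 - k / norm ** (2 * alpha)) * x
    return np.mean(np.sum((delta - theta) ** 2, axis=1))

for t in (5.0, 10.0, 20.0):
    theta = np.r_[t, np.zeros(r - 1)]
    print(t, mc_risk(theta, 1.0), mc_risk(theta, 0.5))
# alpha = 1 keeps the risk below r = 5 for large |theta|;
# alpha = 1/2 leaves it above r, ruling out tail minimaxity.
```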
Theorem 7.17 can be adapted to apply to discrete distributions (the assumption of a density can be replaced by a probability mass function), and an interesting case is the binomial distribution. It turns out that the fact that the sample space is finite has a strong influence on the form of the admissible estimators. We first look at the following characterization of admissible estimators, due to Johnson (1971).

Example 7.23 Binomial estimation. For the problem of estimating h(p), where h(·) is a continuous real-valued function on [0, 1], X ∼ binomial(p, n), and L(h(p), δ) = (h(p) − δ)², a minimal complete class is given by

δ^π(x) = h(0) if x ≤ r;  [∫₀¹ h(p)p^{x−r−1}(1−p)^{s−x−1} dπ(p)] / [∫₀¹ p^{x−r−1}(1−p)^{s−x−1} dπ(p)] if r + 1 ≤ x ≤ s − 1;  h(1) if x ≥ s,    (7.27)

where p has the prior distribution

p ∼ k(p) dπ(p)    (7.28)

with k(p) = P(r + 1 ≤ X ≤ s − 1 | p) / [p^{r+1}(1 − p)^{n−s+1}], r and s are integers, −1 ≤ r < s ≤ n + 1, and π is a probability measure with π({0} ∪ {1}) < 1, that is, π does not put all of its mass on the endpoints of the parameter space.

To see that δ^π is admissible, let δ′ be another estimator of h(p) that satisfies R(p, δ′) ≤ R(p, δ^π) for all p. We will assume that s ≥ 0, r ≤ n, and r + 1 < s (as the cases r = −1, s = n + 1, and r + 1 = s are straightforward). Also, if δ′(x) = h(0) for x ≤ r′, and δ′(x) = h(1) for x ≥ s′, then it follows that r′ ≥ r and s′ ≤ s. Define

R_{r,s}(p, δ) = Σ_{x=r+1}^{s−1} C(n, x)[h(p) − δ(x)]² p^{x−r−1}(1 − p)^{s−x−1} k(p)⁻¹.

Now, R(p, δ′) ≤ R(p, δ^π) for all p ∈ [0, 1] if and only if R_{r,s}(p, δ′) ≤ R_{r,s}(p, δ^π) for all p ∈ [0, 1]. However, for the prior (7.28), ∫₀¹ R_{r,s}(p, δ) k(p) dπ(p) is uniquely minimized by [δ^π(r + 1), ..., δ^π(s − 1)], which establishes the admissibility of δ^π. The converse assertion, that any admissible estimator is of the form (7.27), follows from Theorem 7.17. (See Problem 7.27.)

For h(p) = p, we can take r = 0, s = n, and π(p) = Beta(a, b). The resulting estimators are of the form α(x/n) + (1 − α)a/(a + b), so we obtain conditions on the admissibility of linear estimators. In particular, we see that x/n is an admissible estimator. If h(p) = p(1 − p), we find an admissible estimator of the variance to be (n/(n + 1))(x/n)(1 − x/n). (See Problem 7.26.) Brown (1981, 1988) has generalized Johnson's results and characterizes a minimal complete class for estimation in a wide class of problems with finite sample spaces. ∥
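For h(p) = p, the linear estimators above are proper Bayes under Beta(a, b) priors, and since the sample space is finite their risks are exact finite sums. A short sketch (n, a, b arbitrary) tabulates the risks of x/n and one such Bayes estimator; the two risk functions cross, consistent with both estimators being admissible (neither dominates).

```python
import numpy as np
from scipy.stats import binom

n, a, b = 10, 2, 2                  # illustrative sample size and Beta(a, b) prior
x = np.arange(n + 1)
alpha = n / (n + a + b)             # Bayes rule: alpha*(x/n) + (1-alpha)*a/(a+b)
delta_mle = x / n
delta_bayes = alpha * (x / n) + (1 - alpha) * a / (a + b)

for p in (0.1, 0.3, 0.5, 0.7, 0.9):
    w = binom.pmf(x, n, p)
    r_mle = np.sum(w * (delta_mle - p) ** 2)      # exact risk of x/n
    r_bayes = np.sum(w * (delta_bayes - p) ** 2)  # exact risk of the Bayes rule
    print(p, r_mle, r_bayes)        # the two risk functions cross in p
```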
Johnson (1971) was further able to establish the somewhat surprising result that if δ₁ is an admissible estimator of h(p₁) in the binomial b(p₁, n₁) problem and δ₂ is an admissible estimator of h(p₂) in the binomial b(p₂, n₂) problem, then [δ₁, δ₂] is an admissible estimator of [h(p₁), h(p₂)] if the loss function is the sum of the losses. This result can be extended to higher dimensions, and thus there is no Stein effect in the binomial problem. The following example gives conditions under which this can be expected.

Example 7.24 When there is no Stein effect. For i = 1, 2, let X_i ∼ f_i(x|θ_i) and suppose that δ*_i(x_i) is a unique Bayes (hence, admissible) estimator of θ_i under the loss L(θ_i, δ), where L satisfies L(a, a) = 0 and L(a, a′) > 0 for a ≠ a′, and all risk functions are continuous. Suppose there is a value θ* such that if θ₂ = θ*,

(i) X₂ = x* with probability 1,
(ii) δ*₂(x*) = θ*.

Then (δ*₁(x₁), δ*₂(x₂)) is admissible for (θ₁, θ₂) under the loss Σ_i L(θ_i, δ_i); that is, there is no Stein effect. To see why this is so, let δ′ = (δ′₁(x₁, x₂), δ′₂(x₁, x₂)) be a competitor. At the parameter value (θ₁, θ₂) = (θ₁, θ*), we have

R[(θ₁, θ*), δ′] = E_{(θ₁,θ*)}L[θ₁, δ′₁(X₁, X₂)] + E_{(θ₁,θ*)}L[θ₂, δ′₂(X₁, X₂)]
= E_{(θ₁,θ*)}L[θ₁, δ′₁(X₁, x*)] + E_{(θ₁,θ*)}L[θ*, δ′₂(X₁, x*)]    (7.29)
= E_{θ₁}L[θ₁, δ′₁(X₁, x*)] + E_{θ₁}L[θ*, δ′₂(X₁, x*)],

while for (δ*₁(x₁), δ*₂(x₂)),

R[(θ₁, θ*), δ*] = E_{(θ₁,θ*)}L[θ₁, δ*₁(X₁)] + E_{(θ₁,θ*)}L[θ₂, δ*₂(X₂)]
= E_{θ₁}L[θ₁, δ*₁(X₁)] + E_{θ*}L[θ*, δ*₂(x*)]    (7.30)
= E_{θ₁}L[θ₁, δ*₁(X₁)],

as E_{θ*}L[θ*, δ*₂(x*)] = 0. Since δ*₁ is a unique Bayes estimator of θ₁, E_{θ₁}L[θ₁, δ*₁(X₁)] < E_{θ₁}L[θ₁, δ′₁(X₁, x*)] for some θ₁. Since E_{θ₁}L[θ*, δ′₂(X₁, x*)] ≥ 0, it follows that R[(θ₁, θ*), δ*] < R[(θ₁, θ*), δ′] for some θ₁, and hence that δ* is an admissible estimator of (θ₁, θ₂). By induction, the result can be extended to any number of coordinates (see Problem 7.28).

If X ∼ b(θ, n), then we can take θ* = 0 or 1, and the above result applies. The absence of the Stein effect persists in other situations, such as any problem with a finite sample space (Gutmann 1982a; see also Brown 1981). Gutmann (1982b) also demonstrates a sequential context in which the Stein effect does not hold (see Problem 7.29). ∥

Finally, we look at the admissibility of linear estimators. There has always been interest in characterizing the admissibility of linear estimators, partly due to the ease of computing and using linear estimators, and also due to a search for a converse to Karlin's theorem (Theorem 2.14) (which gives sufficient conditions for admissibility of linear estimators). Note that we are concerned with the admissibility of linear estimators in the class of all estimators, not just in the class of linear estimators. (This latter question was addressed by La Motte (1982).)

Example 7.25 Admissible linear estimators. Let X ∼ N_r(θ, I), and consider estimation of ϕ′θ, where ϕ_{r×1} is a known vector, and L(ϕ′θ, δ) = (ϕ′θ − δ)². For r = 1, the results of Karlin (1958) (see also Meeden and Ghosh 1977) show that ax is admissible if and only if 0 ≤ a ≤ ϕ. This result was generalized by Cohen (1965a) to show that a′x is admissible if and only if a lies in the sphere

{a : (a − ϕ/2)′(a − ϕ/2) ≤ ϕ′ϕ/4}    (7.31)

(see Problem 7.30). Note that the extension to a known covariance matrix is straightforward, and (7.31) becomes an ellipse. For the problem of estimating θ, the linear estimator Cx, where C is an r × r symmetric matrix, is admissible if and only if all of the eigenvalues of C are between 0 and 1, with at most two equal to 1 (Cohen 1966). Necessary and sufficient conditions for admissibility of linear estimators have also been described for multivariate Poisson estimation (Brown and Farrell 1985a, 1985b) and for estimation of the scale parameters in the multivariate gamma distribution (Farrell et al. 1989). This latter result also has application to the estimation of variance components in mixed models. ∥
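Both of Cohen's characterizations in Example 7.25 are simple finite checks; the following sketch (tolerances and test matrices are incidental choices) implements them directly.

```python
import numpy as np

def admissible_linear(a, phi, tol=1e-12):
    # Cohen (1965a): a'x is admissible for phi'theta iff a lies in the sphere (7.31)
    d = a - phi / 2
    return d @ d <= phi @ phi / 4 + tol

def admissible_matrix(C, tol=1e-9):
    # Cohen (1966): Cx is admissible for theta iff C is symmetric with all
    # eigenvalues in [0, 1] and at most two eigenvalues equal to 1
    if not np.allclose(C, C.T, atol=tol):
        return False
    ev = np.linalg.eigvalsh(C)
    return ev.min() >= -tol and ev.max() <= 1 + tol and np.sum(ev > 1 - tol) <= 2

phi = np.array([1.0, 2.0])
print(admissible_linear(0.9 * phi, phi))             # True: inside the sphere
print(admissible_linear(1.1 * phi, phi))             # False: outside the sphere
print(admissible_matrix(np.diag([1.0, 1.0, 0.5])))   # True: two unit eigenvalues
print(admissible_matrix(np.diag([1.0, 1.0, 1.0])))   # False: three unit eigenvalues
```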
8 Problems

Section 1

1.1 For the situation of Example 1.2:
(a) Plot the risk functions of δ_{1/4}, δ_{1/2}, and δ_{3/4} for n = 5, 10, 25.
(b) For each value of n in part (a), find the range of prior values of p for which each estimator is preferred.
(c) If an experimenter has no prior knowledge of p, which of δ_{1/4}, δ_{1/2}, and δ_{3/4} would you recommend? Justify your choice.

1.2 The principle of gamma-minimaxity [first used by Hodges and Lehmann (1952); see also Robbins 1964 and Solomon 1972a, 1972b] is a Bayes/frequentist synthesis. An estimator δ* is gamma-minimax if

inf_{δ∈D} sup_{π∈H} r(π, δ) = sup_{π∈H} r(π, δ*),

where H is a specified class of priors. Thus, the estimator δ* minimizes the maximum Bayes risk over those priors in the class H. (If H = all priors, then δ* would be minimax.)
(a) Show that if H = {π₀}, that is, H consists of one prior, then the Bayes estimator is H-minimax.
(b) Show that if H = {all priors}, then the minimax estimator is H-minimax.
(c) Find the H-minimax estimator among the three estimators of Example 1.2.

1.3 Classes of priors for H-minimax estimation have often been specified using moment restrictions.
(a) For X ∼ b(p, n), find the H-minimax estimator of p under squared error loss, with H_µ = {π(p) : π(p) = beta(a, b), µ = a/(a + b)}, where µ is considered fixed and known.
(b) For X ∼ N(θ, 1), find the H-minimax estimator of θ under squared error loss, with H_{µ,τ} = {π(θ) : E(θ) = µ, var(θ) = τ²}, where µ and τ are fixed and known.
[Hint: In part (b), show that the H-minimax estimator is the Bayes estimator against a normal prior with the specified moments (Jackson et al. 1970; see Chen, Eichenhauer-Herrmann, and Lehn 1990 for a multivariate version). This somewhat nonrobust H-minimax estimator is characteristic of estimators derived from moment restrictions and shows why robust Bayesians tend not to use such classes. See Berger 1985, Section 4.7.6 for further discussion.]

1.4 (a) For the random effects model of Example 4.2.7 (see also Example 3.5.1), show that the restricted maximum likelihood (REML) likelihood of σ²_A and σ² is given by (4.2.13), which can be obtained by integrating the original likelihood against a uniform (−∞, ∞) prior for µ.
(b) For n_i = n in X_{ij} = µ + A_i + u_{ij} (j = 1, ..., n_i, i = 1, ..., s), calculate the expected value of the REML estimate of σ²_A and show that it is biased. Compare REML to the unbiased estimator of σ²_A. Which do you prefer? (Construction of REML-type marginal likelihoods, where some effects are integrated out against priors, becomes particularly useful in nonlinear and generalized linear models. See, for example, Searle et al. 1992, Section 9.4 and Chapter 10.)

1.5 Establishing the fact that (9.1) holds, so S² is conditionally biased, is based on a number of steps, some of which can be involved. Define φ(a, µ, σ²) = (1/σ²)E_{µ,σ²}[S² | |x̄|/s < a].
(a) Show that φ(a, µ, σ²) depends on µ and σ² only through µ/σ. Hence, without loss of generality, we can assume σ = 1.
(b) Use the fact that the density f(s | |x̄|/s < a, µ) has monotone likelihood ratio to establish φ(a, µ, 1) ≥ φ(a, 0, 1).
(c) Show that lim_{a→∞} φ(a, 0, 1) = 1 and lim_{a→0} φ(a, 0, 1) = E_{0,1}S³/E_{0,1}S = n/(n − 1).
(d) Combine parts (a), (b), and (c) to establish (9.1).

The next three problems explore conditional properties of estimators. A detailed development of this theory is found in Robinson (1979a, 1979b), who also explored the relationship between admissibility and conditional properties.

1.6 Suppose that X ∼ f(x|θ), and T(x) is used to estimate τ(θ). One might question the worth of T(x) if there were some set A ⊂ 𝒳 for which T(x) > τ(θ) for x ∈ A (or if the reverse inequality holds). This leads to the conditional principle of never using an estimator if there exists a set A ⊂ 𝒳 for which E_θ{[T(X) − τ(θ)]I(X ∈ A)} ≥ 0 for all θ, with strict inequality for some θ (or if the equivalent statement holds with the inequality reversed). Show that if T(x) is the posterior mean of τ(θ) against a proper prior, where both the prior and f(x|θ) are continuous in θ, then no such A can exist. (If such an A exists, it is called a semirelevant set. Elimination of semirelevant sets is an extremely strong requirement. A weaker requirement, elimination of relevant sets, seems more appropriate.)

1.7 Show that if there exist a set A ⊂ 𝒳 and an ε > 0 for which E_θ{[T(X) − τ(θ)]I(X ∈ A)} > ε, then T(x) is inadmissible for estimating τ(θ) under squared error loss. (A set A satisfying this inequality is an example of a relevant set.) [Hint: Consider the estimator T(x) + εI(x ∈ A).]

1.8 To see why elimination of semirelevant sets is too strong a requirement, consider the estimation of θ based on observing X ∼ f(x − θ). Show that for any constant a, the Pitman estimator X satisfies E_θ[(X − θ)I(X < a)] ≤ 0 for all θ or E_θ[(X − θ)I(X > a)] ≥ 0 for all θ, with strict inequality for some θ. Thus, there are semirelevant sets for the Pitman estimator, which is, by most accounts, a fine estimator.

1.9 In Example 1.7, let δ*(X) = X/n with probability 1 − ε and δ*(X) = 1/2 with probability ε. Determine the risk function of δ*, and show that for ε = 1/(n + 1), its risk is constant and less than sup_p R(p, X/n).

1.10 Find the bias of the minimax estimator (1.11) and discuss its direction.

1.11 In Example 1.7,
(a) determine c_n and show that c_n → 0 as n → ∞,
(b) show that R_n(1/2)/r_n → 1 as n → ∞.
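Problems 1.9–1.12 compare X/n with the minimax estimator (1.11). Assuming (1.11) is the usual constant-risk Bayes estimator δ(x) = (x + √n/2)/(n + √n) (an assumption here, since the equation itself is not reproduced in this section), the exact risks are finite sums and easy to tabulate:

```python
import numpy as np
from scipy.stats import binom

def risks(n, ps):
    x = np.arange(n + 1)
    d_mle = x / n
    d_mm = (x + np.sqrt(n) / 2) / (n + np.sqrt(n))   # assumed form of (1.11)
    for p in ps:
        w = binom.pmf(x, n, p)
        print(p, np.sum(w * (d_mle - p) ** 2), np.sum(w * (d_mm - p) ** 2))

risks(9, (0.1, 0.3, 0.5))
# the second risk column is constant in p, equal to 1/(4 * (1 + sqrt(n))^2),
# while the risk of x/n, p(1 - p)/n, is larger near p = 1/2
```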
1.12 In Example 1.7, graph the risk functions of X/n and the minimax estimator (1.11) for n = 1, 4, 9, 16, and indicate the relative positions of the two graphs for large values of n.

1.13 (a) Find two points 0 < p₀ < p₁ < 1 such that the estimator (1.11) for n = 1 is Bayes with respect to a distribution Λ for which P(p = p₀) + P(p = p₁) = 1.
(b) For n = 1, show that (1.11) is a minimax estimator of p even if it is known that p₀ ≤ p ≤ p₁.
(c) In (b), find the values p₀ and p₁ for which p₁ − p₀ is as small as possible.

1.14 Evaluate (1.16) and show that its maximum is 1 − α.

1.15 Let X = 1 or 0 with probabilities p and q, respectively, and consider the estimation of p with loss = 1 when |d − p| ≥ 1/4, and 0 otherwise. The most general randomized estimator is δ = U when X = 0, and δ = V when X = 1, where U and V are two random variables with known distributions.
(a) Evaluate the risk function and the maximum risk of δ when U and V are uniform on (0, 1/2) and (1/2, 1), respectively.
(b) Show that the estimator δ of (a) is minimax by considering the three values p = 0, 1/2, 1.
[Hint: (b) The risk at p = 0, 1/2, 1 is, respectively, P(U > 1/4), (1/2)[P(U < 1/4) + P(V > 3/4)], and P(V < 3/4).]

1.16 Show that the problem of Example 1.8 remains invariant under the transformations X′ = n − X, p′ = 1 − p, d′ = 1 − d. This illustrates that randomized equivariant estimators may have to be considered when Ḡ is not transitive.

1.17 Let r_Λ be given by (1.3). If r_Λ = ∞ for some Λ, show that any estimator δ has unbounded risk.

1.18 In Example 1.9, show that no linear estimator has constant risk.

1.19 Show that the risk function of (1.22) depends on p₁ and p₂ only through p₁ + p₂ and takes on its maximum when p₁ + p₂ = 1.

1.20 (a) In Example 1.9, determine the region in the (p₁, p₂) unit square in which (1.22) is better than the UMVU estimator of p₂ − p₁ for m = n = 2, 8, 18, and 32.
(b) Extend Problems 1.11 and 1.12 to Example 1.9.

1.21 In Example 1.14, show that X̄ is minimax for the loss function (d − θ)²/σ² without any restrictions on σ.

1.22 (a) Verify (1.37).
(b) Show that equality holds in (1.39) if and only if P(X_i = 0) + P(X_i = 1) = 1.

1.23 In Example 1.16(b), show that for any k > 0, the estimator

δ = [√n/(1 + √n)] (1/n) Σ_{i=1}^n X_i^k + 1/[2(1 + √n)]

is a Bayes estimator for the prior distribution Λ over F₀ for which (1.36) was shown to be Bayes.

1.24 Let X_i (i = 1, ..., n) and Y_j (j = 1, ..., n) be independent with distributions F and G, respectively. If F(1) − F(0) = G(1) − G(0) = 1 but F and G are otherwise unknown, find a minimax estimator for E(Y_j) − E(X_i) under squared error loss.

1.25 Let X_i (i = 1, ..., n) be iid with unknown distribution F. Show that

δ = [(No. of X_i ≤ 0)/√n] · 1/(1 + √n) + 1/[2(1 + √n)]

is minimax for estimating F(0) = P(X_i ≤ 0) with squared error loss. [Hint: Consider the risk function of δ.]

1.26 Let X₁, ..., X_m and Y₁, ..., Y_n be independently distributed as N(ξ, σ²) and N(η, τ²), respectively, and consider the problem of estimating W = η − ξ with squared error loss.
(a) If σ and τ are known, Ȳ − X̄ is minimax.
(b) If σ and τ are restricted by σ² ≤ A and τ² ≤ B, respectively (A, B known and finite), Ȳ − X̄ continues to be minimax.

1.27 In the linear model (3.4.4), show that Σ a_i ξ̂_i (in the notation of Theorem 3.4.4) is minimax for estimating θ = Σ a_i ξ_i with squared error loss, under the restriction σ² ≤ M. [Hint: Treat the problem in its canonical form.]
1.28 For the random variable X whose distribution is (1.42), show that x must satisfy the inequalities stated below (1.42).

1.29 Show that the estimator defined by (1.43)
(a) has constant risk,
(b) is Bayes with respect to the prior distribution specified by (1.44) and (1.45).

1.30 Show that for fixed X and n, (1.43) → (1.11) as N → ∞.

1.31 Show that var(Ȳ) given by (3.7.6) takes on its maximum value subject to (1.41) when all the a's are 0 or 1.

1.32 (a) If R(p, δ) is given by (1.49), show that sup_p R(p, δ) · 4(1 + √n)² → 1 as n → ∞.
(b) Determine the smallest value of n for which the Bayes estimator of Example 1.18 satisfies (1.48) for r = 1 and b = 5, 10, and 20.

1.33 (Efron and Morris 1971)
(a) Show that the estimator δ of (1.50) is the estimator that minimizes |δ − cx̄| subject to the constraint |δ − x̄| ≤ M. In this sense, it is the estimator that is closest to a Bayes estimator, cx̄, while not straying too far from a minimax estimator, x̄.
(b) Show that for the situation of Example 1.19, R(θ, δ) is bounded for δ of (1.50).
(c) For the situation of Example 1.19, δ of (1.50) satisfies sup_θ R(θ, δ) = (1/n) + M².

Section 2

2.1 Lemma 2.1 has been extended by Berger (1990a) to include the case where the estimand need not be restricted to a finite interval but, instead, attains a maximum or minimum at a finite parameter value.

Lemma 8.1 Let the estimand g(θ) be nonconstant with global maximum or minimum at a point θ* ∈ Ω for which f(x|θ*) > 0 a.e. (with respect to a dominating measure µ), and let the loss L(θ, d) satisfy the assumptions of Lemma 2.1. Then, any estimator δ taking values above the maximum of g(θ), or below the minimum, is inadmissible.

(a) Show that if θ* minimizes g(θ), and if ĝ(x) is an unbiased estimator of g(θ), then there exists ε > 0 such that the set A_ε = {x ∈ 𝒳 : ĝ(x) < g(θ*) − ε} satisfies P(A_ε) > 0. A similar conclusion holds if g(θ*) is a maximum.
(b) Suppose g(θ*) is a minimum. (The case of a maximum is handled similarly.) Show that the estimator

δ(x) = ĝ(x) if ĝ(x) ≥ g(θ*);  g(θ*) if ĝ(x) < g(θ*)

satisfies R(θ, δ) − R(θ, ĝ) < 0.
(c) For the situation of Example 2.3, apply Lemma 8.1 to establish the inadmissibility of the UMVU estimator of σ²_A. Also, explain why the hypotheses of Lemma 8.1 are not satisfied for the estimation of σ².
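Lemma 8.1 is easy to see numerically in a stand-in problem (this is not the σ²_A setting of Example 2.3, just an illustration): for X ∼ N(θ, 1) and the estimand g(θ) = θ², which is minimized at θ* = 0, the unbiased estimator X² − 1 takes values below the minimum, and truncating it at 0, as in part (b), lowers the risk.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 1_000_000
for theta in (0.0, 0.5, 1.0, 2.0):
    x = rng.normal(theta, 1.0, n)
    umvu = x ** 2 - 1.0                     # unbiased for theta^2, can be negative
    trunc = np.maximum(umvu, 0.0)           # truncated at the minimum of g
    print(theta,
          np.mean((umvu - theta ** 2) ** 2),
          np.mean((trunc - theta ** 2) ** 2))
# the truncated estimator's risk is never larger, markedly so near theta = 0
```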
2.2 Determine the Bayes risk of the estimator (2.4) when θ has the prior distribution N(µ, τ²).

2.3 Prove part (d) in the second proof of Example 2.8, that there exists a sequence of values θ_i → −∞ with b′(θ_i) → 0.

2.4 Show that an estimator aX + b (0 ≤ a ≤ 1) of E_θ(X) is inadmissible (with squared error loss) under each of the following conditions:
(a) if E_θ(X) ≥ 0 for all θ, and b < 0;
(b) if E_θ(X) ≤ k for all θ, and ak + b > k.
[Hint: In (b), replace X by X′ = k − X and aX + b by k − (aX + b) = aX′ + k − b − ak, respectively, and use (a).]

2.5 Show that an estimator [1/(1 + λ) + ε]X of E_θ(X) is inadmissible (with squared error loss) under each of the following conditions:
(a) if var_θ(X)/E²_θ(X) > λ > 0 and ε > 0,
(b) if var_θ(X)/E²_θ(X) < λ and ε < 0.
[Hint: (a) Differentiate the risk function of the estimator with respect to ε to show that it decreases as ε decreases (Karlin 1958).]

2.6 Show that if var_θ(X)/E²_θ(X) > λ > 0, an estimator [1/(1 + λ) + ε]X + b is inadmissible (with squared error loss) under each of the following conditions:
(a) if E_θ(X) > 0 for all θ, b > 0 and ε > 0;
(b) if E_θ(X) < 0 for all θ, b < 0 and ε > 0 (Gupta 1966).

2.7 Brown (1986a) points out a connection between the information inequality and the unbiased estimator of the risk of Stein-type estimators.
(a) Show that (2.7) implies

R(θ, δ) ≥ [1 + b′(θ)]²/n + b²(θ) ≥ 1/n + 2b′(θ)/n + b²(θ)

and, hence, if R(θ, δ) ≤ R(θ, X̄), then 2b′(θ)/n + b²(θ) ≤ 0.
(b) Show that a nontrivial solution b(θ) would lead to an improved estimator x − g(x), for p = 1, in Corollary 4.7.2.

2.8 A density function f(x|θ) is variation reducing of order n + 1 (VR_{n+1}) if, for any function g(x) with k (k ≤ n) sign changes (ignoring zeros), the expectation E_θ g(X) = ∫ g(x)f(x|θ) dx has at most k sign changes. If E_θ g(X) has exactly k sign changes, they are in the same order. Show that f(x|θ) is VR₂ if and only if it has monotone likelihood ratio. (See TSH2, Lemma 2, Section 3.3 for the "if" implication.) Brown et al. (1981) provide a thorough introduction to this topic, including VR characterizations of many families of distributions (the exponential family is VR_∞, as is the χ²_ν with ν the parameter, and the noncentral χ²_ν(λ) in λ). There is an equivalence between VR_n and TP_n, Karlin's (1968) total positivity of order n, in that VR_n = TP_n.

2.9 For the situation of Example 2.9, show that:
(a) Without loss of generality, the restriction θ ∈ [a, b] can be reduced to θ ∈ [−m, m], m > 0.
(b) If Λ is the prior distribution that puts mass 1/2 on each of the points ±m, then the Bayes estimator against squared error loss is

δ(x̄) = m(e^{mnx̄} − e^{−mnx̄})/(e^{mnx̄} + e^{−mnx̄}) = m tanh(mnx̄).

(c) For m < 1/√n, max_{θ∈[−m,m]} R(θ, δ(X̄)) = max{R(−m, δ(X̄)), R(m, δ(X̄))}, and hence, by Corollary 1.6, δ is minimax. [Hint: Problem 2.8 can be used to show that the derivative of the risk function can have at most one sign change, from negative to positive, and hence any interior extremum can only be a minimum.]
(d) For m > 1.05/√n, δ of part (b) is no longer minimax. Explain why this is so and suggest an alternate estimator in this case. [Hint: Consider R(0, δ).]

2.10 For the situation of Example 2.10, show that:
(a) max_{θ∈[−m,m]} R(θ, aX̄ + b) = max{R(−m, aX̄ + b), R(m, aX̄ + b)}.
(b) The estimator a*X̄, with a* = m²/(1/n + m²), is the linear minimax estimator for all m, with minimax risk a*/n.
(c) X̄ is the linear minimax estimator for m = ∞.

2.11 Suppose X has distribution F_ξ and Y has distribution G_η, where ξ and η vary independently. If it is known that η = η₀, then any estimator δ(X, Y) can be improved upon by δ*(x) = E_Y δ(x, Y) = ∫ δ(x, y) dG_{η₀}(y). [Hint: Recall the proof of Theorem 1.6.1.]

2.12 In Example 2.13, prove that the estimator aY + b is inadmissible when a > 1/(r + 1). [Hint: Problems 2.4–2.6.]

2.13 Let X₁, ..., X_n be iid according to a N(0, σ²) density, and let S² = Σ X_i². We are interested in estimating σ² under squared error loss using linear estimators cS² + d, where c and d are constants. Show that:
(a) Admissibility of the estimator aY + b in Example 2.13 is equivalent to the admissibility of cS² + d, for appropriately chosen c and d.
(b) The risk of cS² + d is given by R(cS² + d, σ²) = 2nc²σ⁴ + [(nc − 1)σ² + d]².
(c) For d = 0, R(cS², σ²) < R(0, σ²) when 0 < c < 2/(n + 2), and hence the estimator aY + b in Example 2.13 is inadmissible when a = b = 0.
[This exercise illustrates the fact that constants are not necessarily admissible estimators.]

2.14 For the situation of Example 2.15, let Z = X̄/S.
(a) Show that the risk, under squared error loss, of δ = ϕ(z)s² is minimized by taking ϕ(z) = ϕ*_{µ,σ}(z) = E(S²/σ²|z)/E((S²/σ²)²|z).
(b) Stein (1964) showed that ϕ*_{µ,σ}(z) ≤ ϕ*_{0,1}(z) for every µ, σ. Assuming this is so, deduce that ϕ_s(Z)S² dominates [1/(n + 1)]S² in squared error loss, where

ϕ_s(z) = min{ϕ*_{0,1}(z), 1/(n + 1)}.

(c) Show that ϕ*_{0,1}(z) = (1 + z²)/(n + 2), and, hence, ϕ_s(Z)S² is given by (2.31).
(d) The best equivariant estimator of the form ϕ(Z)S² was derived by Brewster and Zidek (1974) and is given by ϕ_BZ(z) = E(S²|Z ≤ z)/E(S⁴|Z ≤ z), where the expectation is calculated assuming µ = 0 and σ = 1. Show that ϕ_BZ(Z)S² is generalized Bayes against the prior

π(µ, σ) dµ dσ = (1/σ) ∫₀^∞ u^{−1/2}(1 + u)^{−1} e^{−unµ²/σ²} du dµ dσ.

[Brewster and Zidek did not originally derive their estimator as a Bayes estimator, but rather first found the estimator and then found the prior. Brown (1968) considered a family of estimators similar to those of Stein (1964), which took different values depending on a cutoff point for z². Brewster and Zidek (1974) showed that the number of cutoff points can be arbitrarily large. They constructed a sequence of estimators, with decreasing risks and increasingly dense cutoffs, whose limit was the best equivariant estimator.]

2.15 Show the equivalence of the following relationships: (a) (2.26) and (2.27), (b) (2.34) and (2.35) when c = √((n − 1)/(n + 1)), and (c) (2.38) and (2.39).

2.16 In Example 2.17, show that the estimator aX/n + b is inadmissible for all (a, b) outside the triangle (2.39). [Hint: Problems 2.4–2.6.]

2.17 Prove admissibility of the estimators corresponding to the interior of the triangle (2.39) by applying Theorem 2.4 and using the results of Example 4.1.5.

2.18 Use Theorem 2.14 to provide an alternative proof of the admissibility of the estimator aX̄ + b satisfying (2.6) in Example 2.5.

2.19 Determine which estimators aX + b are admissible for estimating E(X) in the following situations, for squared error loss:
(a) X has a Poisson distribution.
(b) X has a negative binomial distribution (Gupta 1966).

2.20 Let X have the Poisson(λ) distribution, and consider the estimation of λ under the loss (d − λ)²/λ with the restriction 0 ≤ λ ≤ m, where m is known.
(a) Using an argument similar to that of Example 2.9, show that X is not minimax, and a least favorable prior distribution must have a set ω_Λ [of (1.5)] consisting of a finite number of points.
(b) Let Λ_a be a prior distribution that puts mass a_i, i = 1, ..., k, at parameter points b_i, i = 1, ..., k. Show that the Bayes estimator associated with this prior is

δ_a(x) = 1/E(λ⁻¹|x) = Σ_{i=1}^k a_i b_i^x e^{−b_i} / Σ_{i=1}^k a_i b_i^{x−1} e^{−b_i}.

(c) Let m₀ be the solution to m = e^{−m} (m₀ ≈ .57). Show that for 0 ≤ λ ≤ m, m ≤ m₀, a one-point prior (a₁ = 1, b₁ = m) yields the minimax estimator. Calculate the minimax risk and compare it to that of X.
(d) Let m₁ be the first positive zero of (1 + δ_Λ(m))² = 2 + m²/2, where Λ is a two-point prior (a₁ = a, b₁ = 0; a₂ = 1 − a, b₂ = m). Show that for 0 ≤ λ ≤ m, m₀ < m ≤ m₁, a two-point prior yields the minimax estimator (use Corollary 1.6). Calculate the minimax risk and compare it to that of X. [As m increases, the situation becomes more complex and exact minimax solutions become intractable. For these cases, linear approximations can be quite satisfactory. See Johnstone and MacGibbon 1992, 1993.]

2.21 Show that the conditions (2.41) and (2.42) of Example 2.22 are not only sufficient but also necessary for admissibility of (2.40).

2.22 Let X and Y be independently distributed according to Poisson distributions with E(X) = ξ and E(Y) = η, respectively.
Show that aX + bY + c is admissible for estimating ξ with squared error loss if and only if either 0 ≤ a < 1, b ≥ 0, c ≥ 0 or a = 1, b = c = 0 (Makani 1972).

2.23 Let X be distributed with density (1/2)β(θ)e^{θx}e^{−|x|}, |θ| < 1.
(a) Show that β(θ) = 1 − θ².
(b) Show that aX + b is admissible for estimating E_θ(X) with squared error loss if and only if 0 ≤ a ≤ 1/2. [Hint: (b) To see necessity, let δ = (1/2 + ε)X + b (0 < ε ≤ 1/2) and show that δ is dominated by δ′ = (1 − (1/2)α + αε)X + (b/α) for some α with 0 < α < 1/(1/2 − ε).]

2.24 Let X be distributed as N(θ, 1) and let θ have the improper prior density π(θ) = e^θ (−∞ < θ < ∞). For squared error loss, the formal Bayes estimator of θ is X + 1, which is neither minimax nor admissible. (See also Problem 2.15.) Conditions under which the formal Bayes estimator corresponding to an improper prior distribution for θ in Example 3.4 is admissible are given by Zidek (1970).

2.25 Show that the natural parameter space of the family (2.16) is (−∞, ∞) for the normal (variance known), binomial, and Poisson distributions but not in the gamma or negative binomial case.

Section 3

3.1 Show that Theorem 3.2.7 remains valid for almost equivariant estimators.

3.2 Verify the density (3.1).

3.3 In Example 3.3, show that a loss function remains invariant under G if and only if it is a function of (d − θ)*.

3.4 In Example 3.3, show that neither of the loss functions [(d − θ)**]² or |(d − θ)**| is convex.

3.5 Let Y be distributed as G(y − η). If T = [Y] and X = Y − T, find the distribution of X and show that it depends on η only through η − [η].

3.6 (a) If X₁, ..., X_n are iid with density f(x − θ), show that the MRE estimator against squared error loss [the Pitman estimator of (3.1.28)] is the Bayes estimator against right-invariant Haar measure.
(b) If X₁, ..., X_n are iid with density (1/τ)f[(x − µ)/τ], show that:
(i) Under squared error loss, the Pitman estimator of (3.1.28) is the Bayes estimator against right-invariant Haar measure.
(ii) Under the loss (3.3.17), the Pitman estimator of (3.3.19) is the Bayes estimator against right-invariant Haar measure.

3.7 Prove formula (3.9).

3.8 Prove (3.11). [Hint: In the term on the left side, lim inf can be replaced by lim. Let the left side of (3.11) be A and the right side B, and let A_N = inf h(a, b), where the inf is taken over a ≤ −N, b ≥ N, N = 1, 2, ..., so that A_N → A. There exist (a_N, b_N) such that |h(a_N, b_N) − A_N| ≤ 1/N. Then, h(a_N, b_N) → A and A ≥ B.]

3.9 In Example 3.8, let h(θ) be the length of the path θ after cancellation. Show that h does not satisfy conditions (3.2.11).

3.10 Discuss Example 3.8 for the case that the random walk, instead of being in the plane, is (a) on the line and (b) in three-space.

3.11 (a) Show that the probabilities (3.17) add up to 1.
(b) With p_k given by (3.17), show that the risk (3.16) is infinite.
[Hint: (a) 1/[k(k + 1)] = (1/k) − 1/(k + 1).]

3.12 Show that the risk R(θ, δ) of (3.18) is finite. [Hint: R(θ, δ) < Σ_{k: k>M|k+θ|} 1/(k + 1) = Σ_{c<k<d} 1/(k + 1) < ∫_c^{d+1} dx/x, where c = M|θ|/(M + 1) and d = M|θ|/(M − 1). The reason for the second step is that values of k outside (c, d) make no contribution to the sum.]

3.13 Show that the two estimators δ* and δ**, defined by (3.20) and (3.21), respectively, are equivariant.

3.14 Prove the relations (3.22) and (3.23).

3.15 Let the distribution of X depend on parameters θ and ϑ, let the risk function of an estimator δ = δ(x) of θ be R(θ, ϑ; δ), and let r(θ, δ) = ∫ R(θ, ϑ; δ) dP(ϑ) for some distribution P.
If δ₀ minimizes sup_θ r(θ, δ) and satisfies sup_θ r(θ, δ₀) = sup_{θ,ϑ} R(θ, ϑ; δ₀), show that δ₀ minimizes sup_{θ,ϑ} R(θ, ϑ; δ).

Section 4

4.1 In Example 4.2, show that an estimator δ is equivariant if and only if it satisfies (4.11) and (4.12).

4.2 Show that a function µ satisfies (4.12) if and only if it depends only on Σ X_i².

4.3 Verify the Bayes estimator (4.15).

4.4 Let X_i be independent with binomial distribution b(p_i, n_i), i = 1, ..., r. For estimating p = (p₁, ..., p_r) with average squared error loss (4.17), find the minimax estimator of p, and determine whether it is admissible.

4.5 Establishing the admissibility of the normal mean in two dimensions is quite difficult, made so by the fact that the conjugate priors fail in the limiting Bayes method. Let X ∼ N₂(θ, I) and L(θ, δ) = |θ − δ|². The conjugate priors are θ ∼ N₂(0, τ²I), τ² > 0.
(a) For this sequence of priors, verify that the limiting Bayes argument, as in Example 2.8, results in inequality (4.18), which does not establish admissibility.
(b) Stein (in James and Stein 1961) proposed the sequence of priors that works to prove X is admissible by the limiting Bayes method. A version of these priors, given by Brown and Hwang (1982), is

g_n(θ) = 1 if |θ| ≤ 1;  1 − log|θ|/log n if 1 ≤ |θ| ≤ n;  0 if |θ| ≥ n

for n = 2, 3, .... Show that δ_{g_n}(x) → x a.e. as n → ∞.
(c) A special case of the very general results of Brown and Hwang (1982) states that for the priors π_n(θ) = h_n²(θ)g(θ), the limiting Bayes method (Blyth's method) will establish the admissibility of the estimator δ_g(x) [the generalized Bayes estimator against g(θ)] if

∫_{θ: |θ|>1} g(θ) dθ / (|θ|²[max{log |θ|, log 2}]²) < ∞.

Show that this holds for g(θ) = 1 and that δ_g(x) = x, so x is admissible.
[Stein (1956b) originally established the admissibility of X in two dimensions using an argument based on the information inequality. His proof was complicated by the fact that he needed some additional invariance arguments to establish the result. See Theorem 7.19 and Problem 7.19 for more general statements of the Brown/Hwang result.]

4.6 Let X₁, X₂, ..., X_r be independent with X_i ∼ N(θ_i, 1). The following heuristic argument, due to Stein (1956b), suggests that it should be possible, at least for large r and hence large |θ|, to improve on the estimator X = (X₁, X₂, ..., X_r).
(a) Use a Taylor series argument to show |x|² = r + |θ|² + O_p[(r + |θ|²)^{1/2}], so, with high probability, the true θ is in the sphere {θ : |θ|² ≤ |x|² − r}. The usual estimator X is approximately the same size as θ and will almost certainly be outside of this sphere.
(b) Part (a) suggested to Stein an estimator of the form δ(x) = [1 − h(|x|²)]x. Show that

|θ − δ(x)|² = (1 − h)²|x − θ|² − 2h(1 − h)θ′(x − θ) + h²|θ|².

(c) Establish that θ′(x − θ)/|θ| = Z ∼ N(0, 1) and |x − θ|² ≈ r, and, hence,

|θ − δ(x)|² ≈ (1 − h)²r + h²|θ|² + O_p[(r + |θ|²)^{1/2}].

(d) Show that the leading term in part (c) is minimized at h = r/(r + |θ|²), and since |x|² ≈ r + |θ|², this leads to the estimator δ(x) = (1 − r/|x|²)x of (4.20).
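Stein's heuristic in Problem 4.6 is easy to watch in simulation. The sketch below (the dimension and θ are arbitrary choices) confirms that |X|² concentrates near r + |θ|² and that the resulting estimator (4.20) has smaller estimated risk than X at the chosen θ; this is an illustration at one parameter point, not a proof of domination.

```python
import numpy as np

rng = np.random.default_rng(4)
r, n = 50, 100_000
theta = rng.normal(size=r)                       # an arbitrary fixed mean vector
x = theta + rng.normal(size=(n, r))
norm2 = np.sum(x * x, axis=1)
print(norm2.mean(), r + theta @ theta)           # |X|^2 is close to r + |theta|^2

delta = (1 - r / norm2)[:, None] * x             # Stein's heuristic estimator (4.20)
print(np.mean(np.sum((x - theta) ** 2, axis=1)),      # risk of X: about r
      np.mean(np.sum((delta - theta) ** 2, axis=1)))  # noticeably smaller here
```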
4.7 If S² is distributed as χ²_r, use (2.2.5) to show that E(S⁻²) = 1/(r − 2).

4.8 In Example 4.7, show that R is nonsingular for ρ₁ and ρ₂ and singular for ρ₃ and ρ₄.

4.9 Show that the function ρ₂ of Example 4.7 is convex.

4.10 In Example 4.7, show that X is admissible for (a) ρ₃ and (b) ρ₄. [Hint: (a) It is enough to show that X₁ is admissible for estimating θ₁ with loss (d₁ − θ₁)². This can be shown by letting θ₂, ..., θ_r be known. (b) Note that X is admissible minimax for θ = (θ₁, ..., θ_r) when θ₁ = ··· = θ_r.]

4.11 In Example 4.8, show that X is admissible under the assumptions (ii)(a). [Hint:
i. If v(t) > 0 is such that ∫ [1/v(t)]e^{−t²/2τ²} dt < ∞, show that there exists a constant k(τ) for which λ_τ(θ) = k(τ) Π_j exp(−θ_j²/2τ²)/v(θ_j) is a probability density for θ = (θ₁, ..., θ_r).
ii. If the X_i are independent N(θ_i, 1) and θ has the prior λ_τ(θ), the Bayes estimator of θ with loss function (4.27) is τ²X/(1 + τ²).
iii. To prove X admissible, use (4.18) with λ_τ(θ) instead of a normal prior.]

4.12 Let 𝓛 be a family of loss functions and suppose there exist L₀ ∈ 𝓛 and a minimax estimator δ₀ with respect to L₀ such that, in the notation of (4.29), sup_{L,θ} R_L(θ, δ₀) = sup_θ R_{L₀}(θ, δ₀). Then, δ₀ is minimax with respect to 𝓛; that is, it minimizes sup_{L,θ} R_L(θ, δ).

4.13 Assuming (4.25), show that E = 1 − [(r − 2)²/(r|X − µ|²)] is the unique unbiased estimator of the risk (4.25), and that E is inadmissible. [The estimator E is also unbiased for estimation of the loss L(θ, δ). See Note 9.5.]

4.14 A natural extension of risk domination under a particular loss is risk domination under a class of losses. Hwang (1985) defines universal domination of δ by δ′ if the inequality E_θL(|θ − δ′(X)|) ≤ E_θL(|θ − δ(X)|) for all θ holds for all loss functions L(·) that are nondecreasing, with at least one loss function producing nonidentical risks.
(a) Show that δ′ universally dominates δ if and only if it stochastically dominates δ, that is, if and only if P_θ(|θ − δ′(X)| > k) ≤ P_θ(|θ − δ(X)| > k) for all k and θ, with strict inequality for some θ. [Hint: For a positive random variable Y, recall that EY = ∫₀^∞ P(Y > t) dt. Alternatively, use the fact that stochastic ordering on random variables induces an ordering on expectations. See Lemma 1, Section 3.3 of TSH2.]
(b) For X ∼ N_r(θ, I), show that the James-Stein estimator δ_c(x) = (1 − c/|x|²)x does not universally dominate x. [From (a), it only need be shown that P_θ(|θ − δ_c(X)| > k) > P_θ(|θ − X| > k) for some θ and k. Take θ = 0 and find such a k.]
Hwang (1985) and Brown and Hwang (1989) explore many facets of universal domination. Hwang (1985) shows that even δ⁺ does not universally dominate X unless the class of loss functions is restricted. We also note that although the inequality in part (a) may seem reminiscent of the "Pitman closeness" criterion, there is really no relation. The criterion of Pitman closeness suffers from a number of defects not shared by stochastic domination (see Robert et al. 1993).

Section 5

5.1 Show that the estimator δ_c defined by (5.2) with 0 < c = 1 − W < 1 is dominated by any δ_d with |d − 1| < W.

5.2 In the context of Theorem 5.1, show that E_θ(1/|X|²) ≤ E₀(1/|X|²) < ∞. [Hint: The chi-squared distribution has monotone likelihood ratio in the noncentrality parameter.]

5.3 Stigler (1990) presents an interesting explanation of the Stein phenomenon using a regression perspective, and also gives an identity that can be used to prove the minimaxity of the James-Stein estimator. For X ∼ N_r(θ, I) and δ_c(x) = (1 − c/|x|²)x:
(a) Show that

E_θ|θ − δ_c(X)|² = r + 2c E_θ[(X′θ + c/2)/|X|² − 1].

(b) The expression in square brackets is increasing in c. Prove the minimaxity of δ_c for 0 ≤ c ≤ 2(r − 2) by establishing Stigler's identity

E_θ[(X′θ + r − 2)/|X|²] = 1.

[Hint: Part (b) can be established by transforming to polar coordinates and directly integrating, or by writing X′θ/|X|² = [X′(θ − X) + |X|²]/|X|² and using Stein's identity.]
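Stigler's identity in part (b) is easy to confirm by simulation; in this sketch, the dimension and θ are arbitrary illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(5)
r, n = 6, 2_000_000
theta = np.r_[2.0, -1.0, np.zeros(r - 2)]        # illustrative theta
x = theta + rng.normal(size=(n, r))
val = (x @ theta + r - 2) / np.sum(x * x, axis=1)
print(val.mean())     # close to 1, as Stigler's identity asserts
```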
5.4 (a) Prove Theorem 5.5.
(b) Apply Theorem 5.5 to establish conditions for minimaxity of Strawderman's (1971) proper Bayes estimator given by (5.10) and (5.12).
[Hint: (a) Use the representation of the risk given in (5.4), with g(x) = c(|x|)(r − 2)x/|x|². Show that R(θ, δ) can be written

R(θ, δ) = 1 − [(r − 2)²/r] E_θ[c(|X|)(2 − c(|X|))/|X|²] − [2(r − 2)/r] E_θ[Σ_i X_i (∂c(|X|)/∂X_i)/|X|²],

and an upper bound on R(θ, δ) is obtained by dropping the last term. It is not necessary to assume that c(·) is differentiable everywhere; it can be nondifferentiable on a set of Lebesgue measure zero.]

5.5 For the hierarchical model (5.11) of Strawderman (1971):
(a) Show that the Bayes estimator against squared error loss is given by E(θ|x) = [1 − E(λ|x)]x, where

E(λ|x) = ∫₀¹ λ^{r/2−a+1} e^{−λ|x|²/2} dλ / ∫₀¹ λ^{r/2−a} e^{−λ|x|²/2} dλ.

(b) Show that E(λ|x) has the alternate representations

E(λ|x) = [(r − 2a + 2)/|x|²] P(χ²_{r−2a+4} ≤ |x|²)/P(χ²_{r−2a+2} ≤ |x|²),
E(λ|x) = (r − 2a + 2)/|x|² − 2e^{−|x|²/2} / [|x|² ∫₀¹ λ^{r/2−a} e^{−λ|x|²/2} dλ],

and hence that a = 0 gives the estimator of (5.12).
(c) Show that |x|²E(λ|x) is increasing in |x|², with maximum r − 2a + 2. Hence, the Bayes estimator is minimax if r − 2a + 2 ≤ 2(r − 2), or r ≥ 2(3 − a). For 0 ≤ a ≤ 1, this requires r ≥ 5.
[Berger (1976b) considers matrix generalizations of this hierarchical model and derives admissible minimax estimators. Proper Bayes minimax estimators only exist if r ≥ 5 (Strawderman 1971); however, formal Bayes minimax estimators exist for r = 3 and 4.]

5.6 Consider a generalization of the Strawderman (1971) hierarchical model of Problem 5.5:

X|θ ∼ N(θ, I), θ|λ ∼ N(0, λ⁻¹(1 − λ)I), λ ∼ π(λ).

(a) Show that the Bayes estimator against squared error loss is [1 − E(λ|x)]x, where

E(λ|x) = ∫₀¹ λ^{r/2+1} e^{−λ|x|²/2} π(λ) dλ / ∫₀¹ λ^{r/2} e^{−λ|x|²/2} π(λ) dλ.

(b) Suppose λ ∼ beta(α, β), with density π(λ) = [Γ(α + β)/(Γ(α)Γ(β))] λ^{α−1}(1 − λ)^{β−1}. Show that the Bayes estimator is minimax if β ≥ 1 and 0 ≤ α ≤ (r − 4)/2. [Hint: Use integration by parts on E(λ|x), and apply Theorem 5.5. These estimators were introduced by Faith (1978).]
(c) Let t = λ⁻¹(1 − λ), the prior precision of θ. If λ ∼ beta(α, β), show that the density of t is proportional to t^{α−1}/(1 + t)^{α+β}, that is, t ∼ F_{2α,2β}, the F-distribution with 2α and 2β degrees of freedom. [Strawderman's prior of Problem 5.5 corresponds to β = 1 and 0 < α < 1. If we take α = 1/2 and β = 1, then t ∼ F_{1,2}.]
(d) Two interesting limiting cases are α = 1, β = 0 and α = 0, β = 1. For each case, show that the resulting prior on t is proper, and comment on the minimaxity of the resulting estimators.

5.7 Faith (1978) considered the hierarchical model

X|θ ∼ N(θ, I), θ|t ∼ N(0, (1/t)I), t ∼ Gamma(a, b), that is, π(t) = [1/(Γ(a)bᵃ)] t^{a−1} e^{−t/b}.

(a) Show that the marginal prior for θ, unconditional on t, is π(θ) ∝ (2/b + |θ|²)^{−(a+r/2)}, a multivariate Student's t-distribution.
(b) Show that a ≤ −1 is a sufficient condition for Σ_i ∂²π(θ)/∂θ_i² ≥ 0 and, hence, is a sufficient condition for the minimaxity of the Bayes estimator against squared error loss.
(c) Show, more generally, that the Bayes estimator against squared error loss is minimax if a ≤ (r − 4)/2 and a ≤ 1/b + 3.
(d) What choices of a and b would produce a multivariate Cauchy prior for π(θ)? Is the resulting Bayes estimator minimax?
5.8 (a) Let X ∼ N(θ, Σ) and consider the estimation of θ under the loss L(θ, δ) = (θ − δ)′(θ − δ). Show that R(θ, X) = tr Σ, the minimax risk. Hence, X is a minimax estimator.
(b) Let X ∼ N(θ, I) and consider estimation of θ under the loss L(θ, δ) = (θ − δ)′Q(θ − δ), where Q is a known positive definite matrix. Show that R(θ, X) = tr Q, the minimax risk. Hence, X is a minimax estimator.
(c) Show that the calculations in parts (a) and (b) are equivalent.

5.9 In Theorem 5.7, verify

E_θ[(c(|X|²)/|X|²) X′(θ − X)] = −E_θ[(c(|X|²)/|X|²) tr(Σ) − (2c(|X|²)/|X|⁴) X′ΣX + (2c′(|X|²)/|X|²) X′ΣX].

[Hint: There are several ways to do this:
(a) Write

E_θ[(c(|X|²)/|X|²) X′(θ − X)] = E_θ[(c(Y′ΣY)/(Y′ΣY)) Y′Σ(η − Y)] = Σ_i E_θ[(c(Y′ΣY)/(Y′ΣY)) Σ_j Y_j σ_{ji}(η_i − Y_i)],

where Σ = {σ_ij} and Y = Σ^{−1/2}X ∼ N(Σ^{−1/2}θ, I) = N(η, I). Now apply Stein's lemma.
(b) Write Σ = PDP′, where P is an orthogonal matrix (P′P = I) and D is the diagonal matrix of eigenvalues of Σ, D = diag{d_i}. Then, establish that

E_θ[(c(|X|²)/|X|²) X′(θ − X)] = Σ_j E_θ[(c(Σ_i d_i Z_i²)/(Σ_i d_i Z_i²)) d_j Z_j(η*_j − Z_j)],

where Z = P′Σ^{−1/2}X and η* = P′Σ^{−1/2}θ. Now apply Stein's lemma.]

5.10 In Theorem 5.7, show that condition (i) allows the most shrinkage when Σ = σ²I, for some value of σ². That is, show that for all r × r positive definite Σ,

max_Σ tr Σ/λ_max(Σ) = tr(σ²I)/λ_max(σ²I) = r.

[Hint: Write tr Σ/λ_max(Σ) = Σ λ_i/λ_max, where the λ_i's are the eigenvalues of Σ.]

5.11 The estimation problem of (5.18),

X ∼ N(θ, Σ), L(θ, δ) = (θ − δ)′Q(θ − δ),

where both Σ and Q are positive definite matrices, can always be reduced, without loss of generality, to the simpler case

Y ∼ N(η, I), L(η, δ*) = (η − δ*)′D_{q*}(η − δ*),

where D_{q*} is a diagonal matrix with elements (q*₁, ..., q*_r), using the following argument. Define R = Σ^{1/2}B, where Σ^{1/2} is a symmetric square root of Σ (that is, Σ^{1/2}Σ^{1/2} = Σ), and B is the matrix of eigenvectors of Σ^{1/2}QΣ^{1/2} (that is, B′Σ^{1/2}QΣ^{1/2}B = D_{q*}).
(a) Show that R satisfies R′Σ⁻¹R = I, R′QR = D_{q*}.
(b) Define Y = R⁻¹X. Show that Y ∼ N(η, I), where η = R⁻¹θ.
(c) Show that the estimation problems are equivalent if we define δ*(Y) = R⁻¹δ(RY).
[Note: If Σ has the eigenvalue-eigenvector decomposition P′ΣP = D = diag(d₁, ..., d_r), then we can define Σ^{1/2} = PD^{1/2}P′, where D^{1/2} is a diagonal matrix with elements √d_i. Since Σ is positive definite, the d_i's are positive.]

5.12 Complete the proof of Theorem 5.9.
(a) Show that the risk of δ(x) is

R(θ, δ) = E_θ[(θ − X)′Q(θ − X)] − 2E_θ[(c(|X|²)/|X|²) X′Q(θ − X)] + E_θ[(c²(|X|²)/|X|⁴) X′QX],

where E_θ(θ − X)′Q(θ − X) = tr(Q).
(b) Use Stein's lemma to verify

E_θ[(c(|X|²)/|X|²) X′Q(θ − X)] = −E_θ[(c(|X|²)/|X|²) tr(Q) − (2c(|X|²)/|X|⁴) X′QX + (2c′(|X|²)/|X|²) X′QX].

Use an argument similar to the one in Theorem 5.7. [Hint: Write

E_θ[(c(|X|²)/|X|²) X′Q(θ − X)] = Σ_i E_θ[(c(|X|²)/|X|²) Σ_j X_j q_{ji}(θ_i − X_i)]

and apply Stein's lemma.]

5.13 Prove the following "generalization" of Theorem 5.9.

Theorem 8.2 Let X ∼ N(θ, Σ). An estimator of the form (5.13) is minimax against the loss L(θ, δ) = (θ − δ)′Q(θ − δ), provided
(i) 0 ≤ c(|x|²) ≤ 2[tr(Q*)/λ_max(Q*)] − 4,
(ii) the function c(·) is nondecreasing,
where Q* = Σ^{1/2}QΣ^{1/2}.

5.14 Brown (1975) considered the performance of an estimator against a class of loss functions

𝓛(C) = {L : L(θ, δ) = Σ_{i=1}^r c_i(θ_i − δ_i)²; (c₁, ..., c_r) ∈ C}

for a specified set C, and proved the following theorem.

Theorem 8.3 For X ∼ N_r(θ, I), there exists a spherically symmetric estimator δ, that is, δ(x) = [1 − h(|x|²)]x, where h(|x|²) ≠ 0, such that R(θ, δ) ≤ R(θ, X) for all L ∈ 𝓛(C) if, for all (c₁, ..., c_r) ∈ C, the inequality Σ_{j=1}^r c_j > 2c_k holds for k = 1, ..., r.

Show that this theorem is equivalent to Theorem 5.9 in that the above inequality is equivalent to part (i) of Theorem 5.9, and the estimator (5.13) is minimax. [Hint: Identify the eigenvalues of Q with c₁, ..., c_r.] Bock (1975) also establishes this theorem; see also Shinozaki (1980).
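The reduction in Problem 5.11 is a short computation. The following sketch builds R = Σ^{1/2}B from randomly generated positive definite Σ and Q (the test matrices are arbitrary) and verifies the two identities of part (a).

```python
import numpy as np

rng = np.random.default_rng(6)
r = 4
A = rng.normal(size=(r, r)); Sigma = A @ A.T + r * np.eye(r)   # random pos. def. Sigma
G = rng.normal(size=(r, r)); Q = G @ G.T + np.eye(r)           # random pos. def. Q

# R = Sigma^(1/2) B, with B the eigenvectors of Sigma^(1/2) Q Sigma^(1/2)
w, V = np.linalg.eigh(Sigma)
Sig_half = V @ np.diag(np.sqrt(w)) @ V.T          # symmetric square root of Sigma
dq, B = np.linalg.eigh(Sig_half @ Q @ Sig_half)
R = Sig_half @ B

print(np.allclose(R.T @ np.linalg.inv(Sigma) @ R, np.eye(r)))  # R' Sigma^-1 R = I
print(np.allclose(R.T @ Q @ R, np.diag(dq)))                   # R' Q R = D_{q*}
```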
5.15 There are various ways to seemingly generalize Theorems 5.5 and 5.9. However, if both the estimator and loss function are allowed to depend on the covariance and loss matrices, then linear transformations can usually reduce the problem. Let X ∼ N_r(θ, Σ), let the loss function be L(θ, δ) = (θ − δ)′Q(θ − δ), and consider the following "generalizations" of Theorems 5.5 and 5.9:

(a) δ(x) = [1 − c(x′Σ⁻¹x)/(x′Σ⁻¹x)]x, Q = Σ⁻¹,
(b) δ(x) = [1 − c(x′Qx)/(x′Qx)]x, Σ = I or Σ = Q,
(c) δ(x) = [1 − c(x′Σ^{−1/2}QΣ^{−1/2}x)/(x′Σ^{−1/2}QΣ^{−1/2}x)]x.

In each case, use transformations to reduce the problem to that of Theorem 5.5 or 5.9, and deduce the condition for minimaxity of δ. [Hint: For example, in (a) the transformation Y = Σ^{−1/2}X will show that δ is minimax if 0 < c(·) < 2(r − 2).]

5.16 A natural extension of the estimator (5.10) is to one that shrinks toward an arbitrary known point µ = (µ₁, ..., µ_r),

δ_µ(x) = µ + [1 − c(S)(r − 2)/|x − µ|²](x − µ),

where |x − µ|² = Σ(x_i − µ_i)².
(a) Show that, under the conditions of Theorem 5.5, δ_µ is minimax.
(b) Show that its positive-part version is a better estimator.

5.17 Let X ∼ N_r(θ, I). Show that the Bayes estimator of θ, against squared error loss, is given by δ(x) = x + ∇ log m(x), where m(x) is the marginal density function and ∇f = {∂f/∂x_i}.

5.18 Verify (5.27). [Hint: Show that, as a function of |x|², the only possible interior extremum is a minimum, so the maximum must occur either at |x|² = 0 or |x|² = ∞.]

5.19 The property of superharmonicity, and its relationship to minimaxity, is not restricted to Bayes estimators. For X ∼ N_r(θ, I), a pseudo-Bayes estimator (so named, and investigated, by Bock, 1988) is an estimator of the form x + ∇ log m(x), where m(x) is not necessarily a marginal density.
(a) Show that the positive-part Stein estimator

δ⁺_a = µ + [1 − a/|x − µ|²]⁺(x − µ)

is a pseudo-Bayes estimator with

m(x) = e^{−(1/2)|x−µ|²} if |x − µ|² < a;  (|x − µ|²)^{−a/2} if |x − µ|² ≥ a.

(b) Show that, except at the point of discontinuity, if a ≤ r − 2, then Σ_{i=1}^r ∂²m(x)/∂x_i² ≤ 0, so m(x) is superharmonic.
(c) Show how to modify the proof of Corollary 5.11 to accommodate superharmonic functions m(x) with a finite number of discontinuities of measure zero.
This result is adapted from George (1986a, 1986b), who exploits both pseudo-Bayes estimators and superharmonicity to establish minimaxity of an interesting class of estimators that are further investigated in the next problem.

5.20 For X|θ ∼ N_r(θ, I), George (1986a, 1986b) looked at multiple shrinkage estimators, those that can shrink toward a number of different targets. Suppose that θ ∼ π(θ) = Σ_{j=1}^k ω_j π_j(θ), where the ω_j are known positive weights, Σ ω_j = 1.
(a) Show that the Bayes estimator against π(θ), under squared error loss, is given by δ*(x) = x + ∇ log m*(x), where m*(x) = Σ_{j=1}^k ω_j m_j(x) and

m_j(x) = ∫ [1/(2π)^{r/2}] e^{−(1/2)|x−θ|²} π_j(θ) dθ.

(b) Clearly, δ* is minimax if m*(x) is superharmonic. Show that δ*(x) is minimax if either (i) m_j(x) is superharmonic, j = 1, ..., k, or (ii) π_j(θ) is superharmonic, j = 1, ..., k. [Hint: Problem 1.7.16.]
(c) The real advantage of δ* occurs when the components specify different targets. For ρ_j = ω_j m_j(x)/m*(x), let δ*(x) = Σ_{j=1}^k ρ_j δ⁺_j(x), where

δ⁺_j(x) = µ_j + [1 − (r − 2)/|x − µ_j|²]⁺(x − µ_j)

and the µ_j's are target vectors. Show that δ*(x) is minimax. [Hint: Problem 5.19.]
[George (1986a, 1986b) investigated many types of multiple targets, including multiple points, subspaces, clusters, and subvectors. The subvector problem was also considered by Berger and Dey (1983a, 1983b).
Multiple shrinkage estimators were also investigated by Ki and Tsui (1990) and Withers (1991).]

5.21 Let X_i, Y_j be independent N(ξ_i, 1) and N(η_j, 1), respectively (i = 1, ..., r; j = 1, ..., s).
(a) Find an estimator of (ξ₁, ..., ξ_r; η₁, ..., η_s) that would be good near ξ₁ = ··· = ξ_r = ξ, η₁ = ··· = η_s = η, with ξ and η unknown, if the variability of the ξ's and η's is about the same.
(b) When the loss function is (4.17), determine the risk function of your estimator.
[Hint: Consider the Bayes situation in which ξ_i ∼ N(ξ, A) and η_j ∼ N(η, A). See Berger 1982b for further development of such estimators.]

5.22 The early proofs of minimaxity of Stein estimators (James and Stein 1961, Baranchik 1970) relied on the representation of a noncentral χ²-distribution as a Poisson mixture of central χ²'s (TSH2, Problem 6.7). In particular, if χ²_r(λ) is a noncentral χ² random variable with noncentrality parameter λ, then E_λ h(χ²_r(λ)) = E[E h(χ²_{r+2K})|K], where K ∼ Poisson(λ/2) and χ²_{r+2k} is a central χ² random variable with r + 2k degrees of freedom. Use this representation, and the properties of the central χ²-distribution, to establish the following identities for X ∼ N_r(θ, I) and λ = |θ|².
(a) E_θ[X′θ/|X|²] = |θ|² E[1/χ²_{r+2}(λ)].
(b) (r − 2)E[1/χ²_r(λ)] + |θ|² E[1/χ²_{r+2}(λ)] = 1.
(c) For δ(x) = (1 − c/|x|²)x, use the identities (a) and (b) to show that for L(θ, δ) = |θ − δ|²,

R(θ, δ) = r + 2c|θ|² E[1/χ²_{r+2}(λ)] − 2c + c² E[1/χ²_r(λ)]
= r + 2c[1 − (r − 2)E(1/χ²_r(λ))] − 2c + c² E[1/χ²_r(λ)],

and, hence, that δ(x) is minimax if 0 ≤ c ≤ 2(r − 2).
[See Bock 1975 or Casella 1980 for more identities involving noncentral χ² expectations.]

5.23 Let χ²_r(λ) be a χ² random variable with r degrees of freedom and noncentrality parameter λ.
(a) Show that E[1/χ²_r(λ)] = E{E[1/χ²_{r+2K} | K]} = E[1/(r − 2 + 2K)], where K ∼ Poisson(λ/2).
(b) Establish (5.32).

5.24 For the most part, the risk function of a Stein estimator increases as |θ| moves away from zero (if zero is the shrinkage target). To guarantee that the risk function is monotone increasing in |θ| (that is, that there are no "dips" in the risk as in Berger's 1976a tail minimax estimators) requires a somewhat stronger assumption on the estimator (Casella 1990). Let X ∼ N_r(θ, I) and L(θ, δ) = |θ − δ|², and consider the Stein estimator δ(x) = [1 − c(|x|²)(r − 2)/|x|²]x.
(a) Show that if 0 ≤ c(·) ≤ 2 and c(·) is concave and twice differentiable, then δ(x) is minimax. [Hint: Problem 1.7.7.]
(b) Under the conditions in part (a), the risk function of δ(x) is nondecreasing in |θ|. [Hint: The conditions on c(·), together with the identity (d/dλ)E_λ[h(χ²_p(λ))] = E_λ{[∂/∂χ²_{p+2}(λ)]h(χ²_{p+2}(λ))}, where χ²_p(λ) is a noncentral χ² random variable with p degrees of freedom and noncentrality parameter λ, can be used to show that (∂/∂|θ|²)R(θ, δ) > 0.]

5.25 In the spirit of Stein's "large r and |θ|" argument, Casella and Hwang (1982) investigated the limiting risk ratio of δ^{JS}(x) = (1 − (r − 2)/|x|²)x to that of x. If X ∼ N_r(θ, I) and L(θ, δ) = |θ − δ|², they showed

lim_{r→∞, |θ|²/r→c} R(θ, δ^{JS})/R(θ, X) = c/(c + 1).

To establish this limit, we can use the following steps.
(a) Show that R(θ, δ^{JS})/R(θ, X) = 1 − [(r − 2)²/r] E_θ[1/|X|²].
(b) Show that 1/(r − 2 + |θ|²) ≤ E_θ[1/|X|²] ≤ [1/(r − 2)]·[r/(r + |θ|²)].
(c) Show that the upper and lower bounds on the risk ratio both have the same limit.
[Hint: (b) The upper bound is a consequence of Problem 5.22(b). For the lower bound, show E_θ(1/|X|²) = E[1/(r − 2 + 2K)], where K ∼ Poisson(|θ|²/2), and use Jensen's inequality.]
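The limit in Problem 5.25 can be previewed by Monte Carlo: fixing |θ|²/r = c and letting r grow (the values below are arbitrary), the estimated risk ratio of δ^{JS} to X approaches c/(c + 1).

```python
import numpy as np

rng = np.random.default_rng(7)
c, n = 1.0, 10_000
for r in (10, 100, 1000):
    theta = np.sqrt(c) * np.ones(r)                  # |theta|^2 / r = c
    x = theta + rng.normal(size=(n, r))
    norm2 = np.sum(x * x, axis=1, keepdims=True)
    delta = (1 - (r - 2) / norm2) * x                # James-Stein estimator
    ratio = np.mean(np.sum((delta - theta) ** 2, axis=1)) / r
    print(r, round(ratio, 3), c / (c + 1))           # ratio tends to c/(c+1) = 0.5
```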
Section 6

6.1 Referring to Example 6.1, this problem will establish the validity of the expression (6.2) for the risk of the estimator δ^L of (6.1), using an argument similar to that in the proof of Theorem 5.7.
(a) Show that

R(θ, δ^L) = Σ_i E_θ[θ_i − δ^L_i(X)]² = Σ_i E_θ[(θ_i − X_i)² + 2c(r − 3)(θ_i − X_i)(X_i − X̄)/S + [c(r − 3)]²(X_i − X̄)²/S²],

where S = Σ_j(X_j − X̄)².
(b) Use integration by parts to show

E_θ[(θ_i − X_i)(X_i − X̄)/S] = −E_θ[((r − 1)/r)(1/S) − 2(X_i − X̄)²/S²].

[Hint: Write the cross-term as −E_θ{[(X_i − X̄)/S](X_i − θ_i)} and adapt Stein's identity (Lemma 1.5.15).]
(c) Use the results of parts (a) and (b) to establish (6.2).

6.2 In Example 6.1, show that:
(a) The estimator δ^L is minimax if r ≥ 4 and c ≤ 2.
(b) The risk of δ^L is infinite if r ≤ 3.
(c) The minimum risk is equal to 3/r and is attained at θ₁ = θ₂ = ··· = θ_r.
(d) The estimator δ^L is dominated in risk by its positive-part version

δ^{L+} = x̄1 + [1 − c(r − 3)/|x − x̄1|²]⁺(x − x̄1).

6.3 In Example 6.2:
(a) Show that kx is the MLE if θ ∈ L_k.
(b) Show that δ_k(x) of (6.8) is minimax under squared error loss.
(c) Verify that θ of the form (6.4) satisfies T(T′T)⁻¹T′θ = θ for T of (6.5), and construct a minimax estimator that shrinks toward this subspace.

6.4 Consider the problem of estimating the mean based on X ∼ N_r(θ, I), where it is thought that θ_i = Σ_{j=1}^s β_j t_i^j, where (t₁, ..., t_r) are known, (β₁, ..., β_s) are unknown, and r − s > 2.
(a) Find the MLE of θ, say θ̂_R, if θ is assumed to be in the linear subspace L = {θ : Σ_{j=1}^s β_j t_i^j = θ_i, i = 1, ..., r}.
(b) Show that L can be written in the form (6.7), and find K.
(c) Construct a Stein estimator that shrinks toward the MLE of part (a) and prove that it is minimax.

6.5 For the situation of Example 6.3:
(a) Show that δ_c(x, y) is minimax if 0 ≤ c ≤ 2.
(b) Show that if ξ = 0,

R(θ, δ₁) = 1 − [σ²/(σ² + τ²)][(r − 2)/r], R(θ, δ_comb) = 1 − σ²/(σ² + τ²),

and, hence, R(θ, δ₁) > R(θ, δ_comb).
(c) For ξ ≠ 0, show that R(θ, δ_comb) = 1 − σ²/(σ² + τ²) + |ξ|²σ²/[r(σ² + τ²)] and hence is unbounded as |ξ| → ∞.

6.6 The Green and Strawderman (1991) estimator δ_c(x, y) can be derived as an empirical Bayes estimator.
(a) For X|θ ∼ N_r(θ, σ²I), Y|θ, ξ ∼ N_r(θ + ξ, τ²I), ξ ∼ N(0, γ²I), and θ_i ∼ Uniform(−∞, ∞), with σ² and τ² assumed known, show how to derive δ_{r−2}(x, y) as an empirical Bayes estimator.
(b) Calculate the Bayes estimator, δ^π, against squared error loss.
(c) Compare r(π, δ^π) and r(π, δ_{r−2}).
[Hint: For part (a), Green and Strawderman suggest starting with θ ∼ N(0, κ²I) and letting κ² → ∞ to get the uniform prior.]

6.7 In Example 6.4:
(a) Verify the risk function (6.13).
(b) Verify that for unknown σ², the risk function of the estimator (6.14) is given by (6.15).
(c) Show that the minimum risk of the estimator (6.14) is 1 − [ν/(ν + 2)][(r − 2)/r].

6.8 For the situation of Example 6.4, the analogous modification of the Lindley estimator (6.1) is

δ^L = x̄1 + [1 − (r − 3)/(Σ(x_i − x̄)²/σ̂²)](x − x̄1),

where σ̂² = S²/(ν + 2) and S²/σ² ∼ χ²_ν, independent of X.
(a) Show that R(θ, δ^L) = 1 − [ν/(ν + 2)][(r − 3)²/r] E_θ[σ²/Σ(x_i − x̄)²].
(b) Show that both δ^L and δ of (6.14) can be improved by using their positive-part versions.

6.9 The major application of Example 6.4 is to the situation Y_{ij} ∼ N(θ_i, σ²), i = 1, ..., s, j = 1, ..., n, independent, with Ȳ_i = (1/n)Σ_j Y_{ij} and σ̂² = Σ_{ij}(Y_{ij} − Ȳ_i)²/[s(n − 1)]. Show that the estimator

δ_i = ȳ̄ + [1 − c(s − 3)σ̂²/Σ_i(ȳ_i − ȳ̄)²]⁺(ȳ_i − ȳ̄)

is a minimax estimator, where ȳ̄ = Σ_{ij} y_{ij}/(sn), as long as 0 ≤ c ≤ 2. [The case of unequal sample sizes n_i is not covered by what we have done so far. See Efron and Morris 1973b, Berger and Bock 1976, and Morris 1983 for approaches to this problem. The case of totally unknown covariance matrix is considered by Berger et al. (1977) and Gleser (1979, 1986).]
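The Lindley estimator δ^L of (6.1) is simple to exercise numerically. In this sketch (the clustered means and c = 1 are arbitrary choices), its Monte Carlo risk is compared with that of X when the θ_i are nearly equal, the region where (6.2) predicts the greatest gain.

```python
import numpy as np

rng = np.random.default_rng(8)
r, n = 10, 200_000
theta = rng.normal(2.0, 0.3, r)        # means clustered around a common value
x = theta + rng.normal(size=(n, r))

xbar = x.mean(axis=1, keepdims=True)
S = np.sum((x - xbar) ** 2, axis=1, keepdims=True)
lindley = xbar + (1 - (r - 3) / S) * (x - xbar)       # delta^L of (6.1) with c = 1

print(np.mean(np.sum((x - theta) ** 2, axis=1)),        # risk of X: about r
      np.mean(np.sum((lindley - theta) ** 2, axis=1)))  # far smaller here
```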
6.10 The positive-part Lindley estimator of Problem 6.9 has an interesting interpretation in the one-way analysis of variance, in particular with respect to the usual test performed, that of H₀ : θ₁ = θ₂ = ··· = θ_s. This hypothesis is tested with the statistic

F = [Σ(ȳ_i − ȳ̄)²/(s − 1)] / [Σ(y_{ij} − ȳ_i)²/s(n − 1)],

which, under H₀, has an F-distribution with s − 1 and s(n − 1) degrees of freedom.
(a) Show that the positive-part Lindley estimator can be written as

δ_i = ȳ̄ + [1 − c((s − 3)/(s − 1))(1/F)]⁺(ȳ_i − ȳ̄).

(b) The null hypothesis is rejected if F is large. Show that this corresponds to using the MLE under H₀ if F is small, and a Stein estimator if F is large.
(c) The null hypothesis is rejected at level α if F > F_{s−1,s(n−1),α}. For s = 8 and n = 6:
(i) What is the level of the test that corresponds to choosing c = 1, the optimal risk choice?
(ii) What values of c correspond to choosing α = .05 or α = .01, typical α levels? Are the resulting estimators minimax?

6.11 Prove the following extension of Theorem 5.5 to the case of unknown variance, due to Strawderman (1973).

Theorem 8.4 Let X ∼ N_r(θ, σ²I) and let S²/σ² ∼ χ²_ν, independent of X. The estimator

δ_c(x) = [1 − (c(F, S²)/F)((r − 2)/(ν + 2))]x, where F = Σx_i²/S²,

is a minimax estimator of θ, provided
(i) for each fixed S², c(·, S²) is nondecreasing,
(ii) for each fixed F, c(F, ·) is nonincreasing,
(iii) 0 ≤ c(·, ·) ≤ 2.
[Note that, here, the loss function is taken to be scaled by σ², L(θ, δ) = |θ − δ|²/σ², as otherwise the minimax risk is not finite. Strawderman (1973) went on to derive proper Bayes minimax estimators in this case.]

6.12 For the situation of Example 6.5:
(a) Show that E_σ(1/σ²) = E₀[(r − 2)/|X|²].
(b) If 1/σ² ∼ χ²_ν/ν, then f(|x − θ|) of (6.19) is the multivariate t-distribution with ν degrees of freedom, and E₀|X|⁻² = (r − 2)⁻¹.
(c) If 1/σ² ∼ Y, where χ²_ν/ν is stochastically greater than Y, then δ(x) of (6.20) is minimax for this mixture as long as 0 ≤ c ≤ 2(r − 2).

6.13 Prove Lemma 6.2.

6.14 For the situation of Example 6.7:
(a) Verify that the estimator (6.25) is minimax if 0 ≤ c ≤ 2. (Theorem 5.5 will apply.)
(b) Referring to (6.27), show that

E|δ^π(X) − δ_c(X)|² I[|X|² ≥ c(r − 2)(σ² + τ²)] = [σ⁴/(σ² + τ²)] E{(1/Y)[Y − c(r − 2)]² I[Y ≥ c(r − 2)]},

where Y ∼ χ²_r.
(c) If χ²_ν denotes a chi-squared random variable with ν degrees of freedom, establish the identity E h(χ²_ν) = νE[h(χ²_{ν+2})/χ²_{ν+2}] to show that

r(π, δ^R) = r(π, δ^π) + [1/(r − 2)][σ⁴/(σ² + τ²)] E[Y − c(r − 2)]² I[Y ≥ c(r − 2)],

where, now, Y ∼ χ²_{r−2}.
(d) Verify (6.29), hence showing that r(π, δ^R) ≤ r(π, δ_c).
(e) Show that E(Y − a)²I(Y > a) is a decreasing function of a, and hence the maximal Bayes risk improvement, while maintaining minimaxity, is obtained at c = 2.

6.15 For X_i ∼ Poisson(λ_i), i = 1, ..., r, independent, and loss function L(λ, δ) = Σ(λ_i − δ_i)²/λ_i:
(a) For what values of a, α, and β are the estimators of (4.6.29) minimax? Are they also proper Bayes for these values?
(b) Let Λ = Σλ_i and define θ_i = λ_i/Λ, i = 1, ..., r. For the prior distribution π(θ, Λ) = m(Λ) dΛ Π_{i=1}^r dθ_i, show that the Bayes estimator is

δ^π(x) = [ψ_π(z)/(z + r − 1)]x, where z = Σx_i and ψ_π(z) = ∫ Λ^z e^{−Λ} m(Λ) dΛ / ∫ Λ^{z−1} e^{−Λ} m(Λ) dΛ.

(c) Show that the choice m(Λ) = 1 yields the estimator δ(x) = [1 − (r − 1)/(z + r − 1)]x, which is minimax.
(d) Show that the choice m(Λ) = (1 + Λ)^{−β}, 1 ≤ β ≤ r − 1, yields an estimator that is proper Bayes minimax for r > 2.
(e) The estimator of part (d) is difficult to evaluate. However, for the prior choice

m(Λ) = ∫₀^∞ t^{−r} e^{−Λ/t}(1 + t)^{−β} dt, 1 ≤ β ≤ r − 1,

show that the generalized Bayes estimator is δ^π(x) = [z/(z + β + r − 1)]x, and determine conditions for its minimaxity. Show that it is proper Bayes if β > 1.
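The estimator of part (c) is also easy to check by simulation; the following sketch (the Poisson means are drawn arbitrarily) estimates the normalized risk of the MLE, which is exactly r, and of the Clevenson-Zidek-type estimator, which is smaller.

```python
import numpy as np

rng = np.random.default_rng(9)
r, n = 8, 300_000
lam = rng.uniform(0.5, 3.0, r)            # illustrative Poisson means
x = rng.poisson(lam, size=(n, r)).astype(float)

z = x.sum(axis=1, keepdims=True)
cz = (1 - (r - 1) / (z + r - 1)) * x      # the estimator of 6.15(c)

loss = lambda d: np.mean(np.sum((d - lam) ** 2 / lam, axis=1))
print(loss(x), loss(cz))                  # normalized risk: MLE = r, shrinkage smaller
```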
(d) Show that the choice $m(\Lambda) = (1 + \Lambda)^{-\beta}$, $1 \le \beta \le r-1$, yields an estimator that is proper Bayes minimax for $r > 2$.
(e) The estimator of part (d) is difficult to evaluate. However, for the prior choice
$$m(\Lambda) = \int_0^\infty \frac{t^{-r} e^{-\Lambda/t}}{(1+t)^\beta}\,dt, \quad 1 \le \beta \le r-1,$$
show that the generalized Bayes estimator is
$$\delta^\pi(x) = \frac{z}{z + \beta + r - 1}\,x,$$
and determine conditions for its minimaxity. Show that it is proper Bayes if $\beta > 1$.

6.16 Let $X_i \sim \text{binomial}(n_i, p)$, $i = 1, \ldots, r$, where the $n_i$ are unknown and $p$ is known. The estimation target is $n = (n_1, \ldots, n_r)$ with loss function
$$L(n, \delta) = \sum_{i=1}^r \frac{1}{n_i}(n_i - \delta_i)^2.$$
(a) Show that the usual estimator $x/p$ has constant risk $r(1-p)/p$.
(b) For $r \ge 2$, show that the estimator
$$\delta(x) = \left[1 - \frac{a}{z + r - 1}\right]\frac{x}{p}$$
dominates $x/p$ in risk, where $z = \sum_i x_i$ and $0 < a < 2(r-1)(1-p)$.
[Hint: Use an argument similar to Theorem 6.8, but here $X_i|Z$ is hypergeometric, with $E(X_i|z) = z\,\frac{n_i}{N}$ and $\text{var}(X_i|z) = z\,\frac{n_i}{N}\left(1 - \frac{n_i}{N}\right)\frac{N-z}{N-1}$, where $N = \sum_i n_i$.]
(c) Extend the argument from part (b) and find conditions on the function $c(\cdot)$ and constant $b$ so that
$$\delta(x) = \left[1 - \frac{c(z)}{z + b}\right]\frac{x}{p}$$
dominates $x/p$ in risk.
Domination of the usual estimator of $n$ was looked at by Feldman and Fox (1968), Johnson (1987), and Casella and Strawderman (1994). The problem of estimating the binomial $n$ has some interesting practical applications; see Olkin et al. 1981, Carroll and Lombard 1985, and Casella 1986. Although we have made the unrealistic assumption that $p$ is known, these results can be adapted to the more practical unknown-$p$ case (see Casella and Strawderman 1994 for details).

6.17 (a) Prove Lemma 6.9. [Hint: Change variables from $x$ to $x - e_i$, and note that $h_i$ must be defined so that $\delta_0(0) = 0$.]
(b) Prove that for $X \sim p_i(x|\theta)$, where $p_i(x|\theta)$ is given by (6.36), $\delta_0(x) = h_i(x-1)/h_i(x)$ is the UMVU estimator of $\theta$ (Roy and Mitra 1957).
(c) Prove Theorem 6.10.

6.18 For the situation of Example 6.11:
(a) Establish that $x + g(x)$, where $g(x)$ is given by (6.42), satisfies $D(x) \le 0$ for the loss $L_0(\theta, \delta)$ of (6.38), and hence dominates $x$ in risk.
(b) Derive $D(x)$ for $X_i \sim \text{Poisson}(\lambda_i)$, independent, and loss $L_{-1}(\lambda, \delta)$ of (6.38). Show that $x + g(x)$, for $g(x)$ given by (6.43), satisfies $D(x) \le 0$ and hence is a minimax estimator of $\lambda$.

6.19 For the situation of Example 6.12:
(a) Show that the estimator $\delta_0(x) + g(x)$, for $g(x)$ of (6.45), dominates $\delta_0$ in risk under the loss $L_{-1}(\theta, \delta)$ of (6.38) by establishing that $D(x) \le 0$.
(b) For the loss $L_0(\theta, \delta)$ of (6.38), show that the estimator $\delta_0(x) + g(x)$, where
$$g_i(x) = \frac{c(x)\,k_i(x_i)}{\sum_{j=1}^r\left[k_j^2(x_j) + \frac{1+t_j}{2}\,k_j(x_j)\right]},$$
with $k_i(x) = \prod_{\ell=1}^x (t_i - 1 + \ell)/\ell$ and $c(\cdot)$ nondecreasing with $0 \le c(\cdot) \le 2[\#\{x_i > 1\} - 2]$, has $D(x) \le 0$ and hence dominates $\delta_0(x)$ in risk.

6.20 In Example 6.12, we saw improved estimators for the success probability of negative binomial distributions. Similar results hold for estimating the means of the negative binomial distributions, with some added features of interest. Let $X_1, \ldots, X_r$ be independent negative binomial random variables with mass function (6.44), and suppose we want to estimate $\mu = \{\mu_i\}$, where $\mu_i = t_i\theta_i/(1-\theta_i)$, the mean of the $i$th distribution, using the loss $L(\mu, \delta) = \sum_i(\mu_i - \delta_i)^2/\mu_i$.
(a) Show that the MLE of $\mu$ is $X$, and the risk of an estimator $\delta(x) = x + g(x)$ can be written
$$R(\mu, \delta) = R(\mu, X) + E_\mu[D_1(X) + D_2(X)],$$
where
$$D_1(x) = \sum_{i=1}^r \left\{2[g_i(x + e_i) - g_i(x)] + \frac{g_i^2(x + e_i)}{x_i + 1}\right\}$$
and
$$D_2(x) = \sum_{i=1}^r \left\{\frac{2x_i}{t_i}[g_i(x + e_i) - g_i(x)] + \frac{g_i^2(x + e_i)}{t_i}\left(\frac{x_i}{x_i+1} - 1\right)\right\},$$
so that a sufficient condition for domination of the MLE is $D_1(x) + D_2(x) \le 0$ for all $x$.
[Use Lemma 6.9 in the form $E[f(X)/\theta_i] = E\left[\frac{t_i + X_i}{X_i + 1}\,f(X + e_i)\right]$.]
(b) Show that if the $X_i$ are Poisson($\theta_i$) (instead of negative binomial), then $D_2(x) = 0$. Thus, any estimator that dominates the MLE in the negative binomial case also dominates the MLE in the Poisson case.
(c) Show that the Clevenson-Zidek estimator
$$\delta^{\rm CZ}(x) = \left[1 - \frac{c(r-1)}{\sum_i x_i + r - 1}\right]x$$
satisfies $D_1(x) \le 0$ and $D_2(x) \le 0$ and, hence, dominates the MLE under both the Poisson and negative binomial models.
This robustness property of Clevenson-Zidek estimators was discovered by Tsui (1984) and holds for more general forms of the estimator. Tsui (1984, 1986) also explores other estimators of Poisson and negative binomial means and their robustness properties.

Section 7

7.1 Establish the claim made in Example 7.2. Let $X_1$ and $X_2$ be independent random variables, $X_i \sim N(\theta_i, 1)$, and let $L((\theta_1, \theta_2), \delta) = (\theta_1 - \delta)^2$. Show that $\delta = \text{sign}(X_2)$ is an admissible estimator of $\theta_1$, even though its distribution does not depend on $\theta_1$.

7.2 Efron and Morris (1973a) give the following derivation of the positive-part Stein estimator as a truncated Bayes estimator. For $X \sim N_r(\theta, \sigma^2 I)$, $r \ge 3$, and $\theta \sim N(0, \tau^2 I)$, where $\sigma^2$ is known and $\tau^2$ is unknown, define $t = \sigma^2/(\sigma^2 + \tau^2)$ and put a prior $h(t)$, $0 < t < 1$, on $t$.
(a) Show that the Bayes estimator against squared error loss is given by $E(\theta|x) = [1 - E(t|x)]x$, where
$$\pi(t|x) = \frac{t^{r/2} e^{-t|x|^2/2}\, h(t)}{\int_0^1 t^{r/2} e^{-t|x|^2/2}\, h(t)\,dt}.$$
(b) For estimators of the form $\delta_\tau(x) = \left[1 - \tau(|x|^2)\,\frac{r-2}{|x|^2}\right]x$, the estimator that satisfies (i) $\tau(\cdot)$ is nondecreasing, (ii) $\tau(\cdot) \le c$, and (iii) $\delta_\tau$ minimizes the Bayes risk against $h(t)$, has
$$\tau(|x|^2) = \tau^*(|x|^2) = \min\left\{c,\ \frac{|x|^2}{r-2}\,E(t|x)\right\}.$$
(This is a truncated Bayes estimator, and is minimax if $c \le 2$.)
(c) Show that if $h(t)$ puts all of its mass on $t = 1$, then
$$\tau^*(|x|^2) = \min\left\{c,\ \frac{|x|^2}{r-2}\right\}$$
and the resulting truncated Bayes estimator is the positive-part estimator.

7.3 Fill in the details of the proof of Lemma 7.5.

7.4 For the situation of Example 7.8, show that if $\delta_0$ is any estimator of $\theta$, then the class of all estimators with $\delta(x) < \delta_0(x)$ for some $x$ is complete.

7.5 A decision problem is monotone (as defined by Karlin and Rubin 1956; see also Brown, Cohen and Strawderman 1976 and Berger 1985, Section 8.4) if the loss function $L(\theta, \delta)$ is, for each $\theta$, minimized at $\delta = \theta$ and is an increasing function of $|\delta - \theta|$. An estimator $\delta$ is monotone if it is a nondecreasing function of $x$.
(a) Show that if $L(\theta, \delta)$ is convex, then the monotone estimators form a complete class.
(b) If $\delta(x)$ is not monotone, show that the monotone estimator $\delta'$ defined implicitly by $P_t(\delta'(X) \le t) = P_t(\delta(X) \le t)$ for every $t$ satisfies $R(\theta, \delta') \le R(\theta, \delta)$ for all $\theta$.
(c) If $X \sim N(\theta, 1)$ and $L(\theta, \delta) = (\theta - \delta)^2$, construct a monotone estimator that dominates
$$\delta_a(x) = \begin{cases} -2a - x & \text{if } x < -a, \\ x & \text{if } |x| \le a, \\ 2a - x & \text{if } x > a. \end{cases}$$

7.6 Show that, in the following estimation problems, all risk functions are continuous.
(a) Estimate $\theta$ with $L(\theta, \delta(x)) = [\theta - \delta(x)]^2$, $X \sim N(\theta, 1)$.
(b) Estimate $\theta$ with $L(\theta, \delta(x)) = |\theta - \delta(x)|^2$, $X \sim N_r(\theta, I)$.
(c) Estimate $\lambda$ with $L(\lambda, \delta(x)) = \sum_{i=1}^r \lambda_i^{-m}(\lambda_i - \delta_i(x))^2$, $X_i \sim \text{Poisson}(\lambda_i)$, independent.
(d) Estimate $\beta$ with $L(\beta, \delta(x)) = \sum_{i=1}^r \beta_i^{-m}(\beta_i - \delta_i(x))^2$, $X_i \sim \text{Gamma}(\alpha_i, \beta_i)$, independent, $\alpha_i$ known.

7.7 Prove the following theorem, which gives sufficient conditions for estimators to have continuous risk functions.
Theorem 8.5 (Ferguson 1967, Theorem 3.7.1) Consider the estimation of $\theta$ with loss $L(\theta, \delta)$, where $X \sim f(x|\theta)$.
Assume
(i) the loss function $L(\theta, \delta)$ is bounded and continuous in $\theta$ uniformly in $\delta$ (so that $\lim_{\theta \to \theta_0} \sup_\delta |L(\theta, \delta) - L(\theta_0, \delta)| = 0$);
(ii) for any bounded function $\phi$, $\int \phi(x) f(x|\theta)\,d\mu(x)$ is continuous in $\theta$.
Then, the risk function $R(\theta, \delta) = E_\theta L(\theta, \delta)$ is continuous in $\theta$.
[Hint: Show that
$$|R(\theta', \delta) - R(\theta, \delta)| \le \int |L(\theta', \delta(x)) - L(\theta, \delta(x))|\, f(x|\theta')\,dx + \int L(\theta, \delta(x))\,|f(x|\theta') - f(x|\theta)|\,dx,$$
and use (i) and (ii) to make the first integral $< \varepsilon/2$, and (i) and (iii) to make the second integral $< \varepsilon/2$.]

7.8 Referring to Theorem 8.5, show that condition (iii) is satisfied by (a) the exponential family, and (b) continuous densities in which $\theta$ is a one-dimensional location or scale parameter.

7.9 A family of functions $\mathcal F$ is equicontinuous at the point $x_0$ if, given $\varepsilon > 0$, there exists $\delta$ such that $|f(x) - f(x_0)| < \varepsilon$ for all $|x - x_0| < \delta$ and all $f \in \mathcal F$. (The same $\delta$ works for all $f$.) The family is equicontinuous if it is equicontinuous at each $x_0$.
Theorem 8.6 (Communicated by L. Gajek) Consider estimation of $\theta$ with loss $L(\theta, \delta)$, where $X \sim f(x|\theta)$ is continuous in $\theta$ for each $x$. If
(i) the family $L(\theta, \delta(x))$ is equicontinuous in $\theta$ for each $\delta$, and
(ii) for all $\theta, \theta' \in \Omega$, $\sup_x \frac{f(x|\theta')}{f(x|\theta)} < \infty$,
then any finite-valued risk function $R(\theta, \delta) = E_\theta L(\theta, \delta)$ is continuous in $\theta$ and, hence, the estimators with finite, continuous risks form a complete class.
(a) Prove Theorem 8.6.
(b) Give an example of an equicontinuous family of loss functions. [Hint: Consider squared error loss with a bounded sample space.]

7.10 Referring to Theorem 7.11, this problem shows that the assumption of continuity of $f(x|\theta)$ in $\theta$ cannot be relaxed. Consider the density $f(x|\theta)$ that is $N(\theta, 1)$ if $\theta \le 0$ and $N(\theta+1, 1)$ if $\theta > 0$.
(a) Show that this density has monotone likelihood ratio, but is not continuous in $\theta$.
(b) Show that there exists a bounded continuous loss function $L(\theta - \delta)$ for which the risk $R(\theta, X)$ is discontinuous.

7.11 For $X \sim f(x|\theta)$ and loss function $L(\theta, \delta) = \sum_{i=1}^r \theta_i^m(\theta_i - \delta_i)^2$, show that condition (iii) of Theorem 7.11 holds.

7.12 Prove the following (equivalent) version of Blyth's Method (Theorem 7.13).
Theorem 8.7 Suppose that the parameter space $\Omega \subset \Re^r$ is open, and estimators with continuous risks are a complete class. Let $\delta$ be an estimator with a continuous risk function, and let $\{\pi_n\}$ be a sequence of (possibly improper) prior measures such that
(i) $r(\pi_n, \delta) < \infty$ for all $n$, and
(ii) for any nonempty open set $\Omega_0 \subset \Omega$,
$$\frac{r(\pi_n, \delta) - r(\pi_n, \delta^{\pi_n})}{\int_{\Omega_0} \pi_n(\theta)\,d\theta} \to 0 \quad \text{as } n \to \infty.$$
Then, $\delta$ is an admissible estimator.

7.13 Fill in some of the gaps in Example 7.14:
(i) Verify the expressions for the posterior expected losses of $\delta_0$ and $\delta^\pi$ in (7.7).
(ii) Show that the normalized beta priors will not satisfy condition (b) of Theorem 7.13, and then verify (7.9).
(iii) Show that the marginal distribution of $X$ is given by (7.10).
(iv) Show that $\sum_{x=1}^\infty D(x) \le \max\{a^2, b^2\} \sum_{x=1}^\infty \frac{1}{x^2} \to 0$, and hence that $\delta_0$ is admissible.

7.14 Let $X \sim \text{Poisson}(\lambda)$. Use Blyth's method to show that $\delta_0 = X$ is an admissible estimator of $\lambda$ under the loss function $L(\lambda, \delta) = (\lambda - \delta)^2$ with the following steps:
(a) Show that the unnormalized gamma priors $\pi_n(\lambda) = \lambda^{a-1} e^{-\lambda/n}$ satisfy condition (b) of Theorem 7.13 by verifying that for any $c$, $\lim_{n\to\infty} \int_0^c \pi_n(\lambda)\,d\lambda$ is a constant. Also show that the normalized gamma priors will not work.
(b) Show that under the priors $\pi_n(\lambda)$, the Bayes risks of $\delta_0$ and $\delta^{\pi_n}$, the Bayes estimator, are given by
$$r(\pi_n, \delta_0) = n^a\,\Gamma(a), \qquad r(\pi_n, \delta^{\pi_n}) = \frac{n}{n+1}\,n^a\,\Gamma(a).$$
(c) The difference in risks is
$$r(\pi_n, \delta_0) - r(\pi_n, \delta^{\pi_n}) = \Gamma(a)\,n^a\left(1 - \frac{n}{n+1}\right),$$
which, for fixed $a > 0$, goes to infinity as $n \to \infty$ (too bad!). However, show that if we choose $a = a(n) = 1/\sqrt n$, then $\Gamma(a)\,n^a\left(1 - \frac{n}{n+1}\right) \to 0$ as $n \to \infty$. Thus, the difference in risks goes to zero.
(d) Unfortunately, we must go back and verify condition (b) of Theorem 7.13 for the sequence of priors with $a = 1/\sqrt n$, as part (a) no longer applies. Do this, and conclude that $\delta_0(x) = x$ is an admissible estimator of $\lambda$. [Hint: For large $n$, since $t \le c/n$, use Taylor's theorem to write $e^{-t} = 1 - t + \text{error}$, where the error can be ignored.]
(Recall that we have previously considered the admissibility of $\delta_0 = X$ in Corollaries 2.18 and 2.20, where we saw that $\delta_0$ is admissible.)

7.15 Use Blyth's method to establish admissibility in the following situations.
(a) If $X \sim \text{Gamma}(\alpha, \beta)$, $\alpha$ known, then $x/\alpha$ is an admissible estimator of $\beta$ using the loss function $L(\beta, \delta) = (\beta - \delta)^2/\beta^2$.
(b) If $X \sim \text{Negative binomial}(k, p)$, then $X$ is an admissible estimator of $\mu = k(1-p)/p$ using the loss function $L(\mu, \delta) = (\mu - \delta)^2/(\mu + \frac{1}{k}\mu^2)$.

7.16 (i) Show that, in general, if $\delta^\pi$ is the Bayes estimator under squared error loss, then
$$r(\pi, \delta_g) - r(\pi, \delta^\pi) = E|\delta^\pi(X) - \delta_g(X)|^2,$$
thus establishing (7.13).
(ii) Prove (7.15).
(iii) Use (7.15) to prove the admissibility of $X$ in one dimension.

7.17 The identity (7.14) can be established in another way. For the situation of Example 7.18, show that
$$r(\pi, \delta_g) = r - 2\int [\nabla \log m_\pi(x)]\cdot[\nabla \log m_g(x)]\, m_\pi(x)\,dx + \int |\nabla \log m_g(x)|^2\, m_\pi(x)\,dx,$$
which implies
$$r(\pi, \delta^\pi) = r - \int |\nabla \log m_\pi(x)|^2\, m_\pi(x)\,dx,$$
and hence deduce (7.14).

7.18 This problem will outline the argument needed to prove Theorem 7.19:
(a) Show that $\nabla m_g(x) = m_{\nabla g}(x)$, that is,
$$\nabla \int g(\theta)\,e^{-|x-\theta|^2/2}\,d\theta = \int [\nabla g(\theta)]\,e^{-|x-\theta|^2/2}\,d\theta.$$
(b) Using part (a), show that
$$r(\pi, \delta_g) - r(g_n, \delta_{g_n}) = \int \left|\nabla \log m_g(x) - \nabla \log m_{g_n}(x)\right|^2 m_{g_n}(x)\,dx = \int \left|\frac{\nabla m_g(x)}{m_g(x)} - \frac{\nabla m_{g_n}(x)}{m_{g_n}(x)}\right|^2 m_{g_n}(x)\,dx$$
$$\le 2\int \left|\frac{\nabla m_g(x)}{m_g(x)} - \frac{m_{h_n^2 \nabla g}(x)}{m_{g_n}(x)}\right|^2 m_{g_n}(x)\,dx + 2\int \left|\frac{m_{g \nabla h_n^2}(x)}{m_{g_n}(x)}\right|^2 m_{g_n}(x)\,dx = B_n + A_n.$$
(c) Show that
$$A_n = 4\int \left|\frac{m_{g h_n \nabla h_n}(x)}{m_{g h_n^2}(x)}\right|^2 m_{g_n}(x)\,dx \le 4\int m_{g(\nabla h_n)^2}(x)\,dx,$$
and this last bound $\to 0$ by condition (a).
(d) Show that the integrand of $B_n \to 0$ as $n \to \infty$, and use condition (b) together with the dominated convergence theorem to show $B_n \to 0$, proving the theorem.

7.19 Brown and Hwang (1982) actually prove Theorem 7.19 for the case $f(x|\theta) = e^{\theta' x - \psi(\theta)}$, where we are interested in estimating $\tau(\theta) = E_\theta(X) = \nabla\psi(\theta)$ under the loss $L(\theta, \delta) = |\tau(\theta) - \delta|^2$. Prove Theorem 7.19 for this case. [The proof is similar to that outlined in Problem 7.18.]

7.20 For the situation of Example 7.20:
(a) Using integration by parts, show that
$$\frac{\partial}{\partial x_i}\int g(\theta)\,e^{-|x-\theta|^2/2}\,d\theta = -\int (x_i - \theta_i)\,g(\theta)\,e^{-|x-\theta|^2/2}\,d\theta = \int \left[\frac{\partial}{\partial \theta_i} g(\theta)\right] e^{-|x-\theta|^2/2}\,d\theta,$$
and hence
$$\frac{\nabla m_g(x)}{m_g(x)} = \frac{\int [\nabla g(\theta)]\,e^{-|x-\theta|^2/2}\,d\theta}{\int g(\theta)\,e^{-|x-\theta|^2/2}\,d\theta}.$$
(b) Use the Laplace approximation (4.6.33) to show that
$$\frac{\int [\nabla g(\theta)]\,e^{-|x-\theta|^2/2}\,d\theta}{\int g(\theta)\,e^{-|x-\theta|^2/2}\,d\theta} \approx \frac{\nabla g(x)}{g(x)}, \quad \text{and that} \quad \delta_g(x) \approx x + \frac{\nabla g(x)}{g(x)}.$$
(c) If $g(\theta) = 1/|\theta|^k$, show that $\delta_g(x) \approx \left(1 - \frac{k}{|x|^2}\right)x$.

7.21 In Example 7.20, if $g(\theta) = 1/|\theta|^k$ is a proper prior, then $\delta_g$ is admissible. For what values of $k$ is this the case?

7.22 Verify that the conditions of Theorem 7.19 are satisfied for $g(\theta) = 1/|\theta|^k$ if (a) $k > r-2$ and (b) $k = r-2$.

7.23 Establish conditions for the admissibility of Strawderman's estimator (Example 5.6)
(a) using Theorem 7.19,
(b) using the results of Brown (1971), given in Example 7.21.
(c) Give conditions under which Strawderman's estimator is an admissible minimax estimator.
(See Berger 1975, 1976b for generalizations.)

7.24 (a) Verify the Laplace approximation of (7.23).
(b) Show that, for $h(|x|) = k/|x|^{2\alpha}$, (7.25) can be written as (7.26) and that $\alpha = 1$ is needed for an estimator to be both admissible and minimax.

7.25 Theorem 7.17 also applies to the Poisson($\lambda$) case, where Johnstone (1984) obtained the following characterization of admissible estimators for the loss $L(\lambda, \delta) = \sum_{i=1}^r (\lambda_i - \delta_i)^2/\lambda_i$. A generalized Bayes estimator of the form $\delta(x) = [1 - h(\sum x_i)]x$ is
(i) inadmissible if there exist $\varepsilon > 0$ and $M < \infty$ such that
$$h\left(\sum x_i\right) < \frac{r - 1 - \varepsilon}{\sum x_i} \quad \text{for } \sum x_i > M;$$
(ii) admissible if $h(\sum x_i)(\sum x_i)^{1/2}$ is bounded and there exists $M < \infty$ such that
$$h\left(\sum x_i\right) \ge \frac{r-1}{\sum x_i} \quad \text{for } \sum x_i > M.$$
(a) Use Johnstone's characterization of admissible Poisson estimators (Example 7.22) to find an admissible Clevenson-Zidek estimator (6.31).
(b) Determine conditions under which the estimator is both admissible and minimax.

7.26 For the situation of Example 7.23:
(a) Show that $X/n$ and $\frac{n}{n+1}\,\frac{X}{n}\left(1 - \frac{X}{n}\right)$ are admissible for estimating $p$ and $p(1-p)$, respectively.
(b) Show that $\alpha(X/n) + (1-\alpha)(a/(a+b))$ is an admissible estimator of $p$, where $\alpha = n/(n+a+b)$.
Compare the results here to that of Theorem 2.14 (Karlin's theorem). [Note that the results of Diaconis and Ylvisaker (1979) imply that $\pi(\cdot) = $ uniform are the only priors that give linear Bayes estimators.]

7.27 Fill in the gaps in the proof that estimators $\delta^\pi$ of the form (7.27) are a complete class.
(a) Show that $\delta^\pi$ is admissible when $r = -1$, $s = n+1$, and $r+1 = s$.
(b) For any other estimator $\delta'(x)$ for which $\delta'(x) = h(0)$ for $x \le r'$ and $\delta'(x) = h(1)$ for $x \ge s'$, show that we must have $r' \ge r$ and $s' \le s$.
(c) Show that $R(p, \delta') \le R(p, \delta^\pi)$ for all $p \in [0, 1]$ if and only if $R_{r,s}(p, \delta') \le R_{r,s}(p, \delta^\pi)$ for all $p \in [0, 1]$.
(d) Show that $\int_0^1 R_{r,s}(p, \delta)\,k(p)\,d\pi(p)$ is uniquely minimized by $[\delta^\pi(r+1), \ldots, \delta^\pi(s-1)]$, and hence deduce the admissibility of $\delta^\pi$.
(e) Use Theorem 7.17 to show that any admissible estimator of $h(p)$ is of the form (7.27), and hence that (7.27) is a minimal complete class.

7.28 For $i = 1, 2, \ldots, k$, let $X_i \sim f_i(x|\theta_i)$ and suppose that $\delta_i^*(x_i)$ is a unique Bayes estimator of $\theta_i$ under the loss $L_i(\theta_i, \delta)$, where $L_i$ satisfies $L_i(a, a) = 0$ and $L_i(a, a') > 0$ for $a \ne a'$. Suppose that for some $j$, $1 \le j \le k$, there is a value $\theta^*$ such that if $\theta_j = \theta^*$, (i) $X_j = x^*$ with probability 1, and (ii) $\delta_j^*(x^*) = \theta^*$. Show that $(\delta_1^*(x_1), \delta_2^*(x_2), \ldots, \delta_k^*(x_k))$ is admissible for $(\theta_1, \theta_2, \ldots, \theta_k)$ under the loss $\sum_i L_i(\theta_i, \delta)$; that is, there is no Stein effect.

7.29 Suppose we observe $X_1, X_2, \ldots$ sequentially, where $X_i \sim f_i(x|\theta_i)$. An estimator of $\theta^j = (\theta_1, \theta_2, \ldots, \theta_j)$ is called nonanticipative (Gutmann 1982b) if it only depends on $(X_1, X_2, \ldots, X_j)$. That is, we cannot use information that comes later, with indices $> j$. If $\delta_i^*(x_i)$ is an admissible estimator of $\theta_i$, show that it cannot be dominated by a nonanticipative estimator. Thus, this is again a situation in which there is no Stein effect. [Hint: It is sufficient to consider $j = 2$. An argument similar to that of Example 7.24 will work.]

7.30 For $X \sim N_r(\theta, I)$, consider estimation of $\phi'\theta$, where $\phi_{r\times 1}$ is known, using the estimator $a'X$ with loss function $L(\phi'\theta, \delta) = (\phi'\theta - \delta)^2$.
(a) Show that if $a$ lies outside the sphere (7.31), then $a'X$ is inadmissible.
(b) Show that the Bayes estimator of $\phi'\theta$ against the prior $\theta \sim N(0, V)$ is given by $E(\phi'\theta|x) = \phi' V(I+V)^{-1}x$.
(c) Find a covariance matrix $V$ such that $E(\phi'\theta|x)$ lies inside the sphere (7.31) [$V$ will be of rank one, hence of the form $vv'$ for some $r \times 1$ vector $v$].
Parts (a)–(c) show that all linear estimators inside the sphere (7.31) are admissible, and those outside are inadmissible. It remains to consider the boundary, which is slightly more involved. See Cohen 1966 for details.

7.31 Brown's ancillarity paradox. Let $X \sim N_r(\mu, I)$, $r > 2$, and consider the estimation of $w'\mu = \sum_{i=1}^r w_i\mu_i$, where $w$ is a known vector with $\sum w_i^2 > 0$, using loss function $L(\mu, d) = (w'\mu - w'd)^2$.
(a) Show that the estimator $w'X$ is minimax and admissible.
(b) Assume now that $w$ is the realized value of a random variable $W$, with distribution independent of $X$, where $V = E(WW')$ is known. Show that the estimator $w'd^*$, where
$$d^*(x) = \left[I - \frac{cV^{-1}}{x'V^{-1}x}\right]x,$$
with $0 < c < 2(r-2)$, dominates $w'X$ in risk. [Hint: Establish and use the fact that $E[L(\mu, d)] = E[(d - \mu)'V(d - \mu)]$.]
This is a special case of results established by Brown (1990a). It is referred to as a paradox because the distribution of the ancillary, which should not affect the estimation of $\mu$, has an enormous effect on the properties of the standard estimator. Brown showed how these results affect the properties of coefficient estimates in multiple regression when the assumption of random regressors is made. In that context, the ancillarity paradox also relates to Shaffer's (1991) work on best linear unbiased estimation (see Theorem 3.4.14 and Problems 3.4.16–3.4.18).

7.32 Efron (1990), in a discussion of Brown's (1990a) ancillarity paradox, proposed an alternate version. Suppose $X \sim N_r(\theta, I)$, $r > 2$, and with probability $1/r$, independent of $X$, the value of the random variable $J = j$ is observed, $j = 1, 2, \ldots, r$. The problem is to estimate $\theta_j$ using the loss function $L(\theta_j, d) = (\theta_j - d)^2$. Show that, conditional on $J = j$, $X_j$ is a minimax and admissible estimator of $\theta_j$. However, unconditionally, $X_j$ is dominated by the $j$th coordinate of the James-Stein estimator. This version of the paradox may be somewhat more transparent. It more clearly shows how the presence of the ancillary random variable forces the problem to be considered as a multivariate problem, opening the door for the Stein effect.

9 Notes

9.1 History
Deliberate efforts to develop statistical inference and decision making not based on "inverse probability" (i.e., without assuming prior distributions) were mounted by R.A. Fisher (for example, 1922, 1930, and 1935; see also Lane 1980), by Neyman and Pearson (for example, 1933a, 1933b), and by Wald (1950). The latter's general decision theory introduced, as central notions, the minimax principle and least favorable distributions, in close parallel to the corresponding concepts of the theory of games. Many of the examples of Section 5.2 were first worked out by Hodges and Lehmann (1950). Admissibility is another basic concept of Wald's decision theory. The admissibility proofs in Example 2.8 are due to Blyth (1951) and Hodges and Lehmann (1951). A general necessary and sufficient condition for admissibility was obtained by Stein (1955). Theorem 2.14 is due to Karlin (1958), and the surprising inadmissibility results of Section 5.5 had their origin in Stein's seminal paper (1956b). The relationship between equivariance and the minimax property was foreshadowed in Wald (1939) and was developed for point estimation by Peisakoff (1950), Girshick and Savage (1951), Blackwell and Girshick (1954), Kudo (1955), and Kiefer (1957).
Characterizations of admissible estimators and complete classes have included techniques such as Blyth's method and the information inequality. The pathbreaking paper of Brown (1971) was influential in shaping the understanding of admissibility problems, and motivated further study of differential inequalities (Brown 1979, 1988) and associated stochastic processes and Markov chains (Brown 1971, Johnstone 1984, Eaton 1992).

9.2 Synthesis
The strengths of combining the Bayesian and frequentist approaches are evident in Problem 1.4. The Bayes approach provides a clear methodology for constructing estimators (of which REML is a version), while the frequentist approach provides the methodology for evaluation. There are many other approaches to statistical problems, that is, many other statistical philosophies. For example, there is the fiducial approach (Fisher 1959), structural inference (Fraser 1968, 1979), pivotal inference (Barnard 1985), conditional inference (Fisher 1956, Cox 1958, Buehler 1959, Robinson 1979a, 1979b), likelihood-based conditional inference (Barndorff-Nielsen 1980, 1983, Barndorff-Nielsen and Cox 1979, 1994), and many more. Moreover, within each philosophy, there are many subdivisions, for example, robust Bayesian, conditional frequentist, and so on. Examination of conditional inference, with both synthesis and review in mind, can also be found in Casella (1987, 1988, 1992b).
An important difference among these different philosophies is the role of conditional and unconditional inference, that is, whether the criterion for evaluation of an estimator is allowed to depend on the data.
Example 9.1 Conditional bias. If $X_1, \ldots, X_n$ are distributed iid as $N(\mu, \sigma^2)$, both unknown, the estimator $S^2 = \frac{1}{n-1}\sum_{i=1}^n (X_i - \bar X)^2$ is an unbiased estimator of $\sigma^2$; that is, the unconditional expectation satisfies $E_{\sigma^2}[S^2] = \sigma^2$ for all values of $\sigma^2$. In doing a conditional evaluation, we might ask if there is a set in the sample space [a reference set or recognizable subset according to Fisher (1959)] on which the conditional expectation is always biased. Robinson (1979b) showed that there exist constants $a$ and $\delta > 0$ such that
$$E_{\sigma^2}[S^2 \mid |\bar X|/S < a] > (1+\delta)\sigma^2 \quad \text{for all } \mu, \sigma^2, \qquad (9.1)$$
showing that $S^2$ is conditionally biased. See Problem 1.5 for details. The importance of a result such as (9.1) is that the experimenter knows whether the recognizable set $\{(x_1, \ldots, x_n) : |\bar x|/s < a\}$ has occurred. If it has, then the claim that $S^2$ is unbiased may be suspect if the inference is to apply to experiments in which the recognizable set occurs. ∥
The study of conditional properties is actually better suited to examination of confidence procedures, which we are not covering here. (However, see TSH2, Chapter 10 for an introduction to conditional inference in testing and, hence, in confidence procedures.) The variance inequality (9.1) has many interesting consequences in interval estimation for normal parameters, both for the mean (Brown 1968, Goutis and Casella 1992) and the variance (Stein 1964, Maata and Casella 1987, Goutis and Casella 1997, and Shorrock 1990).

9.3 The Hunt-Stein Theorem
The relationship between equivariance and minimaxity finds an expression in the Hunt-Stein theorem. Although these authors did not publish their result, it plays an important role in mathematical statistics.
The work of Hunt and Stein took place in the 1940s, but it was not until the landmark paper by Kiefer (1957) that a comprehensive treatment of the topic, and a very general version of the theorem, was given. (See also Kiefer 1966 for an expanded discussion.) The basis of the theorem is that in invariant statistical problems, if the group satisfies certain assumptions, then the existence of a minimax estimator implies the existence of an equivariant minimax estimator. Intuitively, we expect such a theorem to hold in invariant decision problems for which the group is transitive and a right-invariant Haar measure exists. If the Haar measure were proper, then Theorem 4.1 and Theorem 3.1 (see also the end of Section 5.4) would apply. The question that the Hunt-Stein theorem addresses is whether an improper right-invariant Haar measure can yield a minimax estimator. The theorem turns out to be not true for all groups, but only for groups possessing certain properties.
A survey of these properties, and some interrelationships, was made by Stone and van Randow (1968). A later paper by Bondar and Milnes (1981) reviews and establishes many of the group-theoretic equivalences conjectured by Stone and van Randow. From this survey, two equivalent group-theoretic conditions, which we discuss informally, emerge as the appropriate conditions on the group.
A. Amenability. A group is amenable if there exists a right-invariant mean. That is, if we define the sequence of functionals
$$m_n(f) = \frac{1}{2n}\int_{-n}^{n} f(x)\,dx,$$
where $f \in L_\infty$, then there exists a functional $m(\cdot)$ such that for any $f_1, \ldots, f_k \in L_\infty$ and $\varepsilon > 0$ there is an $n_0$ such that $|m_n(f_i) - m(f_i)| < \varepsilon$ for $i = 1, \ldots, k$ and all $n > n_0$.
B. Approximability by proper priors, that is, the existence of a sequence of proper probability distributions that converge to the right-invariant Haar measure. [The concept of approximability by proper priors can be traced to Stein (1965), and was further developed by Stone (1970) and Heath and Sudderth (1989).]
With these conditions, we can state the theorem.
Theorem 9.2 (Hunt-Stein) If the decision problem is invariant with respect to a group $G$ that satisfies condition A (equivalently, condition B), then if a minimax estimator exists, an equivariant minimax estimator exists. Conversely, if there exists an equivariant estimator that is minimax among equivariant estimators, it is minimax overall.
The proof of this theorem has a history almost as rich as the theorem itself. The original published proof of Kiefer (1957) was improved upon by use of a fixed-point theorem. This elegant method is attributed to LeCam and Huber, and is used in the general development of the Hunt-Stein theorem by LeCam (1986, Section 8.6) and Strasser (1985, Section 48). An outline of such a proof is given by Kiefer (1966). Brown (1986b) provides an interesting commentary on Kiefer's 1957 paper, and also sketches Huber's method of proof. Robert (1994a, Section 7.5) gives a particularly readable sketch of the proof.
If the group is finite, then the assumptions of the Hunt-Stein theorem are satisfied, and a somewhat less complex proof will work. See Berger (1985, Section 6.7) for the proof for finite groups. In TSH2, Section 9.5, a version of the Hunt-Stein theorem for testing problems is stated and proved under condition B.
Theorem 9.2 reduces the problem to a property of groups, and to apply the theorem we need to identify which groups satisfy the A/B conditions.
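To give condition A a concrete, if informal, face, the following minimal Python sketch (our own illustration with an arbitrary bounded test function; nothing in it is taken from the sources above) computes the averaging functionals $m_n(f) = (1/2n)\int_{-n}^n f(x)\,dx$ for the translation group on the real line, showing how $m_n(f)$ settles toward a limiting value that is unchanged when $f$ is translated.

```python
import numpy as np

def m_n(f, n, pts=200_001):
    # m_n(f) = (1/2n) * integral of f over [-n, n], via the trapezoidal rule
    x = np.linspace(-n, n, pts)
    return np.trapz(f(x), x) / (2 * n)

f = lambda x: 2.0 + np.sin(x)       # bounded function with mean value 2
f_shift = lambda x: f(x + 5.0)      # a translate of f

for n in [10, 100, 1000]:
    print(f"n = {n:5d}:  m_n(f) = {m_n(f, n):.5f}   m_n(translated f) = {m_n(f_shift, n):.5f}")
```

Both columns converge to the same value, 2; the existence of such a translation-invariant limiting mean on all of $L_\infty$ is what amenability asserts, and it is precisely what fails for "large" groups such as $GL_n$.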
Bondar and Milnes (1981) provide a nice catalog of groups, and we note that the amenable groups include finite groups, location/scale groups, the triangular group $T(n)$ of $n \times n$ nonsingular upper triangular matrices, and many permutation groups. "Large" groups, such as those arising in nonparametric problems, are often not amenable. A famous group that is not amenable is the general linear group $GL_n$, $n > 2$, of nonsingular $n \times n$ matrices (see Note 3.9.3). See also Examples 3.7–3.9 for MRE estimators that are not minimax.

9.4 Recentered Sets
The topic of set estimation has not been ignored because of its lack of importance, but rather because the subject is so vast that it really needs a separate book-length treatment. (TSH2 covers many aspects of standard set estimation theory.) Here, we will only comment on some of the developments in set estimators that are centered at Stein estimators, so-called recentered sets.
The remarkable paper of Stein (1962) gave heuristic arguments that showed why recentered sets of the form
$$C^+ = \{\theta : |\theta - \delta^+(x)| \le c\}$$
would dominate the usual confidence set
$$C_0 = \{\theta : |\theta - x| \le c\}$$
in the sense that $P_\theta(\theta \in C^+(X)) > P_\theta(\theta \in C_0(X))$ for all $\theta$, where $X \sim N_r(\theta, I)$, $r \ge 3$, and $\delta^+$ is the positive-part Stein estimator. Stein's argument was heuristic, but Brown (1966) and Joshi (1967) proved the inadmissibility of $C_0$ if $r \ge 3$ (without giving an explicit dominating procedure). Joshi (1969b) also showed that $C_0$ is admissible if $r \le 2$. Advances in this problem were made by Olshen (1977), Morris (1977, 1983a), Faith (1976), and Berger (1980), each demonstrating (but not proving) dominance of $C_0$ by Stein-like set estimators. Analytic dominance of $C_0$ by $C^+$ was established by Hwang and Casella (1982, 1984) and, in subsequent papers (Casella and Hwang 1983, 1987), dominance in both coverage probability and volume was achieved (the latter was only demonstrated numerically). Many other results followed. Generalizations were given by Ki and Tsui (1985) and Shinozaki (1989), and domination results for non-normal distributions by Hwang and Chen (1986), Robert and Casella (1990), and Hwang and Ullah (1994).
All of these improved confidence sets have the property that their coverage probability is uniformly greater than that of $C_0$, but the infimum of the coverage probability (the confidence coefficient) is equal to that of $C_0$. As this is the value that is usually reported, unless there is a great reduction in volume, the practical advantages of such sets may be minimal. For example, recentered sets such as $C^+$ will present the same volume and confidence coefficient to an experimenter. Other sets, which attain some volume reduction but maintain the same confidence coefficient as $C_0$, are still somewhat "wasteful" because they have coverage probabilities higher than that of $C_0$. However, this deficiency now seems to be overcome. By adapting results of Brown et al. (1995), Tseng and Brown (1997) have constructed an improved confidence set, $C^*$, with the property that $P_\theta(\theta \in C^*(X)) = P_\theta(\theta \in C_0(X))$ for every $\theta$, and $\text{vol}(C^*) < \text{vol}(C_0)$, achieving a maximal amount of volume reduction while maintaining the same coverage probability as $C_0$.

9.5 Estimation of the Loss Function
In the proof of Theorem 5.1, the integration-by-parts technique yielded an unbiased estimate of the risk, that is, a function $D(x)$ satisfying $E_\theta D(X) = E_\theta L(\theta, \delta) = R(\theta, \delta)$.
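Such an unbiased estimate of risk is easy to check numerically. For the James-Stein-type estimator $\delta_c(x) = [1 - c(r-2)/|x|^2]x$ with $X \sim N_r(\theta, I)$, the integration-by-parts argument gives $D(x) = r - c(2-c)(r-2)^2/|x|^2$. The sketch below is a minimal Python simulation (the values of $r$ and $c$ and the grid of $|\theta|$ values are our own illustrative choices, not from the text) confirming that the average of $D(X)$ tracks the average realized loss.

```python
import numpy as np

rng = np.random.default_rng(0)
r, c, n_sim = 6, 1.0, 400_000

for theta_norm in [0.0, 2.0, 5.0]:
    theta = np.zeros(r); theta[0] = theta_norm
    x = theta + rng.standard_normal((n_sim, r))
    sq = np.sum(x * x, axis=1)                       # |x|^2
    delta = (1 - c * (r - 2) / sq)[:, None] * x      # James-Stein-type estimator
    loss = np.sum((delta - theta) ** 2, axis=1)      # realized loss |delta - theta|^2
    d = r - c * (2 - c) * (r - 2) ** 2 / sq          # unbiased risk estimate D(x)
    print(f"|theta| = {theta_norm}:  mean loss = {loss.mean():.4f}   mean D(X) = {d.mean():.4f}")
```

In each row the two averages agree up to simulation error, which is exactly the statement $E_\theta D(X) = R(\theta, \delta_c)$.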
Of course, we could also consider $D(x)$ as an estimate of $L(\theta, \delta)$, and ask if $D(x)$ is a reasonable estimator using, perhaps, another loss function such as $L(L, D) = (L(\theta, \delta) - D(x))^2$. If we think of $L(\theta, \delta)$ as a measure of the accuracy of $\delta$, then we are looking for good estimators of this accuracy. Note, however, that this problem is slightly more complex than the ones we have considered, as the "parameter" $L(\theta, \delta(x))$ is a function of both $\theta$ and $x$.
Loss estimation was first addressed by Rukhin (1988a, 1988b, 1988c) and Johnstone (1988). Rukhin considered "decision-precision" losses, which combine the estimation of $\theta$ and $L(\theta, \delta)$ into one loss. Johnstone looked at the multivariate normal problem, showed that 1 is an inadmissible estimator of $L(\theta, x) = \frac{1}{r}\sum_{i=1}^r(\theta_i - x_i)^2$ (recall that $E_\theta[L(\theta, X)] = 1$), and showed that estimates of the form $D(x) = 1 - (c/r)/|x|^2$ dominate it, where $0 \le c \le 2(r-4)$. Note that this implies $r \ge 5$. Further advances in loss estimation are found in Lu and Berger (1989a, 1989b) and Fourdrinier and Wells (1995).
The loss estimation problem is closely tied to the problem of set estimation, and actually transforms the set estimation problem into one of point estimation. Suppose $C(x)$ is a set estimator (or confidence set) for $\theta$, and we measure the worth of $C(x)$ with the loss function
$$L(\theta, C) = \begin{cases} 0 & \text{if } \theta \in C, \\ 1 & \text{if } \theta \notin C. \end{cases}$$
We usually calculate $P_\theta(\theta \in C(X))$, the probability of coverage of $C$. Moreover, $1 - \alpha = \inf_\theta P_\theta(\theta \in C(X))$ is usually reported as our confidence in $C$. However, it is really of interest to estimate $L(\theta, C)$, that is, the actual coverage. We can thus ask how well $1 - \alpha$ estimates this quantity, and whether there are estimators $\gamma(x)$ (known as estimators of accuracy) that are better. Using the loss function $L(\theta, \gamma) = (L(\theta, C) - \gamma(x))^2$, a number of interesting (and some surprising) results have been obtained. In the multivariate normal problem, improved estimates have been found for the accuracy of the usual confidence set (Lu and Berger 1989a, 1989b, Robert and Casella 1994) and for the accuracy of Stein-type confidence sets (George and Casella 1994). However, Hwang and Brown (1991) have shown that under an additional constraint (that of frequency validity), the estimator $1 - \alpha$ is an admissible estimator of the accuracy of the usual confidence set.
Other situations have also been considered. Goutis and Casella (1992) have demonstrated that the accuracy statement of Student's $t$ interval can be uniformly increased, and will still dominate $1 - \alpha$ under squared error loss. Hwang et al. (1992) have looked at accuracy estimation in the context of testing, where complete classes are described and the question of the admissibility of the $p$ value is addressed. More recently, Lindsay and Yi (1996) have shown that, up to second-order terms, the observed Fisher information is the best estimator of the expected Fisher information, which is the variance (or loss) of the MLE. One can think of this result as a decision-theoretic formalization of the work of Efron and Hinkley (1978). This variant of loss estimation as confidence estimation also has roots in the work of Kiefer (1976, 1977), who considered an alternate approach to the assignment of confidence (see also Brown 1978).

9.6 Shrinkage and Multicollinearity
In Sections 5 and 6, we have assumed that the covariance is known and hence, without loss of generality, that it is the identity.
This has led to shrinkage estimators of the form $\delta_i(x) = (1 - h(x))x_i$, that is, estimators that shrink every coefficient by the same fraction. If the original variances are unequal, say $X \sim N_r(\theta, \Sigma)$, then it may be more desirable to shrink some coordinates more than others (Efron and Morris 1973a, 1973b, Morris 1983a). Intuitively, it seems reasonable for the amount of shrinkage to be proportional to the size of the componentwise variance, that is, the greater the variance, the greater the shrinkage. This would tend to leave alone the coefficients with small variance, and to shrink the coefficients with higher variance relatively more, bringing the ensemble information more to bear on these coefficients (and improving the variance of the coefficient estimates). This strategy reflects the shrinkage pattern of a Bayes estimator with prior $\theta \sim N(0, \tau^2 I)$.
However, general minimax estimators of Hudson (1974) and Berger (1976a, 1976b) (see also Berger 1985, Section 5.4.3, and Chen 1988), on the contrary, shrink the lower variance coordinates more than the higher variance coordinates. What happens is that the variance/bias trade-off is profitable for coordinates with low variance, but not so for coordinates with high variance, where $X_i$ is minimax anyway.
This minimax shrinkage pattern is directly opposite to what is advocated to relieve the problem of multicollinearity in multiple regression problems. For that problem, work that started from ridge regression (Hoerl and Kennard 1971a, 1971b) advocated shrinkage patterns that are similar to those arising from the $N(0, \tau^2 I)$ prior, and hence in the opposite direction of the minimax pattern. There is a large literature on ridge regression, with much emphasis on applications and data analysis, and less on this dichotomy of shrinkage patterns. A review of ridge regression is given by Draper and van Nostrand (1979), and some theoretical properties are investigated by Brown and Zidek (1980), Casella (1980), and Obenchain (1981); see also Oman 1985 for a discussion of appropriate prior distributions. Casella (1985b) attempts to resolve the minimax/multicollinear shrinkage dilemma.

9.7 Other Minimax Considerations
The following provides a guide to some additional minimax literature.
(i) Bounded mean. The multiparameter version of Example 2.9 suffers from the additional complication that many different shapes of the bounding set may be of interest. Shapes that have been considered are convex sets (DasGupta 1985), spheres and rectangles (Berry 1990), and hyperrectangles (Donoho et al. 1990). Other versions of this problem that have been investigated include different loss functions (Bischoff and Fieger 1992, Eichenauer-Herrmann and Fieger 1992), gamma-minimax estimation (Vidakovic and DasGupta 1994), other distributions (Eichenauer-Herrmann and Ickstadt 1992), and other restrictions (Fan and Gijbels 1992, Spruill 1986, Feldman 1991). Some other advances in this problem have come from application of bounds on the risk function, often derived using the information inequality (Gajek 1987, 1988, Brown and Gajek 1990, Brown and Low 1991, Gajek and Kaluzka 1995). Truncated mean problems also underlie many deeper problems in estimation, as illustrated by Donoho (1994) and Johnstone (1994).
(ii) Selection of shrinkage target. In Stein estimation, much has been written on the problem of selecting a shrinkage target. Berger (1982a, 1982b) shows how to specify elliptical regions in which maximal risk improvement is obtained, and also shows that desirable Bayesian properties can be maintained. Oman (1982a, 1982b) and Casella and Hwang (1987) describe shrinking toward linear subspaces, and Bock (1982) shows how to shrink toward convex polyhedra. George (1986a, 1986b) constructs estimators that shrink toward multiple targets, using properties of superharmonic functions to establish minimaxity of multiple shrinkage estimators.
(iii) A Bayes/minimax compromise. Bayes estimation subject to a bound on the maximum risk was first considered by Hodges and Lehmann (1952). Although such problems tend to be computationally difficult, the resulting estimators often show good performance on both frequentist and Bayesian measures (Berger 1982a, 1982b, 1985, Section 4.7.7, DasGupta and Bose 1988, Chen 1988, DasGupta and Studden 1989). Some of these properties are related to those of Stein-type shrinkage estimators (Bickel 1983, 1984, Kempthorne 1988a, 1988b), and some are discussed in Section 5.7.
(iv) Superharmonicity. The superharmonic condition, although often difficult to verify, has sometimes proved helpful not only in establishing minimaxity, but also in understanding what types of prior distributions may lead to minimax Bayes estimators. Berger and Robert (1990) applied the condition to a family of hierarchical Bayes estimators, and Haff and Johnson (1986) generalized it to the estimation of means in exponential families. More recently, Fourdrinier, Strawderman, and Wells (1998) have shown that no superharmonic prior can be proper, and were able to use Corollary 5.11 to establish minimaxity of a class of proper Bayes estimators, in particular, the Bayes estimator using a Cauchy prior. The review article of Brandwein and Strawderman (1990) contains other examples.
(v) Minimax robustness. Minimax robustness of Stein estimators, that is, the fact that minimaxity holds over a wide range of densities, has been established for many different spherically symmetric densities. Strawderman (1974) was the first author to exhibit minimax Stein estimators for distributions other than the normal. [The work of Stein (1956b) and Brown (1966) had established the inadmissibility of the best invariant estimator, but explicit improvements had not been given.] Brandwein and Strawderman (1978, 1980) have established minimax results for wide classes of mixture distributions, under both quadratic and concave loss. Elliptical distributions were considered by Srivastava and Bilodeau (1989) and Cellier, Fourdrinier, and Robert (1989), where domination was established for an entire class of distributions.
(vi) Stein estimation. Other topics of Stein estimation that have received attention include matrix estimation (Efron and Morris 1976b, Haff 1979, Dey and Srinivasan 1985, Bilodeau and Srivastava 1988, Carter et al. 1990, Konno 1991), regression problems (Zidek 1978, Copas 1983, Casella 1985b, Jennrich and Oman 1986, Gelfand and Dey 1988, Rukhin 1988c, Oman 1991), nonnormal distributions (Bravo and MacGibbon 1988, Chen and Hwang 1988, Srivastava and Bilodeau 1989, Cellier et al. 1989, Ralescu et al. 1992), robust estimation (Liang and Waclawiw 1990, Konno 1991), sequential estimation (Natarajan and Strawderman 1985, Sriram and Bose 1988, Ghosh et al. 1987), and unknown variances (Berger and Bock 1976, Berger et al. 1977, Gleser 1979, 1986, DasGupta and Rubin 1988, Honda 1991, Tan and Gleser 1992).

9.8 Other Admissibility Considerations
The following provides a guide to some additional admissibility literature.
(i) Establishing admissibility. Theorems 7.13 and 7.15 (and their generalizations; see, for example, Farrell 1968) represent the major tools for establishing admissibility. The other admissibility result we have seen (Karlin's theorem, Theorem 2.14) can actually be derived using Blyth's method. (See Zidek 1970, Portnoy 1971, and Brown and Hwang 1982 for more general Karlin-type theorems, and Berger 1982b for a partial converse.) Combining these theorems with a thorough investigation of the differential inequality that results from an integration by parts can also lead to some interesting characterizations of the behavior of admissible estimators (Portnoy 1975, Berger 1976d, 1976e, 1980a, Brown 1979, 1988). A detailed survey of admissibility is given by Rukhin (1995).
(ii) Dimension doubling. Note that for the Poisson case, in contrast to the normal case, the factor $r-1$ tends to appear (instead of $r-2$). This results in the Poisson sample mean being inadmissible in two dimensions. This occurrence was first explained by Brown (1978, Section 2.3), who noted that the Poisson problem in $k$ dimensions is "qualitatively similar" to the location problem in $2k$ dimensions (in terms of a differential inequality derived to establish admissibility). In Johnstone and MacGibbon (1992), this idea of "dimension doubling" also occurs and provides motivation for the transformed version of the Poisson problem that they consider.
(iii) Finite populations. Although we did not cover the topic of admissibility in finite population sampling, there are interesting connections between admissibility in multinomial, nonparametric, and finite population problems. Using results of Meeden and Ghosh (1983) and Cohen and Kuo (1983), Meeden, Ghosh, and Vardeman (1985) present a theorem that summarizes the admissibility connection, relating admissibility in a multinomial problem to admissibility in a nonparametric problem. Stepwise Bayes arguments, which originated with Johnson (1971) [see also Alam 1979, Hsuan 1979, Brown 1981], are useful tools for establishing admissibility in these situations.

CHAPTER 6
Asymptotic Optimality

1 Performance Evaluations in Large Samples
The performance of the estimators developed in the preceding chapters (UMVU, MRE, Bayes, and minimax) is often difficult to evaluate exactly. This difficulty can frequently be overcome by computing power (particularly by simulation). Although such an approach works well on a case-by-case basis, it lacks the ability to provide an overall picture of performance, which is needed, for example, to assess robustness and efficiency. We shall consider an alternative approach here: to obtain approximations to, or limits of, performance measures as the sample size gets large.
Some of the probabilistic tools required for this purpose were treated in Section 1.8. One basic result of that section concerned the consistency of estimators, that is, their convergence in probability to the parameters that they are estimating. For example, if $X_1, X_2, \ldots$ are iid with $E(X_i) = \xi$ and $\text{var}(X_i) = \sigma^2 < \infty$, it was seen in Example 1.8.3 that the sample mean $\bar X$ is a consistent estimator of $\xi$.
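The case-by-case simulation approach mentioned above is easy to illustrate. The following minimal Python sketch (the normal model and all constants are our own illustrative assumptions, not taken from the text) estimates $P(|\bar X_n - \xi| > \varepsilon)$ for increasing $n$, exhibiting the convergence in probability that defines consistency.

```python
import numpy as np

rng = np.random.default_rng(2)
xi, sigma, eps, reps = 1.0, 2.0, 0.1, 200_000

for n in [10, 100, 1_000, 10_000]:
    # for normal data, Xbar_n is exactly N(xi, sigma^2/n), so sample it directly
    xbar = rng.normal(xi, sigma / np.sqrt(n), size=reps)
    print(f"n = {n:6d}:  P(|Xbar - xi| > {eps}) ~= {np.mean(np.abs(xbar - xi) > eps):.4f}")
```

The estimated probability drops from essentially 1 toward 0 as $n$ grows, but the output gives no general formula for the rate; it is exactly this limitation that motivates the analytic approximations developed next.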
More detailed information about the large-sample behavior of $\bar X$ can be obtained from the central limit theorem (Theorem 1.8.9), which states that
$$\sqrt n(\bar X - \xi) \overset{L}{\to} N(0, \sigma^2). \qquad (1.1)$$
The limit theorem (1.1) suggests the approximation
$$P\left(\bar X \le \xi + \frac{w}{\sqrt n}\right) \approx \Phi\left(\frac{w}{\sigma}\right), \qquad (1.2)$$
where $\Phi$ denotes the standard normal distribution function. Instead of the probabilities (1.2), one may be interested in the expectation, variance, and higher moments of $\bar X$, and then find
$$E(\bar X) = \xi, \quad E(\bar X - \xi)^2 = \frac{\sigma^2}{n}; \qquad (1.3)$$
$$E(\bar X - \xi)^3 = \frac{1}{n^2}\mu_3, \quad E(\bar X - \xi)^4 = \frac{1}{n^3}\mu_4 + \frac{3(n-1)}{n^3}\sigma^4,$$
where $\mu_k = E(X_1 - \xi)^k$. We shall be concerned with the behavior corresponding to (1.1) and (1.3) not only of $\bar X$ but also of functions of $\bar X$.
As we shall see, performance evaluations of statistics $h(\bar X_n)$ based on (1.1) and (1.3), respectively (the asymptotic distribution and the limiting moment approach), agree often but not always. It is convenient to have both approaches since they tend to be applicable in different circumstances. In the present section, we shall begin with (1.3) and then take up (1.1).

(a) The limiting moment approach
Theorem 1.1 Let $X_1, \ldots, X_n$ be iid with $E(X_1) = \xi$, $\text{var}(X_1) = \sigma^2$, and finite fourth moment, and suppose $h$ is a function of a real variable whose first four derivatives $h'(x)$, $h''(x)$, $h'''(x)$, and $h^{(iv)}(x)$ exist for all $x \in I$, where $I$ is an interval with $P(X_1 \in I) = 1$. Furthermore, suppose that $|h^{(iv)}(x)| \le M$ for all $x \in I$, for some $M < \infty$. Then,
$$E[h(\bar X)] = h(\xi) + \frac{\sigma^2}{2n}h''(\xi) + R_n, \qquad (1.4)$$
and if, in addition, the fourth derivative of $h^2$ is also bounded,
$$\text{var}[h(\bar X)] = \frac{\sigma^2}{n}[h'(\xi)]^2 + R_n, \qquad (1.5)$$
where the remainder $R_n$ in both cases is $O(1/n^2)$; that is, there exist $n_0$ and $A < \infty$ such that $R_n(\xi) < A/n^2$ for $n > n_0$ and all $\xi$.
Proof. The reason for the possibility of such a result is the strong set of assumptions concerning $h$, which permit an expansion of $h(\bar X_n)$ about $h(\xi)$ with bounded coefficients. Using the assumptions on the fourth derivative $h^{(iv)}(x)$, we can write
$$h(\bar x_n) = h(\xi) + h'(\xi)(\bar x_n - \xi) + \tfrac12 h''(\xi)(\bar x_n - \xi)^2 + \tfrac16 h'''(\xi)(\bar x_n - \xi)^3 + R(\bar x_n, \xi), \qquad (1.6)$$
where
$$|R(\bar x_n, \xi)| \le \frac{M(\bar x_n - \xi)^4}{24}. \qquad (1.7)$$
Using (1.3) and taking expectations of both sides of (1.6), we find
$$E[h(\bar X_n)] = h(\xi) + \tfrac12 h''(\xi)\frac{\sigma^2}{n} + O\left(\frac{1}{n^2}\right). \qquad (1.8)$$
Here, the term in $h'(\xi)$ is missing since $E(\bar X_n) = \xi$, and the order of the remainder term follows from (1.3) and (1.7). To obtain an expansion of $\text{var}[h(\bar X_n)]$, apply (1.8) to $h^2$ in place of $h$, using the fact that
$$[h^2(\xi)]'' = 2\{h(\xi)h''(\xi) + [h'(\xi)]^2\}. \qquad (1.9)$$
This yields
$$E[h^2(\bar X_n)] = h^2(\xi) + [h(\xi)h''(\xi) + (h'(\xi))^2]\frac{\sigma^2}{n} + O\left(\frac{1}{n^2}\right), \qquad (1.10)$$
and it follows from (1.8) that
$$[Eh(\bar X_n)]^2 = h^2(\xi) + h(\xi)h''(\xi)\frac{\sigma^2}{n} + O\left(\frac{1}{n^2}\right). \qquad (1.11)$$
Taking the difference proves the validity of (1.5). ✷
Equation (1.4) suggests the following definition.
Definition 1.2 A sequence of estimators $\delta_n$ of $h(\xi)$ is unbiased in the limit if
$$E[\delta_n] \to h(\xi) \quad \text{as } n \to \infty, \qquad (1.12)$$
that is, if the bias of $\delta_n$, $E[\delta_n] - h(\xi)$, tends to 0 as $n \to \infty$. Whenever (1.4) holds, the estimator $h(\bar X)$ of $h(\xi)$ is unbiased in the limit.
Example 1.3 Approximate variance of a binomial UMVU estimator. Consider the UMVU estimator $T(n-T)/n(n-1)$ of $pq$ in Example 2.3.1. Note that $\xi = E(X) = p$ and $\sigma^2 = pq$, that $\bar X = T/n$, and write the estimator as
$$\delta(\bar X) = \bar X(1 - \bar X)\,\frac{n}{n-1}.$$
To obtain an approximation of its variance, let us consider first $h(\bar X) = \bar X(1 - \bar X)$. Then, $h'(p) = 1 - 2p = q - p$ and $\text{var}[h(\bar X)] = (1/n)pq(q-p)^2 + O(1/n^2)$. Also,
$$\left(\frac{n}{n-1}\right)^2 = \frac{1}{(1 - 1/n)^2} = 1 + \frac{2}{n} + O\left(\frac{1}{n^2}\right).$$
Thus,
$$\text{var}\,\delta(\bar X) = \left(\frac{n}{n-1}\right)^2 \text{var}\,h(\bar X) = \left[\frac{pq(q-p)^2}{n} + O\left(\frac{1}{n^2}\right)\right]\left[1 + \frac{2}{n} + O\left(\frac{1}{n^2}\right)\right] = \frac{pq(q-p)^2}{n} + O\left(\frac{1}{n^2}\right).$$
The exact variance of $\delta(\bar X)$ given in Problem 2.3.1(b) shows that the error is $2p^2q^2/n(n-1)$, which is, indeed, of the order $1/n^2$. The maximum absolute error occurs at $p = 1/2$ and is $1/8n(n-1)$. It is a decreasing function of $n$ which, for $n = 10$, equals $1/720$. On the other hand, the relative error will tend to be large unless $p$ is close to 0 or 1 (Problem 1.2; see also Examples 1.8.13 and 1.8.15). ∥
In this example, the bounded derivative condition of Theorem 1.1 is satisfied for all polynomials $h$ because $\bar X$ is bounded. On the other hand, the condition fails when $h$ is a polynomial of degree $k \ge 4$ and the $X$'s are, for example, normally distributed. However, (1.5) continues to hold in these circumstances. To see this, carry out an expansion like (1.6) to the $(k-1)$st power. The $k$th derivative of $h$ is then a constant $M$, and instead of (1.7), the remainder will satisfy $R = M(\bar X - \xi)^k/k!$. The result then follows from the fact that all moments of the $X$'s of order $\le k$ exist, and from Problem 1.1. This argument proves the following variant of Theorem 1.1.
Theorem 1.4 In the situation of Theorem 1.1, formulas (1.4) and (1.5) remain valid if for some $k \ge 3$ the function $h$ has $k$ derivatives, the $k$th derivative is bounded, and the first $k$ moments of the $X$'s exist.
To cover estimators such as
$$\delta_n(\bar X) = \Phi\left(\sqrt{\frac{n}{n-1}}\,(u - \bar X)\right) \qquad (1.13)$$
of $p$ in Example 2.2.2, in which the function $h$ depends on $n$, a slight generalization of Theorem 1.1 is required.
Theorem 1.5 Suppose that the assumptions of Theorem 1.1 hold, and that $c_n$ is a sequence of constants satisfying
$$c_n = 1 + \frac{a}{n} + O\left(\frac{1}{n^2}\right). \qquad (1.14)$$
Then, the variance of
$$\delta_n(\bar X) = h(c_n\bar X) \qquad (1.15)$$
satisfies
$$\text{var}[\delta_n(\bar X)] = \frac{\sigma^2}{n}[h'(\xi)]^2 + O\left(\frac{1}{n^2}\right). \qquad (1.16)$$
The proof is left as an exercise (Problem 1.3).
Example 1.6 Approximate variance of a normal probability estimator. For the estimator $\delta_n(\bar X)$ given by (1.13), we have
$$c_n = \sqrt{\frac{n}{n-1}} = \left(1 - \frac{1}{n}\right)^{-1/2} = 1 + \frac{1}{2n} + O\left(\frac{1}{n^2}\right)$$
and $\delta_n = h(c_n\bar Y) = \Phi(-c_n\bar Y)$, where $Y_i = X_i - u$. Thus,
$$h'(\xi) = -\varphi(\xi), \quad h''(\xi) = \xi\varphi(\xi), \qquad (1.17)$$
and hence from (1.16)
$$\text{var}\,\delta_n(\bar X) = \frac{1}{n}\varphi^2(u - \xi) + O\left(\frac{1}{n^2}\right). \qquad (1.18)$$
Since $\xi$ is unknown, it is of interest to note that, to terms of order $1/n$, the maximum variance is $1/2\pi n$. If the factor $\sqrt{n/(n-1)}$ is neglected and the maximum likelihood estimator $\delta(\bar X) = \Phi(u - \bar X)$ is used instead of $\delta_n$, the variance is unchanged (up to the order $1/n$); however, the estimator is now biased. It follows from (1.8) and (1.17) that
$$E\,\delta(\bar X) = p + \frac{\xi - u}{2n}\,\varphi(u - \xi) + O\left(\frac{1}{n^2}\right),$$
so that the bias is of the order $1/n$. The MLE is therefore unbiased in the limit. ∥
The approximations of the accuracy of an estimator indicated by the above theorems may appear somewhat restrictive in that they apply only to functions of sample means. However, this covers all sufficiently smooth estimators based on samples from one-parameter exponential families, for on the basis of such a sample, $\bar T = \sum T(X_i)/n$ [in the notation of (8.1)] is a sufficient statistic, so that
On the other hand, this type of approximation is not applicable to optimal estimators for distributions whose support depends on the unknown parameter, such as the uniform or exponential distributions. Here, the minimal sufficient statistics and the estimators based on them are governed by different asymptotic laws with different convergence rates. (Problems 1.15–1.18 and Examples 7.11 - 7.14). The conditions on the function h(·) of Theorem 1.1 are fairly stringent and do not apply, for example, to h(x) = 1/x or √x (unless the Xi are bounded away from zero) and the corresponding fact also limits the applicability of multivariate versions of these theorems (see Problem 1.27). When the assumptions of the the-orems are not satisfied, the conclusions may or may not hold, depending on the situation. Example 1.7 Limiting variance in the exponential distribution. Suppose that X1, . . . , Xn are iid from the exponential distribution with density (1/θ)e−x/θ, x > 0, θ > 0, so that EXi = θ and varXi = θ2. The assumptions of Theorem 1.1 do not hold for h(x) = √x, so we cannot use (1.5) to approximate var( √¯ X). However, an exact calculation shows that (Problem 1.14) var  ¯ X =  1 −1 n H(n + 1/2) H(n) 2 θ and that limn→∞n var( √¯ X) = θ/4 = θ2[h′(θ)]2. Thus, although the assumptions of Theorem 1.1 do not apply, the limit of the approximation (1.5) is correct. For an example in which the conclusions of the theorem do not hold, see Problem 1.13(a). ∥ Let us next take up the second approach mentioned at the beginning of the section. (b) The asymptotic distribution approach Instead of the behavior of the moments E[h( ¯ X)] and var[h( ¯ X)], we now consider the probabilistic behavior of h( ¯ X). Theorem 1.8 If X1, . . . , Xn are iid with expectation ξ, and h is any function which is continuous at ξ, then h( ¯ X) P →h(ξ) as n →∞. (1.19) Proof. It was seen in Section 1.8 (Example 8.3) that ¯ X P →ξ. The conclusion (1.19) is then a consequence of the following general result. ✷ Theorem 1.9 If a sequence of random variables Tn tends to ξ in probability and if h is continuous at ξ, then h(Tn) P →h(ξ). 434 ASYMPTOTIC OPTIMALITY [ 6.1 Proof. To show that P(|h(Tn) −h(ξ)| < a) →1, it is enough to notice that by the continuity of h, the difference |h(Tn) −h(ξ)| will be arbitrarily small if Tn is close to ξ, and that for any a, the probability that |Tn −ξ| < a tends to 1 as n →∞. We leave a detailed proof to Problem 1.30 ✷ Consistency can be viewed as a probabilistic analog of unbiasedness in the limit but, as Theorem 1.8 shows, requires much weaker assumptions on h. Unlike those needed for Theorem 1.1, they are satisfied, for example, when ξ ̸= 0 and h(x) = 1/x or h(x) = 1/√x. The assumptions of Theorem 1.1 provide sufficient conditions in order that var[√n h( ¯ X)] →σ 2[h′(ξ)]2 as n →∞. (1.20) On the other hand, it follows from Theorem 8.12 of Section 1.8 that v2 = σ 2[h′(ξ)] is also the variance of the limiting distribution N(0, v2) of √n[h( ¯ X) −h(ξ)]. This asymptotic normality holds under the following weak assumptions on h. Theorem 1.10 Let X1, . . . , Xn be iid with E(Xi) = ξ and var(Xi) = σ 2. Suppose that (a) the function h has a derivative h′ with h′(ξ) ̸= 0, (b) the constants cn satisfy cn = 1 + a/n + O(1/n2). Then, (i) √n[h(cn ¯ X) −h(ξ)] has the normal limit distribution with mean zero and variance σ 2[h′(ξ)]2; (ii) ifh′(ξ) = 0 buth′′(ξ)existsandisnot0,thenn[h(cn ¯ x)−h(ξ)] L →1 2σ 2h′′(ξ)χ2 1 . Proof. Immediate consequence of Theorems 1.8.10 - 1.8.14. ✷ Example 1.11 Continuation of Example 1.6. 
For Yi = Xi −u, we have EYi = ξ −u and var Yi = σ 2. The maximum likelihood estimator of X(u −ξ) is given by δ′ n = X(−¯ Yn) and Theorem 1.10 shows √n[δ′ n −X(u −ξ)] D →N(0, φ2(u −ξ)). The UMVU estimator is δn = X(−cn ¯ Yn) as in (1.13), and again by Theorem 1.10, √n[δn −X(u −ξ)] D →N(0, ϕ2(u −ξ)). ∥ Example 1.12 Asymptotic distribution of squared mean estimators. Let X1, . . .,Xn beiidN(θ, σ 2),andlettheestimandbeθ2.Threeestimatorsofθ2 (Problems 2.2.1 and 2.2.2) are δ1n = ¯ X2 −σ 2 n (UMVU when σ is known), δ2n = ¯ X2 − S2 n(n −1) (UMVU when σ is unknown) where S2 = (Xi −¯ X)2, δ3n = ¯ X2 (MLE in either case). 6.1 ] PERFORMANCE EVALUATIONS IN LARGE SAMPLES 435 For each of these three sequences of estimators δ(X), let us find the limiting distri-bution of [δ(X) −θ2] suitably normalized. Now, √n(δ3n −θ) →N(0, σ 2) in law by the central limit theorem. Using h(u) = u2 in Theorem 1.8.10, it follows that √n( ¯ X2 −θ2) L →N(0, 4σ 2θ2), (1.21) provided h′(θ) = 2θ ̸= 0. Next, consider δ1n. Since √n  ¯ X2 −σ 2 n −θ2  = √n( ¯ X2 −θ2) −σ 2 √n, it follows from Theorem 1.8.9 that √n(δ1n −θ2) L →N(0, 4σ 2θ2). (1.22) Finally, consider √n  ¯ X2 − S2 n(n −1) −θ2  = √n( ¯ X2 −θ2) −1 √n  S2 n −1  . Now,S2/(n−1)tendstoσ 2 inprobability,soS2/√n (n−1)tendsto0inprobability. Thus, √n(δ2n −θ2) L →N(0, 4σ 2θ2). Hence, when θ ̸= 0, all three estimators have the same limit distribution. There remains the case θ = 0. It is seen from the Taylor expansion [for example, Equation (1.6)] that if h′(θ) = 0, then √n[h(Tn) −h(θ)] →0 in probability. Thus, in particular, in the present situation, when θ = 0, √n[δ(X) −θ2] →0 in probabilityforallthreeestimators.Whenh′(θ) = 0, √nisnolongertheappropriate normalizing factor: it tends to infinity too slowly. Let us therefore apply the second part of Theorem 1.10 to the three estimators δin (i = 1, 2, 3) when θ = 0. Since h′′(0) = 2, it follows that for δ3n = ¯ X2, n( ¯ X2 −θ2) = n( ¯ X2 −02) →1 2σ 2 (2χ2 1 ) = σ 2χ2 1 . Actually, since the distribution of √n ¯ X is N(0, σ 2) for each n, the statistic n ¯ X2 is distributed exactly as σ 2χ2 1 , so that no asymptotic argument would have been necessary. For δ1n, we find n  ¯ X2 −σ 2 n −θ2  = n ¯ X2 −σ 2, and the right-hand side tends in law to σ 2(χ2 1 −1). In fact, here too, this is the exact rather than just a limit distribution. Finally, consider δ2n. Here, n  ¯ X2 − S2 n(n −1) −θ2  = n ¯ X2 − S2 n −1, 436 ASYMPTOTIC OPTIMALITY [ 6.1 and since S2/(n −1) tends in probability to σ 2, the limiting distribution is again σ 2(χ2 1 −1). Although, for θ ̸= 0, the three sequences of estimators have the same limit distribution, this is no longer true when θ = 0. In this case, the limit distribution of n(δ −θ2) is σ 2(χ2 1 −1) for δ1n and δ2n but σ 2χ2 1 for the MLE δ3n. These two distributions differ only in their location. The distribution of σ 2(χ2 1 −1) is centered so that its expectation is zero, while that of σ 2χ2 1 has expectation σ 2. So, although δ1n and δ2n suffer from the disadvantage of taking on negative values with probability > 0, asymptotically they are preferable to the MLE δ3n. The estimators δ1n and δ2n of Example 1.12 can be thought of as bias-corrected versions of the MLE δ3n. Typically, the MLE ˆ θn has bias of order 1/n, say bn(θ) = B(θ) n + O  1 n2  . The order of the bias can be reduced by subtracting from ˆ θn an estimator of the bias based on the MLE. This leads to the bias-corrected ML estimator ˆ ˆ θn = ˆ θn −B( ˆ θn) n (1.23) whose bias will be of order 1/n2. 
To compare these bias-correcting approaches, consider the following example.

Example 1.13 A family of estimators. The estimators δ1n and δ3n for θ² of Example 1.12 are special cases (with c = 1 or 0) of the family of estimators

$$\delta_n^{(c)}=\bar X^2-\frac{c\sigma^2}{n}.\qquad(1.24)$$

As in Example 1.12, it follows from Theorems 1.8.10 and 1.8.12 that for θ ≠ 0,

$$\sqrt n\,[\delta_n^{(c)}-\theta^2]\xrightarrow{L}N(0,4\sigma^2\theta^2),\qquad(1.25)$$

so that the asymptotic variance is 4σ²θ². If, instead, we apply Theorem 1.1 with h(θ) = θ², we see that

$$E(\bar X^2)=\theta^2+\frac{\sigma^2}{n}+O\!\left(\frac1{n^2}\right),\qquad\operatorname{var}(\bar X^2)=\frac{4\sigma^2\theta^2}{n}+O\!\left(\frac1{n^2}\right).\qquad(1.26)$$

Thus, to the first order, the two approaches give the same result. Since the common value of the asymptotic and limiting variance does not involve c, this first-order approach does not provide a useful comparison of the estimators (1.24) corresponding to different values of c. To obtain such a comparison, we must take the next-order terms into account. This is easy for approach (a), where we only need to take the Taylor expansions (1.10) and (1.11) a step further. In fact, in the present case, it is easy to calculate var(δn^{(c)}) exactly (Problem 1.26). However, since the estimators δn^{(c)} are biased when c ≠ 1, they should be compared not in terms of their variances but in terms of the expected squared error, which is (Problem 1.26)

$$E\left[\bar X^2-\frac{c\sigma^2}{n}-\theta^2\right]^2=\frac{4\sigma^2\theta^2}{n}+\frac{(c^2-2c+3)\sigma^4}{n^2}.\qquad(1.27)$$

In terms of this measure, the estimator is better the smaller c² − 2c + 3 is. This quadratic has its minimum at c = 1, and the UMVU estimator δn^{(1)} therefore minimizes the risk (1.27) among all estimators (1.24). ∥
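Formula (1.27) is easy to confirm by simulation. The sketch below (Python; the parameter values are arbitrary choices, not from the text) estimates the risk of δn^{(c)} for several c, using the fact that only X̄ ~ N(θ, σ²/n) enters the estimator:

```python
import numpy as np

rng = np.random.default_rng(2)
theta, sigma, n, reps = 1.0, 1.0, 50, 1_000_000

xbar = rng.normal(theta, sigma / np.sqrt(n), size=reps)  # distribution of the mean
for c in (0.0, 1.0, 2.0, 3.0):
    d = xbar**2 - c * sigma**2 / n
    mse = float(((d - theta**2)**2).mean())
    formula = 4*sigma**2*theta**2/n + (c**2 - 2*c + 3)*sigma**4/n**2
    print(f"c={c}: simulated MSE={mse:.5f}  formula (1.27)={formula:.5f}")
```

Because the same draws of X̄ are used for every c, the simulated ordering across c is essentially exact, and c = 1 shows the smallest risk.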
Equality of the asymptotic variance and the limit of the variance does not always hold (Problem 1.38). However, what can be stated quite generally is that the appropriately normalized limit of the variance is greater than or equal to the asymptotic variance. To see this, let us state the following lemma.

Lemma 1.14 Let Yn, n = 1, 2, ..., be a sequence of random variables such that Yn →D Y, where E(Y) = 0 and var(Y) = E(Y²) = v² < ∞. For a constant A, define

$$Y_{nA}=Y_nI(|Y_n|\le A)+A\,I(|Y_n|>A),$$

the random variable Yn truncated at A. Then,

(a) lim_{A→∞} lim_{n→∞} E(Y²_{nA}) = lim_{A→∞} lim_{n→∞} E[min(Y²_n, A²)] = v²;

(b) if E(Y²_n) → w², then v² ≤ w².

Proof. (a) By Theorem 1.8.8,

$$\lim_{n\to\infty}E(Y_{nA}^2)=E[Y^2I(|Y|\le A)]+A^2P(|Y|>A),$$

and as A → ∞, the right side tends to v².

(b) It follows from Problem 1.39 that

$$\lim_{A\to\infty}\lim_{n\to\infty}E(Y_{nA}^2)\le\lim_{n\to\infty}\lim_{A\to\infty}E(Y_{nA}^2),\qquad(1.28)$$

provided the indicated limit exists. Now, lim_{A→∞} E(Y²_{nA}) = E(Y²_n), so that the right side of (1.28) is w², while the left side is v² by part (a). ✷

Suppose now that Tn is a sequence of statistics for which Yn = kn[Tn − E(Tn)] tends in law to a random variable Y with zero expectation. Then, the asymptotic variance v² = var(Y) and the limit of the variances w² = lim E(Y²_n), if it exists, satisfy v² ≤ w², as was claimed. (Note that w² need not be finite.) Conditions for v² and w² to coincide are given by Chernoff (1956). For the special case that Tn is a function of a sample mean of iid variables, the two coincide under the assumptions of Theorems 1.1 and 1.8.12.

2 Asymptotic Efficiency

The large-sample approximations of the preceding section not only provide a convenient method for assessing the performance of an estimator and for comparing different estimators; they also permit a new approach to optimality that is less restrictive than the theories of unbiased and equivariant estimation developed in Chapters 2 and 3.

It was seen that estimators of interest typically are consistent as the sample size tends to infinity and, suitably normalized, are asymptotically normally distributed about the estimand with a variance v(θ) (the asymptotic variance), which provides a reasonable measure of the accuracy of the estimator sequence. (In this connection, see Problem 2.1. We shall frequently use "estimator" instead of the more accurate but cumbersome term "estimator sequence.") Within this class of consistent asymptotically normal estimators, it turns out that under additional restrictions, there exist estimators that uniformly minimize v(θ). The remainder of this chapter is mainly concerned with the development of fairly explicit methods of obtaining such asymptotically efficient estimators.

Before embarking on this program, it may be helpful to note an important difference between the present large-sample approach and the small-sample results here and elsewhere. Both UMVU and MRE estimators tend to be unique (Theorem 1.7.10), and so are at least some of the minimax estimators derived in Chapter 5. On the other hand, it is in the nature of asymptotically optimal solutions not to be unique, since asymptotic results refer to the limiting behavior of sequences, and the same limit is shared by many different sequences. More specifically, if √n[δn − g(θ)] →L N(0, v) and {δn} is asymptotically optimal in the sense of minimizing v, then δn + Rn is also optimal, provided √n Rn → 0 in probability. As we shall see later, asymptotically equivalent optimal estimators can be obtained from quite different starting points.

The goal of minimizing the asymptotic variance is reasonable only if the estimators under consideration have the same asymptotic expectation. In particular, we shall be concerned with estimators whose asymptotic expectation is the quantity being estimated.

Definition 2.1 If kn[δn − g(θ)] →L H for some sequence kn, the estimator δn of g(θ) is asymptotically unbiased if the expectation of H is zero.

Note that the definition of asymptotic unbiasedness is analogous to that of asymptotic variance. Unlike Definition 1.2, it is concerned with properties of the limiting distribution rather than limiting properties of the distribution of the estimator sequence. To see that Definition 2.1 is independent of the normalizing constant, see Problem 2.2. Under the conditions of Theorem 1.8.12, the estimator h(Tn) is asymptotically unbiased for the parameter h(θ); the estimator of Theorem 1.1 is unbiased in the limit.

Example 2.2 Large-sample behavior of squared mean estimators. In Example 1.12, all three estimators of θ² are asymptotically unbiased and unbiased in the limit. We note that these results continue to hold if the assumption of normality is replaced by that of finite variance. ∥

In Example 1.12, the MLE was found to be asymptotically unbiased for θ ≠ 0 but asymptotically biased when θ = 0. Asymptotic bias also arises when the range of the distribution depends on the parameter, as the following example shows.
Example 2.3 Asymptotically biased estimator. If X1, ..., Xn are iid as U(0, θ), the MLE of θ is X(n) and satisfies (Problem 2.6)

$$n(\theta-X_{(n)})\xrightarrow{L}E(0,\theta),$$

and hence is asymptotically biased, although it is unbiased in the limit. A similar situation occurs in sampling from an exponential E(a, b) distribution. See Examples 7.11 and 7.12 for further details. ∥

The asymptotic analog of a UMVU estimator is an asymptotically unbiased estimator with minimum asymptotic variance. In the theory of such estimators, an important role is played by an asymptotic analog of the information inequality (2.5.31). If X1, ..., Xn are iid according to a density fθ(x) (with respect to μ) satisfying suitable regularity conditions, this inequality states that the variance of any unbiased estimator δ of g(θ) satisfies

$$\operatorname{var}_\theta(\delta)\ge\frac{[g'(\theta)]^2}{nI(\theta)},\qquad(2.1)$$

where I(θ) is the amount of information in a single observation, defined by (2.5.10). Suppose now that δn = δn(X1, ..., Xn) is asymptotically normal, say that

$$\sqrt n\,[\delta_n-g(\theta)]\xrightarrow{L}N[0,v(\theta)],\quad v(\theta)>0.\qquad(2.2)$$

Then, it turns out that under some additional restrictions, one also has

$$v(\theta)\ge\frac{[g'(\theta)]^2}{I(\theta)}.\qquad(2.3)$$

However, although the lower bound (2.1) is attained only in exceptional circumstances (Section 2.5), there exist sequences {δn} that satisfy (2.2) with v(θ) equal to the lower bound (2.3) subject only to quite general regularity conditions.

Definition 2.4 A sequence {δn} = {δn(X1, ..., Xn)} satisfying (2.2) with

$$v(\theta)=\frac{[g'(\theta)]^2}{I(\theta)}\qquad(2.4)$$

is said to be asymptotically efficient.

At first glance, (2.3) might be thought to be a consequence of (2.1). Two differences between the inequalities (2.1) and (2.3) should be noted, however.

(i) The estimator δ in (2.1) is assumed to be unbiased, whereas (2.2) only implies asymptotic unbiasedness and consistency of {δn}; it does not imply that δn is unbiased or even that its bias tends to zero (Problem 2.11).

(ii) The quantity v(θ) in (2.3) is an asymptotic variance, whereas (2.1) refers to the actual variance of δ. It follows from Lemma 1.14 that

$$v(\theta)\le\liminf\,[n\operatorname{var}_\theta\delta_n],\qquad(2.5)$$

but equality need not hold.

Thus, (2.3) is a consequence of (2.1) provided

$$\operatorname{var}\{\sqrt n\,[\delta_n-g(\theta)]\}\to v(\theta)\qquad(2.6)$$

and δn is unbiased, but not necessarily if these requirements do not hold. For a long time, (2.3) was nevertheless believed to be valid subject only to regularity conditions on the densities fθ. This belief was exploded by the example (due to Hodges; see Le Cam 1953) given below.

Before stating the example, note that in discussing the inequality (2.3) under assumption (2.2), if θ is real-valued and g(θ) is differentiable, it is enough to consider the case g(θ) = θ, for which (2.3) reduces to

$$v(\theta)\ge\frac1{I(\theta)}.\qquad(2.7)$$

For if

$$\sqrt n\,(\delta_n-\theta)\xrightarrow{L}N[0,v(\theta)]\qquad(2.8)$$

and g has derivative g′, it was seen in Theorem 1.8.12 that

$$\sqrt n\,[g(\delta_n)-g(\theta)]\xrightarrow{L}N[0,v(\theta)[g'(\theta)]^2].\qquad(2.9)$$

After the obvious change of notation, this implies (2.3).

Example 2.5 Superefficient estimator. Let X1, ..., Xn be iid according to the normal distribution N(θ, 1), and let the estimand be θ. It was seen in Table 2.5.1 that in this case I(θ) = 1, so that (2.7) reduces to v(θ) ≥ 1. On the other hand, consider the sequence of estimators

$$\delta_n=\begin{cases}\bar X&\text{if }|\bar X|\ge1/n^{1/4}\\ a\bar X&\text{if }|\bar X|<1/n^{1/4}.\end{cases}$$

Then (Problem 2.8), √n(δn − θ) →L N[0, v(θ)], where v(θ) = 1 when θ ≠ 0 and v(θ) = a² when θ = 0. If a < 1, inequality (2.7) is therefore violated at θ = 0. ∥

This phenomenon is quite general (Problems 2.4–2.5).
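The two-valued limit v(θ) of the Hodges estimator is easy to reproduce by simulation. A minimal Python sketch (illustrative constants, not from the text; X̄ is drawn directly from its N(θ, 1/n) distribution):

```python
import numpy as np

rng = np.random.default_rng(3)
a, reps = 0.5, 100_000

def hodges(xbar, n, a):
    # shrink toward 0 when |Xbar| falls below the n^(-1/4) threshold
    return np.where(np.abs(xbar) >= n**-0.25, xbar, a * xbar)

for n in (100, 10_000, 1_000_000):
    for theta in (0.0, 1.0):
        xbar = rng.normal(theta, 1/np.sqrt(n), size=reps)
        z = np.sqrt(n) * (hodges(xbar, n, a) - theta)
        print(f"n={n:>7}, theta={theta}: var = {float(z.var()):.3f}")
# the variance approaches a^2 = 0.25 at theta = 0 and 1 elsewhere
```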
There will typically exist estimators satisfying (2.8) but with v(θ) violating (2.7) for at least some values of θ, called points of superefficiency. However, (2.7) is almost true, for it was shown by Le Cam (1953) that for any sequence δn satisfying (2.8), the set S of points of superefficiency has Lebesgue measure zero. The following version of this result, which we shall not prove, is due to Bahadur (1964). The assumptions are somewhat stronger but similar to those of Theorem 2.5.15.

Remark on notation. Recall that we are using Xi and X, and xi and x, for real-valued random variables and the values they take on, respectively, and X and x for the vectors (X1, ..., Xn) and (x1, ..., xn), respectively.

Theorem 2.6 Let X1, ..., Xn be iid, each with density f(x|θ) with respect to a σ-finite measure μ, where θ is real-valued, and suppose the following regularity conditions hold.

(a) The parameter space Ω is an open interval (not necessarily finite).

(b) The distributions Pθ of the Xi have common support, so that the set A = {x : f(x|θ) > 0} is independent of θ.

(c) For every x ∈ A, the density f(x|θ) is twice differentiable with respect to θ, and the second derivative is continuous in θ.

(d) The integral ∫ f(x|θ) dμ(x) can be twice differentiated under the integral sign.

(e) The Fisher information I(θ) defined by (2.5.10) satisfies 0 < I(θ) < ∞.

(f) For any given θ0 ∈ Ω, there exist a positive number c and a function M(x) (both of which may depend on θ0) such that

$$\left|\frac{\partial^2\log f(x|\theta)}{\partial\theta^2}\right|\le M(x)\quad\text{for all }x\in A,\ \theta_0-c<\theta<\theta_0+c,$$

and Eθ0[M(X)] < ∞.

Under these assumptions, if δn = δn(X1, ..., Xn) is any estimator satisfying (2.8), then v(θ) satisfies (2.7) except on a set of Lebesgue measure zero.

Note that by Lemma 2.5.3, condition (d) ensures that for all θ ∈ Ω,

(g) E[∂ log f(X|θ)/∂θ] = 0

and

(h) E[−∂² log f(X|θ)/∂θ²] = E[∂ log f(X|θ)/∂θ]² = I(θ).

Condition (d) can be replaced by conditions (g) and (h) in the statement of the theorem.
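Identities (g) and (h) are easy to check numerically for a concrete model. The following Python sketch (an illustration, not part of the text) does so for the exponential density f(x|θ) = (1/θ)e^{−x/θ}, for which I(θ) = 1/θ²:

```python
import numpy as np

rng = np.random.default_rng(4)
theta, reps = 2.0, 2_000_000
x = rng.exponential(theta, size=reps)

# log f(x|theta) = -log(theta) - x/theta
score = -1/theta + x/theta**2               # d/dtheta log f
neg_hess = -(1/theta**2 - 2*x/theta**3)     # -d^2/dtheta^2 log f

print("E[score]            :", round(float(score.mean()), 4))       # (g): ~ 0
print("E[score^2]          :", round(float((score**2).mean()), 4))  # (h): ~ 1/theta^2 = 0.25
print("E[-d2/dtheta2 log f]:", round(float(neg_hess.mean()), 4))    # (h): ~ 0.25
```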
Example 2.5 makes it clear that no regularity conditions on the densities f(x|θ) can prevent estimators from violating (2.7). This possibility can be avoided only by placing restrictions on the sequence of estimators as well. In view of the information inequality (2.5.31), an obvious sufficient condition is (2.6) [with g(θ) = θ] together with

$$b'_n(\theta)\to0,\qquad(2.10)$$

where bn(θ) = Eθ(δn) − θ is the bias of δn. If I(θ) is continuous, as will typically be the case, a perhaps more appealing assumption is that v(θ) also be continuous. Then, (2.7) clearly cannot be violated at any point since, otherwise, it would be violated in an interval around this point, in contradiction to Theorem 2.6.

As an alternative, which under mild assumptions on f implies continuity of v(θ), Rao (1963) and Wolfowitz (1965) require the convergence in (2.2) to be uniform in θ. By working with coverage probabilities rather than asymptotic variance, the latter author also removes the unpleasant assumption that the limit distribution in (2.2) must be normal. An analogous result is proved by Pfanzagl (1970), who requires the estimators to be asymptotically median unbiased.

The search for restrictions on the sequence {δn} which would ensure (2.7) for all values of θ is motivated in part by the hope of the existence, within the restricted class, of uniformly best estimators for which v(θ) attains the lower bound. It is further justified by the fact, brought out by Le Cam (1953), Huber (1966), and Hájek (1972), that violation of (2.7) at a point θ0 entails certain unpleasant properties of the risk of the estimator in the neighborhood of θ0. This behavior can be illustrated in the Hodges example.

Example 2.7 Continuation of Example 2.5. The normalized risk function

$$R_n(\theta)=nE(\delta_n-\theta)^2\qquad(2.11)$$

of the Hodges estimator δn can be written as

$$R_n(\theta)=1-(1-a^2)\int_{\underline I_n}^{\bar I_n}(x+\sqrt n\,\theta)^2\varphi(x)\,dx+2\theta\sqrt n\,(1-a)\int_{\underline I_n}^{\bar I_n}(x+\sqrt n\,\theta)\varphi(x)\,dx,$$

where Ī_n = n^{1/4} − √n θ and I̲_n = −n^{1/4} − √n θ. When the integrals are broken up into their three and two terms, respectively, and the relations Φ′(x) = φ(x) and φ′(x) = −xφ(x) are used, Rn(θ) reduces to

$$R_n(\theta)=1-(1-a^2)\int_{\underline I_n}^{\bar I_n}x^2\varphi(x)\,dx+n\theta^2(1-a)^2[\Phi(\bar I_n)-\Phi(\underline I_n)]+2\sqrt n\,\theta a(1-a)[\varphi(\bar I_n)-\varphi(\underline I_n)].$$

Consider now the sequence of parameter values θn = n^{−1/4}, so that √n θn = n^{1/4}, I̲_n = −2n^{1/4}, Ī_n = 0. Then, √n θn φ(I̲_n) → 0, so that the third term tends to infinity as n → ∞. Since the second term is positive and the first term is bounded, it follows that Rn(θn) → ∞ for θn = n^{−1/4}, and hence, a fortiori, that sup_θ Rn(θ) → ∞.

Let us now compare this result with the fact that (Problem 2.12), for any fixed θ,

$$R_n(\theta)\to1\ \text{ for }\theta\ne0,\qquad R_n(0)\to a^2.$$

(This shows that in the present case, the limiting risk is equal to the asymptotic variance; see Problem 2.4.) The functions Rn(θ) are continuous functions of θ with discontinuous limit function

$$L(\theta)=1\ \text{ for }\theta\ne0,\qquad L(0)=a^2.$$

However, for large n, each of the functions Rn rises far above the limit value 1 at values of θ tending to the origin with n, with the height of the peak tending to infinity with n. This is illustrated in Figure 2.1, where values of Rn(θ) are shown for various values of n.

[Figure 2.1. Risk functions Rn(θ) of the superefficient estimator δn of Example 2.5 for a = .5.]

As Figure 2.1 shows, the improvement (over X̄) from 1 to a² in the limiting risk at the origin, and hence also near the origin for large finite n, is bought at the price of an enormous increase in risk at points slightly further away but nevertheless close to the origin. (In this connection, see Problem 2.14.) ∥

3 Efficient Likelihood Estimation

Under smoothness assumptions similar to those of Theorem 2.6, we shall in the present section prove the existence of asymptotically efficient estimators and provide a method for determining such estimators which, in many cases, leads to an explicit solution. We begin with the following assumptions:

(A0) The distributions Pθ of the observations are distinct (otherwise, θ cannot be estimated consistently; but see Redner 1981 for a different point of view).

(A1) The distributions Pθ have common support.

(A2) The observations are X = (X1, ..., Xn), where the Xi are iid with probability density f(xi|θ) with respect to μ.

(A3) The parameter space Ω contains an open set ω of which the true parameter value θ0 is an interior point. (The true value of θ will be denoted by θ0.)

The joint density of the sample,

$$f(x_1|\theta)\cdots f(x_n|\theta)=\prod f(x_i|\theta),$$

considered as a function of θ, plays a central role in statistical estimation, with a history dating back to the eighteenth century (see Note 10.1).

Definition 3.1 For a sample point x = (x1, ..., xn) from a density f(x|θ), the likelihood function L(θ|x) = f(x|θ) is the sample density considered as a function of θ for fixed x.
In the case of iid observations, we have L(θ|x) = ∏_{i=1}^n f(xi|θ). It is then often easier to work with the logarithm of the likelihood function, the log likelihood

$$l(\theta|x)=\sum_{i=1}^n\log f(x_i|\theta).$$

Theorem 3.2 Under assumptions (A0)–(A2),

$$P_{\theta_0}\big(L(\theta_0|X)>L(\theta|X)\big)\to1\quad\text{as }n\to\infty\qquad(3.1)$$

for any fixed θ ≠ θ0.

Proof. The inequality is equivalent to

$$\frac1n\sum\log\big[f(X_i|\theta)/f(X_i|\theta_0)\big]<0.$$

By the law of large numbers, the left side tends in probability toward Eθ0 log[f(X|θ)/f(X|θ0)]. Since −log is strictly convex, Jensen's inequality shows that

$$E_{\theta_0}\log\big[f(X|\theta)/f(X|\theta_0)\big]<\log E_{\theta_0}\big[f(X|\theta)/f(X|\theta_0)\big]=0,$$

and the result follows. ✷

By (3.1), the density of X at the true θ0 exceeds that at any other fixed θ with high probability when n is large. We do not know θ0, but we can determine the value θ̂ of θ that maximizes the density of X, that is, which maximizes the likelihood function at the observed X = x. If this value exists and is unique, it is the maximum likelihood estimator (MLE) of θ. (For a more general definition, see Strasser 1985, Sections 64.4 and 84.2, or Scholz 1980, 1985; a discussion of the MLE as a summarizer of the data rather than an estimator is given by Efron 1982a.) The MLE of g(θ) is defined to be g(θ̂). If g is 1:1 and ξ = g(θ), this agrees with the definition of ξ̂ as the value of ξ that maximizes the likelihood, and the definition is consistent also in the case that g is not 1:1. (In this connection, see Zehna 1966 and Berk 1967b.)

Theorem 3.2 suggests that if the density of X varies smoothly with θ, the MLE of θ typically should be close to the true value of θ, and hence be a reasonable estimator.

Example 3.3 Binomial MLE. Let X have the binomial distribution b(p, n). Then, the MLE of p is obtained by maximizing $\binom nxp^xq^{n-x}$ and hence is p̂ = X/n (Problem 3.1). ∥

Example 3.4 Normal MLE. If X1, ..., Xn are iid as N(ξ, σ²), it is convenient to obtain the MLE by maximizing the logarithm of the density,

$$-n\log\sigma-\frac1{2\sigma^2}\sum(x_i-\xi)^2-c.$$

When (ξ, σ) are both unknown, the maximizing values are ξ̂ = x̄ and σ̂² = Σ(xi − x̄)²/n (Problem 3.3). ∥

As a first question regarding the MLE for iid variables, let us ask whether it is consistent. We begin with the case in which Ω is finite, so that θ can take on only a finite number of values. In this case, a sequence δn is consistent if and only if

$$P_\theta(\delta_n=\theta)\to1\quad\text{for all }\theta\in\Omega\qquad(3.2)$$

(Problem 3.6).

Corollary 3.5 Under assumptions (A0)–(A2), if Ω is finite, then the MLE θ̂n exists, it is unique with probability tending to 1, and it is consistent.

Proof. The result is an immediate consequence of Theorem 3.2 and the fact that if P(Ain) → 1 for i = 1, ..., k, then also P[A1n ∩ ··· ∩ Akn] → 1 as n → ∞. ✷
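The closed forms of Examples 3.3 and 3.4 provide a convenient check on numerical likelihood maximization. A minimal Python sketch (an illustration, not from the text; it assumes SciPy is available, and the data are simulated with arbitrary constants):

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(5)
x = rng.normal(3.0, 2.0, size=1000)

def nll(params):
    # negative log likelihood of N(xi, sigma^2); sigma parametrized on log scale
    xi, log_sigma = params
    sigma = np.exp(log_sigma)
    return len(x) * np.log(sigma) + ((x - xi)**2).sum() / (2 * sigma**2)

opt = minimize(nll, x0=[0.0, 0.0])
print("numerical MLE:", opt.x[0], np.exp(opt.x[1]))
print("closed form  :", x.mean(), x.std())   # std() with ddof=0 matches the MLE
```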
Example 3.6 An inconsistent MLE. Let h be a continuous function defined on (0, 1] which is strictly decreasing, with h(x) ≥ 1 for all 0 < x ≤ 1, and satisfying

$$\int_0^1h(x)\,dx=\infty.\qquad(3.3)$$

Given a constant 0 < c < 1, let ak, k = 0, 1, ..., be a sequence of constants defined inductively as follows: a0 = 1; given a0, ..., a_{k−1}, the constant ak is defined by

$$\int_{a_k}^{a_{k-1}}[h(x)-c]\,dx=1-c.\qquad(3.4)$$

It is easy to see that there exists a unique value 0 < ak < a_{k−1} satisfying (3.4) (Problem 3.8). Since the sequence {ak} is decreasing, it tends to a limit a ≥ 0. If a were > 0, the left side of (3.4) would tend to zero, which is impossible. Thus, ak → 0 as k → ∞.

Consider now the sequence of densities

$$f_k(x)=\begin{cases}c&\text{if }x\le a_k\ \text{or}\ x>a_{k-1}\\ h(x)&\text{if }a_k<x\le a_{k-1},\end{cases}$$

and the problem of estimating the parameter k on the basis of independent observations X1, ..., Xn from fk. We shall show that the MLE exists and that it tends to infinity in probability regardless of the true value k0 of k, and is therefore not consistent, provided h(x) → ∞ sufficiently fast as x → 0.

Let us denote the joint density of the X's by pk(x) = fk(x1) ··· fk(xn). That the MLE exists follows from the fact that pk(x) = cⁿ < 1 for any value of k for which the interval Ik = (ak, a_{k−1}] contains none of the observations, so that the maximizing value of k must be one of the ≤ n values for which Ik contains at least one of the x's. For n = 1, the MLE is the value of k for which X1 ∈ Ik, and for n = 2, the MLE is the value of k for which X(1) ∈ Ik. For n = 3, it may happen that one observation lies in Ik and two in Il (k < l), and whether the MLE is k or l then depends on whether c·h(x(1)) is greater than or less than h(x(2))h(x(3)).

We shall now prove that the MLE K̂n (which is unique with probability tending to 1) tends to infinity in probability, that is, that

$$P(\hat K_n>k)\to1\quad\text{for every }k,\qquad(3.5)$$

provided h satisfies

$$h(x)\ge e^{1/x^2}\qquad(3.6)$$

for all sufficiently small values of x. To prove (3.5), we will show that for any fixed j,

$$P[p_{K^*_n}(X)>p_j(X)]\to1\quad\text{as }n\to\infty,\qquad(3.7)$$

where K*n is the value of k for which X(1) ∈ Ik. Since p_{K̂n}(X) ≥ p_{K*n}(X), it then follows that for any fixed k,

$$P[\hat K_n>k]\ge P[p_{K^*_n}(X)>p_j(X)\ \text{for }j=1,\dots,k]\to1.$$

To prove (3.7), consider

$$L_{jk}=\log\frac{f_k(x_1)\cdots f_k(x_n)}{f_j(x_1)\cdots f_j(x_n)}=\sum{}^{(1)}\log\frac{h(x_i)}c-\sum{}^{(2)}\log\frac{h(x_i)}c,$$

where Σ^{(1)} and Σ^{(2)} extend over all i for which xi ∈ Ik and xi ∈ Ij, respectively. Now, xi ∈ Ij implies that h(xi) < h(aj), so that

$$\sum{}^{(2)}\log[h(x_i)/c]<\nu_{jn}\log[h(a_j)/c],$$

where νjn is the number of x's in Ij. Similarly, for k = K*n,

$$\sum{}^{(1)}\log[h(x_i)/c]\ge\log[h(x_{(1)})/c],$$

since log[h(x)/c] ≥ 0 for all x. Thus,

$$\frac1nL_{j,K^*_n}\ge\frac1n\log\frac{h(x_{(1)})}c-\frac1n\nu_{jn}\log\frac{h(a_j)}c.$$

Since νjn/n tends in probability to P(X1 ∈ Ij) < 1, it only remains to show that

$$\frac1n\log h(X_{(1)})\to\infty\ \text{ in probability}.\qquad(3.8)$$

Instead of X1, ..., Xn, consider a sample Y1, ..., Yn from the uniform distribution U(0, 1/c). Then, for any x, P(Yi > x) ≥ P(Xi > x), and hence

$$P[h(Y_{(1)})>x]\le P[h(X_{(1)})>x];$$

it is therefore enough to prove that (1/n) log h(Y(1)) → ∞ in probability. If h satisfies (3.6), then (1/n) log h(Y(1)) ≥ 1/(nY²_{(1)}), and the right side tends to infinity in probability since nY(1) tends to a limit distribution (Problem 2.6). This completes the proof. ∥

For later reference, note that the proof has established not only (3.5) but also the fact that for any fixed A (Problem 3.9),

$$P[p_{K^*_n}(X)>A^np_j(X)]\to1.\qquad(3.9)$$

The example suggests (and this suggestion will be verified in the next section) that also for densities depending smoothly on a continuously varying parameter θ, the MLE need not be consistent. We shall now show, however, that a slightly weaker conclusion is possible under relatively mild conditions. Throughout the present section, we shall assume θ to be real-valued. The case of several parameters will be taken up in Section 6.5.

In the following, we shall frequently use the shorthand notation l(θ) for the log likelihood

$$l(\theta|x)=\sum\log f(x_i|\theta),\qquad(3.10)$$

and l′(θ), l″(θ), ... for its derivatives with respect to θ.
A way around the difficulty presented by this example was found by Cramér (1946a, 1946b), who replaced the search for a global maximum of the likelihood function with that for a local maximum.

Theorem 3.7 Let X1, ..., Xn satisfy (A0)–(A3) and suppose that for almost all x, f(x|θ) is differentiable with respect to θ in ω, with derivative f′(x|θ). Then, with probability tending to 1 as n → ∞, the likelihood equation

$$\frac\partial{\partial\theta}l(\theta|x)=0\qquad(3.11)$$

or, equivalently, the equation

$$l'(\theta|x)=\sum\frac{f'(x_i|\theta)}{f(x_i|\theta)}=0\qquad(3.12)$$

has a root θ̂n = θ̂n(x1, ..., xn) such that θ̂n(X1, ..., Xn) tends to the true value θ0 in probability.

Proof. Let a be small enough so that (θ0 − a, θ0 + a) ⊂ ω, and let

$$S_n=\{x:l(\theta_0|x)>l(\theta_0-a|x)\ \text{and}\ l(\theta_0|x)>l(\theta_0+a|x)\}.\qquad(3.13)$$

By Theorem 3.2, Pθ0(Sn) → 1. For any x ∈ Sn, there thus exists a value θ0 − a < θ̂n < θ0 + a at which l(θ) has a local maximum, so that l′(θ̂n) = 0. Hence, for any a > 0 sufficiently small, there exists a sequence θ̂n = θ̂n(a) of roots such that

$$P_{\theta_0}(|\hat\theta_n-\theta_0|<a)\to1.\qquad(3.14)$$

It remains to show that we can determine such a sequence which does not depend on a. Let θ*n be the root closest to θ0. [This exists because the limit of a sequence of roots is again a root, by the continuity of l′(θ).] Then, clearly, Pθ0(|θ*n − θ0| < a) → 1, and this completes the proof. ✷

In connection with this theorem, the following comments should be noted.

1. The proof yields the additional fact that with probability tending to 1, the roots θ̂n(a) can be chosen to be local maxima, and so therefore can the θ*n if we let θ*n be the closest root corresponding to a maximum.

2. On the other hand, the theorem does not establish the existence of a consistent estimator sequence since, with the true value θ0 unknown, the data do not tell us which root to choose so as to obtain a consistent sequence. An exception, of course, is the case in which the root is unique.

3. It should also be emphasized that the existence of a root θ̂n is not asserted for all x (or, for a given n, even for any x). This does not affect consistency, which only requires θ̂n to be defined on a set S′n whose probability tends to 1 as n → ∞.

4. Although the likelihood equation can have many roots, the consistent sequence of roots generated by Theorem 3.7 is essentially unique. For a more precise statement of this result, which is due to Huzurbazar (1948), see Problem 3.28.

5. Finally, there is a technical question concerning the measurability of the estimator sequence θ̂n(a), and hence of the sequence θ*n. Recall from Section 1.2 that θ̂n(a) is a measurable function if the set {x : θ̂n(a) > t} is a measurable set for every t. Since θ̂n(a) is defined implicitly, its measurability (and also that of θ*n) is not immediately obvious. Happily, it turns out that the sequences θ̂n(a) and θ*n are measurable. (For details, see Problem 3.29.)

Corollary 3.8 Under the assumptions of Theorem 3.7, if the likelihood equation has a unique root δn for each n and all x, then {δn} is a consistent sequence of estimators of θ. If, in addition, the parameter space is an open interval (θ̲, θ̄) (not necessarily finite), then with probability tending to 1, δn maximizes the likelihood, that is, δn is the MLE, which is therefore consistent.
Proof. The first statement is obvious. To prove the second, suppose the probability of δn being the MLE does not tend to 1. Then, for sufficiently large values of n, the likelihood must, with positive probability, tend to a supremum as θ tends toward θ̲ or θ̄. Now, with probability tending to 1, δn is a local maximum of the likelihood, so that between δn and the boundary the likelihood would have to have a local minimum, that is, a second root of the likelihood equation. This contradicts the assumed uniqueness of the root. ✷

The conclusion of Corollary 3.8 holds, of course, not only when the root of the likelihood equation is unique but also when the probability of multiple roots tends to zero as n → ∞. On the other hand, even when the root is unique, the corollary says nothing about its properties for finite n.

Example 3.9 Minimum likelihood. Let X take on the values 0, 1, 2 with probabilities 6θ² − 4θ + 1, θ − 2θ², and 3θ − 4θ², respectively (0 < θ < 1/2). Then, the likelihood equation has a unique root for all x, which is a minimum for x = 0 and a maximum for x = 1 and 2 (Problem 3.11). ∥

Theorem 3.7 establishes the existence of a consistent root of the likelihood equation. The next theorem asserts that any such sequence is asymptotically normal and efficient.

Theorem 3.10 Suppose that X1, ..., Xn are iid and satisfy the assumptions of Theorem 2.6, with (c) and (d) replaced by the corresponding assumptions on the third (rather than the second) derivative, that is, by the existence of a third derivative satisfying

$$\left|\frac{\partial^3}{\partial\theta^3}\log f(x|\theta)\right|\le M(x)\qquad(3.15)$$

for all x ∈ A, θ0 − c < θ < θ0 + c, with

$$E_{\theta_0}[M(X)]<\infty.\qquad(3.16)$$

Then, any consistent sequence θ̂n = θ̂n(X1, ..., Xn) of roots of the likelihood equation satisfies

$$\sqrt n\,(\hat\theta_n-\theta)\xrightarrow{L}N\!\left(0,\frac1{I(\theta)}\right).\qquad(3.17)$$

We shall call such a sequence θ̂n an efficient likelihood estimator (ELE) of θ. It is typically (but need not be; see Example 4.1) provided by the MLE. Note also that any sequence θ̂n satisfying (3.17) is asymptotically efficient in the sense of Definition 2.4.

Proof of Theorem 3.10. For any fixed x, expand l′(θ̂n) about θ0:

$$l'(\hat\theta_n)=l'(\theta_0)+(\hat\theta_n-\theta_0)l''(\theta_0)+\tfrac12(\hat\theta_n-\theta_0)^2l'''(\theta^*_n),$$

where θ*n lies between θ0 and θ̂n. By assumption, the left side is zero, so that

$$\sqrt n\,(\hat\theta_n-\theta_0)=\frac{(1/\sqrt n)\,l'(\theta_0)}{-(1/n)\,l''(\theta_0)-(1/2n)(\hat\theta_n-\theta_0)\,l'''(\theta^*_n)},$$

where it should be remembered that l(θ), l′(θ), and so on are functions of X as well as of θ. We shall show that

$$\frac1{\sqrt n}l'(\theta_0)\xrightarrow{L}N[0,I(\theta_0)],\qquad(3.18)$$

that

$$-\frac1nl''(\theta_0)\xrightarrow{P}I(\theta_0),\qquad(3.19)$$

and that

$$\frac1nl'''(\theta^*_n)\ \text{is bounded in probability}.\qquad(3.20)$$

The desired result then follows from Theorem 1.8.10.

Of the above statements, (3.18) follows from the fact that

$$\frac1{\sqrt n}l'(\theta_0)=\sqrt n\cdot\frac1n\sum\left[\frac{f'(X_i|\theta_0)}{f(X_i|\theta_0)}-E_{\theta_0}\frac{f'(X_i|\theta_0)}{f(X_i|\theta_0)}\right],$$

since the expectation term is zero, and then from the central limit theorem (CLT) and the definition of I(θ).

Next, (3.19) follows because

$$-\frac1nl''(\theta_0)=\frac1n\sum\frac{f'^2(X_i|\theta_0)-f(X_i|\theta_0)f''(X_i|\theta_0)}{f^2(X_i|\theta_0)},$$

and, by the law of large numbers, this tends in probability to

$$I(\theta_0)-E_{\theta_0}\frac{f''(X_i|\theta_0)}{f(X_i|\theta_0)}=I(\theta_0).$$

Finally, (3.20) is established by noting that

$$\frac1nl'''(\theta)=\frac1n\sum\frac{\partial^3}{\partial\theta^3}\log f(X_i|\theta),$$

so that by (3.15),

$$\left|\frac1nl'''(\theta^*_n)\right|<\frac1n[M(X_1)+\cdots+M(X_n)]$$

with probability tending to 1. The right side tends in probability to Eθ0[M(X)], and this completes the proof. ✷

Although the conclusions of Theorem 3.10 are quite far-reaching, the proof is remarkably easy. The reason is that Theorem 3.7 already puts θ̂n into the neighborhood of the true value θ0, so that an expansion about θ0 essentially linearizes the problem and thereby prepares the way for application of the central limit theorem.
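The limit law (3.17) can be checked directly in a model where the likelihood equation has an explicit root. A minimal Python sketch (illustrative, not from the text), using the Poisson(θ) model in which the root is θ̂n = X̄ and I(θ) = 1/θ:

```python
import numpy as np

rng = np.random.default_rng(6)
theta, n, reps = 3.0, 400, 20000

# Poisson(theta): l'(theta) = sum(x_i)/theta - n, so the root is Xbar,
# and I(theta) = 1/theta; (3.17) predicts variance 1/I(theta) = theta.
theta_hat = rng.poisson(theta, size=(reps, n)).mean(axis=1)
z = np.sqrt(n) * (theta_hat - theta)
print("empirical variance:", round(float(z.var()), 3))
print("1/I(theta)        :", theta)
```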
Corollary 3.11 Under the assumptions of Theorem 3.10, if the likelihood equation has a unique root for all n and x, and more generally if the probability of multiple roots tends to zero as n → ∞, then the MLE is asymptotically efficient.

To establish the assumptions of Theorem 3.10, one must verify the following two conditions, which may not be obvious:

(a) that ∫ f(x|θ) dμ(x) can be differentiated twice with respect to θ by differentiating under the integral sign;

(b) that the third derivative is uniformly bounded by an integrable function [see (3.15)].

Conditions under which (a) holds are given in books on calculus (see also Casella and Berger 1990, Section 2.4), although it is often easier simply to calculate the difference quotient and pass to the limit. Condition (b) is usually easy to check after realizing that it is not necessary for (3.15) to hold for all θ: it is enough if there exist θ1 < θ0 < θ2 such that (3.15) holds for all θ1 ≤ θ ≤ θ2.

Example 3.12 One-parameter exponential family. Let X1, ..., Xn be iid according to a one-parameter exponential family with density

$$f(x_i|\eta)=e^{\eta T(x_i)-A(\eta)}\qquad(3.21)$$

with respect to a σ-finite measure μ, and let the estimand be η. The likelihood equation is

$$\frac1n\sum T(x_i)=A'(\eta),\qquad(3.22)$$

which, by (1.5.14), is equivalent to

$$E_\eta[T(X_j)]=\frac1n\sum T(x_i).\qquad(3.23)$$

The left side of (3.23) is a strictly increasing function of η since, by (1.5.15),

$$\frac d{d\eta}E_\eta[T(X_j)]=\operatorname{var}_\eta T(X_j)>0.$$

It follows that Equation (3.23) has at most one solution. The conditions of Theorem 3.10 are easily checked in the present case. In particular, condition (a) follows from Theorem 1.5.8 and (b) from the fact that the third derivative of log f(x|η) is independent of x and a continuous function of η. With probability tending to 1, (3.23) therefore has a solution η̂. This solution is unique, consistent, and asymptotically efficient, so that

$$\sqrt n\,(\hat\eta-\eta)\xrightarrow{L}N\!\left(0,\frac1{\operatorname{var}T}\right),\qquad(3.24)$$

where T = T(X1) and the asymptotic variance follows from (2.5.18). ∥

Example 3.13 Truncated normal. As an illustration of the preceding example, consider a sample of n observations from a normal distribution N(ξ, 1), truncated at two fixed points a < b. The density of a single X is then

$$\frac{(1/\sqrt{2\pi})\exp[-\tfrac12(x-\xi)^2]}{\Phi(b-\xi)-\Phi(a-\xi)},\qquad a<x<b,$$

which satisfies (3.21) with η = ξ and T(x) = x. An ELE will therefore be the unique solution ξ̂ of Eξ(X) = x̄ if it exists. To see that this equation has a solution for any value a < x̄ < b, note that as ξ → −∞ or +∞, X tends in probability to a or b, respectively (Problem 3.12). Since X is bounded, this implies that Eξ(X) also tends to a or b. Since Eξ(X) is continuous, the existence of ξ̂ follows. ∥
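The equation Eξ(X) = x̄ of Example 3.13 is easily solved numerically. A minimal Python sketch (illustrative constants; it assumes SciPy and uses norm.sf for numerical stability in the tails):

```python
import numpy as np
from scipy.stats import norm
from scipy.optimize import brentq

rng = np.random.default_rng(7)
a, b, xi_true, n = -1.0, 2.0, 0.5, 500

# sample from N(xi, 1) truncated to (a, b) by rejection
x = rng.normal(xi_true, 1.0, size=20 * n)
x = x[(x > a) & (x < b)][:n]
xbar = x.mean()

def mean_trunc(xi):
    # E_xi(X) for N(xi, 1) truncated to (a, b)
    alpha, beta = a - xi, b - xi
    denom = norm.sf(alpha) - norm.sf(beta)  # = Phi(beta) - Phi(alpha)
    return xi + (norm.pdf(alpha) - norm.pdf(beta)) / denom

xi_hat = brentq(lambda xi: mean_trunc(xi) - xbar, -8.0, 8.0)
print("xbar:", round(float(xbar), 3), "  ELE xi_hat:", round(xi_hat, 3))
```

Since Eξ(X) increases from a to b as ξ runs over the real line, the bracketing interval only needs to be wide enough to contain the sign change.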
For densities that are members of location or scale families, it is fairly straightforward to determine the existence and behavior of the MLE (see Problems 3.15–3.19). We turn to one last example, which is not covered by Theorem 3.10.

Example 3.14 Double exponential. For the double exponential density DE(θ, 1) given in Table 1.5.1, it is not true that for all (or almost all) x, f(x − θ) is differentiable with respect to θ, since for every x there exists a value (θ = x) at which the derivative does not exist. Despite this failure, the MLE (which is the median of the X's) satisfies the conclusion of Theorem 3.10 and is asymptotically normal with variance 1/n (see Problem 3.25). This was established by Daniels (1961), who proved a general theorem, not requiring differentiability of the density, that was motivated by this problem. (See Note 10.2.) ∥

4 Likelihood Estimation: Multiple Roots

When the likelihood equation has multiple roots, the assumptions of Theorem 3.10 are no longer sufficient to guarantee consistency of the MLE, even when it exists for all n. This is shown by the following example due to Le Cam (1979b, 1990), which is obtained by embedding the sequence {fk} of Example 3.6 in a sufficiently smooth continuous-parameter family.

Example 4.1 Continuation of Example 3.6. For k ≤ θ < k + 1, k = 1, 2, ..., let

$$f(x|\theta)=[1-u(\theta-k)]f_k(x)+u(\theta-k)f_{k+1}(x),\qquad(4.1)$$

with fk defined as in Example 3.6 and u defined on (−∞, ∞) so that u(x) = 0 for x ≤ 0, u(x) = 1 for x ≥ 1, and u is strictly increasing on (0, 1) and infinitely differentiable on (−∞, ∞) (Problem 4.1). Let X1, ..., Xn be iid, each with density f(x|θ), and let p(x|θ) = ∏ f(xi|θ). Since for any given x, the density p(x|θ) is bounded and continuous in θ, is equal to cⁿ for all sufficiently large θ, and is greater than cⁿ for some θ, it takes on its maximum at some finite θ, and the MLE θ̂n therefore exists.

To see that θ̂n → ∞ in probability, note that for k ≤ θ < k + 1,

$$p(x|\theta)\le\prod\max[f_k(x_i),f_{k+1}(x_i)]=\bar p_k(x).\qquad(4.2)$$

If K̂n and K*n are defined as in Example 3.6, the argument of that example shows that it is enough to prove that for any fixed j,

$$P[p_{K^*_n}(X)>\bar p_j(X)]\to1\quad\text{as }n\to\infty,\qquad(4.3)$$

where pk(x) = p(x|k). Now,

$$\bar L_{jk}=\log\frac{p_k(x)}{\bar p_j(x)}=\sum{}^{(1)}\log\frac{h(x_i)}c-\sum{}^{(2)}\log\frac{h(x_i)}c-\sum{}^{(3)}\log\frac{h(x_i)}c,$$

where Σ^{(1)}, Σ^{(2)}, and Σ^{(3)} extend over all i for which xi ∈ Ik, xi ∈ Ij, and xi ∈ I_{j+1}, respectively. The argument is now completed as before to show that θ̂n → ∞ in probability regardless of the true value of θ, so that the MLE is not consistent.

The example is not yet completely satisfactory since ∂f(x|θ)/∂θ = 0 and, hence, I(θ) = 0 for θ = 1, 2, .... [The remaining conditions of Theorem 3.10 are easily checked (Problem 4.2).] To remove this difficulty, define

$$g(x|\theta)=\tfrac12\big[f(x|\theta)+f(x|\theta+\alpha e^{-\theta^2})\big],\qquad\theta\ge1,\qquad(4.4)$$

for some fixed α < 1. If X1, ..., Xn are iid according to g(x|θ), we shall now show that the MLE θ̂n continues to tend to infinity for any fixed θ. We have, as before,

$$P[\hat\theta_n>k]\ge P\Big[\prod g(x_i|K^*_n)>\prod g(x_i|\theta)\ \text{for all }\theta\le k\Big]\ge P\left[\frac1{2^n}\prod f(x_i|K^*_n)>\prod\tfrac12\big[f(x_i|\theta)+f(x_i|\theta+\alpha e^{-\theta^2})\big]\right].$$

For j ≤ θ < j + 1, it is seen from (4.1) that [f(xi|θ) + f(xi|θ + αe^{−θ²})]/2 is a weighted average of fj(xi), f_{j+1}(xi), and possibly f_{j+2}(xi). With p̄j(x) redefined as ∏ max[fj(xi), f_{j+1}(xi), f_{j+2}(xi)], the proof can now be completed as before. Since the densities g(x|θ) satisfy the conditions of Theorem 3.10 (Problem 4.3), these conditions are therefore not enough to ensure the consistency of the MLE. (For another example, see Ferguson 1982.) ∥

Even under the assumptions of Theorem 3.10, one is thus, in the case of multiple roots, still faced with the problem of identifying a consistent sequence of roots. Following are three possible approaches.

(a) In many cases, the maximum likelihood estimator is consistent. Conditions which ensure this were given by, among others, Wald (1949), Wolfowitz (1965), Le Cam (1953, 1955, 1970), Kiefer and Wolfowitz (1956), Kraft and Le Cam (1956), Bahadur (1967), and Perlman (1972). A survey of the literature can be found in Perlman (1983). This material is technically difficult, and even when the conditions are satisfied, the determination of the MLE may present problems (see Barnett 1966). We shall therefore turn to somewhat simpler alternatives.
The following two methods require that some sequence of consistent (but not necessarily efficient) estimators be available. In any given situation, it is usually easy to construct a consistent sequence, as will be illustrated below and in the next section.

(b) Suppose that δn is any consistent estimator of θ and that the assumptions of Theorem 3.10 hold. Then, the root θ̂n of the likelihood equation closest to δn (which exists by the proof of Theorem 3.7) is also consistent, and hence is efficient by Theorem 3.10. To see this, note that by Theorem 3.7, there exists a consistent sequence of roots, say θ*n. Since θ*n − δn → 0 in probability, so does θ̂n − δn.

The following approach, which does not require the determination of the closest root and in which the estimators are no longer exact roots of the likelihood equation, is often more convenient.

(c) The usual iterative methods for solving the likelihood equation

$$l'(\theta)=0\qquad(4.5)$$

are based on replacing the left side by the linear terms of its Taylor expansion about an approximate solution θ̃. If θ̂ denotes a root of (4.5), this leads to the approximation

$$0=l'(\hat\theta)\doteq l'(\tilde\theta)+(\hat\theta-\tilde\theta)l''(\tilde\theta),\qquad(4.6)$$

and hence to

$$\hat\theta\doteq\tilde\theta-\frac{l'(\tilde\theta)}{l''(\tilde\theta)}.\qquad(4.7)$$

The procedure is then iterated by replacing θ̃ by the value of the right side of (4.7), and so on. This is the Newton-Raphson iterative process. (For a discussion of the performance of this procedure, see, for example, Barnett 1966, Stuart and Ord 1991, Section 18.21, or Searle et al. 1992, Section 8.2.)

Here, we are concerned only with the first step and with the performance of the one-step approximation (4.7) as an estimator of θ. The following result gives conditions on θ̃ under which the resulting sequence of estimators is consistent, asymptotically normal, and efficient. It relies on the sequence of estimators possessing the following property.

Definition 4.2 A sequence of estimators δn is √n-consistent for θ if √n(δn − θ) is bounded in probability, that is, if δn − θ = Op(1/√n).

Theorem 4.3 Suppose that the assumptions of Theorem 3.10 hold and that θ̃n is not only a consistent but a √n-consistent estimator of θ. [A general method for constructing √n-consistent estimators is given by Le Cam (1969, p. 103); see also Bickel et al. (1993).] Then, the estimator sequence

$$\delta_n=\tilde\theta_n-\frac{l'(\tilde\theta_n)}{l''(\tilde\theta_n)}\qquad(4.8)$$

is asymptotically efficient; that is, it satisfies (3.17) with δn in place of θ̂n.

Proof. As in the proof of Theorem 3.10, expand l′(θ̃n) about θ0 as

$$l'(\tilde\theta_n)=l'(\theta_0)+(\tilde\theta_n-\theta_0)l''(\theta_0)+\tfrac12(\tilde\theta_n-\theta_0)^2l'''(\theta^*_n),$$

where θ*n lies between θ0 and θ̃n. Substituting this expression into (4.8) and simplifying, we find

$$\sqrt n\,(\delta_n-\theta_0)=\frac{(1/\sqrt n)\,l'(\theta_0)}{-(1/n)\,l''(\tilde\theta_n)}+\sqrt n\,(\tilde\theta_n-\theta_0)\left[1-\frac{l''(\theta_0)}{l''(\tilde\theta_n)}-\frac12(\tilde\theta_n-\theta_0)\frac{l'''(\theta^*_n)}{l''(\tilde\theta_n)}\right].\qquad(4.9)$$

The result now follows from the following facts:

(a) (1/√n) l′(θ0) / [−(1/n) l″(θ0)] →L N(0, I⁻¹(θ0)) [(3.18) and (3.19)];

(b) √n(θ̃n − θ0) = Op(1) [assumption];

(c) l‴(θ*n)/l″(θ̃n) = Op(1) [(3.19) and (3.20)];

(d) l″(θ̃n)/l″(θ0) → 1 in probability [see below].

Here, (d) follows from the fact that

$$\frac1nl''(\tilde\theta_n)=\frac1nl''(\theta_0)+\frac1n(\tilde\theta_n-\theta_0)l'''(\theta^{**}_n)\qquad(4.10)$$

for some θ**n between θ0 and θ̃n; now (3.19), (3.20), and the consistency of θ̃n applied to (4.10) imply (d). In turn, (b)-(d) show that the entire second term of (4.9) converges to zero in probability, and (a) shows that the first term has the correct limit distribution. ✷
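As an illustration of the one-step estimator (4.8), the following Python sketch (not from the text; the Cauchy location model and all constants are illustrative) starts from the sample median, which is √n-consistent here, and takes a single Newton step:

```python
import numpy as np

rng = np.random.default_rng(8)
theta0, n = 1.0, 2000
x = theta0 + rng.standard_cauchy(n)

def derivs(theta):
    # log f(x|theta) = -log(pi) - log(1 + (x - theta)^2)
    u = x - theta
    lp = (2*u / (1 + u**2)).sum()                 # l'(theta)
    lpp = (2*(u**2 - 1) / (1 + u**2)**2).sum()    # l''(theta)
    return lp, lpp

theta_tilde = np.median(x)            # sqrt(n)-consistent starting point
lp, lpp = derivs(theta_tilde)
delta_n = theta_tilde - lp / lpp      # one-step estimator (4.8)
print("median:", round(float(theta_tilde), 4), " one-step:", round(float(delta_n), 4))
# asymptotic variances: median ~ (pi^2/4)/n, one-step ~ 1/(n I) = 2/n
```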
Corollary 4.4 Suppose that the assumptions of Theorem 4.3 hold and that the Fisher information I(θ) is a continuous function of θ. Then, the estimator

$$\delta'_n=\tilde\theta_n+\frac{l'(\tilde\theta_n)}{nI(\tilde\theta_n)}\qquad(4.11)$$

is also asymptotically efficient.

Proof. By (d) in the proof of Theorem 4.3, condition (h) of Theorem 2.6, and the law of large numbers, −(1/n)l″(θ̃n) → I(θ0) in probability. Also, since I(θ) is continuous, I(θ̃n) → I(θ0) in probability, so that −(1/n)l″(θ̃n)/I(θ̃n) → 1 in probability, and this completes the proof. ✷

The estimators (4.8) and (4.11) are compared by Stuart (1958), who gives a heuristic argument for why (4.11) might be expected to be closer to the ELE than (4.8) and provides a numerical example supporting this argument. See also Efron and Hinkley (1978) and Lindsay and Yi (1996).

Example 4.5 Location parameter. Consider the case of a symmetric location family, with density f(x − θ), in which the likelihood equation

$$\sum\frac{f'(x_i-\theta)}{f(x_i-\theta)}=0\qquad(4.12)$$

has multiple roots. [For the Cauchy distribution, for example, it has been shown by Reeds (1985) that if (4.12) has K + 1 roots, then as n → ∞, K tends in law to a Poisson distribution with expectation 1/π. The Cauchy case has also been considered by Barnett (1966) and Bai and Fu (1987).] If var(X) < ∞, it follows from the CLT that the sample mean X̄n is √n-consistent, and an asymptotically efficient estimator of θ is therefore provided by (4.8) or (4.11) with θ̃n = X̄n, as long as f(x − θ) satisfies the conditions of Theorem 3.10. For distributions such as the Cauchy for which E(X²) = ∞, one can instead take for θ̃n the sample median, provided f(0) > 0; other robust estimators provide still further possibilities (see, for example, Huber 1973, 1981, or Haberman 1989). ∥

Example 4.6 Grouped or censored observations. Suppose that X1, ..., Xn are iid according to a location family with cdf F(x − θ), with F known and 0 < F(x) < 1 for all x, but that it is only observed whether each Xi falls below a, between a and b, or above b, where a < b are two given constants. The n observations constitute n trinomial trials with probabilities

$$p_1(\theta)=F(a-\theta),\quad p_2(\theta)=F(b-\theta)-F(a-\theta),\quad p_3(\theta)=1-F(b-\theta)$$

for the three outcomes. If V denotes the number of observations less than a, then

$$\sqrt n\left(\frac Vn-p_1\right)\xrightarrow{L}N[0,p_1(1-p_1)]\qquad(4.13)$$

and, by Theorem 1.8.12,

$$\tilde\theta_n=a-F^{-1}\!\left(\frac Vn\right)\qquad(4.14)$$

is a √n-consistent estimator of θ. Since this estimator is not defined when V = 0 or V = n, some special definition has to be adopted in these cases, whose probability, however, tends to zero as n → ∞.

If the trinomial distribution for a single trial satisfies the assumptions of Theorem 3.10, as will be the case under mild assumptions on F, the estimator (4.8) is asymptotically efficient (but see the comment following Example 7.15). The approach applies, of course, equally to the case of more than three groups.

A very similar situation arises when the X's are censored, say at a fixed point a. For example, they might be lengths of life of light bulbs or patients, with observation discontinued at time a. The observations can then be represented as

$$Y_i=\begin{cases}X_i&\text{if }X_i<a\\ a&\text{if }X_i\ge a.\end{cases}\qquad(4.15)$$

Here, the value a of Yi when Xi ≥ a has no significance; it simply indicates that the value of Xi is ≥ a. The Y's are then iid with density

$$g(y|\theta)=\begin{cases}f(y-\theta)&\text{if }y<a\\ 1-F(a-\theta)&\text{if }y=a\end{cases}\qquad(4.16)$$

with respect to the measure μ which is Lebesgue measure on (−∞, a) and assigns measure 1 to the point y = a.
The estimator (4.14) continues to be √n-consistent in the present situation. An alternative starting point is, for example, the best linear combination of the ordered X's less than a (see, for example, Chan 1967). ∥

Example 4.7 Mixtures. Let X1, ..., Xn be a sample from a distribution θG + (1 − θ)H, 0 < θ < 1, where G and H are two specified distributions with densities g and h. The log likelihood of a single observation is a concave function of θ, and so therefore is the log likelihood of a sample (Problem 4.5). It follows that the likelihood equation has at most one solution. [The asymptotic performance of the ML estimator is studied by Hill (1963).]

Even when the root is unique, as it is here, Theorem 4.3 provides an alternative which may be more convenient than the MLE. In the mixture problem, as in many other cases, a √n-consistent estimator can be obtained by the method of moments, which consists in equating the first k moments of X to the corresponding sample moments, say

$$E_\theta(X_i^r)=\frac1n\sum_{j=1}^nX_j^r,\qquad r=1,\dots,k,\qquad(4.17)$$

where k is the number of unknown parameters. (For further discussion, see, for example, Cramér 1946a, Section 33.1, and Serfling 1980, Section 4.3.1.)

In the present case, suppose that E(Xi) = ξ or η when X is distributed as G or H, respectively, where η ≠ ξ and G and H have finite variance. Since k = 1, the method of moments estimates θ as the solution of the equation ξθ + η(1 − θ) = X̄n, and hence by

$$\tilde\theta_n=\frac{\bar X_n-\eta}{\xi-\eta}.$$

[If η = ξ but the second moments of Xi under G and H differ, one can instead equate E(X²_i) with ΣX²_j/n (Problem 4.6).] An asymptotically efficient estimator is then provided by (4.8).

Estimation under a mixture distribution provides interesting challenges and has many applications in practice. There is a large literature on mixtures; entry can be found through the books by Everitt and Hand (1981), Titterington et al. (1985), McLachlan and Basford (1988), and McLachlan (1997). ∥

In the context of choosing a √n-consistent estimator θ̃n for (4.8), it is of interest to note that in sufficiently regular situations, good efficiency of θ̃n is equivalent to high correlation with θ̂n. This is made precise by the following result, which is concerned only with first-order approximations.

Theorem 4.8 Suppose θ̂n is an ELE and θ̃n a √n-consistent estimator for which the joint distribution of Tn = √n(θ̂n − θ) and T′n = √n(θ̃n − θ) tends to a bivariate limit distribution H with zero means and covariance matrix Σ = ‖σij‖. Let (T, T′) have distribution H, and suppose that the means and covariance matrix of (Tn, T′n) tend toward those of (T, T′) as n → ∞. Then,

$$\frac{\operatorname{var}T}{\operatorname{var}T'}=\rho^2,\qquad(4.18)$$

where ρ = σ12/√(σ11σ22) is the correlation coefficient of (T, T′).

Proof. Consider var[(1 − α)Tn + αT′n], which tends to

$$\operatorname{var}[(1-\alpha)T+\alpha T']=(1-\alpha)^2\sigma_{11}+2\alpha(1-\alpha)\sigma_{12}+\alpha^2\sigma_{22}.\qquad(4.19)$$

This is non-negative for all values of α and takes on its minimum at α = 0, since θ̂n is asymptotically efficient. Evaluating the derivative of (4.19) at α = 0 shows that we must have σ11 = σ12 (Problem 4.7). Thus, ρ = √(σ11/σ22), as was to be proved. ✷

The ratio of the asymptotic variances in (4.18) is a special case of asymptotic relative efficiency (ARE); see Definition 6.6.

In Examples 4.6 and 4.7, we used the method of moments to obtain √n-consistent estimators and then applied the one-step estimator (4.8) or (4.11).
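For the mixture model of Example 4.7, both steps are short to implement. A minimal Python sketch (illustrative, not from the text: G and H are taken as N(0, 1) and N(2, 1), so ξ = 0 and η = 2):

```python
import numpy as np

rng = np.random.default_rng(9)
theta_true, n, xi, eta = 0.3, 5000, 0.0, 2.0

from_g = rng.random(n) < theta_true
x = np.where(from_g, rng.normal(xi, 1, n), rng.normal(eta, 1, n))

phi = lambda t: np.exp(-0.5 * t**2) / np.sqrt(2 * np.pi)
g, h = phi(x - xi), phi(x - eta)     # component densities at the data

theta_tilde = (x.mean() - eta) / (xi - eta)   # method of moments

# f = theta*g + (1-theta)*h, so l' = sum((g-h)/f) and l'' = -sum(((g-h)/f)^2)
f = theta_tilde * g + (1 - theta_tilde) * h
lp = ((g - h) / f).sum()
lpp = -(((g - h) / f)**2).sum()

one_step = theta_tilde - lp / lpp             # (4.8)
print("moments:", round(float(theta_tilde), 4), " one-step:", round(float(one_step), 4))
```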
An alternative approach, when the direct calculation of an ELE is difficult, is the following expectation-maximization (EM) algorithm for obtaining a stationary point of the likelihood.

The idea behind the EM algorithm is to replace one computationally difficult likelihood maximization with a sequence of easier maximizations whose limit is the answer to the original problem. More precisely, let Y1, ..., Yn be iid with density g(y|θ), and suppose that the object is to compute the value θ̂ that maximizes L(θ|y) = ∏_{i=1}^n g(yi|θ). If L(θ|y) is difficult to work with, we can sometimes augment the data y = (y1, ..., yn) and create a new likelihood function L(θ|y, z) that has a simpler form.

Example 4.9 Censored data likelihood. Suppose that we observe Y1, ..., Yn, iid, with density (4.16), and we have ordered the observations so that (y1, ..., ym) are uncensored and (y_{m+1}, ..., yn) are censored (and equal to a). The likelihood function is then

$$L(\theta|y)=\prod_{i=1}^ng(y_i|\theta)=\prod_{i=1}^mf(y_i|\theta)\,[1-F(a-\theta)]^{n-m}.\qquad(4.20)$$

If we had observed the last n − m values, say z = (z_{m+1}, ..., zn), the likelihood would have had the simpler form

$$L(\theta|y,z)=\prod_{i=1}^mf(y_i|\theta)\prod_{i=m+1}^nf(z_i|\theta).$$

∥

More generally, the EM algorithm is useful when the density of interest, g(y|θ), can be expressed as

$$g(y|\theta)=\int_{\mathcal Z}f(y,z|\theta)\,dz\qquad(4.21)$$

for some simpler function f(y, z|θ). The z vector merely serves to simplify calculations, and its choice does not affect the value of the estimator. An illustration of a typical construction of the density f is the case of "filling in" missing data, for example, by turning an unbalanced data set into a balanced one.

Example 4.10 EM in a one-way layout. In a one-way layout (Example 3.4.9), suppose there are four treatments with the following data:

Treatment:   1     2     3     4
             y11   y12   y13   y14
             y21   y22   y23   y24
             z1    y32   z3    y34

where the yij's represent the observed data and the dummy variables z1 and z3 represent missing observations. Under the usual assumptions, the Yij's in treatment j are independently normally distributed as N(μ + αj, σ²). If we let θ = (μ, α1, ..., α4, σ²) and let nj denote the number of observations on treatment j, the incomplete-data likelihood is given by

$$L(\theta|y)=g(y|\theta)=\left(\frac1{\sqrt{2\pi\sigma^2}}\right)^{10}\exp\left[-\sum_{j=1}^4\sum_{i=1}^{n_j}(y_{ij}-\mu-\alpha_j)^2/2\sigma^2\right],$$

while the complete-data likelihood is

$$L(\theta|y,z)=f(y,z|\theta)=\left(\frac1{\sqrt{2\pi\sigma^2}}\right)^{12}\exp\left[-\sum_{j=1}^4\sum_{i=1}^3(y_{ij}-\mu-\alpha_j)^2/2\sigma^2\right],$$

where y31 = z1 and y33 = z3. By integrating out z1 and z3, the original likelihood is recovered. Although estimation in the original problem (with only the yij's) is not difficult, it is easier in the augmented problem. [The computational advantage of the EM algorithm becomes more obvious as we move to higher-order designs, for example, the two-way layout (see Problem 4.14).] ∥

The EM algorithm is often useful for obtaining an MLE when, as in Example 4.10, we should like to maximize L(θ|y), but it would be much easier to work with the joint density f(y, z|θ) = L(θ|y, z) and the conditional density of Z given y, that is, with L(θ|y, z) and

$$k(z|\theta,y)=\frac{f(y,z|\theta)}{g(y|\theta)}.\qquad(4.22)$$

These quantities are related by the identity

$$\log L(\theta|y)=\log L(\theta|y,z)-\log k(z|\theta,y).\qquad(4.23)$$

Since z is not available, we replace the right side of (4.23) with its expectation under the conditional distribution of Z given y. With an initial guess θ0 (to start the iterations), we define

$$Q(\theta|\theta_0,y)=\int\log L(\theta|y,z)\,k(z|\theta_0,y)\,dz,\qquad(4.24)$$

$$H(\theta|\theta_0,y)=\int\log k(z|\theta,y)\,k(z|\theta_0,y)\,dz.$$
As the left side of (4.23) does not depend on z, taking this expectation yields

$$\log L(\theta|y)=Q(\theta|\theta_0,y)-H(\theta|\theta_0,y).\qquad(4.25)$$

Let the value of θ maximizing Q(θ|θ0, y) be θ̂^{(1)}. The process is then repeated with θ0 in (4.24) and (4.22) replaced by the updated value θ̂^{(1)}, so that (4.24) is replaced by Q(θ|θ̂^{(1)}, y). In this manner, a sequence of estimators θ̂^{(j)}, j = 1, 2, ..., is obtained iteratively, where θ̂^{(j)} is defined as the value of θ maximizing Q(θ|θ̂^{(j−1)}, y), that is,

$$Q(\hat\theta^{(j)}|\hat\theta^{(j-1)},y)=\max_\theta Q(\theta|\hat\theta^{(j-1)},y).\qquad(4.26)$$

[This is sometimes written θ̂^{(j)} = argmax_θ Q(θ|θ̂^{(j−1)}, y); that is, θ̂^{(j)} is the value of the argument θ that maximizes Q.] The quantities log L(θ|y), log L(θ|y, z), and Q(θ|θ0, y) are referred to as the incomplete, complete, and expected log likelihood, respectively. The term EM for this algorithm stands for Expectation-Maximization, since the jth step of the iteration consists of calculating the expectation (4.24), with θ0 replaced by θ̂^{(j−1)}, and then maximizing it.

The following is a key property of the sequence {θ̂^{(j)}}.

Theorem 4.11 The sequence {θ̂^{(j)}} defined by (4.26) satisfies

$$L(\hat\theta^{(j+1)}|y)\ge L(\hat\theta^{(j)}|y),\qquad(4.27)$$

with equality holding if and only if Q(θ̂^{(j+1)}|θ̂^{(j)}, y) = Q(θ̂^{(j)}|θ̂^{(j)}, y).

Proof. On successive iterations, the difference of the logarithms of the two sides of (4.27) is, by (4.25),

$$\log L(\hat\theta^{(j+1)}|y)-\log L(\hat\theta^{(j)}|y)=\big[Q(\hat\theta^{(j+1)}|\hat\theta^{(j)},y)-Q(\hat\theta^{(j)}|\hat\theta^{(j)},y)\big]-\big[H(\hat\theta^{(j+1)}|\hat\theta^{(j)},y)-H(\hat\theta^{(j)}|\hat\theta^{(j)},y)\big].\qquad(4.28)$$

The first bracketed expression in (4.28) is non-negative by the definition of θ̂^{(j+1)}. It remains to show that the second is non-positive, that is, that

$$\int\big[\log k(z|\hat\theta^{(j+1)},y)-\log k(z|\hat\theta^{(j)},y)\big]\,k(z|\hat\theta^{(j)},y)\,dz\le0.\qquad(4.29)$$

Since the difference of the logarithms is the logarithm of the ratio, this integral can be written as

$$\int\log\left[\frac{k(z|\hat\theta^{(j+1)},y)}{k(z|\hat\theta^{(j)},y)}\right]k(z|\hat\theta^{(j)},y)\,dz\le\log\int k(z|\hat\theta^{(j+1)},y)\,dz=0.\qquad(4.30)$$

The inequality follows from Jensen's inequality [see Example 1.7.7, Inequality (1.7.13), and Problem 4.17], and this completes the proof. ✷

Although Theorem 4.11 guarantees that the likelihood will increase at each iteration, we still may not be able to conclude that the sequence {θ̂^{(j)}} converges to a maximum likelihood estimator. To ensure convergence, we require further conditions on the mapping θ̂^{(j)} → θ̂^{(j+1)}. These conditions are investigated by Boyles (1983) and Wu (1983); see also Finch et al. (1989). The following theorem gives, perhaps, the most easily applicable condition guaranteeing convergence to a stationary point, which may be a local maximum or a saddlepoint.

Theorem 4.12 If the expected complete-data log likelihood Q(θ|θ0, y) is continuous in both θ and θ0, then all limit points of an EM sequence {θ̂^{(j)}} are stationary points of L(θ|y), and L(θ̂^{(j)}|y) converges monotonically to L(θ̂|y) for some stationary point θ̂.

Example 4.13 Continuation of Example 4.9. The situation of Example 4.9 does not quite fit the conditions under which the EM algorithm was described above, since the observations y_{m+1}, ..., yn are not missing completely but only partially. (We know that they are ≥ a.) However, the situation reduces to the earlier one if we simply ignore y_{m+1}, ..., yn, so that y now stands for (y1, ..., ym). To be specific, let the density f(y|θ) of (4.16) be the N(θ, 1) density, so that the likelihood function (4.20) becomes

$$L(\theta|y)=\frac1{(2\pi)^{m/2}}e^{-\frac12\sum_{i=1}^m(y_i-\theta)^2}.$$

We replace y_{m+1}, ..., yn with n − m phantom variables z = (z1, ..., z_{n−m}), which are distributed as n − m iid variables from the conditional normal distribution given that they are all ≥ a; thus, for zi ≥ a, i = 1, ..., n − m,

$$k(z|\theta,y)=\frac{(2\pi)^{-(n-m)/2}\exp\left[-\frac12\sum_{i=1}^{n-m}(z_i-\theta)^2\right]}{[1-\Phi(a-\theta)]^{n-m}}.$$

At the jth step in the EM sequence, we have

$$Q(\theta|\hat\theta^{(j)},y)\propto-\frac12\sum_{i=1}^m(y_i-\theta)^2-\frac12\sum_{i=1}^{n-m}\int_a^\infty(z_i-\theta)^2k(z|\hat\theta^{(j)},y)\,dz_i,$$

and differentiating with respect to θ yields

$$m(\bar y-\theta)+(n-m)\big[E(Z|\hat\theta^{(j)})-\theta\big]=0,$$

or

$$\hat\theta^{(j+1)}=\frac{m\bar y+(n-m)E(Z|\hat\theta^{(j)})}n,$$

where

$$E(Z|\hat\theta^{(j)})=\int_a^\infty z\,k(z|\hat\theta^{(j)},y)\,dz=\hat\theta^{(j)}+\frac{\varphi(a-\hat\theta^{(j)})}{1-\Phi(a-\hat\theta^{(j)})}.$$

Thus, the EM sequence is defined by

$$\hat\theta^{(j+1)}=\frac mn\bar y+\frac{n-m}n\left[\hat\theta^{(j)}+\frac{\varphi(a-\hat\theta^{(j)})}{1-\Phi(a-\hat\theta^{(j)})}\right],$$

which converges to the MLE θ̂ (Problem 4.8). ∥
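The update just derived is a one-line iteration. A minimal Python sketch (illustrative constants, not from the text; it assumes SciPy for the normal cdf and pdf):

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(10)
theta_true, a, n = 1.0, 1.5, 500

x = rng.normal(theta_true, 1.0, n)
y = np.minimum(x, a)                    # censoring at a, as in (4.15)
uncensored = y < a
m, ybar = int(uncensored.sum()), y[uncensored].mean()

theta = ybar                            # crude starting value
for j in range(200):
    # the EM update of Example 4.13
    ez = theta + norm.pdf(a - theta) / (1 - norm.cdf(a - theta))
    theta_new = (m * ybar + (n - m) * ez) / n
    if abs(theta_new - theta) < 1e-10:
        break
    theta = theta_new
print("EM estimate:", round(float(theta), 4), "after", j + 1, "iterations")
```

Each iteration increases the censored-data likelihood (Theorem 4.11), and the fixed point satisfies the likelihood equation.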
Quite generally, in an exponential family, computations are somewhat simplified because we can write

$$Q(\theta|\hat\theta^{(j)},y)=E_{\hat\theta^{(j)}}\big[\log L(\theta|y,Z)\mid y\big]=E_{\hat\theta^{(j)}}\Big[\log\big(h(y,Z)\,e^{\sum\eta_i(\theta)T_i-B(\theta)}\big)\mid y\Big]=E_{\hat\theta^{(j)}}\big[\log h(y,Z)\mid y\big]+\sum\eta_i(\theta)E_{\hat\theta^{(j)}}\big[T_i\mid y\big]-B(\theta).$$

Thus, calculating the complete-data MLE only involves the simpler expectation E_{θ̂^{(j)}}[Ti|y].

The books by Little and Rubin (1987), Tanner (1996), and McLachlan and Krishnan (1997) provide good overviews of the EM literature. Other references include Louis (1982), Laird et al. (1987), Meng and Rubin (1993), Smith and Roberts (1993), and Liu and Rubin (1994).

5 The Multiparameter Case

In the preceding sections, asymptotically efficient estimators were obtained when the distribution depends on a single parameter θ. When extending this theory to probability models involving several parameters θ1, ..., θs, one may be interested either in the simultaneous estimation of these parameters (or certain functions of them) or in the estimation of one parameter at a time, the remaining parameters then playing the role of nuisance or incidental parameters. In the present section, we shall primarily take the latter point of view.

Let X1, ..., Xn be iid with a distribution that depends on θ = (θ1, ..., θs) and satisfies assumptions (A0)–(A3) of Section 6.3. For the time being, we shall assume s to be fixed. Suppose we wish to estimate θj. Then, it was seen in Section 2.6 that the variance of any unbiased estimator δn of θj, based on n observations, satisfies the inequality

$$\operatorname{var}(\delta_n)\ge\frac{[I(\theta)]^{-1}_{jj}}n,\qquad(5.1)$$

where the numerator on the right side is the jjth element of the inverse of the information matrix I(θ) with elements

$$I_{jk}(\theta)=\operatorname{cov}\left[\frac\partial{\partial\theta_j}\log f(X|\theta),\ \frac\partial{\partial\theta_k}\log f(X|\theta)\right],\qquad j,k=1,\dots,s.\qquad(5.2)$$

It was further shown by Bahadur (1964), under conditions analogous to those of Theorem 2.6, that for any sequence of estimators δn of θj satisfying

$$\sqrt n\,(\delta_n-\theta_j)\xrightarrow{L}N[0,v(\theta)],\qquad(5.3)$$

the asymptotic variance v satisfies

$$v(\theta)\ge[I(\theta)]^{-1}_{jj},\qquad(5.4)$$

except on a set of values θ having measure zero.

We shall now show, under assumptions generalizing those of Theorem 3.10, that with probability tending to 1, there exist solutions θ̂n = (θ̂1n, ..., θ̂sn) of the likelihood equations

$$\frac\partial{\partial\theta_j}\big[f(x_1|\theta)\cdots f(x_n|\theta)\big]=0,\qquad j=1,\dots,s,\qquad(5.5)$$

or, equivalently,

$$\frac\partial{\partial\theta_j}l(\theta)=0,\qquad j=1,\dots,s,\qquad(5.6)$$

such that θ̂jn is consistent for estimating θj and asymptotically efficient in the sense of satisfying (5.3) with

$$v(\theta)=[I(\theta)]^{-1}_{jj}.\qquad(5.7)$$
Quite generally, in an exponential family, computations are somewhat simplified because we can write
\[
Q(\theta|\hat\theta^{(j)}, y) = E_{\hat\theta^{(j)}}\bigl[\log L(\theta|y, Z)\,\big|\,y\bigr]
= E_{\hat\theta^{(j)}}\bigl[\log\bigl(h(y, Z)\, e^{\sum_i \eta_i(\theta) T_i - B(\theta)}\bigr)\,\big|\,y\bigr]
= E_{\hat\theta^{(j)}}\bigl[\log h(y, Z)\,\big|\,y\bigr] + \sum_i \eta_i(\theta)\, E_{\hat\theta^{(j)}}\bigl[T_i\,\big|\,y\bigr] - B(\theta).
\]
Thus, calculating the complete-data MLE only involves the simpler expectation $E_{\hat\theta^{(j)}}[T_i|y]$.

The books by Little and Rubin (1987), Tanner (1996), and McLachlan and Krishnan (1997) provide good overviews of the EM literature. Other references include Louis (1982), Laird et al. (1987), Meng and Rubin (1993), Smith and Roberts (1993), and Liu and Rubin (1994).

5 The Multiparameter Case

In the preceding sections, asymptotically efficient estimators were obtained when the distribution depends on a single parameter $\theta$. When extending this theory to probability models involving several parameters $\theta_1, \ldots, \theta_s$, one may be interested either in the simultaneous estimation of these parameters (or certain functions of them) or in the estimation of one of the parameters at a time, the remaining parameters then playing the role of nuisance or incidental parameters. In the present section, we shall primarily take the latter point of view.

Let $X_1, \ldots, X_n$ be iid with a distribution that depends on $\theta = (\theta_1, \ldots, \theta_s)$ and satisfies assumptions (A0)–(A3) of Section 6.3. For the time being, we shall assume $s$ to be fixed. Suppose we wish to estimate $\theta_j$. Then, it was seen in Section 2.6 that the variance of any unbiased estimator $\delta_n$ of $\theta_j$, based on $n$ observations, satisfies the inequality
\[
\operatorname{var}(\delta_n) \ge [I(\theta)]^{-1}_{jj}/n \tag{5.1}
\]
where the numerator on the right side is the $jj$th element of the inverse of the information matrix $I(\theta)$ with elements $I_{jk}(\theta)$, $j, k = 1, \ldots, s$, defined by
\[
I_{jk}(\theta) = \operatorname{cov}\left[\frac{\partial}{\partial\theta_j}\log f(X|\theta),\ \frac{\partial}{\partial\theta_k}\log f(X|\theta)\right]. \tag{5.2}
\]
It was further shown by Bahadur (1964), under conditions analogous to those of Theorem 2.6, that for any sequence of estimators $\delta_n$ of $\theta_j$ satisfying
\[
\sqrt{n}\,(\delta_n - \theta_j) \xrightarrow{L} N(0, v(\theta)), \tag{5.3}
\]
the asymptotic variance $v$ satisfies
\[
v(\theta) \ge [I(\theta)]^{-1}_{jj}, \tag{5.4}
\]
except on a set of values $\theta$ having measure zero.

We shall now show, under assumptions generalizing those of Theorem 3.10, that with probability tending to 1, there exist solutions $\hat\theta_n = (\hat\theta_{1n}, \ldots, \hat\theta_{sn})$ of the likelihood equations
\[
\frac{\partial}{\partial\theta_j}\bigl[f(x_1|\theta)\cdots f(x_n|\theta)\bigr] = 0, \quad j = 1, \ldots, s, \tag{5.5}
\]
or, equivalently,
\[
\frac{\partial}{\partial\theta_j}\, l(\theta) = 0, \quad j = 1, \ldots, s, \tag{5.6}
\]
such that $\hat\theta_{jn}$ is consistent for estimating $\theta_j$ and asymptotically efficient in the sense of satisfying (5.3) with
\[
v(\theta) = [I(\theta)]^{-1}_{jj}. \tag{5.7}
\]
We state first some assumptions:

(A) There exists an open subset $\omega$ of $\Omega$ containing the true parameter point $\theta^0$ such that for almost all $x$, the density $f(x|\theta)$ admits all third derivatives $(\partial^3/\partial\theta_j\partial\theta_k\partial\theta_l) f(x|\theta)$ for all $\theta \in \omega$.

(B) The first and second logarithmic derivatives of $f$ satisfy the equations
\[
E_\theta\left[\frac{\partial}{\partial\theta_j}\log f(X|\theta)\right] = 0 \quad \text{for } j = 1, \ldots, s \tag{5.8}
\]
and
\[
I_{jk}(\theta) = E_\theta\left[\frac{\partial}{\partial\theta_j}\log f(X|\theta) \cdot \frac{\partial}{\partial\theta_k}\log f(X|\theta)\right]
= E_\theta\left[-\frac{\partial^2}{\partial\theta_j\partial\theta_k}\log f(X|\theta)\right]. \tag{5.9}
\]
Clearly, (5.8) and (5.9) imply (5.2).

(C) Since the $s \times s$ matrix $I(\theta)$ is a covariance matrix, it is positive semidefinite. In generalization of condition (v) of Theorem 2.6, we shall assume that the $I_{jk}(\theta)$ are finite and that the matrix $I(\theta)$ is positive definite for all $\theta$ in $\omega$, and hence that the $s$ statistics
\[
\frac{\partial}{\partial\theta_1}\log f(X|\theta), \ \ldots, \ \frac{\partial}{\partial\theta_s}\log f(X|\theta)
\]
are affinely independent with probability 1.

(D) Finally, we shall suppose that there exist functions $M_{jkl}$ such that
\[
\left|\frac{\partial^3}{\partial\theta_j\partial\theta_k\partial\theta_l}\log f(x|\theta)\right| \le M_{jkl}(x) \quad \text{for all } \theta \in \omega,
\]
where $m_{jkl} = E_{\theta^0}[M_{jkl}(X)] < \infty$ for all $j, k, l$.

Theorem 5.1 Let $X_1, \ldots, X_n$ be iid, each with a density $f(x|\theta)$ (with respect to $\mu$) which satisfies (A0)–(A2) of Section 6.3 and assumptions (A)–(D) above. Then, with probability tending to 1 as $n \to \infty$, there exist solutions $\hat\theta_n = \hat\theta_n(X_1, \ldots, X_n)$ of the likelihood equations such that

(a) $\hat\theta_{jn}$ is consistent for estimating $\theta_j$,

(b) $\sqrt{n}(\hat\theta_n - \theta)$ is asymptotically normal with (vector) mean zero and covariance matrix $[I(\theta)]^{-1}$, and

(c) $\hat\theta_{jn}$ is asymptotically efficient in the sense that
\[
\sqrt{n}(\hat\theta_{jn} - \theta_j) \xrightarrow{L} N\{0, [I(\theta)]^{-1}_{jj}\}. \tag{5.10}
\]

Proof. (a) Existence and Consistency. To prove the existence, with probability tending to 1, of a sequence of solutions of the likelihood equations which is consistent, we shall consider the behavior of the log likelihood $l(\theta)$ on the sphere $Q_a$ with center at the true point $\theta^0$ and radius $a$. We will show that for any sufficiently small $a$, the probability tends to 1 that $l(\theta) < l(\theta^0)$ at all points $\theta$ on the surface of $Q_a$, and hence that $l(\theta)$ has a local maximum in the interior of $Q_a$. Since at a local maximum the likelihood equations must be satisfied, it will follow that for any $a > 0$, with probability tending to 1 as $n \to \infty$, the likelihood equations have a solution $\hat\theta_n(a)$ within $Q_a$, and the proof can be completed as in the one-dimensional case.

To obtain the needed facts concerning the behavior of the likelihood on $Q_a$ for small $a$, we expand the log likelihood about the true point $\theta^0$ and divide by $n$ to find
\[
\frac{1}{n} l(\theta) - \frac{1}{n} l(\theta^0)
= \frac{1}{n}\sum_j A_j(x)(\theta_j - \theta^0_j)
+ \frac{1}{2n}\sum_j\sum_k B_{jk}(x)(\theta_j - \theta^0_j)(\theta_k - \theta^0_k)
+ \frac{1}{6n}\sum_j\sum_k\sum_l (\theta_j - \theta^0_j)(\theta_k - \theta^0_k)(\theta_l - \theta^0_l)\sum_{i=1}^n \gamma_{jkl}(x_i) M_{jkl}(x_i)
= S_1 + S_2 + S_3
\]
where
\[
A_j(x) = \left.\frac{\partial}{\partial\theta_j} l(\theta)\right|_{\theta=\theta^0} \quad \text{and} \quad B_{jk}(x) = \left.\frac{\partial^2}{\partial\theta_j\partial\theta_k} l(\theta)\right|_{\theta=\theta^0},
\]
and where, by assumption (D), $0 \le |\gamma_{jkl}(x)| \le 1$.

To prove that the maximum of this difference for $\theta$ on $Q_a$ is negative with probability tending to 1 if $a$ is sufficiently small, we will show that with high probability the maximum of $S_2$ is negative while $S_1$ and $S_3$ are small compared to $S_2$. The basic tools for showing this are the facts that, by (5.8), (5.9), and the law of large numbers,
\[
\frac{1}{n} A_j(X) = \left.\frac{1}{n}\frac{\partial}{\partial\theta_j} l(\theta)\right|_{\theta=\theta^0} \to 0 \quad \text{in probability} \tag{5.11}
\]
and
\[
\frac{1}{n} B_{jk}(X) = \left.\frac{1}{n}\frac{\partial^2}{\partial\theta_j\partial\theta_k} l(\theta)\right|_{\theta=\theta^0} \to -I_{jk}(\theta^0) \quad \text{in probability}. \tag{5.12}
\]
Let us begin with $S_1$. On $Q_a$, we have $|S_1| \le \frac{1}{n}\, a \sum_j |A_j(X)|$. For any given $a$, it follows from (5.11) that $|A_j(X)|/n < a^2$ and hence that $|S_1| < s a^3$ with probability tending to 1.
Next, consider
\[
2S_2 = \sum_j\sum_k \bigl[-I_{jk}(\theta^0)\bigr](\theta_j - \theta^0_j)(\theta_k - \theta^0_k)
+ \sum_j\sum_k \left[\frac{1}{n} B_{jk}(X) - \bigl(-I_{jk}(\theta^0)\bigr)\right](\theta_j - \theta^0_j)(\theta_k - \theta^0_k). \tag{5.13}
\]
For the second term, it follows from an argument analogous to that for $S_1$ that its absolute value is less than $s^2 a^3$ with probability tending to 1. The first term is a negative (nonrandom) quadratic form in the variables $(\theta_j - \theta^0_j)$. By an orthogonal transformation, this can be reduced to diagonal form $\sum_i \lambda_i \zeta_i^2$, with $Q_a$ becoming $\sum_i \zeta_i^2 = a^2$. Suppose that the $\lambda$'s, which are negative, are numbered so that $\lambda_s \le \lambda_{s-1} \le \cdots \le \lambda_1 < 0$. Then, $\sum_i \lambda_i \zeta_i^2 \le \lambda_1 \sum_i \zeta_i^2 = \lambda_1 a^2$. Combining the first and second terms, we see that there exist $c > 0$ and $a_0 > 0$ such that for $a < a_0$,
\[
S_2 < -c a^2
\]
with probability tending to 1. Finally, with probability tending to 1, $\bigl|\frac{1}{n}\sum_i M_{jkl}(X_i)\bigr| < 2 m_{jkl}$, and hence
\[
|S_3| < b a^3 \quad \text{on } Q_a, \quad \text{where } b = \frac{1}{3}\sum_j\sum_k\sum_l m_{jkl}.
\]
Combining the three inequalities, we see that
\[
\max_{Q_a}\,(S_1 + S_2 + S_3) < -c a^2 + (b + s) a^3, \tag{5.14}
\]
which is less than zero if $a < c/(b + s)$, and this completes the proof of part (a).

(b) and (c) Asymptotic Normality and Efficiency. This part of the proof is basically the same as that of Theorem 3.10. However, the single equation derived there from the expansion of $\hat\theta_n - \theta^0$ is now replaced by a system of $s$ equations which must be solved for the differences $(\hat\theta_{jn} - \theta^0_j)$. This makes the details of the argument somewhat more cumbersome. In preparation, it will be convenient to consider quite generally a set of random linear equations in $s$ unknowns,
\[
\sum_{k=1}^s A_{jkn} Y_{kn} = T_{jn} \quad (j = 1, \ldots, s). \tag{5.15}
\]
✷

Lemma 5.2 Let $(T_{1n}, \ldots, T_{sn})$ be a sequence of random vectors tending weakly to $(T_1, \ldots, T_s)$, and suppose that for each fixed $j$ and $k$, $A_{jkn}$ is a sequence of random variables tending in probability to constants $a_{jk}$ for which the matrix $A = \|a_{jk}\|$ is nonsingular. Let $B = \|b_{jk}\| = A^{-1}$. Then, if the distribution of $(T_1, \ldots, T_s)$ has a density with respect to Lebesgue measure over $E_s$, the solutions $(Y_{1n}, \ldots, Y_{sn})$ of (5.15) tend in law to the solutions $(Y_1, \ldots, Y_s)$ of
\[
\sum_{k=1}^s a_{jk} Y_k = T_j \quad (j = 1, \ldots, s) \tag{5.16}
\]
given by
\[
Y_j = \sum_{k=1}^s b_{jk} T_k. \tag{5.17}
\]

Proof. With probability tending to 1, the matrices $\|A_{jkn}\|$ are nonsingular, and by Theorem 1.8.19 (Problem 5.1), the elements of the inverse of $\|A_{jkn}\|$ tend in probability to the elements of $B$. Therefore, by a slight extension of Theorem 1.8.10, the solutions of (5.15) have the same limit distribution as those of
\[
Y_{jn} = \sum_{k=1}^s b_{jk} T_{kn}. \tag{5.18}
\]
By applying Theorem 1.8.19 to the set
\[
S: \quad \sum_k b_{1k} T_k \le y_1, \ \ldots, \ \sum_k b_{sk} T_k \le y_s, \tag{5.19}
\]
it is only necessary to show that the distribution of $(T_1, \ldots, T_s)$ assigns probability zero to the boundary of (5.19). Since this boundary is contained in the union of the hyperplanes $\sum_k b_{jk} T_k = y_j$, the result follows. ✷

Proof of Parts (b) and (c) of Theorem 5.1. In generalization of the proof of Theorem 3.10, expand $\partial l(\theta)/\partial\theta_j = l'_j(\theta)$ about $\theta^0$ to obtain
\[
l'_j(\theta) = l'_j(\theta^0) + \sum_k (\theta_k - \theta^0_k)\, l''_{jk}(\theta^0)
+ \frac{1}{2}\sum_k\sum_l (\theta_k - \theta^0_k)(\theta_l - \theta^0_l)\, l'''_{jkl}(\theta^*) \tag{5.20}
\]
where $l''_{jk}$ and $l'''_{jkl}$ denote the indicated second and third derivatives of $l$ and where $\theta^*$ is a point on the line segment connecting $\theta$ and $\theta^0$. In this expansion, replace $\theta$ by a solution $\hat\theta_n$ of the likelihood equations, which by part (a) of the theorem can be assumed to exist with probability tending to 1 and to be consistent. The left side of (5.20) is then zero, and the resulting equations can be written as
\[
\sum_k \sqrt{n}\,(\hat\theta_k - \theta^0_k)\left[\frac{1}{n} l''_{jk}(\theta^0) + \frac{1}{2n}\sum_l (\hat\theta_l - \theta^0_l)\, l'''_{jkl}(\theta^*)\right]
= -\frac{1}{\sqrt{n}}\, l'_j(\theta^0). \tag{5.21}
\]
These have the form (5.15) with
\[
Y_{kn} = \sqrt{n}\,(\hat\theta_k - \theta^0_k), \qquad
A_{jkn} = \frac{1}{n} l''_{jk}(\theta^0) + \frac{1}{2n}\sum_l (\hat\theta_l - \theta^0_l)\, l'''_{jkl}(\theta^*), \tag{5.22}
\]
\[
T_{jn} = -\frac{1}{\sqrt{n}}\, l'_j(\theta^0) = -\sqrt{n}\left[\frac{1}{n}\sum_{i=1}^n \left.\frac{\partial}{\partial\theta_j}\log f(X_i|\theta)\right|_{\theta=\theta^0}\right].
\]
Since $E_{\theta^0}[(\partial/\partial\theta_j)\log f(X_i|\theta)] = 0$, the multivariate central limit theorem (Theorem 1.8.21) shows that $(T_{1n}, \ldots, T_{sn})$ has a multivariate normal limit distribution with mean zero and covariance matrix $I(\theta^0)$. On the other hand, it is easy to see—again in parallel to the proof of Theorem 3.10—that
\[
A_{jkn} \xrightarrow{P} a_{jk} = \frac{1}{n} E\bigl[l''_{jk}(\theta^0)\bigr] = -I_{jk}(\theta^0). \tag{5.23}
\]
The limit distribution of the $Y$'s is therefore that of the solution $(Y_1, \ldots, Y_s)$ of the equations
\[
\sum_{k=1}^s I_{jk}(\theta^0)\, Y_k = T_j \tag{5.24}
\]
where $T = (T_1, \ldots, T_s)$ is multivariate normal with mean zero and covariance matrix $I(\theta^0)$. It follows that the distribution of $Y$ is that of $[I(\theta^0)]^{-1} T$, which is a multivariate normal distribution with zero mean and covariance matrix $[I(\theta^0)]^{-1}$. This completes the proof of asymptotic normality and efficiency. ✷

If the likelihood equations have a unique solution $\hat\theta_n$, then $\hat\theta_n$ is consistent, asymptotically normal, and efficient. It is, however, interesting to note that even if the parameter space is an open interval, it does not follow as in Corollary 3.8 that the MLE exists and hence is consistent (Problem 5.6). Sufficient conditions for existence and uniqueness are given in Mäkeläinen, Schmidt, and Styan (1981).

As in the one-parameter case, if the solution of the likelihood equations is not unique, Theorem 5.1 does not establish the existence of an efficient estimator of $\theta$. However, the methods mentioned in Section 2.5 also work in the present case. In particular, if $\tilde\theta_n$ is a consistent sequence of estimators of $\theta$, then the solution $\hat\theta_n$ of the likelihood equations closest to $\tilde\theta_n$, for example, in the sense that $\sum_j (\hat\theta_{jn} - \tilde\theta_{jn})^2$ is smallest, is asymptotically efficient. More convenient, typically, is the approach of Theorem 4.3, which we now generalize to the multiparameter case.

Theorem 5.3 Suppose that the assumptions of Theorem 5.1 hold and that $\tilde\theta_{jn}$ is a $\sqrt{n}$-consistent estimator of $\theta_j$ for $j = 1, \ldots, s$. Let $\{\delta_{kn}, k = 1, \ldots, s\}$ be the solution of the equations
\[
\sum_{k=1}^s (\delta_{kn} - \tilde\theta_{kn})\, l''_{jk}(\tilde\theta_n) = -l'_j(\tilde\theta_n), \quad j = 1, \ldots, s. \tag{5.25}
\]
Then, $\delta_n = (\delta_{1n}, \ldots, \delta_{sn})$ satisfies (5.10) with $\delta_{jn}$ in place of $\hat\theta_{jn}$ and, thus, is asymptotically efficient.

Proof. The proof is a simple combination of the proofs of Theorems 4.3 and 5.1, and we shall only sketch it. Expanding the right side about $\theta^0$ allows us to rewrite (5.25) as
\[
\sum_k (\delta_{kn} - \tilde\theta_{kn})\, l''_{jk}(\tilde\theta_n) = -l'_j(\theta^0) - \sum_k (\tilde\theta_{kn} - \theta^0_k)\, l''_{jk}(\theta^0) + R_n
\]
where
\[
R_n = -\frac{1}{2}\sum_k\sum_l (\tilde\theta_{kn} - \theta^0_k)(\tilde\theta_{ln} - \theta^0_l)\, l'''_{jkl}(\theta^*_n),
\]
and hence as
\[
\sum_k \sqrt{n}\,(\delta_{kn} - \theta^0_k)\,\frac{1}{n} l''_{jk}(\tilde\theta_n)
= -\frac{1}{\sqrt{n}}\, l'_j(\theta^0)
+ \sum_k \sqrt{n}\,(\tilde\theta_{kn} - \theta^0_k)\left[\frac{1}{n} l''_{jk}(\tilde\theta_n) - \frac{1}{n} l''_{jk}(\theta^0)\right]
+ \frac{1}{\sqrt{n}}\, R_n. \tag{5.26}
\]
This has the form (5.15), and it is easy to check (Problem 5.2) that the limits (in probability) of the $A_{jkn}$ are the same $a_{jk}$ as in (5.23) and that the second and third terms on the right side of (5.26) tend toward zero in probability. Thus, the joint limit distribution of the right side is the same as that of the $T_{jn}$ given by (5.22). It follows that the joint limit distribution of the $\sqrt{n}(\delta_{kn} - \theta^0_k)$ is the same as that of the $\sqrt{n}(\hat\theta_{kn} - \theta^0_k)$ in Theorem 5.1, and this completes the proof. ✷
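Equation (5.25) amounts to a single Newton step for the likelihood equations, started at the $\sqrt{n}$-consistent $\tilde\theta_n$. The following is a minimal sketch of this one-step estimator; the helper names `score` and `hessian` (returning $l'$ and $l''$) are mine, and the Cauchy location illustration with the median as starting point is an assumption in the spirit of Examples 4.5 and 6.2, not a construction from the text.

```python
import numpy as np

def one_step(theta_tilde, score, hessian):
    """One Newton step (5.25): solve l''(t~)(delta - t~) = -l'(t~) for delta."""
    return theta_tilde - np.linalg.solve(hessian(theta_tilde), score(theta_tilde))

# Illustration: Cauchy location model f(x - theta), f(u) = 1/(pi (1 + u^2)),
# started at the sample median (a sqrt(n)-consistent estimator of theta).
x = np.array([-1.2, 0.4, 0.1, 2.3, -0.5, 0.9, 15.0])   # hypothetical sample
score = lambda t: np.array([np.sum(2*(x - t[0]) / (1 + (x - t[0])**2))])
hessian = lambda t: np.array([[np.sum(2*((x - t[0])**2 - 1)
                                      / (1 + (x - t[0])**2)**2)]])
print(one_step(np.array([np.median(x)]), score, hessian))
```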
The following result generalizes Corollary 4.4 to the multiparameter case.

Corollary 5.4 Suppose that the assumptions of Theorem 5.3 hold and that the elements $I_{jk}(\theta)$ of the information matrix of the $X$'s are continuous. Then, the solutions $\delta'_{kn}$ of the equations
\[
\sum_{k=1}^s n(\delta'_{kn} - \tilde\theta_{kn})\, I_{jk}(\tilde\theta_n) = l'_j(\tilde\theta_n), \quad j = 1, \ldots, s, \tag{5.27}
\]
are asymptotically efficient.

The proof is left to Problem 5.5.

6 Applications

Maximum likelihood (together with some of its variants) is the most widely used method of estimation, and a list of its applications would cover practically the whole field of statistics. [For a survey with a comprehensive set of references, see Norden 1972-1973 or Scholz 1985.] In this section, we will discuss a few applications to illustrate some of the issues arising. The discussion, however, is not carried to the practical level; in particular, the problem of choosing among alternative asymptotically efficient methods is not addressed. Such a choice must be based not only on theoretical considerations but also requires empirical evidence on the performance of the estimators at various sample sizes. For any specific example, the relevant literature should be consulted.

Example 6.1 Weibull distribution. Let $X_1, \ldots, X_n$ be iid according to a two-parameter Weibull distribution, whose density it is convenient to write in a parameterization suggested by Cohen (1965b) as
\[
\frac{\gamma}{\beta}\, x^{\gamma - 1} e^{-x^\gamma/\beta}, \quad x > 0,\ \beta > 0,\ \gamma > 0, \tag{6.1}
\]
where $\gamma$ is a shape parameter and $\beta^{1/\gamma}$ a scale parameter. The likelihood equations, after some simplification, reduce to (Problem 6.1)
\[
h(\gamma) = \frac{\sum x_i^\gamma \log x_i}{\sum x_i^\gamma} - \frac{1}{\gamma} = \frac{1}{n}\sum \log x_i \tag{6.2}
\]
and
\[
\beta = \sum x_i^\gamma / n. \tag{6.3}
\]
To show that (6.2) has at most one solution, note that $h'(\gamma)$ exceeds the derivative of the first term, which equals (Problem 6.2)
\[
\sum a_i^2 p_i - \left(\sum a_i p_i\right)^2 \quad \text{with } a_i = \log x_i, \quad p_i = e^{\gamma a_i} \Big/ \sum_j e^{\gamma a_j}. \tag{6.4}
\]
It follows that $h'(\gamma) > 0$ for all $\gamma > 0$. That (6.2) always has a solution follows from (Problem 6.2)
\[
-\infty = \lim_{\gamma\to 0} h(\gamma) < \frac{1}{n}\sum \log x_i < \log x_{(n)} = \lim_{\gamma\to\infty} h(\gamma). \tag{6.5}
\]
This example, therefore, illustrates the simple situation in which the likelihood equations always have a unique solution. ∥
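Since $h$ is strictly increasing with the limits in (6.5), the root of (6.2) can be found by any bracketing method, and (6.3) then gives $\beta$. A minimal sketch using scipy's `brentq`; the simulated sample is illustrative only.

```python
import numpy as np
from scipy.optimize import brentq

def weibull_mle(x):
    """Solve the likelihood equations (6.2)-(6.3) in Cohen's parameterization."""
    a = np.log(x)
    abar = a.mean()
    def h_minus_target(g):
        w = x**g
        return (w @ a) / w.sum() - 1.0/g - abar   # h(gamma) - (1/n) sum log x_i
    # h increases from -infinity to log x_(n), so a sign change brackets the
    # root; expand the upper end geometrically until it is found.
    lo, hi = 1e-6, 1.0
    while h_minus_target(hi) < 0:
        hi *= 2.0
    gamma = brentq(h_minus_target, lo, hi)
    return gamma, np.mean(x**gamma)               # (gamma, beta)

rng = np.random.default_rng(0)
x = 1.5 * rng.weibull(2.0, size=200)              # hypothetical sample
print(weibull_mle(x))
```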
Example 6.2 Location-scale families. Let $X_1, \ldots, X_n$ be iid, each with density $(1/a) f[(x - \xi)/a]$. The calculation of an ELE is easy when the likelihood equations have a unique root $(\hat\xi, \hat a)$. It was shown by Barndorff-Nielsen and Blaesild (1980) that a sufficient condition for this to be the case is that $f(x)$ be positive, twice differentiable for all $x$, and strongly unimodal. Surprisingly, Copas (1975) showed that the root is unique also when $f$ is Cauchy, despite the fact that the Cauchy density is not strongly unimodal and that in this case the likelihood equation can have multiple roots when $a$ is known. Ferguson (1978) gave explicit formulas for the Cauchy MLEs for $n = 3$ or 4. See also Haas et al. 1970 and McCullagh 1992.

In the presence of multiple roots, the simplest approach typically is that of Theorem 5.3. The $\sqrt{n}$-consistent estimators of $\xi$ and $a$ required by this theorem are easily obtained in the present case. As was pointed out in Example 4.5, the mean or median of the $X$'s will usually have the desired property for $\xi$. (When $f$ is asymmetric, this requires that $\xi$ be specified to be some particular location measure such as the mean or median of the distribution of the $X_i$.) If $E(X_i^4) < \infty$, $\hat a_n = \sqrt{\sum (X_i - \bar X)^2/n}$ will be $\sqrt{n}$-consistent for $a$ if the latter is taken to be the population standard deviation. If $E(X_i^4) = \infty$, one can instead, for example, take a suitable multiple of the interquartile range $X_{(k)} - X_{(j)}$, where $k = [3n/4]$ and $j = [n/4]$ (see, for example, Mosteller 1946). If $f$ satisfies the assumptions of Theorem 5.1, then $[\sqrt{n}(\hat\xi_n - \xi), \sqrt{n}(\hat a_n - a)]$ has a joint bivariate normal limit distribution with zero means and covariance matrix $I^{-1}(a) = \|I_{ij}(a)\|^{-1}$, which is independent of $\xi$ and where $I(a)$ is given by (2.6.20) and (2.6.21). ∥

If the distribution of the $X_i$ depends on $\theta = (\theta_1, \ldots, \theta_s)$, it is interesting to compare the estimation of $\theta_j$ when the other parameters are unknown with the situation in which they are known. The mathematical meaning of this distinction is that an estimator is permitted to depend on known parameters but not on unknown ones. Since the class of possible estimators is thus more restricted when the nuisance parameters are unknown, it follows from Theorems 3.10 and 5.1 that the asymptotic variance of an efficient estimator when some of the $\theta$'s are unknown can never fall below its value when they are known, so that
\[
\frac{1}{I_{jj}(\theta)} \le [I(\theta)]^{-1}_{jj}, \tag{6.6}
\]
as was already shown in Section 2.6 as (2.6.25). There, it was also proved that equality holds in (6.6) whenever
\[
\operatorname{cov}\left[\frac{\partial}{\partial\theta_j}\log f(X|\theta),\ \frac{\partial}{\partial\theta_k}\log f(X|\theta)\right] = 0 \quad \text{for all } j \ne k, \tag{6.7}
\]
and that this condition, which states that
\[
I(\theta) \text{ is diagonal}, \tag{6.8}
\]
is also necessary for equality. For the location-scale families of Example 6.2, it follows from (2.6.21) that $I_{12} = 0$ whenever $f$ is symmetric about zero, but not necessarily otherwise. For symmetric $f$, there is therefore no loss of asymptotic efficiency in estimating $\xi$ or $a$ when the other parameter is unknown.

Quite generally, if the off-diagonal elements of the information matrix are zero, the parameters are said to be orthogonal. Although it is not always possible to find an entire set of orthogonal parameters, it is always possible to obtain orthogonality between a scalar parameter of interest and the remaining (nuisance) parameters. See Cox and Reid 1987 and Problem 6.5.

As another illustration of efficient likelihood estimation, consider a multiparameter exponential family. Here, UMVU estimators often are satisfactory solutions of the estimation problem. However, the estimand may not be U-estimable, and then another approach is needed. In some cases, even when a UMVU estimator exists, the MLE has the advantage of not taking on values outside the range of the estimand.

Example 6.3 Multiparameter exponential families. Let $X = (X_1, \ldots, X_n)$ be distributed according to an $s$-parameter exponential family with density (1.5.2) with respect to a $\sigma$-finite measure $\mu$, with the vector $x = (x_1, \ldots, x_n)$ taking the place of $x$, and where it is assumed that $T_1(X), \ldots, T_s(X)$ are affinely independent with probability 1. Using the fact that
\[
\frac{\partial}{\partial\eta_j}\, l(\eta) = -\frac{\partial}{\partial\eta_j}\, A(\eta) + T_j(x) \tag{6.9}
\]
and other properties of the densities (1.5.2), one sees that the conditions of Theorem 5.1 are satisfied when the $X$'s are iid. By (1.5.14), the likelihood equations for the $\eta$'s reduce to
\[
T_j(x) = E_\eta[T_j(X)]. \tag{6.10}
\]
If these equations have a solution, it is unique (and is the MLE) since $l(\eta)$ is a strictly concave function of $\eta$. This follows from Theorem 1.7.13 and the fact that, by (1.5.15),
\[
-\frac{\partial^2}{\partial\eta_j\partial\eta_k}\, l(\eta) = \frac{\partial^2}{\partial\eta_j\partial\eta_k}\, A(\eta) = \operatorname{cov}[T_j(X), T_k(X)] \tag{6.11}
\]
and that, by assumption, the matrix with entries (6.11) is positive definite. Sufficient conditions for the existence of a solution of the likelihood equations are given by Crain (1976) and Barndorff-Nielsen (1978, Sections 9.3, 9.4), where they are shown to be satisfied for the two-parameter gamma family of Table 1.5.1.
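For the gamma family just mentioned, (6.10) equates the sufficient statistics to their expectations. The sketch below solves these equations numerically; the shape/scale parameterization (density proportional to $x^{\alpha-1}e^{-x/s}$, so that $E X = \alpha s$ and $E \log X = \psi(\alpha) + \log s$) and the simulated sample are my choices, not the text's Table 1.5.1 notation.

```python
import numpy as np
from scipy.special import digamma
from scipy.optimize import brentq

def gamma_mle(x):
    """Solve (6.10) for the gamma(shape alpha, scale s) family: match the means
    of X and log X. Eliminating s = xbar/alpha leaves the monotone equation
    digamma(alpha) - log(alpha) = mean(log x) - log(mean x), whose right side
    is negative by Jensen's inequality."""
    c = np.mean(np.log(x)) - np.log(np.mean(x))
    f = lambda a: digamma(a) - np.log(a) - c
    lo, hi = 1e-8, 1.0
    while f(hi) < 0:          # digamma(a) - log(a) increases to 0: bracket upward
        hi *= 2.0
    alpha = brentq(f, lo, hi)
    return alpha, np.mean(x) / alpha     # (shape, scale)

rng = np.random.default_rng(1)
print(gamma_mle(rng.gamma(shape=3.0, scale=2.0, size=500)))  # hypothetical sample
```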
An alternative method for obtaining asymptotically efficient estimators for the parameters of an exponential family is based on the mean-value parameterization (2.6.17). Slightly changing the formulation of the model, consider a sample $(X_1, \ldots, X_n)$ of size $n$ from the family (1.5.2), and let $\bar T_j = [T_j(X_1) + \cdots + T_j(X_n)]/n$ and $\theta_j = E(T_j)$. By the CLT, the joint distribution of the $\sqrt{n}(\bar T_j - \theta_j)$ is multivariate normal with zero means and covariance matrix $\sigma_{ij} = \operatorname{cov}[T_i(X), T_j(X)]$. This proves the $\bar T_j$ to be asymptotically efficient estimators by (2.6.18). ∥

For further discussion of maximum likelihood estimation in exponential families, see Berk 1972b, Sundberg 1974, 1976, Barndorff-Nielsen 1978, Johansen 1979, Brown 1986a, and Note 10.4.

In the next two examples, we shall consider in somewhat more detail the most important case of Example 6.3, the multivariate and, in particular, the bivariate normal distribution.

Example 6.4 Multivariate normal distribution. Suppose we let $(X_{1\nu}, \ldots, X_{p\nu})$, $\nu = 1, \ldots, n$, be a sample from a nonsingular normal distribution with means $E(X_{i\nu}) = \xi_i$ and covariances $\operatorname{cov}(X_{i\nu}, X_{j\nu}) = \sigma_{ij}$. By (1.4.15), the density of the $X$'s is given by
\[
|F|^{n/2} (2\pi)^{-pn/2} \exp\left(-\frac{1}{2}\sum_j\sum_k \eta_{jk} S_{jk}\right) \tag{6.12}
\]
where
\[
S_{jk} = \sum_\nu (X_{j\nu} - \xi_j)(X_{k\nu} - \xi_k), \quad j, k = 1, \ldots, p, \tag{6.13}
\]
and where $F = \|\eta_{jk}\|$ is the inverse of the covariance matrix $\|\sigma_{jk}\|$.

Consider, first, the case in which the $\xi$'s are known. Then, (6.12) is an exponential family with $T_{jk} = -(1/2) S_{jk}$. If the matrix $\|\sigma_{jk}\|$ is nonsingular, the $T_{jk}$ are affinely independent with probability 1, so that the result of the preceding example applies. Since $E(S_{jk}) = n\sigma_{jk}$, the likelihood equations (6.10) reduce to $n\sigma_{jk} = S_{jk}$ and thus have the solutions
\[
\hat\sigma_{jk} = \frac{1}{n} S_{jk}. \tag{6.14}
\]
The sample moments and correlations are, therefore, ELEs of the population variances, covariances, and correlation coefficients. Also, the $(jk)$th element of $\|\hat\sigma_{jk}\|^{-1}$ is an asymptotically efficient estimator of $\eta_{jk}$. In addition to being the MLE, $\hat\sigma_{jk}$ is the UMVU estimator of $\sigma_{jk}$ (Example 2.2.4).

If the $\xi$'s are unknown, $\hat\xi_j = \frac{1}{n}\sum_\nu X_{j\nu} = \bar X_{j\cdot}$, and $\hat\sigma_{jk}$, given by (6.14) but with $S_{jk}$ now defined as
\[
S_{jk} = \sum_\nu (X_{j\nu} - \bar X_{j\cdot})(X_{k\nu} - \bar X_{k\cdot}), \tag{6.15}
\]
continue to be ELEs for $\xi_j$ and $\sigma_{jk}$ (Problem 6.6).

If $\xi$ is known, the asymptotic distribution of $S_{jk}$ given by (6.13) is immediate from the central limit theorem, since $S_{jk}$ is the sum of $n$ iid variables with expectation $E(X_{j\nu} - \xi_j)(X_{k\nu} - \xi_k) = \sigma_{jk}$ and variance $E[(X_{j\nu} - \xi_j)^2 (X_{k\nu} - \xi_k)^2] - \sigma^2_{jk}$. If $j \ne k$, it follows from Problem 1.5.26 that
\[
E[(X_{j\nu} - \xi_j)^2 (X_{k\nu} - \xi_k)^2] = \sigma_{jj}\sigma_{kk} + 2\sigma^2_{jk},
\]
so that $\operatorname{var}[(X_{j\nu} - \xi_j)(X_{k\nu} - \xi_k)] = \sigma_{jj}\sigma_{kk} + \sigma^2_{jk}$ and
\[
\sqrt{n}\left(\frac{S_{jk}}{n} - \sigma_{jk}\right) \xrightarrow{L} N(0, \sigma_{jj}\sigma_{kk} + \sigma^2_{jk}). \tag{6.16}
\]
If $\xi$ is unknown, the $S_{jk}$ given by (6.15) are independent of the $\bar X_{j\cdot}$, and the asymptotic distribution of (6.15) is the same as that of (6.13) (Problem 6.7). ∥
Example 6.5 Bivariate normal distribution. In the preceding example, it was seen that knowing the means does not affect the efficiency with which the covariances can be estimated. Let us now restrict attention to the covariances and, for the sake of simplicity, suppose that $p = 2$. With an obvious change of notation, let $(X_i, Y_i)$, $i = 1, \ldots, n$, be iid, each with density (1.4.16). Since the asymptotic distributions of $\hat\sigma$, $\hat\tau$, and $\hat\rho$ are not affected by whether or not $\xi$ and $\eta$ are known, let us assume $\xi = \eta = 0$. For the information matrix $I(\theta)$ [where $\theta = (\sigma^2, \tau^2, \rho)$], we find [Problem 6.8(a)]
\[
(1 - \rho^2)\, I(\theta) =
\begin{pmatrix}
\dfrac{2 - \rho^2}{4\sigma^4} & \dfrac{-\rho^2}{4\sigma^2\tau^2} & \dfrac{-\rho}{2\sigma^2} \\[2mm]
\dfrac{-\rho^2}{4\sigma^2\tau^2} & \dfrac{2 - \rho^2}{4\tau^4} & \dfrac{-\rho}{2\tau^2} \\[2mm]
\dfrac{-\rho}{2\sigma^2} & \dfrac{-\rho}{2\tau^2} & \dfrac{1 + \rho^2}{1 - \rho^2}
\end{pmatrix}. \tag{6.17}
\]
Inversion of this matrix gives the covariance matrix of the $\sqrt{n}(\hat\theta_j - \theta_j)$ as [Problem 6.8(b)]
\[
\begin{pmatrix}
2\sigma^4 & 2\rho^2\sigma^2\tau^2 & \rho(1-\rho^2)\sigma^2 \\
2\rho^2\sigma^2\tau^2 & 2\tau^4 & \rho(1-\rho^2)\tau^2 \\
\rho(1-\rho^2)\sigma^2 & \rho(1-\rho^2)\tau^2 & (1-\rho^2)^2
\end{pmatrix}. \tag{6.18}
\]
Thus, we find that
\[
\sqrt{n}(\hat\sigma^2 - \sigma^2) \xrightarrow{L} N(0, 2\sigma^4), \qquad
\sqrt{n}(\hat\tau^2 - \tau^2) \xrightarrow{L} N(0, 2\tau^4), \qquad
\sqrt{n}(\hat\rho - \rho) \xrightarrow{L} N[0, (1-\rho^2)^2]. \tag{6.19}
\]
On the other hand, if $\sigma$ and $\tau$ are known to be equal to 1, the MLE $\hat{\hat\rho}$ of $\rho$ satisfies (Problem 6.9)
\[
\sqrt{n}\,(\hat{\hat\rho} - \rho) \xrightarrow{L} N\left[0, \frac{(1-\rho^2)^2}{1+\rho^2}\right], \tag{6.20}
\]
whereas if $\rho$ and $\tau$ are known, the MLE $\hat{\hat\sigma}$ of $\sigma$ satisfies (Problem 6.10)
\[
\sqrt{n}(\hat{\hat\sigma}^2 - \sigma^2) \xrightarrow{L} N\left[0, \frac{4\sigma^4(1-\rho^2)}{2-\rho^2}\right]. \tag{6.21}
\]
∥

A criterion for comparing $\hat\rho$ to $\hat{\hat\rho}$ is provided by the asymptotic relative efficiency.

Definition 6.6 If the sequence of estimators $\delta_n$ of $g(\theta)$ satisfies $\sqrt{n}[\delta_n - g(\theta)] \xrightarrow{L} N(0, \tau^2)$, and the sequence of estimators $\delta'_{n'}$, where $\delta'_{n'}$ is based on $n' = n'(n)$ observations, also satisfies $\sqrt{n}[\delta'_{n'} - g(\theta)] \xrightarrow{L} N(0, \tau^2)$, then the asymptotic relative efficiency (ARE) of $\{\delta_n\}$ with respect to $\{\delta'_n\}$ is
\[
e_{\delta,\delta'} = \lim_{n\to\infty} \frac{n'(n)}{n},
\]
provided the limit exists and is independent of the subsequences $n'$.

The interpretation is clear. Suppose, for example, that $e = 1/2$. Then, for large values of $n$, $n'$ is approximately equal to $(1/2)n$. To obtain the same limit distribution (and limit variance), half as many observations are therefore required with $\delta'$ as with $\delta$. It is then reasonable to say that $\delta'$ is twice as efficient as $\delta$, or that $\delta$ is half as efficient as $\delta'$.

The following result shows that in order to obtain the ARE, it is not necessary to evaluate the limit $n'(n)/n$.

Theorem 6.7 If $\sqrt{n}[\delta_{in} - g(\theta)] \xrightarrow{L} N(0, \tau_i^2)$, $i = 1, 2$, then the ARE of $\{\delta_{2n}\}$ with respect to $\{\delta_{1n}\}$ exists and is $e_{2,1} = \tau_1^2/\tau_2^2$.

Proof. Since
\[
\sqrt{n}\,[\delta_{2n'} - g(\theta)] = \sqrt{\frac{n}{n'}}\; \sqrt{n'}\,[\delta_{2n'} - g(\theta)],
\]
it follows from Theorem 1.8.10 that the left side has the same limit distribution $N(0, \tau_1^2)$ as $\sqrt{n}[\delta_{1n} - g(\theta)]$ if and only if $\lim [n/n'(n)]$ exists and
\[
\tau_2^2 \lim \frac{n}{n'(n)} = \tau_1^2,
\]
as was to be proved. ✷

Example 6.8 Continuation of Example 6.5. It follows from (6.19) and (6.20) that the efficiency of $\hat\rho$ to $\hat{\hat\rho}$ is
\[
e_{\hat\rho,\hat{\hat\rho}} = \frac{1}{1 + \rho^2}. \tag{6.22}
\]
This is 1 when $\rho = 0$ but can be close to $1/2$ when $|\rho|$ is close to 1. Similarly,
\[
e_{\hat\sigma^2, \hat{\hat\sigma}^2} = \frac{2(1 - \rho^2)}{2 - \rho^2}. \tag{6.23}
\]
This efficiency is again 1 when $\rho = 0$ but tends to zero as $|\rho| \to 1$. This last result, which at first may seem surprising, actually is easy to explain. If $\rho$ were equal to 1, and $\tau = 1$ say, we would have $X_i = \sigma Y_i$. Since both $X_i$ and $Y_i$ are observed, we could then determine $\sigma$ without error from a single observation. ∥
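The inversion leading from (6.17) to (6.18), and the efficiency (6.23) computed from it, are easy to verify numerically. The following sketch, with arbitrary test values for $\sigma^2$, $\tau^2$, and $\rho$, is only a check on the algebra, not part of the derivation.

```python
import numpy as np

def info_matrix(s2, t2, rho):
    """(6.17): information matrix for theta = (sigma^2, tau^2, rho)."""
    M = np.array([
        [(2 - rho**2) / (4 * s2**2), -rho**2 / (4 * s2 * t2),    -rho / (2 * s2)],
        [-rho**2 / (4 * s2 * t2),    (2 - rho**2) / (4 * t2**2), -rho / (2 * t2)],
        [-rho / (2 * s2),            -rho / (2 * t2),  (1 + rho**2) / (1 - rho**2)],
    ])
    return M / (1 - rho**2)

s2, t2, rho = 1.3, 0.7, 0.6                     # arbitrary test values
cov = np.linalg.inv(info_matrix(s2, t2, rho))   # should reproduce (6.18)
target = np.array([
    [2 * s2**2,                2 * rho**2 * s2 * t2,     rho * (1 - rho**2) * s2],
    [2 * rho**2 * s2 * t2,     2 * t2**2,                rho * (1 - rho**2) * t2],
    [rho * (1 - rho**2) * s2,  rho * (1 - rho**2) * t2,  (1 - rho**2)**2],
])
print(np.allclose(cov, target))                              # True
print(4 * s2**2 * (1 - rho**2) / (2 - rho**2) / cov[0, 0])   # ARE (6.23)
```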
Example 6.9 Efficiency of nonparametric UMVU estimator. As another example of an efficiency calculation, recall Example 2.2.2. If $X_1, \ldots, X_n$ are iid according to $N(\theta, 1)$, it was found that the UMVU estimator of $p = P(X_1 \le a)$ is
\[
\delta_{1n} = \Phi\left[\sqrt{\frac{n}{n-1}}\,(a - \bar X)\right]. \tag{6.24}
\]
Suppose now that we do not trust the assumption of normality; then we might, instead of (6.24), prefer to use the nonparametric UMVU estimator derived in Section 2.4, namely
\[
\delta_{2n} = \frac{1}{n}(\text{No. of } X_i \le a). \tag{6.25}
\]
What do we lose by using (6.25) instead of (6.24) if the $X$'s are $N(\theta, 1)$ after all? Note that $p$ is then given by
\[
p = \Phi(a - \theta) \tag{6.26}
\]
and that $\sqrt{n}(\delta_{1n} - p) \to N[0, \phi^2(a - \theta)]$. On the other hand, $n\delta_{2n}$ is the number of successes in $n$ binomial trials with success probability $p$, so that $\sqrt{n}(\delta_{2n} - p) \to N(0, p(1 - p))$. It thus follows from Theorem 6.7 that
\[
e_{2,1} = \frac{\phi^2(a - \theta)}{\Phi(a - \theta)[1 - \Phi(a - \theta)]}. \tag{6.27}
\]
At $a = \theta$ (when $p = 1/2$), $e_{2,1} = (1/2\pi)/(1/4) = 2/\pi \approx 0.637$. As $a - \theta \to \infty$, the efficiency tends to zero (Problem 6.12). It can be shown, in fact, that (6.27) is a decreasing function of $|a - \theta|$ (for a proof, see Sampford, 1953). The efficiency loss resulting from the use of $\delta_{2n}$ instead of $\delta_{1n}$ is therefore quite severe. If the underlying distribution is not normal, however, this conclusion could change (see Problem 6.13). ∥

Example 6.10 Normal mixtures. Let $X_1, \ldots, X_n$ be iid, each distributed with probability $p$ as $N(\xi, \sigma^2)$ and with probability $q = 1 - p$ as $N(\eta, \tau^2)$. (The Tukey models are examples of such distributions with $\eta = \xi$.) The joint density of the $X$'s is then given by
\[
\prod_{i=1}^n \left[\frac{p}{\sqrt{2\pi}\,\sigma} \exp\left\{-\frac{1}{2\sigma^2}(x_i - \xi)^2\right\}
+ \frac{q}{\sqrt{2\pi}\,\tau} \exp\left\{-\frac{1}{2\tau^2}(x_i - \eta)^2\right\}\right]. \tag{6.28}
\]
This is a sum of non-negative terms, of which one, for example, is proportional to
\[
\frac{1}{\sigma\tau^{n-1}} \exp\left\{-\frac{1}{2\sigma^2}(x_1 - \xi)^2 - \frac{1}{2\tau^2}\sum_{i=2}^n (x_i - \eta)^2\right\}.
\]
When $\xi = x_1$ and $\sigma \to 0$, this term tends to infinity for any fixed values of $\eta$, $\tau$, and $x_2, \ldots, x_n$. The likelihood is therefore unbounded, and the MLE does not exist. (The corresponding result holds for any other mixture with density $\prod_i \{(p/\sigma) f[(x_i - \xi)/\sigma] + (q/\tau) f[(x_i - \eta)/\tau]\}$ when $f(0) \ne 0$.) On the other hand, the conditions of Theorem 5.1 are satisfied (Problem 6.10), so that efficient solutions of the likelihood equations exist, and asymptotically efficient estimators can be obtained through Theorem 5.3.

One approach to the determination of the required $\sqrt{n}$-consistent estimators is the method of moments. In the present case, this means equating the first five moments of the $X$'s with the corresponding sample moments and then solving for the five parameters. For the normal mixture problem, these estimators were proposed in their own right by K. Pearson (1894). For a discussion and possible simplifications, see Cohen 1967, and for more details on mixture problems, see Everitt and Hand 1981, Titterington et al. 1985, McLachlan and Basford 1988, and McLachlan 1997.

A study of the improvement of an asymptotically efficient estimator over that obtained by the method of moments has been carried out for the case in which it is known that $\tau = \sigma$ (Tan and Chang 1972). If $W = (\eta - \xi)/\sigma$, the AREs for the estimation of all four parameters depend only on $W$ and $p$. As an example, consider the estimation of $p$. Here, the ARE is $< 0.01$ if $W < 1/2$ and $p < 0.2$; it is $< 0.1$ if $W < 1/2$ and $0.2 < p < 0.4$; and it is $> 0.9$ if $W > 0.5$. (For an alternative starting point for the application of Theorem 5.3, see Quandt and Ramsey 1978, particularly the discussion by N. Kiefer.) ∥
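The unboundedness argument in Example 6.10 is easy to see numerically: placing one component's mean at $x_1$ and letting $\sigma \to 0$ drives the log of (6.28) to $+\infty$. A small sketch (sample and parameter values hypothetical):

```python
import numpy as np
from scipy.stats import norm

def mixture_loglik(x, p, xi, sigma, eta, tau):
    """Log of the normal-mixture likelihood (6.28)."""
    return np.sum(np.log(p * norm.pdf(x, xi, sigma)
                         + (1 - p) * norm.pdf(x, eta, tau)))

rng = np.random.default_rng(2)
x = rng.normal(size=20)                    # hypothetical sample
for sigma in [1.0, 0.1, 0.01, 0.001]:      # set xi = x_1 and let sigma -> 0
    print(sigma, mixture_loglik(x, p=0.5, xi=x[0], sigma=sigma, eta=0.0, tau=1.0))
# The printed log likelihood increases without bound as sigma decreases,
# illustrating that a global MLE does not exist.
```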
Example 6.11 Multinomial experiments. Let $(X_0, X_1, \ldots, X_s)$ have the multinomial distribution (1.5.4). In the full-rank exponential representation,
\[
\exp[n\log p_0 + x_1\log(p_1/p_0) + \cdots + x_s\log(p_s/p_0)]\, h(x),
\]
the statistics $T_j$ can be taken to be the $X_j$. Using the mean-value parameterization, the likelihood equations (6.10) reduce to $n p_j = X_j$, so that the MLE of $p_j$ is $\hat p_j = X_j/n$ ($j = 1, \ldots, s$). If $X_j$ is 0 or $n$, the likelihood equations have no solution in the parameter space $0 < p_j < 1$, $\sum_{j=1}^s p_j < 1$. However, for any fixed vector $p$, the probability of any $X_j$ taking on either of these values tends to zero as $n \to \infty$. (But the convergence is not uniform, which causes trouble for asymptotic confidence intervals; see Lehmann and Loh 1990.) That the MLEs $\hat p_j$ are asymptotically efficient is seen by introducing the indicator variables $X_{j\nu}$, $\nu = 1, \ldots, n$, which are 1 when the $\nu$th trial results in outcome $j$ and are 0 otherwise. Then, the vectors $(X_{0\nu}, \ldots, X_{s\nu})$ are iid and $T_j = X_{j1} + \cdots + X_{jn}$, so that asymptotic efficiency follows from Example 6.3. ∥

In applications of the multinomial distribution to contingency tables, the $p$'s are usually subject to additional restrictions. Theorem 5.1 typically continues to apply, although the computation of the estimators tends to be less obvious. This class of problems is treated comprehensively in Haberman (1973, 1974), Bishop, Fienberg, and Holland (1975), and Agresti (1990). Empty cells often present special problems.

7 Extensions

The discussion of efficient likelihood estimation so far has been restricted to the iid case. In the present section, we briefly mention extensions to some more general situations, which permit results analogous to those of Sections 6.3–6.5. Treatments not requiring the stringent (but frequently applicable) assumptions of Theorems 3.10 and 5.1 have been developed by Le Cam (1953, 1969, 1970, 1986) and others. For further work in this direction, see Pfanzagl 1970, 1994, Weiss and Wolfowitz 1974, Ibragimov and Has'minskii 1981, Blyth 1982, Strasser 1985, and Wong 1992.

The theory easily extends to the case of two or more samples. Suppose that the variables $X_{\alpha 1}, \ldots, X_{\alpha n_\alpha}$ in the $\alpha$th sample are iid according to the distribution with density $f_{\alpha,\theta}$ ($\alpha = 1, \ldots, r$) and that the $r$ samples are independent. In applications, it will typically turn out that the vector parameter $\theta = (\theta_1, \ldots, \theta_s)$ has some components occurring in more than one of the $r$ distributions, whereas others may be specific to just one distribution. However, for the present discussion, we shall permit each of the distributions to depend on all the parameters.

The limit situation we shall consider supposes that each of the sample sizes $n_\alpha$ tends to infinity, all at the same rate, but that $r$ remains fixed. Consider, therefore, sequences of sample sizes $n_{\alpha,k}$ ($k = 1, 2, \ldots$) with total sample size $N_k = \sum_{\alpha=1}^r n_{\alpha,k}$ such that
\[
n_{\alpha,k}/N_k \to \lambda_\alpha \quad \text{as } k \to \infty \tag{7.1}
\]
where $\sum \lambda_\alpha = 1$ and the $\lambda_\alpha$ are $> 0$.

Theorem 7.1 Suppose the assumptions of Theorem 5.1 hold for each of the densities $f_{\alpha,\theta}$. Let $I^{(\alpha)}(\theta)$ denote the information matrix corresponding to $f_{\alpha,\theta}$ and let
\[
I(\theta) = \sum_\alpha \lambda_\alpha I^{(\alpha)}(\theta). \tag{7.2}
\]
The log likelihood $l(\theta)$ is given by
\[
l(\theta) = \sum_{\alpha=1}^r \sum_{j=1}^{n_\alpha} \log f_{\alpha,\theta}(x_{\alpha j})
\]
and the likelihood equations by
\[
\frac{\partial}{\partial\theta_j}\, l(\theta) = 0 \quad (j = 1, \ldots, s). \tag{7.3}
\]
With these identifications, the conclusions of Theorem 5.1 remain valid.

The proof is an easy extension of that of Theorem 5.1, since $l(\theta)$, and therefore each term of its Taylor expansion, is a sum of $r$ independent terms of the kind considered in the proof of Theorem 5.1 (Problem 7.1). (For further discussion of this situation, see Bradley and Gart 1962.) That asymptotic efficiency continues to have the meaning it had in Theorems 3.10 and 5.1 follows from the fact that Theorem 2.6 and its extension to the multiparameter case also extend to the present situation (see Bahadur 1964, Section 4).
Corollary 7.2 Under the assumptions of Theorem 7.1, suppose that for each $\alpha$, all off-diagonal elements in the $j$th row and $j$th column of $I^{(\alpha)}(\theta)$ are zero. Then, the asymptotic variance of $\hat\theta_j$ is the same when the remaining $\theta$'s are unknown as when they are known.

Proof. If the property in question holds for each $I^{(\alpha)}(\theta)$, it also holds for $I(\theta)$, and the result thus follows from Problem 6.3. ✷

The following four examples illustrate some applications of Theorem 7.1.

Example 7.3 Estimation of a common mean. Let $X_1, \ldots, X_m$ and $Y_1, \ldots, Y_n$ be independently distributed according to $N(\xi, \sigma^2)$ and $N(\xi, \tau^2)$, respectively, with $\xi$, $\sigma$, and $\tau$ unknown. The problem of estimating $\xi$ was considered briefly in Example 2.2.3, where it was found that a UMVU estimator for $\xi$ does not exist. Complications also arise in the problem of asymptotically efficient estimation of $\xi$.

Since the MLEs of the mean and variance of a single normal distribution are asymptotically independent, Corollary 7.2 applies and shows that $\xi$ can be estimated with the efficiency that is attainable when $\sigma$ and $\tau$ are known. Now, in that case, the MLE—which is also UMVU—is
\[
\hat\xi = \frac{(m/\sigma^2)\bar X + (n/\tau^2)\bar Y}{m/\sigma^2 + n/\tau^2}.
\]
It is now tempting to claim that Theorem 1.8.10 implies that the asymptotic distribution of $\hat\xi$ is not changed when $\sigma^2$ and $\tau^2$ are replaced by
\[
\hat\sigma^2 = \frac{1}{m-1}\sum (X_i - \bar X)^2 \quad \text{and} \quad \hat\tau^2 = \frac{1}{n-1}\sum (Y_j - \bar Y)^2, \tag{7.4}
\]
and that the resulting estimator, say $\hat{\hat\xi}$, is asymptotically normal and efficient. However, this does not immediately follow. To see why, let us look at the simple case where $m = n$, so that $\operatorname{var}(\hat\xi) = \sigma^2\tau^2/[n(\sigma^2 + \tau^2)]$. Consider the asymptotic distribution of
\[
\sqrt{n}\,\frac{\hat{\hat\xi} - \xi}{\sqrt{\sigma^2 + \tau^2}} = \sqrt{n}\,\frac{\hat{\hat\xi} - \hat\xi}{\sqrt{\sigma^2 + \tau^2}} + \sqrt{n}\,\frac{\hat\xi - \xi}{\sqrt{\sigma^2 + \tau^2}}. \tag{7.5}
\]
Since $\hat\xi$ is efficient, efficiency of $\hat{\hat\xi}$ will follow if $\sqrt{n}(\hat{\hat\xi} - \hat\xi) \to 0$, which is not the case. But Theorem 7.1 does apply, and an asymptotically efficient estimator is given by the full MLE (see Problem 7.2). ∥

Example 7.4 Balanced one-way random effects model. Consider the estimation of the variance components $\sigma^2_A$ and $\sigma^2$ in model (3.5.1). In the canonical form (3.5.2), we are dealing with independent normal variables $Z_{11}$ and $Z_{i1}$ ($i = 2, \ldots, s$), and $Z_{ij}$ ($i = 1, \ldots, s$, $j = 2, \ldots, n$). We shall restrict attention to the second and third groups, as suggested by Thompson (1962), and we are then dealing with samples of sizes $s - 1$ and $(n - 1)s$ from $N(0, \tau^2)$ and $N(0, \sigma^2)$, where $\tau^2 = \sigma^2 + n\sigma^2_A$. The assumptions of Theorem 7.1 are satisfied with $r = 2$, $\theta = (\sigma^2, \tau^2)$, and the parameter space $\Omega = \{(\sigma^2, \tau^2) : 0 < \sigma^2 < \tau^2\}$. For fixed $n$, the sample sizes $n_1 = s - 1$ and $n_2 = s(n - 1)$ tend to infinity as $s \to \infty$, with $\lambda_1 = 1/n$ and $\lambda_2 = (n - 1)/n$.

The joint density of the second and third groups of $Z$'s constitutes a two-parameter exponential family; the log likelihood is given by
\[
-l(\theta) = n_2 \log\sigma + n_1 \log\tau + \frac{S^2}{2\sigma^2} + \frac{S^2_A}{2\tau^2} + c \tag{7.6}
\]
where
\[
S^2 = \sum_{i=1}^s \sum_{j=2}^n Z^2_{ij} \quad \text{and} \quad S^2_A = \sum_{i=2}^s Z^2_{i1}.
\]
By Example 6.3, the likelihood equations have at most one solution. Solving the equations yields
\[
\hat\sigma^2 = S^2/n_2, \qquad \hat\tau^2 = S^2_A/n_1, \tag{7.7}
\]
and these are the desired (unique, ML) solutions, provided they are in $\Omega$, that is, they satisfy
\[
\hat\sigma^2 < \hat\tau^2. \tag{7.8}
\]
It follows from Theorem 5.1 that the probability of (7.8) tends to 1 as $s \to \infty$ for any $\theta \in \Omega$; this can also be seen directly from the fact that $\hat\sigma^2$ and $\hat\tau^2$ tend to $\sigma^2$ and $\tau^2$ in probability. What can be said when (7.8) is violated? The likelihood equations then have no root in $\Omega$, and an MLE does not exist (the likelihood attains its maximum at the boundary point $\hat\sigma^2 = \hat\tau^2 = (S^2_A + S^2)/(n_1 + n_2)$, which is not in $\Omega$). However, none of this matters from the present point of view, since the asymptotic theory has nothing to say about a set of values whose probability tends to zero.
(For small-sample computations of the mean squared error of a number of estimators of $\sigma^2$ and $\sigma^2_A$, see Klotz, Milton, and Zacks 1969, Portnoy 1971, and Searle et al. 1992.) The joint asymptotic distribution of $\hat\sigma^2$ and $\hat\tau^2$ can be obtained from Theorem 7.1 or directly from the distribution of $S^2_A$ and $S^2$ and the CLT, and a linear transformation of the limit distribution then gives the joint asymptotic distribution of $\hat\sigma^2$ and $\hat\sigma^2_A$ (Problem 7.3). ∥

Example 7.5 Balanced two-way random effects model. A new issue arises as we go from the one-way to the two-way layout with the model given by (3.5.5). After elimination of $Z_{111}$ (in the notation of Example 5.2), the data in canonical form consist of four samples $Z_{i11}$ ($i = 2, \ldots, I$), $Z_{1j1}$ ($j = 2, \ldots, J$), $Z_{ij1}$ ($i = 2, \ldots, I$, $j = 2, \ldots, J$), and $Z_{ijk}$ ($i = 1, \ldots, I$, $j = 1, \ldots, J$, $k = 2, \ldots, n$), and the parameter is $\theta = (\sigma, \tau_A, \tau_B, \tau_C)$ where
\[
\tau^2_C = \sigma^2 + n\sigma^2_C, \qquad \tau^2_B = nI\sigma^2_B + n\sigma^2_C + \sigma^2, \qquad \tau^2_A = nJ\sigma^2_A + n\sigma^2_C + \sigma^2, \tag{7.9}
\]
so that $\Omega = \{\theta : \sigma^2 < \tau^2_C < \tau^2_A, \tau^2_B\}$. The joint density of these variables constitutes a four-parameter exponential family. The likelihood equations thus again have at most one root, and this is given by
\[
\hat\sigma^2 = \frac{S^2}{(n-1)IJ}, \quad \hat\tau^2_C = \frac{S^2_C}{(I-1)(J-1)}, \quad \hat\tau^2_B = \frac{S^2_B}{J-1}, \quad \hat\tau^2_A = \frac{S^2_A}{I-1}
\]
when $\hat\sigma^2 < \hat\tau^2_C < \hat\tau^2_A, \hat\tau^2_B$. No root exists when these inequalities fail.

In this case, asymptotic theory requires that both $I$ and $J$ tend to infinity, and assumption (7.1) of Theorem 7.1 then does not hold. Asymptotic efficiency of the MLEs follows, however, from Theorem 5.1, since each of the samples depends on only one of the parameters $\sigma^2$, $\tau^2_A$, $\tau^2_B$, and $\tau^2_C$. The apparent linkage of these parameters through the inequalities $\sigma^2 < \tau^2_C < \tau^2_A, \tau^2_B$ is immaterial. The true point $\theta^0 = (\sigma^0, \tau^0_A, \tau^0_B, \tau^0_C)$ is assumed to satisfy these restrictions, and each parameter can then vary independently about its true value, which is all that is needed for Theorem 5.1. It therefore follows, as in the preceding example, that the MLEs are asymptotically efficient, and that $\sqrt{(n-1)IJ}\,(\hat\sigma^2 - \sigma^2)$, and so on, have the limit distributions given by Theorem 5.1 or directly obtainable from the definitions of these estimators. ∥

A general large-sample treatment both of components of variance and of the more general case of mixed models, without assuming the models to be balanced, was given by Miller (1977); see also Searle et al. 1992, Cressie and Lahiri 1993, and Jiang 1996, 1997.

Example 7.6 Independent binomial experiments. As in Section 3.5, let $X_i$ ($i = 1, \ldots, s$) be independently distributed according to the binomial distributions $b(p_i, n_i)$, with the $p_i$ being functions of a smaller number of parameters. If the $n_i$ tend to infinity at the same rate, the situation is of the type considered in Theorem 7.1, which will, in typical cases, ensure the existence of an efficient solution of the likelihood equations with probability tending to 1.

As an illustration, suppose, as in (3.6.12), that the $p$'s are given in terms of the logistic distribution, and more specifically that
\[
p_i = \frac{e^{-(\alpha + \beta t_i)}}{1 + e^{-(\alpha + \beta t_i)}} \tag{7.10}
\]
where the $t$'s are known numbers and $\alpha$ and $\beta$ are the parameters to be estimated. The likelihood equations
\[
\sum n_i p_i = \sum x_i, \qquad \sum n_i t_i p_i = \sum t_i x_i \tag{7.11}
\]
have at most one solution (Problem 7.6), which will exist with probability tending to 1 (but may not exist for some particular finite values) and which can be obtained by standard iterative methods.
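One such standard iteration is Newton-Raphson on $(\alpha, \beta)$, using the fact that the gradient of the log likelihood under (7.10) is exactly the difference of the two sides of (7.11). A minimal sketch; the dose-response data are hypothetical.

```python
import numpy as np

def logistic_mle(t, x, n, iters=50):
    """Newton-Raphson for the likelihood equations (7.11) under model (7.10)."""
    alpha = beta = 0.0
    for _ in range(iters):
        p = 1.0 / (1.0 + np.exp(alpha + beta * t))   # equivalent form of (7.10)
        g = np.array([np.sum(n * p - x),
                      np.sum(t * (n * p - x))])      # residuals of (7.11)
        w = n * p * (1 - p)
        J = -np.array([[np.sum(w),      np.sum(w * t)],
                       [np.sum(w * t),  np.sum(w * t**2)]])  # Jacobian of g
        step = np.linalg.solve(J, g)
        alpha, beta = alpha - step[0], beta - step[1]
        if np.max(np.abs(step)) < 1e-12:
            break
    return alpha, beta

t = np.array([-2.0, -1.0, 0.0, 1.0, 2.0])    # hypothetical dose levels
n = np.array([40, 40, 40, 40, 40])           # group sizes
x = np.array([33, 28, 20, 11, 6])            # observed successes
print(logistic_mle(t, x, n))
```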
That the likelihood equations have at most one solution is true not only for the model (7.10) but more generally when
\[
p_i = 1 - F\left(\sum_j \beta_j t_{ij}\right) \tag{7.12}
\]
where the $t$'s are known, the $\beta$'s are being estimated, and $F$ is a known distribution function with $\log F(x)$ and $\log[1 - F(x)]$ strictly concave. (See Haberman 1974, Chapter 8, and Problem 7.7.) For further discussion of this and more general logistic regression models, see Pregibon 1981 or Searle et al. 1992, Chapter 10. ∥

For the multinomial problem mentioned in the preceding section and those of Example 7.6, alternative methods have been developed which are asymptotically equivalent to the ELEs, and hence also asymptotically efficient. These methods are based on minimizing $\chi^2$ or some other function measuring the distance of the vector of probabilities from that of the observed frequencies. (See, for example, Neyman 1949, Taylor 1953, Le Cam 1956, 1990, Wijsman 1959, Berkson 1980, Amemiya 1980, and Ghosh and Sinha 1981, or Agresti 1990 for entries to the literature on choosing between these different estimators.)

The situation of Theorem 7.1 shares with that of Theorem 3.10 the crucial property that the total amount of information $T_n(\theta)$ asymptotically becomes arbitrarily large. In the general case of independent but not identically distributed variables, this need no longer be the case.

Example 7.7 Total information. Let $X_i$ ($i = 1, \ldots, n$) be independent Poisson variables with $E(X_i) = \gamma_i\lambda$, where the $\gamma$'s are known numbers. Consider two cases.

(a) $\sum_{i=1}^\infty \gamma_i < \infty$. The amount of information $X_i$ contains about $\lambda$ is $\gamma_i/\lambda$ by (2.5.11) and Table 2.5.1, and the total amount of information $T_n(\lambda)$ that $(X_1, \ldots, X_n)$ contains about $\lambda$ is therefore
\[
T_n(\lambda) = \frac{1}{\lambda}\sum_{i=1}^n \gamma_i. \tag{7.13}
\]
It is intuitively plausible that in these circumstances $\lambda$ cannot be estimated consistently, because only the early observations provide an appreciable amount of information. To prove this formally, note that $Y_n = \sum_{i=1}^n X_i$ is a sufficient statistic for $\lambda$ on the basis of $(X_1, \ldots, X_n)$ and that $Y_n$ has a Poisson distribution with mean $\lambda\sum_{i=1}^n \gamma_i$. Thus, all the $Y$'s are less informative than a random variable $Y$ with distribution $P(\lambda\sum_{i=1}^\infty \gamma_i)$, in the sense that the distribution of any estimator based on $Y_n$ can be duplicated by one based on $Y$ (Problem 7.9). Since $\lambda$ cannot be estimated exactly on the basis of $Y$, the result follows.

(b) $\sum_{i=1}^\infty \gamma_i = \infty$. Here, the MLE $\delta_n = \sum_{i=1}^n X_i \big/ \sum_{i=1}^n \gamma_i$ is consistent and asymptotically normal (Problem 7.10) with
\[
\left(\sum_{i=1}^n \gamma_i\right)^{1/2} (\delta_n - \lambda) \xrightarrow{L} N(0, \lambda). \tag{7.14}
\]
Thus, $\delta_n$ is approximately distributed as $N[\lambda, 1/T_n(\lambda)]$, and an extension of Theorem 2.6 to the present case (see Bahadur 1964) permits the conclusion that $\delta_n$ is asymptotically efficient.

Note: The norming constant required for asymptotic normality must be proportional to $\sqrt{\sum_{i=1}^n \gamma_i}$. Depending on the nature of the $\gamma$'s, this can be any function of $n$ tending to infinity, rather than the customary $\sqrt{n}$. In general, it is the total amount of information, rather than the sample size, which governs the asymptotic distribution of an asymptotically efficient estimator. In the iid case, $T_n(\theta) = nI(\theta)$, so that $\sqrt{T_n(\theta)}$ is proportional to $\sqrt{n}$. ∥
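A quick simulation illustrates case (b) and the information-based norming in (7.14). The choice $\gamma_i = 1/\sqrt{i}$, for which $\sum\gamma_i$ diverges, and the parameter values are mine:

```python
import numpy as np

rng = np.random.default_rng(3)
lam, n, reps = 2.0, 5000, 2000
gamma = 1.0 / np.sqrt(np.arange(1, n + 1))   # sum diverges: lambda is estimable
G = gamma.sum()

# delta_n = sum X_i / sum gamma_i, with X_i ~ Poisson(gamma_i * lam)
x = rng.poisson(gamma * lam, size=(reps, n))
delta = x.sum(axis=1) / G

z = np.sqrt(G) * (delta - lam)               # the norming of (7.14)
print(z.var(), lam)                          # sample variance should be near lam
```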
A general treatment of the case of independent random variables with densities $f_j(x_j|\theta)$, $\theta = (\theta_1, \ldots, \theta_r)$, along the lines of Theorems 3.10 and 5.1, has been given by Bradley and Gart (1962) and Hoadley (1971) (see also Nordberg 1980). The proof (for $r = 1$) is based on generalizations of (3.18)–(3.20) (see Problem 7.14) and hence depends on a suitable law of large numbers and central limit theorem for sums of independent nonidentical random variables. In the multiparameter case, of course, it may happen that some of the parameters can be consistently estimated and others not.

The theory for iid variables summarized by Theorems 2.6, 3.10, and 5.1 can be generalized not only to the case of independent nonidentical variables but also to dependent variables whose joint distribution depends on a fixed number of parameters $\theta = (\theta_1, \ldots, \theta_r)$, where, for illustration, we take $r = 1$. (The generalization to $r > 1$ is straightforward.) The log likelihood $l(\theta)$ is now the sum of the logarithms of the conditional densities $f_j(x_j|\theta, x_1, \ldots, x_{j-1})$, and the total amount of information $T_n(\theta)$ is the sum of the expected conditional amounts of information $I_j(\theta)$ in $X_j$, given $X_1, \ldots, X_{j-1}$:
\[
I_j(\theta) = E\left\{E\left[\left(\frac{\partial}{\partial\theta}\log f_j(X_j|\theta, X_1, \ldots, X_{j-1})\right)^2 \,\Bigg|\, X_1, \ldots, X_{j-1}\right]\right\}
= E\left[\left(\frac{\partial}{\partial\theta}\log f_j(X_j|\theta, X_1, \ldots, X_{j-1})\right)^2\right].
\]
Under regularity conditions on the $f_j$'s analogous to those of Theorems 3.10 and 5.1, together with additional conditions to ensure that the total amount of information tends to infinity as $n \to \infty$ and that the appropriate CLT for dependent variables is applicable, it can be shown that with probability tending to 1, there exists a root $\hat\theta_n$ of the likelihood equations such that
\[
\sqrt{T_n(\theta)}\,(\hat\theta_n - \theta) \xrightarrow{L} N(0, 1).
\]
This program has been carried out in a series of papers by Bar-Shalom (1971), Bhat (1974), and Crowder (1976).⁵ [The required extension of Theorem 2.6 can be obtained from Bahadur (1964); see also Kabaila 1983.] The following illustrates the theory with a simple classic example.

Example 7.8 Normal autoregressive Markov series. Let
\[
X_j = \beta X_{j-1} + U_j, \quad j = 2, \ldots, n, \tag{7.15}
\]
where the $U_j$ are iid as $N(0, 1)$, where $\beta$ is an unknown parameter satisfying $|\beta| < 1$,⁶ and where $X_1$ is $N(0, \sigma^2)$. The $X$'s all have marginal normal distributions with mean zero. The variance of $X_j$ satisfies
\[
\operatorname{var}(X_j) = \beta^2 \operatorname{var}(X_{j-1}) + 1 \tag{7.16}
\]
and hence $\operatorname{var}(X_j) = \sigma^2$ for all $j$, provided
\[
\sigma^2 = 1/(1 - \beta^2). \tag{7.17}
\]
This is the stationary case, in which $(X_{j_1}, \ldots, X_{j_k})$ has the same distribution as $(X_{j_1+r}, \ldots, X_{j_k+r})$ for all $r = 1, 2, \ldots$ (Problem 7.15). The amount of information that each $X_j$ ($j > 1$) contains about $\beta$ is (Problem 7.17) $I_j(\beta) = 1/(1 - \beta^2)$, so that $T_n(\beta) \sim n/(1 - \beta^2)$. The general theory therefore suggests the existence of a root $\hat\beta_n$ of the likelihood equation such that
\[
\sqrt{n}\,(\hat\beta_n - \beta) \xrightarrow{L} N(0, 1 - \beta^2). \tag{7.18}
\]
That (7.18) does hold can also be checked directly (see, for example, Brockwell and Davis 1987, Section 8.8). ∥

⁵ A review of the literature on maximum likelihood estimation in both discrete and continuous parameter stochastic processes can be found in Basawa and Prakasa Rao (1980).

⁶ For a discussion without this restriction, see Anderson (1959) and Heyde and Feigin (1975).
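If the contribution of $X_1$ to the likelihood is ignored, the resulting conditional likelihood equation has the closed-form root $\hat\beta_n = \sum X_j X_{j-1} / \sum X^2_{j-1}$, which has the same limit behavior as in (7.18); this makes the result easy to check by simulation. A sketch, with hypothetical parameter values:

```python
import numpy as np

rng = np.random.default_rng(4)
beta, n, reps = 0.6, 2000, 1000
z = np.empty(reps)
for r in range(reps):
    x = np.empty(n)
    x[0] = rng.normal(scale=1.0 / np.sqrt(1 - beta**2))   # stationary start (7.17)
    for j in range(1, n):
        x[j] = beta * x[j - 1] + rng.normal()
    b_hat = np.dot(x[1:], x[:-1]) / np.dot(x[:-1], x[:-1])  # conditional MLE
    z[r] = np.sqrt(n) * (b_hat - beta)
print(z.var(), 1 - beta**2)    # sample variance near 1 - beta^2, as in (7.18)
```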
The conclusions of this section up to this point can be summarized by saying that the asymptotic theory developed for the iid case in Sections 6.2–6.6 continues to hold—under appropriate safeguards—even if the iid assumption is dropped, provided the number of parameters is fixed and the total amount of information goes to infinity. We shall now briefly consider two generalizations of the earlier situation to which this conclusion does not apply.

The first concerns the case in which the number of parameters tends to infinity with the total sample size. In Theorem 7.1, the number $r$ of samples was considered fixed, whereas the sample sizes $n_\alpha$ were assumed to tend to infinity. Such a model is appropriate when one is dealing with a small number of moderately large samples. A quite different asymptotic situation arises in the reverse case of a large number (considered as tending to infinity) of finite samples. Here, an important distinction arises between structural parameters such as $\xi$ in Example 7.3, which are common to all the samples and which are the parameters of interest, and incidental parameters such as $\sigma^2$ and $\tau^2$ in Example 7.3, which occur in only one of the samples. That Theorem 5.1 does not extend to this case is illustrated by the following two examples.

Example 7.9 Estimation of a common variance. Let $X_{\alpha j}$ ($j = 1, \ldots, r$) be independently distributed according to $N(\theta_\alpha, \sigma^2)$, $\alpha = 1, \ldots, n$. The MLEs are
\[
\hat\theta_\alpha = \bar X_{\alpha\cdot}, \qquad \hat\sigma^2 = \frac{1}{rn}\sum_\alpha\sum_j (X_{\alpha j} - \bar X_{\alpha\cdot})^2. \tag{7.19}
\]
Furthermore, these are the unique solutions of the likelihood equations. However, in the present case, the MLE of $\sigma^2$ is not even consistent. To see this, note that the statistics $S^2_\alpha = \sum_j (X_{\alpha j} - \bar X_{\alpha\cdot})^2$ are identically independently distributed with expectation $E(S^2_\alpha) = (r - 1)\sigma^2$, so that $\sum_\alpha S^2_\alpha / n \to (r - 1)\sigma^2$ and hence
\[
\hat\sigma^2 \to \frac{r - 1}{r}\,\sigma^2 \quad \text{in probability}. \tag{7.20}
\]
A consistent and efficient estimator sequence for $\sigma^2$ is available in the present case, namely $\hat{\hat\sigma}^2 = \frac{1}{(r-1)n}\sum_\alpha S^2_\alpha$. ∥
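The inconsistency (7.20) shows up clearly in simulation: with $r = 2$, the MLE settles near $\sigma^2/2$ no matter how many strata $n$ are used. A minimal sketch (all values hypothetical):

```python
import numpy as np

rng = np.random.default_rng(5)
r, n, sigma2 = 2, 100000, 1.0
theta = rng.normal(size=n)                     # arbitrary incidental means
x = theta[:, None] + rng.normal(scale=np.sqrt(sigma2), size=(n, r))

s2 = ((x - x.mean(axis=1, keepdims=True))**2).sum(axis=1)  # S^2_alpha
print(s2.sum() / (r * n))          # MLE (7.19): near (r-1)/r * sigma2 = 0.5
print(s2.sum() / ((r - 1) * n))    # corrected estimator: near sigma2 = 1.0
```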
The study of this class of problems (including Example 7.9) was initiated by Neyman and Scott (1948), who also considered a number of other examples, including one in which an MLE is consistent but not efficient. A reformulation of the problem of structural parameters was proposed by Kiefer and Wolfowitz (1956), who considered the case in which the incidental parameters are themselves random variables, identically independently distributed according to some distribution but, of course, unobservable. This will often bring the situation into the area of applicability of Theorems 5.1 or 7.1.

Example 7.10 Regression with both variables subject to error. Let $X_i$ and $Y_i$ ($i = 1, \ldots, n$) be independent normal with means $E(X_i) = \xi_i$ and $E(Y_i) = \eta_i$ and variances $\sigma^2$ and $\tau^2$, where $\eta_i = \alpha + \beta\xi_i$. There is, thus, a linear relationship between $\xi$ and $\eta$, both of which are observed with independent, normally distributed errors. We are interested in estimating $\beta$ and, for the sake of simplicity, shall take $\alpha$ as known to be zero. Then, $\theta = (\beta, \sigma^2, \tau^2, \xi_1, \ldots, \xi_n)$, with the first three parameters being structural and the $\xi$'s incidental. The likelihood is proportional to
\[
\frac{1}{\sigma^n\tau^n} \exp\left[-\frac{1}{2\sigma^2}\sum (x_i - \xi_i)^2 - \frac{1}{2\tau^2}\sum (y_i - \beta\xi_i)^2\right]. \tag{7.21}
\]
The likelihood equations have two roots, given by (Problem 7.20)
\[
\hat\beta = \pm\sqrt{\frac{\sum y_i^2}{\sum x_i^2}}, \qquad
2n\hat\sigma^2 = \sum x_j^2 - \frac{1}{\hat\beta}\sum x_j y_j, \qquad
2n\hat\tau^2 = \sum y_j^2 - \hat\beta\sum x_j y_j, \tag{7.22}
\]
\[
2\hat\xi_i = x_i + \frac{1}{\hat\beta}\, y_i, \quad i = 1, \ldots, n,
\]
and the likelihood is larger at the root for which $\hat\beta\sum x_i y_i > 0$. If Theorem 5.1 applied, one of these roots would have to be consistent and, hence, tend to $\beta$ in probability. Since $S^2_X = \sum X_j^2$ and $S^2_Y = \sum Y_j^2$ are independently distributed according to noncentral $\chi^2$-distributions with noncentrality parameters $\lambda^2_n = \sum_{j=1}^n \xi_j^2$ and $\beta^2\lambda^2_n$, their limit behavior depends on that of $\lambda_n$. (Note, incidentally, that for $\lambda_n = 0$, the parameter $\beta$ becomes unidentifiable.) Suppose that $\lambda^2_n/n \to \lambda^2 > 0$. The distribution of $S^2_X$ and $S^2_Y$ is unchanged if we replace each $\xi_i^2$ by $\lambda^2_n/n$, and by the law of large numbers, $\sum X_j^2/n$ therefore has the same limit as
\[
E(X_1^2) = \sigma^2 + \frac{1}{n}\lambda^2_n \to \sigma^2 + \lambda^2.
\]
Similarly, $\sum Y_j^2/n$ tends in probability to $\tau^2 + \beta^2\lambda^2$ and, hence,
\[
\hat\beta^2_n \xrightarrow{P} \frac{\tau^2 + \beta^2\lambda^2}{\sigma^2 + \lambda^2}.
\]
Thus, neither of the roots is consistent. [It was pointed out by Solari (1969) that the likelihood in this problem is unbounded, so that an MLE does not exist (Problem 7.21). The solutions (7.22) are, in fact, saddlepoints of the likelihood surface.]

If in (7.21) it is assumed that $\tau = \sigma$, it is easily seen that the MLE of $\beta$ is consistent (Problem 7.18). For a discussion of this problem and some of its generalizations, see Anderson 1976, Gleser 1981, and Anderson and Sawa 1982. Another modification leading to a consistent MLE is suggested by Copas (1972a).

Instead of (7.21), it is sometimes assumed that the $\xi$'s are themselves iid according to a normal distribution $N(\mu, \gamma^2)$. The pairs $(X_i, Y_i)$ then constitute a sample from a bivariate normal distribution, and asymptotically efficient estimators of the parameters $\mu$, $\gamma$, $\beta$, $\sigma$, and $\tau$ can be obtained from the MLEs of Example 6.4. An analogous treatment is possible for Example 7.9. ∥

Kiefer and Wolfowitz (1956) have considered not only this problem and that of Example 7.9, but a large class of problems of this type, by postulating that the $\xi$'s are iid according to a distribution $G$, but treating $G$ as unknown, subject only to some rather general regularity assumptions. Alternative approaches to the estimation of structural parameters in the presence of a large number of incidental parameters are discussed by Andersen (1970b) and Kalbfleisch and Sprott (1970). A discussion of Example 7.10 and its extension to more general regression models can be found in Stuart and Ord (1991, Chapters 26 and 28), and of Example 7.9 in Jewell and Raab (1981). A review of these models, also known as measurement error models, is given by Gleser (1991), and they are the topic of the book by Carroll, Ruppert, and Stefanski (1995).

Another extension of likelihood estimation leads us along the lines of Example 4.5, in which it was seen that an estimator such as the sample median, which was not the MLE, was a desirable alternative. Such situations can lead naturally to replacing the likelihood function by another function, often with the goal of obtaining a robust estimator.

Such an approach was suggested by Huber (1964), resulting in a compromise between the mean and the median. The mean and the median minimize, respectively, $\sum (x_i - a)^2$ and $\sum |x_i - a|$. Huber suggested minimizing instead
\[
\sum_{i=1}^n \rho(x_i - a) \tag{7.23}
\]
where $\rho$ is given by
\[
\rho(x) = \begin{cases} \dfrac{1}{2}x^2 & \text{if } |x| \le k \\[1mm] k|x| - \dfrac{1}{2}k^2 & \text{if } |x| \ge k. \end{cases} \tag{7.24}
\]
This function is proportional to $x^2$ for $|x| \le k$, but outside this interval, it replaces the parabolic arcs by straight lines. The pieces fit together so that $\rho$ and its derivative $\rho'$ are continuous (Problem 7.22). As $k$ gets larger, $\rho$ will agree with $\frac{1}{2}x^2$ over most of its range, so that the estimator comes close to the mean; as $k$ gets smaller, the estimator becomes close to the median. As a moderate compromise, the value $k = 1.5$ is sometimes suggested.

The Huber estimators minimizing (7.23) with $\rho$ given by (7.24) are a subset of the class of M-estimators obtained by minimizing (7.23) for arbitrary $\rho$. If $\rho$ is convex and even, as is the case for (7.24), it follows from Theorem 1.7.15 that the minimizing values of (7.23) constitute a closed interval; if $\rho$ is strictly convex, the minimizing value is unique.
If $\rho$ has a derivative $\rho' = \psi$, the M-estimators $M_n$ may be defined as the solutions of the equation
\[
\sum_{i=1}^n \psi(x_i - a) = 0. \tag{7.25}
\]
If $X_1, \ldots, X_n$ are iid according to $F(x - \theta)$, where $F$ is symmetric about zero and has density $f$, it turns out under weak assumptions on $\psi$ and $F$ that
\[
\sqrt{n}\,(M_n - \theta) \to N[0, \sigma^2(F, \psi)] \tag{7.26}
\]
where
\[
\sigma^2(F, \psi) = \frac{\int \psi^2(x) f(x)\, dx}{\left[\int \psi'(x) f(x)\, dx\right]^2}, \tag{7.27}
\]
provided both numerator and denominator on the right side are finite and the denominator is positive. Proofs of (7.26) can be found in Huber (1981), in which a detailed account of the theory of M-estimators is given, not only for location parameters but also in more general settings. See also Serfling 1980, Chapter 7, Hampel et al. 1986, and Staudte and Sheather 1990, as well as Problems 7.24–7.26.

For
\[
\rho(x) = -\log f(x), \tag{7.28}
\]
minimizing (7.23) is equivalent to maximizing $\prod f(x_i - a)$, and the M-estimator then coincides with the maximum likelihood estimator. In particular, for known $F$, the M-estimator of $\theta$ corresponding to (7.28) satisfies (7.26) with $\sigma^2 = 1/I_f$ (see Theorem 3.10). Further generalizations are discussed in Note 10.4.
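For Huber's $\rho$ of (7.24), $\psi(u) = \max(-k, \min(k, u))$, and (7.25) can be solved by iteratively reweighted averaging, since $\psi(u) = u\,w(u)$ with $w(u) = \min(1, k/|u|)$. A minimal sketch; the sample values are hypothetical, with one gross outlier to show the compromise between mean and median.

```python
import numpy as np

def huber_location(x, k=1.5, iters=100, tol=1e-10):
    """Solve (7.25) for Huber's psi by iterative reweighting: the root
    satisfies a = sum(w_i x_i) / sum(w_i), w_i = min(1, k/|x_i - a|)."""
    a = np.median(x)                    # robust starting value
    for _ in range(iters):
        u = x - a
        w = np.minimum(1.0, k / np.maximum(np.abs(u), 1e-12))
        a_new = np.sum(w * x) / np.sum(w)
        if abs(a_new - a) < tol:
            break
        a = a_new
    return a

x = np.array([0.1, -0.4, 0.7, 0.3, -0.2, 0.5, 8.0])   # hypothetical sample
print(np.mean(x), np.median(x), huber_location(x))
```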
The results of this chapter have all been derived in the so-called regular case, that is, when the densities satisfy regularity assumptions such as those of Theorems 2.6, 3.10, and 5.1. Of particular importance for the validity of the conclusions is that the support of the distributions $P_\theta$ does not vary with $\theta$. Varying support brings with it information that often makes it possible to estimate some of the parameters with greater accuracy than that attainable in the regular case.

Example 7.11 Uniform MLE. Let $X_1, \ldots, X_n$ be iid as $U(0, \theta)$. Then, the MLE of $\theta$ is $\hat\theta_n = X_{(n)}$ and satisfies (Problem 2.6)
\[
n(\theta - \hat\theta_n) \xrightarrow{L} E(0, \theta). \tag{7.29}
\]
Since $\hat\theta_n$ always underestimates $\theta$, it has a bias of order $1/n$, the order of the error $\hat\theta_n - \theta$. Consider, as an alternative, the UMVU estimator $\delta_n = [(n+1)/n] X_{(n)}$, which satisfies
\[
n(\theta - \delta_n) \xrightarrow{L} E(-\theta, \theta). \tag{7.30}
\]
The two asymptotic distributions have the same variance, but the first has expectation $\theta$, whereas the second is asymptotically unbiased with expectation zero and is thus much better centered. The improvement of $\delta_n$ over $\hat\theta_n$ is perhaps seen more clearly by considering expected squared error. We have (Problem 2.7)
\[
E[n(\hat\theta_n - \theta)]^2 \to 2\theta^2, \qquad E[n(\delta_n - \theta)]^2 \to \theta^2. \tag{7.31}
\]
Thus, the risk efficiency of $\hat\theta_n$ with respect to $\delta_n$ is $1/2$. ∥

The example illustrates two ways in which such situations differ from the regular iid case. First, the appropriate normalizing factor is $n$ rather than $\sqrt{n}$, reflecting the fact that the error of the MLE is of order $1/n$ instead of $1/\sqrt{n}$. Second, the MLE need no longer be asymptotically optimal even when it is consistent.

Example 7.12 Exponential MLE. Let $X_1, \ldots, X_n$ be iid according to the exponential distribution $E(\xi, b)$. Then, the MLEs of $\xi$ and $b$ are
\[
\hat\xi = X_{(1)} \quad \text{and} \quad \hat b = \frac{1}{n}\sum [X_i - X_{(1)}]. \tag{7.32}
\]
It follows from Problem 1.6.18 that $n[X_{(1)} - \xi]/b$ is exactly (and hence asymptotically) distributed as $E(0, 1)$. As was the case for $\hat\theta_n$ in the preceding example, $\hat\xi$ is therefore asymptotically biased. More satisfactory is the UMVU estimator $\delta_n$ given by (2.2.23), which is obtained from $\hat\xi$ by subtracting an estimator of the bias (Problem 7.27).

It was further seen in Problem 1.6.18 that $2n\hat b/b$ is distributed as $\chi^2_{2n-2}$. Since $(\chi^2_n - n)/\sqrt{2n} \to N(0, 1)$ in law, it is seen that $\sqrt{n}(\hat b - b) \to N(0, b^2)$. We shall now show that $\hat b$ is asymptotically efficient. For this purpose, consider the case where $\xi$ is known. The resulting one-parameter family of the $X$'s is an exponential family, and the MLE $\hat{\hat b}$ of $b$ is asymptotically efficient and satisfies $\sqrt{n}(\hat{\hat b} - b) \to N(0, b^2)$ (Problem 7.27). Since $\hat b$ and $\hat{\hat b}$ have the same asymptotic distribution, $\hat b$ is a fortiori also asymptotically efficient, as was to be proved. ∥

Example 7.13 Pareto MLE. Let $X_1, \ldots, X_n$ be iid according to the Pareto distribution $P(a, c)$ with density
\[
f(x) = a c^a / x^{a+1}, \quad 0 < c < x, \ 0 < a. \tag{7.33}
\]
The distribution is widely used, for example, in economics (see Johnson, Kotz, and Balakrishnan 1994, Chapter 20) and is closely connected with the exponential distribution of the preceding example through the fact that if $X$ has density (7.33), then $Y = \log X$ has the exponential distribution $E(\xi, b)$ with (Problem 1.5.25)
\[
\xi = \log c, \qquad b = 1/a. \tag{7.34}
\]
From this fact, it is seen that the MLEs of $a$ and $c$ are
\[
\hat a = \frac{n}{\sum \log(X_i/X_{(1)})} \quad \text{and} \quad \hat c = X_{(1)}, \tag{7.35}
\]
and that these estimators are independently distributed, $\hat c$ as $P(na, c)$ and $2na/\hat a$ as $\chi^2_{2n-2}$ (Problem 7.29). Since $\hat b$ is asymptotically efficient in the exponential case, the same is true of $1/\hat b$ and hence of $\hat a$. On the other hand, $n(X_{(1)} - c)$ has the limit distribution $E(0, c/a)$ and hence is biased. As was the case with the MLE of $\xi$ in Example 7.12, an improvement over the MLE $\hat c$ of $c$ is obtained by removing its bias and replacing $\hat c$ by the UMVU estimator
\[
X_{(1)}\left[1 - \frac{1}{(n-1)\hat a}\right]. \tag{7.36}
\]
For the details of these calculations, see Problems 7.29–7.31. ∥
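A short sketch computing (7.35) and the bias-corrected (7.36); the simulated sample is illustrative, and it relies on the fact that numpy's `pareto` generator returns the Lomax form, so that adding 1 and scaling by `c` gives the classical Pareto of (7.33).

```python
import numpy as np

def pareto_mle(x):
    """MLEs (7.35) and the UMVU-corrected scale (7.36) for the Pareto P(a, c)."""
    c_hat = x.min()
    a_hat = len(x) / np.sum(np.log(x / c_hat))
    c_umvu = c_hat * (1 - 1 / ((len(x) - 1) * a_hat))
    return a_hat, c_hat, c_umvu

rng = np.random.default_rng(6)
a, c, n = 3.0, 2.0, 500
x = c * (1 + rng.pareto(a, size=n))     # classical Pareto(a, c) sample
print(pareto_mle(x))                    # compare to (a, c) = (3.0, 2.0)
```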
Although the MLE, or bias-corrected MLE, may achieve the smallest asymptotic variance, it may not minimize mean squared error when compared with all other estimators. This is illustrated by the following example in which, for the sake of simplicity, we shall consider expected squared error instead of asymptotic variance.

Example 7.15 Second-order mean squared error. Consider the estimation of σ² on the basis of a sample X1, ..., Xn from N(0, σ²). The MLE is then σ̂² = (1/n)ΣXi², which happens to be unbiased, so that no correction is needed. Let us now consider the more general class of estimators

δn = (1/n + a/n²) ΣXi².   (7.38)

It can be shown (Problem 7.32) that

E(δn − σ²)² = 2σ⁴/n + (4a + a²)σ⁴/n² + O(1/n³).   (7.39)

Thus, the estimators δn are all asymptotically efficient, that is, nE(δn − θ)² → 1/I(θ) where θ = σ². However, the MLE does not minimize the error in this class, since the term of order 1/n² is minimized not by a = 0 (MLE) but by a = −2, so that (1/n − 2/n²)ΣXi² has higher second-order efficiency than the MLE. In fact, the normalized limiting risk difference of the MLE (a = 0) relative to δn with a = −2 is 2; that is, the limiting risk of the MLE is larger (Problem 7.32). ∥

A uniformly best estimator (up to second-order terms) typically will not exist. The second-order situation is thus similar to that encountered in the exact (small-sample) theory. One can obtain uniform second-order optimality by imposing restrictions such as first-order unbiasedness, or must be content with weaker properties such as second-order admissibility or minimaxity. An admissibility result (somewhat similar to Theorem 5.2.14) is given by Ghosh and Sinha (1981); the minimax problem is treated by Levit (1980).

8 Asymptotic Efficiency of Bayes Estimators

Bayes estimators were defined in Section 4.1, and many of their properties were illustrated throughout Chapter 4. We shall now consider their asymptotic behavior.

Example 8.1 Limiting binomial. If X has the binomial distribution b(p, n) and the loss is squared error, it was seen in Example 4.1.5 that the Bayes estimator of p corresponding to the beta prior B(a, b) is δn(X) = (a + X)/(a + b + n). Thus,

√n[δn(X) − p] = √n[X/n − p] + [√n/(a + b + n)][a − (a + b)X/n]

and it follows from Theorem 1.8.10 that √n[δn(X) − p] has the same limit distribution as √n[X/n − p], namely the normal distribution N[0, p(1 − p)]. So, the Bayes estimator of the success probability p, suitably normalized, has a normal limit distribution which is independent of the parameters of the prior distribution and is the same as that of the MLE X/n. Therefore, these Bayes estimators are asymptotically efficient. (See Problem 8.1 for analogous results.) ∥
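The merging of the Bayes estimator and the MLE in Example 8.1 is easy to see numerically. In this sketch (p = 0.3 and a Beta(3, 3) prior are assumptions chosen for the demo), both normalized estimators have variance close to p(1 − p), and their difference is negligible on the √n scale:

```python
import numpy as np

# Sketch for Example 8.1 (assumed: p = 0.3, prior Beta(3, 3), n = 2000):
# sqrt(n)[delta_n - p] and sqrt(n)[X/n - p] have nearly the same
# distribution, both approximately N[0, p(1-p)].
rng = np.random.default_rng(2)
p, a, b, n, reps = 0.3, 3.0, 3.0, 2_000, 100_000

x = rng.binomial(n, p, size=reps)
mle = x / n
bayes = (a + x) / (a + b + n)

print(np.var(np.sqrt(n) * (mle - p)))          # ~ p(1-p) = 0.21
print(np.var(np.sqrt(n) * (bayes - p)))        # ~ p(1-p) as well
print(np.sqrt(n) * np.abs(bayes - mle).max())  # small; shrinks as n grows
```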
This example raises the question of whether the same limit distribution also obtains when the conjugate priors in this example are replaced by more general prior distributions, and whether the phenomenon persists in more general situations. The principal result of the present section (Theorem 8.3) shows that, under suitable conditions, the distribution of Bayes estimators based on n iid random variables tends to become independent of the prior distribution as n → ∞ and that the Bayes estimators are asymptotically efficient. Versions of such a theorem were given by Bickel and Yahav (1969) and by Ibragimov and Has'minskii (1972, 1981). The present proof, which combines elements from these papers, is due to Bickel.

We begin by stating some assumptions. Let X1, ..., Xn be iid with density f(xi|θ) (with respect to μ), where θ is real-valued and the parameter space Ω is an open interval. The true value of θ will be denoted by θ0.

(B1) The log likelihood function l(θ) satisfies the assumptions of Theorem 2.6.

To motivate the next assumption, note that under the assumptions of Theorem 2.6, if θ̃ = θ̃n is any sequence for which θ̃ →P θ0, then writing

l(θ) = l(θ0) + (θ − θ0)l′(θ0) − ½(θ − θ0)²[nI(θ0) + Rn(θ)],   (8.1)

we have

(1/n)Rn(θ̃) →P 0 as n → ∞   (8.2)

(Problem 8.3). We require here the following stronger assumption.

(B2) Given any ε > 0, there exists δ > 0 such that in the expansion (8.1), the probability of the event

sup{ |(1/n)Rn(θ)| : |θ − θ0| ≤ δ } ≥ ε   (8.3)

tends to zero as n → ∞.

In the present case, it is not enough to impose conditions on l(θ) in the neighborhood of θ0, as is typically the case in asymptotic results. Since the Bayes estimators involve integration over the whole range of θ values, it is also necessary to control the behavior of l(θ) at a distance from θ0.

(B3) For any δ > 0, there exists ε > 0 such that the probability of the event

sup{ (1/n)[l(θ) − l(θ0)] : |θ − θ0| ≥ δ } ≤ −ε   (8.4)

tends to 1 as n → ∞.

(B4) The prior density π of θ is continuous and positive for all θ ∈ Ω.

(B5) The expectation of θ under π exists; that is,

∫|θ|π(θ) dθ < ∞.   (8.5)

To establish the asymptotic efficiency of Bayes estimators under these assumptions, we shall first prove that for large values of n, the posterior distribution of θ given the X's is approximately normal with

mean = θ0 + l′(θ0)/(nI(θ0)) and variance = 1/(nI(θ0)).   (8.6)

Theorem 8.2 Let π*(t|x) be the posterior density of √n(θ − Tn), where

Tn = θ0 + l′(θ0)/(nI(θ0)).   (8.7)

(i) If (B1)-(B4) hold, then

∫ |π*(t|x) − √I(θ0) φ(t√I(θ0))| dt →P 0.   (8.8)

(ii) If, in addition, (B5) holds, then

∫ (1 + |t|) |π*(t|x) − √I(θ0) φ(t√I(θ0))| dt →P 0.   (8.9)

Proof. (i) By the definition of Tn,

π*(t|x) = π(Tn + t/√n) exp{l(Tn + t/√n)} / ∫ π(Tn + u/√n) exp{l(Tn + u/√n)} du   (8.10)
        = e^ω(t) π(Tn + t/√n) / Cn

where

ω(t) = l(Tn + t/√n) − l(θ0) − [l′(θ0)]²/(2nI(θ0))   (8.11)

and

Cn = ∫ e^ω(u) π(Tn + u/√n) du.   (8.12)

We shall prove at the end of the section that

J1 = ∫ |e^ω(t) π(Tn + t/√n) − e^(−t²I(θ0)/2) π(θ0)| dt →P 0,   (8.13)

so that

Cn →P ∫ e^(−t²I(θ0)/2) π(θ0) dt = π(θ0)√(2π/I(θ0)).   (8.14)

The left side of (8.8) is equal to J/Cn, where

J = ∫ |e^ω(t) π(Tn + t/√n) − Cn √I(θ0) φ(t√I(θ0))| dt   (8.15)

and, by (8.14), it is enough to show that J →P 0. Now, J ≤ J1 + J2, where J1 is given by (8.13) and

J2 = ∫ |Cn √I(θ0) φ(t√I(θ0)) − e^(−t²I(θ0)/2) π(θ0)| dt
   = |Cn √I(θ0)/√(2π) − π(θ0)| ∫ e^(−t²I(θ0)/2) dt.

By (8.13) and (8.14), J1 and J2 tend to zero in probability, and this completes the proof of part (i).

(ii) The left side of (8.9) is equal to (1/Cn)J′ ≤ (1/Cn)(J′1 + J′2), where J′, J′1, and J′2 are obtained from J, J1, and J2, respectively, by inserting the factor (1 + |t|) under the integral signs. It is therefore enough to prove that J′1 and J′2 both tend to zero in probability. The proof for J′2 is the same as that for J2; the proof for J′1 will be given at the end of the section, together with that for J1. ✷
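Theorem 8.2 can be visualized directly. In the following sketch (all modeling choices are assumptions for the demo: iid N(θ, 1) data and a non-conjugate logistic prior centered at 3), we have I(θ0) = 1 and l′(θ0) = Σ(xi − θ0), so Tn = x̄, and the posterior computed on a grid should be close in L1 distance to N(x̄, 1/n), as in (8.8):

```python
import numpy as np

# Numerical sketch of Theorem 8.2 (assumed: N(theta, 1) data, logistic
# prior centered at 3, theta0 = 1, n = 400).  Here T_n = x-bar and
# I(theta0) = 1, so the posterior should merge with N(x-bar, 1/n).
rng = np.random.default_rng(3)
theta0, n = 1.0, 400
x = rng.normal(theta0, 1.0, n)

grid = np.linspace(x.mean() - 0.5, x.mean() + 0.5, 4001)
dgrid = grid[1] - grid[0]
loglik = -0.5 * ((x[:, None] - grid) ** 2).sum(axis=0)
logprior = -(grid - 3.0) - 2.0 * np.log1p(np.exp(-(grid - 3.0)))
post = np.exp(loglik + logprior - (loglik + logprior).max())
post /= post.sum() * dgrid                    # normalized posterior density

approx = np.sqrt(n / (2 * np.pi)) * np.exp(-0.5 * n * (grid - x.mean()) ** 2)
print(np.abs(post - approx).sum() * dgrid)    # L1 distance, small, cf. (8.8)
```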
On the basis of Theorem 8.2, we are now able to prove the principal result of this section.

Theorem 8.3 If (B1)-(B5) hold, and if θ̃n is the Bayes estimator when the prior density is π and the loss is squared error, then

√n(θ̃n − θ0) →L N[0, 1/I(θ0)],   (8.16)

so that θ̃n is consistent⁷ and asymptotically efficient.

⁷A general relationship between the consistency of MLEs and Bayes estimators is discussed by Strasser (1981).

Proof. We have √n(θ̃n − θ0) = √n(θ̃n − Tn) + √n(Tn − θ0). By the CLT, the second term has the limit distribution N[0, 1/I(θ0)], so that it only remains to show that

√n(θ̃n − Tn) →P 0.   (8.17)

Note that Equation (8.10) says that π*(t|x) = (1/√n)π(Tn + t/√n | x), and, hence, by a change of variable, we have

θ̃n = ∫θ π(θ|x) dθ = ∫(t/√n + Tn) π*(t|x) dt = (1/√n)∫t π*(t|x) dt + Tn

and hence

√n(θ̃n − Tn) = ∫t π*(t|x) dt.

Now, since ∫t √I(θ0) φ(t√I(θ0)) dt = 0,

√n|θ̃n − Tn| = |∫t π*(t|x) dt − ∫t √I(θ0) φ(t√I(θ0)) dt| ≤ ∫|t| |π*(t|x) − √I(θ0) φ(t√I(θ0))| dt,

which tends to zero in probability by Theorem 8.2. ✷

Before discussing the implications of Theorem 8.3, we shall show that assumptions (B1)-(B5) are satisfied in exponential families.

Example 8.4 Exponential families. Let

f(xi|θ) = e^(θT(xi) − A(θ)),

so that A(θ) = log ∫e^(θT(x)) dμ(x). Recall from Section 1.5 that A is differentiable to all orders and that A′(θ) = Eθ[T(X)], A″(θ) = varθ[T(X)] = I(θ). Suppose I(θ) > 0. Then,

l(θ) − l(θ0) = (θ − θ0)ΣT(Xi) − n[A(θ) − A(θ0)]
            = (θ − θ0)Σ[T(Xi) − A′(θ0)] − n{[A(θ) − A(θ0)] − (θ − θ0)A′(θ0)}.   (8.18)

The first term is equal to (θ − θ0)l′(θ0). Apply Taylor's theorem to A(θ) to find A(θ) = A(θ0) + (θ − θ0)A′(θ0) + ½(θ − θ0)²A″(θ*), so that the second term in (8.18) is equal to (−n/2)(θ − θ0)²A″(θ*). Hence,

l(θ) − l(θ0) = (θ − θ0)l′(θ0) − (n/2)(θ − θ0)²A″(θ*).

To prove (B2), we must show that A″(θ*) = I(θ0) + (1/n)Rn(θ), where Rn(θ) = n[A″(θ*) − I(θ0)] satisfies (8.3); that is, we must show that given ε, there exists δ such that the probability of

sup{ |A″(θ*) − I(θ0)| : |θ − θ0| ≤ δ } ≥ ε

tends to zero. This follows from the facts that I(θ) = A″(θ) is continuous and that θ* → θ0 as θ → θ0.

To see that (B3) holds, write

(1/n)[l(θ) − l(θ0)] = (θ − θ0){ (1/n)Σ[T(Xi) − A′(θ0)] − [(A(θ) − A(θ0))/(θ − θ0) − A′(θ0)] }

and suppose, without loss of generality, that θ > θ0. Since A″(θ) > 0, so that A(θ) is strictly convex, it is seen that θ > θ0 implies [A(θ) − A(θ0)]/(θ − θ0) > A′(θ0). On the other hand, Σ[T(Xi) − A′(θ0)]/n →P 0, and hence with probability tending to 1, the factor of (θ − θ0) is negative. It follows that

sup{ (1/n)[l(θ) − l(θ0)] : θ − θ0 ≥ δ } ≤ δ{ Σ[T(Xi) − A′(θ0)]/n − inf[ (A(θ) − A(θ0))/(θ − θ0) − A′(θ0) : θ − θ0 ≥ δ ] }

and hence that (B3) is satisfied. ∥

Theorems 8.2 and 8.3 were stated under the assumption that π is the density of a proper distribution, so that its integral is equal to 1. There is a trivial but useful extension to the case in which ∫π(θ) dθ = ∞ but where there exists n0 such that the posterior density

π̃(θ|x1, ..., xn0) = ∏(i=1 to n0) f(xi|θ)π(θ) / ∫∏(i=1 to n0) f(xi|θ)π(θ) dθ

of θ given x1, ..., xn0 is, with probability 1, a proper density satisfying assumptions (B4) and (B5). The posterior density of θ given X1, ..., Xn (n > n0) when θ has prior density π is then the same as the posterior density of θ given X(n0+1), ..., Xn when θ has prior density π̃, and the result now follows.

Example 8.5 Location families. The Pitman estimator derived in Theorem 3.1.20 is the Bayes estimator corresponding to the improper prior density π(θ) ≡ 1. If X1, ..., Xn are iid with density f(x1 − θ) satisfying (B1)-(B3), the posterior density after one observation X1 = x1 is f(x1 − θ) and hence a proper density satisfying assumption (B5), provided Eθ|X1| < ∞ (Problem 8.4). Under these assumptions, the Pitman estimator is therefore asymptotically efficient.⁸ An analogous result holds in the scale case (Problem 8.5). ∥

⁸For a more general treatment of this result, see Stone 1974.
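The Pitman estimator of Example 8.5 is a ratio of two one-dimensional integrals and is easy to evaluate on a grid. In this sketch, the logistic location family is an assumption chosen so that E|X1| < ∞ holds (for the logistic, the asymptotic variance bound is 1/(nIf) = 3/n):

```python
import numpy as np

# Numerical sketch of Example 8.5: the Pitman (flat-prior Bayes) estimator
#   delta = Int theta prod f(x_i - theta) dtheta / Int prod f(x_i - theta) dtheta
# for the logistic location family (assumed; chosen so E|X_1| < infinity).
rng = np.random.default_rng(4)
theta0, n = 0.0, 100
x = rng.logistic(theta0, 1.0, n)

grid = np.linspace(x.mean() - 2.0, x.mean() + 2.0, 2001)
z = x[:, None] - grid
loglik = (-z - 2.0 * np.log1p(np.exp(-z))).sum(axis=0)  # logistic log density
w = np.exp(loglik - loglik.max())
pitman = (grid * w).sum() / w.sum()                     # posterior mean

print(pitman)   # close to theta0; asymptotic variance 1/(n I_f) = 3/n
```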
This result can be generalized further. Rather than requiring the posterior density π̃ to be proper with finite expectation after a fixed number n0 of observations, it is enough to assume that it satisfies these conditions for all n ≥ n0 when (X1, ..., Xn0) ∈ Sn0, where P(Sn0) → 1 as n0 → ∞ (Problem 8.7).

Example 8.6 Binomial. Let Xi be independent, taking on the values 1 and 0 with probability p and q = 1 − p, respectively, and let π(p) = 1/pq. Then, the posterior distribution of p will be proper (and will then automatically have finite expectation) as soon as 0 < ΣXi < n, but not before. Since for any 0 < p < 1 the probability of this event tends to 1 as n → ∞, the asymptotic efficiency of the Bayes estimator follows. ∥

Theorem 8.3 provides additional support for the suggestion, made in Section 4.1, that Bayes estimation constitutes a useful method for generating estimators. However, the theorem is unfortunately of no help in choosing among different Bayes estimators, since all prior distributions satisfying assumptions (B4) and (B5) lead to the same asymptotic behavior. In fact, if θ̃n and θ̃′n are Bayes estimators corresponding to two different prior distributions Λ and Λ′ satisfying (B4) and (B5), (8.17) implies the even stronger statement

√n(θ̃′n − θ̃n) →P 0.   (8.19)

Nevertheless, the interpretation of θ as a random variable with density π(θ) leads to some suggestions concerning the choice of π. Theorem 8.2 showed that the posterior distribution of θ, given the observations, eventually becomes a normal distribution which is concentrated near the true θ0 and which is independent of π. It is intuitively plausible that a close approximation to the asymptotic result will tend to be achieved more quickly (i.e., for smaller n) if π assigns a relatively high probability to the neighborhood of θ0 than if this probability is very small. A minimax approach thus leads to the suggestion of a uniform assignment of prior density. It is clear what this means for a location parameter but not in general, since the parameterization is arbitrary and reparametrization destroys uniformity. In addition, it seems plausible that account should also be taken of the relative informativeness of the observations corresponding to different parameter values. As discussed in Section 4.1, proposals for prior distributions satisfying such criteria have been made (from a somewhat different point of view) by Jeffreys and others. For details, further suggestions, and references, see Box and Tiao 1973, Jaynes 1979, Berger and Bernardo 1989, 1992a, 1992b, and Robert 1994a, Section 3.4.

When the likelihood equation has a unique root θ̂n (which with probability tending to 1 is then the MLE), this estimator has a great practical advantage over the Bayes estimators which share its asymptotic properties. It provides a unique estimating procedure, applicable to a large class of problems, which is supported (partly because of its intuitive plausibility and partly for historical reasons) by a substantial proportion of the statistical profession. This advantage is less clear in the case of multiple roots, where asymptotically efficient likelihood estimators such as the one-step estimator (4.11) depend on a somewhat arbitrary initial estimator and need no longer agree with the MLE even for large n. In the multiparameter case, calculation of Bayes estimators often requires the computationally inconvenient evaluation of multiple integrals. However, this difficulty can often be overcome through Gibbs sampling or other Markov chain Monte Carlo algorithms; see Section 4.5.
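As a small indication of the Monte Carlo alternative just mentioned (this is an illustrative random-walk Metropolis sketch, not the text's method; the Cauchy prior and all tuning constants are assumptions for the demo):

```python
import numpy as np

# Minimal random-walk Metropolis sketch of a posterior-mean computation:
# data iid N(theta, 1), prior theta ~ Cauchy(0, 1) (both assumed).
rng = np.random.default_rng(11)
x = rng.normal(1.0, 1.0, 200)

def log_post(th):
    return -0.5 * ((x - th) ** 2).sum() - np.log1p(th ** 2)

th, draws = 0.0, []
for _ in range(20_000):
    prop = th + 0.2 * rng.normal()            # random-walk proposal
    if np.log(rng.random()) < log_post(prop) - log_post(th):
        th = prop
    draws.append(th)
print(np.mean(draws[2_000:]))                 # approximate Bayes estimate
```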
To resolve the problem raised by the profusion of asymptotically efficient estimators, it seems natural to carry the analysis one step further and to take into account terms (for example, in an asymptotic expansion of the distribution of the estimator) of order 1/n or 1/n^(3/2). Investigations along these lines have been undertaken by Rao (1961), Peers (1965), Ghosh and Subramanyam (1974), Efron (1975, 1982a), Pfanzagl and Wefelmeyer (1978-1979), Tibshirani (1989), Ghosh and Mukerjee (1991, 1992, 1993), Barndorff-Nielsen and Cox (1994), and Datta and Ghosh (1995) (see also Section 6.4). They are complicated by the fact that to this order, the estimators tend to be biased and their efficiencies can be improved by removing these biases. For an interesting discussion of these issues, see Berkson (1980). The subject still requires further study.

We conclude this section by proving that the quantities J1 [defined by (8.13)] and J′1 tend to zero in probability. For this purpose, it is useful to obtain the following alternative expression for ω(t).

Lemma 8.7 The quantity ω(t), defined by (8.11), is equal to

ω(t) = −I(θ0)t²/2 − (1/2n) Rn(Tn + t/√n) [t + l′(θ0)/(I(θ0)√n)]²   (8.20)

where Rn is the function defined in (8.1) (Problem 8.9).

Proof for J1. To prove that the integral (8.13) tends to zero in probability, divide the range of integration into the three parts (i) |t| ≤ M, (ii) |t| ≥ δ√n, and (iii) M < |t| < δ√n, and show that the integral over each of the three tends to zero in probability.

(i) |t| ≤ M. To prove this result, we shall show that for every 0 < M < ∞,

sup |e^ω(t) π(Tn + t/√n) − e^(−I(θ0)t²/2) π(θ0)| →P 0,   (8.21)

where here and throughout the proof of (i), the sup is taken over |t| ≤ M. The result will follow from (8.21) since the range of integration is bounded. Substituting the expression (8.20) for ω(t), (8.21) is seen to follow from the following two facts (Problem 8.10):

sup |(1/n) Rn(Tn + t/√n)| [t + l′(θ0)/(I(θ0)√n)]² →P 0   (8.22)

and

sup |π(Tn + t/√n) − π(θ0)| →P 0.   (8.23)

The second of these is obvious from the continuity of π and the fact that (Problem 8.11)

Tn →P θ0.   (8.24)

To prove (8.22), it is enough to show that

sup |(1/n) Rn(Tn + t/√n)| →P 0   (8.25)

and

(1/I(θ0)) (1/√n) l′(θ0) is bounded in probability.   (8.26)

Of these, (8.26) is clear from (B1) and the central limit theorem. To see (8.25), note that |t| ≤ M implies

Tn − M/√n ≤ Tn + t/√n ≤ Tn + M/√n

and hence, by (8.24), that for any δ > 0, the probability of

θ0 − δ ≤ Tn + t/√n ≤ θ0 + δ

will be arbitrarily close to 1 for sufficiently large n. The result now follows from (B2).

(ii) M ≤ |t| ≤ δ√n. For this part, it is enough to prove that for |t| ≤ δ√n, the integrand of J1 is bounded by an integrable function with probability ≥ 1 − ε. Then, the integral can be made arbitrarily small by choosing a sufficiently large M. Since the second term of the integrand of (8.13) is integrable, it is enough to show that such an integrable bound exists for the first term. More precisely, we shall show that given ε > 0, there exist δ > 0 and C < ∞ such that for sufficiently large n,

P[ e^ω(t) π(Tn + t/√n) ≤ C e^(−t²I(θ0)/4) for all |t| ≤ δ√n ] ≥ 1 − ε.   (8.27)
The factor π(Tn + t/√n) causes no difficulty by (8.24) and the continuity of π, so that it remains to establish such a bound for

exp ω(t) ≤ exp{ −(t²/2)I(θ0) + (1/n)|Rn(Tn + t/√n)| [t² + (l′(θ0))²/(nI²(θ0))] }.   (8.28)

For this purpose, note that |t| ≤ δ′√n implies Tn − δ′ ≤ Tn + t/√n ≤ Tn + δ′ and hence, by (8.24), that with probability arbitrarily close to 1, for n sufficiently large, |t| ≤ δ′√n implies

|Tn + t/√n − θ0| ≤ 2δ′.

By (B2), there exists δ′ such that the latter inequality implies

P[ sup(|t| ≤ δ′√n) |(1/n) Rn(Tn + t/√n)| ≤ I(θ0)/4 ] ≥ 1 − ε.

Combining this fact with (8.26), we see that the right side of (8.28) is ≤ C′e^(−t²I(θ0)/4) for all t satisfying (ii), with probability arbitrarily close to 1, and this establishes (8.27).

(iii) |t| ≥ δ√n. As in (ii), the second term in the integrand of (8.13) can be neglected, and it is enough to show that for all δ,

∫(|t| ≥ δ√n) e^ω(t) π(Tn + t/√n) dt = √n ∫(|θ − Tn| ≥ δ) π(θ) exp{ l(θ) − l(θ0) − [l′(θ0)]²/(2nI(θ0)) } dθ →P 0.   (8.29)

From (8.24) and (B3), it is seen that given δ, there exists ε such that

sup(|θ − Tn| ≥ δ) e^(l(θ) − l(θ0)) ≤ e^(−nε)

with probability tending to 1. By (8.26), the right side of (8.29) is therefore bounded above by

C√n e^(−nε) ∫π(θ) dθ = C√n e^(−nε)   (8.30)

with probability tending to 1, and this completes the proof of (iii).

To prove (8.13), let us now combine (i)-(iii). Given ε > 0 and δ > 0, choose M so large that

∫(M to ∞) { C e^(−t²I(θ0)/4) + e^(−t²I(θ0)/2) π(θ0) } dt ≤ ε/3,   (8.31)

and, hence, that for sufficiently large n, the integral (8.13) over (ii) is ≤ ε/3 with probability ≥ 1 − ε. Next, choose n so large that the integrals (8.13) over (i) and over (iii) are also ≤ ε/3 with probability ≥ 1 − ε. Then, P[J1 ≤ ε] ≥ 1 − 3ε, and this completes the proof of (8.13).

The proof for J′1 requires only trivial changes. In part (i), the factor [1 + |t|] is bounded, so that the proof continues to apply. In part (ii), multiplication of the integrand of (8.31) by [1 + |t|] does not affect its integrability, and the proof goes through as before. Finally, in part (iii), the integral in (8.30) must be replaced by

Cn e^(−nε) ∫|θ|π(θ) dθ,

which is finite by (B5).

9 Problems

Section 1

1.1 Let X1, ..., Xn be iid with E(Xi) = ξ.
(a) If the Xi have a finite fourth moment, establish (1.3).
(b) For k a positive integer, show that E(X̄ − ξ)^(2k−1) and E(X̄ − ξ)^(2k), if they exist, are both O(1/n^k).
[Hint: Without loss of generality, let ξ = 0 and note that E(X(i1)^r1 X(i2)^r2 ···) = 0 if any of the r's is equal to 1.]

1.2 For fixed n, describe the relative error in Example 1.3 as a function of p.

1.3 Prove Theorem 1.5.

1.4 Let X1, ..., Xn be iid as N(ξ, σ²), σ² known, and let g(ξ) = ξ^r, r = 2, 3, 4. Determine, up to terms of order 1/n, (a) the variance of the UMVU estimator of g(ξ); (b) the bias of the MLE of g(ξ).

1.5 Let X1, ..., Xn be iid as N(ξ, σ²), ξ known. For even r, determine the variance of the UMVU estimator (2.2.4) of σ^r up to terms of order 1/n.

1.6 Solve the preceding problem for the case that ξ is unknown.

1.7 For estimating p^m in Example 3.3.1, determine, up to order 1/n, (a) the variance of the UMVU estimator (2.3.2); (b) the bias of the MLE.

1.8 Solve the preceding problem if p^m is replaced by the estimand of Problem 2.3.3.

1.9 Let X1, ..., Xn be iid as Poisson P(θ).
(a) Determine the UMVU estimator of P(Xi = 0) = e^(−θ).
(b) Calculate the variance of the estimator of (a) up to terms of order 1/n.
[Hint: Write the estimator in the form (1.15), where h(X̄) is the MLE of e^(−θ).]
1.10 Solve part (b) of the preceding problem for the estimator (2.3.22).

1.11 Under the assumptions of Problem 1.1, show that E|X̄ − ξ|^(2k−1) = O(n^(−k+1/2)). [Hint: Use the fact that E|X̄ − ξ|^(2k−1) ≤ [E(X̄ − ξ)^(4k−2)]^(1/2) together with the result of Problem 1.1.]

1.12 Obtain a variant of Theorem 1.1 which requires existence and boundedness of only h‴ instead of h⁗, but where Rn is only O(n^(−3/2)). [Hint: Carry the expansion (1.6) only to the second instead of the third derivative, and apply Problem 1.11.]

1.13 To see that Theorem 1.1 is not necessarily valid without boundedness of the fourth (or some higher) derivative, suppose that the X's are distributed as N(ξ, σ²) and let h(x) = e^(x⁴). Then, all moments of the X's and all derivatives of h exist.
(a) Show that the expectation of h(X̄) does not exist for any n, and hence that E{√n[h(X̄) − h(ξ)]}² = ∞ for all values of n.
(b) On the other hand, show that √n[h(X̄) − h(ξ)] has a normal limit distribution with finite variance, and determine that variance.

1.14 Let X1, ..., Xn be iid from the exponential distribution with density (1/θ)e^(−x/θ), x > 0, θ > 0.
(a) Use Theorem 1.1 to find approximations to E(√X̄) and var(√X̄).
(b) Verify the exact calculation

var(√X̄) = [1 − (1/n)(Γ(n + 1/2)/Γ(n))²] θ

and show that lim(n→∞) n var(√X̄) = θ/4.
(c) Reconcile the results in parts (a) and (b). Explain why, even though Theorem 1.1 did not apply, it gave the correct answer.
(d) Show that a similar conclusion holds for h(x) = 1/x.
[Hint: For part (b), use the fact that T = ΣXi has a gamma distribution. The limit can be evaluated with Stirling's formula. It can also be evaluated with a computer algebra program.]
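Following the hint to Problem 1.14, the limit in part (b) is easy to evaluate by computer. A minimal sketch (θ = 1 assumed) using the exact Gamma-ratio formula:

```python
import math

# Numerical check of Problem 1.14(b) (assumed theta = 1): evaluate
# n * var(sqrt(Xbar)) from the exact formula and compare with theta/4.
theta = 1.0
for n in (10, 100, 1000, 10000):
    ratio = math.exp(math.lgamma(n + 0.5) - math.lgamma(n))  # Gamma(n+1/2)/Gamma(n)
    var = (1.0 - ratio**2 / n) * theta
    print(n, n * var)   # tends to theta/4 = 0.25
```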
1.15 Let X1, ..., Xn be iid according to U(0, θ). Determine the variance of the UMVU estimator of θ^k, where k is an integer, k > −n.

1.16 Under the assumptions of Problem 1.15, find the MLE of θ^k and compare its expected squared error with the variance of the UMVU estimator.

1.17 Let X1, ..., Xn be iid according to U(0, θ), let T = max(X1, ..., Xn), and let h be a function satisfying the conditions of Theorem 1.1. Show that

E[h(T)] = h(θ) − (θ/n)h′(θ) + (1/n²)[θh′(θ) + θ²h″(θ)] + O(1/n³)

and

var[h(T)] = (θ²/n²)[h′(θ)]² + O(1/n³).

1.18 Apply the results of Problem 1.17 to obtain approximate answers to Problems 1.15 and 1.16, and compare the answers with the exact solutions.

1.19 If the X's are as in Theorem 1.1 and if the first five derivatives of h exist and the fifth derivative is bounded, show that

E[h(X̄)] = h(ξ) + (σ²/2n)h″ + (1/24n²)[4h‴μ3 + 3h⁗σ⁴] + O(n^(−5/2))

and, if the fifth derivative of h² is also bounded,

var[h(X̄)] = h′²σ²/n + (1/n²)[h′h″μ3 + (h′h‴ + ½h″²)σ⁴] + O(n^(−5/2))

where μ3 = E(X − ξ)³. [Hint: Use the facts that E(X̄ − ξ)³ = μ3/n² and E(X̄ − ξ)⁴ = 3σ⁴/n² + O(1/n³).]

1.20 Under the assumptions of the preceding problem, carry the calculation of the variance (1.16) to terms of order 1/n², and compare the result with that of the preceding problem.

1.21 Carry the calculation of Problem 1.4 to terms of order 1/n².

1.22 For the estimands of Problem 1.4, calculate the expected squared error of the MLE to terms of order 1/n², and compare it with the variance calculated in Problem 1.21.

1.23 Calculate the variance (1.18) to terms of order 1/n² and compare it with the expected squared error of the MLE carried to the same order.

1.24 Find the variance of the estimator (2.3.17) up to terms of the order 1/n³.

1.25 For the situation of Example 1.12, show that the UMVU estimator δ1n is the bias-corrected MLE, where the MLE is δ3n.

1.26 For the estimators of Example 1.13: (a) Calculate their exact variances. (b) Use the result of part (a) to verify (1.27).

1.27 (a) Under the assumptions of Theorem 1.5, if all fourth moments of the Xiν are finite, show that E(X̄i − ξi)(X̄j − ξj) = σij/n and that all third and fourth moments E(X̄i − ξi)(X̄j − ξj)(X̄k − ξk), and so on, are of the order 1/n².
(b) If, in addition, all derivatives of h of total order ≤ 4 exist and those of order 4 are uniformly bounded, then

E[h(X̄1, ..., X̄s)] = h(ξ1, ..., ξs) + (1/2n) Σi Σj σij ∂²h(ξ1, ..., ξs)/∂ξi∂ξj + Rn,

and, if the derivatives of h² of order 4 are also bounded,

var[h(X̄1, ..., X̄s)] = (1/n) Σ σij (∂h/∂ξi)(∂h/∂ξj) + Rn,

where the remainder Rn in both cases is O(1/n²).

1.28 On the basis of a sample from N(ξ, σ²), let Pn(ξ, σ) be the probability that the UMVU estimator X̄² − σ²/n of ξ² (σ known) is negative.
(a) Show that Pn(ξ, σ) is a decreasing function of √n|ξ|/σ.
(b) Show that Pn(ξ, σ) → 0 as n → ∞ for any fixed ξ ≠ 0 and σ.
(c) Determine the value of Pn(0, σ).
[Hint: Pn(ξ, σ) = P[−1 − √nξ/σ < Y < 1 − √nξ/σ], where Y = √n(X̄ − ξ)/σ is distributed as N(0, 1).]

1.29 Use the t-distribution to find the value of Pn(0, σ) in the preceding problem for the UMVU estimator of ξ² when σ is unknown, for representative values of n.

1.30 Fill in the details of the proof of Theorem 1.9. (See also Problem 1.8.8.)

1.31 In Example 1.13 with θ = 0, show that δ2n is not exactly distributed as σ²(χ²1 − 1)/n.

1.32 In Example 1.13, let δ4n = max(0, X̄² − σ²/n), which is an improvement over δ1n.
(a) Show that √n(δ4n − θ²) has the same limit distribution as √n(δ1n − θ²) when θ ≠ 0.
(b) Describe the limit distribution of nδ4n when θ = 0.
[Hint: Write δ4n = δ1n + Rn and study the behavior of Rn.]

1.33 Let X have the binomial distribution b(p, n), and let g(p) = pq. The UMVU estimator of g(p) is δ = X(n − X)/n(n − 1). Determine the limit distribution of √n(δ − pq) and n(δ − pq) when g′(p) ≠ 0 and g′(p) = 0, respectively. [Hint: Consider first the limit behavior of δ′ = X(n − X)/n².]

1.34 Let X1, ..., Xn be iid as N(ξ, 1). Determine the limit behavior of the distribution of the UMVU estimator of p = P[|Xi| ≤ u].

1.35 Determine the limit behavior of the estimator (2.3.22) as n → ∞. [Hint: Consider first the distribution of log δ(T).]

1.36 Let X1, ..., Xn be iid with distribution Pθ, and suppose δn is UMVU for estimating g(θ) on the basis of X1, ..., Xn. If there exist n0 and an unbiased estimator δ0(X1, ..., Xn0) which has finite variance for all θ, then δn is consistent for g(θ). [Hint: For n = kn0 (with k an integer), compare δn with the estimator (1/k){δ0(X1, ..., Xn0) + δ0(Xn0+1, ..., X2n0) + ···}.]

1.37 Let Yn be distributed as N(0, 1) with probability πn and as N(0, τn²) with probability 1 − πn. If τn → ∞ and πn → π, determine for what values of π the sequence {Yn} does and does not have a limit distribution.

1.38 (a) In Problem 1.37, determine to what values var(Yn) can tend as n → ∞ if πn → 1 and τn → ∞ but otherwise both are arbitrary.
(b) Use (a) to show that the limit of the variance need not agree with the variance of the limit distribution.
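The phenomenon in Problems 1.37-1.38 is easy to exhibit by simulation. In this sketch, πn = 1 − 1/n and τn = n are assumed choices, for which Yn converges in law to N(0, 1) while var(Yn) = πn + (1 − πn)τn² → ∞:

```python
import numpy as np

# Illustration for Problems 1.37-1.38 (assumed: pi_n = 1 - 1/n, tau_n = n).
# Y_n -> N(0, 1) in law, yet var(Y_n) = (1 - 1/n) + n -> infinity.
rng = np.random.default_rng(5)
for n in (10, 100, 1000):
    pi_n, tau_n = 1.0 - 1.0 / n, float(n)
    mix = rng.random(100_000) < pi_n
    y = np.where(mix, rng.normal(0, 1, 100_000), rng.normal(0, tau_n, 100_000))
    # variance blows up while the bulk of the sample stays standard normal
    print(n, y.var(), np.mean(np.abs(y) <= 1.0))  # last column near 0.683
```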
1.39 Let b(m,n), m, n = 1, 2, ..., be a double sequence of real numbers which for each fixed m is nondecreasing in n. Show that

lim(n→∞) lim(m→∞) b(m,n) = lim inf(m,n→∞) b(m,n) and lim(m→∞) lim(n→∞) b(m,n) = lim sup(m,n→∞) b(m,n),

provided the indicated limits exist (they may be infinite), where lim inf b(m,n) and lim sup b(m,n) denote, respectively, the smallest and the largest limit points attainable by a sequence b(mk,nk), k = 1, 2, ..., with mk → ∞ and nk → ∞.

Section 2

2.1 Let X1, ..., Xn be iid as N(θ, 1). Consider the two estimators

Tn = X̄n if Sn ≤ an, Tn = n if Sn > an,

where Sn = Σ(Xi − X̄)², P(Sn > an) = 1/n, and T′n = (X1 + ··· + Xkn)/kn with kn the largest integer ≤ √n.
(a) Show that the asymptotic efficiency of T′n relative to Tn is zero.
(b) Show that for any fixed ε > 0,

P[|Tn − θ| > ε] = 1/n + o(1/n), but P[|T′n − θ| > ε] = o(1/n).

(c) For large values of n, what can you say about the two probabilities in part (b) when ε is replaced by a/√n? (Basu 1956)

2.2 If kn[δn − g(θ)] →L H for some sequence kn, show that the same result holds if kn is replaced by k′n, where kn/k′n → 1.

2.3 Assume that the distribution of Yn = √n(δn − g(θ)) converges to a distribution with mean 0 and variance v(θ). Use Fatou's lemma (Lemma 1.2.6) to establish that varθ(δn) → 0 for all θ.

2.4 If X1, ..., Xn are a sample from a one-parameter exponential family (1.5.2), then ΣT(Xi) is minimal sufficient and E[(1/n)ΣT(Xi)] = (∂/∂η)A(η) = τ. Show that for any function g(·) for which Theorem 1.8.12 holds, g((1/n)ΣT(Xi)) is asymptotically unbiased for g(τ).

2.5 If X1, ..., Xn are iid N(μ, σ²), show that S^r = [1/(n − 1) Σ(xi − x̄)²]^(r/2) is an asymptotically unbiased estimator of σ^r.

2.6 Let X1, ..., Xn be iid as U(0, θ). From Example 2.1.14, δn = (n + 1)X(n)/n is the UMVU estimator of θ, whereas the MLE is X(n). Determine the limit distribution of (a) n[θ − δn] and (b) n[θ − X(n)]. Comment on the asymptotic bias of these estimators. [Hint: P(X(n) ≤ y) = y^n/θ^n for any 0 < y < θ.]

2.7 For the situation of Problem 2.6:
(a) Calculate the mean squared errors of both δn and X(n) as estimators of θ.
(b) Show that lim(n→∞) E(X(n) − θ)² / E(δn − θ)² = 2.

2.8 Verify the asymptotic distribution claimed for δn in Example 2.5.

2.9 Let δn be any estimator satisfying (2.2) with g(θ) = θ. Construct a sequence δ′n such that √n(δ′n − θ) →L N[0, w²(θ)] with w(θ) = v(θ) for θ ≠ θ0 and w(θ0) = 0.

2.10 In the preceding problem, construct δ′n such that w(θ) = v(θ) for all θ ≠ θ0 and θ1, and w(θ) < v(θ) for θ = θ0 and θ1.

2.11 Construct a sequence {δn} satisfying (2.2) but for which the bias bn(θ) does not tend to zero.

2.12 In Example 2.7 with Rn(θ) given by (2.11), show that Rn(θ) → 1 for θ ≠ 0 and that Rn(0) → a².

2.13 Let bn(θ) = Eθ(δn) − θ be the bias of the estimator δn of Example 2.5.
(a) Show that

bn(θ) = −[(1 − a)/√n] ∫(−n^(1/4) to n^(1/4)) x φ(x − √n θ) dx;

(b) Show that b′n(θ) → 0 for any θ ≠ 0 and b′n(0) → −(1 − a).
(c) Use (b) to explain how the Hodges estimator δn can violate (2.7) without violating the information inequality.

2.14 In Example 2.7, show that if θn = c/√n, then Rn(θn) → a² + c²(1 − a)².
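The limit in Problem 2.14 can be checked by simulation. In this sketch, the Hodges estimator is taken in the assumed form δn = aX̄ if |X̄| ≤ n^(−1/4) and δn = X̄ otherwise, with a = 0 and c = 2 chosen for the demo:

```python
import numpy as np

# Simulation sketch of the Hodges estimator (assumed form: delta_n = a*Xbar
# if |Xbar| <= n**-0.25, else Xbar, with a = 0).  Along theta_n = c/sqrt(n)
# the normalized risk tends to a^2 + c^2(1-a)^2 (Problem 2.14), here 4,
# which is worse than the risk 1 of Xbar itself.
rng = np.random.default_rng(6)
a, c, n, reps = 0.0, 2.0, 10_000, 200_000

theta_n = c / np.sqrt(n)
xbar = rng.normal(theta_n, 1.0 / np.sqrt(n), reps)
hodges = np.where(np.abs(xbar) <= n ** -0.25, a * xbar, xbar)

print(n * np.mean((hodges - theta_n) ** 2))  # ~ a^2 + c^2(1-a)^2 = 4
print(n * np.mean((xbar - theta_n) ** 2))    # ~ 1
```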
Section 3

3.1 Let X have the binomial distribution b(p, n), 0 ≤ p ≤ 1. Determine the MLE of p (a) by the usual calculus method of determining the maximum of a function; (b) by showing that p^x q^(n−x) ≤ (x/n)^x [(n − x)/n]^(n−x). [Hint: (b) Apply the fact that the geometric mean is equal to or less than the arithmetic mean to n numbers of which x are equal to np/x and n − x are equal to nq/(n − x).]

3.2 In the preceding problem, show that the MLE does not exist when p is restricted to 0 < p < 1 and when x = 0 or x = n.

3.3 Let X1, ..., Xn be iid according to N(ξ, σ²). Determine the MLE of (a) ξ when σ is known, (b) σ when ξ is known, and (c) (ξ, σ) when both are unknown.

3.4 Suppose X1, ..., Xn are iid as N(ξ, 1) with ξ > 0. Show that the MLE is X̄ when X̄ > 0 and does not exist when X̄ ≤ 0.

3.5 Let X take on the values 0 and 1 with probabilities p and q, respectively. When it is known that 1/3 ≤ p ≤ 2/3, (a) find the MLE and (b) show that the expected squared error of the MLE is uniformly larger than that of δ(x) = 1/2. [A similar estimation problem arises in randomized response surveys. See Example 5.2.2.]

3.6 When Ω is finite, show that the MLE is consistent if and only if it satisfies (3.2).

3.7 Show that Theorem 3.2 remains valid if assumption A1 is relaxed to A1′: There is a nonempty set Ω0 ⊂ Ω such that θ0 ∈ Ω0 and Ω0 is contained in the support of each Pθ.

3.8 Prove the existence of unique 0 < ak < ak−1, k = 1, 2, ..., satisfying (3.4).

3.9 Prove (3.9).

3.10 In Example 3.6 with 0 < c < 1/2, determine a consistent estimator of k. [Hint: (a) The smallest value K of j for which Ij contains at least as many of the X's as any other of the Ij is consistent. (b) The value of j for which Ij contains the median of the X's is consistent, since the median of fk is in Ik.]

3.11 Verify the nature of the roots in Example 3.9.

3.12 Let X be distributed as N(θ, 1). Show that, conditionally given a < X < b, the variable X tends in probability to b as θ → ∞.

3.13 Consider a sample X1, ..., Xn from a Poisson distribution conditioned to be positive, so that P(Xi = x) = θ^x e^(−θ)/[x!(1 − e^(−θ))] for x = 1, 2, .... Show that the likelihood equation has a unique root for all values of x.

3.14 Let X have the negative binomial distribution (2.3.3). Find an ELE of p.

3.15 (a) A density function is strongly unimodal, or equivalently log concave, if log f(x) is a concave function. Show that such a density function has a unique mode.
(b) Let X1, ..., Xn be iid with density f(x − θ). Show that the likelihood equation has a unique root if f′(x)/f(x) is monotone, and the root is a maximum if f′(x)/f(x) is decreasing. Hence, densities that are log concave yield unique MLEs.
(c) Let X1, ..., Xn be positive random variables (or symmetrically distributed about zero) with joint density a^n ∏f(axi), a > 0. Show that the likelihood equation has a unique maximum if xf′(x)/f(x) is strictly decreasing for x > 0.
(d) If X1, ..., Xn are iid with density f(xi − θ), where f is unimodal, and if the likelihood equation has a unique root, show that the likelihood equation also has a unique root when the density of each Xi is af[a(xi − θ)], with a known.

3.16 For each of the following densities f(·), determine if (a) it is strongly unimodal and (b) xf′(x)/f(x) is strictly decreasing for x > 0. Hence, comment on whether the respective location and scale parameters have unique MLEs:
(a) f(x) = (1/√(2π)) e^(−x²/2), −∞ < x < ∞ (normal)
(b) f(x) = (1/√(2π)) (1/x) e^(−(log x)²/2), 0 ≤ x < ∞ (lognormal)
(c) f(x) = e^(−x)/(1 + e^(−x))², −∞ < x < ∞ (logistic)
(d) f(x) = [Γ((ν + 1)/2) / (Γ(ν/2)√(νπ))] [1 + x²/ν]^(−(ν+1)/2), −∞ < x < ∞ (t with ν df)

3.17 If X1, ..., Xn are iid with density f(xi − θ) or af(axi) and f is the logistic density L(0, 1), the likelihood equation has unique solutions θ̂ and â in the location and the scale case, respectively. Determine the limit distribution of √n(θ̂ − θ) and √n(â − a).

3.18 In Problem 3.15(b), with f the Cauchy density C(0, a), the likelihood equation has a unique root â, and √n(â − a) →L N(0, 2a²).
3.19 If X1, ..., Xn are iid as C(θ, 1), then for any fixed n there is positive probability (a) that the likelihood equation has 2n − 1 roots and (b) that the likelihood equation has a unique root.
[Hint: (a) If the x's are sufficiently widely separated, the value of L′(θ) in the neighborhood of xi is dominated by the term (xi − θ)/[1 + (xi − θ)²]. As θ passes through xi, this term changes sign, so that the log likelihood has a local maximum near xi. (b) Let the x's be close together.]
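Both cases of Problem 3.19 are easy to see numerically. This sketch (the data values are assumed for the demo) counts sign changes of L′(θ) on a fine grid as a numerical stand-in for counting roots:

```python
import numpy as np

# Sketch for Problem 3.19: count roots of the Cauchy location likelihood
# equation L'(theta) = sum 2(x_i - theta)/(1 + (x_i - theta)^2) = 0
# by counting sign changes of L' on a fine grid.
def n_roots(x, grid):
    lp = (2 * (x[:, None] - grid) / (1 + (x[:, None] - grid) ** 2)).sum(axis=0)
    return int(np.sum(np.diff(np.sign(lp)) != 0))

wide = np.array([-50.0, 0.0, 50.0])   # widely separated: 2n - 1 = 5 roots
close = np.array([-0.1, 0.0, 0.1])    # close together: a unique root
grid = np.linspace(-80, 80, 200_001)
print(n_roots(wide, grid), n_roots(close, grid))   # typically 5 and 1
```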
3.20 If X1, ..., Xn are iid according to the gamma distribution Γ(θ, 1), the likelihood equation has a unique root. [Hint: Use Example 3.12. Alternatively, write down the likelihood and use the fact that Γ′(θ)/Γ(θ) is an increasing function of θ.]

3.21 Let X1, ..., Xn be iid according to a Weibull distribution with density

fθ(x) = θx^(θ−1) e^(−x^θ), x > 0, θ > 0,

which is not a member of the exponential, location, or scale family. Nevertheless, show that there is a unique interior maximum of the likelihood function.

3.22 Under the assumptions of Theorem 3.2, show that

[L(θ0 + 1/√n) − L(θ0) + ½I(θ0)] / √I(θ0)

tends in law to N(0, 1).

3.23 Let X1, ..., Xn be iid according to N(θ, aθ²), θ > 0, where a is a known positive constant.
(a) Find an explicit expression for an ELE of θ.
(b) Determine whether there exists an MRE estimator under a suitable group of transformations.
[This case was considered by Berk (1972).]

3.24 Check that the assumptions of Theorem 3.10 are satisfied in Example 3.12.

3.25 For X1, ..., Xn iid as DE(θ, 1), show that (a) the sample median is an MLE of θ and (b) the sample median is asymptotically normal with variance 1/n, the information inequality bound.

3.26 In Example 3.12, show directly that (1/n)ΣT(Xi) is an asymptotically efficient estimator of θ = Eη[T(X)] by considering its limit distribution.

3.27 Let X1, ..., Xn be iid according to θg(x) + (1 − θ)h(x), where (g, h) is a pair of specified probability densities with respect to μ, and where 0 < θ < 1.
(a) Give one example of (g, h) for which the assumptions of Theorem 3.10 are satisfied and one for which they are not.
(b) Discuss the existence and nature of the roots of the likelihood equation for n = 1, 2, 3.

3.28 Under the assumptions of Theorem 3.7, suppose that θ̂1n and θ̂2n are two consistent sequences of roots of the likelihood equation. Prove that Pθ0(θ̂1n = θ̂2n) → 1 as n → ∞.
[Hint: (a) Let Sn = {x : x = (x1, ..., xn) such that θ̂1n(x) ≠ θ̂2n(x)}. For all x ∈ Sn, there exists θ*n between θ̂1n and θ̂2n such that L″(θ*n) = 0. For all x ∉ Sn, let θ*n be the common value of θ̂1n and θ̂2n. Then, θ*n is a consistent sequence of roots of the likelihood equation.
(b) (1/n)L″(θ*n) − (1/n)L″(θ0) → 0 in probability, and therefore (1/n)L″(θ*n) → −I(θ0) in probability.
(c) Let 0 < ε < I(θ0) and let S′n = {x : (1/n)L″(θ*n) < −I(θ0) + ε}. Then, Pθ0(S′n) → 1. On the other hand, L″(θ*n) = 0 on Sn, so that Sn is contained in the complement of S′n (Huzurbazar 1948).]

3.29 To establish the measurability of the sequence of roots θ̂*n of Theorem 3.7, we can follow the proof of Serfling (1980, Section 4.2.2), where the measurability of a similar sequence is proved.
(a) For definiteness, define θ̂n(a) as the value that minimizes |θ̂ − θ0| subject to θ0 − a ≤ θ̂ ≤ θ0 + a and (∂/∂θ)l(θ|x)|θ=θ̂ = 0. Show that θ̂n(a) is measurable.
(b) Show that θ̂*n, the root closest to θ0, is measurable.
[Hint: For part (a), write the set {θ̂n(a) > t} as countable unions and intersections of measurable sets, using the fact that (∂/∂θ)l(θ|x) is continuous, and hence measurable.]

Section 4

4.1 Let

u(t) = c ∫(0 to t) e^(−1/[x(1−x)]) dx for 0 < t < 1, u(t) = 0 for t ≤ 0, u(t) = 1 for t ≥ 1.

Show that for a suitable c, the function u is continuous and infinitely differentiable for −∞ < t < ∞.

4.2 Show that the density (4.1) with Ω = (0, ∞) satisfies all conditions of Theorem 3.10 with the exception of (d) of Theorem 2.6.

4.3 Show that the density (4.4) with Ω = (0, ∞) satisfies all conditions of Theorem 3.10.

4.4 In Example 4.5, evaluate the estimators (4.8) and (4.14) for the Cauchy case, using for θ̃n the sample median.

4.5 In Example 4.7, show that l(θ) is concave.

4.6 In Example 4.7, if η = ξ, show how to obtain a √n-consistent estimator by equating sample and population second moments.

4.7 In Theorem 4.8, show that σ11 = σ12.

4.8 Without using Theorem 4.8, show in Example 4.13 that the EM sequence converges to the MLE.

4.9 Consider the following 12 observations from a bivariate normal distribution with parameters μ1 = μ2 = 0, σ1², σ2², ρ, where "*" represents a missing value:

x1: 1  1  −1  −1  2  2  −2  −2  *  *  *  *
x2: 1  −1  1  −1  *  *  *  *  2  2  −2  −2

(a) Show that the likelihood function has global maxima at ρ = ±1/2, σ1² = σ2² = 8/3, and a saddlepoint at ρ = 0, σ1² = σ2² = 5/2.
(b) Show that if an EM sequence starts with ρ = 0, then it remains at ρ = 0 for all subsequent iterations.
(c) Show that if an EM sequence starts with ρ bounded away from zero, it will converge to a maximum.
[This problem is due to Murray (1977) and is discussed by Wu (1983).]

4.10 Show that if the EM complete-data density f(y, z|θ) of (4.21) is in a curved exponential family, then the hypotheses of Theorem 4.12 are satisfied.

4.11 In the EM algorithm, calculation of the E-step, the expectation calculation, can be complicated. In such cases, it may be possible to replace the E-step by a Monte Carlo evaluation, creating the MCEM algorithm (Wei and Tanner 1990). Consider the following MCEM evaluation of Q(θ|θ̂(j), y): Given θ̂(k)(j),
(1) Generate Z1, ..., Zk, iid, from k(z|θ̂(k)(j), y);
(2) Let Q̂(θ|θ̂(k)(j), y) = (1/k) Σ(i=1 to k) log L(θ|y, zi),
and then calculate θ̂(k)(j+1) as the value that maximizes Q̂(θ|θ̂(k)(j), y).
(a) Show that Q̂(θ|θ̂(k)(j), y) → Q(θ|θ̂(j), y) as k → ∞.
(b) What conditions will ensure that L(θ̂(k)(j+1)|y) ≥ L(θ̂(k)(j)|y) for sufficiently large k? Are the hypotheses of Theorem 4.12 sufficient?

4.12 For the mixture distribution of Example 4.7, that is, Xi ~ θg(x) + (1 − θ)h(x), i = 1, ..., n, independent, where g(·) and h(·) are known, an EM algorithm can be used to find the ML estimator of θ. Let Z1, ..., Zn be indicator variables, where Zi indicates from which distribution Xi has been drawn, so

Xi|Zi = 1 ~ g(x), Xi|Zi = 0 ~ h(x).

(a) Show that the complete-data likelihood can be written

L(θ|x, z) = ∏(i=1 to n) [zi g(xi) + (1 − zi)h(xi)] θ^zi (1 − θ)^(1−zi).

(b) Show that E(Zi|θ, xi) = θg(xi)/[θg(xi) + (1 − θ)h(xi)] and hence that the EM sequence is given by

θ̂(j+1) = (1/n) Σ(i=1 to n) θ̂(j) g(xi) / [θ̂(j) g(xi) + (1 − θ̂(j))h(xi)].

(c) Show that θ̂(j) → θ̂, the ML estimator of θ.
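The EM iteration of Problem 4.12(b) is a one-line update. A runnable sketch (g = N(0, 1), h = N(3, 1), and θ = 0.6 are assumptions chosen for the demo):

```python
import numpy as np

# Sketch of the EM iteration of Problem 4.12(b) (assumed: g = N(0,1),
# h = N(3,1) densities, true theta = 0.6, n = 2000).
rng = np.random.default_rng(7)
theta_true, n = 0.6, 2_000
z = rng.random(n) < theta_true
x = np.where(z, rng.normal(0, 1, n), rng.normal(3, 1, n))

g = np.exp(-0.5 * x**2) / np.sqrt(2 * np.pi)          # g(x_i)
h = np.exp(-0.5 * (x - 3)**2) / np.sqrt(2 * np.pi)    # h(x_i)

theta = 0.5                    # arbitrary starting value
for _ in range(200):           # theta_(j+1) = mean of the posterior weights
    w = theta * g / (theta * g + (1 - theta) * h)     # E(Z_i | theta, x_i)
    theta = w.mean()
print(theta)                   # converges to the MLE, near theta_true
```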
4.13 For the situation of Example 4.10:
(a) Show that the M-step of the EM algorithm is given by

μ̂ = [Σ(i=1 to 4) Σ(j=1 to ni) yij + z1 + z3]/12,
α̂i = [Σ(j=1 to 2) yij + zi]/3 − μ̂ for i = 1, 3, and α̂i = [Σ(j=1 to 3) yij]/3 − μ̂ for i = 2, 4.

(b) Show that the E-step of the EM algorithm is given by

zi = E[Yi3 | μ = μ̂, αi = α̂i] = μ̂ + α̂i, i = 1, 3.

(c) Under the restriction Σi αi = 0, show that the EM sequence converges to α̂i = ȳi· − μ̂, where μ̂ = Σi ȳi·/4.
(d) Under the restriction Σi ni αi = 0, show that the EM sequence converges to α̂i = ȳi· − μ̂, where μ̂ = Σij yij/10.
(e) For a general one-way layout with a treatments and ni observations per treatment, show how to use the EM algorithm to augment the data so that each treatment has n observations. Write down the EM sequence, and show what it converges to under the restrictions of parts (c) and (d).
[The restrictions of parts (c) and (d) were encountered in Example 3.4.9, where they led, respectively, to an unweighted means analysis and a weighted means analysis.]

4.14 In the two-way layout (see Example 3.4.11), the EM algorithm can be very helpful in computing ML estimators in the unbalanced case. Suppose that we observe Yijk ~ N(ξij, σ²), i = 1, ..., I, j = 1, ..., J, k = 1, ..., nij, where ξij = μ + αi + βj + γij. The data will be augmented so that the complete data have n observations per cell.
(a) Show how to compute both the E-step and the M-step of the EM algorithm.
(b) Under the restriction Σi αi = Σj βj = Σi γij = Σj γij = 0, show that the EM sequence converges to the ML estimators corresponding to an unweighted means analysis.
(c) Under the restriction Σi ni· αi = Σj n·j βj = Σi ni· γij = Σj n·j γij = 0, show that the EM sequence converges to the ML estimators corresponding to a weighted means analysis.

4.15 For the one-way layout with random effects (Example 3.5.1), the EM algorithm is useful for computing ML estimates. (In fact, it is very useful in many mixed models; see Searle et al. 1992, Chapter 8.) Suppose we have the model

Xij = μ + Ai + Uij (j = 1, ..., ni, i = 1, ..., s),

where Ai and Uij are independent normal random variables with mean zero and variances σ²A and σ²U, respectively. To compute the ML estimates of μ, σ²A, and σ²U, it is typical to employ an EM algorithm using the unobservable Ai's as the augmented data. Write out both the E-step and the M-step, and show that the EM sequence converges to the ML estimators.

4.16 Maximum likelihood estimation in the probit model of Section 3.6 can be implemented using the EM algorithm. We observe independent Bernoulli variables X1, ..., Xn, which depend on unobservable variables Zi distributed independently as N(ζ, σ²), where

Xi = 0 if Zi ≤ u, Xi = 1 if Zi > u.

Assuming that u is known, we are interested in obtaining ML estimates of ζ and σ².
(a) Show that the likelihood function is p^(Σxi) (1 − p)^(n−Σxi), where p = P(Zi > u) = Φ((ζ − u)/σ).
(b) If we consider Z1, ..., Zn to be the complete data, the complete-data likelihood is

∏(i=1 to n) (1/(√(2π)σ)) e^(−(zi − ζ)²/(2σ²)),

and the expected complete-data log likelihood is

−(n/2) log(2πσ²) − (1/(2σ²)) Σ(i=1 to n) [E(Zi²|Xi) − 2ζE(Zi|Xi) + ζ²].

(c) Show that the EM sequence is given by

ζ̂(j+1) = (1/n) Σ(i=1 to n) ti(ζ̂(j), σ̂²(j)),
σ̂²(j+1) = (1/n) Σ(i=1 to n) vi(ζ̂(j), σ̂²(j)) − [(1/n) Σ(i=1 to n) ti(ζ̂(j), σ̂²(j))]²,

where ti(ζ, σ²) = E(Zi|Xi, ζ, σ²) and vi(ζ, σ²) = E(Zi²|Xi, ζ, σ²).
(d) Show that

E(Zi|Xi, ζ, σ²) = ζ + σHi((u − ζ)/σ), E(Zi²|Xi, ζ, σ²) = ζ² + σ² + σ(u + ζ)Hi((u − ζ)/σ),

where Hi(t) = φ(t)/[1 − Φ(t)] if Xi = 1, and Hi(t) = −φ(t)/Φ(t) if Xi = 0.
(e) Show that ζ̂(j) → ζ̂ and σ̂²(j) → σ̂², the ML estimates of ζ and σ².
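The probit EM steps of Problem 4.16(c)-(d) translate directly into code. In this sketch (u = 0 and true (ζ, σ) = (0.5, 1) are assumptions for the demo), note that the Bernoulli data identify only p = Φ((ζ − u)/σ), so the final check compares that ratio with the sample frequency:

```python
import math
import numpy as np

# Sketch of the probit EM iteration of Problem 4.16 (assumed: u = 0,
# true zeta = 0.5, sigma = 1, n = 5000).
rng = np.random.default_rng(8)
u, zeta_true, sigma_true, n = 0.0, 0.5, 1.0, 5_000
x = (rng.normal(zeta_true, sigma_true, n) > u).astype(float)

Phi = lambda t: 0.5 * (1 + math.erf(t / math.sqrt(2)))
phi = lambda t: math.exp(-0.5 * t * t) / math.sqrt(2 * math.pi)

zeta, s2 = 0.0, 2.0                    # arbitrary starting values
for _ in range(500):
    s = math.sqrt(s2)
    t0 = (u - zeta) / s
    H1, H0 = phi(t0) / (1 - Phi(t0)), -phi(t0) / Phi(t0)  # H_i(t) of (d)
    H = np.where(x == 1, H1, H0)
    t_i = zeta + s * H                                    # E(Z_i | X_i)
    v_i = zeta**2 + s2 + s * (u + zeta) * H               # E(Z_i^2 | X_i)
    zeta, s2 = t_i.mean(), v_i.mean() - t_i.mean() ** 2
print(zeta, s2, Phi((zeta - u) / math.sqrt(s2)), x.mean())  # last two agree
```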
4.17 Verify (4.30).

4.18 The EM algorithm can also be implemented in a Bayesian hierarchical model to find a posterior mode. Recall the model (4.5.5.1),

X|θ ~ f(x|θ), Θ|λ ~ π(θ|λ), Λ ~ γ(λ),

where interest would be in estimating quantities from π(θ|x). Since π(θ|x) = ∫π(θ, λ|x) dλ, where π(θ, λ|x) = π(θ|λ, x)π(λ|x), the EM algorithm is a candidate method for finding the mode of π(θ|x), where λ would be used as the augmented data.
(a) Define k(λ|θ, x) = π(θ, λ|x)/π(θ|x), and show that

log π(θ|x) = ∫log π(θ, λ|x) k(λ|θ*, x) dλ − ∫log k(λ|θ, x) k(λ|θ*, x) dλ.

(b) If the sequence {θ̂(j)} satisfies

max(θ) ∫log π(θ, λ|x) k(λ|θ(j), x) dλ = ∫log π(θ(j+1), λ|x) k(λ|θ(j), x) dλ,

show that log π(θ(j+1)|x) ≥ log π(θ(j)|x). Under what conditions will the sequence {θ̂(j)} converge to the mode of π(θ|x)?
(c) For the hierarchy

X|θ ~ N(θ, 1), Θ|λ ~ N(λ, 1), Λ ~ Uniform(−∞, ∞),

show how to use the EM algorithm to calculate the posterior mode of π(θ|x).

4.19 There is a connection between the EM algorithm and Gibbs sampling, in that both have their basis in Markov chain theory. One way of seeing this is to show that the incomplete-data likelihood is a solution to the integral equation of successive substitution sampling (see Problems 4.5.9-4.5.11), and that Gibbs sampling can then be used to calculate the likelihood function. If L(θ|y) is the incomplete-data likelihood and L(θ|y, z) is the complete-data likelihood, define

L*(θ|y) = L(θ|y) / ∫L(θ|y) dθ, L*(θ|y, z) = L(θ|y, z) / ∫L(θ|y, z) dθ.

(a) Show that L*(θ|y) is the solution to

L*(θ|y) = ∫[ ∫L*(θ|y, z) k(z|θ′, y) dz ] L*(θ′|y) dθ′,

where, as usual, k(z|θ, y) = L(θ|y, z)/L(θ|y).
(b) Show how the sequence θ(j) from the Gibbs iteration

θ(j) ~ L*(θ|y, z(j−1)), z(j) ~ k(z|θ(j), y),

will converge to a random variable with density L*(θ|y) as j → ∞. How can this be used to compute the likelihood function L(θ|y)?
[Using the functions L(θ|y, z) and k(z|θ, y), the EM algorithm will get us the ML estimator from L(θ|y), whereas the Gibbs sampler will get us the entire function. This likelihood implementation of the Gibbs sampler was used by Casella and Berger (1994) and is also described by Smith and Roberts (1993). A version of the EM algorithm, where the Markov chain connection is quite apparent, was given by Baum and Petrie (1966) and Baum et al. (1970).]

Section 5

5.1 (a) If a vector Yn in Es converges in probability to a constant vector a, and if h is a continuous function defined over Es, show that h(Yn) → h(a) in probability.
(b) Use (a) to show that the elements of ||Ajkn||⁻¹ tend in probability to the elements of B, as claimed in the proof of Lemma 5.2.
[Hint: (a) Apply Theorem 1.8.19 and Problem 1.8.13.]

5.2 (a) Show that (5.26) with the remainder term neglected has the same form as (5.15), and identify the Ajkn.
(b) Show that the resulting ajk of Lemma 5.2 are the same as those of (5.23).
(c) Show that the remainder term in (5.26) can be neglected in the proof of Theorem 5.3.

5.3 Let X1, ..., Xn be iid according to N(ξ, σ²).
(a) Show that the likelihood equations have a unique root.
(b) Show directly (i.e., without recourse to Theorem 5.1) that the MLEs ξ̂ and σ̂ are asymptotically efficient.

5.4 Let (X0, ..., Xs) have the multinomial distribution M(p0, ..., ps; n).
(a) Show that the likelihood equations have a unique root.
(b) Show directly that the MLEs p̂i are asymptotically efficient.

5.5 Prove Corollary 5.4.

5.6 Show that there exists a function f of two variables for which the equations ∂f(x, y)/∂x = 0 and ∂f(x, y)/∂y = 0 have a unique solution, and this solution is a local but not a global maximum of f.
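One classical function with the property required in Problem 5.6 (an assumed example, not taken from the text) is f(x, y) = 3x e^y − x³ − e^(3y), whose only critical point is (1, 0), a local maximum, even though f is unbounded above. A quick numerical verification:

```python
import numpy as np

# Numerical sketch for Problem 5.6 with the classical (assumed) example
# f(x, y) = 3x e^y - x^3 - e^(3y): unique critical point (1, 0), a local
# max, yet f -> infinity as x -> -infinity with y fixed.
def f(x, y):
    return 3 * x * np.exp(y) - x**3 - np.exp(3 * y)

# gradient: f_x = 3e^y - 3x^2 = 0 gives e^y = x^2 (so x > 0);
# f_y = 3x e^y - 3e^(3y) = 0 then forces x = 1, y = 0.
h = 1e-5
fx = (f(1 + h, 0) - f(1 - h, 0)) / (2 * h)   # ~ 0
fy = (f(1, h) - f(1, -h)) / (2 * h)          # ~ 0
print(fx, fy, f(1.0, 0.0))                   # critical point, f(1, 0) = 1
print(f(np.array([-10.0, -100.0]), 0.0))     # 969, 999699: no global max
```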
Section 6

6.1 In Example 6.1, show that the likelihood equations are given by (6.2) and (6.3).

6.2 In Example 6.1, verify Equation (6.4).

6.3 Verify (6.5).

6.4 If θ = (θ1, ..., θr, θr+1, ..., θs) and if

cov[∂L(θ)/∂θi, ∂L(θ)/∂θj] = 0

for any i ≤ r < j, then the asymptotic distribution of (θ̂1, ..., θ̂r) under the assumptions of Theorem 5.1 is unaffected by whether or not θr+1, ..., θs are known.

6.5 Let X1, ..., Xn be iid from a Γ(α, β) distribution with density [1/(Γ(α)β^α)] x^(α−1) e^(−x/β).
(a) Calculate the information matrix for the usual (α, β) parameterization.
(b) Write the density in terms of the parameters (α, μ) = (α, αβ). Calculate the information matrix for the (α, μ) parameterization and show that it is diagonal, and, hence, that the parameters are orthogonal.
(c) If the MLEs in part (a) are (α̂, β̂), show that (α̂, μ̂) = (α̂, α̂β̂). Thus, either model estimates the mean equally well.
(For the theory behind, and other examples of, parameter orthogonality, see Cox and Reid 1987.)

6.6 In Example 6.4, verify the MLEs ξ̂i and σ̂jk when the ξ's are unknown.

6.7 In Example 6.4, show that the Sjk given by (6.15) are independent of (X̄1, ..., X̄p) and have the same joint distribution as the statistics (6.13) with n replaced by n − 1. [Hint: Subject each of the p vectors (Xi1, ..., Xin) to the same orthogonal transformation, where the first row of the orthogonal matrix is (1/√n, ..., 1/√n).]

6.8 Verify the matrices (a) (6.17) and (b) (6.18).

6.9 Consider the situation leading to (6.20), where (Xi, Yi), i = 1, ..., n, are iid according to a bivariate normal distribution with E(Xi) = E(Yi) = 0, var(Xi) = var(Yi) = 1, and unknown correlation coefficient ρ.
(a) Show that the likelihood equation is a cubic for which the probability of a unique root tends to 1 as n → ∞. [Hint: For a cubic equation ax³ + 3bx² + 3cx + d = 0, let G = a²d − 3abc + 2b³ and H = ac − b². Then, the condition for a unique real root is G² + 4H³ > 0.]
(b) Show that if ρ̂n is a consistent solution of the likelihood equation, then it satisfies (6.20).
(c) Show that δ = ΣXiYi/n is a consistent estimator of ρ, that √n(δ − ρ) →L N(0, 1 + ρ²), and, hence, that δ is less efficient than the MLE of ρ.

6.10 Verify the limiting distribution asserted in (6.21).

6.11 Let X1, ..., Xn be iid according to the Poisson distribution P(λ). Find the ARE of δ2n = [No. of Xi = 0]/n to δ1n = e^(−X̄n) as estimators of e^(−λ).
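The ARE of Problem 6.11 can be previewed by simulation (λ = 1 and n = 500 are assumed for the demo; the answer being estimated is the standard ratio λ/(e^λ − 1)):

```python
import numpy as np

# Simulation sketch for Problem 6.11 (assumed: lambda = 1, n = 500): compare
# delta_1n = exp(-Xbar_n) with delta_2n = proportion of zeros as estimators
# of exp(-lambda).  The variance ratio estimates the ARE lam/(e^lam - 1).
rng = np.random.default_rng(9)
lam, n, reps = 1.0, 500, 10_000

x = rng.poisson(lam, size=(reps, n))
d1 = np.exp(-x.mean(axis=1))        # delta_1n, based on the MLE of lambda
d2 = (x == 0).mean(axis=1)          # delta_2n, the empirical frequency

print(np.var(d1) / np.var(d2))      # ~ lam / (e^lam - 1) = 0.582
print(lam / np.expm1(lam))
```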
6.12 Show that the efficiency (6.27) tends to 0 as |a − θ| → ∞.

6.13 For the situation of Example 6.9, consider, as another family of distributions, the contaminated normal mixture family suggested by Tukey (1960) as a model for observations which usually follow a normal distribution but where occasionally something goes wrong with the experiment or its recording, so that the resulting observation is a gross error. Under the Tukey model, the distribution function takes the form

Fτ,ε(t) = (1 − ε)Φ(t) + εΦ(t/τ).

That is, in the gross error cases, the observations are assumed to be normally distributed with the same mean θ but a different (larger) variance τ².⁹
(a) Show that if the Xi have distribution Fτ,ε(x − θ), the limiting distribution of δ2n is unchanged.
(b) Show that the limiting distribution of δ1n is normal with mean zero and variance [φ(a − θ)]²(1 − ε + ετ²).
(c) Compare the asymptotic relative efficiency of δ1n and δ2n.

⁹As has been pointed out by Stigler (1973), such models for heavy-tailed distributions had already been proposed much earlier in a forgotten work by Newcomb (1882, 1886).

6.14 Let X1, ..., Xn be iid as N(0, σ²).
(a) Show that δn = k Σ|Xi|/n is a consistent estimator of σ if and only if k = √(π/2).
(b) Determine the ARE of δn with k = √(π/2) with respect to the MLE √(ΣXi²/n).

6.15 Let X1, ..., Xn be iid with E(Xi) = θ, var(Xi) = 1, and E(Xi − θ)⁴ = μ4, and consider the unbiased estimators δ1n = (1/n)ΣXi² − 1 and δ2n = X̄n² − 1/n of θ².
(a) Determine the ARE e2,1 of δ2n with respect to δ1n.
(b) Show that e2,1 ≥ 1 if the Xi are symmetric about θ.
(c) Find a distribution for the Xi for which e2,1 < 1.

6.16 The property of asymptotic relative efficiency was defined (Definition 6.6) for estimators that converge to normality at rate √n. This definition, and Theorem 6.7, can be generalized to include other distributions and rates of convergence.

Theorem 9.1 Let {δin}, i = 1, 2, be two sequences of estimators of g(θ) such that

n^α [δin − g(θ)] →L τi T, α > 0, τi > 0, i = 1, 2,

where the distribution H of T has support on an interval −∞ ≤ A < B ≤ ∞ with strictly increasing cdf on (A, B). Then, the ARE of {δ2n} with respect to {δ1n} exists and is

e21 = lim(n2→∞) n1(n2)/n2 = (τ1/τ2)^(1/α).

6.17 In Example 6.10, show that the conditions of Theorem 5.1 are satisfied.

Section 7

7.1 Prove Theorem 7.1.

7.2 For the situation of Example 7.3 with m = n:
(a) Show that a necessary condition for (7.5) to converge to N(0, 1) is that √n(λ̂ − λ) → 0, where λ̂ = σ̂²/τ̂² and λ = σ²/τ², for σ̂² and τ̂² of (7.4).
(b) Use the fact that λ̂/λ has an F-distribution to show that √n(λ̂ − λ) does not tend to 0.
(c) Show that the full MLE is given by the solution to

ξ = [(m/σ²)X̄ + (n/τ²)Ȳ] / [m/σ² + n/τ²], σ² = (1/m)Σ(Xi − ξ)², τ² = (1/n)Σ(Yj − ξ)²,

and deduce its asymptotic efficiency from Theorem 5.1.

7.3 In Example 7.4, determine the joint distribution of (a) (σ̂², τ̂²) and (b) (σ̂², σ̂²A).

7.4 Consider samples (X1, Y1), ..., (Xm, Ym) and (X′1, Y′1), ..., (X′n, Y′n) from two bivariate normal distributions with means zero and variance-covariances (σ², τ², ρστ) and (σ′², τ′², ρ′σ′τ′), respectively. Use Theorem 7.1 and Examples 6.5 and 6.8 to find the limit distribution (a) of σ̂² and τ̂² when it is known that ρ′ = ρ, and (b) of ρ̂ when it is known that σ′ = σ and τ′ = τ.

7.5 In the preceding problem, find the efficiency gain (if any) (a) in part (a) resulting from the knowledge that ρ′ = ρ, and (b) in part (b) resulting from the knowledge that σ′ = σ and τ′ = τ.

7.6 Show that the likelihood equations (7.11) have at most one solution.

7.7 In Example 7.6, suppose that pi = 1 − F(α + βti) and that both log F(x) and log[1 − F(x)] are strictly concave. Then, the likelihood equations have at most one solution.

7.8 (a) If the cdf F is symmetric and if log F(x) is strictly concave, so is log[1 − F(x)].
(b) Show that log F(x) is strictly concave when F is strongly unimodal but not when F is Cauchy.

7.9 In Example 7.7, show that Yn is less informative than Y. [Hint: Let Zn be distributed as P(λ Σ(i=n+1 to ∞) γi) independently of Yn. Then, Yn + Zn is a sufficient statistic for λ on the basis of (Yn, Zn), and Yn + Zn has the same distribution as Y.]

7.10 Show that the estimator δn of Example 7.7 satisfies (7.14).

7.11 Find suitable normalizing constants for δn of Example 7.7 when (a) γi = i, (b) γi = i², and (c) γi = 1/i.
7.12 Let Xi (i = 1, ..., n) be independent normal with variance 1 and mean βti (with ti known). Discuss the estimation of β along the lines of Example 7.7.

7.13 Generalize the preceding problem to the situation in which (a) E(Xi) = α + βti and var(Xi) = 1, and (b) E(Xi) = α + βti and var(Xi) = σ², where α, β, and σ² are unknown parameters to be estimated.

7.14 Let Xj (j = 1, ..., n) be independently distributed with densities fj(xj|θ) (θ real-valued), let Ij(θ) be the information Xj contains about θ, and let Tn(θ) = Σ(j=1 to n) Ij(θ) be the total information about θ in the sample. Suppose that θ̂n is a consistent root of the likelihood equation L′(θ) = 0 and that, in generalization of (3.18)-(3.20),

(1/√Tn(θ0)) L′(θ0) →L N(0, 1), −L″(θ0)/Tn(θ0) →P 1, and L‴(θ*n)/Tn(θ0) is bounded in probability.

Show that √Tn(θ0) (θ̂n − θ0) →L N(0, 1).

7.15 Prove that the sequence X1, X2, ... of Example 7.8 is stationary, provided it satisfies (7.17).

7.16 (a) In Example 7.8, show that the likelihood equation has a unique solution, that it is the MLE, and that it has the same asymptotic distribution as δ′n = Σ(i=1 to n) Xi Xi+1 / Σ(i=1 to n) Xi².
(b) Show directly that δ′n is a consistent estimator of β.

7.17 In Example 7.8:
(a) Show that for j > 1, the expected value of the conditional information (given Xj−1) that Xj contains about β is 1/(1 − β²).
(b) Determine the information X1 contains about β.

7.18 When τ = σ in (7.21), show that the MLE exists and is consistent.

7.19 Suppose that in (7.21), the ξ's are themselves random variables, which are iid as N(μ, γ²).
(a) Show that the joint density of the (Xi, Yi) is that of a sample from a bivariate normal distribution, and identify the parameters of that distribution.
(b) In the model of part (a), find asymptotically efficient estimators of the parameters μ, γ, β, σ, and τ.

7.20 Verify the roots (7.22).

7.21 Show that the likelihood (7.21) is unbounded.

7.22 Show that if ρ is defined by (7.24), then ρ and ρ′ are everywhere continuous.

7.23 Let F have a differentiable density f and let ∫ψ²f < ∞.
(a) Use integration by parts to write the denominator of (7.27) as [∫ψ(x)f′(x) dx]².
(b) Show that σ²(F, ψ) ≥ [∫(f′/f)²f]⁻¹ = 1/If by applying the Schwarz inequality to part (a).

The following three problems investigate the technical conditions required for the consistency and asymptotic normality of M-estimators, as noted in (7.26).

7.24 To have consistency of M-estimators, a sufficient condition is that the root of the estimating function be unique and isolated. Establish the following theorem.

Theorem 9.2 Assume that conditions (A0)-(A3) hold. Let t0 be an isolated root of the equation Eθ0[ψ(X, t)] = 0, where ψ(·, t) is monotone in t and continuous in a neighborhood of t0. If T0(x) is a solution to Σ(i=1 to n) ψ(xi, t) = 0, then T0 converges to t0 in probability.

[Hint: The conditions on ψ imply that Eθ0[ψ(X, t)] is monotone, so t0 is a unique root. Adapt the proofs of Theorems 3.2 and 3.7 to complete this proof.]

7.25 Theorem 9.3 Under the conditions of Theorem 9.2, if, in addition,
(i) Eθ0[(∂/∂t)ψ(X, t)|t=t0] is finite and nonzero,
(ii) Eθ0[ψ²(X, t0)] < ∞,
then √n(T0 − t0) →L N(0, σ²T0), where

σ²T0 = Eθ0[ψ²(X, t0)] / (Eθ0[(∂/∂t)ψ(X, t)|t=t0])².

[Note that this is a slight generalization of (7.27).]
[Hint: The assumptions on ψ are enough to adapt the Taylor series argument of Theorem 3.10, where ψ takes the place of l′.]
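The variance formula of Theorem 9.3 is easy to estimate empirically. The following sketch computes a Huber location M-estimate and its sandwich variance (the clipping constant k = 1.345, the contaminated-normal data, and the fixed-point solver are all assumptions chosen for the demo; for the Huber ψ, |E(∂/∂t)ψ(X, t)| at t0 equals P(|X − t0| ≤ k)):

```python
import numpy as np

# Sketch of an M-estimate and the sandwich variance of Theorem 9.3,
# using the Huber psi(u) = u clipped to [-k, k] (k = 1.345 assumed)
# for location, with heavy-tailed mixture data.
rng = np.random.default_rng(10)
n = 5_000
x = np.where(rng.random(n) < 0.9, rng.normal(0, 1, n), rng.normal(0, 5, n))

k = 1.345
psi = lambda u: np.clip(u, -k, k)

t = np.median(x)                 # start at a sqrt(n)-consistent value
for _ in range(50):              # fixed point of sum psi(x_i - t) = 0
    t = t + psi(x - t).mean()

num = np.mean(psi(x - t) ** 2)           # estimates E psi^2(X, t0)
den = np.mean(np.abs(x - t) <= k)        # estimates |E d/dt psi| at t0
print(t, num / den**2 / n)               # estimate and its sandwich variance
```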
7.27 In Example 7.12, compare
(a) the asymptotic distributions of ξ̂ and δn;
(b) the normalized expected squared error of ξ̂ and δn.

7.28 In Example 7.12, show that (a) √n(b̂̂ − b) →_L N(0, b²) and (b) √n(b̂ − b) →_L N(0, b²).

7.29 In Example 7.13, show that
(a) ĉ and â are independent and have the stated distributions;
(b) X(1) and Σ log[Xi/X(1)] are complete sufficient statistics on the basis of a sample from (7.33).

7.30 In Example 7.13, determine the UMVU estimators of a and c, and the asymptotic distributions of these estimators.

7.31 In the preceding problem, compare
(a) the asymptotic distribution of the MLE and the UMVU estimator of c;
(b) the normalized expected squared error of these two estimators.

7.32 In Example 7.15, (a) verify equation (7.39), (b) show that the choice a = −2 produces the estimator with the best second-order efficiency, (c) show that the limiting risk ratio of the MLE (a = 0) to δn(a = −2) is 2, and (d) discuss the behavior of this estimator in small samples.

7.33 Let X1, . . . , Xn be iid according to the three-parameter lognormal distribution (7.37). Show that
(a) p*(x|ξ) = sup_{γ,σ²} p(x|ξ, γ, σ²) = c [σ̂(ξ)]⁻ⁿ Π 1/(xi − ξ), where p(x|ξ, γ, σ²) = Π_{i=1}^n f(xi|ξ, γ, σ²), σ̂²(ξ) = (1/n)Σ[log(xi − ξ) − γ̂(ξ)]², and γ̂(ξ) = (1/n)Σ log(xi − ξ);
(b) p*(x|ξ) → ∞ as ξ → x(1).
[Hint: (b) For ξ sufficiently near x(1), σ̂²(ξ) ≤ (1/n)Σ[log(xi − ξ)]² ≤ [log(x(1) − ξ)]², and hence p*(x|ξ) ≥ |log(x(1) − ξ)|⁻ⁿ Π(x(i) − ξ)⁻¹. The right side tends to infinity as ξ → x(1) (Hill 1963).]

7.34 The derivatives of all orders of the density (7.37) tend to zero as x → ξ.

Section 8

8.1 Determine the limit distribution of the Bayes estimator corresponding to squared error loss, and verify that it is asymptotically efficient, in each of the following cases:
(a) The observations X1, . . . , Xn are iid N(θ, σ²), with σ known, and the estimand is θ. The prior distribution for θ is a conjugate normal distribution, say N(µ, b²). (See Example 4.2.2.)
(b) The observations Yi have the gamma distribution Γ(γ, 1/τ), the estimand is 1/τ, and τ has the conjugate prior density Γ(g, α).
(c) The observations and prior are as in Problem 4.1.9, and the estimand is λ.
(d) The observations Yi have the negative binomial distribution (4.3), p has the prior density B(a, b), and the estimand is (a) p and (b) 1/p.

8.2 Referring to Example 8.1, consider, instead, the minimax estimator δn of p given by (1.11), which corresponds to the sequence of beta priors with a = b = √n/2. Then,
√n[δn − p] = √n(X/n − p) + [√n/(1 + √n)](1/2 − X/n).
(a) Show that the limit distribution of √n[δn − p] is N(1/2 − p, p(1 − p)), so that δn has the same asymptotic variance as X/n, but that for p ≠ 1/2, it is asymptotically biased.
(b) Show that the ARE of δn relative to X/n does not exist, except in the case p = 1/2 when it is 1.

8.3 The assumptions of Theorem 2.6 imply (8.1) and (8.2).

8.4 In Example 8.5, the posterior density of θ after one observation is f(x1 − θ); it is a proper density, and it satisfies (B5) provided E_θ|X1| < ∞.

8.5 Let X1, . . . , Xn be independent, positive variables, each with density (1/τ)f(xi/τ), and let τ have the improper density π(τ) = 1/τ (τ > 0). The posterior density after one observation is a proper density, and it satisfies (B5), provided E_τ(1/X1) < ∞.
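The asymptotic bias in Problem 8.2(a) is easy to see by simulation. A minimal sketch in Python (assuming NumPy; n, p, and the replication count are arbitrary choices):

```python
import numpy as np

# Problem 8.2 sketch: delta_n = (X + sqrt(n)/2) / (n + sqrt(n)) from the
# beta(sqrt(n)/2, sqrt(n)/2) priors.  sqrt(n)(delta_n - p) should center
# near 1/2 - p with variance near p(1 - p).

rng = np.random.default_rng(2)
n, p, reps = 10_000, 0.3, 5000

X = rng.binomial(n, p, size=reps)
delta = (X + np.sqrt(n) / 2) / (n + np.sqrt(n))
z = np.sqrt(n) * (delta - p)

print("mean of z:", z.mean(), " vs 1/2 - p =", 0.5 - p)
print("var  of z:", z.var(), " vs p(1-p)  =", p * (1 - p))
```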
8.6 Give an example in which the posterior density is proper (with probability 1) after two observations but not after one. [Hint: In the preceding example, let π(τ) = 1/τ².]

8.7 Prove the result stated preceding Example 8.6.

8.8 Let X1, . . . , Xn be iid as N(θ, 1) and consider the improper density π(θ) = e^{θ⁴}. Then, the posterior will be improper for all n.

8.9 Prove Lemma 8.7.

8.10 (a) If sup |Yn(t)| →_P 0 and sup |Xn(t) − c| →_P 0 as n → ∞, then sup |Xn(t) − c e^{Yn(t)}| →_P 0, where the sup is taken over a common set t ∈ T.
(b) Use (a) to show that (8.22) and (8.23) imply (8.21).

8.11 Show that (B1) implies (a) (8.24) and (b) (8.26).
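A quick numerical check of Problem 8.8, as a minimal sketch in Python (assuming NumPy; n and x̄ are invented): the quartic term in the prior eventually dominates the quadratic term in the likelihood, so the unnormalized posterior is not integrable.

```python
import numpy as np

# Problem 8.8 sketch: with X_1, ..., X_n iid N(theta, 1) and prior
# pi(theta) = exp(theta^4), the log of the unnormalized posterior is
# theta^4 - n(theta - xbar)^2 / 2, which eventually grows without bound
# as |theta| increases, for every fixed n; hence the posterior is improper.

n, xbar = 100, 0.0                       # invented values
theta = np.array([1.0, 5.0, 10.0, 20.0, 50.0])
log_post = theta**4 - n * (theta - xbar) ** 2 / 2
print(log_post)   # eventually increasing without bound as theta grows
```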
10 Notes

10.1 Origins
The origins of the concept of maximum likelihood go back to the work of Lambert, Daniel Bernoulli, and Lagrange in the second half of the eighteenth century, and of Gauss and Laplace at the beginning of the nineteenth. (For details and references, see Edwards 1974 or Stigler 1986.) The modern history begins with Edgeworth (1908, 1909) and Fisher (1922, 1925), whose contributions are discussed by Savage (1976) and Pratt (1976). Fisher's work was followed by a euphoric belief in the universal consistency and asymptotic efficiency of maximum likelihood estimators, at least in the iid case. The true situation was sorted out only gradually. Landmarks are Cramér (1946a, 1946b), who shifted the emphasis from the global to a local maximum and defined the "regular" case in which the likelihood equation has a consistent asymptotically efficient root; Wald (1949), who provided fairly general conditions for consistency; the counterexamples of Hodges (Le Cam, 1953) and Bahadur (1958); and Le Cam's resulting theorem on superefficiency (1953).

Convergence (under suitable restrictions and appropriately normalized) of the posterior distribution of a real-valued parameter with a prior distribution to its normal limit was first discovered by Laplace (1820) and later reobtained by Bernstein (1917) and von Mises (1931). More general versions of this result are given in Le Cam (1958). The asymptotic efficiency of Bayes solutions was established by Le Cam (1958), Bickel and Yahav (1969), and Ibragimov and Has'minskii (1972). (See also Ibragimov and Has'minskii 1981.)

Computation of likelihood estimators was influenced by the development of the EM algorithm (Dempster, Laird, and Rubin 1977). This algorithm grew out of work done on iterative computational methods that were developed in the 1950s and 1960s, and can be traced back at least as far as Hartley (1958). The EM algorithm has enjoyed widespread use as a computational tool for obtaining likelihood estimators in complex problems (see Little and Rubin 1987, Tanner 1996, or McLachlan and Krishnan 1997).

10.2 Alternative Conditions for Asymptotic Normality
The Cramér conditions for asymptotic normality and efficiency that are given in Theorems 3.10 and 5.1 are not the most general; for those, see Strasser 1985, Pfanzagl 1985, or Le Cam 1986. They were chosen because they have fairly wide applicability, yet are relatively straightforward to verify. In particular, it is possible to relax the assumptions somewhat and only require conditions on the second, rather than third, derivative (see Le Cam 1956, Hájek 1972, and Inagaki 1973). These conditions, however, are somewhat more involved to check than those of Theorem 3.10, which already require some effort.

The conditions have also been altered to accommodate specific features of a problem. One particular change was introduced by Daniels (1961) to overcome the nondifferentiability of the double exponential distribution (see Example 3.14). Huber (1967) notes an error in Daniels' proof; however, the validity of the theorem remains. Others have taken advantage of the form of the likelihood. Berk (1972b) exploited the fact that in exponential families, the cumulant generating function is convex. This, in turn, implies that the log likelihood is concave, which then leads to simpler conditions for consistency and asymptotic normality. Other proofs of existence and consistency under slightly different assumptions are given by Foutz (1977). Consistency proofs in more general settings were given by Wald (1949), Le Cam (1953), Bahadur (1967), Huber (1967), Perlman (1972), and Ibragimov and Has'minskii (1981), among others. See also Pfanzagl 1969, 1994, Landers 1972, Pfaff 1982, Wong 1992, Bickel et al. 1993, and Note 10.4. Another condition, which also eliminates the problem of superefficiency, is that of local asymptotic normality (Le Cam 1986; Strasser 1985, Section 81; Le Cam and Yang 1990; and Wong 1992).

10.3 Measurability and Consistency
Theorems 3.7 and 4.3 assert the existence of a consistent sequence of roots of the likelihood equation, that is, a sequence of roots that converges in probability to the true parameter value. The proof of Theorem 3.7 is a modification of those of Cramér (1946a, 1946b) and Wald (1949), where the latter established convergence almost everywhere of the sequence. In almost all cases, we are taught, convergence almost everywhere implies convergence in probability, but that is not so here because a sequence of roots need not be measurable! Happily, the θ*n of Theorem 3.7 are measurable (however, those of Theorem 4.3 are not necessarily). Serfling (1980, Section 4.2.2; see also Problem 3.29) addresses this point, as does Ferguson (1996, Section 17), who also notes that nonmeasurability does not preclude consistency. (We thank Professor R. Wijsman for alerting us to these measurability issues.)

10.4 Estimating Equations
Theorems 9.2 and 9.3 use assumptions similar to the original assumptions of Huber (1964; 1981, Section 3.2). Alternate conditions for consistency and asymptotic normality, which relax some smoothness requirements on ρ, have been developed by Boos (1979) and Boos and Serfling (1980); see also Serfling 1980, Chapter 7, for a detailed development of this topic. Further results can be found in Portnoy (1977a, 1984, 1985), and the discrete case is considered by Simpson, Carroll, and Ruppert (1987).

The theory of M-estimation, in particular results such as (7.26), has been generalized in many ways. In doing so, much has been learned about the properties of the functions ρ and ψ = ρ′ needed for the solution θ̃ of the equation Σ_{i=1}^n ψ(xi − θ) = 0 to have reasonable statistical properties. For example, the structure of the exponential family can be exploited to yield less restrictive conditions for consistency and asymptotic efficiency of θ̃. In particular, the concavity of the log likelihood plays an important role. Haberman (1989) gives a comprehensive treatment of consistency and asymptotic normality of estimators derived from maximizing concave functions (which include likelihood and M-estimators).

This approach to constructing estimators has become known as the theory of estimating functions [see, for example, Godambe 1991 or the review paper by Liang and Zeger (1994)]. A general estimating equation has the form Σ_{i=1}^n h(xi|θ) = 0, and consistency and asymptotic normality of the solution θ̃ can be established under quite general conditions (but also see Freedman and Diaconis 1982 or Lele 1994 for situations where this can go wrong). For example, if the estimating equation is unbiased, so that E_θ[Σ_{i=1}^n h(Xi|θ)] = 0 for all θ, then the "usual" regularity conditions (such as those in Problems 7.24-7.25 or Theorem 3.10) will imply that θ̃ is consistent. Asymptotic normality will also often follow, using a proof similar to that of Theorem 3.10, where the estimating function h is used instead of the log likelihood l. Carroll, Ruppert, and Stefanski (1995, Appendix A.3) provide a nice introduction to this topic.
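To make the preceding discussion concrete, here is a minimal sketch in Python (assuming NumPy and SciPy) of the plug-in "sandwich" estimate of the asymptotic variance E[ψ²]/(E[∂ψ/∂t])² from Theorem 9.3, applied to the Huber estimating function; the tuning constant and the sample are invented:

```python
import numpy as np
from scipy.optimize import brentq

def psi(u, k=1.345):                      # Huber psi; k is an invented choice
    return np.clip(u, -k, k)

rng = np.random.default_rng(3)
x = rng.standard_t(df=3, size=2000)       # invented heavy-tailed sample

# Solve the estimating equation sum(psi(x - t)) = 0.
t_hat = brentq(lambda t: psi(x - t).sum(), x.min(), x.max())

# Plug-in version of sigma^2 = E[psi^2] / (E[d psi / dt])^2, with the
# t-derivative of the mean of psi obtained by central differencing.
eps = 1e-5
dpsi = (psi(x - (t_hat + eps)) - psi(x - (t_hat - eps))).mean() / (2 * eps)
avar = np.mean(psi(x - t_hat) ** 2) / dpsi**2

print("t_hat:", t_hat)
print("estimated asymptotic variance:", avar)
print("approximate standard error   :", np.sqrt(avar / x.size))
```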
10.5 Variants of Likelihood
A large number of variants of the likelihood function have been proposed. Many started as a means of solving a particular problem and, as their usefulness and general effectiveness was realized, they were generalized. Although we cannot list all of these variants, we shall mention a few of them.

The first modifications of the usual likelihood function are primarily aimed at dealing with nuisance parameters. These include the marginal, conditional, and profile likelihoods, and the modified profile likelihood of Barndorff-Nielsen (1983). In addition, many of the modifications are accompanied by higher-order distribution approximations that result in faster convergence to the asymptotic distribution. These approximations may utilize techniques of small-sample asymptotics (conditioning on ancillaries, saddlepoint expansions) or possibly Bartlett corrections (Barndorff-Nielsen and Cox 1984).

Other modifications of likelihood may entail, perhaps, a more drastic variation of the likelihood function. The partial likelihood of Cox (1975; see also Oakes 1991) presents an effective means of dealing with censored data by dividing the model into parametric and nonparametric parts. Along these lines, quasi-likelihood (Wedderburn 1974, McCullagh and Nelder 1989, McCullagh 1991) is based only on moment assumptions, and empirical likelihood (Owen 1988, 1990, Hall and La Scala 1990) is a nonparametric approach based on a multinomial profile likelihood.

There are many other variations of likelihood, including directed, penalized, and extended, and the idea of predictive likelihood (Hinkley 1979, Butler 1986, 1989). An entry to this work can be obtained through Kalbfleisch (1986), Barndorff-Nielsen and Cox (1994), or Edwards (1992), the review articles of Hinkley (1980) and Bjørnstad (1990), or the volume of review articles edited by Hinkley, Reid, and Snell (1991).
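Of the variants above, the profile likelihood is the easiest to illustrate. A minimal sketch in Python (assuming NumPy; the data are invented) profiles the nuisance parameter σ² out of a N(µ, σ²) likelihood:

```python
import numpy as np

# Profile likelihood sketch: for fixed mu, the inner maximization over
# sigma^2 gives sigma_hat^2(mu) = mean((x - mu)^2), and the profiled
# log likelihood is -(n/2) * log(sigma_hat^2(mu)) up to an additive
# constant.  Its maximizer in mu is the sample mean.

rng = np.random.default_rng(4)
x = rng.normal(1.0, 2.0, size=200)        # invented sample

def profile_loglik(mu):
    return -0.5 * x.size * np.log(np.mean((x - mu) ** 2))

grid = np.linspace(x.mean() - 1, x.mean() + 1, 401)
values = np.array([profile_loglik(m) for m in grid])
print("profile maximizer:", grid[values.argmax()])
print("sample mean      :", x.mean())
```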
10.6 Boundary Values
A key feature throughout this chapter was the assumption that the true parameter point θ0 occurs at an interior point of the parameter space (Section 6.3, Assumption A3; Section 6.5, Assumption A). The effect of this assumption is that, for large n, as the likelihood estimator gets close to θ0, the likelihood estimator will, in fact, be a root of the likelihood. (Recall the proofs of Theorems 3.7 and 3.10 to see how this is used.) However, in some applications θ0 is on the boundary of the parameter space, and the ML estimator is not a root of the likelihood. This situation is more frequently encountered in testing than estimation, where the null hypothesis H0 : θ = θ0 often involves a boundary point. However, boundary values can also occur in point estimation. For example, in a mixture problem (Example 6.10), the value of the mixing parameter could be the boundary value 0 or 1.

Chernoff (1954) first investigated the asymptotic distribution of the maximum likelihood estimator when the parameter is on the boundary. This distribution is typically not normal and is characterized by Self and Liang (1987), who give many examples, ranging from multivariate normal to mixtures of chi-squared distributions to even more complicated forms. An alternate approach to establishing the limiting distribution is provided by Feng and McCulloch (1992), who use a strategy of expanding the parameter space.

10.7 Asymptotics of REML
The results of Cressie and Lahiri (1993) and Jiang (1996, 1997) show that when using restricted maximum likelihood estimation (REML; see Example 2.7 and the discussion after Example 5.3) instead of ML, efficiency need not be sacrificed, as the asymptotic covariance matrix of the REML estimates is the inverse information matrix from the reduced problem. More precisely, we can write the general linear mixed model (generalizing the linear model of Section 3.4) as
Y = Xβ + Zu + ε,    (10.1)
where Y is the N × 1 vector of observations, X and Z are N × p design matrices, β is the p × 1 vector of fixed effects, u ∼ N(0, D) is the p × 1 vector of random effects, and ε ∼ N(0, R), independent of u. The variance components of D and R are usually the targets of estimation. The likelihood function L(β, u, D, R|y) is transformed to the REML likelihood by marginalizing out the β and u effects, that is,
L(D, R|y) = ∫∫ L(β, u, D, R|y) du dβ.
Suppose now that V = V(θ); that is, the vector θ represents the variance components to be estimated. We can thus write L(D, R|y) = L(V(θ)|y) and denote the information matrix of the marginal likelihood by I_N(θ). Cressie and Lahiri (1993, Corollary 3.1) show that under suitable regularity conditions,
[I_N(θ)]^{1/2}(θ̂_N − θ) →_L N_k(0, I),
where θ̂_N maximizes L(V(θ)|y). Thus, the REML estimator is asymptotically efficient. Jiang (1996, 1997) has extended this result and established the asymptotic normality of θ̂_N even when the underlying distributions are not normal.
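In the simplest special case of (10.1), with no random effects and R = σ²I, the marginalization over β can be carried out exactly and reproduces the familiar degrees-of-freedom correction. A minimal sketch in Python (assuming NumPy; dimensions and parameter values are invented):

```python
import numpy as np

# REML sketch for Y = X beta + eps, eps ~ N(0, sigma^2 I): integrating beta
# out of the likelihood leaves a criterion in sigma^2 maximized at
# RSS / (N - p), whereas plain ML gives the downward-biased RSS / N.

rng = np.random.default_rng(5)
N, p, sigma = 50, 5, 1.5                  # invented dimensions and scale
X = rng.standard_normal((N, p))
y = X @ rng.standard_normal(p) + sigma * rng.standard_normal(N)

beta_hat, *_ = np.linalg.lstsq(X, y, rcond=None)
rss = np.sum((y - X @ beta_hat) ** 2)

print("ML estimate of sigma^2  :", rss / N)
print("REML estimate of sigma^2:", rss / (N - p))
print("true sigma^2            :", sigma**2)
```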
10.8 Higher-Order Asymptotics
Typically, not only is the MLE asymptotically efficient, but so also are various approximations to the MLE, to Bayes estimators, and so forth. Therefore, it becomes important to be able to distinguish between different asymptotically efficient estimator sequences. For example, it seems plausible that one would do best in any application of Theorem 4.3 by using a highly efficient √n-consistent starting sequence. It has been pointed out earlier that an efficient estimator sequence can always be modified by terms of order 1/n without affecting the asymptotic efficiency. Thus, to distinguish among them requires taking into account the terms of the next order. A number of authors (among them Rao 1963, Pfanzagl 1973, Ghosh and Subramanyam 1974, Efron 1975, 1978, Akahira and Takeuchi 1981, and Bhattacharya and Denker 1990) have investigated estimators that are "second-order efficient," that is, efficient and, among efficient estimators, of the greatest accuracy to terms of the next order; in particular, these authors have tried to determine to what extent the MLE is second-order efficient. For example, Efron (1975, Section 10) shows that in exponential families, the MLE minimizes the coefficient of the second-order term among efficient estimators.

For the most part, however, the asymptotic theory presented here is "first-order" theory in the sense that the conclusion of Theorem 3.10 can be expressed as saying that
√n(θ̂n − θ)/√(I⁻¹(θ)) = Z + O(1/√n),
where Z is a standard normal random variable, so the convergence is at rate O(1/n^{1/2}). It is possible to reduce the error in the approximation to O(1/n^{3/2}) using "higher-order" asymptotics. The book by Barndorff-Nielsen and Cox (1994) provides a detailed treatment of higher-order asymptotics. Other entries into this subject are through the review papers of Reid (1995, 1996) and a volume edited by Hinkley, Reid, and Snell (1991).

Another technique that is very useful in obtaining accurate approximations for the densities of statistics is the saddlepoint expansion (Daniels 1980, 1983), which can be derived through inversion of a characteristic function or through the use of Edgeworth expansions. Entries to this literature can be made through the review paper of Reid (1988), the monograph by Kolassa (1993), or the books by Field and Ronchetti (1990) or Jensen (1995).

Still another way to achieve higher-order accuracy in certain cases is through a technique known as the bootstrap, initiated by Efron (1979, 1982b). Some of the theoretical foundations of the bootstrap are rooted in the work of von Mises (1936, 1947) and Kiefer and Wolfowitz (1956). The bootstrap can be thought of as a "nonparametric" MLE, where the quantity ∫h(x)dF(x) is estimated by ∫h(x)dFn(x). Using the technique of Edgeworth expansions, it was established by Singh (1981) (see also Bickel and Freedman 1981) that the bootstrap sometimes provides a more accurate approximation than the Delta Method (Theorem 1.8.12). An introduction to the asymptotic theory of the bootstrap is given by Lehmann (1999), and implementation and applications of the bootstrap are given in Efron and Tibshirani (1993). Other introductions to the bootstrap are through the volume edited by LePage and Billard (1992), the book by Shao and Tu (1995), or the review paper of Young (1994). A more theoretical treatment is given by Hall (1992).
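As a closing illustration of the plug-in idea behind the bootstrap mentioned above, here is a minimal sketch in Python (assuming NumPy; the statistic, sample, and number of resamples are invented):

```python
import numpy as np

# Bootstrap sketch: the functional "integral of h with respect to F" is
# estimated by the same functional of the empirical cdf F_n; resampling
# from F_n gives a Monte Carlo approximation to the sampling distribution
# of the plug-in estimate, here the sample median.

rng = np.random.default_rng(6)
x = rng.exponential(scale=2.0, size=100)      # invented observed sample
theta_hat = np.median(x)                       # plug-in estimate

B = 2000
boot = np.array([np.median(rng.choice(x, size=x.size, replace=True))
                 for _ in range(B)])

print("estimate            :", theta_hat)
print("bootstrap std. error:", boot.std())
```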
References

Abbey, I. L. and David, H. T. (1970). The construction of uniformly minimum variance unbiased estimators for exponential distributions. Ann. Math. Statist. 41, 1217-1222.
Agresti, A. (1990). Categorical Data Analysis. New York: Wiley.
Ahuja, J. C. (1972). Recurrence relation for minimum variance unbiased estimation of certain left-truncated Poisson distributions. J. Roy. Statist. Soc. C 21, 81-86.
Aitken, A. C. and Silverstone, H. (1942). On the estimation of statistical parameters. Proc. Roy. Soc. Edinb. A 61, 186-194.
Aitkin, M., Anderson, D., Francis, B., and Hinde, J. (1989). Statistical Modelling in GLIM. Oxford: Clarendon Press.
Akahira, M. and Takeuchi, K. (1981). Asymptotic Efficiency of Statistical Estimators. New York: Springer-Verlag.
Alam, K. (1979). Estimation of multinomial probabilities. Ann. Statist. 7, 282-283.
Albert, J. H. (1988). Computational methods using a Bayesian hierarchical generalized linear model. J. Amer. Statist. Assoc. 83, 1037-1044.
Albert, J. H. and Gupta, A. K. (1985). Bayesian methods for binomial data with application to a nonresponse problem. J. Amer. Statist. Assoc. 80, 167-174.
Amari, S.-I., Barndorff-Nielsen, O. E., Kass, R. E., Lauritzen, S. L., and Rao, C. R. (1987). Differential Geometry in Statistical Inference. Hayward, CA: Institute of Mathematical Statistics.
Amemiya, T. (1980). The n⁻²-order mean squared errors of the maximum likelihood and the minimum logit chi-square estimator. Ann. Statist. 8, 488-505.
Andersen, E. B. (1970a). Sufficiency and exponential families for discrete sample spaces. J. Amer. Statist. Assoc. 65, 1248-1255.
Andersen, E. B. (1970b). Asymptotic properties of conditional maximum likelihood estimators. J. Roy. Statist. Soc. Ser. B 32, 283-301.
Anderson, T. W. (1959). On asymptotic distributions of estimates of parameters of stochastic difference equations. Ann. Math. Statist. 30, 676-687.
Anderson, T. W. (1962). Least squares and best unbiased estimates. Ann. Math. Statist. 33, 266-272.
Anderson, T. W. (1976). Estimation of linear functional relationships: Approximate distributions and connections with simultaneous equations in economics. J. Roy. Statist. Soc. Ser. B 38, 1-19.
Anderson, T. W. (1984). An Introduction to Multivariate Statistical Analysis, Second Edition. New York: Wiley.
Anderson, T. W. and Sawa, T. (1982). Exact and approximate distributions of the maximum likelihood estimator of a slope coefficient. J. Roy. Statist. Soc. Ser. B 44, 52-62.
Arnold, S. F. (1981). The Theory of Linear Models and Multivariate Analysis. New York: Wiley.
Arnold, S. F. (1985). Sufficiency and invariance. Statist. Prob. Lett. 3, 275-279.
Ash, R. (1972). Real Analysis and Probability. New York: Academic Press.
Athreya, K. B., Doss, H., and Sethuraman, J. (1996). On the convergence of the Markov chain simulation method. Ann. Statist. 24, 69-100.
Bahadur, R. R. (1954). Sufficiency and statistical decision functions. Ann. Math. Statist. 25, 423-462.
Bahadur, R. R. (1957). On unbiased estimates of uniformly minimum variance. Sankhya 18, 211-224.
Bahadur, R. R. (1958). Examples of inconsistency of maximum likelihood estimates. Sankhya 20, 207-210.
Bahadur, R. R. (1964). On Fisher's bound for asymptotic variances. Ann. Math. Statist. 35, 1545-1552.
Bahadur, R. R. (1967). Rates of convergence of estimates and test statistics. Ann. Math. Statist. 38, 303-324.
Bahadur, R. R. (1971). Some Limit Theorems in Statistics. Philadelphia: SIAM.
Bai, Z. D. and Fu, J. C. (1987). On the maximum likelihood estimator for the location parameter of a Cauchy distribution. Can. J. Statist. 15, 137-146.
Baker, R. J. and Nelder, J. A. (1983a). Generalized linear models. Encycl. Statist. Sci. 3, 343-348.
Baker, R. J. and Nelder, J. A. (1983b). GLIM. Encycl. Statist. Sci. 3, 439-442.
Bar-Lev, S. K. and Enis, P. (1986). Reproducibility and natural exponential families with power variance functions. Ann. Statist. 14, 1507-1522.
Bar-Lev, S. K. and Enis, P. (1988). On the classical choice of variance stabilizing transformations and an application for a Poisson variate. Biometrika 75, 803-804.
Bar-Lev, S. K. and Bshouty, D. (1989). Rational variance functions. Ann. Statist. 17, 741-748.
Bar-Shalom, Y. (1971). Asymptotic properties of maximum likelihood estimates. J. Roy. Statist. Soc. Ser. B 33, 72-77.
Baranchik, A. J. (1964). Multiple regression and estimation of the mean of a multivariate normal distribution. Stanford University Technical Report No. 51.
Baranchik, A. J. (1970). A family of minimax estimators of the mean of a multivariate normal distribution. Ann. Math. Statist. 41, 642-645.
Barankin, E. W. (1950). Extension of a theorem of Blackwell. Ann. Math. Statist. 21, 280-284.
Barankin, E. W. and Maitra, A. P. (1963). Generalization of the Fisher-Darmois-Koopman-Pitman theorem on sufficient statistics. Sankhya A 25, 217-244.
Barnard, G. A. (1985). Pivotal inference. Encycl. Statist. Sci. (N. L. Johnson, S. Kotz, and C. Read, eds.). New York: Wiley, pp. 743-747.
Barndorff-Nielsen, O. (1978). Information and Exponential Families in Statistical Theory. New York: Wiley.
Barndorff-Nielsen, O. (1980). Conditionality resolutions. Biometrika 67, 293-310.
Barndorff-Nielsen, O. (1983). On a formula for the distribution of the maximum likelihood estimator. Biometrika 70, 343-365.
Barndorff-Nielsen, O. (1988). Parametric Statistical Models and Likelihood. Lecture Notes in Statistics 50. New York: Springer-Verlag.
Barndorff-Nielsen, O. and Blaesild, P. (1980). Global maxima, and likelihood in linear models. Research Report 57, Department of Theoretical Statistics, University of Aarhus.
Barndorff-Nielsen, O. and Cox, D. R. (1979). Edgeworth and saddle-point approximations with statistical applications (with discussion). J. Roy. Statist. Soc. Ser. B 41, 279-312.
Barndorff-Nielsen, O. and Cox, D. R. (1984). Bartlett adjustments to the likelihood ratio statistic and the distribution of the maximum likelihood estimator. J. Roy. Statist. Soc. Ser. B 46, 483-495.
Barndorff-Nielsen, O. and Cox, D. R. (1994). Inference and Asymptotics. London: Chapman & Hall.
Barndorff-Nielsen, O. and Pedersen, K. (1968). Sufficient data reduction and exponential families. Math. Scand. 22, 197-202.
Barndorff-Nielsen, O., Hoffmann-Jørgensen, J., and Pedersen, K. (1976). On the minimal sufficiency of the likelihood function. Scand. J. Statist. 3, 37-38.
Barndorff-Nielsen, O., Blaesild, P., Jensen, J. L., and Jørgensen, B. (1982). Exponential transformation models. Proc. Roy. Soc. London A 379, 41-65.
Barndorff-Nielsen, O., Blaesild, P., and Seshadri, V. (1992). Multivariate distributions with generalized Gaussian marginals, and associated Poisson marginals. Can. J. Statist. 20, 109-120.
Barnett, V. D. (1966). Evaluation of the maximum likelihood estimator where the likelihood equation has multiple roots. Biometrika 53, 151-166.
Barnett, V. (1982). Comparative Statistical Inference, Second Edition. New York: Wiley.
Barron, A. R. (1989). Uniformly powerful goodness of fit tests. Ann. Statist. 17, 107-124.
Basawa, I. V. and Prakasa Rao, B. L. S. (1980). Statistical Inference in Stochastic Processes. London: Academic Press.
Basu, D. (1955a). A note on the theory of unbiased estimation. Ann. Math. Statist. 26, 345-348. Reprinted as Chapter XX of Statistical Information and Likelihood: A Collection of Critical Essays. New York: Springer-Verlag.
Basu, D. (1955b). On statistics independent of a complete sufficient statistic. Sankhya 15, 377-380. Reprinted as Chapter XXII of Statistical Information and Likelihood: A Collection of Critical Essays. New York: Springer-Verlag.
Basu, D. (1956). The concept of asymptotic efficiency. Sankhya 17, 193-196. Reprinted as Chapter XXI of Statistical Information and Likelihood: A Collection of Critical Essays. New York: Springer-Verlag.
Basu, D. (1958). On statistics independent of sufficient statistics. Sankhya 20, 223-226. Reprinted as Chapter XXIII of Statistical Information and Likelihood: A Collection of Critical Essays. New York: Springer-Verlag.
Basu, D. (1969). On sufficiency and invariance. In Essays in Probability and Statistics (Bose, ed.). Chapel Hill: University of North Carolina Press. Reprinted as Chapter XI of Statistical Information and Likelihood: A Collection of Critical Essays.
Basu, D. (1971). An essay on the logical foundations of survey sampling, Part I. In Foundations of Statistical Inference (Godambe and Sprott, eds.). Toronto: Holt, Rinehart and Winston, pp. 203-242. Reprinted as Chapters XII and XIII of Statistical Information and Likelihood: A Collection of Critical Essays.
Basu, D. (1988). Statistical Information and Likelihood: A Collection of Critical Essays (J. K. Ghosh, ed.). New York: Springer-Verlag.
Baum, L. and Petrie, T. (1966). Statistical inference for probabilistic functions of finite state Markov chains. Ann. Math. Statist. 37, 1554-1563.
Baum, L., Petrie, T., Soules, G., and Weiss, N. (1970). A maximization technique occurring in the statistical analysis of probabilistic functions of Markov chains. Ann. Math. Statist. 41, 164-171.
Bayes, T. (1763). An essay toward solving a problem in the doctrine of chances. Phil. Trans. Roy. Soc. 153, 370-418. Reprinted in (1958) Biometrika 45, 293-315.
Bell, C. B., Blackwell, D., and Breiman, L. (1960). On the completeness of order statistics. Ann. Math. Statist. 31, 794-797.
Berger, J. (1975). Minimax estimation of location vectors for a wide class of densities. Ann. Statist. 3, 1318-1328.
Berger, J. (1976a). Tail minimaxity in location vector problems and its applications. Ann. Statist. 4, 33-50.
Berger, J. (1976b). Admissible minimax estimation of a multivariate normal mean with arbitrary quadratic loss. Ann. Statist. 4, 223-226.
Berger, J. (1976c). Inadmissibility results for generalized Bayes estimators of coordinates of a location vector. Ann. Statist. 4, 302-333.
Berger, J. (1976d). Admissibility results for generalized Bayes estimators of coordinates of a location vector. Ann. Statist. 4, 334-356.
Berger, J. (1976e). Inadmissibility results for the best invariant estimator of two coordinates of a location vector. Ann. Statist. 4, 1065-1076.
Berger, J. (1980a). Improving on inadmissible estimators in continuous exponential families with applications to simultaneous estimation of gamma scale parameters. Ann. Statist. 8, 545-571.
Berger, J. (1980b). A robust generalized Bayes estimator and confidence region for a multivariate normal mean. Ann. Statist. 8, 716-761.
Berger, J. O. (1982a). Bayesian robustness and the Stein effect. J. Amer. Statist. Assoc. 77, 358-368.
Berger, J. O. (1982b). Estimation in continuous exponential families: Bayesian estimation subject to risk restrictions. In Statistical Decision Theory III (S. S. Gupta and J. O. Berger, eds.). New York: Academic Press, pp. 109-141.
Berger, J. (1982c). Selecting a minimax estimator of a multivariate normal mean. Ann. Statist. 10, 81-92.
Berger, J. O. (1984). The robust Bayesian viewpoint. In Robustness of Bayesian Analysis (J. Kadane, ed.). Amsterdam: North-Holland.
Berger, J. O. (1985). Statistical Decision Theory and Bayesian Analysis, Second Edition. New York: Springer-Verlag.
Berger, J. O. (1990a). On the inadmissibility of unbiased estimators. Statist. Prob. Lett. 9, 381-384.
Berger, J. O. (1990b). Robust Bayesian analysis: Sensitivity to the prior. J. Statist. Plan. Inform. 25, 303-328.
Berger, J. O. (1994). An overview of robust Bayesian analysis (with discussion). Test 3, 5-124.
Berger, J. O. and Berliner, L. M. (1986). Robust Bayes and empirical Bayes analysis with ε-contaminated priors. Ann. Statist. 14, 461-486.
Berger, J. O. and Bernardo, J. M. (1989). Estimating a product of means: Bayesian analysis with reference priors. J. Amer. Statist. Assoc. 84, 200-207.
Berger, J. O. and Bernardo, J. M. (1992a). On the development of reference priors. In Bayesian Statistics 4 (J. O. Berger and J. M. Bernardo, eds.). London: Clarendon Press, pp. 35-49.
Berger, J. O. and Bernardo, J. M. (1992b). Ordered group reference priors with application to multinomial probabilities. Biometrika 79, 25-37.
Berger, J. O. and Bock, M. E. (1976). Combining independent normal mean estimation problems with unknown variances. Ann. Statist. 4, 642-648.
Berger, J. O. and Dey, D. (1983a). On truncation of shrinkage estimators in simultaneous estimation of normal means. J. Amer. Statist. Assoc. 78, 865-869.
Berger, J. O. and Dey, D. (1983b). Combining coordinates in simultaneous estimation of normal means. J. Statist. Plan. Inform. 8, 143-160.
Berger, J. O. and Robert, C. (1990). Subjective hierarchical Bayes estimation of a multivariate normal mean: On the frequentist interface. Ann. Statist. 18, 617-651.
Berger, J. and Srinivasan, C. (1978). Generalized Bayes estimators in multivariate problems. Ann. Statist. 6, 783-801.
Berger, J. O. and Strawderman, W. E. (1996). Choice of hierarchical priors: Admissibility in estimation of normal means. Ann. Statist. 24, 931-951.
Berger, J. O. and Wolpert, R. W. (1988). The Likelihood Principle, Second Edition. Hayward, CA: Institute of Mathematical Statistics.
Berger, J. O., Bock, M. E., Brown, L. D., Casella, G., and Gleser, L. J. (1977). Minimax estimation of a normal mean vector for arbitrary quadratic loss and unknown covariance matrix. Ann. Statist. 5, 763-771.
Berk, R. (1967a). A special group structure and equivariant estimation. Ann. Math. Statist. 38, 1436-1445.
Berk, R. H. (1967b). Review of Zehna (1966). Math. Rev. 33, No. 1922.
Berk, R. (1972a). A note on sufficiency and invariance. Ann. Math. Statist. 43, 647-650.
Berk, R. H. (1972b). Consistency and asymptotic normality of MLE's for exponential models. Ann. Math. Statist. 43, 193-204.
Berk, R. H., Brown, L. D., and Cohen, A. (1981). Bounded stopping times for a class of sequential Bayes tests. Ann. Statist. 9, 834-845.
Berkson, J. (1980). Minimum chi-square, not maximum likelihood! Ann. Statist. 8, 457-469.
Bernardo, J. M. (1979). Reference posterior distributions for Bayesian inference. J. Roy. Statist. Soc. Ser. B 41, 113-147.
Bernardo, J. M. and Smith, A. F. M. (1994). Bayesian Theory. New York: Wiley.
Bernstein, S. (1917). Theory of Probability (in Russian). Fourth Edition (1946). Moscow-Leningrad: Gostekhizdat.
Berry, J. C. (1987). Equivariant estimation of a normal mean using a normal concomitant variable for covariance adjustment. Can. J. Statist. 15, 177-183.
Berry, J. C. (1990). Minimax estimation of a bounded normal mean vector. J. Mult. Anal. 35, 130-139.
Bhat, B. R. (1974). On the method of maximum likelihood for dependent observations. J. Roy. Statist. Soc. Ser. B 36, 48-53.
Bhattacharyya, A. (1946, 1948). On some analogs to the amount of information and their uses in statistical estimation. Sankhya 8, 1-14, 201-208, 277-280.
Bhattacharyya, G. K., Johnson, R. A., and Mehrotra, K. G. (1977). On the completeness of minimal sufficient statistics with censored observations. Ann. Statist. 5, 547-553.
Bhattacharya, R. and Denker, M. (1990). Asymptotic Statistics. Basel: Birkhäuser-Verlag.
Bickel, P. J. (1981). Minimax estimation of the mean of a normal distribution when the parameter space is restricted. Ann. Statist. 9, 1301-1309.
Bickel, P. J. (1983). Minimax estimation of the mean of a normal distribution subject to doing well at a point. In Recent Advances in Statistics: Papers in Honor of Herman Chernoff on his Sixtieth Birthday (M. H. Rizvi, J. S. Rustagi, and D. Siegmund, eds.). New York: Academic Press, pp. 511-528.
Bickel, P. J. (1984). Parametric robustness: Small biases can be worthwhile. Ann. Statist. 12, 864-879.
Bickel, P. J. and Blackwell, D. (1967). A note on Bayes estimates. Ann. Math. Statist. 38, 1907-1911.
Bickel, P. J. and Doksum, K. A. (1981). An analysis of transformations revisited. J. Amer. Statist. Assoc. 76, 296-311.
Bickel, P. J. and Freedman, D. A. (1981). Some asymptotic theory for the bootstrap. Ann. Statist. 9, 1196-1217.
Bickel, P. J. and Lehmann, E. L. (1969). Unbiased estimation in convex families. Ann. Math. Statist. 40, 1523-1535.
Bickel, P. J. and Lehmann, E. L. (1975-1979). Descriptive statistics for nonparametric models I-III. Ann. Statist. 3, 1038-1045 and 1045-1069; 4, 1139-1159; IV in Contributions to Statistics, Hájek Memorial Volume (J. Jurečková, ed.). Prague: Academia, pp. 33-40.
Bickel, P. J. and Lehmann, E. L. (1981). A minimax property of the sample mean in finite populations. Ann. Statist. 9, 1119-1122.
Bickel, P. J. and Mallows, C. (1988). A note on unbiased Bayes estimates. Amer. Statist. 42, 132-134.
Bickel, P. J. and Yahav, J. A. (1969). Some contributions to the asymptotic theory of Bayes solutions. Z. Wahrsch. Verw. Geb. 11, 257-276.
Bickel, P. J., Klaassen, C. A. J., Ritov, Y., and Wellner, J. (1993). Efficient and Adaptive Estimation for Semiparametric Models. Baltimore: Johns Hopkins University Press.
Billingsley, P. (1995). Probability and Measure, Third Edition. New York: Wiley.
Bilodeau, M. and Srivastava, M. S. (1988). Estimation of the MSE matrix of the Stein estimator. Can. J. Statist. 16, 153-159.
Bischoff, W. and Fieger, W. (1992). Minimax estimators and gamma-minimax estimators of a bounded normal mean under the loss ℓp(θ, d) = |θ − d|^p. Metrika 39, 185-197.
Bishop, Y. M. M., Fienberg, S. E., and Holland, P. W. (1975). Discrete Multivariate Analysis. Cambridge, MA: MIT Press.
Bjørnstad, J. F. (1990). Predictive likelihood: A review (with discussion). Statist. Sci. 5, 242-265.
Blackwell, D. (1947). Conditional expectation and unbiased sequential estimation. Ann. Math. Statist. 18, 105-110.
Blackwell, D. (1951). On the translation parameter problem for discrete variables. Ann. Math. Statist. 22, 393-399.
Blackwell, D. and Girshick, M. A. (1947). A lower bound for the variance of some unbiased sequential estimates. Ann. Math. Statist. 18, 277-280.
Blackwell, D. and Girshick, M. A. (1954). Theory of Games and Statistical Decisions. New York: Wiley. Reissued by Dover, New York, 1979.
Blackwell, D. and Ryll-Nardzewski, C. (1963). Non-existence of everywhere proper conditional distributions. Ann. Math. Statist. 34, 223-225.
Blight, J. N. and Rao, P. V. (1974). The convergence of Bhattacharyya bounds. Biometrika 61, 137-142.
Bloch, D. A. and Watson, G. S. (1967). A Bayesian study of the multinomial distribution. Ann. Math. Statist. 38, 1423-1435.
Blyth, C. R. (1951). On minimax statistical decision procedures and their admissibility. Ann. Math. Statist. 22, 22-42.
Blyth, C. R. (1970). On the inference and decision models of statistics (with discussion). Ann. Math. Statist. 41, 1034-1058.
Blyth, C. R. (1974). Necessary and sufficient conditions for inequalities of Cramér-Rao type. Ann. Statist. 2, 464-473.
Blyth, C. R. (1980). Expected absolute error of the usual estimator of the binomial parameter. Amer. Statist. 34, 155-157.
Blyth, C. R. (1982). Maximum probability estimation in small samples. In Festschrift for Erich Lehmann (P. J. Bickel, K. A. Doksum, and J. L. Hodges, Jr., eds.). Pacific Grove, CA: Wadsworth and Brooks/Cole.
Bock, M. E. (1975). Minimax estimators of the mean of a multivariate normal distribution. Ann. Statist. 3, 209-218.
Bock, M. E. (1982). Employing vague inequality information in the estimation of normal mean vectors (estimators that shrink toward closed convex polyhedra). In Statistical Decision Theory III (S. S. Gupta and J. O. Berger, eds.). New York: Academic Press, pp. 169-193.
Bock, M. E. (1988). Shrinkage estimators: Pseudo-Bayes estimators for normal mean vectors. In Statistical Decision Theory IV (S. S. Gupta and J. O. Berger, eds.). New York: Springer-Verlag, pp. 281-298.
Bondar, J. V. (1987). How much improvement can a shrinkage estimator give? In Foundations of Statistical Inference (I. MacNeill and G. Umphreys, eds.). Dordrecht: Reidel.
Bondar, J. V. and Milnes, P. (1981). Amenability: A survey for statistical applications of Hunt-Stein and related conditions on groups. Zeitschr. Wahrsch. Verw. Geb. 57, 103-128.
Bondesson, L. (1975). Uniformly minimum variance estimation in location parameter families. Ann. Statist. 3, 637-660.
Boos, D. D. (1979). A differential for L-statistics. Ann. Statist. 7, 955-959.
Boos, D. D. and Serfling, R. J. (1980). A note on differentials and the CLT and LIL for statistical functions, with application to M-estimators. Ann. Statist. 8, 618-624.
Borges, R. and Pfanzagl, J. (1965). One-parameter exponential families generated by transformation groups. Ann. Math. Statist. 36, 261-271.
Box, G. E. P. and Tiao, G. C. (1973). Bayesian Inference in Statistical Analysis. Reading, MA: Addison-Wesley.
Box, G. E. P. and Cox, D. R. (1982). An analysis of transformations revisited, rebutted. J. Amer. Statist. Assoc. 77, 209-210.
Boyles, R. A. (1983). On the convergence of the EM algorithm. J. Roy. Statist. Soc. Ser. B 45, 47-50.
Bradley, R. A. and Gart, J. J. (1962). The asymptotic properties of ML estimators when sampling from associated populations. Biometrika 49, 205-214.
Brandwein, A. C. and Strawderman, W. E. (1978). Minimax estimation of location parameters for spherically symmetric unimodal distributions under quadratic loss. Ann. Statist. 6, 377-416.
Brandwein, A. C. and Strawderman, W. E. (1980). Minimax estimation of location parameters for spherically symmetric distributions with concave loss. Ann. Statist. 8, 279-284.
Brandwein, A. C. and Strawderman, W. E. (1990). Stein estimation: The spherically symmetric case. Statist. Sci. 5, 356-369.
Bravo, G. and MacGibbon, B. (1988). Improved shrinkage estimators for the mean vector of a scale mixture of normals with unknown variance. Can. J. Statist. 16, 237-245.
Brewster, J. F. and Zidek, J. V. (1974). Improving on equivariant estimators. Ann. Statist. 2, 21-38.
Brockwell, P. J. and Davis, R. A. (1987). Time Series: Theory and Methods. New York: Springer-Verlag.
Brown, L. D. (1966). On the admissibility of invariant estimators of one or more location parameters. Ann. Math. Statist. 37, 1087-1136.
Brown, L. D. (1968). Inadmissibility of the usual estimators of scale parameters in problems with unknown location and scale parameters. Ann. Math. Statist. 39, 29-48.
Brown, L. D. (1971). Admissible estimators, recurrent diffusions, and insoluble boundary value problems. Ann. Math. Statist. 42, 855-903. [Corr: (1973) Ann. Statist. 1, 594-596.]
Brown, L. D. (1975). Estimation with incompletely specified loss functions (the case of several location parameters). J. Amer. Statist. Assoc. 70, 417-427.
Brown, L. D. (1978). A contribution to Kiefer's theory of conditional confidence procedures. Ann. Statist. 6, 59-71.
Brown, L. D. (1979). A heuristic method for determining admissibility of estimators - with applications. Ann. Statist. 7, 960-994.
Brown, L. D. (1980a). A necessary condition for admissibility. Ann. Statist. 8, 540-544.
Brown, L. D. (1980b). Examples of Berger's phenomenon in the estimation of independent normal means. Ann. Statist. 8, 572-585.
Brown, L. D. (1981). A complete class theorem for statistical problems with finite sample spaces. Ann. Statist. 9, 1289-1300.
Brown, L. D. (1986a). Fundamentals of Statistical Exponential Families. Hayward, CA: Institute of Mathematical Statistics.
Brown, L. D. (1986b). Commentary on paper. J. C. Kiefer Collected Papers, Supplementary Volume. New York: Springer-Verlag, pp. 20-27.
Brown, L. D. (1988). Admissibility in discrete and continuous invariant nonparametric estimation problems and in their multinomial analogs. Ann. Statist. 16, 1567-1593.
Brown, L. D. (1990a). An ancillarity paradox which appears in multiple linear regression (with discussion). Ann. Statist. 18, 471-538.
Brown, L. D. (1990b). Comment on the paper by Maatta and Casella. Statist. Sci. 5, 103-106.
Brown, L. D. (1994). Minimaxity, more or less. In Statistical Decision Theory and Related Topics V (S. S. Gupta and J. O. Berger, eds.). New York: Springer-Verlag, pp. 1-18.
Brown, L. D. and Cohen, A. (1974). Point and confidence estimation of a common mean and recovery of interblock information. Ann. Statist. 2, 963-976.
Brown, L. D. and Farrell, R. (1985a). All admissible estimators of a multivariate Poisson mean. Ann. Statist. 13, 282-294.
Brown, L. D. and Farrell, R. (1985b). Complete class theorems for estimation of multivariate Poisson means and related problems. Ann. Statist. 13, 706-726.
Brown, L. D. and Farrell, R. (1990). A lower bound for the risk in estimating the value of a probability density. J. Amer. Statist. Assoc. 90, 1147-1153.
Brown, L. D. and Fox, M. (1974a). Admissibility of procedures in two-dimensional location parameter problems. Ann. Statist. 2, 248-266.
Brown, L. D. and Fox, M. (1974b). Admissibility in statistical problems involving a location or scale parameter. Ann. Statist. 2, 807-814.
Brown, L. D. and Gajek, L. (1990). Information inequalities for the Bayes risk. Ann. Statist. 18, 1578-1594.
Brown, L. D. and Hwang, J. T. (1982). A unified admissibility proof. In Statistical Decision Theory III (S. S. Gupta and J. O. Berger, eds.). New York: Academic Press, pp. 205-230.
Brown, L. D. and Hwang, J. T. (1989). Universal domination and stochastic domination: U-admissibility and U-inadmissibility of the least squares estimator. Ann. Statist. 17, 252-267.
Brown, L. D., Johnstone, I., and MacGibbon, B. K. (1981). Variation diminishing transformations: A direct approach to total positivity and its statistical applications. J. Amer. Statist. Assoc. 76, 824-832.
Brown, L. D. and Low, M. G. (1991). Information inequality bounds on the minimax risk (with an application to nonparametric regression). Ann. Statist. 19, 329-337.
Brown, L. D. and Purves, R. (1973). Measurable selection of extrema. Ann. Statist. 1, 902-912.
Brown, P. J. and Zidek, J. V. (1980). Adaptive multivariate ridge regression. Ann. Statist. 8, 64-74.
Brown, L. D., Cohen, A., and Strawderman, W. E. (1976). A complete class theorem for strict monotone likelihood ratio with applications. Ann. Statist. 4, 712-722.
Brown, L. D., Casella, G., and Hwang, J. T. G. (1995). Optimal confidence sets, bioequivalence, and the limaçon of Pascal. J. Amer. Statist. Assoc. 90, 880-889.
Bucklew, J. A. (1990). Large Deviation Techniques in Decision, Simulation and Estimation. New York: Wiley.
Buehler, R. J. (1959). Some validity criteria for statistical inference. Ann. Math. Statist. 30, 845-863.
Buehler, R. J. (1982). Some ancillary statistics and their properties. J. Amer. Statist. Assoc. 77, 581-589.
Burdick, R. K. and Graybill, F. A. (1992). Confidence Intervals on Variance Components. New York: Marcel Dekker.
Butler, R. W. (1986). Predictive likelihood inference with applications (with discussion). J. Roy. Statist. Soc. Ser. B 48, 1-38.
Butler, R. W. (1989). Approximate predictive pivots and densities. Biometrika 76, 489-501.
Carlin, B. P. and Louis, T. A. (1996). Bayes and Empirical Bayes Methods for Data Analysis. London: Chapman & Hall.
Carroll, R. J. and Lombard, F. (1985). A note on N estimators for the binomial distribution. J. Amer. Statist. Assoc. 80, 423-426.
Carroll, R. J., Ruppert, D., and Stefanski, L. (1995). Measurement Error in Nonlinear Models. London: Chapman & Hall.
Carter, R. G., Srivastava, M. S., and Srivastava, V. K. (1990). Unbiased estimation of the MSE matrix of Stein-rule estimators, confidence ellipsoids, and hypothesis testing. Econ. Theory 6, 63-74.
Casella, G. (1980). Minimax ridge regression estimation. Ann. Statist. 8, 1036-1056.
Casella, G. (1985a). An introduction to empirical Bayes data analysis. Amer. Statist. 39, 83-87.
Casella, G. (1985b). Matrix conditioning and minimax ridge regression estimation. J. Amer. Statist. Assoc. 80, 753-758.
Casella, G. (1986). Stabilizing binomial n estimators. J. Amer. Statist. Assoc. 81, 172-175.
Casella, G. (1987). Conditionally acceptable recentered set estimators. Ann. Statist. 15, 1363-1371.
Casella, G. (1988). Conditionally acceptable frequentist solutions (with discussion). In Statistical Decision Theory IV (S. S. Gupta and J. O. Berger, eds.). New York: Springer-Verlag, pp. 73-111.
Casella, G. (1990). Estimators with nondecreasing risk: Application of a chi-squared identity. Statist. Prob. Lett. 10, 107-109.
Casella, G. (1992a). Illustrating empirical Bayes methods. Chemolab 16, 107-125.
Casella, G. (1992b). Conditional inference from confidence sets. In Current Issues in Statistical Inference: Essays in Honor of D. Basu (M. Ghosh and P. K. Pathak, eds.). Hayward, CA: Institute of Mathematical Statistics, pp. 1-12.
Casella, G. and Berger, R. L. (1990). Statistical Inference. Pacific Grove, CA: Wadsworth/Brooks Cole.
Casella, G. and Berger, R. L. (1992). Deriving generalized means as least squares and maximum likelihood estimates. Amer. Statist. 46, 279-282.
Casella, G. and Berger, R. L. (1994). Estimation with selected binomial information. J. Amer. Statist. Assoc. 89, 1080-1090.
Casella, G. and Hwang, J. T. G. (1982). Limit expressions for the risk of James-Stein estimators. Can. J. Statist. 10, 305-309.
Casella, G. and Hwang, J. T. (1983). Empirical Bayes confidence sets for the mean of a multivariate normal distribution. J. Amer. Statist. Assoc. 78, 688-697.
Casella, G. and Hwang, J. T. (1987). Employing vague prior information in the construction of confidence sets. J. Mult. Anal. 21, 79-104.
Casella, G. and Strawderman, W. E. (1981). Estimating a bounded normal mean. Ann. Statist. 9, 870-878.
Casella, G. and Strawderman, W. E. (1994). On estimating several binomial N's. Sankhya 56, 115-120.
Cassel, C., Särndal, C., and Wretman, J. H. (1977). Foundations of Inference in Survey Sampling. New York: Wiley.
Cellier, D., Fourdrinier, D., and Robert, C. (1989). Robust shrinkage estimators of the location parameter for elliptically symmetric distributions. J. Mult. Anal. 29, 39-42.
Chan, K. S. and Geyer, C. J. (1994). Discussion of the paper by Tierney. Ann. Statist. 22, 1747-1758.
Chan, L. K. (1967). Remark on the linearized maximum likelihood estimate. Ann. Math. Statist. 38, 1876-1881.
Chapman, D. G. and Robbins, H. (1951). Minimum variance estimation without regularity assumptions. Ann. Math. Statist. 22, 581-586.
Chatterji, S. D. (1982). A remark on the Cramér-Rao inequality. In Statistics and Probability: Essays in Honor of C. R. Rao (G. Kallianpur, P. R. Krishnaiah, and J. K. Ghosh, eds.). New York: North-Holland, pp. 193-196.
Chaudhuri, A. and Mukerjee, R. (1988). Randomized Response: Theory and Techniques. New York: Marcel Dekker.
Chen, J. and Hwang, J. T. (1988). Improved set estimators for the coefficients of a linear model when the error distribution is spherically symmetric. Can. J. Statist. 16, 293-299.
Chen, L., Eichenauer-Herrmann, J., and Lehn, J. (1990). Gamma-minimax estimation of a multivariate normal mean. Metrika 37, 1-6.
Chen, S.-Y. (1988). Restricted risk Bayes estimation for the mean of a multivariate normal distribution. J. Mult. Anal. 24, 207-217.
Chernoff, H. (1954). On the distribution of the likelihood ratio. Ann. Math. Statist. 25, 573-578.
Chernoff, H. (1956). Large-sample theory: Parametric case. Ann. Math. Statist. 27, 1-22.
Chib, S. and Greenberg, E. (1995). Understanding the Metropolis-Hastings algorithm. Amer. Statist. 49, 327-335.
Chow, M. (1990). Admissibility of MLE for simultaneous estimation in negative binomial problems. J. Mult. Anal. 33, 212-219.
Christensen, R. (1987). Plane Answers to Complex Questions: The Theory of Linear Models, Second Edition. New York: Springer-Verlag.
Christensen, R. (1990). Log-Linear Models. New York: Springer-Verlag.
Churchill, G. A. (1989). Stochastic models for heterogeneous DNA sequences. Bull. Math. Biol. 51, 79-94.
Chung, K. L. (1974). A Course in Probability Theory, Second Edition. New York: Academic Press.
Clarke, B. S. and Barron, A. R. (1990). Information-theoretic asymptotics of Bayes methods. IEEE Trans. Inform. Theory 36, 453-471.
Clarke, B. S. and Barron, A. R. (1994). Jeffreys' prior is asymptotically least favorable under entropy loss. J. Statist. Plan. Inform. 41, 37-60.
Clarke, B. S. and Wasserman, L. (1993). Noninformative priors and nuisance parameters. J. Amer. Statist. Assoc. 88, 1427-1432.
Cleveland, W. S. (1985). The Elements of Graphing Data. Monterey, CA: Wadsworth.
Clevenson, M. L. and Zidek, J. (1975). Simultaneous estimation of the means of independent Poisson laws. J. Amer. Statist. Assoc. 70, 698-705.
Cochran, W. G. (1977). Sampling Techniques, Third Edition. New York: Wiley.
Cohen, A. (1965a). Estimates of linear combinations of the parameters in the mean vector of a multivariate distribution. Ann. Math. Statist. 36, 78-87.
Cohen, A. C. (1965b). Maximum likelihood estimation in the Weibull distribution based on complete and on censored samples. Technometrics 7, 579-588.
Cohen, A. C. (1966). All admissible linear estimators of the mean vector. Ann. Math. Statist. 37, 458-463.
Cohen, A. C. (1967). Estimation in mixtures of two normal distributions. Technometrics 9, 15-28.
Cohen, A. (1981). Inference for marginal means in contingency tables. J. Amer. Statist. Assoc. 76, 895-902.
Cohen, A. and Sackrowitz, H. B. (1974). On estimating the common mean of two normal distributions. Ann. Statist. 2, 1274-1282.
Cohen, M. P. and Kuo, L. (1985). The admissibility of the empirical distribution function. Ann. Statist. 13, 262-271.
Copas, J. B. (1972a). The likelihood surface in the linear functional relationship problem. J. Roy. Statist. Soc. Ser. B 34, 274-278.
Copas, J. B. (1972b). Empirical Bayes methods and the repeated use of a standard. Biometrika 59, 349-360.
Copas, J. B. (1975). On the unimodality of the likelihood for the Cauchy distribution. Biometrika 62, 701-704.
Copas, J. B. (1983). Regression, prediction and shrinkage. J. Roy. Statist. Soc. Ser. B 45, 311-354.
Corbeil, R. R. and Searle, S. R. (1976). Restricted maximum likelihood (REML) estimation of variance components in the mixed model. Technometrics 18, 31-38.
Cox, D. R. (1958). Some problems connected with statistical inference. Ann. Math. Statist. 29, 357-372.
Cox, D. R. (1970). The Analysis of Binary Data. London: Methuen.
Cox, D. R. (1975). Partial likelihood. Biometrika 62, 269-276.
Cox, D. R. and Oakes, D. O. (1984). Analysis of Survival Data. London: Chapman & Hall.
Cox, D. R. and Reid, N. (1987). Parameter orthogonality and approximate conditional inference (with discussion). J. Roy. Statist. Soc. Ser. B 49, 1-39.
Crain, B. R. (1976). Exponential models, maximum likelihood estimation, and the Haar condition. J. Amer. Statist. Assoc. 71, 737-740.
Cramér, H. (1946a). Mathematical Methods of Statistics. Princeton, NJ: Princeton University Press.
Cramér, H. (1946b). A contribution to the theory of statistical estimation. Skand. Akt. Tidskr. 29, 85-94.
Cressie, N. and Lahiri, S. N. (1993). The asymptotic distribution of REML estimators. J. Mult. Anal. 45, 217-233.
Crow, E. L. and Shimizu, K. (1988). Lognormal Distributions: Theory and Practice. New York: Marcel Dekker.
Crowder, M. J. (1976). Maximum likelihood estimation for dependent observations. J. Roy. Statist. Soc. Ser. B 38, 45-53.
Crowder, M. J. and Sweeting, T. (1989). Bayesian inference for a bivariate binomial distribution. Biometrika 76, 599-603.
Daniels, H. E. (1954). Saddlepoint approximations in statistics. Ann. Math. Statist. 25, 631-650.
Daniels, H. E. (1961). The asymptotic efficiency of a maximum likelihood estimator. Proc. Fourth Berkeley Symp. Math. Statist. Prob. 1. Berkeley: University of California Press, pp. 151-163.
Daniels, H. E. (1980). Exact saddlepoint approximations. Biometrika 67, 59-63.
Daniels, H. E. (1983). Saddlepoint approximations for estimating equations. Biometrika 70, 89-96.
Darmois, G. (1935). Sur les lois de probabilité à estimation exhaustive. C. R. Acad. Sci. Paris 200, 1265-1266.
Darmois, G. (1945). Sur les lois limites de la dispersion de certaines estimations. Rev. Inst. Int. Statist. 13, 9-15.
DasGupta, A. (1985). Bayes minimax estimation in multiparameter families when the parameter space is restricted to a bounded convex set. Sankhya 47, 326-332.
DasGupta, A. (1991). Diameter and volume minimizing confidence sets in Bayes and classical problems. Ann. Statist. 19, 1225-1243.
DasGupta, A. (1994). An examination of Bayesian methods and inference: In search of the truth. Technical Report, Department of Statistics, Purdue University.
DasGupta, A. and Bose, A. (1988). Gamma-minimax and restricted-risk Bayes estimation of multiple Poisson means under ε-contamination of the subjective prior. Statist. Dec. 6, 311-341.
DasGupta, A. and Rubin, H. (1988). Bayesian estimation subject to minimaxity of the mean of a multivariate normal distribution in the case of a common unknown variance. In Statistical Decision Theory and Related Topics IV (S. S. Gupta and J. O. Berger, eds.). New York: Springer-Verlag, pp. 325-346.
DasGupta, A. and Studden, W. J. (1989). Frequentist behavior of robust Bayes estimates of normal means. Statist. Dec. 7, 333-361.
Datta, G. S. and Ghosh, J. K. (1995). On priors providing frequentist validity for Bayesian inference. Biometrika 82, 37-46.
Davis, L. J. (1989). Intersection-union tests for strict collapsibility in three-dimensional contingency tables. Ann. Statist. 17, 1693-1708.
Dawid, A. P. (1983). Invariant prior distributions. Encycl. Statist. Sci. 4, 228-235.
deFinetti, B. (1937). La prévision: ses lois logiques, ses sources subjectives. Ann. Inst. Henri Poincaré 7, 1-68. (Translated in Studies in Subjective Probability (H. Kyburg and H. Smokler, eds.). New York: Wiley.)
deFinetti, B. (1970). Teoria Delle Probabilità. English translation (1974): Theory of Probability. New York: Wiley.
deFinetti, B. (1974). Theory of Probability, Volumes I and II. New York: Wiley.
DeGroot, M. H. (1970). Optimal Statistical Decisions. New York: McGraw-Hill.
DeGroot, M. H. and Rao, M. M. (1963). Bayes estimation with convex loss. Ann. Math. Statist. 34, 839-846.
Deely, J. J. and Lindley, D. V. (1981). Bayes empirical Bayes. J. Amer. Statist. Assoc. 76, 833-841.
Dempster, A. P. (1971). An overview of multivariate data analysis. J. Mult. Anal. 1, 316-346.
Dempster, A. P., Laird, N. M., and Rubin, D. B. (1977). Maximum likelihood from incomplete data via the EM algorithm. J. Roy. Statist. Soc. Ser. B 39, 1-22.
Denny, J. L. (1964). A real-valued continuous function on R almost everywhere 1-1. Fund. Math. 55, 95-99.
Denny, J. L. (1969). Note on a theorem of Dynkin on the dimension of sufficient statistics. Ann. Math. Statist. 40, 1474-1476.
Devroye, L. (1985). Non-Uniform Random Variate Generation. New York: Springer-Verlag.
Devroye, L. and Györfi, L. (1985). Nonparametric Density Estimation: The L1 View. New York: Wiley.
Dey, D. K. and Srinivasan, C. (1985). Estimation of a covariance matrix under Stein's loss. Ann. Statist. 13, 1581-1591.
Dey, D. K., Ghosh, M., and Srinivasan, C. (1987). Simultaneous estimation of parameters under entropy loss. J. Statist. Plan. Inform. 15, 347-363.
Diaconis, P. (1985). Theories of data analysis: From magical thinking through classical statistics. In Exploring Data Tables, Trends and Shapes (D. Hoaglin, F. Mosteller, and J. W. Tukey, eds.). New York: Wiley, pp. 1-36.
Diaconis, P. (1988). Group Representations in Probability and Statistics. Hayward, CA: Institute of Mathematical Statistics.
Diaconis, P. and Ylvisaker, D. (1979). Conjugate priors for exponential families. Ann. Statist. 7, 269-281.
Dobson, A. J. (1990). An Introduction to Generalized Linear Models. London: Chapman & Hall.
Donoho, D. (1994). Statistical estimation and optimal recovery. Ann. Statist. 22, 238-270.
Donoho, D. L., Liu, R. C., and MacGibbon, B. (1990). Minimax risk over hyperrectangles, and implications. Ann. Statist. 18, 1416-1437.
Doss, H. and Sethuraman, J. (1989). The price of bias reduction when there is no unbiased estimate. Ann. Statist. 17, 440-442.
Downton, F. (1973). The estimation of P(Y < X) in the normal case. Technometrics 15, 551-558.
Draper, N. R. and Van Nostrand, R. C. (1979). Ridge regression and James-Stein estimation: Review and comments. Technometrics 21, 451-466.
Dudley, R. M. (1989). Real Analysis and Probability. Pacific Grove, CA: Wadsworth and Brooks/Cole.
Dynkin, E. B. (1951). Necessary and sufficient statistics for a family of probability distributions. English translation in Select. Transl. Math. Statist. Prob. 1 (1961), 23-41.
Eaton, M. L. (1989). Group Invariance Applications in Statistics. Regional Conference Series in Probability and Statistics. Hayward, CA: Institute of Mathematical Statistics.
Eaton, M. L. (1992). A statistical diptych: Admissible inferences-recurrence of symmetric Markov chains. Ann. Statist. 20, 1147-1179.
Eaton, M. L. and Morris, C. N. (1970). The application of invariance to unbiased estimation. Ann. Math. Statist. 41, 1708-1716.
Eaves, D. M. (1983). On Bayesian nonlinear regression with an enzyme example. Biometrika 70, 367-379.
Edgeworth, F. Y. (1883). The law of error. Phil. Mag. (Fifth Series) 16, 300-309.
Edgeworth, F. Y. (1908, 1909). On the probable errors of frequency constants. J. Roy. Statist. Soc. 71, 381-397, 499-512, 651-678; 72, 81-90.
Edwards, A. W. F. (1974). The history of likelihood. Int. Statist. Rev. 42, 4-15.
Edwards, A. W. F. (1992). Likelihood. Baltimore: Johns Hopkins University Press.
Efron, B. (1975). Defining the curvature of a statistical problem (with applications to second order efficiency). Ann. Statist. 3, 1189-1242.
Efron, B. (1978). The geometry of exponential families. Ann. Statist. 6, 362-376.
Efron, B. (1979). Bootstrap methods: Another look at the jackknife. Ann. Statist. 7, 1-26.
Efron, B. (1982a). Maximum likelihood and decision theory. Ann. Statist. 10, 340-356.
Efron, B. (1982b). The Jackknife, the Bootstrap, and Other Resampling Plans. Volume 38 of CBMS-NSF Regional Conference Series in Applied Mathematics. Philadelphia: SIAM.
Efron, B. (1990). Discussion of the paper by Brown. Ann. Statist. 18, 502-503.
Efron, B. and Hinkley, D. (1978). Assessing the accuracy of the maximum likelihood estimator: Observed vs. expected Fisher information. Biometrika 65, 457-481.
Efron, B. and Johnstone, I. (1990). Fisher's information in terms of the hazard ratio. Ann. Statist. 18, 38-62.
Efron, B. and Morris, C. N. (1971). Limiting the risk of Bayes and empirical Bayes estimators-Part I: The Bayes case. J. Amer. Statist. Assoc. 66, 807-815.
Efron, B. and Morris, C. N. (1972a). Limiting the risk of Bayes and empirical Bayes estimators-Part II: The empirical Bayes case. J. Amer. Statist. Assoc. 67, 130-139.
Efron, B. and Morris, C. (1972b). Empirical Bayes on vector observations-An extension of Stein's method. Biometrika 59, 335-347.
Efron, B. and Morris, C. N. (1973a). Stein's estimation rule and its competitors-an empirical Bayes approach. J. Amer. Statist. Assoc. 68, 117-130.
Efron, B. and Morris, C. (1973b). Combining possibly related estimation problems (with discussion). J. Roy. Statist. Soc. Ser. B 35, 379-421.
Efron, B. and Morris, C. (1975). Data analysis using Stein's estimator and its generalizations. J. Amer. Statist. Assoc. 70, 311-319.
Efron, B. and Morris, C. N. (1976a). Families of minimax estimators of the mean of a multivariate normal distribution. Ann. Statist. 4, 11-21.
Efron, B. and Morris, C. N. (1976b). Multivariate empirical Bayes and estimation of covariance matrices. Ann. Statist. 4, 22-32.
Efron, B. and Tibshirani, R. J. (1993). An Introduction to the Bootstrap. London: Chapman & Hall.
Eichenauer-Herrmann, J. and Fieger, W. (1992). Minimax estimation under convex loss when the parameter interval is bounded. Metrika 39, 27-43.
Eichenauer-Herrmann, J. and Ickstadt, K. (1992). Minimax estimators for a bounded location parameter. Metrika 39, 227-237.
Eisenhart, C. (1964). The meaning of 'least' in least squares. J. Wash. Acad. Sci. 54, 24-33.
Ericson, W. A. (1969). Subjective Bayesian models in sampling finite populations (with discussion). J. Roy. Statist. Soc. Ser. B 31, 195-233.
Everitt, B. S. (1992). The Analysis of Contingency Tables, Second Edition. London: Chapman & Hall.
Everitt, B. S. and Hand, D. J. (1981). Finite Mixture Distributions. London: Chapman & Hall.
Fabian, V. and Hannan, J. (1977). On the Cramér-Rao inequality. Ann. Statist. 5, 197-205.
Faith, R. E. (1976). Minimax Bayes point and set estimators of a multivariate normal mean. Ph.D. Thesis, Department of Statistics, University of Michigan.
Faith, R. E. (1978). Minimax Bayes point estimators of a multivariate normal mean. J. Mult. Anal. 8, 372-379.
Fan, J. and Gijbels, I. (1992). Minimax estimation of a bounded squared mean. Statist. Prob. Lett. 13, 383-390.
Farrell, R. (1964). Estimators of a location parameter in the absolutely continuous case. Ann. Math. Statist. 35, 949-998.
Farrell, R. H. (1968). On a necessary and sufficient condition for admissibility of estimators when strictly convex loss is used. Ann. Math. Statist. 38, 23-28.
Farrell, R. H., Klonecki, W., and Zontek, S. (1989). All admissible linear estimators of the vector of gamma scale parameters with applications to random effects models. Ann. Statist. 17, 268-281.
Feldman, I. (1991). Constrained minimax estimation of the mean of the normal distribution with known variance. Ann. Statist. 19, 2259-2265.
Feldman, D. and Fox, M. (1968). Estimation of the parameter n in the binomial distribution. J. Amer. Statist. Assoc. 63, 150-158.
Feller, W. (1968). An Introduction to Probability Theory and Its Applications, Volume 1, Third Edition. New York: Wiley.
Fend, A. V. (1959). On the attainment of the Cramér-Rao and Bhattacharyya bounds for the variance of an estimate. Ann. Math. Statist. 30, 381-388.
Feng, Z. and McCulloch, C. E. (1992). Statistical inference using maximum likelihood estimation and the generalized likelihood ratio when the true parameter is on the boundary of the parameter space. Statist. Prob. Lett. 13, 325-332.
Ferguson, T. S. (1962). Location and scale parameters in exponential families of distributions. Ann. Math. Statist. 33, 986-1001. (Correction 34, 1603.)
Ferguson, T. S. (1967). Mathematical Statistics: A Decision Theoretic Approach. New York: Academic Press.
Ferguson, T. S. (1973). A Bayesian analysis of some nonparametric problems. Ann. Statist. 1, 209-230.
Ferguson, T. S. (1978). Maximum likelihood estimation of the parameters of the Cauchy distribution for samples of size 3 and 4. J. Amer. Statist. Assoc. 73, 211-213.
Ferguson, T. S. (1982). An inconsistent maximum likelihood estimate. J. Amer. Statist. Assoc. 77, 831-834.
Ferguson, T. S. (1996). A Course in Large Sample Theory. London: Chapman & Hall.
Field, C. A. and Ronchetti, E. (1990). Small Sample Asymptotics. Hayward, CA: Institute of Mathematical Statistics.
Finch, S. J., Mendell, N. R., and Thode, H. C. (1989). Probabilistic measures of adequacy of a numerical search for a global maximum. J. Amer. Statist. Assoc. 84, 1020-1023.
Finney, D. J. (1971). Probit Analysis. New York: Cambridge University Press.
Fisher, R. A. (1920). A mathematical examination of the methods of determining the accuracy of an observation by the mean error, and by the mean square error. Monthly Notices Roy. Astron. Soc. 80, 758-770.
Fisher, R. A. (1922). On the mathematical foundations of theoretical statistics. Philos. Trans. Roy. Soc. London, Ser. A 222, 309-368.
Fisher, R. A. (1925). Theory of statistical estimation. Proc. Camb. Phil. Soc. 22, 700-725.
Fisher, R. A. (1930). Inverse probability. Proc. Camb. Phil. Soc. 26, 528-535.
Fisher, R. A. (1934). Two new properties of mathematical likelihood. Proc. Roy. Soc. A 144, 285-307.
Fisher, R. A. (1935). The fiducial argument in statistical inference. Ann. Eugenics 6, 391-398.
Fisher, R. A. (1956). On a test of significance in Pearson's Biometrika tables (No. 11). J. Roy. Statist. Soc. B 18, 56-60.
Fisher, R. A. (1959). Statistical Methods and Scientific Inference, Second Edition. New York: Hafner. Reprinted 1990, Oxford: Oxford University Press.
Fisher, N. I. (1982). Unbiased estimation for some new parametric families of distributions. Ann. Statist. 10, 603-615.
Fleming, T. R. and Harrington, D. P. (1991). Counting Processes and Survival Analysis. New York: Wiley.
Fourdrinier, D. and Wells, M. T. (1995). Estimation of a loss function for spherically symmetric distributions in the general linear model. Ann. Statist. 23, 571-592.
Fourdrinier, D., Strawderman, W. E., and Wells, M. T. (1998). On the construction of proper Bayes minimax estimators. Ann. Statist. 26, No. 2.
Foutz, R. V. (1977). On the unique consistent solution to the likelihood equations. J. Amer. Statist. Assoc. 72, 147-148.
Fox, M. (1981). An inadmissible best invariant estimator: The i.i.d. case. Ann. Statist. 9, 1127-1129.
Fraser, D. A. S. (1954). Completeness of order statistics. Can. J. Math. 6, 42-45.
Fraser, D. A. S. (1968). The Structure of Inference. New York: Wiley.
Fraser, D. A. S. (1979). Inference and Linear Models. New York: McGraw-Hill.
Fraser, D. A. S. and Guttman, I. (1952). Bhattacharyya bounds without regularity assumptions. Ann. Math. Statist. 23, 629-632.
Fréchet, M. (1943). Sur l'extension de certaines évaluations statistiques de petits échantillons. Rev. Int. Statist. 11, 182-205.
Freedman, D. and Diaconis, P. (1982). On inconsistent M-estimators. Ann. Statist. 10, 454-461.
Fu, J. C. (1982). Large sample point estimation: A large deviation theory approach. Ann. Statist. 10, 762-771.
Gajek, L. (1983). Sufficient conditions for admissibility. Proceedings of the Fourth Pannonian Symposium on Mathematical Statistics (F. Konecny, J. Mogyoródi, W. Wertz, eds.), Bad Tatzmannsdorf, Austria, pp. 107-118.
Gajek, L. (1987). An improper Cramér-Rao lower bound. Applic. Math. XIX, 241-256.
Gajek, L. (1988). On the minimax value in the scale model with truncated data. Ann. Statist. 16, 669-677.
Gajek, L. and Kaluszka, M. (1995). Nonexponential applications of a global Cramér-Rao inequality. Statistics 26, 111-122.
Gart, J. J. (1959). An extension of the Cramér-Rao inequality. Ann. Math. Statist. 30, 367-380.
Gatsonis, C. A. (1984). Deriving posterior distributions for a location parameter: A decision-theoretic approach. Ann. Statist. 12, 958-970.
Gatsonis, C., MacGibbon, B., and Strawderman, W. (1987). On the estimation of a restricted normal mean. Statist. Prob. Lett. 6, 21-30.
Gauss, C. F. (1821). Theoria combinationis observationum erroribus minimis obnoxiae. An English translation can be found in Gauss's Work (1803-1826) on the Theory of Least Squares. Trans. H. F. Trotter. Statist. Techniques Res. Group Tech. Rep. No. 5, Princeton University, Princeton. (Published translations of these papers are available in French and German.)
Gelfand, A. E. and Dey, D. K. (1988). Improved estimation of the disturbance variance in a linear regression model. J. Economet. 39, 387-395.
Gelfand, A. E. and Smith, A. F. M. (1990). Sampling-based approaches to calculating marginal densities. J. Amer. Statist. Assoc. 85, 398-409.
Gelman, A. and Rubin, D. B. (1992). Inference from iterative simulation using multiple sequences (with discussion). Statist. Sci. 7, 457-511.
Gelman, A., Carlin, J., Stern, H., and Rubin, D. B. (1995). Bayesian Data Analysis. London: Chapman & Hall.
Geman, S. and Geman, D. (1984). Stochastic relaxation, Gibbs distributions and the Bayesian restoration of images. IEEE Trans. Pattern Anal. Mach. Intell. PAMI-6, 721-740.
George, E. I. (1986a). Minimax multiple shrinkage estimators. Ann. Statist. 14, 188-205.
George, E. I. (1986b). Combining minimax shrinkage estimators. J. Amer. Statist. Assoc. 81, 437-445.
George, E. I. (1991). Shrinkage domination in a multivariate common mean problem. Ann. Statist. 19, 952-960.
George, E. I. and Casella, G. (1994). An empirical Bayes confidence report. Statistica Sinica 4, 617-638.
George, E. I. and McCulloch, R. (1993). On obtaining invariant prior distributions. J. Statist. Plan. Inform. 37, 169-179.
George, E. I. and Robert, C. P. (1992). Capture-recapture estimation via Gibbs sampling. Biometrika 79, 677-683.
George, E. I., Makov, Y., and Smith, A. F. M. (1993). Conjugate likelihood distributions. Scand. J. Statist. 20, 147-156.
George, E. I., Makov, Y., and Smith, A. F. M. (1994). Fully Bayesian hierarchical analysis for exponential families via Monte Carlo simulation. Aspects of Uncertainty (P. R. Freeman and A. F. M. Smith, eds.). New York: Wiley, pp. 181-198.
Geyer, C. (1992). Practical Markov chain Monte Carlo (with discussion). Statist. Sci. 7, 473-511.
Ghosh, J. K. and Mukerjee, R. (1991). Characterization of priors under which Bayesian and frequentist Bartlett corrections are equivalent in the multiparameter case. J. Mult. Anal. 38, 385-393.
Ghosh, J. K. and Mukerjee, R. (1992). Non-informative priors (with discussion). In Bayesian Statistics IV (J. M. Bernardo, J. O. Berger, A. P. Dawid, and A. F. M. Smith, eds.). Oxford: Oxford University Press, pp. 195-210.
Ghosh, J. K. and Mukerjee, R. (1993). On priors that match posterior and frequentist distribution functions. Can. J. Statist. 21, 89-96.
Ghosh, J. K. and Sinha, B. K. (1981). A necessary and sufficient condition for second order admissibility with applications to Berkson's bioassay problem. Ann. Statist. 9, 1334-1338.
Ghosh, J. K. and Subramanyam, K. (1974). Second order efficiency of maximum likelihood estimators. Sankhya A 36, 325-358.
Ghosh, M. (1974). Admissibility and minimaxity of some maximum likelihood estimators when the parameter space is restricted to integers. J. Roy. Statist. Soc. Ser. B 37, 264-271.
Ghosh, M. N. (1964). Uniform approximation of minimax point estimates. Ann. Math. Statist. 35, 1031-1047.
Ghosh, M. and Meeden, G. (1977). Admissibility of linear estimators in the one parameter exponential family. Ann. Statist. 5, 772-778.
Ghosh, M. and Meeden, G. (1978). Admissibility of the MLE of the normal integer mean. Sankhya B 40, 1-10.
Ghosh, M., Hwang, J. T., and Tsui, K.-W. (1983). Construction of improved estimators in multiparameter estimation for discrete exponential families. Ann. Statist. 11, 351-367.
Ghosh, M., Hwang, J. T., and Tsui, K.-W. (1987). Construction of improved estimators in multiparameter estimation for discrete exponential families. Ann. Statist. 11, 368-376.
Ghurye, S. G. and Olkin, I. (1969). Unbiased estimation of some multivariate probability densities. Ann. Math. Statist. 40, 1261-1271.
Giesbrecht, F. and Kempthorne, O. (1976). Maximum likelihood estimation in the three-parameter lognormal distribution. J. Roy. Statist. Soc. Ser. B 38, 257-264.
Gilks, W. R., Richardson, S., and Spiegelhalter, D. J., eds. (1996). Markov Chain Monte Carlo in Practice. London: Chapman & Hall.
Girshick, M. A. and Savage, L. J. (1951). Bayes and minimax estimates for quadratic loss functions. Proc. Second Berkeley Symp. Math. Statist. Prob. 1, University of California Press, pp. 53-73.
Girshick, M. A., Mosteller, F., and Savage, L. J. (1946). Unbiased estimates for certain binomial sampling problems with applications. Ann. Math. Statist. 17, 13-23.
Glasser, G. J. (1962). Minimum variance unbiased estimators for Poisson probabilities. Technometrics 4, 409-418.
Gleser, L. J. (1979). Minimax estimation of a normal mean vector when the covariance matrix is unknown. Ann. Statist. 7, 838-846.
Gleser, L. J. (1981). Estimation in a multivariate errors-in-variables regression model: Large sample results. Ann. Statist. 9, 24-44.
Gleser, L. J. (1986). Minimax estimators of a normal mean vector for arbitrary quadratic loss and unknown covariance matrix. Ann. Statist. 14, 1625-1633.
Gleser, L. J. (1991). Measurement error models (with discussion). Chem. Int. Lab. Syst. 10, 45-67.
Gleser, L. J. and Healy, J. (1976). Estimating the mean of a normal distribution with known coefficient of variation. J. Amer. Statist. Assoc. 71, 977-981.
Godambe, V. P. (1955). A unified theory of sampling from finite populations. J. Roy. Statist. Soc. Ser. B 17, 269-278.
Godambe, V. P. (1982). Estimation in survey sampling: Robustness and optimality. J. Amer. Statist. Assoc. 77, 393-406.
Godambe, V. P. (1991). Estimating Functions. UK: Clarendon Press.
Goel, P. and DeGroot, M. (1979). Comparison of experiments and information measures. Ann. Statist. 7, 1066-1077.
Goel, P. and DeGroot, M. (1980). Only normal distributions have linear posterior expectations in linear regression. J. Amer. Statist. Assoc. 75, 895-900.
Goel, P. and DeGroot, M. (1981). Information about hyperparameters in hierarchical models. J. Amer. Statist. Assoc. 76, 140-147.
Good, I. J. (1952). Rational decisions. J. Roy. Statist. Soc. Ser. B 14, 107-114.
Good, I. J. (1965). The Estimation of Probabilities: An Essay on Modern Bayesian Methods. Cambridge: M.I.T. Press.
Goodman, L. A. (1970). The multivariate analysis of qualitative data: Interactions among multiple classifications. J. Amer. Statist. Assoc. 65, 226-256.
Goutis, C. and Casella, G. (1991). Improved invariant confidence intervals for a normal variance. Ann. Statist. 19, 2015-2031.
Goutis, C. and Casella, G. (1992). Increasing the confidence in Student's t. Ann. Statist. 20, 1501-1513.
Govindarajulu, Z. and Vincze, I. (1989). The Cramér-Fréchet-Rao inequality for sequential estimation in the non-regular case. Statist. Data Anal. Inf., 257-268.
Graybill, F. A. and Deal, R. B. (1959). Combining unbiased estimators. Biometrics 15, 543-550.
Green, E. and Strawderman, W. E. (1991). A James-Stein type estimator for combining unbiased and possibly biased estimators. J. Amer. Statist. Assoc. 86, 1001-1006.
Groeneboom, P. and Oosterhoff, J. (1981). Bahadur efficiency and small sample efficiency. Int. Statist. Rev. 49, 127-141.
Gupta, A. K. and Rohatgi, V. K. (1980). On the estimation of restricted mean. J. Statist. Plan. Inform. 4, 369-379.
Gupta, M. K. (1966). On the admissibility of linear estimates for estimating the mean of distributions of the one parameter exponential family. Calc. Statist. Assoc. Bull. 15, 14-19.
Guttman, S. (1982a). Stein's paradox is impossible in problems with finite parameter spaces. Ann. Statist. 10, 1017-1020.
Guttman, S. (1982b). Stein's paradox is impossible in the nonanticipative context. J. Amer. Statist. Assoc. 77, 934-935.
Haas, G., Bain, L., and Antle, C. (1970). Inferences for the Cauchy distribution based on maximum likelihood estimators. Biometrika 57, 403-408.
Haberman, S. J. (1973). Loglinear models for frequency data: Sufficient statistics and likelihood equations. Ann. Statist. 1, 617-632.
Haberman, S. J. (1974). The Analysis of Frequency Data. Chicago: University of Chicago Press.
Haberman, S. (1989). Concavity and estimation. Ann. Statist. 17, 1631-1661.
Haff, L. R. (1979). Estimation of the inverse covariance matrix: Random mixtures of the inverse Wishart matrix and the identity. Ann. Statist. 7, 1264-1276.
Haff, L. R. and Johnson, R. W. (1986). The superharmonic condition for simultaneous estimation of means in exponential families. Can. J. Statist. 14, 43-54.
Hájek, J. (1972). Local asymptotic minimax and admissibility in estimation. Proc. Sixth Berkeley Symp. Math. Statist. Prob. 1, University of California Press, 175-194.
Hall, P. (1990). Pseudo-likelihood theory for empirical likelihood. Ann. Statist. 18, 121-140.
Hall, P. (1992). The Bootstrap and Edgeworth Expansion. New York: Springer-Verlag.
Hall, P. and La Scala, B. (1990). Methodology and algorithms of empirical likelihood. Int. Statist. Rev. 58, 109-127.
Hall, W. J., Wijsman, R. A., and Ghosh, J. K. (1965). The relationship between sufficiency and invariance with applications in sequential analysis. Ann. Math. Statist. 36, 575-614.
Halmos, P. R. (1946). The theory of unbiased estimation. Ann. Math. Statist. 17, 34-43.
Halmos, P. R. (1950). Measure Theory. New York: Van Nostrand.
Halmos, P. R. and Savage, L. J. (1949). Application of the Radon-Nikodym theorem to the theory of sufficient statistics. Ann. Math. Statist. 20, 225-241.
Hammersley, J. M. (1950). On estimating restricted parameters. J. Roy. Statist. Soc. Ser. B 12, 192-240.
Hampel, F. R., Ronchetti, E. M., Rousseeuw, P. J., and Stahel, W. A. (1986). Robust Statistics: The Approach Based on Influence Functions. New York: Wiley.
Hardy, G. H., Littlewood, J. E., and Pólya, G. (1934). Inequalities. Cambridge: Cambridge University Press.
Harter, H. L. (1974-1976). The method of least squares and some alternatives. Int. Statist. Rev. 42, 147-174, 235-268, 282; 43, 1-44, 125-190, 269-278; 44, 113-159.
Hartley, H. O. (1958). Maximum likelihood estimation from incomplete data. Biometrics 14, 174-194.
Hartung, J. (1981). Nonnegative minimum biased invariant estimation in variance component models. Ann. Statist. 9, 278-292.
Harville, D. (1976). Extensions of the Gauss-Markov theorem to include the estimation of random effects. Ann. Statist. 4, 384-395.
Harville, D. N. (1977). Maximum likelihood approaches to variance component estimation and to related problems. J. Amer. Statist. Assoc. 72, 320-340.
Harville, D. (1981). Unbiased and minimum-variance unbiased estimation of estimable functions for fixed linear models with arbitrary covariance structure. Ann. Statist. 9, 633-637.
Hastings, W. K. (1970). Monte Carlo sampling methods using Markov chains and their applications. Biometrika 57, 97-109.
Heath, D. and Sudderth, W. (1989). Coherent inference from improper priors and from finitely additive priors. Ann. Statist. 17, 907-919.
Hedayat, A. S. and Sinha, B. K. (1991). Design and Inference in Finite Population Sampling. New York: Wiley.
Helms, L. (1969). Introduction to Potential Theory. New York: Wiley.
Heyde, C. C. and Feigin, P. D. (1975). On efficiency and exponential families in stochastic process estimation. In Statistical Distributions in Scientific Work 1 (G. P. Patil, S. Kotz, and K. Ord, eds.). Dordrecht: Reidel, pp. 227-240.
Hill, B. M. (1963). The three-parameter lognormal distribution and Bayesian analysis of a point-source epidemic. J. Amer. Statist. Assoc. 58, 72-84.
Hill, B. M. (1965). Inference about variance components in a one-way model. J. Amer. Statist. Assoc. 60, 806-825.
Hinkley, D. V. (1979). Predictive likelihood. Ann. Statist. 7, 718-728. (Corr: 8, 694.)
Hinkley, D. V. (1980). Likelihood. Can. J. Statist. 8, 151-163.
Hinkley, D. V. and Runger, G. (1984). The analysis of transformed data (with discussion). J. Amer. Statist. Assoc. 79, 302-320.
Hinkley, D. V., Reid, N., and Snell, L. (1991). Statistical Theory and Modelling. In Honor of Sir David Cox. London: Chapman & Hall.
Hipp, C. (1974). Sufficient statistics and exponential families. Ann. Statist. 2, 1283-1292.
Hjort, N. L. (1976). Applications of the Dirichlet process to some nonparametric problems (Norwegian). Univ. of Tromsø, Inst. for Math. and Phys. Sciences.
Hoadley, B. (1971). Asymptotic properties of maximum likelihood estimators for the independent not identically distributed case. Ann. Math. Statist. 42, 1977-1991.
Hoaglin, D. C. (1975). The small-sample variance of the Pitman location estimators. J. Amer. Statist. Assoc. 70, 880-888.
Hoaglin, D. C., Mosteller, F., and Tukey, J. W. (1985). Exploring Data Tables, Trends and Shapes. New York: Wiley, pp. 1-36.
Hobert, J. (1994). Occurrences and consequences of nonpositive Markov chains in Gibbs sampling. Ph.D. Thesis, Biometrics Unit, Cornell University.
Hobert, J. and Casella, G. (1996). The effect of improper priors on Gibbs sampling in hierarchical linear mixed models. J. Amer. Statist. Assoc. 91, 1461-1473.
Hodges, J. L., Jr., and Lehmann, E. L. (1950). Some problems in minimax point estimation. Ann. Math. Statist. 21, 182-197.
Hodges, J. L., Jr., and Lehmann, E. L. (1951). Some applications of the Cramér-Rao inequality. Proc. Second Berkeley Symp. Math. Statist. Prob. 1, University of California Press, 13-22.
Hodges, J. L., Jr., and Lehmann, E. L. (1952). The use of previous experience in reaching statistical decisions. Ann. Math. Statist. 23, 396-407.
Hodges, J. L., Jr., and Lehmann, E. L. (1981). Minimax estimation in simple random sampling. In Essays in Statistics in Honor of C. R. Rao (P. Krishnaiah, ed.). Amsterdam: North-Holland, pp. 323-327.
Hoeffding, W. (1948). A class of statistics with asymptotically normal distribution. Ann. Math. Statist. 19, 293-325.
Hoeffding, W. (1977). Some incomplete and boundedly complete families of distributions. Ann. Statist. 5, 278-291.
Hoeffding, W. (1982). Unbiased range-preserving estimators. Festschrift for Erich Lehmann (P. J. Bickel, K. A. Doksum, and J. L. Hodges, Jr., eds.). Pacific Grove, CA: Wadsworth and Brooks/Cole, pp. 249-260.
Hoerl, A. E. and Kennard, R. W. (1971a). Ridge regression: Biased estimation for nonorthogonal problems. Technometrics 12, 55-67.
Hoerl, A. E. and Kennard, R. W. (1971b). Ridge regression: Applications to nonorthogonal problems. Technometrics 12, 69-82. (Corr: 12, 723.)
Hoffmann-Jørgensen, J. (1994). Probability with a View Toward Statistics, Volumes I and II. London: Chapman & Hall.
Hogg, R. V. (1960). On conditional expectations of location statistics. J. Amer. Statist. Assoc. 55, 714-717.
Honda, T. (1991). Minimax estimators in the MANOVA model for arbitrary quadratic loss and unknown covariance matrix. J. Mult. Anal. 36, 113-120.
Hora, R. B. and Buehler, R. J. (1966). Fiducial theory and invariant estimation. Ann. Math. Statist. 37, 643-656.
Hsu, J. C. (1982). Simultaneous inference with respect to the best treatment in block designs. J. Amer. Statist. Assoc. 77, 461-467.
Hsu, P. L. (1938). On the best unbiased quadratic estimate of the variance. Statist. Research Mem. 2, 91-104.
Hsuan, F. C. (1979). A stepwise Bayes procedure. Ann. Statist. 7, 860-868.
Huber, P. J. (1964). Robust estimation of a location parameter. Ann. Math. Statist. 35, 73-101.
Huber, P. J. (1966). Strict efficiency excludes superefficiency (Abstract). Ann. Math. Statist. 37, 1425.
Huber, P. J. (1967). The behavior of the maximum likelihood estimator under nonstandard conditions. Proc. Fifth Berkeley Symp. Math. Statist. Prob. 1, University of California Press, 221-233.
Huber, P. J. (1973). Robust regression: Asymptotics, conjectures, and Monte Carlo. Ann. Statist. 1, 799-821.
Huber, P. J. (1981). Robust Statistics. New York: Wiley.
Hudson, H. M. (1974). Empirical Bayes estimation. Technical Report No. 58, Department of Statistics, Stanford University.
Hudson, H. M. (1978). A natural identity for exponential families with applications in multiparameter estimation. Ann. Statist. 6, 473-484.
Hudson, H. M. (1985). Adaptive estimation for simultaneous estimation of Poisson means. Ann. Statist. 13, 246-261.
Hudson, H. M. and Tsui, K.-W. (1981). Simultaneous Poisson estimators for a priori hypotheses about means. J. Amer. Statist. Assoc. 76, 182-187.
Huzurbazar, V. S. (1948). The likelihood equation, consistency and the maxima of the likelihood function. Ann. Eugenics 14, 185-200.
Hwang, J. T. (1982a). Improving upon standard estimators in discrete exponential families with applications to Poisson and negative binomial cases. Ann. Statist. 10, 857-867.
Hwang, J. T. (1982b). Semi-tail upper bounds on the class of admissible estimators in discrete exponential families with applications to Poisson and binomial cases. Ann. Statist. 10, 1137-1147.
Hwang, J. T. (1985). Universal domination and stochastic domination: Estimation simultaneously under a broad class of loss functions. Ann. Statist. 13, 295-314.
Hwang, J. T. and Brown, L. D. (1991). Estimated confidence under the validity constraint. Ann. Statist. 19, 1964-1977.
Hwang, J. T. and Casella, G. (1982). Minimax confidence sets for the mean of a multivariate normal distribution. Ann. Statist. 10, 868-881.
Hwang, J. T. and Casella, G. (1984). Improved set estimators for a multivariate normal mean. Statistics and Decisions, Supplement Issue No. 1, pp. 3-16.
Hwang, J. T. and Chen, J. (1986). Improved confidence sets for the coefficients of a linear model with spherically symmetric errors. Ann. Statist. 14, 444-460.
Hwang, J. T. G. and Ullah, A. (1994). Confidence sets centered at James-Stein estimators. A surprise concerning the unknown variance case. J. Economet. 60, 145-156.
Hwang, J. T., Casella, G., Robert, C., Wells, M. T., and Farrell, R. H. (1992). Estimation of accuracy in testing. Ann. Statist. 20, 490-509.
Ibragimov, I. A. and Has'minskii, R. Z. (1972). Asymptotic behavior of statistical estimators. II. Limit theorems for the a posteriori density and Bayes' estimators. Theory Prob. Applic. 18, 76-91.
Ibragimov, I. A. and Has'minskii, R. Z. (1981). Statistical Estimation: Asymptotic Theory. New York: Springer-Verlag.
Ibrahim, J. G. and Laud, P. W. (1991). On Bayesian analysis of generalized linear models using Jeffreys's prior. J. Amer. Statist. Assoc. 86, 981-986.
Iwase, K. (1983). Uniform minimum variance unbiased estimation for the inverse Gaussian distribution. J. Amer. Statist. Assoc. 78, 660-663.
Izenman, A. J. (1991). Recent developments in nonparametric density estimation. J. Amer. Statist. Assoc. 86, 205-224.
Jackson, D. A., O'Donovan, T. M., Zimmer, W. J., and Deely, J. J. (1970). G2 minimax estimators in the exponential family. Biometrika 70, 439-443.
James, I. R. (1986). On estimating equations with censored data. Biometrika 73, 35-42.
James, W. and Stein, C. (1961). Estimation with quadratic loss. Proc. Fourth Berkeley Symp. Math. Statist. Prob. 1, University of California Press, 311-319.
Jaynes, E. T. (1979). Where do we stand on maximum entropy? The Maximum Entropy Formalism (R. D. Levine and M. Tribus, eds.). Cambridge, MA: M.I.T. Press, pp. 15-118.
Jeffreys, H. (1939, 1948, 1961). The Theory of Probability. Oxford: Oxford University Press.
Jennrich, R. I. and Oman, S. (1986). How much does Stein estimation help in multiple linear regression? Technometrics 28, 113-121.
Jensen, J. L. (1995). Saddlepoint Approximations. Oxford: Clarendon Press.
Jewell, N. P. and Raab, G. M. (1981). Difficulties in obtaining consistent estimators of variance parameters. Biometrika 68, 221-226.
Jiang, J. (1996). REML estimation: Asymptotic behavior and related topics. Ann. Statist. 24, 255-286.
Jiang, J. (1997). Wald consistency and the method of sieves in REML estimation. Ann. Statist. 25, 1781-1802.
Johansen, S. (1979). Introduction to the Theory of Regular Exponential Families. Institute of Mathematical Statistics Lecture Notes, Vol. 3. Copenhagen: University of Copenhagen.
Johnson, B. McK. (1971). On the admissible estimators for certain fixed sample binomial problems. Ann. Math. Statist. 42, 1579-1587.
Johnson, N. L. and Kotz, S. (1969-1972). Distributions in Statistics (4 vols.). New York: Wiley.
Johnson, N. L., Kotz, S., and Balakrishnan, N. (1994). Continuous Univariate Distributions, Volume 1, Second Edition. New York: Wiley.
Johnson, N. L., Kotz, S., and Balakrishnan, N. (1995). Continuous Univariate Distributions, Volume 2, Second Edition. New York: Wiley.
Johnson, N. L., Kotz, S., and Kemp, A. W. (1992). Univariate Discrete Distributions, Second Edition. New York: Wiley.
Johnson, R. A., Ladalla, J., and Liu, S. T. (1979). Differential relations in the original parameters, which determine the first two moments of the multi-parameter exponential family. Ann. Statist. 7, 232-235.
Johnson, R. W. (1987). Simultaneous estimation of binomial N's. Sankhya A 49, 264-266.
Johnstone, I. (1984). Admissibility, difference equations, and recurrence in estimating a Poisson mean. Ann. Statist. 12, 1173-1198.
Johnstone, I. (1988). On inadmissibility of some unbiased estimates of loss. Statistical Decision Theory IV (S. S. Gupta and J. O. Berger, eds.). New York: Springer-Verlag, pp. 361-380.
Johnstone, I. (1994). On minimax estimation of a sparse normal mean vector. Ann. Statist. 22, 271-289.
Johnstone, I. and MacGibbon, K. B. (1992). Minimax estimation of a constrained Poisson vector. Ann. Statist. 20, 807-831.
Johnstone, I. and MacGibbon, K. B. (1993). Asymptotically minimax estimation of a constrained Poisson vector via polydisc transforms. Ann. Inst. Henri Poincaré 29, 289-319.
Joshi, V. M. (1967). Inadmissibility of the usual confidence sets for the mean of a multivariate normal population. Ann. Math. Statist. 38, 1180-1207.
Joshi, V. M. (1969a). On a theorem of Karlin regarding admissible estimates for exponential populations. Ann. Math. Statist. 40, 216-223.
Joshi, V. M. (1969b). Admissibility of the usual confidence sets for the mean of a univariate or bivariate normal population. Ann. Math. Statist. 40, 1042-1067.
Joshi, V. M. (1976). On the attainment of the Cramér-Rao lower bound. Ann. Statist. 4, 998-1002.
Kabaila, P. V. (1983). On the asymptotic efficiency of estimators of the parameters of an ARMA process. J. Time Ser. Anal. 4, 37-47.
Kagan, A. M. and Palamodov, V. P. (1968). New results in the theory of estimation and testing hypotheses for problems with nuisance parameters. Supplement to Y. V. Linnik, Statistical Problems with Nuisance Parameters. Amer. Math. Soc. Transl. of Math. Monographs 20.
Kagan, A. M., Linnik, Yu. V., and Rao, C. R. (1965). On a characterization of the normal law based on a property of the sample average. Sankhya A 27, 405-406.
Kagan, A. M., Linnik, Yu. V., and Rao, C. R. (1973). Characterization Problems in Mathematical Statistics. New York: Wiley.
Kalbfleisch, J. D. (1986). Pseudo-likelihood. Encycl. Statist. Sci. 7, 324-327.
Kalbfleisch, J. D. and Prentice, R. L. (1980). The Statistical Analysis of Failure Time Data. New York: Wiley.
Kalbfleisch, J. D. and Sprott, D. A. (1970). Application of likelihood methods to models involving large numbers of parameters. J. Roy. Statist. Soc. Ser. B 32, 175-208.
Kariya, T. (1985). A nonlinear version of the Gauss-Markov theorem. J. Amer. Statist. Assoc. 80, 476-477.
Kariya, T. (1989). Equivariant estimation in a model with an ancillary statistic. Ann. Statist. 17, 920-928.
Karlin, S. (1958). Admissibility for estimation with quadratic loss. Ann. Math. Statist. 29, 406-436.
Karlin, S. (1968). Total Positivity. Stanford, CA: Stanford University Press.
Karlin, S. and Rubin, H. (1956). Distributions possessing a monotone likelihood ratio. J. Amer. Statist. Assoc. 51, 637-643.
Kass, R. E. and Steffey, D. (1989). Approximate Bayesian inference in conditionally independent hierarchical models (parametric empirical Bayes models). J. Amer. Statist. Assoc. 84, 717-726.
Katz, M. W. (1961). Admissible and minimax estimates of parameters in truncated spaces. Ann. Math. Statist. 32, 136-142.
Kemeny, J. G. and Snell, J. L. (1976). Finite Markov Chains. New York: Springer-Verlag.
Kempthorne, O. (1966). Some aspects of experimental inference. J. Amer. Statist. Assoc. 61, 11-34.
Kempthorne, P. (1988a). Dominating inadmissible procedures using compromise decision theory. Statistical Decision Theory IV (S. S. Gupta and J. O. Berger, eds.). New York: Springer-Verlag, pp. 381-396.
Kempthorne, P. (1988b). Controlling risks under different loss functions: The compromise decision problem. Ann. Statist. 16, 1594-1608.
Kester, A. D. M. (1985). Some Large Deviation Results in Statistics. Amsterdam: Centrum voor Wiskunde en Informatica.
Kester, A. D. M. and Kallenberg, W. C. M. (1986). Large deviations of estimators. Ann. Statist. 14, 648-664.
Khan, R. A. (1973). On some properties of Hammersley's estimator of an integer mean. Ann. Statist. 1, 838-850.
Ki, F. and Tsui, K.-W. (1985). Improved confidence set estimators of a multivariate normal mean and generalizations. Ann. Inst. Statist. Math. 37, 487-498.
Ki, F. and Tsui, K.-W. (1990). Multiple shrinkage estimators of means in exponential families. Can. J. Statist. 18, 31-46.
Kiefer, J. (1952). On minimum variance estimators. Ann. Math. Statist. 23, 627-629.
Kiefer, J. (1957). Invariance, minimax sequential estimation, and continuous time processes. Ann. Math. Statist. 28, 573-601.
Kiefer, J. (1966). Multivariate optimality results. Multivariate Analysis (P. Krishnaiah, ed.). New York: Academic Press, pp. 255-274.
Kiefer, J. (1976). Admissibility of conditional confidence procedures. Ann. Statist. 4, 836-865.
Kiefer, J. (1977). Conditional confidence statements and confidence estimators (with discussion). J. Amer. Statist. Assoc. 72, 789-827.
Kiefer, J. and Wolfowitz, J. (1956). Consistency of the maximum likelihood estimator in the presence of infinitely many incidental parameters. Ann. Math. Statist. 27, 887-906.
Kish, L. (1965). Survey Sampling. New York: Wiley.
Klaassen, C. A. J. (1984). Location estimators and spread. Ann. Statist. 12, 311-321.
Klaassen, C. A. J. (1985). On an inequality of Chernoff. Ann. Prob. 13, 966-974.
Kleffe, J. (1977). Optimal estimation of variance components-A survey. Sankhya B 39, 211-244.
Klotz, J. (1970). The geometric density with unknown location parameter. Ann. Math. Statist. 41, 1078-1082.
Klotz, J. H., Milton, R. C., and Zacks, S. (1969). Mean square efficiency of estimators of variance components. J. Amer. Statist. Assoc. 64, 1383-1402.
Kojima, Y., Morimoto, H., and Takeuchi, K. (1982). Two best unbiased estimators of normal integral mean. In Statistics and Probability: Essays in Honor of C. R. Rao (G. Kallianpur, P. R. Krishnaiah, and J. K. Ghosh, eds.). New York: North-Holland, pp. 429-441.
Kolassa, J. E. (1993). Series Approximation Methods in Statistics. New York: Springer-Verlag.
Kolmogorov, A. N. (1950). Unbiased estimates. Izvestia Akad. Nauk SSSR, Ser. Math. 14, 303-326.
Konno, Y. (1991). On estimation of a matrix of normal means. J. Mult. Anal. 36, 44-55.
Koopman, B. O. (1936). On distributions admitting a sufficient statistic. Trans. Amer. Math. Soc. 39, 399-409.
Koroljuk, V. S. and Borovskich, Yu. V. (1994). Theory of U-Statistics. Boston: Kluwer Academic Publishers.
Kozek, A. (1976). Efficiency and Cramér-Rao type inequalities for convex loss functions. Inst. Math., Polish Acad. Sci. Preprint No. 90.
Kraft, C. and LeCam, L. (1956). A remark on the roots of the likelihood equation. Ann. Math. Statist. 27, 1174-1177.
Kremers, W. (1986). Completeness and unbiased estimation for sum-quota sampling. J. Amer. Statist. Assoc. 81, 1070-1073.
Kruskal, W. (1968). When are Gauss-Markov and least squares estimators identical? A coordinate-free approach. Ann. Math. Statist. 39, 70-75.
Kubokawa, T. (1987). Admissible minimax estimation of the common mean of two normal populations. Ann. Statist. 15, 1245-1256.
Kudo, H. (1955). On minimax invariant estimates of the translation parameter. Natural Sci. Report Ochanomizu Univ. 6, 31-73.
Kullback, S. (1968). Information Theory and Statistics, Second Edition. New York: Dover. Reprinted in 1978, Gloucester, MA: Peter Smith.
Laird, N., Lange, N., and Stram, D. (1987). Maximum likelihood computation with repeated measures: Application of the EM algorithm. J. Amer. Statist. Assoc. 82, 97-105.
Lambert, J. A. (1970). Estimation of parameters in the four-parameter lognormal. Austr. J. Statist. 12, 33-44.
LaMotte, L. R. (1982). Admissibility in linear estimation. Ann. Statist. 10, 245-255.
Landers, D. (1972). Existence and consistency of modified minimum contrast estimates. Ann. Math. Statist. 43, 74-83.
Landers, D. and Rogge, L. (1972). Minimal sufficient σ-fields and minimal sufficient statistics. Two counterexamples. Ann. Math. Statist. 43, 2045-2049.
Landers, D. and Rogge, L. (1973). On sufficiency and invariance. Ann. Statist. 1, 543-544.
Lane, D. A. (1980). Fisher, Jeffreys, and the nature of probability. In R. A. Fisher: An Appreciation (S. E. Fienberg and D. V. Hinkley, eds.). Lecture Notes in Statistics 1. New York: Springer-Verlag.
Laplace, P. S. (1774). Mémoire sur la probabilité des causes par les événements. Mem. Acad. Sci. Sav. Etranger 6, 621-656.
Laplace, P. S. de (1820). Théorie analytique des probabilités, Third Edition. Paris: Courcier.
Lavine, M. (1991a). Sensitivity in Bayesian statistics: The prior and the likelihood. J. Amer. Statist. Assoc. 86, 396-399.
Lavine, M. (1991b). An approach to robust Bayesian analysis for multidimensional parameter spaces. J. Amer. Statist. Assoc. 86, 400-403.
Le Cam, L. (1953). On some asymptotic properties of maximum likelihood estimates and related Bayes' estimates. Univ. of Calif. Publ. in Statist. 1, 277-330.
Le Cam, L. (1955). An extension of Wald's theory of statistical decision functions. Ann. Math. Statist. 26, 69-81.
Le Cam, L. (1956). On the asymptotic theory of estimation and testing hypotheses. Proc. Third Berkeley Symp. Math. Statist. Prob. 1, University of California Press, 129-156.
Le Cam, L. (1958). Les propriétés asymptotiques des solutions de Bayes. Publ. Inst. Statist. l'Univ. Paris VII, Fasc. 3-4, 17-35.
Le Cam, L. (1969). Théorie Asymptotique de la Décision Statistique. Montréal: Les Presses de l'Université de Montréal.
Le Cam, L. (1970). On the assumptions used to prove asymptotic normality of maximum likelihood estimates. Ann. Math. Statist. 41, 802-828.
Le Cam, L. (1979a). On a theorem of J. Hájek. Contributions to Statistics, J. Hájek Memorial Volume (J. Jureckova, ed.). Prague: Academia.
Le Cam, L. (1979b). Maximum Likelihood: An Introduction. Lecture Notes in Statistics No. 18. University of Maryland, College Park, MD.
Le Cam, L. (1986). Asymptotic Methods in Statistical Decision Theory. New York: Springer-Verlag.
Le Cam, L. (1990). Maximum likelihood: An introduction. Int. Statist. Rev. 58, 153-171.
Le Cam, L. and Yang, G. L. (1990). Asymptotics in Statistics: Some Basic Concepts. New York: Springer-Verlag.
Lee, A. J. (1990). U-Statistics, Theory and Practice. New York: Marcel Dekker.
Lehmann, E. L. (1951). A general concept of unbiasedness. Ann. Math. Statist. 22, 587-592.
Lehmann, E. L. (1981). An interpretation of completeness and Basu's theorem. J. Amer. Statist. Assoc. 76, 335-340.
Lehmann, E. L. (1983). Estimation with inadequate information. J. Amer. Statist. Assoc. 78, 624-627.
Lehmann, E. L. (1986). Testing Statistical Hypotheses, Second Edition (TSH2). New York: Springer-Verlag.
Lehmann, E. L. (1999). Elements of Large-Sample Theory. New York: Springer-Verlag.
Lehmann, E. L. and Loh, W.-L. (1990). Pointwise versus uniform robustness of some large-sample tests and confidence intervals. Scan. J. Statist. 17, 177-187.
Lehmann, E. L. and Scheffé, H. (1950, 1955, 1956). Completeness, similar regions and unbiased estimation. Sankhya 10, 305-340; 15, 219-236. (Corr: 17, 250.)
Lehmann, E. L. and Scholz, F. W. (1992). Ancillarity. Current Issues in Statistical Inference: Essays in Honor of D. Basu (M. Ghosh and P. K. Pathak, eds.). Hayward, CA: Institute of Mathematical Statistics, pp. 32-51.
Lehmann, E. L. and Stein, C. (1950). Completeness in the sequential case. Ann. Math. Statist. 21, 376-385.
Lele, S. (1993). Euclidean distance matrix analysis (EDMA): Estimation of the mean form and mean form difference. Math. Geol. 5, 573-602.
Lele, S. (1994). Estimating functions in chaotic systems. J. Amer. Statist. Assoc. 89, 512-516.
Leonard, T. (1972). Bayesian methods for binomial data. Biometrika 59, 581-589.
LePage, R. and Billard, L. (1992). Exploring the Limits of Bootstrap. New York: Wiley.
Letac, G. and Mora, M. (1990). Natural real exponential families with cubic variance functions. Ann. Statist. 18, 1-37.
Levit, B. (1980). On asymptotic minimax estimators of the second order. Theor. Prob. Applic. 25, 552-568.
Liang, K.-Y. and Waclawiw, M. A. (1990). Extension of the Stein estimating procedure through the use of estimating functions. J. Amer. Statist. Assoc. 85, 435-440.
Liang, K.-Y. and Zeger, S. L. (1994). Inference based on estimating functions in the presence of nuisance parameters (with discussion). Statist. Sci. 10, 158-166.
Lindley, D. V. (1962). Discussion of the paper by Stein. J. Roy. Statist. Soc. Ser. B 24, 265-296.
Lindley, D. V. (1964). The Bayesian analysis of contingency tables. Ann. Math. Statist. 35, 1622-1643.
Lindley, D. V. (1965). Introduction to Probability and Statistics. Cambridge: Cambridge University Press.
Lindley, D. V. (1965). Introduction to Probability and Statistics from a Bayesian Viewpoint. Part 2. Inference. Cambridge: Cambridge University Press.
Lindley, D. V. and Phillips, L. D. (1976). Inference for a Bernoulli process (a Bayesian view). Amer. Statist. 30, 112-119.
Lindley, D. V. and Smith, A. F. M. (1972). Bayes estimates for the linear model (with discussion). J. Roy. Statist. Soc. Ser. B 34, 1-41.
Lindsay, B. and Yi, B. (1996). On second-order optimality of the observed Fisher information. Technical Report No. 95-2, Center for Likelihood Studies, Pennsylvania State University.
Linnik, Yu. V. and Rukhin, A. L. (1971). Convex loss functions in the theory of unbiased estimation. Soviet Math. Dokl. 12, 839-842.
Little, R. J. A. and Rubin, D. B. (1987). Statistical Analysis with Missing Data. New York: Wiley.
Liu, C. and Rubin, D. B. (1994). The ECME algorithm: A simple extension of EM and ECM with faster monotone convergence. Biometrika 81, 633-648.
Liu, R. and Brown, L. D. (1993). Nonexistence of informative unbiased estimators in singular problems. Ann. Statist. 21, 1-13.
Loh, W.-L. (1991). Estimating the common mean of two multivariate distributions. Ann. Statist. 19, 297-313.
Louis, T. A. (1982). Finding the observed information matrix when using the EM algorithm. J. Roy. Statist. Soc. Ser. B 44, 226-233.
Lu, K. L. and Berger, J. O. (1989a). Estimation of normal means: frequentist estimation of loss. Ann. Statist. 17, 890-906.
Lu, K. L. and Berger, J. O. (1989b). Estimated confidence procedures for multivariate normal means. J. Statist. Plan. Inform. 23, 1-19.
Luenberger, D. G. (1969). Optimization by Vector Space Methods. New York: Wiley.
Maatta, J. and Casella, G. (1987). Conditional properties of interval estimators of the normal variance. Ann. Statist. 15, 1372-1388.
Maatta, J. and Casella, G. (1990). Developments in decision-theoretic variance estimation (with discussion). Statist. Sci. 5, 90-120.
MacEachern, S. N. (1993). A characterization of some conjugate prior distributions for exponential families. Scan. J. Statist. 20, 77-82.
Madansky, A. (1962). More on length of confidence intervals. J. Amer. Statist. Assoc. 57, 586-589.
Makani, S. M. (1972). Admissibility of linear functions for estimating sums and differences of exponential parameters. Ph.D. Thesis, University of California, Berkeley.
Makani, S. M. (1977). A paradox in admissibility. Ann. Statist. 5, 544-546.
Makelainen, T., Schmidt, K., and Styan, G. (1981). On the existence and uniqueness of the maximum likelihood estimate of a vector-valued parameter in fixed-size samples. Ann. Statist. 9, 758-767.
Mandelbaum, A. and Rüschendorf, L. (1987). Complete and symmetrically complete families of distributions. Ann. Statist. 15, 1229-1244.
Marazzi, A. (1980). Robust Bayesian estimation for the linear model. Res. Report No. 27, Fachgruppe f. Stat., Eidg. Tech. Hochsch., Zürich.
Maritz, J. S. and Lwin, T. (1989). Empirical Bayes Methods, Second Edition. London: Chapman & Hall.
Marshall, A. and Olkin, I. (1979). Inequalities: Theory of Majorization and Its Applications. New York: Academic Press.
Mathew, T. (1984). On nonnegative quadratic unbiased estimability of variance components. Ann. Statist. 12, 1566-1569.
Mattner, L. (1992). Completeness of location families, translated moments, and uniqueness of charges. Prob. Theory Relat. Fields 92, 137-149.
Mattner, L. (1993). Some incomplete but boundedly complete location families. Ann. Statist. 21, 2158-2162.
Mattner, L. (1994). Complete order statistics in parametric models. Ann. Statist. 24, 1265-1282.
McCullagh, P. (1991). Quasi-likelihood and estimating functions. In Statistical Theory and Modelling: In Honor of Sir David Cox (D. Hinkley, N. Reid, and L. Snell, eds.). London: Chapman & Hall, pp. 265-286.
McCullagh, P. (1992). Conditional inference and Cauchy models. Biometrika 79, 247-259.
McCullagh, P. and Nelder, J. A. (1989). Generalized Linear Models, Second Edition. London: Chapman & Hall.
McLachlan, G. (1997). Recent Advances in Finite Mixture Models. New York: Wiley.
McLachlan, G. and Basford, K. (1988). Mixture Models: Inference and Applications to Clustering. New York: Marcel Dekker.
McLachlan, G. and Krishnan, T. (1997). The EM Algorithm and Extensions. New York: Wiley.
Meeden, G. and Ghosh, M. (1983). Choosing between experiments: Applications to finite population sampling. Ann. Statist. 11, 296-305.
Meeden, G., Ghosh, M., and Vardeman, S. (1985). Some admissible nonparametric and related finite population sampling estimators. Ann. Statist. 13, 811-817.
Meng, X.-L. and Rubin, D. B. (1993). Maximum likelihood estimation via the ECM algorithm: A general framework. Biometrika 80, 267-278.
Messig, M. A. and Strawderman, W. E. (1993). Minimal sufficiency and completeness for dichotomous quantal response models. Ann. Statist. 21, 2141-2157.
Metropolis, N., Rosenbluth, A. W., Rosenbluth, M. N., Teller, A. H., and Teller, E. (1953). Equations of state calculations by fast computing machines. J. Chem. Phys. 21, 1087-1092.
Meyn, S. and Tweedie, R. (1993). Markov Chains and Stochastic Stability. New York: Springer-Verlag.
Mikulski, P. W. and Monsour, M. (1988). On attainable Cramér-Rao type lower bounds for weighted loss functions. Statist. Prob. Lett. 7, 1-2.
Miller, J. J. (1977). Asymptotic properties of maximum likelihood estimates in the mixed model of the analysis of variance. Ann. Statist. 5, 746-762.
Moore, T. and Brook, R. J. (1978). Risk estimate optimality of James-Stein estimators. Ann. Statist. 6, 917-919.
Moors, J. J. A. (1981). Inadmissibility of linearly invariant estimators in truncated parameter spaces. J. Amer. Statist. Assoc. 76, 910-915.
Morris, C. N. (1977). Interval estimation for empirical Bayes generalizations of Stein's estimator. Proc. Twenty-Second Conf. Design Exp. in Army Res. Devel. Test., ARO Report 77-2.
Morris, C. N. (1982). Natural exponential families with quadratic variance functions. Ann. Statist. 10, 65-80.
Morris, C. N. (1983a). Parametric empirical Bayes inference: Theory and applications (with discussion). J. Amer. Statist. Assoc. 78, 47-65.
Morris, C. N. (1983b). Natural exponential families with quadratic variance functions: Statistical theory. Ann. Statist. 11, 515-529.
Morton, R. and Raghavachari, M. (1966). On a theorem of Karlin regarding admissibility of linear estimates in exponential populations. Ann. Math. Statist. 37, 1809-1813.
Mosteller, F. (1946). On some useful inefficient statistics. Ann. Math. Statist. 17, 377-408.
Müller-Funk, U., Pukelsheim, F., and Witting, H. (1989). On the attainment of the Cramér-Rao bound in Lr-differentiable families of distributions. Ann. Statist. 17, 1742-1748.
Murray, G. D. (1977). Discussion of the paper by Dempster, Laird and Rubin. J. Roy. Statist. Soc. Ser. B 39, 27-28.
Murray, M. K. and Rice, J. W. (1993). Differential Geometry and Statistics. London: Chapman & Hall.
Natarajan, J. and Strawderman, W. E. (1985). Two-stage sequential estimation of a multivariate normal mean under quadratic loss. Ann. Statist. 13, 1509-1522.
Natarajan, R. and McCulloch, C. E. (1995). A note on the existence of the posterior distribution for a class of mixed models for binomial responses. Biometrika 82, 639-643.
Nelder, J. A. and Wedderburn, R. W. M. (1972). Generalized linear models. J. Roy. Statist. Soc. A 135, 370-384.
Newcomb, S. (1882). Discussion and results of observations on transits of Mercury from 1677 to 1881. Astronomical Papers, Vol. 1, U.S. Nautical Almanac Office, 363-487.
Newcomb, S. (1886). A generalized theory of the combination of observations so as to obtain the best result. Amer. J. Math. 8, 343-366.
Neyman, J. (1934). On the two different aspects of the representative method: The method of stratified sampling and the method of purposive selection. J. Roy. Statist. Soc. 97, 558-625.
Neyman, J. (1935). Su un teorema concernente le cosidette statistiche sufficienti. Giorn. Ist. Ital. Att. 6, 320-334.
Neyman, J. (1949). Contributions to the theory of the χ² test. Proc. First Berkeley Symp. Math. Statist. Prob. 1, University of California Press, 239-273.
Neyman, J. and Pearson, E. S. (1933a). The testing of statistical hypotheses in relation to probabilities a priori. Proc. Camb. Phil. Soc. 24, 492-510.
Neyman, J. and Pearson, E. S. (1933b). On the problem of the most efficient tests of statistical hypotheses. Phil. Trans. Roy. Soc. A 231, 289-337.
Neyman, J. and Scott, E. L. (1948). Consistent estimates based on partially consistent observations. Econometrica 16, 1-32.
Noorbaloochi, S. and Meeden, G. (1983). Unbiasedness as the dual of being Bayes. J. Amer. Statist. Assoc. 78, 619-623.
Nordberg, L. (1980). Asymptotic normality of maximum likelihood estimators based on independent, unequally distributed observations in exponential family models. Scand. J. Statist. 7, 27-32.
Norden, R. H. (1972-73). A survey of maximum likelihood estimation. Int. Statist. Rev. 40, 329-354; 41, 39-58.
Novick, M. R. and Jackson, P. H. (1974). Statistical Methods for Educational and Psychological Research. New York: McGraw-Hill.
Oakes, D. (1991). Life-table analysis. Statistical Theory and Modelling, in Honor of Sir David Cox, FRS. London: Chapman & Hall, pp. 107-128.
Obenchain, R. L. (1981). Good and optimal ridge estimators. Ann. Statist. 6, 1111-1121.
Olkin, I. and Pratt, J. W. (1958). Unbiased estimation of certain correlation coefficients. Ann. Math. Statist. 29, 201-211.
Olkin, I. and Selliah, J. B. (1977). Estimating covariances in a multivariate distribution. In Statistical Decision Theory and Related Topics II (S. S. Gupta and D. S. Moore, eds.). New York: Academic Press, pp. 313-326.
Olkin, I. and Sobel, M. (1979). Admissible and minimax estimation for the multinomial distribution and for independent binomial distributions. Ann. Statist. 7, 284-290.
Olkin, I., Petkau, A., and Zidek, J. V. (1981). A comparison of n estimators for the binomial distribution. J. Amer. Statist. Assoc. 76, 637-642.
Olshen, R. A. (1977). Comments on "A note on a reformulation of the S-method of multiple comparisons." J. Amer. Statist. Assoc. 72, 144-146.
Oman, S. (1982a). Contracting towards subspaces when estimating the mean of a multivariate distribution. J. Mult. Anal. 12, 270-290.
Oman, S. (1982b). Shrinking towards subspaces in multiple linear regression. Technometrics 24, 307-311.
Oman, S. (1985). Specifying a prior distribution in structured regression problems. J. Amer. Statist. Assoc. 80, 190-195.
Oman, S. (1991). Random calibration with many measurements: An application of Stein estimation. Technometrics 33, 187-195.
Owen, A. (1988). Empirical likelihood ratio confidence intervals for a single functional. Biometrika 75, 237-249.
Owen, A. (1990). Empirical likelihood ratio confidence regions. Ann. Statist. 18, 90-120.
Padmanabhan, A. R. (1970). Some results on minimum variance unbiased estimation. Sankhya 32, 107-114.
Pathak, P. K. (1976). Unbiased sampling in fixed cost sequential sampling schemes. Ann. Statist. 4, 1012-1017.
Pearson, K. (1894). Contributions to the mathematical theory of evolution. Phil. Trans. Roy. Soc. Ser. A 185, 71-110.
Peers, H. W. (1965). On confidence points and Bayesian probability points in the case of several parameters. J. Roy. Statist. Soc. Ser. B 27, 16-27.
Peisakoff, M. P. (1950). Transformation Parameters. Ph.D. Thesis, Princeton University, Princeton, NJ.
Perlman, M. (1972). On the strong consistency of approximate maximum likelihood estimators. Proc. Sixth Berkeley Symp. Math. Statist. Prob. 1, University of California Press, 263-281.
Perlman, M. (1983). The limiting behavior of multiple roots of the likelihood equation. Recent Advances in Statistics: Papers in Honor of Herman Chernoff on his Sixtieth Birthday (M. H. Rizvi, J. S. Rustagi, and D. Siegmund, eds.). New York: Academic Press.
Perng, S. K. (1970). Inadmissibility of various 'good' statistical procedures which are translation invariant. Ann. Math. Statist. 41, 1311-1321.
Pfaff, Th. (1982). Quick consistency of quasi maximum likelihood estimators. Ann. Statist. 10, 990-1005.
Pfanzagl, J. (1969). On the measurability and consistency of minimum contrast estimators. Metrika 14, 249-272.
Pfanzagl, J. (1970). On the asymptotic efficiency of median unbiased estimates. Ann. Math. Statist. 41, 1500-1509.
Pfanzagl, J. (1972). Transformation groups and sufficient statistics. Ann. Math. Statist. 43, 553-568.
Pfanzagl, J. (1973). Asymptotic expansions related to minimum contrast estimators. Ann. Statist. 1, 993-1026.
Pfanzagl, J. (1979). On optimal median unbiased estimators in the presence of nuisance parameters. Ann. Statist. 7, 187-193.
Pfanzagl, J. (1985). Asymptotic Expansions for General Statistical Models. New York: Springer-Verlag.
Pfanzagl, J. (1990). Large deviation probabilities for certain nonparametric maximum likelihood estimators. Ann. Statist. 18, 1868-1877.
Pfanzagl, J. (1994). Parametric Statistical Theory. New York: DeGruyter.
Pfanzagl, J. and Wefelmeyer, W. (1978-1979). A third order optimum property of the maximum likelihood estimator. J. Mult. Anal. 8, 1-29; 9, 179-182.
Piegorsch, W. W. and Casella, G. (1996). Empirical Bayes estimation for logistic regression and extended parametric regression models. J. Ag. Bio. Env. Statist. 1, 231-249.
Ping, C. (1964). Minimax estimates of parameters of distributions belonging to the exponential family. Chinese Math. 5, 277-299.
Pitcher, T. S. (1957). Sets of measures not admitting necessary and sufficient statistics or subfields. Ann. Math. Statist. 28, 267-268.
Pitman, E. J. G. (1936). Sufficient statistics and intrinsic accuracy. Proc. Camb. Phil. Soc. 32, 567-579.
Pitman, E. J. G. (1939). The estimation of the location and scale parameters of a continuous population of any given form. Biometrika 30, 391-421.
Pitman, E. J. G. (1979). Some Basic Theory for Statistical Inference. London: Chapman & Hall.
Plackett, R. L. (1958). The principle of the arithmetic mean. Biometrika 45, 130-135.
Plackett, R. L. (1972). The discovery of the method of least squares. Biometrika 59, 239-251.
Polfeldt, T. (1970). Asymptotic results in non-regular estimation. Skand. Akt. Tidskr. Suppl. 1-2.
Polson, N. and Wasserman, L. (1990). Prior distributions for the bivariate binomial. Biometrika 77, 901-904.
Portnoy, S. (1971). Formal Bayes estimation with application to a random effects model. Ann. Math. Statist. 42, 1379-1402.
Portnoy, S. (1977a). Robust estimation in dependent situations. Ann. Statist. 5, 22-43.
Portnoy, S. (1977b). Asymptotic efficiency of minimum variance unbiased estimators. Ann. Statist. 5, 522-529.
Portnoy, S. (1984). Asymptotic behavior of M-estimators of p regression parameters when p²/n is large. I. Consistency. Ann. Statist. 12, 1298-1309.
Portnoy, S. (1985). Asymptotic behavior of M-estimators of p regression parameters when p²/n is large: II. Normal approximation. Ann. Statist. 13, 1403-1417. (Corr: 19, 2282.)
Pratt, J. W. (1976). F. Y. Edgeworth and R. A. Fisher on the efficiency of maximum likelihood estimation. Ann. Statist. 4, 501-514.
Pregibon, D. (1980). Goodness of link tests for generalized linear models. Appl. Statist. 29, 15-24.
Pregibon, D. (1981). Logistic regression diagnostics. Ann. Statist. 9, 705-724.
Pugh, E. L. (1963). The best estimate of reliability in the exponential case. Oper. Res. 11, 57-61.
Pukelsheim, F. (1981). On the existence of unbiased nonnegative estimates of variance covariance components. Ann. Statist. 9, 293-299.
Quandt, R. E. and Ramsey, J. B. (1978). Estimating mixtures of normal distributions and switching regressions. J. Amer. Statist. Assoc. 73, 730-752.
Quenouille, M. H. (1949). Approximate tests of correlation in time series. J. Roy. Statist. Soc. Ser. B 11, 18-44.
Quenouille, M. H. (1956). Notes on bias in estimation. Biometrika 43, 353-360.
Raiffa, H. and Schlaifer, R. (1961). Applied Statistical Decision Theory. Cambridge, MA: Harvard University Press.
Ralescu, S., Brandwein, A. C., and Strawderman, W. E. (1992). Stein estimation for non-normal spherically symmetric location families in three dimensions. J. Mult. Anal. 42, 35-50.
Sufficiency, invariance, and independence in invariant models. J. Statist. Plan. Inform. 26, 59-63. Rao, B. L. S. Prakasa (1992). Cramer-Rao type integral inequalities for estimators of func-tions of multidimensional parameter. Sankhya 54, 53-73. Rao, C. R. (1945). Information and the accuracy attainable in the estimation of statistical parameters. Bull. Calc. Math. Soc. 37, 81-91. Rao, C. R. (1947). Minimum variance and the estimation of several parameters. Camb. Phil. Soc. 43, 280-283. Rao, C. R. (1949). Sufficient statistics and minimum variance estimates. Proc. Camb. Phil. Soc. 45, 213-218. Rao, C. R. (1961). Asymptotic efficiency and limiting information. Proc. Fourth Berkeley Symp. Math. Statist. Prob. 1, University of California Press, 531-546. Rao, C. R. (1963). Criteria of estimation in large samples. Sankhya 25, 189-206. Rao, C. R. (1970). Estimation of heteroscedastic variances in linear models. J. Amer. Statist. Assoc. 65, 161-172. Rao, C. R. (1976). Estimation of parameters in a linear model. Ann. Statist. 4, 1023-1037. Rao, C. R. (1977). Simultaneous estimation of parameters—A compound decision problem. In Decision Theory and Related Topics (S. S. Gupta and D. S. Moore, eds.). New York: Academic Press, pp. 327-350. Rao, C. R. and Kleffe, J. (1988). Estimation of variance components and applications. Amsterdam: North Holland/Elsevier. Rao, J. N. K. (1980). Estimating the common mean of possibly different normal populations: A simulation study. J. Amer. Statist. Assoc. 75, 447-453. Redner, R. (1981). Note on the consistency of the maximum likelihood estimate for non-identifiable distributions. Ann. Statist. 9, 225-228. Reeds, J. (1985). Asymptotic number of roots of Cauchy location likelihood equations. Ann. Statist. 13, 775-784. Reid, N. (1988). Saddlepoint methods and statistical inference (with discussion). Statist. Sci. 3, 213-238. 556 REFERENCES [ 6.10 Reid, N. (1995). The role of conditioning in inference (with discussion). Statist. Sci. 10, 138-166. Reid, N. (1996). Likelihood and Bayesian approximation methods. Bayesian Statistics 5, J. M. Bernardo, ed., 351-368. R´ enyi, A. (1961). On measures of entropy and information. Proc. Fourth Berkeley Symp. Math. Statist. Prob. 1, University of California Press, pp. 547-561. Resnick, S. I. (1992). Adventures in Stochastic Processes. Basel: Birkhauser. Ripley, B. (1987). Stochastic Simulation. New York: Wiley. Robbins, H. (1951). Asymptotically subminimax solutions of compound statistical decision problems. In Proc. Second Berkeley Symp. Math. Statist. Probab. 1. Berkeley: University of California Press. Robbins, H. (1964). The empirical Bayes approach to statistical decision problems. Ann. Math. Statist. 35, 1-20. Robbins, H. (1983). Some thoughts on empirical Bayes estimation. Ann. Statist. 11, 713-723. Robert, C. (1991). Generalized inverse normal distributions, Statist. Prob. Lett. 11, 37-41. Robert, C. P. (1994a). The Bayesian Choice: A Decision-Theoretic Motivation. New York: Springer-Verlag. Robert, C.P. (1994b). Discussion of the paper by Tierney. Ann. Statist. 22, 1742-1747. Robert, C. (1995). Convergence control methods for Markov chain Monte Carlo algorithms. Statist. Sci. 10, 231-253. Robert, C. and Casella, G. (1990). Improved confidence sets in spherically symmetric distributions. J. Mult. Anal. 32, 84-94. Robert, C. and Casella, G. (1994). Improved confidence statements for the usual multivariate normal confidence set. Statistical Decision Theory V (S. S. Gupta and J. O. Berger, eds.). 
New York: Springer-Verlag, pp. 351-368. Robert, C. and Casella, G. (1998). Monte Carlo Statistical Methods. New York: Springer-Verlag. Robert, C., Hwang, J. T., and Strawderman, W. E. (1993). Is Pitman closeness a reasonable criterion? (with discussion). J. Amer. Statist. Assoc. 88, 57-76. Roberts, A. W. and Varberg, D. E. (1973). Complex Functions. New York: Academic Press. Robinson, G. K. (1979a). Conditional properties of statistical procedures. Ann. Statist. 7, 742-755. Robinson, G. K. (1979b). Conditional properties of statistical procedures for location and scale parameters. Ann. Statist. 7, 756-771. Robson, D. S. and Whitlock, J. H. (1964). Estimation of a truncation point. Biometrika 51, 33. Romano, J. P. and Siegel, A. F. (1986). Counterexamples in Probability and Statistics. Monterey, CA: Wadsworth and Brooks/Cole. Rosenblatt, M. (1956). Remark on some nonparametric estimates of a density function. Ann. Math. Statist. 27, 832-837. Rosenblatt, M. (1971). Curve estimates. Ann. Math. Statist. 42, 1815-1842. Rosenblatt, M. (1971). Markov Processes. Structure and Asymptotic Behavior. New York: Springer-Verlag 6.10 ] REFERENCES 557 Ross, S. (1985). Introduction to Probability Models, Third Edition. New York: Academic Press. Rothenberg, T. J. (1977). The Bayesian approach and alternatives in econometrics. In Stud-ies in Bayesian Econometrics and Statistics. Vol. 1 (S. Fienberg and A. Zellner, eds.). Amsterdam: North-Holland, pp. 55-75. Roy, J. and Mitra, S. K. (1957). Unbiased minimum variance estimation in a class of discrete distributions. Sankhya 8, 371-378. Royall, R. M. (1968). An old approach to finite population sampling theory. J. Amer. Statist. Assoc. 63, 1269-1279. Rubin, D. B. and Weisberg, S. (1975). The variance of a linear combination of independent estimators using estimated weights. Biometrika 62, 708-709. Rubin, D. B. (1976). Inference and missing data. Biometrika 63, 581-590. Rubin, D. B. (1987). Multiple Imputation for Nonresponse in Surveys. New York: Wiley. Rudin, W. (1966). Real and Complex Analysis. New York: McGraw-Hill. Rukhin, A. L. (1978). Universal Bayes estimators. Ann. Statist. 6, 1345-1351. Rukhin, A. L. (1987). How much better are better estimators of the normal variance? J. Amer. Statist. Assoc. 82, 925-928. Rukhin, A. L. (1988a). Estimated loss and admissible loss estimators. Statistical Decision Theory IV (S. S. Gupta and J. O. Berger, Eds.). New York: Springer-Verlag, pp. 409-420. Rukhin, A. L. (1988b). Loss functions for loss estimation. Ann. Statist. 16, 1262-1269. Rukhin, A. L. (1988c). Improved estimation in lognormal regression models. J. Statist. Plan. Inform. 18, 291-297. Rukhin, A. (1995). Admissibility: Survey of a concept in progress. Int. Statist. Rev. 63, 95-115. Rukhin, A. and Strawderman, W. E. (1982). Estimating a quantile of an exponential distri-bution. J. Amer. Statist. Assoc. 77, 159-162. Rutkowska, M. (1977). Minimax estimation of the parameters of the multivariate hyperge-ometric and multinomial distributions. Zastos. Mat. 16, 9-21. Sacks, J. (1963). Generalized Bayes solutions in estimation problems. Ann. Math. Statist. 34, 751-768. Sampford, M. R. (1953). Some inequalities on Mill’s ratio and related functions. Ann. Math. Statist. 10, 643-645. Santner, T. J. and Duffy, D. E. (1990). The Statistical Analysis of Discrete Data. New York: Springer-Verlag. S¨ arndal, C-E., Swenson, B., and Wretman, J. (1992). Model Assisted Survey Sampling. New York: Springer-Verlag. Savage, L. J. (1954, 1972). The Foundations of Statistics. 
New York: Wiley. Rev. ed., Dover Publications. Savage, L. J. (1976). On rereading R. A. Fisher (with discussion). Ann. Statist. 4, 441-500. Schervish, M. (1995). Theory of Statistics. New York: Springer-Verlag. Scheff´ e, H. (1959). The Analysis of Variance. New York: Wiley. Scholz, F. W. (1980). Towards a unified definition of maximum likelihood. Can. J. Statist. 8, 193-203. 558 REFERENCES [ 6.10 Scholz, F. W. (1985). Maximum likelihood estimation. In Encyclopedia of Statistical Sci-ences 5, (S. Kotz, N. L. Johnson, and C. B. Read, eds.). New York: Wiley. Sclove, S. L., Morris, C., and Radhakrishnan, R. (1972). Non optimality of preliminary-test estimators for the mean of a multivariate normal distribution. Ann. Math. Statist. 43, 1481-1490. Seal, H. L. (1967). The historical development of the Gauss linear model. Biometrika 54, 1-24. Searle, S. R. (l971a). Linear Models. New York: Wiley. Searle, S. R. (l97lb). Topics in variance component estimation Biometrics 27, 1-76. Searle, S. R. (1987). Linear Models for Unbalanced Data. New York: Wiley. Searle, S.R., Casella, G., and McCulloch, C. E. (1992). Variance Components. New York: Wiley. Seber, G. A. F. (1977). Linear Regression Analysis. New York: Wiley. Self, S. G. and Liang, K-Y. (1987). Asymptotic properties of maximum likelihood estimators and likelihood ratio tests under nonstandard conditions. J. Amer. Statist. Assoc. 82, 605-610. Sen, P. K. and Ghosh, B. K. (1976). Comparison of some bounds in estimation theory. Ann. Statist. 4, 755-765. Sen, P. K. and Saleh, A. K. Md. (1985). On some shrinkage estimators of multivariate location. Ann. Statist. 13, 272-281. Sen, P. K. and Saleh, A. K. Md. (1987). On preliminary test and shrinkage M-estimation in linear models. Ann. Statist. 15, 1580-1592. Serfling, R. J. (1980). Approximation Theorems of Mathematical Statistics. New York: Wiley. Seshadri, V. (1963). Constructing uniformly better estimators. J. Amer. Statist. Assoc. 58, 172-175. Seth, G. R. (1949). On the variance of estimates. Ann. Math. Statist. 20, 1-27. Shaffer, J. P. (1991). The Gauss-Markov theorem and random regressors. Amer. Statist. 45, 269-273. Shao, J. and Tu, D. (1995). The Jackknife and the Bootstrap. New York: Springer-Verlag Shao, P. Y-S. and Strawderman, W. E. (1994). Improving on the James-Stein positive-part estimator. Ann. Statist. 22, 1517-1538. Shemyakin, A. E. (1987). Rao-Cramer type integral inequalities for estimates of a vector parameter. Theoret. Prob. Applic. 32, 426-434. Shinozaki, N. (1980). Estimation of a multivariate normal mean with a class of quadratic loss functions. J. Amer. Statist. Assoc. 75, 973-976. Shinozaki, N. (1984). Simultaneous estimation of location parameters under quadratic loss. Ann. Statist. 12, 233-335. Shinozaki, N. (1989). Improved confidence sets for the mean of a multivariate distribution. Ann. Inst. Statist. Math. 41, 331-346. Shorrock, G. (1990). Improved confidence intervals for a normal variance. Ann. Statist. 18, 972-980. Sieders, A. and Dzhaparidze, K. (1987). A large deviation result for parameter estimators and its application to nonlinear regression analysis. Ann. Statist. 15, 1031-1049. 6.10 ] REFERENCES 559 Silverman, B. W. (1986) Density Estimation for Statistic and Data Analysis. London: Chap-man & Hall. Simons, G. (1980). Sequential estimators and the Cram´ er-Rao lower bound. J. Statist. Plan. Inform. 4, 67-74. Simpson, D. G., Carroll, R. J., and Ruppert, D. (1987). M-estimation for discrete data: Asymptotic distribution theory and implications. Ann. Statist. 
15, 657-669. Simpson, T. (1755). A letter to the Rignt Honorable George Earl of Macclesfield, President of the Royal Society. on the advantage of taking the mean of a number of observations, in practical astronomy. Phil. Trans. R. Soc. London 49 (Pt. 1), 82-93. Singh, K. (1981). On the asymptotic accuracy of Efron’s bootstrap. Ann. Statist. 9, 1187-1995. Sivagenesan, S. and Berger, J. O. (1989). Ranges of posterior measures for priors with unimodal contaminations. Ann. Statist. 17, 868-889. Smith, A. F. M. and Roberts, G. O. (1993). Bayesian computation via the GIbbs sampler and related Markov chain methods (with discussion). J. Roy. Statist. Soc. Ser. B 55, 3-23. Smith, W. L. (1957). A note on truncation and sufficient statistics. Ann. Math. Statist. 28, 247-252. Snedecor, G. W. and Cochran, W. G. (1989) Statistical Methods, Eighth Edition. Ames, IA: Iowa State University Press. Solari, M. E. (1969). The ’maximum likelihood solution’ of the problem of estimating a linear functional relationship. J. Roy. Statist. Soc. Ser. B 31, 372-375. Solomon, D. L. (1972a). H-minimax estimation of a multivariate location parameter. J. Amer. Statist. Assoc. 67, 641-646. Solomon, D. L. (1972b). H-minimax estimation of a scalemeter. J. Amer. Statist. Assoc. 67, 647-649. Spruill, M. C. (1986). Some approximate restricted Bayes estimators of a normal mean. Statist. Dec. 4, 337-351. Sriram, T. N. and Bose, A. (1988). Sequential shrinkage estimation in the general linear model. Seq. Anal. 7, 149-163. Srivastava, M. S. and Bilodeau, M. (1989). Stein estimation under elliptical distributions. J. Mult. Anal. 28, 247-259. Staudte, R. G., Jr. (1971). A characterization of invariant loss functions. Ann. Math. Statist. 42, 1322-1327. Staudte, R. G. and Sheather, S. J. (1990). Robust Estimation and Testing. New York: Wiley. Stefanov, V. T. (1990). A note on the attainment of the Cram´ er-Rao bound in the sequential case. Seq. Anal. 9, 327-334. Stein, C. (1950). Unbiased estimates of minimum variance. Ann. Math. Statist. 21, 406-415. Stein, C. (1955). A necessary and sufficient condition for admissibility. Ann. Math. Statist. 26, 518-522. Stein, C. (1956a). Efficient nonparametric testing and estimation. Proc. Third Berkeley Symp. Math. Statist. Prob. 1, University of California Press, 187-195. Stein, C. (1956b). Inadmissibility of the usual estimator for the mean of a multivariate distribution. Proc. Third Berkeley Symp. Math. Statist. Prob. 1, University of California Press, 197-206. 560 REFERENCES [ 6.10 Stein, C. (1959). The admissibility of Pitman’s estimator for a single location parameter. Ann. Math. Statist. 30, 970-979. Stein, C. (1962). Confidence sets for the mean of a multivariate normal distribution. J. Roy. Statist. Soc. Ser. B 24, 265-296. Stein, C. (1964). Inadmissibility of the usual estimator for the variance of a normal distri-bution with unknown mean. Ann. Inst. Statist. Math. 16, 155-160. Stein, C. (1965). Approximation of improper prior measures by prior probability measures. In Bernoulli, Bayes, Laplace Anniversary Volume. New York: Springer-Verlag. Stein, C. (1973). Estimation of the mean of a multivariate distribution. Proc. Prague Symp. on Asymptotic Statistics, pp. 345-381. Stein, C. (1981). Estimation of the mean of a multivariate normal distribution. Ann. Statist. 9, 1135-1151. Steinhaus, H. (1957). The problem of estimation. Ann. Math. Statist. 28, 633-648. Stigler, S. M. (1973). Laplace, Fisher, and the discovery of the concept of sufficiency. Biometrika 60, 439-445. Stigler, S. M. 
(1980). An Edgeworth Curiosum. Ann. Statist. 8, 931-934. Stigler, S. M. (1981). Gauss and the invention of least squares. Ann. Statist. 9, 465-474. Stigler, S. (1983). Who discovered Bayes’s theorem? Amer. Statist. 37, 290-296. Stigler, S. (1986). The History of Statistics: The Measurement of Uncertainty before 1900. Cambridge, MA: Harvard University Press. Stigler, S. (1990). A Galtonian perspective on shrinkage estimators. Statist. Sci. 5, 147-155. Stone, C. J. (1974). Asymptotic properties of estimators of a location parameter. Ann. Statist. 2, 1127-1137. Stone, M. (1965). Right Haar measure for convergence in probability to quasi posterior distributions. Ann. Math. Statist. 36, 440-453. Stone, M. (1967). Generalized Bayes decision functions, admissibility and the exponential family. Ann. Math. Statist. 38, 818-822. Stone, M. (1970). Necessary and sufficient conditions for convergence in probability to invariant posterior distributions. Ann. Math. Statist. 41, 1349-1353. Stone, M. (1976). Strong inconsistency from uniform priors (with discussion). J. Amer. Statist. Assoc. 71, 114-125. Stone, M. and Springer, B. G. F. (1965). A paradox involving quasi prior distributions. Biometrika 59, 623-627. Stone,M.andvonRandow,R.(1968).Statisticallyinspiredconditionsonthegroupstructure of invariant experiments. Zeitschr. Wahrsch. Verw. Geb. 10, 70-80. Strasser, H. (1981). Consistency of maximum likelihood and Bayes estimates. Ann. Statist. 9, 1107-1113. Strasser, H. (1985). Mathematical Theory of Statistics. New York: DeGruyter. Strawderman, W. E. (1971). Proper Bayes minimax estimators of the multivariate normal mean. Ann. Math. Statist. 42, 385-388. Strawderman, W. E. (1973). Proper Bayes minimax estimators of the multivariate normal mean vector for the case of common unknown variances. Ann. Statist. 1, 1189-1194. Strawderman, W. E. (1974). Minimax estimation of location parameters for certain spheri-cally symmetric distributions. J. Mult. Anal. 4, 255-264. 6.10 ] REFERENCES 561 Strawderman, W. E. (1992). The James-Stein estimator as an empirical Bayes estimator for an arbitrary location family. Bayesian Statist. 4, 821-824. Strawderman, W. E and Cohen, A. (1971). Admissibility of estimators of the mean vector of a multivariate normal distribution with quadratic loss. Ann. Math. Statist. 42, 270-296. Stuart, A. (1958). Note 129: Iterative solutions of likelihood equations. Biometrics 14, 128-130. Stuart, A. and Ord, J. K. (1987). Kendall’ s Advanced Theory of Statistics, Volume I, Fifth Edition. New York: Oxford University Press. Stuart, A. and Ord, J. K. (1991). Kendall’ s Advanced Theory of Statistics, Volume II, Fifth Edition. New York: Oxford University Press. Sundberg, R. (1974). Maximum likelihood theory for incomplete data from an exponential family. Scand. J. Statist. 2, 49-58. Sundberg, R. (1976). An iterative method for solution of the likelihood equations for in-complete data from exponential families. Comm. Statist. B 5, 55-64. Susarla, V. (1982). Empirical Bayes theory. In Encyclopedia of Statistical Sciences 2 (S. Kotz, N. L. Johnson, and C. B. Read, eds.). New York: Wiley. Tan, W. Y. and Chang, W. C. (1972). Comparisons of method of moments and method of maximum likelihood in estimating parameters of a mixture of two normal densities. J. Amer. Statist. Assoc. 67, 702-708. Tan, M. and Gleser, L. J. (1992). Minimax estimators for location vectors in elliptical distributions with unknown scale parameter and its application to variance reduction in simulation. Ann. Inst. Statist. Math. 
44, 537-550. Tanner, M. A. (1996). Tools for Statistical Inference, Third edition. New York: Springer-Verlag. Tanner, M. A. and Wong, W. (1987). The calculation of posterior distributions by data augmentation (with discussion). J. Amer. Statist. Assoc. 82, 528-550. Tate, R. F. and Goen, R. L. (1958). Minimum variance unbiased estimation for a truncated Poisson distribution. Ann. Math. Statist. 29, 755-765. Taylor, W. F. (1953). Distance functions and regular best asymptotically normal estimates. Ann. Math. Statist. 24, 85-92. Thompson, J. R. (1968a). Some shrinkage techniques for estimating the mean. J. Amer. Statist. Assoc. 63, 113-122. Thompson, J. R. (1968b). Accuracy borrowing in the estimation of the mean by shrinking to an interval. J. Amer. Statist. Assoc. 63, 953-963. Thompson, W. A., Jr. (1962). The problem of negative estimates of variance components. Ann. Math. Statist. 33, 273-289. Thorburn, D. (1976). Some asymptotic properties of jackknife statistics. Biometrika 63 305-313. Tiao, G. C. and Tan, W. Y. (1965). Bayesian analysis of random effects models in analysis of variance, I. Posterior distribution of variance components. Biometrika 52, 37-53. Tibshirani, R. (1989). Noninformative priors for one parameter of many. Biometrika 76, 604-608. Tierney, L. (1994). Markov chains for exploring posterior distributions (with discussion). Ann. Statist. 22, 1701-1762. 562 REFERENCES [ 6.10 Tierney, L. and Kadane, J. B. (1986). Accurate approximations for posterior moments and marginal densities. J. Amer. Statist. Assoc. 81, 82-86. Tierney, L., Kass, R. E., and Kadane, J. B. (1989). Fully exponential Laplace approximations to expectations and variances of nonpositive functions. J. Amer. Statist. Assoc. 84, 710-716. Titterington, D. M., Smith, A. F. M., and Makov, U. E. (1985). Statistical Analysis of Finite Mixture Distributions. New York: Wiley. Trybula, S. (1958). Some problems of simultaneous minimax estimation. Ann. Math. Statist. 29, 245-253. Tseng, Y. and Brown, L. D. (1997). Good exact confidence sets and minimax estimators for the mean vector of a multivariate normal distribution. Ann. Statist. 25, 2228-2258. Tsui, K-W. (1979a). Multiparameter estimation of discrete exponential distributions. Can. J. Statist. 7, 193-200. Tsui, K-W. (1979b). Estimation of Poisson means under weighted squared error loss. Can. J. Statist. 7, 201-204. Tsui, K-W. (1984). Robustness of Clevenson-Zidek estimators. J. Amer. Statist. Assoc. 79, 152-157. Tsui, K-W. (1986). Further developments on the robustness of Clevenson-Zidek estimators. J. Amer. Statist. Assoc. 81, 176-180. Tukey, J. W. (1958). Bias and confidence in not quite large samples. Ann. Math. Statist. 29, 614. Tukey, J. W. (1960). A survey of sampling from contaminated distributions. In Contributions to Probability and Statistics (I. Olkin, ed.). Stanford, CA: Stanford University Press. Tweedie, M. C. K. (1947). Functions of a statistical variate with given means, with special reference to Laplacian distributions. Proc. Camb. Phil. Soc. 43, 41-49. Tweedie, M. C. K. (1957). Statistical properties of the inverse Gaussian distribution. Ann. Math. Statist. 28, 362. Unni, K. (1978). The theory of estimation in algebraic and analytical exponential families with applications to variance components models. PhD. Thesis, Indian Statistical Institute, Calcutta, India. Unni, K (1981). A note on a theorem of A. Kagan. Sankhya 43, 366-370. Van Rysin, J. and Susarla, V. (1977). On the empirical Bayes approach to multiple decision problems. Ann. Statist. 
5, 172-181. Varde, S. D. and Sathe. Y. S. (1969). Minimum variance unbiased estimation of reliability for the truncated exponential distribution. Technometrics 11, 609-612. Verhagen, A. M. W. (1961). The estimation of regression and error-scale parameters when the joint distribution of the errors is of any continuous form and known apart from a scale parameter. Biometrika 48, 125-132. Vidakovic, B. and DasGupta, A. (1994). Efficiency of linear rules for estimating a bounded normal mean. Sankhya Series A 58, 81-100. Villegas, C. (1990). Bayesian inference in models with Euclidean structures. J. Amer. Statist. Assoc. 85, 1159-1164. Vincze, I. (1992). On nonparametric Cram´ er-Rao inequalities. Order Statistics and Non-parametrics (P. K. Sen and I. A. Salaman, eds.). Elsevier: North Holland, 439-454. von Mises, R. (1931). Wahrscheinlichkeitsrecheung. Franz Deutiche: Leipzig. 6.10 ] REFERENCES 563 von Mises, R. (1936) Les lois de probabilitit´ e pour les functions statistiques. Ann. Inst. Henri Poincar´ e 6, 185-212. von Mises, R. (1947). On the asymptotic distribution of differentiable statistical functions. Ann. Math. Statist. 18, 309-348. Wald, A. (1939). Contributions to the theory of statistical estimation and hypothesis testing. Ann. Math. Statist. 10, 299-326. Wald, A. (1949). Note on the consistency of the maximum likelihood estimate. Ann. Math. Statist. 20, 595-601. Wald, A. (1950). Statistical Decision Functions. New York: Wiley. Wand, M.P. and Jones, M.C. (1995). Kernel Smoothing. London: Chapman & Hall. Wasserman, L. (1989). A robust Bayesian interpretation of the likelihood region. Ann. Statist. 17, 1387-1393. Wasserman, L. (1990). Recent methodological advances in robust Bayesian inference (with discussion). In Bayesian Statistics 4 (J. M. Bernardo, J. O. Berger, A. P. David, and A. F. M. Smith, eds.). Oxford: Oxford University Press, pp. 483-502. Watson, G. S. (1964). Estimation in finite populations. Unpublished report. Watson, G. S. (1967). Linear least squares regression. Ann. Math. Statist. 38, 1679-1699. Wedderburn, R. W. M. (1974). Quasi-likelihood functions, generalized linear models, and the Gauss-Newton method. Biometrika 61, 439-447. Wedderburn, R. W. M. (1976). On the existence and uniqueness of the maximum likelihood estimates for certain generalized linear models. Biometrika 63, 27-32. Wei, G. C. G. and Tanner, M. A. (1990). A Monte Carlo implementation of the EM algorithm and the poor man’s data augmentation algorithm. J. Amer. Statist. Assoc. 85, 699-704. Weiss, L. and Wolfowitz, J. (1974). Maximum Probability Estimators and Related Topics. New York: Springer-Verlag. Wijsman, R. (1959). On the theory of BAN estimates. Ann. Math. Statist. 30, 185-191, 1268-1270. Wijsman, R. A. (1973). On the attainment of the Cram´ er-Rao lower bound. Ann. Statist. 1, 538-542. Wijsman, R. A. (1990). Invariant Measures on Groups and Their Use in Statistics. Hayward, CA: Institute of Mathematical Statistics Withers, C. S. (1991). A class of multiple shrinkage estimators. Ann. Inst. Statist. Math. 43, 147-156. Wolfowitz, J. (1946). On sequential binomial estimation. Ann. Math. Statist. 17, 489-493. Wolfowitz, J. (1947). The efficiency of sequential estimates and Wald’s equation for se-quential processes. Ann. Math. Statist. 18, 215-230. Wolfowitz, J. (1965). Asymptotic efficiency of the maximum likelihood estimator. Theory Prob. Applic. 10, 247-260. Wong, W. (1992). On asymptotic efficiency in estimation theory. Statistica Sinica 2, 47-68. Woodrofe, M. (1972). 
See also Empirical Bayes, Hierarchical Bayes Bayes risk, 230, 272, 316 Bernoulli distribution b(p), 25 Best linear unbiased estimator (BLUE), 130 Beta distribution B(a, b), 25, 67 as prior for binomial, 230, 255, 264 Fisher information for, 127, moments and cumulants for, 67 Beta-Pascal distribution, 381, 382 Bhattacharya inequality, 128 Bias, 5 conditional, 421 Bias reduction, 83, 144 Binomial distribution b(p, n), 25, 63 admissible, 387 Bayes estimation in, 230, 255 bivariate, 293 lack of Stein effect in, 388 multivariate, 376 normal limit of, 59 not a group family, 65 unbiased estimation in, 88, 100 Binomial experiments, independent, 479 Binomial probability p admissible estimator of, 335, 335, Bayes estimation of, 230 difference of two, 313, maximum likelihood estimation of, 445, 501 minimax estimator of, 311, 336 prior for, 230, 255 sequential estimation of, 103, 233 UMVU estimator of, 100 Binomial sampling plans, 102 Bivariate binomial distribution, 293 Bivariate nonparametric family, 112, 137 Bivariate normal distribution, 27, 68 maximum likelihood estimation of, 472 missing observations in, 133 moments of, 68 unbiased estimation in, 96. See also Multivariate normal distribution Blyth’s method, 380, 415, 427 Bootstrap, 144, 519 Borel sets, 13 Bounded in probability, 77 Bounded loss, 51, 89, 153 Box-Cox Transformation, 77 Brown’s identity, 383 Canonical form of exponential family, 23 of linear model, 177 576 Subject Index [ 6.10 Capture-recapture method, 101 Cartesian product of two sets, 13 Cauchy distribution, 3, 18, 215 completeness of order statistics, 70 distribution of sample mean, 3, 62 efficient estimation in, 455 Fisher information for, 119 number of roots of likelihood equation, 455 Censored observations, 64, 455, 457, 460, 517 Center of symmetry, 91, 137 Central limit theorem, 58, 429 multivariate, 61 Central moments, 28 Chapman-Robbins inequality, see Hammersley-Chapman-Robbins inequality Chebychev inequality, 55, 75 Chi-squared distribution (χ2), 25, 406 moments of, 31. See also Gamma distribution Circular location family, 339 Classical inference, 1 Closed binomial sampling rule, 102 Closure (under composition and inversion), 18 CLT, see Central limit theorem Cluster sampling, 204 Coherence, 285 Common mean estimation of, 95, 132, 476 Common variance estimation of, 482 Commutative group, 19, 165 Complement of a set, 8 Complete class (of estimators), 377, 383, 419 Completeness, 42, 79, 143 in nonparametric families, 79 of order statistics, 72, 109 in sampling from a finite population, 199, 203 in sequential binomial sampling, 102, 103 of a sufficient statistic, 42, 72 and unbiased estimation, 87 Completion of a measure, 8 Components of variance, see Variance components Concave, 45 Concentration of vector-valued estimators, 347 Conditional bias, 421 distribution, 34, 93 expectation, 35 independence, 108, 195 inference, 421 probability, 34 Conditioning, 89 Confidence sets, 423 Conjugate hierarchy, 254 Conjugate prior, 236, 305 for exponential families, 244 Consistency, 54, 429, 438 of Bayes estimators, 490 of MLE, 445 of root of likelihood equation, 447, 448. See also √n-consistency Context invariance, 223 Contingency tables, 106, 107, 193, 194, 475 Contrast, 218 Convergence in law, 57, 60 Convergence in probability, 54, 60. 
See also Consistency Convex function, 45, 48, 49, 73 set, 49 Convex loss function, 49, 87, 90 Correlation coefficient, 20, 68 efficient estimation of, 472, 511 multiple, 96 unbiased estimation of, 96, 96 Counting measure, 8, 14 Covariance information inequality, 122 6.10 ] Subject Index 577 unbiased estimation of, 96, 132, 137 Covariance adjustment, 216 Covariance inequality, 113, 144, 370 Covariate, 26 Cra ´ mer-Rao lower bound, 136, 143. See also Information inequality Crossed design, 190 Cumulant, 28 Cumulant generating function, 28 Cumulative distribution function (cdf), 15 unbiased estimation of, 109 Curtailed single sampling, 135 Curvature, 66, 79, 80 Curved exponential family, 25, 79 minimal sufficient statistics for, 39 Data, 3, 15 Data analysis, 1 Data augmentation, 291 Data reduction, 17, 33, 42 Decision theory, 1, 420 Deconvolution, 254 Degree of a statistical functional, 111, 112 Delta Method, 58 multivariate, 61 Differentiable manifold, 79 Differential inequality, 420, 427 Diffusion process, 384 Digamma function, 126, 127 Dimensionality of a sufficient statistic, 40, 79 Dirichlet distribution, 349 Dirichlet process prior, 323 Discrete distribution, 14. See also the following distributions: Binomial, Geometric, Hypergeometric, Multinomial, Negative binomial, Poisson, Power series Discrete location family, 344 Distribution, see Absolutely continuous, Bernoulli, Beta, Beta Pascal, Binomial, Bivariate binomial, Bivariate normal, Cauchy, Chi squared, Conjugate, Dirichlet, Discrete, Double exponential, Exponential, Gamma, Geometric, Hypergeometric, Inverse Gaussian, Least favorable, Least informative, Logarithmic series, Logistic, Lognormal, Multinomial, Multivariate normal, Negative binomial, Negative hypergeometric, Normal, Pareto, Poisson, Posterior, Power series, Prior, Strongly unimodal, Student’s t, Uniform, Unimodal, U-shaped, Weibull Dominated convergence theorem, 11 Dominated family of probability distributions, 14 Domination: of one estimator over another, 48 universal, 400 Dose-response model, 71, 130 Double exponential distribution (DE), 18, 39, 70, 155 Fisher information in, 119 maximum likelihood estimation in, 451 Double sampling, 104 Dynkin-Ferguson theorem, 32, 41 Efficient likelihood estimation, 449, 463 for dependent variables, 481 for i.i.d. variables, 449, 463 for independent non-identical variables, 480, 512 measurability of, 516 one-step approximation to, 454, 467 578 Subject Index [ 6.10 in multi-sample problems, 475 uniqueness of, 503 Eigenvalues, 142 Eigenvector, 142 ELE, see Efficient likelihood estimation Elementary symmetric functions, 70 Em algorithm, 457, 506, 515 convergence, 460 one-way layout, 458 Empirical Bayes, 230, 262, 268, 294 parametric and nonparametric, 307 Empirical cdf, 109 Entropy distance between two densities, 47 Equicontinuity, 415 Equivalence of a family of distributions with a measure, 14 of two statistics, 36 Equivariance, 5, 223, 420 functional, 161 and minimaxity, 421 principle of, 158, 202 Equivariant Bayes, 245 Equivariant estimator, 148, 161 Ergodic, 290, 306 Essentially complete, 377. 
See also Completeness Estimable, 83 Estimand, 4 Estimate, 4 Estimating function, 517 Estimator, 4 randomized, 33 Euclidean sample space, 8, 14 Exchangeable, 221 Exponential distribution E(a, b), 18, 64, 216 complete sufficient statistics for, 43 estimation of parameters, 98, 133, 175, 208, 485 as limit distribution, 485 minimal sufficient statistics for, 38, 70 relation to uniform distribution, 71 Exponential family, 16, 23, 26, 32, 78 admissibility in, 331 asymptotically efficient estimation in, 450, 470 Bayes estimation in, 240, 305, 491 canonical form, 23 complete class for, 383 completeness in, 42 conjugate distributions for, 244 continuity of risk functions of, 379 and data reduction, 40 discrete, 104 empirical Bayes estimation in, 266 Fisher information in, 116, 125 of full rank, 79 maximum likelihood estimation in, 450, 470 mean-value parameter for, 116, 126 minimal sufficient statistic for, 39 minimax estimation in, 322 moments and cumulants for, 28, 28 natural parameter space of, 24 prior distributions, 236 relative to group families, 32 unbiased estimation in, 88 which is a location or scale family, 32, 41 Exponential linear model, 193, 223 Exponential location family, 32 Exponential one- and two-sample problem, 98, 133, 153, 175, 208, 485 Exponential scale family, 32 Factorial experiment, 184 Factorization criterion, for 6.10 ] Subject Index 579 sufficient statistics, 35 Family of distributions, see Discrete location, Dominated, Exponential, Group, Location, Location-scale, Nonparametric, Scale, Two-sample location Fatou’s Lemma, 11 Finite group, 252, 338, 344 Finite population model, 22, 198, 224, 305, 427 Finite binomial sampling plan, 103 First order ancillary, 41 Fisher information, 115, 144, 424 additivity of, 119 total, 610. See also Information matrix Fixed effects, 187 Formal invariance, 161, 209, 223 Frame, in survey sampling, 198 Frequentist, 2, 421 Functional equivariance, 161, 209, 223 Fubini’s theorem, 13, 78 Full exponential model, 79 Full linear group, 224 Full-rank exponential family, 24 Function, see Absolutely continuous, Concave, Convex, Digamma, Hypergeometric, Incomplete Beta, Strongly differentiable, Subharmonic, Superharmonic, Trigamma, Weakly differentiable Gamma distribution, 25, 67 conjugate of, 245 as exponential scale family, 32 Fisher information for, 117, 127 moments and cumulants of, 30 as prior distribution, 236, 240, 254, 257, 268, 277 Gamma-minimax, 307, 389 Gauss-Markov theorem, 184, 220. 
See also Least squares General linear group, 224, 422 General linear model, see Normal linear model General linear mixed model, 518 Generalized Bayes estimator, 239, 284, 315, 383 Generalized linear model, 197, 305 Geometric distribution, 134 Gibbs sampler, 256, 291, 305, 508 GLIM, 198 Gradient ∇, 80 Group, 19, 159 Abelian, 247 amenable, 422 commutative, 19, 165 finite, 338 full linear, 224 general linear, 224, 422 invariant measure over, 247 location, 247, 250 location-scale, 248, 250 orthogonal, 348 scale, 248, 250 transformation, 19 triangular, 65 Group family, 16, 17, 32, 65, 68, 163, 165 Grouped observations, 455 Haar measure, 247, 287, 422 Hammersley-Chapman-Robbins inequality, 114 Hardy-Weinberg model, 220 Hazard function, 140, 144 Hessian matrix, 49, 73 Hidden Markov chain, 254 Hierarchical Bayes, 230, 253, 260, 268 compared to empirical Bayes, 264 Higher order asymptotics, 518 Horvitz-Thompson estimator, 222 Huber estimator, 484 Huber loss function, 52 Hunt-Stein theorem, 421 Hypergeometric function, 97 580 Subject Index [ 6.10 Hypergeometric distribution, 320 Hyperparameter, 227, 254 Hyperprior, 230, choice of, 269 Idempotent, 367 Identity Transformation, 19 i.i.d. (identically independently distributed), 4 Improper prior, 238 Inadmissibility, 48, 324 of James-Stein estimator, 356, 357, 377 of minimax estimator, 327, 418 of MRE estimator, 334 of normal vector mean, 351, 352, of positive-part estimator, 377 in presence of nuisance parameters, 334, 342 of pre-test estimator, 351, 352 of UMVU estimator, 99. See also Admissibility Incidental parameters, 481, 482 Incomplete beta function, 219 Independence, conditional, 108, 195 Independent experiments, 195, 349, 374. See also Simultaneous estimation Indicator (IA) of a set A, 9 Inequality Bhattacharya, 128 Chebyshev, 55, 75 covariance, 113, 144, 370 Cram´ er-Rao, 136, 143 differential, 420 Hammersley-Chapman-Robbins, 114 information, 113, 120, 123, 127, 144, 325 Jensen, 47, 52, 460 Kiefer, 140 Schwarz, 74, 130 Information, in hierarchy, 260. See also Fisher information Information bound, attainment of, 121, 440 Information inequality, 113, 120, 144, 144 attainment of, 121 asymptotic version of, 439 geometry of, 144 multiparameter, 124, 127, 462 in proving admissibility, 325, 420. See also the following inequalities: Bhattacharya, Cram´ er-Rao, Hammersley-Chapman-Robbins, Kiefer Information matrix, 124, 462 Integrable, 10, 16 Integral, 9, 10 continuity of, 27 by Monte Carlo, 290 Interaction, 184, 195 Invariance: of estimation problem, 160 formal, 161 of induced measure, 250 of loss function, 148, 160 of measure, 247 nonexistence of, 166 of prior distribution, 246, 338 of probability model, 158, 159 and sufficiency, 156 and symmetry, 149. See also Equivariance, Haar measure Invariant distribution of a Markov chain, 290, 306 Inverse binomial sampling, 101. See also Negative binomial distribution Inverse cdf , 73 Inverse Gaussian distribution, 32, 68 Inverted gamma distribution, 245 Irreducible Markov chain, 306 Jackknife, 83, 129 James-Stein estimator, 272, 351 Bayes risk of, 274 Bayes robustness of, 275 6.10 ] Subject Index 581 as empirical Bayes estimator, 273, 295, 298 component risk, 356 inadmissibility of, 276, 282 356 maximum component risk of, 353 risk function of, 355. See also Positive part James-Stein estimator, Shrinkage estimator, Simultaneous estimation Jeffrey’s prior, 230, 234, 287, 305, 315. 
See also Reference prior Jensen's inequality, 46, 47, 52 Karlin's theorem, 331, 389, 427 Kiefer inequality, 140 Kullback-Leibler information, 47, 259, 293 Labels in survey sampling, 201, 224 random, 201 Laplace approximation, 270, 297 Laplacian (∇²f), 361 Large deviation, 81 Least absolute deviations, 484 Least favorable: distribution, 310, 420 sequence of distributions, 316 Least informative distribution, 153 Least squares, 3, 178 Gauss' theorem on, 184, 220 Lebesgue measure, 8, 14 Left invariant Haar measure, 247, 248, 250, 287 Likelihood: conditional, 517 empirical, 517 marginal, 517 partial, 517 penalized, 517 profile, 517 quasi, 517 Likelihood equation, 447, 462 consistent root of, 447, 463 multiple roots of, 451 Likelihood function, 238, 444, 517 Lim inf, 11, 63 Limit of Bayes estimators, 239, 383 Lim sup, 11, 63 Limiting Bayes method (for proving admissibility), 325 Limiting moment approach, 429, 430. See also Asymptotic distribution approach Linear estimator, admissibility of, 323, 389 properties of, 184 Linear minimax risk, 329 Linear model, 176 admissible estimators in, 329 Bayes estimation in, 305 canonical form for, 177 full-rank model, 180 generalization of, 220 least squares estimators in, 178, 180, 182, 184 minimax estimation in, 392 MRE estimation in, 178 without normality, 184 UMVU estimation in, 178. See also Normal linear model Link function, 197 Lipschitz condition, 123 LMVU, see Locally minimum variance unbiased estimator Local asymptotic normality (LAN), 516 Locally minimum variance unbiased estimator, 84, 90, 113 Location/curved exponential family, 41 Location family, 17, 340, 492 ancillary statistics for, 41 asymptotically efficient estimation in, 455, 492 circular, 339 discrete, 344 exponential, 32 information in, 118 invariance in, 158 minimal sufficient statistics for, 38 minimax estimation in, 340 MRE estimator in, 150 two-sample, 159 which is a curved exponential family, 41 which is an exponential family, 32. See also Location-scale family, Scale family Location group, 247, 250 Location invariance, 149 Location parameter, 148, 223 Location-scale family, 17, 167 efficient estimation in, 468 information in, 126 invariance in, 167 invariant loss function for, 173 MRE estimator for, 174 Location-scale group, 248, 250 Log likelihood, 444 Log linear model, 194 Logarithmic series distribution, 67 Logistic distribution L(a, b), 18, 196, 479 Fisher information in, 119, 139 minimal sufficient statistics for, 38 Logistic regression model, 479 Logit, 26, 196 Logit dose-response model, 44 Log-likelihood, 447 Loglinear model, 194 Lognormal distribution, 486 Loss function, 4, 7 absolute error, 50 bounded, 51 choice of, 7 convex, 7, 45, 87, 152 estimation of, 423 family of, 354, 400 invariant, 148 multiple, 354 non-convex, 51 realism of, 51 squared error, 50 subharmonic, 53 Lower semicontinuous, 74 Markov chain, 55, 290, 306, 420 Markov chain Monte Carlo (MCMC), 256 Markov series, normal autoregressive, 481 Maximum component risk, 353, 363, 364 Maximum likelihood estimator (MLE), 98, 444, 467, 515 asymptotic efficiency of, 449, 463, 482 asymptotic normality of, 449, 463 bias corrected, 436 of boundary values, 517 comparison with Bayes estimator, 493 comparison with UMVU estimator, 98 in empirical Bayes estimation, 265 inconsistent, 445, 452, 482 in irregular cases, 485 measurability of, 448 in the regular case, 515 restricted (REML), 191, 390, 518 second order properties of, 518.
See also Efficient likelihood estimation, Superefficiency Mean (population), 200, 204, 319 nonparametric estimation of, 110, 318. See also Normal mean, Common mean, One-sample problem Mean (sample), admissibility of, 324 consistency of, 55 distribution in Cauchy case, 3, 62 inadmissibility of, 327, 350, 352 inconsistency of, 76 optimum properties of, 3, 98, 110, 153, 200, 317, 318 Mean-unbiasedness, 5, 157. See also Unbiasedness Mean-value parametrization of exponential family, 116, 126 Measurable: function, 9, 16 set, 8, 15 Measurable transformation, 63, 64 Measure, 8 Measure space, 8 Measure theory, 7 Measurement error models, 483 Measurement invariance, 223 Measurement problem, 2 Median, 5, 62, 455 as Bayes estimator, 228. See also Scale median, 212 Median-unbiasedness, 5, 157 M-estimator, 484, 512, 513, 516 Method of moments, 456 Mill's ratio, 140 Minimal complete class, 378 Minimal sufficient statistic, 37, 69, 78 and completeness, 42, 43 dimensionality of, 40, 79 Minimax estimator, 6, 225, 309, 425 characterization of, 311, 316, 318 and equivariance, 421 non-uniqueness, 327 randomized, 313 vector-valued, 349 with constant risk, 336 Minimax robustness, 426 Minimum χ², 479 Minimum norm quadratic unbiased estimation (Minque), 192 Minimum risk equivariant (MRE) estimator, 150, 162 behavior under transformations, 210 comparison with UMVU estimator, 156 inadmissible, 342 in linear models, 178, 185 in location families, 154 in location-scale families, 171 minimaxity and admissibility of, 338, 342, 345 non-unique, 164, 170 risk unbiasedness of, 157, 165 in scale families, 169 under transitive group, 162 unbiasedness of, 157 which is not minimax, 343. See also Pitman estimator Minimum variance unbiased estimate, see Uniformly minimum variance unbiased estimator Minque, 192 Missing data, 458 Mixed effects model, 187, 192, 305, 478 Mixtures, 456 normal, 474 MLE, see Maximum likelihood estimator Model, see Exponential, Finite population, Fixed effects, General linear, Generalized linear, Hierarchical Bayes, Linear, Mixed effects, Probability, Random, Threshold, Tukey Moment generating function, 28 of exponential family, 28 Monotone decision problem, 414 Monte Carlo integration, 290 Morphometrics, 213 MRE estimator, see Minimum risk equivariant estimator Multicollinearity, 424 Multinomial distribution M(p_0, ..., p_s; n), 24, 27, 220 Bayes estimation in, 349 for contingency tables, 106, 193, 197 maximum likelihood estimation in, 194, 475, 479 minimax estimation in, 349 restricted, 194 unbiased estimation in, 106, 194, 197 Multiple correlation coefficient, 96 Multiple imputation, 292 Multi-sample problem, efficient estimation in, 475 Multivariate CLT, 61 Multivariate normal distribution, 20, 61, 65, 96 information matrix for, 127 maximum likelihood estimation in, 471. See also Bivariate normal distribution Multivariate normal one-sample problem, 96 Natural parameter space (of exponential family), 24 √n-consistent estimator, 454, 467 Negative binomial distribution Nb(p, n), 25, 66, 101, 375, 381 Negative hypergeometric distribution, 300 Neighborhood model, 6 Nested design, 190 Newton-Raphson method, 453 Non-central χ² distribution, 406 Nonconvex loss, 51. See also Bounded loss Noninformative prior, 230, 305.
See also Jeffreys prior, Reference prior Nonparametric density estimation, 110, 144 Nonparametric family, 21, 79 complete sufficient statistic for, 109 unbiased estimation in, 109, 110 Nonparametric: mean, 318 model, 6 one-sample problem, 110 two-sample problem, 112 Normal cdf, estimation of, 93 Normal correlation coefficient, efficient estimation in, 472, 509 multiple, 96 unbiased estimation of, 96 Normal distribution, 18, 324 curved, 25 empirical Bayes estimation in, 263, 266 equivariant estimation in, 153 as exponential family, 24, 25, 27, 30, 32 hierarchy, 254, 255 as least informative, 153 as limit distribution, 59, 442 moments of, 30 as prior distribution, 233, 242, 254, 255, 258, 272 sufficient statistics for, 36, 38 truncated, 393. See also Bivariate and Multivariate normal distribution Normal limit distribution, 58 of binomial, 59 Normal linear model, 21, 176, 177, 329 canonical form of, 177 Normal mean, estimation of squared, 434 Normal mean (multivariate), 20 admissibility of, 426 bounded, 425 equivariant estimation of, 348 minimax estimation of, 317. See also James-Stein estimator, Shrinkage estimation Normal mean (univariate): admissibility of, 324 Bayes estimator of, 234 equivariant estimator of, 153, 174 minimax estimator of, 317 restricted Bayes estimator of, 321 restricted to integer values, 140 truncated, 327 unbiased estimation in, 350, 352 Normal: mixtures, 474 one-sample problem, 91 probability, 93 probability density, 94, 97 two-sample problem Normal variance: admissibility of estimators, 330, 334 Bayes estimation of, 236, 237 estimation in presence of incidental parameters, 482, 483 inadmissibility of standard estimator of, 334 linear estimator of, 330 MRE estimator of, 170, 172 UMVU estimator of, 92 Normal vector mean, see Normal mean (multivariate) Normalizing constant, 57 Nuisance parameters, 461 effect on admissibility, 342 effect on efficiency, 469. See also Incidental parameters Null-set, 14 One-sample problem, see Exponential one- and two-sample problem, Nonparametric one-sample problem, Normal one-sample problem, Uniform distribution One-way layout, 176, 410 EM algorithm, 458 empirical Bayes estimation for, 278 loss function for, 360 random effects model for, 187, 237, 477 unbalanced, 181 Optimal procedure, 2 Orbit (of a transformation group), 163 Order notation (o, O, o_p, O_p), 77 Order statistics, 36 sufficiency of, 36 completeness of, 72, 109, 199 Orthogonal: group, 348 parameters, 469 transformations, 177 Orthogonal polynomials, 216 Parameter, 1 boundary values of, 517 in exponential families, 245 incidental, 482, 483 orthogonal, 469 structural, 481.
See also Location parameter, Scale parameter Parameter invariance, 223 Pareto distribution, 68, 486 Partitioned matrix, 142 Past experience, Bayes approach to, 226 Periodic Markov chain, 306 Pitman estimator, 154, 155 admissibility of, 156, 342 asymptotic efficiency of, 492 as Bayes estimator, 250, 252, 397 minimaxity of, 340 Point estimation, 2 Poisson distribution, 25, 30, 35, 121, 427 admissibility of estimators, 427 Bayes and empirical Bayes estimation in, 277 Fisher information for, 118 hierarchy, 257, 268, 277 minimax estimation in, 336, 372 misbehaving UMVU estimator, 108 moments and cumulants for, 30 not a group family, 65 Stein effect in, 372, 374 sufficient statistics for, 33, 35 truncated, 106 unbiased estimation in, 105 Poisson process, 106 Population variance, 200 Positive part of a function, 9 Positive part James-Stein estimator, 276, 356 as empirical Bayes estimator, 282 inadmissible, 357, 377 as truncated Bayes estimator, 413 Posterior distribution, 227, 240 convergence to normality, 489, 514 for improper prior, 340, 492, 515 Power series distribution, 67, 104 not a group family, 166 Prediction, 192, 220 Pre-test estimator, 351, 352 Prior distribution, 227 choice of, 227, 492 conjugate, 236, 305 improper, 232, 238 invariant, 246 Jeffreys, 230, 234, 287, 305, 315 least favorable, 310 noninformative, 230, 305 reference, 261 Probability P(A) of a set A, 14 Probability density, 14 nonexistence of nonparametric unbiased estimator for, 109 Probability distribution, 14 absolutely continuous, 14 discrete, 14 estimation of, 109 Probability measure, 14 Probability model, 3, 6 Probit, 196, 506 Product measure, 13 Projection, 367 Proportional allocation, 204, 222 Pseudo-Bayes estimator, 405 Quadratic estimator of variance, 186, 192 Radius of curvature, 81 Radon-Nikodym derivative, 12 Radon-Nikodym theorem, 12 Random effects model, 187, 278, 323, 477 additive, 187, 189 Bayes model for, 237 for balanced two-way layout, 478 nested, 190 prediction in, 192 UMVU estimators in, 189, 191. See also Variance components Random observable, 4 Random variable, 15 Random vector, 15 Random linear equations, 465 Random walk, 102, 343, 398 Randomized estimator, 33, 48 in complete class, 378 in equivariant estimation, 155, 156, 162 in minimax estimation, 313 in unbiased estimation, 131 Randomized response, 322, 501 Rao-Blackwell theorem, 47, 347 Ratio of variances, unbiased estimation of, 95 Rational invariance, 223 Recentered confidence sets, 423 Recurrent Markov chain, 306 Reference prior, 261 Regression, 176, 180, 181, 280, 420 with both variables subject to error, 482, 512.
See also Simple linear regression, Ridge regression Regular case for maximum likelihood estimation, 485 Relevance of past experience, 230 Relevant subsets, 391 Reliability, 93 REML (restricted maximum likelihood) estimator, 390, 518 Rényi's entropy function, 293 Residuals, 3 Restricted Bayes estimator, 321, 426 Ridge regression, 425 Riemann integral, 10 Right invariant Haar measure, 247, 248, 249, 250, 253, 287 Risk function, 5 conditions for constancy, 162 continuity of, 379 invariance of, 162 Risk unbiasedness, 157, 171, 223 of MRE estimators, 165 Robust Bayes, 230, 271, 307, 371 Robustness, 52, 483 Saddle point expansion, 519 Sample cdf, see Empirical cdf Sample space, 15 Sample variance, consistency of, 55 Scale family, 17, 32 Scale group, 163, 248, 250 Scale median, 212 Scale parameter, 167, 223 Schwarz inequality, 74, 130 Second order efficiency, 487, 494, 518 Second order inclusion probabilities, 222 Sequential binomial sampling, 102, 233 Shannon information, 261 Shrinkage estimator, 354, 366, 424 factor, 351 target, 366, 406, 424 Sigma-additivity, 7 Sigma field (σ-field), 8 Simple binomial sampling plan, 103 Simple function, 9 Simple linear regression, 180 Simple random sampling, 198 Bayes estimation for, 319 equivariant estimation in, 200 minimax estimation in, 319 Single prior Bayes, 239 Simultaneous estimation, 346, 354 admissibility in, 350, 418 equivariant estimation in, 348 minimax estimation in, 317. See also Independent experiments, Stein effect Singular problem, 110, 144 Size of population, estimation of, 101 Spherically symmetric, 359 Spurious correlation, 107 Square root of a positive definite matrix, 403 Squared error, 7 loss, 50, 51, 90, 313 Standard deviation, 112 Stationary distribution of a Markov chain, see Invariant distribution Stationary sequence, 306 Statistic, 16 Stein effect, 366, 372, 419 absence of, 376, 388, 419 Stein estimation, see Shrinkage estimation Stein's identity, 31, 67, 285 Stein's loss function, 171, 214 Stirling number of the 2nd kind, 136 Stochastic processes, maximum likelihood estimation in, 481 Stopping rule, 233 Stratified cluster sampling, 206 Stratified sampling, 22, 203, 222 Strict convexity, 45, 49 Strong differentiability, 141, 145 Strongly unimodal, 502 Structural parameter, 481 Student's t-distribution, Fisher information for, 138 Subgroup, 213, 224 Subharmonic function, 53, 74 loss, 53 Subjective Bayesian approach, 227, 305 Subminimax, 312 Sufficient statistics, 32, 47, 78, 347 and Bayes estimation, 238 completeness of, 42, 72 dimensionality of, 40 factorization criterion for, 35 minimal, 37, 69, 78 operational significance of, 33 for a symmetric distribution, 34. See also Minimal sufficient statistic Superefficiency, 440, 515 Superharmonic function, 53, 74, 360, 362, 406, 426 Support of a distribution, 16, 64 Supporting hyperplane theorem, 52 Survey sampling, 22, 224 Symmetric distributions, 22, 50 sufficient statistics for, 34 Symmetry, 147.
See also Invariance Systematic error, 5, 143 Systematic sampling, 204 Tail behavior, 51 Tail minimax, 386 Tightness (of a family of measures), 381 Threshold model, 197 Tonelli's theorem, see Fubini's theorem Total information, 479 Total positivity, 394 Transformation group, 19 transitive, 162 Transitive transformation group, 162 Translation group, see Location group Triangular matrix, 20, 65 Trigamma function, 126, 127 Truncated distributions, 68, 72 normal mean, 327 efficient estimation in, 451 Tschuprow-Neyman allocation, 204 Tukey model, 474, 510 Two-sample location family, 159, 162 Two-way contingency table, 107, 194 Two-way layout, 183, 506 random effects, 189, 192, 478 U-estimable, 83, 87 UMVU, see Uniformly minimum variance unbiased Unbiased in the limit, 431 Unbiasedness, 5, 83, 143, 284 in vector-valued case, 347 Unidentifiable, 24, 56 Uniform distribution, 18, 34, 36, 70, 73 Bayes estimation in, 240 complete sufficient statistics for, 42, 70 maximum likelihood estimation, 485 minimal sufficient statistics for, 38 MRE estimation for, 154, 172, 174 in the plane, 71 relation to exponential distribution, 71 UMVU estimation in, 89 Uniformly best estimator, nonexistence of, 5 Uniformly minimum variance unbiased (UMVU) estimation, 85, 143 comparison with MLE, 98, 99 comparison with MRE estimator, 156 in contingency tables, 194 example of pathological case, 108 in normal linear models, 178 in random effects model, 189, 190 in restricted multinomial models, 194 in sampling from a finite population, 200, 203, 206 of vector parameters, 348 Unimodal density, 51, 153. See also Strongly unimodal Unimodular group, 247 Universal Bayes estimator, 284 U-shaped, 51, 153, 232 U-statistics, 111 Variance, estimator of, 98, 99, 110 in linear models, 178, 184 nonexistence of unbiased estimator of, 132 nonparametric estimator of, 110 quadratic unbiased estimator of, 186 in simple random sampling, 200 in stratified random sampling, 204.
See also Normal variance, Variance components Variance/bias tradeoff, 425 Variance components, 189, 237, 323, 477, 478 negative, 191 Variance stabilizing transformation, 76 Variation reducing, 394 Vector-valued estimation, 348 Weak convergence, 57, 60 Weak differentiability, 141, 145 Weibull distribution, 65, 468, 487
2419
https://quizlet.com/explanations/textbook-solutions/discrete-and-combinatorial-mathematics-an-applied-introduction-5th-edition-9780201726343
Discrete and Combinatorial Mathematics: An Applied Introduction - 5th Edition - Solutions and Answers | Quizlet
Discrete and Combinatorial Mathematics: An Applied Introduction, 5th Edition. Ralph P. Grimaldi. ISBN: 9780201726343. Textbook solutions, verified.
Chapter 1: Fundamentals of Discrete Mathematics. Section 1-2: Permutations; Section 1-3: Combinations: The Binomial Theorem; Section 1-4: Combinations with Repetition; Section 1-5: The Catalan Numbers (Optional); Section 1-6: Summary and Historical Review. Exercises 1-39.
Chapter 2: Fundamentals of Logic. Section 2-1: Basic Connectives and Truth Tables; Section 2-2: Logical Equivalence: The Laws of Logic; Section 2-3: Logical Implication: Rules of Inference; Section 2-4: The Use of Quantifiers; Section 2-5: Quantifiers, Definitions, and the Proofs of Theorems; Section 2-6: Summary and Historical Review. Exercises 1-17.
Chapter 3: Set Theory. Section 3-1: Sets and Subsets; Section 3-2: Set Operations and the Laws of Set Theory; Section 3-3: Counting and Venn Diagrams; Section 3-4: A First Word on Probability; Section 3-5: The Axioms of Probability (Optional); Section 3-6: Conditional Probability: Independence (Optional); Section 3-7: Discrete Random Variables (Optional); Section 3-8: Summary and Historical Review. Exercises 1-30.
Chapter 4: Properties of the Integers: Mathematical Induction. Section 4-1: The Well-Ordering Principle: Mathematical Induction; Section 4-2: Recursive Definitions; Section 4-3: The Division Algorithm: Prime Numbers; Section 4-4: The Greatest Common Divisor: The Euclidean Algorithm; Section 4-5: The Fundamental Theorem of Arithmetic; Section 4-6: Summary and Historical Review. Exercises 1-28.
Chapter 5: Relations and Functions. Section 5-1: Cartesian Products and Relations; Section 5-2: Functions: Plain and One-to-One; Section 5-3: Onto Functions: Stirling Numbers of the Second Kind; Section 5-4: Special Functions; Section 5-5: The Pigeonhole Principle; Section 5-6: Function Composition and Inverse Functions; Section 5-7: Computational Complexity; Section 5-8: Analysis of Algorithms; Section 5-9: Summary and Historical Review. Exercises 1-14.
Chapter 6: Languages: Finite State Machines. Section 6-1: The Set Theory of Strings; Section 6-2: Finite State Machines: A First Encounter; Section 6-3: Finite State Machines: A Second Encounter; Section 6-4: Summary and Historical Review. Exercises 1-28.
Chapter 7: Relations: The Second Time Around. Section 7-1: Relations Revisited: Properties of Relations; Section 7-2: Computer Recognition: Zero-One Matrices and Directed Graphs; Section 7-3: Partial Orders: Hasse Diagrams; Section 7-4: Equivalence Relations and Partitions; Section 7-5: Finite State Machines: The Minimization Process; Section 7-6: Summary and Historical Review. Exercises 1-18.
Chapter 8: The Principle of Inclusion and Exclusion. Section 8-1: The Principle of Inclusion and Exclusion; Section 8-2: Generalizations of the Principle; Section 8-3: Derangements: Nothing Is in Its Right Place; Section 8-5: Arrangements with Forbidden Positions; Section 8-6: Summary and Historical Review. Exercises 1-30.
Chapter 9: Generating Functions. Section 9-1: Introductory Examples; Section 9-2: Definition and Examples: Calculating Techniques; Section 9-3: Partitions of Integers; Section 9-4: The Exponential Generating Function; Section 9-5: The Summation Operator; Section 9-6: Summary and Historical Review. Exercises 1-6.
Chapter 10: Recurrence Relations. Section 10-1: The First-Order Linear Recurrence Relation; Section 10-2: The Second-Order Linear Homogeneous Recurrence Relation with Constant Coefficients; Section 10-3: The Nonhomogeneous Recurrence Relation; Section 10-4: The Method of Generating Functions; Section 10-5: A Special Kind of Nonlinear Recurrence Relation (Optional); Section 10-6: Divide-and-Conquer Algorithms (Optional); Section 10-7: Summary and Historical Review. Exercises 1-10.
Chapter 11: An Introduction to Graph Theory. Section 11-1: Definitions and Examples; Section 11-2: Subgraphs, Complements, and Graph Isomorphism; Section 11-3: Vertex Degree: Euler Trails and Circuits; Section 11-4: Planar Graphs; Section 11-5: Hamilton Paths and Cycles; Section 11-6: Graph Coloring and Chromatic Polynomials; Section 11-7: Summary and Historical Review. Exercises 1-16.
Chapter 12: Trees. Section 12-1: Definitions, Properties, and Examples; Section 12-2: Rooted Trees; Section 12-3: Trees and Sorting;
Section 12-4: Weighted Trees and Prefix Codes; Section 12-5: Biconnected Components and Articulation Points; Section 12-6: Summary and Historical Review. Exercises 1-25.
Chapter 13: Optimization and Matching. Section 13-1: Dijkstra's Shortest-Path Algorithm; Section 13-2: Minimal Spanning Trees: The Algorithms of Kruskal and Prim; Section 13-3: Transport Networks: The Max-Flow Min-Cut Theorem; Section 13-4: Matching Theory; Section 13-5: Summary and Historical Review. Exercises 1-5.
Chapter 14: Rings and Modular Arithmetic. Section 14-1: The Ring Structure: Definition and Examples; Section 14-2: Ring Properties and Substructures; Section 14-3: The Integers Modulo n; Section 14-4: Ring Homomorphisms and Isomorphisms; Section 14-5: Summary and Historical Review. Exercises 1-15.
Chapter 15: Boolean Algebra and Switching Functions. Section 15-1: Switching Functions: Disjunctive and Conjunctive Normal Forms; Section 15-2: Gating Networks: Minimal Sums of Products: Karnaugh Maps; Section 15-3: Further Applications: Don't Care Conditions; Section 15-4: The Structure of a Boolean Algebra (Optional); Section 15-5: Summary and Historical Review. Exercises 1-15.
Chapter 16: Groups, Coding Theory, and Polya's Method of Enumeration. Section 16-1: Definition, Examples, and Elementary Properties; Section 16-2: Homomorphisms, Isomorphisms, Cyclic Groups; Section 16-3: Cosets and Lagrange's Theorem; Section 16-4: The RSA Cryptosystem (Optional); Section 16-5: Elements of Coding Theory; Section 16-7: The Parity-Check and Generator Matrices; Section 16-9: Hamming Matrices; Section 16-10: Counting and Equivalence: Burnside's Theorem; Section 16-11: The Cycle Index; Section 16-12: The Pattern Inventory: Polya's Method of Enumeration; Section 16-13: Summary and Historical Review. Exercises 1-20.
Chapter 17: Finite Fields and Combinatorial Designs. Section 17-1: Polynomial Rings; Section 17-2: Irreducible Polynomials: Finite Fields; Section 17-3: Latin Squares; Section 17-4: Finite Geometries and Affine Planes; Section 17-5: Block Designs and Projective Planes; Section 17-6: Summary and Historical Review. Exercises 1-18.
Appendix 1: Exponential and Logarithmic Functions. Exercises 1-15.
Appendix 2: Matrices, Matrix Operations, and Determinants. Exercises 1-16.
Appendix 3: Countable and Uncountable Sets. Exercises 1-8.
2420
https://gis.stackexchange.com/questions/371944/calculating-length-of-polyline-defined-by-list-of-coordinates-using-python
Calculating length of polyline defined by list of coordinates using Python - Geographic Information Systems Stack Exchange

Calculating length of polyline defined by list of coordinates using Python

Ask Question. Asked 5 years, 1 month ago. Modified 3 years, 7 months ago. Viewed 2k times. Score: 2.

I have a list of coordinates, and I know that I am working in EPSG:4326. Using Python, I would like to calculate the length in kilometres of the polyline defined by these coordinates.

```python
Coords = [(0.0, 50.787944), (0.0, 50.787944), (-0.20159865271498856, 50.824569950535725), (-0.40044683717220364, 50.803694), (-0.599761967889834, 50.78975316549538)]
```

I've tried GeoPandas and shapely, but reading the documentation I can't find an easy way to do it. Is there any function that allows me to calculate the length in kilometres knowing Coords and the reference system used?
Tags: python coordinate-system line distance length

Edited Aug 20, 2020 at 12:02 by Vince (20.5k). Asked Aug 20, 2020 at 9:12 by G M (2,107).

Comments:
stackoverflow.com/questions/29545704/… - FelixIP, Aug 20, 2020 at 9:24
@Taras trying to post the code that I wrote, I ended up finding the solution. I am not sure it is right, but I've posted it as an answer. Thanks for your encouragement. - G M, Aug 20, 2020 at 9:42

1 Answer (score: 1)

Eventually I understood the steps to follow: I had to convert from EPSG:4326, which has degrees as units, to a coordinate system in metres such as EPSG:32618. So in the end I followed the instructions in the shapely documentation for transformations.

```python
from shapely.geometry import LineString

Coords = [(0.0, 50.787944), (0.0, 50.787944), (-0.20159865271498856, 50.824569950535725), (-0.40044683717220364, 50.803694), (-0.599761967889834, 50.78975316549538)]
polyline = LineString(Coords)
```

After converting the list of coordinates to a shapely LineString, I used pyproj to compute the transformation. I initially got an AttributeError: module 'pyproj' has no attribute 'CRS' because I was using an old version.

```python
import pyproj
from shapely.ops import transform

wgs84 = pyproj.CRS('EPSG:4326')
utm = pyproj.CRS('EPSG:32618')

# Build the projection function, then reproject the polyline into metres
project = pyproj.Transformer.from_crs(wgs84, utm, always_xy=True).transform
utm_polyline = transform(project, polyline)
length_km = utm_polyline.length / 1000
```

If using pyproj < 2.1:

```python
from functools import partial
import pyproj
from shapely.ops import transform

wgs84 = pyproj.Proj(init='epsg:4326')
utm = pyproj.Proj(init='epsg:32618')

project = partial(pyproj.transform, wgs84, utm)
utm_polyline = transform(project, polyline)
length_km = utm_polyline.length / 1000
```

Answered Aug 20, 2020 at 9:42 by G M (2,107).

Comments:
You used the wrong UTM zone. UTM zone 18N is documented for "Between 78°W and 72°W, northern hemisphere between equator and 84°N, onshore and offshore. Bahamas. Canada - Nunavut; Ontario; Quebec. Colombia. Cuba. Ecuador. Greenland. Haiti. Jamaica. Panama. Turks and Caicos Islands. United States (USA). Venezuela." I wouldn't use UTM for the UK, since it straddles zones 29 & 30. - Vince, Aug 20, 2020 at 12:15
@Vince thanks a lot. I did this for an experiment in my thesis in surface metrology, so I am not very familiar with CRS. Should I use EPSG:27700? Please post an answer if you know a better way. Thanks a lot! - G M, Aug 20, 2020 at 12:28
@Vince I was really impressed that you understood it was the UK. Very good observation :-) - G M, Aug 20, 2020 at 12:29
As a rule, you should probably use the projection the national mapping agency uses. 50E,0N is in the Indian Ocean, so it wasn't hard to figure you were at 50N,0E. - Vince, Aug 20, 2020 at 13:44
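As the comments note, EPSG:32618 is the wrong UTM zone for data around 50°N, 0°E. A hedged alternative sketch (assuming a reasonably recent pyproj with the Geod class; this is not part of the accepted answer) avoids choosing a projected CRS altogether and measures the geodesic length directly on the WGS84 ellipsoid:

```python
import pyproj

Coords = [(0.0, 50.787944), (0.0, 50.787944),
          (-0.20159865271498856, 50.824569950535725),
          (-0.40044683717220364, 50.803694),
          (-0.599761967889834, 50.78975316549538)]

# Geodesic length along the WGS84 ellipsoid, in metres; no UTM zone needed.
geod = pyproj.Geod(ellps="WGS84")
lons, lats = zip(*Coords)
length_km = geod.line_length(lons, lats) / 1000
print(length_km)
```

This sidesteps the zone-selection issue entirely, at the cost of depending on pyproj's geodesic routines rather than a national projected CRS such as EPSG:27700.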
2421
https://web.mat.upc.edu/rafael.ramirez/ed_GETI/apuntes/edps.pdf
Partial Differential Equations

A partial differential equation (PDE) is a differential equation whose unknown is a function that depends on more than one variable. The order of a PDE is the order of the highest partial derivative. In this chapter we study some second-order linear PDEs with constant coefficients.

The three basic equations: wave, heat, and Laplace/Poisson

All the PDEs we will study come from physical models: the vertical vibration of the strings of a guitar or of the membrane of a drum; the evolution of temperature in 1D, 2D, or 3D pieces; the elastic and thermal equilibria of the preceding problems; and so on. This provides valuable intuition about how the solutions of the PDEs under consideration must behave, and it will let us interpret the results physically.

The 1D wave equation (vibrating string). We consider the vertical wave motion of a horizontal vibrating string of length L, of constant density and homogeneous composition, subject to an external force acting in the vertical direction. We write u(x, t) for the vertical displacement from the equilibrium position of the point x ∈ [0, L] of the string at time t ∈ R. Analogously, F(x, t) is the external force per unit mass acting on the point x ∈ [0, L] at time t ∈ R; the force pushes upward/downward when F(x, t) is positive/negative. The PDE that models the motion is

u_tt - c^2 u_xx = F(x, t),   x ∈ (0, L),  t ∈ R.

Here the symbols u_tt and u_xx denote the second partial derivatives with respect to time and position, respectively. The parameter c^2 = τ/ρ depends on the tension τ and the linear density ρ. The quantity c will be interpreted later as the speed at which waves travel in the material under consideration. We say this PDE is homogeneous when F(x, t) ≡ 0. If we consider a vibrating string of infinite length (physically meaningless, but mathematically interesting), we write x ∈ R.

Exercise. Use the fact that the terms u_tt and c^2 u_xx have the same units to deduce that c has units of "horizontal" velocity (that is, "horizontal" distance over time).

The 1D heat equation. We consider the evolution of temperature in a homogeneous bar of length L that contains some internal heat sources or sinks(1) described by a function F(x, t). The sources/sinks correspond to the points x and instants t at which F(x, t) is positive/negative. We write u(x, t) for the temperature of the point x ∈ [0, L] at time t ≥ 0. Since we do not live in a 1D world, from a physical point of view it makes more sense to consider the evolution of temperature in an infinite homogeneous wall of thickness L, where x ∈ [0, L] is the coordinate that "crosses" the wall. The PDE that models the evolution of the temperature is

u_t - k^2 u_xx = F(x, t),   x ∈ (0, L),  t > 0.

The parameter k^2 = κ/(cρ) depends on the thermal conductivity κ, the linear density ρ, and the specific heat c of the material that makes up the bar or the infinite wall. The PDE is homogeneous when F(x, t) ≡ 0. If we consider a bar of infinite length, we write x ∈ R.

Exercise. Show that the function u : R × (0, +∞) → R defined by

u(x, t) = (4πk^2 t)^(-1/2) e^(-x^2/(4k^2 t))

satisfies the homogeneous heat equation. Compute lim_{t→0} u(x, t). What happens when t < 0?

(1) For example, a transistor heats up when an electric current flows through it.
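The claim in the preceding exercise can be checked symbolically; the following minimal sketch (assuming SymPy is available; it is an illustration, not part of the exercise itself) applies the heat operator to the Gaussian kernel and takes the limit as t → 0+:

```python
import sympy as sp

# Check that u(x, t) = exp(-x^2/(4 k^2 t)) / sqrt(4 pi k^2 t) solves u_t = k^2 u_xx.
x, t, k = sp.symbols("x t k", positive=True)
u = sp.exp(-x**2 / (4 * k**2 * t)) / sp.sqrt(4 * sp.pi * k**2 * t)

# Residual of the homogeneous heat equation; it should simplify to 0.
residual = sp.diff(u, t) - k**2 * sp.diff(u, x, 2)
print(sp.simplify(residual))       # 0

# For fixed x != 0 the temperature vanishes as t -> 0+: initially all the
# heat is concentrated at the origin (a point source).
print(sp.limit(u, t, 0, dir="+"))  # 0
```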
Elastic and thermal equilibria in 1D, homogeneous case. We look for the elastic equilibria in the absence of external forces and the thermal equilibria in the absence of internal heat sources/sinks: F(x, t) ≡ 0. Equilibrium means that the state of the body remains stationary in time, so we look for solutions u = u(x) that do not depend on time; the partial derivatives u_t and u_tt then disappear. In that case, the PDEs u_tt = c^2 u_xx and u_t = k^2 u_xx reduce to the second-order linear ODE u'' = 0, whose only solutions are the linear functions u(x) = ax + b with a, b ∈ R. This proves that the only elastic equilibria of a vibrating string not subject to external forces, and the only thermal equilibria of a bar without internal heat sources or sinks, are the linear states (of displacement or temperature).

The multidimensional versions. Before stating the multidimensional versions of the equations above, recall that the Laplacian of a function u : Ω ⊂ R^n → R that depends on a vector variable x = (x_1, ..., x_n) ∈ R^n is the function

Δu = div grad u = Σ_{j=1}^{n} ∂^2 u/∂x_j^2 = u_{x_1 x_1} + ... + u_{x_n x_n}.

For example, if the function u depends on a single variable x ∈ R, then Δu = u_xx; if it depends on three variables (x, y, z) ∈ R^3, then Δu = u_xx + u_yy + u_zz. Moreover, when the function depends on the position x = (x_1, ..., x_n) and the time t, we understand that the Laplacian acts only on the position variables; that is, the Laplacian does not include the term u_tt.

The n-dimensional versions of the previous equations are the following. The wave equation modeling the wave motion of an elastic body Ω ⊂ R^n is

u_tt - c^2 Δu = F(x, t),   u = u(x, t),  x = (x_1, ..., x_n) ∈ Ω,  t ∈ R.

The heat equation modeling the evolution of temperature in a body Ω ⊂ R^n is

u_t - k^2 Δu = F(x, t),   u = u(x, t),  x = (x_1, ..., x_n) ∈ Ω,  t > 0.

From a physical point of view only the cases 1D, 2D, and 3D are of interest, that is, n ≤ 3. As in the 1D versions, we assume the body is completely homogeneous. The function F(x, t) represents the action of an external force in the case of the wave equation, or the internal heat sources and sinks in the case of the heat equation. F(x, t) is assumed to be a known datum.

The Laplace/Poisson equation. From the n-dimensional versions of the wave and heat equations we see that, if the function F(x, t) does not depend on t, then the thermal and elastic equilibria of a body Ω ⊂ R^n are modeled by the so-called Poisson equation

Δu = G(x),   u = u(x),  x = (x_1, ..., x_n) ∈ Ω,

where G(x) = -F(x)/c^2 for the wave equation and G(x) = -F(x)/k^2 for the heat equation. The homogeneous version of this equation is called the Laplace equation:

Δu = 0,   u = u(x),  x = (x_1, ..., x_n) ∈ Ω.
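As a small concrete illustration of the Laplace equation (the example function below is a standard harmonic function chosen for illustration, not one taken from these notes), one can verify symbolically that u(x, y) = e^x sin y satisfies Δu = 0 in 2D:

```python
import sympy as sp

# u(x, y) = exp(x) * sin(y) is a classical harmonic function:
# u_xx = exp(x) sin(y) and u_yy = -exp(x) sin(y), so their sum vanishes.
x, y = sp.symbols("x y")
u = sp.exp(x) * sp.sin(y)

laplacian = sp.diff(u, x, 2) + sp.diff(u, y, 2)
print(sp.simplify(laplacian))  # 0, so u solves the 2D Laplace equation
```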
Initial conditions, boundary conditions, and flux

All the equations above have infinitely many solutions. To capture one specific solution we impose additional conditions, which can be of two types: initial conditions and boundary conditions.

Initial conditions: displacement, velocity, and temperature. These conditions fix the state of the object at the initial time. We begin with the wave equation, which is of second order in time and therefore needs exactly two initial conditions, namely fixing
- the initial displacement: u(x, 0) = f(x) for x ∈ Ω; and
- the initial velocity: u_t(x, 0) = g(x) for x ∈ Ω.
The heat equation, by contrast, is of first order in time, so it suffices to fix the initial temperature: u(x, 0) = f(x) for x ∈ Ω. Finally, the Laplace/Poisson equation is static, so it makes no sense to fix the initial state of the object, since that state is precisely the unknown of the problem. It would be like asking what color Santiago's white horse is.

Boundary conditions: Dirichlet, Neumann, mixed, and periodic. These conditions (also called contour conditions) determine the interaction of the object with the medium that surrounds it, so they only make sense when the object under study has a boundary. For example, the infinite vibrating string has no boundary, while the strings of a guitar do. We will consider four types of boundary conditions.
- Dirichlet: fix the value of the function u at the points of the boundary.
- Neumann: fix the value of the derivative ∂u/∂n at the points of the boundary. The symbol ∂u/∂n denotes the derivative in the direction of the outward normal to the boundary. The convention we follow in the 1D case is: ∂u/∂n = u_x at the right endpoint and ∂u/∂n = -u_x at the left endpoint. Later on we will see that the derivative ∂u/∂n quantifies the heat flux through the boundary.
- Mixed (1D case only): a condition of Dirichlet type at one endpoint and a condition of Neumann type at the other.
- Periodic (1D case only): impose that the functions u and u_x take the same values at the two endpoints of the interval [0, L].
In the first three cases, we say these conditions are homogeneous when all the fixed values equal zero. The identically zero function (also called the trivial solution) satisfies any homogeneous condition.

Example 1. Let W ⊂ R^3 be a body with boundary S = ∂W. An initial value problem (IVP) for the heat equation in this 3D body, without internal heat sources or sinks and with boundary conditions of Neumann type, consists of the equations

u_t = k^2 Δu,  (x, y, z) ∈ W,  t > 0
u(x, y, z, 0) = f(x, y, z),  (x, y, z) ∈ W
∂u/∂n(x, y, z, t) = h(x, y, z, t),  (x, y, z) ∈ S,  t > 0

where the initial temperature f : W → R and the flux h : S × [0, +∞) → R are known functions.

Example 2. An IVP for the 1D heat equation in a bar of length L, without internal heat sources or sinks and with boundary conditions of Neumann type, consists of the equations

u_t = k^2 u_xx,  x ∈ (0, L),  t > 0
u(x, 0) = f(x),  x ∈ (0, L)
u_x(0, t) = -h_i(t),  t > 0
u_x(L, t) = h_d(t),  t > 0

where the initial temperature f : [0, L] → R and the fluxes h_i, h_d : [0, +∞) → R are known functions. Analogously, if we impose periodic boundary conditions, the problem reads

u_t = k^2 u_xx,  x ∈ (0, L),  t > 0
u(x, 0) = f(x),  x ∈ (0, L)
u(0, t) = u(L, t),  t > 0
u_x(0, t) = u_x(L, t),  t > 0.

Example 3. A 2D Poisson problem on a square of side 2L with homogeneous Dirichlet boundary conditions consists of the equations

u_xx + u_yy = G(x, y),  x ∈ (-L, L),  y ∈ (-L, L)
u(±L, y) = 0,  y ∈ (-L, L)
u(x, ±L) = 0,  x ∈ (-L, L)

where the function G : [-L, L] × [-L, L] → R is a known datum.
Analogously, if we impose homogeneous boundary conditions of Neumann type, then the problem reads

u_xx + u_yy = G(x, y),  x ∈ (-L, L),  y ∈ (-L, L)
u_x(±L, y) = 0,  y ∈ (-L, L)
u_y(x, ±L) = 0,  x ∈ (-L, L).

Interpretation of the flux in the heat equation. To understand what the heat flux through the boundary is, we explain a conservation law concerning the evolution of temperature in a 3D body or a 1D bar without internal heat sources or sinks.

We begin with the 3D case. Consider a body W ⊂ R^3 and let S = ∂W be its boundary. Let u(x, y, z, t) be a solution of the problem considered in Example 1, and introduce the function

T(t) = (1/Vol(W)) ∫_W u(x, y, z, t) dx dy dz,

which measures the average temperature of the body at time t. Its derivative is

T'(t) = (1/Vol(W)) ∫_W u_t(x, y, z, t) dx dy dz = (k^2/Vol(W)) ∫_W Δu(x, y, z, t) dx dy dz = (k^2/Vol(W)) ∮_S ∂u/∂n(x, y, z, t) dS = (k^2/Vol(W)) ∮_S h(x, y, z, t) dS.

The properties we have used are: differentiation under the integral sign (first equality), the homogeneous heat equation (second equality), Gauss's divergence theorem (third equality; see Example 33 of the Vector Calculus notes), and the Neumann boundary conditions (fourth equality). Therefore, the rate of change of the average temperature is proportional to the integral of the heat flux h over the closed surface S. In fact, the function h(x, y, z, t) determines exactly the rate at which heat escapes/enters through each point (x, y, z) ∈ S at each time t ≥ 0. The average temperature remains constant when h ≡ 0 (that is, when the body is thermally insulated and no heat enters or leaves through the boundary) or when ∮_S h dS ≡ 0 (that is, when the heat entering on one side is exactly compensated by the heat leaving on another). By contrast, the average temperature increases/decreases when the integral ∮_S h dS is positive/negative (that is, when there is a net inflow/outflow of heat).

Next we study the 1D case, that is, the evolution of temperature in a bar of length L. Let u(x, t) be a solution of the problem considered in the first part of Example 2, and introduce the function

T(t) = (1/L) ∫_0^L u(x, t) dx,

which measures the average temperature of the bar at time t. Its derivative is

T'(t) = (1/L) ∫_0^L u_t(x, t) dx = (k^2/L) ∫_0^L u_xx(x, t) dx = (k^2/L) [u_x(x, t)]_{x=0}^{x=L} = (k^2/L) (h_d(t) + h_i(t)).

The properties we have used are: differentiation under the integral sign (first equality), the heat equation (second equality), the fundamental theorem of calculus (third equality), and the boundary conditions (fourth equality). Therefore, the sum h_d(t) + h_i(t) gives the rate of change of the average temperature T(t). In other words, the functions h_d(t) and h_i(t) tell us the rate at which heat flows through the right and left ends of the bar, respectively. When they are positive/negative, heat enters/leaves through the corresponding end. A numerical illustration of this law is sketched below.

Exercise. Check that the average temperature remains constant when periodic boundary conditions are imposed on a 1D bar. Interpret the result physically.
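The following minimal numerical sketch of the 1D conservation law is offered under illustrative assumptions (the concrete values of L, k, the fluxes h_i, h_d, and the initial temperature are arbitrary choices, and NumPy is assumed). An explicit finite-difference scheme enforces the Neumann data with ghost nodes and compares the observed change in average temperature against the prediction T'(t) = (k^2/L)(h_d(t) + h_i(t)):

```python
import numpy as np

# 1D heat equation u_t = k^2 u_xx with Neumann data u_x(0,t) = -h_i, u_x(L,t) = h_d.
L, k = 1.0, 0.5
n = 200
dx = L / n
x = np.linspace(0.0, L, n + 1)
dt = 0.4 * dx**2 / k**2            # explicit Euler stability needs dt <= dx^2/(2 k^2)
h_i, h_d = 0.3, 0.8                # constant fluxes through the left/right ends

u = np.sin(np.pi * x) + 1.0        # some initial temperature f(x)
avg0 = np.trapz(u, x) / L          # average temperature T(0)

t, t_end = 0.0, 0.02
while t < t_end:
    # Ghost nodes enforce the Neumann conditions via central differences:
    # (u[1] - g_left)/(2 dx) = -h_i and (g_right - u[-2])/(2 dx) = h_d.
    g_left = u[1] + 2 * dx * h_i
    g_right = u[-2] + 2 * dx * h_d
    lap = np.empty_like(u)
    lap[1:-1] = u[2:] - 2 * u[1:-1] + u[:-2]
    lap[0] = u[1] - 2 * u[0] + g_left
    lap[-1] = g_right - 2 * u[-1] + u[-2]
    u = u + dt * k**2 * lap / dx**2
    t += dt

avg1 = np.trapz(u, x) / L
print("observed  change in average temperature:", avg1 - avg0)
print("predicted change (k^2/L)(h_d + h_i) t:  ", k**2 / L * (h_d + h_i) * t)
```

The two printed values should agree up to discretization error, illustrating that the net boundary flux alone drives the average temperature.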
Linearity: superposition and homogenization

There are several simple tricks that can be applied to all the linear problems appearing in this topic, but we will explain them through concrete examples so as not to lose focus.

Superposition. Consider the two 1D heat IVPs in a bar of length $L$ without internal heat sources or sinks given by
\[
\begin{cases}
v_t = k^2 v_{xx}, & x \in (0,L),\ t > 0,\\
v(x,0) = f(x), & x \in (0,L),\\
v_x(0,t) = 0, & t > 0,\\
v_x(L,t) = 0, & t > 0,
\end{cases}
\qquad
\begin{cases}
w_t = k^2 w_{xx}, & x \in (0,L),\ t > 0,\\
w(x,0) = 0, & x \in (0,L),\\
w_x(0,t) = -h_i(t), & t > 0,\\
w_x(L,t) = h_d(t), & t > 0.
\end{cases}
\]
Both problems have Neumann boundary conditions. The difference is that the first has a single non-homogeneous condition, the initial temperature, while the second has two, the boundary conditions at the endpoints of the bar. Then, given any two solutions $v(x,t)$ and $w(x,t)$ of these problems, their superposition (sum) $u(x,t) = v(x,t) + w(x,t)$ is a solution of the 1D heat IVP presented in the first part of Example 2, which has three non-homogeneous conditions.

In general, we can "split" any linear problem into several subproblems so that each subproblem has few (perhaps even just one) non-homogeneous equations/conditions, and is therefore simpler than the original problem. In that case, if we manage to solve all the subproblems, the superposition (sum) of their solutions satisfies the original problem.

Homogenization. This trick is similar to the previous one, but instead of "splitting" the original problem into several simple subproblems, we now want to simplify it by means of a clever change of variables. To fix ideas, consider the 1D heat IVP in a bar of length $L = 1$, without internal heat sources or sinks, with constant Dirichlet boundary conditions
\[
\begin{cases}
u_t = k^2 u_{xx}, & x \in (0,1),\ t > 0,\\
u(x,0) = x^2, & x \in (0,1),\\
u(0,t) = 1, & t > 0,\\
u(1,t) = 2, & t > 0.
\end{cases}
\]
The function $v(x) = x + 1$ satisfies the boundary conditions: $v(0) = 1$ and $v(1) = 2$. Therefore, if we perform the change of variables $w(x,t) = u(x,t) - v(x)$, the original problem transforms into
\[
\begin{cases}
w_t = k^2 w_{xx}, & x \in (0,1),\ t > 0,\\
w(x,0) = x^2 - x - 1, & x \in (0,1),\\
w(0,t) = 0, & t > 0,\\
w(1,t) = 0, & t > 0,
\end{cases}
\]
which is a considerably simpler problem, since we have homogenized both boundary conditions without de-homogenizing the PDE.
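The homogenization step is easy to verify symbolically. A quick check (an addition, assuming sympy is available; names are illustrative):

```python
import sympy as sp

# Check the homogenization trick: with v(x) = x + 1, the shifted unknown
# w = u - v satisfies homogeneous Dirichlet conditions and the same PDE.
x, t, k = sp.symbols('x t k', real=True)
v = x + 1
print(v.subs(x, 0), v.subs(x, 1))        # 1, 2  -> v matches both BCs
print(sp.expand(x**2 - v))               # x**2 - x - 1 -> new initial datum
# v is static and linear in x, so v_t - k^2 v_xx = 0: the PDE is unchanged.
print(sp.diff(v, t) - k**2 * sp.diff(v, x, 2))   # 0
```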
D'Alembert's formula for the infinite vibrating string

Theorem (D'Alembert's formula). Consider the IVP for the infinite vibrating string
\[
\begin{cases}
u_{tt} - c^2 u_{xx} = F(x,t), & x \in \mathbb{R},\ t \in \mathbb{R},\\
u(x,0) = f(x), & x \in \mathbb{R},\\
u_t(x,0) = g(x), & x \in \mathbb{R},
\end{cases}
\]
where the external force $F(x,t)$, the initial displacement $f(x)$ and the initial velocity $g(x)$ are known functions. This IVP has a unique solution, given by
\[
u(x,t) = \frac{1}{2}\big( f(x+ct) + f(x-ct) \big)
+ \frac{1}{2c} \int_{x-ct}^{x+ct} g(y)\,dy
+ \frac{1}{2c} \int_0^t \left\{ \int_{x-c(t-s)}^{x+c(t-s)} F(y,s)\,dy \right\} ds.
\]
The following proof is optional and is not part of the exam syllabus, but the physical consequences derived from the formula must be understood.

Proof. We only consider the case $F(x,t) \equiv 0$; that is, when no external force acts on the string. The main idea is to perform the change of variables $\xi = x + ct$, $\eta = x - ct$ to simplify the PDE. For this we must relate the partial derivatives of the transformed function $v(\xi,\eta) = u(x,t)$ to the partial derivatives of the original function $u(x,t)$. Applying the chain rule repeatedly:
\[
u_x = v_\xi \xi_x + v_\eta \eta_x = v_\xi + v_\eta,
\qquad
u_t = v_\xi \xi_t + v_\eta \eta_t = c v_\xi - c v_\eta,
\]
\[
u_{xx} = (v_{\xi\xi} + v_{\eta\xi})\xi_x + (v_{\xi\eta} + v_{\eta\eta})\eta_x = v_{\xi\xi} + 2v_{\xi\eta} + v_{\eta\eta},
\]
\[
u_{tt} = c(v_{\xi\xi} - v_{\eta\xi})\xi_t + c(v_{\xi\eta} - v_{\eta\eta})\eta_t = c^2\big( v_{\xi\xi} - 2v_{\xi\eta} + v_{\eta\eta} \big).
\]
Therefore, solving the transformed PDE, we obtain
\[
u_{tt} = c^2 u_{xx}
\iff c^2\big( v_{\xi\xi} - 2v_{\xi\eta} + v_{\eta\eta} \big) = c^2\big( v_{\xi\xi} + 2v_{\xi\eta} + v_{\eta\eta} \big)
\iff v_{\xi\eta} = 0,
\]
which holds if and only if $v_\xi(\xi,\eta) = r(\xi)$ for some arbitrary function $r\colon \mathbb{R} \to \mathbb{R}$, if and only if $v(\xi,\eta) = p(\xi) + q(\eta)$ for some arbitrary functions $p, q\colon \mathbb{R} \to \mathbb{R}$, if and only if $u(x,t) = p(x+ct) + q(x-ct)$ for some arbitrary functions $p, q\colon \mathbb{R} \to \mathbb{R}$.

That is, the "general solution" of the PDE of the infinite vibrating string comprises infinitely many solutions, which depend on two arbitrary functions, in the same way that the general solution of a second-order linear ODE depended on two free constants. Therefore, to find the solution of the stated IVP, we use the same strategy followed with ODEs: determine the two "free" functions by imposing the two initial conditions. Thus, we impose
\[
f(x) = u(x,0) = p(x) + q(x),
\qquad
g(x) = u_t(x,0) = c p'(x) - c q'(x).
\]
Differentiating the first equation and multiplying by $c$ gives the relation $c p'(x) + c q'(x) = c f'(x)$. Combining this last relation with the second equation yields $p'(x) = \tfrac12 f'(x) + \tfrac{1}{2c} g(x)$. Integrating this last equality, we obtain
\[
p(x) = \frac12 f(x) + \frac{1}{2c} \int_0^x g(y)\,dy + k,
\qquad
q(x) = f(x) - p(x) = \frac12 f(x) - \frac{1}{2c} \int_0^x g(y)\,dy - k,
\]
so the "free" functions $p(x)$ and $q(x)$ are determined up to a common integration constant $k \in \mathbb{R}$. Finally,
\[
u(x,t) = p(x+ct) + q(x-ct) = \frac12\big( f(x+ct) + f(x-ct) \big) + \frac{1}{2c} \int_{x-ct}^{x+ct} g(y)\,dy,
\]
since the two integration constants cancel each other. □

Remark. D'Alembert's formula implies that the vertical displacement of the string at the point $x$ and time $t$ depends only on: 1) the initial displacement at the points $x \pm ct$; 2) the initial velocity on the interval $[x-ct, x+ct]$; and 3) the external force exerted at the points $y$ and times $s$ such that $(y,s)$ belongs to the triangle with vertices $(x,t)$ and $(x \pm ct, 0)$.

Remark. The above proof of D'Alembert's formula shows that, in the absence of external forces, the displacement of the string has the form $u(x,t) = p(x+ct) + q(x-ct)$ for some functions $p, q\colon \mathbb{R} \to \mathbb{R}$. This means that the displacement of the infinite vibrating string in the absence of external forces consists of the superposition of two waves, with profiles given by the functions $p(x)$ and $q(x)$, traveling in opposite directions at speed $c$. Specifically, the wave with profile $p(x)$ moves to the left, while the wave with profile $q(x)$ moves to the right. It is recommended to follow the link to see the animation Waves, which shows this phenomenon by means of a JAVA applet.

Question. Let $F(x,t)$ be the external force (per unit mass) on the string. Let $f(x)$ and $g(x)$ be the initial displacement and velocity of the string. What is the initial acceleration? (Answer: $u_{tt}(x,0) = c^2 u_{xx}(x,0) + F(x,0) = c^2 f''(x) + F(x,0)$.)
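D'Alembert's formula is also directly computable. The following sketch (an addition, not from the notes) evaluates it for $F \equiv 0$ with an illustrative Gaussian bump as initial displacement and zero initial velocity; the velocity integral is handled by simple quadrature.

```python
import numpy as np

# Evaluate d'Alembert's formula for F = 0; f, g, and c are illustrative.
c = 2.0
f = lambda x: np.exp(-x**2)     # initial displacement: a Gaussian bump
g = lambda y: 0.0 * y           # initial velocity: identically zero

def dalembert(x, t, n_quad=200):
    y = np.linspace(x - c * t, x + c * t, n_quad)
    integral = np.trapz(g(y), y)
    return 0.5 * (f(x + c * t) + f(x - c * t)) + integral / (2 * c)

# The initial bump splits into two half-height bumps moving apart at speed c:
print(dalembert(0.0, 0.0))   # 1.0 (the full bump at t = 0)
print(dalembert(2.0, 1.0))   # ~0.5 (the right-moving half has reached x = ct)
```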
Separation of variables

The method of separation of variables is a method for solving problems with symmetries that have a single non-homogeneous (initial or boundary) condition. We will not develop a general theory; instead, we will apply it to three or four concrete examples.

Second-order linear BVPs. The aforementioned method of separation of variables requires solving certain boundary value problems (BVPs) associated with homogeneous linear second-order ODEs with constant coefficients, by means of the characteristic polynomial method.

Definition. $P(m) = m^2 + a_1 m + a_0$ is the characteristic polynomial of the ODE $x'' + a_1 x' + a_0 x = 0$.

The importance of the characteristic polynomial lies in the fact that if $x(t) = e^{mt}$, then
\[
x''(t) + a_1 x'(t) + a_0 x(t) = m^2 e^{mt} + a_1 m e^{mt} + a_0 e^{mt} = P(m) e^{mt}.
\]
Using this relation we can express the general solution of the ODE $x'' + a_1 x' + a_0 x = 0$ in terms of the roots of its characteristic polynomial.

Lemma. Let $x_h(t)$ be the general solution of the ODE $x'' + a_1 x' + a_0 x = 0$, where $a_1, a_0 \in \mathbb{R}$. Then:

- if $P(m)$ has two different real roots $m_1, m_2 \in \mathbb{R}$, then $x_h(t) = c_1 e^{m_1 t} + c_2 e^{m_2 t}$;
- if $P(m)$ has a double real root $m_* \in \mathbb{R}$, then $x_h(t) = e^{m_* t}(c_1 + c_2 t)$; and
- if $P(m)$ has complex conjugate roots $m_\pm = \alpha \pm \beta i \notin \mathbb{R}$, then $x_h(t) = e^{\alpha t}\big( c_1 \cos\beta t + c_2 \sin\beta t \big)$.

Proof. In the Calculus 2 course it was explained that the set of solutions of a homogeneous linear ODE of order $n$ is a vector subspace of dimension $n$. Therefore, if we find two linearly independent solutions $x_1(t)$ and $x_2(t)$ of the ODE $x'' + a_1 x' + a_0 x = 0$, we can assert that its general solution is $x_h(t) = c_1 x_1(t) + c_2 x_2(t)$, with $c_1, c_2 \in \mathbb{R}$ free. If $P(m)$ has two different real roots $m_1$ and $m_2$, then $x_1(t) = e^{m_1 t}$ and $x_2(t) = e^{m_2 t}$ are two linearly independent solutions. If $P(m)$ has a double real root $m_*$, then $x_1(t) = e^{m_* t}$ is a solution, and applying the method of reduction of order (see Calculus 2) we obtain that $x_2(t) = t e^{m_* t}$ is another solution. If $P(m)$ has two complex conjugate roots $m_\pm = \alpha \pm \beta i \notin \mathbb{R}$, then the functions
\[
x_\pm(t) := e^{(\alpha \pm \beta i)t} = e^{\alpha t} e^{\pm \beta t i} = e^{\alpha t}\big( \cos\beta t \pm i \sin\beta t \big)
\]
are solutions of the ODE. The last equality is a consequence of Euler's formula. In particular, the linear combinations
\[
x_1(t) := \frac{x_+(t) + x_-(t)}{2} = e^{\alpha t} \cos\beta t,
\qquad
x_2(t) := \frac{x_+(t) - x_-(t)}{2i} = e^{\alpha t} \sin\beta t
\]
are also solutions of the ODE and turn out to be linearly independent. This proves that the general solution has one of the three forms given in the lemma. □

We now focus on homogeneous BVPs of the form
\[
\begin{cases}
x'' + a_1(\lambda) x' + a_0(\lambda) x = 0,\\
\alpha_{10} x(t_1) + \alpha_{11} x'(t_1) = 0,\\
\alpha_{20} x(t_2) + \alpha_{21} x'(t_2) = 0,
\end{cases}
\]
where the times $t_1 \ne t_2$ and the coefficients $\alpha_{ij}$ and $a_i(\lambda)$ are data of the problem. The coefficients $a_1(\lambda)$ and $a_0(\lambda)$ are constants, but they depend on a parameter $\lambda$. The function $x(t) \equiv 0$ is always a solution of these BVPs; it is the so-called trivial solution. We want to know for which values of the parameter $\lambda \in \mathbb{R}$ there exist nontrivial solutions.

Definition. These values are the eigenvalues, and the nontrivial solutions are the eigenfunctions, of the BVP. An eigenvalue is simple/double when the dimension of the vector subspace formed by its eigenfunctions is one/two.

We follow these steps to compute the eigenvalues and their eigenfunctions (a symbolic walkthrough of the steps on a classical example follows the list):

1. Express the general solution of the homogeneous ODE as a function of the parameter $\lambda \in \mathbb{R}$; that is, $x_h(t;\lambda) = c_1 x_1(t;\lambda) + c_2 x_2(t;\lambda)$, with $c_1, c_2 \in \mathbb{R}$.
2. Impose the boundary conditions, thereby obtaining a homogeneous linear system of the form $A_\lambda c = 0$, with $c = (c_1, c_2)^T$.
3. Compute the (possibly infinitely many) eigenvalues by solving the characteristic equation $\det[A_\lambda] = 0$.
4. For each eigenvalue $\lambda = \lambda_*$, compute its eigenfunctions by solving the underdetermined system $A_{\lambda_*} c = 0$.
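The promised walkthrough (an addition, assuming sympy; names are illustrative) runs the four steps on the Dirichlet problem $x'' = \lambda x$, $x(0) = x(L) = 0$, in the case $\lambda = -\mu^2 < 0$:

```python
import sympy as sp

t, c1, c2 = sp.symbols('t c1 c2', real=True)
mu, L = sp.symbols('mu L', positive=True)

# Step 1: general solution for lambda = -mu^2 < 0.
xh = c1 * sp.cos(mu * t) + c2 * sp.sin(mu * t)

# Step 2: impose x(0) = 0 and x(L) = 0 and read off the system A c = 0.
eqs = [xh.subs(t, 0), xh.subs(t, L)]
A, _ = sp.linear_eq_to_matrix(eqs, [c1, c2])

# Step 3: the characteristic equation det(A) = 0 reduces to sin(mu*L) = 0,
# i.e. mu_n = n*pi/L and lambda_n = -(n*pi/L)^2 for n >= 1.
print(A.det())                         # sin(L*mu)

# Step 4: at mu = n*pi/L the system forces c1 = 0 and leaves c2 free,
# so the eigenfunctions are x_n(t) = sin(n*pi*t/L).
n = sp.symbols('n', positive=True, integer=True)
print(sp.simplify(A.subs(mu, n * sp.pi / L) * sp.Matrix([0, 1])))  # zero vector
```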
Before computing the eigenvalues and eigenfunctions of some classical BVPs, we observe that it is possible to express the general solution of the ODE $x'' = \lambda x$, $\lambda \in \mathbb{R}$, as a linear combination of:

- trigonometric functions if $\lambda = -\mu^2 < 0$: $x_h(t) = c_1 \cos\mu t + c_2 \sin\mu t$;
- hyperbolic functions if $\lambda = \mu^2 > 0$: $x_h(t) = c_1 \cosh\mu t + c_2 \sinh\mu t$; and
- monomial functions if $\lambda = 0$: $x_h(t) = c_1 + c_2 t$.

The importance of these expressions lies in the fact that the ODE $x'' = \lambda x$ appears in almost all the separation-of-variables problems we will study later. In particular, one should be able to solve fluently the four BVPs appearing in the following proposition.

Proposition. We list the eigenvalues and eigenfunctions of the following classical BVPs.

1. BVP with Dirichlet boundary conditions: $x'' = \lambda x$, $x(0) = x(L) = 0$. Eigenvalues: $\lambda_n = -(n\pi/L)^2$; eigenfunctions: $x_n(t) = \sin(n\pi t/L)$, for $n \ge 1$.
2. BVP with Neumann boundary conditions: $x'' = \lambda x$, $x'(0) = x'(L) = 0$. Eigenvalues: $\lambda_n = -(n\pi/L)^2$; eigenfunctions: $x_n(t) = \cos(n\pi t/L)$, for $n \ge 0$.
3. BVP with mixed boundary conditions: $x'' = \lambda x$, $x(0) = x'(L) = 0$. Eigenvalues: $\lambda_n = -(n+1/2)^2 \pi^2 / L^2$; eigenfunctions: $x_n(t) = \sin\big( (n+1/2)\pi t/L \big)$, for $n \ge 0$.
4. BVP with periodic boundary conditions: $x'' = \lambda x$, $x(-L) = x(L)$ and $x'(-L) = x'(L)$. $\lambda_0 = 0$ is a simple eigenvalue with eigenfunction $c_0(t) \equiv 1$; $\lambda_n = -(n\pi/L)^2$ is a double eigenvalue with eigenfunctions $c_n(t) = \cos(n\pi t/L)$ and $s_n(t) = \sin(n\pi t/L)$, for $n \ge 1$.

Proof. None of these four BVPs has positive eigenvalues. For instance, if $x(t)$ is an eigenfunction of eigenvalue $\lambda$, integrating by parts, using $x'' = \lambda x$ and any of the first three types of boundary conditions above (Dirichlet, Neumann or mixed), we see that
\[
-\int_0^L \big( x'(t) \big)^2 dt
= \int_0^L x(t) x''(t)\,dt - \Big[ x(t) x'(t) \Big]_{t=0}^{t=L}
= \lambda \int_0^L \big( x(t) \big)^2 dt
\;\Longrightarrow\;
\lambda = -\frac{\int_0^L (x'(t))^2\,dt}{\int_0^L (x(t))^2\,dt} \le 0.
\]
In the periodic case the same result is obtained, but integrating over the interval $[-L, L]$. Moreover, $\lambda = 0 \iff x'(t) \equiv 0$. That is, $\lambda = 0$ is an eigenvalue if and only if the function $x(t) \equiv 1$ satisfies the boundary conditions, which happens only with the Neumann and the periodic conditions. Finally, it only remains to compute the negative eigenvalues $\lambda = -\mu^2 < 0$ and their eigenfunctions. Recall that in this case the general solution of the ODE $x'' = \lambda x$ is the linear combination of trigonometric functions $x_h(t) = c_1 \cos\mu t + c_2 \sin\mu t$, $c_1, c_2 \in \mathbb{R}$.

1. Imposing the Dirichlet boundary conditions on $x_h(t)$, we obtain the system
\[
A_\lambda c = 0,\qquad
A_\lambda = \begin{pmatrix} 1 & 0 \\ \cos\mu L & \sin\mu L \end{pmatrix},\qquad
c = \begin{pmatrix} c_1 \\ c_2 \end{pmatrix}.
\]
Therefore, $\sin\mu L = \det[A_\lambda] = 0 \iff \mu = \mu_n = n\pi/L$, $n \in \mathbb{Z}$ $\iff \lambda = \lambda_n = -\mu_n^2 = -(n\pi/L)^2$, $n \ge 1$. In the last equivalence we passed from $n \in \mathbb{Z}$ to the integers $n \ge 1$. The two reasons for doing so are that $n$ is squared (so the negative integers are superfluous) and that the choice $n = 0$ makes no sense (since we are in the case $\lambda < 0$). To compute the eigenfunctions of eigenvalue $\lambda = \lambda_n = -\mu_n^2$, we must solve the underdetermined system
\[
\left.
\begin{aligned}
c_1 &= c_1 \cos 0 + c_2 \sin 0 = x(0) = 0\\
(-1)^n c_1 &= c_1 \cos n\pi + c_2 \sin n\pi = x(L) = 0
\end{aligned}
\right\}
\Longrightarrow
\begin{cases} c_1 = 0,\\ c_2 \in \mathbb{R} \text{ free}. \end{cases}
\]
Hence $x_n(t) = \sin(\mu_n t) = \sin(n\pi t/L)$ is an eigenfunction of eigenvalue $\lambda_n = -(n\pi/L)^2$, for $n \ge 1$.
2. Imposing the Neumann boundary conditions on $x_h(t)$, we obtain the system
\[
A_\lambda c = 0,\qquad
A_\lambda = \begin{pmatrix} 0 & 1 \\ -\sin\mu L & \cos\mu L \end{pmatrix},\qquad
c = \begin{pmatrix} c_1 \\ c_2 \end{pmatrix}.
\]
Therefore, $\sin\mu L = \det[A_\lambda] = 0 \iff \mu = \mu_n = n\pi/L$, $n \in \mathbb{Z}$ $\iff \lambda = \lambda_n = -\mu_n^2 = -(n\pi/L)^2$, $n \ge 1$. We passed from $n \in \mathbb{Z}$ to the integers $n \ge 1$ for the same reason as before. To compute the eigenfunctions of eigenvalue $\lambda = \lambda_n = -\mu_n^2$, we must solve the underdetermined system
\[
\left.
\begin{aligned}
\mu_n c_2 &= -c_1 \mu_n \sin 0 + c_2 \mu_n \cos 0 = x'(0) = 0\\
(-1)^n \mu_n c_2 &= -c_1 \mu_n \sin n\pi + c_2 \mu_n \cos n\pi = x'(L) = 0
\end{aligned}
\right\}
\Longrightarrow
\begin{cases} c_1 \in \mathbb{R} \text{ free},\\ c_2 = 0. \end{cases}
\]
Hence $x_n(t) = \cos(\mu_n t) = \cos(n\pi t/L)$ is an eigenfunction of eigenvalue $\lambda_n = -(n\pi/L)^2$, for $n \ge 0$. We wrote $n \ge 0$, instead of $n \ge 1$, to include the fact that $x_0(t) \equiv 1$ is an eigenfunction of eigenvalue $\lambda_0 = 0$.

3. Imposing the mixed boundary conditions on $x_h(t)$, we obtain the system
\[
A_\lambda c = 0,\qquad
A_\lambda = \begin{pmatrix} 1 & 0 \\ -\sin\mu L & \cos\mu L \end{pmatrix},\qquad
c = \begin{pmatrix} c_1 \\ c_2 \end{pmatrix}.
\]
And $\cos\mu L = \det[A_\lambda] = 0 \iff \mu = \mu_n = (n+1/2)\pi/L$, $n \in \mathbb{Z}$ $\iff \lambda = \lambda_n = -\mu_n^2 = -(n+1/2)^2\pi^2/L^2$, $n \ge 0$. We passed from $n \in \mathbb{Z}$ to the integers $n \ge 0$ for the usual reason. To compute the eigenfunctions of eigenvalue $\lambda = \lambda_n = -\mu_n^2$, we must solve the underdetermined system
\[
\left.
\begin{aligned}
c_1 &= c_1 \cos 0 + c_2 \sin 0 = x(0) = 0\\
(-1)^{n+1} \mu_n c_1 &= -c_1 \mu_n \sin\mu_n L + c_2 \mu_n \cos\mu_n L = x'(L) = 0
\end{aligned}
\right\}
\Longrightarrow
\begin{cases} c_1 = 0,\\ c_2 \in \mathbb{R} \text{ free}. \end{cases}
\]
Hence $x_n(t) = \sin(\mu_n t) = \sin\big( (n+1/2)\pi t/L \big)$ is an eigenfunction of eigenvalue $\lambda_n = -(n+1/2)^2\pi^2/L^2$, for all $n \ge 0$.

4. Imposing the periodic boundary conditions on $x_h(t)$, we obtain the system
\[
A_\lambda c = 0,\qquad
A_\lambda = \begin{pmatrix} 0 & 2\sin\mu L \\ -2\mu\sin\mu L & 0 \end{pmatrix},\qquad
c = \begin{pmatrix} c_1 \\ c_2 \end{pmatrix}.
\]
Then $4\mu \sin^2\mu L = \det[A_\lambda] = 0 \iff \mu = \mu_n = n\pi/L$, $n \in \mathbb{Z}$ $\iff \lambda = \lambda_n = -\mu_n^2 = -(n\pi/L)^2$, $n \ge 1$. Once more we passed from $n \in \mathbb{Z}$ to the integers $n \ge 1$. To compute the eigenfunctions of eigenvalue $\lambda = \lambda_n = -\mu_n^2$, we must solve the underdetermined system
\[
\left.
\begin{aligned}
0 &= 2 c_2 \sin(\mu_n L) = x(L) - x(-L) = 0\\
0 &= -2 \mu_n c_1 \sin(\mu_n L) = x'(L) - x'(-L) = 0
\end{aligned}
\right\}
\Longrightarrow
\begin{cases} c_1 \in \mathbb{R} \text{ free},\\ c_2 \in \mathbb{R} \text{ free}. \end{cases}
\]
Hence $c_n(t) = \cos(\mu_n t) = \cos(n\pi t/L)$ and $s_n(t) = \sin(\mu_n t) = \sin(n\pi t/L)$ are two linearly independent eigenfunctions of the double eigenvalue $\lambda_n = -(n\pi/L)^2$, for all $n \ge 1$. This completes the proof of the proposition. □

Exercise. Solve the BVP with mixed conditions $x'' = \lambda x$, $x'(0) = x(L) = 0$.
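As a numerical cross-check of the proposition (an addition, with illustrative parameters), one can discretize the Dirichlet case with finite differences and compare the computed eigenvalues with $-(n\pi/L)^2$:

```python
import numpy as np

# Eigenvalues of x'' = lambda*x with x(0) = x(L) = 0, approximated by the
# standard second-difference matrix on the interior nodes.
L, N = 1.0, 400
dx = L / N
main = -2.0 * np.ones(N - 1)
off = np.ones(N - 2)
D2 = (np.diag(main) + np.diag(off, 1) + np.diag(off, -1)) / dx**2

eigs = np.sort(np.linalg.eigvalsh(D2))[::-1]   # closest to zero first
for n in range(1, 4):
    print(eigs[n - 1], -(n * np.pi / L)**2)    # the pairs should nearly agree
```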
Separation of variables in the 1D wave equation. Consider a homogeneous 1D wave equation with homogeneous Neumann boundary conditions. To simplify, we assume that the string has length $L = \pi$ and that we release it, without impulse, with initial displacement $f(x) = 1 - 2\cos(3x)$. We also assume that no external force acts. We denote by $c$ the speed at which waves travel along the string. The equations modeling this problem are
\[
(1)\quad
\begin{cases}
u_{tt} = c^2 u_{xx}, & x \in (0,\pi),\ t \in \mathbb{R},\\
u(x,0) = 1 - 2\cos(3x), & x \in (0,\pi),\\
u_t(x,0) = 0, & x \in (0,\pi),\\
u_x(0,t) = 0, & t \in \mathbb{R},\\
u_x(\pi,t) = 0, & t \in \mathbb{R}.
\end{cases}
\]
The basic idea of the method is to look for solutions in separated-variables form $u(x,t) = X(x)T(t)$ of the homogeneous part of the problem to be solved. In the case above, all the conditions and equations are homogeneous except the one concerning the initial displacement, so its homogeneous part is
\[
(1)_h\quad
\begin{cases}
u_{tt} = c^2 u_{xx}, & x \in (0,\pi),\ t \in \mathbb{R},\\
u_t(x,0) = 0, & x \in (0,\pi),\\
u_x(0,t) = 0, & t \in \mathbb{R},\\
u_x(\pi,t) = 0, & t \in \mathbb{R}.
\end{cases}
\]
Imposing that the function $u(x,t) = X(x)T(t)$ satisfy:

- the wave equation $u_{tt} = c^2 u_{xx}$, we obtain $X(x)T''(t) = c^2 X''(x)T(t)$, hence
\[
\frac{X''(x)}{X(x)} = \frac{T''(t)}{c^2 T(t)} = \lambda \in \mathbb{R};
\]
- the initial condition $u_t(x,0) = 0$, we see that $T'(0) = 0$;
- the boundary condition $u_x(0,t) = 0$, we see that $X'(0) = 0$;
- the boundary condition $u_x(\pi,t) = 0$, we see that $X'(\pi) = 0$.

Therefore, we obtain two separate problems:
\[
\begin{cases} X''(x) = \lambda X(x),\\ X'(0) = X'(\pi) = 0, \end{cases}
\qquad
\begin{cases} T''(t) = \lambda c^2 T(t),\\ T'(0) = 0. \end{cases}
\]
In the previous section we saw that the eigenvalues and eigenfunctions of the Neumann BVP associated with the function $X(x)$ are $\lambda = \lambda_n = -n^2$ and $X(x) = X_n(x) = \cos(nx)$, for $n \ge 0$.

Now we turn to the problem associated with the function $T(t)$, taking into account that $\lambda = \lambda_n = -n^2$. In particular, the general solution of the ODE $T'' + n^2 c^2 T = 0$ is
\[
T(t) = c_1 \cos(cnt) + c_2 \sin(cnt),\qquad c_1, c_2 \in \mathbb{R}.
\]
Imposing the initial condition $T'(0) = 0$, we see that $c_2 = 0$ while $c_1 \in \mathbb{R}$ stays free. Taking $c_1 = 1$, which is the simplest option, we obtain the family of functions $T(t) = T_n(t) = \cos(cnt)$, $n \ge 0$. Thus we have obtained that all the separated-variables functions of the family
\[
u_n(x,t) = X_n(x) T_n(t) = \cos(nx)\cos(cnt),\qquad n \ge 0,
\]
are solutions of the homogeneous problem $(1)_h$. These functions are called normal modes and describe the way the string vibrates. Specifically, due to the linearity of the homogeneous problem $(1)_h$, any vibration of the string we are studying is a superposition (sum) of these infinitely many normal modes. In other words, the general solution of the homogeneous problem $(1)_h$ is given, at least at the formal level, by the series
\[
u(x,t) = \sum_{n \ge 0} a_n u_n(x,t) = \sum_{n \ge 0} a_n \cos(nx)\cos(cnt),
\]
where the infinitely many amplitudes $a_0, a_1, a_2, \ldots \in \mathbb{R}$ remain, for the moment, undetermined. To resolve this indeterminacy, we recover the only non-homogeneous condition of the original problem; that is, the one concerning the initial displacement. Imposing that
\[
1 - 2\cos(3x) = f(x) = u(x,0) = \sum_{n \ge 0} a_n \cos(nx) = a_0 + a_1 \cos x + a_2 \cos(2x) + a_3 \cos(3x) + \cdots,
\]
we obtain by direct inspection that $a_0 = 1$, $a_3 = -2$ and all the other amplitudes are zero, so
\[
u(x,t) = a_0 u_0(x,t) + a_3 u_3(x,t) = 1 - 2\cos(3x)\cos(3ct)
\]
is a solution of the original problem. (It is in fact the only one, but we will not prove this.) Thus, in this case the vibration of the string is the superposition of two normal modes: the zeroth and the third.

Remark. The above solution can be rewritten as two superposed waves traveling in opposite directions at speed $c$. Indeed,
\[
u(x,t) = 1 - 2\cos(3x)\cos(3ct) = 1 - \cos\big( 3(x+ct) \big) - \cos\big( 3(x-ct) \big) = p(x+ct) + q(x-ct),
\]
with $p(x) = 1/2 - \cos(3x) = q(x)$. (We used the identity $2\cos a \cos b = \cos(a+b) + \cos(a-b)$.)

Exercise. Read the English Wikipedia entry on standing waves. Watch one of the many YouTube videos on standing waves, in which the first normal modes of vibration of a string are visualized experimentally. One can also see some normal modes of vibration of a rectangular elastic membrane in a YouTube video titled Science fun. The experiment consists of pouring salt onto a black membrane that vibrates due to the sound emitted by a loudspeaker placed underneath, to check that the normal modes change with the frequency of the sound.

Exercise. Write the two ODEs obtained by imposing that the function $u(x,t) = X(x)T(t)$ satisfy the PDE $u_{tt} = -k u_t + c^2 u_{xx}$, choosing the option that yields the simplest possible ODE for the function $X(x)$. This PDE is called the equation of the vibrating string with friction, since the term $-k u_t$ comes from a friction force proportional (and opposite) to the velocity.
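The closed-form solution just obtained is easy to verify symbolically. A quick check (an addition, assuming sympy) confirms the PDE, both initial conditions, and both Neumann boundary conditions:

```python
import sympy as sp

# Check that u(x,t) = 1 - 2*cos(3x)*cos(3ct) solves problem (1).
x, t, c = sp.symbols('x t c', real=True)
u = 1 - 2 * sp.cos(3 * x) * sp.cos(3 * c * t)

print(sp.simplify(sp.diff(u, t, 2) - c**2 * sp.diff(u, x, 2)))  # 0: wave PDE
print(sp.simplify(u.subs(t, 0)))          # 1 - 2*cos(3x) = f(x)
print(sp.diff(u, t).subs(t, 0))           # 0: released without impulse
ux = sp.diff(u, x)
print(ux.subs(x, 0), sp.simplify(ux.subs(x, sp.pi)))  # 0, 0: Neumann BCs
```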
Fourier expansions. In the last step of the previous example, we managed to determine all the free coefficients by direct inspection. When that is not possible, we will use the following formulas, already seen in the Calculus 2 course, to compute Fourier expansions.

- The full Fourier expansion of a function $f\colon [-L,L] \to \mathbb{R}$ is
\[
f(x) \sim \frac{a_0}{2} + \sum_{n \ge 1} a_n \cos(n\pi x/L) + b_n \sin(n\pi x/L),
\qquad
a_n = \frac{1}{L} \int_{-L}^{L} f(x)\cos(n\pi x/L)\,dx,
\quad
b_n = \frac{1}{L} \int_{-L}^{L} f(x)\sin(n\pi x/L)\,dx.
\]
- The Fourier cosine expansion of a function $f\colon [0,L] \to \mathbb{R}$ is
\[
f(x) \sim \frac{a_0}{2} + \sum_{n \ge 1} a_n \cos(n\pi x/L),
\qquad
a_n = \frac{2}{L} \int_0^L f(x)\cos(n\pi x/L)\,dx.
\]
- The Fourier sine expansion of a function $f\colon [0,L] \to \mathbb{R}$ is
\[
f(x) \sim \sum_{n \ge 1} b_n \sin(n\pi x/L),
\qquad
b_n = \frac{2}{L} \int_0^L f(x)\sin(n\pi x/L)\,dx.
\]

In the first two cases, the first term $a_0/2$ is the average of the function $f(x)$. It can be proved that these series expansions are (absolutely, uniformly) convergent when the function $f(x)$ is sufficiently regular, but in this course we work at a purely formal level, without worrying about convergence.

Exercise. Let $f\colon [0,2\pi] \to \mathbb{R}$ be the function defined by $f(x) = 1 - \pi^2 x$. Check, integrating by parts, that the coefficients of its Fourier sine expansion are
\[
b_n = \frac{1}{\pi} \int_0^{2\pi} (1 - \pi^2 x)\sin(nx/2)\,dx
= \frac{4\pi^2 (-1)^n}{n} + \frac{2}{\pi}\,\frac{1 - (-1)^n}{n},
\qquad n \ge 1.
\]
(Here $L = 2\pi$, so $\sin(n\pi x/L) = \sin(nx/2)$ and $2/L = 1/\pi$.)

Separation of variables in the 1D heat equation.

Objective. In this second example of the method of separation of variables, we will see that, when solving the homogeneous 1D heat equation with constant Dirichlet boundary conditions, the temperature tends to thermal equilibrium (in English, steady state). We will homogenize the boundary conditions before separating variables by means of a "clever" change of variables.

Physical problem. We have a bar of length $L > 0$ made of a material of thermal conductivity $\kappa$, density $\rho$ and specific heat $c$. We write $k^2 = \kappa / c\rho$. The initial temperature of the bar is given by a function $f\colon [0,L] \to \mathbb{R}$. Finally, we keep the temperature of the bar constant at both endpoints: $\alpha \in \mathbb{R}$ is the temperature at the left one and $\beta \in \mathbb{R}$ is the temperature at the right one. Moreover, we assume that there are no internal heat sources or sinks.

Mathematical model. The equations modeling this problem are
\[
\begin{cases}
u_t = k^2 u_{xx}, & x \in (0,L),\ t > 0,\\
u(x,0) = f(x), & x \in (0,L),\\
u(0,t) = \alpha, & t > 0,\\
u(L,t) = \beta, & t > 0.
\end{cases}
\]

Steps of the method.

1. Find functions $v(x)$ and $g(x)$ such that the change of variables $w(x,t) = u(x,t) - v(x)$ transforms the original problem into the problem with homogeneous boundary conditions
\[
(*)\quad
\begin{cases}
w_t = k^2 w_{xx}, & x \in (0,L),\ t > 0,\\
w(x,0) = g(x), & x \in (0,L),\\
w(0,t) = 0, & t > 0,\\
w(L,t) = 0, & t > 0.
\end{cases}
\]
Express $v(x)$ and $g(x)$ in terms of the quantities $\alpha$, $\beta$, $L$ and the function $f(x)$.
2. Impose that $w(x,t) = X(x)T(t)$ satisfy the homogeneous part of problem $(*)$. Write the BVP associated with the function $X(x)$ and the problem associated with the function $T(t)$.
3. Solve the BVP associated with the function $X(x)$.
4. Taking into account the eigenvalues of the previous BVP, solve the problem associated with $T(t)$.
5. Compute the normal modes (that is, the eigenfunctions) of the homogeneous part of problem $(*)$.
6. Prove that, at the formal level, the solution of the original problem satisfies $\lim_{t\to+\infty} u(x,t) = v(x)$.
7. Interpret these results physically.
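Before developing the method, the sine-expansion exercise above can be double-checked numerically. The following sketch (an addition, assuming scipy) compares quadrature values of $b_n$ with the stated closed form:

```python
import numpy as np
from scipy.integrate import quad

# Numerical check of b_n for f(x) = 1 - pi^2 * x on [0, 2*pi].
f = lambda x: 1 - np.pi**2 * x

for n in range(1, 6):
    integral, _ = quad(lambda x: f(x) * np.sin(n * x / 2), 0, 2 * np.pi)
    b_numeric = integral / np.pi
    b_closed = 4 * np.pi**2 * (-1)**n / n + (2 / np.pi) * (1 - (-1)**n) / n
    print(n, b_numeric, b_closed)   # the two columns should agree
```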
Development of the method.

1. Imposing that the function $w(x,t) = u(x,t) - v(x)$ satisfy the PDE $w_t = k^2 w_{xx}$ gives
\[
0 = w_t - k^2 w_{xx} = (u_t - k^2 u_{xx}) - (v_t - k^2 v_{xx}) = k^2 v''(x)
\;\Longrightarrow\; v''(x) = 0.
\]
Imposing that the function $w(x,t) = u(x,t) - v(x)$ satisfy the boundary conditions gives
\[
0 = w(0,t) = u(0,t) - v(0) = \alpha - v(0) \Rightarrow v(0) = \alpha,
\qquad
0 = w(L,t) = u(L,t) - v(L) = \beta - v(L) \Rightarrow v(L) = \beta.
\]
The only function $v(x)$ such that $v''(x) = 0$, $v(0) = \alpha$ and $v(L) = \beta$ is $v(x) = \alpha + (\beta - \alpha)x/L$. The graph of the function $v(x)$ is the straight line joining the points $(0,\alpha)$ and $(L,\beta)$. Finally,
\[
g(x) = w(x,0) = u(x,0) - v(x) = f(x) - \alpha + (\alpha - \beta)x/L.
\]
2. Imposing that the function $w(x,t) = X(x)T(t)$ satisfy:
- the heat equation $w_t = k^2 w_{xx}$, we obtain $X(x)T'(t) = k^2 X''(x)T(t)$, hence
\[
\frac{X''(x)}{X(x)} = \frac{T'(t)}{k^2 T(t)} = \lambda \in \mathbb{R};
\]
- the boundary condition $w(0,t) = 0$, we see that $X(0) = 0$;
- the boundary condition $w(L,t) = 0$, we see that $X(L) = 0$.

Therefore, we obtain two separate problems:
\[
(a)\ \begin{cases} X''(x) = \lambda X(x),\\ X(0) = X(L) = 0, \end{cases}
\qquad
(b)\ \ T'(t) = \lambda k^2 T(t).
\]
Problem (a) is a BVP with Dirichlet conditions associated with the function $X(x)$.
3. We saw in the previous section that the eigenvalues and eigenfunctions of the BVP (a) are $\lambda = \lambda_n = -n^2\pi^2/L^2$ and $X(x) = X_n(x) = \sin(n\pi x/L)$, for $n \ge 1$.
4. A solution of problem (b) for $\lambda = \lambda_n = -n^2\pi^2/L^2$ is
\[
T(t) = T_n(t) = e^{-n^2 k^2 \pi^2 t / L^2},\qquad n \ge 1.
\]
5. Thus, the normal modes (the eigenfunctions) of the homogeneous part of problem $(*)$ are
\[
w_n(x,t) = T_n(t) X_n(x) = e^{-n^2\pi^2 k^2 t / L^2} \sin(n\pi x/L),\qquad n \ge 1.
\]
Taking into account that $X_n(x)$ is a bounded function and $T_n(t)$ tends to zero as $t \to +\infty$, it follows that $\lim_{t\to+\infty} w_n(x,t) = 0$ for every $x \in (0,L)$ and every integer $n \ge 1$.
6. The final solution $w(x,t) = \sum_{n \ge 1} b_n w_n(x,t)$ of problem $(*)$ is determined by imposing the non-homogeneous condition
\[
g(x) = w(x,0) = \sum_{n \ge 1} b_n w_n(x,0) = \sum_{n \ge 1} b_n \sin(n\pi x/L).
\]
That is, $b_n = \frac{2}{L} \int_0^L g(x)\sin(n\pi x/L)\,dx$, $n \ge 1$, are the coefficients of the Fourier sine expansion of the function $g(x)$ on the interval $[0,L]$. Therefore, undoing the change of variables, the solution $u(x,t) = v(x) + w(x,t)$ of the original problem satisfies
\[
\lim_{t\to+\infty} u(x,t) = v(x) + \lim_{t\to+\infty} w(x,t) = v(x) + \sum_{n \ge 1} b_n \lim_{t\to+\infty} w_n(x,t) = v(x).
\]
7. We have proved that, as time tends to infinity, the temperature tends to the thermal equilibrium in which the temperature is given by the straight line joining the temperatures at the endpoints. This agrees with our physical experience, which teaches us that heat tends to distribute itself as uniformly as possible.

Exercise. Follow the link and study the JAVA applet titled Heat Equation, which illustrates this physical phenomenon.

Exercise. Prove that if we replace the two constant Dirichlet conditions by two homogeneous Neumann conditions, then
\[
\lim_{t\to+\infty} u(x,t) = \frac{1}{L} \int_0^L f(x)\,dx.
\]
The physical interpretation of this result is the following. Homogeneous Neumann conditions are equivalent to the existence of perfect thermal insulation at the endpoints, which prevents heat from escaping or entering, so it can only redistribute internally. Therefore, the temperature tends to a constant value, and this value must coincide with the average of the initial temperature.
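The steady-state claim of step 6 can be watched numerically. This sketch (an addition; the parameters and the choice of $f$ are illustrative) evaluates the truncated series solution and shows it approaching the line $v(x)$:

```python
import numpy as np
from scipy.integrate import quad

# Truncated series solution u(x,t) = v(x) + sum_n b_n T_n(t) X_n(x).
L, k2, alpha, beta, modes = 1.0, 1.0, 1.0, 2.0, 40
f = lambda x: np.sin(np.pi * x) + alpha + (beta - alpha) * x / L
v = lambda x: alpha + (beta - alpha) * x / L        # equilibrium line
g = lambda x: f(x) - v(x)

b = [2 / L * quad(lambda x: g(x) * np.sin(n * np.pi * x / L), 0, L)[0]
     for n in range(1, modes + 1)]

def u(x, t):
    w = sum(b[n - 1] * np.exp(-n**2 * np.pi**2 * k2 * t / L**2)
            * np.sin(n * np.pi * x / L) for n in range(1, modes + 1))
    return v(x) + w

x = 0.3
for t in [0.0, 0.1, 1.0]:
    print(t, u(x, t), v(x))    # u(x,t) -> v(x) as t grows
```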
Separation of variables in the 2D Poisson equation in rectangular domains.

Objective. In this last example of the method of separation of variables, we are going to solve a 2D Poisson equation in a rectangular domain with Dirichlet boundary conditions. Two of the four boundary conditions are non-homogeneous. Before separating variables, we will homogenize both the Poisson equation (that is, we will transform it into a Laplace equation) and one boundary condition, by means of a change of variables.

Original problem. Consider the equations
\[
\begin{cases}
u_{xx} + u_{yy} = 2y, & x \in (0,\pi),\ y \in (0,2\pi),\\
u(x,0) = 0, & x \in (0,\pi),\\
u(x,2\pi) = 2\pi x^2, & x \in (0,\pi),\\
u(0,y) = 0, & y \in (0,2\pi),\\
u(\pi,y) = 1, & y \in (0,2\pi).
\end{cases}
\]

Steps of the method.

1. Find functions $v(x,y)$ and $g(y)$ such that the change of variables $w(x,y) = u(x,y) - v(x,y)$ transforms the original problem into the problem
\[
(\triangle)\quad
\begin{cases}
w_{xx} + w_{yy} = 0, & x \in (0,\pi),\ y \in (0,2\pi),\\
w(x,0) = 0, & x \in (0,\pi),\\
w(x,2\pi) = 0, & x \in (0,\pi),\\
w(0,y) = 0, & y \in (0,2\pi),\\
w(\pi,y) = g(y), & y \in (0,2\pi).
\end{cases}
\]
2. Impose that the function $w(x,y) = X(x)Y(y)$ satisfy the homogeneous part of problem $(\triangle)$. Write the BVP associated with the function $Y(y)$ and the problem associated with the function $X(x)$.
3. Solve the BVP associated with the function $Y(y)$.
4. Taking into account the eigenvalues of the previous BVP, solve the problem associated with $X(x)$.
5. Compute the general solution of the homogeneous part of problem $(\triangle)$.
6. Solve the original problem.

Development of the method.

1. Imposing that the function $w(x,y) = u(x,y) - v(x,y)$ satisfy the equation $w_{xx} + w_{yy} = 0$ gives
\[
0 = w_{xx} + w_{yy} = (u_{xx} + u_{yy}) - (v_{xx} + v_{yy}) = 2y - (v_{xx} + v_{yy})
\;\Longrightarrow\; v_{xx} + v_{yy} = 2y.
\]
Imposing that the function $w(x,y) = u(x,y) - v(x,y)$ satisfy the boundary conditions on the lower, upper and left sides gives
\[
\begin{cases}
0 = w(0,y) = u(0,y) - v(0,y) = 0 - v(0,y) &\Rightarrow v(0,y) = 0,\\
0 = w(x,0) = u(x,0) - v(x,0) = 0 - v(x,0) &\Rightarrow v(x,0) = 0,\\
0 = w(x,2\pi) = u(x,2\pi) - v(x,2\pi) = 2\pi x^2 - v(x,2\pi) &\Rightarrow v(x,2\pi) = 2\pi x^2.
\end{cases}
\]
We need a function $v(x,y)$ satisfying these four conditions. To simplify the computations, we look for this function in separated-variables form: $v(x,y) = \tilde X(x)\tilde Y(y)$. Then the four conditions above are equivalent to
\[
\tilde X''(x)\tilde Y(y) + \tilde X(x)\tilde Y''(y) = 2y,
\qquad \tilde X(0) = 0,
\qquad \tilde Y(0) = 0,
\qquad \tilde X(x)\tilde Y(2\pi) = 2\pi x^2.
\]
One possible solution is to take $\tilde X(x) = x^2$ and $\tilde Y(y) = y$. That is, $v(x,y) = x^2 y$, so
\[
g(y) = w(\pi,y) = u(\pi,y) - v(\pi,y) = 1 - \pi^2 y.
\]
2. Imposing that the function $w(x,y) = X(x)Y(y)$ satisfy:
- the Laplace equation $w_{xx} + w_{yy} = 0$, we obtain $X''(x)Y(y) + X(x)Y''(y) = 0$, hence
\[
-\frac{X''(x)}{X(x)} = \frac{Y''(y)}{Y(y)} = \lambda \in \mathbb{R};
\]
- the boundary condition $w(0,y) = 0$, we obtain $X(0) = 0$;
- the boundary condition $w(x,0) = 0$, we obtain $Y(0) = 0$;
- the boundary condition $w(x,2\pi) = 0$, we obtain $Y(2\pi) = 0$.

Therefore, we obtain two separate problems:
\[
(a)\ \begin{cases} X''(x) + \lambda X(x) = 0,\\ X(0) = 0, \end{cases}
\qquad
(b)\ \begin{cases} Y''(y) = \lambda Y(y),\\ Y(0) = 0 = Y(2\pi). \end{cases}
\]
Problem (b) is a BVP with Dirichlet conditions associated with the function $Y(y)$.
3. We already saw that the eigenvalues and eigenfunctions of the BVP (b) are $\lambda = \lambda_n = -n^2/4$ and $Y(y) = Y_n(y) = \sin(ny/2)$, for $n \ge 1$.
4. The ODE $X''(x) + \lambda_n X(x) = 0$ is linear, homogeneous and with constant coefficients. Its characteristic polynomial is $P(m) = m^2 + \lambda_n$ and its roots are $m_{1,2} = \pm\sqrt{-\lambda_n} = \pm n/2$. Therefore, the general solution of this equation is
\[
X(x) = c_1 e^{nx/2} + c_2 e^{-nx/2},\qquad c_1, c_2 \in \mathbb{R}.
\]
Imposing the additional condition $0 = X(0) = c_1 + c_2$, we obtain $c_2 = -c_1$, so $X(x) = c_1\big( e^{nx/2} - e^{-nx/2} \big)$, $c_1 \in \mathbb{R}$. Taking $c_1 = 1/2$, we obtain the family of functions
\[
X_n(x) = \frac{e^{nx/2} - e^{-nx/2}}{2} = \sinh(nx/2),\qquad n \ge 1.
\]
5. Thus, the normal modes (the eigenfunctions) of the homogeneous part of problem $(\triangle)$ are
\[
w_n(x,y) = X_n(x) Y_n(y) = \sinh(nx/2)\sin(ny/2),\qquad n \ge 1.
\]
In particular, by linearity, all the series of the form
\[
w(x,y) = \sum_{n \ge 1} \beta_n w_n(x,y) = \sum_{n \ge 1} \beta_n \sinh(nx/2)\sin(ny/2)
\]
are formal solutions of the homogeneous part of problem $(\triangle)$.
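The choice $v(x,y) = x^2 y$ from step 1 is easy to verify symbolically. A quick check (an addition, assuming sympy):

```python
import sympy as sp

# v(x,y) = x^2*y has Laplacian 2y and satisfies the homogenized conditions.
x, y = sp.symbols('x y', real=True)
v = x**2 * y
print(sp.diff(v, x, 2) + sp.diff(v, y, 2))            # 2*y (Poisson term)
print(v.subs(x, 0), v.subs(y, 0), v.subs(y, 2*sp.pi)) # 0, 0, 2*pi*x**2
print(sp.simplify(1 - v.subs(x, sp.pi)))              # g(y) = 1 - pi^2*y
```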
6. In the previous step the coefficients $\beta_n$ were left free, but now we determine them, thus obtaining the final solution of problem $(\triangle)$, by imposing the only non-homogeneous condition of the problem; namely, the boundary condition on the right side of the rectangle:
\[
g(y) = w(\pi,y) = \sum_{n \ge 1} \beta_n w_n(\pi,y) = \sum_{n \ge 1} \beta_n \sinh(n\pi/2)\sin(ny/2) = \sum_{n \ge 1} b_n \sin(ny/2),
\]
where we have written $b_n = \beta_n \sinh(n\pi/2)$. In the section on Fourier expansions we saw that
\[
b_n = \frac{1}{\pi} \int_0^{2\pi} (1 - \pi^2 y)\sin(ny/2)\,dy
= \frac{4\pi^2 (-1)^n}{n} + \frac{2}{\pi}\,\frac{1 - (-1)^n}{n},
\qquad n \ge 1,
\]
are the Fourier coefficients of the sine expansion of the function $g(y) = 1 - \pi^2 y$ on the interval $[0,2\pi]$. Finally, undoing the change of variables $w(x,y) = u(x,y) - v(x,y)$, the final solution is
\[
u(x,y) = v(x,y) + w(x,y) = x^2 y + \sum_{n \ge 1} \frac{b_n}{\sinh(n\pi/2)}\,\sinh(nx/2)\sin(ny/2).
\]

End of the Last Part.
2422
https://mathworld.wolfram.com/Hexahedron.html
Hexahedron

A hexahedron is a polyhedron with six faces. The figure above shows a number of named hexahedra, in particular the acute golden rhombohedron, cube, cuboid, hemicube, hemiobelisk, obtuse golden rhombohedron, pentagonal pyramid, pentagonal wedge, tetragonal antiwedge, and triangular dipyramid.

There are seven topologically distinct convex hexahedra, corresponding through graph duality with the seven hexahedral graphs. The illustration above shows these seven hexahedra (top line), their skeletons (middle line), and the hexahedral graphs whose duals correspond to the polyhedra and their skeletons (bottom line). The unique regular hexahedron is the cube, and the unique chiral hexahedron is the tetragonal antiwedge. Two hexahedra can be built from regular polygons with equal edge lengths: the equilateral triangular dipyramid and the pentagonal pyramid.

Rhombohedra are a special class of hexahedron in which opposite faces are congruent rhombi.

Through graph duality, the list of the numbers of vertices of the faces of a hexahedron corresponds to the degree sequence (sequence of vertex degrees) of a hexahedral graph. The following table lists the hexahedra, together with their degree sequences, number of vertices V, and number of edges E, which are related through the polyhedral formula V - E + F = 2 (here F = 6). Standard names do not appear to be in common use for a number of these; for such cases, the names appearing on Michon are used.

hexahedron             degree sequence       V    E
triangular dipyramid   (3, 3, 3, 3, 3, 3)    5    9
pentagonal pyramid     (3, 3, 3, 3, 3, 5)    6   10
tetragonal antiwedge   (3, 3, 3, 3, 4, 4)    6   10
hemiobelisk            (3, 3, 3, 4, 4, 5)    7   11
hemicube               (3, 3, 4, 4, 4, 4)    7   11
pentagonal wedge       (3, 3, 4, 4, 5, 5)    8   12
cube                   (4, 4, 4, 4, 4, 4)    8   12

See also: Cube, Cuboid, Hemicube, Hemiobelisk, Hexagonal Pyramid, Hexahedral Graph, Pentagonal Wedge, Polyhedron, Rhombohedron, Tetragonal Antiwedge, Triangular Dipyramid

References:
Duijvestijn, A. J. W. and Federico, P. J. "The Number of Polyhedral (3-Connected Planar) Graphs." Math. Comput. 37, 523-532, 1981.
Gardner, M. "Find the Hexahedrons." §19.9 in Martin Gardner's New Mathematical Diversions from Scientific American. New York: Simon and Schuster, pp. 224-225 and 233, 1966.
Guy, R. K. Unsolved Problems in Number Theory, 2nd ed. New York: Springer-Verlag, 1994.
McClellan, J. "The Hexahedra Problem." Recr. Math. Mag., No. 4, 34-40, Aug. 1961.
Michon, G. P. "Final Answers: Polyhedra & Polytopes."
Steiner, J. "Problème de situation." Ann. de Math 19, 36, 1828. Reprinted in Jacob Steiner's gesammelte Werke, Band I. Bronx, NY: Chelsea, p. 227, 1971.

Cite this as: Weisstein, Eric W. "Hexahedron." From MathWorld--A Wolfram Resource.
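As a quick sanity check of the table (an addition, not part of the MathWorld entry): each degree sequence gives the numbers of sides of the six faces, equivalently the vertex degrees of the dual hexahedral graph, so the edge count is half the sequence sum, and Euler's polyhedral formula V - E + F = 2 with F = 6 then fixes V = E - 4.

```python
# Recompute V and E for each hexahedron from its degree sequence.
hexahedra = {
    "triangular dipyramid": (3, 3, 3, 3, 3, 3),
    "pentagonal pyramid":   (3, 3, 3, 3, 3, 5),
    "tetragonal antiwedge": (3, 3, 3, 3, 4, 4),
    "hemiobelisk":          (3, 3, 3, 4, 4, 5),
    "hemicube":             (3, 3, 4, 4, 4, 4),
    "pentagonal wedge":     (3, 3, 4, 4, 5, 5),
    "cube":                 (4, 4, 4, 4, 4, 4),
}
for name, seq in hexahedra.items():
    E = sum(seq) // 2    # each edge borders exactly two faces
    V = E - 4            # from V - E + 6 = 2
    print(f"{name}: V={V}, E={E}")   # matches the table above
```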
2423
https://pmc.ncbi.nlm.nih.gov/articles/PMC11076936/
Understanding Factorial Designs, Main Effects, and Interaction Effects: Simply Explained with a Worked Example

Indian J Psychol Med. 2024 Mar 21;46(2):175-177. doi: 10.1177/02537176241237066. PMCID: PMC11076936. PMID: 38725713.

Chittaranjan Andrade, Dept. of Clinical Psychopharmacology and Neurotoxicology, National Institute of Mental Health and Neurosciences, Bengaluru, Karnataka, India. E-mail: andradec@gmail.com

© 2024 The Author(s). This article is distributed under the terms of the Creative Commons Attribution-NonCommercial 4.0 License, which permits non-commercial use, reproduction and distribution of the work without further permission provided the original work is attributed as specified on the SAGE and Open Access page.

Abstract

A factorial design examines the effects of two independent variables on a single, continuous dependent variable. The statistical test employed to analyze the data is a two-way analysis of variance (ANOVA). This test yields three results: a main effect for each of the independent variables and an interaction effect between the two independent variables. This article explains factorial designs and two-way ANOVA with the help of a worked example using hypothetical data in a spreadsheet provided as a supplementary file. The main effects and interaction effects are explained and illustrated using tables and figures. A short discussion provides general notes about the concepts explained in this article, along with brief notes on repeated measures ANOVA and higher order ANOVAs. Many additional examples, with figures and explanations, are provided in the supplementary materials, which the reader is strongly encouraged to view.
Keywords: Factorial design, two-way analysis of variance, main effects, interaction effect

A study design is said to be factorial in nature if participants are randomized into two or more groups and if participants in each of these groups are further randomized into two or more subgroups. As an example, we conduct a study in which 96 adults with major depressive disorder (MDD) are randomized to receive escitalopram or placebo, and patients in each of these two groups are randomized to receive cognitive behavioral therapy (CBT) or waitlisted CBT. Because drug (escitalopram vs. placebo) and therapy (CBT vs. waitlist) each have two categories, this is a 2 × 2 factorial design. Table 1 presents endpoint Hamilton Rating Scale for Depression (HAM-D) scores in our hypothetical study. The data file from which Table 1 was generated is made available in the supplementary materials so that readers can run the analyses on their own, if they wish.

Table 1. Endpoint Depression Ratings in a Hypothetical 2 × 2 Factorially Designed Study.

                 Waitlist              CBT                   Total
 Placebo         19.9 (2.1), n = 24    18.8 (1.6), n = 24    19.3 (1.9), n = 48
 Escitalopram    14.9 (1.8), n = 24    10.3 (1.3), n = 24    12.6 (2.8), n = 48
 Total           17.4 (3.2), n = 48    14.5 (4.5), n = 48

Data in cells are mean (standard deviation) Hamilton Rating Scale for Depression scores and sample size (n) for the group.

Two-way ANOVA

Analysis of variance (ANOVA) is a statistical procedure used to compare the means of two or more groups. We analyze the Table 1 data using a statistical test known as two-way ANOVA. It is called "two-way" because, as the table shows, there are two factors. One factor is drug, presented in rows in the table, and the other factor is therapy, presented in columns in the table (the rows and the columns are the two "ways"). Each factor has two levels, making it, as already stated, a two row × two column (2 × 2) design with four groups in the study, represented by four boxes (cells) in the table. The two-way ANOVA, performed using a hand calculator or a statistical program, gives us three results. These are a main effect for drug (F = 361.55; df = 1,92; p < .001), a main effect for therapy (F = 65.59; df = 1,92; p < .001), and a drug × therapy interaction (F = 24.30; df = 1,92; p < .001). We observe that all three results are statistically significant.
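For readers who want to see such an analysis in software, the following is a minimal sketch using Python's statsmodels. Since the article's actual data file lives in its supplementary materials, the sketch simulates data whose cell means and SDs merely mimic Table 1, so the resulting F values will only be close to those reported above.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

# Simulate 24 patients per cell with Table 1's means and SDs (illustrative).
rng = np.random.default_rng(0)
cells = [("placebo", "waitlist", 19.9, 2.1), ("placebo", "cbt", 18.8, 1.6),
         ("escitalopram", "waitlist", 14.9, 1.8), ("escitalopram", "cbt", 10.3, 1.3)]
rows = [(d, th, rng.normal(m, s)) for d, th, m, s in cells for _ in range(24)]
df = pd.DataFrame(rows, columns=["drug", "therapy", "hamd"])

# Two-way ANOVA: main effects for drug and therapy, plus their interaction.
model = ols("hamd ~ C(drug) * C(therapy)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))
```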
Main Effects

The significant main effect for drug tells us that, regardless of what therapy the patients received, escitalopram was superior to placebo. This is evident from the last column: the treatment endpoint HAM-D means for escitalopram vs. placebo were 12.6 vs. 19.3, indicating that patients who received escitalopram were less depressed at endpoint than patients who received placebo. Similarly, the significant main effect for therapy tells us that, regardless of what drug the patients received, CBT was superior to waitlist. This is evident from the last row: the treatment endpoint HAM-D means for CBT vs. waitlist were 14.5 vs. 17.4, indicating that patients who received CBT were less depressed at endpoint than patients who were waitlisted for CBT.

Interaction Effect

The significant drug × therapy interaction tells us that the extent of improvement with drug depended on what therapy patients received. In the table, we see that placebo patients fared only marginally better with CBT relative to waitlist (endpoint HAM-D means, 18.8 vs. 19.9), whereas escitalopram patients fared noticeably better with CBT relative to waitlist (endpoint HAM-D means, 10.3 vs. 14.9). The marked advantage for the escitalopram–CBT group is visually depicted in the supplementary materials; in the line diagram, the noticeable difference in the slopes of the placebo and escitalopram lines is due to the interaction (Supplementary Figure 1).

Summary

The two-way ANOVA tells us that, in our study of patients with MDD, escitalopram was superior to placebo (main effect for drug), CBT was superior to waitlist (main effect for therapy), and CBT improved outcomes with escitalopram more than it improved outcomes with placebo (drug × therapy interaction).

Specific Notes

Instead of randomizing and then subrandomizing, as described in the opening paragraph of this article, patients can be directly randomized into the four groups shown in Table 1. In the worked example in this article, the endpoint HAM-D score was the outcome variable. Endpoint scores of other rating instruments could be analyzed in the same way, using two-way ANOVA, provided that the ratings are continuous (measured along a ratio scale) and not categorical. If we actually conducted a study as described in this article, the method of analysis would be more elaborate than that presented here. Whereas the scenario and analysis presented here are technically correct, they are meant to explain concepts in the simplest possible way, and not to recommend a plan of analysis.

General Notes

Here is a technical point for geeks. The main effects are not merely the equivalent of t-tests or one-way ANOVAs for escitalopram vs. placebo (means 12.6 vs. 19.3) and for CBT vs. waitlist (means 14.5 vs. 17.4). Rather, the main effects are escitalopram vs. placebo after excluding the interaction effect and CBT vs. waitlist after excluding the interaction effect. This means that escitalopram would outperform placebo and CBT would outperform waitlist even had there not been an interaction. This can be mathematically understood from the way in which the sum of squares and the degrees of freedom are partitioned when calculating the F values for the main and interaction effects.

Here is the same message for non-geeks. How main effects are independent of the interaction effect can be visually understood from the line diagram in Supplementary Figure 1. As an example of the main effect for drug, the escitalopram line is wholly below the placebo line. As an example of the main effect for therapy, the CBT circles are below the waitlist circles. As a cautionary note, what appears likely from visual inspection needs to be confirmed in the statistical analysis. Other visual examples of different combinations of significant and nonsignificant main and interaction effects are presented in Supplementary Figures 2–6. Readers are urged to view the supplementary materials to obtain a fuller understanding of what is explained in this article.

We can have a 3 × 2 factorial design if drug has three levels (e.g., escitalopram, bupropion, and placebo) and therapy has two levels (CBT and waitlist). We can have a 3 × 3 design if drug has three levels and therapy also has three levels (e.g., CBT, art therapy, and waitlist). However, the statistical test used to analyze the data is still a two-way ANOVA because there are still only two "ways": drug (rows) and therapy (columns). We will still get only three results: a main effect for drug, a main effect for therapy, and a drug × therapy interaction.
If any result is statistically significant and we want to know which drug level is better than which other drug level, or which therapy level is better than which other therapy level, and from where a significant interaction arises, we would need to do post hoc analyses. This is conceptually similar to performing post hoc analyses after a one-way ANOVA yields a significant F value when there are three or more groups being compared.

A two-way ANOVA can also be applied to nonrandomized designs, such as when we want to see whether there is a main effect for sex (men vs. women), a main effect for quantity of alcohol consumed (one drink vs. two drinks), and a sex × quantity of alcohol interaction on performance on various cognitive tasks. Whereas we can randomize subjects into one drink vs. two drinks groups, sex is fixed; we cannot randomize subjects to be men or women.[1]

The concepts described in this article can be applied to analyses of longitudinal data. Consider a study in which MDD patients randomized to escitalopram or placebo are rated on the HAM-D at baseline, at 2 weeks, at 4 weeks, and at 6 weeks. The data are analyzed using two-way repeated measures ANOVA with two levels for drug and four levels for time. We get a main effect for drug, a main effect for time, and a drug × time interaction. An example is provided in Supplementary Figure 7; the significant drug × time interaction shows that patients treated with escitalopram showed greater improvement across time than patients treated with placebo.

Finally, more complex factorial designs are possible. For example, in a three-way ANOVA, we could examine treatment outcomes based on sex (male vs. female), drug (escitalopram vs. placebo), and therapy (CBT vs. waitlist). We would get three main effects: for sex, for drug, and for therapy. We would get three two-way interactions: for sex × drug, drug × therapy, and sex × therapy. And, we would get one three-way interaction: for sex × drug × therapy. Such higher order ANOVAs are seldom performed because interpretation of the different interactions is difficult.

Supplemental Material

Supplemental material for this article is available online (sj-xlsx-1-szj-10.1177_02537176241237066.xlsx; sj-doc-1-szj-10.1177_02537176241237066.doc).

Footnotes

The author declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article. Funding: The author received no financial support for the research, authorship, and/or publication of this article.

Reference

1. Norman GR and Streiner D. Biostatistics: The Bare Essentials. 4th ed. Shelton, CT: People's Medical Publishing House, 2014.
2424
https://www.mooc-list.com/course/digital-systems-logic-gates-processors-coursera
Digital Systems: From Logic Gates to Processors (Coursera)

Start Date: Sep 29th 2025. Provider: Coursera / Universitat Autònoma de Barcelona. Categories: Engineering, Eng: Electronics. Effort: Intermediate, 5-12 Weeks, 1-4 Hours/Week. Certification: Yes, Exam and/or Final Project, Paid Certificate 42.00 EUR. Language: English. Subtitles: English.

This course gives you a complete insight into the modern design of digital systems fundamentals from an eminently practical point of view. Unlike other more "classic" digital circuits courses, our interest focuses more on the system than on the electronics that support it. This approach will allow us to lay the foundation for the design of complex digital systems.

You will learn a set of design methodologies and will use a set of (educational-oriented) computer-aided-design tools (CAD) that will allow you not only to design small and medium size circuits, but also to access higher-level courses covering such exciting topics as application-specific integrated circuit (ASIC) design or computer architecture, to give just two examples. Course topics are complemented with the design of a simple processor, introduced as a transversal example of a complex digital system. This example will let you understand and feel comfortable with some fundamental computer architecture terms, such as the instruction set, microprograms and microinstructions.

After completing this course you will be able to:
- Design medium-complexity digital systems.
- Understand the description of digital systems using high-level languages such as VHDL.
- Understand how computers operate at their most basic level (machine language).

Syllabus

WEEK 1 - All you need to know to start the course
We have collected here everything you need to know before starting the course. This week is divided into three sections: The first is the one you're reading about now and includes a number of general explanations about how the course will run and about the virtual machine you should install on your computer to answer the different quizzes. The second (Previous knowledge: A review) presents a series of tests you can use to check your level of knowledge about numbering systems and the use of pseudocode to describe algorithms. The third block contains the first real topic of the course: What are digital systems?

Previous knowledge: A review
Check your knowledge about binary and hexadecimal numbering systems, and the description of algorithms using a pseudocode.

What are digital systems?
This module is an introduction to digital systems. Here you will find: a set of videos_L covering issue 1 and the corresponding exercises, two videos_P introducing the processor that we will design along the course, and some video-based explanations, a wiki and some FAQs about how the VerilUOC_Desktop tool functions.
You will have to use VerilUOC_Desktop in the next module. Read the "Lesson Index" in the "Index and PDF files" section and the "README" in the VerilUOC_Desktop section for more information.

WEEK 2 - Combinational Circuits (I)
This module introduces combinational circuits, logic gates and Boolean algebra, all of them items necessary to design simple combinational circuits. Read the "Index of lessons" for more information. To solve the exercises in this module you will need to use VerilUOC_Desktop. Look at the module "VerilUOC_Desktop tools" to learn how to use it.

VerilUOC_Desktop tools
From this week on you will need to use VerilUOC_Desktop to do some of the exercises in the quizzes. VerilUOC_Desktop is a software package based on Logisim, enhanced with a number of modules that let you: enter Boolean equations (BoolMin), enter digital circuits and check them against the problem statement (VerilCirc), and enter chronograms (time charts) and check that they are correct (VerilChart). This section contains two videos explaining how these three tools work. For now you only need to use VerilCirc and BoolMin, so if you are pushed for time, you may postpone VerilChart for later. Obviously, it is impossible to cover in these two videos all the eventualities you may encounter while working with the VerilUOC_Desktop tools. In case of doubt, look at the VerilUOC_Desktop wiki, look at the FAQs or post your problems in the forums. There are specific forums for VerilCirc, BoolMin and VerilChart.

WEEK 3 - Combinational circuits (II)
We continue the study of combinational circuits. While in the previous module we were working on the classical design techniques of combinational circuits, this one focuses on other issues, such as a brief introduction to computer-aided design (CAD) tools and the direct synthesis of combinational circuits from their algorithmic description. Read the "Lesson index" for more information. To solve the exercises in this module VerilUOC_Desktop is needed. Remember that the "VerilUOC_Desktop" section in module 2 contains all the information you need about this tool.

WEEK 4 - Arithmetic components + Introduction to VHDL
Arithmetic circuits are an essential part of many digital circuits and thus deserve particular treatment. The first part of this module presents some implementations of the basic arithmetic operations. Only operations with naturals (non-negative integers) are considered. The second part of this module introduces the basics of VHDL, with the goal of providing enough knowledge to understand its usage throughout this course and to start developing basic hardware models.

WEEK 5 - Sequential circuits (I)
This is the first module dedicated to sequential circuits (digital systems with memory). To solve the quizzes you will need VerilUOC_Desktop. Remember that the first week includes a complete description of VerilUOC_Desktop. In particular, VerilChart is presented in the second video.

WEEK 6 - Sequential circuits (II)
This second module dedicated to sequential circuits deals with particular sequential circuits that are building blocks of larger circuits, namely registers, counters and memory blocks.

WEEK 7 - Sequential circuits III and Finite State Machines
This module deals with two topics: In previous lessons, the relation between algorithms (programming-language structures) and combinational circuits has been discussed. This relation also exists between algorithms and sequential circuits. We will explore this relation in the current module.
The second topic we will see is the definition and VHDL modelling of Finite State Machines.

WEEK 8 - Implementation of digital systems
This last module presents some basic information about manufacturing technologies, as well as about implementation strategies and synthesis and implementation tools. Course Summary and farewell.

Skills covered: Digital Systems, Processors, Logic Gates, Design Method, VHDL, Digital Design, Circuit Design, Boolean Algebra, Combinational Circuits, Electrical Engineering

Instructors: David Bañeres, Elena Valderrama, Jean-Pierre Deschamps, Joaquín Saiz Alcaine, Juan Antonio Martínez, Lluis Terés, Merce Rullan (Universitat Autònoma de Barcelona, Spain)
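As a taste of the Week 2 material (logic gates and Boolean algebra), here is a small, self-contained Python sketch. It is our own illustration rather than course material: it wires a 1-bit full adder out of primitive gate functions and exhaustively checks it against binary addition. The course itself builds this kind of circuit graphically in VerilUOC_Desktop and later describes it in VHDL.

```python
# A minimal gate-level sketch of a 1-bit full adder (illustrative only;
# the course uses VerilUOC_Desktop and VHDL, not Python).

def AND(a, b): return a & b
def OR(a, b):  return a | b
def XOR(a, b): return a ^ b

def full_adder(a, b, cin):
    """Sum and carry-out of three input bits, built from primitive gates."""
    s1 = XOR(a, b)
    total = XOR(s1, cin)                  # sum bit
    carry = OR(AND(a, b), AND(s1, cin))   # carry-out bit
    return total, carry

# Exhaustive check: the gate network must match plain binary addition.
for a in (0, 1):
    for b in (0, 1):
        for cin in (0, 1):
            total, carry = full_adder(a, b, cin)
            assert 2 * carry + total == a + b + cin
            print(f"a={a} b={b} cin={cin} -> sum={total} carry={carry}")
```

Chaining such adders bit by bit gives a ripple-carry adder, which is exactly the kind of arithmetic component Week 4 of the syllabus covers.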
2425
https://math.libretexts.org/Bookshelves/Geometry/Elementary_College_Geometry_(Africk)/02%3A_Congruent_Triangles/2.03%3A_The_ASA_and_AAS_Theorems
2.3: The ASA and AAS Theorems - Mathematics LibreTexts
2.3: The ASA and AAS Theorems
Last updated Sep 5, 2021
Henry Africk, CUNY New York City College of Technology, via New York City College of Technology at CUNY Academic Works

Table of contents: Theorem 2.3.1 (ASA or Angle-Side-Angle Theorem); Example 2.3.1; Example 2.3.2; Theorem 2.3.2 (AAS or Angle-Angle-Side Theorem); Example 2.3.3; Example 2.3.4; Example 2.3.5; Historical Note; Problems

In this section we will consider two more cases where it is possible to conclude that triangles are congruent with only partial information about their sides and angles.

Suppose we are told that △ABC has ∠A = 30°, ∠B = 40°, and AB = 2 inches. Let us attempt to sketch △ABC. We first draw a line segment of 2 inches and label it AB. With a protractor we draw an angle of 30° at A and an angle of 40° at B (Figure 2.3.1). We extend the lines forming ∠A and ∠B until they meet at C. We could now measure AC, BC, and ∠C to find the remaining parts of the triangle.

Figure 2.3.1: (left) Sketching △ABC and (right) Sketching △DEF.

Let △DEF be another triangle, with ∠D = 30°, ∠E = 40°, and DE = 2 inches. We could sketch △DEF just as we did △ABC, and then measure DF, EF, and ∠F (Figure 2.3.2). It is clear that we must have AC = DF, BC = EF, and ∠C = ∠F, because both triangles were drawn in exactly the same way. Therefore △ABC ≅ △DEF.

In △ABC we say that AB is the side included between ∠A and ∠B. In △DEF we would say that DE is the side included between ∠D and ∠E.

Our discussion suggests the following theorem:

Theorem 2.3.1: ASA or Angle-Side-Angle Theorem
Two triangles are congruent if two angles and an included side of one are equal respectively to two angles and an included side of the other.

In Figures 2.3.1 and 2.3.2, △ABC ≅ △DEF because ∠A, ∠B, and AB are equal respectively to ∠D, ∠E, and DE. We sometimes abbreviate Theorem 2.3.1 by simply writing ASA = ASA.

Example 2.3.1
In △PQR, name the side included between
(1) ∠P and ∠Q.
(2) ∠P and ∠R.
(3) ∠Q and ∠R.

Solution
Note that the included side is named by the two letters representing each of the angles. Therefore, for (1), the side included between ∠P and ∠Q is named by the letters P and Q -- that is, side PQ. Similarly for (2) and (3).
Answer: (1) PQ, (2) PR, (3) QR.

Example 2.3.2
For the two triangles in the diagram, (1) write the congruence statement, (2) give a reason for (1), (3) find x and y.

Solution
(1) From the diagram, ∠A in △ABC is equal to ∠C in △ADC. Therefore, "A" corresponds to "C". Also, ∠C in △ABC is equal to ∠A in △ADC. So "C" corresponds to "A". We have △ABC ≅ △CDA.
(2) ∠A, ∠C, and included side AC of △ABC are equal respectively to ∠C, ∠A, and included side CA of △CDA. (AC = CA because they are just different names for the identical line segment. We sometimes say AC = CA because of identity.) Therefore △ABC ≅ △CDA because of the ASA Theorem (ASA = ASA).
Summary:
  Angle: ∠BAC = ∠DCA (marked = in diagram)
  Included Side: AC = CA (identity)
  Angle: ∠BCA = ∠DAC (marked = in diagram)
(3) AB = CD and BC = DA because they are corresponding sides of the congruent triangles. Therefore x = AB = CD = 12 and y = BC = DA = 11.
Answer: (1) △ABC ≅ △CDA. (2)
ASA = ASA: ∠A, AC, ∠C of △ABC = ∠C, CA, ∠A of △CDA. (3) x = 12, y = 11.

Let us now consider △ABC and △DEF in Figure 2.3.3 (caption: Two angles and an unincluded side of △ABC are equal respectively to two angles and an unincluded side of △DEF). ∠A and ∠B of △ABC are equal respectively to ∠D and ∠E of △DEF, yet we have no information about the sides included between these angles, AB and DE. Instead we know that the unincluded side BC is equal to the corresponding unincluded side EF. Therefore, as things stand, we cannot use ASA = ASA to conclude that the triangles are congruent. However, we may show that ∠C equals ∠F as in section 1.5 (∠C = 180° − (60° + 50°) = 180° − 110° = 70° and ∠F = 180° − (60° + 50°) = 180° − 110° = 70°). Then we can apply the ASA Theorem to angles B and C and their included side BC, and the corresponding angles E and F with included side EF. These remarks lead us to the following theorem:

Theorem 2.3.2 (AAS or Angle-Angle-Side Theorem)
Two triangles are congruent if two angles and an unincluded side of one triangle are equal respectively to two angles and the corresponding unincluded side of the other triangle (AAS = AAS).

In Figure 2.3.4, if ∠A = ∠D, ∠B = ∠E, and BC = EF, then △ABC ≅ △DEF. (Figure 2.3.4: These two triangles are congruent by AAS = AAS.)

Proof
∠C = 180° − (∠A + ∠B) = 180° − (∠D + ∠E) = ∠F. The triangles are then congruent by ASA = ASA applied to ∠B, ∠C, and BC of △ABC and ∠E, ∠F, and EF of △DEF.

Example 2.3.3
For the two triangles in the diagram, (1) write the congruence statement, (2) give a reason for (1), (3) find x and y.

Solution
(1) △ACD ≅ △BCD.
(2) AAS = AAS, since ∠A, ∠C, and unincluded side CD of △ACD are equal respectively to ∠B, ∠C, and unincluded side CD of △BCD.
  Angle: ∠A = ∠B (marked = in diagram)
  Angle: ∠ACD = ∠BCD (marked = in diagram)
  Unincluded Side: CD = CD (identity)
(3) AC = BC and AD = BD since they are corresponding sides of the congruent triangles. Therefore x = AC = BC = 10 and y = AD = BD. Since AB = AD + BD = y + y = 2y = 12, we must have y = 6.
Answer: (1) △ACD ≅ △BCD. (2) AAS = AAS: ∠A, ∠C, CD of △ACD = ∠B, ∠C, CD of △BCD. (3) x = 10, y = 6.

Example 2.3.4
For the two triangles in the diagram, (1) write the congruence statement, (2) give a reason for (1), (3) find x and y.

Solution
Parts (1) and (2) are identical to Example 2.3.2.
(3): AB = CD gives 3x − y = 2x + 1, so 3x − 2x − y = 1, that is, x − y = 1. And BC = DA gives 3x = 2y + 4, that is, 3x − 2y = 4. We solve these equations simultaneously for x and y: from the first equation, x = y + 1; substituting into the second, 3(y + 1) − 2y = 4, so 3y + 3 − 2y = 4 and y = 1, hence x = 2.
Check: AB = 3x − y = 3(2) − 1 = 5 and CD = 2x + 1 = 2(2) + 1 = 5; BC = 3x = 6 and DA = 2y + 4 = 6.
Answer: (1) and (2) same as Example 2.3.2. (3) x = 2, y = 1.

Example 2.3.5
From the top of a tower T on the shore, a ship S is sighted at sea. A point P along the coast is also sighted from T so that ∠PTB = ∠STB. If the distance from P to the base of the tower B is 3 miles, how far is the ship from point B on the shore?

Solution
△PTB ≅ △STB by ASA = ASA. Therefore x = SB = PB = 3.
Answer: 3 miles.

Historical Note
The method of finding the distance of ships at sea described in Example 2.3.5 has been attributed to the Greek philosopher Thales (c. 600 B.C.). We know from various authors that the ASA Theorem has been used to measure distances since ancient times. There is a story that one of Napoleon's officers used the ASA Theorem to measure the width of a river his army had to cross (see Problem 25 below).

Problems

1 - 4.
For each of the following, (1) draw the triangle with the two angles and the included side and (2) measure the remaining sides and angles:
1. △ABC with ∠A = 40°, ∠B = 50°, and AB = 3 inches.
2. △DEF with ∠D = 40°, ∠E = 50°, and DE = 3 inches.
3. △ABC with ∠A = 50°, ∠B = 40°, and AB = 3 inches.
4. △DEF with ∠D = 50°, ∠E = 40°, and DE = 3 inches.

5 - 8. Name the side included between the angles:
5. ∠A and ∠B in △ABC.
6. ∠X and ∠Y in △XYZ.
7. ∠D and ∠F in △DEF.
8. ∠S and ∠T in △RST.

9 - 22. For each of the following, (1) write a congruence statement for the two triangles, (2) give a reason for (1) (SAS, ASA, or AAS Theorems), (3) find x, or x and y.

23 - 26. For each of the following, include the congruence statement and the reason as part of your answer:
23. In the diagram, how far is the ship S from the point P on the coast?
24. Ship S is observed from points A and B along the coast. Triangle ABC is then constructed and measured as in the diagram. How far is the ship from point A?
25. Find the distance AB across a river if AC = CD = 5 and DE = 7, as in the diagram.
26. What is the distance across the pond?

This page titled 2.3: The ASA and AAS Theorems is shared under a CC BY-NC-SA 4.0 license and was authored, remixed, and/or curated by Henry Africk (New York City College of Technology at CUNY Academic Works) via source content that was edited to the style and standards of the LibreTexts platform.
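The sketching exercise at the start of this section (two angles and the included side determine the triangle) can also be carried out numerically: once ∠A, ∠B, and AB are fixed, ∠C follows from the angle sum, and the remaining sides follow from the Law of Sines. The short Python sketch below is our own illustration, not part of the LibreTexts page; it computes the remaining parts of △ABC from ∠A = 30°, ∠B = 40°, AB = 2 inches.

```python
import math

def solve_asa(angle_A_deg, angle_B_deg, side_AB):
    """Given two angles and the included side AB, return the remaining
    angle and sides via the angle sum and the Law of Sines."""
    angle_C_deg = 180.0 - angle_A_deg - angle_B_deg
    A, B, C = (math.radians(x) for x in (angle_A_deg, angle_B_deg, angle_C_deg))
    # Law of Sines: AB / sin C = BC / sin A = AC / sin B
    common = side_AB / math.sin(C)
    side_BC = common * math.sin(A)   # side opposite angle A
    side_AC = common * math.sin(B)   # side opposite angle B
    return angle_C_deg, side_AC, side_BC

angle_C, AC, BC = solve_asa(30, 40, 2)
print(f"angle C = {angle_C:.0f} degrees, AC = {AC:.3f} in, BC = {BC:.3f} in")
# Any triangle built from the same two angles and included side yields the
# same numbers, which is the content of the ASA Theorem.
```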
2426
https://study.com/skill/learn/understanding-determining-similarity-ratios-explanation.html
Understanding & Determining Similarity Ratios | Geometry | Study.com

Understanding & Determining Similarity Ratios
Florida Math Standards (MAFS) - Geometry Skills Practice
Instructors: Kathryn Boddie (has taught high school and university mathematics for over 10 years; she has a Ph.D. in Applied Mathematics from the University of Wisconsin-Milwaukee, an M.S. in Mathematics from Florida State University, and a B.S. in Mathematics from the University of Wisconsin-Madison) and Amy Mayers (has taught middle school math and algebra for over seven years; she has a Bachelor's degree in Mathematical Sciences from the University of Houston and a Master's degree in Curriculum and Instruction from The University of St. Thomas, and is a Texas certified teacher for grades 4-12 in Mathematics).

Understanding and Determining Similarity Ratios

Step 1: Determine which sides are the corresponding sides of the shapes.

Step 2: Calculate the similarity ratio by writing and simplifying the ratios of all corresponding sides. Using the bigger shape's sides in the numerators gives you the similarity ratio of the bigger shape to the smaller shape, and using the smaller shape's sides in the numerators gives you the similarity ratio of the smaller shape to the bigger shape.

Understanding and Determining Similarity Ratios - Vocabulary

Similar Shapes: Two geometric shapes are called similar if they are the same shape (with congruent angles) and have proportional sides. That is, the ratio of corresponding sides of similar shapes is a constant.

Similarity Ratio: The similarity ratio of similar shapes is the simplified ratio of corresponding sides.

Corresponding Sides: Corresponding sides of similar shapes are the sides that are in the same position relative to the other sides and angles in both shapes.

We will use these steps and definitions to determine similarity ratios in the following two examples.

Example Problem 1: Understanding and Determining Similarity Ratios

Given that the shapes in the image are similar, find the similarity ratio. (Image for Example 1)

Step 1: Determine which sides are the corresponding sides of the shapes. The corresponding sides are the sides that are in the same position relative to the other sides and angles in both shapes. Our pairs of corresponding sides are: AB and EF, BC and FG, CD and GH, AD and EH.

Step 2: Calculate the similarity ratio by writing and simplifying the ratios of all corresponding sides. We will start with the similarity ratio of ABCD to EFGH (bigger to smaller shape).
Calculating the ratios of corresponding sides:
|AB| / |EF| = 5/3
|BC| / |FG| = 3/1.8, which can be simplified by multiplying the numerator and denominator by 10 and then simplifying the resulting fraction: 30/18 = 5/3
|CD| / |GH| = 7/4.2 = 70/42 = 5/3
|AD| / |EH| = 9/5.4 = 90/54 = 5/3
The similarity ratio of ABCD to EFGH is 5/3, or 5:3.

Calculating the ratios of corresponding sides for the similarity ratio of EFGH to ABCD (smaller to bigger shape):
|EF| / |AB| = 3/5
|FG| / |BC| = 1.8/3 = 18/30 = 3/5
|GH| / |CD| = 4.2/7 = 42/70 = 3/5
|EH| / |AD| = 5.4/9 = 54/90 = 3/5
The similarity ratio of EFGH to ABCD is 3/5, or 3:5.

Example Problem 2: Understanding and Determining Similarity Ratios

Given that the shapes in the image are similar, find the similarity ratio. (Image for Example 2)

Step 1: Determine which sides are the corresponding sides of the shapes. Our pairs of corresponding sides are: AB and EF, BC and FG, CD and GH, AD and EH.

Step 2: Calculate the similarity ratio by writing and simplifying the ratios of all corresponding sides.

Similarity ratio of ABCD to EFGH (bigger to smaller shape):
|AB| / |EF| = 2
|BC| / |FG| = 2
|CD| / |GH| = 2
|AD| / |EH| = 2
The similarity ratio of ABCD to EFGH is 2, or 2:1.

Similarity ratio of EFGH to ABCD (smaller to bigger shape):
|EF| / |AB| = 1/2
|FG| / |BC| = 1/2
|GH| / |CD| = 1/2
|EH| / |AD| = 1/2
The similarity ratio of EFGH to ABCD is 1/2, or 1:2.
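The two-step recipe above is mechanical enough to automate. Below is a small Python sketch (our own illustration, using the side lengths from Example 1) that computes the ratio of each pair of corresponding sides and confirms they all simplify to the same constant, which is exactly the similarity check described in the vocabulary section.

```python
from fractions import Fraction

# Side lengths from Example 1 (bigger shape ABCD, smaller shape EFGH),
# paired as corresponding sides.
corresponding = {
    "AB/EF": (5, 3),
    "BC/FG": (3, 1.8),
    "CD/GH": (7, 4.2),
    "AD/EH": (9, 5.4),
}

# Fraction(...).limit_denominator() turns each decimal ratio into an
# exact simplified fraction, e.g. 3/1.8 -> 5/3.
ratios = {
    name: Fraction(big / small).limit_denominator()
    for name, (big, small) in corresponding.items()
}

for name, r in ratios.items():
    print(f"{name} = {r}")          # every pair prints 5/3

# The shapes are similar precisely when all corresponding ratios agree.
assert len(set(ratios.values())) == 1
print("Similarity ratio of ABCD to EFGH:", ratios["AB/EF"])  # 5/3, i.e. 5:3
```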
2427
https://www.youtube.com/watch?v=TEBkCiCbS6w
Remainder Theorem
Neso Academy (3,020,000 subscribers)
10,607 views, 117 likes. Posted: 1 Jun 2017

Description
Reasoning and Aptitude: Remainder Theorem. Topics discussed:
1. What the remainder theorem is
2. Uses of the remainder theorem
3. Different cases in which the remainder theorem is used
4. Remainder theorem proof
5. Remainder theorem example
6. Homework problems on the remainder theorem
Music: Axol x Alex Skrindo - You [NCS Release]

Transcript: In the last few lectures we completed cyclicity of the unit digit, and now we will start a new topic, which is the remainder theorem. The remainder theorem is an important theorem, and it is used to find out the remainder of a division which we cannot perform easily. Sometimes we have to divide a number by another number and the division is not so easy to perform; in such cases we can use the remainder theorem to find out the remainder. I will write down a few typical cases in which we use the remainder theorem. The first case is when two or more numbers are multiplied together and then they are divided by another number; this type of division is not so simple, so we use the remainder theorem to find out the remainder. The second case is when two or more numbers are added together and then divided by another number; again we use the remainder theorem to find the remainder. And the third case we have to discuss is when a number a has a large power, let's say n, and is then divided by another number m. So n is a large power, and as a is an integer, you will have a large number in the numerator; dividing it by m and finding out the remainder is not an easy task, and to make it easy we will use the remainder theorem.

I will explain the remainder theorem with the help of one question. You can see the question on your screen: in this question we need to find the remainder when 17 × 21 is divided by 12. If you look at this you will find it is of the first kind: we have two numbers multiplied together and then divided by another number. I can write 17 as 12 + 5 and I can write 21 as 12 + 9, and they are divided by 12. You can see I have broken 17 into 12 + 5 because 12 is the number by which we have to divide the numerator, and again I have broken 21 into 12 + 9 because 12 is the number by which we have to divide. So this is the first thing we can do, and now I will open these two brackets: it gives me 12 × 12 + 12 × 9 + 5 × 12 + 5 × 9, divided by 12. While solving the problems you don't have to perform this step; this is only for explanation purposes, and we are trying to obtain a final result which we will use directly while solving the problems. I can also write this as 12 × 12 / 12 + 12 × 9 / 12 + 5 × 12 / 12 + 5 × 9 / 12. If you look at the first term you will find the remainder is equal to zero, because 12 × 12 when divided by 12 is a perfect division and there is no remainder left, so in this case the remainder is equal to 0. In the second case also the remainder is equal to 0, because 12 and 12 will cancel out. Here also the remainder is equal to zero. But in the last case, 5 × 9, we cannot perfectly divide by 12, so the remainder is not equal to zero. So we can say that the remainder when 17 × 21 is divided by 12 will only depend on the last term, and the remainder will be the same as the remainder of this last term. I hope you can understand this: the remainder of 17 × 21 when divided by 12 is the same as the remainder when 5 × 9 is divided by 12, so
we simply have to find out the remainder of this last term and it will be our answer. Now, how did we obtain this last term? This last term is nothing but the remainder when 17 is divided by 12 and when 21 is divided by 12, individually. Let's try to understand how we obtained this last term. When you divide 17 by 12 you get the remainder 5, and when you divide 21 by 12 you get the remainder 9, and as 17 and 21 are multiplied with each other, we also multiply the remainders, giving 5 × 9. And since 17 × 21 is divided by 12, we divide the multiplied remainders by 12: this is equal to 45 / 12, and you can easily perform the division now. You get the remainder equal to 9, so 9 is the answer: the remainder is equal to 9. I hope this process is clear to you. This was only for explanation purposes; the only thing you have to remember is to simply obtain the individual remainders, multiply them, and then perform the division to get the remainder. This will be clearer when we solve the next example; the next example will clear all your questions.

In this example we need to find out the remainder when 15 × 17 × 19 is divided by 7. So what we will do: we will first find out the remainder when 15 is divided by 7; 15 / 7 gives us the remainder equal to 1. Then we will divide 17 by 7 to get the remainder, and it will be equal to 3. After this we will divide the last number, which is 19, by 7 to get the remainder, and it is equal to 5. We have 1, 3, and 5; we will multiply them together and then divide them by 7. This is equal to 15 / 7, and the remainder in this case is equal to 1. The remainder is equal to 1 because 7 × 2 is 14 and 15 − 14 is equal to 1, so 1 is the answer: this will also have the remainder equal to 1. I hope this process is clear to you.

Now we will move to the homework problems. I have three homework problems for you, and you have to tell me the answer in all three cases. The first homework problem is based on the first case, when two or more numbers are multiplied together and divided by another number: the two numbers which are multiplied together are 14 and 15, and they are divided by 8; you need to tell me the remainder. In the second problem, find the remainder when three numbers, 75, 78, and 57, are multiplied together and divided by the number 34. And in the third problem the same three numbers, 75, 78, and 57, are now added together instead of multiplied, like in problem two, and they are again divided by the same number, which is 34. Tell me the remainder. So this is all for this lecture; try to solve these three problems, and if you have any doubt regarding the remainder theorem you can definitely ask in the comment section. In the coming presentations we will discuss a few more topics related to the remainder theorem. [Music] [Applause] [Music]
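The identity the lecture relies on is that remainders can be taken factor by factor: (a · b) mod m = ((a mod m) · (b mod m)) mod m, and likewise term by term for sums. A quick Python check of the worked examples (our own illustration, not part of the video):

```python
# Remainder theorem for products: reduce each factor mod m first,
# multiply the small remainders, then reduce once more.
def product_remainder(factors, m):
    result = 1
    for f in factors:
        result = (result * (f % m)) % m
    return result

# Worked example 1 from the lecture: 17 x 21 divided by 12.
assert product_remainder([17, 21], 12) == (17 * 21) % 12 == 9

# Worked example 2: 15 x 17 x 19 divided by 7.
assert product_remainder([15, 17, 19], 7) == (15 * 17 * 19) % 7 == 1

# The same idea works for sums: (a + b) % m == ((a % m) + (b % m)) % m.
a, b, m = 75, 78, 34
assert (a + b) % m == ((a % m) + (b % m)) % m

print("all identities check out")
```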
2428
https://www.learntheta.com/maths-circumference-and-area-of-a-circle/
Circumference and Area of a Circle

The circumference and area are fundamental concepts related to circles in geometry. The circumference is the distance around the outside of a circle, while the area is the amount of space enclosed within the circle. Understanding these concepts is crucial for solving various geometric problems and real-world applications.

Formulae

The following formulae are essential for calculating the circumference and area of a circle:

Circumference (C): C = 2πr, where r is the radius of the circle and π (pi) is approximately 3.14159.

Area (A): A = πr², where r is the radius of the circle and π (pi) is approximately 3.14159.

Examples

Let's look at a couple of examples:

Example 1: A circle has a radius of 5 cm. Calculate its circumference and area.
Circumference: C = 2πr = 2 × π × 5 ≈ 31.42 cm
Area: A = πr² = π × 5² = 25π ≈ 78.54 cm²

Example 2: A circle has a diameter of 10 inches. Calculate its circumference and area.
Radius: Remember that the radius is half the diameter, so r = 10/2 = 5 inches.
Circumference: C = 2πr = 2 × π × 5 ≈ 31.42 inches
Area: A = πr² = π × 5² = 25π ≈ 78.54 inches²

Common mistakes by students

Confusing Radius and Diameter: Using the diameter instead of the radius in the formulas, or vice versa. Always ensure you're using the correct value for r.

Incorrect Units: Forgetting to include the correct units (e.g., cm, m²) in the final answer.

Approximating π: Using a rounded value of π (like 3.14) prematurely, leading to minor inaccuracies. Using the π button on your calculator will provide the most accurate results.

Confusing Circumference and Area: Using the formula for circumference when calculating area, and vice versa.

Real Life Application

Understanding the circumference and area of circles has many real-world applications:
- Construction: Calculating the amount of fencing needed for a circular garden (circumference).
- Engineering: Determining the amount of material needed to build a circular storage tank (area).
- Design: Planning the layout of a circular room or the size of a pizza.
- Sports: Calculating the distance a runner covers on a circular track (circumference).

Fun Fact

The number π (pi) is an irrational number, meaning it cannot be expressed as a simple fraction. Its decimal representation goes on forever without repeating. Mathematicians have calculated π to trillions of digits!

Practice Questions

Q.1 A circular garden has a radius of 7 meters.
What is the area of the garden?
A) 44 m²  B) 154 m²  C) 88 m²  D) 616 m²
Solution: Ans: B. Area of circle = πr² = π(7)² = 49π ≈ 154 m².

Q.2 The circumference of a circle is 20π cm. What is the radius of the circle?
A) 5 cm  B) 10 cm  C) 20 cm  D) 40 cm
Solution: Ans: B. Circumference = 2πr, so 20π = 2πr. Dividing both sides by 2π, we get r = 10 cm.

Q.3 A circle has an area of 36π square inches. What is the circumference of the circle?
A) 6π inches  B) 12π inches  C) 18π inches  D) 72π inches
Solution: Ans: B. Area = πr² = 36π, so r² = 36 and r = 6 inches. Circumference = 2πr = 2π(6) = 12π inches.

Q.4 If the radius of a circle is doubled, how does the area of the circle change?
A) It doubles.  B) It triples.  C) It quadruples.  D) It remains the same.
Solution: Ans: C. Let the original radius be r. Original area = πr². New radius = 2r. New area = π(2r)² = 4πr². The area is multiplied by 4.

Q.5 What is the area of a circle with a diameter of 10 cm?
A) 10π cm²  B) 25π cm²  C) 50π cm²  D) 100π cm²
Solution: Ans: B. Diameter = 10 cm, so radius = 5 cm. Area = πr² = π(5)² = 25π cm².

Next Topic: Length of an Arc of a Sector
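The formulae and practice questions above are easy to sanity-check in a few lines of code. Here is a minimal Python sketch (our own illustration) that reproduces Example 1 and the inverse calculation from Q.2, using math.pi rather than a rounded 3.14, which is the same advice given under "Common mistakes":

```python
import math

def circumference(r):
    return 2 * math.pi * r

def area(r):
    return math.pi * r ** 2

# Example 1: radius 5 cm.
print(f"C = {circumference(5):.2f} cm")   # about 31.42 cm
print(f"A = {area(5):.2f} cm^2")          # about 78.54 cm^2

# Q.2 in reverse: recover the radius from a circumference of 20*pi cm.
C = 20 * math.pi
r = C / (2 * math.pi)
print(f"r = {r:.0f} cm")                  # 10 cm

# Q.4: doubling the radius multiplies the area by 4.
assert math.isclose(area(2 * 5), 4 * area(5))
```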
2429
https://math.stackexchange.com/questions/144342/signs-in-binomial-expansions
Signs in binomial expansions

Asked May 12, 2012; modified Jul 22, 2012; viewed 259 times.

Edit the title as seems fit.
$$\begin{align} a^3+b^3 &= (a+b)(a^2-ab+b^2) \\ &= (a+b)^3 - 3ab(a+b) \end{align}$$
And so on and so forth. Right now, I only need these expansions in solving quadratic equations. But why do signs vary in the expansions? (asterisk). What controls this? I see that something similar comes in $a^2-b^2 = (a+b)(a-b)$ to allow the intermediate term(s) to cancel, but how does this translate to other (higher-order) forms?
Level: US Grade-10 equivalent.
Tags: algebra-precalculus, intuition
-- asked by Noein; edited by J. M. ain't a mathematician

Comment (000): The way you're using words to ask this question isn't translating well. I think you may have a valid question. But, the current choice of words makes it very hard to understand. -- May 12, 2012 at 21:22

1 Answer

One way of looking at this is to see that you need the intermediate terms to cancel, so taking out a factor of $(a+b)$ you will need alternating signs for the cancellation to work.
-- answered May 12, 2012 at 20:58 by Mark Bennet

Comment (Noein): That doesn't tell why it was decided (apparently, that's how I was given it) that such terms, at face value, need the expansion with opposite signs. I added an () in the \$\$ so see there for what I meant. -- May 12, 2012 at 21:03

Comment (Mark Bennet): I'm not at all sure what you mean by "it was decided" - do the multiplication. One way works and the signs cancel. The other way doesn't and they don't. Write it out as a multiplication - multiply by $x$ first, and put this in the top line on your paper, and then by $y$ on the next line - making sure you put the terms with the same power of $x$ under/over each other, and you will see the logic. -- May 12, 2012 at 21:10

Comment (Mark Bennet): Or do the above with $a$ and $b$ instead of $x$ and $y$. -- May 12, 2012 at 21:18

Comment (Mark Bennet): Suppose you have a homogeneous expression in $x$ and $y$ with consecutive terms $Ax^{r+1} y^s + Ax^r y^{s+1}$ and you multiply by $x-y$. Just look at the term in $x^{r+1} y^{s+1}$ - you get a positive contribution by multiplying the second monomial in the first expression by $+x$ and an "equal and opposite" negative contribution by multiplying the first monomial by $-y$ - it works because the coefficients are the same and the signs in the expression $x-y$ are different. I'm doing my best to explain. -- May 17, 2012 at 20:39

Comment (Noein): Thank you; the gears clicked into place!
Once the exams are over, I'll put on an answer that's -hopefully- as intuitive as I get it now. -- May 18, 2012 at 11:54
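To make Mark Bennet's cancellation argument concrete, here is the multiplication for $a^3+b^3$ written out term by term (our own worked expansion, added for illustration); the alternating signs in $a^2-ab+b^2$ are exactly what make the middle terms cancel in pairs:

$$\begin{aligned} (a+b)(a^2-ab+b^2) &= a\cdot a^2 - a\cdot ab + a\cdot b^2 + b\cdot a^2 - b\cdot ab + b\cdot b^2 \\ &= a^3 - a^2b + ab^2 + a^2b - ab^2 + b^3 \\ &= a^3 + b^3 \end{aligned}$$

Each middle term produced by multiplying through by $a$ is killed by the term of opposite sign produced by multiplying through by $b$. With $(a-b)$ as the factor, the same telescoping needs all plus signs in the second factor, which is why $a^3-b^3=(a-b)(a^2+ab+b^2)$.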
2430
https://pre-med.jumedicine.com/wp-content/uploads/sites/7/2019/06/BRS-Physiology-Costanzo-Linda-S.-SRG.pdf
Physiology, Sixth Edition
Linda S. Costanzo, Ph.D.
Professor of Physiology and Biophysics
Medical College of Virginia, Virginia Commonwealth University, Richmond, Virginia

Publisher: Michael Tully
Acquisitions Editor: Crystal Taylor
Product Development Editors: Stacey Sebring and Amy Weintraub
Production Project Manager: David Saltzberg
Marketing Manager: Joy Fisher-Williams
Designer: Holly Reid McLaughlin
Manufacturing Coordinator: Margie Orzech
Compositor: SPi Global

Sixth Edition. Copyright © 2015, 2011, 2007, 2003, 1998, 1995 Wolters Kluwer Health. 351 West Camden Street, Baltimore, MD 21201; Two Commerce Square, 2001 Market Street, Philadelphia, PA 19103. Printed in China.

All rights reserved. This book is protected by copyright. No part of this book may be reproduced or transmitted in any form or by any means, including as photocopies or scanned-in or other electronic copies, or utilized by any information storage and retrieval system without written permission from the copyright owner, except for brief quotations embodied in critical articles and reviews. Materials appearing in this book prepared by individuals as part of their official duties as US government employees are not covered by the above-mentioned copyright. To request permission, please contact Lippincott Williams & Wilkins at 2001 Market Street, Philadelphia, PA 19103, via email at permissions@lww.com, or via website at lww.com (products and services).

Library of Congress Cataloging-in-Publication Data
Costanzo, Linda S., 1947- author.
Physiology / Linda S. Costanzo. — Sixth edition. (Board review series) Includes index. ISBN 978-1-4511-8795-3. [DNLM: 1. Physiological Phenomena—Examination Questions. 2. Physiology—Examination Questions. QT 18.2] QP40 612'.0076—dc23 2013045098

DISCLAIMER
Care has been taken to confirm the accuracy of the information presented and to describe generally accepted practices. However, the authors, editors, and publisher are not responsible for errors or omissions or for any consequences from application of the information in this book and make no warranty, expressed or implied, with respect to the currency, completeness, or accuracy of the contents of the publication. Application of this information in a particular situation remains the professional responsibility of the practitioner; the clinical treatments described and recommended may not be considered absolute and universal recommendations.

The authors, editors, and publisher have exerted every effort to ensure that drug selection and dosage set forth in this text are in accordance with the current recommendations and practice at the time of publication. However, in view of ongoing research, changes in government regulations, and the constant flow of information relating to drug therapy and drug reactions, the reader is urged to check the package insert for each drug for any change in indications and dosage and for added warnings and precautions. This is particularly important when the recommended agent is a new or infrequently employed drug. Some drugs and medical devices presented in this publication have Food and Drug Administration (FDA) clearance for limited use in restricted research settings. It is the responsibility of the health care provider to ascertain the FDA status of each drug or device planned for use in their clinical practice.
To purchase additional copies of this book, call our customer service department at (800) 638-3030 or fax orders to (301) 223-2320. International customers should call (301) 223-2300. Visit Lippincott Williams & Wilkins on the Internet. Lippincott Williams & Wilkins customer service representatives are available from 8:30 am to 6:00 pm, EST.

For Richard
And for Dan, Rebecca, and Sheila
And for Elise and Max

Preface

The subject matter of physiology is the foundation of the practice of medicine, and a firm grasp of its principles is essential for the physician. This book is intended to aid the student preparing for the United States Medical Licensing Examination (USMLE) Step 1. It is a concise review of key physiologic principles and is intended to help the student recall material taught during the first and second years of medical school. It is not intended to substitute for comprehensive textbooks or for course syllabi, although the student may find it a useful adjunct to physiology and pathophysiology courses.

The material is organized by organ system into seven chapters. The first chapter reviews general principles of cellular physiology. The remaining six chapters review the major organ systems—neurophysiology, cardiovascular, respiratory, renal and acid–base, gastrointestinal, and endocrine physiology. Difficult concepts are explained stepwise, concisely, and clearly, with appropriate illustrative examples and sample problems. Numerous clinical correlations are included so that the student can understand physiology in relation to medicine. An integrative approach is used, when possible, to demonstrate how the organ systems work together to maintain homeostasis. More than 130 full-color illustrations and flow diagrams and more than 50 tables help the student visualize the material quickly and aid in long-term retention. The inside front cover contains "Key Physiology Topics for USMLE Step 1." The inside back cover contains "Key Physiology Equations for USMLE Step 1."

Questions reflecting the content and format of USMLE Step 1 are included at the end of each chapter and in a Comprehensive Examination at the end of the book. These questions, many with clinical relevance, require problem-solving skills rather than straight recall. Clear, concise explanations accompany the questions and guide the student through the correct steps of reasoning. The questions can be used as a pretest to identify areas of weakness or as a posttest to determine mastery. Special attention should be given to the Comprehensive Examination, because its questions integrate several areas of physiology and related concepts of pathophysiology and pharmacology.

New to this edition:
■ Addition of new full-color figures
■ Updated organization and text
■ Expanded coverage of cellular, respiratory, renal, gastrointestinal, and endocrine physiology
■ Increased emphasis on pathophysiology

Best of luck in your preparation for USMLE Step 1!
Linda S. Costanzo, Ph.D.

Acknowledgments

It has been a pleasure to be a part of the Board Review Series and to work with the staff at Lippincott Williams & Wilkins. Crystal Taylor and Stacey Sebring provided expert editorial assistance. My sincere thanks to students in the School of Medicine at Virginia Commonwealth University/Medical College of Virginia, who have provided so many helpful suggestions for BRS Physiology. Thanks also to the many students from other medical schools who have taken the time to write to me about their experiences with this book.
Linda S. Costanzo, Ph.D.
Contents

Preface
Acknowledgments

1. CELL PHYSIOLOGY
   I. Cell Membranes; II. Transport Across Cell Membranes; III. Osmosis; IV. Diffusion Potential, Resting Membrane Potential, and Action Potential; V. Neuromuscular and Synaptic Transmission; VI. Skeletal Muscle; VII. Smooth Muscle; VIII. Comparison of Skeletal Muscle, Smooth Muscle, and Cardiac Muscle; Review Test

2. NEUROPHYSIOLOGY
   I. Autonomic Nervous System (ANS); II. Sensory Systems; III. Motor Systems; IV. Higher Functions of the Cerebral Cortex; V. Blood–Brain Barrier and Cerebrospinal Fluid (CSF); VI. Temperature Regulation; Review Test

3. CARDIOVASCULAR PHYSIOLOGY
   I. Circuitry of the Cardiovascular System; II. Hemodynamics; III. Cardiac Electrophysiology; IV. Cardiac Muscle and Cardiac Output; V. Cardiac Cycle; VI. Regulation of Arterial Pressure; VII. Microcirculation and Lymph; VIII. Special Circulations; IX. Integrative Functions of the Cardiovascular System: Gravity, Exercise, and Hemorrhage; Review Test

4. RESPIRATORY PHYSIOLOGY
   I. Lung Volumes and Capacities; II. Mechanics of Breathing; III. Gas Exchange; IV. Oxygen Transport; V. CO2 Transport; VI. Pulmonary Circulation; VII. V/Q Defects; VIII. Control of Breathing; IX. Integrated Responses of the Respiratory System; Review Test

5. RENAL AND ACID–BASE PHYSIOLOGY
   I. Body Fluids; II. Renal Clearance, Renal Blood Flow (RBF), and Glomerular Filtration Rate (GFR); III. Reabsorption and Secretion; IV. NaCl Regulation; V. K+ Regulation; VI. Renal Regulation of Urea, Phosphate, Calcium, and Magnesium; VII. Concentration and Dilution of Urine; VIII. Renal Hormones; IX. Acid–Base Balance; X. Diuretics; XI. Integrative Examples; Review Test

6. GASTROINTESTINAL PHYSIOLOGY
   I. Structure and Innervation of the Gastrointestinal Tract; II. Regulatory Substances in the Gastrointestinal Tract; III. Gastrointestinal Motility; IV. Gastrointestinal Secretion; V. Digestion and Absorption; VI. Liver Physiology; Review Test

7. ENDOCRINE PHYSIOLOGY
   I. Overview of Hormones; II. Cell Mechanisms and Second Messengers; III. Pituitary Gland (Hypophysis); IV. Thyroid Gland; V. Adrenal Cortex and Adrenal Medulla; VI. Endocrine Pancreas–Glucagon and Insulin; VII. Calcium Metabolism (Parathyroid Hormone, Vitamin D, Calcitonin); VIII. Sexual Differentiation; IX. Male Reproduction; X. Female Reproduction; Review Test

Comprehensive Examination
Index

Chapter 1: Cell Physiology

I. Cell Membranes
■ are composed primarily of phospholipids and proteins.

A. Lipid bilayer
1. Phospholipids have a glycerol backbone, which is the hydrophilic (water-soluble) head, and two fatty acid tails, which are hydrophobic (water-insoluble). The hydrophobic tails face each other and form a bilayer.
2. Lipid-soluble substances (e.g., O2, CO2, steroid hormones) cross cell membranes because they can dissolve in the hydrophobic lipid bilayer.
3. Water-soluble substances (e.g., Na+, Cl−, glucose, H2O) cannot dissolve in the lipid of the membrane, but may cross through water-filled channels, or pores, or may be transported by carriers.

B. Proteins
1. Integral proteins
■ are anchored to, and imbedded in, the cell membrane through hydrophobic interactions.
■ may span the cell membrane.
■ include ion channels, transport proteins, receptors, and guanosine 5′-triphosphate (GTP)–binding proteins (G proteins).
2.  Peripheral proteins
■ are not imbedded in the cell membrane.
■ are not covalently bound to membrane components.
■ are loosely attached to the cell membrane by electrostatic interactions.

C.  Intercellular connections
1.  Tight junctions (zonula occludens)
■ are the attachments between cells (often epithelial cells).
■ may be an intercellular pathway for solutes, depending on the size, charge, and characteristics of the tight junction.
■ may be "tight" (impermeable), as in the renal distal tubule, or "leaky" (permeable), as in the renal proximal tubule and gallbladder.
2.  Gap junctions
■ are the attachments between cells that permit intercellular communication.
■ for example, permit current flow and electrical coupling between myocardial cells.

II.  Transport Across Cell Membranes (Table 1.1)

A.  Simple diffusion
1.  Characteristics of simple diffusion
■ is the only form of transport that is not carrier mediated.
■ occurs down an electrochemical gradient ("downhill").
■ does not require metabolic energy and therefore is passive.
2.  Diffusion can be measured using the following equation:

J = −PA (C1 − C2)

where:
J = flux (flow) (mmol/sec)
P = permeability (cm/sec)
A = area (cm2)
C1 = concentration 1 (mmol/L)
C2 = concentration 2 (mmol/L)

3.  Sample calculation for diffusion
■ The urea concentration of blood is 10 mg/100 mL. The urea concentration of proximal tubular fluid is 20 mg/100 mL. If the permeability to urea is 1 × 10−5 cm/sec and the surface area is 100 cm2, what are the magnitude and direction of the urea flux?

Flux = (1 × 10−5 cm/sec)(100 cm2)(20 mg/100 mL − 10 mg/100 mL)
     = (1 × 10−3 cm3/sec)(0.1 mg/cm3)
     = 1 × 10−4 mg/sec from lumen to blood (from high to low concentration)

Note: The minus sign preceding the diffusion equation indicates that the direction of flux, or flow, is from high to low concentration. It can be ignored if the higher concentration is called C1 and the lower concentration is called C2. Also note: 1 mL = 1 cm3.

table 1.1  Characteristics of Different Types of Transport

Type | Electrochemical Gradient | Carrier-Mediated | Metabolic Energy | Na+ Gradient | Inhibition of Na+–K+ Pump
Simple diffusion | Downhill | No | No | No | —
Facilitated diffusion | Downhill | Yes | No | No | —
Primary active transport | Uphill | Yes | Yes | — | Inhibits (if Na+–K+ pump)
Cotransport | Uphill | Yes | Indirect | Yes, same direction | Inhibits
Countertransport | Uphill | Yes | Indirect | Yes, opposite direction | Inhibits

One or more solutes are transported uphill; Na+ is transported downhill.

4.  Permeability
■ is the P in the equation for diffusion.
■ describes the ease with which a solute diffuses through a membrane.
■ depends on the characteristics of the solute and the membrane.
a.  Factors that increase permeability:
■ ↑ Oil/water partition coefficient of the solute increases solubility in the lipid of the membrane.
■ ↓ Radius (size) of the solute increases the diffusion coefficient and speed of diffusion.
■ ↓ Membrane thickness decreases the diffusion distance.
b.  Small hydrophobic solutes (e.g., O2, CO2) have the highest permeabilities in lipid membranes.
c.  Hydrophilic solutes (e.g., Na+, K+) must cross cell membranes through water-filled channels, or pores, or via transporters. If the solute is an ion (is charged), then its flux will depend on both the concentration difference and the potential difference across the membrane.
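For readers who want to verify the arithmetic in the sample calculation (II A 3), the diffusion equation can be checked in a few lines of Python. The variable names are ours, and the only unit conversion needed is 1 mL = 1 cm3.

```python
# Numeric check of the urea-flux sample calculation: J = -P * A * (C1 - C2).
P = 1e-5          # permeability (cm/sec)
A = 100.0         # surface area (cm^2)
C1 = 20 / 100.0   # lumen urea: 20 mg/100 mL = 0.2 mg/cm^3  (1 mL = 1 cm^3)
C2 = 10 / 100.0   # blood urea: 10 mg/100 mL = 0.1 mg/cm^3

J = -P * A * (C1 - C2)   # mg/sec; minus sign marks flow from high to low concentration
print(f"flux = {abs(J):.0e} mg/sec, from lumen to blood")   # flux = 1e-04 mg/sec
```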
B.  Carrier-mediated transport
■ includes facilitated diffusion and primary and secondary active transport.
■ The characteristics of carrier-mediated transport are
1.  Stereospecificity. For example, d-glucose (the natural isomer) is transported by facilitated diffusion, but the l-isomer is not. Simple diffusion, in contrast, would not distinguish between the two isomers because it does not involve a carrier.
2.  Saturation. The transport rate increases as the concentration of the solute increases, until the carriers are saturated. The transport maximum (Tm) is analogous to the maximum velocity (Vmax) in enzyme kinetics (see the sketch after part D below).
3.  Competition. Structurally related solutes compete for transport sites on carrier molecules. For example, galactose is a competitive inhibitor of glucose transport in the small intestine.

C.  Facilitated diffusion
1.  Characteristics of facilitated diffusion
■ occurs down an electrochemical gradient ("downhill"), similar to simple diffusion.
■ does not require metabolic energy and therefore is passive.
■ is more rapid than simple diffusion.
■ is carrier mediated and therefore exhibits stereospecificity, saturation, and competition.
2.  Example of facilitated diffusion
■ Glucose transport in muscle and adipose cells is "downhill," is carrier-mediated, and is inhibited by sugars such as galactose; therefore, it is categorized as facilitated diffusion. In diabetes mellitus, glucose uptake by muscle and adipose cells is impaired because the carriers for facilitated diffusion of glucose require insulin.

D.  Primary active transport
1.  Characteristics of primary active transport
■ occurs against an electrochemical gradient ("uphill").
■ requires direct input of metabolic energy in the form of adenosine triphosphate (ATP) and therefore is active.
■ is carrier mediated and therefore exhibits stereospecificity, saturation, and competition.
2.  Examples of primary active transport
a.  Na+, K+-ATPase (or Na+–K+ pump) in cell membranes transports Na+ from intracellular to extracellular fluid and K+ from extracellular to intracellular fluid; it maintains low intracellular [Na+] and high intracellular [K+].
■ Both Na+ and K+ are transported against their electrochemical gradients.
■ Energy is provided from the terminal phosphate bond of ATP.
■ The usual stoichiometry is 3 Na+/2 K+.
■ Specific inhibitors of Na+, K+-ATPase are the cardiac glycoside drugs ouabain and digitalis.
b.  Ca2+-ATPase (or Ca2+ pump) in the sarcoplasmic reticulum (SR) or cell membranes transports Ca2+ against an electrochemical gradient.
■ Sarcoplasmic and endoplasmic reticulum Ca2+-ATPase is called SERCA.
c.  H+, K+-ATPase (or proton pump) in gastric parietal cells transports H+ into the lumen of the stomach against its electrochemical gradient.
■ It is inhibited by proton pump inhibitors, such as omeprazole.
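The saturation behavior described in II B 2 can be sketched numerically. The book states only that the rate plateaus at Tm; the Michaelis-Menten-style form and the half-saturation constant Km below are illustrative assumptions, not values from the text.

```python
# Carrier-mediated transport saturates at Tm, analogous to Vmax in enzyme kinetics.
# Michaelis-Menten-style form; Km (half-saturation concentration) is hypothetical.
def transport_rate(C, Tm=10.0, Km=2.0):
    """Transport rate (arbitrary units) at solute concentration C (mM)."""
    return Tm * C / (Km + C)

for C in (1, 2, 10, 100, 1000):
    # rate approaches Tm = 10 as C rises, but never exceeds it (saturation)
    print(C, round(transport_rate(C), 2))
```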
E.  Secondary active transport
1.  Characteristics of secondary active transport
a.  The transport of two or more solutes is coupled.
b.  One of the solutes (usually Na+) is transported "downhill" and provides energy for the "uphill" transport of the other solute(s).
c.  Metabolic energy is not provided directly but indirectly from the Na+ gradient that is maintained across cell membranes. Thus, inhibition of Na+, K+-ATPase will decrease transport of Na+ out of the cell, decrease the transmembrane Na+ gradient, and eventually inhibit secondary active transport.
d.  If the solutes move in the same direction across the cell membrane, it is called cotransport or symport.
■ Examples are Na+-glucose cotransport in the small intestine and renal early proximal tubule and Na+–K+–2Cl− cotransport in the renal thick ascending limb.
e.  If the solutes move in opposite directions across the cell membranes, it is called countertransport, exchange, or antiport.
■ Examples are Na+-Ca2+ exchange and Na+–H+ exchange.
2.  Example of Na+–glucose cotransport (Figure 1.1)
a.  The carrier for Na+–glucose cotransport is located in the luminal membrane of intestinal mucosal and renal proximal tubule cells.
b.  Glucose is transported "uphill"; Na+ is transported "downhill."
c.  Energy is derived from the "downhill" movement of Na+. The inwardly directed Na+ gradient is maintained by the Na+–K+ pump on the basolateral (blood side) membrane. Poisoning the Na+–K+ pump decreases the transmembrane Na+ gradient and consequently inhibits Na+–glucose cotransport.
3.  Example of Na+–Ca2+ countertransport or exchange (Figure 1.2)
a.  Many cell membranes contain a Na+–Ca2+ exchanger that transports Ca2+ "uphill" from low intracellular [Ca2+] to high extracellular [Ca2+]. Ca2+ and Na+ move in opposite directions across the cell membrane.
b.  The energy is derived from the "downhill" movement of Na+. As with cotransport, the inwardly directed Na+ gradient is maintained by the Na+–K+ pump. Poisoning the Na+–K+ pump therefore inhibits Na+–Ca2+ exchange.

[Figure 1.1  Na+–glucose cotransport (symport) in an intestinal or proximal tubule epithelial cell.]
[Figure 1.2  Na+–Ca2+ countertransport (antiport).]

III.  Osmosis

A.  Osmolarity
■ is the concentration of osmotically active particles in a solution.
■ is a colligative property that can be measured by freezing point depression.
■ can be calculated using the following equation:

Osmolarity = g × C

where:
Osmolarity = concentration of particles (Osm/L)
g = number of particles in solution (Osm/mol) [e.g., gNaCl = 2; gglucose = 1]
C = concentration (mol/L)

■ Two solutions that have the same calculated osmolarity are isosmotic. If two solutions have different calculated osmolarities, the solution with the higher osmolarity is hyperosmotic and the solution with the lower osmolarity is hyposmotic.
■ Sample calculation: What is the osmolarity of a 1 M NaCl solution?

Osmolarity = g × C = 2 Osm/mol × 1 mol/L = 2 Osm/L
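The osmolarity formula is simple enough to check mechanically; a minimal sketch, with the function name ours, follows.

```python
# Osmolarity = g x C: dissociated particles per mole times molar concentration.
def osmolarity(g, C):
    """g = particles per mole in solution (Osm/mol); C = concentration (mol/L)."""
    return g * C

print(osmolarity(2, 1.0))   # 1 M NaCl    -> 2 Osm/L (Na+ and Cl-)
print(osmolarity(1, 1.0))   # 1 M glucose -> 1 Osm/L (does not dissociate)
print(osmolarity(3, 1.0))   # 1 M CaCl2   -> 3 Osm/L (Ca2+ and 2 Cl-)
```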
B.  Osmosis and osmotic pressure
■ Osmosis is the flow of water across a semipermeable membrane from a solution with low solute concentration to a solution with high solute concentration.
1.  Example of osmosis (Figure 1.3)
a.  Solutions 1 and 2 are separated by a semipermeable membrane. Solution 1 contains a solute that is too large to cross the membrane. Solution 2 is pure water. The presence of the solute in solution 1 produces an osmotic pressure.
b.  The osmotic pressure difference across the membrane causes water to flow from solution 2 (which has no solute and the lower osmotic pressure) to solution 1 (which has the solute and the higher osmotic pressure).
c.  With time, the volume of solution 1 increases and the volume of solution 2 decreases.

[Figure 1.3  Osmosis of H2O across a semipermeable membrane.]

2.  Calculating osmotic pressure (van't Hoff's law)
a.  The osmotic pressure of solution 1 (see Figure 1.3) can be calculated by van't Hoff's law, which states that osmotic pressure depends on the concentration of osmotically active particles. The concentration of particles is converted to pressure according to the following equation:

π = g × C × RT

where:
π = osmotic pressure (mm Hg or atm)
g = number of particles in solution (Osm/mol)
C = concentration (mol/L)
R = gas constant (0.082 L·atm/mol·K)
T = absolute temperature (K)

b.  The osmotic pressure increases when the solute concentration increases. A solution of 1 M CaCl2 has a higher osmotic pressure than a solution of 1 M KCl because the concentration of particles is higher.
c.  The higher the osmotic pressure of a solution, the greater the water flow into it.
d.  Two solutions having the same effective osmotic pressure are isotonic because no water flows across a semipermeable membrane separating them. If two solutions separated by a semipermeable membrane have different effective osmotic pressures, the solution with the higher effective osmotic pressure is hypertonic and the solution with the lower effective osmotic pressure is hypotonic. Water flows from the hypotonic to the hypertonic solution.
e.  Colloid osmotic pressure, or oncotic pressure, is the osmotic pressure created by proteins (e.g., plasma proteins).
3.  Reflection coefficient (σ)
■ is a number between zero and one that describes the ease with which a solute permeates a membrane.
a.  If the reflection coefficient is one, the solute is impermeable. Therefore, it is retained in the original solution, it creates an osmotic pressure, and it causes water flow. Serum albumin (a large solute) has a reflection coefficient of nearly one.
b.  If the reflection coefficient is zero, the solute is completely permeable. Therefore, it will not exert any osmotic effect, and it will not cause water flow. Urea (a small solute) usually has a reflection coefficient of close to zero and it is, therefore, an ineffective osmole.
4.  Calculating effective osmotic pressure
■ Effective osmotic pressure is the osmotic pressure (calculated by van't Hoff's law) multiplied by the reflection coefficient.
■ If the reflection coefficient is one, the solute will exert maximal effective osmotic pressure. If the reflection coefficient is zero, the solute will exert no osmotic pressure.
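Combining van't Hoff's law with the reflection coefficient gives effective osmotic pressure directly; a minimal numeric sketch follows, with the function name ours and body temperature (310 K) assumed for T.

```python
# Effective osmotic pressure = sigma * g * C * R * T (van't Hoff's law).
R = 0.082   # gas constant, L*atm/(mol*K)
T = 310.0   # absolute temperature, K (about 37 degrees C, assumed)

def osmotic_pressure_atm(g, C, sigma=1.0):
    """g = particles/mol, C = mol/L, sigma = reflection coefficient (0..1)."""
    return sigma * g * C * R * T

print(round(osmotic_pressure_atm(2, 1.0), 1))    # 1 M NaCl:  ~50.8 atm
print(round(osmotic_pressure_atm(3, 1.0), 1))    # 1 M CaCl2: higher, ~76.3 atm
print(osmotic_pressure_atm(1, 1.0, sigma=0.0))   # urea-like solute: 0.0 (ineffective osmole)
```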
IV.  Diffusion Potential, Resting Membrane Potential, and Action Potential

A.  Ion channels
■ are integral proteins that span the membrane and, when open, permit the passage of certain ions.
1.  Ion channels are selective; they permit the passage of some ions, but not others. Selectivity is based on the size of the channel and the distribution of charges that line it.
■ For example, a small channel lined with negatively charged groups will be selective for small cations and exclude large solutes and anions. Conversely, a small channel lined with positively charged groups will be selective for small anions and exclude large solutes and cations.
2.  Ion channels may be open or closed. When the channel is open, the ion(s) for which it is selective can flow through. When the channel is closed, ions cannot flow through.
3.  The conductance of a channel depends on the probability that the channel is open. The higher the probability that a channel is open, the higher the conductance, or permeability. Opening and closing of channels are controlled by gates.
a.  Voltage-gated channels are opened or closed by changes in membrane potential.
■ The activation gate of the Na+ channel in nerve is opened by depolarization; when open, the nerve membrane is permeable to Na+ (e.g., during the upstroke of the nerve action potential).
■ The inactivation gate of the Na+ channel in nerve is closed by depolarization; when closed, the nerve membrane is impermeable to Na+ (e.g., during the repolarization phase of the nerve action potential).
b.  Ligand-gated channels are opened or closed by hormones, second messengers, or neurotransmitters.
■ For example, the nicotinic receptor for acetylcholine (ACh) at the motor end plate is an ion channel that opens when ACh binds to it. When open, it is permeable to Na+ and K+, causing the motor end plate to depolarize.

B.  Diffusion and equilibrium potentials
■ A diffusion potential is the potential difference generated across a membrane because of a concentration difference of an ion.
■ A diffusion potential can be generated only if the membrane is permeable to the ion.
■ The size of the diffusion potential depends on the size of the concentration gradient.
■ The sign of the diffusion potential depends on whether the diffusing ion is positively or negatively charged.
■ Diffusion potentials are created by the diffusion of very few ions and, therefore, do not result in changes in concentration of the diffusing ions.
■ The equilibrium potential is the potential difference that would exactly balance (oppose) the tendency for diffusion down a concentration difference. At electrochemical equilibrium, the chemical and electrical driving forces that act on an ion are equal and opposite, and no more net diffusion of the ion occurs.
1.  Example of a Na+ diffusion potential (Figure 1.4)
a.  Two solutions of NaCl are separated by a membrane that is permeable to Na+ but not to Cl−. The NaCl concentration of solution 1 is higher than that of solution 2.
b.  Because the membrane is permeable to Na+, Na+ will diffuse from solution 1 to solution 2 down its concentration gradient. Cl− is impermeable and therefore will not accompany Na+.
c.  As a result, a diffusion potential will develop and solution 1 will become negative with respect to solution 2.
d.  Eventually, the potential difference will become large enough to oppose further net diffusion of Na+. The potential difference that exactly counterbalances the diffusion of Na+ down its concentration gradient is the Na+ equilibrium potential. At electrochemical equilibrium, the chemical and electrical driving forces on Na+ are equal and opposite, and there is no net diffusion of Na+.
2.  Example of a Cl− diffusion potential (Figure 1.5)
a.  Two solutions identical to those shown in Figure 1.4 are now separated by a membrane that is permeable to Cl− rather than to Na+.
b.  Cl− will diffuse from solution 1 to solution 2 down its concentration gradient. Na+ is impermeable and therefore will not accompany Cl−.
c.  A diffusion potential will be established such that solution 1 will become positive with respect to solution 2. The potential difference that exactly counterbalances the diffusion of Cl− down its concentration gradient is the Cl− equilibrium potential. At electrochemical equilibrium, the chemical and electrical driving forces on Cl− are equal and opposite, and there is no net diffusion of Cl−.

[Figure 1.4  Generation of a Na+ diffusion potential across a Na+-selective membrane.]
[Figure 1.5  Generation of a Cl− diffusion potential across a Cl−-selective membrane.]
3.  Using the Nernst equation to calculate equilibrium potentials
a.  The Nernst equation is used to calculate the equilibrium potential at a given concentration difference of a permeable ion across a cell membrane. It tells us what potential would exactly balance the tendency for diffusion down the concentration gradient; in other words, at what potential would the ion be at electrochemical equilibrium?

E = −(2.3 RT/zF) log10 ([Ci]/[Ce])

where:
E = equilibrium potential (mV)
2.3 RT/F = 60 mV at 37°C
z = charge on the ion (+1 for Na+, +2 for Ca2+, −1 for Cl−)
Ci = intracellular concentration (mM)
Ce = extracellular concentration (mM)

b.  Sample calculation with the Nernst equation
■ If the intracellular [Na+] is 15 mM and the extracellular [Na+] is 150 mM, what is the equilibrium potential for Na+?

ENa+ = −(60 mV/z) log10 ([Ci]/[Ce])
     = −(60 mV/+1) log10 (15 mM/150 mM)
     = −60 mV × log10 (0.1)
     = +60 mV

Note: You need not remember which concentration goes in the numerator. Because it is a log function, perform the calculation either way to get the absolute value of 60 mV. Then use an "intuitive approach" to determine the correct sign. (Intuitive approach: The [Na+] is higher in extracellular fluid than in intracellular fluid, so Na+ ions will diffuse from extracellular to intracellular, making the inside of the cell positive [i.e., +60 mV at equilibrium].)

c.  Approximate values for equilibrium potentials in nerve and muscle

ENa+    +65 mV
ECa2+   +120 mV
EK+     −85 mV
ECl−    −85 mV
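The Nernst arithmetic is easy to verify mechanically; a minimal check in Python, with the function name and the 60-mV slope constant as our choices, follows.

```python
import math

def nernst_mV(z, Ci, Ce, slope=60.0):
    """Equilibrium potential (mV): E = -(slope/z) * log10(Ci/Ce).

    slope = 2.3 RT/F, about 60 mV at 37 degrees C.
    """
    return -(slope / z) * math.log10(Ci / Ce)

print(f"{nernst_mV(+1, 15, 150):+.0f} mV")   # Na+ sample calculation above: +60 mV
print(f"{nernst_mV(+1, 150, 15):+.0f} mV")   # a K+-like gradient (high inside): -60 mV
```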
C.  Driving force and current flow
■ The driving force on an ion is the difference between the actual membrane potential (Em) and the ion's equilibrium potential (calculated with the Nernst equation).
■ Current flow occurs if there is a driving force on the ion and the membrane is permeable to the ion. The direction of current flow is in the same direction as the driving force. The magnitude of current flow is determined by the size of the driving force and the permeability (or conductance) of the ion. If there is no driving force on the ion, no current flow can occur. If the membrane is impermeable to the ion, no current flow can occur.

D.  Resting membrane potential
■ is expressed as the measured potential difference across the cell membrane in millivolts (mV).
■ is, by convention, expressed as the intracellular potential relative to the extracellular potential. Thus, a resting membrane potential of −70 mV means 70 mV, cell negative.
1.  The resting membrane potential is established by diffusion potentials that result from concentration differences of permeant ions.
2.  Each permeable ion attempts to drive the membrane potential toward its equilibrium potential. Ions with the highest permeabilities, or conductances, will make the greatest contributions to the resting membrane potential, and those with the lowest permeabilities will make little or no contribution.
3.  For example, the resting membrane potential of nerve is −70 mV, which is close to the calculated K+ equilibrium potential of −85 mV, but far from the calculated Na+ equilibrium potential of +65 mV. At rest, the nerve membrane is far more permeable to K+ than to Na+.
4.  The Na+–K+ pump contributes only indirectly to the resting membrane potential by maintaining, across the cell membrane, the Na+ and K+ concentration gradients that then produce diffusion potentials. The direct electrogenic contribution of the pump (3 Na+ pumped out of the cell for every 2 K+ pumped into the cell) is small.

E.  Action potentials
1.  Definitions
a.  Depolarization makes the membrane potential less negative (the cell interior becomes less negative).
b.  Hyperpolarization makes the membrane potential more negative (the cell interior becomes more negative).
c.  Inward current is the flow of positive charge into the cell. Inward current depolarizes the membrane potential.
d.  Outward current is the flow of positive charge out of the cell. Outward current hyperpolarizes the membrane potential.
e.  Action potential is a property of excitable cells (i.e., nerve, muscle) that consists of a rapid depolarization, or upstroke, followed by repolarization of the membrane potential. Action potentials have stereotypical size and shape, are propagating, and are all-or-none.
f.  Threshold is the membrane potential at which the action potential is inevitable. At threshold potential, net inward current becomes larger than net outward current. The resulting depolarization becomes self-sustaining and gives rise to the upstroke of the action potential. If net inward current is less than net outward current, no action potential will occur (i.e., all-or-none response).
2.  Ionic basis of the nerve action potential (Figure 1.6)
a.  Resting membrane potential
■ is approximately −70 mV, cell negative.
■ is the result of the high resting conductance to K+, which drives the membrane potential toward the K+ equilibrium potential.
■ At rest, the Na+ channels are closed and Na+ conductance is low.
b.  Upstroke of the action potential
(1)  Inward current depolarizes the membrane potential to threshold.
(2)  Depolarization causes rapid opening of the activation gates of the Na+ channels, and the Na+ conductance of the membrane promptly increases.
(3)  The Na+ conductance becomes higher than the K+ conductance, and the membrane potential is driven toward (but does not quite reach) the Na+ equilibrium potential of +65 mV. Thus, the rapid depolarization during the upstroke is caused by an inward Na+ current.
(4)  The overshoot is the brief portion at the peak of the action potential when the membrane potential is positive.
(5)  Tetrodotoxin (TTX) and lidocaine block these voltage-sensitive Na+ channels and abolish action potentials.
c.  Repolarization of the action potential
(1)  Depolarization also closes the inactivation gates of the Na+ channels (but more slowly than it opens the activation gates). Closure of the inactivation gates results in closure of the Na+ channels, and the Na+ conductance returns toward zero.
(2)  Depolarization slowly opens K+ channels and increases K+ conductance to even higher levels than at rest. Tetraethylammonium (TEA) blocks these voltage-gated K+ channels.
(3)  The combined effect of closing the Na+ channels and greater opening of the K+ channels makes the K+ conductance higher than the Na+ conductance, and the membrane potential is repolarized. Thus, repolarization is caused by an outward K+ current.
d.  Undershoot (hyperpolarizing afterpotential)
■ The K+ conductance remains higher than at rest for some time after closure of the Na+ channels. During this period, the membrane potential is driven very close to the K+ equilibrium potential.
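The driving-force bookkeeping from section IV C, applied at the resting potential, shows why opening Na+ channels produces the upstroke while opening K+ channels repolarizes. The linear (ohmic) form I = g × (Em − E_ion) used below is a standard simplification, not a formula stated in this book.

```python
# Driving force = Em - E_ion; current flows in the direction of the driving force.
# Ohmic sketch (I = g * (Em - E_ion)) is an assumption for illustration only.
Em = -70.0                        # resting membrane potential (mV)
E = {"Na+": +65.0, "K+": -85.0}   # approximate equilibrium potentials (mV)

for ion, E_ion in E.items():
    df = Em - E_ion
    direction = "inward (depolarizing)" if df < 0 else "outward (hyperpolarizing)"
    print(f"{ion}: driving force {df:+.0f} mV -> {direction} current if channels open")
# Na+: -135 mV -> inward; K+: +15 mV -> outward
```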
[Figure 1.6  Nerve action potential and associated changes in Na+ and K+ conductance.]

3.  Refractory periods (see Figure 1.6)
a.  Absolute refractory period
■ is the period during which another action potential cannot be elicited, no matter how large the stimulus.
■ coincides with almost the entire duration of the action potential.
■ Explanation: Recall that the inactivation gates of the Na+ channels are closed when the membrane potential is depolarized. They remain closed until repolarization occurs. No action potential can occur until the inactivation gates open.
b.  Relative refractory period
■ begins at the end of the absolute refractory period and continues until the membrane potential returns to the resting level.
■ An action potential can be elicited during this period only if a larger than usual inward current is provided.
■ Explanation: The K+ conductance is higher than at rest, and the membrane potential is closer to the K+ equilibrium potential and, therefore, farther from threshold; more inward current is required to bring the membrane to threshold.
c.  Accommodation
■ occurs when the cell membrane is held at a depolarized level such that the threshold potential is passed without firing an action potential.
■ occurs because depolarization closes inactivation gates on the Na+ channels.
■ is demonstrated in hyperkalemia, in which skeletal muscle membranes are depolarized by the high serum K+ concentration. Although the membrane potential is closer to threshold, action potentials do not occur because inactivation gates on Na+ channels are closed by depolarization, causing muscle weakness.
4.  Propagation of action potentials (Figure 1.7)
■ occurs by the spread of local currents to adjacent areas of membrane, which are then depolarized to threshold and generate action potentials.
■ Conduction velocity is increased by:
a.  ↑ fiber size. Increasing the diameter of a nerve fiber results in decreased internal resistance; thus, conduction velocity down the nerve is faster.
b.  Myelination. Myelin acts as an insulator around nerve axons and increases conduction velocity. Myelinated nerves exhibit saltatory conduction because action potentials can be generated only at the nodes of Ranvier, where there are gaps in the myelin sheath (Figure 1.8).

[Figure 1.7  Unmyelinated axon showing spread of depolarization by local current flow. Box shows active zone where the action potential had reversed the polarity.]
[Figure 1.8  Myelinated axon. Action potentials can occur at nodes of Ranvier.]

V.  Neuromuscular and Synaptic Transmission

A.  General characteristics of chemical synapses
1.  An action potential in the presynaptic cell causes depolarization of the presynaptic terminal.
2.  As a result of the depolarization, Ca2+ enters the presynaptic terminal, causing release of neurotransmitter into the synaptic cleft.
3.  Neurotransmitter diffuses across the synaptic cleft and combines with receptors on the postsynaptic cell membrane, causing a change in its permeability to ions and, consequently, a change in its membrane potential.
4.  Inhibitory neurotransmitters hyperpolarize the postsynaptic membrane; excitatory neurotransmitters depolarize the postsynaptic membrane.
B.  Neuromuscular junction (Figure 1.9 and Table 1.2)
■ is the synapse between axons of motoneurons and skeletal muscle.
■ The neurotransmitter released from the presynaptic terminal is ACh, and the postsynaptic membrane contains a nicotinic receptor.
1.  Synthesis and storage of ACh in the presynaptic terminal
■ Choline acetyltransferase catalyzes the formation of ACh from acetyl coenzyme A (CoA) and choline in the presynaptic terminal.
■ ACh is stored in synaptic vesicles with ATP and proteoglycan for later release.
2.  Depolarization of the presynaptic terminal and Ca2+ uptake
■ Action potentials are conducted down the motoneuron. Depolarization of the presynaptic terminal opens Ca2+ channels.
■ When Ca2+ permeability increases, Ca2+ rushes into the presynaptic terminal down its electrochemical gradient.
3.  Ca2+ uptake causes release of ACh into the synaptic cleft
■ The synaptic vesicles fuse with the plasma membrane and empty their contents into the cleft by exocytosis.
4.  Diffusion of ACh to the postsynaptic membrane (muscle end plate) and binding of ACh to nicotinic receptors
■ The nicotinic ACh receptor is also a Na+ and K+ ion channel.
■ Binding of ACh to α subunits of the receptor causes a conformational change that opens the central core of the channel and increases its conductance to Na+ and K+. These are examples of ligand-gated channels.
5.  End plate potential (EPP) in the postsynaptic membrane
■ Because the channels opened by ACh conduct both Na+ and K+ ions, the postsynaptic membrane potential is depolarized to a value halfway between the Na+ and K+ equilibrium potentials (approximately 0 mV).
■ The contents of one synaptic vesicle (one quantum) produce a miniature end plate potential (MEPP), the smallest possible EPP.
■ MEPPs summate to produce a full-fledged EPP. The EPP is not an action potential, but simply a depolarization of the specialized muscle end plate.
6.  Depolarization of adjacent muscle membrane to threshold
■ Once the end plate region is depolarized, local currents cause depolarization and action potentials in the adjacent muscle tissue. Action potentials in the muscle are followed by contraction.
7.  Degradation of ACh
■ The EPP is transient because ACh is degraded to acetyl CoA and choline by acetylcholinesterase (AChE) on the muscle end plate.
■ One-half of the choline is taken back into the presynaptic ending by Na+-choline cotransport and used to synthesize new ACh.
■ AChE inhibitors (neostigmine) block the degradation of ACh, prolong its action at the muscle end plate, and increase the size of the EPP.
■ Hemicholinium blocks choline reuptake and depletes the presynaptic endings of ACh stores.

[Figure 1.9  Neuromuscular junction. ACh = acetylcholine; AChR = acetylcholine receptor.]

table 1.2  Agents Affecting Neuromuscular Transmission

Example | Action | Effect on Neuromuscular Transmission
Botulinus toxin | Blocks release of ACh from presynaptic terminals | Total blockade
Curare | Competes with ACh for receptors on motor end plate | Decreases size of EPP; maximal doses produce paralysis of respiratory muscles and death
Neostigmine | Inhibits acetylcholinesterase | Prolongs and enhances action of ACh at muscle end plate
Hemicholinium | Blocks reuptake of choline into presynaptic terminal | Depletes ACh stores from presynaptic terminal

ACh = acetylcholine; EPP = end plate potential.
8.  Disease—myasthenia gravis
■ is caused by the presence of antibodies to the ACh receptor.
■ is characterized by skeletal muscle weakness and fatigability resulting from a reduced number of ACh receptors on the muscle end plate.
■ The size of the EPP is reduced; therefore, it is more difficult to depolarize the muscle membrane to threshold and to produce action potentials.
■ Treatment with AChE inhibitors (e.g., neostigmine) prevents the degradation of ACh and prolongs the action of ACh at the muscle end plate, partially compensating for the reduced number of receptors.

C.  Synaptic transmission
1.  Types of arrangements
a.  One-to-one synapses (such as those found at the neuromuscular junction)
■ An action potential in the presynaptic element (the motor nerve) produces an action potential in the postsynaptic element (the muscle).
b.  Many-to-one synapses (such as those found on spinal motoneurons)
■ An action potential in a single presynaptic cell is insufficient to produce an action potential in the postsynaptic cell. Instead, many cells synapse on the postsynaptic cell to depolarize it to threshold. The presynaptic input may be excitatory or inhibitory.
2.  Input to synapses
■ The postsynaptic cell integrates excitatory and inhibitory inputs.
■ When the sum of the input brings the membrane potential of the postsynaptic cell to threshold, it fires an action potential.
a.  Excitatory postsynaptic potentials (EPSPs)
■ are inputs that depolarize the postsynaptic cell, bringing it closer to threshold and closer to firing an action potential.
■ are caused by opening of channels that are permeable to Na+ and K+, similar to the ACh channels. The membrane potential depolarizes to a value halfway between the equilibrium potentials for Na+ and K+ (approximately 0 mV).
■ Excitatory neurotransmitters include ACh, norepinephrine, epinephrine, dopamine, glutamate, and serotonin.
b.  Inhibitory postsynaptic potentials (IPSPs)
■ are inputs that hyperpolarize the postsynaptic cell, moving it away from threshold and farther from firing an action potential.
■ are caused by opening Cl− channels. The membrane potential is hyperpolarized toward the Cl− equilibrium potential (−90 mV).
■ Inhibitory neurotransmitters are γ-aminobutyric acid (GABA) and glycine.
3.  Summation at synapses
a.  Spatial summation occurs when two excitatory inputs arrive at a postsynaptic neuron simultaneously. Together, they produce greater depolarization.
b.  Temporal summation occurs when two excitatory inputs arrive at a postsynaptic neuron in rapid succession. Because the resulting postsynaptic depolarizations overlap in time, they add in stepwise fashion.
c.  Facilitation, augmentation, and posttetanic potentiation occur after tetanic stimulation of the presynaptic neuron. In each of these, depolarization of the postsynaptic neuron is greater than expected because greater than normal amounts of neurotransmitter are released, possibly because of the accumulation of Ca2+ in the presynaptic terminal.
■ Long-term potentiation (memory) involves new protein synthesis.
4.  Neurotransmitters
a.  ACh (see V B)
b.  Norepinephrine, epinephrine, and dopamine (Figure 1.10)
(1)  Norepinephrine
■ is the primary transmitter released from postganglionic sympathetic neurons.
■ is synthesized in the nerve terminal and released into the synapse to bind with α or β receptors on the postsynaptic membrane.
■ is removed from the synapse by reuptake or is metabolized in the presynaptic terminal by monoamine oxidase (MAO) and catechol-O-methyltransferase (COMT). The metabolites are:
(a)  3,4-Dihydroxymandelic acid (DOMA)
(b)  Normetanephrine (NMN)
(c)  3-Methoxy-4-hydroxyphenylglycol (MOPEG)
(d)  3-Methoxy-4-hydroxymandelic acid or vanillylmandelic acid (VMA)
■ In pheochromocytoma, a tumor of the adrenal medulla that secretes catecholamines, urinary excretion of VMA is increased.
(2)  Epinephrine
■ is synthesized from norepinephrine by the action of phenylethanolamine-N-methyltransferase in the adrenal medulla.
■ a methyl group is transferred to norepinephrine from S-adenosylmethionine.
(3)  Dopamine
■ is prominent in midbrain neurons.
■ is released from the hypothalamus and inhibits prolactin secretion; in this context, it is called prolactin-inhibiting factor (PIF).
■ is metabolized by MAO and COMT.
(a)  D1 receptors activate adenylate cyclase via a Gs protein.
(b)  D2 receptors inhibit adenylate cyclase via a Gi protein.
(c)  Parkinson disease involves degeneration of dopaminergic neurons that use the D2 receptors.
(d)  Schizophrenia involves increased levels of D2 receptors.

[Figure 1.10  Synthetic pathway for dopamine, norepinephrine, and epinephrine: tyrosine → L-dopa (tyrosine hydroxylase) → dopamine (dopa decarboxylase) → norepinephrine (dopamine β-hydroxylase) → epinephrine (phenylethanolamine-N-methyltransferase, adrenal medulla).]

c.  Serotonin
■ is present in high concentrations in the brain stem.
■ is formed from tryptophan.
■ is converted to melatonin in the pineal gland.
d.  Histamine
■ is formed from histidine.
■ is present in the neurons of the hypothalamus.
e.  Glutamate
■ is the most prevalent excitatory neurotransmitter in the brain.
■ There are four subtypes of glutamate receptors.
■ Three subtypes are ionotropic receptors (ligand-gated ion channels), including the NMDA (N-methyl-d-aspartate) receptor.
■ One subtype is a metabotropic receptor, which is coupled to ion channels via a heterotrimeric G protein.
f.  GABA
■ is an inhibitory neurotransmitter.
■ is synthesized from glutamate by glutamate decarboxylase.
■ has two types of receptors:
(1)  The GABAA receptor increases Cl− conductance and is the site of action of benzodiazepines and barbiturates.
(2)  The GABAB receptor increases K+ conductance.
g.  Glycine
■ is an inhibitory neurotransmitter found primarily in the spinal cord and brain stem.
■ increases Cl− conductance.
h.  Nitric oxide (NO)
■ is a short-acting inhibitory neurotransmitter in the gastrointestinal tract, blood vessels, and the central nervous system.
■ is synthesized in presynaptic nerve terminals, where NO synthase converts arginine to citrulline and NO.
■ is a permeant gas that diffuses from the presynaptic terminal to its target cell.
■ also functions in signal transduction of guanylyl cyclase in a variety of tissues, including vascular smooth muscle.
VI.  Skeletal Muscle

A.  Muscle structure and filaments (Figure 1.11)
■ Each muscle fiber is multinucleate and behaves as a single unit. It contains bundles of myofibrils, surrounded by SR and invaginated by transverse tubules (T tubules).
■ Each myofibril contains interdigitating thick and thin filaments arranged longitudinally in sarcomeres.
■ Repeating units of sarcomeres account for the unique banding pattern in striated muscle. A sarcomere runs from Z line to Z line.
1.  Thick filaments
■ are present in the A band in the center of the sarcomere.
■ contain myosin.
a.  Myosin has six polypeptide chains, including one pair of heavy chains and two pairs of light chains.
b.  Each myosin molecule has two "heads" attached to a single "tail." The myosin heads bind ATP and actin and are involved in cross-bridge formation.
2.  Thin filaments
■ are anchored at the Z lines.
■ are present in the I bands.
■ interdigitate with the thick filaments in a portion of the A band.
■ contain actin, tropomyosin, and troponin.
a.  Troponin is the regulatory protein that permits cross-bridge formation when it binds Ca2+.
b.  Troponin is a complex of three globular proteins:
■ Troponin T ("T" for tropomyosin) attaches the troponin complex to tropomyosin.
■ Troponin I ("I" for inhibition) inhibits the interaction of actin and myosin.
■ Troponin C ("C" for Ca2+) is the Ca2+-binding protein that, when bound to Ca2+, permits the interaction of actin and myosin.
3.  T tubules
■ are an extensive tubular network, open to the extracellular space, that carry the depolarization from the sarcolemmal membrane to the cell interior.
■ are located at the junctions of A bands and I bands.
■ contain a voltage-sensitive protein called the dihydropyridine receptor; depolarization causes a conformational change in the dihydropyridine receptor.
4.  SR
■ is the internal tubular structure that is the site of Ca2+ storage and release for excitation–contraction coupling.
■ has terminal cisternae that make intimate contact with the T tubules in a triad arrangement.
■ membrane contains Ca2+-ATPase (Ca2+ pump), which transports Ca2+ from intracellular fluid into the SR interior, keeping intracellular [Ca2+] low.
■ contains Ca2+ bound loosely to calsequestrin.
■ contains a Ca2+ release channel called the ryanodine receptor.

[Figure 1.11  Structure of the sarcomere in skeletal muscle. A: Arrangement of thick and thin filaments. B: Transverse tubules and sarcoplasmic reticulum.]

B.  Steps in excitation–contraction coupling in skeletal muscle (Figures 1.12 and 1.13)
1.  Action potentials in the muscle cell membrane initiate depolarization of the T tubules.
2.  Depolarization of the T tubules causes a conformational change in its dihydropyridine receptor, which opens Ca2+ release channels (ryanodine receptors) in the nearby SR, causing release of Ca2+ from the SR into the intracellular fluid.
3.  Intracellular [Ca2+] increases.
4.  Ca2+ binds to troponin C on the thin filaments, causing a conformational change in troponin that moves tropomyosin out of the way. The cross-bridge cycle begins (see Figure 1.12):
a.  At first, no ATP is bound to myosin (A) and myosin is tightly attached to actin. In rapidly contracting muscle, this stage is brief. In the absence of ATP, this state is permanent (i.e., rigor).
b.  ATP then binds to myosin (B), producing a conformational change in myosin that causes myosin to be released from actin.
c.  Myosin is displaced toward the plus end of actin. There is hydrolysis of ATP to ADP and inorganic phosphate (Pi). ADP remains attached to myosin (C).
d.  Myosin attaches to a new site on actin, which constitutes the power (force-generating) stroke (D). ADP is then released, returning myosin to its rigor state.
e.  The cycle repeats as long as Ca2+ is bound to troponin C. Each cross-bridge cycle "walks" myosin further along the actin filament.
5.  Relaxation occurs when Ca2+ is reaccumulated by the SR Ca2+-ATPase (SERCA). Intracellular Ca2+ concentration decreases, Ca2+ is released from troponin C, and tropomyosin again blocks the myosin-binding site on actin. As long as intracellular Ca2+ concentration is low, cross-bridge cycling cannot occur.
6.  Mechanism of tetanus. A single action potential causes the release of a standard amount of Ca2+ from the SR and produces a single twitch. However, if the muscle is stimulated repeatedly, more Ca2+ is released from the SR and there is a cumulative increase in intracellular [Ca2+], extending the time for cross-bridge cycling. The muscle does not relax (tetanus).

[Figure 1.12  Cross-bridge cycle. Myosin "walks" toward the plus end of actin to produce shortening and force generation. ADP = adenosine diphosphate; ATP = adenosine triphosphate; Pi = inorganic phosphate.]
[Figure 1.13  Relationship of the action potential, the increase in intracellular [Ca2+], and muscle contraction in skeletal muscle.]

C.  Length–tension and force–velocity relationships in muscle
■ Isometric contractions are measured when length is held constant. Muscle length (preload) is fixed, the muscle is stimulated to contract, and the developed tension is measured. There is no shortening.
■ Isotonic contractions are measured when load is held constant. The load against which the muscle contracts (afterload) is fixed, the muscle is stimulated to contract, and shortening is measured.
1.  Length–tension relationship (Figure 1.14)
■ measures tension developed during isometric contractions when the muscle is set to fixed lengths (preload).
a.  Passive tension is the tension developed by stretching the muscle to different lengths.
b.  Total tension is the tension developed when the muscle is stimulated to contract at different lengths.
c.  Active tension is the difference between total tension and passive tension.
■ Active tension represents the active force developed from contraction of the muscle. It can be explained by the cross-bridge cycle model.
■ Active tension is proportional to the number of cross-bridges formed. Tension will be maximum when there is maximum overlap of thick and thin filaments. When the muscle is stretched to greater lengths, the number of cross-bridges is reduced because there is less overlap. When muscle length is decreased, the thin filaments collide and tension is reduced.

[Figure 1.14  Length–tension relationship in skeletal muscle.]

2.  Force–velocity relationship (Figure 1.15)
■ measures the velocity of shortening of isotonic contractions when the muscle is challenged with different afterloads (the load against which the muscle must contract).
■ The velocity of shortening decreases as the afterload increases.
[Figure 1.15  Force–velocity relationship in skeletal muscle.]

VII.  Smooth Muscle
■ has thick and thin filaments that are not arranged in sarcomeres; therefore, they appear homogeneous rather than striated.

A.  Types of smooth muscle
1.  Multiunit smooth muscle
■ is present in the iris, ciliary muscle of the lens, and vas deferens.
■ behaves as separate motor units.
■ has little or no electrical coupling between cells.
■ is densely innervated; contraction is controlled by neural innervation (e.g., autonomic nervous system).
2.  Unitary (single-unit) smooth muscle
■ is the most common type and is present in the uterus, gastrointestinal tract, ureter, and bladder.
■ is spontaneously active (exhibits slow waves) and exhibits "pacemaker" activity (see Chapter 6 III A), which is modulated by hormones and neurotransmitters.
■ has a high degree of electrical coupling between cells and, therefore, permits coordinated contraction of the organ (e.g., bladder).
3.  Vascular smooth muscle
■ has properties of both multiunit and single-unit smooth muscle.

B.  Steps in excitation–contraction coupling in smooth muscle (Figure 1.16)
■ The mechanism of excitation–contraction coupling is different from that in skeletal muscle.
■ There is no troponin; instead, Ca2+ regulates myosin on the thick filaments.
1.  Depolarization of the cell membrane opens voltage-gated Ca2+ channels and Ca2+ flows into the cell down its electrochemical gradient, increasing the intracellular [Ca2+]. Hormones and neurotransmitters may open ligand-gated Ca2+ channels in the cell membrane. Ca2+ entering the cell causes release of more Ca2+ from the SR in a process called Ca2+-induced Ca2+ release. Hormones and neurotransmitters also directly release Ca2+ from the SR through inositol 1,4,5-trisphosphate (IP3)–gated Ca2+ channels.
2.  Intracellular [Ca2+] increases.
3.  Ca2+ binds to calmodulin. The Ca2+–calmodulin complex binds to and activates myosin light chain kinase. When activated, myosin light chain kinase phosphorylates myosin and allows it to bind to actin, thus initiating cross-bridge cycling. The amount of tension produced is proportional to the intracellular Ca2+ concentration.
4.  A decrease in intracellular [Ca2+] produces relaxation.

[Figure 1.16  Sequence of events in contraction of smooth muscle.]

VIII.  Comparison of Skeletal Muscle, Smooth Muscle, and Cardiac Muscle
■ Table 1.3 compares the ionic basis for the action potential and mechanism of contraction in skeletal muscle, smooth muscle, and cardiac muscle.
■ Cardiac muscle is discussed in Chapter 3.
table 1.3  Comparison of Skeletal, Smooth, and Cardiac Muscles

Feature | Skeletal Muscle | Smooth Muscle | Cardiac Muscle
Appearance | Striated | No striations | Striated
Upstroke of action potential | Inward Na+ current | Inward Ca2+ current | Inward Ca2+ current (SA node); inward Na+ current (atria, ventricles, Purkinje fibers)
Plateau | No | No | No (SA node); yes (atria, ventricles, Purkinje fibers; due to inward Ca2+ current)
Duration of action potential | ~1 msec | ~10 msec | 150 msec (SA node, atria); 250–300 msec (ventricles and Purkinje fibers)
Excitation–contraction coupling | Action potential → T tubules; Ca2+ released from nearby SR; ↑ [Ca2+]i | Action potential opens voltage-gated Ca2+ channels in cell membrane; hormones and transmitters open IP3-gated Ca2+ channels in SR | Inward Ca2+ current during plateau of action potential; Ca2+-induced Ca2+ release from SR; ↑ [Ca2+]i
Molecular basis for contraction | Ca2+–troponin C | Ca2+–calmodulin, ↑ myosin-light-chain kinase | Ca2+–troponin C

IP3 = inositol 1,4,5-trisphosphate; SA = sinoatrial; SR = sarcoplasmic reticulum.

Review Test

1. Which of the following characteristics is shared by simple and facilitated diffusion of glucose?
(A) Occurs down an electrochemical gradient
(B) Is saturable
(C) Requires metabolic energy
(D) Is inhibited by the presence of galactose
(E) Requires a Na+ gradient

2. During the upstroke of the nerve action potential
(A) there is net outward current and the cell interior becomes more negative
(B) there is net outward current and the cell interior becomes less negative
(C) there is net inward current and the cell interior becomes more negative
(D) there is net inward current and the cell interior becomes less negative

3. Solutions A and B are separated by a semipermeable membrane that is permeable to K+ but not to Cl−. Solution A is 100 mM KCl, and solution B is 1 mM KCl. Which of the following statements about solution A and solution B is true?
(A) K+ ions will diffuse from solution A to solution B until the [K+] of both solutions is 50.5 mM
(B) K+ ions will diffuse from solution B to solution A until the [K+] of both solutions is 50.5 mM
(C) KCl will diffuse from solution A to solution B until the [KCl] of both solutions is 50.5 mM
(D) K+ will diffuse from solution A to solution B until a membrane potential develops with solution A negative with respect to solution B
(E) K+ will diffuse from solution A to solution B until a membrane potential develops with solution A positive with respect to solution B

4. The correct temporal sequence for events at the neuromuscular junction is
(A) action potential in the motor nerve; depolarization of the muscle end plate; uptake of Ca2+ into the presynaptic nerve terminal
(B) uptake of Ca2+ into the presynaptic terminal; release of acetylcholine (ACh); depolarization of the muscle end plate
(C) release of ACh; action potential in the motor nerve; action potential in the muscle
(D) uptake of Ca2+ into the motor end plate; action potential in the motor end plate; action potential in the muscle
(E) release of ACh; action potential in the muscle end plate; action potential in the muscle

5. Which characteristic or component is shared by skeletal muscle and smooth muscle?
(A) Thick and thin filaments arranged in sarcomeres
(B) Troponin
(C) Elevation of intracellular [Ca2+] for excitation–contraction coupling
(D) Spontaneous depolarization of the membrane potential
(E) High degree of electrical coupling between cells

6. Repeated stimulation of a skeletal muscle fiber causes a sustained contraction (tetanus).
Accumulation of which solute in intracellular fluid is responsible for the tetanus?
(A) Na+
(B) K+
(C) Cl−
(D) Mg2+
(E) Ca2+
(F) Troponin
(G) Calmodulin
(H) Adenosine triphosphate (ATP)

7. Solutions A and B are separated by a membrane that is permeable to Ca2+ and impermeable to Cl−. Solution A contains 10 mM CaCl2, and solution B contains 1 mM CaCl2. Assuming that 2.3 RT/F = 60 mV, Ca2+ will be at electrochemical equilibrium when
(A) solution A is +60 mV
(B) solution A is +30 mV
(C) solution A is −60 mV
(D) solution A is −30 mV
(E) solution A is +120 mV
(F) solution A is −120 mV
(G) the Ca2+ concentrations of the two solutions are equal
(H) the concentrations of the two solutions are equal

8. A 42-year-old man with myasthenia gravis notes increased muscle strength when he is treated with an acetylcholinesterase (AChE) inhibitor. The basis for his improvement is increased
(A) amount of acetylcholine (ACh) released from motor nerves
(B) levels of ACh at the muscle end plates
(C) number of ACh receptors on the muscle end plates
(D) amount of norepinephrine released from motor nerves
(E) synthesis of norepinephrine in motor nerves

9. In a hospital error, a 60-year-old woman is infused with large volumes of a solution that causes lysis of her red blood cells (RBCs). The solution was most likely
(A) 150 mM NaCl
(B) 300 mM mannitol
(C) 350 mM mannitol
(D) 300 mM urea
(E) 150 mM CaCl2

10. During a nerve action potential, a stimulus is delivered as indicated by the arrow shown in the following figure. In response to the stimulus, a second action potential

[Figure for Question 10: a nerve action potential with the stimulus indicated by an arrow.]

(A) of smaller magnitude will occur
(B) of normal magnitude will occur
(C) of normal magnitude will occur but will be delayed
(D) will occur but will not have an overshoot
(E) will not occur

11. Solutions A and B are separated by a membrane that is permeable to urea. Solution A is 10 mM urea, and solution B is 5 mM urea. If the concentration of urea in solution A is doubled, the flux of urea across the membrane will
(A) double
(B) triple
(C) be unchanged
(D) decrease to one-half
(E) decrease to one-third

12. A muscle cell has an intracellular [Na+] of 14 mM and an extracellular [Na+] of 140 mM. Assuming that 2.3 RT/F = 60 mV, what would the membrane potential be if the muscle cell membrane were permeable only to Na+?
(A) −80 mV
(B) −60 mV
(C) 0 mV
(D) +60 mV
(E) +80 mV

Questions 13–15

The following diagram of a nerve action potential applies to Questions 13–15.

[Figure for Questions 13–15: a nerve action potential (membrane potential versus time in msec) with labeled points 1–5; the Na+ and K+ equilibrium potentials, +35 mV, 0 mV, and −70 mV are marked.]

13. At which labeled point on the action potential is K+ closest to electrochemical equilibrium?
(A) 1
(B) 2
(C) 3
(D) 4
(E) 5

14. What process is responsible for the change in membrane potential that occurs between point 1 and point 3?
(A) Movement of Na+ into the cell
(B) Movement of Na+ out of the cell
(C) Movement of K+ into the cell
(D) Movement of K+ out of the cell
(E) Activation of the Na+–K+ pump
(F) Inhibition of the Na+–K+ pump

15. What process is responsible for the change in membrane potential that occurs between point 3 and point 4?
(A) Movement of Na+ into the cell
(B) Movement of Na+ out of the cell
(C) Movement of K+ into the cell
(D) Movement of K+ out of the cell
(E) Activation of the Na+–K+ pump
(F) Inhibition of the Na+–K+ pump
16. The velocity of conduction of action potentials along a nerve will be increased by
(A) stimulating the Na+–K+ pump
(B) inhibiting the Na+–K+ pump
(C) decreasing the diameter of the nerve
(D) myelinating the nerve
(E) lengthening the nerve fiber

17. Solutions A and B are separated by a semipermeable membrane. Solution A contains 1 mM sucrose and 1 mM urea. Solution B contains 1 mM sucrose. The reflection coefficient for sucrose is one, and the reflection coefficient for urea is zero. Which of the following statements about these solutions is correct?
(A) Solution A has a higher effective osmotic pressure than solution B
(B) Solution A has a lower effective osmotic pressure than solution B
(C) Solutions A and B are isosmotic
(D) Solution A is hyperosmotic with respect to solution B, and the solutions are isotonic
(E) Solution A is hyposmotic with respect to solution B, and the solutions are isotonic

18. Transport of d- and l-glucose proceeds at the same rate down an electrochemical gradient by which of the following processes?
(A) Simple diffusion
(B) Facilitated diffusion
(C) Primary active transport
(D) Cotransport
(E) Countertransport

19. Which of the following will double the permeability of a solute in a lipid bilayer?
(A) Doubling the molecular radius of the solute
(B) Doubling the oil/water partition coefficient of the solute
(C) Doubling the thickness of the bilayer
(D) Doubling the concentration difference of the solute across the bilayer

20. A newly developed local anesthetic blocks Na+ channels in nerves. Which of the following effects on the action potential would it be expected to produce?
(A) Decrease the rate of rise of the upstroke of the action potential
(B) Shorten the absolute refractory period
(C) Abolish the hyperpolarizing afterpotential
(D) Increase the Na+ equilibrium potential
(E) Decrease the Na+ equilibrium potential

21. At the muscle end plate, acetylcholine (ACh) causes the opening of
(A) Na+ channels and depolarization toward the Na+ equilibrium potential
(B) K+ channels and depolarization toward the K+ equilibrium potential
(C) Ca2+ channels and depolarization toward the Ca2+ equilibrium potential
(D) Na+ and K+ channels and depolarization to a value halfway between the Na+ and K+ equilibrium potentials
(E) Na+ and K+ channels and hyperpolarization to a value halfway between the Na+ and K+ equilibrium potentials

22. An inhibitory postsynaptic potential
(A) depolarizes the postsynaptic membrane by opening Na+ channels
(B) depolarizes the postsynaptic membrane by opening K+ channels
(C) hyperpolarizes the postsynaptic membrane by opening Ca2+ channels
(D) hyperpolarizes the postsynaptic membrane by opening Cl− channels

23. Which of the following would occur as a result of the inhibition of Na+, K+-ATPase?
(A) Decreased intracellular Na+ concentration
(B) Increased intracellular K+ concentration
(C) Increased intracellular Ca2+ concentration
(D) Increased Na+–glucose cotransport
(E) Increased Na+–Ca2+ exchange

24. Which of the following temporal sequences is correct for excitation–contraction coupling in skeletal muscle?
(A) Increased intracellular [Ca2+]; action potential in the muscle membrane; cross-bridge formation
(B) Action potential in the muscle membrane; depolarization of the T tubules; release of Ca2+ from the sarcoplasmic reticulum (SR)
(C) Action potential in the muscle membrane; splitting of adenosine triphosphate (ATP); binding of Ca2+ to troponin C
(D) Release of Ca2+ from the SR; depolarization of the T tubules; binding of Ca2+ to troponin C

25. Which of the following transport processes is involved if transport of glucose from the intestinal lumen into a small intestinal cell is inhibited by abolishing the usual Na+ gradient across the cell membrane?
(A) Simple diffusion
(B) Facilitated diffusion
(C) Primary active transport
(D) Cotransport
(E) Countertransport

26. In skeletal muscle, which of the following events occurs before depolarization of the T tubules in the mechanism of excitation–contraction coupling?
(A) Depolarization of the sarcolemmal membrane
(B) Opening of Ca2+ release channels on the sarcoplasmic reticulum (SR)
(C) Uptake of Ca2+ into the SR by Ca2+-adenosine triphosphatase (ATPase)
(D) Binding of Ca2+ to troponin C
(E) Binding of actin and myosin

27. Which of the following is an inhibitory neurotransmitter in the central nervous system (CNS)?
(A) Norepinephrine
(B) Glutamate
(C) γ-Aminobutyric acid (GABA)
(D) Serotonin
(E) Histamine

28. Adenosine triphosphate (ATP) is used indirectly for which of the following processes?
(A) Accumulation of Ca2+ by the sarcoplasmic reticulum (SR)
(B) Transport of Na+ from intracellular to extracellular fluid
(C) Transport of K+ from extracellular to intracellular fluid
(D) Transport of H+ from parietal cells into the lumen of the stomach
(E) Absorption of glucose by intestinal epithelial cells

29. Which of the following causes rigor in skeletal muscle?
(A) Lack of action potentials in motoneurons
(B) An increase in intracellular Ca2+ level
(C) A decrease in intracellular Ca2+ level
(D) An increase in adenosine triphosphate (ATP) level
(E) A decrease in ATP level

30. Degeneration of dopaminergic neurons has been implicated in
(A) schizophrenia
(B) Parkinson disease
(C) myasthenia gravis
(D) curare poisoning

31. Assuming complete dissociation of all solutes, which of the following solutions would be hyperosmotic to 1 mM NaCl?
(A) 1 mM glucose
(B) 1.5 mM glucose
(C) 1 mM CaCl2
(D) 1 mM sucrose
(E) 1 mM KCl

32. A new drug is developed that blocks the transporter for H+ secretion in gastric parietal cells. Which of the following transport processes is being inhibited?
(A) Simple diffusion
(B) Facilitated diffusion
(C) Primary active transport
(D) Cotransport
(E) Countertransport

33. A 56-year-old woman with severe muscle weakness is hospitalized. The only abnormality in her laboratory values is an elevated serum K+ concentration. The elevated serum K+ causes muscle weakness because
(A) the resting membrane potential is hyperpolarized
(B) the K+ equilibrium potential is hyperpolarized
(C) the Na+ equilibrium potential is hyperpolarized
(D) K+ channels are closed by depolarization
(E) K+ channels are opened by depolarization
(F) Na+ channels are closed by depolarization
(G) Na+ channels are opened by depolarization

34. In contraction of gastrointestinal smooth muscle, which of the following events occurs after binding of Ca2+ to calmodulin?
(A) Depolarization of the sarcolemmal membrane
(B) Ca2+-induced Ca2+ release
(C) Increased myosin-light-chain kinase
(D) Increased intracellular Ca2+ concentration
(E) Opening of ligand-gated Ca2+ channels

35. In an experimental preparation of a nerve axon, membrane potential (Em), K+ equilibrium potential, and K+ conductance can be measured. Which combination of values will create the largest outward current flow?

     Em (mV) | EK (mV) | K+ conductance (relative units)
(A)  −90     | −90     | 1
(B)  −100    | −90     | 1
(C)  −50     | −90     | 1
(D)  0       | −90     | 1
(E)  +20     | −90     | 1
(F)  −90     | −90     | 2

Answers and Explanations

1. The answer is A [II A 1, C]. Both types of transport occur down an electrochemical gradient (“downhill”) and do not require metabolic energy. Saturability and inhibition by other sugars are characteristic only of carrier-mediated glucose transport; thus, facilitated diffusion is saturable and inhibited by galactose, whereas simple diffusion is not.

2. The answer is D [IV E 1 a, b, 2 b]. During the upstroke of the action potential, the cell depolarizes or becomes less negative. The depolarization is caused by inward current, which is, by definition, the movement of positive charge into the cell. In nerve and in most types of muscle, this inward current is carried by Na+.

3. The answer is D [IV B]. Because the membrane is permeable only to K+ ions, K+ will diffuse down its concentration gradient from solution A to solution B, leaving some Cl− ions behind in solution A. A diffusion potential will be created, with solution A negative with respect to solution B. Generation of a diffusion potential involves movement of only a few ions and, therefore, does not cause a change in the concentration of the bulk solutions.

4. The answer is B [V B 1–6]. Acetylcholine (ACh) is stored in vesicles and is released when an action potential in the motor nerve opens Ca2+ channels in the presynaptic terminal. ACh diffuses across the synaptic cleft and opens Na+ and K+ channels in the muscle end plate, depolarizing it (but not producing an action potential). Depolarization of the muscle end plate causes local currents in adjacent muscle membrane, depolarizing the membrane to threshold and producing action potentials.

5. The answer is C [VI A, B 1–4; VII B 1–4]. An elevation of intracellular [Ca2+] is common to the mechanism of excitation–contraction coupling in skeletal and smooth muscle. In skeletal muscle, Ca2+ binds to troponin C, initiating the cross-bridge cycle. In smooth muscle, Ca2+ binds to calmodulin. The Ca2+–calmodulin complex activates myosin light chain kinase, which phosphorylates myosin so that shortening can occur. The striated appearance of the sarcomeres and the presence of troponin are characteristic of skeletal, not smooth, muscle. Spontaneous depolarizations and gap junctions are characteristics of unitary smooth muscle but not skeletal muscle.

6. The answer is E [VI B 6]. During repeated stimulation of a muscle fiber, Ca2+ is released from the sarcoplasmic reticulum (SR) more quickly than it can be reaccumulated; therefore, the intracellular [Ca2+] does not return to resting levels as it would after a single twitch. The increased [Ca2+] allows more cross-bridges to form and, therefore, produces increased tension (tetanus). Intracellular Na+ and K+ concentrations do not change during the action potential. Very few Na+ or K+ ions move into or out of the muscle cell, so bulk concentrations are unaffected. Adenosine triphosphate (ATP) levels would, if anything, decrease during tetanus.
7. The answer is D [IV B]. The membrane is permeable to Ca2+ but impermeable to Cl−. Although there is a concentration gradient across the membrane for both ions, only Ca2+ can diffuse down this gradient. Ca2+ will diffuse from solution A to solution B, leaving negative charge behind in solution A. The magnitude of this voltage can be calculated for electrochemical equilibrium with the Nernst equation as follows: ECa2+ = (2.3 RT/zF) × log (CA/CB) = (60 mV/+2) × log (10 mM/1 mM) = 30 mV × log 10 = 30 mV. The sign is determined with an intuitive approach—Ca2+ diffuses from solution A to solution B, so solution A develops a negative voltage (−30 mV). Net diffusion of Ca2+ will cease when this voltage is achieved, that is, when the chemical driving force is exactly balanced by the electrical driving force (not when the Ca2+ concentrations of the solutions become equal).

8. The answer is B [V B 8]. Myasthenia gravis is characterized by a decreased density of acetylcholine (ACh) receptors at the muscle end plate. An acetylcholinesterase (AChE) inhibitor blocks degradation of ACh in the neuromuscular junction, so levels at the muscle end plate remain high, partially compensating for the deficiency of receptors.

9. The answer is D [III B 2 d]. Lysis of the patient’s red blood cells (RBCs) was caused by entry of water and swelling of the cells to the point of rupture. Water would flow into the RBCs if the extracellular fluid became hypotonic (had a lower osmotic pressure) relative to the intracellular fluid. By definition, isotonic solutions do not cause water to flow into or out of cells because the osmotic pressure is the same on both sides of the cell membrane. Hypertonic solutions would cause shrinkage of the RBCs. 150 mM NaCl and 300 mM mannitol are isotonic. 350 mM mannitol and 150 mM CaCl2 are hypertonic. Because the reflection coefficient of urea is <1.0, 300 mM urea is hypotonic.

10. The answer is E [IV E 3 a]. Because the stimulus was delivered during the absolute refractory period, no action potential occurs. The inactivation gates of the Na+ channel were closed by depolarization and remain closed until the membrane is repolarized. As long as the inactivation gates are closed, the Na+ channels cannot be opened to allow for another action potential.

11. The answer is B [II A]. Flux is proportional to the concentration difference across the membrane, J = −PA (CA − CB). Originally, CA − CB = 10 mM − 5 mM = 5 mM. When the urea concentration was doubled in solution A, the concentration difference became 20 mM − 5 mM = 15 mM, or three times the original difference. Therefore, the flux would also triple. Note that the negative sign preceding the equation is ignored if the lower concentration is subtracted from the higher concentration.

12. The answer is D [IV B 3 a, b]. The Nernst equation is used to calculate the equilibrium potential for a single ion. In applying the Nernst equation, we assume that the membrane is freely permeable to that ion alone. ENa+ = (2.3 RT/zF) × log (Ce/Ci) = 60 mV × log (140/14) = 60 mV × log 10 = 60 mV. Notice that the signs were ignored and that the higher concentration was simply placed in the numerator to simplify the log calculation. To determine whether ENa+ is +60 mV or −60 mV, use the intuitive approach—Na+ will diffuse from extracellular to intracellular fluid down its concentration gradient, making the cell interior positive.
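The Nernst calculations in answers 7 and 12 follow the same recipe and differ only in the valence and the concentration ratio. As a quick check, the arithmetic can be scripted; a minimal Python sketch, assuming the simplified slope 2.3 RT/F = 60 mV that the questions themselves stipulate (the function name nernst_mv is ours):

```python
import math

def nernst_mv(z, c_in, c_out, slope_mv=60.0):
    """Equilibrium potential of the 'inside' compartment relative to the
    'outside' one, using the simplified form E = (60 mV / z) * log10(Cout/Cin),
    i.e., assuming 2.3 RT/F = 60 mV as in the questions above."""
    return (slope_mv / z) * math.log10(c_out / c_in)

# Answer 7: Ca2+ (z = +2); solution A (10 mM) plays the role of "inside",
# solution B (1 mM) the role of "outside".
print(nernst_mv(z=2, c_in=10.0, c_out=1.0))    # -30.0 -> solution A is -30 mV

# Answer 12: Na+ (z = +1); [Na+]in = 14 mM, [Na+]out = 140 mM.
print(nernst_mv(z=1, c_in=14.0, c_out=140.0))  # +60.0 -> membrane at +60 mV
```

Keeping the sign inside the formula (rather than ignoring it and reasoning intuitively afterward) gives the same answers the explanations reach.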
13. The answer is E [IV E 2 d]. The hyperpolarizing afterpotential represents the period during which K+ permeability is highest, and the membrane potential is closest to the K+ equilibrium potential. At that point, K+ is closest to electrochemical equilibrium. The force driving K+ movement out of the cell down its chemical gradient is balanced by the force driving K+ into the cell down its electrical gradient.

14. The answer is A [IV E 2 b (1)–(3)]. The upstroke of the nerve action potential is caused by opening of the Na+ channels (once the membrane is depolarized to threshold). When the Na+ channels open, Na+ moves into the cell down its electrochemical gradient, driving the membrane potential toward the Na+ equilibrium potential.

15. The answer is D [IV E 2 c]. The process responsible for repolarization is the opening of K+ channels. The K+ permeability becomes very high and drives the membrane potential toward the K+ equilibrium potential by flow of K+ out of the cell.

16. The answer is D [IV E 4 b]. Myelin insulates the nerve, thereby increasing conduction velocity; action potentials can be generated only at the nodes of Ranvier, where there are breaks in the insulation. Activity of the Na+–K+ pump does not directly affect the formation or conduction of action potentials. Decreasing nerve diameter would increase internal resistance and, therefore, slow the conduction velocity.

17. The answer is D [III A, B 4]. Solution A contains both sucrose and urea at concentrations of 1 mM, whereas solution B contains only sucrose at a concentration of 1 mM. The calculated osmolarity of solution A is 2 mOsm/L, and the calculated osmolarity of solution B is 1 mOsm/L. Therefore, solution A, which has a higher osmolarity, is hyperosmotic with respect to solution B. Actually, solutions A and B have the same effective osmotic pressure (i.e., they are isotonic) because the only “effective” solute is sucrose, which has the same concentration in both solutions. Urea is not an effective solute because its reflection coefficient is zero.

18. The answer is A [II A 1, C 1]. Only two types of transport occur “downhill”—simple and facilitated diffusion. If there is no stereospecificity for the d- or l-isomer, one can conclude that the transport is not carrier mediated and, therefore, must be simple diffusion.

19. The answer is B [II A 4 a–c]. Increasing oil/water partition coefficient increases solubility in a lipid bilayer and therefore increases permeability. Increasing molecular radius and increased membrane thickness decrease permeability. The concentration difference of the solute has no effect on permeability.

20. The answer is A [IV E 1–3]. Blockade of the Na+ channels would prevent action potentials. The upstroke of the action potential depends on the entry of Na+ into the cell through these channels and therefore would also be reduced or abolished. The absolute refractory period would be lengthened because it is based on the availability of the Na+ channels. The hyperpolarizing afterpotential is related to increased K+ permeability. The Na+ equilibrium potential is calculated from the Nernst equation and is the theoretical potential at electrochemical equilibrium (and does not depend on whether the Na+ channels are open or closed).
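Answer 17 turns on the distinction between calculated osmolarity (count every particle) and effective osmotic pressure (count only particles whose reflection coefficient is nonzero). A small Python sketch of that bookkeeping, with hypothetical helper names of our own choosing:

```python
def effective_osmoles(concentration_mm, particles_per_molecule, sigma):
    """Effective osmolarity contribution (mOsm/L) = sigma * g * C.
    Solutes with reflection coefficient sigma = 0 (e.g., urea here)
    contribute nothing to effective osmotic pressure."""
    return sigma * particles_per_molecule * concentration_mm

# Answer 17: solution A = 1 mM sucrose (sigma = 1) + 1 mM urea (sigma = 0);
# solution B = 1 mM sucrose (sigma = 1). Neither solute dissociates (g = 1).
a_calculated = 1 * 1 + 1 * 1                                   # 2 mOsm/L
b_calculated = 1 * 1                                           # 1 mOsm/L
a_effective = effective_osmoles(1, 1, 1.0) + effective_osmoles(1, 1, 0.0)
b_effective = effective_osmoles(1, 1, 1.0)

print(a_calculated, b_calculated)   # 2 1 -> A is hyperosmotic to B
print(a_effective, b_effective)     # 1.0 1.0 -> A and B are isotonic
```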
21. The answer is D [V B 5]. Binding of acetylcholine (ACh) to receptors in the muscle end plate opens channels that allow passage of both Na+ and K+ ions. Na+ ions will flow into the cell down their electrochemical gradient, and K+ ions will flow out of the cell down their electrochemical gradient. The resulting membrane potential will be depolarized to a value that is approximately halfway between their respective equilibrium potentials.

22. The answer is D [V C 2 b]. An inhibitory postsynaptic potential hyperpolarizes the postsynaptic membrane, taking it farther from threshold. Opening Cl− channels would hyperpolarize the postsynaptic membrane by driving the membrane potential toward the Cl− equilibrium potential (about −90 mV). Opening Ca2+ channels would depolarize the postsynaptic membrane by driving it toward the Ca2+ equilibrium potential.

23. The answer is C [II D 2 a]. Inhibition of Na+, K+-adenosine triphosphatase (ATPase) leads to an increase in intracellular Na+ concentration. Increased intracellular Na+ concentration decreases the Na+ gradient across the cell membrane, thereby inhibiting Na+–Ca2+ exchange and causing an increase in intracellular Ca2+ concentration. Increased intracellular Na+ concentration also inhibits Na+–glucose cotransport.

24. The answer is B [VI B 1–4]. The correct sequence is action potential in the muscle membrane; depolarization of the T tubules; release of Ca2+ from the sarcoplasmic reticulum (SR); binding of Ca2+ to troponin C; cross-bridge formation; and splitting of adenosine triphosphate (ATP).

25. The answer is D [II D 2 a, E 1]. In the “usual” Na+ gradient, the [Na+] is higher in extracellular than in intracellular fluid (maintained by the Na+–K+ pump). Two forms of transport are energized by this Na+ gradient—cotransport and countertransport. Because glucose is moving in the same direction as Na+, one can conclude that it is cotransport.

26. The answer is A [VI A 3]. In the mechanism of excitation–contraction coupling, excitation always precedes contraction. Excitation refers to the electrical activation of the muscle cell, which begins with an action potential (depolarization) in the sarcolemmal membrane that spreads to the T tubules. Depolarization of the T tubules then leads to the release of Ca2+ from the nearby sarcoplasmic reticulum (SR), followed by an increase in intracellular Ca2+ concentration, binding of Ca2+ to troponin C, and then contraction.

27. The answer is C [V C 2 a, b]. γ-Aminobutyric acid (GABA) is an inhibitory neurotransmitter. Norepinephrine, glutamate, serotonin, and histamine are excitatory neurotransmitters.

28. The answer is E [II D 2]. All of the processes listed are examples of primary active transport (and therefore use adenosine triphosphate [ATP] directly), except for absorption of glucose by intestinal epithelial cells, which occurs by secondary active transport (i.e., cotransport). Secondary active transport uses the Na+ gradient as an energy source and, therefore, uses ATP indirectly (to maintain the Na+ gradient).

29. The answer is E [VI B]. Rigor is a state of permanent contraction that occurs in skeletal muscle when adenosine triphosphate (ATP) levels are depleted. With no ATP bound, myosin remains attached to actin and the cross-bridge cycle cannot continue. If there were no action potentials in motoneurons, the muscle fibers they innervate would not contract at all, since action potentials are required for release of Ca2+ from the sarcoplasmic reticulum (SR). When intracellular Ca2+ concentration increases, Ca2+ binds troponin C, permitting the cross-bridge cycle to occur. Decreases in intracellular Ca2+ concentration cause relaxation.
30. The answer is B [V C 4 b (3)]. Dopaminergic neurons and D2 receptors are deficient in people with Parkinson disease. Schizophrenia involves increased levels of D2 receptors. Myasthenia gravis and curare poisoning involve the neuromuscular junction, which uses acetylcholine (ACh) as a neurotransmitter.

31. The answer is C [III A]. Osmolarity is the concentration of particles (osmolarity = g × C). When two solutions are compared, that with the higher osmolarity is hyperosmotic. The 1 mM CaCl2 solution (osmolarity = 3 mOsm/L) is hyperosmotic to 1 mM NaCl (osmolarity = 2 mOsm/L). The 1 mM glucose, 1.5 mM glucose, and 1 mM sucrose solutions are hyposmotic to 1 mM NaCl, whereas 1 mM KCl is isosmotic.

32. The answer is C [II D c]. H+ secretion by gastric parietal cells occurs by H+–K+ adenosine triphosphatase (ATPase), a primary active transporter.

33. The answer is F [IV E 2]. Elevated serum K+ concentration causes depolarization of the K+ equilibrium potential and therefore depolarization of the resting membrane potential in skeletal muscle. Sustained depolarization closes the inactivation gates on Na+ channels and prevents the occurrence of action potentials in the muscle.

34. The answer is C [VII B]. The steps that produce contraction in smooth muscle occur in the following order: various mechanisms raise intracellular Ca2+ concentration, including depolarization of the sarcolemmal membrane (which opens voltage-gated Ca2+ channels), opening of ligand-gated Ca2+ channels, and Ca2+-induced Ca2+ release from the SR; increased intracellular Ca2+ concentration; binding of Ca2+ to calmodulin; increased myosin-light-chain kinase; phosphorylation of myosin; binding of myosin to actin; and cross-bridge cycling, which produces contraction.

35. The answer is E [IV C]. Data sets A and F have no difference between membrane potential (Em) and EK and thus have no driving force or current flow; although data set F has the higher K+ conductance, this is irrelevant since the driving force is zero. Data sets C, D, and E all will have outward K+ current, since Em is less negative than EK; of these, data set E will have the largest outward K+ current because it has the highest driving force. Data set B will have inward K+ current since Em is more negative than EK.
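Answer 35 reduces to Ohm's law for the K+ channel: current equals conductance times driving force, IK = gK × (Em − EK), and the current is outward when the result is positive. A short Python sketch of the six data sets (variable names are ours):

```python
def k_current(g_k, em_mv, ek_mv=-90.0):
    """Ohmic K+ current: I = g * (Em - EK). Positive -> outward K+ current."""
    return g_k * (em_mv - ek_mv)

# (K+ conductance, Em) for the six choices in question 35.
choices = {"A": (1, -90), "B": (1, -100), "C": (1, -50),
           "D": (1, 0), "E": (1, 20), "F": (2, -90)}

for label, (g, em) in choices.items():
    print(label, k_current(g, em))
# A: 0, B: -10 (inward), C: 40, D: 90, E: 110, F: 0
# E has the largest outward current, matching the explanation above.
```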
Chapter 2
Neurophysiology

I. Autonomic Nervous System (ANS)
■ is a set of pathways to and from the central nervous system (CNS) that innervates and regulates smooth muscle, cardiac muscle, and glands.
■ is distinct from the somatic nervous system, which innervates skeletal muscle.
■ has three divisions: sympathetic, parasympathetic, and enteric (the enteric division is discussed in Chapter 6).

A. Organization of the ANS (Table 2.1 and Figure 2.1)
1. Synapses between neurons are made in the autonomic ganglia.
a. Parasympathetic ganglia are located in or near the effector organs.
b. Sympathetic ganglia are located in the paravertebral chain.
2. Preganglionic neurons have their cell bodies in the CNS and synapse in autonomic ganglia.
■ Preganglionic neurons of the sympathetic nervous system originate in spinal cord segments T1–L3, or the thoracolumbar region.
■ Preganglionic neurons of the parasympathetic nervous system originate in the nuclei of cranial nerves and in spinal cord segments S2–S4, or the craniosacral region.
3. Postganglionic neurons of both divisions have their cell bodies in the autonomic ganglia and synapse on effector organs (e.g., heart, blood vessels, sweat glands).
4. Adrenal medulla is a specialized ganglion of the sympathetic nervous system.
■ Preganglionic fibers synapse directly on chromaffin cells in the adrenal medulla.
■ The chromaffin cells secrete epinephrine (80%) and norepinephrine (20%) into the circulation (see Figure 2.1).
■ Pheochromocytoma is a tumor of the adrenal medulla that secretes excessive amounts of catecholamines and is associated with increased excretion of 3-methoxy-4-hydroxymandelic acid (VMA).

B. Neurotransmitters of the ANS
■ Adrenergic neurons release norepinephrine as the neurotransmitter.
■ Cholinergic neurons, whether in the sympathetic or parasympathetic nervous system, release acetylcholine (ACh) as the neurotransmitter.
■ Nonadrenergic, noncholinergic neurons include some postganglionic parasympathetic neurons of the gastrointestinal tract, which release substance P, vasoactive intestinal peptide (VIP), or nitric oxide (NO).

Table 2.1 Organization of the Autonomic Nervous System

Characteristic | Sympathetic | Parasympathetic | Somatic
Origin of preganglionic nerve | Nuclei of spinal cord segments T1–T12; L1–L3 (thoracolumbar) | Nuclei of cranial nerves III, VII, IX, and X; spinal cord segments S2–S4 (craniosacral) | —
Length of preganglionic nerve axon | Short | Long | —
Neurotransmitter in ganglion | ACh | ACh | —
Receptor type in ganglion | Nicotinic | Nicotinic | —
Length of postganglionic nerve axon | Long | Short | —
Effector organs | Smooth and cardiac muscle; glands | Smooth and cardiac muscle; glands | Skeletal muscle
Neurotransmitter in effector organs | Norepinephrine (except sweat glands, which use ACh) | ACh | ACh (synapse is neuromuscular junction)
Receptor types in effector organs | α1, α2, β1, and β2 | Muscarinic | Nicotinic

Somatic nervous system has been included for comparison. ACh = acetylcholine.

[Figure 2.1 Organization of the autonomic nervous system: sympathetic, parasympathetic, adrenal, and somatic pathways from the CNS to effector organs. Preganglionic neurons release ACh onto nicotinic (NN) receptors in ganglia; sympathetic postganglionic neurons release norepinephrine onto α1, α2, β1, and β2 receptors (except sweat glands, which use ACh); parasympathetic postganglionic neurons release ACh onto muscarinic receptors; the adrenal gland secretes epinephrine (80%) and norepinephrine (20%); somatic motoneurons release ACh onto nicotinic (NM) receptors of skeletal muscle. ACh = acetylcholine; CNS = central nervous system.]

C. Receptor types in the ANS (Table 2.2)
1. Adrenergic receptors (adrenoreceptors)
a. α1 Receptors
■ are located on vascular smooth muscle of the skin and splanchnic regions, the gastrointestinal (GI) and bladder sphincters, and the radial muscle of the iris.
■ produce excitation (e.g., contraction or constriction).
■ are equally sensitive to norepinephrine and epinephrine. However, only norepinephrine released from adrenergic neurons is present in high enough concentrations to activate α1 receptors.
■ Mechanism of action: Gq protein, stimulation of phospholipase C, and increase in inositol 1,4,5-triphosphate (IP3) and intracellular [Ca2+].
b. α2 Receptors
■ are located on sympathetic postganglionic nerve terminals (autoreceptors), platelets, fat cells, and the walls of the GI tract (heteroreceptors).
■ often produce inhibition (e.g., relaxation or dilation).
■ Mechanism of action: Gi protein, inhibition of adenylate cyclase, and decrease in cyclic adenosine monophosphate (cAMP).
c. β1 Receptors
■ are located in the sinoatrial (SA) node, atrioventricular (AV) node, and ventricular muscle of the heart.
■ produce excitation (e.g., increased heart rate, increased conduction velocity, increased contractility).
■ are sensitive to both norepinephrine and epinephrine, and are more sensitive than the α1 receptors.
■ Mechanism of action: Gs protein, stimulation of adenylate cyclase, and increase in cAMP.
d. β2 Receptors
■ are located on vascular smooth muscle of skeletal muscle, bronchial smooth muscle, and in the walls of the GI tract and bladder.
■ produce relaxation (e.g., dilation of vascular smooth muscle, dilation of bronchioles, relaxation of the bladder wall).
■ are more sensitive to epinephrine than to norepinephrine.
■ are more sensitive to epinephrine than the α1 receptors.
■ Mechanism of action: same as for β1 receptors.

Table 2.2 Signaling Pathways and Mechanisms for Autonomic Receptors

Receptor | Location | G Protein | Mechanism
Adrenergic
α1 | Smooth muscle | Gq | ↑ IP3/Ca2+
α2 | Gastrointestinal tract | Gi | ↓ cAMP
β1 | Heart | Gs | ↑ cAMP
β2 | Smooth muscle | Gs | ↑ cAMP
Cholinergic
NM (N1) | Skeletal muscle | — | Opening of Na+/K+ channels
NN (N2) | Autonomic ganglia | — | Opening of Na+/K+ channels
M1 | CNS | Gq | ↑ IP3/Ca2+
M2 | Heart | Gi | ↓ cAMP
M3 | Glands, smooth muscle | Gq | ↑ IP3/Ca2+

IP3 = inositol 1,4,5-triphosphate; cAMP = cyclic adenosine monophosphate.

2. Cholinergic receptors (cholinoreceptors)
a. Nicotinic receptors
■ are located in the autonomic ganglia (NN) of the sympathetic and parasympathetic nervous systems, at the neuromuscular junction (NM), and in the adrenal medulla (NN). The receptors at these locations are similar, but not identical.
■ are activated by ACh or nicotine.
■ produce excitation.
■ are blocked by ganglionic blockers (e.g., hexamethonium) in the autonomic ganglia, but not at the neuromuscular junction.
■ Mechanism of action: ACh binds to the α subunits of the nicotinic ACh receptor. The nicotinic ACh receptors are also ion channels for Na+ and K+.
b. Muscarinic receptors
■ are located in the heart (M2), smooth muscle (M3), and glands (M3).
■ are inhibitory in the heart (e.g., decreased heart rate, decreased conduction velocity in the AV node).
■ are excitatory in smooth muscle and glands (e.g., increased GI motility, increased secretion).
■ are activated by ACh and muscarine.
■ are blocked by atropine.
■ Mechanism of action:
(1) Heart SA node: Gi protein, inhibition of adenylate cyclase, which leads to opening of K+ channels, slowing of the rate of spontaneous phase 4 depolarization, and decreased heart rate.
(2) Smooth muscle and glands: Gq protein, stimulation of phospholipase C, and increase in IP3 and intracellular [Ca2+].

3. Drugs that act on the ANS (Table 2.3)

Table 2.3 Prototypes of Drugs that Affect Autonomic Activity

Type of Receptor | Agonists | Antagonists
Adrenergic
α1 | Norepinephrine, phenylephrine | Phenoxybenzamine, phentolamine, prazosin
α2 | Clonidine | Yohimbine
β1 | Norepinephrine, isoproterenol, dobutamine | Propranolol, metoprolol
β2 | Isoproterenol, albuterol | Propranolol, butoxamine
Cholinergic
Nicotinic | ACh, nicotine, carbachol | Curare (neuromuscular junction N1 receptors); hexamethonium (ganglionic N2 receptors)
Muscarinic | ACh, muscarine, carbachol | Atropine

ACh = acetylcholine.
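Table 2.2 is essentially a lookup structure (receptor → G protein → second messenger), which makes it easy to encode for self-testing. A hypothetical Python mapping paraphrasing the table (the dictionary and function names are ours):

```python
# Receptor -> (G protein or None for ligand-gated channels, downstream effect),
# paraphrasing Table 2.2.
AUTONOMIC_RECEPTORS = {
    "alpha1": ("Gq", "increase IP3/Ca2+"),
    "alpha2": ("Gi", "decrease cAMP"),
    "beta1":  ("Gs", "increase cAMP"),
    "beta2":  ("Gs", "increase cAMP"),
    "M1":     ("Gq", "increase IP3/Ca2+"),
    "M2":     ("Gi", "decrease cAMP"),
    "M3":     ("Gq", "increase IP3/Ca2+"),
    "NM":     (None, "open Na+/K+ channels"),  # ionotropic, no G protein
    "NN":     (None, "open Na+/K+ channels"),  # ionotropic, no G protein
}

def signaling(receptor: str) -> str:
    g_protein, effect = AUTONOMIC_RECEPTORS[receptor]
    coupling = g_protein if g_protein else "ionotropic (no G protein)"
    return f"{receptor}: {coupling} -> {effect}"

print(signaling("beta1"))   # beta1: Gs -> increase cAMP
print(signaling("NM"))      # NM: ionotropic (no G protein) -> open Na+/K+ channels
```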
D. Effects of the ANS on various organ systems (Table 2.4)

Table 2.4 Effect of the Autonomic Nervous System on Organ Systems

Organ | Sympathetic Action | Sympathetic Receptor | Parasympathetic Action | Parasympathetic Receptor
Heart | ↑ heart rate; ↑ contractility; ↑ AV node conduction | β1 (all) | ↓ heart rate; ↓ contractility (atria); ↓ AV node conduction | M2 (all)
Vascular smooth muscle | Constricts blood vessels in skin, splanchnic regions; dilates blood vessels in skeletal muscle | α1; β2 | — | —
Gastrointestinal tract | ↓ motility; constricts sphincters | α2, β2; α1 | ↑ motility; relaxes sphincters | M3; M3
Bronchioles | Dilates bronchiolar smooth muscle | β2 | Constricts bronchiolar smooth muscle | M3
Male sex organs | Ejaculation | α | Erection | M
Bladder | Relaxes bladder wall; constricts sphincter | β2; α1 | Contracts bladder wall; relaxes sphincter | M3; M3
Sweat glands | ↑ sweating | M (sympathetic cholinergic) | — | —
Eye: radial muscle, iris | Dilates pupil (mydriasis) | α1 | — | —
Eye: circular sphincter muscle, iris | — | — | Constricts pupil (miosis) | M
Eye: ciliary muscle | Dilates (far vision) | β | Contracts (near vision) | M
Kidney | ↑ renin secretion | β1 | — | —
Fat cells | ↑ lipolysis | β1 | — | —

AV = atrioventricular; M = muscarinic.

E. Autonomic centers—brain stem and hypothalamus
1. Medulla
■ Vasomotor center
■ Respiratory center
■ Swallowing, coughing, and vomiting centers
2. Pons
■ Pneumotaxic center
3. Midbrain
■ Micturition center
4. Hypothalamus
■ Temperature regulation center
■ Thirst and food intake regulatory centers

II. Sensory Systems

A. Sensory receptors—general
■ are specialized epithelial cells or neurons that transduce environmental signals into neural signals.
■ The environmental signals that can be detected include mechanical force, light, sound, chemicals, and temperature.
1. Types of sensory transducers
a. Mechanoreceptors
■ Pacinian corpuscles
■ Joint receptors
■ Stretch receptors in muscle
■ Hair cells in auditory and vestibular systems
■ Baroreceptors in carotid sinus
b. Photoreceptors
■ Rods and cones of the retina
c. Chemoreceptors
■ Olfactory receptors
■ Taste receptors
■ Osmoreceptors
■ Carotid body O2 receptors
d. Extremes of temperature and pain
■ Nociceptors
2. Fiber types and conduction velocity (Table 2.5)

Table 2.5 Characteristics of Nerve Fiber Types

General Fiber Type and Example | Sensory Fiber Type and Example | Diameter | Conduction Velocity
A-alpha: large α-motoneurons | Ia: muscle spindle afferents | Largest | Fastest
A-alpha | Ib: Golgi tendon organs | Largest | Fastest
A-beta: touch, pressure | II: secondary afferents of muscle spindles; touch and pressure | Medium | Medium
A-gamma: γ-motoneurons to muscle spindles (intrafusal fibers) | — | Medium | Medium
A-delta: touch, pressure, temperature, and pain | III: touch, pressure, fast pain, and temperature | Small | Medium
B: preganglionic autonomic fibers | — | Small | Medium
C: slow pain; postganglionic autonomic fibers | IV: pain and temperature (unmyelinated) | Smallest | Slowest

3. Receptive field
■ is an area of the body that, when stimulated, changes the firing rate of a sensory neuron. If the firing rate of the sensory neuron is increased, the receptive field is excitatory. If the firing rate of the sensory neuron is decreased, the receptive field is inhibitory.
4. Steps in sensory transduction
a. Stimulus arrives at the sensory receptor. The stimulus may be a photon of light on the retina, a molecule of NaCl on the tongue, a depression of the skin, and so forth.
b. Ion channels are opened in the sensory receptor, allowing current to flow.
■ Usually, the current is inward, which produces depolarization of the receptor.
■ The exception is in the photoreceptor, where light causes decreased inward current and hyperpolarization.
c. The change in membrane potential produced by the stimulus is the receptor potential, or generator potential (Figure 2.2).
■ If the receptor potential is depolarizing, it brings the membrane potential closer to threshold. If the receptor potential is large enough, the membrane potential will exceed threshold, and an action potential will fire in the sensory neuron.
■ Receptor potentials are graded in size depending on the size of the stimulus.
5. Adaptation of sensory receptors
a. Slowly adapting, or tonic, receptors (muscle spindle; pressure; slow pain)
■ respond repetitively to a prolonged stimulus.
■ detect a steady stimulus.
b. Rapidly adapting, or phasic, receptors (pacinian corpuscle; light touch)
■ show a decline in action potential frequency with time in response to a constant stimulus.
■ primarily detect onset and offset of a stimulus.
6. Sensory pathways from the sensory receptor to the cerebral cortex
a. Sensory receptors
■ are activated by environmental stimuli.
■ may be specialized epithelial cells (e.g., photoreceptors, taste receptors, auditory hair cells).
■ may be primary afferent neurons (e.g., olfactory chemoreceptors).
■ transduce the stimulus into electrical energy (i.e., receptor potential).
b. First-order neurons
■ are the primary afferent neurons that receive the transduced signal and send the information to the CNS. Cell bodies of the primary afferent neurons are in dorsal root or spinal cord ganglia.
c. Second-order neurons
■ are located in the spinal cord or brain stem.
■ receive information from one or more primary afferent neurons in relay nuclei and transmit it to the thalamus.
■ Axons of second-order neurons may cross the midline in a relay nucleus in the spinal cord before they ascend to the thalamus. Therefore, sensory information originating on one side of the body ascends to the contralateral thalamus.
d. Third-order neurons
■ are located in the relay nuclei of the thalamus. From there, encoded sensory information ascends to the cerebral cortex.
e. Fourth-order neurons
■ are located in the appropriate sensory area of the cerebral cortex. The information received results in a conscious perception of the stimulus.

[Figure 2.2 Receptor (generator) potential and how it may lead to an action potential: a receptor potential that exceeds threshold triggers an action potential.]
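The tonic/phasic distinction in section II A 5 is essentially a statement about what each receptor type reports: tonic receptors track the stimulus itself, whereas phasic receptors track its rate of change and therefore signal onset and offset. A toy Python sketch of that contrast (the step stimulus and the response rules are idealized, not physiologic data):

```python
def phasic_response(stimulus):
    """Rapidly adapting (phasic) receptor: firing follows the CHANGE in the
    stimulus, so it responds at onset and offset only."""
    return [abs(b - a) for a, b in zip(stimulus, stimulus[1:])]

def tonic_response(stimulus):
    """Slowly adapting (tonic) receptor: firing follows the stimulus itself,
    so it responds throughout a prolonged stimulus."""
    return list(stimulus[1:])

step = [0, 0, 1, 1, 1, 1, 0, 0]     # a sustained press, then release
print(phasic_response(step))         # [0, 1, 0, 0, 0, 1, 0] -> onset and offset
print(tonic_response(step))          # [0, 1, 1, 1, 1, 0, 0] -> sustained firing
```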
B. Somatosensory system
■ includes the sensations of touch, movement, temperature, and pain.
1. Pathways in the somatosensory system
a. Dorsal column system
■ processes sensations of fine touch, pressure, two-point discrimination, vibration, and proprioception.
■ consists primarily of group II fibers.
■ Course: primary afferent neurons have cell bodies in the dorsal root. Their axons ascend ipsilaterally to the nucleus gracilis and nucleus cuneatus of the medulla. From the medulla, the second-order neurons cross the midline and ascend to the contralateral thalamus, where they synapse on third-order neurons. Third-order neurons ascend to the somatosensory cortex, where they synapse on fourth-order neurons.
b. Anterolateral system
■ processes sensations of temperature, pain, and light touch.
■ consists primarily of group III and IV fibers, which enter the spinal cord and terminate in the dorsal horn.
■ Course: second-order neurons cross the midline to the anterolateral quadrant of the spinal cord and ascend to the contralateral thalamus, where they synapse on third-order neurons. Third-order neurons ascend to the somatosensory cortex, where they synapse on fourth-order neurons.
2. Mechanoreceptors for touch and pressure (Table 2.6)

Table 2.6 Types of Mechanoreceptors

Type of Mechanoreceptor | Description | Sensation Encoded | Adaptation
Pacinian corpuscle | Onion-like structures in the subcutaneous skin (surrounding unmyelinated nerve endings) | Vibration; tapping | Rapidly adapting
Meissner corpuscle | Present in nonhairy skin | Velocity | Rapidly adapting
Ruffini corpuscle | Encapsulated | Pressure | Slowly adapting
Merkel disk | Transducer is on epithelial cells | Location | Slowly adapting

3. Thalamus
■ Information from different parts of the body is arranged somatotopically.
■ Destruction of the thalamic nuclei results in loss of sensation on the contralateral side of the body.
4. Somatosensory cortex—the sensory homunculus
■ The major somatosensory areas of the cerebral cortex are SI and SII.
■ SI has a somatotopic representation similar to that in the thalamus.
■ This “map” of the body is called the sensory homunculus.
■ The largest areas represent the face, hands, and fingers, where precise localization is most important.
5. Pain
■ is associated with the detection and perception of noxious stimuli (nociception).
■ The receptors for pain are free nerve endings in the skin, muscle, and viscera.
■ Neurotransmitters for nociceptors include substance P. Inhibition of the release of substance P is the basis of pain relief by opioids.
a. Fibers for fast pain and slow pain
■ Fast pain is carried by group III fibers. It has a rapid onset and offset, and is localized.
■ Slow pain is carried by C fibers. It is characterized as aching, burning, or throbbing that is poorly localized.
b. Referred pain
■ Pain of visceral origin is referred to sites on the skin and follows the dermatome rule. These sites are innervated by nerves that arise from the same segment of the spinal cord.
■ For example, ischemic heart pain is referred to the chest and shoulder.

C. Vision
1. Optics
a. Refractive power of a lens
■ is measured in diopters.
■ equals the reciprocal of the focal distance in meters.
■ Example: a lens with a focal distance of 1/10 m (10 cm) has a refractive power of 10 diopters.
b. Refractive errors
(1) Emmetropia—normal. Light focuses on the retina.
(2) Hyperopia—farsighted. Light focuses behind the retina and is corrected with a convex lens.
(3) Myopia—nearsighted. Light focuses in front of the retina and is corrected with a biconcave lens.
(4) Astigmatism. Curvature of the lens is not uniform and is corrected with a cylindric lens.
(5) Presbyopia is a result of loss of the accommodation power of the lens that occurs with aging.
The near point (closest point on which one can focus by accommodation of the lens) moves farther from the eye and is corrected with a convex lens.
2. Layers of the retina (Figure 2.3)
a. Pigment epithelial cells
■ absorb stray light and prevent scatter of light.
■ regenerate 11-cis retinal from all-trans retinal.

[Figure 2.3 Cellular layers of the retina, with the direction of light indicated: internal limiting membrane, optic nerve layer, ganglion cell layer, inner plexiform layer, inner nuclear layer (bipolar, horizontal, and amacrine cells), outer plexiform layer, outer nuclear layer, external limiting membrane, photoreceptor layer, and pigment cell layer. (Reprinted with permission from Bullock J, Boyle J III, Wang MB. Physiology. 4th ed. Baltimore: Lippincott Williams & Wilkins, 2001:77.)]

b. Receptor cells are rods and cones (Table 2.7).
■ Rods and cones are not present on the optic disk; the result is a blind spot.

Table 2.7 Functions of Rods and Cones

Function | Rods | Cones
Sensitivity to light | Sensitive to low-intensity light; night vision | Sensitive to high-intensity light; day vision
Acuity | Lower visual acuity; not present in fovea | Higher visual acuity; present in fovea
Dark adaptation | Rods adapt later | Cones adapt first
Color vision | No | Yes

c. Bipolar cells. The receptor cells (i.e., rods and cones) synapse on bipolar cells, which synapse on the ganglion cells.
(1) Few cones synapse on a single bipolar cell, which synapses on a single ganglion cell. This arrangement is the basis for the high acuity and low sensitivity of the cones. In the fovea, where acuity is highest, the ratio of cones to bipolar cells is 1:1.
(2) Many rods synapse on a single bipolar cell. As a result, there is less acuity in the rods than in the cones. There is also greater sensitivity in the rods because light striking any one of the rods will activate the bipolar cell.
d. Horizontal and amacrine cells form local circuits with the bipolar cells.
e. Ganglion cells are the output cells of the retina.
■ Axons of ganglion cells form the optic nerve.
3. Optic pathways and lesions (Figure 2.4)
■ Axons of the ganglion cells form the optic nerve and optic tract, ending in the lateral geniculate body of the thalamus.
■ The fibers from each nasal hemiretina cross at the optic chiasm, whereas the fibers from each temporal hemiretina remain ipsilateral. Therefore, fibers from the left nasal hemiretina and fibers from the right temporal hemiretina form the right optic tract and synapse on the right lateral geniculate body.
■ Fibers from the lateral geniculate body form the geniculocalcarine tract and pass to the occipital lobe of the cortex.
a. Cutting the optic nerve causes blindness in the ipsilateral eye.
b. Cutting the optic chiasm causes heteronymous bitemporal hemianopia.
c. Cutting the optic tract causes homonymous contralateral hemianopia.
d. Cutting the geniculocalcarine tract causes homonymous hemianopia with macular sparing.

[Figure 2.4 Effects of lesions at various levels of the optic pathway (a, optic nerve; b, optic chiasm; c, optic tract; d, geniculocalcarine tract), showing the temporal and nasal fields of the left and right eyes, the lateral geniculate body, the pretectal region, and the occipital cortex. (Modified with permission from Ganong WF. Review of Medical Physiology. 20th ed. New York: McGraw-Hill, 2001:147.)]

4. Steps in photoreception in the rods (Figure 2.5)
■ The photosensitive element is rhodopsin, which is composed of opsin (a protein belonging to the superfamily of G-protein–coupled receptors) and retinal (an aldehyde of vitamin A).
a. Light on the retina converts 11-cis retinal to all-trans retinal, a process called photoisomerization. A series of intermediates is then formed, one of which is metarhodopsin II.
■ Vitamin A is necessary for the regeneration of 11-cis retinal. Deficiency of vitamin A causes night blindness.
b. Metarhodopsin II activates a G protein called transducin (Gt), which in turn activates a phosphodiesterase.
c. Phosphodiesterase catalyzes the conversion of cyclic guanosine monophosphate (cGMP) to 5′-GMP, and cGMP levels decrease.
d. Decreased levels of cGMP cause closure of Na+ channels, decreased inward Na+ current, and, as a result, hyperpolarization of the photoreceptor cell membrane. Increasing light intensity increases the degree of hyperpolarization.
e. When the photoreceptor is hyperpolarized, there is decreased release of glutamate, an excitatory neurotransmitter. There are two types of glutamate receptors on bipolar and horizontal cells, which determine whether the cell is excited or inhibited.
(1) Ionotropic glutamate receptors are excitatory. If decreased release of glutamate from the photoreceptors interacts with ionotropic receptors, there will be hyperpolarization (inhibition) because there is decreased excitation.
(2) Metabotropic glutamate receptors are inhibitory. If decreased release of glutamate from photoreceptors interacts with metabotropic receptors, there will be depolarization (excitation) because there is decreased inhibition.
5. Receptive visual fields
a. Receptive fields of the ganglion cells and lateral geniculate cells
(1) Each bipolar cell receives input from many receptor cells. In turn, each ganglion cell receives input from many bipolar cells. The receptor cells connected to a ganglion cell form the center of its receptive field. The receptor cells connected to ganglion cells via horizontal cells form the surround of its receptive field. (Remember that the response of bipolar and horizontal cells to light depends on whether that cell has ionotropic or metabotropic receptors.)
(2) On-center, off-surround is one pattern of a ganglion cell receptive field. Light striking the center of the receptive field depolarizes (excites) the ganglion cell, whereas light striking the surround of the receptive field hyperpolarizes (inhibits) the ganglion cell. Off-center, on-surround is another possible pattern.
(3) Lateral geniculate cells of the thalamus retain the on-center, off-surround or off-center, on-surround pattern that is transmitted from the ganglion cell.
b. Receptive fields of the visual cortex
■ Neurons in the visual cortex detect shape and orientation of figures.
■ Three cortical cell types are involved:
(1) Simple cells have center-surround, on-off patterns, but are elongated rods rather than concentric circles. They respond best to bars of light that have the correct position and orientation.
(2) Complex cells respond best to moving bars or edges of light with the correct orientation.
(3) Hypercomplex cells respond best to lines with particular length and to curves and angles.
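The sign conventions in steps 4 and 5 are easy to scramble: light hyperpolarizes the photoreceptor, and whether a downstream bipolar cell is excited or inhibited depends on whether its glutamate receptor is ionotropic or metabotropic. A small Python sketch of that sign logic, assuming simplified two-state responses (function names are ours):

```python
def photoreceptor_glutamate(light_on: bool) -> str:
    """Light lowers cGMP, closes Na+ channels, and hyperpolarizes the rod,
    which DECREASES glutamate release; darkness does the opposite."""
    return "low" if light_on else "high"

def bipolar_cell_response(light_on: bool, receptor: str) -> str:
    glutamate = photoreceptor_glutamate(light_on)
    if receptor == "ionotropic":    # excitatory: less glutamate -> less excitation
        return "hyperpolarized (inhibited)" if glutamate == "low" else "depolarized (excited)"
    if receptor == "metabotropic":  # inhibitory: less glutamate -> less inhibition
        return "depolarized (excited)" if glutamate == "low" else "hyperpolarized (inhibited)"
    raise ValueError(f"unknown receptor type: {receptor}")

print(bipolar_cell_response(light_on=True, receptor="metabotropic"))  # excited
print(bipolar_cell_response(light_on=True, receptor="ionotropic"))    # inhibited
```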
[Figure 2.5 Steps in photoreception in rods: light converts 11-cis retinal to all-trans retinal, forming metarhodopsin II, which activates the G protein transducin; transducin activates phosphodiesterase; cGMP falls; Na+ channels close; the cell hyperpolarizes; and glutamate release decreases. cGMP = cyclic guanosine monophosphate.]

D. Audition
1. Sound waves
■ Frequency is measured in hertz (Hz).
■ Intensity is measured in decibels (dB), a log scale:

dB = 20 log10 (P/P0)

where dB = decibel, P = sound pressure being measured, and P0 = reference pressure measured at the threshold frequency.

2. Structure of the ear
a. Outer ear
■ directs the sound waves into the auditory canal.
b. Middle ear
■ is air filled.
■ contains the tympanic membrane and the auditory ossicles (malleus, incus, and stapes). The stapes inserts into the oval window, a membrane between the middle ear and the inner ear.
■ Sound waves cause the tympanic membrane to vibrate. In turn, the ossicles vibrate, pushing the stapes into the oval window and displacing fluid in the inner ear (see II D 2 c).
■ Sound is amplified by the lever action of the ossicles and the concentration of sound waves from the large tympanic membrane onto the smaller oval window.
c. Inner ear (Figure 2.6)
■ is fluid filled.
■ consists of a bony labyrinth (semicircular canals, cochlea, and vestibule) and a series of ducts called the membranous labyrinth. The fluid outside the ducts is perilymph; the fluid inside the ducts is endolymph.

[Figure 2.6 Organ of Corti and auditory transduction, showing the scala vestibuli, scala media, and scala tympani, the tectorial and basilar membranes, and the spiral ganglia.]

(1) Structure of the cochlea: three tubular canals
(a) The scala vestibuli and scala tympani contain perilymph, which has a high [Na+].
(b) The scala media contains endolymph, which has a high [K+].
■ The scala media is bordered by the basilar membrane, which is the site of the organ of Corti.
(2) Location and structure of the organ of Corti
■ The organ of Corti is located on the basilar membrane.
■ It contains the receptor cells (inner and outer hair cells) for auditory stimuli. Cilia protrude from the hair cells and are embedded in the tectorial membrane.
■ Inner hair cells are arranged in single rows and are few in number.
■ Outer hair cells are arranged in parallel rows and are greater in number than the inner hair cells.
■ The spiral ganglion contains the cell bodies of the auditory nerve [cranial nerve (CN) VIII], which synapse on the hair cells.
3. Steps in auditory transduction by the organ of Corti (see Figure 2.6)
■ The cell bodies of hair cells contact the basilar membrane. The cilia of hair cells are embedded in the tectorial membrane.
a. Sound waves cause vibration of the organ of Corti. Because the basilar membrane is more elastic than the tectorial membrane, vibration of the basilar membrane causes the hair cells to bend by a shearing force as they push against the tectorial membrane.
b. Bending of the cilia causes changes in K+ conductance of the hair cell membrane. Bending in one direction causes depolarization; bending in the other direction causes hyperpolarization. The oscillating potential that results is the cochlear microphonic potential.
c. The oscillating potential of the hair cells causes intermittent firing of the cochlear nerves.
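The decibel definition in section D 1 is worth trying with numbers: because the scale is 20 log10(P/P0), every 10-fold pressure ratio adds 20 dB, and a doubling of pressure adds about 6 dB. A minimal Python sketch:

```python
import math

def decibels(p, p0):
    """Sound intensity in dB = 20 * log10(P / P0), where P is the measured
    sound pressure and P0 the reference (threshold) pressure."""
    return 20 * math.log10(p / p0)

print(decibels(10, 1))    # 20.0 dB: a 10-fold pressure ratio is +20 dB
print(decibels(100, 1))   # 40.0 dB: each further 10x adds another 20 dB
print(decibels(2, 1))     # ~6.02 dB: doubling the pressure adds about 6 dB
```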
4. How sound is encoded
■ The frequency that activates a particular hair cell depends on the location of the hair cell along the basilar membrane.
a. The base of the basilar membrane (near the oval and round windows) is narrow and stiff. It responds best to high frequencies.
b. The apex of the basilar membrane (near the helicotrema) is wide and compliant. It responds best to low frequencies.
5. Central auditory pathways
■ Fibers ascend through the lateral lemniscus to the inferior colliculus to the medial geniculate nucleus of the thalamus to the auditory cortex.
■ Fibers may be crossed or uncrossed. As a result, a mixture of ascending auditory fibers represents both ears at all higher levels. Therefore, lesions of the cochlea of one ear cause unilateral deafness, but more central unilateral lesions do not.
■ There is tonotopic representation of frequencies at all levels of the central auditory pathway.
■ Discrimination of complex features (e.g., recognizing a patterned sequence) is a property of the cerebral cortex.

E. Vestibular system
■ detects angular and linear acceleration of the head.
■ Reflex adjustments of the head, eyes, and postural muscles provide a stable visual image and steady posture.
1. Structure of the vestibular organ
a. It is a membranous labyrinth consisting of three perpendicular semicircular canals, a utricle, and a saccule. The semicircular canals detect angular acceleration or rotation. The utricle and saccule detect linear acceleration.
b. The canals are filled with endolymph and are bathed in perilymph.
c. The receptors are hair cells located at the end of each semicircular canal. Cilia on the hair cells are embedded in a gelatinous structure called the cupula. A single long cilium is called the kinocilium; smaller cilia are called stereocilia (Figure 2.7).
2. Steps in vestibular transduction—angular acceleration (see Figure 2.7)
a. During counterclockwise (left) rotation of the head, the horizontal semicircular canal and its attached cupula also rotate to the left. Initially, the cupula moves more quickly than the endolymph fluid. Thus, the cupula is dragged through the endolymph; as a result, the cilia on the hair cells bend.
b. If the stereocilia are bent toward the kinocilium, the hair cell depolarizes (excitation). If the stereocilia are bent away from the kinocilium, the hair cell hyperpolarizes (inhibition). Therefore, during the initial counterclockwise (left) rotation, the left horizontal canal is excited and the right horizontal canal is inhibited.
c. After several seconds, the endolymph “catches up” with the movement of the head and the cupula. The cilia return to their upright position and are no longer depolarized or hyperpolarized.
d. When the head suddenly stops moving, the endolymph continues to move counterclockwise (left), dragging the cilia in the opposite direction. Therefore, if the hair cell was depolarized with the initial rotation, it now will hyperpolarize. If it was hyperpolarized initially, it now will depolarize. Therefore, when the head stops moving, the left horizontal canal will be inhibited and the right horizontal canal will be excited.
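Steps a–d amount to a sign rule: at the onset of rotation, the horizontal canal on the side toward which the head turns is excited and the opposite canal is inhibited; when rotation stops, the lagging endolymph reverses both signs. A toy Python sketch of this bookkeeping (parameter names are ours; this is a mnemonic, not a biophysical model):

```python
def horizontal_canal_response(head_rotation, canal_side, phase="start"):
    """Sign rule for the horizontal semicircular canals (section II E 2):
    at the START of rotation the canal on the side of the turn is excited;
    when rotation STOPS, the endolymph lag reverses both signs."""
    opposite = {"left": "right", "right": "left"}
    excited_side = head_rotation if phase == "start" else opposite[head_rotation]
    if canal_side == excited_side:
        return "depolarized (excited)"
    return "hyperpolarized (inhibited)"

print(horizontal_canal_response("left", "left"))           # excited at onset
print(horizontal_canal_response("left", "right"))          # inhibited at onset
print(horizontal_canal_response("left", "left", "stop"))   # inhibited after stopping
```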
[Figure 2.7 The semicircular canals and vestibular transduction during counterclockwise rotation: the kinocilium, stereocilia, hair cell, and afferent vestibular fiber are shown for the left and right horizontal semicircular canals and cupulae, with the direction of head rotation and the initial direction of endolymph movement indicated; bending of stereocilia toward the kinocilium depolarizes (excites) the hair cell, and bending away hyperpolarizes (inhibits) it.]

3. Vestibular–ocular reflexes
a. Nystagmus
■ An initial rotation of the head causes the eyes to move slowly in the opposite direction to maintain visual fixation. When the limit of eye movement is reached, the eyes rapidly snap back (nystagmus), then move slowly again.
■ The direction of the nystagmus is defined as the direction of the fast (rapid eye) movement. Therefore, the nystagmus occurs in the same direction as the head rotation.
b. Postrotatory nystagmus
■ occurs in the opposite direction of the head rotation.

F. Olfaction
1. Olfactory pathway
a. Receptor cells
■ are located in the olfactory epithelium.
■ are true neurons that conduct action potentials into the CNS.
■ Basal cells of the olfactory epithelium are undifferentiated stem cells that continuously turn over and replace the olfactory receptor cells (neurons). These are the only neurons in the adult human that replace themselves.
b. CN I (olfactory)
■ carries information from the olfactory receptor cells to the olfactory bulb.
■ The axons of the olfactory nerves are unmyelinated C fibers and are among the smallest and slowest in the nervous system.
■ The olfactory epithelium is also innervated by CN V (trigeminal), which detects noxious or painful stimuli, such as ammonia.
■ The olfactory nerves pass through the cribriform plate on their way to the olfactory bulb. Fractures of the cribriform plate sever input to the olfactory bulb and reduce (hyposmia) or eliminate (anosmia) the sense of smell. The response to ammonia, however, will be intact after fracture of the cribriform plate because this response is carried on CN V.
c. Mitral cells in the olfactory bulb
■ are second-order neurons.
■ Output of the mitral cells forms the olfactory tract, which projects to the prepiriform cortex.
2. Steps in transduction in the olfactory receptor neurons
a. Odorant molecules bind to specific olfactory receptor proteins located on cilia of the olfactory receptor cells.
b. When the receptors are activated, they activate G proteins (Golf), which in turn activate adenylate cyclase.
c. There is an increase in intracellular cAMP that opens Na+ channels in the olfactory receptor membrane and produces a depolarizing receptor potential.
d. The receptor potential depolarizes the initial segment of the axon to threshold, and action potentials are generated and propagated.

G. Taste
1. Taste pathways
a. Taste receptor cells line the taste buds that are located on specialized papillae. The receptor cells are covered with microvilli, which increase the surface area for binding taste chemicals. In contrast to olfactory receptor cells, taste receptor cells are not neurons.
b. The anterior two-thirds of the tongue
■ has fungiform papillae.
■ detects salty, sweet, and umami sensations.
■ is innervated by CN VII (chorda tympani).
c. The posterior one-third of the tongue
■ has circumvallate and foliate papillae.
■ detects sour and bitter sensations.
■ is innervated by CN IX (glossopharyngeal).
■ The back of the throat and the epiglottis are innervated by CN X.
d. CN VII, CN IX, and CN X enter the medulla, ascend in the solitary tract, and terminate on second-order taste neurons in the solitary nucleus. They project, primarily ipsilaterally, to the ventral posteromedial nucleus of the thalamus and, finally, to the taste cortex.
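Comparing transduction across modalities highlights the one exception worth memorizing: olfactory (and taste) stimuli produce depolarizing receptor potentials, whereas light produces a hyperpolarizing one (section II C 4). A minimal Python sketch of that contrast (the names and the two-state output are ours):

```python
def receptor_potential(modality: str, stimulus_present: bool) -> str:
    """Sign of the receptor potential by modality: most receptors depolarize
    when stimulated; the photoreceptor is the exception and hyperpolarizes
    (light closes Na+ channels as cGMP falls)."""
    if not stimulus_present:
        return "resting"
    return "hyperpolarizing" if modality == "photoreceptor" else "depolarizing"

print(receptor_potential("olfactory", True))      # depolarizing (cAMP opens Na+ channels)
print(receptor_potential("taste", True))          # depolarizing
print(receptor_potential("photoreceptor", True))  # hyperpolarizing (the exception)
```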
2. Steps in taste transduction
■ Taste chemicals (sour, sweet, salty, bitter, and umami) bind to taste receptors on the microvilli and produce a depolarizing receptor potential in the receptor cell.

III. Motor Systems

A. Motor unit
■ consists of a single motoneuron and the muscle fibers that it innervates. For fine control (e.g., muscles of the eye), a single motoneuron innervates only a few muscle fibers. For larger movements (e.g., postural muscles), a single motoneuron may innervate thousands of muscle fibers.
■ The motoneuron pool is the group of motoneurons that innervates fibers within the same muscle.
■ The force of muscle contraction is graded by recruitment of additional motor units (size principle). The size principle states that as additional motor units are recruited, more motoneurons are involved and more tension is generated.
1. Small motoneurons
■ innervate a few muscle fibers.
■ have the lowest thresholds and, therefore, fire first.
■ generate the smallest force.
2. Large motoneurons
■ innervate many muscle fibers.
■ have the highest thresholds and, therefore, fire last.
■ generate the largest force.

B. Muscle sensor
1. Types of muscle sensors (see Table 2.5)
a. Muscle spindles (groups Ia and II afferents) are arranged in parallel with extrafusal fibers. They detect both static and dynamic changes in muscle length.
b. Golgi tendon organs (group Ib afferents) are arranged in series with extrafusal muscle fibers. They detect muscle tension.
c. Pacinian corpuscles (group II afferents) are distributed throughout muscle. They detect vibration.
d. Free nerve endings (groups III and IV afferents) detect noxious stimuli.
2. Types of muscle fibers
a. Extrafusal fibers
■ make up the bulk of muscle.
■ are innervated by α-motoneurons.
■ provide the force for muscle contraction.
b. Intrafusal fibers
■ are smaller than extrafusal muscle fibers.
■ are innervated by γ-motoneurons.
■ are encapsulated in sheaths to form muscle spindles.
■ run in parallel with extrafusal fibers, but not for the entire length of the muscle.
■ are too small to generate significant force.
3. Muscle spindles
■ are distributed throughout muscle.
■ consist of small, encapsulated intrafusal fibers connected in parallel with large (force-generating) extrafusal fibers.
■ The finer the movement required, the greater the number of muscle spindles in a muscle.
a. Types of intrafusal fibers in muscle spindles (Figure 2.8)
(1) Nuclear bag fibers
■ detect the rate of change in muscle length (fast, dynamic changes).
■ are innervated by group Ia afferents.
■ have nuclei collected in a central “bag” region.
(2) Nuclear chain fibers
■ detect static changes in muscle length.
■ are innervated by group II afferents.
■ are more numerous than nuclear bag fibers.
■ have nuclei arranged in rows.
b. How the muscle spindle works (see Figure 2.8)
■ Muscle spindle reflexes oppose (correct for) increases in muscle length (stretch).
(1) Sensory information about muscle length is received by group Ia (velocity) and group II (static) afferent fibers.
(2) When a muscle is stretched (lengthened), the muscle spindle is also stretched, stimulating group Ia and group II afferent fibers.
(3) Stimulation of group Ia afferents stimulates α-motoneurons in the spinal cord. This stimulation in turn causes contraction and shortening of the muscle. Thus, the original stretch is opposed and muscle length is maintained.
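The spindle reflex in item b is a negative-feedback loop: stretch raises Ia firing, Ia firing drives α-motoneurons, and the resulting contraction removes the stretch. A toy Python sketch under assumed gain and set-point values (all numbers hypothetical):

```python
def spindle_reflex(muscle_length, set_point=1.0, gain=0.5, steps=5):
    """Toy negative-feedback loop for the muscle spindle: stretch above the
    set point raises Ia afferent firing, which excites alpha-motoneurons and
    contracts the muscle back toward its set length."""
    for _ in range(steps):
        stretch_error = muscle_length - set_point      # spindle stretch
        ia_firing = max(0.0, stretch_error)            # Ia afferents fire on stretch
        muscle_length -= gain * ia_firing              # reflex contraction shortens muscle
        print(round(muscle_length, 4))
    return muscle_length

spindle_reflex(1.4)   # a sudden stretch decays back toward the set point:
                      # 1.2, 1.1, 1.05, 1.025, 1.0125
```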
c. Function of γ-motoneurons
■ innervate intrafusal muscle fibers.
■ adjust the sensitivity of the muscle spindle so that it will respond appropriately during muscle contraction.
■ α-Motoneurons and γ-motoneurons are coactivated so that muscle spindles remain sensitive to changes in muscle length during contraction.
Figure 2.8 Organization of the muscle spindle. (Modified with permission from Matthews PBC. Muscle spindles and their motor control. Physiol Rev 1964;44:232.)
C. Muscle reflexes (Table 2.8)

Table 2.8 Summary of Muscle Reflexes
Reflex | Number of Synapses | Stimulus | Afferent Fibers | Response
Stretch reflex (knee-jerk) | Monosynaptic | Muscle is stretched | Ia | Contraction of the muscle
Golgi tendon reflex (clasp-knife) | Disynaptic | Muscle contracts | Ib | Relaxation of the muscle
Flexor-withdrawal reflex (after touching a hot stove) | Polysynaptic | Pain | II, III, and IV | Ipsilateral flexion; contralateral extension

1. Stretch (myotatic) reflex—knee jerk (Figure 2.9)
■ is monosynaptic.
a. Muscle is stretched, and the stretching stimulates group Ia afferent fibers.
b. Group Ia afferents synapse directly on α-motoneurons in the spinal cord. The pool of α-motoneurons that is activated innervates the homonymous muscle.
c. Stimulation of α-motoneurons causes contraction in the muscle that was stretched. As the muscle contracts, it shortens, decreasing the stretch on the muscle spindle and returning it to its original length.
d. At the same time, synergistic muscles are activated and antagonistic muscles are inhibited.
e. Example of the knee-jerk reflex. Tapping on the patellar tendon causes the quadriceps to stretch. Stretch of the quadriceps stimulates group Ia afferent fibers, which activate α-motoneurons that make the quadriceps contract. Contraction of the quadriceps forces the lower leg to extend.
■ Increases in γ-motoneuron activity increase the sensitivity of the muscle spindle and therefore exaggerate the knee-jerk reflex.
Figure 2.9 The stretch reflex.
2. Golgi tendon reflex (inverse myotatic)
■ is disynaptic.
■ is the opposite, or inverse, of the stretch reflex.
a. Active muscle contraction stimulates the Golgi tendon organs and group Ib afferent fibers.
b. The group Ib afferents stimulate inhibitory interneurons in the spinal cord. These interneurons inhibit α-motoneurons and cause relaxation of the muscle that was originally contracted.
c. At the same time, antagonistic muscles are excited.
d. Clasp-knife reflex, an exaggerated form of the Golgi tendon reflex, can occur with disease of the corticospinal tracts (hypertonicity or spasticity).
■ For example, if the arm is hypertonic, the increased sensitivity of the muscle spindles in the extensor muscles (triceps) causes resistance to flexion of the arm. Eventually, tension in the triceps increases to the point at which it activates the Golgi tendon reflex, causing the triceps to relax and the arm to flex closed like a jackknife.
3. Flexor withdrawal reflex
■ is polysynaptic.
■ results in flexion on the ipsilateral side and extension on the contralateral side. Somatosensory and pain afferent fibers elicit withdrawal of the stimulated body part from the noxious stimulus.
a. Pain (e.g., touching a hot stove) stimulates the flexor reflex afferents of groups II, III, and IV.
b. The afferent fibers synapse polysynaptically (via interneurons) onto motoneurons in the spinal cord.
c. On the ipsilateral side of the pain stimulus, flexors are stimulated (they contract) and extensors are inhibited (they relax), and the arm is jerked away from the stove. On the contralateral side, flexors are inhibited and extensors are stimulated (crossed extension reflex) to maintain balance.
d. As a result of persistent neural activity in the polysynaptic circuits, an afterdischarge occurs. The afterdischarge prevents the muscle from relaxing for some time.
D. Spinal organization of motor systems
1. Convergence
■ occurs when a single α-motoneuron receives its input from many muscle spindle group Ia afferents in the homonymous muscle.
■ produces spatial summation because although a single input would not bring the muscle to threshold, multiple inputs will.
■ also can produce temporal summation when inputs arrive in rapid succession.
2. Divergence
■ occurs when the muscle spindle group Ia afferent fibers project to all of the α-motoneurons that innervate the homonymous muscle.
3. Recurrent inhibition (Renshaw cells)
■ Renshaw cells are inhibitory cells in the ventral horn of the spinal cord.
■ They receive input from collateral axons of motoneurons and, when stimulated, negatively feed back on (inhibit) the motoneuron.
E. Brain stem control of posture
1. Motor centers and pathways
■ Pyramidal tracts (corticospinal and corticobulbar) pass through the medullary pyramids.
■ All others are extrapyramidal tracts and originate primarily in the following structures of the brain stem:
a. Rubrospinal tract
■ originates in the red nucleus and projects to interneurons in the lateral spinal cord.
■ Stimulation of the red nucleus produces stimulation of flexors and inhibition of extensors.
b. Pontine reticulospinal tract
■ originates in the nuclei in the pons and projects to the ventromedial spinal cord.
■ Stimulation has a general stimulatory effect on both extensors and flexors, with the predominant effect on extensors.
c. Medullary reticulospinal tract
■ originates in the medullary reticular formation and projects to spinal cord interneurons in the intermediate gray area.
■ Stimulation has a general inhibitory effect on both extensors and flexors, with the predominant effect on extensors.
d. Lateral vestibulospinal tract
■ originates in Deiters nucleus and projects to ipsilateral motoneurons and interneurons.
■ Stimulation causes a powerful stimulation of extensors and inhibition of flexors.
e. Tectospinal tract
■ originates in the superior colliculus and projects to the cervical spinal cord.
■ is involved in the control of neck muscles.
2. Effects of transections of the spinal cord
a. Paraplegia
■ is the loss of voluntary movements below the level of the lesion.
■ results from interruption of the descending pathways from the motor centers in the brain stem and higher centers.
b. Loss of conscious sensation below the level of the lesion
c. Initial loss of reflexes—spinal shock
■ Immediately after transection, there is loss of the excitatory influence from α- and γ-motoneurons. Limbs become flaccid, and reflexes are absent. With time, partial recovery and return of reflexes (or even hyperreflexia) will occur.
(1) If the lesion is at C7, there will be loss of sympathetic tone to the heart.
As a result, heart rate and arterial pressure will decrease.
(2) If the lesion is at C3, breathing will stop because the respiratory muscles have been disconnected from control centers in the brain stem.
(3) If the lesion is at C1 (e.g., as a result of hanging), death occurs.
3. Effects of transections above the spinal cord
a. Lesions above the lateral vestibular nucleus
■ cause decerebrate rigidity because of the removal of inhibition from higher centers, resulting in excitation of α- and γ-motoneurons and rigid posture.
b. Lesions above the pontine reticular formation but below the midbrain
■ cause decerebrate rigidity because of the removal of central inhibition from the pontine reticular formation, resulting in excitation of α- and γ-motoneurons and rigid posture.
c. Lesions above the red nucleus
■ result in decorticate posturing and intact tonic neck reflexes.
F. Cerebellum—central control of movement
1. Functions of the cerebellum
a. Vestibulocerebellum—control of balance and eye movement
b. Pontocerebellum—planning and initiation of movement
c. Spinocerebellum—synergy, which is control of rate, force, range, and direction of movement
2. Layers of the cerebellar cortex
a. Granular layer
■ is the innermost layer.
■ contains granule cells, Golgi type II cells, and glomeruli.
■ In the glomeruli, axons of mossy fibers form synaptic connections on dendrites of granular and Golgi type II cells.
b. Purkinje cell layer
■ is the middle layer.
■ contains Purkinje cells.
■ Output is always inhibitory.
c. Molecular layer
■ is the outermost layer.
■ contains stellate and basket cells, dendrites of Purkinje and Golgi type II cells, and parallel fibers (axons of granule cells).
■ The parallel fibers synapse on dendrites of Purkinje cells, basket cells, stellate cells, and Golgi type II cells.
3. Connections in the cerebellar cortex
a. Input to the cerebellar cortex
(1) Climbing fibers
■ originate from a single region of the medulla (inferior olive).
■ make multiple synapses onto Purkinje cells, resulting in high-frequency bursts, or complex spikes.
■ "condition" the Purkinje cells.
■ play a role in cerebellar motor learning.
(2) Mossy fibers
■ originate from many centers in the brain stem and spinal cord.
■ include vestibulocerebellar, spinocerebellar, and pontocerebellar afferents.
■ make multiple synapses on Purkinje cells via interneurons. Synapses on Purkinje cells result in simple spikes.
■ synapse on granule cells in glomeruli.
■ The axons of granule cells bifurcate and give rise to parallel fibers. The parallel fibers excite multiple Purkinje cells as well as inhibitory interneurons (basket, stellate, Golgi type II).
b. Output of the cerebellar cortex
■ Purkinje cells are the only output of the cerebellar cortex.
■ Output of the Purkinje cells is always inhibitory; the neurotransmitter is γ-aminobutyric acid (GABA).
■ The output projects to deep cerebellar nuclei and to the vestibular nucleus. This inhibitory output modulates the output of the cerebellum and regulates rate, range, and direction of movement (synergy).
c. Clinical disorders of the cerebellum—ataxia
■ result in lack of coordination, including delay in initiation of movement, poor execution of a sequence of movements, and inability to perform rapid alternating movements (dysdiadochokinesia).
(1) Intention tremor occurs during attempts to perform voluntary movements.
(2) Rebound phenomenon is the inability to stop a movement.
G. Basal ganglia—control of movement
■ consists of the striatum, globus pallidus, subthalamic nuclei, and substantia nigra.
■ modulates thalamic outflow to the motor cortex to plan and execute smooth movements.
■ Many synaptic connections are inhibitory and use GABA as their neurotransmitter.
■ The striatum communicates with the thalamus and the cerebral cortex by two opposing pathways.
■ Indirect pathway is, overall, inhibitory.
■ Direct pathway is, overall, excitatory.
■ Connections between the striatum and the substantia nigra use dopamine as their neurotransmitter. Dopamine is inhibitory on the indirect pathway (D2 receptors) and excitatory on the direct pathway (D1 receptors). Thus, the action of dopamine is, overall, excitatory.
■ Lesions of the basal ganglia include
1. Lesions of the globus pallidus
■ result in inability to maintain postural support.
2. Lesions of the subthalamic nucleus
■ are caused by the release of inhibition on the contralateral side.
■ result in wild, flinging movements (e.g., hemiballismus).
3. Lesions of the striatum
■ are caused by the release of inhibition.
■ result in quick, continuous, and uncontrollable movements.
■ occur in patients with Huntington disease.
4. Lesions of the substantia nigra
■ are caused by destruction of dopaminergic neurons.
■ occur in patients with Parkinson disease.
■ Since dopamine inhibits the indirect (inhibitory) pathway and excites the direct (excitatory) pathway, destruction of dopaminergic neurons is, overall, inhibitory.
■ Symptoms include lead-pipe rigidity, tremor, and reduced voluntary movement.
H. Motor cortex
1. Premotor cortex and supplementary motor cortex (area 6)
■ are responsible for generating a plan for movement, which is transferred to the primary motor cortex for execution.
■ The supplementary motor cortex programs complex motor sequences and is active during "mental rehearsal" for a movement.
2. Primary motor cortex (area 4)
■ is responsible for the execution of movement. Programmed patterns of motoneurons are activated in the motor cortex. Excitation of upper motoneurons in the motor cortex is transferred to the brain stem and spinal cord, where the lower motoneurons are activated and cause voluntary movement.
■ is somatotopically organized (motor homunculus). Epileptic events in the primary motor cortex cause Jacksonian seizures, which illustrate the somatotopic organization.
IV. Higher Functions of the Cerebral Cortex
A. Electroencephalographic (EEG) findings
■ EEG waves consist of alternating excitatory and inhibitory synaptic potentials in the pyramidal cells of the cerebral cortex.
■ A cortical evoked potential is an EEG change. It reflects synaptic potentials evoked in large numbers of neurons.
■ In awake adults with eyes open, beta waves predominate.
■ In awake adults with eyes closed, alpha waves predominate.
■ During sleep, slow waves predominate, muscles relax, and heart rate and blood pressure decrease.
B. Sleep
1. Sleep–wake cycles occur in a circadian rhythm, with a period of about 24 hours. The circadian periodicity is thought to be driven by the suprachiasmatic nucleus of the hypothalamus, which receives input from the retina.
2. Rapid eye movement (REM) sleep occurs every 90 minutes.
■ During REM sleep, the EEG resembles that of a person who is awake or in stage 1 non-REM sleep.
■ Most dreams occur during REM sleep.
■ REM sleep is characterized by eye movements, loss of muscle tone, pupillary constriction, and penile erection.
■ Use of benzodiazepines and increasing age decrease the duration of REM sleep.
C. Language
■ Information is transferred between the two hemispheres of the cerebral cortex through the corpus callosum.
■ The right hemisphere is dominant in facial expression, intonation, body language, and spatial tasks.
■ The left hemisphere is usually dominant with respect to language, even in left-handed people. Lesions of the left hemisphere cause aphasia.
1. Damage to Wernicke area causes sensory aphasia, in which there is difficulty understanding written or spoken language.
2. Damage to Broca area causes motor aphasia, in which speech and writing are affected, but understanding is intact.
D. Learning and memory
■ Short-term memory involves synaptic changes.
■ Long-term memory involves structural changes in the nervous system and is more stable.
■ Bilateral lesions of the hippocampus block the ability to form new long-term memories.
V. Blood–Brain Barrier and Cerebrospinal Fluid (CSF)
A. Anatomy of the blood–brain barrier
■ It is the barrier between cerebral capillary blood and the CSF. CSF fills the ventricles and the subarachnoid space.
■ It consists of the endothelial cells of the cerebral capillaries and the choroid plexus epithelium.
B. Formation of CSF by the choroid plexus epithelium
■ Lipid-soluble substances (CO2 and O2) and H2O freely cross the blood–brain barrier and equilibrate between blood and CSF.
■ Other substances are transported by carriers in the choroid plexus epithelium. They may be secreted from blood into the CSF or absorbed from the CSF into blood.
■ Protein and cholesterol are excluded from the CSF because of their large molecular size.
■ The composition of CSF is approximately the same as that of the interstitial fluid of the brain but differs significantly from blood (Table 2.9).
■ CSF can be sampled with a lumbar puncture.
C. Functions of the blood–brain barrier
1. It maintains a constant environment for neurons in the CNS and protects the brain from endogenous or exogenous toxins.
2. It prevents the escape of neurotransmitters from their functional sites in the CNS into the general circulation.
3. Drugs penetrate the blood–brain barrier to varying degrees. For example, nonionized (lipid-soluble) drugs cross more readily than ionized (water-soluble) drugs.
■ Inflammation, irradiation, and tumors may destroy the blood–brain barrier and permit entry into the brain of substances that are usually excluded (e.g., antibiotics, radiolabeled markers).
VI. Temperature Regulation
A. Sources of heat gain and heat loss from the body
1. Heat-generating mechanisms—response to cold
a. Thyroid hormone increases metabolic rate and heat production by stimulating Na+, K+-adenosine triphosphatase (ATPase).
b. Cold temperatures activate the sympathetic nervous system and, via activation of β receptors in brown fat, increase metabolic rate and heat production.
c. Shivering is the most potent mechanism for increasing heat production.
■ Cold temperatures activate the shivering response, which is orchestrated by the posterior hypothalamus.
■ α-Motoneurons and γ-motoneurons are activated, causing contraction of skeletal muscle and heat production.
2. Heat-loss mechanisms—response to heat
a. Heat loss by radiation and convection increases when the ambient temperature increases.
■ The response is orchestrated by the anterior hypothalamus.
■ Increases in temperature cause a decrease in sympathetic tone to cutaneous blood vessels, increasing blood flow through the arterioles and increasing arteriovenous shunting of blood to the venous plexus near the surface of the skin. Shunting of warm blood to the surface of the skin increases heat loss by radiation and convection.
b. Heat loss by evaporation depends on the activity of sweat glands, which are under sympathetic muscarinic control.
B. Hypothalamic set point for body temperature
1. Temperature sensors on the skin and in the hypothalamus "read" the core temperature and relay this information to the anterior hypothalamus.
2. The anterior hypothalamus compares the detected core temperature to the set-point temperature (see the sketch at the end of this section).
a. If the core temperature is below the set point, heat-generating mechanisms (e.g., increased metabolism, shivering, vasoconstriction of cutaneous blood vessels) are activated by the posterior hypothalamus.
b. If the core temperature is above the set point, mechanisms for heat loss (e.g., vasodilation of the cutaneous blood vessels, increased sympathetic outflow to the sweat glands) are activated by the anterior hypothalamus.
3. Pyrogens increase the set-point temperature. Core temperature will be recognized as lower than the new set-point temperature by the anterior hypothalamus. As a result, heat-generating mechanisms (e.g., shivering) will be initiated.

Table 2.9 Comparison of Cerebrospinal Fluid (CSF) and Blood Concentrations
CSF ≈ Blood: Na+, Cl−, HCO3−, osmolarity
CSF < Blood: K+, Ca2+, glucose, cholesterol*, protein*
CSF > Blood: Mg2+, creatinine
*Negligible concentration in CSF.

C. Fever
1. Pyrogens increase the production of interleukin-1 (IL-1) in phagocytic cells.
■ IL-1 acts on the anterior hypothalamus to increase the production of prostaglandins. Prostaglandins increase the set-point temperature, setting in motion the heat-generating mechanisms that increase body temperature and produce fever.
2. Aspirin reduces fever by inhibiting cyclooxygenase, thereby inhibiting the production of prostaglandins. Therefore, aspirin decreases the set-point temperature. In response, mechanisms that cause heat loss (e.g., sweating, vasodilation) are activated.
3. Steroids reduce fever by blocking the release of arachidonic acid from brain phospholipids, thereby preventing the production of prostaglandins.
D. Heat exhaustion and heat stroke
1. Heat exhaustion is caused by excessive sweating. As a result, blood volume and arterial blood pressure decrease and syncope (fainting) occurs.
2. Heat stroke occurs when body temperature increases to the point of tissue damage. The normal response to increased ambient temperature (sweating) is impaired, and core temperature increases further.
E. Hypothermia
■ results when the ambient temperature is so low that heat-generating mechanisms (e.g., shivering, metabolism) cannot adequately maintain core temperature near the set point.
F. Malignant hyperthermia
■ is caused in susceptible individuals by inhalation anesthetics.
■ is characterized by a massive increase in oxygen consumption and heat production by skeletal muscle, which causes a rapid rise in body temperature.
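The set-point comparator logic described in VI.B and VI.C can be summarized in a few lines of code. This is a deliberately simplified sketch under stated assumptions: the 37 degrees C default, the response lists, and the function name hypothalamic_response are illustrative, not from the text.

```python
# Toy sketch of the hypothalamic set-point comparator described above.
# Thresholds and responses are illustrative; the set point is normally ~37 degrees C.

def hypothalamic_response(core_temp, set_point=37.0):
    """Anterior hypothalamus compares core temperature to the set point."""
    if core_temp < set_point:
        # Posterior hypothalamus drives heat generation.
        return ["shivering", "increased metabolism", "cutaneous vasoconstriction"]
    elif core_temp > set_point:
        # Anterior hypothalamus drives heat loss.
        return ["sweating", "cutaneous vasodilation"]
    return ["no thermoregulatory drive"]

print(hypothalamic_response(36.0))        # below set point -> generate heat
print(hypothalamic_response(38.0))        # above set point -> lose heat
print(hypothalamic_response(38.0, 39.0))  # pyrogen raises set point -> fever:
                                          # 38 degrees C now reads as "too cold"
```

The last call mirrors the fever mechanism: raising the set point makes a normal or even elevated core temperature register as too low, so heat-generating responses (shivering) are activated until the core reaches the new set point.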
Review Test
1. Which autonomic receptor is blocked by hexamethonium at the ganglia, but not at the neuromuscular junction?
(A) Adrenergic α receptors
(B) Adrenergic β1 receptors
(C) Adrenergic β2 receptors
(D) Cholinergic muscarinic receptors
(E) Cholinergic nicotinic receptors
2. A 66-year-old man with chronic hypertension is treated with prazosin by his physician. The treatment successfully decreases his blood pressure to within the normal range. What is the mechanism of the drug's action?
(A) Inhibition of β1 receptors in the sinoatrial (SA) node
(B) Inhibition of β2 receptors in the SA node
(C) Stimulation of muscarinic receptors in the SA node
(D) Stimulation of nicotinic receptors in the SA node
(E) Inhibition of β1 receptors in ventricular muscle
(F) Stimulation of β1 receptors in ventricular muscle
(G) Inhibition of α1 receptors in ventricular muscle
(H) Stimulation of α1 receptors in the SA node
(I) Inhibition of α1 receptors in the SA node
(J) Inhibition of α1 receptors on vascular smooth muscle
(K) Stimulation of α1 receptors on vascular smooth muscle
(L) Stimulation of α2 receptors on vascular smooth muscle
3. Which of the following responses is mediated by parasympathetic muscarinic receptors?
(A) Dilation of bronchiolar smooth muscle
(B) Erection
(C) Ejaculation
(D) Constriction of gastrointestinal (GI) sphincters
(E) Increased cardiac contractility
4. Which of the following is a property of C fibers?
(A) Have the slowest conduction velocity of any nerve fiber type
(B) Have the largest diameter of any nerve fiber type
(C) Are afferent nerves from muscle spindles
(D) Are afferent nerves from Golgi tendon organs
(E) Are preganglionic autonomic fibers
5. When compared with the cones of the retina, the rods
(A) are more sensitive to low-intensity light
(B) adapt to darkness before the cones
(C) are most highly concentrated on the fovea
(D) are primarily involved in color vision
6. Which of the following statements best describes the basilar membrane of the organ of Corti?
(A) The apex responds better to low frequencies than the base does
(B) The base is wider than the apex
(C) The base is more compliant than the apex
(D) High frequencies produce maximal displacement of the basilar membrane near the helicotrema
(E) The apex is relatively stiff compared to the base
7. Which of the following is a feature of the sympathetic, but not the parasympathetic nervous system?
(A) Ganglia located in the effector organs
(B) Long preganglionic neurons
(C) Preganglionic neurons release norepinephrine
(D) Preganglionic neurons release acetylcholine (ACh)
(E) Preganglionic neurons originate in the thoracolumbar spinal cord
(F) Postganglionic neurons synapse on effector organs
(G) Postganglionic neurons release epinephrine
(H) Postganglionic neurons release ACh
8. Which autonomic receptor mediates an increase in heart rate?
(A) Adrenergic α receptors
(B) Adrenergic β1 receptors
(C) Adrenergic β2 receptors
(D) Cholinergic muscarinic receptors
(E) Cholinergic nicotinic receptors
9. Cutting which structure on the left side causes total blindness in the left eye?
(A) Optic nerve
(B) Optic chiasm
(C) Optic tract
(D) Geniculocalcarine tract
10. Which reflex is responsible for monosynaptic excitation of ipsilateral homonymous muscle?
(A) Stretch reflex (myotatic)
(B) Golgi tendon reflex (inverse myotatic)
(C) Flexor withdrawal reflex
(D) Subliminal occlusion reflex
11. Which type of cell in the visual cortex responds best to a moving bar of light?
(A) Simple
(B) Complex
(C) Hypercomplex
(D) Bipolar
(E) Ganglion
12. Administration of which of the following drugs is contraindicated in a 10-year-old child with a history of asthma?
(A) Albuterol
(B) Epinephrine
(C) Isoproterenol
(D) Norepinephrine
(E) Propranolol
13. Which adrenergic receptor produces its stimulatory effects by the formation of inositol 1,4,5-triphosphate (IP3) and an increase in intracellular [Ca2+]?
(A) α1 Receptors
(B) α2 Receptors
(C) β1 Receptors
(D) β2 Receptors
(E) Muscarinic receptors
(F) Nicotinic receptors
14. The excessive muscle tone produced in decerebrate rigidity can be reversed by
(A) stimulation of group Ia afferents
(B) cutting the dorsal roots
(C) transection of cerebellar connections to the lateral vestibular nucleus
(D) stimulation of α-motoneurons
(E) stimulation of γ-motoneurons
15. Which of the following parts of the body has cortical motoneurons with the largest representation on the primary motor cortex (area 4)?
(A) Shoulder
(B) Ankle
(C) Fingers
(D) Elbow
(E) Knee
16. Which autonomic receptor mediates secretion of epinephrine by the adrenal medulla?
(A) Adrenergic α receptors
(B) Adrenergic β1 receptors
(C) Adrenergic β2 receptors
(D) Cholinergic muscarinic receptors
(E) Cholinergic nicotinic receptors
17. Cutting which structure on the right side causes blindness in the temporal field of the left eye and the nasal field of the right eye?
(A) Optic nerve
(B) Optic chiasm
(C) Optic tract
(D) Geniculocalcarine tract
18. A ballet dancer spins to the left. During the spin, her eyes snap quickly to the left. This fast eye movement is
(A) nystagmus
(B) postrotatory nystagmus
(C) ataxia
(D) aphasia
19. Which of the following has a much lower concentration in the cerebrospinal fluid (CSF) than in cerebral capillary blood?
(A) Na+
(B) K+
(C) Osmolarity
(D) Protein
(E) Mg2+
20. Which of the following autonomic drugs acts by stimulating adenylate cyclase?
(A) Atropine
(B) Clonidine
(C) Curare
(D) Norepinephrine
(E) Phentolamine
(F) Phenylephrine
(G) Propranolol
21. Which of the following is a step in photoreception in the rods?
(A) Light converts all-trans retinal to 11-cis retinal
(B) Metarhodopsin II activates transducin
(C) Cyclic guanosine monophosphate (cGMP) levels increase
(D) Rods depolarize
(E) Glutamate release increases
22. Pathogens that produce fever cause
(A) decreased production of interleukin-1 (IL-1)
(B) decreased set-point temperature in the hypothalamus
(C) shivering
(D) vasodilation of blood vessels in the skin
23. Which of the following statements about the olfactory system is true?
(A) The receptor cells are neurons
(B) The receptor cells are sloughed off and are not replaced
(C) Axons of cranial nerve (CN) I are A-delta fibers
(D) Axons from receptor cells synapse in the prepiriform cortex
(E) Fractures of the cribriform plate can cause inability to detect ammonia
24. A lesion of the chorda tympani nerve would most likely result in
(A) impaired olfactory function
(B) impaired vestibular function
(C) impaired auditory function
(D) impaired taste function
(E) nerve deafness
25. Which of the following would produce maximum excitation of the hair cells in the right horizontal semicircular canal?
(A) Hyperpolarization of the hair cells
(B) Bending the stereocilia away from the kinocilia
(C) Rapid ascent in an elevator
(D) Rotating the head to the right
26. The inability to perform rapidly alternating movements (dysdiadochokinesia) is associated with lesions of the
(A) premotor cortex
(B) motor cortex
(C) cerebellum
(D) substantia nigra
(E) medulla
27. Which autonomic receptor is activated by low concentrations of epinephrine released from the adrenal medulla and causes vasodilation?
(A) Adrenergic α receptors
(B) Adrenergic β1 receptors
(C) Adrenergic β2 receptors
(D) Cholinergic muscarinic receptors
(E) Cholinergic nicotinic receptors
28. Complete transection of the spinal cord at the level of T1 would most likely result in
(A) temporary loss of stretch reflexes below the lesion
(B) temporary loss of conscious proprioception below the lesion
(C) permanent loss of voluntary control of movement above the lesion
(D) permanent loss of consciousness above the lesion
29. Sensory receptor potentials
(A) are action potentials
(B) always bring the membrane potential of a receptor cell toward threshold
(C) always bring the membrane potential of a receptor cell away from threshold
(D) are graded in size, depending on stimulus intensity
(E) are all-or-none
30. Cutting which structure causes blindness in the temporal fields of the left and right eyes?
(A) Optic nerve
(B) Optic chiasm
(C) Optic tract
(D) Geniculocalcarine tract
31. Which of the following structures has a primary function to coordinate rate, range, force, and direction of movement?
(A) Primary motor cortex
(B) Premotor cortex and supplementary motor cortex
(C) Prefrontal cortex
(D) Basal ganglia
(E) Cerebellum
32. Which reflex is responsible for polysynaptic excitation of contralateral extensors?
(A) Stretch reflex (myotatic)
(B) Golgi tendon reflex (inverse myotatic)
(C) Flexor withdrawal reflex
(D) Subliminal occlusion reflex
33. Which of the following is a characteristic of nuclear bag fibers?
(A) They are one type of extrafusal muscle fiber
(B) They detect dynamic changes in muscle length
(C) They give rise to group Ib afferents
(D) They are innervated by α-motoneurons
34. Muscle stretch leads to a direct increase in firing rate of which type of nerve?
(A) α-Motoneurons
(B) γ-Motoneurons
(C) Group Ia fibers
(D) Group Ib fibers
35. A 42-year-old woman with elevated blood pressure, visual disturbances, and vomiting has increased urinary excretion of 3-methoxy-4-hydroxymandelic acid (VMA). A computerized tomographic scan shows an adrenal mass that is consistent with a diagnosis of pheochromocytoma. While awaiting surgery to remove the tumor, she is treated with phenoxybenzamine to lower her blood pressure. What is the mechanism of this action of the drug?
(A) Increasing cyclic adenosine monophosphate (cAMP)
(B) Decreasing cAMP
(C) Increasing inositol 1,4,5-triphosphate (IP3)/Ca2+
(D) Decreasing IP3/Ca2+
(E) Opening Na+/K+ channels
(F) Closing Na+/K+ channels
36. Patients are enrolled in trials of a new atropine analog. Which of the following would be expected?
(A) Increased AV node conduction velocity
(B) Increased gastric acidity
(C) Pupillary constriction
(D) Sustained erection
(E) Increased sweating
Answers and Explanations
1. The answer is E [I C 2 a]. Hexamethonium is a nicotinic blocker, but it acts only at ganglionic (not neuromuscular junction) nicotinic receptors. This pharmacologic distinction emphasizes that nicotinic receptors at these two locations, although similar, are not identical.
2. The answer is J [I C 1 a; Table 2.2]. Prazosin is a specific antagonist of α1 receptors, which are present in vascular smooth muscle, but not in the heart. Inhibition of α1 receptors results in vasodilation of the cutaneous and splanchnic vascular beds, decreased total peripheral resistance, and decreased blood pressure.
3. The answer is B [I C 2 b; Table 2.6]. Erection is a parasympathetic muscarinic response. Dilation of the bronchioles, ejaculation, constriction of the gastrointestinal (GI) sphincters, and increased cardiac contractility are all sympathetic α or β responses.
4. The answer is A [II F 1 b; Table 2.5]. C fibers (slow pain) are the smallest nerve fibers and therefore have the slowest conduction velocity.
5. The answer is A [II C 2 c (2); Table 2.7]. Of the two types of photoreceptors, the rods are more sensitive to low-intensity light and therefore are more important than the cones for night vision. They adapt to darkness after the cones. Rods are not present in the fovea. The cones are primarily involved in color vision.
6. The answer is A [II D 4]. Sound frequencies can be encoded by the organ of Corti because of differences in properties along the basilar membrane. The base of the basilar membrane is narrow and stiff, and hair cells on it are activated by high frequencies. The apex of the basilar membrane is wide and compliant, and hair cells on it are activated by low frequencies.
7. The answer is E [I A, B; Table 2.1; Figure 2.1]. Sympathetic preganglionic neurons originate in spinal cord segments T1–L3. Thus, the designation is thoracolumbar. The sympathetic nervous system is further characterized by short preganglionic neurons that synapse in ganglia located in the paravertebral chain (not in the effector organs) and postganglionic neurons that release norepinephrine (not epinephrine). Common features of the sympathetic and parasympathetic nervous systems are preganglionic neurons that release acetylcholine (ACh) and postganglionic neurons that synapse in effector organs.
8. The answer is B [I C 1 c]. Heart rate is increased by the stimulatory effect of norepinephrine on β1 receptors in the sinoatrial (SA) node. There are also sympathetic β1 receptors in the heart that regulate contractility.
9. The answer is A [II C 3 a]. Cutting the optic nerve from the left eye causes blindness in the left eye because the fibers have not yet crossed at the optic chiasm.
10. The answer is A [III C 1]. The stretch reflex is the monosynaptic response to stretching of a muscle. The reflex produces contraction and then shortening of the muscle that was originally stretched (homonymous muscle).
11. The answer is B [II C 5 b (2)]. Complex cells respond to moving bars or edges with the correct orientation. Simple cells respond to stationary bars, and hypercomplex cells respond to lines, curves, and angles. Bipolar and ganglion cells are found in the retina, not in the visual cortex.
12. The answer is E [I C 1 d; Table 2.2]. Asthma, a disease involving increased resistance of the airways, is treated by administering drugs that produce bronchiolar dilation (i.e., β2 agonists). β2 Agonists include isoproterenol, albuterol, epinephrine, and, to a lesser extent, norepinephrine. β2 Antagonists, such as propranolol, are strictly contraindicated because they cause constriction of the bronchioles.
13. The answer is A [I C 1 a]. Adrenergic α1 receptors produce physiologic actions by stimulating the formation of inositol 1,4,5-triphosphate (IP3) and causing a subsequent increase in intracellular [Ca2+]. Both β1 and β2 receptors act by stimulating adenylate cyclase and increasing the production of cyclic adenosine monophosphate (cAMP). α2 Receptors inhibit adenylate cyclase and decrease cAMP levels. Muscarinic and nicotinic receptors are cholinergic.
14. The answer is B [III E 3 a, b].
Decerebrate rigidity is caused by increased reflex muscle spindle activity. Stimulation of group Ia afferents would enhance, not diminish, this reflex activity. Cutting the dorsal roots would block the reflexes. Stimulation of α- and γ-motoneurons would stimulate muscles directly.
15. The answer is C [II B 4]. Representation on the motor homunculus is greatest for those structures that are involved in the most complicated movements—the fingers, hands, and face.
16. The answer is E [I C 2 a; Figure 2.1]. Preganglionic sympathetic fibers synapse on the chromaffin cells of the adrenal medulla at a nicotinic receptor. Epinephrine and, to a lesser extent, norepinephrine are released into the circulation.
17. The answer is C [II C 3 c]. Fibers from the left temporal field and the right nasal field ascend together in the right optic tract.
18. The answer is A [II E 3]. The fast eye movement that occurs during a spin is nystagmus. It occurs in the same direction as the rotation. After the spin, postrotatory nystagmus occurs in the opposite direction.
19. The answer is D [V B; Table 2.9]. Cerebrospinal fluid (CSF) is similar in composition to the interstitial fluid of the brain. Therefore, it is similar to an ultrafiltrate of plasma and has a very low protein concentration because large protein molecules cannot cross the blood–brain barrier. There are other differences in composition between CSF and blood that are created by transporters in the choroid plexus, but the low protein concentration of CSF is the most dramatic difference.
20. The answer is D [I C 1 c, d; Table 2.2]. Among the autonomic drugs, only β1 and β2 adrenergic agonists act by stimulating adenylate cyclase. Norepinephrine is a β1 agonist. Atropine is a muscarinic cholinergic antagonist. Clonidine is an α2 adrenergic agonist. Curare is a nicotinic cholinergic antagonist. Phentolamine is an α1 adrenergic antagonist. Phenylephrine is an α1 adrenergic agonist. Propranolol is a β1 and β2 adrenergic antagonist.
21. The answer is B [II C 4]. Photoreception involves the following steps. Light converts 11-cis retinal to all-trans retinal, which is converted to such intermediates as metarhodopsin II. Metarhodopsin II activates a stimulatory G protein (transducin), which activates a phosphodiesterase. Phosphodiesterase breaks down cyclic guanosine monophosphate (cGMP), so intracellular cGMP levels decrease, causing closure of Na+ channels in the photoreceptor cell membrane and hyperpolarization. Hyperpolarization of the photoreceptor cell membrane inhibits release of the neurotransmitter, glutamate. If the decreased release of glutamate interacts with ionotropic receptors on bipolar cells, there will be inhibition (decreased excitation). If the decreased release of glutamate interacts with metabotropic receptors on bipolar cells, there will be excitation (decreased inhibition).
22. The answer is C [VI C 1]. Pathogens release interleukin-1 (IL-1) from phagocytic cells. IL-1 then acts to increase the production of prostaglandins, ultimately raising the temperature set point in the anterior hypothalamus. The hypothalamus now "thinks" that the body temperature is too low (because the core temperature is lower than the new set-point temperature) and initiates mechanisms for generating heat—shivering, vasoconstriction, and shunting of blood away from the venous plexus near the skin surface.
23. The answer is A [II F 1 a, b]. Cranial nerve (CN) I innervates the olfactory epithelium. Its axons are C fibers.
Fracture of the cribriform plate can tear the delicate olfactory nerves and thereby eliminate the sense of smell (anosmia); however, the ability to detect ammonia is left intact. Olfactory receptor cells are unique in that they are true neurons that are continuously replaced from undifferentiated stem cells.
24. The answer is D [II G 1 b]. The chorda tympani (cranial nerve [CN] VII) is involved in taste; it innervates the anterior two-thirds of the tongue.
25. The answer is D [II E 1 a, 2 a, b]. The semicircular canals are involved in angular acceleration or rotation. Hair cells of the right semicircular canal are excited (depolarized) when there is rotation to the right. This rotation causes bending of the stereocilia toward the kinocilia, and this bending produces depolarization of the hair cell. Ascent in an elevator would activate the saccules, which detect linear acceleration.
26. The answer is C [III F 1 c, 3 c]. Coordination of movement (synergy) is the function of the cerebellum. Lesions of the cerebellum cause ataxia, lack of coordination, poor execution of movement, delay in initiation of movement, and inability to perform rapidly alternating movements. The premotor and motor cortices plan and execute movements. Lesions of the substantia nigra, a component of the basal ganglia, result in tremors, lead-pipe rigidity, and poor muscle tone (Parkinson disease).
27. The answer is C [I C 1 d]. β2 Receptors on vascular smooth muscle produce vasodilation. α Receptors on vascular smooth muscle produce vasoconstriction. Because β2 receptors are more sensitive to epinephrine than are α receptors, low doses of epinephrine produce vasodilation, and high doses produce vasoconstriction.
28. The answer is A [III E 2]. Transection of the spinal cord causes "spinal shock" and loss of all reflexes below the level of the lesion. These reflexes, which are local circuits within the spinal cord, will return with time or become hypersensitive. Proprioception is permanently (rather than temporarily) lost because of the interruption of sensory nerve fibers. Fibers above the lesion are intact.
29. The answer is D [II A 4 c]. Receptor potentials are graded potentials that may bring the membrane potential of the receptor cell either toward (depolarizing) or away from (hyperpolarizing) threshold. Receptor potentials are not action potentials, although action potentials (which are all-or-none) may result if the membrane potential reaches threshold.
30. The answer is B [II C 3 b]. Optic nerve fibers from both temporal receptor fields cross at the optic chiasm.
31. The answer is E [III F 3 b]. Output of Purkinje cells from the cerebellar cortex to deep cerebellar nuclei is inhibitory. This output modulates movement and is responsible for the coordination that allows one to "catch a fly."
32. The answer is C [III C 3]. Flexor withdrawal is a polysynaptic reflex that is used when a person touches a hot stove or steps on a tack. On the ipsilateral side of the painful stimulus, there is flexion (withdrawal); on the contralateral side, there is extension to maintain balance.
33. The answer is B [III B 3 a (1)]. Nuclear bag fibers are one type of intrafusal muscle fiber that make up muscle spindles. They detect dynamic changes in muscle length, give rise to group Ia afferent fibers, and are innervated by γ-motoneurons. The other type of intrafusal fiber, the nuclear chain fiber, detects static changes in muscle length.
34. The answer is C [III B 3 b].
Group Ia afferent fibers innervate intrafusal fibers of the muscle spindle. When the intrafusal fibers are stretched, the group Ia fibers fire and activate the stretch reflex, which causes the muscle to return to its resting length.
35. The answer is D [I C; Tables 2.2 and 2.5]. Pheochromocytoma is a tumor of the adrenal medulla that secretes excessive amounts of norepinephrine and epinephrine. Increased blood pressure is due to activation of α1 receptors on vascular smooth muscle and activation of β1 receptors in the heart. Phenoxybenzamine decreases blood pressure by acting as an α1 receptor antagonist, thus decreasing intracellular IP3/Ca2+.
36. The answer is A [I C 3; I D]. An atropine analog would block muscarinic receptors and thus block actions that are mediated by muscarinic receptors. Muscarinic receptors slow AV node conduction velocity, thus muscarinic blocking agents would increase AV node conduction velocity. Muscarinic receptors increase gastric acid secretion, constrict the pupils, mediate erection, and cause sweating (via sympathetic cholinergic innervation of sweat glands); thus, blocking muscarinic receptors will inhibit all of those actions.
Chapter 3 Cardiovascular Physiology
I. Circuitry of the Cardiovascular System (Figure 3.1)
A. Cardiac output of the left heart equals cardiac output of the right heart.
■ Cardiac output from the left side of the heart is the systemic blood flow.
■ Cardiac output from the right side of the heart is the pulmonary blood flow.
B. Direction of blood flow
■ Blood flows along the following course:
1. From the lungs to the left atrium via the pulmonary vein
2. From the left atrium to the left ventricle through the mitral valve
3. From the left ventricle to the aorta through the aortic valve
4. From the aorta to the systemic arteries and the systemic tissues (i.e., cerebral, coronary, renal, splanchnic, skeletal muscle, and skin)
5. From the tissues to the systemic veins and vena cava
6. From the vena cava (mixed venous blood) to the right atrium
7. From the right atrium to the right ventricle through the tricuspid valve
8. From the right ventricle to the pulmonary artery through the pulmonic valve
9. From the pulmonary artery to the lungs for oxygenation
II. Hemodynamics
A. Components of the vasculature
1. Arteries
■ deliver oxygenated blood to the tissues.
■ are thick walled, with extensive elastic tissue and smooth muscle.
■ are under high pressure.
■ The blood volume contained in the arteries is called the stressed volume.
2. Arterioles
■ are the smallest branches of the arteries.
■ are the site of highest resistance in the cardiovascular system.
■ have a smooth muscle wall that is extensively innervated by autonomic nerve fibers.
■ Arteriolar resistance is regulated by the autonomic nervous system (ANS).
■ α1-Adrenergic receptors are found on the arterioles of the skin, splanchnic, and renal circulations.
■ β2-Adrenergic receptors are found on arterioles of skeletal muscle.
3. Capillaries
■ have the largest total cross-sectional and surface area.
■ consist of a single layer of endothelial cells surrounded by basal lamina.
■ are thin walled.
■ are the site of exchange of nutrients, water, and gases.
4. Venules
■ are formed from merged capillaries.
5. Veins
■ progressively merge to form larger veins. The largest vein, the vena cava, returns blood to the heart.
■ are thin walled.
■ are under low pressure.
■ contain the highest proportion of the blood in the cardiovascular system.
■ The blood volume contained in the veins is called the unstressed volume.
■ have α1-adrenergic receptors.
B. Velocity of blood flow
■ can be expressed by the following equation:
v = Q/A
where:
v = velocity (cm/sec)
Q = blood flow (mL/min)
A = cross-sectional area (cm²)
Figure 3.1 Circuitry of the cardiovascular system.
■ Velocity is directly proportional to blood flow and inversely proportional to the cross-sectional area at any level of the cardiovascular system.
■ For example, blood velocity is higher in the aorta (small cross-sectional area) than in the sum of all of the capillaries (large cross-sectional area). The lower velocity of blood in the capillaries optimizes conditions for exchange of substances across the capillary wall.
C. Blood flow
■ can be expressed by the following equation:
Q = ΔP/R
or
Cardiac output = (Mean arterial pressure − Right atrial pressure)/Total peripheral resistance (TPR)
where:
Q = flow or cardiac output (mL/min)
ΔP = pressure gradient (mm Hg)
R = resistance or total peripheral resistance (mm Hg/mL/min)
■ The equation for blood flow (or cardiac output) is analogous to Ohm's law for electrical circuits (I = V/R), where flow is analogous to current, and pressure is analogous to voltage.
■ The pressure gradient (ΔP) drives blood flow.
■ Thus, blood flows from high pressure to low pressure.
■ Blood flow is inversely proportional to the resistance of the blood vessels.
D. Resistance
■ Poiseuille's equation gives factors that change the resistance of blood vessels.
R = 8ηl/(πr⁴)
where:
R = resistance
η = viscosity of blood
l = length of blood vessel
r⁴ = radius of blood vessel to the fourth power
■ Resistance is directly proportional to the viscosity of the blood. For example, increasing viscosity by increasing hematocrit will increase resistance and decrease blood flow.
■ Resistance is directly proportional to the length of the vessel.
■ Resistance is inversely proportional to the fourth power of the vessel radius. This relationship is powerful. For example, if blood vessel radius decreases by a factor of 2, then resistance increases by a factor of 16 (2⁴), and blood flow accordingly decreases by a factor of 16.
1. Resistances in parallel or series
a. Parallel resistance is illustrated by the systemic circulation. Each organ is supplied by an artery that branches off the aorta. The total resistance of this parallel arrangement is expressed by the following equation:
1/Rtotal = 1/Ra + 1/Rb + … + 1/Rn
where Ra, Rb, and Rn are the resistances of the renal, hepatic, and other circulations, respectively.
■ Each artery in parallel receives a fraction of the total blood flow.
■ The total resistance is less than the resistance of any of the individual arteries.
■ When an artery is added in parallel, the total resistance decreases.
■ In each parallel artery, the pressure is the same.
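A few lines of arithmetic make the r⁴ dependence and the parallel-resistance rule concrete. The sketch below is illustrative only: the numbers are arbitrary consistent units, not physiological measurements, and the helper function names are hypothetical.

```python
import math

# Worked example of the resistance relationships above (illustrative numbers,
# arbitrary consistent units -- not physiological measurements).

def poiseuille_resistance(viscosity, length, radius):
    """R = 8 * eta * l / (pi * r^4): resistance rises steeply as radius falls."""
    return 8 * viscosity * length / (math.pi * radius**4)

r1 = poiseuille_resistance(viscosity=1.0, length=1.0, radius=1.0)
r2 = poiseuille_resistance(viscosity=1.0, length=1.0, radius=0.5)
print(r2 / r1)  # 16.0 -- halving the radius increases resistance 2^4 = 16-fold

def parallel_resistance(resistances):
    """1/R_total = sum of 1/R_i: total is less than any individual resistance."""
    return 1 / sum(1 / r for r in resistances)

print(parallel_resistance([10, 10, 10]))      # ~3.33, less than any single 10
print(parallel_resistance([10, 10, 10, 10]))  # adding a parallel artery lowers R_total
```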
b. Series resistance is illustrated by the arrangement of blood vessels within a given organ. Each organ is supplied by a large artery, smaller arteries, arterioles, capillaries, and veins arranged in series. The total resistance is the sum of the individual resistances, as expressed by the following equation:
Rtotal = Rartery + Rarterioles + Rcapillaries
■ The largest proportion of resistance in this series is contributed by the arterioles.
■ Each blood vessel (e.g., the largest artery) or set of blood vessels (e.g., all of the capillaries) in series receives the same total blood flow. Thus, blood flow through the largest artery is the same as the total blood flow through all of the capillaries.
■ As blood flows through the series of blood vessels, the pressure decreases.
2. Laminar flow versus turbulent flow
■ Laminar flow is streamlined (in a straight line); turbulent flow is not.
■ Reynolds' number predicts whether blood flow will be laminar or turbulent.
■ When Reynolds' number is increased, there is a greater tendency for turbulence, which causes audible vibrations called bruits. Reynolds' number (and therefore turbulence) is increased by the following factors:
a. ↓ blood viscosity (e.g., ↓ hematocrit, anemia)
b. ↑ blood velocity (e.g., narrowing of a vessel)
3. Shear
■ is a consequence of the fact that adjacent layers of blood travel at different velocities within a blood vessel.
■ Velocity of blood is zero at the wall and highest at the center of the vessel.
■ Shear is therefore highest at the wall, where the difference in blood velocity of adjacent layers is greatest; shear is lowest at the center of the vessel, where blood velocity is constant.
E. Capacitance (compliance)
■ describes the distensibility of blood vessels.
■ is inversely related to elastance, or stiffness. The greater the amount of elastic tissue there is in a blood vessel, the higher the elastance is, and the lower the compliance is.
■ is expressed by the following equation:
C = V/P
where:
C = capacitance or compliance (mL/mm Hg)
V = volume (mL)
P = pressure (mm Hg)
■ is directly proportional to volume and inversely proportional to pressure.
■ describes how volume changes in response to a change in pressure.
■ is much greater for veins than for arteries. As a result, more blood volume is contained in the veins (unstressed volume) than in the arteries (stressed volume).
■ Changes in the capacitance of the veins produce changes in unstressed volume. For example, a decrease in venous capacitance decreases unstressed volume and increases stressed volume by shifting blood from the veins to the arteries.
■ Capacitance of the arteries decreases with age; as a person ages, the arteries become stiffer and less distensible.
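The consequence of a low arterial compliance can be made quantitative. A rough sketch, assuming the simplest possible model (the pressure change equals the volume added divided by the compliance, with illustrative numbers):

```python
# Worked example of capacitance, C = V/P (illustrative numbers only).
# For a given volume added, a less compliant vessel shows a larger pressure change:
# rearranging the definition, delta_P = delta_V / C.

def pressure_change(delta_volume_ml, compliance_ml_per_mmhg):
    return delta_volume_ml / compliance_ml_per_mmhg

stroke_volume = 70.0                         # mL ejected into the arteries per beat
print(pressure_change(stroke_volume, 2.0))   # 35 mm Hg swing in a compliant aorta
print(pressure_change(stroke_volume, 1.0))   # 70 mm Hg swing if compliance halves
# This is why stiffer (aged) arteries show a wider pulse pressure
# (see G. Arterial pressure below).
```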
F. Pressure profile in blood vessels
■ As blood flows through the systemic circulation, pressure decreases progressively because of the resistance to blood flow.
■ Thus, pressure is highest in the aorta and large arteries and lowest in the venae cavae.
■ The largest decrease in pressure occurs across the arterioles because they are the site of highest resistance.
■ Mean pressures in the systemic circulation are as follows:
1. Aorta, 100 mm Hg
2. Arterioles, 50 mm Hg
3. Capillaries, 20 mm Hg
4. Vena cava, 4 mm Hg
G. Arterial pressure (Figure 3.2)
■ is pulsatile.
■ is not constant during a cardiac cycle.
1. Systolic pressure
■ is the highest arterial pressure during a cardiac cycle.
■ is measured after the heart contracts (systole) and blood is ejected into the arterial system.
2. Diastolic pressure
■ is the lowest arterial pressure during a cardiac cycle.
■ is measured when the heart is relaxed (diastole) and blood is returned to the heart via the veins.
3. Pulse pressure
■ is the difference between the systolic and diastolic pressures.
■ The most important determinant of pulse pressure is stroke volume. As blood is ejected from the left ventricle into the arterial system, arterial pressure increases because of the relatively low capacitance of the arteries. Because diastolic pressure remains unchanged during ventricular systole, the pulse pressure increases to the same extent as the systolic pressure.
■ Decreases in capacitance, such as those that occur with the aging process, cause increases in pulse pressure.
Figure 3.2 Arterial pressure during the cardiac cycle.
4. Mean arterial pressure
■ is the average arterial pressure with respect to time.
■ is not the simple average of diastolic and systolic pressures (because a greater fraction of the cardiac cycle is spent in diastole).
■ can be calculated approximately as diastolic pressure plus one-third of pulse pressure. For example, with an arterial pressure of 120/80 mm Hg, pulse pressure is 40 mm Hg and mean arterial pressure is approximately 80 + 40/3 ≈ 93 mm Hg.
H. Venous pressure
■ is very low.
■ The veins have a high capacitance and, therefore, can hold large volumes of blood at low pressure.
I. Atrial pressure
■ is slightly lower than venous pressure.
■ Left atrial pressure is estimated by the pulmonary wedge pressure. A catheter, inserted into the smallest branches of the pulmonary artery, makes almost direct contact with the pulmonary capillaries. The measured pulmonary capillary pressure is approximately equal to the left atrial pressure.
III. Cardiac Electrophysiology
A. Electrocardiogram (ECG) (Figure 3.3)
Figure 3.3 Normal electrocardiogram measured from lead II.
1. P wave
■ represents atrial depolarization.
■ does not include atrial repolarization, which is "buried" in the QRS complex.
2. PR interval
■ is the interval from the beginning of the P wave to the beginning of the Q wave (initial depolarization of the ventricle).
■ depends on conduction velocity through the atrioventricular (AV) node. For example, if AV nodal conduction decreases (as in heart block), the PR interval increases.
■ is decreased (i.e., increased conduction velocity through AV node) by stimulation of the sympathetic nervous system.
■ is increased (i.e., decreased conduction velocity through AV node) by stimulation of the parasympathetic nervous system.
3. QRS complex
■ represents depolarization of the ventricles.
4. QT interval
■ is the interval from the beginning of the Q wave to the end of the T wave.
■ represents the entire period of depolarization and repolarization of the ventricles.
5. ST segment
■ is the segment from the end of the S wave to the beginning of the T wave.
■ is isoelectric.
■ represents the period when the ventricles are depolarized.
6. T wave
■ represents ventricular repolarization.
B. Cardiac action potentials (see Table 1.3)
■ The resting membrane potential is determined by the conductance to K+ and approaches the K+ equilibrium potential.
■ Inward current brings positive charge into the cell and depolarizes the membrane potential.
■ Outward current takes positive charge out of the cell and hyperpolarizes the membrane potential.
■ The role of Na+, K+-adenosine triphosphatase (ATPase) is to maintain ionic gradients across cell membranes.
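The claim that the resting potential "approaches the K+ equilibrium potential" can be checked with the Nernst equation. A minimal sketch, assuming typical textbook ion concentrations (the specific values and the helper name nernst_mv are illustrative, not from this text):

```python
import math

# Estimating the K+ equilibrium potential with the Nernst equation,
# E = (RT/zF) * ln([X]out / [X]in). Concentrations below are typical
# textbook values for cardiac muscle, used here only for illustration.

def nernst_mv(z, conc_out, conc_in, temp_c=37.0):
    """Equilibrium potential in millivolts for an ion of valence z."""
    R = 8.314        # gas constant, J/(mol*K)
    F = 96485.0      # Faraday constant, C/mol
    T = temp_c + 273.15
    return 1000 * (R * T) / (z * F) * math.log(conc_out / conc_in)

# [K+]out ~ 4 mM, [K+]in ~ 150 mM  ->  about -97 mV,
# close to the -90 mV resting potential of ventricular muscle, because the
# resting membrane is predominantly permeable to K+.
print(round(nernst_mv(z=1, conc_out=4.0, conc_in=150.0)))
```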
1. Ventricles, atria, and the Purkinje system (Figure 3.4)
■ have stable resting membrane potentials of about −90 millivolts (mV). This value approaches the K+ equilibrium potential.
■ Action potentials are of long duration, especially in Purkinje fibers, where they last 300 milliseconds (msec).
Figure 3.4 Ventricular action potential.
a. Phase 0
■ is the upstroke of the action potential.
■ is caused by a transient increase in Na+ conductance. This increase results in an inward Na+ current that depolarizes the membrane.
■ At the peak of the action potential, the membrane potential approaches the Na+ equilibrium potential.
b. Phase 1
■ is a brief period of initial repolarization.
■ Initial repolarization is caused by an outward current, in part because of the movement of K+ ions (favored by both chemical and electrical gradients) out of the cell and in part because of a decrease in Na+ conductance.
c. Phase 2
■ is the plateau of the action potential.
■ is caused by a transient increase in Ca2+ conductance, which results in an inward Ca2+ current, and by an increase in K+ conductance.
■ During phase 2, outward and inward currents are approximately equal, so the membrane potential is stable at the plateau level.
d. Phase 3
■ is repolarization.
■ During phase 3, Ca2+ conductance decreases, and K+ conductance increases and therefore predominates.
■ The high K+ conductance results in a large outward K+ current (IK), which hyperpolarizes the membrane back toward the K+ equilibrium potential.
e. Phase 4
■ is the resting membrane potential.
■ is a period during which inward and outward currents (IK1) are equal and the membrane potential approaches the K+ equilibrium potential.
2. Sinoatrial (SA) node (Figure 3.5)
■ is normally the pacemaker of the heart.
■ has an unstable resting potential.
■ exhibits phase 4 depolarization, or automaticity.
■ The AV node and the His-Purkinje systems are latent pacemakers that may exhibit automaticity and override the SA node if it is suppressed.
■ The intrinsic rate of phase 4 depolarization (and heart rate) is fastest in the SA node and slowest in the His-Purkinje system: SA node > AV node > His-Purkinje.
Figure 3.5 Sinoatrial nodal action potential.
a. Phase 0
■ is the upstroke of the action potential.
■ is caused by an increase in Ca2+ conductance. This increase causes an inward Ca2+ current that drives the membrane potential toward the Ca2+ equilibrium potential.
■ The ionic basis for phase 0 in the SA node is different from that in the ventricles, atria, and Purkinje fibers (where it is the result of an inward Na+ current).
b. Phase 3
■ is repolarization.
■ is caused by an increase in K+ conductance. This increase results in an outward K+ current that causes repolarization of the membrane potential.
c. Phase 4
■ is slow depolarization.
■ accounts for the pacemaker activity of the SA node (automaticity).
■ is caused by an increase in Na+ conductance, which results in an inward Na+ current called If.
■ If is turned on by repolarization of the membrane potential during the preceding action potential.
d. Phases 1 and 2
■ are not present in the SA node action potential.
3. AV node
■ Upstroke of the action potential in the AV node is the result of an inward Ca2+ current (as in the SA node).
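Phase 4 automaticity can be caricatured as a linear drift from the maximum diastolic potential up to threshold. The sketch below is a toy model under stated assumptions (the voltages, slope values, action potential duration, and the helper name firing_rate_bpm are illustrative round numbers, not measured values); it shows why a steeper phase 4 slope raises firing rate and a flatter slope lowers it, which is the basis of the autonomic effects described in section E below.

```python
# Toy model of SA-node automaticity: phase 4 depolarizes linearly from the
# maximum diastolic potential to threshold, then an action potential fires.
# Slopes and voltages are illustrative, not measured values.

def firing_rate_bpm(phase4_slope_mv_per_ms, start_mv=-65.0, threshold_mv=-40.0,
                    ap_duration_ms=150.0):
    """Cycle length = time to drift to threshold + action potential duration."""
    drift_ms = (threshold_mv - start_mv) / phase4_slope_mv_per_ms
    return 60000.0 / (drift_ms + ap_duration_ms)

print(round(firing_rate_bpm(0.04)))  # baseline phase 4 slope
print(round(firing_rate_bpm(0.08)))  # steeper phase 4 (more If) -> faster rate
print(round(firing_rate_bpm(0.02)))  # flatter phase 4 (less If) -> slower rate
```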
C.  Conduction velocity
■ ■reflects the time required for excitation to spread throughout cardiac tissue.
■ ■depends on the size of the inward current during the upstroke of the action potential. The larger the inward current, the higher the conduction velocity.
■ ■is fastest in the Purkinje system.
■ ■is slowest in the AV node (seen as the PR interval on the ECG), allowing time for ventricular filling before ventricular contraction. If conduction velocity through the AV node is increased, ventricular filling may be compromised.
D.  Excitability
■ ■is the ability of cardiac cells to initiate action potentials in response to inward, depolarizing current.
■ ■reflects the recovery of channels that carry the inward currents for the upstroke of the action potential.
■ ■changes over the course of the action potential. These changes in excitability are described by refractory periods (Figure 3.6).
Figure 3.6 Absolute refractory period (ARP), effective refractory period (ERP), and relative refractory period (RRP) in the ventricle.
1.  Absolute refractory period (ARP)
■ ■begins with the upstroke of the action potential and ends after the plateau.
■ ■reflects the time during which no action potential can be initiated, regardless of how much inward current is supplied.
2.  Effective refractory period (ERP)
■ ■is slightly longer than the ARP.
■ ■is the period during which a conducted action potential cannot be elicited.
3.  Relative refractory period (RRP)
■ ■is the period immediately after the ARP when repolarization is almost complete.
■ ■is the period during which an action potential can be elicited, but more than the usual inward current is required.
E.  Autonomic effects on heart rate and conduction velocity (Table 3.1)
■ ■See IV C for a discussion of inotropic effects.
Table 3.1 Autonomic Effects on the Heart and Blood Vessels
Heart rate: sympathetic ↑ (β1 receptor); parasympathetic ↓ (muscarinic receptor)
Conduction velocity (AV node): sympathetic ↑ (β1); parasympathetic ↓ (muscarinic)
Contractility: sympathetic ↑ (β1); parasympathetic ↓, atria only (muscarinic)
Vascular smooth muscle, skin and splanchnic: sympathetic constriction (α1)
Vascular smooth muscle, skeletal muscle: sympathetic constriction (α1) or relaxation (β2)
AV = atrioventricular.
1.  Definitions of chronotropic and dromotropic effects
a.  Chronotropic effects
■ ■produce changes in heart rate.
■ ■A negative chronotropic effect decreases heart rate by decreasing the firing rate of the SA node.
■ ■A positive chronotropic effect increases heart rate by increasing the firing rate of the SA node.
b.  Dromotropic effects
■ ■produce changes in conduction velocity, primarily in the AV node.
■ ■A negative dromotropic effect decreases conduction velocity through the AV node, slowing the conduction of action potentials from the atria to the ventricles and increasing the PR interval.
■ ■A positive dromotropic effect increases conduction velocity through the AV node, speeding the conduction of action potentials from the atria to the ventricles and decreasing the PR interval.
2.  Parasympathetic effects on heart rate and conduction velocity
■ ■The SA node, atria, and AV node have parasympathetic vagal innervation, but the ventricles do not. The neurotransmitter is acetylcholine (ACh), which acts at muscarinic receptors.
a.  Negative chronotropic effect
■ ■decreases heart rate by decreasing the rate of phase 4 depolarization.
■ ■Fewer action potentials occur per unit time because the threshold potential is reached more slowly and, therefore, less frequently.
■ ■The mechanism of the negative chronotropic effect is decreased If, the inward Na+ current that is responsible for phase 4 depolarization in the SA node.
b.  Negative dromotropic effect
■ ■decreases conduction velocity through the AV node.
■ ■Action potentials are conducted more slowly from the atria to the ventricles.
■ ■increases the PR interval.
■ ■The mechanism of the negative dromotropic effect is decreased inward Ca2+ current and increased outward K+ current.
3.  Sympathetic effects on heart rate and conduction velocity
■ ■Norepinephrine is the neurotransmitter, acting at β1 receptors.
a.  Positive chronotropic effect
■ ■increases heart rate by increasing the rate of phase 4 depolarization.
■ ■More action potentials occur per unit time because the threshold potential is reached more quickly and, therefore, more frequently.
■ ■The mechanism of the positive chronotropic effect is increased If, the inward Na+ current that is responsible for phase 4 depolarization in the SA node.
b.  Positive dromotropic effect
■ ■increases conduction velocity through the AV node.
■ ■Action potentials are conducted more rapidly from the atria to the ventricles, and ventricular filling may be compromised.
■ ■decreases the PR interval.
■ ■The mechanism of the positive dromotropic effect is increased inward Ca2+ current.
IV.  Cardiac Muscle and Cardiac Output
A.  Myocardial cell structure
1.  Sarcomere
■ ■is the contractile unit of the myocardial cell.
■ ■is similar to the contractile unit in skeletal muscle.
■ ■runs from Z line to Z line.
■ ■contains thick filaments (myosin) and thin filaments (actin, troponin, tropomyosin).
■ ■As in skeletal muscle, shortening occurs according to a sliding filament model, which states that thin filaments slide along adjacent thick filaments by forming and breaking cross-bridges between actin and myosin.
2.  Intercalated disks
■ ■occur at the ends of the cells.
■ ■maintain cell-to-cell cohesion.
3.  Gap junctions
■ ■are present at the intercalated disks.
■ ■are low-resistance paths between cells that allow for rapid electrical spread of action potentials.
■ ■account for the observation that the heart behaves as an electrical syncytium.
4.  Mitochondria
■ ■are more numerous in cardiac muscle than in skeletal muscle.
5.  T tubules
■ ■are continuous with the cell membrane.
■ ■invaginate the cells at the Z lines and carry action potentials into the cell interior.
■ ■are well developed in the ventricles, but poorly developed in the atria.
■ ■form dyads with the sarcoplasmic reticulum.
6.  Sarcoplasmic reticulum (SR)
■ ■consists of small-diameter tubules in close proximity to the contractile elements.
■ ■is the site of storage and release of Ca2+ for excitation–contraction coupling.
B.  Steps in excitation–contraction coupling
1.  The action potential spreads from the cell membrane into the T tubules.
2.  During the plateau of the action potential, Ca2+ conductance is increased and Ca2+ enters the cell from the extracellular fluid (inward Ca2+ current) through L-type Ca2+ channels (dihydropyridine receptors).
3.  This Ca2+ entry triggers the release of even more Ca2+ from the SR (Ca2+-induced Ca2+ release) through Ca2+ release channels (ryanodine receptors).
■ ■The amount of Ca2+ released from the SR depends on the:
a. amount of Ca2+ previously stored in the SR.
b. size of the inward Ca2+ current during the plateau of the action potential.
4.  As a result of this Ca2+ release, intracellular [Ca2+] increases.
5.  Ca2+ binds to troponin C, and tropomyosin is moved out of the way, removing the inhibition of actin and myosin binding.
6.  Actin and myosin bind, the thick and thin filaments slide past each other, and the myocardial cell contracts. The magnitude of the tension that develops is proportional to the intracellular [Ca2+].
7.  Relaxation occurs when Ca2+ is reaccumulated by the SR by an active Ca2+-ATPase pump.
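The dependence of tension on the Ca2+ transient, and of the transient on both the trigger current and the SR store (steps 2–6 above), can be caricatured numerically. The sketch below is a deliberately simplified toy model, not a published one; all gains and values are arbitrary.

```python
def tension_from_cicr(inward_ca_current, sr_ca_load, release_gain=2.0):
    """Toy model of Ca2+-induced Ca2+ release: SR release scales with both the
    trigger (inward Ca2+ current) and the SR store; tension is taken as
    proportional to the resulting intracellular Ca2+ transient."""
    sr_release = release_gain * inward_ca_current * sr_ca_load
    ca_transient = inward_ca_current + sr_release
    return ca_transient  # tension, arbitrary units (proportionality constant = 1)

# Beta-1 stimulation: larger trigger current and a fuller SR store -> more tension.
print(tension_from_cicr(1.0, 1.0))   # baseline: 3.0 (arbitrary units)
print(tension_from_cicr(1.5, 1.3))   # sympathetic stimulation: 5.4
```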
C.  Contractility
■ ■is the intrinsic ability of cardiac muscle to develop force at a given muscle length.
■ ■is also called inotropism.
■ ■is related to the intracellular Ca2+ concentration.
■ ■can be estimated by the ejection fraction (stroke volume/end-diastolic volume), which is normally 0.55 (55%).
■ ■Positive inotropic agents produce an increase in contractility.
■ ■Negative inotropic agents produce a decrease in contractility.
1.  Factors that increase contractility (positive inotropism) [see Table 3.1]
a.  Increased heart rate
■ ■When more action potentials occur per unit time, more Ca2+ enters the myocardial cells during the action potential plateaus, more Ca2+ is stored in the SR, more Ca2+ is released from the SR, and greater tension is produced during contraction.
■ ■Examples of the effect of increased heart rate are
(1)  Positive staircase, or Bowditch staircase (or Treppe). Increased heart rate increases the force of contraction in a stepwise fashion as the intracellular [Ca2+] increases cumulatively over several beats.
(2)  Postextrasystolic potentiation. The beat that occurs after an extrasystolic beat has increased force of contraction because "extra" Ca2+ entered the cells during the extrasystole.
b.  Sympathetic stimulation (catecholamines) via β1 receptors (see Table 3.1)
■ ■increases the force of contraction by two mechanisms:
(1)  It increases the inward Ca2+ current during the plateau of each cardiac action potential.
(2)  It increases the activity of the Ca2+ pump of the SR (by phosphorylation of phospholamban); as a result, more Ca2+ is accumulated by the SR and thus more Ca2+ is available for release in subsequent beats.
c.  Cardiac glycosides (digitalis)
■ ■increase the force of contraction by inhibiting Na+, K+-ATPase in the myocardial cell membrane (Figure 3.7).
■ ■As a result of this inhibition, the intracellular [Na+] increases, diminishing the Na+ gradient across the cell membrane.
■ ■Na+–Ca2+ exchange (a mechanism that extrudes Ca2+ from the cell) depends on the size of the Na+ gradient and thus is diminished, producing an increase in intracellular [Ca2+].
Figure 3.7 Stepwise explanation of how ouabain (digitalis) causes an increase in intracellular [Ca2+] and myocardial contractility. The circled numbers show the sequence of events.
2.  Factors that decrease contractility (negative inotropism) [see Table 3.1]
■ ■Parasympathetic stimulation (ACh) via muscarinic receptors decreases the force of contraction in the atria by decreasing the inward Ca2+ current during the plateau of the cardiac action potential.
D.  Length–tension relationship in the ventricles (Figure 3.8)
■ ■describes the effect of ventricular muscle cell length on the force of contraction.
■ ■is analogous to the relationship in skeletal muscle.
1.  Preload
■ ■is end-diastolic volume, which is related to right atrial pressure.
■ ■When venous return increases, end-diastolic volume increases and stretches or lengthens the ventricular muscle fibers (see Frank-Starling relationship, IV D 5).
2.  Afterload
■ ■for the left ventricle is aortic pressure. Increases in aortic pressure cause an increase in afterload on the left ventricle.
■ ■for the right ventricle is pulmonary artery pressure. Increases in pulmonary artery pressure cause an increase in afterload on the right ventricle.
3.  Sarcomere length
■ ■determines the maximum number of cross-bridges that can form between actin and myosin.
■ ■determines the maximum tension, or force of contraction.
4.  Velocity of contraction at a fixed muscle length
■ ■is maximal when the afterload is zero.
■ ■is decreased by increases in afterload.
5.  Frank-Starling relationship
■ ■describes the increases in stroke volume and cardiac output that occur in response to an increase in venous return or end-diastolic volume (see Figure 3.8).
■ ■is based on the length–tension relationship in the ventricle. Increases in end-diastolic volume cause an increase in ventricular fiber length, which produces an increase in developed tension.
■ ■is the mechanism that matches cardiac output to venous return. The greater the venous return, the greater the cardiac output.
■ ■Changes in contractility shift the Frank-Starling curve upward (increased contractility) or downward (decreased contractility).
a.  Increases in contractility cause an increase in cardiac output for any level of right atrial pressure or end-diastolic volume.
b.  Decreases in contractility cause a decrease in cardiac output for any level of right atrial pressure or end-diastolic volume.
Figure 3.8 Frank-Starling relationship and the effect of positive and negative inotropic agents.
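A toy Frank-Starling curve makes the two ideas above concrete: cardiac output rises with end-diastolic volume, and a change in contractility shifts the whole curve up or down. The saturating form and all constants below are illustrative assumptions, not values from the text.

```python
def cardiac_output_l_min(edv_ml, contractility=1.0, co_max=10.0, edv_half=120.0):
    """Toy Frank-Starling curve: output rises with end-diastolic volume and
    saturates; contractility scales the whole curve up (positive inotropy)
    or down (negative inotropy). All constants are illustrative."""
    return co_max * contractility * edv_ml / (edv_ml + edv_half)

for edv in (100, 140, 180):
    control = cardiac_output_l_min(edv)
    inotrope = cardiac_output_l_min(edv, contractility=1.3)
    print(f"EDV {edv} mL: {control:.1f} L/min (control) vs {inotrope:.1f} L/min (positive inotrope)")
```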
E.  Ventricular pressure–volume loops (Figure 3.9)
■ ■are constructed by combining systolic and diastolic pressure curves.
■ ■The diastolic pressure curve is the relationship between diastolic pressure and diastolic volume in the ventricle.
■ ■The systolic pressure curve is the corresponding relationship between systolic pressure and systolic volume in the ventricle.
■ ■A single left ventricular cycle of contraction, ejection, relaxation, and refilling can be visualized by combining the two curves into a pressure–volume loop.
Figure 3.9 Left ventricular pressure–volume loop.
1.  Steps in the cycle
a.  1 → 2 (isovolumetric contraction). The cycle begins at the end of diastole at point 1. The left ventricle is filled with blood from the left atrium and its volume is about 140 mL (end-diastolic volume). Ventricular pressure is low because the ventricular muscle is relaxed. On excitation, the ventricle contracts and ventricular pressure increases. The mitral valve closes when left ventricular pressure is greater than left atrial pressure. Because all valves are closed, no blood can be ejected from the ventricle (isovolumetric).
b.  2 → 3 (ventricular ejection). The aortic valve opens at point 2 when pressure in the left ventricle exceeds pressure in the aorta. Blood is ejected into the aorta, and ventricular volume decreases. The volume that is ejected in this phase is the stroke volume.
Thus, stroke volume can be measured graphically by the width of the pressure–volume loop. The volume remaining in the left ventricle at point 3 is end-systolic volume.
c.  3 → 4 (isovolumetric relaxation). At point 3, the ventricle relaxes. When ventricular pressure decreases to less than aortic pressure, the aortic valve closes. Because all of the valves are closed again, ventricular volume is constant (isovolumetric) during this phase.
d.  4 → 1 (ventricular filling). Once left ventricular pressure decreases to less than left atrial pressure, the mitral valve opens and filling of the ventricle begins. During this phase, ventricular volume increases to about 140 mL (the end-diastolic volume).
2.  Changes in the ventricular pressure–volume loop are caused by several factors (Figure 3.10).
a.  Increased preload (see Figure 3.10A)
■ ■refers to an increase in end-diastolic volume and is the result of increased venous return (e.g., increased blood volume or decreased venous capacitance).
■ ■causes an increase in stroke volume based on the Frank-Starling relationship.
■ ■The increase in stroke volume is reflected in increased width of the pressure–volume loop.
b.  Increased afterload (see Figure 3.10B)
■ ■refers to an increase in aortic pressure.
■ ■The ventricle must eject blood against a higher pressure, resulting in a decrease in stroke volume.
■ ■The decrease in stroke volume is reflected in decreased width of the pressure–volume loop.
■ ■The decrease in stroke volume results in an increase in end-systolic volume.
c.  Increased contractility (see Figure 3.10C)
■ ■The ventricle develops greater tension than usual during systole, causing an increase in stroke volume.
■ ■The increase in stroke volume results in a decrease in end-systolic volume.
Figure 3.10 Effects of changes in (A) preload, (B) afterload, and (C) contractility on the ventricular pressure–volume loop.
F.  Cardiac and vascular function curves (Figure 3.11)
■ ■are simultaneous plots of cardiac output and venous return as a function of right atrial pressure or end-diastolic volume.
1.  The cardiac function (cardiac output) curve
■ ■depicts the Frank-Starling relationship for the ventricle.
■ ■shows that cardiac output is a function of end-diastolic volume.
2.  The vascular function (venous return) curve
■ ■depicts the relationship between blood flow through the vascular system (or venous return) and right atrial pressure.
a.  Mean systemic pressure
■ ■is the point at which the vascular function curve intersects the x-axis.
■ ■equals right atrial pressure when there is "no flow" in the cardiovascular system.
■ ■is measured when the heart is stopped experimentally. Under these conditions, cardiac output and venous return are zero, and pressure is equal throughout the cardiovascular system.
(1)  Mean systemic pressure is increased by an increase in blood volume or by a decrease in venous capacitance (where blood is shifted from the veins to the arteries). An increase in mean systemic pressure is reflected in a shift of the vascular function curve to the right (Figure 3.12).
Figure 3.11 Simultaneous plots of the cardiac and vascular function curves.
The curves cross at the equilibrium point for the cardiovascular system.
(2)  Mean systemic pressure is decreased by a decrease in blood volume or by an increase in venous capacitance (where blood is shifted from the arteries to the veins). A decrease in mean systemic pressure is reflected in a shift of the vascular function curve to the left.
Figure 3.12 Effect of increased blood volume on the mean systemic pressure, vascular function curve, cardiac output, and right atrial pressure.
b.  Slope of the venous return curve
■ ■is determined by the resistance of the arterioles.
(1)  A clockwise rotation (not illustrated) of the venous return curve indicates a decrease in total peripheral resistance (TPR). When TPR is decreased for a given right atrial pressure, there is an increase in venous return (i.e., vasodilation of the arterioles "allows" more blood to flow from the arteries to the veins and back to the heart).
(2)  A counterclockwise rotation of the venous return curve indicates an increase in TPR (Figure 3.13). When TPR is increased for a given right atrial pressure, there is a decrease in venous return to the heart (i.e., vasoconstriction of the arterioles decreases blood flow from the arteries to the veins and back to the heart).
Figure 3.13 Effect of increased total peripheral resistance (TPR) on the cardiac and vascular function curves and on cardiac output.
3.  Combining cardiac output and venous return curves
■ ■When cardiac output and venous return are simultaneously plotted as a function of right atrial pressure, they intersect at a single value of right atrial pressure.
■ ■The point at which the two curves intersect is the equilibrium, or steady-state, point (see Figure 3.11). Equilibrium occurs when cardiac output equals venous return.
■ ■Cardiac output can be changed by altering the cardiac output curve, the venous return curve, or both curves simultaneously.
■ ■The superimposed curves can be used to predict the direction and magnitude of changes in cardiac output and the corresponding values of right atrial pressure.
a.  Inotropic agents change the cardiac output curve.
(1)  Positive inotropic agents (e.g., cardiac glycosides) produce increased contractility and increased cardiac output (Figure 3.14).
■ ■The equilibrium, or intersection, point shifts to a higher cardiac output and a correspondingly lower right atrial pressure.
■ ■Right atrial pressure decreases because more blood is ejected from the heart on each beat (increased stroke volume).
(2)  Negative inotropic agents produce decreased contractility and decreased cardiac output (not illustrated).
b.  Changes in blood volume or venous capacitance change the venous return curve.
(1)  Increases in blood volume or decreases in venous capacitance increase mean systemic pressure, shifting the venous return curve to the right in a parallel fashion (see Figure 3.12). A new equilibrium, or intersection, point is established at which both cardiac output and right atrial pressure are increased.
(2)  Decreases in blood volume (e.g., hemorrhage) or increases in venous capacitance have the opposite effect—decreased mean systemic pressure and a shift of the venous return curve to the left in a parallel fashion. A new equilibrium point is established at which both cardiac output and right atrial pressure are decreased (not illustrated).
c.  Changes in TPR change both the cardiac output and the venous return curves.
■ ■Changes in TPR alter both curves simultaneously; therefore, the responses are more complicated than those noted in the previous examples.
(1)  Increasing TPR causes a decrease in both cardiac output and venous return (see Figure 3.13).
(a)  A counterclockwise rotation of the venous return curve occurs. Increased TPR results in decreased venous return as blood is retained on the arterial side.
(b)  A downward shift of the cardiac output curve is caused by the increased aortic pressure (increased afterload) as the heart pumps against a higher pressure.
(c)  As a result of these simultaneous changes, a new equilibrium point is established at which both cardiac output and venous return are decreased, but right atrial pressure is unchanged.
(2)  Decreasing TPR causes an increase in both cardiac output and venous return (not illustrated).
(a)  A clockwise rotation of the venous return curve occurs. Decreased TPR results in increased venous return as more blood is allowed to flow back to the heart from the arterial side.
(b)  An upward shift of the cardiac output curve is caused by the decreased aortic pressure (decreased afterload) as the heart pumps against a lower pressure.
(c)  As a result of these simultaneous changes, a new equilibrium point is established at which both cardiac output and venous return are increased, but right atrial pressure is unchanged.
Figure 3.14 Effect of a positive inotropic agent on the cardiac function curve, cardiac output, and right atrial pressure.
G.  Stroke volume, cardiac output, and ejection fraction
1.  Stroke volume
■ ■is the volume ejected from the ventricle on each beat.
■ ■is expressed by the following equation:
Stroke volume = End-diastolic volume − End-systolic volume
2.  Cardiac output
■ ■is expressed by the following equation:
Cardiac output = Stroke volume × Heart rate
3.  Ejection fraction
■ ■is the fraction of the end-diastolic volume ejected in each stroke volume.
■ ■is related to contractility.
■ ■is normally 0.55 or 55%.
■ ■is expressed by the following equation:
Ejection fraction = Stroke volume ÷ End-diastolic volume
H.  Stroke work
■ ■is the work the heart performs on each beat.
■ ■is equal to pressure × volume. For the left ventricle, pressure is aortic pressure and volume is stroke volume.
■ ■is expressed by the following equation:
Stroke work = Aortic pressure × Stroke volume
■ ■Fatty acids are the primary energy source for stroke work.
I.  Cardiac oxygen (O2) consumption
■ ■is directly related to the amount of tension developed by the ventricles.
■ ■is increased by
1.  Increased afterload (increased aortic pressure)
2.  Increased size of the heart (Laplace's law states that tension is proportional to the radius of a sphere.)
3.  Increased contractility
4.  Increased heart rate
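The relationships in sections G and H reduce to one-line calculations, and the same arithmetic carries over to the Fick example worked in section J just below. The sketch uses the text's end-diastolic volume of about 140 mL; the end-systolic volume of 70 mL is assumed for illustration.

```python
def stroke_volume(edv_ml, esv_ml):
    return edv_ml - esv_ml                        # SV = EDV - ESV

def cardiac_output_ml_min(sv_ml, heart_rate):
    return sv_ml * heart_rate                     # CO = SV x heart rate

def ejection_fraction(sv_ml, edv_ml):
    return sv_ml / edv_ml                         # EF = SV / EDV, normally ~0.55

def stroke_work(aortic_pressure_mmhg, sv_ml):
    return aortic_pressure_mmhg * sv_ml           # stroke work = aortic pressure x SV

def fick_cardiac_output(o2_consumption, art_o2, ven_o2):
    return o2_consumption / (art_o2 - ven_o2)     # CO = O2 consumption / (a - v O2 difference)

# EDV ~140 mL (as in the pressure-volume loop above); ESV of 70 mL is an assumption.
sv = stroke_volume(140, 70)                       # 70 mL
print(cardiac_output_ml_min(sv, 70) / 1000.0)     # 4.9 L/min at 70 beats/min
print(ejection_fraction(sv, 140))                 # 0.5

# Fick worked example (section J): 250 mL O2/min, 0.20 and 0.15 mL O2/mL blood.
co = fick_cardiac_output(250.0, 0.20, 0.15)       # 5000 mL/min = 5.0 L/min
print(co / 72.0)                                  # stroke volume ~69.4 mL/beat
```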
J.  Measurement of cardiac output by the Fick principle
■ ■The Fick principle for measuring cardiac output is expressed by the following equation:
Cardiac output = O2 consumption ÷ ([O2]pulmonary vein − [O2]pulmonary artery)
■ ■The equation is solved as follows:
1.  O2 consumption for the whole body is measured.
2.  Pulmonary vein [O2] is measured in systemic arterial blood.
3.  Pulmonary artery [O2] is measured in systemic mixed venous blood.
■ ■For example, a 70-kg man has a resting O2 consumption of 250 mL/min, a systemic arterial O2 content of 0.20 mL O2/mL of blood, a systemic mixed venous O2 content of 0.15 mL O2/mL of blood, and a heart rate of 72 beats/min. What is his cardiac output? What is his stroke volume?
Cardiac output = 250 mL/min ÷ (0.20 mL O2/mL − 0.15 mL O2/mL) = 5000 mL/min, or 5.0 L/min
Stroke volume = Cardiac output ÷ Heart rate = 5000 mL/min ÷ 72 beats/min = 69.4 mL/beat
V.  Cardiac Cycle
■ ■Figure 3.15 shows the mechanical and electrical events of a single cardiac cycle. The seven phases are separated by vertical lines.
■ ■Use the ECG as an event marker.
■ ■Opening and closing of valves causes the physiologic heart sounds.
■ ■When all valves are closed, ventricular volume is constant, and the phase is called isovolumetric.
A.  Atrial systole
■ ■is preceded by the P wave, which represents electrical activation of the atria.
■ ■contributes to, but is not essential for, ventricular filling.
■ ■The increase in atrial pressure (venous pressure) caused by atrial systole is the a wave on the venous pulse curve.
■ ■In ventricular hypertrophy, filling of the ventricle by atrial systole causes the fourth heart sound, which is not audible in normal adults.
B.  Isovolumetric ventricular contraction
■ ■begins during the QRS complex, which represents electrical activation of the ventricles.
■ ■When ventricular pressure becomes greater than atrial pressure, the AV valves close. Their closure corresponds to the first heart sound. Because the mitral valve closes before the tricuspid valve, the first heart sound may be split.
■ ■Ventricular pressure increases isovolumetrically as a result of ventricular contraction. However, no blood leaves the ventricle during this phase because the aortic valve is closed.
C.  Rapid ventricular ejection
■ ■Ventricular pressure reaches its maximum value during this phase.
■ ■The c wave on the venous pulse curve occurs because of bulging of the tricuspid valve into the right atrium during right ventricular contraction.
■ ■When ventricular pressure becomes greater than aortic pressure, the aortic valve opens.
■ ■Rapid ejection of blood into the aorta occurs because of the pressure gradient between the ventricle and the aorta.
■ ■Ventricular volume decreases dramatically because most of the stroke volume is ejected during this phase.
■ ■Atrial filling begins.
■ ■The onset of the T wave, which represents repolarization of the ventricles, marks the end of both ventricular contraction and rapid ventricular ejection.
D.  Reduced ventricular ejection
■ ■Ejection of blood from the ventricle continues, but is slower.
■ ■Ventricular pressure begins to decrease.
■ ■Aortic pressure also decreases because of the runoff of blood from large arteries into smaller arteries.
Figure 3.15 The cardiac cycle. ECG = electrocardiogram; A = atrial systole; B = isovolumetric ventricular contraction; C = rapid ventricular ejection; D = reduced ventricular ejection; E = isovolumetric ventricular relaxation; F = rapid ventricular filling; G = reduced ventricular filling.
■ ■Atrial filling continues.
■ ■The v wave on the venous pulse curve represents blood flow into the right atrium (rising phase of the wave) and from the right atrium into the right ventricle (falling phase of the wave).
E.  Isovolumetric ventricular relaxation
■ ■Repolarization of the ventricles is now complete (end of the T wave).
■ ■The aortic valve closes, followed by closure of the pulmonic valve. Closure of the semilunar valves corresponds to the second heart sound. Inspiration delays closure of the pulmonic valve and thus causes splitting of the second heart sound.
■ ■The AV valves remain closed during most of this phase.
■ ■Ventricular pressure decreases rapidly because the ventricle is now relaxed.
■ ■Ventricular volume is constant (isovolumetric) because all of the valves are closed.
■ ■The "blip" in the aortic pressure tracing occurs after closure of the aortic valve and is called the dicrotic notch, or incisura.
F.  Rapid ventricular filling
■ ■When ventricular pressure becomes less than atrial pressure, the mitral valve opens.
■ ■With the mitral valve open, ventricular filling from the atrium begins.
■ ■Aortic pressure continues to decrease because blood continues to run off into the smaller arteries.
■ ■Rapid flow of blood from the atria into the ventricles causes the third heart sound, which is normal in children but, in adults, is associated with disease.
G.  Reduced ventricular filling (diastasis)
■ ■is the longest phase of the cardiac cycle.
■ ■Ventricular filling continues, but at a slower rate.
■ ■The time required for diastasis and ventricular filling depends on heart rate. For example, increases in heart rate cause decreased time available for ventricular refilling, decreased end-diastolic volume, and decreased stroke volume.
VI.  Regulation of Arterial Pressure
■ ■The most important mechanisms for regulating arterial pressure are a fast, neurally mediated baroreceptor mechanism and a slower, hormonally regulated renin–angiotensin–aldosterone mechanism.
A.  Baroreceptor reflex
■ ■includes fast, neural mechanisms.
■ ■is a negative feedback system that is responsible for the minute-to-minute regulation of arterial blood pressure.
■ ■Baroreceptors are stretch receptors located within the walls of the carotid sinus near the bifurcation of the common carotid arteries.
1.  Steps in the baroreceptor reflex (Figure 3.16)
a.  A decrease in arterial pressure decreases stretch on the walls of the carotid sinus.
■ ■Because the baroreceptors are most sensitive to changes in arterial pressure, rapidly decreasing arterial pressure produces the greatest response.
■ ■Additional baroreceptors in the aortic arch respond to increases, but not to decreases, in arterial pressure.
b.  Decreased stretch decreases the firing rate of the carotid sinus nerve [Hering's nerve, cranial nerve (CN) IX], which carries information to the vasomotor center in the brain stem.
c.  The set point for mean arterial pressure in the vasomotor center is about 100 mm Hg. Therefore, if mean arterial pressure is less than 100 mm Hg, a series of autonomic responses is coordinated by the vasomotor center. These changes will attempt to increase blood pressure toward normal.
d.  The responses of the vasomotor center to a decrease in mean arterial blood pressure are coordinated to increase the arterial pressure back to 100 mm Hg. The responses are decreased parasympathetic (vagal) outflow to the heart and increased sympathetic outflow to the heart and blood vessels.
■ ■The following four effects attempt to increase the arterial pressure back to normal:
(1)  ↑ heart rate, resulting from decreased parasympathetic tone and increased sympathetic tone to the SA node of the heart.
(2)  ↑ contractility and stroke volume, resulting from increased sympathetic tone to the heart. Together with the increase in heart rate, the increases in contractility and stroke volume produce an increase in cardiac output that increases arterial pressure.
(3)  ↑ vasoconstriction of arterioles, resulting from the increased sympathetic outflow. As a result, TPR and arterial pressure will increase.
(4)  ↑ vasoconstriction of veins (venoconstriction), resulting from the increased sympathetic outflow. Constriction of the veins causes a decrease in unstressed volume and an increase in venous return to the heart. The increase in venous return causes an increase in cardiac output by the Frank-Starling mechanism.
Figure 3.16 Role of the baroreceptor reflex in the cardiovascular response to hemorrhage. Pa = mean arterial pressure; TPR = total peripheral resistance.
2.  Example of the baroreceptor reflex: response to acute blood loss (see Figure 3.16)
3.  Example of the baroreceptor mechanism: Valsalva maneuver
■ ■The integrity of the baroreceptor mechanism can be tested with the Valsalva maneuver (i.e., expiring against a closed glottis).
■ ■Expiring against a closed glottis causes an increase in intrathoracic pressure, which decreases venous return.
■ ■The decrease in venous return causes a decrease in cardiac output and arterial pressure (Pa).
■ ■If the baroreceptor reflex is intact, the decrease in Pa is sensed by the baroreceptors, leading to an increase in sympathetic outflow to the heart and blood vessels. In the test, an increase in heart rate would be noted.
■ ■When the person stops the maneuver, there is a rebound increase in venous return, cardiac output, and Pa. The increase in Pa is sensed by the baroreceptors, which direct a decrease in heart rate.
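The negative-feedback character of the reflex can be sketched as a toy controller that nudges Pa back toward the roughly 100 mm Hg set point. The gain and step structure below are arbitrary illustrative choices, not physiologic constants.

```python
def baroreceptor_step(pa, set_point=100.0, gain=0.5):
    """One corrective step: the deviation from the ~100 mm Hg set point drives
    autonomic output that pushes Pa back toward normal (negative feedback)."""
    return pa + gain * (set_point - pa)

pa = 70.0   # e.g., after acute blood loss
for step in range(5):
    pa = baroreceptor_step(pa)
    print(f"after step {step + 1}: Pa = {pa:.1f} mm Hg")   # Pa climbs back toward 100 mm Hg
```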
B.  Renin–angiotensin–aldosterone system
■ ■is a slow, hormonal mechanism.
■ ■is used in long-term blood pressure regulation by adjustment of blood volume.
■ ■Renin is an enzyme.
■ ■Angiotensin I is inactive.
■ ■Angiotensin II is physiologically active.
■ ■Angiotensin II is degraded by angiotensinase. One of the peptide fragments, angiotensin III, has some of the biologic activity of angiotensin II.
1.  Steps in the renin–angiotensin–aldosterone system (Figure 3.17)
a.  A decrease in renal perfusion pressure causes the juxtaglomerular cells of the afferent arteriole to secrete renin.
b.  Renin is an enzyme that catalyzes the conversion of angiotensinogen to angiotensin I in plasma.
c.  Angiotensin-converting enzyme (ACE) catalyzes the conversion of angiotensin I to angiotensin II, primarily in the lungs.
■ ■ACE inhibitors (e.g., captopril) block the conversion of angiotensin I to angiotensin II and, therefore, decrease blood pressure.
■ ■Angiotensin receptor (AT1) antagonists (e.g., losartan) block the action of angiotensin II at its receptor and decrease blood pressure.
d.  Angiotensin II has four effects:
(1)  It stimulates the synthesis and secretion of aldosterone by the adrenal cortex.
■ ■Aldosterone increases Na+ reabsorption by the renal distal tubule, thereby increasing extracellular fluid (ECF) volume, blood volume, and arterial pressure.
■ ■This action of aldosterone is slow because it requires new protein synthesis.
(2)  It increases Na+–H+ exchange in the proximal convoluted tubule.
■ ■This action of angiotensin II directly increases Na+ reabsorption, complementing the indirect stimulation of Na+ reabsorption via aldosterone.
■ ■This action of angiotensin II leads to contraction alkalosis.
(3)  It increases thirst and therefore water intake.
(4)  It causes vasoconstriction of the arterioles, thereby increasing TPR and arterial pressure.
2.  Example: response of the renin–angiotensin–aldosterone system to acute blood loss (see Figure 3.17)
Figure 3.17 Role of the renin–angiotensin–aldosterone system in the cardiovascular response to hemorrhage. Pa = mean arterial pressure; TPR = total peripheral resistance.
C.  Other regulation of arterial blood pressure
1.  Cerebral ischemia
a.  When the brain is ischemic, the partial pressure of carbon dioxide (Pco2) in brain tissue increases.
b.  Chemoreceptors in the vasomotor center respond by increasing sympathetic outflow to the heart and blood vessels.
■ ■Constriction of arterioles causes intense peripheral vasoconstriction and increased TPR. Blood flow to other organs (e.g., kidneys) is significantly reduced in an attempt to preserve blood flow to the brain.
■ ■Mean arterial pressure can increase to life-threatening levels.
c.  The Cushing reaction is an example of the response to cerebral ischemia. Increases in intracranial pressure cause compression of the cerebral blood vessels, leading to cerebral ischemia and increased cerebral Pco2. The vasomotor center directs an increase in sympathetic outflow to the heart and blood vessels, which causes a profound increase in arterial pressure.
2.  Chemoreceptors in the carotid and aortic bodies
■ ■are located near the bifurcation of the common carotid arteries and along the aortic arch.
■ ■have very high rates of O2 consumption and are very sensitive to decreases in the partial pressure of oxygen (Po2).
■ ■Decreases in Po2 activate vasomotor centers that produce vasoconstriction, an increase in TPR, and an increase in arterial pressure.
3.  Vasopressin [antidiuretic hormone (ADH)]
■ ■is involved in the regulation of blood pressure in response to hemorrhage, but not in minute-to-minute regulation of normal blood pressure.
■ ■Atrial receptors respond to a decrease in blood volume (or blood pressure) and cause the release of vasopressin from the posterior pituitary.
■ ■Vasopressin has two effects that tend to increase blood pressure toward normal:
a.  It is a potent vasoconstrictor that increases TPR by activating V1 receptors on the arterioles.
b.  It increases water reabsorption by the renal distal tubule and collecting ducts by activating V2 receptors.
4.  Atrial natriuretic peptide (ANP)
■ ■is released from the atria in response to an increase in blood volume and atrial pressure.
■ ■causes relaxation of vascular smooth muscle, dilation of arterioles, and decreased TPR.
■ ■causes increased excretion of Na+ and water by the kidney, which reduces blood volume and attempts to bring arterial pressure down to normal.
■ ■inhibits renin secretion.
VII.  Microcirculation and Lymph
A.  Structure of capillary beds
■ ■Metarterioles branch into the capillary beds. At the junction of the arterioles and capillaries is a smooth muscle band called the precapillary sphincter.
■ ■True capillaries do not have smooth muscle; they consist of a single layer of endothelial cells surrounded by a basement membrane.
■ ■Clefts (pores) between the endothelial cells allow passage of water-soluble substances. The clefts represent a very small fraction of the surface area (<0.1%).
■ ■Blood flow through the capillaries is regulated by contraction and relaxation of the arterioles and the precapillary sphincters.
B.  Passage of substances across the capillary wall
1.  Lipid-soluble substances
■ ■cross the membranes of the capillary endothelial cells by simple diffusion.
■ ■include O2 and CO2.
2.  Small water-soluble substances
■ ■cross via the water-filled clefts between the endothelial cells.
■ ■include water, glucose, and amino acids.
■ ■Generally, protein molecules are too large to pass freely through the clefts.
■ ■In the brain, the clefts between endothelial cells are exceptionally tight (blood–brain barrier).
■ ■In the liver and intestine, the clefts are exceptionally wide and allow passage of protein. These capillaries are called sinusoids.
3.  Large water-soluble substances
■ ■can cross by pinocytosis.
C.  Fluid exchange across capillaries
1.  The Starling equation (Figure 3.18)
Jv = Kf [(Pc − Pi) − (πc − πi)]
where:
Jv = fluid movement (mL/min)
Kf = hydraulic conductance (mL/min · mm Hg)
Pc = capillary hydrostatic pressure (mm Hg)
Pi = interstitial hydrostatic pressure (mm Hg)
πc = capillary oncotic pressure (mm Hg)
πi = interstitial oncotic pressure (mm Hg)
Figure 3.18 Starling forces across the capillary wall. + sign = favors filtration; − sign = opposes filtration.
a.  Jv is fluid flow.
■ ■When Jv is positive, there is net fluid movement out of the capillary (filtration).
■ ■When Jv is negative, there is net fluid movement into the capillary (absorption).
b.  Kf is the filtration coefficient.
■ ■It is the hydraulic conductance (water permeability) of the capillary wall.
c.  Pc is capillary hydrostatic pressure.
■ ■An increase in Pc favors filtration out of the capillary.
■ ■Pc is determined by arterial and venous pressures and resistances.
■ ■An increase in either arterial or venous pressure produces an increase in Pc; increases in venous pressure have a greater effect on Pc.
■ ■Pc is higher at the arteriolar end of the capillary than at the venous end (except in glomerular capillaries, where it is nearly constant).
d.  Pi is interstitial fluid hydrostatic pressure.
■ ■An increase in Pi opposes filtration out of the capillary.
■ ■It is normally close to 0 mm Hg (or it is slightly negative).
e.  πc is capillary oncotic, or colloidosmotic, pressure.
■ ■An increase in πc opposes filtration out of the capillary.
■ ■πc is increased by increases in the protein concentration in the blood (e.g., dehydration).
■ ■πc is decreased by decreases in the protein concentration in the blood (e.g., nephrotic syndrome, protein malnutrition, liver failure).
■ ■Small solutes do not contribute to πc.
f.  πi is interstitial fluid oncotic pressure.
■ ■An increase in πi favors filtration out of the capillary.
■ ■πi is dependent on the protein concentration of the interstitial fluid, which is normally quite low because very little protein is filtered.
2.  Factors that increase filtration
a.  ↑ Pc—caused by increased arterial pressure, increased venous pressure, arteriolar dilation, and venous constriction
b.  ↓ Pi
c.  ↓ πc—caused by decreased protein concentration in the blood
d.  ↑ πi—caused by inadequate lymphatic function
3.  Sample calculations using the Starling equation
a.  Example 1: At the arteriolar end of a capillary, Pc is 30 mm Hg, πc is 28 mm Hg, Pi is 0 mm Hg, and πi is 4 mm Hg. Will filtration or absorption occur?
Net pressure = (30 − 0) − (28 − 4) = +6 mm Hg
Because the net pressure is positive, filtration will occur.
b.  Example 2: At the venous end of the same capillary, Pc has decreased to 16 mm Hg, πc remains at 28 mm Hg, Pi is 0 mm Hg, and πi is 4 mm Hg. Will filtration or absorption occur?
Net pressure = (16 − 0) − (28 − 4) = −8 mm Hg
Because the net pressure is negative, absorption will occur.
4.  Lymph
a.  Function of lymph
■ ■Normally, filtration of fluid out of the capillaries is slightly greater than absorption of fluid into the capillaries. The excess filtered fluid is returned to the circulation via the lymph.
■ ■Lymph also returns any filtered protein to the circulation.
b.  Unidirectional flow of lymph
■ ■One-way flap valves permit interstitial fluid to enter, but not leave, the lymph vessels.
■ ■Flow through larger lymphatic vessels is also unidirectional, and is aided by one-way valves and skeletal muscle contraction.
c.  Edema (Table 3.2)
■ ■occurs when the volume of interstitial fluid exceeds the capacity of the lymphatics to return it to the circulation.
■ ■can be caused by excess filtration or blocked lymphatics.
■ ■Histamine causes both arteriolar dilation and venous constriction, which together produce a large increase in Pc and local edema.
Table 3.2 Causes and Examples of Edema
↑ Pc (arteriolar dilation, venous constriction, increased venous pressure): heart failure; extracellular volume expansion; standing (edema in the dependent limbs)
↓ πc (decreased plasma protein concentration): severe liver disease (failure to synthesize proteins); protein malnutrition; nephrotic syndrome (loss of protein in urine)
↑ Kf: burn; inflammation (release of histamine; cytokines)
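The sign convention of the Starling equation and the two sample calculations in C 3 above can be checked mechanically; a minimal sketch:

```python
def net_pressure(pc, pi, pi_cap, pi_int):
    """Starling net driving pressure: (Pc - Pi) - (pi_c - pi_i), in mm Hg.
    Positive favors filtration; negative favors absorption."""
    return (pc - pi) - (pi_cap - pi_int)

def jv(kf, pressure):
    """Jv = Kf x net pressure (mL/min, with Kf in mL/min per mm Hg)."""
    return kf * pressure

print(net_pressure(30, 0, 28, 4))   # Example 1 (arteriolar end): +6 -> filtration
print(net_pressure(16, 0, 28, 4))   # Example 2 (venous end):    -8 -> absorption
```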
D.  Nitric oxide (NO)
■ ■is produced in the endothelial cells.
■ ■causes local relaxation of vascular smooth muscle.
■ ■Mechanism of action involves the activation of guanylate cyclase and production of cyclic guanosine monophosphate (cGMP).
■ ■is one form of endothelial-derived relaxing factor (EDRF).
■ ■Circulating ACh causes vasodilation by stimulating the production of NO in vascular smooth muscle.
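The statement above that Pc is determined by arterial and venous pressures and resistances (C 1 c) can be illustrated with a common simplified two-resistor model of the microcirculation (an assumption here, not a formula given in this chapter). It shows why histamine's combination of arteriolar dilation and venous constriction raises Pc and produces edema:

```python
def capillary_pressure(pa, pv, rv_over_ra):
    """Two-resistor estimate: Pc = (Pa * (Rv/Ra) + Pv) / (1 + Rv/Ra).
    Raising Rv/Ra (arteriolar dilation or venous constriction) raises Pc."""
    return (pa * rv_over_ra + pv) / (1.0 + rv_over_ra)

pa, pv = 100.0, 10.0                        # mm Hg, illustrative values
print(capillary_pressure(pa, pv, 0.2))      # baseline: 25.0 mm Hg
print(capillary_pressure(pa, pv, 0.5))      # histamine-like changes: 40.0 mm Hg
```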
VIII.  Special Circulations (Table 3.3)
■ ■Blood flow varies from one organ to another.
■ ■Blood flow to an organ is regulated by altering arteriolar resistance, and can be varied, depending on the organ's metabolic demands.
■ ■Pulmonary and renal blood flow are discussed in Chapters 4 and 5, respectively.
Table 3.3 Summary of Control of Special Circulations
Coronary (5% of resting cardiac output): local metabolic control is the most important mechanism (vasoactive metabolites: hypoxia, adenosine); sympathetic control is the least important mechanism; mechanical effect: compression during systole.
Cerebral (15%): local metabolic control is the most important mechanism (vasoactive metabolites: CO2, H+); sympathetic control is the least important mechanism; mechanical effect: increases in intracranial pressure decrease cerebral blood flow.
Muscle (20%): local metabolic control is the most important mechanism during exercise (vasoactive metabolites: lactate, K+, adenosine); sympathetic control is the most important mechanism at rest (α1 receptor causes vasoconstriction; β2 receptor causes vasodilation); mechanical effect: muscular activity causes a temporary decrease in blood flow.
Skin (5%): local metabolic control is the least important mechanism; sympathetic control is the most important mechanism (temperature regulation).
Pulmonary (100%): local metabolic control is the most important mechanism (hypoxia vasoconstricts); sympathetic control is the least important mechanism; mechanical effect: lung inflation.
Renal blood flow (25% of resting cardiac output) is discussed in Chapter 5; pulmonary blood flow is discussed in Chapter 4.
A.  Local (intrinsic) control of blood flow
1.  Examples of local control
a.  Autoregulation
■ ■Blood flow to an organ remains constant over a wide range of perfusion pressures.
■ ■Organs that exhibit autoregulation are the heart, brain, and kidney.
■ ■For example, if perfusion pressure to the heart is suddenly decreased, compensatory vasodilation of the arterioles will occur to maintain a constant flow.
b.  Active hyperemia
■ ■Blood flow to an organ is proportional to its metabolic activity.
■ ■For example, if metabolic activity in skeletal muscle increases as a result of strenuous exercise, blood flow to the muscle will increase proportionately to meet metabolic demands.
c.  Reactive hyperemia
■ ■is an increase in blood flow to an organ that occurs after a period of occlusion of flow.
■ ■The longer the period of occlusion is, the greater the increase in blood flow is above preocclusion levels.
2.  Mechanisms that explain local control of blood flow
a.  Myogenic hypothesis
■ ■explains autoregulation, but not active or reactive hyperemia.
■ ■is based on the observation that vascular smooth muscle contracts when it is stretched.
■ ■For example, if perfusion pressure to an organ suddenly increases, the arteriolar smooth muscle will be stretched and will contract. The resulting vasoconstriction will maintain a constant flow. (Without vasoconstriction, blood flow would increase as a result of the increased pressure.)
b.  Metabolic hypothesis
■ ■is based on the observation that the tissue supply of O2 is matched to the tissue demand for O2.
■ ■Vasodilator metabolites are produced as a result of metabolic activity in tissue. These vasodilators are CO2, H+, K+, lactate, and adenosine.
■ ■Examples of active hyperemia:
(1)  If the metabolic activity of a tissue increases (e.g., strenuous exercise), both the demand for O2 and the production of vasodilator metabolites increase. These metabolites cause arteriolar vasodilation, increased blood flow, and increased O2 delivery to the tissue to meet demand.
(2)  If blood flow to an organ suddenly increases as a result of a spontaneous increase in arterial pressure, then more O2 is provided for metabolic activity. At the same time, the increased flow "washes out" vasodilator metabolites. As a result of this "washout," arteriolar vasoconstriction occurs, resistance increases, and blood flow is decreased to normal.
B.  Hormonal (extrinsic) control of blood flow
1.  Sympathetic innervation of vascular smooth muscle
■ ■Increases in sympathetic tone cause vasoconstriction.
■ ■Decreases in sympathetic tone cause vasodilation.
■ ■The density of sympathetic innervation varies widely among tissues. Skin has the greatest innervation, whereas coronary, pulmonary, and cerebral vessels have little innervation.
2.  Other vasoactive hormones
a.  Histamine
■ ■causes arteriolar dilation and venous constriction. The combined effects of arteriolar dilation and venous constriction cause increased Pc and increased filtration out of the capillaries, resulting in local edema.
■ ■is released in response to tissue trauma.
b.  Bradykinin
■ ■causes arteriolar dilation and venous constriction.
■ ■produces increased filtration out of the capillaries (similar to histamine), and causes local edema.
c.  Serotonin (5-hydroxytryptamine)
■ ■causes arteriolar constriction and is released in response to blood vessel damage to help prevent blood loss.
■ ■has been implicated in the vascular spasms of migraine headaches.
d.  Prostaglandins
■ ■Prostacyclin is a vasodilator in several vascular beds.
■ ■E-series prostaglandins are vasodilators.
■ ■F-series prostaglandins are vasoconstrictors.
■ ■Thromboxane A2 is a vasoconstrictor.
C.  Coronary circulation
■ ■is controlled almost entirely by local metabolic factors.
■ ■exhibits autoregulation.
■ ■exhibits active and reactive hyperemia.
■ ■The most important local metabolic factors are hypoxia and adenosine.
■ ■For example, increases in myocardial contractility are accompanied by an increased demand for O2. To meet this demand, compensatory vasodilation of coronary vessels occurs and, accordingly, both blood flow and O2 delivery to the contracting heart muscle increase (active hyperemia).
■ ■During systole, mechanical compression of the coronary vessels reduces blood flow. After the period of occlusion, blood flow increases to repay the O2 debt (reactive hyperemia).
■ ■Sympathetic nerves play a minor role.
D.  Cerebral circulation
■ ■is controlled almost entirely by local metabolic factors.
■ ■exhibits autoregulation.
■ ■exhibits active and reactive hyperemia.
■ ■The most important local vasodilator for the cerebral circulation is CO2. Increases in Pco2 cause vasodilation of the cerebral arterioles and increased blood flow to the brain. Decreases in Pco2 cause vasoconstriction of cerebral arterioles and decreased blood flow to the brain.
■ ■Sympathetic nerves play a minor role.
■ ■Vasoactive substances in the systemic circulation have little or no effect on cerebral circulation because such substances are excluded by the blood–brain barrier.
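Autoregulation, exhibited by the coronary, cerebral, and renal circulations above, can be caricatured as resistance that adjusts to hold flow constant until the arterioles reach their dilation or constriction limits. The sketch below is a toy model with arbitrary units, assuming flow = pressure / resistance.

```python
def autoregulated_flow(pressure, target_flow=1.0, r_min=0.5, r_max=2.0):
    """Myogenic toy model: stretch-induced changes in arteriolar resistance keep
    flow = pressure / resistance near the target, within constriction limits."""
    needed_resistance = pressure / target_flow
    resistance = max(r_min, min(r_max, needed_resistance))
    return pressure / resistance

for p in (0.4, 0.8, 1.2, 1.6, 2.0, 2.4):
    print(f"pressure {p:.1f} -> flow {autoregulated_flow(p):.2f}")
# Flow stays at 1.00 across the middle range and deviates only at the extremes.
```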
E.  Skeletal muscle
■ ■is controlled by the extrinsic sympathetic innervation of blood vessels in skeletal muscle and by local metabolic factors.
1.  Sympathetic innervation
■ ■is the primary regulator of blood flow to the skeletal muscle at rest.
■ ■The arterioles of skeletal muscle are densely innervated by sympathetic fibers. The veins also are innervated, but less densely.
■ ■There are both α1 and β2 receptors on the blood vessels of skeletal muscle.
■ ■Stimulation of α1 receptors causes vasoconstriction.
■ ■Stimulation of β2 receptors causes vasodilation.
■ ■The state of constriction of skeletal muscle arterioles is a major contributor to the TPR (because of the large mass of skeletal muscle).
2.  Local metabolic control
■ ■Blood flow in skeletal muscle exhibits autoregulation and active and reactive hyperemia.
■ ■Demand for O2 in skeletal muscle varies with metabolic activity level, and blood flow is regulated to meet demand.
■ ■During exercise, when demand is high, these local metabolic mechanisms are dominant.
■ ■The local vasodilator substances are lactate, adenosine, and K+.
■ ■Mechanical effects during exercise temporarily compress the arteries and decrease blood flow. During the postocclusion period, reactive hyperemia increases blood flow to repay the O2 debt.
F.  Skin
■ ■has extensive sympathetic innervation. Cutaneous blood flow is under extrinsic control.
■ ■Temperature regulation is the principal function of the cutaneous sympathetic nerves. Increased ambient temperature leads to cutaneous vasodilation, allowing dissipation of excess body heat.
■ ■Trauma produces the "triple response" in skin—a red line, a red flare, and a wheal. A wheal is local edema that results from the local release of histamine, which increases capillary filtration.
IX.  Integrative Functions of the Cardiovascular System: Gravity, Exercise, and Hemorrhage
■ ■The responses to changes in gravitational force, exercise, and hemorrhage demonstrate the integrative functions of the cardiovascular system.
A.  Changes in gravitational forces (Table 3.4 and Figure 3.19)
■ ■The following changes occur when an individual moves from a supine position to a standing position:
1.  When a person stands, a significant volume of blood pools in the lower extremities because of the high compliance of the veins. (Muscular activity would prevent this pooling.)
2.  As a result of venous pooling and increased local venous pressure, Pc in the legs increases and fluid is filtered into the interstitium. If net filtration of fluid exceeds the ability of the lymphatics to return it to the circulation, edema will occur.
3.  Venous return decreases. As a result of the decrease in venous return, both stroke volume and cardiac output decrease (Frank-Starling relationship, IV D 5).
4.  Arterial pressure decreases because of the reduction in cardiac output. If cerebral blood pressure becomes low enough, fainting may occur.
5.  Compensatory mechanisms will attempt to increase blood pressure to normal (see Figure 3.19). The carotid sinus baroreceptors respond to the decrease in arterial pressure by decreasing the firing rate of the carotid sinus nerves. A coordinated response from the vasomotor center then increases sympathetic outflow to the heart and blood vessels and decreases parasympathetic outflow to the heart. As a result, heart rate, contractility, TPR, and venous return increase, and blood pressure increases toward normal.
6.  Orthostatic hypotension (fainting or lightheadedness on standing) may occur in individuals whose baroreceptor reflex mechanism is impaired (e.g., individuals treated with sympatholytic agents) or who are volume-depleted.
Table 3.4 Summary of Responses to Standing
Arterial blood pressure: initial response ↓; compensatory response ↑ (toward normal)
Heart rate: initial response —; compensatory response ↑
Cardiac output: initial response ↓; compensatory response ↑ (toward normal)
Stroke volume: initial response ↓; compensatory response ↑ (toward normal)
TPR: initial response —; compensatory response ↑
Central venous pressure: initial response ↓; compensatory response ↑ (toward normal)
TPR = total peripheral resistance.
Figure 3.19 Cardiovascular responses to standing. Pa = arterial pressure; TPR = total peripheral resistance.
B.  Exercise (Table 3.5 and Figure 3.20)
1.  The central command (anticipation of exercise)
■ ■originates in the motor cortex or from reflexes initiated in muscle proprioceptors when exercise is anticipated.
■ ■initiates the following changes:
a.  Sympathetic outflow to the heart and blood vessels is increased. At the same time, parasympathetic outflow to the heart is decreased. As a result, heart rate and contractility (stroke volume) are increased, and unstressed volume is decreased.
b.  Cardiac output is increased, primarily as a result of the increased heart rate and, to a lesser extent, the increased stroke volume.
c.  Venous return is increased as a result of muscular activity and venoconstriction. Increased venous return provides more blood for each stroke volume (Frank-Starling relationship, IV D 5).
d.  Arteriolar resistance in the skin, splanchnic regions, kidneys, and inactive muscles is increased. Accordingly, blood flow to these organs is decreased.
2.  Increased metabolic activity of skeletal muscle
■ ■Vasodilator metabolites (lactate, K+, and adenosine) accumulate because of increased metabolism of the exercising muscle.
■ ■These metabolites cause arteriolar dilation in the active skeletal muscle, thus increasing skeletal muscle blood flow (active hyperemia).
■ ■As a result of the increased blood flow, O2 delivery to the muscle is increased. The number of perfused capillaries is increased so that the diffusion distance for O2 is decreased.
■ ■This vasodilation accounts for the overall decrease in TPR that occurs with exercise. Note that activation of the sympathetic nervous system alone (by the central command) would cause an increase in TPR.
Table 3.5 Summary of Effects of Exercise
Heart rate: ↑↑
Stroke volume: ↑
Cardiac output: ↑↑
Arterial pressure: ↑ (slight)
Pulse pressure: ↑ (due to increased stroke volume)
TPR: ↓↓ (due to vasodilation of skeletal muscle beds)
AV O2 difference: ↑↑ (due to increased O2 consumption)
AV = arteriovenous; TPR = total peripheral resistance.
Figure 3.20 Cardiovascular responses to exercise. TPR = total peripheral resistance.
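Table 3.5's seemingly paradoxical combination (cardiac output up sharply, TPR down sharply, arterial pressure up only slightly) follows from the relationship Pa ≈ cardiac output × TPR. The numbers below are illustrative assumptions, not measurements:

```python
def mean_arterial_pressure(cardiac_output_l_min, tpr):
    """Pa = cardiac output x TPR (venous pressure neglected for simplicity)."""
    return cardiac_output_l_min * tpr

print(mean_arterial_pressure(5.0, 20.0))    # rest: 100 mm Hg
print(mean_arterial_pressure(12.5, 8.4))    # exercise: CO up 2.5x, TPR down sharply -> 105 mm Hg
```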
C. Hemorrhage (Table 3.6 and Figure 3.21)
■ The compensatory responses to acute blood loss are as follows:
1. A decrease in blood volume produces a decrease in venous return. As a result, there is a decrease in both cardiac output and arterial pressure.
2. The carotid sinus baroreceptors detect the decrease in arterial pressure. As a result of the baroreceptor reflex, there is increased sympathetic outflow to the heart and blood vessels and decreased parasympathetic outflow to the heart, producing:
a. ↑ heart rate
b. ↑ contractility
c. ↑ TPR (due to arteriolar constriction)
d. Venoconstriction, which increases venous return
e. Constriction of arterioles in skeletal, splanchnic, and cutaneous vascular beds. However, it does not occur in coronary or cerebral vascular beds, ensuring that adequate blood flow will be maintained to the heart and brain.
f. These responses attempt to restore normal arterial blood pressure.
3. Chemoreceptors in the carotid and aortic bodies are very sensitive to hypoxia. They supplement the baroreceptor mechanism by increasing sympathetic outflow to the heart and blood vessels.
4. Cerebral ischemia (if present) causes an increase in Pco2, which activates chemoreceptors in the vasomotor center to increase sympathetic outflow.
5. Arteriolar vasoconstriction causes a decrease in Pc. As a result, capillary absorption is favored, which helps to restore circulating blood volume.
6. The adrenal medulla releases epinephrine and norepinephrine, which supplement the actions of the sympathetic nervous system on the heart and blood vessels.
7. The renin–angiotensin–aldosterone system is activated by the decrease in renal perfusion pressure. Because angiotensin II is a potent vasoconstrictor, it reinforces the stimulatory effect of the sympathetic nervous system on TPR. Aldosterone increases NaCl reabsorption in the kidney, increasing the circulating blood volume.
8. ADH is released when atrial receptors detect the decrease in blood volume. ADH causes both vasoconstriction and increased water reabsorption, both of which tend to increase blood pressure.

TABLE 3.6 Summary of Compensatory Responses to Hemorrhage
Parameter | Compensatory Response
Heart rate | ↑
Contractility | ↑
TPR | ↑
Venoconstriction | ↑
Renin | ↑
Angiotensin II | ↑
Aldosterone | ↑
Circulating epinephrine and norepinephrine | ↑
ADH | ↑
ADH = antidiuretic hormone; TPR = total peripheral resistance.

[Figure 3.21 Cardiovascular responses to hemorrhage. Pa = arterial pressure; Pc = capillary hydrostatic pressure; TPR = total peripheral resistance.]

Review Test

1. A 53-year-old woman is found, by arteriography, to have 50% narrowing of her left renal artery. What is the expected change in blood flow through the stenotic artery?
(A) Decrease to 1/2
(B) Decrease to 1/4
(C) Decrease to 1/8
(D) Decrease to 1/16
(E) No change

2. When a person moves from a supine position to a standing position, which of the following compensatory changes occurs?
(A) Decreased heart rate
(B) Increased contractility
(C) Decreased total peripheral resistance (TPR)
(D) Decreased cardiac output
(E) Increased PR intervals

3. At which site is systolic blood pressure the highest?
(A) Aorta
(B) Central vein
(C) Pulmonary artery
(D) Right atrium
(E) Renal artery
(F) Renal vein

4. A person's electrocardiogram (ECG) has no P wave, but has a normal QRS complex and a normal T wave. Therefore, his pacemaker is located in the
(A) sinoatrial (SA) node
(B) atrioventricular (AV) node
(C) bundle of His
(D) Purkinje system
(E) ventricular muscle

5. If the ejection fraction increases, there will be a decrease in
(A) cardiac output
(B) end-systolic volume
(C) heart rate
(D) pulse pressure
(E) stroke volume
(F) systolic pressure

Questions 6 and 7
An electrocardiogram (ECG) on a person shows ventricular extrasystoles.

6. The extrasystolic beat would produce
(A) increased pulse pressure because contractility is increased
(B) increased pulse pressure because heart rate is increased
(C) decreased pulse pressure because ventricular filling time is increased
(D) decreased pulse pressure because stroke volume is decreased
(E) decreased pulse pressure because the PR interval is increased

7. After an extrasystole, the next “normal” ventricular contraction produces
(A) increased pulse pressure because the contractility of the ventricle is increased
(B) increased pulse pressure because total peripheral resistance (TPR) is decreased
(C) increased pulse pressure because compliance of the veins is decreased
(D) decreased pulse pressure because the contractility of the ventricle is increased
(E) decreased pulse pressure because TPR is decreased

8. An increase in contractility is demonstrated on a Frank-Starling diagram by
(A) increased cardiac output for a given end-diastolic volume
(B) increased cardiac output for a given end-systolic volume
(C) decreased cardiac output for a given end-diastolic volume
(D) decreased cardiac output for a given end-systolic volume

Questions 9–12
[Figure: left ventricular pressure–volume loop, with left ventricular pressure (mm Hg) plotted against left ventricular volume (mL) and points labeled 1–4]

9. On the graph showing left ventricular volume and pressure, isovolumetric contraction occurs between points
(A) 4 → 1
(B) 1 → 2
(C) 2 → 3
(D) 3 → 4

10. The aortic valve closes at point
(A) 1
(B) 2
(C) 3
(D) 4

11. The first heart sound corresponds to point
(A) 1
(B) 2
(C) 3
(D) 4

12. If the heart rate is 70 beats/min, then the cardiac output of this ventricle is closest to
(A) 3.45 L/min
(B) 4.55 L/min
(C) 5.25 L/min
(D) 8.00 L/min
(E) 9.85 L/min

Questions 13 and 14
In a capillary, Pc is 30 mm Hg, Pi is −2 mm Hg, πc is 25 mm Hg, and πi is 2 mm Hg.

13. What is the direction of fluid movement and the net driving force?
(A) Absorption; 6 mm Hg
(B) Absorption; 9 mm Hg
(C) Filtration; 6 mm Hg
(D) Filtration; 9 mm Hg
(E) There is no net fluid movement

14. If Kf is 0.5 mL/min/mm Hg, what is the rate of water flow across the capillary wall?
(A) 0.06 mL/min
(B) 0.45 mL/min
(C) 4.50 mL/min
(D) 9.00 mL/min
(E) 18.00 mL/min

15. The tendency for blood flow to be turbulent is increased by
(A) increased viscosity
(B) increased hematocrit
(C) partial occlusion of a blood vessel
(D) decreased velocity of blood flow

16. A 66-year-old man, who has had a sympathectomy, experiences a greater-than-normal fall in arterial pressure upon standing up. The explanation for this occurrence is
(A) an exaggerated response of the renin–angiotensin–aldosterone system
(B) a suppressed response of the renin–angiotensin–aldosterone system
(C) an exaggerated response of the baroreceptor mechanism
(D) a suppressed response of the baroreceptor mechanism
17. The ventricles are completely depolarized during which isoelectric portion of the electrocardiogram (ECG)?
(A) PR interval
(B) QRS complex
(C) QT interval
(D) ST segment
(E) T wave

18. In which of the following situations is pulmonary blood flow greater than aortic blood flow?
(A) Normal adult
(B) Fetus
(C) Left-to-right ventricular shunt
(D) Right-to-left ventricular shunt
(E) Right ventricular failure
(F) Administration of a positive inotropic agent

19. The change indicated by the dashed lines on the cardiac output/venous return curves shows
[Figure: cardiac output and venous return curves plotted against right atrial pressure (mm Hg) or end-diastolic volume (L); dashed lines indicate the change]
(A) decreased cardiac output in the “new” steady state
(B) decreased venous return in the “new” steady state
(C) increased mean systemic pressure
(D) decreased blood volume
(E) increased myocardial contractility

20. A 30-year-old female patient's electrocardiogram (ECG) shows two P waves preceding each QRS complex. The interpretation of this pattern is
(A) decreased firing rate of the pacemaker in the sinoatrial (SA) node
(B) decreased firing rate of the pacemaker in the atrioventricular (AV) node
(C) increased firing rate of the pacemaker in the SA node
(D) decreased conduction through the AV node
(E) increased conduction through the His-Purkinje system

21. An acute decrease in arterial blood pressure elicits which of the following compensatory changes?
(A) Decreased firing rate of the carotid sinus nerve
(B) Increased parasympathetic outflow to the heart
(C) Decreased heart rate
(D) Decreased contractility
(E) Decreased mean systemic pressure

22. The tendency for edema to occur will be increased by
(A) arteriolar constriction
(B) increased venous pressure
(C) increased plasma protein concentration
(D) muscular activity

23. Inspiration “splits” the second heart sound because
(A) the aortic valve closes before the pulmonic valve
(B) the pulmonic valve closes before the aortic valve
(C) the mitral valve closes before the tricuspid valve
(D) the tricuspid valve closes before the mitral valve
(E) filling of the ventricles has fast and slow components

24. During exercise, total peripheral resistance (TPR) decreases because of the effect of
(A) the sympathetic nervous system on splanchnic arterioles
(B) the parasympathetic nervous system on skeletal muscle arterioles
(C) local metabolites on skeletal muscle arterioles
(D) local metabolites on cerebral arterioles
(E) histamine on skeletal muscle arterioles

Questions 25 and 26
[Figure: two curves of volume or pressure plotted against time, labeled Curve A and Curve B]

25. Curve A in the figure represents
(A) aortic pressure
(B) ventricular pressure
(C) atrial pressure
(D) ventricular volume

26. Curve B in the figure represents
(A) left atrial pressure
(B) ventricular pressure
(C) atrial pressure
(D) ventricular volume

27. An increase in arteriolar resistance, without a change in any other component of the cardiovascular system, will produce
(A) a decrease in total peripheral resistance (TPR)
(B) an increase in capillary filtration
(C) an increase in arterial pressure
(D) a decrease in afterload

28. The following measurements were obtained in a male patient:
Central venous pressure: 10 mm Hg
Heart rate: 70 beats/min
Systemic arterial [O2] = 0.24 mL O2/mL
Mixed venous [O2] = 0.16 mL O2/mL
Whole body O2 consumption: 500 mL/min
What is this patient's cardiac output?
(A) 1.65 L/min
(B) 4.55 L/min
(C) 5.00 L/min
(D) 6.25 L/min
(E) 8.00 L/min
29. Which of the following is the result of an inward Na+ current?
(A) Upstroke of the action potential in the sinoatrial (SA) node
(B) Upstroke of the action potential in Purkinje fibers
(C) Plateau of the action potential in ventricular muscle
(D) Repolarization of the action potential in ventricular muscle
(E) Repolarization of the action potential in the SA node

Questions 30 and 31
[Figure: cardiac output and venous return curves plotted against right atrial pressure (mm Hg); a dashed curve indicates the change]

30. The dashed line in the figure illustrates the effect of
(A) increased total peripheral resistance (TPR)
(B) increased blood volume
(C) increased contractility
(D) a negative inotropic agent
(E) increased mean systemic pressure

31. The x-axis in the figure could have been labeled
(A) end-systolic volume
(B) end-diastolic volume
(C) pulse pressure
(D) mean systemic pressure
(E) heart rate

32. The greatest pressure decrease in the circulation occurs across the arterioles because
(A) they have the greatest surface area
(B) they have the greatest cross-sectional area
(C) the velocity of blood flow through them is the highest
(D) the velocity of blood flow through them is the lowest
(E) they have the greatest resistance

33. Pulse pressure is
(A) the highest pressure measured in the arteries
(B) the lowest pressure measured in the arteries
(C) measured only during diastole
(D) determined by stroke volume
(E) decreased when the capacitance of the arteries decreases
(F) the difference between mean arterial pressure and central venous pressure

34. In the sinoatrial (SA) node, phase 4 depolarization (pacemaker potential) is attributable to
(A) an increase in K+ conductance
(B) an increase in Na+ conductance
(C) a decrease in Cl– conductance
(D) a decrease in Ca2+ conductance
(E) simultaneous increases in K+ and Cl– conductances

35. A healthy 35-year-old man is running a marathon. During the run, there is an increase in his splanchnic vascular resistance. Which receptor is responsible for the increased resistance?
(A) α1 Receptors
(B) β1 Receptors
(C) β2 Receptors
(D) Muscarinic receptors

36. During which phase of the cardiac cycle is aortic pressure highest?
(A) Atrial systole
(B) Isovolumetric ventricular contraction
(C) Rapid ventricular ejection
(D) Reduced ventricular ejection
(E) Isovolumetric ventricular relaxation
(F) Rapid ventricular filling
(G) Reduced ventricular filling (diastasis)

37. Myocardial contractility is best correlated with the intracellular concentration of
(A) Na+
(B) K+
(C) Ca2+
(D) Cl–
(E) Mg2+

38. Which of the following is an effect of histamine?
(A) Decreased capillary filtration
(B) Vasodilation of the arterioles
(C) Vasodilation of the veins
(D) Decreased Pc
(E) Interaction with the muscarinic receptors on the blood vessels

39. Carbon dioxide (CO2) regulates blood flow to which one of the following organs?
(A) Heart
(B) Skin
(C) Brain
(D) Skeletal muscle at rest
(E) Skeletal muscle during exercise

40. Cardiac output of the right side of the heart is what percentage of the cardiac output of the left side of the heart?
(A) 25%
(B) 50%
(C) 75%
(D) 100%
(E) 125%

41. The physiologic function of the relatively slow conduction through the atrioventricular (AV) node is to allow sufficient time for
(A) runoff of blood from the aorta to the arteries
(B) venous return to the atria
(C) filling of the ventricles
(D) contraction of the ventricles
(E) repolarization of the ventricles
42. Blood flow to which organ is controlled primarily by the sympathetic nervous system rather than by local metabolites?
(A) Skin
(B) Heart
(C) Brain
(D) Skeletal muscle during exercise

43. Which of the following parameters is decreased during moderate exercise?
(A) Arteriovenous O2 difference
(B) Heart rate
(C) Cardiac output
(D) Pulse pressure
(E) Total peripheral resistance (TPR)

44. A 72-year-old woman, who is being treated with propranolol, finds that she cannot maintain her previous exercise routine. Her physician explains that the drug has reduced her cardiac output. Blockade of which receptor is responsible for the decrease in cardiac output?
(A) α1 Receptors
(B) β1 Receptors
(C) β2 Receptors
(D) Muscarinic receptors
(E) Nicotinic receptors

45. During which phase of the cardiac cycle is ventricular volume lowest?
(A) Atrial systole
(B) Isovolumetric ventricular contraction
(C) Rapid ventricular ejection
(D) Reduced ventricular ejection
(E) Isovolumetric ventricular relaxation
(F) Rapid ventricular filling
(G) Reduced ventricular filling (diastasis)

46. Which of the following changes will cause an increase in myocardial O2 consumption?
(A) Decreased aortic pressure
(B) Decreased heart rate
(C) Decreased contractility
(D) Increased size of the heart
(E) Increased influx of Na+ during the upstroke of the action potential

47. Which of the following substances crosses capillary walls primarily through water-filled clefts between the endothelial cells?
(A) O2
(B) CO2
(C) CO
(D) Glucose

48. A 24-year-old woman presents to the emergency department with severe diarrhea. When she is supine (lying down), her blood pressure is 90/60 mm Hg (decreased) and her heart rate is 100 beats/min (increased). When she is moved to a standing position, her heart rate further increases to 120 beats/min. Which of the following accounts for the further increase in heart rate upon standing?
(A) Decreased total peripheral resistance
(B) Increased venoconstriction
(C) Increased contractility
(D) Increased afterload
(E) Decreased venous return

49. A 60-year-old businessman is evaluated by his physician, who determines that his blood pressure is significantly elevated at 185/130 mm Hg. Laboratory tests reveal an increase in plasma renin activity, plasma aldosterone level, and left renal vein renin level. His right renal vein renin level is decreased. What is the most likely cause of the patient's hypertension?
(A) Aldosterone-secreting tumor
(B) Adrenal adenoma secreting aldosterone and cortisol
(C) Pheochromocytoma
(D) Left renal artery stenosis
(E) Right renal artery stenosis

Questions 50–52
[Figure: ventricular action potential, membrane potential (mV, −100 to +20) plotted against time (100 msec), with phases labeled 0–4]

50. During which phase of the ventricular action potential is the membrane potential closest to the K+ equilibrium potential?
(A) Phase 0
(B) Phase 1
(C) Phase 2
(D) Phase 3
(E) Phase 4

51. During which phase of the ventricular action potential is the conductance to Ca2+ highest?
(A) Phase 0
(B) Phase 1
(C) Phase 2
(D) Phase 3
(E) Phase 4

52. Which phase of the ventricular action potential coincides with diastole?
(A) Phase 0
(B) Phase 1
(C) Phase 2
(D) Phase 3
(E) Phase 4

53. Propranolol has which of the following effects?
(A) Decreases heart rate
(B) Increases left ventricular ejection fraction
(C) Increases stroke volume
(D) Decreases splanchnic vascular resistance
(E) Decreases cutaneous vascular resistance

54. Which receptor mediates slowing of the heart?
(A) α1 Receptors
(B) β1 Receptors
(C) β2 Receptors
(D) Muscarinic receptors
55. Which of the following agents or changes has a negative inotropic effect on the heart?
(A) Increased heart rate
(B) Sympathetic stimulation
(C) Norepinephrine
(D) Acetylcholine (ACh)
(E) Cardiac glycosides

56. The low-resistance pathways between myocardial cells that allow for the spread of action potentials are the
(A) gap junctions
(B) T tubules
(C) sarcoplasmic reticulum (SR)
(D) intercalated disks
(E) mitochondria

57. Which agent is released or secreted after a hemorrhage and causes an increase in renal Na+ reabsorption?
(A) Aldosterone
(B) Angiotensin I
(C) Angiotensinogen
(D) Antidiuretic hormone (ADH)
(E) Atrial natriuretic peptide

58. During which phase of the cardiac cycle does the mitral valve open?
(A) Atrial systole
(B) Isovolumetric ventricular contraction
(C) Rapid ventricular ejection
(D) Reduced ventricular ejection
(E) Isovolumetric ventricular relaxation
(F) Rapid ventricular filling
(G) Reduced ventricular filling (diastasis)

59. A hospitalized patient has an ejection fraction of 0.4, a heart rate of 95 beats/min, and a cardiac output of 3.5 L/min. What is the patient's end-diastolic volume?
(A) 14 mL
(B) 37 mL
(C) 55 mL
(D) 92 mL
(E) 140 mL

Answers and Explanations

1. The answer is D [II C, D]. If the radius of the artery decreased by 50% (1/2), then resistance would increase by 2⁴, or 16 (R = 8ηl/πr⁴). Because blood flow is inversely proportional to resistance (Q = ΔP/R), flow will decrease to 1/16 of the original value.
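The arithmetic in answer 1 follows directly from the Poiseuille relationship R = 8ηl/πr⁴ combined with Q = ΔP/R, so relative flow scales as the fourth power of the radius ratio. A minimal Python sketch (the function name is mine, chosen for illustration):

```python
# Check of answer 1: halving the radius raises resistance 2^4 = 16-fold,
# so flow falls to 1/16 at a constant pressure gradient (Q proportional to r^4).

def relative_flow(radius_ratio):
    """Flow relative to baseline when the radius changes by radius_ratio."""
    return radius_ratio ** 4

print(relative_flow(0.5))  # 0.0625 = 1/16
```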
2. The answer is B [IX A; Table 3.4]. When a person moves to a standing position, blood pools in the leg veins, causing decreased venous return to the heart, decreased cardiac output, and decreased arterial pressure. The baroreceptors detect the decrease in arterial pressure, and the vasomotor center is activated to increase sympathetic outflow and decrease parasympathetic outflow. There is an increase in heart rate (resulting in a decreased PR interval), contractility, and total peripheral resistance (TPR). Because both heart rate and contractility are increased, cardiac output will increase toward normal.

3. The answer is E [II G, H, I]. Pressures on the venous side of the circulation (e.g., central vein, right atrium, renal vein) are lower than pressures on the arterial side. Pressure in the pulmonary artery (and all pressures on the right side of the heart) are much lower than their counterparts on the left side of the heart. In the systemic circulation, systolic pressure is actually slightly higher in the downstream arteries (e.g., renal artery) than in the aorta because of the reflection of pressure waves at branch points.

4. The answer is B [III A]. The absent P wave indicates that the atrium is not depolarizing and, therefore, the pacemaker cannot be in the sinoatrial (SA) node. Because the QRS and T waves are normal, depolarization and repolarization of the ventricle must be proceeding in the normal sequence. This situation can occur if the pacemaker is located in the atrioventricular (AV) node. If the pacemaker were located in the bundle of His or in the Purkinje system, the ventricles would activate in an abnormal sequence (depending on the exact location of the pacemaker) and the QRS wave would have an abnormal configuration. Ventricular muscle does not have pacemaker properties.

5. The answer is B [IV G 3]. An increase in ejection fraction means that a higher fraction of the end-diastolic volume is ejected in the stroke volume (e.g., because of the administration of a positive inotropic agent). When this situation occurs, the volume remaining in the ventricle after systole, the end-systolic volume, will be reduced. Cardiac output, pulse pressure, stroke volume, and systolic pressure will be increased.

6. The answer is D [V G]. On the extrasystolic beat, pulse pressure decreases because there is inadequate ventricular filling time—the ventricle beats “too soon.” As a result, stroke volume decreases.

7. The answer is A [IV C 1 a (2)]. The postextrasystolic contraction produces increased pulse pressure because contractility is increased. Extra Ca2+ enters the cell during the extrasystolic beat. Contractility is directly related to the amount of intracellular Ca2+ available for binding to troponin C.

8. The answer is A [IV D 5 a]. An increase in contractility produces an increase in cardiac output for a given end-diastolic volume, or pressure. The Frank-Starling relationship demonstrates the matching of cardiac output (what leaves the heart) with venous return (what returns to the heart). An increase in contractility (positive inotropic effect) will shift the curve upward.

9. The answer is B [IV E 1 a]. Isovolumetric contraction occurs during ventricular systole, before the aortic valve opens. Ventricular pressure increases, but volume remains constant because blood cannot be ejected into the aorta against a closed valve.

10. The answer is C [IV 1 c]. Closure of the aortic valve occurs once ejection of blood from the ventricle has occurred and the left ventricular pressure has decreased to less than the aortic pressure.

11. The answer is A [V B]. The first heart sound corresponds to closure of the atrioventricular valves. Before this closure occurs, the ventricle fills (phase 4 → 1). After the valves close, isovolumetric contraction begins and ventricular pressure increases (phase 1 → 2).

12. The answer is C [IV E 1, G 1, 2]. Stroke volume is the volume ejected from the ventricle and is represented on the pressure–volume loop as phase 2 → 3; end-diastolic volume is about 140 mL and end-systolic volume is about 65 mL; the difference, or stroke volume, is 75 mL. Cardiac output is calculated as stroke volume × heart rate, or 75 mL × 70 beats/min = 5,250 mL/min, or 5.25 L/min.

13. The answer is D [VII C 1]. The net driving force can be calculated with the Starling equation:
Net pressure = (Pc − Pi) − (πc − πi) = [30 − (−2)] − (25 − 2) = 32 − 23 = +9 mm Hg
Because the net pressure is positive, filtration out of the capillary will occur.

14. The answer is C [VII C 1]. Kf is the filtration coefficient for the capillary and describes the intrinsic water permeability.
Water flow = Kf × net pressure = 0.5 mL/min/mm Hg × 9 mm Hg = 4.5 mL/min
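The Starling calculation in answers 13 and 14 can be checked mechanically. A minimal Python sketch using the same values (function names are mine):

```python
# Worked Starling calculation from answers 13 and 14.
# Net pressure = (Pc - Pi) - (pi_c - pi_i); flow = Kf x net pressure.

def net_driving_pressure(pc, pi, pi_c, pi_i):
    return (pc - pi) - (pi_c - pi_i)

def fluid_flow(kf, net_pressure):
    # Positive -> filtration out of the capillary; negative -> absorption.
    return kf * net_pressure

net = net_driving_pressure(pc=30, pi=-2, pi_c=25, pi_i=2)
print(net)                   # +9 mm Hg -> filtration
print(fluid_flow(0.5, net))  # 4.5 mL/min
```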
15. The answer is C [II D 2 a, b]. Turbulent flow is predicted when the Reynolds number is increased. Factors that increase the Reynolds number and produce turbulent flow are decreased viscosity (hematocrit) and increased velocity. Partial occlusion of a blood vessel increases the Reynolds number (and turbulence) because the decrease in cross-sectional area results in increased blood velocity (v = Q/A).
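The velocity relationship in answer 15 (v = Q/A) is easy to verify: at constant flow, reducing the cross-sectional area raises velocity proportionally. A minimal sketch; the flow and area values are assumed for illustration:

```python
# v = Q / A (answer 15): at constant flow, halving the cross-sectional
# area doubles velocity, raising the Reynolds number and the tendency
# toward turbulence. Numbers are illustrative, not physiologic data.

def velocity(flow_ml_per_s, area_cm2):
    return flow_ml_per_s / area_cm2

print(velocity(100.0, 4.0))  # 25.0 cm/s in the open vessel
print(velocity(100.0, 2.0))  # 50.0 cm/s past a 50% occlusion
```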
16. The answer is D [IX A]. Orthostatic hypotension is a decrease in arterial pressure that occurs when a person moves from a supine to a standing position. A person with a normal baroreceptor mechanism responds to a decrease in arterial pressure through the vasomotor center by increasing sympathetic outflow and decreasing parasympathetic outflow. The sympathetic component helps to restore blood pressure by increasing heart rate, contractility, total peripheral resistance (TPR), and mean systemic pressure. In a patient who has undergone a sympathectomy, the sympathetic component of the baroreceptor mechanism is absent.

17. The answer is D [III A]. The PR segment (part of the PR interval) and the ST segment are the only portions of the electrocardiogram (ECG) that are isoelectric. The PR interval includes the P wave (atrial depolarization) and the PR segment, which represents conduction through the atrioventricular (AV) node; during this phase, the ventricles are not yet depolarized. The ST segment is the only isoelectric period when the entire ventricle is depolarized.

18. The answer is C [I A]. In a left-to-right ventricular shunt, a defect in the ventricular septum allows blood to flow from the left ventricle to the right ventricle instead of being ejected into the aorta. The “shunted” fraction of the left ventricular output is therefore added to the output of the right ventricle, making pulmonary blood flow (the cardiac output of the right ventricle) higher than systemic blood flow (the cardiac output of the left ventricle). In normal adults, the outputs of both ventricles are equal in the steady state. In the fetus, pulmonary blood flow is near zero. Right ventricular failure results in decreased pulmonary blood flow. Administration of a positive inotropic agent should have the same effect on contractility and cardiac output in both ventricles.

19. The answer is C [IV F 2 a]. The shift in the venous return curve to the right is consistent with an increase in blood volume and, as a consequence, mean systemic pressure. Both cardiac output and venous return are increased in the new steady state (and are equal to each other). Contractility is unaffected.

20. The answer is D [III E 1 b]. A pattern of two P waves preceding each QRS complex indicates that only every other P wave is conducted through the atrioventricular (AV) node to the ventricle. Thus, conduction velocity through the AV node must be decreased.

21. The answer is A [VI A 1 a–d]. A decrease in blood pressure causes decreased stretch of the carotid sinus baroreceptors and decreased firing of the carotid sinus nerve. In an attempt to restore blood pressure, the parasympathetic outflow to the heart is decreased and sympathetic outflow is increased. As a result, heart rate and contractility will be increased. Mean systemic pressure will increase because of increased sympathetic tone of the veins (and a shift of blood to the arteries).

22. The answer is B [VII C 4 c; Table 3.2]. Edema occurs when more fluid is filtered out of the capillaries than can be returned to the circulation by the lymphatics. Filtration is increased by changes that increase Pc or decrease πc. Arteriolar constriction would decrease Pc and decrease filtration. Dehydration would increase plasma protein concentration (by hemoconcentration) and thereby increase πc and decrease filtration. Increased venous pressure would increase Pc and filtration.

23. The answer is A [V E]. The second heart sound is associated with closure of the aortic and pulmonic valves. Because the aortic valve closes before the pulmonic valve, the sound can be split by inspiration.

24. The answer is C [IX B 2]. During exercise, local metabolites accumulate in the exercising muscle and cause local vasodilation and decreased arteriolar resistance of the skeletal muscle. Because muscle mass is large, it contributes a large fraction of the total peripheral resistance (TPR). Therefore, the skeletal muscle vasodilation results in an overall decrease in TPR, even though there is sympathetic vasoconstriction in other vascular beds.

25. The answer is A [V A–G]. The electrocardiogram (ECG) tracing serves as a reference. The QRS complex marks ventricular depolarization, followed immediately by ventricular contraction. Aortic pressure increases steeply after QRS, as blood is ejected from the ventricles. After reaching peak pressure, aortic pressure decreases as blood runs off into the arteries. The characteristic dicrotic notch (“blip” in the aortic pressure curve) appears when the aortic valve closes. Aortic pressure continues to decrease as blood flows out of the aorta.

26. The answer is D [V A–G]. Ventricular volume increases slightly with atrial systole (P wave), is constant during isovolumetric contraction (QRS), and then decreases dramatically after the QRS, when blood is ejected from the ventricle.

27. The answer is C [II C]. An increase in arteriolar resistance will increase total peripheral resistance (TPR). Arterial pressure = cardiac output × TPR, so arterial pressure will also increase. Capillary filtration decreases when there is arteriolar constriction because Pc decreases. Afterload of the heart would be increased by an increase in TPR.

28. The answer is D [IV J]. Cardiac output is calculated by the Fick principle if whole body oxygen (O2) consumption and [O2] in the pulmonary artery and pulmonary vein are measured. Mixed venous blood could substitute for a pulmonary artery sample, and peripheral arterial blood could substitute for a pulmonary vein sample. Central venous pressure and heart rate are not needed for this calculation.
Cardiac output = 500 mL O2/min ÷ (0.24 mL O2/mL − 0.16 mL O2/mL) = 6,250 mL/min, or 6.25 L/min
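The Fick principle computation in answer 28 can be reproduced directly. A minimal Python sketch (function name is mine):

```python
# Fick principle (answer 28):
# CO = O2 consumption / (arterial [O2] - mixed venous [O2]).

def cardiac_output_fick(o2_consumption_ml_min, arterial_o2, venous_o2):
    """[O2] values in mL O2 per mL blood; returns mL blood/min."""
    return o2_consumption_ml_min / (arterial_o2 - venous_o2)

print(cardiac_output_fick(500, 0.24, 0.16))  # ~6250 mL/min = 6.25 L/min
```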
29. The answer is B [III B 1 a, c, d, 2 a]. The upstroke of the action potential in the atria, ventricles, and Purkinje fibers is the result of a fast inward Na+ current. The upstroke of the action potential in the sinoatrial (SA) node is the result of an inward Ca2+ current. The plateau of the ventricular action potential is the result of a slow inward Ca2+ current. Repolarization in all cardiac tissues is the result of an outward K+ current.

30. The answer is C [IV F 3 a (1)]. An upward shift of the cardiac output curve is consistent with an increase in myocardial contractility; for any right atrial pressure (sarcomere length), the force of contraction is increased. Such a change causes an increase in stroke volume and cardiac output. Increased blood volume and increased mean systemic pressure are related and would cause a rightward shift in the venous return curve. A negative inotropic agent would cause a decrease in contractility and a downward shift of the cardiac output curve.

31. The answer is B [IV F 3]. End-diastolic volume and right atrial pressure are related and can be used interchangeably.

32. The answer is E [II A 2, 3, F]. The decrease in pressure at any level of the cardiovascular system is caused by the resistance of the blood vessels (ΔP = Q × R). The greater the resistance is, the greater the decrease in pressure is. The arterioles are the site of highest resistance in the vasculature. The arterioles do not have the greatest surface area or cross-sectional area (the capillaries do). Velocity of blood flow is lowest in the capillaries, not in the arterioles.

33. The answer is D [II G 3]. Pulse pressure is the difference between the highest (systolic) and lowest (diastolic) arterial pressures. It reflects the volume ejected by the left ventricle (stroke volume). Pulse pressure increases when the capacitance of the arteries decreases, such as with aging.

34. The answer is B [III B 2 c]. Phase 4 depolarization is responsible for the pacemaker property of sinoatrial (SA) nodal cells. It is caused by an increase in Na+ conductance and an inward Na+ current (If), which depolarizes the cell membrane.

35. The answer is A [VIII E 1; Table 3.1]. During exercise, the sympathetic nervous system is activated. The observed increase in splanchnic vascular resistance is due to sympathetic activation of α1 receptors on splanchnic arterioles.

36. The answer is D [V A–G]. Aortic pressure reaches its highest level immediately after the rapid ejection of blood during left ventricular systole. This highest level actually coincides with the beginning of the reduced ventricular ejection phase.

37. The answer is C [IV B 6]. Contractility of myocardial cells depends on the intracellular [Ca2+], which is regulated by Ca2+ entry across the cell membrane during the plateau of the action potential and by Ca2+ uptake into and release from the sarcoplasmic reticulum (SR). Ca2+ binds to troponin C and removes the inhibition of actin–myosin interaction, allowing contraction (shortening) to occur.

38. The answer is B [VIII B 2 a]. Histamine causes vasodilation of the arterioles, which increases Pc and capillary filtration. It also causes constriction of the veins, which contributes to the increase in Pc. Acetylcholine (ACh) interacts with muscarinic receptors (although these are not present on vascular smooth muscle).

39. The answer is C [VIII C, D, E 2, F]. Blood flow to the brain is autoregulated by the Pco2. If metabolism increases (or arterial pressure decreases), the Pco2 will increase and cause cerebral vasodilation. Blood flow to the heart and to skeletal muscle during exercise is also regulated metabolically, but adenosine and hypoxia are the most important vasodilators for the heart. Adenosine, lactate, and K+ are the most important vasodilators for exercising skeletal muscle. Blood flow to the skin is regulated by the sympathetic nervous system rather than by local metabolites.

40. The answer is D [I A]. Cardiac output of the left and right sides of the heart is equal. Blood ejected from the left side of the heart to the systemic circulation must be oxygenated by passage through the pulmonary circulation.

41. The answer is C [III C]. The atrioventricular (AV) delay (which corresponds to the PR interval) allows time for filling of the ventricles from the atria. If the ventricles contracted before they were filled, stroke volume would decrease.

42. The answer is A [VIII C–F]. Circulation of the skin is controlled primarily by the sympathetic nerves. The coronary and cerebral circulations are primarily regulated by local metabolic factors. Skeletal muscle circulation is regulated by metabolic factors (local metabolites) during exercise, although at rest it is controlled by the sympathetic nerves.
43. The answer is E [IX B]. In anticipation of exercise, the central command increases sympathetic outflow to the heart and blood vessels, causing an increase in heart rate and contractility. Venous return is increased by muscular activity and contributes to an increase in cardiac output by the Frank-Starling mechanism. Pulse pressure is increased because stroke volume is increased. Although increased sympathetic outflow to the blood vessels might be expected to increase total peripheral resistance (TPR), it does not because there is an overriding vasodilation of the skeletal muscle arterioles as a result of the buildup of vasodilator metabolites (lactate, K+, adenosine). Because this vasodilation improves the delivery of O2, more O2 can be extracted and used by the contracting muscle.

44. The answer is B [III 3; Table 3.1]. Propranolol is an adrenergic antagonist that blocks both β1 and β2 receptors. When propranolol is administered to reduce cardiac output, it inhibits β1 receptors in the sinoatrial (SA) node (heart rate) and in ventricular muscle (contractility).

45. The answer is E [V E]. Ventricular volume is at its lowest value while the ventricle is relaxed (diastole), just before ventricular filling begins.

46. The answer is D [IV I]. Myocardial O2 consumption is determined by the amount of tension developed by the heart. It increases when there are increases in aortic pressure (increased afterload), when there is increased heart rate or stroke volume (which increases cardiac output), or when the size (radius) of the heart is increased (T = P × r). Influx of Na+ ions during an action potential is a purely passive process, driven by the electrochemical driving forces on Na+ ions. Of course, maintenance of the inwardly directed Na+ gradient over the long term requires the Na+–K+ pump, which is energized by adenosine triphosphate (ATP).
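The wall-tension relationship T = P × r quoted in answer 46 can be illustrated numerically; at the same pressure, a dilated ventricle develops proportionally more tension. A minimal sketch; the values are arbitrary and only the proportionality matters:

```python
# T = P x r (answer 46): at the same pressure, a larger ventricular radius
# means more wall tension and hence greater myocardial O2 consumption.
# Values are illustrative, in arbitrary but consistent units.

def wall_tension(pressure, radius):
    return pressure * radius

print(wall_tension(120, 2.0))  # baseline heart
print(wall_tension(120, 3.0))  # dilated heart: 50% more tension
```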
47. The answer is D [VII B 1, 2]. Because O2, CO2, and CO are lipophilic, they cross capillary walls primarily by diffusion through the endothelial cell membranes. Glucose is water soluble; it cannot cross through the lipid component of the cell membrane and is restricted to the water-filled clefts, or pores, between the cells.

48. The answer is E [VI A]. Diarrhea causes a loss of extracellular fluid volume, which produces a decrease in arterial pressure. The decrease in arterial pressure activates the baroreceptor mechanism, which produces an increase in heart rate when the patient is supine. When she stands up, blood pools in her leg veins and produces a decrease in venous return, a decrease in cardiac output (by the Frank-Starling mechanism), and a further decrease in arterial pressure. The further decrease in arterial pressure causes further activation of the baroreceptor mechanism and a further increase in heart rate.

49. The answer is D [VI B]. In this patient, hypertension is most likely caused by left renal artery stenosis, which led to increased renin secretion by the left kidney. The increased plasma renin activity causes an increased secretion of aldosterone, which increases Na+ reabsorption by the renal distal tubule. The increased Na+ reabsorption leads to increased blood volume and blood pressure. The right kidney responds to the increase in blood pressure by decreasing its renin secretion. Right renal artery stenosis causes a similar pattern of results, except that renin secretion from the right kidney, not the left kidney, is increased. Aldosterone-secreting tumors cause increased levels of aldosterone but decreased plasma renin activity (as a result of decreased renin secretion by both kidneys). Pheochromocytoma is associated with increased circulating levels of catecholamines, which increase blood pressure by their effects on the heart (increased heart rate and contractility) and blood vessels (vasoconstriction); the increase in blood pressure is sensed by the kidneys and results in decreased plasma renin activity and aldosterone levels.

50. The answer is E [III B 1 e]. Phase 4 is the resting membrane potential. Because the conductance to K+ is highest, the membrane potential approaches the equilibrium potential for K+.

51. The answer is C [III B 1 c]. Phase 2 is the plateau of the ventricular action potential. During this phase, the conductance to Ca2+ increases transiently. Ca2+ that enters the cell during the plateau is the trigger that releases more Ca2+ from the sarcoplasmic reticulum (SR) for the contraction.

52. The answer is E [III B 1 e]. Phase 4 is electrical diastole.

53. The answer is A [III E 2, 3; Table 3.1]. Propranolol, a β-adrenergic antagonist, blocks all sympathetic effects that are mediated by a β1 or β2 receptor. The sympathetic effect on the sinoatrial (SA) node is to increase heart rate via a β1 receptor; therefore, propranolol decreases heart rate. Ejection fraction reflects ventricular contractility, which is another effect of β1 receptors; thus, propranolol decreases contractility, ejection fraction, and stroke volume. Splanchnic and cutaneous resistance are mediated by α1 receptors.

54. The answer is D [III E 2 a; Table 3.1]. Acetylcholine (ACh) causes slowing of the heart via muscarinic receptors in the sinoatrial (SA) node.

55. The answer is D [IV C]. A negative inotropic effect is one that decreases myocardial contractility. Contractility is the ability to develop tension at a fixed muscle length. Factors that decrease contractility are those that decrease the intracellular [Ca2+]. Increasing heart rate increases intracellular [Ca2+] because more Ca2+ ions enter the cell during the plateau of each action potential. Sympathetic stimulation and norepinephrine increase intracellular [Ca2+] by increasing entry during the plateau and increasing the storage of Ca2+ by the sarcoplasmic reticulum (SR) [for later release]. Cardiac glycosides increase intracellular [Ca2+] by inhibiting the Na+–K+ pump, thereby inhibiting Na+–Ca2+ exchange (a mechanism that pumps Ca2+ out of the cell). Acetylcholine (ACh) has a negative inotropic effect on the atria.

56. The answer is A [IV A 3]. The gap junctions occur at the intercalated disks between cells and are low-resistance sites of current spread.

57. The answer is A [VI C 4; IX C]. Angiotensin I and aldosterone are increased in response to a decrease in renal perfusion pressure. Angiotensinogen is the precursor for angiotensin I. Antidiuretic hormone (ADH) is released when atrial receptors detect a decrease in blood volume. Of these, only aldosterone increases Na+ reabsorption. Atrial natriuretic peptide is released in response to an increase in atrial pressure, and an increase in its secretion would not be anticipated after blood loss.

58. The answer is E [V E]. The mitral [atrioventricular (AV)] valve opens when left atrial pressure becomes higher than left ventricular pressure. This situation occurs when the left ventricular pressure is at its lowest level—when the ventricle is relaxed, blood has been ejected from the previous cycle, and before refilling has occurred.
59. The answer is D [IV G]. First, calculate stroke volume from the cardiac output and heart rate: cardiac output = stroke volume × heart rate; thus, stroke volume = cardiac output/heart rate = 3,500 mL/min ÷ 95 beats/min = 36.8 mL. Then, calculate end-diastolic volume from stroke volume and ejection fraction: ejection fraction = stroke volume/end-diastolic volume; thus, end-diastolic volume = stroke volume/ejection fraction = 36.8 mL/0.4 = 92 mL.

Chapter 4 Respiratory Physiology

I. Lung Volumes and Capacities

A. Lung volumes (Figure 4.1)
1. Tidal volume (VT)
■ is the volume inspired or expired with each normal breath.
2. Inspiratory reserve volume (IRV)
■ is the volume that can be inspired over and above the tidal volume.
■ is used during exercise.
3. Expiratory reserve volume (ERV)
■ is the volume that can be expired after the expiration of a tidal volume.
4. Residual volume (RV)
■ is the volume that remains in the lungs after a maximal expiration.
■ cannot be measured by spirometry.
5. Dead space
a. Anatomic dead space
■ is the volume of the conducting airways.
■ is normally approximately 150 mL.
b. Physiologic dead space
■ is a functional measurement.
■ is defined as the volume of the lungs that does not participate in gas exchange.
■ is approximately equal to the anatomic dead space in normal lungs.
■ may be greater than the anatomic dead space in lung diseases in which there are ventilation/perfusion (V/Q) defects.
■ is calculated by the following equation:
VD = VT × (PaCO2 − PECO2)/PaCO2
where:
VD = physiologic dead space (mL)
VT = tidal volume (mL)
PaCO2 = Pco2 of alveolar gas (mm Hg) = Pco2 of arterial blood
PECO2 = Pco2 of expired air (mm Hg)
■ In words, the equation states that physiologic dead space is tidal volume multiplied by a fraction. The fraction represents the dilution of alveolar Pco2 by dead-space air, which does not participate in gas exchange and therefore does not contribute CO2 to expired air.
6. Ventilation rate
a. Minute ventilation is expressed as follows:
Minute ventilation = VT × breaths/min
b. Alveolar ventilation (VA) is expressed as follows:
VA = (VT − VD) × breaths/min
■ Sample problem: A person with a tidal volume (VT) of 0.5 L is breathing at a rate of 15 breaths/min. The Pco2 of his arterial blood is 40 mm Hg, and the Pco2 of his expired air is 36 mm Hg. What is his rate of alveolar ventilation?
Dead space: VD = VT × (PaCO2 − PECO2)/PaCO2 = 0.5 L × (40 mm Hg − 36 mm Hg)/40 mm Hg = 0.05 L
Alveolar ventilation: VA = (VT − VD) × breaths/min = (0.5 L − 0.05 L) × 15 breaths/min = 6.75 L/min

B. Lung capacities (see Figure 4.1)
1. Inspiratory capacity
■ is the sum of tidal volume and IRV.
2. Functional residual capacity (FRC)
■ is the sum of ERV and RV.
■ is the volume remaining in the lungs after a tidal volume is expired.
■ includes the RV, so it cannot be measured by spirometry.
3. Vital capacity (VC), or forced vital capacity (FVC)
■ is the sum of tidal volume, IRV, and ERV.
■ is the volume of air that can be forcibly expired after a maximal inspiration.
4. Total lung capacity (TLC)
■ is the sum of all four lung volumes.
■ is the volume in the lungs after a maximal inspiration.
■ includes RV, so it cannot be measured by spirometry.

[Figure 4.1 Lung volumes and capacities.]
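The sample problem in I A 6 above can be reproduced step by step. A minimal Python sketch using the same values (function names are mine):

```python
# Sample problem from I A 6: physiologic dead space and alveolar ventilation.
# VD = VT x (PaCO2 - PECO2) / PaCO2 ; VA = (VT - VD) x breaths/min

def dead_space(vt_l, paco2_mm_hg, peco2_mm_hg):
    return vt_l * (paco2_mm_hg - peco2_mm_hg) / paco2_mm_hg

def alveolar_ventilation(vt_l, vd_l, rate_per_min):
    return (vt_l - vd_l) * rate_per_min

vd = dead_space(0.5, 40, 36)
print(vd)                                 # 0.05 L
print(alveolar_ventilation(0.5, vd, 15))  # ~6.75 L/min
```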
C. Forced expiratory volume (FEV1) (Figure 4.2)
■ FEV1 is the volume of air that can be expired in the first second of a forced maximal expiration.
■ FEV1 is normally 80% of the forced vital capacity, which is expressed as
FEV1/FVC = 0.8
■ In obstructive lung disease, such as asthma and chronic obstructive pulmonary disease (COPD), both FEV1 and FVC are reduced, but FEV1 is reduced more than FVC is; thus, FEV1/FVC is decreased.
■ In restrictive lung disease, such as fibrosis, both FEV1 and FVC are reduced, but FEV1 is reduced less than FVC is; thus, FEV1/FVC is increased.

[Figure 4.2 Forced vital capacity (FVC) and FEV1 in normal subjects and in patients with obstructive (asthma) and restrictive (fibrosis) lung disease. FEV1 = volume expired in first second of forced maximal expiration.]
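The FEV1/FVC ratio is a simple quotient, so the obstructive and restrictive patterns are easy to demonstrate numerically. A minimal sketch; the volumes below are assumed illustrative values, not normative data:

```python
# FEV1/FVC distinguishes obstructive from restrictive disease (see Table 4.1).
# The volumes below are assumed for illustration only.

def fev1_fvc_ratio(fev1_l, fvc_l):
    return fev1_l / fvc_l

print(fev1_fvc_ratio(4.0, 5.0))  # 0.8, normal
print(fev1_fvc_ratio(1.3, 3.1))  # ~0.42, reduced ratio: obstructive pattern
print(fev1_fvc_ratio(2.8, 3.1))  # ~0.90, preserved/raised ratio: restrictive
```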
II. Mechanics of Breathing

A. Muscles of inspiration
1. Diaphragm
■ is the most important muscle for inspiration.
■ When the diaphragm contracts, the abdominal contents are pushed downward, and the ribs are lifted upward and outward, increasing the volume of the thoracic cavity.
2. External intercostals and accessory muscles
■ are not used for inspiration during normal quiet breathing.
■ are used during exercise and in respiratory distress.

B. Muscles of expiration
■ Expiration is normally passive.
■ Because the lung–chest wall system is elastic, it returns to its resting position after inspiration.
■ Expiratory muscles are used during exercise or when airway resistance is increased because of disease (e.g., asthma).
1. Abdominal muscles
■ compress the abdominal cavity, push the diaphragm up, and push air out of the lungs.
2. Internal intercostal muscles
■ pull the ribs downward and inward.

C. Compliance of the respiratory system
■ is analogous to capacitance in the cardiovascular system.
■ is described by the following equation:
C = V/P
where:
C = compliance (mL/mm Hg)
V = volume (mL)
P = pressure (mm Hg)
■ describes the distensibility of the lungs and chest wall.
■ is inversely related to elastance, which depends on the amount of elastic tissue.
■ is inversely related to stiffness.
■ is the slope of the pressure–volume curve.
■ is the change in volume for a given change in pressure. Pressure can refer to the pressure inside the lungs and airways or to transpulmonary pressure (i.e., the pressure difference across pulmonary structures).
1. Compliance of the lungs (Figure 4.3)
■ Transmural pressure is alveolar pressure minus intrapleural pressure.
■ When the pressure outside of the lungs (i.e., intrapleural pressure) is negative, the lungs expand and lung volume increases.
■ When the pressure outside of the lungs is positive, the lungs collapse and lung volume decreases.
■ Inflation of the lungs (inspiration) follows a different curve than deflation of the lungs (expiration); this difference is called hysteresis and is due to the need to overcome surface tension forces when inflating the lungs.
■ In the middle range of pressures, compliance is greatest and the lungs are most distensible.
■ At high expanding pressures, compliance is lowest, the lungs are least distensible, and the curve flattens.

[Figure 4.3 Compliance of the lungs. Different curves are followed during inspiration and expiration (hysteresis).]

2. Compliance of the combined lung–chest wall system (Figure 4.4)
a. Figure 4.4 shows the pressure–volume relationships for the lungs alone (hysteresis has been eliminated for simplicity), the chest wall alone, and the lungs and chest wall together.
■ Compliance of the lung–chest wall system is less than that of the lungs alone or the chest wall alone (the slope is flatter).
b. At rest (identified by the filled circle in the center of Figure 4.4), lung volume is at FRC and the pressure in the airways and lungs is equal to atmospheric pressure (i.e., zero). Under these equilibrium conditions, there is a collapsing force on the lungs and an expanding force on the chest wall. At FRC, these two forces are equal and opposite and, therefore, the combined lung–chest wall system neither wants to collapse nor wants to expand (i.e., equilibrium).
c. As a result of these two opposing forces, intrapleural pressure is negative (subatmospheric).
■ If air is introduced into the intrapleural space (pneumothorax), the intrapleural pressure becomes equal to atmospheric pressure. Without the normal negative intrapleural pressure, the lungs will collapse (their natural tendency) and the chest wall will spring outward (its natural tendency).
d. Changes in lung compliance
■ In a patient with emphysema, lung compliance is increased and the tendency of the lungs to collapse is decreased. Therefore, at the original FRC, the tendency of the lungs to collapse is less than the tendency of the chest wall to expand. The lung–chest wall system will seek a new, higher FRC so that the two opposing forces can be balanced again; the patient's chest becomes barrel-shaped, reflecting this higher volume.
■ In a patient with fibrosis, lung compliance is decreased and the tendency of the lungs to collapse is increased. Therefore, at the original FRC, the tendency of the lungs to collapse is greater than the tendency of the chest wall to expand. The lung–chest wall system will seek a new, lower FRC so that the two opposing forces can be balanced again.

[Figure 4.4 Compliance of the lungs and chest wall separately and together. FRC = functional residual capacity.]

D. Surface tension of alveoli and surfactant
1. Surface tension of the alveoli (Figure 4.5)
■ results from the attractive forces between liquid molecules lining the alveoli at the air–liquid interface.
■ creates a collapsing pressure that is directly proportional to surface tension and inversely proportional to alveolar radius (Laplace's law), as shown in the following equation:
P = 2T/r
where:
P = collapsing pressure on alveolus (or pressure required to keep alveolus open) [dynes/cm2]
T = surface tension (dynes/cm)
r = radius of alveolus (cm)
a. Large alveoli (large radii) have low collapsing pressures and are easy to keep open.
b. Small alveoli (small radii) have high collapsing pressures and are more difficult to keep open.
■ In the absence of surfactant, the small alveoli have a tendency to collapse (atelectasis).
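Laplace's law above makes the size effect concrete: halving the radius doubles the collapsing pressure, and lowering surface tension (as surfactant does) lowers it again. A minimal Python sketch; the surface-tension and radius values are assumed for illustration:

```python
# Laplace's law for an alveolus: collapsing pressure P = 2T / r.
# Surface tensions and radii below are assumed illustrative values.

def collapsing_pressure(tension_dyn_cm, radius_cm):
    return 2 * tension_dyn_cm / radius_cm

print(collapsing_pressure(50, 0.01))   # larger alveolus: lower P
print(collapsing_pressure(50, 0.005))  # smaller alveolus: P doubles
print(collapsing_pressure(25, 0.005))  # surfactant halves T, lowering P again
```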
2. Surfactant (see Figure 4.5)
■ lines the alveoli.
■ reduces surface tension by disrupting the intermolecular forces between liquid molecules. This reduction in surface tension prevents small alveoli from collapsing and increases compliance.
■ is synthesized by type II alveolar cells and consists primarily of the phospholipid dipalmitoylphosphatidylcholine (DPPC).
■ In the fetus, surfactant synthesis is variable. Surfactant may be present as early as gestational week 24 and is almost always present by gestational week 35.
■ Generally, a lecithin:sphingomyelin ratio greater than 2:1 in amniotic fluid reflects mature levels of surfactant.
■ Neonatal respiratory distress syndrome can occur in premature infants because of the lack of surfactant. The infant exhibits atelectasis (lungs collapse), difficulty reinflating the lungs (as a result of decreased compliance), and hypoxemia (as a result of decreased V/Q).

[Figure 4.5 Effect of alveolar size and surfactant on the pressure that tends to collapse the alveoli. P = pressure; r = radius; T = surface tension.]

E. Relationships between pressure, airflow, and resistance
■ are analogous to the relationships between blood pressure, blood flow, and resistance in the cardiovascular system.
1. Airflow
■ is driven by, and is directly proportional to, the pressure difference between the mouth (or nose) and the alveoli.
■ is inversely proportional to airway resistance; thus, the higher the airway resistance, the lower the airflow. This inverse relationship is shown in the following equation:
Q = ΔP/R
where:
Q = airflow (mL/min or L/min)
ΔP = pressure gradient (cm H2O)
R = airway resistance (cm H2O/L/min)
2. Resistance of the airways
■ is described by Poiseuille's law, as shown in the following equation:
R = 8ηl/πr⁴
where:
R = resistance
η = viscosity of the inspired gas
l = length of the airway
r = radius of the airway
■ Notice the powerful inverse fourth-power relationship between resistance and the size (radius) of the airway.
■ For example, if airway radius decreases by a factor of 4, then resistance will increase by a factor of 256 (4⁴), and airflow will decrease by a factor of 256.
3. Factors that change airway resistance
■ The major site of airway resistance is the medium-sized bronchi.
■ The smallest airways would seem to offer the highest resistance, but they do not because of their parallel arrangement.
a. Contraction or relaxation of bronchial smooth muscle
■ changes airway resistance by altering the radius of the airways.
(1) Parasympathetic stimulation, irritants, and the slow-reacting substance of anaphylaxis (asthma) constrict the airways, decrease the radius, and increase the resistance to airflow.
(2) Sympathetic stimulation and sympathetic agonists (isoproterenol) dilate the airways via β2 receptors, increase the radius, and decrease the resistance to airflow.
b. Lung volume
■ alters airway resistance because of the radial traction exerted on the airways by surrounding lung tissue.
(1) High lung volumes are associated with greater traction on airways and decreased airway resistance. Patients with increased airway resistance (e.g., asthma) “learn” to breathe at higher lung volumes to offset the high airway resistance associated with their disease.
(2) Low lung volumes are associated with less traction and increased airway resistance, even to the point of airway collapse.
c. Viscosity or density of inspired gas
■ changes the resistance to airflow.
■ During a deep-sea dive, both air density and resistance to airflow are increased.
■ Breathing a low-density gas, such as helium, reduces the resistance to airflow.
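The fourth-power dependence in Poiseuille's law is worth verifying directly, since it drives the 256-fold example in the text. A minimal Python sketch (the function name and the unit-free parameter values are mine):

```python
# Airway resistance by Poiseuille's law: R = 8*eta*l / (pi * r^4).
# Reducing the radius 4-fold raises resistance 4^4 = 256-fold (see text).

import math

def airway_resistance(eta, length, radius):
    return 8 * eta * length / (math.pi * radius ** 4)

r_open = airway_resistance(1.0, 1.0, 1.0)          # baseline, arbitrary units
r_constricted = airway_resistance(1.0, 1.0, 0.25)  # radius reduced 4-fold
print(round(r_constricted / r_open))               # 256
```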
F. Breathing cycle—description of pressures and airflow (Figure 4.6)
1. At rest (before inspiration begins)
a. Alveolar pressure equals atmospheric pressure.
■ Because lung pressures are expressed relative to atmospheric pressure, alveolar pressure is said to be zero.
b. Intrapleural pressure is negative.
■ At FRC, the opposing forces of the lungs trying to collapse and the chest wall trying to expand create a negative pressure in the intrapleural space between them.
■ Intrapleural pressure can be measured by a balloon catheter in the esophagus.
c. Lung volume is the FRC.
2. During inspiration
a. The inspiratory muscles contract and cause the volume of the thorax to increase.
■ As lung volume increases, alveolar pressure decreases to less than atmospheric pressure (i.e., becomes negative).
■ The pressure gradient between the atmosphere and the alveoli now causes air to flow into the lungs; airflow will continue until the pressure gradient dissipates.
b. Intrapleural pressure becomes more negative.
■ Because lung volume increases during inspiration, the elastic recoil strength of the lungs also increases. As a result, intrapleural pressure becomes even more negative than it was at rest.
■ Changes in intrapleural pressure during inspiration are used to measure the dynamic compliance of the lungs.
c. Lung volume increases by one VT.
■ At the peak of inspiration, lung volume is the FRC plus one VT.
3. During expiration
a. Alveolar pressure becomes greater than atmospheric pressure.
■ The alveolar pressure becomes greater (i.e., becomes positive) because alveolar gas is compressed by the elastic forces of the lung.
■ Thus, alveolar pressure is now higher than atmospheric pressure, the pressure gradient is reversed, and air flows out of the lungs.
b. Intrapleural pressure returns to its resting value during a normal (passive) expiration.
■ However, during a forced expiration, intrapleural pressure actually becomes positive. This positive intrapleural pressure compresses the airways and makes expiration more difficult.
■ In COPD, in which airway resistance is increased, patients learn to expire slowly with “pursed lips” to prevent the airway collapse that may occur with a forced expiration.
c. Lung volume returns to FRC.

[Figure 4.6 Volumes and pressures during the breathing cycle.]

G. Lung disease (Table 4.1)
1. Asthma
■ is an obstructive disease in which expiration is impaired.
■ is characterized by decreased FVC, decreased FEV1, and decreased FEV1/FVC.
■ Air that should have been expired is not, leading to air trapping and increased FRC.
2. COPD
■ is a combination of chronic bronchitis and emphysema.
■ is an obstructive disease with increased lung compliance in which expiration is impaired.
■ is characterized by decreased FVC, decreased FEV1, and decreased FEV1/FVC.
■ Air that should have been expired is not, leading to air trapping, increased FRC, and a barrel-shaped chest.
a. “Pink puffers” (primarily emphysema) have mild hypoxemia and, because they maintain alveolar ventilation, normocapnia (normal Pco2).
b. “Blue bloaters” (primarily bronchitis) have severe hypoxemia with cyanosis and, because they do not maintain alveolar ventilation, hypercapnia (increased Pco2). They have right ventricular failure and systemic edema.
3. Fibrosis
■ is a restrictive disease with decreased lung compliance in which inspiration is impaired.
■ is characterized by a decrease in all lung volumes. Because FEV1 is decreased less than FVC, FEV1/FVC is increased (or may be normal).

Table 4.1 Characteristics of Lung Diseases

Disease  | FEV1 | FVC | FEV1/FVC      | FRC
Asthma   | ↓↓   | ↓   | ↓             | ↑
COPD     | ↓↓   | ↓   | ↓             | ↑
Fibrosis | ↓    | ↓↓  | ↑ (or normal) | ↓

COPD = chronic obstructive pulmonary disease; FEV1 = volume expired in first second of forced expiration; FRC = functional residual capacity; FVC = forced vital capacity.

III. Gas Exchange
A. Dalton's law of partial pressures
■ can be expressed by the following equation:

Partial pressure = Total pressure × Fractional gas concentration

1. In dry inspired air, the partial pressure of O2 can be calculated as follows. Assume that total pressure is atmospheric and the fractional concentration of O2 is 0.21.

PO2 = 760 mm Hg × 0.21 = 160 mm Hg

2. In humidified tracheal air at 37°C, the calculation is modified to correct for the partial pressure of H2O, which is 47 mm Hg.

PTotal = 760 mm Hg − 47 mm Hg = 713 mm Hg
PO2 = 713 mm Hg × 0.21 = 150 mm Hg

B. Partial pressure of O2 and CO2 in inspired air, alveolar air, and blood (Table 4.2)
■ Approximately 2% of the systemic cardiac output bypasses the pulmonary circulation ("physiologic shunt"). The resulting admixture of venous blood with oxygenated arterial blood makes the Po2 of arterial blood slightly lower than that of alveolar air.
C. Dissolved gases
■ The amount of gas dissolved in a solution (such as blood) is proportional to its partial pressure. The units of concentration for a dissolved gas are mL gas/100 mL blood.
■ The following calculation uses O2 in arterial blood as an example:

Dissolved [O2] = PO2 × Solubility of O2 in blood
= 100 mm Hg × 0.003 mL O2/100 mL/mm Hg
= 0.3 mL O2/100 mL blood

where:
[O2] = O2 concentration in blood
PO2 = partial pressure of O2 in blood
0.003 mL O2/100 mL/mm Hg = solubility of O2 in blood

D. Diffusion of gases such as O2 and CO2
■ The diffusion rates of O2 and CO2 depend on the partial pressure differences across the membrane and the area available for diffusion.
■ For example, the diffusion of O2 from alveolar air into the pulmonary capillary depends on the partial pressure difference for O2 between alveolar air and pulmonary capillary blood. Normally, capillary blood equilibrates with alveolar gas; when the partial pressures of O2 become equal (see Table 4.2), there is no more net diffusion of O2.
■ Gas diffusion across the alveolar–pulmonary capillary barrier occurs according to Fick's law:

Vx = DL × ΔP

where:
Vx = volume of gas transferred per minute (mL/min)
DL = lung diffusing capacity (mL/min/mm Hg)
ΔP = partial pressure difference of gas (mm Hg)

■ DL, or lung diffusing capacity, is the equivalent of permeability of the alveolar–pulmonary capillary barrier and is proportional to the diffusion coefficient of the gas and the surface area, and inversely proportional to the thickness of the barrier. DL is measured with carbon monoxide (i.e., DLCO).
1. DL increases during exercise because there are more open capillaries and thus more surface area for diffusion.
2. DL decreases in emphysema (because of decreased surface area) and in fibrosis and pulmonary edema (because of increased diffusion distance).
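The gas-exchange arithmetic above (Dalton's law, dissolved O2, and Fick's law) bundles into a few one-line helpers. This is an illustrative sketch, not part of the text: the constants are the ones quoted above, but the DL and ΔP values in the last line are hypothetical.

```python
# Sketch of the calculations given in sections III A, C, and D above.
# Constants: 760 mm Hg barometric pressure, 47 mm Hg water vapor at 37 C,
# FiO2 = 0.21, O2 solubility = 0.003 mL O2/100 mL blood/mm Hg.

def partial_pressure(total_pressure, fraction, ph2o=0.0):
    """Dalton's law, optionally correcting for water vapor pressure."""
    return (total_pressure - ph2o) * fraction

def dissolved_o2(po2, solubility=0.003):
    """Dissolved O2 (mL O2/100 mL blood) at a given Po2 (mm Hg)."""
    return po2 * solubility

def fick_transfer(dl, delta_p):
    """Fick's law for the lung: Vx (mL/min) = DL (mL/min/mm Hg) x deltaP (mm Hg)."""
    return dl * delta_p

print(partial_pressure(760, 0.21))            # dry inspired air: 159.6, ~160 mm Hg
print(partial_pressure(760, 0.21, ph2o=47))   # humidified tracheal air: ~150 mm Hg
print(dissolved_o2(100))                      # 0.3 mL O2/100 mL blood
print(fick_transfer(dl=25, delta_p=60))       # hypothetical DL and gradient
```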
E. Perfusion-limited and diffusion-limited gas exchange (Table 4.3)
1. Perfusion-limited exchange
■ is illustrated by N2O and by O2 under normal conditions.
■ In perfusion-limited exchange, the gas equilibrates early along the length of the pulmonary capillary. The partial pressure of the gas in arterial blood becomes equal to the partial pressure in alveolar air.
■ Thus, for a perfusion-limited process, diffusion of the gas can be increased only if blood flow increases.
2. Diffusion-limited exchange
■ is illustrated by CO and by O2 during strenuous exercise.
■ is also illustrated in disease states. In fibrosis, the diffusion of O2 is restricted because thickening of the alveolar membrane increases diffusion distance. In emphysema, the diffusion of O2 is decreased because the surface area for diffusion of gases is decreased.
■ In diffusion-limited exchange, the gas does not equilibrate by the time blood reaches the end of the pulmonary capillary. The partial pressure difference of the gas between alveolar air and pulmonary capillary blood is maintained. Diffusion continues as long as the partial pressure gradient is maintained.

Table 4.2 Partial Pressures of O2 and CO2 (mm Hg)

Gas  | Dry Inspired Air | Humidified Tracheal Air | Alveolar Air | Systemic Arterial Blood | Mixed Venous Blood
Po2  | 160 | 150 (addition of H2O decreases Po2) | 100 (O2 has diffused from alveolar air into pulmonary capillary blood, decreasing the Po2 of alveolar air) | 100* (blood has equilibrated with alveolar air; is "arterialized") | 40 (O2 has diffused from arterial blood into the tissues, decreasing the Po2 of venous blood)
Pco2 | 0 | 0 | 40 (CO2 has been added from pulmonary capillary blood into alveolar air) | 40 (blood has equilibrated with alveolar air) | 46 (CO2 has diffused from the tissues into venous blood, increasing the Pco2 of venous blood)

*Actually, slightly <100 mm Hg because of the "physiologic shunt."

Table 4.3 Perfusion-limited and Diffusion-limited Gas Exchange

Perfusion-limited      | Diffusion-limited
O2 (normal conditions) | O2 (emphysema, fibrosis, strenuous exercise)
CO2                    | CO
N2O                    |

IV. Oxygen Transport
■ O2 is carried in blood in two forms: dissolved or bound to hemoglobin (most important).
■ Hemoglobin, at its normal concentration, increases the O2-carrying capacity of blood 70-fold.
A. Hemoglobin
1. Characteristics—globular protein of four subunits
■ Each subunit contains a heme moiety, which is an iron-containing porphyrin.
■ The iron is in the ferrous state (Fe2+), which binds O2.
■ Each subunit has a polypeptide chain. Two of the subunits have α chains and two of the subunits have β chains; thus, normal adult hemoglobin is called α2β2.
2. Fetal hemoglobin [hemoglobin F (HbF)]
■ In fetal hemoglobin, the β chains are replaced by γ chains; thus, fetal hemoglobin is called α2γ2.
■ The O2 affinity of fetal hemoglobin is higher than the O2 affinity of adult hemoglobin (left-shift) because 2,3-diphosphoglycerate (DPG) binds less avidly to the γ chains of fetal hemoglobin than to the β chains of adult hemoglobin.
■ Because the O2 affinity of fetal hemoglobin is higher than the O2 affinity of adult hemoglobin, O2 movement from mother to fetus is facilitated (see IV C 2 b).
3. Methemoglobin
■ Iron is in the Fe3+ state.
■ does not bind O2.
4. Hemoglobin S
■ causes sickle cell disease.
■ The α subunits are normal and the β subunits are abnormal, giving hemoglobin S the designation α2Aβ2S (normal α chains, sickle β chains).
■ In the deoxygenated form, deoxyhemoglobin forms sickle-shaped rods that deform red blood cells (RBCs).
5. O2-binding capacity of hemoglobin
■ is the maximum amount of O2 that can be bound to hemoglobin.
■ limits the amount of O2 that can be carried in blood.
■ is measured at 100% saturation.
■ is expressed in units of mL O2/g hemoglobin.
6. O2 content of blood
■ is the total amount of O2 carried in blood, including bound and dissolved O2.
■ depends on the hemoglobin concentration, the O2-binding capacity of hemoglobin, the Po2, and the P50 of hemoglobin.
■ is given by the following equation:

O2 content = (Hemoglobin concentration × O2-binding capacity × % saturation) + Dissolved O2

where:
O2 content = amount of O2 in blood (mL O2/100 mL blood)
Hemoglobin concentration = hemoglobin concentration (g/100 mL)
O2-binding capacity = maximal amount of O2 bound to hemoglobin at 100% saturation (mL O2/g hemoglobin)
% saturation = % of heme groups bound to O2 (%)
Dissolved O2 = unbound O2 in blood (mL O2/100 mL blood)

B. Hemoglobin–O2 dissociation curve (Figure 4.7)

[Figure 4.7 Hemoglobin–O2 dissociation curve: hemoglobin saturation (%) as a function of Po2 (mm Hg), marking the P50 (25 mm Hg), mixed venous blood (40 mm Hg, 75% saturated), and arterial blood (100 mm Hg, ~100% saturated).]

1. Hemoglobin combines rapidly and reversibly with O2 to form oxyhemoglobin.
2. The hemoglobin–O2 dissociation curve is a plot of percent saturation of hemoglobin as a function of Po2.
a. At a Po2 of 100 mm Hg (e.g., arterial blood)
■ hemoglobin is 100% saturated; O2 is bound to all four heme groups on all hemoglobin molecules.
b. At a Po2 of 40 mm Hg (e.g., mixed venous blood)
■ hemoglobin is 75% saturated, which means that, on average, three of the four heme groups on each hemoglobin molecule have O2 bound.
c. At a Po2 of 25 mm Hg
■ hemoglobin is 50% saturated.
■ The Po2 at 50% saturation is the P50. Fifty percent saturation means that, on average, two of the four heme groups of each hemoglobin molecule have O2 bound.
3. The sigmoid shape of the curve is the result of a change in the affinity of hemoglobin as each successive O2 molecule binds to a heme site (called positive cooperativity).
■ Binding of the first O2 molecule increases the affinity for the second O2 molecule, and so forth.
■ The affinity for the fourth O2 molecule is the highest.
■ This change in affinity facilitates the loading of O2 in the lungs (flat portion of the curve) and the unloading of O2 at the tissues (steep portion of the curve).
a. In the lungs
■ Alveolar gas has a Po2 of 100 mm Hg.
■ Pulmonary capillary blood is "arterialized" by the diffusion of O2 from alveolar gas into blood, so that the Po2 of pulmonary capillary blood also becomes 100 mm Hg.
■ The very high affinity of hemoglobin for O2 at a Po2 of 100 mm Hg facilitates the diffusion process. By tightly binding O2, the free O2 concentration and O2 partial pressure are kept low, thus maintaining the partial pressure gradient (that drives the diffusion of O2).
■ The curve is almost flat when the Po2 is between 60 and 100 mm Hg. Thus, humans can tolerate changes in atmospheric pressure (and Po2) without compromising the O2-carrying capacity of hemoglobin.
b. In the peripheral tissues
■ O2 diffuses from arterial blood to the cells.
■ The gradient for O2 diffusion is maintained because the cells consume O2 for aerobic metabolism, keeping the tissue Po2 low.
■ The lower affinity of hemoglobin for O2 in this steep portion of the curve facilitates the unloading of O2 to the tissues.
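As a worked illustration of the O2-content equation above, the sketch below uses typical textbook values for hemoglobin concentration (15 g/100 mL) and O2-binding capacity (1.34 mL O2/g); these two numbers are assumptions, not taken from this passage.

```python
# Sketch of the O2-content equation: bound O2 plus dissolved O2.
# Hb concentration and binding capacity below are assumed typical values.

def o2_content(hb_g_per_dl, capacity_ml_per_g, saturation, po2_mmhg,
               solubility=0.003):
    """Total O2 content (mL O2/100 mL blood)."""
    bound = hb_g_per_dl * capacity_ml_per_g * saturation
    dissolved = po2_mmhg * solubility
    return bound + dissolved

# Arterial blood: Po2 ~100 mm Hg, ~100% saturated (per the curve above)
print(o2_content(15, 1.34, 1.00, 100))   # ~20.4 mL O2/100 mL blood
# Mixed venous blood: Po2 ~40 mm Hg, ~75% saturated
print(o2_content(15, 1.34, 0.75, 40))    # ~15.2 mL O2/100 mL blood
```

Note how little of the total is dissolved O2 (0.3 mL/100 mL at a Po2 of 100 mm Hg), which is why hemoglobin dominates O2 transport.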
C. Changes in the hemoglobin–O2 dissociation curve (Figure 4.8)

[Figure 4.8 Changes in the hemoglobin–O2 dissociation curve. Increased Pco2, decreased pH, increased temperature, and increased 2,3-diphosphoglycerate (DPG) shift the curve to the right; decreased Pco2, increased pH, decreased temperature, decreased 2,3-DPG, and fetal hemoglobin (hemoglobin F) shift it to the left.]

1. Shifts to the right
■ occur when the affinity of hemoglobin for O2 is decreased.
■ The P50 is increased, and unloading of O2 from arterial blood to the tissues is facilitated.
■ For any level of Po2, the percent saturation of hemoglobin, and thus the O2 content of blood, is decreased.
a. Increases in Pco2 or decreases in pH
■ shift the curve to the right, decreasing the affinity of hemoglobin for O2 and facilitating the unloading of O2 in the tissues (Bohr effect).
■ For example, during exercise, the tissues produce more CO2, which decreases tissue pH and, through the Bohr effect, stimulates O2 delivery to the exercising muscle.
b. Increases in temperature (e.g., during exercise)
■ shift the curve to the right.
■ The shift to the right decreases the affinity of hemoglobin for O2 and facilitates the delivery of O2 to the tissues during this period of high demand.
c. Increases in 2,3-DPG concentration
■ shift the curve to the right by binding to the β chains of deoxyhemoglobin and decreasing the affinity of hemoglobin for O2.
■ The adaptation to chronic hypoxemia (e.g., living at high altitude) includes increased synthesis of 2,3-DPG, which binds to hemoglobin and facilitates unloading of O2 in the tissues.
2. Shifts to the left
■ occur when the affinity of hemoglobin for O2 is increased.
■ The P50 is decreased, and unloading of O2 from arterial blood into the tissues is more difficult.
■ For any level of Po2, the percent saturation of hemoglobin, and thus the O2 content of blood, is increased.
a. Causes of a shift to the left
■ are the mirror image of those that cause a shift to the right.
■ include decreased Pco2, increased pH, decreased temperature, and decreased 2,3-DPG concentration.
b. HbF
■ does not bind 2,3-DPG as strongly as does adult hemoglobin. Decreased binding of 2,3-DPG results in increased affinity of HbF for O2, decreased P50, and a shift of the curve to the left.
c. Carbon monoxide (CO) poisoning (Figure 4.9)
■ CO competes for O2-binding sites on hemoglobin. The affinity of hemoglobin for CO is 200 times its affinity for O2.
■ CO occupies O2-binding sites on hemoglobin, thus decreasing the O2 content of blood.
■ In addition, binding of CO to hemoglobin increases the affinity of the remaining sites for O2, causing a shift of the curve to the left.

[Figure 4.9 Effect of carbon monoxide on the hemoglobin–O2 dissociation curve: the O2 content of blood is reduced and the curve is shifted to the left.]

D. Causes of hypoxemia and hypoxia (Tables 4.4 and 4.5)
1. Hypoxemia
■ is a decrease in arterial Po2.
■ is caused by decreased alveolar Po2 (e.g., high altitude, hypoventilation), diffusion defect, V/Q defects, and right-to-left shunts.
■ The A–a gradient can be used to compare causes of hypoxemia and is described by the following equation:

A–a gradient = PAO2 − PaO2

where:
A–a gradient = difference between alveolar Po2 and arterial Po2
PAO2 = alveolar Po2 (calculated from the alveolar gas equation)
PaO2 = arterial Po2 (measured in arterial blood)

■ Alveolar Po2 is calculated from the alveolar gas equation as follows:

PAO2 = PIO2 − PACO2 / R

where:
PAO2 = alveolar Po2
PIO2 = inspired Po2
PACO2 = alveolar Pco2 = arterial Pco2 (measured in arterial blood)
R = respiratory exchange ratio or respiratory quotient (CO2 production/O2 consumption)

■ The normal A–a gradient is between 0 and 10 mm Hg. Since O2 normally equilibrates between alveolar gas and arterial blood, PaO2 is approximately equal to PAO2.
■ The A–a gradient is increased (>10 mm Hg) if O2 does not equilibrate between alveolar gas and arterial blood (e.g., diffusion defect, V/Q defect, and right-to-left shunt), and PAO2 is then greater than PaO2.
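A minimal sketch of the alveolar gas equation and A–a gradient, assuming room air at sea level, PACO2 = 40 mm Hg, R = 0.8, and a hypothetical measured arterial Po2 of 95 mm Hg:

```python
# Sketch of the two equations above; example values are hypothetical.

def alveolar_po2(pio2, paco2, r=0.8):
    """Alveolar gas equation: PAO2 = PIO2 - PACO2 / R."""
    return pio2 - paco2 / r

def a_a_gradient(pao2_alveolar, pao2_arterial):
    """A-a gradient = alveolar Po2 - arterial Po2."""
    return pao2_alveolar - pao2_arterial

pio2 = (760 - 47) * 0.21              # humidified inspired Po2, ~150 mm Hg
pa_o2 = alveolar_po2(pio2, paco2=40)
print(round(pa_o2))                   # ~100 mm Hg
print(round(a_a_gradient(pa_o2, 95)))  # ~5 mm Hg, within the normal 0-10 range
```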
2. Hypoxia
■ is decreased O2 delivery to the tissues.
■ is caused by decreased blood flow, hypoxemia, decreased hemoglobin concentration, CO poisoning, and cyanide poisoning.
■ O2 delivery is described by the following equation:

O2 delivery = Cardiac output × O2 content of blood

■ O2 content of blood depends on hemoglobin concentration, O2-binding capacity of hemoglobin, and Po2 (which determines % saturation of hemoglobin by O2).

Table 4.4 Causes of Hypoxemia

Cause                             | PaO2      | A–a Gradient
High altitude (↓ Pb)              | Decreased | Normal
Hypoventilation (↓ PAO2)          | Decreased | Normal
Diffusion defect (e.g., fibrosis) | Decreased | Increased
V/Q defect                        | Decreased | Increased
Right-to-left shunt               | Decreased | Increased

A–a gradient = difference in Po2 between alveolar gas and arterial blood; Pb = barometric pressure; PAO2 = alveolar Po2; PaO2 = arterial Po2; V/Q = ventilation/perfusion ratio.

Table 4.5 Causes of Hypoxia

Cause                     | Mechanisms
↓ Cardiac output          | ↓ Blood flow
Hypoxemia                 | ↓ PaO2 causes ↓ % saturation of hemoglobin
Anemia                    | ↓ Hemoglobin concentration causes ↓ O2 content of blood
Carbon monoxide poisoning | ↓ O2 content of blood
Cyanide poisoning         | ↓ O2 utilization by tissues

PaO2 = arterial Po2.

E. Erythropoietin (EPO)
■ is a growth factor that is synthesized in the kidneys in response to hypoxia (Figure 4.10).
■ Decreased O2 delivery to the kidneys causes increased production of hypoxia-inducible factor 1α.
■ Hypoxia-inducible factor 1α directs synthesis of mRNA for EPO, which ultimately promotes development of mature red blood cells.

[Figure 4.10 Hypoxia induces synthesis of erythropoietin (EPO): hypoxia acts on the kidney to increase hypoxia-inducible factor 1α, which increases EPO mRNA and EPO synthesis, promoting production of erythrocytes. mRNA = messenger RNA.]

V. CO2 Transport
A. Forms of CO2 in blood
■ CO2 is produced in the tissues and carried to the lungs in the venous blood in three forms:
1. Dissolved CO2 (small amount), which is free in solution
2. Carbaminohemoglobin (small amount), which is CO2 bound to hemoglobin
3. HCO3− (from hydration of CO2 in the RBCs), which is the major form (90%)
B. Transport of CO2 as HCO3− (Figure 4.11)

[Figure 4.11 Transport of CO2 from the tissues to the lungs in venous blood: tissue CO2 diffuses into the plasma and then into the red blood cell, where carbonic anhydrase catalyzes CO2 + H2O → H2CO3 → H+ + HCO3−; H+ is buffered by hemoglobin (Hb–H), and HCO3− exchanges for Cl−.]

1. CO2 is generated in the tissues and diffuses freely into the venous plasma and then into the RBCs.
2. In the RBCs, CO2 combines with H2O to form H2CO3, a reaction that is catalyzed by carbonic anhydrase. H2CO3 dissociates into H+ and HCO3−.
3. HCO3− leaves the RBCs in exchange for Cl− (chloride shift) and is transported to the lungs in the plasma. HCO3− is the major form in which CO2 is transported to the lungs.
4. H+ is buffered inside the RBCs by deoxyhemoglobin. Because deoxyhemoglobin is a better buffer for H+ than is oxyhemoglobin, it is advantageous that hemoglobin has been deoxygenated by the time blood reaches the venous end of the capillaries (i.e., the site where CO2 is being added).
5. In the lungs, all of the above reactions occur in reverse. HCO3− enters the RBCs in exchange for Cl−. HCO3− recombines with H+ to form H2CO3, which decomposes into CO2 and H2O. Thus, CO2, originally generated in the tissues, is expired.
VI. Pulmonary Circulation
A. Pressures and cardiac output in the pulmonary circulation
1. Pressures
■ are much lower in the pulmonary circulation than in the systemic circulation.
■ For example, pulmonary arterial pressure is 15 mm Hg (compared with aortic pressure of 100 mm Hg).
2. Resistance
■ is also much lower in the pulmonary circulation than in the systemic circulation.
3. Cardiac output of the right ventricle
■ is pulmonary blood flow.
■ is equal to cardiac output of the left ventricle.
■ Although pressures in the pulmonary circulation are low, they are sufficient to pump the cardiac output because resistance of the pulmonary circulation is proportionately low.
B. Distribution of pulmonary blood flow
■ When a person is supine, blood flow is nearly uniform throughout the lung.
■ When a person is standing, blood flow is unevenly distributed because of the effect of gravity. Blood flow is lowest at the apex of the lung (zone 1) and highest at the base of the lung (zone 3).
1. Zone 1—blood flow is lowest.
■ Alveolar pressure > arterial pressure > venous pressure.
■ The high alveolar pressure may compress the capillaries and reduce blood flow in zone 1. This situation can occur if arterial blood pressure is decreased as a result of hemorrhage or if alveolar pressure is increased because of positive pressure ventilation.
2. Zone 2—blood flow is medium.
■ Arterial pressure > alveolar pressure > venous pressure.
■ Moving down the lung, arterial pressure progressively increases because of gravitational effects on arterial pressure.
■ Arterial pressure is greater than alveolar pressure in zone 2, and blood flow is driven by the difference between arterial pressure and alveolar pressure.
3. Zone 3—blood flow is highest.
■ Arterial pressure > venous pressure > alveolar pressure.
■ Moving down toward the base of the lung, arterial pressure is highest because of gravitational effects, and venous pressure finally increases to the point where it exceeds alveolar pressure.
■ In zone 3, blood flow is driven by the difference between arterial and venous pressures, as in most vascular beds.
C. Regulation of pulmonary blood flow—hypoxic vasoconstriction
■ In the lungs, hypoxia causes vasoconstriction.
■ This response is the opposite of that in other organs, where hypoxia causes vasodilation.
■ Physiologically, this effect is important because local vasoconstriction redirects blood away from poorly ventilated, hypoxic regions of the lung and toward well-ventilated regions.
■ Fetal pulmonary vascular resistance is very high because of generalized hypoxic vasoconstriction; as a result, blood flow through the fetal lungs is low. With the first breath, the alveoli of the neonate are oxygenated, pulmonary vascular resistance decreases, and pulmonary blood flow increases and becomes equal to cardiac output (as occurs in the adult).
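The three zones are defined entirely by the ordering of alveolar, arterial, and venous pressures, so the classification can be written as a simple decision rule. The sketch below is illustrative only; the pressure values are hypothetical.

```python
# Sketch: classify a lung region by the PA/Pa/Pv ordering given above.

def lung_zone(pa_alveolar, pa_arterial, pv_venous):
    """Return the zone implied by the pressure ordering (cm H2O, relative)."""
    if pa_alveolar > pa_arterial > pv_venous:
        return "zone 1: lowest flow (alveolar pressure may compress capillaries)"
    if pa_arterial > pa_alveolar > pv_venous:
        return "zone 2: flow driven by the arterial-alveolar pressure difference"
    if pa_arterial > pv_venous > pa_alveolar:
        return "zone 3: highest flow, driven by the arterial-venous difference"
    return "ordering does not match any of the three zones"

print(lung_zone(pa_alveolar=0, pa_arterial=-2, pv_venous=-5))  # apex-like: zone 1
print(lung_zone(pa_alveolar=0, pa_arterial=8,  pv_venous=-3))  # mid-lung: zone 2
print(lung_zone(pa_alveolar=0, pa_arterial=20, pv_venous=5))   # base-like: zone 3
```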
D. Shunts
1. Right-to-left shunts
■ normally occur to a small extent because 2% of the cardiac output bypasses the lungs. May be as great as 50% of cardiac output in certain congenital abnormalities.
■ are seen in tetralogy of Fallot.
■ always result in a decrease in arterial Po2 because of the admixture of venous blood with arterial blood.
■ The magnitude of a right-to-left shunt can be estimated by having the patient breathe 100% O2 and measuring the degree of dilution of oxygenated arterial blood by nonoxygenated shunted (venous) blood.
2. Left-to-right shunts
■ are more common than are right-to-left shunts because pressures are higher on the left side of the heart.
■ are usually caused by congenital abnormalities (e.g., patent ductus arteriosus) or traumatic injury.
■ do not result in a decrease in arterial Po2. Instead, Po2 will be elevated on the right side of the heart because there has been admixture of arterial blood with venous blood.
VII. V/Q Defects
A. V/Q ratio
■ is the ratio of alveolar ventilation (V) to pulmonary blood flow (Q). Ventilation and perfusion (blood flow) matching is important to achieve the ideal exchange of O2 and CO2.
■ If the breathing rate, tidal volume, and cardiac output are normal, the V/Q ratio is approximately 0.8. This V/Q ratio results in an arterial Po2 of 100 mm Hg and an arterial Pco2 of 40 mm Hg.
B. V/Q ratios in different parts of the lung (Figure 4.12 and Table 4.6)

[Figure 4.12 Regional variations in the lung, from apex (zone 1) to base (zone 3), of perfusion (blood flow, Q), ventilation (V), V/Q, Po2, and Pco2.]

■ Both ventilation and blood flow (perfusion) are nonuniformly distributed in the normal upright lung.
1. Blood flow, or perfusion, is lowest at the apex and highest at the base because of gravitational effects on arterial pressure.
2. Ventilation is lower at the apex and higher at the base because of gravitational effects in the upright lung. Importantly, however, the regional differences for ventilation are not as great as for perfusion.
3. Therefore, the V/Q ratio is higher at the apex of the lung and lower at the base of the lung.
4. As a result of the regional differences in V/Q ratio, there are corresponding differences in the efficiency of gas exchange and in the resulting pulmonary capillary Po2 and Pco2. Regional differences for Po2 are greater than those for Pco2.
a. At the apex (higher V/Q), Po2 is highest and Pco2 is lowest because gas exchange is more efficient.
b. At the base (lower V/Q), Po2 is lowest and Pco2 is highest because gas exchange is less efficient.

Table 4.6 V/Q Characteristics of Different Areas of the Lung

Area of Lung | Blood Flow | Ventilation | V/Q    | Regional Arterial Po2 | Regional Arterial Pco2
Apex         | Lowest     | Lower       | Higher | Highest               | Lower
Base         | Highest    | Higher      | Lower  | Lowest                | Higher

V/Q = ventilation/perfusion ratio.

C. Changes in V/Q ratio (Figure 4.13)

[Figure 4.13 Effect of V/Q defects on gas exchange. Normal (V/Q = 0.8): PAO2 = 100 mm Hg, PACO2 = 40 mm Hg, PaO2 = 100 mm Hg, PaCO2 = 40 mm Hg. Airway obstruction (shunt, V/Q = 0): systemic arterial blood approaches the composition of mixed venous blood (PaO2 = 40 mm Hg, PaCO2 = 46 mm Hg). Pulmonary embolus (dead space, V/Q = ∞): alveolar gas approaches the composition of inspired air (PAO2 = 150 mm Hg, PACO2 = 0 mm Hg). PAO2 = alveolar Po2; PACO2 = alveolar Pco2; PaO2 = arterial Po2; PaCO2 = arterial Pco2.]

1. V/Q ratio in airway obstruction
■ If the airways are completely blocked (e.g., by a piece of steak caught in the trachea), then ventilation is zero. If blood flow is normal, then V/Q is zero, which is called a shunt.
■ There is no gas exchange in a lung that is perfused but not ventilated. The Po2 and Pco2 of pulmonary capillary blood (and, therefore, of systemic arterial blood) will approach their values in mixed venous blood.
■ There is an increased A–a gradient.
2. V/Q ratio in pulmonary embolism
■ If blood flow to a lung is completely blocked (e.g., by an embolism occluding a pulmonary artery), then blood flow to that lung is zero. If ventilation is normal, then V/Q is infinite, which is called dead space.
■ There is no gas exchange in a lung that is ventilated but not perfused. The Po2 and Pco2 of alveolar gas will approach their values in inspired air.
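The two limiting cases amount to a small lookup: with no ventilation, end-capillary (and hence arterial) blood approaches mixed venous values; with no perfusion, alveolar gas approaches inspired air. A minimal sketch, using only the gas values quoted in the text:

```python
# Sketch of the limiting V/Q cases described above (values in mm Hg).

def vq_limit_case(ventilation, perfusion):
    """Return the label and the gas values that the affected unit approaches."""
    if perfusion > 0 and ventilation == 0:
        # shunt (V/Q = 0): arterial blood approaches mixed venous composition
        return "shunt", {"Po2": 40, "Pco2": 46}
    if ventilation > 0 and perfusion == 0:
        # dead space (V/Q = infinity): alveolar gas approaches inspired air
        return "dead space", {"Po2": 150, "Pco2": 0}
    return "normal (V/Q ~0.8)", {"Po2": 100, "Pco2": 40}

print(vq_limit_case(ventilation=0, perfusion=5))  # airway obstruction
print(vq_limit_case(ventilation=4, perfusion=0))  # pulmonary embolism
print(vq_limit_case(ventilation=4, perfusion=5))  # normal matching
```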
VIII. Control of Breathing
■ Sensory information (Pco2, lung stretch, irritants, muscle spindles, tendons, and joints) is coordinated in the brain stem.
■ The output of the brain stem controls the respiratory muscles and the breathing cycle.
A. Central control of breathing (brain stem and cerebral cortex)
1. Medullary respiratory center
■ is located in the reticular formation.
a. Dorsal respiratory group
■ is primarily responsible for inspiration and generates the basic rhythm for breathing.
■ Input to the dorsal respiratory group comes from the vagus and glossopharyngeal nerves. The vagus nerve relays information from peripheral chemoreceptors and mechanoreceptors in the lung. The glossopharyngeal nerve relays information from peripheral chemoreceptors.
■ Output from the dorsal respiratory group travels, via the phrenic nerve, to the diaphragm.
b. Ventral respiratory group
■ is primarily responsible for expiration.
■ is not active during normal, quiet breathing, when expiration is passive.
■ is activated, for example, during exercise, when expiration becomes an active process.
2. Apneustic center
■ is located in the lower pons.
■ stimulates inspiration, producing a deep and prolonged inspiratory gasp (apneusis).
3. Pneumotaxic center
■ is located in the upper pons.
■ inhibits inspiration and, therefore, regulates inspiratory volume and respiratory rate.
4. Cerebral cortex
■ Breathing can be under voluntary control; therefore, a person can voluntarily hyperventilate or hypoventilate.
■ Hypoventilation (breath-holding) is limited by the resulting increase in Pco2 and decrease in Po2. A previous period of hyperventilation extends the period of breath-holding.
B. Chemoreceptors for CO2, H+, and O2 (Table 4.7)
1. Central chemoreceptors in the medulla
■ are sensitive to the pH of the cerebrospinal fluid (CSF). Decreases in the pH of the CSF produce increases in breathing rate (hyperventilation).
■ H+ does not cross the blood–brain barrier as well as CO2 does.
a. CO2 diffuses from arterial blood into the CSF because CO2 is lipid-soluble and readily crosses the blood–brain barrier.
b. In the CSF, CO2 combines with H2O to produce H+ and HCO3−. The resulting H+ acts directly on the central chemoreceptors.
c. Thus, increases in Pco2 and [H+] stimulate breathing, and decreases in Pco2 and [H+] inhibit breathing.
d. The resulting hyperventilation or hypoventilation then returns the arterial Pco2 toward normal.
2. Peripheral chemoreceptors in the carotid and aortic bodies
■ The carotid bodies are located at the bifurcation of the common carotid arteries.
■ The aortic bodies are located above and below the aortic arch.
a. Decreases in arterial Po2
■ stimulate the peripheral chemoreceptors and increase breathing rate.
■ Po2 must decrease to low levels (<60 mm Hg) before breathing is stimulated. When Po2 is less than 60 mm Hg, breathing rate is exquisitely sensitive to Po2.
b. Increases in arterial Pco2
■ stimulate peripheral chemoreceptors and increase breathing rate.
■ potentiate the stimulation of breathing caused by hypoxemia.
■ The response of the peripheral chemoreceptors to CO2 is less important than is the response of the central chemoreceptors to CO2 (or H+).
c. Increases in arterial [H+]
■ stimulate the carotid body peripheral chemoreceptors directly, independent of changes in Pco2.
■ In metabolic acidosis, breathing rate is increased (hyperventilation) because arterial [H+] is increased and pH is decreased.

Table 4.7 Comparison of Central and Peripheral Chemoreceptors

Type of Chemoreceptor | Location                  | Stimuli that Increase Breathing Rate
Central               | Medulla                   | ↓ pH; ↑ Pco2
Peripheral            | Carotid and aortic bodies | ↓ Po2 (if <60 mm Hg); ↑ Pco2; ↓ pH

C. Other types of receptors for control of breathing
1. Lung stretch receptors
■ are located in the smooth muscle of the airways.
■ When these receptors are stimulated by distention of the lungs, they produce a reflex decrease in breathing frequency (Hering–Breuer reflex).
2. Irritant receptors
■ are located between the airway epithelial cells.
■ are stimulated by noxious substances (e.g., dust and pollen).
3. J (juxtacapillary) receptors
■ are located in the alveolar walls, close to the capillaries.
■ Engorgement of the pulmonary capillaries, such as may occur with left heart failure, stimulates the J receptors, which then cause rapid, shallow breathing.
4. Joint and muscle receptors
■ are activated during movement of the limbs.
■ are involved in the early stimulation of breathing during exercise.
IX. Integrated Responses of the Respiratory System
A. Exercise (Table 4.8)
1. During exercise, there is an increase in ventilatory rate that matches the increase in O2 consumption and CO2 production by the body. The stimulus for the increased ventilation rate is not completely understood. However, joint and muscle receptors are activated and cause an increase in breathing rate at the beginning of exercise.

Table 4.8 Summary of Respiratory Responses to Exercise

Parameter                              | Response
O2 consumption                         | ↑
CO2 production                         | ↑
Ventilation rate                       | ↑ (matches O2 consumption/CO2 production)
Arterial Po2 and Pco2                  | No change
Arterial pH                            | No change in moderate exercise; ↓ in strenuous exercise (lactic acidosis)
Venous Pco2                            | ↑
Pulmonary blood flow (cardiac output)  | ↑
V/Q ratios                             | More evenly distributed in lung

V/Q = ventilation/perfusion ratio.

2. The mean values for arterial Po2 and Pco2 do not change during exercise.
■ Arterial pH does not change during moderate exercise, although it may decrease during strenuous exercise because of lactic acidosis.
3. On the other hand, venous Pco2 increases during exercise because the excess CO2 produced by the exercising muscle is carried to the lungs in venous blood.
4. Pulmonary blood flow increases because cardiac output increases during exercise. As a result, more pulmonary capillaries are perfused, and more gas exchange occurs. The distribution of V/Q ratios throughout the lung is more even during exercise than when at rest, and there is a resulting decrease in the physiologic dead space.
B. Adaptation to high altitude (Table 4.9)
1. Alveolar Po2 is decreased at high altitude because the barometric pressure is decreased. As a result, arterial Po2 is also decreased (hypoxemia).
2. Hypoxemia stimulates the peripheral chemoreceptors and increases the ventilation rate (hyperventilation). This hyperventilation produces respiratory alkalosis, which can be treated by administering acetazolamide.
3. Hypoxemia also stimulates renal production of EPO, which increases the production of RBCs. As a result, there is increased hemoglobin concentration and increased O2 content of blood.
4. 2,3-DPG concentrations are increased, shifting the hemoglobin–O2 dissociation curve to the right. There is a resulting decrease in affinity of hemoglobin for O2 that facilitates unloading of O2 in the tissues.
5. Pulmonary vasoconstriction is a result of hypoxic vasoconstriction. Consequently, there is an increase in pulmonary arterial pressure, increased work of the right side of the heart against the higher resistance, and hypertrophy of the right ventricle.

Table 4.9 Summary of Adaptation to High Altitude

Parameter                     | Response
Alveolar Po2                  | ↓ (resulting from ↓ barometric pressure)
Arterial Po2                  | ↓ (hypoxemia)
Ventilation rate              | ↑ (hyperventilation due to hypoxemia)
Arterial pH                   | ↑ (respiratory alkalosis)
Hemoglobin concentration      | ↑ (↑ EPO)
2,3-DPG concentration         | ↑
Hemoglobin–O2 curve           | Shift to right; ↓ affinity; ↑ P50
Pulmonary vascular resistance | ↑ (hypoxic vasoconstriction)

DPG = diphosphoglycerate; EPO = erythropoietin.

Review Test

1. Which of the following lung volumes or capacities can be measured by spirometry?
(A) Functional residual capacity (FRC)
(B) Physiologic dead space
(C) Residual volume (RV)
(D) Total lung capacity (TLC)
(E) Vital capacity (VC)

2. An infant born prematurely in gestational week 25 has neonatal respiratory distress syndrome. Which of the following would be expected in this infant?
(A) Arterial Po2 of 100 mm Hg
(B) Collapse of the small alveoli
(C) Increased lung compliance
(D) Normal breathing rate
(E) Lecithin:sphingomyelin ratio of greater than 2:1 in amniotic fluid

3. In which vascular bed does hypoxia cause vasoconstriction?
(A) Coronary
(B) Pulmonary
(C) Cerebral
(D) Muscle
(E) Skin

Questions 4 and 5
A 12-year-old boy has a severe asthmatic attack with wheezing. He experiences rapid breathing and becomes cyanotic. His arterial Po2 is 60 mm Hg and his Pco2 is 30 mm Hg.

4. Which of the following statements about this patient is most likely to be true?
(A) Forced expiratory volume/forced vital capacity (FEV1/FVC) is increased
(B) Ventilation/perfusion (V/Q) ratio is increased in the affected areas of his lungs
(C) His arterial Pco2 is higher than normal because of inadequate gas exchange
(D) His arterial Pco2 is lower than normal because hypoxemia is causing him to hyperventilate
(E) His residual volume (RV) is decreased

5. To treat this patient, the physician should administer
(A) an α1-adrenergic antagonist
(B) a β1-adrenergic antagonist
(C) a β2-adrenergic agonist
(D) a muscarinic agonist
(E) a nicotinic agonist

6. Which of the following is true during inspiration?
(A) Intrapleural pressure is positive
(B) The volume in the lungs is less than the functional residual capacity (FRC)
(C) Alveolar pressure equals atmospheric pressure
(D) Alveolar pressure is higher than atmospheric pressure
(E) Intrapleural pressure is more negative than it is during expiration

7. Which volume remains in the lungs after a tidal volume (Vt) is expired?
(A) Tidal volume (Vt)
(B) Vital capacity (VC)
(C) Expiratory reserve volume (ERV)
(D) Residual volume (RV)
(E) Functional residual capacity (FRC)
(F) Inspiratory capacity
(G) Total lung capacity
8. A 35-year-old man has a vital capacity (VC) of 5 L, a tidal volume (Vt) of 0.5 L, an inspiratory capacity of 3.5 L, and a functional residual capacity (FRC) of 2.5 L. What is his expiratory reserve volume (ERV)?
(A) 4.5 L
(B) 3.9 L
(C) 3.6 L
(D) 3.0 L
(E) 2.5 L
(F) 2.0 L
(G) 1.5 L

9. When a person is standing, blood flow in the lungs is
(A) equal at the apex and the base
(B) highest at the apex owing to the effects of gravity on arterial pressure
(C) highest at the base because that is where the difference between arterial and venous pressure is greatest
(D) lowest at the base because that is where alveolar pressure is greater than arterial pressure

10. Which of the following is illustrated in the graph showing volume versus pressure in the lung–chest wall system?

[Graph for question 10: volume as a function of airway pressure for the chest wall only, the lung only, and the combined lung and chest wall.]

(A) The slope of each of the curves is resistance
(B) The compliance of the lungs alone is less than the compliance of the lungs plus chest wall
(C) The compliance of the chest wall alone is less than the compliance of the lungs plus chest wall
(D) When airway pressure is zero (atmospheric), the volume of the combined system is the functional residual capacity (FRC)
(E) When airway pressure is zero (atmospheric), intrapleural pressure is zero

11. Which of the following is the site of highest airway resistance?
(A) Trachea
(B) Largest bronchi
(C) Medium-sized bronchi
(D) Smallest bronchi
(E) Alveoli

12. A 49-year-old man has a pulmonary embolism that completely blocks blood flow to his left lung. As a result, which of the following will occur?
(A) Ventilation/perfusion (V/Q) ratio in the left lung will be zero
(B) Systemic arterial Po2 will be elevated
(C) V/Q ratio in the left lung will be lower than in the right lung
(D) Alveolar Po2 in the left lung will be approximately equal to the Po2 in inspired air
(E) Alveolar Po2 in the right lung will be approximately equal to the Po2 in venous blood

Questions 13 and 14

[Graph for questions 13 and 14: two hemoglobin–O2 dissociation curves, A and B, plotting hemoglobin saturation (%) against Po2 (mm Hg), with curve B shifted to the right of curve A.]

13. In the hemoglobin–O2 dissociation curves shown above, the shift from curve A to curve B could be caused by
(A) increased pH
(B) decreased 2,3-diphosphoglycerate (DPG) concentration
(C) strenuous exercise
(D) fetal hemoglobin (HbF)
(E) carbon monoxide (CO) poisoning

14. The shift from curve A to curve B is associated with
(A) increased P50
(B) increased affinity of hemoglobin for O2
(C) impaired ability to unload O2 in the tissues
(D) increased O2-carrying capacity of hemoglobin
(E) decreased O2-carrying capacity of hemoglobin

15. Which volume remains in the lungs after a maximal expiration?
(A) Tidal volume (Vt)
(B) Vital capacity (VC)
(C) Expiratory reserve volume (ERV)
(D) Residual volume (RV)
(E) Functional residual capacity (FRC)
(F) Inspiratory capacity
(G) Total lung capacity

16. Compared with the systemic circulation, the pulmonary circulation has a
(A) higher blood flow
(B) lower resistance
(C) higher arterial pressure
(D) higher capillary pressure
(E) higher cardiac output

17. A healthy 65-year-old man with a tidal volume (Vt) of 0.45 L has a breathing frequency of 16 breaths/min. His arterial Pco2 is 41 mm Hg, and the Pco2 of his expired air is 35 mm Hg. What is his alveolar ventilation?
(A) 0.066 L/min
(B) 0.38 L/min
(C) 5.0 L/min
(D) 6.14 L/min
(E) 8.25 L/min
18. Compared with the apex of the lung, the base of the lung has
(A) a higher pulmonary capillary Po2
(B) a higher pulmonary capillary Pco2
(C) a higher ventilation/perfusion (V/Q) ratio
(D) the same V/Q ratio

19. Hypoxemia produces hyperventilation by a direct effect on the
(A) phrenic nerve
(B) J receptors
(C) lung stretch receptors
(D) medullary chemoreceptors
(E) carotid and aortic body chemoreceptors

20. Which of the following changes occurs during strenuous exercise?
(A) Ventilation rate and O2 consumption increase to the same extent
(B) Systemic arterial Po2 decreases to about 70 mm Hg
(C) Systemic arterial Pco2 increases to about 60 mm Hg
(D) Systemic venous Pco2 decreases to about 20 mm Hg
(E) Pulmonary blood flow decreases at the expense of systemic blood flow

21. If an area of the lung is not ventilated because of bronchial obstruction, the pulmonary capillary blood serving that area will have a Po2 that is
(A) equal to atmospheric Po2
(B) equal to mixed venous Po2
(C) equal to normal systemic arterial Po2
(D) higher than inspired Po2
(E) lower than mixed venous Po2

22. In the transport of CO2 from the tissues to the lungs, which of the following occurs in venous blood?
(A) Conversion of CO2 and H2O to H+ and HCO3− in the red blood cells (RBCs)
(B) Buffering of H+ by oxyhemoglobin
(C) Shifting of HCO3− into the RBCs from plasma in exchange for Cl−
(D) Binding of HCO3− to hemoglobin
(E) Alkalinization of the RBCs

23. Which of the following causes of hypoxia is characterized by a decreased arterial Po2 and an increased A–a gradient?
(A) Hypoventilation
(B) Right-to-left cardiac shunt
(C) Anemia
(D) Carbon monoxide poisoning
(E) Ascent to high altitude

24. A 42-year-old woman with severe pulmonary fibrosis is evaluated by her physician and has the following arterial blood gases: pH = 7.48, PaO2 = 55 mm Hg, and PaCO2 = 32 mm Hg. Which statement best explains the observed value of PaCO2?
(A) The increased pH stimulates breathing via peripheral chemoreceptors
(B) The increased pH stimulates breathing via central chemoreceptors
(C) The decreased PaO2 inhibits breathing via peripheral chemoreceptors
(D) The decreased PaO2 stimulates breathing via peripheral chemoreceptors
(E) The decreased PaO2 stimulates breathing via central chemoreceptors

25. A 38-year-old woman moves with her family from New York City (sea level) to Leadville, Colorado (10,200 feet above sea level). Which of the following will occur as a result of residing at high altitude?
(A) Hypoventilation
(B) Arterial Po2 greater than 100 mm Hg
(C) Decreased 2,3-diphosphoglycerate (DPG) concentration
(D) Shift to the right of the hemoglobin–O2 dissociation curve
(E) Pulmonary vasodilation
(F) Hypertrophy of the left ventricle
(G) Respiratory acidosis

26. The pH of venous blood is only slightly more acidic than the pH of arterial blood because
(A) CO2 is a weak base
(B) there is no carbonic anhydrase in venous blood
(C) the H+ generated from CO2 and H2O is buffered by HCO3− in venous blood
(D) the H+ generated from CO2 and H2O is buffered by deoxyhemoglobin in venous blood
(E) oxyhemoglobin is a better buffer for H+ than is deoxyhemoglobin

27. In a maximal expiration, the total volume expired is
(A) tidal volume (Vt)
(B) vital capacity (VC)
(C) expiratory reserve volume (ERV)
(D) residual volume (RV)
(E) functional residual capacity (FRC)
(F) inspiratory capacity
(G) total lung capacity

28. A person with a ventilation/perfusion (V/Q) defect has hypoxemia and is treated with supplemental O2.
The supplemental O2 will be most helpful if the person's predominant V/Q defect is
(A) dead space
(B) shunt
(C) high V/Q
(D) low V/Q
(E) V/Q = 0
(F) V/Q = ∞

29. Which person would be expected to have the largest A–a gradient?
(A) Person with pulmonary fibrosis
(B) Person who is hypoventilating due to morphine overdose
(C) Person at 12,000 feet above sea level
(D) Person with normal lungs breathing 50% O2
(E) Person with normal lungs breathing 100% O2

30. Which of the following sets of data would have the highest rate of O2 transfer between alveolar gas and pulmonary capillary blood?

    | PiO2 (mm Hg) | PvO2 (mm Hg) | Surface Area (relative) | Thickness (relative)
(A) | 150          | 40           | 1                       | 1
(B) | 150          | 40           | 2                       | 2
(C) | 300          | 40           | 1                       | 2
(D) | 150          | 80           | 1                       | 1
(E) | 190          | 80           | 2                       | 2

Answers and Explanations

1. The answer is E [I A 4, 5, B 2, 3, 5]. Residual volume (RV) cannot be measured by spirometry. Therefore, any lung volume or capacity that includes the RV cannot be measured by spirometry. Measurements that include RV are functional residual capacity (FRC) and total lung capacity (TLC). Vital capacity (VC) does not include RV and is, therefore, measurable by spirometry. Physiologic dead space is not measurable by spirometry and requires sampling of arterial Pco2 and expired CO2.

2. The answer is B [II D 2]. Neonatal respiratory distress syndrome is caused by lack of adequate surfactant in the immature lung. Surfactant appears between the 24th and the 35th gestational week. In the absence of surfactant, the surface tension of the small alveoli is too high. When the pressure on the small alveoli is too high (P = 2T/r), the small alveoli collapse into larger alveoli. There is decreased gas exchange with the larger, collapsed alveoli, and ventilation/perfusion (V/Q) mismatch, hypoxemia, and cyanosis occur. The lack of surfactant also decreases lung compliance, making it harder to inflate the lungs, increasing the work of breathing, and producing dyspnea (shortness of breath). Generally, lecithin:sphingomyelin ratios greater than 2:1 signify mature levels of surfactant.

3. The answer is B [VI C]. Pulmonary blood flow is controlled locally by the Po2 of alveolar air. Hypoxia causes pulmonary vasoconstriction and thereby shunts blood away from unventilated areas of the lung, where it would be wasted. In the coronary circulation, hypoxemia causes vasodilation. The cerebral, muscle, and skin circulations are not controlled directly by Po2.

4. The answer is D [VIII B 2 a]. The patient's arterial Pco2 is lower than the normal value of 40 mm Hg because hypoxemia has stimulated peripheral chemoreceptors to increase his breathing rate; hyperventilation causes the patient to blow off extra CO2 and results in respiratory alkalosis. In an obstructive disease, such as asthma, both forced expiratory volume (FEV1) and forced vital capacity (FVC) are decreased, with the larger decrease occurring in FEV1. Therefore, the FEV1/FVC ratio is decreased. Poor ventilation of the affected areas decreases the ventilation/perfusion (V/Q) ratio and causes hypoxemia. The patient's residual volume (RV) is increased because he is breathing at a higher lung volume to offset the increased resistance of his airways.

5. The answer is C [II E 3 a (2)]. A cause of airway obstruction in asthma is bronchiolar constriction. β2-Adrenergic stimulation (β2-adrenergic agonists) produces relaxation of the bronchioles.

6. The answer is E [II F 2]. During inspiration, intrapleural pressure becomes more negative than it is at rest or during expiration (when it returns to its less negative resting value).
During inspiration, air flows into the lungs when alveolar pressure becomes lower (due to contraction of the diaphragm) than atmospheric pressure; if alveolar pressure were not lower than atmospheric pressure, air would not flow inward. The volume in the lungs during inspiration is the functional residual capacity (FRC) plus one tidal volume (Vt).

7. The answer is E [I B 2]. During normal breathing, the volume inspired and then expired is a tidal volume (Vt). The volume remaining in the lungs after expiration of a Vt is the functional residual capacity (FRC).

8. The answer is G [I A 3; Figure 4.1]. Expiratory reserve volume (ERV) equals vital capacity (VC) minus inspiratory capacity [inspiratory capacity includes tidal volume (Vt) and inspiratory reserve volume (IRV)].

9. The answer is C [VI B]. The distribution of blood flow in the lungs is affected by gravitational effects on arterial hydrostatic pressure. Thus, blood flow is highest at the base, where arterial hydrostatic pressure is greatest and the difference between arterial and venous pressure is also greatest. This pressure difference drives the blood flow.

10. The answer is D [II C 2; Figure 4.3]. By convention, when airway pressure is equal to atmospheric pressure, it is designated as zero pressure. Under these equilibrium conditions, there is no airflow because there is no pressure gradient between the atmosphere and the alveoli, and the volume in the lungs is the functional residual capacity (FRC). The slope of each curve is compliance, not resistance; the steeper the slope is, the greater the volume change is for a given pressure change, or the greater compliance is. The compliance of the lungs alone or the chest wall alone is greater than that of the combined lung–chest wall system (the slopes of the individual curves are steeper than the slope of the combined curve, which means higher compliance). When airway pressure is zero (equilibrium conditions), intrapleural pressure is negative because of the opposing tendencies of the chest wall to spring out and the lungs to collapse.

11. The answer is C [II E 4]. The medium-sized bronchi actually constitute the site of highest resistance along the bronchial tree. Although the small radii of the alveoli might predict that they would have the highest resistance, they do not because of their parallel arrangement. In fact, early changes in resistance in the small airways may be "silent" and go undetected because of their small overall contribution to resistance.

12. The answer is D [VII B 2]. Alveolar Po2 in the left lung will equal the Po2 in inspired air. Because there is no blood flow to the left lung, there can be no gas exchange between the alveolar air and the pulmonary capillary blood. Consequently, O2 is not added to the capillary blood. The ventilation/perfusion (V/Q) ratio in the left lung will be infinite (not zero or lower than that in the normal right lung) because Q (the denominator) is zero. Systemic arterial Po2 will, of course, be decreased because the left lung has no gas exchange. Alveolar Po2 in the right lung is unaffected.

13. The answer is C [IV C 1; Figure 4.8]. Strenuous exercise increases the temperature and decreases the pH of skeletal muscle; both effects would cause the hemoglobin–O2 dissociation curve to shift to the right, making it easier to unload O2 in the tissues to meet the high demand of the exercising muscle.
2,3-Diphosphoglycerate (DPG) binds to the β chains of adult hemoglobin and reduces its affinity for O2, shifting the curve to the right. In fetal hemoglobin, the β chains are replaced by γ chains, which do not bind 2,3-DPG, so the curve is shifted to the left. Because carbon monoxide (CO) increases the affinity of the remaining binding sites for O2, the curve is shifted to the left.

14. The answer is A [IV C 1; Figure 4.8]. A shift to the right of the hemoglobin–O2 dissociation curve represents decreased affinity of hemoglobin for O2. At any given Po2, the percent saturation is decreased, the P50 is increased (read the Po2 from the graph at 50% hemoglobin saturation), and unloading of O2 in the tissues is facilitated. The O2-carrying capacity of hemoglobin is the mL of O2 that can be bound to a gram of hemoglobin at 100% saturation and is unaffected by the shift from curve A to curve B.

15. The answer is D [I A 3]. During a forced maximal expiration, the volume expired is a tidal volume (Vt) plus the expiratory reserve volume (ERV). The volume remaining in the lungs is the residual volume (RV).

16. The answer is B [VI A]. Blood flow (or cardiac output) in the systemic and pulmonary circulations is nearly equal; pulmonary flow is slightly less than systemic flow because about 2% of the systemic cardiac output bypasses the lungs. The pulmonary circulation is characterized by both lower pressure and lower resistance than the systemic circulation, so flows through the two circulations are approximately equal (flow = pressure/resistance).

17. The answer is D [I A 5 b, 6 b]. Alveolar ventilation is the difference between tidal volume (Vt) and dead space multiplied by breathing frequency. Vt and breathing frequency are given, but dead space must be calculated. Dead space is Vt multiplied by the difference between arterial Pco2 and expired Pco2, divided by arterial Pco2. Thus: dead space = 0.45 × [(41 − 35)/41] = 0.066 L. Alveolar ventilation is then calculated as: (0.45 L − 0.066 L) × 16 breaths/min = 6.14 L/min.

18. The answer is B [VII C; Figure 4.10; Table 4.5]. Ventilation and perfusion of the lung are not distributed uniformly. Both are lowest at the apex and highest at the base. However, the differences for ventilation are not as great as for perfusion, making the ventilation/perfusion (V/Q) ratios higher at the apex and lower at the base. As a result, gas exchange is more efficient at the apex and less efficient at the base. Therefore, blood leaving the apex will have a higher Po2 and a lower Pco2.

19. The answer is E [VIII B 2]. Hypoxemia stimulates breathing by a direct effect on the peripheral chemoreceptors in the carotid and aortic bodies. Central (medullary) chemoreceptors are stimulated by CO2 (or H+). The J receptors and lung stretch receptors are not chemoreceptors. The phrenic nerve innervates the diaphragm, and its activity is determined by the output of the brain stem breathing center.

20. The answer is A [IX A]. During exercise, the ventilation rate increases to match the increased O2 consumption and CO2 production. This matching is accomplished without a change in mean arterial Po2 or Pco2. Venous Pco2 increases because extra CO2 is being produced by the exercising muscle. Because this CO2 will be blown off by the hyperventilating lungs, it does not increase the arterial Pco2. Pulmonary blood flow (cardiac output) increases manifold during strenuous exercise.
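The arithmetic in answer 17 (Bohr dead space, then alveolar ventilation) can be verified with a short sketch; the formulas are exactly the ones stated in the answer.

```python
# Sketch checking answer 17: VD = VT x (PaCO2 - PECO2) / PaCO2,
# then VA = (VT - VD) x breathing frequency.

def dead_space(vt, paco2, peco2):
    """Physiologic dead space (L) by the Bohr method."""
    return vt * (paco2 - peco2) / paco2

def alveolar_ventilation(vt, vd, frequency):
    """Alveolar ventilation (L/min)."""
    return (vt - vd) * frequency

vd = round(dead_space(vt=0.45, paco2=41, peco2=35), 3)
print(vd)                                   # 0.066 L
print(alveolar_ventilation(0.45, vd, 16))   # 6.144 L/min, i.e., ~6.14 (choice D)
```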
21. The answer is B [VII B 1]. If an area of lung is not ventilated, there can be no gas exchange in that region. The pulmonary capillary blood serving that region will not equilibrate with alveolar Po2 but will have a Po2 equal to that of mixed venous blood.

22. The answer is A [V B; Figure 4.9]. CO2 generated in the tissues is hydrated to form H+ and HCO3− in red blood cells (RBCs). H+ is buffered inside the RBCs by deoxyhemoglobin, which acidifies the RBCs. HCO3− leaves the RBCs in exchange for Cl− and is carried to the lungs in the plasma. A small amount of CO2 (not HCO3−) binds directly to hemoglobin (carbaminohemoglobin).

23. The answer is B [IV A 4; IV D; Table 4.4; Table 4.5]. Hypoxia is defined as decreased O2 delivery to the tissues. It occurs as a result of decreased blood flow or decreased O2 content of the blood. Decreased O2 content of the blood is caused by decreased hemoglobin concentration (anemia), decreased O2-binding capacity of hemoglobin (carbon monoxide poisoning), or decreased arterial Po2 (hypoxemia). Hypoventilation, right-to-left cardiac shunt, and ascent to high altitude all cause hypoxia by decreasing arterial Po2. Of these, only right-to-left cardiac shunt is associated with an increased A–a gradient, reflecting a lack of O2 equilibration between alveolar gas and systemic arterial blood. In right-to-left shunt, a portion of the right heart output, or pulmonary blood flow, is not oxygenated in the lungs and thereby "dilutes" the Po2 of the normally oxygenated blood. With hypoventilation and ascent to high altitude, both alveolar and arterial Po2 are decreased, but the A–a gradient is normal.

24. The answer is D [VIII B; Table 4.7]. The patient's arterial blood gases show increased pH, decreased PaO2, and decreased PaCO2. The decreased PaO2 causes hyperventilation (stimulates breathing) via the peripheral chemoreceptors, but not via the central chemoreceptors. The decreased PaCO2 results from hyperventilation (increased breathing) and causes increased pH, which inhibits breathing via the peripheral and central chemoreceptors.

25. The answer is D [IX B; Table 4.9]. At high altitudes, the Po2 of alveolar air is decreased because barometric pressure is decreased. As a result, arterial Po2 is decreased (<100 mm Hg), and hypoxemia occurs and causes hyperventilation by an effect on peripheral chemoreceptors. Hyperventilation leads to respiratory alkalosis. 2,3-Diphosphoglycerate (DPG) levels increase adaptively; 2,3-DPG binds to hemoglobin and causes the hemoglobin–O2 dissociation curve to shift to the right to improve unloading of O2 in the tissues. The pulmonary vasculature vasoconstricts in response to alveolar hypoxia, resulting in increased pulmonary arterial pressure and hypertrophy of the right ventricle (not the left ventricle).

26. The answer is D [V B]. In venous blood, CO2 combines with H2O and produces the weak acid H2CO3, catalyzed by carbonic anhydrase. The resulting H+ is buffered by deoxyhemoglobin, which is such an effective buffer for H+ (meaning that the pK is within 1.0 unit of the pH of blood) that the pH of venous blood is only slightly more acidic than the pH of arterial blood. Oxyhemoglobin is a less effective buffer than is deoxyhemoglobin.

27. The answer is B [I B 3]. The volume expired in a forced maximal expiration is forced vital capacity, or vital capacity (VC).

28. The answer is D [VII].
Supplemental O2 (breathing inspired air with a high Po2) is most helpful in treating hypoxemia associated with a ventilation/perfusion (V/Q) defect if the predominant defect is low V/Q. Regions of low V/Q have the highest blood flow. Thus, breathing high Po2 air will raise the Po2 of a large volume of blood and have the greatest influence on the total blood flow leaving the lungs (which becomes systemic arterial blood). Dead space (i.e., V/Q = ∞) has no blood flow, so supplemental O2 has no effect on these regions. Shunt (i.e., V/Q = 0) has no ventilation, so supplemental O2 has no effect. Regions of high V/Q have little blood flow; thus, raising the Po2 of a small volume of blood will have little overall effect on systemic arterial blood.

29. The answer is A [IV D]. An increased A–a gradient signifies lack of O2 equilibration between alveolar gas (A) and systemic arterial blood (a). In pulmonary fibrosis, there is thickening of the alveolar/pulmonary capillary barrier and increased diffusion distance for O2, which results in lack of equilibration of O2, hypoxemia, and an increased A–a gradient. Hypoventilation and ascent to 12,000 feet also cause hypoxemia, because systemic arterial blood is equilibrated with a lower alveolar Po2 (normal A–a gradient). Persons breathing 50% or 100% O2 will have elevated alveolar Po2, and their arterial Po2 will equilibrate with this higher value (normal A–a gradient).

30. The answer is C [III D]. The diffusion of O2 from alveolar gas to pulmonary capillary blood is proportional to the partial pressure difference for O2 between inspired air and mixed venous blood entering the pulmonary capillaries, proportional to the surface area for diffusion, and inversely proportional to diffusion distance, or thickness of the barrier.

Chapter 5 Renal and Acid–Base Physiology

I. Body Fluids
■ Total body water (TBW) is approximately 60% of body weight.
■ The percentage of TBW is highest in newborns and adult males and lowest in adult females and in adults with a large amount of adipose tissue.
A. Distribution of water (Figure 5.1 and Table 5.1)
1. Intracellular fluid (ICF)
■ is two-thirds of TBW.
■ The major cations of ICF are K+ and Mg2+.
■ The major anions of ICF are protein and organic phosphates (adenosine triphosphate [ATP], adenosine diphosphate [ADP], and adenosine monophosphate [AMP]).
2. Extracellular fluid (ECF)
■ is one-third of TBW.
■ is composed of interstitial fluid and plasma.
■ The major cation of ECF is Na+.
■ The major anions of ECF are Cl− and HCO3−.
a. Plasma is one-fourth of the ECF. Thus, it is one-twelfth of TBW (1/4 × 1/3).
■ The major plasma proteins are albumin and globulins.
b. Interstitial fluid is three-fourths of the ECF. Thus, it is one-fourth of TBW (3/4 × 1/3).
■ The composition of interstitial fluid is the same as that of plasma except that it has little protein. Thus, interstitial fluid is an ultrafiltrate of plasma.
3. 60-40-20 rule (see the sketch below)
■ TBW is 60% of body weight.
■ ICF is 40% of body weight.
■ ECF is 20% of body weight.
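A minimal sketch of the 60-40-20 rule for a hypothetical 70-kg adult, treating 1 kg of body water as 1 L:

```python
# Sketch of the 60-40-20 rule and the plasma/interstitial split stated above.

def fluid_compartments(body_weight_kg):
    tbw = 0.60 * body_weight_kg      # total body water: 60% of body weight
    icf = 0.40 * body_weight_kg      # intracellular fluid: 40%
    ecf = 0.20 * body_weight_kg      # extracellular fluid: 20%
    plasma = ecf / 4                 # plasma is one-fourth of ECF
    interstitial = 3 * ecf / 4       # interstitial fluid is three-fourths of ECF
    return {"TBW": tbw, "ICF": icf, "ECF": ecf,
            "plasma": plasma, "interstitial": interstitial}

print(fluid_compartments(70))
# {'TBW': 42.0, 'ICF': 28.0, 'ECF': 14.0, 'plasma': 3.5, 'interstitial': 10.5}
```

The 42-L result agrees with the footnote to Table 5.1 (TBW in a 70-kg man).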
(3)  Evans blue is a marker for plasma volume because it is a dye that binds to serum albumin and is therefore confined to the plasma compartment.
b.  The substance is allowed to equilibrate.
c.  The concentration of the substance is measured in plasma, and the volume of distribution is calculated as follows:

Volume = Amount / Concentration

where:
Volume = volume of distribution, or volume of the body fluid compartment (L)
Amount = amount of substance present (mg)
Concentration = concentration in plasma (mg/L)
d.  Sample calculation (see the code sketch below)
■ A patient is injected with 500 mg of mannitol. After a 2-hour equilibration period, the concentration of mannitol in plasma is 3.2 mg/100 mL. During the equilibration period, 10% of the injected mannitol is excreted in urine. What is the patient's ECF volume?

Volume = (Amount injected − Amount excreted) / Concentration
       = (500 mg − 50 mg) / (3.2 mg/100 mL)
       = 450 mg / (32 mg/L)
       = 14.1 L
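A minimal sketch of the dilution calculation above, in Python. The function name and units are illustrative, not from the text; the numbers reproduce the mannitol example.

```python
# Volume of distribution by the dilution method:
# Volume (L) = (amount injected - amount excreted) / plasma concentration.

def volume_of_distribution(amount_injected_mg, amount_excreted_mg,
                           plasma_conc_mg_per_L):
    return (amount_injected_mg - amount_excreted_mg) / plasma_conc_mg_per_L

# Mannitol example: 500 mg injected, 10% excreted, plasma 3.2 mg/100 mL.
conc_mg_per_L = 3.2 * 10            # 3.2 mg/100 mL = 32 mg/L
ecf = volume_of_distribution(500, 50, conc_mg_per_L)
print(f"ECF volume = {ecf:.1f} L")  # ≈ 14.1 L, as in the sample calculation
```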
FIGURE 5.1 Body fluid compartments (total body water divided into intracellular and extracellular fluid; extracellular fluid divided into plasma and interstitial fluid).

TABLE 5.1 Body Water and Body Fluid Compartments

Body Fluid Compartment | Fraction of TBW | Markers Used to Measure Volume | Major Cations | Major Anions
TBW | 1.0 | Tritiated H2O, D2O, antipyrene | — | —
ECF | 1/3 | Sulfate, inulin, mannitol | Na+ | Cl−, HCO3−
Plasma | 1/12 (1/4 of ECF) | RISA, Evans blue | Na+ | Cl−, HCO3−, plasma protein
Interstitial | 1/4 (3/4 of ECF) | ECF − plasma volume (indirect) | Na+ | Cl−, HCO3−
ICF | 2/3 | TBW − ECF (indirect) | K+ | Organic phosphates, protein

Total body water (TBW) is approximately 60% of total body weight, or 42 L in a 70-kg man. ECF = extracellular fluid; ICF = intracellular fluid; RISA = radioiodinated serum albumin.

2.  Substances used for major fluid compartments (see Table 5.1)
a.  TBW: tritiated water, D2O, and antipyrene
b.  ECF: sulfate, inulin, and mannitol
c.  Plasma: radioiodinated serum albumin (RISA) and Evans blue
d.  Interstitial: measured indirectly (ECF volume − plasma volume)
e.  ICF: measured indirectly (TBW − ECF volume)

C.  Shifts of water between compartments
1.  Basic principles
a.  Osmolarity is the concentration of solute particles.
b.  Plasma osmolarity (Posm) is estimated as:

Posm = 2 × Na+ + Glucose/18 + BUN/2.8

where:
Posm = plasma osmolarity (mOsm/L)
Na+ = plasma Na+ concentration (mEq/L)
Glucose = plasma glucose concentration (mg/dL)
BUN = blood urea nitrogen concentration (mg/dL)
c.  At steady state, ECF osmolarity and ICF osmolarity are equal.
d.  To achieve this equality, water shifts between the ECF and ICF compartments.
e.  It is assumed that solutes such as NaCl and mannitol do not cross cell membranes and are confined to the ECF.
2.  Examples of shifts of water between compartments (Figure 5.2 and Table 5.2; a worked numeric sketch follows this list)
a.  Infusion of isotonic NaCl—addition of isotonic fluid
■ is also called isosmotic volume expansion.
(1)  ECF volume increases, but no change occurs in the osmolarity of ECF or ICF. Because osmolarity is unchanged, water does not shift between the ECF and ICF compartments.
(2)  Plasma protein concentration and hematocrit decrease because the addition of fluid to the ECF dilutes the protein and red blood cells (RBCs). Because ECF osmolarity is unchanged, the RBCs will not shrink or swell.
(3)  Arterial blood pressure increases because ECF volume increases.
b.  Diarrhea—loss of isotonic fluid
■ is also called isosmotic volume contraction.
(1)  ECF volume decreases, but no change occurs in the osmolarity of ECF or ICF. Because osmolarity is unchanged, water does not shift between the ECF and ICF compartments.
(2)  Plasma protein concentration and hematocrit increase because the loss of ECF concentrates the protein and RBCs. Because ECF osmolarity is unchanged, the RBCs will not shrink or swell.
(3)  Arterial blood pressure decreases because ECF volume decreases.
c.  Excessive NaCl intake—addition of NaCl
■ is also called hyperosmotic volume expansion.
(1)  The osmolarity of ECF increases because osmoles (NaCl) have been added to the ECF.
(2)  Water shifts from ICF to ECF. As a result of this shift, ICF osmolarity increases until it equals that of ECF.
(3)  As a result of the shift of water out of the cells, ECF volume increases (volume expansion) and ICF volume decreases.
(4)  Plasma protein concentration and hematocrit decrease because of the increase in ECF volume.

TABLE 5.2 Changes in Volume and Osmolarity of Body Fluids

Type | Key Examples | ECF Volume | ICF Volume | ECF Osmolarity | Hct and Serum [Na+]
Isosmotic volume expansion | Isotonic NaCl infusion | ↑ | No change | No change | ↓ Hct, – [Na+]
Isosmotic volume contraction | Diarrhea | ↓ | No change | No change | ↑ Hct, – [Na+]
Hyperosmotic volume expansion | High NaCl intake | ↑ | ↓ | ↑ | ↓ Hct, ↑ [Na+]
Hyperosmotic volume contraction | Sweating, fever, diabetes insipidus | ↓ | ↓ | ↑ | – Hct, ↑ [Na+]
Hyposmotic volume expansion | SIADH | ↑ | ↑ | ↓ | – Hct, ↓ [Na+]
Hyposmotic volume contraction | Adrenal insufficiency | ↓ | ↑ | ↓ | ↑ Hct, ↓ [Na+]

– = no change; ECF = extracellular fluid; Hct = hematocrit; ICF = intracellular fluid; SIADH = syndrome of inappropriate antidiuretic hormone.

FIGURE 5.2 Shifts of water between body fluid compartments. Volume and osmolarity of normal extracellular fluid (ECF) and intracellular fluid (ICF) are indicated by the solid lines. Changes in volume and osmolarity in response to various situations are indicated by the dashed lines. SIADH = syndrome of inappropriate antidiuretic hormone.

d.  Sweating in a desert—loss of water
■ is also called hyperosmotic volume contraction.
(1)  The osmolarity of ECF increases because sweat is hyposmotic (relatively more water than salt is lost).
(2)  ECF volume decreases because of the loss of volume in the sweat. Water shifts out of the ICF; as a result of the shift, ICF osmolarity increases until it is equal to ECF osmolarity, and ICF volume decreases.
(3)  Plasma protein concentration increases because of the decrease in ECF volume. Although hematocrit might also be expected to increase, it remains unchanged because water shifts out of the RBCs, decreasing their volume and offsetting the concentrating effect of the decreased ECF volume.
e.  Syndrome of inappropriate antidiuretic hormone (SIADH)—gain of water
■ is also called hyposmotic volume expansion.
(1)  The osmolarity of ECF decreases because excess water is retained.
(2)  ECF volume increases because of the water retention. Water shifts into the cells; as a result of this shift, ICF osmolarity decreases until it equals ECF osmolarity, and ICF volume increases.
(3)  Plasma protein concentration decreases because of the increase in ECF volume. Although hematocrit might also be expected to decrease, it remains unchanged because water shifts into the RBCs, increasing their volume and offsetting the diluting effect of the gain of ECF volume.
f.  Adrenocortical insufficiency—loss of NaCl
■ is also called hyposmotic volume contraction.
(1)  The osmolarity of ECF decreases. As a result of the lack of aldosterone in adrenocortical insufficiency, there is decreased NaCl reabsorption, and the kidneys excrete more NaCl than water.
(2)  ECF volume decreases. Water shifts into the cells; as a result of this shift, ICF osmolarity decreases until it equals ECF osmolarity, and ICF volume increases.
(3)  Plasma protein concentration increases because of the decrease in ECF volume. Hematocrit increases because of the decreased ECF volume and because the RBCs swell as a result of water entry.
(4)  Arterial blood pressure decreases because of the decrease in ECF volume.
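A minimal numeric sketch of a hyperosmotic volume expansion (example c above). The compartment volumes follow the 60-40-20 rule for a 70-kg man (Table 5.1); the 420 mOsm of added NaCl is an illustrative number, not from the text, and the solute is assumed to stay in the ECF while TBW is unchanged.

```python
# Two-compartment water shift after adding NaCl to the ECF.
# Water moves until ECF and ICF osmolarities are equal again.

tbw, icf = 42.0, 28.0                 # L (70-kg man; ECF = 14 L)
posm = 300.0                          # mOsm/L at steady state

icf_osmoles = icf * posm              # 8400 mOsm; ICF solute cannot leave
total_osmoles = tbw * posm + 420.0    # add 420 mOsm NaCl to the ECF

new_posm = total_osmoles / tbw        # new common osmolarity
new_icf = icf_osmoles / new_posm      # ICF shrinks (water shifted out)
new_ecf = tbw - new_icf               # ECF expands

print(f"{new_posm:.0f} mOsm/L, ICF {new_icf:.1f} L, ECF {new_ecf:.1f} L")
# 310 mOsm/L, ICF ≈ 27.1 L, ECF ≈ 14.9 L — the pattern in Table 5.2
```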
II.  Renal Clearance, Renal Blood Flow (RBF), and Glomerular Filtration Rate (GFR)

A.  Clearance equation
■ indicates the volume of plasma cleared of a substance per unit time.
■ The units of clearance are mL/min or mL/24 hr.

C = (U × V) / P

where:
C = clearance (mL/min or mL/24 hr)
U = urine concentration (mg/mL)
V = urine volume/time (mL/min)
P = plasma concentration (mg/mL)
■ Example: If the plasma [Na+] is 140 mEq/L, the urine [Na+] is 700 mEq/L, and the urine flow rate is 1 mL/min, what is the clearance of Na+ (see the code sketch below)?

CNa+ = ([U]Na+ × V) / [P]Na+
     = (700 mEq/L × 1 mL/min) / 140 mEq/L
     = 5 mL/min
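A minimal sketch of the clearance equation above; the function name is illustrative, and the numbers reproduce the Na+ example.

```python
# Renal clearance: C = (U x V) / P, with U and P in the same units.

def clearance(urine_conc, urine_flow_ml_per_min, plasma_conc):
    return urine_conc * urine_flow_ml_per_min / plasma_conc

print(clearance(700, 1.0, 140))   # Na+: 700 x 1 / 140 = 5.0 mL/min
```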
B.  RBF
■ is 25% of the cardiac output.
■ is directly proportional to the pressure difference between the renal artery and the renal vein, and is inversely proportional to the resistance of the renal vasculature.
■ Vasoconstriction of renal arterioles, which leads to a decrease in RBF, is produced by activation of the sympathetic nervous system and angiotensin II. At low concentrations, angiotensin II preferentially constricts efferent arterioles, thereby "protecting" (increasing) the GFR. Angiotensin-converting enzyme (ACE) inhibitors dilate efferent arterioles and produce a decrease in GFR; these drugs reduce hyperfiltration and the occurrence of diabetic nephropathy in diabetes mellitus.
■ Vasodilation of renal arterioles, which leads to an increase in RBF, is produced by prostaglandins E2 and I2, bradykinin, nitric oxide, and dopamine.
■ Atrial natriuretic peptide (ANP) causes vasodilation of afferent arterioles and, to a lesser extent, vasoconstriction of efferent arterioles; overall, ANP increases RBF.
1.  Autoregulation of RBF
■ is accomplished by changing renal vascular resistance. If arterial pressure changes, a proportional change occurs in renal vascular resistance to maintain a constant RBF.
■ RBF remains constant over the range of arterial pressures from 80 to 200 mm Hg (autoregulation).
■ The mechanisms for autoregulation include:
a.  Myogenic mechanism, in which the renal afferent arterioles contract in response to stretch. Thus, increased renal arterial pressure stretches the arterioles, which contract and increase resistance to maintain constant blood flow.
b.  Tubuloglomerular feedback, in which increased renal arterial pressure leads to increased delivery of fluid to the macula densa. The macula densa senses the increased load and causes constriction of the nearby afferent arteriole, increasing resistance to maintain constant blood flow.
2.  Measurement of renal plasma flow (RPF)—clearance of para-aminohippuric acid (PAH)
■ PAH is filtered and secreted by the renal tubules.
■ Clearance of PAH is used to measure RPF.
■ Clearance of PAH measures effective RPF and underestimates true RPF by 10%. (Clearance of PAH does not measure renal plasma flow to regions of the kidney that do not filter and secrete PAH, such as adipose tissue.)

RPF = CPAH = ([U]PAH × V) / [P]PAH

where:
RPF = renal plasma flow (mL/min or mL/24 hr)
CPAH = clearance of PAH (mL/min or mL/24 hr)
[U]PAH = urine concentration of PAH (mg/mL)
V = urine flow rate (mL/min or mL/24 hr)
[P]PAH = plasma concentration of PAH (mg/mL)
3.  Measurement of RBF

RBF = RPF / (1 − Hematocrit)

■ Note that the denominator in this equation, 1 − hematocrit, is the fraction of blood volume occupied by plasma.

C.  GFR
1.  Measurement of GFR—clearance of inulin
■ Inulin is filtered, but not reabsorbed or secreted, by the renal tubules.
■ The clearance of inulin is used to measure GFR, as shown in the following equation:

GFR = ([U]inulin × V) / [P]inulin

where:
GFR = glomerular filtration rate (mL/min or mL/24 hr)
[U]inulin = urine concentration of inulin (mg/mL)
V = urine flow rate (mL/min or mL/24 hr)
[P]inulin = plasma concentration of inulin (mg/mL)
■ Example of calculation of GFR: Inulin is infused in a patient to achieve a steady-state plasma concentration of 1 mg/mL. A urine sample collected during 1 hour has a volume of 60 mL and an inulin concentration of 120 mg/mL. What is the patient's GFR?

GFR = (120 mg/mL × 60 mL/hr) / 1 mg/mL
    = (120 mg/mL × 1 mL/min) / 1 mg/mL
    = 120 mL/min
2.  Estimates of GFR with blood urea nitrogen (BUN) and serum [creatinine]
■ Both BUN and serum [creatinine] increase when GFR decreases.
■ In prerenal azotemia (hypovolemia), BUN increases more than serum creatinine and there is an increased BUN/creatinine ratio (>20:1).
■ GFR decreases with age, although serum [creatinine] remains constant because of decreased muscle mass.
3.  Filtration fraction (see the combined code sketch below)
■ is the fraction of RPF filtered across the glomerular capillaries, as shown in the following equation:

Filtration fraction = GFR / RPF

■ is normally about 0.20. Thus, 20% of the RPF is filtered. The remaining 80% leaves the glomerular capillaries by the efferent arterioles and becomes the peritubular capillary circulation.
■ Increases in the filtration fraction produce increases in the protein concentration of peritubular capillary blood, which leads to increased reabsorption in the proximal tubule.
■ Decreases in the filtration fraction produce decreases in the protein concentration of peritubular capillary blood and decreased reabsorption in the proximal tubule.
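A minimal sketch combining the equations above. The inulin numbers reproduce the GFR example; the PAH concentrations and the hematocrit of 0.45 are illustrative assumptions, not values from the text.

```python
# GFR (inulin clearance), effective RPF (PAH clearance),
# RBF = RPF / (1 - hematocrit), and filtration fraction = GFR / RPF.

def clearance(u, v, p):
    return u * v / p                  # mL/min

gfr = clearance(120, 1.0, 1.0)        # inulin example: 120 mL/min
rpf = clearance(6.0, 1.0, 0.01)       # PAH (illustrative values): 600 mL/min
rbf = rpf / (1 - 0.45)                # assumed hematocrit 0.45
ff = gfr / rpf                        # normally about 0.20

print(f"GFR {gfr:.0f}, RPF {rpf:.0f}, RBF {rbf:.0f} mL/min, FF {ff:.2f}")
```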
4.  Determining GFR—Starling forces (Figure 5.3)
■ The driving force for glomerular filtration is the net ultrafiltration pressure across the glomerular capillaries.
■ Filtration is always favored in glomerular capillaries because the net ultrafiltration pressure always favors the movement of fluid out of the capillary.
■ GFR can be expressed by the Starling equation:

GFR = Kf [(PGC − PBS) − (πGC − πBS)]

a.  GFR is filtration across the glomerular capillaries.
b.  Kf is the filtration coefficient of the glomerular capillaries.
■ The glomerular barrier consists of the capillary endothelium, basement membrane, and filtration slits of the podocytes.
■ Normally, anionic glycoproteins line the filtration barrier and restrict the filtration of plasma proteins, which are also negatively charged.
■ In glomerular disease, the anionic charges on the barrier may be removed, resulting in proteinuria.
c.  PGC is glomerular capillary hydrostatic pressure, which is constant along the length of the capillary.
■ It is increased by dilation of the afferent arteriole or constriction of the efferent arteriole. Increases in PGC cause increases in net ultrafiltration pressure and GFR.
d.  PBS is Bowman space hydrostatic pressure and is analogous to Pi in systemic capillaries.
■ It is increased by constriction of the ureters. Increases in PBS cause decreases in net ultrafiltration pressure and GFR.
e.  πGC is glomerular capillary oncotic pressure. It normally increases along the length of the glomerular capillary because filtration of water increases the protein concentration of glomerular capillary blood.
■ It is increased by increases in protein concentration. Increases in πGC cause decreases in net ultrafiltration pressure and GFR.
f.  πBS is Bowman space oncotic pressure. It is usually zero, and therefore ignored, because only a small amount of protein is normally filtered.
5.  Sample calculation of ultrafiltration pressure with the Starling equation (see the code sketch below)
■ At the afferent arteriolar end of a glomerular capillary, PGC is 45 mm Hg, PBS is 10 mm Hg, and πGC is 27 mm Hg. What are the value and direction of the net ultrafiltration pressure?

Net pressure = (PGC − PBS) − πGC
             = (45 mm Hg − 10 mm Hg) − 27 mm Hg
             = +8 mm Hg (favoring filtration)
6.  Changes in Starling forces—effect on GFR and filtration fraction (Table 5.3)

FIGURE 5.3 Starling forces across the glomerular capillaries. Heavy arrows indicate the driving forces across the glomerular capillary wall. PBS = hydrostatic pressure in Bowman space; PGC = hydrostatic pressure in the glomerular capillary; πGC = colloidosmotic pressure in the glomerular capillary.

TABLE 5.3 Effect of Changes in Starling Forces on GFR, RPF, and Filtration Fraction

Change | Effect on GFR | Effect on RPF | Effect on Filtration Fraction
Constriction of afferent arteriole (e.g., sympathetic) | ↓ (caused by ↓ PGC) | ↓ | No change
Constriction of efferent arteriole (e.g., angiotensin II) | ↑ (caused by ↑ PGC) | ↓ | ↑ (↑ GFR/↓ RPF)
Increased plasma [protein] | ↓ (caused by ↑ πGC) | No change | ↓ (↓ GFR/unchanged RPF)
Ureteral stone | ↓ (caused by ↑ PBS) | No change | ↓ (↓ GFR/unchanged RPF)

GFR = glomerular filtration rate; RPF = renal plasma flow.
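A minimal sketch of the net ultrafiltration pressure term of the Starling expression above; πBS is taken as zero, per the text, and the numbers reproduce the sample calculation.

```python
# Net ultrafiltration pressure = (PGC - PBS) - (piGC - piBS), in mm Hg.
# A positive value favors filtration out of the glomerular capillary.

def net_ultrafiltration_pressure(p_gc, p_bs, pi_gc, pi_bs=0.0):
    return (p_gc - p_bs) - (pi_gc - pi_bs)

print(net_ultrafiltration_pressure(45, 10, 27))   # +8 mm Hg, favoring filtration
```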
III.  Reabsorption and Secretion (Figure 5.4)

FIGURE 5.4 Processes of filtration, reabsorption, and secretion. The sum of the three processes is excretion.

A.  Calculation of reabsorption and secretion rates
■ The reabsorption or secretion rate is the difference between the amount filtered across the glomerular capillaries and the amount excreted in urine. It is calculated with the following equations:

Filtered load = GFR × [Plasma]
Excretion rate = V × [Urine]
Reabsorption rate = Filtered load − Excretion rate
Secretion rate = Excretion rate − Filtered load

■ If the filtered load is greater than the excretion rate, then net reabsorption of the substance has occurred. If the filtered load is less than the excretion rate, then net secretion of the substance has occurred.
■ Example: A woman with untreated diabetes mellitus has a GFR of 120 mL/min, a plasma glucose concentration of 400 mg/dL, a urine glucose concentration of 2500 mg/dL, and a urine flow rate of 4 mL/min. What is the reabsorption rate of glucose (see the code sketch below)?

Filtered load = GFR × [Plasma]glucose = 120 mL/min × 400 mg/dL = 480 mg/min
Excretion = V × [Urine]glucose = 4 mL/min × 2500 mg/dL = 100 mg/min
Reabsorption = 480 mg/min − 100 mg/min = 380 mg/min
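A minimal sketch of the filtered-load arithmetic above; the variable names are illustrative, and mg/dL is converted to mg/mL before multiplying by mL/min.

```python
# Reabsorption rate = filtered load - excretion rate (glucose example).

gfr = 120.0                  # mL/min
p_glucose = 400 / 100.0      # 400 mg/dL = 4 mg/mL
u_glucose = 2500 / 100.0     # 2500 mg/dL = 25 mg/mL
v = 4.0                      # urine flow rate, mL/min

filtered_load = gfr * p_glucose        # 480 mg/min
excretion = v * u_glucose              # 100 mg/min
reabsorption = filtered_load - excretion
print(f"reabsorption = {reabsorption:.0f} mg/min")   # 380 mg/min
```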
B.  Transport maximum (Tm) curve for glucose—a reabsorbed substance (Figure 5.5)
1.  Filtered load of glucose
■ increases in direct proportion to the plasma glucose concentration (filtered load of glucose = GFR × [P]glucose).
2.  Reabsorption of glucose
a.  Na+–glucose cotransport in the proximal tubule reabsorbs glucose from tubular fluid into the blood. There are a limited number of Na+–glucose carriers.
b.  At plasma glucose concentrations less than 250 mg/dL, all of the filtered glucose can be reabsorbed because plenty of carriers are available; in this range, the line for reabsorption is the same as that for filtration.
c.  At plasma glucose concentrations greater than 350 mg/dL, the carriers are saturated. Therefore, increases in plasma concentration above 350 mg/dL do not result in increased rates of reabsorption. The reabsorptive rate at which the carriers are saturated is the Tm.
3.  Excretion of glucose
a.  At plasma concentrations less than 250 mg/dL, all of the filtered glucose is reabsorbed and excretion is zero. Threshold (defined as the plasma concentration at which glucose first appears in the urine) is approximately 250 mg/dL.
b.  At plasma concentrations greater than 350 mg/dL, reabsorption is saturated (Tm). Therefore, as the plasma concentration increases, the additional filtered glucose cannot be reabsorbed and is excreted in the urine.
4.  Splay
■ is the region of the glucose curves between threshold and Tm.
■ occurs between plasma glucose concentrations of approximately 250 and 350 mg/dL.
■ represents the excretion of glucose in urine before saturation of reabsorption (Tm) is fully achieved.
■ is explained by the heterogeneity of nephrons and the relatively low affinity of the Na+–glucose carriers.

FIGURE 5.5 Glucose titration curve. Glucose filtration, excretion, and reabsorption are shown as a function of plasma [glucose]. Shaded area indicates the "splay." Tm = transport maximum.

C.  Tm curve for PAH—a secreted substance (Figure 5.6)
1.  Filtered load of PAH
■ As with glucose, the filtered load of PAH increases in direct proportion to the plasma PAH concentration.
2.  Secretion of PAH
a.  Secretion of PAH occurs from peritubular capillary blood into tubular fluid (urine) via carriers in the proximal tubule.
b.  At low plasma concentrations of PAH, the secretion rate increases as the plasma concentration increases.
c.  Once the carriers are saturated, further increases in plasma PAH concentration do not cause further increases in the secretion rate (Tm).
3.  Excretion of PAH
a.  Excretion of PAH is the sum of filtration across the glomerular capillaries plus secretion from peritubular capillary blood.
b.  The curve for excretion is steepest at low plasma PAH concentrations (lower than at Tm). Once the Tm for secretion is exceeded and all of the carriers for secretion are saturated, the excretion curve flattens and becomes parallel to the curve for filtration.
c.  RPF is measured by the clearance of PAH at plasma concentrations of PAH that are lower than at Tm.

FIGURE 5.6 Para-aminohippuric acid (PAH) titration curve. PAH filtration, excretion, and secretion are shown as a function of plasma [PAH]. Tm = transport maximum.

D.  Relative clearances of substances
1.  Substances with the highest clearances
■ are those that are both filtered across the glomerular capillaries and secreted from the peritubular capillaries into urine (e.g., PAH).
2.  Substances with the lowest clearances
■ are those that either are not filtered (e.g., protein) or are filtered and subsequently reabsorbed into peritubular capillary blood (e.g., Na+, glucose, amino acids, HCO3−, Cl−).
3.  Substances with clearances equal to GFR
■ are glomerular markers.
■ are those that are freely filtered, but not reabsorbed or secreted (e.g., inulin).
4.  Relative clearances
■ PAH > K+ (high-K+ diet) > inulin > urea > Na+ > glucose, amino acids, and HCO3−.

E.  Nonionic diffusion
1.  Weak acids
■ have an HA form and an A− form.
■ The HA form, which is uncharged and lipid soluble, can "back-diffuse" from urine to blood.
■ The A− form, which is charged and not lipid soluble, cannot back-diffuse.
■ At acidic urine pH, the HA form predominates, there is more back-diffusion, and there is decreased excretion of the weak acid.
■ At alkaline urine pH, the A− form predominates, there is less back-diffusion, and there is increased excretion of the weak acid. For example, the excretion of salicylic acid (a weak acid) can be increased by alkalinizing the urine.
2.  Weak bases
■ have a BH+ form and a B form.
■ The B form, which is uncharged and lipid soluble, can "back-diffuse" from urine to blood.
■ The BH+ form, which is charged and not lipid soluble, cannot back-diffuse.
■ At acidic urine pH, the BH+ form predominates, there is less back-diffusion, and there is increased excretion of the weak base. For example, the excretion of morphine (a weak base) can be increased by acidifying the urine.
■ At alkaline urine pH, the B form predominates, there is more back-diffusion, and there is decreased excretion of the weak base.

IV.  NaCl Regulation

A.  Single nephron terminology
■ Tubular fluid (TF) is urine at any point along the nephron.
■ Plasma (P) is systemic plasma. It is considered to be constant.
1.  TF/Px ratio
■ compares the concentration of a substance in tubular fluid at any point along the nephron with the concentration in plasma.
a.  If TF/P = 1.0, then either there has been no reabsorption of the substance or reabsorption of the substance has been exactly proportional to the reabsorption of water.
■ For example, if TF/PNa+ = 1.0, the [Na+] in tubular fluid is identical to the [Na+] in plasma.
■ For any freely filtered substance, TF/P = 1.0 in Bowman space (before any reabsorption or secretion has taken place to modify the tubular fluid).
b.  If TF/P < 1.0, then reabsorption of the substance has been greater than the reabsorption of water and the concentration in tubular fluid is less than that in plasma.
■ For example, if TF/PNa+ = 0.8, then the [Na+] in tubular fluid is 80% of the [Na+] in plasma.
c.  If TF/P > 1.0, then either reabsorption of the substance has been less than the reabsorption of water or there has been secretion of the substance.
2.  TF/Pinulin
■ is used as a marker for water reabsorption along the nephron.
■ increases as water is reabsorbed.
■ Because inulin is freely filtered, but not reabsorbed or secreted, its concentration in tubular fluid is determined solely by how much water remains in the tubular fluid.
■ The following equation shows how to calculate the fraction of the filtered water that has been reabsorbed:

Fraction of filtered H2O reabsorbed = 1 − 1/[TF/P]inulin

■ For example, if 50% of the filtered water has been reabsorbed, then TF/Pinulin = 2.0. For another example, if TF/Pinulin = 3.0, then 67% of the filtered water has been reabsorbed (i.e., 1 − 1/3).
3.  [TF/P]x/[TF/P]inulin ratio
■ corrects the TF/Px ratio for water reabsorption. This double ratio gives the fraction of the filtered load remaining at any point along the nephron.
■ For example, if [TF/P]K+/[TF/P]inulin = 0.3 at the end of the proximal tubule, then 30% of the filtered K+ remains in the tubular fluid and 70% has been reabsorbed into the blood. (A short code sketch of these ratios follows.)
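A minimal sketch of the two ratios above. The numbers in the last line (TF/PK+ = 0.6 with TF/Pinulin = 2.0) are illustrative assumptions chosen to reproduce the 0.3 double ratio in the text.

```python
# Fraction of filtered water reabsorbed = 1 - 1/(TF/P)inulin,
# and fraction of filtered load remaining = (TF/P)x / (TF/P)inulin.

def water_reabsorbed_fraction(tf_p_inulin):
    return 1 - 1 / tf_p_inulin

def filtered_load_remaining(tf_p_x, tf_p_inulin):
    return tf_p_x / tf_p_inulin

print(water_reabsorbed_fraction(2.0))         # 0.50 -> 50% of water reabsorbed
print(water_reabsorbed_fraction(3.0))         # ≈0.67 -> 67% reabsorbed
print(filtered_load_remaining(0.6, 2.0))      # 0.30 -> 30% of filtered K+ remains
```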
B.  General information about Na+ reabsorption
■ Na+ is freely filtered across the glomerular capillaries; therefore, the [Na+] in the tubular fluid of Bowman space equals that in plasma (i.e., TF/PNa+ = 1.0).
■ Na+ is reabsorbed along the entire nephron, and very little is excreted in urine (<1% of the filtered load).

C.  Na+ reabsorption along the nephron (Figure 5.7)

FIGURE 5.7 Na+ handling along the nephron. Arrows indicate reabsorption of Na+. Numbers indicate the percentage of the filtered load of Na+ that is reabsorbed (67%, proximal convoluted tubule; 25%, thick ascending limb; 5%, distal convoluted tubule; 3%, collecting duct) or excreted (<1%).

1.  Proximal tubule
■ reabsorbs two-thirds, or 67%, of the filtered Na+ and H2O, more than any other part of the nephron.
■ is the site of glomerulotubular balance.
■ The process is isosmotic. The reabsorption of Na+ and H2O in the proximal tubule is exactly proportional. Therefore, both TF/PNa+ and TF/Posm = 1.0.
a.  Early proximal tubule—special features (Figure 5.8)
■ reabsorbs Na+ and H2O with HCO3−, glucose, amino acids, phosphate, and lactate.
■ Na+ is reabsorbed by cotransport with glucose, amino acids, phosphate, and lactate. These cotransport processes account for the reabsorption of all of the filtered glucose and amino acids.
■ Na+ is also reabsorbed by countertransport via Na+–H+ exchange, which is linked directly to the reabsorption of filtered HCO3−.
■ Carbonic anhydrase inhibitors (e.g., acetazolamide) are diuretics that act in the early proximal tubule by inhibiting the reabsorption of filtered HCO3−.

FIGURE 5.8 Mechanisms of Na+ reabsorption in the cells of the early proximal tubule (luminal Na+ entry by cotransport with glucose, amino acids, phosphate, and lactate and by Na+–H+ exchange; basolateral exit via the Na+–K+ pump).

b.  Late proximal tubule—special features
■ Filtered glucose, amino acids, and HCO3− have already been completely removed from the tubular fluid by reabsorption in the early proximal tubule.
■ In the late proximal tubule, Na+ is reabsorbed with Cl−.
c.  Glomerulotubular balance in the proximal tubule
■ maintains constant fractional reabsorption (two-thirds, or 67%) of the filtered Na+ and H2O.
(1)  For example, if GFR spontaneously increases, the filtered load of Na+ also increases. Without a change in reabsorption, this increase in GFR would lead to increased Na+ excretion. However, glomerulotubular balance functions such that Na+ reabsorption also will increase, ensuring that a constant fraction is reabsorbed.
(2)  The mechanism of glomerulotubular balance is based on Starling forces in the peritubular capillaries, which alter the reabsorption of Na+ and H2O in the proximal tubule (Figure 5.9).
■ The route of isosmotic fluid reabsorption is from the lumen, to the proximal tubule cell, to the lateral intercellular space, and then to the peritubular capillary blood.
■ Starling forces in the peritubular capillary blood govern how much of this isosmotic fluid will be reabsorbed.
■ Fluid reabsorption is increased by increases in πc of the peritubular capillary blood and decreased by decreases in πc.
■ Increases in GFR and filtration fraction cause the protein concentration and πc of peritubular capillary blood to increase. This increase, in turn, produces an increase in fluid reabsorption. Thus, there is matching of filtration and reabsorption, or glomerulotubular balance.

FIGURE 5.9 Mechanism of isosmotic reabsorption in the proximal tubule. The dashed arrow shows the pathway. Increases in πc and decreases in Pc cause increased rates of isosmotic reabsorption.

d.  Effects of ECF volume on proximal tubular reabsorption
(1)  ECF volume contraction increases reabsorption. Volume contraction increases peritubular capillary protein concentration and πc, and decreases peritubular capillary Pc. Together, these changes in Starling forces in peritubular capillary blood cause an increase in proximal tubular reabsorption.
(2)  ECF volume expansion decreases reabsorption. Volume expansion decreases peritubular capillary protein concentration and πc, and increases Pc. Together, these changes in Starling forces in peritubular capillary blood cause a decrease in proximal tubular reabsorption.
e.  TF/P ratios along the proximal tubule (Figure 5.10)
■ At the beginning of the proximal tubule (i.e., Bowman space), TF/P for freely filtered substances is 1.0, since no reabsorption or secretion has taken place yet.
■ Moving along the proximal tubule, TF/P for Na+ and osmolarity remain at 1.0 because Na+ and total solute are reabsorbed proportionately with water, that is, isosmotically. Glucose, amino acids, and HCO3− are reabsorbed proportionately more than water, so their TF/P values fall below 1.0. In the early proximal tubule, Cl− is reabsorbed proportionately less than water, so its TF/P value is greater than 1.0. Inulin is not reabsorbed, so its TF/P value increases steadily above 1.0 as water is reabsorbed and inulin is "left behind."

FIGURE 5.10 Changes in TF/P concentration ratios for various solutes along the proximal tubule.
2.  Thick ascending limb of the loop of Henle (Figure 5.11)
■ reabsorbs 25% of the filtered Na+.
■ contains a Na+–K+–2Cl− cotransporter in the luminal membrane.
■ is the site of action of the loop diuretics (furosemide, ethacrynic acid, bumetanide), which inhibit the Na+–K+–2Cl− cotransporter.
■ is impermeable to water. Thus, NaCl is reabsorbed without water. As a result, tubular fluid [Na+] and tubular fluid osmolarity decrease to less than their concentrations in plasma (i.e., TF/PNa+ and TF/Posm < 1.0). This segment, therefore, is called the diluting segment.
■ has a lumen-positive potential difference. Although the Na+–K+–2Cl− cotransporter appears to be electroneutral, some K+ diffuses back into the lumen, making the lumen electrically positive.

FIGURE 5.11 Mechanism of ion transport in the thick ascending limb of the loop of Henle (luminal Na+–K+–2Cl− cotransporter, inhibited by furosemide; basolateral Na+–K+ pump).

3.  Distal tubule and collecting duct
■ together reabsorb 8% of the filtered Na+.
a.  Early distal tubule—special features (Figure 5.12)
■ reabsorbs NaCl by a Na+–Cl− cotransporter.
■ is the site of action of thiazide diuretics.
■ is impermeable to water, as is the thick ascending limb. Thus, reabsorption of NaCl occurs without water, which further dilutes the tubular fluid.
■ is called the cortical diluting segment.

FIGURE 5.12 Mechanisms of ion transport in the early distal tubule (luminal Na+–Cl− cotransporter, inhibited by thiazide diuretics; basolateral Na+–K+ pump).

b.  Late distal tubule and collecting duct—special features
■ have two cell types.
(1)  Principal cells
■ reabsorb Na+ and H2O.
■ secrete K+.
■ Aldosterone increases Na+ reabsorption and increases K+ secretion. Like other steroid hormones, the action of aldosterone takes several hours to develop because new protein synthesis of Na+ channels (ENaC) is required. About 2% of overall Na+ reabsorption is affected by aldosterone.
■ Antidiuretic hormone (ADH) increases H2O permeability by directing the insertion of H2O channels in the luminal membrane. In the absence of ADH, the principal cells are virtually impermeable to water.
■ K+-sparing diuretics (spironolactone, triamterene, amiloride) decrease K+ secretion.
(2)  α-Intercalated cells
■ secrete H+ by an H+-adenosine triphosphatase (ATPase), which is stimulated by aldosterone.
■ reabsorb K+ by an H+,K+-ATPase.

V.  K+ Regulation

A.  Shifts of K+ between the ICF and ECF (Figure 5.13 and Table 5.4)
■ Most of the body's K+ is located in the ICF.
■ A shift of K+ out of cells causes hyperkalemia.
■ A shift of K+ into cells causes hypokalemia.

FIGURE 5.13 Internal K+ balance. K+ shifts out of cells with hyperosmolarity, exercise, and cell lysis (and in exchange for H+); K+ shifts into cells with insulin and β-agonists. ECF = extracellular fluid; ICF = intracellular fluid.

B.  Renal regulation of K+ balance (Figure 5.14)
■ K+ is filtered, reabsorbed, and secreted by the nephron.
■ K+ balance is achieved when urinary excretion of K+ exactly equals intake of K+ in the diet.
■ K+ excretion can vary widely from 1% to 110% of the filtered load, depending on dietary K+ intake, aldosterone levels, and acid–base status.
1.  Glomerular capillaries
■ Filtration occurs freely across the glomerular capillaries. Therefore, TF/PK+ in Bowman space is 1.0.
2.  Proximal tubule
■ reabsorbs 67% of the filtered K+ along with Na+ and H2O.
3.  Thick ascending limb of the loop of Henle
■ reabsorbs 20% of the filtered K+.
■ Reabsorption involves the Na+–K+–2Cl− cotransporter in the luminal membrane of cells in the thick ascending limb (see Figure 5.11).
4.  Distal tubule and collecting duct
■ either reabsorb or secrete K+, depending on dietary K+ intake.
a.  Reabsorption of K+
■ involves an H+,K+-ATPase in the luminal membrane of the α-intercalated cells.
■ occurs only on a low-K+ diet (K+ depletion). Under these conditions, K+ excretion can be as low as 1% of the filtered load because the kidney conserves as much K+ as possible.
b.  Secretion of K+
■ occurs in the principal cells.
■ is variable and accounts for the wide range of urinary K+ excretion.
■ depends on factors such as dietary K+, aldosterone levels, acid–base status, and urine flow rate.

TABLE 5.4 Shifts of K+ between ECF and ICF

Causes of Shift of K+ Out of Cells → Hyperkalemia | Causes of Shift of K+ into Cells → Hypokalemia
Insulin deficiency | Insulin
β-Adrenergic antagonists | β-Adrenergic agonists
Acidosis (exchange of extracellular H+ for intracellular K+) | Alkalosis (exchange of intracellular H+ for extracellular K+)
Hyperosmolarity (H2O flows out of the cell; K+ diffuses out with H2O) | Hyposmolarity (H2O flows into the cell; K+ diffuses in with H2O)
Inhibitors of the Na+–K+ pump (e.g., digitalis) (when the pump is blocked, K+ is not taken up into cells) | —
Exercise | —
Cell lysis | —

ECF = extracellular fluid; ICF = intracellular fluid.

FIGURE 5.14 K+ handling along the nephron. Arrows indicate reabsorption or secretion of K+. Numbers indicate the percentage of the filtered load of K+ that is reabsorbed (67%, proximal tubule; 20%, thick ascending limb; distal reabsorption on a low-K+ diet only), secreted (variable, depending on dietary K+, aldosterone, acid–base status, and flow rate), or excreted (1%–110%).

(1)  Mechanism of distal K+ secretion (Figure 5.15)
(a)  At the basolateral membrane, K+ is actively transported into the cell by the Na+–K+ pump. As in all cells, this mechanism maintains a high intracellular K+ concentration.
(b)  At the luminal membrane, K+ is passively secreted into the lumen through K+ channels. The magnitude of this passive secretion is determined by the chemical and electrical driving forces on K+ across the luminal membrane.
■ Maneuvers that increase the intracellular K+ concentration or decrease the luminal K+ concentration will increase K+ secretion by increasing the driving force.
■ Maneuvers that decrease the intracellular K+ concentration will decrease K+ secretion by decreasing the driving force.
(2)  Factors that change distal K+ secretion (see Figure 5.15 and Table 5.5)
■ Distal K+ secretion by the principal cells is increased when the electrochemical driving force for K+ across the luminal membrane is increased. Secretion is decreased when the electrochemical driving force is decreased.
(a)  Dietary K+
■ A diet high in K+ increases K+ secretion, and a diet low in K+ decreases K+ secretion.
■ On a high-K+ diet, intracellular K+ increases so that the driving force for K+ secretion also increases.
■ On a low-K+ diet, intracellular K+ decreases so that the driving force for K+ secretion decreases. Also, the α-intercalated cells are stimulated to reabsorb K+ by the H+,K+-ATPase.

FIGURE 5.15 Mechanism of K+ secretion in the principal cell of the distal tubule (secretion is modulated by dietary K+, aldosterone, acid–base status, and urine flow rate).
TABLE 5.5 Changes in Distal K+ Secretion

Causes of Increased Distal K+ Secretion | Causes of Decreased Distal K+ Secretion
High-K+ diet | Low-K+ diet
Hyperaldosteronism | Hypoaldosteronism
Alkalosis | Acidosis
Thiazide diuretics | K+-sparing diuretics
Loop diuretics | —
Luminal anions | —

(b)  Aldosterone
■ increases K+ secretion.
■ The mechanism involves increased Na+ entry into the cells across the luminal membrane and increased pumping of Na+ out of the cells by the Na+–K+ pump. Stimulation of the Na+–K+ pump simultaneously increases K+ uptake into the principal cells, increasing the intracellular K+ concentration and the driving force for K+ secretion. Aldosterone also increases the number of luminal membrane K+ channels.
■ Hyperaldosteronism increases K+ secretion and causes hypokalemia.
■ Hypoaldosteronism decreases K+ secretion and causes hyperkalemia.
(c)  Acid–base
■ Effectively, H+ and K+ exchange for each other across the basolateral cell membrane.
■ Acidosis decreases K+ secretion. The blood contains excess H+; therefore, H+ enters the cell across the basolateral membrane and K+ leaves the cell. As a result, the intracellular K+ concentration and the driving force for K+ secretion decrease.
■ Alkalosis increases K+ secretion. The blood contains too little H+; therefore, H+ leaves the cell across the basolateral membrane and K+ enters the cell. As a result, the intracellular K+ concentration and the driving force for K+ secretion increase.
(d)  Thiazide and loop diuretics
■ increase K+ secretion.
■ Diuretics that increase flow rate through the distal tubule and collecting ducts (e.g., thiazide diuretics, loop diuretics) cause dilution of the luminal K+ concentration, increasing the driving force for K+ secretion. Also, as a result of increased K+ secretion, these diuretics cause hypokalemia.
(e)  K+-sparing diuretics
■ decrease K+ secretion. If used alone, they cause hyperkalemia.
■ Spironolactone is an antagonist of aldosterone; triamterene and amiloride act directly on the principal cells.
■ The most important use of the K+-sparing diuretics is in combination with thiazide or loop diuretics to offset (reduce) urinary K+ losses.
(f)  Luminal anions
■ Excess anions (e.g., HCO3−) in the lumen cause an increase in K+ secretion by increasing the negativity of the lumen and increasing the driving force for K+ secretion.

VI.  Renal Regulation of Urea, Phosphate, Calcium, and Magnesium

A.  Urea
■ Urea is reabsorbed and secreted in the nephron by diffusion, either simple or facilitated, depending on the segment of the nephron.
■ Fifty percent of the filtered urea is reabsorbed in the proximal tubule by simple diffusion.
■ Urea is secreted into the thin descending limb of the loop of Henle by simple diffusion (from the high concentration of urea in the medullary interstitial fluid).
■ The distal tubule, cortical collecting ducts, and outer medullary collecting ducts are impermeable to urea; thus, no urea is reabsorbed by these segments.
■ ADH stimulates a facilitated diffusion transporter for urea (UT1) in the inner medullary collecting ducts. Urea reabsorption from the inner medullary collecting ducts contributes to urea recycling in the inner medulla and to the addition of urea to the corticopapillary osmotic gradient.
■ Urea excretion varies with urine flow rate. At high levels of water reabsorption (low urine flow rate), there is greater urea reabsorption and decreased urea excretion.
At low levels of water reabsorption (high urine flow rate), there is less urea reabsorption and increased urea excretion.

B.  Phosphate
■ Eighty-five percent of the filtered phosphate is reabsorbed in the proximal tubule by Na+–phosphate cotransport. Because distal segments of the nephron do not reabsorb phosphate, 15% of the filtered load is excreted in urine.
■ Parathyroid hormone (PTH) inhibits phosphate reabsorption in the proximal tubule by activating adenylate cyclase, generating cyclic AMP (cAMP), and inhibiting Na+–phosphate cotransport. Therefore, PTH causes phosphaturia and increased urinary cAMP.
■ Phosphate is a urinary buffer for H+; excretion of H2PO4− is called titratable acid.

C.  Calcium (Ca2+)
■ Sixty percent of the plasma Ca2+ is filtered across the glomerular capillaries.
■ Together, the proximal tubule and thick ascending limb reabsorb more than 90% of the filtered Ca2+ by passive processes that are coupled to Na+ reabsorption.
■ Loop diuretics (e.g., furosemide) cause increased urinary Ca2+ excretion. Because Ca2+ reabsorption is linked to Na+ reabsorption in the loop of Henle, inhibiting Na+ reabsorption with a loop diuretic also inhibits Ca2+ reabsorption. If volume is replaced, loop diuretics can be used in the treatment of hypercalcemia.
■ Together, the distal tubule and collecting duct reabsorb 8% of the filtered Ca2+ by an active process.
1.  PTH increases Ca2+ reabsorption by activating adenylate cyclase in the distal tubule.
2.  Thiazide diuretics increase Ca2+ reabsorption in the early distal tubule and therefore decrease Ca2+ excretion. For this reason, thiazides are used in the treatment of idiopathic hypercalciuria.

D.  Magnesium (Mg2+)
■ is reabsorbed in the proximal tubule, thick ascending limb of the loop of Henle, and distal tubule.
■ In the thick ascending limb, Mg2+ and Ca2+ compete for reabsorption; therefore, hypercalcemia causes an increase in Mg2+ excretion (by inhibiting Mg2+ reabsorption). Likewise, hypermagnesemia causes an increase in Ca2+ excretion (by inhibiting Ca2+ reabsorption).

VII.  Concentration and Dilution of Urine

A.  Regulation of plasma osmolarity
■ is accomplished by varying the amount of water excreted relative to the amount of solute excreted (i.e., by varying urine osmolarity).
1.  Response to water deprivation (Figure 5.16)
2.  Response to water intake (Figure 5.17)

FIGURE 5.16 Responses to water deprivation: water deprivation → increased plasma osmolarity → stimulation of osmoreceptors in the anterior hypothalamus → increased secretion of ADH from the posterior pituitary → increased water permeability of the late distal tubule and collecting duct → increased water reabsorption → increased urine osmolarity and decreased urine volume → decreased plasma osmolarity toward normal. ADH = antidiuretic hormone.

FIGURE 5.17 Responses to water intake: water intake → decreased plasma osmolarity → inhibition of osmoreceptors in the anterior hypothalamus → decreased secretion of ADH from the posterior pituitary → decreased water permeability of the late distal tubule and collecting duct → decreased water reabsorption → decreased urine osmolarity and increased urine volume → increased plasma osmolarity toward normal. ADH = antidiuretic hormone.

B.  Production of concentrated urine (Figure 5.18)
■ is also called hyperosmotic urine, in which urine osmolarity > blood osmolarity.
■ is produced when circulating ADH levels are high (e.g., water deprivation, volume depletion, SIADH).
1.  Corticopapillary osmotic gradient—high ADH
■ is the gradient of osmolarity from the cortex (300 mOsm/L) to the papilla (1200 mOsm/L) and is composed primarily of NaCl and urea.
■ is established by countercurrent multiplication and urea recycling.
■ is maintained by countercurrent exchange in the vasa recta.
a.  Countercurrent multiplication in the loop of Henle
■ depends on NaCl reabsorption in the thick ascending limb and countercurrent flow in the descending and ascending limbs of the loop of Henle.
■ is augmented by ADH, which stimulates NaCl reabsorption in the thick ascending limb. Therefore, the presence of ADH increases the size of the corticopapillary osmotic gradient.
b.  Urea recycling from the inner medullary collecting ducts into the medullary interstitial fluid also is augmented by ADH (by stimulating the UT1 transporter).
c.  Vasa recta are the capillaries that supply the loop of Henle. They maintain the corticopapillary gradient by serving as osmotic exchangers. Vasa recta blood equilibrates osmotically with the interstitial fluid of the medulla and papilla.
2.  Proximal tubule—high ADH
■ The osmolarity of the glomerular filtrate is identical to that of plasma (300 mOsm/L).
■ Two-thirds of the filtered H2O is reabsorbed isosmotically (with Na+, Cl−, HCO3−, glucose, amino acids, and so forth) in the proximal tubule.
■ TF/Posm = 1.0 throughout the proximal tubule because H2O is reabsorbed isosmotically with solute.
3.  Thick ascending limb of the loop of Henle—high ADH
■ is called the diluting segment.
■ reabsorbs NaCl by the Na+–K+–2Cl− cotransporter.
■ is impermeable to H2O. Therefore, H2O is not reabsorbed with NaCl, and the tubular fluid becomes dilute.
■ The fluid that leaves the thick ascending limb has an osmolarity of 100 mOsm/L and TF/Posm < 1.0 as a result of the dilution process.
4.  Early distal tubule—high ADH
■ is called the cortical diluting segment.
■ Like the thick ascending limb, the early distal tubule reabsorbs NaCl but is impermeable to water. Consequently, tubular fluid is further diluted.
5.  Late distal tubule—high ADH
■ ADH increases the H2O permeability of the principal cells of the late distal tubule.
■ H2O is reabsorbed from the tubule until the osmolarity of distal tubular fluid equals that of the surrounding interstitial fluid in the renal cortex (300 mOsm/L).
■ TF/Posm = 1.0 at the end of the distal tubule because osmotic equilibration occurs in the presence of ADH.
6.  Collecting ducts—high ADH
■ As in the late distal tubule, ADH increases the H2O permeability of the principal cells of the collecting ducts.
■ As tubular fluid flows through the collecting ducts, it passes through the corticopapillary gradient (regions of increasingly higher osmolarity), which was previously established by countercurrent multiplication and urea recycling.
■ H2O is reabsorbed from the collecting ducts until the osmolarity of tubular fluid equals that of the surrounding interstitial fluid.
■ The osmolarity of the final urine equals that at the bend of the loop of Henle and the tip of the papilla (1200 mOsm/L).
■ TF/Posm > 1.0 because osmotic equilibration occurs with the corticopapillary gradient in the presence of ADH.

C.  Production of dilute urine (Figure 5.19)
■ is also called hyposmotic urine, in which urine osmolarity < blood osmolarity.
■ is produced when circulating levels of ADH are low (e.g., water intake, central diabetes insipidus) or when ADH is ineffective (nephrogenic diabetes insipidus).
1.  Corticopapillary osmotic gradient—no ADH
■ is smaller than in the presence of ADH because ADH stimulates both countercurrent multiplication and urea recycling.

FIGURE 5.18 Mechanisms for producing hyperosmotic (concentrated) urine in the presence of antidiuretic hormone (ADH). Numbers indicate osmolarity (300 mOsm/L in the cortex rising to 1200 mOsm/L at the papilla; final urine 1200 mOsm/L). Heavy arrows indicate water reabsorption. The thick outline shows the water-impermeable segments of the nephron. (Adapted with permission from Valtin H. Renal Function. 3rd ed. Boston: Little, Brown; 1995:158.)

FIGURE 5.19 Mechanisms for producing hyposmotic (dilute) urine in the absence of antidiuretic hormone (ADH). Numbers indicate osmolarity (tubular fluid is progressively diluted, with final urine as low as 50 mOsm/L). Heavy arrow indicates water reabsorption. The thick outline shows the water-impermeable segments of the nephron. (Adapted with permission from Valtin H. Renal Function. 3rd ed. Boston: Little, Brown; 1995:159.)

2.  Proximal tubule—no ADH
■ As in the presence of ADH, two-thirds of the filtered water is reabsorbed isosmotically.
■ TF/Posm = 1.0 throughout the proximal tubule.
3.  Thick ascending limb of the loop of Henle—no ADH
■ As in the presence of ADH, NaCl is reabsorbed without water, and the tubular fluid becomes dilute (although not quite as dilute as in the presence of ADH).
■ TF/Posm < 1.0.
4.  Early distal tubule—no ADH
■ As in the presence of ADH, NaCl is reabsorbed without H2O, and the tubular fluid is further diluted.
■ TF/Posm < 1.0.
5.  Late distal tubule and collecting ducts—no ADH
■ In the absence of ADH, the cells of the late distal tubule and collecting ducts are impermeable to H2O.
■ Thus, even though the tubular fluid flows through the corticopapillary osmotic gradient, osmotic equilibration does not occur.
■ The osmolarity of the final urine will be dilute, with an osmolarity as low as 50 mOsm/L.
■ TF/Posm < 1.0.

D.  Free-water clearance (CH2O)
■ is used to estimate the ability to concentrate or dilute the urine.
■ Free water, or solute-free water, is produced in the diluting segments of the kidney (i.e., the thick ascending limb and early distal tubule), where NaCl is reabsorbed and free water is left behind in the tubular fluid.
■ In the absence of ADH, this solute-free water is excreted and CH2O is positive.
■ In the presence of ADH, this solute-free water is not excreted but is reabsorbed by the late distal tubule and collecting ducts, and CH2O is negative.
1.  Calculation of CH2O

CH2O = V − Cosm

where:
CH2O = free-water clearance (mL/min)
V = urine flow rate (mL/min)
Cosm = osmolar clearance (Uosm × V / Posm) (mL/min)
■ Example: If the urine flow rate is 10 mL/min, urine osmolarity is 100 mOsm/L, and plasma osmolarity is 300 mOsm/L, what is the free-water clearance (see the code sketch below)?

CH2O = V − Cosm
     = 10 mL/min − (100 mOsm/L × 10 mL/min / 300 mOsm/L)
     = 10 mL/min − 3.33 mL/min
     = +6.7 mL/min
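A minimal sketch of the free-water clearance calculation above; the function name is illustrative, and the numbers reproduce the worked example.

```python
# Free-water clearance: CH2O = V - Cosm, where Cosm = Uosm x V / Posm.
# Positive CH2O means dilute urine (low ADH); negative means concentrated.

def free_water_clearance(v_ml_per_min, u_osm, p_osm):
    c_osm = u_osm * v_ml_per_min / p_osm
    return v_ml_per_min - c_osm

print(f"{free_water_clearance(10, 100, 300):+.1f} mL/min")   # +6.7 mL/min
```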
2.  Urine that is isosmotic to plasma (isosthenuric)
■ CH2O is zero.
■ is produced during treatment with a loop diuretic, which inhibits NaCl reabsorption in the thick ascending limb, inhibiting both dilution in the thick ascending limb and production of the corticopapillary osmotic gradient. Therefore, the urine cannot be diluted during high water intake (because a diluting segment is inhibited) or concentrated during water deprivation (because the corticopapillary gradient has been abolished).
3.  Urine that is hyposmotic to plasma (low ADH)
■ CH2O is positive.
■ is produced with high water intake (in which ADH release from the posterior pituitary is suppressed), central diabetes insipidus (in which pituitary ADH is insufficient), or nephrogenic diabetes insipidus (in which the collecting ducts are unresponsive to ADH).
4.  Urine that is hyperosmotic to plasma (high ADH)
■ CH2O is negative.
■ is produced in water deprivation (ADH release from the pituitary is stimulated) or SIADH.

E.  Clinical disorders related to the concentration or dilution of urine (Table 5.6)

TABLE 5.6 Summary of ADH Pathophysiology

Condition | Serum ADH | Serum Osmolarity/Serum [Na+] | Urine Osmolarity | Urine Flow Rate | CH2O
Primary polydipsia | ↓ | Decreased | Hyposmotic | High | Positive
Central diabetes insipidus | ↓ | Increased (because of excretion of too much H2O) | Hyposmotic | High | Positive
Nephrogenic diabetes insipidus | ↑ (because of increased plasma osmolarity) | Increased (because of excretion of too much H2O) | Hyposmotic | High | Positive
Water deprivation | ↑ | High–normal | Hyperosmotic | Low | Negative
SIADH | ↑↑ | Decreased (because of reabsorption of too much H2O) | Hyperosmotic | Low | Negative

ADH = antidiuretic hormone; CH2O = free-water clearance; SIADH = syndrome of inappropriate antidiuretic hormone.

VIII.  Renal Hormones
■ See Table 5.7 for a summary of renal hormones (see Chapter 7 for a discussion of hormones).

TABLE 5.7 Summary of Hormones That Act on the Kidney

Hormone | Stimulus for Secretion | Time Course | Mechanism of Action | Actions on the Kidneys
PTH | ↓ plasma [Ca2+] | Fast | Basolateral receptor → adenylate cyclase → cAMP (→ urine) | ↓ phosphate reabsorption (proximal tubule); ↑ Ca2+ reabsorption (distal tubule); stimulates 1α-hydroxylase (proximal tubule)
ADH | ↑ plasma osmolarity; ↓ blood volume | Fast | Basolateral V2 receptor → adenylate cyclase → cAMP (Note: V1 receptors are on blood vessels; mechanism is Ca2+–IP3) | ↑ H2O permeability (late distal tubule and collecting duct principal cells)
Aldosterone | ↓ blood volume (via renin–angiotensin II); ↑ plasma [K+] | Slow | New protein synthesis | ↑ Na+ reabsorption (ENaC, distal tubule principal cells); ↑ K+ secretion (distal tubule principal cells); ↑ H+ secretion (distal tubule α-intercalated cells)
ANP | ↑ atrial pressure | Fast | Guanylate cyclase → cGMP | ↑ GFR; ↓ Na+ reabsorption
Angiotensin II | ↓ blood volume (via renin) | Fast | — | ↑ Na+–H+ exchange and HCO3− reabsorption (proximal tubule)

ADH = antidiuretic hormone; ANP = atrial natriuretic peptide; cAMP = cyclic adenosine monophosphate; cGMP = cyclic guanosine monophosphate; GFR = glomerular filtration rate; PTH = parathyroid hormone; ENaC = epithelial Na+ channel.

IX.  Acid–Base Balance

A.  Acid production
■ Two types of acid are produced in the body: volatile acid and nonvolatile acids.
1.  Volatile acid
■ is CO2.
■ is produced from the aerobic metabolism of cells.
■ CO2 combines with H2O to form the weak acid H2CO3, which dissociates into H+ and HCO3− by the following reactions:

CO2 + H2O ↔ H2CO3 ↔ H+ + HCO3−

■ Carbonic anhydrase, which is present in most cells, catalyzes the reversible reaction between CO2 and H2O.
2.  Nonvolatile acids
■ are also called fixed acids.
■ include sulfuric acid (a product of protein catabolism) and phosphoric acid (a product of phospholipid catabolism).
■ are normally produced at a rate of 40 to 60 mmoles/day.
■ Other fixed acids that may be overproduced in disease or may be ingested include ketoacids, lactic acid, and salicylic acid.

B.  Buffers
■ prevent a change in pH when H+ ions are added to or removed from a solution.
■ are most effective within 1.0 pH unit of the pK of the buffer (i.e., in the linear portion of the titration curve).
1.  Extracellular buffers
a.  The major extracellular buffer is HCO3−, which is produced from CO2 and H2O.
■ The pK of the CO2/HCO3− buffer pair is 6.1.
b.  Phosphate is a minor extracellular buffer.
■ The pK of the H2PO4−/HPO4−2 buffer pair is 6.8.
■ Phosphate is most important as a urinary buffer; excretion of H+ as H2PO4− is called titratable acid.
2.  Intracellular buffers
a.  Organic phosphates (e.g., AMP, ADP, ATP, 2,3-diphosphoglycerate [DPG])
b.  Proteins
■ Imidazole and α-amino groups on proteins have pKs that are within the physiologic pH range.
■ Hemoglobin is a major intracellular buffer.
■ In the physiologic pH range, deoxyhemoglobin is a better buffer than oxyhemoglobin.
3.  Using the Henderson-Hasselbalch equation to calculate pH

pH = pK + log ([A−]/[HA])

where:
pH = −log10 [H+] (pH units)
pK = −log10 equilibrium constant (pH units)
[A−] = concentration of the base form of the buffer (mM)
[HA] = concentration of the acid form of the buffer (mM)
■ A−, the base form of the buffer, is the H+ acceptor.
■ HA, the acid form of the buffer, is the H+ donor.
■ When the concentrations of A− and HA are equal, the pH of the solution equals the pK of the buffer, as calculated by the Henderson-Hasselbalch equation.
■ Example: The pK of the H2PO4−/HPO4−2 buffer pair is 6.8. What are the relative concentrations of H2PO4− and HPO4−2 in a urine sample that has a pH of 4.8 (see the code sketch below)?

pH = pK + log ([HPO4−2]/[H2PO4−])
4.8 = 6.8 + log ([HPO4−2]/[H2PO4−])
log ([HPO4−2]/[H2PO4−]) = −2.0
[HPO4−2]/[H2PO4−] = 0.01

For this buffer pair, HPO4−2 is A− and H2PO4− is HA. Thus, the Henderson-Hasselbalch equation can be used to calculate that the concentration of H2PO4− is 100 times that of HPO4−2 in a urine sample of pH 4.8.
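A minimal sketch of the Henderson-Hasselbalch arithmetic above; the function names are illustrative, and the numbers reproduce the urine phosphate example.

```python
# Henderson-Hasselbalch: pH = pK + log10([A-]/[HA]), and its inverse.

from math import log10

def ph(pk, base, acid):
    return pk + log10(base / acid)

def base_to_acid_ratio(ph_value, pk):
    return 10 ** (ph_value - pk)

ratio = base_to_acid_ratio(4.8, 6.8)               # [HPO4 2-]/[H2PO4-] = 0.01
print(f"[H2PO4-] is {1 / ratio:.0f}x [HPO4 2-]")   # 100x, as in the text
print(f"check: pH = {ph(6.8, ratio, 1.0):.1f}")    # 4.8
```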
C. Renal acid–base
1. Reabsorption of filtered HCO3− (Figure 5.21)
■ occurs primarily in the proximal tubule.
a. Key features of reabsorption of filtered HCO3−
(1) H+ and HCO3− are produced in the proximal tubule cells from CO2 and H2O. CO2 and H2O combine to form H2CO3, catalyzed by intracellular carbonic anhydrase; H2CO3 dissociates into H+ and HCO3−. H+ is secreted into the lumen via the Na+–H+ exchange mechanism in the luminal membrane. The HCO3− is reabsorbed.
(2) In the lumen, the secreted H+ combines with filtered HCO3− to form H2CO3, which dissociates into CO2 and H2O, catalyzed by brush border carbonic anhydrase. CO2 and H2O diffuse into the cell to start the cycle again.
(3) The process results in net reabsorption of filtered HCO3−. However, it does not result in net secretion of H+.
b. Regulation of reabsorption of filtered HCO3−
(1) Filtered load
■ Increases in the filtered load of HCO3− result in increased rates of HCO3− reabsorption. However, if the plasma HCO3− concentration becomes very high (e.g., metabolic alkalosis), the filtered load will exceed the reabsorptive capacity, and HCO3− will be excreted in the urine.
(2) Pco2
■ Increases in Pco2 result in increased rates of HCO3− reabsorption because the supply of intracellular H+ for secretion is increased. This mechanism is the basis for the renal compensation for respiratory acidosis.
■ Decreases in Pco2 result in decreased rates of HCO3− reabsorption because the supply of intracellular H+ for secretion is decreased. This mechanism is the basis for the renal compensation for respiratory alkalosis.
(3) ECF volume
■ ECF volume expansion results in decreased HCO3− reabsorption.
■ ECF volume contraction results in increased HCO3− reabsorption (contraction alkalosis).
(4) Angiotensin II
■ stimulates Na+–H+ exchange and thus increases HCO3− reabsorption, contributing to the contraction alkalosis that occurs secondary to ECF volume contraction.

Figure 5.20 Titration curve for a weak acid (HA) and its conjugate base (A−).
Figure 5.21 Mechanism for reabsorption of filtered HCO3− in the proximal tubule. CA = carbonic anhydrase.

2. Excretion of fixed H+
■ Fixed H+ produced from the catabolism of protein and phospholipid is excreted by two mechanisms, titratable acid and NH4+.
a. Excretion of H+ as titratable acid (H2PO4−) (Figure 5.22)
■ The amount of H+ excreted as titratable acid depends on the amount of urinary buffer present (usually HPO4−2) and the pK of the buffer.
(1) H+ and HCO3− are produced in the intercalated cells from CO2 and H2O. The H+ is secreted into the lumen by an H+-ATPase, and the HCO3− is reabsorbed into the blood ("new" HCO3−). In the urine, the secreted H+ combines with filtered HPO4−2 to form H2PO4−, which is excreted as titratable acid. The H+-ATPase is increased by aldosterone.
(2) This process results in net secretion of H+ and net reabsorption of newly synthesized HCO3−.
(3) As a result of H+ secretion, the pH of urine becomes progressively lower. The minimum urinary pH is 4.4.
(4) The amount of H+ excreted as titratable acid is determined by the amount of urinary buffer and the pK of the buffer.
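■ Example (illustrative, combining the pK of 6.8 from IX B 1 b with the minimum urinary pH of 4.4): at the minimum urinary pH,

    [H2PO4−]/[HPO4−2] = 10^(6.8 − 4.4) = 10^2.4 ≈ 250

so essentially all of the urinary phosphate buffer is in the H2PO4− form (i.e., fully titrated); further H+ excretion at this pH depends on NH4+ (see b, below).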
b. Excretion of H+ as NH4+ (Figure 5.23)
■ The amount of H+ excreted as NH4+ depends on both the amount of NH3 synthesized by renal cells and the urine pH.
(1) NH3 is produced in renal cells from glutamine. It diffuses down its concentration gradient from the cells into the lumen.
(2) H+ and HCO3− are produced in the intercalated cells from CO2 and H2O. The H+ is secreted into the lumen via an H+-ATPase and combines with NH3 to form NH4+, which is excreted (diffusion trapping). The HCO3− is reabsorbed into the blood ("new" HCO3−).
(3) The lower the pH of the tubular fluid, the greater the excretion of H+ as NH4+; at low urine pH, there is more NH4+ relative to NH3 in the urine, thus increasing the gradient for NH3 diffusion.
(4) In acidosis, an adaptive increase in NH3 synthesis occurs and aids in the excretion of excess H+.
(5) Hyperkalemia inhibits NH3 synthesis, which produces a decrease in H+ excretion as NH4+ (type 4 renal tubular acidosis [RTA]). For example, hypoaldosteronism causes hyperkalemia and thus also causes type 4 RTA. Conversely, hypokalemia stimulates NH3 synthesis, which produces an increase in H+ excretion.

Figure 5.22 Mechanism for excretion of H+ as titratable acid. CA = carbonic anhydrase.
Figure 5.23 Mechanism for excretion of H+ as NH4+. CA = carbonic anhydrase.

D. Acid–base disorders (Tables 5.8 and 5.9 and Figure 5.24)
■ The expected compensatory responses to simple acid–base disorders can be calculated as shown in Table 5.10. If the actual response equals the calculated (predicted) response, then one acid–base disorder is present. If the actual response differs from the calculated response, then more than one acid–base disorder is present.
1. Metabolic acidosis
a. Overproduction or ingestion of fixed acid or loss of base produces a decrease in arterial [HCO3−]. This decrease is the primary disturbance in metabolic acidosis.
b. Decreased HCO3− concentration causes a decrease in blood pH (acidemia).
c. Acidemia causes hyperventilation (Kussmaul breathing), which is the respiratory compensation for metabolic acidosis.
d. Correction of metabolic acidosis consists of increased excretion of the excess fixed H+ as titratable acid and NH4+, and increased reabsorption of "new" HCO3−, which replenishes the blood HCO3− concentration.
■ In chronic metabolic acidosis, an adaptive increase in NH3 synthesis aids in the excretion of excess H+.
e. Serum anion gap = [Na+] − ([Cl−] + [HCO3−]) (Figure 5.25)
■ The serum anion gap represents unmeasured anions in serum. These unmeasured anions include phosphate, citrate, sulfate, and protein.
■ The normal value of the serum anion gap is 12 mEq/L (range, 8 to 16 mEq/L).
■ In metabolic acidosis, the serum [HCO3−] decreases. For electroneutrality, the concentration of another anion must increase to replace HCO3−. That anion can be Cl− or it can be an unmeasured anion.
(1) The serum anion gap is increased if the concentration of an unmeasured anion (e.g., phosphate, lactate, β-hydroxybutyrate, and formate) is increased to replace HCO3−.
(2) The serum anion gap is normal if the concentration of Cl− is increased to replace HCO3− (hyperchloremic metabolic acidosis).

Table 5.8 Summary of Acid–Base Disorders (direction of change of CO2, H+, and HCO3− in CO2 + H2O ↔ H+ + HCO3−; heavy arrows, ⇑ and ⇓, indicate the primary disturbance)
Metabolic acidosis: CO2 ↓ (respiratory compensation); H+ ↑; HCO3− ⇓; respiratory compensation: hyperventilation.
Metabolic alkalosis: CO2 ↑ (respiratory compensation); H+ ↓; HCO3− ⇑; respiratory compensation: hypoventilation.
Respiratory acidosis: CO2 ⇑; H+ ↑; HCO3− ↑; respiratory compensation: none; renal compensation: ↑ H+ excretion, ↑ HCO3− reabsorption.
Respiratory alkalosis: CO2 ⇓; H+ ↓; HCO3− ↓; respiratory compensation: none; renal compensation: ↓ H+ excretion, ↓ HCO3− reabsorption.
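The anion gap calculation in e is easily mechanized. The following minimal Python sketch (illustrative; the serum values shown are hypothetical normals, not from a case in the text) applies the definition and the normal range given above:

    def serum_anion_gap(na, cl, hco3):
        """Serum anion gap = [Na+] - ([Cl-] + [HCO3-]); all values in mEq/L."""
        return na - (cl + hco3)

    gap = serum_anion_gap(na=140, cl=104, hco3=24)
    print(gap)             # 12 mEq/L
    print(8 <= gap <= 16)  # True -> within the normal range; no excess unmeasured anions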
2. Metabolic alkalosis
a. Loss of fixed H+ or gain of base produces an increase in arterial [HCO3−]. This increase is the primary disturbance in metabolic alkalosis.
■ For example, in vomiting, H+ is lost from the stomach, HCO3− remains behind in the blood, and the [HCO3−] increases.
b. Increased HCO3− concentration causes an increase in blood pH (alkalemia).
c. Alkalemia causes hypoventilation, which is the respiratory compensation for metabolic alkalosis.
d. Correction of metabolic alkalosis consists of increased excretion of HCO3− because the filtered load of HCO3− exceeds the ability of the renal tubule to reabsorb it.
■ If metabolic alkalosis is accompanied by ECF volume contraction (e.g., vomiting), the reabsorption of HCO3− increases (secondary to ECF volume contraction and activation of the renin–angiotensin II–aldosterone system), worsening the metabolic alkalosis (i.e., contraction alkalosis).

Table 5.9 Causes of Acid–Base Disorders
Metabolic acidosis:
■ Ketoacidosis: accumulation of β-OH-butyric acid and acetoacetic acid; ↑ anion gap.
■ Lactic acidosis: accumulation of lactic acid during hypoxia; ↑ anion gap.
■ Chronic renal failure: failure to excrete H+ as titratable acid and NH4+; ↑ anion gap.
■ Salicylate intoxication: also causes respiratory alkalosis; ↑ anion gap.
■ Methanol/formaldehyde intoxication: produces formic acid; ↑ anion gap.
■ Ethylene glycol intoxication: produces glycolic and oxalic acids; ↑ anion gap.
■ Diarrhea: GI loss of HCO3−; normal anion gap.
■ Type 2 RTA: renal loss of HCO3−; normal anion gap.
■ Type 1 RTA: failure to excrete titratable acid and NH4+; failure to acidify urine; normal anion gap.
■ Type 4 RTA: hypoaldosteronism; failure to excrete NH4+; hyperkalemia caused by lack of aldosterone inhibits NH3 synthesis; normal anion gap.
Metabolic alkalosis:
■ Vomiting: loss of gastric H+ leaves HCO3− behind in blood; worsened by volume contraction; hypokalemia; may have ↑ anion gap because of production of ketoacids (starvation).
■ Hyperaldosteronism: increased H+ secretion by distal tubule; increased new HCO3− reabsorption.
■ Loop or thiazide diuretics: volume contraction alkalosis.
Respiratory acidosis:
■ Opiates, sedatives, anesthetics: inhibition of medullary respiratory center.
■ Guillain-Barré syndrome, polio, ALS, multiple sclerosis: weakening of respiratory muscles.
■ Airway obstruction: ↓ CO2 exchange in lungs.
■ Adult respiratory distress syndrome, COPD: ↓ CO2 exchange in lungs.
Respiratory alkalosis:
■ Pneumonia, pulmonary embolus: hypoxemia causes ↑ ventilation rate.
■ High altitude: hypoxemia causes ↑ ventilation rate.
■ Psychogenic.
■ Salicylate intoxication: direct stimulation of medullary respiratory center; also causes metabolic acidosis.
ALS = amyotrophic lateral sclerosis; COPD = chronic obstructive pulmonary disease; GI = gastrointestinal; RTA = renal tubular acidosis.
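■ Example (illustrative, with hypothetical values; the compensation rule is taken from Table 5.10, below): if vomiting raises [HCO3−] from 24 to 34 mEq/L, the predicted respiratory compensation is a rise in Pco2 of

    0.7 mm Hg per mEq/L × 10 mEq/L = 7 mm Hg, i.e., predicted Pco2 ≈ 40 + 7 = 47 mm Hg

assuming a normal baseline Pco2 of 40 mm Hg. A measured Pco2 far from this predicted value would suggest a second, superimposed acid–base disorder.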
3. Respiratory acidosis
■ is caused by decreased alveolar ventilation and retention of CO2.
a. Increased arterial Pco2, which is the primary disturbance, causes an increase in [H+] and [HCO3−] by mass action.
b. There is no respiratory compensation for respiratory acidosis.
c. Renal compensation consists of increased excretion of H+ as titratable acid and NH4+ and increased reabsorption of "new" HCO3−. This process is aided by the increased Pco2, which supplies more H+ to the renal cells for secretion. The resulting increase in serum [HCO3−] helps to normalize the pH.
■ In acute respiratory acidosis, renal compensation has not yet occurred.
■ In chronic respiratory acidosis, renal compensation (increased HCO3− reabsorption) has occurred. Thus, arterial pH is increased toward normal (i.e., a compensation).

Figure 5.24 Acid–base map with values for simple acid–base disorders superimposed. The relationships are shown between arterial Pco2, [HCO3−], and pH. The ellipse in the center shows the normal range of values. Shaded areas show the range of values associated with simple acid–base disorders. Two shaded areas are shown for each respiratory disorder: one for the acute phase and one for the chronic phase. (Adapted with permission from Cohen JJ, Kassirer JP. Acid/Base. Boston: Little, Brown; 1982.)

4. Respiratory alkalosis
■ is caused by increased alveolar ventilation and loss of CO2.
a. Decreased arterial Pco2, which is the primary disturbance, causes a decrease in [H+] and [HCO3−] by mass action.
b. There is no respiratory compensation for respiratory alkalosis.
c. Renal compensation consists of decreased excretion of H+ as titratable acid and NH4+ and decreased reabsorption of "new" HCO3−. This process is aided by the decreased Pco2, which causes a deficit of H+ in the renal cells for secretion. The resulting decrease in serum [HCO3−] helps to normalize the pH.
■ In acute respiratory alkalosis, renal compensation has not yet occurred.
■ In chronic respiratory alkalosis, renal compensation (decreased HCO3− reabsorption) has occurred. Thus, arterial pH is decreased toward normal (i.e., a compensation).
d. Symptoms of hypocalcemia (e.g., tingling, numbness, muscle spasms) may occur because H+ and Ca2+ compete for binding sites on plasma proteins. Decreased [H+] causes increased protein binding of Ca2+ and decreased free ionized Ca2+.

Table 5.10 Calculating Compensatory Responses to Simple Acid–Base Disorders
Metabolic acidosis: primary disturbance ↓ [HCO3−]; compensation ↓ Pco2; predicted response: 1 mEq/L decrease in HCO3− → 1.3 mm Hg decrease in Pco2.
Metabolic alkalosis: primary disturbance ↑ [HCO3−]; compensation ↑ Pco2; predicted response: 1 mEq/L increase in HCO3− → 0.7 mm Hg increase in Pco2.
Acute respiratory acidosis: primary disturbance ↑ Pco2; compensation ↑ [HCO3−]; predicted response: 1 mm Hg increase in Pco2 → 0.1 mEq/L increase in HCO3−.
Chronic respiratory acidosis: primary disturbance ↑ Pco2; compensation ↑ [HCO3−]; predicted response: 1 mm Hg increase in Pco2 → 0.4 mEq/L increase in HCO3−.
Acute respiratory alkalosis: primary disturbance ↓ Pco2; compensation ↓ [HCO3−]; predicted response: 1 mm Hg decrease in Pco2 → 0.2 mEq/L decrease in HCO3−.
Chronic respiratory alkalosis: primary disturbance ↓ Pco2; compensation ↓ [HCO3−]; predicted response: 1 mm Hg decrease in Pco2 → 0.4 mEq/L decrease in HCO3−.

Figure 5.25 Serum anion gap. (Diagram: serum cations [Na+] balanced against anions [Cl−], [HCO3−], and the anion gap; unmeasured anions = protein, phosphate, citrate, sulfate.)
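The rules in Table 5.10 can be written out as a small calculation. The Python sketch below is illustrative (the baseline values of [HCO3−] = 24 mEq/L and Pco2 = 40 mm Hg are assumed normals, and the example change is hypothetical):

    # Predicted compensatory responses, per Table 5.10.
    # Values: change in the compensating variable per unit change in the
    # primary variable (HCO3- in mEq/L for metabolic disorders; Pco2 in
    # mm Hg for respiratory disorders).
    SLOPES = {
        "metabolic acidosis": 1.3,             # mm Hg fall in Pco2 per mEq/L fall in HCO3-
        "metabolic alkalosis": 0.7,            # mm Hg rise in Pco2 per mEq/L rise in HCO3-
        "acute respiratory acidosis": 0.1,     # mEq/L rise in HCO3- per mm Hg rise in Pco2
        "chronic respiratory acidosis": 0.4,
        "acute respiratory alkalosis": 0.2,    # mEq/L fall in HCO3- per mm Hg fall in Pco2
        "chronic respiratory alkalosis": 0.4,
    }

    def predicted_compensation(disorder, primary_delta):
        """Magnitude of the predicted change in the compensating variable."""
        return SLOPES[disorder] * abs(primary_delta)

    # Hypothetical chronic respiratory acidosis: Pco2 rises from 40 to 60 mm Hg.
    print(24 + predicted_compensation("chronic respiratory acidosis", 20))  # 32.0 mEq/L

If the measured value matches the prediction, a single (simple) disorder is present; a mismatch implies a mixed disorder, as stated in section D above.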
X. Diuretics (Table 5.11)

Table 5.11 Effects of Diuretics on the Nephron
Carbonic anhydrase inhibitors (acetazolamide): site of action: proximal tubule; mechanism: inhibition of carbonic anhydrase; major effect: ↑ HCO3− excretion.
Loop diuretics (furosemide, ethacrynic acid, bumetanide): site of action: thick ascending limb of the loop of Henle; mechanism: inhibition of Na+–K+–2Cl− cotransport; major effects: ↑ NaCl excretion; ↑ K+ excretion (↑ distal tubule flow rate); ↑ Ca2+ excretion (treat hypercalcemia); ↓ ability to concentrate urine (↓ corticopapillary gradient); ↓ ability to dilute urine (inhibition of diluting segment).
Thiazide diuretics (chlorothiazide, hydrochlorothiazide): site of action: early distal tubule (cortical diluting segment); mechanism: inhibition of Na+–Cl− cotransport; major effects: ↑ NaCl excretion; ↑ K+ excretion (↑ distal tubule flow rate); ↓ Ca2+ excretion (treatment of idiopathic hypercalciuria); ↓ ability to dilute urine (inhibition of cortical diluting segment); no effect on ability to concentrate urine.
K+-sparing diuretics (spironolactone, triamterene, amiloride): site of action: late distal tubule and collecting duct; mechanism: inhibition of Na+ reabsorption, inhibition of K+ secretion, inhibition of H+ secretion; major effects: ↑ Na+ excretion (small effect); ↓ K+ excretion (used in combination with loop or thiazide diuretics); ↓ H+ excretion.

XI. Integrative Examples
A. Hypoaldosteronism
1. Case study
■ A woman has a history of weakness, weight loss, orthostatic hypotension, increased pulse rate, and increased skin pigmentation. She has decreased serum [Na+], decreased serum osmolarity, increased serum [K+], and arterial blood gases consistent with metabolic acidosis.
2. Explanation of hypoaldosteronism
a. The lack of aldosterone has three direct effects on the kidney: decreased Na+ reabsorption, decreased K+ secretion, and decreased H+ secretion. As a result, there is ECF volume contraction (caused by decreased Na+ reabsorption), hyperkalemia (caused by decreased K+ secretion), and metabolic acidosis (caused by decreased H+ secretion).
b. The ECF volume contraction is responsible for this woman's orthostatic hypotension. The decreased arterial pressure produces an increased pulse rate via the baroreceptor mechanism.
c. The ECF volume contraction also stimulates ADH secretion from the posterior pituitary via volume receptors. ADH causes increased water reabsorption from the collecting ducts, which results in decreased serum [Na+] (hyponatremia) and decreased serum osmolarity. Thus, ADH released by a volume mechanism is "inappropriate" for the serum osmolarity in this case.
d. Hyperpigmentation is caused by adrenal insufficiency. Decreased levels of cortisol produce increased secretion of adrenocorticotropic hormone (ACTH) by negative feedback. ACTH has pigmenting effects similar to those of melanocyte-stimulating hormone.
B. Vomiting
1. Case study
■ A man is admitted to the hospital for evaluation of severe epigastric pain. He has had persistent nausea and vomiting for 4 days. Upper gastrointestinal (GI) endoscopy shows a pyloric ulcer with partial gastric outlet obstruction. He has orthostatic hypotension, decreased serum [K+], decreased serum [Cl−], arterial blood gases consistent with metabolic alkalosis, and decreased ventilation rate.
2. Responses to vomiting (Figure 5.26)
a. Loss of H+ from the stomach by vomiting causes increased blood [HCO3−] and metabolic alkalosis. Because Cl− is lost from the stomach along with H+, hypochloremia and ECF volume contraction occur.
b. The decreased ventilation rate is the respiratory compensation for metabolic alkalosis.

Figure 5.26 Metabolic alkalosis caused by vomiting. (Flow diagram: vomiting causes loss of gastric HCl, i.e., loss of fixed H+, generating the metabolic alkalosis; it also causes ECF volume contraction, which decreases renal perfusion pressure and increases angiotensin II and aldosterone, thereby increasing Na+–H+ exchange, K+ secretion, H+ secretion, and HCO3− reabsorption, maintaining the metabolic alkalosis and producing hypokalemia.) ECF = extracellular fluid.

c. ECF volume contraction is associated with decreased blood volume and decreased renal perfusion pressure. As a result, renin secretion is increased, production of angiotensin II is increased, and secretion of aldosterone is increased. Thus, the ECF volume contraction worsens the metabolic alkalosis because angiotensin II increases HCO3− reabsorption in the proximal tubule (contraction alkalosis).
d. The increased levels of aldosterone (secondary to ECF volume contraction) cause increased distal K+ secretion and hypokalemia. Increased aldosterone also causes increased distal H+ secretion, further worsening the metabolic alkalosis.
e. Treatment consists of NaCl infusion to correct ECF volume contraction (which is maintaining the metabolic alkalosis and causing hypokalemia) and administration of K+ to replace K+ lost in the urine.
C. Diarrhea
1. Case study
■ A man returns from a trip abroad with "traveler's diarrhea." He has weakness, weight loss, orthostatic hypotension, increased pulse rate, increased breathing rate, pale skin, a serum [Na+] of 132 mEq/L, a serum [Cl−] of 111 mEq/L, and a serum [K+] of 2.3 mEq/L. His arterial blood gases are pH, 7.25; Pco2, 24 mm Hg; HCO3−, 10.2 mEq/L.
2. Explanation of responses to diarrhea
a. Loss of HCO3− from the GI tract causes a decrease in the blood [HCO3−] and, according to the Henderson-Hasselbalch equation, a decrease in blood pH. Thus, this man has metabolic acidosis.
b. To maintain electroneutrality, the HCO3− lost from the body is replaced by Cl−, a measured anion; thus, there is a normal anion gap. The serum anion gap = [Na+] − ([Cl−] + [HCO3−]) = 132 − (111 + 10.2) = 10.8 mEq/L.
c. The increased breathing rate (hyperventilation) is the respiratory compensation for metabolic acidosis.
d. As a result of his diarrhea, this man has ECF volume contraction, which leads to decreases in blood volume and arterial pressure. The decrease in arterial pressure activates the baroreceptor reflex, resulting in increased sympathetic outflow to the heart and blood vessels. The increased pulse rate is a consequence of increased sympathetic activity in the sinoatrial (SA) node, and the pale skin is the result of cutaneous vasoconstriction.
e. ECF volume contraction also activates the renin–angiotensin–aldosterone system. Increased levels of aldosterone lead to increased distal K+ secretion and hypokalemia. Loss of K+ in diarrhea fluid also contributes to hypokalemia.
f. Treatment consists of replacing all fluid and electrolytes lost in diarrhea fluid and urine, including Na+, HCO3−, and K+.
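■ Illustrative cross-check (using the rule in Table 5.10 and assuming normal baselines of [HCO3−] = 24 mEq/L and Pco2 = 40 mm Hg): the fall in [HCO3−] is 24 − 10.2 ≈ 14 mEq/L, predicting a fall in Pco2 of about 1.3 × 14 ≈ 18 mm Hg, i.e., a predicted Pco2 of roughly 22 mm Hg. The measured Pco2 of 24 mm Hg is close to this prediction, consistent with a simple metabolic acidosis with appropriate respiratory compensation.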
Review Test

1. Secretion of K+ by the distal tubule will be decreased by
(A) metabolic alkalosis
(B) a high-K+ diet
(C) hyperaldosteronism
(D) spironolactone administration
(E) thiazide diuretic administration

2. Jared and Adam both weigh 70 kg. Jared drinks 2 L of distilled water, and Adam drinks 2 L of isotonic NaCl. As a result of these ingestions, Adam will have a
(A) greater change in intracellular fluid (ICF) volume
(B) higher positive free-water clearance (CH2O)
(C) greater change in plasma osmolarity
(D) higher urine osmolarity
(E) higher urine flow rate

Questions 3 and 4
A 45-year-old woman develops severe diarrhea while on vacation. She has the following arterial blood values:
pH = 7.25
Pco2 = 24 mm Hg
[HCO3−] = 10 mEq/L
Venous blood samples show decreased blood [K+] and a normal anion gap.

3. The correct diagnosis for this patient is
(A) metabolic acidosis
(B) metabolic alkalosis
(C) respiratory acidosis
(D) respiratory alkalosis
(E) normal acid–base status

4. Which of the following statements about this patient is correct?
(A) She is hypoventilating
(B) The decreased arterial [HCO3−] is a result of buffering of excess H+ by HCO3−
(C) The decreased blood [K+] is a result of exchange of intracellular H+ for extracellular K+
(D) The decreased blood [K+] is a result of increased circulating levels of aldosterone
(E) The decreased blood [K+] is a result of decreased circulating levels of antidiuretic hormone (ADH)

5. Use the values below to answer the following question.
Glomerular capillary hydrostatic pressure = 47 mm Hg
Bowman space hydrostatic pressure = 10 mm Hg
Bowman space oncotic pressure = 0 mm Hg
At what value of glomerular capillary oncotic pressure would glomerular filtration stop?
(A) 57 mm Hg
(B) 47 mm Hg
(C) 37 mm Hg
(D) 10 mm Hg
(E) 0 mm Hg

6. The reabsorption of filtered HCO3−
(A) results in reabsorption of less than 50% of the filtered load when the plasma concentration of HCO3− is 24 mEq/L
(B) acidifies tubular fluid to a pH of 4.4
(C) is directly linked to excretion of H+ as NH4+
(D) is inhibited by decreases in arterial Pco2
(E) can proceed normally in the presence of a renal carbonic anhydrase inhibitor

7. The following information was obtained in a 20-year-old college student who was participating in a research study in the Clinical Research Unit:
Plasma: [inulin] = 1 mg/mL; [X] = 2 mg/mL
Urine: [inulin] = 150 mg/mL; [X] = 100 mg/mL; urine flow rate = 1 mL/min
Assuming that X is freely filtered, which of the following statements is most correct?
(A) There is net secretion of X
(B) There is net reabsorption of X
(C) There is both reabsorption and secretion of X
(D) The clearance of X could be used to measure the glomerular filtration rate (GFR)
(E) The clearance of X is greater than the clearance of inulin

8. To maintain normal H+ balance, total daily excretion of H+ should equal the daily
(A) fixed acid production plus fixed acid ingestion
(B) HCO3− excretion
(C) HCO3− filtered load
(D) titratable acid excretion
(E) filtered load of H+

9. One gram of mannitol was injected into a woman. After equilibration, a plasma sample had a mannitol concentration of 0.08 g/L. During the equilibration period, 20% of the injected mannitol was excreted in the urine. The woman's
(A) extracellular fluid (ECF) volume is 1 L
(B) intracellular fluid (ICF) volume is 1 L
(C) ECF volume is 10 L
(D) ICF volume is 10 L
(E) interstitial volume is 12.5 L

10. A 58-year-old man is given a glucose tolerance test. In the test, the plasma glucose concentration is increased and glucose reabsorption and excretion are measured. When the plasma glucose concentration is higher than occurs at transport maximum (Tm), the
(A) clearance of glucose is zero
(B) excretion rate of glucose equals the filtration rate of glucose
(C) reabsorption rate of glucose equals the filtration rate of glucose
(D) excretion rate of glucose increases with increasing plasma glucose concentrations
(E) renal vein glucose concentration equals the renal artery glucose concentration

11. A negative free-water clearance (−CH2O) will occur in a person who
(A) drinks 2 L of distilled water in 30 minutes
(B) begins excreting large volumes of urine with an osmolarity of 100 mOsm/L after a severe head injury
(C) is receiving lithium treatment for depression and has polyuria that is unresponsive to the administration of antidiuretic hormone (ADH)
(D) has an oat cell carcinoma of the lung, and excretes urine with an osmolarity of 1,000 mOsm/L
12. A buffer pair (HA/A−) has a pK of 5.4. At a blood pH of 7.4, the concentration of HA is
(A) 1/100 that of A−
(B) 1/10 that of A−
(C) equal to that of A−
(D) 10 times that of A−
(E) 100 times that of A−

13. Which of the following would produce an increase in the reabsorption of isosmotic fluid in the proximal tubule?
(A) Increased filtration fraction
(B) Extracellular fluid (ECF) volume expansion
(C) Decreased peritubular capillary protein concentration
(D) Increased peritubular capillary hydrostatic pressure
(E) Oxygen deprivation

14. Which of the following substances or combinations of substances could be used to measure interstitial fluid volume?
(A) Mannitol
(B) D2O alone
(C) Evans blue
(D) Inulin and D2O
(E) Inulin and radioactive albumin

15. At plasma para-aminohippuric acid (PAH) concentrations below the transport maximum (Tm), PAH
(A) reabsorption is not saturated
(B) clearance equals inulin clearance
(C) secretion rate equals PAH excretion rate
(D) concentration in the renal vein is close to zero
(E) concentration in the renal vein equals PAH concentration in the renal artery

16. Compared with a person who ingests 2 L of distilled water, a person with water deprivation will have a
(A) higher free-water clearance (CH2O)
(B) lower plasma osmolarity
(C) lower circulating level of antidiuretic hormone (ADH)
(D) higher tubular fluid/plasma (TF/P) osmolarity in the proximal tubule
(E) higher rate of H2O reabsorption in the collecting ducts

17. Which of the following would cause an increase in both glomerular filtration rate (GFR) and renal plasma flow (RPF)?
(A) Hyperproteinemia
(B) A ureteral stone
(C) Dilation of the afferent arteriole
(D) Dilation of the efferent arteriole
(E) Constriction of the efferent arteriole

18. A patient has the following arterial blood values:
pH = 7.52
Pco2 = 20 mm Hg
[HCO3−] = 16 mEq/L
Which of the following statements about this patient is most likely to be correct?
(A) He is hypoventilating
(B) He has decreased ionized [Ca2+] in blood
(C) He has almost complete respiratory compensation
(D) He has an acid–base disorder caused by overproduction of fixed acid
(E) Appropriate renal compensation would cause his arterial [HCO3−] to increase

19. Which of the following would best distinguish an otherwise healthy person with severe water deprivation from a person with the syndrome of inappropriate antidiuretic hormone (SIADH)?
(A) Free-water clearance (CH2O)
(B) Urine osmolarity
(C) Plasma osmolarity
(D) Circulating levels of antidiuretic hormone (ADH)
(E) Corticopapillary osmotic gradient

20. Which of the following causes a decrease in renal Ca2+ clearance?
(A) Hypoparathyroidism
(B) Treatment with chlorothiazide
(C) Treatment with furosemide
(D) Extracellular fluid (ECF) volume expansion
(E) Hypermagnesemia

21. A patient arrives at the emergency room with low arterial pressure, reduced tissue turgor, and the following arterial blood values:
pH = 7.69
[HCO3−] = 57 mEq/L
Pco2 = 48 mm Hg
Which of the following responses would also be expected to occur in this patient?
(A) Hyperventilation
(B) Decreased K+ secretion by the distal tubules
(C) Increased ratio of H2PO4− to HPO4−2 in urine
(D) Exchange of intracellular H+ for extracellular K+

22. A woman has a plasma osmolarity of 300 mOsm/L and a urine osmolarity of 1,200 mOsm/L. The correct diagnosis is
(A) syndrome of inappropriate antidiuretic hormone (SIADH)
(B) water deprivation
(C) central diabetes insipidus
(D) nephrogenic diabetes insipidus
(E) drinking large volumes of distilled water
23. A patient is infused with para-aminohippuric acid (PAH) to measure renal blood flow (RBF). She has a urine flow rate of 1 mL/min, a plasma [PAH] of 1 mg/mL, a urine [PAH] of 600 mg/mL, and a hematocrit of 45%. What is her "effective" RBF?
(A) 600 mL/min
(B) 660 mL/min
(C) 1,091 mL/min
(D) 1,333 mL/min

24. Which of the following substances has the highest renal clearance?
(A) Para-aminohippuric acid (PAH)
(B) Inulin
(C) Glucose
(D) Na+
(E) Cl−

25. A woman runs a marathon in 90°F weather and replaces all volume lost in sweat by drinking distilled water. After the marathon, she will have
(A) decreased total body water (TBW)
(B) decreased hematocrit
(C) decreased intracellular fluid (ICF) volume
(D) decreased plasma osmolarity
(E) increased intracellular osmolarity

26. Which of the following causes hyperkalemia?
(A) Exercise
(B) Alkalosis
(C) Insulin injection
(D) Decreased serum osmolarity
(E) Treatment with β-agonists

27. Which of the following is a cause of metabolic alkalosis?
(A) Diarrhea
(B) Chronic renal failure
(C) Ethylene glycol ingestion
(D) Treatment with acetazolamide
(E) Hyperaldosteronism
(F) Salicylate poisoning

28. Which of the following is an action of parathyroid hormone (PTH) on the renal tubule?
(A) Stimulation of adenylate cyclase
(B) Inhibition of distal tubule K+ secretion
(C) Inhibition of distal tubule Ca2+ reabsorption
(D) Stimulation of proximal tubule phosphate reabsorption
(E) Inhibition of production of 1,25-dihydroxycholecalciferol

29. A man presents with hypertension and hypokalemia. Measurement of his arterial blood gases reveals a pH of 7.5 and a calculated HCO3− of 32 mEq/L. His serum cortisol and urinary vanillylmandelic acid (VMA) are normal, his serum aldosterone is increased, and his plasma renin activity is decreased. Which of the following is the most likely cause of his hypertension?
(A) Cushing syndrome
(B) Cushing disease
(C) Conn syndrome
(D) Renal artery stenosis
(E) Pheochromocytoma

Questions 30–34 refer to the following sets of arterial blood values:
(A) pH 7.65; HCO3− 48 mEq/L; Pco2 45 mm Hg
(B) pH 7.50; HCO3− 15 mEq/L; Pco2 20 mm Hg
(C) pH 7.40; HCO3− 24 mEq/L; Pco2 40 mm Hg
(D) pH 7.32; HCO3− 30 mEq/L; Pco2 60 mm Hg
(E) pH 7.31; HCO3− 16 mEq/L; Pco2 33 mm Hg

30. Which set of arterial blood values describes a heavy smoker with a history of emphysema and chronic bronchitis who is becoming increasingly somnolent?

31. Which set of arterial blood values describes a patient with partially compensated respiratory alkalosis after 1 month on a mechanical ventilator?

32. Which set of arterial blood values describes a patient with chronic renal failure (eating a normal protein diet) and decreased urinary excretion of NH4+?

33. Which set of arterial blood values describes a patient with untreated diabetes mellitus and increased urinary excretion of NH4+?

34. Which set of arterial blood values describes a patient with a 5-day history of vomiting?

The following figure applies to Questions 35–39. (Figure: a nephron with sampling sites labeled A–E; site A is in Bowman space and site E is in the final urine.)

35. At which nephron site does the amount of K+ in tubular fluid exceed the amount of filtered K+ in a person on a high-K+ diet?
(A) Site A
(B) Site B
(C) Site C
(D) Site D
(E) Site E
36. At which nephron site is the tubular fluid/plasma (TF/P) osmolarity lowest in a person who has been deprived of water?
(A) Site A
(B) Site B
(C) Site C
(D) Site D
(E) Site E

37. At which nephron site is the tubular fluid inulin concentration highest during antidiuresis?
(A) Site A
(B) Site B
(C) Site C
(D) Site D
(E) Site E

38. At which nephron site is the tubular fluid inulin concentration lowest?
(A) Site A
(B) Site B
(C) Site C
(D) Site D
(E) Site E

39. At which nephron site is the tubular fluid glucose concentration highest?
(A) Site A
(B) Site B
(C) Site C
(D) Site D
(E) Site E

The following graph applies to Questions 40–42. The curves show the percentage of the filtered load remaining in the tubular fluid at various sites along the nephron. (Graph: percent of filtered load remaining, up to 100%, plotted from Bowman's space through the proximal tubule and distal tubule to the urine, with four curves labeled A–D.)

40. Which curve describes the inulin profile along the nephron?
(A) Curve A
(B) Curve B
(C) Curve C
(D) Curve D

41. Which curve describes the alanine profile along the nephron?
(A) Curve A
(B) Curve B
(C) Curve C
(D) Curve D

42. Which curve describes the para-aminohippuric acid (PAH) profile along the nephron?
(A) Curve A
(B) Curve B
(C) Curve C
(D) Curve D

43. A 5-year-old boy swallows a bottle of aspirin (salicylic acid) and is treated in the emergency room. The treatment produces a change in urine pH that increases the excretion of salicylic acid. What was the change in urine pH, and what is the mechanism of increased salicylic acid excretion?
(A) Acidification, which converts salicylic acid to its HA form
(B) Alkalinization, which converts salicylic acid to its A− form
(C) Acidification, which converts salicylic acid to its A− form
(D) Alkalinization, which converts salicylic acid to its HA form

Answers and Explanations

1. The answer is D [V B 4 b]. Distal K+ secretion is decreased by factors that decrease the driving force for passive diffusion of K+ across the luminal membrane. Because spironolactone is an aldosterone antagonist, it reduces K+ secretion. Alkalosis, a diet high in K+, and hyperaldosteronism all increase [K+] in the distal cells and thereby increase K+ secretion. Thiazide diuretics increase flow through the distal tubule and dilute the luminal [K+] so that the driving force for K+ secretion is increased.

2. The answer is D [I C 2 a; VII C; Figure 5.15; Table 5.6]. After drinking distilled water, Jared will have an increase in intracellular fluid (ICF) and extracellular fluid (ECF) volumes, a decrease in plasma osmolarity, a suppression of antidiuretic hormone (ADH) secretion, and a positive free-water clearance (CH2O), and will produce dilute urine with a high flow rate. Adam, after drinking the same volume of isotonic NaCl, will have an increase in ECF volume only and no change in plasma osmolarity. Because Adam's ADH will not be suppressed, he will have a higher urine osmolarity, a lower urine flow rate, and a lower CH2O than Jared.

3. The answer is A [IX D 1 a–c; Tables 5.8 and 5.9]. An acid pH, together with decreased HCO3− and decreased Pco2, is consistent with metabolic acidosis with respiratory compensation (hyperventilation). Diarrhea causes gastrointestinal (GI) loss of HCO3−, creating a metabolic acidosis.

4. The answer is D [IX D 1 a–c; Tables 5.8 and 5.9]. The decreased arterial [HCO3−] is caused by gastrointestinal (GI) loss of HCO3− from diarrhea, not by buffering of excess H+ by HCO3−. The woman is hyperventilating as respiratory compensation for metabolic acidosis.
Her hypokalemia cannot be the result of the exchange of intracellular H+ for extracellular K+, because she has an increase in extracellular H+, which would drive the exchange in the other direction. Her circulating levels of aldosterone would be increased as a result of extracellular fluid (ECF) volume contraction, which leads to increased K+ secretion by the distal tubule and hypokalemia.

5. The answer is C [II C 4, 5]. Glomerular filtration will stop when the net ultrafiltration pressure across the glomerular capillary is zero; that is, when the force that favors filtration (47 mm Hg) exactly equals the forces that oppose filtration (10 mm Hg + 37 mm Hg).

6. The answer is D [IX C 1 a, b]. Decreases in arterial Pco2 cause a decrease in the reabsorption of filtered HCO3− by diminishing the supply of H+ in the cell for secretion into the lumen. Reabsorption of filtered HCO3− is nearly 100% of the filtered load and requires carbonic anhydrase in the brush border to convert filtered HCO3− to CO2 to proceed normally. This process causes little acidification of the urine and is not linked to net excretion of H+ as titratable acid or NH4+.

7. The answer is B [II C 1]. To answer this question, calculate the glomerular filtration rate (GFR) and CX. GFR = 150 mg/mL × 1 mL/min ÷ 1 mg/mL = 150 mL/min. CX = 100 mg/mL × 1 mL/min ÷ 2 mg/mL = 50 mL/min. Because the clearance of X is less than the clearance of inulin (or GFR), net reabsorption of X must have occurred. Clearance data alone cannot determine whether there has also been secretion of X. Because GFR cannot be measured with a substance that is reabsorbed, X would not be suitable.

8. The answer is A [IX C 2]. Total daily production of fixed H+ from catabolism of proteins and phospholipids (plus any additional fixed H+ that is ingested) must be matched by the sum of excretion of H+ as titratable acid plus NH4+ to maintain acid–base balance.

9. The answer is C [I B 1 a]. Mannitol is a marker substance for the extracellular fluid (ECF) volume. ECF volume = amount of mannitol/concentration of mannitol = (1 g − 0.2 g)/(0.08 g/L) = 10 L.

10. The answer is D [III B; Figure 5.5]. At concentrations greater than at the transport maximum (Tm) for glucose, the carriers are saturated so that the reabsorption rate no longer matches the filtration rate. The difference is excreted in the urine. As the plasma glucose concentration increases, the excretion of glucose increases. When it is greater than the Tm, the renal vein glucose concentration will be less than the renal artery concentration because some glucose is being excreted in urine and therefore is not returned to the blood. The clearance of glucose is zero at concentrations lower than at Tm (or lower than threshold) when all of the filtered glucose is reabsorbed but is greater than zero at concentrations greater than Tm.

11. The answer is D [VII D; Table 5.6]. A person who produces hyperosmotic urine (1,000 mOsm/L) will have a negative free-water clearance (−CH2O) [CH2O = V − Cosm]. All of the others will have a positive CH2O because they are producing hyposmotic urine as a result of the suppression of antidiuretic hormone (ADH) by water drinking, central diabetes insipidus, or nephrogenic diabetes insipidus.

12. The answer is A [IX B 3]. The Henderson-Hasselbalch equation can be used to calculate the ratio of HA/A−:

    pH = pK + log ([A−]/[HA])
    7.4 = 5.4 + log ([A−]/[HA])
    log ([A−]/[HA]) = 2.0
    [A−]/[HA] = 100, or [HA]/[A−] = 1/100
13. The answer is A [II C 3; IV C 1 d (2)]. Increasing filtration fraction means that a larger portion of the renal plasma flow (RPF) is filtered across the glomerular capillaries. This increased flow causes an increase in the protein concentration and oncotic pressure of the blood leaving the glomerular capillaries. This blood becomes the peritubular capillary blood supply. The increased oncotic pressure in the peritubular capillary blood is a driving force favoring reabsorption in the proximal tubule. Extracellular fluid (ECF) volume expansion, decreased peritubular capillary protein concentration, and increased peritubular capillary hydrostatic pressure all inhibit proximal reabsorption. Oxygen deprivation would also inhibit reabsorption by stopping the Na+–K+ pump in the basolateral membranes.

14. The answer is E [I B 2 b–d]. Interstitial fluid volume is measured indirectly by determining the difference between extracellular fluid (ECF) volume and plasma volume. Inulin, a large fructose polymer that is restricted to the extracellular space, is a marker for ECF volume. Radioactive albumin is a marker for plasma volume.

15. The answer is D [III C; Figure 5.6]. At plasma concentrations that are lower than at the transport maximum (Tm) for para-aminohippuric acid (PAH) secretion, PAH concentration in the renal vein is nearly zero because the sum of filtration plus secretion removes virtually all PAH from the renal plasma. Thus, the PAH concentration in the renal vein is less than that in the renal artery because most of the PAH entering the kidney is excreted in urine. PAH clearance is greater than inulin clearance because PAH is filtered and secreted; inulin is only filtered.

16. The answer is E [VII D; Figures 5.14 and 5.15]. The person with water deprivation will have a higher plasma osmolarity and higher circulating levels of antidiuretic hormone (ADH). These effects will increase the rate of H2O reabsorption in the collecting ducts and create a negative free-water clearance (−CH2O). Tubular fluid/plasma (TF/P) osmolarity in the proximal tubule is not affected by ADH.

17. The answer is C [II C 4; Table 5.3]. Dilation of the afferent arteriole will increase both renal plasma flow (RPF) (because renal vascular resistance is decreased) and glomerular filtration rate (GFR) (because glomerular capillary hydrostatic pressure is increased). Dilation of the efferent arteriole will increase RPF but decrease GFR. Constriction of the efferent arteriole will decrease RPF (due to increased renal vascular resistance) and increase GFR. Both hyperproteinemia (↑ π in the glomerular capillaries) and a ureteral stone (↑ hydrostatic pressure in Bowman space) will oppose filtration and decrease GFR.

18. The answer is B [IX D 4; Table 5.8]. First, the acid–base disorder must be diagnosed. Alkaline pH, low Pco2, and low HCO3− are consistent with respiratory alkalosis. In respiratory alkalosis, the [H+] is decreased and less H+ is bound to negatively charged sites on plasma proteins. As a result, more Ca2+ is bound to proteins and, therefore, the ionized [Ca2+] decreases. There is no respiratory compensation for primary respiratory disorders. The patient is hyperventilating, which is the cause of the respiratory alkalosis. Appropriate renal compensation would be decreased reabsorption of HCO3−, which would cause his arterial [HCO3−] to decrease and his blood pH to decrease (become more normal).

19. The answer is C [VII B, D 4; Table 5.6].
Both individuals will have hyperosmotic urine, a negative free-water clearance (−CH2O), a normal corticopapillary gradient, and high circulating levels of antidiuretic hormone (ADH). The person with water deprivation will have a high plasma osmolarity, and the person with syndrome of inappropriate antidiuretic hormone (SIADH) will have a low plasma osmolarity (because of dilution by the inappropriate water reabsorption).

20. The answer is B [Table 5.11]. Thiazide diuretics have a unique effect on the distal tubule; they increase Ca2+ reabsorption, thereby decreasing Ca2+ excretion and clearance. Because parathyroid hormone (PTH) increases Ca2+ reabsorption, the lack of PTH will cause an increase in Ca2+ clearance. Furosemide inhibits Na+ reabsorption in the thick ascending limb, and extracellular fluid (ECF) volume expansion inhibits Na+ reabsorption in the proximal tubule. At these sites, Ca2+ reabsorption is linked to Na+ reabsorption, and Ca2+ clearance would be increased. Because Mg2+ competes with Ca2+ for reabsorption in the thick ascending limb, hypermagnesemia will cause increased Ca2+ clearance.

21. The answer is D [IX D 2; Table 5.8]. First, the acid–base disorder must be diagnosed. Alkaline pH, with increased HCO3− and increased Pco2, is consistent with metabolic alkalosis with respiratory compensation. The low blood pressure and decreased turgor suggest extracellular fluid (ECF) volume contraction. The reduced [H+] in blood will cause intracellular H+ to leave cells in exchange for extracellular K+. The appropriate respiratory compensation is hypoventilation, which is responsible for the elevated Pco2. H+ excretion in urine will be decreased, so less titratable acid will be excreted. K+ secretion by the distal tubules will be increased because aldosterone levels will be increased secondary to ECF volume contraction.

22. The answer is B [VII B; Figure 5.14]. This patient's plasma and urine osmolarity, taken together, are consistent with water deprivation. The plasma osmolarity is on the high side of normal, stimulating the posterior pituitary to secrete antidiuretic hormone (ADH). Secretion of ADH, in turn, acts on the collecting ducts to increase water reabsorption and produce hyperosmotic urine. Syndrome of inappropriate antidiuretic hormone (SIADH) would also produce hyperosmotic urine, but the plasma osmolarity would be lower than normal because of the excessive water retention. Central and nephrogenic diabetes insipidus and excessive water intake would all result in hyposmotic urine.

23. The answer is C [II B 2, 3]. Effective renal plasma flow (RPF) is calculated from the clearance of para-aminohippuric acid (PAH) [CPAH = UPAH × V/PPAH = 600 mL/min]. Renal blood flow (RBF) = RPF/(1 − hematocrit) = 1,091 mL/min.

24. The answer is A [III D]. Para-aminohippuric acid (PAH) has the greatest clearance of all of the substances because it is both filtered and secreted. Inulin is only filtered. The other
substances are filtered and subsequently reabsorbed; therefore, they will have clearances that are lower than the inulin clearance.

25. The answer is D [I C 2 f; Table 5.2]. By sweating and then replacing all volume by drinking H2O, the woman has a net loss of NaCl without a net loss of H2O. Therefore, her extracellular and plasma osmolarity will be decreased, and as a result, water will flow from extracellular fluid (ECF) to intracellular fluid (ICF). The intracellular osmolarity will also be decreased after the shift of water. Total body water (TBW) will be unchanged because the woman replaced all volume lost in sweat by drinking water. Hematocrit will be increased because of the shift of water from ECF to ICF and the shift of water into the red blood cells (RBCs), which causes their volume to increase.

26. The answer is A [Table 5.4]. Exercise causes a shift of K+ from cells into blood. The result is hyperkalemia. Hyposmolarity, insulin, β-agonists, and alkalosis cause a shift of K+ from blood into cells. The result is hypokalemia.

27. The answer is E [Table 5.9]. A cause of metabolic alkalosis is hyperaldosteronism; increased aldosterone levels cause increased H+ secretion by the distal tubule and increased reabsorption of "new" HCO3−. Diarrhea causes loss of HCO3− from the gastrointestinal (GI) tract and acetazolamide causes loss of HCO3− in the urine, both resulting in hyperchloremic metabolic acidosis with normal anion gap. Ingestion of ethylene glycol and salicylate poisoning lead to metabolic acidosis with increased anion gap.

28. The answer is A [VI B; Table 5.7]. Parathyroid hormone (PTH) acts on the renal tubule by stimulating adenylate cyclase and generating cyclic adenosine monophosphate (cAMP). The major actions of the hormone are inhibition of phosphate reabsorption in the proximal tubule, stimulation of Ca2+ reabsorption in the distal tubule, and stimulation of 1,25-dihydroxycholecalciferol production. PTH does not alter the renal handling of K+.

29. The answer is C [IV C 3 b; V B 4 b]. Hypertension, hypokalemia, metabolic alkalosis, elevated serum aldosterone, and decreased plasma renin activity are all consistent with a primary hyperaldosteronism (e.g., Conn syndrome). High levels of aldosterone cause increased Na+ reabsorption (leading to increased blood pressure), increased K+ secretion (leading to hypokalemia), and increased H+ secretion (leading to metabolic alkalosis). In Conn syndrome, the increased blood pressure causes an increase in renal perfusion pressure, which inhibits renin secretion. Neither Cushing syndrome nor Cushing disease is a possible cause of this patient's hypertension because serum cortisol and adrenocorticotropic hormone (ACTH) levels are normal. Renal artery stenosis causes hypertension that is characterized by increased plasma renin activity. Pheochromocytoma is ruled out by the normal urinary excretion of vanillylmandelic acid (VMA).

30. The answer is D [IX D 3; Tables 5.8 and 5.9]. The history strongly suggests chronic obstructive pulmonary disease (COPD) as a cause of respiratory acidosis. Because of the COPD, the ventilation rate is decreased and CO2 is retained. The [H+] and [HCO3−] are increased by mass action. The [HCO3−] is further increased by renal compensation for respiratory acidosis (increased HCO3− reabsorption by the kidney is facilitated by the high Pco2).

31. The answer is B [IX D 4; Table 5.8]. The blood values in respiratory alkalosis show decreased Pco2 (the cause) and decreased [H+] and [HCO3−] by mass action. The [HCO3−] is further decreased by renal compensation for chronic respiratory alkalosis (decreased HCO3− reabsorption).

32. The answer is E [IX D 1; Tables 5.8 and 5.9]. In patients who have chronic renal failure and ingest normal amounts of protein, fixed acids will be produced from the catabolism of protein. Because the failing kidney does not produce enough NH4+ to excrete all of the fixed acid, metabolic acidosis (with respiratory compensation) results.

33. The answer is E [IX D 1; Tables 5.8 and 5.9].
Untreated diabetes mellitus results in the production of ketoacids, which are fixed acids that cause metabolic acidosis. Urinary excretion of NH4+ is increased in this patient because an adaptive increase in renal NH3 synthesis has occurred in response to the metabolic acidosis.

34. The answer is A [IX D 2; Tables 5.8 and 5.9]. The history of vomiting (in the absence of any other information) indicates loss of gastric H+ and, as a result, metabolic alkalosis (with respiratory compensation).

35. The answer is E [V B 4]. K+ is secreted by the late distal tubule and collecting ducts. Because this secretion is affected by dietary K+, a person who is on a high-K+ diet can secrete more K+ into the urine than was originally filtered. At all of the other nephron sites, the amount of K+ in the tubular fluid is either equal to the amount filtered (site A) or less than the amount filtered (because K+ is reabsorbed in the proximal tubule and the loop of Henle).

36. The answer is D [VII B 3; Figure 5.16]. A person who is deprived of water will have high circulating levels of antidiuretic hormone (ADH). The tubular fluid/plasma (TF/P) osmolarity is 1.0 throughout the proximal tubule, regardless of ADH status. In antidiuresis, TF/P osmolarity is greater than 1.0 at site C because of equilibration of the tubular fluid with the large corticopapillary osmotic gradient. At site E, TF/P osmolarity is greater than 1.0 because of water reabsorption out of the collecting ducts and equilibration with the corticopapillary gradient. At site D, the tubular fluid is diluted because NaCl is reabsorbed in the thick ascending limb without water, making TF/P osmolarity less than 1.0.

37. The answer is E [IV A 2]. Because inulin, once filtered, is neither reabsorbed nor secreted, its concentration in tubular fluid reflects the amount of water remaining in the tubule. In antidiuresis, water is reabsorbed throughout the nephron (except in the thick ascending limb and cortical diluting segment). Thus, inulin concentration in the tubular fluid progressively rises along the nephron as water is reabsorbed, and will be highest in the final urine.

38. The answer is A [IV A 2]. The tubular fluid inulin concentration depends on the amount of water present. As water reabsorption occurs along the nephron, the inulin concentration progressively increases. Thus, the tubular fluid inulin concentration is lowest in Bowman space, prior to any water reabsorption.

39. The answer is A [IV C 1 a]. Glucose is extensively reabsorbed in the early proximal tubule by the Na+–glucose cotransporter. The glucose concentration in tubular fluid is highest in Bowman space before any reabsorption has occurred.

40. The answer is C [IV A 2]. Once inulin is filtered, it is neither reabsorbed nor secreted. Thus, 100% of the filtered inulin remains in tubular fluid at each nephron site and in the final urine.

41. The answer is A [IV C 1 a]. Alanine, like glucose, is avidly reabsorbed in the early proximal tubule by a Na+–amino acid cotransporter. Thus, the percentage of the filtered load of alanine remaining in the tubular fluid declines rapidly along the proximal tubule as alanine is reabsorbed into the blood.

42. The answer is D [III C; IV A 3].
The secretion process adds PAH to the tubular fluid; therefore, the amount that is present at the end of the proximal tubule is greater than the amount that was present in Bowman space.

43. The answer is B [III E]. Alkalinization of the urine converts more salicylic acid to its A− form. The A− form is charged and cannot back-diffuse from urine to blood. Therefore, it is trapped in the urine and excreted.

Chapter 6 Gastrointestinal Physiology

I. Structure and Innervation of the Gastrointestinal Tract
A. Structure of the gastrointestinal (GI) tract (Figure 6.1)
1. Epithelial cells
■ are specialized in different parts of the GI tract for secretion or absorption.
2. Muscularis mucosa
■ Contraction causes a change in the surface area for secretion or absorption.
3. Circular muscle
■ Contraction causes a decrease in diameter of the lumen of the GI tract.
4. Longitudinal muscle
■ Contraction causes shortening of a segment of the GI tract.
5. Submucosal plexus (Meissner plexus) and myenteric plexus
■ comprise the enteric nervous system of the GI tract.
■ integrate and coordinate the motility, secretory, and endocrine functions of the GI tract.

Figure 6.1 Structure of the gastrointestinal tract. (Cross-sectional layers, from lumen outward: epithelial cells, endocrine cells, and receptor cells; lamina propria; muscularis mucosae; submucosal plexus; circular muscle; myenteric plexus; longitudinal muscle; serosa.)

B. Innervation of the GI tract
■ The autonomic nervous system (ANS) of the GI tract comprises both extrinsic and intrinsic nervous systems.
1. Extrinsic innervation (parasympathetic and sympathetic nervous systems)
■ Efferent fibers carry information from the brain stem and spinal cord to the GI tract.
■ Afferent fibers carry sensory information from chemoreceptors and mechanoreceptors in the GI tract to the brain stem and spinal cord.
a. Parasympathetic nervous system
■ is usually excitatory on the functions of the GI tract.
■ is carried via the vagus and pelvic nerves.
■ Preganglionic parasympathetic fibers synapse in the myenteric and submucosal plexuses.
■ Cell bodies in the ganglia of the plexuses then send information to the smooth muscle, secretory cells, and endocrine cells of the GI tract.
(1) The vagus nerve innervates the esophagus, stomach, pancreas, and upper large intestine.
■ Reflexes in which both afferent and efferent pathways are contained in the vagus nerve are called vagovagal reflexes.
(2) The pelvic nerve innervates the lower large intestine, rectum, and anus.
b. Sympathetic nervous system
■ is usually inhibitory on the functions of the GI tract.
■ Fibers originate in the spinal cord between T-8 and L-2.
■ Preganglionic sympathetic cholinergic fibers synapse in the prevertebral ganglia.
■ Postganglionic sympathetic adrenergic fibers leave the prevertebral ganglia and synapse in the myenteric and submucosal plexuses. Direct postganglionic adrenergic innervation of blood vessels and some smooth muscle cells also occurs.
■ Cell bodies in the ganglia of the plexuses then send information to the smooth muscle, secretory cells, and endocrine cells of the GI tract.
2. Intrinsic innervation (enteric nervous system)
■ coordinates and relays information from the parasympathetic and sympathetic nervous systems to the GI tract.
■ uses local reflexes to relay information within the GI tract.
■ controls most functions of the GI tract, especially motility and secretion, even in the absence of extrinsic innervation.
a. Myenteric plexus (Auerbach plexus)
■ primarily controls the motility of the GI smooth muscle.
b. Submucosal plexus (Meissner plexus)
■ primarily controls secretion and blood flow.
■ receives sensory information from chemoreceptors and mechanoreceptors in the GI tract.

II. Regulatory Substances in the Gastrointestinal Tract (Figure 6.2)

Figure 6.2 Gastrointestinal hormones, paracrines, and neurocrines. (Diagram: hormones are secreted by endocrine cells into the portal circulation and reach target cells via the systemic circulation; paracrines diffuse from endocrine cells to nearby target cells; neurocrines are released from neurons onto target cells by action potentials.)

A. GI hormones (Table 6.1)
■ are released from endocrine cells in the GI mucosa into the portal circulation, enter the general circulation, and have physiologic actions on target cells.
■ Four substances meet the requirements to be considered "official" GI hormones; others are considered "candidate" hormones. The four official GI hormones are gastrin, cholecystokinin (CCK), secretin, and glucose-dependent insulinotropic peptide (GIP).

Table 6.1 Summary of Gastrointestinal (GI) Hormones
Gastrin: homology (family): gastrin–CCK; site of secretion: G cells of stomach; stimuli for secretion: small peptides and amino acids, distention of stomach, vagus (via GRP); inhibited by H+ in stomach and by somatostatin; actions: ↑ gastric H+ secretion; stimulates growth of gastric mucosa.
CCK: homology: gastrin–CCK; site of secretion: I cells of duodenum and jejunum; stimuli: small peptides and amino acids, fatty acids; actions: stimulates contraction of gallbladder and relaxation of sphincter of Oddi; ↑ pancreatic enzyme and HCO3− secretion; ↑ growth of exocrine pancreas/gallbladder; inhibits gastric emptying.
Secretin: homology: secretin–glucagon; site of secretion: S cells of duodenum; stimuli: H+ in duodenum, fatty acids in duodenum; actions: ↑ pancreatic HCO3− secretion; ↑ biliary HCO3− secretion; ↓ gastric H+ secretion.
GIP: homology: secretin–glucagon; site of secretion: duodenum and jejunum; stimuli: fatty acids, amino acids, and oral glucose; actions: ↑ insulin secretion; ↓ gastric H+ secretion.
CCK = cholecystokinin; GIP = glucose-dependent insulinotropic peptide; GRP = gastrin-releasing peptide.

1. Gastrin
■ contains 17 amino acids ("little gastrin").
■ Little gastrin is the form secreted in response to a meal.
■ All of the biologic activity of gastrin resides in the four C-terminal amino acids.
■ "Big gastrin" contains 34 amino acids, although it is not a dimer of little gastrin.
a. Actions of gastrin
(1) Increases H+ secretion by the gastric parietal cells.
(2) Stimulates growth of gastric mucosa by stimulating the synthesis of RNA and new protein. Patients with gastrin-secreting tumors have hypertrophy and hyperplasia of the gastric mucosa.
b. Stimuli for secretion of gastrin
■ Gastrin is secreted from the G cells of the gastric antrum in response to a meal.
■ Gastrin is secreted in response to the following:
(1) Small peptides and amino acids in the lumen of the stomach
■ The most potent stimuli for gastrin secretion are phenylalanine and tryptophan.
(2) Distention of the stomach
(3) Vagal stimulation, mediated by gastrin-releasing peptide (GRP)
■ Atropine does not block vagally mediated gastrin secretion because the mediator of the vagal effect is GRP, not acetylcholine (ACh).
c. Inhibition of gastrin secretion
■ H+ in the lumen of the stomach inhibits gastrin release. This negative feedback control ensures that gastrin secretion is inhibited if the stomach contents are sufficiently acidified.
■ Somatostatin inhibits gastrin release.
d. Zollinger–Ellison syndrome (gastrinoma)
■ occurs when gastrin is secreted by non–β-cell tumors of the pancreas.
2. CCK
■ contains 33 amino acids.
■ is homologous to gastrin.
■ The five C-terminal amino acids are the same in CCK and gastrin.
■ The biologic activity of CCK resides in the C-terminal heptapeptide. Thus, the heptapeptide contains the sequence that is homologous to gastrin and has gastrin activity as well as CCK activity.
a. Actions of CCK
(1) Stimulates contraction of the gallbladder and simultaneously causes relaxation of the sphincter of Oddi for secretion of bile.
(2) Stimulates pancreatic enzyme secretion.
(3) Potentiates secretin-induced stimulation of pancreatic HCO3− secretion.
(4) Stimulates growth of the exocrine pancreas.
(5) Inhibits gastric emptying. Thus, meals containing fat stimulate the secretion of CCK, which slows gastric emptying to allow more time for intestinal digestion and absorption.
b. Stimuli for the release of CCK
■ CCK is released from the I cells of the duodenal and jejunal mucosa by
(1) Small peptides and amino acids.
(2) Fatty acids and monoglycerides.
■ Triglycerides do not stimulate the release of CCK because they cannot cross intestinal cell membranes.
3. Secretin
■ contains 27 amino acids.
■ is homologous to glucagon; 14 of the 27 amino acids in secretin are the same as those in glucagon.
■ All of the amino acids are required for biologic activity.
a. Actions of secretin
■ are coordinated to reduce the amount of H+ in the lumen of the small intestine.
(1) Stimulates pancreatic HCO3− secretion and increases growth of the exocrine pancreas. Pancreatic HCO3− neutralizes H+ in the intestinal lumen.
(2) Stimulates HCO3− and H2O secretion by the liver and increases bile production.
(3) Inhibits H+ secretion by gastric parietal cells.
b. Stimuli for the release of secretin
■ Secretin is released by the S cells of the duodenum in response to
(1) H+ in the lumen of the duodenum.
(2) Fatty acids in the lumen of the duodenum.
4. GIP
■ contains 42 amino acids.
■ is homologous to secretin and glucagon.
a. Actions of GIP
(1) Stimulates insulin release. In the presence of an oral glucose load, GIP causes the release of insulin from the pancreas. Thus, oral glucose is more effective than intravenous glucose in causing insulin release and, therefore, glucose utilization.
(2) Inhibits H+ secretion by gastric parietal cells.
b. Stimuli for the release of GIP
■ GIP is secreted by the duodenum and jejunum.
■ GIP is the only GI hormone that is released in response to fat, protein, and carbohydrate. GIP secretion is stimulated by fatty acids, amino acids, and orally administered glucose.
5. Candidate hormones
■ are secreted by cells of the GI tract.
■ Motilin increases GI motility and is involved in interdigestive myoelectric complexes.
■ Pancreatic polypeptide inhibits pancreatic secretions.
■ Glucagon-like peptide-1 (GLP-1) binds to pancreatic β-cells and stimulates insulin secretion. Analogues of GLP-1 may be helpful in the treatment of type 2 diabetes mellitus.
B. Paracrines
■ are released from endocrine cells in the GI mucosa.
■ diffuse over short distances to act on target cells located in the GI tract.
■ The GI paracrines are somatostatin and histamine.
1. Somatostatin
■ is secreted by cells throughout the GI tract in response to H+ in the lumen. Its secretion is inhibited by vagal stimulation.
■ inhibits the release of all GI hormones.
■ inhibits gastric H+ secretion.
2. Histamine
■ is secreted by mast cells of the gastric mucosa.
■ increases gastric H+ secretion directly and by potentiating the effects of gastrin and vagal stimulation.
C. Neurocrines
■ are synthesized in neurons of the GI tract, moved by axonal transport down the axons, and released by action potentials in the nerves.
■ Neurocrines then diffuse across the synaptic cleft to a target cell.
■ The GI neurocrines are vasoactive intestinal peptide (VIP), GRP (bombesin), and enkephalins.
1. VIP
■ contains 28 amino acids and is homologous to secretin.
■ is released from neurons in the mucosa and smooth muscle of the GI tract.
■ produces relaxation of GI smooth muscle, including the lower esophageal sphincter.
■ stimulates pancreatic HCO3− secretion and inhibits gastric H+ secretion. In these actions, it resembles secretin.
■ is secreted by pancreatic islet cell tumors and is presumed to mediate pancreatic cholera.
2. GRP (bombesin)
■ is released from vagus nerves that innervate the G cells.
■ stimulates gastrin release from G cells.
3. Enkephalins (met-enkephalin and leu-enkephalin)
■ are secreted from nerves in the mucosa and smooth muscle of the GI tract.
■ stimulate contraction of GI smooth muscle, particularly the lower esophageal, pyloric, and ileocecal sphincters.
■ inhibit intestinal secretion of fluid and electrolytes. This action forms the basis for the usefulness of opiates in the treatment of diarrhea.
D. Satiety
■ Hypothalamic centers
1. Satiety center (inhibits appetite) is located in the ventromedial nucleus of the hypothalamus.
2. Feeding center (stimulates appetite) is located in the lateral hypothalamic area of the hypothalamus.
■ Anorexigenic neurons release proopiomelanocortin (POMC) in the hypothalamic centers and cause decreased appetite.
■ Orexigenic neurons release neuropeptide Y in the hypothalamic centers and stimulate appetite.
■ Leptin is secreted by fat cells. It stimulates anorexigenic neurons and inhibits orexigenic neurons, thus decreasing appetite.
■ Insulin and GLP-1 inhibit appetite.
■ Ghrelin is secreted by gastric cells. It stimulates orexigenic neurons and inhibits anorexigenic neurons, thus increasing appetite.

III. Gastrointestinal Motility
■ Contractile tissue of the GI tract is almost exclusively unitary smooth muscle, with the exception of the pharynx, upper one-third of the esophagus, and external anal sphincter, all of which are striated muscle.
■ Depolarization of circular muscle leads to contraction of a ring of smooth muscle and a decrease in diameter of that segment of the GI tract.
■ Depolarization of longitudinal muscle leads to contraction in the longitudinal direction and a decrease in length of that segment of the GI tract.
■ Phasic contractions occur in the esophagus, gastric antrum, and small intestine, which contract and relax periodically.
■ Tonic contractions occur in the lower esophageal sphincter, orad stomach, and ileocecal and internal anal sphincters.
A. Slow waves (Figure 6.3)
■ are oscillating membrane potentials inherent to the smooth muscle cells of some parts of the GI tract.
■ occur spontaneously.
■ originate in the interstitial cells of Cajal, which serve as the pacemaker for GI smooth muscle.
■ are not action potentials, although they determine the pattern of action potentials and, therefore, the pattern of contraction.
1.
2. CCK
■ contains 33 amino acids.
■ is homologous to gastrin.
■ The five C-terminal amino acids are the same in CCK and gastrin.
■ The biologic activity of CCK resides in the C-terminal heptapeptide. Thus, the heptapeptide contains the sequence that is homologous to gastrin and has gastrin activity as well as CCK activity.
a. Actions of CCK
(1) Stimulates contraction of the gallbladder and simultaneously causes relaxation of the sphincter of Oddi for secretion of bile.
(2) Stimulates pancreatic enzyme secretion.
(3) Potentiates secretin-induced stimulation of pancreatic HCO3− secretion.
(4) Stimulates growth of the exocrine pancreas.
(5) Inhibits gastric emptying. Thus, meals containing fat stimulate the secretion of CCK, which slows gastric emptying to allow more time for intestinal digestion and absorption.
b. Stimuli for the release of CCK
■ CCK is released from the I cells of the duodenal and jejunal mucosa by
(1) Small peptides and amino acids.
(2) Fatty acids and monoglycerides.
■ Triglycerides do not stimulate the release of CCK because they cannot cross intestinal cell membranes.
3. Secretin
■ contains 27 amino acids.
■ is homologous to glucagon; 14 of the 27 amino acids in secretin are the same as those in glucagon.
■ All of the amino acids are required for biologic activity.
a. Actions of secretin
■ are coordinated to reduce the amount of H+ in the lumen of the small intestine.
(1) Stimulates pancreatic HCO3− secretion and increases growth of the exocrine pancreas. Pancreatic HCO3− neutralizes H+ in the intestinal lumen.
(2) Stimulates HCO3− and H2O secretion by the liver and increases bile production.
(3) Inhibits H+ secretion by gastric parietal cells.
b. Stimuli for the release of secretin
■ Secretin is released by the S cells of the duodenum in response to
(1) H+ in the lumen of the duodenum.
(2) Fatty acids in the lumen of the duodenum.
4. GIP
■ contains 42 amino acids.
■ is homologous to secretin and glucagon.
a. Actions of GIP
(1) Stimulates insulin release. In the presence of an oral glucose load, GIP causes the release of insulin from the pancreas. Thus, oral glucose is more effective than intravenous glucose in causing insulin release and, therefore, glucose utilization.
(2) Inhibits H+ secretion by gastric parietal cells.
b. Stimuli for the release of GIP
■ GIP is secreted by the duodenum and jejunum.
■ GIP is the only GI hormone that is released in response to fat, protein, and carbohydrate. GIP secretion is stimulated by fatty acids, amino acids, and orally administered glucose.
5. Candidate hormones
■ are secreted by cells of the GI tract.
■ Motilin increases GI motility and is involved in interdigestive myoelectric complexes.
■ Pancreatic polypeptide inhibits pancreatic secretions.
■ Glucagon-like peptide-1 (GLP-1) binds to pancreatic β-cells and stimulates insulin secretion. Analogues of GLP-1 may be helpful in the treatment of type 2 diabetes mellitus.
B. Paracrines
■ are released from endocrine cells in the GI mucosa.
■ diffuse over short distances to act on target cells located in the GI tract.
■ The GI paracrines are somatostatin and histamine.
1. Somatostatin
■ is secreted by cells throughout the GI tract in response to H+ in the lumen. Its secretion is inhibited by vagal stimulation.
■ inhibits the release of all GI hormones.
■ inhibits gastric H+ secretion.
2. Histamine
■ is secreted by mast cells of the gastric mucosa.
■ increases gastric H+ secretion directly and by potentiating the effects of gastrin and vagal stimulation.
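For self-testing, the rows of Table 6.1 map naturally onto a small lookup structure. A minimal sketch: the dictionary layout and quiz helper are invented for illustration, and the entries are abridged from the table above.

```python
# Study aid: the four "official" GI hormones (abridged from Table 6.1).
GI_HORMONES = {
    "gastrin":  {"site": "G cells, stomach",
                 "stimuli": "peptides/amino acids, distention, vagus (GRP)",
                 "actions": "increases gastric H+ secretion; mucosal growth"},
    "CCK":      {"site": "I cells, duodenum/jejunum",
                 "stimuli": "peptides/amino acids, fatty acids",
                 "actions": "gallbladder contraction; enzyme and HCO3- secretion; slows emptying"},
    "secretin": {"site": "S cells, duodenum",
                 "stimuli": "H+ and fatty acids in duodenum",
                 "actions": "pancreatic and biliary HCO3-; decreases gastric H+"},
    "GIP":      {"site": "duodenum and jejunum",
                 "stimuli": "fatty acids, amino acids, oral glucose",
                 "actions": "increases insulin; decreases gastric H+"},
}

def quiz(hormone):
    facts = GI_HORMONES[hormone]
    return f"{hormone}: secreted by {facts['site']}; {facts['actions']}"

print(quiz("GIP"))
```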
C. Neurocrines
■ are synthesized in neurons of the GI tract, moved by axonal transport down the axons, and released by action potentials in the nerves.
■ Neurocrines then diffuse across the synaptic cleft to a target cell.
■ The GI neurocrines are vasoactive intestinal peptide (VIP), GRP (bombesin), and enkephalins.
1. VIP
■ contains 28 amino acids and is homologous to secretin.
■ is released from neurons in the mucosa and smooth muscle of the GI tract.
■ produces relaxation of GI smooth muscle, including the lower esophageal sphincter.
■ stimulates pancreatic HCO3− secretion and inhibits gastric H+ secretion. In these actions, it resembles secretin.
■ is secreted by pancreatic islet cell tumors and is presumed to mediate pancreatic cholera.
2. GRP (bombesin)
■ is released from vagus nerves that innervate the G cells.
■ stimulates gastrin release from G cells.
3. Enkephalins (met-enkephalin and leu-enkephalin)
■ are secreted from nerves in the mucosa and smooth muscle of the GI tract.
■ stimulate contraction of GI smooth muscle, particularly the lower esophageal, pyloric, and ileocecal sphincters.
■ inhibit intestinal secretion of fluid and electrolytes. This action forms the basis for the usefulness of opiates in the treatment of diarrhea.
D. Satiety
■ Hypothalamic centers
1. Satiety center (inhibits appetite) is located in the ventromedial nucleus of the hypothalamus.
2. Feeding center (stimulates appetite) is located in the lateral hypothalamic area of the hypothalamus.
■ Anorexigenic neurons release proopiomelanocortin (POMC) in the hypothalamic centers and cause decreased appetite.
■ Orexigenic neurons release neuropeptide Y in the hypothalamic centers and stimulate appetite.
■ Leptin is secreted by fat cells. It stimulates anorexigenic neurons and inhibits orexigenic neurons, thus decreasing appetite.
■ Insulin and GLP-1 inhibit appetite.
■ Ghrelin is secreted by gastric cells. It stimulates orexigenic neurons and inhibits anorexigenic neurons, thus increasing appetite.

III. Gastrointestinal Motility
■ Contractile tissue of the GI tract is almost exclusively unitary smooth muscle, with the exception of the pharynx, upper one-third of the esophagus, and external anal sphincter, all of which are striated muscle.
■ Depolarization of circular muscle leads to contraction of a ring of smooth muscle and a decrease in diameter of that segment of the GI tract.
■ Depolarization of longitudinal muscle leads to contraction in the longitudinal direction and a decrease in length of that segment of the GI tract.
■ Phasic contractions occur in the esophagus, gastric antrum, and small intestine, which contract and relax periodically.
■ Tonic contractions occur in the lower esophageal sphincter, orad stomach, and ileocecal and internal anal sphincters.
A. Slow waves (Figure 6.3)
■ are oscillating membrane potentials inherent to the smooth muscle cells of some parts of the GI tract.
■ occur spontaneously.
■ originate in the interstitial cells of Cajal, which serve as the pacemaker for GI smooth muscle.
■ are not action potentials, although they determine the pattern of action potentials and, therefore, the pattern of contraction.
1. Mechanism of slow wave production
■ is the cyclic opening of Ca2+ channels (depolarization) followed by opening of K+ channels (repolarization).
■ Depolarization during each slow wave brings the membrane potential of smooth muscle cells closer to threshold and, therefore, increases the probability that action potentials will occur.
■ Action potentials, produced on top of the background of slow waves, then initiate phasic contractions of the smooth muscle cells (see Chapter 1, VII B).
2. Frequency of slow waves
■ varies along the GI tract, but is constant and characteristic for each part of the GI tract.
■ is not influenced by neural or hormonal input. In contrast, the frequency of the action potentials that occur on top of the slow waves is modified by neural and hormonal influences.
■ sets the maximum frequency of contractions for each part of the GI tract.
■ is lowest in the stomach (3 slow waves/min) and highest in the duodenum (12 slow waves/min).

Figure 6.3 Gastrointestinal slow waves with superimposed action potential "spikes." Action potentials produce subsequent contraction. (Traces: voltage, showing slow waves and spikes; tension, showing contraction.)
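A toy simulation makes the key point of III A concrete: slow waves set the timing, and contractions can occur no more often than the slow-wave frequency, however strong the excitatory input. Only the stomach's 3 slow waves/min is taken from the text; the threshold, amplitude, and baseline voltages are illustrative assumptions.

```python
# Toy model: gastric slow waves with action potentials fired only when a
# slow-wave crest reaches threshold. Excitatory (e.g., vagal) input
# depolarizes the baseline so that more crests reach threshold, but the
# contraction rate can never exceed the slow-wave frequency itself.

SLOW_WAVES_PER_MIN = 3    # from the text (stomach)
THRESHOLD_MV = -40        # illustrative
AMPLITUDE_MV = 15         # illustrative slow-wave amplitude

def contractions_per_min(baseline_mv):
    crest = baseline_mv + AMPLITUDE_MV
    # One burst of action potentials (one phasic contraction) per crest
    # that reaches threshold; otherwise none.
    return SLOW_WAVES_PER_MIN if crest >= THRESHOLD_MV else 0

print(contractions_per_min(baseline_mv=-60))  # 0: crests peak at -45 mV, below threshold
print(contractions_per_min(baseline_mv=-50))  # 3: every crest fires, capped at 3/min
```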
B. Chewing, swallowing, and esophageal peristalsis
1. Chewing
■ lubricates food by mixing it with saliva.
■ decreases the size of food particles to facilitate swallowing and to begin the digestive process.
2. Swallowing
■ The swallowing reflex is coordinated in the medulla. Fibers in the vagus and glossopharyngeal nerves carry information between the GI tract and the medulla.
■ The following sequence of events is involved in swallowing:
a. The nasopharynx closes and, at the same time, breathing is inhibited.
b. The laryngeal muscles contract to close the glottis and elevate the larynx.
c. Peristalsis begins in the pharynx to propel the food bolus toward the esophagus. Simultaneously, the upper esophageal sphincter relaxes to permit the food bolus to enter the esophagus.
3. Esophageal motility
■ The esophagus propels the swallowed food into the stomach.
■ Sphincters at either end of the esophagus prevent air from entering the upper esophagus and gastric acid from entering the lower esophagus.
■ Because the esophagus is located in the thorax, intraesophageal pressure equals thoracic pressure, which is lower than atmospheric pressure. In fact, a balloon catheter placed in the esophagus can be used to measure intrathoracic pressure.
■ The following sequence of events occurs as food moves into and down the esophagus:
a. As part of the swallowing reflex, the upper esophageal sphincter relaxes to permit swallowed food to enter the esophagus.
b. The upper esophageal sphincter then contracts so that food will not reflux into the pharynx.
c. A primary peristaltic contraction creates an area of high pressure behind the food bolus. The peristaltic contraction moves down the esophagus and propels the food bolus along. Gravity accelerates the movement.
d. A secondary peristaltic contraction clears the esophagus of any remaining food.
e. As the food bolus approaches the lower end of the esophagus, the lower esophageal sphincter relaxes. This relaxation is vagally mediated, and the neurotransmitter is VIP.
f. The orad region of the stomach relaxes ("receptive relaxation") to allow the food bolus to enter the stomach.
4. Clinical correlations of esophageal motility
a. Gastroesophageal reflux (heartburn) may occur if the tone of the lower esophageal sphincter is decreased and gastric contents reflux into the esophagus.
b. Achalasia may occur if the lower esophageal sphincter does not relax during swallowing and food accumulates in the esophagus.
C. Gastric motility
■ The stomach has three layers of smooth muscle—the usual longitudinal and circular layers and a third oblique layer.
■ The stomach has three anatomic divisions—the fundus, body, and antrum.
■ The orad region of the stomach includes the fundus and the proximal body. This region contains oxyntic glands and is responsible for receiving the ingested meal.
■ The caudad region of the stomach includes the antrum and the distal body. This region is responsible for the contractions that mix food and propel it into the duodenum.
1. "Receptive relaxation"
■ is a vagovagal reflex that is initiated by distention of the stomach and is abolished by vagotomy.
■ The orad region of the stomach relaxes to accommodate the ingested meal.
■ CCK participates in "receptive relaxation" by increasing the distensibility of the orad stomach.
2. Mixing and digestion
■ The caudad region of the stomach contracts to mix the food with gastric secretions and begins the process of digestion. The size of food particles is reduced.
a. Slow waves in the caudad stomach occur at a frequency of 3–5 waves/min. They depolarize the smooth muscle cells.
b. If threshold is reached during the slow waves, action potentials are fired, followed by contraction. Thus, the frequency of slow waves sets the maximal frequency of contraction.
c. A wave of contraction closes the distal antrum. Thus, as the caudad stomach contracts, food is propelled back into the stomach to be mixed (retropulsion).
d. Gastric contractions are increased by vagal stimulation and decreased by sympathetic stimulation.
e. Even during fasting, contractions (the "migrating myoelectric complex") occur at 90-minute intervals and clear the stomach of residual food. Motilin is the mediator of these contractions.
3. Gastric emptying
■ The caudad region of the stomach contracts to propel food into the duodenum.
a. The rate of gastric emptying is fastest when the stomach contents are isotonic. If the stomach contents are hypertonic or hypotonic, gastric emptying is slowed.
b. Fat inhibits gastric emptying (i.e., increases gastric emptying time) by stimulating the release of CCK.
c. H+ in the duodenum inhibits gastric emptying via direct neural reflexes. H+ receptors in the duodenum relay information to the gastric smooth muscle via interneurons in the GI plexuses.
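The modifiers of emptying in C 3 amount to a small rule set. In the sketch below, the factor list follows items a–c above; the function itself, its qualitative outputs, and the pH cutoff are hypothetical illustrations.

```python
# Qualitative rule set for gastric emptying (from C 3): isotonic contents
# empty fastest; fat (via CCK) and duodenal H+ (via neural reflexes) slow it.

def gastric_emptying(tonicity, fat_present, duodenal_ph):
    slowing = []
    if tonicity != "isotonic":
        slowing.append(f"{tonicity} contents")
    if fat_present:
        slowing.append("fat (CCK release)")
    if duodenal_ph < 4:   # illustrative cutoff for "H+ in the duodenum"
        slowing.append("duodenal H+ (neural reflex)")
    if not slowing:
        return "fastest (isotonic, no inhibitors)"
    return "slowed by: " + ", ".join(slowing)

print(gastric_emptying("isotonic", fat_present=False, duodenal_ph=6))
print(gastric_emptying("hypertonic", fat_present=True, duodenal_ph=3))
```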
D. Small intestinal motility
■ The small intestine functions in the digestion and absorption of nutrients. The small intestine mixes nutrients with digestive enzymes, exposes the digested nutrients to the absorptive mucosa, and then propels any nonabsorbed material to the large intestine.
■ As in the stomach, slow waves set the basic electrical rhythm, which occurs at a frequency of 12 waves/min. Action potentials occur on top of the slow waves and lead to contractions.
■ Parasympathetic stimulation increases intestinal smooth muscle contraction; sympathetic stimulation decreases it.
1. Segmentation contractions
■ mix the intestinal contents.
■ A section of small intestine contracts, sending the intestinal contents (chyme) in both orad and caudad directions. That section of small intestine then relaxes, and the contents move back into the segment.
■ This back-and-forth movement produced by segmentation contractions causes mixing without any net forward movement of the chyme.
2. Peristaltic contractions
■ are highly coordinated and propel the chyme through the small intestine toward the large intestine. Ideally, peristalsis occurs after digestion and absorption have taken place.
■ Contraction behind the bolus and, simultaneously, relaxation in front of the bolus cause the chyme to be propelled caudally.
■ The peristaltic reflex is coordinated by the enteric nervous system.
a. Food in the intestinal lumen is sensed by enterochromaffin cells, which release serotonin (5-hydroxytryptamine, 5-HT).
b. 5-HT binds to receptors on intrinsic primary afferent neurons (IPANs), which initiate the peristaltic reflex.
c. Behind the food bolus, excitatory transmitters cause contraction of circular muscle and inhibitory transmitters cause relaxation of longitudinal muscle. In front of the bolus, inhibitory transmitters cause relaxation of circular muscle and excitatory transmitters cause contraction of longitudinal muscle. (This polarity is encoded in the sketch at the end of this section.)
3. Gastroileal reflex
■ is mediated by the extrinsic ANS and possibly by gastrin.
■ The presence of food in the stomach triggers increased peristalsis in the ileum and relaxation of the ileocecal sphincter. As a result, the intestinal contents are delivered to the large intestine.
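As noted in D 2 c, the reflex polarity can be written out as a tiny rule table. The encoding below is illustrative; the rules themselves come straight from D 2 c.

```python
# Peristaltic reflex polarity (from D 2 c): behind the bolus, circular muscle
# contracts and longitudinal muscle relaxes; in front of the bolus, circular
# muscle relaxes and longitudinal muscle contracts -- net caudad propulsion.

def reflex_response(position):
    if position == "behind bolus":
        return {"circular": "contracts", "longitudinal": "relaxes"}
    if position == "in front of bolus":
        return {"circular": "relaxes", "longitudinal": "contracts"}
    raise ValueError(position)

for where in ("behind bolus", "in front of bolus"):
    print(where, "->", reflex_response(where))
```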
E. Large intestinal motility
■ Fecal material moves from the cecum to the colon (i.e., through the ascending, transverse, descending, and sigmoid colons), to the rectum, and then to the anal canal.
■ Haustra, or saclike segments, appear after contractions of the large intestine.
1. Cecum and proximal colon
■ When the proximal colon is distended with fecal material, the ileocecal sphincter contracts to prevent reflux into the ileum.
a. Segmentation contractions in the proximal colon mix the contents and are responsible for the appearance of haustra.
b. Mass movements occur 1 to 3 times/day and cause the colonic contents to move distally for long distances (e.g., from the transverse colon to the sigmoid colon).
2. Distal colon
■ Because most colonic water absorption occurs in the proximal colon, fecal material in the distal colon becomes semisolid and moves slowly. Mass movements propel it into the rectum.
3. Rectum, anal canal, and defecation
■ The sequence of events for defecation is as follows:
a. As the rectum fills with fecal material, it contracts and the internal anal sphincter relaxes (rectosphincteric reflex).
b. Once the rectum is filled to about 25% of its capacity, there is an urge to defecate. However, defecation is prevented because the external anal sphincter is tonically contracted.
c. When it is convenient to defecate, the external anal sphincter is relaxed voluntarily. The smooth muscle of the rectum contracts, forcing the feces out of the body.
■ Intra-abdominal pressure is increased by expiring against a closed glottis (Valsalva maneuver).
4. Gastrocolic reflex
■ The presence of food in the stomach increases the motility of the colon and increases the frequency of mass movements.
a. The gastrocolic reflex has a rapid parasympathetic component that is initiated when the stomach is stretched by food.
b. A slower, hormonal component is mediated by CCK and gastrin.
5. Disorders of large intestinal motility
a. Emotional factors strongly influence large intestinal motility via the extrinsic ANS. Irritable bowel syndrome may occur during periods of stress and may result in constipation (increased segmentation contractions) or diarrhea (decreased segmentation contractions).
b. Megacolon (Hirschsprung disease), the absence of the colonic enteric nervous system, results in constriction of the involved segment, marked dilatation and accumulation of intestinal contents proximal to the constriction, and severe constipation.
F. Vomiting
■ A wave of reverse peristalsis begins in the small intestine, moving the GI contents in the orad direction.
■ The gastric contents are eventually pushed into the esophagus. If the upper esophageal sphincter remains closed, retching occurs. If the pressure in the esophagus becomes high enough to open the upper esophageal sphincter, vomiting occurs.
■ The vomiting center in the medulla is stimulated by tickling the back of the throat, gastric distention, and vestibular stimulation (motion sickness).
■ The chemoreceptor trigger zone in the fourth ventricle is activated by emetics, radiation, and vestibular stimulation.

IV. Gastrointestinal Secretion (Table 6.2)

Table 6.2 Summary of Gastrointestinal (GI) Secretions
■ Saliva — major characteristics: high HCO3−, high K+, hypotonic, α-amylase, lingual lipase; stimulated by: parasympathetic nervous system, sympathetic nervous system; inhibited by: sleep, dehydration, atropine.
■ Gastric secretion — major characteristics: HCl, pepsinogen, intrinsic factor; stimulated by: gastrin, parasympathetic nervous system, histamine; inhibited by: ↓ stomach pH, chyme in duodenum (via secretin and GIP), somatostatin, atropine, cimetidine, omeprazole.
■ Pancreatic secretion — major characteristics: high HCO3−, isotonic, pancreatic lipase, amylase, proteases; stimulated by: secretin, CCK (potentiates secretin), parasympathetic nervous system.
■ Bile — major characteristics: bile salts, bilirubin, phospholipids, cholesterol; stimulated by: CCK (causes contraction of gallbladder and relaxation of sphincter of Oddi), parasympathetic nervous system (causes contraction of gallbladder); inhibited by: ileal resection.
CCK = cholecystokinin; GIP = glucose-dependent insulinotropic peptide.

A. Salivary secretion
1. Functions of saliva
a. Initial starch digestion by α-amylase (ptyalin) and initial triglyceride digestion by lingual lipase
b. Lubrication of ingested food by mucus
c. Protection of the mouth and esophagus by dilution and buffering of ingested foods
2. Composition of saliva
a. Saliva is characterized by
(1) High volume (relative to the small size of the salivary glands)
(2) High K+ and HCO3− concentrations
(3) Low Na+ and Cl− concentrations
(4) Hypotonicity
(5) Presence of α-amylase, lingual lipase, and kallikrein
b. The composition of saliva varies with the salivary flow rate (Figure 6.4).
(1) At the lowest flow rates, saliva has the lowest osmolarity and lowest Na+, Cl−, and HCO3− concentrations but has the highest K+ concentration.
(2) At the highest flow rates (up to 4 mL/min), the composition of saliva is closest to that of plasma.
3. Formation of saliva (Figure 6.5)
■ Saliva is formed by three major glands—the parotid, submandibular, and sublingual glands.
■ The structure of each gland is similar to a bunch of grapes. The acinus (the blind end of each duct) is lined with acinar cells and secretes an initial saliva. A branching duct system is lined with columnar epithelial cells, which modify the initial saliva.
■ When saliva production is stimulated, myoepithelial cells, which line the acinus and initial ducts, contract and eject saliva into the mouth.
a. The acinus
■ produces an initial saliva with a composition similar to plasma.
■ This initial saliva is isotonic and has the same Na+, K+, Cl−, and HCO3− concentrations as plasma.
b. The ducts
■ modify the initial saliva by the following processes:
(1) The ducts reabsorb Na+ and Cl−; therefore, the concentrations of these ions are lower than their plasma concentrations.
(2) The ducts secrete K+ and HCO3−; therefore, the concentrations of these ions are higher than their plasma concentrations.
(3) Aldosterone acts on the ductal cells to increase the reabsorption of Na+ and the secretion of K+ (analogous to its actions on the renal distal tubule).
(4) Saliva becomes hypotonic in the ducts because the ducts are relatively impermeable to water. Because more solute than water is reabsorbed by the ducts, the saliva becomes dilute relative to plasma.
(5) The effect of flow rate on saliva composition is explained primarily by changes in the contact time available for reabsorption and secretion processes to occur in the ducts.
■ Thus, at high flow rates, saliva is most like the initial secretion from the acinus; it has the highest Na+ and Cl− concentrations and the lowest K+ concentration.
■ At low flow rates, saliva is least like the initial secretion from the acinus; it has the lowest Na+ and Cl− concentrations and the highest K+ concentration.
■ The only ion that does not "fit" this contact-time explanation is HCO3−; HCO3− secretion is selectively stimulated when saliva secretion is stimulated.

Figure 6.4 Composition of saliva as a function of salivary flow rate. (Na+, Cl−, and osmolarity lie below plasma values; K+ and, with stimulation, HCO3− lie above plasma; the composition approaches that of plasma at the highest flow rates.)

Figure 6.5 Modification of saliva by ductal cells. (Acinar cells secrete an isotonic, plasma-like solution; ductal cells reabsorb Na+ and Cl− and secrete K+ and HCO3−, yielding a hypotonic saliva.)
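The contact-time argument in 3 b (5) can be sketched numerically: the slower the flow, the longer the plasma-like acinar fluid spends in the ducts, so Na+ and Cl− fall further and K+ rises further. The plasma concentrations below are typical textbook values; the exponential rate constant and the K+ ceiling are illustrative assumptions, and HCO3− (the exception noted above) is deliberately left out.

```python
import math

# Toy contact-time model of ductal modification of saliva (illustrative).
# Initial acinar saliva is plasma-like; the ducts reabsorb Na+ and Cl- and
# secrete K+, doing more of both the longer the fluid stays in the ducts.

PLASMA = {"Na": 140.0, "Cl": 105.0, "K": 4.0}   # mM, typical plasma values

def ductal_saliva(flow_ml_per_min):
    contact = 1.0 / flow_ml_per_min                          # slower flow -> longer contact
    na = PLASMA["Na"] * math.exp(-0.8 * contact)             # reabsorbed
    cl = PLASMA["Cl"] * math.exp(-0.8 * contact)             # reabsorbed
    k = PLASMA["K"] + 20.0 * (1 - math.exp(-0.8 * contact))  # secreted
    return {"Na": na, "Cl": cl, "K": k}

for flow in (0.25, 4.0):   # low vs high flow (text: up to ~4 mL/min)
    comp = ductal_saliva(flow)
    print(f"flow {flow} mL/min -> " + ", ".join(f"{ion}={v:.0f} mM" for ion, v in comp.items()))
```

At low flow the output is low in Na+ and Cl− and high in K+; at high flow it approaches the plasma-like acinar secretion, matching Figure 6.4.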
4. Regulation of saliva production (Figure 6.6)
■ Saliva production is controlled by the parasympathetic and sympathetic nervous systems (not by GI hormones).
■ Saliva production is unique in that it is increased by both parasympathetic and sympathetic activity. Parasympathetic activity is more important, however.
a. Parasympathetic stimulation (cranial nerves VII and IX)
■ increases saliva production by increasing transport processes in the acinar and ductal cells and by causing vasodilation.
■ Cholinergic receptors on acinar and ductal cells are muscarinic.
■ The second messenger is inositol 1,4,5-triphosphate (IP3) and increased intracellular [Ca2+].
■ Anticholinergic drugs (e.g., atropine) inhibit the production of saliva and cause dry mouth.
b. Sympathetic stimulation
■ increases the production of saliva and the growth of salivary glands, although the effects are smaller than those of parasympathetic stimulation.
■ Receptors on acinar and ductal cells are β-adrenergic.
■ The second messenger is cyclic adenosine monophosphate (cAMP).
c. Saliva production
■ is increased (via activation of the parasympathetic nervous system) by food in the mouth, smells, conditioned reflexes, and nausea.
■ is decreased (via inhibition of the parasympathetic nervous system) by sleep, dehydration, fear, and anticholinergic drugs.

Figure 6.6 Regulation of salivary secretion. (Parasympathetic input, via ACh on muscarinic receptors with IP3/Ca2+ as second messengers, is driven by conditioning, food, nausea, and smell, is blocked by atropine and other anticholinergic drugs, and is inhibited by dehydration, fear, and sleep; sympathetic input acts via NE on β receptors and cAMP.) ACh = acetylcholine; cAMP = cyclic adenosine monophosphate; IP3 = inositol 1,4,5-triphosphate; NE = norepinephrine.

B. Gastric secretion
1. Gastric cell types and their secretions (Table 6.3 and Figure 6.7)
■ Parietal cells, located in the body, secrete HCl and intrinsic factor.
■ Chief cells, located in the body, secrete pepsinogen.
■ G cells, located in the antrum, secrete gastrin.

Table 6.3 Gastric Cell Types and Their Secretions
■ Parietal cells — part of stomach: body (fundus); secretion products: HCl, intrinsic factor (essential); stimuli for secretion: gastrin, vagal stimulation (ACh), histamine.
■ Chief cells — part of stomach: body (fundus); secretion product: pepsinogen (converted to pepsin at low pH); stimulus for secretion: vagal stimulation (ACh).
■ G cells — part of stomach: antrum; secretion product: gastrin; stimuli for secretion: vagal stimulation (via GRP), small peptides; inhibited by somatostatin and by H+ in the stomach (via stimulation of somatostatin release).
■ Mucous cells — part of stomach: antrum; secretion products: mucus, pepsinogen; stimulus for secretion: vagal stimulation (ACh).
ACh = acetylcholine; GRP = gastrin-releasing peptide.

Figure 6.7 Gastric cell types and their functions. (Body/fundus: parietal cells secrete intrinsic factor and HCl; chief cells secrete pepsinogen. Antrum: G cells secrete gastrin, which stimulates parietal cell HCl secretion.)

2. Mechanism of gastric H+ secretion (Figure 6.8)
■ Parietal cells secrete HCl into the lumen of the stomach and, concurrently, absorb HCO3− into the bloodstream as follows:
a. In the parietal cells, CO2 and H2O are converted to H+ and HCO3−, catalyzed by carbonic anhydrase.
b. H+ is secreted into the lumen of the stomach by the H+–K+ pump (H+,K+-ATPase). Cl− is secreted along with H+; thus, the secretion product of the parietal cells is HCl.
■ The drug omeprazole (a "proton pump inhibitor") inhibits the H+,K+-ATPase and blocks H+ secretion.
c. The HCO3− produced in the cells is absorbed into the bloodstream in exchange for Cl− (Cl−–HCO3− exchange). As HCO3− is added to the venous blood, the pH of the blood increases ("alkaline tide"). (Eventually, this HCO3− will be secreted in pancreatic secretions to neutralize H+ in the small intestine.)
■ If vomiting occurs, gastric H+ never arrives in the small intestine, there is no stimulus for pancreatic HCO3− secretion, and the arterial blood becomes alkaline (metabolic alkalosis).

Figure 6.8 Simplified mechanism of H+ secretion by gastric parietal cells. (CO2 + H2O form H+ and HCO3− via carbonic anhydrase; H+ is pumped into the lumen in exchange for K+; Cl− follows into the lumen; HCO3− exits to blood in exchange for Cl−, producing the "alkaline tide.") CA = carbonic anhydrase.
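Written as a net scheme, the bookkeeping of IV B 2 (and Figure 6.8) is:

$$\mathrm{CO_2 + H_2O \xrightarrow{\text{carbonic anhydrase}} H_2CO_3 \longrightarrow H^+ + HCO_3^-}$$

$$\text{lumen: } \mathrm{H^+} + \mathrm{Cl^-} \rightarrow \mathrm{HCl} \quad (\mathrm{H^+}\text{ via the }\mathrm{H^+,K^+}\text{-ATPase, exchanged for }\mathrm{K^+})$$

$$\text{blood: } \mathrm{HCO_3^-}\text{ exits in exchange for }\mathrm{Cl^-} \quad (\text{the "alkaline tide"})$$

Each H+ delivered to the lumen is matched by one HCO3− delivered to blood, which is why the alkaline tide mirrors acid secretion, and why vomiting (acid lost, HCO3− retained) produces metabolic alkalosis.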
3. Stimulation of gastric H+ secretion (Figure 6.9)
a. Vagal stimulation
■ increases H+ secretion by a direct pathway and an indirect pathway.
■ In the direct path, the vagus nerve innervates parietal cells and stimulates H+ secretion directly. The neurotransmitter at these synapses is ACh, the receptor on the parietal cells is muscarinic (M3), and the second messengers are IP3 and increased intracellular [Ca2+].
■ In the indirect path, the vagus nerve innervates G cells and stimulates gastrin secretion, which then stimulates H+ secretion by an endocrine action. The neurotransmitter at these synapses is GRP (not ACh).
■ Atropine, a cholinergic muscarinic antagonist, inhibits H+ secretion by blocking the direct pathway, which uses ACh as a neurotransmitter. However, atropine does not block H+ secretion completely because it does not inhibit the indirect pathway, which uses GRP as a neurotransmitter.
■ Vagotomy eliminates both direct and indirect pathways.
b. Gastrin
■ is released in response to eating a meal (small peptides, distention of the stomach, vagal stimulation).
■ stimulates H+ secretion by interacting with the cholecystokinin B (CCKB) receptor on the parietal cells.
■ The second messenger for gastrin on the parietal cell is IP3/Ca2+.
■ Gastrin also stimulates enterochromaffin-like (ECL) cells and histamine secretion, which stimulates H+ secretion (not shown in figure).
c. Histamine
■ is released from ECL cells in the gastric mucosa and diffuses to the nearby parietal cells.
■ stimulates H+ secretion by activating H2 receptors on the parietal cell membrane.
■ The H2 receptor is coupled to adenylyl cyclase via a Gs protein.
■ The second messenger for histamine is cAMP.
■ H2 receptor–blocking drugs, such as cimetidine, inhibit H+ secretion by blocking the stimulatory effect of histamine.
d. Potentiating effects of ACh, histamine, and gastrin on H+ secretion
■ Potentiation occurs when the response to simultaneous administration of two stimulants is greater than the sum of responses to either agent given alone. As a result, low concentrations of stimulants given together can produce maximal effects.
■ Potentiation of gastric H+ secretion can be explained, in part, because each agent has a different mechanism of action on the parietal cell.
(1) Histamine potentiates the actions of ACh and gastrin in stimulating H+ secretion.
■ Thus, H2 receptor blockers (e.g., cimetidine) are particularly effective in treating ulcers because they block both the direct action of histamine on parietal cells and the potentiating effects of histamine on ACh and gastrin.
(2) ACh potentiates the actions of histamine and gastrin in stimulating H+ secretion.
■ Thus, muscarinic receptor blockers, such as atropine, block both the direct action of ACh on parietal cells and the potentiating effects of ACh on histamine and gastrin.
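The definition of potentiation in d can be illustrated with toy numbers. The response units and coefficients below are invented; only the inequality they demonstrate comes from the text.

```python
# Potentiation (3 d): the response to two stimulants given together exceeds
# the sum of the responses to each agent given alone. Numbers are illustrative.

def h_secretion(ach=0.0, histamine=0.0):
    separate = 10 * ach + 10 * histamine   # each agent's own effect
    interaction = 15 * ach * histamine     # synergy via different second messengers
    return separate + interaction

alone_ach = h_secretion(ach=1)                # 10
alone_hist = h_secretion(histamine=1)         # 10
together = h_secretion(ach=1, histamine=1)    # 35 > 10 + 10
print(alone_ach, alone_hist, together)
print("potentiation:", together > alone_ach + alone_hist)
```

This is also why blocking one agent (cimetidine, atropine) removes more than that agent's own direct effect: the interaction term disappears as well.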
4. Inhibition of gastric H+ secretion
■ Negative feedback mechanisms inhibit the secretion of H+ by the parietal cells.
a. Low pH (<3.0) in the stomach
■ inhibits gastrin secretion and thereby inhibits H+ secretion.
■ After a meal is ingested, H+ secretion is stimulated by the mechanisms discussed previously (see IV B 3). After the meal is digested and the stomach emptied, further H+ secretion decreases the pH of the stomach contents. When the pH of the stomach contents is <3.0, gastrin secretion is inhibited and, by negative feedback, inhibits further H+ secretion.
b. Somatostatin (see Figure 6.9)
■ inhibits gastric H+ secretion by a direct pathway and an indirect pathway.
■ In the direct pathway, somatostatin binds to receptors on the parietal cell that are coupled to adenylyl cyclase via a Gi protein, thus inhibiting adenylyl cyclase and decreasing cAMP levels. In this pathway, somatostatin antagonizes the stimulatory action of histamine on H+ secretion.
■ In the indirect pathways (not shown in Figure 6.9), somatostatin inhibits release of histamine and gastrin, thus decreasing H+ secretion indirectly.
c. Prostaglandins (see Figure 6.9)
■ inhibit gastric H+ secretion by activating a Gi protein, inhibiting adenylyl cyclase and decreasing cAMP levels.

Figure 6.9 Agents that stimulate and inhibit H+ secretion by gastric parietal cells. (Stimulators: vagal ACh on the M3 receptor via Gq and IP3/Ca2+, blocked by atropine; gastrin on the CCKB receptor via IP3/Ca2+; histamine from ECL cells on the H2 receptor via Gs and cAMP, blocked by cimetidine; the vagus also stimulates G cells. Inhibitors: somatostatin from D cells and prostaglandins, both via Gi; omeprazole blocks the H+,K+-ATPase itself.) ACh = acetylcholine; cAMP = cyclic adenosine monophosphate; CCK = cholecystokinin; ECL = enterochromaffin-like; IP3 = inositol 1,4,5-triphosphate; M = muscarinic.

5. Peptic ulcer disease
■ is an ulcerative lesion of the gastric or duodenal mucosa.
■ can occur when there is loss of the protective mucous barrier (of mucus and HCO3−) and/or excessive secretion of H+ and pepsin.
■ Protective factors are mucus, HCO3−, prostaglandins, mucosal blood flow, and growth factors.
■ Damaging factors are H+, pepsin, Helicobacter pylori (H. pylori), nonsteroidal anti-inflammatory drugs (NSAIDs), stress, smoking, and alcohol.
a. Gastric ulcers
■ The gastric mucosa is damaged.
■ Gastric H+ secretion is decreased because secreted H+ leaks back through the damaged gastric mucosa.
■ Gastrin levels are increased because decreased H+ secretion stimulates gastrin secretion.
■ A major cause of gastric ulcer is the gram-negative bacterium Helicobacter pylori (H. pylori).
■ H. pylori colonizes the gastric mucus and releases cytotoxins that damage the gastric mucosa.
■ H. pylori contains urease, which converts urea to NH3, thus alkalinizing the local environment and permitting H. pylori to survive in the otherwise acidic gastric lumen.
■ The diagnostic test for H. pylori involves drinking a solution of 13C-urea, which is converted to 13CO2 by urease and measured in the expired air.
b. Duodenal ulcers
■ The duodenal mucosa is damaged.
■ Gastric H+ secretion is increased. Excess H+ is delivered to the duodenum, damaging the duodenal mucosa.
■ Gastrin secretion in response to a meal is increased (although baseline gastrin may be normal).
■ H. pylori is also a major cause of duodenal ulcer. H. pylori inhibits somatostatin secretion (thus stimulating gastric H+ secretion) and inhibits intestinal HCO3− secretion (so there is insufficient HCO3− to neutralize the H+ load from the stomach).
c. Zollinger–Ellison syndrome
■ occurs when a gastrin-secreting tumor of the pancreas causes increased H+ secretion.
■ H+ secretion continues unabated because the gastrin secreted by pancreatic tumor cells is not subject to negative feedback inhibition by H+.
6. Drugs that block gastric H+ secretion (see Figure 6.9)
a. Atropine
■ blocks H+ secretion by inhibiting cholinergic muscarinic receptors on parietal cells, thereby inhibiting ACh stimulation of H+ secretion.
b. Cimetidine
■ blocks H2 receptors and thereby inhibits histamine stimulation of H+ secretion.
■ is particularly effective in reducing H+ secretion because it not only blocks the histamine stimulation of H+ secretion but also blocks histamine's potentiation of ACh effects.
c. Omeprazole
■ is a proton pump inhibitor.
■ directly inhibits the H+,K+-ATPase and, thereby, H+ secretion.
C. Pancreatic secretion
■ contains a high concentration of HCO3−, whose purpose is to neutralize the acidic chyme that reaches the duodenum.
■ contains enzymes essential for the digestion of protein, carbohydrate, and fat.
1. Composition of pancreatic secretion
a. Pancreatic juice is characterized by
(1) High volume
(2) Virtually the same Na+ and K+ concentrations as plasma
(3) Much higher HCO3− concentration than plasma
(4) Much lower Cl− concentration than plasma
(5) Isotonicity
(6) Pancreatic lipase, amylase, and proteases
b. The composition of the aqueous component of pancreatic secretion varies with the flow rate (Figure 6.10; a numerical sketch follows at the end of this section).
■ At low flow rates, the pancreas secretes an isotonic fluid that is composed mainly of Na+ and Cl−.
■ At high flow rates, the pancreas secretes an isotonic fluid that is composed mainly of Na+ and HCO3−.
■ Regardless of the flow rate, pancreatic secretions are isotonic.

Figure 6.10 Composition of pancreatic secretion as a function of pancreatic flow rate. (Na+ and K+ remain near plasma values at all flow rates; HCO3− rises above plasma and Cl− falls below plasma as flow increases.)

2. Formation of pancreatic secretion (Figure 6.11)
■ Like the salivary glands, the exocrine pancreas resembles a bunch of grapes.
■ The acinar cells of the exocrine pancreas make up most of its weight.
a. Acinar cells
■ produce a small volume of initial pancreatic secretion, which is mainly Na+ and Cl−.
b. Ductal cells
■ modify the initial pancreatic secretion by secreting HCO3− and absorbing Cl− via a Cl−–HCO3− exchange mechanism in the luminal membrane.
■ Because the pancreatic ducts are permeable to water, H2O moves into the lumen to make the pancreatic secretion isosmotic.

Figure 6.11 Modification of pancreatic secretion by ductal cells. (In the ductal cell, CO2 + H2O form H+ and HCO3− via carbonic anhydrase; HCO3− is secreted into the duct lumen in exchange for Cl−, with Na+ accompanying it.) CA = carbonic anhydrase.

3. Stimulation of pancreatic secretion
a. Secretin
■ is secreted by the S cells of the duodenum in response to H+ in the duodenal lumen.
■ acts on the pancreatic ductal cells to increase HCO3− secretion.
■ Thus, when H+ is delivered from the stomach to the duodenum, secretin is released. As a result, HCO3− is secreted from the pancreas into the duodenal lumen to neutralize the H+.
■ The second messenger for secretin is cAMP.
b. CCK
■ is secreted by the I cells of the duodenum in response to small peptides, amino acids, and fatty acids in the duodenal lumen.
■ acts on the pancreatic acinar cells to increase enzyme secretion (amylase, lipases, proteases).
■ potentiates the effect of secretin on ductal cells to stimulate HCO3− secretion.
■ The second messengers for CCK are IP3 and increased intracellular [Ca2+]. The potentiating effects of CCK on secretin are explained by the different mechanisms of action for the two GI hormones (i.e., cAMP for secretin and IP3/Ca2+ for CCK).
c. ACh (via vagovagal reflexes)
■ is released in response to H+, small peptides, amino acids, and fatty acids in the duodenal lumen.
■ stimulates enzyme secretion by the acinar cells and, like CCK, potentiates the effect of secretin on HCO3− secretion.
4. Cystic fibrosis
■ is a disorder of pancreatic secretion.
■ results from a defect in Cl− channels that is caused by a mutation in the cystic fibrosis transmembrane conductance regulator (CFTR) gene.
■ is associated with a deficiency of pancreatic enzymes resulting in malabsorption and steatorrhea.
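The flow-rate behavior in 1 b follows from the ductal Cl−–HCO3− exchange in 2 b: one anion is swapped for the other, so their sum (and hence tonicity) stays roughly constant while the mix shifts toward HCO3− as secretin-stimulated flow rises. A sketch with illustrative constants; the anion total, the linear flow dependence, and the flow units are all assumptions.

```python
# Toy model of pancreatic juice anions vs flow (C 1 b, C 2 b). Stimulated
# (high) flow carries a high-HCO3- ductal secretion; Cl-/HCO3- exchange keeps
# the anion sum -- and tonicity -- roughly constant at every flow rate.

ANION_TOTAL = 130.0   # mM, illustrative constant anion sum

def pancreatic_anions(flow, flow_max=1.0):
    hco3 = 30.0 + 90.0 * (flow / flow_max)   # rises with secretin-driven flow
    return {"HCO3": hco3, "Cl": ANION_TOTAL - hco3}

for flow in (0.1, 1.0):   # unstimulated vs secretin-stimulated (arbitrary units)
    a = pancreatic_anions(flow)
    print(f"flow {flow}: HCO3={a['HCO3']:.0f} mM, Cl={a['Cl']:.0f} mM, sum={sum(a.values()):.0f} mM")
```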
D. Bile secretion and gallbladder function (Figure 6.12)
1. Composition and function of bile
■ Bile contains bile salts, phospholipids, cholesterol, and bile pigments (bilirubin).
a. Bile salts
■ are amphipathic molecules because they have both hydrophilic and hydrophobic portions. In aqueous solution, bile salts orient themselves around droplets of lipid and keep the lipid droplets dispersed (emulsification).
■ aid in the intestinal digestion and absorption of lipids by emulsifying and solubilizing them in micelles.
b. Micelles
■ Above a critical micellar concentration, bile salts form micelles.
■ Bile salts are positioned on the outside of the micelle, with their hydrophilic portions dissolved in the aqueous solution of the intestinal lumen and their hydrophobic portions dissolved in the micelle interior.
■ Free fatty acids and monoglycerides are present in the inside of the micelle, essentially "solubilized" for subsequent absorption.
2. Formation of bile
■ Bile is produced continuously by hepatocytes.
■ Bile drains into the hepatic ducts and is stored in the gallbladder for subsequent release.
■ Choleretic agents increase the formation of bile.
■ Bile is formed by the following process:
a. Primary bile acids (cholic acid and chenodeoxycholic acid) are synthesized from cholesterol by hepatocytes.
■ In the intestine, bacteria convert a portion of each of the primary bile acids to secondary bile acids (deoxycholic acid and lithocholic acid).
■ Synthesis of new bile acids occurs, as needed, to replace bile acids that are excreted in the feces.
b. The bile acids are conjugated with glycine or taurine to form their respective bile salts, which are named for the parent bile acid (e.g., taurocholic acid is cholic acid conjugated with taurine).
c. Electrolytes and H2O are added to the bile.
d. During the interdigestive period, the gallbladder is relaxed, the sphincter of Oddi is closed, and the gallbladder fills with bile.
e. Bile is concentrated in the gallbladder as a result of isosmotic absorption of solutes and H2O.
3. Contraction of the gallbladder
a. CCK
■ is released in response to small peptides and fatty acids in the duodenum.
■ tells the gallbladder that bile is needed to emulsify and absorb lipids in the duodenum.
■ causes contraction of the gallbladder and relaxation of the sphincter of Oddi.
b. ACh
■ causes contraction of the gallbladder.

Figure 6.12 Recirculation of bile acids from the ileum to the liver. (The liver synthesizes bile salts from cholesterol; the gallbladder stores bile and, under CCK, delivers it past the sphincter of Oddi into the duodenum, where micelles form; a Na+–bile acid cotransporter in the ileum returns bile salts to the liver.) CCK = cholecystokinin.

4. Recirculation of bile acids to the liver
■ The terminal ileum contains a Na+–bile acid cotransporter, which is a secondary active transporter that recirculates bile acids to the liver.
■ Because bile acids are not recirculated until they reach the terminal ileum, bile acids are present for maximal absorption of lipids throughout the upper small intestine.
■ After ileal resection, bile acids are not recirculated to the liver but are excreted in feces. The bile acid pool is thereby depleted, and fat absorption is impaired, resulting in steatorrhea.
V. Digestion and Absorption (Table 6.4)
■ Carbohydrates, proteins, and lipids are digested and absorbed in the small intestine.
■ The surface area for absorption in the small intestine is greatly increased by the presence of the brush border.

Table 6.4 Summary of Digestion and Absorption
■ Carbohydrates — digestion: to monosaccharides (glucose, galactose, fructose); site of absorption: small intestine; mechanism of absorption: Na+-dependent cotransport (glucose, galactose), facilitated diffusion (fructose).
■ Proteins — digestion: to amino acids, dipeptides, tripeptides; site: small intestine; mechanism: Na+-dependent cotransport (amino acids), H+-dependent cotransport (di- and tripeptides).
■ Lipids — digestion: to fatty acids, monoglycerides, cholesterol; site: small intestine; mechanism: micelles form with bile salts in the intestinal lumen; fatty acids, monoglycerides, and cholesterol diffuse into the cell; re-esterification in the cell to triglycerides and phospholipids; chylomicrons form in the cell (requires apoprotein) and are transferred to lymph.
■ Fat-soluble vitamins — site: small intestine; mechanism: micelles with bile salts.
■ Water-soluble vitamins — site: small intestine; mechanism: Na+-dependent cotransport.
■ Vitamin B12 — site: ileum of small intestine; mechanism: intrinsic factor–vitamin B12 complex.
■ Bile acids — site: ileum of small intestine; mechanism: Na+-dependent cotransport; recirculated to liver.
■ Ca2+ — site: small intestine; mechanism: vitamin D dependent (calbindin D-28K).
■ Fe2+ — digestion: Fe3+ is reduced to Fe2+; site: small intestine; mechanism: binds to apoferritin in the cell; circulates in blood bound to transferrin.

A. Carbohydrates
1. Digestion of carbohydrates
■ Only monosaccharides are absorbed. Carbohydrates must be digested to glucose, galactose, and fructose for absorption to proceed.
a. α-Amylases (salivary and pancreatic) hydrolyze 1,4-glycosidic bonds in starch, yielding maltose, maltotriose, and α-limit dextrins.
b. Maltase, α-dextrinase, and sucrase in the intestinal brush border then hydrolyze the oligosaccharides to glucose.
c. Lactase, trehalase, and sucrase degrade their respective disaccharides to monosaccharides.
■ Lactase degrades lactose to glucose and galactose.
■ Trehalase degrades trehalose to glucose.
■ Sucrase degrades sucrose to glucose and fructose.
2. Absorption of carbohydrates (Figure 6.13)
a. Glucose and galactose
■ are transported from the intestinal lumen into the cells by a Na+-dependent cotransporter (SGLT 1) in the luminal membrane. The sugar is transported "uphill" and Na+ is transported "downhill."
■ are then transported from cell to blood by facilitated diffusion (GLUT 2).
■ The Na+–K+ pump in the basolateral membrane keeps the intracellular [Na+] low, thus maintaining the Na+ gradient across the luminal membrane.
■ Poisoning the Na+–K+ pump inhibits glucose and galactose absorption by dissipating the Na+ gradient.
b. Fructose
■ is transported exclusively by facilitated diffusion; therefore, it cannot be absorbed against a concentration gradient.

Figure 6.13 Mechanism of absorption of monosaccharides by intestinal epithelial cells. (Glucose and galactose are absorbed by Na+-dependent cotransport [secondary active] across the luminal membrane and leave the cell by facilitated diffusion; the basolateral Na+–K+ pump maintains the Na+ gradient. Fructose, not shown, is absorbed by facilitated diffusion.)

3. Clinical disorders of carbohydrate absorption
■ Lactose intolerance results from the absence of brush border lactase and, thus, the inability to hydrolyze lactose to glucose and galactose for absorption. Nonabsorbed lactose and H2O remain in the lumen of the GI tract and cause osmotic diarrhea.
B. Proteins
1. Digestion of proteins
a. Endopeptidases
■ degrade proteins by hydrolyzing interior peptide bonds.
b. Exopeptidases
■ hydrolyze one amino acid at a time from the C terminus of proteins and peptides.
c. Pepsin
■ is not essential for protein digestion.
■ is secreted as pepsinogen by the chief cells of the stomach.
■ Pepsinogen is activated to pepsin by gastric H+.
■ The optimum pH for pepsin is between 1 and 3.
■ When the pH is >5, pepsin is denatured. Thus, in the intestine, as HCO3− is secreted in pancreatic fluids, duodenal pH increases and pepsin is inactivated.
d. Pancreatic proteases
■ include trypsin, chymotrypsin, elastase, carboxypeptidase A, and carboxypeptidase B.
■ are secreted in inactive forms that are activated in the small intestine as follows:
(1) Trypsinogen is activated to trypsin by a brush border enzyme, enterokinase.
(2) Trypsin then converts chymotrypsinogen, proelastase, and procarboxypeptidase A and B to their active forms. (Even trypsinogen is converted to more trypsin by trypsin!)
(3) After their digestive work is complete, the pancreatic proteases degrade each other and are absorbed along with dietary proteins.
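The activation sequence in B 1 d is a small dependency graph with one autocatalytic edge (trypsin activating its own zymogen). The enzyme names come from the text; the graph encoding and the fixed-point loop are illustrative.

```python
# Activation cascade for pancreatic proteases (B 1 d): enterokinase activates
# trypsinogen; trypsin then activates the remaining zymogens -- including
# more trypsinogen (autocatalysis).

ACTIVATED_BY = {
    "trypsinogen": {"enterokinase", "trypsin"},   # note the autocatalytic edge
    "chymotrypsinogen": {"trypsin"},
    "proelastase": {"trypsin"},
    "procarboxypeptidase A": {"trypsin"},
    "procarboxypeptidase B": {"trypsin"},
}

ACTIVE_FORM = {"trypsinogen": "trypsin", "chymotrypsinogen": "chymotrypsin",
               "proelastase": "elastase",
               "procarboxypeptidase A": "carboxypeptidase A",
               "procarboxypeptidase B": "carboxypeptidase B"}

present = {"enterokinase"}   # the brush border enzyme starts the cascade
changed = True
while changed:               # keep activating until no new enzyme appears
    changed = False
    for zymogen, activators in ACTIVATED_BY.items():
        product = ACTIVE_FORM[zymogen]
        if product not in present and activators & present:
            present.add(product)
            changed = True

print(sorted(present - {"enterokinase"}))   # all five active proteases
```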
2. Absorption of proteins (Figure 6.14)
■ Digestive products of protein can be absorbed as amino acids, dipeptides, and tripeptides (in contrast to carbohydrates, which can only be absorbed as monosaccharides).
a. Free amino acids
■ Na+-dependent amino acid cotransport occurs in the luminal membrane. It is analogous to the cotransporter for glucose and galactose.
■ The amino acids are then transported from cell to blood by facilitated diffusion.
■ There are four separate carriers for neutral, acidic, basic, and imino amino acids, respectively.
b. Dipeptides and tripeptides
■ are absorbed faster than free amino acids.
■ H+-dependent cotransport of dipeptides and tripeptides also occurs in the luminal membrane.
■ After the dipeptides and tripeptides are transported into the intestinal cells, cytoplasmic peptidases hydrolyze them to amino acids.
■ The amino acids are then transported from cell to blood by facilitated diffusion.

Figure 6.14 Mechanism of absorption of amino acids, dipeptides, and tripeptides by intestinal epithelial cells. (Amino acids enter by Na+-dependent cotransport; dipeptides and tripeptides enter by H+-dependent cotransport and are hydrolyzed to amino acids by cytoplasmic peptidases; amino acids exit to blood by facilitated diffusion.)

C. Lipids
1. Digestion of lipids
a. Stomach
(1) In the stomach, mixing breaks lipids into droplets to increase the surface area for digestion by pancreatic enzymes.
(2) Lingual lipases digest some of the ingested triglycerides to monoglycerides and fatty acids. However, most of the ingested lipids are digested in the intestine by pancreatic lipases.
(3) CCK slows gastric emptying. Thus, delivery of lipids from the stomach to the duodenum is slowed to allow adequate time for digestion and absorption in the intestine.
b. Small intestine
(1) Bile acids emulsify lipids in the small intestine, increasing the surface area for digestion.
(2) Pancreatic lipases hydrolyze lipids to fatty acids, monoglycerides, cholesterol, and lysolecithin. The enzymes are pancreatic lipase, cholesterol ester hydrolase, and phospholipase A2.
(3) The hydrophobic products of lipid digestion are solubilized in micelles by bile acids.
2. Absorption of lipids
a. Micelles bring the products of lipid digestion into contact with the absorptive surface of the intestinal cells. Then, fatty acids, monoglycerides, and cholesterol diffuse across the luminal membrane into the cells. Glycerol is hydrophilic and is not contained in the micelles.
b. In the intestinal cells, the products of lipid digestion are re-esterified to triglycerides, cholesterol ester, and phospholipids and, with apoproteins, form chylomicrons.
■ Lack of apoprotein B results in the inability to transport chylomicrons out of the intestinal cells and causes abetalipoproteinemia.
c. Chylomicrons are transported out of the intestinal cells by exocytosis. Because chylomicrons are too large to enter the capillaries, they are transferred to lymph vessels and are added to the bloodstream via the thoracic duct.
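The digestion-to-chylomicron sequence in C 1–2 reads as a pipeline, which the sketch below walks stage by stage. The stage descriptions paraphrase the text; the function itself, including the apoprotein B switch for abetalipoproteinemia, is an illustrative framing.

```python
# Lipid handling pipeline (C 1-2): luminal digestion, micellar solubilization,
# diffusion into the enterocyte, re-esterification, and chylomicron export.

PIPELINE = [
    ("emulsification", "bile acids disperse lipid droplets"),
    ("hydrolysis", "pancreatic lipases -> fatty acids, monoglycerides, cholesterol"),
    ("micelle", "bile salts solubilize the hydrophobic products"),
    ("diffusion", "products cross the luminal membrane into the cell"),
    ("re-esterification", "triglycerides, cholesterol ester, phospholipids re-formed"),
    ("chylomicron", "requires apoprotein B; exocytosis to lymph, then blood"),
]

def absorb(apoprotein_b=True):
    for stage, what in PIPELINE:
        if stage == "chylomicron" and not apoprotein_b:
            return "abetalipoproteinemia: chylomicrons cannot leave the cell"
        print(f"{stage}: {what}")
    return "delivered via the thoracic duct"

print(absorb(apoprotein_b=True))
```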
3. Malabsorption of lipids—steatorrhea
■ can be caused by any of the following:
a. Pancreatic disease (e.g., pancreatitis, cystic fibrosis), in which the pancreas cannot synthesize adequate amounts of the enzymes (e.g., pancreatic lipase) needed for lipid digestion.
b. Hypersecretion of gastrin, in which gastric H+ secretion is increased and the duodenal pH is decreased. Low duodenal pH inactivates pancreatic lipase.
c. Ileal resection, which leads to a depletion of the bile acid pool because the bile acids do not recirculate to the liver.
d. Bacterial overgrowth, which may lead to deconjugation of bile acids and their "early" absorption in the upper small intestine. In this case, bile acids are not present throughout the small intestine to aid in lipid absorption.
e. Decreased number of intestinal cells for lipid absorption (tropical sprue).
f. Failure to synthesize apoprotein B, which leads to the inability to form chylomicrons.
D. Absorption and secretion of electrolytes and H2O
■ Electrolytes and H2O may cross intestinal epithelial cells by either cellular or paracellular (between cells) routes.
■ Tight junctions attach the epithelial cells to one another at the luminal membrane.
■ The permeability of the tight junctions varies with the type of epithelium. A "tight" (impermeable) epithelium is the colon. "Leaky" (permeable) epithelia are the small intestine and gallbladder.
1. Absorption of NaCl
a. Na+ moves into the intestinal cells, across the luminal membrane, and down its electrochemical gradient by the following mechanisms:
(1) Passive diffusion (through Na+ channels)
(2) Na+–glucose or Na+–amino acid cotransport
(3) Na+–Cl− cotransport
(4) Na+–H+ exchange
■ In the small intestine, Na+–glucose cotransport, Na+–amino acid cotransport, and Na+–H+ exchange mechanisms are most important. These cotransport and exchange mechanisms are similar to those in the renal proximal tubule.
■ In the colon, passive diffusion via Na+ channels is most important. The Na+ channels of the colon are similar to those in the renal distal tubule and are stimulated by aldosterone.
b. Na+ is pumped out of the cell against its electrochemical gradient by the Na+–K+ pump in the basolateral membranes.
c. Cl− absorption accompanies Na+ absorption throughout the GI tract by the following mechanisms:
(1) Passive diffusion by a paracellular route
(2) Na+–Cl− cotransport
(3) Cl−–HCO3− exchange
2. Absorption and secretion of K+
a. Dietary K+ is absorbed in the small intestine by passive diffusion via a paracellular route.
b. K+ is actively secreted in the colon by a mechanism similar to that for K+ secretion in the renal distal tubule.
■ As in the distal tubule, K+ secretion in the colon is stimulated by aldosterone.
■ In diarrhea, K+ secretion by the colon is increased because of a flow rate–dependent mechanism similar to that in the renal distal tubule. Excessive loss of K+ in diarrheal fluid causes hypokalemia.
3. Absorption of H2O
■ is secondary to solute absorption.
■ is isosmotic in the small intestine and gallbladder. The mechanism for coupling solute and water absorption in these epithelia is the same as that in the renal proximal tubule.
■ In the colon, H2O permeability is much lower than in the small intestine, and feces may be hypertonic.
4. Secretion of electrolytes and H2O by the intestine
■ The GI tract also secretes electrolytes from blood to lumen.
■ The secretory mechanisms are located in the crypts. The absorptive mechanisms are located in the villi.
a. Cl− is the primary ion secreted into the intestinal lumen.
It is transported through Cl− channels in the luminal membrane that are regulated by cAMP.
b. Na+ is secreted into the lumen by passively following Cl−. H2O follows NaCl to maintain isosmotic conditions.
c. Vibrio cholerae (cholera toxin) causes diarrhea by stimulating Cl− secretion.
■ Cholera toxin catalyzes adenosine diphosphate (ADP) ribosylation of the αs subunit of the Gs protein coupled to adenylyl cyclase, permanently activating it.
■ Intracellular cAMP increases; as a result, Cl− channels in the luminal membrane open.
■ Na+ and H2O follow Cl− into the lumen and lead to secretory diarrhea.
■ Some strains of Escherichia coli cause diarrhea by a similar mechanism.
E. Absorption of other substances
1. Vitamins
a. Fat-soluble vitamins (A, D, E, and K) are incorporated into micelles and absorbed along with other lipids.
b. Most water-soluble vitamins are absorbed by Na+-dependent cotransport mechanisms.
c. Vitamin B12 is absorbed in the ileum and requires intrinsic factor.
■ The vitamin B12–intrinsic factor complex binds to a receptor on the ileal cells and is absorbed.
■ Gastrectomy results in the loss of gastric parietal cells, which are the source of intrinsic factor. Injection of vitamin B12 is required to prevent pernicious anemia.
■ Ileectomy results in loss of absorption of the vitamin B12–intrinsic factor complex and thus requires injection of vitamin B12.
2. Calcium
■ absorption in the small intestine depends on the presence of adequate amounts of the active form of vitamin D, 1,25-dihydroxycholecalciferol, which is produced in the kidney. 1,25-Dihydroxycholecalciferol induces the synthesis of an intestinal Ca2+-binding protein, calbindin D-28K.
■ Vitamin D deficiency or chronic renal failure results in inadequate intestinal Ca2+ absorption, causing rickets in children and osteomalacia in adults.
3. Iron
■ is absorbed as heme iron (iron bound to hemoglobin or myoglobin) or as free Fe2+. In the intestinal cells, "heme iron" is degraded and free Fe2+ is released. The free Fe2+ binds to apoferritin and is transported into the blood.
■ Free Fe2+ circulates in the blood bound to transferrin, which transports it from the small intestine to its storage sites in the liver and from the liver to the bone marrow for the synthesis of hemoglobin.
■ Iron deficiency is the most common cause of anemia.

VI. Liver Physiology
A. Bile formation and secretion (see IV D)
B. Bilirubin production and excretion (Figure 6.15)
■ Hemoglobin is degraded to bilirubin by the reticuloendothelial system.
■ Bilirubin is carried in the circulation bound to albumin.
■ In the liver, bilirubin is conjugated with glucuronic acid via the enzyme UDP glucuronyl transferase.
■ A portion of conjugated bilirubin is excreted in the urine, and a portion is secreted into bile.
■ In the intestine, conjugated bilirubin is converted to urobilinogen, which is returned to the liver via the enterohepatic circulation, and urobilin and stercobilin, which are excreted in feces.
C. Metabolic functions of the liver
1. Carbohydrate metabolism
■ Performs gluconeogenesis, stores glucose as glycogen, and releases stored glucose into the circulation
2. Protein metabolism
■ Synthesizes nonessential amino acids
■ Synthesizes plasma proteins
3. Lipid metabolism
■ Participates in fatty acid oxidation
■ Synthesizes lipoproteins, cholesterol, and phospholipids
D. Detoxification
■ Potentially toxic substances are presented to the liver via the portal circulation.
■ The liver modifies these substances in "first-pass metabolism."
■ Phase I reactions are catalyzed by cytochrome P-450 enzymes, which are followed by phase II reactions that conjugate the substances.

Figure 6.15 Bilirubin metabolism. (Red blood cell hemoglobin is degraded through biliverdin to bilirubin in the reticuloendothelial system; bilirubin travels bound to albumin to the liver, where UDP glucuronyl transferase forms conjugated bilirubin; conjugated bilirubin is excreted in urine or secreted in bile; in the intestine it becomes urobilinogen, which recirculates to the liver via the enterohepatic circulation, and urobilin and stercobilin, which are excreted in feces.) UDP = uridine diphosphate.

Review Test

1. Which of the following substances is released from neurons in the GI tract and produces smooth muscle relaxation?
(A) Secretin
(B) Gastrin
(C) Cholecystokinin (CCK)
(D) Vasoactive intestinal peptide (VIP)
(E) Gastric inhibitory peptide (GIP)

2. Which of the following is the site of secretion of intrinsic factor?
(A) Gastric antrum
(B) Gastric fundus
(C) Duodenum
(D) Ileum
(E) Colon

3. Vibrio cholerae causes diarrhea because it
(A) increases HCO3− secretory channels in intestinal epithelial cells
(B) increases Cl− secretory channels in crypt cells
(C) prevents the absorption of glucose and causes water to be retained in the intestinal lumen isosmotically
(D) inhibits cyclic adenosine monophosphate (cAMP) production in intestinal epithelial cells
(E) inhibits inositol 1,4,5-triphosphate (IP3) production in intestinal epithelial cells

4. Cholecystokinin (CCK) has some gastrin-like properties because both CCK and gastrin
(A) are released from G cells in the stomach
(B) are released from I cells in the duodenum
(C) are members of the secretin-homologous family
(D) have five identical C-terminal amino acids
(E) have 90% homology of their amino acids

5. Which of the following is transported in intestinal epithelial cells by a Na+-dependent cotransport process?
(A) Fatty acids
(B) Triglycerides
(C) Fructose
(D) Alanine
(E) Oligopeptides

6. A 49-year-old male patient with severe Crohn disease has been unresponsive to drug therapy and undergoes ileal resection. After the surgery, he will have steatorrhea because
(A) the liver bile acid pool increases
(B) chylomicrons do not form in the intestinal lumen
(C) micelles do not form in the intestinal lumen
(D) dietary triglycerides cannot be digested
(E) the pancreas does not secrete lipase

7. Cholecystokinin (CCK) inhibits
(A) gastric emptying
(B) pancreatic HCO3− secretion
(C) pancreatic enzyme secretion
(D) contraction of the gallbladder
(E) relaxation of the sphincter of Oddi

8. Which of the following abolishes "receptive relaxation" of the stomach?
(A) Parasympathetic stimulation
(B) Sympathetic stimulation
(C) Vagotomy
(D) Administration of gastrin
(E) Administration of vasoactive intestinal peptide (VIP)
(F) Administration of cholecystokinin (CCK)

9. Secretion of which of the following substances is inhibited by low pH?
(A) Secretin
(B) Gastrin
(C) Cholecystokinin (CCK)
(D) Vasoactive intestinal peptide (VIP)
(E) Gastric inhibitory peptide (GIP)

10. Which of the following is the site of secretion of gastrin?
(A) Gastric antrum
(B) Gastric fundus
(C) Duodenum
(D) Ileum
(E) Colon

11. Micelle formation is necessary for the intestinal absorption of
(A) glycerol
(B) galactose
(C) leucine
(D) bile acids
(E) vitamin B12
(F) vitamin D

12. Which of the following changes occurs during defecation?
(A) Internal anal sphincter is relaxed (B) External anal sphincter is contracted (C) Rectal smooth muscle is relaxed (D) Intra-abdominal pressure is lower than when at rest (E) Segmentation contractions predominate
13. Which of the following is characteristic of saliva? (A) Hypotonicity relative to plasma (B) A lower HCO3− concentration than plasma (C) The presence of proteases (D) Secretion rate that is increased by vagotomy (E) Modification by the salivary ductal cells involves reabsorption of K+ and HCO3−
14. Which of the following substances is secreted in response to an oral glucose load? (A) Secretin (B) Gastrin (C) Cholecystokinin (CCK) (D) Vasoactive intestinal peptide (VIP) (E) Glucose-dependent insulinotropic peptide (GIP)
15. Which of the following is true about the secretion from the exocrine pancreas? (A) It has a higher Cl− concentration than does plasma (B) It is stimulated by the presence of HCO3− in the duodenum (C) Pancreatic HCO3− secretion is increased by gastrin (D) Pancreatic enzyme secretion is increased by cholecystokinin (CCK) (E) It is hypotonic
16. Which of the following substances must be further digested before it can be absorbed by specific carriers in intestinal cells? (A) Fructose (B) Sucrose (C) Alanine (D) Dipeptides (E) Tripeptides
17. Slow waves in small intestinal smooth muscle cells are (A) action potentials (B) phasic contractions (C) tonic contractions (D) oscillating resting membrane potentials (E) oscillating release of cholecystokinin (CCK)
18. A 24-year-old male graduate student participates in a clinical research study on intestinal motility. Peristalsis of the small intestine (A) mixes the food bolus (B) is coordinated by the central nervous system (CNS) (C) involves contraction of circular smooth muscle behind and in front of the food bolus (D) involves contraction of circular smooth muscle behind the food bolus and relaxation of circular smooth muscle in front of the bolus (E) involves relaxation of circular and longitudinal smooth muscle simultaneously throughout the small intestine
19. A 38-year-old male patient with a duodenal ulcer is treated successfully with the drug cimetidine. The basis for cimetidine's inhibition of gastric H+ secretion is that it (A) blocks muscarinic receptors on parietal cells (B) blocks H2 receptors on parietal cells (C) increases intracellular cyclic adenosine monophosphate (cAMP) levels (D) blocks H+,K+-adenosine triphosphatase (ATPase) (E) enhances the action of acetylcholine (ACh) on parietal cells
20. Which of the following substances inhibits gastric emptying? (A) Secretin (B) Gastrin (C) Cholecystokinin (CCK) (D) Vasoactive intestinal peptide (VIP) (E) Gastric inhibitory peptide (GIP)
21. When parietal cells are stimulated, they secrete (A) HCl and intrinsic factor (B) HCl and pepsinogen (C) HCl and HCO3− (D) HCO3− and intrinsic factor (E) mucus and pepsinogen
22. A 44-year-old woman is diagnosed with Zollinger–Ellison syndrome. Which of the following findings is consistent with the diagnosis? (A) Decreased serum gastrin levels (B) Increased serum insulin levels (C) Increased absorption of dietary lipids (D) Decreased parietal cell mass (E) Peptic ulcer disease
23. Which of the following is the site of Na+–bile acid cotransport? (A) Gastric antrum (B) Gastric fundus (C) Duodenum (D) Ileum (E) Colon
Answers and Explanations
1. The answer is D [II C 1].
Vasoactive intestinal peptide (VIP) is a gastrointestinal (GI) neurocrine that causes relaxation of GI smooth muscle. For example, VIP mediates the relaxation response of the lower esophageal sphincter when a bolus of food approaches it, allowing passage of the bolus into the stomach.
2. The answer is B [IV B 1; Table 6.3; Figure 6.7]. Intrinsic factor is secreted by the parietal cells of the gastric fundus (as is HCl). It is absorbed, with vitamin B12, in the ileum.
3. The answer is B [V D 4 c]. Cholera toxin activates adenylate cyclase and increases cyclic adenosine monophosphate (cAMP) in the intestinal crypt cells. In the crypt cells, cAMP activates the Cl−-secretory channels and produces a primary secretion of Cl−, with Na+ and H2O following.
4. The answer is D [II A 2]. The two hormones have five identical amino acids at the C terminus. Biologic activity of cholecystokinin (CCK) is associated with the seven C-terminal amino acids, and biologic activity of gastrin is associated with the four C-terminal amino acids. Because this CCK heptapeptide contains the five common amino acids, it is logical that CCK should have some gastrin-like properties. G cells secrete gastrin. I cells secrete CCK. The secretin family includes glucagon.
5. The answer is D [V A–C; Table 6.4]. Fructose is the only monosaccharide that is not absorbed by Na+-dependent cotransport; it is transported by facilitated diffusion. Amino acids are absorbed by Na+-dependent cotransport, but oligopeptides (larger peptide units) are not. Triglycerides are not absorbed without further digestion. The products of lipid digestion, such as fatty acids, are absorbed by simple diffusion.
6. The answer is C [IV D 4]. Ileal resection removes the portion of the small intestine that normally transports bile acids from the lumen of the gut and recirculates them to the liver. Because this process maintains the bile acid pool, new synthesis of bile acids is needed only to replace those bile acids that are lost in the feces. With ileal resection, most of the bile acids secreted are excreted in the feces, and the liver pool is significantly diminished. Bile acids are needed for micelle formation in the intestinal lumen to solubilize the products of lipid digestion so that they can be absorbed. Chylomicrons are formed within the intestinal epithelial cells and are transported to lymph vessels.
7. The answer is A [II A 2 a; Table 6.1]. Cholecystokinin (CCK) inhibits gastric emptying and therefore helps to slow the delivery of food from the stomach to the intestine during periods of high digestive activity. CCK stimulates both functions of the exocrine pancreas, HCO3− secretion and digestive enzyme secretion. It also stimulates the delivery of bile from the gallbladder to the small intestinal lumen by causing contraction of the gallbladder while relaxing the sphincter of Oddi.
8. The answer is C [III C 1]. "Receptive relaxation" of the orad region of the stomach is initiated when food enters the stomach from the esophagus. This parasympathetic (vagovagal) reflex is abolished by vagotomy.
9. The answer is B [II A 1; Table 6.1]. Gastrin's principal physiologic action is to increase H+ secretion. H+ secretion decreases the pH of the stomach contents. The decreased pH, in turn, inhibits further secretion of gastrin, a classic example of negative feedback.
10. The answer is A [II A 1 b; Table 6.3; Figure 6.7]. Gastrin is secreted by the G cells of the gastric antrum. HCl and intrinsic factor are secreted by the fundus.
11. The answer is F [V E 1; Table 6.4]. Micelles provide a mechanism for solubilizing fat-soluble nutrients in the aqueous solution of the intestinal lumen until the nutrients can be brought into contact with and absorbed by the intestinal epithelial cells. Because vitamin D is fat soluble, it is absorbed in the same way as other dietary lipids. Glycerol is one product of lipid digestion that is water soluble and is not included in micelles. Galactose and leucine are absorbed by Na+-dependent cotransport. Although bile acids are a key ingredient of micelles, they are absorbed by a specific Na+-dependent cotransporter in the ileum. Vitamin B12 is water soluble; thus, its absorption does not require micelles.
12. The answer is A [III E 3]. Both the internal and external anal sphincters must be relaxed to allow feces to be expelled from the body. Rectal smooth muscle contracts, and intra-abdominal pressure is elevated by expiring against a closed glottis (Valsalva maneuver). Segmentation contractions are prominent in the small intestine during digestion and absorption.
13. The answer is A [IV A 2 a; Table 6.2]. Saliva is characterized by hypotonicity and a high HCO3− concentration (relative to plasma) and by the presence of α-amylase and lingual lipase (not proteases). The high HCO3− concentration is achieved by secretion of HCO3− into saliva by the ductal cells (not reabsorption of HCO3−). Because control of saliva production is parasympathetic, it is abolished by vagotomy.
14. The answer is E [II A 4; Table 6.4]. Glucose-dependent insulinotropic peptide (GIP) is the only gastrointestinal (GI) hormone that is released in response to all three categories of nutrients: fat, protein, and carbohydrate. Oral glucose releases GIP, which, in turn, causes the release of insulin from the endocrine pancreas. This action of GIP explains why oral glucose is more effective than intravenous glucose in releasing insulin.
15. The answer is D [II A 2, 3; Table 6.2]. The major anion in pancreatic secretions is HCO3− (which is found in higher concentration than in plasma), and the Cl− concentration is lower than in plasma. Pancreatic secretion is stimulated by the presence of fatty acids in the duodenum. Secretin (not gastrin) stimulates pancreatic HCO3− secretion, and cholecystokinin (CCK) stimulates pancreatic enzyme secretion. Pancreatic secretions are always isotonic, regardless of flow rate.
16. The answer is B [V A, B; Table 6.4]. Only monosaccharides can be absorbed by intestinal epithelial cells. Disaccharides, such as sucrose, must be digested to monosaccharides before they are absorbed. On the other hand, proteins are hydrolyzed to amino acids, dipeptides, or tripeptides, and all three forms are transported into intestinal cells for absorption.
17. The answer is D [III A; Figure 6.3]. Slow waves are oscillating resting membrane potentials of the gastrointestinal (GI) smooth muscle. The slow waves bring the membrane potential toward or to threshold, but are not themselves action potentials. If the membrane potential is brought to threshold by a slow wave, then action potentials occur, followed by contraction.
18. The answer is D [III D 2]. Peristalsis is contractile activity that is coordinated by the enteric nervous system (not the central nervous system [CNS]) and propels the intestinal contents forward. Normally, it takes place after sufficient mixing, digestion, and absorption have occurred.
To propel the food bolus forward, the circular smooth muscle must simultaneously contract behind the bolus and relax in front of the bolus; at the same time, longitudinal smooth muscle relaxes (lengthens) behind the bolus and contracts (shortens) in front of the bolus.
19. The answer is B [IV B 3 c, d (1), 6]. Cimetidine is a reversible inhibitor of H2 receptors on parietal cells and blocks H+ secretion. Cyclic adenosine monophosphate (cAMP) (the second messenger for histamine) levels would be expected to decrease, not increase. Cimetidine also blocks the action of acetylcholine (ACh) to stimulate H+ secretion. Omeprazole blocks H+,K+-adenosine triphosphatase (ATPase) directly.
20. The answer is C [II A 2 a; Table 6.1]. Cholecystokinin (CCK) is the most important hormone for digestion and absorption of dietary fat. In addition to causing contraction of the gallbladder, it inhibits gastric emptying. As a result, chyme moves more slowly from the stomach to the small intestine, thus allowing more time for fat digestion and absorption.
21. The answer is A [IV B 1; Table 6.3]. The gastric parietal cells secrete HCl and intrinsic factor. The chief cells secrete pepsinogen.
22. The answer is E [II A 1 d; V C 3 b]. Zollinger–Ellison syndrome (gastrinoma) is a tumor of the non–β-cell pancreas. The tumor secretes gastrin, which then circulates to the gastric parietal cells to produce increased H+ secretion, peptic ulcer, and parietal cell growth (trophic effect of gastrin). Because the tumor does not involve the pancreatic β-cells, insulin levels should be unaffected. Absorption of lipids is decreased (not increased) because increased H+ secretion decreases the pH of the intestinal lumen and inactivates pancreatic lipases.
23. The answer is D [IV D 4]. Bile salts are recirculated to the liver in the enterohepatic circulation via a Na+–bile acid cotransporter located in the ileum of the small intestine.
Chapter 7  Endocrine Physiology
I.  Overview of Hormones
A.  See Table 7.1 for a list of hormones, including abbreviations, glands of origin, and major actions.
B.  Hormone synthesis
1.  Protein and peptide hormone synthesis
■ Preprohormone synthesis occurs in the endoplasmic reticulum and is directed by a specific mRNA.
■ Signal peptides are cleaved from the preprohormone, producing a prohormone, which is transported to the Golgi apparatus.
■ Additional peptide sequences are cleaved in the Golgi apparatus to form the hormone, which is packaged in secretory granules for later release.
2.  Steroid hormone synthesis
■ Steroid hormones are derivatives of cholesterol (the biosynthetic pathways are described in V A 1).
3.  Amine hormone synthesis
■ Amine hormones (thyroid hormones, epinephrine, norepinephrine) are derivatives of tyrosine (the biosynthetic pathway for thyroid hormones is described in IV A).
C.  Regulation of hormone secretion
1.  Negative feedback
■ is the most commonly applied principle for regulating hormone secretion.
■ is self-limiting.
■ A hormone has biologic actions that, directly or indirectly, inhibit further secretion of the hormone.
■ For example, insulin is secreted by the pancreatic beta cells in response to an increase in blood glucose. In turn, insulin causes an increase in glucose uptake into cells that results in decreased blood glucose concentration. The decrease in blood glucose concentration then decreases further secretion of insulin.
2.  Positive feedback
■ is rare.
■ is explosive and self-reinforcing.
■ A hormone has biologic actions that, directly or indirectly, cause more secretion of the hormone.
■ For example, the surge of luteinizing hormone (LH) that occurs just before ovulation is a result of positive feedback of estrogen on the anterior pituitary. LH then acts on the ovaries and causes more secretion of estrogen. (Both feedback modes are illustrated in the numerical sketch below.)
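These two control modes can be made concrete with a small numerical sketch. The following Python snippet is illustrative only and not from the text: the gain and clearance constants and the linear response functions are arbitrary assumptions, chosen so that the insulin-like (negative) loop damps a perturbation while the LH-surge-like (positive) loop amplifies it.

    # Toy model of hormonal feedback (assumed parameters, not from the text).
    # Negative feedback: hormone action opposes the perturbation (self-limiting).
    # Positive feedback: hormone action reinforces the perturbation (self-reinforcing).

    def simulate(feedback_sign, steps=20, gain=0.4, clearance=0.3):
        """Return the trajectory of a regulated variable (arbitrary units
        above a set point of 0)."""
        level = 1.0  # initial perturbation above the set point
        trajectory = [level]
        for _ in range(steps):
            # Hormone secretion responds to the current level; its action
            # feeds back with the given sign (-1 negative, +1 positive).
            hormone = gain * level
            level = level - clearance * level + feedback_sign * hormone
            trajectory.append(level)
        return trajectory

    negative = simulate(feedback_sign=-1)  # insulin-like: perturbation decays
    positive = simulate(feedback_sign=+1)  # LH-surge-like: perturbation grows

    print("negative feedback:", [round(x, 2) for x in negative[:6]])
    print("positive feedback:", [round(x, 2) for x in positive[:6]])

Running this prints a decaying sequence (1.0, 0.3, 0.09, ...) for the negative loop and a growing one (1.0, 1.1, 1.21, ...) for the positive loop, which is the qualitative point of section I C.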
Table 7.1  Master List of Hormones (hormone | gland of origin | major actions)
Thyrotropin-releasing hormone (TRH) | Hypothalamus | Stimulates secretion of TSH and prolactin
Corticotropin-releasing hormone (CRH) | Hypothalamus | Stimulates secretion of ACTH
Gonadotropin-releasing hormone (GnRH) | Hypothalamus | Stimulates secretion of LH and FSH
Growth hormone–releasing hormone (GHRH) | Hypothalamus | Stimulates secretion of growth hormone
Somatotropin release–inhibiting hormone (somatostatin; SRIF) | Hypothalamus | Inhibits secretion of growth hormone
Prolactin-inhibiting factor (dopamine; PIF) | Hypothalamus | Inhibits secretion of prolactin
Thyroid-stimulating hormone (TSH) | Anterior pituitary | Stimulates synthesis and secretion of thyroid hormones
Follicle-stimulating hormone (FSH) | Anterior pituitary | Stimulates growth of ovarian follicles and estrogen secretion (ovary); promotes sperm maturation (testes)
Luteinizing hormone (LH) | Anterior pituitary | Stimulates ovulation, formation of corpus luteum, and synthesis of estrogen and progesterone (ovary); stimulates synthesis and secretion of testosterone (testes)
Growth hormone (GH) | Anterior pituitary | Stimulates protein synthesis and overall growth
Prolactin | Anterior pituitary | Stimulates milk production and breast development
Adrenocorticotropic hormone (ACTH) | Anterior pituitary | Stimulates synthesis and secretion of adrenal cortical hormones
Melanocyte-stimulating hormone (MSH) | Anterior pituitary | Stimulates melanin synthesis (role in humans uncertain)
Oxytocin | Posterior pituitary | Milk ejection; uterine contraction
Antidiuretic hormone (vasopressin; ADH) | Posterior pituitary | Stimulates H2O reabsorption by renal collecting ducts and contraction of arterioles
l-Thyroxine (T4) and triiodothyronine (T3) | Thyroid gland | Skeletal growth; ↑ O2 consumption; heat production; ↑ protein, fat, and carbohydrate use; maturation of nervous system (perinatal)
Glucocorticoids (cortisol) | Adrenal cortex | Stimulate gluconeogenesis; anti-inflammatory; immunosuppression
Estradiol | Ovary | Growth and development of female reproductive organs; follicular phase of menstrual cycle
Progesterone | Ovary | Luteal phase of menstrual cycle
Testosterone | Testes | Spermatogenesis; male secondary sex characteristics
Parathyroid hormone (PTH) | Parathyroid gland | ↑ serum [Ca2+]; ↓ serum [phosphate]
Calcitonin | Thyroid gland (parafollicular cells) | ↓ serum [Ca2+]
Aldosterone | Adrenal cortex | ↑ renal Na+ reabsorption; ↑ renal K+ secretion; ↑ renal H+ secretion
1,25-Dihydroxycholecalciferol | Kidney (activation) | ↑ intestinal Ca2+ absorption; ↑ bone mineralization
Insulin | Pancreas (beta cells) | ↓ blood [glucose]; ↓ blood [amino acid]; ↓ blood [fatty acid]
Glucagon | Pancreas (alpha cells) | ↑ blood [glucose]; ↑ blood [fatty acid]
Human chorionic gonadotropin (HCG) | Placenta | ↑ estrogen and progesterone synthesis in corpus luteum of pregnancy
Human placental lactogen (HPL) | Placenta | Same actions as growth hormone and prolactin during pregnancy
(See text for a more complete description of each hormone.)
D.  Regulation of receptors
■ Hormones determine the sensitivity of the target tissue by regulating the number or sensitivity of receptors.
1.  Down-regulation of receptors
■ A hormone decreases the number or affinity of receptors for itself or for another hormone. For example, in the uterus, progesterone down-regulates its own receptor and the receptor for estrogen.
2.  Up-regulation of receptors
■ A hormone increases the number or affinity of receptors for itself or for another hormone.
■ For example, in the ovary, estrogen up-regulates its own receptor and the receptor for LH.
II.  Cell Mechanisms and Second Messengers (Table 7.2)
Table 7.2  Mechanisms of Hormone Action
cAMP mechanism: ACTH; LH and FSH; TSH; ADH (V2 receptor); HCG; MSH; CRH; β1 and β2 receptors; calcitonin; PTH; glucagon
IP3 mechanism: GnRH; TRH; GHRH; angiotensin II; ADH (V1 receptor); oxytocin; α1 receptors
Steroid hormone mechanism: glucocorticoids; estrogen; testosterone; progesterone; aldosterone; vitamin D; thyroid hormone
Other mechanisms: activation of tyrosine kinase (insulin, IGF-1, growth hormone, prolactin); cGMP (ANP, nitric oxide)
(ANP = atrial natriuretic peptide; cAMP = cyclic adenosine monophosphate; cGMP = cyclic guanosine monophosphate; IGF = insulin-like growth factor; IP3 = inositol 1,4,5-triphosphate. See Table 7.1 for other abbreviations.)
A.  G proteins
■ are guanosine triphosphate (GTP)-binding proteins that couple hormone receptors to adjacent effector molecules. For example, in the cyclic adenosine monophosphate (cAMP) second messenger system, G proteins couple the hormone receptor to adenylate cyclase.
■ are used in the adenylate cyclase and inositol 1,4,5-triphosphate (IP3) second messenger systems.
■ have intrinsic GTPase activity.
■ have three subunits: α, β, and γ.
■ The α subunit can bind either guanosine diphosphate (GDP) or GTP. When GDP is bound to the α subunit, the G protein is inactive. When GTP is bound, the G protein is active.
■ G proteins can be either stimulatory (Gs) or inhibitory (Gi). Stimulatory or inhibitory activity resides in the α subunits, which are accordingly called αs and αi.
B.  Adenylate cyclase mechanism (Figure 7.1)
1.  Hormone binds to a receptor in the cell membrane (step 1).
2.  GDP is released from the G protein and replaced by GTP (step 2), which activates the G protein. The G protein then activates or inhibits adenylate cyclase. If the G protein is stimulatory (Gs), then adenylate cyclase will be activated. If the G protein is inhibitory (Gi), then adenylate cyclase will be inhibited (not shown). Intrinsic GTPase activity in the G protein converts GTP back to GDP (not shown).
3.  Activated adenylate cyclase then catalyzes the conversion of adenosine triphosphate (ATP) to cAMP (step 3).
4.  cAMP activates protein kinase A (step 4), which phosphorylates specific proteins (step 5), producing specific physiologic actions (step 6).
5.  cAMP is degraded to 5′-AMP by phosphodiesterase, which is inhibited by caffeine. Therefore, phosphodiesterase inhibitors would be expected to augment the physiologic actions of cAMP.
Figure 7.1 Mechanism of hormone action—adenylate cyclase. ATP = adenosine triphosphate; cAMP = cyclic adenosine monophosphate; GDP = guanosine diphosphate; GTP = guanosine triphosphate.
C.  IP3 mechanism (Figure 7.2)
1.  Hormone binds to a receptor in the cell membrane (step 1) and, via a G protein (step 2), activates phospholipase C (step 3).
2.  Phospholipase C liberates diacylglycerol and IP3 from membrane lipids (step 4).
3.  IP3 mobilizes Ca2+ from the endoplasmic reticulum (step 5). Together, Ca2+ and diacylglycerol activate protein kinase C (step 6), which phosphorylates proteins and causes specific physiologic actions (step 7).
Figure 7.2 Mechanism of hormone action—inositol 1,4,5-triphosphate (IP3)–Ca2+. GDP = guanosine diphosphate; GTP = guanosine triphosphate; PIP2 = phosphatidylinositol 4,5-diphosphate.
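The on/off logic of the G protein cycle in sections II A and II B can be summarized in a short sketch. This is a toy boolean model, not from the text: the class, the function names, and the reduction of graded concentrations to True/False are my assumptions.

    # Boolean sketch of the G protein / adenylate cyclase cascade (steps 1-6
    # above). Real signaling is continuous and concentration-dependent.
    from dataclasses import dataclass

    @dataclass
    class Cell:
        g_protein_bound_gtp: bool = False   # GDP bound -> inactive; GTP bound -> active
        adenylate_cyclase_active: bool = False
        camp_elevated: bool = False
        protein_kinase_a_active: bool = False

    def stimulate(cell: Cell, hormone_bound: bool, g_protein_stimulatory: bool) -> Cell:
        # Step 2: hormone binding exchanges GDP for GTP on the alpha subunit.
        cell.g_protein_bound_gtp = hormone_bound
        # A stimulatory G protein (Gs) activates adenylate cyclase; Gi would inhibit it.
        cell.adenylate_cyclase_active = cell.g_protein_bound_gtp and g_protein_stimulatory
        # Steps 3-4: adenylate cyclase converts ATP to cAMP, which activates protein kinase A.
        cell.camp_elevated = cell.adenylate_cyclase_active
        cell.protein_kinase_a_active = cell.camp_elevated
        return cell

    def gtpase_hydrolysis(cell: Cell) -> Cell:
        # Intrinsic GTPase activity converts GTP back to GDP, switching the cascade
        # off (cAMP is then degraded to 5'-AMP by phosphodiesterase).
        cell.g_protein_bound_gtp = False
        cell.adenylate_cyclase_active = False
        cell.camp_elevated = False
        cell.protein_kinase_a_active = False
        return cell

    cell = stimulate(Cell(), hormone_bound=True, g_protein_stimulatory=True)
    print(cell.protein_kinase_a_active)  # True: PKA goes on to phosphorylate target proteins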
D.  Catalytic receptor mechanisms
■ Hormone binds to extracellular receptors that have, or are associated with, enzymatic activity on the intracellular side of the membrane.
1.  Guanylyl cyclase
a.  Atrial natriuretic peptide (ANP) acts through receptor guanylyl cyclase, where the extracellular side of the receptor binds ANP and the intracellular side of the receptor has guanylyl cyclase activity. Activation of guanylyl cyclase converts GTP to cyclic GMP, which is the second messenger.
b.  Nitric oxide (NO) acts through cytosolic guanylyl cyclase. Activation of guanylyl cyclase converts GTP to cyclic GMP, which is the second messenger.
2.  Tyrosine kinases (Figure 7.3)
■ Hormone binds to extracellular receptors that have, or are associated with, tyrosine kinase activity. When activated, tyrosine kinase phosphorylates tyrosine moieties on proteins, leading to the hormone's physiologic actions.
a.  Receptor tyrosine kinase
■ Hormone binds to the extracellular side of the receptor.
■ The intracellular side of the receptor has intrinsic tyrosine kinase activity.
■ One type of receptor tyrosine kinase is a monomer (e.g., the receptor for nerve growth factor). Binding of hormone or ligand causes dimerization of the receptor, activation of intrinsic tyrosine kinase, and phosphorylation of tyrosine moieties.
■ Another type of receptor tyrosine kinase is a dimer (e.g., the receptors for insulin and insulin-like growth factor [IGF]). Binding of hormone activates intrinsic tyrosine kinase, leading to phosphorylation of tyrosine moieties.
■ Insulin receptors are also discussed in Section VI C 2.
b.  Tyrosine kinase–associated receptor
■ is the mechanism of action of growth hormone.
■ Growth hormone binds to the extracellular side of the receptor.
■ The intracellular side of the receptor does not have tyrosine kinase activity but is noncovalently associated with a tyrosine kinase (e.g., the Janus family of receptor-associated tyrosine kinases, JAK).
■ Binding of growth hormone causes dimerization of the receptor and activation of tyrosine kinase in the associated protein (e.g., JAK).
■ Targets of JAK include signal transducers and activators of transcription (STAT), which cause transcription of new mRNAs and new protein synthesis.
E.  Steroid hormone and thyroid hormone mechanism (Figure 7.4)
1.  Steroid (or thyroid) hormone diffuses across the cell membrane and binds to its receptor (step 1).
2.  The hormone–receptor complex enters the nucleus and dimerizes (step 2).
3.  The hormone–receptor dimers are transcription factors that bind to steroid-responsive elements (SREs) of DNA (step 3) and initiate DNA transcription (step 4).
4.  New messenger RNA is produced, leaves the nucleus, and is translated to synthesize new proteins (step 5).
5.  The new proteins that are synthesized have specific physiologic actions. For example, 1,25-dihydroxycholecalciferol induces the synthesis of calbindin D-28K, a Ca2+-binding protein in the intestine; aldosterone induces the synthesis of Na+ channels in the renal principal cells.
Figure 7.3 Tyrosine kinase receptors. Nerve growth factor and insulin utilize receptor tyrosine kinases. Growth hormone utilizes a tyrosine kinase–associated receptor. JAK = Janus family of receptor-associated tyrosine kinase; NGF = nerve growth factor.
Figure 7.4 Mechanism of hormone action—steroid hormones. SREs = steroid-responsive elements.
III.  Pituitary Gland (Hypophysis)
A.  Hypothalamic–pituitary relationships
1.  The anterior lobe of the pituitary gland is linked to the hypothalamus by the hypothalamic–hypophysial portal system. Thus, blood from the hypothalamus that contains high concentrations of hypothalamic hormones is delivered directly to the anterior pituitary. Hypothalamic hormones (e.g., growth hormone–releasing hormone [GHRH]) then stimulate or inhibit the release of anterior pituitary hormones (e.g., growth hormone).
2.  The posterior lobe of the pituitary gland is derived from neural tissue. The nerve cell bodies are located in hypothalamic nuclei. Posterior pituitary hormones are synthesized in the nerve cell bodies, packaged in secretory granules, and transported down the axons to the posterior pituitary for release into the circulation.
B.  Hormones of the anterior lobe of the pituitary
■ are growth hormone, prolactin, thyroid-stimulating hormone (TSH), LH, follicle-stimulating hormone (FSH), and adrenocorticotropic hormone (ACTH).
■ Growth hormone and prolactin are discussed in detail in this section. TSH, LH, FSH, and ACTH are discussed in context (e.g., TSH with thyroid hormone) in later sections of this chapter.
1.  TSH, LH, and FSH
■ belong to the same glycoprotein family. Each has an α subunit and a β subunit. The α subunits are identical. The β subunits are different and are responsible for the unique biologic activity of each hormone.
2.  ACTH, melanocyte-stimulating hormone (MSH), β-lipotropin, and β-endorphin (Figure 7.5)
■ are derived from a single precursor, proopiomelanocortin (POMC).
■ α-MSH and β-MSH are produced in the intermediary lobe, which is rudimentary in adult humans.
3.  Growth hormone (somatotropin)
■ is the most important hormone for normal growth to adult size.
■ is a single-chain polypeptide that is homologous with prolactin and human placental lactogen.
a.  Regulation of growth hormone secretion (Figure 7.6)
■ Growth hormone is released in pulsatile fashion.
■ Secretion is increased by sleep, stress, hormones related to puberty, starvation, exercise, and hypoglycemia.
■ Secretion is decreased by somatostatin, somatomedins, obesity, hyperglycemia, and pregnancy.
(1)  Hypothalamic control—GHRH and somatostatin
■ GHRH stimulates the synthesis and secretion of growth hormone.
■ Somatostatin inhibits secretion of growth hormone by blocking the response of the anterior pituitary to GHRH.
(2)  Negative feedback control by somatomedins
■ Somatomedins are produced when growth hormone acts on target tissues. Somatomedins inhibit the secretion of growth hormone by acting directly on the anterior pituitary and by stimulating the secretion of somatostatin from the hypothalamus.
(3)  Negative feedback control by GHRH and growth hormone
■ GHRH inhibits its own secretion from the hypothalamus.
■ Growth hormone also inhibits its own secretion by stimulating the secretion of somatostatin from the hypothalamus.
Figure 7.5 Proopiomelanocortin (POMC) is the precursor for adrenocorticotropic hormone (ACTH), β-lipotropin, and β-endorphin in the anterior pituitary.
b.  Actions of growth hormone
■ In the liver, growth hormone generates the production of somatomedins (insulin-like growth factors [IGF]), which serve as the intermediaries of several physiologic actions.
■ The IGF receptor has tyrosine kinase activity, similar to the insulin receptor.
(1)  Direct actions of growth hormone
(a) ↓ glucose uptake into cells (diabetogenic)
(b) ↑ lipolysis
(c) ↑ protein synthesis in muscle and ↑ lean body mass
(d) ↑ production of IGF
(2)  Actions of growth hormone via IGF
(a) ↑ protein synthesis in chondrocytes and ↑ linear growth (pubertal growth spurt)
(b) ↑ protein synthesis in muscle and ↑ lean body mass
(c) ↑ protein synthesis in most organs and ↑ organ size
c.  Pathophysiology of growth hormone
(1)  Growth hormone deficiency
■ in children causes failure to grow, short stature, mild obesity, and delayed puberty.
■ can be caused by:
(a)  Lack of anterior pituitary growth hormone
(b)  Hypothalamic dysfunction (↓ GHRH)
(c)  Failure to generate IGF in the liver
(d)  Growth hormone receptor deficiency
(2)  Growth hormone excess
■ can be treated with somatostatin analogs (e.g., octreotide), which inhibit growth hormone secretion.
■ Hypersecretion of growth hormone causes acromegaly.
(a)  Before puberty, excess growth hormone causes increased linear growth (gigantism).
(b)  After puberty, excess growth hormone causes increased periosteal bone growth, increased organ size, and glucose intolerance.
Figure 7.6 Control of growth hormone secretion. GHRH = growth hormone–releasing hormone; IGF = insulin-like growth factor; SRIF = somatotropin release–inhibiting factor.
4.  Prolactin
■ is the major hormone responsible for lactogenesis.
■ participates, with estrogen, in breast development.
■ is structurally homologous to growth hormone.
a.  Regulation of prolactin secretion (Figure 7.7 and Table 7.3)
(1)  Hypothalamic control by dopamine and thyrotropin-releasing hormone (TRH)
■ Prolactin secretion is tonically inhibited by dopamine (prolactin-inhibiting factor [PIF]) secreted by the hypothalamus. Thus, interruption of the hypothalamic–pituitary tract causes increased secretion of prolactin and sustained lactation.
■ TRH increases prolactin secretion.
(2)  Negative feedback control
■ Prolactin inhibits its own secretion by stimulating the hypothalamic release of dopamine.
Figure 7.7 Control of prolactin secretion. PIF = prolactin-inhibiting factor; TRH = thyrotropin-releasing hormone.
Table 7.3  Regulation of Prolactin Secretion
Factors that increase prolactin secretion: estrogen (pregnancy); breast-feeding; sleep; stress; TRH; dopamine antagonists
Factors that decrease prolactin secretion: dopamine; bromocriptine (dopamine agonist); somatostatin; prolactin (by negative feedback)
(TRH = thyrotropin-releasing hormone.)
b.  Actions of prolactin
(1)  Stimulates milk production in the breast (casein, lactalbumin)
(2)  Stimulates breast development (in a supportive role with estrogen)
(3)  Inhibits ovulation by decreasing synthesis and release of gonadotropin-releasing hormone (GnRH)
(4)  Inhibits spermatogenesis (by decreasing GnRH)
c.  Pathophysiology of prolactin
(1)  Prolactin deficiency (destruction of the anterior pituitary)
■ results in the failure to lactate.
(2)  Prolactin excess
■ results from hypothalamic destruction (due to loss of the tonic "inhibitory" control by dopamine) or from prolactin-secreting tumors (prolactinomas).
■ causes galactorrhea and decreased libido.
■ causes failure to ovulate and amenorrhea because it inhibits GnRH secretion.
■ can be treated with bromocriptine, which reduces prolactin secretion by acting as a dopamine agonist.
C.  Hormones of the posterior lobe of the pituitary
■ are antidiuretic hormone (ADH) and oxytocin.
■ are homologous nonapeptides.
■ are synthesized in hypothalamic nuclei and are packaged in secretory granules with their respective neurophysins.
■ travel down the nerve axons for secretion by the posterior pituitary.
1.  ADH (see Chapter 5, VII)
■ originates primarily in the supraoptic nuclei of the hypothalamus.
■ regulates serum osmolarity by increasing the H2O permeability of the late distal tubules and collecting ducts.
a.  Regulation of ADH secretion (Table 7.4)
Table 7.4  Regulation of ADH Secretion
Factors that increase ADH secretion: ↑ serum osmolarity; volume contraction; pain; nausea (a powerful stimulant); hypoglycemia; nicotine, opiates, antineoplastic drugs
Factors that decrease ADH secretion: ↓ serum osmolarity; ethanol; α-agonists; ANP
(ADH = antidiuretic hormone; ANP = atrial natriuretic peptide.)
b.  Actions of ADH
(1) ↑ H2O permeability (aquaporin 2, AQP2) of the principal cells of the late distal tubule and collecting duct (via a V2 receptor and an adenylate cyclase–cAMP mechanism)
(2)  Constriction of vascular smooth muscle (via a V1 receptor and an IP3/Ca2+ mechanism)
c.  Pathophysiology of ADH (see Chapter 5, VII)
2.  Oxytocin
■ originates primarily in the paraventricular nuclei of the hypothalamus.
■ causes ejection of milk from the breast when stimulated by suckling.
a.  Regulation of oxytocin secretion
(1)  Suckling
■ is the major stimulus for oxytocin secretion.
■ Afferent fibers carry impulses from the nipple to the spinal cord. Relays in the hypothalamus trigger the release of oxytocin from the posterior pituitary.
■ The sight or sound of the infant may stimulate the hypothalamic neurons to secrete oxytocin, even in the absence of suckling.
(2)  Dilation of the cervix and orgasm
■ increases the secretion of oxytocin.
b.  Actions of oxytocin
(1)  Contraction of myoepithelial cells in the breast
■ Milk is forced from the mammary alveoli into the ducts and ejected.
(2)  Contraction of the uterus
■ During pregnancy, oxytocin receptors in the uterus are up-regulated as parturition approaches, although the role of oxytocin in normal labor is uncertain.
■ Oxytocin can be used to induce labor and reduce postpartum bleeding.
IV.  Thyroid Gland
A.  Synthesis of thyroid hormones (Figure 7.8)
■ Each step in synthesis is stimulated by TSH.
Figure 7.8 Steps in the synthesis of thyroid hormones. Each step is stimulated by thyroid-stimulating hormone. DIT = diiodotyrosine; I− = iodide; MIT = monoiodotyrosine; T3 = triiodothyronine; T4 = thyroxine; TG = thyroglobulin.
1.  Thyroglobulin is synthesized from tyrosine in the thyroid follicular cells, packaged in secretory vesicles, and extruded into the follicular lumen (step 1).
2.  The iodide (I−) pump, or Na+–I− cotransport,
■ is present in the thyroid follicular epithelial cells.
■ actively transports I− into the thyroid follicular cells for subsequent incorporation into thyroid hormones (step 2).
■ is inhibited by thiocyanate and perchlorate anions.
3.  Oxidation of I− to I2
■ is catalyzed by a peroxidase enzyme in the follicular cell membrane (step 3).
■ I2 is the reactive form, which will be "organified" by combination with tyrosine on thyroglobulin.
■ The peroxidase enzyme is inhibited by propylthiouracil, which is used therapeutically to reduce thyroid hormone synthesis for the treatment of hyperthyroidism.
■ The same peroxidase enzyme catalyzes the remaining organification and coupling reactions involved in the synthesis of thyroid hormones.
4.  Organification of I2
■ At the junction of the follicular cells and the follicular lumen, tyrosine residues of thyroglobulin react with I2 to form monoiodotyrosine (MIT) and diiodotyrosine (DIT) (step 4).
■ High levels of I− inhibit organification and, therefore, inhibit synthesis of thyroid hormone (Wolff–Chaikoff effect).
5.  Coupling of MIT and DIT
■ While MIT and DIT are attached to thyroglobulin, two coupling reactions occur (step 5).
a.  When two molecules of DIT combine, thyroxine (T4) is formed.
b.  When one molecule of DIT combines with one molecule of MIT, triiodothyronine (T3) is formed.
■ More T4 than T3 is synthesized, although T3 is more active.
c.  Iodinated thyroglobulin is stored in the follicular lumen until the thyroid gland is stimulated to secrete thyroid hormones.
6.  Stimulation of thyroid cells by TSH
■ When the thyroid cells are stimulated, iodinated thyroglobulin is taken back into the follicular cells by endocytosis (step 6). Lysosomal enzymes then digest thyroglobulin, releasing T4 and T3 into the circulation (step 7).
■ Leftover MIT and DIT are deiodinated by thyroid deiodinase (step 8). The I2 that is released is reutilized to synthesize more thyroid hormones. Therefore, deficiency of thyroid deiodinase mimics I2 deficiency.
7.  Binding of T3 and T4
■ In the circulation, most of the T3 and T4 is bound to thyroxine-binding globulin (TBG).
a.  In hepatic failure, TBG levels decrease, leading to a decrease in total thyroid hormone levels but normal levels of free hormone.
b.  In pregnancy, TBG levels increase, leading to an increase in total thyroid hormone levels but normal levels of free hormone (i.e., clinically, euthyroid). (A numerical sketch of this binding effect follows item 8 below.)
8.  Conversion of T4 to T3 and reverse T3 (rT3)
■ In the peripheral tissues, T4 is converted to T3 by 5′-iodinase (or to rT3).
■ T3 is more biologically active than T4.
■ rT3 is inactive.
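Why total hormone tracks the TBG level while free hormone stays normal can be illustrated with a simple law-of-mass-action calculation. This is a minimal sketch under assumed, arbitrary numbers: the dissociation constant, the concentrations, and the single-site binding form are illustrative, and the constancy of free T4 is supplied by the hypothalamic–pituitary feedback described in section B, not by the binding chemistry itself.

    # Illustrative mass-action sketch of T4 binding to thyroxine-binding
    # globulin (TBG). All numbers are arbitrary assumed values chosen only to
    # show the direction of the effect: with free T4 held constant by
    # feedback, total T4 rises and falls with the TBG level.

    KD = 1.0        # assumed dissociation constant for the T4-TBG complex (arbitrary units)
    FREE_T4 = 0.5   # free T4, held constant by feedback (arbitrary units)

    def total_t4(tbg_total):
        """Total T4 = free T4 + TBG-bound T4, with bound T4 from single-site
        binding: bound = TBG_total * free / (KD + free)."""
        bound = tbg_total * FREE_T4 / (KD + FREE_T4)
        return FREE_T4 + bound

    for label, tbg in [("normal TBG", 10.0),
                       ("hepatic failure (low TBG)", 5.0),
                       ("pregnancy (high TBG)", 20.0)]:
        print(f"{label}: total T4 = {total_t4(tbg):.2f}, free T4 = {FREE_T4}")

With these toy numbers, low TBG cuts total T4 roughly in half and high TBG doubles it, while free T4 (the clinically active fraction) is unchanged, which is the point of items 7 a and b.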
B.  Regulation of thyroid hormone secretion (Figure 7.9)
1.  Hypothalamic–pituitary control—TRH and TSH
a.  TRH is secreted by the hypothalamus and stimulates the secretion of TSH by the anterior pituitary.
b.  TSH increases both the synthesis and the secretion of thyroid hormones by the follicular cells via an adenylate cyclase–cAMP mechanism.
■ Chronic elevation of TSH causes hypertrophy of the thyroid gland.
c.  T3 down-regulates TRH receptors in the anterior pituitary and thereby inhibits TSH secretion.
2.  Thyroid-stimulating immunoglobulins
■ are components of the immunoglobulin G (IgG) fraction of plasma proteins and are antibodies to TSH receptors on the thyroid gland.
■ bind to TSH receptors and, like TSH, stimulate the thyroid gland to secrete T3 and T4.
■ circulate in high concentrations in patients with Graves disease, which is characterized by high circulating levels of thyroid hormones and, accordingly, low concentrations of TSH (caused by feedback inhibition of thyroid hormones on the anterior pituitary).
Figure 7.9 Control of thyroid hormone secretion. T3 = triiodothyronine; T4 = thyroxine; TRH = thyrotropin-releasing hormone; TSH = thyroid-stimulating hormone.
C.  Actions of thyroid hormone
■ T3 is three to four times more potent than T4. The target tissues convert T4 to T3 (see IV A 8).
1.  Growth
■ Attainment of adult stature requires thyroid hormone.
■ Thyroid hormones act synergistically with growth hormone and somatomedins to promote bone formation.
■ Thyroid hormones stimulate bone maturation as a result of ossification and fusion of the growth plates. In thyroid hormone deficiency, bone age is less than chronologic age.
2.  Central nervous system (CNS)
a.  Perinatal period
■ Maturation of the CNS requires thyroid hormone in the perinatal period.
■ Thyroid hormone deficiency causes irreversible mental retardation. Because there is only a brief perinatal period when thyroid hormone replacement therapy is helpful, screening for neonatal hypothyroidism is mandatory.
b.  Adulthood
■ Hyperthyroidism causes hyperexcitability and irritability.
■ Hypothyroidism causes listlessness, slowed speech, somnolence, impaired memory, and decreased mental capacity.
3.  Autonomic nervous system
■ Thyroid hormone has many of the same actions as the sympathetic nervous system because it up-regulates β1-adrenergic receptors in the heart. Therefore, a useful adjunct therapy for hyperthyroidism is treatment with a β-adrenergic blocking agent, such as propranolol.
4.  Basal metabolic rate (BMR)
■ O2 consumption and BMR are increased by thyroid hormone in all tissues except the brain, gonads, and spleen. The resulting increase in heat production underlies the role of thyroid hormone in temperature regulation.
■ Thyroid hormone increases the synthesis of Na+,K+-ATPase and consequently increases O2 consumption related to Na+–K+ pump activity.
5.  Cardiovascular and respiratory systems
■ The effects of thyroid hormone on cardiac output and ventilation rate combine to ensure that more O2 is delivered to the tissues.
a.  Heart rate and stroke volume are increased. These effects combine to produce increased cardiac output. Excess thyroid hormone can cause high-output heart failure.
b.  Ventilation rate is increased.
6.  Metabolic effects
■ Overall, metabolism is increased to meet the demand for substrate associated with the increased rate of O2 consumption.
a.  Glucose absorption from the gastrointestinal tract is increased.
b.  Glycogenolysis, gluconeogenesis, and glucose oxidation (driven by demand for ATP) are increased.
c.  Lipolysis is increased.
d.  Protein synthesis and degradation are increased. The overall effect of thyroid hormone is catabolic.
D.  Pathophysiology of the thyroid gland (Table 7.5)
Table 7.5  Pathophysiology of the Thyroid Gland
Symptoms of hyperthyroidism: ↑ metabolic rate; weight loss; negative nitrogen balance; ↑ heat production (sweating); ↑ cardiac output; dyspnea; tremor, weakness; exophthalmos; goiter
Symptoms of hypothyroidism: ↓ metabolic rate; weight gain; positive nitrogen balance; ↓ heat production (cold sensitivity); ↓ cardiac output; hypoventilation; lethargy, mental slowness; drooping eyelids; myxedema; growth and mental retardation (perinatal); goiter
Causes of hyperthyroidism: Graves disease (antibodies to TSH receptor); thyroid neoplasm
Causes of hypothyroidism: thyroiditis (autoimmune thyroiditis; Hashimoto thyroiditis); surgical removal of the thyroid; I− deficiency; cretinism (congenital); ↓ TRH or TSH
TSH levels: ↓ in hyperthyroidism (because of feedback inhibition on the anterior pituitary by high thyroid hormone levels); ↑ in hypothyroidism (because of decreased feedback inhibition on the anterior pituitary by low thyroid hormone levels); ↓ if the primary defect is in the hypothalamus or anterior pituitary
Treatment of hyperthyroidism: propylthiouracil (inhibits thyroid hormone synthesis by blocking peroxidase); thyroidectomy; 131I (destroys thyroid); β-blockers (adjunct therapy)
Treatment of hypothyroidism: thyroid hormone replacement
(See Table 7.1 for abbreviations.)
V.  Adrenal Cortex and Adrenal Medulla (Figure 7.10)
Figure 7.10 Secretory products of the adrenal cortex and medulla.
A.  Adrenal cortex
1.  Synthesis of adrenocortical hormones (Figure 7.11)
■ The zona glomerulosa produces aldosterone.
■ The zonae fasciculata and reticularis produce glucocorticoids (cortisol) and androgens (dehydroepiandrosterone and androstenedione).
Figure 7.11 Synthetic pathways for glucocorticoids, androgens, and mineralocorticoids in the adrenal cortex. ACTH = adrenocorticotropic hormone.
a.  21-carbon steroids
■ include progesterone, deoxycorticosterone, aldosterone, and cortisol.
■ Progesterone is the precursor for the others in the 21-carbon series.
■ Hydroxylation at C-21 leads to the production of deoxycorticosterone, which has mineralocorticoid (but not glucocorticoid) activity.
■ Hydroxylation at C-17 leads to the production of glucocorticoids (cortisol).
b.  19-carbon steroids
■ have androgenic activity and are precursors to the estrogens.
■ If the steroid has been previously hydroxylated at C-17, the C20,21 side chain can be cleaved to yield the 19-carbon steroids dehydroepiandrosterone or androstenedione in the adrenal cortex.
■ Adrenal androgens have a ketone group at C-17 and are excreted as 17-ketosteroids in the urine.
■ In the testes, androstenedione is converted to testosterone.
c.  18-carbon steroids
■ have estrogenic activity.
■ Oxidation of the A ring (aromatization) to produce estrogens occurs in the ovaries and placenta, but not in the adrenal cortex or testes.
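The enzyme-deficiency syndromes described later in this section (V A 5 d and e) follow mechanically from this pathway: products downstream of a blocked enzyme become unreachable, while upstream intermediates are shunted into the remaining branches. A minimal reachability sketch follows; the reaction list is a simplified subset of Figure 7.11 (several intermediates and enzymes are omitted), and the graph encoding is my own device.

    # Reachability sketch of the adrenal steroidogenic pathway (simplified from
    # Figure 7.11). Removing an enzyme shows which end products are lost, e.g.,
    # 21beta-hydroxylase deficiency cuts off both cortisol and aldosterone while
    # the androgen branch (and its precursor 17-hydroxyprogesterone) remains.

    REACTIONS = [
        ("cholesterol", "cholesterol desmolase", "pregnenolone"),
        ("pregnenolone", "3beta-hydroxysteroid dehydrogenase", "progesterone"),
        ("progesterone", "21beta-hydroxylase", "11-deoxycorticosterone"),
        ("11-deoxycorticosterone", "11beta-hydroxylase", "corticosterone"),
        ("corticosterone", "aldosterone synthase", "aldosterone"),
        ("pregnenolone", "17alpha-hydroxylase", "17-hydroxypregnenolone"),
        ("progesterone", "17alpha-hydroxylase", "17-hydroxyprogesterone"),
        ("17-hydroxyprogesterone", "21beta-hydroxylase", "11-deoxycortisol"),
        ("11-deoxycortisol", "11beta-hydroxylase", "cortisol"),
        ("17-hydroxypregnenolone", "17,20-lyase", "dehydroepiandrosterone"),
        ("17-hydroxyprogesterone", "17,20-lyase", "androstenedione"),
    ]

    def reachable(missing_enzyme=None):
        """Steroids reachable from cholesterol when one enzyme is deficient."""
        products = {"cholesterol"}
        changed = True
        while changed:
            changed = False
            for substrate, enzyme, product in REACTIONS:
                if enzyme != missing_enzyme and substrate in products and product not in products:
                    products.add(product)
                    changed = True
        return products

    lost = sorted(reachable() - reachable("21beta-hydroxylase"))
    print("lost in 21beta-hydroxylase deficiency:", lost)
    # -> the mineralocorticoid and glucocorticoid branches; androgens still form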
2.  Regulation of secretion of adrenocortical hormones
a.  Glucocorticoid secretion (Figure 7.12)
■ oscillates with a 24-hour periodicity or circadian rhythm.
■ For those who sleep at night, cortisol levels are highest just before waking (≈8 AM) and lowest in the evening (≈12 midnight).
Figure 7.12 Control of glucocorticoid secretion. ACTH = adrenocorticotropic hormone; CRH = corticotropin-releasing hormone.
(1)  Hypothalamic control—corticotropin-releasing hormone (CRH)
■ CRH-containing neurons are located in the paraventricular nuclei of the hypothalamus.
■ When these neurons are stimulated, CRH is released into hypothalamic–hypophysial portal blood and delivered to the anterior pituitary.
■ CRH binds to receptors on corticotrophs of the anterior pituitary and directs them to synthesize POMC (the precursor to ACTH) and secrete ACTH.
■ The second messenger for CRH is cAMP.
(2)  Anterior lobe of the pituitary—ACTH
■ ACTH increases steroid hormone synthesis in all zones of the adrenal cortex by stimulating cholesterol desmolase and increasing the conversion of cholesterol to pregnenolone.
■ ACTH also up-regulates its own receptor so that the sensitivity of the adrenal cortex to ACTH is increased.
■ Chronically increased levels of ACTH cause hypertrophy of the adrenal cortex.
■ The second messenger for ACTH is cAMP.
(3)  Negative feedback control—cortisol
■ Cortisol inhibits the secretion of CRH from the hypothalamus and the secretion of ACTH from the anterior pituitary.
■ When cortisol (glucocorticoid) levels are chronically elevated, the secretion of CRH and ACTH is inhibited by negative feedback.
■ The dexamethasone suppression test is based on the ability of dexamethasone (a synthetic glucocorticoid) to inhibit ACTH secretion. In normal persons, low-dose dexamethasone inhibits or "suppresses" ACTH secretion and, consequently, cortisol secretion. In persons with ACTH-secreting tumors, low-dose dexamethasone does not inhibit cortisol secretion, but high-dose dexamethasone does. In persons with adrenal cortical tumors, neither low- nor high-dose dexamethasone inhibits cortisol secretion. (This interpretive logic is sketched in code below.)
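The three outcomes of the dexamethasone suppression test can be written as a small decision rule. This is a sketch only: the function name and the boolean encoding are assumptions, and real workups rest on quantitative cortisol thresholds and corroborating tests.

    # Decision-rule sketch of the dexamethasone suppression test described above.

    def interpret_dexamethasone_test(low_dose_suppresses: bool,
                                     high_dose_suppresses: bool) -> str:
        if low_dose_suppresses:
            # Normal persons: even low-dose dexamethasone inhibits ACTH,
            # and therefore cortisol, secretion.
            return "normal axis"
        if high_dose_suppresses:
            # ACTH-secreting tumors retain partial feedback sensitivity:
            # only high-dose dexamethasone suppresses cortisol.
            return "consistent with ACTH-secreting tumor"
        # Adrenal cortical tumors secrete cortisol autonomously:
        # neither dose suppresses.
        return "consistent with adrenal cortical tumor"

    print(interpret_dexamethasone_test(low_dose_suppresses=False,
                                       high_dose_suppresses=True))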
b.  Aldosterone secretion (see Chapter 3, VI B)
■ is under tonic control by ACTH but is separately regulated by the renin–angiotensin system and by serum potassium.
(1)  Renin–angiotensin–aldosterone system
(a)  Decreases in blood volume cause a decrease in renal perfusion pressure, which in turn increases renin secretion. Renin, an enzyme, catalyzes the conversion of angiotensinogen to angiotensin I. Angiotensin I is converted to angiotensin II by angiotensin-converting enzyme (ACE).
(b)  Angiotensin II acts on the zona glomerulosa of the adrenal cortex to increase the conversion of corticosterone to aldosterone.
(c)  Aldosterone increases renal Na+ reabsorption, thereby restoring extracellular fluid (ECF) volume and blood volume to normal.
(2)  Hyperkalemia increases aldosterone secretion. Aldosterone increases renal K+ secretion, restoring serum [K+] to normal.
3.  Actions of glucocorticoids (cortisol)
■ Overall, glucocorticoids are essential for the response to stress.
a.  Stimulation of gluconeogenesis
■ Glucocorticoids increase gluconeogenesis by the following mechanisms:
(1)  They increase protein catabolism in muscle and decrease protein synthesis, thereby providing more amino acids to the liver for gluconeogenesis.
(2)  They decrease glucose utilization and insulin sensitivity of adipose tissue.
(3)  They increase lipolysis, which provides more glycerol to the liver for gluconeogenesis.
b.  Anti-inflammatory effects
(1)  Glucocorticoids induce the synthesis of lipocortin, an inhibitor of phospholipase A2. (Phospholipase A2 is the enzyme that liberates arachidonate from membrane phospholipids, providing the precursor for prostaglandin and leukotriene synthesis.) Because prostaglandins and leukotrienes are involved in the inflammatory response, glucocorticoids have anti-inflammatory properties by inhibiting the formation of the precursor (arachidonate).
(2)  Glucocorticoids inhibit the production of interleukin-2 (IL-2) and inhibit the proliferation of T lymphocytes.
(3)  Glucocorticoids inhibit the release of histamine and serotonin from mast cells and platelets.
c.  Suppression of the immune response
■ Glucocorticoids inhibit the production of IL-2 and T lymphocytes, both of which are critical for cellular immunity. In pharmacologic doses, glucocorticoids are used to prevent rejection of transplanted organs.
d.  Maintenance of vascular responsiveness to catecholamines
■ Cortisol up-regulates α1 receptors on arterioles, increasing their sensitivity to the vasoconstrictor effect of norepinephrine. Thus, with cortisol excess, arterial pressure increases; with cortisol deficiency, arterial pressure decreases.
4.  Actions of mineralocorticoids (aldosterone) (see Chapters 3 and 5)
a. ↑ renal Na+ reabsorption (action on the principal cells of the late distal tubule and collecting duct)
b. ↑ renal K+ secretion (action on the principal cells of the late distal tubule and collecting duct)
c. ↑ renal H+ secretion (action on the α-intercalated cells of the late distal tubule and collecting duct)
5.  Pathophysiology of the adrenal cortex (Table 7.6)
a.  Adrenocortical insufficiency
(1)  Primary adrenocortical insufficiency—Addison disease
■ is most commonly caused by autoimmune destruction of the adrenal cortex and causes acute adrenal crisis.
■ is characterized by the following:
(a) ↓ adrenal glucocorticoid, androgen, and mineralocorticoid
(b) ↑ ACTH (low cortisol levels stimulate ACTH secretion by negative feedback)
(c)  Hypoglycemia (caused by cortisol deficiency)
(d)  Weight loss, weakness, nausea, and vomiting
(e)  Hyperpigmentation (low cortisol levels stimulate ACTH secretion; ACTH contains the MSH fragment)
(f) ↓ pubic and axillary hair in women (caused by the deficiency of adrenal androgens)
(g)  ECF volume contraction, hypotension, hyperkalemia, and metabolic acidosis (caused by aldosterone deficiency)
(2)  Secondary adrenocortical insufficiency
■ is caused by primary deficiency of ACTH.
■ does not exhibit hyperpigmentation (because there is a deficiency of ACTH).
■ does not exhibit volume contraction, hyperkalemia, or metabolic acidosis (because aldosterone levels are normal).
■ Symptoms are otherwise similar to those of Addison disease.
b.  Adrenocortical excess—Cushing syndrome
■ is most commonly caused by the administration of pharmacologic doses of glucocorticoids.
■ is also caused by primary hyperplasia of the adrenal glands.
■ is called Cushing disease when it is caused by overproduction of ACTH.
■ is characterized by the following:
(1) ↑ cortisol and androgen levels
(2) ↓ ACTH (if caused by primary adrenal hyperplasia or pharmacologic doses of glucocorticosteroids); ↑ ACTH (if caused by overproduction of ACTH, as in Cushing disease)
(3)  Hyperglycemia (caused by elevated cortisol levels)
(4) ↑ protein catabolism and muscle wasting
(5)  Central obesity (round face, supraclavicular fat, buffalo hump)
(6)  Poor wound healing
(7)  Virilization of women (caused by elevated levels of adrenal androgens)
(8)  Hypertension (caused by elevated levels of cortisol and aldosterone)
(9)  Osteoporosis (elevated cortisol levels cause increased bone resorption)
(10)  Striae
■ Ketoconazole, an inhibitor of steroid hormone synthesis, can be used to treat Cushing disease.
c.  Hyperaldosteronism—Conn syndrome
■ is caused by an aldosterone-secreting tumor.
■ is characterized by the following:
(1)  Hypertension (because aldosterone increases Na+ reabsorption, which leads to increases in ECF volume and blood volume)
(2)  Hypokalemia (because aldosterone increases K+ secretion)
(3)  Metabolic alkalosis (because aldosterone increases H+ secretion)
(4) ↓ renin secretion (because increased ECF volume and blood pressure inhibit renin secretion by negative feedback)
d.  21β-Hydroxylase deficiency
■ is the most common biochemical abnormality of the steroidogenic pathway (see Figure 7.11).
■ belongs to a group of disorders characterized by adrenogenital syndrome.
■ is characterized by the following:
(1) ↓ cortisol and aldosterone levels (because the enzyme block prevents the production of 11-deoxycorticosterone and 11-deoxycortisol, the precursors for cortisol and aldosterone)
(2) ↑ 17-hydroxyprogesterone and progesterone levels (because of accumulation of intermediates above the enzyme block)
(3) ↑ ACTH (because of decreased feedback inhibition by cortisol)
(4)  Hyperplasia of zona fasciculata and zona reticularis (because of high levels of ACTH)
(5) ↑ adrenal androgens (because 17-hydroxyprogesterone is their major precursor) and ↑ urinary 17-ketosteroids
(6)  Virilization in women
(7)  Early acceleration of linear growth and early appearance of pubic and axillary hair
(8)  Suppression of gonadal function in both men and women
e.  17α-Hydroxylase deficiency is characterized by the following:
(1) ↓ androgen and glucocorticoid levels (because the enzyme block prevents the production of 17-hydroxypregnenolone and 17-hydroxyprogesterone)
(2) ↑ mineralocorticoid levels (because intermediates accumulate to the left of the enzyme block and are shunted toward the production of mineralocorticoids)
(3)  Lack of pubic and axillary hair (which depends on adrenal androgens) in women
(4)  Hypoglycemia (because of decreased glucocorticoids)
(5)  Metabolic alkalosis, hypokalemia, and hypertension (because of increased mineralocorticoids)
(6) ↑ ACTH (because decreased cortisol levels stimulate ACTH secretion by negative feedback)
Table 7.6  Pathophysiology of the Adrenal Cortex
Addison disease (e.g., primary adrenocortical insufficiency) | Clinical features: hypoglycemia; anorexia, weight loss, nausea, vomiting; weakness; hypotension; hyperkalemia; metabolic acidosis; decreased pubic and axillary hair in women; hyperpigmentation | ACTH levels: increased (negative feedback effect of decreased cortisol) | Treatment: replacement of glucocorticoids and mineralocorticoids
Cushing syndrome (e.g., primary adrenal hyperplasia) | Clinical features: hyperglycemia; muscle wasting; central obesity; round face, supraclavicular fat, buffalo hump; osteoporosis; striae; virilization and menstrual disorders in women; hypertension | ACTH levels: decreased (negative feedback effect of increased cortisol) | Treatment: ketoconazole; metyrapone
Cushing disease (excess ACTH) | Clinical features: same as Cushing syndrome | ACTH levels: increased | Treatment: surgical removal of the ACTH-secreting tumor
Conn syndrome (aldosterone-secreting tumor) | Clinical features: hypertension; hypokalemia; metabolic alkalosis; decreased renin | Treatment: spironolactone (aldosterone antagonist); surgical removal of the aldosterone-secreting tumor
21β-Hydroxylase deficiency (↓ glucocorticoids and mineralocorticoids; ↑ adrenal androgens) | Clinical features: virilization of women; early acceleration of linear growth; early appearance of pubic and axillary hair; symptoms of glucocorticoid and mineralocorticoid deficiency | ACTH levels: increased (negative feedback effect of decreased cortisol) | Treatment: replacement of glucocorticoids and mineralocorticoids
17α-Hydroxylase deficiency (↓ adrenal androgens and glucocorticoids; ↑ mineralocorticoids) | Clinical features: lack of pubic and axillary hair in women; symptoms of glucocorticoid deficiency; symptoms of mineralocorticoid excess | ACTH levels: increased (negative feedback effect of decreased cortisol) | Treatment: replacement of glucocorticoids; aldosterone antagonist
(See Table 7.1 for abbreviation.)
B.  Adrenal medulla (see Chapter 2, I A 4)
VI.  Endocrine Pancreas–Glucagon and Insulin (Table 7.7)
A.  Organization of the endocrine pancreas
■ The islets of Langerhans contain three major cell types (Table 7.8). Other cells secrete pancreatic polypeptide.
■ Gap junctions link beta cells to each other, alpha cells to each other, and beta cells to alpha cells for rapid communication.
■ The portal blood supply of the islets allows blood from the beta cells (containing insulin) to bathe the alpha and delta cells, again for rapid cell-to-cell communication.
Table 7.8  Cell Types of the Islets of Langerhans
Beta cells | central islet | secrete insulin
Alpha cells | outer rim of islet | secrete glucagon
Delta cells | intermixed | secrete somatostatin and gastrin
B.  Glucagon
1.  Regulation of glucagon secretion (Table 7.9)
■ The major factor that regulates glucagon secretion is the blood glucose concentration. Decreased blood glucose stimulates glucagon secretion.
■ Increased blood amino acids stimulate glucagon secretion, which prevents hypoglycemia caused by unopposed insulin in response to a high-protein meal.
Table 7.9  Regulation of Glucagon Secretion
Factors that increase glucagon secretion: ↓ blood glucose; ↑ amino acids (especially arginine); CCK (alerts alpha cells to a protein meal); norepinephrine, epinephrine; ACh
Factors that decrease glucagon secretion: ↑ blood glucose; insulin; somatostatin; fatty acids, ketoacids
(ACh = acetylcholine; CCK = cholecystokinin.)
2.  Actions of glucagon
■ Glucagon acts on the liver and adipose tissue.
■ The second messenger for glucagon is cAMP.
a.  Glucagon increases the blood glucose concentration.
(1)  It increases glycogenolysis and prevents the recycling of glucose into glycogen.
(2)  It increases gluconeogenesis. Glucagon decreases the production of fructose 2,6-bisphosphate, decreasing phosphofructokinase activity; in effect, substrate is directed toward glucose formation rather than toward glucose breakdown.
VI. Endocrine Pancreas—Glucagon and Insulin (Table 7.7)
A. Organization of the endocrine pancreas
■ The islets of Langerhans contain three major cell types (Table 7.8). Other cells secrete pancreatic polypeptide.
■ Gap junctions link beta cells to each other, alpha cells to each other, and beta cells to alpha cells for rapid communication.
■ The portal blood supply of the islets allows blood from the beta cells (containing insulin) to bathe the alpha and delta cells, again for rapid cell-to-cell communication.

TABLE 7.8 Cell Types of the Islets of Langerhans
Type of Cell | Location | Function
Beta | Central islet | Secrete insulin
Alpha | Outer rim of islet | Secrete glucagon
Delta | Intermixed | Secrete somatostatin and gastrin

B. Glucagon
1. Regulation of glucagon secretion (Table 7.9)
■ The major factor that regulates glucagon secretion is the blood glucose concentration. Decreased blood glucose stimulates glucagon secretion.
■ Increased blood amino acids stimulate glucagon secretion, which prevents hypoglycemia caused by unopposed insulin in response to a high-protein meal.

TABLE 7.9 Regulation of Glucagon Secretion
Factors that Increase Glucagon Secretion | Factors that Decrease Glucagon Secretion
↓ Blood glucose | ↑ Blood glucose
↑ Amino acids (especially arginine) | Insulin
CCK (alerts alpha cells to a protein meal) | Somatostatin
Norepinephrine, epinephrine; ACh | Fatty acids, ketoacids
ACh = acetylcholine; CCK = cholecystokinin.

2. Actions of glucagon
■ Glucagon acts on the liver and adipose tissue.
■ The second messenger for glucagon is cAMP.
a. Glucagon increases the blood glucose concentration.
(1) It increases glycogenolysis and prevents the recycling of glucose into glycogen.
(2) It increases gluconeogenesis. Glucagon decreases the production of fructose 2,6-bisphosphate, decreasing phosphofructokinase activity; in effect, substrate is directed toward glucose formation rather than toward glucose breakdown.
b. Glucagon increases blood fatty acid and ketoacid concentrations.
■ Glucagon increases lipolysis. The inhibition of fatty acid synthesis in effect "shunts" substrates toward gluconeogenesis.
■ Ketoacids (β-hydroxybutyrate and acetoacetate) are produced from acetyl coenzyme A (CoA), which results from fatty acid degradation.
c. Glucagon increases urea production.
■ Amino acids are used for gluconeogenesis (stimulated by glucagon), and the resulting amino groups are incorporated into urea.

TABLE 7.7 Comparison of Insulin and Glucagon
Hormone | Stimulus for Secretion | Major Actions | Overall Effect on Blood Levels
Insulin (tyrosine kinase receptor) | ↑ blood glucose; ↑ amino acids; ↑ fatty acids; glucagon; GIP; growth hormone; cortisol | Increases glucose uptake into cells and glycogen formation; decreases glycogenolysis and gluconeogenesis; increases protein synthesis; increases fat deposition and decreases lipolysis; increases K+ uptake into cells | ↓ [glucose]; ↓ [amino acid]; ↓ [fatty acid]; ↓ [ketoacid]; hypokalemia
Glucagon (cAMP mechanism) | ↓ blood glucose; ↑ amino acids; CCK; norepinephrine, epinephrine; ACh | Increases glycogenolysis and gluconeogenesis; increases lipolysis and ketoacid production | ↑ [glucose]; ↑ [fatty acid]; ↑ [ketoacid]
ACh = acetylcholine; cAMP = cyclic adenosine monophosphate; CCK = cholecystokinin; GIP = glucose-dependent insulinotropic peptide.

C. Insulin
■ contains an A chain and a B chain, joined by two disulfide bridges.
■ Proinsulin is synthesized as a single-chain peptide. Within storage granules, a connecting peptide (C peptide) is removed by proteases to yield insulin. The C peptide is packaged and secreted along with insulin, and its concentration is used to monitor beta cell function in diabetic patients who are receiving exogenous insulin.
1. Regulation of insulin secretion (Table 7.10)
a. Blood glucose concentration
■ is the major factor that regulates insulin secretion.
■ Increased blood glucose stimulates insulin secretion. An initial burst of insulin is followed by sustained secretion.

TABLE 7.10 Regulation of Insulin Secretion
Factors that Increase Insulin Secretion | Factors that Decrease Insulin Secretion
↑ Blood glucose | ↓ Blood glucose
↑ Amino acids (arginine, lysine, leucine) | Somatostatin
↑ Fatty acids; glucagon; GIP; ACh | Norepinephrine, epinephrine
ACh = acetylcholine; GIP = glucose-dependent insulinotropic peptide.

b. Mechanism of insulin secretion
■ Glucose, the stimulant for insulin secretion, enters the beta cells via the GLUT 2 transporter.
■ Inside the beta cells, glucose is metabolized, generating ATP; the ATP closes K+ channels in the cell membrane and leads to depolarization of the beta cells. Mimicking the action of ATP, sulfonylurea drugs (e.g., tolbutamide, glyburide) stimulate insulin secretion by closing these K+ channels.
■ Depolarization opens Ca2+ channels, which leads to an increase in intracellular [Ca2+] and then to secretion of insulin.
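The mechanism in C 1 b is a fixed causal chain, which lends itself to a toy model. A minimal sketch; the 100 mg/dL trigger value is an assumption for illustration, not a physiologic constant:

```python
def insulin_secreted(plasma_glucose_mg_dl: float,
                     sulfonylurea_present: bool = False) -> bool:
    """Toy version of the chain in C 1 b: glucose entry (GLUT 2) -> ATP ->
    K+ channel closure -> depolarization -> Ca2+ influx -> secretion."""
    atp_rises = plasma_glucose_mg_dl > 100  # assumed illustrative threshold
    # Sulfonylureas (tolbutamide, glyburide) close the same K+ channels directly.
    k_channels_closed = atp_rises or sulfonylurea_present
    depolarized = k_channels_closed
    ca2_influx = depolarized  # depolarization opens Ca2+ channels
    return ca2_influx

print(insulin_secreted(200))                            # True (glucose-driven)
print(insulin_secreted(80, sulfonylurea_present=True))  # True (drug-driven)
```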
2. Insulin receptor (see Figure 7.3)
■ is found on target tissues for insulin.
■ is a tetramer, with two α subunits and two β subunits.
a. The α subunits are located on the extracellular side of the cell membrane.
b. The β subunits span the cell membrane and have intrinsic tyrosine kinase activity. When insulin binds to the receptor, tyrosine kinase is activated and autophosphorylates the β subunits. The phosphorylated receptor then phosphorylates intracellular proteins.
c. The insulin–receptor complexes enter the target cells.
d. Insulin down-regulates its own receptors in target tissues.
■ Therefore, the number of insulin receptors is increased in starvation and decreased in obesity (e.g., type 2 diabetes mellitus).
3. Actions of insulin
■ Insulin acts on the liver, adipose tissue, and muscle.
a. Insulin decreases blood glucose concentration by the following mechanisms:
(1) It increases uptake of glucose into target cells by directing the insertion of glucose transporters into cell membranes. As glucose enters the cells, the blood glucose concentration decreases.
(2) It promotes formation of glycogen from glucose in muscle and liver, and simultaneously inhibits glycogenolysis.
(3) It decreases gluconeogenesis. Insulin increases the production of fructose 2,6-bisphosphate, increasing phosphofructokinase activity. In effect, substrate is directed away from glucose formation.
b. Insulin decreases blood fatty acid and ketoacid concentrations.
■ In adipose tissue, insulin stimulates fat deposition and inhibits lipolysis.
■ Insulin inhibits ketoacid formation in the liver because decreased fatty acid degradation provides less acetyl CoA substrate for ketoacid formation.
c. Insulin decreases blood amino acid concentration.
■ Insulin stimulates amino acid uptake into cells, increases protein synthesis, and inhibits protein degradation. Thus, insulin is anabolic.
d. Insulin decreases blood K+ concentration.
■ Insulin increases K+ uptake into cells, thereby decreasing blood [K+].
4. Insulin pathophysiology—diabetes mellitus
■ Case study: A woman is brought to the emergency room. She is hypotensive and breathing rapidly; her breath has the odor of ketones. Analysis of her blood shows severe hyperglycemia, hyperkalemia, and blood gas values that are consistent with metabolic acidosis.
■ Explanation:
a. Hyperglycemia
■ is consistent with insulin deficiency.
■ In the absence of insulin, glucose uptake into cells is decreased, as is storage of glucose as glycogen.
■ If tests were performed, the woman's blood would have shown increased levels of both amino acids (because of increased protein catabolism) and fatty acids (because of increased lipolysis).
b. Hypotension
■ is a result of ECF volume contraction.
■ The high blood glucose concentration results in a high filtered load of glucose that exceeds the reabsorptive capacity (Tm) of the kidney (see the worked sketch after this case study).
■ The unreabsorbed glucose acts as an osmotic diuretic in the urine and causes ECF volume contraction.
c. Metabolic acidosis
■ is caused by overproduction of ketoacids (β-hydroxybutyrate and acetoacetate).
■ The increased ventilation rate, or Kussmaul respiration, is the respiratory compensation for metabolic acidosis.
d. Hyperkalemia
■ results from the lack of insulin; normally, insulin promotes K+ uptake into cells.
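To make the Tm argument in item b concrete: excreted glucose is the filtered load minus what the proximal tubule can reabsorb. A worked sketch with typical textbook values (GFR 125 mL/min and Tm 375 mg/min are assumed here for illustration, and splay is ignored):

```python
def glucose_excretion_mg_min(plasma_glucose_mg_dl: float,
                             gfr_ml_min: float = 125.0,
                             tm_mg_min: float = 375.0) -> float:
    """Excreted glucose = filtered load - reabsorbed amount.
    Filtered load = GFR x plasma concentration (mg/dL converted to mg/mL)."""
    filtered_load = gfr_ml_min * plasma_glucose_mg_dl / 100.0  # mg/min
    reabsorbed = min(filtered_load, tm_mg_min)
    return filtered_load - reabsorbed

print(glucose_excretion_mg_min(100))  # 0.0   -- all filtered glucose reabsorbed
print(glucose_excretion_mg_min(500))  # 250.0 -- excess acts as an osmotic diuretic
```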
D. Somatostatin
■ is secreted by the delta cells of the pancreas.
■ inhibits the secretion of insulin, glucagon, and gastrin.

VII. Calcium Metabolism (Parathyroid Hormone, Vitamin D, Calcitonin) (Table 7.11)
A. Overall Ca2+ homeostasis (Figure 7.13)
■ 40% of the total Ca2+ in blood is bound to plasma proteins.
■ 60% of the total Ca2+ in blood is not bound to proteins and is ultrafilterable. Ultrafilterable Ca2+ includes Ca2+ that is complexed to anions such as phosphate and free, ionized Ca2+.
■ Free, ionized Ca2+ is biologically active.
■ Serum [Ca2+] is determined by the interplay of intestinal absorption, renal excretion, and bone remodeling (bone resorption and formation). Each component is hormonally regulated.
■ To maintain Ca2+ balance, net intestinal absorption must be balanced by urinary excretion.

[Figure 7.13: Hormonal regulation of Ca2+ metabolism. Ingested Ca2+ is absorbed into the ECF (absorption stimulated by 1,25-dihydroxycholecalciferol; unabsorbed Ca2+ leaves in the feces); ECF Ca2+ is filtered and reabsorbed by the kidney (reabsorption stimulated by PTH) and exchanges with bone through formation and resorption (resorption stimulated by PTH and 1,25-dihydroxycholecalciferol and inhibited by calcitonin). ECF = extracellular fluid; PTH = parathyroid hormone.]

TABLE 7.11 Summary of Hormones that Regulate Ca2+
 | PTH | Vitamin D | Calcitonin
Stimulus for secretion | ↓ serum [Ca2+] | ↓ serum [Ca2+]; ↑ PTH; ↓ serum [phosphate] | ↑ serum [Ca2+]
Action on bone | ↑ resorption | ↑ resorption | ↓ resorption
Action on kidney | ↓ phosphate reabsorption (↑ urinary cAMP); ↑ Ca2+ reabsorption | ↑ phosphate reabsorption; ↑ Ca2+ reabsorption | —
Action on intestine | ↑ Ca2+ absorption (via activation of vitamin D) | ↑ Ca2+ absorption (calbindin D-28K); ↑ phosphate absorption | —
Overall effect on serum [Ca2+] | ↑ | ↑ | ↓
Overall effect on serum [phosphate] | ↓ | ↑ | —
cAMP = cyclic adenosine monophosphate. See Table 7.1 for other abbreviation.

1. Positive Ca2+ balance
■ is seen in growing children.
■ Intestinal Ca2+ absorption exceeds urinary excretion, and the excess is deposited in the growing bones.
2. Negative Ca2+ balance
■ is seen in women during pregnancy or lactation.
■ Intestinal Ca2+ absorption is less than Ca2+ excretion, and the deficit comes from the maternal bones.
B. Parathyroid hormone (PTH)
■ is the major hormone for the regulation of serum [Ca2+].
■ is synthesized and secreted by the chief cells of the parathyroid glands.
1. Secretion of PTH
■ is controlled by the serum [Ca2+] binding to Ca2+-sensing receptors in the parathyroid cell membrane. Decreased serum [Ca2+] increases PTH secretion, whereas increased serum [Ca2+] decreases PTH secretion.
■ Decreased serum Ca2+ causes decreased binding to the Ca2+-sensing receptor, which stimulates PTH secretion.
■ Mild decreases in serum [Mg2+] stimulate PTH secretion.
■ Severe decreases in serum [Mg2+] inhibit PTH secretion and produce symptoms of hypoparathyroidism (e.g., hypocalcemia).
■ The second messenger for PTH secretion by the parathyroid gland is cAMP.
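The secretion rules in B 1, including the Mg2+ twist, reduce to a small decision table. A minimal sketch with assumed qualitative inputs:

```python
def pth_secretion(serum_ca_low: bool, mg_status: str = "normal") -> str:
    """Toy rule set for PTH secretion as described in B 1 above.
    mg_status is one of 'normal', 'mildly_low', or 'severely_low'."""
    if mg_status == "severely_low":
        # Severe hypomagnesemia paradoxically inhibits PTH secretion.
        return "decreased (symptoms of hypoparathyroidism)"
    if serum_ca_low or mg_status == "mildly_low":
        # Less Ca2+ bound to the Ca2+-sensing receptor -> more PTH.
        return "increased"
    return "suppressed"
```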
2. Actions of PTH
■ are coordinated to produce an increase in serum [Ca2+] and a decrease in serum [phosphate].
■ The second messenger for PTH actions on its target tissues is cAMP.
a. PTH increases bone resorption, which brings both Ca2+ and phosphate from bone mineral into the ECF. Alone, this effect on bone would not increase the serum ionized [Ca2+] because phosphate complexes Ca2+.
■ Resorption of the organic matrix of bone is reflected in increased hydroxyproline excretion.
b. PTH inhibits renal phosphate reabsorption in the proximal tubule and, therefore, increases phosphate excretion (phosphaturic effect). As a result, the phosphate resorbed from bone is excreted in the urine, allowing the serum ionized [Ca2+] to increase.
■ cAMP generated as a result of the action of PTH on the proximal tubule is excreted in the urine (urinary cAMP).
c. PTH increases renal Ca2+ reabsorption in the distal tubule, which also increases the serum [Ca2+].
d. PTH increases intestinal Ca2+ absorption indirectly by stimulating the production of 1,25-dihydroxycholecalciferol in the kidney (see VII C).
3. Pathophysiology of PTH (Table 7.12)
a. Primary hyperparathyroidism
■ is most commonly caused by parathyroid adenoma.
■ is characterized by the following:
(1) ↑ serum [Ca2+] (hypercalcemia)
(2) ↓ serum [phosphate] (hypophosphatemia)
(3) ↑ urinary phosphate excretion (phosphaturic effect of PTH)
(4) ↑ urinary Ca2+ excretion (caused by the increased filtered load of Ca2+)
(5) ↑ urinary cAMP
(6) ↑ bone resorption
b. Humoral hypercalcemia of malignancy
■ is caused by PTH-related peptide (PTH-rp) secreted by some malignant tumors (e.g., breast, lung). PTH-rp has all of the physiologic actions of PTH, including increased bone resorption, increased renal Ca2+ reabsorption, and decreased renal phosphate reabsorption.
■ is characterized by the following:
(1) ↑ serum [Ca2+] (hypercalcemia)
(2) ↓ serum [phosphate] (hypophosphatemia)
(3) ↑ urinary phosphate excretion (phosphaturic effect of PTH-rp)
(4) ↓ serum PTH levels (due to feedback inhibition from the high serum Ca2+)
c. Hypoparathyroidism
■ is most commonly a result of thyroid surgery, or it is congenital.
■ is characterized by the following:
(1) ↓ serum [Ca2+] (hypocalcemia) and tetany
(2) ↑ serum [phosphate] (hyperphosphatemia)
(3) ↓ urinary phosphate excretion
d. Pseudohypoparathyroidism type Ia—Albright hereditary osteodystrophy
■ is the result of defective Gs protein in the kidney and bone, which causes end-organ resistance to PTH.
■ Hypocalcemia and hyperphosphatemia occur (as in hypoparathyroidism), which are not correctable by the administration of exogenous PTH.
■ Circulating PTH levels are elevated (stimulated by hypocalcemia).
e. Chronic renal failure
■ Decreased glomerular filtration rate (GFR) leads to decreased filtration of phosphate, phosphate retention, and increased serum [phosphate].
■ Increased serum phosphate complexes Ca2+ and leads to decreased ionized [Ca2+].
■ Decreased production of 1,25-dihydroxycholecalciferol by the diseased renal tissue also contributes to the decreased ionized [Ca2+] (see VII C 1).
■ Decreased [Ca2+] causes secondary hyperparathyroidism.
■ The combination of increased PTH levels and decreased 1,25-dihydroxycholecalciferol produces renal osteodystrophy, in which there is increased bone resorption and osteomalacia.
f. Familial hypocalciuric hypercalcemia (FHH)
■ is an autosomal dominant disorder with decreased urinary Ca2+ excretion and increased serum Ca2+.
■ is caused by inactivating mutations of the Ca2+-sensing receptors that regulate PTH secretion.

TABLE 7.12 Pathophysiology of PTH
Disorder | PTH | 1,25-Dihydroxycholecalciferol | Bone | Urine | Serum [Ca2+] | Serum [P]
Primary hyperparathyroidism | ↑ | ↑ (PTH stimulates 1α-hydroxylase) | ↑ resorption | ↑ P excretion (phosphaturia); ↑ Ca2+ excretion (high filtered load of Ca2+); ↑ urinary cAMP | ↑ | ↓
Humoral hypercalcemia of malignancy | ↓ | — | ↑ resorption | ↑ P excretion | ↑ | ↓
Surgical hypoparathyroidism | ↓ | ↓ | ↓ resorption | ↓ P excretion; ↓ urinary cAMP | ↓ | ↑
Pseudohypoparathyroidism | ↑ | ↓ | ↓ resorption (defective Gs) | ↓ P excretion; ↓ urinary cAMP (defective Gs) | ↓ | ↑
Chronic renal failure | ↑ (2°) | ↓ (caused by renal failure) | Osteomalacia (caused by ↓ 1,25-dihydroxycholecalciferol); ↑ resorption (caused by ↑ PTH) | ↓ P excretion (caused by ↓ GFR) | ↓ (caused by ↓ 1,25-dihydroxycholecalciferol) | ↑ (caused by ↓ P excretion)
cAMP = cyclic adenosine monophosphate; GFR = glomerular filtration rate. See Table 7.1 for other abbreviation.
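The lab patterns in Table 7.12, together with the exogenous-PTH/urinary-cAMP test (compare Review Test question 6 below), can be stated as a small classifier. Purely illustrative; the inputs are qualitative 'high'/'low' strings of my own choosing:

```python
def classify_pth_disorder(serum_ca: str, serum_p: str, pth: str,
                          urinary_camp_after_pth: str) -> str:
    """Sketch of the patterns in Table 7.12. urinary_camp_after_pth is the
    response to injected PTH, which separates hypoparathyroidism (intact Gs,
    cAMP rises) from pseudohypoparathyroidism (defective Gs, no rise)."""
    if serum_ca == "high" and serum_p == "low":
        return ("primary hyperparathyroidism" if pth == "high"
                else "humoral hypercalcemia of malignancy (PTH-rp)")
    if serum_ca == "low" and serum_p == "high":
        if pth == "high" and urinary_camp_after_pth == "low":
            return "pseudohypoparathyroidism (defective Gs)"
        if pth == "low":
            return "hypoparathyroidism (e.g., after thyroid surgery)"
    return "pattern not covered by this sketch (consider chronic renal failure)"
```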
C. Vitamin D
■ provides Ca2+ and phosphate to ECF for bone mineralization.
■ In children, vitamin D deficiency causes rickets.
■ In adults, vitamin D deficiency causes osteomalacia.
1. Vitamin D metabolism (Figure 7.14)
■ Cholecalciferol, 25-hydroxycholecalciferol, and 24,25-dihydroxycholecalciferol are inactive.
■ The active form of vitamin D is 1,25-dihydroxycholecalciferol.
■ The production of 1,25-dihydroxycholecalciferol in the kidney is catalyzed by the enzyme 1α-hydroxylase.
■ 1α-Hydroxylase activity is increased by the following:
a. ↓ serum [Ca2+]
b. ↑ PTH levels
c. ↓ serum [phosphate]

[Figure 7.14: Steps and regulation in the synthesis of 1,25-dihydroxycholecalciferol. 7-Dehydrocholesterol in skin (ultraviolet light) and the diet supply cholecalciferol, which is converted in the liver to 25-OH-cholecalciferol; in the kidney this is converted either to 1,25-(OH)2-cholecalciferol (active; favored by ↓ [Ca2+], ↑ PTH, and ↓ [phosphate]) or to 24,25-(OH)2-cholecalciferol (inactive). PTH = parathyroid hormone.]

2. Actions of 1,25-dihydroxycholecalciferol
■ are coordinated to increase both [Ca2+] and [phosphate] in ECF to mineralize new bone.
a. Increases intestinal Ca2+ absorption. Vitamin D–dependent Ca2+-binding protein (calbindin D-28K) is induced by 1,25-dihydroxycholecalciferol.
■ PTH increases intestinal Ca2+ absorption indirectly by stimulating 1α-hydroxylase and increasing production of the active form of vitamin D.
b. Increases intestinal phosphate absorption.
c. Increases renal reabsorption of Ca2+ and phosphate, analogous to its actions on the intestine.
d. Increases bone resorption, which provides Ca2+ and phosphate from "old" bone to mineralize "new" bone.
D. Calcitonin
■ is synthesized and secreted by the parafollicular cells of the thyroid.
■ secretion is stimulated by an increase in serum [Ca2+].
■ acts primarily to inhibit bone resorption.
■ can be used to treat hypercalcemia.
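Closing out section VII, the three activators of renal 1α-hydroxylase (VII C 1) make a one-line rule. A minimal sketch with assumed boolean inputs; in this toy model any one signal suffices:

```python
def one_alpha_hydroxylase_active(serum_ca_low: bool, pth_high: bool,
                                 serum_phosphate_low: bool) -> bool:
    """Renal 1alpha-hydroxylase, which makes active vitamin D, is
    stimulated by low Ca2+, high PTH, or low phosphate (VII C 1)."""
    return serum_ca_low or pth_high or serum_phosphate_low
```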
VIII. Sexual Differentiation (Figure 7.15)
■ Genetic sex is defined by the sex chromosomes, XY in males and XX in females.
■ Gonadal sex is defined by the presence of testes in males and ovaries in females.
■ Phenotypic sex is defined by the characteristics of the internal genital tract and the external genitalia.
A. Male phenotype
■ The testes of gonadal males secrete anti-müllerian hormone and testosterone.
■ Testosterone stimulates the growth and differentiation of the wolffian ducts, which develop into the male internal genital tract.
■ Anti-müllerian hormone causes atrophy of the müllerian ducts (which would have become the female internal genital tract).
B. Female phenotype
■ The ovaries of gonadal females secrete estrogen, but not anti-müllerian hormone or testosterone.
■ Without testosterone, the wolffian ducts do not differentiate.
■ Without anti-müllerian hormone, the müllerian ducts are not suppressed and therefore develop into the female internal genital tract.

[Figure 7.15: Sexual differentiation in males and females. XY male: testes; Sertoli cells secrete anti-müllerian hormone and Leydig cells secrete testosterone, producing the male phenotype. XX female: ovaries; no anti-müllerian hormone and no testosterone, producing the female phenotype.]

IX. Male Reproduction
A. Synthesis of testosterone (Figure 7.16)
■ Testosterone is the major androgen synthesized and secreted by the Leydig cells.
■ Leydig cells do not contain 21β-hydroxylase or 11β-hydroxylase (in contrast to the adrenal cortex) and, therefore, do not synthesize glucocorticoids or mineralocorticoids.
■ LH (in a parallel action to ACTH in the adrenal cortex) increases testosterone synthesis by stimulating cholesterol desmolase, the first step in the pathway.
■ Accessory sex organs (e.g., prostate) contain 5α-reductase, which converts testosterone to its active form, dihydrotestosterone.
■ 5α-Reductase inhibitors (finasteride) may be used to treat benign prostatic hyperplasia because they block the activation of testosterone to dihydrotestosterone in the prostate.

[Figure 7.16: Synthesis of testosterone. Cholesterol → pregnenolone (stimulated by LH) → 17-hydroxypregnenolone → dehydroepiandrosterone → androstenedione → testosterone (17β-OH-steroid dehydrogenase); in target tissues, testosterone → dihydrotestosterone (5α-reductase). LH = luteinizing hormone.]

B. Regulation of testes (Figure 7.17)
1. Hypothalamic control—GnRH
■ Arcuate nuclei of the hypothalamus secrete GnRH into the hypothalamic–hypophysial portal blood. GnRH stimulates the anterior pituitary to secrete FSH and LH.
2. Anterior pituitary—FSH and LH
■ FSH acts on the Sertoli cells to maintain spermatogenesis. The Sertoli cells also secrete inhibin, which is involved in negative feedback of FSH secretion.
■ LH acts on the Leydig cells to promote testosterone synthesis. Testosterone acts via an intratesticular paracrine mechanism to reinforce the spermatogenic effects of FSH in the Sertoli cells.
3. Negative feedback control—testosterone and inhibin
■ Testosterone inhibits the secretion of LH by inhibiting the release of GnRH from the hypothalamus and by directly inhibiting the release of LH from the anterior pituitary.
■ Inhibin (produced by the Sertoli cells) inhibits the secretion of FSH from the anterior pituitary.

[Figure 7.17: Control of male reproductive hormones. Hypothalamic (arcuate nucleus) GnRH stimulates anterior pituitary FSH and LH; LH stimulates Leydig cell testosterone, which acts intratesticularly on the Sertoli cells and feeds back negatively on the hypothalamus and anterior pituitary; Sertoli cell inhibin feeds back negatively on FSH. FSH = follicle-stimulating hormone; GnRH = gonadotropin-releasing hormone; LH = luteinizing hormone.]
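The two feedback loops in B 3 act on different gonadotropins, which is worth making explicit. A directional sketch only; the qualitative boolean inputs are assumptions:

```python
def male_hpg_feedback(testosterone_high: bool, inhibin_high: bool) -> dict:
    """Directional sketch of IX B 3 / Figure 7.17: testosterone feeds back
    on GnRH and LH; inhibin selectively feeds back on FSH."""
    return {
        "GnRH_and_LH": "suppressed" if testosterone_high else "rising",
        "FSH": "suppressed" if inhibin_high else "rising",
    }

# In androgen insensitivity (C 3 below), absent androgen receptors break the
# testosterone limb of this feedback, so testosterone levels stay elevated.
```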
C. Actions of testosterone or dihydrotestosterone
1. Actions of testosterone
■ Differentiation of epididymis, vas deferens, and seminal vesicles
■ Pubertal growth spurt
■ Cessation of pubertal growth spurt (epiphyseal closure)
■ Libido
■ Spermatogenesis in Sertoli cells (paracrine effect)
■ Deepening of voice
■ Increased muscle mass
■ Growth of penis and seminal vesicles
■ Negative feedback on anterior pituitary
2. Actions of dihydrotestosterone
■ Differentiation of penis, scrotum, and prostate
■ Male hair pattern
■ Male pattern baldness
■ Sebaceous gland activity
■ Growth of prostate
3. Androgen insensitivity disorder (testicular feminizing syndrome)
■ is caused by deficiency of androgen receptors in target tissues of males.
■ Testosterone and dihydrotestosterone actions in target tissues are absent.
■ There are female external genitalia ("default"), and there is no internal genital tract.
■ Testosterone levels are elevated because of the lack of androgen receptors in the anterior pituitary (lack of feedback inhibition).
D. Puberty (male and female)
■ is initiated by the onset of pulsatile GnRH release from the hypothalamus.
■ FSH and LH are, in turn, secreted in pulsatile fashion.
■ GnRH up-regulates its own receptor in the anterior pituitary.
E. Variation in FSH and LH levels over the life span (male and female)
1. In childhood, hormone levels are lowest and FSH > LH.
2. At puberty and during the reproductive years, hormone levels increase and LH > FSH.
3. In senescence, hormone levels are highest and FSH > LH.

X. Female Reproduction
A. Synthesis of estrogen and progesterone (Figure 7.18)
■ Theca cells produce androstenedione (stimulated at the first step by LH). Androstenedione diffuses to the nearby granulosa cells, which contain 17β-hydroxysteroid dehydrogenase, which converts androstenedione to testosterone, and aromatase, which converts testosterone to 17β-estradiol (stimulated by FSH).

[Figure 7.18: Synthesis of estrogen and progesterone. In theca cells (stimulated by LH): cholesterol → pregnenolone → progesterone and, via 17-hydroxypregnenolone and dehydroepiandrosterone, androstenedione. In granulosa cells (stimulated by FSH): androstenedione → testosterone → 17β-estradiol (aromatase). FSH = follicle-stimulating hormone; LH = luteinizing hormone.]

B. Regulation of the ovary
1. Hypothalamic control—GnRH
■ As in the male, pulsatile GnRH stimulates the anterior pituitary to secrete FSH and LH.
2. Anterior lobe of the pituitary—FSH and LH
■ FSH and LH stimulate the following in the ovaries:
a. Steroidogenesis in the ovarian follicle and corpus luteum
b. Follicular development beyond the antral stage
c. Ovulation
d. Luteinization
3. Negative and positive feedback control—estrogen and progesterone (Table 7.13)

TABLE 7.13 Negative and Positive Feedback Control of the Menstrual Cycle
Phase of Menstrual Cycle | Hormone | Type of Feedback and Site
Follicular | Estrogen | Negative; anterior pituitary
Midcycle | Estrogen | Positive; anterior pituitary
Luteal | Estrogen | Negative; anterior pituitary
Luteal | Progesterone | Negative; anterior pituitary

C. Actions of estrogen
1. Has both negative and positive feedback effects on FSH and LH secretion.
2. Causes maturation and maintenance of the fallopian tubes, uterus, cervix, and vagina.
3. Causes the development of female secondary sex characteristics at puberty.
4. Causes the development of the breasts.
5. Up-regulates estrogen, LH, and progesterone receptors.
6. Causes proliferation and development of ovarian granulosa cells.
7. Maintains pregnancy.
8. Lowers the uterine threshold to contractile stimuli during pregnancy.
9. Stimulates prolactin secretion (but then blocks its action on the breast).
D. Actions of progesterone
1. Has negative feedback effects on FSH and LH secretion during the luteal phase.
2. Maintains secretory activity of the uterus during the luteal phase.
3. Maintains pregnancy.
4. Raises the uterine threshold to contractile stimuli during pregnancy.
5. Participates in development of the breasts.
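Table 7.13 and the feedback items in C 1 and D 1 condense to four (phase, hormone) pairs. A minimal lookup sketch; the dictionary layout is mine:

```python
# Condensed from Table 7.13; the site of feedback is the anterior pituitary.
MENSTRUAL_FEEDBACK = {
    ("follicular", "estrogen"): "negative",
    ("midcycle", "estrogen"): "positive",   # drives the LH surge (see E 2 below)
    ("luteal", "estrogen"): "negative",
    ("luteal", "progesterone"): "negative",
}

print(MENSTRUAL_FEEDBACK[("midcycle", "estrogen")])  # positive
```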
E. Menstrual cycle (Figure 7.19)
1. Follicular phase (days 0 to 14)
■ A primordial follicle develops to the graafian stage, with atresia of neighboring follicles.
■ LH and FSH receptors are up-regulated in theca and granulosa cells.
■ Estradiol levels increase and cause proliferation of the uterus.
■ FSH and LH levels are suppressed by the negative feedback effect of estradiol on the anterior pituitary.
■ Progesterone levels are low.
2. Ovulation (day 14)
■ occurs 14 days before menses, regardless of cycle length. Thus, in a 28-day cycle, ovulation occurs on day 14; in a 35-day cycle, ovulation occurs on day 21 (see the sketch after this section).
■ A burst of estradiol synthesis at the end of the follicular phase has a positive feedback effect on the secretion of FSH and LH (LH surge).
■ Ovulation occurs as a result of the estrogen-induced LH surge.
■ Estrogen levels decrease just after ovulation (but rise again during the luteal phase).
■ Cervical mucus increases in quantity; it becomes less viscous and more penetrable by sperm.
3. Luteal phase (days 14 to 28)
■ The corpus luteum begins to develop, and it synthesizes estrogen and progesterone.
■ Vascularity and secretory activity of the endometrium increase to prepare for receipt of a fertilized egg.
■ Basal body temperature increases because of the effect of progesterone on the hypothalamic thermoregulatory center.
■ If fertilization does not occur, the corpus luteum regresses at the end of the luteal phase. As a result, estradiol and progesterone levels decrease abruptly.
4. Menses (days 0 to 4)
■ The endometrium is sloughed because of the abrupt withdrawal of estradiol and progesterone.

[Figure 7.19: The menstrual cycle. LH, FSH, 17β-estradiol, progesterone, and basal body temperature plotted across the follicular phase, ovulation, and luteal phase, from menses to menses; x-axis: day of cycle. FSH = follicle-stimulating hormone; LH = luteinizing hormone.]
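The timing rule in E 2 is simple arithmetic: the luteal phase is fixed at 14 days, so cycle length varies only in the follicular phase. A one-line sketch:

```python
def ovulation_day(cycle_length_days: int) -> int:
    """Ovulation occurs 14 days before menses regardless of cycle length
    (E 2 above), so it falls on cycle length minus 14."""
    return cycle_length_days - 14

print(ovulation_day(28))  # 14
print(ovulation_day(35))  # 21 (cf. Comprehensive Examination question 9)
```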
F. Pregnancy (Figure 7.20)
■ is characterized by steadily increasing levels of estrogen and progesterone, which maintain the endometrium for the fetus, suppress ovarian follicular function (by inhibiting FSH and LH secretion), and stimulate development of the breasts.
1. Fertilization
■ If fertilization occurs, the corpus luteum is rescued from regression by human chorionic gonadotropin (HCG), which is produced by the placenta.
2. First trimester
■ The corpus luteum (stimulated by HCG) is responsible for the production of estradiol and progesterone.
■ Peak levels of HCG occur at gestational week 9 and then decline.
3. Second and third trimesters
■ Progesterone is produced by the placenta.
■ Estrogens are produced by the interplay of the fetal adrenal gland and the placenta. The fetal adrenal gland synthesizes dehydroepiandrosterone-sulfate (DHEA-S), which is then hydroxylated in the fetal liver. These intermediates are transferred to the placenta, where enzymes remove sulfate and aromatize to estrogens. The major placental estrogen is estriol.
■ Human placental lactogen is produced throughout pregnancy. Its actions are similar to those of growth hormone and prolactin.

[Figure 7.20: Hormone levels during pregnancy. HCG peaks early, during the corpus luteum phase; progesterone, estriol, and HPL rise steadily over the 40 weeks as the placenta takes over. HCG = human chorionic gonadotropin; HPL = human placental lactogen.]

4. Parturition
■ Throughout pregnancy, progesterone increases the threshold for uterine contraction.
■ Near term, the estrogen/progesterone ratio increases, which makes the uterus more sensitive to contractile stimuli.
■ The initiating event in parturition is unknown. (Although oxytocin is a powerful stimulant of uterine contractions, blood levels of oxytocin do not change before labor.)
5. Lactation
■ Estrogens and progesterone stimulate the growth and development of the breasts throughout pregnancy.
■ Prolactin levels increase steadily during pregnancy because estrogen stimulates prolactin secretion from the anterior pituitary.
■ Lactation does not occur during pregnancy because estrogen and progesterone block the action of prolactin on the breast.
■ After parturition, estrogen and progesterone levels decrease abruptly and lactation occurs.
■ Lactation is maintained by suckling, which stimulates both oxytocin and prolactin secretion.
■ Ovulation is suppressed as long as lactation continues because prolactin has the following effects:
a. Inhibits hypothalamic GnRH secretion.
b. Inhibits the action of GnRH on the anterior pituitary and consequently inhibits LH and FSH secretion.
c. Antagonizes the actions of LH and FSH on the ovaries.

Review Test

Questions 1–5
Use the graph below, which shows changes during the menstrual cycle, to answer Questions 1–5.

[Graph: hormone levels and basal body temperature across the menstrual cycle, with labeled points A–E; x-axis: day of cycle.]

1. The increase shown at point A is caused by the effect of
(A) estrogen on the anterior pituitary
(B) progesterone on the hypothalamus
(C) follicle-stimulating hormone (FSH) on the ovary
(D) luteinizing hormone (LH) on the anterior pituitary
(E) prolactin on the ovary

2. Blood levels of which substance are described by curve B?
(A) Estradiol
(B) Estriol
(C) Progesterone
(D) Follicle-stimulating hormone (FSH)
(E) Luteinizing hormone (LH)

3. The source of the increase in concentration indicated at point C is the
(A) hypothalamus
(B) anterior pituitary
(C) corpus luteum
(D) ovary
(E) adrenal cortex

4. The source of the increase in concentration at point D is the
(A) ovary
(B) adrenal cortex
(C) corpus luteum
(D) hypothalamus
(E) anterior pituitary

5. The cause of the sudden increase shown at point E is
(A) negative feedback of progesterone on the hypothalamus
(B) negative feedback of estrogen on the anterior pituitary
(C) negative feedback of follicle-stimulating hormone (FSH) on the ovary
(D) positive feedback of FSH on the ovary
(E) positive feedback of estrogen on the anterior pituitary

6. A 41-year-old woman has hypocalcemia, hyperphosphatemia, and decreased urinary phosphate excretion. Injection of parathyroid hormone (PTH) causes an increase in urinary cyclic adenosine monophosphate (cAMP). The most likely diagnosis is
(A) primary hyperparathyroidism
(B) vitamin D intoxication
(C) vitamin D deficiency
(D) hypoparathyroidism after thyroid surgery
(E) pseudohypoparathyroidism

7. Which of the following hormones acts on its target tissues by a steroid hormone mechanism of action?
(A) Thyroid hormone
(B) Parathyroid hormone (PTH)
(C) Antidiuretic hormone (ADH) on the collecting duct
(D) β1 Adrenergic agonists
(E) Glucagon

8. A 38-year-old man who has galactorrhea is found to have a prolactinoma. His physician treats him with bromocriptine, which eliminates the galactorrhea. The basis for the therapeutic action of bromocriptine is that it
(A) antagonizes the action of prolactin on the breast
(B) enhances the action of prolactin on the breast
(C) inhibits prolactin release from the anterior pituitary
(D) inhibits prolactin release from the hypothalamus
(E) enhances the action of dopamine on the anterior pituitary
9. Which of the following hormones originates in the anterior pituitary?
(A) Dopamine
(B) Growth hormone–releasing hormone (GHRH)
(C) Somatostatin
(D) Gonadotropin-releasing hormone (GnRH)
(E) Thyroid-stimulating hormone (TSH)
(F) Oxytocin
(G) Testosterone

10. Which of the following functions of the Sertoli cells mediates negative feedback control of follicle-stimulating hormone (FSH) secretion?
(A) Synthesis of inhibin
(B) Synthesis of testosterone
(C) Aromatization of testosterone
(D) Maintenance of the blood–testes barrier

11. Which of the following substances is derived from proopiomelanocortin (POMC)?
(A) Adrenocorticotropic hormone (ACTH)
(B) Follicle-stimulating hormone (FSH)
(C) Melatonin
(D) Cortisol
(E) Dehydroepiandrosterone

12. Which of the following inhibits the secretion of growth hormone by the anterior pituitary?
(A) Sleep
(B) Stress
(C) Puberty
(D) Somatomedins
(E) Starvation
(F) Hypoglycemia

13. Selective destruction of the zona glomerulosa of the adrenal cortex would produce a deficiency of which hormone?
(A) Aldosterone
(B) Androstenedione
(C) Cortisol
(D) Dehydroepiandrosterone
(E) Testosterone

14. Which of the following explains the suppression of lactation during pregnancy?
(A) Blood prolactin levels are too low for milk production to occur
(B) Human placental lactogen levels are too low for milk production to occur
(C) The fetal adrenal gland does not produce sufficient estriol
(D) Blood levels of estrogen and progesterone are high
(E) The maternal anterior pituitary is suppressed

15. Which step in steroid hormone biosynthesis, if inhibited, blocks the production of all androgenic compounds but does not block the production of glucocorticoids?
(A) Cholesterol → pregnenolone
(B) Progesterone → 11-deoxycorticosterone
(C) 17-Hydroxypregnenolone → dehydroepiandrosterone
(D) Testosterone → estradiol
(E) Testosterone → dihydrotestosterone

16. A 46-year-old woman has hirsutism, hyperglycemia, obesity, muscle wasting, and increased circulating levels of adrenocorticotropic hormone (ACTH). The most likely cause of her symptoms is
(A) primary adrenocortical insufficiency (Addison disease)
(B) pheochromocytoma
(C) primary overproduction of ACTH (Cushing disease)
(D) treatment with exogenous glucocorticoids
(E) hypophysectomy

17. Which of the following decreases the conversion of 25-hydroxycholecalciferol to 1,25-dihydroxycholecalciferol?
(A) A diet low in Ca2+
(B) Hypocalcemia
(C) Hyperparathyroidism
(D) Hypophosphatemia
(E) Chronic renal failure

18. Increased adrenocorticotropic hormone (ACTH) secretion would be expected in patients
(A) with chronic adrenocortical insufficiency (Addison disease)
(B) with primary adrenocortical hyperplasia
(C) who are receiving glucocorticoid for immunosuppression after a renal transplant
(D) with elevated levels of angiotensin II

19. Which of the following would be expected in a patient with Graves disease?
(A) Cold sensitivity
(B) Weight gain
(C) Decreased O2 consumption
(D) Decreased cardiac output
(E) Drooping eyelids
(F) Atrophy of the thyroid gland
(G) Increased thyroid-stimulating hormone (TSH) levels
(H) Increased triiodothyronine (T3) levels

20. Blood levels of which of the following substances are decreased in Graves disease?
(A) Triiodothyronine (T3)
(B) Thyroxine (T4)
(C) Diiodotyrosine (DIT)
(D) Thyroid-stimulating hormone (TSH)
(E) Iodide (I−)
21. Which of the following hormones acts by an inositol 1,4,5-triphosphate (IP3)–Ca2+ mechanism of action?
(A) 1,25-Dihydroxycholecalciferol
(B) Progesterone
(C) Insulin
(D) Parathyroid hormone (PTH)
(E) Gonadotropin-releasing hormone (GnRH)

22. Which step in steroid hormone biosynthesis is stimulated by adrenocorticotropic hormone (ACTH)?
(A) Cholesterol → pregnenolone
(B) Progesterone → 11-deoxycorticosterone
(C) 17-Hydroxypregnenolone → dehydroepiandrosterone
(D) Testosterone → estradiol
(E) Testosterone → dihydrotestosterone

23. The source of estrogen during the second and third trimesters of pregnancy is the
(A) corpus luteum
(B) maternal ovaries
(C) fetal ovaries
(D) placenta
(E) maternal ovaries and fetal adrenal gland
(F) maternal adrenal gland and fetal liver
(G) fetal adrenal gland, fetal liver, and placenta

24. Which of the following causes increased aldosterone secretion?
(A) Decreased blood volume
(B) Administration of an inhibitor of angiotensin-converting enzyme (ACE)
(C) Hyperosmolarity
(D) Hypokalemia

25. Secretion of oxytocin is increased by
(A) milk ejection
(B) dilation of the cervix
(C) increased prolactin levels
(D) increased extracellular fluid (ECF) volume
(E) increased serum osmolarity

26. A 61-year-old woman with hyperthyroidism is treated with propylthiouracil. The drug reduces the synthesis of thyroid hormones because it inhibits oxidation of
(A) triiodothyronine (T3)
(B) thyroxine (T4)
(C) diiodotyrosine (DIT)
(D) thyroid-stimulating hormone (TSH)
(E) iodide (I−)

27. A 39-year-old man with untreated type I diabetes mellitus is brought to the emergency room. An injection of insulin would be expected to cause an increase in his
(A) urine glucose concentration
(B) blood glucose concentration
(C) blood K+ concentration
(D) blood pH
(E) breathing rate

28. Which of the following results from the action of parathyroid hormone (PTH) on the renal tubule?
(A) Inhibition of 1α-hydroxylase
(B) Stimulation of Ca2+ reabsorption in the distal tubule
(C) Stimulation of phosphate reabsorption in the proximal tubule
(D) Interaction with receptors on the luminal membrane of the proximal tubular cells
(E) Decreased urinary excretion of cyclic adenosine monophosphate (cAMP)

29. Which step in steroid hormone biosynthesis occurs in the accessory sex target tissues of the male and is catalyzed by 5α-reductase?
(A) Cholesterol → pregnenolone
(B) Progesterone → 11-deoxycorticosterone
(C) 17-Hydroxypregnenolone → dehydroepiandrosterone
(D) Testosterone → estradiol
(E) Testosterone → dihydrotestosterone

30. Which of the following pancreatic secretions has a receptor with four subunits, two of which have tyrosine kinase activity?
(A) Insulin
(B) Glucagon
(C) Somatostatin
(D) Pancreatic lipase

31. A 16-year-old, seemingly normal female is diagnosed with androgen insensitivity disorder. She has never had a menstrual cycle and is found to have a blind-ending vagina; no uterus, cervix, or ovaries; a 46,XY genotype; and intra-abdominal testes. Her serum testosterone is elevated. Which of the following characteristics is caused by lack of androgen receptors?
(A) 46,XY genotype
(B) Testes
(C) Elevated serum testosterone
(D) Lack of uterus and cervix
(E) Lack of menstrual cycles

Answers and Explanations

1. The answer is B [X E 3; Figure 7.19]. Curve A shows basal body temperature. The increase in temperature occurs as a result of elevated progesterone levels during the luteal (secretory) phase of the menstrual cycle. Progesterone increases the set-point temperature in the hypothalamic thermoregulatory center.
2. The answer is C [X E 3; Figure 7.19]. Progesterone is secreted during the luteal phase of the menstrual cycle.

3. The answer is D [X A, E 1; Figure 7.19]. The curve shows blood levels of estradiol. The source of the increase in estradiol concentration shown at point C is the ovarian granulosa cells, which contain high concentrations of aromatase and convert testosterone to estradiol.

4. The answer is C [X E 3; Figure 7.19]. The curve shows blood levels of estradiol. During the luteal phase of the cycle, the source of the estradiol is the corpus luteum. The corpus luteum prepares the uterus to receive a fertilized egg.

5. The answer is E [X E 2; Figure 7.19]. Point E shows the luteinizing hormone (LH) surge that initiates ovulation at midcycle. The LH surge is caused by increasing estrogen levels from the developing ovarian follicle. Increased estrogen, by positive feedback, stimulates the anterior pituitary to secrete LH and follicle-stimulating hormone (FSH).

6. The answer is D [VII B 3 b]. Low blood [Ca2+] and high blood [phosphate] are consistent with hypoparathyroidism. Lack of parathyroid hormone (PTH) decreases bone resorption, decreases renal reabsorption of Ca2+, and increases renal reabsorption of phosphate (causing low urinary phosphate). Because the patient responded to exogenous PTH with an increase in urinary cyclic adenosine monophosphate (cAMP), the G protein coupling the PTH receptor to adenylate cyclase is apparently normal. Consequently, pseudohypoparathyroidism is excluded. Vitamin D intoxication would cause hypercalcemia, not hypocalcemia. Vitamin D deficiency would cause hypocalcemia and hypophosphatemia.

7. The answer is A [II E; Table 7.2]. Thyroid hormone, an amine, acts on its target tissues by a steroid hormone mechanism, inducing the synthesis of new proteins. The action of antidiuretic hormone (ADH) on the collecting duct (V2 receptors) is mediated by cyclic adenosine monophosphate (cAMP), although the other action of ADH (vascular smooth muscle, V1 receptors) is mediated by inositol 1,4,5-triphosphate (IP3). Parathyroid hormone (PTH), β1 agonists, and glucagon all act through cAMP mechanisms of action.

8. The answer is C [III B 4 a (1), c (2)]. Bromocriptine is a dopamine agonist. The secretion of prolactin by the anterior pituitary is tonically inhibited by the secretion of dopamine from the hypothalamus. Thus, a dopamine agonist acts just like dopamine—it inhibits prolactin secretion from the anterior pituitary.

9. The answer is E [III B; Table 7.1]. Thyroid-stimulating hormone (TSH) is secreted by the anterior pituitary. Dopamine, growth hormone–releasing hormone (GHRH), somatostatin, and gonadotropin-releasing hormone (GnRH) all are secreted by the hypothalamus. Oxytocin is secreted by the posterior pituitary. Testosterone is secreted by the testes.

10. The answer is A [IX B 2, 3]. Inhibin is produced by the Sertoli cells of the testes when they are stimulated by follicle-stimulating hormone (FSH). Inhibin then inhibits further secretion of FSH by negative feedback on the anterior pituitary. The Leydig cells synthesize testosterone. Testosterone is aromatized in the ovaries.
11. The answer is A [III B 1, 2; Figure 7.5]. Proopiomelanocortin (POMC) is the parent molecule in the anterior pituitary for adrenocorticotropic hormone (ACTH), β-endorphin, γ-lipotropin, and β-lipotropin (and in the intermediary lobe for melanocyte-stimulating hormone [MSH]). Follicle-stimulating hormone (FSH) is not a member of this “family”; rather, it is a member of the thyroid-stimulating hormone (TSH) and luteinizing hormone (LH) “family.” MSH, a component of POMC and ACTH, may stimulate melanin production. Cortisol and dehydroepiandrosterone are produced by the adrenal cortex.

12. The answer is D [III B 3 a]. Growth hormone is secreted in pulsatile fashion, with a large burst occurring during deep sleep (sleep stage 3 or 4). Growth hormone secretion is increased by sleep, stress, puberty, starvation, and hypoglycemia. Somatomedins are generated when growth hormone acts on its target tissues; they inhibit growth hormone secretion by the anterior pituitary, both directly and indirectly (by stimulating somatostatin release).

13. The answer is A [V A 1; Figure 7.10]. Aldosterone is produced in the zona glomerulosa of the adrenal cortex because that layer contains the enzyme for conversion of corticosterone to aldosterone (aldosterone synthase). Cortisol is produced in the zona fasciculata. Androstenedione and dehydroepiandrosterone are produced in the zona reticularis. Testosterone is produced in the testes, not in the adrenal cortex.

14. The answer is D [X F 5]. Although the high circulating levels of estrogen stimulate prolactin secretion during pregnancy, the action of prolactin on the breast is inhibited by progesterone and estrogen. After parturition, progesterone and estrogen levels decrease dramatically. Prolactin can then interact with its receptors in the breast, and lactation proceeds if initiated by suckling.

15. The answer is C [Figure 7.11]. The conversion of 17-hydroxypregnenolone to dehydroepiandrosterone (as well as the conversion of 17-hydroxyprogesterone to androstenedione) is catalyzed by 17,20-lyase. If this process is inhibited, synthesis of androgens is stopped.

16. The answer is C [V A 5 b]. This woman has the classic symptoms of a primary elevation of adrenocorticotropic hormone (ACTH) (Cushing disease). Elevation of ACTH stimulates overproduction of glucocorticoids and androgens. Treatment with pharmacologic doses of glucocorticoids would produce similar symptoms, except that circulating levels of ACTH would be low because of negative feedback suppression at both the hypothalamic (corticotropin-releasing hormone [CRH]) and anterior pituitary (ACTH) levels. Addison disease is caused by primary adrenocortical insufficiency. Although a patient with Addison disease would have increased levels of ACTH (because of the loss of negative feedback inhibition), the symptoms would be of glucocorticoid deficit, not excess. Hypophysectomy would remove the source of ACTH. A pheochromocytoma is a tumor of the adrenal medulla that secretes catecholamines.

17. The answer is E [VII C 1]. Ca2+ deficiency (low Ca2+ diet or hypocalcemia) activates 1α-hydroxylase, which catalyzes the conversion of vitamin D to its active form, 1,25-dihydroxycholecalciferol. Increased parathyroid hormone (PTH) and hypophosphatemia also stimulate the enzyme. Chronic renal failure is associated with a constellation of bone diseases, including osteomalacia caused by failure of the diseased renal tissue to produce the active form of vitamin D.
18. The answer is A [V A 2 a (3); Table 7.6; Figure 7.12]. Addison disease is caused by primary adrenocortical insufficiency. The resulting decrease in cortisol production causes a decrease in negative feedback inhibition on the hypothalamus and the anterior pituitary. Both of these conditions will result in increased adrenocorticotropic hormone (ACTH) secretion. Patients who have adrenocortical hyperplasia or who are receiving exogenous glucocorticoid will have an increase in the negative feedback inhibition of ACTH secretion.

19. The answer is H [IV B 2; Table 7.5]. Graves disease (hyperthyroidism) is caused by overstimulation of the thyroid gland by circulating antibodies to the thyroid-stimulating hormone (TSH) receptor (which then increases the production and secretion of triiodothyronine [T3] and thyroxine [T4], just as TSH would). Therefore, the signs and symptoms of Graves disease are the same as those of hyperthyroidism, reflecting the actions of increased circulating levels of thyroid hormones: increased heat production, weight loss, increased O2 consumption and cardiac output, exophthalmos (bulging eyes, not drooping eyelids), and hypertrophy of the thyroid gland (goiter). TSH levels will be decreased (not increased) as a result of the negative feedback effect of increased T3 levels on the anterior pituitary.

20. The answer is D [IV B 2; Table 7.5]. In Graves disease (hyperthyroidism), the thyroid is stimulated to produce and secrete vast quantities of thyroid hormones as a result of stimulation by thyroid-stimulating immunoglobulins (antibodies to the thyroid-stimulating hormone [TSH] receptors on the thyroid gland). Because of the high circulating levels of thyroid hormones, anterior pituitary secretion of TSH will be turned off (negative feedback).

21. The answer is E [Table 7.2]. Gonadotropin-releasing hormone (GnRH) is a peptide hormone that acts on the cells of the anterior pituitary by an inositol 1,4,5-triphosphate (IP3)–Ca2+ mechanism to cause the secretion of follicle-stimulating hormone (FSH) and luteinizing hormone (LH). 1,25-Dihydroxycholecalciferol and progesterone are steroid hormone derivatives of cholesterol that act by inducing the synthesis of new proteins. Insulin acts on its target cells by a tyrosine kinase mechanism. Parathyroid hormone (PTH) acts on its target cells by an adenylate cyclase–cyclic adenosine monophosphate (cAMP) mechanism.

22. The answer is A [V A 2 a (2)]. The conversion of cholesterol to pregnenolone is catalyzed by cholesterol desmolase. This step in the biosynthetic pathway for steroid hormones is stimulated by adrenocorticotropic hormone (ACTH).

23. The answer is G [X F 3]. During the second and third trimesters of pregnancy, the fetal adrenal gland synthesizes dehydroepiandrosterone-sulfate (DHEA-S), which is hydroxylated in the fetal liver and then transferred to the placenta, where it is aromatized to estrogen. In the first trimester, the corpus luteum is the source of both estrogen and progesterone.

24. The answer is A [V A 2 b]. Decreased blood volume stimulates the secretion of renin (because of decreased renal perfusion pressure) and initiates the renin–angiotensin–aldosterone cascade. Angiotensin-converting enzyme (ACE) inhibitors block the cascade by decreasing the production of angiotensin II. Hyperosmolarity stimulates antidiuretic hormone (ADH) (not aldosterone) secretion. Hyperkalemia, not hypokalemia, directly stimulates aldosterone secretion by the adrenal cortex.
25. The answer is B [III C 2]. Suckling and dilation of the cervix are the physiologic stimuli for oxytocin secretion. Milk ejection is the result of oxytocin action, not the cause of its secretion. Prolactin secretion is also stimulated by suckling, but prolactin does not directly cause oxytocin secretion. Increased extracellular fluid (ECF) volume and hyperosmolarity are the stimuli for the secretion of the other posterior pituitary hormone, antidiuretic hormone (ADH).

26. The answer is E [IV A 2]. For iodide (I−) to be “organified” (incorporated into thyroid hormone), it must be oxidized to I2, which is accomplished by a peroxidase enzyme in the thyroid follicular cell membrane. Propylthiouracil inhibits peroxidase and, therefore, halts the synthesis of thyroid hormones.

27. The answer is D [VI C 3; Table 7.7]. Before the injection of insulin, the man would have had hyperglycemia, glycosuria, hyperkalemia, and metabolic acidosis with compensatory hyperventilation. The injection of insulin would be expected to decrease his blood glucose (by increasing the uptake of glucose into the cells), decrease his urinary glucose (secondary to decreasing his blood glucose), decrease his blood K+ (by shifting K+ into the cells), and correct his metabolic acidosis (by decreasing the production of ketoacids). The correction of the metabolic acidosis will lead to an increase in his blood pH and will reduce his compensatory hyperventilation.

28. The answer is B [VII B 2]. Parathyroid hormone (PTH) stimulates both renal Ca2+ reabsorption in the renal distal tubule and the 1α-hydroxylase enzyme. PTH inhibits (not stimulates) phosphate reabsorption in the proximal tubule, which is associated with an increase in urinary cyclic adenosine monophosphate (cAMP). The receptors for PTH are located on the basolateral membranes, not the luminal membranes.

29. The answer is E [IX A]. Some target tissues for androgens contain 5α-reductase, which converts testosterone to dihydrotestosterone, the active form in those tissues.

30. The answer is A [VI C 2]. The insulin receptor in target tissues is a tetramer. The two β subunits have tyrosine kinase activity and autophosphorylate the receptor when stimulated by insulin.

31. The answer is C [IX C]. The elevated serum testosterone is due to lack of androgen receptors on the anterior pituitary (which normally would mediate negative feedback by testosterone). The presence of testes is due to the male genotype. The lack of uterus and cervix is due to anti-müllerian hormone (secreted by the fetal testes), which suppressed differentiation of the müllerian ducts into the internal female genital tract. The lack of menstrual cycles is due to the absence of a female reproductive tract.

Comprehensive Examination

Questions 1 and 2
After extensive testing, a 60-year-old man is found to have a pheochromocytoma that secretes mainly epinephrine.

1. Which of the following signs would be expected in this patient?
(A) Decreased heart rate
(B) Decreased arterial blood pressure
(C) Decreased excretion rate of 3-methoxy-4-hydroxymandelic acid (VMA)
(D) Cold, clammy skin

2. Symptomatic treatment would be best achieved in this man with
(A) phentolamine
(B) isoproterenol
(C) a combination of phentolamine and isoproterenol
(D) a combination of phentolamine and propranolol
(E) a combination of isoproterenol and phenylephrine

3. The principle of positive feedback is illustrated by the effect of
(A) PO2 on breathing rate
(B) glucose on insulin secretion
(C) estrogen on follicle-stimulating hormone (FSH) secretion at midcycle
(D) blood [Ca2+] on parathyroid hormone (PTH) secretion
(E) decreased blood pressure on sympathetic outflow to the heart and blood vessels
4. In the graph below, the response shown by the dotted line illustrates the effect of

[Graph: cardiac output and venous return (L/min) plotted against right atrial pressure (mm Hg) or end-diastolic volume (L); solid curves labeled "Cardiac output" and "Venous return," with a dotted line showing an altered response.]

(A) administration of digitalis
(B) administration of a negative inotropic agent
(C) increased blood volume
(D) decreased blood volume
(E) decreased total peripheral resistance (TPR)

Questions 5 and 6

[Graph: two hemoglobin–O2 dissociation curves, A and B; x-axis: PO2 (mm Hg), 25 to 100; y-axis: hemoglobin saturation (%), 50 to 100.]

5. On the accompanying graph, the shift from curve A to curve B could be caused by
(A) fetal hemoglobin (HbF)
(B) carbon monoxide (CO) poisoning
(C) decreased pH
(D) increased temperature
(E) increased 2,3-diphosphoglycerate (DPG)

6. The shift from curve A to curve B is associated with
(A) decreased P50
(B) decreased affinity of hemoglobin for O2
(C) decreased O2-carrying capacity of hemoglobin
(D) increased ability to unload O2 in the tissues

7. A negative free-water clearance (CH2O) would occur in a person
(A) who drinks 2 L of water in 30 minutes
(B) after overnight water restriction
(C) who is receiving lithium for the treatment of depression and has polyuria that is unresponsive to antidiuretic hormone (ADH) administration
(D) with a urine flow rate of 5 mL/min, a urine osmolarity of 295 mOsm/L, and a serum osmolarity of 295 mOsm/L
(E) with a urine osmolarity of 90 mOsm/L and a serum osmolarity of 310 mOsm/L after a severe head injury

8. CO2 generated in the tissues is carried in venous blood primarily as
(A) CO2 in the plasma
(B) H2CO3 in the plasma
(C) HCO3− in the plasma
(D) CO2 in the red blood cells (RBCs)
(E) carboxyhemoglobin in the RBCs

9. In a 35-day menstrual cycle, ovulation occurs on day
(A) 12
(B) 14
(C) 17
(D) 21
(E) 28

10. Which of the following hormones stimulates the conversion of testosterone to 17β-estradiol in ovarian granulosa cells?
(A) Adrenocorticotropic hormone (ACTH)
(B) Estradiol
(C) Follicle-stimulating hormone (FSH)
(D) Gonadotropin-releasing hormone (GnRH)
(E) Human chorionic gonadotropin (HCG)
(F) Prolactin
(G) Testosterone

11. Which gastrointestinal secretion is hypotonic, has a high [HCO3−], and has its production inhibited by vagotomy?
(A) Saliva
(B) Gastric secretion
(C) Pancreatic secretion
(D) Bile

Questions 12 and 13
A 53-year-old man with multiple myeloma is hospitalized after 2 days of polyuria, polydipsia, and increasing confusion. Laboratory tests show an elevated serum [Ca2+] of 15 mg/dL, and treatment is initiated to decrease it. The patient's serum osmolarity is 310 mOsm/L.

12. The most likely reason for polyuria in this man is
(A) increased circulating levels of antidiuretic hormone (ADH)
(B) increased circulating levels of aldosterone
(C) inhibition of the action of ADH on the renal tubule
(D) stimulation of the action of ADH on the renal tubule
(E) psychogenic water drinking

13. A drug administered in error produces a further increase in the patient's serum [Ca2+]. That drug is
(A) a thiazide diuretic
(B) a loop diuretic
(C) calcitonin
(D) mithramycin
(E) etidronate disodium

14. Which of the following substances acts on its target cells via an inositol 1,4,5-triphosphate (IP3)–Ca2+ mechanism?
(A) Somatomedins acting on chondrocytes
(B) Oxytocin acting on myoepithelial cells of the breast
(C) Antidiuretic hormone (ADH) acting on the renal collecting duct
(D) Adrenocorticotropic hormone (ACTH) acting on the adrenal cortex
(E) Thyroid hormone acting on skeletal muscle
15. A key difference in the mechanism of excitation–contraction coupling between the muscle of the pharynx and the muscle of the wall of the small intestine is that
(A) slow waves are present in the pharynx, but not in the small intestine
(B) adenosine triphosphate (ATP) is used for contraction in the pharynx, but not in the small intestine
(C) intracellular [Ca2+] is increased after excitation in the pharynx, but not in the small intestine
(D) action potentials depolarize the muscle of the small intestine, but not of the pharynx
(E) Ca2+ binds to troponin C in the pharynx, but not in the small intestine, to initiate contraction

16. A 40-year-old woman has an arterial pH of 7.25, an arterial Pco2 of 30 mm Hg, and serum [K+] of 2.8 mEq/L. Her blood pressure is 100/80 mm Hg when supine and 80/50 mm Hg when standing. What is the cause of her abnormal blood values?
(A) Vomiting
(B) Diarrhea
(C) Treatment with a loop diuretic
(D) Treatment with a thiazide diuretic

17. Secretion of HCl by gastric parietal cells is needed for
(A) activation of pancreatic lipases
(B) activation of salivary lipases
(C) activation of intrinsic factor
(D) activation of pepsinogen to pepsin
(E) the formation of micelles

18. Which of the following would cause an increase in glomerular filtration rate (GFR)?
(A) Constriction of the afferent arteriole
(B) Constriction of the efferent arteriole
(C) Constriction of the ureter
(D) Increased plasma protein concentration
(E) Infusion of inulin

19. Fat absorption occurs primarily in the
(A) stomach
(B) jejunum
(C) terminal ileum
(D) cecum
(E) sigmoid colon

20. Which of the following hormones causes constriction of vascular smooth muscle through an inositol 1,4,5-triphosphate (IP3) second messenger system?
(A) Antidiuretic hormone (ADH)
(B) Aldosterone
(C) Dopamine
(D) Oxytocin
(E) Parathyroid hormone (PTH)

21. A 30-year-old woman has the anterior lobe of her pituitary gland surgically removed because of a tumor. Without hormone replacement therapy, which of the following would occur after the operation?
(A) Absence of menses
(B) Inability to concentrate the urine in response to water deprivation
(C) Failure to secrete catecholamines in response to stress
(D) Failure to secrete insulin in a glucose tolerance test
(E) Failure to secrete parathyroid hormone (PTH) in response to hypocalcemia

22. The following graph shows three relationships as a function of plasma [glucose].

[Graph: glucose filtration, excretion, and reabsorption (mg/min) plotted against plasma [glucose] (mg/dL, 200 to 800); curves labeled X, Y, and Z.]

At plasma [glucose] less than 200 mg/dL, curves X and Z are superimposed on each other because
(A) the reabsorption and excretion of glucose are equal
(B) all of the filtered glucose is reabsorbed
(C) glucose reabsorption is saturated
(D) the renal threshold for glucose has been exceeded
(E) Na+–glucose cotransport has been inhibited
(F) all of the filtered glucose is excreted

23. Which of the following responses occurs as a result of tapping on the patellar tendon?
(A) Stimulation of Ib afferent fibers in the muscle spindle
(B) Inhibition of Ia afferent fibers in the muscle spindle
(C) Relaxation of the quadriceps muscle
(D) Contraction of the quadriceps muscle
(E) Inhibition of α-motoneurons

Questions 24 and 25
A 5-year-old boy has a severe sore throat, high fever, and cervical adenopathy.

24. It is suspected that the causative agent is Streptococcus pyogenes. Which of the following is involved in producing fever in this patient?
(A) Increased production of interleukin-1 (IL-1)
(B) Decreased production of prostaglandins
(C) Decreased set-point temperature in the hypothalamus
(D) Decreased metabolic rate
(E) Vasodilation of blood vessels in the skin

25. Before antibiotic therapy is initiated, the patient is given aspirin to reduce his fever. The mechanism of fever reduction by aspirin is
(A) shivering
(B) stimulation of cyclooxygenase
(C) inhibition of prostaglandin synthesis
(D) shunting of blood from the surface of the skin
(E) increasing the hypothalamic set-point temperature

26. Arterial pH of 7.52, arterial Pco2 of 26 mm Hg, and tingling and numbness in the feet and hands would be observed in a
(A) patient with chronic diabetic ketoacidosis
(B) patient with chronic renal failure
(C) patient with chronic emphysema and bronchitis
(D) patient who hyperventilates on a commuter flight
(E) patient who is taking a carbonic anhydrase inhibitor for glaucoma
(F) patient with a pyloric obstruction who vomits for 5 days
(G) healthy person

27. Albuterol is useful in the treatment of asthma because it acts as an agonist at which of the following receptors?
(A) α1 Receptor
(B) β1 Receptor
(C) β2 Receptor
(D) Muscarinic receptor
(E) Nicotinic receptor

28. Which of the following hormones is converted to its active form in target tissues by the action of 5α-reductase?
(A) Adrenocorticotropic hormone (ACTH)
(B) Aldosterone
(C) Estradiol
(D) Prolactin
(E) Testosterone

29. If an artery is partially occluded by an embolism such that its radius becomes one-half the preocclusion value, which of the following parameters will increase by a factor of 16?
(A) Blood flow
(B) Resistance
(C) Pressure gradient
(D) Capacitance

30. If heart rate increases, which phase of the cardiac cycle is decreased?
(A) Atrial systole
(B) Isovolumetric ventricular contraction
(C) Rapid ventricular ejection
(D) Reduced ventricular ejection
(E) Isovolumetric ventricular relaxation
(F) Rapid ventricular filling
(G) Reduced ventricular filling

Questions 31 and 32

A 17-year-old boy is brought to the emergency department after being injured in an automobile accident and sustaining significant blood loss. He is given a transfusion of 3 units of blood to stabilize his blood pressure.

31. Before the transfusion, which of the following was true about his condition?
(A) His total peripheral resistance (TPR) was decreased
(B) His heart rate was decreased
(C) The firing rate of his carotid sinus nerves was increased
(D) Sympathetic outflow to his heart and blood vessels was increased

32. Which of the following is a consequence of the decrease in blood volume in this patient?
(A) Increased renal perfusion pressure
(B) Increased circulating levels of angiotensin II
(C) Decreased renal Na+ reabsorption
(D) Decreased renal K+ secretion

33. A 37-year-old woman suffers a severe head injury in a skiing accident. Shortly thereafter, she becomes polydipsic and polyuric. Her urine osmolarity is 75 mOsm/L, and her serum osmolarity is 305 mOsm/L. Treatment with 1-deamino-8-d-arginine vasopressin (dDAVP) causes an increase in her urine osmolarity to 450 mOsm/L. Which diagnosis is correct?
(A) Primary polydipsia
(B) Central diabetes insipidus
(C) Nephrogenic diabetes insipidus
(D) Water deprivation
(E) Syndrome of inappropriate antidiuretic hormone (SIADH)

34. Which diuretic inhibits Na+ reabsorption and K+ secretion in the distal tubule by acting as an aldosterone antagonist?
(A) Acetazolamide
(B) Chlorothiazide
(C) Furosemide
(D) Spironolactone
35. Which gastrointestinal secretion has a component that is required for the intestinal absorption of vitamin B12?
(A) Saliva
(B) Gastric secretion
(C) Pancreatic secretion
(D) Bile

36. Secretion of which of the following hormones is stimulated by extracellular fluid volume expansion?
(A) Antidiuretic hormone (ADH)
(B) Aldosterone
(C) Atrial natriuretic peptide (ANP)
(D) 1,25-Dihydroxycholecalciferol
(E) Parathyroid hormone (PTH)

37. Which step in the steroid hormone synthetic pathway is stimulated by angiotensin II?
(A) Aldosterone synthase
(B) Aromatase
(C) Cholesterol desmolase
(D) 17,20-Lyase
(E) 5α-Reductase

Questions 38 to 41

Use the diagram of an action potential to answer the following questions.
[Figure: cardiac action potential with phases 0–4; membrane potential (millivolts, +20 to −100) versus time (100 msec)]

38. The action potential shown is from
(A) a skeletal muscle cell
(B) a smooth muscle cell
(C) a sinoatrial (SA) cell
(D) an atrial muscle cell
(E) a ventricular muscle cell

39. Phase 0 of the action potential shown is produced by an
(A) inward K+ current
(B) inward Na+ current
(C) inward Ca2+ current
(D) outward Na+ current
(E) outward Ca2+ current

40. Phase 2, the plateau phase, of the action potential shown
(A) is the result of Ca2+ flux out of the cell
(B) increases in duration as heart rate increases
(C) corresponds to the effective refractory period
(D) is the result of approximately equal inward and outward currents
(E) is the portion of the action potential when another action potential can most easily be elicited

41. The action potential shown corresponds to which portion of an electrocardiogram (ECG)?
(A) P wave
(B) PR interval
(C) QRS complex
(D) ST segment
(E) QT interval

42. Which of the following is the first step in the biosynthetic pathway for thyroid hormones that is inhibited by propylthiouracil?
(A) Iodide (I−) pump
(B) I− → I2
(C) I2 + tyrosine
(D) Diiodotyrosine (DIT) + DIT
(E) Thyroxine (T4) → triiodothyronine (T3)

43. Arterial pH of 7.29, arterial [HCO3−] of 14 mEq/L, increased urinary excretion of NH4+, and hyperventilation would be observed in a
(A) patient with chronic diabetic ketoacidosis
(B) patient with chronic renal failure
(C) patient with chronic emphysema and bronchitis
(D) patient who hyperventilates on a commuter flight
(E) patient who is taking a carbonic anhydrase inhibitor for glaucoma
(F) patient with a pyloric obstruction who vomits for 5 days
(G) healthy person

44. Activation of which of the following receptors increases total peripheral resistance (TPR)?
(A) α1 Receptor
(B) β1 Receptor
(C) β2 Receptor
(D) Muscarinic receptor
(E) Nicotinic receptor

45. The receptor for this hormone has tyrosine kinase activity.
(A) Adrenocorticotropic hormone (ACTH)
(B) Antidiuretic hormone (ADH)
(C) Aldosterone
(D) Insulin
(E) Parathyroid hormone (PTH)
(F) Somatostatin

46. If an artery is partially occluded by an embolism such that its radius becomes one-half the preocclusion value, which of the following parameters will decrease by a factor of 16?
(A) Blood flow
(B) Resistance
(C) Pressure gradient
(D) Capacitance

47. Which phase of the cardiac cycle is absent if there is no P wave on the electrocardiogram (ECG)?
(A) Atrial systole
(B) Isovolumetric ventricular contraction
(C) Rapid ventricular ejection
(D) Reduced ventricular ejection
(E) Isovolumetric ventricular relaxation
(F) Rapid ventricular filling
(G) Reduced ventricular filling
48. A receptor potential in the pacinian corpuscle
(A) is all-or-none
(B) has a stereotypical size and shape
(C) is the action potential of this sensory receptor
(D) if hyperpolarizing, increases the likelihood of action potential occurrence
(E) if depolarizing, brings the membrane potential closer to threshold

49. Compared with the base of the lung, in a person who is standing, the apex of the lung has
(A) a higher ventilation rate
(B) a higher perfusion rate
(C) a higher ventilation/perfusion (V/Q) ratio
(D) the same V/Q ratio
(E) a lower pulmonary capillary PO2

50. A 54-year-old man with a lung tumor has high circulating levels of antidiuretic hormone (ADH), a serum osmolarity of 260 mOsm/L, and a negative free-water clearance (C_H2O). Which diagnosis is correct?
(A) Primary polydipsia
(B) Central diabetes insipidus
(C) Nephrogenic diabetes insipidus
(D) Water deprivation
(E) Syndrome of inappropriate antidiuretic hormone (SIADH)

51. End-organ resistance to which of the following hormones results in polyuria and elevated serum osmolarity?
(A) Antidiuretic hormone (ADH)
(B) Aldosterone
(C) 1,25-Dihydroxycholecalciferol
(D) Parathyroid hormone (PTH)
(E) Somatostatin

52. Which diuretic causes increased urinary excretion of Na+ and K+ and decreased urinary excretion of Ca2+?
(A) Acetazolamide
(B) Chlorothiazide
(C) Furosemide
(D) Spironolactone

53. Arterial Pco2 of 72 mm Hg, arterial [HCO3−] of 38 mEq/L, and increased H+ excretion would be observed in a
(A) patient with chronic diabetic ketoacidosis
(B) patient with chronic renal failure
(C) patient with chronic emphysema and bronchitis
(D) patient who hyperventilates on a commuter flight
(E) patient who is taking a carbonic anhydrase inhibitor for glaucoma
(F) patient with a pyloric obstruction who vomits for 5 days
(G) healthy person

54. In a skeletal muscle capillary, the capillary hydrostatic pressure (Pc) is 32 mm Hg, the capillary oncotic pressure (πc) is 27 mm Hg, and the interstitial hydrostatic pressure (Pi) is 2 mm Hg. Interstitial oncotic pressure (πi) is negligible. What is the driving force across the capillary wall, and will it favor filtration or absorption?
(A) 3 mm Hg, favoring absorption
(B) 3 mm Hg, favoring filtration
(C) 7 mm Hg, favoring absorption
(D) 7 mm Hg, favoring filtration
(E) 9 mm Hg, favoring filtration

55. Which of the following substances has the lowest renal clearance?
(A) Creatinine
(B) Glucose
(C) K+
(D) Na+
(E) Para-aminohippuric acid (PAH)

56. Atropine causes dry mouth by inhibiting which of the following receptors?
(A) α1 Receptor
(B) β1 Receptor
(C) β2 Receptor
(D) Muscarinic receptor
(E) Nicotinic receptor

57. Which of the following transport mechanisms is inhibited by furosemide in the thick ascending limb?
(A) Na+ diffusion via Na+ channels
(B) Na+–glucose cotransport (symport)
(C) Na+–K+–2Cl− cotransport (symport)
(D) Na+–H+ exchange (antiport)
(E) Na+,K+-adenosine triphosphatase (ATPase)

58. Which of the following conditions decreases the likelihood of edema formation?
(A) Arteriolar constriction
(B) Venous constriction
(C) Standing
(D) Nephrotic syndrome
(E) Inflammation

59. Which of the following conditions causes hypoventilation?
(A) Strenuous exercise
(B) Ascent to high altitude
(C) Anemia
(D) Diabetic ketoacidosis
(E) Chronic obstructive pulmonary disease (COPD)

60. A 28-year-old man who is receiving lithium treatment for bipolar disorder becomes polyuric. His urine osmolarity is 90 mOsm/L; it remains at that level when he is given a nasal spray of dDAVP.
Which diagnosis is correct?
(A) Primary polydipsia
(B) Central diabetes insipidus
(C) Nephrogenic diabetes insipidus
(D) Water deprivation
(E) Syndrome of inappropriate antidiuretic hormone (SIADH)

61. Inhibition of which step in the steroid hormone synthetic pathway blocks the production of all androgenic compounds in the adrenal cortex, but not the production of glucocorticoids or mineralocorticoids?
(A) Aldosterone synthase
(B) Aromatase
(C) Cholesterol desmolase
(D) 17,20-Lyase
(E) 5α-Reductase

62. Arterial pH of 7.54, arterial [HCO3−] of 48 mEq/L, hypokalemia, and hypoventilation would be observed in a
(A) patient with chronic diabetic ketoacidosis
(B) patient with chronic renal failure
(C) patient with chronic emphysema and bronchitis
(D) patient who hyperventilates on a commuter flight
(E) patient who is taking a carbonic anhydrase inhibitor for glaucoma
(F) patient with a pyloric obstruction who vomits for 5 days
(G) healthy person

63. Somatostatin inhibits the secretion of which of the following hormones?
(A) Antidiuretic hormone (ADH)
(B) Insulin
(C) Oxytocin
(D) Prolactin
(E) Thyroid hormone

64. Which of the following substances is converted to a more active form after its secretion?
(A) Testosterone
(B) Triiodothyronine (T3)
(C) Reverse triiodothyronine (rT3)
(D) Angiotensin II
(E) Aldosterone

65. Levels of which of the following hormones are high during the first trimester of pregnancy and decline during the second and third trimesters?
(A) Adrenocorticotropic hormone (ACTH)
(B) Estradiol
(C) Follicle-stimulating hormone (FSH)
(D) Gonadotropin-releasing hormone (GnRH)
(E) Human chorionic gonadotropin (HCG)
(F) Oxytocin
(G) Prolactin
(H) Testosterone

The following diagram applies to Questions 66 and 67.
[Figure: ECG tracing with labeled waves and segments A–E]

66. During which labeled wave or segment of the electrocardiogram (ECG) are both the atria and the ventricles completely repolarized?
(A) A
(B) B
(C) C
(D) D
(E) E

67. During which labeled wave or segment of the electrocardiogram (ECG) is aortic pressure at its lowest value?
(A) A
(B) B
(C) C
(D) D
(E) E

The following diagram applies to Questions 68 to 74.
[Figure: nephron with labeled sites A–E]

68. At which site is the amount of para-aminohippuric acid (PAH) in tubular fluid lowest?
(A) Site A
(B) Site B
(C) Site C
(D) Site D
(E) Site E

69. At which site is the creatinine concentration highest in a person who is deprived of water?
(A) Site A
(B) Site B
(C) Site C
(D) Site D
(E) Site E

70. At which site is the tubular fluid [HCO3−] highest?
(A) Site A
(B) Site B
(C) Site C
(D) Site D
(E) Site E

71. At which site is the amount of K+ in tubular fluid lowest in a person who is on a very low-K+ diet?
(A) Site A
(B) Site B
(C) Site C
(D) Site D
(E) Site E

72. At which site is the composition of tubular fluid closest to that of plasma?
(A) Site A
(B) Site B
(C) Site C
(D) Site D
(E) Site E

73. At which site is about one-third of the filtered water remaining in the tubular fluid?
(A) Site A
(B) Site B
(C) Site C
(D) Site D
(E) Site E

74. At which site is the tubular fluid osmolarity lower than the plasma osmolarity in a person who is deprived of water?
(A) Site A
(B) Site B
(C) Site C
(D) Site D
(E) Site E

75. A patient's electrocardiogram (ECG) shows periodic QRS complexes that are not preceded by P waves and that have a bizarre shape. These QRS complexes originated in the
(A) sinoatrial (SA) node
(B) atrioventricular (AV) node
(C) His–Purkinje system
(D) ventricular muscle
76. Which of the following substances would be expected to cause an increase in arterial blood pressure?
(A) Saralasin
(B) V1 agonist
(C) Acetylcholine (ACh)
(D) Spironolactone
(E) Phenoxybenzamine

77. A decrease in which of the following parameters in an artery will produce an increase in pulse pressure?
(A) Blood flow
(B) Resistance
(C) Pressure gradient
(D) Capacitance

78. Which of the following changes occurs during moderate exercise?
(A) Increased total peripheral resistance (TPR)
(B) Increased stroke volume
(C) Decreased pulse pressure
(D) Decreased venous return
(E) Decreased arterial PO2

79. Plasma renin activity is lower than normal in patients with
(A) hemorrhagic shock
(B) essential hypertension
(C) congestive heart failure
(D) hypertension caused by aortic constriction above the renal arteries

80. Inhibition of which enzyme in the steroid hormone synthetic pathway reduces the size of the prostate?
(A) Aldosterone synthase
(B) Aromatase
(C) Cholesterol desmolase
(D) 17,20-Lyase
(E) 5α-Reductase

81. During which phase of the cardiac cycle does ventricular pressure rise but ventricular volume remain constant?
(A) Atrial systole
(B) Isovolumetric ventricular contraction
(C) Rapid ventricular ejection
(D) Reduced ventricular ejection
(E) Isovolumetric ventricular relaxation
(F) Rapid ventricular filling
(G) Reduced ventricular filling

82. Which of the following lung volumes or capacities includes the residual volume?
(A) Tidal volume (TV)
(B) Vital capacity (VC)
(C) Inspiratory capacity (IC)
(D) Functional residual capacity (FRC)
(E) Inspiratory reserve volume (IRV)

83. Arterial [HCO3−] of 18 mEq/L, Pco2 of 34 mm Hg, and increased urinary HCO3− excretion would be observed in a
(A) patient with chronic diabetic ketoacidosis
(B) patient with chronic renal failure
(C) patient with chronic emphysema and bronchitis
(D) patient who hyperventilates on a commuter flight
(E) patient who is taking a carbonic anhydrase inhibitor for glaucoma
(F) patient with a pyloric obstruction who vomits for 5 days
(G) healthy person

84. A 36-year-old woman with galactorrhea is treated with bromocriptine. Bromocriptine is effective because it acts as an agonist for
(A) dopamine
(B) estradiol
(C) follicle-stimulating hormone (FSH)
(D) gonadotropin-releasing hormone (GnRH)
(E) human chorionic gonadotropin (HCG)
(F) oxytocin
(G) prolactin

85. A 32-year-old woman who is thirsty has a urine osmolarity of 950 mOsm/L and a serum osmolarity of 297 mOsm/L. Which diagnosis is correct?
(A) Primary polydipsia
(B) Central diabetes insipidus
(C) Nephrogenic diabetes insipidus
(D) Water deprivation
(E) Syndrome of inappropriate antidiuretic hormone (SIADH)

86. Hypoxia causes vasoconstriction in which of the following vascular beds?
(A) Cerebral
(B) Coronary
(C) Muscle
(D) Pulmonary
(E) Skin

87. Which diuretic is administered for the treatment of acute mountain sickness and causes an increase in the pH of urine?
(A) Acetazolamide
(B) Chlorothiazide
(C) Furosemide
(D) Spironolactone

88. Arterial pH of 7.25, arterial Pco2 of 30 mm Hg, and decreased urinary excretion of NH4+ would be observed in a
(A) patient with chronic diabetic ketoacidosis
(B) patient with chronic renal failure
(C) patient with chronic emphysema and bronchitis
(D) patient who hyperventilates on a commuter flight
(E) patient who is taking a carbonic anhydrase inhibitor for glaucoma
(F) patient with a pyloric obstruction who vomits for 5 days
(G) healthy person
89. In which of the following situations will arterial PO2 be closest to 100 mm Hg?
(A) A person who is having a severe asthmatic attack
(B) A person who lives at high altitude
(C) A person who has a right-to-left cardiac shunt
(D) A person who has a left-to-right cardiac shunt
(E) A person who has pulmonary fibrosis

90. Which of the following is an example of a primary active transport process?
(A) Na+–glucose transport in small intestinal epithelial cells
(B) Na+–alanine transport in renal proximal tubular cells
(C) Insulin-dependent glucose transport in muscle cells
(D) H+–K+ transport in gastric parietal cells
(E) Na+–Ca2+ exchange in nerve cells

91. Which gastrointestinal secretion is inhibited when the pH of the stomach contents is 1.0?
(A) Saliva
(B) Gastric secretion
(C) Pancreatic secretion
(D) Bile

92. Which of the following would be expected to increase after surgical removal of the duodenum?
(A) Gastric emptying
(B) Secretion of cholecystokinin (CCK)
(C) Secretion of secretin
(D) Contraction of the gallbladder
(E) Absorption of lipids

93. Which of the following hormones causes contraction of vascular smooth muscle?
(A) Antidiuretic hormone (ADH)
(B) Aldosterone
(C) Atrial natriuretic peptide (ANP)
(D) 1,25-Dihydroxycholecalciferol
(E) Parathyroid hormone (PTH)

94. Which of the following is absorbed by facilitated diffusion?
(A) Glucose in duodenal cells
(B) Fructose in duodenal cells
(C) Dipeptides in duodenal cells
(D) Vitamin B1 in duodenal cells
(E) Cholesterol in duodenal cells
(F) Bile acids in ileal cells

95. Which of the following hormones acts on the anterior lobe of the pituitary to inhibit secretion of growth hormone?
(A) Dopamine
(B) Gonadotropin-releasing hormone (GnRH)
(C) Insulin
(D) Prolactin
(E) Somatostatin

96. Which step in the steroid hormone synthetic pathway is required for the development of female secondary sex characteristics, but not male secondary sex characteristics?
(A) Aldosterone synthase
(B) Aromatase
(C) Cholesterol desmolase
(D) 17,20-Lyase
(E) 5α-Reductase

97. At the beginning of which phase of the cardiac cycle does the second heart sound occur?
(A) Atrial systole
(B) Isovolumetric ventricular contraction
(C) Rapid ventricular ejection
(D) Reduced ventricular ejection
(E) Isovolumetric ventricular relaxation
(F) Rapid ventricular filling
(G) Reduced ventricular filling

98. Which of the following actions occurs when light strikes a photoreceptor cell of the retina?
(A) Transducin is inhibited
(B) The photoreceptor depolarizes
(C) Cyclic guanosine monophosphate (cGMP) levels in the cell decrease
(D) All-trans retinal is converted to 11-cis retinal
(E) Increased release of an excitatory neurotransmitter

99. Which step in the biosynthetic pathway for thyroid hormones produces thyroxine (T4)?
(A) Iodide (I−) pump
(B) I− → I2
(C) I2 + tyrosine
(D) Diiodotyrosine (DIT) + DIT
(E) DIT + monoiodotyrosine (MIT)

Answers and Explanations

1. The answer is D [Chapter 2, I C; Table 2.2]. Increased circulating levels of epinephrine from the adrenal medullary tumor stimulate both α-adrenergic and β-adrenergic receptors. Thus, heart rate and contractility are increased and, as a result, cardiac output is increased. Total peripheral resistance (TPR) is increased because of arteriolar vasoconstriction, which leads to decreased blood flow to the cutaneous circulation and causes cold, clammy skin. Together, the increases in cardiac output and TPR increase arterial blood pressure. 3-Methoxy-4-hydroxymandelic acid (VMA) is a metabolite of both norepinephrine and epinephrine; increased VMA excretion occurs in pheochromocytomas.
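The hemodynamic reasoning in this explanation can be compressed into the standard relation for mean arterial pressure (a summary note, not part of the original explanation):

\[
P_a = \text{cardiac output} \times \text{TPR}
\]

Epinephrine raises both factors (cardiac output via heart rate and contractility, TPR via arteriolar vasoconstriction), so arterial pressure must rise.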
2. The answer is D [Chapter 2, I; Table 2.3]. Treatment is directed at blocking both the α-stimulatory and β-stimulatory effects of catecholamines. Phentolamine is an α-blocking agent; propranolol is a β-blocking agent. Isoproterenol is a β1 and β2 agonist. Phenylephrine is an α1 agonist.

3. The answer is C [Chapter 7, I D; X E 2]. The effect of estrogen on the secretion of follicle-stimulating hormone (FSH) and luteinizing hormone (LH) by the anterior lobe of the pituitary gland at midcycle is one of the few examples of positive feedback in physiologic systems—increasing estrogen levels at midcycle cause increased secretion of FSH and LH. The other options illustrate negative feedback. Decreased arterial PO2 causes an increase in breathing rate (via peripheral chemoreceptors). Increased blood glucose stimulates insulin secretion. Decreased blood [Ca2+] causes an increase in parathyroid hormone (PTH) secretion. Decreased blood pressure decreases the firing rate of carotid sinus nerves (via the baroreceptors) and ultimately increases sympathetic outflow to the heart and blood vessels to return blood pressure to normal.

4. The answer is B [Chapter 3, IV F 3 a; Figures 3.8 and 3.12]. A downward shift of the cardiac output curve is consistent with decreased myocardial contractility (negative inotropism); for any right atrial pressure or end-diastolic volume, the force of contraction is decreased. Digitalis, a positive inotropic agent, would produce an upward shift of the cardiac output curve. Changes in blood volume alter the venous return curve rather than the cardiac output curve. Changes in total peripheral resistance (TPR) alter both the cardiac output and venous return curves.

5. The answer is A [Chapter 4, IV A 2, C; Figure 4.7]. Because fetal hemoglobin (HbF) has a greater affinity for O2 than does adult hemoglobin, the O2–hemoglobin dissociation curve would shift to the left. Carbon monoxide poisoning would cause a shift to the left, but would also cause a decrease in total O2-carrying capacity (decreased percent saturation) because CO occupies O2-binding sites. Decreased pH, increased temperature, and increased 2,3-diphosphoglycerate (DPG) all would shift the curve to the right.

6. The answer is A [Chapter 4, IV C 2]. A shift to the left of the O2–hemoglobin dissociation curve represents an increased affinity of hemoglobin for O2. Accordingly, at any given level of PO2, the percent saturation is increased, the P50 is decreased (read the PO2 at 50% saturation), and the ability to unload O2 to the tissues is impaired (because of the higher affinity of hemoglobin for O2). The O2-carrying capacity is determined by hemoglobin concentration and is unaffected by the shift from curve A to curve B.

7. The answer is B [Chapter 5, VII D; Table 5.6]. A person with a negative free-water clearance (C_H2O) would, by definition, be producing urine that is hyperosmotic to blood (C_H2O = V − C_osm). After overnight water restriction, serum osmolarity increases. This increase, via hypothalamic osmoreceptors, stimulates the release of antidiuretic hormone (ADH) from the posterior lobe of the pituitary. This ADH circulates to the collecting ducts of the kidney and causes reabsorption of water, which results in the production of hyperosmotic urine. Drinking large amounts of water inhibits the secretion of ADH and causes excretion of dilute urine and a positive C_H2O. Lithium causes nephrogenic diabetes insipidus by blocking the response of the collecting duct cells to ADH, resulting in dilute urine and a positive C_H2O. In option D, the calculated value of C_H2O is zero. In option E, the calculated value of C_H2O is positive.
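As a worked check of options D and E, using the standard free-water clearance relations quoted above:

\[
C_{\mathrm{H_2O}} = V - C_{\mathrm{osm}} = V - \frac{U_{\mathrm{osm}} \times V}{P_{\mathrm{osm}}}
= 5 - \frac{295 \times 5}{295} = 0\ \mathrm{mL/min} \quad (\text{option D})
\]

so option D is isosthenuric, not a negative C_H2O; in option E, U_osm (90 mOsm/L) is well below P_osm (310 mOsm/L), so C_H2O must be positive.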
8. The answer is C [Chapter 4, V B; Figure 4.9]. CO2 generated in the tissues enters venous blood and, in the red blood cells (RBCs), combines with H2O in the presence of carbonic anhydrase to form H2CO3. H2CO3 dissociates into H+ and HCO3−. The H+ remains in the RBCs to be buffered by deoxyhemoglobin, and the HCO3− moves into plasma in exchange for Cl−. Thus, CO2 is carried in venous blood to the lungs as HCO3−. In the lungs, the reactions occur in reverse: CO2 is regenerated and expired.

9. The answer is D [Chapter 7, X E 2]. Menses occurs 14 days after ovulation, regardless of cycle length. Therefore, in a 35-day menstrual cycle, ovulation occurs on day 21. Ovulation occurs at the midpoint of the menstrual cycle only if the cycle length is 28 days.
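Because the luteal phase is the fixed 14-day interval, the same rule can be written as simple arithmetic:

\[
\text{ovulation day} = \text{cycle length} - 14 = 35 - 14 = 21
\]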
10. The answer is C [Chapter 7, X A]. Testosterone is synthesized from cholesterol in ovarian theca cells and diffuses to ovarian granulosa cells, where it is converted to estradiol by the action of aromatase. Follicle-stimulating hormone (FSH) stimulates the aromatase enzyme and increases the production of estradiol.

11. The answer is A [Chapter 6, IV A 2–4 a]. Saliva has a high [HCO3−] because the cells lining the salivary ducts secrete HCO3−. Because the ductal cells are relatively impermeable to water and because they reabsorb more solute (Na+ and Cl−) than they secrete (K+ and HCO3−), the saliva is rendered hypotonic. Vagal stimulation increases saliva production, so vagotomy (or atropine) inhibits it and produces dry mouth.

12. The answer is C [Chapter 5, VII D 3; Table 5.6]. The most likely explanation for this patient's polyuria is hypercalcemia. With severe hypercalcemia, Ca2+ accumulates in the inner medulla and papilla of the kidney and inhibits adenylate cyclase, blocking the effect of ADH on water permeability. Because ADH is ineffective, the urine cannot be concentrated and the patient excretes large volumes of dilute urine. His polydipsia is secondary to his polyuria and is caused by the increased serum osmolarity. Psychogenic water drinking would also cause polyuria, but the serum osmolarity would be lower than normal, not higher than normal.

13. The answer is A [Chapter 5, VI C]. Thiazide diuretics would be contraindicated in a patient with severe hypercalcemia because these drugs cause increased Ca2+ reabsorption in the renal distal tubule. On the other hand, loop diuretics inhibit Ca2+ and Na+ reabsorption and produce calciuresis. When given with fluid replacement, loop diuretics can effectively and rapidly lower the serum [Ca2+]. Calcitonin, mithramycin, and etidronate disodium inhibit bone resorption and, as a result, decrease serum [Ca2+].

14. The answer is B [Chapter 7; Table 7.2]. Oxytocin causes contraction of the myoepithelial cells of the breast by an inositol 1,4,5-triphosphate (IP3)–Ca2+ mechanism. Somatomedins (insulin-like growth factor [IGF]), like insulin, act on target cells by activating tyrosine kinase. Antidiuretic hormone (ADH) acts on the V2 receptors of the renal collecting duct by a cyclic adenosine monophosphate (cAMP) mechanism (although in vascular smooth muscle it acts on V1 receptors by an IP3 mechanism). Adrenocorticotropic hormone (ACTH) also acts via a cAMP mechanism. Thyroid hormone induces the synthesis of new protein (e.g., Na+,K+-adenosine triphosphatase [ATPase]) by a steroid hormone mechanism.

15. The answer is E [Chapter 1, VI B; VII B; Table 1.3]. The pharynx is skeletal muscle, and the small intestine is unitary smooth muscle. The difference between smooth and skeletal muscle is the mechanism by which Ca2+ initiates contraction. In smooth muscle, Ca2+ binds to calmodulin, and in skeletal muscle, Ca2+ binds to troponin C. Both types of muscle are excited to contract by action potentials. Slow waves are present in smooth muscle but not skeletal muscle. Both smooth and skeletal muscle require an increase in intracellular [Ca2+] as the important linkage between excitation (the action potential) and contraction, and both consume adenosine triphosphate (ATP) during contraction.

16. The answer is B [Chapter 5, IX D; Table 5.9]. The arterial blood values and physical findings are consistent with metabolic acidosis, hypokalemia, and orthostatic hypotension. Diarrhea is associated with the loss of HCO3− and K+ from the gastrointestinal (GI) tract, consistent with the laboratory values. Hypotension is consistent with extracellular fluid (ECF) volume contraction. Vomiting would cause metabolic alkalosis and hypokalemia. Treatment with loop or thiazide diuretics could cause volume contraction and hypokalemia, but would cause metabolic alkalosis rather than metabolic acidosis.

17. The answer is D [Chapter 6, V B 1 c]. Pepsinogen is secreted by the gastric chief cells and is activated to pepsin by the low pH of the stomach (created by secretion of HCl by the gastric parietal cells). Lipases are inactivated by low pH.

18. The answer is B [Chapter 5, II C 6; Table 5.3]. Glomerular filtration rate (GFR) is determined by the balance of Starling forces across the glomerular capillary wall. Constriction of the efferent arteriole increases the glomerular capillary hydrostatic pressure (because blood is restricted in leaving the glomerular capillary), thus favoring filtration. Constriction of the afferent arteriole would have the opposite effect and would reduce the glomerular capillary hydrostatic pressure. Constriction of the ureter would increase the hydrostatic pressure in the tubule and, therefore, oppose filtration. Increased plasma protein concentration would increase the glomerular capillary oncotic pressure and oppose filtration. Infusion of inulin is used to measure the GFR and does not alter the Starling forces.

19. The answer is B [Chapter 6, V C 1, 2]. First, fat absorption requires the breakdown of dietary lipids to fatty acids, monoglycerides, and cholesterol in the duodenum by pancreatic lipases. Second, fat absorption requires the presence of bile acids, which are secreted into the small intestine by the gallbladder. These bile acids form micelles around the products of lipid digestion and deliver them to the absorbing surface of the small intestinal cells. Because the bile acids are recirculated to the liver from the ileum, fat absorption must be complete before the chyme reaches the terminal ileum.

20. The answer is A [Chapter 7, III C 1 b]. Antidiuretic hormone (ADH) causes constriction of vascular smooth muscle by activating a V1 receptor that uses the inositol 1,4,5-triphosphate (IP3) and Ca2+ second messenger system. When hemorrhage or extracellular fluid (ECF) volume contraction occurs, ADH secretion by the posterior pituitary is stimulated via volume receptors. The resulting increase in ADH levels causes increased water reabsorption by the collecting ducts (V2 receptors) and vasoconstriction (V1 receptors) to help restore blood pressure.

21. The answer is A [Chapter 7, III B]. Normal menstrual cycles depend on the secretion of follicle-stimulating hormone (FSH) and luteinizing hormone (LH) from the anterior pituitary. Concentration of urine in response to water deprivation depends on the secretion of antidiuretic hormone (ADH) by the posterior pituitary. Catecholamines are secreted by the adrenal medulla in response to stress, but anterior pituitary hormones are not involved. Anterior pituitary hormones are not involved in the direct effect of glucose on the beta cells of the pancreas or in the direct effect of Ca2+ on the chief cells of the parathyroid gland.

22. The answer is B [Chapter 5, III B]. Curves X, Y, and Z show glucose filtration, glucose excretion, and glucose reabsorption, respectively. Below a plasma [glucose] of 200 mg/dL, the carriers for glucose reabsorption are unsaturated, so all of the filtered glucose can be reabsorbed, and none will be excreted in the urine.
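The curves follow from the standard renal titration relations:

\[
\text{filtered load} = \mathrm{GFR} \times P_{\text{glucose}}, \qquad
\text{excretion} = \text{filtration} - \text{reabsorption}
\]

Below the threshold, reabsorption equals the filtered load, so excretion is zero and the filtration (X) and reabsorption (Z) curves superimpose.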
23. The answer is D [Chapter 2, III C 1; Figure 2.9]. When the patellar tendon is stretched, the quadriceps muscle also stretches. This movement activates Ia afferent fibers of the muscle spindles, which are arranged in parallel formation in the muscle. These Ia afferent fibers form synapses on α-motoneurons in the spinal cord. In turn, the pool of α-motoneurons is activated and causes reflex contraction of the quadriceps muscle to return it to its resting length.

24. The answer is A [Chapter 2, VI C]. Streptococcus pyogenes causes increased production of interleukin-1 (IL-1) in macrophages. IL-1 acts on the anterior hypothalamus to increase the production of prostaglandins, which increase the hypothalamic set-point temperature. The hypothalamus then "reads" the core temperature as being lower than the new set-point temperature and activates various heat-generating mechanisms that increase body temperature (fever). These mechanisms include shivering and vasoconstriction of blood vessels in the skin.

25. The answer is C [Chapter 2, VI C 2]. By inhibiting cyclooxygenase, aspirin inhibits the production of prostaglandins and lowers the hypothalamic set-point temperature to its original value. After aspirin treatment, the hypothalamus "reads" the body temperature as being higher than the set-point temperature and activates heat-loss mechanisms, including sweating and vasodilation of skin blood vessels. This vasodilation shunts blood toward the surface skin. When heat is lost from the body by these mechanisms, body temperature is reduced.

26. The answer is D [Chapter 5, IX D 4; Table 5.9]. The blood values are consistent with acute respiratory alkalosis from hysterical hyperventilation. The tingling and numbness are symptoms of a reduction in serum ionized [Ca2+] that occurs secondary to alkalosis. Because of the reduction in [H+], fewer H+ ions will bind to negatively charged sites on plasma proteins, and more Ca2+ binds (decreasing the free ionized [Ca2+]).

27. The answer is C [Chapter 2, I C 1 d]. Albuterol is an adrenergic β2 agonist. When activated, the β2 receptors in the bronchioles produce bronchodilation.

28. The answer is E [Chapter 7, IX A; Figure 7.16]. Testosterone is converted to its active form, dihydrotestosterone, in some target tissues by the action of 5α-reductase.

29. The answer is B [Chapter 3, II C, D]. A decrease in radius causes an increase in resistance, as described by the Poiseuille relationship (resistance is inversely proportional to r⁴). Thus, if radius decreases twofold, the resistance will increase by 2⁴, or 16-fold.
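The factor of 16 follows directly from the Poiseuille relation; the same arithmetic applies to answer 46 below:

\[
R = \frac{8 \eta l}{\pi r^{4}} \;\Rightarrow\; r \to \frac{r}{2} \text{ gives } R \to 2^{4}R = 16R,
\qquad Q = \frac{\Delta P}{R} \to \frac{Q}{16}
\]

(at a constant pressure gradient ΔP).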
30. The answer is G [Chapter 3, V; Figure 3.15]. When heart rate increases, the time between ventricular contractions (for refilling of the ventricles with blood) decreases. Because most ventricular filling occurs during the "reduced" phase, this phase is the most compromised by an increase in heart rate.

31. The answer is D [Chapter 3, IX C; Table 3.6; Figure 3.21]. The blood loss that occurred in the accident caused a decrease in arterial blood pressure. The decrease in arterial pressure was detected by the baroreceptors in the carotid sinus and caused a decrease in the firing rate of the carotid sinus nerves. As a result of the baroreceptor response, sympathetic outflow to the heart and blood vessels increased, and parasympathetic outflow to the heart decreased. Together, these changes caused an increased heart rate, increased contractility, and increased total peripheral resistance (TPR) (in an attempt to restore the arterial blood pressure).

32. The answer is B [Chapter 3, IX C; Table 3.6; Figure 3.21; Chapter 5 IV C 3 b (1)]. The decreased blood volume causes decreased renal perfusion pressure, which initiates a cascade of events, including increased renin secretion, increased circulating angiotensin II, increased aldosterone secretion, increased Na+ reabsorption, and increased K+ secretion by the renal tubule.

33. The answer is B [Chapter 5, VII C; Table 5.6]. A history of head injury with production of dilute urine accompanied by elevated serum osmolarity suggests central diabetes insipidus. The response of the kidney to exogenous antidiuretic hormone (ADH) (1-deamino-8-d-arginine vasopressin [dDAVP]) eliminates nephrogenic diabetes insipidus as the cause of the concentrating defect.

34. The answer is D [Chapter 5, IV C 3 b (1); Table 5.11]. Spironolactone inhibits distal tubule Na+ reabsorption and K+ secretion by acting as an aldosterone antagonist.

35. The answer is B [Chapter 6, V E 1 c; Table 6.3]. Gastric parietal cells secrete intrinsic factor, which is required for the intestinal absorption of vitamin B12.

36. The answer is C [Chapter 3, VI C 4]. Atrial natriuretic peptide (ANP) is secreted by the atria in response to extracellular fluid volume expansion and subsequently acts on the kidney to cause increased excretion of Na+ and H2O.

37. The answer is A [Chapter 7, V A 2 b; Figure 7.11]. Angiotensin II increases production of aldosterone by stimulating aldosterone synthase, the enzyme that catalyzes the conversion of corticosterone to aldosterone.

38. The answer is E [Chapter 3, III B; Figures 3.4 and 3.5]. The action potential shown is characteristic of ventricular muscle, with a stable resting membrane potential and a long plateau phase of almost 300 ms. Action potentials in skeletal muscle cells are much shorter (only a few milliseconds). Smooth muscle action potentials would be superimposed on fluctuating baseline potentials (slow waves). Sinoatrial (SA) cells of the heart have spontaneous depolarization (pacemaker activity) rather than a stable resting potential. Atrial muscle cells of the heart have a much shorter plateau phase and a much shorter overall duration.
39. The answer is B [Chapter 3, III C 1 a]. Depolarization, as in phase 0, is caused by an inward current (defined as the movement of positive charge into the cell). The inward current during phase 0 of the ventricular muscle action potential is caused by opening of Na+ channels in the ventricular muscle cell membrane, movement of Na+ into the cell, and depolarization of the membrane potential toward the Na+ equilibrium potential (approximately +65 mV). In sinoatrial (SA) cells, phase 0 is caused by an inward Ca2+ current.

40. The answer is D [Chapter 3, III B 1 c]. Because the plateau phase is a period of stable membrane potential, by definition, the inward and outward currents are equal and balance each other. Phase 2 is the result of opening of Ca2+ channels and inward, not outward, Ca2+ current. In this phase, the cells are refractory to the initiation of another action potential. Phase 2 corresponds to the absolute refractory period, rather than the effective refractory period (which is longer than the plateau). As heart rate increases, the duration of the ventricular action potential decreases, primarily by decreasing the duration of phase 2.

41. The answer is E [Chapter 3, III A 4; Figure 3.3]. The action potential shown represents both depolarization and repolarization of a ventricular muscle cell. Therefore, on an electrocardiogram (ECG), it corresponds to the period of depolarization (beginning with the Q wave) through repolarization (completion of the T wave). That period is defined as the QT interval.

42. The answer is B [Chapter 7, IV A 2]. The oxidation of I− to I2 is catalyzed by peroxidase and inhibited by propylthiouracil, which can be used in the treatment of hyperthyroidism. Later steps in the pathway that are catalyzed by peroxidase and inhibited by propylthiouracil are iodination of tyrosine, coupling of diiodotyrosine (DIT) and DIT, and coupling of DIT and monoiodotyrosine (MIT).

43. The answer is A [Chapter 5, IX D 1; Table 5.9]. The blood values are consistent with metabolic acidosis, as would occur in diabetic ketoacidosis. Hyperventilation is the respiratory compensation for metabolic acidosis. Increased urinary excretion of NH4+ reflects the adaptive increase in NH3 synthesis that occurs in chronic acidosis. Patients with metabolic acidosis secondary to chronic renal failure would have reduced NH4+ excretion (because of diseased renal tissue).

44. The answer is A [Chapter 2, I C 1 a]. When adrenergic α1 receptors on the vascular smooth muscle are activated, they cause vasoconstriction and increased total peripheral resistance (TPR).

45. The answer is D [Chapter 7; Table 7.2]. Hormone receptors with tyrosine kinase activity include those for insulin and for insulin-like growth factors (IGF). The β subunits of the insulin receptor have tyrosine kinase activity and, when activated by insulin, the receptors autophosphorylate. The phosphorylated receptors then phosphorylate intracellular proteins; this process ultimately results in the physiologic actions of insulin.

46. The answer is A [Chapter 3, II C, D]. Blood flow through the artery is proportional to the pressure difference and inversely proportional to the resistance (Q = ΔP/R). Because resistance increased 16-fold when the radius decreased twofold, blood flow must decrease 16-fold.

47. The answer is A [Chapter 3, V; Figure 3.15]. The P wave represents electrical activation (depolarization) of the atria. Atrial contraction is always preceded by electrical activation.

48. The answer is E [Chapter 2, II A 4; Figure 2.2]. Receptor potentials in sensory receptors (such as the pacinian corpuscle) are not action potentials and therefore do not have the stereotypical size and shape or the all-or-none feature of the action potential. Instead, they are graded potentials that vary in size depending on the stimulus intensity. A hyperpolarizing receptor potential would take the membrane potential away from threshold and decrease the likelihood of action potential occurrence. A depolarizing receptor potential would bring the membrane potential toward threshold and increase the likelihood of action potential occurrence.

49. The answer is C [Chapter 4, VII C; Table 4.5]. In a person who is standing, both ventilation and perfusion are greater at the base of the lung than at the apex. However, because the regional differences for perfusion are greater than those for ventilation, the ventilation/perfusion (V/Q) ratio is higher at the apex than at the base. The pulmonary capillary Po2 therefore is higher at the apex than at the base because the higher V/Q ratio makes gas exchange more efficient.

50. The answer is E [Chapter 5, VII D 4]. A negative value for free-water clearance (C_H2O) means that "free water" (generated in the diluting segments of the thick ascending limb and early distal tubule) is reabsorbed by the collecting ducts. A negative C_H2O is consistent with high circulating levels of antidiuretic hormone (ADH). Because ADH levels are high at a time when the serum is very dilute, ADH has been secreted "inappropriately" by the lung tumor.

51. The answer is A [Chapter 5, VII C; Table 5.6]. End-organ resistance to antidiuretic hormone (ADH) is called nephrogenic diabetes insipidus. It may be caused by lithium intoxication (which inhibits the Gs protein in collecting duct cells) or by hypercalcemia (which inhibits adenylate cyclase). The result is inability to concentrate the urine, polyuria, and increased serum osmolarity (resulting from the loss of free water in the urine).

52. The answer is B [Chapter 5, IV C 3 a; VI C 2; Table 5.11]. Thiazide diuretics act on the early distal tubule (cortical diluting segment) to inhibit Na+ reabsorption. At the same site, they enhance Ca2+ reabsorption so that urinary excretion of Na+ is increased while urinary excretion of Ca2+ is decreased. K+ excretion is increased because the flow rate is increased at the site of distal tubular K+ secretion.

53. The answer is C [Chapter 5, IX D 3; Table 5.9]. The blood values are consistent with respiratory acidosis with renal compensation. The renal compensation involves increased reabsorption of HCO3− (associated with increased H+ secretion), which raises the serum [HCO3−].

54. The answer is B [Chapter 3, VII C]. The driving force is calculated from the Starling forces across the capillary wall. The net pressure = (Pc − Pi) − (πc − πi). Therefore, net pressure = (32 mm Hg − 2 mm Hg) − (27 mm Hg) = +3 mm Hg. Because the sign of the net pressure is positive, filtration is favored.
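Written out with all four Starling terms (πi = 0 by the assumption in the stem):

\[
\text{net pressure} = (P_c - P_i) - (\pi_c - \pi_i) = (32 - 2) - (27 - 0) = +3\ \mathrm{mm\ Hg}
\]

The positive sign indicates net filtration.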
55. The answer is B [Chapter 5, III D]. Glucose has the lowest renal clearance of the substances listed, because at normal blood concentrations, it is filtered and completely reabsorbed. Na+ is also extensively reabsorbed, and only a fraction of the filtered Na+ is excreted. K+ is reabsorbed but also secreted. Creatinine, once filtered, is not reabsorbed at all. Para-aminohippuric acid (PAH) is filtered and secreted; therefore, it has the highest renal clearance of the substances listed.
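These comparisons all rest on the standard clearance formula:

\[
C_x = \frac{U_x \times V}{P_x}
\]

For glucose at normal plasma concentrations, U_x = 0, so C_glucose = 0; for PAH, filtration plus secretion makes the numerator large relative to P_x, giving the highest clearance.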
56. The answer is D [Chapter 2, I C 2 b]. Atropine blocks cholinergic muscarinic receptors. Because saliva production is increased by stimulation of the parasympathetic nervous system, atropine treatment reduces saliva production and causes dry mouth.

57. The answer is C [Chapter 5, IV C 2]. Na+–K+–2Cl− cotransport is the mechanism in the luminal membrane of the thick ascending limb cells that is inhibited by loop diuretics such as furosemide. Other loop diuretics that inhibit this transporter are bumetanide and ethacrynic acid.

58. The answer is A [Chapter 3, VII C; Table 3.2]. Constriction of arterioles causes decreased capillary hydrostatic pressure and, as a result, decreased net pressure (Starling forces) across the capillary wall; filtration is reduced, as is the tendency for edema. Venous constriction and standing cause increased capillary hydrostatic pressure and tend to cause increased filtration and edema. Nephrotic syndrome results in the excretion of plasma proteins in the urine and a decrease in the oncotic pressure of capillary blood, which also leads to increased filtration and edema. Inflammation causes local edema by dilating arterioles.

59. The answer is E [Chapter 4, IX A, B; Chapter 5 IX D]. Chronic obstructive pulmonary disease (COPD) causes hypoventilation. Strenuous exercise increases the ventilation rate to provide additional oxygen to the exercising muscle. Ascent to high altitude and anemia cause hypoxemia, which subsequently causes hyperventilation by stimulating peripheral chemoreceptors. The respiratory compensation for diabetic ketoacidosis is hyperventilation.

60. The answer is C [Chapter 5, VII C]. Lithium inhibits the G protein that couples the antidiuretic hormone (ADH) receptor to adenylate cyclase. The result is inability to concentrate the urine. Because the defect is in the target tissue for ADH (nephrogenic diabetes insipidus), exogenous ADH administered by nasal spray will not correct it.

61. The answer is D [Chapter 7, V A 1; Figure 7.11]. 17,20-Lyase catalyzes the conversion of glucocorticoids to the androgenic compounds dehydroepiandrosterone and androstenedione. These androgenic compounds are the precursors of testosterone in both the adrenal cortex and the testicular Leydig cells.

62. The answer is F [Chapter 5, IX D 2; Table 5.9]. The blood values and history of vomiting are consistent with metabolic alkalosis. Hypoventilation is the respiratory compensation for metabolic alkalosis. Hypokalemia results from the loss of gastric K+ and from hyperaldosteronism (resulting in increased renal K+ secretion) secondary to volume contraction.

63. The answer is B [Chapter 6, II B 1; Chapter 7 III B 3 a (1), VI D]. The actions of somatostatin are diverse. It is secreted by the hypothalamus to inhibit the secretion of growth hormone by the anterior lobe of the pituitary. It is secreted by cells of the gastrointestinal (GI) tract to inhibit the secretion of the GI hormones. It is also secreted by the delta cells of the endocrine pancreas and, via paracrine mechanisms, inhibits the secretion of insulin and glucagon by the beta cells and alpha cells, respectively. Prolactin secretion is inhibited by a different hypothalamic hormone, dopamine.

64. The answer is A [Chapter 7, IX A; Figure 7.16]. Testosterone is converted to a more active form (dihydrotestosterone) in some target tissues. Triiodothyronine (T3) is the active form of thyroid hormone; reverse triiodothyronine (rT3) is an inactive alternative form of T3.
Angiotensin I is converted to its active form, angiotensin II, by the action of angiotensin-converting enzyme (ACE). Aldosterone is unchanged after it is secreted by the zona glomerulosa of the adrenal cortex.

65. The answer is E [Chapter 7, X F 2; Figure 7.20]. During the first trimester of pregnancy, the placenta produces human chorionic gonadotropin (HCG), which stimulates estrogen and progesterone production by the corpus luteum. Peak levels of HCG occur at about the 9th gestational week and then decline. At the time of the decline in HCG, the placenta assumes the responsibility for steroidogenesis for the remainder of the pregnancy.

66. The answer is E [Chapter 3, V; Figure 3.15]. The atria depolarize during the P wave and then repolarize. The ventricles depolarize during the QRS complex and then repolarize during the T wave. Thus, both the atria and the ventricles are fully repolarized at the completion of the T wave.

67. The answer is C [Chapter 3, V; Figure 3.15]. Aortic pressure is lowest just before the ventricles contract.

68. The answer is A [Chapter 5, III C]. Para-aminohippuric acid (PAH) is filtered across the glomerular capillaries and then secreted by the cells of the late proximal tubule. The sum of filtration plus secretion of PAH equals its excretion rate. Therefore, the smallest amount of PAH present in tubular fluid is found in the glomerular filtrate before the site of secretion.

69. The answer is E [Chapter 5, III C; IV A 2]. Creatinine is a glomerular marker with characteristics similar to inulin. The creatinine concentration in tubular fluid is an indicator of water reabsorption along the nephron. The creatinine concentration increases as water is reabsorbed. In a person who is deprived of water (antidiuresis), water is reabsorbed throughout the nephron, including the collecting ducts, and the creatinine concentration is greatest in the final urine.

70. The answer is A [Chapter 5, IX C 1 a]. HCO3− is filtered and then extensively reabsorbed in the early proximal tubule. Because this reabsorption exceeds that for H2O, the [HCO3−] of proximal tubular fluid decreases. Therefore, the highest [HCO3−] is found in the glomerular filtrate.

71. The answer is E [Chapter 5, V B]. K+ is filtered and then reabsorbed in the proximal tubule and loop of Henle. In a person on a diet that is very low in K+, the distal tubule continues to reabsorb K+ so that the amount of K+ present in tubular fluid is lowest in the final urine. If the person were on a high-K+ diet, then K+ would be secreted, not reabsorbed, in the distal tubule.

72. The answer is A [Chapter 5, II C 4 b]. In the glomerular filtrate, tubular fluid closely resembles plasma; its composition is virtually identical to that of plasma, except that it does not contain plasma proteins. These proteins cannot pass across the glomerular capillary because of their molecular size. Once the tubular fluid leaves Bowman space, it is extensively modified by the cells lining the tubule.

73. The answer is B [Chapter 5, IV C 1]. The proximal tubule reabsorbs about two-thirds of the glomerular filtrate isosmotically. Therefore, one-third of the glomerular filtrate remains at the end of the proximal tubule.

74. The answer is D [Chapter 5, VII B, C]. Under conditions of either water deprivation (antidiuresis) or water loading, the thick ascending limb of the loop of Henle performs its basic function of reabsorbing salt without water (owing to the water impermeability of this segment). Thus, fluid leaving the loop of Henle is dilute with respect to plasma, even when the final urine is more concentrated than plasma.

75. The answer is C [Chapter 3, III A]. Because there are no P waves associated with the bizarre QRS complex, activation could not have begun in the sinoatrial (SA) node. If the beat had originated in the atrioventricular (AV) node, the QRS complex would have had a "normal" shape because the ventricles would activate in their normal sequence. Therefore, the beat must have originated in the His–Purkinje system, and the bizarre shape of the QRS complex reflects an improper activation sequence of the ventricles. Ventricular muscle does not have pacemaker properties.

76. The answer is B [Chapter 3, III E; VI B]. V1 agonists simulate the vasoconstrictor effects of antidiuretic hormone (ADH). Because saralasin is an angiotensin II receptor antagonist, it blocks the vasoconstrictor action of angiotensin II. Spironolactone, an aldosterone antagonist, blocks the effects of aldosterone to increase distal tubule Na+ reabsorption and consequently reduces extracellular fluid (ECF) volume and blood pressure. Phenoxybenzamine, an α-blocking agent, inhibits the vasoconstrictor effect of α-adrenergic stimulation. Acetylcholine (ACh), via production of endothelium-derived relaxing factor (EDRF), causes vasodilation of vascular smooth muscle and reduces blood pressure.

77. The answer is D [Chapter 3, II E]. A decrease in the capacitance of the artery means that for a given volume of blood in the artery, the pressure will be increased. Thus, for a given stroke volume ejected into the artery, both the systolic pressure and pulse pressure will be greater.
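To a first approximation (a standard simplification, not part of the original explanation):

\[
C = \frac{\Delta V}{\Delta P} \;\Rightarrow\; \text{pulse pressure} \approx \frac{\text{stroke volume}}{C}
\]

so with stroke volume fixed, halving arterial capacitance roughly doubles the pulse pressure.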
78. The answer is B [Chapter 3, IX B; Table 3.5]. During moderate exercise, sympathetic outflow to the heart and blood vessels is increased. The sympathetic effects on the heart cause increased heart rate and contractility, and the increased contractility results in increased stroke volume. Pulse pressure increases as a result of the increased stroke volume. Venous return also increases because of muscular activity; this increased venous return further contributes to increased stroke volume by the Frank-Starling mechanism. Total peripheral resistance (TPR) might be expected to increase because of sympathetic stimulation of the blood vessels. However, the buildup of local metabolites in the exercising muscle causes local vasodilation, which overrides the sympathetic vasoconstrictor effect, thus decreasing TPR. Arterial Po2 does not decrease during moderate exercise, although O2 consumption increases.

79. The answer is B [Chapter 3, VI B]. Patients with essential hypertension have decreased renin secretion as a result of increased renal perfusion pressure. Patients with congestive heart failure and hemorrhagic shock have increased renin secretion because of reduced intravascular volume, which results in decreased renal perfusion pressure. Patients with aortic constriction above the renal arteries are hypertensive because decreased renal perfusion pressure causes increased renin secretion, followed by increased secretion of angiotensin II and aldosterone.

80. The answer is E [Chapter 7, IX A]. 5α-Reductase catalyzes the conversion of testosterone to dihydrotestosterone. Dihydrotestosterone is the active androgen in several male accessory sex tissues (e.g., prostate).

81. The answer is B [Chapter 3, V; Figure 3.15]. Because the ventricles are contracting during isovolumetric contraction, ventricular pressure increases. Because all of the valves are closed, the contraction is isovolumetric. No blood is ejected into the aorta until ventricular pressure increases enough to open the aortic valve.

82. The answer is D [Chapter 4, I A, B]. Residual volume is the volume present in the lungs after maximal expiration, or expiration of the vital capacity (VC). Therefore, residual volume is not included in the tidal volume (TV), VC, inspiratory reserve volume (IRV), or inspiratory capacity (IC). The functional residual capacity (FRC) is the volume remaining in the lungs after expiration of a normal TV and, therefore, includes the residual volume.
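The volume relations behind this answer (standard definitions):

\[
\mathrm{FRC} = \mathrm{ERV} + \mathrm{RV}, \qquad
\mathrm{TLC} = \mathrm{VC} + \mathrm{RV}, \qquad
\mathrm{VC} = \mathrm{IRV} + \mathrm{TV} + \mathrm{ERV}
\]

Of the offered choices, only FRC contains the residual volume.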
83. The answer is E [Chapter 5, IX D 1; Table 5.9]. The blood values are consistent with metabolic acidosis (calculated pH ≈ 7.35). Treatment with a carbonic anhydrase inhibitor causes metabolic acidosis because it increases HCO3− excretion.
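The pH quoted above comes from the standard Henderson–Hasselbalch relation:

\[
\mathrm{pH} = 6.1 + \log_{10}\frac{[\mathrm{HCO_3^-}]}{0.03 \times P_{\mathrm{CO_2}}}
= 6.1 + \log_{10}\frac{18}{0.03 \times 34} \approx 7.35
\]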
89. The answer is D [Chapter 3, VI D]. In a person with a left-to-right cardiac shunt, arterial blood from the left ventricle is mixed with venous blood in the right ventricle. Therefore, Po2 in pulmonary arterial blood is higher than normal, but systemic arterial blood would be expected to have a normal Po2 value of 100 mm Hg. During an asthmatic attack, Po2 is reduced because of increased resistance to airflow. At high altitude, arterial Po2 is reduced because the inspired air has reduced Po2. Persons with a right-to-left cardiac shunt have decreased arterial Po2 because blood is shunted from the right ventricle to the left ventricle without being oxygenated or “arterialized.” In pulmonary fibrosis, the diffusion of O2 across the alveolar membrane is decreased.

90. The answer is D [Chapter 1, II]. H+–K+ transport occurs via H+,K+-adenosine triphosphatase (ATPase) in the luminal membrane of gastric parietal cells, a primary active transport process that is energized directly by ATP. Na+–glucose and Na+–alanine transport are examples of cotransport (symport); they are secondary active transport processes that do not use ATP directly. Glucose uptake into muscle cells occurs via facilitated diffusion. Na+–Ca2+ exchange is an example of countertransport (antiport) and is a secondary active transport process.

91. The answer is B [Chapter 6, II A 1 c; IV B 4 a]. When the pH of the stomach contents is very low, secretion of gastrin by the G cells of the gastric antrum is inhibited. When gastrin secretion is inhibited, further gastric HCl secretion by the parietal cells is also inhibited. Pancreatic secretion is stimulated by low pH of the duodenal contents.

92. The answer is A [Chapter 6, II A 2 a]. Removal of the duodenum would remove the source of the gastrointestinal (GI) hormones cholecystokinin (CCK) and secretin. Because CCK stimulates contraction of the gallbladder (and, therefore, ejection of bile acids into the intestine), lipid absorption would be impaired. CCK also inhibits gastric emptying, so removing the duodenum should accelerate gastric emptying (i.e., decrease gastric emptying time).

93. The answer is A [Chapter 7, III C 1 b]. Antidiuretic hormone (ADH) not only produces increased water reabsorption in the renal collecting ducts (V2 receptors) but also causes constriction of vascular smooth muscle (V1 receptors).

94. The answer is B [Chapter 6, V A 2 b]. Monosaccharides (glucose, galactose, and fructose) are the absorbable forms of carbohydrates. Glucose and galactose are absorbed by Na+-dependent cotransport; fructose is absorbed by facilitated diffusion. Dipeptides and water-soluble vitamins are absorbed by cotransport in the duodenum, and bile acids are absorbed by Na+-dependent cotransport in the ileum (which recycles them to the liver). Cholesterol is absorbed from micelles by simple diffusion across the intestinal cell membrane.

95. The answer is E [Chapter 7, III B 3 a (1)]. Somatostatin is secreted by the hypothalamus and inhibits the secretion of growth hormone by the anterior pituitary. Notably, much of the feedback inhibition of growth hormone secretion occurs by stimulating the secretion of somatostatin (an inhibitory hormone). Both growth hormone and somatomedins stimulate the secretion of somatostatin by the hypothalamus.

96. The answer is B [Chapter 7, X A]. Aromatase catalyzes the conversion of testosterone to estradiol in the ovarian granulosa cells. Estradiol is required for the development of female secondary sex characteristics.
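The “normal Po2 value of 100 mm Hg” in answer 89 follows from the alveolar gas equation, using standard sea-level values (these are textbook defaults, not values from the question):

$$\text{P}_{\text{A}_{\text{O}_2}} = \text{P}_{\text{I}_{\text{O}_2}} - \frac{\text{P}_{\text{a}_{\text{CO}_2}}}{R} = 150 - \frac{40}{0.8} = 100 \text{ mm Hg}$$

where PIO2 = 0.21 × (760 − 47) ≈ 150 mm Hg and R, the respiratory exchange ratio, is ≈ 0.8. With normal lungs and no right-to-left shunt, systemic arterial Po2 approximates this alveolar value; at high altitude PIO2 falls, and in fibrosis the alveolar–arterial gradient widens, producing the hypoxemia described above.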
97. The answer is E [Chapter 3, V; Figure 3.15]. Closure of the aortic and pulmonic valves creates the second heart sound. The closure of these valves corresponds to the end of ventricular ejection and the beginning of ventricular relaxation.

98. The answer is C [Chapter 2, II C 4; Figure 2.5]. Light striking a photoreceptor cell causes the conversion of 11-cis retinal to all-trans retinal; activation of a G protein called transducin; activation of phosphodiesterase, which catalyzes the conversion of cyclic guanosine monophosphate (cGMP) to 5′-GMP so that cGMP levels decrease; closure of Na+ channels by the decreased cGMP levels; hyperpolarization of the photoreceptor; and decreased release of glutamate, an excitatory neurotransmitter.

99. The answer is D [Chapter 7, IV A 4]. The coupling of two molecules of diiodotyrosine (DIT) results in the formation of thyroxine (T4). The coupling of DIT to monoiodotyrosine (MIT) produces triiodothyronine (T3).
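The coupling steps in answer 99 can be written as a simple scheme (a standard summary of thyroid hormone synthesis):

$$\text{DIT} + \text{DIT} \longrightarrow \text{T}_4 \qquad\qquad \text{MIT} + \text{DIT} \longrightarrow \text{T}_3$$

Both couplings occur while the iodinated tyrosines are still attached to thyroglobulin; subsequent proteolysis of thyroglobulin releases free T4 and T3 into the circulation.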
transport, 126–131 alveolar gas and pulmonary capillary blood, 142Q, 146E hemoglobin in, 126 hemoglobin–O2 dissociation curve, 127, 127–128 changes in, 128, 128–129, 129 and hypoxemia, 129–130, 130t and hypoxia, 130–131, 130t, 131 O2 binding capacity, of hemoglobin, 126 Octreotide, 235 Odorant molecules, 47 Off-center, on-surround pattern, 43 Ohm’s law, 68 Oil/water partition coefficient, 3, 25Q, 30E Olfaction, 47, 60Q, 64E Olfactory bulb, 47 Olfactory nerve, 47 Olfactory pathways, 47 Olfactory receptor neurons, transduction in, 47 Olfactory receptor proteins, 47 Olfactory system, 47, 60Q, 64E Omeprazole, and gastric secretion, 211 On-center, off-surround pattern, 43 Oncotic pressure, 6 Bowman space, 154 capillary, 92, 92 glomerular, 154 interstitial fluid, 92 One-to-one synapses, 14 Opsin, 41 310 Index Optic chiasm, 41 lesion of, 41, 42, 60Q, 64E Optic nerve, 41 lesion of, 41, 42 Optic pathways, 41, 42 Optic tract, 41 lesion of, 41, 42 Optics, 40 Orad region, of stomach, 201 Orexigenic neurons, 199 Organ of Corti, 44, 45, 58Q, 62E auditory transduction by, 44, 45 Organic phosphates, as intracellular buffers, 143 Orthostatic hypotension after sympathectomy, 110E baroreceptor reflex and, 97 due to hypoaldosteronism, 181 Osmolarity, 4–5, 26Q, 30E of body fluids, 149 calculation of, 5, 25Q, 29E–30E plasma estimation of, 149 regulation of, 167, 168, 169 of urine, 184Q, 189E Osmole, ineffective, 6 Osmosis, 4–6, 6 Osmotic diarrhea, due to lactose intolerance, 214 Osmotic exchangers, 168 Osmotic gradient, corticopapillary, 167–168 Osmotic pressure, 5–6 effective, 6 Ossicles, 44 Osteomalacia, 219, 254 Outer ear, 44 Outer hair cells, 45 Outward current, 10, 27Q, 31E Oval window, 44 Ovary, regulation of, 258–259, 259t Overshoot, of action potential, 10 Ovulation, 260, 261 lactation and, 262 Oxygen (see O2) Oxyhemoglobin, 127 as intracellular buffer, 173 Oxytocin, 228t, 237 actions of, 228t, 238 regulation of secretion of, 238, 264Q, 265Q, 267E, 269E P P wave, 71, 86 absent, 102Q, 109E additional, 104Q, 111E P50, hemoglobin–O2 dissociation curve, 127, 127, 128, 140Q, 144E Pacemaker, cardiac, 73 in AV node, 102Q, 109E latent, 73 Pacemaker potential, in SA node, 105Q, 112E Pacinian corpuscles, 39t, 48 PAH (see Para-aminohippuric acid (PAH)) Pain fast, 39 flexor withdrawal reflex to, 50t, 51 referred, 40 slow, 37t, 39 Pancreas, endocrine, 248–251, 248t–250t Pancreatic cholera, 199 Pancreatic enzymes, 212 Pancreatic juice, 211 Pancreatic lipases, 217 Pancreatic proteases, 215–216 Pancreatic secretion, 204t, 211–212, 212, 222Q, 225E composition of, 211, 211 flow rates for, 211 formation of, 211–212, 212 inhibition of, 204t modification of, 211–212, 212 stimulation of, 204t, 212 Papillae, 47 Para-aminohippuric acid (PAH) clearance of, 152, 186Q, 191E–192E excretion of, 157 filtered load of, 157 renal blood flow, 186Q, 192E secretion of, 157 titration curve, 157, 157 transport maximum (Tm) curve for, 157, 157, 185Q, 190Q in tubular fluid, 188Q, 193E Paracrines, 196, 198 Parallel fibers, of cerebellar cortex, 53 Parallel resistance, 68 Paraplegia, 52 Parasympathetic effects, on heart rate and con-duction velocity, 75–76 Parasympathetic ganglia, 32 Parasympathetic nervous system of GI tract, 194 organization of, 32 Parasympathetic stimulation and airway resistance, 121 and myocardial contractility, 78 of saliva, 206, 206 Parathyroid adenoma, 253 Parathyroid hormone (PTH) actions of, 228t, 252–253 in Ca2+ reabsorption, 167, 191E, 253, 266Q, 270E in calcium regulation, 251t, 252, 252 and phosphate reabsorption, 
167 renal effects of, 173t, 187Q, 192E secretion of, 252 Parathyroid hormone-related peptide (PTH-rp), 254 Parietal cells, 202, 202t, 203, 207, 207, 207t H+ secretion by, 207, 207, 207t agents that stimulate and inhibit, 209 mechanism of, 208, 208–209 Parkinson disease, 16, 26Q, 31E Parotid glands, 204 Partial pressure(s) of carbon dioxide, 124, 124t Dalton’s law of, 124 of oxygen, 124, 125t Partial pressure differences, 124 Parturition, 262 Passive tension, 20 Patent ductus arteriosus, 133 PBS (Bowman space hydrostatic pressure), 154 PCO2 alveolar, 135 arterial, 138, 141Q, 145E and HCO3 − reabsorption, 175 on hemoglobin–O2 dissociation curve, 128 venous, 138 Pelvic nerve, 194 Pepsin, 215 Index 311 Pepsinogen, 207, 207, 207t Peptic ulcer disease, 210, 223Q, 226E Peptide hormone, synthesis of, 227 Perchlorate anions, 238 Perfusion-limited exchange, 125, 125t Perilymph, 44 Peripheral proteins, 1 Peripheral chemoreceptors, in control of ­ breathing, 136, 136t Peristalsis, 200 esophageal, 201 gastric, 201–202 large intestinal, 203 small intestinal, 202, 222Q, 225E Peristaltic contractions esophageal primary, 201 secondary, 201 in small intestine, 202 Peritubular capillaries, Starling forces in, 157, 157 Permeability of cell membrane, 3, 25Q, 30E of ion channels, 7 Pernicious anemia, 219 Peroxidase, 238, 239 PGC (glomerular capillary hydrostatic pressure), 154 pH and buffers, 174 calculation of, 174 and gastric secretion, 209–210 on hemoglobin–O2 dissociation curve, 128 urine acidic, 158 alkaline, 158 minimum, 176 of venous blood, 142Q, 146E Phasic contractions, in gastrointestinal motility, 199 Phasic receptors, 38 Phenotypic sex, 255, 256 Phenoxybenzamine, 35E–36E, 35t, 61Q Phenylalanine, and gastrin secretion, 197 Pheochromocytoma, 32 phenoxybenzamine for, 61Q, 64E–65E vanillylmandelic acid excretion with, 15, 32 Phosphate(s) as extracellular buffer, 173 as intracellular buffer, 173 renal regulation of, 167 PTH and, 253 as urinary buffer, 167 Phosphaturia, 167 Phospholamban, 77 Phospholipids, in cell membrane, 1 Phosphoric acid, 173, 176, 176 Photoisomerization, 41 Photoreception, 41–42, 43, 60Q, 63E Photoreceptors, 37, 58Q, 62E Physiologic dead space, 115–116, 135 Physiologic shunt, 124 PIF (prolactin-inhibiting factor), 16, 228t (see also Dopamine) Pink puffers, 123 Pinocytosis, 91 Pituitary gland, 233–238 anterior, 233 hormones of, 233–237, 234–236, 236t posterior, 233 hormones of, 237–238 and relationship with hypothalamus, 233 pK, of buffers, 176 Plasma, 147, 148t, 149 Plasma osmolarity estimation of, 149 regulation of, 167, 168, 169 sweating and, 186Q, 192E Plasma volume, 147 Plateau phase, of action potential, 73 Pneumotaxic center, 136 Pneumothorax, 119 PO2, 124, 125t alveolar, 138 arterial, 138 Poiseuille’s equation, 68, 121 Poiseuille’s law, 68, 121 Polydipsia, 172t POMC (pro-opiomelanocortin), 234, 243 Pontine reticulospinal tract, 52 Pontocerebellum, 52 Positive chronotropic effects, 75, 76 Positive cooperativity, 125 Positive dromotropic effects, 73 Positive feedback, for hormone secretion, 227 Positive inotropic agents, 77, 78 and cardiac output curve, 82, 83 Positive inotropism, 77–78, 78 Positive staircase, 77 Posterior pituitary gland, 233 hormones of, 237–238 Postextrasystolic potentiation, 77 Postganglionic neurons, 15, 32 Postrotatory nystagmus, 47, 59Q, 63E Postsynaptic cell membrane, 12, 25Q, 30E end plate potential in, 13 Postsynaptic potentials excitatory, 14 inhibitory, 14, 25Q, 30E Post-tetanic potentiation, 15 Posture, brain stem control of, 51–52 Potassium (see 
K+) Potentiation of gastric H+ secretion, 209 long-term, 15 postextrasystolic, 77 post-tetanic, 15 Power stroke, 18 PR interval, 71, 71–72, 102Q, 109E PR segment, 110E Prazosin, 35t, 58Q, 62E Precapillary sphincter, 91 Preganglionic neurons, 32 Pregnancy, 261–262 hormone levels during, 261 human chorionic gonadotropin in, 261 lactation suppression during, 264Q, 268E Pregnenolone, 243, 257 Preload, 19 ventricular, 78 and ventricular pressure–volume loop, 80, 80 Premotor cortex, 54 Preprohormone, 227 Prerenal azotemia, 153 Presbyopia, 40 Pressure profile, in blood vessels, 69–70 Presynaptic terminal, 12 Primary active transport, 2t, 3–4, 26Q, 31E Primary motor cortex, 54 312 Index Primordial follicle, 260 Principal cells in K+ regulation, 164–165, 165 in Na+ reabsorption, 162–163 in water regulation, 169–171 Progesterone actions of, 228t, 260 during menstrual cycle, 260, 260, 263Q, 267E during pregnancy, 262 synthesis of, 241, 243, 258, 259 Prohormone, 227 Proinsulin, 249 Prolactin, 228t, 236, 236–237, 236t, 262 Prolactin-inhibiting factor (PIF), 16, 228t Prolactinoma, 237, 264Q, 267E Pro-opiomelanocortin (POMC), 234, 243 Propagation, of action potential, 11–12, 12 Propranolol, 35t contraindication, 59Q, 62E mechanism of action of, 107Q, 114E Propylthiouracil, 238, 265Q, 269E Prostacyclin, in blood flow regulation, 95 Prostaglandins in blood flow regulation, 95–96 in fever, 57 and gastrin secretion, 209, 210 Protein hormones, synthesis of, 227 Protein kinase C, 230, 231 Protein(s) absorption of, 215t, 216, 216 in cell membrane, 1 digestion of, 215–216, 215t integral, 1, 7 as intracellular buffer, 173 metabolism, 219 peripheral, 1 Protein hormones, synthesis of, 227 Protein kinase C, 230 Proton (see H+) Proton pump, 4 Proximal tubular reabsorption, ECF volume and, 160–161 Proximal tubule(s) glomerulotubular balance in, 160, 161 isosmotic reabsorption in, 160 K+ reabsorption in, 163 Na+ reabsorption in, 160, 160 Na+–glucose cotransport in, 156 PAH secretion in, 157 reabsorption of filtered HCO3 −, 174, 175 TF/P ratios, 161, 161 in urine production, 168, 171 Pseudohypoparathyroidism, 253t, 254, 263Q, 267E PTH-rp (parathyroid hormone-related peptide), 254 Puberty, 258 Pulmonary artery pressure, 79 Pulmonary blood flow (Q) in different regions of lung, 132–133 distribution of, 132–133 during exercise, 138 gravitational forces and, 132, 139–140 regulation of, 133 Pulmonary circulation, 132–133 Pulmonary embolism, V/Q ratio in, 135, 140Q, 144E Pulmonary fibrosis diffusion-limited exchange during, 125 FEV1 in, 117 lung compliance in, 119 PaCO2 in, 141Q, 145E Pulmonary vascular resistance, 133 fetal, 133 Pulmonary vasoconstriction, in high altitudes, 138 Pulmonary wedge pressure, 71 Pulmonic valve, closure of, 87 Pulse pressure, 70, 70, 102Q, 105Q, 109E, 112E extrasystolic beat and, 102Q, 109E Purkinje cell layer, of cerebellar cortex, 53 Purkinje cells, 53 Purkinje system, action potentials of, 72–73 Pursed lips intrapleural pressure, 123 Pyramidal tracts, 51 Pyrogens, 57 Q QRS complex, 71, 72 QT interval, 71, 72 R Radiation, heat loss by, 56 Rapid eye movement (REM) sleep, 55 Rapidly adapting receptors, 38 RBCs (red blood cells), lysis of, 24Q, 29E RBF (renal blood flow), 152–153, 186Q, 191E Reabsorbed substance, transport maximum (Tm) curve for, 156, 156–157 Reabsorption, 155–158, 156, 157 of filtered HCO3 −, 174–175, 175 of glucose, 156 of Na+, 159 Reabsorption rate, calculation of, 155–156 Reactive hyperemia, 95 Rebound phenomenon, 53 Receptive field, 42–43 Receptive relaxation, of stomach, 201, 
221Q, 224E Receptive visual fields, 42–43 Receptor potential, 38, 38, 60Q, 64E Receptor tyrosine kinase, 231, 232, 248t, 249 dimer, 231 monomer, 231 Recruitment of motor units, 48 Rectosphincteric reflex, 203 Rectum, 203 Recurrent inhibition, 51 Red blood cells (RBCs), lysis of, 24Q, 29E 5α-Reductase in testosterone synthesis, 256, 257, 266Q, 270E 5α-Reductase inhibitors, 256 Referred pain, 40 Reflection coefficient, 6, 25Q, 29E–30E Reflexes, muscle, 50, 50–51, 50t Refractive errors, 40 Refractive power, 40 Refractory period(s), 11, 11 absolute, 11, 11, 25Q, 30E cardiac, 74, 74–75 cardiac, 74, 74–75 relative, 11, 11 Renal plasma flow, 152 Relative clearance, 157–158 Relative refractory period (RRP), 11, 11 cardiac, 75, 75 Relaxation, 34 REM (rapid eye movement) sleep, 55 Renal arterioles vasoconstriction of, 152 vasodilation of, 152 Index 313 Renal artery stenosis, 102Q, 107Q, 109E, 113E Renal blood flow (RBF), 152–153, 186Q, 191E Renal clearance, 151–152, 186Q, 191E Renal compensation for respiratory acidosis, 177t, 179–180, 186Q, 191E for respiratory alkalosis, 175, 177t, 180, 186Q, 191E Renal failure, chronic metabolic acidosis due to, 187Q, 192E and PTH, 253t, 254 and vitamin D, 265Q, 268E Renal perfusion pressure in arterial pressure regulation, 89 Renal physiology, 147–193 acid–base balance, 172–181, 175–176, 177t, 178t, 179–180, 180t body fluids, 147–151, 148, 148t, 150, 150t calcium regulation in, 167 with diarrhea, 183 and diuretics, 181t glomerular filtration rate in, 153–154, 154, 155t in hypoaldosteronism, 181–182 integrative examples of, 181–183, 182 K+ regulation in, 163–165, 163–167, 164t, 165t magnesium regulation in, 167 NaCl regulation in, 158–163, 159–162 phosphate regulation in, 167 reabsorption and secretion in, 155, 155–158, 156, 157 renal blood flow in, 152–153 renal clearance in, 151–152 renal hormones in, 172, 173t urea regulation in, 166–167 urine concentration and dilution in, 167–172, 168–170 disorders related to, 172t with vomiting, 182, 182–183 Renal plasma flow (RPF), 152, 186Q, 191E Renal regulation of calcium, 167 of K+, 163–166, 164, 165, 165t of magnesium, 167 of NaCl, 158–163, 159–162 of phosphate, 167 of urea, 166–167 Renal tubular acidosis (RTA), type 1, 178t Renal tubular acidosis (RTA), type 2, 178t Renal tubular acidosis (RTA), type 4, 177, 178t Renin, 89, 90, 244 Renin–angiotensin–aldosterone system in arterial pressure regulation, 89, 90 in hemorrhage, 100, 107Q, 113E Renshaw cells, 51 Repolarization of action potential, 10, 25Q, 29E of cardiac muscle, 72 Reproduction female, 258–262, 259t, 260, 261 male, 256–258, 257 Residual volume (RV), 115, 116 after maximal expiration, 140Q, 144E measurement of, 115, 116 Resistance airway, 121, 140Q, 144E arteriolar, 66, 99, 105Q, 111E and arterial pressure, 105Q, 111E exercise and, 99 parallel, 68 pulmonary vascular, 133 series, 69 vascular blood vessel radius and, 68, 102Q, 109E fetal, 133 in pulmonary circulation, 133 Respiratory acidosis, 177t, 178t, 179–180 acid–base map of, 179 causes of, 178t due to COPD, 187Q, 192E renal compensation, 175, 177t, 179–180 Respiratory alkalosis, 177t, 180–181 acid–base map of, 179 causes of, 178t in high altitude, 138 renal compensation, 180 respiratory compensation for, 186Q, 191E Respiratory compensation for metabolic acidosis, 177, 177t for metabolic alkalosis, 182, 186Q, 191E Respiratory compliance, 118, 118–119, 119 Respiratory distress syndrome, neonatal, 120, 139Q, 143E Respiratory effects, of thyroid hormone, 241 Respiratory physiology CO2 transport in, 131–132, 
132 control of breathing in, 135–137, 136t during exercise, 137–138, 137t gas exchange in, 124–125, 125t with high altitude, 138, 138t lung volumes and capacities in, 115–117, 116, 117 mechanics of breathing breathing cycle in, 122, 122–123 with lung diseases, 123, 123t muscles of expiration, 117–118 muscles of inspiration, 117 pressure, airflow, and resistance in, 120–121 respiratory compliance in, 118 surface tension of alveoli and surfactant, 119–120, 120 oxygen transport in, 126–131, 127–129, 130t, 131 pulmonary circulation in, 132–133 and ventilation/perfusion defects, 133–135, 134, 134t, 135 Resting membrane potential, 9 of cardiac muscle, 72 of skeletal muscle, 10, 11 Retching, 203 Reticulospinal tract medullary, 52 pontine, 52 Retina, layers of, 40, 40–41, 41t Retinal, 41 Retropulsion, in gastric mixing and digestion, 201 Reverse T3 (rT3), 239 Reynolds number, 69, 110E Rhodopsin, 41 Rickets, 219, 254 Right atrial pressure, 78, 78 and end-diastolic volume, 105Q, 112E Right hemisphere, in language, 55 Right-to-left shunts, 133, 139Q, 141Q, 143E, 145E as cause of hypoxemia, 130t Rigor, 18, 26Q, 31E Rods, 41, 41t, 58Q, 62E photoreception in, 41–42, 42, 43, 60Q, 63E 314 Index Rotation, vestibular system during, 46, 46–47, 60Q, 64E RPF (renal plasma flow), 152, 186Q, 191E RRP (relative refractory period), 11, 11 cardiac, 75, 75 rT3 (reverse T3), 239 RTA (renal tubular acidosis), type 1, 178t RTA (renal tubular acidosis), type 2, 178t RTA (renal tubular acidosis), type 4, 177, 178t Ruffini corpuscle, 39t Ryanodine receptor, 18 S S cells, 198, 212 SA (sinoatrial) node action potentials of, 73, 73–74 pacemaker potential in, 105Q, 112E Saccule, 45 Salicylic acid, 158, 173 as cause of metabolic acidosis, 178t as cause of respiratory alkalosis, 178t Saliva, 204–207 composition of, 204, 204t, 205, 222Q, 225E flow rates for, 205–206 formation of, 204–205, 205 functions of, 204 hypotonic, 205 inhibition of, 204t modification of, 205, 205 regulation of production of, 206, 206–207 stimulation of, 204t Salivary ducts, 205–206 Salivary glands, 204 Saltatory conduction, 12, 12 Sarcolemmal membrane, 17 Sarcomere, 17 myocardial, 76 length of, 78 Sarcoplasmic and endoplasmic reticulum Ca2+-ATPase (SERCA), 4 Sarcoplasmic reticulum (SR), 17, 18 myocardial, 77 Satiety, hypothalamic centers, 199 Saturation, in carrier-mediated transport, 3 Scala media, 44, 45 Scala tympani, 44, 45 Scala vestibuli, 44, 45 Schizophrenia, 16 Second heart sound, 87, 104Q, 111E splitting of, 87 Second messengers, 229–233, 229t, 230–233 Secondary active transport, 2t, 4, 4–5, 5, 26Q, 31E Second-order neurons, 38 Secreted substance, transport maximum (Tm) curve for, 157, 157 Secretin, 195, 196t, 197–198 actions of, 196t, 198 and pancreatic secretion, 212 stimulus for the release of, 196t, 198 Secretion of bile, 204t, 212–213 of electrolytes, 218 gastric (see Gastric secretion) of K+, 218 of PAH, 157 pancreatic, 204t, 211, 211–212, 222Q, 225E renal, 155 of water in intestine, 218 Secretion rate, calculation of, 155 Secretory diarrhea, 218 Segmentation contractions of large intestine, 203 of small intestine, 202 Seizures, Jacksonian, 54 Selectivity, of ion channels, 7 Semicircular canals, 44, 46, 60Q, 64E Semilunar valves, closure of, 87 Semipermeable membrane, 5, 6, 23Q, 28E Sensory aphasia, 55 Sensory homunculus, 39 Sensory pathways, 38 Sensory receptors, 38 adaptation, 38 sensory pathways, 38 types of, 37 Sensory systems, 36–48 audition as, 44, 44–45 olfaction as, 47 sensory receptors in, 36–38, 37t, 38 somatosensory, 39–40, 39t 
taste as, 47–48 vestibular, 45–47, 46 vision as, 40, 40–43, 41t, 42, 43 Sensory transducers, 37 Sensory transduction, 37–38, 38 SERCA (sarcoplasmic and endoplasmic reticulum Ca2+-ATPase), 4 Series resistance, 69 Serotonin, (5-hydroxytryptamine, 5-HT), 14, 16 in peristalsis, 202 Sertoli cells, 256, 264Q, 267E Set-point temperature, 56–57 Sex chromosomes, 255 Sexual differentiation, 255–256, 256 SGLT 1 (Na+–glucose cotransporter 1), 214 Shivering, 22E, 56, 60Q Short-term memory, 55 Shunt(s) left-to-right, 133, 103Q, 110E physiologic, 124 right-to-left, 133, 139Q, 141Q, 143E, 145E SIADH (see Syndrome of inappropriate ­ antidiuretic hormone (SIADH)) Signal peptides, 227 Simple cells, of visual cortex, 43, 62E Simple diffusion, 2–3, 2t, 25Q, 30E across capillary wall, 91 Single-unit smooth muscle, 21 Sinoatrial (SA) node action potentials of, 73, 73–74 pacemaker potential in, 105Q, 112E Sinusoids, 91 60-40-20 rule, 147 Size principle, 48 Skeletal muscle, 16–22 comparison of, 22, 22t excitation–contraction coupling in, 18, 19 temporal sequence of, 25Q–26Q, 30E exercise effect on, 96 length–tension and force–velocity relationships in, 19–20, 20 relaxation of, 18 structure of, 16–18, 17 Skin, regulation of circulation to, 96–97, 106Q, 113E Sleep, 55 Index 315 Sleep–wake cycles, 55 Slow pain, 37t, 39 Slow waves, 54 gastrointestinal, 199–200, 200, 201, 222Q, 225E Slowly adapting receptors, 38 Small intestinal motility, 202, 222Q, 225E Small intestine, lipid digestion in, 217 Smooth muscle, 21, 21–22, 22t Ca2+ binding, gastrointestinal muscle ­ contraction, 27Q, 31E comparison with, 22, 22t contraction, 27Q, 31E excitation–contraction coupling in, 21, 21–22, 23Q, 28E Sodium (see Na+) Sodium chloride (see NaCl) Solitary nucleus, in taste, 48 Solitary tract, in taste, 48 Somatomedins, 234, 235, 264Q, 268E Somatosensory cortex, 39 Somatosensory system, 39–40, 39t Somatostatin, 198, 228t, 251 and gastric acid secretion, 209, 210 and gastrin secretion, 197, 209, 210 and growth hormone secretion, 234, 235 Somatostatin analogs, 235 Somatotropin, 234, 235 Somatotropin release-inhibiting hormone (SRIF), 228t, 235 (see also Somatostatin) Sound encoding of, 45 frequency of, 44 intensity, 44 Sound waves, 44, 45 Spatial summation, 51 Spermatogenesis, 256, 258 Sphincter of Oddi, 213, 214 Spinal cord transection, effects of, 52, 61Q, 64E Spinal organization of motor systems, 51 Spinal shock, 52, 64E Spinocerebellum, 52 Spiral ganglion, 45 Spirometry, 116, 139Q, 143E Spironolactone, 163, 166, 181t, 184Q, 189E Splay, in glucose titration curve, 156–157 Sprue, tropical, 217 SR (sarcoplasmic reticulum), 17, 18 myocardial, 77 SREs (steroid-responsive elements), 232, 233 SRIF (somatotropin release-inhibiting hormone), 228t, 235 (see also Somatostatin) ST segment, 70, 71, 103Q, 110E Standing, cardiovascular responses to, 97, 97t, 98, 102Q, 109E Starling equation, 92, 92, 93, 110E, 154 Starling forces and glomerular filtration rate, 153–154, 154, 155t in peritubular capillary blood, 160–161 Steatorrhea, 217, 221Q, 224E Stercobilin, 219, 220 Stereocilia, 46, 46 Stereospecificity, of carrier-mediated transport, 3 Steroid(s) 18-carbon, 242 19-carbon, 241 21-carbon, 241 Steroid hormone(s) mechanisms of action, 229t, 232, 233 regulation of secretion, 243 synthesis of, 227 Steroid-responsive elements (SREs), 232, 233 Stimulus, 37, 38, 50t Stomach lipid digestion in, 216–217 receptive relaxation of, 201, 221Q, 224E structure of, 201 Stress, glucocorticoid response to, 245 Stretch reflex, 50, 50t, 59Q, 62E Striatum, 53, 54 lesions 
of, 54 Stroke volume afterload and, 80 in baroreceptor reflex, 89 defined, 84 end-diastolic volume and, 78, 79 extrasystolic beat and, 102Q, 109E gravitational forces and, 97 preload and, 80 and pulse pressure, 70 in ventricular pressure–volume loops, 79, 80 Stroke work, 84 Sublingual glands, 204 Submandibular glands, 204 Submucosal plexus, of GI tract, 194, 195, 195 Substance P , 39 Substantia nigra, 53, 54 lesions of, 54 Subthalamic nuclei, 53 lesions of, 54 Sucrase, 214, 215 Sucrose, digestion and absorption of, 222Q, 225E Sulfonylurea drugs, 249 Sulfuric acid, 173 Summation spatial, 14 at synapses, 14–15 temporal, 15 Supplementary motor cortex, 54 Suprachiasmatic nucleus, 55 Surface tension, of alveoli, 119–120, 120 Surfactant, 120, 120 Surround, of receptive field, 42 Swallowing, 200 Sweat glands, 36t effect of the autonomic nervous system on, 32, 33t in heat loss, 56 Sweating, water shifts between compartments due to, 150t, 151 Sympathectomy, orthostatic hypotension after, 103Q, 110E Sympathetic effects, on heart rate and conduction velocity, 76 Sympathetic ganglia, 32 Sympathetic innervation and blood flow to skeletal muscle, 96 to skin, 96–97 of vascular smooth muscle, 95 Sympathetic nervous system of GI tract, 195 in heat generation, 56 in heat loss, 56 organization of, 32, 58Q, 62E and renal blood flow, 152 316 Index Sympathetic stimulation and airway resistance, 121 and myocardial contractility, 77 renal effects of, 155t of saliva, 206, 206 Symport, 2t, 4, 4 Synapses input to, 14 many-to-one, 14 one-to-one, 14 summation at, 14–15 Synaptic cleft, 13 Synaptic transmission, 14–16, 15 Synaptic vesicles, 12 Syndrome of inappropriate antidiuretic hormone (SIADH), 151, 172 urine production in, 167 vs. water deprivation, 186Q, 191E water shifts between compartments due to, 150t, 151 Systole, 70 Systolic pressure, 70, 70, 102Q, 109E Systolic pressure curve, 79 T T (transverse) tubules, 16, 17 depolarization of, 18, 19 myocardial, 77 T wave, 70, 71 T3 (triiodothyronine) actions of, 228t, 240–241 regulation of secretion of, 240, 240 reverse, 239 synthesis of, 238–239, 239 T4 (l-thyroxine) actions of, 228t, 240–241 regulation of secretion of, 240, 240 synthesis of, 238–239, 239 Taste, 47–48, 60Q, 64E Taste buds, 47 Taste chemicals, 48 Taste pathways, 47–48 Taste receptor cells, 47 Taste transduction, 48 TBG (thyroxine-binding globulin), 239 TBW (total body water), 147, 148t measuring volume of, 147, 148t TEA (tetraethylammonium), 10 Tectorial membrane, 44, 45 Tectospinal tract, 52 Temperature, body core, 56–57 and hemoglobin–O2 dissociation curve, 128 hypothalamic set point for, 56–57 Temperature regulation, 56 and blood flow to skin, 96 Temperature sensors, 56 Temporal summation, 15, 51 Terminal cisternae, 17, 18 Testes, regulation of, 256, 257, 258 Testosterone actions of, 228t, 258 and male phenotype, 255, 256 synthesis of, 243, 256, 257, 266Q, 270E Tetanus, 18, 23Q, 28E Tetraethylammonium (TEA), 10 Tetralogy of Fallot, 133 TF/Px ratio, 158 along proximal tubule, 161, 161 TF/Pinulin ratio, 159 TF/Px/TF/Pinulin ratio, 159 TG (thyroglobulin), 238, 239, 239 Thalamus, in somatosensory system, 39 Theca cells, 258 Thiazide diuretics and Ca2+ reabsorption, 186Q, 191E for idiopathic hypercalciuria, 167 and K+ secretion, 164–166, 165, 165t major effects of, 181t mechanism of action of, 181t site of action of, 181t Thick ascending limb and Ca2+ reabsorption, 167 ion transport, 162, 162 and K+ reabsorption, 163 and Mg2+ reabsorption, 167 and Na2+ reabsorption, 159, 159 in urine production, 167, 
167–168 Thick filaments, 17, 17 Thin filaments, 17, 17–18 Thiocyanate, 238 Threshold, 10, 156 Thromboxane A2, in blood flow regulation, 96 Thyroglobulin (TG), 238, 239, 239 Thyroid deiodinase, 239 Thyroid gland pathophysiology of, 242t physiology of, 238–241, 239, 240 Thyroid hormones actions of, 240–241 in heat generation, 56 mechanism of actions of, 232, 263Q–264Q, 267E regulation of secretion of, 239–240, 240 synthesis of, 238–239, 239 Thyroid-stimulating hormone (TSH) actions of, 228t origin of, 228t, 264Q, 267E in regulation of secretion of thyroid hormone, 240, 240 structure of, 234 in synthesis of thyroid hormones, 238, 239 Thyroid-stimulating immunoglobulins, 240 Thyrotropin-releasing hormone (TRH) actions of, 228t and prolactin, 236, 236 in regulation of thyroid hormone secretion, 239, 240 l-Thyroxine (T4) actions of, 228t, 240–241 regulation of secretion of, 240, 240 synthesis of, 238–239, 239 Thyroxine-binding globulin (TBG), 239 Tidal volume (Vt), 115, 116, 141Q, 144E–145E Tight junctions, 1, 217 Titratable acid, 167, 173 H+ excretion as, 176, 176 Titration curves, 174, 175 glucose, 156, 156–157 PAH, 157, 157 TLC (total lung capacity), 117 Tm (transport maximum), 3 Tm (transport maximum) curve for reabsorbed substance, 156, 156–157 for secreted substance, 157, 157 Tonic contractions, in gastrointestinal motility, 19 Tonic receptors, 38 Tonotopic representation, 45 Index 317 Total body water (TBW), 147, 148t measuring volume of, 147, 148t Total lung capacity (TLC), 117 Total peripheral resistance (TPR) arteriolar pressure and, 109E and cardiac output and venous return curve, 82, 83 exercise effect on, 99, 104Q, 106Q, 111E, 113E Total tension, 20 TPR (see Total peripheral resistance (TPR)) Transducin, 41 Transferrin, 219 Transport across cell membranes, 2–5, 2t, 4, 5 active primary, 2t, 3–4, 26Q, 31E secondary, 2t, 4, 4–5, 5, 26Q, 30E carrier-mediated, 3 coupled, 4 Transport maximum (Tm), 3 Transport maximum (Tm) curve for reabsorbed substance, 156, 156–157 for secreted substance, 157, 157 Transverse (T) tubules depolarization of, 18, 19 Trauma, and blood flow to skin Trehalase, 214, 215 Tremor, intention, 53 TRH (thyrotropin-releasing hormone) actions of, 228t and prolactin, 236, 236 in regulation of thyroid hormone secretion, 239, 240 Tricuspid valve, closure of, 85 Triiodothyronine (T3) actions of, 228t, 240–241 regulation of secretion of, 240, 240 reverse, 239 synthesis of, 238–239, 239 Tripeptides, 216, 218 Tritiated water, as marker for TBW, 147 Tropical sprue, 217 Tropomyosin, 18 Troponin, 18, 23Q, 28E Troponin C, Ca2+-binding to, 18, 22t, 25Q–26Q, 30E Trypsin, 215 Trypsinogen, 215 Tryptophan, and gastrin secretion, 197 TSH (see Thyroid-stimulating hormone (TSH)) Tubular fluid (TF) alanine in, 188Q, 193E glucose in, 188Q, 193E inulin in, 188Q, 193E para-aminohippuric acid in, 188Q, 193E Tubular fluid/plasma (TF/P) ratio, 156, 156–157, 161 Na+ and osmolarity, 161 Tubuloglomerular feedback, 152 Twitch tension, 19 Tympanic membrane, 44 Type II alveolar cells, 120 Tyrosine kinase-associated receptor, 231–232, 232 U UDP (uridine diphosphate) glucuronyl ­ transferase, 219, 220 Ulcer(s) duodenal, 210, 222Q, 225E gastric, 210 peptic, 210, 223Q, 226E Ultrafiltration pressure, net, 153, 154 Undershoot, of action potential, 10 Unitary smooth muscle, 21 Unmyelinated axon, 11–12, 12 Upper esophageal sphincter, 200 Up-regulation, of hormone receptors, 229 Upstroke, of action potential, 10, 23Q, 25Q, 28E, 29E Urea, 166–167 glucagon and, 248 hypotonic, 24Q, 29E renal regulation, 166–167 
Urea recycling, in urine production, 166, 168 Uridine diphosphate (UDP) glucuronyl ­ transferase, 219, 220 Urinary buffers, 176 Urinary cyclic AMP , 253, 253t Urine concentrated (hyperosmotic), 167–170, 170 dilute (hyposmotic), 184Q, 189E isosthenuric, 171 osmolarity of, 186Q, 191E Urine pH acidic, 158 alkaline, 158 minimum, 176 Urobilin, 219, 220 Urobilinogen, 219, 220 UT1 transporter, 166 ADH effect, 166 role in urea recycling, 166 Utricle, 45 V Va (alveolar ventilation), 116, 123, 133 V1 receptors, 91, 237 V2 receptors, 91, 237 Vagal stimulation, of gastric H+ secretion, 208 Vagotomy, 221Q, 224E and H+ secretion, 208 Vagovagal reflexes, 194, 212 Vagus nerve, 194 Valsalva maneuver, 203 Vanillylmandelic acid (VMA), 15, 32 van’t Hoff’s law, 6, 6 Vasa recta, in urine production, 168 Vascular resistance, 68–69 blood vessel radius and, 68, 102Q, 109E Vascular smooth muscle, 21, 36t sympathetic innervation of, 98 Vasculature, components of, 66–67 Vasoactive intestinal peptide (VIP), 198–199 in esophageal motility, 201 and GI smooth muscle relaxation, 221Q, 224E Vasoconstriction, 95, 96 in baroreceptor reflex, 88 in hemorrhage, 100 hypoxic, 133, 139Q, 143E pulmonary, in high altitudes, 138 of renal arterioles, 152 Vasodilation, 94–96 of renal arterioles, 152 Vasodilator metabolites, 97 exercise and, 99 Vasomotor center in baroreceptor reflex, 88 chemoreceptors in, 90 Vasopressin, and arterial blood pressure, 91 VC (vital capacity), 116–117 measurement of, 139Q, 143E 318 Index Veins, 67 Venoconstriction in baroreceptor reflex, 88 Venous blood, pH of, 142Q, 146E Venous compliance and mean systemic pressure, 81, 81–82 and venous return curve, 83 Venous constriction, 95 Venous pooling, 97, 102Q, 109E Venous pressure, 71 and edema, 104Q, 111E Venous return and cardiac output, 79 diarrhea and, 107Q, 113E exercise and, 99 Venous return curve, 81, 81–82, 82 Ventilation alveolar, 116, 123, 133 minute, 116 positive pressure, pulmonary blood flow, 133 Ventilation rate, 116 Ventilation/perfusion (V/Q) defects, 133–135, 134, 134t, 135 as cause of hypoxemia, 130t Ventilation/perfusion (V/Q) ratio with airway obstruction, 134–135, 135 changes in, 134–135 defined, 133–134 in different parts of lung, 134, 134f, 134t during exercise, 134–135 in pulmonary embolism, 135, 140Q, 144E Ventral respiratory group, 136 Ventricles, length–tension relationship in, 78, 78–79 Ventricular action potential, 72, 72–73, 105Q, 112E Ventricular ejection, 80 rapid, 85 reduced, 85–87 Ventricular filling, 74, 80 rapid, 87 reduced, 87 Ventricular pressure–volume loop, 79, 79–80, 80 Ventricular volume, 86, 103Q, 109E Venules, 67 Vestibular organ, 45–46, 46 Vestibular system, 45–47, 46 Vestibular transduction, 46, 46 Vestibular–ocular reflexes, 46–47 Vestibule, of inner ear, 44 Vestibulocerebellum, 52 Vestibulospinal tract, lateral, 52 Vibrio cholerae, 218 VIP (vasoactive intestinal peptide), 198–199 in esophageal motility, 201 and GI smooth muscle relaxation, 221Q, 224E Vision, 40–43 layers of retina in, 40–41, 40, 41t optic pathways and lesions in, 41, 42 optics in, 40 photoreception in rods in, 41–42, 43 receptive visual fields in, 43 Visual cortex, receptive fields of, 43 Vital capacity (VC), 116–117 measurement of, 139Q, 143E Vitamin(s), absorption of, 215t, 219 Vitamin A, in photoreception, 41 Vitamin B12, absorption of, 215t, 219 Vitamin D, 254–255 actions of, 251t, 255 in calcium metabolism, 252 metabolism of, 251t, 254, 255 VMA (3-methoxy-4-hydroxymandelic acid), 15, 32 VMA (vanillylmandelic acid), 15, 32 Volatile acid, 172 
Voltage-gated channels, 7 Volume contraction alkalosis, 149 due to diarrhea, 149, 183 hyperosmotic, 150t, 151 in hypoaldosteronism, 181 hyposmotic, 150t, 151 isosmotic, 149, 150t due to vomiting, 182, 182 Volume expansion hyperosmotic, 149–150, 150t hyposmotic, 150t, 151 isosmotic, 149, 150t Volume of distribution, 148 Vomiting, 203 and gastric secretion, 208 metabolic alkalosis due to, 182, 182, 187Q, 193E W Water (H2O) absorption of, 218 distribution of, 147, 148, 148t secretion of, 218 shifts between compartments of, 149–151, 150, 150t total body measuring volume of, 147, 148t tritiated, 147 Water deprivation, 167–170, 168 and free-water clearance, 168 and H2O reabsorption, 185Q, 190E and TF/P osmolarity, 188Q, 193E Water intake, response to, 163, 165 Water-soluble substances, and cell membrane, 1 Weak acids, 155 Weak bases, 155 Wernicke area, 55 Wheal, 97 Wolff–Chaikoff effect, 238 Wolffian ducts, 255, 256 Z Z line, 17, 17 Zollinger–Ellison syndrome, 197, 210, 223Q, 226E Zona fasciculata, 241, 242, 243 Zona glomerulosa, 241, 242, 243, 264Q, 268E Zona reticularis, 241, 242, 243 Zonula occludens, 1
2431
https://www.englishchatterbox.com/chatterbot-ai/question/the-locus-of-midpoints-of-the-chord-of-the-circle-x2y225-which-pass-through-a-fixed-point-46-is-a-circle-the-radius-of-that-circle-is-c-sqrt13
Question

The locus of midpoints of the chords of the circle $x^2 + y^2 = 25$ which pass through the fixed point $(4, 6)$ is a circle. The radius of that circle is:

A. $\sqrt{52}$  B. $\sqrt{2}$  C. $\sqrt{13}$  D. $\sqrt{10}$

Answer

To determine the radius of the circle that is the locus of the midpoints of chords of $x^2 + y^2 = 25$ passing through the fixed point $(4, 6)$, follow these steps:

1. Define the midpoint: Let the midpoint of a chord be $(h, k)$.

2. Circle equation: The given circle can be written as $x^2 + y^2 - 25 = 0$.

3. Equation of the chord with a given midpoint: By the chord–midpoint property $T = S_1$, the chord of the circle with midpoint $(h, k)$ satisfies $hx + ky - 25 = h^2 + k^2 - 25$.

4. Apply the condition: Since the chord passes through the fixed point $(4, 6)$, substitute $x = 4$ and $y = 6$: $4h + 6k - 25 = h^2 + k^2 - 25$, which simplifies to $4h + 6k = h^2 + k^2$.

5. Locus determination: For the locus, replace $h$ with $x$ and $k$ with $y$: $x^2 + y^2 - 4x - 6y = 0$.

6. Convert to standard form: Completing the square gives $(x - 2)^2 + (y - 3)^2 = 13$, a circle with center $(2, 3)$ and radius $\sqrt{13}$.

Thus, the radius of the locus circle is $\sqrt{13}$, so option C is correct. As a quick geometric check: the midpoint $M$ of any such chord satisfies $OM \perp MP$, where $O = (0, 0)$ is the center and $P = (4, 6)$, so $M$ lies on the circle with diameter $OP$, whose radius is $\tfrac{1}{2}\lvert OP \rvert = \tfrac{1}{2}\sqrt{52} = \sqrt{13}$.
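As a numerical sanity check on the derivation above (not part of the original solution), the short Python sketch below samples lines through the fixed point $(4, 6)$, computes the midpoint of each resulting chord of $x^2 + y^2 = 25$, and confirms that every midpoint satisfies $(x - 2)^2 + (y - 3)^2 = 13$. The sampling step and the `1e-9` tolerance are arbitrary choices for illustration.

```python
import math

# Check: midpoints of chords of x^2 + y^2 = 25 through P = (4, 6)
# should satisfy (x - 2)^2 + (y - 3)^2 = 13.
P = (4.0, 6.0)

for deg in range(0, 180):
    t = math.radians(deg)
    dx, dy = math.cos(t), math.sin(t)        # direction of a line through P
    # Points on the line are (4 + s*dx, 6 + s*dy); substituting into the
    # circle gives the quadratic  s^2 + 2(P.d)s + (|P|^2 - 25) = 0.
    b = 2.0 * (P[0] * dx + P[1] * dy)
    c = P[0] ** 2 + P[1] ** 2 - 25.0
    disc = b * b - 4.0 * c
    if disc <= 0:
        continue                              # line misses (or is tangent to) the circle
    s_mid = -b / 2.0                          # average of the two roots
    mx, my = P[0] + s_mid * dx, P[1] + s_mid * dy   # chord midpoint
    assert abs((mx - 2) ** 2 + (my - 3) ** 2 - 13.0) < 1e-9

print("All sampled chord midpoints lie on (x - 2)^2 + (y - 3)^2 = 13.")
```

Note that because $(4, 6)$ lies outside the circle ($4^2 + 6^2 = 52 > 25$), only lines aimed sufficiently toward the origin actually cut a chord; the `continue` branch skips the directions that miss, and every midpoint that is produced lands on the locus circle.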
2432
https://www.uptodate.com/contents/endometriosis-medical-treatment-of-pelvic-pain
2433
https://www.ncbi.nlm.nih.gov/books/NBK519063/
IgA Pemphigus - StatPearls - NCBI Bookshelf

StatPearls [Internet]. Treasure Island (FL): StatPearls Publishing; 2025 Jan-.

IgA Pemphigus

Minira Aslanova (1); Siva Naga S. Yarrarapu (2); Hasnain A. Syed (3); Patrick M. Zito (4)

Author affiliations: 1. Hackensack UMC-Palisades; 2. University of Texas Health Science Center; 3. Sheikh Zayed Hospital, Lahore; 4. University of Miami, Miller School of Medicine

Last Update: May 1, 2024.

Continuing Education Activity

IgA pemphigus, an uncommon autoimmune blistering skin disorder, manifests as painful, itchy blisters that fill with neutrophils and evolve into pustules. This condition encompasses 2 clinically similar subtypes: subcorneal pustular dermatosis and intraepidermal neutrophilic dermatosis. Both subtypes exhibit distinct immunologic mechanisms targeting keratinocyte cell surface components. Diagnosis and subtype determination rely on histopathology and immunofluorescence findings. Despite typically presenting a less severe course than IgG-driven pemphigus, precise diagnosis and aggressive treatment with steroids and dapsone are imperative to avert relapses. Given the potential association with underlying malignancies or rheumatological and gastrointestinal disorders, a thorough physical examination and pertinent testing are essential for comprehensive patient evaluation. This activity reviews IgA pemphigus, its clinical nuances, and the latest evidence-based management strategies. In addition, the interplay between circulating IgA antibodies and specific keratinocyte cell surface autoantigens is discussed. By equipping healthcare professionals with valuable insights and tools, this activity aims to optimize patient care for this complex and rare condition, enhancing their competence in managing IgA pemphigus effectively.

Objectives:

- Identify the clinical presentation and characteristic features of IgA pemphigus, including painful and pruritic vesiculopustular eruptions and associated intraepidermal blistering.
- Differentiate IgA pemphigus from other blistering skin diseases based on clinical manifestations, histopathology, and immunofluorescence findings.
- Apply evidence-based treatment strategies for IgA pemphigus, including aggressive treatment with steroids and dapsone, to prevent recurrence and promote healing.
- Coordinate multidisciplinary care seamlessly, facilitating transitions between healthcare professionals and specialties to provide continuous and integrated management of IgA pemphigus throughout the patient's journey.
Introduction

Immunoglobulin A (IgA) pemphigus, a rare autoimmune blistering disease, manifests as painful and pruritic vesiculopustular eruptions. These eruptions occur due to circulating IgA antibodies targeting keratinocyte cell surface components involved in cell-to-cell adherence. Although associated with various malignancies and chronic conditions, its precise etiology remains unclear. IgA pemphigus comprises 2 distinct subtypes: subcorneal pustular dermatosis and intraepidermal neutrophilic dermatosis. The subcorneal pustular dermatosis subtype presents with intercellular IgA deposition against desmocollin-1 glycoprotein, primarily in the upper epidermis, whereas the intraepidermal neutrophilic dermatosis subtype exhibits autoantibodies targeting desmoglein members of the cadherin superfamily, predominantly in the lower epidermis. Although IgA pemphigus typically has a milder disease course compared to IgG-driven pemphigus, it necessitates careful diagnosis and aggressive treatment with steroids and dapsone to prevent recurrence.

Etiology

Antikeratinocyte cell surface autoantibodies are responsible for the disease process associated with IgA pemphigus, but the inciting mechanism is unknown. Research suggests the potential involvement of interleukin 5 (IL-5), a type 2 T-helper cytokine associated with stimulating IgA-class antibody production and γδ T-cell receptor–containing T cells, which are crucial for mucosal IgA production. Moreover, the IgA autoantibodies possess specific binding sites for the monocyte/granulocyte IgA-Fc receptor (CD89), which may allow neutrophils to accumulate intraepidermally and drive the blistering process. IgA pemphigus is associated with monoclonal IgA gammopathy and multiple myeloma. Experts are uncertain whether monoclonal gammopathy precedes or follows IgA pemphigus, but in most reported cases it is present at the time of diagnosis. Other associated diseases include HIV infection, Sjögren syndrome, rheumatoid arthritis, lung cancer, ulcerative colitis, peripheral T-cell lymphoma, chronic myeloid leukemia, and diffuse large B-cell lymphoma. Although the direct relationship between these various diseases and IgA pemphigus is still unclear, healthcare professionals should thoroughly evaluate patients presenting with IgA pemphigus for hematologic, gastrointestinal, rheumatological, and infectious disorders.

Epidemiology

IgA pemphigus stands out as one of the most uncommon autoimmune blistering diseases, with a limited understanding of its epidemiology, including age and race demographics. Although it can manifest at any age, with a reported range of 1 month to 94 years, it predominantly emerges in adulthood, with the average onset between the fourth and sixth decades. Notably, no distinct gender predilection is apparent in reported cases.

Pathophysiology

The pathogenesis of IgA pemphigus involves IgA autoantibodies that specifically target desmosomal and nondesmosomal keratinocyte cell surface components, notably desmoglein-1, desmoglein-3, and desmocollin-1. These molecules are integral to cell-to-cell adherence and belong to the cadherin superfamily of glycoproteins. In the subcorneal pustular dermatosis subtype, desmocollin-1 serves as the antigen.
In contrast, in the intraepidermal neutrophilic dermatosis subtype, interactions occur with both desmoglein-1 and desmoglein-3. The binding of IgA antibodies to the IgA-Fc receptor triggers an intense inflammatory response, leading to epidermal neutrophilic infiltration followed by the formation of blisters and pustules. Despite identification of the IgA antibody targets, the precise mechanistic cascade underlying the clinical presentation of IgA pemphigus remains elusive and necessitates further research.

Histopathology

Histological examination of IgA pemphigus reveals subcorneal blisters with massive neutrophilic infiltration and a mild loss of cohesion between keratinocytes. This histopathological analysis aids in distinguishing the 2 major subtypes of IgA pemphigus. The subcorneal pustular dermatosis subtype may reveal minimal subcorneal acantholysis and pustules with increased IgA autoantibody intensity on the upper surface of the epidermis. In contrast, the characteristic features of the intraepidermal neutrophilic dermatosis subtype are pustules deeper in the epidermis and inflammatory infiltrates, primarily in the entire or lower part of the epidermis (see Image. Intraepidermal Neutrophilic IgA Pemphigus). Although IgA is predominantly detected on direct immunofluorescence, occasionally IgG and complement component C3 may be detected to a much lesser extent. Unlike IgG pemphigus, acantholysis may be minimal or absent in IgA pemphigus.

History and Physical

When obtaining a patient's medical history, clinicians should ascertain the presence of mucosal involvement, as mucosal lesions can help distinguish between different subtypes of pemphigus disease. IgA pemphigus lacks mucosal involvement. The initial manifestation of IgA pemphigus involves a subacute onset of flaccid blisters on an erythematous base filled with clear fluid. These evolve into pustules that rapidly rupture, forming annular crusts. Patients often describe these plaques as both painful and pruritic. Although commonly observed in flexural areas such as the axilla and groin, the trunk and extremities are the most commonly affected areas. Patients typically experience symptoms limited to skin manifestations, with no systemic symptoms such as fever, malaise, headache, or weight loss.

Evaluation

The evaluation of suspected IgA pemphigus begins with a skin biopsy. Clinicians use histopathology and direct and indirect immunofluorescence techniques to establish the diagnosis. Clinicians should obtain a 4-mm lesional biopsy from the edge of an early lesion or erosion for hematoxylin and eosin staining and routine histopathological examination. For direct immunofluorescence, an additional perilesional skin biopsy should be taken from unaffected skin, situated 4 mm away from a vesicle or erosion. Lesional skin biopsies for direct immunofluorescence are more likely to give a false-negative result because the inflammatory response can destroy the immunoreactants. Histological examination reveals the hallmark intraepidermal neutrophilic infiltration and the additional classic histopathological changes (refer to the Histopathology section for more information). Because acantholysis is minimal in IgA pemphigus, direct immunofluorescence may be considered an early screening tool for diagnosing IgA pemphigus in patients with diffuse pustular eruptions. Direct immunofluorescence allows clinicians to detect the absence or presence of IgA autoantibodies on epidermal cell surfaces.
Indirect immunofluorescence detects circulating IgA autoantibodies by applying patient serum to monkey esophagus or another epithelial substrate; it is positive in approximately 50% of affected patients. Immunoblotting allows clinicians to document the specific skin antigen recognized by the patient's IgA autoantibodies, with a sensitivity of approximately 40% for revealing IgA reactivity to any autoantigen. Enzyme-linked immunosorbent assay can detect antibodies against specific desmosomal antigens in patients with IgA pemphigus but has a relatively low sensitivity of approximately 55%.

Treatment / Management

Due to the inflammatory nature of the disease, the primary treatment approach for IgA pemphigus involves oral and topical corticosteroids, with a suggested daily dose of 0.5 to 1 mg/kg. However, patients should be aware of the potential adverse effects associated with long-term steroid use, including osteoporosis, diabetes, cataracts, adrenal suppression, and infection. In contrast to IgG pemphigus, IgA pemphigus typically does not respond adequately to steroid therapy alone. Numerous studies demonstrate that combining dapsone with corticosteroids yields significantly better outcomes; dapsone works primarily by suppressing neutrophilic infiltration. Healthcare professionals should carefully monitor patients receiving dapsone for potential adverse effects such as hemolysis and methemoglobinemia.

Other medications reported to treat IgA pemphigus effectively include colchicine, retinoids, mycophenolate mofetil, and adalimumab. Many of these medications were tested based on their successful treatment of other forms of neutrophilic dermatoses and classic pemphigus. For example, adalimumab, a recombinant human immunoglobulin antibody, is believed to be therapeutic through its inhibition of tumor necrosis factor-α (TNF-α). TNF-α activates neutrophilic infiltration in the epidermis; thus, its inhibition may hinder further progression of IgA pemphigus. Rituximab, a monoclonal antibody targeting the B-lymphocyte antigen CD20, has also been used safely to treat patients with IgA pemphigus.

Clinicians should also consider providing proton-pump inhibitors and bisphosphonate therapy to prevent peptic ulceration and osteoporosis. Patients should receive counseling regarding weight-bearing exercise and adequate calcium and vitamin D intake. Patients should undergo a thorough physical examination and, when clinically indicated, appropriate testing for concurrent malignancies, inflammatory bowel disease, rheumatological diseases, and infectious diseases.

Differential Diagnosis

Due to the rarity of IgA pemphigus, clinicians should consider various differential diagnoses when managing and treating the condition. The clinical presentations of these conditions are remarkably similar, and careful investigation using histology and immunofluorescence may be necessary to differentiate them. The following are potential differential diagnoses for IgA pemphigus:

Bullous impetigo
Pustular psoriasis
Linear IgA bullous dermatosis
Dermatitis herpetiformis
Subcorneal pustular dermatosis (Sneddon-Wilkinson disease)
Pemphigus foliaceus

Classic subcorneal pustular dermatosis, also known as Sneddon-Wilkinson disease, is a chronic dermatosis characterized by sterile pustular lesions that erupt in cyclical patterns.
Similar to the subcorneal pustular dermatosis subtype of IgA pemphigus, the pustular lesions observed in Sneddon-Wilkinson disease coalesce in an annular pattern and eventually burst to form crusted plaques. The 2 conditions share the same distribution, favoring the groin, trunk, and axillae while sparing mucosal surfaces. Histological examination of Sneddon-Wilkinson disease demonstrates perivascular infiltration of neutrophils and mild spongiosis. However, in contrast to IgA pemphigus, direct immunofluorescence in Sneddon-Wilkinson disease is negative for IgA deposits against adhesion molecules such as desmocollin-1.

Pemphigus foliaceus is characterized by flaccid bullae, typically on the trunk, that eventually crust over, much like the lesions of IgA pemphigus. Generally considered a benign disease, pemphigus foliaceus responds well to topical and oral corticosteroids. Clinical differentiation between IgA pemphigus and pemphigus foliaceus is nearly impossible, so immunofluorescence is critical in diagnosis: direct immunofluorescence in pemphigus foliaceus demonstrates IgG autoantibodies against desmoglein-1, in contrast to the IgA deposits against desmocollin-1 found in IgA pemphigus. Proper histological and immunofluorescence diagnosis is therefore essential in differentiating the 2 conditions.

Prognosis

In contrast to classic pemphigus, IgA pemphigus typically manifests as a less severe and more localized condition. With appropriate treatment and regular monitoring, IgA pemphigus commonly resolves without scarring. Studies indicate that abruptly discontinuing oral steroids may lead to lesion recurrence; therefore, clinicians should gradually taper the dosage. In cases of IgA pemphigus associated with other conditions such as malignancies, gastrointestinal diseases, or monoclonal gammopathy, prognosis depends on the progression of the underlying condition.

Complications

Many potential complications of IgA pemphigus are due to the effects of long-term treatment. Secondary infection of the lesions and scarring are the primary disease-related complications. The following are potential complications of corticosteroid use:

Osteoporosis
Peptic ulcer disease
Adrenal insufficiency
Infection
Growth restriction in children
Weight gain
Anemia
Hypertension
Diabetes

Potential complications related to dapsone use include:

Methemoglobinemia
Hemolytic anemia
Neutropenia
Agranulocytosis
Hepatic failure
Drug rash with eosinophilia and systemic symptoms (DRESS) syndrome

Deterrence and Patient Education

IgA pemphigus is a rare autoimmune blistering disease that can cause painful and pruritic skin eruptions. These eruptions result from the body's immune system mistakenly attacking certain skin proteins. Unlike some other forms of pemphigus, IgA pemphigus usually presents as a milder condition with limited skin involvement. Affected patients are at risk of secondary infection of open wounds and of scarring. Pruritus is a prominent symptom; patients may scratch the lesions, increasing the risk of secondary infection. Patients should be aware that they must seek medical attention if they notice increased erythema, fevers, or purulent drainage. Although IgA pemphigus typically heals without scarring, patients must understand the medications used for treatment, including any potential adverse effects. Clinicians should stress the importance of continuing treatment and regularly attending scheduled follow-up appointments.
Abruptly stopping medications, especially oral steroids, can lead to recurrence of skin lesions and to adrenal insufficiency, so patients must understand the need for a gradual reduction in medication dosage. In addition, if concurrent medical conditions are present, such as malignancies or gastrointestinal diseases, effectively diagnosing and managing those conditions is essential for overall health and may affect the prognosis of IgA pemphigus.

Pearls and Other Issues

A newer classification of IgA pemphigus exists, describing 5 subtypes as follows:

Subcorneal pustular dermatosis type IgA pemphigus
Intraepidermal neutrophilic type IgA pemphigus
IgA-pemphigus vegetans
IgA-pemphigus vulgaris
Unclassified IgA pemphigus

Pemphigus vegetans is a rare variant of pemphigus vulgaris. Clinicians classify patients with vegetating lesions and histology similar to pemphigus vegetans, but with IgA antibodies, as having IgA pemphigus vegetans (IgA-PVeg). Those with desmoglein-1 or desmoglein-3 target antigens are diagnosed as having IgA pemphigus foliaceus (IgA-PF) or IgA pemphigus vulgaris (IgA-PV), respectively. The remainder, which do not meet these criteria, are diagnosed as unclassified or atypical IgA dermatosis.

Enhancing Healthcare Team Outcomes

IgA pemphigus is a rare autoimmune blistering disease characterized by painful and pruritic vesiculopustular eruptions resulting from IgA antibodies targeting keratinocyte cell surface components involved in cell-to-cell adherence. Two distinct forms of IgA pemphigus exist, clinically similar but with distinct autoantibody target proteins. Although associated with various malignancies and chronic conditions, its exact cause remains unclear. Although typically milder than IgG-driven pemphigus, IgA pemphigus requires aggressive treatment with steroids and dapsone to prevent recurrence. Collaborative healthcare teams should monitor patients for adverse effects and assess for concurrent conditions, ensuring tailored management. Physicians, advanced care practitioners, nurses, pharmacists, and all other healthcare professionals caring for patients with IgA pemphigus must work collaboratively, each contributing their unique expertise. Seamless interprofessional communication is crucial for prompt information sharing and care coordination, optimizing patient outcomes and quality of life. This coordination minimizes errors, reduces delays, and enhances patient safety, leading to improved team performance and patient-centered care that prioritizes immediate and long-term outcomes.

Figure: Intraepidermal Neutrophilic IgA Pemphigus. Hematoxylin and eosin staining of intraepidermal neutrophilic IgA pemphigus. Contributed by M Abdel-Halim Ibrahim, MD
2434
https://prowritingaid.com/grammar-vs-syntax
Grammar vs Syntax: What's the Difference? By Allison Bressmer, Professor and Freelance Writer

Grammar and syntax are not synonymous, but they are related. Grammar represents an entire set of rules for language, and syntax is one section of those rules. Think of syntax as a chapter in a grammar book.

Contents: What's the Difference Between Syntax and Grammar? Definition & Meaning of Grammar; Definition & Meaning of Syntax; What Is the Relationship Between Grammar and Syntax? Conclusion on Grammar vs Syntax

What's the Difference Between Syntax and Grammar? Syntax is a subdivision of grammar. Grammar comprises the entire system of rules for a language, including syntax. Syntax deals with the way that words are put together to form phrases, clauses, and sentences.

Definition & Meaning of Grammar Grammar is the set of rules a language follows to convey meaning. Grammar is a broad term that encompasses more specific areas of study, including:

Morphology: how words are formed and how adding or removing word parts can change the tense or part of speech. For example, "nation" is a noun; adding "al" to the end of the word changes the part of speech to an adjective.
Phonology: how the parts of language sound.
Semantics: what words or symbols mean.
Syntax: how words are put together to create phrases, clauses, or sentences.

Prescriptive and Descriptive Grammar There are two classifications of grammar: prescriptive and descriptive. Prescriptive grammar represents how the rules of a language should be used. There are thousands of grammar rules, and it is challenging to learn them all.

Descriptive grammar describes how the rules of language are actually applied (or not) by speakers. Descriptive grammar doesn't judge those non-standard practices; it simply describes them. For example, if a person says, "I don't believe nothing you say," the sentence is improperly constructed according to the rules of prescriptive grammar, which don't allow double negatives ("don't" and "nothing"). However, descriptive grammar recognizes that the sentence, even with the double negative, represents a common use of language. Despite the rule-break, those involved in the conversation will likely understand the speaker's meaning.

Definition & Meaning of Syntax Syntax is the part of grammar that focuses on how words are combined to form phrases and clauses and how those components are then arranged into meaningful sentences. There are four types of sentences: simple, compound, complex, and compound-complex. Each contains at least one subject, a verb, and a complete thought.

Simple sentence: consists of only one clause, with a subject and a predicate. This is the most basic type of sentence.
Compound sentence: composed of two independent clauses, connected by a semicolon or by a comma and a coordinating conjunction.
Complex sentence: has one independent clause and at least one dependent clause, often connected by subordinating conjunctions such as "since" and "because."
Compound-complex sentence: made up of a minimum of three clauses: two independent clauses and at least one dependent clause. This is the most sophisticated sentence type.

Good writers vary the structure of their sentences to add interest and tone to their work.
You can use ProWritingAid's Sentence Structure Report to recognize patterns in your sentence structure, get suggestions for adding variety, and avoid writing errors. The order of words, phrases, and clauses affects the meaning of your sentences. The sentences below contain the same words but have different meanings.

Sue needs to offer that apology sincerely.
Sue sincerely needs to offer that apology.

In the first example, "sincerely" describes how Sue should offer her apology. In the second, "sincerely" expresses how the speaker feels about what Sue must do: she really should offer that apology.

Different languages have different syntax rules. For example, in English and other Germanic languages, adjectives precede the noun they describe. In Romance languages, adjectives come after the noun they describe.

What Is the Relationship Between Grammar and Syntax? Syntax is one of the four parts of grammar, along with phonology, morphology, and semantics. Both grammar and syntax are essential for meaning in written language, and both sets of rules shape the tone of every written piece. Each rule can be broken for stylistic purposes, but this should be done carefully so as not to obstruct the meaning.

Conclusion on Grammar vs Syntax The way that words are ordered is just as important as the words themselves. Syntax is what defines that order. Grammar is a comprehensive set of rules for using language to convey meaning, and syntax is a crucial part of grammar, without which words would not make sense. All rules of syntax are grammar rules, but not all grammar rules are syntax rules.
2435
https://www.youtube.com/watch?v=UT8NdY7dAPE
4 1 Angles and Radian Measure Grey Nakayama 1210 subscribers 1 likes Description 260 views Posted: 4 Nov 2022 Transcript: hello there today's lesson is 4.1 angles and radian measure this lesson packs a lot of information but I talked to the Algebra 2 teacher before I did this and everything you have done on here except for the last one so this should be review and we should be able to go through it very quickly so first off we're going to review the vocabulary of angles we're going to talk about degree measure radian measure converting back and forth between degrees and radians angles in standard position finding coterminal angles the length of circular arcs and the new thing will be linear and angular speed to describe motion on a circular path so to start off with and refresh your memory refresh you on what an angle is an angle is two rays that have a common endpoint that endpoint is called the vertex one Ray is called the initial side and the other is called the terminal side okay you have that geometry and talked about it again in algebra two so we talk about an angle being in standard position if the vertex is at the origin and it has an initial side on the positive x-axis so both of these the blue one the initial sides on the positive x axis it does not matter where the terminal side is so standard position vertex at the origin initial side on the positive x-axis okay now the other thing you need to remember is angles are positive if they rotate in a counterclockwise motion and they are negative if they rotate in a clockwise rotation so this angle here is positive and this angle here is negative so really what's important in and what you need to remember about this is that this uh arrow is very important because it's telling you the direction of the angle and whether the angle is positive or negative so another type of angle that we have is one called a quadrantal angle and that is an angle whose terminal side okay terminal side is on an axis so this is the terminal side of the angle and it's on any axis initial side still in standard position because the initial side is the positive x axis okay um again to refresh your memory we know that the whole a complete rotation of a circle is 360 degrees but we have an angle that is an acute angle that measures less than 90 a right angle measures 90. 
an obtuse angle measures more than 90 but less than 180 and a straight angle measures 180 degrees it's okay now I know that you've talked about radian measures you talked about it in geometry and again last year but the radian measure of an angle is just the arc divided by its radius okay so if you want to look at it in a formula set what you need to think of with radians um 360 degrees is all the way around the circles what we just talked about here okay radians allow us to keep our units smaller because the definition of a radian is it when let's say this when the length of the arc is the same as the radius of the circle then we say that the measure of that central angle is one radian so this particular angle here has a measure of one radian because the arc that it intercepts right here is the same length as the radius of the circle so you ought to stop and think how many radians does it take to go all the way around the circle well you're going to see it come up but before we go there we'll talk about some other things first a radian measure and we have to talk about what is called The Arc Length okay it is denoted by the letter s okay you have your radius and you have your angle okay Theta is defined as I just said is um excuse me defined as the Arc Length divided by the radius that was the the one I just told you that what one radian is so you have you need to remember that Theta is s over R and we'll have a variation of that later but a central angle Theta this is example one a central angle Theta in circle of radius 12 feet intercepts an arc of length 42 feet what is the radian measure of theta well I'm looking for the radian measure the radian measure is theta equals s over r so in this particular case Theta s in my problem is 42 and R is 12. so the radian measure of this angle is 3.5 radians now how do we convert between degrees and radians you're going to remember the basic relationship that Pi radians is equal to 180 degrees and one of the ways that I remember this is whatever you need to go to so to convert degrees to radians then I'm going to put radians on the top if I'm going 2 degrees then I'm going to put degrees on top if you think about y'all you know 12 inches is equal to one foot if I'm going converting two inches then I want 12 inches on top and one foot on the bottom if I'm going to feet then I'm going to multiply my one foot over 12 inches so it's a similar thing here are several examples two converting from degrees to radians so when I want to oops I didn't want to do that I go two radians then I'm going to take by 60 degrees I'm going to put it over 1 because I'm multiplying by a fraction and then I'm going to go 2 radians so I want pi on the top and 180 on the bottom and that is equal to pi over 3. if you have any trouble here y'all okay I mean I can talk to you about the fact that the zeros on the top cancel but most of y'all want to put that in your calculator just use the fraction button but 60 fraction bar 180 it will tell you one-third and then you put the pi back in the top that's how you use your calculator to do it okay the next 270 I might need to go 2 radians so radians is going to go on top Pi is going to go on top and again put 270 fraction bar 180 in your calculator it will tell you 3 over 2 but you got to put the pi back in so it's 3 pi over 2. 
now look at this one again negative 300 over 1 again times pi over 180 but the most important thing is a lot of people are going to forget that this negative carries all the way through the problem so the answer is negative 5 pi over 3. if you do not know how to use the fraction button on your calculator you need to come see me I can help you do that all right example three we're gonna go the other way and instead of going two radians this time we're going two degrees so I have pi over four I'm going two degrees so 180 is on top and Pi is on the bottom and I hope you all can see there that my pies are going to cancel out so again 180 divided by 4 is equal to 45 degrees um I will say that um radians the last examples that we did radians are the one unit label that you don't have to put but if you don't put a degree symbol then it's assumed to be radian so you've got to put the degree symbol when you're going two degrees all right negative 4 pi over 3 again I'm going 2 degrees so 180 goes on top Pi goes on bottom and again in your calculator do four you got two ways of doing this y'all you can do 4 times 180 divided by three or you can use the fraction bar but um you do have to give an exact answer and in this case it would be 240 degrees now were y'all paying attention that negative sign got left off of there okay so remember as I just said earlier if it starts off negative it your answer is going to be negative so you can't forget that all right look at this one 6 radians again six over one times 180 over Pi so when you do this one this would be one that you're going to round to whatever my lab math tells you to round to so 6 times 180 divided by pi I took it out one place that's 343.8 degrees okay let's talk about drawing angles in standard position okay when you do that okay all the way around is one full Revolution and that is 2 pi okay if you notice when we were talking about our um uh going back and forth our conversion factor okay we knew that pi was going to be the same thing as 180 degrees and 180 degrees is halfway around our Circle so a half of a revolution is pi radians a full Revolution is 2 pi radians three-fourths of a revolution is three pi over two and then one-fourth of revolution is pi over two so those numbers are going to be really important if you don't know them already I'm going to talk about them in the next lesson with our unit circle the other thing that you're going to be asked when you do these problems is you got to remember your quadrants so remember positive positive is quadrant one and then we go in a counterclockwise Direction so this is two quadrant three and quadrant four okay so the next examples are for uh angles in standard position and the directions say draw but I changed at the last minute because I wanted it to be like what you do in your homework and in your homework you're going to have multiple choice so what you have to remember and understand is what where pi over 4 is and in this particular example I'm using the first one here because it's easiest to read okay but you're gonna see these three tick marks and the one in the middle is halfway so this was pi over 2 which means the one in the middle here is going to be pi over 4. and what that one of the ways of thinking about that y'all is if Halfway Around is pi then that's 1 4. okay and this one down here is going to be pi over 6 and this one up here is pi over 3. 
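The conversions worked through in the transcript reduce to a single multiplication by pi/180 or 180/pi. As a quick check on the examples above, here is a minimal Python sketch (the function names are mine, not the video's):

import math

def deg_to_rad(degrees):
    # multiply by pi/180; the sign of the input carries through
    return degrees * math.pi / 180

def rad_to_deg(radians):
    # multiply by 180/pi
    return radians * 180 / math.pi

print(deg_to_rad(60))           # 1.047... = pi/3
print(deg_to_rad(-300))         # -5.235... = -5*pi/3, still negative
print(rad_to_deg(math.pi / 4))  # 45.0 degrees
print(rad_to_deg(6))            # 343.77..., which rounds to 343.8 degrees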
so this is dividing it I didn't do that very well dividing it into thirds dividing it and when I say thirds I'm talking about half of the circle if you divide it into thirds you would have one piece here and one piece there if you divide it into Force this is one-fourth two-fourths three-fourths and four-fourths but we're going to talk about that a lot tomorrow so what you want to remember is fours in the middle three is closer to the top six is closer to the bottom because that's a small piece and it always starts from the x axis so if I need negative pi over four negative is a clockwise Direction so if I need a clockwise Direction then um this is counterclockwise this is counterclockwise that's D is clockwise and a is clockwise so which one is negative pi over 4 well as we just said pi over 4 is the one in the middle and this is going to be that angle right here so the answer is a all right look at 3 pi over four so the pi over fours are in the middle all right it is a positive 3 pi over four okay well we just talked about this one being three pi over two positive that's a negative angle that's a negative angle so that wasn't very hard hopefully so one of the ways when you're doing it pay attention to whether it's positive or negative in your your choices are limited makes it easier to get the right answer all right negative seven pi over four I'm going negative that means I'm going clockwise so a is not negative that's a positive angle because it's going counterclockwise B is a negative angle C is not a negative angle and D is a negative angle so I have to look and see which one is seven pi over 4 between these two 7 pi over 4 y'all um you think about the one in the middle and you count so um if I start this was pi over 4 up here this one right here so n up here is 2 pi over 4 and 3 pi over 4 and 4 pi over 4 and 5 pi over 4 and 6 pi over 4 and 7 pi over four so seven pi over 4 is going to be almost all the way around the circle so the answer is going to be B this one is actually negative 3 pi over 4. all right last one of drawing an angle in standard position this one is 13 pi over 4. 
so notice these dots because this one goes around the circle more than once and I think the easiest way to do these problems is to divide 4 into 13 which your calculator will do for you but it goes three times with one left over which is like three at one fourth Pi or three plus actually I'm interested like that so three and one fourth Pi is like three pi plus pi over four so when you're doing it remember zero I'm going to start over here at zeros here okay Pi is here that and two Pi is here which means three Pi is over here because I'm going I'm going 0 Pi 2 pi 3 pi and then I have another fourth to go so it really should end up here and you know what I just realized we didn't talk about what quadrant each of those is in so um the one that's got to be the answer is this guy here because this goes all the way around that's 2 pi 3 pi and then to here so my answer is d this is in quadrant three like I said I just realized they're going to ask you what quadrant this is in this is in quadrant one this one is in quadrant two that's where it ends and this one is in quadrant four so you have to answer both of those questions when you're um doing these problems in your homework all right coterminal angles coternal angles are two angles that share a terminal they have the same initial and the same terminal sides okay that's what coterminals angle angles are they always differ by 360 degrees or two Pi radians so if you need to find coterminal angles you're just going to add or subtract 360 or add or subtract 2 pi depending on what the problem is so in this particular case this example find a positive angle that's not what I wanted positive angle less than 360. so I've got to get 400 that's coterminal sorry so I've got to get 400 to a number between 0 and 360. so I'm going to subtract 360 from it and my answer is going to be 40 degrees okay look at negative 135 this one I'm not going to subtract because if I subtract 360 I'm not going to be positive so I'm going to add 360 to this one and when you add 360 to that you get 225 degrees all right in a similar fashion example eight was degrees example nine we're doing the same thing but this time we're going to be using radians okay so I have 13 pi over 5. and y'all when you're dealing with this again you can use your calculator to do it but you just actually some of them the black one will allow you to keep your pie in there so you need to try this with your calculator or try it by hand and if you have any difficulty with it please put a starve on it and ask me about it in class okay 13 pi over 5 and I'm going to subtract 2 pi well 2 pi is the same thing as 10 pi over 5 because 5 goes into ten two times or 5 times 2 is 10. so that gives us an answer of 3 pi over 5. if I have negative pi over 15 okay I need it positive so this time I'm going to add 2 pi in terms of 15 so that's 2 times 15 or 30 pi over 15 which is 29 pi over 15. 
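Reducing an angle to a coterminal angle between 0 and 2*pi, and then reading off its quadrant, can be automated the same way the transcript does it by hand. A small Python sketch under the same conventions (angles in radians; an angle landing exactly on an axis is assigned to the next quadrant here, purely for simplicity):

import math

def coterminal(theta):
    # add or subtract whole revolutions (2*pi) until 0 <= theta < 2*pi
    return theta % (2 * math.pi)

def quadrant(theta):
    t = coterminal(theta)
    if t < math.pi / 2:
        return 1
    elif t < math.pi:
        return 2
    elif t < 3 * math.pi / 2:
        return 3
    else:
        return 4

print(coterminal(13 * math.pi / 5))  # 1.884... = 3*pi/5
print(coterminal(-math.pi / 15))     # 6.073... = 29*pi/15
print(quadrant(13 * math.pi / 4))    # 3, matching the example above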
again let me know if you need help with your calculator all right the length of a circular Arc so we already talked about y'all we talked about earlier that Theta was equal to S over r well if I need to solve that for S I'm going to multiply both sides by R so that's our next Formula the arc length is s equals R Theta so straight up a circle has a radius of 6 inches find the length of the arc intercepted by a central angle of 45 degrees Express The Arc and learn Link in terms of Pi oh well and then round your answer to okay then round your answer to two decimal places okay so our formula is s equals R Theta and so what is r r is 6 in this case that's the radius now whenever you use that formula I don't think I emphasized this but whenever you use this formula you always have to make sure your angle is in radian measure so in this particular one central angle of 45 degrees Yeah I can start with 45 degrees here but I have to change it to radians so if I have to go to radians then I'm going to multiply it by pi over 180. and I did not simplify that in terms of Pi which is what they told us to do so that would be 3 pi over 2. that would be 3 pi over 2 inches that's another thing about Arc Length y'all they do always want a label with it okay and then when you convert that to a decimal that's what you get all right okay we're on the home stretch definitions of linear and angular speed okay so the linear speed is defined as s over t s is our Arc Length remember R Theta okay that is the linear speed and then we have something called angular speed and angular speed is defined as Theta over t so lest you get confused by that okay remember s equals R Theta so another way of thinking about it is that the linear speed is the angular speed times the radius which is what that just says right there the radius times the angular speed okay so um if you have any difficulty with this y'all the best and easiest way to do it is to keep track of the labels and I've told you that in a lot of the word problems and applications keep track of the labels when you're doing this it will simplify things so long before I positive hold thousands of thousands of songs and play them with superb audio quality individual songs were delivered on 75 RPMs and 45 RPM circular records a 45 RPM record has an angular speed of 45 revolutions per minute find the linear speed in inches per minute at the point where the needle is 1.5 inches from the record Center so they've asked me for the linear speed and they gave me the angular speed so I'm going to use my formula of linear equals the angular times the radius and I'm going to go and watch this is one where I'm going to keep my labels in the whole time so first off my angular speed is 45 revolutions per minute so I'm going to keep it um in a vertical fashion it makes it much easier to see what labels cancel you cannot leave revolutions you can never leave Revolutions in your answer so if you remember earlier one revolution is the same thing as two Pi radians so that is my conversion factor that is going to allow me to get rid of the Revolutions in that problem so I'm trying to go from revolutions per minute to inches per minute all right so I got rid of my revolutions I have minutes on the bottom I need inches on top which that's my radius and my radius is right here 1.6 silly 1.5 inches from the record Center so that's where I'm going to get my inches and so when you multiply that out you get 424.12 inches per minute notice we did our conversions until I had inches on top and 
minutes on the bottom and that is all quick review thank you so much for watching if you have any questions please don't hesitate to ask me thank you so much for watching and y'all have a great day
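The last two formulas in the lesson, s = r*theta for arc length and v = r*omega for linear speed, can be checked numerically. A short Python sketch reproducing the two worked examples (45 degrees must first be converted to radians, and 45 rev/min to radians per minute):

import math

def arc_length(radius, theta_radians):
    # s = r * theta; theta must be in radians
    return radius * theta_radians

def linear_speed(radius, omega):
    # v = r * omega; omega in radians per unit time
    return radius * omega

# radius 6 in, central angle 45 degrees -> 3*pi/2 inches
print(arc_length(6, math.radians(45)))   # 4.712... = 3*pi/2

# 45 RPM record, needle 1.5 in from the center
omega = 45 * 2 * math.pi                 # radians per minute
print(linear_speed(1.5, omega))          # 424.115... inches per minute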
2436
https://artofproblemsolving.com/wiki/index.php/Complex_number?srsltid=AfmBOoomgGD0WQdNGR41erKgHQVTwNe5OInGIoTGCy52G5MozFA-y-XL
Complex number

The complex numbers arise when we try to solve equations such as $x^2 = -1$.

Contents: Derivation; Formal Definition; Parts; Operations (Examples); Alternate Forms; Topics; Problems (Introductory, Intermediate, Olympiad); See also

Derivation
We know (from the Trivial Inequality) that the square of a real number cannot be negative, so this equation has no solutions in the real numbers. However, it is possible to define a number, $i$, such that $i^2 = -1$. If we add this new number to the reals, we will have solutions to $x^2 = -1$. It turns out that in the system that results from this addition, we are not only able to find the solutions of $x^2 = -1$ but we can now find all solutions to every polynomial. (See the Fundamental Theorem of Algebra for more details.)

Formal Definition
We are now ready for a more formal definition. A complex number is a number of the form $a + bi$, where $a$ and $b$ are real numbers and $i$ is the imaginary unit. The set of complex numbers is denoted by $\mathbb{C}$. The set of complex numbers contains the set of the real numbers, since any real number $a$ can be written as $a + 0i$.

Parts
Every complex number $z$ has a real part, denoted $\Re(z)$ or $\mathrm{Re}(z)$, and an imaginary part, denoted $\Im(z)$ or $\mathrm{Im}(z)$. Note that the imaginary part of a complex number is real: for example, $\mathrm{Im}(3 + 4i) = 4$. So, if $z = a + bi$, we can write $z = \Re(z) + i\,\Im(z)$. ($z$ and $w$ are traditionally used in place of $x$ and $y$ as variables when dealing with complex numbers, while $a$ and $b$ (and frequently also $c$ and $d$) are used to represent real values such as the real and imaginary parts of complex numbers. This mathematical convention is often broken when it is inconvenient, so be sure that you know what set variables are taken from when dealing with the complex numbers.) As you can see, complex numbers enable us to remove the restriction of $x \geq 0$ from the domain of the function $f(x) = \sqrt{x}$ (although some additional considerations are necessary).

Operations
Addition and subtraction of complex numbers are similar to the same operations on polynomials: add the real parts, then add the imaginary parts. Multiplication is also similar to polynomial multiplication: use the distributive property and apply $i^2 = -1$. For division, however, the denominator needs to be a real number; this is achieved by multiplying by the complex conjugate, in which the sign of the imaginary part is swapped. The complex conjugate of $z$ is denoted by $\bar{z}$. The absolute value (or modulus or magnitude) of a complex number is the distance from the complex number to the origin, and is denoted by $|z|$.
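Python's built-in complex type follows exactly these rules, so the operations can be tried interactively. A brief sketch, with the values chosen here for illustration:

z = 3 + 4j
w = 1 - 2j

print(z + w)          # (4+2j): add real parts, add imaginary parts
print(z * w)          # (11-2j): distribute, then apply i^2 = -1
print(z / w)          # (-1+2j): multiply top and bottom by the conjugate of w
print(z.conjugate())  # (3-4j): the sign of the imaginary part is swapped
print(abs(z))         # 5.0: the distance from 3+4i to the origin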
The argument of a complex number $z$ is the angle formed between the line drawn from the complex number to the origin and the positive real axis on the complex coordinate plane, and is denoted by $\arg(z)$.

Alternate Forms
In addition to the standard form $a + bi$, complex numbers can be expressed in two other forms. The trigonometric form of a complex number is denoted by $r(\cos\theta + i\sin\theta)$, where $r$ equals the magnitude of the complex number and $\theta$ (in radians) is its argument. The exponential form of a complex number is denoted by $re^{i\theta}$, where $r$ equals the magnitude of the complex number and $\theta$ (in radians) is its argument.

Topics
Complex plane
De Moivre's Theorem
Exponential form
Roots of unity

Problems
Introductory: 2007 AMC 12A Problems/Problem 18
Intermediate: 1984 AIME Problem 8; 1985 AIME Problem 3; 1988 AIME Problem 11; 1989 AIME Problem 14; 1990 AIME Problem 10; 1992 AIME Problem 10; 1994 AIME Problem 8; 1994 AIME Problem 13; 1995 AIME Problem 5; 1996 AIME Problem 11; 1997 AIME Problem 11; 1997 AIME Problem 14; 1998 AIME Problem 13; 1999 AIME Problem 9; 2000 AIME II Problem 9; 2002 AIME I Problem 12; 2004 AIME I Problem 13; 2005 AIME II Problem 9; 2009 AIME I Problem 2; 2011 AIME II Problem 8
Olympiad

See also
Fundamental Theorem of Algebra
Trigonometry
Real numbers
Imaginary unit
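The magnitude and argument behind the trigonometric and exponential forms are available through Python's cmath module. A short sketch, with the value of z chosen here for illustration:

import cmath, math

z = 1 + 1j
r, theta = cmath.polar(z)   # magnitude and argument (in radians)
print(r, theta)             # 1.414... = sqrt(2) and 0.785... = pi/4

# rebuild z from its trigonometric and exponential forms
print(r * (math.cos(theta) + 1j * math.sin(theta)))  # (1+1j), up to rounding
print(r * cmath.exp(1j * theta))                     # same value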
2437
https://www.youtube.com/watch?v=0oHzBurIIpw
Fluid Mechanics: Topic 4.1 - Hydrostatic force on a plane surface CPPMechEngTutorials 158000 subscribers 2642 likes Description 241118 views Posted: 5 Mar 2016 Want to see more mechanical engineering instructional videos? Visit the Cal Poly Pomona Mechanical Engineering Department's video library, ME Online: 124 comments Transcript: In this lesson, we will derive the equation for calculating the magnitude of the hydrostatic force on a plane surface. Here we have a rectangular fish tank filled with water to a depth d. In order to determine whether the fish tank walls can support that much water without breaking, we would need to know the total force exerted on the walls. First we will determine the resultant force along the bottom wall. The tank is surrounded on all sides by the atmosphere, which has a pressure Pa. On the underside of the bottom wall, which is exposed to the atmosphere, the pressure force is Pa times the area of bottom wall, and this force points upward. Inside the tank, the pressure force increases as one descends through the water, reaching a maximum at the bottom of the tank. The absolute pressure on the upper part of bottom wall at depth d is the atmospheric pressure plus the specific weight of the water times the depth. This creates a downward force on the wall equal to the Pa plus gamma d, times the area of the bottom wall. If we examine just the fluid pressure forces acting along the bottom wall, the resultant force is Pa times the area of the bottom wall, minus the quantity Pa plus gamma d, times the area of the bottom wall. The atmospheric pressure terms cancel out and we are left with negative gamma times d times the area. This means the resultant force points downward with a magnitude of gamma times d times the area. Finding the resultant pressure force along the bottom wall was relatively easy because the pressure is constant along the entire wall. This is not the case for the side walls. The pressure along the exterior of the side walls is Pa and the related pressure force is Pa times the area of side wall. However, on the interior of the side walls, the pressure increases linearly with depth. At a depth h, the pressure is the atmospheric pressure plus gamma h. Multiplying by the area of the side walls gives the related pressure force. We want to determine the magnitude of the resultant force caused by the fluid pressure on the side walls. Here is a different tank. This tank is open to the atmosphere and partially filled with a liquid of specific weight gamma. We are going to examine a section of the flat wall, highlighted in pink, that has an arbitrary shape and which is oriented at an arbitrary angle theta relative to the free surface. We set up a coordinate system where the x-coordinate comes out of the screen and is oriented along the free surface. The y-coordinate is oriented along the section of wall we wish to examine. We will need to examine some important points on the side wall, so rotate the surface 90 degrees about the y-axis to get a better view. Point (x,y) is the location of some arbitrary small area dA on the wall. dA has an area dx times dy, and is at depth h from the free surface. Point C is the centroid of the wall and is located at coordinates (xC, yC). The depth of the centroid from the free surface is hC. The centroid is the geometric center of the wall, and the centroid and center of mass are the same if the density of the wall is constant. Point CP is the center of pressure, which is where the resultant force FR acts. 
The coordinates of the center of pressure are labeled (xR, yR). The resultant force and center of pressure location produce an equivalent force and moment on the wall as the original fluid pressure field. In order to find the resultant force over the entire highlighted surface, we will start by examining the force at the location of dA. The small pressure force on the exterior of the wall at area dA, labeled dFext, is the atmospheric pressure Pa times the area dA. The small pressure force on the interior of the wall at area dA, which is labeled dFin, is the pressure at dA times the area dA. The pressure at that depth is the atmospheric pressure plus gamma h. Using trigonometry, h can be rewritten as y times sin(theta). The net force due to fluid pressure at dA is dFinterior minus dFexterior. After plugging in the expressions for the two forces, and eliminating Pa times dA, we obtain gamma times y sin(theta) times dA. To find the resultant force of the fluid pressure on the entire wall, we integrate dFnet along all points of the wall. Plug in the expression for dFnet, and pull gamma sin(theta) out of the integral because they are constant. The integral of y dA can be simplified if we recall the definition of the centroid. The y-coordinate of the centroid, yC, is the integral of y dA, integrated over the entire surface, divided by the entire surface area A. We now have an expression for the resultant force FR. FR is equal to gamma times sin(theta) times yC times the area. yC times sin(theta) is equal to the depth of the centroid, hC, so we can rewrite the resultant force equation as gamma times hC times A. This form of the equation is often more convenient to use when solving fluid mechanics problems. Gamma hC is the pressure at the centroid, PC, so we can further rewrite the resultant force equation as the pressure at the centroid PC times the area. These equations are valid for any flat surface in which the free surface of the liquid and the side of the tank are exposed to the same pressure. In this case, they are both exposed to the atmospheric pressure. We are free to use either yC or hC to find the resultant force and it is important to understand the difference between the two quantities. hC is the vertical distance from the free surface to the centroid. yC is the distance from the free surface to the centroid as measured along the orientation of the wall. If the wall is oriented at an angle less than 90 degrees relative to the free surface, hC will be shorter than yC. If the wall is vertical and theta is 90 degrees, hC and yC will be the same distance. In both cases, the distance yR will be greater than yC. Although not proven in this video, if the tank is pressurized to a gage pressure of P0, the equation for the resultant force would need to be modified to (gamma sin(theta) yC plus P0) times the area A, which can be rewritten as (gamma hC plus P0) times A.
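The final result F_R = gamma * h_C * A is easy to apply numerically. Below is a minimal Python sketch; the tank dimensions are assumptions chosen for illustration, not values from the video:

def resultant_force(gamma, h_c, area):
    # F_R = gamma * h_C * A, valid when the free surface and the far side
    # of the wall are exposed to the same (e.g., atmospheric) pressure
    return gamma * h_c * area

gamma = 9810.0        # N/m^3, specific weight of water
depth = 3.0           # m, water depth against a vertical wall
width = 2.0           # m, wall width
h_c = depth / 2       # m, centroid depth of the wetted rectangle
area = depth * width  # m^2, wetted area

print(resultant_force(gamma, h_c, area))  # 88290.0 N, about 88.3 kN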
2438
https://www.varsitytutors.com/hotmath/hotmath_help/topics/systems-of-linear-inequalities
Understanding Systems of Linear Inequalities

Beginner explanation: Graph each inequality by drawing its boundary line (solid for $\leq$ or $\geq$, dashed for $<$ or $>$), then shade the half-plane that satisfies the inequality. The solution to the system is the overlapping shaded region.

Practice Problems

1. Quick Quiz (Beginner): Which of the following lines should be dashed for $y < 2x + 1$?

2. Real-World Problem (Intermediate): A teenager wants to save money and decides he can spend less than $10 per day. How can we express this as an inequality?

3. Thinking Challenge (Intermediate): On a coordinate plane, graph the line $y = -3x - 1$ as a solid boundary. Which region should be shaded to represent the solution of $y \geq -3x - 1$?

Challenge Quiz: Which system of inequalities represents the region shaded below the line $y = \frac{1}{3}x + 4$ (dashed boundary) and above the line $y = 5x + 1$ (solid boundary)?
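One way to reason about the challenge quiz is to test candidate points against the two inequalities directly, remembering that a dashed boundary means a strict inequality. A small Python sketch using the two lines named in the quiz (the test points are mine, for illustration):

def satisfies_system(x, y):
    below_dashed = y < (1 / 3) * x + 4  # dashed boundary: points on the line excluded
    above_solid = y >= 5 * x + 1        # solid boundary: points on the line included
    return below_dashed and above_solid

print(satisfies_system(0, 2))  # True: 2 < 4 and 2 >= 1
print(satisfies_system(0, 4))  # False: (0, 4) lies on the dashed line itself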
2439
https://www.intmath.com/counting-probability/12-binomial-probability-distributions.php
The Binomial Probability Distribution

12. The Binomial Probability Distribution

Later, on this page: mean and variance of a binomial distribution.

Notation: we use upper-case variables (like X and Z) to denote random variables, and lower-case letters (like x and z) to denote specific values of those variables.

A binomial experiment is one that possesses the following properties:

- The experiment consists of n repeated trials;
- Each trial results in an outcome that may be classified as a success or a failure (hence the name, binomial);
- The probability of a success, denoted by p, remains constant from trial to trial, and repeated trials are independent.

The number of successes X in n trials of a binomial experiment is called a binomial random variable. The probability distribution of the random variable X is called a binomial distribution, and is given by the formula:

P(X) = C_x^n p^x q^(n-x)

where

n = the number of trials
x = 0, 1, 2, ..., n
p = the probability of success in a single trial
q = the probability of failure in a single trial (i.e. q = 1 − p)
C_x^n is a combination

P(X) gives the probability of x successes in n binomial trials.

Mean and Variance of Binomial Distribution

If p is the probability of success and q is the probability of failure in a binomial trial, then the expected number of successes in n trials (i.e. the mean value of the binomial distribution) is

E(X) = μ = np

The variance of the binomial distribution is

V(X) = σ² = npq

Note: in a binomial distribution, only 2 parameters, namely n and p, are needed to determine the probability.

Example 1

A die is tossed 3 times. What is the probability of
(a) no fives turning up?
(b) 1 five?
(c) 3 fives?

Answer

This is a binomial distribution because there are only 2 possible outcomes (we get a 5 or we don't). Here, n = 3 for each part. Let X = number of fives appearing.

(a) Here, x = 0.
P(X = 0) = C_0^3 (1/6)^0 (5/6)^3 = 125/216 = 0.5787

(b) Here, x = 1.

P(X = 1) = C_1^3 (1/6)^1 (5/6)^2 = 75/216 = 0.34722

(c) Here, x = 3.

P(X = 3) = C_3^3 (1/6)^3 (5/6)^0 = 1/216 = 4.6296 × 10^-3

Example 2

Hospital records show that of patients suffering from a certain disease, 75% die of it. What is the probability that of 6 randomly selected patients, 4 will recover?

Answer

This is a binomial distribution because there are only 2 outcomes (the patient dies, or does not). Let X = number who recover. Here, n = 6 and x = 4. Let p = 0.25 (success, that is, they live) and q = 0.75 (failure, i.e. they die).

The probability that 4 will recover:

P(X) = C_4^6 (0.25)^4 (0.75)^2 = 15 × 2.1973 × 10^-3 = 0.0329595

Histogram of this distribution: we could calculate all the probabilities involved, and we would get:

X | Probability
0 | 0.17798
1 | 0.35596
2 | 0.29663
3 | 0.13184
4 | 3.2959 × 10^-2
5 | 4.3945 × 10^-3
6 | 2.4414 × 10^-4

It means that out of the 6 patients chosen, the probability that none of them will recover is 0.17798, the probability that one will recover is 0.35596, and the probability that all 6 will recover is extremely small.

Example 3

In the old days, there was a probability of 0.8 of success in any attempt to make a telephone call. (This often depended on the importance of the person making the call, or the operator's curiosity!) Calculate the probability of having 7 successes in 10 attempts.

Answer

Probability of success p = 0.8, so q = 0.2. X = success in getting through.

Probability of 7 successes in 10 attempts:

P(X = 7) = C_7^10 (0.8)^7 (0.2)^(10-7) = 0.20133

Histogram: we use the function C(10, x)(0.8)^x (0.2)^(10-x) to obtain the probability histogram of the binomial distribution.

Example 4

A (blindfolded) marksman finds that on the average he hits the target 4 times out of 5. If he fires 4 shots, what is the probability of
(a) more than 2 hits?
(b) at least 3 misses?

Answer

Here, n = 4, p = 0.8, q = 0.2. Let X = number of hits, and let x_0 = no hits, x_1 = 1 hit, x_2 = 2 hits, etc.

(a) P(X) = P(x_3) + P(x_4) = C_3^4 (0.8)^3 (0.2)^1 + C_4^4 (0.8)^4 (0.2)^0 = 4(0.8)^3(0.2) + (0.8)^4 = 0.8192

(b) 3 misses means 1 hit, and 4 misses means 0 hits.

P(X) = P(x_1) + P(x_0) = C_1^4 (0.8)^1 (0.2)^3 + C_0^4 (0.8)^0 (0.2)^4 = 4(0.8)(0.2)^3 + (0.2)^4 = 0.0272

Example 5

The ratio of boys to girls at birth in Singapore is quite high at 1.09:1. What proportion of Singapore families with exactly 6 children will have at least 3 boys? (Ignore the probability of multiple births.)

[Interesting and disturbing trivia: in most countries the ratio of boys to girls is about 1.04:1, but in China it is 1.15:1.]

Answer

The probability of getting a boy is 1.09/(1.09 + 1.00) = 0.5215. Let X = number of boys in the family.
Here, n = 6, p = 0.5215, q = 1 − 0.5215 = 0.4785.

When x = 3: P(X) = C_3^6 (0.5215)^3 (0.4785)^3 = 0.31077
When x = 4: P(X) = C_4^6 (0.5215)^4 (0.4785)^2 = 0.25402
When x = 5: P(X) = C_5^6 (0.5215)^5 (0.4785)^1 = 0.11074
When x = 6: P(X) = C_6^6 (0.5215)^6 (0.4785)^0 = 2.0115 × 10^-2

So the probability of getting at least 3 boys is:

P(X ≥ 3) = 0.31077 + 0.25402 + 0.11074 + 2.0115 × 10^-2 = 0.69565

NOTE: We could also have calculated it like this: P(X ≥ 3) = 1 − (P(x_0) + P(x_1) + P(x_2)).

Example 6

A manufacturer of metal pistons finds that, on the average, 12% of his pistons are rejected because they are either oversize or undersize. What is the probability that a batch of 10 pistons will contain
(a) no more than 2 rejects?
(b) at least 2 rejects?

Answer

Let X = number of rejected pistons (in this case, "success" means rejection!). Here, n = 10, p = 0.12, q = 0.88.

(a) No rejects, that is, x = 0:

P(X) = C_0^10 (0.12)^0 (0.88)^10 = 0.2785

One reject, that is, x = 1:

P(X) = C_1^10 (0.12)^1 (0.88)^9 = 0.37977

Two rejects, that is, x = 2:

P(X) = C_2^10 (0.12)^2 (0.88)^8 = 0.23304

So the probability of getting no more than 2 rejects is:

P(X ≤ 2) = 0.2785 + 0.37977 + 0.23304 = 0.89131

(b) We could work out all the cases for X = 2, 3, 4, ..., 10, but it is much easier to proceed as follows:

Probability of at least 2 rejects = 1 − P(X ≤ 1) = 1 − (P(x_0) + P(x_1)) = 1 − (0.2785 + 0.37977) = 0.34173

Histogram: using the function g(x) = C(10, x)(0.12)^x (0.88)^(10-x) and finding the values at 0, 1, 2, ..., gives us the histogram of the binomial distribution.
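All of the worked examples above evaluate the same formula, so they make convenient checks for a short script. A minimal sketch using only the standard library (math.comb requires Python 3.8 or later):

```python
from math import comb

def binom_pmf(n, x, p):
    """P(X = x) = C(n, x) * p^x * q^(n - x), with q = 1 - p."""
    q = 1 - p
    return comb(n, x) * p**x * q**(n - x)

# Example 1: a die tossed 3 times, "success" = rolling a five.
for x in (0, 1, 3):
    print(x, binom_pmf(3, x, 1/6))   # 0.5787..., 0.3472..., 4.63e-3

# Mean and variance: E(X) = np and V(X) = npq.
n, p = 3, 1/6
print(n * p, n * p * (1 - p))
```

Summing binom_pmf over x = 0, 1, 2 with n = 10 and p = 0.12 likewise reproduces the P(X ≤ 2) value in Example 6.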
2440
https://files.eric.ed.gov/fulltext/EJ1323294.pdf
Journal of Inquiry Based Activities (JIBA) / Araştırma Temelli Etkinlik Dergisi (ATED) Vol 11, No 2, 92-110, 2021 AN ALTERNATIVE MATERIAL FOR TEACHING PRIME NUMBERS: PRIME FACTORS CHART Satı Ceylan Oral1 ABSTRACT This study aimed to evaluate the effectiveness of the Prime Factors Chart (PFC) as an alternative teaching material, which was developed for teaching the concepts within the "factors and multiples" unit in the middle school mathematics curriculum. Twelve middle school mathematics teachers used the PFC to teach concepts such as prime number, prime factor, the highest common factor, the least common multiple, relatively prime numbers, factors, and multiples during face-to-face education in 4 lesson hours. The teachers and their students (n=90) evaluated the PFC based on the framework of the principles of material development. The participant teachers' and students' opinions indicated that the PFC is simple and understandable, suitable for the learning objectives and outcomes, suitable for the developmental characteristics of the students, and simple enough to be used by students as well as teachers. Keywords: prime factors chart, prime numbers, mathematics, material development. ÖZ The purpose of this study is to evaluate the classroom usability of the Prime Factors Chart (PFC), developed as an alternative teaching material for the concepts in the "factors and multiples" unit of the middle school mathematics curriculum. The PFC, used over 4 lesson hours of face-to-face instruction by 12 mathematics teachers and 90 of their students at different times to teach concepts such as prime number, prime factor, relatively prime numbers, the least common multiple, the highest common factor, factors, and multiples, was evaluated within the framework of the principles of material development. According to the teacher and student responses, the PFC can be said to be simple, plain, and understandable, suitable for the objectives and behaviors of the lesson, suitable for students' developmental and learning characteristics, and simple enough to be used by students as well as teachers. An examination of the students' opinions about the process revealed that they described the lessons taught with the PFC as exciting, intriguing, instructive, useful, and entertaining. Some of the students also stated that the coloring process was unnecessary and boring. Keywords: prime factors chart, prime numbers, mathematics, material development. Article Information: Submitted: 04.20.2021 Accepted: 10.14.2021 Online Published: 10.29.2021 1 Dr., Burhaniye Science and Art Center, ceylansati@hotmail.com, ORCID: INTRODUCTION According to Altun (2001), the primary purposes of mathematics education are to provide students with the mathematical skills necessary in life, teach them to solve real-life problems, and enhance their ability to use a problem-solving approach when analyzing situations. Hence, it is of paramount importance today. However, many people have prejudices towards the abstract nature of mathematics because engaging in mathematics requires higher-order mental processes, such as problem-solving skills, and uses special symbols and signs to express concepts and ideas (Yıldırım, 1996). An increase or decrease in these prejudices can be directly related to how mathematics is taught (Dale, 1946; Hare, 1999).
In many countries, traditional approaches that design mathematics instruction based on unconnected learning objectives have been replaced by constructivism-based approaches that emphasize conceptual and relational learning. Indeed, curricula developed based on traditional methods caused mathematics education to be trapped in a vicious circle (Boz, 2008). Developing teaching materials that students can use effectively in the classroom environment is listed among the factors that might save mathematics instruction from this vicious circle (Tuncer, 2008). Many studies suggest that including materials in teaching environments comes with many benefits. According to the results of these studies, effective use of teaching materials in the learning environment facilitates learning and comprehension (İşman, 2005; Koşar et al., 2003), supports individual work, increases interest and motivation (Tuncer, 2008), provides learning experiences compatible with daily life (Gürbüz, 2007), improves critical thinking, problem-solving, and creativity skills (Körükçü, 2008), reinforces knowledge, and contributes to students' involvement in the learning process (Aslan & Doğdu, 1993). Teaching materials should be included in the instruction as objects that students can interact with to make sense of the mathematical concepts through their kinesthetic and visual senses (Sowell, 1989). Teaching materials can be used for different purposes, including modeling the relationships between the sub-topics of a subject, involving students in the learning environment actively, and concretizing abstract concepts that are difficult to comprehend (Van de Walle, 2007; Yazlık, 2018). Seferoğlu (2011) suggests that it is necessary to benefit from the prerequisites and principles of material development to design and produce a material that will contribute to effective teaching. According to the principles of material development, the material should be simple, plain, and understandable; suitable for the learning objectives and outcomes; have visual features highlighting the key points of the material; be suitable for the developmental characteristics of students; provide students with the opportunity to practice concepts; be simple enough to be used by students as well as teachers; and be easily developed/revised as needed. During the material development process, topics such as target analysis, identifying the learners' characteristics, content analysis and design, integrating the content and the tool, and transferring the material to the learning environment should be addressed. Teaching the Concepts of Factors and Multiples In the middle school mathematics curriculum, the topics of "Factors and Multiples" are prerequisites for some curriculum standards (e.g., writing a number using exponents), and therefore, they are listed before the algebra topics in the sixth and eighth grades (Ministry of National Education [MoNE], 2018). Factors and multiples are important topics of the number strand in mathematics and are often included in both national and international exams (Tatar et al., 2008). However, students have learning difficulties related to these two concepts. Prior research studies examined students' understanding of factors and multiples using different teaching approaches and participants (Bilge, 2005; Bolte, 1999; Korkmaz & Korkmaz, 2017; Özdeş, 2013). Özdeş (2013) found that some students thought that the number 1 was a prime number and the number 2 was not a prime number because it is even.
The study also reported that according to some students, all odd numbers should be considered prime because they are not divisible by 2, and the negatives of prime numbers are also prime. The students had misconceptions about relatively prime numbers, prime factorization, the highest common factor (HCF), and the lowest common multiple (LCM). Among the students' misconceptions were that relatively prime numbers must be prime numbers, they confused the concepts of division and divisibility, and they did not comprehend that the number 1 is relatively prime with other numbers (Bolte, 1999; Yağmur, 2020). The Sieve of Eratosthenes, a tool that has been used for centuries in learning the concept of prime number, finds the prime numbers up to n by eliminating the multiples of every prime number up to k, where k ≤ √n; it was introduced by the Greek mathematician Eratosthenes around 250 BC (Lambert, 2004). The Sieve of Eratosthenes is limited to detecting prime numbers from 1 to n. There is a need for a material to teach the other concepts (factor, multiple, divisor, prime number, common divisor, common multiple, HCF, LCM, and relatively prime numbers) related to factors and multiples. In this context, this study aimed to evaluate the effectiveness of the Prime Factors Chart as a teaching material, which was developed as an alternative tool for teaching the concepts within the "factors and multiples" unit in the middle school mathematics curriculum. THE DEVELOPMENT PROCESS OF THE PRIME FACTORS CHART The material was developed based on the framework suggested by Seferoğlu (2011). Expert opinions were obtained at every stage of the process, which was carried out in accordance with the prerequisites of the material development process. These experts were two academicians and two mathematics teachers who were teaching the sixth and eighth-grade classes at the time of the study. The material, originally designed to teach only the concepts of prime number and prime factor, was improved based on expert opinions. Experts asserted that the material could not be "easily developed and revised as needed," and upon this, a meeting was organized with the experts to discuss what other concepts and procedures can be taught using the material. In the meeting, we agreed that the Chart could be used not only to teach prime numbers and prime factors but also to teach the concepts of factors, multiples, divisibility rules, common multiples, common factors, relatively prime numbers, and HCF and LCM. Additionally, the material was finalized with its simple and plain form, which fits an A4 paper, in light of the expert opinions. Due to the colorful images in its content, the designed material was named the "Prime Factors Chart (PFC)." Based on the prerequisites of the material development and expert opinions, the target audience of the material was determined, the characteristics of the learning environment and the learners were examined, and the material was transferred to the classroom environment after content analysis and design, as explained below. Target Analysis This study aimed to develop a fun and effective material suitable for the developmental level of students to teach the concepts of factors and multiples. Therefore, we decided to design a material that would support students in cognitive, affective, and psychomotor learning domains.
Learner Characteristics The topics of factors and multiples are mentioned for the first time in the curriculum in the sixth grade under the unit "M.6.1.2. Factors and Multiples" and then in the eighth grade under the unit "M.8.1.1. Factors and Multiples" (MoNE, 2018). Therefore, the target audience of the material developed in the current study is the sixth and eighth-grade students. The material was designed considering the students' attitudes, motivations, and readiness towards the mathematics course. Content Analysis, Design, and Transferring the Material to the Learning Environment The material evaluation principles (Seferoğlu, 2011) guided the design and evaluation stages of the material. The researcher and the experts agreed on its final form and how to transfer it to the classroom environment. ACTIVITY IMPLEMENTATION This study was conducted in the fall semester of the 2019-2020 academic year. The necessary legal permissions for the study were obtained from the relevant Directorate of National Education. In the study, a mixed-methods research design was used. The quantitative data were obtained from the scores given by 12 teachers (Table 1) and 90 students (Table 2) within the framework of the material development principles, and the qualitative data were collected using open-ended questions administered to 15 eighth-grade students (Table 3) who participated in the sample implementation carried out by the researcher herself. The current study started with a webinar in which the Prime Factors Chart (PFC) was introduced to 12 middle school mathematics teachers working in the Balıkesir province of Turkey. All of the teachers used the PFC in their lessons, and after the lessons, both they and their students evaluated the PFC using a questionnaire designed based on the Principles of Material Development (Seferoğlu, 2011). The material was used with the sixth and eighth-grade students. Each implementation lasted 4 lesson hours.

Table 1. Teacher Information in the Study Group
Group | Teachers (n) | %
Male | 5 | 41.6
Female | 7 | 58.4
Total | 12 | 100

Table 2. Student Information in the Study Group
Group | Students (n) | %
Male | 51 (30 sixth graders, 21 eighth graders) | 56.6
Female | 39 (19 sixth graders, 20 eighth graders) | 43.4
Total | 90 | 100

Table 3. Students Who Responded to Open-Ended Questions
Group | Students (n) | %
Male | 8 eighth graders | 53.3
Female | 7 eighth graders | 46.7
Total | 15 | 100

The Material Evaluation Form (Appendix 1), which was designed based on Seferoğlu's (2011) principles of material development, was used to obtain the teachers' and students' evaluations of the PFC. In order for the evaluation to be measurable, the material development principles were transformed into a Likert-type questionnaire. The participants' responses to questionnaire items were analyzed by calculating the mean scores. Sample Activity Implementation The researcher, the mathematics teacher of the participating students, carried out the sample activity implementation with 15 eighth-grade students attending a public school during 4 lesson hours. In order to help the students build new knowledge on their existing knowledge, the activity tasks focused on the concepts included within the Factors and Multiples Unit of the sixth and eighth-grade curriculum. These concepts are "M.6.1.2. Factors and Multiples Unit / Terms or Concepts: Factor, multiple, divisor, prime number, common divisor, common multiple" and "M.8.1.1.
Factors and Multiples Unit / Terms or Concepts: The highest common factor (HCF), the least common multiple (LCM), relatively prime numbers" (MoNE, 2018). In the preliminary stage of the activity, the teacher told the students that they needed 25 different colored crayons for the next day. She added that if they could not find that many colors, they could obtain a new color by mixing the already existing ones. On the day of the activity, the blank version of the PFC (Figure 1, Appendix 2) was distributed to the students. The students were given the opportunity to examine the Chart. After an examination that lasted for about 2 minutes, the teacher asked the students what they noticed in the Chart. The following dialogue took place (all names are pseudonyms): Teacher: What do you see in the Chart in front of you? Furkan: The numbers from 1 to 100 are written in a certain order and divided into rectangles of equal size. These rectangles are also divided into boxes of different numbers and sizes. Teacher: Great, what else? Eren: I see that some numbers are not separated into any boxes. Teacher: Hmm, do these numbers have a common property? Ege: Yes, these numbers are prime. Teacher: Has everyone noticed this? Ok, write down the definition of a prime number in your notebook. Figure 1. The In-classroom Version of the PFC The PFC is indeed designed in a worksheet format, in which rectangles of the same size from 100 to 1 are divided into different sized segments. In this part, the teacher examined the prime number definitions that the students wrote in their notebooks and asked the whole class to write down all the prime numbers in the Chart in their notebooks as well. At this point, she tried to help the students realize that 25 different colors would represent 25 different prime numbers. The smaller boxes represent the prime factors of non-prime numbers. The teacher initiated a discussion in the class for students to discover this fact. Teacher: So, let's think about what the little boxes mean. Does anyone have an idea? Onur: Factors, I guess. Teacher: What do you think about Onur's idea? Ege: Let's start with the smaller numbers one by one, ma'am. For example, the factors of 4 are 1, 2, and 4, so there are three numbers, but there are two boxes here. It can't be factors. Zeynep: Oh, prime factors! But number 4 has one prime factor, which is 2. We have two boxes here. Onur: Couldn't it be 4=2x2, then? For example, there are three boxes for number 8, and I think that 8=2x2x2. Students: Yes, product of the prime factors. Teacher: You guys are great! The whole class discussion supported the students to discover the working principle of PFC. Indeed, the PFC is designed as boxes containing the product of prime factors of non-prime numbers. Figure 2 shows an example. Figure 2. Examples of Product of Prime Factors Figure 2 shows that the rectangle representing the number 60 is divided into four equal parts, and the rectangle representing the number 68 is divided into three equal parts. In short, each number is divided into segments, each representing a prime factor, and painted in the color of the prime number it represents. Photograph 1 presents an example student work. Photograph 1. A Student's Work on the PFC The initial exploration of the PFC took about 20 minutes. In this phase, the teacher stated that the color of the box is their main concern, not the size so that the students do not develop misconceptions.
Teacher: Now, we will fill in a part of the Chart together. Previously, we discussed why 1 is not a prime number. Let's start with 2. Banu: 2 is a prime number. It consists of one box. Should we assign a color to it? Alp: Yes, shall we color number 2 in yellow? Teacher: Yes, guys, yellow can represent the number 2 for us. Banu: Let's color the number 3 in blue, then. Teacher: If you have colored numbers 2 and 3 in yellow and blue, let's move on to 4. What do you think of 4? Zeynep: If we write the number 4 as the multiplication of its prime factors, since 4=2x2, we write 2 in both boxes and color it in yellow. Derin: There is something I don't understand. Why don't we think of it as 4=4x1? Zeynep: Because 4 and 1 are not prime numbers. We will always write the prime factors in these little boxes so we know what color we can paint them. Teacher: Yes, Derin, Zeynep is right. Come on; you tell us about the number 6 now. Derin: So, since 6=2x3, we will write 2 and 3 in the boxes and paint them in colors that represent 2 and 3. Teacher: Well done, kids! After making sure that the students firmly understood the process and the common colors were determined for each prime number, they were given approximately 1 lesson hour to fill in and paint the entire Chart (Figure 3, Appendix 3). Some students first wrote down the prime factors and then proceeded to color them (Figure 4, Appendix 4). The teacher reminded the students about the divisibility rules: Teacher: Guys, it will be easier to fill out the Chart if you know the divisibility rules. Tarık: Yes, teacher, non-prime numbers from 1 to 100 are multiples of 2, 3, 5, or 7. That's why I always look to see if the numbers are divisible by these numbers. Teacher: Great job, Tarık! Can you please tell your friends about the shortcuts for whether a number is divisible by 2, 3, or 5? Figure 3. A Completed PFC It is crucial that the teacher guides the students throughout the process and creates an environment where students can express themselves freely. A dialogue illustrating this environment is as follows: Eylem: Teacher, I'm done. Teacher: How did you finish it so fast? Eylem: I found a shortcut, ma'am. For example, when factoring the number 78 into prime factors, I knew it was divisible by 2 because it was even. 78 ÷ 2 = 39. I had already painted the Chart for the number 39. Look, it is 39=3x13. Then, 78=2x3x13. Teacher: Of course, you can think of it that way. Nice one. Figure 4. Prime Factorized PFC The first 2 hours of the activity were spent on discovering the working principle of PFC and writing and coloring the prime factors. In the next 2 hours, the students discussed which mathematical concepts they can explore using the PFC in addition to the concepts of prime numbers and prime factorization. Teacher: Everyone should have completed their prime factor chart and brought it with them today. Now, let's discuss what else we can learn by using this Chart and briefly write down everything we find in our notebook. [Students are given some time to think about the question.] Yes, anyone wants to speak? Elif: Ma'am, the PFC teaches us the concepts of prime numbers and prime factorization (Photograph 2). Teacher: Now, everyone, draw and color a prime and non-prime number that you choose from your Chart. Photograph 2. Sample Student Work 1 Other concepts discovered with the students were noted in the student notebooks, along with their examples.
Teacher: What other concepts does the prime factors chart teach us? Fuat: We repeatedly use the divisibility rules (Photograph 3). This reinforces our learning. Derin: We can see all the factors of a number. For example, when we consider the number 30, we can say that 30 is divisible by 2, 3, 5, and their products, which are 6, 10, 15, and 30. Each number is already divisible by 1 (Photograph 4). Photograph 3. Sample Student Work 2 Photograph 4. Sample Student Work 3 The students explained their ideas to their classmates, and when necessary, they showed their work by writing it on the board. Akın: We can also find the common factors of numbers. For example, when I look at the numbers 28 and 63, I can see that 7 is a common color. Mustafa: That is easy. What about the numbers 12 and 24? They have a lot of common factors. It is not enough to say a common color and a common factor. Very complicated. Akın: Teacher, can I show this on the board? (Photograph 5) 12 = 2·2·3 and 24 = 2·2·2·3. What colors do they both have in common? (He asks her to look at the Chart.) 2, 2, and 3. Just like we find all the factors, we will write down all the factors that we can form from these common colors: 1, 2, 3, 4, 6, and 12. These are the common factors of the numbers 12 and 24. Mustafa: We have also seen the highest of the common factors. Then the product of all the common colors gives the HCF. That works. Photograph 5. Akın's Work on the Whiteboard During the activity, the students were encouraged to discuss and learn from each other. In the current implementation, the students had difficulty seeing that the Chart could be used for the relatively prime numbers and LCM concepts. The teacher gave hints for how these concepts can be examined with the help of the PFC. Teacher: Guys, can we see the relatively prime numbers when we examine the Chart? Zeynep: Do the relatively prime numbers have to be prime, teacher? Elif: No, I guess they are not. There should be no common factors other than 1 for them to be relatively prime numbers. Teacher: Is this clear to everyone? Eylem, do you understand the definition Elif explained? How can the Chart help us in this regard? Eylem: Actually, I don't know. I need to think. Akın just said that the common factor is the common color. Shouldn't there be a common color then? [Looking at the Chart.] For example, 8 and 15 do not have any common colors. Can we call them relatively prime? Elif: Of course! Teacher: Well done, guys! Let's draw an example (Photograph 6). Photograph 6. Sample Student Work 4 At the end of the lesson, to increase the students' motivation, the teacher mentioned the benefits of mathematics as a discipline that teaches interrelated concepts and develops the human brain. Photograph 7 shows samples of student work with the PFC. Photograph 7. Students' Work on PFC The PFC can be brought back to the classroom environment and used with students for exploring many other concepts and course objectives. Examples of these concepts and objectives are given in Appendix 5. EVALUATION OF THE ACTIVITY In order to analyze whether the PFC can be used in the classroom as an alternative teaching material, the evaluations of three different groups were taken into consideration. Table 5 presents information on these groups and related data collection tools.
Table 5. Groups Evaluating the Material
Group | Data Collection Tool
Teachers (n=12) | Material evaluation form (quantitative data)
Students (n=90) | Material evaluation form (quantitative data)
Students (n=15) | Open-ended questions (qualitative data)

Twelve middle school mathematics teachers and their students (n=90) indicated in Table 5 used the PFC and evaluated it according to the material evaluation form. In this evaluation, each item was scored over 5 points (100%), and the mean scores were calculated (Table 6). Besides, 15 eighth-grade students who participated in the sample implementation lessons taught by the researcher evaluated the PFC by responding to open-ended questions.

Table 6. Teachers' and Students' Responses to the Material Evaluation Form
Item | Teachers Mean | Teachers % | Students Mean | Students %
Is it simple and understandable? | 3.33 | 83.33 | 3.67 | 91.67
Is it appropriate for the learning objectives and outcomes? | 3.58 | 89.58 | 3.83 | 95.83
Do the visual features highlight key points of the material? | 3.17 | 79.17 | 3.90 | 97.50
Is it appropriate for students' developmental characteristics? | 3.25 | 81.25 | 4.00 | 100.00
Does it provide the student with the opportunity to practice and exercise? | 3.15 | 78.75 | 3.93 | 98.33
Is it simple enough to be used by students as well as teachers? | 3.50 | 87.50 | 3.83 | 95.83
Can it be easily improved and revised as needed? | 2.42 | 60.42 | 3.50 | 87.50
Mean | 3.20 | 80.00 | 3.81 | 95.24

Table 6 shows that both the teachers and the students evaluated the PFC with high ratings ranging from 2.42 to 4. Based on the teachers' and students' evaluations, we can infer that the PFC is suitable for the learning objectives and outcomes, is simple and understandable, and can be used by students as well as teachers. Additionally, participant teachers and students indicated that the PFC is suitable for the developmental characteristics of the students, the visual features highlight the key points of the material, and it provides the students with the opportunity to practice and exercise. The lowest rating was given to the last item on improving and revising the material by the participating teachers. The eighth-grade students who participated in the sample activity implementation responded to open-ended questions about the skills they gained and the problems they experienced during the process. Table 8 shows the open-ended questions. The students answered the questions in writing.

Table 8. Open-Ended Questions
- What are your opinions about the lessons taught with the Prime Factors Chart?
- What skills did you learn as a result of participating in the Prime Factors Chart lessons?
- What problems did you experience in this process?

The students expressed that the lessons taught with the PFC were exciting, engaging, instructive, useful, and entertaining. Some of the students added that the process required patience. Examples of the student statements are as follows: I was very happy to attend this lesson. Gradually, the initially complex paper with numbers up to 100 that you gave us became meaningful. So, the lesson was interesting and fun. (Furkan). I found this activity very useful. First of all, it was very nice that our teacher encouraged us to question and listen to everyone's opinion. (Elif). I think this process was very entertaining and instructive. I realized that I understood some concepts that I had trouble with understanding before. (Eren). The students had some prior knowledge of factors and multiples before participating in the activity.
They wrote that they made more sense of the concepts after engaging in the lessons taught with the PFC. They answered the question about what skills the lessons taught them by writing down all the concepts related to the Chart. The students explained that they learned the concepts such as prime number, prime factors (8 students), factors (7 students), divisibility rules (9 students), relatively prime numbers (8 students), and HCF-LCM (7 students). Some of the student statements on the interview form are as follows: First of all, I must state that we are expected to know the divisibility rules, prime numbers, and prime factors, which we had learned in the sixth grade. However, we don't really know these topics that much. Thanks to the PFC, I concretized the concepts I thought I knew, such as the common factors and relatively prime numbers… Everything was so clear… (Derin). The concept of the prime factor was interesting to me. Think about it; we can write all the numbers except 1 to infinity as the product of prime numbers. So weird and fascinating! (Mustafa). One of the most fundamental topics of the eighth grade math is factors and multiples. I understood a topic that I previously had difficulty with learning during a stressful time for us, as we are preparing for LGS [Nationwide High School Entrance Exam]. I learned relatively prime numbers, for example. If there is no common color, there is no common factor. These are relatively prime numbers. Very easy! (Elif). The last question asked to the students in the interview form was the problems they experienced during the lesson taught with the PFC. While most of the students expressed that they did not encounter any problems, some stated that it was difficult to find 25 different colors, and they were bored with coloring all the numbers. Additionally, some students expressed that they had difficulty understanding some parts and did not like the questioning process created in the classroom. Some of the student statements on the interview form are as follows: Was it necessary to color all the numbers in the Chart? We could only write the prime factors and discuss them. I think that our teacher had it painted for fun, but I thought it was boring and unnecessary. I couldn't find 25 colors anyway. (Akın). Our teacher asked us everything. We tried to discover everything ourselves. I was not very active during that time, but after my friends explained the concepts, I understood them. (Eren). I had a hard time understanding some parts. That's why I can say I was bored. (Eylem). CONCLUSION and SUGGESTIONS This study used classroom implementations to evaluate the effectiveness of the PFC, developed for teaching the concepts within the factors and multiples unit in the sixth and eighth grades mathematics curriculum in Turkey. The PFC was designed based on the prerequisites of the material development process by following the phases of target analysis, determination of learner's characteristics, content analysis and design, integration of content and the tool, and transferring the material to the learning environment. Hence, aligned with the related literature (Seferoğlu, 2011; Yanpar, 2015), the material was designed to be simple, plain, and understandable, suitable for the learning objectives and outcomes, providing students with the opportunity to practice and exercise, and economical and ergonomic in a way that all students and teachers can easily use.
Twelve mathematics teachers used the PFC for teaching the concepts of prime numbers and prime factors. The teachers' and the students' evaluations of the PFC within the framework of the principles of material development revealed that the PFC is suitable for the learning objectives and outcomes, simple and understandable, simple enough to be used by students as well as teachers, suitable for the developmental characteristics of students, and has visual features that highlight the key points of the material. The participants' evaluations indicate that the PFC can be used to teach the concepts within the factors and multiples unit and will positively contribute to the teaching process. The findings of the current study are consistent with the findings obtained in Bilge's (2005) study conducted on teaching the concepts of prime numbers and prime factors through an active learning method. Kamii et al. (2001) found that most of the mathematics teachers in their study underlined that the use of materials contributed positively to students' mathematical thinking. A similar finding was obtained in this study based on the participating teachers' evaluations of the PFC. The findings regarding the students' opinions on the activity showed that they found the lessons taught with the PFC exciting, engaging, instructive, useful, and entertaining. Some of the students thought that the process required patience. The positive effect of the mathematics lessons conducted with the PFC on students' motivation levels supports similar studies that reported a positive relationship between students' motivation and using materials in lessons (Keller, 2010; Yorgancı & Terzioğlu, 2013). The related literature reported that students have misconceptions related to prime numbers, such as that the number 1 is a prime number, the number 2 is not prime because it is even, and all odd numbers are prime (Özdeş, 2013). It was found that according to some students, the negatives of prime numbers are also prime, and relatively prime numbers must be prime (Bolte, 1999). In light of the existing literature on students' misconceptions, the researcher emphasized the key points of the concepts during the lessons taught with the PFC. In the Chart, the number 1 is colored in white, and it is clear that it is not prime as no other number includes a white section. Additionally, the PFC consists of only positive numbers, and this gives an implicit message that there are no negative prime numbers. The fact that there is no requirement for relatively prime numbers to be prime is presented to the students with many examples according to the "common color means common factor" principle. From this point of view, the PFC has a structure that can eliminate students' misconceptions and can even prevent such misconceptions from occurring. The results of the current and previous research studies support that using materials in the learning environments facilitates perception and learning, arouses interest, and brings vitality to the lesson. Additionally, materials save time in the teaching process, help to consolidate the knowledge, and promote long-lasting learning. Hence, materials that can make learning permanent, effective, and enjoyable should be used in learning environments. Educators should receive training to design materials and gain knowledge about the importance and positive effects of using materials in their courses. The current research is limited to the "Factors and Multiples" concepts in mathematics.
New course contents can be developed using materials for teaching other mathematics concepts or for other subject fields. REFERENCES Altun, M. (2001). Matematik öğretimi [Teaching mathematics]. Alfa Press. Aslan, Z., & Doğdu, S. (1993). Eğitim teknolojisi uygulamaları ve eğitim araç gereçleri [Educational technology applications and educational tools]. Tekışık Publishing. Bilge, O. (2005). İlköğretim 6. sınıf matematik dersi asal sayılar ve çarpanlara ayırma ünitesinin hedef ve davranışlarını kazandırmada aktif öğrenme yaklaşımının etkisi [The effect of active learning approach in reaching the objectives and behaviors of the unit prime numbers and factorization in the sixth grade in primary school] [Unpublished master's thesis]. Gazi University. Bolte, L. (1999). Enhancing and assessing preservice teachers' integration and expression of mathematical knowledge. Journal of Mathematics Teacher Education, 2(2), 167-185. Boz, N. (2008). Matematik neden zor? [Why is mathematics difficult?]. Necatibey Faculty of Education Electronic Journal of Science and Mathematics, 2(2), 52-65. Dale, E. (1946). Audio-visual methods in teaching. The Dryden Press. Gürbüz, R. (2007). Olasılık konusunda geliştirilen materyallere dayalı öğretime ilişkin öğretmen ve öğrenci görüşleri [Students' and their teachers' opinions about the instruction based on the materials on probability subject]. Kastamonu Education Journal, 15(1), 259-270. Hare, A. Y. M. (1999). Revealing what urban early childhood teachers think about mathematics and how they teach it: Implications for practice [Unpublished doctoral dissertation]. University of North Texas. İşman, A. (2005). Öğretim teknolojileri ve materyal geliştirme [Instructional technologies and material development]. PegemA Publishing. Kamii, C., Lewis, B. A., & Kirkland, L. (2001). Manipulatives: When are they useful? The Journal of Mathematical Behavior, 20(1), 21-31. Keller, J. M. (2010). Motivational design for learning and performance: The ARCS model approach. Springer Science & Business Media. Korkmaz, E., & Korkmaz, C. (2017). Ebob-Ekok konusunun gerçekçi matematik eğitimi etkinlikleriyle öğretiminin başarı ve tutuma etkisi [EBOB-EKOK subject effect to success and attitude with teaching realistic mathematics education activities]. Mustafa Kemal University Journal of Social Sciences Institute, 14(39), 504-523. Koşar, E., Yüksel, S., Özkılıç, R., Avcı, U., Alyaz, Y., & Çiğdem, H. (2003). Öğretim teknolojileri ve materyal geliştirme [Instructional technologies and material development]. PegemA Publishing. Körükçü, E. (2008). Tam sayılar konusunun görsel materyal ile öğreniminin 6. sınıf öğrencilerinin matematik başarılarına etkisi [The effect of learning integers using visual materials on 6th grade students' success in mathematics] [Unpublished master's thesis]. Marmara University. Lambert, M. (2004). Calculating the Sieve of Eratosthenes. Journal of Functional Programming, 14(1), 759-763. Ministry of National Education. (2018). Matematik dersi öğretim programı (İlkokul ve ortaokul 1, 2, 3, 4, 5, 6, 7 ve 8. sınıflar) [Mathematics curriculum (Primary and middle school 1, 2, 3, 4, 5, 6, 7, and 8th grades)]. Özdeş, H. (2013). 9. sınıf öğrencilerinin doğal sayılar konusundaki kavram yanılgıları [Misconceptions of 9th class students regarding to natural numbers] [Unpublished master's thesis]. Adnan Menderes University. Seferoğlu, S. S. (2011).
Öğretim teknolojileri ve materyal geliştirme [Instructional technologies and material development]. PegemA Publishing. Sowell, E. J. (1989). Effects of manipulative materials in mathematics instruction. Journal for Research in Mathematics Education, 20(5), 498-505. Tatar, E., Okur, M., & Tuna, A. (2008). Ortaöğretim matematiğinde öğrenme güçlüklerinin saptanmasına yönelik bir çalışma [A study to determine learning difficulties in secondary mathematics education]. Kastamonu Educational Journal, 16(2), 507-516. Tuncer, D. (2008). Materyal destekli matematik öğretiminin ilköğretim 8. sınıf öğrencilerinin akademik başarısına ve başarının kalıcılık düzeyine etkisi [The effect of teaching material aided instruction on 8th grade students' academic success and level of permanency] [Unpublished master's thesis]. Gazi University. Van de Walle, J. A. (2007). Elementary and middle school mathematics: Teaching developmentally (6th ed.). Pearson Education, Inc. Yağmur, B. E. (2020). A game-based activity related to prime numbers. Journal of Inquiry Based Activities, 10(1), 18-30. Yanpar, T. (2015). Öğretim teknolojileri ve materyal tasarımı [Instructional technologies and material design]. Anı Publishing. Yazlık, D. Ö. (2018). Öğretmenlerin matematik öğretiminde somut öğretim materyali kullanımına yönelik görüşleri [The views of teachers about use of concrete teaching materials in mathematics teaching]. OPUS International Journal of Society Researches, 8(15), 775-805. Yorgancı, S., & Terzioğlu, Ö. (2013). Matematik öğretiminde akıllı tahta kullanımının başarıya ve matematiğe karşı tutuma etkisi [The effect of using interactive whiteboard in mathematics instruction on achievement and attitudes toward mathematics]. Kastamonu Educational Journal, 22(3), 919-930. Yıldırım, A. (1996). Disiplinlerarası öğretim kavramı ve programlar açısından doğurduğu sonuçlar [The concept of interdisciplinary teaching and its consequences in terms of programs]. Hacettepe University Journal of Education, 12, 89-94.

Citation Information Ceylan Oral, S. (2021). An alternative material for teaching prime numbers: Prime factors chart. Journal of Inquiry Based Activities, 11(2), 92-110.

Appendix 1
Material Evaluation Form (response scale: Definitely No / No / Partly / Yes / Definitely Yes)
- Is it simple and understandable?
- Is it appropriate for the learning objectives and outcomes?
- Do the visual features highlight key points of the material?
- Is it appropriate for students' developmental characteristics?
- Does it provide the student with the opportunity to practice and exercise?
- Is it simple enough to be used by students as well as teachers?
- Can it be easily improved and revised as needed?

Appendix 2
The In-classroom Version of the Prime Factors Chart

Appendix 3
Completed Prime Factors Chart

Appendix 4
PFC with Prime Factorized Numbers

Appendix 5
Concepts, Curriculum Standards, and Examples Related to PFC

Prime Number: Numbers that have only one section in the PFC are prime, and each is shown in a color different from the other primes. Curriculum standard M.6.1.2.3: Students will be able to determine prime numbers with their properties. They also find prime numbers up to 100 with the help of the Sieve of Eratosthenes.
Example: The numbers 43 and 61 are placed in one box in the material because they have no prime factors other than themselves. Prime Factors: The Prime Factors Chart allows non-prime numbers to be written in terms of prime factors. Curriculum standard M.8.1.1.1: Students will be able to find the prime factors of a positive integer. Example: The prime factors of the numbers 68 and 84 appear clearly in the material. Divisibility Rules: A student working with the Chart continuously repeats divisibility rules to see if numbers are divisible by 2, 3, 5, or 7. Curriculum standard M.6.1.2.2: Students will be able to explain and use the divisibility rules by 2, 3, 4, 5, 6, 9, and 10 without a remainder. Example: A student working on finding the prime factors of 90 and expressing it with appropriate colors will continuously repeat the divisibility rules to see if the number is divisible by the prime numbers 2, 3, or 5. Factors: The PFC helps to clearly see the factors of the numbers. Curriculum standard M.8.1.1.1: Students will be able to find positive integer factors of given positive integers and write the integer as the product of its factors using exponential expressions. Example: The number 78 is divisible by 2, 3, and 13, as well as 6 (2x3), 26 (2x13), 39 (3x13), and 78 (2x3x13). The Highest Common Factor (HCF): The HCF of any two selected numbers can be found. Curriculum standard M.8.1.1.2: Students will be able to calculate the highest common factor (HCF) and the least common multiple (LCM) of two whole numbers. Example: The greatest of the common divisors of the numbers 24 and 36, that is, the HCF, is 12 (2x2x3). In other words, it is the product of the numbers representing the common colors in both numbers. Least Common Multiple (LCM): The LCM of numbers from 1 to 100 can be found using the PFC. Curriculum standard M.8.1.1.2: Students will be able to calculate the highest common factor (HCF) and least common multiple (LCM) of two whole numbers. Example: A student who wants to find the least common multiple of the numbers 20 and 30 first finds the common colors in both boxes. These colors are yellow and orange for the numbers in question (2 and 5; 2x5=10 is the HCF of these numbers). Then the yellows and oranges are discarded in one of the boxes, and all remaining numbers are multiplied. Therefore, LCM(20, 30) = 2·2·3·5 = 60. Relatively Prime Numbers: Relatively prime numbers have no common divisor other than 1. If the common divisor is considered a common color, numbers that do not have a common color in the PFC are relatively prime. Curriculum standard M.8.1.1.3: Students will be able to determine whether two given whole numbers are relatively prime or not. Examples: • Consecutive numbers are relatively prime. The numbers 38 and 39 do not have any common colors. No common color means no common factor, and these numbers are relatively prime. • The number 1 is relatively prime with all numbers. The white color belonging to the number 1 does not exist in any other number in the Chart. So, the number 1 is relatively prime with all numbers. • Prime numbers are relatively prime. Since each prime number is represented by a different color, they cannot be expected to have common colors. This shows that all prime numbers are relatively prime. • The HCF of relatively prime numbers is 1. The numbers 8 and 15 have no common color; numbers with no common color are relatively prime, so it is clear that these numbers have no common divisor other than 1. • The LCMs of relatively prime numbers are found by multiplying these numbers with each other.
Since the numbers 8 and 15 have no common color, there will not be a common color to be discarded when finding the least common multiple, and thus all the factors of the two numbers are multiplied with each other. Then, LCM(8, 15) = 8 x 15 = 120. Square Numbers: The Chart allows students to directly see square numbers from 1 to 100. Curriculum standard M.8.1.3.1: Students will be able to determine the relationship between square numbers and their square roots. Example: In a square number, each color appears an even number of times (for example, 36 = 2x2x3x3), whereas in a non-square number at least one color appears an odd number of times. Conceptual Multiplication: While doing multiplication, students can change places between the multipliers to make the operation easier. Curriculum standard M.5.1.2.7: Students will be able to determine and use the appropriate strategy in mental multiplication and division with whole numbers. Example: For the operation 6x15, a student can follow a path such as 6x15 = (2x3)x(3x5) = (3x3)x(2x5) = 9x10 = 90. Simplification: The PFC can be used to see common factors while simplifying fractions. Curriculum standard M.5.1.3.4: Students will be able to understand that simplification and expansion will not change the value of the fraction and create fractions that are equivalent to a fraction. Example: 84/66 = (2x2x3x7)/(2x3x11) = 14/11.
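The Chart's "common color means common factor" reasoning translates directly into code: factor each number into a multiset of prime factors, read the HCF off the shared factors, and obtain the LCM from what remains. A minimal sketch in Python (the function names are my own, not from the article):

```python
from collections import Counter

def prime_factors(n):
    """Multiset of prime factors of n, e.g. 24 -> {2: 3, 3: 1}."""
    factors, d = Counter(), 2
    while d * d <= n:
        while n % d == 0:
            factors[d] += 1
            n //= d
        d += 1
    if n > 1:
        factors[n] += 1
    return factors

def hcf_lcm(a, b):
    # The intersection of the two multisets is the set of shared
    # "colors"; their product is the HCF, and LCM = a*b / HCF.
    common = prime_factors(a) & prime_factors(b)
    hcf = 1
    for prime, k in common.items():
        hcf *= prime**k
    return hcf, a * b // hcf

print(hcf_lcm(24, 36))  # (12, 72)
print(hcf_lcm(20, 30))  # (10, 60), matching the appendix example
print(hcf_lcm(8, 15))   # (1, 120): relatively prime, no common color
```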
2441
https://www3.nd.edu/~amoukasi/CBE358_Lab1/Approximations%20(3).pdf
SIMPLE EXPLICIT EXPRESSIONS FOR CALCULATION OF THE HEISLER-GROBER CHARTS

M.M. Yovanovich
Microelectronics Heat Transfer Laboratory, Department of Mechanical Engineering, University of Waterloo, Waterloo, Ontario, Canada

Abstract

Simple single-term approximations for the Heisler cooling charts and the Grober fractional energy loss are presented for the plate, infinite circular cylinder and sphere. These solutions are accurate to within about 2% of the series solutions provided the dimensionless time Fo is greater than some critical value Fo_c of approximately 0.2. Simple explicit expressions are provided for the accurate calculation of the first eigenvalue for all values of the Biot number. Polynomial expressions are presented for the accurate calculation of the roots of the Bessel functions of the first kind of orders zero and one. Expressions are developed for the accurate computation of the Bessel functions of the first kind of orders zero and one. Simple accurate solutions are proposed for calculating the dimensionless temperature and the heat loss fraction for finite circular cylinders, rectangular parallelepipeds and infinite rectangular bars. Maple V worksheets are given for the accurate calculation of the dimensionless temperature and dimensionless heat loss for the infinite plate, infinite circular cylinder and the sphere.

Nomenclature

A_n = Fourier coefficients for dimensionless temperature
B_n = Fourier coefficients for heat loss fraction
Bi = Biot number; Bi = hL/k
C_1, C_2 = correlation coefficients
c_p = specific heat at constant pressure; J/(kg K)
D = diameter of cylinder or sphere; m
Fo = Fourier number; Fo = alpha t / L^2
h = heat transfer coefficient; W/(m^2 K)
J_0(), J_1() = Bessel functions, first kind, orders 0 and 1
L = some characteristic body dimension
N = number of panels in trapezoidal approximation
n = constant power
Q = energy loss; J
Q_i = initial internal energy, rho c_p V theta_i; J
R = radius of circular cylinder
S = total active surface area; m^2
S_c, S_p, S_s = spatial functions for cylinder, plate and sphere
T(xi, Fo) = temperature of body; K
T_f = fluid temperature; K
t = time; s
V = total volume; m^3
X, Y, Z = half-dimensions of cuboid and rectangular bar; m
x, y, z = Cartesian coordinates

Greek Symbols

alpha = thermal diffusivity; k/(rho c_p); m^2/s
delta_0, delta_1 = parameters in modified Stokes approximation
delta_n = nth root of characteristic equations
delta_n,c = nth root for infinite cylinder
delta_n,p = nth root for infinite plate
delta_n,s = nth root for sphere
delta_1 = 1st root for 0 < Bi < infinity
delta_1,0 = 1st root for Bi -> 0
delta_1,inf = 1st root for Bi -> infinity
phi = dimensionless temperature; phi = theta/theta_i
theta = temperature excess; theta = T(xi, Fo) - T_f; K
theta_i = initial temperature excess, theta_i = T(xi, 0) - T_f; K
rho = mass density; kg/m^3
xi = dimensionless position within any body

Subscripts

c = infinite cylinder
cp = finite cylinder
i = initial value
p = infinite plate
x, y, z = infinite plates along x, y, z coordinates
xy = infinite rectangular bar
xyz = cuboid
0 = at very small time
1 = the first root or first eigenvalue
inf = at very large time

Copyright © M.M. Yovanovich. Published by the American Institute of Aeronautics and Astronautics, Inc. with permission. Professor and Director, Fellow AIAA.

Introduction

One-dimensional transient conduction solutions inside plates, infinite circular cylinders and spheres are presented in all heat transfer texts.
The governing equations for the three classical geometries are given in Cartesian, circular-cylinder and spherical coordinates. The initial and boundary conditions are specified. The dimensionless temperature history phi(zeta, Fo, Bi), a function of the three dimensionless parameters position (zeta), time (Fo) and boundary condition (Bi), is presented in symbolic form for the three geometries. All texts present the respective solutions in graphical form, frequently called the Heisler [1] cooling charts, for the temperature at the centerline (plate) or the origin (cylinder, sphere) as a function of Fo and Bi. Auxiliary charts are available for all off-center or off-origin points 0 <= zeta < 1. In addition, Heisler charts are presented for small time, Fo < 0.2. Grober et al. [2] introduced the charts for the total heat loss fraction Q/Q_i for the three geometries. These charts are presented in great detail in Luikov [3], Grigull and Sandner [4], and all heat transfer texts. Heisler [1], Luikov [3] and Grigull and Sandner [4] discuss the fact that the temperature and heat loss fraction charts can be computed with acceptable accuracy using the leading term of the respective series solutions. The leading term can be used for all values of Bi provided Fo >= Fo_c, where, according to Heisler [1], the critical Fourier number is approximately 0.2 for the infinite plate, the infinite circular cylinder and the sphere. For Fo < Fo_c more terms in the series solutions are required to give acceptable accuracy. Heisler [1] also noted that most of the total cooling time is accounted for by the single-term solution. Luikov [3] reports an early attempt to provide approximations for the calculation of the first roots (eigenvalues) of the characteristic equations for the three geometries. He showed graphically that when ln(delta_{1,inf}/delta_1) is plotted against ln(Bi), the points lie close to a straight line. A log-linear fit of the data for the three geometries leads to the correlation equation

\delta_{1,\infty}/\delta_1 = \sqrt{1 + C_1/\mathrm{Bi}^{C_2}}    (1)

where delta_{1,inf} is the value of delta_1 at Bi = infinity. The values of delta_{1,inf} are pi/2, 2.4048 and pi for the infinite plate, the infinite circular cylinder and the sphere respectively. Luikov [3] reports the correlation coefficients C_1 and C_2 for the plate, the cylinder and the sphere. This correlation equation for the first eigenvalues gives acceptable values only over a limited range of Bi; outside that range the errors are much greater than one percent, with the largest errors at intermediate values of Bi. It is unknown what errors are introduced into the computation of the Fourier coefficients A_1 and B_1 by the use of this correlation equation. Chen and Kuo [5] applied the heat balance integral method to obtain approximate solutions for the infinite plate and the infinite circular cylinder. These equations, which are reported in Chapman [6], can be evaluated by means of programmable calculators, and they are said to be accurate provided Fo > Fo_c. Since these equations are lengthy and involved, they will not be presented here. Several recently published heat transfer texts, Chapman [6], Bejan [7], Holman [8], Incropera and DeWitt [9], Mills [10] and White [11], recognize that the series solutions converge to the leading term for long times, i.e. Fo > 0.2, with errors of no more than a few percent.
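In symbols, the single-term approximation that these texts recommend, and that this paper makes explicit, can be written as a short LaTeX sketch; the 0.2 cutoff is the value commonly quoted in these texts, and A_1, B_1, delta_1 and S are defined by the equations that follow:

\[
\phi \;\approx\; A_1\, e^{-\delta_1^2\,\mathrm{Fo}}\, S(\delta_1 \zeta),
\qquad
\frac{Q}{Q_i} \;\approx\; 1 - B_1\, e^{-\delta_1^2\,\mathrm{Fo}},
\qquad
\mathrm{Fo} \;\ge\; \mathrm{Fo}_c \approx 0.2,
\]

where S is cos, J_0 or sin(u)/u for the plate, cylinder and sphere respectively.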
They present tables for the roots (eigenvalues) of the corresponding characteristic equations and the Fourier coefficients A_1 and B_1 that appear in the dimensionless temperature and heat loss fraction expressions. For values of Bi not given in the tables, it is necessary to employ interpolation methods. Once the eigenvalues are known, the evaluation of the Bessel functions J_0() and J_1() that appear in the characteristic equation for the circular cylinder, and of the Fourier-Bessel coefficients that appear in the temperature and heat loss expressions, must be considered. These calculations are tedious and prone to error, and unnecessary given the availability of programmable calculators and computers. There is, therefore, a need to develop simple, accurate equations for the computation of the first root of the characteristic equations for the three geometries. Secondly, there is a need to develop simple relationships for the accurate calculation of the Bessel functions that appear in the solutions for the infinite circular cylinder. These relationships will then be used to develop simple, accurate relationships for the calculation of the dimensionless temperature and the heat loss fraction for the three geometries for Fo >= Fo_c. Finally, by means of superposition, expressions will be developed for calculating the dimensionless temperature within composite bodies such as cuboids (Fig. 1), infinite rectangular bars (Fig. 2) and finite circular cylinders (Fig. 3). Also, by means of the Langston [12] relationships for the determination of the heat loss fraction of composite bodies, a method will be proposed for the simple but accurate calculation of the heat loss fraction from cuboids, infinite rectangular bars, infinite plates, and finite and infinitely long circular cylinders.

Fig. 1 The cuboid. Fig. 2 The infinite rectangular bar. Fig. 3 The finite circular cylinder.

Heisler Dimensionless Temperature Charts. The Heisler [1] dimensionless temperature charts that appear in all heat transfer texts were developed for the classic geometries: plate, infinite circular cylinder and sphere. Since the dimensionless temperature depends on three dimensionless parameters (Bi, Fo, zeta), where Bi is the Biot number, Fo is the Fourier number and zeta is the dimensionless position, it is not possible to show the temperature at any point within the solid at any arbitrary time on a single chart. The dimensionless temperature charts are based on the general solution

\phi = \sum_{n=1}^{\infty} A_n \exp(-\delta_n^2\,\mathrm{Fo})\, S(\delta_n \zeta)    (2)

where the A_n are the temperature Fourier coefficients, which depend on the boundary condition through Bi and on the initial condition; the delta_n are the eigenvalues, the positive roots of the characteristic equation; and S(delta_n zeta) is the position function.
For the three geometries the position function has the forms:

Plate: S_p = \cos(\delta_n \zeta)    (3)
Circular cylinder: S_c = J_0(\delta_n \zeta)    (4)
Sphere: S_s = \sin(\delta_n \zeta)/(\delta_n \zeta)    (5)

Fourier Coefficients for Temperature. The Fourier coefficients for temperature for the three geometries have the following forms:

Plate:
A_n = 2\sin\delta_n/(\delta_n + \sin\delta_n \cos\delta_n)    (6)
or
A_n = (-1)^{n+1}\, 2\,\mathrm{Bi}\,\sqrt{\mathrm{Bi}^2 + \delta_n^2} \,/\, [\delta_n(\mathrm{Bi}^2 + \mathrm{Bi} + \delta_n^2)]    (7)

Circular cylinder:
A_n = 2 J_1(\delta_n)/\{\delta_n [J_0^2(\delta_n) + J_1^2(\delta_n)]\}    (8)
or
A_n = 2\,\mathrm{Bi}/[(\delta_n^2 + \mathrm{Bi}^2) J_0(\delta_n)] = 2/\{\delta_n [1 + \delta_n^2/\mathrm{Bi}^2] J_1(\delta_n)\}    (9)

Sphere:
A_n = 2(\sin\delta_n - \delta_n\cos\delta_n)/(\delta_n - \sin\delta_n\cos\delta_n)    (10)
A_n = (-1)^{n+1}\, 2\,\mathrm{Bi}\,\sqrt{\delta_n^2 + (\mathrm{Bi}-1)^2} \,/\, (\delta_n^2 + \mathrm{Bi}^2 - \mathrm{Bi})    (11)

The eigenvalues that appear in the above relationships are the positive roots of the characteristic equations, which have the following forms for the three geometries:

Plate: x\sin x = \mathrm{Bi}\cos x    (12)
Circular cylinder: x J_1(x) = \mathrm{Bi}\, J_0(x)    (13)
Sphere: (1 - \mathrm{Bi})\sin x = x\cos x    (14)

Analysis of the Solutions: Bi -> 0. For these three geometries, for all dimensionless time Fo > 0, as Bi -> 0 the first Fourier coefficient A_1 -> 1 and all other Fourier coefficients A_n -> 0 for n >= 2. The first root of the three characteristic equations approaches zero in the following manner:

Plate: \delta_{1,p} \to \sqrt{\mathrm{Bi}}    (15)
Circular cylinder: \delta_{1,c} \to \sqrt{2\,\mathrm{Bi}}    (16)
Sphere: \delta_{1,s} \to \sqrt{3\,\mathrm{Bi}}    (17)

and the respective dimensionless temperature solutions become:

Plate: \phi_p = e^{-\mathrm{Bi}\,\mathrm{Fo}}    (18)
Circular cylinder: \phi_c = e^{-2\,\mathrm{Bi}\,\mathrm{Fo}}    (19)
Sphere: \phi_s = e^{-3\,\mathrm{Bi}\,\mathrm{Fo}}    (20)

which are seen to be particular cases of the general lumped-parameter solution

\phi = \exp[-h S t/(\rho c_p V)]    (21)

where S is the total active heat transfer surface of the geometry and V is its volume.

Bi -> 0 limit of the higher eigenvalues. In this limit the eigenvalues for the three geometries go to the following relationships:

Plate: \delta_{n,p} = (n-1)\pi, n >= 2    (22)

Circular cylinder: the \delta_{n,c} are the roots of J_1(x) = 0, approximated by \delta_{n,c} \approx \beta_1 - 3/(8\beta_1), with \beta_1 = (n + 1/4)\pi    (23). This relationship is a modification of the Stokes approximation (Abramowitz and Stegun [13]). It gives acceptable values of the roots of J_1() = 0 for all n >= 1; the largest error occurs at n = 1 and is much smaller for all n >= 2.

Sphere: there is no simple relationship for the eigenvalues for n >= 2. The eigenvalues are the roots of

x\cos x - \sin x = 0    (24)

Numerical methods are required to find the roots. The first nine roots are approximately: x_1 = 0, x_2 = 4.4934, x_3 = 7.7253, x_4 = 10.9041, x_5 = 14.0662, x_6 = 17.2208, x_7 = 20.3713, x_8 = 23.5195, x_9 = 26.6661. For very large n the roots approach x_n -> (2n-1)\pi/2.

Bi -> infinity limit. At this limit, the roots (eigenvalues) of the characteristic equations are given by the following relationships:

Plate: \delta_n = (2n-1)\pi/2, n = 1, 2, ...    (25)

Circular cylinder: the \delta_n are the roots of J_0(x) = 0, approximated by \delta_n \approx \beta_0 + 1/(8\beta_0), with \beta_0 = (n - 1/4)\pi, n = 1, 2, ...    (26). This relationship is a modification of the Stokes approximation (Abramowitz and Stegun [13]). It gives acceptable values of the roots of J_0() = 0 for all n >= 1; the largest error occurs at n = 1 and is much smaller for all n >= 2.
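The plate column of these formulas is easy to check numerically. A minimal Python sketch, assuming SciPy is available; the function names are mine, and the bracket (0, pi/2) for the first root follows from Eq. (12):

import numpy as np
from scipy.optimize import brentq

def plate_first_root(Bi):
    # First positive root of Eq. (12), x*sin(x) = Bi*cos(x); it lies in (0, pi/2).
    f = lambda x: x * np.sin(x) - Bi * np.cos(x)
    return brentq(f, 1e-9, np.pi / 2 - 1e-9)

def plate_one_term(Bi, Fo, zeta):
    # One-term dimensionless temperature from Eqs. (2), (3) and (6), for Fo >= Fo_c.
    d1 = plate_first_root(Bi)
    A1 = 2.0 * np.sin(d1) / (d1 + np.sin(d1) * np.cos(d1))
    return A1 * np.exp(-d1**2 * Fo) * np.cos(d1 * zeta)

print(plate_one_term(Bi=1.0, Fo=0.5, zeta=0.0))  # midplane value, about 0.77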
Sphere: \delta_n = n\pi, n = 1, 2, 3, ...    (27)

The Fourier coefficients for temperature in this limit are determined by means of the following expressions:

Plate: A_n = (-1)^{n+1}\, 4/[(2n-1)\pi], n = 1, 2, ...    (28)
Circular cylinder: A_n = 2/[\delta_n J_1(\delta_n)], n = 1, 2, ...    (29)
Sphere: A_n = (-1)^{n+1}\, 2, n = 1, 2, ...    (30)

Heat Loss Fraction Charts. The heat loss fraction Q/Q_i, where Q_i = \rho c_p V \theta_i is the initial total internal energy, depends on the boundary condition parameter Bi and the dimensionless time Fo in the following way:

Q/Q_i = 1 - \sum_{n=1}^{\infty} B_n \exp(-\delta_n^2\,\mathrm{Fo})    (31)

The Fourier coefficients B_n are given by the following relationships for the three geometries:

Plate: B_n = A_n \sin\delta_n/\delta_n = 2\,\mathrm{Bi}^2/[\delta_n^2(\mathrm{Bi}^2 + \mathrm{Bi} + \delta_n^2)]    (32)
Circular cylinder: B_n = 2 A_n J_1(\delta_n)/\delta_n = 4\,\mathrm{Bi}^2/[\delta_n^2(\delta_n^2 + \mathrm{Bi}^2)]    (33)
Sphere: B_n = 6\,\mathrm{Bi}^2/[\delta_n^2(\delta_n^2 + \mathrm{Bi}^2 - \mathrm{Bi})]    (34)

Analysis of the Heat Loss Coefficients. The heat loss coefficients B_n have particular values in the two limits Bi -> 0 and Bi -> infinity. In the first limit, all Fourier coefficients for the heat loss fraction are equal to zero for n >= 2. In the second limit they are given by:

Plate: B_n = 8/[(2n-1)^2\pi^2], n = 1, 2, ...    (35)
Circular cylinder: B_n = 4/\delta_n^2, n = 1, 2, ..., where the \delta_n are the roots of J_0(\delta_n) = 0 given above    (36)
Sphere: B_n = 6/(n^2\pi^2), n = 1, 2, ...    (37)

Clearly the heat loss coefficients are easily computed as Bi -> infinity, and therefore the heat loss fraction can be determined without difficulty.

Numerical Solutions. Accurate numerical results for the three geometries can be obtained easily by means of a Computer Algebra System such as Maple [14], MathCAD [15], Mathematica [16] or MATLAB [17]. The proposed solutions and procedure can also be implemented in spreadsheets. Maple V worksheets for the plate, infinite circular cylinder and the sphere are presented in the Appendix. For each geometry the input parameters are Bi, Fo, zeta and N, where N is the number of terms in the partial sum; it can be set to any integer value N > 1. In each worksheet, the first five inputs are: i) the definition of the characteristic equation, ii) the definition of the Fourier coefficient for temperature, iii) the definition of the Fourier coefficient for heat loss fraction, iv) the definition of the dimensionless temperature, and v) the definition of the heat loss fraction. The next five inputs create lists of the N: i) eigenvalues, ii) coefficients A_n, iii) coefficients B_n, iv) dimensionless temperature terms phi_n, and v) heat loss fraction terms (Q/Q_i)_n. The last two inputs give the dimensionless temperature and the heat loss fraction for the given values of Bi, Fo and zeta.

Approximations of Bessel Functions. There are several methods that can be used to compute the Bessel functions J_0(x) and J_1(x). Polynomial approximations (Abramowitz and Stegun [13]) are available for all positive values of x for both Bessel functions. These polynomial approximations involve many terms that require space, and they are somewhat difficult to implement in spreadsheets or in a programmable calculator. The following expression, based on the application of the trapezoidal rule to the integral form of J_nu(x) for arbitrary order, was developed by means of the Maple function trapezoid, which is found in the student package:

J_\nu(x) = \frac{1}{2N} + \frac{\cos(\nu\pi)}{2N} + \frac{1}{N}\sum_{i=1}^{N-1} \cos\!\left[x\sin\!\left(\frac{i\pi}{N}\right) - \frac{\nu i\pi}{N}\right]    (38)

where N is the number of panels and nu is the order of the Bessel function.
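Equation (38) is straightforward to verify against a library implementation. A small Python sketch of the same trapezoidal sum, with SciPy's jv used only as a reference; the function name is mine:

import numpy as np
from scipy.special import jv  # library values, used here only for comparison

def bessel_trap(nu, x, N=8):
    # Trapezoidal rule for J_nu(x) = (1/pi) * integral_0^pi cos(x*sin(t) - nu*t) dt,
    # i.e. Eq. (38): the endpoint terms are 1 at t = 0 and cos(nu*pi) at t = pi.
    i = np.arange(1, N)
    interior = np.cos(x * np.sin(i * np.pi / N) - nu * i * np.pi / N).sum()
    return (1.0 + np.cos(nu * np.pi)) / (2.0 * N) + interior / N

for x in (0.5, 1.5, 2.4):
    print(x, bessel_trap(0, x), jv(0, x))  # the two columns agree closely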
From this general expression one can develop expressions for both J_0(x) and J_1(x) for different ranges of x and for different accuracies. For the range of x that covers the first root over the entire range of Bi, a small number of panels in the above expression suffices to obtain the following accurate expressions for the two Bessel functions:

J_0(x) = \frac{1}{N}\left[1 + \sum_{i=1}^{N-1} \cos\!\left(x\sin\frac{i\pi}{N}\right)\right]    (39)

J_1(x) = \frac{1}{N}\sum_{i=1}^{N-1} \cos\!\left(x\sin\frac{i\pi}{N} - \frac{i\pi}{N}\right)    (40)

Explicit Solutions of the Characteristic Equations. The observations regarding the relationship between the first root delta_1 of the three characteristic equations and Bi reported by Luikov [3], together with the results of the analysis of the theoretical solutions presented above, suggest that it is possible to use the asymptotic values delta_{1,0} and delta_{1,inf}, corresponding to Bi -> 0 and Bi -> infinity respectively, to develop interpolation functions for delta_1 that are accurate for all values of Bi. Plotting the ratio delta_{1,inf}/delta_1 versus Bi gives a function that varies smoothly between the asymptote delta_{1,inf}/delta_{1,0} for Bi -> 0 and the asymptote 1 for Bi -> infinity. Based on these observations, the explicit solutions (first eigenvalues) of the characteristic equations for the three geometries can be written in the general form

\delta_1 = \frac{\delta_{1,\infty}}{\left[1 + (\delta_{1,\infty}/\delta_{1,0})^n\right]^{1/n}}    (41)

This form is based on the method first proposed by Churchill and Usagi [18]. The approximate explicit solution always gives very accurate values for very small and very large values of Bi, independent of the value of the parameter n. To obtain accurate values for intermediate values of Bi it is necessary to find appropriate values of n. Suitable geometry-specific values of n for the plate, cylinder and sphere provide values of delta_1 that differ by less than one percent from the exact values. This accuracy is acceptable for most applications. To develop more accurate solutions for the intermediate range it may be necessary to find relationships between the parameter n and Bi for each geometry.
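A sketch of Eq. (41) for the plate, with the asymptotes taken from Eqs. (15) and (25). The exponent n = 2.2 below is a placeholder of my own, not the paper's fitted value, so treat the output as illustrative:

import numpy as np
from scipy.optimize import brentq

D_INF = np.pi / 2                  # plate first root as Bi -> infinity, Eq. (25)
d_small = lambda Bi: np.sqrt(Bi)   # plate first root as Bi -> 0, Eq. (15)

def delta1_blend(Bi, n=2.2):
    # Churchill-Usagi form of Eq. (41); n = 2.2 is a stand-in exponent.
    return D_INF / (1.0 + (D_INF / d_small(Bi)) ** n) ** (1.0 / n)

def delta1_exact(Bi):
    # Exact first root of Eq. (12), for comparison.
    return brentq(lambda x: x * np.sin(x) - Bi * np.cos(x), 1e-9, np.pi / 2 - 1e-9)

for Bi in (0.1, 1.0, 10.0):
    print(Bi, delta1_blend(Bi), delta1_exact(Bi))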
Temperature and Heat Loss Fraction for Composite Geometries. The basic solutions given above can be used to develop solutions for composite geometries such as cuboids and finite circular cylinders with convection cooling at all boundary surfaces. Since the cuboid solution is based on the superposition of three plate solutions, it reduces to the infinite rectangular bar solution and the infinite plate solution. The dimensionless temperature and heat loss fraction solutions for the cuboid and the finite circular cylinder are presented in the following sections. For the general case of a cuboid (Fig. 1), -X <= x <= X, -Y <= y <= Y, -Z <= z <= Z, cooled at its perpendicular faces x = +/-X, y = +/-Y, z = +/-Z through uniform heat transfer coefficients h_x, h_y, h_z, there are three Biot numbers to consider, Bi_x, Bi_y, Bi_z, and the cooling is also characterized by three Fourier numbers, Fo_x, Fo_y, Fo_z.

Dimensionless Temperature for Cuboids. The dimensionless temperature at any point within the cuboid at arbitrary time can be obtained as the product of the solutions for three infinite plates:

\phi_{xyz}(x, y, z, t) = \phi_{x,p}\,\phi_{y,p}\,\phi_{z,p}    (42)

where the basic infinite plate solution given by Eq. (2) is used three times:

\phi_{x,p} = A_{1,x}\exp(-\delta_{1,x}^2\,\mathrm{Fo}_x)\cos(\delta_{1,x}\, x/X)    (43)
\phi_{y,p} = A_{1,y}\exp(-\delta_{1,y}^2\,\mathrm{Fo}_y)\cos(\delta_{1,y}\, y/Y)    (44)
\phi_{z,p} = A_{1,z}\exp(-\delta_{1,z}^2\,\mathrm{Fo}_z)\cos(\delta_{1,z}\, z/Z)    (45)

The corresponding eigenvalues delta_{1,x}, delta_{1,y}, delta_{1,z} depend on the respective Biot numbers Bi_x, Bi_y, Bi_z. The Fourier coefficients A_{1,x}, A_{1,y}, A_{1,z} are determined according to Eqs. (6) or (7), and the eigenvalues are calculated by means of the general explicit relationship (41) developed for delta_1.

Rectangular Plates. The dimensionless temperature and heat loss fraction for rectangular plates or bars (Fig. 2), -X <= x <= X, -Y <= y <= Y, is a special case of the cuboid solution. Here two Biot numbers, Bi_x and Bi_y, and two Fourier numbers, Fo_x and Fo_y, are required to characterize the cooling. The dimensionless temperature at any point within the infinite rectangular plate at arbitrary time can be obtained as the product of the solutions for two infinite plates:

\phi_{xy}(x, y, t) = \phi_{x,p}\,\phi_{y,p}    (46)

where the basic infinite plate solution given by Eq. (2) is used twice, as in Eqs. (43) and (44).

Heat Loss Fraction for Cuboids. The heat loss fraction can be determined by means of the relationship developed by Langston [12]:

(Q/Q_i)_{xyz} = (Q/Q_i)_x + (Q/Q_i)_y\,[1 - (Q/Q_i)_x] + (Q/Q_i)_z\,[1 - (Q/Q_i)_x][1 - (Q/Q_i)_y]    (49)

where Q_i = 8\rho c_p XYZ\theta_i.

Heat Loss Fraction from Rectangular Plates. The heat loss fraction from a rectangular plate, -X <= x <= X, -Y <= y <= Y, is a special case of the cuboid solution, obtained from the following relationship (Langston [12]):

(Q/Q_i)_{xy} = (Q/Q_i)_x + (Q/Q_i)_y - (Q/Q_i)_x\,(Q/Q_i)_y    (50)

where Q_i = 4\rho c_p XY\theta_i per unit length. In the above relationships the component heat loss fractions are obtained by means of the following expressions:

(Q/Q_i)_x = 1 - B_{1,x}\exp(-\delta_{1,x}^2\,\mathrm{Fo}_x)    (51)
(Q/Q_i)_y = 1 - B_{1,y}\exp(-\delta_{1,y}^2\,\mathrm{Fo}_y)    (52)
(Q/Q_i)_z = 1 - B_{1,z}\exp(-\delta_{1,z}^2\,\mathrm{Fo}_z)    (53)

The corresponding eigenvalues delta_{1,x}, delta_{1,y}, delta_{1,z} depend on the respective Biot numbers Bi_x, Bi_y, Bi_z.

Finite Circular Cylinders. The dimensionless temperature and the heat loss fraction for a finite circular cylinder (Fig. 3), 0 <= r <= R, -X <= x <= X, are based on the superposition of the infinite circular cylinder and infinite plate solutions. Here two Biot numbers, Bi_p = h_x X/k and Bi_c = h_r R/k, and two Fourier numbers, Fo_p = alpha*t/X^2 and Fo_c = alpha*t/R^2, are required to characterize the cooling. The heat transfer coefficients are identical over the two ends x = +/-X, but may differ from the side heat transfer coefficient h_r.

Dimensionless Temperature for Finite Circular Cylinders. The dimensionless temperature at any point within the finite circular cylinder at arbitrary time can be obtained as the product of the solutions for the infinite circular cylinder and the infinite plate:

\phi_{cp}(r, x, t) = \phi_c\,\phi_p    (54)

where

\phi_c = A_{1,c}\exp(-\delta_{1,c}^2\,\mathrm{Fo}_c)\, J_0(\delta_{1,c}\, r/R)    (55)
\phi_p = A_{1,p}\exp(-\delta_{1,p}^2\,\mathrm{Fo}_p)\cos(\delta_{1,p}\, x/X)    (56)

Heat Loss Fraction for Finite Circular Cylinders. The heat loss fraction is obtained from the following relationship (Langston [12]):

(Q/Q_i)_{cp} = (Q/Q_i)_c + (Q/Q_i)_p - (Q/Q_i)_c\,(Q/Q_i)_p    (57)

where Q_i = 2\pi\rho c_p XR^2\theta_i.
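Langston's combination rules (49), (50) and (57) compose one-term plate and cylinder building blocks. A Python sketch of the infinite rectangular bar case, Eq. (50), reusing the plate expressions above; the parameter values are illustrative and the function names are mine:

import numpy as np
from scipy.optimize import brentq

def plate_heat_loss(Bi, Fo):
    # One-term (Q/Q_i) for an infinite plate: Eqs. (12), (6), (32) and (51).
    d1 = brentq(lambda x: x * np.sin(x) - Bi * np.cos(x), 1e-9, np.pi / 2 - 1e-9)
    A1 = 2.0 * np.sin(d1) / (d1 + np.sin(d1) * np.cos(d1))
    B1 = A1 * np.sin(d1) / d1
    return 1.0 - B1 * np.exp(-d1**2 * Fo)

def bar_heat_loss(Bi_x, Fo_x, Bi_y, Fo_y):
    # Langston combination for the infinite rectangular bar, Eq. (50).
    qx, qy = plate_heat_loss(Bi_x, Fo_x), plate_heat_loss(Bi_y, Fo_y)
    return qx + qy - qx * qy

print(bar_heat_loss(1.0, 0.5, 2.0, 0.3))  # illustrative Bi and Fo values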
In the above expressions the component heat loss fractions are obtained by means of the following expressions:

(Q/Q_i)_c = 1 - B_{1,c}\exp(-\delta_{1,c}^2\,\mathrm{Fo}_c)    (58)
(Q/Q_i)_p = 1 - B_{1,p}\exp(-\delta_{1,p}^2\,\mathrm{Fo}_p)    (59)

The corresponding eigenvalues delta_{1,c} and delta_{1,p} depend on the respective Biot numbers Bi_c and Bi_p.

Summary. Simple, explicit and accurate expressions were developed for the calculation of the first roots of the characteristic equations for infinite plates, infinite circular cylinders and spheres for all values of the Biot number. Accurate polynomial expressions, which are modifications of the Stokes approximations, are proposed for the roots of the Bessel functions J_0() = 0 and J_1() = 0. Simple expressions based on the application of the trapezoidal rule to the integral form of the Bessel function of the first kind of arbitrary order nu are presented. These expressions are expanded in terms of trigonometric functions, which are easily computed in spreadsheets and with programmable calculators. Maple V worksheets are presented in the Appendix for the accurate calculation of the dimensionless temperature and dimensionless heat loss for plates, cylinders and spheres for all values of the Biot number and any value of the Fourier number that is not too small. Simple single-term expressions are given for the accurate calculation of the dimensionless temperature and the dimensionless heat loss fraction for infinite plates, infinite cylinders and spheres. These single-term expressions are used to develop expressions, based on superposition and the method of Langston [12], for the calculation of the dimensionless temperature and the dimensionless heat loss fraction of bodies such as finite circular cylinders, cuboids and infinite rectangular bars. The proposed expressions are simple and accurate provided Fo >= Fo_c. They should replace the tabular method currently presented in all heat transfer texts, which itself effectively replaces the Heisler [1] and Grober [2] charts.

Acknowledgements. The author acknowledges the continued support of the Canadian Natural Sciences and Engineering Research Council. The assistance of P. Teertstra and Y. Muzychka in the preparation of the figures and plots is greatly appreciated.

References
[1] Heisler, M.P., "Temperature Charts for Induction and Constant-Temperature Heating," Trans. ASME, Vol. 69, 1947, pp. 227-236.
[2] Grober, H., Erk, S., and Grigull, U., Fundamentals of Heat Transfer, McGraw-Hill Book Company, Inc., New York, 1961.
[3] Luikov, A.V., Analytical Heat Diffusion Theory, Academic Press, New York, 1968.
[4] Grigull, U., and Sandner, H., Heat Conduction, Hemisphere Publishing Corporation, New York, 1984.
[5] Chen, R.Y., and Kuo, T.L., "Closed Form Solutions for Constant Temperature Heating of Solids," Mech. Eng. News.
[6] Chapman, A.J., Fundamentals of Heat Transfer, Macmillan Publishing Company, New York, 1987.
[7] Bejan, A., Heat Transfer, John Wiley & Sons, New York, 1993.
[8] Holman, J.P., Heat Transfer, Seventh Edition, McGraw-Hill Publishing Company, New York, 1990.
[9] Incropera, F.P., and DeWitt, D.P., Introduction to Heat Transfer, 3rd ed., John Wiley & Sons, 1996.
[10] Mills, A.F., Heat Transfer, Richard D. Irwin, Inc., 1992.
[11] White, F.M., Heat and Mass Transfer, Addison-Wesley, Reading, MA, 1988.
[12] Langston, L.S., "Heat Transfer from Multidimensional Objects Using One-Dimensional Solutions for Heat Loss," Int. J.
Heat Mass Transfer, Vol. 25, 1982, pp. 149-150.
[13] Abramowitz, M., and Stegun, I.A., Handbook of Mathematical Functions with Formulas, Graphs, and Mathematical Tables, Dover Publications, Inc., New York, 1965.
[14] Maple, Waterloo Maple Software, Waterloo, ON, Canada.
[15] MathCAD, MathSoft, Inc.
[16] Mathematica, Wolfram Research, Inc., Champaign, IL.
[17] MATLAB, The MathWorks, Inc., Natick, MA.
[18] Churchill, S.W., and Usagi, R., "A General Expression for the Correlation of Rates of Transfer and Other Phenomena," AIChE J., Vol. 18, 1972, pp. 1121-1128.

Appendix: Maple V Worksheets. The Maple [14] worksheets for the infinite plate, the infinite circular cylinder and the sphere are presented here. They are valid for any value of zeta and for all values of Bi. The worksheets can handle small dimensionless times by increasing the number of terms in the summation. The input parameters for each worksheet are Bi, Fo, zeta and N, where N is the number of terms in the summation; a modest value of N suffices for most problems. The numeric inputs below are example values, marked as such in the code.

Maple Worksheet for Plates

restart:
case := (Bi = 0.5, Fo = 1, zeta = 0, N = 20):   # example input values
ce := x*sin(x) - Bi*cos(x) = 0:
A := 2*sin(x)/(x + sin(x)*cos(x)):
B := A*sin(x)/x:
phi := A*exp(-x^2*Fo)*cos(x*zeta):
Q_Qi := B*exp(-x^2*Fo):
xvals := [seq(fsolve(subs(case, ce), x = j*Pi..(j + 1/2)*Pi), j = 0..rhs(case[4]))]:
As := evalf([seq(subs(x = xvals[j], A), j = 1..nops(xvals))]):
Bs := evalf([seq(subs(x = xvals[j], B), j = 1..nops(xvals))]):
phis := [evalf(seq(subs(case, x = xvals[j], phi), j = 1..nops(xvals)))]:
Q_Qis := [evalf(seq(subs(case, x = xvals[j], Q_Qi), j = 1..nops(xvals)))]:
plate_temp := evalf(convert([seq(phis[j], j = 1..nops(xvals))], `+`), 5);
plate_heat_loss := evalf(1 - convert([seq(Q_Qis[j], j = 1..nops(xvals))], `+`), 5);

Maple Worksheet for Cylinders

restart:
case := (Bi = 0.5, Fo = 1, zeta = 0, N = 20):   # example input values
ce := x*BesselJ(1, x) - Bi*BesselJ(0, x) = 0:
A := 2*BesselJ(1, x)/(x*(BesselJ(0, x)^2 + BesselJ(1, x)^2)):
B := 2*A*BesselJ(1, x)/x:
phi := A*exp(-x^2*Fo)*BesselJ(0, x*zeta):
Q_Qi := B*exp(-x^2*Fo):
xvals := [seq(fsolve(subs(case, ce), x = (j - 1)*Pi..j*Pi), j = 1..rhs(case[4]))]:
As := evalf([seq(subs(x = xvals[j], A), j = 1..nops(xvals))]):
Bs := evalf([seq(subs(x = xvals[j], B), j = 1..nops(xvals))]):
phis := [evalf(seq(subs(case, x = xvals[j], phi), j = 1..nops(xvals)))]:
Q_Qis := [evalf(seq(subs(case, x = xvals[j], Q_Qi), j = 1..nops(xvals)))]:
cylinder_temp := evalf(convert([seq(phis[j], j = 1..nops(xvals))], `+`), 5);
cylinder_heat_loss := evalf(1 - convert([seq(Q_Qis[j], j = 1..nops(xvals))], `+`), 5);

Maple Worksheet for Spheres

# Bi cannot be set to 1 and zeta cannot be set to 0.
# For Bi = 1, put Bi = 1.000001, and for zeta = 0, put zeta = 0.000001.
restart:
case := (Bi = 0.5, Fo = 1, zeta = 0.000001, N = 20):   # example input values
ce := x*cos(x) - (1 - Bi)*sin(x) = 0:
A := 2*(sin(x) - x*cos(x))/(x - sin(x)*cos(x)):
B := 3*A*(sin(x) - x*cos(x))/x^3:
phi := A*exp(-x^2*Fo)*sin(x*zeta)/(x*zeta):
Q_Qi := B*exp(-x^2*Fo):
xvals := [seq(fsolve(subs(case, ce), x = (j - 1)*Pi..j*Pi), j = 1..rhs(case[4]))]:
As := evalf([seq(subs(x = xvals[j], A), j = 1..nops(xvals))]):
Bs := evalf([seq(subs(x = xvals[j], B), j = 1..nops(xvals))]):
phis := [evalf(seq(subs(case, x = xvals[j], phi), j = 1..nops(xvals)))]:
Q_Qis := [evalf(seq(subs(case, x = xvals[j], Q_Qi), j = 1..nops(xvals)))]:
sphere_temp := evalf(convert([seq(phis[j], j = 1..nops(xvals))], `+`), 5);
sphere_heat_loss := evalf(1 - convert([seq(Q_Qis[j], j = 1..nops(xvals))], `+`), 5);
2442
https://www.mathworksheetsland.com/6/3realworld.html
Math Worksheets Land - Math Worksheets For All Ages

Ratio and Rates Word Problems Worksheets

When we work with these types of word problems, we take a slightly different approach from how we solve other problems. With your average everyday word problem, you spend a good deal of time determining which type of mathematical operation is needed to satisfy the question. These problems are stated entirely differently: they exhibit a relationship between two values, and we are then asked to examine this relation in a specific situation. This is a very helpful strategy to learn because ratio and rate word problems are found often in the real world, and many substantial career paths are built on the back of this technique. Anyone who is in charge of purchasing items and cutting costs for a company needs to be very good at this skill. These worksheets and lessons will help students become very comfortable tackling word problems that are built off of a rate or ratio.

Aligned Standard: Grade 6 Proportional Relationships - 6.RP.A.3a

On the Hunt Step-by-step Lesson - I use the subtle word "catching" rather than "hunting" just in case younger students are very advanced.
Guided Lesson - Hello ratio tables! We meet again! We also look into equivalent ratios.
Guided Lesson Explanation - Ratio tables seem to help out in all of these problems.
Practice Worksheet - A three-pronged approach: ratio tables, word problems, and we get the trifecta with equivalent proportions.
Ratio Word Problems 5 Pack - Relatively straightforward questions are found in this pack.
Ratio Word Problem Five Pack (Harder) - Slightly more difficult problems in this pack.
Matching Worksheet - We match equivalent ratios.
Answer Keys - These are for all the unlocked materials above.

Homework Sheets. Complete with word problems, ratio tables, and numeric fixed ratios.
Homework 1 - Liam and Noah went on a tour. On the first day Liam ate 3 burgers and Noah ate 5 slices of pizza. On the second day Liam ate 4 burgers and Noah ate 3 slices of pizza. On which day of the tour did Liam and Noah eat a higher ratio of burgers to pizza?
Homework 2 - Complete the ratio tables in section b.
Homework 3 - Circle the two ratios that are equal. We can see that the numbers can easily be reduced. Let's reduce them all to see which pair is equal.

Practice Worksheets. I focus this section on getting students to master the use of cross multiplication.
Practice 1 - Are these ratios equivalent? You will review a number of different scenarios.
Practice 2 - Find the equal ratios. You will have to choose between four groups.
Practice 3 - Complete the ratio tables. You have three given values that lead you to determine the missing value.

Math Skill Quizzes. We test all forms of the skill to give students a complete picture of where they are at.
Quiz 1 - Are these ratios equivalent? It is a simple Yes or No activity: 9 boxes to 9 students; 12 boxes to 24 students.
Quiz 2 - Complete the missing values for each ratio table. This one has the values jump around.
Quiz 3 - A mixed review. This will have you determine the unit rate to complete the remainder of the exercise.

What Is the Difference Between a Ratio and a Rate? Both "ratio" and "rate" are concepts we learn in mathematics.
These terms are very common in statistics and business too. While both specify a relationship between two or more quantities, they are far apart in their actual meaning and usage. Let's discuss both in detail to identify the difference between the two.

"Rate" is used to express the amount, quantity or frequency with which a certain event occurs. It is commonly expressed as the number of times something happens per thousand of the total population. It compares two measurements of different units and signifies how much time it takes to accomplish something, for example distance per unit time: 40 miles/hour.

"Ratio," on the other hand, is the relationship or connection between similar quantities, such as their number, amount, size, or the extent or degree of two or more of them. It is the proportion of one thing in comparison with another. A ratio explains the correlation of one thing with another: things, people or units. Consider this example: "Emma has five apples while Tom has ten." The ratio of Emma's apples to Tom's is 5:10, which can also be expressed as 1:2. This ratio can be applied to future situations and help you predict future outcomes.

Example Problem from This Section: A farmer can harvest 4 acres of corn in 5 hours with his corn harvester tractor. If he has 160 acres of corn to farm, how long will it take him to harvest all that corn?

Solution: There is a ratio of acres of corn to hours of time (4 acres : 5 hours). The unknown variable is the time needed. Writing the unknown amount of time with the known amount of land in the same format gives (160 acres : x hours). To solve this, we set up the proportion 4 : 5 = 160 : x, cross multiply, and end up with x = 200 hours as our answer.
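The cross-multiplication step in the farmer problem is mechanical enough to script. A small Python sketch; the function name is mine:

from fractions import Fraction

def solve_proportion(a, b, c):
    # Solve a : b = c : x by cross-multiplication: a*x = b*c, so x = b*c/a.
    return Fraction(b) * Fraction(c) / Fraction(a)

# The farmer problem above: 4 acres : 5 hours = 160 acres : x hours
print(solve_proportion(4, 5, 160))  # 200 (hours)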
2443
https://www.quora.com/What-is-the-mass-density-of-water-What-is-the-weight-density-of-water
What is the mass density of water? What is the weight density of water? - Quora

Manoj Mandal, works at Geological Survey of India (GSI):
Mass density = mass of water contained in unit volume. In SI units, unit volume is 1 m^3. When water at 4 degrees C fills a container of 1 m^3, its mass is found to be 1000 kg; in force terms, that is 9810 N or 9.81 kN. Thus, the mass density of water at 4 degrees C is 1000 kg/m^3, and the weight density is 9.81 kN/m^3. Please note that weight density depends on g, whereas mass density does not. So, when measured on the Moon, the mass density of water remains the same, but the weight density is only about 1.64 kN/m^3.
Trevor Hodgson, former Project Manager (1975-2000):
At its maximum density, mass density = 1 g/cm^3, said as: one gram per cubic centimetre, although the correct SI units would be 1,000 kg/m^3 (kilograms per cubic metre). Weight density: I answer this from basic principles, because I do not know this term. Weight density = 0.00981 N/cm^3, said as: zero point zero zero nine eight one newtons per cubic centimetre. The SI unit would be 9.81 kN/m^3 (nine point eight one kilonewtons per cubic metre). Note that the SI unit of mass is the kilogram (kg) and the SI unit of weight is the newton (N).

Assistant Bot:
The mass density of water is typically about 1,000 kg/m^3 (or 1 g/cm^3) at standard temperature and pressure (approximately 4 degrees C). The weight density (or specific weight) of water can be calculated using the formula: weight density = mass density x g, where g is the acceleration due to gravity, approximately 9.81 m/s^2. Thus, the weight density of water is approximately 1,000 kg/m^3 x 9.81 m/s^2 = 9,810 N/m^3. In summary: mass density of water ~1,000 kg/m^3; weight density of water ~9,810 N/m^3.

Pierre Rorers, former aviation engineer at Civil Aviation Authority:
What is written here is true in part and false in part. It is true that the density of water is highest at 4 degrees C; warm water is less heavy and does not sink into the depths of the ocean; otherwise, cold water would stay at the surface. Since the heaviest water stays at 4 degrees, the bottom of lakes seldom freezes, thus leaving room for life, fishes and so on.
However, the actual reason ice floats is that the water molecule is polar. The consequence of this polarity is that, when freezing, molecules stack in hexagonal patterns which leave tiny voids between them, thus taking more space than in liquid water, where these voids do not exist. There is an easy way to see that water molecules are polar. Just open a kitchen faucet with a tiny but continuous stream. Then take a plastic ballpoint pen and rub it vigorously against wool, or any garment, to charge it with static electricity. When the pen is loaded with static electricity, bring it slowly close to the stream of water. You will see the stream magically bend towards the pen. This is because the water molecules are polar, generating an electric field between the two atoms of hydrogen and the atom of oxygen.

Peter Edwards, studied Environmental Science and Psychology:
Density is expressed as kilograms per cubic metre. For water it is 1000 kg/m^3. Weight density is not an accepted term in SI units (Systeme International), but in layman's terms it would have to be calculated using the local force of gravity, and expressed as newtons per cubic metre.

David Simpson:
Water has its highest density, 1000 kg/m^3 or 1.940 slug/ft^3, at temperature 4 degrees C (= 39.2 degrees F).

John Fellows, former Chief Oil Taster at the robot factory:
1000 kg/m^3; but also, if you wait in Denver for long enough, the city will go dark and the rising sun will be there a few hours later.
Feborina' Tiger:
The density of water is the mass of the water per unit volume, which depends on the temperature of the water. The usual value used in calculations is 1 gram per milliliter (1 g/ml) or 1 gram per cubic centimeter (1 g/cm^3).

Aakash Aeghan, Physics. Related: What is the density of water?
The density of water is the mass of the water per unit volume (usually g/cm^3), which depends on the temperature of the water. The usual value used in calculations is 1 gram per milliliter or 1 gram per cubic centimeter. While you can round the density to 1 gram per milliliter, more precise values are given below; the density of pure water is actually somewhat less than 1 g/cm^3.
Temp (°C) | Density (kg/m^3)
+100 | 958.4
+80 | 971.8
+60 | 983.2
+40 | 992.2
+30 | 995.6502
+25 | 997.0479
+22 | 997.7735
+20 | 998.2071
+15 | 999.1026
+10 | 999.7026
+4 | 999.9720
0 | 999.8395
-10 | 998.117
-20 | 993.547
-30 | 983.854

The maximum density of water occurs around 4 degrees Celsius. Ice is less dense than liquid water, so it floats.

Anish Gurung:
1000 kg/m^3.

Harsh Singhal, studied Biology and Physics. Related: What is the density of water in lb/ft^3?
Density of water at 4 degrees C = 1000 kg/m^3. Since 1 kg = 2.204623 lb and 1 m = 3.28084 ft, density = 1000 x 2.204623 lb / (3.28084 ft)^3 = 62.43 lb/ft^3.

Richard, BFA from Beloit College. Related: How is the density of water?
Your question is unanswerable as phrased. Is English not your native tongue? That is perfectly all right if it is not; I merely wish to know what you are trying to find out. Are you asking how the density of water compares with that of other liquids, say mercury? Water has a density of 1.0 gram per milliliter while mercury has a density of 13.5 grams per milliliter, which makes mercury much heavier than an equivalent volume of water.

Brian Brady, Ph.D. in Chemistry, Columbia University. Related: What is the maximum density of water?
Water is at its maximum density at 4 degrees C. Below that temperature it starts to align in a way that will eventually become ice crystals. Above that temperature, thermal motion takes molecules further apart.
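The arithmetic running through these answers is a one-liner each. A Python sketch collecting them, taking g = 9.81 m/s^2 and a lunar surface gravity of about 1.62 m/s^2 as assumed values:

G_EARTH = 9.81       # m/s^2
G_MOON = 1.62        # m/s^2, assumed lunar surface gravity
RHO_WATER = 1000.0   # kg/m^3, mass density near 4 degrees C

# Weight density (specific weight) = mass density * g
print(RHO_WATER * G_EARTH)  # 9810 N/m^3, i.e. 9.81 kN/m^3
print(RHO_WATER * G_MOON)   # about 1620 N/m^3; mass density is unchanged

# The lb/ft^3 conversion worked out above
KG_TO_LB = 2.204623
M_TO_FT = 3.28084
print(RHO_WATER * KG_TO_LB / M_TO_FT**3)  # about 62.43 lb/ft^3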
2444
https://brainly.com/question/34795067
If the accelerating voltage in an X-ray tube is doubled, the minimum X-ray wavelength is multiplied by a factor of: A. 1  B. 1/2  C. 2  D. 1/4

Asked by ShawnaLove7963, 07/12/2023

Community Answer (answered by mmrizvi034):
When the accelerating voltage in an X-ray tube is doubled, the minimum X-ray wavelength is multiplied by a factor of 1/2. X-ray tubes are devices used to generate X-rays for various applications, including medical imaging, material analysis, and scientific research. These tubes operate by accelerating electrons through an electric potential difference; X-rays are emitted when the electrons strike a target material. The minimum wavelength of the X-rays produced in an X-ray tube is inversely proportional to the accelerating voltage. When the accelerating voltage is doubled, the electrons gain twice the kinetic energy by the time they hit the target. The most energetic photon an electron can emit carries all of that kinetic energy, so the maximum photon energy doubles; and since a photon's energy is inversely related to its wavelength, the minimum X-ray wavelength is halved. This relationship is crucial in understanding X-ray production and its applications in medicine, industry, and research. Learn more about X-ray wavelength: brainly.com/question/24028626

Expert-Verified Answer:
When the accelerating voltage in an X-ray tube is doubled, the minimum X-ray wavelength decreases to half of its previous value. This is because the energy of the produced X-rays increases with voltage, leading to shorter wavelengths. Thus, the correct answer is B. 1/2.

Explanation. When the accelerating voltage in an X-ray tube is doubled, the minimum X-ray wavelength is halved. This relationship can be explained by understanding how X-rays are generated in these tubes.
Understanding X-rays: X-rays are a form of electromagnetic radiation produced when high-energy electrons collide with a target material. The energy the electrons gain is determined by the voltage applied across the X-ray tube.

Effect of voltage on energy: the energy of the electrons is given by the equation E = eV, where E is the energy, e is the charge of the electron, and V is the accelerating voltage. Thus, if the voltage V is doubled, the energy E of the electrons also doubles.

Wavelength and energy relationship: the energy of the emitted X-ray photons is related to their wavelength via E = hc/\lambda, where h is Planck's constant, c is the speed of light, and \lambda is the wavelength. Thus, if the energy is doubled, the wavelength is halved, giving the new wavelength \lambda' = hc/(2E) = \lambda_{initial}/2.

Conclusion: when the accelerating voltage in an X-ray tube is doubled, the minimum X-ray wavelength is multiplied by a factor of 1/2. This means the correct answer to the question is option B. 1/2.

Examples & Evidence: if the initial accelerating voltage produces X-rays with a minimum wavelength of 1 angstrom (10^-10 meters), doubling the voltage would reduce the wavelength to 0.5 angstrom (5 x 10^-11 meters). This showcases how changes in voltage directly affect X-ray wavelengths. The relationship between the energy of X-ray photons and their wavelength, as derived from these fundamental physics equations, clearly supports the assertion that doubling the accelerating voltage halves the minimum wavelength.
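The Duane-Hunt relation behind both answers is easy to evaluate. A Python sketch; the 30 kV operating point is an illustrative choice of mine, not from the question:

H = 6.62607015e-34    # Planck constant, J s
C = 2.99792458e8      # speed of light, m/s
E = 1.602176634e-19   # elementary charge, C

def lambda_min(voltage):
    # Duane-Hunt limit: the whole electron energy e*V goes into a single photon,
    # so e*V = h*c/lambda_min, giving lambda_min = h*c/(e*V).
    return H * C / (E * voltage)

V = 30e3  # 30 kV, an illustrative tube voltage
print(lambda_min(V), lambda_min(2 * V), lambda_min(2 * V) / lambda_min(V))  # ratio 0.5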
2445
https://www.youtube.com/watch?v=-zszozbClaI
How to Prove Uniform Convergence Example with f_n(x) = x/(1 + nx^2) The Math Sorcerer 1130000 subscribers 924 likes Description 77507 views Posted: 9 Oct 2018 Please Subscribe here, thank you!!! How to Prove Uniform Convergence Example with f_n(x) = x/(1 + nx^2) 76 comments Transcript: hey what's up YouTube in this video we have a sequence f sub n of x equal to x over 1 plus n x squared so for each positive integer we have a function so we have a sequence and we're going to prove that it converges uniformly on the set of real numbers to 0 so first let's briefly recall what it means for a sequence of functions to converge uniformly so we say that f sub n converges to f uniformly on the set of real numbers this is equivalent to saying that for all epsilon greater than zero we can find some positive integer capital N which depends on epsilon such that for every little n bigger than capital N and for every real number x the distance between f sub n of x and f of x is less than epsilon so that's what it means for a sequence of functions to converge uniformly to a function so proof so we need to do some serious scratch work to figure out this proof so we'll go to the side here and work it out so we're showing uniform convergence so we have an epsilon greater than zero and then we need to find a capital N such that this is true here our capital N is not allowed to depend on x so the natural thing to do is maybe write down what we're trying to show so we have f sub n of x and then our f of x is 0 so minus 0 that's equal to x over 1 plus n x squared minus 0 and we want this to be less than epsilon so we're kind of stuck so one way to do it or the way I'm thinking of doing it is to look at this function f sub n of x x over 1 plus n x squared and maximize it if we can find the maximum of this function then it'll be less than or equal to the maximum so to maximize it I was thinking of using the first derivative test so we'll start by taking the first derivative so it's the derivative of the top which is 1 times the bottom piece minus the top piece which is x times the derivative of the bottom piece so 2 n x all right we're differentiating with respect to x all over the bottom squared so 1 plus n x squared quantity squared all right we can simplify this looks like we'll have 1 and then we have n x squared here and then here we have minus 2 n x squared so it'll be minus n x squared all over we have that bottom piece 1 plus n x squared quantity squared that's the first derivative and if we set this equal to 0 we can look for critical numbers when we do that we end up setting the numerator equal to 0 so 1 minus n x squared equals 0 and then we can solve this for x so we can do that by adding n x squared to the other side dividing by n so x squared is 1 over n and so we get x equals plus or minus the square root of 1 over n when we take the square root so those are our critical numbers the next thing you do is you plot these on a number line so here's our number line and so here is maybe the square root of 1 over n and then here is negative square root of 1 over n and I'd like to use the room up here so I'm gonna cross this out and continue the scratch work up here and then we'll do the proof later so let's pick test points and plug them into the first derivative so a nice test point we can pick that's smaller than this one I'm thinking is negative 2 square root 1 over n right that's certainly smaller than negative square root of 1 over n and the test points go into
the first derivative so we get 1 minus n negative 2 square root 1 over n quantity squared over and then on the bottom we have 1 plus and then we have negative 2 square root 1 over n quantity squared squared so in the numerator we're going to get 1 minus n and then when we square the negative 2 we get 4 and when we square the square root of 1 over n we just get 1 over n and look at that these n's cancel so you're going to get 1 minus 4 which is negative 3 on the bottom we get 1 plus n times you square the negative 2 you get 4 you square the square root of 1 over n you get 1 over n and that's squared so these n's cancel also the point is in the numerator you get 1 minus 4 and that's less than 0 so the function is decreasing here if we plug in 2 square root of 1 over n that resides over here exactly the same thing is going to happen because everything is squared so this is also less than 0 so it's also decreasing over here so now the easiest test point we can pick between these two critical numbers is 0 so if we check 0 looks like we get so we're plugging in 0 for x so we get 1 minus 0 over and then here we get 1 plus 0 squared so we just get 1 so that's positive so it's increasing here so there's a minimum at this number here so there's a min here and there's a maximum at the square root of 1 over n so that's what matters we'll have a max at x equals the square root of 1 over n by the first derivative test so to actually find the maximum value of this function you take your critical number and you plug it back into the function so f sub n of the square root of 1 over n is equal to the square root of 1 over n all over 1 plus n times the square root of 1 over n quantity squared that looks okay so this is equal to the square root of 1 over n over and then we have 1 plus and then n and then we're squaring the square root of 1 over n so we just get 1 over n they cancel so that leaves us with f sub n of the square root of 1 over n is equal to the square root of 1 over n over 1 plus 1 so we get I'm going to write this as 1 over square root of n over 2 which is the same thing as taking that and multiplying by 1/2 so it's going to be 1 over 2 square root of n so this is going to be the maximum of our sequence here so for each n we have that maximum so now we can go back to our scratch work right which was here this is our key step and we can bound this right x over 1 plus n x squared we can say that's less than or equal to 1 over 2 square root of n that's true for each n and we want this to be less than epsilon to complete the proof so we have to find N so we can multiply by the square root of n now so we have 1 over 2 less than epsilon square root of n and then divide by epsilon so we end up with the square root of n greater than 1 over 2 epsilon right dividing by epsilon and reading everything backwards so we can solve for n by squaring both sides so we get n greater than 1 over 2 epsilon quantity squared so that's going to be the N we want to choose in this problem so now all we have to do is choose that N we can do that using the power of Archimedes right there's something called the Archimedean principle that will allow us to choose our N bigger than this all right so now let's finally go to the proof I spent looks like we spent nine minutes figuring it out which is not bad for a problem of this caliber right these problems can take a long time to figure out so let's go ahead and write the proof now so come down here and write the proof use a different color for the proof so proof so we'll start the
proof by letting epsilon be greater than zero okay if you look back at the definition of uniform convergence that's how it starts and now we have to choose our N and our N does not depend on x all right that was the hard part we had to make sure it didn't depend on x so choose N greater than 1 over 2 epsilon quantity squared by the way I said that's the hard part why is that the hard part well because if our N could depend on x it would have been a much easier proof we could have taken cases like if x is 0 the proof is done if x is not 0 we could have done something like this you know x is not 0 so life is still good and then you can do this and then you can do this and just solve for N and you can prove convergence but that would only prove pointwise convergence right we had to prove uniform convergence much more difficult so hence the extra work so now we have to go back up here and just say you know for all little n bigger than N and for all x in R so let's do that so then for all little n bigger than capital N and for every real number x we're going to look at the difference between f sub n of x and 0 well that's equal to f sub n of x we said that was x over 1 plus n x squared I believe that's what it was let me go back and just double check yep and that's minus zero so nothing happens there and then we know that this is less than or equal to 1 over 2 square root of n this is my previous work so you could say this is since you know the maximum of x over 1 plus n x squared is equal to 1 over 2 square root of n so we're doing this without proof but we did it in our scratch work so hopefully it's decent and now we have to do some work here we have to show this is less than epsilon so now since little n is bigger than capital N which is bigger than 1 over 2 epsilon quantity squared right what we want to do is we want to show that this is less than epsilon so let's solve for 1 over the square root of n so to do that we have n bigger than 1 over 2 epsilon quantity squared so the square root of n is bigger than 1 over 2 epsilon and then we can divide by the square root of n and multiply by epsilon so this means that we have epsilon is greater than 1 over 2 square root of n reading that backwards right that gives us 1 over 2 square root of n less than epsilon so let's formally write down what we just did so thus that went kind of quick we have f sub n of x minus 0 we know that's less than or equal to 1 over 2 square root of n and then we've just gone through the justification here of why that's less than epsilon so that completes the proof and that shows uniform convergence I kind of rushed through that because I can see the time on the video and it just passed 13 minutes so it's kind of a long video if you stayed with me this long and you're watching it awesome I hope this helped that's it
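For reference, here is a compact restatement of the two steps above (my own summary, not part of the transcript), in LaTeX notation:

\[ f_n'(x) = \frac{(1+nx^2) - 2nx^2}{(1+nx^2)^2} = \frac{1-nx^2}{(1+nx^2)^2}, \qquad f_n\left(\tfrac{1}{\sqrt{n}}\right) = \frac{1/\sqrt{n}}{2} = \frac{1}{2\sqrt{n}}, \]

so given \(\varepsilon > 0\), choosing \(N > (1/(2\varepsilon))^2\) gives \(|f_n(x) - 0| \le 1/(2\sqrt{n}) < \varepsilon\) for all \(n > N\) and all real \(x\). And a quick numeric sanity check of that supremum (a sketch; the grid and the sample values of n are arbitrary choices of mine):

import numpy as np

# Check numerically that sup_x |x / (1 + n*x^2)| = 1 / (2*sqrt(n)),
# the key bound behind the uniform convergence argument in the video.
xs = np.linspace(-50, 50, 1_000_001)  # step 1e-4, so x = 1/sqrt(n) below lies on the grid
for n in (1, 4, 100):
    empirical = np.abs(xs / (1 + n * xs**2)).max()
    exact = 1 / (2 * np.sqrt(n))
    print(n, empirical, exact)  # the two columns agree for each n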
2446
https://mertzios.net/papers/Jour/Jour_Metric-Dimension-Interval-Perm-Algorithms.pdf
Algorithmica manuscript No. (will be inserted by the editor) Identification, location-domination and metric dimension on interval and permutation graphs. II. Algorithms and complexity Florent Foucaud · George B. Mertzios · Reza Naserasr · Aline Parreau · Petru Valicov Received: date / Accepted: date Abstract We consider the problems of finding optimal identifying codes, (open) locating-dominating sets and resolving sets (denoted Identifying Code, (Open) Locating-Dominating Set and Metric Dimension) of an interval or a permutation graph. In these problems, one asks to distinguish all vertices of a graph by a subset of the vertices, using either the neighbourhood within the solution set or the distances to the solution vertices. Using a general reduction for this class of problems, we prove that the decision problems associated to these four notions are NP-complete, even for interval graphs of diameter 2 and permutation graphs of diameter 2. While Identifying Code and (Open) Locating-Dominating Set are trivially fixed-parameter tractable when parameterized by solution size, it is known that in the same setting Metric Dimension is W[2]-hard. We show that for interval graphs, this parameterization of Metric Dimension is fixed-parameter tractable. Keywords Metric dimension · Resolving set · Identifying code · Locating-dominating set · Interval graph · Permutation graph · Complexity A short version of this paper, containing only the results about location-domination and metric dimension, appeared in the proceedings of the WG 2015 conference. F. Foucaud, LIMOS - CNRS UMR 6158, Université Blaise Pascal, Clermont-Ferrand, France. E-mail: florent.foucaud@gmail.com G. B. Mertzios, School of Engineering and Computing Sciences, Durham University, United Kingdom. Partially supported by the EPSRC Grant EP/K022660/1. E-mail: george.mertzios@durham.ac.uk R. Naserasr, CNRS, IRIF UMR 8243, Université Paris Diderot - Paris 7, France. E-mail: reza@irif.fr A. Parreau, Université de Lyon, CNRS, Université Lyon 1, LIRIS, UMR 5205, F-69622, France. E-mail: aline.parreau@univ-lyon1.fr P. Valicov, LIF - CNRS UMR 7279, Université d'Aix-Marseille, France. E-mail: petru.valicov@lif.univ-mrs.fr 1 Introduction Combinatorial identification problems have been widely studied in various contexts. The common characteristic of these problems is that we are given a combinatorial structure (graph or hypergraph), and we wish to distinguish (i.e. uniquely identify) its vertices by means of a small set of selected elements. In this paper, we study several such related identification problems where the instances are graphs. In the problem Metric Dimension, we wish to select a set S of vertices of a graph G such that every vertex of G is uniquely identified by its distances to the vertices of S. The notions of identifying codes and (open) locating-dominating sets are similar. Roughly speaking, instead of the distances to S, we ask for the vertices to be distinguished by their neighbourhood within S. These problems have been widely studied since their introduction in the 1970s and 1980s. They have been applied in various areas such as network verification [4,5], fault-detection in networks [41,56], graph isomorphism testing or the logical definability of graphs. We note that the similar problem of finding a test cover of a hypergraph (where hyperedges distinguish the vertices) has been studied under several names by various authors, see e.g. [9,10,15,30,46,49]. Important concepts and definitions.
All considered graphs are finite and simple. We will denote by N[v] the closed neighbourhood of vertex v, and by N(v) its open neighbourhood, i.e. N[v] \ {v}. A vertex is universal if it is adjacent to all the vertices of the graph. A set S of vertices of G is a dominating set if for every vertex v, there is a vertex x in S ∩ N[v]. It is a total dominating set if instead, x ∈ S ∩ N(v). In the context of (total) dominating sets, we say that a vertex x (totally) separates two distinct vertices u, v if it (totally) dominates exactly one of them. A set S (totally) separates the vertices of a set X if every pair in X has a vertex in S (totally) separating it. We now give the three key definitions, which merge the concepts of (total) domination and (total) separation: Definition 1 (Slater [52,53], Babai) A set S of vertices of a graph G is a locating-dominating set if it is a dominating set and it separates the vertices of V(G) \ S. The smallest size of a locating-dominating set of G is the location-domination number of G, denoted γLD(G). This concept has also been studied under the names distinguishing set and sieve. Definition 2 (Karpovsky, Chakrabarty and Levitin) A set S of vertices of a graph G is an identifying code if it is a dominating set and it separates all vertices of V(G). The smallest size of an identifying code of G is the identifying code number of G, denoted γID(G). Definition 3 (Seo and Slater) A set S of vertices of a graph G is an open locating-dominating set if it is a total dominating set and it totally separates all vertices of V(G). The smallest size of an open locating-dominating set of G is the open location-domination number of G, denoted γOLD(G). This concept has also been called identifying open code. Another kind of separation, based on distances, is used in the following concept: Definition 4 (Harary and Melter, Slater) A set R of vertices of a graph G is a resolving set if for each pair u, v of distinct vertices, there is a vertex x of R with d(x, u) ≠ d(x, v).1 The smallest size of a resolving set of G is the metric dimension of G, denoted dim(G). It is easy to check that the inequalities dim(G) ≤ γLD(G) ≤ γID(G) and γLD(G) ≤ γOLD(G) hold; indeed, every locating-dominating set of G is a resolving set, and every identifying code (or open locating-dominating set) is a locating-dominating set. Moreover, it is proved that γID(G) ≤ 2γLD(G) (using the same proof idea, one would get a similar relation between γLD(G) and γOLD(G) and between γID(G) and γOLD(G), perhaps with a different constant). There is no strict relation between γID(G) and γOLD(G). In a graph G of diameter 2, one can easily see that the concepts of resolving set and locating-dominating set are almost the same, as γLD(G) ≤ dim(G) + 1. Indeed, let S be a resolving set of G. Then all vertices in V(G) \ S have a distinct neighbourhood within S. There might be (at most) one vertex that is not dominated by S, in which case adding it to S yields a locating-dominating set. While a resolving set and a locating-dominating set exist in every graph G (for example the whole vertex set), an identifying code may not exist in G if it contains twins, that is, two vertices with the same closed neighbourhood. However, if the graph is twin-free, the set V(G) is an identifying code of G. Similarly, a graph admits an open locating-dominating set if and only if it has no open twins, i.e. vertices sharing the same open neighbourhood. We say that such a graph is open twin-free. 1 Resolving sets are also known under the name of locating sets. Optimal resolving sets have sometimes been called metric bases in the literature; to avoid an inflation in the terminology we will only use the term resolving set.
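Since all four parameters are defined side by side above, a tiny brute-force computation can make them concrete. The following sketch is my own illustration, not taken from the paper; it computes dim, γLD, γID and γOLD on the path P4 with a plain adjacency-dict representation, and all function names are ad hoc.

from itertools import combinations
from collections import deque

# Path P4: 0 - 1 - 2 - 3, as an adjacency dict.
G = {0: {1}, 1: {0, 2}, 2: {1, 3}, 3: {2}}
V = sorted(G)

def dist(u, v):
    # BFS distance between u and v.
    seen, q = {u: 0}, deque([u])
    while q:
        x = q.popleft()
        for y in G[x]:
            if y not in seen:
                seen[y] = seen[x] + 1
                q.append(y)
    return seen[v]

def distinct_nonempty(codes):
    return all(codes) and len(set(codes)) == len(codes)

def is_ld(S):   # locating-dominating set: separate only V \ S, via open neighbourhoods
    return distinct_nonempty([frozenset(S & G[v]) for v in V if v not in S])

def is_id(S):   # identifying code: closed neighbourhoods, all vertices
    return distinct_nonempty([frozenset(S & (G[v] | {v})) for v in V])

def is_old(S):  # open locating-dominating set: open neighbourhoods, all vertices
    return distinct_nonempty([frozenset(S & G[v]) for v in V])

def is_res(S):  # resolving set: distance vectors must be pairwise distinct
    vecs = [tuple(dist(s, v) for s in sorted(S)) for v in V]
    return len(set(vecs)) == len(vecs)

def minimum(check):
    return next(k for k in range(1, len(V) + 1)
                for S in combinations(V, k) if check(set(S)))

print(minimum(is_res), minimum(is_ld), minimum(is_id), minimum(is_old))
# Expect: 1 2 3 4, consistent with dim <= gamma_LD <= gamma_ID and gamma_LD <= gamma_OLD.

Incidentally, γLD(P4) = 2 here matches the value d = 2 used when P4 reappears as the dominating gadget for Locating-Dominating-Set in Section 2.3.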
The focus of this paper is the following set of four decision problems: Locating-Dominating-Set Instance: A graph G, an integer k. Question: Is it true that γLD(G) ≤ k? Identifying Code Instance: A graph G, an integer k. Question: Is it true that γID(G) ≤ k? Open Locating-Dominating Set Instance: A graph G, an integer k. Question: Is it true that γOLD(G) ≤ k? Metric Dimension Instance: A graph G, an integer k. Question: Is it true that dim(G) ≤ k? We will study these four concepts and decision problems on graphs belonging to specific subclasses of perfect graphs (i.e. graphs whose induced subgraphs all have equal clique and chromatic numbers). Many standard graph classes are perfect, for example bipartite graphs, split graphs and interval graphs. For precise definitions, we refer to the books of Brandstädt, Le and Spinrad and of Golumbic [13,31]. Some of these classes are defined using a geometric intersection model, that is, the vertices are associated to the elements of a set S of (geometric) objects, and two vertices are adjacent if and only if the corresponding objects intersect. The graph defined by the intersection model S is its intersection graph. An interval graph is the intersection graph of intervals of the real line, and a unit interval graph is an interval graph whose intersection model contains only (open) intervals of unit length. Given two parallel lines B and T, a permutation graph is the intersection graph of segments of the plane which have one endpoint on B and the other endpoint on T. Interval graphs and permutation graphs are classic graph classes that have many applications and are widely studied. They can be recognized efficiently, and many combinatorial problems have simple and efficient algorithms for these classes. Previous work. The complexity of distinguishing problems has been studied by many authors. Identifying Code was first proved to be NP-complete by Charon, Hudry, Lobstein and Zémor, and Locating-Dominating-Set by Colbourn, Slater and Stewart. Regarding their instance restriction to specific graph classes, Identifying Code and Locating-Dominating-Set were shown to be NP-complete for bipartite graphs by Charon, Hudry and Lobstein. This was improved by Müller and Sereni to planar bipartite unit disk graphs, by Auger to planar graphs with arbitrarily large girth, and by Foucaud to planar bipartite subcubic graphs. Foucaud, Gravier, Naserasr, Parreau and Valicov proved that Identifying Code is NP-complete for graphs that are both planar and line graphs of subcubic bipartite graphs. Berger-Wolf, Laifenfeld, Trachtenberg and Suomela independently showed that both Identifying Code and Locating-Dominating-Set are hard to approximate within factor α for any α = o(log n) (where n denotes the order of the graph), with no restriction on the input graph. This result was recently extended to bipartite graphs, split graphs and co-bipartite graphs by Foucaud. Moreover, Bousquet, Lagoutte, Li, Parreau and Thomassé proved the same non-approximability result for bipartite graphs with no 4-cycles.
On the positive side, Identifying Code and Locating-Dominating-Set are constant-factor approximable for bounded-degree graphs (shown by Gravier, Klasing and Moncel), for line graphs [26,27] and for interval graphs, and are linear-time solvable for graphs of bounded clique-width (using Courcelle's theorem). Furthermore, Slater and Auger gave explicit linear-time algorithms solving Locating-Dominating-Set and Identifying Code, respectively, in trees. The complexity of Open Locating-Dominating Set has not been studied much; Seo and Slater showed that it is NP-complete, and the inapproximability results of Foucaud for bipartite, co-bipartite and split graphs transfer to it. The problem Metric Dimension is widely studied. It was shown to be NP-complete by Garey and Johnson [30, Problem GT61]. This result has recently been extended to bipartite graphs, co-bipartite graphs, split graphs and line graphs of bipartite graphs by Epstein, Levin and Woeginger, to a special subclass of unit disk graphs by Hoffmann and Wanke, and to planar graphs by Diaz, Pottonen, Serna and van Leeuwen. Epstein, Levin and Woeginger also gave polynomial-time algorithms for the weighted version of Metric Dimension on paths, cycles, trees, graphs of bounded cyclomatic number, cographs and partial wheels. Diaz, Pottonen, Serna and van Leeuwen gave a polynomial-time algorithm for outerplanar graphs, and Fernau, Heggernes, van't Hof, Meister and Saei gave one for chain graphs. Metric Dimension can most likely not be expressed in MSOL, and it is an open problem to determine its complexity for bounded treewidth (even treewidth 2). Metric Dimension is hard to approximate within any o(log n) factor for general graphs, as shown by Beerliova, Eberhard, Erlebach, Hall, Hoffmann, Mihalák and Ram. This is even true for subcubic graphs, as shown by Hartung and Nichterlein (a result extended to bipartite subcubic graphs in Hartung's thesis). In light of these results, the complexity of Locating-Dominating-Set, Open Locating-Dominating Set, Identifying Code and Metric Dimension for interval and permutation graphs is a natural open question (as asked by Manuel, Rajan, Rajasingh and Chris Monica M., and by Epstein, Levin and Woeginger for Metric Dimension on interval graphs), since these classes are standard candidates for designing efficient algorithms to solve otherwise hard problems. Let us say a few words about the parameterized complexity of these problems. A decision problem is said to be fixed-parameter tractable (FPT) with respect to a parameter k of the instance if it can be solved in time f(k)·n^O(1) for an instance of size n, where f is a computable function (for definitions and concepts in parameterized complexity, we refer to the books [22,48]). It is known that for the problems Locating-Dominating-Set, Open Locating-Dominating Set and Identifying Code, for a graph of order n and solution size k, the bound n ≤ 2^k holds (see e.g. [41,53]): each vertex is determined by a distinct subset of the solution, and there are only 2^k such subsets. Therefore, when parameterized by k, these problems are (trivially) FPT: one can first check whether n ≤ 2^k holds (if not, return "no"), and if yes, use a brute-force algorithm checking all possible subsets of vertices. This is an FPT algorithm. However, Metric Dimension (parameterized by solution size) is W[2]-hard even for bipartite subcubic graphs [35,36] (and hence unlikely to be FPT).
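The trivial FPT routine just described is short enough to state directly. This sketch is my own rendering, not the authors' code; `is_solution` stands for any of the three feasibility tests (for instance the brute-force predicates sketched after the definitions above).

from itertools import combinations

def solves_with_at_most_k(V, is_solution, k):
    # For LD / ID / OLD sets, a solution S of size at most k gives every vertex
    # a distinct subset of S as its code, so n <= 2^k must hold; otherwise "no".
    if len(V) > 2 ** k:
        return False
    # Now n is bounded by a function of k alone, so exhaustive search over
    # all subsets of size at most k runs in FPT time, i.e. f(k) overall.
    return any(is_solution(set(S))
               for r in range(1, k + 1)
               for S in combinations(V, r))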
Remarkably, the bound n ≤ D^k + k holds (where n is the graph's order, D its diameter, and k the solution size of Metric Dimension): each vertex outside a resolving set of size k is determined by its vector of distances to the set, and each of the k coordinates takes one of at most D values. Therefore, for graphs of diameter bounded by a function of k, the same arguments as the previous ones yield an FPT algorithm. This holds, for example, for the class of (connected) split graphs, which have diameter at most 3. Also, it was recently proved that Metric Dimension is FPT when parameterized by the largest possible number of leaves in a spanning tree of a graph. Besides this, as remarked in the literature, no non-trivial FPT algorithm for Metric Dimension was previously known. Finally, we also mention a companion paper, in which we study the problems Identifying Code, Locating-Dominating-Set, Open Locating-Dominating Set and Metric Dimension on interval and permutation graphs from a combinatorial point of view, proving several bounds involving the order, the diameter and the solution size of a graph. Our results. We continue the classification of the complexity of the problems Identifying Code, Locating-Dominating-Set, Open Locating-Dominating Set and Metric Dimension by giving a unified reduction showing that all four problems are NP-complete even for interval graphs of diameter 2 and for permutation graphs of diameter 2. The reductions are presented in Section 2. Then, in Section 3, we use dynamic programming on a path-decomposition to show that Metric Dimension is FPT on interval graphs when the parameter is the solution size. To our knowledge, this is the first non-trivial FPT algorithm for this problem when parameterized by solution size. We then conclude the paper with some remarks in Section 4. 2 Hardness results We will provide a general framework to prove NP-hardness for distinguishing problems in interval graphs and permutation graphs. We just need to assume a few generic properties of the problems, and then provide specific gadgets for each problem. We will reduce our problems from 3-Dimensional Matching, which is known to be NP-complete. 3-Dimensional Matching Instance: Three disjoint sets A, B and C, each of size n, and a set T of m triples of A × B × C. Question: Is there a perfect 3-dimensional matching M ⊆ T of the hypergraph (A, B, C, T), i.e. a set of disjoint triples of T such that each element of A ∪ B ∪ C belongs to exactly one of the triples? We give the general framework and the gadgets we will use in Section 2.1, then prove the general reduction using this framework in Section 2.2, and apply it to obtain the NP-hardness for Identifying Code, Locating-Dominating-Set and Open Locating-Dominating Set in Section 2.3. We finally deduce from the results for graphs of diameter 2 the hardness of Metric Dimension in Section 2.4. We give the reduction using interval graphs and prove subsequently that it can be built as a permutation graph too. 2.1 Preliminaries and gadgets In the three distinguishing problems Locating-Dominating-Set, Identifying Code and Open Locating-Dominating Set, one asks for a set of vertices that dominates all vertices and separates all pairs of vertices (for suitable definitions of domination and separation). Since we give a reduction which applies to all three problems (and others that share certain properties with them), we will generally speak of a solution as a vertex set satisfying the two properties. For two vertices u, v, let us denote by Iu,v the set N[u] \ N[v].
In the reduction, we will only make use of the following properties (which are common to Locating-Dominating-Set, Identifying Code and Open Locating-Dominating Set): Property 5 Let G be a graph with a solution S to Locating-Dominating-Set, Identifying Code or Open Locating-Dominating Set. (1) For each vertex v, any vertex from N(v) dominates v; (2) For each vertex v, at least one element of N[v] belongs to S; (3) For every pair u, v of adjacent vertices, any vertex of Iu,v ∪ Iv,u separates u, v; (4) For every pair u, v of adjacent vertices, S contains a vertex of Iu,v ∪ Iv,u ∪ {u, v}. The problems Identifying Code and Open Locating-Dominating Set clearly satisfy these properties. For Locating-Dominating-Set, the vertices of a solution set S do not need to be separated from any other vertex. However, one can say equivalently that two vertices u, v are separated if either u or v belongs to S, or if there is a vertex of S in Iu,v ∪ Iv,u. Therefore, Locating-Dominating-Set also satisfies the above properties. Before describing the reduction, we define the following dominating gadget independently of the considered problem (we describe the specific gadgets for Locating-Dominating-Set, Open Locating-Dominating Set and Identifying Code in Section 2.3). The idea behind this gadget is to ensure that specific vertices are dominated locally, and are therefore separated from the rest of the graph. We will use it extensively in our construction. Definition 6 (Dominating gadget) A dominating gadget D is an interval graph such that there exists an integer d ≥ 1 and a subset SD of V(D) of size d (called the standard solution for D) with the following properties: – SD is an optimal solution for D with the property that no vertex of D is dominated by all the vertices of SD;2 – if D is an induced subgraph of an interval graph G such that each interval of V(G) \ V(D) either contains all intervals of V(D) or does not intersect any of them, then for any solution S for G, |S ∩ V(D)| ≥ d.3 In the following, a dominating gadget will be represented graphically as shown in Figure 1, where D is an induced subgraph of an interval graph G. In our constructions, we will build a graph G with many isomorphic copies of D as its induced subgraphs, where D will be a fixed graph. Denote by S an optimal solution for G: the size of each local optimal solution S ∩ V(D) for D will always be d. Moreover, the conditions of the second property in Definition 6 (that each interval of V(G) \ V(D) either contains all intervals of V(D) or does not intersect any of them) will always be satisfied. Fig. 1 Representation of dominating gadget D. 2 Note that this implies d ≥ 2. 3 By this property, an interval of V(G) \ V(D) may only be useful to dominate a vertex in D (but not to separate a pair in D). Hence the property holds if any optimal solution for the separation property only has the same size as an optimal solution for both separation and domination. Claim 7 Let G be an interval graph containing a dominating gadget D as an induced subgraph, such that each interval of V(G) \ V(D) either contains all intervals of V(D) or does not intersect any of them. Then, for any optimal solution S of G, |S ∩ V(D)| = d, and we can obtain an optimal solution S′ with |S′| = |S| by replacing S ∩ V(D) by the standard solution SD. Proof By the second property of a dominating gadget, we have |S ∩ V(D)| ≥ d ≥ 1.
Since each interval of V(G) \ V(D) either contains all intervals of V(D) or does not intersect any of them, a pair of intervals of V(G) \ V(D) either cannot be separated by any interval in V(D), or is separated equally by any interval in V(D). Since d ≥ 1, there is at least one interval in S ∩ V(D), but the structure of S ∩ V(D) does not influence the rest of the graph. Hence, S ∩ V(D) can be replaced by SD, and we have |S ∩ V(D)| ≤ d (otherwise the solution with SD would be better and S would not be optimal). ⊓⊔ Definition 8 (Choice pair) A pair {u, v} of intervals is called a choice pair if u, v both contain the intervals of a common dominating gadget (denoted D(uv)), and none of u, v contains the other. See Figure 2 for an illustration of a choice pair. Intuitively, a choice pair gives us the choice of separating it from the left or from the right: since none of u, v is included in the other, the intervals intersecting u but not v (the set Iu,v) can only be located at one side of u; the same holds for v. In our construction, we will make sure that all pairs of intervals will be easily separated using dominating gadgets. It will remain to separate the choice pairs. We have the following claim: Claim 9 Let S be a solution of a graph G and {u, v} be a choice pair in G. If the solution S ∩ V(D(uv)) for the dominating gadget D(uv) is the standard solution SD, both vertices u and v are dominated, separated from all vertices in D(uv) and from all vertices not intersecting D(uv). Proof If S is such a solution, by the definition of a dominating gadget, |S ∩ D(uv)| ≥ d ≥ 1. Since all vertices of D(uv) are in the open neighbourhood of u and v, by Property 5(1)-(3), u and v are dominated and separated from the vertices not intersecting D(uv). Moreover, both u, v are adjacent to all vertices of D(uv) ∩ S. By Definition 6, no vertex of D(uv) is dominated by the whole set SD, hence u, v are separated from all vertices in D(uv). ⊓⊔ Fig. 2 Choice pair {u, v}. We now define the central gadget of the reduction, the transmitter gadget. Roughly speaking, it allows us to transmit information across an interval graph using the separation property. Definition 10 (Transmitter gadget) Let P be a set of two or three choice pairs in an interval graph G. A transmitter gadget Tr(P) is an induced subgraph of G consisting of a path on seven vertices {u, uv1, uv2, v, vw1, vw2, w} and five dominating gadgets D(u), D(uv), D(v), D(vw), D(w), such that the following properties are satisfied: – u and w are the only vertices of Tr(P) that separate the pairs of P; – the intervals of the dominating gadget D(u) (resp. D(v), D(w)) are included in interval u (resp. v, w) and no interval of Tr(P) other than u (resp. v, w) intersects D(u) (resp. D(v), D(w)); – the pair {uv1, uv2} is a choice pair and no interval of V(Tr(P)) \ (D(uv) ∪ {uv1, uv2}) intersects both intervals of the pair; the same holds for the pair {vw1, vw2}; – the choice pairs {uv1, uv2} and {vw1, vw2} cannot be separated by intervals of G other than u, v and w. Fig. 3 Transmitter gadget Tr({x1, x2}, {y1, y2}, {z1, z2}) and its "box" representation. Figure 3 illustrates a transmitter gadget and shows the succinct graphical representation that we will use. As shown in the figure, we may use a "box" to denote Tr(P).
This box does not include the choice pairs of P but indicates where they are situated. Note that the middle pair {y1, y2} could also be separated (from the left) by u instead of w, or it may not exist at all. The following claim shows how transmitter gadgets will be used in the main reduction. Claim 11 Let G be an interval graph with a transmitter gadget Tr(P) and let S be a solution. We have |S ∩ Tr(P)| ≥ 5d + 1, and if |S ∩ Tr(P)| = 5d + 1, no pair of P is separated by a vertex in S ∩ Tr(P). Moreover, there exist two sets of vertices of Tr(P), S−_Tr(P) and S+_Tr(P), of size 5d + 1 and 5d + 2 respectively, such that the following holds. – The set S−_Tr(P) dominates all the vertices of Tr(P) and separates all the pairs of Tr(P), but no pairs in P. – The set S+_Tr(P) dominates all the vertices of Tr(P), separates all the pairs of Tr(P) and separates all the pairs in P. Proof By the definition of the dominating gadget, we must have |S ∩ Tr(P)| ≥ 5d, with 5d vertices of S belonging to the dominating gadgets. By Property 5(4) applied to the choice pair {uv1, uv2}, at least one vertex of {u, uv1, uv2, v} belongs to S (recall that the intervals not in Tr(P) cannot separate the choice pairs in Tr(P)), and similarly, for the choice pair {vw1, vw2}, at least one vertex of {v, vw1, vw2, w} belongs to S. Hence |S ∩ Tr(P)| ≥ 5d + 1, and if |S ∩ Tr(P)| = 5d + 1, vertex v must be in S and neither u nor w is in S. Therefore, no pair of P is separated by a vertex in S ∩ Tr(P). We now prove the second part of the claim. Let Sdom be the union of the five standard solutions SD of the dominating gadgets of Tr(P). Let S−_Tr(P) = Sdom ∪ {v} and S+_Tr(P) = Sdom ∪ {u, w}. The set Sdom has 5d vertices, and so S−_Tr(P) and S+_Tr(P) have respectively 5d + 1 and 5d + 2 vertices. Each interval of Tr(P) either contains a dominating gadget or is part of a dominating gadget, and is therefore dominated by a vertex in Sdom. Hence, pairs of vertices that do not intersect the same dominating gadget are clearly separated. By the first property in Definition 6, a vertex adjacent to a whole dominating gadget is separated from all the vertices of the dominating gadget. Similarly, by definition, pairs of vertices inside a dominating gadget are separated by Sdom. Therefore, the only remaining pairs to consider are the choice pairs. By Property 5(3), they are separated both at the same time, either by v or by {u, w}. Hence the two sets S−_Tr(P) and S+_Tr(P) both dominate and separate the vertices of Tr(P). Moreover, since S+_Tr(P) contains u and w, it also separates the pairs of P. ⊓⊔ A transmitter gadget with a solution set of size 5d + 1 (resp. 5d + 2) is said to be tight (resp. non-tight). We will call the sets S−_Tr(P) and S+_Tr(P) the tight and non-tight standard solutions of Tr(P). 2.2 The main reduction We are now ready to describe the reduction from 3-Dimensional Matching. Each element x ∈ A ∪ B ∪ C is modelled by a choice pair {fx, gx}. Each triple of T is modelled by a triple gadget, defined as follows. Definition 12 (Triple gadget) Let T = {a, b, c} be a triple of T.
The triple gadget Gt(T) is an interval graph consisting of four choice pairs p = {p1, p2}, q = {q1, q2}, r = {r1, r2}, s = {s1, s2} together with their associated dominating gadgets D(p), D(q), D(r), D(s) and five transmitter gadgets Tr(p, q), Tr(r, s), Tr(s, a), Tr(p, r, b) and Tr(q, r, c), where: – a = {fa, ga}, b = {fb, gb} and c = {fc, gc}; – except for the choice pairs p, q, r, s, a, b, c, for each pair of intervals of Gt(T), its two intervals intersect different subsets of dominating gadgets of Gt(T); – in each transmitter gadget Tr(P) and for each choice pair π ∈ P, the intervals of π intersect the same intervals except for the vertices u, v, w of Tr(P); – the intervals of V(G) \ V(Gt(T)) that intersect only a part of the gadget intersect it accordingly to the transmitter gadget definition and do not separate the choice pairs p, q, r and s. Note that there are several ways to obtain a triple gadget that is an interval graph and that satisfies the properties in Definition 12. The one in Figure 4 represents one of these possibilities. We remark that p, q, r and s in Gt({a, b, c}) are all functions of {a, b, c}, but to simplify the notation we simply write p, q, r and s. Fig. 4 Triple gadget Gt({a, b, c}) together with the choice pairs of elements a, b and c. Claim 13 Let G be a graph with a triple gadget Gt(T) and S be a solution. We have |S ∩ Gt(T)| ≥ 29d + 7, and if |S ∩ Gt(T)| = 29d + 7, no choice pair corresponding to a, b or c is separated by a vertex in S ∩ Gt(T). Moreover, there exist two sets of vertices of Gt(T), S−_Gt(T) and S+_Gt(T), of size 29d + 7 and 29d + 8 respectively, such that the following holds. – The set S−_Gt(T) dominates all the vertices of Gt(T) and separates all the pairs of Gt(T), but does not separate any choice pair corresponding to {a, b, c}. – The set S+_Gt(T) dominates all the vertices of Gt(T), separates all the pairs of Gt(T) and separates the choice pairs corresponding to {a, b, c}. Proof The proof is similar to the proof of Claim 11. Each transmitter gadget must contain at least 5d + 1 vertices of the solution, and each of the four dominating gadgets of the choice pairs p, q, r, s must contain d vertices. Hence there must already be 29d + 5 vertices of Gt(T) in the solution. Furthermore, to separate the choice pair s, Tr(r, s) or Tr(s, a) must be non-tight (since s is not separated by other vertices of the graph). In the same way, to separate the choice pair p, Tr(p, q) or Tr(p, r, b) must be non-tight. Then at least two transmitter gadgets are non-tight and we have |S ∩ Gt(T)| ≥ 29d + 7. If |S ∩ Gt(T)| = 29d + 7, exactly two transmitter gadgets are non-tight and they can only be Tr(r, s) and Tr(p, q) (otherwise some of the choice pairs p, q, r, s would not be separated). Hence the choice pairs corresponding to {a, b, c} are not separated by the vertices of Gt(T) ∩ S. For the second part of the claim, the set S−_Gt(T) is defined by taking the union of the tight standard solutions of Tr(s, a), Tr(q, r, c) and Tr(p, r, b), the non-tight standard solutions of Tr(p, q) and Tr(r, s), and the standard solutions of the dominating gadgets D(p), D(q), D(r) and D(s). The set S+_Gt(T) is defined by taking the union of the non-tight standard solutions of Tr(s, a), Tr(q, r, c) and Tr(p, r, b), the tight standard solutions of Tr(p, q) and Tr(r, s), and the standard solutions of the dominating gadgets D(p), D(q), D(r) and D(s).
By Claim 11, the definition of a dominating gadget, and the fact that the only intervals sharing the same sets of dominating gadgets are the choice pairs, all intervals of Gt(T) are dominated and all the pairs of intervals except the choice pairs are separated by both S−_Gt(T) and S+_Gt(T). The choice pairs p, q, r and s are separated by the non-tight solutions of the transmitter gadgets. Hence S−_Gt(T) and S+_Gt(T) dominate and separate all the intervals of Gt(T). When the solution contains S−_Gt(T), the transmitter gadgets Tr(s, a), Tr(q, r, c) and Tr(p, r, b) are tight. Hence S−_Gt(T) does not separate any choice pair among {a, b, c}. On the other hand, since S+_Gt(T) contains the non-tight solutions of Tr(s, a), Tr(q, r, c) and Tr(p, r, b), the three choice pairs {a, b, c} are separated by S+_Gt(T). ⊓⊔ As before, a triple gadget with 29d + 7 vertices of the solution (resp. 29d + 8) is said to be tight (resp. non-tight). We will call the sets S−_Gt(T) and S+_Gt(T) the tight and non-tight standard solutions of Gt(T). Given an instance (A, B, C, T) of 3-Dimensional Matching with |A| = |B| = |C| = n and |T| = m, we construct the interval graph G = G(A, B, C, T) as follows. – As mentioned previously, to each element x of A ∪ B ∪ C, we assign a distinct choice pair {fx, gx} in G. The intervals of any two distinct choice pairs {fx, gx}, {fy, gy} are disjoint and they are all in R+. – For each triple T = {a, b, c} of T, we first associate an interval IT in R− such that for any two triples T1 and T2, IT1 and IT2 do not intersect. Then inside IT, we build the choice pairs {p1, p2}, {q1, q2}, {r1, r2}, {s1, s2}. Finally, using the choice pairs already associated to elements a, b and c, we complete this to a triple gadget. – When placing the remaining intervals of the triple gadgets, we must ensure that triple gadgets do not "interfere": for every dominating gadget D, no interval in V(G) \ V(D) must have an endpoint inside D. Similarly, the choice pairs of every triple gadget or transmitter gadget must only be separated by intervals among u, v and w of its corresponding private transmitter gadget. For intervals of distinct triple gadgets, this is easily done by our placement of the triple gadgets. To ensure that the intervals of transmitter gadgets of the same triple gadget do not interfere, we proceed as follows. We place the whole gadget Tr(p, q) inside interval u of Tr(p, r, b). Similarly, the whole Tr(r, s) is placed inside interval w of Tr(q, r, c), and the whole Tr(s, a) is placed inside interval v of Tr(p, r, b). One has to be more careful when placing the intervals of Tr(p, r, b) and Tr(q, r, c). In Tr(p, r, b), we must have that interval u separates p from the right of p. We also place u so that it separates r from the left of r. Intervals uv1, uv2 both start in r1, so that u also separates uv1, uv2, and also to ensure that uv1, uv2 do not separate the choice pair r. Intervals uv1, uv2 continue until after pair s. In Tr(q, r, c), we place u so that it separates q from the right, and we place w so that it separates r from the right; intervals uv1, uv2, v lie strictly between q and r; intervals vw1, vw2 intersect r1, r2 but stop before the end of r2 (so that w can separate both pairs vw1, vw2 and r, but without these pairs interfering). It is now easy to place Tr(s, a) between s and a. An example is given in Figure 5.
Fig. 5 The detailed construction of a triple gadget. The graph G(A, B, C, T) has (29vD + 43)m + 3(vD + 2)n vertices (where vD is the order of a dominating gadget), and the interval representation described by our procedure can be obtained in polynomial time. We are now ready to state the main result of this section. Theorem 14 (A, B, C, T) has a perfect 3-dimensional matching if and only if G = G(A, B, C, T) has a solution with (29d + 7)m + (3d + 1)n vertices. Proof Let M be a perfect 3-dimensional matching of (A, B, C, T). Let S+ (resp. S−) be the union of all the non-tight (resp. tight) standard solutions S+_Gt(T) for T ∈ M (resp. S−_Gt(T) for T ∉ M). Let Sd be the union of all the standard solutions of the dominating gadgets corresponding to the choice pairs of the elements. Then S = S+ ∪ S− ∪ Sd is a solution of size (29d + 7)m + (3d + 1)n. Indeed, by the definition of the dominating gadgets, all the intervals inside a dominating gadget are dominated and separated from all the other intervals. All the other intervals intersect at least one dominating gadget and thus are dominated. Furthermore, two intervals that are not a choice pair do not intersect the same set of dominating gadgets and thus are separated by one of the dominating gadgets. Finally, the choice pairs inside a triple gadget are separated by Claim 13, and the choice pairs corresponding to the elements of A ∪ B ∪ C are separated by the non-tight standard solutions of the triple gadgets corresponding to the perfect matching. Now, let S be a solution of size (29d + 7)m + (3d + 1)n. We may assume that the solution is standard on all triple gadgets and on the dominating gadgets. Let n2 be the number of non-tight triple gadgets in the solution S. By Claim 13, there must be at least (29d + 7)m + n2 vertices of S inside the m triple gadgets, and 3dn vertices of S for the dominating gadgets of the 3n elements of A ∪ B ∪ C. Hence (29d + 7)m + n2 + 3dn ≤ (29d + 7)m + (3d + 1)n and we have n2 ≤ n.
Corollary 16 Any graph distinguishing problem P based on domination and separation satisfying Prop-erty 5 and admitting a dominating gadget that is a permutation graph, is NP-complete even for the class of permutation graphs. Proof We can use the same reduction as the one that yields Theorem 14. We represent a permutation graph using its intersection model of segments as defined in the introduction. A dominating gadget will be represented as in Figure 6. The transmitter gadget of Definition 10 is also a permutation graph, see Figure 7 for an illustration. Using these gadgets, we can build a triple gadget that satisfies Definition 12 and is a permutation graph. A simplified permutation diagram (without dominating gadgets) of such a triple gadget is given in Figure 8. Now, similarly as for interval graphs, given an instance (A, B, C, T ) of 3-Dimensional Matching, one can define a graph G = G(A, B, C, T ) that is a permutation graph and for which (A, B, C, T ) has a perfect 3-dimensional matching if and only if G = G(A, B, C, T ) has a solution with (29d + 7)m + (3d + 1)n vertices. The proof is the same as the one in Theorem 14. Algorithms and complexity for distinguishing problems in interval and permutation graphs 11 D Fig. 6 Representation of a dominating gadget as a permutation diagram. x1 x2 u D(u) uv1 uv2 D(uv) v D(v) vw2 vw1 D(uv) w D(w) y2 y1 z1 z2 Fig. 7 Representation of a the transmitter gadget as a permutation diagram. Dashed : Tr(q, r, c) Gray : Tr(p, r, b) pair p pair q pair r pair s pair a pair b pair c uv1 u v uv2 vw2 vw1 w u uv2 uv1 v vw1 w vw2 Tr(p, q) Tr(r, s) Tr(s, a) Fig. 8 Representation of a the triple gadget as a permutation diagram. 2.3 Applications to the specific problems We now apply Theorem 14 to Locating-Dominating-Set, Identifying Code and Open Locating-Dominating Set by providing corresponding dominating gadgets. Corollary 17 Locating-Dominating-Set, Identifying Code and Open Locating-Dominating Set are NP-complete for interval graphs and permutation graphs. Proof We prove that the path graphs P4, P5 and P6 are dominating gadgets for Locating-Dominating-Set, Identifying Code and Open Locating-Dominating Set, respectively. These graphs are clearly interval and permutation graphs at the same time. To comply with Definition 6, we must prove that a dominating gadget D (i) has an optimal solution SD of size d such that no vertex of D is dominated by all the vertices of SD, and (ii) if D is an induced subgraph of an interval graph G such that each interval of V (G) \ V (D) either contains all intervals of V (D) or does not intersect any of them, then for any solution S for G, |S ∩V (D)| ≥d. – Locating-Dominating-Set. Let V (P4) = {x1, . . . , x4} and d = 2. The set SD = {x1, x4} satisfies (i). For (ii), assume that S is a locating-dominating set of a graph G containing a copy P of P4 satisfying the conditions. If S ∩P = ∅or S ∩P = {x1}, then x3 and x4 are not separated. If S ∩P = {x2}, then x1 and x3 are not separated. Hence, by symmetry, there at least two vertices of P in S, and (ii) is satisfied. 12 Florent Foucaud et al. – Identifying Code. Let V (P5) = {x1, . . . , x5} and d = 3. The set SD = {x1, x3, x5} satisfies (i). For (ii), assume that S is an identifying code of a graph G containing a copy P of P5 satisfying the conditions. To separate the pair {x1, x2}, we must have x3 ∈S, since the other vertices cannot separate any pair inside P. 
To separate the pair {x2, x3}, we must have {x1, x4} ∩ S ≠ ∅, and to separate the pair {x3, x4}, we must have {x2, x5} ∩ S ≠ ∅. Hence there are at least three vertices of P in S, and (ii) is satisfied. – Open Locating-Dominating Set. Let V(P6) = {x1, . . . , x6} and d = 4. The set SD = {x1, x3, x4, x6} satisfies (i). For (ii), assume that S is an open locating-dominating set of a graph G containing a copy P of P6 satisfying the conditions. To separate the pair {x1, x3}, we must have x4 ∈ S. Symmetrically, x3 ∈ S. To separate the pair {x2, x4}, we must have {x1, x5} ∩ S ≠ ∅, and symmetrically, {x2, x6} ∩ S ≠ ∅. Hence there are at least four vertices of P in S, and (ii) is satisfied. ⊓⊔ 2.4 Reductions for diameter 2 and consequence for Metric Dimension We now describe self-reductions for Identifying Code, Locating-Dominating-Set and Open Locating-Dominating Set for graphs with a universal vertex (hence, graphs of diameter 2). We also give a similar reduction from Locating-Dominating-Set to Metric Dimension. Let G be a graph. We define f1(G) to be the graph obtained from G by adding a universal vertex u and then a neighbour v of u of degree 1. Similarly, f2(G) is the graph obtained from f1(G) by adding a twin w of v. See Figures 9(a) and 9(b) for an illustration. Fig. 9 Three reductions for diameter 2: (a) transformation f1(G); (b) transformation f2(G); (c) transformation f3(G). Lemma 18 For any graph G, we have γLD(f1(G)) = γLD(G) + 1. If G is twin-free, γID(f1(G)) = γID(G) + 1. If G is open twin-free, γOLD(f2(G)) = γOLD(G) + 2. Proof Let S be an identifying code of G. Then S ∪ {v} is also an identifying code of f1(G): all vertices within V(G) are distinguished by S as they were in G; vertex v is dominated only by itself; vertex u is the only vertex dominated by the whole set S ∪ {v}. The same argument works for a locating-dominating set. Hence, γLD(f1(G)) ≤ γLD(G) + 1 and γID(f1(G)) ≤ γID(G) + 1. If S is an open locating-dominating set of G, then similarly, S ∪ {v, w} is one of f2(G), hence γOLD(f2(G)) ≤ γOLD(G) + 2. It remains to prove the converse. Let S1 be an identifying code (or locating-dominating set) of f1(G). Observe that |S1 ∩ {u, v}| ≥ 1, since v must be dominated. Hence if S1 \ {u, v} is an identifying code (or locating-dominating set) of G, we are done. Let us assume the contrary. Then necessarily u ∈ S1, since v does not dominate any vertex of V(G). But u is a universal vertex, hence u does not separate any pair of vertices of V(G). Therefore, S1 \ {u} separates all pairs, but does not dominate some vertex x ∈ V(G): we have N[x] ∩ S1 = {u}. Note that x is the only such vertex of G. This implies that v ∈ S1 (otherwise x and v are not separated by S1). But then (S1 \ {u, v}) ∪ {x} is an identifying code (or locating-dominating set) of G of size |S1| − 1. This completes the proof. A similar proof works for open location-domination: if S2 is an open locating-dominating set of f2(G), then |S2 ∩ {u, v, w}| ≥ 2, since v, w must be separated and totally dominated. Similarly, if S2 \ {u, v, w} is an open locating-dominating set of G, we are done. Otherwise, again u must belong to S2, and is needed only for domination. But then, if there is a vertex among v, w that is not in S2, the other one would not be separated from the vertex x only dominated by u. But then (S2 \ {u, v, w}) ∪ {y}, for any vertex y ∈ N(x), is an open locating-dominating set of size |S2| − 2 and we are done.
Lemma 18 directly implies the following theorem:

Theorem 19. Let $\mathcal{C}$ be a class of graphs that is closed under the graph transformation $f_1$ ($f_2$, respectively). If Identifying Code or Locating-Dominating-Set (Open Locating-Dominating Set, respectively) is NP-complete for graphs in $\mathcal{C}$, then it is also NP-complete for graphs in $\mathcal{C}$ that have diameter 2.

Theorem 19 can be applied to the classes of split graphs (for $f_1$), and interval graphs and permutation graphs (for both $f_1$ and $f_2$). By the results about split graphs from the literature, and the results about interval graphs and permutation graphs of Corollary 17, we have:

Corollary 20. Identifying Code and Locating-Dominating-Set are NP-complete for split graphs of diameter 2. Identifying Code, Locating-Dominating-Set and Open Locating-Dominating Set are NP-complete for interval graphs of diameter 2 and for permutation graphs of diameter 2.

We now give a similar reduction from Locating-Dominating-Set to Metric Dimension. Given a graph $G$, let $f_3(G)$ be the graph obtained from $G$ by adding two adjacent universal vertices $u, u'$ and then two non-adjacent vertices $v$ and $w$ that are only adjacent to $u$ and $u'$ (see Figure 9(c) for an illustration).

Lemma 21. For any graph $G$, $\dim(f_3(G)) = \gamma_{LD}(G) + 2$.

Proof. Let $S$ be a locating-dominating set of $G$. We claim that $S_3 = S \cup \{u, v\}$ is a resolving set of $f_3(G)$. Every vertex of $S_3$ is clearly distinguished. Every original vertex of $G$ is determined by a distinct set of vertices of $S$ that are at distance 1 from it. Vertex $u'$ is the only vertex at distance 1 from each vertex in $S_3$. Finally, vertex $w$ is the only vertex at distance 1 from $u$ and at distance 2 from all other vertices of $S_3$.

For the other direction, assume $B$ is a resolving set of $f_3(G)$. Then necessarily one of $u, u'$ (say $u$) belongs to $B$; similarly, one of $v, w$ (say $v$) belongs to $B$. Hence, if the restriction $B_G = B \cap V(G)$ is a locating-dominating set of $G$, we are done. Otherwise, since no vertex among $u, u', v, w$ may distinguish any pair of $G$, and since vertices of $G$ are at distance at most 2 in $f_3(G)$, all the sets $N[x] \cap B$ are distinct for $x \in V(G) \setminus B_G$. But $B_G$ is not a locating-dominating set, so there is a (unique) vertex $x$ of $G$ that is not dominated by $B_G$ in $G$. If $|B \cap \{u, u', v, w\}| \ge 3$, then $B_G \cup \{x\}$ is a locating-dominating set of size at most $|B| - 2$ and we are done. Otherwise, note that in $f_3(G)$, $x$ is at distance 1 from $u$ and at distance 2 from all other vertices of $B$. But this is also the case for $w$, which is not separated from $x$ by $B$, a contradiction. ⊓⊔
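The transformation $f_3$ of Lemma 21 fits the same sketch style as $f_1$ and $f_2$ above (again our own code; "u2" stands for $u'$, and the new labels are assumed unused in $G$):

def f3(G: nx.Graph) -> nx.Graph:
    """Add adjacent universal vertices u and u2, then two non-adjacent
    vertices v, w whose only neighbours are u and u2."""
    H = G.copy()
    for x in G.nodes:
        H.add_edge("u", x)   # u is universal
        H.add_edge("u2", x)  # u2 is universal
    H.add_edge("u", "u2")
    for y in ("v", "w"):
        H.add_edge("u", y)
        H.add_edge("u2", y)
    return H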
We obtain the following results:

Theorem 22. Let $\mathcal{C}$ be a class of graphs that is closed under the graph transformation $f_3$. If Locating-Dominating-Set is NP-complete for graphs in $\mathcal{C}$, then Metric Dimension is also NP-complete for graphs in $\mathcal{C}$ that have diameter 2.

Again, using the results about split graphs from the literature and the results about interval graphs and permutation graphs of Corollary 17, we have:

Corollary 23. Metric Dimension is NP-complete for split graphs of diameter 2, for interval graphs of diameter 2 and for permutation graphs of diameter 2.

3 Metric Dimension parameterized by solution size is FPT on interval graphs

The purpose of this section is to prove that Metric Dimension (parameterized by solution size) is FPT on interval graphs. We begin with preliminary results, before describing our algorithm and proving its correctness. The algorithm is based on dynamic programming over a path-decomposition.

3.1 Preliminaries

We start by stating a few properties and lemmas that are necessary for our algorithm.

3.1.1 Interval graphs

Given an interval graph $G$, we can assume that in its interval model, all endpoints are distinct, and that the intervals are closed intervals. Given an interval $I$, we will denote by $\ell(I)$ and by $r(I)$ its left and right endpoints, respectively. We define two natural total orderings of $V(G)$ based on this model: $x <_L y$ if and only if the left endpoint of $x$ is smaller than the left endpoint of $y$, and $x <_R y$ if and only if the right endpoint of $x$ is smaller than the right endpoint of $y$.

Given a graph $G$, its distance-power $G^d$ is the graph obtained from $G$ by adding an edge between each pair of vertices at distance at most $d$ in $G$. We will use the following result.

Theorem 24. Let $G$ be an interval graph with an interval model inducing orders $<_L$ and $<_R$, and let $d \ge 2$ be an integer. Then the power graph $G^d$ is an interval graph with an interval model inducing the same orders $<_L$ and $<_R$ as $G$ (that can be computed in linear time).

3.1.2 Tree-decompositions

Definition 25. A tree-decomposition of a graph $G$ is a pair $(T, \mathcal{X})$, where $T$ is a tree and $\mathcal{X} := \{X_t : t \in V(T)\}$ is a collection of subsets of $V(G)$ (called bags), satisfying the following conditions: (i) $\bigcup_{t \in V(T)} X_t = V(G)$; (ii) for every edge $uv \in E(G)$, there is a bag of $\mathcal{X}$ that contains both $u$ and $v$; (iii) for every vertex $v \in V(G)$, the set of bags containing $v$ induces a subtree of $T$.

Given a tree-decomposition $(T, \mathcal{X})$, the maximum size of a bag $X_t$ over all tree nodes $t$ of $T$, minus one, is called the width of $(T, \mathcal{X})$. The minimum width of a tree-decomposition of $G$ is the treewidth of $G$. The notion of tree-decomposition has been used extensively in algorithm design, especially via dynamic programming over the tree-decomposition.

We consider a rooted tree-decomposition by fixing a root of $T$ and orienting the tree edges from the root toward the leaves. A rooted tree-decomposition is nice (see Kloks [44]) if each node $t$ of $T$ has at most two children and falls into one of the four following types: (i) Join node: $t$ has exactly two children $t_1$ and $t_2$, and $X_t = X_{t_1} = X_{t_2}$. (ii) Introduce node: $t$ has a unique child $t'$, and $X_t = X_{t'} \cup \{v\}$. (iii) Forget node: $t$ has a unique child $t'$, and $X_t = X_{t'} \setminus \{v\}$. (iv) Leaf node: $t$ is a leaf node in $T$. Given a tree-decomposition, a nice tree-decomposition of the same width always exists and can be computed in linear time.

If $G$ is an interval graph, we can construct a tree-decomposition of $G$ (in fact, a path-decomposition) with special properties.

Proposition 26. Let $G$ be an interval graph with clique number $\omega$ and an interval model inducing orders $<_L$ and $<_R$. Then, $G$ has a nice tree-decomposition $(P, \mathcal{X})$ of width $\omega - 1$ that can be computed in linear time, where moreover: (a) $P$ is a path (hence there are no join nodes); (b) every bag is a clique; (c) going through $P$ from the leaf to the root, the order in which vertices are introduced in an introduce node corresponds to $<_L$; (d) going through $P$ from the leaf to the root, the order in which vertices are forgotten in a forget node corresponds to $<_R$; (e) the root's bag is empty, and the leaf's bag contains only one vertex.

Proof. Given a graph $G$, one can decide if it is an interval graph and, if so, compute a representation of it in linear time [11]. This also gives us the ordered set of endpoints of intervals of $G$. To obtain $(P, \mathcal{X})$, we first create the leaf node $t$, whose bag $X_t$ contains the interval with smallest left endpoint. We then go through the set of all endpoints of intervals of $G$, from the second smallest to the largest. Let $t$ be the last created node. If the new endpoint is a left endpoint $\ell(I)$, we create an introduce node $t'$ with $X_{t'} = X_t \cup \{I\}$. If the new endpoint is a right endpoint $r(I)$, we create a forget node $t'$ with $X_{t'} = X_t \setminus \{I\}$. In the end we create the root node as a forget node $t$ with $X_t = \emptyset$ that forgets the last interval of $G$.

Observe that one can associate to every node $t$ (except the root) a point $p$ of the real line, such that the bag $X_t$ contains precisely the set of intervals containing $p$: if $t$ is an introduce node, $p$ is the point $\ell(I)$ associated to the creation of $t$, and if $t$ is a forget node, it is the point $r(I) + \epsilon$, where $\epsilon$ is sufficiently small and $r(I)$ is the endpoint associated to the creation of $t$. This set forms a clique, proving Property (b). Furthermore, this implies that the maximum size of a bag is $\omega$, hence the width is at most $\omega - 1$ (and at least $\omega - 1$, since every clique must be included in some bag). Moreover, it is clear that the procedure is linear-time, and by construction, Properties (a), (c), (d), (e) are fulfilled.

Let us now show that $(P, \mathcal{X})$ is a tree-decomposition. It is clear that every vertex belongs to some bag, proving Property (i) of Definition 25. Moreover, let $u, v$ be two adjacent vertices of $G$, and assume $u <_L v$. Then, consider the introduce node $t$ of $P$ where $v$ is introduced. Since $u$ has started before $v$ but has not stopped before the start of $v$, both $u, v$ belong to $X_t$, proving Property (ii). Finally, note that a vertex $v$ appears exactly in all bags starting from the bag where $v$ is introduced, until the bag where $v$ is forgotten. Hence Property (iii) is fulfilled, and the proof is complete. ⊓⊔
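The construction in the proof of Proposition 26 is a single left-to-right sweep over interval endpoints. A minimal sketch under our own conventions (an interval model is a dict mapping vertex names to (left, right) pairs with all endpoints distinct; bags are returned from leaf to root):

def path_decomposition(intervals):
    """Sweep the sorted endpoints: a left endpoint introduces its
    interval, a right endpoint forgets it (Proposition 26)."""
    events = []
    for v, (l, r) in intervals.items():
        events.append((l, "introduce", v))
        events.append((r, "forget", v))
    events.sort()
    bags, current = [], set()
    for _, kind, v in events:
        if kind == "introduce":
            current.add(v)
        else:
            current.remove(v)
        bags.append(frozenset(current))
    return bags  # the last bag is the empty root bag

Each bag is the set of intervals containing the current sweep point, so by construction it is a clique, matching Property (b).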
The following lemma immediately follows from Theorem 24.

Lemma 27. Let $G$ be an interval graph with an interval model inducing orders $<_L$ and $<_R$, let $d \ge 1$ be an integer and let $(P, \mathcal{X})$ be a tree-decomposition of $G^d$ obtained by Proposition 26 (recall that by Theorem 24, $G^d$ is an interval graph, and it has an intersection model inducing the same orders $<_L$ and $<_R$). Then the following holds. (a) Let $t$ be an introduce node of $(P, \mathcal{X})$ with child $t'$, with $X_t = X_{t'} \cup \{v\}$. Then, $X_t$ contains every vertex $w$ in $G$ such that $d_G(v, w) \le d$ and $w <_L v$. (b) Let $t'$ be the child of a forget node $t$ of $(P, \mathcal{X})$, with $X_t = X_{t'} \setminus \{v\}$. Then, $X_{t'}$ contains every vertex $w$ in $G$ such that $d_G(v, w) \le d$ and $v <_R w$.

Proof. We prove (a); the proof of (b) is the same. By Theorem 24, we may assume that $<_L$ is the same in $G$ and $G^d$. By construction of $(P, \mathcal{X})$, the introduce node of $v$ contains all intervals $w$ of $G^d$ intersecting $v$ with $w <_L v$ in $G^d$. Hence $w <_L v$ in $G$ as well, and $d_G(v, w) \le d$. ⊓⊔

3.1.3 Lemmas for the algorithm

We now prove a few preliminary results necessary for the argumentation. We first start with a definition and a series of lemmas based on the linear structure of an interval graph, which will enable us to defer the decision-taking (about which vertex should belong to the solution in order to distinguish a specific vertex pair) to later steps of the dynamic programming.

Definition 28. Given a vertex $u$ of an interval graph $G$, the rightmost path $P_R(u)$ of $u$ is the path $u_0^R, \ldots, u_p^R$ where $u = u_0^R$ and, for every $u_i^R$ ($i \in \{0, \ldots, p-1\}$), $u_{i+1}^R$ is the neighbour of $u_i^R$ with the largest right endpoint; thus $u_p^R$ is the interval in $G$ with largest right endpoint. Similarly, we define the leftmost path $P_L(u) = u_0^L, \ldots, u_q^L$, where for every $u_i^L$ ($i \in \{0, \ldots, q-1\}$), $u_{i+1}^L$ is the neighbour of $u_i^L$ with the smallest left endpoint. Note that $P_R(u)$ and $P_L(u)$ are two shortest paths, from $u$ to $u_p^R$ and to $u_q^L$, respectively.
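Definition 28 is constructive: starting from $u$, repeatedly hop to the neighbour reaching furthest right. A minimal sketch, reusing the interval-dict convention of the sketch above (our code, not the paper's, and assuming a connected model):

def rightmost_path(intervals, u):
    """Compute P_R(u) by greedily extending to the neighbour with the
    largest right endpoint, until no neighbour reaches further right."""
    path = [u]
    while True:
        l, r = intervals[path[-1]]
        nbrs = [w for w, (wl, wr) in intervals.items()
                if w != path[-1] and wl <= r and l <= wr]
        best = max(nbrs, key=lambda w: intervals[w][1], default=None)
        if best is None or intervals[best][1] <= r:
            return path  # the current interval already reaches furthest right
        path.append(best)

The leftmost path $P_L(u)$ is symmetric: hop to the neighbour with the smallest left endpoint.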
Lemma 29. Let $u$ be an interval in an interval graph $G$ and $P_R(u) = u_0^R, \ldots, u_p^R$ be the rightmost path of $u$, and let $v$ be an interval starting after the end of $u_{i-1}^R$ ($i \in \{1, \ldots, p\}$), where $u_{i-1}^R \in P_R(u)$. Then $d(u, v) = d(u_i^R, v) + i$. Similarly, if $v$ ends before the start of an interval $u_{i-1}^L$ in $P_L(u) = u_0^L, \ldots, u_q^L$ ($i \in \{1, \ldots, q\}$), then $d(u, v) = d(u_i^L, v) + i$.

Proof. We prove the claim only for the first case; the second one is symmetric. Consider the shortest path from $u$ to $v$ obtained by choosing the interval intersecting $u$ that has the largest right endpoint, and iterating. This path coincides with $P_R(u)$ until it contains some interval $u_j^R$ such that $u_j^R$ intersects $v$. Since $v$ starts after the end of $u_{i-1}^R$, we have $i \le j$. Thus, the interval $u_i^R$ lies on a shortest path from $u$ to $v$, and hence $d(u, v) = d(u_i^R, v) + d(u, u_i^R) = d(u_i^R, v) + i$. ⊓⊔

Lemma 30. Let $u, v$ be a pair of intervals of an interval graph $G$ and $P_R(u) = u_0^R, \ldots, u_p^R$, $P_R(v) = v_0^R, \ldots, v_{p'}^R$ their corresponding rightmost paths (recall that $u_p^R = v_{p'}^R$). Assuming that $p \le p'$, for every $u_i^R \in P_R(u)$ and $v_i^R \in P_R(v)$ such that $i \in \{0, \ldots, p\}$, we have $d(u_i^R, v_i^R) \le d(u, v)$.

Proof. First note that, by letting $w = u_i^R$, we have $w_1^R = u_{i+1}^R$. Therefore, we only need to prove the claim for $i = 1$. If $u$ and $v$ are adjacent, then either $v = u_1^R$ (and we are done) or $u_1^R$ must end after $v$. Then, either $u_1^R$ intersects $v_1^R$, or $u_1^R = v_1^R$. In both cases, $d(u_1^R, v_1^R) \le 1$. If $u$ and $v$ are not adjacent, we can assume that $u$ ends before $v$ starts. Then, by Lemma 29, $d(u_1^R, v) = d(u, v) - 1$ and $d(u_1^R, v_1^R) \le d(u_1^R, v) + d(v, v_1^R) = d(u, v) - 1 + 1 = d(u, v)$. ⊓⊔

We say that a pair $u, v$ of intervals in an interval graph $G$ is separated by interval $x$ strictly from the right (strictly from the left, respectively) if $x$ starts after both right endpoints of $u, v$ ($x$ ends before both left endpoints of $u, v$, respectively). In other words, $x$ is not a neighbour of either $u$ or $v$. The next lemma is crucial for our algorithm.

Lemma 31. Let $u, v, x$ be three intervals in an interval graph $G$ and let $i$ be an integer such that $x$ starts after both right endpoints of $u_i^R \in P_R(u)$ and $v_i^R \in P_R(v)$. Then the three following facts are equivalent: (1) $x$ separates $u_i^R, v_i^R$; (2) for every $j$ with $0 \le j \le i$, $x$ separates $u_j^R, v_j^R$; (3) for some $j$ with $0 \le j \le i$, $x$ separates $u_j^R, v_j^R$. Similarly, assume that $x$ ends before both left endpoints of $u_i^L \in P_L(u)$ and $v_i^L \in P_L(v)$. Then the three following facts are equivalent: (i) $x$ separates $u_i^L, v_i^L$; (ii) for every $j$ with $0 \le j \le i$, $x$ separates $u_j^L, v_j^L$; (iii) for some $j$ with $0 \le j \le i$, $x$ separates $u_j^L, v_j^L$.

Proof. We prove only (1)–(3); the proof of (i)–(iii) is symmetric. Let $0 \le j \le i$, $u' = u_j^R$ and $v' = v_j^R$. Then $(u')_{i-j}^R = u_i^R$ and $(v')_{i-j}^R = v_i^R$. By Lemma 29, $d(u_j^R, x) = d(u_i^R, x) + (i - j)$ and similarly $d(v_j^R, x) = d(v_i^R, x) + (i - j)$. Hence $x$ separates $u_i^R$ and $v_i^R$ if and only if it separates $u_j^R$ and $v_j^R$, which implies the lemma. ⊓⊔
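A toy instance of our own makes Lemma 29 concrete; the computation below is our illustration, not part of the paper:

\[
u = [0, 1], \quad a = [0.5, 2], \quad b = [1.5, 3], \quad v = [2.5, 4], \qquad P_R(u) = (u, a, b, v).
\]
\[
\ell(v) = 2.5 > r(a) = 2 \;\Longrightarrow\; d(u, v) = d(u_2^R, v) + 2 = d(b, v) + 2 = 1 + 2 = 3.
\]

Lemma 31 is this identity applied to both members of a pair: an interval $x$ lying strictly to the right shifts its distances to $u_j^R$ and $v_j^R$ by the same amount $i - j$, so separation of the pair is preserved along the rightmost paths.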
We now introduce a local version of resolving sets that will be used in our algorithm.

Definition 32. A distance-2 resolving set is a set $S$ of vertices such that for each pair $u, v$ of vertices at distance at most 2, there is a vertex $s \in S$ with $d(u, s) \ne d(v, s)$.

Using the following lemma, we can manage to "localize" the dynamic programming, as we will only need to distinguish pairs of vertices that are present together in one bag.

Lemma 33. Any distance-2 resolving set of an interval graph $G$ is a resolving set of $G$.

Proof. Assume to the contrary that $S$ is a distance-2 resolving set of an interval graph $G$ but not a resolving set. This means that there is a pair of vertices $u, v$ at distance at least 3 that are not separated by any vertex of $S$. Among all such pairs, we choose one, say $\{u, v\}$, such that $d(u, v)$ is minimized. Without loss of generality, we assume that $u$ ends before $v$ starts. Consider $u_1^R$ ($v_1^L$, respectively), the interval intersecting $u$ ($v$, respectively) that has the largest right endpoint (smallest left endpoint, respectively). We have $u_1^R \ne v_1^L$ (since $d(u, v) \ge 3$) and $d(u_1^R, v_1^L) = d(u, v) - 2 < d(u, v)$. By minimality, $u_1^R$ and $v_1^L$ are separated by some vertex $s \in S$. But $s$ does not separate $u$ and $v$, thus $s \notin \{u_1^R, v_1^L\}$. Without loss of generality, we can assume that $d(u_1^R, s) < d(v_1^L, s)$. In particular, $d(v_1^L, s) \ge 2$ and $s$ ends before $v_1^L$ starts. Thus, by Lemma 29, $d(v, s) = d(v_1^L, s) + 1$. However, we also have $d(u, s) \le d(u_1^R, s) + 1 \le d(v_1^L, s) < d(v, s)$. Hence $s$ separates $u$ and $v$, a contradiction. ⊓⊔

The next lemma, which is a slightly modified version of a result from our companion paper, enables us to upper-bound the size of the bags in our tree-decompositions, which will induce subgraphs of $G$ of diameter at most 4.

Lemma 34. Let $G$ be an interval graph with a resolving set of size $k$, and let $B \subseteq V(G)$ be a subset of vertices such that for each pair $u, v \in B$, $d_G(u, v) \le d$. Then $|B| \le 4dk^2 + (2d + 3)k + 1$.

Proof. Let $s_1, \ldots, s_k$ be the elements of a resolving set $S$ of size $k$ in $G$. Consider an interval representation of $G$, and let $\sigma_B$ be the minimal segment of the real line containing all intervals corresponding to vertices of $B$. For each $i$ in $\{1, \ldots, k\}$, consider the leftmost and rightmost paths $P_L(s_i)$ and $P_R(s_i)$, as defined in Definition 28. Let $L_i$ be the ordered set of left endpoints of intervals of $P_L(s_i)$, and let $R_i$ be the ordered set of right endpoints of intervals of $P_R(s_i)$. Note that the intervals at distance $j$ from $s_i$ in $G$ are exactly the intervals finishing between $\ell(u_{j+1}^L)$ and $\ell(u_j^L)$, or starting between $r(u_j^R)$ and $r(u_{j+1}^R)$ (where $u_j^L$ and $u_j^R$ denote the $j$-th intervals of $P_L(s_i)$ and $P_R(s_i)$, respectively). Hence, for any interval of $G$, its distance to $s_i$ is uniquely determined by the position of its right endpoint in the ordered set $L_i$ and the position of its left endpoint in the ordered set $R_i$.

Moreover, note that, since any two vertices in $B$ are at distance at most $d$, $\sigma_B$ may contain at most $d$ points of $L_i$ and at most $d$ points of $R_i$. Therefore, $\sigma_B$ may contain at most $2kd$ points of $\bigcup_{1 \le i \le k}(L_i \cup R_i)$. This set of points defines a natural partition $\mathcal{P}$ of $\sigma_B$ into at most $2kd + 1$ sub-segments, and any interval of $B$ is uniquely determined by the positions of its two endpoints in $\mathcal{P}$ (if two intervals start and end in the same parts of $\mathcal{P}$, they are not separated by $S$, a contradiction).

Let $I \in B \setminus S$. For a fixed $i \in \{1, \ldots, k\}$, by definition of the sets $L_i$, the interval $I$ cannot contain two points of $L_i$, and similarly, it cannot contain two points of $R_i$. Thus, $I$ contains at most $2k$ points of the union of all the sets $L_i$ and $R_i$. Therefore, if $P$ denotes a part of $\mathcal{P}$, there are at most $2k + 1$ intervals with left endpoints in $P$. In total, there are at most $(2kd + 1) \cdot (2k + 1)$ intervals in $B \setminus S$, and hence $|B| \le (2kd + 1) \cdot (2k + 1) + k = 4dk^2 + (2d + 3)k + 1$. ⊓⊔
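Definition 32 and Lemma 33 can be sanity-checked by brute force on small connected interval graphs. A minimal sketch (our code; networkx is assumed, and $G$ is assumed connected):

import networkx as nx
from itertools import combinations

def is_resolving(G, S, max_dist=None):
    """True if every pair u, v (restricted to pairs with d(u, v) <= max_dist
    when max_dist is given) is separated by some s in S."""
    dist = dict(nx.all_pairs_shortest_path_length(G))
    for u, v in combinations(G.nodes, 2):
        if max_dist is not None and dist[u][v] > max_dist:
            continue  # with max_dist=2 this checks Definition 32 only
        if all(dist[u][s] == dist[v][s] for s in S):
            return False  # u and v have identical distance vectors to S
    return True

Lemma 33 states that, on an interval graph, is_resolving(G, S, max_dist=2) already implies is_resolving(G, S).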
3.2 The algorithm

We are now ready to describe our algorithm.

Theorem 35. Metric Dimension can be solved in time $2^{O(k^4)} n$ on interval graphs, i.e. it is FPT on this class when parameterized by the solution size $k$.

Proof. Let $(P, \mathcal{X})$ be a path-decomposition of $G^4$ (which by Theorem 24 is an interval graph) obtained using Proposition 26. The algorithm is a bottom-up dynamic programming on $(P, \mathcal{X})$. By Proposition 26(b), every bag of $(P, \mathcal{X})$ is a clique of $G^4$ (i.e. it induces a subgraph of diameter at most 4 in $G$), and hence by Lemma 34 it has $O(k^2)$ vertices. Thanks to Lemma 31, we can "localize" the problem by considering for separation only pairs of vertices present together in the current bag.

Let us now be more precise. For a node $t$ in $P$, we denote by $\mathcal{P}(X_t)$ the pairs of intervals in $X_t$ that are at distance at most 2 (in $G$). For each node $t$, we compute a set of configurations using the configurations of the child of $t$ in $P$. A configuration contains full information about the local solution on $X_t$, but also stores necessary information about the vertex pairs that still need to be separated. More precisely, a configuration $C = (S, sep, toSepR, cnt)$ of $t$ is a tuple where:

– $S \subseteq X_t$ contains the vertices of the sought solution belonging to $X_t$;
– $sep : \mathcal{P}(X_t) \to \{0, 1, 2\}$ assigns, to every pair in $\mathcal{P}(X_t)$, value 0 if the pair has not yet been separated, value 2 if it has been separated strictly from the left, and value 1 otherwise;
– $toSepR : \mathcal{P}(X_t) \to \{0, 1\}$ assigns, to every pair in $\mathcal{P}(X_t)$, value 1 if the pair needs to be separated strictly from the right (and is not yet separated), and value 0 otherwise;
– $cnt$ is an integer counting the total number of vertices in the partial solution that has led to $C$.

Starting with the leaf of $P$, for each node our algorithm goes through all possibilities of choosing $S$; however, $sep$, $toSepR$ and $cnt$ are computed along the way. At each new visited node $t$ of $P$, a set of configurations is constructed from the configuration set of the child of $t$. The algorithm makes sure that all the information is consistent, and that configurations that will not lead to a valid resolving set (or with $cnt > k$) are discarded.

Leaf node: For the leaf node $t$, since by Proposition 26(e) $X_t = \{v\}$, we create two configurations $C_1 = (\emptyset, sep, toSepR, 0)$ and $C_2 = (\{v\}, sep, toSepR, 1)$ (where $sep$ and $toSepR$ are empty in both configurations).
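As a data model, a configuration is a small record; the following sketch (our own naming, with the values as in the definition above) also shows the two leaf configurations:

from dataclasses import dataclass

@dataclass(frozen=True)
class Config:
    S: frozenset   # solution vertices inside the current bag
    sep: tuple     # sorted (pair, value) entries over P(X_t): 0 / 1 / 2
    toSepR: tuple  # sorted (pair, flag) entries over P(X_t): 0 / 1
    cnt: int       # size of the whole partial solution so far

def leaf_configs(v):
    # Proposition 26(e): the leaf bag is {v}; P(X_t) is empty,
    # so sep and toSepR carry no entries yet.
    return [Config(frozenset(), (), (), 0),
            Config(frozenset({v}), (), (), 1)]

Keeping sep and toSepR as sorted tuples makes configurations hashable, so duplicates differing only in cnt are easy to detect.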
Introduce node: Let $t$ be an introduce node with child $t'$, where $X_t = X_{t'} \cup \{v\}$. For every configuration $(S', sep', toSepR', cnt')$ of $t'$, we create two configurations $C_1 = (S' \cup \{v\}, sep_1, toSepR_1, cnt' + 1)$ (corresponding to the case where $v$ is in the partial solution) and $C_2 = (S', sep_2, toSepR_2, cnt')$ (where $v$ is not added to the partial solution).

The entries of $sep_1$ and $toSepR_1$ in $C_1$ are first copied from $sep'$ and $toSepR'$, and updated by checking, for every pair $x, y$ of $\mathcal{P}(X_t)$, whether $v$ separates $x, y$ (note that $v$ cannot separate any such pair strictly from the left). Also note that $v$ is separated from all other vertices since it belongs to the solution, but for the pairs $v, y$ we still need to check whether $v$ and $y$ are separated strictly from the left (in which case we set $sep_1(v, y) = 2$, otherwise $sep_1(v, y) = 1$). To do this, we compute $v_1^L$ and $y_1^L$ (by Lemma 27(a) they both belong to $X_t$), and we first check whether they are separated strictly from the left, which is true if and only if $sep'(v_1^L, y_1^L) = 2$. If $v_1^L$ and $y_1^L$ are separated strictly from the left, then so are $v$ and $y$. Otherwise, if $v$ and $y$ are nevertheless separated strictly from the left, there must be an interval $z$ ending before the left endpoint of $y$ and separating $v, y$. Since $z$ does not separate $v_1^L$ and $y_1^L$ strictly from the left, $z$ must be adjacent to $y_1^L$ and thus $d_G(v, z) \le 4$ (since $d_G(v, y) \le 2$). Then, by Lemma 27, $z$ belongs to $X_t$, thus it is enough to test whether any vertex of $S'$ separates $v, y$ strictly from the left. Moreover, we let $toSepR_1(v, y) = 0$.

For $C_2$, we must compute $sep_2(v, w)$ and $toSepR_2(v, w)$ for every $w$ such that $(v, w) \in \mathcal{P}(X_t)$. To do so, we consider the first intervals of $P_L(v)$ and $P_L(w)$. We let $sep_2(v, w) = 2$ if, for the pair $v_1^L, w_1^L$ with $v_1^L \in P_L(v)$ and $w_1^L \in P_L(w)$, we have $sep'(v_1^L, w_1^L) = 2$, or if some vertex of $S'$ separates $v, w$ strictly from the left. Otherwise, if $v, w$ are separated by a neighbour of $w$, we set $sep_2(v, w) = 1$. We also compute $toSepR_2$ from $toSepR'$ by letting $toSepR_2(v, w) = 0$ and copying all other values.

If $cnt' + 1 > k$, $C_1$ is discarded. The remaining valid configurations among $C_1, C_2$ are added to the set of configurations of $t$. If in this set there are two configurations that differ only in their value of $cnt$, we only keep the one with the smallest value of $cnt$.

Forget node: Let $t$ be a forget node and $t'$ be its child, with $X_t = X_{t'} \setminus \{v\}$. For every configuration $(S', sep', toSepR', cnt')$ of $t'$, we create the configuration $(S' \setminus \{v\}, sep, toSepR, cnt')$. We create $sep$ and $toSepR$ by copying all entries $sep'(x, y)$ and $toSepR'(x, y)$ such that $(x, y) \in \mathcal{P}(X_t)$. For every vertex $w$ in $X_t$ such that $d_G(v, w) \le 2$, if $sep'(v, w) = 0$ or $toSepR'(v, w) = 1$ (i.e. $v, w$ still need to be separated strictly from the right), we determine $v_1^R$ and $w_1^R$ and let $toSepR(v_1^R, w_1^R) = 1$ (note that $d_G(v, v_1^R) = 1$, $d_G(v, w_1^R) \le 3$, $v <_R v_1^R$ and $v <_R w_1^R$, hence by Lemma 27(b), $v_1^R, w_1^R \in X_{t'}$ and hence $v_1^R, w_1^R \in X_t$). However, if $v_1^R = w_1^R$, we discard the current configuration. Indeed, by Lemma 31, $v, w$ cannot be separated strictly from the right: any shortest path to any of $v, w$ from some vertex $x$ whose interval starts after both right endpoints of $v, w$ must go through $v_1^R = w_1^R$, and hence $d(x, v_1^R) = d(x, w_1^R)$. We also discard the configuration if $v_1^R$ or $w_1^R$ does not exist (i.e. $v$ or $w$ is the rightmost interval of $G$). Finally, if there are two configurations that differ only in their value of $cnt$, again we only keep the one with the smallest value of $cnt$.

Root node: At the root node $t$, since by Proposition 26(e) $X_t = \emptyset$, $t$ has at most one configuration. We output "yes" only if this configuration exists and if $cnt \le k$. Otherwise, we output "no".
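The forget-node update is the step where Lemma 31 does its work, so it is worth seeing in code. A schematic sketch under our own conventions, looser than the frozen record above to keep the update readable: configurations are plain (S, sep, toSepR, cnt) tuples with dict-valued sep and toSepR keyed by frozenset pairs, dist(v, w) returns $d_G(v, w)$, and right_succ(x) is assumed to return $x_1^R$ (None for the rightmost interval):

def forget_update(config, v, bag_t, dist, right_succ):
    """Forget v: any pair (v, w) not yet separated must from now on be
    separated strictly from the right; defer that duty to (v_1^R, w_1^R),
    or discard the configuration if that is impossible (Lemma 31)."""
    S, sep, toSepR, cnt = config
    new_sep = {p: x for p, x in sep.items() if v not in p}
    new_toSepR = {p: x for p, x in toSepR.items() if v not in p}
    for w in bag_t:  # bag_t is X_t, which no longer contains v
        if dist(v, w) > 2:
            continue
        pair = frozenset((v, w))
        if sep.get(pair, 1) == 0 or toSepR.get(pair, 0) == 1:
            v1, w1 = right_succ(v), right_succ(w)
            if v1 is None or w1 is None or v1 == w1:
                return None  # (v, w) can never be separated strictly from the right
            new_toSepR[frozenset((v1, w1))] = 1
    return (S - {v}, new_sep, new_toSepR, cnt)

A returned value of None stands for a discarded configuration.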
We now analyze the algorithm.

Correctness. We claim that $G$ has a resolving set of size at most $k$ if and only if the root node of $P$ contains a valid configuration. By Lemma 33, this is equivalent to proving that $G$ has an optimal distance-2 resolving set of size at most $k$ if and only if the root node of $P$ contains a valid configuration.

First, assume that the dynamic programming has succeeded, i.e. the root bag contains a valid configuration $C$, and assume that $C$ has smallest value $cnt$. We want to prove that the union $S$ of all partial solutions of the configurations that have led to the computation of $C$ is a valid optimal solution.

We first prove that every pair $u, v$ of vertices with $d_G(u, v) \le 2$ is separated by $S$. Consider the forget node at which the first vertex of the pair was forgotten, with child $t'$: since the corresponding configuration $C_{t'}$ was not discarded, either we had $sep'(u, v) > 0$ in $C_{t'}$, or the algorithm has set $toSepR(u_1^R, v_1^R) = 1$, in which case $u_1^R \ne v_1^R$.

Assume we had $sep'(u, v) = 1$. Then, in some configuration $C_{t''}$ that has led to computing $C_{t'}$ (possibly $t' = t''$), $u$ and $v$ were separated by some vertex in $S$ belonging to $C_{t''}$, and we are done. If $sep'(u, v) = 2$, similarly, either $u, v$ have been separated by some vertex of $S$ belonging to a (possibly earlier) configuration, or we had $sep(u_i^L, v_i^L) = 2$ for some $i$, in which case by Lemma 31 we are also done. If, however, the algorithm has set $toSepR(u_1^R, v_1^R) = 1$, recall that, unless $u_1^R, v_1^R$ is separated strictly from the right in some bag, when we forget $u_1^R$ we set $toSepR(u_2^R, v_2^R) = 1$. Hence, since $C$ was a valid configuration (and has not been discarded), at some step we have separated $u_i^R, v_i^R$ strictly from the right, which by Lemma 31 implies that $u, v$ are separated by $S$, and we are done.

Moreover, $S$ is optimal because we have chosen $C$ so as to minimize the size $cnt$ of the overall solution. At each step, the algorithm discards, among equivalent configurations, the ones with larger values of $cnt$, ensuring that the size of the solution is minimized. This proves our claim.

For the converse, assume that $G$ has an optimal distance-2 resolving set $S$ of size at most $k$. We will need the following claim.

Claim 36. Let $u, v$ be a pair of vertices with $d_G(u, v) \le 2$. Then any vertex $x$ that may separate $u, v$, but neither strictly from the right nor strictly from the left, is present in some bag together with both $u, v$.

Proof of claim. Necessarily, $x$ is a neighbour of one of $u, v$ in $G$. Hence $d_G(x, u) \le 3$ and $d_G(x, v) \le 3$. If $x <_L v$, by Lemma 27(a), $x, u, v$ are present in the bag where $v$ is introduced. If $v <_L x$, similarly, $x, u, v$ are present in the bag where $x$ is introduced. (♦)

We will prove that some configuration $C$ was computed using a series of configurations where, for each node $t$ of $P$, the right subset $S \cap X_t$ was guessed. By contradiction, if this was not the case, then at some step of the algorithm we would have discarded a configuration $C'$ although it arose from guessing the correct partial solution of $S$. Since $S$ is optimal, $C'$ was not discarded because of a copy of $C'$ with a different value of the counter $cnt$ (otherwise this copy would lead to a solution strictly smaller than $S$). Hence the discarding of $C'$ happened at a forget node $t$. Assume that $t$ is a forget node where vertex $v$ was forgotten (and let $t'$ be the child of $t$ in $P$). This happens only if, for some $w \in X_t$ with $d_G(v, w) \le 2$, we had either (i) $sep'(v, w) = 0$ and $v_1^R = w_1^R$, or (ii) $toSepR(v, w) = 1$ and $v_1^R = w_1^R$.

If (i) holds, then $v, w$ are considered not to be separated, although they are actually separated (by our assumption on $C'$). Since $v_1^R = w_1^R$, $v_1^R$ and $w_1^R$ cannot be separated strictly from the right, hence by Lemma 31, $v, w$ are not separated strictly from the right. If they are not separated strictly from the left either, Claim 36 implies a contradiction, because the vertex separating $v, w$ was present together in a bag with $v, w$, and hence we must have $sep'(v, w) = 1$. Hence, $v, w$ are separated strictly from the left. But again by Lemma 31, this means that some vertices $v_i^L, w_i^L$ in $P_L(v) \times P_L(w)$ have been separated strictly from the left (assume that $i$ is maximal with this property).
Since by Lemma 30 $d_G(v_i^L, w_i^L) \le 2$, by Lemma 27 these two vertices were present in some bag simultaneously, together with the vertex that separates them strictly from the left (which has distance at most 4 from $w_i^L$). Then, in the configuration corresponding to this bag, $sep(v_i^L, w_i^L) = 2$, and we had $sep'(v, w) = 2$ in $C'$, a contradiction.

If (ii) holds, there exists a pair $x, y$ such that in some earlier configuration we had $sep(x, y) = 0$, $v = x_i^R \in P_R(x)$ and $w = y_i^R \in P_R(y)$. By the same reasoning as for (i), we obtain a contradiction. This proves this side of the implication, and completes the proof of correctness.

Running time. At each step of the dynamic programming, we compute the configurations of a bag from the set of configurations of the child bag. The computation of each configuration is polynomial in the size of the current bag of $(P, \mathcal{X})$. Since a configuration is precisely determined by a tuple $(S, sep, toSepR)$ (if there are two configurations where only $cnt$ differs, we only keep the one with smallest value), there are at most $2^{|X_t|} \cdot 3^{|X_t|^2} \cdot 2^{|X_t|^2} \le 3^{2|X_t|^2}$ configurations for a bag $X_t$. Hence, in total, the running time is upper-bounded by $2^{O(b^2)} n$, where $b$ is the maximum size of a bag in $(P, \mathcal{X})$. Since any bag induces a subgraph of $G$ of diameter at most 4, by Lemma 34, $b = O(k^2)$. Therefore $2^{O(b^2)} n = 2^{O(k^4)} n$, as claimed. ⊓⊔

4 Conclusion

We proved that Locating-Dominating-Set, Open Locating-Dominating Set, Identifying Code and Metric Dimension are NP-complete even for interval graphs that have diameter 2 and for permutation graphs that have diameter 2. This is in contrast to related problems such as Dominating Set, which is linear-time solvable both on interval graphs and on permutation graphs. However, we do not know their complexity for unit interval graphs or bipartite permutation graphs. Note that both Locating-Dominating-Set and Metric Dimension are polynomial-time solvable on chain graphs, a subclass of bipartite permutation graphs [25]. Probably the same approach would also work for Open Locating-Dominating Set and Identifying Code.

Contrary to what we claimed in the conference version of this paper [28], our reduction gadgets are not interval graphs and permutation graphs at the same time. Hence, we leave it as an open question to determine the complexity of the studied problems when restricted to graphs that are both interval and permutation graphs. Similarly, it could be interesting to determine their complexity for graphs that are both split graphs and interval graphs, or both split graphs and permutation graphs. We remark that our generic reduction would also apply to related problems that have been considered in the literature, such as Locating-Total Dominating Set or Differentiating-Total Dominating Set.

Regarding our positive result that Metric Dimension parameterized by the solution size is FPT on interval graphs, an interesting question is whether it can be extended to other graph classes, such as permutation graphs. Another interesting class is that of chordal graphs, since it is a proper superclass of both interval graphs and split graphs, both of which admit an FPT algorithm for Metric Dimension. During the revision of this paper, it was brought to our attention that in a recent paper, Belmonte, Fomin, Golovach and Ramanujan [6] have answered these questions by showing that for any class of graphs of bounded tree-length, Metric Dimension is FPT when parameterized by the solution size.
Examples of such classes are the ones of chordal graphs, asteroidal triple-free graphs and permutation graphs.

Acknowledgements. We thank Adrian Kosowski for helpful preliminary discussions on the topic of this paper. We are also grateful to the reviewers for their useful comments, which subsequently made the paper clearer.

References

1. G. Agnarsson, P. Damaschke and M. M. Halldórsson. Powers of geometric intersection graphs and dispersion algorithms. Discrete Applied Mathematics 132(1–3):3–16, 2003.
2. D. Auger. Minimal identifying codes in trees and planar graphs with large girth. European Journal of Combinatorics 31(5):1372–1384, 2010.
3. L. Babai. On the complexity of canonical labeling of strongly regular graphs. SIAM Journal on Computing 9(1):212–216, 1980.
4. E. Bampas, D. Bilò, G. Drovandi, L. Gualà, R. Klasing and G. Proietti. Network verification via routing table queries. Proceedings of the 18th International Colloquium on Structural Information and Communication Complexity, SIROCCO 2011, LNCS 6796:270–281, 2011.
5. Z. Beerliova, F. Eberhard, T. Erlebach, A. Hall, M. Hoffmann, M. Mihalák and L. S. Ram. Network discovery and verification. IEEE Journal on Selected Areas in Communications 24(12):2168–2181, 2006.
6. R. Belmonte, F. V. Fomin, P. A. Golovach and M. S. Ramanujan. Metric dimension of bounded width graphs. Proceedings of the 40th International Symposium on Mathematical Foundations of Computer Science, MFCS 2015, LNCS 9235:115–126, 2015.
7. T. Y. Berger-Wolf, M. Laifenfeld and A. Trachtenberg. Identifying codes and the set cover problem. Proceedings of the 44th Annual Allerton Conference on Communication, Control and Computing, Monticello, USA, September 2006.
8. N. Bertrand, I. Charon, O. Hudry and A. Lobstein. 1-identifying codes on trees. Australasian Journal of Combinatorics 31:21–35, 2005.
9. B. Bollobás and A. D. Scott. On separating systems. European Journal of Combinatorics 28:1068–1071, 2007.
10. J. A. Bondy. Induced subsets. Journal of Combinatorial Theory, Series B 12(2):201–202, 1972.
11. K. S. Booth and G. S. Lueker. Testing for the consecutive ones property, interval graphs, and graph planarity using PQ-tree algorithms. Journal of Computer and System Sciences 13(3):335–379, 1976.
12. N. Bousquet, A. Lagoutte, Z. Li, A. Parreau and S. Thomassé. Identifying codes in hereditary classes of graphs and VC-dimension. SIAM Journal on Discrete Mathematics 29(4):2047–2064, 2015.
13. A. Brandstädt, V. B. Le and J. Spinrad. Graph Classes: A Survey. SIAM Monographs on Discrete Mathematics and Applications, 1999.
14. I. Charon, O. Hudry and A. Lobstein. Minimizing the size of an identifying or locating-dominating code in a graph is NP-hard. Theoretical Computer Science 290(3):2109–2120, 2003.
15. E. Charbit, I. Charon, G. Cohen, O. Hudry and A. Lobstein. Discriminating codes in bipartite graphs: bounds, extremal cardinalities, complexity. Advances in Mathematics of Communications 2(4):403–420, 2008.
16. G. Chartrand, L. Eroh, M. Johnson and O. Oellermann. Resolvability in graphs and the metric dimension of a graph. Discrete Applied Mathematics 105(1–3):99–113, 2000.
17. M. Chellali. On locating and differentiating-total domination in trees. Discussiones Mathematicae Graph Theory 28(3):383–392, 2008.
18. G. Cohen, I. Honkala, A. Lobstein and G. Zémor. On identifying codes. Proceedings of the DIMACS Workshop on Codes and Association Schemes, Series in Discrete Mathematics and Theoretical Computer Science 56:97–109, 2001.
19. C. Colbourn, P. J. Slater and L. K. Stewart. Locating-dominating sets in series-parallel networks. Congressus Numerantium 56:135–162, 1987.
20. B. Courcelle. The monadic second-order logic of graphs. I. Recognizable sets of finite graphs. Information and Computation 85(1):12–75, 1990.
21. J. Diaz, O. Pottonen, M. Serna and E. J. van Leeuwen. On the complexity of metric dimension. Proceedings of the 20th European Symposium on Algorithms, ESA 2012, LNCS 7501:419–430, 2012.
22. R. G. Downey and M. R. Fellows. Fundamentals of Parameterized Complexity. Springer, 2013.
23. D. Eppstein. Metric dimension parameterized by max leaf number. Journal of Graph Algorithms and Applications 19(1):313–323, 2015.
24. L. Epstein, A. Levin and G. J. Woeginger. The (weighted) metric dimension of graphs: hard and easy cases. Algorithmica 72(4):1130–1171, 2015.
25. H. Fernau, P. Heggernes, P. van't Hof, D. Meister and R. Saei. Computing the metric dimension for chain graphs. Information Processing Letters 115:671–676, 2015.
26. F. Foucaud. Decision and approximation complexity for identifying codes and locating-dominating sets in restricted graph classes. Journal of Discrete Algorithms 31:48–68, 2015.
27. F. Foucaud, S. Gravier, R. Naserasr, A. Parreau and P. Valicov. Identifying codes in line graphs. Journal of Graph Theory 73(4):425–448, 2013.
28. F. Foucaud, G. Mertzios, R. Naserasr, A. Parreau and P. Valicov. Algorithms and complexity for metric dimension and location-domination on interval and permutation graphs. Proceedings of the 41st International Workshop on Graph-Theoretic Concepts in Computer Science, WG 2015, LNCS, to appear.
29. F. Foucaud, G. Mertzios, R. Naserasr, A. Parreau and P. Valicov. Identification, location-domination and metric dimension on interval and permutation graphs. I. Bounds. 2015.
30. M. R. Garey and D. S. Johnson. Computers and Intractability: A Guide to the Theory of NP-completeness. W. H. Freeman, 1979.
31. M. C. Golumbic. Algorithmic Graph Theory and Perfect Graphs. Elsevier, 2004.
32. S. Gravier, R. Klasing and J. Moncel. Hardness results and approximation algorithms for identifying codes and locating-dominating codes in graphs. Algorithmic Operations Research 3(1):43–50, 2008.
33. M. Habib and C. Paul. A simple linear time algorithm for cograph recognition. Discrete Applied Mathematics 145(2):183–197, 2005.
34. F. Harary and R. A. Melter. On the metric dimension of a graph. Ars Combinatoria 2:191–195, 1976.
35. S. Hartung. Exploring parameter spaces in coping with computational intractability. PhD thesis, TU Berlin, Germany, 2014.
36. S. Hartung and A. Nichterlein. On the parameterized and approximation hardness of metric dimension. Proceedings of the IEEE Conference on Computational Complexity, CCC 2013:266–276, 2013.
37. M. A. Henning and N. J. Rad. Locating-total domination in graphs. Discrete Applied Mathematics 160:1986–1993, 2012.
38. M. A. Henning and A. Yeo. Distinguishing-transversal in hypergraphs and identifying open codes in cubic graphs. Graphs and Combinatorics 30(4):909–932, 2014.
39. S. Hoffmann and E. Wanke. Metric dimension for Gabriel unit disk graphs is NP-complete. Proceedings of ALGOSENSORS 2012:90–92, 2012.
40. R. M. Karp. Reducibility among combinatorial problems. In R. E. Miller and J. W. Thatcher, editors, Complexity of Computer Computations, pages 85–103. Plenum Press, 1972.
41. M. G. Karpovsky, K. Chakrabarty and L. B. Levitin. On a new class of codes for identifying vertices in graphs. IEEE Transactions on Information Theory 44:599–611, 1998.
42. S. Khuller, B. Raghavachari and A. Rosenfeld. Landmarks in graphs. Discrete Applied Mathematics 70(3):217–229, 1996.
43. J. H. Kim, O. Pikhurko, J. Spencer and O. Verbitsky. How complex are random graphs in first-order logic? Random Structures and Algorithms 26(1–2):119–145, 2005.
44. T. Kloks. Treewidth, Computations and Approximations. Springer, 1994.
45. P. Manuel, B. Rajan, I. Rajasingh and Chris Monica M. On minimum metric dimension of honeycomb networks. Journal of Discrete Algorithms 6(1):20–27, 2008.
46. B. M. E. Moret and H. D. Shapiro. On minimizing a set of tests. SIAM Journal on Scientific and Statistical Computing 6(4):983–1003, 1985.
47. T. Müller and J.-S. Sereni. Identifying and locating-dominating codes in (random) geometric networks. Combinatorics, Probability and Computing 18(6):925–952, 2009.
48. R. Niedermeier. Invitation to Fixed-Parameter Algorithms. Oxford University Press, 2006.
49. A. Rényi. On random generating elements of a finite Boolean algebra. Acta Scientiarum Mathematicarum Szeged 22:75–81, 1961.
50. S. J. Seo and P. J. Slater. Open neighborhood locating-dominating sets. The Australasian Journal of Combinatorics 46:109–120, 2010.
51. P. J. Slater. Leaves of trees. Congressus Numerantium 14:549–559, 1975.
52. P. J. Slater. Domination and location in acyclic graphs. Networks 17(1):55–64, 1987.
53. P. J. Slater. Dominating and reference sets in a graph. Journal of Mathematical and Physical Sciences 22(4):445–455, 1988.
54. J. Spinrad. Bipartite permutation graphs. Discrete Applied Mathematics 18(3):279–292, 1987.
55. J. Suomela. Approximability of identifying codes and locating-dominating codes. Information Processing Letters 103(1):28–33, 2007.
56. R. Ungrangsi, A. Trachtenberg and D. Starobinski. An implementation of indoor location detection systems based on identifying codes. Proceedings of Intelligence in Communication Systems, INTELLCOMM 2004, LNCS 3283:175–189, 2004.
2447
https://www.msdmanuals.com/home/women-s-health-issues/cancers-of-the-female-reproductive-system/cervical-cancer
Cervical Cancer

By Pedro T. Ramirez, MD, Houston Methodist Hospital; Gloria Salvo, MD, MD Anderson Cancer Center
Reviewed/Revised Oct 2023 | Modified Aug 2025

Cervical cancer develops in the cervix (the lower part of the uterus). Most cervical cancers are caused by human papillomavirus (HPV) infection.

- Cervical cancer usually results from infection with the human papillomavirus (HPV), which is transmitted during sexual contact.
- The first symptom is usually irregular vaginal bleeding, often after sexual activity, but symptoms may not occur until the cancer has enlarged or spread.
- Cervical cancer screening tests (Papanicolaou [Pap] tests and/or HPV testing) can usually detect abnormalities, which are then biopsied.
- Treatment usually involves surgery to remove the cancer and often the surrounding tissue, and, if tumors are large or have spread, radiation therapy and chemotherapy.

(See also Overview of Female Reproductive System Cancers.)

The cervix is the lower part of the uterus. It extends into the vagina. In the United States, cervical cancer (cervical carcinoma) is the third most common gynecologic cancer among all women and is common among younger women. The average age at diagnosis is about 50 years, but it is most often diagnosed in women aged 35 to 44 years. Worldwide, most cervical cancer cases (almost 85%) and deaths due to cervical cancer (almost 90%) occur in low- and middle-resource countries. Cervical cancer is the most common cancer among females in 23 countries and the leading cause of cancer death in 36 countries.

[Figure: Internal Female Reproductive Anatomy]

Approximately 80 to 85% of cervical cancers are squamous cell carcinomas, which develop in the flat, skinlike cells that line the cervix. Most other cervical cancers are adenocarcinomas, which develop from gland cells. Cervical cancer begins with slow, progressive changes in cells on the surface of the cervix. These changes, called dysplasia or cervical intraepithelial neoplasia (CIN), are considered precancerous: if untreated, they may progress to cancer, sometimes over several years. CIN is classified as mild (CIN 1), moderate (CIN 2), or severe (CIN 3). Cervical cancer begins on the surface of the cervix and can penetrate deep beneath the surface.
Cervical cancer can spread in the following ways:

- By spreading directly to nearby tissues, including the vagina
- By entering the rich network of lymphatic vessels inside the cervix, then spreading to other parts of the body
- Rarely, by spreading through the bloodstream

Causes of Cervical Cancer

Precancerous changes in cervical cells (cervical intraepithelial neoplasia) and cervical cancer are almost always caused by the human papillomavirus (HPV), which is transmitted through sexual contact. HPV can also cause genital warts or cancer of the vagina, vulva, or anus. Rates of cervical cancer have decreased steadily over the past several decades in countries that have access to HPV vaccines, cervical cancer screening, and treatment of cervical intraepithelial neoplasia.

Risk factors for developing cervical cancer include the following:

- Having an increased chance of exposure to sexually transmitted infections (for example, having sexual intercourse for the first time at a young age, having more than one sex partner, or having sex partners who have risk factors for sexually transmitted infections)
- Using oral contraceptives (birth control pills)
- Smoking cigarettes
- Having had precancerous changes or cancer in the vulva, vagina, or anus
- Having a weakened immune system (due to a disorder such as cancer or AIDS, or to medications such as chemotherapy drugs or corticosteroids)

HPV can be transmitted through any kind of sexual activity, including oral, genital, or anal contact. HPV infection is very common: approximately 80% of sexually active people are exposed to HPV at least once during their lifetime. Many HPV infections last only a short time, but some people are infected with HPV more than once, and some HPV infections last for years.

Symptoms of Cervical Cancer

Precancerous changes and early cervical cancer often do not cause symptoms. The first symptom of cervical cancer is usually abnormal bleeding from the vagina, most often after sexual activity. Spotting or heavier bleeding may occur between periods, or periods may be unusually heavy. Large cancers are more likely to bleed and may cause a foul-smelling discharge from the vagina and pain in the pelvic area. If the cancer is widespread, it can cause lower back pain and swelling of the legs. The urinary tract may be blocked, and without treatment, kidney failure can result.

Diagnosis of Cervical Cancer

- Papanicolaou (Pap) tests
- Biopsy

Routine Pap tests can detect abnormal, precancerous cells (dysplasia) on the surface of the cervix. Doctors check women with precancerous cells at regular intervals. Dysplasia can be treated, thus helping prevent cancer.

Biopsy

If a growth or another abnormal area is seen on the cervix during a pelvic examination, or if a Pap test detects precancerous or cancerous cells, a biopsy is done. Usually, doctors do a procedure called colposcopy, using an instrument with a binocular magnifying lens (colposcope) inserted through the vagina to examine the cervix and to choose the best biopsy site. Two different types of test are done:

- Cervical biopsy: A tiny piece of the cervix, selected using the colposcope, is removed.
- Endocervical curettage: Tissue is scraped from inside the cervix.

These tests are similar to having a Pap test. They usually cause only mild pain and a small amount of bleeding. If the diagnosis is not clear, a cone biopsy is done to remove a larger, cone-shaped piece of tissue. Usually, a thin wire loop with an electrical current running through it is used.
This procedure is called the loop electrosurgical excision procedure (LEEP). It requires only a local anesthetic. Alternative techniques are to use a scalpel (cold knife) or a laser (a highly focused beam of light). These procedures require an operating room and usually a general anesthetic.

Staging of cervical cancer

If cervical cancer is diagnosed, its exact size and location (its stage) are determined. Staging begins with a physical examination of the pelvis and a chest x-ray. Usually, computed tomography (CT), magnetic resonance imaging (MRI), or a combination of CT and positron emission tomography (PET) is done to determine whether the cancer has spread to nearby tissues or to distant parts of the body. If these procedures are not available, doctors may do other imaging procedures to check specific organs, such as cystoscopy (bladder), sigmoidoscopy (colon), or IV urography (urinary tract). Doctors usually also check for spread to the lymph nodes by doing imaging tests or a biopsy. Knowing whether cancer has spread to the lymph nodes and how many lymph nodes are involved helps doctors predict the person's outcome and plan treatment.

Stages of cervical cancer range from I (the earliest) to IV (advanced). Staging is based on how far the cancer has spread:

- Stage I: The cancer is confined to the cervix.
- Stage II: The cancer has spread outside the uterus, to the upper two thirds of the vagina or to tissues outside the uterus, but is still within the pelvis (which contains the internal reproductive organs, bladder, and rectum).
- Stage III: The cancer has spread throughout the pelvis and/or the lower third of the vagina, and/or blocks the ureters, and/or causes a kidney to malfunction, and/or has spread to the lymph nodes near the aorta (the largest artery in the body).
- Stage IV: The cancer has spread outside the pelvis and/or to the bladder or rectum or to distant organs.

Treatment of Cervical Cancer

- Surgery, radiation therapy, and/or chemotherapy

Treatment of cervical cancer depends on the stage of the cancer. It may include surgery, radiation therapy, and chemotherapy.

Precancerous changes and early stage I cervical cancer

Precancerous cervical cells (cervical intraepithelial neoplasia) and cervical cancer that involves only the surface of the cervix (early stage I) are treated the same way. Doctors can often completely remove the cancer by removing part of the cervix with a cone biopsy procedure. They may use the loop electrosurgical excision procedure (LEEP), a laser, or a scalpel. These treatments preserve a woman's ability to have children. Removal of the uterus (hysterectomy) may be done if women are not interested in preserving their ability to have children. If some cancer remains after the cone biopsy, hysterectomy or another cone biopsy may be done.

If early-stage cancer has spread deeply into the cervix or into blood vessels or lymphatic vessels, a modified radical hysterectomy is done, and nearby lymph nodes are removed. A modified radical hysterectomy involves removing the cervix and some of the tissue next to it (called the parametrium). But unlike a standard radical hysterectomy, the modified radical hysterectomy involves removing only half of the parametrium. Lymph nodes may be checked for spread of cancer cells with a procedure called sentinel lymph node mapping. Another treatment option is external radiation therapy plus radioactive implants placed in the cervix to destroy the cancer (a type of internal radiation called brachytherapy).
Radiation therapy may irritate the bladder or rectum. Later, as a result, the intestine may become blocked, and the bladder and rectum may be damaged. Also, the ovaries usually stop functioning, and the vagina may narrow.

Late stage I and early stage II cervical cancer

If cervical cancer involves more than the surface of the cervix but the cancer is still relatively small, treatment is typically

- Radical hysterectomy (a hysterectomy plus removal of surrounding tissues, including the upper part of the vagina and ligaments) and evaluation of lymph nodes

Hysterectomy is done by making a large incision in the abdomen (open surgery) or by using a thin viewing tube (laparoscope) and specialized surgical instruments inserted through small incisions just below the navel. Research suggests that when open surgery is done, the cancer is less likely to return and women are more likely to live longer than when laparoscopic surgery is done.

If the cancer has grown or has begun to spread within the pelvis, treatment is typically

- Radiation therapy plus chemotherapy

The ovaries are usually left in place because cervical cancer is unlikely to spread (metastasize) to the ovaries. If, during surgery, doctors discover that cancer has spread outside the cervix, hysterectomy is not done, and radiation therapy plus chemotherapy is recommended.

Late stage II through early stage IV cervical cancer

When cervical cancer has spread further within the pelvis or has spread to other organs, the following treatment is preferred:

- Radiation therapy plus chemotherapy

Doctors may use positron emission tomography with computed tomography (PET-CT) to determine whether lymph nodes are involved and thus determine where radiation should be directed. External radiation (directed at the pelvis from outside the body) is used to shrink the cancer and treat cancer that may have spread to nearby lymph nodes. Then radioactive implants are placed in the cervix to destroy the cancer (a type of internal radiation called brachytherapy). Chemotherapy is usually given with radiation therapy, often to make the tumor more likely to be damaged by the radiation.

Extensive spread or recurrence of cervical cancer

The main treatment for extensive spread or recurrence of cervical cancer is

- Chemotherapy

However, chemotherapy reduces the cancer's size and controls its spread in almost half of women treated, and the beneficial effect is usually only temporary. Adding another medication (such as the monoclonal antibodies used to treat several types of cancer; this is called immunotherapy) may extend survival by a few months.

If the cancer remains in the pelvis after radiation therapy, doctors may recommend surgery to remove some or all pelvic organs (called pelvic exenteration). These organs include the reproductive organs (vagina, uterus, fallopian tubes, and ovaries), bladder, urethra, rectum, and anus. Which organs are removed, and whether all are removed, depends on many factors, such as the cancer's location, the woman's anatomy, and her goals after surgery. Permanent openings, one for urine (urostomy) and one for stool (colostomy), are made in the abdomen so that these waste products can leave the body and be collected in bags.

Sentinel lymph node mapping and dissection

A sentinel lymph node is the first lymph node that cancer cells are likely to spread to. There may be more than one sentinel lymph node. These nodes are called sentinel lymph nodes because they are the first to warn that cancer has spread.
A sentinel lymph node dissection involves:

- Identifying the sentinel lymph node (called mapping)
- Removing it
- Examining it to determine whether cancer cells are present

To identify sentinel lymph nodes, doctors inject a blue or green dye and/or a radioactive substance into the cervix near the tumor. These substances map the pathway from the cervix to the first lymph node (or nodes) in the pelvis. During surgery, doctors then check for lymph nodes that look blue or green or that give off a radioactive signal (detected by a handheld device). Doctors remove this node (or nodes) and send it to a laboratory to be checked for cancer. If the sentinel lymph node or nodes do not contain cancer cells, no other lymph nodes are removed (unless they look abnormal).

For women with early-stage cervical cancer, sentinel lymph node dissection is an alternative to removing lymph nodes in the pelvis. Cervical cancer spreads to the lymph nodes in only 15 to 20% of women with early-stage cancer. Sentinel lymph node dissection may help doctors limit the number of lymph nodes that need to be removed, sometimes to only one. Removing lymph nodes often causes problems such as accumulation of fluid in tissues, which can cause persistent swelling (lymphedema), and nerve damage.

Fertility and menopause after cervical cancer

Treatment with radical hysterectomy, chemotherapy, and/or radiation therapy usually makes it impossible for women to become pregnant or to carry a pregnancy to term. However, if being able to have children is important, a woman should talk to her doctor and get as much information as possible about how treatment affects fertility and whether she is eligible for treatments that do not make future pregnancy impossible.

A cone biopsy (conization) may be an option for women who have low-risk, early-stage cervical cancer and who wish to preserve their ability to have children. Before this procedure, doctors check to see whether the cancer has spread to lymph nodes in the pelvis. If the cancer has not spread, doctors may completely remove the cancer by removing part of the cervix during a cone biopsy.

If women with early-stage cervical cancer wish to preserve their ability to have children, a different treatment called radical trachelectomy (a fertility-preserving treatment) may be possible. Doctors remove the cervix, the tissue next to the cervix, the upper part of the vagina, and the lymph nodes in the pelvis. To remove these tissues, doctors may do one of the following:

- Do open surgery
- Use a laparoscope inserted through a small incision just below the navel, then thread instruments through the laparoscope, sometimes with robotic assistance (laparoscopic surgery)
- Remove the tissues through the vagina (vaginal surgery)

Then the uterus is reattached to the lower part of the vagina, so women can still become pregnant. However, babies must be delivered by cesarean. Trachelectomy appears to be as effective as radical hysterectomy for many women with early-stage cervical cancer.

If premenopausal women are having radiation therapy, doctors discuss options for protecting the ovaries to avoid causing premature menopause. Before radiation therapy of the pelvis, the ovaries may be moved outside the radiation field (oophoropexy) to avoid exposing them to radiation.

Prognosis for Cervical Cancer

Prognosis depends on the stage of the cervical cancer.
The percentages of women who are alive 5 years after diagnosis and treatment are

Stage I: 80 to 90%
Stage II: 60 to 75%
Stage III: 30 to 40%
Stage IV: 15% or fewer

If the cancer recurs, it usually does so within 2 years.

More Information

The following English-language resource may be useful. Please note that THE MANUAL is not responsible for the content of this resource.

National Cancer Institute: Cervical Cancer: This web site provides links to general information about cervical cancer, as well as links to information about causes, prevention, screening, treatment, and research and about coping with cancer.
2448
https://fiveable.me/key-terms/inorganic-chemistry-i/electrical-conductivity
Electrical Conductivity - (Inorganic Chemistry I) - Vocab, Definition, Explanations | Fiveable

Definition

Electrical conductivity is the ability of a material to conduct an electric current, which is determined by the movement of charged particles within that material. This property is essential for understanding how different types of solids, such as ionic, metallic, and covalent solids, behave when an electric field is applied. The nature of bonding and the arrangement of atoms in these solids significantly influence their conductivity.

5 Must Know Facts For Your Next Test

Ionic solids typically conduct electricity when melted or dissolved in a solvent, as this allows ions to move freely.
Metals are excellent conductors of electricity because they have a 'sea' of delocalized electrons that can move easily through the lattice.
Covalent solids usually do not conduct electricity as they lack free charges; their electrons are localized and tightly bound in covalent bonds.
Temperature can affect electrical conductivity; for most metals, conductivity decreases with increasing temperature due to increased atomic vibrations that scatter electrons.
The conductivity of materials is often measured in siemens per meter (S/m), providing a quantitative assessment of how well a substance conducts electric current.

Review Questions

Compare and contrast the electrical conductivity of ionic and metallic solids, and explain the underlying reasons for their differences.

Ionic solids conduct electricity primarily when melted or dissolved in water, as this process frees the ions, allowing them to move and carry charge. In contrast, metallic solids are excellent conductors even in solid form due to the presence of free-moving electrons within their structure. The key difference lies in the nature of bonding: ionic solids rely on mobile ions, while metallic solids utilize delocalized electrons that facilitate electrical flow.

Evaluate how temperature influences the electrical conductivity of metals compared to covalent network solids.

In metals, increasing temperature typically leads to decreased electrical conductivity because thermal vibrations of the metal lattice impede the flow of electrons. Conversely, covalent network solids generally maintain low conductivity regardless of temperature, since their electrons are tightly bound within covalent bonds and cannot move freely. Thus, while temperature fluctuations affect metals significantly, covalent network solids remain largely unaffected due to their inherent electronic structure.

Assess the implications of electrical conductivity on the practical applications of ionic versus metallic solids in technology.

The electrical conductivity of ionic and metallic solids plays a crucial role in their technological applications. For instance, ionic solids are utilized in batteries and electrolytes due to their ability to conduct electricity when ionized. On the other hand, metals are widely used in wiring and electronic components where continuous conductivity is necessary. Understanding these properties enables engineers to select suitable materials for specific applications, ensuring efficient energy transfer and functionality in electronic devices.
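The siemens-per-meter figure in the facts above can be made concrete with a small calculation: conductivity is the reciprocal of resistivity, so a two-terminal measurement gives σ = L / (R·A). A minimal Python sketch follows; the function and the sample values are illustrative additions, not from the original page:

```
# Conductivity (S/m) from a simple two-terminal resistance measurement:
# sigma = L / (R * A), the reciprocal of resistivity rho = R * A / L.

def conductivity(length_m: float, area_m2: float, resistance_ohm: float) -> float:
    """Return electrical conductivity in siemens per meter."""
    return length_m / (resistance_ohm * area_m2)

# A 1 m copper wire with a 1 mm^2 cross-section measures about 0.0172 ohm:
sigma = conductivity(1.0, 1e-6, 0.0172)
print(f"{sigma:.2e} S/m")  # ~5.8e7 S/m, the textbook value for copper
```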
Related terms

Ionic Bonding: A type of chemical bonding that occurs between oppositely charged ions, resulting in the formation of ionic compounds that can conduct electricity when dissolved in water or molten.

Metals: Elements characterized by their ability to conduct heat and electricity well due to the presence of free-moving electrons in their structure.

Covalent Network Solids: Solids where atoms are bonded by covalent bonds in a continuous network, generally leading to poor electrical conductivity due to the lack of free-moving charged particles.
2449
https://github.com/scipy/scipy/issues/7725
scipy.signal.kaiserord input args do not make sense as documented · Issue #7725 · scipy/scipy

Status: Closed
Labels: Documentation (issues related to the SciPy documentation), scipy.signal

Phillip-M-Feldman opened on Aug 12, 2017

Here's what the documentation says:

scipy.signal.kaiserord(ripple, width) [source]

ripple : float
    Positive number specifying maximum ripple in passband (dB) and minimum ripple in stopband.
When designing a lowpass digital filter, one normally specifies the maximum ripple in the passband and the minimum rejection in the stopband. With this function, there is no way to specify how much rejection one gets in the stopband, and the filter design code is also apparently trying to limit stopband ripple, which is something that no engineer would care about. The documentation can't just be badly worded, because there would have to be another parameter to specify the stopband rejection.

Activity

WarrenWeckesser added the Documentation and scipy.signal labels on Aug 12, 2017

WarrenWeckesser (Member) commented on Aug 14, 2017

Phillip, thanks for bringing this up. I think there are two issues here:

- The documentation of kaiserord() is too terse.
- SciPy doesn't have a function for designing a FIR filter (including the automatic selection of the order of the filter) based on the desired pass band ripple and stop band rejection.

I'll address the first item here. The second is an enhancement request, so I'll create a second github issue for that.

It looks like the explanation of the ripple argument of the function kaiserord needs some work. The function kaiserord implements the empirical FIR filter design formulas developed by Kaiser in the late 60's and early 70's. (The reference that I have handy is Sections 7.5.3 and 7.6 of the text "Discrete-Time Signal Processing" (3rd ed.) by Oppenheim and Schafer.)

For reference, here's a typical diagram of the design specification for a lowpass filter. The graph of the magnitude of the frequency response of the filter must not enter the shaded area. Ideally, a design method allows both δp and δs to be specified, as shown in the diagram. In Kaiser's method, there is only one parameter that controls the passband ripple and the stopband rejection. That is, Kaiser's method assumes δp = δs. Let δ be that common value. The stop band rejection in dB is -20·log10(δ). This value (in dB) is the first argument of the function kaiserord.

One can interpret the argument ripple as the maximum deviation (expressed in dB) allowed in |A(ω) - D(ω)|, where A(ω) is the magnitude of the actual frequency response of the filter and D(ω) is the desired frequency response. (That is, in the pass band, D(ω) = 1, and in the stop band, D(ω) = 0.) In the script below, |A(ω) - D(ω)| is plotted in the third plot.

Kaiser developed an expression for β (the Kaiser window parameter) that depends on the stop band rejection, and also a formula for the filter order in terms of the stop band rejection and Δω, where Δω is the transition width between the pass and stop bands. The Kaiser window design method, then, is to determine the length of the filter and the Kaiser window parameter β using Kaiser's formula (implemented in scipy.signal.kaiserord), and then design the filter using the window method with a Kaiser window (using, for example, scipy.signal.firwin):

numtaps, beta = kaiserord(ripple, width)
taps = firwin(numtaps, cutoff, window=('kaiser', beta), [other args as needed])

Adding a good example to the docstring of kaiserord() is on the SciPy to-do list (#7168). In the meantime, here is a self-contained script that demonstrates the Kaiser method for a lowpass filter. (I sent a similar script to the mailing list.) I'll work on updating the docstring of kaiserord() to explain its arguments better and to include an example.
(And maybe I'll add a version of these comments and this code to the tutorial section of the SciPy documentation.)

```
import numpy as np
from scipy.signal import kaiserord, firwin
import matplotlib.pyplot as plt


def kaiser_lowpass(stop_db, cutoff, width, fs=1):
    """
    Design a lowpass filter using the Kaiser window method.
    """
    stop_db = np.abs(stop_db)

    # Convert to normalized frequencies
    nyq = 0.5*fs
    cutoff = cutoff / nyq
    width = width / nyq

    # Design the parameters for the Kaiser window FIR filter.
    N, beta = kaiserord(stop_db, width)
    N |= 1  # Ensure a Type I FIR filter.
    taps = firwin(N, cutoff, window=('kaiser', beta), scale=False)
    return taps, beta


# User inputs... Values in Hz
sample_rate = 1000.0
cutoff = 200.0
width = 25.0
stop_db = 60.0

# Filter design...
taps, beta = kaiser_lowpass(stop_db, cutoff, width, sample_rate)

# Compute and plot the frequency response...
n = 16384
h = np.fft.rfft(taps, n)
w = np.fft.rfftfreq(n, 1/sample_rate)

delta = 10**(-stop_db/20)

plt.subplot(3, 1, 1)
plt.plot(w, 20*np.log10(np.abs(h)))
upper_ripple = 20*np.log10(1 + delta)
lower_ripple = 20*np.log10(1 - delta)
lower_trans = cutoff - 0.5*width
upper_trans = cutoff + 0.5*width
plt.plot([0, lower_trans], [upper_ripple, upper_ripple], 'r', linewidth=1, alpha=0.4)
plt.plot([0, lower_trans], [lower_ripple, lower_ripple], 'r', linewidth=1, alpha=0.4)
plt.plot([upper_trans, 0.5*sample_rate], [-stop_db, -stop_db], 'r', linewidth=1, alpha=0.4)
plt.plot([lower_trans, lower_trans], [-stop_db, upper_ripple], color='r', linewidth=1, alpha=0.4)
plt.plot([upper_trans, upper_trans], [-stop_db, upper_ripple], color='r', linewidth=1, alpha=0.4)
plt.ylim(-1.5*stop_db, 10)
plt.ylabel('Gain (dB)')
plt.title(('Kaiser Window Filter Design\n'
           'Inputs: width %g Hz, stop band rejection %.1f dB (δ = %g)\n'
           r'Kaiser design: N = %d, $\beta$ = %.3f')
          % (width, stop_db, delta, len(taps), beta), fontsize=10)
plt.grid(alpha=0.25)

plt.subplot(3, 1, 2)
plt.plot(w, 20*np.log10(np.abs(h)))
plt.plot([0, lower_trans], [upper_ripple, upper_ripple], 'r', linewidth=1, alpha=0.4)
plt.plot([0, lower_trans], [lower_ripple, lower_ripple], 'r', linewidth=1, alpha=0.4)
plt.plot([upper_trans, 1], [-stop_db, -stop_db], 'r', linewidth=1, alpha=0.4)
plt.plot([lower_trans, lower_trans], [-stop_db, upper_ripple], color='r', linewidth=1, alpha=0.4)
plt.plot([upper_trans, upper_trans], [-stop_db, upper_ripple], color='r', linewidth=1, alpha=0.4)
plt.ylim(2*lower_ripple, 2*upper_ripple)
plt.ylabel('Gain (dB)')
plt.xlim(0, cutoff)
plt.grid(alpha=0.25)

plt.subplot(3, 1, 3)
desired = w < cutoff
deviation = np.abs(np.abs(h) - desired)
deviation[(w >= cutoff - 0.5*width) & (w <= cutoff + 0.5*width)] = np.nan
plt.plot(w, deviation)
plt.plot([0, 0.5*sample_rate], [delta, delta], 'r', linewidth=1, alpha=0.4)
plt.ylabel('|A(ω) - D(ω)|')
plt.grid(alpha=0.25)
plt.xlabel('Frequency (Hz)')

plt.tight_layout()
plt.show()
```

Here is the plot generated by the script. Note that the filter violates the design requirement near the right end of the pass band. (I don't think this is unusual for the Kaiser method.)

WarrenWeckesser added a commit that references this issue on Aug 15, 2017: "DOC: signal: Copy-edit and add examples to the Kaiser-related functions." (c77a9dc)

WarrenWeckesser mentioned this on Aug 15, 2017: DOC: signal: Copy-edit and add examples to the Kaiser-related functions.
(#7734)

WarrenWeckesser added a commit that references this issue on Aug 15, 2017: "DOC: signal: Copy-edit and add examples to the Kaiser-related functions." (d9847ea)

WarrenWeckesser (Member) commented on Aug 15, 2017

@pmfeldman I created a pull request to update the documentation of kaiserord here: #7734. Comment there if you have any suggestions to improve it.

Phillip-M-Feldman (Author) commented on Aug 15, 2017 via email

Hello Warren,

Thanks so much for this beautiful explanation! I had not understood that Kaiser's method ties the passband ripple and stopband rejection together. I suspect that in most applications, a user of this method will be overspecifying one to avoid underspecifying the other, and that the length of the resulting filter will consequently be somewhat greater than the minimum achievable with other, more modern design methods.

I will review the proposed documentation and comment within the next few days.

Yours,

Phillip
WarrenWeckesser (Member) commented on Aug 15, 2017

@pmfeldman wrote: "I suspect that in most applications, a user of this method will be overspecifying one to avoid underspecifying the other, and that the length of the resulting filter will consequently be somewhat greater than the minimum achievable with other, more modern design methods."

Right, the method does not create an optimal filter. In some tests that I ran, the length of the optimal filter was typically about 90% that of the Kaiser design, and in some cases it was down to almost 80%. The method was an important contribution to DSP at the time, but computers have improved quite a bit in the last 50 years. :) Designing an optimal filter is much easier now (though SciPy could use improvement in this area), so the Kaiser method is probably not as important as it once was.

WarrenWeckesser added commits that reference this issue on Aug 15 and Sep 6, 2017: "DOC: signal: Copy-edit and add examples to the Kaiser-related functions." (4a79cdd, 9cfc953)

rgommers closed this as completed in #7734 on Sep 10, 2017
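Warren's remark about more modern methods can be made concrete. The sketch below is not part of the original thread: it estimates the order with Kaiser's formula and then produces an equiripple Parks-McClellan design over the same bands via scipy.signal.remez. It assumes a SciPy version in which remez and freqz accept the fs keyword; in practice one could shrink numtaps until the stopband spec just fails, which is how the "about 90%" figure above can be reproduced.

```
import numpy as np
from scipy.signal import kaiserord, remez, freqz

fs = 1000.0                              # Hz
cutoff, width, stop_db = 200.0, 25.0, 60.0

# Kaiser's empirical order estimate (width normalized to the Nyquist rate).
numtaps, beta = kaiserord(stop_db, width / (0.5 * fs))
numtaps |= 1                             # force odd length (Type I filter)

# Equiripple (Parks-McClellan) design over the same pass/stop bands.
bands = [0, cutoff - 0.5 * width, cutoff + 0.5 * width, 0.5 * fs]
taps = remez(numtaps, bands, [1, 0], fs=fs)

# Check the worst-case stopband gain of the equiripple design.
w, h = freqz(taps, worN=8192, fs=fs)
stop = w >= cutoff + 0.5 * width
print("worst stopband gain: %.1f dB" % (20 * np.log10(np.abs(h[stop]).max())))
```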
2450
https://math.stackexchange.com/questions/138489/tangency-condition
Tangency condition

Asked Apr 29, 2012; viewed 852 times.

A family of functions $\{f_{a,b} \mid a,b \in \mathbb{R}\}$ is given thru
$$f_{a,b}(x) = \frac{x^4 + ax^2 + b}{x^2 + 1}.$$
It is asked to pick those functions such that their graph $\Gamma_{a,b}$ is tangent to the horizontal axis in two distinct points. I have written the conditions of tangency and of passage thru a point $(x, 0)$. The discussion is a bit long, but possible. I wonder if there is any "shortcut", i.e. some approach which leads to the solution more quickly.

calculus

asked Apr 29, 2012 at 17:22 by Siminore

Comments:

What tangency condition have you established? – Mark Bennet, Apr 29, 2012 at 17:33

The derivative of the function must vanish at two distinct points. – Siminore, Apr 29, 2012 at 17:58

Put $x^2 := u$ and construct tangencies $(u, 0)$ with $u > 0$. – Christian Blatter, Apr 29, 2012 at 17:59

You should have that $f$ and $f'$ are simultaneously zero. This should give you two polynomial equations, and the condition you need is that they have a common factor - if all else fails, you could use the division algorithm for polynomials. Note that $f$ is an even function - evidently if $x$ is a solution so is $-x$, so most of the time you only need to find one point - another comes automatically. Christian's hint uses this to simplify the calculations. There is a special case $x = 0$ to consider. – Mark Bennet, Apr 29, 2012 at 18:17

1 Answer

The derivative is
$$(4x^3 + 2ax)(x^2 + 1) - (2x)(x^4 + ax^2 + b)$$
divided by something harmless. The $x^4 + ax^2 + b$ part will have to be $0$, which means we will want $4x^3 + 2ax$ to be $0$. Short enough?

If we want one of the points of tangency to be at $x = 0$, we get $b = 0$, and want $4x^2 + 2a$ and $x^4 + ax^2$ to be simultaneously $0$ at some $x \neq 0$. Is this possible?

If $x = 0$ is not a point of tangency, we want $4x^2 + 2a = 0$, $x^4 + ax^2 + b = 0$. Should be easy to find out what $b$ must be in terms of $a$. (The double part is not much of an issue because of symmetry.)

answered Apr 29, 2012 at 19:18 by André Nicolas
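Carrying the answer's second case to completion (this working is added for illustration and is not part of the original post):

```
% Tangency at x \neq 0: the numerator and its derivative must vanish together.
% From 4x^3 + 2ax = 0 with x \neq 0 we get
x^2 = -\tfrac{a}{2}, \qquad \text{which forces } a < 0.
% Substituting into x^4 + ax^2 + b = 0 gives
\tfrac{a^2}{4} - \tfrac{a^2}{2} + b = 0 \quad\Longrightarrow\quad b = \tfrac{a^2}{4},
% so the numerator is a perfect square,
x^4 + ax^2 + \tfrac{a^2}{4} = \Bigl(x^2 + \tfrac{a}{2}\Bigr)^2,
% and \Gamma_{a,b} is tangent to the x-axis at the two points x = \pm\sqrt{-a/2}.
```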
2451
https://web.cs.umass.edu/publication/docs/2011/UM-CS-2011-028.pdf
A Computational Algorithm for Creating Geometric Dissection Puzzles

Yahan Zhou, Rui Wang
University of Massachusetts Amherst

Abstract

Geometric dissection is a popular way to create puzzles. Given two input figures of equal area, a dissection seeks to partition one figure into pieces which can be reassembled to construct the other figure. While mathematically it is well-known that a dissection always exists between two 2D polygons, the challenge is to find a solution with as few pieces as possible. In this paper, we present a computational method for creating geometric dissection puzzles. Our method starts by representing the input figures onto a discrete grid, such as a square or triangular lattice. Our goal is then to partition both figures into the smallest number of clusters (pieces) such that there is a one-to-one and congruent matching between the two sets. Directly solving this combinatorial optimization problem is intractable with a brute-force approach. We propose a novel hierarchical clustering method that can efficiently find an optimal solution by iteratively minimizing an objective function. In addition, we can modify the objective function to include an area-based term, which directs the solution towards pieces with a more balanced size. Finally, we show extensions of our algorithm for dissecting multiple 2D figures, and for dissecting 3D shapes.

Keywords: Geometric puzzles, dissection, optimization.

1 Introduction

Geometric dissection is a mathematical problem that seeks to cut one geometric figure into pieces which can be reassembled to construct other figures. For a long time, geometric dissections have enjoyed great popularity in recreational math and puzzles [Lindgren 1972; Frederickson 1997]. One of the ancient examples of a dissection puzzle was a graphical depiction of the Pythagorean theorem. Today, a popular dissection game is the Tangram puzzle [Slocum 2003], which uses 7 geometric pieces cut from a square to construct thousands of distinct shapes. Geometric dissection is also closely related to tiling and tessellation, both of which have numerous applications in computer graphics and computational geometry.

In its basic form, the geometric dissection asks whether any two shapes of the same area are equi-decomposable, that is, if they can be cut into a finite number of congruent polygonal pieces [Frederickson 1997]. Mathematically, it is long known that a dissection always exists for 2D polygons, due to the Bolyai-Gerwien theorem [Lowry 1814; Bolyai 1832; Gerwien 1833]. Although the theorem provided a general solution to find the dissection solution, the upper bound on the number of pieces is quite high. In practice, many dissections can be achieved with far fewer pieces. Figure 2 gives a simple example. Therefore much recent work has focused on the challenge of finding the optimal dissections using as few pieces as possible, and this has inspired extensive research in the mathematics and computational geometry literature [Lindgren 1972; Cohn 1975; Frederickson 1997; Czyzowicz et al. 1999; Kranakis et al. 2000; Akiyama et al. 2003]. While many ingenious analytic solutions have been discovered for shapes such as equilateral triangles, squares and other regular polygons, finding the optimal solution for general shapes remains a difficult open research problem.

In this work, our goal is to seek an efficient computational algorithm for the geometric dissection problem, and we use our solver to facilitate the creation of dissection puzzles.
To do so, we employ an optimization approach that operates in a discrete solution space.

Figure 1: Four example sets of dissection puzzles created using our algorithm. The top two rows show 2D dissections – the first is a 4-piece dissection, and the second is a 5-piece dissection of a rectangle with a rasterized octagon. The bottom two rows show 3D dissections – the first is a 6-piece dissection illustrating 3³ + 4³ + 5³ = 6³, and the second is an 8-piece dissection between a polycube bunny model and a cuboid. The inlets show partial constructions. For each set we show the 3D pieces on the left, and the two shapes constructed from them on the right.

Our method assumes that the input figures can be represented onto a discrete grid such as a square lattice. We rasterize each input figure into the lattice, and provide a simple editing interface to modify the rasterized figure or create one from scratch.

Following this step, we can reformulate the dissection into a cluster optimization problem. Specifically, our goal is to partition each figure into the smallest number of clusters (pieces) such that there is a one-to-one and congruent matching between the two sets. We consider two pieces congruent if they match exactly under isometric transformations, including translation, rotation, and flipping. As this is a combinatorial optimization problem, a brute-force solution is intractable even for small-scale problems. Therefore we propose a hierarchical clustering method that can efficiently find an optimal solution by iteratively minimizing an objective function. Our main idea is to start with two clusters in each figure, search for the partitioning that gives the best matching score, then progressively insert more clusters at each subsequent level until a dissection is found. The matching score is defined using a distance metric that penalizes mismatches. During optimization, we prioritize the search towards directions that are more likely to reach a dissection. Our algorithm can efficiently converge to a solution with a small number of pieces. Furthermore, we have found our solutions to be optimal for all test cases for which we can verify optimality (see Section 4).

With the computational approach, we can extend the creation of puzzles in several ways. First, we can replace the square lattice with a triangular lattice, which can account for 45° angled edges in the input figures.

Figure 2: An example of dissecting two shapes (a 7×10 rectangle with a center hole and an 8×8 square) using only two pieces.

Other regular grids, such as the hexagonal lattice, are also possible. Second, we can modify the objective function to include an area-based term, which favors pieces with a more balanced size. This can help avoid solutions where some pieces are significantly larger than other pieces, which can reduce the playability of the puzzles. Third, we show an extension of our algorithm to dissecting multiple input figures. We propose a global refinement to simultaneously optimize all input figures, instead of a trivial approach that simply overlays the pairwise dissections. Finally, we have also extended our algorithm to dissecting 3D shapes, thus creating 3D geometric puzzles. Figure 1 shows several examples produced using our method.

It should be noted that as we require the input to be discretized, our method is not meant to substitute the analytic approaches to many general dissection problems.
Rather, our aim is to find an efficient computational solution, which provides a convenient tool for users to create a variety of different dissection puzzles.

2 Related Work

Geometric Dissections. Geometric dissection problems have a rich history, originating from the explorations of geometry by the ancient Greeks [Allman 1889]. One of the earliest examples is a visual proof of the Pythagorean theorem by using dissections to demonstrate the equivalence of area. In Arabic-Islamic mathematics and art, dissection figures are frequently used to construct intriguing patterns ornamenting architectural monuments [Özdural 2000]. Dissection figures also provide a popular way to create puzzles and games. The Tangram [Slocum 2003], which is a dissection puzzle invented in ancient China, consists of 7 pieces cut from a square and then rearranged to form a repertoire of other shapes.

In mathematics, an early significant result was the proof that any 2D polygon can be dissected using a finite number of pieces to other polygons of equal area [Lowry 1814; Wallace 1831; Bolyai 1832; Gerwien 1833] (although the same conclusion does not hold for 3D shapes [Dehn 1900]). This has commonly been referred to as the Bolyai-Gerwien theorem. Since then, attention has focused on the more challenging problem of finding optimal dissections that use the fewest number of pieces. For example, Cohn studied economical triangle-to-square dissections; Kranakis et al. studied the asymptotic number of pieces to dissect a regular m-gon into a regular n-gon; Akiyama et al. studied the optimality of a dissection method for turning a square into n smaller squares; Czyzowicz et al. studied the number of pieces to dissect a rational rectangle into a square, and under the additional constraints of glass cuts. In addition, the popularity of such problems has culminated in seminal books such as [Lindgren 1972; Frederickson 1997]. Despite extensive research, finding the minimum dissection solution has so far only been possible for a few special cases, while the general cases remain an open research problem. Our work is the first to present a computational algorithm to solve a general dissection problem in the discrete domain.

Another research area is dissections with special properties, such as hinged dissections, where all pieces are hinged together at vertices and remain connected as they are rearranged. An early example was demonstrated by [Dudeney 1902] that turns an equilateral triangle to a square. Such an intriguing construction inspired a number of studies, including a well-known book by Frederickson. Recently, Abbott et al. proved that any two polygons of equal area have a hinged dissection, resolving a long-standing open problem. Other types of hinges have also been studied, including twisted hinges [Frederickson 2007] and piano hinges [Frederickson 2006].

Figure 3: Examples of several discrete lattice grids: (a) square, (b) right triangle, (c) equilateral, (d) hexagon.

Tiling. A closely related subject to geometric dissections is tiling [Grünbaum and Shephard 1986], the basic form of which is to seek a collection of figures that can fill the plane infinitely with no overlaps or gaps. The use of tiling is ubiquitous in the design of patterns for architectural ornaments, mosaics, fabrics, carpets, and wallpapers. It is also seen throughout the history of art, especially in the drawings of M.C. Escher.
In computer graphics, Kaplan and Salesin presented a technique called 'escherization', which can approximate any closed figure on the plane into a tileable shape, simulating Escher-style drawings. A number of well-known tiling patterns, such as Penrose tiling, polyomino tiling, and Wang tiles, have also been cleverly applied in graphics, especially for blue noise sampling [Ostromoukhov et al. 2004; Ostromoukhov 2007] and texture synthesis [Cohen et al. 2003; Fu and Leung 2005; Lagae and Dutré 2006]. An excellent introduction and survey of tile-based methods in computer graphics can be found in [Lagae et al. 2008].

Tiling can also be used to create puzzles. Lagae and Dutré have shown that tile packing results can be used to create interesting jigsaw puzzles. Another relevant work is a method for creating 3D polyomino puzzles presented by [Lo et al. 2009]. Their method aims to find a set of polyomino pieces that can tile a given parameterized surface, and they designed clever interlocks to make the puzzles physically realizable. Generally, tile-based puzzles study how to use a predefined set of pieces to cover a given shape; in contrast, geometric dissection puzzles study how to solve for a set of pieces that can simultaneously construct two or more shapes. Thus their solution methods are considerably different.

Recreational Math and Art. Our work relates to a number of topics in computer graphics that are targeted towards recreational math and art, such as 3D Burr puzzles [Xin et al. 2011], ASCII art [Xu et al. 2010], paper popup [Li et al. 2011; Li et al. 2010], camouflage images [Chu et al. 2010], shadow art [Mitra and Pauly 2009], 3D polyomino puzzles [Lo et al. 2009], maze construction [Xu and Kaplan 2007], papercraft models [Mitani and Suzuki 2004], and jigsaw image mosaics [Kim and Pellacini 2002]. Solutions to many of them involve solving a complex optimization problem. For example, Chu et al. [Chu et al. 2010] used a multi-label graph cut algorithm to solve a pixel labeling problem. In general, our formulation for geometric dissections can be viewed as a label assignment problem (the label being the index of a piece). However, we haven't found any existing solution that can directly benefit our case. This is mainly because, unlike in image domains, our objective function cannot be defined using a local coherence metric, thus an algorithm such as graph-cuts is not applicable.

3 Algorithms and Implementation

3.1 Assumptions and Overview

Given two input figures A and B of equal area, our goal is to find the minimum set of pieces to dissect A and B. To formulate it as an optimization problem, we require both input figures to be represented onto a discrete grid. The simplest choice is a square lattice as shown in Figure 3(a), which is naturally suitable for representing rectilinear polygons. For other shapes, such as discs, we rasterize them into the grid, resulting in approximated shapes. Note that for the purpose of creating puzzles, exact representation of the input is not necessary. At sufficient grid resolution, the discretization typically produces acceptable shape approximations. Note that after discretization, the area (number of pixels) covered by each figure must remain the same. This can be ensured either by the design of the input figures, or by using a graphical interface (see Section 3.7) to touch up the rasterized figures. In the following, we use symbols A and B to denote the two rasterized figures of equal area.
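To make the input requirements concrete, the following minimal sketch (an illustrative reading, not the paper's code) rasterizes a disc onto a square lattice by sampling pixel centers and compares its pixel count with that of a square; per the discussion above, the two counts must be made equal, for example via the editing interface, before a dissection is attempted.

```
# A minimal sketch (not the authors' code) of the input preparation
# described in Section 3.1: rasterize a figure onto a square lattice
# and check that both figures cover the same number of pixels.
import numpy as np

def rasterize_disc(radius: float, grid: int) -> np.ndarray:
    """Boolean occupancy grid for a disc sampled at pixel centers."""
    ys, xs = np.mgrid[0:grid, 0:grid]
    cx = cy = (grid - 1) / 2.0
    return (xs - cx) ** 2 + (ys - cy) ** 2 <= radius ** 2

A = rasterize_disc(radius=6.7, grid=16)
B = np.ones((12, 12), dtype=bool)   # a 12x12 square

# Equal pixel area is a hard requirement; the counts are touched up
# (e.g., with the editing interface) until they agree.
print(A.sum(), B.sum())
```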
Given the input, we formulate the dissection into a cluster optimization problem. Specifically, our goal is to partition each figure into the smallest number of clusters (each cluster being a connected piece) such that there is a one-to-one and congruent matching between the two sets of clusters. Here congruency refers to two pieces that match exactly under isometric transformations, including translation, rotation, and flipping. Since the solution space is discrete, the possible transformations are also discrete. For example, on a square lattice with grid size 1, all translations must be of integer values, and there are only 4 possible rotations: 0°, 90°, 180°, and 270°. Thus, excluding translation, two congruent pieces must match under the 8 different combinations of rotation and flipping.

Generally, solving such a clustering problem requires combinatorial search, which would impose a very large solution space. As the dissection requires the solution pieces to fit exactly with each other in both input figures, leaving no holes or overlaps, standard fitting or clustering algorithms are unlikely to lead to valid results. To efficiently solve the problem, we introduce a hierarchical clustering algorithm that progressively minimizes an objective function until a solution is found. We start the search from a random initial condition, and apply refinement steps to iteratively reduce the objective function value. We use random exploration to keep the algorithm from getting stuck in local minima. Below we will first describe our algorithm for dissecting two input 2D figures defined on a square lattice, then describe its extensions to the triangular lattice, the dissection of multiple figures, and finally the dissection of 3D shapes. Figure 4 provides a graphical overview of the algorithm.

3.2 Dissecting Two Figures on a Square Lattice

Distance metric. Given two pieces, one on each figure, $a \subset A$ and $b \subset B$, we define a distance metric $D$ that measures the bidirectional mismatches between them under the best possible alignment:

$$D(a, b) = \min_{T_{a,b}} \Big( \big|\{\, p \mid p \in a,\; (T_{a,b} \times p) \notin b \,\}\big| + \big|\{\, p \mid p \in b,\; (T^{-1}_{a,b} \times p) \notin a \,\}\big| \Big) \quad (1)$$

where $T_{a,b}$ is an isometric transformation from piece $a$ to $b$, $T^{-1}_{a,b}$ is the reverse transformation, and the two set cardinalities count the pixels that lie in one piece but not the other (i.e., the metric measures bidirectional mismatches). As $D$ measures the minimum mismatches under all possible $T_{a,b}$, it will be 0 if the two pieces are congruent. To simplify the calculation of $D$, we first set the translation to align the centers of $a$ and $b$ together, then simply search among the 8 combinations of rotation and flipping to obtain $D$. While this does not consider other possible translations, we found it to work well in practice, and it preserves the crucial property that congruent pieces must result in zero distance. Note that if the center of a piece does not lie exactly on a grid point, we need to align it to the 4 nearby grid points and calculate $D$ for each; the smallest among them is returned as the distance value.
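A minimal sketch of one way to evaluate Eq. 1 on a square lattice follows (an illustrative reading, not the paper's implementation): pieces are stored as arrays of integer pixel coordinates, the 8 rotation/flip combinations are enumerated explicitly, and translation is handled by aligning centers and trying the nearby grid offsets, as described above.

```
import numpy as np
from itertools import product

def transforms(piece):
    """Yield the 8 rotations/flips of an (n, 2) integer coordinate array."""
    for flip, rot in product((False, True), range(4)):
        p = piece[:, ::-1] if flip else piece          # mirror across the diagonal
        for _ in range(rot):
            p = np.stack([-p[:, 1], p[:, 0]], axis=1)  # rotate 90 degrees
        yield p

def distance(a, b):
    """Eq. 1: minimum bidirectional pixel mismatch over the 8 transforms."""
    a, b = np.asarray(a), np.asarray(b)
    set_b = set(map(tuple, b.tolist()))
    best = len(a) + len(b)                             # worst case: disjoint pieces
    for ta in transforms(a):
        shift = b.mean(axis=0) - ta.mean(axis=0)
        # If the aligned center is off-grid, try the nearby grid offsets.
        offsets = product(*[{int(np.floor(s)), int(np.ceil(s))} for s in shift])
        for dxy in offsets:
            moved = set(map(tuple, (ta + np.array(dxy)).tolist()))
            best = min(best, len(moved ^ set_b))       # symmetric difference
    return best
```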
Matching. Next, assuming the two figures A and B have both been partitioned into $k$ clusters $\{a_i\}$ and $\{b_j\}$, we need to match the elements in $\{a_i\}$ to those in $\{b_j\}$ such that the sum of distances between every matched pair is minimized. We call this a matching between the two sets, denoted as $M$. Mathematically,

$$M = \arg\min_{m \in \{\{a_i\} \to \{b_j\}\}} \sum_{(a_i, b_j) \in m} D(a_i, b_j) \quad (2)$$

where $\{a_i\} \to \{b_j\}$ denotes a bijection from $\{a_i\}$ to $\{b_j\}$. Basically, we are seeking among all possible bijections the one that gives rise to the minimum total distance. This is known as the assignment problem in graph theory, which is well-studied and can be solved by a maximum weighted bipartite matching. Specifically, we create a weighted bipartite graph between the two sets $\{a_i\}$ and $\{b_j\}$: every element $a_i$ in $\{a_i\}$ is connected to every element $b_j$ in $\{b_j\}$ by an edge, whose weight is equal to the distance $D$ between the two elements. The goal is to find a bijection whose total edge weight is minimal. A common solution is based on a modified shortest path search, for which we use an optimized Bellman-Ford algorithm [West 2000]. It is guaranteed to yield the best matching in $O(k^3)$ time, where $k$ is the number of clusters. We call the total pair distance under $M$ the matching score, denoted as $E_M$. In other words, $E_M = \sum_{(a_i, b_j) \in M} D(a_i, b_j)$. Note that $E_M = 0$ if $M$ is a dissection solution. Thus the smaller $E_M$ is, the closer we are to reaching a dissection solution.
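The matching step can equally be solved with an off-the-shelf assignment solver. The sketch below uses SciPy's linear_sum_assignment (the Hungarian algorithm) in place of the optimized Bellman-Ford search described in the paper, and assumes a pairwise distance function such as the one sketched above:

```
import numpy as np
from scipy.optimize import linear_sum_assignment

def best_matching(pieces_a, pieces_b, distance):
    """Return matched index pairs and the matching score E_M (Eq. 2)."""
    cost = np.array([[distance(a, b) for b in pieces_b] for a in pieces_a])
    rows, cols = linear_sum_assignment(cost)   # Hungarian method, O(k^3)
    score = cost[rows, cols].sum()             # E_M; 0 means a dissection
    return list(zip(rows, cols)), score
```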
3.2.1 Level 1 Optimization

Seeding. At the first level, our goal is to compute the best set of 2-piece clusterings to approximate the dissection. To begin, we split A into two random clusters a1 and a2. This is done by starting from two random seeds and growing each into a cluster using flood fill. While we could also use other methods to grow the clusters, flood fill guarantees that each cluster is a connected component. We do the same for B, resulting in two random clusters b1 and b2.

Compute matching. Now that we have the initial sets of clusters {ai}_2 and {bj}_2, we can invoke Eq. (2) to compute the matching M between them. In Figure 4 we use the same color to indicate a matched pair. Note that there is no particular ordering of the clusters, so the colors may flip depending on the output of the matching algorithm.

Forward copy-paste. Our next step is to refine the clusters. As the solution space is very large, randomly modifying each cluster by itself is unlikely to result in a better matching score. Therefore we introduce a more explicit approach that copies and pastes a cluster ai onto its matched cluster bj, in order to force their shapes to become similar. This is called a forward copy-paste. To do so, we apply the transformation which yields the distance between ai and bj (Eq. (1)) to ai, and paste the result onto B. Note that if the two matched clusters are not yet congruent, the paste may overwrite neighboring pixels that belong to other clusters. This is allowed, but we randomize the ordering of clusters for copy-paste in order to avoid bias. Pixels pasted outside the boundary of a figure are ignored. Following the above step, some pixels in B may have received no pasted pixels from A, and thus become holes. We use a random flood fill to eliminate the holes: specifically, we randomly select already-pasted pixels and grow them outward to fill each hole.

[Figure 4: An overview of our hierarchical optimization algorithm for dissecting two input figures: one is a 15×10 rectangle with an off-center hole, and the other is a 12×12 square. At each level, we show the steps being performed (initialize cluster seeds, calculate matching, forward and backward copy-paste, random label switching, with R refinement loops and N repetitions), and visualize the changes in one of the candidate solutions after each step. This example requires 3 pieces to dissect, which our algorithm found at the end of level 2.]

Random label switching. As mentioned above, during copy-paste some clusters may overlap with each other, resulting in conflicts. Our next step is therefore to reduce such conflicts by modifying the cluster assignments of some pixels at the boundary between two clusters. To do so, we first recompute the matching between the current two sets of clusters, then simulate a copy-paste in the backward direction, i.e., from B to A. During this process we record the pixels that would have overlapped after pasting. For each such pixel x, we randomly relabel it to the cluster of one of its four neighboring pixels. This is called random label switching. Note that if x is surrounded completely by pixels of its own cluster, its label remains the same; thus only pixels on the boundary of a cluster can potentially be switched to a different label. Intuitively, the motivation of the forward copy-paste is to encourage the clusters in B to be shaped similarly to those in A, and the motivation of the random label switching is to modify the cluster boundaries in B to reduce cluster conflicts/overlaps. The two steps combined are called a forward refinement step.

Backward refinement. The backward refinement performs exactly the same steps as the forward refinement, except in the reverse direction (i.e., a copy-paste from B to A, followed by random label switching in A). At this point, we have completed one iteration of back-and-forth refinement.

Convergence. We repeat the back-and-forth refinement iteration R times (the default value of R is 100). This typically reaches convergence very quickly, upon which we obtain a candidate solution C2, whose associated objective function value is E(C2).
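A sketch of the seeding step above, assuming a figure is given as a set of pixel coordinates and is connected. Growing the two seeds by flood fill keeps each cluster a single connected component; the alternating growth order here is our own illustrative choice.

```python
import random

def seed_two_clusters(figure):
    """Split `figure` (a set of (x, y) pixels, assumed connected) into
    two random connected clusters grown from two random seeds."""
    seeds = random.sample(sorted(figure), 2)
    clusters = [{seeds[0]}, {seeds[1]}]
    frontiers = [[seeds[0]], [seeds[1]]]
    assigned = {seeds[0], seeds[1]}
    while len(assigned) < len(figure):
        for c in (0, 1):                         # grow the clusters in turn
            while frontiers[c]:
                x, y = random.choice(frontiers[c])
                nbrs = [(x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)]
                free = [p for p in nbrs if p in figure and p not in assigned]
                if free:
                    p = random.choice(free)
                    clusters[c].add(p)
                    frontiers[c].append(p)
                    assigned.add(p)
                    break                        # one pixel per turn
                frontiers[c].remove((x, y))      # fully surrounded; retire it
    return clusters
```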
Random seed exploration. The refinement process can be seen as a way of finding a local minimum from the initial seeds; thus small changes to the initial seeds do not significantly affect the converged result. In order to seek the global minimum, we apply random exploration, where we recompute the candidate solution N times (the default value of N is 400), each time with a different set of initial seeds. After random exploration, the best Nb = 30 candidate solutions (i.e., those with the smallest objective function values) are selected and output as the level 1 final results, denoted {S1}. At this point, if there exists a candidate solution whose matching score is 0, we have reached a perfect dissection; otherwise we continue with subsequent levels. The top portion of Figure 4 illustrates all steps in level 1. Note how the candidate solution is refined after each step. The red outlines on some pixels indicate unmatched pixels between a pair of clusters that are not yet congruent.

3.2.2 Level ℓ Optimization

In each subsequent level ℓ, we start from one of the best candidate solutions Sℓ from the previous level. Our goal is to insert a new cluster into Sℓ, and then search for the best (ℓ + 1)-piece approximation using the same back-and-forth refinement process as in level 1. Intuitively, as the outputs of the previous level are some of the closest ℓ-piece approximations to the dissection, they serve as excellent starting points for the new level.

The main difference between level ℓ and level 1 lies in creating the initial clusters. Note that we cannot use completely random seed initialization as in level 1, because doing so would completely abandon the results discovered in previous levels and hence would not reduce the problem complexity. Instead, we introduce two heuristics that create initial clusters by exploiting the previous results, and we consider both during the random exploration step (a sketch of the first heuristic appears after this section).

Splitting an existing cluster. In the first heuristic, we select the pair of pieces {ai, bj} from Sℓ that has the largest (worst) matching score, and split each into two sub-clusters. We refer to the pair as the parent clusters. The splitting introduces an additional cluster for each figure; the remaining clusters, which were not split, remain unchanged for now. Next we need to decide how to perform the split. A straightforward way is a random split, but as the parent clusters are not well matched, a random split can create difficulties for convergence in subsequent steps. Therefore we need to optimize the splitting to create better-matched sub-clusters to begin with. It turns out that we can optimize the splitting using the same approach as the level 1 optimization: we treat the parent clusters ai and bj as two input figures, and apply level 1 optimization to obtain the best 2-piece dissection between them. We have found this approach to work well in practice, creating sub-clusters that are matched as well as they can be. Experiments show that this can significantly improve the quality of the subsequent refinement results.

[Figure 5: Comparing solutions computed with and without the area-based term: (a) 3-piece solutions, (b) 8-piece solutions. In both examples, the two solutions achieve the dissection with an equal number of pieces. Note how the area-based term leads to results where the size of each piece is more balanced, which is usually preferable.]
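Putting the first heuristic together, a rough sketch might look as follows. `best_matching` and `distance` are the helpers sketched earlier, and `dissect_two` stands for a hypothetical level-1-style optimizer that returns the best 2-piece split of a pair of parent clusters; neither name is from the paper.

```python
def split_worst_pair(pieces_a, pieces_b, distance, dissect_two):
    """First level-l heuristic: split the worst-matched pair of clusters."""
    matching, _ = best_matching(pieces_a, pieces_b, distance)
    # pick the matched pair with the largest (worst) distance
    i, j = max(matching, key=lambda ij: distance(pieces_a[ij[0]],
                                                 pieces_b[ij[1]]))
    # treat the parent clusters as two input figures and find their best
    # 2-piece dissection; each figure gains one cluster
    (a1, a2), (b1, b2) = dissect_two(pieces_a[i], pieces_b[j])
    new_a = [p for k, p in enumerate(pieces_a) if k != i] + [a1, a2]
    new_b = [p for k, p in enumerate(pieces_b) if k != j] + [b1, b2]
    return new_a, new_b
```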
Creating new clusters from mismatched pixels. Our second heuristic is to create a new cluster from the currently mismatched pixels. For example, assume {ai, bj} are a matched pair that is not yet congruent; then transforming ai to bj will leave some pixels that are not contained in bj. These pixels are marked in ai as mismatched pixels. In Figure 4 the mismatched pixels are indicated with a red outline. After marking all mismatched pixels in A and B, we randomly select a seed from them and perform a flood fill to grow the seed into a cluster, which then becomes the new cluster to be inserted at the current level.

Comparing the two heuristics. The rationale behind the first heuristic is that priority should be given to splitting the worst-matched pair, as this is most likely to reduce the matching score. The rationale behind the second heuristic is that when a candidate solution is very close to reaching a dissection, priority should be given to the few pixels that remain unmatched. In practice, we account for both during our random exploration: among the N random tries, 75% use the first heuristic to initialize the sub-clusters, and 25% use the second heuristic. This way we combine the advantages of both.

Global refinement and random exploration. Once the sub-clusters are created, we perform the same back-and-forth refinement process as in level 1. Now all clusters participate in the refinement, so we call this step global refinement. Upon convergence, we obtain a candidate solution C_{ℓ+1}. In addition, we perform random exploration N times, similarly to level 1, the goal of which is to seek the global minimum. Each exploration starts from a randomly selected best candidate Sℓ from the previous level, applies one of the two heuristics to insert a new cluster, and computes the refinement. Again, after random exploration, the best Nb candidate solutions are output as the final results {Sℓ} of level ℓ. Figure 4 shows an example of level 2 optimization. For this example, our algorithm discovered a perfect dissection at the end of level 2; thus the program terminates with a 3-piece dissection.

3.3 Area-Based Term

So far we have described a computational algorithm for finding the minimum dissection of two figures. However, there is no constraint on the size or shape of the resulting pieces, so a solution may contain pieces that are significantly larger than others. This is often undesirable, both for aesthetic reasons and for reducing the difficulty of the puzzles (since large pieces are easier to identify and place on a target figure). For better control of the solution, we introduce an area-based term into our objective function in order to favor solutions where the size of each piece is more balanced. To do so, we modify Eq. (3) to include the area-based term E_α:

    E = E_M(\{a_i\}_k, \{b_j\}_k) + \lambda \cdot [E_\alpha(\{a_i\}_k) + E_\alpha(\{b_j\}_k)]    (4)

where λ is a weight adjusting the relative importance of the two terms. Here E_α is the total area penalty, defined by summing the area penalty α(ai) of each piece, which is calculated as:

    \alpha(a_i) = \begin{cases} A(a_i)/\bar{A} - 1, & \text{if } A(a_i) > 2\bar{A} \\ \bar{A}/A(a_i) - 1, & \text{if } A(a_i) < \bar{A}/2 \\ 0, & \text{otherwise} \end{cases}    (5)

In the above equation, A(·) denotes the area of a piece, and \bar{A} denotes the average area (i.e., the total area divided by the number of pieces). Essentially, α(ai) penalizes a piece if it is either more than twice the average area or less than half of it; otherwise we consider a piece to be within the normal range of size variation and assign a zero penalty. Figure 5 shows an example comparing solutions computed with and without the area-based term.
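The per-piece penalty of Eq. (5) and the total penalty E_α translate directly into code; here pieces are again assumed to be sets of pixels, so a piece's area is simply its pixel count.

```python
def area_penalty(piece_area, avg_area):
    """Per-piece penalty alpha of Eq. (5)."""
    if piece_area > 2 * avg_area:
        return piece_area / avg_area - 1
    if piece_area < avg_area / 2:
        return avg_area / piece_area - 1
    return 0.0

def total_area_penalty(pieces):
    """E_alpha: sum of per-piece penalties over one figure's clustering."""
    avg = sum(len(p) for p in pieces) / len(pieces)
    return sum(area_penalty(len(p), avg) for p in pieces)
```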
While both solutions achieve an equal number of pieces, enabling the area penalty leads to pieces of a more balanced size, which is often preferable. In addition, more uniformly sized pieces also tend to be symmetric with each other, which is a desirable property.

Note that the preference towards balanced areas and the preference towards a smaller number of pieces are often conflicting goals. For example, if the area weight factor λ is set too large, the solution will be heavily biased towards area uniformity and will deviate from the goal of seeking the smallest number of pieces. To address this issue, we gradually decrease λ as the level ℓ increases. This reduces the effect of the area penalty over levels, encouraging the solver to focus more on finding the minimum solution as the level increases. Our current implementation sets \lambda = \frac{1}{2} \cdot 0.8^{\ell-1}.

Avoiding split pieces. Another improvement we made to the objective function is to include a term that penalizes split pieces. A split piece is one that contains disconnected components. While these components transform together in the same way, they are not physically implementable. Thus we simply add a large penalty to such pieces in order to eliminate them during best-candidate selection. Note that we do not actively prevent them, because there are cases where split pieces are temporarily unavoidable, such as during the first several levels of processing when the input figures themselves are fragmented (see Figure 11 (c)-(f)).

3.4 Extension to the Triangular Lattice

Besides the square lattice, our method can also be extended to other lattices, including the ones shown in Figure 3 (b,c,d). Currently we have implemented the right-triangular lattice shown in (b), which is constructed by splitting each grid cell of a square lattice into four isosceles triangles along the diagonals. Using this lattice, we can represent input figures with both rectilinear edges and 45°-angled edges, which usually makes the discrete representation more expressive. Figure 9 shows several examples.

With the triangular lattice, our algorithms remain almost the same, because the possible transformations, including translation, rotation, and flipping, remain the same as on a square lattice. The main difference is that a triangle pixel has three neighbors (those connected to it along its three edges) while a square pixel has four.

3.5 Extension to 3D Shape Dissection

We can also extend our algorithm to dissecting 3D shapes represented on a cubic voxel grid. In this case, each voxel has six neighbors, and the transformation of each piece considers 24 different 3D rotations. However, unlike in 2D, a piece is not allowed to be mirrored (the analogue of flipping in 2D), because in general mirroring is not physically plausible in 3D. The area-based term is correspondingly modified to a volume-based term. Several examples of 3D dissection are shown in Figures 1, 6 and 7. Note that our implementation currently does not consider how the pieces can be locked together to form a stable 3D structure: if a piece has insufficient support underneath it, the structure will not be physically stable. Although the examples shown in this paper have not encountered this issue, it remains a direction for future research.
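For reference, the 24 axis-aligned 3D rotations mentioned above can be enumerated as the signed permutation matrices with determinant +1 (determinant −1 would be a mirror, which is excluded). This is a standard construction, not code from the paper.

```python
from itertools import permutations, product
import numpy as np

def cube_rotations():
    """Enumerate the 24 axis-aligned rotation matrices of a cubic lattice."""
    mats = []
    for perm in permutations(range(3)):
        for signs in product((1, -1), repeat=3):
            m = np.zeros((3, 3), dtype=int)
            for row, (col, s) in enumerate(zip(perm, signs)):
                m[row, col] = s
            if round(np.linalg.det(m)) == 1:   # keep rotations, drop mirrors
                mats.append(m)
    return mats

assert len(cube_rotations()) == 24
```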
3.6 Dissecting Multiple Figures

Finally, we present an extension of our algorithm to simultaneously dissecting multiple figures. Note that a trivial approach is to simply overlay the pairwise dissection solutions and output the intersections of all pieces. Unfortunately, this produces a large number of overly fragmented pieces – the upper bound is exponential with respect to the pairwise dissection results.

Here we achieve multi-figure dissection by adapting our optimization-based algorithm. From Figure 4 we can see that the primary steps at each level of the algorithm consist of 1) cluster initialization and 2) cluster refinement. Below we discuss how these two steps are modified for the multi-figure setting. The matching M is still computed from a bipartite graph between a pair of figures. It is possible to redefine M based on the complete k-partite graph among all k figures, but computing the maximum weighted matching for such a graph is known to be an NP-hard problem.

Multi-figure cluster initialization. At level 1, the initial two clusters for each figure are created in the same way as before, i.e., each figure independently creates two random clusters using flood fill from random seeds. At each subsequent level, the initial clusters are computed using pairwise sub-cluster refinement. Specifically, we first pick a random figure as the pivot figure; without loss of generality, let us assume the pivot figure is A. Next, we select a piece from A that has the worst total matching score with all other figures, and split it, as well as its matched pieces in all other figures, into two sub-clusters. These sub-clusters then need to be refined, for which we use the same back-and-forth process as before, except that we now perform one iteration of refinement at a time, between the pivot figure and the other figures in a round-robin fashion.

Multi-figure cluster refinement. As before, during global refinement our goal is to modify the clusters in order to achieve an improved matching result. To do so, we again select one figure as the pivot figure, then perform the matching and copy-paste from the pivot figure to all other figures. Next, we loop over all figures to perform random label switching; here the candidates for label switching in a given figure are the set of all pixels that have at least one mismatch with any other figure. Once this is done, we proceed to the next figure as the pivot. Thus the original forward vs. backward refinement in the two-figure setting is generalized to the multi-figure setting, where each figure is used as the pivot once to perform a forward refinement with the other figures.

[Figure 8: Examples of three-figure dissections. The top row shows a 6-piece dissection of a square, a rectangle with a cross hole, and a solid rectangle. The bottom row shows a 13-piece dissection of a square, the Chinese character for 'person', and a figure of a person.]

Figure 8 shows two examples of three-figure dissection results. Note that these results achieve perfect dissections between all three figures. Since the computation for multi-figure dissection is more expensive, the running time is considerably longer than before.

3.7 Implementation Details

Algorithm implementation. A 2D figure is loaded from a binary image and stored as a 2D array. We represent a piece using the STL set data structure. The matching M between two cluster sets needs to be evaluated frequently, and we employ an optimized Bellman-Ford algorithm to compute it quickly. It is stored as a bidirectional list together with the transformations defined for each pair of pieces.
We store the triangular lattice using a 2D array as well, where each array element stores the four triangle pixels, each at a different orientation. A 3D shape is loaded from a binary image that represents each slice of the shape, and is stored as a list of 2D arrays. Finally, we parallelize the random exploration step using multiple threads, since each exploration is computed independently. This allows us to achieve linear speedup on a multicore CPU.

Figure editing interface. As our method requires the two input figures to contain an equal number of pixels (or an equal number of voxels in 3D), we implemented a simple user interface to assist the editing of input figures when necessary. For a user-provided figure, we first rasterize it onto the lattice grid, then allow the user to directly edit each pixel individually. Alternatively, the user can create a figure from scratch in the interface, similar to editing a standard binary image. The program reports the total number of pixels covered by each figure to facilitate pixel counting.

Physical implementations. There are several ways to manufacture the dissection puzzles we created. If a square lattice is used, we can build the resulting pieces using Lego bricks, which are easy to construct. For triangular-lattice or 3D dissections, we produce the puzzles using 3D printing. Figure 1 shows several examples of physically produced puzzles.

4 Results and Discussions

Optimality. To examine the optimality of our algorithm, we compared our solutions for several representative dissection problems with the reference solutions described in Frederickson's book Dissections: Plane and Fancy [Frederickson 1997]. These examples are shown in Figure 11. As shown in the left column, all input figures are rectilinear polygons with integer coordinates, which can be exactly represented on a square lattice; thus they provide a direct evaluation of our method. The optimal solutions for these examples are known and are listed in the right column of the figure. The middle column shows our solutions. Several inputs contain disconnected components, which our algorithm handles successfully. For all examples we achieve the same number of pieces as the reference. Note also that for many of them our solutions differ from the reference (i.e., in the shapes of the resulting pieces). The examples in Figures 2 and 4 were also produced using our algorithm, and those results are known to be optimal.

[Figure 6: An example of 3D shape dissection. The first input is a 4³ cube with a 2³ cavity at the center, and the second is a 7×4×2 cuboid. The solution contains 4 pieces, shown on the right. The two sets of images on the left show the assembly of the pieces into each input shape.]

[Figure 7: A 3D shape dissection that illustrates 3³ + 4³ + 5³ = 6³. The solution contains 6 pieces, which are shown in the third example of Figure 1.]

Performance. Our results were obtained on an Intel Core i7 2.66 GHz CPU with 6 GB RAM and 8 hyperthreads. For relatively simple shapes, such as the 2D examples in Figures 1 and 11, the total computation time is within 20 minutes. The figures in these examples generally contain 50∼160 pixels. Higher-resolution input results in increased computation time, but we have found that the cost depends more on the number of levels (and hence the number of pieces) required to solve a dissection than on the number of pixels. This is mainly because each higher level needs to process more clusters.
In addition, multi-figure dissections generally take much longer to run. For example, our longest computation time is 5 hours for the three-figure Chinese-character dissection in Figure 8, which produced 13 pieces. All other 2D examples were computed within half an hour. For 3D dissection, the example in Figure 6 took 7 minutes to run, and the one in Figure 7 took about an hour.

Two-figure dissections. Figures 2, 4, 5 and 11 all demonstrate examples of two-figure dissections using a square lattice. In Figure 1, the first row shows a dissection between a rectangle and a square with a cross hole, and the second row shows a dissection between a rectangle and a rasterized octagon. Figure 9 shows two-figure dissections computed using the triangular lattice.

Area-based term. In Figure 5 we showed that enabling the area-based term often leads to results where the size of each piece is more balanced. In Figure 10 we show an additional example, where our algorithm found multiple solutions at the final level. The best solution can be selected as the one that gives rise to the smallest area variance of the pieces; other criteria can also be used to define the best solution.

[Figure 10: An example where the algorithm found multiple solutions with an equal number of pieces. The inputs are a 9×9 Sierpinski carpet and an 8×8 square. Five selected solutions are shown, all of which are 7-piece solutions, with area variances 17.8, 19.8, 45.5, 31.8, and 53.5. A smaller variance corresponds to a more uniform/balanced size, which is generally preferable for aesthetic reasons.]

Three-figure dissections. Figure 8 shows two examples of three-figure dissections. The first is a 6-piece dissection of a 12×12 square, a 16×12 rectangle with a cross hole in the center, and a 16×9 solid rectangle. The second is a three-figure dissection of a square, the Chinese character meaning 'person', and a simple figure of a person; our algorithm found a 13-piece dissection for this example. Many Chinese and Japanese characters are pictographic, which makes them well suited to creating dissection puzzles, as a character looks similar to the figure it represents.

3D dissections. Figures 1, 6 and 7 demonstrate 3D puzzles created using our algorithm. In particular, the third row in Figure 1 is inspired by the 2D Pythagorean triples and demonstrates 3³ + 4³ + 5³ = 6³, and the fourth row is the dissection of a polycube Bunny model and a 6 × 6 × 7 cuboid. We have found these puzzles to be quite enjoyable and challenging to play with. Some of them look deceptively simple, but can take a considerable amount of time to solve.

Limitations. One of the main limitations of our method is that, due to discretization, many input figures cannot be exactly represented on a discrete lattice grid. They have to be rasterized, resulting in approximate shapes. Therefore our method is not meant to substitute for analytic approaches to many dissection problems, especially those involving regular polygons. Nonetheless, for the purpose of generating puzzles, we have found the approximate shapes sufficient in many cases. Furthermore, our results may provide insights and useful initial solutions for discovering an analytic dissection. Another limitation is that the user is currently given little control over the algorithm other than adjusting the area-based term, so it is difficult to constrain the solution to have certain desirable properties.
One example is the symmetry of the pieces, which is often desirable from an aesthetic point of view; we have not considered such properties during the solution process. However, as a dissection problem often has multiple solutions, as shown in Figures 10 and 11, it is possible to account for these properties when selecting the final best solution. An alternative is to include a symmetry-based term in the objective function in order to actively enforce such constraints.

[Figure 9: Three examples of two-figure dissections using the triangular lattice: (a) Heart to Key (6 pcs), (b) H to House (6 pcs), (c) C to Cat (8 pcs). The examples in (b) and (c) dissect an English letter with an object figure whose name starts with that letter.]

5 Conclusions and Future Work

In summary, we have presented an efficient computational algorithm to compute geometric dissections. We extended our algorithm to incorporate an area-based weight, the triangular lattice, the dissection of multiple figures, and finally the dissection of 3D shapes. We believe our algorithm and its extensions provide a convenient tool for users to design a variety of geometric puzzles.

In terms of applications, the ability to create dissection puzzles is itself interesting for educational and entertainment purposes. There are other practical applications as well. For example, the 3D extension of our algorithm may be used to solve manufacturing problems, such as decomposing a piece of furniture into as few pieces as possible to fit into a specific packaging box. Another example is to design furniture that can transform between different shapes to provide multiple functions.

In future work, besides addressing some of the limitations discussed in Section 4, we plan to explore a few additional directions. First, we plan to investigate how to design 3D puzzles whose pieces can interlock with each other, providing a stable physical structure. Second, we plan to incorporate user-specified constraints into the design; for example, we can allow the user to specify certain parts of the input that must remain integral pieces, thus preventing them from splitting. It is also possible to include a symmetry-based term, similar to our area-based term, in order to favor solutions with more symmetric pieces. Finally, by implementing the algorithm on modern GPUs, we hope to gain significant performance speedups towards interactive design of puzzles.

[Figure 11: A comparison of our solutions with reference solutions described in [Frederickson 1997]. Some of these examples are visualizations of Pythagorean triples. The left column shows the input, the middle column our solution, and the right column the reference solution; for all examples we achieve an equal number of pieces with the reference. Panels: (a) squares 4² + 3² = 5² (split), 4 pieces; (b) squares 12² + 5² = 13² (joined), 3 pieces; (c) crosses 1² + 2² = square 5², 5 pieces; (d) crosses 2² + 1² + 2² = 3², 7 pieces; (e) crosses 3² + 4² = 5², 7 pieces; (f) squares 6² + 6² + 7² = 11², 5 pieces.]

References

ABBOTT, T. G., ABEL, Z., CHARLTON, D., DEMAINE, E. D., DEMAINE, M. L., AND KOMINERS, S. D. 2008. Hinged dissections exist. In Proc. of SCG, 110–119.
AKIYAMA, J., NAKAMURA, G., NOZAKI, A., OZAWA, K., AND SAKAI, T. 2003. The optimality of a certain purely recursive dissection for a sequentially n-divisible square. Comput. Geom. Theory Appl. 24, 1, 27–39.
ALLMAN, G. J. 1889. Geometry from Thales to Euclid. Hodges, Figgis, Dublin.
BOLYAI, F. 1832. Tentamen juventutem. Typis Collegii Reformatorum per Josephum et Simeonem Kali, Maros Vasarhelyini.
CHU, H.-K., HSU, W.-H., MITRA, N. J., COHEN-OR, D., WONG, T.-T., AND LEE, T.-Y. 2010. Camouflage images. ACM Trans. Graph. 29, 4, 51:1–51:8.
COHEN, M. F., SHADE, J., HILLER, S., AND DEUSSEN, O. 2003. Wang tiles for image and texture generation. ACM Trans. Graph. 22, 3, 287–294.
COHN, M. J. 1975. Economical triangle-square dissection. Geometriae Dedicata 3, 4, 447–467.
CZYZOWICZ, J., KRANAKIS, E., AND URRUTIA, J. 1999. Dissections, cuts, and triangulations. In Proc. of the 11th Canadian Conference on Computational Geometry, 154–157.
CZYZOWICZ, J., KRANAKIS, E., AND URRUTIA, J. 2007. Rectilinear glass-cut dissections of rectangles to squares. Applied Mathematical Sciences 1, 52, 2593–2600.
DEHN, M. 1900. Über den Rauminhalt. Nachrichten von der Gesellschaft der Wissenschaften zu Göttingen, Mathematisch-Physikalische Klasse, 345–354.
DUDENEY, H. E. 1902. Puzzles and prizes. Weekly Dispatch, April 6–May 4.
FREDERICKSON, G. N. 1997. Dissections: Plane and Fancy. Cambridge University Press.
FREDERICKSON, G. N. 2002. Hinged Dissections: Swinging and Twisting. Cambridge University Press.
FREDERICKSON, G. N. 2006. Piano-hinged Dissections: Time to Fold! A K Peters.
FREDERICKSON, G. N. 2007. Unexpected twists in geometric dissections. Graph. Comb. 23, 1, 245–258.
FU, C.-W., AND LEUNG, M.-K. 2005. Texture tiling on arbitrary topological surfaces using Wang tiles. In Proc. of EGSR, 99–104.
GERWIEN, P. 1833. Zerschneidung jeder beliebigen Anzahl von gleichen geradlinigen Figuren in dieselben Stücke. Journal für die reine und angewandte Mathematik (Crelle's Journal) 10, 228–234.
GRÜNBAUM, B., AND SHEPHARD, G. C. 1986. Tilings and Patterns. W. H. Freeman & Co.
KAPLAN, C. S., AND SALESIN, D. H. 2000. Escherization. In Proc. of SIGGRAPH, 499–510.
KIM, J., AND PELLACINI, F. 2002. Jigsaw image mosaics. ACM Trans. Graph. 21, 3, 657–664.
KRANAKIS, E., KRIZANC, D., AND URRUTIA, J. 2000. Efficient regular polygon dissections. Geometriae Dedicata 80, 1, 247–262.
LAGAE, A., AND DUTRÉ, P. 2006. An alternative for Wang tiles: colored edges versus colored corners. ACM Trans. Graph. 25, 4, 1442–1459.
LAGAE, A., AND DUTRÉ, P. 2007. The tile packing problem. Geombinatorics 17, 1, 8–18.
LAGAE, A., KAPLAN, C. S., FU, C.-W., OSTROMOUKHOV, V., AND DEUSSEN, O. 2008. Tile-based methods for interactive applications. In ACM SIGGRAPH 2008 Classes, 93:1–93:267.
LI, X.-Y., SHEN, C.-H., HUANG, S.-S., JU, T., AND HU, S.-M. 2010. Popup: automatic paper architectures from 3D models. ACM Trans. Graph. 29, 4, 111:1–111:9.
LI, X.-Y., JU, T., GU, Y., AND HU, S.-M. 2011. A geometric study of v-style pop-ups: theories and algorithms. ACM Trans. Graph. 30, 4, to appear.
LINDGREN, H. 1972. Recreational Problems in Geometric Dissections and How to Solve Them. Dover Publications.
LO, K.-Y., FU, C.-W., AND LI, H. 2009. 3D polyomino puzzle. ACM Trans. Graph. 28, 5, 157:1–157:8.
LOWRY, M. 1814. Solution to question 269, [proposed] by Mr. W. Wallace. In Leybourn, T. (ed.), Mathematical Repository 3, 1, 44–46.
MITANI, J., AND SUZUKI, H. 2004. Making papercraft toys from meshes using strip-based approximate unfolding. ACM Trans. Graph. 23, 3, 259–263.
MITRA, N. J., AND PAULY, M. 2009. Shadow art. ACM Trans. Graph. 28, 5, 156:1–156:7.
OSTROMOUKHOV, V., DONOHUE, C., AND JODOIN, P.-M. 2004. Fast hierarchical importance sampling with blue noise properties. ACM Trans. Graph. 23, 3, 488–495.
OSTROMOUKHOV, V. 2007. Sampling with polyominoes. ACM Trans. Graph. 26, 3.
ÖZDURAL, A. 2000. Mathematics and arts: connections between theory and practice in the medieval Islamic world. Historia Mathematica 27, 2, 171–201.
SLOCUM, J. 2003. The Tangram Book. Sterling.
WALLACE, W. 1831. Elements of Geometry (8th ed.). Bell & Bradfute, Edinburgh.
WEST, D. B. 2000. Introduction to Graph Theory (2nd ed.). Prentice Hall.
XIN, S.-Q., LAI, C.-F., FU, C.-W., WONG, T.-T., HE, Y., AND COHEN-OR, D. 2011. Making Burr puzzles from 3D models. ACM Trans. Graph. 30, 4, to appear.
XU, J., AND KAPLAN, C. S. 2007. Image-guided maze construction. ACM Trans. Graph. 26, 3.
XU, X., ZHANG, L., AND WONG, T.-T. 2010. Structure-based ASCII art. ACM Trans. Graph. 29, 4, 52:1–52:10.
2452
https://sites.pitt.edu/~gdhart/AlgLectReals.pdf
Properties of Real Numbers

Types of Numbers
Natural Numbers (Counting Numbers) N: N = {1, 2, 3, 4, 5, ...}
Whole Numbers W: W = {0, 1, 2, 3, 4, 5, ...}
Integers Z: Z = {..., –4, –3, –2, –1, 0, 1, 2, 3, 4, ...}
Rational Numbers Q: Q = {a/b | a, b ∈ Z, b ≠ 0}
Irrational Numbers I: numbers that can be written as an infinite nonrepeating decimal
Real Numbers R: any number that is rational or irrational (R = Q ∪ I)

Real Number Line
Visualize a line with equally spaced markers, each of which is associated with an integer. If the integers have their natural order, then the real numbers can be visualized as points on the line. We notice that:
1. Every real number corresponds to a unique point on the line.
2. Every point on the line corresponds to a unique real number.
This is why the set of real numbers is sometimes referred to as the real number line.

Types of Intervals
Interval Notation   Algebraic Description   Type
(a, b)              a < x < b               open, finite
[a, b]              a ≤ x ≤ b               closed, finite
[a, b)              a ≤ x < b               half-open, finite
(a, b]              a < x ≤ b               half-open, finite
(a, ∞)              x > a                   open, infinite
(–∞, b)             x < b                   open, infinite
[a, ∞)              x ≥ a                   closed, infinite
(–∞, b]             x ≤ b                   closed, infinite
If your answer is composed of two (or more) distinct intervals, then the algebraic form of your answer must contain the conjunction 'OR'.

Example 1: Describe the following intervals algebraically:
a) –3 < x < 4
b) x ≤ –1
c) 2 ≤ x < 8
d) x > 5
e) x ≤ –4 or x > 2

Example 2: Describe the region(s) of a sign graph containing the indicated sign(s):
+    : x < –4 or x > –1
+, 0 : 2 < x ≤ 4 or x ≥ 5
–, 0 : x < 2 or 4 ≤ x ≤ 5
–    : –2 < x < 0 or x > 2

Operations with Real Numbers
Absolute Value: |x| = x if x ≥ 0; |x| = –x if x < 0

Rule for Order of Operations (Please Excuse My Dear Aunt Sally)
1. Parentheses: simplify all groupings first.
2. Exponents: calculate exponential powers and radicals.
3. Multiplication and Division: perform all multiplications and divisions as they occur from left to right.
4. Addition and Subtraction: perform all additions and subtractions as they occur from left to right.

Evaluate the following expressions, and note the use of the equal signs, because we use mathematics writing style:

Ex 1: (4 – 6)² + 6(–4) + 5
 = (–2)² + 6(–4) + 5   (P)
 = 4 + 6(–4) + 5       (E)
 = 4 – 24 + 5          (M)
 = –20 + 5             (S)
 = –15                 (A)

Ex 2: 6 + 24 ÷ 3 • 2 + 3√16
 = 6 + 24 ÷ 3 • 2 + 3 • 4   (E)
 = 6 + 8 • 2 + 3 • 4        (D)
 = 6 + 16 + 3 • 4           (M)
 = 6 + 16 + 12              (M)
 = 22 + 12                  (A)
 = 34                       (A)

Solving First Degree Equations

Addition Property of Equality
If you add or subtract the same quantity on both sides of an equation, it does not affect the solution. For any real numbers a, b, and c: if a = b, then a + c = b + c, and if a = b, then a – c = b – c.
Example: Solve 3x – 4 = 2x + 3. We can add the quantity (–2x + 4) to both sides of the equation:
3x – 4 = 2x + 3
3x – 4 – 2x + 4 = 2x + 3 – 2x + 4
x = 7

Multiplication Property of Equality
If you multiply or divide both sides of an equation by the same nonzero quantity, it does not affect the solution. For any real numbers a, b, and c, with c ≠ 0: if a = b, then a • c = b • c, and if a = b, then a ÷ c = b ÷ c.
Example 2: Solve –4x = 12. We can divide both sides of the equation by –4:
–4x = 12
–4x/(–4) = 12/(–4)
x = –3
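As a quick sanity check, the two worked examples above can be verified with SymPy (purely illustrative; the point of the handout is, of course, to do the steps by hand):

```python
from sympy import symbols, Eq, solve

x = symbols('x')
print(solve(Eq(3*x - 4, 2*x + 3), x))   # [7]   (addition property example)
print(solve(Eq(-4*x, 12), x))           # [-3]  (multiplication property example)
```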
Solving First Degree Equations in One Variable (Be GLAD)
Remember: you can always simplify either side of the equation at any time.
1. Simplify the expressions on both sides of the equation. You must first eliminate all of the Grouping symbols, and simplify if necessary.
2. If there are fractions, you may multiply both sides of the equation by the LCD. Simplify if necessary.
3. You want the variable terms on one side of the equation and the constant terms on the other side. If this is not the case, you should Add or subtract a variable term and/or a numerical value on both sides of the equation in order to get the variable terms on one side and the constant values on the other. Simplify if necessary.
4. You want the variable term's coefficient to be 1. If this is not the case, you should Divide both sides of the equation by the variable term's coefficient. Simplify if necessary.
5. Check your answer by substituting it into the original equation.
Note: always make sure your final answer has the variable on the left side of the equation!

Evaluating and Solving Formulas
Solving for any term in a formula is similar. Remember: you can always simplify either side of the equation at any time.
1. Simplify the expressions on both sides of the equation. You must first eliminate all of the Grouping symbols, and simplify if necessary.
2. If there are fractions, you may multiply both sides of the equation by the LCD. Simplify if necessary.
3. You want all of the terms containing the desired variable on one side of the equation and all other terms on the other side. If this is not the case, you should Add or subtract various variable terms and/or numerical values on both sides of the equation in order to get the desired variable terms on one side and everything else on the other. Simplify if necessary.
4. You want one variable term with a coefficient of 1. If this is not the case, you might have to factor out the desired variable and Divide both sides of the equation by the resulting coefficient. Simplify if necessary.
Note: always make sure your final answer has the desired variable on the left side of the equation!

Solving Word Problems (Super Solvers Use C.A.P.E.S.)
1. Read the problem carefully. (Reread it several times if necessary.)
2. Categorize the problem type if possible. (Is it a numerical-expression, distance–rate–time, cost–profit, or simple-interest problem?)
3. Decide what is asked for, and Assign a variable to the unknown quantity. Label the variable so you know exactly what it represents.
4. Draw a Picture, diagram, or chart whenever possible!
5. Form an Equation (or inequality) that relates the information provided.
6. Solve the equation (or inequality).
7. Check your solution against the wording of the problem to be sure it makes sense.
8. Write the solution of the problem as asked.

distance–rate–time: distance = rate • time (d = r • t)
cost–profit: profit = revenue – cost (P = R – C)
simple interest: interest = principal • rate (i = P • r)
compound interest: A = P(1 + r/n)^(nt)

Example: A grocery store bought ice cream for 59¢ a half gallon and stored it in two freezers. During the night, one freezer "defrosted" and ruined 14 half gallons. If the remaining ice cream is sold for 98¢ a half gallon, how many half gallons did the store buy if it made a profit of $42.44?
          charge = price • quantity
bought:   .59 • x
sold:     .98 • (x – 14)
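Assuming the setup profit = revenue − cost (P = R − C) from the table above, the ice cream example can be checked with SymPy; here x is the number of half gallons bought:

```python
from sympy import symbols, Eq, solve

x = symbols('x')
# revenue from the (x - 14) half gallons sold, minus the cost of x bought
profit = 0.98*(x - 14) - 0.59*x
print(solve(Eq(profit, 42.44), x))   # [144.0...] -> the store bought 144
```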
Example: Last summer, Ernie sold surfboards. One style sold for $70 and the other sold for $50. He sold a total of 48 surfboards. How many of each style did he sell if the receipts from each style were equal?
             charge = price • quantity
one style:   70 • x
other style: 50 • (48 – x)

Example: Sellit Realty Company gets a 6% fee for selling improved properties and 10% for selling unimproved land. Last week, the total sales were $220,000 and the total fees were $14,000. What were the sales from each of the two types of properties?
             fees = rate • sales
improved:    .06 • x
unimproved:  .10 • (220000 – x)

Example: Maria jogs to the country at a rate of 10 mph. She returns along the same route at 6 mph. If the total trip took 1 hour 36 minutes, how far did she jog?
         distance = rate • time
going:   distance x, rate 10, time x/10
coming:  distance x, rate 6, time x/6

Addition Property of Inequalities
You can always add or subtract the same quantity on both sides of an inequality without affecting the solution.
Example 1: Solve 3x – 4 ≥ 2x + 3. We add the quantity (–2x + 4) to both sides of the inequality:
3x – 4 ≥ 2x + 3
3x – 4 – 2x + 4 ≥ 2x + 3 – 2x + 4
x ≥ 7

Multiplication Property of Inequalities
You can always multiply or divide both sides of an inequality by the same positive quantity without affecting the solution. You can multiply or divide both sides of an inequality by the same negative quantity without affecting the solution provided you immediately reverse the direction of the inequality!
Example 2: Solve –4x ≥ 12. We divide both sides of the inequality by –4. Since –4 is negative, we must immediately reverse the direction of the inequality:
–4x ≥ 12
–4x/(–4) ≤ 12/(–4)
x ≤ –3

Solving Linear Inequalities
Remember: you can always simplify either side of the inequality at any time. You can solve linear inequalities just like linear equations, as long as you remember to reverse the direction of the inequality whenever you multiply or divide by a negative quantity.
1. Simplify the expressions on both sides of the inequality. You must first eliminate all of the Grouping symbols, and simplify if necessary.
2. If there are numerical fractions, you may multiply both sides of the inequality by the LCD. If the LCD is negative, make sure you reverse the direction! Simplify if necessary.
3. You want the variable terms on one side of the inequality and the constant terms on the other side. If this is not the case, you should Add or subtract a variable term and/or a numerical value on both sides in order to get the variable terms on one side and the constant values on the other. Simplify if necessary.
4. You want the variable term's coefficient to be 1. If this is not the case, you should Divide both sides of the inequality by the variable term's coefficient. If the coefficient is negative, make sure you reverse the direction! Simplify if necessary.
Note: always make sure your final answer has the variable on the left side of the inequality!

Absolute Value Inequalities
|x| = x if x ≥ 0; |x| = –x if x < 0
Solving Absolute Value Inequalities: you must first isolate the absolute value on one side!
1. If |x| < c (c > 0), then –c < x < c. Notice that you have three sides to work with.
2. If |x| > c (c > 0), then x < –c or x > c. Notice that you have to solve two different inequalities at the same time.
3. If |x| < –c (c > 0), then there is no solution.
4. If |x| > –c (c > 0), then x can be any real number.
Note: always make sure your final answer has the variable on the left side of the inequality, if possible!
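SymPy can also confirm an inequality of the first type above, |x| < c. Solving |2x − 1| < 5 by hand gives −5 < 2x − 1 < 5, i.e., −2 < x < 3:

```python
from sympy import symbols, Abs, solve

x = symbols('x', real=True)
print(solve(Abs(2*x - 1) < 5, x))   # (-2 < x) & (x < 3)
```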
Properties of Exponents
Property:                      Example:
b⁰ = 1                         5⁰ = 1
b⁻ⁿ = 1/bⁿ                     3⁻² = 1/3² = 1/9
bᵐ • bⁿ = bᵐ⁺ⁿ                 2³ • 2⁵ = 2⁸
bᵐ/bⁿ = bᵐ⁻ⁿ                   5⁹/5⁷ = 5² = 25
(bᵐ)ⁿ = bᵐ•ⁿ                   (2³)⁴ = 2¹² = 4096
(a • b)ⁿ = aⁿ • bⁿ             (4x)³ = 4³ • x³ = 64x³
(a/b)ⁿ = aⁿ/bⁿ                 (3/5)² = 3²/5² = 9/25

Some Additional Properties
(a/b)⁻ⁿ = bⁿ/aⁿ
1/b⁻ⁿ = bⁿ
a⁻ⁿ/b⁻ᵐ = bᵐ/aⁿ

Remember: only a factor may be moved from the numerator to the denominator or from the denominator to the numerator. When that happens, its exponent changes sign!
x⁻ⁿ = 1/xⁿ    1/x⁻ⁿ = xⁿ    xⁿ = 1/x⁻ⁿ    1/xⁿ = x⁻ⁿ

(x²x⁻³)/(x⁻⁷x⁴) = (x²x⁷)/(x⁴x³) = x⁹/x⁷ = x²

Remember: all nonzero terms with a zero exponent equal 1. An exponent (including the zero exponent) refers only to the single factor immediately below and to the left of it.

Examples: Simplify the following using mathematics writing style:
a) –7⁰ + 3x⁰           b) 2x⁻³ + (3x)⁻¹
 = –(7⁰) + 3(x⁰)        = 2(x⁻³) + (3x)⁻¹
 = –(1) + 3(1)          = 2/x³ + 1/(3x)
 = 2

Scientific Notation
A positive number x in scientific notation has the form x = s × 10ⁿ, where n is any integer and s is a number such that 1 ≤ s < 10. The trick is to count how many places it takes to move the decimal point to the appropriate position. Also, remember that "it's positive to move to the left."

Decimal Notation    Scientific Notation
15                  1.5 × 10¹
0.003               3.0 × 10⁻³
254                 2.54 × 10²
0.00000342          3.42 × 10⁻⁶
2                   2.0 × 10⁰

Example: Simplify (75000000)(200) / ((800000)(0.00025)) using scientific notation.
(75000000)(200) / ((800000)(0.00025))
 = (7.5 × 10⁷)(2 × 10²) / ((8 × 10⁵)(2.5 × 10⁻⁴))
 = (3/4) × 10⁸
 = 0.75 × 10⁸
 = 7.5 × 10⁷
2453
https://www.sohu.com/a/848026128_422646
What Is Standard Deviation? (Definition, Formula, Uses, and Examples)

Foreword
This article lays the groundwork for a forthcoming piece, "Safety Stock, Minimum Stock, and Maximum Stock: Definitions, Formulas, Uses, and Examples." Calculating safety stock requires calculating a standard deviation.

1. Definition of Standard Deviation
Standard deviation is a statistic used to measure the dispersion of data; it reflects how spread out a set of data is relative to its mean. Put simply, the standard deviation gives us an intuitive sense of the range and volatility of the data.

In statistics, the distribution characteristics of data are essential. The mean describes the central tendency of the data, while the standard deviation describes its dispersion. If a data set has a small standard deviation, the values cluster tightly around the mean and the data are highly consistent; conversely, a large standard deviation means the data are more scattered, with clear differences between individual values and the mean.

For example, in the height data of a class of students, a small standard deviation indicates that the students' heights are similar and the overall distribution is concentrated; a large standard deviation indicates large differences in height, with tall and short students and a more spread-out distribution. Standard deviation plays a key role in data analysis across every field, helping us understand the intrinsic characteristics of data.

2. Formulas for Standard Deviation

(1) Population standard deviation
For a population containing all the data, the standard deviation is:

σ = √( Σᵢ (xᵢ − μ)² / N )

Symbols:
σ: the population standard deviation, the key measure of a population's dispersion.
Σ: the summation symbol, indicating that a series of values are added together.
xᵢ: the i-th data value in the population, where i runs from 1 to N.
μ: the mean of the population data, reflecting its central tendency.
N: the number of data points in the population.

(2) Sample standard deviation
When we work with a sample drawn from a population, the formula differs slightly:

s = √( Σᵢ (xᵢ − x̄)² / (n − 1) )

Symbols:
s: the sample standard deviation, used to estimate the dispersion of the population.
xᵢ: the i-th data value in the sample.
x̄: the mean of the sample data, reflecting where the sample is centered.
n: the number of data points in the sample.

Using n − 1 in the denominator instead of n is known as Bessel's correction. Because a sample gives us only limited information about the population, dividing by n − 1 makes the sample standard deviation a better, less biased estimate of the population standard deviation.

3. Uses of Standard Deviation

(1) Measuring the dispersion of data
The core use of standard deviation is to measure dispersion precisely. By computing it, we can see clearly how far the individual values deviate from the mean. In product quality inspection, for example, a small standard deviation on some quality metric indicates that the batch is stable, with little variation between products; a large standard deviation means uneven quality with substantial fluctuation, and the production process should be checked and adjusted.

(2) Assessing the stability and reliability of data
In many practical settings, the stability and reliability of data are crucial considerations, and the standard deviation helps assess them. In finance, stock price volatility directly affects investors' returns and risk. By computing the standard deviation of a stock's price, an investor can see how stable the price is: a small standard deviation means relatively smooth price movement and lower risk, while a stock with a large standard deviation fluctuates sharply and carries correspondingly higher risk.

(3) Supporting comparison and grouping of data
Standard deviation is valuable when comparing data sets. When we need to contrast the dispersion of two or more groups of data, it is a very practical tool. In education, for instance, when comparing two classes' exam results, the standard deviation tells us, beyond the averages, which class's scores are more concentrated and which class shows larger differences, informing adjustments to teaching methods and individualized tutoring. In market research, standard deviation can likewise be used to analyze behavioral data across consumer groups, helping companies with market segmentation and product positioning.

(4) Providing a basis for further statistical analysis
Standard deviation underlies many advanced statistical methods. In hypothesis testing, we use the sample standard deviation to judge whether the difference between a sample and the population is statistically significant; in constructing confidence intervals, the standard deviation is an indispensable parameter that determines the precision of our estimates of population parameters; in regression analysis, it is used to assess a model's goodness of fit and error size, helping us judge the model's validity and reliability. It thus provides essential groundwork for deeper, more complex statistical analysis.

4. Importance of Standard Deviation

(1) A central position in statistics
Standard deviation is one of the core concepts of statistics; together with the mean, median, and similar measures, it forms the heart of descriptive statistics. It runs through statistical methods and theory: whether inferring population parameters in estimation, judging the null hypothesis in testing, or analyzing group differences in analysis of variance, standard deviation plays an indispensable role. It offers a key lens for understanding and analyzing data and is a cornerstone of sound statistical inference.

(2) Value in decision making
In business, economics, engineering, and many other fields, standard deviation has high practical value in decision making. A company planning production considers the standard deviation of quality data to ensure stable production and consistent quality; investors consult the standard deviation of asset returns to assess risk and build sensible portfolios; governments analyze the standard deviation of economic indicators to gauge the stability and volatility of the economy when setting macroeconomic policy.

(3) Significance for research and experiments
In scientific research and experiments, standard deviation is key to assessing the accuracy and reliability of results. By analyzing the standard deviation of experimental data, researchers can judge whether the experiment carries large errors and whether the results are repeatable. A small standard deviation suggests stable, reliable, credible results; a large one calls for a thorough check of methods, instruments, and procedures to find the source of the fluctuation and improve the experiment's accuracy and rigor.

(4) A key role in risk management

5. How to Calculate Standard Deviation

(1) Steps for the population standard deviation
1) Compute the population mean: add all values and divide by the count. For the population 3, 5, 7, 9, 11: μ = (3 + 5 + 7 + 9 + 11)/5 = 7.
2) Compute each value's deviation from the mean, xᵢ − μ: here the deviations are 3 − 7 = −4, 5 − 7 = −2, 7 − 7 = 0, 9 − 7 = 2, 11 − 7 = 4.
3) Square each deviation, (xᵢ − μ)²: (−4)² = 16, (−2)² = 4, 0² = 0, 2² = 4, 4² = 16.
4) Compute the sum of squares: 16 + 4 + 0 + 4 + 16 = 40.
5) Compute the population standard deviation: divide the sum of squares by N and take the square root: σ = √(40/5) = √8 ≈ 2.83.

(2) Steps for the sample standard deviation
The steps are similar, except for the last one.
1) Compute the sample mean: for the sample 2, 4, 6, 8: x̄ = (2 + 4 + 6 + 8)/4 = 5.
2) Deviations xᵢ − x̄: 2 − 5 = −3, 4 − 5 = −1, 6 − 5 = 1, 8 − 5 = 3.
3) Squared deviations (xᵢ − x̄)²: (−3)² = 9, (−1)² = 1, 1² = 1, 3² = 9.
4) Sum of squares: 9 + 1 + 1 + 9 = 20.
5) Divide the sum of squares by n − 1 and take the square root: s = √(20/(4 − 1)) = √(20/3) ≈ 2.58.

In practice, we can use statistical software (such as SPSS, R, or the relevant Python libraries) or spreadsheet software (such as Excel) to compute standard deviations quickly and accurately.
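The worked examples above can be verified with Python's statistics module: pstdev() implements the population formula (divide by N), and stdev() implements the sample formula with Bessel's correction (divide by n − 1).

```python
import statistics

population = [3, 5, 7, 9, 11]
print(statistics.pstdev(population))   # 2.828... ~= 2.83 (population formula)

sample = [2, 4, 6, 8]
print(statistics.stdev(sample))        # 2.581... ~= 2.58 (sample formula)
```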
6. A Worked Example of Standard Deviation

Analyzing student scores: suppose the English exam scores of two classes are:
Class 1: 75, 80, 85, 90, 95
Class 2: 60, 70, 80, 90, 100

1) Compute the statistics for Class 1:
Mean: x̄₁ = (75 + 80 + 85 + 90 + 95)/5 = 85
Standard deviation:
s₁ = √(((75 − 85)² + (80 − 85)² + (85 − 85)² + (90 − 85)² + (95 − 85)²) / (5 − 1))
   = √(((−10)² + (−5)² + 0² + 5² + 10²) / 4)
   = √((100 + 25 + 0 + 25 + 100) / 4)
   = √(250/4) = √62.5 ≈ 7.91

2) Compute the statistics for Class 2:
Mean: x̄₂ = (60 + 70 + 80 + 90 + 100)/5 = 80
Standard deviation:
s₂ = √(((60 − 80)² + (70 − 80)² + (80 − 80)² + (90 − 80)² + (100 − 80)²) / (5 − 1))
   = √(((−20)² + (−10)² + 0² + 10² + 20²) / 4)
   = √((400 + 100 + 0 + 100 + 400) / 4)
   = √(1000/4) = √250 ≈ 15.81

The results show that Class 1's standard deviation is about 7.91, while Class 2's is about 15.81. Class 1's scores are relatively concentrated, fluctuating little around the mean; Class 2's scores are more spread out, with larger differences between students. A teacher can adjust teaching strategies accordingly: for Class 1, focus on stretching the stronger students; for Class 2, strengthen support for students with a weaker foundation.

7. Frequently Asked Questions

1. What is the relationship between standard deviation and variance?
Variance is the square of the standard deviation, i.e., the standard deviation is the square root of the variance. Both measure dispersion and serve similar purposes, but the standard deviation has the same unit as the original data, which makes it more intuitive in practice and easier to interpret relative to the data. Variance, whose unit is the square of the data's unit, is less direct to interpret.

2. What does a standard deviation of 0 mean?
A standard deviation of 0 means every value in the data set is identical. Since the standard deviation measures dispersion around the mean, if all values are the same, every deviation from the mean is 0, and the computed standard deviation is 0. For example, the data set 5, 5, 5, 5 has a standard deviation of 0.

3. Is a larger or smaller standard deviation better?
Neither is inherently better; it depends on the application and the purpose of the analysis. In settings that demand stable, consistent data, such as quality control or precision manufacturing, a small standard deviation means stable quality and reliable performance and is the desirable state. In other settings, such as venture investing, a large standard deviation may mean greater potential upside, and risk-tolerant investors may favor assets with larger standard deviations. Whether a given standard deviation is appropriate must be judged case by case.

4. Is the only difference between the sample and population standard deviation the denominator?
The formulas do differ in the denominator: N for the population and n − 1 for the sample. But the difference goes further. The population standard deviation exactly measures the dispersion of the entire population and reflects its true state; the sample standard deviation is computed from a sample drawn from the population, with the aim of estimating the population's dispersion. Because a sample is only part of the population and carries randomness and uncertainty, dividing by n − 1 (Bessel's correction) makes the sample standard deviation a less biased estimate of the population standard deviation, reducing the bias introduced by sampling.

5. In practice, how do we choose between the population and sample standard deviation?
If you can obtain the data for the entire population and you care about the population's dispersion, use the population standard deviation. For example, when measuring the dispersion of one exam's scores for all students in a class, the class is a finite, known population, so the population standard deviation is appropriate. In most cases, however, obtaining all the population data is difficult or too costly, and only a sample can be analyzed; then the sample standard deviation is used to estimate the population's dispersion. To understand the dispersion of household income in a region, for instance, one cannot survey everyone and must sample part of the population, so the sample standard deviation applies.

6. Is the standard deviation affected by extreme values?
Yes, significantly. The calculation involves the squared deviation of each value from the mean, so an extreme value (very large or very small) has a large deviation that is amplified by squaring. For example, the data set 10, 12, 14, 16, 18 has a small standard deviation and is relatively concentrated; adding an extreme value such as 100, giving 10, 12, 14, 16, 18, 100, makes the recomputed standard deviation much larger, greatly exaggerating the apparent dispersion. When extreme values are present, consider their effect on the standard deviation carefully; if necessary, treat them specially or analyze them separately.
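A quick numerical illustration of question 6 above, showing how one extreme value inflates the standard deviation:

```python
import statistics

base = [10, 12, 14, 16, 18]
print(statistics.stdev(base))          # about 3.16
print(statistics.stdev(base + [100]))  # about 35.2 -- one outlier dominates
```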
2454
https://www.academia.edu/8254142/Session_on_Escalators
Session on Escalators
Navya Niya

Session on Escalators (by Nitin Gupta)

Escalator problems are similar to Boats & Streams problems. Here, distance = the number of steps in the escalator (when the escalator is not moving), and speed is usually given as a number of steps per second.
Let the speed of the escalator be e steps/sec and the speed of the person be m steps/sec.
Case 1: when the escalator and the person move in the same direction, effective speed = (m + e) steps/sec.
Case 2: when the escalator and the person move in opposite directions, effective speed = (m − e) steps/sec.

Another important concept:
Case 1: when the escalator and the person move in the same direction, the number of steps the person takes is always less than the actual number of steps in the escalator.
Case 2: when the escalator and the person move in opposite directions, the number of steps the person takes is always more than the actual number of steps in the escalator.

1. You walk up an escalator at a speed of 1 step per second. After 50 steps you are at the top. You turn around and run down at a speed of 5 steps per second. After 125 steps you are back at the bottom of the escalator. Question: how many steps do you need if the escalator stands still?

Approach 1: Let N = number of steps in the escalator (when it is not moving).
Case 1: walking up at 1 step per second, you reach the top after 50 steps, i.e., in 50 seconds:
N/(e + 1) = 50/1 ......(1)
Case 2: running down at 5 steps per second, you are back after 125 steps, i.e., in 25 seconds:
N/(5 − e) = 125/5 ......(2)
Dividing eq. (1) by eq. (2) gives e = 1; substituting e = 1 back gives N = 100 steps.

Approach 2: Remember the rule above: moving with the escalator, a person takes fewer steps than the escalator actually has; moving against it, more. Say the escalator's speed is x steps/sec. The trip up takes 50 seconds, during which the escalator contributes 50x steps, so the total is 50 + 50x. The trip down takes 25 seconds (125 steps at 5 steps/sec), so the total is 125 − 25x. Hence 50 + 50x = 125 − 25x, so 75x = 75 and x = 1, giving a total of 50 + 50·1 = 100 steps.

2. A walks down an up-escalator and counts 150 steps. B walks up the same escalator and counts 75 steps. A takes three times as many steps in a given time as B. How many steps are visible on the escalator?

Approach 1: Let N = number of steps in the escalator (when it is not moving). Since A takes three times as many steps as B in a given time, let the speed of A be 3x and the speed of B be x.
Case 1: A walks down the up-escalator:
N/(3x − e) = 150/(3x) ......(1)
Case 2: B walks up the up-escalator:
N/(x + e) = 75/x ......(2)
Dividing eq. (1) by eq. (2) gives 50(3x − e) = 75(x + e), i.e., 75x = 125e, so e = 3x/5; substituting back gives N = 75(x + 3x/5)/x = 120 steps.

Approach 2: Remember: moving with the escalator, a person covers fewer steps than the escalator actually has; moving against it, more. Let T be the time B takes to make 25 steps. Then B takes 3T to make 75 steps, and A, stepping three times as fast, takes 2T to make 150. Suppose the escalator has N visible steps and moves n steps in time T. Then A, walking against the escalator, gives N + 2n = 150, and B, walking with it, gives N − 3n = 75. Hence N = 120 and n = 15.
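Question 1's two equations can also be handed to SymPy directly (illustrative only; N is the number of steps on the stopped escalator, e the escalator's speed in steps per second):

```python
from sympy import symbols, Eq, solve

N, e = symbols('N e', positive=True)
up   = Eq(N / (1 + e), 50)    # 50 steps at 1 step/s  -> 50 s going up
down = Eq(N / (5 - e), 25)    # 125 steps at 5 steps/s -> 25 s going down
print(solve([up, down], [N, e]))   # [(100, 1)] -> N = 100 steps, e = 1 step/s
```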
3. Colin takes the underground train to work and uses an escalator at the railway station. If Colin runs up 8 steps of the escalator, it takes him 37.5 seconds to reach the top. If he runs up 14 steps, it takes him only 28.5 seconds. How many seconds would it take Colin to reach the top if he did not run up any steps at all?

If he runs up 8 steps, he needs 37.5 seconds to reach the top; if he runs up 14 steps, he needs 28.5 seconds. The 6 additional steps save 9.0 seconds, so each step corresponds to 1.5 seconds of escalator travel. Total steps in the escalator = 8 + 37.5/1.5 = 33 (equivalently, 14 + 28.5/1.5 = 33). If Colin did not run up any steps at all, he would reach the top of the escalator in 49.5 seconds (33 steps × 1.5 seconds/step).

Alternative solution through equations: Let the total number of steps in the escalator be x. The escalator moves at a constant speed given by
speed of escalator = (x − 8)/37.5 = (x − 14)/28.5.
Cross-multiplying, 28.5(x − 8) = 37.5(x − 14), i.e., 28.5x − 228 = 37.5x − 525, so 9x = 297 and x = 33. Then the speed of the escalator = (33 − 8)/37.5 = (33 − 14)/28.5 = 1/1.5 steps/second, and time to reach the top = total steps / speed = 49.5 seconds.

Approach 3: Let the escalator speed be e steps/sec. The number of steps = 8 + 37.5e = 14 + 28.5e, so 9e = 6 and e = 2/3. Number of steps = 8 + 37.5 · (2/3) = 33. Now, speed of escalator = (33 − 8)/37.5 = (33 − 14)/28.5 = 1/1.5 steps/second, and time to reach the top = total steps / speed = 49.5 seconds.

4. Shyama and Vyom walk up an escalator (a moving stairway) that moves at a constant speed. Shyama takes three steps for every two of Vyom's steps. Shyama reaches the top after taking 25 steps, while Vyom (because his slower pace lets the escalator do a little more of the work) takes only 20 steps. If the escalator were turned off, how many steps would they have to take to walk up? (CAT 2001)
a. 40  b. 50  c. 60  d. 80

Solution: Vyom's 20 steps take the same time as 30 steps of Shyama, so their times on the escalator (25 steps for Shyama versus 20 for Vyom) are in the ratio 5 : 6. If the escalator contributes x steps per unit of that time ratio, then 25 + 5x = 20 + 6x, so x = 5 and the number of steps = 25 + 5·5 = 50. Answer: (b).

5. Two kids, John and Jim, are running on an escalator (a moving stairway). John runs three times as fast as Jim, and by the time they are off the escalator, John has stepped on 75 stairs while Jim has stepped on 50 stairs. How many stairs does the escalator have? How is its speed related to the speed of the boys? Were they running with or against the escalator?

The answers: the length is 100 stairs, and the boys were running along the escalator, which was moving at the same speed as the slower boy. Solution: in the time the fast boy stepped on 75 stairs, the slow one could step on only 25; since he actually stepped on 50, he spent twice as much time on the escalator as the fast one. Therefore his speed relative to the ground was half that of the fast boy, so the escalator's speed was the same as the speed of the slow boy, and he counted exactly half the stairs. Another way is to use algebra, as below.

Approach 2: Assume Jim takes 1 step per unit time; then John takes 3 steps per unit time. Also assume the escalator moves at E steps per unit time, and let T be the total number of stairs. Since Jim takes 50 steps, 50 = T/(1 + E) units of time; similarly, 75/3 = 25 = T/(3 + E) units of time. Solving for T and E gives E = 1 step per unit time and T = 100 stairs.

6. A man can walk up a moving up-escalator in 30 seconds. The same man can walk down this moving up-escalator in 90 seconds. Assume his walking speed is the same upwards and downwards. How much time will he take to walk up the escalator when it is not moving? (CAT 1994)
Answer: 45 seconds. (With walking speed m and escalator speed e, N = 30(m + e) = 90(m − e) gives m = 2e, so N = 90e and the time is N/m = 90e/2e = 45 s.)
B runs down faster and takes 90 steps to reach the bottom. If B takes 90 steps in the same time as A takes 10 steps, then how many steps are visible when the escalator is not operating?

7. Speed of B = 9 steps per unit time and speed of A = 1 step per unit time, so A's trip takes 50 units and B's takes 10. With escalator speed x steps per unit time (both walk with the descending escalator): 50 + 50x = 90 + 10x, so x = 1 and total steps = 100.

Another method for Q7: there are 100 steps on the escalator. When A and B walk on an unmoving surface, 10 steps of A take as long as 90 steps of B .................... [Eq. 1], so speed of A : speed of B = 1 : 9, and hence, for the same distance, time of A : time of B = 9 : 1.
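All of these puzzles reduce to the same two relations (steps taken = walker speed × time; visible steps = steps taken ∓ the escalator's contribution). As a sanity check — my addition, not part of the original solutions — here is a short Python sketch that solves Problems 1 and 2 symbolically with sympy; N and e are the same unknowns used above, and setting B's speed x = 1 in Problem 2 is just a convenient normalisation.

```python
# Hedged sketch: verify Problems 1 and 2 with sympy (pip install sympy).
from sympy import symbols, solve

N, e = symbols("N e", positive=True)

# Problem 1: up at 1 step/s for 50 s, down at 5 steps/s for 25 s.
sol1 = solve([N - (1 + e) * 50,        # going up, the escalator helps
              N - (5 - e) * 25],       # going down, the escalator opposes
             [N, e], dict=True)
print(sol1)  # [{N: 100, e: 1}]

# Problem 2: A (speed 3x) walks down an up-escalator taking 150 steps,
# B (speed x) walks up taking 75 steps; set x = 1 without loss of generality.
x = 1
sol2 = solve([N - (3 * x - e) * (150 / (3 * x)),
              N - (x + e) * (75 / x)],
             [N, e], dict=True)
print(sol2)  # [{N: 120, e: 0.6}], i.e. e = 3x/5 as derived above
```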
2455
https://www.math.uci.edu/~ndonalds/math140b/4int.pdf
4 Integration

The theory of infinite series addresses how to sum infinitely many finite quantities. Integration, by contrast, is the business of summing infinitely many infinitesimal quantities. Attempts to do both have been part of mathematics for well over 2000 years, and the philosophical objections are just as old.¹⁷ The development and increased application of calculus from the late 1600s onward spurred mathematicians to put the theory on a firmer footing, though from Newton and Leibniz it took another 150 years before Bernhard Riemann (1856) provided a thorough development of the integral.

4.32 The Riemann Integral

The basic idea behind Riemann integration is to approximate area using a sequence of rectangles whose width tends to zero. The following discussion illustrates the essential idea, which should be familiar from elementary calculus.

Example 4.1. Suppose $f(x) = x^2$ is defined on $[0,1]$. For each $n \in \mathbb{N}$, let $\Delta x = \frac{1}{n}$ and define $x_i = i\,\Delta x$. Above each subinterval $[x_{i-1}, x_i]$, raise a rectangle of height $f(x_i) = x_i^2$. The sum of the areas of these rectangles is the Riemann sum with right-endpoints¹⁸
$$R_n = \sum_{i=1}^n f(x_i)\,\Delta x = \sum_{i=1}^n \frac{i^2}{n^3} = \frac{n(n+1)(2n+1)}{6n^3} = \frac{1}{3} + \frac{3n+1}{6n^2}$$
The Riemann sum with left-endpoints is defined similarly:
$$L_n = \sum_{i=1}^n f(x_{i-1})\,\Delta x = \sum_{i=1}^n \frac{(i-1)^2}{n^3} = \frac{1}{3} - \frac{3n-1}{6n^2}$$
Since $f$ is an increasing function, the area $A$ under the curve plainly satisfies $L_n \le A \le R_n$. By the squeeze theorem, we conclude that $A = \frac{1}{3}$.

[Figures: right- and left-endpoint rectangles under $y = x^2$ with $n = 16$; $R_{16} = 0.365234$ and $L_{16} = 0.302734$.]

The example should feel convincing, though perhaps this is due to the simplicity of the function. To apply this approach to more general functions, we need to be significantly more rigorous.

¹⁷Two of Zeno's ancient paradoxes are relevant here: Achilles and the Tortoise concerns a convergent infinite series, while the Arrow Paradox toys with integration by questioning whether time can be viewed as a sum of instants. Perhaps the most famous contemporary criticism comes from Bishop George Berkeley, who gave his name to the city and first UC campus: in 1734's The Analyst, Berkeley savaged the foundations of calculus, describing the infinitesimal increments required in Newton's theory of fluxions (derivatives) as merely the "ghosts of departed quantities."

¹⁸Recall some basic identities: $\sum_{i=1}^n i = \frac{1}{2}n(n+1)$, $\sum_{i=1}^n i^2 = \frac{1}{6}n(n+1)(2n+1)$, $\sum_{i=1}^n i^3 = \frac{1}{4}n^2(n+1)^2$.

Definition 4.2. A partition $P = \{x_0, \dots, x_n\}$ of an interval $[a,b]$ is a finite sequence for which
$$a = x_0 < x_1 < \cdots < x_{n-1} < x_n = b$$
Choosing a sample point $x_i^*$ in each subinterval $[x_{i-1}, x_i]$ results in a tagged partition. The mesh of the partition is $\operatorname{mesh}(P) := \max \Delta x_i$, the width $\Delta x_i = x_i - x_{i-1}$ of the largest subinterval.

If $f : [a,b] \to \mathbb{R}$, the Riemann sum $\sum_{i=1}^n f(x_i^*)\,\Delta x_i$ evaluates the area of a family of $n$ rectangles, as pictured. The heights $f(x_i^*)$, and thus areas, can be negative or zero.

[Figure: a tagged partition $a = x_0 < x_1 < \cdots < x_7 = b$ with sample points $x_i^*$ and the corresponding rectangles under $y = f(x)$.]

In elementary calculus, one typically computes Riemann sums for equally-spaced partitions with left, right or middle sample points. The flexibility of tagged partitions makes applying Riemann's definition a challenge, so we instead consider two special families of rectangles.

Definition 4.3. Given a partition $P$ of $[a,b]$ and a bounded function $f$ on $[a,b]$, define
$$M_i = \sup_{x \in [x_{i-1},x_i]} f(x), \qquad U(f,P) = \sum_{i=1}^n M_i\,\Delta x_i$$
$$m_i = \inf_{x \in [x_{i-1},x_i]} f(x), \qquad L(f,P) = \sum_{i=1}^n m_i\,\Delta x_i$$
$U(f,P)$ and $L(f,P)$ are the upper and lower Darboux sums for $f$ with respect to $P$.
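Since the running example is so concrete, here is a small Python sketch (my addition, not part of the notes) computing $U(f,P)$ and $L(f,P)$ for $f(x) = x^2$ on an equally spaced partition of $[0,1]$. It exploits the fact that for an increasing $f$ the supremum and infimum on each subinterval sit at the endpoints; for a general bounded $f$ one would have to estimate them, e.g. by dense sampling.

```python
# Hedged sketch: Darboux sums for f(x) = x^2 on [0, 1].
# For this increasing f, sup/inf on each subinterval are the endpoint values.
def darboux_sums(f, partition):
    """Return (U(f,P), L(f,P)) for an increasing f and partition x0 < ... < xn."""
    upper = lower = 0.0
    for x_prev, x_next in zip(partition, partition[1:]):
        dx = x_next - x_prev
        upper += f(x_next) * dx   # M_i = f(x_i)     since f is increasing
        lower += f(x_prev) * dx   # m_i = f(x_{i-1})
    return upper, lower

n = 16
P = [i / n for i in range(n + 1)]
U, L = darboux_sums(lambda x: x * x, P)
print(U, L)  # 0.365234375 0.302734375, matching R_16 and L_16 above
```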
The upper and lower Darboux integrals are
$$U(f) = \inf_P U(f,P), \qquad L(f) = \sup_P L(f,P)$$
where the supremum/infimum are taken over all partitions. Necessarily both integrals are finite. We say that $f$ is (Riemann) integrable on $[a,b]$ if $U(f) = L(f)$. We denote this value by $\int_a^b f$ or $\int_a^b f(x)\,dx$.

[Figures: the upper Darboux sum $U(f,P)$ and lower Darboux sum $L(f,P)$ for a partition $a < x_1 < \cdots < x_7 < b$.]

If the interval is understood or irrelevant, one often simply says that $f$ is integrable and writes $\int f$. Intuitively, $L(f,P)$ is the sum of the areas of rectangles built on $P$ which just fit under the graph of $f$. It is also the infimum of all Riemann sums on $P$. If $f$ is discontinuous, then $L(f,P)$ need not itself be a Riemann sum, as there might not exist suitable sample points!

Examples 4.4.

1. We revisit Example 4.1 in this language. Given a partition $Q = \{x_0, \dots, x_n\}$ of $[0,1]$ and sample points $x_i^* \in [x_{i-1}, x_i]$, we compute the Riemann sum for $f(x) = x^2$:
$$\sum_{i=1}^n f(x_i^*)\,\Delta x_i = \sum_{i=1}^n (x_i^*)^2 (x_i - x_{i-1})$$
Since $f$ is increasing, we have $x_{i-1}^2 \le (x_i^*)^2 \le x_i^2$ on each interval, whence
$$L(f,Q) = \sum_{i=1}^n x_{i-1}^2 (x_i - x_{i-1}) \le \sum_{i=1}^n (x_i^*)^2 (x_i - x_{i-1}) \le \sum_{i=1}^n x_i^2 (x_i - x_{i-1}) = U(f,Q)$$
The Darboux sums are therefore the Riemann sums for left- and right-endpoints. If we take $Q_n$ to be the partition with subintervals of equal width $\Delta x = \frac{1}{n}$, then
$$U(f) = \inf_P U(f,P) \le U(f,Q_n) = \sum_{i=1}^n \left(\frac{i}{n}\right)^2 \Delta x = R_n$$
is the right Riemann sum discussed originally. Similarly $L(f) \ge L_n$. Since $L_n$ and $R_n$ both converge to $\frac13$ as $n \to \infty$, the squeeze theorem forces
$$L_n \le L(f) \le U(f) \le R_n \implies L(f) = U(f) = \frac13$$
Otherwise said, $f$ is integrable on $[0,1]$ with $\int_0^1 x^2\,dx = \frac13$.

2. Suppose $f(x) = kx + c$ on $[a,b]$, and that $k > 0$. Take the evenly spaced partition $P_n$ where $x_i = a + \frac{b-a}{n}i$. Since $f$ is increasing, the upper Darboux sum is again the Riemann sum with right-endpoints:
$$U(f,P_n) = R_n = \sum_{i=1}^n f(x_i)\,\Delta x = \frac{b-a}{n}\sum_{i=1}^n \left(\frac{k(b-a)}{n}i + ak + c\right) = \frac{b-a}{n}\left(\frac{k(b-a)}{n}\cdot\frac12 n(n+1) + (ak+c)n\right)$$
$$\xrightarrow{n\to\infty} \frac12 k(b-a)^2 + (b-a)(ak+c) = \frac{k}{2}(b^2 - a^2) + c(b-a)$$
[Figure: the trapezoidal region under $y = kx + c$ between $a$ and $b$, with heights $ak + c$ and $bk + c$.]
Similarly, the lower Darboux sum is the Riemann sum with left-endpoints:
$$L(f,P_n) = L_n = \frac{b-a}{n}\left(\frac{k(b-a)}{n}\cdot\frac12 n(n-1) + (ak+c)n\right) \xrightarrow{n\to\infty} \frac{k}{2}(b^2 - a^2) + c(b-a)$$
As above, $L_n \le L(f) \le U(f) \le R_n$ and the squeeze theorem prove that $f$ is integrable on $[a,b]$ with $\int_a^b f = \frac{k}{2}(b^2 - a^2) + c(b-a)$.

Now that we have some examples, a few remarks are in order.

Riemann versus Darboux. Definition 4.3 is really that of the Darboux integral. Here is Riemann's definition: $f : [a,b] \to \mathbb{R}$ being integrable with integral $\int_a^b f$ means
$$\forall \epsilon > 0,\ \exists \delta \text{ such that } (\forall P, x_i^*)\quad \operatorname{mesh}(P) < \delta \implies \left|\sum_{i=1}^n f(x_i^*)\,\Delta x_i - \int_a^b f\right| < \epsilon$$
This is significantly more difficult to work with, though it can be shown to be equivalent to the Darboux integral. We won't pursue Riemann's formulation further, except to observe that if a function is integrable and $\operatorname{mesh}(P_n) \to 0$, then $\int_a^b f = \lim_{n\to\infty}\sum_{i=1}^n f(x_i^*)\,\Delta x_i$: this allows us to approximate integrals using any sample points we choose, hence why right-endpoints ($x_i^* = x_i$) are so common in Freshman calculus.

Monotone Functions. Darboux sums are easy to compute for monotone functions. As in the examples, if $f$ is increasing, then each $M_i = f(x_i)$, from which $U(f,P)$ is the Riemann sum with right-endpoints. Similarly, $L(f,P)$ is the Riemann sum with left-endpoints.

Area. If $f$ is positive and continuous,¹⁹ the Riemann integral $\int_a^b f$ serves as a definition for the area under the curve $y = f(x)$.
This should make intuitive sense:

1. In the second example, where we have a straight line, we obtain the same value for the area by computing directly as the sum of a rectangle and a triangle!

2. For any partition $P$, the area under the curve should satisfy the inequalities $L(f,P) \le \text{Area} \le U(f,P)$. But these are precisely the same inequalities satisfied by the integral itself!
$$L(f,P) \le L(f) = \int_a^b f = U(f) \le U(f,P)$$

In the examples we exhibited a sequence of partitions $(P_n)$ where $U(f,P_n)$ and $L(f,P_n)$ converged to the same limit. The remaining results in this section develop some basic properties of partitions and make this limiting process rigorous.

Definition 4.5. If $P \subseteq Q$ are both partitions of $[a,b]$, we call $Q$ a refinement of $P$. To refine a partition, we simply throw some more points in!

Lemma 4.6. Suppose $f : [a,b] \to \mathbb{R}$ is bounded.
1. If $Q$ is a refinement of $P$ (on $[a,b]$), then $L(f,P) \le L(f,Q) \le U(f,Q) \le U(f,P)$.
2. For any partitions $P, Q$ of $[a,b]$, we have $L(f,P) \le U(f,Q)$.
3. $L(f) \le U(f)$.

¹⁹We'll see in Theorem 4.17 that every continuous function is integrable.

Proof. 1. We prove inductively. Suppose first that $Q = P \cup \{t\}$ contains exactly one additional point $t \in (x_{k-1}, x_k)$. Write
$$m_1 = \inf\{f(x) : x \in [x_{k-1}, t]\}, \quad m_2 = \inf\{f(x) : x \in [t, x_k]\}, \quad m = \inf\{f(x) : x \in [x_{k-1}, x_k]\} = \min\{m_1, m_2\}$$
The Darboux sums $L(f,P)$ and $L(f,Q)$ are identical except for the terms involving $t$. This results in extra area:
$$L(f,Q) - L(f,P) = m_1(t - x_{k-1}) + m_2(x_k - t) - m(x_k - x_{k-1}) = (m_1 - m)(t - x_{k-1}) + (m_2 - m)(x_k - t) \ge 0$$
[Figure: the extra area gained on $[x_{k-1}, x_k]$ when the point $t$ is inserted.]
More generally, since a refinement $Q$ is obtained by adding finitely many new points, induction tells us that $P \subseteq Q \implies L(f,P) \le L(f,Q)$. The argument for $U(f,Q) \le U(f,P)$ is similar, and the middle inequality is trivial.

2. If $P$ and $Q$ are partitions, then $P \cup Q$ is a refinement of both $P$ and $Q$. By part 1,
$$L(f,P) \le L(f, P \cup Q) \le U(f, P \cup Q) \le U(f,Q) \tag{$*$}$$

3. This is an exercise.

Theorem 4.7. Suppose $f : [a,b] \to \mathbb{R}$ is bounded.
1. (Cauchy criterion) $f$ is integrable $\iff \forall \epsilon > 0,\ \exists P$ such that $U(f,P) - L(f,P) < \epsilon$.
2. $f$ is integrable $\iff \exists (P_n)_{n\in\mathbb{N}}$ such that $U(f,P_n) - L(f,P_n) \to 0$. In such a situation, both sequences $U(f,P_n)$ and $L(f,P_n)$ converge to $\int_a^b f$.

Part 1 is termed a 'Cauchy' criterion since it doesn't mention the integral (limit).

Proof. We prove the Cauchy criterion, leaving part 2 as an exercise.

($\Rightarrow$) Suppose $f$ is integrable and that $\epsilon > 0$ is given. Since $\inf U(f,Q) = \int f = \sup L(f,R)$, there exist partitions $Q, R$ such that
$$U(f,Q) < \int f + \frac{\epsilon}{2} \quad\text{and}\quad L(f,R) > \int f - \frac{\epsilon}{2}$$
Let $P = Q \cup R$ and apply ($*$): $L(f,R) \le L(f,P) \le U(f,P) \le U(f,Q)$. But then
$$U(f,P) - L(f,P) \le U(f,Q) - L(f,R) = U(f,Q) - \int f + \int f - L(f,R) < \epsilon$$

($\Leftarrow$) Assume the right hand side. For every partition, $L(f,P) \le L(f) \le U(f) \le U(f,P)$. Thus
$$0 \le U(f) - L(f) \le U(f,P) - L(f,P) < \epsilon$$
Since this holds for all $\epsilon > 0$, we see that $U(f) = L(f)$: that is, $f$ is integrable.
Examples 4.8.

1. Consider $f(x) = \sqrt{x}$ on the interval $[0,b]$. We choose a sequence of partitions $(P_n)$ that evaluate nicely when fed to this function: $P_n = \{x_0, \dots, x_n\}$ where
$$x_i = \left(\frac{i}{n}\right)^2 b \implies \Delta x_i = x_i - x_{i-1} = \frac{b}{n^2}\left(i^2 - (i-1)^2\right) = \frac{(2i-1)b}{n^2}$$
Since $f$ is increasing on $[0,b]$, we see that
$$U(f,P_n) = \sum_{i=1}^n f(x_i)\,\Delta x_i = \sum_{i=1}^n \frac{i\sqrt{b}}{n}\cdot\frac{(2i-1)b}{n^2} = \frac{b^{3/2}}{n^3}\sum_{i=1}^n (2i^2 - i) = \frac{b^{3/2}}{n^3}\left(\frac13 n(n+1)(2n+1) - \frac12 n(n+1)\right) \xrightarrow{n\to\infty} \frac23 b^{3/2}$$
Similarly
$$L(f,P_n) = \sum_{i=1}^n f(x_{i-1})\,\Delta x_i = \sum_{i=1}^n \frac{(i-1)\sqrt{b}}{n}\cdot\frac{(2i-1)b}{n^2} = \frac{b^{3/2}}{n^3}\sum_{i=1}^n (2i^2 - 3i + 1) = \frac{b^{3/2}}{n^3}\left(\frac13 n(n+1)(2n+1) - \frac32 n(n+1) + n\right) \xrightarrow{n\to\infty} \frac23 b^{3/2}$$
Since the limits are equal, we conclude that $f$ is integrable and $\int_0^b \sqrt{x}\,dx = \frac23 b^{3/2}$.

[Figures: the upper sum $U(f,P_n)$ and lower sum $L(f,P_n)$ for $y = \sqrt{x}$ on $[0,b]$.]

2. Here is the classic example of a non-integrable function. Let $f : [a,b] \to \mathbb{R}$ be the indicator function of the irrational numbers,
$$f(x) = \begin{cases} 1 & \text{if } x \notin \mathbb{Q} \\ 0 & \text{if } x \in \mathbb{Q} \end{cases}$$
Suppose $P = \{x_0, \dots, x_n\}$ is any partition of $[a,b]$. Since any interval of positive length contains both rational and irrational numbers, we see that
$$\sup\{f(x) : x \in [x_{i-1},x_i]\} = 1 \implies U(f,P) = \sum_{i=1}^n (x_i - x_{i-1}) = b - a \implies U(f) = b - a$$
$$\inf\{f(x) : x \in [x_{i-1},x_i]\} = 0 \implies L(f,P) = 0 \implies L(f) = 0$$
Since the upper and lower Darboux integrals differ, $f$ is not (Riemann) integrable.

As any freshman calculus student can attest, if you can find an anti-derivative, then the fundamental theorem of calculus (Section 4.34) makes evaluating integrals far easier. For instance, you are probably desperate to write
$$\frac{d}{dx}\,\frac23 x^{3/2} = x^{1/2} \implies \int_0^b \sqrt{x}\,dx = \left[\frac23 x^{3/2}\right]_0^b = \frac23 b^{3/2}$$
rather than computing Riemann/Darboux sums as in the previous example! However, in most practical situations, no easy-to-compute anti-derivative exists; the best we can do is to approximate using Riemann sums for progressively finer partitions. Thankfully computers excel at such tedious work!

Exercises 4.32. Key concepts: Darboux sums/integrals, Partitions, sample points & refinements, Cauchy & sequential criteria for integrability

1. Use partitions to find the upper and lower Darboux integrals on the interval $[0,b]$. Hence prove that the function is integrable and compute its integral.
   (a) $f(x) = x^3$  (b) $g(x) = \sqrt[3]{x}$
2. Repeat question 1 for the following two functions. You cannot simply compute Riemann sums for left and right endpoints and take limits: why not?
   (a) $h(x) = x(2-x)$ on $[0,2]$ (Hint: choose a partition with $2n$ subintervals such that $x_n = 1$ and observe that $h(2-x) = h(x)$)
   (b) On the interval $[0,3]$, let $k(x) = 2x$ if $x \le 1$ and $k(x) = 5 - x$ if $x > 1$. (Hint: this time try a partition with $3n$ subintervals)
3. Let $f(x) = x$ for rational $x$ and $f(x) = 0$ for irrational $x$. Calculate the upper and lower Darboux integrals for $f$ on the interval $[0,b]$. Is $f$ integrable on $[0,b]$?
4. Prove part 3 of Lemma 4.6: $L(f) \le U(f)$.
5. Prove part 2 of Theorem 4.7: $f$ is integrable $\iff \exists (P_n)_{n\in\mathbb{N}}$ such that $\lim_{n\to\infty}\left(U(f,P_n) - L(f,P_n)\right) = 0$. Moreover, prove that both $U(f,P_n)$ and $L(f,P_n)$ converge to $\int f$.
6. (a) Reread Definition 4.3. What happens if we allow $f : [a,b] \to \mathbb{R}$ to be unbounded?
   (b) (Hard) Reread "Riemann versus Darboux" above. Explain why being Riemann integrable also forces $f$ to be bounded.
   (c) (Hard) Explain the observation that $L(f,P)$ is the infimum of the set of all Riemann sums on $P$.
7. (If you like coding) Write a short program to estimate $\int_a^b f(x)\,dx$ using Riemann sums. This can be very simple (equal partitions with right endpoints), or more complex (random partition and sample points given a mesh). Apply your program to estimate $\int_0^5 \sin\left(x^2 e^{-\sqrt{x}}\right) dx$.
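Exercise 7 invites a program; here is one possible minimal sketch (my addition, taking the simple equal-partition/right-endpoint option; the helper name riemann_estimate is made up), applied to the suggested integral.

```python
# Hedged sketch for Exercise 4.32.7: right-endpoint Riemann sums on equal partitions.
import math

def riemann_estimate(f, a, b, n):
    """Estimate the integral of f over [a, b] with n right-endpoint rectangles."""
    dx = (b - a) / n
    return sum(f(a + i * dx) for i in range(1, n + 1)) * dx

f = lambda x: math.sin(x**2 * math.exp(-math.sqrt(x)))
for n in (100, 1000, 10000):
    print(n, riemann_estimate(f, 0, 5, n))
# The printed estimates stabilise as n grows, approximating the integral on [0, 5].
```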
4.33 Properties of the Riemann Integral

The rough take-away of this long section is that everything you think is integrable probably is! Examples will be few, since we have not established many explicit values for integrals.

Theorem 4.9 (Linearity). If $f, g$ are integrable and $k, l$ are constant, then $kf + lg$ is integrable and
$$\int (kf + lg) = k\int f + l\int g$$

Example 4.10. Thanks to examples in the previous section, we can now calculate, e.g.,
$$\int_0^2 \left(5x^3 - 3\sqrt{x}\right) dx = 5\cdot\frac14\cdot 2^4 - 3\cdot\frac23\cdot 2^{3/2} = 20 - 4\sqrt{2}$$

Proof. Suppose $\epsilon > 0$ is given. By the Cauchy criterion (Theorem 4.7, part 1), there exist partitions $R, S$ such that
$$U(f,R) - L(f,R) < \frac{\epsilon}{2} \quad\text{and}\quad U(g,S) - L(g,S) < \frac{\epsilon}{2}$$
If $P = R \cup S$, then both inequalities are satisfied by $P$ (Lemma 4.6). On each subinterval,
$$\inf f(x) + \inf g(x) \le \inf\left(f(x) + g(x)\right) \quad\text{and}\quad \sup\left(f(x) + g(x)\right) \le \sup f(x) + \sup g(x)$$
since the individual suprema/infima could be 'evaluated' at different places. Thus
$$L(f,P) + L(g,P) \le L(f+g,P) \le U(f+g,P) \le U(f,P) + U(g,P)$$
whence $U(f+g,P) - L(f+g,P) < \epsilon$ and $f+g$ is integrable. Moreover,
$$\int(f+g) - \int f - \int g \le \left(U(f,P) - \int f\right) + \left(U(g,P) - \int g\right) < \epsilon$$
Using lower Darboux integrals similarly obtains the other half of the inequality
$$-\epsilon < \int(f+g) - \int f - \int g < \epsilon$$
Since this holds for all $\epsilon > 0$, we conclude that $\int(f+g) = \int f + \int g$. That $kf$ is integrable with $\int kf = k\int f$ is an exercise. Put these together for the result.

Corollary 4.11 (Changing endvalues). Suppose $f$ is integrable on $[a,b]$ and $g : [a,b] \to \mathbb{R}$ satisfies $f(x) = g(x)$ on $(a,b)$. Then $g$ is also integrable on $[a,b]$ and $\int_a^b g = \int_a^b f$.

Definition 4.12 (Integration on an open interval). A bounded function $g : (a,b) \to \mathbb{R}$ is integrable if it has an integrable extension $f : [a,b] \to \mathbb{R}$ where $f(x) = g(x)$ on $(a,b)$. In such a case, we define $\int_a^b g := \int_a^b f$.

The Corollary (its proof is an exercise) shows that the choice of extension is irrelevant.

Theorem 4.13 (Basic integral comparisons). Suppose $f$ and $g$ are integrable on $[a,b]$. Then:
1. $f(x) \le g(x) \implies \int f \le \int g$
2. $m \le f(x) \le M \implies m(b-a) \le \int_a^b f \le M(b-a)$
3. $fg$ is integrable.
4. $|f|$ is integrable and $\left|\int f\right| \le \int |f|$
5. $\max(f,g)$ and $\min(f,g)$ are both integrable.

Part 3 is not integration by parts, since it doesn't tell us how $\int fg$ relates to $\int f$ and $\int g$!

Proof. 1. Since $g - f$ is positive and integrable, $L(g-f,P) \ge 0$ for all partitions $P$. But then (recalling that $L(g-f)$ is the supremum of these)
$$0 \le L(g-f) = \int(g-f) = \int g - \int f$$
2. Apply part 1 twice.
3. This is an exercise.
4. The integrability is an exercise. For the comparison, apply part 1 to $-|f| \le f \le |f|$.
5. Use $\max(f,g) = \frac12(f+g) + \frac12|f-g|$, etc., together with the previous parts.

Theorem 4.14 (Domain splitting). Suppose $f : [a,b] \to \mathbb{R}$ and let $c \in (a,b)$. If $f$ is integrable on both $[a,c]$ and $[c,b]$, then it is integrable on $[a,b]$ and
$$\int_a^b f = \int_a^c f + \int_c^b f$$
[Figure: the area under $y = f(x)$ split at $c$ into $\int_a^c f$ and $\int_c^b f$.]

In light of this result, it is conventional to allow integral limits to be reversed: if $a < b$, then
$$\int_b^a f := -\int_a^b f \quad\text{is consistent with}\quad \int_a^a f = 0$$

Proof. Let $\epsilon > 0$ be given; then there exist partitions $R, S$ of $[a,c]$ and $[c,b]$ such that
$$U(f,R) - L(f,R) < \frac{\epsilon}{2}, \qquad U(f,S) - L(f,S) < \frac{\epsilon}{2}$$
Choose $P = R \cup S$ to partition $[a,b]$; then
$$U(f,P) - L(f,P) = U(f,R) + U(f,S) - L(f,R) - L(f,S) < \epsilon$$
Moreover,
$$\int_a^b f - \int_a^c f - \int_c^b f \le U(f,P) - L(f,R) - L(f,S) = U(f,P) - L(f,P) < \epsilon$$
Showing that this expression is greater than $-\epsilon$ is similar.
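As a quick numeric sanity check of linearity — my addition, reusing the right-endpoint estimator idea from Exercise 4.32.7 — the value $20 - 4\sqrt{2}$ from Example 4.10 should emerge in the limit.

```python
# Hedged sketch: numerically confirm Example 4.10, ∫₀² (5x³ − 3√x) dx = 20 − 4√2.
import math

def riemann_estimate(f, a, b, n):
    dx = (b - a) / n
    return sum(f(a + i * dx) for i in range(1, n + 1)) * dx

estimate = riemann_estimate(lambda x: 5 * x**3 - 3 * math.sqrt(x), 0, 2, 200000)
exact = 20 - 4 * math.sqrt(2)
print(estimate, exact)  # two nearly equal values, agreeing to roughly 3 decimals
```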
Example 4.15. If $f(x) = \sqrt{x}$ on $[0,1]$ and $f(x) = 1$ on $[1,2]$, then
$$\int_0^2 f = \int_0^1 \sqrt{x}\,dx + \int_1^2 1\,dx = \frac23 + 1 = \frac53$$

Monotonic & Continuous Functions

We establish the integrability of two large classes of functions.

Definition 4.16. A function $f : [a,b] \to \mathbb{R}$ is:
1. Monotonic if it is either increasing ($x < y \implies f(x) \le f(y)$) or decreasing.
2. Piecewise monotonic if there is a partition $P = \{x_0, \dots, x_n\}$ (finite!) of $[a,b]$ such that $f$ is monotonic on each open subinterval $(x_{k-1}, x_k)$.
3. Piecewise continuous if there is a partition such that $f$ is uniformly continuous on each $(x_{k-1}, x_k)$.

Theorem 4.17. If $f$ is monotonic or continuous on $[a,b]$, then it is integrable.

Examples 4.18.

1. Since sine is continuous, we can approximate via a sequence of Riemann sums
$$\int_0^\pi \sin x\,dx = \lim_{n\to\infty} \frac{\pi}{n}\sum_{i=1}^n \sin\frac{\pi i}{n}$$
Evaluating this limit is another matter entirely, one best handled in the next section...

2. Similarly, $e^{\sqrt{x}}$ is integrable and therefore may be approximated via Riemann sums:
$$\int_0^1 e^{\sqrt{x}}\,dx = \lim_{n\to\infty} \frac1n\sum_{i=1}^n \exp\sqrt{\frac{i}{n}} = \lim_{n\to\infty}\sum_{j=1}^n \frac{2j-1}{n^2}\exp\frac{j}{n}$$
Both sums use right endpoints: the first has equal subintervals, while the second is analogous to Example 4.8.1. These limits would typically be estimated using a computer.

Proof. Since $[a,b]$ is closed and bounded, a continuous function $f$ is uniformly so. Let $\epsilon > 0$ be given: $\exists \delta > 0$ such that
$$\forall x, y \in [a,b], \quad |x - y| < \delta \implies |f(x) - f(y)| < \frac{\epsilon}{b-a}$$
Let $P$ be a partition with $\operatorname{mesh}(P) < \delta$. Since $f$ attains its bounds on each $[x_{i-1}, x_i]$, $\exists x_i^*, y_i^* \in [x_{i-1}, x_i]$ such that
$$M_i - m_i = f(x_i^*) - f(y_i^*) < \frac{\epsilon}{b-a}$$
from which
$$U(f,P) - L(f,P) < \sum_{i=1}^n \frac{\epsilon}{b-a}(x_i - x_{i-1}) = \epsilon$$
The monotonicity argument is an exercise.

Combining the proof with Definition 4.12: every uniformly continuous $f : (a,b) \to \mathbb{R}$ is integrable.

Corollary 4.19. Piecewise continuous and bounded piecewise monotonic functions are integrable.

Proof. If $f$ is piecewise continuous, then the restriction of $f$ to $(x_{k-1}, x_k)$ has a continuous extension $g_k : [x_{k-1}, x_k] \to \mathbb{R}$; this is integrable by Theorem 4.17. By Corollary 4.11, $f$ is integrable on $[x_{k-1}, x_k]$ with $\int_{x_{k-1}}^{x_k} f = \int_{x_{k-1}}^{x_k} g_k$. Theorem 4.14 (applied $n-1$ times!) finishes things off:
$$\int_a^b f = \sum_{k=1}^n \int_{x_{k-1}}^{x_k} f$$
The argument for piecewise monotonicity is similar.

Example 4.20. The 'fractional part' function $f(x) = x - \lfloor x \rfloor$ is both piecewise continuous and piecewise monotone on any bounded interval. It is therefore integrable on any such interval.

[Figure: the sawtooth graph of $y = x - \lfloor x \rfloor$ on $[0,5]$.]

For a final corollary, here is one more incarnation of the intermediate value theorem.

Corollary 4.21 (IVT for integrals). If $f$ is continuous on $[a,b]$, then $\exists \xi \in (a,b)$ for which
$$f(\xi) = \frac{1}{b-a}\int_a^b f$$

Proof. Since $f$ is continuous, it is integrable on $[a,b]$. By the extreme value theorem it is also bounded and attains its bounds: $\exists p, q \in [a,b]$ such that
$$f(p) = \inf_{x\in[a,b]} f(x), \qquad f(q) = \sup_{x\in[a,b]} f(x)$$
Applying Theorem 4.13, part 2, with $m = f(p)$ and $M = f(q)$, we see that
$$(b-a)f(p) \le \int_a^b f \le (b-a)f(q)$$
[Figure: a curve with minimum $m$, maximum $M$, and average value $f_{av}$ attained at $\xi$ between $p$ and $q$.]
Divide by $b-a$ and apply the usual intermediate value theorem for $f$ to see that the required $\xi$ exists between $p$ and $q$. In the picture, when $f$ is positive and continuous, the grey area equals that under the curve; imagine levelling off the blue hill with a bulldozer...
The notation $f_{av} = \frac{1}{b-a}\int_a^b f$ indicates the average value of $f$ on $[a,b]$: to see why this interpretation is sensible, take a sequence of Riemann sums on equally-spaced partitions $P_n$ (so $\Delta x = \frac{b-a}{n}$) to see that
$$\frac{1}{b-a}\int_a^b f = \lim_{n\to\infty}\frac{1}{b-a}\sum_{i=1}^n f(x_i^*)\,\Delta x = \lim_{n\to\infty}\frac{f(x_1^*) + \cdots + f(x_n^*)}{n}$$
is the limit of a sequence of averages of equally-spaced samples $f(x_i^*)$.

What can/cannot be integrated?

We now know a great many examples of integrable functions:
1. Piecewise continuous & monotonic functions are integrable.
2. Linear combinations, products, absolute values, maximums and minimums of (already) integrable functions.

By contrast, we've only seen one non-integrable function (Example 4.8.2). After so many positive integrability conditions, it is reasonable to ask precisely which functions are Riemann integrable. Here is the answer, though it is quite tricky to understand.

Theorem 4.22 (Lebesgue). Suppose $f : [a,b] \to \mathbb{R}$ is bounded. Then $f$ is Riemann integrable $\iff$ it is continuous except on a set of measure zero.

Naïvely, the measure of a set is the sum of the lengths of its maximal subintervals, though unfortunately this doesn't make for a very useful definition.²⁰ Any countable subset has measure zero, so Lebesgue's result is almost as if we can extend Corollary 4.19 to allow for infinite sums. For instance, Exercise 1.17.8 describes a function which is continuous only on the irrationals: it is thus Riemann integrable (indeed $\int_a^b f = 0$ for any $a < b$). There are also uncountable sets with measure zero such as Cantor's middle-third set $C$: the function
$$f(x) = \begin{cases} 1 & \text{if } x \in C \\ 0 & \text{otherwise} \end{cases}$$
is continuous except on $C$ and therefore Riemann integrable; again $\int_0^1 f(x)\,dx = 0$.

²⁰Formally, the length of an open interval $(a,b)$ is $b-a$, and a set $A \subseteq \mathbb{R}$ has measure zero if
$$\forall \epsilon > 0,\ \exists \text{ open intervals } I_n \text{ such that } A \subseteq \bigcup_{n=1}^\infty I_n \ \text{ and } \ \sum_{n=1}^\infty \operatorname{length}(I_n) < \epsilon$$
More generally, the Lebesgue measure of a set (subject to a technical condition) is the infimum of the sum of the lengths of any countable collection of open covering intervals. Measure theory is properly a matter for graduate study. Surprisingly, there exist sets with positive measure that contain no subintervals, and even sets which are non-measurable!

Exercises 4.33. Key concepts: Linear combinations, products, etc., of integrable functions are integrable, Continuous and monotone functions are integrable, Integrability on open intervals

1. Explain why $\int_0^{2\pi} x^2\sin^8(e^x)\,dx \le \frac83\pi^3$
2. If $f$ is integrable on $[a,b]$, prove that it is integrable on any interval $[c,d] \subseteq [a,b]$.
3. We complete the proof of Theorem 4.9 (linearity of integration).
   (a) Suppose $k > 0$, let $A \subseteq \mathbb{R}$ and define $kA := \{kx : x \in A\}$. Prove that $\sup kA = k\sup A$ and $\inf kA = k\inf A$.
   (b) If $k > 0$, prove that $kf$ is integrable on any interval and that $\int kf = k\int f$.
   (c) How should you modify your argument if $k < 0$?
4. Give an example of an integrable but discontinuous function on a closed bounded interval $[a,b]$ for which the conclusion of the Intermediate Value Theorem for Integrals is false.
5. Use Darboux sums to compute the value of the integral $\int_{1/2}^{15/2} (x - \lfloor x \rfloor)\,dx$ (Example 4.20).
6. We prove and extend Corollary 4.11. Suppose $f$ is integrable on $[a,b]$.
   (a) If $g : [a,b] \to \mathbb{R}$ satisfies $f(x) = g(x)$ for all $x \in (a,b)$, prove that $g$ is integrable and $\int_a^b g = \int_a^b f$. (Hint: consider $h = f - g$ and show that $\int h = 0$)
   (b) Now suppose $g : [a,b] \to \mathbb{R}$ satisfies $f(x) = g(x)$ for all $x \in [a,b]$ except at finitely many points. Prove that $g$ is integrable and $\int_a^b g = \int_a^b f$.
7. Show that an increasing function on $[a,b]$ is integrable and thus complete Theorem 4.17. (Hint: choose a partition with $\operatorname{mesh}(P) < \frac{\epsilon}{f(b)-f(a)}$)
8. Suppose $f$ and $g$ are integrable on $[a,b]$.
   (a) Define $h(x) = f(x)^2$. We know:
       • $f$ is bounded: $\exists K$ such that $|f(x)| \le K$ on $[a,b]$.
       • Given $\epsilon > 0$, $\exists P$ such that $U(f,P) - L(f,P) < \frac{\epsilon}{2K}$.
     For each subinterval $[x_{i-1}, x_i]$, let $M_i = \sup f(x)$, $m_i = \inf f(x)$, $M_i' = \sup h(x)$, $m_i' = \inf h(x)$. Prove that $M_i' - m_i' \le 2(M_i - m_i)K$. Hence conclude that $h$ is integrable.
   (b) Prove that $fg$ is integrable. (Hint: $fg = \frac14(f+g)^2 - \frac14(f-g)^2$)
   (c) Prove that $U(|f|,P) - L(|f|,P) \le U(f,P) - L(f,P)$ for any partition $P$. Hence conclude that $|f|$ is integrable.
   (One can extend these arguments to show that if $j$ is continuous, then $j \circ f$ is integrable. Parts (a) and (c) correspond, respectively, to $j(x) = x^2$ and $j(x) = |x|$.)
9. (Hard) Let
$$f(x) = \begin{cases} x & \text{if } x \ne 0 \text{ and } \sin\frac1x > 0 \\ -x & \text{if } x \ne 0 \text{ and } \sin\frac1x < 0 \\ 0 & \text{if } x = 0 \end{cases}$$
   (a) Show that $f$ is not piecewise continuous on $[0,1]$.
   (b) Show that $f$ is not piecewise monotonic on $[0,1]$.
   (c) Show that $f$ is integrable on $[0,1]$. (Hint: given $\epsilon$, hunt for a suitable partition to make $U(f,P) - L(f,P) < \epsilon$ by considering $[0,x_1]$ differently to the other subintervals)
   (d) Make a similar argument which proves that $g = \sin\frac1x$ is integrable on $(0,1]$. (Hint: show that $g$ has an integrable extension on $[0,1]$)

4.34 The Fundamental Theorem of Calculus

The key result linking integration and differentiation is usually presented in two parts. While there are significant subtleties, the rough statements are as follows (we follow the traditional numbering):

Part I. Differentiation reverses integration: $\frac{d}{dx}\int_a^x f(t)\,dt = f(x)$
Part II. Integration reverses differentiation: $\int_a^b F'(x)\,dx = F(b) - F(a)$

These facts seemed intuitively obvious to early practitioners of calculus. Given a continuous positive function $f$:
• Let $F(x)$ denote the area under $y = f(x)$ between $0$ and $x$.
• A small increase $\Delta x$ results in the area increasing by $\Delta F$.
• $\Delta F \approx f(x)\Delta x$ is approximately the area of a rectangle, whence $\frac{\Delta F}{\Delta x} \approx f(x)$. This is part I.
• $F(b) - F(a) \approx \sum \Delta F_i \approx \sum f(x_i)\Delta x_i$. Since $F' = f$, this is part II.

[Figure: the strip of width $\Delta x$ and height $f(x)$ contributing $\Delta F$ to the area under the curve.]

When Leibniz introduced the symbols $\int$ and $d$ in the late 1600s, it was partly to reflect the fundamental theorem.²¹ If you're happy with non-rigorous notions of limit, rate of change, area, and (infinite) sums, the above is all you need! Of course we are very much concerned with the details: what must we assume about $f$ and $F$, and how are these properties used in the proof?

Theorem 4.23 (FTC, part I). Suppose $f$ is integrable on $[a,b]$. For any $x \in [a,b]$, define
$$F(x) := \int_a^x f(t)\,dt$$
Then:
1. $F$ is uniformly continuous on $[a,b]$;
2. If $f$ is continuous at $c \in [a,b]$, then $F$ is differentiable²² at $c$ with $F'(c) = f(c)$.

Compare this with the naïve version above, where we assumed $f$ was continuous. We now require only the integrability of $f$, and its continuity at one point for the full result.

²¹$\int$ is a stylized S for sum, while d stands for difference. Given a sequence $F = (F_0, F_1, F_2, \dots, F_n)$, construct a new sequence of differences $dF = (F_1 - F_0, F_2 - F_1, \dots, F_n - F_{n-1})$, which can then be summed:
$$\int dF = (F_1 - F_0) + (F_2 - F_1) + \cdots + (F_n - F_{n-1}) = F_n - F_0 \tag{$*$}$$
Viewing a function as an 'infinite sequence' of values spaced along an interval, $dF$ becomes a sequence of infinitesimals and ($*$) is essentially the fundamental theorem: $\int dF = F(b) - F(a)$.
It is the concept of function that is suspect here, not the essential relationship between sums and differences.

²²Strictly: if $c = a$, then $F$ is right-differentiable, etc.

Examples 4.24. Examples in every elementary calculus course.

1. Since $f(x) = \sin^2(x^3 - 7)$ is continuous on any bounded interval, we conclude that
$$\frac{d}{dx}\int_4^x \sin^2(t^3 - 7)\,dt = \sin^2(x^3 - 7)$$
If one follows Theorem 4.14 and its conventions, then this is valid for all $x \in \mathbb{R}$.

2. The chain rule permits more complicated examples. For instance: $f(t) = \sin\sqrt{t}$ is continuous on its domain $[0,\infty)$ and $y(x) = x^2 + 3$ has range $[3,\infty) \subseteq \operatorname{dom}(f)$, whence
$$\frac{d}{dx}\int_0^{x^2+3} \sin\sqrt{t}\,dt = \frac{dy}{dx}\,\frac{d}{dy}\int_0^y \sin\sqrt{t}\,dt = 2x\sin\sqrt{x^2+3}$$

3. For a final positive example, we consider when
$$\frac{d}{dx}\int_{\sin x}^{e^x} \tan(t^2)\,dt = e^x\tan(e^{2x}) - \cos x\,\tan(\sin^2 x)$$
makes sense. To evaluate this, first choose any constant $a$ and write
$$\int_{\sin x}^{e^x} = \int_a^{e^x} + \int_{\sin x}^a = \int_a^{e^x} - \int_a^{\sin x}$$
before differentiating. This is valid provided $\sin x$, $e^x$ and $a$ all lie in the same subinterval of
$$\operatorname{dom}\tan(t^2) = \mathbb{R}\setminus\left\{\pm\sqrt{\tfrac{\pi}{2}},\ \pm\sqrt{\tfrac{3\pi}{2}},\ \pm\sqrt{\tfrac{5\pi}{2}},\ \dots\right\}$$
Since $|\sin x| \le 1 < \sqrt{\pi/2}$, this requires $e^{2x} < \frac{\pi}{2} \iff x < \frac12\ln\frac{\pi}{2}$. Choosing $a = 1$ would certainly suffice.

4. Now consider why the theorem requires continuity. The piecewise continuous function
$$f : [0,2] \to \mathbb{R} : x \mapsto \begin{cases} 2x & \text{if } x \le 1 \\ \frac12 & \text{if } x > 1 \end{cases}$$
has a jump discontinuity at $x = 1$. We can still compute
$$F(x) = \begin{cases} \int_0^x 2t\,dt = x^2 & \text{if } x \le 1 \\ \int_0^1 2t\,dt + \int_1^x \frac12\,dt = \frac12(x+1) & \text{if } x > 1 \end{cases}$$
This is continuous, indeed uniformly so! However the discontinuity of $f$ results in $F$ having a corner and thus being non-differentiable at $x = 1$. Indeed $F'(x) = f(x)$ whenever $x \ne 1$: that is, at all values of $x$ where $f$ is continuous.

[Figures: the graphs of $f$ and $F$ on $[0,2]$; $F$ is continuous but has a corner at $x = 1$.]

Proving FTC I. Neither half of the theorem is particularly difficult once you write down what you know and what you need to prove. Here are the key ingredients:

1. Uniform continuity for $F$ means we must control the size of
$$|F(y) - F(x)| = \left|\int_a^y f(t)\,dt - \int_a^x f(t)\,dt\right| = \left|\int_x^y f(t)\,dt\right| \le \int_x^y |f(t)|\,dt$$
But the boundedness of $f$ allows us to control this last integral...

2. $F'(c) = f(c)$ means showing that $\lim_{x\to c}\frac{F(x)-F(c)}{x-c} = f(c)$, which means controlling the size of
$$\left|\frac{F(x)-F(c)}{x-c} - f(c)\right| = \left|\frac{1}{x-c}\int_c^x f(t)\,dt - f(c)\right|$$
The trick here is to bring the constant $f(c)$ inside the integral as $\frac{1}{x-c}\int_c^x f(c)\,dt$, so that the above becomes $\frac{1}{|x-c|}\left|\int_c^x |f(t)-f(c)|\,dt\right|$. This may now be controlled via the continuity of $f$...

Proof. 1. Since $f$ is integrable, it is bounded: $\exists M > 0$ such that $|f(x)| \le M$ for all $x$. Let $\epsilon > 0$ be given and define $\delta = \frac{\epsilon}{M}$. Then, for any $x, y \in [a,b]$,
$$0 < y - x < \delta \implies |F(y) - F(x)| = \left|\int_x^y f(t)\,dt\right| \le \int_x^y |f(t)|\,dt \quad\text{(Theorem 4.13, part 4)}$$
$$\le M(y-x) \quad\text{(Theorem 4.13, part 2)} \quad < M\delta = \epsilon$$
We conclude that $F$ is uniformly continuous on $[a,b]$.

2. Let $\epsilon > 0$ be given. Since $f$ is continuous at $c$, $\exists \delta > 0$ such that, for all $t \in [a,b]$,
$$|t - c| < \delta \implies |f(t) - f(c)| < \frac{\epsilon}{2}$$
Now for all $x \in [a,b]$ (except $c$), $0 < |x - c| < \delta$ implies
$$\left|\frac{F(x)-F(c)}{x-c} - f(c)\right| = \left|\frac{1}{x-c}\int_c^x \left(f(t) - f(c)\right) dt\right| \quad\text{(Theorem 4.9)}$$
$$\le \frac{1}{|x-c|}\left|\int_c^x |f(t) - f(c)|\,dt\right| \quad\text{(Theorem 4.13)} \quad \le \frac{1}{|x-c|}\cdot\frac{\epsilon}{2}\,|x-c| = \frac{\epsilon}{2} < \epsilon$$
Clearly $\lim_{x\to c}\frac{F(x)-F(c)}{x-c} = f(c)$. Otherwise said, $F$ is differentiable at $c$ with $F'(c) = f(c)$.
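To make part I concrete, here is a small numeric sketch (my addition) for the jump function of Example 4.24.4, using the closed form of $F$ computed in that example. The symmetric difference quotients of $F$ recover $f(c)$ exactly where $f$ is continuous, and expose the corner at the jump.

```python
# Hedged sketch: difference quotients of F(x) = ∫₀ˣ f(t) dt for Example 4.24.4,
# using the closed form of F derived in the example.
def f(x):
    return 2 * x if x <= 1 else 0.5

def F(x):
    return x * x if x <= 1 else 0.5 * (x + 1)

h = 1e-6
for c in (0.5, 1.0, 1.5):
    quotient = (F(c + h) - F(c - h)) / (2 * h)
    print(c, quotient, f(c))
# At c = 0.5 and c = 1.5 the quotient matches f(c), as FTC I predicts; at the
# jump c = 1 it returns ≈ 1.25, wedged between the one-sided slopes 2 and 1/2,
# reflecting the corner in F.
```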
The Fundamental Theorem, part II

As with part I, the formulaic part of the result should be familiar, though we are more interested in the assumptions and where they are needed.

Theorem 4.25 (FTC, part II). Suppose $g$ is continuous on $[a,b]$, differentiable on $(a,b)$, and moreover that $g'$ is integrable on $(a,b)$ (recall Definition 4.12). Then
$$\int_a^b g' = g(b) - g(a)$$

Part II is often expressed in terms of anti-derivatives: $F$ being an anti-derivative of $f$ if $F' = f$. Combined with FTC, part I, we recover the familiar '+c' result and a simpler version of the fundamental theorem often seen in elementary calculus.

Corollary 4.26. Let $f$ be continuous on $[a,b]$.
• If $F$ is an anti-derivative of $f$, then $\int_a^b f = F(b) - F(a)$.
• Every anti-derivative of $f$ has the form $F(x) = \int_a^x f(t)\,dt + c$ for some constant $c$.

Examples 4.27. Again, basic examples should be familiar.

1. Plainly $g(x) = x^2 + 2x^{3/2}$ is continuous on $[1,4]$ and differentiable on $(1,4)$ with derivative $g'(x) = 2x + 3\sqrt{x}$; this last is continuous (and thus integrable) on $(1,4)$. We conclude that
$$\int_1^4 \left(2x + 3\sqrt{x}\right) dx = \left[x^2 + 2x^{3/2}\right]_1^4 = (16 + 16) - (1 + 2) = 29$$

2. If $g(x) = \sin(3x^2)$, then $g'(x) = 6x\cos(3x^2)$. Certainly $g$ satisfies the hypotheses of the theorem on any bounded interval $[a,b]$. We conclude
$$\int_a^b 6x\cos(3x^2)\,dx = \sin(3b^2) - \sin(3a^2)$$
Moreover, every anti-derivative of $f(x) = 6x\cos(3x^2)$ has the form $F(x) = \sin(3x^2) + c$.

3. Recall Example 4.24.4, where the discontinuity of $f$ at $x = 1$ led to the non-differentiability of $F(x) = \int_0^x f(t)\,dt$. The function $F$ therefore fails the hypotheses of FTC II on the interval $[0,2]$. It almost, however, satisfies the conclusions of FTC II, though this is somewhat tautological given the definition of $F$: except at $x = 1$, $F$ is certainly an anti-derivative of $f$, and moreover $\int_0^2 f(x)\,dx = F(2) - F(0)$. In case you're worried that this makes the theorem trivial, note that other anti-derivatives $\hat{F}$ of $f$ exist (except at $x = 1$) which fail to satisfy the conclusion. For instance
$$\hat{F}(x) = \begin{cases} x^2 & \text{if } x < 1 \\ \frac12 x & \text{if } x > 1 \end{cases} \implies \hat{F}(2) - \hat{F}(0) = 1 \ne \frac32 = \int_0^2 f(x)\,dx$$

Proving FTC II. Exercise 10 offers a relatively easy proof when $g' = f$ is continuous. For the real McCoy, we can only rely on the integrability of $g'$: the trick is to use the mean value theorem to write $g(b) - g(a)$ as a Riemann sum over a suitable partition.

Proof. Suppose $\epsilon > 0$ is given. Since $g'$ is integrable, we may choose some partition $P$ satisfying $U(g',P) - L(g',P) < \epsilon$. Since $g$ satisfies the mean value theorem on each subinterval, $\exists \xi_i \in (x_{i-1}, x_i)$ such that
$$g'(\xi_i) = \frac{g(x_i) - g(x_{i-1})}{x_i - x_{i-1}}$$
from which
$$g(b) - g(a) = \sum_{i=1}^n \left(g(x_i) - g(x_{i-1})\right) = \sum_{i=1}^n g'(\xi_i)(x_i - x_{i-1})$$
This is a Riemann sum for $g'$ associated to the partition $P$. Since the upper and lower Darboux sums are the supremum and infimum of these, we see that
$$L(g',P) \le g(b) - g(a) \le U(g',P)$$
However $\int_a^b g'$ satisfies the same inequality: $L(g',P) \le \int_a^b g' \le U(g',P)$. Since these inequalities hold for all $\epsilon > 0$, we conclude that $\int_a^b g' = g(b) - g(a)$.

While we certainly used the integrability of $g'$ in the proof, it might seem strange that we assumed it at all: shouldn't every derivative be integrable? Perhaps surprisingly, the answer is no! If you want a challenge, look up the Volterra function, which is differentiable everywhere but whose derivative is non-integrable!

The Rules of Integration

If one wants to evaluate an integral, rather than merely show it exists, there are really only two options:
1. Evaluate Riemann sums and take limits. This is often difficult, if not impossible, to do explicitly.
2. Use FTC II. The problem now becomes the finding of anti-derivatives, for which the core method is essentially guess and differentiate.
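Before turning to the general rules, here is a quick symbolic check (my addition, using sympy, which is not part of the notes) of Example 4.27.2: a computer algebra system finds the anti-derivative $\sin(3x^2)$ and confirms the stated value of the integral.

```python
# Hedged sketch: verify Example 4.27.2, ∫ₐᵇ 6x·cos(3x²) dx = sin(3b²) − sin(3a²).
from sympy import symbols, integrate, cos, sin, simplify

x, a, b = symbols("x a b", real=True)
value = integrate(6 * x * cos(3 * x**2), (x, a, b))
print(simplify(value - (sin(3 * b**2) - sin(3 * a**2))))  # prints 0
```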
To obtain general rules, we can attempt to reverse the rules of differentiation.

Integration by Parts

Recall the product rule: the product $g = uv$ of two differentiable functions is differentiable with $g' = u'v + uv'$. Now apply Theorems 4.9, 4.13 and FTC II.

Corollary 4.28 (Integration by Parts). Suppose $u, v$ are continuous on $[a,b]$, differentiable on $(a,b)$, and that $u', v'$ are integrable on $(a,b)$. Then
$$\int_a^b u'(x)v(x)\,dx = u(b)v(b) - u(a)v(a) - \int_a^b u(x)v'(x)\,dx$$
This is significantly less useful than the product rule, since it merely transforms the integral of one product into the integral of another.

Examples 4.29. With practice, there is no need to explicitly state $u$ and $v$.

1. Let $u(x) = x$ and $v'(x) = \cos x$. Then $u'(x) = 1$ and $v(x) = \sin x$. These certainly satisfy the hypotheses. We conclude
$$\int_0^{\pi/2} x\cos x\,dx = \left[x\sin x\right]_0^{\pi/2} - \int_0^{\pi/2} \sin x\,dx = \frac{\pi}{2}\sin\frac{\pi}{2} - 0 - \left[-\cos x\right]_0^{\pi/2} = \frac{\pi}{2} + \cos\frac{\pi}{2} - \cos 0 = \frac{\pi}{2} - 1$$

2. Let $u(x) = \ln x$ and $v'(x) = 1$. Then $u'(x) = \frac1x$ and $v(x) = x$, whence
$$\int_e^{e^2} \ln x\,dx = \left[x\ln x\right]_e^{e^2} - \int_e^{e^2} \frac{x}{x}\,dx = e^2\ln e^2 - e\ln e - \left[x\right]_e^{e^2} = 2e^2 - e - e^2 + e = e^2$$

Change of Variables/Substitution

We now turn our attention to the chain rule. If $g(x) = F(u(x))$, where $F$ and $u$ are differentiable, then $g$ is differentiable with
$$g'(x) = \frac{dg}{dx} = \frac{dF}{du}\frac{du}{dx} = F'(u(x))\,u'(x)$$
Now integrate both sides; the only issue is what assumptions are needed to invoke FTC II.

Theorem 4.30 (Substitution Rule). Suppose $u : [a,b] \to \mathbb{R}$ and $f : \operatorname{range}(u) \to \mathbb{R}$ are continuous. Suppose also that $u$ is differentiable on $(a,b)$ with integrable derivative $u'$. Then
$$\int_a^b f(u(x))\,u'(x)\,dx = \int_{u(a)}^{u(b)} f(u)\,du$$
This is the famous 'u-sub'/change-of-variables formula from elementary calculus.

Proof. We leave as an exercise the verification that both integrals exist. By the intermediate and extreme value theorems, $\operatorname{range}(u)$ is a closed bounded interval. Assume $\operatorname{range}(u)$ has positive length, for otherwise both integrals are trivially zero. Choose any $c \in \operatorname{range}(u)$ and define $F : \operatorname{range}(u) \to \mathbb{R}$ by
$$F(v) := \int_c^v f(t)\,dt$$
Since $f$ is continuous, FTC I says that $F$ is differentiable with $F'(u) = f(u)$. But now
$$\int_a^b f(u(x))\,u'(x)\,dx = \int_a^b \frac{d}{dx}F(u(x))\,dx \quad\text{(chain rule)}$$
$$= F(u(b)) - F(u(a)) \quad\text{(FTC II)} \quad = \int_{u(a)}^{u(b)} f(u)\,du$$
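As a numeric illustration (my addition), the two sides of the substitution rule can be estimated independently. Here this is done for Example 4.31.1 below, with $u(x) = x^2$ and $f(u) = \sin u$ on $[0,\sqrt{\pi}]$; the estimator is the same right-endpoint scheme sketched for Exercise 4.32.7.

```python
# Hedged sketch: estimate both sides of the substitution rule for
# f(u) = sin u, u(x) = x², on [0, √π]; both should approach 2.
import math

def riemann_estimate(f, a, b, n=100000):
    dx = (b - a) / n
    return sum(f(a + i * dx) for i in range(1, n + 1)) * dx

a, b = 0.0, math.sqrt(math.pi)
lhs = riemann_estimate(lambda x: math.sin(x**2) * 2 * x, a, b)   # ∫ f(u(x)) u'(x) dx
rhs = riemann_estimate(math.sin, 0.0, math.pi)                   # ∫ f(u) du on [u(a), u(b)]
print(lhs, rhs)  # both ≈ 2
```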
Examples 4.31. Successfully applying the substitution rule can require significant creativity.²³

1. To evaluate $\int_0^{\sqrt{\pi}} 2x\sin x^2\,dx$, we consider the substitution $u(x) = x^2$ defined on $[0,\sqrt{\pi}]$. Certainly $u$ is continuous; moreover its derivative $u'(x) = 2x$ is integrable on $(0,\sqrt{\pi})$. Finally $f(u) = \sin u$ is continuous on $\operatorname{range}(u) = [0,\pi]$. The hypotheses are satisfied, whence
$$\int_0^{\sqrt{\pi}} 2x\sin x^2\,dx = \int_0^{\sqrt{\pi}} f(u(x))\,u'(x)\,dx = \int_{u(0)}^{u(\sqrt{\pi})} f(u)\,du = \int_0^\pi \sin u\,du = \left[-\cos u\right]_0^\pi = 2$$

2. For the following integral, a simple factorization suggests the substitution $u(x) = x^2 - 2$. Plainly $u : [\sqrt{2},\sqrt{3}] \to [0,1]$ and $u'(x) = 2x$ is integrable. Moreover, $f(u) = \frac{1}{u^2+1}$ is continuous on $\operatorname{range}(u) = [0,1]$. We conclude
$$\int_{\sqrt{2}}^{\sqrt{3}} \frac{2x}{x^4 - 4x^2 + 5}\,dx = \int_{\sqrt{2}}^{\sqrt{3}} \frac{2x}{(x^2-2)^2 + 1}\,dx = \int_0^1 \frac{1}{u^2+1}\,du = \left[\arctan u\right]_0^1 = \frac{\pi}{4}$$

3. The hypotheses on $u$ really are all that's necessary. In particular, $u$ need not be left-/right-differentiable at the endpoints of $[a,b]$. For instance, with $f(u) = u^2$ and $u(x) = \sqrt{x}$ on $[0,4]$, we easily verify
$$\frac83 = \int_0^4 \frac{\sqrt{x}}{2}\,dx = \int_0^4 \frac{x}{2\sqrt{x}}\,dx = \int_0^4 f(u(x))\,u'(x)\,dx = \int_0^2 f(u)\,du = \int_0^2 u^2\,du = \frac83$$

4. Sloppy 'substitutions' might lead to utter nonsense. For instance, $u(x) = x^2$ suggests
$$\int_{-1}^2 \frac1x\,dx = \int_{-1}^2 \frac{1}{2x^2}\,2x\,dx = \int_1^4 \frac{1}{2u}\,du = \frac12(\ln 4 - \ln 1) = \ln 2$$
This is total gibberish: the first integral does not exist since $\frac1x$ is undefined at $0 \in (-1,2)$. Thankfully, the hypotheses of the substitution rule prevent this: $f(u) = \frac{1}{2u}$ is not continuous on $\operatorname{range}(u) = [0,4]$. While you are very unlikely to make precisely this mistake, the risk is real in more complicated or abstract situations...

²³Hence the old adage, "Differentiation is a science, whereas integration is an art." To illustrate by example, consider $f(x) = \tan^{-1}\left(e^x\cos(3x^2) + 4x^3\right)$. The derivative is easily found using the product and chain rules:
$$\frac{df}{dx} = \frac{e^x\cos(3x^2) - 6xe^x\sin(3x^2) + 12x^2}{1 + \left(e^x\cos(3x^2) + 4x^3\right)^2}$$
By contrast, if you want to find an explicit anti-derivative of $f(x)$, the integration analogues (parts/substitution) are essentially useless. Similarly, the integral $\int_0^1 \tan^{-1}\left(e^x\cos(3x^2) + 4x^3\right) dx$ is likely impossible to evaluate explicitly and can only be approximated, say by using Riemann sums.

Exercises 4.34. Key concepts: Complete statements of FTC parts I & II, Integration by Parts/Substitution

1. Calculate the following limits: (a) $\lim_{x\to 0}\frac1x\int_0^x e^{t^2}\,dt$  (b) $\lim_{h\to 0}\frac1h\int_3^{3+h} e^{t^2}\,dt$
2. Let $f(t) = 0$ if $t < 0$, $f(t) = t$ if $0 \le t \le 1$, and $f(t) = 4$ if $t > 1$.
   (a) Determine the function $F(x) = \int_0^x f(t)\,dt$ and sketch it. Where is $F$ continuous?
   (b) Where is $F$ differentiable? Calculate $F'$ at the points of differentiability.
3. Let $f$ be continuous on $\mathbb{R}$.
   (a) Define $F(x) = \int_{x-1}^{x+1} f(t)\,dt$. Carefully show that $F$ is differentiable on $\mathbb{R}$ and compute $F'$.
   (b) Repeat for $G(x) = \int_0^{\sin x} f(t)\,dt$.
4. Recall Examples 4.24.4 and 4.27.3. Describe all anti-derivatives $F$ of $f$ on $[0,1)\cup(1,2]$. Which satisfy $\int_0^2 f(x)\,dx = F(2) - F(0)$?
5. Suppose $u, v$ satisfy the hypotheses of integration by parts. By FTC I, $\int_a^x u'(t)v(t)\,dt$ is an anti-derivative of $u'(x)v(x)$: what does integration by parts say is another?
6. Use a substitution to integrate $\int_0^1 x\sqrt{1-x^2}\,dx$
7. Use integration by parts and the substitution rule to evaluate $\int_0^b \arcsin x\,dx$ for any $b < 1$.
8. Use integration by parts to evaluate $\int_0^b x\arctan x\,dx$ for any $b > 0$
9. If $f$ and $u$ satisfy the hypotheses of the substitution rule, explain why both $(f\circ u)u'$ and $f$ are integrable on the required intervals.
10. We prove a simpler version of the fundamental theorem when $f : [a,b]\to\mathbb{R}$ is continuous.
    Part I. Define $F(x) = \int_a^x f(t)\,dt$. If $c, x \in [a,b]$ where $c \ne x$, prove that
    $$m \le \frac{F(x)-F(c)}{x-c} \le M$$
    where $m, M$ are the minimum and maximum values of $f(t)$ on the closed interval with endpoints $c, x$; why do $m, M$ exist? Now deduce that $F'(c) = f(c)$.
    Part II. Now suppose $F$ is any anti-derivative of $f$ on $[a,b]$. Use part (a) and the mean value theorem to prove that $\int_a^b f(t)\,dt = F(b) - F(a)$.

4.36 Improper Integrals

The Riemann integral has several limitations. Even allowing for functions to be integrable on open intervals (Definition 4.12), the existence of $\int_a^b f(x)\,dx$ requires both:
• That $(a,b)$ be a bounded interval.
• That $f$ be bounded on $(a,b)$.

Limits provide a natural way to extend the Riemann integral to unbounded intervals and functions.

Definition 4.32. Suppose $f : [a,b) \to \mathbb{R}$ satisfies the following properties:
• $f$ is integrable on every closed bounded subinterval $[a,t] \subseteq [a,b)$.
• If $b$ is finite, then $f$ is unbounded at $b$ ($b$ can be $\infty$!)

The improper integral of $f$ on $[a,b)$ is
$$\int_a^b f(x)\,dx := \lim_{t\to b^-}\int_a^t f(x)\,dx$$
This is convergent or divergent as is the limit.
If an integral is improper at its lower limit ($f : (a,b] \to \mathbb{R}$, etc.), then $\int_a^b f(x)\,dx := \lim_{s\to a^+}\int_s^b f(x)\,dx$. If an integral is improper at both ends, choose any $c \in (a,b)$ and define
$$\int_a^b f(x)\,dx = \lim_{s\to a^+}\int_s^c f(x)\,dx + \lim_{t\to b^-}\int_c^t f(x)\,dx$$
provided both one-sided improper integrals exist and the limit sum makes sense. Theorem 4.14 says that the choice of $c$ for a doubly-improper integral is irrelevant.

Many properties of the Riemann integral transfer naturally to improper integrals, though not everything... For example, part 1 of Theorem 4.13 extends:

Theorem 4.33. If $0 \le f(x) \le g(x)$ on $[a,b)$, then $\int_a^b f \le \int_a^b g$ whenever the integrals exist (standard or improper). In particular:
• $\int_a^b f = \infty \implies \int_a^b g = \infty$
• $\int_a^b g$ convergent $\implies \int_a^b f$ converges to some value $\le \int_a^b g$

We leave some of the detail to Exercise 7.

Examples 4.34.

1. $\int_0^t x^2\,dx = \frac13 t^3$ for any $t > 0$. Clearly
$$\int_0^\infty x^2\,dx = \lim_{t\to\infty}\frac13 t^3 = \infty$$
More formally, the improper integral $\int_0^\infty x^2\,dx$ diverges to infinity.

2. With $f(x) = x^{-4/3}$ defined on $[1,\infty)$,
$$\int_1^\infty x^{-4/3}\,dx = \lim_{t\to\infty}\int_1^t x^{-4/3}\,dx = \lim_{t\to\infty}\left[-3x^{-1/3}\right]_1^t = \lim_{t\to\infty}\left(3 - 3t^{-1/3}\right) = 3$$

3. Consider $f(x) = |x|e^{-x^2/2}$ on $(-\infty,\infty)$. On any bounded interval $[0,t]$,
$$\int_0^t f(x)\,dx = \int_0^t xe^{-x^2/2}\,dx = \left[-e^{-x^2/2}\right]_0^t = 1 - e^{-t^2/2} \xrightarrow{t\to\infty} 1$$
By symmetry,
$$\int_{-\infty}^\infty |x|e^{-x^2/2}\,dx = 1 + 1 = 2$$
This example arises naturally in probability: multiplying by $\frac{1}{\sqrt{2\pi}}$ computes the expectation of $|X|$ when $X$ is a standard normally-distributed random variable:
$$\mathbb{E}(|X|) = \int_{-\infty}^\infty \frac{1}{\sqrt{2\pi}}|x|e^{-x^2/2}\,dx = \sqrt{\frac{2}{\pi}}$$

4. Our knowledge of derivatives, $\frac{d}{dx}\sin^{-1}x = \frac{1}{\sqrt{1-x^2}}$ (or the substitution rule), allows us to evaluate
$$\int_0^1 \frac{1}{\sqrt{1-x^2}}\,dx = \lim_{t\to 1^-}\int_0^t \frac{1}{\sqrt{1-x^2}}\,dx = \lim_{t\to 1^-}\sin^{-1}t = \frac{\pi}{2}$$
By symmetry, $\int_{-1}^1 \frac{1}{\sqrt{1-x^2}}\,dx = \pi$. By comparison, we obtain bounds on another improper integral:
$$\frac{1}{\sqrt{1-x^4}} \le \frac{1}{\sqrt{1-x^2}} \implies \int_{-1}^1 \frac{1}{\sqrt{1-x^4}}\,dx \le \int_{-1}^1 \frac{1}{\sqrt{1-x^2}}\,dx = \pi$$

5. Improper integrals need not exist. For instance,
$$\lim_{t\to\infty}\int_0^t \sin x\,dx = \lim_{t\to\infty}(1 - \cos t)$$
diverges by oscillation.

Exercises 4.36. Key concepts: Formal definition and careful calculation of Improper Integrals

1. Use your answers from Section 4.34 to decide whether the improper integrals $\int_0^1 \arcsin x\,dx$ and $\int_0^\infty x\arctan x\,dx$ exist. If so, what are their values?
2. Let $p$ be a positive constant. Prove:
$$\int_0^1 \frac{1}{x^p}\,dx = \begin{cases} \frac{1}{1-p} & \text{if } p < 1 \\ \infty & \text{if } p \ge 1 \end{cases} \qquad \int_1^\infty \frac{1}{x^p}\,dx = \begin{cases} \frac{1}{p-1} & \text{if } p > 1 \\ \infty & \text{if } p \le 1 \end{cases}$$
(The first of these justifies the convergence/divergence properties of $p$-series via the integral test)
3. Suppose $f$ is integrable on $[a,b]$. Explain why $\int_a^b f(x)\,dx = \lim_{t\to b^-}\int_a^t f(x)\,dx$ is still true, even though the integral is not improper.
4. State a version of integration by parts modified for when $\int_a^b u'(x)v(x)\,dx$ is improper at $b$. Now evaluate $\int_0^\infty xe^{-4x}\,dx$.
5. What is wrong with the following calculation?
$$\int_{-\infty}^\infty x\,dx = \lim_{t\to\infty}\left[\frac12 x^2\right]_{-t}^t = \lim_{t\to\infty}\frac12(t^2 - t^2) = \lim_{t\to\infty} 0 = 0$$
6. Prove or disprove: if $\int f$ and $\int g$ are convergent improper integrals, so is $\int fg$.
7. Prove part of Theorem 4.33. Suppose $0 \le f(x) \le g(x)$ for all $x \in [a,b)$, and that $\int_a^b g$ is a convergent improper integral. Prove that $\int_a^b f$ converges and that $\int_a^b f \le \int_a^b g$.
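A convergent improper integral can be watched converging. The tiny sketch below (my addition, using the closed form of the partial integral from Example 4.34.2) tracks $\int_1^t x^{-4/3}\,dx$ for growing $t$, which approaches the value 3 — slowly, since the tail decays like $t^{-1/3}$.

```python
# Hedged sketch: ∫₁ᵗ x^(−4/3) dx = 3 − 3·t^(−1/3) → 3 as t → ∞.
for t in (10, 10**3, 10**6, 10**9):
    partial = 3 - 3 * t ** (-1 / 3)   # closed form of the partial integral
    print(t, partial)
# 10 → 1.61..., 10³ → 2.7, 10⁶ → 2.97, 10⁹ → 2.997: slow but sure convergence.
```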
Extensions of the Riemann Integral (just for fun)

In the 1890s, Thomas Stieltjes²⁴ offered a generalization of the Riemann integral.

Definition 4.35. Let $f : [a,b] \to \mathbb{R}$ be bounded and $\alpha : [a,b] \to \mathbb{R}$ monotonically increasing. Given a partition $P = \{x_0, \dots, x_n\}$ of $[a,b]$, define the sequence of differences $\Delta\alpha_i = \alpha(x_i) - \alpha(x_{i-1})$. The upper/lower Darboux–Stieltjes sums/integrals are defined analogously to the pure Riemann case:
$$U(f,P,\alpha) = \sum_{i=1}^n \sup_{[x_{i-1},x_i]} f(x)\,\Delta\alpha_i \qquad L(f,P,\alpha) = \sum_{i=1}^n \inf_{[x_{i-1},x_i]} f(x)\,\Delta\alpha_i$$
$$U(f,\alpha) = \inf_P U(f,P,\alpha) \qquad L(f,\alpha) = \sup_P L(f,P,\alpha)$$
If $U(f,\alpha) = L(f,\alpha)$, we say that $f$ is Riemann–Stieltjes integrable of class $\mathcal{R}(\alpha)$ and denote its value $\int_a^b f(x)\,d\alpha$.

The standard Riemann integral corresponds to $\alpha(x) = x$. It is the ability to choose other functions $\alpha$ that makes the Riemann–Stieltjes integral both powerful and applicable.

Standard Properties. Most results in sections 4.32 and 4.33 hold with suitable modifications, as does the discussion of improper integrals. For instance,
$$f \in \mathcal{R}(\alpha) \iff \forall\epsilon > 0,\ \exists P \text{ such that } U(f,P,\alpha) - L(f,P,\alpha) < \epsilon$$
The result regarding the piecewise continuity of $f$ is a notable exception: depending on $\alpha$, a piecewise continuous $f$ might not lie in $\mathcal{R}(\alpha)$.

Weighted integrals. If $\alpha$ is differentiable, we obtain a standard Riemann integral
$$\int_a^b f(x)\,d\alpha = \int_a^b f(x)\,\alpha'(x)\,dx$$
weighted so that $f(x)$ contributes more when $\alpha$ is increasing rapidly.

Probability. If $\alpha(a) = 0$ and $\alpha(b) = 1$, then $\alpha$ may be viewed as a probability distribution function and its derivative $\alpha'$ as the corresponding probability density function. For example:

1. The uniform distribution on $[a,b]$ has $\alpha = \frac{1}{b-a}(x-a)$, so that
$$\int_a^b f(x)\,d\alpha = \frac{1}{b-a}\int_a^b f(x)\,dx$$
Since $\alpha'$ is constant, the integrals weigh all values of $x$ uniformly.

2. The standard normal distribution has $\alpha(x) = \int_{-\infty}^x \frac{1}{\sqrt{2\pi}}e^{-t^2/2}\,dt$. The fact that $\alpha' = \frac{1}{\sqrt{2\pi}}e^{-x^2/2}$ is maximal when $x = 0$ reflects the fact that a normally distributed variable is clustered near its mean.

In all cases, $\int f(x)\,d\alpha = \mathbb{E}(f(X))$ computes an expectation (see, e.g., Example 4.34.3).

²⁴Stieltjes was Dutch; the pronunciation is roughly 'steelchez.'

Non-differentiable or discontinuous $\alpha$. This provides major flexibility! For example, if $Q = \{s_0, \dots, s_n\}$ partitions $[a,b]$, and $(c_k)_{k=1}^n$ is a positive sequence, then
$$\alpha(x) = \begin{cases} 0 & \text{if } x = a \\ \sum_{i=1}^k c_i & \text{if } x \in (s_{k-1}, s_k] \end{cases}$$
defines an increasing step function, and the Riemann–Stieltjes integral a weighted sum
$$\int_a^b f(x)\,d\alpha = \sum_{i=1}^n c_i f(s_i)$$
Taking an infinite increasing sequence $(s_n) \subseteq [a,b]$ results in an infinite series, which helps explain why so many results for series and integrals look similar!

This also touches on probability. For example, let $p \in [0,1]$, $n \in \mathbb{N}$, and $s_k = k$ on the interval $[0,n]$. If $c_k = \binom{n}{k}p^k(1-p)^{n-k}$, then
$$\int f(x)\,d\alpha = \sum_{k=0}^n \binom{n}{k}p^k(1-p)^{n-k} f(k) = \mathbb{E}(f(X))$$
is the expectation of $f(X)$ when $X \sim B(n,p)$ is binomially distributed.

Lebesgue Integration: Integrals and Convergence

Lebesgue's extension essentially uses rectangles whose heights tend to zero: cutting up the area under a curve using horizontal instead of vertical strips. One of its major purposes is to permit a more general interchange of limits and integration in many cases of pointwise (non-uniform) convergence. To see the problem, consider the sequence of piecewise continuous functions
$$f_n : [0,1] \to \mathbb{R} : x \mapsto \begin{cases} 1 & \text{if } x = \frac{p}{q} \in \mathbb{Q} \text{ with } q \le n \\ 0 & \text{otherwise} \end{cases}$$
Each $f_n$ is Riemann integrable with $\int_0^1 f_n(x)\,dx = 0$. However, the pointwise limit
$$f(x) = \begin{cases} 1 & \text{if } x \in \mathbb{Q} \\ 0 & \text{if } x \notin \mathbb{Q} \end{cases}$$
is not Riemann integrable (compare Example 4.8.2).
In the Lebesgue theory, the limit $f$ turns out to be integrable with integral 0, so that
$$\lim_{n\to\infty}\int_0^1 f_n(x)\,dx = \int_0^1 \lim_{n\to\infty} f_n(x)\,dx$$
Recall (Theorem 2.19) that the interchange of limits and integrals would be automatic if the convergence $f_n \to f$ were uniform: of course the convergence isn't uniform here. Like measure theory (recall Theorem 4.22), Lebesgue integration is a central topic in graduate analysis.
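To close the loop on the Stieltjes discussion above, here is a brief sketch (my addition) of the step-function case: with the binomial weights $c_k = \binom{n}{k}p^k(1-p)^{n-k}$, the Riemann–Stieltjes integral of $f$ collapses to the finite weighted sum $\mathbb{E}(f(X))$ described in the notes. The helper name stieltjes_expectation is made up.

```python
# Hedged sketch: Riemann–Stieltjes integral against a binomial step function α,
# which collapses to the finite weighted sum E(f(X)) for X ~ B(n, p).
from math import comb

def stieltjes_expectation(f, n, p):
    """Σ_k C(n,k) p^k (1−p)^(n−k) f(k) — the ∫ f dα of the step-function α above."""
    return sum(comb(n, k) * p**k * (1 - p)**(n - k) * f(k) for k in range(n + 1))

n, p = 10, 0.3
print(stieltjes_expectation(lambda k: k, n, p))             # ≈ 3.0 = n·p (the mean)
print(stieltjes_expectation(lambda k: (k - n * p)**2, n, p))  # ≈ 2.1 = n·p·(1−p)
```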
2456
https://www.scoliosisreductioncenter.com/blog/spinal-curvature
Spinal Curvature Disorders: Lordosis, Kyphosis, & Scoliosis

By Dr. Tony Nalda

There are many different types of spinal conditions a person can develop. Some involve an over- or under-pronounced natural curvature, while others involve an unnatural sideways spinal curve that also rotates. Keep reading to understand the difference between lordosis, kyphosis, and scoliosis.

The spine's natural curves are important for facilitating its strength, flexibility, and ability to evenly distribute stress. While there is a normal range of natural curvatures, when curves fall beyond these normal ranges, conditions such as lordosis, kyphosis, and scoliosis can be the cause. Individuals with mild scoliosis may require only regular checkups to monitor the condition, while others need more intensive interventions such as bracing or surgery.

Before we get into the specifics of different spinal conditions that involve abnormal spinal curvatures, let's first ensure the role of the spine's healthy curves is understood.

What are Spinal Curvature Disorders?

Spinal curvature disorders are a group of conditions that affect the natural curvature of the spine, leading to abnormal spinal curves and a range of symptoms. The three main types are lordosis, kyphosis, and scoliosis. Lordosis is characterized by an excessive inward curvature of the spine, most often in the lumbar region. Kyphosis involves an exaggerated outward curvature, typically in the thoracic spine, resulting in a rounded-back appearance. Scoliosis, on the other hand, is a sideways curvature of the spine that also involves spinal rotation. These conditions can arise from various factors, including genetics, disease, injury, and birth defects. Understanding the nature of these disorders is crucial for recognizing symptoms early and seeking appropriate treatment to prevent further complications.

Types of Spinal Curvatures of a Healthy Spine

Thanks to its natural, healthy curvatures, the spine has a soft 'S' shape when viewed from the side; when viewed from the front or back, it appears straight. The spine is naturally curved because the curves make it stronger, more flexible, and better able to distribute the mechanical stress incurred during movement.

The spine has three main sections: cervical (neck), thoracic (middle and upper back), and lumbar (lower back), each with a characteristic curvature type. The terms 'lordosis' and 'kyphosis' refer to the spine's natural curves; when these curves fall beyond a normal range, the resulting conditions are also called 'lordosis' and 'kyphosis'.

Normal lordotic curves are found in the neck (cervical spine) and the lower back (lumbar spine). Lordotic curves bend forwards, towards the body's center. Normal kyphotic curves are found in the chest (thoracic spine). Kyphotic curves bend backwards, away from the body's center.

The spine's healthy and natural lordosis and kyphosis protect its health and biomechanics by keeping it aligned. When one or more of these curves is lost, it can affect the spine, and other areas of the body, in multiple ways. In a healthy spine, the vertebrae are stacked on top of one another in healthy alignment, separated by intervertebral discs that give the spine structure and flexibility and act as its shock absorbers.
When the spine loses one or more of its natural curvatures, it is no longer optimally aligned, and this is when problems occur.

Abnormal Curvatures of the Spine

Now that we have explored why the spine is naturally curved in the first place, and the curvature types that characterize the different spinal sections, let's explore some abnormal curvatures of the spine that can develop: lordosis, kyphosis, and scoliosis. A severe spinal curvature can significantly impact mobility and lead to potential health complications. Conditions such as dyspnea, ataxia, and chronic back pain can arise from severe spinal curvature, making evaluation and treatment based on the severity of the curvature crucial.

Abnormal Lordosis

Lordosis, as a spinal condition, is defined as an exaggerated inward curvature of the spine. While it most commonly affects the lumbar spine, the cervical spine can develop it as well. A normal lordotic range is considered to be between 40 and 60 degrees, and when a person's lordotic curve falls beyond this normal range, problems can occur. When a person develops an excessive lumbar lordosis, the changing postural effect can give a swayback appearance where the buttocks are more prominent, as general postural elements tend to become more exaggerated. Lordosis can affect people of all ages, and when it affects the lower back, it can cause varying levels of back pain and mobility issues. Some causes of abnormal lordosis include conditions such as spondylolisthesis, osteoporosis, and obesity.

Spondylolisthesis

Spondylolisthesis is a spinal condition that develops when a vertebra slips out of place, sliding onto the one below and exposing it to adverse pressure and friction. When this happens, the health of the intervertebral discs is impacted, as the discs rest between adjacent vertebrae and are also exposed to uneven pressure and wear. As the vertebrae shift and the discs are affected, the spine's ability to maintain its healthy curvatures is compromised, hence the potential development of an abnormal lordotic curve.

Osteoporosis

Osteoporosis is a disease that affects bone health and develops as the body loses bone mass. This is most common in women as they age and experience hormone and bone-density changes related to menopause. When this happens, the bones of the spine become weak and more prone to injury. A person with osteoporosis is at risk of developing a number of spinal conditions as the overall health and strength of the spinal vertebrae are impacted.

Obesity

When obesity is the cause of lumbar lordosis, it is because carrying extra weight stresses the lumbar spine and is related to lumbar spinal disease. Body mass index (BMI) is known to affect lumbar curvatures, so obesity can expose the spine to adverse tension and added weight to support and distribute, with the potential to lead to the development of an abnormal lordotic curve.

Symptoms of Lordosis

While every case is different, there are some symptoms commonly associated with lordosis:

- A swayback appearance
- Buttocks being more pronounced
- A noticeable gap between the back and the floor when lying flat
- Back pain and discomfort
- Mobility issues

While lordosis involves an abnormal forward spinal curvature, kyphosis involves an abnormal backward spinal curvature.

Abnormal Kyphosis

Kyphosis, as a spinal condition, is defined as an exaggerated outward curvature of the spine and can cause a forward-rounded posture.
Kyphosis most commonly affects the thoracic or thoracolumbar sections of the spine, but it can also affect the cervical spine. A normal range of kyphosis is considered to be between 20 and 45 degrees, but when a structural abnormality causes a kyphotic curve to develop beyond this normal range, the curvature becomes abnormal and problematic. People with excessive kyphosis can have a posture that is rounded forward, with excessively rounded shoulders, causing an overall 'roundback' appearance. There are many different types of kyphosis; the three most common are postural, Scheuermann's, and congenital.

Postural Kyphosis

Postural kyphosis is the most common type and often becomes apparent during adolescence. It presents as poor posture and noticeable slouching, and this form is the easiest to treat because it is not caused by a structural abnormality in the spine. Postural kyphosis tends to cause a smooth and round curve that can be corrected when position is changed or when the patient makes an active effort to stand up straighter. Postural kyphosis is also more common in girls, is rarely painful, and, as it is postural and not structural, the condition is not progressive.

Scheuermann's Kyphosis

Named after the radiologist who first described it, Scheuermann's kyphosis is a structural condition, meaning it is not as easily treated as postural kyphosis, because it is caused by a structural abnormality within the spine. With this form, an X-ray taken from the side will reveal that multiple consecutive vertebrae are triangular in shape, rather than the normal rectangular shape of healthy vertebrae. Triangular vertebrae cannot maintain the spine's natural curvatures and alignment: they wedge together towards the spine's front, which decreases the disc space and results in the characteristic forward curvature of the upper back, giving the shoulders that rounded-forward appearance.

Scheuermann's curves tend to be sharp, angular, stiff, and rigid. The condition most commonly affects the thoracic spine, but can also develop along the lumbar spine. This form is more common in boys than girls, and its progression stops once skeletal maturity has been reached. Scheuermann's kyphosis can be painful, especially for adults, and if pain is an issue, it is most commonly felt around the curvature's most-tilted vertebrae (at the apex of the curve). Pain can also be felt in the lower back if the kyphosis of the upper spine causes a counteractive curvature in the lumbar spine.

Congenital Kyphosis

With congenital kyphosis, patients are born with the condition, as it develops in utero due to a malformation in the spinal column: either the spinal bones don't form as they should, or multiple vertebrae end up fused together instead of forming distinct spinal bones. This form of kyphosis tends to progress with growth, and it is not uncommon for these patients to have other medical issues related to systems or body parts not forming properly.
Symptoms of Kyphosis

While the signs and symptoms of kyphosis will vary depending on condition type and severity, some common signs and symptoms include:

- Rounded shoulders
- Pitched-forward posture
- A visible arch on the back
- Fatigue
- Spinal stiffness/rigidity
- Tight hamstrings
- Muscle pain

If a condition is left untreated, or isn't treated appropriately, kyphotic curves that continue to progress, increasing in severity, can produce these additional symptoms:

- Weakness, numbness, and/or tingling in the legs
- Loss of feeling and sensation
- Breathing impairment

So now that we have touched on the roles of the spine's natural curvatures and alignment, and what abnormal lordosis and kyphosis involve, let's talk about how to respond to these conditions in terms of treatment.

Causes and Risk Factors

Spinal curvature disorders can develop due to a combination of genetic and environmental factors. Some common causes and risk factors include:

- Genetics: Many spinal curvature disorders, such as idiopathic scoliosis, tend to run in families, indicating a genetic predisposition.
- Disease: Conditions like cerebral palsy and muscular dystrophy can increase the risk of developing spinal curvature disorders. These diseases affect muscle control and strength, which can lead to abnormal spinal curves.
- Injury: Traumatic injuries, such as those sustained in car accidents, can damage the spine, leading to conditions like degenerative scoliosis.
- Birth defects: Congenital scoliosis is a type of scoliosis present at birth, resulting from malformations in the spinal column during fetal development.
- Age: While spinal curvature disorders can occur at any age, certain types, such as degenerative scoliosis, are more common in older adults due to wear and tear on the spine over time.
- Neuromuscular conditions: Conditions like spina bifida can significantly increase the risk of abnormal spinal curvatures (neuromuscular scoliosis) due to their impact on the nervous and muscular systems.

Symptoms and Diagnosis

The symptoms of spinal curvature disorders can vary widely depending on the type and severity of the condition. Common symptoms include:

- Back pain
- Difficulty standing up straight
- Difficulty breathing
- Numbness or tingling in the legs
- Weakness in the legs

Diagnosing a spinal curvature disorder typically involves a comprehensive approach. A healthcare provider will start with a physical examination and a review of the patient's medical history. Imaging tests, such as X-rays, magnetic resonance imaging (MRI), or computed tomography (CT) scans, are crucial for visualizing the spine's structure and identifying abnormal curvatures. Additionally, a scoliosis screening may be performed to assess the patient's posture, alignment, and spinal curves.

Treatment for Lordosis and Kyphosis

When it comes to progressive spinal conditions that will worsen over time, especially if left untreated, proactive treatment with the potential to reduce the abnormal curvatures and restore as much of the spine's natural and healthy curves as possible is key. Here at the Scoliosis Reduction Center®, I've treated a number of spinal conditions, and when it comes to structural conditions like lordosis and kyphosis, treatment has to, first and foremost, impact the condition on a structural level. The goal of treatment is to control an abnormal curvature's progression to prevent disruptions to posture and other symptoms such as pain (more common in adult patients).
There are several important patient and condition characteristics that factor into how I approach treatment, including a patient's age and overall health, how much growth they have yet to go through, the severity of the curvature, and the condition type (particularly relevant with kyphosis). I combine multiple treatment disciplines in order to craft the most effective and customized treatment plans possible: chiropractic care, in-office therapy, custom-prescribed home exercises, and specialized corrective bracing. Through a chiropractic-centered functional approach, I work closely with my lordosis and kyphosis patients to achieve a curvature reduction on a structural level by manipulating the most-tilted vertebrae of the curvatures back into a healthier alignment. We also work to increase core strength so the spine's surrounding muscles are better able to support and stabilize the spine.

In addition to lordosis and kyphosis, scoliosis is another highly prevalent spinal condition that involves a loss of the spine's healthy curves through the development of an abnormal spinal curvature.

Scoliosis

According to the National Scoliosis Foundation, there are currently close to seven million people living with scoliosis in the United States alone, and these are only the known, diagnosed cases; many more people are living with the condition undiagnosed. Amongst school-aged children, scoliosis is the most common spinal condition, and it accounts for 20 percent of all spinal conditions in the United States. Scoliosis is defined as an abnormal sideways curvature of the spine that rotates and has a minimum Cobb angle measurement of 10 degrees. Scoliosis is a progressive structural spinal condition.

Cobb Angle Measurement

A patient's Cobb angle measurement tells me how far out of alignment their spine is and places the condition on its severity scale of mild, moderate, severe, or very severe (a simple sketch of this scale follows below):

- Mild scoliosis: Cobb angle of between 10 and 25 degrees
- Moderate scoliosis: Cobb angle of between 25 and 40 degrees
- Severe scoliosis: Cobb angle of 40+ degrees
- Very severe scoliosis: Cobb angle of 80+ degrees

While some patients may only require monitoring, others may need to consider scoliosis surgery as a viable treatment to correct or manage the condition. So while lordosis and kyphosis involve abnormal spinal curvatures that bend forwards or backwards, scoliosis involves an abnormal spinal curvature that bends to the side. The most common form of scoliosis is idiopathic, meaning it has no single known cause; idiopathic scoliosis accounts for 80 percent of known diagnosed cases, and its most common form is adolescent idiopathic scoliosis, diagnosed between the ages of 10 and 18. The remaining 20 percent are associated with known causes: neuromuscular, congenital, degenerative, and traumatic.

When an abnormal scoliotic curve develops, it can range greatly in severity and produce a wide range of symptoms based on a number of patient and condition characteristics such as patient age, severity level, condition type, and curvature location.
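Returning to the Cobb angle scale above, here is a minimal sketch of those thresholds in Python. The function name and the treatment of boundary values are my own assumptions, not part of the original article:

```python
# Severity bands as quoted in the text: mild 10-25, moderate 25-40,
# severe 40+, very severe 80+ degrees; below 10 degrees does not meet
# the diagnostic minimum for scoliosis. Boundary handling is assumed.
def cobb_severity(cobb_angle_deg: float) -> str:
    if cobb_angle_deg < 10:
        return "below the 10-degree diagnostic minimum (not scoliosis)"
    if cobb_angle_deg < 25:
        return "mild"
    if cobb_angle_deg < 40:
        return "moderate"
    if cobb_angle_deg < 80:
        return "severe"
    return "very severe"

for angle in (8, 15, 30, 55, 85):
    print(f"{angle} degrees -> {cobb_severity(angle)}")
```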
While every case is different, common signs and symptoms of scoliosis include:

- Uneven shoulders/shoulder blades
- Uneven hips
- A rib arch
- Arms and legs that appear to hang at different lengths
- A visibly curved spine
- Changes to gait and balance
- Clothing that doesn't hang evenly
- Back, neck, head, or leg pain (more common in adults)

As scoliosis is progressive, it is particularly beneficial to detect and diagnose the condition early on, so proactive treatment can be started early in the condition's progressive line. One of the most common questions I'm asked is how painful scoliosis is, and the answer largely depends on whether or not the patient has reached skeletal maturity. In adult patients, the condition can cause varying levels of pain, ranging from intermittent and mild to chronic and debilitating. In older patients who have reached skeletal maturity, meaning they have stopped growing, the spine has settled due to gravity and maturity, so it is vulnerable to the compressive force of the scoliotic curve, which can be felt not only by the spine but also in its surrounding vessels, muscles, and nerves. In children and adolescents, scoliosis is not known to be painful: because they have not yet reached skeletal maturity, their spines are experiencing a constant lengthening motion, which counteracts the compressive force of the curvature, the source of most scoliosis-related pain.

So after a scoliosis diagnosis is given, what's the best way to control its progression and restore the spine's natural curves and alignment?

Treatment for Scoliosis

As a scoliosis chiropractor, I know scoliosis, and I know the spine. Here at the Center, patients have access to multiple scoliosis-specific treatment disciplines for the most specific and customized treatment plans. Important patient and condition characteristics that help shape a treatment plan include patient age and overall health, condition severity, condition type, and curvature location. With these characteristics in mind, I design treatment plans based on how best to impact the condition structurally by reducing the abnormal curvature and increasing core strength so the spine is better supported and stabilized. This is achieved through a chiropractic-centered functional approach that prioritizes the overall health and function of the spine. By combining scoliosis-specific chiropractic care, in-office therapy, custom-prescribed home exercises, and specialized corrective bracing, I can fully customize each patient's treatment plan to address the specifics of their condition.

Conclusion

The spine is an important part of human anatomy. It allows us to stand upright, practice good posture, and engage in flexible movement. In addition, as the spine works in tandem with the brain to form the body's central nervous system (CNS), spinal conditions have the potential to cause numerous effects felt throughout the body. Just as there are different types of healthy curvatures a spine relies on for its strength, flexibility, and ability to distribute stress, there are different unhealthy curves and spinal conditions an unhealthy spine can develop. Different types of spinal curvature disorders include lordosis, kyphosis, and scoliosis. Lordosis involves an abnormal inward (forward) curvature that most commonly affects the lumbar spine and gives the posture a swayback appearance. Kyphosis involves an abnormal outward (backward) curvature that most commonly affects the thoracic spine and gives the posture a roundback appearance.
Scoliosis involves an abnormal sideways curvature that can develop anywhere along the spine and gives the posture an overall asymmetrical appearance. When the spine develops an abnormal scoliotic curve, there is a loss of the spine's natural curves and alignment. As the spine's ability to function optimally depends on its healthy curves and alignment, the development of an unhealthy curvature negatively impacts the spine's overall health and biomechanics. Here at the Scoliosis Reduction Center®, I work closely with every patient to design a fully customized treatment plan with the potential to reduce the abnormal spinal curvature and realign the spine, while preserving the spine's strength and function.

Dr. Tony Nalda, Doctor of Chiropractic. Severe migraines as a young teen introduced Dr. Nalda to chiropractic care. After experiencing life-changing results, he set his sights on helping others who face debilitating illness through more natural approaches. After receiving an undergraduate degree in psychology and his Doctorate of Chiropractic from Life University, Dr. Nalda settled in Celebration, Florida, and proceeded to build one of Central Florida's most successful chiropractic clinics. His experience with patients suffering from scoliosis, and the confusion and frustration they faced, led him to seek a specialty in scoliosis care. In 2006 he completed his Intensive Care Certification from CLEAR Institute, a leading scoliosis educational and certification center.
2457
https://www.chemistrysteps.com/sn1-sn2-e1-or-e2-mechanism-practice-problems/
Chemistry Steps — SN1 SN2 E1 E2 Practice Problems

In these practice problems, you will need to determine the major organic product and the mechanism of each reaction. This covers the competition between SN1 and SN2 nucleophilic substitution and E1/E2 elimination reactions. You can check this post (SN1 SN2 E1 E2 – How to Choose the Mechanism) before working on the problems. To correctly answer these questions, you need to review the main principles of substitution and elimination reactions, as well as the regio- and stereochemistry of the above-mentioned reactions, depending on whether the given reaction goes via a unimolecular or bimolecular mechanism, which dictates, for example, the possibility of rearrangement reactions. You can also go over the stereoselectivity and stereospecificity of the E2 and E1 reactions. The reactivity of the substrate (alkyl halide), the effect of the solvent, and the temperature should also be taken into consideration.

SN1, SN2, E1, E2 Flow Chart: 📥 Download this study guide and refer to page 3 as a reference to determine the correct mechanism. You can download the complete set of study guides at 🗺️ All the Reactions of Alkyl Halides Connected in a Comprehensive Reaction Map.

Practice 1. Predict the mechanism as SN1, SN2, E1, or E2 and draw the major organic product formed in each reaction. Consider any regioselectivity and stereoselectivity where applicable.

a) Primary alkyl halide and strong, non-bulky base/nucleophile. 1-Iodopropane is a primary substrate, and ethoxide is a strong base/nucleophile. Do not worry about the ethanol, because it is the solvent; even though, in general, it can work as a reactant, the ethoxide is much stronger, both as a base and as a nucleophile, and the reactivity of the ethanol is suppressed. The presence of a strong base or nucleophile rules out the possibility of the unimolecular E1 and SN1 reactions: if it is strong, it is going to attack and react instead of waiting for the loss of the leaving group to happen first, which is the case in E1 and SN1 mechanisms. To choose between E2 and SN2, remember that the more substituted the substrate, the more it prefers the E2 mechanism (3° > 2° > 1°) and, vice versa, less substituted substrates prefer the SN2 route (1° > 2° > 3°). Finally, remember that these two are in competition, and in most cases, it is not going to be 100% predominance but rather a mixture with a major and a minor component. In this case, the SN2 product is the major one. So, as a take-home message from this exercise, remember that for 1° alkyl halides (or any other substrate) SN2 > E2, and no E1 or SN1, especially when reacted with a strong nucleophile or base. Download the summary flow chart and follow the steps; this should help to determine SN1, SN2, E1, or E2 for most questions.
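For readers who like the flow chart condensed into pseudocode, here is a rough Python sketch of the textbook heuristic discussed above. It is my own simplification, not the site's downloadable chart: it ignores solvent, leaving-group quality, and rearrangements, all of which the text says must also be weighed.

```python
# A simplified decision heuristic for substitution vs. elimination.
# substrate: "primary" | "secondary" | "tertiary"
# reagent:   "strong nucleophile" | "strong bulky base" | "weak nucleophile/base"
def likely_mechanism(substrate: str, reagent: str) -> str:
    if reagent == "weak nucleophile/base":
        # Nothing strong attacks first, so the leaving group must depart on its
        # own: unimolecular SN1/E1 compete, and heat tips the balance to E1.
        return "SN1/E1 (needs a stabilized carbocation; heat favors E1)"
    if reagent == "strong bulky base":
        # Bulky bases (e.g. tert-butoxide) eliminate rather than substitute.
        return "E2"
    # Strong, non-bulky nucleophile/base: SN2 vs E2, set by substitution level
    # (SN2 prefers 1 degree > 2 > 3; E2 prefers 3 > 2 > 1).
    return {"primary": "SN2 (major), E2 (minor)",
            "secondary": "SN2/E2 mixture",
            "tertiary": "E2"}[substrate]

print(likely_mechanism("primary", "strong nucleophile"))  # problem a): SN2 major
```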
b) through z) Answer: available to registered users.

2. This exercise is designed to practice choosing between SN1 and SN2 based on the stereochemistry of the product. Determine whether each of the following reactions proceeds via an SN1 or SN2 mechanism, and then draw the curved-arrow mechanism for each reaction. Answer and solution: available to registered users.
3. Deciding between SN1 vs SN2 based on the substrate, nucleophile, leaving group, and solvent. Determine, based on the identity of the substrate, nucleophile, and solvent, the mechanism of nucleophilic substitution of each reaction and draw the products, including stereochemistry. Answer and solution: available to registered users.

4. Still not sure how it all works out? There is also a multiple-choice quiz on Substitution and Elimination Reactions you can take!

From the comments on "SN1 SN2 E1 E2 Practice Problems":

Q: For practice question b), why would the product be 2-chloropentane and not 2-pentanol?
A: Indeed, it is an alcohol, Justine. Good catch.

Q: For w), how do you address the R and S if there is a plane of symmetry?
A: You still need to show the stereochemistry.

Q: In question o), shouldn't it undergo a carbocation rearrangement first? The carbocation would move one spot to the left, which is a secondary position with resonance stabilization from the benzene ring.
But in your solution, there is no carbocation rearrangement. Am I mistaken? Following my mechanism, it would then be attacked by the MeOH, forming a racemic mixture of two stereoisomers.
A: Great question! Although not every carbocation will undergo rearrangement, it is a good habit to keep rearrangements in mind. I assume this is how you see it: the driving force for the hydride shift here would have been the resulting tertiary carbocation. However, there are a few important things to be careful about. First, there is no hydrogen on the carbon in the ring connected to the methyl group. And even if there were one, it wouldn't shift in this case, since the resulting tertiary carbocation would come at the price of breaking the aromaticity. Aromatic compounds have additional stability, and breaking the double bonds in the ring is energetically unfavorable. Moving that hydrogen would make a phenyl carbocation which, unlike benzylic carbocations, is very unstable, since there is no resonance stabilization of the positive charge: the orbital alignments do not allow forming a conjugated system. Check the following articles for additional information about aromatic compounds: Benzene – Aromatic Structure and Stability; Aromaticity and Huckel's Rule; Reactions at the Benzylic Position.

Q: Thanks for your comment. Now I see it. The reason I made the mistake is my own carelessness: I thought there were two carbons in the "tail", but really there is only one. When the carbocation forms, it is primary BUT stabilized by the benzene ring.
A: That is correct – without any stabilization, primary carbocations are usually too unstable to be formed and undergo any reaction. One exception that comes to mind is the diazonium salts, which can lose nitrogen and form primary carbocations (see The Reaction of Amines with Nitrous Acid).

Q: Where does the second ring come in from in y)?
A: Remember, we carry out reactions on a molar (whether milli-, micro-, or nanomolar) scale, and therefore, at any time, there are millions of reactant molecules present in the mixture. Even though only one molecule of a reactant may be shown, you should always keep in mind the possibility of an intermolecular reaction. I have added the mechanism too.

Q: For question f), can someone help me understand the major mechanism? My understanding is that water is a stronger nucleophile than it is a base, thus it will undergo SN1. However, I'm not sure my interpretation is correct.
A: Water is a weak base and a poor nucleophile, and therefore the mechanism should be a unimolecular SN1 or E1. In most cases, the determining factor between these is heat, which favors elimination. I talked about the effect of heat in this article; check also the article on deciding between SN1, SN2, E1, and E2, and don't forget to download the substitution and elimination study guide.

Q: Hi! For question i), could it possibly be SN1/E1 as well, since there could be a rearrangement of carbocations? Or is that ruled out because it's a strong nucleophile?
A: Just to clarify, is the question about the reaction of the alcohol with HBr or of (bromomethyl)cyclopentane with the acetylide (triple-bond anion)? Question l) or i)?
Q: Oh, i) please.
A: Yes, then you are correct – it is because acetylide ions are good nucleophiles and strong bases. You can find the list of good/poor nucleophiles and strong/weak bases in the substitution and elimination study guide.
The study guides are available to download here.
Q: Okay, thank you!
2458
https://pubmed.ncbi.nlm.nih.gov/30913199/
ACOG Committee Opinion No. 774: Opportunistic Salpingectomy as a Strategy for Epithelial Ovarian Cancer Prevention - PubMed

Obstet Gynecol. 2019 Apr;133(4):e279-e284. doi: 10.1097/AOG.0000000000003164. No authors listed. PMID: 30913199
Abstract

Opportunistic salpingectomy may offer obstetrician-gynecologists and other health care providers the opportunity to decrease the risk of ovarian cancer in their patients who are already undergoing pelvic surgery for benign disease. By performing salpingectomy when patients undergo an operation during which the fallopian tubes could be removed in addition to the primary surgical procedure (eg, hysterectomy), the risk of ovarian cancer is reduced. Although opportunistic salpingectomy offers the opportunity to significantly decrease the risk of ovarian cancer, it does not eliminate the risk of ovarian cancer entirely. Counseling women who are undergoing routine pelvic surgery about the risks and benefits of salpingectomy should include an informed consent discussion about the role of oophorectomy and bilateral salpingo-oophorectomy. Bilateral salpingo-oophorectomy that causes surgical menopause reduces the risk of ovarian cancer but may increase the risk of cardiovascular disease, cancer other than ovarian cancer, osteoporosis, cognitive impairment, and all-cause mortality. Salpingectomy at the time of hysterectomy or as a means of tubal sterilization appears to be safe and does not increase the risk of complications such as blood transfusions, readmissions, postoperative complications, infections, or fever compared with hysterectomy alone or tubal ligation. The risks and benefits of salpingectomy should be discussed with patients who desire permanent sterilization. Additionally, ovarian function does not appear to be affected by salpingectomy at the time of hysterectomy based on surrogate serum markers or response to in vitro fertilization. Plans to perform an opportunistic salpingectomy should not alter the intended route of hysterectomy. Obstetrician-gynecologists should continue to observe and practice minimally invasive techniques. This Committee Opinion has been updated to include new information on the benefit of salpingectomy for cancer reduction, the feasibility of salpingectomy during vaginal hysterectomy, and long-term follow-up of women after salpingectomy.
2459
https://artofproblemsolving.com/wiki/index.php/Angle?srsltid=AfmBOopBjktl6XIxNic_G408uWKTo0Hds3fUILEEXHu8kUBpfXfmeIwR
Angle - AoPS Wiki

Overview

An angle is the union of two rays with a common endpoint. The common endpoint of the rays is called the vertex of the angle, and the rays themselves are called the sides of the angle. There are many notations for angles. The most common form is $\angle ABC$, read "angle ABC", where $A$ and $C$ are points on the sides of the angle and $B$ is the vertex of the angle. Note that the same angle can be denoted many different ways by choosing different points along the sides of the angle. If there is no ambiguity, this notation can be shortened to simply $\angle B$.

Angle Measure

The measure of $\angle ABC$ is denoted $m\angle ABC$, read "measure of angle ABC". There are different units for measuring angles; the three most common are degrees, radians, and gradians. If two angles are congruent, they have the same angle measure. A ray drawn from the vertex of the angle, such that the angle formed by this ray and one of the sides is congruent to the angle formed by this ray and the other side, is called the angle bisector.

Classifying Angles

A straight angle is an angle formed by a pair of opposite rays, or a line; a straight angle has a measure of $180^\circ$. A right angle is an angle that is supplementary to itself; a right angle has a measure of $90^\circ$. An acute angle has a measure greater than zero but less than that of a right angle, i.e. $\angle ABC$ is acute if and only if $0^\circ < m\angle ABC < 90^\circ$. An obtuse angle has a measure greater than that of a right angle but less than that of a straight angle, i.e. $\angle ABC$ is obtuse if and only if $90^\circ < m\angle ABC < 180^\circ$. A reflex angle is an angle with measure greater than a straight angle but less than $360^\circ$, or $2\pi$ radians.

Angle Chasing

Angle chasing is a technique where solvers apply angle properties to determine the measures of unknown angles. It is commonly used in geometry problems and can proceed in a variety of ways, including drawing circles or new lines, or transforming the figure. Many angle chasing problems require you to think intuitively.

Properties Used in Angle Chasing

Two angles that are complementary add to $90^\circ$. Two angles that are supplementary add to $180^\circ$; supplementary angles can be found when two lines intersect each other. Vertical angles are congruent to each other. Parallel lines can create equal or supplementary angles. An angle bisector splits an angle into two congruent angles: for instance, if $\angle ABC$ is bisected by ray $BD$, then $\angle ABD \cong \angle DBC$.
If side lengths are known, the angle bisector theorem can be used to determine that a line bisects an angle. If angles are found in a polygon, one can use angle formulas to find the unknown angle; for example, the interior angles of a convex $n$-gon sum to $(n-2)\cdot 180^\circ$. If two polygons are congruent, corresponding angles are congruent. If angles are found in a circle, one can use angle properties and arc measures. Finding cyclic quadrilaterals can be a useful strategy in angle chasing, since angles opposite each other in a cyclic quadrilateral are supplementary (a small worked instance follows below).
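As a tiny worked instance of the cyclic-quadrilateral property (an example of my own, not from the wiki page):

```latex
% In a cyclic quadrilateral $ABCD$, opposite angles are supplementary:
\angle A + \angle C = 180^\circ, \qquad \angle B + \angle D = 180^\circ.
% So if two angles are known, angle chasing gives the other two:
\angle A = 80^\circ,\ \angle B = 95^\circ
\;\Longrightarrow\;
\angle C = 180^\circ - 80^\circ = 100^\circ,\quad
\angle D = 180^\circ - 95^\circ = 85^\circ.
```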
2460
https://rstudio-pubs-static.s3.amazonaws.com/1208174_6f3898a7c04d47b3aa71842548d03395.html
Geometric Distribution
DragonflyStats — Last updated: August 01, 2024

Geometric Distribution

The Geometric distribution is related to the Binomial distribution in that both are based on independent trials in which the probability of success is constant and equal to $p$. However, a Geometric random variable is the number of trials until the first failure, whereas a Binomial random variable is the number of successes in $n$ trials.

Type I Geometric Distribution

Geometric distributions model (some) discrete random variables. Typically, a Geometric random variable is the number of trials required to obtain the first failure, for example, the number of tosses of a coin until the first 'tail' is obtained, or a process where components from a production line are tested, in turn, until the first defective item is found.

A discrete random variable $X$ is said to follow a Geometric distribution with parameter $p$, written $X \sim \mathrm{Ge}(p)$, if it has probability distribution
\[
P(X = x) = p^{x-1}(1-p), \qquad x = 1, 2, 3, \ldots
\]
where $p$ is the success probability, $0 < p < 1$.

The trials must meet the following requirements:
- the total number of trials is potentially infinite;
- there are just two outcomes of each trial: success and failure;
- the outcomes of all the trials are statistically independent;
- all the trials have the same probability of success.

The Geometric distribution has expected value $E(X) = 1/(1-p)$ and variance $V(X) = p/(1-p)^2$.

The geometric distribution is used for Bernoulli trials, where the outcomes are classified as either successes or failures. (A quick simulation check of the convention above follows below.)
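As a sanity check on this "trials to the first failure" convention, a short Monte Carlo simulation (my own sketch, not part of the original page; function names assumed) should reproduce $E(X) = 1/(1-p)$ and $P(X = x) = p^{x-1}(1-p)$:

```python
import random

# X = number of trials up to and including the first failure, where each
# trial succeeds with probability p. Then P(X = x) = p^(x-1) * (1 - p)
# and E(X) = 1/(1 - p), matching the Type I definition above.
random.seed(1)
p = 0.7

def trials_until_first_failure() -> int:
    x = 1
    while random.random() < p:  # a success (probability p) keeps the run going
        x += 1
    return x

samples = [trials_until_first_failure() for _ in range(200_000)]
print("empirical mean :", sum(samples) / len(samples), " vs 1/(1-p) =", 1 / (1 - p))
print("empirical P(X=2):", samples.count(2) / len(samples), " vs p(1-p) =", p * (1 - p))
```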
In probability theory, the geometric distribution is either of two discrete probability distributions:

the probability distribution of the number of trials needed to get the first success, supported on the set $\{1, 2, 3, \dots\}$;
the probability distribution of the number of failures before the first success, supported on the set $\{0, 1, 2, 3, \dots\}$.

Which of these one calls "the" geometric distribution is a matter of convention and convenience, and a solution for one can quickly be obtained from the other; the two should not be confused with each other. Often the name shifted geometric distribution is adopted for the former (the distribution of the number $X$ of trials); to avoid ambiguity, it is considered wise to indicate which is intended by mentioning the support explicitly.

The first form gives the probability that the first occurrence of success requires $k$ independent trials, each with success probability $p$: the probability that the $k$-th trial (out of $k$ trials) is the first success is

$$P(X = k) = (1-p)^{k-1} p, \qquad k = 1, 2, 3, \dots$$

This form is used for modelling the number of trials until the first success. By contrast, the following form is used for modelling the number of failures until the first success:

$$P(Y = k) = (1-p)^{k} p, \qquad k = 0, 1, 2, 3, \dots$$

In either case, the sequence of probabilities is a geometric sequence. For example, suppose an ordinary die is thrown repeatedly until the first time a "1" appears. The probability distribution of the number of throws is supported on the infinite set $\{1, 2, 3, \dots\}$ and is a geometric distribution with $p = 1/6$.

The expected value of a geometrically distributed random variable $X$ is $1/p$ and the variance is $(1-p)/p^2$:

$$E(X) = \frac{1}{p}, \qquad \mathrm{var}(X) = \frac{1-p}{p^2}.$$

Similarly, the expected value of the geometrically distributed random variable $Y$ is $(1-p)/p$, and its variance is $(1-p)/p^2$:

$$E(Y) = \frac{1-p}{p}, \qquad \mathrm{var}(Y) = \frac{1-p}{p^2}.$$

Now consider an experiment with only two outcomes. Independent repeated trials of such an experiment are called Bernoulli trials, named after the Swiss mathematician Jacob Bernoulli (1654-1705). The term means that the outcome of any trial does not depend on the previous outcomes (as in tossing a coin). We will call one of the outcomes the "success" and the other the "failure".

Suppose that I am at a party and I start asking girls to dance. Let $X$ be the number of girls that I need to ask in order to find a partner. If the first girl accepts, then $X = 1$. If the first girl declines but the next girl accepts, then $X = 2$, and so on. When $X = n$, it means that I failed on the first $n-1$ tries and succeeded on the $n$-th try. My probability of failing on the first try is $(1-p)$; my probability of failing on the first two tries is $(1-p)(1-p)$; my probability of failing on the first $n-1$ tries is $(1-p)^{n-1}$. Then my probability of succeeding on the $n$-th try is $p$. Thus we have

$$P(X = n) = (1-p)^{n-1} p.$$

This is known as the geometric distribution. When you have a sequence of numbers in which the $(n+1)$-th number is a multiple of the $n$-th number, it is called a geometric sequence. In this case, $P(X = n+1)$ is a multiple of $P(X = n)$. (What is that multiple? It is $1-p$.)
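A quick simulation of the die example above (my addition; NumPy's geometric sampler follows the trials-until-first-success convention used here):

```python
import numpy as np

rng = np.random.default_rng(seed=1)
p = 1 / 6  # probability of rolling a "1" on a fair die

# Number of throws until the first "1", sampled 100,000 times.
throws = rng.geometric(p, size=100_000)

print(f"empirical mean ~ {throws.mean():.3f} (theory: 1/p = {1 / p:.3f})")
print(f"empirical var  ~ {throws.var():.3f} (theory: (1-p)/p^2 = {(1 - p) / p**2:.3f})")
```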
What is the probability that it will take more than $n$ tries to succeed? We know that if I ask an infinite number of girls to dance, eventually one of them will accept; so the probability that it will take more than $n$ tries is the same as the probability that I fail $n$ times. That is,

$$P(X > n) = (1-p)^n.$$

If $X$ is geometric with parameter $p$, what is $E(X)$? We are faced with an infinite sum. Multiplying $x$ by $P(X = x)$ for $x = 1, 2, 3, \dots$ and summing gives

$$S = p + 2p(1-p) + 3p(1-p)^2 + \dots + np(1-p)^{n-1} + \dots$$

Multiply both sides by $(1-p)$ and you have

$$(1-p)S = p(1-p) + 2p(1-p)^2 + 3p(1-p)^3 + \dots$$

Subtracting the second series from the first gives

$$S - (1-p)S = p\left[1 + (1-p) + (1-p)^2 + \dots\right] = p \cdot \frac{1}{p} = 1,$$

and since $S - (1-p)S = pS$, this means $pS = 1$, i.e. $S = 1/p$. Therefore the mean of the geometric distribution is $1/p$. If we are trying to estimate how many girls I will have to ask to dance until I find a partner, and $p$, the probability of one girl accepting, is 0.2, then on average I will have to ask five girls. You will not have to know it, but for the record, the variance of the geometric distribution is $(1-p)/p^2$.

To summarise this trials-until-first-success convention:

$$P(X = n) = (1-p)^{n-1} p, \qquad P(X > n) = (1-p)^n, \qquad E(X) = \frac{1}{p}, \qquad \mathrm{Var}(X) = \frac{1-p}{p^2}.$$
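A numerical check of this derivation (my addition), using the $p = 0.2$ dance example:

```python
# Partial sums of S = sum over n of n * (1-p)^(n-1) * p converge to 1/p.
p = 0.2
S = sum(n * (1 - p) ** (n - 1) * p for n in range(1, 1000))
print(f"S ~ {S:.6f}; 1/p = {1 / p}")  # both ~ 5.0: on average, ask five girls
```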
The following conditions characterize the hypergeometric distribution: the result of each draw (the elements of the population being sampled) can be classified into one of two mutually exclusive categories (e.g. pass/fail, female/male, employed/unemployed); and the probability of a success changes on each draw, as each draw decreases the population (sampling without replacement from a finite population).

A random variable $X$ follows the hypergeometric distribution if its probability mass function (pmf) is given by

$$P(X = k) = \frac{\binom{K}{k}\binom{N-K}{n-k}}{\binom{N}{n}},$$

where $N$ is the population size, $K$ is the number of success states in the population, $n$ is the number of draws, $k$ is the number of observed successes, and $\binom{a}{b}$ is a binomial coefficient.

When sampling is done without replacement from a finite population of items, the Bernoulli process does not apply, because there is a systematic change in the probability of success as items are removed from the population. When sampling without replacement is used in a situation that would otherwise qualify as a Bernoulli process, the hypergeometric distribution is the appropriate discrete probability distribution.

Equivalently, think of two groups of sizes $n_1$ and $n_2$: select $k_1$ items from group 1 and $k_2$ items from group 2. The probability of doing so, relative to all selections from the combined group, is

$$\frac{\binom{n_1}{k_1}\binom{n_2}{k_2}}{\binom{n_T}{k_T}}, \qquad k_T = k_1 + k_2, \quad n_T = n_1 + n_2.$$

Suppose we have to select a group of 8 people from 18. Of these 18 people, 10 are males and 8 are females. What is the probability that the committee contains 5 females?
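A direct computation of this committee probability (my addition; standard library only):

```python
from math import comb

# Choosing 8 people from 10 males + 8 females; P(exactly 5 females).
N, K, n, k = 18, 8, 8, 5   # population, females, draws, observed females
p = comb(K, k) * comb(N - K, n - k) / comb(N, n)
print(f"P(X = 5) = {p:.4f}")  # ~ 0.1536
```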
In the last class, we looked at how to compute the mean, variance and standard deviation. As these are key outcomes of this part of the course, we shall briefly go over this material again.

The mean (i.e. average) value is denoted with a bar over the set name: $\bar{x}$ (pronounced "x bar") is the sample mean.

Example: for a sample data set comprised of five values, the sample mean is 44.

Variance: how do we calculate the variance? We can use a scientific calculator, or we can calculate it by hand using the formula

$$s^2 = \frac{\sum_i (x_i - \bar{x})^2}{n-1}.$$

We are calculating the difference between each observation $x_i$ and the mean $\bar{x}$. Remark: the mean is used in the calculation. Some of the differences will be positive and some will be negative, so we square the differences to make them all positive.

An easier formula to use if you are calculating the sample standard deviation by hand is

$$s^2 = \frac{\sum_i x_i^2 - n\bar{x}^2}{n-1}.$$

The population variance (which is rarely known) is denoted by the Greek letter $\sigma^2$ (sigma squared). Important: the standard deviation $\sigma$ is the square root of the variance $\sigma^2$.

0.1.3 Videos

Geometric Distribution with Chi Square Test for Goodness of Fit
2461
https://www.varsitytutors.com/practice/subjects/ap-statistics/help/how-to-use-tables-of-normal-distribution
AP Statistics - How to use tables of normal distribution | Practice Hub

AP Statistics › How to use tables of normal distribution

Questions 1-8

1. Find .
Explanation: First, we use our normal distribution table to find a p-value for a z-score greater than 0.50, and then for a z-score greater than 1.23. We then subtract the probability of z being greater than 0.50 from the probability of z being less than 1.23 to give us our answer.

2. Find the area under the standard normal curve between Z = 1.5 and Z = 2.4.
Choices: .0586, 0.9000, 0.3220, 0.0768, 0.0822

3. Alex took a test in physics and scored a 35. The class average was 27 and the standard deviation was 5. Noah took a chemistry test and scored an 82. The class average was 70 and the standard deviation was 8. Show that Alex had the better performance by calculating Alex's standard normal percentile and Noah's standard normal percentile.
Choices: Alex = .945, Noah = .933; Alex = .923, Noah = .911; Alex = .901, Noah = .926; Alex = .855, Noah = .844; Alex = .778, Noah = .723
Explanation: Alex — from the z-table. Noah — from the z-table.

4. When and . Find .
Choices: .72, .76, .68, .61, .81

5. Arrivals to a bed and breakfast follow a Poisson process. The expected number of arrivals each week is 4. What is the probability that there are exactly 3 arrivals over the course of one week?

6. Gabbie earned a score of 940 on a national achievement test. The mean test score was 850 with a sample standard deviation of 100. What proportion of students had a higher score than Gabbie? (Assume that test scores are normally distributed.)
Explanation: When we get this type of problem, first we need to calculate a z-score that we can use in our table. To do that, we use the z-score formula z = (x − μ)/σ. Plugging in, z = (940 − 850)/100 = 0.9. We then use our table to look up a p-value for z > 0.9. Since we want the probability of students who earned a higher score than Gabbie, we subtract P(z < 0.9) from 1 to get our answer.

7. The masses of tomatoes are normally distributed with a given mean and standard deviation (in grams). What mass of tomatoes would be at the stated percentile of the masses of all the tomatoes?
Explanation: The z-score for a normal distribution at the stated percentile can be found on the normal distribution table. The mass of tomatoes at that percentile lies the corresponding number of standard deviations below the mean.

8. Find .
Explanation: First, we use the table to look up a p-value for z > −1.22. Next, we use the table to look up a p-value for z > 1.59. Finally, we subtract the probability of z being greater than −1.22 from the probability of z being less than 1.59 to arrive at our answer.
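A short sketch (my addition, standard library only) reproducing three of these answers without a printed table, using the standard normal CDF Φ(z) = (1 + erf(z/√2))/2:

```python
from math import erf, exp, factorial, sqrt

def phi(z: float) -> float:
    """Standard normal CDF."""
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

# Question 2: area under the standard normal curve between Z = 1.5 and Z = 2.4.
print(f"P(1.5 < Z < 2.4) = {phi(2.4) - phi(1.5):.4f}")        # ~ 0.0586

# Question 5: Poisson with lambda = 4 arrivals/week, exactly 3 arrivals.
lam, k = 4, 3
print(f"P(X = 3) = {exp(-lam) * lam**k / factorial(k):.4f}")  # ~ 0.1954

# Question 6: z = (940 - 850) / 100 = 0.9; proportion scoring higher.
z = (940 - 850) / 100
print(f"P(Z > {z}) = {1 - phi(z):.4f}")                       # ~ 0.1841
```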
2462
https://stackoverflow.com/questions/7753976/regular-expression-for-numbers-without-leading-zeros
regex - Regular expression for numbers without leading zeros - Stack Overflow
Regular expression for numbers without leading zeros

Asked Oct 13, 2011 · Viewed 89k times · Score 31

I need a regular expression to match any number from 0 to 99. Leading zeros may not be included; this means that, for example, 05 is not allowed. I know how to match 1-99, but do not get the 0 included. My regular expression for 1-99 is ^[1-9][0-9]?$

— AGuyCalledGerald

Comment (Scott W): This was almost right, but you just had your optional argument '?' on the second digit instead of the leading digit.

12 Answers

Answer (Guish, score 49): There are plenty of ways to do it, but here is an alternative that allows any number length without leading zeros.

0-99: ^(0|[1-9][0-9]{0,1})$
0-999 (just increase {0,2}): ^(0|[1-9][0-9]{0,2})$
1-99: ^([1-9][0-9]{0,1})$
1-100: ^([1-9][0-9]{0,1}|100)$
Any number at all: ^(0|[1-9][0-9]*)$
12 to 999: ^(1[2-9]|[2-9][0-9]{1}|[1-9][0-9]{2})$

Comment (Gautam Sareriya): If I want to add a range between 12 and 999, then what would be the regex?
Comment (bobble bubble): Any defined range can be generated here (check "match whole string").

Answer (Manny D, score 21): Updated: ^([0-9]|[1-9][0-9])$ matches 0-99 and doesn't match values with leading zeros. Depending on your application you may need to escape the parentheses and the or symbol.

Comment (aioobe): He doesn't want leading zeros on, for instance, 05 (which this expression actually matches).
Comment (Manny D): I understood "may not be included" as leading zeros were optional.
Comment (redbmk): That doesn't work for single-digit numbers (e.g. 9). Also, you could shrink it down a bit: ^(0|[1-9]\d?)$.

Answer (xanatos, score 4): ^(0|[1-9][0-9]?)$ — test here (various samples included). You have to add a 0|, but be aware that the "or" (|) in regexes has the lowest precedence. ^0|[1-9][0-9]?$ in reality means (^0)|([1-9][0-9]?$) (we will ignore that there are now two capturing groups).
So it means "the string begins with 0" OR "the string ends with [1-9][0-9]?". An alternative to using brackets is to repeat the ^ and $, as in ^0$|^[1-9][0-9]?$.

Comment (AGuyCalledGerald): I tried everything but not this. Why do I need the outer brackets? I don't understand.
Comment (xanatos): Because the | has the lowest precedence. Without the brackets, ^0|[1-9][0-9]?$ == (?:^0)|(?:[1-9][0-9]?$) (where (?: ) is the non-capturing group). But what you want is something equivalent to (?:^0$)|(?:^[1-9][0-9]?$).

Answer (aioobe, score 2): "[...] but do not get the 0 included." Just add 0| in front of the expression: ^(0|[1-9][0-9]?)$

Comment (Manny D): What you tested doesn't match your expression here.

Answer (Gaurav, score 2): How about this?

console.log(/^0(?! \d+$)/.test('0123')); // true
console.log(/^0(?! \d+$)/.test('10123')); // false
console.log(/^0(?! \d+$)/.test('00123')); // true
console.log(/^0(?! \d+$)/.test('088770123')); // true

Answer (Scott W, score 2): A simpler answer without using the or operator makes the leading digit optional: ^[1-9]?[0-9]$ — matches 0-99, disallowing leading zeros (01-09).

Comment (Mark): This only works for two-digit numbers, unlike most examples here.

Answer (Oldskool, score 1): This should do the trick: ^(?:0|[1-9][0-9]?)$

Answer (Ro., score 1): ^([1-9])?(\d)$ — explanation, token by token:
^         // beginning of the string
([1-9])?  // first group (optional), in range 1-9 (not zero here)
(\d)      // second group matches any digit, including 0
$         // end of the string

Same thing without grouping: ^[1-9]?\d$

Answer (Shoaib Quraishi, score 0): Try this, it will help you: ^([0-9]|[1-9][0-9])$

Answer (Raghava Narasimman, score 0): ([1-9][0-9]+) — this is simple and efficient and will help with any range of whole numbers; ([1-9][0-9\.]+) will help with decimal numbers.

Comment (Andy A.): Welcome to SO. Your second pattern allows more than one decimal point.
Comment (greybeard): Your first pattern allows digit sequences with more than two digits.

Answer (Waild, score 0): You can use the following regex: [1-9][0-9]\d|0

Comment (AdrianHHH): That matches either the single digit 0 or a three-digit number. The question asked for 0 to 99.

Answer (KLaudorowicz, score -1): ^(0{1,})?([1-9][0-9]{0,1})$ — it includes 1-99, 01-099, 00...1-

Comment (Vickel): While this code may answer the question, providing additional context regarding how and/or why it solves the problem would improve the answer's long-term value.
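A quick sanity check of two of the candidate patterns against the asker's 0-99 requirement (my addition, not part of the thread):

```python
import re

patterns = {
    "accepted style": r"^(0|[1-9][0-9]?)$",
    "optional leading digit": r"^[1-9]?[0-9]$",
}
samples = ["0", "5", "42", "99", "05", "100", ""]

for name, pat in patterns.items():
    matches = [s for s in samples if re.fullmatch(pat, s)]
    print(f"{name}: {matches}")  # both print ['0', '5', '42', '99']
```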
2463
https://ijfmts.com/html-article/12997
Analysis of vitreous humour in determining postmortem interval (time since death) - A prospective study

S Angayarkanni

IP International Journal of Forensic Medicine and Toxicological Sciences, 5(4):121-129, 2020. DOI: 10.18231/j.ijfmts.2020.029

Abstract

Aim: The aim of the study was to find the post-mortem interval (time since death) in 100 cases brought for medico-legal autopsy at the Institute of Forensic Medicine and Toxicology, Rajiv Gandhi Government General Hospital, Madras Medical College, Chennai, by analysing changes in biochemical markers in the vitreous humour of both eyes.

Materials and Methods: Two samples of vitreous humour were taken from both eyes, the first sample immediately after the body was received in the mortuary and the second sample at the time of starting the post-mortem examination. The samples were sent to the Department of Biochemistry for analysis of sodium and potassium concentrations.

Observations: Statistical analysis gave p < 0.001, a statistically significant result indicating a positive correlation of time since death with the rise in potassium, whereas sodium falls as time increases, which is not statistically significant. The study also presents the regression for the vitreous samples with time since death as the dependent variable, depicting a linear correlation between time since death and the increase in potassium. A t-test comparing sodium and potassium values between the two eyes showed p > 0.05, i.e. no significant difference.

Conclusion: There was a linear correlation between time since death and potassium levels in vitreous samples from both sides at different time intervals, with no significant difference between the values from the two sides. Sodium values were not statistically significant and showed a negative correlation with time since death.

Introduction

'Forensic Medicine' or 'Legal Medicine' is the application of the knowledge and principles of medicine for the purposes of the law, both civil and criminal. Its main objective is to aid in the administration of justice. Forensic experts summoned to the scene of death aim to provide an estimate of the time elapsed since death. In forensic medicine, and thereby in criminal law, estimating the time of death is an important problem. In a court of law, a medical practitioner has to give evidence as a medical jurist in order to prove the innocence or guilt of an accused; he carries a great responsibility, since his testimony may be the only reliable evidence before the court. The post-mortem interval is the interval between death and the time of examination of a body. This is important in knowing when the crime was committed.
It helps the police to start their inquiries with the available information and to deal with cases more efficiently. It also helps in including and excluding suspects and culprits and in confirming the statements of a suspect. Estimation of time since death is also useful in civil cases such as inheritance of property, insurance claims, etc. The longer the post-mortem interval, the wider the limits of probability. Although changes such as cooling of the body, eye changes, post-mortem staining, rigor mortis, stomach contents, bladder and bowel contents, decomposition changes and circumstantial evidence can sometimes yield a reasonably accurate result in the early post-mortem hours, they are not reliable because of environmental factors. Hence forensic pathologists and biochemists have concentrated on the biochemical changes that occur in body fluids such as blood and in compartmental fluids such as vitreous humour, cerebrospinal fluid, pleural fluid, pericardial fluid and synovial fluid. Because of the lack of oxygen in the circulation, the alteration of enzymatic reactions and the cessation of metabolite production, extensive biochemical changes occur in all body fluids. Such changes provide chemical markers which help to determine the time of death accurately. These metabolites are products of metabolism and intermediates of smaller molecular size which have functions in the normal growth and development of cells. They may be exogenous or endogenous, and metabolite profiles are altered following death. The changes that occur after death have been identified and attributed to the agonal period, changes in the early post-mortem period, and diffusion of substances between various body fluids. Biochemical markers are divided into two classes: metabolites and proteins. Sodium, potassium, chloride, calcium, magnesium, phosphate, lactic acid, hypoxanthine, urea, creatinine, uric acid, ammonia, catecholamines and ethanol are metabolites; the other class includes total proteins and the enzymes aspartate aminotransferase and lactate dehydrogenase. Thanatochemistry describes the changes that occur immediately after death in the chemical composition of body fluids such as blood, vitreous humour, synovial fluid and cerebrospinal fluid. After death the electrolytes and chemicals redistribute, cellular integrity is lost, and energy-dependent transmembrane transport is absent; hence post-mortem blood samples are difficult to assess. Because of the breakdown of active membrane transport and the rapid breakdown of metabolism after death, only stable analytes can be estimated in blood samples. Among the various body fluids (blood, serum, cerebrospinal fluid, aqueous humour, synovial fluid and vitreous humour), estimation of the potassium concentration in vitreous humour is the most widely used method. Vitreous fluid is an acellular, transparent, inert, colourless, hydrophilic viscous fluid present between the lens and the retina within the eyeball; it is an important supporting structure that serves the optical function. Its weight is approximately 4 grams and its volume approximately 4 cc. It is composed of 99% water together with soluble proteins, amino acids, low-molecular-weight constituents, glucose, type II collagen, hyaluronic acid, inorganic salts and ascorbic acid. Biochemical changes in the vitreous have been under analysis for more than 40 years. The underlying principle in analysing vitreous humour is that it is a closed compartment separated from the rest of the body.
Ambient temperature may, however, have an influence. The composition of the vitreous is closely related to that of serum, aqueous humour and cerebrospinal fluid. It is relatively stable, easily accessible, less susceptible to rapid chemical changes, and well protected from decomposition and contamination; hence it is more suitable than other fluids for analysis in estimating time since death. The eye, and thereby the vitreous humour, is well protected even in cases of severe head injury and in burns; this has frequently been remarked on as a "miraculous escape". Many studies have been conducted on the analysis of sodium and potassium in vitreous humour. Concentrations of sodium, potassium and urate analysed in the vitreous humour of both eyes at the same time after death have shown variation between the eyes. Many studies have shown that the vitreous potassium level increases with increasing post-mortem interval, with a linear relationship between the rise in potassium concentration and the post-mortem interval. As the post-mortem interval increases, potassium increases, whereas sodium shows a negative correlation and chloride and calcium show no correlation. The sodium concentration in the normal body is 136-145 mEq/L. The potassium concentration in the normal body is 50-55 mEq/kg of body weight, of which about 160 mmol/L (i.e. 98% of the potassium) exists within cells, whereas the extracellular concentration is 3.5-5.5 mmol/L. The normal sodium value in vitreous humour is 118-124 mmol/L and the normal potassium value is 2.6-4.2 mmol/L. Potassium is actively transported from the ciliary body into the anterior vitreous and the posterior chamber; potassium is also contributed by the lens.

Aims and Objectives

To study the use of vitreous humour in finding the post-mortem interval
To compare the distribution of substances between the two eyes
To determine the post-mortem interval even in decomposed and charred bodies

Materials and Methods

This study was carried out on 100 cases brought for medico-legal autopsy at Rajiv Gandhi Government General Hospital, Madras Medical College, Chennai-3. Cases which had been admitted to hospital and whose exact times of death were known were selected for the study. Details of these cases were obtained from hospital records, police records, relatives and friends. Cases whose exact time of death was not known, and those with a previous history of eye or orbital injury or surgery or posterior-segment disease, were excluded from the study. Vitreous humour was collected from both eyes. The first sample was aspirated from both eyes simultaneously, as early as possible after the entry of the body into the mortuary; the second sample was taken from both eyes simultaneously at the time of post-mortem examination. Samples which were turbid or mixed with blood were discarded. The details of the cases were recorded. Samples of vitreous humour were collected from the posterior chamber by aspirating gradually and slowly through a puncture 5-6 mm away from the limbus, using a sterile 20-gauge needle and taking care to avoid tearing any loose tissue fragments surrounding the vitreous chamber.

Figure 1: Procedure of vitreous aspiration

The samples were sent to the Institute of Biochemistry, Rajiv Gandhi Government General Hospital, Madras Medical College, Chennai-3, and analysed on a Medica EasyLyte sodium/potassium analyser, a fully automated, microprocessor-controlled electrolyte system.
It uses ion-selective electrode (ISE) technology for the measurement of electrolytes, with stored, easily accessible quality-control data, and measures sodium, potassium, chloride, lithium, calcium and pH in serum, whole blood, plasma, urine, vitreous and synovial fluid. The calibrants are packed in a convenient solution pack:

Standard A solution, 800 ml (140.0 mmol/L sodium, 4.0 mmol/L potassium, buffer, preservative, wetting agent)
Standard B solution, 180 ml (35.0 mmol/L sodium, 16.0 mmol/L potassium, buffer, preservative, wetting agent)
Wash solution, 80 ml (0.1 mol/L ammonium bifluoride)

The ion-selective electrode method uses a thin membrane across which only the intended ion can be transported. The transport of ions from a high concentration to a low concentration, through selective binding to sites within the membrane, creates a potential difference which is measured in volts.

Figure 2: Medica EasyLyte electrolyte analyzer
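For context (my addition, not stated in the paper): the voltage measured by an ion-selective electrode is related to the activity of the measured ion by the standard Nernst equation, which is what lets the analyser convert a potential into a concentration:

```latex
% E   = measured electrode potential (V)
% E^0 = standard potential of the electrode (V)
% R   = gas constant, T = absolute temperature, F = Faraday constant
% z   = charge of the measured ion, a_i = activity of that ion
E = E^{0} + \frac{RT}{zF}\,\ln a_i
```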
Observations

This study was carried out on 100 cases with known times of death, brought to the mortuary of Rajiv Gandhi Government General Hospital, Madras Medical College, Chennai-3 for post-mortem examination. Samples of vitreous humour were taken at intervals: the first sample from both eyes as early as possible after the entry of the dead body into the mortuary, and the second sample from both eyes at the time of post-mortem examination. Only clear vitreous humour was analysed; turbid or blood-stained samples were excluded from the study. The samples were analysed for sodium and potassium by the ion-selective electrode method.

Table 1: Percentage of cases by time since death (samples I and II, vitreous humour of both eyes)
Time since death | Sample I: cases (%) | Sample II: cases (%)
Within 12 hours  | 53 (53%)  | 31 (31%)
12.1-24 hours    | 40 (40%)  | 57 (57%)
Above 24 hours   | 7 (7%)    | 12 (12%)
Total            | 100 (100%)| 100 (100%)

Table 2: Descriptives for Samples I and II by time since death (all p < 0.001). Columns: N | mean | SD | SE | 95% CI for mean | min | max.

Sample I, right eye, Na:
<12 hrs   | 53  | 139.6623 | 9.51061 | 1.30638 | 137.0408-142.2837 | 126.50 | 157.90
12-24 hrs | 40  | 145.2725 | 7.76732 | 1.22812 | 142.7884-147.7566 | 125.10 | 158.70
>24 hrs   | 7   | 137.3571 | 7.32481 | 2.76852 | 130.5828-144.1315 | 125.70 | 147.90
Total     | 100 | 141.7450 | 9.12358 | 0.91236 | 139.9347-143.5553 | 125.10 | 158.70

Sample I, right eye, K:
<12 hrs   | 53  | 6.9509  | 2.85628 | 0.39234 | 6.1637-7.7382  | 2.90 | 18.60
12-24 hrs | 40  | 8.9075  | 1.76394 | 0.27890 | 8.3434-9.4716  | 5.80 | 14.60
>24 hrs   | 7   | 11.1143 | 3.74941 | 1.41714 | 7.6467-14.5819 | 7.20 | 16.30
Total     | 100 | 8.0250  | 2.82311 | 0.28231 | 7.4648-8.5852  | 2.90 | 18.60

Sample I, left eye, Na:
<12 hrs   | 31  | 142.0935 | 9.18633 | 1.64991 | 138.7240-145.4631 | 126.30 | 158.00
12-24 hrs | 57  | 141.8316 | 9.26536 | 1.22723 | 139.3731-144.2900 | 124.80 | 159.00
>24 hrs   | 12  | 141.2750 | 8.20179 | 2.36765 | 136.0638-146.4862 | 126.00 | 151.80
Total     | 100 | 141.8460 | 9.03689 | 0.90369 | 140.0529-143.6391 | 124.80 | 159.00

Sample I, left eye, K:
<12 hrs   | 31  | 7.9323 | 3.21894 | 0.57814 | 6.7515-9.1130  | 3.10 | 18.70
12-24 hrs | 57  | 7.4649 | 2.30732 | 0.30561 | 6.8527-8.0771  | 3.20 | 14.90
>24 hrs   | 12  | 9.9333 | 2.79686 | 0.80738 | 8.1563-11.7104 | 6.90 | 14.90
Total     | 100 | 7.9060 | 2.76240 | 0.27624 | 7.3579-8.4541  | 3.10 | 18.70

Sample II, right eye, Na:
<12 hrs   | 31  | 140.6968 | 7.21722 | 1.29625 | 138.0495-143.3441 | 130.60 | 156.70
12-24 hrs | 57  | 146.3298 | 7.04151 | 0.93267 | 144.4615-148.1982 | 128.50 | 165.80
>24 hrs   | 12  | 139.6417 | 7.50630 | 2.16688 | 134.8724-144.4109 | 126.80 | 149.40
Total     | 100 | 143.7810 | 7.67383 | 0.76738 | 142.2583-145.3037 | 126.80 | 165.80

Sample II, right eye, K:
<12 hrs   | 31  | 7.7839  | 1.88186 | 0.33799 | 7.0936-8.4741   | 4.90 | 14.70
12-24 hrs | 57  | 9.9561  | 2.17420 | 0.28798 | 9.3792-10.5330  | 6.80 | 19.10
>24 hrs   | 12  | 12.8917 | 3.49089 | 1.00773 | 10.6737-15.1097 | 7.90 | 18.10
Total     | 100 | 9.6350  | 2.74209 | 0.27421 | 9.0909-10.1791  | 4.90 | 19.10

Sample II, left eye, Na:
<12 hrs   | 31  | 140.7677 | 7.30823 | 1.31260 | 138.0871-143.4484 | 131.00 | 156.40
12-24 hrs | 57  | 146.4877 | 6.89241 | 0.91292 | 144.6589-148.3165 | 128.30 | 165.30
>24 hrs   | 12  | 140.2250 | 6.81604 | 1.96762 | 135.8943-144.5557 | 126.30 | 149.20
Total     | 100 | 143.9630 | 7.53521 | 0.75352 | 142.4679-145.4581 | 126.30 | 165.30

Sample II, left eye, K:
<12 hrs   | 31  | 7.6129  | 1.77759 | 0.31926 | 6.9609-8.2649   | 5.20 | 13.90
12-24 hrs | 57  | 9.8018  | 2.13897 | 0.28331 | 9.2342-10.3693  | 6.80 | 18.90
>24 hrs   | 12  | 12.5250 | 3.29162 | 0.95021 | 10.4336-14.6164 | 7.50 | 17.70
Total     | 100 | 9.4500  | 2.65020 | 0.26502 | 8.9241-9.9759   | 5.20 | 18.90

Table 3: Regression coefficients for vitreous fluid (dependent variable: time since death)
Model    | Predictor               | B    | SE   | Beta | t     | Sig.
Sample 1 | (Constant)              | .740 | .170 |      | 4.355 | .000
Sample 1 | Right K vitreous humour | .100 | .020 | .450 | 4.986 | .000
Sample 1 | (Constant)              | .781 | .175 |      | 4.463 | .000
Sample 1 | Left K vitreous humour  | .096 | .021 | .421 | 4.592 | .000
Sample 2 | (Constant)              | .548 | .189 |      | 2.899 | .005
Sample 2 | Right K vitreous humour | .131 | .019 | .574 | 6.940 | .000
Sample 2 | (Constant)              | .535 | .194 |      | 2.751 | .007
Sample 2 | Left K vitreous humour  | .135 | .020 | .567 | 6.816 | .000

Table 4: Correlation between time since death (TSD) and K and Na values
Variable   | Pearson correlation | p value | N
Right Na 1 | .004   | .967 | 100
Right K 1  | .580   | .000 | 100
Left Na 1  | .004   | .969 | 100
Left K 1   | .536** | .000 | 100
Right Na 2 | -.195  | .052 | 100
Right K 2  | .611   | .000 | 100
Left Na 2  | -.173  | .088 | 99
Left K 2   | .581   | .000 | 100
Table 5: Group statistics (right vs left eye). Columns: N | mean | SD | SE mean.
Sample I vitreous sodium, Right     | 100 | 141.7450 | 9.12358 | 0.91236
Sample I vitreous sodium, Left      | 100 | 141.8460 | 9.03689 | 0.90369
Sample I vitreous potassium, Right  | 100 | 8.0250   | 2.82311 | 0.28231
Sample I vitreous potassium, Left   | 100 | 7.9060   | 2.76240 | 0.27624
Sample II vitreous sodium, Right    | 100 | 143.7810 | 7.67383 | 0.76738
Sample II vitreous sodium, Left     | 100 | 143.9630 | 7.53521 | 0.75352
Sample II vitreous potassium, Right | 100 | 9.6350   | 2.74209 | 0.27421
Sample II vitreous potassium, Left  | 100 | 9.4500   | 2.65020 | 0.26502

Table 6: T-tests comparing samples 1 and 2 by timing. Columns: N | mean | SD | SE mean.

Timing 0-12 hours:
Left Na (p > 0.05):  Sample 1 | 53 | 139.7415 | 9.38445 | 1.28905 · Sample 2 | 31 | 140.7677 | 7.30823 | 1.31260
Left K (p > 0.05):   Sample 1 | 53 | 6.9245   | 2.85986 | 0.39283 · Sample 2 | 31 | 7.6129   | 1.77759 | 0.31926
Right Na (p > 0.05): Sample 1 | 53 | 139.6623 | 9.51061 | 1.30638 · Sample 2 | 31 | 140.6097 | 7.09983 | 1.27517
Right K (p > 0.05):  Sample 1 | 53 | 6.9415   | 2.85727 | 0.39248 · Sample 2 | 31 | 7.6903   | 1.85461 | 0.33310

Timing 12-24 hours:
Left Na (p > 0.05):  Sample 1 | 40 | 145.3875 | 7.70588 | 1.21841 · Sample 2 | 57 | 146.4877 | 6.89241 | 0.91292
Left K (p < 0.05):   Sample 1 | 40 | 8.7575   | 1.75921 | 0.27816 · Sample 2 | 57 | 9.8018   | 2.13897 | 0.28331
Right Na (p > 0.05): Sample 1 | 40 | 145.2725 | 7.76732 | 1.22812 · Sample 2 | 57 | 146.3298 | 7.04151 | 0.93267
Right K (p < 0.05):  Sample 1 | 40 | 8.9075   | 1.76394 | 0.27890 · Sample 2 | 57 | 9.9561   | 2.17420 | 0.28798

Timing more than 24 hours:
Left Na (p > 0.05):  Sample 1 | 7 | 137.5429 | 7.34231 | 2.77513 · Sample 2 | 12 | 140.2250 | 6.81604 | 1.96762
Left K (p > 0.05):   Sample 1 | 7 | 10.6429  | 3.40385 | 1.28653 · Sample 2 | 12 | 12.5250  | 3.29162 | 0.95021
Right Na (p > 0.05): Sample 1 | 7 | 137.3571 | 7.32481 | 2.76852 · Sample 2 | 12 | 139.6417 | 7.50630 | 2.16688
Right K (p > 0.05):  Sample 1 | 7 | 11.1143  | 3.74941 | 1.41714 · Sample 2 | 12 | 12.8917  | 3.49089 | 1.00773
Time since death | N | Mean | Std. Deviation | Std. Error | 95% CI Lower | 95% CI Upper | Min | Max

Left Na 1 vitreous humour (p<0.001)
0-12 hrs | 53 | 139.74 | 9.38 | 1.29 | 137.15 | 142.33 | 126.30 | 158.00
12-24 hrs | 40 | 145.39 | 7.71 | 1.22 | 142.92 | 147.85 | 124.80 | 159.00
Above 24 hrs | 7 | 137.54 | 7.34 | 2.78 | 130.75 | 144.33 | 126.00 | 148.10
Total | 100 | 141.85 | 9.04 | .90 | 140.05 | 143.64 | 124.80 | 159.00

Left K 1 vitreous humour (p<0.001)
0-12 hrs | 53 | 6.92 | 2.86 | .39 | 6.14 | 7.71 | 3.10 | 18.70
12-24 hrs | 40 | 8.76 | 1.76 | .28 | 8.19 | 9.32 | 6.30 | 14.90
Above 24 hrs | 7 | 10.64 | 3.40 | 1.29 | 7.49 | 13.79 | 6.90 | 14.90
Total | 100 | 7.92 | 2.75 | .27 | 7.37 | 8.46 | 3.10 | 18.70

Right Na 1 vitreous humour (p<0.001)
0-12 hrs | 53 | 139.66 | 9.51 | 1.31 | 137.04 | 142.28 | 126.50 | 157.90
12-24 hrs | 40 | 145.27 | 7.77 | 1.23 | 142.79 | 147.76 | 125.10 | 158.70
Above 24 hrs | 7 | 137.36 | 7.32 | 2.77 | 130.58 | 144.13 | 125.70 | 147.90
Total | 100 | 141.75 | 9.12 | .91 | 139.93 | 143.56 | 125.10 | 158.70

Right K 1 vitreous humour (p<0.001)
0-12 hrs | 53 | 6.94 | 2.86 | .39 | 6.15 | 7.73 | 2.90 | 18.60
12-24 hrs | 40 | 8.91 | 1.76 | .28 | 8.34 | 9.47 | 5.80 | 14.60
Above 24 hrs | 7 | 11.11 | 3.75 | 1.42 | 7.65 | 14.58 | 7.20 | 16.30
Total | 100 | 8.02 | 2.83 | .28 | 7.46 | 8.58 | 2.90 | 18.60

Left Na 2 vitreous humour (p<0.001)
0-12 hrs | 31 | 140.8 | 7.3 | 1.3 | 138.1 | 143.4 | 131.0 | 156.4
12-24 hrs | 56 | 146.6 | 6.9 | 0.9 | 144.7 | 148.4 | 128.3 | 165.3
Above 24 hrs | 12 | 140.2 | 6.8 | 2.0 | 135.9 | 144.6 | 126.3 | 149.2
Total | 99 | 144.0 | 7.6 | 0.8 | 142.5 | 145.5 | 126.3 | 165.3

Left K 2 vitreous humour (p<0.001)
0-12 hrs | 31 | 7.6 | 1.8 | 0.3 | 7.0 | 8.3 | 5.2 | 13.9
12-24 hrs | 57 | 9.8 | 2.1 | 0.3 | 9.2 | 10.4 | 6.8 | 18.9
Above 24 hrs | 12 | 12.5 | 3.3 | 1.0 | 10.4 | 14.6 | 7.5 | 17.7
Total | 100 | 9.5 | 2.7 | 0.3 | 8.9 | 10.0 | 5.2 | 18.9

Right Na 2 vitreous humour (p<0.001)
0-12 hrs | 31 | 140.6 | 7.1 | 1.3 | 138.0 | 143.2 | 130.6 | 156.7
12-24 hrs | 57 | 146.3 | 7.0 | 0.9 | 144.5 | 148.2 | 128.5 | 165.8
Above 24 hrs | 12 | 139.6 | 7.5 | 2.2 | 134.9 | 144.4 | 126.8 | 149.4
Total | 100 | 143.8 | 7.7 | 0.8 | 142.2 | 145.3 | 126.8 | 165.8

Right K 2 vitreous humour (p<0.001)
0-12 hrs | 31 | 7.7 | 1.9 | 0.3 | 7.0 | 8.4 | 4.9 | 14.7
12-24 hrs | 57 | 10.0 | 2.2 | 0.3 | 9.4 | 10.5 | 6.8 | 19.1
Above 24 hrs | 12 | 12.9 | 3.5 | 1.0 | 10.7 | 15.1 | 7.9 | 18.1
Total | 100 | 9.6 | 2.8 | 0.3 | 9.1 | 10.2 | 4.9 | 19.1

Table 7: One-way ANOVA comparing left and right Na and K values for three periods (0 to 12, 12 to 24 and above 24 hrs), Samples I & II

Figure 3: Correlation of sample I right vitreous potassium with time since death
Figure 4: Correlation of sample I left vitreous potassium with time since death
Figure 5: Correlation of sample II right vitreous potassium with time since death
Figure 6: Correlation of sample II left vitreous potassium with time since death

[Table 1] shows the number and percentage of cases from samples 1 and 2 falling within the time intervals. In sample 1, 53 cases (53%) had a time since death within 12 hours, 40 cases (40%) between 12.1 and 24 hours, and 7 cases (7%) more than 24 hours, out of 100 cases in total. In sample 2, 31 cases (31%) had a time since death within 12 hours, 57 cases (57%) between 12.1 and 24 hours, and 12 cases (12%) more than 24 hours, out of 100 cases in total.

[Table 2] shows the total number of cases (N), mean, standard deviation, standard error, 95% confidence interval for the mean, and the range of sodium and potassium values of samples I and II for the right and left eye. Each group has separate values for the three intervals (<12 hours, 12.1-24 hours and >24 hours).
The table shows p < 0.001, a statistically significant result indicating a positive correlation of potassium with time since death, whereas sodium tends to fall as time increases and its relationship with time since death is not statistically significant.

[Table 3] shows the regression for samples I and II of right and left eye vitreous fluid, with time since death as the dependent variable. Vitreous potassium and time since death were significantly correlated (r = 0.450, r = 0.421, r = 0.574 and r = 0.567, respectively).

[Table 4] shows Pearson coefficients of 0.580 for sample I right vitreous potassium, 0.536 for sample I left vitreous potassium, 0.611 for sample II right vitreous potassium and 0.581 for sample II left vitreous potassium, each significantly correlated with time since death, whereas sodium shows a negative correlation.

[Table 5] shows the comparison of sodium and potassium values of samples 1 and 2 for the right and left vitreous. Statistical analysis shows p > 0.05, so there is no significant difference between right and left values.

[Table 6] shows the t-tests comparing the sodium and potassium values of samples 1 and 2 of the right and left vitreous for the intervals 0-12 hours, 12-24 hours, and more than 24 hours. Sodium values did not differ significantly between the two samples (p > 0.05), whereas potassium values differed significantly (p < 0.05) in the 12-24 hour interval.

[Table 7] shows the one-way ANOVA comparing sodium and potassium values of both sides for the three periods in samples I and II respectively. It shows p < 0.001, a statistically significant result indicating a linear increase of potassium with time since death.

Conflicts of interest
All contributing authors declare no conflicts of interest.

Source of Funding
None.
2464
https://www.youtube.com/watch?v=v7ffr0_WRxc
Pulling a block up an incline with force at an angle: find normal force and acceleration.
Zak's Lab
Posted: 7 Sep 2024

Description: We find normal force and acceleration for a block pulled up a ramp by a rope tilted at an angle with respect to the ramp surface. To start things out, we make a complete force diagram for the block on an incline: the force of gravity mg points straight down, and we find the perpendicular and parallel components of weight as mgcos(theta) and mgsin(theta) respectively. Next, we find the components of the applied force pulling up at an angle. Finally, we label the unknown normal force in the force diagram. Remember that the normal force points perpendicular to the surface, and it's there to express the fact that the block is constrained to the surface of the ramp (in other words, it does what it has to in order to balance all forces in the perpendicular direction). To solve for the normal force, we use the fact that the perpendicular acceleration of the block is zero, so the forces must balance in that direction. We quickly solve for normal force. Note that there is an extremely common mistake here: students often assume that the normal force is always mgcos(theta), but normal force is not mgcos(theta) this time, because our applied force has a perpendicular component! So when pulling a block up an incline with force at an angle, we have to carefully consider all perpendicular forces when calculating the normal force. Finally, we analyze forces in the parallel direction and apply Newton's second law to calculate the parallel acceleration along the ramp, and we're done!

Transcript: in this problem we're studying a block on a smooth incline meaning that friction is going to be negligible here and that block has a mass of 5 kg the incline has an angle of 25° now we're also pulling on this block with a force of 60 newtons so we're pulling the block up the ramp and that force is inclined at 30° above the surface of the ramp and we have three basic parts in the problem first we want to make a complete force diagram for this block second we want to get the normal force on the block and third the acceleration so to get started on this force diagram there's the force of gravity pointing exactly straight down and I'm just going to write it symbolically for now we gave it a magnitude of mg now we need to decompose this force vector into perpendicular and parallel components so there's a perpendicular dashed line and we noted in there that the angle between the vertical and the perpendicular is exactly the same as the angle of incline and I'll post a link to the video where that was first derived so now we can view the components of the weight in these perpendicular and parallel directions now that perpendicular component is given by the adjacent side of the right triangle so we're going to use the cosine for that and then the parallel component we use the sine for that so again for now just staying symbolic we're going to call it mg cosine 25 for the magnitude of the perpendicular piece and mg sin 25 for the magnitude of the parallel piece now we normally don't leave the parallel piece down there below the ramp we're going to move that up and attach it to the 5 kg block so that takes care of everything with gravity and we'll move on to our 60 newton force where we're pulling with a rope on this thing so there's our 60 newton force vector and we want to
decompose this one into parallel and perpendicular components and doing the usual decomposition we get 60 cosine of 30° for that parallel piece and 60 sin 30° for the perpendicular piece now there's only one force left to put in the diagram and that's the normal force and for now we're just going to call that little n so now I want to start going through my diagram and getting numbers on everything so mg is 5 times 9.8 which comes out to 49 newtons and this mg cosine of 25 would be 49 cosine 25 and to three significant digits that comes out to 44.4 newtons our mg sin 25 that's 49 sin 25 that comes out to about 20.7 newtons moving over to our applied force I have 60 cosine 30 for the parallel component and that comes out to about 52.0 newtons and our perpendicular component is 60 sin 30 which gives us exactly 30 newtons and the normal force is unknown here now in part B we want to compute the normal force on the block and for that we use a perpendicular analysis so in the perpendicular direction I have zero acceleration because this block is constrained to the surface and that means all the forces that are pointing up and to the left perpendicular to this ramp must be balanced by all the forces pointing down and to the right perpendicular to the surface in other words n + 30 newtons must be equal to that 44.4 newton perpendicular component of gravity so this gets all the forces balanced in the perpendicular direction guaranteeing the acceleration is zero in the perpendicular direction so we can quickly solve for n here and we get 14.4 newtons for that now I want to point out an extremely common mistake with this normal force so the most common mistake that I see is that students just get in the habit of saying that the normal force must be equal to mg cosine of theta every time but the difference here is that we have a second force tampering with the perpendicular direction so to get the normal force just right so all these perpendicular forces balance you have to consider that perpendicular component of our applied force that's actually helping the normal force so the normal force doesn't have to be as big as it would be otherwise in part C we want to get the acceleration of the block and so this is happening in the parallel direction we're going to do a parallel analysis of the forces here and apply Newton's second law I can already see in my diagram that the force going up the ramp that's 52 newtons is bigger than the force going down the ramp that's that parallel component of gravity and that's only 20.7 newtons so I know my acceleration is going to be up the ramp and we're just going to call that the positive direction for the analysis so when we apply Newton's second law just a quick reminder here that's a vector equation and vector equations are true component by component so what I want to say here is that fnet in the parallel direction is equal to ma in the parallel direction so in that parallel direction our net force is going to be 52.0 newtons in the positive direction and then minus 20.7 newtons that's in the negative direction and this is going to be equal to the mass of our block 5 kg times the magnitude of our parallel acceleration in other words the acceleration along the surface of the ramp so we just do the subtraction divide by 5 kg and we get our parallel acceleration and when I run the numbers on this to three significant digits I get 6.26 m/s squared and we're done if you enjoyed this video or at least found it useful check out another one by clicking one of the links on the left or
click the Zak's Lab logo on the right to explore dozens of physics and math playlists as always you can leave your questions comments and requests in the comments section below and I'll get back to you within 24 hours thanks for watching Zak's Lab and best of luck on your math and physics journey
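The numbers worked out in the transcript can be collected into two short calculations (m = 5 kg, theta = 25°, F = 60 N applied at phi = 30° above the ramp surface, g = 9.8 m/s²). The symbol phi and the equation layout are editorial; all values are the video's:

% Perpendicular direction: the block stays on the ramp, so the forces balance
N + F \sin\phi = mg \cos\theta
N = 49 \cos 25^{\circ} - 60 \sin 30^{\circ} \approx 44.4 - 30.0 = 14.4 \text{ N}

% Parallel direction: Newton's second law along the ramp
F \cos\phi - mg \sin\theta = ma
a = (52.0 - 20.7)/5 \approx 6.26 \text{ m/s}^2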
2465
https://www.facebook.com/groups/951357615665893/posts/1774811393320507/
National Geographic Nature | Snake fangs are specialized tools for injecting venom or trapping their prey | Facebook

National Geographic Nature
Paulo Santos · December 8, 2024

Snake fangs are specialized tools for injecting venom or trapping their prey. Some species have retractable fangs that unfold when they strike. Furthermore, their jaws can open to almost 180 degrees thanks to flexible ligaments, allowing them to swallow prey much larger than their head. These adaptations are fundamental to their success as predators.

Comments:
Ernesto Paul Medina Paredes: My neighbor
Kathryn Carter: Dang!
Alfred Salcedo: Beautiful fangs, just the thing to clean your teeth with!
2466
https://www.youtube.com/watch?v=oaHF_6aa3-0
Straight Line : Minimum value of Sum of Distance of a Line from two fixed Points (Part 2) By SPA Sir
SPA Sir Concepts Of Math
Posted: 21 Dec 2019

Description: Concepts of Math. SPA Sir (ex-faculty of Bansal, Vibrant, Resonance & Brilliant Tutorials) has more than 13 years of experience in the IIT-JEE field. He is known for his quality content for IIT-JEE (Mains & Advanced). His focused and simplified teaching methodology helps students understand better.
Gmail: spagrawal2005@gmail.com
Contact no.: 8233985969 / 8209248907

Related videos:
Straight Line : Minimum Value of Sum of Distances from two fixed points (Part 1)
Straight Line : Minimum Value of Sum of Distances from two fixed Points (Part 2)
Straight Line : Maximum value of Difference of Distances from two fixed Points (Part 1)
Straight Line : Maximum value of Difference of Distances from two fixed points (Part 2)
Straight Line : Maximum value of Difference of distances from two fixed points (Part 3)

Concepts of Math is an online education channel for IIT JEE, Olympiad, NTSE and CBSE.
#Concepts_of_Math #Mathematics #SPA_Sir #IIT_JEE
2467
https://www.youtube.com/watch?v=eQpc2QRFv7Y
Mesh Analysis for Circuits Explained
Engineer4Free
Posted: 15 May 2021

Description: This tutorial introduces Mesh Analysis and explains how to use it to solve unknowns in circuits. I find it helpful to label on unknown branch currents before starting the problem, and then substitute the mesh currents in later. This reduces the chance of getting some terms with the wrong sign, but is not a necessary step, as most books will skip that part. Mesh analysis can be used to solve circuits with multiple loops, and in a later video, we'll go over some examples of supermesh analysis as well (we need to do that if there is a current source between two mesh currents). This video is part of a full free course on electric circuits. The course covers DC circuits, circuit laws, current & voltage sources, series & parallel resistors, nodal analysis, mesh analysis, and AC circuits.

Transcript: hey so in this video i'm just going to introduce mesh analysis talk about how we can use it to solve for unknowns in circuit problems and then solve this example here that we see on the screen to figure out what the currents are going through each of the resistors and we're going to use mesh analysis and then in the next couple of videos we'll do some more complicated examples so you can get some different ideas of how to apply this now so when we're doing mesh analysis you have to identify each mesh and each mesh is basically each loop that's going around in the circuit without like incorporating other loops inside for example if you start here we have one loop that looks like this right just back to this point and we have another loop over here that we can go around the circuit and basically get back to the same point that we started so we're going to eventually apply kirchhoff's voltage law around each of those loops but what you do is you actually identify like you just kind of draw on what we call a mesh current that's flowing around each of these loops so we'll say we have i1 going around this loop and we have i2 going around this loop so these mesh currents like sort of do exist and sort of don't in cases where they're like flowing through just a single resistor that's not like between two um between two meshes then it will sort of be the actual current that's flowing through we might have the sign backwards but the magnitude will be the same and then for elements like this or basically the actual current that's flowing through an element like this is going to be the net of both of these for example if we want to draw the 1 ohm resistor just like this it's going to have some current let's just say whatever the current is maybe let's just assume actually some currents for all of these let's say that we just assume that ia is going to go this way we're going to assume that there's another current here we're going to call it ib and we're going to call it the there's just some other current here that's going down through this resistor let's just call that ic we can just kind of assume the directions it doesn't actually matter and if we end up getting negative magnitudes here for these then we're going to determine that we've just got them in the wrong direction but looking at ic here then when we're
coming down into this resistor we have this mesh mesh 1 which is going this way i1 and we have this other one which is going up i2 and so really ic is just going to be equal to the sum of the vectors that are going this way minus the vectors that are going that way so it's going to be sorry that was i1 so we have i1 minus i2 but if we look at this i2 is going counterclockwise around the loop this way so i2 is coming down like this so we have i2 is going this way but we know that 1.5 amps is going up so we know by inspection that actually that i2 is going to be equal to negative 1.5 amps so we can write that for example we can just say i2 is equal to negative 1.5 amps and then here if we look at this resistor we were saying that i2 is going this way right it's like passing through this resistor like that but if i2 is going this way and it's a negative 1.5 amps that means we can just switch the direction and this is going to be the actual current that's flowing through the resistor is going to be 1.5 amps flowing to the left now the way that we drew ib is actually turns out to be in the correct direction which is matching the direction of you know the actual current in the branch so we can even just write it here that ib is equal to 1.5 amps that is going in the direction that ib is labeled on and that's also going to be equal to negative i2 and then when we look at the other loop here we're basically we've labeled on we've assumed a direction for the current that's actually flowing through this resistor to be going to the right but that matches what we have for i1 because i1 is spinning past it's going that way and so i1 is going to be equal to ia based on the way that we've assumed these so we have ia is equal to i1 now when you're doing mesh analysis it is most common to write your mesh currents which are i1 and i2 going clockwise and if you just basically just always draw them clockwise it just like reduces your chance for screwing up later especially when you come into situations like this because you kind of just you always have the same form here so that's always the first step in mesh analysis is to draw on the mesh currents and and write down any information that you have for example in this case we were able to solve what this one is really easily and then even based on that we were able to find out what one of the actual branch currents is in one of these elements generally to find the rest of the information we're going to need to apply kvl kirchhoff's voltage law in each loop for example this though this one we've already solved so we don't really need to do anything here so for the rest of this problem we would only need to apply kvl in this loop so let's do that let's write kvl loop 1 and if you remember from kvl we basically just take the voltage drop or voltage gain across every element and we pick a starting point so like let's just start here and when you enter through the negative terminal depending on which sign convention you follow i like to follow the passive sign convention where you enter in a negative terminal and you assign that a negative value and when you enter the positive terminal of an element you assign that with a positive value in kvl so that would be a plus and a minus and this would be a plus and a minus and a plus and a minus and that's because current is always flowing into the positive side of a resistor and we've assumed that that's the way that these currents are flowing so if you want to go with that let's just remember that ohm's law is
v equals ir and kvl we're just like summing up all of the voltage drops and gains and setting it equal to zero so when we start at this red dot here on the bottom we're going to come into the battery and we're going to assign this a negative three for the voltage so we have negative three and then when we enter in the positive terminal of the resistor here we need the voltage drop but the voltage is equal to the resistance times the current now i find it helpful to actually start off with the currents that you've kind of labeled on the assumed currents through each branch then you'll see why you can skip to the next step if you want but i find it more straightforward and like less chance of making errors if you do it this way so the voltage drop across this element is going to be ia times the resistance so we add that because we come in the positive terminal and we have 2 times i a the units of this term are going to be in volts and i'm just going to drop all of the units here just to keep it like less stuff going on and then the voltage drop across this we're going to have a positive because we're coming into the positive terminal and it's going to be the current times the resistance so ic times 1 ohm and we set all of that equal to 0 because basically when we come back out here we're back at the point that we started at okay so at this point i would substitute i a and ic for the mesh currents that we need and we've already identified those up here so we have i a is equal to i1 and ic is equal to i1 minus i2 we need to substitute these in because these are basically what's going to relate this side to this side because we have them up here so we can just rewrite that as negative 3 plus 2 times i 1 plus 1 times i1 minus i2 and that's all equal to 0. 
okay so we can just simplify that a little bit and at this point we still have we have negative 3 plus 3 i1 and we have a value for i2 which was up here i2 was equal to negative 1.5 so we can just give ourselves a little bit more space here and basically just add 3 to each side so we have 3 i1 is equal to positive 1.5 and we find that i1 is just going to be equal to 1.5 over 3 or basically i1 is equal to 0.5 amps and if we put everything into focus just here we go then we can see that i1 is equal to 0.5 amps so this is equal to 0.5 amps ib was equal to 1.5 amps and ic is equal to i1 minus i2 and so we have i1 is 0.5 amps minus i2 well i2 is actually negative 1.5 so we basically find that ic is equal to 2 amps so if you wanted to label these on ia would be 0.5 amps ib 1.5 amps and ic is equal to 2 amps and that actually checks out with kirchhoff's current law at the junction as well you have 1.5 plus 0.5 flowing in and two flowing out so depending on what the question would be asking you like if you're asking to find the power dissipation in the resistors or often it's just asking you to find the mesh currents themselves which is the blue lines or the branch currents which are the green ones depending on the kind of structure of whatever you're being asked for basically just apply ohm's law and then go on with that so yeah that's a quick introduction to mesh analysis i would just recommend when you're doing kvl kirchhoff's voltage law around the loops be really cautious when you're inputting in these positive and negative terms and especially i like to put in actually the branch currents that we've assumed and then substitute them for the mesh currents because it's really easy to kind of accidentally get a sign backwards but generally doing it this way you won't make that mistake and you'll actually end up with the correct answers so anyways guys i'll see you in the next video and we'll go over another example of mesh current analysis
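The loop analysis from the transcript can be condensed into three lines (clockwise mesh currents i1 and i2; the current source fixes i2; the notation is editorial, the values are the video's):

% The 1.5 A source flows opposite the assumed clockwise direction of mesh 2
i_2 = -1.5 \text{ A}

% KVL around mesh 1 (3 V source, 2-ohm resistor, shared 1-ohm resistor)
-3 + 2 i_1 + 1\,(i_1 - i_2) = 0 \;\Rightarrow\; 3 i_1 = 3 + i_2 = 1.5 \;\Rightarrow\; i_1 = 0.5 \text{ A}

% Branch currents
i_a = i_1 = 0.5 \text{ A}, \quad i_b = -i_2 = 1.5 \text{ A}, \quad i_c = i_1 - i_2 = 2 \text{ A}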
2468
https://www.aafp.org/pubs/afp/issues/2000/0801/p633.html
JEFFREY T. KIRCHNER, D.O.
Am Fam Physician. 2000;62(3):633-634

Pneumonia accounts for a significant degree of morbidity in children, especially those less than five years of age. The annual incidence is approximately 30 to 45 cases per 1,000 children in this age group. Among older children, the incidence is approximately 16 to 22 cases per 1,000. A number of these children have repeated episodes of pneumonia, which usually raises a red flag to the physician that an underlying disease process may be predisposing the child to pneumonia. "Recurrent" pneumonia is defined in the literature as two episodes or more in one year or more than three episodes of pneumonia in a child at any time, with radiographic clearing between episodes. How to further evaluate these children has not been well addressed with primary studies in the pediatric literature. Owayed and colleagues performed a retrospective study to determine the frequency of underlying illnesses in children hospitalized with recurrent pneumonia and the percentage of those with known underlying illness before pneumonia recurrence. The authors reviewed the charts, for a 10-year period, of all patients younger than 18 years who were admitted to a children's hospital with a diagnosis of pneumonia. From this group, children who had two or more episodes of pneumonia in one year or three or more episodes in their lifetime, and who had radiographic confirmation of pneumonia on admission, were included in the study. Using a standard data extraction form, information was abstracted from the charts regarding patient age, sex, percentile body weight and the age at which an underlying illness was diagnosed. Also reviewed were the results of the diagnostic evaluations, which included computed tomography of the chest, sweat chloride testing, echocardiography, barium swallow, laryngoscopy and bronchoscopy, esophageal pH manometry, quantitative serum immunoglobulins and testing for human immunodeficiency virus (HIV) infection. Of the 2,952 charts reviewed, 238 children met the definition for recurrent pneumonia. Approximately 60 percent were males, and the mean age at diagnosis was 3.7 years (range: 2.5 months to 15.6 years). An underlying illness was diagnosed in 220 of these children. Aspiration syndrome was diagnosed as the cause of pneumonia in 114 children, an immune disorder in 24 and congenital heart disease in 22. Additional diagnoses included bronchial asthma in 19 children, congenital or acquired anomalies of the airway or lung in 18, gastroesophageal reflux in 13 and sickle cell anemia in 10. A predisposing factor for recurrent pneumonia could not be determined in 18 of the children. More than one half of the children with aspiration syndrome had cerebral palsy. Of those with an immune deficiency, 13 had a malignant neoplasm, five had a dysgammaglobulinemia, five had HIV infection and one had autoimmune pancytopenia. Of significance, 178 of the 238 children had been diagnosed with an underlying illness before the first episode of pneumonia, and 25 children were diagnosed during their first episode of pneumonia. Only 17 children were diagnosed with an underlying disease after having recurrent pneumonia. Of these 17 children, seven had asthma, four had aspiration syndrome, three had gastroesophageal reflux and two had airway anomalies. Only one child had an underlying immune disorder.
The authors conclude from these data that most children with recurrent pneumonia are known to have an underlying illness. The most common predisposing factor is aspiration syndrome. In children without a known medical or anatomic condition who have recurrent pneumonia, bronchial asthma and gastroesophageal reflux should be ruled out as part of the evaluation. Underlying immune disorders that were not previously diagnosed are rare and, in about 10 percent of children, an underlying illness will not be found despite an extensive work-up.
2469
https://curriculum.illustrativemathematics.org/HS/teachers/1/6/index.html
Illustrative Mathematics Algebra 1, Unit 6 - Teachers | IM Demo

Alg1.6 Introduction to Quadratic Functions

In this unit, students study quadratic functions systematically. They look at patterns which grow quadratically and contrast them with linear and exponential growth. Then they examine other quadratic relationships via tables, graphs, and equations, gaining appreciation for some of the special features of quadratic functions and the situations they represent. They analyze equivalent quadratic expressions and how these expressions help to reveal important behavior of the associated quadratic function and its graph. They gain an appreciation for the factored, standard, and vertex forms of a quadratic function and use these forms to solve problems.

Lessons

A Different Kind of Change
1. A Different Kind of Change
2. How Does it Change?

Quadratic Functions
3. Building Quadratic Functions from Geometric Patterns
4. Comparing Quadratic and Exponential Functions
5. Building Quadratic Functions to Describe Situations (Part 1)
6. Building Quadratic Functions to Describe Situations (Part 2)
7. Building Quadratic Functions to Describe Situations (Part 3)

Working with Quadratic Expressions
8. Equivalent Quadratic Expressions
9. Standard Form and Factored Form
10. Graphs of Functions in Standard and Factored Forms

Features of Graphs of Quadratic Functions
11. Graphing from the Factored Form
12. Graphing the Standard Form (Part 1)
13. Graphing the Standard Form (Part 2)
14. Graphs That Represent Situations
15. Vertex Form
16. Graphing from the Vertex Form
17. Changing the Vertex

© 2019 Illustrative Mathematics. Licensed under the Creative Commons Attribution 4.0 license.
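For reference, the three forms named in the unit overview can be written side by side; the example quadratic below is an editorial illustration, not a lesson item:

f(x) = ax^2 + bx + c        % standard form
f(x) = a(x - r)(x - s)      % factored form: r and s are the zeros
f(x) = a(x - h)^2 + k       % vertex form: (h, k) is the vertex

% Example: the same function in all three forms
x^2 - 4x + 3 = (x - 1)(x - 3) = (x - 2)^2 - 1

so the zeros are 1 and 3 and the vertex is (2, -1).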
2470
https://faculty.washington.edu/ezivot/econ301/labor_market_equilibrium.htm
Labor market equilibrium
Economics 301 Intermediate Macroeconomics
Labor Market Equilibrium
Winter 2000
Last updated: January 14, 2000

Note: These notes are preliminary and incomplete and they are not guaranteed to be free of errors. Please let me know if you find typos or other errors.

Equilibrium in the Labor Market

Combining the behavioral models for labor demand and labor supply allows us to deduce the equilibrium real wage and the equilibrium amount of employment. In equilibrium, N^D = N^S, which determines the equilibrium values of the real wage and employment. It is implicitly assumed that in equilibrium everyone who wants a job has a job. In this sense, the equilibrium value of employment is also called full employment. When the labor market is in equilibrium, there is no tendency to move away from equilibrium. That is, at the equilibrium values of w and N there are no forces acting in the labor market to move the market away from the equilibrium values.

Understanding Equilibrium

To understand equilibrium, it is helpful to see what happens when the labor market is out of equilibrium. The figure below illustrates a situation where the current real wage is higher than the equilibrium real wage. At w0 the supply of labor, N^S_0, is greater than the demand for labor, N^D_0, and so there is an excess supply of labor in the labor market. Workers bid down the real wage until it falls to the equilibrium value, w*. The above graph demonstrates that when the current wage is not equal to the equilibrium real wage, competitive market forces act to push the wage toward the equilibrium wage. As the wage adjusts, labor demand and labor supply move closer to equality.

Combining the Production Function with the Labor Market: The Determination of Full Employment Output

The labor market determines the equilibrium, or full employment, level of labor input to the aggregate production function. Therefore, we define full employment output, Y*, in the following way:

Y* = A0 F(K0, N*)

where N* denotes the full employment amount of labor determined by equilibrium in the labor market. Note: The textbook by Abel and Bernanke uses "bars" on top of equilibrium values. Since I can't figure out how to put bars on top of letters in HTML, I will denote an equilibrium value with a superscript "*" and the color red. Full employment output is depicted in the graph below.

Factors that Affect Potential Output

Since the production function is linked with the labor market to determine full employment (or potential) output, anything that shifts the production function (which shifts labor demand) or the labor supply curve will affect potential output. To illustrate, the graph below shows how potential output is affected by an increase in productivity. An increase in productivity from A0 to A1 shifts up both the production function and labor demand (red curves). The labor market then adjusts to a new equilibrium with a higher amount of employment and a higher real wage. The increase in employment leads to a higher level of potential output. Notice that output is higher both because the production function shifts up and because employment increases.
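As a small numerical illustration of the equilibrium condition N^D = N^S (the linear demand and supply schedules below are hypothetical, not taken from the course):

% Hypothetical labor demand and supply
N^D = 200 - 4w, \qquad N^S = 40 + 4w
% Equilibrium: set demand equal to supply
200 - 4w = 40 + 4w \;\Rightarrow\; 8w = 160 \;\Rightarrow\; w^{*} = 20
N^{*} = 200 - 4(20) = 120
% Full employment (potential) output, given A0 and K0
Y^{*} = A_0 F(K_0, N^{*}) = A_0 F(K_0, 120)

Anything that shifts either schedule changes w* and N*, and through N* changes potential output Y*.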
2471
https://unstop.com/blog/dynamic-method-dispatch-in-java
Dynamic Method Dispatch In Java (Runtime Polymorphism) + Codes // Unstop

Table of contents:
What Is Dynamic Method Dispatch In Java Programming?
What Is Upcasting & Its Importance In Dynamic Method Dispatch In Java?
Dynamic Method Dispatch In Java Example: Animal
Dynamic Method Dispatch In Java With Data Member
Advantages Of Dynamic Method Dispatch In Java
Disadvantages Of Dynamic Method Dispatch In Java
Static vs. Dynamic Binding & Dynamic Method Dispatch In Java
Conclusion
Frequently Asked Questions

Dynamic Method Dispatch In Java (Runtime Polymorphism) + Examples

Dynamic method dispatch in Java is a runtime process that resolves calls to overridden methods based on the actual type of the object, enabling flexible and polymorphic behavior.

Dynamic method dispatch is a fundamental concept in object-oriented programming, particularly in Java, where the decision of which method to call is made at runtime rather than compile time. This mechanism allows Java to execute the overridden method of a subclass based on the actual object type, enabling runtime polymorphism. In simpler terms, when a superclass reference variable points to a subclass object, the overridden method invoked is determined dynamically at runtime. This flexibility is a cornerstone of Java's polymorphic behavior, making programs more extensible and adaptable. In this article, we will discuss dynamic method dispatch in Java, its mechanics, examples, and benefits, and provide a comparison with static binding for a clearer understanding.

What Is Dynamic Method Dispatch In Java Programming?

The concept of dynamic method dispatch in Java programming arises from the inheritance model, where a superclass can contain methods with a general definition and its subclasses can provide their specific implementations (this is the concept of method overriding). Dynamic method dispatch in Java, also known as runtime polymorphism, is a mechanism that ensures the correct overridden method is executed based on the actual object type at runtime (and not compile time). This holds even if the reference used to call the method is of the superclass type.

Normally, having methods with the same name/identifier in multiple classes can lead to ambiguity in determining which method to invoke in response to a method call. Java resolves this ambiguity during runtime, relying on the Java Virtual Machine (JVM) to identify and execute the appropriate method implementation dynamically (based on the type of the object referenced at that moment).

For example, consider a class Parent and its subclass Child, and both have a method called speak(). Depending on whether the reference variable is assigned a Parent or Child object, the JVM dynamically decides which speak() method implementation to execute.

But how does the Java programming language decide which method to invoke at runtime? The answer lies in upcasting. Without upcasting, dynamic method dispatch wouldn't be possible, as the reference and object types would always match, and method resolution would happen at compile time instead of runtime. This compile-time resolution is known as static binding, and we'll explore the differences between static and dynamic binding in more detail in a later section. For now, let's understand upcasting and its role in dynamic method dispatch.
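The Parent/Child speak() example above is described only in prose; a minimal runnable sketch of it might look as follows (the class bodies, printed messages, and the SpeakDemo class name are assumptions for illustration):

class Parent {
    void speak() { System.out.println("Parent speaking"); }
}

class Child extends Parent {
    @Override
    void speak() { System.out.println("Child speaking"); }
}

public class SpeakDemo {
    public static void main(String[] args) {
        Parent p = new Child(); // Superclass reference, subclass object
        p.speak();              // Prints "Child speaking": resolved by the JVM at runtime
    }
}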
What Is Upcasting & Its Importance In Dynamic Method Dispatch In Java?

Upcasting refers to the process of casting a subclass object to a superclass reference type. It enables polymorphism by allowing the superclass reference type to hold a subclass object, while the actual method execution is determined dynamically. To better understand this concept, imagine we have a class named Parent and its child class (which inherits from Parent) named Child. We then have two ways to create an object in reference to the Parent class.

Direct Instantiation: In this approach, we create an instance of the Parent class and assign it to a reference variable obj of type Parent. It's a straightforward instantiation of the Parent class. In this case, any methods or properties specific to Parent can be accessed through this reference.

Upcasting: In this approach, we create an object of the Child class but assign it to a reference variable obj of the Parent type. This process of casting a reference variable of a subclass to its superclass type is known as upcasting. It allows a superclass reference variable to hold the address of a subclass object, enabling polymorphic behavior and dynamic method dispatch in Java, as this superclass reference variable, although referring to a specific subclass object, can dynamically invoke overridden methods of the subclass at runtime.

The second approach (upcasting) enables a superclass reference variable to hold the address of a subclass object, ensuring the correct method is dynamically determined and executed. This dynamic decision-making capability allows for the precise invocation of the appropriate method implementation at runtime, popularly known as runtime polymorphism.

Syntax Of Upcasting For Dynamic Method Dispatch In Java:

Superclass reference = new Subclass(); // Upcasting

Here,
- Superclass is the reference type and reference is the reference variable of the superclass type. It can refer to objects of its own class or any of its subclasses.
- The new keyword is used for the instantiation/creation of a new object, and Subclass() is the constructor of the class whose object is being created. Together, these create an object of the subclass type.

Let's consider a real-life analogy using a class hierarchy representing vehicles. Suppose we have a superclass Vehicle and a subclass Car that extends Vehicle.

Code Example:

class Vehicle {
    void drive() { System.out.println("Driving a vehicle"); }
}

class Car extends Vehicle {
    void drive() { System.out.println("Driving a car"); }
    void drift() { System.out.println("Performing a drift!"); }
}

class Main {
    public static void main(String[] args) {
        Vehicle vehicle1 = new Vehicle(); // Creating a Vehicle object
        Vehicle vehicle2 = new Car(); // Upcasting: Creating a Car object but referring to it as a Vehicle
        vehicle1.drive(); // Output: Driving a vehicle
        vehicle2.drive(); // Output: Driving a car
        // vehicle2.drift(); // This line won't compile - drift() method is not part of Vehicle class
    }
}

Output:
Driving a vehicle
Driving a car

Explanation:

In the Java code example,
- We first create a class called Vehicle, containing a method drive(), which prints a string message ("Driving a vehicle") to the console using System.out.println.
- Then, we create another class named Car, which extends the Vehicle class (i.e., inherits from Vehicle). Inside:
  - We define a drive() method that prints another string message ("Driving a car") to the console. This method overrides the superclass method.
  - It also has a drift() method that prints the message "Performing a drift!" to the console.
- Moving on, we define the Main class with the main() method to begin the execution of the program.
- Inside main(), we first create an instance of the Vehicle class and assign it to the vehicle1 reference variable of type Vehicle (direct instantiation).
- Then, we create an object of the Car class and assign it to the vehicle2 reference variable of type Vehicle. This is upcasting, illustrating how a subclass (Car) object can be treated as its superclass (Vehicle).
- Next, we call the drive() method using vehicle1, which straightforwardly invokes the method from the superclass.
- After that, we call the drive() method using vehicle2. As a result, the overridden version in the Car class is executed because the actual object type is Car. This demonstrates dynamic method dispatch.
- We then have a commented line, where we call the drift() method using vehicle2. This results in a compilation error because the drift() method is not defined in the Vehicle class, and the reference type (Vehicle) restricts access to subclass-specific methods. This means that even though vehicle2 refers to a Car object, the reference type is Vehicle, so methods specific to Car, such as drift(), are not accessible through this reference. The compiler will throw an error when you attempt to call drift() on vehicle2.

Now that we understand dynamic method dispatch in Java, let's see how upcasting is essential in making this mechanism work.

Why Is Upcasting Necessary For Dynamic Method Dispatch?

Dynamic method dispatch relies on upcasting to create a scenario where a method cannot be resolved at compile time. When a subclass object is referenced using a superclass variable, the exact method to be invoked is determined at runtime, enabling polymorphic behavior. For example:

Parent obj = new Child();

Here, obj is a reference of the Parent type, but the actual object it points to is of the Child class. This creates the ambiguity necessary for dynamic method dispatch to come into play, allowing the JVM to resolve the method dynamically based on the object type.

Dynamic Method Dispatch In Java Example: Animal

Let's take another scenario with a superclass Animal and two subclasses, Dog and Cat. In this example Java program, each subclass overrides the common makeSound() method from the Animal class while also adding unique behaviors of its own.

Code Example:

class Animal {
    void makeSound() { System.out.println("Generic animal sound"); }
}

class Dog extends Animal {
    void makeSound() { System.out.println("Woof!"); }
    void wagTail() { System.out.println("Dog is wagging its tail"); }
}

class Cat extends Animal {
    void makeSound() { System.out.println("Meow!"); }
    void jump() { System.out.println("Cat is jumping"); }
}

public class Main {
    public static void main(String[] args) {
        Animal animal1 = new Dog(); // Upcasting to Animal
        Animal animal2 = new Cat(); // Upcasting to Animal
        animal1.makeSound(); // Output: Woof!
        animal2.makeSound(); // Output: Meow!
        // animal1.wagTail(); // This line won't compile - wagTail() is not part of Animal class
        // animal2.jump(); // This line won't compile - jump() is not part of Animal class
    }
}

Output:
Woof!
Meow!

Explanation:

In this example Java code, we define an Animal class with a makeSound() method that all animals share. Following that, we define the Dog and Cat classes, which are subclasses of Animal, and each class overrides the makeSound() method to provide its own implementation.
Then, in the main() method:
- We create two references of type Animal (animal1 and animal2) but assign them to Dog and Cat objects, respectively. This is upcasting, where we treat subclasses (Dog and Cat) as their superclass (Animal). Remember, upcasting is essential for dynamic method dispatch to function properly.
- Then, we call the makeSound() method on animal1, and it prints "Woof!" even though the reference type is Animal. This is because animal1 actually points to a Dog object, and the overridden method in the Dog class is executed at runtime. Similarly, calling makeSound() on animal2 prints "Meow!", as animal2 refers to a Cat object.
- Next, we have two commented lines where we call the wagTail() method on animal1 (animal1.wagTail()) and the jump() method on animal2 (animal2.jump()). If uncommented, these lines will fail to compile because the Animal class does not know about methods specific to its subclasses. These methods (wagTail() and jump()) are only available within the Dog and Cat classes, respectively. Since animal1 and animal2 are reference variables of type Animal, they cannot access methods defined in their specific subclass without explicit casting.
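As a side note, explicit casting in the other direction (downcasting) is how you get subclass-specific behavior back. Here is a minimal sketch reusing the Animal and Dog classes from the example above; the instanceof check-then-cast pattern is standard Java, while the exact variable names are ours:

class Main {
    public static void main(String[] args) {
        Animal animal1 = new Dog(); // Upcasting, as before

        // Downcasting: verify the actual object type, then cast explicitly
        if (animal1 instanceof Dog) {
            Dog dog = (Dog) animal1; // Now subclass-specific methods are accessible
            dog.wagTail();           // Output: Dog is wagging its tail
        }
    }
}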
Dynamic Method Dispatch In Java With Data Members

While Java allows methods to be overridden in subclasses, which is a core part of runtime polymorphism, data members (variables) behave differently. Unlike methods, variables cannot be overridden in Java. This means that while a subclass can inherit variables from its superclass, it cannot replace or redefine them. Each class maintains its own copy of the inherited variables; the reference type of an object determines which methods are accessible, but variable access always resolves to the member declared in the reference type, not the actual object type.

Let's understand it through an example. Suppose we have a superclass Vehicle and its two subclasses Car and Truck. Each class has a name variable representing the type of vehicle and a getDescription() method that provides a description of the vehicle.

Code Example:

class Vehicle {
    String name = "Vehicle";

    String getDescription() {
        return "This is a generic vehicle.";
    }
}

class Car extends Vehicle {
    String name = "Car";

    String getDescription() {
        return "This is a car.";
    }
}

class Truck extends Vehicle {
    String name = "Truck";

    String getDescription() {
        return "This is a truck.";
    }
}

class Main {
    public static void main(String[] args) {
        Vehicle vehicle1 = new Car(); // Upcasting to Vehicle
        Vehicle vehicle2 = new Truck(); // Upcasting to Vehicle

        System.out.println(vehicle1.name); // Output: Vehicle
        System.out.println(vehicle1.getDescription()); // Output: This is a car.
        System.out.println(vehicle2.name); // Output: Vehicle
        System.out.println(vehicle2.getDescription()); // Output: This is a truck.
    }
}

Output:

Vehicle
This is a car.
Vehicle
This is a truck.

Explanation:

In this example, we have a class hierarchy representing vehicles. The Vehicle class has a name variable and a getDescription() method. The subclasses Car and Truck each inherit from Vehicle, with their own versions of the name variable and the getDescription() method.

The Vehicle class has a variable name initialized to "Vehicle". The Car class has its own name variable, initialized to "Car", and similarly for the Truck class. Despite the object references (vehicle1 and vehicle2) pointing to instances of Car and Truck, both vehicle1.name and vehicle2.name output "Vehicle". This happens because the name variable is not overridden, and field access is tied to the reference type (Vehicle), not the actual object type (Car or Truck).

The getDescription() method in Vehicle returns a generic description of the vehicle, while both the Car and Truck classes override this method to provide their specific descriptions. Although the references are of type Vehicle, when getDescription() is called, Java uses dynamic dispatch to call the correct method based on the actual object type (i.e., Car or Truck), not the reference type. Therefore, vehicle1.getDescription() prints "This is a car." and vehicle2.getDescription() prints "This is a truck.".

Here is your chance to top the leaderboard while practising your coding skills: Participate in the 100-Day Coding Sprint at Unstop.

Advantages Of Dynamic Method Dispatch In Java

Dynamic method dispatch offers several pivotal advantages, which include:

- Run-time Polymorphism: Dynamic method dispatch in Java enables run-time polymorphism, allowing the program to invoke the correct overridden method based on the actual object type. This enhances flexibility by allowing a single interface to serve different classes, each with its unique implementation.
Example: Suppose we have a superclass Shape with a method draw(), and its subclasses Circle and Rectangle override this method with their specific implementations. At runtime, the correct draw() method, based on the actual object type (Circle or Rectangle), is invoked.

Shape shape = new Circle();
shape.draw(); // Calls Circle's draw method at runtime

- Code Reusability: Method overriding allows subclasses to provide a specific implementation of a method already defined in the superclass. This promotes the reuse of method names across different classes, reducing redundancy and making code more maintainable.
Example: A superclass Animal can have a method makeSound(), and subclasses like Dog and Cat can override this method to produce their unique sounds, all using the same method name makeSound().

- Flexibility in Method Implementation: Dynamic method dispatch in Java allows subclasses to define their specific behavior while adhering to a common interface. This ensures a consistent method signature while providing flexibility for subclasses to implement their own logic (see the sketch after this list).
Example: An interface Shape might have a method calculateArea(). Different shapes, like Circle and Rectangle, can implement this interface and define their unique logic for calculating the area while maintaining the common interface structure.

- Enhanced Maintainability: Dynamic method dispatch in Java allows for easier modifications and enhancements in code without affecting the superclass or other subclasses. This separation between superclass and subclasses enables developers to modify specific behaviors in subclasses independently, promoting maintainability.
Example: If there's a need to change the behavior of a specific subclass without altering the superclass or other subclasses, developers can easily modify that subclass's overridden method.

- Promotes Extensibility: Subclasses can extend the functionality of the superclass by adding new methods or modifying existing behaviors. This allows new features to be integrated without altering the core functionality of the superclass, fostering system extensibility.
Example: A superclass Vehicle might have a method startEngine(), and subclasses like Car and Bike can add their unique methods accelerate() or applyBrakes() to extend the functionality provided by the superclass.
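To make the flexibility point concrete, here is a minimal sketch of the Shape interface idea from the list above (the concrete field values are ours, added purely for illustration):

interface Shape {
    double calculateArea(); // Common contract shared by all shapes
}

class Circle implements Shape {
    double radius = 2.0;

    public double calculateArea() {
        return Math.PI * radius * radius;
    }
}

class Rectangle implements Shape {
    double width = 3.0, height = 4.0;

    public double calculateArea() {
        return width * height;
    }
}

class Main {
    public static void main(String[] args) {
        Shape[] shapes = { new Circle(), new Rectangle() }; // Upcasting to the interface type

        for (Shape s : shapes) {
            // The right calculateArea() is selected at runtime for each object
            System.out.println(s.calculateArea()); // 12.566..., then 12.0
        }
    }
}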
Disadvantages Of Dynamic Method Dispatch In Java

While dynamic dispatch has multiple advantages, there are some downsides that we must consider:

- Performance Overhead: Dynamic method dispatch involves determining the appropriate method to call at runtime based on the actual object type. This runtime resolution can introduce a performance overhead compared to static method calls, which are resolved at compile time.
- Potential for Runtime Errors: Since method calls are resolved at runtime based on the actual object type, there is a potential for runtime errors if the methods involved in runtime resolution are not properly overridden in a subclass or if there are errors in the method implementation.

Static vs. Dynamic Binding & Dynamic Method Dispatch In Java

In Java, binding refers to how methods or variables are connected to their calls. This can occur either at compile time (static binding) or at runtime (dynamic binding), and it largely depends on the method type and the reference used.

Static Binding

In static binding, the method call is resolved at compile time based on the reference variable's type. This is also known as early binding. Static binding is used for:

- Private methods
- Static methods
- Final methods
- Overloaded methods

Since the method resolution happens at compile time, it is determined by the type of the reference variable, not the actual object type.

Dynamic Binding

In dynamic binding, method calls are resolved at runtime based on the actual object type, not the reference variable. This is also known as late binding. Dynamic binding is used primarily for overridden methods (instance methods). This behavior is what allows method overriding in subclasses.

Impact Of Access Modifiers & Static On Binding

- Private Methods: These methods are bound statically. Since they can't be overridden, method calls are resolved at compile time based on the reference type.
- Static Methods: Like private methods, static methods are also bound statically. The call is resolved based on the reference type, not the actual object type.
- Final Methods: Since final methods can't be overridden, they are statically bound to the class at compile time.

Now, let's take a look at the differences between static and dynamic binding:

| Basis | Static Binding | Dynamic Binding |
| --- | --- | --- |
| Definition | Method call is resolved at compile time. | Method call is resolved at runtime. |
| Occurrence | Occurs for overloaded methods, private methods, static methods, and final methods. | Occurs for overridden methods (instance methods). |
| Type of Methods | Can be static, private, final, or overloaded. | Typically instance or overridden methods. |
| Determination | Determined by the reference type at compile time. | Determined by the actual object type at runtime. |
| Key Use Cases | Early checking of method existence and type (compile-time). | Enables polymorphic behavior, method overriding, and runtime method resolution. |
| Performance | Faster, since method resolution happens at compile time. | Slight performance overhead due to runtime method resolution. |
| Binding Time | Early binding (done at compile time). | Late binding (done at runtime). |
| Method Visibility | Can be used for private and static methods, where visibility is determined at compile time. | Only applicable for methods that can be overridden (virtual methods). |
| Flexibility | Less flexible, since methods can't be changed dynamically. | More flexible, allowing subclasses to provide their own implementations. |
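One row in the table deserves a quick illustration: overloaded methods are bound statically, to the declared type of the argument. A minimal sketch (the Printer class here is ours, made up for illustration):

class Printer {
    void print(Object o) {
        System.out.println("Printing an Object");
    }

    void print(String s) {
        System.out.println("Printing a String");
    }
}

class Main {
    public static void main(String[] args) {
        Object message = "hello"; // The actual object is a String

        // Prints "Printing an Object": the overload is chosen at compile time
        // from the reference type (Object), not from the runtime type (String).
        new Printer().print(message);
    }
}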
Code Example For Static Binding

In static binding, the reference type determines the method call. Suppose we have a superclass Employee and two subclasses, Manager and Engineer. Each class will have a static method getRole() that returns the role of the employee.

Code Example:

class Employee {
    static String getRole() {
        return "Employee";
    }
}

class Manager extends Employee {
    static String getRole() {
        return "Manager";
    }
}

class Engineer extends Employee {
    static String getRole() {
        return "Engineer";
    }
}

class Main {
    public static void main(String[] args) {
        Employee employee1 = new Manager();
        Employee employee2 = new Engineer();

        System.out.println(employee1.getRole()); // Output: Employee (static binding)
        System.out.println(employee2.getRole()); // Output: Employee (static binding)
    }
}

Output:

Employee
Employee

Explanation:

In the sample Java code, the getRole() method is static in the Employee, Manager, and Engineer classes. Static methods are bound at compile time based on the reference type, so employee1.getRole() and employee2.getRole() both return "Employee", even though the actual objects are of type Manager and Engineer.

Code Example For Dynamic Binding

In dynamic binding, the method call is resolved at runtime based on the actual object type. Consider the same example discussed above, but this time we don't have static methods in the classes.

Code Example:

class Employee {
    String getRole() {
        return "Employee";
    }
}

class Manager extends Employee {
    String getRole() {
        return "Manager";
    }
}

class Engineer extends Employee {
    String getRole() {
        return "Engineer";
    }
}

class Main {
    public static void main(String[] args) {
        Employee employee1 = new Manager();
        Employee employee2 = new Engineer();

        System.out.println(employee1.getRole()); // Output: Manager (dynamic binding)
        System.out.println(employee2.getRole()); // Output: Engineer (dynamic binding)
    }
}

Output:

Manager
Engineer

Explanation:

In the sample Java code, the getRole() method is an instance method, which means it can be overridden by subclasses. In this case, the actual object types (Manager and Engineer) determine which getRole() method is invoked, demonstrating dynamic binding. The method resolution occurs at runtime based on the object's actual type, allowing for different behavior for each subclass.

Need more guidance on how to become a Java developer? You can now select an expert to be your mentor here.

Conclusion

Dynamic method dispatch in Java, also known as runtime polymorphism, allows methods to be invoked based on the actual type of the object at runtime. This process is a key component of Java's polymorphic behavior, enabling the overriding of methods in subclasses. It relies on upcasting, where a subclass object is treated as an object of its superclass, providing flexibility and enhancing the maintainability of code.

Through dynamic method resolution, Java can determine the correct version of a method to execute, even when the reference variable is of the superclass type. However, dynamic dispatch only applies to methods that are overridden in subclasses. Variables, on the other hand, are bound statically, and they cannot be overridden. Dynamic dispatch offers numerous advantages, including code reusability, flexibility, and extensibility in method implementation. However, it comes at the cost of some performance drawbacks and the possibility of runtime errors.
Java employs static binding for private, static, and final methods, where method call resolution is based on the type of the reference variable at compile time. On the other hand, dynamic binding, used for virtual methods, resolves method calls based on the actual class of the object at runtime, enabling polymorphic behavior and method overriding.

Also read: Top 100+ Java Interview Questions And Answers (2024)

Frequently Asked Questions

Q1. What is the difference between overriding and dynamic method dispatch?

Overriding and dynamic method dispatch are closely related concepts but serve different roles in Java. Overriding occurs when a subclass provides a specific implementation for a method that is already defined in its superclass. It allows the subclass to specialize or change the behavior of the inherited method. Dynamic method dispatch, on the other hand, is the mechanism that determines which overridden method to invoke at runtime, based on the actual type of the object rather than the type of the reference variable.

Consider a scenario involving shapes, where we have a superclass Shape and two subclasses, Circle and Square. Each shape has a method draw() that prints a message specific to that shape.

Code Example:

class Shape {
    void draw() {
        System.out.println("Drawing a shape");
    }
}

class Circle extends Shape {
    void draw() {
        System.out.println("Drawing a circle");
    }
}

class Square extends Shape {
    void draw() {
        System.out.println("Drawing a square");
    }
}

class Main {
    public static void main(String[] args) {
        Shape shape1 = new Circle(); // Upcasting to Shape
        Shape shape2 = new Square(); // Upcasting to Shape

        shape1.draw(); // Output: Drawing a circle (dynamic method dispatch)
        shape2.draw(); // Output: Drawing a square (dynamic method dispatch)
    }
}

Output:

Drawing a circle
Drawing a square

Explanation:

In this example:
- Both the Circle and Square classes override the draw() method defined in the Shape class, providing their specific implementations. This represents the concept of method overriding.
- When shape1.draw() is called, even though shape1 is of type Shape, it refers to a Circle object. At runtime, the actual object type (Circle) is considered, and the overridden draw() method in the Circle class is executed. This represents dynamic method dispatch in Java. The same dynamic dispatch occurs for shape2.draw() with the Square object.

Q2. Is static dispatch faster than dynamic dispatch?

Yes, static dispatch is generally faster than dynamic dispatch in Java. Static dispatch involves method calls that can be resolved at compile time based on the reference type. This early resolution allows the compiler to directly link the method call to the appropriate method implementation. On the other hand, dynamic dispatch involves method calls that are resolved at runtime based on the actual type of the object. This requires additional runtime overhead to determine the correct method implementation to call, which can result in slightly slower performance compared to static dispatch.

Q3. What is the difference between dynamic binding and dynamic method dispatch?

Dynamic binding refers to the process of determining the method implementation to invoke at runtime based on the actual class of the object. Dynamic method dispatch is a specific mechanism of dynamic binding where the method call is resolved to an overridden method at runtime. Essentially, dynamic method dispatch is the runtime resolution of method calls that have been overridden in subclasses.
| Basis | Dynamic Binding | Dynamic Method Dispatch |
| --- | --- | --- |
| Definition | Process of determining the method implementation at runtime. | Specific case of dynamic binding where overridden methods are resolved at runtime. |
| Occurrence | Encompasses the entire process of resolving method calls at runtime. | Refers specifically to resolving overridden method calls at runtime. |
| Scope | Includes fields, method calls, and other dynamic resolutions. | Focuses on resolving overridden methods. |
| Key Use Cases | Achieving polymorphic behavior and method overriding. | Resolving which overridden method to call at runtime. |
| Syntax | Conceptual, not defined by code. | ParentClass obj = new ChildClass(); obj.overriddenMethod(); |

Q4. Are private, static, or final methods subject to dynamic method dispatch in Java?

No, private, static, and final methods do not participate in dynamic method dispatch. These methods are bound statically during compile time. Private methods are not accessible to subclasses and cannot be overridden, so they aren't part of dynamic dispatch. Static methods are tied to the class and not to specific object instances, so they are resolved based on the reference type at compile time. Final methods cannot be overridden, so there's no need for dynamic dispatch to resolve the method.

Q5. Does dynamic method dispatch work with overloaded methods?

No, dynamic method dispatch in Java does not work with overloaded methods. Overloading resolves method calls at compile time based on the method signature (number or type of parameters). In contrast, dynamic dispatch applies only to overridden methods, where the method's implementation is determined at runtime based on the actual object type.

Q6. Can dynamic method dispatch occur across different classes in the inheritance hierarchy?

Yes, dynamic method dispatch in Java works across different classes in the inheritance hierarchy. When a method is overridden in a subclass, it retains the same signature as the superclass method. At runtime, when the method is called via a reference to the superclass, the actual class of the object determines which method to execute, demonstrating polymorphism across the hierarchy.

Let's consider an example using a business-oriented scenario, such as employees in an organization, to understand dynamic method dispatch across different classes in the inheritance hierarchy.

Code Example:

class Employee {
    void displayInfo() {
        System.out.println("Employee information");
    }
}

class Manager extends Employee {
    @Override
    void displayInfo() {
        System.out.println("Manager information");
    }
}

class Engineer extends Employee {
    @Override
    void displayInfo() {
        System.out.println("Engineer information");
    }
}

class Main {
    public static void main(String[] args) {
        Employee employee1 = new Manager(); // Upcasting to Employee
        Employee employee2 = new Engineer(); // Upcasting to Employee

        employee1.displayInfo(); // Output: Manager information (dynamic method dispatch)
        employee2.displayInfo(); // Output: Engineer information (dynamic method dispatch)
    }
}

Output:

Manager information
Engineer information

Explanation:

In this example, we have a class hierarchy representing employees. The Employee class has a method displayInfo(), which is overridden in its subclasses Manager and Engineer. In the Main class, we create instances of Manager and Engineer and upcast them to Employee.
When we call the displayInfo() method on these Employee references, the actual method invoked is determined by the runtime type of the objects (Manager and Engineer), demonstrating dynamic method dispatch across different classes in the inheritance hierarchy.

Q7. What happens if a method is not overridden but is called using dynamic method dispatch?

If a method is not overridden in a subclass but is called using dynamic method dispatch in Java, the method from the superclass will be executed. The process works as follows: the type of the reference variable is considered during compilation, but the actual type of the object is checked during runtime. If the method is not overridden in the subclass, the JVM will invoke the superclass method as the correct version.

In conclusion, dynamic method dispatch in Java allows for maintainable code by supporting polymorphism, but it requires careful attention to performance considerations and method resolution.

Do check out the following interesting topics:
- Data Types In Java | Primitive & Non-Primitive With Code Examples
- How To Install Java For Windows, MacOS, And Linux? With Examples
- Advantages And Disadvantages Of Java Programming Language
- Java Developer Resume - See How To Make ATS Catch Your Resume
- Difference Between Java And JavaScript Explained In Detail
2472
https://mpl.mpg.de/fileadmin/user_upload/Chekhova_Research_Group/Lecture_5_2.pdf
Lecture 2. Spatial and temporal coherence. Coherent modes, photon number per mode.

Spatial and temporal coherence. Measurement of the first-order CFs using interferometers. Coherence volume. Number of photons per mode. Spatial and temporal modes. Schmidt (coherent) modes.

1. Temporal coherence.

From the Wiener-Khinchin theorem, we see that the shape of the spectrum for any stationary radiation can be found by measuring the first-order time correlation function
$$G^{(1)}(\tau)=\langle E^{(-)}(t)\,E^{(+)}(t+\tau)\rangle.$$
How to measure it? We should obviously try to split the field in two and then delay one part. Consider, for instance, doing it in the Michelson interferometer. Let the transmission of the beamsplitter be 50% (in fact it can be anything – why?). If the (positive-frequency) field at the input is $E^{(+)}(t)$ and the path lengths are $ct$ and $c(t+\tau)$, then the field at the output will be
$$E_{out}^{(+)}(t)=\frac{1}{2}\left(E^{(+)}(t)+E^{(+)}(t+\tau)\right).$$
The negative-frequency field will be its complex conjugate,
$$E_{out}^{(-)}(t)=\frac{1}{2}\left(E^{(-)}(t)+E^{(-)}(t+\tau)\right).$$
And the instantaneous intensity will be obtained by taking their product:
$$I_{out}(t)=E_{out}^{(-)}(t)\,E_{out}^{(+)}(t)=\frac{1}{4}\left\{I(t)+I(t+\tau)+\left[E^{(-)}(t)\,E^{(+)}(t+\tau)+c.c.\right]\right\}.$$
This instantaneous intensity will fluctuate with time [Fig. 1: $I_{out}(t)$ and $I(t)$ versus time], so let us now average it over time (we assume the field to be ergodic):
$$\langle I_{out}\rangle=\frac{1}{2}\left[\langle I\rangle+\mathrm{Re}\,G^{(1)}(\tau)\right].$$
In terms of normalized CFs,
$$\langle I_{out}\rangle=\frac{\langle I\rangle}{2}\left[1+\mathrm{Re}\,g^{(1)}(\tau)\right].$$
Let us introduce the slowly varying amplitude of the field, $E^{(+)}(t)=\mathcal{E}(t)\,e^{-i\omega_0 t}$; then
$$G^{(1)}(\tau)=\langle\mathcal{E}^{*}(t)\,\mathcal{E}(t+\tau)\rangle\,e^{-i\omega_0\tau},$$
and if we change the delay, the intensity will have fast oscillations with the frequency $\omega_0$ under a slowly varying envelope. Maximum values of the output intensity will be given by
$$I_{out}^{max}=\frac{1}{2}\left[\langle I\rangle+|G^{(1)}(\tau)|\right],$$
and the minimum values by
$$I_{out}^{min}=\frac{1}{2}\left[\langle I\rangle-|G^{(1)}(\tau)|\right].$$
Hence, the visibility, defined as
$$V=\frac{I_{out}^{max}-I_{out}^{min}}{I_{out}^{max}+I_{out}^{min}},$$
will be given by the CF modulus:
$$V=|g^{(1)}(\tau)|.$$
Note that in this definition it does not matter in what units we measure intensity, as the visibility is dimensionless. We see that at large delays the interference disappears. Accordingly, at large times the CF turns to zero, $g^{(1)}(\tau\to\infty)\to 0$. Note that at every time instance, the interference pattern (spatial, for example) actually exists but fluctuates in time. A stable interference pattern exists only at relatively small time delays $\tau$. This is the meaning of coherence: when two fields are coherent, they interfere (form a stable interference pattern with unity visibility). Otherwise, they are partially coherent or incoherent (zero visibility of the interference pattern).

Interference visibility and the amplitudes. Note that the visibility also depends on the ratio of the amplitudes of the two fields. However, one can show that this dependence is weak. Consider, for instance, two perfectly coherent fields with the amplitudes $E_{1,2}$, or intensities $I_{1,2}$. Constructive interference will give $I_{max}=I_1+I_2+2\sqrt{I_1 I_2}$; destructive interference will give $I_{min}=I_1+I_2-2\sqrt{I_1 I_2}$ [Fig. 2: constructive and destructive interference in $I_{out}$ versus the delay; the envelope decays on the scale of $\tau_{coh}$]. The visibility is then
$$V=\frac{2\sqrt{I_1 I_2}}{I_1+I_2}.$$
In other words, it is the ratio of the geometric mean to the arithmetic mean of the two intensities. For example, if $I_1=I_2/2$, $V=0.94$, pretty high! For $I_1=I_2/10$, $V=0.57$. So in interference experiments it is not so important to perfectly balance the two contributions.

Mach-Zehnder interferometer. Instead of the Michelson interferometer, one can use other two-beam interferometers. The most common one is the Mach-Zehnder (MZ) interferometer [Fig. 3: the MZ interferometer and its polarization version]. The bad thing about it is that scanning one of the mirrors (changing the path) also leads to a transverse displacement of the beam. The good thing is that it is also possible in a polarization version (shown in the same figure). It consists of two birefringent plates; the two paths are then formed by the ordinary and extraordinary beams, and their length difference is changed by tilting the plates.

Coherence time $\tau_{coh}$: by definition, it is the time delay at which the visibility decays twice. Strictly, it can be defined as the normalized integral of $|g^{(1)}(\tau)|^2$. Coherence length: $l_{coh}=c\,\tau_{coh}$, the displacement of the mirror in the interferometer at which the visibility decays twice. By virtue of the Wiener-Khinchin theorem, the width of the CF is inversely proportional to the width of the spectrum:
$$\tau_{coh}\approx\frac{2\pi}{\Delta\omega}.$$
Because $\omega=\frac{2\pi c}{\lambda}$, we have $\Delta\omega=\frac{2\pi c}{\lambda_0^2}\,\Delta\lambda$, where $\lambda_0$ is the mean wavelength, so that
$$\tau_{coh}=\frac{\lambda_0^2}{c\,\Delta\lambda},\qquad l_{coh}=\frac{\lambda_0^2}{\Delta\lambda}.$$
Examples:
1. A laser pointer: $\Delta\lambda\sim 10$ nm, $\lambda_0\sim 600$ nm, $l_{coh}\sim 20\ \mu$m.
2. A Ti:sapphire laser: the same.
3. A good CW laser with the bandwidth $\Delta f=100$ kHz generating at 532 nm:
$$\Delta\lambda=\frac{\lambda_0^2\,\Delta f}{c}=\frac{(532)^2\times10^{-18}\ \mathrm{m}^2\times10^{5}\ \mathrm{s}^{-1}}{3\times10^{8}\ \mathrm{m/s}}\sim 10^{-7}\ \mathrm{nm},\qquad l_{coh}=\frac{\lambda_0^2}{\Delta\lambda}\approx\frac{(532)^2\times10^{-18}\ \mathrm{m}^2}{4\times10^{-16}\ \mathrm{m}}\approx 700\ \mathrm{m}.$$
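As a quick sanity check of the laser-pointer example above (simple arithmetic on the numbers given there, using $\tau_{coh}=l_{coh}/c$):
$$\tau_{coh}=\frac{l_{coh}}{c}\sim\frac{2\times10^{-5}\ \mathrm{m}}{3\times10^{8}\ \mathrm{m/s}}\approx 7\times10^{-14}\ \mathrm{s}\approx 70\ \mathrm{fs},$$
i.e., a laser pointer stays coherent only over delays of the order of a hundred femtoseconds.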
2. Spatial coherence.

Young's experiment. To test whether fields at two spatially separated points are coherent, one should make them interfere. The natural idea is the Young interference experiment. When it was made for the first time (by Young, of course), two pinholes were used, and the sunlight. However, if the two pinholes were put directly into sunlight, no interference would be observed. It was necessary to have another pinhole preceding these two; otherwise the beams from the Sun hitting the two pinholes would not be coherent.

Measurement of the spatial CF. Indeed, let us gradually increase the separation between the pinholes and look at the interference pattern [Fig. 4: pinholes at points A and B, observation point C]. At very small separation the visibility will be unity, but as the separation increases it will decrease. The field at some point C is formed by the field propagating from point A and the one propagating from point B,
$$E^{(+)}(\vec r_C,t)=E^{(+)}(\vec r_A,t-t_A)+E^{(+)}(\vec r_B,t-t_B).$$
If point C is symmetric w.r.t. A and B, then $t_A=t_B\equiv t'$. The instantaneous intensity at point C will be
$$I(\vec r_C,t)=E^{(-)}(\vec r_C,t)\,E^{(+)}(\vec r_C,t)=I(\vec r_A,t-t')+I(\vec r_B,t-t')+\left\{E^{(-)}(\vec r_A,t-t')\,E^{(+)}(\vec r_B,t-t')+c.c.\right\}.$$
The averaged intensity at point C will be
$$\langle I(\vec r_C)\rangle=\langle I(\vec r_A)\rangle+\langle I(\vec r_B)\rangle+2\,\mathrm{Re}\,G^{(1)}(\vec r_A,\vec r_B;t-t',t-t').$$
If the field is stationary and homogeneous (in the wide sense), then
$$\langle I(\vec r_C)\rangle=2\langle I\rangle\left[1+\mathrm{Re}\,g^{(1)}(\vec r_A-\vec r_B,0)\right].$$
Again, the spatial CF $g^{(1)}(\vec r_A-\vec r_B)\equiv g^{(1)}(\vec r_A-\vec r_B,0)$ will have a fast oscillating part and a slowly varying amplitude: with $E^{(+)}(\vec r)=\mathcal{E}(\vec r)\,e^{i\vec k_0\cdot\vec r}$,
$$g^{(1)}(\vec r_A-\vec r_B)=\frac{\langle\mathcal{E}^{*}(\vec r_A)\,\mathcal{E}(\vec r_B)\rangle}{\langle I\rangle}\,e^{i\vec k_0\cdot(\vec r_B-\vec r_A)}.$$
So the visibility of the interference will again be given by the CF modulus:
$$V=|g^{(1)}(\vec r_A-\vec r_B,0)|.$$
If the points A, B are taken asymmetrically, the CF will also depend on the time delay: $V=|g^{(1)}(\vec r_A-\vec r_B,\tau)|$, $\tau=t_B-t_A$.

Coherence radius. So if we increase the distance between points A and B, the visibility decays as in Fig. 2. The distance at which the visibility decreases twice is called the coherence radius $\rho_{coh}$, or transverse coherence length. The value of the coherence radius is determined by the angle $\theta_d$ at which the source (with the diameter $a$, at the distance $z$) is seen from the plane of A, B [Fig. 5: source of size $a$ at distance $z$ from the plane containing A and B]:
$$\rho_{coh}\approx\frac{\lambda_0}{\theta_d},\qquad \theta_d=\frac{a}{z},\qquad \rho_{coh}=\frac{\lambda_0 z}{a}.$$
This relation has a simple explanation: the larger $a$, the broader the transverse wavevector spectrum of the light reaching points A, B:
$$\Delta k_{\perp}=\frac{2\pi}{\lambda_0}\,\frac{a}{z}.$$
(Remember that $a$ is much larger than the distance between points A and B.) And the broader the transverse wavevector spectrum, the narrower the spatial CF; this is the spatial analogue of the Wiener-Khinchin theorem. Roughly, we can write
$$\rho_{coh}\sim\frac{2\pi}{\Delta k_{\perp}}=\frac{\lambda_0 z}{a}.$$

The van Cittert-Zernike theorem. Rigorously, one should take the intensity distribution over the source (which is considered as flat) and take its Fourier transform, to get the spatial CF:
$$g^{(1)}(\vec r_A-\vec r_B,0)=\frac{\int_S d^2r'\;I(\vec r')\,e^{ik(\vec s_A-\vec s_B)\cdot\vec r'}}{\int_S d^2r'\;I(\vec r')},$$
where $\vec r'$ is the coordinate on the source surface, $S$ means the whole surface, and $\vec s_{A,B}$ are unit vectors from a point on the source surface towards the points $\vec r_{A,B}$ [Fig. 6: geometry of the unit vectors $\vec s_A$, $\vec s_B$ from a source point $\vec r'$ towards A and B].

Michelson's stellar interferometer. We see that from the measurement of the spatial CF one can determine the angular size of a light-emitting object. Using this principle, Michelson managed to measure the angular sizes of several bright stars. The interferometer is shown in Fig. 7. The difficulty here is that the measurement is sensitive to the phase; hence it is vulnerable to atmospheric turbulence. It works well only for points A, B not too much separated (by not more than 10 m). So the stars should be large enough (large $a$) or close enough (small $z$). Smaller angular sizes can be measured in the Hanbury Brown-Twiss stellar interferometer (Lecture 3).

3. Coherence volume. Number of modes.

Coherence volume. Now, after we have defined the coherence length and the coherence radius, we can imagine a part of space where radiation is coherent. It can be viewed [Fig. 8: a cylinder of length $l_{coh}$ and transverse size $\rho_{coh}$ along the propagation direction] as a rectangular cylinder with the length $l_{coh}$ and the transverse size $\rho_{coh}$. The volume of this cylinder is called the coherence volume,
$$V_{coh}=l_{coh}\,\rho_{coh}^2.$$
What is the coherence radius for sunlight? Coherence length? Coherence volume? The same, for a laser. So we can divide all space into 'elementary cells' as shown in Fig. 8 and assume that the radiation inside each cell is coherent while the radiation in different cells is incoherent. Such a cell can be called 'a mode' (modes do not necessarily have to be chosen like this, but this is an option).

Detection volume. One can introduce a detection volume, equal to the product of the detection time, the speed of light, and the detection area:
$$V_{det}=c\,T_{det}\,A_{det}.$$
The number of detected independent modes, or independent cells of space, will then be given by $m=V_{det}/V_{coh}$. This number should not be less than one; strictly, we should write
$$m=\begin{cases}V_{det}/V_{coh}, & V_{det}\geq V_{coh},\\ 1, & V_{det}<V_{coh}.\end{cases}$$
Separately, one can introduce the number of longitudinal modes and the number of transverse modes:
$$m_{\parallel}=\frac{T_{det}}{\tau_{coh}},\qquad m_{\perp}=\frac{A_{det}}{\rho_{coh}^2}.$$
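To make the mode counting concrete, here is a small worked example (the detector and coherence parameters are assumed for illustration; they are not from the lecture):
$$T_{det}=1\ \mu\mathrm{s},\ \tau_{coh}=1\ \mathrm{ps}\ \Rightarrow\ m_{\parallel}=\frac{10^{-6}\ \mathrm{s}}{10^{-12}\ \mathrm{s}}=10^{6};\qquad A_{det}=1\ \mathrm{mm}^2,\ \rho_{coh}=0.1\ \mathrm{mm}\ \Rightarrow\ m_{\perp}=\frac{1\ \mathrm{mm}^2}{0.01\ \mathrm{mm}^2}=10^{2};$$
$$m=m_{\parallel}\,m_{\perp}=10^{8}.$$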
4. Number of photons per mode (the degeneracy parameter).

An important value is the mean number of photons per coherence volume (per mode), defined by Mandel as the degeneracy parameter. It is obtained if the energy contained in the coherence volume is divided by the energy of a single photon. For instance, if a source has the mean intensity $\langle I\rangle$, then the energy flowing through the coherence area per unit time (the power) is $P=\langle I\rangle\,\rho_{coh}^2$. Then, the energy in the coherence volume is obtained just by multiplying this by the coherence time,
$$W_{coh}=\langle I\rangle\,\rho_{coh}^2\,\tau_{coh}=\langle I\rangle\,V_{coh}/c,$$
and the mean number of photons per mode is
$$N_{mode}=\frac{W_{coh}}{\hbar\omega}=\frac{\langle I\rangle\,V_{coh}}{\hbar\omega\,c}.$$
- It is called the degeneracy parameter because it shows the occupation number of a single 'cell'; while for fermions this occupation number cannot exceed 1, for photons it can take any value.
- It is this number of photons per mode that is given by the Planck formula for the blackbody radiation.
- In nonlinear optics, the interaction efficiency is given by $N_{mode}$. Hence the degeneracy parameter of a light field also determines its ability to interact with other light fields.
- The degeneracy factor is also important because it tells whether the photon structure of light is pronounced ($N_{mode}\ll 1$) or not ($N_{mode}\gg 1$).

Space/time or wavevector/frequency. Equivalently, one can define the coherence volume in the space given not by Cartesian coordinates and time but by wavevector and frequency.

5. Modes: plane monochromatic waves or something else? (Coherent-mode, or Schmidt-mode, representation.)

Consider a two-point spatial CF $G^{(1)}(\vec r_1,\vec r_2)\equiv G^{(1)}(\vec r_1,\vec r_2,0)$. It is not factorable in the general case. In the special case where it is factorable as
$$G^{(1)}(\vec r_1,\vec r_2)=f^{*}(\vec r_1)\,f(\vec r_2),$$
the normalized CF is unimodular:
$$|g^{(1)}(\vec r_1,\vec r_2)|=\frac{|G^{(1)}(\vec r_1,\vec r_2)|}{\sqrt{G^{(1)}(\vec r_1,\vec r_1)\,G^{(1)}(\vec r_2,\vec r_2)}}=\frac{|f^{*}(\vec r_1)\,f(\vec r_2)|}{|f(\vec r_1)|\,|f(\vec r_2)|}=1.$$
It means that such a field is spatially coherent everywhere. But even in the general case the CF can be represented as a sum of such coherent terms. Indeed, according to the so-called Mercer's theorem [Mandel & Wolf], for any 'good' function of two variables (in our case, $G^{(1)}(\vec r_1,\vec r_2)=[G^{(1)}(\vec r_2,\vec r_1)]^{*}$) there exists the representation (called also the Mercer expansion or the Schmidt decomposition)
$$G^{(1)}(\vec r_1,\vec r_2)=\sum_n \lambda_n\,f_n^{*}(\vec r_1)\,f_n(\vec r_2),$$
where $f_n(\vec r)$, $\lambda_n$ are the eigenfunctions and eigenvalues of the integral equation (Fredholm equation)
$$\int_D d^2r_2\;G^{(1)}(\vec r_1,\vec r_2)\,f_n(\vec r_2)=\lambda_n\,f_n(\vec r_1).$$
The eigenvalues are real and positive, $\lambda_n\geq 0$, and the eigenfunctions are orthonormal,
$$\int_D d^2r\;f_n^{*}(\vec r)\,f_m(\vec r)=\delta_{nm}.$$
Then we see that the CF is represented as a series of factorable CFs,
$$G^{(1)}(\vec r_1,\vec r_2)=\sum_n \lambda_n\,G_n^{(1)}(\vec r_1,\vec r_2),\qquad G_n^{(1)}(\vec r_1,\vec r_2)=f_n^{*}(\vec r_1)\,f_n(\vec r_2).$$
Each of these factorable CFs is unimodular, as we have shown above. Hence, they represent coherent modes. And these coherent modes are not plane waves but should be found separately for each CF. For instance, in the case of a double-Gaussian CF (the Schell model),
$$G^{(1)}(\vec r_1,\vec r_2)\sim\exp\left\{-\frac{(\vec r_1+\vec r_2)^2}{2a^2}\right\}\,\exp\left\{-\frac{(\vec r_1-\vec r_2)^2}{2b^2}\right\}$$
(with some widths $a$, $b$), the coherent modes are given by Hermite-Gaussian functions. The same consideration is valid for the temporal CF.
Home task: Find the coherence volume for a laser pointer emitting into a bandwidth of 10 nm around 650 nm. The beam has a diameter of 3 mm and a diffraction divergence. (Why does it matter?) At what power will the laser pointer have a single photon in the coherence volume?

Books:
1. Mandel & Wolf, Optical Coherence and Quantum Optics, Sec. 4.2-4.4, 4.7.
2. Klyshko, Physical Foundations of Quantum Electronics, Sec. 7.2.3-7.2.6.
2473
https://ocw.mit.edu/courses/18-315-combinatorial-theory-introduction-to-graph-theory-extremal-and-enumerative-combinatorics-spring-2005/pages/readings/
18.315 | Spring 2005 | Graduate
Combinatorial Theory: Introduction to Graph Theory, Extremal and Enumerative Combinatorics
Instructor: Prof. Igor Pak (Mathematics)

Readings

The following textbooks are the main textbooks for the class:
- Stanley, R. P. Enumerative Combinatorics. Vol. I and II. Cambridge, UK: Cambridge University Press, 1999. ISBN: 0521553091 (hardback: vol. I); 0521663512 (paperback: vol. I); 0521560691 (hardback: vol. II).
- Bollobás, B. Modern Graph Theory (Graduate Texts in Mathematics). New York, NY: Springer-Verlag, 1998. ISBN: 0387984917.
- ———. Extremal Graph Theory. New York, NY: Dover, 2004. ISBN: 0486435962.
- Jukna, S. Extremal Combinatorics. New York, NY: Springer-Verlag, Berlin, 2000. ISBN: 3540663134.

The following textbooks can be used as supplemental reading:
- Diestel, R. Graph Theory (Graduate Texts in Mathematics). New York, NY: Springer-Verlag, 1997. ISBN: 3540261834. (Available electronically on the Graph Theory Web site by R. Diestel.)
- Matousek, J. Lectures on Discrete Geometry (Graduate Texts in Mathematics). New York, NY: Springer-Verlag, 2002. ISBN: 0387953736.

The following readings specifically deal with problem 6 from Problem Set 1:
- The original paper: Burago, Ju. D., and V. A. Zalgaller. "Polyhedral embedding of a net." Vestnik Leningrad Univ 15 (1960): 66-80. (In Russian)
- A recent relatively simple solution: Maehara, H. "Acute triangulations of polygons." European J Combin 23 (2002): 45-55.
- Interestingly enough, if one allows right triangles there exists plentiful literature:
  - Baker, B. S., E. Grosse, and C. S. Rafferty. "Nonobtuse triangulation of polygons." Discrete Comput Geom 3 (1988): 147-168.
  - Bern, M., and D. Eppstein. "Polynomial-size nonobtuse triangulation of polygons." Internat J Comput Geom Appl 2 (1992): 241-255; Errata 449-450.
  - Bern, M., S. Mitchell, and J. Ruppert. "Linear-size nonobtuse triangulation of polygons." Discrete Comput Geom 14 (1995): 411-428.

The following table lists the readings assigned for each lecture.

| Lec # | Topics | Readings |
| --- | --- | --- |
| 1 | Course Introduction; Ramsey Theorem | Bollobás, B. Modern Graph Theory (Graduate Texts in Mathematics). New York, NY: Springer-Verlag, 1998, pp. 182-189. ISBN: 0387984917. |
| 2 | Additive Number Theory; Theorems of Schur and Van der Waerden | Jukna, S. Extremal Combinatorics. New York, NY: Springer-Verlag, Berlin, 2000, pp. 326. ISBN: 3540663134. Khinchin, A. Y. Three Pearls of Number Theory. Mineola, NY: Dover Publications, Inc., 1998, section 1. ISBN: 0486400263. (Reprint of the 1952 translation.) |
| 3 | Lower Bound in Schur's Theorem; Erdös-Szekeres Theorem (Two Proofs); 2-Colorability of Multigraphs; Intersection Conditions | Jukna, S. Extremal Combinatorics. New York, NY: Springer-Verlag, Berlin, 2000, pp. 230, 327, and 65-66. ISBN: 3540663134. |
| 4 | More on Colorings; Greedy Algorithm; Height Functions Argument for 3-Colorings of a Rectangle; Erdös Theorem | Jukna, S. Extremal Combinatorics. New York, NY: Springer-Verlag, Berlin, 2000, pp. 66-67. ISBN: 3540663134. Luby, M., D. Randall, and A. Sinclair. "Markov Chain Algorithms for Planar Lattice Structures." FOCS 1995. (Paper) |
| 5 | More on Colorings (cont.); Erdös-Lovász Theorem; Brooks Theorem | Jukna, S. Extremal Combinatorics. New York, NY: Springer-Verlag, Berlin, 2000, pp. 67. ISBN: 3540663134. Bollobás, B. Modern Graph Theory (Graduate Texts in Mathematics). New York, NY: Springer-Verlag, 1998, pp. 145-149. ISBN: 0387984917. |
| 6 | 5-Color Theorem; Vizing's Theorem | Bollobás, B. Modern Graph Theory (Graduate Texts in Mathematics). New York, NY: Springer-Verlag, 1998, pp. 146-154. ISBN: 0387984917. ———. Extremal Graph Theory. New York, NY: Dover, 2004, pp. 221-234. ISBN: 0486435962. |
| 7 | Edge Coloring of Bipartite Graphs; Heawood Formula | Bollobás, B. Modern Graph Theory (Graduate Texts in Mathematics). New York, NY: Springer-Verlag, 1998, pp. 154-161. ISBN: 0387984917. ———. Extremal Graph Theory. New York, NY: Dover, 2004, pp. 243-254. ISBN: 0486435962. |
| 8 | Glauber Dynamics; The Diameter; Explicit Calculations; Bounds on Chromatic Number via the Number of Edges, and via the Independence Number | |
| 9 | Chromatic Polynomial; NBC Theorem | |
| 10 | Acyclic Orientations; Stanley's Theorem; Two Definitions of the Tutte Polynomial | Bollobás, B. Modern Graph Theory (Graduate Texts in Mathematics). New York, NY: Springer-Verlag, 1998, pp. 335-339. ISBN: 0387984917. |
| 11 | More on Tutte Polynomial; Special Values; External and Internal Activities; Tutte's Theorem | Bollobás, B. Modern Graph Theory (Graduate Texts in Mathematics). New York, NY: Springer-Verlag, 1998, pp. 345-354. ISBN: 0387984917. |
| 12 | Tutte Polynomial for a Cycle; Gessel's Formula for Tutte Polynomial of a Complete Graph | Gessel, I. M. "Enumerative applications of a decomposition for graphs and digraphs." Discrete Math 139, no. 1-3 (1995): 257-271. (Paper) |
| 13 | Crapo's Bijection; Medial Graph and Two Types of Cuts; Introduction to Knot Theory; Reidemeister Moves | Bollobás, B. Modern Graph Theory (Graduate Texts in Mathematics). New York, NY: Springer-Verlag, 1998, pp. 358-363. ISBN: 0387984917. Korn, M., and I. Pak. Combinatorial evaluations of the Tutte polynomial. Preprint (2003), available at Research (Igor Pak Home Page). (Paper) |
| 14 | Kauffman Bracket and Jones Polynomial | Bollobás, B. Modern Graph Theory (Graduate Texts in Mathematics). New York, NY: Springer-Verlag, 1998, pp. 364-371. ISBN: 0387984917. |
| 15 | Linear Algebra Methods; Oddtown Theorem; Fisher's Inequality; 2-Distance Sets | Jukna, S. Extremal Combinatorics. New York, NY: Springer-Verlag, Berlin, 2000, section 14. ISBN: 3540663134. |
| 16 | Non-uniform Ray-Chaudhuri-Wilson Theorem; Frankl-Wilson Theorem | |
| 17 | Borsuk Conjecture; Kahn-Kalai Theorem | Aigner, M., and G. Ziegler. Proofs from THE BOOK. 2nd ed. New York, NY: Springer-Verlag, August 1998, pp. 83-88. ISBN: 3540636986. |
| 18 | Packing with Bipartite Graphs; Testing Matrix Multiplication | |
| 19 | Hamiltonicity, Basic Results; Tutte's Counterexample; Length of the Longest Path in a Planar Graph | Diestel, R. Graph Theory (Graduate Texts in Mathematics). New York, NY: Springer-Verlag, 1997, section 10.1. ISBN: 3540261834. (Available electronically on the Graph Theory Web site by R. Diestel.) |
| 20 | Grinberg's Formula; Lovász and Babai Conjectures for Vertex-transitive Graphs; Dirac's Theorem | Bollobás, B. Extremal Graph Theory. New York, NY: Dover, 2004, pp. 143-146. ISBN: 0486435962. |
| 21 | Tutte's Theorem; Every Cubic Graph Contains either no HC, or At Least Three; Examples of Hamiltonian Cycles in Cayley Graphs of Sn | |
| 22 | Hamiltonian Cayley Graphs of General Groups | Pak, I., and R. Radoicic. "Hamiltonian paths in Cayley graphs." Preprint (2002), available at Research (Igor Pak Home Page). (Paper) |
| 23 | Menger Theorem; Gallai-Milgram Theorem | Diestel, R. Graph Theory (Graduate Texts in Mathematics). New York, NY: Springer-Verlag, 1997, sections 2.5 and 3.3. ISBN: 3540261834. (Available electronically on the Graph Theory Web site by R. Diestel.) |
| 24 | Dilworth Theorem; Hall's Marriage Theorem; Erdös-Szekeres Theorem | Jukna, S. Extremal Combinatorics. New York, NY: Springer-Verlag, Berlin, 2000, pp. 38-39 and 97-100. ISBN: 3540663134. |
| 25 | Sperner Theorem; Two Proofs of Mantel Theorem; Graham-Kleitman Theorem | Jukna, S. Extremal Combinatorics. New York, NY: Springer-Verlag, Berlin, 2000, pp. 40-41 and 45-46. ISBN: 3540663134. |
| 26 | Swell Colorings; Ward-Szabo Theorem; Affine Planes | Jukna, S. Extremal Combinatorics. New York, NY: Springer-Verlag, Berlin, 2000, pp. 43-45 and 161-163. ISBN: 3540663134. |
| 27 | Turán's Theorem; Asymptotic Analogues | Bollobás, B. Modern Graph Theory (Graduate Texts in Mathematics). New York, NY: Springer-Verlag, 1998, pp. 108-111. ISBN: 0387984917. |
| 28 | Pattern Avoidance; The Case of S3 and Catalan Numbers; Stanley-Wilf Conjecture | |
| 29 | Permutation Patterns; Arratia Theorem; Furedi-Hajnal Conjecture | Arratia, R. "On the Stanley-Wilf conjecture for the number of permutations avoiding a given pattern." Electron J Combin 6, no. 1 (1999). (Paper) |
| 30 | Proof by Marcus and Tardos of the Stanley-Wilf Conjecture | Marcus, A., and G. Tardos. "Excluded permutation matrices and the Stanley-Wilf conjecture." J Combin Theory Ser A 107, no. 1 (2004): 153-160. |
| 31 | Non-intersecting Path Principle; Gessel-Viennot Determinants; Binet-Cauchy Identity | Stanley, R. P. Enumerative Combinatorics. Vol. I. Cambridge, UK: Cambridge University Press, 1999, section 2.7. ISBN: 0521553091 (hardback: vol. I); 0521663512 (paperback: vol. I). |
| 32 | Convex Polyomino; Narayana Numbers; MacMahon Formula | Stanley, R. P. Enumerative Combinatorics. Vol. II. Cambridge, UK: Cambridge University Press, 1999, pp. 378. ISBN: 0521560691 (hardback: vol. II). |
| 33 | Solid Partitions; MacMahon's Theorem; Hook-content Formula | Stanley, R. P. Enumerative Combinatorics. Vol. II. Cambridge, UK: Cambridge University Press, 1999, section 7. ISBN: 0521560691 (hardback: vol. II). |
| 34 | Hook Length Formula | Pak, I. "Hook Length Formula and Geometric Combinatorics." Séminaire Lotharingien de Combinatoire 46 (2001): article B46f. |
| 35 | Two Polytope Theorem | Pak, I. "Hook Length Formula and Geometric Combinatorics." Séminaire Lotharingien de Combinatoire 46 (2001): article B46f. |
| 36 | Connection to RSK; Special Cases | Pak, I. "Hook Length Formula and Geometric Combinatorics." Séminaire Lotharingien de Combinatoire 46 (2001): article B46f. |
| 37 | Duality; Number of Involutions in Sn | Pak, I. "Hook Length Formula and Geometric Combinatorics." Séminaire Lotharingien de Combinatoire 46 (2001): article B46f. |
| 38 | Direct Bijective Proof of the Hook Length Formula | Novelli, J. C., I. Pak, and A. V. Stoyanovsky. "A direct bijective proof of the hook-length formula." Discrete Mathematics and Theoretical Computer Science 1 (1997): 53-67. |
| 39 | Introduction to Tilings; Thurston's Theorem | Thurston, W. P. "Conway's tiling groups." Amer Math Monthly 97, no. 8 (1990): 757-773. |
2474
https://stats.stackexchange.com/questions/559659/why-is-the-median-less-sensitive-to-extreme-values-compared-to-the-mean
Why is the median less sensitive to extreme values compared to the mean?

[Asked Jan 8, 2022; modified 3 years, 8 months ago; viewed 6k times; question score: 9.]

Comments on the question:
- Glen_b (Jan 8, 2022): There's a number of measures of robustness which capture different aspects of sensitivity of statistics to observations. You might find the influence function and the empirical influence function useful concepts, and gross error sensitivity of particular interest as a single measure that captures the difference you're looking at here. The books by Huber and by Hampel et al. may be helpful.
- COOLSerdash (Jan 8, 2022): I think this answer and the plots therein are illuminating. See especially the section "Analyzing Sensitivity".
- Cagdas Ozgenc (Jan 8, 2022): The answer lies in the implicit error functions. For the mean you have a squared loss, which penalizes large values aggressively, compared to the median, which has an implicit absolute loss function.
- whuber ♦ (Jan 8, 2022): stats.stackexchange.com/questions/132829 provides one clear definition and illustrates the kind of analysis that can be done.
- Alexis (Jan 9, 2022): "The median is not directly calculated using the 'value' of any of the measurements, but only using the 'ranked position' of the measurements." This is not strictly true. The sample median is the value of one of the observations for odd N, and is some kind of mean of at least two values for even N. Thinking about which values those are relative to ranks relates to your question.

10 Answers

Answer by Aksakal (score 35):

If you write the sample mean $\bar x$ as a function of an outlier $O$, then its sensitivity to the value of the outlier is $d\bar x(O)/dO=1/n$, where $n$ is the sample size. The same derivative for the median is zero, because changing the value of an outlier usually doesn't do anything to the median.

Example to demonstrate the idea: 1, 4, 100. The sample mean is $\bar x=35$; if you replace 100 with 1000, you get $\bar x=335$. The median stays the same, 4. This is assuming that the outlier $O$ is not right in the middle of your sample; otherwise, you may get a bigger impact from an outlier on the median compared to the mean.
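A quick numerical check of this example (a minimal, self-contained sketch; it is not part of the original answer):

import java.util.Arrays;

public class SensitivityDemo {

    static double mean(double[] x) {
        return Arrays.stream(x).average().orElse(Double.NaN);
    }

    static double median(double[] x) {
        double[] s = x.clone();
        Arrays.sort(s);
        int n = s.length;
        return n % 2 == 1 ? s[n / 2] : (s[n / 2 - 1] + s[n / 2]) / 2.0;
    }

    public static void main(String[] args) {
        double[] a = {1, 4, 100};
        double[] b = {1, 4, 1000}; // the extreme value grows tenfold

        // The mean jumps from 35 to 335; the median stays at 4,
        // because the middle position in the sorted data is unaffected.
        System.out.println(mean(a) + " / " + median(a)); // 35.0 / 4.0
        System.out.println(mean(b) + " / " + median(b)); // 335.0 / 4.0
    }
}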
TL;DR: adding the outlier

You may be tempted to measure the impact of an outlier by adding it to the sample instead of replacing a valid observation with an outlier. It can be done, but you have to isolate the impact of the sample-size change. If you don't do it correctly, you may end up with pseudo-counterfactual examples, some of which were proposed in answers here. I'll show you how to do it correctly, then incorrectly.

The mean $\bar x_n$ changes as follows when you add an outlier $O$ to the sample of size $n$:
$$\bar x_{n+O}-\bar x_n=\frac{n\bar x_n+O}{n+1}-\bar x_n.$$
Now, let's isolate the part that is adding a new observation $x_{n+1}$ from the outlier value change from $x_{n+1}$ to $O$. We have to do it because, by definition, an outlier is an observation that is not from the same distribution as the rest of the sample $x_i$. Remember, an outlier is not merely a large observation, although that is how we often detect them. It is an observation that doesn't belong to the sample, and must be removed from it for this reason. Here's how we isolate the two steps:
$$\bar x_{n+O}-\bar x_n=\left(\frac{n\bar x_n+x_{n+1}}{n+1}-\bar x_n\right)+\frac{O-x_{n+1}}{n+1}=\left(\bar x_{n+1}-\bar x_n\right)+\frac{O-x_{n+1}}{n+1}.$$
Now we can see that the second term, $\frac{O-x_{n+1}}{n+1}$, represents the outlier's impact on the mean, and that the sensitivity to turning a legit observation $x_{n+1}$ into an outlier $O$ is of the order $1/(n+1)$, just like in the case where we were not adding the observation to the sample. Note that the first term, $\bar x_{n+1}-\bar x_n$, which represents an additional observation from the same population, is zero on average.

If we apply the same approach to the median $\tilde x_n$, we get the following equation:
$$\tilde x_{n+O}-\tilde x_n=\left(\tilde x_{n+1}-\tilde x_n\right)+0\times\left(O-x_{n+1}\right)=\tilde x_{n+1}-\tilde x_n.$$
In other words, there is no impact from replacing the legit observation $x_{n+1}$ with an outlier $O$; the only reason the median $\tilde x_n$ changes is the sampling of a new observation from the same distribution.

A counterfactual that isn't

The analysis in the previous section should give us an idea of how to construct a pseudo-counterfactual example: use a large $n\gg 1$, so that the outlier term in the mean expression, $\frac{O-x_{n+1}}{n+1}$, is smaller than the total change in the median. Here's one such example: "... our data is 5000 ones and 5000 hundreds, and we add an outlier of -100..."

Let's break this example into components as explained above. As the example implies, the values in the distribution are 1s and 100s, and -100 is an outlier. So we can plug in $x_{10001}=1$ and look at the mean:
$$\bar x_{10000+O}-\bar x_{10000}=\left(\frac{505001}{10001}-50.5\right)+\frac{-100-1}{10001}\approx-0.00495-0.01010\approx-0.01505.$$
The term $-0.01010$ in the expression above is the impact of the outlier value. It is small, as designed, but it is nonzero. The same for the median:
$$\tilde x_{10000+O}-\tilde x_{10000}=\tilde x_{10001}-\tilde x_{10000}=1-50.5=-49.5.$$
Voila! We manufactured a giant change in the median while the mean barely moved. However, if you followed my analysis, you can see the trick: the entire change in the median comes from adding a new observation from the same distribution, not from replacing the valid observation with an outlier, which contributes, as expected, zero.

A counterfactual that is

Now, what would be a real counterfactual? In all the previous analysis I assumed that the outlier $O$ stands out from the valid observations, with its magnitude outside the usual ranges. These are the outliers that we usually detect. What if its value were right in the middle? Let's modify the example above: "... our data is 5000 ones and 5000 hundreds, and we add an outlier of ..." 20!

Let's break this example into components as explained above. The values in the distribution are 1s and 100s, and 20 is an outlier. So we can plug in $x_{10001}=1$ and look at the mean:
$$\bar x_{10000+O}-\bar x_{10000}=\left(\frac{505001}{10001}-50.5\right)+\frac{20-1}{10001}\approx-0.00495+0.00190\approx-0.00305.$$
The term $0.00190$ in the expression above is the impact of the outlier value. It is small, as designed, but it is nonzero. The breakdown for the median is different now!
$$\tilde x_{10000+O}-\tilde x_{10000}=\left(\tilde x_{10001}-\tilde x_{10000}\right)+\left(20-1\right)=-49.5+19=-30.5.$$
In this example we have a nonzero, and rather huge, change in the median due to the outlier: 19, compared to the same term's impact on the mean of only 0.00190! This shows that if you have an outlier that is in the middle of your sample, you can get a bigger impact on the median than on the mean.

Conclusion

Note that there are myths and misconceptions in statistics that have a strong staying power. For instance, the notion that you need a sample of size 30 for the CLT to kick in: virtually nobody knows who came up with this rule of thumb and based on what kind of analysis.
So it is fun to entertain the idea that maybe this median/mean thing is one of those cases. However, it is not: the median is indeed usually more robust than the mean to the presence of outliers.

– Aksakal (answered Jan 8, 2022; edited Jan 10, 2022)

And yet, following on Owen Reynolds' logic, a counterexample: $X$: 1 repeated 4,999 times followed by 100 repeated 4,999 times, so $\bar{x}=50.5$ and $\tilde{x}=50.5$. But alter a single observation, so that $X$ is $-100$, then 1 repeated 4,999 times, then 100 repeated 4,998 times: now $\bar{x}\approx 50.48$, but $\tilde{x}=1$; ergo, in this example the median is more sensitive than the mean. – Alexis, Jan 9, 2022 at 7:19

@Alexis: Moving a non-outlier to be an outlier is not equivalent to making an outlier lie more ... out-ly. – Eric Towers, Jan 9, 2022 at 8:21

@Alexis That's an interesting point. I find it helpful to visualise the data as a curve. A mean or median is trying to simplify a complex curve to a single value (~ the height), then standard deviation gives a second dimension (~ the width), etc. However, your data is bimodal (it has two peaks), in which case a single number will struggle to adequately describe the shape. – Matt, Jan 9, 2022 at 9:49

@Alexis I'll add an explanation of why adding observations conflates the impact of an outlier. – Aksakal, Jan 9, 2022 at 13:57

@Matt "A mean or median is trying to simplify a complex curve to a single value (~ the height)" I'm not entirely sure what curve you're talking about, but shouldn't this be "the horizontal position of the highest point" rather than "the height"? – Stef, Jan 9, 2022 at 14:53

Answer (18 votes):

A reasonable way to quantify the "sensitivity" of the mean/median to an outlier is to use the absolute rate of change of the mean/median as we change that data point. To that end, consider a subsample $x_1,\dots,x_{n-1}$ and one more data point $x$ (the one we will vary). If we denote the sample mean of this data by $\bar{x}_n$ and the sample median by $\tilde{x}_n$, then we have:
$$\text{Sensitivity of mean}\equiv\left|\frac{d\bar{x}_n}{dx}\right|=\frac{1}{n},$$
$$\text{Sensitivity of median (}n\text{ odd)}\equiv\left|\frac{d\tilde{x}_n}{dx}\right|=\mathbb{I}\big(x=x_{((n+1)/2)}<x_{((n+3)/2)}\big),$$
$$\text{Sensitivity of median (}n\text{ even)}\equiv\left|\frac{d\tilde{x}_n}{dx}\right|=\frac{1}{2}\cdot\mathbb{I}\big(x_{(n/2)}\leqslant x\leqslant x_{(n/2+1)}<x_{(n/2+2)}\big).$$
In the trivial case where $n\leqslant 2$, the mean and median are identical and so have the same sensitivity. In the non-trivial case where $n>2$ they are distinct. In this latter case the median is more sensitive to the internal values that affect it (i.e., values within the intervals shown in the above indicator functions) and less sensitive to the external values that do not affect it (e.g., an "outlier").

– Ben (answered Jan 8, 2022; edited Jan 9, 2022)

Answer (7 votes):

Changing an outlier doesn't change the median: as long as you have at least three data points, making an extremum more extreme doesn't change the median, but it does change the mean by the amount the outlier changes, divided by $n$.
Adding an outlier, or moving a "normal" point to an extreme value, can only move the median to an adjacent central point. For instance, if you start with the data [1, 2, 3, 4, 5] and change the first observation to 100 to get [100, 2, 3, 4, 5], the median goes from 3 to 4. So not only is there a maximum amount by which a single outlier can affect the median (the mean, on the other hand, can be affected an unlimited amount), the effect is to move to an adjacently ranked point in the middle of the data, and data points tend to be more closely packed close to the median. For instance, take nine points at evenly spaced Gaussian percentiles, such as [−1.28, −0.84, −0.52, −0.25, 0, 0.25, 0.52, 0.84, 1.28]. The average separation between observations is 0.32, but changing one observation can change the median by at most 0.25. The median of a bimodal distribution, on the other hand, could be very sensitive to a change in one observation, if there are no observations between the modes.

– Acccumulation (answered Jan 9, 2022; edited Jan 11, 2022)

Answer (4 votes):

Extreme values influence the tails of a distribution and the variance of the distribution. This also influences the mean of a sample taken from the distribution. Extreme values do not influence the center portion of a distribution, so the median of a sample taken from the distribution is not influenced as much.

Example

Below is an illustration with a mixture of three normal distributions with different means. The mixture is 90% a standard normal distribution, making up the large portion in the middle, plus two 5% normal distributions with means at $+\mu$ and $-\mu$. The value of $\mu$ is varied, giving distributions that mostly change in the tails. The consequence of the different values of the extremes is that the distribution of the mean (right image) becomes a lot more variable.

For large sample sizes, the sampling variance of the mean relates to the variance of the population:
$$\operatorname{Var}[\operatorname{mean}(x_n)]\approx\frac{1}{n}\operatorname{Var}[x]$$
while the sampling variance of the median relates to the slope of the cumulative distribution (that is, the height of the density near the median):
$$\operatorname{Var}[\operatorname{median}(x_n)]\approx\frac{1}{n}\,\frac{1}{4f(\operatorname{median}(x))^2}$$

Example where the mean is less influenced by outliers

In general, large outliers influence the variance $\operatorname{Var}[x]$ a lot, but not so much the density at the median, $f(\operatorname{median}(x))$. But it is possible to construct an example where this is not the case. If we mix in some fraction $\phi$ of outliers whose variance is a factor $v$ larger than the variance of the distribution (and these outliers change neither the mean nor the median), then the new variances of the two statistics will be approximately
$$\operatorname{Var}[\operatorname{mean}(x_n)]\approx\frac{1}{n}(1-\phi+\phi v)\operatorname{Var}[x]$$
$$\operatorname{Var}[\operatorname{median}(x_n)]\approx\frac{1}{n}\,\frac{1}{4\big((1-\phi)f(\operatorname{median}(x))\big)^2}$$
So the relative changes (of the sampling variance of the statistics) are $\delta_\mu=(v-1)\phi$ for the mean and $\delta_m=\dfrac{2\phi-\phi^2}{(1-\phi)^2}$ for the median, and we have $\delta_m>\delta_\mu$ if
$$v<1+\frac{2-\phi}{(1-\phi)^2}$$
An example is a continuous uniform distribution with point masses at the ends as "outliers". The variance of a continuous uniform distribution is 1/3 of the variance of a Bernoulli distribution with equal spread. So $v=3$, the condition is fulfilled for any small $\phi>0$, and the median will be relatively more influenced than the mean; a simulation sketch follows below.
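A minimal simulation of this uniform-plus-endpoint-mixture construction (the mixing weight, sample size, and replication count below are illustrative assumptions, not values from the answer):

```python
import numpy as np

rng = np.random.default_rng(0)
n, reps = 101, 20000  # illustrative choices

def stat_vars(phi):
    """Monte Carlo variances of the sample mean and median for the
    uniform-bulk / endpoint-point-mass mixture described above."""
    means, medians = np.empty(reps), np.empty(reps)
    for r in range(reps):
        x = rng.uniform(-1, 1, size=n)            # bulk: uniform on [-1, 1]
        mask = rng.random(n) < phi                # "outliers": point masses at +/-1
        x[mask] = rng.choice([-1.0, 1.0], size=mask.sum())
        means[r], medians[r] = x.mean(), np.median(x)
    return means.var(), medians.var()

v0_mean, v0_med = stat_vars(0.0)
v1_mean, v1_med = stat_vars(0.05)
# Here v = 3 (the endpoint variance is 3x the uniform variance), so the
# median's variance should inflate relatively more (slightly, for small phi).
print(v1_mean / v0_mean, v1_med / v0_med)
```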
This is a contrived example in which the variance of the outliers is relatively small. This is done by using a continuous uniform distribution with point masses at the ends, so the outliers are very tight and relatively close to the mean of the distribution (relative to the variance of the distribution).

– Sextus Empiricus (answered Jan 9, 2022; edited Jan 9, 2022)

Answer (4 votes):

Mathematical description/proof/viewpoint for a special case

There is a short mathematical description/proof in the special case of: comparing the sensitivity in terms of the variance of the sample statistic (this can be generalized to the 3rd central moment, and possibly other cost functions); and symmetric distributions, in which case we can express the distributions of the median and mean in terms of integrals of the quantile function in a similar way.

Let's assume that the distribution is centered at 0 and that the sample size $n$ is odd (so that the median is easier to express via a beta distribution). Then, in terms of the quantile function $Q_X(p)$, we can express
$$\operatorname{Var}[\operatorname{mean}(X_n)]=\frac{1}{n}\int_0^1 1\cdot Q_X(p)^2\,dp$$
$$\operatorname{Var}[\operatorname{median}(X_n)]=\frac{1}{n}\int_0^1 f_n(p)\cdot Q_X(p)^2\,dp$$
where
$$f_n(p)=\frac{n}{B\!\left(\frac{n+1}{2},\frac{n+1}{2}\right)}\,p^{\frac{n-1}{2}}(1-p)^{\frac{n-1}{2}}$$
Below is a plot of $f_n(p)$ when $n=9$, compared to the constant value of 1 that is used to compute the variance of the sample mean. What the plot shows is that the contribution of the squared quantile function to the variance of the sample statistic is, for the median, larger in the center and lower at the edges.

Adding outliers versus changing outliers

When we change outliers, the quantile function $Q_X(p)$ changes only at the edges, where the factor $f_n(p)<1$, and so the mean is more influenced than the median. When we add outliers, the quantile function $Q_X(p)$ changes over the entire range, so the median might in some particular cases be more influenced than the mean.

Example: say we have a mixture of two normal distributions with different variances and mixture proportions. Then the change in the quantile function is of a different type when we change the variance than when we change the proportions. Below is an example of different quantile functions where we mixed two normal distributions. The black line is the quantile function for the mixture of 90% a distribution with $\sigma=1$ and $\phi=10\%$ a distribution with $\sigma_{\text{outlier}}=2$. On the left we changed the proportion of outliers, $\phi\in\{20\%,30\%,40\%\}$; on the right we changed the variance of the outliers, $\sigma_{\text{outlier}}\in\{4,8,16\}$. The quantile function of a mixture is a sum of two components in the horizontal direction. Whether we add more of one component or change the component will have different effects on the sum.

Generalizations

The conditions that the distribution is symmetric and centered at 0 can be lifted; it just makes the integrals more complex:
$$\operatorname{Var}[\operatorname{mean}(X_n)]=\frac{1}{n}\int_0^1 1\cdot\big(Q_X(p)-Q_X(p_{\text{mean}})\big)^2\,dp$$
$$\operatorname{Var}[\operatorname{median}(X_n)]=\frac{1}{n}\int_0^1 f_n(p)\cdot\big(Q_X(p)-Q_X(p_{\text{median}})\big)^2\,dp$$
Now these second factors in the integrands are different, $\big(Q_X(p)-Q_X(p_{\text{mean}})\big)^2$ versus $\big(Q_X(p)-Q_X(p_{\text{median}})\big)^2$, but we still have the constant factor 1 versus the factor $f_n(p)$, which goes toward zero at the edges. The condition that we look at the variance is more difficult to relax.
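As a numeric check of the median-variance identity above, here is a sketch for a standard normal with $n=9$ (the form of $f_n(p)$ is as reconstructed above; NumPy and SciPy are assumed):

```python
import numpy as np
from scipy import stats

n = 9
# Weight f_n(p): n times the Beta((n+1)/2, (n+1)/2) density.
p = np.linspace(1e-6, 1 - 1e-6, 200001)
f_n = n * stats.beta.pdf(p, (n + 1) / 2, (n + 1) / 2)
Q = stats.norm.ppf(p)                     # standard normal quantile function

dp = p[1] - p[0]
var_median_formula = np.sum(f_n * Q**2) * dp / n

# Direct Monte Carlo estimate of Var[median] for comparison.
rng = np.random.default_rng(0)
medians = np.median(rng.standard_normal((200000, n)), axis=1)
print(var_median_formula, medians.var())  # both close to 0.17
```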
I have made a new question that looks for simple analogous cost functions. But we could imagine, with some intuitive handwaving, that we could eventually express such a cost function as a sum of multiple terms,
$$\text{mean:}\qquad E[S(X_n)]=\sum_i g_i(n)\int_0^1 1\cdot h_{i,n}(Q_X)\,dp$$
$$\text{median:}\qquad E[S(X_n)]=\sum_i g_i(n)\int_0^1 f_n(p)\cdot h_{i,n}(Q_X)\,dp$$
where we cannot solve it with a single term, but in each of the terms we still have the factor $f_n(p)$, which goes toward zero at the edges.

– Sextus Empiricus (answered Jan 10, 2022; edited Jan 10, 2022)

Answer (3 votes):

An outlier is not precisely defined; a point can be more or less of an outlier. You might say "outlier" is a fuzzy set where membership depends on the distance $d$ to the pre-existing average. Call such a point a $d$-outlier. The key difference between mean and median is that the effect on the mean of introducing a $d$-outlier depends on $d$, but the effect on the median does not. Using big-O notation, the effect on the mean is $O(d)$, and the effect on the median is $O(1)$. By the way, "the average weight of a blue whale and 100 squirrels will be closer to the blue whale's weight" — this is not true.

– ludog (answered Jan 9, 2022)

Answer (3 votes):

A helpful concept when considering the sensitivity/robustness of the mean vs. the median (or other estimators in general) is the breakdown point. This is the proportion of (arbitrarily wrong) outliers required for the estimate to become arbitrarily wrong itself. Using this definition of "robustness", it is easy to see how the median is less sensitive: at least HALF your samples have to be outliers for the median to break down (meaning it is maximally robust), while a SINGLE sample is enough for the mean to break down.

– radioflash (answered Jan 10, 2022)

Answer (3 votes):

I'm going to say no, there isn't a proof that the median is less sensitive than the mean, since it's not always true — at least not if you define "less sensitive" as a simple "always changes less under all conditions". I'm told there are various definitions of sensitivity, going along with rules for well-behaved data, for which this is true.

Say our data is 5000 ones and 5000 hundreds, and we add an outlier of −100 (or we change one of the hundreds to −100). The median jumps by 50 while the mean barely changes. That seems like very fake data. So say our data is only multiples of 10, with lots of duplicates; it could even be a proper bell curve. Then it's possible to choose outliers which consistently change the mean by a small amount (much less than 10), while sometimes changing the median by 10. Or simply changing a value at the median to be an appropriate outlier will do the same.

Or we can abuse the notion of outlier without needing to create artificial peaks. Take the 100 values 1, 2, ..., 100: the mean and median are both 50.5. Then add an "outlier" of −0.1: the median shifts by exactly 0.5 down to 50, while the mean ($5049.9/101\approx 49.999$) drops by just over 0.5.
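A quick numeric check of this 1, 2, ..., 100 example (a minimal sketch assuming NumPy):

```python
import numpy as np

x = np.arange(1, 101, dtype=float)   # 1, 2, ..., 100: mean = median = 50.5
y = np.append(x, -0.1)               # add the barely-outlying "outlier"

# Median moves from 50.5 to 50.0 (a drop of exactly 0.5);
# mean moves from 50.5 to 5049.9/101 ~ 49.999 (a drop of just over 0.5).
print(np.mean(y), np.median(y))
```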
Of course, we already have the concept of "fences" if we want to exclude these barely outlying outliers. It feels as if we're left claiming that the rule is always true for sufficiently "dense" data, where the gap between all consecutive values is below some ratio based on the number of data points, and with a sufficiently strong definition of outlier.

– Owen Reynolds (answered Jan 8, 2022; edited Jan 10, 2022)

Make the outlier $-\infty$: the mean would go to $-\infty$, while the median would drop "only" by 100. – Tim, Jan 8, 2022 at 17:17

"Less sensitive" depends on your definition of "sensitive" and how you quantify it. In the literature on robust statistics, there are plenty of useful definitions for which the median is demonstrably "less sensitive" than the mean. When your answer goes counter to such literature, it's important to be very clear about how you define your terms, so that people are not confused into believing you are contradicting what is well known. – whuber, Jan 8, 2022 at 19:23

This specially constructed example is not a good counterfactual because it intertwines the impact of an outlier with increasing the sample size. The big change in the median here is really caused by the latter. Consider adding two 1s. – Aksakal, Jan 8, 2022 at 20:25

@Aksakal The 1st example would also work if a 100 changed to a −100. Likewise, in the 2nd, a number at the median could shift by 10. I felt adding a new value was simpler and made the point just as well. – Owen Reynolds, Jan 8, 2022 at 20:39

(I upvoted for the opening sentence "I'm going to say no, there isn't a proof the median is less sensitive than the mean since it's not always true. At least not if you define 'less sensitive' as a simple 'always changes less under all conditions'.", and because you recognized your own counterexample "seems like very fake data.") – Stef, Jan 9, 2022 at 14:59

Answer (2 votes):

Actually, there is a large class of distributions for which the statement can be wrong! Background for my colleagues, per Wikipedia on multimodal distributions:

"Bimodal distributions have the peculiar property that – unlike the unimodal distributions – the mean may be a more robust sample estimator than the median. This is clearly the case when the distribution is U-shaped, like the arcsine distribution. It may not be true when the distribution has one or more long tails."

Here is another educational reference (from Douglas College), which is certainly accurate for large-data scenarios:

"In symmetrical, unimodal datasets, the mean is the most accurate measure of central tendency. For asymmetrical (skewed), unimodal datasets, the median is likely to be more accurate. For bimodal distributions, the only measure that can capture central tendency accurately is the mode."

So, evidently, in the case of such distributions, the statement is incorrect (it lacks a restriction to the class of unimodal distributions).
– AJKOER (answered Jan 10, 2022)

Answer (−1 votes):

Ironically, you are asking about a generalized truth (i.e., normally true but not always) and wondering about a proof for it. If you want a reason why outliers TYPICALLY affect the mean more than the median, just run a few examples; the light bulb will turn on after that.

Step 1: Take ANY random sample of 10 real numbers.
Step 2: Identify the value with the greatest absolute value.
Step 3: Add an eleventh item to your sample set and assign it a positive value 1000 times the magnitude of the absolute value identified in Step 2.
Step 4: Add a twelfth item and assign it a negative value 1000 times the magnitude of the absolute value identified in Step 2.
Step 5: Calculate the mean and median of the new data set and compare them to the initial mean and median. Which one changed more, the mean or the median?
Step 6: Repeat the exercise starting with Step 1, but with different values for the initial ten-item set. Again, did the median or mean change more?

No matter what ten values you choose for your initial data set, the median will not change AT ALL in this exercise! You can use a similar approach for item removal or item replacement, for which the median does not even change one bit. Clearly, changing the outliers is much more likely to change the mean than the median. Others with more rigorous proofs might satisfy your urge for rigor, but the question relates to generalities while allowing for exceptions, so you really don't need all that rigor. There are exceptions to the rule, so why depend on rigorous proofs when the end result is, "Well, 'typically' this rule works, but not always..."? The example provided here is simple and easy for even a novice to process.

– Bruzote (answered Jan 10, 2022)
2475
https://ximera.osu.edu/ode/main/drainingTank/drainingTank
Tank Draining - Ximera
Tank Draining

An experiment involving a draining tank.

This activity is intended to illustrate how the modeling process with differential equations is used to solve a practical problem. Beginning with physics principles like conservation of mass and energy and a few simplifying assumptions, a differential equation is derived to describe the draining of water from a container.
After solving the differential equation, students can predict the time necessary to drain the container and then check this prediction with a simple experiment using readily available materials.

Overview of the Model

Consider an open cylindrical tank that is filled to a height $h_0$ with water or some other freely flowing liquid. The cross-sectional area of the tank is a constant value $A$, and a small circular hole near the bottom has a much smaller area $a$. When the water is allowed to flow from the hole, the tank will eventually drain until the water level reaches the hole. Our goal is to predict how long this draining process will take. We will try to measure the draining time $t_f$, defined as the elapsed time from when the water is allowed to flow out until the water level reaches the top of the hole.

Before working through the derivation below, make a prediction about what parameters impact the draining time. Make a list on paper of every variable upon which the draining time will depend. Examine your list. Did you include parameters like air pressure, the density of the fluid, or the shape of the small hole? Did you include $A$, $a$, and $h_0$? If so, what would an increase in these parameters do to the draining time? As the water drains, will the flow rate remain constant? Now that you have written down your predictions, let's derive a model for this process. When the model is derived, return to your list to check it.

Before developing a full model, reflect on your physical experience and/or intuition with draining containers to answer the following question: which of the following graphs best predicts how the height of water in this tank will change with time?

The first option is incorrect. This plot indicates that the height of water is changing most rapidly in the middle of the draining process and slowly at the beginning and end. This is not the observed behavior of the height of water in a draining tank.

The second option is correct. This plot correctly indicates the observed behavior. Notice that the height of water in the tank changes most rapidly at the beginning. This is because when the water level is highest, the flow rate out is greatest. The flow rate out diminishes as the height of water is reduced.

The third option is incorrect. This plot indicates that the water level will decrease at a constant rate throughout the draining process. This is not the observed behavior, because the flow rate of water actually depends on the height of water in the tank.

The fourth option is incorrect. This plot suggests that the height of water will change slowly at the beginning and more rapidly at the end. This is not the observed behavior of such a system.

Model Derivation

As with many modeling problems leading to differential equations, it is helpful to begin with a generic balance equation:
$$\{\text{rate in}\}-\{\text{rate out}\}+\{\text{rate of generation}\}-\{\text{rate of destruction}\}=\{\text{rate of accumulation}\}$$
While this principle can be applied to many different quantities for a defined system, we can immediately apply it to the mass of the water in the tank. This equation can be called a rate equation because each of the terms refers to a rate (rate of flow in, rate of destruction, rate of accumulation, etc.). Can you see which of the terms in this generic equation can be crossed off? From physics, we recall that mass, just like energy and momentum, is a conserved quantity under normal circumstances: it cannot be generated or destroyed. Furthermore, we note that water only flows out of, not into, the tank during the draining process.
This leaves us with:
$$-\{\text{rate of mass flow out}\}=\{\text{rate of mass accumulation}\}$$
We also recognize that the density of water (mass per unit volume) is constant. Thus, since it is easier in our experiment to describe changes in volume and the rate of flow of volume, we can instead write:
$$-\{\text{rate of volume flow out}\}=\{\text{rate of volume accumulation}\}$$
We can now incorporate both of the areas in the diagram above. First, we recognize that the rate of volume flow out of the tank is equal to the velocity $v_2$ of the water through the small hole multiplied by the area $a$. Second, we take note of the relationship between the volume, the height, and the cross-sectional area of the cylindrical tank: the rate of accumulation or depletion of volume is equal to the rate of change in height multiplied by the cross-sectional area. Our balance equation now becomes:
$$-v_2\,a=A\,\frac{dh}{dt}$$
Observe that the units of this equation are still a rate of change of volume (length cubed per time). It is helpful that we now have the time rate of change of height in our equation. We can and will estimate both of the areas; yet we do not yet know the velocity of the water coming out of the small hole.

Our derivation so far relied on the principle of conservation of mass. To find the velocity out, we will employ the principle of conservation of energy. For this system, the principle of conservation of energy leads us to Bernoulli's Equation, an important relationship between the pressure, velocity, and height of a flowing fluid:
$$P+\tfrac{1}{2}\rho v^2+\rho g z=\text{constant}$$
Here, $P$ is the pressure, $\rho$ is the density, and $v$ is the velocity of the fluid. The gravitational constant $g$ and the height $z$ of the fluid, relative to some reference point, also appear. The units of each term in this equation are pressure, which is force per unit area. However, it is also helpful to realize that this is also energy per volume. Can you see this if we note that energy is force times distance and volume is area times another distance? Bernoulli's equation shows us how energy, though conserved overall, can be transferred between different categories: a fluid may have energy due to its pressure, due to its velocity (kinetic energy), and due to its height (potential energy). All three ways of having energy are included in this equation on a per-volume basis.

Now, considering two points in the system, we can use this relationship to specify the velocity of water flowing out of the small hole. Point 1 is at the very top of the water in the tank. Point 2 is on the water as it leaves the small hole. Bernoulli's equation is applied to these two points:
$$P_1+\tfrac{1}{2}\rho v_1^2+\rho g z_1=P_2+\tfrac{1}{2}\rho v_2^2+\rho g z_2$$
Since both points are open to the atmosphere, they are at almost exactly the same pressure; the small difference in height does not produce a very different air pressure. For this reason, $P_1$ and $P_2$ can be crossed off together. Since the density of water does not change at all, it can be cancelled from all of the remaining terms. Rearranging gives the following:
$$v_2^2=v_1^2+2g\,(z_1-z_2)$$
Further simplification is possible if we neglect $v_1^2$. This is defensible because $A$ is much larger than $a$: the cross-sectional area of the tank is, in most cases, significantly larger than the area of the small hole, so the square of $v_1$ must be smaller, relatively speaking, than the square of $v_2$. Water will be moving much more quickly out of the small hole than the top surface of the water is moving down. We can replace the difference in heights between the two points, $z_1-z_2$, with $h$, the height of water above the small hole at any point in time. This can now be written as:
$$v_2=\sqrt{2gh}$$
It is evident that as the tank drains, the velocity of the water draining out will decrease toward zero, since the height of the water is decreasing toward zero.
When we conduct the experiment, we expect that the fastest stream of water will be seen at the very beginning. Having solved for the velocity at Point 2, which is the "velocity out" in the simplified balance equation above, we can now put everything together:
$$A\,\frac{dh}{dt}=-a\sqrt{2gh}$$
Since $a$, $A$, and $g$ are all constant in our model of the cylindrical tank, we can lump all the constants together as $k=\dfrac{a}{A}\sqrt{2g}$ and write:
$$\frac{dh}{dt}=-k\sqrt{h}$$
It is instructive to check the units of this differential equation, which are length per time since it gives us the rate of change of height. Note that the units of the constant $k$ are $\text{length}^{1/2}/\text{time}$. We have now derived a differential equation for the height of the fluid in the tank by using principles from physics and some appropriate simplifications. Separation and integration lead us to a solution for the water height as a function of time:
$$2\sqrt{h}=2\sqrt{h_0}-kt$$
Here we specify the constant of integration in terms of the initial height $h_0$ at $t=0$. Rearrangement gives the solution of our differential equation:
$$h(t)=\left(\sqrt{h_0}-\frac{k}{2}\,t\right)^2$$
From here, we can determine the time necessary for the tank to drain, because this is when $h=0$. If we substitute for the constant $k$, we find that the final time is
$$t_f=\frac{2\sqrt{h_0}}{k}=\frac{A}{a}\sqrt{\frac{2h_0}{g}}$$
Note that, according to our assumptions in this model, no other factors will impact the draining time: not the air pressure, the density of the fluid, or the shape of the drain hole. In fact, the drain hole and the cross section of the tank could be circular, square, or any other shape; we only require that the cross-sectional area of the tank remain constant. For instance, this model would correctly predict the time to drain a cube-shaped container.

One other interesting aspect of the mathematics here is evident when one studies the solution to the differential equation, which is parabolic in form. Notice that if the time exceeds the calculated draining time, the solution predicts that the height of the water would again increase. This is an aphysical (not real) prediction, because once the tank drains completely, the height of the fluid will stay at exactly zero. Examine the differential equation and note that $h=0$ is a stable (equilibrium) solution.

Before conducting an experiment to check the accuracy of this model, let's examine your predictions of which parameters impact the draining time. Consider a tank with cross-sectional area $A$ and hole area $a$, with an initial height $h_0$. What will happen to the draining time in each of the following cases?

If $A$ is doubled, the draining time will... (shorten by some amount that cannot be specified / shorten by half / remain unchanged / lengthen to twice its initial value / lengthen by some amount that cannot be specified / change in some other way)

In the equation for $t_f$ above, the draining time is directly proportional to the cross-sectional area of the tank, $A$, so doubling $A$ doubles the draining time.

If $h_0$ is doubled, the draining time will... (shorten by some amount that cannot be specified / shorten by half / remain unchanged / lengthen to twice its initial value / lengthen by some amount that cannot be specified / change in some other way)

While a larger initial height will cause the draining time to increase, it is interesting to note that the dependence is not directly proportional in the way it was with the cross-sectional area $A$. Doubling the initial height causes the draining time to grow by a factor of $\sqrt{2}$, i.e., to become just over 41% larger. Note that the initial height sits inside the square root in the equation for $t_f$ above.
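The prediction step is easy to script. Here is a minimal sketch in Python; the measurements below are placeholders, not the values used in the experiment in the next section:

```python
import math

g = 9.8                        # m/s^2
h0 = 0.20                      # initial water height above the hole, m (placeholder)
D_tank, d_hole = 0.14, 0.008   # container and hole diameters, m (placeholders)

A = math.pi * (D_tank / 2) ** 2   # tank cross-sectional area
a = math.pi * (d_hole / 2) ** 2   # hole area

k = (a / A) * math.sqrt(2 * g)          # lumped constant, units m^(1/2)/s
t_f = (A / a) * math.sqrt(2 * h0 / g)   # predicted draining time, s

print(f"A/a = {A/a:.0f}, k = {k:.5f}, t_f = {t_f:.0f} s")
```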
Tank Draining Experiment

To conduct your own experiment, you should assemble the following near a sink or an outside location:

(a) A plastic bottle or other container that has a constant cross section. Most containers, such as a two-liter soda bottle, would work. Ideally, the bottle will be at least partially transparent so you can see the water level. The top should be open to the atmosphere to prevent a vacuum. You could even choose to cut the top of the bottle off with scissors to make measuring easier, although this is not required. The example demonstrated here is done with the plastic vinegar bottle shown at right. Note that we will only allow the water to drain through the region with a constant cross section (from just above the label to the hole punctured just below the label).

(b) A pushpin that can be used to poke a small hole for the drain.

(c) A pencil or pen that can be used to widen the drain hole.

(d) A ruler with fine gradations, preferably with metric units such as centimeters.

(e) A stopwatch, clock, or phone to record the drain time.

Use the pushpin to start the hole, then widen it with the pencil or pen. Try to make the hole as close to circular as possible. The diameter of a pen is an appropriate size for a first experiment, but you could do more experiments after incrementally widening the hole. Mark the place on the bottle for the initial water level. Plan to drain only the region with a constant cross-sectional area. (Our simple model was not derived for an area $A$ that depends on the height.) Measure the initial height $h_0$ from your water mark down to the top of the small hole. Also make your best estimate of the diameter of the container and of the small hole in order to calculate the areas $A$ and $a$. If using a container with something other than a circular cross section, calculate the area according to that geometry. Use all of these parameters to estimate the time to drain, $t_f$. It is recommended that you convert all parameters to a common unit such as meters and use $g = 9.8\ \text{m/s}^2$.

As you fill the bottle to the line, you could leave the pen stuck in the bottle or keep your finger over the hole. Make your best estimate of the draining time, and stop the timer the moment the water level reaches the top of the hole. As the tank drains, recall our prediction that the velocity of the stream would be largest at the beginning; draining slows down greatly as the water height diminishes.

In this example, the initial height was measured from the water mark down to the hole, and the hole and container diameters were approximated with a ruler, giving the areas $a$ and $A$. Note that $A$ is over three hundred times larger than $a$, supporting our previous assumption that $A$ is much larger than $a$. The lumped constant $k$ and the predicted draining time then follow from the formulas above; see the predicted trajectory of the water level vs. time in the plot below. When the experiment was carried out, the tank took slightly longer to drain than predicted. This is not bad. Is your estimate also slightly shorter than the actual draining time?

Sources of Error

It is instructive to consider what sources of error may have been most important in our derivation and measurements. List the assumptions you believe may have been most dubious. Are all of your measurements accurate? Below, we will examine each major assumption and also estimate possible measurement errors in our analysis.
When possible, we can predict whether a flaw in our model would cause an overestimate or an underestimate of the draining time.

(a) We assumed that the liquid water was freely flowing; specifically, we assumed that the viscosity of our fluid was negligible. One can imagine the importance of viscosity for less freely flowing substances like honey or molasses. Furthermore, it is possible that rough edges near our small drain hole, crudely poked with a pen or pencil, could have inhibited the flow of water, slightly slowing the draining process. A fluid's viscosity causes resistance to flow that, to a certain extent, lessens the overall conversion of potential to kinetic energy, because some of that energy goes into internal energy (essentially heating the fluid and its surroundings). In the case of water, it is likely that the velocity flowing out of the tank was overestimated by a small amount, likely a couple of percent. Thus, for this reason, the model used here would tend to underestimate the draining time. In our example above, accounting for this would have brought us a bit closer to the correct answer.

(b) Neglecting the $v_1^2$ term seemed reasonable during the derivation, and allowed us to simplify Bernoulli's Equation. Instead of neglecting this velocity of the top surface of the water, we could have chosen to relate it to the velocity of the water at the drain hole. Recall that our balance equation had led to the relationship
$$-v_2\,a=A\,\frac{dh}{dt}$$
We recognize that the "velocity out" is $v_2$, and that $dh/dt$ is (up to sign) the velocity $v_1$ of the top surface, since it is the rate of change of the height of the water. For that reason, $v_1=\dfrac{a}{A}\,v_2$. This enables us to write Bernoulli's equation as:
$$g h=\tfrac{1}{2}v_2^2-\tfrac{1}{2}v_1^2=\tfrac{1}{2}v_2^2\left(1-\frac{a^2}{A^2}\right)$$
In contrast to our previous simplified result $v_2=\sqrt{2gh}$, we now arrive at:
$$v_2=\sqrt{\frac{2gh}{1-a^2/A^2}}$$
Here, however, we realize that our previous simplification was more than warranted. As noted in the example above, the area $a$ is more than three hundred times smaller than $A$, which means that the denominator is very nearly 1, making the difference negligible. Other assumptions and errors in measurement likely dwarf this error.

(c) Measurement error may also have been significant. For instance, we measured the small drain hole with a ruler, approximating its diameter; given its small size, this is a difficult estimate to make with only a ruler. Run the calculation again to see that a modest error in this diameter estimate produces a substantial change in the predicted draining time. Other measurement errors are possible, including the other two lengths and the recorded time, but these are likely smaller than the error caused by the measurement of the small hole. This suggests that the accuracy of our modeling is highly dependent on the estimation of key distances, and measurement errors here could outweigh the effects of our assumptions regarding the physics of the model. Measurement with calipers, or the careful use of a drill bit to make the drain hole, may be warranted to improve the accuracy of the model.
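Since $t_f$ is proportional to $A/a$, and hence to $1/d_{\text{hole}}^2$ for a circular hole, the prediction is quite sensitive to the hole-diameter estimate. A small sketch with placeholder numbers (not the experiment's measurements):

```python
import math

def t_f(h0, D_tank, d_hole, g=9.8):
    """Predicted draining time t_f = (A/a) * sqrt(2*h0/g)."""
    ratio = (D_tank / d_hole) ** 2   # A/a for circular cross sections
    return ratio * math.sqrt(2 * h0 / g)

base = t_f(0.20, 0.14, 0.008)        # placeholder measurements, in meters
off = t_f(0.20, 0.14, 0.008 * 1.10)  # hole diameter overestimated by 10%

# t_f scales as 1/d^2, so a 10% overestimate of d lowers t_f by about 17%.
print(base, off, off / base - 1)
```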
2476
https://virtualnerd.com/graphing-equations/find-slope/slope-from-two-points
How Do You Find the Slope of a Line from Two Points? | Virtual Nerd

Note: Calculating the slope of a line from two given points? Use the slope formula! This tutorial will show you how!

Keywords: problem, slope, line, points, point, find slope, slope formula, two points

Background Tutorials: Finding Slopes

What's the Formula for Slope? When you're dealing with linear equations, you may be asked to find the slope of a line. That's when knowing the slope formula really comes in handy! Learn the formula to find the slope of a line by watching this tutorial.

What Does the Slope of a Line Mean? You can't learn about linear equations without learning about slope. The slope of a line is the steepness of the line. There are many ways to think about slope. Slope is the rise over the run, the change in y over the change in x, or the gradient of a line. Check out this tutorial to learn about slope!

Further Exploration: Defining Linear Equations

What's Point-Slope Form of a Linear Equation? When you're learning about linear equations, you're bound to run into the point-slope form of a line. This form is quite useful in creating an equation of a line if you're given the slope and a point on the line. Watch this tutorial, and learn about the point-slope form of a line!

What's Slope-Intercept Form of a Linear Equation? When you're learning about linear equations, you're bound to run into the slope-intercept form of a line. This form is quite useful in creating an equation of a line if you're given the slope and the y-intercept. Watch this tutorial, and learn about the slope-intercept form of a line!

Graphing Lines and Finding Points

How Do You Graph a Line If You're Given the Slope and a Single Point? Trying to graph a line from a given slope and a point? Think you need to find an equation first? Think again! In this tutorial, see how to use that given slope and point to graph the line.
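For reference, a worked instance of the slope formula the tutorial describes (the two points here are made up for illustration). For the points $(1, 2)$ and $(4, 8)$:
$$m=\frac{y_2-y_1}{x_2-x_1}=\frac{8-2}{4-1}=\frac{6}{3}=2$$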
2477
http://athenasc.com/prob-solved_2ndedition.pdf
Introduction to Probability, 2nd Edition: Problem Solutions (last updated 9/29/22)
© Dimitri P. Bertsekas and John N. Tsitsiklis
Massachusetts Institute of Technology
WWW site for book information and orders
Athena Scientific, Belmont, Massachusetts

C H A P T E R 1

Solution to Problem 1.1. We have A = {2, 4, 6}, B = {4, 5, 6}, so A ∪ B = {2, 4, 5, 6}, and
(A ∪ B)ᶜ = {1, 3}.
On the other hand,
Aᶜ ∩ Bᶜ = {1, 3, 5} ∩ {1, 2, 3} = {1, 3}.
Similarly, we have A ∩ B = {4, 6}, and
(A ∩ B)ᶜ = {1, 2, 3, 5}.
On the other hand,
Aᶜ ∪ Bᶜ = {1, 3, 5} ∪ {1, 2, 3} = {1, 2, 3, 5}.

Solution to Problem 1.2. (a) By using a Venn diagram it can be seen that for any sets S and T,
S = (S ∩ T) ∪ (S ∩ Tᶜ).
(Alternatively, argue that any x must belong to either T or to Tᶜ, so x belongs to S if and only if it belongs to S ∩ T or to S ∩ Tᶜ.) Apply this equality with S = Aᶜ and T = B, to obtain the first relation
Aᶜ = (Aᶜ ∩ B) ∪ (Aᶜ ∩ Bᶜ).
Interchange the roles of A and B to obtain the second relation.
(b) By De Morgan's law, we have (A ∩ B)ᶜ = Aᶜ ∪ Bᶜ, and by using the equalities of part (a), we obtain
(A ∩ B)ᶜ = (Aᶜ ∩ B) ∪ (Aᶜ ∩ Bᶜ) ∪ (A ∩ Bᶜ) ∪ (Aᶜ ∩ Bᶜ) = (Aᶜ ∩ B) ∪ (Aᶜ ∩ Bᶜ) ∪ (A ∩ Bᶜ).
(c) We have A = {1, 3, 5} and B = {1, 2, 3}, so A ∩ B = {1, 3}. Therefore,
(A ∩ B)ᶜ = {2, 4, 5, 6},
and
Aᶜ ∩ B = {2}, Aᶜ ∩ Bᶜ = {4, 6}, A ∩ Bᶜ = {5}.
Thus, the equality of part (b) is verified.

Solution to Problem 1.5. Let G and C be the events that the chosen student is a genius and a chocolate lover, respectively. We have P(G) = 0.6, P(C) = 0.7, and P(G ∩ C) = 0.4. We are interested in P(Gᶜ ∩ Cᶜ), which is obtained with the following calculation:
P(Gᶜ ∩ Cᶜ) = 1 − P(G ∪ C) = 1 − (P(G) + P(C) − P(G ∩ C)) = 1 − (0.6 + 0.7 − 0.4) = 0.1.

Solution to Problem 1.6. We first determine the probabilities of the six possible outcomes. Let a = P({1}) = P({3}) = P({5}) and b = P({2}) = P({4}) = P({6}). We are given that b = 2a. By the additivity and normalization axioms, 1 = 3a + 3b = 3a + 6a = 9a. Thus, a = 1/9, b = 2/9, and P({1, 2, 3}) = 4/9.

Solution to Problem 1.7. The outcome of this experiment can be any finite sequence of the form (a₁, a₂, ..., aₙ), where n is an arbitrary positive integer, a₁, a₂, ..., aₙ₋₁ belong to {1, 3}, and aₙ belongs to {2, 4}. In addition, there are possible outcomes in which an even number is never obtained. Such outcomes are infinite sequences (a₁, a₂, ...), with each element in the sequence belonging to {1, 3}. The sample space consists of all possible outcomes of the above two types.

Solution to Problem 1.8. Let pᵢ be the probability of winning against the opponent played in the ith turn. Then, you will win the tournament if you win against the 2nd player (probability p₂) and also win against at least one of the two other players (probability p₁ + (1 − p₁)p₃ = p₁ + p₃ − p₁p₃). Thus, the probability of winning the tournament is
p₂(p₁ + p₃ − p₁p₃).
The order (1, 2, 3) is optimal if and only if the above probability is no less than the probabilities corresponding to the two alternative orders, i.e.,
p₂(p₁ + p₃ − p₁p₃) ≥ p₁(p₂ + p₃ − p₂p₃),
p₂(p₁ + p₃ − p₁p₃) ≥ p₃(p₂ + p₁ − p₂p₁).
It can be seen that the first inequality above is equivalent to p₂ ≥ p₁, while the second inequality above is equivalent to p₂ ≥ p₃.

Solution to Problem 1.9. (a) Since Ω = ∪ᵢ₌₁ⁿ Sᵢ, we have
A = ∪ᵢ₌₁ⁿ (A ∩ Sᵢ),
while the sets A ∩ Sᵢ are disjoint. The result follows by using the additivity axiom.
(b) The events B ∩ Cᶜ, Bᶜ ∩ C, B ∩ C, and Bᶜ ∩ Cᶜ form a partition of Ω, so by part (a), we have
P(A) = P(A ∩ B ∩ Cᶜ) + P(A ∩ Bᶜ ∩ C) + P(A ∩ B ∩ C) + P(A ∩ Bᶜ ∩ Cᶜ).   (1)
The event $A \cap B$ can be written as the union of two disjoint events as follows:
\[ A \cap B = (A \cap B \cap C) \cup (A \cap B \cap C^c), \]
so that
\[ P(A \cap B) = P(A \cap B \cap C) + P(A \cap B \cap C^c). \tag{2} \]
Similarly,
\[ P(A \cap C) = P(A \cap B \cap C) + P(A \cap B^c \cap C). \tag{3} \]
Combining Eqs. (1)-(3), we obtain the desired result.

Solution to Problem 1.10. Since the events $A \cap B^c$ and $A^c \cap B$ are disjoint, we have, using the additivity axiom repeatedly,
\[ P\bigl((A \cap B^c) \cup (A^c \cap B)\bigr) = P(A \cap B^c) + P(A^c \cap B) = P(A) - P(A \cap B) + P(B) - P(A \cap B). \]

Solution to Problem 1.14. (a) Each possible outcome has probability 1/36. There are 6 possible outcomes that are doubles, so the probability of doubles is $6/36 = 1/6$.

(b) The conditioning event (sum is 4 or less) consists of the 6 outcomes
\[ \{(1, 1), (1, 2), (1, 3), (2, 1), (2, 2), (3, 1)\}, \]
2 of which are doubles, so the conditional probability of doubles is $2/6 = 1/3$.

(c) There are 11 possible outcomes with at least one 6, namely, $(6, 6)$, $(6, i)$, and $(i, 6)$, for $i = 1, 2, \ldots, 5$. Thus, the probability that at least one die is a 6 is $11/36$.

(d) There are 30 possible outcomes where the dice land on different numbers. Out of these, there are 10 outcomes in which at least one of the rolls is a 6. Thus, the desired conditional probability is $10/30 = 1/3$.

Solution to Problem 1.15. Let $A$ be the event that the first toss is a head and let $B$ be the event that the second toss is a head. We must compare the conditional probabilities $P(A \cap B \mid A)$ and $P(A \cap B \mid A \cup B)$. We have
\[ P(A \cap B \mid A) = \frac{P\bigl((A \cap B) \cap A\bigr)}{P(A)} = \frac{P(A \cap B)}{P(A)}, \]
and
\[ P(A \cap B \mid A \cup B) = \frac{P\bigl((A \cap B) \cap (A \cup B)\bigr)}{P(A \cup B)} = \frac{P(A \cap B)}{P(A \cup B)}. \]
Since $P(A \cup B) \ge P(A)$, the first conditional probability above is at least as large, so Alice is right, regardless of whether the coin is fair or not. In the case where the coin is fair, that is, if all four outcomes HH, HT, TH, TT are equally likely, we have
\[ \frac{P(A \cap B)}{P(A)} = \frac{1/4}{1/2} = \frac{1}{2}, \qquad \frac{P(A \cap B)}{P(A \cup B)} = \frac{1/4}{3/4} = \frac{1}{3}. \]
A generalization of Alice's reasoning is that if $A'$, $B'$, and $C'$ are events such that $B' \subset C'$ and $A' \cap B' = A' \cap C'$ (for example, if $A' \subset B' \subset C'$), then the event $A'$ is at least as likely given that $B'$ has occurred as given that $C'$ has occurred. Alice's reasoning corresponds to the special case where $A' = A \cap B$, $B' = A$, and $C' = A \cup B$.

Solution to Problem 1.16. In this problem, there is a tendency to reason that since the opposite face is either heads or tails, the desired probability is 1/2. This is, however, wrong, because given that heads came up, it is more likely that the two-headed coin was chosen. The correct reasoning is to calculate the conditional probability
\[ p = P(\text{two-headed coin was chosen} \mid \text{heads came up}) = \frac{P(\text{two-headed coin was chosen and heads came up})}{P(\text{heads came up})}. \]
We have
\[ P(\text{two-headed coin was chosen and heads came up}) = \frac{1}{3}, \qquad P(\text{heads came up}) = \frac{1}{2}, \]
so by taking the ratio of the above two probabilities, we obtain $p = 2/3$. Thus, the probability that the opposite face is tails is $1 - p = 1/3$.

Solution to Problem 1.17. Let $A$ be the event that the batch will be accepted. Then $A = A_1 \cap A_2 \cap A_3 \cap A_4$, where $A_i$, $i = 1, \ldots, 4$, is the event that the $i$th item is not defective. Using the multiplication rule, we have
\[ P(A) = P(A_1)P(A_2 \mid A_1)P(A_3 \mid A_1 \cap A_2)P(A_4 \mid A_1 \cap A_2 \cap A_3) = \frac{95}{100} \cdot \frac{94}{99} \cdot \frac{93}{98} \cdot \frac{92}{97} = 0.812. \]

Solution to Problem 1.18. Using the definition of conditional probabilities, we have
\[ P(A \cap B \mid B) = \frac{P(A \cap B \cap B)}{P(B)} = \frac{P(A \cap B)}{P(B)} = P(A \mid B). \]

Solution to Problem 1.19. Let $A$ be the event that Alice does not find her paper in drawer $i$.
Since the paper is in drawer $i$ with probability $p_i$, and her search is successful with probability $d_i$, the multiplication rule yields $P(A^c) = p_i d_i$, so that $P(A) = 1 - p_i d_i$. Let $B$ be the event that the paper is in drawer $j$. If $j \ne i$, then $A \cap B = B$, $P(A \cap B) = P(B)$, and we have
\[ P(B \mid A) = \frac{P(A \cap B)}{P(A)} = \frac{P(B)}{P(A)} = \frac{p_j}{1 - p_i d_i}. \]
Similarly, if $i = j$, we have
\[ P(B \mid A) = \frac{P(A \cap B)}{P(A)} = \frac{P(B)P(A \mid B)}{P(A)} = \frac{p_i(1 - d_i)}{1 - p_i d_i}. \]

Solution to Problem 1.20. (a) Figure 1.1 provides a sequential description for the three different strategies. Here we assume 1 point for a win, 0 for a loss, and 1/2 point for a draw.

[Figure 1.1: Sequential descriptions of the chess match histories under strategies (i), (ii), and (iii).]

In the case of a tied 1-1 score, we go to sudden death in the next game, and Boris wins the match (probability $p_w$), or loses the match (probability $1 - p_w$).

(i) Using the total probability theorem and the sequential description of Fig. 1.1(a), we have
\[ P(\text{Boris wins}) = p_w^2 + 2p_w(1 - p_w)p_w. \]
The term $p_w^2$ corresponds to the win-win outcome, and the term $2p_w(1 - p_w)p_w$ corresponds to the win-lose-win and the lose-win-win outcomes.

(ii) Using Fig. 1.1(b), we have
\[ P(\text{Boris wins}) = p_d^2 p_w, \]
corresponding to the draw-draw-win outcome.

(iii) Using Fig. 1.1(c), we have
\[ P(\text{Boris wins}) = p_w p_d + p_w(1 - p_d)p_w + (1 - p_w)p_w^2. \]
The term $p_w p_d$ corresponds to the win-draw outcome, the term $p_w(1 - p_d)p_w$ corresponds to the win-lose-win outcome, and the term $(1 - p_w)p_w^2$ corresponds to the lose-win-win outcome.

(b) If $p_w < 1/2$, Boris has a greater probability of losing rather than winning any one game, regardless of the type of play he uses. Despite this, the probability of winning the match with strategy (iii) can be greater than 1/2, provided that $p_w$ is close enough to 1/2 and $p_d$ is close enough to 1. As an example, if $p_w = 0.45$ and $p_d = 0.9$, with strategy (iii) we have
\[ P(\text{Boris wins}) = 0.45 \cdot 0.9 + 0.45^2 \cdot (1 - 0.9) + (1 - 0.45) \cdot 0.45^2 \approx 0.54. \]
With strategies (i) and (ii), the corresponding probabilities of a win can be calculated to be approximately 0.43 and 0.36, respectively. What is happening here is that with strategy (iii), Boris is allowed to select a playing style after seeing the result of the first game, while his opponent is not. Thus, by being able to dictate the playing style in each game after receiving partial information about the match's outcome, Boris gains an advantage.

Solution to Problem 1.21. Let $p(m, k)$ be the probability that the starting player wins when the jar initially contains $m$ white and $k$ black balls. We have, using the total probability theorem,
\[ p(m, k) = \frac{m}{m + k} + \frac{k}{m + k}\bigl(1 - p(m, k - 1)\bigr) = 1 - \frac{k}{m + k}\, p(m, k - 1). \]
The probabilities $p(m, 1), p(m, 2), \ldots, p(m, n)$ can be calculated sequentially using this formula, starting with the initial condition $p(m, 0) = 1$.

Solution to Problem 1.22. We derive a recursion for the probability $p_i$ that a white ball is chosen from the $i$th jar. We have, using the total probability theorem,
\[ p_{i+1} = \frac{m + 1}{m + n + 1}\, p_i + \frac{m}{m + n + 1}(1 - p_i) = \frac{1}{m + n + 1}\, p_i + \frac{m}{m + n + 1}, \]
starting with the initial condition $p_1 = m/(m + n)$. Thus, we have
\[ p_2 = \frac{1}{m + n + 1} \cdot \frac{m}{m + n} + \frac{m}{m + n + 1} = \frac{m}{m + n}. \]
More generally, this calculation shows that if $p_{i-1} = m/(m + n)$, then $p_i = m/(m + n)$. Thus, we obtain $p_i = m/(m + n)$ for all $i$.

Solution to Problem 1.23. Let $p_{i,n-i}(k)$ denote the probability that after $k$ exchanges, a jar will contain $i$ balls that started in that jar and $n - i$ balls that started in the other jar. We want to find $p_{n,0}(4)$. We argue recursively, using the total probability theorem. We have
\[ p_{n,0}(4) = \frac{1}{n} \cdot \frac{1}{n} \cdot p_{n-1,1}(3), \]
\[ p_{n-1,1}(3) = p_{n,0}(2) + 2 \cdot \frac{n - 1}{n} \cdot \frac{1}{n} \cdot p_{n-1,1}(2) + \frac{2}{n} \cdot \frac{2}{n} \cdot p_{n-2,2}(2), \]
\[ p_{n,0}(2) = \frac{1}{n} \cdot \frac{1}{n} \cdot p_{n-1,1}(1), \]
\[ p_{n-1,1}(2) = 2 \cdot \frac{n - 1}{n} \cdot \frac{1}{n} \cdot p_{n-1,1}(1), \]
\[ p_{n-2,2}(2) = \frac{n - 1}{n} \cdot \frac{n - 1}{n} \cdot p_{n-1,1}(1), \]
\[ p_{n-1,1}(1) = 1. \]
Combining these equations, we obtain
\[ p_{n,0}(4) = \frac{1}{n^2}\left(\frac{1}{n^2} + \frac{4(n - 1)^2}{n^4} + \frac{4(n - 1)^2}{n^4}\right) = \frac{1}{n^2}\left(\frac{1}{n^2} + \frac{8(n - 1)^2}{n^4}\right). \]

Solution to Problem 1.24. Intuitively, there is something wrong with this rationale. The reason is that it is not based on a correctly specified probabilistic model. In particular, the event where both of the other prisoners are to be released is not properly accounted for in the calculation of the posterior probability of release.

To be precise, let A, B, and C be the prisoners, and let A be the one who considers asking the guard. Suppose that all prisoners are a priori equally likely to be released. Suppose also that if B and C are to be released, then the guard chooses B or C with equal probability to reveal to A. Then, there are four possible outcomes:
(1) A and B are to be released, and the guard says B (probability 1/3).
(2) A and C are to be released, and the guard says C (probability 1/3).
(3) B and C are to be released, and the guard says B (probability 1/6).
(4) B and C are to be released, and the guard says C (probability 1/6).
Thus,
\[ P(\text{A is to be released} \mid \text{guard says B}) = \frac{P(\text{A is to be released and guard says B})}{P(\text{guard says B})} = \frac{1/3}{1/3 + 1/6} = \frac{2}{3}. \]
Similarly,
\[ P(\text{A is to be released} \mid \text{guard says C}) = \frac{2}{3}. \]
Thus, regardless of the identity revealed by the guard, the probability that A is released is equal to 2/3, the a priori probability of being released.

Solution to Problem 1.25. Let $\overline{m}$ and $\underline{m}$ be the larger and the smaller of the two amounts, respectively. Consider the three events
\[ A = \{X < \underline{m}\}, \qquad B = \{\underline{m} < X < \overline{m}\}, \qquad C = \{\overline{m} < X\}. \]
Let $\overline{A}$ (or $\overline{B}$ or $\overline{C}$) be the event that $A$ (or $B$ or $C$, respectively) occurs and you first select the envelope containing the larger amount $\overline{m}$. Let $\underline{A}$ (or $\underline{B}$ or $\underline{C}$) be the event that $A$ (or $B$ or $C$, respectively) occurs and you first select the envelope containing the smaller amount $\underline{m}$. Finally, consider the event
\[ W = \{\text{you end up with the envelope containing } \overline{m}\}. \]
We want to determine $P(W)$ and check whether it is larger than 1/2 or not. By the total probability theorem, we have
\[ P(W \mid A) = \frac{1}{2}\bigl(P(W \mid \overline{A}) + P(W \mid \underline{A})\bigr) = \frac{1}{2}(1 + 0) = \frac{1}{2}, \]
\[ P(W \mid B) = \frac{1}{2}\bigl(P(W \mid \overline{B}) + P(W \mid \underline{B})\bigr) = \frac{1}{2}(1 + 1) = 1, \]
\[ P(W \mid C) = \frac{1}{2}\bigl(P(W \mid \overline{C}) + P(W \mid \underline{C})\bigr) = \frac{1}{2}(0 + 1) = \frac{1}{2}. \]
Using these relations together with the total probability theorem, we obtain
\[ P(W) = P(A)P(W \mid A) + P(B)P(W \mid B) + P(C)P(W \mid C) = \frac{1}{2}\bigl(P(A) + P(B) + P(C)\bigr) + \frac{1}{2}P(B) = \frac{1}{2} + \frac{1}{2}P(B). \]
Since $P(B) > 0$ by assumption, it follows that $P(W) > 1/2$, so your friend is correct.

Solution to Problem 1.26. (a) We use the formula
\[ P(A \mid B) = \frac{P(A \cap B)}{P(B)} = \frac{P(A)P(B \mid A)}{P(B)}. \]
Since all crows are black, we have $P(B) = 1 - q$. Furthermore, $P(A) = p$. Finally, $P(B \mid A) = 1 - q = P(B)$, since the probability of observing a (black) crow is not affected by the truth of our hypothesis. We conclude that $P(A \mid B) = P(A) = p$.
Thus, the new evidence, while compatible with the hypothesis "all cows are white," does not change our beliefs about its truth.

(b) Once more,
\[ P(A \mid C) = \frac{P(A \cap C)}{P(C)} = \frac{P(A)P(C \mid A)}{P(C)}. \]
Given the event $A$, a cow is observed with probability $q$, and it must be white. Thus, $P(C \mid A) = q$. Given the event $A^c$, a cow is observed with probability $q$, and it is white with probability 1/2. Thus, $P(C \mid A^c) = q/2$. Using the total probability theorem,
\[ P(C) = P(A)P(C \mid A) + P(A^c)P(C \mid A^c) = pq + (1 - p)\frac{q}{2}. \]
Hence,
\[ P(A \mid C) = \frac{pq}{pq + (1 - p)\dfrac{q}{2}} = \frac{2p}{1 + p} > p. \]
Thus, the observation of a white cow makes the hypothesis "all cows are white" more likely to be true.

Solution to Problem 1.27. Since Bob tosses one more coin than Alice, it is impossible that they toss both the same number of heads and the same number of tails. So Bob tosses either more heads than Alice or more tails than Alice (but not both). Since the coins are fair, these events are equally likely by symmetry, so both events have probability 1/2.

An alternative solution is to argue that if Alice and Bob are tied after $2n$ tosses, they are equally likely to win. If they are not tied, then their scores differ by at least 2, and toss $2n + 1$ will not change the final outcome. This argument may also be expressed algebraically by using the total probability theorem. Let $B$ be the event that Bob tosses more heads. Let $X$ be the event that after each has tossed $n$ of their coins, Bob has more heads than Alice, let $Y$ be the event that under the same conditions, Alice has more heads than Bob, and let $Z$ be the event that they have the same number of heads. Since the coins are fair, we have $P(X) = P(Y)$, and also $P(Z) = 1 - P(X) - P(Y)$. Furthermore, we see that
\[ P(B \mid X) = 1, \qquad P(B \mid Y) = 0, \qquad P(B \mid Z) = \frac{1}{2}. \]
Now we have, using the total probability theorem,
\[ P(B) = P(X) \cdot P(B \mid X) + P(Y) \cdot P(B \mid Y) + P(Z) \cdot P(B \mid Z) = P(X) + \frac{1}{2} \cdot P(Z) = \frac{1}{2} \cdot \bigl(P(X) + P(Y) + P(Z)\bigr) = \frac{1}{2}, \]
as required.

Solution to Problem 1.30. Consider the sample space for the hunter's strategy. The events that lead to the correct path are:
(1) Both dogs agree on the correct path (probability $p^2$, by independence).
(2) The dogs disagree, dog 1 chooses the correct path, and the hunter follows dog 1 [probability $p(1 - p)/2$].
(3) The dogs disagree, dog 2 chooses the correct path, and the hunter follows dog 2 [probability $p(1 - p)/2$].
The above events are disjoint, so we can add the probabilities to find that under the hunter's strategy, the probability that he chooses the correct path is
\[ p^2 + \frac{1}{2}p(1 - p) + \frac{1}{2}p(1 - p) = p. \]
On the other hand, if the hunter lets one dog choose the path, this dog will also choose the correct path with probability $p$. Thus, the two strategies are equally effective.

Solution to Problem 1.31. (a) Let $A$ be the event that a 0 is transmitted. Using the total probability theorem, the desired probability is
\[ P(A)(1 - \epsilon_0) + \bigl(1 - P(A)\bigr)(1 - \epsilon_1) = p(1 - \epsilon_0) + (1 - p)(1 - \epsilon_1). \]

(b) By independence, the probability that the string 1011 is received correctly is
\[ (1 - \epsilon_0)(1 - \epsilon_1)^3. \]

(c) In order for a 0 to be decoded correctly, the received string must be 000, 001, 010, or 100. Given that the string transmitted was 000, the probability of receiving 000 is $(1 - \epsilon_0)^3$, and the probability of each of the strings 001, 010, and 100 is $\epsilon_0(1 - \epsilon_0)^2$. Thus, the probability of correct decoding is
\[ 3\epsilon_0(1 - \epsilon_0)^2 + (1 - \epsilon_0)^3. \]

(d) When the symbol is 0, the probabilities of correct decoding with and without the scheme of part (c) are $3\epsilon_0(1 - \epsilon_0)^2 + (1 - \epsilon_0)^3$ and $1 - \epsilon_0$, respectively.
Thus, the probability is improved with the scheme of part (c) if
\[ 3\epsilon_0(1 - \epsilon_0)^2 + (1 - \epsilon_0)^3 > 1 - \epsilon_0, \]
or
\[ (1 - \epsilon_0)(1 + 2\epsilon_0) > 1, \]
which is equivalent to $0 < \epsilon_0 < 1/2$.

(e) Using Bayes' rule, we have
\[ P(0 \mid 101) = \frac{P(0)P(101 \mid 0)}{P(0)P(101 \mid 0) + P(1)P(101 \mid 1)}. \]
The probabilities needed in the above formula are
\[ P(0) = p, \qquad P(1) = 1 - p, \qquad P(101 \mid 0) = \epsilon_0^2(1 - \epsilon_0), \qquad P(101 \mid 1) = \epsilon_1(1 - \epsilon_1)^2. \]

Solution to Problem 1.32. The answer to this problem is not unique and depends on the assumptions we make on the reproductive strategy of the king's parents.

Suppose that the king's parents had decided to have exactly two children and then stopped. There are four possible and equally likely outcomes, namely BB, GG, BG, and GB (B stands for "boy" and G stands for "girl"). Given that at least one child was a boy (the king), the outcome GG is eliminated and we are left with three equally likely outcomes (BB, BG, and GB). The probability that the sibling is male (the conditional probability of BB) is 1/3.

Suppose on the other hand that the king's parents had decided to have children until they would have a male child. In that case, the king is the second child, and the sibling is female, with certainty.

Solution to Problem 1.33. Flip the coin twice. If the outcome is heads-tails, choose the opera. If the outcome is tails-heads, choose the movies. Otherwise, repeat the process, until a decision can be made. Let $A_k$ be the event that a decision was made at the $k$th round. Conditional on the event $A_k$, the two choices are equally likely, and we have
\[ P(\text{opera}) = \sum_{k=1}^{\infty} P(\text{opera} \mid A_k)P(A_k) = \sum_{k=1}^{\infty} \frac{1}{2}P(A_k) = \frac{1}{2}. \]
We have used here the property $\sum_{k=1}^{\infty} P(A_k) = 1$, which is true as long as $P(\text{heads}) > 0$ and $P(\text{tails}) > 0$.

Solution to Problem 1.34. The system may be viewed as a series connection of three subsystems, denoted 1, 2, and 3 in Fig. 1.19 in the text. The probability that the entire system is operational is $p_1 p_2 p_3$, where $p_i$ is the probability that subsystem $i$ is operational. Using the formulas for the probability of success of a series or a parallel system given in Example 1.24, we have
\[ p_1 = p, \qquad p_3 = 1 - (1 - p)^2, \qquad p_2 = 1 - (1 - p)\Bigl(1 - p\bigl(1 - (1 - p)^3\bigr)\Bigr). \]

Solution to Problem 1.35. Let $A_i$ be the event that exactly $i$ components are operational. The probability that the system is operational is the probability of the union $\cup_{i=k}^n A_i$, and since the $A_i$ are disjoint, it is equal to
\[ \sum_{i=k}^{n} P(A_i) = \sum_{i=k}^{n} p(i), \]
where $p(i)$ are the binomial probabilities. Thus, the probability of an operational system is
\[ \sum_{i=k}^{n} \binom{n}{i} p^i (1 - p)^{n-i}. \]

Solution to Problem 1.36. (a) Let $A$ denote the event that the city experiences a blackout. Since the power plants fail independently of each other, we have
\[ P(A) = \prod_{i=1}^{n} p_i. \]

(b) There will be a blackout if either all $n$ or any $n - 1$ power plants fail. These two events are disjoint, so we can calculate the probability $P(A)$ of a blackout by adding their probabilities:
\[ P(A) = \prod_{i=1}^{n} p_i + \sum_{i=1}^{n} \left((1 - p_i) \prod_{j \ne i} p_j\right). \]
Here, $(1 - p_i)\prod_{j \ne i} p_j$ is the probability that $n - 1$ plants have failed and plant $i$ is the one that has not failed.

Solution to Problem 1.37. The probability that $k_1$ voice users and $k_2$ data users simultaneously need to be connected is $p_1(k_1)p_2(k_2)$, where $p_1(k_1)$ and $p_2(k_2)$ are the corresponding binomial probabilities, given by
\[ p_i(k_i) = \binom{n_i}{k_i} p_i^{k_i} (1 - p_i)^{n_i - k_i}, \qquad i = 1, 2. \]
The probability that more users want to use the system than the system can accommodate is the sum of all products $p_1(k_1)p_2(k_2)$ as $k_1$ and $k_2$ range over all possible values whose total bit rate requirement $k_1 r_1 + k_2 r_2$ exceeds the capacity $c$ of the system. Thus, the desired probability is
\[ \sum_{\{(k_1, k_2) \,\mid\, k_1 r_1 + k_2 r_2 > c,\; k_1 \le n_1,\; k_2 \le n_2\}} p_1(k_1)p_2(k_2). \]

Solution to Problem 1.38. We have
\[ p_T = P(\text{at least 6 out of the 8 remaining holes are won by Telis}), \]
\[ p_W = P(\text{at least 4 out of the 8 remaining holes are won by Wendy}). \]
Using the binomial formulas,
\[ p_T = \sum_{k=6}^{8} \binom{8}{k} p^k (1 - p)^{8-k}, \qquad p_W = \sum_{k=4}^{8} \binom{8}{k} (1 - p)^k p^{8-k}. \]
The amount of money that Telis should get is $10 \cdot p_T/(p_T + p_W)$ dollars.

Solution to Problem 1.39. Let $A$ be the event that the professor teaches her class, and let $B$ be the event that the weather is bad. We have
\[ P(A) = P(B)P(A \mid B) + P(B^c)P(A \mid B^c), \]
and
\[ P(A \mid B) = \sum_{i=k}^{n} \binom{n}{i} p_b^i (1 - p_b)^{n-i}, \qquad P(A \mid B^c) = \sum_{i=k}^{n} \binom{n}{i} p_g^i (1 - p_g)^{n-i}. \]
Therefore,
\[ P(A) = P(B) \sum_{i=k}^{n} \binom{n}{i} p_b^i (1 - p_b)^{n-i} + \bigl(1 - P(B)\bigr) \sum_{i=k}^{n} \binom{n}{i} p_g^i (1 - p_g)^{n-i}. \]

Solution to Problem 1.40. Let $A$ be the event that the first $n - 1$ tosses produce an even number of heads, and let $E$ be the event that the $n$th toss is a head. We can obtain an even number of heads in $n$ tosses in two distinct ways: 1) there is an even number of heads in the first $n - 1$ tosses, and the $n$th toss results in tails: this is the event $A \cap E^c$; 2) there is an odd number of heads in the first $n - 1$ tosses, and the $n$th toss results in heads: this is the event $A^c \cap E$. Using also the independence of $A$ and $E$,
\[ q_n = P\bigl((A \cap E^c) \cup (A^c \cap E)\bigr) = P(A \cap E^c) + P(A^c \cap E) = P(A)P(E^c) + P(A^c)P(E) = (1 - p)q_{n-1} + p(1 - q_{n-1}). \]
We now use induction. For $n = 0$, we have $q_0 = 1$, which agrees with the given formula for $q_n$. Assume that the formula holds with $n$ replaced by $n - 1$, i.e.,
\[ q_{n-1} = \frac{1 + (1 - 2p)^{n-1}}{2}. \]
Using this equation, we have
\[ q_n = p(1 - q_{n-1}) + (1 - p)q_{n-1} = p + (1 - 2p)q_{n-1} = p + (1 - 2p)\frac{1 + (1 - 2p)^{n-1}}{2} = \frac{1 + (1 - 2p)^n}{2}, \]
so the given formula holds for all $n$.

Solution to Problem 1.41. We have
\[ P(N = n) = P(A_{1,n-1} \cap A_{n,n}) = P(A_{1,n-1})P(A_{n,n} \mid A_{1,n-1}), \]
where for $i \le j$, $A_{i,j}$ is the event that contestant $i$'s number is the smallest of the numbers of contestants $1, \ldots, j$. We also have
\[ P(A_{1,n-1}) = \frac{1}{n - 1}. \]
We claim that
\[ P(A_{n,n} \mid A_{1,n-1}) = P(A_{n,n}) = \frac{1}{n}. \]
The reason is that by symmetry, we have
\[ P(A_{n,n} \mid A_{i,n-1}) = P(A_{n,n} \mid A_{1,n-1}), \qquad i = 1, \ldots, n - 1, \]
while by the total probability theorem,
\[ P(A_{n,n}) = \sum_{i=1}^{n-1} P(A_{i,n-1})P(A_{n,n} \mid A_{i,n-1}) = P(A_{n,n} \mid A_{1,n-1}) \sum_{i=1}^{n-1} P(A_{i,n-1}) = P(A_{n,n} \mid A_{1,n-1}). \]
Hence
\[ P(N = n) = \frac{1}{n - 1} \cdot \frac{1}{n}. \]
An alternative solution is also possible, using the counting methods developed in Section 1.6. Let us fix a particular choice of $n$. Think of an outcome of the experiment as an ordering of the values of the $n$ contestants, so that there are $n!$ equally likely outcomes. The event $\{N = n\}$ occurs if and only if the first contestant's number is smallest among the first $n - 1$ contestants, and contestant $n$'s number is the smallest among the first $n$ contestants. This event can occur in $(n - 2)!$ different ways, namely, all the possible ways of ordering contestants $2, \ldots, n - 1$. Thus, the probability of this event is $(n - 2)!/n! = 1/\bigl(n(n - 1)\bigr)$, in agreement with the previous solution.

Solution to Problem 1.49. A sum of 11 is obtained with the following 6 combinations:
\[ (6, 4, 1) \quad (6, 3, 2) \quad (5, 5, 1) \quad (5, 4, 2) \quad (5, 3, 3) \quad (4, 4, 3). \]
A sum of 12 is obtained with the following 6 combinations:
\[ (6, 5, 1) \quad (6, 4, 2) \quad (6, 3, 3) \quad (5, 5, 2) \quad (5, 4, 3) \quad (4, 4, 4). \]
Each combination of 3 distinct numbers corresponds to 6 permutations, while each combination of 3 numbers, two of which are equal, corresponds to 3 permutations. Counting the number of permutations in the 6 combinations corresponding to a sum of 11, we obtain $6 + 6 + 3 + 6 + 3 + 3 = 27$ permutations. Counting the number of permutations in the 6 combinations corresponding to a sum of 12, we obtain $6 + 6 + 3 + 3 + 6 + 1 = 25$ permutations. Since all permutations are equally likely, a sum of 11 is more likely than a sum of 12. Note also that the sample space has $6^3 = 216$ elements, so we have $P(11) = 27/216$, $P(12) = 25/216$.

Solution to Problem 1.50. The sample space consists of all possible choices for the birthday of each person. Since there are $n$ persons, and each has 365 choices for their birthday, the sample space has $365^n$ elements. Let us now consider those choices of birthdays for which no two persons have the same birthday. Assuming that $n \le 365$, there are 365 choices for the first person, 364 for the second, etc., for a total of $365 \cdot 364 \cdots (365 - n + 1)$. Thus,
\[ P(\text{no two birthdays coincide}) = \frac{365 \cdot 364 \cdots (365 - n + 1)}{365^n}. \]
It is interesting to note that for $n$ as small as 23, the probability that there are two persons with the same birthday is larger than 1/2.

Solution to Problem 1.51. (a) We number the red balls from 1 to $m$, and the white balls from $m + 1$ to $m + n$. One possible sample space consists of all pairs of integers $(i, j)$ with $1 \le i, j \le m + n$ and $i \ne j$. The total number of possible outcomes is $(m + n)(m + n - 1)$. The number of outcomes corresponding to red-white selection (i.e., $i \in \{1, \ldots, m\}$ and $j \in \{m + 1, \ldots, m + n\}$) is $mn$. The number of outcomes corresponding to white-red selection (i.e., $i \in \{m + 1, \ldots, m + n\}$ and $j \in \{1, \ldots, m\}$) is also $mn$. Thus, the desired probability that the balls are of different color is
\[ \frac{2mn}{(m + n)(m + n - 1)}. \]
Another possible sample space consists of all the possible ordered color pairs, i.e., $\{RR, RW, WR, WW\}$. We then have to calculate the probability of the event $\{RW, WR\}$. We consider a sequential description of the experiment, i.e., we first select the first ball and then the second. In the first stage, the probability of a red ball is $m/(m + n)$. In the second stage, the probability of a red ball is either $m/(m + n - 1)$ or $(m - 1)/(m + n - 1)$, depending on whether the first ball was white or red, respectively. Therefore, using the multiplication rule, we have
\[ P(RR) = \frac{m}{m + n} \cdot \frac{m - 1}{m - 1 + n}, \qquad P(RW) = \frac{m}{m + n} \cdot \frac{n}{m - 1 + n}, \]
\[ P(WR) = \frac{n}{m + n} \cdot \frac{m}{m + n - 1}, \qquad P(WW) = \frac{n}{m + n} \cdot \frac{n - 1}{m + n - 1}. \]
The desired probability is
\[ P(\{RW, WR\}) = P(RW) + P(WR) = \frac{m}{m + n} \cdot \frac{n}{m - 1 + n} + \frac{n}{m + n} \cdot \frac{m}{m + n - 1} = \frac{2mn}{(m + n)(m + n - 1)}. \]

(b) We calculate the conditional probability of all balls being red, given any of the possible values of $k$. We have $P(R \mid k = 1) = m/(m + n)$ and, as found in part (a), $P(RR \mid k = 2) = m(m - 1)/\bigl((m + n)(m - 1 + n)\bigr)$. Arguing sequentially as in part (a), we also have $P(RRR \mid k = 3) = m(m - 1)(m - 2)/\bigl((m + n)(m - 1 + n)(m - 2 + n)\bigr)$. According to the total probability theorem, the desired answer is
\[ \frac{1}{3}\left(\frac{m}{m + n} + \frac{m(m - 1)}{(m + n)(m - 1 + n)} + \frac{m(m - 1)(m - 2)}{(m + n)(m - 1 + n)(m - 2 + n)}\right). \]

Solution to Problem 1.52. The probability that the 13th card is the first king to be dealt is the probability that out of the first 13 cards to be dealt, exactly one was a king, and that the king was dealt last.
Now, given that exactly one king was dealt in the first 13 cards, the probability that the king was dealt last is just 1/13, since each "position" is equally likely. Thus, it remains to calculate the probability that there was exactly one king in the first 13 cards dealt. To calculate this probability we count the "favorable" outcomes and divide by the total number of possible outcomes. We first count the favorable outcomes, namely those with exactly one king in the first 13 cards dealt. We can choose a particular king in 4 ways, and we can choose the other 12 cards in $\binom{48}{12}$ ways, therefore there are $4 \cdot \binom{48}{12}$ favorable outcomes. There are $\binom{52}{13}$ total outcomes, so the desired probability is
\[ \frac{1}{13} \cdot \frac{4 \cdot \dbinom{48}{12}}{\dbinom{52}{13}}. \]
For an alternative solution, we argue as in Example 1.10. The probability that the first card is not a king is 48/52. Given that, the probability that the second is not a king is 47/51. We continue similarly until the 12th card. The probability that the 12th card is not a king, given that none of the preceding 11 was a king, is 37/41. (There are $52 - 11 = 41$ cards left, and $48 - 11 = 37$ of them are not kings.) Finally, the conditional probability that the 13th card is a king is 4/40. The desired probability is
\[ \frac{48 \cdot 47 \cdots 37 \cdot 4}{52 \cdot 51 \cdots 41 \cdot 40}. \]

Solution to Problem 1.53. Suppose we label the classes A, B, and C. The probability that Joe and Jane will both be in class A is the number of possible combinations for class A that involve both Joe and Jane, divided by the total number of combinations for class A. Therefore, this probability is
\[ \frac{\dbinom{88}{28}}{\dbinom{90}{30}}. \]
Since there are three classes, the probability that Joe and Jane end up in the same class is
\[ 3 \cdot \frac{\dbinom{88}{28}}{\dbinom{90}{30}}. \]
A much simpler solution is as follows. We place Joe in one class. Regarding Jane, there are 89 possible "slots", and only 29 of them place her in the same class as Joe. Thus, the answer is 29/89, which turns out to agree with the answer obtained earlier.

Solution to Problem 1.54. (a) Since the cars are all distinct, there are 20! ways to line them up.

(b) To find the probability that the cars will be parked so that they alternate, we count the number of "favorable" outcomes, and divide by the total number of possible outcomes found in part (a). We count in the following manner. We first arrange the US cars in an ordered sequence (permutation). We can do this in 10! ways, since there are 10 distinct cars. Similarly, arrange the foreign cars in an ordered sequence, which can also be done in 10! ways. Finally, interleave the two sequences. This can be done in two different ways, since we can let the first car be either US-made or foreign. Thus, we have a total of $2 \cdot 10! \cdot 10!$ possibilities, and the desired probability is
\[ \frac{2 \cdot 10! \cdot 10!}{20!}. \]
Note that we could have solved the second part of the problem by neglecting the fact that the cars are distinct. Suppose the foreign cars are indistinguishable, and also that the US cars are indistinguishable. Out of the 20 available spaces, we need to choose 10 spaces in which to place the US cars, and thus there are $\binom{20}{10}$ possible outcomes. Out of these outcomes, there are only two in which the cars alternate, depending on whether we start with a US or a foreign car. Thus, the desired probability is $2/\binom{20}{10}$, which coincides with our earlier answer.

Solution to Problem 1.55. We count the number of ways in which we can safely place 8 distinguishable rooks, and then divide this by the total number of possibilities.
First we count the number of favorable positions for the rooks. We will place the rooks one by one on the $8 \times 8$ chessboard. For the first rook, there are no constraints, so we have 64 choices. Placing this rook, however, eliminates one row and one column. Thus, for the second rook, we can imagine that the illegal column and row have been removed, thus leaving us with a $7 \times 7$ chessboard, and with 49 choices. Similarly, for the third rook we have 36 choices, for the fourth 25, etc. In the absence of any restrictions, there are $64 \cdot 63 \cdots 57 = 64!/56!$ ways we can place 8 rooks, so the desired probability is
\[ \frac{64 \cdot 49 \cdot 36 \cdot 25 \cdot 16 \cdot 9 \cdot 4}{64!/56!}. \]

Solution to Problem 1.56. (a) There are $\binom{8}{4}$ ways to pick 4 lower level classes, and $\binom{10}{3}$ ways to choose 3 higher level classes, so there are
\[ \binom{8}{4}\binom{10}{3} \]
valid curricula.

(b) This part is more involved. We need to consider several different cases:

(i) Suppose we do not choose $L_1$. Then both $L_2$ and $L_3$ must be chosen; otherwise no higher level courses would be allowed. Thus, we need to choose 2 more lower level classes out of the remaining 5, and 3 higher level classes from the available 5. We then obtain $\binom{5}{2}\binom{5}{3}$ valid curricula.

(ii) If we choose $L_1$ but choose neither $L_2$ nor $L_3$, we have $\binom{5}{3}\binom{5}{3}$ choices.

(iii) If we choose $L_1$ and choose one of $L_2$ or $L_3$, we have $2 \cdot \binom{5}{2}\binom{5}{3}$ choices. This is because there are two ways of choosing between $L_2$ and $L_3$, $\binom{5}{2}$ ways of choosing 2 lower level classes from $L_4, \ldots, L_8$, and $\binom{5}{3}$ ways of choosing 3 higher level classes from $H_1, \ldots, H_5$.

(iv) Finally, if we choose $L_1$, $L_2$, and $L_3$, we have $\binom{5}{1}\binom{10}{3}$ choices.

Note that we are not double counting, because there is no overlap in the cases we are considering, and furthermore we have considered every possible choice. The total is obtained by adding the counts for the above four cases.

Solution to Problem 1.57. Let us fix the order in which letters appear in the sentence. There are 26! choices, corresponding to the possible permutations of the 26-letter alphabet. Having fixed the order of the letters, we need to separate them into words. To obtain 6 words, we need to place 5 separators ("blanks") between the letters. With 26 letters, there are 25 possible positions for these blanks, and the number of choices is $\binom{25}{5}$. Thus, the desired number of sentences is $26!\binom{25}{5}$. Generalizing, the number of sentences consisting of $w$ nonempty words using exactly once each letter from an $l$-letter alphabet is equal to
\[ l!\binom{l - 1}{w - 1}. \]

Solution to Problem 1.58. (a) The sample space consists of all ways of drawing 7 elements out of a 52-element set, so it contains $\binom{52}{7}$ possible outcomes. Let us count those outcomes that involve exactly 3 aces. We are free to select any 3 out of the 4 aces, and any 4 out of the 48 remaining cards, for a total of $\binom{4}{3}\binom{48}{4}$ choices. Thus,
\[ P(\text{7 cards include exactly 3 aces}) = \frac{\dbinom{4}{3}\dbinom{48}{4}}{\dbinom{52}{7}}. \]

(b) Proceeding similarly to part (a), we obtain
\[ P(\text{7 cards include exactly 2 kings}) = \frac{\dbinom{4}{2}\dbinom{48}{5}}{\dbinom{52}{7}}. \]

(c) If $A$ and $B$ stand for the events in parts (a) and (b), respectively, we are looking for
\[ P(A \cup B) = P(A) + P(B) - P(A \cap B). \]
The event $A \cap B$ (having exactly 3 aces and exactly 2 kings) can occur by choosing 3 out of the 4 available aces, 2 out of the 4 available kings, and 2 more cards out of the remaining 44. Thus, this event consists of $\binom{4}{3}\binom{4}{2}\binom{44}{2}$ distinct outcomes. Hence,
\[ P(\text{7 cards include 3 aces and/or 2 kings}) = \frac{\dbinom{4}{3}\dbinom{48}{4} + \dbinom{4}{2}\dbinom{48}{5} - \dbinom{4}{3}\dbinom{4}{2}\dbinom{44}{2}}{\dbinom{52}{7}}. \]

Solution to Problem 1.59.
Clearly, if $n > m$, or $n > k$, or $m - n > 100 - k$, the probability must be zero. If $n \le m$, $n \le k$, and $m - n \le 100 - k$, then we can find the probability that the test drive found $n$ of the 100 cars defective by counting the total number of size-$m$ subsets, and then the number of size-$m$ subsets that contain $n$ lemons. Clearly, there are $\binom{100}{m}$ different subsets of size $m$. To count the number of size-$m$ subsets with $n$ lemons, we first choose $n$ lemons from the $k$ available lemons, and then choose $m - n$ good cars from the $100 - k$ available good cars. Thus, the number of ways to choose a subset of size $m$ from 100 cars, and get $n$ lemons, is
\[ \binom{k}{n}\binom{100 - k}{m - n}, \]
and the desired probability is
\[ \frac{\dbinom{k}{n}\dbinom{100 - k}{m - n}}{\dbinom{100}{m}}. \]

Solution to Problem 1.60. The size of the sample space is the number of different ways that 52 objects can be divided in 4 groups of 13, and is given by the multinomial formula
\[ \frac{52!}{13!\,13!\,13!\,13!}. \]
There are 4! different ways of distributing the 4 aces to the 4 players, and there are
\[ \frac{48!}{12!\,12!\,12!\,12!} \]
different ways of dividing the remaining 48 cards into 4 groups of 12. Thus, the desired probability is
\[ \frac{4!\,\dfrac{48!}{12!\,12!\,12!\,12!}}{\dfrac{52!}{13!\,13!\,13!\,13!}}. \]
An alternative solution can be obtained by considering a different, but probabilistically equivalent method of dealing the cards. Each player has 13 slots, each one of which is to receive one card. Instead of shuffling the deck, we place the 4 aces at the top, and start dealing the cards one at a time, with each free slot being equally likely to receive the next card. For the event of interest to occur, the first ace can go anywhere; the second can go to any one of the 39 slots (out of the 51 available) that correspond to players that do not yet have an ace; the third can go to any one of the 26 slots (out of the 50 available) that correspond to the two players that do not yet have an ace; and finally, the fourth can go to any one of the 13 slots (out of the 49 available) that correspond to the only player who does not yet have an ace. Thus, the desired probability is
\[ \frac{39 \cdot 26 \cdot 13}{51 \cdot 50 \cdot 49}. \]
By simplifying our previous answer, it can be checked that it is the same as the one obtained here, thus corroborating the intuitive fact that the two different ways of dealing the cards are probabilistically equivalent.

CHAPTER 2

Solution to Problem 2.1. Let $X$ be the number of points the MIT team earns over the weekend. We have
\[ P(X = 0) = 0.6 \cdot 0.3 = 0.18, \]
\[ P(X = 1) = 0.4 \cdot 0.5 \cdot 0.3 + 0.6 \cdot 0.5 \cdot 0.7 = 0.27, \]
\[ P(X = 2) = 0.4 \cdot 0.5 \cdot 0.3 + 0.6 \cdot 0.5 \cdot 0.7 + 0.4 \cdot 0.5 \cdot 0.7 \cdot 0.5 = 0.34, \]
\[ P(X = 3) = 0.4 \cdot 0.5 \cdot 0.7 \cdot 0.5 + 0.4 \cdot 0.5 \cdot 0.7 \cdot 0.5 = 0.14, \]
\[ P(X = 4) = 0.4 \cdot 0.5 \cdot 0.7 \cdot 0.5 = 0.07, \]
\[ P(X > 4) = 0. \]

Solution to Problem 2.2. The number of guests that have the same birthday as you is binomial with $p = 1/365$ and $n = 499$. Thus the probability that exactly one other guest has the same birthday is
\[ \binom{499}{1} \frac{1}{365} \left(\frac{364}{365}\right)^{498} \approx 0.3486. \]
Let $\lambda = np = 499/365 \approx 1.367$. The Poisson approximation is $e^{-\lambda}\lambda = e^{-1.367} \cdot 1.367 \approx 0.3483$, which closely agrees with the correct probability based on the binomial.

Solution to Problem 2.3. (a) Let $L$ be the duration of the match. If Fischer wins a match consisting of $L$ games, then $L - 1$ draws must first occur before he wins. Summing over all possible lengths, we obtain
\[ P(\text{Fischer wins}) = \sum_{l=1}^{10} (0.3)^{l-1}(0.4) = 0.571425. \]

(b) The match has length $L$ with $L < 10$, if and only if $L - 1$ draws occur, followed by a win by either player. The match has length $L = 10$ if and only if 9 draws occur.
The probability of a win by either player is 0.7. Thus,
\[ p_L(l) = P(L = l) = \begin{cases} (0.3)^{l-1}(0.7), & l = 1, \ldots, 9, \\ (0.3)^9, & l = 10, \\ 0, & \text{otherwise.} \end{cases} \]

Solution to Problem 2.4. (a) Let $X$ be the number of modems in use. For $k < 50$, the probability that $X = k$ is the same as the probability that $k$ out of 1000 customers need a connection:
\[ p_X(k) = \binom{1000}{k} (0.01)^k (0.99)^{1000-k}, \qquad k = 0, 1, \ldots, 49. \]
The probability that $X = 50$ is the same as the probability that 50 or more out of 1000 customers need a connection:
\[ p_X(50) = \sum_{k=50}^{1000} \binom{1000}{k} (0.01)^k (0.99)^{1000-k}. \]

(b) By approximating the binomial with a Poisson with parameter $\lambda = 1000 \cdot 0.01 = 10$, we have
\[ p_X(k) = e^{-10} \frac{10^k}{k!}, \qquad k = 0, 1, \ldots, 49, \]
\[ p_X(50) = \sum_{k=50}^{1000} e^{-10} \frac{10^k}{k!}. \]

(c) Let $A$ be the event that there are more customers needing a connection than there are modems. Then,
\[ P(A) = \sum_{k=51}^{1000} \binom{1000}{k} (0.01)^k (0.99)^{1000-k}. \]
With the Poisson approximation, $P(A)$ is estimated by
\[ \sum_{k=51}^{1000} e^{-10} \frac{10^k}{k!}. \]

Solution to Problem 2.5. (a) Let $X$ be the number of packets stored at the end of the first slot. For $k < b$, the probability that $X = k$ is the same as the probability that $k$ packets are generated by the source:
\[ p_X(k) = e^{-\lambda} \frac{\lambda^k}{k!}, \qquad k = 0, 1, \ldots, b - 1, \]
while
\[ p_X(b) = \sum_{k=b}^{\infty} e^{-\lambda} \frac{\lambda^k}{k!} = 1 - \sum_{k=0}^{b-1} e^{-\lambda} \frac{\lambda^k}{k!}. \]
Let $Y$ be the number of packets stored at the end of the second slot. Since $\min\{X, c\}$ is the number of packets transmitted in the second slot, we have $Y = X - \min\{X, c\}$. Thus,
\[ p_Y(0) = \sum_{k=0}^{c} p_X(k) = \sum_{k=0}^{c} e^{-\lambda} \frac{\lambda^k}{k!}, \]
\[ p_Y(k) = p_X(k + c) = e^{-\lambda} \frac{\lambda^{k+c}}{(k + c)!}, \qquad k = 1, \ldots, b - c - 1, \]
\[ p_Y(b - c) = p_X(b) = 1 - \sum_{k=0}^{b-1} e^{-\lambda} \frac{\lambda^k}{k!}. \]

(b) The probability that some packets get discarded during the first slot is the same as the probability that more than $b$ packets are generated by the source, so it is equal to
\[ \sum_{k=b+1}^{\infty} e^{-\lambda} \frac{\lambda^k}{k!}, \qquad \text{or} \qquad 1 - \sum_{k=0}^{b} e^{-\lambda} \frac{\lambda^k}{k!}. \]

Solution to Problem 2.6. We consider the general case of part (b), and we show that $p > 1/2$ is a necessary and sufficient condition for $n = 2k + 1$ games to be better than $n = 2k - 1$ games. To prove this, let $N$ be the number of Celtics' wins in the first $2k - 1$ games. If $A$ denotes the event that the Celtics win with $n = 2k + 1$, and $B$ denotes the event that the Celtics win with $n = 2k - 1$, then
\[ P(A) = P(N \ge k + 1) + P(N = k) \cdot \bigl(1 - (1 - p)^2\bigr) + P(N = k - 1) \cdot p^2, \]
\[ P(B) = P(N \ge k) = P(N = k) + P(N \ge k + 1), \]
and therefore
\[ P(A) - P(B) = P(N = k - 1) \cdot p^2 - P(N = k) \cdot (1 - p)^2 = \binom{2k - 1}{k - 1} p^{k-1}(1 - p)^k p^2 - \binom{2k - 1}{k} (1 - p)^2 p^k (1 - p)^{k-1} = \frac{(2k - 1)!}{(k - 1)!\,k!} p^k (1 - p)^k (2p - 1). \]
It follows that $P(A) > P(B)$ if and only if $p > \frac{1}{2}$. Thus, a longer series is better for the better team.

Solution to Problem 2.7. Let the random variable $X$ be the number of trials you need to open the door, and let $K_i$ be the event that the $i$th key selected opens the door.

(a) In case (1), we have
\[ p_X(1) = P(K_1) = \frac{1}{5}, \]
\[ p_X(2) = P(K_1^c)P(K_2 \mid K_1^c) = \frac{4}{5} \cdot \frac{1}{4} = \frac{1}{5}, \]
\[ p_X(3) = P(K_1^c)P(K_2^c \mid K_1^c)P(K_3 \mid K_1^c \cap K_2^c) = \frac{4}{5} \cdot \frac{3}{4} \cdot \frac{1}{3} = \frac{1}{5}. \]
Proceeding similarly, we see that the PMF of $X$ is
\[ p_X(x) = \frac{1}{5}, \qquad x = 1, 2, 3, 4, 5. \]
We can also view the problem as ordering the keys in advance and then trying them in succession, in which case the probability of any of the five keys being correct is 1/5.

In case (2), $X$ is a geometric random variable with $p = 1/5$, and its PMF is
\[ p_X(k) = \frac{1}{5} \cdot \left(\frac{4}{5}\right)^{k-1}, \qquad k \ge 1. \]

(b) In case (1), we have
\[ p_X(1) = P(K_1) = \frac{2}{10}, \]
\[ p_X(2) = P(K_1^c)P(K_2 \mid K_1^c) = \frac{8}{10} \cdot \frac{2}{9}, \]
\[ p_X(3) = P(K_1^c)P(K_2^c \mid K_1^c)P(K_3 \mid K_1^c \cap K_2^c) = \frac{8}{10} \cdot \frac{7}{9} \cdot \frac{2}{8} = \frac{7}{10} \cdot \frac{2}{9}. \]
Proceeding similarly, we see that the PMF of $X$ is
\[ p_X(x) = \frac{2 \cdot (10 - x)}{90}, \qquad x = 1, 2, \ldots, 10. \]
Consider now an alternative line of reasoning to derive the PMF of $X$. If we view the problem as ordering the keys in advance and then trying them in succession, the probability that the number of trials required is $x$ is the probability that the first $x - 1$ keys do not contain either of the two correct keys and the $x$th key is one of the correct keys. We can count the number of ways for this to happen and divide by the total number of ways to order the keys to determine $p_X(x)$. The total number of ways to order the keys is $10!$. For the $x$th key to be the first correct key, the other key must be among the last $10 - x$ keys, so there are $10 - x$ spots in which it can be located. There are $8!$ ways in which the other 8 keys can be in the other 8 locations. We must then multiply by two since either of the two correct keys could be in the $x$th position. We therefore have $2 \cdot (10 - x) \cdot 8!$ ways for the $x$th key to be the first correct one, and
\[ p_X(x) = \frac{2 \cdot (10 - x)\,8!}{10!} = \frac{2 \cdot (10 - x)}{90}, \qquad x = 1, 2, \ldots, 10, \]
as before.

In case (2), $X$ is again a geometric random variable with $p = 1/5$.

Solution to Problem 2.8. For $k = 0, 1, \ldots, n - 1$, we have
\[ \frac{p_X(k + 1)}{p_X(k)} = \frac{\dbinom{n}{k + 1} p^{k+1}(1 - p)^{n-k-1}}{\dbinom{n}{k} p^k (1 - p)^{n-k}} = \frac{p}{1 - p} \cdot \frac{n - k}{k + 1}. \]

Solution to Problem 2.9. For $k = 1, \ldots, n$, we have
\[ \frac{p_X(k)}{p_X(k - 1)} = \frac{\dbinom{n}{k} p^k (1 - p)^{n-k}}{\dbinom{n}{k - 1} p^{k-1}(1 - p)^{n-k+1}} = \frac{(n - k + 1)p}{k(1 - p)} = \frac{(n + 1)p - kp}{k - kp}. \]
If $k \le k^*$, then $k \le (n + 1)p$, or equivalently $k - kp \le (n + 1)p - kp$, so that the above ratio is greater than or equal to 1. It follows that $p_X(k)$ is monotonically nondecreasing. If $k > k^*$, the ratio is less than one, and $p_X(k)$ is monotonically decreasing, as required.

Solution to Problem 2.10. Using the expression for the Poisson PMF, we have, for $k \ge 1$,
\[ \frac{p_X(k)}{p_X(k - 1)} = \frac{\lambda^k \cdot e^{-\lambda}}{k!} \cdot \frac{(k - 1)!}{\lambda^{k-1} \cdot e^{-\lambda}} = \frac{\lambda}{k}. \]
Thus if $k \le \lambda$ the ratio is greater than or equal to 1, and it follows that $p_X(k)$ is monotonically increasing. Otherwise, the ratio is less than one, and $p_X(k)$ is monotonically decreasing, as required.

Solution to Problem 2.13. We will use the PMF for the number of girls among the natural children together with the formula for the PMF of a function of a random variable. Let $N$ be the number of natural children that are girls. Then $N$ has a binomial PMF
\[ p_N(k) = \begin{cases} \dbinom{5}{k} \cdot \left(\dfrac{1}{2}\right)^5, & \text{if } 0 \le k \le 5, \\ 0, & \text{otherwise.} \end{cases} \]
Let $G$ be the number of girls out of the 7 children, so that $G = N + 2$. By applying the formula for the PMF of a function of a random variable, we have
\[ p_G(g) = \sum_{\{n \mid n + 2 = g\}} p_N(n) = p_N(g - 2). \]
Thus,
\[ p_G(g) = \begin{cases} \dbinom{5}{g - 2} \cdot \left(\dfrac{1}{2}\right)^5, & \text{if } 2 \le g \le 7, \\ 0, & \text{otherwise.} \end{cases} \]

Solution to Problem 2.14. (a) Using the formula $p_Y(y) = \sum_{\{x \mid x \bmod 3 = y\}} p_X(x)$, we obtain
\[ p_Y(0) = p_X(0) + p_X(3) + p_X(6) + p_X(9) = 4/10, \]
\[ p_Y(1) = p_X(1) + p_X(4) + p_X(7) = 3/10, \]
\[ p_Y(2) = p_X(2) + p_X(5) + p_X(8) = 3/10, \]
\[ p_Y(y) = 0, \qquad \text{if } y \notin \{0, 1, 2\}. \]

(b) Similarly, using the formula $p_Y(y) = \sum_{\{x \mid 5 \bmod (x+1) = y\}} p_X(x)$, we obtain
\[ p_Y(y) = \begin{cases} 2/10, & \text{if } y = 0, \\ 2/10, & \text{if } y = 1, \\ 1/10, & \text{if } y = 2, \\ 5/10, & \text{if } y = 5, \\ 0, & \text{otherwise.} \end{cases} \]

Solution to Problem 2.15. The random variable $Y$ takes the values $k \ln a$, where $k = 1, \ldots, n$, if and only if $X = a^k$ or $X = a^{-k}$. Furthermore, $Y$ takes the value 0 if and only if $X = 1$. Thus, we have
\[ p_Y(y) = \begin{cases} \dfrac{2}{2n + 1}, & \text{if } y = \ln a, 2\ln a, \ldots, n\ln a, \\ \dfrac{1}{2n + 1}, & \text{if } y = 0, \\ 0, & \text{otherwise.} \end{cases} \]

Solution to Problem 2.16. (a) The scalar $a$ must satisfy
\[ 1 = \sum_x p_X(x) = \frac{1}{a} \sum_{x=-3}^{3} x^2, \]
so
\[ a = \sum_{x=-3}^{3} x^2 = (-3)^2 + (-2)^2 + (-1)^2 + 1^2 + 2^2 + 3^2 = 28. \]
We also have $E[X] = 0$ because the PMF is symmetric around 0.
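As a quick numerical aside (not part of the original solutions), the normalization constant and the mean found in part (a) can be checked in a few lines of Python; the names below are ours:

```python
# Numerical check of Problem 2.16(a): for p_X(x) = x^2 / a on
# x = -3, ..., 3, the normalizing constant should be a = 28 and
# E[X] should be 0 by symmetry.
support = range(-3, 4)
a = sum(x**2 for x in support)           # normalizing constant
pmf = {x: x**2 / a for x in support}     # note p_X(0) = 0

assert a == 28
assert abs(sum(pmf.values()) - 1.0) < 1e-12   # PMF sums to 1
mean = sum(x * p for x, p in pmf.items())
assert abs(mean) < 1e-12                      # E[X] = 0
print(a, mean)
```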
(b) If $z \in \{1, 4, 9\}$, then
\[ p_Z(z) = p_X(\sqrt{z}) + p_X(-\sqrt{z}) = \frac{z}{28} + \frac{z}{28} = \frac{z}{14}. \]
Otherwise $p_Z(z) = 0$.

(c) We have
\[ \text{var}(X) = E[Z] = \sum_z z\, p_Z(z) = \sum_{z \in \{1, 4, 9\}} \frac{z^2}{14} = 7. \]

(d) We have
\[ \text{var}(X) = \sum_x \bigl(x - E[X]\bigr)^2 p_X(x) = 1^2 \cdot \bigl(p_X(-1) + p_X(1)\bigr) + 2^2 \cdot \bigl(p_X(-2) + p_X(2)\bigr) + 3^2 \cdot \bigl(p_X(-3) + p_X(3)\bigr) = 2 \cdot \frac{1}{28} + 8 \cdot \frac{4}{28} + 18 \cdot \frac{9}{28} = 7. \]

Solution to Problem 2.17. If $X$ is the temperature in Celsius, the temperature in Fahrenheit is $Y = 32 + 9X/5$. Therefore,
\[ E[Y] = 32 + 9E[X]/5 = 32 + 18 = 50. \]
Also,
\[ \text{var}(Y) = (9/5)^2\,\text{var}(X), \]
where $\text{var}(X)$, the square of the given standard deviation of $X$, is equal to 100. Thus, the standard deviation of $Y$ is $(9/5) \cdot 10 = 18$. Hence a normal day in Fahrenheit is one for which the temperature is in the range $[32, 68]$.

Solution to Problem 2.18. We have
\[ p_X(x) = \begin{cases} 1/(b - a + 1), & \text{if } x = 2^k, \text{ where } a \le k \le b,\ k \text{ integer}, \\ 0, & \text{otherwise}, \end{cases} \]
and
\[ E[X] = \sum_{k=a}^{b} \frac{1}{b - a + 1} 2^k = \frac{2^a}{b - a + 1}(1 + 2 + \cdots + 2^{b-a}) = \frac{2^{b+1} - 2^a}{b - a + 1}. \]
Similarly,
\[ E[X^2] = \sum_{k=a}^{b} \frac{1}{b - a + 1}(2^k)^2 = \frac{4^{b+1} - 4^a}{3(b - a + 1)}, \]
and finally
\[ \text{var}(X) = \frac{4^{b+1} - 4^a}{3(b - a + 1)} - \left(\frac{2^{b+1} - 2^a}{b - a + 1}\right)^2. \]

Solution to Problem 2.19. We will find the expected gain for each strategy, by computing the expected number of questions until we find the prize.

(a) With this strategy, the probability of finding the location of the prize with $i$ questions, where $i = 1, \ldots, 8$, is $1/10$. The probability of finding the location with 9 questions is $2/10$. Therefore, the expected number of questions is
\[ \frac{2}{10} \cdot 9 + \frac{1}{10} \sum_{i=1}^{8} i = 5.4. \]

(b) It can be checked that for 4 of the 10 possible box numbers, exactly 4 questions will be needed, whereas for 6 of the 10 numbers, 3 questions will be needed. Therefore, with this strategy, the expected number of questions is
\[ \frac{4}{10} \cdot 4 + \frac{6}{10} \cdot 3 = 3.4. \]

Solution to Problem 2.20. The number $C$ of candy bars you need to eat is a geometric random variable with parameter $p$. Thus the mean is $E[C] = 1/p$, and the variance is $\text{var}(C) = (1 - p)/p^2$.

Solution to Problem 2.21. The expected value of the gain for a single game is infinite since if $X$ is your gain, then
\[ E[X] = \sum_{k=1}^{\infty} 2^k \cdot 2^{-k} = \sum_{k=1}^{\infty} 1 = \infty. \]
Thus if you are faced with the choice of playing for a given fee $f$ or not playing at all, and your objective is to make the choice that maximizes your expected net gain, you would be willing to pay any value of $f$. However, this is in strong disagreement with the behavior of individuals. In fact, experiments have shown that most people are willing to pay only about $20 to $30 to play the game. The discrepancy is due to a presumption that the amount one is willing to pay is determined by the expected gain. However, expected gain does not take into account a person's attitude towards risk taking.

Solution to Problem 2.22. (a) Let $X$ be the number of tosses until the game is over. Noting that $X$ is geometric with probability of success
\[ P(\{HT, TH\}) = p(1 - q) + q(1 - p), \]
we obtain
\[ p_X(k) = \bigl(1 - p(1 - q) - q(1 - p)\bigr)^{k-1}\bigl(p(1 - q) + q(1 - p)\bigr), \qquad k = 1, 2, \ldots \]
Therefore,
\[ E[X] = \frac{1}{p(1 - q) + q(1 - p)} \]
and
\[ \text{var}(X) = \frac{pq + (1 - p)(1 - q)}{\bigl(p(1 - q) + q(1 - p)\bigr)^2}. \]

(b) The probability that the last toss of the first coin is a head is
\[ P\bigl(HT \mid \{HT, TH\}\bigr) = \frac{p(1 - q)}{p(1 - q) + q(1 - p)}. \]

Solution to Problem 2.23. Let $X$ be the total number of tosses.

(a) For each toss after the first one, there is probability 1/2 that the result is the same as in the preceding toss. Thus, the random variable $X$ is of the form $X = Y + 1$, where $Y$ is a geometric random variable with parameter $p = 1/2$. It follows that
\[ p_X(k) = \begin{cases} (1/2)^{k-1}, & \text{if } k \ge 2, \\ 0, & \text{otherwise}, \end{cases} \]
and
\[ E[X] = E[Y] + 1 = \frac{1}{p} + 1 = 3. \]
We also have
\[ \text{var}(X) = \text{var}(Y) = \frac{1 - p}{p^2} = 2. \]
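As a brief aside (not part of the original solutions), the values $E[X] = 3$ and $\text{var}(X) = 2$ just derived are easy to confirm by simulation; the helper `sample_x` below is ours:

```python
# Monte Carlo check of Problem 2.23(a): X = Y + 1 with Y geometric
# of parameter p = 1/2, so E[X] = 3 and var(X) = 2.
import random

def sample_x(p=0.5):
    y = 1
    while random.random() >= p:   # count trials until the first success
        y += 1
    return y + 1                  # X = Y + 1

n = 200_000
xs = [sample_x() for _ in range(n)]
mean = sum(xs) / n
var = sum((x - mean) ** 2 for x in xs) / n
print(mean, var)   # should be close to 3 and 2
```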
(b) If $k > 2$, there are $k - 1$ sequences that lead to the event $\{X = k\}$. One such sequence is $H \cdots HT$, where $k - 1$ heads are followed by a tail. The other $k - 2$ possible sequences are of the form $T \cdots TH \cdots HT$, for various lengths of the initial $T \cdots T$ segment. For the case where $k = 2$, there is only one (hence $k - 1$) possible sequence that leads to the event $\{X = k\}$, namely the sequence $HT$. Therefore, for any $k \ge 2$,
\[ P(X = k) = (k - 1)(1/2)^k. \]
It follows that
\[ p_X(k) = \begin{cases} (k - 1)(1/2)^k, & \text{if } k \ge 2, \\ 0, & \text{otherwise}, \end{cases} \]
and
\[ E[X] = \sum_{k=2}^{\infty} k(k - 1)(1/2)^k = \sum_{k=1}^{\infty} k(k - 1)(1/2)^k = \sum_{k=1}^{\infty} k^2(1/2)^k - \sum_{k=1}^{\infty} k(1/2)^k = 6 - 2 = 4. \]
We have used here the equalities
\[ \sum_{k=1}^{\infty} k(1/2)^k = E[Y] = 2, \qquad \sum_{k=1}^{\infty} k^2(1/2)^k = E[Y^2] = \text{var}(Y) + E[Y]^2 = 2 + 2^2 = 6, \]
where $Y$ is a geometric random variable with parameter $p = 1/2$.

Solution to Problem 2.24. (a) There are 21 integer pairs $(x, y)$ in the region
\[ R = \bigl\{(x, y) \mid -2 \le x \le 4,\ -1 \le y - x \le 1\bigr\}, \]
so that the joint PMF of $X$ and $Y$ is
\[ p_{X,Y}(x, y) = \begin{cases} 1/21, & \text{if } (x, y) \in R, \\ 0, & \text{otherwise.} \end{cases} \]
For each $x$ in the range $[-2, 4]$, there are three possible values of $Y$. Thus, we have
\[ p_X(x) = \begin{cases} 3/21, & \text{if } x = -2, -1, 0, 1, 2, 3, 4, \\ 0, & \text{otherwise.} \end{cases} \]
The mean of $X$ is the midpoint of the range $[-2, 4]$: $E[X] = 1$. The marginal PMF of $Y$ is obtained by using the tabular method. We have
\[ p_Y(y) = \begin{cases} 1/21, & \text{if } y = -3, \\ 2/21, & \text{if } y = -2, \\ 3/21, & \text{if } y = -1, 0, 1, 2, 3, \\ 2/21, & \text{if } y = 4, \\ 1/21, & \text{if } y = 5, \\ 0, & \text{otherwise.} \end{cases} \]
The mean of $Y$ is
\[ E[Y] = \frac{1}{21} \cdot (-3 + 5) + \frac{2}{21} \cdot (-2 + 4) + \frac{3}{21} \cdot (-1 + 1 + 2 + 3) = 1. \]

(b) The profit is given by
\[ P = 100X + 200Y, \]
so that
\[ E[P] = 100 \cdot E[X] + 200 \cdot E[Y] = 100 \cdot 1 + 200 \cdot 1 = 300. \]

Solution to Problem 2.25. (a) Since all possible values of $(I, J)$ are equally likely, we have
\[ p_{I,J}(i, j) = \begin{cases} \dfrac{1}{\sum_{k=1}^{n} m_k}, & \text{if } j \le m_i, \\ 0, & \text{otherwise.} \end{cases} \]
The marginal PMFs are given by
\[ p_I(i) = \sum_{j=1}^{m} p_{I,J}(i, j) = \frac{m_i}{\sum_{k=1}^{n} m_k}, \qquad i = 1, \ldots, n, \]
\[ p_J(j) = \sum_{i=1}^{n} p_{I,J}(i, j) = \frac{l_j}{\sum_{k=1}^{n} m_k}, \qquad j = 1, \ldots, m, \]
where $l_j$ is the number of students that have answered question $j$, i.e., students $i$ with $j \le m_i$.

(b) The expected value of the score of student $i$ is the sum of the expected values $p_{ij}a + (1 - p_{ij})b$ of the scores on questions $j$ with $j = 1, \ldots, m_i$, i.e.,
\[ \sum_{j=1}^{m_i} \bigl(p_{ij}a + (1 - p_{ij})b\bigr). \]

Solution to Problem 2.26. (a) The possible values of the random variable $X$ are the ten numbers $101, \ldots, 110$, and the PMF is given by
\[ p_X(k) = \begin{cases} P(X > k - 1) - P(X > k), & \text{if } k = 101, \ldots, 110, \\ 0, & \text{otherwise.} \end{cases} \]
We have $P(X > 100) = 1$ and for $k = 101, \ldots, 110$,
\[ P(X > k) = P(X_1 > k, X_2 > k, X_3 > k) = P(X_1 > k)P(X_2 > k)P(X_3 > k) = \frac{(110 - k)^3}{10^3}. \]
It follows that
\[ p_X(k) = \begin{cases} \dfrac{(111 - k)^3 - (110 - k)^3}{10^3}, & \text{if } k = 101, \ldots, 110, \\ 0, & \text{otherwise.} \end{cases} \]
(An alternative solution is based on the notion of a CDF, which will be introduced in Chapter 3.)

(b) Since $X_i$ is uniformly distributed over the integers in the range $[101, 110]$, we have $E[X_i] = (101 + 110)/2 = 105.5$. The expected value of $X$ is
\[ E[X] = \sum_{k=-\infty}^{\infty} k \cdot p_X(k) = \sum_{k=101}^{110} k \cdot p_X(k) = \sum_{k=101}^{110} k \cdot \frac{(111 - k)^3 - (110 - k)^3}{10^3}. \]
The above expression can be evaluated to be equal to 103.025. The expected improvement is therefore $105.5 - 103.025 = 2.475$.

Solution to Problem 2.31. The marginal PMF $p_Y$ is given by the binomial formula
\[ p_Y(y) = \binom{4}{y} \left(\frac{1}{6}\right)^y \left(\frac{5}{6}\right)^{4-y}, \qquad y = 0, 1, \ldots, 4. \]
To compute the conditional PMF $p_{X|Y}$, note that given that $Y = y$, $X$ is the number of 1's in the remaining $4 - y$ rolls, each of which can take the 5 values $1, 3, 4, 5, 6$ with equal probability $1/5$.
Thus, the conditional PMF $p_{X|Y}$ is binomial with parameters $4 - y$ and $p = 1/5$:
\[ p_{X|Y}(x \mid y) = \binom{4 - y}{x} \left(\frac{1}{5}\right)^x \left(\frac{4}{5}\right)^{4-y-x}, \]
for all nonnegative integers $x$ and $y$ such that $0 \le x + y \le 4$. The joint PMF is now given by
\[ p_{X,Y}(x, y) = p_Y(y)p_{X|Y}(x \mid y) = \binom{4}{y} \left(\frac{1}{6}\right)^y \left(\frac{5}{6}\right)^{4-y} \binom{4 - y}{x} \left(\frac{1}{5}\right)^x \left(\frac{4}{5}\right)^{4-y-x}, \]
for all nonnegative integers $x$ and $y$ such that $0 \le x + y \le 4$. For other values of $x$ and $y$, we have $p_{X,Y}(x, y) = 0$.

Solution to Problem 2.32. Let $X_i$ be the random variable taking the value 1 or 0 depending on whether the first partner of the $i$th couple has survived or not. Let $Y_i$ be the corresponding random variable for the second partner of the $i$th couple. Then, we have $S = \sum_{i=1}^{m} X_i Y_i$, and by using the total expectation theorem,
\[ E[S \mid A = a] = \sum_{i=1}^{m} E[X_i Y_i \mid A = a] = mE[X_1 Y_1 \mid A = a] = mE[Y_1 \mid X_1 = 1, A = a]P(X_1 = 1 \mid A = a) = mP(Y_1 = 1 \mid X_1 = 1, A = a)P(X_1 = 1 \mid A = a). \]
We have
\[ P(Y_1 = 1 \mid X_1 = 1, A = a) = \frac{a - 1}{2m - 1}, \qquad P(X_1 = 1 \mid A = a) = \frac{a}{2m}. \]
Thus
\[ E[S \mid A = a] = m \frac{a - 1}{2m - 1} \cdot \frac{a}{2m} = \frac{a(a - 1)}{2(2m - 1)}. \]
Note that $E[S \mid A = a]$ does not depend on $p$.

Solution to Problem 2.38. (a) Let $X$ be the number of red lights that Alice encounters. The PMF of $X$ is binomial with $n = 4$ and $p = 1/2$. The mean and the variance of $X$ are $E[X] = np = 2$ and $\text{var}(X) = np(1 - p) = 4 \cdot (1/2) \cdot (1/2) = 1$.

(b) The variance of Alice's commuting time is the same as the variance of the time by which Alice is delayed by the red lights. This is equal to the variance of $2X$, which is $4\,\text{var}(X) = 4$.

Solution to Problem 2.39. Let $X_i$ be the number of eggs Harry eats on day $i$. Then, the $X_i$ are independent random variables, uniformly distributed over the set $\{1, \ldots, 6\}$. We have $X = \sum_{i=1}^{10} X_i$, and
\[ E[X] = E\left[\sum_{i=1}^{10} X_i\right] = \sum_{i=1}^{10} E[X_i] = 35. \]
Similarly, we have
\[ \text{var}(X) = \text{var}\left(\sum_{i=1}^{10} X_i\right) = \sum_{i=1}^{10} \text{var}(X_i), \]
since the $X_i$ are independent. Using the formula of Example 2.6, we have
\[ \text{var}(X_i) = \frac{(6 - 1)(6 - 1 + 2)}{12} \approx 2.9167, \]
so that $\text{var}(X) \approx 29.167$.

Solution to Problem 2.40. Associate a success with a paper that receives a grade that has not been received before. Let $X_i$ be the number of papers between the $i$th success and the $(i + 1)$st success. Then we have $X = 1 + \sum_{i=1}^{5} X_i$ and hence
\[ E[X] = 1 + \sum_{i=1}^{5} E[X_i]. \]
After receiving $i - 1$ different grades so far ($i - 1$ successes), each subsequent paper has probability $(6 - i)/6$ of receiving a grade that has not been received before. Therefore, the random variable $X_i$ is geometric with parameter $p_i = (6 - i)/6$, so $E[X_i] = 6/(6 - i)$. It follows that
\[ E[X] = 1 + \sum_{i=1}^{5} \frac{6}{6 - i} = 1 + 6\sum_{i=1}^{5} \frac{1}{i} = 14.7. \]

Solution to Problem 2.41. (a) The PMF of $X$ is the binomial PMF with parameters $p = 0.02$ and $n = 250$. The mean is $E[X] = np = 250 \cdot 0.02 = 5$. The desired probability is
\[ P(X = 5) = \binom{250}{5}(0.02)^5(0.98)^{245} = 0.1773. \]

(b) The Poisson approximation has parameter $\lambda = np = 5$, so the probability in (a) is approximated by
\[ e^{-\lambda} \frac{5^5}{5!} = 0.1755. \]

(c) Let $Y = \sum_{i=1}^{250} Y_i$ be the amount of money you pay in traffic tickets during the year, where $Y_i$ is the amount of money you pay on the $i$th day. The PMF of $Y_i$ is
\[ P(Y_i = y) = \begin{cases} 0.98, & \text{if } y = 0, \\ 0.01, & \text{if } y = 10, \\ 0.006, & \text{if } y = 20, \\ 0.004, & \text{if } y = 50. \end{cases} \]
The mean is
\[ E[Y_i] = 0.01 \cdot 10 + 0.006 \cdot 20 + 0.004 \cdot 50 = 0.42. \]
The variance is
\[ \text{var}(Y_i) = E[Y_i^2] - \bigl(E[Y_i]\bigr)^2 = 0.01 \cdot 10^2 + 0.006 \cdot 20^2 + 0.004 \cdot 50^2 - (0.42)^2 = 13.22. \]
The mean of $Y$ is
\[ E[Y] = 250 \cdot E[Y_i] = 105, \]
and, using the independence of the random variables $Y_i$, the variance of $Y$ is
\[ \text{var}(Y) = 250 \cdot \text{var}(Y_i) = 3305. \]
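As a quick aside (not part of the original solutions), the exact binomial value in part (a) and the Poisson approximation in part (b) can be reproduced directly; the short Python sketch below uses only the standard library:

```python
# Check of Problem 2.41(a)-(b): exact binomial P(X = 5) for
# n = 250, p = 0.02 versus the Poisson approximation with
# lambda = n * p = 5.
from math import comb, exp, factorial

n, p, k = 250, 0.02, 5
binom = comb(n, k) * p**k * (1 - p)**(n - k)
lam = n * p
poisson = exp(-lam) * lam**k / factorial(k)
print(round(binom, 4), round(poisson, 4))   # 0.1773 and 0.1755
```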
(d) The variance of the sample mean is
\[ \frac{p(1 - p)}{250}, \]
so assuming that $|p - \hat{p}|$ is within 5 times the standard deviation, the possible values of $p$ are those that satisfy $p \in [0, 1]$ and
\[ (p - 0.02)^2 \le \frac{25p(1 - p)}{250}. \]
This is a quadratic inequality that can be solved for the interval of values of $p$. After some calculation, the inequality can be written as
\[ 275p^2 - 35p + 0.1 \le 0, \]
which holds if and only if $p \in [0.0025, 0.1245]$.

Solution to Problem 2.42. (a) Noting that
\[ P(X_i = 1) = \frac{\text{Area}(S)}{\text{Area}\bigl([0, 1] \times [0, 1]\bigr)} = \text{Area}(S), \]
we obtain
\[ E[S_n] = E\left[\frac{1}{n}\sum_{i=1}^{n} X_i\right] = \frac{1}{n}\sum_{i=1}^{n} E[X_i] = E[X_i] = \text{Area}(S), \]
and
\[ \text{var}(S_n) = \text{var}\left(\frac{1}{n}\sum_{i=1}^{n} X_i\right) = \frac{1}{n^2}\sum_{i=1}^{n} \text{var}(X_i) = \frac{1}{n}\text{var}(X_i) = \frac{1}{n}\bigl(1 - \text{Area}(S)\bigr)\text{Area}(S), \]
which tends to zero as $n$ tends to infinity.

(b) We have
\[ S_n = \frac{n - 1}{n} S_{n-1} + \frac{1}{n} X_n. \]

(c) We can generate $S_{10000}$ (up to a certain precision) as follows:
1. Initialize $S$ to zero.
2. For $i = 1$ to 10000:
3. Randomly select two real numbers $a$ and $b$ (up to a certain precision) independently and uniformly from the interval $[0, 1]$.
4. If $(a - 0.5)^2 + (b - 0.5)^2 < 0.25$, set $x$ to 1; else set $x$ to 0.
5. Set $S := (i - 1)S/i + x/i$.
6. Return $S$.

By running the above algorithm, a value of $S_{10000}$ equal to 0.7783 was obtained (the exact number depends on the random number generator). We know from part (a) that the variance of $S_n$ tends to zero as $n$ tends to infinity, so the obtained value of $S_{10000}$ is an approximation of $E[S_{10000}]$. But $E[S_{10000}] = \text{Area}(S) = \pi/4$, and this leads us to the following approximation of $\pi$:
\[ 4 \cdot 0.7783 = 3.1132. \]

(d) We only need to modify the test done at step 4. We have to test whether or not
\[ 0 \le \cos(\pi a) + \sin(\pi b) \le 1. \]
The obtained approximation of the area was 0.3755.

CHAPTER 3

Solution to Problem 3.1. The random variable $Y = g(X)$ is discrete and its PMF is given by
\[ p_Y(1) = P(X \le 1/3) = 1/3, \qquad p_Y(2) = 1 - p_Y(1) = 2/3. \]
Thus,
\[ E[Y] = \frac{1}{3} \cdot 1 + \frac{2}{3} \cdot 2 = \frac{5}{3}. \]
The same result is obtained using the expected value rule:
\[ E[Y] = \int_0^1 g(x)f_X(x)\,dx = \int_0^{1/3} dx + \int_{1/3}^1 2\,dx = \frac{5}{3}. \]

Solution to Problem 3.2. We have
\[ \int_{-\infty}^{\infty} f_X(x)\,dx = \int_{-\infty}^{\infty} \frac{\lambda}{2} e^{-\lambda|x|}\,dx = 2 \cdot \frac{1}{2} \int_0^{\infty} \lambda e^{-\lambda x}\,dx = 2 \cdot \frac{1}{2} = 1, \]
where we have used the fact $\int_0^{\infty} \lambda e^{-\lambda x}\,dx = 1$, i.e., the normalization property of the exponential PDF. By symmetry of the PDF, we have $E[X] = 0$. We also have
\[ E[X^2] = \int_{-\infty}^{\infty} x^2 \frac{\lambda}{2} e^{-\lambda|x|}\,dx = \int_0^{\infty} x^2 \lambda e^{-\lambda x}\,dx = \frac{2}{\lambda^2}, \]
where we have used the fact that the second moment of the exponential PDF is $2/\lambda^2$. Thus
\[ \text{var}(X) = E[X^2] - E[X]^2 = \frac{2}{\lambda^2}. \]

Solution to Problem 3.5. Let $A = bh/2$ be the area of the given triangle, where $b$ is the length of the base, and $h$ is the height of the triangle. From the randomly chosen point, draw a line parallel to the base, and let $A_x$ be the area of the triangle thus formed. The height of this triangle is $h - x$ and its base has length $b(h - x)/h$. Thus $A_x = b(h - x)^2/(2h)$. For $x \in [0, h]$, we have
\[ F_X(x) = 1 - P(X > x) = 1 - \frac{A_x}{A} = 1 - \frac{b(h - x)^2/(2h)}{bh/2} = 1 - \left(\frac{h - x}{h}\right)^2, \]
while $F_X(x) = 0$ for $x < 0$ and $F_X(x) = 1$ for $x > h$. The PDF is obtained by differentiating the CDF. We have
\[ f_X(x) = \frac{dF_X}{dx}(x) = \begin{cases} \dfrac{2(h - x)}{h^2}, & \text{if } 0 \le x \le h, \\ 0, & \text{otherwise.} \end{cases} \]

Solution to Problem 3.6. Let $X$ be the waiting time and $Y$ be the number of customers found. For $x < 0$, we have $F_X(x) = 0$, while for $x \ge 0$,
\[ F_X(x) = P(X \le x) = \frac{1}{2}P(X \le x \mid Y = 0) + \frac{1}{2}P(X \le x \mid Y = 1). \]
Since
\[ P(X \le x \mid Y = 0) = 1, \qquad P(X \le x \mid Y = 1) = 1 - e^{-\lambda x}, \]
we obtain
\[ F_X(x) = \begin{cases} \dfrac{1}{2}(2 - e^{-\lambda x}), & \text{if } x \ge 0, \\ 0, & \text{otherwise.} \end{cases} \]
Note that the CDF has a discontinuity at $x = 0$. The random variable $X$ is neither discrete nor continuous.

Solution to Problem 3.7. (a) We first calculate the CDF of $X$.
For $x \in [0, r]$, we have
\[ F_X(x) = P(X \le x) = \frac{\pi x^2}{\pi r^2} = \left(\frac{x}{r}\right)^2. \]
For $x < 0$, we have $F_X(x) = 0$, and for $x > r$, we have $F_X(x) = 1$. By differentiating, we obtain the PDF
\[ f_X(x) = \begin{cases} \dfrac{2x}{r^2}, & \text{if } 0 \le x \le r, \\ 0, & \text{otherwise.} \end{cases} \]
We have
\[ E[X] = \int_0^r \frac{2x^2}{r^2}\,dx = \frac{2r}{3}. \]
Also,
\[ E[X^2] = \int_0^r \frac{2x^3}{r^2}\,dx = \frac{r^2}{2}, \]
so
\[ \text{var}(X) = E[X^2] - E[X]^2 = \frac{r^2}{2} - \frac{4r^2}{9} = \frac{r^2}{18}. \]

(b) Alvin gets a positive score in the range $[1/t, \infty)$ if and only if $X \le t$, and otherwise he gets a score of 0. Thus, for $s < 0$, the CDF of $S$ is $F_S(s) = 0$. For $0 \le s < 1/t$, we have
\[ F_S(s) = P(S \le s) = P(\text{Alvin's hit is outside the inner circle}) = 1 - P(X \le t) = 1 - \frac{t^2}{r^2}. \]
For $1/t < s$, the CDF of $S$ is given by
\[ F_S(s) = P(S \le s) = P(X \le t)P(S \le s \mid X \le t) + P(X > t)P(S \le s \mid X > t). \]
We have
\[ P(X \le t) = \frac{t^2}{r^2}, \qquad P(X > t) = 1 - \frac{t^2}{r^2}, \]
and since $S = 0$ when $X > t$,
\[ P(S \le s \mid X > t) = 1. \]
Furthermore,
\[ P(S \le s \mid X \le t) = P(1/X \le s \mid X \le t) = \frac{P(1/s \le X \le t)}{P(X \le t)} = \frac{\dfrac{\pi t^2 - \pi(1/s)^2}{\pi r^2}}{\dfrac{\pi t^2}{\pi r^2}} = 1 - \frac{1}{s^2 t^2}. \]
Combining the above equations, we obtain
\[ P(S \le s) = \frac{t^2}{r^2}\left(1 - \frac{1}{s^2 t^2}\right) + 1 - \frac{t^2}{r^2} = 1 - \frac{1}{s^2 r^2}. \]
Collecting the results of the preceding calculations, the CDF of $S$ is
\[ F_S(s) = \begin{cases} 0, & \text{if } s < 0, \\ 1 - \dfrac{t^2}{r^2}, & \text{if } 0 \le s < 1/t, \\ 1 - \dfrac{1}{s^2 r^2}, & \text{if } 1/t \le s. \end{cases} \]
Because $F_S$ has a discontinuity at $s = 0$, the random variable $S$ is not continuous.

Solution to Problem 3.8. (a) By the total probability theorem, we have
\[ F_X(x) = P(X \le x) = pP(Y \le x) + (1 - p)P(Z \le x) = pF_Y(x) + (1 - p)F_Z(x). \]
By differentiating, we obtain
\[ f_X(x) = pf_Y(x) + (1 - p)f_Z(x). \]

(b) Consider the random variable $Y$ that has PDF
\[ f_Y(y) = \begin{cases} \lambda e^{\lambda y}, & \text{if } y < 0, \\ 0, & \text{otherwise}, \end{cases} \]
and the random variable $Z$ that has PDF
\[ f_Z(z) = \begin{cases} \lambda e^{-\lambda z}, & \text{if } z \ge 0, \\ 0, & \text{otherwise.} \end{cases} \]
We note that the random variables $-Y$ and $Z$ are exponential. Using the CDF of the exponential random variable, we see that the CDFs of $Y$ and $Z$ are given by
\[ F_Y(y) = \begin{cases} e^{\lambda y}, & \text{if } y < 0, \\ 1, & \text{if } y \ge 0, \end{cases} \qquad F_Z(z) = \begin{cases} 0, & \text{if } z < 0, \\ 1 - e^{-\lambda z}, & \text{if } z \ge 0. \end{cases} \]
We have $f_X(x) = pf_Y(x) + (1 - p)f_Z(x)$, and consequently $F_X(x) = pF_Y(x) + (1 - p)F_Z(x)$. It follows that
\[ F_X(x) = \begin{cases} pe^{\lambda x}, & \text{if } x < 0, \\ p + (1 - p)(1 - e^{-\lambda x}), & \text{if } x \ge 0, \end{cases} = \begin{cases} pe^{\lambda x}, & \text{if } x < 0, \\ 1 - (1 - p)e^{-\lambda x}, & \text{if } x \ge 0. \end{cases} \]

Solution to Problem 3.11. (a) $X$ is a standard normal, so by using the normal table, we have $P(X \le 1.5) = \Phi(1.5) = 0.9332$. Also, $P(X \le -1) = 1 - \Phi(1) = 1 - 0.8413 = 0.1587$.

(b) The random variable $(Y - 1)/2$ is obtained by subtracting from $Y$ its mean (which is 1) and dividing by the standard deviation (which is 2), so the PDF of $(Y - 1)/2$ is the standard normal.

(c) We have, using the normal table,
\[ P(-1 \le Y \le 1) = P\bigl(-1 \le (Y - 1)/2 \le 0\bigr) = P(-1 \le Z \le 0) = P(0 \le Z \le 1) = \Phi(1) - \Phi(0) = 0.8413 - 0.5 = 0.3413, \]
where $Z$ is a standard normal random variable.

Solution to Problem 3.12. The random variable $Z = X/\sigma$ is a standard normal, so
\[ P(X \ge k\sigma) = P(Z \ge k) = 1 - \Phi(k). \]
From the normal tables we have
\[ \Phi(1) = 0.8413, \qquad \Phi(2) = 0.9772, \qquad \Phi(3) = 0.9986. \]
Thus $P(X \ge \sigma) = 0.1587$, $P(X \ge 2\sigma) = 0.0228$, $P(X \ge 3\sigma) = 0.0014$. We also have
\[ P\bigl(|X| \le k\sigma\bigr) = P\bigl(|Z| \le k\bigr) = \Phi(k) - P(Z \le -k) = \Phi(k) - \bigl(1 - \Phi(k)\bigr) = 2\Phi(k) - 1. \]
Using the normal table values above, we obtain
\[ P(|X| \le \sigma) = 0.6826, \qquad P(|X| \le 2\sigma) = 0.9544, \qquad P(|X| \le 3\sigma) = 0.9972. \]

Solution to Problem 3.13. Let $X$ and $Y$ be the temperature in Celsius and Fahrenheit, respectively, which are related by $X = 5(Y - 32)/9$. Therefore, 59 degrees Fahrenheit correspond to 15 degrees Celsius. So, if $Z$ is a standard normal random variable, we have, using $E[X] = \sigma_X = 10$,
\[ P(Y \le 59) = P(X \le 15) = P\left(Z \le \frac{15 - E[X]}{\sigma_X}\right) = P(Z \le 0.5) = \Phi(0.5). \]
From the normal tables we have $\Phi(0.5) = 0.6915$, so $P(Y \le 59) = 0.6915$.

Solution to Problem 3.15.
Solution to Problem 3.15. (a) Since the area of the semicircle is $\pi r^2/2$, the joint PDF of $X$ and $Y$ is $f_{X,Y}(x,y) = 2/(\pi r^2)$ for $(x,y)$ in the semicircle, and $f_{X,Y}(x,y) = 0$ otherwise.

(b) To find the marginal PDF of $Y$, we integrate the joint PDF over the range of $X$. For any possible value $y$ of $Y$, the range of possible values of $X$ is the interval $\big[-\sqrt{r^2-y^2},\, \sqrt{r^2-y^2}\big]$, and we have
$$f_Y(y) = \int_{-\sqrt{r^2-y^2}}^{\sqrt{r^2-y^2}} \frac{2}{\pi r^2}\,dx = \begin{cases}\dfrac{4\sqrt{r^2-y^2}}{\pi r^2}, & \text{if } 0 \le y \le r,\\ 0, & \text{otherwise.}\end{cases}$$
Thus,
$$E[Y] = \frac{4}{\pi r^2}\int_0^r y\sqrt{r^2-y^2}\,dy = \frac{4r}{3\pi},$$
where the integration is performed using the substitution $z = r^2 - y^2$.

(c) There is no need to find the marginal PDF $f_Y$ in order to find $E[Y]$. Let $D$ denote the semicircle. We have, using polar coordinates,
$$E[Y] = \iint_{(x,y)\in D} yf_{X,Y}(x,y)\,dx\,dy = \int_0^{\pi}\int_0^r \frac{2}{\pi r^2}\,s(\sin\theta)\,s\,ds\,d\theta = \frac{4r}{3\pi}.$$

Solution to Problem 3.16. Let $A$ be the event that the needle will cross a horizontal line, and let $B$ be the event that it will cross a vertical line. From the analysis of Example 3.11, we have that
$$P(A) = \frac{2l}{\pi a}, \qquad P(B) = \frac{2l}{\pi b}.$$
Since at most one horizontal (or vertical) line can be crossed, the expected number of horizontal lines crossed is $P(A)$ [or $P(B)$, respectively]. Thus the expected number of crossed lines is
$$P(A) + P(B) = \frac{2l}{\pi a} + \frac{2l}{\pi b} = \frac{2l(a+b)}{\pi ab}.$$
The probability that at least one line will be crossed is
$$P(A \cup B) = P(A) + P(B) - P(A \cap B).$$
Let $X$ (or $Y$) be the distance from the needle's center to the nearest horizontal (or vertical) line, and let $\Theta$ be the angle formed by the needle's axis and the horizontal lines, as in Example 3.11. We have
$$P(A \cap B) = P\Big(X \le \frac{l\sin\Theta}{2},\ Y \le \frac{l\cos\Theta}{2}\Big).$$
We model the triple $(X, Y, \Theta)$ as uniformly distributed over the set of all $(x, y, \theta)$ that satisfy $0 \le x \le a/2$, $0 \le y \le b/2$, and $0 \le \theta \le \pi/2$. Hence, within this set, we have
$$f_{X,Y,\Theta}(x,y,\theta) = \frac{8}{\pi ab}.$$
The probability $P(A \cap B)$ is
$$P\big(X \le (l/2)\sin\Theta,\ Y \le (l/2)\cos\Theta\big) = \frac{8}{\pi ab}\int_0^{\pi/2}\int_0^{(l/2)\cos\theta}\int_0^{(l/2)\sin\theta} dx\,dy\,d\theta = \frac{2l^2}{\pi ab}\int_0^{\pi/2}\cos\theta\sin\theta\,d\theta = \frac{l^2}{\pi ab}.$$
Thus we have
$$P(A \cup B) = P(A) + P(B) - P(A \cap B) = \frac{2l}{\pi a} + \frac{2l}{\pi b} - \frac{l^2}{\pi ab} = \frac{l}{\pi ab}\big(2(a+b) - l\big).$$
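These closed-form answers are easy to sanity-check by simulation; a sketch, with the needle length and grid spacings chosen arbitrarily for illustration (all names are ours):

```python
import math, random

def buffon_grid(l=0.5, a=1.0, b=1.5, n=200000, seed=0):
    """Estimate P(A), P(B), P(A and B) for a needle of length l on an a-by-b grid."""
    rng = random.Random(seed)
    hits_h = hits_v = hits_both = 0
    for _ in range(n):
        x = rng.uniform(0, a / 2)        # distance to nearest horizontal line
        y = rng.uniform(0, b / 2)        # distance to nearest vertical line
        th = rng.uniform(0, math.pi / 2)
        cross_h = x <= (l / 2) * math.sin(th)
        cross_v = y <= (l / 2) * math.cos(th)
        hits_h += cross_h
        hits_v += cross_v
        hits_both += cross_h and cross_v
    return hits_h / n, hits_v / n, hits_both / n

l, a, b = 0.5, 1.0, 1.5
pa, pb, pab = buffon_grid(l, a, b)
print(pa, 2 * l / (math.pi * a))        # P(A) vs 2l/(pi a)
print(pb, 2 * l / (math.pi * b))        # P(B) vs 2l/(pi b)
print(pab, l**2 / (math.pi * a * b))    # P(A and B) vs l^2/(pi a b)
```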
Solution to Problem 3.18. (a) We have
$$E[X] = \int_1^3 \frac{x^2}{4}\,dx = \frac{x^3}{12}\Big|_1^3 = \frac{27}{12} - \frac{1}{12} = \frac{26}{12} = \frac{13}{6},$$
$$P(A) = \int_2^3 \frac{x}{4}\,dx = \frac{x^2}{8}\Big|_2^3 = \frac{9}{8} - \frac{4}{8} = \frac{5}{8}.$$
We also have
$$f_{X|A}(x) = \begin{cases} f_X(x)/P(A), & \text{if } x \in A,\\ 0, & \text{otherwise,}\end{cases} = \begin{cases} 2x/5, & \text{if } 2 \le x \le 3,\\ 0, & \text{otherwise,}\end{cases}$$
from which we obtain
$$E[X \mid A] = \int_2^3 x\cdot\frac{2x}{5}\,dx = \frac{2x^3}{15}\Big|_2^3 = \frac{54}{15} - \frac{16}{15} = \frac{38}{15}.$$
(b) We have
$$E[Y] = E[X^2] = \int_1^3 \frac{x^3}{4}\,dx = 5, \qquad E[Y^2] = E[X^4] = \int_1^3 \frac{x^5}{4}\,dx = \frac{91}{3}.$$
Thus, $\mathrm{var}(Y) = E[Y^2] - E[Y]^2 = \frac{91}{3} - 5^2 = \frac{16}{3}$.

Solution to Problem 3.19. (a) We have, using the normalization property, $\int_1^2 cx^{-2}\,dx = 1$, or
$$c = \frac{1}{\int_1^2 x^{-2}\,dx} = 2.$$
(b) We have
$$P(A) = \int_{1.5}^2 2x^{-2}\,dx = \frac{1}{3},$$
and
$$f_{X|A}(x \mid A) = \begin{cases} 6x^{-2}, & \text{if } 1.5 < x \le 2,\\ 0, & \text{otherwise.}\end{cases}$$
(c) We have
$$E[Y \mid A] = E[X^2 \mid A] = \int_{1.5}^2 6x^{-2}x^2\,dx = 3, \qquad E[Y^2 \mid A] = E[X^4 \mid A] = \int_{1.5}^2 6x^{-2}x^4\,dx = \frac{37}{4},$$
and $\mathrm{var}(Y \mid A) = \frac{37}{4} - 3^2 = \frac{1}{4}$.

Solution to Problem 3.20. The expected value in question is
$$E[\text{Time}] = \big(5 + E[\text{stay of 2nd student}]\big)\cdot P(\text{1st stays no more than 5 minutes}) + \big(E[\text{stay of 1st} \mid \text{stay of 1st} \ge 5] + E[\text{stay of 2nd}]\big)\cdot P(\text{1st stays more than 5 minutes}).$$
We have $E[\text{stay of 2nd student}] = 30$, and, using the memorylessness property of the exponential distribution,
$$E[\text{stay of 1st} \mid \text{stay of 1st} \ge 5] = 5 + E[\text{stay of 1st}] = 35.$$
Also,
$$P(\text{1st student stays no more than 5 minutes}) = 1 - e^{-5/30}, \qquad P(\text{1st student stays more than 5 minutes}) = e^{-5/30}.$$
By substitution, we obtain
$$E[\text{Time}] = (5 + 30)\big(1 - e^{-5/30}\big) + (35 + 30)e^{-5/30} = 35 + 30e^{-5/30} = 60.394.$$

Solution to Problem 3.21. (a) We have $f_Y(y) = 1/l$ for $0 \le y \le l$. Furthermore, given the value $y$ of $Y$, the random variable $X$ is uniform in the interval $[0, y]$. Therefore, $f_{X|Y}(x \mid y) = 1/y$ for $0 \le x \le y$. We conclude that
$$f_{X,Y}(x,y) = f_Y(y)f_{X|Y}(x \mid y) = \begin{cases}\dfrac{1}{l}\cdot\dfrac{1}{y}, & 0 \le x \le y \le l,\\ 0, & \text{otherwise.}\end{cases}$$
(b) We have
$$f_X(x) = \int f_{X,Y}(x,y)\,dy = \int_x^l \frac{1}{ly}\,dy = \frac{1}{l}\ln(l/x), \qquad 0 \le x \le l.$$
(c) We have
$$E[X] = \int_0^l xf_X(x)\,dx = \int_0^l \frac{x}{l}\ln(l/x)\,dx = \frac{l}{4}.$$
(d) The fraction $Y/l$ of the stick that is left after the first break and the further fraction $X/Y$ of the stick that is left after the second break are independent. Furthermore, the random variables $Y$ and $X/Y$ are uniformly distributed over the sets $[0, l]$ and $[0, 1]$, respectively, so that $E[Y] = l/2$ and $E[X/Y] = 1/2$. Thus,
$$E[X] = E[Y]\,E\Big[\frac{X}{Y}\Big] = \frac{l}{2}\cdot\frac{1}{2} = \frac{l}{4}.$$

Solution to Problem 3.22. Define coordinates such that the stick extends from position 0 (the left end) to position 1 (the right end). Denote the position of the first break by $X$ and the position of the second break by $Y$. With method (ii), we have $X < Y$. With methods (i) and (iii), we assume that $X < Y$ and we later account for the case $Y < X$ by using symmetry. Under the assumption $X < Y$, the three pieces have lengths $X$, $Y - X$, and $1 - Y$. In order that they form a triangle, the sum of the lengths of any two pieces must exceed the length of the third piece. Thus they form a triangle if
$$X < (Y - X) + (1 - Y), \qquad (Y - X) < X + (1 - Y), \qquad (1 - Y) < X + (Y - X).$$
These conditions simplify to
$$X < 0.5, \qquad Y > 0.5, \qquad Y - X < 0.5.$$
Consider first method (i). For $X$ and $Y$ to satisfy these conditions, the pair $(X, Y)$ must lie within the triangle with vertices $(0, 0.5)$, $(0.5, 0.5)$, and $(0.5, 1)$. This triangle has area $1/8$. Thus the probability of the event that the three pieces form a triangle and $X < Y$ is $1/8$. By symmetry, the probability of the event that the three pieces form a triangle and $X > Y$ is also $1/8$. Since these two events are disjoint and their union is the event that the three pieces form a triangle, the desired probability is $1/8 + 1/8 = 1/4$.

Consider next method (ii). Since $X$ is uniformly distributed on $[0, 1]$ and $Y$ is uniformly distributed on $[X, 1]$, we have, for $0 \le x \le y \le 1$,
$$f_{X,Y}(x,y) = f_X(x)\,f_{Y|X}(y \mid x) = 1\cdot\frac{1}{1-x}.$$
The desired probability is the probability of the triangle with vertices $(0, 0.5)$, $(0.5, 0.5)$, and $(0.5, 1)$:
$$\int_0^{1/2}\int_{1/2}^{x+1/2} f_{X,Y}(x,y)\,dy\,dx = \int_0^{1/2}\int_{1/2}^{x+1/2} \frac{1}{1-x}\,dy\,dx = \int_0^{1/2} \frac{x}{1-x}\,dx = -\frac{1}{2} + \ln 2.$$
Consider finally method (iii), and consider first the case $X < 0.5$. Then the larger piece after the first break is the piece on the right. Thus, as in method (ii), $Y$ is uniformly distributed on $[X, 1]$ and the integral above gives the probability of a triangle being formed and $X < 0.5$. Considering also the case $X > 0.5$ doubles the probability, giving a final answer of $-1 + 2\ln 2$.
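The three answers ($1/4$, $\ln 2 - 1/2 \approx 0.193$, and $2\ln 2 - 1 \approx 0.386$) can be checked by simulating the three breaking methods; a minimal sketch (all names are ours):

```python
import math, random

rng = random.Random(1)

def forms_triangle(p1, p2, p3):
    """Triangle inequality for the three piece lengths."""
    return p1 < p2 + p3 and p2 < p1 + p3 and p3 < p1 + p2

def trial(method):
    x = rng.random()
    if method == 1:                       # two independent uniform breaks
        y = rng.random()
    elif method == 2:                     # second break on the right-hand piece
        y = rng.uniform(x, 1)
    else:                                 # method 3: second break on the larger piece
        y = rng.uniform(x, 1) if x < 0.5 else rng.uniform(0, x)
    lo, hi = min(x, y), max(x, y)
    return forms_triangle(lo, hi - lo, 1 - hi)

n = 200000
for m, exact in [(1, 0.25), (2, math.log(2) - 0.5), (3, 2 * math.log(2) - 1)]:
    est = sum(trial(m) for _ in range(n)) / n
    print(m, est, exact)
```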
Solution to Problem 3.23. (a) We interpret the word "triangle" to mean the closed triangle, that is, its boundary is also included. The area of the triangle is $1/2$, so that $f_{X,Y}(x,y) = 2$ on the triangle indicated in Fig. 3.1(a), and zero everywhere else.

[Figure 3.1: (a) The joint PDF. (b) The conditional density of $X$.]

(b) We have
$$f_Y(y) = \int_{-\infty}^{\infty} f_{X,Y}(x,y)\,dx = \int_0^{1-y} 2\,dx = 2(1-y), \qquad 0 \le y \le 1.$$
(c) We have
$$f_{X|Y}(x \mid y) = \frac{f_{X,Y}(x,y)}{f_Y(y)} = \frac{1}{1-y}, \qquad 0 \le y < 1,\ 0 \le x \le 1-y.$$
When $y = 1$, $f_Y(y)$ is zero and so the conditional PDF is undefined. Furthermore, given any $y$ with $0 \le y < 1$, $f_{X|Y}(x \mid y) = 0$ when $x$ lies outside the interval $[0, 1-y]$. Finally, the conditional PDF is undefined for $y \notin [0, 1]$. The conditional density is shown in the figure. Intuitively, since the joint PDF is constant, the conditional PDF (which is a "slice" of the joint, at some fixed $y$) is also constant. Therefore, the conditional PDF must be a uniform distribution. Given that $Y = y$, $X$ ranges from 0 to $1-y$. Therefore, for the PDF to integrate to 1, its height must be equal to $1/(1-y)$, in agreement with the figure.

(d) For $y > 1$ or $y < 0$, the conditional PDF is undefined, since these values of $y$ are impossible. For $0 \le y < 1$, the conditional mean $E[X \mid Y = y]$ is obtained using the uniform PDF in Fig. 3.1(b), and we have
$$E[X \mid Y = y] = \frac{1-y}{2}, \qquad 0 \le y < 1.$$
For $y = 1$, $X$ must be equal to 0, with certainty, so $E[X \mid Y = 1] = 0$. Thus, the above formula is also valid when $y = 1$. The conditional expectation is undefined when $y$ is outside $[0, 1]$. The total expectation theorem yields
$$E[X] = \int_0^1 \frac{1-y}{2}f_Y(y)\,dy = \frac{1}{2} - \frac{1}{2}\int_0^1 yf_Y(y)\,dy = \frac{1 - E[Y]}{2}.$$
(e) Because of symmetry, we must have $E[X] = E[Y]$. Therefore, $E[X] = \big(1 - E[X]\big)/2$, which yields $E[X] = 1/3$.

Solution to Problem 3.24. The conditional density of $X$ given that $Y = y$ is uniform over the interval $[0, (2-y)/2]$, and we have
$$E[X \mid Y = y] = \frac{2-y}{4}, \qquad 0 \le y \le 2.$$
Therefore, using the total expectation theorem,
$$E[X] = \int_0^2 \frac{2-y}{4}f_Y(y)\,dy = \frac{2}{4} - \frac{1}{4}\int_0^2 yf_Y(y)\,dy = \frac{2 - E[Y]}{4}.$$
Similarly, the conditional density of $Y$ given that $X = x$ is uniform over the interval $[0, 2(1-x)]$, and we have $E[Y \mid X = x] = 1 - x$ for $0 \le x \le 1$. Therefore,
$$E[Y] = \int_0^1 (1-x)f_X(x)\,dx = 1 - E[X].$$
By solving the two equations above for $E[X]$ and $E[Y]$, we obtain
$$E[X] = \frac{1}{3}, \qquad E[Y] = \frac{2}{3}.$$

Solution to Problem 3.25. Let $C$ denote the event that $X^2 + Y^2 \ge c^2$. The probability $P(C)$ can be calculated using polar coordinates, as follows:
$$P(C) = \frac{1}{2\pi\sigma^2}\int_0^{2\pi}\int_c^{\infty} re^{-r^2/2\sigma^2}\,dr\,d\theta = \frac{1}{\sigma^2}\int_c^{\infty} re^{-r^2/2\sigma^2}\,dr = e^{-c^2/2\sigma^2}.$$
Thus, for $(x,y) \in C$,
$$f_{X,Y|C}(x,y) = \frac{f_{X,Y}(x,y)}{P(C)} = \frac{1}{2\pi\sigma^2}e^{-\frac{1}{2\sigma^2}(x^2 + y^2 - c^2)}.$$

Solution to Problem 3.34. (a) Let $A$ be the event that the first coin toss resulted in heads. To calculate the probability $P(A)$, we use the continuous version of the total probability theorem:
$$P(A) = \int_0^1 P(A \mid P = p)f_P(p)\,dp = \int_0^1 p^2e^p\,dp,$$
which after some calculation yields $P(A) = e - 2$.

(b) Using Bayes' rule,
$$f_{P|A}(p) = \frac{P(A \mid P = p)f_P(p)}{P(A)} = \begin{cases}\dfrac{p^2e^p}{e - 2}, & 0 \le p \le 1,\\ 0, & \text{otherwise.}\end{cases}$$
(c) Let $B$ be the event that the second toss resulted in heads. We have
$$P(B \mid A) = \int_0^1 P(B \mid P = p, A)f_{P|A}(p)\,dp = \int_0^1 P(B \mid P = p)f_{P|A}(p)\,dp = \frac{1}{e-2}\int_0^1 p^3e^p\,dp.$$
After some calculation, this yields
$$P(B \mid A) = \frac{6 - 2e}{e - 2} = \frac{0.564}{0.718} \approx 0.786.$$

CHAPTER 4

Solution to Problem 4.1. Let $Y = \sqrt{|X|}$. We have, for $0 \le y \le 1$,
$$F_Y(y) = P(Y \le y) = P\big(\sqrt{|X|} \le y\big) = P(-y^2 \le X \le y^2) = y^2,$$
and therefore, by differentiation, $f_Y(y) = 2y$ for $0 \le y \le 1$.

Let $Y = -\ln|X|$. We have, for $y \ge 0$,
$$F_Y(y) = P(Y \le y) = P\big(\ln|X| \ge -y\big) = P(X \ge e^{-y}) + P(X \le -e^{-y}) = 1 - e^{-y},$$
and therefore, by differentiation, $f_Y(y) = e^{-y}$ for $y \ge 0$, so $Y$ is an exponential random variable with parameter 1. This exercise provides a method for simulating an exponential random variable using a sample of a uniform random variable.
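The closing remark of Problem 4.1 is the inverse-transform method for the exponential distribution; a minimal sketch (we use $1 - U$ rather than $U$ to avoid $\log 0$, an implementation detail not in the solution):

```python
import math, random

rng = random.Random(0)

def exp_sample(lam=1.0):
    """Exponential(lam) sample from a uniform sample, via Y = -ln(U)/lam."""
    u = 1.0 - rng.random()        # uniform on (0, 1], playing the role of |X|
    return -math.log(u) / lam

samples = [exp_sample(2.0) for _ in range(100000)]
print(sum(samples) / len(samples))   # should be close to 1/lam = 0.5
```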
Solution to Problem 4.2. Let $Y = e^X$. We first find the CDF of $Y$, and then take the derivative to find its PDF. We have
$$P(Y \le y) = P(e^X \le y) = \begin{cases} P(X \le \ln y), & \text{if } y > 0,\\ 0, & \text{otherwise.}\end{cases}$$
Therefore,
$$f_Y(y) = \begin{cases}\dfrac{d}{dy}F_X(\ln y), & \text{if } y > 0,\\ 0, & \text{otherwise,}\end{cases} = \begin{cases}\dfrac{1}{y}f_X(\ln y), & \text{if } y > 0,\\ 0, & \text{otherwise.}\end{cases}$$
When $X$ is uniform on $[0, 1]$, the answer simplifies to
$$f_Y(y) = \begin{cases} 1/y, & \text{if } 1 < y \le e,\\ 0, & \text{otherwise.}\end{cases}$$

Solution to Problem 4.3. Let $Y = |X|^{1/3}$. We have
$$F_Y(y) = P(Y \le y) = P\big(|X|^{1/3} \le y\big) = P(-y^3 \le X \le y^3) = F_X(y^3) - F_X(-y^3),$$
and therefore, by differentiating,
$$f_Y(y) = 3y^2f_X(y^3) + 3y^2f_X(-y^3), \qquad y > 0.$$
Let $Y = |X|^{1/4}$. We have
$$F_Y(y) = P(Y \le y) = P\big(|X|^{1/4} \le y\big) = P(-y^4 \le X \le y^4) = F_X(y^4) - F_X(-y^4),$$
and therefore, by differentiating,
$$f_Y(y) = 4y^3f_X(y^4) + 4y^3f_X(-y^4), \qquad y > 0.$$

Solution to Problem 4.4. We have
$$F_Y(y) = \begin{cases} 0, & \text{if } y \le 0,\\ P(5-y \le X \le 5) + P(20-y \le X \le 20), & \text{if } 0 \le y \le 5,\\ P(20-y \le X \le 20), & \text{if } 5 < y \le 15,\\ 1, & \text{if } y > 15.\end{cases}$$
Using the CDF of $X$, we have
$$P(5-y \le X \le 5) = F_X(5) - F_X(5-y), \qquad P(20-y \le X \le 20) = F_X(20) - F_X(20-y).$$
Thus,
$$F_Y(y) = \begin{cases} 0, & \text{if } y \le 0,\\ F_X(5) - F_X(5-y) + F_X(20) - F_X(20-y), & \text{if } 0 \le y \le 5,\\ F_X(20) - F_X(20-y), & \text{if } 5 < y \le 15,\\ 1, & \text{if } y > 15.\end{cases}$$
Differentiating, we obtain
$$f_Y(y) = \begin{cases} f_X(5-y) + f_X(20-y), & \text{if } 0 \le y \le 5,\\ f_X(20-y), & \text{if } 5 < y \le 15,\\ 0, & \text{otherwise,}\end{cases}$$
consistent with the result of Example 3.14.

Solution to Problem 4.5. Let $Z = |X - Y|$. We have
$$F_Z(z) = P\big(|X - Y| \le z\big) = 1 - (1 - z)^2.$$
(To see this, draw the event of interest as a subset of the unit square and calculate its area.) Taking derivatives, the desired PDF is
$$f_Z(z) = \begin{cases} 2(1-z), & \text{if } 0 \le z \le 1,\\ 0, & \text{otherwise.}\end{cases}$$

Solution to Problem 4.6. Let $Z = |X - Y|$. To find the CDF, we integrate the joint PDF of $X$ and $Y$ over the region where $|X - Y| \le z$ for a given $z$. In the case where $z \le 0$ or $z \ge 1$, the CDF is 0 and 1, respectively. In the case where $0 < z < 1$, we have
$$F_Z(z) = P(X - Y \le z,\ X \ge Y) + P(Y - X \le z,\ X < Y).$$
The events $\{X - Y \le z,\ X \ge Y\}$ and $\{Y - X \le z,\ X < Y\}$ can be identified with subsets of the given triangle. After some calculation using triangle geometry, the areas of these subsets can be verified to be $z/2 + z^2/4$ and $1/4 - (1-z)^2/4$, respectively. Therefore, since $f_{X,Y}(x,y) = 1$ for all $(x,y)$ in the given triangle,
$$F_Z(z) = \Big(\frac{z}{2} + \frac{z^2}{4}\Big) + \Big(\frac{1}{4} - \frac{(1-z)^2}{4}\Big) = z.$$
Thus,
$$F_Z(z) = \begin{cases} 0, & \text{if } z \le 0,\\ z, & \text{if } 0 < z < 1,\\ 1, & \text{if } z \ge 1.\end{cases}$$
By taking the derivative with respect to $z$, we obtain
$$f_Z(z) = \begin{cases} 1, & \text{if } 0 \le z \le 1,\\ 0, & \text{otherwise.}\end{cases}$$

Solution to Problem 4.7. Let $X$ and $Y$ be the two points, and let $Z = \max\{X, Y\}$. For any $t \in [0, 1]$, we have
$$P(Z \le t) = P(X \le t)P(Y \le t) = t^2,$$
and by differentiating, the corresponding PDF is
$$f_Z(z) = \begin{cases} 2z, & \text{if } 0 \le z \le 1,\\ 0, & \text{otherwise.}\end{cases}$$
Thus, we have
$$E[Z] = \int_{-\infty}^{\infty} zf_Z(z)\,dz = \int_0^1 2z^2\,dz = \frac{2}{3}.$$
The distance of the larger of the two points to the right endpoint is $1 - Z$, and its expected value is $1 - E[Z] = 1/3$. A symmetric argument shows that the distance of the smaller of the two points to the left endpoint is also $1/3$. Therefore, the expected distance between the two points must also be $1/3$.

Solution to Problem 4.8. Note that $f_X(x)$ and $f_Y(z - x)$ are nonzero only when $x \ge 0$ and $x \le z$, respectively. Thus, in the convolution formula, we only need to integrate for $x$ ranging from 0 to $z$:
$$f_Z(z) = \int_{-\infty}^{\infty} f_X(x)f_Y(z-x)\,dx = \int_0^z \lambda e^{-\lambda x}\lambda e^{-\lambda(z-x)}\,dx = \lambda^2e^{-\lambda z}\int_0^z dx = \lambda^2ze^{-\lambda z}.$$
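The resulting Erlang density $f_Z(z) = \lambda^2ze^{-\lambda z}$ can be checked against a histogram of simulated sums; a sketch, with $\lambda$ chosen arbitrarily:

```python
import math, random

rng = random.Random(0)
lam = 1.5
n = 200000
z = [rng.expovariate(lam) + rng.expovariate(lam) for _ in range(n)]

# Compare the empirical density on a few bins with lam^2 * z * exp(-lam * z).
width = 0.25
for k in range(8):
    lo, hi = k * width, (k + 1) * width
    emp = sum(lo <= v < hi for v in z) / (n * width)
    mid = (lo + hi) / 2
    print(f"[{lo:.2f},{hi:.2f})  empirical {emp:.3f}  formula {lam**2 * mid * math.exp(-lam * mid):.3f}")
```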
Solution to Problem 4.9. Let $Z = X - Y$. We will first calculate the CDF $F_Z(z)$ by considering separately the cases $z \ge 0$ and $z < 0$. For $z \ge 0$, we have (see the left side of Fig. 4.6)
$$F_Z(z) = P(X - Y \le z) = 1 - P(X - Y > z) = 1 - \int_0^{\infty}\Big(\int_{z+y}^{\infty} f_{X,Y}(x,y)\,dx\Big)dy = 1 - \int_0^{\infty}\mu e^{-\mu y}\Big(\int_{z+y}^{\infty}\lambda e^{-\lambda x}\,dx\Big)dy = 1 - \int_0^{\infty}\mu e^{-\mu y}e^{-\lambda(z+y)}\,dy = 1 - e^{-\lambda z}\int_0^{\infty}\mu e^{-(\lambda+\mu)y}\,dy = 1 - \frac{\mu}{\lambda+\mu}e^{-\lambda z}.$$
For the case $z < 0$, we can use the preceding calculation with the roles of $X$ and $Y$ (and of $\lambda$ and $\mu$) interchanged:
$$F_Z(z) = P(Y - X \ge -z) = 1 - \Big(1 - \frac{\lambda}{\lambda+\mu}e^{-\mu(-z)}\Big) = \frac{\lambda}{\lambda+\mu}e^{\mu z}.$$
Combining the two cases $z \ge 0$ and $z < 0$, we obtain
$$F_Z(z) = \begin{cases} 1 - \dfrac{\mu}{\lambda+\mu}e^{-\lambda z}, & \text{if } z \ge 0,\\[1mm] \dfrac{\lambda}{\lambda+\mu}e^{\mu z}, & \text{if } z < 0.\end{cases}$$
The PDF of $Z$ is obtained by differentiating its CDF. We have
$$f_Z(z) = \begin{cases}\dfrac{\lambda\mu}{\lambda+\mu}e^{-\lambda z}, & \text{if } z \ge 0,\\[1mm] \dfrac{\lambda\mu}{\lambda+\mu}e^{\mu z}, & \text{if } z < 0.\end{cases}$$
For an alternative solution, fix some $z \ge 0$ and note that $f_Y(x - z)$ is nonzero only when $x \ge z$. Thus,
$$f_{X-Y}(z) = \int_{-\infty}^{\infty} f_X(x)f_Y(x-z)\,dx = \int_z^{\infty}\lambda e^{-\lambda x}\mu e^{-\mu(x-z)}\,dx = \lambda\mu e^{\mu z}\int_z^{\infty} e^{-(\lambda+\mu)x}\,dx = \lambda\mu e^{\mu z}\cdot\frac{1}{\lambda+\mu}e^{-(\lambda+\mu)z} = \frac{\lambda\mu}{\lambda+\mu}e^{-\lambda z},$$
in agreement with the earlier answer. The solution for the case $z < 0$ is obtained with a similar calculation.

Solution to Problem 4.10. We first note that the range of possible values of $Z$ are the integers from the range $[1, 5]$. Thus we have $p_Z(z) = 0$ if $z \ne 1, 2, 3, 4, 5$. We calculate $p_Z(z)$ for each of the values $z = 1, 2, 3, 4, 5$, using the convolution formula. We have
$$p_Z(1) = \sum_x p_X(x)p_Y(1-x) = p_X(1)p_Y(0) = \frac{1}{3}\cdot\frac{1}{2} = \frac{1}{6},$$
where the second equality above is based on the fact that for $x \ne 1$ either $p_X(x)$ or $p_Y(1-x)$ (or both) is zero. Similarly, we obtain
$$p_Z(2) = p_X(1)p_Y(1) + p_X(2)p_Y(0) = \frac{1}{3}\cdot\frac{1}{3} + \frac{1}{3}\cdot\frac{1}{2} = \frac{5}{18},$$
$$p_Z(3) = p_X(1)p_Y(2) + p_X(2)p_Y(1) + p_X(3)p_Y(0) = \frac{1}{3}\cdot\frac{1}{6} + \frac{1}{3}\cdot\frac{1}{3} + \frac{1}{3}\cdot\frac{1}{2} = \frac{1}{3},$$
$$p_Z(4) = p_X(2)p_Y(2) + p_X(3)p_Y(1) = \frac{1}{3}\cdot\frac{1}{6} + \frac{1}{3}\cdot\frac{1}{3} = \frac{1}{6}, \qquad p_Z(5) = p_X(3)p_Y(2) = \frac{1}{3}\cdot\frac{1}{6} = \frac{1}{18}.$$

Solution to Problem 4.11. The convolution of two Poisson PMFs is of the form
$$\sum_{i=0}^k \frac{\lambda^ie^{-\lambda}}{i!}\cdot\frac{\mu^{k-i}e^{-\mu}}{(k-i)!} = e^{-(\lambda+\mu)}\sum_{i=0}^k \frac{\lambda^i\mu^{k-i}}{i!\,(k-i)!}.$$
We have
$$(\lambda+\mu)^k = \sum_{i=0}^k \binom{k}{i}\lambda^i\mu^{k-i} = \sum_{i=0}^k \frac{k!}{i!\,(k-i)!}\lambda^i\mu^{k-i}.$$
Thus, the desired PMF is
$$\frac{e^{-(\lambda+\mu)}}{k!}\sum_{i=0}^k \frac{k!\,\lambda^i\mu^{k-i}}{i!\,(k-i)!} = \frac{e^{-(\lambda+\mu)}}{k!}(\lambda+\mu)^k,$$
which is a Poisson PMF with mean $\lambda + \mu$.

Solution to Problem 4.12. Let $V = X + Y$. As in Example 4.10, the PDF of $V$ is
$$f_V(v) = \begin{cases} v, & 0 \le v \le 1,\\ 2-v, & 1 \le v \le 2,\\ 0, & \text{otherwise.}\end{cases}$$
Let $W = X + Y + Z = V + Z$. We convolve the PDFs $f_V$ and $f_Z$, to obtain
$$f_W(w) = \int f_V(v)f_Z(w-v)\,dv.$$
We first need to determine the limits of the integration. Since $f_V(v) = 0$ outside the range $0 \le v \le 2$, and $f_Z(w-v) = 0$ outside the range $0 \le w-v \le 1$, we see that the integrand can be nonzero only if $0 \le v \le 2$ and $w-1 \le v \le w$. We consider three separate cases. If $w \le 1$, we have
$$f_W(w) = \int_0^w f_V(v)f_Z(w-v)\,dv = \int_0^w v\,dv = \frac{w^2}{2}.$$
If $1 \le w \le 2$, we have
$$f_W(w) = \int_{w-1}^w f_V(v)f_Z(w-v)\,dv = \int_{w-1}^1 v\,dv + \int_1^w (2-v)\,dv = \frac{1}{2} - \frac{(w-1)^2}{2} - \frac{(w-2)^2}{2} + \frac{1}{2}.$$
Finally, if $2 \le w \le 3$, we have
$$f_W(w) = \int_{w-1}^2 f_V(v)f_Z(w-v)\,dv = \int_{w-1}^2 (2-v)\,dv = \frac{(3-w)^2}{2}.$$
To summarize,
$$f_W(w) = \begin{cases} w^2/2, & 0 \le w \le 1,\\ 1 - (w-1)^2/2 - (2-w)^2/2, & 1 \le w \le 2,\\ (3-w)^2/2, & 2 \le w \le 3,\\ 0, & \text{otherwise.}\end{cases}$$
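The discrete convolutions in Problems 4.10 and 4.11 follow one mechanical pattern; here is a minimal sketch that reproduces the PMF of Problem 4.10 exactly, using rational arithmetic:

```python
from fractions import Fraction as F

def convolve(pX, pY):
    """PMF of Z = X + Y for independent discrete X, Y given as dicts value -> probability."""
    pZ = {}
    for x, px in pX.items():
        for y, py in pY.items():
            pZ[x + y] = pZ.get(x + y, 0) + px * py
    return pZ

pX = {1: F(1, 3), 2: F(1, 3), 3: F(1, 3)}
pY = {0: F(1, 2), 1: F(1, 3), 2: F(1, 6)}
print(convolve(pX, pY))   # {1: 1/6, 2: 5/18, 3: 1/3, 4: 1/6, 5: 1/18}
```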
Solution to Problem 4.13. We have $X - Y = X + Z - (a+b)$, where $Z = a + b - Y$ is distributed identically with $X$ and $Y$. Thus, the PDF of $X + Z$ is the same as the PDF of $X + Y$, and the PDF of $X - Y$ is obtained by shifting the PDF of $X + Y$ to the left by $a + b$.

Solution to Problem 4.14. For all $z \ge 0$, we have, using the independence of $X$ and $Y$, and the form of the exponential CDF,
$$F_Z(z) = P\big(\min\{X, Y\} \le z\big) = 1 - P\big(\min\{X, Y\} > z\big) = 1 - P(X > z,\ Y > z) = 1 - P(X > z)P(Y > z) = 1 - e^{-\lambda z}e^{-\mu z} = 1 - e^{-(\lambda+\mu)z}.$$
This is recognized as the exponential CDF with parameter $\lambda + \mu$. Thus, the minimum of two independent exponentials with parameters $\lambda$ and $\mu$ is an exponential with parameter $\lambda + \mu$.

Solution to Problem 4.17. Because the covariance remains unchanged when we add a constant to a random variable, we can assume without loss of generality that $X$ and $Y$ have zero mean. We then have
$$\mathrm{cov}(X - Y, X + Y) = E\big[(X-Y)(X+Y)\big] = E[X^2] - E[Y^2] = \mathrm{var}(X) - \mathrm{var}(Y) = 0,$$
since $X$ and $Y$ were assumed to have the same variance.

Solution to Problem 4.18. We have
$$\mathrm{cov}(R, S) = E[RS] - E[R]E[S] = E[WX + WY + X^2 + XY] = E[X^2] = 1,$$
and $\mathrm{var}(R) = \mathrm{var}(S) = 2$, so
$$\rho(R, S) = \frac{\mathrm{cov}(R, S)}{\sqrt{\mathrm{var}(R)\mathrm{var}(S)}} = \frac{1}{2}.$$
We also have
$$\mathrm{cov}(R, T) = E[RT] - E[R]E[T] = E[WY + WZ + XY + XZ] = 0,$$
so that $\rho(R, T) = 0$.

Solution to Problem 4.19. To compute the correlation coefficient
$$\rho(X, Y) = \frac{\mathrm{cov}(X, Y)}{\sigma_X\sigma_Y},$$
we first compute the covariance:
$$\mathrm{cov}(X, Y) = E[XY] - E[X]E[Y] = E[aX + bX^2 + cX^3] - E[X]E[Y] = aE[X] + bE[X^2] + cE[X^3] = b.$$
We also have
$$\mathrm{var}(Y) = \mathrm{var}(a + bX + cX^2) = E\big[(a + bX + cX^2)^2\big] - \big(E[a + bX + cX^2]\big)^2 = (a^2 + 2ac + b^2 + 3c^2) - (a^2 + c^2 + 2ac) = b^2 + 2c^2,$$
and therefore, using the fact $\mathrm{var}(X) = 1$,
$$\rho(X, Y) = \frac{b}{\sqrt{b^2 + 2c^2}}.$$

Solution to Problem 4.22. If the gambler's fortune at the beginning of a round is $a$, the gambler bets $a(2p-1)$. He therefore gains $a(2p-1)$ with probability $p$, and loses $a(2p-1)$ with probability $1-p$. Thus, his expected fortune at the end of a round is
$$a\big(1 + p(2p-1) - (1-p)(2p-1)\big) = a\big(1 + (2p-1)^2\big).$$
Let $X_k$ be the fortune after the $k$th round. Using the preceding calculation, we have
$$E[X_{k+1} \mid X_k] = \big(1 + (2p-1)^2\big)X_k.$$
Using the law of iterated expectations, we obtain
$$E[X_{k+1}] = \big(1 + (2p-1)^2\big)E[X_k],$$
and $E[X_1] = \big(1 + (2p-1)^2\big)x$. We conclude that
$$E[X_n] = \big(1 + (2p-1)^2\big)^nx.$$

Solution to Problem 4.23. (a) Let $W$ be the number of hours that Nat waits. We have
$$E[W] = P(0 \le X \le 1)E[W \mid 0 \le X \le 1] + P(X > 1)E[W \mid X > 1].$$
Since $W > 0$ only if $X > 1$, we have
$$E[W] = P(X > 1)E[W \mid X > 1] = \frac{1}{2}\cdot\frac{1}{2} = \frac{1}{4}.$$
(b) Let $D$ be the duration of a date. We have $E[D \mid 0 \le X \le 1] = 3$. Furthermore, when $X > 1$, the conditional expectation of $D$ given $X$ is $(3 - X)/2$. Hence, using the law of iterated expectations,
$$E[D \mid X > 1] = E\big[E[D \mid X] \mid X > 1\big] = E\Big[\frac{3 - X}{2}\,\Big|\, X > 1\Big].$$
Therefore,
$$E[D] = P(0 \le X \le 1)E[D \mid 0 \le X \le 1] + P(X > 1)E[D \mid X > 1] = \frac{1}{2}\cdot 3 + \frac{1}{2}E\Big[\frac{3 - X}{2}\,\Big|\, X > 1\Big] = \frac{3}{2} + \frac{1}{2}\Big(\frac{3}{2} - \frac{E[X \mid X > 1]}{2}\Big) = \frac{3}{2} + \frac{1}{2}\Big(\frac{3}{2} - \frac{3/2}{2}\Big) = \frac{15}{8}.$$
(c) The probability that Pat will be late by more than 45 minutes is $1/8$. The number of dates before breaking up is the sum of two geometrically distributed random variables with parameter $1/8$, and its expected value is $2\cdot 8 = 16$.
Solution to Problem 4.24. (a) Consider the following two random variables:

$X$ = amount of time the professor devotes to his task [exponentially distributed with parameter $\lambda(y) = 1/(5-y)$];

$Y$ = length of time between 9 a.m. and his arrival (uniformly distributed between 0 and 4).

Note that $E[Y] = 2$. We have
$$E[X \mid Y = y] = \frac{1}{\lambda(y)} = 5 - y,$$
which implies that $E[X \mid Y] = 5 - Y$, and
$$E[X] = E\big[E[X \mid Y]\big] = E[5 - Y] = 5 - E[Y] = 5 - 2 = 3.$$
(b) Let $Z$ be the length of time from 9 a.m. until the professor completes the task. Then $Z = X + Y$. We already know from part (a) that $E[X] = 3$ and $E[Y] = 2$, so that
$$E[Z] = E[X] + E[Y] = 3 + 2 = 5.$$
Thus the expected time that the professor leaves his office is 5 hours after 9 a.m.

(c) We define the following random variables:

$W$ = length of time between 9 a.m. and the arrival of the Ph.D. student (uniformly distributed between 9 a.m. and 5 p.m.).

$R$ = amount of time the student will spend with the professor, if he finds the professor (uniformly distributed between 0 and 1 hour).

$T$ = amount of time the professor will spend with the student.

Let also $F$ be the event that the student finds the professor. To find $E[T]$, we write
$$E[T] = P(F)E[T \mid F] + P(F^c)E[T \mid F^c].$$
Using the problem data, $E[T \mid F] = E[R] = 1/2$ (this is the expected value of a uniform distribution ranging from 0 to 1), and $E[T \mid F^c] = 0$ (since the student leaves if he does not find the professor). We have
$$E[T] = E[T \mid F]P(F) = \frac{1}{2}P(F),$$
so we need to find $P(F)$. In order for the student to find the professor, his arrival should be between the arrival and the departure of the professor. Thus
$$P(F) = P(Y \le W \le X + Y).$$
We have that $W$ can be between 0 (9 a.m.) and 8 (5 p.m.), but $X + Y$ can be any value greater than 0. In particular, the sum may be greater than the upper bound for $W$. We write
$$P(F) = P(Y \le W \le X + Y) = 1 - P(W < Y) - P(W > X + Y).$$
We have
$$P(W < Y) = \int_0^4 \frac{1}{4}\int_0^y \frac{1}{8}\,dw\,dy = \frac{1}{4}$$
and
$$P(W > X + Y) = \int_0^4 P(X < W - Y \mid Y = y)f_Y(y)\,dy = \int_0^4 \frac{1}{4}\int_y^8 \frac{1}{8}\int_0^{w-y} \frac{1}{5-y}e^{-\frac{x}{5-y}}\,dx\,dw\,dy = \frac{12}{32} + \frac{1}{32}\int_0^4 (5-y)e^{-\frac{8-y}{5-y}}\,dy.$$
Integrating numerically, we have
$$\int_0^4 (5-y)e^{-\frac{8-y}{5-y}}\,dy = 1.7584.$$
Thus,
$$P(F) = P(Y \le W \le X + Y) = 1 - 0.25 - 0.43 = 0.32.$$
The expected amount of time the professor will spend with the student is then
$$E[T] = \frac{1}{2}P(F) = \frac{1}{2}\cdot 0.32 = 0.16 \text{ hours} = 9.6 \text{ mins}.$$
Next, we want to find the expected time the professor will leave his office. Let $Z$ be the length of time measured from 9 a.m. until he leaves his office. If the professor doesn't spend any time with the student, then $Z$ will be equal to $X + Y$. On the other hand, if the professor is interrupted by the student, then the length of time will be equal to $X + Y + R$. This is because the professor will spend the same amount of total time on the task regardless of whether he is interrupted by the student. Therefore,
$$E[Z] = P(F)E[Z \mid F] + P(F^c)E[Z \mid F^c] = P(F)E[X + Y + R] + P(F^c)E[X + Y].$$
Using the results of the earlier calculations,
$$E[X + Y] = 5, \qquad E[X + Y + R] = E[X + Y] + E[R] = 5 + \frac{1}{2} = \frac{11}{2}.$$
Therefore,
$$E[Z] = 0.68\cdot 5 + 0.32\cdot\frac{11}{2} = 5.16.$$
Thus the expected time the professor will leave his office is 5.16 hours after 9 a.m.

Solution to Problem 4.29. The transform is given by
$$M(s) = E[e^{sX}] = \frac{1}{2}e^s + \frac{1}{4}e^{2s} + \frac{1}{4}e^{3s}.$$
We have
$$E[X] = \frac{d}{ds}M(s)\Big|_{s=0} = \frac{1}{2} + \frac{2}{4} + \frac{3}{4} = \frac{7}{4},$$
$$E[X^2] = \frac{d^2}{ds^2}M(s)\Big|_{s=0} = \frac{1}{2} + \frac{4}{4} + \frac{9}{4} = \frac{15}{4},$$
$$E[X^3] = \frac{d^3}{ds^3}M(s)\Big|_{s=0} = \frac{1}{2} + \frac{8}{4} + \frac{27}{4} = \frac{37}{4}.$$

Solution to Problem 4.30. The transform associated with $X$ is $M_X(s) = e^{s^2/2}$. By taking derivatives with respect to $s$, we find that
$$E[X] = 0, \qquad E[X^2] = 1, \qquad E[X^3] = 0, \qquad E[X^4] = 3.$$

Solution to Problem 4.31. The transform is
$$M(s) = \frac{\lambda}{\lambda - s}.$$
Thus,
$$\frac{d}{ds}M(s) = \frac{\lambda}{(\lambda-s)^2}, \quad \frac{d^2}{ds^2}M(s) = \frac{2\lambda}{(\lambda-s)^3}, \quad \frac{d^3}{ds^3}M(s) = \frac{6\lambda}{(\lambda-s)^4}, \quad \frac{d^4}{ds^4}M(s) = \frac{24\lambda}{(\lambda-s)^5}, \quad \frac{d^5}{ds^5}M(s) = \frac{120\lambda}{(\lambda-s)^6}.$$
By setting $s = 0$, we obtain
$$E[X^3] = \frac{6}{\lambda^3}, \qquad E[X^4] = \frac{24}{\lambda^4}, \qquad E[X^5] = \frac{120}{\lambda^5}.$$
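Transform differentiation, as in Problems 4.29-4.31, mechanizes well with symbolic computation; a sketch using sympy (assuming sympy is available), based on $E[X^n] = \frac{d^n}{ds^n}M(s)\big|_{s=0}$:

```python
import sympy as sp

s = sp.symbols('s')

def moments(M, k):
    """First k moments of X from its transform M(s)."""
    return [sp.diff(M, s, n).subs(s, 0) for n in range(1, k + 1)]

M = sp.exp(s)/2 + sp.exp(2*s)/4 + sp.exp(3*s)/4      # Problem 4.29
print(moments(M, 3))                                  # [7/4, 15/4, 37/4]

lam = sp.symbols('lam', positive=True)
M_exp = lam / (lam - s)                               # Problem 4.31
print([sp.simplify(m) for m in moments(M_exp, 5)])    # n!/lam**n
```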
Solution to Problem 4.32. (a) We must have $M(0) = 1$. Only the first option satisfies this requirement.

(b) We have
$$P(X = 0) = \lim_{s\to-\infty} M(s) = e^{2(e^{-1}-1)} \approx 0.2825.$$

Solution to Problem 4.33. We recognize this transform as corresponding to the following mixture of exponential PDFs:
$$f_X(x) = \begin{cases}\dfrac{1}{3}\cdot 2e^{-2x} + \dfrac{2}{3}\cdot 3e^{-3x}, & \text{for } x \ge 0,\\ 0, & \text{otherwise.}\end{cases}$$
By the inversion theorem, this must be the desired PDF.

Solution to Problem 4.34. For $i = 1, 2, 3$, let $X_i$ be a Bernoulli random variable that takes the value 1 if the $i$th player is successful. We have $X = X_1 + X_2 + X_3$. Let $q_i = 1 - p_i$. Convolution of the PMFs of $X_1$ and $X_2$ yields the PMF of $Z = X_1 + X_2$:
$$p_Z(z) = \begin{cases} q_1q_2, & \text{if } z = 0,\\ q_1p_2 + p_1q_2, & \text{if } z = 1,\\ p_1p_2, & \text{if } z = 2,\\ 0, & \text{otherwise.}\end{cases}$$
Convolution of the PMFs of $Z$ and $X_3$ yields the PMF of $X = X_1 + X_2 + X_3$:
$$p_X(x) = \begin{cases} q_1q_2q_3, & \text{if } x = 0,\\ p_1q_2q_3 + q_1p_2q_3 + q_1q_2p_3, & \text{if } x = 1,\\ q_1p_2p_3 + p_1q_2p_3 + p_1p_2q_3, & \text{if } x = 2,\\ p_1p_2p_3, & \text{if } x = 3,\\ 0, & \text{otherwise.}\end{cases}$$
The transform associated with $X$ is the product of the transforms associated with $X_i$, $i = 1, 2, 3$. We have
$$M_X(s) = (q_1 + p_1e^s)(q_2 + p_2e^s)(q_3 + p_3e^s).$$
By carrying out the multiplications above, and by examining the coefficients of the terms $e^{ks}$, we obtain the probabilities $P(X = k)$. These probabilities are seen to coincide with the ones computed by convolution.

Solution to Problem 4.35. We first find $c$ by using the equation
$$1 = M_X(0) = c\cdot\frac{3 + 4 + 2}{3 - 1},$$
so that $c = 2/9$. We then obtain
$$E[X] = \frac{dM_X}{ds}(s)\Big|_{s=0} = \frac{2}{9}\cdot\frac{(3-e^s)(8e^{2s} + 6e^{3s}) + e^s(3 + 4e^{2s} + 2e^{3s})}{(3-e^s)^2}\Big|_{s=0} = \frac{37}{18}.$$
We now use the identity
$$\frac{1}{3 - e^s} = \frac{1}{3}\cdot\frac{1}{1 - e^s/3} = \frac{1}{3}\Big(1 + \frac{e^s}{3} + \frac{e^{2s}}{9} + \cdots\Big),$$
which is valid as long as $s$ is small enough so that $e^s < 3$. It follows that
$$M_X(s) = \frac{2}{9}\cdot\frac{1}{3}\cdot(3 + 4e^{2s} + 2e^{3s})\cdot\Big(1 + \frac{e^s}{3} + \frac{e^{2s}}{9} + \cdots\Big).$$
By identifying the coefficients of $e^{0s}$ and $e^s$, we obtain
$$p_X(0) = \frac{2}{9}, \qquad p_X(1) = \frac{2}{27}.$$
Let $A = \{X \ne 0\}$. We have
$$p_{X|A}(k) = \begin{cases} p_X(k)/P(A), & \text{if } k \ne 0,\\ 0, & \text{otherwise,}\end{cases}$$
so that
$$E[X \mid X \ne 0] = \sum_{k=1}^{\infty} kp_{X|A}(k) = \frac{1}{P(A)}\sum_{k=1}^{\infty} kp_X(k) = \frac{E[X]}{1 - p_X(0)} = \frac{37/18}{7/9} = \frac{37}{14}.$$

Solution to Problem 4.36. (a) We have $U = Y$ if $X = 1$, which happens with probability $1/3$, and $U = Z$ if $X = 0$, which happens with probability $2/3$. Therefore, $U$ is a mixture of random variables and the associated transform is
$$M_U(s) = P(X = 1)M_Y(s) + P(X = 0)M_Z(s) = \frac{1}{3}\cdot\frac{2}{2-s} + \frac{2}{3}e^{3(e^s-1)}.$$
(b) Let $V = 2Z + 3$. We have
$$M_V(s) = e^{3s}M_Z(2s) = e^{3s}e^{3(e^{2s}-1)} = e^{3(s-1+e^{2s})}.$$
(c) Let $W = Y + Z$. We have
$$M_W(s) = M_Y(s)M_Z(s) = \frac{2}{2-s}e^{3(e^s-1)}.$$

Solution to Problem 4.37. Let $X$ be the number of different types of pizza ordered. Let $X_i$ be the random variable defined by
$$X_i = \begin{cases} 1, & \text{if a type } i \text{ pizza is ordered by at least one customer,}\\ 0, & \text{otherwise.}\end{cases}$$
We have $X = X_1 + \cdots + X_n$, and by the law of iterated expectations,
$$E[X] = E\big[E[X \mid K]\big] = E\big[E[X_1 + \cdots + X_n \mid K]\big] = nE\big[E[X_1 \mid K]\big].$$
Furthermore, since the probability that a customer does not order a pizza of type 1 is $(n-1)/n$, we have
$$E[X_1 \mid K = k] = 1 - \Big(\frac{n-1}{n}\Big)^k,$$
so that
$$E[X_1 \mid K] = 1 - \Big(\frac{n-1}{n}\Big)^K.$$
Thus, denoting $p = (n-1)/n$, we have
$$E[X] = nE\big[1 - p^K\big] = n - nE\big[p^K\big] = n - nE\big[e^{K\log p}\big] = n - nM_K(\log p).$$

Solution to Problem 4.41. (a) Let $N$ be the number of people that enter the elevator. The corresponding transform is $M_N(s) = e^{\lambda(e^s-1)}$. Let $M_X(s)$ be the common transform associated with the random variables $X_i$. Since $X_i$ is uniformly distributed within $[0, 1]$, we have
$$M_X(s) = \frac{e^s - 1}{s}.$$
The transform $M_Y(s)$ is found by starting with the transform $M_N(s)$ and replacing each occurrence of $e^s$ with $M_X(s)$. Thus,
$$M_Y(s) = e^{\lambda(M_X(s)-1)} = e^{\lambda\big(\frac{e^s-1}{s}-1\big)}.$$
(b) We have, using the chain rule,
$$E[Y] = \frac{d}{ds}M_Y(s)\Big|_{s=0} = \frac{d}{ds}M_X(s)\Big|_{s=0}\cdot\lambda e^{\lambda(M_X(s)-1)}\Big|_{s=0} = \frac{1}{2}\cdot\lambda = \frac{\lambda}{2},$$
where we have used the fact that $M_X(0) = 1$.

(c) From the law of iterated expectations we obtain
$$E[Y] = E\big[E[Y \mid N]\big] = E\big[NE[X]\big] = E[N]E[X] = \frac{\lambda}{2}.$$
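The prediction $E[Y] = \lambda/2$ of Problem 4.41 is easy to check by simulation; a sketch, with $\lambda$ chosen arbitrarily (the Poisson sampler uses Knuth's multiplication method):

```python
import math, random

rng = random.Random(0)
lam = 4.0

def poisson(lam):
    """Poisson(lam) sample via Knuth's multiplication method."""
    k, p, target = 0, 1.0, math.exp(-lam)
    while True:
        p *= rng.random()
        if p < target:
            return k
        k += 1

def total_weight():
    n_people = poisson(lam)
    return sum(rng.random() for _ in range(n_people))  # each weight ~ uniform [0,1]

trials = 100000
est = sum(total_weight() for _ in range(trials)) / trials
print(est, lam / 2)   # the two numbers should be close
```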
Solution to Problem 4.42. Take $X$ and $Y$ to be normal with means 1 and 2, respectively, and very small variances. Consider the random variable that takes the value of $X$ with some probability $p$ and the value of $Y$ with probability $1-p$. This random variable takes values near 1 and 2 with relatively high probability, but takes values near its mean (which is $2-p$) with relatively low probability. Thus, this random variable is not normal. Now let $N$ be a random variable taking only the values 1 and 2 with probabilities $p$ and $1-p$, respectively. The sum of a number $N$ of independent normal random variables with mean equal to 1 and very small variance is a mixture of the type discussed above, which is not normal.

Solution to Problem 4.43. (a) Using the total probability theorem, we have
$$P(X > 4) = \sum_{k=0}^4 P(k \text{ lights are red})P(X > 4 \mid k \text{ lights are red}).$$
We have
$$P(k \text{ lights are red}) = \binom{4}{k}\Big(\frac{1}{2}\Big)^4.$$
The conditional PDF of $X$ given that $k$ lights are red is normal with mean $k$ minutes and standard deviation $(1/2)\sqrt{k}$. Thus, $X$ is a mixture of normal random variables and the transform associated with its (unconditional) PDF is the corresponding mixture of the transforms associated with the (conditional) normal PDFs. However, $X$ is not normal, because a mixture of normal PDFs need not be normal. The probability $P(X > 4 \mid k \text{ lights are red})$ can be computed from the normal tables for each $k$, and $P(X > 4)$ is obtained by substituting the results in the total probability formula above.

(b) Let $K$ be the number of traffic lights that are found to be red. We can view $X$ as the sum of $K$ independent normal random variables. Thus the transform associated with $X$ can be found by replacing in the binomial transform $M_K(s) = \big(1/2 + (1/2)e^s\big)^4$ the occurrence of $e^s$ by the normal transform corresponding to $\mu = 1$ and $\sigma = 1/2$. Thus
$$M_X(s) = \Big(\frac{1}{2} + \frac{1}{2}e^{\frac{(1/2)^2s^2}{2}+s}\Big)^4.$$
Note that by using the formula for the transform, we cannot easily obtain the probability $P(X > 4)$.

Solution to Problem 4.44. (a) Using the random sum formulas, we have
$$E[N] = E[M]E[K], \qquad \mathrm{var}(N) = E[M]\mathrm{var}(K) + E[K]^2\mathrm{var}(M).$$
(b) Using the random sum formulas and the results of part (a), we have
$$E[Y] = E[N]E[X] = E[M]E[K]E[X],$$
$$\mathrm{var}(Y) = E[N]\mathrm{var}(X) + E[X]^2\mathrm{var}(N) = E[M]E[K]\mathrm{var}(X) + E[X]^2\big(E[M]\mathrm{var}(K) + E[K]^2\mathrm{var}(M)\big).$$
(c) Let $N$ denote the total number of widgets in the crate, and let $X_i$ denote the weight of the $i$th widget. The total weight of the crate is $Y = X_1 + \cdots + X_N$, with $N = K_1 + \cdots + K_M$, so the framework of part (b) applies. We have
$$E[M] = \frac{1}{p}, \quad \mathrm{var}(M) = \frac{1-p}{p^2} \quad \text{(geometric formulas)},$$
$$E[K] = \mu, \quad \mathrm{var}(K) = \mu \quad \text{(Poisson formulas)},$$
$$E[X] = \frac{1}{\lambda}, \quad \mathrm{var}(X) = \frac{1}{\lambda^2} \quad \text{(exponential formulas)}.$$
Substituting these expressions into the formulas of part (b), we obtain $E[Y]$ and $\mathrm{var}(Y)$, the mean and variance of the total weight of a crate.

CHAPTER 5

Solution to Problem 5.1. (a) We have $\sigma_{M_n} = 1/\sqrt{n}$, so in order that $\sigma_{M_n} \le 0.01$, we must have $n \ge 10{,}000$.

(b) We want to have
$$P\big(|M_n - h| \le 0.05\big) \ge 0.99.$$
Using the facts $h = E[M_n]$, $\sigma^2_{M_n} = 1/n$, and the Chebyshev inequality, we have
$$P\big(|M_n - h| \le 0.05\big) = P\big(|M_n - E[M_n]| \le 0.05\big) = 1 - P\big(|M_n - E[M_n]| \ge 0.05\big) \ge 1 - \frac{1/n}{(0.05)^2}.$$
Thus, we must have
$$1 - \frac{1/n}{(0.05)^2} \ge 0.99,$$
which yields $n \ge 40{,}000$.

(c) Based on Example 5.3, $\sigma^2_{X_i} \le (0.6)^2/4$, so he should use 0.3 meters in place of 1.0 meters as the estimate of the standard deviation of the samples $X_i$ in the calculations of parts (a) and (b). In the case of part (a), we have $\sigma_{M_n} = 0.3/\sqrt{n}$, so in order that $\sigma_{M_n} \le 0.01$, we must have $n \ge 900$. In the case of part (b), we must have
$$1 - \frac{0.09/n}{(0.05)^2} \ge 0.99,$$
which yields $n \ge 3{,}600$.

Solution to Problem 5.4. Proceeding as in Example 5.5, the best guarantee that can be obtained from the Chebyshev inequality is
$$P\big(|M_n - f| \ge \epsilon\big) \le \frac{1}{4n\epsilon^2}.$$
(a) If $\epsilon$ is reduced to half its original value, then in order to keep the bound $1/(4n\epsilon^2)$ constant, the sample size $n$ must be made four times larger.

(b) If the error probability $\delta$ is to be reduced to $\delta/2$, while keeping $\epsilon$ the same, the sample size has to be doubled.
Solution to Problem 5.5. In cases (a), (b), and (c), we show that $Y_n$ converges to 0 in probability. In case (d), we show that $Y_n$ converges to 1 in probability.

(a) For any $\epsilon > 0$, we have
$$P\big(|Y_n| \ge \epsilon\big) = 0$$
for all $n$ with $1/n < \epsilon$, so $P\big(|Y_n| \ge \epsilon\big) \to 0$.

(b) For all $\epsilon \in (0, 1)$, we have
$$P\big(|Y_n| \ge \epsilon\big) = P\big(|X_n|^n \ge \epsilon\big) = P\big(X_n \ge \epsilon^{1/n}\big) + P\big(X_n \le -\epsilon^{1/n}\big) = 1 - \epsilon^{1/n},$$
and the right-hand side converges to 0, since $\epsilon^{1/n} \to 1$.

(c) Since $X_1, X_2, \ldots$ are independent random variables, we have
$$E[Y_n] = E[X_1]\cdots E[X_n] = 0.$$
Also,
$$\mathrm{var}(Y_n) = E[Y_n^2] = E[X_1^2]\cdots E[X_n^2] = \mathrm{var}(X_1)^n = \Big(\frac{4}{12}\Big)^n,$$
so $\mathrm{var}(Y_n) \to 0$. Since all $Y_n$ have 0 as a common mean, it follows from Chebyshev's inequality that $Y_n$ converges to 0 in probability.

(d) We have, for all $\epsilon \in (0, 1)$, using the independence of $X_1, X_2, \ldots$,
$$P\big(|Y_n - 1| \ge \epsilon\big) = P\big(\max\{X_1, \ldots, X_n\} \ge 1 + \epsilon\big) + P\big(\max\{X_1, \ldots, X_n\} \le 1 - \epsilon\big) = P(X_1 \le 1-\epsilon, \ldots, X_n \le 1-\epsilon) = P(X_1 \le 1-\epsilon)^n = \Big(1 - \frac{\epsilon}{2}\Big)^n.$$
Hence $P\big(|Y_n - 1| \ge \epsilon\big) \to 0$.

Solution to Problem 5.8. Let $S$ be the number of times that the result was odd, which is a binomial random variable with parameters $n = 100$ and $p = 0.5$, so that $E[S] = 100\cdot 0.5 = 50$ and $\sigma_S = \sqrt{100\cdot 0.5\cdot 0.5} = \sqrt{25} = 5$. Using the normal approximation to the binomial, we find
$$P(S > 55) = P\Big(\frac{S - 50}{5} > \frac{55 - 50}{5}\Big) \approx 1 - \Phi(1) = 1 - 0.8413 = 0.1587.$$
A better approximation can be obtained by using the de Moivre-Laplace approximation, which yields
$$P(S > 55) = P(S \ge 55.5) = P\Big(\frac{S - 50}{5} \ge \frac{55.5 - 50}{5}\Big) \approx 1 - \Phi(1.1) = 1 - 0.8643 = 0.1357.$$

Solution to Problem 5.9. (a) Let $S$ be the number of crash-free days, which is a binomial random variable with parameters $n = 50$ and $p = 0.95$, so that $E[S] = 50\cdot 0.95 = 47.5$ and $\sigma_S = \sqrt{50\cdot 0.95\cdot 0.05} = 1.54$. Using the normal approximation to the binomial, we find
$$P(S \ge 45) = P\Big(\frac{S - 47.5}{1.54} \ge \frac{45 - 47.5}{1.54}\Big) \approx 1 - \Phi(-1.62) = \Phi(1.62) = 0.9474.$$
A better approximation can be obtained by using the de Moivre-Laplace approximation, which yields
$$P(S \ge 45) = P(S > 44.5) = P\Big(\frac{S - 47.5}{1.54} > \frac{44.5 - 47.5}{1.54}\Big) \approx 1 - \Phi(-1.95) = \Phi(1.95) = 0.9744.$$
(b) The random variable $S$ is binomial with parameter $p = 0.95$. However, the random variable $50 - S$ (the number of crashes) is also binomial, with parameter $p = 0.05$. Since the Poisson approximation is exact in the limit of small $p$ and large $n$, it will give more accurate results if applied to $50 - S$. We will therefore approximate $50 - S$ by a Poisson random variable with parameter $\lambda = 50\cdot 0.05 = 2.5$. Thus,
$$P(S \ge 45) = P(50 - S \le 5) = \sum_{k=0}^5 P(50 - S = k) = \sum_{k=0}^5 e^{-\lambda}\frac{\lambda^k}{k!} = 0.958.$$
It is instructive to compare with the exact probability, which is
$$\sum_{k=0}^5 \binom{50}{k}0.05^k\cdot 0.95^{50-k} = 0.962.$$
Thus, the Poisson approximation is closer. This is consistent with the intuition that the normal approximation to the binomial works well when $p$ is close to 0.5 or $n$ is very large, which is not the case here. On the other hand, the calculations based on the normal approximation are generally less tedious.
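The comparisons in Problems 5.8 and 5.9 can be reproduced exactly; a small sketch:

```python
from math import comb, erf, exp, factorial, sqrt

def Phi(x):
    return 0.5 * (1 + erf(x / sqrt(2)))

# Problem 5.8: P(S > 55) for S ~ binomial(100, 0.5)
exact = sum(comb(100, k) * 0.5**100 for k in range(56, 101))
plain = 1 - Phi((55 - 50) / 5)
demoivre = 1 - Phi((55.5 - 50) / 5)          # with continuity correction
print(exact, plain, demoivre)                 # ~0.136  0.159  0.136

# Problem 5.9(b): P(S >= 45) for S ~ binomial(50, 0.95), via Poisson(2.5) on 50 - S
poisson = sum(exp(-2.5) * 2.5**k / factorial(k) for k in range(6))
exact2 = sum(comb(50, k) * 0.05**k * 0.95**(50 - k) for k in range(6))
print(exact2, poisson)                        # ~0.962  0.958
```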
Thus, P(N ≥220) = P(S219 ≤1000) = P  S219 −5 · 219 3 √ 219 ≤1000 −5 · 219 3 √ 219  = 1 −Φ(2.14) = 1 −0.9838 = 0.0162. Solution to Problem 5.11. Note that W is the sample mean of 16 independent identically distributed random variables of the form Xi −Yi, and a normal approxima-tion is appropriate. The random variables Xi −Yi have zero mean, and variance equal to 2/12. Therefore, the mean of W is zero, and its variance is (2/12)/16 = 1/96. Thus, P|W| < 0.001 = P |W| p 1/96 < 0.001 p 1/96 ! ≈Φ0.001 √ 96 −Φ−0.001 √ 96 = 2Φ0.001 √ 96 −1 = 2Φ(0.0098) −1 ≈2 · 0.504 −1 = 0.008. Let us also point out a somewhat different approach that bypasses the need for the normal table. Let Z be a normal random variable with zero mean and standard deviation equal to 1/ √ 96. The standard deviation of Z, which is about 0.1, is much larger than 0.001. Thus, within the interval [−0.001, 0.001], the PDF of Z is approxi-mately constant. Using the formula P(z −δ ≤Z ≤z + δ) ≈fZ(z) · 2δ, with z = 0 and δ = 0.001, we obtain P|W| < 0.001 ≈P(−0.001 ≤Z ≤0.001) ≈fZ(0) · 0.002 = 0.002 √ 2π(1/ √ 96) = 0.0078. 66 C H A P T E R 6 Solution to Problem 6.1. (a) The random variable R is binomial with parameters p and n. Hence, pR(r) =  n r  (1 −p)n−rpr, for r = 0, 1, 2, . . . , n, E[R] = np, and var(R) = np(1 −p). (b) Let A be the event that the first item to be loaded ends up being the only one on its truck. This event is the union of two disjoint events: (i) the first item is placed on the red truck and the remaining n −1 are placed on the green truck, and, (ii) the first item is placed on the green truck and the remaining n −1 are placed on the red truck. Thus, P(A) = p(1 −p)n−1 + (1 −p)pn−1. (c) Let B be the event that at least one truck ends up with a total of exactly one package. The event B occurs if exactly one or both of the trucks end up with exactly 1 package, so P(B) =          1, if n = 1, 2p(1 −p), if n = 2,  n 1  (1 −p)n−1p +  n n −1  pn−1(1 −p), if n = 3, 4, 5, . . . (d) Let D = R −G = R −(n −R) = 2R −n. We have E[D] = 2E[R] −n = 2np −n. Since D = 2R −n, where n is a constant, var(D) = 4var(R) = 4np(1 −p). (e) Let C be the event that each of the first 2 packages is loaded onto the red truck. Given that C occurred, the random variable R becomes 2 + X3 + X4 + · · · + Xn. Hence, E[R | C] = E[2 + X3 + X4 + · · · + Xn] = 2 + (n −2)E[Xi] = 2 + (n −2)p. Similarly, the conditional variance of R is var(R | C) = var(2 + X3 + X4 + · · · + Xn) = (n −2)var(Xi) = (n −2)p(1 −p). 67 Finally, given that the first two packages are loaded onto the red truck, the probability that a total of r packages are loaded onto the red truck is equal to the probability that r −2 of the remaining n −2 packages go to the red truck: pR|C(r) =  n −2 r −2  (1 −p)n−rpr−2, for r = 2, . . . , n. Solution to Problem 6.2. (a) Failed quizzes are a Bernoulli process with parameter p = 1/4. The desired probability is given by the binomial formula:  6 2  p2(1 −p)4 = 6! 4! 2! 1 4 2 3 4 4 . (b) The expected number of quizzes up to the third failure is the expected value of a Pascal random variable of order three, with parameter 1/4, which is 3 · 4 = 12. Subtracting the number of failures, we have that the expected number of quizzes that Dave will pass is 12 −3 = 9. (c) The event of interest is the intersection of the following three independent events: A: there is exactly one failure in the first seven quizzes. B: quiz eight is a failure. C: quiz nine is a failure. 
We have P(A) =  7 1  1 4  3 4 6 , P(B) = P(C) = 1 4, so the desired probability is P(A ∩B ∩C) = 7 1 4 3 3 4 6 . (d) Let B be the event that Dave fails two quizzes in a row before he passes two quizzes in a row. Let us use F and S to indicate quizzes that he has failed or passed, respectively. We then have P(B) = P(FF ∪SFF ∪FSFF ∪SFSFF ∪FSFSFF ∪SFSFSFF ∪· · ·) = P(FF) + P(SFF) + P(FSFF) + P(SFSFF) + P(FSFSFF) + P(SFSFSFF) + · · · = 1 4 2 + 3 4 1 4 2 + 1 4 · 3 4 1 4 2 + 3 4 · 1 4 · 3 4 1 4 2 + 1 4 · 3 4 · 1 4 · 3 4 1 4 2 + 3 4 · 1 4 · 3 4 · 1 4 · 3 4 1 4 2 + · · · = 1 4 2 + 1 4 · 3 4 1 4 2 + 1 4 · 3 4 · 1 4 · 3 4 1 4 2 + · · ·  +  3 4 1 4 2 + 3 4 · 1 4 · 3 4 1 4 2 + 3 4 · 1 4 · 3 4 · 1 4 · 3 4 1 4 2 + · · ·  . 68 Therefore, P(B) is the sum of two infinite geometric series, and P(B) = 1 4 2 1 −1 4 · 3 4 + 3 4 1 4 2 1 −3 4 · 1 4 = 7 52. Solution to Problem 6.3. The answers to these questions are found by considering suitable Bernoulli processes and using the formulas of Section 6.1. Depending on the specific question, however, a different Bernoulli process may be appropriate. In some cases, we associate trials with slots. In other cases, it is convenient to associate trials with busy slots. (a) During each slot, the probability of a task from user 1 is given by p1 = p1|BpB = (5/6) · (2/5) = 1/3. Tasks from user 1 form a Bernoulli process and P(first user 1 task occurs in slot 4) = p1(1 −p1)3 = 1 3 · 2 3 3 . (b) This is the probability that slot 11 was busy and slot 12 was idle, given that 5 out of the 10 first slots were idle. Because of the fresh-start property, the conditioning information is immaterial, and the desired probability is pB · pI = 5 6 · 1 6. (c) Each slot contains a task from user 1 with probability p1 = 1/3, independent of other slots. The time of the 5th task from user 1 is a Pascal random variable of order 5, with parameter p1 = 1/3. Its mean is given by 5 p1 = 5 1/3 = 15. (d) Each busy slot contains a task from user 1 with probability p1|B = 2/5, independent of other slots. The random variable of interest is a Pascal random variable of order 5, with parameter p1|B = 2/5. Its mean is 5 p1|B = 5 2/5 = 25 2 . (e) The number T of tasks from user 2 until the 5th task from user 1 is the same as the number B of busy slots until the 5th task from user 1, minus 5. The number of busy slots (“trials”) until the 5th task from user 1 (“success”) is a Pascal random variable of order 5, with parameter p1|B = 2/5. Thus, pB(t) =  t −1 4  2 5 5  1 −2 5 t−5 , t = 5, 6, . . .. 69 Since T = B −5, we have pT (t) = pB(t + 5), and we obtain pT (t) =  t + 4 4  2 5 5  1 −2 5 t , t = 0, 1, . . .. Using the formulas for the mean and the variance of the Pascal random variable B, we obtain E[T] = E[B] −5 = 25 2 −5 = 7.5, and var(T) = var(B) = 51 −(2/5) (2/5)2 . Solution to Problem 6.8. The total number of accidents between 8 am and 11 am is the sum of two independent Poisson random variables with parameters 5 and 3 · 2 = 6, respectively. Since the sum of independent Poisson random variables is also Poisson, the total number of accidents has a Poisson PMF with parameter 5+6=11. Solution to Problem 6.9. As long as the pair of players is waiting, all five courts are occupied by other players. When all five courts are occupied, the time until a court is freed up is exponentially distributed with mean 40/5 = 8 minutes. For our pair of players to get a court, a court must be freed up k+1 times. Thus, the expected waiting time is 8(k + 1). Solution to Problem 6.10. 
Solution to Problem 6.10. (a) This is the probability of no arrivals in 2 hours. It is given by
$$P(0, 2) = e^{-0.6\cdot 2} = 0.301.$$
For an alternative solution, this is the probability that the first arrival comes after 2 hours:
$$P(T_1 > 2) = \int_2^{\infty} f_{T_1}(t)\,dt = \int_2^{\infty} 0.6e^{-0.6t}\,dt = e^{-0.6\cdot 2} = 0.301.$$
(b) This is the probability of zero arrivals between time 0 and 2, and of at least one arrival between time 2 and 5. Since these two intervals are disjoint, the desired probability is the product of the probabilities of these two events, which is given by
$$P(0, 2)\big(1 - P(0, 3)\big) = e^{-0.6\cdot 2}\big(1 - e^{-0.6\cdot 3}\big) = 0.251.$$
For an alternative solution, the event of interest can be written as $\{2 \le T_1 \le 5\}$, and its probability is
$$\int_2^5 f_{T_1}(t)\,dt = \int_2^5 0.6e^{-0.6t}\,dt = e^{-0.6\cdot 2} - e^{-0.6\cdot 5} = 0.251.$$
(c) If he catches at least two fish, he must have fished for exactly two hours. Hence, the desired probability is equal to the probability that the number of fish caught in the first two hours is at least two, i.e.,
$$\sum_{k=2}^{\infty} P(k, 2) = 1 - P(0, 2) - P(1, 2) = 1 - e^{-0.6\cdot 2} - (0.6\cdot 2)e^{-0.6\cdot 2} = 0.337.$$
For an alternative approach, note that the event of interest occurs if and only if the time $Y_2$ of the second arrival is less than or equal to 2. Hence, the desired probability is
$$P(Y_2 \le 2) = \int_0^2 f_{Y_2}(y)\,dy = \int_0^2 (0.6)^2ye^{-0.6y}\,dy.$$
This integral can be evaluated by integrating by parts, but this is more tedious than the first approach.

(d) The expected number of fish caught is equal to the expected number of fish caught during the first two hours (which is $2\lambda = 2\cdot 0.6 = 1.2$), plus the expectation of the number $N$ of fish caught after the first two hours. We have $N = 0$ if he stops fishing at two hours, and $N = 1$ if he continues beyond the two hours. The event $\{N = 1\}$ occurs if and only if no fish are caught in the first two hours, so that
$$E[N] = P(N = 1) = P(0, 2) = 0.301.$$
Thus, the expected number of fish caught is $1.2 + 0.301 = 1.501$.

(e) Given that he has been fishing for 4 hours, the future fishing time is the time until the first fish is caught. By the memorylessness property of the Poisson process, the future time is exponential, with mean $1/\lambda$. Hence, the expected total fishing time is $4 + (1/0.6) = 5.667$.

Solution to Problem 6.11. We note that the process of departures of customers who have bought a book is obtained by splitting the Poisson process of customer departures, and is itself a Poisson process, with rate $p\lambda$.

(a) This is the time until the first customer departure in the split Poisson process. It is therefore exponentially distributed with parameter $p\lambda$.

(b) This is the probability of no customers in the split Poisson process during an hour, and using the result of part (a), equals $e^{-p\lambda}$.

(c) This is the expected number of customers in the split Poisson process during an hour, and is equal to $p\lambda$.

Solution to Problem 6.12. Let $X$ be the number of different types of pizza ordered. Let $X_i$ be the random variable defined by
$$X_i = \begin{cases} 1, & \text{if a type } i \text{ pizza is ordered by at least one customer,}\\ 0, & \text{otherwise.}\end{cases}$$
We have $X = X_1 + \cdots + X_n$, and $E[X] = nE[X_1]$.

We can think of the customers arriving as a Poisson process, with each customer independently choosing whether to order a type 1 pizza (this happens with probability $1/n$) or not. This is the situation encountered in splitting of Poisson processes, and the number of type 1 pizza orders, denoted $Y_1$, is a Poisson random variable with parameter $\lambda/n$. We have
$$E[X_1] = P(Y_1 > 0) = 1 - P(Y_1 = 0) = 1 - e^{-\lambda/n},$$
so that
$$E[X] = nE[X_1] = n\big(1 - e^{-\lambda/n}\big).$$
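The formula $E[X] = n\big(1 - e^{-\lambda/n}\big)$ can be checked by simulation; a sketch with arbitrary $n$ and $\lambda$ (the Poisson sampler again uses Knuth's method):

```python
import math, random

rng = random.Random(0)

def distinct_types(n, lam):
    """Number of distinct pizza types when a Poisson(lam) number of customers each pick one of n types."""
    k, p, target = 0, 1.0, math.exp(-lam)
    while True:                       # draw K ~ Poisson(lam)
        p *= rng.random()
        if p < target:
            break
        k += 1
    return len({rng.randrange(n) for _ in range(k)})

n_types, lam, trials = 10, 8.0, 50000
est = sum(distinct_types(n_types, lam) for _ in range(trials)) / trials
print(est, n_types * (1 - math.exp(-lam / n_types)))   # both ~5.51
```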
Solution to Problem 6.13. (a) Let $R$ be the total number of messages received during an interval of duration $t$. Note that $R$ is a Poisson random variable with arrival rate $\lambda_A + \lambda_B$. Therefore, the probability that exactly nine messages are received is
$$P(R = 9) = \frac{\big((\lambda_A + \lambda_B)t\big)^9e^{-(\lambda_A+\lambda_B)t}}{9!}.$$
(b) Let $R$ be defined as in part (a), and let $W_i$ be the number of words in the $i$th message. Then
$$N = W_1 + W_2 + \cdots + W_R,$$
which is a sum of a random number of random variables. Thus,
$$E[N] = E[W]E[R] = \Big(1\cdot\frac{2}{6} + 2\cdot\frac{3}{6} + 3\cdot\frac{1}{6}\Big)(\lambda_A + \lambda_B)t = \frac{11}{6}(\lambda_A + \lambda_B)t.$$
(c) Three-word messages arrive from transmitter A in a Poisson manner, with average rate $\lambda_Ap_W(3) = \lambda_A/6$. Therefore, the random variable of interest is Erlang of order 8, and its PDF is given by
$$f(x) = \frac{(\lambda_A/6)^8x^7e^{-\lambda_Ax/6}}{7!}.$$
(d) Every message originates from either transmitter A or B, and can be viewed as an independent Bernoulli trial. Each message has probability $\lambda_A/(\lambda_A + \lambda_B)$ of originating from transmitter A (view this as a "success"). Thus, the number of messages from transmitter A (out of the next twelve) is a binomial random variable, and the desired probability is equal to
$$\binom{12}{8}\Big(\frac{\lambda_A}{\lambda_A+\lambda_B}\Big)^8\Big(\frac{\lambda_B}{\lambda_A+\lambda_B}\Big)^4.$$

Solution to Problem 6.14. (a) Let $X$ be the time until the first bulb failure. Let $A$ (respectively, $B$) be the event that the first bulb is of type A (respectively, B). Since the two bulb types are equally likely, the total expectation theorem yields
$$E[X] = E[X \mid A]P(A) + E[X \mid B]P(B) = 1\cdot\frac{1}{2} + \frac{1}{3}\cdot\frac{1}{2} = \frac{2}{3}.$$
(b) Let $D$ be the event of no bulb failures before time $t$. Using the total probability theorem and the exponential distributions for bulbs of the two types, we obtain
$$P(D) = P(D \mid A)P(A) + P(D \mid B)P(B) = \frac{1}{2}e^{-t} + \frac{1}{2}e^{-3t}.$$
(c) We have
$$P(A \mid D) = \frac{P(A \cap D)}{P(D)} = \frac{\frac{1}{2}e^{-t}}{\frac{1}{2}e^{-t} + \frac{1}{2}e^{-3t}} = \frac{1}{1 + e^{-2t}}.$$
(d) We first find $E[X^2]$. We use the fact that the second moment of an exponential random variable $T$ with parameter $\lambda$ is equal to $E[T^2] = E[T]^2 + \mathrm{var}(T) = 1/\lambda^2 + 1/\lambda^2 = 2/\lambda^2$. Conditioning on the two possible types of the first bulb, we obtain
$$E[X^2] = E[X^2 \mid A]P(A) + E[X^2 \mid B]P(B) = 2\cdot\frac{1}{2} + \frac{2}{9}\cdot\frac{1}{2} = \frac{10}{9}.$$
Finally, using the fact $E[X] = 2/3$ from part (a),
$$\mathrm{var}(X) = E[X^2] - E[X]^2 = \frac{10}{9} - \frac{2^2}{3^2} = \frac{2}{3}.$$
(e) This is the probability that out of the first 11 bulbs, exactly 3 were of type A and that the 12th bulb was of type A. It is equal to
$$\binom{11}{3}\Big(\frac{1}{2}\Big)^{12}.$$
(f) This is the probability that out of the first 12 bulbs, exactly 4 were of type A, and is equal to
$$\binom{12}{4}\Big(\frac{1}{2}\Big)^{12}.$$
(g) The PDF of the time between failures is $(e^{-x} + 3e^{-3x})/2$ for $x \ge 0$, and the associated transform is
$$\frac{1}{2}\Big(\frac{1}{1-s} + \frac{3}{3-s}\Big).$$
Since the times between successive failures are independent, the transform associated with the time until the 12th failure is given by
$$\Big[\frac{1}{2}\Big(\frac{1}{1-s} + \frac{3}{3-s}\Big)\Big]^{12}.$$
(h) Let $Y$ be the total period of illumination provided by the first two type-B bulbs. This has an Erlang distribution of order 2, and its PDF is
$$f_Y(y) = 9ye^{-3y}, \qquad y \ge 0.$$
Let $T$ be the period of illumination provided by the first type-A bulb. Its PDF is $f_T(t) = e^{-t}$, $t \ge 0$. We are interested in the event $T < Y$. We have
$$P(T < Y \mid Y = y) = 1 - e^{-y}, \qquad y \ge 0.$$
Thus,
$$P(T < Y) = \int_0^{\infty} f_Y(y)P(T < Y \mid Y = y)\,dy = \int_0^{\infty} 9ye^{-3y}\big(1 - e^{-y}\big)\,dy = \frac{7}{16},$$
as can be verified by carrying out the integration.

We now describe an alternative method for obtaining the answer. Let $T^A_1$ be the period of illumination of the first type-A bulb, and let $T^B_1$ and $T^B_2$ be the periods of illumination provided by the first and second type-B bulbs, respectively. We are interested in the event $\{T^A_1 < T^B_1 + T^B_2\}$. We have
$$P(T^A_1 < T^B_1 + T^B_2) = P(T^A_1 < T^B_1) + P(T^A_1 \ge T^B_1)P(T^A_1 < T^B_1 + T^B_2 \mid T^A_1 \ge T^B_1) = \frac{1}{1+3} + P(T^A_1 \ge T^B_1)P(T^A_1 - T^B_1 < T^B_2 \mid T^A_1 \ge T^B_1) = \frac{1}{4} + \frac{3}{4}P(T^A_1 - T^B_1 < T^B_2 \mid T^A_1 \ge T^B_1).$$
Given the event $T^A_1 \ge T^B_1$, and using the memorylessness property of the exponential random variable $T^A_1$, the remaining time $T^A_1 - T^B_1$ until the failure of the type-A bulb is exponentially distributed, so that
$$P(T^A_1 - T^B_1 < T^B_2 \mid T^A_1 \ge T^B_1) = P(T^A_1 < T^B_2) = P(T^A_1 < T^B_1) = \frac{1}{4}.$$
Therefore,
$$P(T^A_1 < T^B_1 + T^B_2) = \frac{1}{4} + \frac{3}{4}\cdot\frac{1}{4} = \frac{7}{16}.$$
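The answer $7/16 = 0.4375$ in part (h) is also easy to confirm by direct simulation; a minimal sketch:

```python
import random

rng = random.Random(0)
n = 200000
count = 0
for _ in range(n):
    ta = rng.expovariate(1.0)                          # type-A bulb, rate 1
    tb = rng.expovariate(3.0) + rng.expovariate(3.0)   # two type-B bulbs, rate 3
    count += ta < tb
print(count / n, 7 / 16)
```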
(i) Let $V$ be the total period of illumination provided by type-B bulbs while the process is in operation. Let $N$ be the number of light bulbs, out of the first 12, that are of type B, and let $X_i$ be the period of illumination from the $i$th type-B bulb. We then have $V = X_1 + \cdots + X_N$. Note that $N$ is a binomial random variable, with parameters $n = 12$ and $p = 1/2$, so that
$$E[N] = 6, \qquad \mathrm{var}(N) = 12\cdot\frac{1}{2}\cdot\frac{1}{2} = 3.$$
Furthermore, $E[X_i] = 1/3$ and $\mathrm{var}(X_i) = 1/9$. Using the formulas for the mean and variance of the sum of a random number of random variables, we obtain
$$E[V] = E[N]E[X_i] = 2,$$
and
$$\mathrm{var}(V) = \mathrm{var}(X_i)E[N] + E[X_i]^2\mathrm{var}(N) = \frac{1}{9}\cdot 6 + \frac{1}{9}\cdot 3 = 1.$$
(j) Using the notation in parts (a)-(c), and the result of part (c), we have
$$E[T \mid D] = t + E[T - t \mid D \cap A]P(A \mid D) + E[T - t \mid D \cap B]P(B \mid D) = t + 1\cdot\frac{1}{1 + e^{-2t}} + \frac{1}{3}\Big(1 - \frac{1}{1 + e^{-2t}}\Big) = t + \frac{1}{3} + \frac{2}{3}\cdot\frac{1}{1 + e^{-2t}}.$$

Solution to Problem 6.15. (a) The total arrival process corresponds to the merging of two independent Poisson processes, and is therefore Poisson with rate $\lambda = \lambda_A + \lambda_B = 7$. Thus, the number $N$ of jobs that arrive in a given three-minute interval is a Poisson random variable, with $E[N] = 3\lambda = 21$, $\mathrm{var}(N) = 21$, and PMF
$$p_N(n) = \frac{(21)^ne^{-21}}{n!}, \qquad n = 0, 1, 2, \ldots.$$
(b) Each of these 10 jobs has probability $\lambda_A/(\lambda_A + \lambda_B) = 3/7$ of being of type A, independently of the others. Thus, the binomial PMF applies and the desired probability is equal to
$$\binom{10}{3}\Big(\frac{3}{7}\Big)^3\Big(\frac{4}{7}\Big)^7.$$
(c) Each future arrival is of type A with probability $\lambda_A/(\lambda_A + \lambda_B) = 3/7$, independently of other arrivals. Thus, the number $K$ of arrivals until the first type A arrival is geometric with parameter $3/7$. The number of type B arrivals before the first type A arrival is equal to $K - 1$, and its PMF is similar to a geometric, except that it is shifted by one unit to the left. In particular,
$$p_K(k) = \frac{3}{7}\Big(\frac{4}{7}\Big)^k, \qquad k = 0, 1, 2, \ldots.$$
(d) The fact that at time 0 there were two type A jobs in the system simply states that there were exactly two type A arrivals between time $-1$ and time 0. Let $X$ and $Y$ be the arrival times of these two jobs. Consider splitting the interval $[-1, 0]$ into many time slots of length $\delta$. Since each time instant is equally likely to contain an arrival and since the arrival times are independent, it follows that $X$ and $Y$ are independent uniform random variables. We are interested in the PDF of $Z = \max\{X, Y\}$. We first find the CDF of $Z$. We have, for $z \in [-1, 0]$,
$$P(Z \le z) = P(X \le z \text{ and } Y \le z) = (1 + z)^2.$$
By differentiating, we obtain
$$f_Z(z) = 2(1 + z), \qquad -1 \le z \le 0.$$
(e) Let $T$ be the arrival time of this type B job. We can express $T$ in the form $T = -K + X$, where $K$ is a nonnegative integer and $X$ lies in $[0, 1]$. We claim that $X$ is independent from $K$ and that $X$ is uniformly distributed. Indeed, conditioned on the event $K = k$, we know that there was a single arrival in the interval $[-k, -k+1]$. Conditioned on the latter information, the arrival time is uniformly distributed in the interval $[-k, -k+1]$ (cf. Problem 6.18), which implies that $X$ is uniformly distributed in $[0, 1]$. Since this conditional distribution of $X$ is the same for every $k$, it follows that $X$ is independent of $-K$.

Let $D$ be the departure time of the job of interest. Since the job stays in the system for an integer amount of time, we have that $D$ is of the form $D = L + X$, where $L$ is a nonnegative integer. Since the job stays in the system for a geometrically distributed amount of time, and the geometric distribution has the memorylessness property, it follows that $L$ is also memoryless. In particular, $L$ is similar to a geometric random variable, except that its PMF starts at zero. Furthermore, $L$ is independent of $X$, since $X$ is determined by the arrival process, whereas the amount of time a job stays in the system is independent of the arrival process. Thus, $D$ is the sum of two independent random variables, one uniform and one geometric. Therefore, $D$ has a "geometric staircase" PDF, given by
$$f_D(d) = \Big(\frac{1}{2}\Big)^{\lfloor d\rfloor + 1}, \qquad d \ge 0,$$
where $\lfloor d\rfloor$ stands for the largest integer below $d$.
Solution to Problem 6.16. (a) The random variable $N$ is equal to the number of successive interarrival intervals that are smaller than $\tau$. Interarrival intervals are independent and each one is smaller than $\tau$ with probability $1 - e^{-\lambda\tau}$. Therefore,
$$P(N = 0) = e^{-\lambda\tau}, \qquad P(N = 1) = e^{-\lambda\tau}\big(1 - e^{-\lambda\tau}\big), \qquad P(N = k) = e^{-\lambda\tau}\big(1 - e^{-\lambda\tau}\big)^k,$$
so that $N$ has a distribution similar to a geometric one, with parameter $p = e^{-\lambda\tau}$, except that it is shifted one place to the left, so that it starts out at 0. Hence,
$$E[N] = \frac{1}{p} - 1 = e^{\lambda\tau} - 1.$$
(b) Let $T_n$ be the $n$th interarrival time. The event $\{N \ge n\}$ indicates that the time between cars $n-1$ and $n$ is less than or equal to $\tau$, and therefore
$$E[T_n \mid N \ge n] = E[T_n \mid T_n \le \tau].$$
Note that the conditional PDF of $T_n$ is the same as the unconditional one, except that it is now restricted to the interval $[0, \tau]$, and that it has to be suitably renormalized so that it integrates to 1. Therefore, the desired conditional expectation is
$$E[T_n \mid T_n \le \tau] = \frac{\displaystyle\int_0^{\tau} s\lambda e^{-\lambda s}\,ds}{\displaystyle\int_0^{\tau}\lambda e^{-\lambda s}\,ds}.$$
This integral can be evaluated by parts. We will provide, however, an alternative approach that avoids integration. We use the total expectation formula
$$E[T_n] = E[T_n \mid T_n \le \tau]P(T_n \le \tau) + E[T_n \mid T_n > \tau]P(T_n > \tau).$$
We have $E[T_n] = 1/\lambda$, $P(T_n \le \tau) = 1 - e^{-\lambda\tau}$, $P(T_n > \tau) = e^{-\lambda\tau}$, and $E[T_n \mid T_n > \tau] = \tau + (1/\lambda)$. (The last equality follows from the memorylessness of the exponential PDF.) Using these equalities, we obtain
$$\frac{1}{\lambda} = E[T_n \mid T_n \le \tau]\big(1 - e^{-\lambda\tau}\big) + \Big(\tau + \frac{1}{\lambda}\Big)e^{-\lambda\tau},$$
which yields
$$E[T_n \mid T_n \le \tau] = \frac{\dfrac{1}{\lambda} - \Big(\tau + \dfrac{1}{\lambda}\Big)e^{-\lambda\tau}}{1 - e^{-\lambda\tau}}.$$
(c) Let $T$ be the time until the U-turn. Note that $T = T_1 + \cdots + T_N + \tau$. Let $v$ denote the value of $E[T_n \mid T_n \le \tau]$. We find $E[T]$ using the total expectation theorem:
$$E[T] = \tau + \sum_{n=0}^{\infty} P(N = n)E[T_1 + \cdots + T_N \mid N = n] = \tau + \sum_{n=0}^{\infty} P(N = n)\sum_{i=1}^n E[T_i \mid T_1 \le \tau, \ldots, T_n \le \tau, T_{n+1} > \tau] = \tau + \sum_{n=0}^{\infty} P(N = n)\sum_{i=1}^n E[T_i \mid T_i \le \tau] = \tau + \sum_{n=0}^{\infty} P(N = n)\,nv = \tau + vE[N],$$
where $E[N]$ was found in part (a) and $v$ was found in part (b). The second equality used the fact that the event $\{N = n\}$ is the same as the event $\{T_1 \le \tau, \ldots, T_n \le \tau, T_{n+1} > \tau\}$. The third equality used the independence of the interarrival times $T_i$.
Solution to Problem 6.17. We will calculate the expected length of the photographer's waiting time $T$ conditioned on each of the two events: $A$, which is that the photographer arrives while the wombat is resting or eating, and $A^c$, which is that the photographer arrives while the wombat is walking. We will then use the total expectation theorem as follows:
$$E[T] = P(A)E[T \mid A] + P(A^c)E[T \mid A^c].$$
The conditional expectation $E[T \mid A]$ can be broken down into three components:

(i) The expected remaining time up to when the wombat starts its next walk; by the memorylessness property, this time is exponentially distributed and its expected value is 30 secs.

(ii) A number of walking and resting/eating intervals (each of expected length 50 secs) during which the wombat does not stop; if $N$ is the number of these intervals, then $N + 1$ is geometrically distributed with parameter $1/3$. Thus the expected length of these intervals is $(3 - 1)\cdot 50 = 100$ secs.

(iii) The expected waiting time during the walking interval in which the wombat stands still. This time is uniformly distributed between 0 and 20, so its expected value is 10 secs.

Collecting the above terms, we see that
$$E[T \mid A] = 30 + 100 + 10 = 140.$$
The conditional expectation $E[T \mid A^c]$ can be calculated using the total expectation theorem, by conditioning on three events: $B_1$, which is that the wombat does not stop during the photographer's arrival interval (probability $2/3$); $B_2$, which is that the wombat stops during the photographer's arrival interval after the photographer arrives (probability $1/6$); and $B_3$, which is that the wombat stops during the photographer's arrival interval before the photographer arrives (probability $1/6$). We have
$$E[T \mid A^c, B_1] = E[\text{photographer's wait up to the end of the interval}] + E[T \mid A] = 10 + 140 = 150.$$
Also, it can be shown that if two points are randomly chosen in an interval of length $l$, the expected distance between the two points is $l/3$ (an end-of-chapter problem in Chapter 3), and using this fact, we have
$$E[T \mid A^c, B_2] = E[\text{photographer's wait up to the time when the wombat stops}] = \frac{20}{3}.$$
Similarly, it can be shown that if two points are randomly chosen in an interval of length $l$, the expected distance between each point and the nearest endpoint of the interval is $l/3$. Using this fact, we have
$$E[T \mid A^c, B_3] = E[\text{photographer's wait up to the end of the interval}] + E[T \mid A] = \frac{20}{3} + 140.$$
Applying the total expectation theorem, we see that
$$E[T \mid A^c] = \frac{2}{3}\cdot 150 + \frac{1}{6}\cdot\frac{20}{3} + \frac{1}{6}\Big(\frac{20}{3} + 140\Big) = 125.55.$$
To apply the total expectation theorem and obtain $E[T]$, we need the probability $P(A)$ that the photographer arrives during a resting/eating interval. Since the expected length of such an interval is 30 seconds and the length of the complementary walking interval is 20 seconds, we see that $P(A) = 30/50 = 0.6$. Substituting in the equation
$$E[T] = P(A)E[T \mid A] + \big(1 - P(A)\big)E[T \mid A^c],$$
we obtain
$$E[T] = 0.6\cdot 140 + 0.4\cdot 125.55 = 134.22.$$

CHAPTER 7

Solution to Problem 7.1. We construct a Markov chain with state space $S = \{0, 1, 2, 3\}$. We let $X_n = 0$ if an arrival occurs at time $n$. Also, we let $X_n = i$ if the last arrival up to time $n$ occurred at time $n - i$, for $i = 1, 2, 3$. Given that $X_n = 0$, there is probability 0.2 that the next arrival occurs at time $n + 1$, so that $p_{00} = 0.2$ and $p_{01} = 0.8$. Given that $X_n = 1$, the last arrival occurred at time $n - 1$, and there is zero probability of an arrival at time $n + 1$, so that $p_{12} = 1$. Given that $X_n = 2$, the last arrival occurred at time $n - 2$. We then have
$$p_{20} = P(X_{n+1} = 0 \mid X_n = 2) = P(T = 3 \mid T \ge 3) = \frac{P(T = 3)}{P(T \ge 3)} = \frac{3}{8},$$
and $p_{23} = 5/8$. Finally, given that $X_n = 3$, an arrival is guaranteed at time $n + 1$, so that $p_{30} = 1$.

Solution to Problem 7.2. It cannot be described as a Markov chain with states L and R, because
$$P(X_{n+1} = L \mid X_n = R,\ X_{n-1} = L) = 1/2,$$
while
$$P(X_{n+1} = L \mid X_n = R,\ X_{n-1} = R,\ X_{n-2} = L) = 0.$$

Solution to Problem 7.3. The answer is no. To establish this, we need to show that the Markov property fails to hold; that is, we need to find scenarios that lead to the same state and such that the probability law for the next state is different for different scenarios. Let $X_n$ be the 4-state Markov chain corresponding to the original example. Let us compare the two scenarios $(Y_0, Y_1) = (1, 2)$ and $(Y_0, Y_1) = (2, 2)$. For the first scenario, the information $(Y_0, Y_1) = (1, 2)$ implies that $X_0 = 2$ and $X_1 = 3$, so that
$$P(Y_2 = 2 \mid Y_0 = 1, Y_1 = 2) = P\big(X_2 \in \{3, 4\} \mid X_1 = 3\big) = 0.7.$$
For the second scenario, the information $(Y_0, Y_1) = (2, 2)$ is not enough to determine $X_1$, but we can nevertheless assert that $P(X_1 = 4 \mid Y_0 = Y_1 = 2) > 0$. (This is because the conditioning information $Y_0 = 2$ implies that $X_0 \in \{3, 4\}$, and for either choice of $X_0$, there is positive probability that $X_1 = 4$.) We then have
$$P(Y_2 = 2 \mid Y_0 = Y_1 = 2) = P(Y_2 = 2 \mid X_1 = 4, Y_0 = Y_1 = 2)P(X_1 = 4 \mid Y_0 = Y_1 = 2) + P(Y_2 = 2 \mid X_1 = 3, Y_0 = Y_1 = 2)\big(1 - P(X_1 = 4 \mid Y_0 = Y_1 = 2)\big) = 1\cdot P(X_1 = 4 \mid Y_0 = Y_1 = 2) + 0.7\big(1 - P(X_1 = 4 \mid Y_0 = Y_1 = 2)\big) = 0.7 + 0.3\,P(X_1 = 4 \mid Y_0 = Y_1 = 2) > 0.7.$$
Thus,
$$P(Y_2 = 2 \mid Y_0 = 1, Y_1 = 2) \ne P(Y_2 = 2 \mid Y_0 = Y_1 = 2),$$
which implies that $Y_n$ does not have the Markov property.
Solution to Problem 7.3. The answer is no. To establish this, we need to show that the Markov property fails to hold; that is, we need to find scenarios that lead to the same state and such that the probability law for the next state is different for different scenarios. Let $X_n$ be the 4-state Markov chain corresponding to the original example. Let us compare the two scenarios $(Y_0, Y_1) = (1, 2)$ and $(Y_0, Y_1) = (2, 2)$. For the first scenario, the information $(Y_0, Y_1) = (1, 2)$ implies that $X_0 = 2$ and $X_1 = 3$, so that
$$P(Y_2 = 2 \mid Y_0 = 1, Y_1 = 2) = P\left(X_2 \in \{3, 4\} \mid X_1 = 3\right) = 0.7.$$
For the second scenario, the information $(Y_0, Y_1) = (2, 2)$ is not enough to determine $X_1$, but we can nevertheless assert that $P(X_1 = 4 \mid Y_0 = Y_1 = 2) > 0$. (This is because the conditioning information $Y_0 = 2$ implies that $X_0 \in \{3, 4\}$, and for either choice of $X_0$, there is positive probability that $X_1 = 4$.) We then have
$$\begin{aligned}
P(Y_2 = 2 \mid Y_0 = Y_1 = 2) &= P(Y_2 = 2 \mid X_1 = 4, Y_0 = Y_1 = 2)\,P(X_1 = 4 \mid Y_0 = Y_1 = 2)\\
&\quad + P(Y_2 = 2 \mid X_1 = 3, Y_0 = Y_1 = 2)\left(1 - P(X_1 = 4 \mid Y_0 = Y_1 = 2)\right)\\
&= 1 \cdot P(X_1 = 4 \mid Y_0 = Y_1 = 2) + 0.7\left(1 - P(X_1 = 4 \mid Y_0 = Y_1 = 2)\right)\\
&= 0.7 + 0.3\,P(X_1 = 4 \mid Y_0 = Y_1 = 2)\\
&> 0.7.
\end{aligned}$$
Thus, $P(Y_2 = 2 \mid Y_0 = 1, Y_1 = 2) \ne P(Y_2 = 2 \mid Y_0 = Y_1 = 2)$, which implies that $Y_n$ does not have the Markov property.

Solution to Problem 7.4. (a) We introduce a Markov chain with state equal to the distance between spider and fly. Let $n$ be the initial distance. Then, the states are $0, 1, \ldots, n$, and we have
$$p_{00} = 1, \qquad p_{0i} = 0 \text{ for } i \ne 0, \qquad p_{10} = 0.4, \qquad p_{11} = 0.6, \qquad p_{1i} = 0 \text{ for } i \ne 0, 1,$$
and for all $i \ne 0, 1$,
$$p_{i(i-2)} = 0.3, \qquad p_{i(i-1)} = 0.4, \qquad p_{ii} = 0.3, \qquad p_{ij} = 0 \text{ for } j \ne i-2,\ i-1,\ i.$$
(b) All states are transient except for state 0, which forms a recurrent class.

Solution to Problem 7.5. It is periodic with period 2. The two corresponding subsets are $\{2, 4, 6, 7, 9\}$ and $\{1, 3, 5, 8\}$.

Solution to Problem 7.10. For the first model, the transition probability matrix is
$$\begin{pmatrix} 1-b & b\\ r & 1-r \end{pmatrix}.$$
We need to exclude the case $b = r = 0$, in which case there are two recurrent classes, and the case $b = r = 1$, in which case we obtain a periodic class. The balance equations are of the form
$$\pi_1 = (1-b)\pi_1 + r\pi_2, \qquad \pi_2 = b\pi_1 + (1-r)\pi_2,$$
or $b\pi_1 = r\pi_2$. This equation, together with the normalization equation $\pi_1 + \pi_2 = 1$, yields the steady-state probabilities
$$\pi_1 = \frac{r}{b+r}, \qquad \pi_2 = \frac{b}{b+r}.$$
For the second model, we need to exclude the case $b = r = 1$, which makes the chain periodic with period 2, and the case $b = 1$, $r = 0$, which makes the chain periodic with period $\ell + 1$. The balance equations are of the form
$$\pi_1 = (1-b)\pi_1 + r\left(\pi_{(2,1)} + \cdots + \pi_{(2,\ell-1)}\right) + \pi_{(2,\ell)}, \qquad \pi_{(2,1)} = b\pi_1, \qquad \pi_{(2,i)} = (1-r)\pi_{(2,i-1)}, \quad i = 2, \ldots, \ell.$$
The last two equations can be used to express $\pi_{(2,i)}$ in terms of $\pi_1$:
$$\pi_{(2,i)} = (1-r)^{i-1}b\pi_1, \qquad i = 1, \ldots, \ell.$$
Substituting into the normalization equation $\pi_1 + \sum_{i=1}^{\ell}\pi_{(2,i)} = 1$, we obtain
$$1 = \left(1 + b\sum_{i=1}^{\ell}(1-r)^{i-1}\right)\pi_1 = \left(1 + \frac{b\left(1 - (1-r)^{\ell}\right)}{r}\right)\pi_1,$$
or
$$\pi_1 = \frac{r}{r + b\left(1 - (1-r)^{\ell}\right)}.$$
Using the equation $\pi_{(2,i)} = (1-r)^{i-1}b\pi_1$, we can also obtain explicit formulas for the $\pi_{(2,i)}$.

Solution to Problem 7.11. We use a Markov chain model with 3 states, H, M, and E, where the state reflects the difficulty of the most recent exam. We are given the transition probabilities
$$\begin{pmatrix} r_{HH} & r_{HM} & r_{HE}\\ r_{MH} & r_{MM} & r_{ME}\\ r_{EH} & r_{EM} & r_{EE} \end{pmatrix} = \begin{pmatrix} 0 & 0.5 & 0.5\\ 0.25 & 0.5 & 0.25\\ 0.25 & 0.25 & 0.5 \end{pmatrix}.$$
It is easy to see that our Markov chain has a single recurrent class, which is aperiodic. The balance equations take the form
$$\pi_1 = \frac{1}{4}(\pi_2 + \pi_3), \qquad \pi_2 = \frac{1}{2}(\pi_1 + \pi_2) + \frac{1}{4}\pi_3, \qquad \pi_3 = \frac{1}{2}(\pi_1 + \pi_3) + \frac{1}{4}\pi_2,$$
and solving these with the constraint $\sum_i \pi_i = 1$ gives
$$\pi_1 = \frac{1}{5}, \qquad \pi_2 = \pi_3 = \frac{2}{5}.$$
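The steady-state probabilities of the exam chain can be cross-checked numerically. The sketch below (an editorial addition) simply iterates $\pi \leftarrow \pi P$ starting from the uniform distribution; since the chain has a single aperiodic recurrent class, the iterates converge to the steady-state distribution.

```python
# Power-method check of the steady-state probabilities in Problem 7.11.
P = [[0.0, 0.5, 0.5],
     [0.25, 0.5, 0.25],
     [0.25, 0.25, 0.5]]
pi = [1 / 3, 1 / 3, 1 / 3]
for _ in range(200):
    pi = [sum(pi[i] * P[i][j] for i in range(3)) for j in range(3)]
print(pi)   # converges to [0.2, 0.4, 0.4], i.e., (1/5, 2/5, 2/5)
```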
Solution to Problem 7.12. (a) This is a generalization of Example 7.6. We may proceed as in that example and introduce a Markov chain with states $0, 1, \ldots, n$, where state $i$ indicates that there are $i$ available rods at Alvin's present location. However, that Markov chain has a somewhat complex structure, and for this reason, we will proceed differently.

We consider a Markov chain with states $0, 1, \ldots, n$, where state $i$ indicates that Alvin is off the island and has $i$ rods available. Thus, a transition in this Markov chain reflects two trips (going to the island and returning). It is seen that this is a birth-death process. This is because if there are $i$ rods off the island, then at the end of the round trip, the number of rods can only be $i - 1$, $i$, or $i + 1$.

We now determine the transition probabilities. When $i > 0$, the transition probability $p_{i,i-1}$ is the probability that the weather is good on the way to the island, but is bad on the way back, so that $p_{i,i-1} = p(1-p)$. When $0 < i < n$, the transition probability $p_{i,i+1}$ is the probability that the weather is bad on the way to the island, but is good on the way back, so that $p_{i,i+1} = p(1-p)$. For $i = 0$, the transition probability $p_{0,1}$ is just the probability that the weather is good on the way back, so that $p_{0,1} = p$. The transition probabilities $p_{ii}$ are then easily determined because the sum of the transition probabilities out of state $i$ must be equal to 1. To summarize, we have
$$p_{ii} = \begin{cases} 1-p, & \text{for } i = 0,\\ (1-p)^2 + p^2, & \text{for } 0 < i < n,\\ 1-p+p^2, & \text{for } i = n, \end{cases} \qquad
p_{i,i+1} = \begin{cases} p, & \text{for } i = 0,\\ (1-p)p, & \text{for } 0 < i < n, \end{cases} \qquad
p_{i,i-1} = \begin{cases} 0, & \text{for } i = 0,\\ (1-p)p, & \text{for } i > 0. \end{cases}$$
Since this is a birth-death process, we can use the local balance equations. We have $\pi_0 p_{01} = \pi_1 p_{10}$, implying that
$$\pi_1 = \frac{\pi_0}{1-p},$$
and similarly, $\pi_n = \cdots = \pi_2 = \pi_1 = \pi_0/(1-p)$. Therefore,
$$1 = \sum_{i=0}^{n}\pi_i = \pi_0\left(1 + \frac{n}{1-p}\right),$$
which yields
$$\pi_0 = \frac{1-p}{n+1-p}, \qquad \pi_i = \frac{1}{n+1-p} \text{ for all } i > 0.$$

(b) Assume that Alvin is off the island. Let $A$ denote the event that the weather is nice but Alvin has no fishing rods with him. Then,
$$P(A) = \pi_0 p = \frac{p - p^2}{n+1-p}.$$
Suppose now that Alvin is on the island. The probability that he has no fishing rods with him is again $\pi_0$, by the symmetry of the problem. Therefore, $P(A)$ is the same. Thus, irrespective of his location, the probability that the weather is nice but Alvin cannot fish is $(p - p^2)/(n+1-p)$.

Solution to Problem 7.13. (a) The local balance equations take the form
$$0.6\pi_1 = 0.3\pi_2, \qquad 0.2\pi_2 = 0.2\pi_3.$$
They can be solved, together with the normalization equation, to yield
$$\pi_1 = \frac{1}{5}, \qquad \pi_2 = \pi_3 = \frac{2}{5}.$$

(b) The probability that the first transition is a birth is
$$0.6\pi_1 + 0.2\pi_2 = \frac{0.6}{5} + 0.2\cdot\frac{2}{5} = \frac{1}{5}.$$

(c) If the state is 1, which happens with probability 1/5, the first change of state is certain to be a birth. If the state is 2, which happens with probability 2/5, the probability that the first change of state is a birth is equal to $0.2/(0.3 + 0.2) = 2/5$. Finally, if the state is 3, the probability that the first change of state is a birth is equal to 0. Thus, the probability that the first change of state that we observe is a birth is equal to
$$1\cdot\frac{1}{5} + \frac{2}{5}\cdot\frac{2}{5} = \frac{9}{25}.$$

(d) We have
$$P(\text{state was 2} \mid \text{first transition is a birth}) = \frac{P(\text{state was 2 and first transition is a birth})}{P(\text{first transition is a birth})} = \frac{\pi_2\cdot 0.2}{1/5} = \frac{2}{5}.$$

(e) As shown in part (c), the probability that the first change of state is a birth is 9/25. Furthermore, the probability that the state is 2 and the first change of state is a birth is $2\pi_2/5 = 4/25$. Therefore, the desired probability is
$$\frac{4/25}{9/25} = \frac{4}{9}.$$
(f) In a birth-death process, there must be as many births as there are deaths, plus or minus 1. Thus, the steady-state probability of births must be equal to the steady-state probability of deaths. Hence, in steady-state, half of the state changes are expected to be births. Therefore, the conditional probability that the first observed transition is a birth, given that it resulted in a change of state, is equal to 1/2. This answer can also be obtained algebraically:
$$P(\text{birth} \mid \text{change of state}) = \frac{P(\text{birth})}{P(\text{change of state})} = \frac{1/5}{\dfrac{1}{5}\cdot 0.6 + \dfrac{2}{5}\cdot 0.5 + \dfrac{2}{5}\cdot 0.2} = \frac{1/5}{2/5} = \frac{1}{2}.$$

(g) We have
$$P(\text{leads to state 2} \mid \text{change}) = \frac{P(\text{change that leads to state 2})}{P(\text{change})} = \frac{\pi_1\cdot 0.6 + \pi_3\cdot 0.2}{2/5} = \frac{1}{2}.$$
This is intuitive because for every change of state that leads into state 2, there must be a subsequent change of state that leads away from state 2.

Solution to Problem 7.14. (a) Let $p_{ij}$ be the transition probabilities and let $\pi_i$ be the steady-state probabilities. We then have
$$P(X_{1000} = j, X_{1001} = k, X_{2000} = l \mid X_0 = i) = r_{ij}(1000)\,p_{jk}\,r_{kl}(999) \approx \pi_j p_{jk}\pi_l.$$
(b) Using Bayes' rule, we have
$$P(X_{1000} = i \mid X_{1001} = j) = \frac{P(X_{1000} = i, X_{1001} = j)}{P(X_{1001} = j)} = \frac{\pi_i p_{ij}}{\pi_j}.$$

Solution to Problem 7.15. Let $i = 0, 1, \ldots, n$ be the states, with state $i$ indicating that there are exactly $i$ white balls. The nonzero transition probabilities are
$$p_{00} = \epsilon, \quad p_{01} = 1-\epsilon, \quad p_{nn} = \epsilon, \quad p_{n,n-1} = 1-\epsilon, \qquad p_{i,i-1} = (1-\epsilon)\frac{i}{n}, \quad p_{i,i+1} = (1-\epsilon)\frac{n-i}{n}, \quad i = 1, \ldots, n-1.$$
The chain has a single recurrent class, which is aperiodic. In addition, it is a birth-death process. The local balance equations take the form
$$\pi_i(1-\epsilon)\frac{n-i}{n} = \pi_{i+1}(1-\epsilon)\frac{i+1}{n}, \qquad i = 0, 1, \ldots, n-1,$$
which leads to
$$\pi_i = \frac{n(n-1)\cdots(n-i+1)}{1\cdot 2\cdots i}\pi_0 = \frac{n!}{i!\,(n-i)!}\pi_0 = \binom{n}{i}\pi_0.$$
We recognize that this has the form of a binomial distribution, so that for the probabilities to add to 1, we must have $\pi_0 = 1/2^n$. Therefore, the steady-state probabilities are given by
$$\pi_j = \binom{n}{j}\left(\frac{1}{2}\right)^n, \qquad j = 0, \ldots, n.$$

Solution to Problem 7.16. Let $j = 0, 1, \ldots, m$ be the states, with state $j$ corresponding to the first urn containing $j$ white balls. The nonzero transition probabilities are
$$p_{j,j-1} = \left(\frac{j}{m}\right)^2, \qquad p_{j,j+1} = \left(\frac{m-j}{m}\right)^2, \qquad p_{jj} = \frac{2j(m-j)}{m^2}.$$
The chain has a single recurrent class that is aperiodic. This chain is a birth-death process and the steady-state probabilities can be found by solving the local balance equations:
$$\pi_j\left(\frac{m-j}{m}\right)^2 = \pi_{j+1}\left(\frac{j+1}{m}\right)^2, \qquad j = 0, 1, \ldots, m-1.$$
The solution is of the form
$$\pi_j = \pi_0\left(\frac{m(m-1)\cdots(m-j+1)}{1\cdot 2\cdots j}\right)^2 = \pi_0\left(\frac{m!}{j!\,(m-j)!}\right)^2 = \pi_0\binom{m}{j}^2.$$
We recognize this as having the form of the hypergeometric distribution (Problem 61 of Chapter 1, with $n = 2m$ and $k = m$), which implies that
$$\pi_0 = \frac{1}{\dbinom{2m}{m}}, \qquad \pi_j = \frac{\dbinom{m}{j}^2}{\dbinom{2m}{m}}, \qquad j = 0, 1, \ldots, m.$$

Solution to Problem 7.17. (a) The states form a recurrent class, which is aperiodic since all possible transitions have positive probability.

(b) The Chapman-Kolmogorov equations are
$$r_{ij}(n) = \sum_{k=1}^{2} r_{ik}(n-1)p_{kj}, \qquad \text{for } n > 1 \text{ and } i, j = 1, 2,$$
starting with $r_{ij}(1) = p_{ij}$, so they have the form
$$\begin{aligned}
r_{11}(n) &= r_{11}(n-1)(1-\alpha) + r_{12}(n-1)\beta, & r_{12}(n) &= r_{11}(n-1)\alpha + r_{12}(n-1)(1-\beta),\\
r_{21}(n) &= r_{21}(n-1)(1-\alpha) + r_{22}(n-1)\beta, & r_{22}(n) &= r_{21}(n-1)\alpha + r_{22}(n-1)(1-\beta).
\end{aligned}$$
If the $r_{ij}(n-1)$ have the given form, it is easily verified by substitution in the Chapman-Kolmogorov equations that the $r_{ij}(n)$ also have the given form.

(c) The steady-state probabilities $\pi_1$ and $\pi_2$ are obtained by taking the limit of $r_{i1}(n)$ and $r_{i2}(n)$, respectively, as $n \to \infty$. Thus, we have
$$\pi_1 = \frac{\beta}{\alpha+\beta}, \qquad \pi_2 = \frac{\alpha}{\alpha+\beta}.$$
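The convergence asserted in part (c) of Problem 7.17 is easy to observe directly. The sketch below (an editorial addition) iterates the Chapman-Kolmogorov recursion for the assumed illustrative values $\alpha = 0.3$, $\beta = 0.2$.

```python
# Iterate the Chapman-Kolmogorov recursion of Problem 7.17 and watch
# the n-step probabilities approach beta/(alpha+beta), alpha/(alpha+beta).
alpha, beta = 0.3, 0.2
r11, r12 = 1.0, 0.0        # r_{1j}(0): the chain starts in state 1
for n in range(1, 51):
    r11, r12 = (r11 * (1 - alpha) + r12 * beta,
                r11 * alpha + r12 * (1 - beta))
print(r11, r12)                                   # ~0.4, ~0.6
print(beta / (alpha + beta), alpha / (alpha + beta))
```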
Solution to Problem 7.18. Let the state be the number of days that the gate has survived. The balance equations are
$$\pi_0 = \pi_0 p + \pi_1 p + \cdots + \pi_{m-1}p + \pi_m, \qquad \pi_1 = \pi_0(1-p), \qquad \pi_2 = \pi_1(1-p) = \pi_0(1-p)^2,$$
and similarly
$$\pi_i = \pi_0(1-p)^i, \qquad i = 1, \ldots, m.$$
Using the normalization equation, we have
$$1 = \pi_0 + \sum_{i=1}^{m}\pi_i = \pi_0\left(1 + \sum_{i=1}^{m}(1-p)^i\right),$$
so
$$\pi_0 = \frac{p}{1 - (1-p)^{m+1}}.$$
The long-term expected frequency of gate replacements is equal to the long-term expected frequency of visits to state 0, which is $\pi_0$. Note that if the natural lifetime $m$ of a gate is very large, then $\pi_0$ is approximately equal to $p$.

Solution to Problem 7.28. (a) For $j < i$, we have $p_{ij} = 0$. Since the professor will continue to remember the highest ranking, even if he gets a lower ranking in a subsequent year, we have $p_{ii} = i/m$. Finally, for $j > i$, we have $p_{ij} = 1/m$, since the class is equally likely to receive any given rating.

(b) There is a positive probability, namely $1/m$, that on any given year the professor will receive the highest ranking. Therefore, state $m$ is accessible from every other state. The only state accessible from state $m$ is state $m$ itself. Therefore, $m$ is the only recurrent state, and all other states are transient.

(c) This question can be answered by finding the mean first passage time to the absorbing state $m$ starting from $i$. It is simpler, though, to argue as follows: since the probability of achieving the highest ranking in a given year is $1/m$, independent of the current state, the required expected number of years is the expected number of trials to the first success in a Bernoulli process with success probability $1/m$. Thus, the expected number of years is $m$.

Solution to Problem 7.29. (a) There are 3 different paths that lead back to state 1 after 6 transitions. One path makes two self-transitions at state 2, one path makes two self-transitions at state 4, and one path makes one self-transition at state 2 and one self-transition at state 4. By adding the probabilities of these three paths, we obtain
$$r_{11}(6) = \frac{2}{3}\cdot\frac{3}{5}\left(\frac{1}{3}\cdot\frac{2}{5} + \frac{1}{9} + \frac{4}{25}\right) = \frac{182}{1125}.$$

(b) The time $T$ until the process returns to state 1 is equal to 2 (the time it takes for the transitions from 1 to 2 and from 3 to 4), plus the time it takes for the state to move from state 2 to state 3 (this is geometrically distributed with parameter $p = 2/3$), plus the time it takes for the state to move from state 4 to state 1 (this is geometrically distributed with parameter $p = 3/5$). Using the formulas $E[X] = 1/p$ and $\operatorname{var}(X) = (1-p)/p^2$ for the mean and variance of a geometric random variable, we find that
$$E[T] = 2 + \frac{3}{2} + \frac{5}{3} = \frac{31}{6},$$
and
$$\operatorname{var}(T) = \left(1 - \frac{2}{3}\right)\cdot\frac{3^2}{2^2} + \left(1 - \frac{3}{5}\right)\cdot\frac{5^2}{3^2} = \frac{67}{36}.$$

(c) Let $A$ be the event that $X_{999}$, $X_{1000}$, and $X_{1001}$ are all different. Note that
$$P(A \mid X_{999} = i) = \begin{cases} 2/3, & \text{for } i = 1, 2,\\ 3/5, & \text{for } i = 3, 4. \end{cases}$$
Thus, using the total probability theorem, and assuming that the process is in steady-state at time 999, we obtain
$$P(A) = \frac{2}{3}(\pi_1 + \pi_2) + \frac{3}{5}(\pi_3 + \pi_4) = \frac{2}{3}\cdot\frac{15}{31} + \frac{3}{5}\cdot\frac{16}{31} = \frac{98}{155}.$$
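The value of $r_{11}(6)$ in Problem 7.29(a) can be verified by raising the transition matrix to the sixth power. The sketch below (an editorial addition) rebuilds the chain from the transition probabilities used in the solution: $1 \to 2$ with probability 1, $p_{22} = 1/3$, $p_{23} = 2/3$, $3 \to 4$ with probability 1, $p_{44} = 2/5$, $p_{41} = 3/5$.

```python
# Check r_11(6) = 182/1125 in Problem 7.29 by computing P^6.
P = [[0, 1, 0, 0],
     [0, 1/3, 2/3, 0],
     [0, 0, 0, 1],
     [3/5, 0, 0, 2/5]]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

Pn = P
for _ in range(5):
    Pn = matmul(Pn, P)
print(Pn[0][0], 182 / 1125)   # both ~0.16178
```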
Solution to Problem 7.30. (a) States 4 and 5 are transient, and all other states are recurrent. There are two recurrent classes. The class $\{1, 2, 3\}$ is aperiodic, and the class $\{6, 7\}$ is periodic.

(b) If the process starts at state 1, it stays within the aperiodic recurrent class $\{1, 2, 3\}$, and the $n$-step transition probabilities converge to steady-state probabilities $\pi_i$. We have $\pi_i = 0$ for $i \notin \{1, 2, 3\}$. The local balance equations take the form
$$\pi_1 = \pi_2, \qquad \pi_2 = 6\pi_3.$$
Using also the normalization equation, we obtain
$$\pi_1 = \pi_2 = \frac{6}{13}, \qquad \pi_3 = \frac{1}{13}.$$

(c) Because the class $\{6, 7\}$ is periodic, there are no steady-state probabilities. In particular, the sequence $r_{66}(n)$ alternates between 0 and 1, and does not converge.

(d) (i) The probability that the state increases by one during the first transition is equal to
$$0.5\pi_1 + 0.1\pi_2 = \frac{18}{65}.$$
(d) (ii) The probability that the process is in state 2 and that the state increases is $0.1\pi_2 = 0.6/13$. Thus, the desired conditional probability is equal to
$$\frac{0.6/13}{18/65} = \frac{1}{6}.$$
(d) (iii) If the state is 1 (probability 6/13), it is certain to increase at the first change of state. If the state is 2 (probability 6/13), it has probability 1/6 of increasing at the first change of state. Finally, if the state is 3, it cannot increase at the first change of state. Therefore, the probability that the state increases at the first change of state is equal to
$$\frac{6}{13} + \frac{1}{6}\cdot\frac{6}{13} = \frac{7}{13}.$$

(e) (i) Let $a_4$ and $a_5$ be the probabilities that the class $\{1, 2, 3\}$ is eventually reached, starting from states 4 and 5, respectively. We have
$$a_4 = 0.2 + 0.4a_4 + 0.2a_5, \qquad a_5 = 0.7a_4,$$
which yields $a_4 = 0.2 + 0.4a_4 + 0.14a_4$, and $a_4 = 10/23$. Also, the probability that the class $\{6, 7\}$ is reached, starting from state 4, is $1 - (10/23) = 13/23$.

(e) (ii) Let $\mu_4$ and $\mu_5$ be the expected times until a recurrent state is reached, starting from states 4 and 5, respectively. We have
$$\mu_4 = 1 + 0.4\mu_4 + 0.2\mu_5, \qquad \mu_5 = 1 + 0.7\mu_4.$$
Substituting the second equation into the first, and solving for $\mu_4$, we obtain $\mu_4 = 60/23$.

Solution to Problem 7.36. Define the state to be the number of operational machines. The corresponding continuous-time Markov chain is the same as a queue with arrival rate $\lambda$ and service rate $\mu$ (the one of Example 7.15). The required probability is equal to the steady-state probability $\pi_0$ for this queue.

Solution to Problem 7.37. We consider a continuous-time Markov chain with state $n = 0, 1, \ldots, 4$, where $n$ is the number of people waiting. For $n = 0, 1, 2, 3$, the transitions from $n$ to $n+1$ have rate 1, and the transitions from $n+1$ to $n$ have rate 2. The balance equations are
$$\pi_n = \frac{\pi_{n-1}}{2}, \qquad n = 1, \ldots, 4,$$
so that $\pi_n = \pi_0/2^n$, $n = 1, \ldots, 4$. Using the normalization equation $\sum_{i=0}^{4}\pi_i = 1$, we obtain
$$\pi_0 = \frac{1}{1 + 2^{-1} + 2^{-2} + 2^{-3} + 2^{-4}} = \frac{16}{31}.$$
A passenger who joins the queue (in steady-state) will find $n$ other passengers with probability $\pi_n/(\pi_0 + \pi_1 + \pi_2 + \pi_3)$, for $n = 0, 1, 2, 3$. The expected number of passengers found by Penelope is
$$E[N] = \frac{\pi_1 + 2\pi_2 + 3\pi_3}{\pi_0 + \pi_1 + \pi_2 + \pi_3} = \frac{(8 + 2\cdot 4 + 3\cdot 2)/31}{(16 + 8 + 4 + 2)/31} = \frac{22}{30} = \frac{11}{15}.$$
Since the expected waiting time for a new taxi is 1/2 minute, the expected waiting time (by the law of iterated expectations) is
$$E[T] = E[N]\cdot\frac{1}{2} = \frac{11}{30}.$$

Solution to Problem 7.38. Define the state to be the number of pending requests. Thus there are $m+1$ states, numbered $0, 1, \ldots, m$. At state $i$, with $1 \le i \le m$, the transition rate to $i-1$ is $q_{i,i-1} = \mu$. At state $i$, with $0 \le i \le m-1$, the transition rate to $i+1$ is $q_{i,i+1} = (m-i)\lambda$. This is a birth-death process, for which the steady-state probabilities satisfy
$$(m-i)\lambda\pi_i = \mu\pi_{i+1}, \qquad i = 0, 1, \ldots, m-1,$$
together with the normalization equation $\pi_0 + \pi_1 + \cdots + \pi_m = 1$. The solution to these equations yields the steady-state probabilities.
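The concrete numbers in Problem 7.37 are easy to reproduce from the relation $\pi_n = \pi_0/2^n$. The sketch below (an editorial addition) recovers $\pi_0 = 16/31$, $E[N] = 11/15$, and $E[T] = 11/30$.

```python
# Numeric check of Problem 7.37.
weights = [1 / 2**n for n in range(5)]
pi = [w / sum(weights) for w in weights]   # pi_0 = 16/31, pi_1 = 8/31, ...

cond = sum(pi[:4])                          # a new arrival sees n <= 3 waiting
EN = sum(n * pi[n] for n in range(4)) / cond
print(pi[0], EN, EN / 2)                    # 16/31, 11/15, 11/30
```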
CHAPTER 8

Solution to Problem 8.1. There are two hypotheses:
$$H_0: \text{the phone number is 2537267}, \qquad H_1: \text{the phone number is not 2537267},$$
and their prior probabilities are $P(H_0) = P(H_1) = 0.5$. Let $B$ be the event that Artemisia obtains a busy signal when dialing this number. Under $H_0$, we expect a busy signal with certainty: $P(B \mid H_0) = 1$. Under $H_1$, the conditional probability of $B$ is $P(B \mid H_1) = 0.01$. Using Bayes' rule, we obtain the posterior probability
$$P(H_0 \mid B) = \frac{P(B \mid H_0)P(H_0)}{P(B \mid H_0)P(H_0) + P(B \mid H_1)P(H_1)} = \frac{0.5}{0.5 + 0.005} \approx 0.99.$$

Solution to Problem 8.2. (a) Let $K$ (or $\bar K$) be the event that Nefeli knew (or did not know, respectively) the answer to the first question, and let $C$ be the event that she answered the question correctly. Using Bayes' rule, we have
$$P(K \mid C) = \frac{P(K)P(C \mid K)}{P(K)P(C \mid K) + P(\bar K)P(C \mid \bar K)} = \frac{0.5\cdot 1}{0.5\cdot 1 + 0.5\cdot\frac{1}{3}} = \frac{3}{4}.$$
(b) The probability that Nefeli knows the answer to a question that she answered correctly is 3/4 by part (a), so the posterior PMF is binomial with $n = 6$ and $p = 3/4$.

Solution to Problem 8.3. (a) Let $X$ denote the random wait time. We have the observation $X = 30$. Using Bayes' rule, the posterior PDF is
$$f_{\Theta|X}(\theta \mid 30) = \frac{f_\Theta(\theta)f_{X|\Theta}(30 \mid \theta)}{\displaystyle\int f_\Theta(\theta')f_{X|\Theta}(30 \mid \theta')\,d\theta'}.$$
We have $f_{X|\Theta}(30 \mid \theta) = \theta e^{-30\theta}$, so using the given prior, $f_\Theta(\theta) = 10\theta$ for $\theta \in [0, 1/\sqrt 5]$, the posterior is
$$f_{\Theta|X}(\theta \mid 30) = \begin{cases} \dfrac{\theta^2 e^{-30\theta}}{\displaystyle\int_0^{1/\sqrt 5}(\theta')^2 e^{-30\theta'}\,d\theta'}, & \text{if } \theta \in [0, 1/\sqrt 5],\\[1ex] 0, & \text{otherwise.} \end{cases}$$
The MAP rule selects the $\hat\theta$ that maximizes the posterior (or equivalently its numerator, since the denominator is a positive constant). By setting the derivative of the numerator to 0, we obtain
$$\frac{d}{d\theta}\left(\theta^2 e^{-30\theta}\right) = 2\theta e^{-30\theta} - 30\theta^2 e^{-30\theta} = (2 - 30\theta)\theta e^{-30\theta} = 0.$$
Therefore, $\hat\theta = 2/30$. The conditional expectation estimator is
$$E[\Theta \mid X = 30] = \frac{\displaystyle\int_0^{1/\sqrt 5}\theta^3 e^{-30\theta}\,d\theta}{\displaystyle\int_0^{1/\sqrt 5}(\theta')^2 e^{-30\theta'}\,d\theta'}.$$

(b) Let $X_i$ denote the random wait time for the $i$th day, $i = 1, \ldots, 5$. We have the observation vector $X = x$, where $x = (30, 25, 15, 40, 20)$. Using Bayes' rule, the posterior PDF is
$$f_{\Theta|X}(\theta \mid x) = \frac{f_\Theta(\theta)f_{X|\Theta}(x \mid \theta)}{\displaystyle\int f_\Theta(\theta')f_{X|\Theta}(x \mid \theta')\,d\theta'}.$$
In view of the independence of the $X_i$, we have for $\theta \in [0, 1/\sqrt 5]$,
$$f_{X|\Theta}(x \mid \theta) = f_{X_1|\Theta}(x_1 \mid \theta)\cdots f_{X_5|\Theta}(x_5 \mid \theta) = \theta e^{-x_1\theta}\cdots\theta e^{-x_5\theta} = \theta^5 e^{-(x_1+\cdots+x_5)\theta} = \theta^5 e^{-130\theta}.$$
Using the given prior, $f_\Theta(\theta) = 10\theta$ for $\theta \in [0, 1/\sqrt 5]$, we obtain the posterior
$$f_{\Theta|X}(\theta \mid x) = \begin{cases} \dfrac{\theta^6 e^{-130\theta}}{\displaystyle\int_0^{1/\sqrt 5}(\theta')^6 e^{-130\theta'}\,d\theta'}, & \text{if } \theta \in [0, 1/\sqrt 5],\\[1ex] 0, & \text{otherwise.} \end{cases}$$
To derive the MAP rule, we set the derivative of the numerator to 0, obtaining
$$\frac{d}{d\theta}\left(\theta^6 e^{-130\theta}\right) = 6\theta^5 e^{-130\theta} - 130\theta^6 e^{-130\theta} = (6 - 130\theta)\theta^5 e^{-130\theta} = 0.$$
Therefore, $\hat\theta = 6/130$. The conditional expectation estimator is
$$E\left[\Theta \mid X = (30, 25, 15, 40, 20)\right] = \frac{\displaystyle\int_0^{1/\sqrt 5}\theta^7 e^{-130\theta}\,d\theta}{\displaystyle\int_0^{1/\sqrt 5}(\theta')^6 e^{-130\theta'}\,d\theta'}.$$
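Both estimates in part (a) of Problem 8.3 are easy to evaluate numerically. The sketch below (an editorial addition) maximizes the posterior numerator over a fine grid and approximates the two integrals of the conditional expectation with a simple Riemann sum.

```python
# Numeric evaluation of the MAP and LMS estimates in Problem 8.3(a).
import math

hi = 1 / math.sqrt(5)
n = 100_000
h = hi / n
num = den = 0.0
best_t, best_w = 0.0, -1.0
for k in range(1, n + 1):
    th = k * h
    w = th**2 * math.exp(-30 * th)   # posterior numerator
    num += th * w * h
    den += w * h
    if w > best_w:
        best_t, best_w = th, w

print(best_t, 2 / 30)   # MAP: grid argmax vs. closed form 2/30
print(num / den)        # LMS: E[Theta | X = 30]
```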
Solution to Problem 8.4. (a) Let $X$ denote the random variable representing the number of questions answered correctly. For each value $\theta \in \{\theta_1, \theta_2, \theta_3\}$, we have, using Bayes' rule,
$$p_{\Theta|X}(\theta \mid k) = \frac{p_\Theta(\theta)\,p_{X|\Theta}(k \mid \theta)}{\displaystyle\sum_{i=1}^{3} p_\Theta(\theta_i)\,p_{X|\Theta}(k \mid \theta_i)}.$$
The conditional PMF $p_{X|\Theta}$ is binomial with $n = 10$ and probability of success $p_i$ equal to the probability of answering a question correctly, given that the student is of category $i$, i.e.,
$$p_i = \theta_i + (1 - \theta_i)\cdot\frac{1}{3} = \frac{2\theta_i + 1}{3}.$$
Thus we have
$$p_1 = \frac{1.6}{3}, \qquad p_2 = \frac{2.4}{3}, \qquad p_3 = \frac{2.9}{3}.$$
For a given number of correct answers $k$, the MAP rule selects the category $i$ for which the corresponding binomial probability $\binom{10}{k}p_i^k(1-p_i)^{10-k}$ is maximized.

(b) The posterior PMF of $M$ is given by
$$p_{M|X}(m \mid X = k) = \sum_{i=1}^{3} p_{\Theta|X}(\theta_i \mid X = k)\,P(M = m \mid X = k, \Theta = \theta_i),$$
where the probabilities $p_{\Theta|X}(\theta_i \mid X = k)$ were calculated in part (a); for $k = 5$,
$$p_{\Theta|X}(\theta_1 \mid X = 5) \approx 0.9010, \qquad p_{\Theta|X}(\theta_2 \mid X = 5) \approx 0.0989, \qquad p_{\Theta|X}(\theta_3 \mid X = 5) \approx 0.0001.$$
The probability that the student knows the answer to a question that she answered correctly is
$$q_i = \frac{\theta_i}{\theta_i + (1 - \theta_i)/3}, \qquad i = 1, 2, 3,$$
and the probabilities $P(M = m \mid X = k, \Theta = \theta_i)$ are binomial and can be calculated in the manner described in Problem 2(b):
$$P(M = m \mid X = k, \Theta = \theta_i) = \binom{k}{m}q_i^m(1 - q_i)^{k-m}.$$
For $k = 5$, the posterior PMF can be explicitly calculated for $m = 0, \ldots, 5$:
$$\begin{aligned}
p_{M|X}(0 \mid X = 5) &\approx 0.0145, & p_{M|X}(1 \mid X = 5) &\approx 0.0929, & p_{M|X}(2 \mid X = 5) &\approx 0.2402,\\
p_{M|X}(3 \mid X = 5) &\approx 0.3173, & p_{M|X}(4 \mid X = 5) &\approx 0.2335, & p_{M|X}(5 \mid X = 5) &\approx 0.1015.
\end{aligned}$$
It follows that the MAP estimate is $\hat m = 3$. The conditional expectation estimate is
$$E[M \mid X = 5] = \sum_{m=1}^{5} m\,p_{M|X}(m \mid X = 5) \approx 2.9668 \approx 3.$$

Solution to Problem 8.5. According to the MAP rule, we need to maximize over $\theta \in [0, 1]$ the posterior PDF
$$f_{\Theta|X}(\theta \mid k) = \frac{f_\Theta(\theta)\,p_{X|\Theta}(k \mid \theta)}{\displaystyle\int f_\Theta(\theta')\,p_{X|\Theta}(k \mid \theta')\,d\theta'},$$
where $X$ is the number of heads observed. Since the denominator is a positive constant, we only need to maximize
$$f_\Theta(\theta)\,p_{X|\Theta}(k \mid \theta) = \binom{n}{k}\left(2 - 4\left|\frac{1}{2} - \theta\right|\right)\theta^k(1-\theta)^{n-k}.$$
The function to be maximized is differentiable except at $\theta = 1/2$. This leads to three different possibilities: (a) the maximum is attained at $\theta = 1/2$; (b) the maximum is attained at some $\theta < 1/2$, at which the derivative is equal to zero; (c) the maximum is attained at some $\theta > 1/2$, at which the derivative is equal to zero.

Let us consider the second possibility. For $\theta < 1/2$, we have $f_\Theta(\theta) = 4\theta$. The function to be maximized, ignoring the constant factor $4\binom{n}{k}$, is
$$\theta^{k+1}(1-\theta)^{n-k}.$$
By setting the derivative to zero, we find $\hat\theta = (k+1)/(n+1)$, provided that $(k+1)/(n+1) < 1/2$.

Let us now consider the third possibility. For $\theta > 1/2$, we have $f_\Theta(\theta) = 4(1-\theta)$. The function to be maximized, ignoring the constant factor $4\binom{n}{k}$, is
$$\theta^k(1-\theta)^{n-k+1}.$$
By setting the derivative to zero, we find $\hat\theta = k/(n+1)$, provided that $k/(n+1) > 1/2$. If neither condition $(k+1)/(n+1) < 1/2$ nor $k/(n+1) > 1/2$ holds, we must have the first possibility, with the maximum attained at $\hat\theta = 1/2$. To summarize, the MAP estimate is given by
$$\hat\theta = \begin{cases} \dfrac{k+1}{n+1}, & \text{if } \dfrac{k+1}{n+1} < \dfrac{1}{2},\\[1ex] \dfrac{1}{2}, & \text{if } \dfrac{k}{n+1} \le \dfrac{1}{2} \le \dfrac{k+1}{n+1},\\[1ex] \dfrac{k}{n+1}, & \text{if } \dfrac{1}{2} < \dfrac{k}{n+1}. \end{cases}$$
Figure 8.1 shows a plot of the function $f_\Theta(\theta)\theta^k(1-\theta)^{n-k}$, for three different values of $k$, as well as a plot of $\hat\theta$ as a function of $k$, all for the case where $n = 10$.

[Figure 8.1: (a)-(c) Plots of the function $f_\Theta(\theta)\theta^k(1-\theta)^{n-k}$ in Problem 8.5, when $n = 10$, and for $k = 3, 5, 7$, respectively. (d) The MAP estimate $\hat\theta$ as a function of $k$, when $n = 10$.]
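The three-case formula just derived can be confirmed by brute force. The sketch below (an editorial addition) maximizes $f_\Theta(\theta)\theta^k(1-\theta)^{n-k}$ over a fine grid for $n = 10$ and compares the grid argmax with the closed form for each $k$.

```python
# Brute-force check of the MAP formula in Problem 8.5 for n = 10.
n = 10
grid = [i / 100_000 for i in range(1, 100_000)]
for k in range(n + 1):
    if (k + 1) / (n + 1) < 0.5:
        formula = (k + 1) / (n + 1)
    elif k / (n + 1) > 0.5:
        formula = k / (n + 1)
    else:
        formula = 0.5
    brute = max(grid,
                key=lambda t: (2 - 4 * abs(0.5 - t)) * t**k * (1 - t)**(n - k))
    print(k, round(formula, 5), round(brute, 5))   # the two columns agree
```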
Solution to Problem 8.6. (a) First we calculate the values of $c_1$ and $c_2$. We have
$$c_1 = \frac{1}{\displaystyle\int_5^{60} e^{-0.04x}\,dx} \approx 0.0549, \qquad c_2 = \frac{1}{\displaystyle\int_5^{60} e^{-0.16x}\,dx} \approx 0.3561.$$
Next we derive the posterior probability of each hypothesis:
$$p_{\Theta|T}(1 \mid 20) = \frac{0.3\,f_{T|\Theta}(20 \mid \Theta = 1)}{0.3\,f_{T|\Theta}(20 \mid \Theta = 1) + 0.7\,f_{T|\Theta}(20 \mid \Theta = 2)} = \frac{0.3\cdot 0.0549\,e^{-0.04\cdot 20}}{0.3\cdot 0.0549\,e^{-0.04\cdot 20} + 0.7\cdot 0.3561\,e^{-0.16\cdot 20}} = 0.4214,$$
and
$$p_{\Theta|T}(2 \mid 20) = \frac{0.7\,f_{T|\Theta}(20 \mid \Theta = 2)}{0.3\,f_{T|\Theta}(20 \mid \Theta = 1) + 0.7\,f_{T|\Theta}(20 \mid \Theta = 2)} = 0.5786.$$
Therefore she would accept the hypothesis that the problem is not difficult, and the probability of error is $p_e = p_{\Theta|T}(1 \mid 20) = 0.4214$.

(b) We write the posterior probability of each hypothesis:
$$\begin{aligned}
p_{\Theta|T_1,\ldots,T_5}(1 \mid 20, 10, 25, 15, 35) &= \frac{0.3\,c_1^5\exp\left(-0.04(20+10+25+15+35)\right)}{0.3\,c_1^5\exp\left(-0.04(20+10+25+15+35)\right) + 0.7\,c_2^5\exp\left(-0.16(20+10+25+15+35)\right)}\\
&= 0.9171,
\end{aligned}$$
and similarly
$$p_{\Theta|T_1,\ldots,T_5}(2 \mid 20, 10, 25, 15, 35) = 0.0829.$$
So this time the professor would accept the hypothesis that the problem is difficult. The probability of error is 0.0829, much lower than in the case of a single observation.

Solution to Problem 8.7. (a) Let $H_1$ and $H_2$ be the hypotheses that box 1 or 2, respectively, was chosen. Let $X = 1$ if the drawn ball is white, and $X = 2$ if it is black. We introduce a parameter/random variable $\Theta$, taking values $\theta_1$ and $\theta_2$, corresponding to $H_1$ and $H_2$, respectively. We have the following prior distribution for $\Theta$:
$$p_\Theta(\theta_1) = p, \qquad p_\Theta(\theta_2) = 1 - p,$$
where $p$ is given. Using Bayes' rule, we have
$$p_{\Theta|X}(\theta_1 \mid 1) = \frac{p_\Theta(\theta_1)p_{X|\Theta}(1 \mid \theta_1)}{p_\Theta(\theta_1)p_{X|\Theta}(1 \mid \theta_1) + p_\Theta(\theta_2)p_{X|\Theta}(1 \mid \theta_2)} = \frac{2p/3}{2p/3 + (1-p)/3} = \frac{2p}{1+p}.$$
Similarly we calculate the other conditional probabilities of interest:
$$p_{\Theta|X}(\theta_2 \mid 1) = \frac{1-p}{1+p}, \qquad p_{\Theta|X}(\theta_1 \mid 2) = \frac{p}{2-p}, \qquad p_{\Theta|X}(\theta_2 \mid 2) = \frac{2-2p}{2-p}.$$
If a white ball is drawn ($X = 1$), the MAP rule selects box 1 if $p_{\Theta|X}(\theta_1 \mid 1) > p_{\Theta|X}(\theta_2 \mid 1)$, that is, if
$$\frac{2p}{1+p} > \frac{1-p}{1+p}, \quad\text{or } p > 1/3,$$
and selects box 2 otherwise. If a black ball is drawn ($X = 2$), the MAP rule selects box 1 if $p_{\Theta|X}(\theta_1 \mid 2) > p_{\Theta|X}(\theta_2 \mid 2)$, that is, if
$$\frac{p}{2-p} > \frac{2-2p}{2-p}, \quad\text{or } p > 2/3,$$
and selects box 2 otherwise.

Suppose now that the two boxes have equal prior probabilities ($p = 1/2$). Then, the MAP rule decides on box 1 (or box 2) if $X = 1$ (or $X = 2$, respectively). Given an initial choice of box 1 ($\Theta = \theta_1$), the probability of error is
$$e_1 = P(X = 2 \mid \theta_1) = \frac{1}{3}.$$
Similarly, for an initial choice of box 2 ($\Theta = \theta_2$), the probability of error is
$$e_2 = P(X = 1 \mid \theta_2) = \frac{1}{3}.$$
The overall probability of error of the MAP decision rule is obtained using the total probability theorem:
$$P(\text{error}) = p_\Theta(\theta_1)e_1 + p_\Theta(\theta_2)e_2 = \frac{1}{2}\cdot\frac{1}{3} + \frac{1}{2}\cdot\frac{1}{3} = \frac{1}{3}.$$
Thus, whereas prior to knowing the data (the value of $X$) the probability of error for either decision was 1/2, after knowing the data and using the MAP rule, the probability of error is reduced to 1/3. This is in fact a general property of the MAP rule: with more data, the probability of error cannot increase, regardless of the observed value of $X$ (see Problem 8.9).
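The normalizing constants and posteriors of Problem 8.6 have closed forms, so they make a convenient numeric check. The sketch below (an editorial addition) uses $c(\lambda) = \lambda/(e^{-5\lambda} - e^{-60\lambda})$, which is the value of $1/\int_5^{60}e^{-\lambda x}\,dx$ scaled so that $c(\lambda)e^{-\lambda t}$ is the conditional density.

```python
# Reproduce the numbers of Problem 8.6 from closed forms and Bayes' rule.
import math

def c(lam):
    return lam / (math.exp(-5 * lam) - math.exp(-60 * lam))

c1, c2 = c(0.04), c(0.16)
print(c1, c2)   # ~0.0549, ~0.3561

def post_difficult(times):
    s, n = sum(times), len(times)
    a = 0.3 * c1**n * math.exp(-0.04 * s)   # Theta = 1 (difficult)
    b = 0.7 * c2**n * math.exp(-0.16 * s)   # Theta = 2 (not difficult)
    return a / (a + b)

print(post_difficult([20]))                    # ~0.4214
print(post_difficult([20, 10, 25, 15, 35]))    # ~0.9171
```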
Solution to Problem 8.8. (a) Let $K$ be the number of heads observed before the first tail, and let $p_{K|H_i}(k)$ be the PMF of $K$ when hypothesis $H_i$ is true. Note that the event $K = k$ corresponds to a sequence of $k$ heads followed by a tail, so that
$$p_{K|H_i}(k) = (1 - q_i)q_i^k, \qquad k = 0, 1, \ldots, \quad i = 0, 1.$$
Using Bayes' rule, we obtain
$$P(H_1 \mid K = k) = \frac{p_{K|H_1}(k)P(H_1)}{p_K(k)} = \frac{\frac{1}{2}(1-q_1)q_1^k}{\frac{1}{2}(1-q_1)q_1^k + \frac{1}{2}(1-q_0)q_0^k} = \frac{(1-q_1)q_1^k}{(1-q_1)q_1^k + (1-q_0)q_0^k}.$$

(b) An error occurs in two cases: if $H_0$ is true and $K \ge k^*$, or if $H_1$ is true and $K < k^*$. So the probability of error, denoted by $p_e$, is
$$\begin{aligned}
p_e &= P(K \ge k^* \mid H_0)P(H_0) + P(K < k^* \mid H_1)P(H_1)\\
&= P(H_0)\sum_{k=k^*}^{\infty}(1-q_0)q_0^k + P(H_1)\sum_{k=0}^{k^*-1}(1-q_1)q_1^k\\
&= P(H_0)(1-q_0)\frac{q_0^{k^*}}{1-q_0} + P(H_1)(1-q_1)\frac{1-q_1^{k^*}}{1-q_1}\\
&= P(H_1) + P(H_0)q_0^{k^*} - P(H_1)q_1^{k^*}\\
&= \frac{1}{2}\left(1 + q_0^{k^*} - q_1^{k^*}\right).
\end{aligned}$$
To find the value of $k^*$ that minimizes $p_e$, we temporarily treat $k^*$ as a continuous variable and differentiate $p_e$ with respect to $k^*$. Setting this derivative to zero, we obtain
$$\frac{dp_e}{dk^*} = \frac{1}{2}\left((\log q_0)q_0^{k^*} - (\log q_1)q_1^{k^*}\right) = 0.$$
The solution to this equation is
$$k = \frac{\log\left(|\log q_0|\right) - \log\left(|\log q_1|\right)}{|\log q_0| - |\log q_1|}.$$
For $k^*$ different from $k$, the derivative of $p_e$ is nonzero, so that $p_e$ is monotonic on either side of $k$. Since $q_1 > q_0$, the derivative is negative at $k^* = 0$. This implies that $p_e$ is monotonically decreasing as $k^*$ ranges from 0 to $k$. Similarly, the derivative of $p_e$ is positive for very large values of $k^*$, which implies that $p_e$ is monotonically increasing as $k^*$ ranges from $k$ to infinity. It follows that $k$ minimizes $p_e$. However, $k^*$ can only take integer values, so the integer $k^*$ that minimizes $p_e$ is either $\lfloor k\rfloor$ or $\lceil k\rceil$, whichever gives the lower value of $p_e$.

We now derive the form of the MAP decision rule, which minimizes the probability of error, and show that it is of the same type as the decision rules we just studied. With the MAP decision rule, for any given $k$, we accept $H_1$ if
$$P(K = k \mid H_1)P(H_1) > P(K = k \mid H_0)P(H_0),$$
and accept $H_0$ otherwise. Note that if
$$(1-q_1)q_1^k P(H_1) > (1-q_0)q_0^k P(H_0), \quad\text{then}\quad (1-q_1)q_1^{k+1}P(H_1) > (1-q_0)q_0^{k+1}P(H_0),$$
since $q_1 > q_0$. Similarly, if
$$(1-q_1)q_1^k P(H_1) < (1-q_0)q_0^k P(H_0), \quad\text{then}\quad (1-q_1)q_1^{k-1}P(H_1) < (1-q_0)q_0^{k-1}P(H_0).$$
This implies that if we decide in favor of $H_1$ when a value $k$ is observed, then we also decide in favor of $H_1$ when a larger value is observed; and if we decide in favor of $H_0$ when a value $k$ is observed, then we also decide in favor of $H_0$ when a smaller value is observed. Therefore, the MAP rule is of the type considered and optimized earlier, and thus will not result in a lower value of $p_e$.

(c) As in part (b), we have
$$p_e = P(H_1) + P(H_0)q_0^{k^*} - P(H_1)q_1^{k^*}.$$
Consider the case where $P(H_1) = 0.7$, $q_0 = 0.3$, and $q_1 = 0.7$. Repeating the calculation of part (b) with the unequal priors, we have
$$k = \frac{\log\left(\dfrac{P(H_0)\log q_0}{P(H_1)\log q_1}\right)}{\log\left(q_1/q_0\right)} \approx 0.43.$$
Thus, the optimal value of $k^*$ is either $\lfloor k\rfloor = 0$ or $\lceil k\rceil = 1$. We find that with either choice the probability of error $p_e$ is the same and equal to 0.3. Thus, either choice minimizes the probability of error. Note that $k$ decreases as $P(H_1)$ increases from 0.7 to 1.0, so the choice $k^* = 0$ remains optimal in this range. As a result, we always decide in favor of $H_1$, and the probability of error is $p_e = P(H_0) = 1 - P(H_1)$.

Solution to Problem 8.10. Let $\Theta$ be the car speed and let $X$ be the radar's measurement. Similar to Example 8.11, the joint PDF of $\Theta$ and $X$ is uniform over the set of pairs $(\theta, x)$ that satisfy $55 \le \theta \le 75$ and $\theta \le x \le \theta + 5$. As in Example 8.11, for any given $x$, the value of $\Theta$ is constrained to lie on a particular interval, the posterior PDF of $\Theta$ is uniform over that interval, and the conditional mean is the midpoint of that interval. In particular,
$$E[\Theta \mid X = x] = \begin{cases} \dfrac{x}{2} + 27.5, & \text{if } 55 \le x \le 60,\\[0.5ex] x - 2.5, & \text{if } 60 \le x \le 75,\\[0.5ex] \dfrac{x}{2} + 35, & \text{if } 75 \le x \le 80. \end{cases}$$
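The piecewise-linear conditional mean of Problem 8.10 can be verified by simulation. The sketch below (an editorial addition) samples $(\Theta, X)$ from the uniform joint and averages $\Theta$ over samples whose $X$ falls near a few target values.

```python
# Monte Carlo check of the piecewise LMS estimator in Problem 8.10.
import random

def lms(x):
    if x <= 60:
        return x / 2 + 27.5
    if x <= 75:
        return x - 2.5
    return x / 2 + 35

thetas, xs = [], []
for _ in range(1_000_000):
    th = random.uniform(55, 75)
    thetas.append(th)
    xs.append(th + random.uniform(0, 5))

for x0 in (57.0, 70.0, 78.0):
    near = [th for th, x in zip(thetas, xs) if abs(x - x0) < 0.1]
    print(x0, lms(x0), round(sum(near) / len(near), 2))   # close agreement
```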
Solution to Problem 8.11. From Bayes' rule,
$$p_{\Theta|X}(\theta \mid x) = \frac{p_{X|\Theta}(x \mid \theta)p_\Theta(\theta)}{p_X(x)} = \frac{p_{X|\Theta}(x \mid \theta)p_\Theta(\theta)}{\displaystyle\sum_{i=1}^{100}p_{X|\Theta}(x \mid i)p_\Theta(i)} = \frac{\dfrac{1}{\theta}\cdot\dfrac{1}{100}}{\displaystyle\sum_{i=x}^{100}\dfrac{1}{i}\cdot\dfrac{1}{100}} = \begin{cases} \dfrac{1}{\theta\displaystyle\sum_{i=x}^{100}\dfrac{1}{i}}, & \text{for } \theta = x, x+1, \ldots, 100,\\[2ex] 0, & \text{for } \theta = 1, 2, \ldots, x-1. \end{cases}$$
Given that $X = x$, the posterior probability is maximized at $\hat\theta = x$, and this is the MAP estimate of $\Theta$ given $x$. The LMS estimate is
$$\hat\theta = E[\Theta \mid X = x] = \sum_{\theta=1}^{100}\theta\,p_{\Theta|X}(\theta \mid x) = \frac{101 - x}{\displaystyle\sum_{i=x}^{100}\frac{1}{i}}.$$
Figure 8.2 plots the MAP and LMS estimates of $\Theta$ as a function of $X$.

[Figure 8.2: MAP and LMS estimates of $\Theta$ as a function of $X$ in Problem 8.11.]

Solution to Problem 8.12. (a) The posterior PDF is
$$f_{\Theta|X_1,\ldots,X_n}(\theta \mid x_1, \ldots, x_n) = \frac{f_\Theta(\theta)f_{X|\Theta}(x \mid \theta)}{\displaystyle\int_0^1 f_\Theta(\theta')f_{X|\Theta}(x \mid \theta')\,d\theta'} = \begin{cases} \dfrac{1/\theta^n}{\dfrac{1}{n-1}x^{1-n} - \dfrac{1}{n-1}}, & \text{if } x \le \theta \le 1,\\[2ex] 0, & \text{otherwise,} \end{cases}$$
where $x = \max\{x_1, \ldots, x_n\}$. Using the definition of conditional expectation, we obtain
$$\begin{aligned}
E[\Theta \mid X_1 = x_1, \ldots, X_n = x_n] &= \int_0^1 \theta\,f_{\Theta|X_1,\ldots,X_n}(\theta \mid x_1, \ldots, x_n)\,d\theta = \int_x^1 \theta\,f_{\Theta|X_1,\ldots,X_n}(\theta \mid x_1, \ldots, x_n)\,d\theta\\
&= \frac{\dfrac{1}{n-2}x^{2-n} - \dfrac{1}{n-2}}{\dfrac{1}{n-1}x^{1-n} - \dfrac{1}{n-1}} = \frac{n-1}{n-2}\cdot\frac{x^{2-n}-1}{x^{1-n}-1} = \frac{n-1}{n-2}\cdot\frac{x\left(1 - x^{n-2}\right)}{1 - x^{n-1}}.
\end{aligned}$$

(b) The conditional mean squared error of the MAP estimator is
$$E\left[(\hat\Theta - \Theta)^2 \mid X_1 = x_1, \ldots, X_n = x_n\right] = E\left[(x - \Theta)^2 \mid X_1 = x_1, \ldots, X_n = x_n\right] = x^2 - 2x\,\frac{n-1}{n-2}\cdot\frac{x^{2-n}-1}{x^{1-n}-1} + \frac{n-1}{n-3}\cdot\frac{x^{3-n}-1}{x^{1-n}-1},$$
and the conditional mean squared error of the LMS estimator is
$$E\left[(\hat\Theta - \Theta)^2 \mid X_1 = x_1, \ldots, X_n = x_n\right] = -\left(\frac{n-1}{n-2}\cdot\frac{x^{2-n}-1}{x^{1-n}-1}\right)^2 + \frac{n-1}{n-3}\cdot\frac{x^{3-n}-1}{x^{1-n}-1}.$$
We plot in Fig. 8.3 the estimators and the corresponding conditional mean squared errors as functions of $x$, for the case where $n = 5$.

[Figure 8.3: The MAP and LMS estimates, and their conditional mean squared errors, as functions of $x$ in Problem 8.12.]

(c) When $x$ is held fixed at 0.5, the MAP estimate also remains fixed at 0.5. On the other hand, the LMS estimate, given by the expression found in part (a), can be seen to be larger than 0.5, and to converge to 0.5 as $n \to \infty$. Furthermore, the conditional mean squared error decreases to zero as $n$ increases to infinity; see Fig. 8.4.

[Figure 8.4: Asymptotic behavior of the MAP and LMS estimators, and the corresponding conditional mean squared errors, for fixed $x = 0.5$, as $n \to \infty$, in Problem 8.12.]
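The convergence described in part (c) is immediate to see numerically. The sketch below (an editorial addition) evaluates the LMS formula of part (a) at $x = 0.5$ for increasing $n$; the values stay above the MAP estimate 0.5 and approach it.

```python
# LMS estimate of Problem 8.12 for fixed x = 0.5 as n grows.
x = 0.5
for n in (3, 5, 10, 20, 50):
    lms = (n - 1) / (n - 2) * x * (1 - x**(n - 2)) / (1 - x**(n - 1))
    print(n, round(lms, 4))   # 0.6667, 0.6222, ..., decreasing toward 0.5
```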
Solution to Problem 8.14. Here $\Theta$ is uniformly distributed in the interval $[4, 10]$ and $X = \Theta + W$, where $W$ is uniformly distributed in the interval $[-1, 1]$ and is independent of $\Theta$. The linear LMS estimator of $\Theta$ given $X$ is
$$\hat\Theta = E[\Theta] + \frac{\operatorname{cov}(\Theta, X)}{\sigma_X^2}\left(X - E[X]\right).$$
We have
$$E[X] = E[\Theta] + E[W] = E[\Theta], \qquad \sigma_X^2 = \sigma_\Theta^2 + \sigma_W^2, \qquad \operatorname{cov}(\Theta, X) = E\left[\left(\Theta - E[\Theta]\right)\left(X - E[X]\right)\right] = E\left[\left(\Theta - E[\Theta]\right)^2\right] = \sigma_\Theta^2,$$
where the last relation follows from the independence of $\Theta$ and $W$. Using the formulas for the mean and variance of the uniform PDF, we have
$$E[\Theta] = 7, \qquad \sigma_\Theta^2 = 3, \qquad E[W] = 0, \qquad \sigma_W^2 = 1/3.$$
Thus, the linear LMS estimator is
$$\hat\Theta = 7 + \frac{3}{3 + 1/3}(X - 7), \qquad\text{or}\qquad \hat\Theta = 7 + \frac{9}{10}(X - 7).$$
The mean squared error is $(1 - \rho^2)\sigma_\Theta^2$. We have
$$\rho^2 = \left(\frac{\operatorname{cov}(\Theta, X)}{\sigma_\Theta\sigma_X}\right)^2 = \left(\frac{\sigma_\Theta^2}{\sigma_\Theta\sigma_X}\right)^2 = \frac{\sigma_\Theta^2}{\sigma_X^2} = \frac{3}{3 + 1/3} = \frac{9}{10}.$$
Hence the mean squared error is
$$(1 - \rho^2)\sigma_\Theta^2 = \left(1 - \frac{9}{10}\right)\cdot 3 = \frac{3}{10}.$$

Solution to Problem 8.15. The conditional mean squared error of the MAP estimator $\hat\Theta = X$ is
$$E\left[(\hat\Theta - \Theta)^2 \mid X = x\right] = E\left[\hat\Theta^2 - 2\hat\Theta\Theta + \Theta^2 \mid X = x\right] = x^2 - 2x\,\frac{101 - x}{\displaystyle\sum_{i=x}^{100}\frac{1}{i}} + \frac{\displaystyle\sum_{i=x}^{100}i}{\displaystyle\sum_{i=x}^{100}\frac{1}{i}}.$$
The conditional mean squared error of the LMS estimator
$$\hat\Theta = \frac{101 - X}{\displaystyle\sum_{i=X}^{100}\frac{1}{i}}$$
is
$$E\left[(\hat\Theta - \Theta)^2 \mid X = x\right] = -\frac{(101 - x)^2}{\left(\displaystyle\sum_{i=x}^{100}\frac{1}{i}\right)^2} + \frac{\displaystyle\sum_{i=x}^{100}i}{\displaystyle\sum_{i=x}^{100}\frac{1}{i}}.$$
To obtain the linear LMS estimator, we compute the expectation and variance of $X$. We have
$$E[X] = E\left[E[X \mid \Theta]\right] = E\left[\frac{\Theta + 1}{2}\right] = \frac{(101/2) + 1}{2} = 25.75,$$
and
$$\operatorname{var}(X) = E[X^2] - \left(E[X]\right)^2 = \frac{1}{100}\sum_{x=1}^{100}x^2\sum_{\theta=x}^{100}\frac{1}{\theta} - (25.75)^2 = 490.19.$$
The covariance of $\Theta$ and $X$ is
$$\operatorname{cov}(\Theta, X) = E\left[\left(X - E[X]\right)\left(\Theta - E[\Theta]\right)\right] = \sum_{\theta=1}^{100}\frac{1}{100}\sum_{x=1}^{\theta}\frac{1}{\theta}(x - 25.75)(\theta - 50) = 416.63.$$
Applying the linear LMS formula yields
$$\hat\Theta = E[\Theta] + \frac{\operatorname{cov}(\Theta, X)}{\operatorname{var}(X)}\left(X - E[X]\right) = 50 + \frac{416.63}{490.19}(X - 25.75) = 0.85X + 28.11.$$
The conditional mean squared error of the linear LMS estimator is
$$E\left[(\hat\Theta - \Theta)^2 \mid X = x\right] = (0.85x + 28.11)^2 - 2(0.85x + 28.11)\,\frac{101 - x}{\displaystyle\sum_{i=x}^{100}\frac{1}{i}} + \frac{\displaystyle\sum_{i=x}^{100}i}{\displaystyle\sum_{i=x}^{100}\frac{1}{i}}.$$
Figure 8.5 plots the conditional mean squared errors of the MAP, LMS, and linear LMS estimators, as a function of $x$. Note that the conditional mean squared error is lowest for the LMS estimator, but that the linear LMS estimator comes very close.

[Figure 8.5: Estimators and their conditional mean squared errors in Problem 8.15.]

Solution to Problem 8.16. (a) The LMS estimator is
$$g(X) = E[\Theta \mid X] = \begin{cases} \dfrac{1}{2}X, & \text{if } 0 \le X < 1,\\[0.5ex] X - \dfrac{1}{2}, & \text{if } 1 \le X \le 2. \end{cases}$$

(b) We first derive the conditional variance $E\left[\left(\Theta - g(X)\right)^2 \mid X = x\right]$. If $x \in [0, 1]$, the conditional PDF of $\Theta$ is uniform over the interval $[0, x]$, and
$$E\left[\left(\Theta - g(X)\right)^2 \mid X = x\right] = x^2/12.$$
Similarly, if $x \in [1, 2]$, the conditional PDF of $\Theta$ is uniform over the interval $[x-1, x]$, and
$$E\left[\left(\Theta - g(X)\right)^2 \mid X = x\right] = 1/12.$$
We now evaluate the expectation and variance of $g(X)$. Note that $(\Theta, X)$ is uniform over a region with area 3/2, so that the constant $c$ must be equal to 2/3. We have
$$E\left[g(X)\right] = E\left[E[\Theta \mid X]\right] = E[\Theta] = \int\!\!\int \theta f_{X,\Theta}(x, \theta)\,d\theta\,dx = \int_0^1\!\!\int_0^x \frac{2}{3}\theta\,d\theta\,dx + \int_1^2\!\!\int_{x-1}^x \frac{2}{3}\theta\,d\theta\,dx = \frac{7}{9}.$$
Furthermore,
$$\begin{aligned}
\operatorname{var}\left(g(X)\right) = \operatorname{var}\left(E[\Theta \mid X]\right) &= E\left[\left(E[\Theta \mid X]\right)^2\right] - \left(E\left[E[\Theta \mid X]\right]\right)^2\\
&= \int_0^2 \left(E[\Theta \mid X = x]\right)^2 f_X(x)\,dx - \left(E[\Theta]\right)^2\\
&= \int_0^1 \left(\frac{1}{2}x\right)^2\cdot\frac{2}{3}x\,dx + \int_1^2 \left(x - \frac{1}{2}\right)^2\cdot\frac{2}{3}\,dx - \left(\frac{7}{9}\right)^2 = \frac{103}{648} = 0.159,
\end{aligned}$$
where
$$f_X(x) = \begin{cases} 2x/3, & \text{if } 0 \le x \le 1,\\ 2/3, & \text{if } 1 \le x \le 2. \end{cases}$$

(c) The expectations $E\left[\left(\Theta - g(X)\right)^2\right]$ and $E\left[\operatorname{var}(\Theta \mid X)\right]$ are equal because, by the law of iterated expectations,
$$E\left[\left(\Theta - g(X)\right)^2\right] = E\left[E\left[\left(\Theta - g(X)\right)^2 \mid X\right]\right] = E\left[\operatorname{var}(\Theta \mid X)\right].$$
Recall from part (b) that
$$\operatorname{var}(\Theta \mid X = x) = \begin{cases} x^2/12, & \text{if } 0 \le x < 1,\\ 1/12, & \text{if } 1 \le x \le 2. \end{cases}$$
It follows that
$$E\left[\operatorname{var}(\Theta \mid X)\right] = \int \operatorname{var}(\Theta \mid X = x)f_X(x)\,dx = \int_0^1 \frac{x^2}{12}\cdot\frac{2}{3}x\,dx + \int_1^2 \frac{1}{12}\cdot\frac{2}{3}\,dx = \frac{5}{72}.$$

(d) By the law of total variance, we have
$$\operatorname{var}(\Theta) = E\left[\operatorname{var}(\Theta \mid X)\right] + \operatorname{var}\left(E[\Theta \mid X]\right).$$
Using the results from parts (b) and (c), we have
$$\operatorname{var}(\Theta) = \frac{5}{72} + \frac{103}{648} = \frac{37}{162}.$$
An alternative approach to calculating the variance of $\Theta$ is to first find the marginal PDF $f_\Theta(\theta)$ and then apply the definition
$$\operatorname{var}(\Theta) = \int_0^2 \left(\theta - E[\Theta]\right)^2 f_\Theta(\theta)\,d\theta.$$

(e) The linear LMS estimator is
$$\hat\Theta = E[\Theta] + \frac{\operatorname{cov}(X, \Theta)}{\sigma_X^2}\left(X - E[X]\right).$$
We have
$$\begin{aligned}
E[X] &= \int_0^1\!\!\int_0^x \frac{2}{3}x\,d\theta\,dx + \int_1^2\!\!\int_{x-1}^x \frac{2}{3}x\,d\theta\,dx = \frac{2}{9} + 1 = \frac{11}{9},\\
E[X^2] &= \int_0^1\!\!\int_0^x \frac{2}{3}x^2\,d\theta\,dx + \int_1^2\!\!\int_{x-1}^x \frac{2}{3}x^2\,d\theta\,dx = \frac{1}{6} + \frac{14}{9} = \frac{31}{18},\\
\operatorname{var}(X) &= E[X^2] - \left(E[X]\right)^2 = \frac{31}{18} - \left(\frac{11}{9}\right)^2 = \frac{37}{162},\\
E[\Theta] &= \int_0^1\!\!\int_0^x \frac{2}{3}\theta\,d\theta\,dx + \int_1^2\!\!\int_{x-1}^x \frac{2}{3}\theta\,d\theta\,dx = \frac{1}{9} + \frac{2}{3} = \frac{7}{9},\\
E[X\Theta] &= \int_0^1\!\!\int_0^x \frac{2}{3}x\theta\,d\theta\,dx + \int_1^2\!\!\int_{x-1}^x \frac{2}{3}x\theta\,d\theta\,dx = \frac{1}{12} + \frac{19}{18} = \frac{41}{36},\\
\operatorname{cov}(X, \Theta) &= E[X\Theta] - E[X]E[\Theta] = \frac{41}{36} - \frac{11}{9}\cdot\frac{7}{9} = \frac{61}{324}.
\end{aligned}$$
Thus, the linear LMS estimator is
$$\hat\Theta = \frac{7}{9} + \frac{61/324}{37/162}\left(X - \frac{11}{9}\right) = \frac{7}{9} + \frac{61}{74}\left(X - \frac{11}{9}\right) = 0.8243X - 0.2297.$$
Its mean squared error is
$$E\left[(\Theta - \hat\Theta)^2\right] = E\left[(\Theta + 0.2297 - 0.8243X)^2\right].$$
After some calculation, we obtain the value of the mean squared error, which is approximately 0.0732. Alternatively, we can use the values of $\operatorname{var}(X)$, $\operatorname{var}(\Theta)$, and $\operatorname{cov}(X, \Theta)$ found earlier to calculate the correlation coefficient $\rho$, and then use the fact that the mean squared error is equal to $(1 - \rho^2)\operatorname{var}(\Theta)$, to arrive at the same answer.

Solution to Problem 8.17. We have
$$\operatorname{cov}(\Theta, X) = E\left[\Theta^{3/2}W\right] - E[\Theta]E[X] = E\left[\Theta^{3/2}\right]E[W] - E[\Theta]E\left[\Theta^{1/2}\right]E[W] = 0,$$
since $E[W] = 0$. Hence, the linear LMS estimator of $\Theta$ based on $X$ is simply $\hat\Theta = \mu$, and does not make use of the available observation.

Let us now consider the transformed observation $Y = X^2 = \Theta W^2$, and linear estimators of the form $\hat\Theta = aY + b$. We have
$$\begin{aligned}
E[Y] &= E[\Theta W^2] = E[\Theta]E[W^2] = \mu,\\
E[\Theta Y] &= E[\Theta^2 W^2] = E[\Theta^2]E[W^2] = \sigma^2 + \mu^2,\\
\operatorname{cov}(\Theta, Y) &= E[\Theta Y] - E[\Theta]E[Y] = (\sigma^2 + \mu^2) - \mu^2 = \sigma^2,\\
\operatorname{var}(Y) &= E[\Theta^2 W^4] - \left(E[Y]\right)^2 = (\sigma^2 + \mu^2)E[W^4] - \mu^2.
\end{aligned}$$
Thus, the linear LMS estimator of $\Theta$ based on $Y$ is of the form
$$\hat\Theta = \mu + \frac{\sigma^2}{(\sigma^2 + \mu^2)E[W^4] - \mu^2}\left(Y - \mu\right),$$
and makes effective use of the observation: since the conditional variance of $X$ increases with $\Theta$, the estimate of $\Theta$ becomes large whenever a large value of $X^2$ is observed.

Solution to Problem 8.18. (a) The conditional CDF of $X$ is given by
$$F_{X|\Theta}(x \mid \theta) = P(X \le x \mid \Theta = \theta) = P(\Theta\cos W \le x \mid \Theta = \theta) = P\left(\cos W \le \frac{x}{\theta}\right).$$
We note that the cosine function is one-to-one and decreasing over the interval $[0, \pi/2]$, so for $0 \le x \le \theta$,
$$F_{X|\Theta}(x \mid \theta) = P\left(W \ge \cos^{-1}\frac{x}{\theta}\right) = 1 - \frac{2}{\pi}\cos^{-1}\frac{x}{\theta}.$$
Differentiation yields
$$f_{X|\Theta}(x \mid \theta) = \frac{2}{\pi\sqrt{\theta^2 - x^2}}, \qquad 0 \le x \le \theta.$$
We have
$$f_{\Theta,X}(\theta, x) = f_\Theta(\theta)f_{X|\Theta}(x \mid \theta) = \frac{2}{\pi l\sqrt{\theta^2 - x^2}}, \qquad 0 \le \theta \le l,\ 0 \le x \le \theta.$$
Thus the joint PDF is nonzero over the triangular region $\{(\theta, x) \mid 0 \le \theta \le l,\ 0 \le x \le \theta\}$ or, equivalently, $\{(\theta, x) \mid 0 \le x \le l,\ x \le \theta \le l\}$. To obtain $f_X(x)$, we integrate the joint PDF over $\theta$:
$$f_X(x) = \frac{2}{\pi l}\int_x^l \frac{1}{\sqrt{\theta^2 - x^2}}\,d\theta = \frac{2}{\pi l}\log\left(\theta + \sqrt{\theta^2 - x^2}\right)\Big|_x^l = \frac{2}{\pi l}\log\left(\frac{l + \sqrt{l^2 - x^2}}{x}\right), \qquad 0 \le x \le l,$$
where we have used the integration formula in the hint. We have
$$f_{\Theta|X}(\theta \mid x) = \frac{f_{\Theta,X}(\theta, x)}{f_X(x)} = \frac{1}{\log\left(\dfrac{l + \sqrt{l^2 - x^2}}{x}\right)\sqrt{\theta^2 - x^2}}, \qquad x \le \theta \le l.$$
Thus, the LMS estimate is given by
$$E[\Theta \mid X = x] = \int_{-\infty}^{\infty}\theta f_{\Theta|X}(\theta \mid x)\,d\theta = \frac{1}{\log\left(\dfrac{l + \sqrt{l^2 - x^2}}{x}\right)}\int_x^l \frac{\theta}{\sqrt{\theta^2 - x^2}}\,d\theta = \frac{\sqrt{\theta^2 - x^2}\Big|_x^l}{\log\left(\dfrac{l + \sqrt{l^2 - x^2}}{x}\right)} = \frac{\sqrt{l^2 - x^2}}{\log\left(\dfrac{l + \sqrt{l^2 - x^2}}{x}\right)}, \qquad 0 \le x \le l.$$
It is worth noting that $\lim_{x\to 0}E[\Theta \mid X = x] = 0$ and that $\lim_{x\to l}E[\Theta \mid X = x] = l$, as one would expect.

(b) The linear LMS estimator is
$$\hat\Theta = E[\Theta] + \frac{\operatorname{cov}(\Theta, X)}{\sigma_X^2}\left(X - E[X]\right).$$
Since $\Theta$ is uniformly distributed between 0 and $l$, it follows that $E[\Theta] = l/2$. We obtain $E[X]$ and $E[X^2]$ using the fact that $\Theta$ is independent of $W$, and therefore also independent of $\cos W$ and $\cos^2 W$. We have
$$E[X] = E[\Theta\cos W] = E[\Theta]E[\cos W] = \frac{l}{2}\cdot\frac{2}{\pi}\int_0^{\pi/2}\cos w\,dw = \frac{l}{2}\cdot\frac{2}{\pi} = \frac{l}{\pi},$$
and
$$E[X^2] = E[\Theta^2\cos^2 W] = E[\Theta^2]E[\cos^2 W] = \frac{1}{l}\int_0^l \theta^2\,d\theta\cdot\frac{2}{\pi}\int_0^{\pi/2}\cos^2 w\,dw = \frac{l^2}{3}\cdot\frac{1}{\pi}\int_0^{\pi/2}(1 + \cos 2w)\,dw = \frac{l^2}{3\pi}\cdot\frac{\pi}{2} = \frac{l^2}{6}.$$
Thus,
$$\operatorname{var}(X) = \frac{l^2}{6} - \frac{l^2}{\pi^2} = \frac{l^2(\pi^2 - 6)}{6\pi^2}.$$
We also have
$$E[\Theta X] = E[\Theta^2\cos W] = E[\Theta^2]E[\cos W] = \frac{l^2}{3}\cdot\frac{2}{\pi} = \frac{2l^2}{3\pi}.$$
Hence,
$$\operatorname{cov}(\Theta, X) = \frac{2l^2}{3\pi} - \frac{l}{2}\cdot\frac{l}{\pi} = \frac{l^2}{\pi}\left(\frac{2}{3} - \frac{1}{2}\right) = \frac{l^2}{6\pi}.$$
Therefore,
$$\hat\Theta = \frac{l}{2} + \frac{l^2}{6\pi}\cdot\frac{6\pi^2}{l^2(\pi^2 - 6)}\left(X - \frac{l}{\pi}\right) = \frac{l}{2} + \frac{\pi}{\pi^2 - 6}\left(X - \frac{l}{\pi}\right).$$
The mean squared error is
$$(1 - \rho^2)\sigma_\Theta^2 = \sigma_\Theta^2 - \frac{\operatorname{cov}^2(\Theta, X)}{\sigma_X^2} = \frac{l^2}{12} - \frac{l^4}{36\pi^2}\cdot\frac{6\pi^2}{l^2(\pi^2 - 6)} = \frac{l^2}{12}\left(1 - \frac{2}{\pi^2 - 6}\right) = \frac{l^2}{12}\cdot\frac{\pi^2 - 8}{\pi^2 - 6}.$$

Solution to Problem 8.19. (a) Let $X$ be the number of detected photons. From Bayes' rule, we have
$$P(\text{transmitter is on} \mid X = k) = \frac{P(X = k \mid \text{transmitter is on})\,P(\text{transmitter is on})}{P(X = k)} = \frac{P(\Theta + N = k)\,p}{P(N = k)(1-p) + P(\Theta + N = k)\,p}.$$
The PMFs of $\Theta$ and $\Theta + N$ are
$$p_\Theta(\theta) = \frac{\lambda^\theta e^{-\lambda}}{\theta!}, \qquad p_{\Theta+N}(n) = \frac{(\lambda+\mu)^n e^{-(\lambda+\mu)}}{n!}.$$
Thus, we obtain
$$P(\text{transmitter is on} \mid X = k) = \frac{p\cdot\dfrac{(\lambda+\mu)^k e^{-(\lambda+\mu)}}{k!}}{p\cdot\dfrac{(\lambda+\mu)^k e^{-(\lambda+\mu)}}{k!} + (1-p)\cdot\dfrac{\mu^k e^{-\mu}}{k!}} = \frac{p(\lambda+\mu)^k e^{-\lambda}}{p(\lambda+\mu)^k e^{-\lambda} + (1-p)\mu^k}.$$

(b) We calculate $P(\text{transmitter is on} \mid X = k)$ and decide that the transmitter is on if and only if this probability is at least 1/2; equivalently, if and only if
$$p(\lambda+\mu)^k e^{-\lambda} \ge (1-p)\mu^k.$$

(c) Let $S$ be the number of transmitted photons, so that $S$ is equal to $\Theta$ with probability $p$, and is equal to 0 with probability $1-p$. The linear LMS estimator is
$$\hat S = E[S] + \frac{\operatorname{cov}(S, X)}{\sigma_X^2}\left(X - E[X]\right).$$
We calculate all the terms in the preceding expression. Since $\Theta$ and $N$ are independent, $S$ and $N$ are independent as well. We have
$$E[S] = pE[\Theta] = p\lambda, \qquad E[S^2] = pE[\Theta^2] = p(\lambda^2 + \lambda), \qquad \sigma_S^2 = E[S^2] - \left(E[S]\right)^2 = p(\lambda^2 + \lambda) - (p\lambda)^2.$$
It follows that
$$E[X] = E[S] + E[N] = p\lambda + \mu,$$
and
$$\sigma_X^2 = \sigma_S^2 + \sigma_N^2 = p(\lambda^2 + \lambda) - (p\lambda)^2 + \mu = (p\lambda + \mu) + p(1-p)\lambda^2.$$
Finally, we calculate $\operatorname{cov}(S, X)$:
$$\begin{aligned}
\operatorname{cov}(S, X) &= E\left[\left(S - E[S]\right)\left(X - E[X]\right)\right]\\
&= E\left[\left(S - E[S]\right)\left(S - E[S] + N - E[N]\right)\right]\\
&= E\left[\left(S - E[S]\right)^2\right] + E\left[\left(S - E[S]\right)\left(N - E[N]\right)\right]\\
&= \sigma_S^2 = p(\lambda^2 + \lambda) - (p\lambda)^2,
\end{aligned}$$
where we have used the fact that $S - E[S]$ and $N - E[N]$ are independent, and that $E\left[S - E[S]\right] = E\left[N - E[N]\right] = 0$.

CHAPTER 9

Solution to Problem 9.1. Let $X_i$ denote the random homework time for the $i$th week, $i = 1, \ldots, 5$. We have the observation vector $X = x$, where $x = (10, 14, 18, 8, 20)$. In view of the independence of the $X_i$, for $\theta \in [0, 1]$, the likelihood function is
$$f_X(x; \theta) = f_{X_1}(x_1; \theta)\cdots f_{X_5}(x_5; \theta) = \theta e^{-x_1\theta}\cdots\theta e^{-x_5\theta} = \theta^5 e^{-(x_1+\cdots+x_5)\theta} = \theta^5 e^{-70\theta}.$$
To derive the ML estimate, we set to 0 the derivative of $f_X(x; \theta)$ with respect to $\theta$, obtaining
$$\frac{d}{d\theta}\left(\theta^5 e^{-70\theta}\right) = 5\theta^4 e^{-70\theta} - 70\theta^5 e^{-70\theta} = (5 - 70\theta)\theta^4 e^{-70\theta} = 0.$$
Therefore,
$$\hat\theta = \frac{5}{70} = \frac{1}{14}.$$

Solution to Problem 9.2. (a) Let the random variable $N$ be the number of tosses until the $k$th head. The likelihood function is the Pascal PMF of order $k$:
$$p_N(n; \theta) = \binom{n-1}{k-1}\theta^k(1-\theta)^{n-k}, \qquad n = k, k+1, \ldots$$
We maximize the likelihood by setting its derivative with respect to $\theta$ to zero:
$$0 = k\binom{n-1}{k-1}(1-\theta)^{n-k}\theta^{k-1} - (n-k)\binom{n-1}{k-1}(1-\theta)^{n-k-1}\theta^k,$$
which yields the ML estimator
$$\hat\Theta_1 = \frac{k}{N}.$$
Note that $\hat\Theta_1$ is just the fraction of heads observed in $N$ tosses.

(b) In this case, $n$ is a fixed integer and $K$ is a random variable. The PMF of $K$ is binomial:
$$p_K(k; \theta) = \binom{n}{k}\theta^k(1-\theta)^{n-k}, \qquad k = 0, 1, 2, \ldots, n.$$
For given $n$ and $k$, this is a constant multiple of the PMF in part (a), so the same calculation yields the estimator
$$\hat\Theta_2 = \frac{K}{n}.$$
We observe that the ML estimator is again the fraction of heads in the observed trials. Note that although parts (a) and (b) involve different experiments and different random variables, the ML estimates obtained are similar. However, it can be shown that $\hat\Theta_2$ is unbiased [since $E[\hat\Theta_2] = E[K]/n = \theta n/n = \theta$], whereas $\hat\Theta_1$ is not [since $E[1/N] \ne 1/E[N]$].
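The bias remark at the end of Problem 9.2 is easy to observe empirically. The sketch below (an editorial addition, with assumed values $\theta = 0.3$ and $k = 5$) simulates the Pascal experiment of part (a); by Jensen's inequality, $E[k/N] > k/E[N] = \theta$, and the simulation shows the upward bias.

```python
# Simulated bias of the estimator k/N in Problem 9.2(a).
import random

theta, k, trials = 0.3, 5, 200_000
est = 0.0
for _ in range(trials):
    heads = tosses = 0
    while heads < k:
        tosses += 1
        heads += random.random() < theta
    est += k / tosses
print(est / trials, theta)   # the average is noticeably larger than 0.3
```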
Solution to Problem 9.3. (a) Let $s$ be the sum of all the ball numbers. Then, for all $i$, $E[X_i] = s/k$ and $E[Y_i] = s/\overline{k}$. We have
$$E[\hat S] = E\left[\frac{k}{n}\sum_{i=1}^{n}X_i\right] = \frac{k}{n}\sum_{i=1}^{n}E[X_i] = \frac{k}{n}\sum_{i=1}^{n}\frac{s}{k} = s,$$
so $\hat S$ is an unbiased estimator of $s$. Similarly, $E[\tilde S] = s$. Finally, let
$$L = \frac{S}{\overline{k}} = \frac{1}{N}\sum_{i=1}^{n}X_i = \frac{1}{N}\sum_{j=1}^{N}Y_j.$$
We have
$$E[L] = \sum_{j=1}^{n}E[L \mid N = j]\,p_N(j) = \sum_{j=1}^{n}E\left[\frac{1}{j}\sum_{i=1}^{j}Y_i \,\Big|\, N = j\right]p_N(j) = \sum_{j=1}^{n}E[Y_1]\,p_N(j) = E[Y_1] = \frac{s}{\overline{k}},$$
so that $E[S] = \overline{k}\,E[L] = s$, and $S$ is an unbiased estimator of $s$.

(b) We have
$$\operatorname{var}(\hat S) = \frac{k^2}{n}\operatorname{var}(X_1), \qquad \operatorname{var}(\tilde S) = \frac{\overline{k}^2}{m}\operatorname{var}(Y_1).$$
Thus,
$$\begin{aligned}
\operatorname{var}(\hat S) = \frac{k^2}{n}\operatorname{var}(X_1) &= \frac{k^2}{n}\left(pE[Y_1^2] - p^2\left(E[Y_1]\right)^2\right) = \frac{\overline{k}^2}{n}\left(\frac{1}{p}E[Y_1^2] - \left(E[Y_1]\right)^2\right)\\
&= \frac{\overline{k}^2}{n}\left(\operatorname{var}(Y_1) + \frac{1-p}{p}E[Y_1^2]\right) = \frac{\overline{k}^2}{n}\operatorname{var}(Y_1)\left(1 + \frac{r(1-p)}{p}\right) = \operatorname{var}(\tilde S)\cdot\frac{m}{n}\cdot\frac{p + r(1-p)}{p}.
\end{aligned}$$
It follows that when $m = n$,
$$\frac{\operatorname{var}(\tilde S)}{\operatorname{var}(\hat S)} = \frac{p}{p + r(1-p)}.$$
Furthermore, in order for $\operatorname{var}(\hat S) \approx \operatorname{var}(\tilde S)$, we must have
$$m \approx \frac{np}{p + r(1-p)}.$$

(c) We have
$$\operatorname{var}(S) = \operatorname{var}\left(\frac{\overline{k}}{N}\sum_{i=1}^{n}X_i\right) = \overline{k}^2\operatorname{var}\left(\frac{1}{N}\sum_{i=1}^{n}X_i\right) = \overline{k}^2\operatorname{var}\left(\frac{1}{N}\sum_{i=1}^{N}Y_i\right) = \overline{k}^2\left(E[L^2] - \left(E[L]\right)^2\right),$$
where $L$ was defined in part (a). We showed in part (a) that $E[L] = E[Y_1]$, and we will now evaluate $E[L^2]$. We have
$$\begin{aligned}
E[L^2] &= \sum_{j=1}^{n}E[L^2 \mid N = j]\,p_N(j) = \sum_{j=1}^{n}\frac{1}{j^2}E\left[\Big(\sum_{i=1}^{j}Y_i\Big)^2 \,\Big|\, N = j\right]p_N(j)\\
&= \sum_{j=1}^{n}\frac{1}{j^2}\left(jE[Y_1^2] + j(j-1)\left(E[Y_1]\right)^2\right)p_N(j)\\
&= E[Y_1^2]\sum_{j=1}^{n}\frac{1}{j}p_N(j) + \left(E[Y_1]\right)^2\sum_{j=1}^{n}\frac{j-1}{j}p_N(j)\\
&= \left(E[Y_1^2] - \left(E[Y_1]\right)^2\right)\sum_{j=1}^{n}\frac{1}{j}p_N(j) + \left(E[Y_1]\right)^2.
\end{aligned}$$
It follows that
$$\operatorname{var}(S) = \overline{k}^2\left(E[L^2] - \left(E[L]\right)^2\right) = \overline{k}^2\left(E[Y_1^2] - \left(E[Y_1]\right)^2\right)\sum_{j=1}^{n}\frac{1}{j}p_N(j) = E\left[\frac{1}{N}\right]\overline{k}^2\left(E[Y_1^2] - \left(E[Y_1]\right)^2\right).$$
Thus, we have
$$\frac{\operatorname{var}(S)}{\operatorname{var}(\hat S)} = \frac{E\left[1/N\right]\overline{k}^2\left(E[Y_1^2] - \left(E[Y_1]\right)^2\right)}{(\overline{k}^2/n)\left(\frac{1}{p}E[Y_1^2] - \left(E[Y_1]\right)^2\right)} = E\left[\frac{n}{N}\right]\cdot\frac{E[Y_1^2] - \left(E[Y_1]\right)^2}{\frac{1}{p}E[Y_1^2] - \left(E[Y_1]\right)^2} = E\left[\frac{n}{N}\right]\cdot\frac{p}{p + r(1-p)}.$$
We will show that $pE[n/N] \approx 1$ for large $n$, so that
$$\frac{\operatorname{var}(S)}{\operatorname{var}(\hat S)} \approx \frac{1}{p + r(1-p)}.$$
We show this by bounding
$$E\left[\frac{1}{N}\right] = \sum_{j=1}^{n}\frac{1}{j}p_N(j).$$
Using the identity $\frac{1}{j} = \frac{1}{j+1} + \frac{1}{j(j+1)}$ and the binomial-coefficient identity $\frac{1}{j+1}\binom{n}{j} = \frac{1}{n+1}\binom{n+1}{j+1}$, we have
$$\sum_{j=1}^{n}\frac{1}{j+1}p_N(j) = \frac{1}{(n+1)p}\sum_{j=1}^{n}\binom{n+1}{j+1}p^{j+1}(1-p)^{n-j} \le \frac{1}{(n+1)p},$$
where the last sum differs from 1 only by the probability that a binomial random variable with parameters $n+1$ and $p$ takes the value 0 or 1, which vanishes as $n \to \infty$. Similarly, using the bound $\frac{1}{j} \le \frac{3}{j+2}$ for $j \ge 1$,
$$\sum_{j=1}^{n}\frac{1}{j(j+1)}p_N(j) \le \frac{3}{(n+1)(n+2)p^2}\sum_{j=1}^{n}\binom{n+2}{j+2}p^{j+2}(1-p)^{n-j} \le \frac{3}{(n+1)(n+2)p^2}.$$
It follows that for large $n$,
$$E\left[\frac{1}{N}\right] \approx \frac{1}{(n+1)p}, \qquad\text{or}\qquad pE\left[\frac{n}{N}\right] \approx 1.$$

Solution to Problem 9.4. (a) Figure 9.1 plots a mixture of two normal distributions. Denoting $\theta = (p_1, \mu_1, \sigma_1, \ldots, p_m, \mu_m, \sigma_m)$, the PDF of each $X_i$ is
$$f_{X_i}(x_i; \theta) = \sum_{j=1}^{m}p_j\cdot\frac{1}{\sqrt{2\pi}\sigma_j}\exp\left(-\frac{(x_i - \mu_j)^2}{2\sigma_j^2}\right).$$
Using the independence assumption, the likelihood function is
$$f_{X_1,\ldots,X_n}(x_1, \ldots, x_n; \theta) = \prod_{i=1}^{n}f_{X_i}(x_i; \theta) = \prod_{i=1}^{n}\left(\sum_{j=1}^{m}p_j\cdot\frac{1}{\sqrt{2\pi}\sigma_j}\exp\left(-\frac{(x_i - \mu_j)^2}{2\sigma_j^2}\right)\right),$$
and the log-likelihood function is
$$\log f_{X_1,\ldots,X_n}(x_1, \ldots, x_n; \theta) = \sum_{i=1}^{n}\log\left(\sum_{j=1}^{m}p_j\cdot\frac{1}{\sqrt{2\pi}\sigma_j}\exp\left(-\frac{(x_i - \mu_j)^2}{2\sigma_j^2}\right)\right).$$
the solution to part (b)], the first involving µ1, the second involving µ2. Thus, we can maximize each term separately and find that the ML estimates are ˆ µ1 = ˆ µ2 = x. (d) Fix p1, . . . , pm to some positive values. Fix µ2, . . . , µm and σ2 2, . . . , σ2 m to some arbitrary (respectively, positive) values. If µ1 = x1 and σ2 1 tends to zero, the likelihood fX1(x1; θ) tends to infinity, and the likelihoods fXi(xi; θ) of the remaining points (i > 1) are bounded below by a positive number. Therefore, the overall likelihood tends to infinity. !10 !5 0 5 10 15 0 0.1 0.2 0.3 0.4 x Probability Density Function !10 !5 0 5 10 15 0 0.05 0.1 0.15 0.2 x Probability Density Function gaussian 1 gaussian 2 Mixture of Gaussians Figure 9.1: The mixture of two normal distributions with p1 = 0.7 and p2 = 0.3 in Problem 9.4. 116 Solution to Problem 9.5. (a) The PDF of the location Xi of the ith event is fXi(xi; θ) =  c(θ)θe−θxi, if m1 ≤xi ≤m2, 0, otherwise, where c(θ) is a normalization factor, c(θ) = 1 Z m2 m1 θe−θx dx = 1 e−m1θ −e−m2θ . The likelihood function is fX(x; θ) = n Y i=1 fXi(xi; θ) = n Y i=1 1 e−m1θ −e−m2θ θeθxi =  1 e−m1θ −e−m2θ n θn n Y i=1 eθxi, and the corresponding log-likelihood function is log fX(x; θ) = n X i=1 log fXi(xi; θ) = −n log (e−m1θ −e−m2θ) + n log θ + θ n X i=1 xi. (b) We plot the likelihood and log-likelihood functions in Fig. 9.2. The ML estimate is approximately 0.26. 10 !2 10 !1 10 0 10 1 0 0.5 1 1.5 x 10 !6 Likelihood 10 !2 10 !1 10 0 10 1 !10 3 !10 2 !10 1 Log Likelihood Figure 9.2: Plots of the likelihood and log-likelihood functions in Problem 9.5. 117 Solution to Problem 9.6. (a) The likelihood function for a single observation x is 1 2 · 1 √ 2π σ1 exp  −(x −µ1)2 2σ2 1  + 1 2 · 1 √ 2π σ2 exp  −(x −µ2)2 2σ2 2  , so the likelihood function is fX1,...,Xn(x1, . . . , xn; θ) = n Y i=1 2 X j=1 1 2 · 1 √ 2π σj exp  −(xi −µj)2 2σ2 j ! . (b) We plot the likelihood as a function of σ2 and µ2 in Figure 9.3. The ML estimates are found (by a fine grid/brute force optimization) to be ˆ σ2 ≈7.2 and ˆ µ2 ≈173. 0 2 4 6 8 10 100 150 200 !100 !50 0 !2 Log Likelihood Function u2 !2 u2 1 2 3 4 5 6 7 8 9 10 120 140 160 180 200 Figure 9.3 Plot of the log-likelihood and its contours as a function of σ2 and µ2. (c) We plot the likelihood as a function of µ1 and µ2 in Fig. 9.4. The ML estimates are found (by a fine grid/brute force optimization) to be ˆ µ1 ≈174 and ˆ µ2 ≈156. (d) Let Θ denote the gender of the student, with Θ = 1 for a female student and Θ = 0 for a male student. Using Bayes’ rule, we compare the posterior probabilities, P(Θ = 1 | X = x) = 1 2 · 1 √ 2πσ1 exp n −(x−µ1)2 2σ2 1 o 1 2 · 1 √ 2πσ1 exp n −(x−µ1)2 2σ2 1 o + 1 2 · 1 √ 2πσ2 exp n −(x−µ2)2 2σ2 2 o, 118 120 140 160 180 200 120 140 160 180 200 !2000 !1000 0 u1 Log Likelihood Function u2 u1 u2 130 140 150 160 170 180 190 200 140 160 180 200 Figure 9.4 Plot of the log-likelihood and its contours as a function of µ1 and µ2. and P(Θ = 0 | X = x) = 1 2 · 1 √ 2πσ2 exp n −(x−µ2)2 2σ2 2 o 1 2 · 1 √ 2πσ1 exp n −(x−µ1)2 2σ2 1 o + 1 2 · 1 √ 2πσ2 exp n −(x−µ2)2 2σ2 2 o. The MAP rule involves a comparison of the two numerators. When σ1 = σ2, it reduces to a comparison of |x −µ1| to |x −µ2|. Using the estimates in part (c), we will decide that the student is female if x < 165, and male otherwise. Solution to Problem 9.7. The PMF of Xi is pXi(x) = e−θ θx x! , x = 0, 1, . . . . The log-likelihood function is log pX(x1, . . . 
$$\log p_X(x_1, \ldots, x_n; \theta) = \sum_{i=1}^{n}\log p_{X_i}(x_i; \theta) = -n\theta + \sum_{i=1}^{n}x_i\log\theta - \sum_{i=1}^{n}\log(x_i!),$$
and to maximize it, we set its derivative to 0. We obtain
$$0 = -n + \frac{1}{\theta}\sum_{i=1}^{n}x_i,$$
which yields the estimator
$$\hat\Theta_n = \frac{1}{n}\sum_{i=1}^{n}X_i.$$
This estimator is unbiased, since $E[X_i] = \theta$, so that
$$E[\hat\Theta_n] = \frac{1}{n}\sum_{i=1}^{n}E[X_i] = \theta.$$
It is also consistent, because $\hat\Theta_n$ converges to $\theta$ in probability, by the weak law of large numbers.

Solution to Problem 9.8. The PDF of $X_i$ is
$$f_{X_i}(x) = \begin{cases} 1/\theta, & \text{if } 0 \le x \le \theta,\\ 0, & \text{otherwise.} \end{cases}$$
The likelihood function is
$$f_X(x_1, \ldots, x_n; \theta) = f_{X_1}(x_1; \theta)\cdots f_{X_n}(x_n; \theta) = \begin{cases} 1/\theta^n, & \text{if } 0 \le \max_{i=1,\ldots,n}x_i \le \theta,\\ 0, & \text{otherwise.} \end{cases}$$
We maximize the likelihood function and obtain the ML estimator as
$$\hat\Theta_n = \max_{i=1,\ldots,n}X_i.$$
It can be seen that $\hat\Theta_n$ converges in probability to $\theta$ (the upper endpoint of the interval where $X_i$ takes values); see Example 5.6. Therefore, the estimator is consistent.

To check whether $\hat\Theta_n$ is unbiased, we calculate its CDF, then its PDF (by differentiation), and then $E[\hat\Theta_n]$. We have, using the independence of the $X_i$,
$$F_{\hat\Theta_n}(x) = \begin{cases} 0, & \text{if } x < 0,\\ x^n/\theta^n, & \text{if } 0 \le x \le \theta,\\ 1, & \text{if } x > \theta, \end{cases} \qquad\text{so that}\qquad f_{\hat\Theta_n}(x) = \begin{cases} nx^{n-1}/\theta^n, & \text{if } 0 \le x \le \theta,\\ 0, & \text{otherwise.} \end{cases}$$
Hence
$$E[\hat\Theta_n] = \frac{n}{\theta^n}\int_0^\theta x\,x^{n-1}\,dx = \frac{n}{\theta^n}\left[\frac{x^{n+1}}{n+1}\right]_0^\theta = \frac{n}{\theta^n}\cdot\frac{\theta^{n+1}}{n+1} = \frac{n}{n+1}\theta.$$
Thus $\hat\Theta_n$ is not unbiased, but it is asymptotically unbiased. Some alternative estimators that are unbiased are a scaled version of the ML estimator,
$$\hat\Theta = \frac{n+1}{n}\max_{i=1,\ldots,n}X_i,$$
or one that relies on the sample mean being an unbiased estimate of $\theta/2$:
$$\hat\Theta = \frac{2}{n}\sum_{i=1}^{n}X_i.$$

Solution to Problem 9.9. The PDF of $X_i$ is
$$f_{X_i}(x_i) = \begin{cases} 1, & \text{if } \theta \le x_i \le \theta + 1,\\ 0, & \text{otherwise.} \end{cases}$$
The likelihood function is
$$f_X(x_1, \ldots, x_n; \theta) = f_{X_1}(x_1; \theta)\cdots f_{X_n}(x_n; \theta) = \begin{cases} 1, & \text{if } \theta \le \min_{i}x_i \le \max_{i}x_i \le \theta + 1,\\ 0, & \text{otherwise.} \end{cases}$$
Any value in the feasible interval
$$\left[\max_{i=1,\ldots,n}X_i - 1,\ \min_{i=1,\ldots,n}X_i\right]$$
maximizes the likelihood function and is therefore an ML estimator. Any choice of estimator within the above interval is consistent. The reason is that $\min_{i}X_i$ converges in probability to $\theta$, while $\max_{i}X_i$ converges in probability to $\theta + 1$ (cf. Example 5.6). Thus, both endpoints of the above interval converge to $\theta$.

Let us consider the estimator that chooses the midpoint
$$\hat\Theta_n = \frac{1}{2}\left(\max_{i=1,\ldots,n}X_i + \min_{i=1,\ldots,n}X_i - 1\right)$$
of the interval of ML estimates. We claim that it is unbiased. This claim can be verified purely on the basis of symmetry considerations, but nevertheless we provide a detailed calculation. We first find the CDFs of $\max_i X_i$ and $\min_i X_i$, then their PDFs (by differentiation), and then $E[\hat\Theta_n]$. The details are very similar to the ones for the preceding problem. We have, by straightforward calculation,
$$f_{\min_i X_i}(x) = \begin{cases} n(\theta + 1 - x)^{n-1}, & \text{if } \theta \le x \le \theta + 1,\\ 0, & \text{otherwise,} \end{cases} \qquad f_{\max_i X_i}(x) = \begin{cases} n(x - \theta)^{n-1}, & \text{if } \theta \le x \le \theta + 1,\\ 0, & \text{otherwise.} \end{cases}$$
Hence
$$\begin{aligned}
E\left[\min_{i=1,\ldots,n}X_i\right] &= n\int_\theta^{\theta+1}x(\theta + 1 - x)^{n-1}\,dx\\
&= -n\int_\theta^{\theta+1}(\theta + 1 - x)^n\,dx + (\theta + 1)n\int_\theta^{\theta+1}(\theta + 1 - x)^{n-1}\,dx\\
&= -n\int_0^1 x^n\,dx + (\theta + 1)n\int_0^1 x^{n-1}\,dx\\
&= -\frac{n}{n+1} + \theta + 1 = \theta + \frac{1}{n+1}.
\end{aligned}$$
Similarly,
$$E\left[\max_{i=1,\ldots,n}X_i\right] = \theta + \frac{n}{n+1},$$
and it follows that
$$E[\hat\Theta_n] = \frac{1}{2}E\left[\max_{i=1,\ldots,n}X_i + \min_{i=1,\ldots,n}X_i - 1\right] = \theta.$$
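The unbiasedness of the midpoint estimator just established can be seen in a quick simulation. The sketch below (an editorial addition, with assumed values $\theta = 2$ and $n = 5$) averages the midpoint estimator over many uniform samples.

```python
# Simulated check that the midpoint estimator of Problem 9.9 is unbiased.
import random

theta, n, trials = 2.0, 5, 200_000
total = 0.0
for _ in range(trials):
    xs = [random.uniform(theta, theta + 1) for _ in range(n)]
    total += (max(xs) + min(xs) - 1) / 2
print(total / trials, theta)   # the two values nearly coincide
```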
Solution to Problem 9.10. (a) To compute $c(\theta)$, we write
$$1 = \sum_{k=0}^{\infty}p_K(k; \theta) = \sum_{k=0}^{\infty}c(\theta)e^{-\theta k} = \frac{c(\theta)}{1 - e^{-\theta}},$$
which yields $c(\theta) = 1 - e^{-\theta}$.

(b) The PMF of $K$ is a shifted geometric distribution with parameter $p = 1 - e^{-\theta}$ (shifted by 1 to the left, so that it starts at $k = 0$). Therefore,
$$E[K] = \frac{1}{p} - 1 = \frac{1}{1 - e^{-\theta}} - 1 = \frac{e^{-\theta}}{1 - e^{-\theta}} = \frac{1}{e^\theta - 1},$$
and the variance is the same as for the geometric with parameter $p$,
$$\operatorname{var}(K) = \frac{1-p}{p^2} = \frac{e^{-\theta}}{\left(1 - e^{-\theta}\right)^2}.$$

(c) Let $K_i$ be the number of photons emitted the $i$th time that the source is triggered. The joint PMF of $K = (K_1, \ldots, K_n)$ is
$$p_K(k_1, \ldots, k_n; \theta) = c(\theta)^n\prod_{i=1}^{n}e^{-\theta k_i} = c(\theta)^n e^{-\theta s_n}, \qquad\text{where } s_n = \sum_{i=1}^{n}k_i.$$
The log-likelihood function is
$$\log p_K(k_1, \ldots, k_n; \theta) = n\log c(\theta) - \theta s_n = n\log\left(1 - e^{-\theta}\right) - \theta s_n.$$
We maximize the log-likelihood by setting to 0 the derivative with respect to $\theta$:
$$\frac{d}{d\theta}\log p_K(k_1, \ldots, k_n; \theta) = \frac{ne^{-\theta}}{1 - e^{-\theta}} - s_n = 0, \qquad\text{or}\qquad e^{-\theta} = \frac{s_n/n}{1 + s_n/n}.$$
Taking the logarithm of both sides gives the ML estimate of $\theta$,
$$\hat\theta_n = \log\left(1 + \frac{n}{s_n}\right),$$
and the ML estimate of $\psi = 1/\theta$,
$$\hat\psi_n = \frac{1}{\hat\theta_n} = \frac{1}{\log\left(1 + \dfrac{n}{s_n}\right)}.$$

(d) We verify that $\hat\Theta_n$ and $\hat\Psi_n$ are consistent estimators of $\theta$ and $\psi$, respectively. Let $S_n = K_1 + \cdots + K_n$. By the strong law of large numbers, we have
$$\frac{S_n}{n} \to E[K] = \frac{1}{e^\theta - 1}, \qquad\text{with probability 1.}$$
Hence $1 + (n/S_n)$ converges to $e^\theta$, so that
$$\hat\Theta_n = \log\left(1 + \frac{n}{S_n}\right) \to \theta,$$
and similarly, $\hat\Psi_n \to 1/\theta = \psi$. Since convergence with probability 1 implies convergence in probability, we conclude that these two estimators are consistent.

Solution to Problem 9.16. (a) We consider a model of the form $y = \theta_0 + \theta_1 x$, where $x$ is the temperature and $y$ is the electricity consumption. Using the regression formulas, we obtain
$$\hat\theta_1 = \frac{\displaystyle\sum_{i=1}^{n}(x_i - \bar x)(y_i - \bar y)}{\displaystyle\sum_{i=1}^{n}(x_i - \bar x)^2} = 0.2242, \qquad \hat\theta_0 = \bar y - \hat\theta_1\bar x = 2.1077,$$
where
$$\bar x = \frac{1}{n}\sum_{i=1}^{n}x_i = 81.4000, \qquad \bar y = \frac{1}{n}\sum_{i=1}^{n}y_i = 20.3551.$$
The linear regression model is $y = 0.2242x + 2.1077$. Figure 9.5 plots the data points and the estimated linear relation.

[Figure 9.5: Linear regression model of the relationship between temperature and electricity in Problem 9.16.]

(b) Using the estimated model with $x = 90$, we obtain
$$y = 0.2242\cdot 90 + 2.1077 = 22.2857.$$
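The regression formulas used above are short enough to spell out in code. The sketch below is an editorial addition; the original temperature/consumption data are not reproduced here, so the five $(x, y)$ pairs are hypothetical placeholders, included only so the snippet runs end to end.

```python
# The least-squares formulas of Problem 9.16 on hypothetical data.
def fit(xs, ys):
    n = len(xs)
    xbar, ybar = sum(xs) / n, sum(ys) / n
    t1 = (sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys))
          / sum((x - xbar) ** 2 for x in xs))
    return t1, ybar - t1 * xbar            # (theta1_hat, theta0_hat)

xs = [70, 75, 82, 88, 92]                  # hypothetical temperatures
ys = [17.8, 19.0, 20.5, 21.9, 22.7]        # hypothetical consumptions
theta1, theta0 = fit(xs, ys)
print(theta1, theta0)
```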
(c) This is a Bayesian hypothesis testing problem, where the two hypotheses are:

$$H_1: Y = 40.6005X - 66.5591, \qquad H_2: Y = 4.0809X^2 - 2.9948.$$

We evaluate the posterior probabilities of $H_1$ and $H_2$ given $Y_1, \ldots, Y_5$,

$$P(H_1 \mid Y_1, \ldots, Y_5) = \frac{P(H_1) \prod_{i=1}^{5} f_{Y_i}(y_i \mid H_1)}{P(H_1) \prod_{i=1}^{5} f_{Y_i}(y_i \mid H_1) + P(H_2) \prod_{i=1}^{5} f_{Y_i}(y_i \mid H_2)},$$

and

$$P(H_2 \mid Y_1, \ldots, Y_5) = \frac{P(H_2) \prod_{i=1}^{5} f_{Y_i}(y_i \mid H_2)}{P(H_1) \prod_{i=1}^{5} f_{Y_i}(y_i \mid H_1) + P(H_2) \prod_{i=1}^{5} f_{Y_i}(y_i \mid H_2)}.$$

We compare $P(H_1) \prod_{i=1}^{5} f_{Y_i}(y_i \mid H_1)$ and $P(H_2) \prod_{i=1}^{5} f_{Y_i}(y_i \mid H_2)$ by comparing their logarithms. Using $\sigma^2$ to denote the common noise variance in the two models, we have

$$\log\left( P(H_1) \prod_{i=1}^{5} f_{Y_i}(y_i \mid H_1) \right) = \log\left( \frac{1}{2} \prod_{i=1}^{5} \frac{1}{\sqrt{2\pi}\,\sigma} \exp\left( -\frac{(y_i - \theta_1 x_i - \theta_0)^2}{2\sigma^2} \right) \right) = -\sum_{i=1}^{5} \frac{(y_i - \theta_1 x_i - \theta_0)^2}{2\sigma^2} + c = -\frac{3400.7}{2\sigma^2} + c,$$

and

$$\log\left( P(H_2) \prod_{i=1}^{5} f_{Y_i}(y_i \mid H_2) \right) = \log\left( \frac{1}{2} \prod_{i=1}^{5} \frac{1}{\sqrt{2\pi}\,\sigma} \exp\left( -\frac{(y_i - \theta_1 x_i^2 - \theta_0)^2}{2\sigma^2} \right) \right) = -\sum_{i=1}^{5} \frac{(y_i - \theta_1 x_i^2 - \theta_0)^2}{2\sigma^2} + c = -\frac{52.9912}{2\sigma^2} + c,$$

where $c$ is a constant that depends only on $\sigma$ and $n$. Using the MAP rule, we select the quadratic model. Note that when $\sigma_1 = \sigma_2$ and $P(H_1) = P(H_2)$, as above, comparing the posterior probabilities is equivalent to comparing the sums of squared residuals and selecting the model for which the sum is smallest.

Solution to Problem 9.20. We have two hypotheses: the null hypothesis

$$H_0: \mu_0 = 20, \; \sigma_0 = 4,$$

which we want to test against

$$H_1: \mu_1 = 25, \; \sigma_1 = 5.$$

Let $X$ be the random variable $X = X_1 + X_2 + X_3$. We want the probability of false rejection to be

$$P(X > \gamma; H_0) = 0.05.$$

Since the mean and variance of $X$ under $H_0$ are $3\mu_0$ and $3\sigma_0^2$, respectively, it follows that

$$\frac{\gamma - 3\mu_0}{\sqrt{3}\,\sigma_0} = \Phi^{-1}(0.95) = 1.644853,$$

and hence

$$\gamma = 1.644853 \cdot \sqrt{3 \cdot 4^2} + 60 = 71.396.$$

The corresponding probability of false acceptance of $H_0$ is

$$P(X \le \gamma; H_1) = \int_{-\infty}^{\gamma} \frac{1}{\sqrt{2\pi}\,\sigma_1 \sqrt{3}}\, e^{-(x - 3\mu_1)^2 / (2 \cdot 3 \sigma_1^2)}\, dx = \int_{-\infty}^{71.396} \frac{1}{\sqrt{2\pi} \cdot 5\sqrt{3}}\, e^{-(x - 75)^2 / (2 \cdot 3 \cdot 5^2)}\, dx = \Phi\left( \frac{71.396 - 75}{5\sqrt{3}} \right) = \Phi(-0.41615) = 0.33864.$$

Solution to Problem 9.21. We have two hypotheses $H_0$ and $H_1$, under which the observation PDFs are

$$f_X(x; H_0) = \frac{1}{5\sqrt{2\pi}}\, e^{-(x-60)^2 / (2 \cdot 25)}, \qquad f_X(x; H_1) = \frac{1}{8\sqrt{2\pi}}\, e^{-(x-60)^2 / (2 \cdot 64)}.$$

(a) The probability of false rejection of $H_0$ is

$$P(X \in R; H_0) = 2\left( 1 - \Phi\left( \frac{\gamma}{5} \right) \right) = 0.1,$$

which yields $\gamma = 8.25$. The acceptance region of $H_0$ is $\{x \mid 51.75 < x < 68.25\}$, and the probability of false acceptance is

$$P(51.75 < X < 68.25; H_1) = \int_{51.75}^{68.25} \frac{1}{8\sqrt{2\pi}}\, e^{-(x-60)^2/(2 \cdot 8^2)}\, dx = 2\Phi\left( \frac{68.25 - 60}{8} \right) - 1 = 0.697.$$

Consider now the LRT. Let $L(x)$ be the likelihood ratio and $\xi$ be the critical value. We have

$$L(x) = \frac{f_X(x; H_1)}{f_X(x; H_0)} = \frac{5}{8}\, e^{\frac{39}{3200}(x-60)^2},$$

and the rejection region is

$$\left\{ x \,\middle|\, e^{\frac{39}{3200}(x-60)^2} > \frac{8\xi}{5} \right\}.$$

This is the same type of rejection region as $R = \{x \mid |x - 60| > \gamma\}$, with $\xi$ and $\gamma$ being in one-to-one correspondence. Therefore, for the same probability of false rejection, the rejection region of the LRT is also $R = \{x \mid |x - 60| > \gamma\}$.

(b) Let $\bar{X} = (X_1 + \cdots + X_n)/n$, and let the rejection region again be $R = \{\bar{x} \mid |\bar{x} - 60| > \gamma\}$. To determine $\gamma$, we set

$$P(\bar{X} \in R; H_0) = 2\left( 1 - \Phi\left( \frac{\gamma \sqrt{n}}{5} \right) \right) = 0.1,$$

which yields

$$\gamma = \frac{5\,\Phi^{-1}(0.95)}{\sqrt{n}}.$$

The acceptance region is

$$A = \left\{ \bar{x} \,\middle|\, 60 - \frac{5\,\Phi^{-1}(0.95)}{\sqrt{n}} < \bar{x} < 60 + \frac{5\,\Phi^{-1}(0.95)}{\sqrt{n}} \right\},$$

and the probability of false acceptance of $H_0$ is

$$P(\bar{X} \in A; H_1) = 2\Phi\left( \frac{5\,\Phi^{-1}(0.95)/\sqrt{n}}{8/\sqrt{n}} \right) - 1 = 2\Phi\left( \frac{5\,\Phi^{-1}(0.95)}{8} \right) - 1 = 0.697.$$

We observe that, even if the probability of false rejection is held constant, the probability of false acceptance of $H_0$ does not decrease as $n$ increases. This suggests that the chosen form of acceptance region is inappropriate for discriminating between these two hypotheses.

(c) Consider now the LRT. Let $L(x)$ be the likelihood ratio and $\xi$ be the critical value. We have

$$L(x) = \frac{f_X(x_1, \ldots, x_n; H_1)}{f_X(x_1, \ldots, x_n; H_0)} = \left( \frac{5}{8} \right)^n e^{\frac{39}{3200} \sum_{i=1}^{n} (x_i - 60)^2},$$

and the rejection region is

$$\left\{ x \,\middle|\, e^{\frac{39}{3200} \sum_{i=1}^{n} (x_i - 60)^2} > \left( \frac{8}{5} \right)^n \xi \right\}.$$
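The numbers in Problem 9.21(a) can be reproduced with a few lines of code. The following Python sketch is not part of the original solution and assumes SciPy is available; note that $5\,\Phi^{-1}(0.95) \approx 8.22$, which the text rounds to 8.25.

```python
# Threshold and error probabilities for Problem 9.21(a).
from scipy.stats import norm

gamma = 5 * norm.ppf(0.95)                  # ~8.22: gives false-rejection prob. 0.10
# Acceptance region is |x - 60| <= gamma; false acceptance under H1 (sigma = 8):
beta = norm.cdf(gamma / 8) - norm.cdf(-gamma / 8)
print(gamma, beta)                          # ~8.22, ~0.70 (text: 0.697, up to rounding)
```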
Solution to Problem 9.22. (a) We want to find $k_n$ satisfying

$$P(X \ge k_n; H_0) = \sum_{k=k_n}^{n} \binom{n}{k} \left( \frac{1}{2} \right)^n \le 0.05.$$

Assuming that $n$ is large enough, we use the normal approximation and obtain

$$P(X \ge k_n; H_0) \approx 1 - \Phi\left( \frac{k_n - \frac{1}{2} - \frac{1}{2}n}{\frac{1}{2}\sqrt{n}} \right),$$

so we have

$$\frac{k_n - \frac{1}{2} - \frac{1}{2}n}{\frac{1}{2}\sqrt{n}} = \Phi^{-1}(0.95) = 1.644853,$$

and

$$k_n = \frac{1}{2}n + \frac{1}{2} + 1.644853 \cdot \frac{1}{2}\sqrt{n} = \frac{1}{2}n + 0.822427\sqrt{n} + \frac{1}{2}.$$

(b) The probability of making a correct decision given $H_1$ should be greater than 0.95, i.e.,

$$P(X \ge k_n; H_1) = \sum_{k=k_n}^{n} \binom{n}{k} \left( \frac{3}{5} \right)^k \left( \frac{2}{5} \right)^{n-k} \ge 0.95,$$

which can be approximated by

$$P(X \ge k_n; H_1) \approx 1 - \Phi\left( \frac{k_n - \frac{1}{2} - \frac{3}{5}n}{\sqrt{\frac{3}{5} \cdot \frac{2}{5}\, n}} \right) \ge 0.95.$$

Solving the above inequality, we obtain

$$n \ge \left( 10 \left( 0.82243 + \frac{\sqrt{6}}{5} \cdot 1.644853 \right) \right)^2 = 265.12.$$

Therefore, $\hat{n} = 266$ is the smallest integer that satisfies the requirements on both false rejection and false acceptance probabilities.

(c) The likelihood ratio when $X = k$ is of the form

$$L(k) = \frac{\binom{n}{k}\, 0.6^k (1 - 0.6)^{n-k}}{\binom{n}{k}\, 0.5^k (1 - 0.5)^{n-k}} = 0.8^n \cdot 1.5^k.$$

Since $L(k)$ is monotonically increasing in $k$, the LRT rule is to reject $H_0$ if $X > \gamma$, where $\gamma$ is a positive integer. We need to guarantee that the false rejection probability is 0.05, i.e.,

$$P(X \ge \gamma; H_0) = \sum_{k=\gamma}^{n} \binom{n}{k}\, 0.5^k (1 - 0.5)^{n-k} \approx 1 - \Phi\left( \frac{\gamma - \frac{1}{2} - \frac{1}{2}n}{\frac{1}{2}\sqrt{n}} \right) = 0.05,$$

which gives $\gamma \approx 147$. Then the false acceptance probability is calculated as

$$P(X < \gamma; H_1) = \sum_{k=0}^{\gamma - 1} \binom{n}{k}\, 0.6^k (1 - 0.6)^{n-k} \approx \Phi\left( \frac{\gamma - \frac{1}{2} - \frac{3}{5}n}{\sqrt{\frac{3}{5} \cdot \frac{2}{5}\, n}} \right) \approx 0.05.$$

Solution to Problem 9.23. Let $H_0$ and $H_1$ be the hypotheses corresponding to $\lambda_0$ and $\lambda_1$, respectively. Let $X$ be the number of calls received on the given day. We have

$$p_X(k; H_0) = \frac{e^{-\lambda_0} \lambda_0^k}{k!}, \qquad p_X(k; H_1) = \frac{e^{-\lambda_1} \lambda_1^k}{k!}.$$

The likelihood ratio is

$$L(k) = \frac{p_X(k; H_1)}{p_X(k; H_0)} = e^{\lambda_0 - \lambda_1} \left( \frac{\lambda_1}{\lambda_0} \right)^k.$$

The rejection region is of the form $R = \{k \mid L(k) > \xi\}$, or by taking logarithms,

$$R = \{k \mid \log L(k) > \log \xi\} = \left\{ k \,\middle|\, \lambda_0 - \lambda_1 + k(\log \lambda_1 - \log \lambda_0) > \log \xi \right\}.$$

Assuming $\lambda_1 > \lambda_0$, we have $R = \{k \mid k > \gamma\}$, where

$$\gamma = \frac{\log \xi + \lambda_1 - \lambda_0}{\log \lambda_1 - \log \lambda_0}.$$

To determine the value of $\gamma$ for a probability of false rejection equal to $\alpha$, we must have

$$\alpha = P(k > \gamma; H_0) = 1 - F_X(\gamma; H_0),$$

where $F_X(\,\cdot\,; H_0)$ is the CDF of the Poisson with parameter $\lambda_0$.

Solution to Problem 9.24. Let $H_0$ and $H_1$ be the hypotheses corresponding to $\lambda_0$ and $\lambda_1$, respectively. Let $X = (X_1, \ldots, X_n)$ be the observation vector. We have, using the independence of $X_1, \ldots, X_n$,

$$f_X(x_1, \ldots, x_n; H_0) = \lambda_0^n\, e^{-\lambda_0 (x_1 + \cdots + x_n)}, \qquad f_X(x_1, \ldots, x_n; H_1) = \lambda_1^n\, e^{-\lambda_1 (x_1 + \cdots + x_n)},$$

for $x_1, \ldots, x_n \ge 0$. The likelihood ratio is

$$L(x) = \frac{f_X(x_1, \ldots, x_n; H_1)}{f_X(x_1, \ldots, x_n; H_0)} = \left( \frac{\lambda_1}{\lambda_0} \right)^n e^{-(\lambda_1 - \lambda_0)(x_1 + \cdots + x_n)}.$$

The rejection region is of the form $R = \{x \mid L(x) > \xi\}$, or by taking logarithms,

$$R = \{x \mid \log L(x) > \log \xi\} = \left\{ x \,\middle|\, n(\log \lambda_1 - \log \lambda_0) + (\lambda_0 - \lambda_1)(x_1 + \cdots + x_n) > \log \xi \right\}.$$

Assuming $\lambda_0 > \lambda_1$, we have $R = \{x \mid x_1 + \cdots + x_n > \gamma\}$, where

$$\gamma = \frac{n(\log \lambda_0 - \log \lambda_1) + \log \xi}{\lambda_0 - \lambda_1}.$$

To determine the value of $\gamma$ for a probability of false rejection equal to $\alpha$, we must have

$$\alpha = P(x_1 + \cdots + x_n > \gamma; H_0) = 1 - F_Y(\gamma; H_0),$$

where $F_Y(\,\cdot\,; H_0)$ is the CDF of $Y = X_1 + \cdots + X_n$, which is an $n$th order Erlang random variable with parameter $\lambda_0$.

Solution to Problem 9.25. (a) Let $\bar{X}$ denote the sample mean for $n = 10$. In order to accept $\mu = 5$, we must have

$$\frac{|\bar{x} - 5|}{1/\sqrt{n}} \le 1.96,$$

or equivalently, $\bar{x} \in [5 - 1.96/\sqrt{n},\; 5 + 1.96/\sqrt{n}]$.

(b) For $n = 10$, $\bar{X}$ is normal with mean $\mu$ and variance $1/10$. The probability of falsely accepting $\mu = 5$ when $\mu = 4$ becomes

$$P\left(5 - 1.96/\sqrt{10} \le \bar{X} \le 5 + 1.96/\sqrt{10};\; \mu = 4\right) = \Phi\left(\sqrt{10} + 1.96\right) - \Phi\left(\sqrt{10} - 1.96\right) \approx 0.114.$$

Solution to Problem 9.26. (a) We estimate the unknown mean and variance as

$$\hat{\mu} = \frac{x_1 + \cdots + x_n}{n} = \frac{8.47 + 10.91 + 10.87 + 9.46 + 10.40}{5} \approx 10.02,$$

and

$$\hat{\sigma}^2 = \frac{1}{n - 1} \sum_{i=1}^{n} (x_i - \hat{\mu})^2 \approx 1.09.$$

(b) Using the fact $t(4, 0.05) = 2.132$ (from the t-distribution tables with 4 degrees of freedom), we find that

$$\frac{|\hat{\mu} - \mu|}{\hat{\sigma}/\sqrt{n}} = \frac{|10.02 - 9|}{\sqrt{1.09}/\sqrt{5}} = 2.1859 > 2.132,$$

so we reject the hypothesis $\mu = 9$.
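The t-test of Problem 9.26 is easy to reproduce. The following Python sketch is not part of the original solution and assumes NumPy and SciPy are available.

```python
# One-sample t-test for Problem 9.26.
import numpy as np
from scipy.stats import t

x = np.array([8.47, 10.91, 10.87, 9.46, 10.40])
n = len(x)
mu_hat = x.mean()                       # ~10.02
s2 = x.var(ddof=1)                      # ~1.09 (unbiased variance estimate)
T = abs(mu_hat - 9.0) / np.sqrt(s2 / n) # ~2.186
print(T, t.ppf(0.95, df=n - 1))         # 2.186 > 2.132, so reject mu = 9
```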
Solution to Problem 9.27. Denoting by $x_1, \ldots, x_n$ and $y_1, \ldots, y_n$ the samples of life lengths on the first and second island, respectively, we have for each $i = 1, \ldots, n$,

$$x_i \sim N(\mu_X, \sigma_X^2), \qquad y_i \sim N(\mu_Y, \sigma_Y^2).$$

Let $\bar{X}$ and $\bar{Y}$ be the sample means. Using independence within each sample, we have

$$\bar{X} \sim N\left( \mu_X, \frac{\sigma_X^2}{n} \right), \qquad \bar{Y} \sim N\left( \mu_Y, \frac{\sigma_Y^2}{n} \right),$$

and using independence between $\bar{X}$ and $\bar{Y}$ we further have

$$\bar{X} - \bar{Y} \sim N\left( \mu_X - \mu_Y, \frac{\sigma_X^2}{n} + \frac{\sigma_Y^2}{n} \right).$$

To accept the hypothesis $\mu_X = \mu_Y$ at the 5% level of significance, we must have

$$\frac{|\bar{x} - \bar{y}|}{\sqrt{\frac{\sigma_X^2}{n} + \frac{\sigma_Y^2}{n}}} < 1.96.$$

Since, using the problem's data, $\bar{x} = 181$ and $\bar{y} = 177$, the expression on the left-hand side above can be calculated to be

$$\frac{|\bar{x} - \bar{y}|}{\sqrt{\frac{\sigma_X^2}{n} + \frac{\sigma_Y^2}{n}}} = \frac{|181 - 177|}{\sqrt{\frac{32}{n} + \frac{29}{n}}} \approx 1.62 < 1.96.$$

Therefore we accept the hypothesis.

Solution to Problem 9.28. Let $\theta$ be the probability that a single item produced by the machine is defective, and $K$ be the number of defective items out of $n = 600$ samples. Thus $K$ is a binomial random variable and its PMF is

$$p_K(k) = \binom{n}{k}\, \theta^k (1 - \theta)^{n-k}, \qquad k = 0, \ldots, n.$$

We have two hypotheses:

$$H_0: \theta < 0.03, \qquad H_1: \theta \ge 0.03.$$

We calculate the p-value

$$\alpha^* = P(K \ge 28; H_0) = \sum_{k=28}^{600} \binom{600}{k}\, 0.03^k (1 - 0.03)^{600-k},$$

which can be approximated by a normal distribution since $n = 600$ is large:

$$\alpha^* = P\left( \frac{K - np}{\sqrt{np(1-p)}} \ge \frac{28 - np}{\sqrt{np(1-p)}} \right) = P\left( \frac{K - 600 \cdot 0.03}{\sqrt{600 \cdot 0.03\,(1 - 0.03)}} \ge \frac{28 - 600 \cdot 0.03}{\sqrt{600 \cdot 0.03\,(1 - 0.03)}} \right) = P\left( \frac{K - 18}{\sqrt{17.46}} \ge \frac{28 - 18}{\sqrt{17.46}} \right) \approx 1 - \Phi(2.39) \approx 0.0084.$$

Since $\alpha^*$ is smaller than the 5% level of significance, there is strong evidence that the null hypothesis should be rejected.

Solution to Problem 9.29. Let $X_i$ be the number of rainy days in the $i$th year, and let $S = \sum_{i=1}^{5} X_i$, which is also a Poisson random variable, with mean $5\mu$. We have two hypotheses, $H_0$ ($\mu = 35$) and $H_1$ ($\mu \ne 35$). Given the level of significance $\alpha = 0.05$ and an observed value $s = 159$, the test would reject $H_0$ if either $P(S \ge s; H_0) \le \alpha/2$ or $P(S \le s; H_0) \le \alpha/2$. Therefore the p-value is

$$\alpha^* = 2 \cdot \min\left\{ P(S \ge 159; H_0),\; P(S \le 159; H_0) \right\} = 2 \cdot P(S \le 159; H_0) \approx 2 \cdot \Phi\left( \frac{159 - 5 \cdot 35}{\sqrt{5 \cdot 35}} \right) \approx 0.2262,$$

where we use a normal approximation to $P(S \le 159; H_0)$. The obtained p-value is above the 5% level, so the test accepts the null hypothesis.

Solution to Problem 9.30. (a) The natural rejection rule is to reject $H_0$ when the sample mean $\bar{X}$ is greater than some value $\xi$. So the probability of false rejection is

$$P(\bar{X} > \xi; H_0) = 1 - \Phi\left( \frac{\xi}{\sqrt{v/n}} \right) = 0.05,$$

which gives

$$\xi = \Phi^{-1}(0.95)\, \sqrt{\frac{v}{n}} \approx 1.16.$$

Therefore, when the observation is $\bar{X} = 0.96 < 1.16$, we accept the null hypothesis $H_0$.

(b) With $n = 5$, the critical value is

$$\xi = \Phi^{-1}(0.95)\, \sqrt{\frac{v}{n}} \approx 0.52.$$

We compute the sample mean $\bar{X} = (0.96 - 0.34 + 0.85 + 0.51 - 0.24)/5 = 0.348$, which is smaller than $\xi$. So we still accept $H_0$.

(c) Assuming that the variance is unknown, we estimate it by

$$\hat{v} = \frac{\sum_{i=1}^{n} (X_i - \bar{X})^2}{n - 1} \approx 0.3680.$$

Using the t-distribution with $n = 5$, the T value is

$$T = \frac{\bar{X} - 0}{\sqrt{\hat{v}/n}} \approx 1.2827 < 2.132 = t_{n-1,\alpha}.$$

So we still accept $H_0$.
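The computations of Problem 9.30 can be checked numerically. The following Python sketch is not part of the original solution; the value $v = 0.5$ is an assumption inferred from the stated thresholds (it reproduces both $\xi \approx 1.16$ for $n = 1$ and $\xi \approx 0.52$ for $n = 5$), and SciPy is assumed available.

```python
# Thresholds and t-statistic for Problem 9.30.
import numpy as np
from scipy.stats import norm, t

v = 0.5  # assumed known variance, inferred from the stated thresholds
x = np.array([0.96, -0.34, 0.85, 0.51, -0.24])
n = len(x)

xi = norm.ppf(0.95) * np.sqrt(v / n)    # ~0.52 for n = 5 (and ~1.16 for n = 1)
v_hat = x.var(ddof=1)                   # ~0.368, the unbiased variance estimate
T = x.mean() / np.sqrt(v_hat / n)       # ~1.28
print(xi, T, t.ppf(0.95, df=n - 1))     # T < 2.132: accept H0
```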
2478
https://www.khanacademy.org/science/how-does-the-human-body-work/x0fe8768432761c62:neural-control-and-coordination/x0fe8768432761c62:sense-organs/v/human-ear-structure-working
Published Time: Mon, 08 Sep 2025 22:39:55 GMT Khan Academy
2479
https://www.bcm.edu/healthcare/specialties/obstetrics-and-gynecology/ob-gyn-care-centers/endometriosis-center/endometriosis-treatment
Endometriosis Treatment | Baylor Medicine
Make an Appointment: Call 832-826-7500, Monday-Friday, 7 a.m.-5 p.m., or request non-urgent appointments online.

Evidence-Based Endometriosis Treatments That Bring Relief

We offer specialized care and proven treatments that reduce pain, improve fertility, and restore the health and well-being of our endometriosis patients, including excision surgery, considered the gold standard in endometriosis treatment. Treatment strategies include:

Pain medication
Hormone therapies
Surgery, using the latest minimally invasive techniques for faster, easier recovery

Our endometriosis specialists work closely with each patient to develop a treatment strategy based on their unique symptoms and personal preferences.

Medical Treatments

Our approach to treatment is conservative: we treat endometriosis patients medically whenever possible to avoid surgery. Medical treatments include over-the-counter nonsteroidal anti-inflammatory drugs (NSAIDs), prescription pain medications, and hormone therapies. Hormone therapies are used to control or prevent the patient's menstrual cycle, which can help alleviate pain, shrink endometrial tissue, and slow or prevent new growth. These suppression medications may be used before and after surgery.
Surgical Treatments

When surgery is required, the procedure may be performed robotically or laparoscopically, a minimally invasive approach, or through traditional abdominal surgery, if necessary. Surgical options for endometriosis include:

Excision surgery. The preferred treatment for endometriosis is robotic or laparoscopic excision surgery. In this highly advanced procedure, the skilled surgeon cuts out the endometriosis tissue while carefully preserving the organs and nearby structures impacted by the disease. Our board-certified, fellowship-trained minimally invasive gynecologic surgeons are sought after for their expertise in performing these meticulous procedures.

A hysterectomy (with excision of endometriosis, with or without removal of ovaries). Surgery to remove the uterus may be an option in severely painful cases of adenomyosis, where the endometriosis has grown into the uterine wall and isn't responding to nonsurgical treatments. For women of reproductive age, this is a last resort. Following the procedure, hormonal treatment may be necessary if both ovaries are also removed.

Addressing the Full Impact of Endometriosis

We partner with a broad range of specialists to ensure our patients receive the care they need for all areas of their health affected by endometriosis. Our multifaceted team for comprehensive endometriosis care includes:

Minimally invasive gynecologic surgery (MIGS), for shorter recovery times and less pain and scarring compared to traditional open surgery
Reproductive endocrinology and infertility, for evidence-based fertility treatments and hormone therapy
Urogynecology, for specialized care of urinary tract issues caused by endometriosis
Urologic and colorectal surgeons, for treatment of endometriosis involving the bowel, bladder and ureter
The Menopause Center, for specialized care of early onset menopause
Reproductive psychiatry, to lessen the pain, anxiety and depression caused by endometriosis
Pelvic floor physical therapy (PT), to strengthen pelvic muscles, improve function and reduce pelvic pain caused by endometriosis
Pediatric and adolescent gynecologists, for unique expertise in the care and treatment of young patients (under 21)

Benefits of Our Approach to Treating Endometriosis

Improved surgical outcomes, through a coordinated approach to endometriosis surgery that addresses all organs impacted by the disease
Reduced pelvic pain and stronger pelvic floor muscles, through pelvic floor physical therapy
Improved mental health, through reproductive psychiatrists experienced in the treatment of endometriosis patients
Improved sexual health, through experts in addressing sexual dysfunction caused by endometriosis, including pain with intercourse
Relief from menopause symptoms brought on by endometriosis treatment, through board-certified menopause specialists
Expert fertility services, through the Family Fertility Center, including the most advanced treatments and technologies available today to help endometriosis patients conceive

Get Immediate Help

We understand the pain of endometriosis and the urgent need for relief. Immediate openings are available. Call 832-826-7500 to schedule an appointment. To make a referral, visit the Baylor College of Medicine Obstetrics and Gynecology referral page.
2480
https://collegedunia.com/exams/questions/the-number-of-monochloro-derivatives-possible-for-683ead81e603c3f693955e7d
The number of monochloro derivatives possible for 2,2-Dimethylbutane and 2,3-Dimethylbutane are respectively

Question: The number of monochloro derivatives possible for 2,2-Dimethylbutane and 2,3-Dimethylbutane are, respectively:

(A) 3, 2
(B) 2, 3
(C) 4, 2
(D) 2, 4

AP EAPCET - 2025 (Updated On: Jun 3, 2025)

Hint: Count the chemically distinct types of hydrogen atoms to determine the number of possible substitution products.

The Correct Option is (A).

Solution and Explanation: In 2,2-dimethylbutane, symmetry leaves three distinct types of hydrogens (the nine equivalent hydrogens of the three methyl groups on C2, the two CH2 hydrogens, and the three hydrogens of the terminal methyl), which lead to 3 monochloro derivatives. In 2,3-dimethylbutane, only 2 distinct hydrogen environments are present (the twelve equivalent methyl hydrogens and the two equivalent CH hydrogens), leading to 2 monochloro derivatives.
2481
https://en.wiktionary.org/wiki/quadratic_equation
quadratic equation - Wiktionary, the free dictionary

English

Noun

quadratic equation (plural quadratic equations)

(algebra) A polynomial equation of the second degree; an equation that can be rearranged in standard form as $ax^2 + bx + c = 0$, where $x$ is an unknown value, $a$, $b$, and $c$ are known numbers, and where $a$ does not equal zero.

Coordinate terms: linear equation, cubic equation

Quotations:

2015 September 8, Katie Everson, "Why I've written a book for teenagers about taking drugs", in The Guardian: When I was seven years old at a new school, I cried all day, every day. When I was 14 years old, I cried in maths lessons. An improvement, I guess. It wasn't that I was bad at maths; I knew my way around a quadratic equation, let me tell you.

2018 November 25, Kevin McKenna, "Remote island pub for sale ... just don't expect a quiet life", in The Guardian: The arid argot of the estate agent's schedule becomes obsolete when the task is to sell a pub-restaurant at the edge of a Highland wilderness. It's like trying to describe the glory of the Lake District with a quadratic equation.

2020 February 5, Kenneth Chang, Jonathan Corum, "This Professor's 'Amazing' Trick Makes Quadratic Equations Easier", in The New York Times: The quadratic equation has frustrated math students for millenniums. [...] If, while watching the Super Bowl, you had wanted to estimate how far a pass thrown by Patrick Mahomes traveled through the air, you would have been solving a quadratic equation.

2024 April 8, Lydia Wang, Morgan Sullivan, Rachel Varina, "24 Best Hinge Prompts to Actually Help Your Matches Get to Know You", in Cosmopolitan: If there's one skill we absolutely should have been taught in school, it's how to craft the perfect dating app profile. I mean, if I can still remember the quadratic equation, surely I should have been schooled in the art of snagging a date.
But alas, our education system seems to think finding our soulmates isn't as essential as algebra.

Translations (second-degree equation):

Armenian: քառակուսի հավասարում (kʻaṙakusi havasarum)
Catalan: equació quadràtica f
Chinese (Mandarin): 二次方程 (èrcì fāngchéng)
Czech: kvadratická rovnice f
Danish: andengradsligning c
Dutch: tweedegraadsvergelijking f, vierkantsvergelijking f
Finnish: toisen asteen yhtälö
French: équation du second degré f
German: quadratische Gleichung f
Greenlandic: andengradsligningi
Hindi: द्विघात समीकरण m (dvighāt samīkraṇ)
Hungarian: másodfokú egyenlet
Italian: equazione di secondo grado f, equazione quadratica f
Japanese: 二次方程式 (niji hōteishiki)
Korean: 이차 방정식 (icha bangjeongsik)
Maori: whārite pūrua
Polish: równanie kwadratowe n
Portuguese: equação quadrática f, equação do segundo grau f
Romanian: ecuație de gradul al doilea f, ecuație pătratică f
Russian: квадра́тное уравне́ние n (kvadrátnoje uravnénije)
Spanish: ecuación cuadrática f, ecuación de segundo grado f
Tagalog: dawaking tumbasan
Thai: สมการกำลังสอง
Turkish: ikinci dereceden denklem, ikinci derece denklem
Urdu: دو ضربی مساوات f (do-zarbī musāvāt)

Further reading:

"quadratic equation, n.", in OED Online, Oxford: Oxford University Press, launched 2000.
"quadratic equation", in Merriam-Webster Online Dictionary, Springfield, Mass.: Merriam-Webster, 1996-present.
"quadratic equation", in The American Heritage Dictionary of the English Language, 5th edition, Boston, Mass.: Houghton Mifflin Harcourt, 2016.
2482
https://www.sciencedirect.com/science/article/pii/S2665944125000148
From acute tubular injury to tubular repair and chronic kidney diseases – KIM-1 as a promising biomarker for predicting renal tubular pathology - ScienceDirect

Current Research in Physiology, Volume 8, 2025, 100152

From acute tubular injury to tubular repair and chronic kidney diseases – KIM-1 as a promising biomarker for predicting renal tubular pathology

Ping L. Zhang (a), Ming-Lin Liu (b, c)

Open access, under a Creative Commons license

Abstract

Kidney Injury Molecule-1 (KIM-1) has emerged as a significant biomarker and mechanistic player in kidney pathology, particularly in acute kidney injury (AKI). Normally absent in healthy kidney proximal tubules, KIM-1 becomes upregulated specifically along the proximal tubule cells' surface in response to acute injury, reflecting the differential vulnerability of convoluted versus straight proximal tubules. Functionally, KIM-1 aids proximal tubules in clearing apoptotic cells and moderating inflammatory responses, thereby helping to prevent excessive immune activation during the early stages of injury. Clinically, KIM-1 is a sensitive, non-invasive biomarker for detecting proximal tubular injury, allowing for assessment in urine, plasma samples, and tissue biopsies in AKI. However, if tubular injury persists without repair, prolonged KIM-1 expression can drive chronic inflammatory responses and interstitial fibrosis, leading to chronic kidney disease (CKD). In addition, KIM-1's role may extend further into promoting tubular dedifferentiation, potentially contributing to renal cell carcinoma under certain conditions. Over the past two decades, KIM-1 research has reshaped our understanding of kidney pathophysiology and immunology, spanning acute injury responses to chronic disease progression. This review aims to provide an updated synthesis of recent findings, highlighting KIM-1's role across the spectrum of renal injury and repair.

Keywords: Acute kidney injury; Tubular repair; Kidney injury molecule-1; Tubular dedifferentiation; Chronic kidney disease

1. Introduction

The kidneys contain over one million functional nephrons, each of which consists of a glomerulus and its associated renal tubules. However, the glomeruli and renal tubules have a limited capacity for self-repair following injury. With significant damage to any part of the nephron, whether to the glomerulus, the renal tubules, or both, there is a substantial risk of developing interstitial fibrosis, which can progressively impair kidney function by interfering with normal blood filtration and renal tubular reabsorption, subsequently leading to chronic kidney disease (CKD).

Landmark studies led by Dr. Barry M. Brenner and colleagues examined the specific effects of hyperfiltration on glomerular health. Hyperfiltration occurs when there is increased pressure within the glomeruli.
In their studies using animal models with focal segmental glomerulosclerosis (FSGS) or diabetic nephropathy, they found that hyperfiltration within the glomeruli led to further glomerular damage (Brenner et al., 1996). This excessive filtration contributed to global glomerulosclerosis, a condition in which larger portions of the glomerulus develop scarring. The scarring, in turn, promoted interstitial fibrosis and tubular atrophy, as the affected parts of the nephron could no longer function properly. These cumulative changes cause a progressive decline in kidney function, leading to CKD and, ultimately, end-stage renal disease (ESRD).

Dr. Brenner's studies identified that angiotensin-converting enzyme (ACE) inhibitors could reduce hyperfiltration in the glomeruli (Brenner, 2002) by selectively dilating the efferent arterioles, the blood vessels that exit the glomerulus. By dilating these arterioles, ACE inhibitors lower the pressure within the glomeruli, thereby reducing hyperfiltration independently of the drug's ability to control blood pressure. This mechanism effectively reduces glomerular stress and slows the progression of nephron damage in animal models with FSGS or diabetic nephropathy. Following these promising animal studies, clinical trials demonstrated that ACE inhibitors and angiotensin-2 receptor blockers (ARBs) significantly delay the progression of CKD in patients with diabetic nephropathy and other renal diseases (Brenner et al., 2001). The trials provided evidence that inhibition of angiotensin-2 by ACE inhibitors or ARBs could slow the path to ESRD by mitigating glomerular hyperfiltration and reducing interstitial fibrosis and tubular atrophy, thus preserving kidney function. Dr. Brenner's pioneering work established that targeting nephron hyperfiltration could slow disease progression, validating ACE inhibitors and ARBs as essential treatments for CKD, particularly in patients with diabetes and other glomerular/renal vascular diseases (Taal and Brenner, 2000; Brenner, 2002).

Beyond these traditional treatments, recent advances highlight two promising drug classes for addressing glomerular hyperfiltration: sodium-glucose cotransporter-2 (SGLT2) inhibitors (Wada et al., 2024; Vallon and Verma, 2021) and glucagon-like peptide-1 (GLP-1) receptor agonists (Bjornstad et al., 2024). SGLT2 inhibitors decrease glomerular hyperfiltration by blocking sodium-glucose reabsorption in the proximal tubule, which increases sodium delivery to the macula densa and activates tubuloglomerular feedback, constricting the afferent arteriole. As a result, intraglomerular pressure decreases, protecting the kidneys from hyperfiltration-induced damage (Vallon and Verma, 2021) and slowing the progression of kidney disease (Kalantar-Zadeh et al., 2021). This mechanism of SGLT2 inhibition is independent of glycemic control (Vallon and Verma, 2021). Notably, GLP-1 receptor agonists also reduce glomerular hyperfiltration through multiple mechanisms (Bjornstad et al., 2024). The systemic effects of GLP-1 include improved glycemic control, weight loss, and blood pressure reduction, all of which contribute to a decreased renal workload (Hti Lar Seng et al., 2023). Additionally, preclinical studies suggest that GLP-1 exerts direct renal actions, including reduced afferent arteriolar tone and anti-inflammatory effects, which may provide protection against hyperfiltration-induced kidney damage (Bjornstad et al., 2024) and slow the progression of chronic kidney disease (Kalantar-Zadeh et al., 2021).
However, their exact nephroprotective mechanisms remain under investigation.

The current review shifts focus to how renal tubular injury contributes to kidney repair and CKD, particularly following acute kidney injury (AKI), a common clinical issue (Bonventre, 2010). Many AKIs result from acute tubular injury to the proximal tubules, which, because of their chief role in the active transport of reabsorbed electrolytes, are vulnerable to hypoxic insult; distal nephron tubules, by contrast, are relatively resistant to ischemic or toxic injury because of their main role in passive electrolyte and water transport (Rosen and Stillman, 2008). Traditionally, the proximal tubules are divided into three segments based on their morphologic differences in rat kidneys (Maunsbach, 1966). Segment 1 (S1) includes the main portion of the convoluted proximal tubules. Segment 2 (S2) is composed of a portion of the distal convoluted proximal tubules and straight proximal tubules, mainly in the medullary rays. Segment 3 (S3) is composed of straight proximal tubules in the outer stripe of the outer medulla. The major portion of the S2 proximal tubules is located in the medullary rays, whereas the S3 proximal tubules are located in the medulla and receive venous blood. S3 segments appear more vulnerable to ischemic insults than cortical S1 segments, which receive highly oxygenated blood (Brezis and Rosen, 1995). Since the 1970s, this classification of the proximal tubules has been widely used in animal and human studies. The S3 proximal tubules occupy a large zone in rat kidneys; it is therefore convenient to focus on this zone for ischemic analysis in rodents (Zhang et al., 2008a, Zhang et al., 2008b).

However, most morphologic studies of renal tubular injury in humans have been based on light microscopy. This limitation makes it difficult to accurately separate the convoluted proximal tubules in S1 from the distal portion of the convoluted tubules in S2. Obvious acute tubular injury (or acute tubular necrosis) is identifiable in human renal biopsies when injured proximal tubules are dilated and flattened with diminished brush borders on Periodic acid-Schiff (PAS) stained sections (Parasuraman et al., 2013). However, the morphologic changes in mild acute tubular injury are difficult to ascertain. This challenge led to the investigation of biomarkers, such as the tumor suppressor gene p53 and the proliferative marker Ki-67, which confirmed subtle acute tubular injury. In 1998, kidney injury molecule-1 (KIM-1) was discovered in studies of acute proximal tubular injury in rats (Ichimura et al., 1998). KIM-1 is typically absent in healthy kidneys but becomes specifically upregulated in injured proximal tubular cells, and such upregulation can persist until the damaged cells have completely recovered (Huo et al., 2010). Therefore, KIM-1 has emerged as a specific marker for proximal tubular injury, enabling critical advances in renal pathology through urine and serum analysis, genetic studies, and mechanistic investigations (Bonventre and Yang, 2010; Bonventre, 2014). KIM-1's functions include its role in acute tubular injury, its potential role in generating interstitial fibrosis, and its use as a urinary biomarker for the analysis of renal function. This review summarizes the morphologic expression of KIM-1 in acute tubular injury, its implications for proximal tubule physiology, its links to CKD progression, and its association with de-differentiated renal tissue and malignant transformation.
2. KIM-1 and its pathophysiological functions in renal diseases

KIM-1, also named T cell immunoglobulin mucin domain-1 (TIM-1) and hepatitis A virus cellular receptor-1 (HAVCR-1), was first identified in injured proximal tubules of rat kidneys following acute ischemic injury (Ichimura et al., 1998). KIM-1 expression can also be induced by the tubular stress of damaging compensatory hyperfiltration, which occurs in the remaining tubules after nephron loss. KIM-1 is the most highly upregulated protein in the proximal tubule of injured kidneys, but it is also expressed in other organs and tissues, including immune cells and the liver (Ichimura et al., 1998).

KIM-1 is a type I transmembrane protein with an extracellular domain, a transmembrane domain, and an intracellular domain (Ichimura et al., 1998). The extracellular domain of KIM-1 contains a mucin domain and a six-cysteine immunoglobulin-like domain, which are involved in protein-protein interactions and binding to extracellular matrix components (Ichimura et al., 1998; Bailly et al., 2002). Later studies revealed that ADAM 10/17 cleaves the extracellular domains of KIM-1, releasing a soluble form into the urine or blood (Schweigert et al., 2014; Zhang et al., 2019; Schmidt et al., 2022). Soluble KIM-1 is a promising biomarker for diagnosing various renal diseases, including acute and chronic renal diseases (Bonventre et al., 2013).

There are two populations of proximal tubular epithelium: regular tubular epithelial cells with brush borders and scattered progenitor cells without brush borders (Lazzeri et al., 2018). It is well known that hypoxia-inducible factors (HIFs) are the master transcription factors regulating the hypoxia-associated transcription response (Schodel and Ratcliffe, 2019; Shu et al., 2019). Under normal oxygen, HIF-1α is degraded. In contrast, hypoxia results in retention of HIF-1α, which activates many genes, such as erythropoietin and CD133, to deal with the hypoxic challenge. Without oxygen, the tubular progenitor cells undergo cellular proliferation with mitosis, and the remaining tubular epithelial cells undergo endoreplication cycles leading to tubular epithelial hypertrophy without mitosis (endocycle) (Lazzeri et al., 2018). In the meantime, in response to severe insults from ischemia, injured proximal tubules release apoptotic bodies with phosphatidylserine (PS) on their surface, which activate KIM-1 as a PS receptor along the proximal tubules (Ichimura et al., 2008; Yang et al., 2015; Brooks et al., 2015) (Fig. 1). KIM-1's extracellular six-cysteine immunoglobulin-like domain enables its binding to PS, a natural ligand exposed on the surface of apoptotic cells or membrane extracellular vesicles (EVs) (Zhang and Liu, 2021). KIM-1 is a type 1 transmembrane glycoprotein located along the luminal surface of proximal tubules, and it is upregulated only during acute tubular injury (Ichimura et al., 1998). Notably, KIM-1 is also a phagocytosis and scavenger receptor for sensing dying cells in health and disease (Tutunea-Fatan et al., 2024; Ichimura et al., 2008). KIM-1 plays a phagocytic role in engulfing the apoptotic bodies or EVs into the residual proximal tubules, and subsequently delivering the apoptotic bodies or EVs from the phagosomes to lysosomes for autophagy-mediated clearance (Brooks et al., 2015).
The process of phagocytosis/endocytosis and autophagy-mediated clearance serves as an immune surveillance mechanism that prevents apoptotic bodies from triggering innate inflammatory responses, thus helping avoid further kidney damage and maintaining self-tolerance in proximal tubules (Ichimura et al., 2012; Yang et al., 2015; Brooks et al., 2015; Tutunea-Fatan et al., 2024). Without clearance of dying cells or apoptotic EVs by KIM-1's protective phagocytosis, the kidneys would demonstrate more harmful over-reactive inflammation and subsequent interstitial fibrosis in various renal diseases, including CKD (Zhang and Liu, 2021; Tutunea-Fatan et al., 2024). However, Chen et al. reported that KIM-1 expressed by injured renal tubules can mediate EV uptake by recognizing PS, which participated in the amplification of tubule inflammation induced by hypoxia, leading to the development of tubulointerstitial inflammation in ischemic kidney injury (Chen et al., 2023).

[Fig. 1. Schematic illustration of proximal tubules under normal and hypoxic conditions. The top panel shows normal proximal tubules composed of progenitor cells (yellow, without brush borders) and regular epithelial cells (green, with brush borders). Oxygen permeates to individual epithelial cells and inhibits the production of hypoxia-inducible factor 1-alpha (HIF1a). The lower panel illustrates a scenario in the absence of oxygen. To repair the tubular injury, the progenitor cells undergo mitosis-related cell proliferation, leading to the growth of additional cells (yellow cells on the left side), and other epithelial cells enter endocycles for tubular epithelial hypertrophy (enlarged cells without additional cell number growth, green cells on the right side). Meanwhile, kidney injury molecule-1 (KIM-1), as a phagocytic receptor, binds to the apoptotic bodies and prevents them from leaking into the peritubular capillary for induction of inflammation.]

Interestingly, KIM-1/TIM-1 is also expressed in T cells (Angiari et al., 2014) and B cells (Bod et al., 2023), where it modulates immune and autoimmune responses. Angiari et al. reported that TIM-1 is a major P-selectin ligand with a specialized role in T cell trafficking during inflammatory responses and the induction of autoimmune disease (Angiari and Constantin, 2014; Angiari et al., 2014). In addition, Nozaki and colleagues demonstrated that endogenous KIM-1/TIM-1 can promote Th1 and Th17 nephritogenic immune responses, and that its neutralization can reduce renal injury by limiting cell-mediated injury and inflammation in experimental glomerulonephritis (Nozaki et al., 2012). Furthermore, inhibition of KIM-1/TIM-1 protects against kidney or cerebral ischemia-reperfusion injury (Rong et al., 2011; Zheng et al., 2019). Notably, a recent study demonstrated that engineered red blood cell-derived EVs equipped with KIM-1 binding peptides effectively delivered P65 and Snai1 siRNAs to the injured tubules, leading to reduced expression of P-p65 and Snai1, which inhibits renal inflammation and fibrosis in mice subjected to ischemia/reperfusion injury, thus blunting the chronic progression of ischemic AKI (Tang et al., 2021).

If an acute tubular injury fails to heal, the damaged proximal tubules become flat and shrink into a small gland appearance, called atrophic renal tubules. Interstitial fibrosis often develops around these atrophic renal tubules.
KIM-1 staining is usually found within the lumens of atrophic proximal tubules. However, such staining should not be confused with acute injury: only KIM-1 expression in non-atrophic proximal tubules should be counted as ongoing acute tubular injury. Bonventre and colleagues found that transgenic mice with constitutively activated KIM-1 expression develop more interstitial fibrosis, implying that KIM-1 may play a maladaptive role in interstitial fibrosis during CKD development (Humphreys et al., 2013; Yin et al., 2016). Therefore, KIM-1 is a multifunctional protein with complex roles in renal pathophysiology: it promotes renal repair during acute renal injury, mediates phagocytosis and endocytosis, modulates inflammatory and immune responses, and contributes to renal fibrosis, the AKI-to-CKD transition, and the progression of CKD. A better understanding of KIM-1's dual functions could help to develop innovative strategies for the diagnosis and treatment of kidney diseases.
3. KIM-1 in acute tubular injury during AKI
From the embryological point of view, glomeruli and proximal tubules are derived from the cap mesenchyme of the metanephros, whereas distal tubules and collecting ducts are derived from the ureteric bud (Moritz and Wintour, 1999); they fuse together to form the nephron system. Because the proximal tubules carry out roughly 80% of the nephron's reabsorptive activity, largely through active electrolyte transport, their high energy demand makes them vulnerable to ischemic injury, in contrast to the relative resistance of distal tubules to ischemic insults (Brezis and Rosen, 1995). Since Ichimura and colleagues first described KIM-1 in 1998 for its role in repairing acute proximal tubule damage in post-ischemic kidneys (Ichimura et al., 1998), our understanding of KIM-1 has expanded to a range of pathological conditions related to AKI.
3.1. KIM-1 expression in animal models with AKI
Ichimura et al. found that KIM-1 expression was significantly upregulated in rat kidneys following acute tubular injury, suggesting its involvement in the healing process (Ichimura et al., 1998; Bonventre and Yang, 2010). As early as 3 h after reperfusion following an ischemic insult, rat kidneys show a 7.8-fold increase in KIM-1 gene expression (Zhang et al., 2008a, Zhang et al., 2008b). Several other studies also demonstrate rapid upregulation of KIM-1 protein as early as 6 h after ischemia-reperfusion injury (Ichimura et al., 1998, 2004; Vaidya et al., 2009). In rodent ischemia-reperfusion models, KIM-1 is up-regulated in the injured proximal tubules of rat kidneys following ischemic or cytotoxic challenges (Fig. 2). The injury pattern is typically most prominent in S3 proximal tubules, followed by S2 proximal tubules, whereas S1 proximal tubules show relatively mild injury (Fig. 2). Because the S3 segment occupies a large area in the rat kidney, injured S3 proximal tubules staining positive for KIM-1 are readily identified.
Fig. 2. KIM-1 expression in a renal ischemic model. A. In the normal kidney of a rat, there is no expression of kidney injury molecule-1 (KIM-1) in any renal tubules. B-D.
After ischemia and reperfusion injury, the proximal tubules in the medullary rays (S2 segment) and the outer stripe of the outer medulla (S3 segment) show much stronger KIM-1 staining (brown) than the proximal tubules around glomeruli (S1 segment). This finding indicates that the S1 segment of proximal tubules is more resistant to an ischemic insult than the S2 and S3 segments. G – glomerulus. (Scale bars: 1 mm in A-B and 0.25 mm in C-D) (Magnifications ×100 in A-B and ×400 in C-D).
3.2. KIM-1 expression in acute tubular injury of human renal grafts
Interestingly, the AKG7 monoclonal antibody against KIM-1 detects KIM-1 overexpression only in injured human proximal tubules by immunohistochemical staining (Bailly et al., 2002). Notably, there is no KIM-1 staining in normal proximal tubules, glomeruli, distal nephron tubules, or other internal organs (Zhang et al., 2008a, Zhang et al., 2008b). Protocol biopsies from early renal transplant grafts with healthy renal parenchyma were negative for KIM-1 staining, whereas transplant biopsies with acute tubular injury or following acute cellular rejection showed KIM-1-positive injured proximal tubules (Zhang et al., 2008a, Zhang et al., 2008b). In addition, stronger KIM-1 immuno-staining correlates with increased serum creatinine levels secondary to either ischemic or rejection insults, suggesting that KIM-1 is a reliable marker for assessing renal dysfunction in transplant recipients (Zhang et al., 2008a, Zhang et al., 2008b). Interestingly, grafts with higher levels of KIM-1 in the injured proximal tubules showed better recovery over the following one and a half years (Zhang et al., 2008a, Zhang et al., 2008b). These findings imply that acutely injured proximal tubules possess a strong capacity for repair and recovery, potentially linked to the degree of KIM-1 expression.
In accordance with the Banff criteria, type 1 acute antibody-mediated rejection is characterized by an "acute tubular necrosis (ATN)-like" injury with positive donor-specific antibody, positive C4d staining, and the absence of neutrophil infiltration of peritubular capillaries or thrombotic microangiopathy. To validate this "ATN-like" injury as a true acute tubular injury, we compared KIM-1 staining across negative controls, renal transplant biopsies with ischemia-associated acute tubular injury, and biopsies diagnosed with "ATN-like" type 1 acute antibody-mediated rejection (Johnson et al., 2013). KIM-1 staining consistently identified acute tubular injury in type 1 acute antibody-mediated rejection, highlighting its crucial role in the renal dysfunction associated with this condition. Furthermore, KIM-1 is useful for differential diagnosis in renal transplant biopsies, especially in ruling out acute cellular rejection. In two patients with sickle cell disease (Wang et al., 2015), one biopsy showed positive staining for both KIM-1 and iron in the proximal tubules, implying acute tubular injury due to recurrent sickle cell nephropathy. The other demonstrated thrombotic microangiopathy with positive KIM-1 staining but negative iron staining, indicating thrombotic microangiopathy as the more likely cause of the acute tubular injury (Wang et al., 2015). Therefore, KIM-1 expression has been used as a useful biomarker to detect acute tubular injury of human renal grafts in patients with renal transplantation.
3.3. KIM-1 expression in human native kidneys
In adult kidneys, there is no KIM-1 expression in normal renal parenchyma on renal biopsy, but KIM-1 immuno-staining has been used to identify acute tubular injury secondary to nephrotoxic drugs, as well as in many types of glomerulonephritis, acute interstitial nephritis, and tubular obstruction-associated tubular injury (Han et al., 2002; van Timmeren et al., 2007). In a patient with end-stage liver disease who underwent a liver transplant and received Tacrolimus (Cosner et al., 2015), serum creatinine rose after the transplant. A renal biopsy showing KIM-1 staining of proximal tubules in the medullary rays, but not in the proximal tubules around the glomeruli, suggested acute tubular injury due to acute Tacrolimus nephrotoxicity (Cosner et al., 2015). Stopping Tacrolimus restored renal function, which has remained normal since.
Moreover, renal biopsies reveal that acute tubular injury in proximal tubules is often present in various glomerular, renal tubular, and vascular diseases. In adult patients, 80%–92% of renal biopsies are KIM-1 positive, whereas 82% of pediatric biopsies are positive for KIM-1 expression (Yin et al., 2019). In adults, a receiver operating characteristic (ROC) curve relating KIM-1 expression to serum creatinine levels shows a good area under the curve (AUC) of 0.87 (Yin et al., 2019), indicating that KIM-1 staining is a "good" predictor of acute renal tubular injury as reflected by serum creatinine. In the pediatric population, KIM-1 also correlates well with serum creatinine, with a fair AUC of 0.74 (Yin et al., 2019) (a toy computation illustrating such an AUC is sketched below). In addition, we find that a higher KIM-1/serum creatinine ratio is associated with better renal functional recovery, implying that stronger KIM-1 responses may represent greater proximal tubule repair activity.
On the other hand, KIM-1 staining often correlates well with other biomarkers of acute proximal tubule injury. Since PAS staining, a marker of brush borders, is diminished in acute tubular injury (Parasuraman et al., 2013), dual staining reveals a reciprocal relationship between reduced PAS staining and upregulated KIM-1 staining (Yin et al., 2019). As KIM-1 is a phagocytic receptor, its early upregulation often coincides with increased expression of CD68, a lysosomal biomarker, suggesting preserved proximal tubule phagocytic capacity. In severe acute tubular injury with strong KIM-1 expression in the proximal tubules, CD68 expression decreases, implying a potential loss of proximal tubular repair capacity (unpublished observation). Several mechanisms have been proposed for how injured renal tubules are repaired following an acute tubular injury. It is believed that either intratubular progenitor cells or residual tubular epithelial cells play a critical role in repairing the damaged renal tubules by generating new epithelial cells (Humphreys et al., 2008; Lazzeri et al., 2019). CD133, a progenitor cell marker normally expressed in the parietal epithelium and in scattered proximal tubular cells (Lasagni and Romagnani, 2010; Romagnani, 2011), is co-expressed with KIM-1 in injured proximal tubules during acute tubular injury, indicating the activation of multiple repair and regenerative mechanisms (Zhang and Hafron, 2014).
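As an aside on the AUC values quoted above, the short sketch below shows how such a figure is computed from semiquantitative staining scores and a binary outcome. This is a minimal illustration with entirely synthetic data: the 0–3 scoring scale, cohort size, and score distributions are invented and are not drawn from the cited studies.

```python
# Toy illustration (synthetic data, not study data): computing an ROC AUC
# between hypothetical semiquantitative KIM-1 staining scores (0-3 scale)
# and a binary label for elevated serum creatinine.
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)

# Invented staining scores: 20 biopsies with and 20 without elevated creatinine.
injured = rng.normal(2.2, 0.6, 20).clip(0, 3)
uninjured = rng.normal(0.8, 0.6, 20).clip(0, 3)

scores = np.concatenate([injured, uninjured])
labels = np.concatenate([np.ones(20), np.zeros(20)])  # 1 = elevated creatinine

print(f"toy AUC: {roc_auc_score(labels, scores):.2f}")
```

An AUC near 0.87, as reported for the adult cohort, would mean that KIM-1 staining ranks an injured biopsy above an uninjured one in roughly 87% of randomly chosen pairs.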
Furthermore, KIM-1 staining has been used to understand the acute tubular injury patterns of different renal diseases. In acute calcium phosphate nephropathy with calcium phosphate deposition in the distal nephron tubules, KIM-1 staining highlights acute injury in proximal tubules that themselves contain no calcium phosphate, indicating secondary acute tubular injury due to distal tubular obstruction (Hayek et al., 2013). In contrast, polarizable calcium oxalate nephropathy shows KIM-1 staining in proximal tubules containing calcium oxalate crystals, suggesting direct obstruction as the potential cause of acute tubular injury (Hayek et al., 2013). In acute tubular injury due to ischemia, KIM-1 expression is seen predominantly in the medullary rays. However, in renal biopsies with thrombotic microangiopathy, KIM-1 staining is predominantly in the convoluted proximal tubules around the glomeruli, reflecting the immediate ischemic insult caused by thrombosed glomeruli (unpublished data).
3.4. KIM-1 expression in human autopsy kidneys
Using full sections of kidneys from adult autopsy cases, the striking finding is that KIM-1 staining highlights more ischemia-related acute tubular injury in the straight proximal tubules located in the medullary rays and the outer stripe of the outer medulla than in the convoluted proximal tubules (Fig. 3A–C) (Yin et al., 2018). By light microscopy, proximal tubules can be divided into three zones based on their vulnerability to ischemic insults (Fig. 3D): Zone 1 (convoluted proximal tubules), Zone 2 (straight proximal tubules in the medullary rays), and Zone 3 (straight proximal tubules in the outer stripe of the outer medulla, below the arcuate artery) (Yin et al., 2018). This concept is also partially supported by our biopsy study of Tacrolimus nephrotoxicity, in which Zone 2 proximal tubules showed more KIM-1 expression than Zone 1 proximal tubules (Cosner et al., 2015). Similarly, KIM-1 staining is useful in identifying acute tubular injury in pediatric autopsy kidneys (Yin et al., 2018). KIM-1 staining is essentially unaffected by postmortem autolysis, indicating the durability of the KIM-1 protein. This durability in autopsy kidneys supports the reliable measurement of this biomarker in urine for assessing renal dysfunction in animal models and human studies (Zhou et al., 2008; Vaidya et al., 2009; Hsu et al., 2017; Park et al., 2017).
Fig. 3. KIM-1 expression in human autopsy kidneys. On full sections of human kidney, KIM-1 immuno-staining in the modified Zone 1 (Z1, proximal tubules around glomeruli) (A) is weaker than that in modified Zone 2 (Z2, proximal tubules in the medullary rays) (B) and modified Zone 3 (Z3, proximal tubules in the outer stripe of the outer medulla) (C), compatible with the ischemic renal injury model in rats (Fig. 2, using the conventional concept of S1 to S3 proximal tubules). Schematic illustration of Z1 to Z3, highlighting the vulnerability of Z2 and Z3 compared to Z1 (D). Note: Z1, Z2, and Z3 of proximal tubules are modified from the previous S1-S3 proximal tubular system to allow easy light microscopic identification of the different zones (see text for details). AA – arcuate artery, G – glomerulus, CPT – convoluted proximal tubules, SPT – straight proximal tubules, LH – loop of Henle, DT – distal tubule, CD – collecting duct. (Scale bars: 0.25 mm in A-C) (Magnification ×400 in A-C).
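The zonal reading described above lends itself to a simple tallying scheme. The sketch below is hypothetical, not a validated algorithm: it classifies the dominant injury pattern from per-zone counts of KIM-1-positive tubules, following the observations above that ischemic-type injury favors Z2/Z3 whereas thrombotic microangiopathy favors Z1; the function name and cutoffs are invented for illustration.

```python
# Hypothetical helper (invented cutoffs, not a validated algorithm): classify
# the dominant KIM-1 injury pattern from per-zone counts of positive tubules.
# Z1 = convoluted proximal tubules, Z2 = medullary rays, Z3 = outer stripe
# of the outer medulla, following the zoning described above.
from dataclasses import dataclass

@dataclass
class ZoneCounts:
    z1: int  # KIM-1-positive tubules around glomeruli
    z2: int  # KIM-1-positive tubules in medullary rays
    z3: int  # KIM-1-positive tubules in the outer stripe of the outer medulla

def dominant_pattern(c: ZoneCounts) -> str:
    total = c.z1 + c.z2 + c.z3
    if total == 0:
        return "no KIM-1-positive tubules: no ongoing acute injury detected"
    if (c.z2 + c.z3) > 2 * c.z1:  # arbitrary illustrative cutoff
        return "Z2/Z3-dominant: compatible with ischemic-type injury"
    if c.z1 > (c.z2 + c.z3):
        return "Z1-dominant: compatible with thrombotic microangiopathy-type injury"
    return "mixed zonal distribution"

print(dominant_pattern(ZoneCounts(z1=3, z2=18, z3=22)))  # ischemic-type example
```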
Since the beginning of 2020, coronavirus disease 2019 (COVID-19) has infected billions of people and caused millions of deaths worldwide. SARS-CoV-2 is known to cause lung infection with subsequent infection of other organs, including the kidneys, through viral interaction with the cell surface receptor angiotensin-converting enzyme 2 and subsequent endocytosis (Luan et al., 2020; Zhou et al., 2020; Tan et al., 2004). Among our autopsy cases, two patients died of COVID-19-associated complications. The kidneys of both patients demonstrated extensive positive KIM-1 staining despite normal pre-mortem serum creatinine levels, implying that serum creatinine can underestimate renal dysfunction (Fig. 4A and B) (Zhang et al., 2020). Direct SARS-CoV-2 infection of renal tissue has been reported to cause acute tubular injury and collapsing focal segmental glomerulosclerosis (Su et al., 2020; Farkash et al., 2020; Kissling et al., 2020; Braun et al., 2020; Puelles et al., 2020). However, several recent studies report no definite SARS-CoV-2 detected by RNA in situ hybridization in renal biopsies or autopsy kidneys from COVID-19-positive patients (Wu et al., 2020; Sharma et al., 2020; Santoriello et al., 2020; Kudose et al., 2020).
Fig. 4. Morphologic changes by routine hematoxylin/eosin staining and KIM-1 expression in autopsy kidneys from patients with COVID-19 infection. A-B. Morphologic changes and KIM-1 expression in adult kidneys (autopsy). Although this 53-year-old Asian man with COVID-19 had a pre-mortem serum creatinine level of 0.63 mg/dl, light microscopy showed dilated proximal tubules (A) and diffuse, strong KIM-1 staining (brown) in proximal tubules, consistent with moderate acute tubular injury (B). C-D. Morphologic changes and KIM-1 expression in pediatric kidneys (autopsy). A 5-year-old African-American girl with COVID-19 infection had a pre-mortem serum creatinine of 0.17 mg/dl, but light microscopy revealed flattened proximal tubular epithelium with cytoplasmic vacuolization (C) and diffuse, moderate KIM-1 expression (brown) along the luminal surface of proximal tubules (D), consistent with mild acute tubular injury. (Scale bars: 0.25 mm in A-D) (Magnifications ×400 in A-D).
3.5. Discrepancies in acute kidney injury studies between humans and animals
In recent years, multiple review articles have indicated that animal models of human acute kidney injury (AKI) have not generated specific therapies that benefit the disease in humans, in terms of preventing its occurrence, ameliorating its severity, hastening its recovery, or delaying a potential transformation from AKI to CKD (Nath, 2015; de Caestecker et al., 2015; Agarwal et al., 2016; Liu et al., 2017). There are several possible explanations for the discrepancy between human and animal studies. The first is that rodents and humans have different renal anatomic structures with different renal responses to ischemic injury. However, KIM-1 up-regulation after ischemic injury appears to follow similar patterns in animal models (Fig. 2) and human kidneys (Fig. 3), both showing more prominent KIM-1 expression in Zone 2 (S2 segment) and Zone 3 (S3 segment) than in Zone 1 (S1 segment). The second possibility is that the design and statistical analysis of these studies may be suboptimal and require more careful consideration and scrutiny.
A third possibility is that disease progression differs between animal models and injured human kidneys. As the population ages, many elderly patients with AKI have underlying kidney disease due to hypertension, diabetes, or both. Once AKI develops on top of pre-existing kidney disease, the chance of renal recovery is low. In contrast, the animals used to generate AKI models are usually young and healthy, and their kidneys have a robust recovery capacity in response to a variety of treatments. Therefore, the different scenarios of disease progression between animal models and humans may be the key reason some preclinical AKI studies have failed to meet their goals. Nevertheless, KIM-1 remains a valuable biomarker for studying acute tubular injury in both human diseases and animal models.
4. KIM-1 in the transition from AKI to CKD and the development of CKD
In addition to its role in AKI, emerging evidence indicates that KIM-1 is also involved in various pathophysiological conditions in CKD. Increasing evidence indicates that AKI and CKD are interconnected conditions with overlapping pathophysiological mechanisms (Zhang et al., 2024). AKI frequently progresses to CKD due to inadequate repair mechanisms, ongoing inflammatory and immune responses, and fibrosis, with both conditions sharing common signaling pathways involving cell death, immune responses, and the accumulation of extracellular matrix (ECM) (Zhang et al., 2024).
4.1. KIM-1 in the transition from AKI to CKD
KIM-1 is typically absent from healthy kidneys; upon acute kidney injury, it is initially upregulated and exerts anti-inflammatory and protective functions by facilitating the clearance of damaged and apoptotic cells. However, persistent upregulation of KIM-1 expression following AKI is associated with the development of CKD (Ko et al., 2010), and prolonged overexpression of KIM-1 contributes to inflammatory damage and interstitial fibrosis of the kidneys (Humphreys et al., 2013). Xu and colleagues reported that KIM-1 expression was upregulated in the proximal tubules of mice with AKI on day 1 after renal ischemia-reperfusion and remained at substantially higher levels in the kidneys on days 7 and 14 (Xu et al., 2022). This initial acute tubular injury with KIM-1 upregulation was followed by increased expression of vascular cell adhesion molecule 1 (VCAM1) and a second wave of immune activation with infiltration of T cells and neutrophils, as well as proximal tubule cell loss, tubule atrophy, and the development of CKD by day 30 (Xu et al., 2022). In another study, Holderied et al. investigated the impact of varying durations of unilateral ischemia/reperfusion injury to identify the critical threshold beyond which renal tissue damage becomes irreversible, finding a "point of no return" at an ischemia time of 35 min (Holderied et al., 2020). Prolonged ischemia of 35 or 45 min resulted in persistent upregulation of inflammation, injury, cell death, and fibrosis markers, including KIM-1, leading to atrophy of the ischemic kidney and compensatory hypertrophy of the contralateral kidney (Holderied et al., 2020). These findings indicate a potential transition from acute to chronic kidney disease (Holderied et al., 2020). Therefore, KIM-1 serves as a valuable biomarker for AKI and CKD, as well as for the transition between these pathological conditions (Ko et al., 2010; Zhang et al., 2024).
By monitoring KIM-1 levels, clinicians may assess kidney injury severity, identify patients at risk for developing CKD, and implement timely interventions.
4.2. KIM-1 in chronic kidney diseases
Notably, elevated circulating levels of KIM-1 have been linked to both acute and chronic kidney damage (Sabbisetti et al., 2014), and elevated levels of KIM-1 in the blood or urine are often associated with the progression of CKD. In the Boston Kidney Biopsy Cohort and Chronic Renal Insufficiency Cohort studies, patients with diabetic nephropathy, glomerulopathies, and tubulointerstitial disease had significantly higher plasma levels of KIM-1, indicating a prognostic value of plasma KIM-1 across a spectrum of kidney diseases (Schmidt et al., 2022). Plasma KIM-1 may therefore serve as a tool for the non-invasive assessment of kidney tubular injury, and higher plasma KIM-1 levels are independently associated with progression to kidney failure (Schmidt et al., 2022). Sabbisetti and colleagues found that KIM-1 levels were elevated in both blood and urine in mice with unilateral ureteral obstruction, even when plasma creatinine levels remained unchanged, suggesting that KIM-1 may be a more sensitive biomarker of kidney damage than traditional markers such as creatinine (Sabbisetti et al., 2014).
Lupus nephritis is a renal complication of systemic lupus erythematosus (SLE), a prototypic autoimmune disease in which the immune system attacks the body's own organs and tissues, including the kidneys. Lupus nephritis causes permanent kidney damage and gradually leads to CKD, a major cause of death in patients with SLE. Nozaki and colleagues found that urinary KIM-1 levels can be used to screen for active lupus nephritis (LN) and to estimate tubular KIM-1 tissue expression in renal biopsies, predicting renal damage, ongoing glomerulonephritis, tubulointerstitial inflammation, and tubular atrophy in patients with LN (Nozaki et al., 2014). Nozaki et al. also found that urinary KIM-1 correlates with LN disease activity and renal histopathology, serving as a predictive biomarker for monitoring treatment responses in patients with LN (Nozaki et al., 2023). In addition, Ding et al. demonstrated that urinary KIM-1, neutrophil gelatinase-associated lipocalin (NGAL), and monocyte chemoattractant protein-1 (MCP-1) were each associated with kidney injury, and that a combination of these three biomarkers showed increased power in predicting tubulointerstitial lesions and renal outcomes in patients with LN (Ding et al., 2018). Furthermore, the Renal Activity Index for Lupus (RAIL), a composite biomarker incorporating urinary KIM-1, NGAL, MCP-1, adiponectin, hemopexin, and ceruloplasmin, is a valuable and accurate non-invasive tool for assessing LN activity and monitoring the response to LN therapy (Cody et al., 2023). In addition, KIM-1 contributes to kidney damage during CKD development by promoting inflammation, fibrosis, ECM deposition, and epithelial-to-mesenchymal transition (EMT) (Huang et al., 2023). Thus, it serves as a potent biomarker for the non-invasive assessment of kidney tubular injury and disease activity during CKD development and for monitoring therapeutic responses during LN therapy (Nozaki et al., 2014; Schmidt et al., 2022; Cody et al., 2023). Additionally, KIM-1 may play a role in regulating proteinuria (Karmakova et al., 2021), a common symptom of CKD.
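To make the idea of a composite panel such as the three-marker combination of Ding et al. or the RAIL index concrete, the sketch below fits a logistic model that collapses several urinary markers into one score. Everything here is schematic: the data are synthetic and the weights are invented, not the published RAIL coefficients.

```python
# Schematic sketch (synthetic data, invented weights): combining several
# urinary biomarkers into one composite risk score via logistic regression,
# in the spirit of the KIM-1/NGAL/MCP-1 panel and RAIL index discussed above.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)

# Synthetic cohort: columns = log-transformed urinary KIM-1, NGAL, MCP-1.
n = 200
X = rng.normal(size=(n, 3))
# Synthetic outcome (1 = active tubulointerstitial lesions), generated from
# arbitrary weights; real weights would be fitted on actual cohort data.
logit = 1.2 * X[:, 0] + 0.8 * X[:, 1] + 0.6 * X[:, 2] - 0.3
y = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)

model = LogisticRegression().fit(X, y)
composite = model.predict_proba(X)[:, 1]  # one composite score per patient
print("fitted weights:", model.coef_.round(2))
```

A single fitted score of this kind is what allows a multi-marker panel to outperform any individual biomarker, as reported for the three-marker combination above.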
All of the above indicates the potential of KIM-1, either alone or in combination with other biomarkers, to serve as a valuable biomarker for the diagnosis, prognosis, and monitoring of CKD progression and therapeutics. A comprehensive understanding of the functions of KIM-1 can help in developing new therapeutic strategies to slow the progression of CKD and improve patient outcomes. Accordingly, recent research efforts have shifted toward developing KIM-1 inhibitors, such as TW-37, a small-molecule inhibitor that can block KIM-1-mediated uptake of palmitic acid-albumin in vivo in a mouse kidney injury model (Mori et al., 2021). This inhibitor has shown promise in alleviating renal inflammation and fibrosis, ultimately delaying the progression of CKD (Mori et al., 2021).
5. KIM-1 expression during renal tubular dedifferentiation to renal neoplasm
There is a significant risk of developing various renal cell carcinomas in end-stage kidney disease (Tickoo et al., 2006; El-Zaatari and Truong, 2022; Al-Othman et al., 2024). In addition to clear cell renal cell carcinoma (RCC) and papillary RCC, acquired cystic kidney disease-associated RCC and clear cell papillary RCC can be seen as well (Tickoo et al., 2006; Al-Othman et al., 2024). During embryonic development of the kidney, a critical change is the mesenchymal-to-epithelial transition (MET). When developed kidneys are injured, there is a trend toward EMT for tubular repair (Zeisberg and Kalluri, 2004). The EMT process can lead to aberrant pathways in renal tubules, contributing to tumorigenesis (Chuang et al., 2008). This may explain why patients with chronic kidney disease have a higher risk of developing RCC than the general population (Denton et al., 2002). From the pathological point of view, normal proximal tubules are negative for the mesenchymal marker vimentin, but injured proximal tubules and RCCs derived from proximal tubules, including clear cell RCC and papillary RCC, stain positive for vimentin, indicating that they have undergone an EMT process.
KIM-1 is up-regulated in clear cell RCC and papillary RCC but stains negatively in renal neoplasms arising from distal nephron tubules, namely chromophobe RCC and oncocytoma (Han et al., 2005; Lin et al., 2007; Zhang et al., 2019). KIM-1 is not expressed in most other types of carcinoma, except for a small percentage of colonic carcinomas and 40% of clear cell carcinomas of the ovary (Lin et al., 2007). Colonic carcinomas may share a mucinous component that overlaps with the extracellular mucin domain of KIM-1 in clear cell RCC. Ovarian clear cell carcinoma may retain an overlapping KIM-1 component from its mesonephric origin, analogous to clear cell RCC, which is of metanephric origin. The DNA sequences of KIM-1 in clear cell RCC and papillary RCC remain intact (Zhang et al., 2014), implying that KIM-1 may have certain functions in RCC. Furthermore, KIM-1 and the phagocytic biomarker CD68 are co-expressed in clear cell RCC and papillary RCC but not in chromophobe RCC and oncocytoma, implying that KIM-1 may play a role in the phagocytosis of apoptotic debris in RCC cells as well (Zhang et al., 2014). This potential scavenging role of KIM-1 may contribute to a "self-cleaning" of tumor cells that promotes tumor spread. Owing to its upregulation in clear cell and papillary RCC, KIM-1 has been used as a valuable urinary biomarker to detect RCC (Morrissey et al., 2011; Zhang et al., 2014; Mijuskovic et al., 2018).
Several studies have shown that urinary KIM-1 assays appear very useful for the early detection of RCC, mainly because clear cell RCC and papillary RCC together represent the vast majority (approximately 90%) of RCCs (Zhang et al., 2014; Morrissey et al., 2011; Mijuskovic et al., 2018). In 2018, a multicenter study using plasma samples from the European Prospective Investigation into Cancer and Nutrition (EPIC) confirmed that positive KIM-1 serology correlates with the detection of RCC by imaging, and that higher KIM-1 serology values are associated with poorer RCC outcomes (Scelo et al., 2018). Recently, another international study based on two independent cohorts, the WHO/International Agency for Research on Cancer (IARC) K2 multinational prospective study and the Johns Hopkins Brady Urological Institute Biorepository, demonstrated that plasma KIM-1 was associated with malignant renal pathology, worse metastasis-free survival, and an increased risk of death in patients with RCC (Xu et al., 2024). Therefore, the accumulated clinical evidence suggests that KIM-1 could be a valuable biomarker for screening RCC in the future.
6. Conclusions
KIM-1, initially linked to acute tubular injury during AKI, has increasingly been shown to play roles in the AKI-to-CKD transition and in CKD development. Renal pathology using KIM-1 staining helps identify injured proximal tubules, particularly in the medullary rays and the outer stripe of the outer medulla, and reveals a complex immune response involving injured proximal tubules, interstitial inflammatory reaction, and interstitial fibrosis, leading to CKD development. The role of KIM-1 in the AKI-to-CKD transition and CKD development deserves extensive investigation in the future. Furthermore, KIM-1 is valuable for differentiating RCC subtypes, as clear cell RCC and papillary RCC arise from proximal tubules, whereas chromophobe RCC arises from distal nephron tubules. This capability raises the question of whether the existing RCC classification needs to be revised to reflect these different tubular origins. KIM-1 immuno-staining is also useful in confirming metastasis of either clear cell RCC or papillary RCC. Finally, urinary and serologic KIM-1 analysis shows promise as a non-invasive biomarker for screening early RCC and for confirmation before invasive procedures. In summary, KIM-1 is a specific and sensitive biomarker for identifying and monitoring acute tubular injury, the AKI-to-CKD transition, and CKD development, as well as for screening and confirming RCC.
CRediT authorship contribution statement
Ping L. Zhang: conceived the concepts and designed the study, drafted the first manuscript, made critical revisions, and finalized the article. Ming-Lin Liu: conceived the concepts and designed the study, drafted the first manuscript, made critical revisions, and finalized the article. All authors critically contributed important intellectual content to the manuscript.
Declaration of competing interest
The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.
Acknowledgment
Author PLZ thanks his former mentor Dr. Barry M.
Brenner (Emeritus Director of the Renal Division, Brigham and Women's Hospital, Boston) for his teaching, inspiration, and pioneering role in understanding how controlling glomerular hyperfiltration can delay the progression of CKD in animal models and humans. Author PLZ also appreciates the long-term collaboration of Dr. Joseph V. Bonventre, Renal Division, Brigham and Women's Hospital, Boston, on the KIM-1 studies, and congratulates him on receiving the Homer W. Smith Award of the American Society of Nephrology on October 26, 2024, mainly for his outstanding contributions to understanding acute kidney injury and the repair and regeneration of kidney tissue. The authors also thank Dr. Olaf Kroneman, Division of Nephrology, Corewell Health (East), Royal Oak, Michigan, for his critical review of this manuscript. This work was supported by the Lupus Research Alliance (416805) and NIH R21AI144838 (to MLL).
Data availability
No data was used for the research described in the article.
References
A. Agarwal, Z. Dong, R. Harris, P. Murray, S.M. Parikh, M.H. Rosner, J.A. Kellum, C. Ronco, Acute Dialysis Quality Initiative XIII Working Group. Cellular and molecular mechanisms of AKI. J. Am. Soc. Nephrol., 27 (2016), pp. 1288-1299.
Y. Al-Othman, S.I. Daraiseh, J.D. Schwartz, O. Kroneman, K. Putchakayala, M. Elzieny, C.A. Thorburn, S.R. Cohn, H.D. Kanaan, D.S. Bedi, C.A. Lamb, Z.H. Qu, J.M. Hafron, P.L. Zhang. Malignant tumors identified in adult polycystic kidney disease can be derived from both proximal tubular and distal tubular origins. Ann. Clin. Lab. Sci., 54 (2024), pp. 371-377.
S. Angiari, G. Constantin. Regulation of T cell trafficking by the T cell immunoglobulin and mucin domain 1 glycoprotein. Trends Mol. Med., 20 (2014), pp. 675-684.
S. Angiari, T. Donnarumma, B. Rossi, S. Dusi, E. Pietronigro, E. Zenaro, V. Della Bianca, L. Toffali, G. Piacentino, S. Budui, P. Rennert, S. Xiao, C. Laudanna, J.M. Casasnovas, V.K. Kuchroo, G. Constantin. TIM-1 glycoprotein binds the adhesion receptor P-selectin and mediates T cell trafficking during inflammation and autoimmunity. Immunity, 40 (2014), pp. 542-553.
V. Bailly, Z. Zhang, W. Meier, R. Cate, M. Sanicola, J.V. Bonventre. Shedding of kidney injury molecule-1, a putative adhesion protein involved in renal regeneration. J. Biol. Chem., 277 (2002), pp. 39739-39748.
P. Bjornstad, S.A. Arslanian, T.S. Hannon, P.S. Zeitler, J.L. Francis, A.M. Curtis, I. Turfanda, D.A. Cox. Dulaglutide and glomerular hyperfiltration, proteinuria, and albuminuria in youth with type 2 diabetes: post hoc analysis of the AWARD-PEDS study. Diabetes Care, 47 (2024), pp. 1617-1621.
L. Bod, Y.C. Kye, J. Shi, E. Torlai Triglia, A. Schnell, J. Fessler, S.M. Ostrowski, M.Y. Von-Franque, J.R. Kuchroo, R.M. Barilla, S. Zaghouani, E. Christian, T.M. Delorey, K. Mohib, S. Xiao, N. Slingerland, C.J. Giuliano, O. Ashenberg, Z. Li, D.M. Rothstein, D.E. Fisher, O. Rozenblatt-Rosen, A.H. Sharpe, F.J. Quintana, L. Apetoh, A. Regev, V.K. Kuchroo. B-cell-specific checkpoint molecules that regulate anti-tumour immunity. Nature, 619 (2023), pp. 348-356.
J.V. Bonventre. Pathophysiology of AKI: injury and normal and abnormal repair. Contrib. Nephrol., 165 (2010), pp. 9-17.
J.V. Bonventre. Kidney injury molecule-1: a translational journey. Trans. Am. Clin. Climatol. Assoc., 125 (2014), pp. 293-299 (discussion).
J.V. Bonventre, L. Yang. Kidney injury molecule-1. Curr. Opin. Crit. Care, 16 (2010), pp. 556-561.
J.V. Bonventre, D. Basile, K.D. Liu, D. McKay, B.A. Molitoris, K.A. Nath, T.L. Nickolas, M.D. Okusa, P.M. Palevsky, R. Schnellmann, K. Rys-Sikora, P.L. Kimmel, R.A. Star, Kidney Research National Dialogue. AKI: a path forward. Clin. J. Am. Soc. Nephrol., 8 (2013), pp. 1606-1608.
F. Braun, M. Lutgehetmann, S. Pfefferle, M.N. Wong, A. Carsten, M.T. Lindenmeyer, D. Norz, F. Heinrich, K. Meissner, D. Wichmann, S. Kluge, O. Gross, K. Pueschel, A.S. Schroder, C. Edler, M. Aepfelbacher, V.G. Puelles, T.B. Huber. SARS-CoV-2 renal tropism associates with acute kidney injury. Lancet, 396 (2020), pp. 597-598.
B.M. Brenner. Remission of renal disease: recounting the challenge, acquiring the goal. J. Clin. Investig., 110 (2002), pp. 1753-1758.
B.M. Brenner, E.V. Lawler, H.S. Mackenzie. The hyperfiltration theory: a paradigm shift in nephrology. Kidney Int., 49 (1996), pp. 1774-1777.
B.M. Brenner, M.E. Cooper, D. de Zeeuw, W.F. Keane, W.E. Mitch, H.H. Parving, G. Remuzzi, S.M. Snapinn, Z. Zhang, S. Shahinfar, RENAAL Study Investigators. Effects of losartan on renal and cardiovascular outcomes in patients with type 2 diabetes and nephropathy. N. Engl. J. Med., 345 (2001), pp. 861-869.
M. Brezis, S. Rosen. Hypoxia of the renal medulla: its implications for disease. N. Engl. J. Med., 332 (1995), pp. 647-655.
C.R. Brooks, M.Y. Yeung, Y.S. Brooks, H. Chen, T. Ichimura, J.M. Henderson, J.V. Bonventre. KIM-1/TIM-1-mediated phagocytosis links ATG5/ULK1-dependent clearance of apoptotic cells to antigen presentation. EMBO J., 34 (2015), pp. 2441-2464.
J. Chen, T.T. Tang, J.Y. Cao, Z.L. Li, X. Zhong, Y. Wen, A.R. Shen, B.C. Liu, L.L. Lv. KIM-1 augments hypoxia-induced tubulointerstitial inflammation through uptake of small extracellular vesicles by tubular epithelial cells. Mol. Ther., 31 (2023), pp. 1437-1450.
M.J. Chuang, K.H. Sun, S.J. Tang, M.W. Deng, Y.H. Wu, J.S. Sung, T.L. Cha, G.H. Sun. Tumor-derived tumor necrosis factor-alpha promotes progression and epithelial-mesenchymal transition in renal cell carcinoma cells. Cancer Sci., 99 (2008), pp. 905-913.
E.M. Cody, S.E. Wenderfer, K.E. Sullivan, A.H.J. Kim, W. Figg, H. Ghumman, T. Qiu, B. Huang, P. Devarajan, H.I. Brunner. Urine biomarker score captures response to induction therapy with lupus nephritis. Pediatr. Nephrol., 38 (2023), pp. 2679-2688.
D. Cosner, X. Zeng, P.L. Zhang. Proximal tubular injury in medullary rays is an early sign of acute Tacrolimus nephrotoxicity. J. Transplant., 2015 (2015), Article 142521.
M. de Caestecker, B.D. Humphreys, K.D. Liu, W.H. Fissell, J. Cerda, T.D. Nolin, D. Askenazi, G. Mour, F.E. Harrell Jr., N. Pullen, M.D. Okusa, S. Faubel, ASN AKI Advisory Group. Bridging translation by improving preclinical study design in AKI. J. Am. Soc. Nephrol., 26 (2015), pp. 2905-2916.
M.D. Denton, C.C. Magee, C. Ovuworie, S. Mauiyyedi, M. Pascual, R.B. Colvin, A.B. Cosimi, N. Tolkoff-Rubin. Prevalence of renal cell carcinoma in patients with ESRD pre-transplantation: a pathologic analysis. Kidney Int., 61 (2002), pp. 2201-2209.
Y. Ding, L.M. Nie, Y. Pang, W.J. Wu, Y. Tan, F. Yu, M.H. Zhao. Composite urinary biomarkers to predict pathological tubulointerstitial lesions in lupus nephritis. Lupus, 27 (2018), pp. 1778-1789.
Z.M. El-Zaatari, L.D. Truong. Renal cell carcinoma in end-stage renal disease: a review and update. Biomedicines, 10 (2022).
E.A. Farkash, A.M. Wilson, J.M. Jentzen. Ultrastructural evidence for direct renal infection with SARS-CoV-2. J. Am. Soc. Nephrol., 31 (8) (2020), pp. 1683-1687.
W.K. Han, V. Bailly, R. Abichandani, R. Thadhani, J.V. Bonventre. Kidney injury molecule-1 (KIM-1): a novel biomarker for human renal proximal tubule injury. Kidney Int., 62 (2002), pp. 237-244.
W.K. Han, A. Alinani, C.L. Wu, D. Michaelson, M. Loda, F.J. McGovern, R. Thadhani, J.V. Bonventre. Human kidney injury molecule-1 is a tissue and urinary tumor marker of renal cell carcinoma. J. Am. Soc. Nephrol., 16 (2005), pp. 1126-1134.
S. Hayek, R. Parasuraman, H.S. Desai, D. Samarapungavan, W. Li, S.C. Wolforth, G.H. Reddy, S.R. Cohn, L.L. Rocher, F. Dumler, M.T. Rooney, P.L. Zhang. Primary cilia metaplasia in renal transplant biopsies with acute tubular injury. Ultrastruct. Pathol., 37 (2013), pp. 159-163.
A. Holderied, F. Kraft, J.A. Marschner, M. Weidenbusch, H.J. Anders. "Point of no return" in unilateral renal ischemia reperfusion injury in mice. J. Biomed. Sci., 27 (2020), p. 34.
C.Y. Hsu, D. Xie, S.S. Waikar, J.V. Bonventre, X. Zhang, V. Sabbisetti, T.E. Mifflin, J. Coresh, C.J. Diamantidis, J. He, C.M. Lora, E.R. Miller, R.G. Nelson, A.O. Ojo, M. Rahman, J.R. Schelling, F.P. Wilson, P.L. Kimmel, H.I. Feldman, R.S. Vasan, K.D. Liu, CRIC Study Investigators, CKD Biomarkers Consortium. Urine biomarkers of tubular injury do not improve on the clinical model predicting chronic kidney disease progression. Kidney Int., 91 (2017), pp. 196-203.
N.S. Hti Lar Seng, P. Lohana, S. Chandra, B. Jim. The fatty kidney and beyond: a silent epidemic. Am. J. Med., 136 (2023), pp. 965-974.
R. Huang, P. Fu, L. Ma. Kidney fibrosis: from mechanisms to therapeutic medicines. Signal Transduct. Targeted Ther., 8 (2023), p. 129.
B.D. Humphreys, M.T. Valerius, A. Kobayashi, J.W. Mugford, S. Soeung, J.S. Duffield, A.P. McMahon, J.V. Bonventre. Intrinsic epithelial cells repair the kidney after injury. Cell Stem Cell, 2 (2008), pp. 284-291.
B.D. Humphreys, F. Xu, V. Sabbisetti, I. Grgic, S. Movahedi Naini, N. Wang, G. Chen, S. Xiao, D. Patel, J.M. Henderson, T. Ichimura, S. Mou, S. Soeung, A.P. McMahon, V.K. Kuchroo, J.V. Bonventre. Chronic epithelial kidney injury molecule-1 expression causes murine kidney fibrosis. J. Clin. Investig., 123 (2013), pp. 4023-4035.
W. Huo, K. Zhang, Z. Nie, Q. Li, F. Jin. Kidney injury molecule-1 (KIM-1): a novel kidney-specific injury molecule playing potential double-edged functions in kidney injury. Transplant. Rev., 24 (2010), pp. 143-146.
T. Ichimura, J.V. Bonventre, V. Bailly, H. Wei, C.A. Hession, R.L. Cate, M. Sanicola. Kidney injury molecule-1 (KIM-1), a putative epithelial cell adhesion molecule containing a novel immunoglobulin domain, is up-regulated in renal cells after injury. J. Biol. Chem., 273 (1998), pp. 4135-4142.
T. Ichimura, C.C. Hung, S.A. Yang, J.L. Stevens, J.V. Bonventre. Kidney injury molecule-1: a tissue and urinary biomarker for nephrotoxicant-induced renal injury. Am. J. Physiol. Ren. Physiol., 286 (2004), pp. F552-F563.
T. Ichimura, E.J. Asseldonk, B.D. Humphreys, L. Gunaratnam, J.S. Duffield, J.V. Bonventre. Kidney injury molecule-1 is a phosphatidylserine receptor that confers a phagocytic phenotype on epithelial cells. J. Clin. Investig., 118 (2008), pp. 1657-1668.
T. Ichimura, C.R. Brooks, J.V. Bonventre. Kim-1/Tim-1 and immune cells: shifting sands. Kidney Int., 81 (2012), pp. 809-811.
R.K. Johnson, D. Sarmarapungavan, R.K. Parasuraman, G. Maine, M.T. Rooney, S.C. Wolforth, G.H. Reddy, S.R. Cohn, F. Dumler, L.L. Rocher, W. Li, P.L. Zhang. Acute tubular injury is an important component in type I acute antibody-mediated rejection. Transplant. Proc., 45 (2013), pp. 3262-3268.
K. Kalantar-Zadeh, T.H. Jafar, D. Nitsch, B.L. Neuen, V. Perkovic. Chronic kidney disease. Lancet, 398 (2021), pp. 786-802.
T.A. Karmakova, N.S. Sergeeva, K.Y. Kanukoev, B.Y. Alekseev, A.D. Kaprin. Kidney injury molecule 1 (KIM-1): a multifunctional glycoprotein and biological marker (review). Sovrem. Tekhnologii Med., 13 (2021), pp. 64-78.
S. Kissling, S. Rotman, C. Gerber, M. Halfron, F. Lamoth, S. Sadallah, F. Fakhouri. Collapsing glomerulopathy in a COVID-19 patient. Kidney Int. (2020), 10.1016/j.kint.2020.04.006.
G.J. Ko, D.N. Grigoryev, D. Linfert, H.R. Jang, T. Watkins, C. Cheadle, L. Racusen, H. Rabb. Transcriptional analysis of kidneys during repair from AKI reveals possible roles for NGAL and KIM-1 as biomarkers of AKI-to-CKD transition. Am. J. Physiol. Ren. Physiol., 298 (2010), pp. F1472-F1483.
S. Kudose, I. Batal, D. Santoriello, K. Xu, J. Barasch, Y. Peleg, P. Canetta, L.E. Ratner, M. Marasa, A.G. Gharavi, M.B. Stokes, G.S. Markowitz, V.D. D'Agati. Kidney biopsy findings in patients with COVID-19. J. Am. Soc. Nephrol., 31 (2020), pp. 1959-1968.
L. Lasagni, P. Romagnani. Glomerular epithelial stem cells: the good, the bad, and the ugly. J. Am. Soc. Nephrol., 21 (2010), pp. 1612-1619.
E. Lazzeri, M.L. Angelotti, A. Peired, C. Conte, J.A. Marschner, L. Maggi, B. Mazzinghi, D. Lombardi, M.E. Melica, S. Nardi, E. Ronconi, A. Sisti, G. Antonelli, F. Becherucci, L. De Chiara, R.R. Guevara, A. Burger, B. Schaefer, F. Annunziato, H.J. Anders, L. Lasagni, P. Romagnani. Endocycle-related tubular cell hypertrophy and progenitor proliferation recover renal function after acute kidney injury. Nat. Commun., 9 (2018), p. 1344.
E. Lazzeri, M.L. Angelotti, C. Conte, H.J. Anders, P. Romagnani. Surviving acute organ failure: cell polyploidization and progenitor proliferation. Trends Mol. Med., 25 (2019), pp. 366-381.
F. Lin, P.L. Zhang, X.J. Yang, J. Shi, T. Blasick, W.K. Han, H.L. Wang, S.S. Shen, B.T. Teh, J.V. Bonventre. Human kidney injury molecule-1 (hKIM-1): a useful immunohistochemical marker for diagnosing renal cell carcinoma and ovarian clear cell carcinoma. Am. J. Surg. Pathol., 31 (2007), pp. 371-381.
K.D. Liu, B.D. Humphreys, Z.H. Endre. The ten barriers for translation of animal data on AKI to the clinical setting. Intensive Care Med., 43 (2017), pp. 898-900.
J. Luan, Y. Lu, X. Jin, L. Zhang. Spike protein recognition of mammalian ACE2 predicts the host range and an optimized ACE2 for SARS-CoV-2 infection. Biochem. Biophys. Res. Commun., 526 (2020), pp. 165-169.
A.B. Maunsbach. Observations on the segmentation of the proximal tubule in the rat kidney. Comparison of results from phase contrast, fluorescence and electron microscopy. J. Ultrastruct. Res., 16 (1966), pp. 239-258.
M. Mijuskovic, I. Stanojevic, N. Milovic, S. Cerovic, D. Petrovic, D. Maksic, B. Kovacevic, T. Andjelic, P. Aleksic, B. Terzic, M. Djukic, D. Vojvodic. Tissue and urinary KIM-1 relate to tumor characteristics in patients with clear renal cell carcinoma. Int. Urol. Nephrol., 50 (2018), pp. 63-70.
Y. Mori, A.K. Ajay, J.H. Chang, S. Mou, H. Zhao, S. Kishi, J. Li, C.R. Brooks, S. Xiao, H.M. Woo, V.S. Sabbisetti, S.C. Palmer, P. Galichon, L. Li, J.M. Henderson, V.K. Kuchroo, J. Hawkins, T. Ichimura, J.V. Bonventre. KIM-1 mediates fatty acid uptake by renal tubular cells to promote progressive diabetic kidney disease. Cell Metab., 33 (2021), pp. 1042-1061.e7.
K.M. Moritz, E.M. Wintour. Functional development of the meso- and metanephros. Pediatr. Nephrol., 13 (1999), pp. 171-178.
J.J. Morrissey, A.N. London, M.C. Lambert, E.D. Kharasch. Sensitivity and specificity of urinary neutrophil gelatinase-associated lipocalin and kidney injury molecule-1 for the diagnosis of renal cell carcinoma. Am. J. Nephrol., 34 (2011), pp. 391-398.
K.A. Nath. Models of human AKI: resemblance, reproducibility, and return on investment. J. Am. Soc. Nephrol., 26 (2015), pp. 2891-2893.
Y. Nozaki, D.J. Nikolic-Paterson, S.L. Snelgrove, H. Akiba, H. Yagita, S.R. Holdsworth, A.R. Kitching. Endogenous Tim-1 (Kim-1) promotes T-cell responses and cell-mediated injury in experimental crescentic glomerulonephritis. Kidney Int., 81 (2012), pp. 844-855.
Y. Nozaki, K. Kinoshita, T. Yano, T. Shiga, S. Hino, K. Niki, K. Kishimoto, M. Funauchi, I. Matsumura. Estimation of kidney injury molecule-1 (Kim-1) in patients with lupus nephritis. Lupus, 23 (2014), pp. 769-777.
Y. Nozaki, T. Shiga, C. Ashida, D. Tomita, T. Itami, K. Kishimoto, K. Kinoshita, I. Matsumura. U-KIM-1 as a predictor of treatment response in lupus nephritis. Lupus, 32 (2023), pp. 54-62.
R. Parasuraman, S.C. Wolforth, W.N. Wiesend, F. Dumler, M.T. Rooney, W. Li, P.L. Zhang. Contribution of polyclonal free light chain deposition to tubular injury. Am. J. Nephrol., 38 (2013), pp. 465-474.
M. Park, C.Y. Hsu, A.S. Go, H.I. Feldman, D. Xie, X. Zhang, T. Mifflin, S.S. Waikar, V.S. Sabbisetti, J.V. Bonventre, J. Coresh, R.G. Nelson, P.L. Kimmel, J.W. Kusek, M. Rahman, J.R. Schelling, R.S. Vasan, K.D. Liu, Chronic Renal Insufficiency Cohort (CRIC) Study Investigators, CKD Biomarkers Consortium. Urine kidney injury biomarkers and risks of cardiovascular disease events and all-cause death: the CRIC study. Clin. J. Am. Soc. Nephrol., 12 (2017), pp. 761-771.
V.G. Puelles, M. Lutgehetmann, M.T. Lindenmeyer, J.P. Sperhake, M.N. Wong, L. Allweiss, S. Chilla, A. Heinemann, N. Wanner, S. Liu, F. Braun, S. Lu, S. Pfefferle, A.S. Schroder, C. Edler, O. Gross, M. Glatzel, D. Wichmann, T. Wiech, S. Kluge, K. Pueschel, M. Aepfelbacher, T.B. Huber. Multiorgan and renal tropism of SARS-CoV-2. N. Engl. J. Med., 383 (2020), pp. 590-592.
P. Romagnani. Family portrait: renal progenitor of Bowman's capsule and its tubular brothers. Am. J. Pathol., 178 (2011), pp. 490-493.
S. Rong, J.K. Park, T. Kirsch, H. Yagita, H. Akiba, O. Boenisch, H. Haller, N. Najafian, A. Habicht. The TIM-1:TIM-4 pathway enhances renal ischemia-reperfusion injury. J. Am. Soc. Nephrol., 22 (2011), pp. 484-495.
S. Rosen, I.E. Stillman. Acute tubular necrosis is a syndrome of physiologic and pathologic dissociation. J. Am. Soc. Nephrol., 19 (2008), pp. 871-875.
V.S. Sabbisetti, S.S. Waikar, D.J. Antoine, A. Smiles, C. Wang, A. Ravisankar, K. Ito, S. Sharma, S. Ramadesikan, M. Lee, R. Briskin, P.L. De Jager, T.T. Ngo, M. Radlinski, J.W. Dear, K.B. Park, R. Betensky, A.S. Krolewski, J.V. Bonventre. Blood kidney injury molecule-1 is a biomarker of acute and chronic kidney injury and predicts progression to ESRD in type I diabetes. J. Am. Soc. Nephrol., 25 (2014), pp. 2177-2186.
D. Santoriello, P. Khairallah, A.S. Bomback, K. Xu, S. Kudose, I. Batal, J. Barasch, J. Radhakrishnan, V. D'Agati, G. Markowitz. Postmortem kidney pathology findings in patients with COVID-19. J. Am. Soc. Nephrol., 31 (2020), pp. 2158-2167.
G. Scelo, D.C. Muller, E. Riboli, M. Johansson, A.J. Cross, P. Vineis, K.K. Tsilidis, P. Brennan, H. Boeing, P.H.M. Peeters, R.C.H. Vermeulen, K. Overvad, H.B. Bueno-de-Mesquita, G. Severi, V. Perduca, M. Kvaskoff, A. Trichopoulou, C. La Vecchia, A. Karakatsani, D. Palli, S. Sieri, S. Panico, E. Weiderpass, T.M. Sandanger, T.H. Nost, A. Agudo, J.R. Quiros, M. Rodriguez-Barranco, M.D. Chirlaque, T.J. Key, P. Khanna, J.V. Bonventre, V.S. Sabbisetti, R.S. Bhatt. KIM-1 as a blood-based marker for early detection of kidney cancer: a prospective nested case-control study. Clin. Cancer Res., 24 (2018), pp. 5594-5601.
I.M. Schmidt, A. Srivastava, V. Sabbisetti, G.M. McMahon, J. He, J. Chen, J.W. Kusek, J. Taliercio, A.C. Ricardo, C.Y. Hsu, P.L. Kimmel, K.D. Liu, T.E. Mifflin, R.G. Nelson, R.S. Vasan, D. Xie, X. Zhang, R. Palsson, I.E. Stillman, H.G. Rennke, H.I. Feldman, J.V. Bonventre, S.S. Waikar, Chronic Kidney Disease Biomarkers Consortium and the CRIC Study Investigators. Plasma kidney injury molecule 1 in CKD: findings from the Boston Kidney Biopsy Cohort and CRIC studies. Am. J. Kidney Dis., 79 (2022), pp. 231-243.e1.
J. Schodel, P.J. Ratcliffe. Mechanisms of hypoxia signalling: new implications for nephrology. Nat. Rev. Nephrol., 15 (2019), pp. 641-659.
O. Schweigert, C. Dewitz, K. Moller-Hackbarth, A. Trad, C. Garbers, S. Rose-John, J. Scheller. Soluble T cell immunoglobulin and mucin domain (TIM)-1 and -4 generated by A Disintegrin and Metalloprotease (ADAM)-10 and -17 bind to phosphatidylserine. Biochim. Biophys. Acta, 1843 (2014), pp. 275-287.
P. Sharma, N.N. Uppal, R. Wanchoo, H.H. Shah, Y. Yang, R. Parikh, Y. Khanin, V. Madireddy, C.P. Larsen, K.D. Jhaveri, V. Bijol, Northwell Nephrology COVID-19 Research Consortium. COVID-19-associated kidney injury: a case series of kidney biopsy findings. J. Am. Soc. Nephrol., 31 (2020), pp. 1948-1958.
S. Shu, Y. Wang, M. Zheng, Z. Liu, J. Cai, C. Tang, Z. Dong. Hypoxia and hypoxia-inducible factors in kidney injury and repair. Cells, 8 (2019).
H. Su, M. Yang, C. Wan, L.X. Yi, F. Tang, H.Y. Zhu, F. Yi, H.C. Yang, A.B. Fogo, X. Nie, C. Zhang. Renal histopathological analysis of 26 postmortem findings of patients with COVID-19 in China. Kidney Int., 98 (2020), pp. 219-227.
M.W. Taal, B.M. Brenner. Renoprotective benefits of RAS inhibition: from ACEI to angiotensin II antagonists. Kidney Int., 57 (2000), pp. 1803-1817.
Y.J. Tan, E. Teng, S. Shen, T.H. Tan, P.Y. Goh, B.C. Fielding, E.E. Ooi, H.C. Tan, S.G. Lim, W. Hong. A novel severe acute respiratory syndrome coronavirus protein, U274, is transported to the cell surface and undergoes endocytosis. J. Virol., 78 (2004), pp. 6723-6734.
T.T. Tang, B. Wang, Z.L. Li, Y. Wen, S.T. Feng, M. Wu, D. Liu, J.Y. Cao, Q. Yin, D. Yin, Y.Q. Fu, Y.M. Gao, Z.Y. Ding, J.Y. Qian, Q.L. Wu, L.L. Lv, B.C. Liu. Kim-1 targeted extracellular vesicles: a new therapeutic platform for RNAi to treat AKI. J. Am. Soc. Nephrol., 32 (2021), pp. 2467-2483.
S.K. Tickoo, M.N. dePeralta-Venturina, L.R. Harik, H.D. Worcester, M.E. Salama, A.N. Young, H. Moch, M.B. Amin. Spectrum of epithelial neoplasms in end-stage renal disease: an experience from 66 tumor-bearing kidneys with emphasis on histologic patterns distinct from those in sporadic adult renal neoplasia. Am. J. Surg. Pathol., 30 (2006), pp. 141-153.
E. Tutunea-Fatan, S. Arumugarajah, R.S. Suri, C.R. Edgar, I. Hon, J.D. Dikeakos, L. Gunaratnam. Sensing dying cells in health and disease: the importance of kidney injury molecule-1. J. Am. Soc. Nephrol., 35 (2024), pp. 795-808.
V.S. Vaidya, G.M. Ford, S.S. Waikar, Y. Wang, M.B. Clement, V. Ramirez, W.E. Glaab, S.P. Troth, F.D. Sistare, W.C. Prozialeck, J.R. Edwards, N.A. Bobadilla, S.C. Mefferd, J.V. Bonventre. A rapid urine test for early detection of kidney injury. Kidney Int., 76 (2009), pp. 108-114.
V. Vallon, S. Verma. Effects of SGLT2 inhibitors on kidney and cardiovascular function. Annu. Rev. Physiol., 83 (2021), pp. 503-528.
M.M. van Timmeren, M.C. van den Heuvel, V. Bailly, S.J. Bakker, H. van Goor, C.A. Stegeman. Tubular kidney injury molecule-1 (KIM-1) in human renal disease. J. Pathol., 212 (2007), pp. 209-217.
Y. Wada, K. Kidokoro, M. Kondo, A. Tokuyama, H. Kadoya, H. Nagasu, E. Kanda, T. Sasaki, D.Z.I. Cherney, N. Kashihara. Evaluation of glomerular hemodynamic changes by sodium-glucose-transporter 2 inhibition in type 2 diabetic rats using in vivo imaging. Kidney Int., 106 (2024), pp. 408-418.
Y. Wang, M. Doshi, S. Khan, W. Li, P.L. Zhang. Utility of iron staining in identifying the cause of renal allograft dysfunction in patients with sickle cell disease. Case Rep. Transplant., 2015 (2015), Article 528792.
H. Wu, C.P. Larsen, C.F. Hernandez-Arroyo, M.M.B. Mohamed, T. Caza, M. Sharshir, A. Chughtai, L. Xie, J.M. Gimenez, T.A. Sandow, M.A. Lusco, H. Yang, E. Acheampong, I.A. Rosales, R.B. Colvin, A.B. Fogo, J.C.Q. Velez. AKI and collapsing glomerulopathy associated with COVID-19 and APOL1 high-risk genotype. J. Am. Soc. Nephrol., 31 (2020), pp. 1688-1695.
L. Xu, J. Guo, D.G. Moledina, L.G. Cantley. Immune-mediated tubule atrophy promotes acute kidney injury to chronic kidney disease transition. Nat. Commun., 13 (2022), p. 4892.
W. Xu, V. Gaborieau, S.M. Niman, A. Mukeria, X. Liu, K.P. Maremanda, A. Takakura, D. Zaridze, M.L. Freedman, W. Xie, D.F. McDermott, T.K. Choueiri, P.J. Catalano, V. Sabbisetti, J.V. Bonventre, P.M. Pierorazio, N. Singla, P. Brennan, R.S. Bhatt. Plasma kidney injury molecule-1 for preoperative prediction of renal cell carcinoma versus benign renal masses, and association with clinical outcomes. J. Clin. Oncol., 42 (2024), pp. 2691-2701.
L. Yang, C.R. Brooks, S. Xiao, V. Sabbisetti, M.Y. Yeung, L.L. Hsiao, T. Ichimura, V. Kuchroo, J.V. Bonventre. KIM-1-mediated phagocytosis reduces acute injury to the kidney. J. Clin. Investig., 125 (2015), pp. 1620-1636.
W. Yin, S.M. Naini, G. Chen, D.M. Hentschel, B.D. Humphreys, J.V. Bonventre. Mammalian target of rapamycin mediates kidney injury molecule 1-dependent tubule injury in a surrogate model. J. Am. Soc. Nephrol., 27 (2016), pp. 1943-1957.
W. Yin, P.L. Zhang, J.K. Macknis, F. Lin, J.V. Bonventre. Kidney injury molecule-1 identifies antemortem injury in postmortem adult and fetal kidney. Am. J. Physiol. Ren. Physiol., 315 (2018), pp. F1637-F1643.
W. Yin, T. Kumar, Z. Lai, X. Zeng, H.D. Kanaan, W. Li, P.L. Zhang. Kidney injury molecule-1, a sensitive and specific marker for identifying acute proximal tubular injury, can be used to predict renal functional recovery in native renal biopsies. Int. Urol. Nephrol., 51 (2019), pp. 2255-2265.
M. Zeisberg, R. Kalluri. The role of epithelial-to-mesenchymal transition in renal fibrosis. J. Mol. Med. (Berl.), 82 (2004), pp. 175-181.
P.L. Zhang, J.M. Hafron. Progenitor/stem cells in renal regeneration and mass lesions. Int. Urol. Nephrol., 46 (2014), pp. 2227-2236.
P.L. Zhang, M.L. Liu. Extracellular vesicles mediate cellular interactions in renal diseases - novel views of intercellular communications in the kidney. J. Cell. Physiol., 236 (2021), pp. 5482-5494.
P.L. Zhang, M. Lun, C.M. Schworer, T.M. Blasick, K.K. Masker, J.B. Jones, D.J. Carey. Heat shock protein expression is highly sensitive to ischemia-reperfusion injury in rat kidneys. Ann. Clin. Lab. Sci., 38 (2008a), pp. 57-64.
P.L. Zhang, L.I. Rothblum, W.K. Han, T.M. Blasick, S. Potdar, J.V. Bonventre. Kidney injury molecule-1 expression in transplant biopsies is a sensitive measure of cell injury. Kidney Int., 73 (2008b), pp. 608-614.
P.L. Zhang, J.W. Mashni, V.S. Sabbisetti, C.M. Schworer, G.D. Wilson, S.C. Wolforth, K.M. Kernen, B.D. Seifman, M.B. Amin, T.J. Geddes, F. Lin, J.V. Bonventre, J.M. Hafron. Urine kidney injury molecule-1: a potential non-invasive biomarker for patients with renal cell carcinoma. Int. Urol. Nephrol., 46 (2014), pp. 379-388.
K.J. Zhang, G.D. Wilson, S. Kara, A. Majeske, P.L. Zhang, J.M. Hafron. Diagnostic role of kidney injury molecule-1 in renal cell carcinoma. Int. Urol. Nephrol., 51 (2019), pp. 1893-1902.
P.L. Zhang, M. Deebajah, J. Fullmer, T. Ichimura, M.B. Amin, J.V. Bonventre. Morphological evidence suggests that kidney injury molecule-1 may serve as a proximal tubule receptor for SARS-CoV-2 (abstract PO0833). J. Am. Soc. Nephrol., 31 (10S) (2020), p. 296.
T. Zhang, R.E. Widdop, S.D. Ricardo. Transition from acute kidney injury to chronic kidney disease: mechanisms, models, and biomarkers. Am. J. Physiol. Ren. Physiol., 327 (2024), pp. F788-F805.
Y. Zheng, L. Wang, M. Chen, L. Liu, A. Pei, R. Zhang, S. Gan, S. Zhu. Inhibition of T cell immunoglobulin and mucin-1 (TIM-1) protects against cerebral ischemia-reperfusion injury. Cell Commun. Signal., 17 (2019), p. 103.
Y. Zhou, V.S. Vaidya, R.P. Brown, J. Zhang, B.A. Rosenzweig, K.L. Thompson, T.J. Miller, J.V. Bonventre, P.L. Goering. Comparison of kidney injury molecule-1 and other nephrotoxicity biomarkers in urine and kidney following acute exposure to gentamicin, mercury, and chromium. Toxicol. Sci., 101 (2008), pp. 159-170.
P. Zhou, X.L. Yang, X.G. Wang, B. Hu, L. Zhang, W. Zhang, H.R. Si, Y. Zhu, B. Li, C.L. Huang, H.D. Chen, J. Chen, Y. Luo, H. Guo, R.D. Jiang, M.Q. Liu, Y. Chen, X.R. Shen, X. Wang, X.S. Zheng, K. Zhao, Q.J. Chen, F. Deng, L.L. Liu, B. Yan, F.X. Zhan, Y.Y. Wang, G.F. Xiao, Z.L. Shi. A pneumonia outbreak associated with a new coronavirus of probable bat origin. Nature, 579 (2020), pp. 270-273.
For more information, see our Cookie Policy and the list of Google Ad-Tech Vendors. You may choose not to allow some types of cookies. However, blocking some types may impact your experience of our site and the services we are able to offer. See the different category headings below to find out more or change your settings. You may also be able to exercise your privacy choices as described in our Privacy Policy Allow all Manage Consent Preferences Strictly Necessary Cookies Always active These cookies are necessary for the website to function and cannot be switched off in our systems. They are usually only set in response to actions made by you which amount to a request for services, such as setting your privacy preferences, logging in or filling in forms. You can set your browser to block or alert you about these cookies, but some parts of the site will not then work. Cookie Details List‎ Performance Cookies [x] Performance Cookies These cookies allow us to count visits and traffic sources so we can measure and improve the performance of our site. They help us to know which pages are the most and least popular and see how visitors move around the site. Cookie Details List‎ Contextual Advertising Cookies [x] Contextual Advertising Cookies These cookies are used for properly showing banner advertisements on our site and associated functions such as limiting the number of times ads are shown to each user. Cookie Details List‎ Cookie List Clear [x] checkbox label label Apply Cancel Consent Leg.Interest [x] checkbox label label [x] checkbox label label [x] checkbox label label Confirm my choices
2483
https://math.stackexchange.com/questions/980872/finding-number-of-ways-to-get-a-sum-of-100
combinatorics - Finding number of ways to get a sum of $100$ - Mathematics Stack Exchange

Finding number of ways to get a sum of 100

Asked Oct 19, 2014 · Modified Oct 19, 2014 · Viewed 992 times

If we are given to find the number of ways 10 positive integers can sum to $100$, we simply find the coefficient of $x^{100}$ in $(x + x^2 + \cdots + x^{90})^{10}$, which turns out to be $\binom{99}{90}$. But what do we do if we are asked to find the number of ways if the numbers are distinct? How do we proceed then? Can we use a generating-functions approach? I have thought a lot, but didn't come up with anything doable. Please help me out.

Tags: combinatorics

edited Oct 19, 2014 at 13:55; asked Oct 19, 2014 at 13:47 by user1001001

Comments:

Well in particular, there are no 10 distinct positive integers with sum 50.
Clearly the ten distinct numbers $1,\dots,10$ have the smallest possible sum, but $1+\cdots+10 = \frac{10 \cdot 11}{2} = 55 > 50$. – Ben Frankel, Oct 19, 2014 at 13:52

@BenFrankel Alright, I'll make it five. – user1001001, Oct 19, 2014 at 13:54

@BenFrankel In fact, I'll make the sum 100. – user1001001, Oct 19, 2014 at 13:58

Related: The number of partitions of n into distinct parts equals the number of partitions of n into odd parts. – hardmath, Oct 19, 2014 at 14:06

Are you imposing two constraints on a partition of $n=100$, namely the number of parts (10) and the distinctness of the parts? – hardmath, Oct 19, 2014 at 14:07

1 Answer

A generating function approach is possible, but first let's note that if $q(n,k)$ denotes the number of partitions of $n$ into exactly $k$ distinct parts and $p(n,k)$ denotes the number of partitions of $n$ into exactly $k$ (not necessarily distinct) parts, then:

$$q(n,k) = p\left(n - \binom{k}{2},\, k\right)$$

For details see this Answer to the previously asked Partition an integer n into exactly k distinct parts, which however does not go into the generating function aspect. We are therefore able to translate the generating function for $p(n,k)$ already noted in the Question to one for $q(n,k)$. In particular, since $q(100,10) = p(55,10)$, this should be the coefficient of $x^{55}$ in:

$$\frac{x^{10}}{(1-x)(1-x^2)\cdots(1-x^{10})}$$

According to Henry Bottomley's Partition and composition calculator (may require browser permissions for Java applet to run), this number is 33401. Since the generating function for $p(n,k)$ is:

$$P_k(x) = \sum_{n=k}^{\infty} p(n,k)\,x^n = \frac{x^k}{(1-x)(1-x^2)\cdots(1-x^k)}$$

the generating function for $q(n,k)$ is:

$$Q_k(x) = \sum_{n=k(k+1)/2}^{\infty} p\left(n-\binom{k}{2},\, k\right)x^n = x^{\binom{k}{2}} \sum_{m=k}^{\infty} p(m,k)\,x^m = \frac{x^{k(k+1)/2}}{(1-x)(1-x^2)\cdots(1-x^k)}$$

The same conclusion is reached at Partitions Into Distinct Parts (MathPages).

edited Apr 13, 2017 at 12:21 by CommunityBot; answered Oct 19, 2014 at 15:31 by hardmath
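All of the numbers in this thread are small enough to verify mechanically. Below is a short, self-contained Python sketch (not part of the original thread) that checks the stars-and-bars count $\binom{99}{90}$ from the question, the identity $q(n,k) = p(n-\binom{k}{2},k)$ from the answer, and the value $p(55,10) = 33401$, using the standard recurrence $p(n,k) = p(n-1,k-1) + p(n-k,k)$ together with an independent direct count of distinct partitions.

```python
from functools import lru_cache
from math import comb

# p(n, k): partitions of n into exactly k positive (not necessarily
# distinct) parts, via the standard recurrence
#   p(n, k) = p(n-1, k-1) + p(n-k, k)
# (either some part equals 1, or all parts are >= 2 and we may
# subtract 1 from every part).
@lru_cache(maxsize=None)
def p(n, k):
    if k == 0:
        return 1 if n == 0 else 0
    if n < k:
        return 0
    return p(n - 1, k - 1) + p(n - k, k)

# q(n, k): partitions of n into exactly k *distinct* parts, counted
# directly by choosing the smallest part j first (later parts must be
# strictly larger) -- an independent cross-check of the identity.
@lru_cache(maxsize=None)
def q(n, k, smallest=1):
    if k == 0:
        return 1 if n == 0 else 0
    # the smallest of k parts can be at most n // k
    return sum(q(n - j, k - 1, j + 1) for j in range(smallest, n // k + 1))

# Ordered 10-tuples of positive integers summing to 100 (stars and bars):
# C(99, 9), which equals the question's C(99, 90) by symmetry.
assert comb(99, 9) == comb(99, 90)

# The answer's identity q(n,k) = p(n - C(k,2), k) and the value 33401.
assert q(100, 10) == p(100 - comb(10, 2), 10) == p(55, 10) == 33401
print(p(55, 10))  # 33401
```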
2484
https://www.ncbi.nlm.nih.gov/books/NBK563150/
Granular Cell Tumor - StatPearls - NCBI Bookshelf

NCBI Bookshelf. A service of the National Library of Medicine, National Institutes of Health. StatPearls [Internet]. Treasure Island (FL): StatPearls Publishing; 2025 Jan-.

Granular Cell Tumor

Daniel Neelon (Walter Reed National Military Medical Center); Ford Lannan; John Childs.
Last Update: July 3, 2023.

Continuing Education Activity

Granular cell tumors are rare neoplasms derived from Schwann cells with characteristic pathologic findings. They can exhibit either benign or malignant behavior and most commonly occur in the oral cavity, skin, and gastrointestinal tract. They must be treated appropriately to avoid recurrence or metastasis of malignant lesions. This activity reviews the diagnosis, management, and prognosis of granular cell tumors and highlights the role of the interprofessional team in evaluating and treating patients with this neoplasm.

Objectives:
- Describe the typical clinical presentation of patients with granular cell tumors.
- Identify the histologic features required to differentiate benign from malignant granular cell tumors.
- Outline the management of patients with granular cell tumors.

Introduction

Granular cell tumors were described as early as 1926 by the Russian pathologist Abrikossoff. They were initially coined granular cell myoblastomas, as they were believed to be of muscular origin. With the advent of immunohistochemical stains and electron microscopy, they are now believed to be of Schwannian derivation. Notably, a subset of S100-negative "non-neural" granular cell tumors has been identified which may not derive from neural tissue. These rare tumors are most commonly reported in the skin, oral cavity, digestive tract, and subcutaneous tissue. However, they can occur anywhere in the body, including the breast, bladder, nervous system, and respiratory and genitourinary tracts. All age groups and genders can be affected, but the tumor is classically found in women in their 4th to 6th decades of life. Granular cell tumors typically present as solitary, painless nodules less than 3 to 4 cm in size and may be found incidentally. The vast majority behave indolently. Based on histologic criteria or the presence of metastasis, however, 1% to 2% of these lesions can be malignant, with poor prognosis and few curative options beyond surgical excision.

Etiology

The etiology of granular cell tumors is, to date, poorly described, and their genetic underpinnings and pathophysiology are not well understood.
Although it is known that these tumors may demonstrate recurrent genetic mutations in the setting of specific syndromes, the mutations driving sporadic tumorigenesis have only recently become evident with the use of whole-genome sequencing.

Syndrome Associations

It is known that the clinical finding of multiple granular cell tumors is associated with Noonan syndrome, neurofibromatosis I, and LEOPARD syndrome. Authors have reported PTPN11 gene mutations in granular cell tumors associated with LEOPARD and Noonan syndromes. It has also been proposed that the etiology of multiple granular cell tumors may be linked to abnormal RAS/MAPK pathway cell signaling, given that abnormal signaling in this pathway is a shared feature of all 3 syndromes. In another study, granular cell tumor was also associated with germline PTEN mutations in patients with PTEN hamartoma tumor syndrome. It is important to note that many of these syndrome-associated gene mutations have not yet been tested in large studies and are mostly derived from case studies or small case series. In contrast, one small follow-on study attempting to confirm the importance of PTEN and PTPN11 mutations in granular cell tumors could not detect either of them and instead found KDR, GNAQ, and ATM mutations in a small sample of granular cell tumors, none of which were recurrent within the sample.

Non-Neural Tumors

Although the question of shared cell lineage with conventional granular cell tumors is debated, there is also a subset of non-neural, S100-negative granular cell tumors which reportedly harbor an ALK gene fusion and which stain positively for anaplastic lymphoma kinase by immunohistochemistry.

ATP6AP1 and ATP6AP2 Mutations

Recently, whole-genome sequencing and targeted sequencing analysis have been used to identify mutations in granular cell tumors. In 2018, Pareja et al. used whole-exome sequencing and targeted sequencing analysis to identify recurrent ATP6AP1 and ATP6AP2 inactivating somatic mutations in 72% of their sampled granular cell tumors. Subsequent in vitro impairment of these pH regulators in Schwann cells not only spurred oncogenesis via increased phosphorylation of PDGFR-B, SFK, and STAT-5, but also resulted in impaired vesicle acidification, impaired endocytosis, and build-up of intracytoplasmic granules, re-creating the characteristic histologic finding of eosinophilic granules in the cytoplasm. The authors suggest that these 2 mutations are pathognomonic for granular cell tumors, as they are each found in less than 0.3% of other common cancer types. Impairment of ATP6AP1 and ATP6AP2 resulted in impairment of the V-ATPase (H+ ATPase) complex with a subsequent decrease in lysosomal activity. Not only does this correlate with the characteristic cytoplasmic findings, but it also explains the tumor's positive immunohistochemical staining for TFE3 and wild-type MITF, as lysosomal inhibition induces activation of the transcription factors MITF, TFE3, and TFEB. A larger follow-on study in 2019 verified the presence of truncating or splice site mutations in ATP6AP1 and ATP6AP2, as well as in ATP6V0C, which also codes for a V-ATPase accessory protein.
Malignant Tumors

Further studies have sought to specifically characterize malignant granular cell tumors via whole-genome sequencing, finding mutations in the new tumor suppressor candidate BRD7 and in GFRA2 of the tyrosine kinase receptor pathway. Other studies have reported potential driver mutations in the ATM, ASXL1, NOTCH2, and PARP4 pathways in malignant granular cell tumors, as well as loss of heterozygosity at chromosomes 9p and 17p in oral granular cell tumors.

Epidemiology

Granular cell tumors can appear in all age groups but are thought to arise most commonly in the 4th to 6th decades. These tumors are most often seen in women (female-to-male ratios ranging from 1.8 to 2.4). While two-thirds of the benign lesions are reported in African-American patients, a recent cohort analysis of 113 patients found that over 70% of the patients with malignant cutaneous tumors were Caucasian. These are rare tumors: according to a study by Lack et al. at a single institution over 32 years, the overall incidence of granular cell tumors in surgical specimens was 0.03%. Although most are solitary, 7% to 29% of patients can have multiple lesions on presentation, 30% of which involve the skin and subcutaneous tissue. In 45% to 65% of cases, they are found in the head and neck region. Overall, they involve the cutis and subcutaneous tissue in 30% of patients, the breasts in 15% of cases, the respiratory tract in 10% of cases, and the gastrointestinal tract in 5% to 11%. 1% to 2% of them are malignant based on histology or clinical presentation.

Histopathology

Morphology

Granular cell tumors are characterized by infiltrative, non-encapsulated nests, cords, or sheets of polygonal and occasionally spindled cells with abundant eosinophilic, finely granular cytoplasm. Many of the tumors have pustulo-ovoid bodies of Milian, which are large granules with clear halos. The granules are periodic acid-Schiff (PAS) positive and diastase resistant, and they are thought to represent lysosomes. The nests are often separated by fibrous tissue and located in the dermis, subcutis, or submucosa. Nuclei have dense chromatin, are relatively small, and are centrally placed.

Malignant Features

These lesions can be sub-classified into benign, atypical, and malignant categories based on histologic features. The first system was developed in 1998 by Fanburg-Smith et al. using six features: necrosis, increased mitotic count (greater than 2 per 10 high-power fields), spindled tumor cells, nuclear pleomorphism, vesicular nuclei with prominent nucleoli, and a high nuclear-to-cytoplasmic ratio. Lesions with none of these features are categorized as benign, and those with 1 or 2 features are categorized as atypical. Tumors with 3 or more are called malignant and have a considerably worse prognosis.
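Because the Fanburg-Smith system reduces to counting how many of these six features are present, its cutoffs can be written down as a tiny decision rule. The Python sketch below is illustrative only (the feature labels are paraphrased from the list above, not a formal schema); it simply makes the 0 / 1-2 / 3-or-more thresholds explicit.

```python
# A minimal sketch of the Fanburg-Smith scoring described above:
# 0 criteria -> benign, 1-2 -> atypical, >=3 -> malignant.
# Feature names are paraphrased labels, not a formal schema.
FANBURG_SMITH_CRITERIA = (
    "necrosis",
    "spindled_tumor_cells",
    "mitoses_gt_2_per_10_hpf",
    "nuclear_pleomorphism",
    "vesicular_nuclei_with_prominent_nucleoli",
    "high_nuclear_to_cytoplasmic_ratio",
)

def classify_granular_cell_tumor(findings):
    """Classify by the number of Fanburg-Smith criteria present."""
    score = sum(1 for feature in FANBURG_SMITH_CRITERIA if feature in findings)
    if score == 0:
        return "benign"
    if score <= 2:
        return "atypical"
    return "malignant"

# Example: two features present -> atypical.
print(classify_granular_cell_tumor({"necrosis", "nuclear_pleomorphism"}))
```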
Immunohistochemistry

Both benign and malignant tumor cells typically stain positively for S-100, CD68, neuron-specific enolase, CD57, inhibin, calretinin, TFE3, SOX10, CD56, PGP9.5, and vimentin. There is a rare non-neural variant that stains negatively for S-100 but is positive for CD68, CD10, and occasionally neuron-specific enolase. In some studies, the proliferation markers Ki-67 and PHH3 were shown to be good predictors of atypical histology.

Additional Features

Although benign granular cell tumors are readily identified morphologically, some features can be mistaken for malignancy. Benign lesions located deep to the skin or to a mucosal surface can induce reactive pseudoepitheliomatous hyperplasia, which closely mimics squamous cell carcinoma in superficial biopsies and is found in over half of cases. Benign tumors can also demonstrate both vascular and perineural invasion, but these histologic features do not confer malignancy or an adverse prognosis.

History and Physical

General

Granular cell tumors typically present as skin-colored or brown-red, solitary, painless, slow-growing nodules in the head and neck area that are less than 3 to 4 cm in diameter. They are rarely painful or pruritic. They are most often reported in the skin, oral cavity, gastrointestinal tract, breast, and respiratory tract. It is postulated that their high frequency in the tongue and skin parallels the large concentration of peripheral nerves in those areas. They have also been reported in every organ system, including the genitourinary tract, thyroid, neurohypophysis, and pancreaticobiliary system. They may present as multiple nodules, which should prompt an exam for other findings associated with Noonan syndrome, neurofibromatosis type I, and LEOPARD syndrome.

Breast

In the breast, granular cell tumors can arise in any location, including all 4 quadrants, the axilla, and the nipple. Some authors previously argued that they arise predominantly in the upper inner quadrant in the distribution of the supraclavicular nerve, although this is debated in recent literature. 70% present as palpable masses; the others are found incidentally or during screening without a palpable mass on exam. Most are painless, although some patients have reported pain, pruritus, skin retraction, thickening or dimpling, and reactive lymphadenopathy on presentation.

Gastrointestinal Tract

In the gastrointestinal tract, these tumors are often found incidentally during screening, and patients are usually asymptomatic. They occur most frequently in the distal esophagus, followed by the duodenum, anus, and stomach, although cases in the colon, biliary tract, and rectum have also been reported. Some patients present with non-specific symptoms such as belching, dysphagia, abdominal distension, or hematochezia. Endoscopy typically reveals a hard, isolated, grey-white submucosal or mucosal nodule with normal overlying mucosa, and the lesions generally lack unique features to set them apart from other etiologies of gastrointestinal polyps.

Oral Cavity

In the oral cavity, these tumors occur most frequently on the anterior tongue and are seen as a firm, yellow or pink, non-painful, solitary nodule. Although they are also seen on the lip, palate, and buccal mucosa, some studies have found that these locations account for less than 20% of oral cavity granular cell tumors.

Neurohypophysis

In the neurohypophysis, they are most often asymptomatic and found incidentally at autopsy. However, they may rarely present with bitemporal visual impairment due to mass effect, headache, or hyperprolactinemia with hypogonadism. Given their rarity, they are often initially diagnosed radiologically as pituitary adenomas or craniopharyngiomas.
Malignant Tumors

As for the malignant counterpart, the tumors often present as subcutaneous masses, most commonly in the lower extremities. They are usually larger than benign lesions and are known to present with metastatic lesions in the lung, lymph nodes, and bones. Clinical findings associated with malignant lesions include rapid growth, ulceration, invasion of adjacent structures on radiology, and a diameter greater than 5 cm.

Evaluation

Radiology

Imaging is typically not pursued prior to excisional biopsy for small, benign-appearing nodules in the skin or oral cavity. However, tumors in the gastrointestinal tract, breast, extremity soft tissue, or other uncommon locations are often imaged because they cannot be distinguished clinically from other benign or malignant lesions.

Intramuscular

Although no radiologic findings are unique to granular cell tumors, MRI is the best imaging modality to differentiate benign from malignant lesions. In the muscle, benign granular cell tumors appear iso-intense or brighter than muscle on T1-weighted imaging. On T2-weighted sequences, the center of the lesion is also iso-intense to muscle or suppressed fat, but there is peripheral enhancement. They are round or oval, less than 4 cm in diameter, and superficial. By comparison, the malignant counterparts often show high signal intensity on T2-weighted sequences, have the same iso-intensity to muscle on T1-weighted images, are larger than 4 cm in diameter, and may show invasion of adjacent structures.

Breast

In the breast, imaging findings can be difficult to distinguish from carcinoma. They can be seen on mammography as small, round, well-circumscribed lesions that are less than 3 cm in diameter, but they have also been reported to be indistinct, stellate, heterogeneous with hypodense rims, poorly circumscribed, and spiculated without calcifications. Ultrasound findings are also non-specific in the breast. They are most commonly heterogeneous, solid, poorly defined masses featuring posterior shadowing and a high depth-to-width ratio. Some series have described them as predominantly hyperechoic, while others have described them as hypoechoic. Rarely, however, they can be well-circumscribed with weak internal echoes and acoustic enhancement. On MRI, breast lesions demonstrate findings similar to intramuscular lesions, with intermediate signal on T1-weighted sequences and peripheral enhancement on T2-weighted images with iso-intensity to muscle.

Esophagus

In the esophagus, granular cell tumors are more often hypoechoic, homogeneous, and smooth-edged on ultrasound, although hyperechoic, heterogeneous, and irregular lesions have been described. On MRI, esophageal lesions have been described as having low T1 signal intensity with homogeneous enhancement and high T2 intensity.

Pathology

Given the similar clinical and radiologic findings between granular cell tumors and other benign and malignant entities, diagnosis via biopsy for microscopic analysis is required for all lesions. Both benign and malignant tumors have characteristic histologic findings such that excisional biopsies are readily diagnosed by morphology in combination with immunohistochemical stains. For lesions amenable to it, excisional biopsy is preferred to core or shave biopsy. This minimizes the likelihood of sampling error, whereby the tumor underlying reactive squamous atypia is missed or portions of the tumor showing malignant histology are missed.
Treatment / Management

Surgery

Complete excision to negative margins with close clinical follow-up is recommended for granular cell tumors in nearly all locations, whether benign or malignant. In the skin, wide local excision is performed for the diagnosis of smaller lesions, but larger lesions may require a biopsy followed by excision. Given the propensity of these tumors to recur with positive margins, Mohs surgery is occasionally employed to ensure complete removal of the tumor, particularly in cosmetically and functionally important locations. Sentinel lymph node biopsy is only recommended for lesions suspected to be malignant by clinical impression or histology. Lymph node dissection is generally only suggested for palpable lymph nodes or biopsy-proven metastatic disease, although some authors recommend upfront lymph node dissection in malignant breast tumors.

Chemotherapy and Radiation Therapy

The current understanding is that there is a limited role for chemotherapy and radiation therapy. There have been rare case reports of these modalities being used with some success in patients with malignant tumors and metastatic disease on presentation. A handful of case reports have shown that the sarcoma treatment pazopanib can treat some recurrent malignant granular cell tumors, but there is no current standard chemotherapy regimen for malignant or metastatic disease, given the lack of randomized clinical trials for this specific lesion. The use of adjuvant radiation therapy has been similarly controversial and ill-defined, with scattered reports recommending its use in recurrent malignant lesions or inoperable metastases. In the United States, only 11.3% of patients with malignant granular cell tumors receive radiation therapy.

Management in the Gastrointestinal Tract

In the gastrointestinal tract, it is important to follow up the initial endoscopy with endoscopic ultrasound in order to assess tumor size, location, and anatomic depth of invasion, which determine the treatment modality. In contrast to other locations, asymptomatic esophageal granular cell tumors less than 1 cm can be monitored with follow-up endoscopic ultrasounds. Tumors greater than 1 cm, lesions causing symptoms, rapidly growing nodules, and those suspicious for malignancy are resected. Endoscopic mucosal resection is used to resect benign tumors under 2 cm, while submucosal endoscopic resection is used for benign tumors between 2 and 3 cm in the submucosa. Conventional open surgery or video-assisted thoracoscopic surgery is pursued for tumors located in the muscularis propria, for large or malignant lesions, and in the presence of other contraindications to endoscopic resection. In the colon, endoscopic mucosal resection or endoscopic submucosal dissection is also the best strategy for benign tumors under 2 cm. Some authors recommend endoscopic submucosal excavation for benign tumors up to 5 cm, with traditional surgery reserved for those greater than 5 cm and for malignant lesions. Others recommend polypectomy for anything under 4 cm, while some recommend colectomy for any lesion greater than 2 cm.
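The esophageal pathway above reduces largely to size- and depth-based thresholds, so it can be summarized as a small lookup. The Python sketch below is a toy encoding of those thresholds as stated in the text; the function name and labels are invented for illustration, and the flags stand in for what in practice comes from endoscopic ultrasound findings and clinical judgment.

```python
# A toy encoding of the size/depth triage for esophageal granular cell
# tumors described above. Names and labels are illustrative only.
def triage_esophageal_gct(size_cm, symptomatic=False, suspicious_or_deep=False):
    """Map tumor size (cm) and flags to the management described in the text."""
    if suspicious_or_deep:
        # Muscularis propria involvement, suspected malignancy, or other
        # contraindications to endoscopic resection -> surgery.
        return "open or video-assisted thoracoscopic surgery"
    if size_cm < 1 and not symptomatic:
        return "surveillance with follow-up endoscopic ultrasound"
    if size_cm < 2:
        return "endoscopic mucosal resection"
    if size_cm <= 3:
        return "submucosal endoscopic resection"
    return "surgical resection"

print(triage_esophageal_gct(0.6))  # surveillance with follow-up endoscopic ultrasound
print(triage_esophageal_gct(1.5))  # endoscopic mucosal resection
print(triage_esophageal_gct(2.5))  # submucosal endoscopic resection
```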
Differential Diagnosis

Clinical Differential

The clinical differential diagnosis for granular cell tumors is extensive and varies by the location of the tumor. It may include alveolar soft parts sarcoma, adnexal tumors, apocrine carcinoma, basal cell carcinoma, cholangiocarcinoma, colonic adenoma, cystic lesions, dermatofibroma, dermoid cyst, desmoid tumor, duct ectasia, fat necrosis, fibroadenoma, fibrosarcoma, gastrointestinal stromal tumor, granulomatous mastitis, hidradenoma, hypertrophic scar, invasive mammary carcinoma, irritation fibroma, keloid, leiomyoma, leiomyosarcoma, lipoma, malignant fibrous histiocytoma, neurofibroma, nodular fasciitis, oncocytic renal cell carcinoma, prurigo nodularis, regressing verruca, rhabdomyosarcoma, sclerosing adenosis, schwannoma, steatoma, and traumatic neuroma.

Histologic Differential

The histologic differential largely includes tumors that have similar morphologic findings and those with granular variants. It includes alveolar soft parts sarcoma, ameloblastoma, angiosarcoma, atypical fibroxanthoma, basal cell carcinoma, congenital granular cell epulis, dermatofibroma, dermatofibrosarcoma protuberans, epithelioid histiocytoma, fibroxanthoma, granular cell dermatofibroma, hibernoma, leiomyoma, leiomyosarcoma, lobomycosis, malignant fibrous histiocytoma, malignant peripheral nerve sheath tumor, melanocytic nevus, melanoma, neurofibroma, non-neural granular cell tumor, primitive polypoid granular cell tumor, reactive granular cell change, reticulohistiocytoma, rhabdomyoma, rhabdomyosarcoma, schwannoma, squamous cell carcinoma, trichoblastoma, and xanthoma.

Staging

Currently, no clear staging system exists for malignant granular cell tumors. Nonetheless, for lesions expected to be malignant by clinical presentation or histology, sentinel lymph node biopsy and possible lymph node dissection are recommended to evaluate for regional metastasis.

Prognosis

Benign

The prognosis for benign granular cell tumors is excellent, as complete surgical resection is considered curative. There have only been sporadic case reports of metastatic lesions arising from histologically benign primaries. With wide local excision, the recurrence rate has been reported to be as low as 2% to 8%, although it increases to 21% to 50% with incomplete excision.

Malignant

By contrast, patients with malignant granular cell tumors have a substantially worse prognosis, with 74% and 65% survival rates at 5 and 10 years, respectively. They see a 32% to 41% rate of recurrence and an 11% to 62% rate of metastasis between 3 and 37 months after diagnosis. Notably, patients with tumors greater than 5 cm see a decreased 5-year survival rate of 51%, compared to 90% in those with tumors less than 5 cm. Similarly, those with distant metastases at diagnosis have 0% survival at 5 years, compared to 81% in those without metastases.

Complications

Complications include the following:
- Surgical site infection
- Recurrence
- Metastasis, regional and distant
- Poor aesthetics of the surgical scar
- Local invasion and tissue destruction
- Discomfort due to mass effect (dysphagia, nerve impingement, constipation, abdominal fullness, bitemporal hemianopsia, hyperprolactinemia)

Deterrence and Patient Education

Following the resection of granular cell tumors, patients should monitor for metastatic and recurrent lesions. This is especially true with malignant tumors. They should also be reminded to keep their follow-up appointments for appropriate surveillance imaging of metastatic lesions or of nodules which were not excised (such as small esophageal tumors). This will allow for the proper treatment of any newly discovered tumors.
Patients with multiple granular cell tumors should also be educated about the possible association with neurofibromatosis I, Noonan syndrome, and LEOPARD syndrome (lentigines, electrocardiogram (ECG) conduction abnormalities, ocular hypertelorism, pulmonic stenosis, abnormal genitalia, retardation of growth, and sensorineural deafness). They will likely need a thorough exam and additional studies to rule these out.

Pearls and Other Issues

Non-Representative Biopsy

Numerous pitfalls exist in the work-up and diagnosis of granular cell tumors. One well-known pitfall involves a shave biopsy of the superficial skin or mucosa overlying a granular cell tumor. Granular cell tumors are commonly associated with superficial pseudoepitheliomatous hyperplasia, which has a morphologic appearance strikingly similar to squamous cell carcinoma. If only the epidermis overlying a deep dermal or subcutaneous nodule is shaved, the patient will be misdiagnosed and receive inappropriate therapy. Similarly, needle biopsies are discouraged for lesions amenable to complete excisional biopsy. Some granular cell tumors may have patchy foci of malignant histology such that a single core may not be representative of the entire tumor.

Incomplete Excision

Another pitfall is not obtaining broad margins on local excision of lesions where it is feasible. Clinically benign-appearing lesions may still have malignant histology, so wide local excision with negative margins is recommended for all lesions suspected or proven to be granular cell tumors to minimize the possibility of recurrence.

Lymph Node Dissection

Although it is debated in the literature, performing a lymph node dissection for benign or malignant tumors before a sentinel lymph node biopsy may result in unnecessary surgery and the morbidity associated with lymph node dissections. Many authors recommend sentinel lymph node biopsy only for malignant tumors before a full lymph node dissection.

Poor Communication

Finally, if there is poor communication between the clinician and the pathologist, the patient may mistakenly be diagnosed with a benign granular cell tumor rather than a malignant one. A history of rapid growth, size over 5 cm, ulceration, and prominent lymphadenopathy, for example, may prompt the diagnosis of a malignant lesion regardless of histologic findings. Failure to communicate that to the pathologist may result in a benign diagnosis and, subsequently, inappropriate follow-up and under-investigation for metastatic lesions.

Enhancing Healthcare Team Outcomes

Granular cell tumors are rare neoplasms whose clinical and radiologic findings are often indistinguishable from other, more common lesions. These tumors, whether malignant or benign, typically present as solitary masses in a wide range of organ systems without any unique features until examined microscopically. Furthermore, while some of the malignant lesions present as such with nodal or distant metastases, many are not found to be malignant until the histology is worked up. Subsequently, while a single physician may initiate the care of a patient with a granular cell tumor, it is important to consult an interdisciplinary team of specialists promptly to expedite workup and treatment. A dermatologist who diagnoses a malignant granular cell tumor of the skin by biopsy or excision may need to order additional imaging and rely on radiology to identify metastatic lesions.
An interventional pulmonologist may need to be consulted for biopsy of lung lesions, with subsequent engagement of thoracic surgery, surgical oncology, medical oncology, and radiation oncology to discuss and pursue management options for metastatic lesions with the patient.

Radiology

Radiologists play an important role in determining the etiology of the lesion, as they can advise the primary physician to keep benign tumors, such as benign granular cell tumors, in mind even when the imaging demonstrates malignant features such as spiculation. If the patient has a prior history of other benign granular cell tumors, it is important to relay that history to the radiologists to ensure they consider multiple foci of such tumors in the differential even if the lesion appears malignant. The radiologist may also be able to discern features suggestive of a malignant versus benign granular cell tumor.

Pathology

Pathologists are also a vital part of the team, as they identify the diagnostic morphologic and immunohistochemical findings in this tumor. They will ultimately communicate to the team whether the lesion is benign or its malignant counterpart. It is important to provide appropriate clinical history and imaging findings to the pathologist when tissue is sent, as this will impact the diagnosis as well as the classification of the tumor as malignant or benign. Failing to impart the information that lymphadenopathy was noted near the tumor, or that synchronous liver lesions were noticed in addition to a skin primary, for example, may lead the pathologist to call a lesion benign or atypical rather than malignant. Alternatively, if it is not communicated to the pathologist that a biopsy was just a superficial shave of a deep dermal lesion, the reactive pseudoepitheliomatous hyperplasia overlying a benign granular cell tumor may be erroneously called squamous cell carcinoma.[46]

Nursing and Pharmacy

Leading up to and following surgical excision of the lesion, nurses play an essential role in the interprofessional group, as they monitor the patient's pre-operative, intra-operative, and post-operative vital signs and symptoms concerning for infection, and they assist with the education of the patient and family. Pharmacists will ensure the patient is sent home on the appropriate pain medication.

Outcomes

The outcomes of granular cell tumors depend in part on whether the lesions are malignant or benign. Benign tumors have excellent outcomes with wide local excision and rarely recur or metastasize. On the other hand, patients with large, malignant lesions and metastatic disease have dismally poor outcomes. Some studies report up to 41% recurrence after excision and a 62% rate of metastasis in malignant lesions. Another recent study demonstrated that patients with metastatic disease at diagnosis have 0% survival at 5 years.[84] It is therefore important to maintain clear communication with diagnostic specialists and to promptly consult an interdisciplinary group of sub-specialists. This will decrease the time to treatment and reduce the likelihood that any given malignant lesion will metastasize or cause further morbidity for the patient.

Figure: Granular Cell Tumor. Contributed by John Childs, MD; Ford Lannan, MD; and Daniel Neelon, MD.

References
1. Rekhi B, Jambhekar NA. Morphologic spectrum, immunohistochemical analysis, and clinical features of a series of granular cell tumors of soft tissues: a study from a tertiary referral cancer center. Ann Diagn Pathol. 2010 Jun;14(3):162-7. [PubMed: 20471560]
2. Fisher ER, Wechsler H. Granular cell myoblastoma--a misnomer. Electron microscopic and histochemical evidence concerning its Schwann cell derivation and nature (granular cell schwannoma). Cancer. 1962 Sep-Oct;15:936-54. [PubMed: 13893237]
3. Lewin MR, Montgomery EA, Barrett TL. New or unusual dermatopathology tumors: a review. J Cutan Pathol. 2011 Sep;38(9):689-96. [PubMed: 21790713]
4. Lazar AJ, Fletcher CD. Primitive nonneural granular cell tumors of skin: clinicopathologic analysis of 13 cases. Am J Surg Pathol. 2005 Jul;29(7):927-34. [PubMed: 15958858]
5. Fernandez-Flores A, Cassarino DS, Riveiro-Falkenbach E, Rodriguez-Peralto JL, Fernandez-Figueras MT, Monteagudo C. Cutaneous dermal non-neural granular cell tumor is a granular cell dermal root sheath fibroma. J Cutan Pathol. 2017 Jun;44(6):582-587. [PubMed: 28266050]
6. Lack EE, Worsham GF, Callihan MD, Crawford BE, Klappenbach S, Rowden G, Chun B. Granular cell tumor: a clinicopathologic study of 110 patients. J Surg Oncol. 1980;13(4):301-16. [PubMed: 6246310]
7. Suchitra G, Tambekar KN, Gopal KP. Abrikossoff's tumor of tongue: Report of an uncommon lesion. J Oral Maxillofac Pathol. 2014 Jan;18(1):134-6. [PMC free article: PMC4065432] [PubMed: 24959055]
8. Machado I, Cruz J, Lavernia J, Llombart-Bosch A. Solitary, multiple, benign, atypical, or malignant: the "Granular Cell Tumor" puzzle. Virchows Arch. 2016 May;468(5):527-38. [PubMed: 26637199]
9. Richmond AM, La Rosa FG, Said S. Granular cell tumor presenting in the scrotum of a pediatric patient: a case report and review of the literature. J Med Case Rep. 2016 Jun 04;10(1):161. [PMC free article: PMC4893259] [PubMed: 27259474]
10. Collins BM, Jones AC. Multiple granular cell tumors of the oral cavity: report of a case and review of the literature. J Oral Maxillofac Surg. 1995 Jun;53(6):707-11. [PubMed: 7776058]
11. Rose B, Tamvakopoulos GS, Yeung E, Pollock R, Skinner J, Briggs T, Cannon S. Granular cell tumours: a rare entity in the musculoskeletal system. Sarcoma. 2009;2009:765927. [PMC free article: PMC2821775] [PubMed: 20169099]
12. Gündüz Ö, Erkin G, Bilezikçi B, Adanalı G. Slowly Growing Nodule on the Trunk: Cutaneous Granular Cell Tumor. Dermatopathology (Basel). 2016 Apr-Jun;3(2):23-7. [PMC free article: PMC4965530] [PubMed: 27504442]
13. Aoyama K, Kamio T, Hirano A, Seshimo A, Kameoka S. Granular cell tumors: a report of six cases. World J Surg Oncol. 2012 Sep 29;10:204. [PMC free article: PMC3502223] [PubMed: 23021251]
14. Jobrack AD, Goel S, Cotlar AM. Granular Cell Tumor: Report of 13 Cases in a Veterans Administration Hospital. Mil Med. 2018 Sep 01;183(9-10):e589-e593. [PubMed: 29548015]
15. Schrader KA, Nelson TN, De Luca A, Huntsman DG, McGillivray BC. Multiple granular cell tumors are an associated feature of LEOPARD syndrome caused by mutation in PTPN11. Clin Genet. 2009 Feb;75(2):185-9. [PubMed: 19054014]
16. Castagna J, Clerc J, Dupond AS, Laresche C. [Multiple granular cell tumours in a patient with Noonan's syndrome and juvenile myelomonocytic leukaemia]. Ann Dermatol Venereol. 2017 Nov;144(11):705-711. [PubMed: 28728859]
17. Park SH, Lee SH. Noonan syndrome with multiple lentigines with PTPN11 (T468M) gene mutation accompanied with solitary granular cell tumor. J Dermatol. 2017 Nov;44(11):e280-e281. [PubMed: 28681392]
18. Ramaswamy PV, Storm CA, Filiano JJ, Dinulos JG. Multiple granular cell tumors in a child with Noonan syndrome. Pediatr Dermatol. 2010 Mar-Apr;27(2):209-11. [PubMed: 20537083]
19. Moos D, Droitcourt C, Rancherevince D, Marec Berard P, Skowron F. Atypical granular cell tumor occurring in an individual with Noonan syndrome treated with growth hormone. Pediatr Dermatol. 2012 Sep-Oct;29(5):665-6. [PubMed: 22329457]
20. Sidwell RU, Rouse P, Owen RA, Green JS. Granular cell tumor of the scrotum in a child with Noonan syndrome. Pediatr Dermatol. 2008 May-Jun;25(3):341-3. [PubMed: 18577039]
21. Marchese C, Montera M, Torrini M, Goldoni F, Mareni C, Forni M, Locatelli L. Granular cell tumor in a PHTS patient with a novel germline PTEN mutation. Am J Med Genet A. 2003 Jul 15;120A(2):286-8. [PubMed: 12833416]
22. França JA, de Sousa SF, Moreira RG, Bernardes VF, Guimarães LM, Santos JN, Diniz MG, Gomez RS, Gomes CC. Sporadic granular cell tumours lack recurrent mutations in PTPN11, PTEN and other cancer-related genes. J Clin Pathol. 2018 Jan;71(1):93-94. [PubMed: 29097601]
23. Cohen JN, Yeh I, Jordan RC, Wolsky RJ, Horvai AE, McCalmont TH, LeBoit PE. Cutaneous Non-Neural Granular Cell Tumors Harbor Recurrent ALK Gene Fusions. Am J Surg Pathol. 2018 Sep;42(9):1133-1142. [PubMed: 30001233]
24. Pareja F, Brandes AH, Basili T, Selenica P, Geyer FC, Fan D, Da Cruz Paula A, Kumar R, Brown DN, Gularte-Mérida R, Alemar B, Bi R, Lim RS, de Bruijn I, Fujisawa S, Gardner R, Feng E, Li A, da Silva EM, Lozada JR, Blecua P, Cohen-Gould L, Jungbluth AA, Rakha EA, Ellis IO, Edelweiss MIA, Palazzo J, Norton L, Hollmann T, Edelweiss M, Rubin BP, Weigelt B, Reis-Filho JS. Loss-of-function mutations in ATP6AP1 and ATP6AP2 in granular cell tumors. Nat Commun. 2018 Aug 30;9(1):3533. [PMC free article: PMC6117336] [PubMed: 30166553]
25. Sekimizu M, Yoshida A, Mitani S, Asano N, Hirata M, Kubo T, Yamazaki F, Sakamoto H, Kato M, Makise N, Mori T, Yamazaki N, Sekine S, Oda I, Watanabe SI, Hiraga H, Yonemoto T, Kawamoto T, Naka N, Funauchi Y, Nishida Y, Honoki K, Kawano H, Tsuchiya H, Kunisada T, Matsuda K, Inagaki K, Kawai A, Ichikawa H. Frequent mutations of genes encoding vacuolar H+ -ATPase components in granular cell tumors. Genes Chromosomes Cancer. 2019 Jun;58(6):373-380. [PubMed: 30597645]
26. Wei L, Liu S, Conroy J, Wang J, Papanicolau-Sengos A, Glenn ST, Murakami M, Liu L, Hu Q, Conroy J, Miles KM, Nowak DE, Liu B, Qin M, Bshara W, Omilian AR, Head K, Bianchi M, Burgher B, Darlak C, Kane J, Merzianu M, Cheney R, Fabiano A, Salerno K, Talati C, Khushalani NI, Trump DL, Johnson CS, Morrison CD. Whole-genome sequencing of a malignant granular cell tumor with metabolic response to pazopanib. Cold Spring Harb Mol Case Stud. 2015 Oct;1(1):a000380. [PMC free article: PMC4850888] [PubMed: 27148567]
27. Gomes CC, Fonseca-Silva T, Gomez RS. Evidence for loss of heterozygosity (LOH) at chromosomes 9p and 17p in oral granular cell tumors: a pilot study. Oral Surg Oral Med Oral Pathol Oral Radiol. 2013 Feb;115(2):249-53. [PubMed: 23312918]
28. Xu S, Zhao Q, Wei S, Wu Y, Liu J, Shi T, Zhou Q, Chen J. Next Generation Sequencing Uncovers Potential Genetic Driver Mutations of Malignant Pulmonary Granular Cell Tumor. J Thorac Oncol. 2015 Oct;10(10):e106-9. [PubMed: 26398830]
29. Davis R, Deak K, Glass CH. Pulmonary Granular Cell Tumors: A Study of 4 Cases Including a Malignant Phenotype. Am J Surg Pathol. 2019 Oct;43(10):1397-1402. [PubMed: 31180915]
30. Becelli R, Perugini M, Gasparini G, Cassoni A, Fabiani F. Abrikossoff's tumor. J Craniofac Surg. 2001 Jan;12(1):78-81. [PubMed: 11314193]
31. Mirza FN, Tuggle CT, Zogg CK, Mirza HN, Narayan D. Epidemiology of malignant cutaneous granular cell tumors: A US population-based cohort analysis using the Surveillance, Epidemiology, and End Results (SEER) database. J Am Acad Dermatol. 2018 Mar;78(3):490-497.e1. [PMC free article: PMC5815907] [PubMed: 28989104]
32. Tamborini F, Cherubino M, Scamoni S, Valdatta LA. Granular cell tumor of the toe: a case report. Dermatol Res Pract. 2010;2010. [PMC free article: PMC2938433] [PubMed: 20862204]
33. Pushpa G, Karve PP, Subashini K, Narasimhan MN, Ahmad PB. Abrikossoff's Tumor: An Unusual Presentation. Indian J Dermatol. 2013 Sep;58(5):407. [PMC free article: PMC3778800] [PubMed: 24082205]
34. Gross VL, Lynfield Y. Multiple cutaneous granular cell tumors: a case report and review of the literature. Cutis. 2002 May;69(5):343-6. [PubMed: 12041812]
35. Porta N, Mazzitelli R, Cacciotti J, Cirenza M, Labate A, Lo Schiavo MG, Laghi A, Petrozza V, Della Rocca C. A case report of a rare intramuscular granular cell tumor. Diagn Pathol. 2015 Sep 17;10:162. [PMC free article: PMC4573292] [PubMed: 26377191]
36. Hatta J, Yanagihara M, Hasei M, Abe S, Tanabe H, Mochizuki T. Case of multiple cutaneous granular cell tumors. J Dermatol. 2009 Sep;36(9):504-7. [PubMed: 19712278]
37. Barakat M, Kar AA, Pourshahid S, Ainechi S, Lee HJ, Othman M, Tadros M. Gastrointestinal and biliary granular cell tumor: diagnosis and management. Ann Gastroenterol. 2018 Jul-Aug;31(4):439-447. [PMC free article: PMC6033765] [PubMed: 29991888]
38. Epstein DS, Pashaei S, Hunt E, Fitzpatrick JE, Golitz LE. Pustulo-ovoid bodies of Milian in granular cell tumors. J Cutan Pathol. 2007 May;34(5):405-9. [PubMed: 17448196]
39. Battistella M, Cribier B, Feugeas JP, Roux J, Le Pelletier F, Pinquier L, Plantier F; Cutaneous Histopathology Section of the French Society of Dermatology. Vascular invasion and other invasive features in granular cell tumours of the skin: a multicentre study of 119 cases. J Clin Pathol. 2014 Jan;67(1):19-25. [PubMed: 23908453]
40. Fanburg-Smith JC, Meis-Kindblom JM, Fante R, Kindblom LG. Malignant granular cell tumor of soft tissue: diagnostic criteria and clinicopathologic correlation. Am J Surg Pathol. 1998 Jul;22(7):779-94. [PubMed: 9669341]
41. An S, Jang J, Min K, Kim MS, Park H, Park YS, Kim J, Lee JH, Song HJ, Kim KJ, Yu E, Hong SM. Granular cell tumor of the gastrointestinal tract: histologic and immunohistochemical analysis of 98 cases. Hum Pathol. 2015 Jun;46(6):813-9. [PubMed: 25882927]
42. Schoolmeester JK, Lastra RR. Granular cell tumors overexpress TFE3 without corollary gene rearrangement. Hum Pathol. 2015 Aug;46(8):1242-3. [PubMed: 26009539]
43. Maiorano E, Favia G, Napoli A, Resta L, Ricco R, Viale G, Altini M. Cellular heterogeneity of granular cell tumours: a clue to their nature? J Oral Pathol Med. 2000 Jul;29(6):284-90. [PubMed: 10890560]
44. Kanno A, Satoh K, Hirota M, Hamada S, Umino J, Itoh H, Masamune A, Egawa S, Motoi F, Unno M, Ishida K, Shimosegawa T. Granular cell tumor of the pancreas: A case report and review of literature. World J Gastrointest Oncol. 2010 Feb 15;2(2):121-4. [PMC free article: PMC2999167] [PubMed: 21160931]
45. Kapur P, Rakheja D, Balani JP, Roy LC, Amirkhan RH, Hoang MP. Phosphorylated histone H3, Ki-67, p21, fatty acid synthase, and cleaved caspase-3 expression in benign and atypical granular cell tumors. Arch Pathol Lab Med. 2007 Jan;131(1):57-64. [PubMed: 17227124]
46. Ferreira JC, Oton-Leite AF, Guidi R, Mendonça EF. Granular cell tumor mimicking a squamous cell carcinoma of the tongue: a case report. BMC Res Notes. 2017 Jan 03;10(1):14. [PMC free article: PMC5217610] [PubMed: 28057062]
47. Chilukuri S, Peterson SR, Goldberg LH. Granular cell tumor of the heel treated with Mohs technique. Dermatol Surg. 2004 Jul;30(7):1046-9. [PubMed: 15209799]
48. Bamps S, Oyen T, Legius E, Vandenoord J, Stas M. Multiple granular cell tumors in a child with Noonan syndrome. Eur J Pediatr Surg. 2013 Jun;23(3):257-9. [PubMed: 22915371]
49. Brown AC, Audisio RA, Regitnig P. Granular cell tumour of the breast. Surg Oncol. 2011 Jun;20(2):97-105. [PubMed: 20074934]
50. Xu GQ, Chen HT, Xu CF, Teng XD. Esophageal granular cell tumors: report of 9 cases and a literature review. World J Gastroenterol. 2012 Dec 21;18(47):7118-21. [PMC free article: PMC3531704] [PubMed: 23323018]
51. Shrestha B, Khalid M, Gayam V, Mukhtar O, Thapa S, Mandal AK, Kaler J, Khalid M, Garlapati P, Iqbal S, Posner G. Metachronous Granular Cell Tumor of the Descending Colon. Gastroenterology Res. 2018 Aug;11(4):317-320. [PMC free article: PMC6089591] [PubMed: 30116432]
52. Yang SY, Min BS, Kim WR. A Granular Cell Tumor of the Rectum: A Case Report and Review of the Literature. Ann Coloproctol. 2017 Dec;33(6):245-248. [PMC free article: PMC5768480] [PubMed: 29354608]
53. Sohn DK, Choi HS, Chang YS, Huh JM, Kim DH, Kim DY, Kim YH, Chang HJ, Jung KH, Jeong SY. Granular cell tumor of colon: report of a case and review of literature. World J Gastroenterol. 2004 Aug 15;10(16):2452-4. [PMC free article: PMC4576310] [PubMed: 15285042]
54. Yanoma T, Fukuchi M, Sakurai S, Shoji H, Naitoh H, Kuwano H. Granular cell tumor of the esophagus with elevated preoperative serum carbohydrate antigen 19-9: a case report. Int Surg. 2015 Feb;100(2):365-9. [PMC free article: PMC4337455] [PubMed: 25692443]
55. Chen Y, Chen Y, Chen X, Chen L, Liang W. Colonic granular cell tumor: Report of 11 cases and management with review of the literature. Oncol Lett. 2018 Aug;16(2):1419-1424. [PMC free article: PMC6036509] [PubMed: 30008819]
56. van de Loo S, Thunnissen E, Postmus P, van der Waal I. Granular cell tumor of the oral cavity; a case series including a case of metachronous occurrence in the tongue and the lung. Med Oral Patol Oral Cir Bucal. 2015 Jan 01;20(1):e30-3. [PMC free article: PMC4320418] [PubMed: 24880452]
57. Alotaiby FM, Fitzpatrick S, Upadhyaya J, Islam MN, Cohen D, Bhattacharyya I. Demographic, Clinical and Histopathological Features of Oral Neural Neoplasms: A Retrospective Study. Head Neck Pathol. 2019 Jun;13(2):208-214. [PMC free article: PMC6513954] [PubMed: 29931661]
58. Gagliardi F, Spina A, Barzaghi LR, Bailo M, Losa M, Terreni MR, Mortini P. Suprasellar granular cell tumor of the neurohypophysis: surgical outcome of a very rare tumor. Pituitary. 2016 Jun;19(3):277-85. [PubMed: 26753850]
59. Pérez-González YC, Pagura L, Llamas-Velasco M, Cortes-Lambea L, Kutzner H, Requena L. Primary cutaneous malignant granular cell tumor: an immunohistochemical study and review of the literature. Am J Dermatopathol. 2015 Apr;37(4):334-40. [PubMed: 25794371]
60. Paul SP, Osipov V. An unusual granular cell tumour of the buttock and a review of granular cell tumours. Case Rep Dermatol Med. 2013;2013:109308. [PMC free article: PMC3770008] [PubMed: 24066243]
61. Blacksin MF, White LM, Hameed M, Kandel R, Patterson FR, Benevenia J.
Granular cell tumor of the extremity: magnetic resonance imaging characteristics with pathologic correlation. Skeletal Radiol. 2005 Oct;34(10):625-31. [PubMed: 16003548] 62. Hammas N, El Fatemi H, Jayi S, Hafid I, Fikri G, El Houari A, Seqqali N, Tizniti S, Melhouf MA, Amarti A. Granular cell tumor of the breast: a case report. J Med Case Rep. 2014 Dec 26;8:465. [PMC free article: PMC4307888] [PubMed: 25541096] 63. Gavriilidis P, Michalopoulou I, Baliaka A, Nikolaidou A. Granular cell breast tumour mimicking infiltrating carcinoma. BMJ Case Rep. 2013 Feb 18;2013 [PMC free article: PMC3604474] [PubMed: 23420726] 64. Yang WT, Edeiken-Monroe B, Sneige N, Fornage BD. Sonographic and mammographic appearances of granular cell tumors of the breast with pathological correlation. J Clin Ultrasound. 2006 May;34(4):153-60. [PubMed: 16615051] 65. Coates SJ, Mitchell K, Olorunnipa OB, DeSimone RA, Otterburn DM, Simmons RM. An unusual breast lesion: granular cell tumor of the breast with extensive chest wall invasion. J Surg Oncol. 2014 Sep;110(3):345-7. [PubMed: 24863566] 66. Scaranelo AM, Bukhanov K, Crystal P, Mulligan AM, O'Malley FP. Granular cell tumour of the breast: MRI findings and review of the literature. Br J Radiol. 2007 Dec;80(960):970-4. [PubMed: 17940129] 67. Lewis RB, Mehrotra AK, Rodriguez P, Levine MS. From the radiologic pathology archives: esophageal neoplasms: radiologic-pathologic correlation. Radiographics. 2013 Jul-Aug;33(4):1083-108. [PubMed: 23842973] 68. Hwang JS, Beebe KS, Rojas J, Peters SR. Malignant granular cell tumor of the thigh. Orthopedics. 2011 Aug 08;34(8):e428-31. [PubMed: 21815590] 69. Gardner ES, Goldberg LH. Granular cell tumor treated with Mohs micrographic surgery: report of a case and review of the literature. Dermatol Surg. 2001 Aug;27(8):772-4. [PubMed: 11493306] 70. Chen J, Wang L, Xu J, Pan T, Shen J, Hu W, Yuan X. Malignant granular cell tumor with breast metastasis: A case report and review of the literature. Oncol Lett. 2012 Jul;4(1):63-66. [PMC free article: PMC3398379] [PubMed: 22807961] 71. Singh VA, Gunasagaran J, Pailoor J. Granular cell tumour: malignant or benign? Singapore Med J. 2015 Sep;56(9):513-7. [PMC free article: PMC4582131] [PubMed: 26451054] 72. Marchand Crety C, Garbar C, Madelis G, Guillemin F, Soibinet Oudot P, Eymard JC, Servagi Vernat S. Adjuvant radiation therapy for malignant Abrikossoff's tumor: a case report about a femoral triangle localisation. Radiat Oncol. 2018 Jun 20;13(1):115. [PMC free article: PMC6011335] [PubMed: 29925410] 73. Liu TT, Han Y, Zheng S, Li B, Liu YQ, Chen YX, Liu YF, Wang EH. Primary cutaneous malignant granular cell tumor: a case report in China and review of the literature. Diagn Pathol. 2015 Jul 19;10:113. [PMC free article: PMC4506611] [PubMed: 26187381] 74. Katiyar V, Vohra I, Uprety A, Yin W, Gupta S. Recurrent Unresectable Malignant Granular Cell Tumor With Response to Pazopanib. Cureus. 2020 May 26;12(5):e8287. [PMC free article: PMC7317126] [PubMed: 32601562] 75. Morita S, Hiramatsu M, Sugishita M, Gyawali B, Shibata T, Shimokata T, Urakawa H, Mitsuma A, Moritani S, Kubota T, Ichihara S, Ando Y. Pazopanib monotherapy in a patient with a malignant granular cell tumor originating from the right orbit: A case report. Oncol Lett. 2015 Aug;10(2):972-974. [PMC free article: PMC4509081] [PubMed: 26622607] 76. Chen WS, Zheng XL, Jin L, Pan XJ, Ye MF. Novel diagnosis and treatment of esophageal granular cell tumor: report of 14 cases and review of the literature. Ann Thorac Surg. 2014 Jan;97(1):296-302. 
[PubMed: 24140217] 77. Lu W, Xu MD, Zhou PH, Zhang YQ, Chen WF, Zhong YS, Yao LQ. Endoscopic submucosal dissection of esophageal granular cell tumor. World J Surg Oncol. 2014 Jul 17;12:221. [PMC free article: PMC4126351] [PubMed: 25030028] 78. Cha JM, Lee JI, Joo KR, Choe JW, Jung SW, Shin HP, Lim SJ. Granular cell tumor of the descending colon treated by endoscopic mucosal resection: a case report and review of the literature. J Korean Med Sci. 2009 Apr;24(2):337-41. [PMC free article: PMC2672140] [PubMed: 19399282] 79. Znati K, Harmouch T, Benlemlih A, Elfatemi H, Chbani L, Amarti A. Solitary granular cell tumor of cecum: a case report. ISRN Gastroenterol. 2011;2011:943804. [PMC free article: PMC3168574] [PubMed: 21991536] 80. Cardis MA, Ni J, Bhawan J. Granular cell differentiation: A review of the published work. J Dermatol. 2017 Mar;44(3):251-258. [PubMed: 28256763] 81. Qureshi NA, Tahir M, Carmichael AR. Granular cell tumour of the soft tissues: a case report and literature review. Int Semin Surg Oncol. 2006 Aug 24;3:21. [PMC free article: PMC1560141] [PubMed: 16930486] 82. Thacker MM, Humble SD, Mounasamy V, Temple HT, Scully SP. Case report. Granular cell tumors of extremities: comparison of benign and malignant variants. Clin Orthop Relat Res. 2007 Feb;455:267-73. [PubMed: 16936589] 83. Meissner M, Wolter M, Schöfer H, Kaufmann R. A solid erythematous tumour. Granular cell tumour (GCT). Clin Exp Dermatol. 2010 Apr;35(3):e44-5. [PubMed: 20500174] 84. Moten AS, Zhao H, Wu H, Farma JM. Malignant granular cell tumor: Clinical features and long-term survival. J Surg Oncol. 2018 Nov;118(6):891-897. [PubMed: 30196562] Disclosure:Daniel Neelon declares no relevant financial relationships with ineligible companies. Disclosure:Ford Lannan declares no relevant financial relationships with ineligible companies. Disclosure:John Childs declares no relevant financial relationships with ineligible companies. Continuing Education Activity Introduction Etiology Epidemiology Histopathology History and Physical Evaluation Treatment / Management Differential Diagnosis Staging Prognosis Complications Deterrence and Patient Education Pearls and Other Issues Enhancing Healthcare Team Outcomes Review Questions References Copyright © 2025, StatPearls Publishing LLC. This book is distributed under the terms of the Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International (CC BY-NC-ND 4.0) ( which permits others to distribute the work, provided that the article is not altered or used commercially. You are not required to obtain permission to distribute this article, provided that you credit the author and journal. Bookshelf ID: NBK563150 PMID: 33085297 Share on Facebook Share on Twitter Views PubReader Print View Cite this Page In this Page Continuing Education Activity Introduction Etiology Epidemiology Histopathology History and Physical Evaluation Treatment / Management Differential Diagnosis Staging Prognosis Complications Deterrence and Patient Education Pearls and Other Issues Enhancing Healthcare Team Outcomes Review Questions References Related information PMCPubMed Central citations PubMedLinks to PubMed Similar articles in PubMed A case report of a rare intramuscular granular cell tumor.[Diagn Pathol. 2015]A case report of a rare intramuscular granular cell tumor.Porta N, Mazzitelli R, Cacciotti J, Cirenza M, Labate A, Lo Schiavo MG, Laghi A, Petrozza V, Della Rocca C. Diagn Pathol. 2015 Sep 17; 10:162. Epub 2015 Sep 17. [Childhood cutaneous Abrikossoff tumor].[Arch Pediatr. 
2011][Childhood cutaneous Abrikossoff tumor].Lahmam Bennani Z, Boussofara L, Saidi W, Bayou F, Ghariani N, Belajouza C, Sriha B, Denguezli M, Nouira R. Arch Pediatr. 2011 Jul; 18(7):778-82. Epub 2011 May 19. Case report: Abrikossoff's tumor of the facial skin.[Front Med (Lausanne). 2023]Case report: Abrikossoff's tumor of the facial skin.Ardeleanu V, Jecan RC, Moroianu M, Teodoreanu RN, Tebeica T, Moroianu LA, Bujoreanu FC, Nwabudike LC, Tatu AL. Front Med (Lausanne). 2023; 10:1149735. Epub 2023 May 31. Review [Granular cell tumors of the breast].[Ann Pathol. 1994]Review [Granular cell tumors of the breast].Boulat J, Mathoulin MP, Vacheret H, Andrac L, Habib MC, Pellissier JF, Piana L, Charpin C. Ann Pathol. 1994; 14(2):93-100. Review Granular cell tumor of the heel treated with Mohs technique.[Dermatol Surg. 2004]Review Granular cell tumor of the heel treated with Mohs technique.Chilukuri S, Peterson SR, Goldberg LH. Dermatol Surg. 2004 Jul; 30(7):1046-9. See reviews...See all... Recent Activity Clear)Turn Off)Turn On) Granular Cell Tumor - StatPearlsGranular Cell Tumor - StatPearls Your browsing activity is empty. Activity recording is turned off. Turn recording back on) See more... Follow NCBI Connect with NLM National Library of Medicine 8600 Rockville Pike Bethesda, MD 20894 Web Policies FOIA HHS Vulnerability Disclosure Help Accessibility Careers NLM NIH HHS USA.gov PreferencesTurn off External link. Please review our privacy policy. Cite this Page Close Neelon D, Lannan F, Childs J. Granular Cell Tumor. [Updated 2023 Jul 3]. In: StatPearls [Internet]. Treasure Island (FL): StatPearls Publishing; 2025 Jan-. Available from: Making content easier to read in Bookshelf Close We are experimenting with display styles that make it easier to read books and documents in Bookshelf. Our first effort uses ebook readers, which have several "ease of reading" features already built in. The content is best viewed in the iBooks reader. You may notice problems with the display of some features of books or documents in other eReaders. Cancel Download Share Share on Facebook Share on Twitter URL
2485
https://www.cs.rpi.edu/~slotag/classes/SP21/notes/lec26.pdf
26.1 Face Coloring and Hamiltonian Cycles
A face coloring is the coloring of each face of a planar graph. A face coloring is proper if each pair of faces that share an edge have differing colors. This is equivalent to the map coloring problem we discussed with regards to planar graph vertex coloring and the 4/5-color theorems. As such, we can consider face coloring of a planar graph G as equivalent to vertex coloring of its dual graph G*. Similarly, just as we used triangulations in our 4/5-color theorems, if we can show that all dual graphs of triangulations are 4-face-colorable, we equivalently show that all triangulations are 4-colorable. As all planar graphs are subgraphs of some triangulation, this would effectively give us a proof of the 4-color theorem.
Tait's Theorem states that a simple 2-edge-connected 3-regular plane graph is 3-edge-colorable if and only if it is 4-face-colorable. Such a 3-edge-coloring of a 3-regular graph is referred to as a Tait coloring. Showing that every 2-edge-connected 3-regular planar graph is 3-edge-colorable can be reduced to showing that every 3-connected 3-regular planar graph is 3-edge-colorable. Thus, the 4-color theorem reduces to finding Tait colorings of 3-connected 3-regular planar graphs. The statement of the existence of such colorings was referred to as Tait's Conjecture.
We can now consider the similarities between the existence of a Hamiltonian cycle in a graph and its colorability. First, note that every Hamiltonian 3-regular graph has a Tait coloring. While it was first assumed by Tait that every 3-connected 3-regular planar graph had a Hamiltonian cycle (thus proving the 4-color theorem), this is not the case. Grinberg's Theorem gives a necessary condition for the existence of a Hamiltonian cycle in such graphs: if $G$ is a loopless plane graph with Hamiltonian cycle $C$, and $G$ has $f'_i$ faces of length $i$ inside $C$ and $f''_i$ faces of length $i$ outside $C$, then $\sum_i (i-2)(f'_i - f''_i) = 0$. This gives us another necessary (but not sufficient) condition for a class of Hamiltonian graphs. Consider applying this condition to the below graph (figure not reproduced here), which is the smallest known non-Hamiltonian 3-connected 3-regular planar graph.
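Since the face data of that graph is not reproduced here, the following Python sketch (ours, with hypothetical face-count inputs) shows how Grinberg's condition would be evaluated, using the cube graph as a small sanity check:

# Minimal sketch: evaluate Grinberg's necessary condition.
# f_in / f_out map each face length i to the number of faces of that
# length inside / outside the proposed Hamiltonian cycle C
# (hypothetical inputs -- in practice they come from a plane embedding).

def grinberg_sum(f_in, f_out):
    """Return sum_i (i - 2) * (f'_i - f''_i); it must be 0 if C is a
    Hamiltonian cycle of the plane graph."""
    lengths = set(f_in) | set(f_out)
    return sum((i - 2) * (f_in.get(i, 0) - f_out.get(i, 0)) for i in lengths)

# Cube graph: all 6 faces are 4-cycles; any Hamiltonian cycle has
# 3 faces inside and 3 outside, so the condition holds.
print(grinberg_sum({4: 3}, {4: 3}))   # 0
# A partition with 2 faces inside and 4 outside fails the condition,
# so no Hamiltonian cycle of the cube induces that partition.
print(grinberg_sum({4: 2}, {4: 4}))   # -4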
2486
https://en.wikipedia.org/wiki/Incidence_(geometry)
Incidence (geometry)
From Wikipedia, the free encyclopedia
In geometry, an incidence relation is a heterogeneous relation that captures the idea being expressed when phrases such as "a point lies on a line" or "a line is contained in a plane" are used. The most basic incidence relation is that between a point, P, and a line, l, sometimes denoted P I l. If P and l are incident, P I l, the pair (P, l) is called a flag.
There are many expressions used in common language to describe incidence (for example, a line passes through a point, a point lies in a plane, etc.) but the term "incidence" is preferred because it does not have the additional connotations that these other terms have, and it can be used in a symmetric manner. Statements such as "line $l_1$ intersects line $l_2$" are also statements about incidence relations, but in this case, it is because this is a shorthand way of saying that "there exists a point P that is incident with both line $l_1$ and line $l_2$". When one type of object can be thought of as a set of the other type of object (viz., a plane is a set of points) then an incidence relation may be viewed as containment.
Statements such as "any two lines in a plane meet" are called incidence propositions. This particular statement is true in a projective plane, though not true in the Euclidean plane, where lines may be parallel. Historically, projective geometry was developed in order to make the propositions of incidence true without exceptions, such as those caused by the existence of parallels. From the point of view of synthetic geometry, projective geometry should be developed using such propositions as axioms. This is most significant for projective planes due to the universal validity of Desargues' theorem in higher dimensions. In contrast, the analytic approach is to define projective space based on linear algebra and utilizing homogeneous coordinates. The propositions of incidence are derived from the following basic result on vector spaces: given subspaces U and W of a (finite-dimensional) vector space V, the dimension of their intersection is $\dim U + \dim W - \dim(U + W)$.
Bearing in mind that the geometric dimension of the projective space P(V) associated to V is dim V − 1 and that the geometric dimension of any subspace is positive, the basic proposition of incidence in this setting can take the form: linear subspaces L and M of projective space P meet provided dim L + dim M ≥ dim P.
The following sections are limited to projective planes defined over fields, often denoted by PG(2, F), where F is a field, or $P^2_F$. However these computations can be naturally extended to higher-dimensional projective spaces, and the field may be replaced by a division ring (or skewfield) provided that one pays attention to the fact that multiplication is not commutative in that case.
PG(2,F)
Main article: Homogeneous coordinates
Let V be the three-dimensional vector space defined over the field F. The projective plane P(V) = PG(2, F) consists of the one-dimensional vector subspaces of V, called points, and the two-dimensional vector subspaces of V, called lines. Incidence of a point and a line is given by containment of the one-dimensional subspace in the two-dimensional subspace.
Fix a basis for V so that we may describe its vectors as coordinate triples (with respect to that basis). A one-dimensional vector subspace consists of a non-zero vector and all of its scalar multiples. The non-zero scalar multiples, written as coordinate triples, are the homogeneous coordinates of the given point, called point coordinates. With respect to this basis, the solution space of a single linear equation $\{(x, y, z) \mid ax + by + cz = 0\}$ is a two-dimensional subspace of V, and hence a line of P(V). This line may be denoted by line coordinates $[a, b, c]$, which are also homogeneous coordinates since non-zero scalar multiples would give the same line.
Other notations are also widely used. Point coordinates may be written as column vectors, $(x, y, z)^{\mathsf{T}}$, with colons, $(x : y : z)$, or with a subscript, $(x, y, z)_P$. Correspondingly, line coordinates may be written as row vectors, $(a, b, c)$, with colons, $[a : b : c]$, or with a subscript, $(a, b, c)_L$. Other variations are also possible.
Incidence expressed algebraically
Given a point $P = (x, y, z)$ and a line $l = [a, b, c]$, written in terms of point and line coordinates, the point is incident with the line (often written as P I l) if and only if $ax + by + cz = 0$. This can be expressed in other notations as:
$$ax + by + cz = [a, b, c] \cdot (x, y, z) = (a, b, c)_L \cdot (x, y, z)_P = [a : b : c] \cdot (x : y : z) = (a, b, c) \begin{pmatrix} x \\ y \\ z \end{pmatrix} = 0.$$
No matter what notation is employed, when the homogeneous coordinates of the point and line are just considered as ordered triples, their incidence is expressed as having their dot product equal 0.
The line incident with a pair of distinct points
Let $P_1$ and $P_2$ be a pair of distinct points with homogeneous coordinates $(x_1, y_1, z_1)$ and $(x_2, y_2, z_2)$ respectively. These points determine a unique line $l$ with an equation of the form $ax + by + cz = 0$ and must satisfy the equations:
$$ax_1 + by_1 + cz_1 = 0 \quad\text{and}\quad ax_2 + by_2 + cz_2 = 0.$$
In matrix form this system of simultaneous linear equations can be expressed as:
$$\begin{pmatrix} x & y & z \\ x_1 & y_1 & z_1 \\ x_2 & y_2 & z_2 \end{pmatrix} \begin{pmatrix} a \\ b \\ c \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \\ 0 \end{pmatrix}.$$
This system has a nontrivial solution if and only if the determinant
$$\begin{vmatrix} x & y & z \\ x_1 & y_1 & z_1 \\ x_2 & y_2 & z_2 \end{vmatrix} = 0.$$
Expansion of this determinantal equation produces a homogeneous linear equation, which must be the equation of line $l$. Therefore, up to a common non-zero constant factor, $l = [a, b, c]$ where:
$$a = y_1 z_2 - y_2 z_1, \quad b = x_2 z_1 - x_1 z_2, \quad c = x_1 y_2 - x_2 y_1.$$
In terms of the scalar triple product notation for vectors, the equation of this line may be written as $P \cdot P_1 \times P_2 = 0$, where $P = (x, y, z)$ is a generic point.
Collinearity
Main article: Collinear
Points that are incident with the same line are said to be collinear. The set of all points incident with the same line is called a range. If $P_1 = (x_1, y_1, z_1)$, $P_2 = (x_2, y_2, z_2)$, and $P_3 = (x_3, y_3, z_3)$, then these points are collinear if and only if
$$\begin{vmatrix} x_1 & y_1 & z_1 \\ x_2 & y_2 & z_2 \\ x_3 & y_3 & z_3 \end{vmatrix} = 0,$$
i.e., if and only if the determinant of the homogeneous coordinates of the points is equal to zero.
Intersection of a pair of lines
Main article: Line–line intersection
Let $l_1 = [a_1, b_1, c_1]$ and $l_2 = [a_2, b_2, c_2]$ be a pair of distinct lines. Then the intersection of lines $l_1$ and $l_2$ is the point $P = (x_0, y_0, z_0)$ that is the simultaneous solution (up to a scalar factor) of the system of linear equations:
$$a_1 x + b_1 y + c_1 z = 0 \quad\text{and}\quad a_2 x + b_2 y + c_2 z = 0.$$
The solution of this system gives:
$$x_0 = b_1 c_2 - b_2 c_1, \quad y_0 = a_2 c_1 - a_1 c_2, \quad z_0 = a_1 b_2 - a_2 b_1.$$
Alternatively, consider another line $l = [a, b, c]$ passing through the point $P$; that is, the homogeneous coordinates of $P$ satisfy the equation:
$$ax + by + cz = 0.$$
Combining this equation with the two that define $P$, we can seek a non-trivial solution of the matrix equation:
$$\begin{pmatrix} a & b & c \\ a_1 & b_1 & c_1 \\ a_2 & b_2 & c_2 \end{pmatrix} \begin{pmatrix} x \\ y \\ z \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \\ 0 \end{pmatrix}.$$
Such a solution exists provided the determinant
$$\begin{vmatrix} a & b & c \\ a_1 & b_1 & c_1 \\ a_2 & b_2 & c_2 \end{vmatrix} = 0.$$
The coefficients of $a$, $b$, and $c$ in this equation give the homogeneous coordinates of $P$. The equation of the generic line passing through the point $P$ in scalar triple product notation is $l \cdot l_1 \times l_2 = 0$.
Concurrence
Lines that meet at the same point are said to be concurrent. The set of all lines in a plane incident with the same point is called a pencil of lines centered at that point. The computation of the intersection of two lines shows that the entire pencil of lines centered at a point is determined by any two of the lines that intersect at that point.
It immediately follows that the algebraic condition for three lines, $[a_1, b_1, c_1]$, $[a_2, b_2, c_2]$, $[a_3, b_3, c_3]$, to be concurrent is that the determinant
$$\begin{vmatrix} a_1 & b_1 & c_1 \\ a_2 & b_2 & c_2 \\ a_3 & b_3 & c_3 \end{vmatrix} = 0.$$
See also
Menelaus theorem, Ceva's theorem, Concyclic, Hopcroft's problem of finding point–line incidences, Incidence matrix, Incidence algebra, Incidence structure, Incidence geometry, Levi graph, Hilbert's axioms
References
Joel G. Broida & S. Gill Williamson (1998) A Comprehensive Introduction to Linear Algebra, Theorem 2.11, p. 86, Addison-Wesley. ISBN 0-201-50065-5. (The theorem says that dim(L + M) = dim L + dim M − dim(L ∩ M). Thus dim L + dim M > dim P implies dim(L ∩ M) > 0.)
Harold L. Dorwart (1966) The Geometry of Incidence, Prentice Hall.
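To tie the preceding sections together, the join, meet, incidence, collinearity, and concurrence computations can all be collected in a few lines of code. The following Python sketch is illustrative only (the function names are ours, and integer homogeneous coordinates are used for simplicity; any field works):

# Sketch of the homogeneous-coordinate computations above.

def cross(u, v):
    """Join of two points, or meet of two lines: the cross product."""
    return (u[1]*v[2] - u[2]*v[1],
            u[2]*v[0] - u[0]*v[2],
            u[0]*v[1] - u[1]*v[0])

def incident(p, l):
    """P I l holds exactly when ax + by + cz = 0 (dot product is zero)."""
    return sum(pi*li for pi, li in zip(p, l)) == 0

def det3(r1, r2, r3):
    """3x3 determinant; zero means collinear points / concurrent lines."""
    return sum(r1[i] * (r2[(i+1) % 3]*r3[(i+2) % 3]
                        - r2[(i+2) % 3]*r3[(i+1) % 3]) for i in range(3))

P1, P2 = (1, 0, 1), (0, 1, 1)
l = cross(P1, P2)                      # line through P1 and P2
assert incident(P1, l) and incident(P2, l)
assert det3(P1, P2, (1, 1, 2)) == 0    # P1 + P2 lies on the same line

l1, l2 = (1, 0, 0), (0, 1, 0)          # the lines x = 0 and y = 0
assert cross(l1, l2) == (0, 0, 1)      # they meet at the point (0 : 0 : 1)
assert det3(l1, l2, (1, 1, 0)) == 0    # x + y = 0 passes through it too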
2487
https://oeis.org/A009287/internal
A009287 (OEIS)
A009287: a(1) = 3; thereafter a(n+1) = least k with a(n) divisors.
%I #53 Aug 17 2025 02:52:03
%S 3,4,6,12,60,5040,293318625600,
%T 670059168204585168371476438927421112933837297640990904154667968000000000000
%N a(1) = 3; thereafter a(n+1) = least k with a(n) divisors.
%C The sequence must start with 3, since a(1)=1 or a(1)=2 would lead to a constant sequence. - M. F. Hasler, Sep 02 2008
%C The calculation of a(7) and a(8) is based upon the method in A037019 (which, apparently, is the method previously used by the authors of A009287). So a(7) and a(8) are correct unless n=a(6)=5040 or n=a(7)=293318625600 are "exceptional" as described in A037019. - Rick L. Shepherd, Aug 17 2006
%C a(7) is correct because 5040 is not exceptional (see A072066). - T. D. Noe, Sep 02 2008
%C Terms from a(2) to a(7) are highly composite (that is, found in A002182), but a(8) is not. - Ivan Neretin, Mar 28 2015 [Equivalently, the first 6 terms are in A002183, but a(7) is not. Note that the smallest number with at least a(7) divisors is A002182(695) ~ 1.77 10^59 with 293534171136 divisors, which is much smaller than a(8) ~ 6.70 10^75. - Jianing Song, Jul 15 2021]
%C Grime reported that Ramanujan unfortunately missed a(7) with 5040 divisors. - Frank Ellermann, Mar 12 2020
%C It is possible to prepend 2 to this sequence as follows. a(0) = 2; for n > 0, a(n) = the smallest natural number greater than a(n-1) with a(n-1) divisors. - Hal M. Switkay, Jul 03 2022
%D Amarnath Murthy, Pouring a few more drops in the ocean of Smarandache Sequences and Conjectures (to be published in the Smarandache Notions Journal) [Note: this author submitted two erroneous versions of this sequence to the OEIS, A036460 and A061080, entries which contained invalid conjectures.]
%H Jason Earls, A note on the Smarandache divisors of divisors sequence and two similar sequences, in Smarandache Notions Journal (2004), Vol. 14.1, page 274.
%H James Grime and Brady Haran, Infinite Anti-Primes, Numberphile video (2016).
%F a(n) = A005179(a(n-1)).
%e 5040 is the smallest number with 60 divisors.
%t f[n_] := Block[{k = 3, s = (Times @@ (Prime[Range[Length@ #]]^Reverse[# - 1])) & @ Flatten[FactorInteger[#] /. {a_Integer, b_} :> Table[a, {b}]] & /@ Range@ 10000}, Reap@ Do[Sow[k = s], {n}] // Flatten // Rest]; f@ 6 ( Michael De Vlieger, Mar 28 2015, after Wouter Meeussen at A037019 )
%Y Coincides with A251483 for 1 <= n <= 7 (only).
%Y Cf. A000005, A005179, A037019, A251483.
%K nonn
%O 1,1
%A David W. Wilson and James Kilfiger (jamesk(AT)maths.warwick.ac.uk)
%E Entry revised by N. J. A. Sloane, Aug 25 2006
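For illustration (this sketch is ours, not part of the OEIS entry), the recurrence can be checked by brute force for the first few terms; the later terms are astronomically large and require the constructive method of A037019/A005179:

# Brute-force check of a(n+1) = least k with a(n) divisors.
# Only practical for the first few terms.

def num_divisors(k):
    d, i = 0, 1
    while i * i <= k:
        if k % i == 0:
            d += 1 if i * i == k else 2   # count i and its cofactor
        i += 1
    return d

def least_with_divisors(target):
    k = 1
    while num_divisors(k) != target:
        k += 1
    return k

a = [3]
for _ in range(5):
    a.append(least_with_divisors(a[-1]))
print(a)   # [3, 4, 6, 12, 60, 5040]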
2488
https://www.themathpage.com/Arith/prime-numbers.htm
Skill in Arithmetic
Lesson 32
PRIME NUMBERS and PRIME FACTORIZATION
In this Lesson, we will address the following:
What does it mean to say that a smaller number is a proper divisor of a larger number?
What is a prime number?
What is a number called if it is not prime?
Square roots
Divisors
Prime divisors
Prime factorization
Square factors
Is there a last prime?
A natural number is a collection of indivisible units. 1 is the source of every natural number. Every natural number is a multiple—the repeated addition—of 1. By a number in what follows, we will mean a natural number.
Now, many numbers are multiples of numbers other than 1. 12, for example, is a multiple of 1, 2, 3, 4, and 6. 12 is divisible—it could be divided equally—into any of those. We say that 1, 2, 3, 4, and 6 are the proper divisors of 12.
1. What does it mean to say that a smaller number is a proper divisor of a larger number?
It means that the larger number is composed of—is a multiple of—the smaller number. The proper divisors do not include the number itself.
Now, 1 is a proper divisor of every number. Certain numbers, however, have 1 as their only proper divisor. For example, 7. 7 is composed only of 1's. We say therefore that 7 is a prime number.
2. What is a prime number?
A prime number has 1 as its only proper divisor.
10 is not a prime number, because it is divisible into 2's and into 5's. We call 10 a composite number. 10 can be composed of numbers other than 1.
Problem 1. Is 1 itself a prime number?
No. 1 has no proper divisors. It is not composed of other numbers. 1 is the measure. It cannot be measured.
Problem 2. Write the first ten prime numbers.
2, 3, 5, 7, 11, 13, 17, 19, 23, 29.
With the exception of 2, then—which is the only even prime—a prime number is a kind of odd number. Thus every number other than 1 is either prime or composite. What is more:
Every composite number is a multiple of some prime number. Equivalently, every composite number has at least one prime number as a divisor. (Euclid, VII. 31.)
As we have seen, 10 has the prime divisors 2 and 5. The composite number 12 is divisible by the prime number 3.
We will now be looking for the prime divisors of a number. And we will see that it will be necessary to look only up to the square root of the number.
Square roots
When we multiply a number by itself, we say that we have "squared" the number. The square of 5 is 5 × 5 = 25. We then say that 5 is the square root of 25. The square numbers are the numbers we get by squaring a number: 1, 4, 9, 16, 25, and so on. 50, for example, is not a square number, therefore it does not have an exact square root. Its square root, however, is between 7 and 8. For, 7 × 7 = 49, while 8 × 8 = 64.
Problem 4. The square root of 175 falls between which two numbers?
13 and 14. 13 squared is 169. 14 squared is 196.
Divisors
We can always find the divisors of a number in pairs. One member of the pair will be less than the square root, and the other will be more. (If the number is a square number, then its square root will be its own partner.) For example, here are the pairs of divisors of 24:
1 and 24. (Because 1 × 24 = 24.)
2 and 12. (Because 2 × 12 = 24.)
3 and 8. (Because 3 × 8 = 24.)
4 and 6. (Because 4 × 6 = 24.)
Each number on the left is less than the square root of 24, and each number on the right is more.
The point is: When we look for divisors of a number, it is necessary to look only up to its square root. When we find a divisor less than the square root, we will have found its partner, which is more.
Problem 5. If we are looking for the divisors of 157, up to what number must we look?
12. Because 12 squared is 144, while 13 squared is 169.
Prime divisors
Example 1. 63 is a multiple of which prime numbers?
Answer. Here again are the first few prime numbers: 2, 3, 5, 7, 11, 13, 17. We must test whether 2 is a divisor, or 3, or 5, and so on. But we need test only up to 7, because 11 is more than the square root of 63.
Is 2 a divisor of 63? No, it is not. How do we know? Because 63 is not an even number. And how do we know that? Because even numbers end in 0, 2, 4, 6, or 8.
Next, is 3 a divisor of 63? There is a test for divisibility by 3, and it is as follows: If the sum of the digits is divisible by 3, then the number is divisible by 3. The sum of the digits of 63 is 6 + 3 = 9, which is divisible by 3. That tells us that 63 is divisible by 3.
Next, is 63 divisible by 5? There is a simple test for divisibility by 5: The number ends in either 0 or 5. 63 does not end in 0 or 5. Therefore, 63 is not divisible by 5.
Finally, is 63 divisible by 7? Yes, it is. 63 is equal to nine 7's.
63, then, is a multiple of the prime numbers 3 and 7.
Example 2. 78 is a multiple of which prime numbers?
Answer. 78 is even. Therefore, it is a multiple of 2: 78 = 39 × 2. Now, 39 is composite. Therefore it is a multiple of some prime: 39 = 3 × 13. 39 is thus composed of 3's and 13's. And since 78 is composed of 39's, then 78 is also composed of 3's and 13's. 78 therefore has three prime divisors: 2, 3, and 13. 78 is a multiple of each one.
Problem 6. What is the smallest prime that is a divisor of the following?
a) 231. 3. The sum of the digits is divisible by 3.
b) 3,165. 3.
c) 3,265. 5.
d) 91. 7.
e) 121. 11.
Prime factorization
When numbers are multiplied, they are called factors. Since 30 = 2 × 15, we say that 2 and 15 are factors of 30. Now, 2 is a prime factor but 15 is not. However, 15 = 3 × 5. Therefore we can express 30 as a product of prime factors only: 30 = 2 × 3 × 5. "2 × 3 × 5" is called the prime factorization of 30. And it is unique. That is, apart from the order of the factors:
Every composite number can be uniquely factored as a product of prime numbers only.
Note: We could have found those same factors by factoring 30 in any way. For example, 30 = 5 × 6. And since 6 = 2 × 3, 30 = 5 × 2 × 3. Apart from the order, we have found the same prime factors.
Example 3. Find the prime factorization of 102.
Solution. When we write the factors of a number, each prime divisor will appear as a factor. 2 is obviously a prime factor. Its partner will be half of 102, which is 51. (Lesson 16.)
102 = 2 × 51.
Now, since the sum of the digits of 51 is divisible by 3, we know that 51 has a prime factor 3. 3 times what number is equal to 51? On mentally decomposing 51 into 30 + 21:
51/3 = (30 + 21)/3 = 10 + 7 = 17.
51 = 3 × 17. And 17 is a prime. Therefore,
102 = 2 × 3 × 17.
That is the prime factorization of 102.
Problem 7. Which of these numbers is prime and which is composite? If a number is composite, write its prime factorization.
a) 29. Prime.
b) 50. 2 × 5 × 5.
c) 73. Prime.
d) 32. 2 × 2 × 2 × 2 × 2.
e) 60. 2 × 2 × 3 × 5.
f) 135. 3 × 3 × 3 × 5.
g) 137. Prime.
h) 143. 11 × 13.
i) 169. 13 × 13.
j) 360. 2 × 2 × 2 × 3 × 3 × 5.
k) 450. 2 × 5 × 5 × 3 × 3.
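The procedure used in these examples (testing candidate divisors only up to the square root, and dividing out each prime as it is found) can be written as a short program. The following Python sketch is ours, not part of the lesson:

# Trial division, looking only up to the square root of what remains.

def prime_factorization(n):
    factors = []
    p = 2
    while p * p <= n:          # only look up to the square root
        while n % p == 0:
            factors.append(p)
            n //= p
        p += 1
    if n > 1:                  # whatever remains must itself be prime
        factors.append(n)
    return factors

print(prime_factorization(102))   # [2, 3, 17], as in Example 3
print(prime_factorization(78))    # [2, 3, 13], as in Example 2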
Square factors
We sometimes want to know whether a number has a square number as a factor. 50, for example, has the square factor 25: 50 = 25 × 2. But for a less familiar number, such as 60, we can discover whether or not it has square factors by writing its prime factorization. We could proceed as follows:
60 = 2 × 30 = 2 × 2 × 15 = 2 × 2 × 3 × 5.
When a prime appears twice, that product is a square number. 60, then, has one square factor, namely 2 × 2 = 4. 60 = 4 × 15.
Example 4. Does 180 have any square factors?
Solution. 180 = 2 × 90 = 2 × 2 × 45 = 2 × 2 × 9 × 5 = 2 × 2 × 3 × 3 × 5.
180, then, has two square factors: 2 × 2 = 4 and 3 × 3 = 9. But 4 × 9 is itself a square number—36. For,
A product of square numbers is itself a square number.
2 × 2 × 3 × 3 = 2 × 3 × 2 × 3 = 6 × 6.
Problem 8. Find the square factors of each number by writing its prime factorization.
a) 112 = 2 × 2 × 2 × 2 × 7 = 16 × 7.
b) 450 = 3 × 3 × 5 × 5 × 2 = 3 × 5 × 3 × 5 = 225 × 2.
c) 153 = 3 × 51 = 3 × 3 × 17 = 9 × 17.
d) 294 = 2 × 147 = 2 × 3 × 49 = 49 × 6.
e) 1225 = 25 × 49. 1225 is itself a square number. It is the square of 5 × 7 = 35.
Is there a last prime?
As numbers get larger, it becomes more likely that they will have a divisor and be composite. So there might in fact be a last prime. Now, if there were a last prime, then we could imagine a list that contains every prime up to and including the last one. We will now prove that there is no such list, which is to say:
There is no last prime.
Here is the theorem:
There are more prime numbers than in any given list of them. (Euclid, IX. 20.)
Let the following, then, be any list of prime numbers:
2, 3, 5, 7, 11, . . . , P.
Now construct the number N which will be the product of every prime on that list:
N = 2 × 3 × 5 × 7 × 11 × . . . × P.
Every prime on the list is thus a divisor of N. Add 1 to N:
N + 1 = (2 × 3 × 5 × 7 × 11 × . . . × P) + 1.
The first thing to note is that N + 1 is not on the list, because it is greater than every number on the list. Now, N + 1 is either prime or composite. If it is prime, then we have found a prime that is not on the list, and the theorem is proved. If N + 1 is composite, then it has a prime factor p. But p is not one of the primes on the list. For if it were, then p would be a divisor of both N + 1 and N. That would imply that p divides their difference (Lesson 11), namely 1—which is absurd. Therefore, if N + 1 is composite, then there is a prime p that is a divisor of N + 1 but not a divisor of N, which is to say, p is not a prime on the list.
Therefore, there are more prime numbers than in any given list of them. Which is what we wanted to prove.
A modern enunciation of this theorem is: The number of primes is infinite. Euclid thus teaches us what we mean, or rather what we should mean, when we say that a collection of numbers is infinite. It means: "No list of them is complete; there is no limit to those that we could name." We are thus referring to something that we could actually be aware of; namely, a list. It does not refer to something that we cannot be aware of: a list that never ends. What would it even mean to say that a list that never ends "exists"? It certainly does not exist in this world. In quantum mechanics, it is not even possible to say that an electron exists until it is observed.
A famous problem in mathematics is the twin prime conjecture. It states that there are infinitely many pairs of primes that differ only by 2. For example, 5 and 7, 17 and 19, 41 and 43. The conjecture has never been proved.
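Before moving on, Euclid's construction above can also be carried out concretely: multiply the primes on a list, add 1, and take the smallest prime factor of the result; that factor cannot be on the list. A small Python illustration (ours, not part of the lesson), using the list 2, 3, 5, 7, 11, 13. Note that N + 1 need not itself be prime: here N + 1 = 30031 = 59 × 509, yet both factors are new primes.

# Euclid's construction: N + 1 has a prime factor not on the list.

def smallest_prime_factor(n):
    p = 2
    while p * p <= n:
        if n % p == 0:
            return p
        p += 1
    return n                   # n itself is prime

primes = [2, 3, 5, 7, 11, 13]
N = 1
for p in primes:
    N *= p                     # N = 30030
print(smallest_prime_factor(N + 1))   # 59 -- a prime not on the list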
What about prime triples, which are three primes that differ only by 2? For example, 3, 5, 7. What do you think? Are there many—perhaps even an infinite number—of such triples? Or is 3, 5, 7 the only one?
3, 5, 7 is the only such triple. Because in every sequence of three consecutive odd numbers, at least one of them is a multiple of 3. If the first is a multiple of 3, then the proposition is proved; for example, 21, 23, 25. 21 is not a prime. If the first is 1 more than a multiple of 3, then, on adding 2, the next will be a multiple of 3; for example, 25, 27, 29. Finally, if the first is 1 less than a multiple of 3, then the next will be 1 more, and the third will be a multiple of 3; for example, 35, 37, 39. In the sequence 3, 5, 7, 3 is the only multiple of 3 that is a prime.
Next Lesson: Greatest common divisor. Lowest common multiple.
Copyright © 2021 Lawrence Spector
2489
https://meridian.allenpress.com/aplm/article/147/10/1204/489720/Recent-Advances-in-the-Classification-of
Recent Advances in the Classification of Gynecological Tract Tumors: Updates From the 5th Edition of the World Health Organization "Blue Book" | Archives of Pathology & Laboratory Medicine
REVIEW ARTICLE | January 03 2023 | Open Access
Vinita Parkash, MBBS, MPH; Omonigho Aisagbonhi, MD; Nicole Riddle, MD; Alexa Siddon, MD; Gauri Panse, MD; Oluwole Fadare, MD
From the Department of Pathology (Parkash, Siddon, Panse) and the Department of Laboratory Medicine (Siddon), Yale University School of Medicine, New Haven, Connecticut; the Department of Pathology, University of California at San Diego, La Jolla, California (Aisagbonhi, Fadare); the Department of Pathology and Cell Biology, Ruffolo, Hooper, and Associates, University of South Florida College of Medicine, Tampa, Florida (Riddle, Siddon); and the Department of Dermatology (Panse), Yale University School of Medicine, New Haven, Connecticut.
Corresponding Author: Vinita Parkash, MBBS, MPH, Department of Pathology, PO Box 208070, Yale University School of Medicine, New Haven, CT 06510 (email: vinita.parkash@yale.edu).
Arch Pathol Lab Med (2023) 147 (10): 1204–1216.
Accepted: August 01 2022.
Citation: Vinita Parkash, Omonigho Aisagbonhi, Nicole Riddle, Alexa Siddon, Gauri Panse, Oluwole Fadare; Recent Advances in the Classification of Gynecological Tract Tumors: Updates From the 5th Edition of the World Health Organization "Blue Book". Arch Pathol Lab Med 1 October 2023; 147 (10): 1204–1216. doi:
Context.— The World Health Organization Classification of Tumours: Female Genital Tract Tumors, 5th edition, published in September 2020, comes 6 years after the 4th edition, and reflects the monumental leaps made in knowledge about the biology of gynecological tumors. Major changes include revised criteria for the assignment of the site of origin of ovarian and fallopian tube tumors, a revision in the classification of squamous and glandular lesions of the lower genital tract based on human papillomavirus association, and an entire chapter devoted to genetic tumor syndromes. This article highlights the changes in the 5th edition relative to the 4th edition, with a focus on areas of value to routine clinical practice.
Objective.— To provide a comprehensive update on the World Health Organization classification of gynecological tumors, highlighting in particular updated diagnostic criteria and terminology.
Data Sources.— The 4th and 5th editions of the World Health Organization Classification of Tumours.
Conclusions.— The World Health Organization has made several changes in the 5th edition of the update on female genital tumors. Awareness of the changes is needed for pathologists' translation into contemporary practice.
This review article summarizes key changes in the 5th edition of the World Health Organization (WHO) "blue book" on tumors of the female genital tract (FGT) released in 2020.1 The 5th edition is produced with support from the International Society of Gynecological Pathology and follows the more stringent structured process created more generally for the revision of the WHO Classification of Tumours series, with the incorporation of input from other disciplines.2 The change in the title gives a nod to gender parity, replacing the previous title of Tumours of the Female Reproductive Organs with Female Genital Tumors.3 The book is also available as an online digital edition with supplemental material.1 An annual Web subscription to all WHO "blue books" has the added benefit of access to postpublication corrigenda, which can be downloaded as a PDF. The hard copy of the 5th edition has twice as many pages as the 4th edition (635 versus 307 pages) to accommodate the advances in knowledge, especially molecular discoveries, amassed in the field in the 6 years between the 2 editions.1,2 The text is complemented by a larger number of images. The book is reorganized, with the classification and staging of tumors forming 2 separate sections at the beginning of the book rather than occurring at the beginning of individual chapters.
New chapters are introduced to consolidate neoplasias that share common features across organs (neuroendocrine, hematolymphoid, melanocytic tumors, and mesenchymal and metastatic tumors of the lower genital tract [LGT]). Finally, a new chapter on genetic tumor syndromes that affect the FGT has been added to incorporate the considerable advances from massive parallel sequencing, which have revealed associations between gynecological tumors and genetic predisposition syndromes that were not previously known.
The headings for the description of individual entities now include the ICD-11 code, cytology, and diagnostic molecular pathology. Importantly, sections on essential and desirable diagnostic criteria and acceptable and unacceptable terminology (synonyms) have been added to promote greater uniformity of nomenclature and diagnosis. More broadly, there is an effort to harmonize processes and promote uniform nomenclature across organ systems. Thus, the classification of neuroendocrine neoplasia of epithelial origin aligns with the pulmonary and pancreatic terminology (ovarian carcinoid is excluded as it is a germ cell tumor) and is based on the WHO classification of neuroendocrine tumors (NETs) and the International Agency for Research on Cancer recommendations.4 Mitotic counts are standardized to an area in square millimeters rather than the less reproducible high-power fields (HPFs), defined as a 0.55-mm-diameter field or an area of 0.24 mm². There is a general trend toward a binary classification of "preinvasive" lesions.
OVARY
Epithelial Tumors of the Ovary
The changes and some of the rationale for the changes in the 5th edition in this section are summarized below.5
Immunohistochemistry is Recommended for the Subclassification of Ovarian Carcinoma Subtypes
Data show significant prognostic differences between tumor types and improved reproducibility in the classification of ovarian tumors using immunohistochemistry.6 These are summarized in Table 1.
Table 1. Immunohistochemistry in Subtyping of Ovarian Carcinoma (table not reproduced)
Criteria for Assigning the Primary Site in Extrauterine High-Grade Serous Carcinoma Are Enumerated
Tumors are considered to be primary in the fallopian tube if serous tubal intraepithelial carcinoma (STIC) is present, tubal mucosal high-grade serous carcinoma (HGSC) is present, or the tube is partly or entirely inseparable from a tubo-ovarian tumoral mass. Tumors are considered to originate from the ovary only if both tubes are grossly visible in their entirety, have undergone sectioning and extensive examination of the fimbria, and do not exhibit STIC or mucosal HGSC. A peritoneal origin is only assigned if there is no gross or microscopic evidence of STIC or HGSC in tubes or ovaries after complete histologic examination of both tubes and ovaries. Using these criteria, approximately 80% of extrauterine HGSCs are classified as tubal in origin.
Chemotherapy Response Scoring is Recommended for HGSC Treated With Neoadjuvant Therapy
Chemotherapy response scoring (CRS) score 1 correlates with no response, CRS 2 with an intermediate response, and CRS 3 with a near-complete response.
Importantly, CRS scoring is performed in the omental sections only, and does not include response patterns at other sites, including in the ovary.7
Carcinosarcoma is Now Categorized as a Carcinoma Rather Than a Mixed Malignant Epithelial and Mesenchymal Tumor
Support for the reclassification of carcinosarcoma as carcinoma and not a mixed epithelial and mesenchymal tumor comes from studies showing that both the epithelial and mesenchymal components of a carcinosarcoma share identical genetic abnormalities and an epithelial origin.8–10 Furthermore, carcinosarcoma recurrences are usually HGSC, with which it also shares a common metastatic pattern.11
Seromucinous Carcinoma is Now Considered a Subtype of Endometrioid Carcinoma
In the 4th edition of the WHO series, seromucinous carcinoma was defined as a carcinoma comprised predominantly of serous and endocervical-type mucinous epithelium, commonly containing foci of clear cells, with endometrioid and squamous differentiation.12 However, in a rigorous study using morphology, immunophenotyping, and genotyping to investigate the reproducibility of ovarian seromucinous carcinoma, Rambau et al13 found that the diagnosis of seromucinous carcinoma had low interobserver agreement. Furthermore, when morphology, immunophenotype, and genotype were integrated, cases initially diagnosed as seromucinous carcinoma were reclassified as endometrioid carcinoma (72%), low-grade serous carcinoma (25%), or mucinous carcinoma (3%). The 5th edition of the WHO book has thus eliminated seromucinous carcinoma as a distinct entity and rather categorizes tumors with endocervical-type mucinous glands as endometrioid carcinoma, seromucinous subtype. However, seromucinous borderline tumor remains a separate entity.14 Thus, the label "mucinous" for borderline tumors and carcinoma appends only to tumors showing gastrointestinal (gastric or colonic) mucinous differentiation.15
Mesonephric-Like Adenocarcinoma is a New Entity That Has Been Added to the Category of "Other Carcinomas"
Mesonephric-like adenocarcinoma is a rare tumor defined as an adenocarcinoma displaying mesonephric differentiation. These tumors are thought to arise via transdifferentiation of more typical Müllerian carcinomas because they have been found to be associated with endometriosis, borderline tumors, and low-grade serous carcinomas.16 Histologic sections show a variety of growth patterns (tubular, glandular/pseudoendometrioid, ductal with slit-like angulated glands, papillary, spindled, and solid) with characteristic eosinophilic colloid-like luminal secretions. Nuclei are typically crowded with vesicular chromatin and inconspicuous nucleoli. They are morphologically distinguished from endometrioid carcinomas by lack of mucin and squamous differentiation.17 Mesonephric-like adenocarcinomas stain positive for GATA3, TTF1, CD10, and PAX8 but are negative for estrogen receptor (ER), progesterone receptor (PR), and WT1, and are p53 wild-type by immunohistochemistry.17,18 Due to their rarity, their prognosis is not known.
Mixed Carcinoma Has Been Reintroduced as a Tumor Type
Mixed carcinoma is defined as an ovarian carcinoma composed of 2 or more different histologic types. It is emphasized that truly mixed carcinomas of the ovary are rare (<1%); a single neoplastic type morphologically mimicking another tumor type is more common.
Mixed Carcinoma Has Been Reintroduced as a Tumor Type

Mixed carcinoma is defined as an ovarian carcinoma composed of 2 or more different histologic types. It is emphasized that truly mixed carcinomas of the ovary are rare (<1%); a single neoplastic type morphologically mimicking another tumor type is more common. For a diagnosis of mixed carcinoma to be made, at least 2 clearly recognizable histologic types must be identified on routine hematoxylin-eosin sections, with immunohistochemical stains confirming this morphologic impression. The most common combination is endometrioid and clear cell carcinoma.19

The Criteria for Assigning and Managing Concurrent Uterine and Ovarian Endometrioid Carcinomas as Synchronous Primaries Versus Metastatic Tumors Are More Clearly Defined

Though clonally related, synchronous primary ovarian and endometrial endometrioid carcinomas (EECs) have an indolent behavior and can be managed conservatively.20–22 For concurrent ovarian and endometrial EECs to be considered synchronous primaries, all of the following criteria must be met: (1) both tumors are low-grade (International Federation of Gynecology and Obstetrics [FIGO] grade 1 or 2); (2) there is <50% myometrial invasion; (3) no other site is involved (ie, the endometrial tumor is confined to the endometrium and the ovarian tumor is limited to one ovary without surface involvement); and (4) extensive lymphovascular invasion is absent.20 These are the generally applicable criteria for classifying tumors as independent primaries in this setting, with the additional criterion that the ovarian tumor should be unilateral and without surface involvement or rupture.23 Nonetheless, this remains an evolving area without any data that unequivocally support one approach over another.
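Since every criterion must be satisfied, the rule reduces to a conjunction. A minimal sketch follows, assuming boolean and numeric inputs that a pathologist would supply; the function name and argument names are invented for illustration.

```python
def favors_independent_primaries(
    both_tumors_figo_grade_1_or_2: bool,
    myometrial_invasion_pct: float,          # % of myometrial thickness invaded
    other_sites_involved: bool,
    extensive_lymphovascular_invasion: bool,
    ovarian_tumor_unilateral: bool,
    ovarian_surface_involved_or_ruptured: bool,
) -> bool:
    """Every criterion must be met; failing any single one argues instead
    for a metastatic relationship between the two tumors."""
    return (
        both_tumors_figo_grade_1_or_2
        and myometrial_invasion_pct < 50
        and not other_sites_involved
        and not extensive_lymphovascular_invasion
        and ovarian_tumor_unilateral
        and not ovarian_surface_involved_or_ruptured
    )
```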
Retired Terminology

The terminology "noninvasive low-grade serous carcinoma" is no longer recommended; rather, the micropapillary pattern of a serous borderline tumor is considered a subtype of serous borderline tumor (micropapillary/cribriform subtype). For simplicity and consistency, other alternative terminologies for borderline tumors, such as "atypical proliferative serous, mucinous, or endometrioid tumor," are no longer recommended.

Nonepithelial Tumors of the Ovary

With a few exceptions, described below, there were no significant changes between the 4th and 5th editions to the sections on germ cell tumors, germ cell–sex-cord stromal tumors, mesothelial tumors, miscellaneous other tumors, or tumor-like lesions of the ovary.1,3 The most noteworthy investigative developments between the 2 editions have been in the category of sex-cord stromal tumors, where the molecular events underlying some of these tumors continue to be rapidly elucidated, including some that may be diagnostically useful but that do not, or only minimally, affect the fundamental classification or definitions of individual entities (Table 2). The 5th edition reflects these changes accordingly.

Table 2. Ovarian Sex-Cord Stromal Tumors for Which a Molecular Alteration Has Been Identified in the Majority of Cases

Gynandroblastoma Was Reintroduced as a Tumor Type

It is included in the subcategory of mixed sex-cord stromal tumors. Gynandroblastoma had been omitted from the 4th edition,3 a notable departure from the 3rd edition, wherein gynandroblastoma was defined as a tumor composed of "an admixture of well-differentiated Sertoli cell and granulosa cell components with the second cell population comprising at least 10% of the lesion."24 In the 5th edition, gynandroblastoma is defined as a "sex-cord stromal tumor with elements of both female and male differentiation," without any proportional requirement for each component.25 Morphologically, gynandroblastomas display female (adult or juvenile granulosa cell tumor) and male (Sertoli or Sertoli-Leydig cell tumor) components that, in most cases, are at least partially spatially discrete. The most common combination is a predominant Sertoli-Leydig cell tumor admixed with a minor juvenile granulosa cell tumor, and there is some evidence that tumors with this combination are in fact variants of Sertoli-Leydig cell tumor.26,27 In one molecular study of gynandroblastomas, hotspot DICER1 mutations were identified in both components in a subset of cases, and all cases were devoid of FOXL2 mutations, even when a component with the morphology of adult granulosa cell tumor was present.26 Gynandroblastoma should be distinguished from "sex-cord stromal tumor not otherwise specified," which is defined as a sex-cord stromal tumor devoid of definitive features of any of the aforementioned tumor types,25,28 and which is typically wild-type for both DICER1 and FOXL2.29

Primary Smooth Muscle Tumors of the Ovary Are Introduced as a Tumor Type

Smooth muscle tumors of the ovary display all the varied histologies seen in their uterine counterparts.30 They are believed to arise from smooth muscle metaplasia of the ovarian stroma, endometriosis, vessel walls, or teratomas.

ENDOMETRIOSIS AND TUMORS OF THE PERITONEUM, FALLOPIAN TUBE, AND BROAD LIGAMENT

A brief new chapter on endometriosis describes the various patterns of, and lesions occurring in, endometriosis that can simulate tumors. The sections on peritoneal tumors and the broad ligament contain only minor changes. As in the literature elsewhere, the term "well-differentiated papillary mesothelioma" is replaced by "well-differentiated papillary mesothelial tumor," since these lesions behave in a benign fashion. Well-differentiated papillary mesothelial tumors show TRAF7 mutations.31 Pseudomyxoma peritonei is considered a metastatic neoplasm, most commonly from the appendix.

UTERUS

Epithelial Tumors of the Uterine Corpus

The 5th edition features several substantive changes to the sections on epithelial neoplasms of the uterine corpus.32
Integrated Molecular-Histologic Typing of Endometrial Carcinoma Is Recommended for High-Grade Endometrial Carcinoma

Integrated molecular diagnosis is identified as the best approach for risk stratification of endometrial carcinoma, especially for high-grade carcinomas in countries with high human-development indices. The algorithm recommends POLE mutation analysis, followed by immunohistochemistry for the mismatch-repair proteins, and finally p53, to stratify tumors into 4 clinically prognostic groups: group 1, POLE-mutated tumors, with a good prognosis; group 2, microsatellite-instability tumors, with an intermediate prognosis; group 3, tumors with no specific molecular profile, with an intermediate prognosis; and group 4, TP53-mutated tumors, with a poor prognosis.32,33

Binary Grading of Endometrial Endometrioid Carcinomas Is Recommended

EECs have traditionally been graded using the 3-tiered FIGO grading scheme, wherein tumors are categorized as grade 1 (G1), grade 2 (G2), or grade 3 (G3) based on whether they show ≤5%, 6% to 50%, or >50% solid growth, respectively.34,35 Tumors showing high nuclear grade are assigned one grade higher than would otherwise be assigned on architecture alone.34,35 However, several groups have proposed a binary grading scheme that combines FIGO grades 1 and 2 into a low-grade group and designates FIGO grade 3 as high-grade, based on studies showing the binary system to be more reproducible and more directly relatable to current clinical practice than the 3-tiered FIGO scheme, although whether its prognostic value is superior, or even comparable, remains a matter of debate.36–41 Nonetheless, the binary scheme was recommended by the International Society of Gynecological Pathologists in 2019.42 In the 5th edition, it is noted that "EEC is graded using FIGO grading criteria," presumably as a statement of current practice, but it is also overtly stated elsewhere that "binary grading is recommended."32,33
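Read together, the molecular algorithm and the grading rules are short decision procedures. The following is a minimal sketch under stated assumptions: the boolean arguments stand in for actual POLE sequencing, mismatch-repair immunohistochemistry, and p53 interpretation, and the function names are invented for illustration.

```python
def molecular_group(pole_mutated: bool, mmr_deficient: bool, p53_abnormal: bool) -> str:
    """Sequential stratification into the 4 prognostic groups described above."""
    if pole_mutated:
        return "group 1: POLE-mutated (good prognosis)"
    if mmr_deficient:
        return "group 2: microsatellite instability (intermediate prognosis)"
    if p53_abnormal:
        return "group 4: TP53-mutated (poor prognosis)"
    return "group 3: no specific molecular profile (intermediate prognosis)"

def figo_architectural_grade(solid_pct: float, high_nuclear_grade: bool) -> int:
    """3-tiered FIGO grade: <=5%, 6%-50%, or >50% solid growth; marked
    nuclear atypia raises the grade by one step."""
    grade = 1 if solid_pct <= 5 else (2 if solid_pct <= 50 else 3)
    return min(grade + 1, 3) if high_nuclear_grade else grade

def binary_grade(solid_pct: float, high_nuclear_grade: bool) -> str:
    """Binary scheme: FIGO grades 1 and 2 are low grade; FIGO grade 3 is high grade."""
    return "high-grade" if figo_architectural_grade(solid_pct, high_nuclear_grade) == 3 else "low-grade"
```

Note that the order of the molecular checks matters: a tumor harboring both a POLE mutation and abnormal p53 is assigned to the POLE-mutated group, which is why p53 is evaluated last.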
New Histotypes of Endometrial Carcinoma Are Cataloged

A newly recognized histotype in the 5th edition is mesonephric-like carcinoma,16,17,43–45 defined therein as "an adenocarcinoma resembling mesonephric differentiation."46 Mesonephric adenocarcinoma is also recognized separately, defined as "an adenocarcinoma originating from mesonephric remnants."46 In routine practice, these 2 entities are not readily distinguishable and are generally considered under the rubric of mesonephric-like carcinoma. The limited data suggest that they are clinically aggressive.44,45 At the morphologic level, they are heterogeneous, displaying variably tubular, ductal, sex-cord–like, solid, papillary, retiform, or spindled patterns.16,44,45 Small glands with luminal, eosinophilic, colloid-like material are frequently present. They are typically immunoreactive for GATA3, TTF1, calretinin, and CD10 (luminal) and are generally negative or only focally positive for ER and PR.44–46 KRAS and NRAS mutations are frequently present.43

Also newly recognized is mucinous endometrial adenocarcinoma of the gastric (gastrointestinal) type.47–51 These tumors, which have been the subject of only a few publications, resemble gastric-type endocervical adenocarcinomas and/or display intestinal differentiation at the morphologic and immunophenotypic levels.47–51 Preliminary evidence suggests that they are clinically aggressive.47 In the 5th edition, conventional mucinous carcinoma of the endometrium, defined in the 4th edition as "an endometrial carcinoma in which >50% of the neoplasm is composed of mucinous cells," is no longer recognized as a separate entity but rather as a subtype of endometrioid carcinoma.33 Unresolved issues include how to define gastric/gastrointestinal differentiation; how to characterize purely mucinous carcinomas thought to be non–gastric/gastrointestinal; whether a mucinous carcinoma with a minimal endometrioid component should still be classified as endometrioid carcinoma with mucinous differentiation; and whether an endometrioid component is allowable in a mucinous adenocarcinoma of the gastric (gastrointestinal) type.

Also formally included in the 5th edition is the long-recognized primary squamous cell carcinoma of the endometrium, which resembles squamous cell carcinomas at other sites but may be cytologically bland and show a pushing myoinvasive front.46,52,53

Carcinosarcomas Are Classified as Epithelial Tumors

As in the section on the ovary, carcinosarcomas, previously placed in the "mixed epithelial and mesenchymal tumors" category, have now, more appropriately, been classified as epithelial tumors.54

Mesenchymal Tumors of the Uterine Corpus

Intravenous Leiomyomatosis and Metastasizing Leiomyoma Are Distinct Entities

Metastasizing leiomyoma is characterized by one or more well-demarcated smooth muscle proliferations at an extrauterine site that are devoid of cytologic atypia, tumor cell necrosis, and an elevated mitotic index, typically in a patient with a history of uterine leiomyoma. Intravenous leiomyomatosis is characterized by the presence of morphologically benign smooth muscle cells within vessels, in the absence of a leiomyoma or beyond the borders of a leiomyoma.

Fumarate Hydratase–Deficient Leiomyoma Is Described

This is now the preferred designation for the leiomyoma subtype associated with the hereditary leiomyomatosis and renal cell cancer syndrome. Fumarate hydratase (FH)-deficient leiomyomas have been associated with somatic or germline mutations in the FH gene and, to various extents, show alveolar-type edema, bizarre nuclei, nucleolomegaly with perinucleolar halos, eosinophilic and rhabdoid-like cytoplasmic inclusions, oval nuclei that may be arranged in chains, staghorn (hemangiopericytoma-like) vessels, and infiltrative tongues of smooth muscle in the background myometrium (Figure 1, A through D).55–57 They characteristically show loss of expression of the FH protein, albeit in a potentially subtle pattern.56,57 However, neither morphology nor immunophenotyping optimally identifies all patients with a germline FH mutation.56,57

Figure 1. Fumarate hydratase (FH)-deficient leiomyoma. Low magnification shows an alveolar pattern of growth (A). High magnification shows atypical nuclei, cytoplasmic rhabdoid globules (B), and prominent perinucleolar halos (C). FH stain is negative in the tumor, while positive in vessels and the adjacent myometrium (D) (hematoxylin-eosin, original magnifications ×40 [A] and ×400 [B and C]; FH stain, original magnification ×40 [D]).
Criteria for Defining Malignancy in Leiomyosarcoma Subtypes Are More Specifically Enumerated

Myxoid leiomyosarcomas must show at least one of the following features: infiltrative or irregular borders, >1 mitotic figure (MF) per 10 HPFs, tumor cell necrosis, or moderate to severe cytologic atypia. Epithelioid leiomyosarcomas must show at least one of the following 3 features: moderate to severe cytologic atypia, tumor cell necrosis, or 4 or more MF per 10 HPFs.
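Because any single feature suffices in each subtype, the criteria map directly onto an any-of check. The sketch below is illustrative; the flag and function names are hypothetical, and the functions answer only whether a tumor of that morphology qualifies as leiomyosarcoma under the listed criteria.

```python
def myxoid_lms_criteria_met(infiltrative_or_irregular_border: bool,
                            mitoses_per_10_hpf: float,
                            tumor_cell_necrosis: bool,
                            atypia_moderate_to_severe: bool) -> bool:
    """Myxoid smooth muscle tumors qualify as leiomyosarcoma with any one feature."""
    return any([infiltrative_or_irregular_border, mitoses_per_10_hpf > 1,
                tumor_cell_necrosis, atypia_moderate_to_severe])

def epithelioid_lms_criteria_met(atypia_moderate_to_severe: bool,
                                 tumor_cell_necrosis: bool,
                                 mitoses_per_10_hpf: float) -> bool:
    """Epithelioid smooth muscle tumors qualify with any one of the 3 features."""
    return any([atypia_moderate_to_severe, tumor_cell_necrosis,
                mitoses_per_10_hpf >= 4])
```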
Endometrial Stromal Tumors and Undifferentiated Uterine Sarcoma Subtyping Is Expanded

The basic classification of endometrial stromal neoplasms (endometrial stromal nodule, low- and high-grade endometrial stromal sarcoma) remains unchanged from the previous edition, an apparent constancy that belies the evolution in the recognized pathologic spectrum of high-grade endometrial stromal sarcoma (HGESS) that resulted from the identification, between the 2 editions, of additional recurrent genetic abnormalities with associated morphologic correlates.1,3 HGESS now includes tumors showing the morphologic and immunophenotypic profiles associated with YWHAE::NUTM2A/B fusions, BCOR internal tandem duplications, and ZC3H7B::BCOR fusions. Each has distinctive clinicopathologic features that are detailed elsewhere.58–63 Relatedly, undifferentiated uterine sarcoma, which is a diagnosis of exclusion, has become progressively more restricted as molecular analyses identify recurrent fusions that, together with other factors, more appropriately classify as HGESS some tumors previously labeled undifferentiated uterine sarcoma.64

Other Mesenchymal Tumors

There is an expanded discussion of perivascular epithelioid cell tumors (PEComas), which are distinctive mesenchymal tumors composed of putative "perivascular epithelioid cells" that express smooth muscle and melanocytic markers. Notably, TFE3-rearranged PEComas are recognized. These tumors are typically composed of alveolar, nested, or solid configurations of cells with clear or clear to eosinophilic cytoplasm and, unlike conventional PEComas, frequently display only focal or even absent expression of smooth muscle actin (SMA) and Melan-A.65–67 HMB-45 and TFE3 are typically positive.66,68 Two separate algorithms for stratifying the malignant potential of PEComas are outlined, without specific endorsement of either.65–67

Inflammatory myofibroblastic tumors (IMTs) are also more robustly discussed, reflecting significant advances regarding these rare tumors, including the recognition that (1) a small subset may present with extrauterine disease or may recur, and clinicopathologic features fail to predict behavior; (2) a significant subset is associated with pregnancy or the placenta; and (3) although >95% of uterine IMTs are ALK-positive by immunohistochemistry, only about 75% are positive by ALK fluorescence in situ hybridization, and ALK-negative uterine IMTs may display other novel fusions, including TIMP3::RET and ETV6::NTRK3.69–73

Retired Terminology

Adenofibroma has been retired, as these lesions represent either benign polyps or adenosarcoma.

GESTATIONAL TROPHOBLASTIC DISEASE

This section has one major change, which may be difficult to translate into everyday practice for the average pathologist in the United States. The essential diagnostic criterion for the definitive diagnosis of partial hydatidiform mole is diandric triploidy, confirmed by molecular analysis74; the previous edition allowed this diagnosis to be made using a variety of "triploidy-confirming" methodologies in the setting of appropriate histology.75 Additional changes include the expansion of the category of intraplacental choriocarcinoma to include intramolar choriocarcinoma arising in a molar gestation, although precise diagnostic criteria are not defined.76

Trophoblastic Proliferation in Early-Gestation (Complete) Mole Is Introduced

Although not identified as a unique morphologic entity, this lesion deserves mention because it is likely to cross the pathologist's microscope.76 Samples from early (complete) molar gestations, or those with bleeding, may show only solid implantation-site trophoblast proliferation with atypia and mitoses, which may be mistaken for a nonvillous trophoblastic tumor. The absence of a mass lesion and proximity to a recent pregnancy event separate these lesions clinically from solid trophoblastic tumors. Genotyping is important for accurate classification, as treatment follows that of villous trophoblastic disease rather than of true solid trophoblastic tumors.

Mixed Trophoblastic Tumor and Atypical Placental-Site Nodule Are Recognized as Tumor Types

The rare mixed trophoblastic tumor shows variable admixtures of the 3 nonvillous trophoblastic tumors: placental-site trophoblastic tumor, epithelioid trophoblastic tumor, and choriocarcinoma. The epithelioid trophoblastic tumor–choriocarcinoma combination is the most common.
Too few cases have been reported to predict behavior accurately, but tumors with a choriocarcinoma component seem more commonly associated with metastatic disease.77 The second newly recognized entity is the atypical placental-site nodule (APSN), defined as showing features "intermediate between those of typical placental site nodules and epithelioid trophoblastic tumor."78 Although diagnostic criteria for APSN are not well established, proposed criteria distinguishing APSN from the more typical placental-site nodule include larger size (5–10 mm); higher levels of cytologic atypia, cellularity, and mitotic activity; and a Ki-67 index greater than 5%.78,79

TUMORS OF THE CERVIX, VAGINA, AND VULVA

These sections show perhaps the greatest amount of reorganization and recharacterization of entities, incorporating substantial advances in our understanding of the various neoplasias at these sites. While the epithelial tumors of the LGT remain site-based, all mesenchymal tumors of the LGT are discussed in a common section because of their common and overlapping pathogenesis and diagnostic criteria.1 Research during the last decade shows that both squamous and glandular carcinomas of the LGT demonstrate clinically important differences between human papillomavirus (HPV)-associated and HPV-independent subtypes. Thus, the WHO recommends the adoption of HPV-status–based classification of both squamous (cervix, vagina, and vulva) and glandular (cervix) lesions in the LGT.80 However, there are some site-specific differences in incidence and in the histomorphologic spectrum of disease. Virtually all squamous preneoplasias and invasive squamous carcinomas of the cervix are HPV-associated, and an HPV-independent squamous preneoplastic lesion is not defined.81 However, a significant minority of cervical adenocarcinomas and a significant percentage of vulvar squamous preneoplasias and invasive squamous carcinomas are HPV-independent. Flat low-grade squamous intraepithelial lesion, while common in the cervix, is rare in the vulva.

The Separation of High-Grade Squamous Intraepithelial Lesion Into Cervical Intraepithelial Neoplasia 2 and 3 Is Acknowledged as Prognostically Relevant in Young Patients

Although the WHO promotes the 2-tier Bethesda terminology for the tissue diagnosis of squamous preneoplasia, the importance of subdividing high-grade squamous intraepithelial lesion (HSIL) into cervical intraepithelial neoplasia (CIN) 2 and 3 is acknowledged for young patients, as CIN 2 lesions show high rates of regression in this group, and treatment is stratified on this variable in the consensus risk-based management guidelines from the American Society for Colposcopy and Cervical Pathology.82,83 The WHO also emphasizes avoiding the overapplication of p16 in diagnosing HSIL, recommending instead that p16 be used to downgrade indeterminate lesions from HSIL to low-grade squamous intraepithelial lesion when p16 is negative.83

Histology Reliably Separates Most HPV-Associated and HPV-Independent Cervical Adenocarcinomas

Histomorphologic features and immunohistochemical patterns separate the HPV-independent subtypes from HPV-associated adenocarcinoma.84 Pseudostratification, floating mitoses, and block positivity for p16 reliably distinguish HPV-associated invasive and in situ adenocarcinoma from the HPV-independent subtypes (Figure 2, A and B).
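These discriminators can be collected into a simple tally. The sketch below is a teaching heuristic, not a validated score; the feature names paraphrase the text (including the apoptotic bodies shown in Figure 2), and in practice HPV status would be corroborated by p16 immunohistochemistry and, where needed, direct HPV testing.

```python
def features_favoring_hpv_associated(pseudostratification: bool,
                                     floating_apical_mitoses: bool,
                                     apoptotic_bodies: bool,
                                     p16_block_positive: bool) -> list:
    """Return the discriminating features present in a case; the more that
    are present, the more the findings favor HPV-associated adenocarcinoma."""
    observed = {
        "pseudostratification": pseudostratification,
        "floating (apical) mitoses": floating_apical_mitoses,
        "apoptotic bodies": apoptotic_bodies,
        "p16 block positivity": p16_block_positive,
    }
    return [name for name, present in observed.items() if present]
```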
The gastric subtype is the most common subtype of HPV-independent adenocarcinoma and is characterized by cells with clear or pale eosinophilic cytoplasm and conspicuous cell borders. It is often TP53-mutant, and a subset show positivity for HER2/neu, which may have implications for clinical management.85,86

Figure 2. Comparative features of human papillomavirus (HPV)-associated and HPV-independent adenocarcinoma. HPV-associated adenocarcinomas show mucin depletion with prominent "floating mitoses" and apoptotic bodies (A). Gastric-type adenocarcinoma shows voluminous pink mucin, with sharp cellular borders and low- to intermediate-level atypia (B) (hematoxylin-eosin, original magnification ×100 [A and B]).

A Pattern-Based Classification Is Recommended for Invasive HPV-Associated Adenocarcinoma of the Cervix

Invasive HPV-associated adenocarcinomas reliably stratify into prognostically relevant subgroups based on the pattern of invasion (presence of destructive invasion, confluent growth, and lymphatic space invasion), and this classification is endorsed by the WHO (Table 3; Figure 3, A through D).84,87,88

Table 3. Pattern-Based Classification of HPV-Associated Invasive Cervical Adenocarcinoma

Figure 3. Pattern-based classification of invasive human papillomavirus (HPV)-associated adenocarcinoma of the cervix. Pattern A invasive adenocarcinoma shows rounded tumor without desmoplasia or an infiltrative pattern of growth (A). Pattern B shows small foci of infiltrative glands, which may be present within the tumor or at its leading edge; confluent growth is absent. This case shows inflammation at the leading edge of the tumor (B), but only rare foci of destructive invasion were identified (C). Pattern C invasive adenocarcinoma shows destructive stromal invasion, often with confluent growth (D). Patterns B and C may show foci of lymphatic invasion (hematoxylin-eosin, original magnifications ×1.45 [A, B, and D] and ×100 [C]).
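The core logic of this pattern-based system (often called the Silva system in the literature cited above) can be sketched in a few lines. The simplified labels 'none', 'focal', and 'diffuse' are our shorthand for the detailed criteria in Table 3, and the function is illustrative rather than a substitute for them.

```python
def invasion_pattern(destructive_invasion: str,  # 'none', 'focal', or 'diffuse'
                     confluent_growth: bool,
                     lymphovascular_invasion: bool) -> str:
    """Assign the invasion pattern summarized in Table 3 and Figure 3."""
    if destructive_invasion == "none" and not confluent_growth:
        # Lymphovascular invasion is incompatible with pattern A.
        return "pattern A" if not lymphovascular_invasion else "at least pattern B"
    if destructive_invasion == "diffuse" or confluent_growth:
        return "pattern C"
    # Limited foci of destructive invasion; LVI may be present.
    return "pattern B"
```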
HPV-Independent Vulvar Squamous Lesions Stratify Into TP53-Mutated and TP53-Independent Histologies

Unlike in the cervix, a significant proportion of vulvar squamous preneoplasias and invasive squamous cancers are HPV-independent, with wide variability in incidence cited between studies.89 HPV-associated vulvar intraepithelial neoplasia (VIN) mirrors the histology of its counterparts in the cervix and the vagina, although rare vulva-associated morphologic variants occur. HPV-independent lesions include the now well-recognized "differentiated VIN" and an emerging spectrum of lesions that can be broadly categorized as "vulvar preneoplasia with aberrant maturation," which together incorporate at least 2 different pathways of neoplastic transformation (Figure 4, A through D).90 The more commonly recognized pattern, differentiated VIN, is TP53-mutated and HPV and p16 negative; the other patterns (differentiated exophytic VIN and vulvar acanthosis with altered differentiation) show wild-type TP53 and are associated with somatic mutations in other genes, in particular NOTCH1 and PIK3CA.91,92

Figure 4. Human papillomavirus (HPV)-independent squamous vulvar intraepithelial neoplasia of the vulva. Differentiated vulvar intraepithelial neoplasia showing fusion of rete with marked basal atypia (A). TP53 immunohistochemistry shows a mutant "overexpression" pattern (B). Non–p53-mutated differentiated vulvar intraepithelial lesion showing altered differentiation with abrupt keratinization (C). TP53 stain shows a wild-type pattern of staining with only scattered basal cells showing variable positive staining (inset). Higher magnification shows minimal basal atypia and prominent eosinophilic cytoplasm of keratinocytes (D) (hematoxylin-eosin, original magnifications ×100 [A and B], ×40 [C], and ×400 [D]; original magnification ×40 [C inset]).
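The vulvar stratification just described reduces to a two-step triage: HPV status first, then p53 immunohistochemistry. The sketch below is a simplification under stated assumptions; the category labels condense the entities named above, and the function and pattern strings are invented for illustration.

```python
def vulvar_squamous_precursor(hpv_associated: bool, p53_ihc_pattern: str) -> str:
    """p53_ihc_pattern: 'wild-type' or an aberrant pattern such as
    'overexpression' or 'null' (labels are illustrative)."""
    if hpv_associated:
        return "HPV-associated VIN (usual type)"
    if p53_ihc_pattern != "wild-type":
        return "differentiated VIN (TP53-mutated)"
    # TP53 wild-type lesions with aberrant maturation (eg, differentiated
    # exophytic VIN, vulvar acanthosis with altered differentiation), often
    # carrying NOTCH1 or PIK3CA mutations.
    return "vulvar preneoplasia with aberrant maturation (TP53 wild-type)"
```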
Retired Terminology

"Serous carcinoma of the cervix" has been retired, as these lesions likely represent either "drop metastases" from the uterus or ovary or HPV-associated endocervical adenocarcinoma with serous carcinoma–like cytoarchitecture.93,94

NEUROENDOCRINE NEOPLASIA

The section on neuroendocrine neoplasia of the gynecological tract is new and harmonizes terminology across organ systems.4 This terminology does not extend to ovarian carcinoid tumors, as these are of germ cell origin and represent a subtype of monodermal teratoma. Elsewhere in the gynecological tract, as in other organs, neuroendocrine neoplasia is of somatic origin, arising generally from neuroectodermal or epithelial cells.4 It is classified as either a NET, replacing the 2014 term "low-grade neuroendocrine tumor," or a neuroendocrine carcinoma (NEC), reflecting the biologically independent pathways of origin of these tumors. NETs are low- to intermediate-grade tumors that are positive for at least one neuroendocrine marker and show nested, trabecular, or glandular growth of monotonous cells; they are graded as 1 or 2 on the basis of 3 prognostically relevant, organ-specific parameters: mitotic count, proliferative index, and necrosis. However, the extreme rarity of these tumors in the gynecological tract limits the identification of reliable cut-offs, and a mitotic count of 5 per 10 HPFs is proposed as the threshold for grade 2; in general, necrosis should be absent in grade 1 tumors. NECs are not graded, as they are high-grade by definition. Two subtypes, small-cell and large-cell, are recognized and may coexist in different parts of the same tumor; this label replaces the term "ovarian small cell carcinoma of pulmonary type." Both NETs and NECs may show block p16 positivity independent of HPV association, although cervical tumors are commonly HPV-induced. Importantly, neuroendocrine immunohistochemistry in the absence of supporting histomorphology does not justify labeling a tumor as NEC. NECs may occasionally be admixed with non-neuroendocrine carcinomas, most commonly an adenocarcinoma.
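The grading conventions condense into a short rule. The sketch below is illustrative and deliberately incomplete: it encodes only the provisional mitotic cut-off and the necrosis rule stated above, omits the proliferative (Ki-67) index, and uses invented names.

```python
def grade_fgt_neuroendocrine_neoplasm(is_nec: bool,
                                      mitoses_per_10_hpf: float,
                                      necrosis_present: bool) -> str:
    """Grade an FGT neuroendocrine neoplasm per the provisional criteria above.
    Ten HPFs of 0.55-mm diameter correspond to roughly 2.4 mm^2, consistent
    with the single-field area of 0.24 mm^2 given in the introduction."""
    if is_nec:
        # NECs are high grade by definition and are not graded; they are
        # instead subtyped as small-cell or large-cell.
        return "NEC (not graded; subtype as small- or large-cell)"
    if mitoses_per_10_hpf >= 5 or necrosis_present:
        return "NET, grade 2"
    return "NET, grade 1"
```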
HEMATOLYMPHOID PROLIFERATIONS AND NEOPLASIA

The hematopoietic tumors of the FGT are reorganized in the 5th edition into a single chapter to allow a more cohesive discussion of the entities.95 The main hematolymphoid proliferations within the FGT include florid reactive lymphoid hyperplasia, various lymphomas, and myeloid sarcoma. Reactive lymphoid hyperplasia is the only benign entity typically seen in the FGT but can mimic lymphoma; it is generally located within the cervix and arises as a result of infection (such as HPV, Epstein-Barr virus, HIV, or Chlamydia trachomatis), surgery, or intrauterine device placement.96 Lymphomas can be separated into those primary to the FGT and those secondary to systemic disease. Primary lymphomas include a variety of types, most frequently B-cell non-Hodgkin lymphomas, including diffuse large B-cell lymphoma, follicular lymphoma, Burkitt lymphoma, and extranodal marginal zone lymphoma.97,98 All lymphomas should be classified based on their morphologic, immunohistochemical, and molecular characteristics. Follicular lymphoma comprises 10% to 20% of FGT lymphomas, can be primary or secondary, and typically affects adults. Burkitt lymphoma can be seen in any FGT organ, although the ovaries are most frequently involved. Endemic Burkitt lymphoma typically affects young children, while Burkitt lymphoma in adolescents and adults is sporadic. While quite rare, primary marginal zone lymphoma of the FGT is most commonly endometrial, although it may be seen in the ovaries, cervix, vagina, or vulva.97,98 Finally, myeloid sarcoma (a mass-forming tumor of myeloid blasts) is classified according to the cytogenetic and molecular findings of acute myeloid leukemia.99

MESENCHYMAL TUMORS OF THE FGT

This entirely new, substantially expanded section includes the molecular-genetic findings that are now integral to accurately typing soft tissue tumors. These are summarized in Table 4 and are expounded upon in the new WHO Soft Tissue and Bone Tumours, 5th edition.100 Three new soft tissue lesions particular to the FGT are described; all are rare.

Table 4. Immunohistochemical and Molecular Characteristics of Select Mesenchymal Tumors of the Female Genital Tract

Postoperative Spindle Cell Nodules Are Defined

These are non-neoplastic, reactive spindle cell proliferations with a variably edematous stroma and ill-defined edges that are often mistaken for sarcoma. Mitoses can be prominent, and chronic inflammation is typically present. SMA and desmin may be expressed, as in any myofibroblastic proliferation, but ALK (seen in 50% of inflammatory myofibroblastic tumors) is negative.

Lipoblastoma-Like Tumor of the Vulva Is Introduced as a New Tumor Type

This tumor has the potential to recur and occurs in women of reproductive age (mean/median age, 35 years). It belongs to the family of RB1-deleted soft tissue tumors. The histomorphologic features are a mixture of those seen in lipoblastoma, myxoid liposarcoma, and spindle cell lipoma. These tumors demonstrate a lobulated growth of mature adipocytes and lipoblasts admixed with variably cellular, cytologically bland spindle cells in an often myxoid stroma. They show an arborizing vascular pattern and, by immunohistochemistry, are generally negative for S100, MDM2, and CDK4; CD34 may be positive. Most demonstrate immunohistochemical loss of PLAG1 and HMGA2 and either complete loss or a mosaic pattern of nuclear RB1 immunoexpression.101

Prepubertal Fibroma Is Defined

This is a benign fibroblastic proliferation that commonly presents as asymmetric enlargement of one labium majus in a prepubertal girl. Composed of a bland, hypocellular, patternless growth with variable edema, myxoid change, and/or collagen deposition, it may represent an asymmetric physiological response to hormonal surges at puberty. CD34 is strongly and diffusely positive, while S100, SMA, and desmin are negative. ER and PR are variable and not useful in the diagnosis. These lesions have a potential for recurrence, especially when incompletely excised.

NTRK-Rearranged Spindle Cell Neoplasm Is Recognized as a Tumor Type

This is a rare soft tissue sarcoma resembling fibrosarcoma that occurs in the cervix or lower uterine segment of young adults.102 At least some cases have metastasized and shown an aggressive clinical course. Histologically, these are mildly to moderately atypical cellular proliferations with a variable mitotic index, staghorn vasculature, and/or necrosis. They are positive for CD34, S100, and cyclin D1, while negative for BCOR, ER, PR, CD10, SMA, and desmin. However, published experience is limited, and failure to coexpress CD34 and S100 does not exclude the diagnosis.103 Pan-Trk is frequently positive and has some utility as a screening modality, but it is neither absolutely sensitive nor specific: pan-Trk sensitivity tends to be much lower in tumors with NTRK3 fusions, and high-grade endometrial stromal sarcomas frequently express pan-Trk.104–106

The Criteria for Risk Stratification of Smooth Muscle Tumors of the LGT Are Standardized to Align With Uterine Criteria

An important change is the standardization of the risk stratification criteria for smooth muscle tumors of the vulva and vagina to those used in the uterus. Thus, the presence of 2 of 3 criteria (>10 mitoses per 10 HPFs, moderate to severe cytologic atypia, and tumor cell necrosis) classifies a smooth muscle tumor of the LGT as leiomyosarcoma.30,107
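As a concrete rendering of the 2-of-3 rule, here is a minimal sketch. The function name and flags are hypothetical, and the rule yields only the leiomyosarcoma call; all other designations are left to the source criteria.

```python
def meets_lgt_leiomyosarcoma_criteria(mitoses_per_10_hpf: float,
                                      atypia_moderate_to_severe: bool,
                                      tumor_cell_necrosis: bool) -> bool:
    """Return True when at least 2 of the 3 criteria above are satisfied."""
    criteria_met = sum([mitoses_per_10_hpf > 10,
                        atypia_moderate_to_severe,
                        tumor_cell_necrosis])
    return criteria_met >= 2
```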
MELANOCYTIC LESIONS

Melanocytic neoplasms account for 10% to 12% of vulvar pigmented lesions and can present a diagnostic challenge. Genital nevi may have worrisome features owing to a greater tendency toward architectural atypia, with large, irregular, and confluent nests at the junction (atypical melanocytic nevus of the genital type).108 However, overall symmetry, evidence of maturation with descent of dermal melanocytes, and the absence of significant cytologic atypia allow these lesions to be recognized as benign. They frequently harbor BRAF V600E mutations, and complete excision is curative. Dysplastic melanocytic nevus presents most often on the labia majora and shows features such as a nested and lentiginous junctional component extending beyond the dermal component ("shoulder"), variable cytologic atypia, and an inflammatory infiltrate. These lesions often show overlapping features with atypical melanocytic nevus of the genital type and may harbor BRAF or NRAS mutations. Conservative removal of dysplastic nevi with severe atypia is recommended by some authors.

Mucosal melanoma of the FGT is rare and displays the histomorphologic and immunohistochemical features of melanomas elsewhere. Although a subset harbor BRAF and KIT mutations and are amenable to targeted therapies, the incidence of these mutations in FGT melanomas is lower than in cutaneous melanoma, as FGT melanomas lack an ultraviolet signature. FGT melanomas are staged similarly to cutaneous melanomas but demonstrate poorer survival than their cutaneous counterparts. A variety of congenital and acquired melanocytic nevi may be seen in the vulva, vagina, and cervical mucosa; these mirror the histopathologic findings of their counterparts elsewhere in the skin.

METASTASES TO THE FGT

Metastases to the FGT are uncommon; the vagina is the most common recipient site, typically from a cervical or endometrial primary. Breast and gastrointestinal tumors make up the bulk of nongynecological metastases.

GENETIC TUMOR SYNDROMES

This chapter aggregates all genetic tumor syndromes whose associated proliferations may be present in the FGT.109 There is an updated description of each syndrome, including definition, localization, clinical features, epidemiology, pathogenesis, histopathology of associated tumors, essential and desirable diagnostic criteria, inheritance patterns, and diagnostic molecular pathology, among others. These syndromes, which are described more comprehensively elsewhere,110 are listed in Table 5 with the associated FGT tumors for each. Ovarian dysgenesis, a heterogeneous group of congenital sex-development disorders, is also included in this chapter, presumably because it increases the risk of developing gonadoblastoma and other germ cell tumors.

Table 5. Genetic Tumor Syndromes With Associated Diseases That Involve the Female Genital Tract

References

1. WHO Classification of Tumours Editorial Board. Female Genital Tumours. 5th ed. Lyon, France: International Agency for Research on Cancer (IARC); 2020. WHO Classification of Tumours; vol 4. Accessed May 20, 2022.
2. Cree IA, White VA, Indave BI, Lokuhetty D. Revising the WHO classification: female genital tract tumours. Histopathology. 2020;76(1):151–156.
3. Kurman RJ, Carcangiu ML, Herrington CS, Young RH, eds. WHO Classification of Tumours of Female Reproductive Organs. 4th ed. Lyon, France: International Agency for Research on Cancer; 2014. WHO Classification of Tumours; vol 3.
4. Rindi G, Klimstra DS, Abedi-Ardekani B, et al. A common classification framework for neuroendocrine neoplasms: an International Agency for Research on Cancer (IARC) and World Health Organization (WHO) expert consensus proposal. Mod Pathol. 2018;31(12):1770–1786.
5. McCluggage WG, Lax SF, Longacre TA, et al. Tumors of the ovary. In: WHO Classification of Tumours Editorial Board. Female Genital Tumours. 5th ed. Lyon, France: IARC; 2020;4:32–35.
6. Köbel M, Luo L, Grevers X, et al. Ovarian carcinoma histotype: strengths and limitations of integrating morphology with immunohistochemical predictions. Int J Gynecol Pathol. 2019;38(4):353–362.
7. Cohen PA, Powell A, Böhm S, et al. Pathological chemotherapy response score is prognostic in tubo-ovarian high-grade serous carcinoma: a systematic review and meta-analysis of individual patient data. Gynecol Oncol. 2019;154(2):441–448.
8. Abeln EC, Smit VT, Wessels JW, de Leeuw WJ, Cornelisse CJ, Fleuren GJ. Molecular genetic evidence for the conversion hypothesis of the origin of malignant mixed Müllerian tumours. J Pathol. 1997;183(4):424–431.
9. Fujii H, Yoshida M, Gong ZX, et al. Frequent genetic heterogeneity in the clonal evolution of gynecological carcinosarcoma and its influence on phenotypic diversity. Cancer Res. 2000;60(1):114–120.
10. Jin Z, Ogata S, Tamura G, et al. Carcinosarcomas (malignant Müllerian mixed tumors) of the uterus and ovary: a genetic study with special reference to histogenesis. Int J Gynecol Pathol. 2003;22(4):368–373.
11. Gallardo A, Matias-Guiu X, Lagarda H, et al. Malignant Müllerian mixed tumor arising from ovarian serous carcinoma: a clinicopathologic and molecular study of two cases. Int J Gynecol Pathol. 2002;21(3):268–272.
12. Köbel M, Bell DA, Carcangiu ML, et al. Seromucinous tumors: tumors of the ovary. In: Kurman RJ, Carcangiu ML, eds. WHO Classification of Tumours of Female Reproductive Organs. 4th ed. Lyon, France: IARC; 2014;3:38–40.
13. Rambau PF, McIntyre JB, Taylor J, et al. Morphologic reproducibility, genotyping, and immunohistochemical profiling do not support a category of seromucinous carcinoma of the ovary. Am J Surg Pathol. 2017;41(5):685–695.
14. Köbel M, Kim KR, McCluggage WG, Shih I, Singh N. Seromucinous borderline tumor. In: WHO Classification of Tumours Editorial Board. Female Genital Tumours. 5th ed. Lyon, France: IARC; 2020;4:69–70.
15. Vang R, Khunamornpong S, Köbel M, Longacre TA, Ramalingam P. Mucinous borderline tumor. In: WHO Classification of Tumours Editorial Board. Female Genital Tumours. 5th ed. Lyon, France: IARC; 2020;4:50–52.
16. McCluggage WG, Vosmikova H, Laco J. Ovarian combined low-grade serous and mesonephric-like adenocarcinoma: further evidence for a Müllerian origin of mesonephric-like adenocarcinoma. Int J Gynecol Pathol. 2020;39(1):84–92.
17. McFarland M, Quick CM, McCluggage WG. Hormone receptor-negative, thyroid transcription factor 1-positive uterine and ovarian adenocarcinomas: report of a series of mesonephric-like adenocarcinomas. Histopathology. 2016;68(7):1013–1020.
18. Pors J, Cheng A, Leo JM, Kinloch MA, Gilks B, Hoang L. A comparison of GATA3, TTF1, CD10, and calretinin in identifying mesonephric and mesonephric-like carcinomas of the gynecologic tract. Am J Surg Pathol. 2018;42(12):1596–1606.
19. Mackenzie R, Talhouk A, Eshragh S, et al. Morphologic and molecular characteristics of mixed epithelial ovarian cancers. Am J Surg Pathol. 2015;39(11):1548–1557.
20. Köbel M, Huntsman DG, Lim D, et al. Endometrioid carcinoma of the ovary. In: WHO Classification of Tumours Editorial Board. Female Genital Tumours. 5th ed. Lyon, France: IARC; 2020;4:58–61.
21. Anglesio MS, Wang YK, Maassen M, et al. Synchronous endometrial and ovarian carcinomas: evidence of clonality. J Natl Cancer Inst. 2016;108(6):djv428.
22. Schultheis AM, Ng CK, De Filippo MR, et al. Massively parallel sequencing-based clonality analysis of synchronous endometrioid endometrial and ovarian carcinomas. J Natl Cancer Inst. 2016;108(6):djv427.
23. Casey L, Singh N. Metastases to the ovary arising from endometrial, cervical and fallopian tube cancer: recent advances. Histopathology. 2020;76(1):37–51.
24. Tavassoli FA, Mooney E, Gersell D, et al. Sex cord-stromal tumours. In: Tavassoli FA, Devilee P, eds. Pathology and Genetics of Tumours of the Breast and Female Genital Organs. 3rd ed. Lyon, France: IARC; 2003:149–151.
25. Kommoss F, Karnezis AN. Gynandroblastoma. In: WHO Classification of Tumours Editorial Board. Female Genital Tumours. 5th ed. Lyon, France: IARC; 2020;4:32–35.
26. Wang Y, Karnezis AN, Magrill J, et al. DICER1 hot-spot mutations in ovarian gynandroblastoma. Histopathology. 2018;73(2):306–313.
27. Ordulu Z, Young RH. Sertoli-Leydig cell tumors of the ovary with follicular differentiation often resembling juvenile granulosa cell tumor: a report of 38 cases including comments on sex cord-stromal tumors of mixed forms (so-called gynandroblastoma). Am J Surg Pathol. 2021;45(1):59–67.
28. Seidman JD. Unclassified ovarian gonadal stromal tumors: a clinicopathologic study of 32 cases. Am J Surg Pathol. 1996;20(6):699–706.
29. Stewart CJR, Amanuel B, De Kock L, et al. Evaluation of molecular analysis in challenging ovarian sex cord-stromal tumours: a review of 50 cases. Pathology. 2020;52(6):686–693.
30. Lerwill MF, Sung R, Oliva E, Prat J, Young RH. Smooth muscle tumors of the ovary: a clinicopathologic study of 54 cases emphasizing prognostic criteria, histologic variants, and differential diagnosis. Am J Surg Pathol. 2004;28(11):1436–1451.
31. Stevers M, Rabban JT, Garg K, et al. Well-differentiated papillary mesothelioma of the peritoneum is genetically defined by mutually exclusive mutations in TRAF7 and CDC42. Mod Pathol. 2019;32(1):88–99.
32. Matias-Guiu X, Longacre TA, McCluggage WG, Nucci MR, Oliva E. Tumors of the uterine corpus. In: WHO Classification of Tumours Editorial Board. Female Genital Tumours. 5th ed. Lyon, France: IARC; 2020;4:246–247.
33. Bosse T, Davidson B, Euscher E, et al. Endometrioid carcinoma of the uterine corpus. In: WHO Classification of Tumours Editorial Board. Female Genital Tumours. 5th ed. Lyon, France: IARC; 2020;4:252–255.
34. Creasman W. Revised FIGO staging for carcinoma of the endometrium. Int J Gynaecol Obstet. 2009;105(2):109.
35. Zaino RJ, Kurman RJ, Diana KL, Morrow CP. The utility of the revised International Federation of Gynecology and Obstetrics histologic grading of endometrial adenocarcinoma using a defined nuclear grading system: a Gynecologic Oncology Group study. Cancer. 1995;75(1):81–86.
36. Conlon N, Leitao MM, Abu-Rustum NR, Soslow RA. Grading uterine endometrioid carcinoma: a proposal that binary is best. Am J Surg Pathol. 2014;38(12):1583–1587.
37. Alkushi A, Abdul-Rahman ZH, Lim P, et al. Description of a novel system for grading of endometrial carcinoma and comparison with existing grading systems. Am J Surg Pathol. 2005;29(3):295–304.
38. Lax SF, Kurman RJ, Pizer ES, Wu L, Ronnett BM. A binary architectural grading system for uterine endometrial endometrioid carcinoma has superior reproducibility compared with FIGO grading and identifies subsets of advance-stage tumors with favorable and unfavorable prognosis. Am J Surg Pathol. 2000;24(9):1201–1208.
39. Scholten AN, Smit VT, Beerman H, van Putten WL, Creutzberg CL. Prognostic significance and interobserver variability of histologic grading systems for endometrial carcinoma. Cancer. 2004;100(4):764–772.
40. Guan H, Semaan A, Bandyopadhyay S, et al. Prognosis and reproducibility of new and existing binary grading systems for endometrial carcinoma compared to FIGO grading in hysterectomy specimens. Int J Gynecol Cancer. 2011;21(4):654–660.
41. Sagae S, Saito T, Satoh M, et al. The reproducibility of a binary tumor grading system for uterine endometrial endometrioid carcinoma, compared with FIGO system and nuclear grading. Oncology. 2004;67(5–6):344–350.
42. Soslow RA, Tornos C, Park KJ, Malpica A, et al. Endometrial carcinoma diagnosis: use of FIGO grading and genomic subcategories in clinical practice: recommendations of the International Society of Gynecological Pathologists. Int J Gynecol Pathol. 2019;38(suppl 1):S64–S74.
43. Mirkovic J, McFarland M, Garcia E, et al. Targeted genomic profiling reveals recurrent KRAS mutations in mesonephric-like adenocarcinomas of the female genital tract. Am J Surg Pathol. 2018;42(2):227–233.
44. Pors J, Segura S, Chiu DS, et al. Clinicopathologic characteristics of mesonephric adenocarcinomas and mesonephric-like adenocarcinomas in the gynecologic tract: a multi-institutional study. Am J Surg Pathol. 2021;45(4):498–506.
45. Euscher ED, Bassett R, Duose DY, et al. Mesonephric-like carcinoma of the endometrium: a subset of endometrial carcinoma with an aggressive behavior. Am J Surg Pathol. 2020;44(4):429–443.
46. Singh N, Euscher ED, Hoang LN, Ip PPC, Park KJ. Other endometrial carcinomas. In: WHO Classification of Tumours Editorial Board. Female Genital Tumours. 5th ed. Lyon, France: IARC; 2020;4:264–265.
47. Wong RW, Ralte A, Grondin K, Talia KL, McCluggage WG. Endometrial gastric (gastrointestinal)-type mucinous lesions: report of a series illustrating the spectrum of benign and malignant lesions. Am J Surg Pathol. 2020;44(3):406–419.
48. Trippel M, Imboden S, Papadia A, et al. Intestinal differentiated mucinous adenocarcinoma of the endometrium with sporadic MSI high status: a case report. Diagn Pathol. 2017;12(1):39.
49. Stolnicu S, Podoleanu C, Orban I, Francisc R. Endometrial adenocarcinoma with gastrointestinal differentiation—a newly described entity, with morphologic diversity. Pol J Pathol. 2021;72(1):84–86.
50. Travaglino A, Raffone A, Gencarelli A, et al. Endometrial gastric-type carcinoma: an aggressive and morphologically heterogenous new histotype arising from gastric metaplasia of the endometrium. Am J Surg Pathol. 2020;44(7):1002–1004.
51. Ardighieri L, Palicelli A, Ferrari F, et al. Endometrial carcinomas with intestinal-type metaplasia/differentiation: does mismatch repair system defects matter? Case report and systematic review of the literature. J Clin Med. 2020;9(8):2552.
52. Bures N, Nelson G, Duan Q, Magliocco A, Demetrick D, Duggan MA. Primary squamous cell carcinoma of the endometrium: clinicopathologic and molecular characteristics. Int J Gynecol Pathol. 2013;32(6):566–575.
53. Goodman A, Zukerberg LR, Rice LW, Fuller AF, Young RH, Scully RE. Squamous cell carcinoma of the endometrium: a report of eight cases and a review of the literature. Gynecol Oncol. 1996;61(1):54–60.
54. Palacios J, Ali-Fehmi R, Carlson JW. Carcinosarcoma of the uterine corpus. In: WHO Classification of Tumours Editorial Board. Female Genital Tumours. 5th ed. Lyon, France: IARC; 2020;4:266–267.
55. Sanz-Ortega J, Vocke C, Stratton P, Linehan WM, Merino MJ. Morphologic and molecular characteristics of uterine leiomyomas in hereditary leiomyomatosis and renal cancer (HLRCC) syndrome. Am J Surg Pathol. 2013;37(1):74–80.
56. Chan E, Rabban JT, Mak J, Zaloudek C, Garg K. Detailed morphologic and immunohistochemical characterization of myomectomy and hysterectomy specimens from women with hereditary leiomyomatosis and renal cell carcinoma syndrome (HLRCC). Am J Surg Pathol. 2019;43(9):1170–1179.
57. Rabban JT, Chan E, Mak J, Zaloudek C, Garg K. Prospective detection of germline mutation of fumarate hydratase in women with uterine smooth muscle tumors using pathology-based screening to trigger genetic counseling for hereditary leiomyomatosis renal cell carcinoma syndrome: a 5-year single institutional experience. Am J Surg Pathol. 2019;43(5):639–655.
58. Lee CH, Nucci MR. Endometrial stromal sarcoma—the new genetic paradigm. Histopathology. 2015;67(1):1–19.
59. Lee CH, Mariño-Enriquez A, Ou W, et al. The clinicopathologic features of YWHAE-FAM22 endometrial stromal sarcomas: a histologically high-grade and clinically aggressive tumor. Am J Surg Pathol. 2012;36(5):641–653.
60. Mariño-Enriquez A, Lauria A, Przybyl J, et al. BCOR internal tandem duplication in high-grade uterine sarcomas. Am J Surg Pathol. 2018;42(3):335–341.
61. Lewis N, Soslow RA, Delair DF, et al. ZC3H7B-BCOR high-grade endometrial stromal sarcomas: a report of 17 cases of a newly defined entity. Mod Pathol. 2018;31(4):674–684.
62. Panagopoulos I, Thorsen J, Gorunova L, et al. Fusion of the ZC3H7B and BCOR genes in endometrial stromal sarcomas carrying an X;22-translocation. Genes Chromosomes Cancer. 2013;52(7):610–618.
63. Momeni-Boroujeni A, Chiang S. Uterine mesenchymal tumours: recent advances. Histopathology. 2020;76(1):64–75.
64. Cotzia P, Benayed R, Mullaney K, et al. Undifferentiated uterine sarcomas represent under-recognized high-grade endometrial stromal sarcomas. Am J Surg Pathol. 2019;43(5):662–669.
65. Folpe AL, Mentzel T, Lehr HA, Fisher C, Balzer BL, Weiss SW. Perivascular epithelioid cell neoplasms of soft tissue and gynecologic origin: a clinicopathologic study of 26 cases and review of the literature. Am J Surg Pathol. 2005;29(12):1558–1575.
66. Schoolmeester JK, Howitt BE, Hirsch MS, Dal Cin P, Quade BJ, Nucci MR. Perivascular epithelioid cell neoplasm (PEComa) of the gynecologic tract: clinicopathologic and immunohistochemical characterization of 16 cases. Am J Surg Pathol. 2014;38(2):176–188.
67. Bennett JA, Braga AC, Pinto A, et al. Uterine PEComas: a morphologic, immunohistochemical, and molecular analysis of 32 tumors. Am J Surg Pathol. 2018;42(10):1370–1383.
68. Bennett JA, Oliva E. Perivascular epithelioid cell tumors (PEComa) of the gynecologic tract. Genes Chromosomes Cancer. 2021;60(3):168–179.
69. Rabban JT, Zaloudek CJ, Shekitka KM, Tavassoli FA. Inflammatory myofibroblastic tumor of the uterus: a clinicopathologic study of 6 cases emphasizing distinction from aggressive mesenchymal tumors. Am J Surg Pathol. 2005;29(10):1348–1355.
70. Cheek EH, Fadra N, Jackson RA, et al. Uterine inflammatory myofibroblastic tumors in pregnant women with and without involvement of the placenta: a study of 6 cases with identification of a novel TIMP3-RET fusion. Hum Pathol. 2020;97:29–39.
71. Devereaux KA, Fitzpatrick MB, Hartinger S, Jones C, Kunder CA, Longacre TA. Pregnancy-associated inflammatory myofibroblastic tumors of the uterus are clinically distinct and highly enriched for TIMP3-ALK and THBS1-ALK fusions. Am J Surg Pathol. 2020;44(7):970–981.
72. Mohammad N, Haimes JD, Mishkin S, et al. ALK is a specific diagnostic marker for inflammatory myofibroblastic tumor of the uterus. Am J Surg Pathol. 2018;42(10):1353–1359.
73. Takahashi A, Kurosawa M, Uemura M, Kitazawa J, Hayashi Y. Anaplastic lymphoma kinase-negative uterine inflammatory myofibroblastic tumor containing the ETV6-NTRK3 fusion gene: a case report. J Int Med Res. 2018;46(8):3498–3503.
74. Buza N, Colgan TJ, Sebire N, et al. Partial hydatidiform mole. In: WHO Classification of Tumours Editorial Board. Female Genital Tumours. 5th ed. Lyon, France: IARC; 2020;4:317–318.
75. Hui P, Baergen RN, Cheung ANY, et al. Molar pregnancies. In: WHO Classification of Tumours Editorial Board. Female Genital Tumours. 5th ed. Lyon, France: IARC; 2020;4:163–165.
76. Cheung AN, Baergen RN, Hui P, et al. Gestational choriocarcinoma. In: WHO Classification of Tumours Editorial Board. Female Genital Tumours. 5th ed. Lyon, France: IARC; 2020;4:327–331.
77. Mao TL, Baergen RN, Cheung ANY, et al. Mixed trophoblastic tumour. In: WHO Classification of Tumours Editorial Board. Female Genital Tumours. 5th ed. Lyon, France: IARC; 2020;4:332–334.
78. Kaur B, Baergen RN, Cheung AN, Hui P, Mao TL. Placental site nodule and plaque. In: WHO Classification of Tumours Editorial Board. Female Genital Tumours. 5th ed. Lyon, France: IARC; 2020;4:313–314.
79. Kaur B, Short D, Fisher RA, Savage PM, Seckl MJ, Sebire NJ. Atypical placental site nodule (APSN) and association with malignant gestational trophoblastic disease; a clinicopathologic study of 21 cases. Int J Gynecol Pathol. 2015;34(2):152–158.
80. Herrington CS, Bray F, Ordi J. Tumours of the uterine cervix. In: WHO Classification of Tumours Editorial Board. Female Genital Tumours. 5th ed. Lyon, France: IARC; 2020;4:332–334.
81. Saco A, Carrilho C, Focchi GRA. Squamous cell carcinoma, HPV-independent, of the uterine cervix. In: WHO Classification of Tumours Editorial Board. Female Genital Tumours. 5th ed. Lyon, France: IARC; 2020;4:350–351.
82. Perkins RB, Guido RS, Castle PE, et al. 2019 ASCCP risk-based management consensus guidelines for abnormal cervical cancer screening tests and cancer precursors. J Low Genit Tract Dis. 2020;24(4):102–131.
83. Mills AM, Carrilho C, Focchi GRA, et al. Squamous intraepithelial lesions of the uterine cervix. In: WHO Classification of Tumours Editorial Board. Female Genital Tumours. 5th ed. Lyon, France: IARC; 2020;4:342–346.
84. Parra-Herran C, Alvarado-Cabrero I, Hoang L, et al. Adenocarcinoma, HPV-associated, of the uterine cervix. In: WHO Classification of Tumours Editorial Board. Female Genital Tumours. 5th ed. Lyon, France: IARC; 2020;4:367–371.
85. Shi H, Shao Y, Lu W, Lu B. An analysis of HER2 amplification in cervical adenocarcinoma: correlation with clinical outcomes and the international endocervical adenocarcinoma criteria and classification (IECC). J Pathol Clin Res. 2021;7(1):86–95.
86. Stolnicu S, Barsan I, Hoang L, et al. Diagnostic algorithmic proposal based on comprehensive immunohistochemical evaluation of 297 invasive endocervical adenocarcinomas. Am J Surg Pathol. 2018;42(8):989–1000.
87. Roma AA, Diaz De Vivar A, Park KJ, et al. Invasive endocervical adenocarcinoma: a new pattern-based classification system with important clinical significance. Am J Surg Pathol. 2015;39(5):667–672.
88. Stolnicu S, Barsan I, Hoang L, et al. International endocervical adenocarcinoma criteria and classification (IECC): a new pathogenetic classification for invasive adenocarcinomas of the endocervix. Am J Surg Pathol. 2018;42(2):214–226.
89. Zhang J, Zhang Y, Zhang Z. Prevalence of human papillomavirus and its prognostic value in vulvar cancer: a systematic review and meta-analysis. PLoS One. 2018;13(9):e0204162.
90. Heller DS, Day T, Allbritton JI, et al. Diagnostic criteria for differentiated vulvar intraepithelial neoplasia and vulvar aberrant maturation. J Low Genit Tract Dis. 2021;25(1):57–70.
J Low Genit Tract Dis . 2021 ; 25 (1) : 57 – 70 . Google Scholar PubMed 91. Nooij LS, Ter Haar NT, Ruano D, et al. Genomic characterization of vulvar (pre)cancers identifies distinct molecular subtypes with prognostic significance . Clin Cancer Res . 2017 ; 23 (22) : 6781 – 6789 . Google Scholar PubMed 92. Watkins JC, Howitt BE, Horowitz NS, et al. Differentiated exophytic vulvar intraepithelial lesions are genetically distinct from keratinizing squamous cell carcinomas and contain mutations in PIK3CA . Mod Pathol . 2017 ; 30 (3) : 448 – 458 . Google Scholar PubMed 93. Wong RW, Ng JHY, Han KC, et al. Cervical carcinomas with serous-like papillary and micropapillary components: illustrating the heterogeneity of primary cervical carcinomas . Mod Pathol . 2021 ; 34 (1) : 207 – 221 . Google Scholar PubMed 94. Park KJ. Cervical adenocarcinoma: integration of HPV status, pattern of invasion, morphology and molecular markers into classification . Histopathology . 2020 ; 76 (1) : 112 – 127 . Google Scholar PubMed 95. Ferry JA, Chan JKC. Hematolymphoid proliferations and neoplasia . In: WHO Classification of Tumours Editorial Board . Female Genital Tumours . 5th ed. Lyon, France : IARC ; 2020 ; 4 : 317 – 318 . Google Scholar 96. Ramalingam P, Zoroquiain, Valbuena J, et al. Florid reactive lymphoid hyperplasia (lymphoma-like lesion) of the uterine cervix . Ann Diagn Pathol . 2012 : 16 (1) : 21 – 28 . Google Scholar PubMed 97. Nasioudis D, Kampaktsis P, Frey M, et al. Primary lymphoma of the female genital tract: an analysis of 697 cases . Gynecol Oncol . 2017 ; 145 (3) : 305 – 309 . Google Scholar PubMed 98. Kosari F, Daneshbod Y, Parwaresch R, et al. Lymphomas of the female genital tract: a study of 186 cases and review of the literature . Am J Surg Pathol . 2005 . 29 (11) : 1512 – 1520 . Google Scholar PubMed 99. Swerdlow S, Campo E, Harris NL, et al. WHO Classification of Tumours of Haematopoietic and Lymphoid Tissues . Revised 4th ed. Lyon, France : IARC ; 2017 . Google Scholar 100. WHO Classification of Tumours Editorial Board . Soft Tissue Tumors . 5th ed. Vol. 3 . Lyon, France : IARC ; 2019 . 101. Schoolmeester JK, Michal M, Steiner P, et al. Lipoblastoma-like tumor of the vulva: a clinicopathologic, immunohistochemical, fluorescence in situ hybridization and genomic copy number profiling study of seven cases . Mod Pathol . 2018 ; 31 (12) : 1862 – 1868 . Google Scholar PubMed 102. Chiang S, Cotzia P, Hyman DM, et al. NTRK fusions define a novel uterine sarcoma subtype with features of fibrosarcoma . Am J Surg Pathol . 2018 ; 42 (6) : 791 – 798 . Google Scholar PubMed 103. Chiang S. S100 and pan-Trk staining to report NTRK fusion-positive uterine sarcoma: Proceedings of the ISGyP Companion Society Session at the 2020 USCAP Annual Meeting . Int J Gynecol Pathol . 2021 ; 40 (1) : 24 – 27 . Google Scholar PubMed 104. Momeni-Boroujeni A, Benayed R, Hensley ML, et al. High NTRK3 expression in high-grade endometrial stromal sarcomas with BCOR abnormalities. Abstracts from USCAP 2020: Gynecologic and Obstetric Pathology (1047-1234) . Lab Invest . 2020 ; 100 : 1024 – 1197 . Google Scholar PubMed 105. Gatalica Z, Xiu J, Swensen J, et al. Molecular characterization of cancers with NTRK gene fusions . Mod Pathol . 2019 ; 32 (1) : 147 – 153 . Google Scholar PubMed 106. Solomon JP, Linkov I, Rosado A, et al. NTRK fusion detection across multiple assays and 33,997 cases: diagnostic implications and pitfalls . Mod Pathol . 2020 ; 33 (1) : 38 – 46 . Google Scholar PubMed 107. 
Author notes: The authors have no relevant financial interest in the products or companies described in this article. © 2023 College of American Pathologists.
2490
https://askfilo.com/user-question-answers-smart-solutions/kc-for-the-reaction-ni-4co-is-gives-raise-to-ni-co-4-3231323532363339
Question asked by Filo student

Kc for the reaction Ni + 4CO gives rise to Ni(CO)4

Text solution (Verified)

Concepts: Equilibrium constant, Chemical reaction, Complex formation

Explanation: To find the equilibrium constant (Kc) for the reaction Ni + 4CO ⇌ Ni(CO)4, we start by writing the expression for Kc. The equilibrium constant is defined as the ratio of the concentration of the products to the concentration of the reactants, each raised to the power of their coefficients in the balanced equation. In this case, the balanced equation shows that 1 mole of Ni reacts with 4 moles of CO to produce 1 mole of Ni(CO)4. Therefore, the expression for Kc is

Kc = [Ni(CO)4] / ([Ni][CO]^4)

where [Ni(CO)4] is the concentration of the complex, [Ni] is the concentration of nickel, and [CO] is the concentration of carbon monoxide. This expression allows us to calculate Kc when we know the equilibrium concentrations of the reactants and products.

Step by Step Solution:
Step 1: Write the balanced chemical equation: Ni + 4CO ⇌ Ni(CO)4.
Step 2: Write the expression for the equilibrium constant: Kc = [Ni(CO)4] / ([Ni][CO]^4).
Step 3: Substitute the equilibrium concentrations into the Kc expression to calculate its value.

Final Answer: Kc = [Ni(CO)4] / ([Ni][CO]^4)

Updated on: Jan 21, 2025 | Subject: Smart Solutions | Class: Class 10
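The expression is straightforward to evaluate numerically. A minimal sketch in Python (the equilibrium concentrations below are made-up illustrative values, not part of the original question):

```python
# Evaluating Kc = [Ni(CO)4] / ([Ni][CO]^4) with hypothetical
# equilibrium concentrations in mol/L (illustrative values only).
ni = 0.10       # [Ni]
co = 0.20       # [CO]
ni_co4 = 0.05   # [Ni(CO)4]

kc = ni_co4 / (ni * co**4)
print(f"Kc = {kc:.1f}")  # prints: Kc = 312.5
```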
2491
https://artofproblemsolving.com/wiki/index.php/Alternating_sum?srsltid=AfmBOoobf1AB4kjKPgIt3bxPRDlRhZMHXB_wO7yD4CC7NOqLvPM4Q-Rf
Alternating sum - AoPS Wiki

Alternating sum

An alternating sum is a series of real numbers in which the terms alternate sign. For example, the alternating harmonic series is $1 - \frac{1}{2} + \frac{1}{3} - \frac{1}{4} + \cdots$. Alternating sums also arise in other cases. For instance, the divisibility rule for 11 is to take the alternating sum of the digits of the integer in question and check if the result is divisible by 11.

Given an infinite alternating sum $\sum_{n=1}^{\infty} (-1)^{n+1} a_n$, with $a_n \ge 0$, if the sequence $a_n$ approaches a limit of zero monotonically, then the series converges.

Error estimation

Suppose that an infinite alternating sum satisfies the above test for convergence. Then letting $S$ equal the value of the infinite sum and $S_k$ equal the $k$-term partial sum, the Alternating Series Error Bound states that $|S - S_k| \le a_{k+1}$. The value of the error term $S - S_k$ must also have the opposite sign as the last term of the partial series.

Examples of infinite alternating sums

This article is a stub. Help us out by expanding it.

Category: Stubs
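The error bound is easy to verify numerically. A short sketch in Python using the alternating harmonic series, whose value is $\ln 2$ (the choice of series and of $k$ values is illustrative):

```python
import math

# Partial sums of the alternating harmonic series 1 - 1/2 + 1/3 - ...
# The Alternating Series Error Bound guarantees |S - S_k| <= a_{k+1} = 1/(k+1).
S = math.log(2)  # exact value of the infinite sum

for k in (10, 100, 1000):
    S_k = sum((-1) ** (n + 1) / n for n in range(1, k + 1))
    error = abs(S - S_k)
    bound = 1 / (k + 1)
    assert error <= bound
    print(f"k={k:4d}  S_k={S_k:.6f}  error={error:.2e}  bound={bound:.2e}")
```

For every $k$ the observed error stays below the bound, and for this series it shrinks roughly like $1/(2k)$.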
2492
https://www.mheducation.com/highered/mhp/product/shargel-yu-s-applied-biopharmaceutics-pharmacokinetics-8th-edition.html
ISBN10: 126014299X | ISBN13: 9781260142990
Shargel and Yu's Applied Biopharmaceutics & Pharmacokinetics, 8th Edition
© 2022 | Published January 21, 2022
Access this eBook by following the instructions found below in How to Access Your eBook.
How to Access Your eBook
Step 1. Download Adobe Digital Editions to your PC or Mac desktop/laptop.
Step 2. Register and authorize your Adobe ID (optional). To access your eBook on multiple devices, first create an Adobe ID at account.adobe.com. Then, open Adobe Digital Editions, go to the Help menu, and select "Authorize Computer" to link your Adobe ID.
Step 3. Open your eBook. Use Adobe Digital Editions to open the file. If the eBook doesn't open, contact customer service for assistance.

Overview
The authoritative textbook on the principles and practical applications of biopharmaceutics and pharmacokinetics. Shargel & Yu's Applied Biopharmaceutics & Pharmacokinetics has been the standard textbook in its field for over 40 years. This eighth edition includes recent scientific developments and embodies the collective contribution of experts with deep knowledge and experience in the selected subject areas. It provides the reader with a fundamental understanding of biopharmaceutics and pharmacokinetics principles that can be applied to patient drug therapy and rational drug product development. The chapter sequence has been reorganized into four main sections, providing a more logical sequence for students: the textbook starts with fundamental concepts, followed by application of these principles to optimize drug therapy and to the rational development of drug products. Each chapter includes theoretical concepts with practical examples and clinical applications, and frequently asked questions provide a discussion of overall concepts.

Features:
- Expanded and revised chapters covering scientific advances in biopharmaceutics and pharmacokinetics
- Four main sections providing a natural buildup of knowledge: introduction to biopharmaceutics and pharmacokinetics, fundamentals of biopharmaceutics, pharmacokinetic calculations, clinical pharmacokinetics and pharmacodynamics, and biopharmaceutics and pharmacokinetics in drug product development
- Additional chapters for this edition: physiological factors related to drug absorption; approaches to pharmacokinetics and pharmacodynamics calculations; novel and complex dosage forms; clinical development and therapeutic equivalence of generic drug and biosimilar products; pharmacokinetics and pharmacodynamics in clinical drug product development
- Additional information on drug therapy, drug product performance, and other related topics
- Frequently asked questions, practice problems, clinical examples and learning questions
2493
https://research.cm.utexas.edu/nbauld/fischer.htm
Fischer Projection Structures

I. A Single Stereocenter
- Fischer structures can be used to simply represent absolute configurations of chiral compounds at their stereocenters. An advantage of such structures is that they can easily represent multiple stereocenters, and allow easy identification of planes of symmetry, etc.
- The Fischer structures of (R)-2-butanol and its enantiomer are shown below.
- You are not expected to be able to see from a given Fischer structure whether the configuration is R or S, but if given the structure of R you should be able to draw the Fischer structure of its enantiomer.

II. Two Equivalent Stereocenters
- Given a Fischer structure of a molecule having two stereocenters, you should be able to draw the Fischer structure of its enantiomer and the other stereoisomers.
- Given the configuration of the original stereoisomer, you should be able to designate the stereochemistry of all of the other stereoisomers.
- Note that the Fischer structure implies an eclipsed conformation.
- Consider 2,3-dibromobutane: if given the Fischer structure of, say, the S,S stereoisomer, you should be able to draw the structure of its enantiomer and any other stereoisomers.

III. Two Non-equivalent Stereocenters
- Consider 3-chloro-2-butanol: again, given one structure and its designation, you should be able to draw and designate all other stereoisomers. Of course, reflection in the mirror reverses all configurations. Also, interchanging any two groups at a stereocenter inverts its configuration.
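For readers who want to check R/S designations by machine, here is a minimal sketch using the open-source RDKit toolkit (the toolkit and the SMILES encodings are assumptions of this sketch, not part of the original notes). It assigns CIP labels to the stereocenters of the three stereoisomers of 2,3-dibromobutane:

```python
# Minimal sketch: assigning R/S (CIP) labels with RDKit (assumed installed).
from rdkit import Chem

# The three stereoisomers of 2,3-dibromobutane as SMILES strings.
smiles = {
    "isomer A": "C[C@H](Br)[C@@H](Br)C",  # one enantiomer
    "isomer B": "C[C@@H](Br)[C@H](Br)C",  # its mirror image
    "meso":     "C[C@H](Br)[C@H](Br)C",   # internally compensated
}

for name, smi in smiles.items():
    mol = Chem.MolFromSmiles(smi)
    centers = Chem.FindMolChiralCenters(mol)
    print(name, centers)  # (atom index, 'R' or 'S') for each stereocenter
```

Reflecting a structure (swapping every @ for @@ in the SMILES) flips every label, matching the rule above that reflection in the mirror reverses all configurations.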
2494
https://www.nature.com/articles/s41560-022-01122-6
The effect of European fuel-tax cuts on the oil income of Russia

Johan Gars, Daniel Spiro & Henrik Wachtmeister
Nature Energy, volume 7, pages 989–997 (2022). Analysis, Open access. Published: 15 September 2022.

Abstract
Following Russia's invasion of Ukraine, there has been a surge in transport fuel prices. Consequently, many European Union (EU) countries are cutting taxes on petrol and diesel to shield consumers. Using standard theory and empirical estimates, here we assess how such tax cuts influence the oil income in Russia. We find that an EU-wide tax cut of €0.20 l−1 increases Russia's oil profits by around €8 million per day in the short and long term. This is equivalent to €3,100 million per year, 0.2% of Russia's gross domestic product or 5% of its military spending. We show that a cash transfer to EU citizens—with a fiscal burden equivalent to the tax cut—reduces these side effects to a fraction.
Main

Following Russia's military attack on Ukraine, the European Union (EU) and United States have imposed a large number of sanctions on Russia1,2. The attack has also led to a negative supply shock of oil, partly because Russia's ability to export has been hampered by the lack of will to insure Russian ships3, but also due to industry preparation for the upcoming EU oil import ban4. Together with surging post-pandemic demand, this has led to very high prices of transport fuels5,6. In response, a large number of European countries are either discussing or have already implemented a reduction in fuel taxes to help consumers cope with high prices. These include Austria, Belgium, France, Germany, Italy, the Netherlands, Romania and Sweden (see the Methods section 'Fuel price and taxes' and, for example, refs. 7,8,9 for details). Such tax reductions have problematic consequences since they increase demand, thus making current supply even more scarce. Some of the tax reduction will be attenuated by an increase in the underlying oil price, leading to increased profits for oil producers. Here, we assess the magnitude of this effect using basic theory and empirical estimates from the oil sector. We ask: 'How much does the oil income in Russia increase following fuel-tax reductions in the EU?' Knowing the answer to this question is highly relevant to policy as Russia's oil profits may undermine the geopolitical interests of the EU, reduce the effectiveness of the EU's sanctions and ultimately improve the ability of Russia to wage war. Here, we show that an EU-wide fuel-tax cut equivalent to €0.20 l−1 would increase Russia's oil profits by €36 million per day in the first month, €8.4 million per day during the rest of the first year and €8.2 million per day beyond the first year. The additional profits are equivalent to 0.2% of Russia's gross domestic product (GDP) and 5% of its defence spending. The fiscal cost to the EU would be €170 million per day during the first year. An alternative policy with an equivalent fiscal cost is studied as well: providing EU citizens with cash transfers. Such a policy yields a fraction of the tax cut's profits to Russia and is ultimately more flexible for citizens as they can use the cash on anything they please.

Theoretical approach

Here, we describe how we derive the effects of a decreased fuel tax in the EU on Russian oil profits. Our analysis uses a standard model of supply and demand for oil. Our approach is similar to those of Erickson and Lazarus10 and Faehn et al.11. To analyse the effects of the EU's tax, we distinguish between oil demand for road transport fuels in the EU and remaining global oil demand. Similarly, to focus on the effects on Russian revenues, we distinguish between oil supply from Russia and supply from the rest of the world. We also consider the alternative policy of income transfers to households. The first step is to derive how the global oil market responds to changes in the EU's road transport fuel tax. More detailed derivations are provided in the Methods. The global per-unit crude oil price is denoted p.
To make the crude oil usable as fuel for end users, it must be refined, transported and so on. For the EU, we assume a per-unit cost c for this (in the Methods, we consider analytically the case where these costs are instead proportional to the oil price, and assess this case quantitatively in Supplementary Note 3). Additionally, road fuel users in the EU pay a fuel excise tax τ per unit of fuel, as well as the value-added tax (VAT) rate $v_{\mathrm{EU}}$. The market equilibrium is found by equating the demand for crude oil for road transport in the EU, $D_{\mathrm{EU}}\big((1 + v_{\mathrm{EU}})(p + c + \tau)\big)$ (in many EU countries, VAT also applies to the excise tax; hence, $(1 + v_{\mathrm{EU}})$ multiplies τ), and the remaining global demand for crude oil, $D_{\mathrm{ROW}}(p)$, to the supply from Russia, $S_{\mathrm{RU}}(p)$, and from the rest of the world, $S_{\mathrm{ROW}}(p)$:

$$D_{\mathrm{EU}}\big((1 + v_{\mathrm{EU}})(p + c + \tau)\big) + D_{\mathrm{ROW}}(p) = S_{\mathrm{RU}}(p) + S_{\mathrm{ROW}}(p) \qquad (1)$$

Since our focus here is on tax cuts on transport fuel, $D_{\mathrm{EU}}$ should be understood as the demand for crude oil to be used as oil-based road transport fuel in the EU (that is, mainly petrol and diesel). With some abuse of technical differences, we will often refer to it only as fuel. $D_{\mathrm{ROW}}$ should be understood as the global demand for all other oil products except road fuel in the EU (that is, it also includes non-road oil products in the EU). We assume that the EU's crude demand depends on the fuel price, including costs and taxes, while we assume that rest of the world demand depends on the oil price. The effect of a change in the tax on the equilibrium price can then be found by treating the price as a function of the tax, differentiating the equilibrium condition fully with respect to the tax, solving for the derivative of the price with respect to the tax and rewriting. Let x denote the share of global demand for oil that comes from road fuel demand in the EU and let y denote Russia's share of the global oil supply. The market response to a change in the tax depends on the supply and demand elasticities in the submarkets. Let $\tilde\varepsilon_{D,\mathrm{EU}}$, $\varepsilon_{D,\mathrm{ROW}}$, $\varepsilon_{S,\mathrm{RU}}$ and $\varepsilon_{S,\mathrm{ROW}}$ denote the demand elasticities in the EU and the rest of the world and the supply elasticities of Russia and the rest of the world, respectively. The demand elasticity for the EU measures the response in demanded quantities of oil to changes in the end user prices, including additional costs and taxes, while the other elasticities measure responses of quantities of oil to changes in the oil price. Using this notation, the effect of a tax change on the oil price is given by the derivative

$$\frac{\mathrm{d}p}{\mathrm{d}\tau} = \frac{x\,\frac{p}{p + c + \tau}\,\tilde\varepsilon_{D,\mathrm{EU}}}{y\,\varepsilon_{S,\mathrm{RU}} + (1 - y)\,\varepsilon_{S,\mathrm{ROW}} - x\,\frac{p}{p + c + \tau}\,\tilde\varepsilon_{D,\mathrm{EU}} - (1 - x)\,\varepsilon_{D,\mathrm{ROW}}} \qquad (2)$$

Note the ratio multiplying the EU demand elasticity. This ratio corrects for the fact that demand depends on the price including the supply costs and taxes and hence that a change in the global oil price p will have a smaller percentage effect on consumer prices (see Methods for further discussion of this).
The effect of a tax change Δτ on the oil price can be approximated linearly as

$$\Delta_\tau p \approx \frac{\mathrm{d}p}{\mathrm{d}\tau}\,\Delta\tau \qquad (3)$$

Let the fuel price in the EU be denoted $f \equiv (1 + v_{\mathrm{EU}})(p + c + \tau)$. The change in the fuel price is

$$\Delta_\tau f \approx (1 + v_{\mathrm{EU}})(\Delta_\tau p + \Delta\tau) \qquad (4)$$

The EU's fuel tax revenues are $T_{\mathrm{EU}} \equiv \big(v_{\mathrm{EU}}(p + c) + (1 + v_{\mathrm{EU}})\tau\big) D_{\mathrm{EU}}$ and the fiscal cost of the tax change (that is, the lost tax revenue) can be found by differentiating with respect to τ—taking into account that the oil price depends on the tax—and making a linear approximation. The fiscal cost is then

$$\Delta_\tau T_{\mathrm{EU}} \approx \left[1 + \left(v_{\mathrm{EU}} + \frac{\tau + v_{\mathrm{EU}}(p + c + \tau)}{p + c + \tau}\,\varepsilon_{D,\mathrm{EU}}\right)\left(1 + \frac{\mathrm{d}p}{\mathrm{d}\tau}\right)\right] D_{\mathrm{EU}}\,\Delta\tau \qquad (5)$$

Finally, we translate an oil price change into a change in Russian oil profits. The oil profits of Russia are $\pi_{\mathrm{RU}} = (p - e)\,S_{\mathrm{RU}}(p)$, where e represents the oil extraction costs, which are assumed to be constant per unit. This assumption is realistic under the production changes considered here. Again, treating p as a function of τ, differentiating with respect to τ and making a linear approximation gives that Russian oil profits change by

$$\Delta_\tau \pi_{\mathrm{RU}} \approx \left(1 + \frac{p - e}{p}\,\varepsilon_{S,\mathrm{RU}}\right) S_{\mathrm{RU}}(p)\,\Delta_\tau p \qquad (6)$$

As an explicit aim of the tax cut is to shield consumers from increasing fuel costs, an alternative way to do so is to give general income transfers to people that correspond to the reduction in tax revenues that would result from a decreased fuel tax. From a welfare perspective, this is preferable since people can then choose how to use the money. To the extent that the tax cut is supposed to appease particular groups, it is also possible to direct the income transfers to these groups. Such options include giving money to all car owners (not based on their driving), giving money to particular regions where the population is more reliant on cars (for example, rural areas) or reimbursing the tax collected in a region or municipality to that same region or municipality. Another rationale to uphold fuel taxes is to internalize climate damages from fossil fuel use (that is, Pigouvian taxes). From the perspective of this Analysis, an additional effect of an income transfer is that a share smaller than one will be spent on fuel; hence, the increase in Russian oil profits will be smaller. How much smaller will be assessed quantitatively. To analyse this alternative policy option, we assume that road fuel demand in the EU now depends on disposable income I in addition to the fuel price.
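Before turning to the income-transfer derivation, the mechanics of equations (2)–(6) can be made concrete with a small numerical sketch (assuming Python; all parameter values are rough illustrative stand-ins, not the calibrated inputs of Table 1):

```python
# Minimal numerical sketch of equations (2)-(6). All parameter values are
# rough illustrative assumptions (per-litre units), not the paper's inputs.
p, c, tau, v = 0.63, 0.25, 0.60, 0.20      # crude price, refining cost, excise tax, VAT
x, y = 0.05, 0.10                          # EU road-fuel demand share, Russian supply share
eps_d_eu, eps_d_row = -0.3, -0.2           # demand elasticities (assumed)
eps_s_ru, eps_s_row = 0.2, 0.2             # supply elasticities (assumed)
e = 0.15                                   # Russian extraction cost per litre (assumed)
s_ru = 10e6 * 159                          # Russian supply, litres per day (~10 mb/d)

ratio = p / (p + c + tau)                  # consumer-price vs crude-price correction
dp_dtau = (x * ratio * eps_d_eu) / (
    y * eps_s_ru + (1 - y) * eps_s_row - x * ratio * eps_d_eu - (1 - x) * eps_d_row
)                                                       # equation (2)

d_tau = -0.20 / (1 + v)                    # a 20-cent cut at the pump, pre-VAT
d_p = dp_dtau * d_tau                                   # equation (3)
d_f = (1 + v) * (d_p + d_tau)                           # equation (4)
d_profit = (1 + (p - e) / p * eps_s_ru) * s_ru * d_p    # equation (6)

print(f"oil price change:  {100 * d_p:+.2f} cents/l")
print(f"fuel price change: {100 * d_f:+.2f} cents/l")
print(f"Russian profits:   {d_profit / 1e6:+.1f} M EUR/day")
```

Even with these made-up numbers, a 20-cent cut raises the crude price by only a fraction of a cent per litre yet still transfers several million euros per day to Russia; the paper's calibrated inputs yield the figures reported in the Results.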
Differentiating the market equilibrium condition with respect to I and treating the equilibrium price as a function of income, we get that the oil price change due to an income change is

$$\frac{\mathrm{d}p}{\mathrm{d}I} = \frac{p}{I}\,\frac{x\,\varepsilon_{I,\mathrm{EU}}}{y\,\varepsilon_{S,\mathrm{RU}} + (1 - y)\,\varepsilon_{S,\mathrm{ROW}} - x\,\frac{p}{p + c + \tau}\,\tilde\varepsilon_{D,\mathrm{EU}} - (1 - x)\,\varepsilon_{D,\mathrm{ROW}}} \qquad (7)$$

where $\varepsilon_{I,\mathrm{EU}}$ is the income elasticity of road fuel demand in the EU. The effects of a change in the disposable income ΔI on the oil price, p, and the fuel price in the EU, f, are

$$\Delta_I p \approx \frac{\mathrm{d}p}{\mathrm{d}I}\,\Delta I \quad \text{and} \quad \Delta_I f \approx (1 + v_{\mathrm{EU}})\,\Delta_I p \qquad (8)$$

The effect on Russian oil profits due to the income transfer is

$$\Delta_I \pi_{\mathrm{RU}} \approx \left(1 + \frac{p - e}{p}\,\varepsilon_{S,\mathrm{RU}}\right) S_{\mathrm{RU}}\,\Delta_I p \qquad (9)$$

Data and estimates vary over different time horizons

We quantitatively assess the effects described theoretically in three different cases: the very short term, the short term and the long term. Here, we sketch the qualitative effects in the different time horizons. The size of the effects depends on the numerical values, which are presented at the end of this section. We start by describing the long-term effects. Note that we use long term to describe the effects beyond 1 year, to be thought of as in 1–3 years but not more. Other studies deriving price elasticity estimates often use long term to describe effects on longer time horizons. In the Methods, we discuss our selection and range of elasticity estimates in more detail and how they relate to the time horizon. In the long term, supply is somewhat elastic and demand is rather elastic too. This is because producers have time to adjust their production and make some capacity investments. Likewise, consumers can acquire new habits or find solutions based on a new fuel price, and those in the process of buying a new vehicle will take the fuel price into account (see, for example, ref. 12). This is the case illustrated in Fig. 1. The grey lines show the demand from the EU and the rest of the world, respectively. The black lines represent global demand and supply. A tax reduction in the EU shifts the EU's demand outwards (dashed grey line). This in turn increases global demand by an equivalent amount (dashed black line). There are two effects of this: an increase in the oil price (a shift along the vertical axis) and an increase in the quantity of oil produced (a shift along the horizontal axis).

Fig. 1: Illustration of supply and demand in the long term. Oil demand for road transport in Europe (D_EU) increases by a tax cut (the D_EU demand line shifts to the right from the solid to the dashed line position). Consequently, total world oil demand (D_EU+ROW) increases by an equal amount (the D_EU+ROW line shifts to the right from the solid to the dashed line position). World oil supply (S) is somewhat elastic and a new, higher, equilibrium price is attained at a higher quantity level (dashed lines).

In the short term, to be thought of as 1–12 months, the supply elasticity in terms of quantity is lower than in the long term. This can be seen in Fig. 2a, where supply is illustrated as fixed.
Demand elasticity is lower than in the long term since most of the consumer choice regards how much to drive rather than what vehicle to buy or how to change long-term habits. Since, as in the figure, supply is fixed, a reduced tax results in an increased oil price but no increase in production or consumption. In practice, some of the increased demand from the EU is attenuated by decreased consumption in the rest of the world.

Fig. 2: Illustration of supply and demand in the short term and in the very short term. a, In the short term, oil demand for road transport in the EU (D_EU) increases by a tax cut (the D_EU demand curve shifts to the right from the solid to the dashed line position). Consequently, total world oil demand (D_EU+ROW) increases by an equal amount (the D_EU+ROW demand curve shifts to the right from the solid to the dashed position). World oil supply (S) is fixed (inelastic) and a new, higher, equilibrium price is attained (dashed line) at the same quantity level. b, In the very short term, oil demand for road transport in Europe (D_EU) increases by a tax cut as in the short term, but due to transport rigidities, the EU is modelled as an isolated market. Hence, the total demand equals EU demand (D_EU). Supply to the EU (S_EU) is fixed (inelastic) and a new, higher, equilibrium price is attained (dashed line) at the same quantity level.

In the very short term, to be thought of as up to 1 month, there are limitations on how much oil, which was originally meant for other markets, can be redirected quickly to the EU. The reason for this is that bilateral contracts of supply can be viewed as partly fixed; oil tankers on their way to one country cannot in the very short term easily be sent elsewhere. To capture this, we model the EU as an isolated oil market. The supply is fixed, both in terms of quantity and who supplies the oil. Implicitly, this also means that the oil price in the EU may differ from the oil price elsewhere. This case is illustrated in Fig. 2b. Importantly, Russia is here a relatively much larger supplier than on the global market. Demand is also less elastic than in the short term. We view our modelling here as a thought experiment. Reality in the very short term probably lies somewhere in between our modelling here and the short-term scenario described above. One simplification and limitation of our model is that we do not consider oil inventories. We discuss the potential effects of an EU import embargo in the next section.

The parameters, quantities and shares in equations (1)–(9) are based on previous research and current data. They are reported in Table 1. Where relevant, we distinguish between very-short-term, short-term and long-term elasticities and shares. Motivations for the values are provided in the table, with further information available in Methods.

Table 1: Parameters for equations (1)–(9) describing Russia's oil income

In a sensitivity analysis in Supplementary Note 2, we perturb the key parameters to show how this affects our results.

Effects of fuel-tax cuts on Russian oil income

Here, we analyse the effects of a fuel-tax cut in the very short term, the short term and the long term. The tax cut considered is equivalent to €0.20 l−1 including VAT (that is, it amounts to €0.2/(1 + VAT)).
This is based on a weighted average of currently announced tax cuts in EU countries (equivalent to roughly 10% of the price) and on the possibility that all countries cut the taxes to the EU's minimum level (see the Methods section 'Fuel price and taxes' for details). The results in the very short term are presented in the top row of Table 2. Of the 20 cents of tax reduction, 7 cents are passed through to oil suppliers. Russia, being an important supplier, attains a large share of the fiscal cost of the policy, making an additional €36 million per day. Apart from financing Russia, the policy is also quite ineffective in lowering consumer prices in the very short term; consumers only experience 12 cents of reduction per litre despite the tax reduction being 20 cents. The results here take into account Russia's current reduced supply to the EU (see the Methods section 'Size of markets and Russian export declines').

Table 2: Effect of an EU fuel-tax cut of €0.20 l−1

In the short-term case, to be thought of as the remaining part of the first year, the consumer price in the EU is reduced by almost the full tax reduction and the now global oil price is increased by much less than in the very short term. Nevertheless, Russia—a large supplier also globally—is still receiving sizeable additional profits (€8.4 million per day or €3.1 billion in year equivalents). In the long-term case, to be thought of as beyond 1 year and up to 3 years, supply becomes somewhat elastic and so does demand. The price effects are again smaller and the fiscal cost to the EU is smaller than in the very short and short term. This is because, instead of an oil price increase, there is an increase in the supply. Russia's additional oil profits are still sizeable at €8.2 million per day or €3.0 billion per year. In Supplementary Note 2, we carry out a sensitivity analysis of these results with respect to other parameter values. The results are generally not very sensitive in the very short term. The long-term and short-term results are more sensitive. The sensitivity analysis suggests that Russia's short-term profit gains can be one-third compared with those using our preferred parameter values (reported here in the main text). However, they may also be around 70% higher. We also investigate whether the discount on Russian oil (through the Urals price) is likely to change our results (Supplementary Note 4) and let the costs c be proportional to the oil price (Supplementary Note 3). With regard to the Urals price, while the time window is too short to infer the long-term effects, our analysis suggests that an increased global oil price (for example, due to a tax cut, as considered here) will lead to an equivalent increase in the Urals price. Hence, our results are probably not affected by this discount. We refer the reader to Supplementary Note 4 for more details. A final issue of robustness is whether the results change due to an import embargo. On 3 June 2022, the EU announced an extensive but incomplete import embargo on Russian oil42 to be imposed with a delay of several months. Had the embargo been imposed before the tax cuts, the particular effects relying on the rigidity of transport would disappear. It would make the mechanisms and results in the very short term identical to those in the short term, which are independent of whether an oil import embargo exists or not.
However, since the import embargo will take force with a long delay and since most of the tax cuts have already been implemented, we view the very short-term results presented here as an assessment of what has possibly already materialized. It should be noted that the results for the short and long term are driven by an increase in the global oil price due to higher EU demand. Furthermore, in such time frames, transport routes can be adjusted. Hence, the results in the short and long term are independent of whether or not the EU imposes an import embargo on Russia. As long as there are sufficiently many or large buyers of Russian oil, the market will adjust the trade flows. Are the additional profits for Russia large? We now discuss this from a few different perspectives. First note that in the very short term, a large share of the EU's fiscal cost (24%) is sent to Russia. In the short term and long term, much less is sent, but still around 5–7% of what is meant to help European consumers is instead going to Russia. The additional Russian profits are sizeable compared with Russia's pre-invasion GDP, which was about €3.7 billion per day. The EU's tax cut increases Russia's GDP by ~1% in the very short term versus 0.2% in the short term and the long term. We can also compare them with Russia's military spending, which was about €160 million per day pre-invasion (based on ref. 13, the average yearly military spending in 2015–2020 was US$65 billion; a $ to € exchange rate of 0.9 makes this €160 million per day). The daily profit increase then corresponds to 23, 5 and 5% of military spending in the very short term, short term and long term, respectively. Almost all of these revenues stay in the Russian economy as 93% of Russian production is owned by Russian companies14 (both state owned and privately owned). Rosneft and Gazprom—the two main majority state-owned companies—produce ~46% of Russia's oil. The average government take for oil (that is, combined state taxes, tariffs and so on) in Russia is 50% of total revenues14. Hence, almost all oil revenues stay in the Russian economy while 50% goes directly to the Russian state as taxes and fees and an additional 23% goes to state-controlled companies. Finally, it should be noted that the effects are linear in the size of the tax cut. This implies that, should the tax cut be twice as large, the Russian additional profits will be twice as large too, and vice versa in case the tax cut is half of what we study. This follows from our linear approximation (see the section 'Theoretical approach'). This is a reasonable assumption as long as the effects on aggregate global demand and supply are small, which indeed is the case in both the short- and long-term cases. In the very short term (where the EU is a submarket), changes in the tax cut may not scale linearly. This is because, if the tax cut is sufficiently large, the EU demand change is sizeable. Hence, a doubling of the tax cut cannot be assumed to imply doubling of Russian profits; it can be more and it can be less than doubling. One issue that plays a role is whether the elasticities are constant over the demand and supply curves. To date, this is a largely unresolved question in the literature.

Effects of an alternative direct cash transfer policy

As seen, a fuel-tax cut in the EU provides Russia with a large additional income.
Furthermore, part of the help meant for EU fuel consumers in the form of a tax cut is instead passed through to increased oil prices, especially in the very short term. The question is then whether it is possible to help EU consumers in a way that does not benefit Russia. Here, we look at one such alternative, namely in the form of a cash transfer to consumers with an equivalent fiscal burden to the tax cut. That is, we tie the transfer to €150 million, €170 million and €115 million in the very short, short and long term, respectively. The results are presented in Table 3. The increased income leads to an increased demand for fuel. However, since the cash can be spent on anything, most of it is used for other things. Hence, the fuel price increases only marginally (1.1 cents in the very short term and much less in the long term). One perspective on this, which highlights the main benefit of cash transfers compared with fuel-tax cuts, is that consumers, through these policies, receive the equivalent of 20 cents per litre of fuel consumed on average. To receive the benefits of a tax cut, consumers have to buy fuel. If they receive it in the form of cash instead, they can choose to spend it all on fuel, to spend none of it on fuel or somewhere in between; cash is a more flexible currency than a tax cut. Even a person who spends the whole transfer on fuel will gain from a cash transfer since the fuel price increases only by 1.1 cents. Therefore, this allows for varying preferences in the population (it is also possible to direct the cash to particular groups that are hit harder by the price increase; see 'Theoretical approach'). Another benefit of a cash transfer is that it avoids decreasing the fuel tax (that is, a Pigouvian tax), thus abating climate concerns.

Table 3: Effect of a fiscal equivalent cash transfer

Perhaps most importantly for the subject matter here, Russia's profit gains are substantially lower with the cash transfer (in the short term ~15% of the profits received from a tax cut and in the long term much less). It can be noted that Russia's profits under a cash transfer are also substantially lower when compared with the lowest profits of a tax cut considered in the sensitivity analysis (see Supplementary Note 2). The conclusion that an income transfer yields much lower Russian profits is not likely to change even if we consider other expenditures of households in the EU. The reason for this is that an income elasticity of 1, as we use here15, implies that fuel's share of income is constant when income increases. Put differently, since private road-transport fuels correspond to 7% of consumer spending in the EU (figure for 2008 from ref. 16), only 7% of the cash transfer would go to road fuel. Transport accounts for 13.2% of households' expenditures17, not all of which goes to oil products. Elasticity for other transport is not very different from that for road fuels (for aviation, income elasticity is around 1 (ref. 18)). Hence, the amount of cash transfer that may go to oil products is effectively capped by oil's cost share in the EU. Elasticities for other categories of consumer spending vary. For food, elasticity is around 0.5 in the EU19 and it accounts for 14.8% of expenditures, while housing-related expenditures correspond to 25.7% (a small share of which is gas for heating). Various smaller categories correspond to the remaining 48% (ref. 20).
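To see the income channel in numbers, the earlier sketch can be extended to equations (7)–(9) (again, all values are illustrative assumptions; in particular the aggregate disposable-income figure is a made-up round number, not taken from the paper):

```python
# Sketch of equations (7)-(9): oil-price and Russian-profit effects of a
# fiscally equivalent cash transfer. Parameters reuse the earlier sketch;
# the aggregate income figure is an assumed round number, not paper data.
p, c, tau = 0.63, 0.25, 0.60
x, y = 0.05, 0.10
eps_d_eu, eps_d_row = -0.3, -0.2
eps_s_ru, eps_s_row = 0.2, 0.2
eps_i_eu = 1.0                  # income elasticity of road fuel demand (text: ~1)
e, s_ru = 0.15, 10e6 * 159
income = 25e9                   # assumed aggregate EU disposable income, EUR/day
transfer = 170e6                # transfer with the tax cut's fiscal cost, EUR/day

ratio = p / (p + c + tau)
denom = y * eps_s_ru + (1 - y) * eps_s_row - x * ratio * eps_d_eu - (1 - x) * eps_d_row
dp_dI = (p / income) * (x * eps_i_eu) / denom           # equation (7)
d_p = dp_dI * transfer                                  # equation (8)
d_profit = (1 + (p - e) / p * eps_s_ru) * s_ru * d_p    # equation (9)

print(f"oil price change: {100 * d_p:.3f} cents/l")
print(f"Russian profits:  {d_profit / 1e6:.2f} M EUR/day")
```

Even when the entire transfer is handed to consumers, only the small fuel share of the marginal spending reaches the oil market, which is why the induced price and profit effects are a fraction of those of the tax cut.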
Conclusion

We have analysed how much a fuel-tax cut in the EU will increase Russian income from oil. The effects are substantial. A tax cut of €0.20 increases Russia's daily oil profits by €8.4 million during the first year and €8.2 million if it remains longer (in the very short term, the daily profit increase can be substantially higher). We show that a fiscally equivalent cash transfer can achieve similar alleviation to consumers to a tax cut but with a fraction of the increased profits for Russia.

Methods

Derivation of model

Here, we provide derivations of the model used in the analysis. We distinguish between the demand for crude oil for vehicle fuel in the EU, $D_{\rm EU}$, and the remaining oil demand $D_{\rm ROW}$. Note that the rest of the world demand includes oil demand in the EU for uses other than vehicle fuel. Since we want to study the effects of taxes on vehicle fuel in the EU, we explicitly consider these and assume that the demand for oil for fuel in the EU depends on the price, including refinement, transportation and taxes. Let $p$ denote the crude oil price, $v$ the VAT rate and $\tau$ the per-unit tax on vehicle fuel in the EU. The costs for refinement and transportation can have one per-unit component, $c$, and one component proportional to the price, $z$. We thus have the fuel price

$$f = (1 + v)\left((1 + z)p + c + \tau\right) \qquad (10)$$

and crude demand in the EU depends on this price and on income $I$, $D_{\rm EU}(f, I)$. The rest of the world demand depends directly on the crude oil price, $D_{\rm ROW}(p)$. In the baseline model, we focus on per-unit costs, corresponding to $z = 0$. In the Supplementary Information, we instead consider completely proportional costs, corresponding to $c = 0$. The supply is a function of the crude oil price and we distinguish between oil supply from Russia, $S_{\rm RU}(p)$, and supply from the rest of the world, $S_{\rm ROW}(p)$.
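To make the pricing identity concrete, here is a minimal numerical sketch of equation (10) in the baseline case $z = 0$. All input levels (crude cost per litre, VAT rate, excise duty) are illustrative stand-ins chosen to land near the €1.9 l−1 consumer price used later in the Methods, not the paper's exact calibration.

```python
# Minimal sketch of the retail-price identity in equation (10) with z = 0.
# The numeric levels below are illustrative assumptions, not the paper's inputs.

def fuel_price(p, v, tau, c, z=0.0):
    """Equation (10): f = (1 + v) * ((1 + z) * p + c + tau)."""
    return (1 + v) * ((1 + z) * p + c + tau)

p = 0.62    # crude cost per litre of refined product (~EUR 105/bbl over 170 l/bbl)
c = 0.45    # fixed per-litre costs: refining, transport, margins (Methods base case)
tau = 0.51  # per-unit excise duty in EUR/l (illustrative EU-average level)
v = 0.20    # VAT rate (illustrative EU-average level)

print(f"f = {fuel_price(p, v, tau, c):.2f} EUR/l")  # ~1.9 EUR/l
```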
The world market price of crude oil is determined by the equilibrium in the global oil market:

$$D_{\rm EU}(f, I) + D_{\rm ROW}(p) = S_{\rm RU}(p) + S_{\rm ROW}(p). \qquad (11)$$

Model of fuel-tax cut

Treating the equilibrium price $p$ as a function of $\tau$, and differentiating both sides fully with respect to $\tau$, taking equation (10) into account, gives

$$(1 + v)\frac{\partial D_{\rm EU}(f, I)}{\partial f}\left((1 + z)\frac{{\rm d}p}{{\rm d}\tau} + 1\right) + D_{\rm ROW}'(p)\frac{{\rm d}p}{{\rm d}\tau} = \left(S_{\rm RU}'(p) + S_{\rm ROW}'(p)\right)\frac{{\rm d}p}{{\rm d}\tau}. \qquad (12)$$

Let $D$ and $S$ denote total demand and supply and let $x$ and $y$ denote shares of demand and supply as given by

$$D_{\rm EU} = xD,\quad D_{\rm ROW} = (1 - x)D,\quad S_{\rm RU} = yS \quad{\rm and}\quad S_{\rm ROW} = (1 - y)S. \qquad (13)$$

Multiplying the left- and right-hand sides of equation (12) by $p/D$ and $p/S$, respectively (with $S = D$ in equilibrium), yields

$$\frac{{\rm d}p}{{\rm d}\tau} = \frac{(1 + v)\frac{p}{D}\frac{\partial D_{\rm EU}(f, I)}{\partial f}}{\frac{p}{S}S_{\rm RU}'(p) + \frac{p}{S}S_{\rm ROW}'(p) - (1 + v)(1 + z)\frac{p}{D}\frac{\partial D_{\rm EU}(f, I)}{\partial f} - \frac{p}{D}D_{\rm ROW}'(p)}.$$

Using equation (13) and multiplying the terms containing the derivative of $D_{\rm EU}$ by $f/f$, we get

$$\frac{{\rm d}p}{{\rm d}\tau} = \frac{\frac{(1 + v)p}{f}\frac{xf}{D_{\rm EU}}\frac{\partial D_{\rm EU}(f, I)}{\partial f}}{\frac{yp}{S_{\rm RU}}S_{\rm RU}'(p) + \frac{(1 - y)p}{S_{\rm ROW}}S_{\rm ROW}'(p) - \frac{(1 + v)(1 + z)p}{f}\frac{xf}{D_{\rm EU}}\frac{\partial D_{\rm EU}(f, I)}{\partial f} - \frac{(1 - x)p}{D_{\rm ROW}}D_{\rm ROW}'(p)}.$$

Using equation (10), we arrive at

$$\frac{{\rm d}p}{{\rm d}\tau} = \frac{x\frac{p}{(1 + z)p + c + \tau}\tilde\varepsilon_{D,{\rm EU}}}{y\varepsilon_{S,{\rm RU}} + (1 - y)\varepsilon_{S,{\rm ROW}} - x\frac{(1 + z)p}{(1 + z)p + c + \tau}\tilde\varepsilon_{D,{\rm EU}} - (1 - x)\varepsilon_{D,{\rm ROW}}}, \qquad (14)$$

where we have the price elasticities of supply

$$\varepsilon_{S,{\rm RU}} \equiv \frac{p}{S_{\rm RU}}S_{\rm RU}'(p) \quad{\rm and}\quad \varepsilon_{S,{\rm ROW}} \equiv \frac{p}{S_{\rm ROW}}S_{\rm ROW}'(p) \qquad (15)$$

and demand

$$\tilde\varepsilon_{D,{\rm EU}} \equiv \frac{f}{D_{\rm EU}}\frac{\partial D_{\rm EU}(f, I)}{\partial f} \quad{\rm and}\quad \varepsilon_{D,{\rm ROW}} \equiv \frac{p}{D_{\rm ROW}}D_{\rm ROW}'(p). \qquad (16)$$

We can differentiate the fuel price from equation (10) with respect to $\tau$ to get
$$\frac{{\rm d}f}{{\rm d}\tau} = (1 + v)\left((1 + z)\frac{{\rm d}p}{{\rm d}\tau} + 1\right).$$

The changes in the oil price $p$ and the EU fuel price $f$ induced by a change $\Delta\tau$ in the tax can be linearly approximated as

$$\Delta_\tau p \approx \frac{{\rm d}p}{{\rm d}\tau}\Delta\tau \quad{\rm and}\quad \Delta_\tau f \approx (1 + v)\left((1 + z)\Delta_\tau p + \Delta\tau\right). \qquad (17)$$

The EU tax revenues associated with fuel contain both the direct VAT revenue and the excise duty with VAT applied to it:

$$T_{\rm EU} = \left(v\left((1 + z)p + c\right) + (1 + v)\tau\right)D_{\rm EU}(f, I).$$

Differentiating this fully with respect to $\tau$ gives

$$\begin{aligned}\frac{{\rm d}T_{\rm EU}}{{\rm d}\tau} &= \left[\tau + v\left((1 + z)p + c + \tau\right)\right]\frac{\partial D_{\rm EU}(f, I)}{\partial f}\left[(1 + v)\left((1 + z)\frac{{\rm d}p}{{\rm d}\tau} + 1\right)\right] + \left[v\left((1 + z)\frac{{\rm d}p}{{\rm d}\tau} + 1\right) + 1\right]D_{\rm EU}\\ &= (1 + v)\frac{\tau + v\left((1 + z)p + c + \tau\right)}{f}\,\frac{f}{D_{\rm EU}}\frac{\partial D_{\rm EU}(f, I)}{\partial f}\left[(1 + z)\frac{{\rm d}p}{{\rm d}\tau} + 1\right]D_{\rm EU} + \left[v\left((1 + z)\frac{{\rm d}p}{{\rm d}\tau} + 1\right) + 1\right]D_{\rm EU}\\ &= \left[1 + \left(\frac{\tau + v\left((1 + z)p + c + \tau\right)}{(1 + z)p + c + \tau}\tilde\varepsilon_{D,{\rm EU}} + v\right)\left((1 + z)\frac{{\rm d}p}{{\rm d}\tau} + 1\right)\right]D_{\rm EU}.\end{aligned}$$

Multiplying by $\Delta\tau$ and using $\Delta_\tau f$ from equation (17) delivers a linear approximation of the change in tax revenues from the tax change $\Delta\tau$:

$$\Delta_\tau T_{\rm EU} \approx \left[\Delta\tau + \left(\frac{\tau + v\left((1 + z)p + c + \tau\right)}{(1 + z)p + c + \tau}\tilde\varepsilon_{D,{\rm EU}} + v\right)\left((1 + z)\Delta_\tau p + \Delta\tau\right)\right]D_{\rm EU}.$$

The Russian oil profits are

$$\pi_{\rm RU} = (p - e)S_{\rm RU}(p),$$

where $e$ represents Russian extraction costs that are assumed to be constant.
Treating $p$ as a function of $\tau$ and differentiating fully with respect to $\tau$ gives

$$\frac{{\rm d}\pi_{\rm RU}}{{\rm d}\tau} = \frac{{\rm d}p}{{\rm d}\tau}S_{\rm RU} + (p - e)S_{\rm RU}'(p)\frac{{\rm d}p}{{\rm d}\tau} = \left(1 + \frac{p - e}{p}\varepsilon_{S,{\rm RU}}\right)S_{\rm RU}\frac{{\rm d}p}{{\rm d}\tau}$$

and, using $\Delta_\tau p$ from equation (17), a linear approximation of the change in Russian oil profits is

$$\Delta_\tau\pi_{\rm RU} \approx \left(1 + \frac{p - e}{p}\varepsilon_{S,{\rm RU}}\right)S_{\rm RU}\Delta_\tau p.$$

Model of income transfer

Here, we consider the effects of transferring income to households instead of lowering the fuel tax. Treating the equilibrium oil price as a function of $I$, differentiating equation (11) fully with respect to $I$ and using equation (10) gives

$$\frac{\partial D_{\rm EU}(f, I)}{\partial I} + \frac{\partial D_{\rm EU}(f, I)}{\partial f}(1 + v)(1 + z)\frac{{\rm d}p}{{\rm d}I} + D_{\rm ROW}'(p)\frac{{\rm d}p}{{\rm d}I} = \left(S_{\rm RU}'(p) + S_{\rm ROW}'(p)\right)\frac{{\rm d}p}{{\rm d}I}. \qquad (18)$$

Multiplying the left- and right-hand sides of equation (18) by $p/D$ and $p/S$, respectively (with $S = D$ in equilibrium), we get

$$\frac{{\rm d}p}{{\rm d}I} = \frac{\frac{p}{D}\frac{\partial D_{\rm EU}(f, I)}{\partial I}}{\frac{p}{S}S_{\rm RU}'(p) + \frac{p}{S}S_{\rm ROW}'(p) - (1 + v)(1 + z)\frac{p}{D}\frac{\partial D_{\rm EU}(f, I)}{\partial f} - \frac{p}{D}D_{\rm ROW}'(p)}.$$

Using equation (13) yields

$$\frac{{\rm d}p}{{\rm d}I} = \frac{\frac{p}{I}\frac{xI}{D_{\rm EU}}\frac{\partial D_{\rm EU}(f, I)}{\partial I}}{\frac{yp}{S_{\rm RU}}S_{\rm RU}'(p) + \frac{(1 - y)p}{S_{\rm ROW}}S_{\rm ROW}'(p) - \frac{(1 + v)(1 + z)p}{f}\frac{xf}{D_{\rm EU}}\frac{\partial D_{\rm EU}(f, I)}{\partial f} - \frac{(1 - x)p}{D_{\rm ROW}}D_{\rm ROW}'(p)}.$$

Using equations (10), (15) and (16), we arrive at

$$\frac{{\rm d}p}{{\rm d}I} = \frac{p}{I}\,\frac{x\varepsilon_{I,{\rm EU}}}{y\varepsilon_{S,{\rm RU}} + (1 - y)\varepsilon_{S,{\rm ROW}} - x\frac{(1 + z)p}{(1 + z)p + c + \tau}\tilde\varepsilon_{D,{\rm EU}} - (1 - x)\varepsilon_{D,{\rm ROW}}},$$

where the income elasticity of demand in the EU is

$$\varepsilon_{I,{\rm EU}} \equiv \frac{I}{D_{\rm EU}}\frac{\partial D_{\rm EU}(f, I)}{\partial I}.$$

The responses of the oil price $p$ and the EU fuel price $f$ to an income change $\Delta I$ can be linearly approximated as

$$\Delta_I p \approx \frac{{\rm d}p}{{\rm d}I}\Delta I \quad{\rm and}\quad \Delta_I f \approx (1 + v)(1 + z)\Delta_I p. \qquad (19)$$

Treating $p$ as a function of $I$, differentiating the oil profits $\pi_{\rm RU} = (p - e)S_{\rm RU}(p)$
fully with respect to $I$ and using the Russian supply elasticity in equation (15) gives

$$\frac{{\rm d}\pi_{\rm RU}}{{\rm d}I} = \frac{{\rm d}p}{{\rm d}I}S_{\rm RU} + (p - e)S_{\rm RU}'(p)\frac{{\rm d}p}{{\rm d}I} = \left(1 + \frac{p - e}{p}\varepsilon_{S,{\rm RU}}\right)S_{\rm RU}\frac{{\rm d}p}{{\rm d}I}$$

and, using equation (19), a linear approximation of the change in Russian oil profits is

$$\Delta_I\pi_{\rm RU} \approx \left(1 + \frac{p - e}{p}\varepsilon_{S,{\rm RU}}\right)S_{\rm RU}\Delta_I p.$$

Parameter values of elasticity of demand

The price elasticity of demand for road transport fuels (gasoline and diesel) and crude oil is low in the short term but increases with time. In the short term, many fuel consumers can only drive less to reduce consumption, while in the longer term many can shift to more efficient vehicles or change their commuting distance or mode of transportation. Several studies compile existing estimates of the demand elasticity of gasoline, diesel and crude oil (see, for example, refs. 21,22,23,24). These estimates are derived using different methods over different time periods and locations. Short-term gasoline elasticity estimates range from −0.04 to −0.5 (refs. 23,24), with several review studies deriving averages around −0.25 (ref. 24). Aklilu24 also provides additional original estimates for EU countries using recent data, finding an EU average short-term gasoline elasticity of −0.255. We use −0.25 in our calculations, in line with these recent EU estimates as well as the wider literature samples. Long-term gasoline demand elasticity estimates range from −0.2 to −1 (ref. 23). Aklilu24 compiles review studies with even higher ranges but with averages around −0.7. Aklilu's own empirical study finds a long-term EU average of −0.88. We use −0.9 in our calculations based on these results.

Crude oil demand elasticity is usually found to be lower than that of gasoline. This is to be expected since crude oil is only a part of the gasoline price. If we assume that crude oil represents half the cost of retail gasoline, a 10% increase in the price of crude oil would translate to a 5% increase in the price of gasoline, and the demand elasticities for oil would be about half those for gasoline21. Caldara et al.22 compile 31 studies for short-term world oil demand in the range of −0.04 to −0.9 with a mean of −0.22 and a median of −0.13. We follow Hamilton21 and use half of the gasoline estimates for wider oil demand (that is, −0.125 for our short-term oil demand elasticity and −0.45 for our long-term oil demand elasticity), which is also in line with the estimates of Caldara et al.22. In the sensitivity analysis in Supplementary Note 2, we also explore lower and higher values in the range found in the literature.

Parameter values of elasticity of supply

The price elasticity of global oil supply is low (close to zero) in the short term and grows only slowly in the longer term. New conventional oil fields take several years to bring into production, and additional supply in the short term (within 1 year) can only come from inventories, politically withheld supply (including, for example, Saudi Arabian spare capacity), shale oil production and infill drilling in conventional fields.
Compared with demand elasticity studies, supply elasticity studies are rare. Caldara et al.22 compile six studies applying different methods and data and find short-term (within 1 year) supply elasticities in the range of 0–0.27. These estimates are based on historical data and do not necessarily reflect current or future oil supply. As a complement, we also rely on modelled forward-looking estimates of supply elasticity derived by Wachtmeister25 using a bottom-up modelling framework and Rystad UCube field-by-field data, as well as our own judgement of the current oil market outlook. For our very-short-term and short-term scenarios, we use a supply elasticity of 0 for both Russia and the rest of the world. This corresponds to a hypothetical scenario with no inventory draw, no additional production by the Organization of the Petroleum Exporting Countries and a timeframe below 6 months where the shale response is still low. In the sensitivity analysis (Supplementary Note 2), we present a short-term case using 0.1, which can be seen as reflecting a 12-month shale response and/or a stronger response from the Organization of the Petroleum Exporting Countries. For our long-term scenario, we use 0.13, which is in line with a modelled 3-year horizon estimate of global supply by Wachtmeister25. A value of 0.13 is also used as a central estimate by Erickson and Lazarus10, even though their studied time horizon is longer than ours. In the sensitivity analysis (see Supplementary Note 2), we explore 0.2 as a higher estimate, reflecting a stronger supply response. Note again that our long-term scenario reflects a supply response in 1–3 years. Other studies reporting long-term supply elasticity estimates might use long term to describe longer time horizons. For example, Gately26, Brook et al.27 and Erickson and Lazarus10 use long term to describe responses up to 15 years ahead.

Other costs and refinery margins

We translate oil production (crude oil, condensate and natural gas liquids) in barrels per day to a corresponding volume of refined products. We make the simplifying assumption that one barrel of oil yields 170 l of products and fuels that can be sold to consumers. In the base case, the variable production cost of these fuels is assumed to correspond directly to the crude oil price (that is, the variable fuel production cost is the global oil price per barrel (Brent; in US$ per barrel) measured in Euros per litre of fuel product, p). The retail fuel price (consumer price) is then the variable fuel production cost (oil price, p) plus the other, fixed, production costs, c (refining, transport, margins and so on), plus the fuel tax, τ; VAT is then applied to all of these. In the sensitivity analysis (see Supplementary Note 3), we explore other variable production costs (z) and discuss which case is more likely. Our base case c of €0.45 l−1 is derived backwards from a consumer price of €1.9 l−1. c thus includes the current refinery margins (the value difference between crude oil and refined products), which vary in time and are currently at historically high levels.

Size of markets and Russian export declines

We assume that Russia has already lost 1 million barrels per day of oil exports based on recent export data28 and analysis29. In January, before the war, the global supply and demand of oil was estimated to be 100 million barrels per day (ref. 30). Consequently, we assume the current oil market to be 99 million barrels per day.
Road transport fuel's share of total EU oil consumption is 47.5% (ref. 31). The EU's share of global oil consumption is 12% (ref. 32), which yields our x = 0.475 × 0.12 = 5.7%. The EU imports, in normal times, ~35% of its oil from Russia33. For the very short term, we assume that the reduction in Russia's export (1 million barrels per day) fully accrues to the EU. This implies y = 29% in the very short term.
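The pieces above (equation (14), the linearisation in equation (17) and the parameter values just listed) can be wired together in a few lines. The paper's actual implementation is the MATLAB Supplementary Code; the sketch below instead uses Python with an approximate calibration (the VAT rate, price level and Russia's global supply share are assumptions here), so it reproduces the sign and rough magnitude of the short-term results rather than the exact Table 2 figures.

```python
# Short-term effect of the EUR 0.20 tax cut via equation (14): a rough sketch.
# Assumed calibration: v, tau, p and Russia's supply share y are stand-ins.
x, y = 0.057, 0.10             # EU road-fuel demand share; Russia ~10 of 99 Mb/d
eD_EU, eD_ROW = -0.25, -0.125  # short-term fuel / crude demand elasticities (Methods)
eS_RU, eS_ROW = 0.0, 0.0       # short-term supply elasticities (Methods)
p, c, tau, v, z = 0.62, 0.45, 0.51, 0.20, 0.0  # EUR per litre of fuel product

share = p / ((1 + z) * p + c + tau)  # the factor p / ((1+z)p + c + tau) in eq. (14)
dp_dtau = (x * share * eD_EU) / (
    y * eS_RU + (1 - y) * eS_ROW
    - x * (1 + z) * share * eD_EU
    - (1 - x) * eD_ROW
)

d_tau = -0.20                  # the tax cut
d_p = dp_dtau * d_tau          # linearised oil-price change, equation (17)
S_RU = 10e6 * 170              # Russian supply in litres/day (10 Mb/d assumed)
d_profit = (1 + 1.0 * eS_RU) * S_RU * d_p  # profit change; (p - e)/p set to 1
print(f"dp ~ {100 * d_p:.2f} cents/l; extra Russian profit ~ EUR {d_profit / 1e6:.0f}M/day")
```

With these stand-in numbers the oil price rises by just under one cent per litre and Russian profits by roughly €10–15 million per day, the same order of magnitude as the €8.4 million per day reported for the short term.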
Fuel price and taxes

For our analysis, we need to construct an EU-level fuel price and fuel tax. However, fuel prices and taxes differ between the EU countries. We construct EU-level values using weighted averages based on each country's share of total EU fuel consumption using data from Eurostat34. The data underlying the quantification in this section are summarized in Supplementary Table 1. For the country-level fuel price, we use data from the European Commission (from the fuel-prices.eu database35). We use the average prices for the month of May 2022 (see columns 3 and 4 of Supplementary Table 1) and obtain weighted averages of €1.89 l−1 for petrol and €1.87 l−1 for diesel. Hence, we choose €1.9 l−1 as the transport fuel price.

In assessing the EU's tax reduction, we want to emphasize that announced and implemented tax reductions come in several forms (for example, excise duty reduction and directed VAT reductions), new ones are being suggested and their time spans are varied. Hence, there is scarcely any way to assess the final aggregate outcome until possibly several years from the time of writing. Instead, to obtain a rough estimate of what the final outcome may be, we look at two scenarios. The first scenario is based on the possibility that all EU countries reduce their excise tax to the EU minimum level (€0.359 l−1 for unleaded petrol and €0.33 l−1 for diesel). In this scenario, we use the countries' tax levels from July 2021 (ref. 36) and reduce them to the minimum level (we use the numbers for unleaded petrol and gas oil for propellant use (diesel); in some instances, different tax levels apply to different subcategories, in which case we take an average). The data are summarized in columns 5–8 of Supplementary Table 1. We then obtain a weighted average tax reduction of €0.24 for petrol and €0.15 for diesel (these do not include the indirect effects on VAT that apply to excise duties).

The second scenario (columns 9 and 10 in Supplementary Table 1) takes the post-invasion announcements to date (mid-June 2022). For this, we take the compilation of Transport & Environment37, cross-check their entries with news articles and add directed VAT reductions for fuels (for Estonia and Romania). For most countries, we verify the numbers from Transport & Environment37 (see the footnote of Supplementary Table 1). We interpret (but ultimately cannot verify) the numbers to exclude the indirect effects on VAT payments in those cases (such as Germany) where VAT applies to the excise duty (the number for Sweden in Supplementary Table 1 includes this indirect VAT effect). The most consequential decision is probably our choice to set Poland's pre-invasion VAT reduction to zero. In calculating VAT reductions in Euros, we use the average price in January and February 2022 (this is conservative since prices were lower than post-invasion). We then obtain a weighted average tax reduction of €0.165 l−1 for petrol and €0.132 l−1 for diesel. We also calculate the percentage reduction in fuel taxes. We obtain percentage reductions for petrol and diesel of 9.7% and 8.2%, respectively, compared with the price in January and February; 8.8% and 7.4%, respectively, compared with the 3-month pre-reduction average price; and 5% and 4.3%, respectively, compared with the price in May. These numbers are not shown in Supplementary Table 1. Based on these scenarios, each with its own caveats, we use a tax reduction of €0.20 (including indirect VAT effects) for the analysis.

Data availability
All of the data generated or analysed during this study are included in this published article (and its Supplementary Information files) or are publicly available.

Code availability
Model implementation code written in MATLAB is available as Supplementary Code.

References
1. Sanctions Adopted Following Russia's Military Aggression Against Ukraine (European Commission, 2022).
2. Ukraine-/Russia-Related Sanctions (U.S. Department of the Treasury, 2022).
3. Saul, J. All at sea: Russian-linked oil tanker seeks a port. Reuters (2022).
4. Russia's War on Ukraine: EU Adopts Sixth Package of Sanctions Against Russia (European Commission, 2022).
5. Andersson, J. & Tippmann, C. The impact of rising gasoline prices on Swedish households—is this time different? Free Network (2022).
6. Martin, J. Gas prices hit new record sparking fears over bill rises. BBC News (2022).
7. Jones, G. Analysis: climate goals take second place as EU states cut petrol prices. Reuters (2022).
8. Chambers, M. German finance minister plans gasoline discount. Reuters (2022).
9. De Beaupuy, F. France plans $2.2 billion fuel rebate in bid to help motorists. Bloomberg UK (2022).
10. Erickson, P. & Lazarus, M. Impact of the Keystone XL pipeline on global oil markets and greenhouse gas emissions. Nat. Clim. Change 4, 778–781 (2014).
11. Faehn, T., Hagem, C., Lindholt, L., Mæland, S. & Einar Rosendahl, K. Climate policies in a fossil fuel producing country—demand versus supply side policies. Energy J. 38, 77–102 (2017).
12. Severen, C. & van Benthem, A. A. Formative experiences and the price of gasoline. J. Appl. Econ. 14, 256–284 (2022).
13. Military expenditure (current USD)—Russian Federation. The World Bank (2022).
14. UCube (Rystad Energy, 2022).
15. Dahl, C. A. Measuring global gasoline and diesel price and income elasticities. Energy Policy 41, 2–13 (2012).
16. Expenditure on Personal Mobility (European Environment Agency, 2011).
17. Transport costs EU households over €1.1 trillion. Eurostat (2020).
18. Hanson, D., Toru Delibasi, T., Gatti, M. & Cohen, S. How do changes in economic activity affect air passenger traffic? The use of state-dependent income elasticities to improve aviation forecasts. J. Air Transp. Manag. 98, 102147 (2022).
19. Femenia, F. A meta-analysis of the price and income elasticities of food demand. Ger. J. Agric. Econ. 68, 77–98 (2019).
20. Household consumption by purpose. Eurostat (2021).
21. Hamilton, J. D. Causes and consequences of the oil shock of 2007–08. Brookings Pap. Econ. Act. 40, 215–283 (2009).
22. Caldara, D., Cavallo, M. & Iacoviello, M. Oil price elasticities and oil price fluctuations. J. Monet. Econ. 103, 1–20 (2019).
23. Hössinger, R., Link, C., Sonntag, A. & Stark, J. Estimating the price elasticity of fuel demand with stated preferences derived from a situational approach. Transp. Res. A Policy Pract. 103, 154–171 (2017).
24. Aklilu, A. Z. Gasoline and diesel demand in the EU: implications for the 2030 emission goal. Renew. Sustain. Energy Rev. 118, 109530 (2020).
25. Wachtmeister, H. World Oil Supply in the 21st Century: A Bottom-up Perspective. PhD thesis, Uppsala Univ. (2020).
26. Gately, D. OPEC's incentives for faster output growth. Energy J. 25, 75–96 (2004).
27. Brook, A. M., Price, R., Sutherland, D., Westerlund, N. & André, C. Oil Price Developments: Drivers, Economic Consequences and Policy Responses (OECD, 2004).
28. Wech, D. Russian crude exports remain high. Vortexa (2022).
29. Oil Market Report—May 2022 (IEA, 2022).
30. Oil Market Report—February 2022 (IEA, 2022).
31. Oil and petroleum products—a statistical overview. Eurostat (2022).
32. Statistical Review of World Energy 2021 (BP, 2021).
33. Statistical Review of World Energy 2020 (BP, 2020).
34. Final energy consumption in transport by type of fuel. Eurostat (2022).
35. Fuel Prices Archive by Country (European Commission, 2022).
36. Excise Duty Tables: Part II Energy Products and Electricity (European Commission, 2021).
37. Taxpayers face €9bn bill for fuel tax cuts skewed towards the rich. Transport & Environment (2022).
38. Oil market and Russian supply—Russian supplies to global energy markets. IEA (2022).

Acknowledgements
We thank S. O'Brien for excellent research assistance; N. Rossbach, P. Olsson and J. Norberg at the Swedish Defence Research Agency for valuable input on Russian military spending; M. Höök for valuable support; and P. Bansal for important comments. This work has been supported by Formas under grant number 2020-00371 (to J.G. and D.S.).

Funding
Open access funding provided by Uppsala University.

Author information
Beijer Institute, The Royal Swedish Academy of Sciences, Stockholm, Sweden: Johan Gars. Department of Economics, Uppsala University, Uppsala, Sweden: Daniel Spiro. Department of Earth Sciences, Uppsala University, Uppsala, Sweden: Henrik Wachtmeister.

Contributions
All of the authors contributed equally to the project. J.G., D.S. and H.W. together designed the study, developed the methodology, interpreted the results and wrote and edited the manuscript. H.W. led the data and input parameter work, J.G. led the model implementation and D.S. led the coordination of the project.

Corresponding author
Correspondence to Henrik Wachtmeister.

Ethics declarations
Competing interests: The authors declare no competing interests.

Peer review
Peer review information: Nature Energy thanks Michael Ross, Michael Plante and the other, anonymous, reviewer(s) for their contribution to the peer review of this work.

Additional information
Publisher's note: Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Supplementary information
Supplementary Information: Supplementary Notes 1–4, including Supplementary Tables 1–6 and Supplementary Figs. 1–3.
Supplementary Software: Supplementary Code. MATLAB code used for the implementation of the analysis.
Rights and permissions
Open Access. This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder.

About this article
Cite this article: Gars, J., Spiro, D. & Wachtmeister, H. The effect of European fuel-tax cuts on the oil income of Russia. Nat. Energy 7, 989–997 (2022).
Received: 28 April 2022. Accepted: 15 August 2022. Published: 15 September 2022. Issue date: October 2022.

Associated content: European fuel tax cuts increase Russian oil profits. Johan Gars, Daniel Spiro & Henrik Wachtmeister. Nature Energy Policy Brief (15 September 2022).

Fig. 1: Illustration of supply and demand in the long term.
Fig. 2: Illustration of supply and demand in the short term and in the very short term.
http://verso.mat.uam.es/~fernando.chamizo/supervision/TFG/past/memoirs/TFG_carlos_cervinno.pdf
Departamento de Matemáticas, Facultad de Ciencias, Universidad Autónoma de Madrid

The Riemann Zeta Function (La función zeta de Riemann)

Bachelor's thesis (Trabajo de fin de grado), Degree in Mathematics
Author: Carlos Cerviño Luridiana
Advisors: Fernando Chamizo Lorente, Eva Tourís Lojo
Academic year 2024–2025

Abstract
The Riemann zeta function, ζ, is one of the most studied objects of analytic number theory. Its significance stems from the relation between its zeros and the distribution of the prime numbers, and it is in this context that the well-known Riemann hypothesis naturally arises. The purpose of this work is to develop the background necessary to make this relation between the zeta function and the prime numbers explicit. To achieve this, we establish the analytic properties of the zeta function, beginning with the construction of meromorphic extensions to broader domains and continuing through its relation with the Gamma function and the functional equation that arises from this relation. Afterwards, we proceed with the study of a variety of functions related to the zeta function, which play a fundamental role in Donald J. Newman's proof of the Prime Number Theorem. Finally, we state the famous Riemann hypothesis and, assuming that it is false, we build lower bounds for the difference between the prime-counting function and the integral logarithm.

Contents
1 Basic analytic properties . . . 1
1.1 Definition and analytic properties . . . 1
1.2 Approximations from the partial sums . . . 3
2 The functional equation of ζ . . . 7
2.1 The function Γ . . . 7
2.2 The functional equation . . . 10
2.3 The trivial zeros and other values . . . 12
3 Relation to arithmetic functions . . . 15
3.1 The Euler product . . . 15
3.2 Arithmetic functions and the function ζ . . . 16
4 The distribution of the prime numbers . . . 21
4.1 History of the theorem . . . 21
4.2 The prime number theorem . . . 22
5 The Riemann hypothesis . . . 27
5.1 Preliminary results . . . 27
5.2 The Riemann hypothesis . . . 28
Appendix . . . 31
Bibliography . . . 33

Chapter 1. Basic analytic properties

The content of this first chapter consists of a first definition of the Riemann zeta function, ζ, and the proof of some of its analytic properties. Since analytic (meromorphic) extensions are unique, we will write ζ both for the function in its first definition and for its extension to any other domain.

1.1. Definition and analytic properties

The goal of this section is to give a definition of ζ, prove that it is meromorphic and construct some of its extensions. Naturally, we begin with the definition.

Definition 1.1. The Riemann zeta function is defined as the series
$$\zeta(s) = \sum_{n=1}^{\infty} \frac{1}{n^s} \qquad (1.1)$$
for $\Re(s) > 1$.

In the third chapter we will see that this definition is equivalent to $\prod_{p \in \mathbb{P}} (1 - p^{-s})^{-1}$, where $\mathbb{P}$ denotes the set of prime numbers. For now, let us check that ζ is analytic.

Proposition 1.2. The function ζ is well defined and holomorphic on $\Re(s) > 1$, and the series diverges for real $s < 1$.

Proof. Since for each $n \in \mathbb{N}$ the function $f_n(s) = n^{-s} = e^{-s \ln n}$ is holomorphic, we apply the Weierstrass M-test. In this case, for each $\delta > 1$ we take $M_{\delta,n} := n^{-\delta}$, so that $|n^{-s}| = n^{-\Re(s)} \le M_{\delta,n}$ for every $n \in \mathbb{N}$ and $\Re(s) \ge \delta$. Thus, since $\sum_{n=1}^{\infty} n^{-\delta} < +\infty$, the Weierstrass M-test guarantees that the series converges locally uniformly and therefore defines a holomorphic function on $\Re(s) > 1$. To see that the series diverges for real $s < 1$ it suffices to compare it term by term with the harmonic series. □

Now that we know ζ is holomorphic on $\Re(s) > 1$, it is natural to ask whether it is possible to define it for $\Re(s) \le 1$ in some "reasonable" way. To move in this direction we will use the following result:

Proposition 1.3. For $\Re(s) > 1$ the following identities hold:
$$\zeta(s) = s \int_1^{\infty} \frac{\lfloor x \rfloor}{x^{s+1}} \, dx = \frac{s}{s-1} - \frac{1}{2} - s \int_1^{\infty} \frac{x - \lfloor x \rfloor - 1/2}{x^{s+1}} \, dx. \qquad (1.2)$$

Proof. For the first identity, we see that we can write
$$\frac{\lfloor x \rfloor}{x^{s+1}} = \sum_{n=1}^{\infty} \frac{n}{x^{s+1}} \mathbf{1}_{[n,n+1)}(x) = \lim_{N \to \infty} \sum_{n=1}^{N} \frac{n}{x^{s+1}} \mathbf{1}_{[n,n+1)}(x).$$
For real $s$ this is a series of non-negative functions, so the monotone convergence theorem lets us interchange the order of summation and integration:
$$s \int_1^{\infty} \frac{\lfloor x \rfloor}{x^{s+1}} \, dx = s \sum_{n=1}^{\infty} \int_n^{n+1} \frac{n}{x^{s+1}} \, dx = \sum_{n=1}^{\infty} n \left( n^{-s} - (n+1)^{-s} \right) = \sum_{n=2}^{\infty} n^{-s} + 1 = \zeta(s).$$
For complex $s$, the sequence of functions being integrated is dominated by $\lfloor x \rfloor x^{-\Re(s)-1}$ and, since this function is integrable, the dominated convergence theorem again lets us take the limit outside the integral and repeat the computation, now for $s$ not necessarily real with $\Re(s) > 1$.

For the second identity, we note that for $\Re(s) > 1$ the functions $x^{-s}$ and $x^{-s-1}$ are integrable as functions of $x$ on $[1, \infty)$, so by linearity and by (1.2) we obtain
$$\frac{s}{s-1} - \frac{1}{2} - s \int_1^{\infty} \frac{x - \lfloor x \rfloor - 1/2}{x^{s+1}} \, dx = \zeta(s) + \frac{s}{s-1} - s \int_1^{\infty} x^{-s} \, dx - \frac{1}{2} + \frac{s}{2} \int_1^{\infty} \frac{1}{x^{s+1}} \, dx.$$
Since the four terms on the right cancel in pairs, only $\zeta(s)$ survives. □

This proposition brings us close to a first extension of ζ to $\Re(s) > 0$: since the function $f(s, x) = (x - \lfloor x \rfloor - 1/2)\, x^{-s-1}$ is integrable as a function of $x$ on $[1, \infty)$ even for $0 < \Re(s) < 1$, its integral converges locally uniformly and defines a holomorphic function on the whole domain $\Re(s) > 0$. To verify this last claim we rely on Fubini's theorem and Morera's theorem.
Given a closed, piecewise $C^1$ curve $\gamma$ in $\Re(s) > 0$, Fubini's theorem shows that
$$\oint_{\gamma} \int_1^{\infty} \frac{x - \lfloor x \rfloor - 1/2}{x^{s+1}} \, dx \, ds = \int_1^{\infty} (x - \lfloor x \rfloor - 1/2) \oint_{\gamma} \frac{1}{x^{s+1}} \, ds \, dx,$$
and since $s \mapsto x^{-s-1}$ is holomorphic, the line integral vanishes and takes the double integral to 0, exactly as we needed for Morera's theorem to guarantee that the function is holomorphic. Combining this result with the second identity of (1.2), we end up finding a meromorphic extension of ζ to $\Re(s) > 0$, with a simple pole of residue 1 at $s = 1$. In the second chapter we will see that the function can be extended to the rest of the complex plane and that this is its only pole.

1.2. Approximations from the partial sums

Throughout this section we look for a way of approximating ζ in the region $0 < \Re(s) < 1$, which is the region on which the Riemann hypothesis focuses. Before getting into the matter, we continue with another extension, this time to $\Re(s) > -1$.

Proposition 1.4. The function ζ admits an extension to $\Re(s) > -1$ with a single pole at $s = 1$, which is simple, given by
$$\zeta(s) = \frac{1}{s-1} + \frac{1}{2} - s(s+1) \int_1^{\infty} g(x)\, x^{-s-2} \, dx, \qquad (1.3)$$
where $g(x) = \int_1^x \left( t - \lfloor t \rfloor - \tfrac{1}{2} \right) dt$.

Proof. The function $g$ is bounded and 1-periodic, as one sees by expanding:
$$g(x) = \int_1^x \left( t - \lfloor t \rfloor - \tfrac{1}{2} \right) dt = \frac{(x - \lfloor x \rfloor)^2 - (x - \lfloor x \rfloor)}{2}. \qquad (1.4)$$
Appealing again to Morera's theorem, we can see that
$$\frac{1}{s-1} + \frac{1}{2} - s(s+1) \int_1^{\infty} g(x)\, x^{-s-2} \, dx$$
defines a meromorphic function on $\Re(s) > -1$ with a single pole at $s = 1$, so it only remains to check that this function coincides with ζ. For this, we focus for now on the integral term. Integrating by parts, we obtain
$$(s+1) \int_1^{\infty} g(x)\, x^{-s-2} \, dx = \left[ -g(x)\, x^{-s-1} \right]_{x=1}^{\infty} + \int_1^{\infty} \frac{x - \lfloor x \rfloor - 1/2}{x^{s+1}} \, dx.$$
Since both boundary terms are 0, we substitute and see that
$$\frac{1}{s-1} + \frac{1}{2} - s(s+1) \int_1^{\infty} g(x)\, x^{-s-2} \, dx = \frac{1}{s-1} + \frac{1}{2} - s \int_1^{\infty} \frac{x - \lfloor x \rfloor - 1/2}{x^{s+1}} \, dx = \zeta(s),$$
where the last equality comes from (1.2). □

This result allows us to continue the extension of ζ and lays the ground for the following generalization:

Proposition 1.5. For $N \in \mathbb{Z}^+$, $\Re(s) > -1$ and with $g$ defined in (1.4), the following identity holds:
$$\zeta(s) = \sum_{n=1}^{N} \frac{1}{n^s} + \frac{N^{1-s}}{s-1} - \frac{N^{-s}}{2} - s(s+1) \int_N^{\infty} g(x)\, x^{-s-2} \, dx. \qquad (1.5)$$

Proof. For this proposition we give a more rudimentary proof. Write $\zeta_N(s) = \sum_{n=1}^{N} n^{-s}$. Attacking the integral term first, it yields
$$2 \int_N^{\infty} g(x)\, x^{-s-2} \, dx = \frac{N^{1-s}}{s-1} + \sum_{n=N}^{\infty} \frac{n^2}{s+1} \left( n^{-s-1} - (n+1)^{-s-1} \right) - 2 \int_N^{\infty} \frac{\lfloor x \rfloor}{x^{s+1}} \, dx - \frac{N^{-s}}{s} + \int_N^{\infty} \frac{\lfloor x \rfloor}{x^{s+2}} \, dx.$$
If here we focus again on the integral terms, we see that they have the same structure, so it suffices to check that
$$\int_N^{\infty} \frac{\lfloor x \rfloor}{x^{s+1}} \, dx = \frac{1}{s} \left( \sum_{n=N}^{\infty} n^{1-s} - \sum_{n=N+1}^{\infty} (n-1)\, n^{-s} \right) = \frac{\zeta(s) - \zeta_N(s) + N^{1-s}}{s},$$
while the same ideas of reindexing and expanding the series give
$$\sum_{n=N}^{\infty} \left( n^{1-s} - n^2 (n+1)^{-s-1} \right) = 2 \left( \zeta(s) - \zeta_N(s) \right) - \left( \zeta(s+1) - \zeta_N(s+1) \right) + N^{1-s}.$$
Therefore, after some work and many simplifications, the integral term becomes
$$\frac{1}{2} \left( \frac{N^{1-s}}{s-1} + \frac{2 \left( \zeta(s) - \zeta_N(s) \right) + N^{1-s} + N^{-s}}{s+1} - \frac{2 \left( \zeta(s) - \zeta_N(s) \right) + 2N^{1-s} + N^{-s}}{s} \right).$$
From here, substituting and simplifying leads to the result. □

In this way, starting from the partial sums of ζ we have obtained a family of equivalent expressions for the meromorphic extension of ζ on $\Re(s) > -1$ that will allow us to approximate our function on $0 < \Re(s) < 1$ from partial sums. Before stating and proving the theorem we will need the following lemma.

Lemma 1.6.
The function $g$ defined in (1.4) admits the following Fourier series expansion:
$$g(x) = -\frac{1}{12} + \frac{1}{4\pi^2} \sum_{n \in \mathbb{Z} \setminus \{0\}} \frac{1}{n^2} e^{2\pi i n x} = -\frac{1}{12} + \frac{1}{2\pi^2} \sum_{n=1}^{\infty} \frac{\cos(2\pi n x)}{n^2}. \qquad (1.6)$$

Proof. We can see that $g(x) = \frac{1}{2} x (x - 1)$ on $(0, 1)$. Since it is 1-periodic and piecewise $C^1$, the convergence is absolute. Integrating by parts, we see that its coefficients are
$$\hat{g}(0) = \int_0^1 g(x) \, dx = -\frac{1}{12}, \qquad \hat{g}(n) = \int_0^1 g(x)\, e^{-2\pi i n x} \, dx = \frac{1}{4\pi^2 n^2}, \quad n \in \mathbb{Z} \setminus \{0\}.$$
The passage from the exponential series to the cosine series is immediate using the exponential definition of the cosine. □

And now we have all the ingredients to prove the result we were looking for, which lets us approximate ζ in the region $0 < \Re(s) < 1$ through its partial sums and a correction term.

Theorem 1.7. There exists a constant $C$ such that
$$\left| \zeta(s) - \sum_{n=1}^{N} \frac{1}{n^s} - \frac{N^{1-s}}{s-1} \right| \le C N^{-\Re(s)}$$
for $0 < \Re(s) < 1$ and $1 + |\Im(s)| \le N \in \mathbb{Z}^+$.

Proof. We define the sequences of functions $f_n(x) := 2\pi n x - \Im(s) \log x$ and $h_n(x) := \left( f_n'(x)\, x^{\Re(s)+2} \right)^{-1}$ for $n \in \mathbb{Z} \setminus \{0\}$, and observe that for $\Re(s) > -1$, $1 + |\Im(s)| \le N \in \mathbb{Z}^+$ and $x > N$ the functions are well defined, since $f_n'$ does not vanish. Relying on the identity $e^{2\pi i n x} x^{-s-2} = f_n'(x)\, e^{i f_n(x)} h_n(x)$, we integrate by parts on $[N, \infty)$ and see that
$$\int_N^{\infty} e^{2\pi i n x} x^{-s-2} \, dx = \int_N^{\infty} f_n'(x)\, e^{i f_n(x)} h_n(x) \, dx = i e^{i f_n(N)} h_n(N) + i \int_N^{\infty} e^{i f_n(x)} h_n'(x) \, dx.$$
Since $f_n$ is real, $|e^{i f_n(x)}| = 1$, so taking moduli and using the triangle inequality,
$$\left| \int_N^{\infty} e^{2\pi i n x} x^{-s-2} \, dx \right| \le \left| h_n(N) \right| + \int_N^{\infty} \left| h_n'(x) \right| dx = 2 \left| h_n(N) \right|,$$
since $h_n'$ is continuous and does not vanish, so its sign does not change. Moreover,
$$\left| h_n(N) \right| = \frac{N^{-\Re(s)-1}}{\left| 2\pi n N - \Im(s) \right|} \le N^{-\Re(s)-1},$$
as can be checked by observing the region we work in. Now, using (1.5), we see that
$$\zeta(s) - \sum_{n=1}^{N} \frac{1}{n^s} - \frac{N^{1-s}}{s-1} = -\frac{N^{-s}}{2} - s(s+1) \int_N^{\infty} g(x)\, x^{-s-2} \, dx,$$
so we only need to bound the modulus of the last term by $A N^{-\Re(s)}$ for some constant $A$. To do so, we first bound the integral: we substitute $g$ by its Fourier series expansion and take the series out of the integral with Fubini's theorem:
$$\left| \int_N^{\infty} g(x)\, x^{-s-2} \, dx \right| = \left| -\frac{1}{12} \frac{N^{-s-1}}{s+1} + \frac{1}{4\pi^2} \sum_{n \in \mathbb{Z} \setminus \{0\}} \frac{1}{n^2} \int_N^{\infty} e^{2\pi i n x} x^{-s-2} \, dx \right| \le \frac{N^{-\Re(s)-1}}{12} + \frac{1}{2\pi^2} \sum_{n \in \mathbb{Z} \setminus \{0\}} \frac{|h_n(N)|}{n^2} \le A N^{-\Re(s)-1},$$
where $A = \frac{1}{12} + \frac{1}{2\pi^2} \sum_{n \in \mathbb{Z} \setminus \{0\}} n^{-2} < \infty$. Therefore,
$$\left| s(s+1) \int_N^{\infty} g(x)\, x^{-s-2} \, dx \right| \le A N^{-\Re(s)} \frac{|s+1|}{N} \le A N^{-\Re(s)} \frac{2 + |\Im(s)|}{N} \le 2 A N^{-\Re(s)},$$
so that $C = 1/2 + 2A$ is the desired bound. □

Chapter 2. The functional equation of ζ

The goal of this chapter is to find an extension of ζ to the rest of the complex plane by means of a functional equation, following Riemann's ideas in his famous 1859 memoir [2, §8]. From it, the values of ζ at the negative even integers follow without too much difficulty.

2.1. The function Γ

Before going after the functional equation we must establish certain properties of another function involved in it: the function Γ.

Definition 2.1. We define the Gamma function by the integral
$$\Gamma(s) = \int_0^{\infty} x^{s-1} e^{-x} \, dx, \qquad (2.1)$$
for $\Re(s) > 0$.

This function is the quintessential generalization of the factorial, as we will see next. In fact, its historical origin lies in the interpolation of factorials.

Theorem 2.2. The function Γ satisfies $\Gamma(s+1) = s\Gamma(s)$ and admits a meromorphic extension to $\mathbb{C}$ with simple poles at $\mathbb{Z}_{\le 0}$.

Proof. Integrating by parts, we see that if $\Re(s) > 0$ we have
$$\Gamma(s+1) = \int_0^{\infty} x^s e^{-x} \, dx = \left[ -x^s e^{-x} \right]_{x=0}^{\infty} + \int_0^{\infty} s x^{s-1} e^{-x} \, dx = s\Gamma(s).$$
If for $-1 < \Re(s) \le 0$ with $s \neq 0$ we define $\Gamma(s) = \frac{\Gamma(s+1)}{s}$, we obtain a meromorphic continuation to $\Re(s) > -1$ with a simple pole at $s = 0$. Repeating, on $-2 < \Re(s) \le -1$ we define $\Gamma(s) = \frac{\Gamma(s+1)}{s} = \frac{\Gamma(s+2)}{s(s+1)}$, and so on inductively to arrive at, for $n \in \mathbb{N}$,
$$\Gamma(s) := \frac{\Gamma(s+n)}{s(s+1) \cdots (s+n-1)} \qquad (2.2)$$
for $\Re(s) > -n$. This gives rise to a meromorphic function on the complex plane with simple poles at $\mathbb{Z}_{\le 0}$. □

As we said, Γ is a generalization of the factorial, since $\Gamma(n+1) = n!$ for $n \in \mathbb{Z}_{\ge 0}$, as can be deduced from the functional equation together with $\Gamma(1) = 1$. To continue with other functional equations of Γ we will use the following lemma:

Lemma 2.3. For $\Re(s) > \Re(w) > 0$ we have
$$\Gamma(s) \int_0^{\infty} x^{w-1} (1+x)^{-s} \, dx = \Gamma(s-w)\, \Gamma(w). \qquad (2.3)$$

Proof. Writing the integral formulation of Γ we see that
$$\Gamma(s) \int_0^{\infty} x^{w-1} (1+x)^{-s} \, dx = \int_0^{\infty} \int_0^{\infty} x^{w-1} (1+x)^{-s} y^{s-1} e^{-y} \, dx \, dy,$$
which with the change of variables $x = u/v$, $y = u + v$ becomes
$$\int_0^{\infty} \int_0^{\infty} u^{w-1} v^{1-w} (v+u)^{-s} v^s (u+v)^{s-1} e^{-u-v} (u+v)\, v^{-2} \, du \, dv = \int_0^{\infty} u^{w-1} e^{-u} \, du \int_0^{\infty} v^{s-w-1} e^{-v} \, dv = \Gamma(w)\, \Gamma(s-w). \qquad \square$$

Corollary 2.4. The function Γ does not vanish, and therefore $1/\Gamma$ defines an entire function that vanishes only on $\mathbb{Z}_{\le 0}$.

Proof. If $\Gamma(s_0) = 0$ for some $s_0$ with $\Re(s_0) > 0$, we can take $w_n = 1/n$ starting from an $N \in \mathbb{N}$ such that $N \Re(s_0) > 1$, so that the lemma applies. In this way, we have $\Gamma(s_0 - 1/n)\, \Gamma(1/n) = 0$ for all $n \ge N$. Since $\Gamma(1/n) > 0$ for $n \in \mathbb{N}$, necessarily $\Gamma(s_0 - 1/n) = 0$ for $n \ge N$, contradicting the identity principle. In this way, we see that there are no zeros in $\Re(s) > 0$, and by (2.2) there are none in the rest of the complex plane either. □

Now we have the ingredients to proceed with the proofs of the reflection and duplication formulas which, although not necessary to reach a functional equation for ζ, will serve us to deduce a second equivalent equation.

Theorem 2.5 (Euler's reflection formula). The function Γ satisfies
$$\Gamma(1-w)\, \Gamma(w) = \frac{\pi}{\sin(\pi w)} \qquad (2.4)$$
for $w \in \mathbb{C} \setminus \mathbb{Z}$.

Proof. Taking $s = 1$ and $0 < \Re(w) < 1$ in (2.3) we see that
$$\Gamma(1-w)\, \Gamma(w) = \int_0^{\infty} \frac{x^{w-1}}{1+x} \, dx =: I(w). \qquad (2.5)$$
We define $\gamma_R$ as the closed, piecewise $C^1$ curve built by connecting $i/R$ with $R + i/R$ and $R - i/R$ with $-i/R$ by line segments, which we will call $\gamma_{R,+}$ and $\gamma_{R,-}$ respectively, and the corresponding endpoints by circular arcs of radius $1/R$ and $\sqrt{R^2 + R^{-2}} =: R_*$ centred at the origin. [Figure: the contour $\gamma_R$.]

By the residue theorem, we see that for every $R > 0$
$$\oint_{\gamma_R} \frac{z^{w-1}}{1+z} \, dz = 2\pi i \operatorname{Res}_{z=-1} \left( \frac{z^{w-1}}{1+z} \right) = -2\pi i\, e^{\pi i w}.$$
Separating the integral into the four integrals over line segments and arcs, we see that the arc integrals tend to 0: a basic estimate of the modulus shows that the one over the inner arc is bounded by $\pi R^{1 - \Re(w)} / (R - 1)$ and the one over the outer arc by $2\pi R_*^{\Re(w)} / (R_* - 1)$, and both $\Re(w)$ and $1 - \Re(w)$ are less than 1, so both bounds vanish as $R \to \infty$. On the other hand, on the segment $\gamma_{R,+}$,
$$\lim_{R \to \infty} \int_{\gamma_{R,+}} \frac{z^{w-1}}{1+z} \, dz = \lim_{R \to \infty} \int_0^R \frac{(x + i/R)^{w-1}}{1 + x + i/R} \, dx = I(w)$$
because the integrands are dominated by that of $I(\Re(w))$. Meanwhile, owing to the orientation and to the continuity of the determination of the angle along $\gamma_R$,
$$\lim_{R \to \infty} \int_{\gamma_{R,-}} \frac{z^{w-1}}{1+z} \, dz = -e^{2\pi i w} \lim_{R \to \infty} \int_{\gamma_{R,+}} \frac{z^{w-1}}{1+z} \, dz.$$
Putting everything together,
$$-2\pi i\, e^{\pi i w} = \lim_{R \to \infty} \oint_{\gamma_R} \frac{z^{w-1}}{1+z} \, dz = \left( 1 - e^{2\pi i w} \right) \lim_{R \to \infty} \int_{\gamma_{R,+}} \frac{z^{w-1}}{1+z} \, dz = \left( 1 - e^{2\pi i w} \right) I(w),$$
and solving we arrive at $I(w) = \pi \operatorname{cosec}(\pi w)$. This extends to the rest of the complex plane by considering $f(w) = \Gamma(1-w)\Gamma(w) - \pi \operatorname{cosec}(\pi w)$ on $0 < \Re(w) < 1$ and checking that it extends analytically to 0 on $\mathbb{C} \setminus \mathbb{Z}$ in a natural way. □

Theorem 2.6.
(Legendre's duplication formula). The function Γ satisfies
$$\Gamma(s)\, \Gamma\!\left( s + \tfrac{1}{2} \right) = 2^{1-2s} \sqrt{\pi}\, \Gamma(2s), \qquad (2.6)$$
for $2s \in \mathbb{C} \setminus \mathbb{Z}_{\le 0}$.

Proof. For $\Re(s) > 1/2$, substituting $w = 1/2$ into (2.3) we have
$$\frac{\Gamma(s - 1/2)\, \Gamma(1/2)}{\Gamma(s)} = \int_0^{\infty} x^{-1/2} (1+x)^{-s} \, dx,$$
and with the change $x = \frac{(y-1)^2}{4y}$ we see that $(1+x)^{-s} = 2^{2s} y^s (1+y)^{-2s}$, so it becomes
$$\int_0^1 \frac{2 y^{1/2}}{y - 1}\, 2^{2s} y^s (1+y)^{-2s}\, \frac{y^2 - 1}{4y^2} \, dy = 2^{2s-1} \int_0^1 y^{s - 3/2} (1+y)^{1-2s} \, dy,$$
which under the change $y \mapsto y^{-1}$ satisfies
$$2^{2s-1} \int_0^1 y^{s-3/2} (1+y)^{1-2s} \, dy = 2^{2s-1} \int_1^{\infty} y^{s-3/2} (1+y)^{1-2s} \, dy.$$
Joining the two integrals, since $\Re(2s - 1 - s + 1/2) > 0$ we arrive at
$$\frac{\Gamma(s-1/2)\, \Gamma(1/2)}{\Gamma(s)} = 2^{2s-2} \int_0^{\infty} y^{s-3/2} (1+y)^{1-2s} \, dy = 2^{2s-2}\, \frac{\Gamma(s-1/2)^2}{\Gamma(2s-1)}.$$
Taking $s \to s + 1/2$ and solving, for $\Re(s) > 0$ we obtain
$$\Gamma(s)\, \Gamma\!\left( s + \tfrac{1}{2} \right) = 2^{1-2s}\, \Gamma(1/2)\, \Gamma(2s) = 2^{1-2s} \sqrt{\pi}\, \Gamma(2s),$$
which extends to the rest of the domain of Γ in a similar way to how we proved the previous formula. It is immediate that $\Gamma(1/2) = \sqrt{\pi}$, since we can substitute $s = 1/2$ into (2.4) and into Definition 2.1 to see that its square is π and that it is positive. □

2.2. The functional equation

The purpose of this section is to present a proof of the functional equation and to analyse the consequences that can be drawn from it. Although in the end one can find two functional equations, one is a disguise that we can put on or take off the other by means of the reflection and duplication formulas of Γ. We begin with a proposition and a lemma needed to prove the theorem.

Proposition 2.7. For $y \in \mathbb{R}$ the following identity holds:
$$\int_{-\infty}^{\infty} e^{-\pi x^2 - 2\pi i x y} \, dx = e^{-\pi y^2}. \qquad (2.7)$$

Proof. Let $h(y) = \int_{-\infty}^{\infty} e^{-\pi x^2 - 2\pi i x y} \, dx$. We see that $h(0) = 1$, while
$$h'(y) = -i \int_{-\infty}^{\infty} 2\pi x\, e^{-\pi x^2} e^{-2\pi i x y} \, dx = -2\pi y \int_{-\infty}^{\infty} e^{-\pi x^2 - 2\pi i x y} \, dx = -2\pi y\, h(y),$$
where the second equality comes from integrating by parts. Therefore, $h(y) = e^{-\pi y^2}$. □

Lemma 2.8. Let $G(t) = \sum_{n \in \mathbb{Z}} e^{-\pi n^2 t}$. For $t \in \mathbb{R}^+$ we have $G(t) = \frac{1}{\sqrt{t}} \sum_{n \in \mathbb{Z}} e^{-\pi n^2 / t}$.

Proof. Since $F(x) := \frac{1}{\sqrt{t}} \sum_{m \in \mathbb{Z}} e^{-\pi (x+m)^2 / t}$ is continuous and 1-periodic, it coincides with $\sum_{n \in \mathbb{Z}} \hat{F}(n)\, e^{2\pi i n x}$, its Fourier series. To compute $\hat{F}(n)$, since everything converges absolutely, we use Fubini and change $x + m \to x$ to arrive at
$$\hat{F}(n) = \frac{1}{\sqrt{t}} \sum_{m \in \mathbb{Z}} \int_{m - 1/2}^{m + 1/2} e^{-\pi x^2 / t}\, e^{-2\pi i n x} \, dx = \int_{-\infty}^{\infty} e^{-\pi w^2 - 2\pi i n \sqrt{t}\, w} \, dw = e^{-\pi n^2 t},$$
where $w = x / \sqrt{t}$ and we used (2.7). Evaluating $F(0)$ we obtain the identity. □

Now we have the necessary elements for the following theorem, from which the results we are looking for follow almost immediately.

Theorem 2.9. Let $\omega: \mathbb{R}^+ \to \mathbb{R}$ be the function defined by $\omega(x) = \sum_{n=1}^{\infty} e^{-\pi n^2 x}$ and let $\xi(s) = s(1-s)\, \pi^{-s/2}\, \Gamma(s/2)\, \zeta(s)$. For $\Re(s) > 1$ we have the following identity:
$$\xi(s) = s(1-s) \int_1^{\infty} \left( x^{s/2 - 1} + x^{(1-s)/2 - 1} \right) \omega(x) \, dx - 1.$$

Proof. First, we can see that
$$\omega(x^{-1}) = \frac{\sqrt{x} - 1}{2} + \sqrt{x}\, \omega(x). \qquad (2.8)$$
This follows from Lemma 2.8 and from the fact that $\omega(x) = \frac{G(x) - 1}{2}$ with $G(t) = \sum_{n \in \mathbb{Z}} e^{-\pi n^2 t}$:
$$\omega(x^{-1}) = \frac{G(x^{-1}) - 1}{2} = \frac{\sqrt{x}\, G(x) - 1}{2} = \frac{\sqrt{x} - 1}{2} + \sqrt{x}\, \omega(x).$$
If we consider the original definitions of ζ and Γ and change $t = \pi n^2 x$, we see that
$$\xi(s) = s(1-s) \sum_{n=1}^{\infty} \int_0^{\infty} \left( \frac{t}{\pi n^2} \right)^{s/2} e^{-t}\, t^{-1} \, dt = s(1-s) \sum_{n=1}^{\infty} \int_0^{\infty} x^{s/2 - 1} e^{-\pi n^2 x} \, dx.$$
Since everything converges absolutely, we apply Fubini to arrive at the first equality:
$$\xi(s) = s(1-s) \int_0^{\infty} \sum_{n=1}^{\infty} x^{s/2 - 1} e^{-\pi n^2 x} \, dx = s(1-s) \int_0^{\infty} x^{s/2 - 1} \omega(x) \, dx.$$
For the second equality, we change $x \mapsto x^{-1}$ on $(0, 1)$ and use (2.8):
$$\int_0^{\infty} x^{s/2 - 1} \omega(x) \, dx = \int_1^{\infty} x^{-s/2 - 1} \omega(x^{-1}) \, dx + \int_1^{\infty} x^{s/2 - 1} \omega(x) \, dx = \int_1^{\infty} x^{-s/2 - 1}\, \frac{\sqrt{x} - 1}{2} \, dx + \int_1^{\infty} \left( x^{(1-s)/2 - 1} + x^{s/2 - 1} \right) \omega(x) \, dx.$$
In this way, since the first integral equals $\frac{1}{s(s-1)}$, we arrive at what we were looking for:
$$\xi(s) = -1 + s(1-s) \int_1^{\infty} \left( x^{(1-s)/2 - 1} + x^{s/2 - 1} \right) \omega(x) \, dx. \qquad \square$$
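The symmetry $\xi(s) = \xi(1-s)$ that follows from Theorem 2.9 is easy to test numerically. The thesis itself contains no code, so the following is merely a supplementary sketch; it assumes the Python library mpmath is available and uses its built-in gamma and zeta functions.

```python
# Numerical sanity check of the symmetry xi(s) = xi(1 - s) from Theorem 2.9,
# where xi(s) = s(1 - s) pi^(-s/2) Gamma(s/2) zeta(s). Requires mpmath.
import mpmath as mp

mp.mp.dps = 30  # work with 30 significant digits

def xi(s):
    s = mp.mpc(s)
    return s * (1 - s) * mp.pi ** (-s / 2) * mp.gamma(s / 2) * mp.zeta(s)

for s in [mp.mpc(2), mp.mpc("0.3", "5"), mp.mpc("-1.7", "12")]:
    # mp.chop rounds numerical noise below the working precision to zero
    print(s, mp.chop(xi(s) - xi(1 - s)))  # prints 0.0 at every sample point
```

For instance, at $s = 2$ both sides equal $-\pi/3$, consistent with $\zeta(2) = \pi^2/6$ and with the value $\zeta(-1) = -1/12$ computed below.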
Let us analyze what can be deduced from Theorem 2.9. First, since ω is of Schwartz class (it decays faster than any power as x → ∞), the integral converges for every s ∈ C and defines an entire function; moreover, the second expression is invariant under s ↦ 1−s, so ξ(s) = ξ(1−s) for all s ∈ C. The following corollaries are direct consequences of these two observations.

Corollary 2.10. The function ζ(s) − 1/(s−1) is entire.

Proof. We see that F(s) := (s−1)ζ(s) = −ξ(s) π^{s/2} / (s Γ(s/2)) is entire, being the product of −ξ(s), π^{s/2} and 1/(s Γ(s/2)), all entire functions. Since F(1) = 1, it is immediate that (F(s) − F(1))/(s−1) = ζ(s) − 1/(s−1) is entire if we define it at s = 1 as γ, the Euler-Mascheroni constant (see Proposition A.2 in the appendix).

Corollary 2.11 (Symmetric functional equation). For s ∈ C,

(2.9) π^{−s/2} Γ(s/2) ζ(s) = π^{−(1−s)/2} Γ((1−s)/2) ζ(1−s).

Corollary 2.12 (Non-symmetric functional equation). For s ∈ C,

(2.10) ζ(s) = 2 (2π)^{s−1} sin(πs/2) Γ(1−s) ζ(1−s).

Proof. Solving for ζ in (2.9) we get

ζ(s) = π^{s−1/2} [Γ((1−s)/2) / Γ(s/2)] ζ(1−s).

By the reflection (2.4) and duplication (2.6) formulas,

Γ((1−s)/2) / Γ(s/2) = (sin(πs/2)/π) Γ((1−s)/2) Γ(1 − s/2) = 2^s π^{−1/2} sin(πs/2) Γ(1−s).

Substituting gives the desired result.

2.3. The trivial zeros and other values

We now quickly show the existence of infinitely many zeros of ζ (the so-called trivial zeros) and show how to compute some of its other values.

Theorem 2.13. The function ζ vanishes at the negative even integers.

Proof. Evaluating (2.10) at s = −2n with n ∈ Z⁺, the sine vanishes and the remaining factors are finite, so ζ(−2n) = 0.

We next evaluate ζ(−1) and ζ(−3). It is curious that, although plugging these values into the original definition ζ(s) = Σ_{n=1}^{∞} n^{−s} clearly gives infinity, the values of ζ coincide with those assigned by Ramanujan to these divergent series. In other words, the famous 1 + 2 + 3 + ··· = −1/12 "makes sense".

Proposition 2.14. We have ζ(−1) = −1/12 and ζ(−3) = 1/120.

Proof. Evaluating (2.10) at s = −1, since Γ(2) = 1 and ζ(2) = π²/6, we see that ζ(−1) = −1/12. For ζ(−3) we repeat the process: since Γ(4) = 6 and ζ(4) = π⁴/90, we arrive at ζ(−3) = 1/120.

One can prove (see [4, §1.5]) that at the negative integers ζ(−k) = (−1)^k B_{k+1}/(k+1), where B_k denotes the k-th Bernoulli number, given by x/(e^x − 1) = Σ_{n=0}^{∞} B_n x^n/n!.
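As a numerical aside of ours (not in the original text), Proposition 2.14 can be replayed mechanically by evaluating the right-hand side of (2.10) at s = −1, approximating ζ(2) by a truncated Dirichlet series:

```python
# Replaying Proposition 2.14: the right-hand side of (2.10) at s = -1,
# with zeta(2) approximated by a truncated series (exact value: pi^2 / 6).
import math

zeta2 = sum(1 / n ** 2 for n in range(1, 200000))
s = -1.0
rhs = 2 * (2 * math.pi) ** (s - 1) * math.sin(math.pi * s / 2) * math.gamma(1 - s) * zeta2
print(rhs, -1 / 12)  # both approximately -0.083333
```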
CHAPTER 3

Relation to arithmetic functions

To understand the famous relation between ζ and the prime numbers we must leave the world of complex analysis for a moment and begin to study its relation to arithmetic functions, that is, functions f: N → C.

3.1. The Euler product

In this very brief section we prove that ζ admits a definition as an infinite product, as announced at the beginning of the first chapter. In what follows, p denotes a prime and P = {2, 3, 5, ...} is the set of all primes.

Theorem 3.1. The function ζ admits the following expansion as an infinite product:

(3.1) ζ(s) = Π_{p∈P} (1 − p^{−s})^{−1} for ℜ(s) > 1.

Proof. For k ∈ N consider N_k := {n ∈ N : p|n ⇒ p ≤ k}, the set of numbers with no prime factor greater than k. Given ℜ(s) > 1 we have p^{−ℜ(s)} < 1, so each factor of the product can be replaced by the corresponding geometric series. Moreover, being a finite product of absolutely convergent series, we may multiply them out and rearrange to get

Π_{p≤k} (1 − p^{−s})^{−1} = Π_{p≤k} Σ_{n=0}^{∞} p^{−ns} = Σ_{n∈N_k} n^{−s}.

Thus, since the series Σ_{n=1}^{∞} n^{−s} converges absolutely, we have

|ζ(s) − Π_{p≤k} (1 − p^{−s})^{−1}| = |Σ_{n=1}^{∞} n^{−s} − Σ_{n∈N_k} n^{−s}| ≤ Σ_{n∉N_k} n^{−ℜ(s)} ≤ Σ_{n=k}^{∞} n^{−ℜ(s)} → 0 as k → ∞.

Actually, this theorem is not exclusive to ζ; it is a particular case of a more general theorem [11, Theorem 1.9]. Moreover, since Π_{p≤k}(1 − p^{−s})^{−1} ≠ 0 for every k ∈ N and ℜ(s) > 1, and the products converge to ζ locally uniformly on that open set, Hurwitz's theorem [1, Theorem 2.3] guarantees that ζ(s) ≠ 0 for ℜ(s) > 1.
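To see the speed of this convergence concretely, here is a small sketch of ours (the helper primes_up_to and the truncation levels are our choices) comparing the truncated Euler product with the truncated Dirichlet series at s = 2:

```python
# Truncated Euler product versus truncated Dirichlet series at s = 2;
# both approach zeta(2) = pi^2 / 6 = 1.6449...
def primes_up_to(k):
    sieve = [True] * (k + 1)
    sieve[0:2] = [False, False]
    for p in range(2, int(k ** 0.5) + 1):
        if sieve[p]:
            sieve[p * p::p] = [False] * len(sieve[p * p::p])
    return [p for p in range(2, k + 1) if sieve[p]]

s = 2.0
product = 1.0
for p in primes_up_to(100):
    product *= 1 / (1 - p ** (-s))
series = sum(n ** (-s) for n in range(1, 10 ** 6))
print(product, series)
```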
3.2. Arithmetic functions and the function ζ

First, it is natural to ask what the multiplicative functions mentioned in [11, Theorem 1.9] are, and to give some examples.

Definition 3.2. An arithmetic function f: N → C is called multiplicative if f(1) = 1 and f(nm) = f(n)f(m) for all coprime n and m. If f(nm) = f(n)f(m) for all n, m ∈ N, the function is called completely multiplicative.

This definition covers not only the constant function 1 or the identity function, which satisfy the properties trivially, but also, for example, Euler's function φ(n) = #{m ∈ N : m ≤ n, (n, m) = 1}, which is multiplicative but not completely multiplicative. This follows from the fact that when (n, m) = 1 we have Z/nmZ ≃ Z/nZ × Z/mZ by the Chinese remainder theorem. Since φ(n) = |(Z/nZ)^×|, taking units on both sides gives the result. It then suffices to know that, for p prime and n ∈ Z⁺, we have φ(p^n) = p^n − p^{n−1}, which already shows that φ is not completely multiplicative: for instance, φ(4) = 2 ≠ 1 = φ(2)φ(2).

Another non-trivial example is the function d(n) = #{m ∈ N : m|n}, which counts the divisors of an integer, since the identity d(p_1^{α_1} ··· p_k^{α_k}) = (α_1 + 1)···(α_k + 1) reveals its multiplicative nature. Again, d(4) = 3 ≠ 4 = d(2)d(2), so d is not completely multiplicative either.

The following result relates φ and d to ζ very directly.

Theorem 3.3. For ℜ(s) > 2,

(3.2) ζ²(s) = Σ_{n=1}^{∞} d(n)/n^s and ζ(s−1)/ζ(s) = Σ_{n=1}^{∞} φ(n)/n^s.

Proof. Using the Euler product for ζ, in ℜ(s) > 2 we see that

ζ²(s) = Π_{p∈P} (Σ_{n=0}^{∞} p^{−ns})² = Π_{p∈P} Σ_{n=0}^{∞} (n+1) p^{−ns} = Π_{p∈P} Σ_{n=0}^{∞} d(p^n) p^{−ns} = Σ_{n=1}^{∞} d(n) n^{−s},

where the last equality comes from the general version of Theorem 3.1. To apply it, however, we must check that Σ_{n=1}^{∞} d(n) n^{−s} converges absolutely; with the trivial bound d(n) ≤ n, absolute convergence for ℜ(s) > 2 is immediate, although, as we will point out, this region is not optimal. For the second identity we use the product formula, the same bound and the Theorem:

ζ(s−1)/ζ(s) = Π_{p∈P} (Σ_{n=0}^{∞} p^{−n(s−1)}) (1 − p^{−s}) = Π_{p∈P} (Σ_{n=0}^{∞} p^n p^{−ns} − Σ_{n=1}^{∞} p^{n−1} p^{−ns}) = Π_{p∈P} Σ_{n=0}^{∞} φ(p^n) p^{−ns} = Σ_{n=1}^{∞} φ(n) n^{−s}.

Observe that in the first identity of Theorem 3.3 the region of validity can be sharpened to ℜ(s) > 1 by proving that d(n) = o(n^ε) for every ε > 0, but this falls outside the scope of this work.

Although by now it may seem that all arithmetic functions related to ζ are multiplicative, this is not so, as the following example shows.

Definition 3.4. The von Mangoldt function is defined as

(3.3) Λ(n) = log p if n = p^k for some prime p and k ∈ Z⁺, and 0 otherwise.

Although Λ is clearly not multiplicative, it appears naturally upon differentiating ζ.

Proposition 3.5. Let Λ be the von Mangoldt function. Then

(3.4) −ζ′(s)/ζ(s) = Σ_{n=1}^{∞} Λ(n)/n^s for ℜ(s) > 1.

Proof. Since ζ(s) ≠ 0 in ℜ(s) > 1, we can take the logarithm of the Euler product of ζ; being continuous, it turns the product into a series:

−log ζ(s) = Σ_{p∈P} log(1 − p^{−s}).

Differentiating both sides and recognizing the geometric series,

−ζ′(s)/ζ(s) = Σ_{p∈P} p^{−s} log p / (1 − p^{−s}) = Σ_{p∈P} Σ_{n=1}^{∞} p^{−ns} log p = Σ_{n=1}^{∞} Λ(n) n^{−s}.

Before continuing to build bridges between ζ and the primes, we study the function −ζ′(s)/ζ(s) − (s−1)^{−1} on its most natural domain: the complement of Z = {ρ ∈ C : ζ(ρ) = 0}. Note that Z is closed because it consists of isolated points with no accumulation point (otherwise the identity principle would force ζ to vanish identically), so U = C − Z is open.

Proposition 3.6. The function −ζ′(s)/ζ(s) − (s−1)^{−1} is holomorphic on the open set U = C − Z and has simple poles at the elements of Z.

Proof. First, −ζ′(s)/ζ(s) − (s−1)^{−1} is holomorphic on U − {1}, being a difference of functions holomorphic on that open set. Let us see that it has a simple pole at each ρ ∈ Z. Let ρ ∈ Z be a zero of ζ of order n. Then there are a neighbourhood B of ρ and a holomorphic, non-vanishing function g on B such that ζ(s) = (s−ρ)^n g(s) on B. Hence ζ′(s) = (s−ρ)^{n−1} (n g(s) + (s−ρ) g′(s)) has a zero of order n−1 at ρ. Dividing, −ζ′(s)/ζ(s) has a simple pole at s = ρ.

Finally, let us see that −ζ′(s)/ζ(s) − (s−1)^{−1} has a limit at s = 1. Let F(s) = (s−1)ζ(s), entire with F(1) = 1 and F′(1) = γ, the Euler-Mascheroni constant (see Proposition A.2 in the appendix). We have ζ′(s) = F′(s)/(s−1) − F(s)/(s−1)², so

lim_{s→1} [−ζ′(s)/ζ(s) − 1/(s−1)] = lim_{s→1} [−F′(s)/F(s) + 1/(s−1) − 1/(s−1)] = −γ ∈ R.

The function Λ can be awkward to handle, so we will work with an "imitation" restricted to the primes whose series adds no singularities in ℜ(s) > 1/2.

Definition 3.7. For ℜ(s) > 1 we define Φ(s) = Σ_{p∈P} p^{−s} log p.

Since log p = o(p^ε) for every ε > 0, the series converges absolutely in ℜ(s) > 1 and defines a holomorphic function. Its resemblance to −ζ′(s)/ζ(s) motivates the following result.

Proposition 3.8. The function Φ(s) + ζ′(s)/ζ(s) admits a holomorphic extension to ℜ(s) > 1/2.

Proof. It suffices to consider the series expressions:

Φ(s) + ζ′(s)/ζ(s) = Σ_{p∈P} p^{−s} log p − Σ_{p∈P} Σ_{n=1}^{∞} p^{−ns} log p = −Σ_{p∈P} Σ_{n=2}^{∞} p^{−ns} log p.

For the series to define a holomorphic function it suffices that it converge absolutely, and since log p = o(p^ε) for every ε > 0, it is enough to see that, given ε > 0, in ℜ(s) > 1/2 + ε we have

Σ_{p∈P} Σ_{n=2}^{∞} p^{−nℜ(s)+ε} = Σ_{p∈P} p^{−2ℜ(s)+ε}/(1 − p^{−ℜ(s)}) ≤ (1 − 2^{−1/2})^{−1} Σ_{p∈P} p^{−2ℜ(s)+ε} ≤ 4 Σ_{n=1}^{∞} n^{−2ℜ(s)+ε} < ∞.

The importance of the function Φ lies in the results we present next, which relate it to Chebyshev's first function ϑ.

Definition 3.9. For x ≥ 1, Chebyshev's first function ϑ is defined as

(3.5) ϑ(x) = Σ_{p≤x} log p,

where p runs over the primes. This function is the protagonist of the proof of the Prime Number Theorem that we present here. We continue with the following identity relating Φ and ϑ.

Proposition 3.10. Let ϑ be Chebyshev's first function. Then

(3.6) Φ(s) = s ∫_1^∞ ϑ(x)/x^{s+1} dx for ℜ(s) > 1.

Proof. Writing ϑ(x) x^{−s−1} = Σ_{p∈P} (log p) x^{−s−1} 1_{[p,∞)}(x), for real s > 1 we have an increasing sequence of non-negative (measurable) functions. Hence, by the monotone convergence theorem,

s ∫_1^∞ ϑ(x)/x^{s+1} dx = s Σ_{p∈P} (log p) ∫_p^∞ x^{−s−1} dx = Σ_{p∈P} (log p) p^{−s} = Φ(s).

For general ℜ(s) > 1 we use the dominated convergence theorem, since precisely ϑ(x) x^{−ℜ(s)−1} is the integrable function dominating the family in s.
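Everything from here on revolves around ϑ(x) growing like x, so it may help to see the ratio ϑ(x)/x numerically. A small illustrative sketch of ours:

```python
# The ratio theta(x) / x for growing x; Theorem 3.11 below says that if it
# converges, the limit must be 1.
import math

N = 10 ** 6
sieve = [True] * (N + 1)
sieve[0:2] = [False, False]
for p in range(2, int(N ** 0.5) + 1):
    if sieve[p]:
        sieve[p * p::p] = [False] * len(sieve[p * p::p])

theta, checkpoints = 0.0, {10 ** 3, 10 ** 4, 10 ** 5, 10 ** 6}
for n in range(2, N + 1):
    if sieve[n]:
        theta += math.log(n)
    if n in checkpoints:
        print(n, theta / n)  # the ratio creeps up toward 1
```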
These relations allow one, assuming a certain limit exists, to determine the asymptotic behaviour of ϑ. The result is purely analytic and does not depend on ϑ in particular, but applying it to this function has interesting consequences, such as the recurrent appearance of primes in a certain class of intervals. Here is the theorem.

Theorem 3.11. If ℓ = lim_{x→∞} ϑ(x)/x exists, then ℓ = 1.

Proof. First observe that, by the preceding propositions, the function Φ has a simple pole at s = 1 with residue 1. Define f(x) = ϑ(x)/x and note that

(3.7) lim_{s→1⁺} (s−1) ∫_1^∞ f(x) x^{−s} dx = lim_{s→1⁺} (s−1) ∫_1^∞ ϑ(x)/x^{s+1} dx = 1.

Since lim_{x→∞} f(x) = ℓ, given ε > 0 there is x_0 > 1 such that |f(x) − ℓ| < ε for all x ≥ x_0. Splitting the integral at x_0 and taking the limit,

(3.8) lim_{s→1⁺} (s−1) ∫_1^∞ f(x) x^{−s} dx = lim_{s→1⁺} (s−1) ∫_{x_0}^∞ f(x) x^{−s} dx.

Hence, since C = lim_{s→1⁺} (s−1) ∫_{x_0}^∞ C x^{−s} dx for every C ∈ R,

ℓ − ε ≤ lim_{s→1⁺} (s−1) ∫_{x_0}^∞ f(x) x^{−s} dx ≤ ℓ + ε.

Combining this with (3.7) and (3.8), for every ε > 0 we get |1 − ℓ| ≤ ε, that is, ℓ = 1.

Pending the proof that they are equivalent, the sibling result of the Prime Number Theorem is precisely the statement ℓ = 1, so we would still need to prove that ℓ exists. On the other hand, in the following corollary the nature of ϑ does come into play (unlike in the previous theorem), and although we still need ℓ to exist, what matters is not that ℓ = 1 but that ℓ ≠ 0.

Corollary 3.12. If ℓ exists, then for each α > 1 there is an n_0 such that the interval (n, αn] contains a prime for every n > n_0.

Proof. Let α > 1. Suppose there were n_1 < n_2 < ... such that (n_k, αn_k] contains no prime for every k ∈ N. Taking n_1 sufficiently large, we may assume αn_k ≥ n_k + 1 for all k ∈ N, so that, as there are no primes in [n_k + 1, αn_k], we have ϑ(n_k + 1) = ϑ(αn_k). Thus

ℓ = lim_{k→∞} ϑ(αn_k)/(αn_k) = lim_{k→∞} [ϑ(n_k + 1)/(n_k + 1)] · (n_k + 1)/(αn_k) = (1/α) lim_{k→∞} ϑ(n_k + 1)/(n_k + 1) = ℓ/α.

Since we assume ℓ exists, ℓ = 1 by Theorem 3.11, which makes ℓ = ℓ/α impossible with α > 1, contradicting the existence of the increasing sequence (n_k).

Finally, we present a first direct connection between the function ζ and the prime-counting function π. As is by now usual, it is an integral that links the two functions.

Theorem 3.13. Let π(x) = #{p ≤ x} be the prime-counting function. Then

(3.9) log ζ(s) = s ∫_2^∞ π(x)/(x(x^s − 1)) dx for ℜ(s) > 1.

Proof. Observe that log ζ(s) is well defined for ℜ(s) > 1 by the absence of zeros in that region. Now, noting that π(x) = Σ_{p∈P} 1_{[p,∞)}(x), for s > 1 we may apply Tonelli's theorem to interchange summation and integration, and we are inside the region of convergence of the Euler product. Combining these two facts,

s ∫_2^∞ Σ_{p∈P} 1_{[p,∞)}(x)/(x(x^s − 1)) dx = Σ_{p∈P} ∫_p^∞ s x^{−s−1}/(1 − x^{−s}) dx = −Σ_{p∈P} log(1 − p^{−s}) = log ζ(s).

For s ∉ (1, ∞), it is enough to note that both expressions define holomorphic functions in ℜ(s) > 1 and coincide on (1, ∞). Appealing to the identity principle, we deduce that they define the same function in this region.
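Corollary 3.12 can also be observed empirically. The sketch below is ours (sieve-based, with the arbitrary choice α = 1.1); it lists the n up to 10^5 for which (n, αn] contains no prime:

```python
# Empirical look at Corollary 3.12 with alpha = 1.1: list the n <= 10^5 for
# which the interval (n, 1.1 n] contains no prime.
N = 110001
sieve = [True] * (N + 1)
sieve[0:2] = [False, False]
for p in range(2, int(N ** 0.5) + 1):
    if sieve[p]:
        sieve[p * p::p] = [False] * len(sieve[p * p::p])

count, c = [0] * (N + 1), 0  # count[m] = pi(m)
for m in range(N + 1):
    c += sieve[m]
    count[m] = c

failures = [n for n in range(2, 10 ** 5 + 1) if count[(11 * n) // 10] == count[n]]
print(failures)  # only a handful of small n; the largest here is 115 (next prime after 113 is 127)
```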
CHAPTER 4

The distribution of the prime numbers

The Prime Number Theorem asserts that π(x) ~ x/log x, where π(x) = #{p ≤ x} is the prime-counting function; that is, for sufficiently large x the proportion of primes in [1, x] is approximately 1/log x. Although it might seem that this result should be deducible from a study of N or Z, the first proofs relied heavily on the analysis of the function ζ as a function of a complex variable.

4.1. History of the theorem

At the beginning of the 19th century the French mathematician Adrien-Marie Legendre conjectured that π(x) could be approximated by x/(log x − B) (see [10, §4.VIII]), where B = 1.08366 became known as Legendre's constant. Around the same time, Gauss and Dirichlet considered the logarithmic integral Li(x) = ∫_2^x dt/log t as a prime-approximating function, which reinforced the idea that Legendre's conjecture about the asymptotic behaviour was correct, although this second function (or the other logarithmic integral li(x) = ∫_0^x dt/log t, taking the Cauchy principal value for x > 1) is a much better approximation and stars in the most immediate consequences of the Riemann hypothesis, as we will see in the fifth chapter.

The first approaches to the theorem are due to the Russian mathematician Pafnuty Chebyshev, who managed to prove weaker results by considering ζ for real values. In 1896, Hadamard and de la Vallée Poussin independently obtained the first complete proofs, both based on Riemann's ideas treating ζ as a function of a complex variable, and it was not until 1949 that Selberg and Erdős found an "elementary" proof, with no complex variables and no function ζ. The question of whether the proof should be published in a single joint article, or whether each should publish his own contribution, led to a dispute between the two [7].

Returning to Legendre's conjecture, it is curious that the most natural value of Legendre's constant B is obtained by solving in π(x) ~ x/(log x − B), that is, B = lim_{n→∞} (log n − n/π(n)). One can prove (see [13]) that this limit exists and equals 1, which turns Legendre's constant into the simplest possible number.
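As a numerical aside of ours (not in the original), Legendre's B can be estimated by computing log n − n/π(n); amusingly, at these scales the values hover near Legendre's empirical 1.08366, even though the true limit is 1 [13]:

```python
# Estimating Legendre's constant: B_n = log(n) - n / pi(n). At these scales the
# values sit near Legendre's empirical 1.08366; the actual limit is 1.
import math

N = 10 ** 6
sieve = [True] * (N + 1)
sieve[0:2] = [False, False]
for p in range(2, int(N ** 0.5) + 1):
    if sieve[p]:
        sieve[p * p::p] = [False] * len(sieve[p * p::p])

pi, checkpoints = 0, {10 ** 3, 10 ** 4, 10 ** 5, 10 ** 6}
for n in range(N + 1):
    pi += sieve[n]
    if n in checkpoints:
        print(n, math.log(n) - n / pi)
```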
4.2. The Prime Number Theorem

The proof we present here is due to the American mathematician Donald J. Newman [12]; it preserves the original idea with some very drastic reductions, although we will follow D. Zagier's simplification [17]. Like the rest, it begins by proving that ζ(s) ≠ 0 for ℜ(s) = 1, for which we need the following lemma (C(4, j) denotes a binomial coefficient).

Lemma 4.1. For σ > 1 and t_0 ∈ R,

(4.1) (1−σ) Σ_{k=−2}^{2} C(4, 2+k) ζ′(σ + ikt_0)/ζ(σ + ikt_0) = (σ−1) Σ_{n=1}^{∞} [Λ(n)/n^σ] (n^{it_0/2} + n^{−it_0/2})⁴ ≥ 0.

Proof. Using (3.4), we arrive at the following:

−Σ_{k=−2}^{2} C(4, 2+k) ζ′(σ + ikt_0)/ζ(σ + ikt_0) = Σ_{k=−2}^{2} C(4, 2+k) Σ_{n=1}^{∞} Λ(n) n^{−σ−ikt_0}
= Σ_{n=1}^{∞} [Λ(n)/n^σ] Σ_{k=−2}^{2} C(4, 2+k) n^{−ikt_0} = Σ_{n=1}^{∞} [Λ(n)/n^σ] n^{2it_0} Σ_{k=0}^{4} C(4, k) n^{−ikt_0}.

Recognizing the last factor as n^{2it_0} (1 + n^{−it_0})⁴ = (n^{it_0/2} + n^{−it_0/2})⁴, we have

(1−σ) Σ_{k=−2}^{2} C(4, 2+k) ζ′(σ + ikt_0)/ζ(σ + ikt_0) = (σ−1) Σ_{n=1}^{∞} [Λ(n)/n^σ] (n^{it_0/2} + n^{−it_0/2})⁴ ≥ 0,

since this is a series of non-negative real terms: n^{it_0/2} + n^{−it_0/2} = 2 cos(t_0 log(n)/2) ∈ R.

Corollary 4.2. The function ζ has no zeros on ℜ(s) = 1.

Proof. Since ζ(s̄) is the conjugate of ζ(s), we see that ζ(s) = 0 ⇔ ζ(s̄) = 0. Suppose ζ has zeros of orders m, n ≥ 0 at s_0 = 1 ± it_0 and s_1 = 1 ± 2it_0 respectively. Applying ideas similar to those of the argument principle, we see that

lim_{σ→1⁺} (1−σ) ζ′(σ ± it_0)/ζ(σ ± it_0) = −m and lim_{σ→1⁺} (1−σ) ζ′(σ ± 2it_0)/ζ(σ ± 2it_0) = −n,

while, by Proposition 3.6, lim_{σ→1⁺} (1−σ) ζ′(σ)/ζ(σ) = 1. Taking the limit in (4.1) we arrive at

0 ≤ C(4,2) − m C(4,1) − m C(4,3) − n C(4,0) − n C(4,4) = 6 − 8m − 2n ≤ 6 − 8m,

which is impossible for m ≥ 1, so ζ has no zero at 1 ± it_0. As t_0 is arbitrary, ζ(s) ≠ 0 for ℜ(s) = 1.

The absence of zeros of ζ on ℜ(s) = 1 is needed to guarantee that the function Φ(s) = Σ_{p∈P} p^{−s} log p behaves well near ℜ(s) = 1, which is what we need to prove the Prime Number Theorem.

Corollary 4.3. The function G(s) = Φ(s) − (s−1)^{−1}, where Φ(s) = Σ_{p∈P} p^{−s} log p, admits a holomorphic extension to an open set U ⊃ R = {s ∈ C : ℜ(s) ≥ 1}.

Proof. The function −ζ′(s)/ζ(s) − (s−1)^{−1} is holomorphic on the open set A = {s ∈ C : ζ(s) ≠ 0} and Φ(s) + ζ′(s)/ζ(s) is holomorphic on B = {s ∈ C : ℜ(s) > 1/2} (see Propositions 3.6 and 3.8). The sum Φ(s) − (s−1)^{−1} is holomorphic on the intersection U = A ∩ B, and both open sets contain R by the previous corollary.

In fact, the expression for G that interests us is

(4.2) G(s) = ∫_1^∞ (ϑ(x) − x) dx/x^{s+1},

valid for ℜ(s) > 1, where it equals Φ(s)/s − (s−1)^{−1}; this differs from Φ(s) − (s−1)^{−1} by a function holomorphic on U, so it also extends holomorphically to U.

Newman's reduction (and our goal now) consists in proving Theorem 4.5 below or, equivalently, that the improper Riemann integral ∫_1^∞ (ϑ(x) − x)/x² dx converges. We first prove a simple result before venturing into that proof and seeing how the Prime Number Theorem is deduced from it.

Proposition 4.4. The function ϑ(x)/x is bounded on (1, ∞).

Proof. Given x > 1, there is k ∈ Z⁺ with x ∈ (2^{k−1}, 2^k], so that ϑ(x) ≤ ϑ(2^k). Since ϑ(1) = 0 sets up a telescoping sum, and noting that p ∈ (2^{j−1}, 2^j] implies that p divides C(2^j, 2^{j−1}), we have the inequalities

ϑ(2^k) = Σ_{j=1}^{k} (ϑ(2^j) − ϑ(2^{j−1})) and e^{ϑ(2^j) − ϑ(2^{j−1})} = Π_{2^{j−1} < p ≤ 2^j} p ≤ C(2^j, 2^{j−1}) ≤ 2^{2^j},

so ϑ(2^j) − ϑ(2^{j−1}) ≤ 2^j log 2 and ϑ(2^k) ≤ Σ_{j=1}^{k} 2^j log 2 ≤ 2^{k+1} log 2. Since x > 2^{k−1}, we conclude ϑ(x) ≤ 4x log 2, that is, ϑ(x)/x ≤ 4 log 2.
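The divisibility bound at the heart of Proposition 4.4 is easy to verify directly for small j; a sketch of ours, trial-division based:

```python
# The bound behind Proposition 4.4: the product of the primes in
# (2^(j-1), 2^j] divides C(2^j, 2^(j-1)), which in turn is at most 4^(2^(j-1)).
import math

for j in range(2, 8):
    lo, hi = 2 ** (j - 1), 2 ** j
    prod = 1
    for p in range(lo + 1, hi + 1):
        if all(p % q for q in range(2, int(p ** 0.5) + 1)):  # p is prime
            prod *= p
    binom = math.comb(hi, lo)
    print(j, binom % prod == 0, prod <= binom <= 4 ** lo)
```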
Now, with a bit of complex analysis, we can prove the theorem on which Newman's reduction rests, the precursor of the Prime Number Theorem.

Theorem 4.5. Let ϑ be Chebyshev's first function. Then

(4.3) I(a, b) = ∫_a^b (ϑ(x) − x)/x² dx → 0 as b > a → ∞.

Proof. Take U as in Corollary 4.3 and the region D = {s ∈ C : |s| ≤ R, ℜ(s) > −δ} with R > 1, where δ ∈ (0, 1) is such that {s + 1 : s ∈ D} ⊂ U, and define, for c > 1, G_c(s) = ∫_1^c (ϑ(x) − x) x^{−s−1} dx together with the following families of functions:

h_c(s) = c^s (1 + s²/R²), B_c(s) = G_c(s+1) h_c(s), A_c(s) = B_c(s) − G(s+1) h_c(s).

By (4.2) all these functions are holomorphic on the relevant region. Using Cauchy's integral formula we see that, adding and subtracting a cross term,

(1/2πi) ∫_{∂D} (A_b(s) − A_a(s)) ds/s
= (1/2πi) ∫_{∂D} (G_b(s+1) − G_a(s+1)) h_b(s) ds/s − (1/2πi) ∫_{∂D} (G_a(s+1) − G(s+1)) (h_a(s) − h_b(s)) ds/s
= G_b(1) − G_a(1) = I(a, b).

The last equality comes from the fact that, by Proposition 4.4, |(ϑ(x) − x)/x^{s+1}| ≤ K x^{−1} ∈ L¹([a, b]) for ℜ(s) > 1, so we may apply the dominated convergence theorem:

G_b(1) − G_a(1) = lim_{s→1⁺} ∫_a^b (ϑ(x) − x)/x^{s+1} dx = ∫_a^b (ϑ(x) − x)/x² dx.

Now, decomposing ∂D = C_1 ⊔ C_2 with C_1 the right semicircle {|s| = R, ℜ(s) ≥ 0} and C_2 the rest, we see that, by the definitions of A_c and B_c,

2πi I(a, b) = ∫_{C_1} (A_b(s) − A_a(s)) ds/s + ∫_{C_2} (B_b(s) − B_a(s) − (h_b(s) − h_a(s)) G(s+1)) ds/s.

The goal now is to show that for each ε > 0 there is R > 1 such that

J_1 = |∫_{C_1} (A_b(s) − A_a(s)) ds/s| < ε and J_2 = |∫_{C_2} (B_b(s) − B_a(s)) ds/s| < ε.

To this end, abbreviate ℜ(s) = σ. For s ∈ C_1, using s s̄ = R²,

|h_c(s)| = c^σ |1 + s²/R²| = c^σ |s(s + s̄)|/R² = c^σ R^{−1} · 2σ

and

|G_c(s+1) − G(s+1)| ≤ ∫_c^∞ |ϑ(x) − x| x^{−σ−2} dx ≤ ∫_c^∞ K x^{−σ−1} dx = K σ^{−1} c^{−σ}.

With this, J_1 ≤ πR sup_{s∈C_1} |A_b(s) − A_a(s)|/|s| ≤ πR (4K R^{−1})/R = 4πK R^{−1}, so J_1 is as small as we like. On the other hand, to bound J_2, since B_c(s)/s is holomorphic in ℜ(s) < 0 we may move the contour of integration to C_3 = {|s| = R, ℜ(s) < 0}. Moreover, for s ∈ C_3,

|G_c(s+1)| ≤ K|σ|^{−1}(c^{−σ} − 1) ≤ K|σ|^{−1} c^{−σ} and |h_c(s)| = 2|σ| c^σ R^{−1},

so again J_2 ≤ πR sup_{s∈C_3} |B_b(s) − B_a(s)|/|s| ≤ 4πK R^{−1}.

With this, it only remains to show that ∫_{C_2} (h_b(s) − h_a(s)) G(s+1) ds/s tends to 0 as b > a → ∞. We know that sup_{s∈C_2} |G(s+1)|/|s| =: S < ∞, and since ℜ(s) < 0 on C_2 (apart from its endpoints), we also have |h_b(s) − h_a(s)| ≤ |h_b(s)| + |h_a(s)| ≤ 4a^{ℜ(s)}, so

|∫_{C_2} (h_b(s) − h_a(s)) G(s+1) ds/s| ≤ ∫_{C_2} 4S a^{ℜ(s)} |ds| → 0 as a → ∞.

In summary, |I(a, b)| with b > a is arbitrarily small as a → ∞.

With this, the proof of the Prime Number Theorem reduces to relating ϑ(x) to π(x) and working with upper and lower limits.

Theorem 4.6 (Prime Number Theorem). Let π(x) = #{p ≤ x} be the prime-counting function. Then

(4.4) lim_{x→+∞} π(x) log x / x = 1.

Proof. Arguing by contradiction, suppose first that limsup_{x→+∞} π(x) log x / x > 1; that is, there exist L > 1 and an increasing unbounded sequence (x_n) such that π(x_n) log x_n > L x_n. Let β = (L+1)/(2L). On the one hand, for x ≥ x_n,

ϑ(x) − β (π(x_n) − π(x_n^β)) log x_n ≥ Σ_{x_n^β < p ≤ x_n} log p − Σ_{x_n^β < p ≤ x_n} β log x_n = Σ_{x_n^β < p ≤ x_n} (log p − log x_n^β) ≥ 0;

on the other hand, by the definitions of L and β and since π(x_n^β) ≤ x_n^β,

β (π(x_n) − π(x_n^β)) log x_n ≥ (L+1)/2 · x_n − β x_n^β log x_n.

Combining these two inequalities, for x ≥ x_n we have ϑ(x) ≥ (L+1)/2 · x_n − β x_n^β log x_n. Substituting this inequality into (4.3) we obtain

I(x_n, L x_n) ≥ ∫_{x_n}^{L x_n} ((L+1)/2 · x_n − β x_n^β log x_n − x) dx/x² = (L² − 1)/(2L) − c_n − log L,

where c_n = (L² − 1)/(2L²) · x_n^{β−1} log x_n → 0 as n → ∞ because β < 1. Hence, using (4.3) and the hypothesis L > 1, we arrive at

0 = lim_{n→∞} I(x_n, L x_n) ≥ (L² − 1)/(2L) − log L > 0,

a contradiction. We thus deduce that limsup_{x→+∞} π(x) log x / x ≤ 1.

To see that liminf_{x→+∞} π(x) log x / x ≥ 1, the idea is the same: suppose there exist a constant ℓ < 1 and an increasing unbounded sequence (x_n) such that π(x_n) log x_n < ℓ x_n for all n. Using the trivial inequality ϑ(x) ≤ π(x_n) log x_n for x ≤ x_n, we observe that I(ℓx_n, x_n) ≤ 1 − ℓ + log ℓ < 0, contradicting (4.3) once more.

With this result other questions arise, such as the speed of convergence or whether there is some bound for the difference between the functions. The conjectures of Legendre, Gauss and Dirichlet were correct, since

(4.5) π(x) ~ x/log x ~ x/(log x − 1) ~ Li(x) ~ li(x).

Of all of these, the logarithmic integral is the one that best approximates π(x), and the bounds for their differences depend directly on the non-trivial zeros of ζ (those other than −2, −4, ...). It is in this context that the famous Riemann hypothesis appears.
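To close the chapter, a small numerical comparison of ours (Li is approximated here by a crude trapezoidal rule, an implementation choice of ours) of the quantities in (4.4) and (4.5) at x = 10^6:

```python
# pi(x) versus x / log x and Li(x) at x = 10^6; Li approximated by a
# trapezoidal rule on int_2^x dt / log t, which is fine for illustration.
import math

def Li(x, steps=10 ** 6):
    h = (x - 2) / steps
    total = 0.5 * (1 / math.log(2) + 1 / math.log(x))
    total += sum(1 / math.log(2 + k * h) for k in range(1, steps))
    return total * h

N = 10 ** 6
sieve = [True] * (N + 1)
sieve[0:2] = [False, False]
for p in range(2, int(N ** 0.5) + 1):
    if sieve[p]:
        sieve[p * p::p] = [False] * len(sieve[p * p::p])

print(sum(sieve), N / math.log(N), Li(N))  # 78498 vs ~72382 vs ~78627
```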
CHAPTER 5

The Riemann hypothesis

The purpose of this chapter is to present the famous Riemann hypothesis and a theorem showing its relation to the distribution of the prime numbers.

5.1. Preliminary results

Before introducing the Hypothesis and the final theorem, we prove two results that we will need later on. We begin with one concerning the zeros of ζ.

Proposition 5.1. The function ζ does not vanish for s ∈ (0, 1).

Proof. We resort to an expression similar to the one given in (1.2): since x − ⌊x⌋ ≥ 0, we have ∫_1^∞ (x − ⌊x⌋) x^{−s−1} dx ≥ 0 for s ∈ (0, 1), so on this interval

ζ(s) = s/(s−1) − s ∫_1^∞ (x − ⌊x⌋) x^{−s−1} dx ≤ s/(s−1) < 0.

This proposition will allow us to use the following theorem in the proof of the theorem of the next section.

Theorem 5.2 (Landau's lemma). Let f: [2, ∞) → R be locally integrable and such that f(x)/x is positive and bounded for x greater than some x_0. If the integral transform

L_f(s) = ∫_2^∞ f(x)/x^{s+1} dx

extends to a holomorphic function on some open set containing an interval (σ, 1] with 0 < σ < 1, then the integral converges in ℜ(s) > σ and is holomorphic there.

Proof. Without loss of generality we may assume f ≥ 0 on [2, ∞), since the integral transform of the negative part of f (supported on a finite interval) defines an entire function, and integration over a finite interval does not affect convergence.

Now define σ′ := inf{δ > σ : L_f(s) converges for ℜ(s) > δ}. Clearly σ′ ∈ [σ, 1]. First note that L_f(δ) fails to converge if and only if L_f(δ) = ∞, so it converges for ℜ(s) ≥ δ if and only if L_f(δ) < ∞: for each s with ℜ(s) ≥ δ the integral converges absolutely because L_f(ℜ(s)) ≤ L_f(δ) < ∞. Hence L_f(δ) = ∞ for δ < σ′, for otherwise σ′ would not be the infimum.

Suppose σ < σ′ and take σ < σ₋ < σ′ < σ₊ such that σ₋ belongs to a disc centered at σ₊ on which L_f has a holomorphic extension F. We have

F(σ₋) = Σ_{n=0}^{∞} L_f^{(n)}(σ₊)/n! · (σ₋ − σ₊)ⁿ = Σ_{n=0}^{∞} (σ₊ − σ₋)ⁿ/n! ∫_2^∞ f(x)(log x)ⁿ/x^{1+σ₊} dx.

Applying Tonelli's theorem we reach a contradiction, since

F(σ₋) = ∫_2^∞ Σ_{n=0}^{∞} (σ₊ − σ₋)ⁿ (log x)ⁿ/n! · f(x)/x^{1+σ₊} dx = ∫_2^∞ x^{σ₊−σ₋} f(x)/x^{1+σ₊} dx = L_f(σ₋),

with F(σ₋) = |F(σ₋)| < ∞ but L_f(σ₋) = ∞ because σ₋ < σ′. Therefore σ = σ′, and using the Fubini and Tonelli theorems we conclude that F(s) = L_f(s) for ℜ(s) > σ.
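As a numerical complement of ours to Proposition 5.1: ζ is indeed negative on (0, 1). Since the integral representation converges slowly, the sketch instead uses the standard identity ζ(s) = η(s)/(1 − 2^{1−s}), with η the alternating Dirichlet series, valid for ℜ(s) > 0 but not derived in this text:

```python
# zeta(s) < 0 on (0, 1), computed through the eta identity
# zeta(s) = eta(s) / (1 - 2^(1 - s)), eta(s) = sum of (-1)^(n+1) * n^(-s).
def zeta_01(s, N=10 ** 6):
    eta = sum((-1) ** (n + 1) / n ** s for n in range(1, N))
    eta += (-1) ** (N + 1) / (2 * N ** s)  # alternating-series midpoint correction
    return eta / (1 - 2 ** (1 - s))

for s in [0.2, 0.5, 0.8]:
    print(s, zeta_01(s))  # all negative; for instance zeta(0.5) = -1.46035...
```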
5.2. The Riemann hypothesis

After finding some of the first non-trivial zeros, Riemann conjectured in his famous 1859 memoir that all of them have real part equal to 1/2. This is the Riemann hypothesis, which can be formulated equivalently by saying that the constant

σ₀ = sup{σ > 0 : ζ(σ + it) = 0 for some t ∈ R},

on which the error in the approximation given by the Prime Number Theorem depends, equals 1/2. To see that the two formulations are equivalent, first note that there are no zeros for ℜ(s) > 1, and looking at the symmetric functional equation (2.9) we see that, for 0 < σ < 1, ρ = σ + it is a zero if and only if ρ′ = (1 − σ) + it is one, so if σ ≠ 1/2 one of the two zeros has real part greater than 1/2. This tells us that 1/2 ≤ σ₀ ≤ 1.

The importance of σ₀ lies in its relation to the error of the Prime Number Theorem approximation, since π(x) = Li(x) + O(x^{σ₀} log x), as shown in [8, Theorem 30]. Thus the Riemann hypothesis sits at the best possible case as far as the error is concerned. Along these lines, the following theorem provides, assuming the Hypothesis is false, bounds for the error, and shows that π(x) − Li(x) ≠ O(x^{σ₀−ε}) for any ε > 0.

Theorem 5.3. Suppose σ₀ ≠ 1/2. If the supremum defining σ₀ is a maximum, that is, if there is t₀ ∈ R such that ζ(ρ₀) = 0 with ρ₀ = σ₀ + it₀, then

limsup_{x→∞} (π(x) − Li(x))/(x^{σ₀}/log x) ≥ 1/|ρ₀| and −1/|ρ₀| ≥ liminf_{x→∞} (π(x) − Li(x))/(x^{σ₀}/log x).

In particular, the limit does not exist. On the other hand, if the supremum is not a maximum, then

limsup_{x→∞} (π(x) − Li(x))/x^σ = −liminf_{x→∞} (π(x) − Li(x))/x^σ = +∞

for every σ < σ₀.

Proof. Suppose first that the supremum defining σ₀ is a maximum and let ρ₀ = σ₀ + it₀ be a zero where it is attained; by Corollary 4.2 (together with the hypothesis σ₀ ≠ 1/2) we have 1/2 < σ₀ < 1. For the upper limit, the idea is to apply Landau's lemma to

f(x) = −π(x) + Li(x) + ℓ x^{σ₀}/log x = (x^{σ₀}/log x) (ℓ − (π(x) − Li(x))/(x^{σ₀}/log x)),

where ℓ is greater than the upper limit of the statement if the latter is finite (if it is infinite there is nothing to prove), so that f(x)/x is bounded and positive from some point on. To this end, we study each of the components separately.

First, for g(x) = −π(x), we see that L_g(s) + s^{−1} log ζ(s) has a holomorphic extension to ℜ(s) > 1/2. One can check that for ℜ(s) > 1 the transform L_g is well defined and, recalling Theorem 3.13, in this region

L_g(s) + s^{−1} log ζ(s) = −∫_2^∞ π(x)/x^{s+1} dx + ∫_2^∞ π(x)/(x(x^s − 1)) dx = ∫_2^∞ π(x)/(x^{s+1}(x^s − 1)) dx,

and since π(x)/x is bounded, the integral converges absolutely for ℜ(s) > 1/2 and defines a holomorphic function on this region.

Second, if we consider h_σ(x) = x^σ/log x with σ ≤ 1, we see that L_{h_σ}(s) + log(s − σ) is well defined for ℜ(s) > σ and defines a holomorphic function. Moreover, by dominated convergence its derivative is

L′_{h_σ}(s) + 1/(s − σ) = −∫_2^∞ x^{σ−s−1} dx + 1/(s − σ) = (1 − 2^{σ−s})/(s − σ),

which extends to an entire function; and since C is convex, the primitive of this entire function, L_{h_σ}(s) + log(s − σ), also extends to an entire function.

Finally, consider h(x) = Li(x). For ℜ(s) > 1, integrating by parts,

s L_h(s) = s ∫_2^∞ x^{−s−1} Li(x) dx = [−x^{−s} Li(x)]_{x=2}^{∞} + ∫_2^∞ x/(x^{s+1} log x) dx = L_{h₁}(s),

so s L_h(s) + log(s − 1) = L_{h₁}(s) + log(s − 1) is entire.

Putting these three results together, we have a holomorphic extension to ℜ(s) > 1/2 of

L(s) = L_f(s) + s^{−1} log((s−1)ζ(s)) + ℓ log(s − σ₀),

and, in particular, an extension to a neighbourhood of ℜ(s) ≥ σ₀. Moreover, L_f extends to a holomorphic function on ℜ(s) > σ₀ and, by Landau's lemma, the extension is still given by the integral transform.

Now, given a constant C such that g(x) = f(x) + C ≥ 0 on [2, ∞), bringing the modulus inside the integral yields L_g(s)/|L_g(s + it₀)| ≥ 1 for real s > σ₀. Writing R(s) = L(s) + C·2^{−s}/s, we see that R is still holomorphic on ℜ(s) > 1/2, hence bounded on neighbourhoods of σ₀ and ρ₀, and

L_g(s) = R(s) − s^{−1} log((s−1)ζ(s)) − ℓ log(s − σ₀).

Thus, keeping track of the divergent terms,

1 ≤ lim_{s→σ₀⁺} L_g(s)/|L_g(s + it₀)| = lim_{s→σ₀⁺} (−ℓ log(s − σ₀))/|(s + it₀)^{−1} log ζ(s + it₀)| = ℓ|ρ₀| lim_{s→σ₀⁺} |log(s − σ₀)/log ζ(s + it₀)|.

Applying L'Hôpital's rule,

lim_{s→σ₀⁺} log(s − σ₀)/log ζ(s + it₀) = lim_{s→σ₀⁺} ((s − σ₀) ζ′(s + it₀)/ζ(s + it₀))^{−1} = m^{−1},

where m ≥ 1 is the multiplicity of the zero. In this way we arrive at 1 ≤ ℓ|ρ₀|/m, that is, ℓ ≥ m/|ρ₀| ≥ 1/|ρ₀|. Since ℓ can be taken arbitrarily close to the upper limit, necessarily

limsup_{x→∞} (π(x) − Li(x))/(x^{σ₀}/log x) ≥ 1/|ρ₀|.

The proof for the lower limit is identical: considering ℓ smaller than it and

−f(x) = (x^{σ₀}/log x) ((π(x) − Li(x))/(x^{σ₀}/log x) − ℓ),

which also satisfies the hypotheses of Landau's lemma, we check that ℓ ≤ −1/|ρ₀|, and taking ℓ arbitrarily close to the lower limit we arrive at

liminf_{x→∞} (π(x) − Li(x))/(x^{σ₀}/log x) ≤ −1/|ρ₀|.

To finish, if the supremum is not a maximum, we may take a zero ρ′₀ = σ′₀ + it′₀ with 1/2 < σ′₀ < σ₀ and note that, by Proposition 5.1, there is no zero in (σ′₀, 1). Supposing there exists an ℓ greater than the upper limit, we note that L_f (with σ′₀ in place of σ₀ in the definition of f) still extends to a holomorphic function on a neighbourhood of (σ′₀, 1], so by Landau's lemma it extends to ℜ(s) > σ′₀. This, however, leads to a contradiction: L and L_f are holomorphic on ℜ(s) > σ′₀, but

L(s) − L_f(s) = s^{−1} log((s−1)ζ(s)) + ℓ log(s − σ′₀)

has infinitely many singularities in σ′₀ < ℜ(s) < σ₀ (coming from zeros of ζ there, which exist precisely because the supremum is not attained), so it cannot be holomorphic. Hence no ℓ greater than the upper limit can exist. The case of the lower limit is analogous.

As a by-product, this theorem shows that π(x) − Li(x) changes sign infinitely often if σ₀ ≠ 1/2.
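A final numerical aside of ours: the error π(x) − Li(x) measured on the scale x^{1/2}/log x, which is the relevant one if σ₀ = 1/2. The modest ratios below prove nothing, but they are consistent with the Hypothesis:

```python
# Normalized error (pi(x) - Li(x)) / (x^(1/2) / log x); Li approximated by a
# trapezoidal rule as in the earlier sketch. Illustrative only.
import math

def Li(x, steps=10 ** 6):
    h = (x - 2) / steps
    total = 0.5 * (1 / math.log(2) + 1 / math.log(x))
    total += sum(1 / math.log(2 + k * h) for k in range(1, steps))
    return total * h

N = 10 ** 6
sieve = [True] * (N + 1)
sieve[0:2] = [False, False]
for p in range(2, int(N ** 0.5) + 1):
    if sieve[p]:
        sieve[p * p::p] = [False] * len(sieve[p * p::p])

pi, c = [0] * (N + 1), 0
for n in range(N + 1):
    c += sieve[n]
    pi[n] = c

for x in (10 ** 4, 10 ** 5, 10 ** 6):
    print(x, (pi[x] - Li(x)) / (math.sqrt(x) / math.log(x)))  # roughly -1.3 to -1.8
```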
APPENDIX

In this appendix we lean on the theory developed throughout the first chapter to establish two basic results about the function (s−1)ζ(s), to which we appeal in the later chapters, and to find simple expressions for certain values of ζ. The Basel problem and the relation of our function to the Euler-Mascheroni constant γ stand out for their historical value, although we will not restrict ourselves to them.

Definition A.1. The Euler-Mascheroni constant is defined as

γ := lim_{N→∞} (1 + 1/2 + 1/3 + ··· + 1/N − log(N + 1)).

This limit exists and is finite, as the following observation shows: given that

0 ≤ 1/n − log((n+1)/n) = ∫_0^1 (1/n − 1/(x+n)) dx = ∫_0^1 x/(n(x+n)) dx ≤ ∫_0^1 1/n² dx = 1/n²

and that ζ(2) < ∞, the series Σ_{n=1}^{∞} (1/n − log((n+1)/n)) converges by comparison. Decomposing the partial sums into a harmonic sum and a logarithmic part, and noting that the logarithmic part telescopes, we see that the value of the series is precisely γ.

Proposition A.2. The function F(s) = (s−1)ζ(s) admits a holomorphic extension in a neighbourhood of s = 1 satisfying F(1) = 1 and F′(1) = γ.

Proof. For ℜ(s) > 0 we use equation (1.2) and see that

F(s) = s − (s−1)/2 − s(s−1) ∫_1^∞ (x − ⌊x⌋ − 1/2)/x^{s+1} dx,

so clearly lim_{s→1} F(s) = 1 and, defining F(1) := 1, we obtain a holomorphic function. Now, taking the derivative by definition and using (1.2), the convergence of the integral in ℜ(s) > 0 lets us bring the limit inside:

F′(1) = lim_{s→1} (ζ(s) − 1/(s−1)) = 1/2 − lim_{s→1} ∫_1^∞ (x − ⌊x⌋ − 1/2)/x^{s+1} dx = 1/2 − ∫_1^∞ (x − ⌊x⌋ − 1/2)/x² dx.

On the other hand,

lim_{N→∞} ∫_1^N (x − ⌊x⌋ − 1/2)/x² dx = lim_{N→∞} (log N − Σ_{n=1}^{N−1} n(1/n − 1/(n+1)) + 1/(2N) − 1/2) = 1/2 − γ,

so F′(1) = γ, as we wanted.

This reveals the first terms of the Laurent expansion of the zeta function: ζ(s) = 1/(s−1) + γ + ···. Turning to the Basel problem and its generalizations, we have the following result.

Proposition A.3. We have ζ(0) = −1/2, ζ(2) = π²/6 and ζ(4) = π⁴/90.

Proof. Using (1.3) we see that

ζ(0) = 1/(0−1) + 1/2 − 0 · ∫_1^∞ g(x) x^{−2} dx = −1/2.

For ζ(2) we evaluate at x = 0 the cosine-series expansion of g given in (1.6):

0 = g(0) = −1/12 + (1/(2π²)) Σ_{n=1}^{∞} cos(0)/n² = −1/12 + ζ(2)/(2π²).

Solving, ζ(2) = π²/6. Finally, for ζ(4) we resort to Parseval's identity and (1.6), which guarantee that

1/120 = ∫_0^1 |g(x)|² dx = Σ_{n∈Z} |ĝ(n)|² = 1/144 + Σ_{n∈Z−{0}} 1/(16π⁴n⁴) = 1/144 + ζ(4)/(8π⁴).

Solving once more, ζ(4) = π⁴/90.
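Definition A.1 invites a direct computation; a minimal sketch of ours of the partial sums:

```python
# Partial sums of Definition A.1: H_N - log(N + 1) -> gamma = 0.57721...
import math

H, checkpoints = 0.0, {10 ** 2, 10 ** 4, 10 ** 6}
for n in range(1, 10 ** 6 + 1):
    H += 1 / n
    if n in checkpoints:
        print(n, H - math.log(n + 1))
```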
Bibliography

[1] F. Chamizo. Convergencia de funciones holomorfas. Variable Compleja II, resumenes/cnv.pdf, Curso 2018/2019.
[2] H. Davenport. Multiplicative number theory, volume 74 of Graduate Texts in Mathematics. Springer-Verlag, New York, third edition, 2000. Revised and with a preface by H. L. Montgomery.
[3] J. Dutka. The early history of the factorial function. Arch. Hist. Exact Sci., 43(3):225–249, 1991.
[4] H. M. Edwards. Riemann's zeta function. Academic Press, New York-London, 1974. Pure and Applied Mathematics, Vol. 58.
[5] W. J. Ellison. Les nombres premiers. Hermann, Paris, 1975. En collaboration avec M. Mendès France, Publications de l'Institut de Mathématique de l'Université de Nancago, No. IX, Actualités Scientifiques et Industrielles, No. 1366.
[6] P. Erdős. On a new method in elementary number theory which leads to an elementary proof of the prime number theorem. Proc. Nat. Acad. Sci. U.S.A., 35:374–384, 1949.
[7] D. Goldfeld. The elementary proof of the prime number theorem: an historical perspective. Selected Publications of Dorian Goldfeld, columbia.edu/~goldfeld/ErdosSelbergDispute.pdf, 2003.
[8] A. E. Ingham. The distribution of prime numbers. Cambridge Mathematical Library. Cambridge University Press, Cambridge, 1990. Reprint of the 1932 original, with a foreword by R. C. Vaughan.
[9] H. Iwaniec. Lectures on the Riemann zeta function, volume 62 of University Lecture Series. American Mathematical Society, Providence, RI, 2014.
[10] A. M. Legendre. Essai sur la théorie des nombres, seconde édition. Courcier, Paris, 1808.
[11] H. L. Montgomery and R. C. Vaughan. Multiplicative number theory. I. Classical theory, volume 97 of Cambridge Studies in Advanced Mathematics. Cambridge University Press, Cambridge, 2007.
[12] D. J. Newman. Analytic number theory, volume 177 of Graduate Texts in Mathematics. Springer-Verlag, New York, 1998.
[13] J. Pintz. On Legendre's prime number formula. The American Mathematical Monthly, 87(9):733–735, 1980.
[14] L. Schoenfeld. Sharper bounds for the Chebyshev functions θ(x) and ψ(x). II. Math. Comp., 30(134):337–360, 1976.
[15] A. Selberg. An elementary proof of the prime-number theorem. Annals of Mathematics, 50(2):305–313, 1949.
[16] E. C. Titchmarsh. The theory of the Riemann zeta-function. The Clarendon Press, Oxford University Press, New York, second edition, 1986. Edited and with a preface by D. R. Heath-Brown.
[17] D. Zagier. Newman's short proof of the prime number theorem. Amer. Math. Monthly, 104(8):705–708, 1997.
2496
https://www.khanacademy.org/science/in-in-class9th-physics-india/in-in-gravity/in-in-motion-of-objects-in-the-influence-of-gravitational-force-of-earth/v/free-fall-2-body-solved-numerical-gravity-physics-khan-academy
2497
https://secure-media.collegeboard.org/digitalServices/pdf/ap/ap17-statistics-q1.pdf
2017 AP Statistics Sample Student Responses and Scoring Commentary

Inside: Free Response Question 1; Scoring Guideline; Student Samples; Scoring Commentary

© 2017 The College Board. College Board, Advanced Placement Program, AP, AP Central, and the acorn logo are registered trademarks of the College Board. Visit the College Board on the Web: www.collegeboard.org. AP Central is the official online home for the AP Program: apcentral.collegeboard.org

AP® STATISTICS 2017 SCORING GUIDELINES: Question 1

Intent of Question
The primary goals of this question were to assess a student's ability to (1) explain statistical terms used when describing the relationship between two variables; (2) interpret the slope of a linear regression equation; and (3) calculate a value of y when given a regression equation, a value of x, and a residual.

Solution
Part (a): In the context of a scatterplot in which y represents weight and x represents length, the following are defined. A positive relationship means that wolves with higher values of length also tend to have higher weights. A linear relationship means that as length increases by one meter, weight tends to change by a constant amount, on average. A strong relationship means that the data points fall close to a line (or curve).

Part (b): The slope of 35.02 indicates that two wolves that differ by one meter in length are predicted to differ by 35.02 kilograms in weight, with the longer wolf having the greater weight.

Part (c): In general, a residual is equal to actual weight minus predicted weight, or equivalently, actual weight = predicted weight + residual. For the wolf with length 1.4 meters and residual of -9.67, the predicted weight is -16.46 + 35.02(1.4) = 32.568 kilograms. Therefore, the actual weight of the wolf is 32.568 + (-9.67) = 22.898 kilograms.

Scoring
Parts (a), (b), and (c) are scored as essentially correct (E), partially correct (P), or incorrect (I).

Part (a) is scored as follows. Essentially correct (E) if the response includes the following four components:
1. A reasonable definition of positive
2. A reasonable definition of linear
3. A reasonable definition of strong
4. At least one definition in context

Partially correct (P) if the response includes only three of the four components. Incorrect (I) if the response does not meet the conditions for E or P.

Notes:
• The description of a positive relationship should clearly indicate that relatively low values of one variable tend to appear with relatively low values of the other variable, and relatively high values of the first variable tend to appear with relatively high values of the other variable.
  Examples of acceptable responses: As length increases, so does weight. / Longer wolves weigh more. / The points on the graph go up as you move from left to right.
  Examples of unacceptable responses: As length goes up, weight changes. / Both length and weight get bigger. / The correlation is greater than 0.
• The description of a linear relationship can take one of two approaches: the data pattern (data points exhibit the pattern of a line in the graph) or the constant rate of change (as the explanatory variable changes, the response variable exhibits a constant rate of change).
  Examples of acceptable responses: The points generally follow a straight line. / The relationship between x and y is straight.
  Examples of acceptable responses (continued): Length and weight have a constant slope.
  Examples of unacceptable responses: The points all line up. / You can draw a straight line through the points. / There is a positive correlation. / Every increase in x yields a 35.02 increase in y.
• The description of strong should indicate how close points are to a line.
  Examples of acceptable responses: Observed values are close to predicted values. / Deviations from the least-squares regression line are small. / The correlation coefficient is close to 1.
  Examples of unacceptable responses: All the points are close together. / The scatterplots are clustered together. / There is a high positive correlation.
• Context can be shown by referring to length and weight or by using meters and kilograms.
• Sketches and graphs can be used to help clarify definitions, but a sketch alone cannot satisfy a definition component.

Part (b) is scored as follows. Essentially correct (E) if the response includes the following three components:
1. The correct value of 35.02 for the slope.
2. An interpretation that includes an increase of a specified amount of weight for each unit increase in length.
3. An indication that the relationship is not exact by using words such as "on average" or "predicted weight."

Partially correct (P) if the response includes only two of the three components. Note: If the response identifies the slope as -16.46 (the intercept value), the second component is satisfied only if the response states that for each one-meter increase in length there is a decrease in predicted or average weight of 16.46 kilograms. Incorrect (I) if the response does not meet the criteria for E or P.

Part (c) is scored as follows. Essentially correct (E) if the response includes the following two components:
1. A correct computation for the predicted value, 32.568 kilograms.
2. A correct computation for the actual weight, 22.9 kilograms, using the given residual and the predicted value.

Partially correct (P) if the response provides a correct computation for the predicted value but is not able to complete the correct calculation of the actual weight, including if the residual is defined in the wrong direction as (predicted weight) - (actual weight) to give an answer of 42.24 kilograms; OR if the response provides an incorrect value for the predicted weight, but then uses that value correctly to determine the actual weight as (predicted weight) + residual = (predicted weight) + (-9.67); OR if the response provides a correct answer for the actual weight but does not give sufficient information to determine how it was calculated. Incorrect (I) if the response does not meet the criteria for E or P.

Notes:
• The expression -16.46 + 35.02(1.4) is enough to satisfy the first component.
• The equation -16.46 + 35.02(1.4) - 9.67 = 22.9 satisfies both components.
• Arithmetic mistakes are overlooked if they do not lead to an unreasonable answer (such as a negative value). For example, 32.568 + (-9.67) = 21.9 satisfies the second component.

© 2017 The College Board. Visit the College Board on the Web: www.collegeboard.org.
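As a quick supplement of ours (not part of the College Board materials), the part (c) arithmetic can be checked in a couple of lines of Python:

```python
# Part (c) arithmetic: actual weight = predicted weight + residual.
predicted = -16.46 + 35.02 * 1.4   # regression prediction at x = 1.4 meters
actual = predicted + (-9.67)       # add the given residual
print(predicted, actual)           # 32.568 and 22.898 kilograms
```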
AP® STATISTICS 2017 SCORING GUIDELINES: Question 1 (continued)

4 (Complete Response): Three parts essentially correct.
3 (Substantial Response): Two parts essentially correct and one part partially correct.
2 (Developing Response): Two parts essentially correct and no parts partially correct; OR one part essentially correct and one or two parts partially correct; OR three parts partially correct.
1 (Minimal Response): One part essentially correct; OR no parts essentially correct and two parts partially correct.

AP® STATISTICS 2017 SCORING COMMENTARY: Question 1

Overview
The primary goals of this question were to assess a student's ability to (1) explain statistical terms used when describing the relationship between two variables; (2) interpret the slope of a linear regression equation; and (3) calculate a value of y when given a regression equation, a value of x, and a residual.

Sample: 1A. Score: 4
Each of the two sentences in the response to part (a-i) provides a reasonable definition of a positive relationship. They both indicate that the value of one variable tends to increase as the value of the other variable increases. The first sentence also satisfies the context requirement for part (a) because it describes the relationship in terms of length and weight. The first sentence of the response to part (a-ii) provides a reasonable definition of a linear relationship by indicating that the points on the scatterplot exhibit a straight line pattern. The second sentence provides context. The response to part (a-iii) provides a reasonable definition of the strength of a relationship. It indicates that a relationship is strong when the points on the scatterplot are close to a line with the phrase "residuals are generally fairly small." The two graphs displayed in the response provide an alternative acceptable explanation because one graph illustrates a strong relationship, and the other illustrates a weaker relationship. Because reasonable definitions are given for all three components, and at least one is given in context, part (a) was scored as essentially correct.

In part (b) the response provides an acceptable interpretation of the slope of the least-squares regression line by indicating that the "least squares regression line predicts that the weight in kilograms of a wolf would increase by approximately 35.02" for each 1-meter increase in length. It uses the correct value for the slope, and it clearly indicates that the least-squares regression line does not describe changes in the actual weights of wolves. The response is presented in the context of lengths and weights of wolves. Part (b) was scored as essentially correct.

In part (c) the response provides a correct formula for a residual as a difference between an actual weight and a predicted weight. It displays the correct formula and reports the correct value for the predicted weight.
It shows how to use the residual to compute the actual weight of the wolf from the predicted weight. Part (c) was scored as essentially correct. Because three parts were scored as essentially correct, the response earned a score of 4.

Sample: 1B. Score: 3
The statement about "positive correlation" in the first sentence of the response to part (a-i) essentially defines a positive relationship as a positive relationship. It does not explain the meaning of a linear relationship. The sentence is extraneous and does not affect the scoring. The second sentence provides a reasonable definition of a positive relationship, and it is presented in the context of the wolf study. The first sentence of the response to part (a-ii) merely introduces the concept that the response addresses. The second and third sentences describe a positive relationship. This was considered extraneous and not a parallel response. The fourth sentence describes a consequence of a straight line relationship but does not explain what is meant by a linear relationship. This response is not a reasonable definition of a linear relationship, but it does satisfy the context requirement for part (a). The first sentence of the response to part (a-iii) is too vague to provide a reasonable definition of the strength of a relationship. The first part of the second sentence is a reasonable definition of strength because it indicates that the points on the scatterplot are close to a straight line, in this case, the least-squares regression line. The rest of the second sentence enhances the explanation. Because reasonable definitions are given for two of the three components, and at least one is given in context, part (a) was scored as partially correct.

In part (b) the response correctly interprets the slope as a 35.02 kg increase in y for each 1-meter increase in x. The correct value of the slope is used, and the response clearly indicates that the slope reflects a change in a predicted response. The response is presented in context. Part (b) was scored as essentially correct.

In part (c) the response provides a correct formula and value for the predicted weight. It displays a correct formula for the residual and correctly calculates the actual weight of the wolf. Consequently, part (c) was scored as essentially correct. Context is provided in the form of units of length and weight, but context is not needed for an essentially correct response to part (c). Because two parts were scored as essentially correct, and one part was scored as partially correct, the response earned a score of 3.

Sample: 1C. Score: 2
The reference to "an upward-right trend" in the response to part (a-i) is not sufficiently precise to qualify as a reasonable definition of a positive relationship. The inclusion of the graph, however, provides the additional explanation needed for an acceptable response. This is a reasonable definition of a positive relationship, and it satisfies the context requirement for part (a). The response to part (a-ii) does not address the meaning of a linear relationship. Instead it uses a value of a correlation coefficient close to 1 or -1 to describe a strong relationship. Although it is not a reasonable definition of a linear relationship, the response does satisfy the context requirement for part (a).
In the response to part (a-iii), indicating that the "data should be near the regression line" provides a reasonable definition of a strong relationship. The response also satisfies the context requirement for part (a). Because reasonable definitions are given for two of the three components, and at least one is given in context, part (a) was scored as partially correct. The response to part (b) correctly identifies the value of the slope, but the interpretation is incorrect because it indicates that the predicted weight will increase by 35.02 kg for "every increase" in length instead of a 1-meter increase in length. Because the response refers to predicted weights and is presented in context, it was scored as partially correct. In part (c) the response provides a correct formula for the predicted weight, but the value for the predicted weight is incorrect. This is an arithmetic error that does not affect the score. A correct formula is presented for computing the actual weight from the residual and the incorrect predicted weight. Part (c) was scored as essentially correct. Because one part was scored as essentially correct, and two parts were scored as partially correct, the response earned a score of 2. © 2017 The College Board. Visit the College Board on the Web. www.collegeboard.org
2498
https://www.chegg.com/homework-help/questions-and-answers/calculate-thrust-rocket-turbojet-engine-see-figures--engines-stationary-u-0-total-mass-flo-q34511738
Question: Calculate the thrust of a rocket and turbojet engine (see the figures below). Both engines are stationary (u = 0) and their total mass flow rate at exit is 120 lbm/sec [be sure to include the acceleration of gravity to achieve the correct units of thrust (lb)]. The surrounding ambient pressure is 14.7 psia. For the turbojet engine, the ratio of the air to fuel
2499
https://chem.libretexts.org/Courses/University_of_Missouri/MU%3A__1330H_(Keller)/12%3A_Solids_and_Modern_Materials/12.1%3A_Classes_of_Materials
12.1: Classes of Materials - Chemistry LibreTexts
MU: 1330H (Keller) | 12: Solids and Modern Materials
12.1: Classes of Materials

Last updated Aug 29, 2017

Table of contents
1. Band Theory
2. One-Dimensional Systems
3. Multidimensional Systems
4. Band Gap
5. Requirements for Metallic Behavior
6. Insulators
7. Semiconductors
8. Temperature and Conductivity
9. n- and p-Type Semiconductors (Example 12.1.1, Exercise 12.1.1)
Summary

Bonding in metals and semiconductors can be described using band theory, in which a set of molecular orbitals is generated that extends throughout the solid. The primary learning objective of this module is to describe the electrical properties of solids using band theory.

Band Theory

To explain the observed properties of metals, a more sophisticated approach is needed than the commonly described electron-sea model.
The molecular orbital theory used to explain the delocalized π bonding in polyatomic ions and molecules such as NO2−, ozone, and 1,3-butadiene can be adapted to accommodate the far larger number of atomic orbitals that interact with one another simultaneously in metals. In a 1 mol sample of a metal, there can be more than 10^24 orbital interactions to consider. In our molecular orbital description of metals, we begin with a simple one-dimensional example: a linear arrangement of n metal atoms, each containing a single electron in an s orbital. We use this example to describe an approach to metallic bonding called band theory, which assumes that the valence orbitals of the atoms in a solid interact, generating a set of molecular orbitals that extends throughout the solid.

One-Dimensional Systems

If the distance between the metal atoms is short enough for the orbitals to interact, they produce bonding, antibonding, and nonbonding molecular orbitals. The left portion of Figure 12.1.1 shows the pattern of molecular orbitals that results from the interaction of ns orbitals as n increases from 2 to 5.

Figure 12.1.1: The Molecular Orbital Energy-Level Diagram for a Linear Arrangement of n Atoms, Each of Which Contains a Singly Occupied s Orbital. As n becomes very large, the energy separation between adjacent levels becomes so small that a single continuous band of allowed energy levels results.

The lowest-energy molecular orbital corresponds to positive overlap between all the atomic orbitals, giving a totally bonding combination, whereas the highest-energy molecular orbital contains a node between every pair of atoms and is thus totally antibonding. Molecular orbitals of intermediate energy have fewer nodes than the totally antibonding molecular orbital. The energy separation between adjacent orbitals decreases as the number of interacting orbitals increases. For n = 30, there are still discrete, well-resolved energy levels, but as n increases from 30 toward a number close to Avogadro's number, the spacing between adjacent energy levels becomes almost infinitely small. The result is essentially a continuum of energy levels, as shown on the right in Figure 12.1.1, each of which corresponds to a particular molecular orbital extending throughout the linear array of metal atoms. The levels lowest in energy correspond to mostly bonding combinations of atomic orbitals, those highest in energy correspond to mostly antibonding combinations, and those in the middle correspond to essentially nonbonding combinations.

The continuous set of allowed energy levels shown on the right in Figure 12.1.1 is called an energy band. The difference in energy between the highest and lowest energy levels is the bandwidth, which is proportional to the strength of the interaction between orbitals on adjacent atoms: the stronger the interaction, the larger the bandwidth. Because the band contains as many energy levels as there are molecular orbitals, and the number of molecular orbitals is the same as the number of interacting atomic orbitals, the band in Figure 12.1.1 contains n energy levels corresponding to the combining of s orbitals from n metal atoms. Each of the original s orbitals can hold a maximum of two electrons, so the band can accommodate a total of 2n electrons.
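The level structure sketched in Figure 12.1.1 can be generated numerically with a minimal nearest-neighbor (Hückel-type) model. The following sketch is an added illustration, not part of the original text: each atom contributes one s orbital with an on-site energy alpha, adjacent orbitals couple with strength beta (both arbitrary illustrative values), and diagonalizing the resulting tridiagonal matrix yields the n molecular-orbital energies. As n grows, adjacent levels crowd together while the total bandwidth saturates near 4|beta|, which is exactly the continuum behavior described above.

import numpy as np

def chain_levels(n, alpha=0.0, beta=-1.0):
    """MO energies for a linear chain of n atoms, one s orbital each,
    in a nearest-neighbor (Hueckel/tight-binding) approximation."""
    H = (np.diag([alpha] * n)
         + np.diag([beta] * (n - 1), 1)
         + np.diag([beta] * (n - 1), -1))
    return np.sort(np.linalg.eigvalsh(H))

for n in (2, 5, 30, 300):
    E = chain_levels(n)
    print(f"n={n:4d}: bandwidth={E[-1] - E[0]:.3f}, "
          f"max level spacing={np.diff(E).max():.4f}")
# As n grows, the bandwidth approaches 4|beta| = 4 while the spacing
# between adjacent levels shrinks toward zero: a continuous energy band.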
Recall, however, that each of the metal atoms we started with contained only a single electron in its s orbital, so there are only n electrons to place in the band. Just as with atomic or molecular orbitals, the electrons occupy the lowest energy levels available. Consequently, only the lower half of the band is filled. This corresponds to filling all of the bonding molecular orbitals in the linear array of metal atoms and results in the strongest possible bonding.

Multidimensional Systems

The previous example was a one-dimensional array of atoms that had only s orbitals. To extrapolate to two- or three-dimensional systems and to atoms with electrons in p and d orbitals is straightforward in principle, even though in practice the mathematics becomes more complex and the resulting molecular orbitals are more difficult to visualize. The resulting energy-level diagrams are essentially the same as the diagram of the one-dimensional example in Figure 12.1.1, with one exception: they contain as many bands as there are different types of interacting orbitals. Because different atomic orbitals interact differently, each band will have a different bandwidth and will be centered at a different energy, corresponding to the energy of the parent atomic orbital of an isolated atom.

Band Gap

Because the 1s, 2s, and 2p orbitals of a period 3 atom are filled core levels, they do not interact strongly with the corresponding orbitals on adjacent atoms. Hence they form rather narrow bands that are well separated in energy (Figure 12.1.2). These bands are completely filled (both the bonding and antibonding levels are completely populated), so they make no net contribution to bonding in the solid. The energy difference between the highest level of one band and the lowest level of the next is the band gap. It represents a set of forbidden energies that do not correspond to any allowed combination of atomic orbitals.

Figure 12.1.2: The Band Structures of the Period 3 Metals Na, Mg, and Al. The 3s and 3p valence bands overlap in energy to form a continuous set of energy levels that can hold a maximum of eight electrons per atom.

Because they extend farther from the nucleus, the valence orbitals of adjacent atoms (3s and 3p in Figure 12.1.2) interact much more strongly with one another than the filled core levels do; as a result, the valence bands have a larger bandwidth. In fact, the bands derived from the 3s and 3p atomic orbitals are wider than the energy gap between them, so the result is overlapping bands, which have molecular orbitals derived from two or more valence orbitals with similar energies. As the valence band is filled with one, two, or three electrons per atom for Na, Mg, and Al, respectively, the combined band that arises from the overlap of the 3s and 3p bands also fills up; it has a total capacity of eight electrons per atom (two electrons for the 3s orbital and six electrons for the set of 3p orbitals). With Na, which has one valence electron, the combined valence band is one-eighth filled; with Mg (two valence electrons), it is one-fourth filled; and with Al (three valence electrons), it is three-eighths filled, as indicated in Figure 12.1.2. The partially filled valence band is absolutely crucial for explaining metallic behavior because it guarantees that there are unoccupied energy levels at an infinitesimally small energy above the highest occupied level.
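The filling fractions just quoted follow directly from dividing the number of valence electrons per atom by the eight-electron capacity of the combined 3s/3p band. A trivial check, added here for illustration and using only numbers stated above:

# Occupancy of the combined 3s+3p valence band (capacity: 2 + 6 = 8 e- per atom).
CAPACITY = 2 + 6  # two electrons in the 3s orbital, six in the three 3p orbitals

for metal, valence_electrons in [("Na", 1), ("Mg", 2), ("Al", 3)]:
    fraction = valence_electrons / CAPACITY
    print(f"{metal}: band {fraction:.3f} filled ({valence_electrons}/{CAPACITY})")
# Na: 1/8 filled, Mg: 1/4 filled, Al: 3/8 filled -- all partially filled,
# which is why all three behave as metals.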
Band theory can explain virtually all the properties of metals. Metals conduct electricity, for example, because only a very small amount of energy is required to excite an electron from a filled level to an empty one, where it is free to migrate rapidly throughout the crystal in response to an applied electric field. Similarly, metals have high heat capacities (as you no doubt remember from the last time a doctor or a nurse placed a stethoscope on your skin) because the electrons in the valence band can absorb thermal energy by being excited to the low-lying empty energy levels. Finally, metals are lustrous because light of various wavelengths can be absorbed, exciting the valence electrons into any of the empty energy levels above the highest occupied level. When the electrons decay back to low-lying empty levels, they emit light of different wavelengths. Because electrons can be excited from many different filled levels in a metallic solid and can then decay back to any of many empty levels, light of varying wavelengths is absorbed and reemitted, which results in the characteristic shiny appearance that we associate with metals.

Requirements for Metallic Behavior

For a solid to exhibit metallic behavior, it must have a set of delocalized orbitals forming a band of allowed energy levels, and the resulting band must be partially filled (roughly 10%–90%) with electrons. Without a set of delocalized orbitals, there is no pathway by which electrons can move through the solid.

Band theory explains the correlation between the valence electron configuration of a metal and the strength of metallic bonding. The valence electrons of transition metals occupy either their valence ns, (n − 1)d, and np orbitals (with a total capacity of 18 electrons per metal atom) or their ns and (n − 1)d orbitals (a total capacity of 12 electrons per metal atom). These atomic orbitals are close enough in energy that the derived bands overlap, so the valence electrons are not confined to a specific orbital. Metals with 6 to 9 valence electrons (which correspond to groups 6–9) are those most likely to fill the valence bands approximately halfway. Those electrons therefore occupy the highest possible number of bonding levels, while the number of antibonding levels occupied is minimal. Not coincidentally, the elements of these groups exhibit physical properties consistent with the presence of the strongest metallic bonding, such as very high melting points.

Insulators

In contrast to metals, electrical insulators are materials that conduct electricity poorly because their valence bands are full. The energy gap between the highest filled levels and the lowest empty levels is so large that the empty levels are inaccessible: thermal energy cannot excite an electron from a filled level to an empty one. The valence-band structure of diamond, for example, is shown in Figure 12.1.3a. Because each carbon has only 4 bonded neighbors rather than the 6 to 12 typical of metals, the carbon 2s and 2p orbitals combine to form two bands in the solid, with the one at lower energy representing bonding molecular orbitals and the one at higher energy representing antibonding molecular orbitals. Each band can accommodate four electrons per atom, so only the lower band is occupied. Because the energy gap between the filled band and the empty band is very large (530 kJ/mol), at normal temperatures thermal energy cannot excite electrons from the filled band into the empty band. Thus there is no pathway by which electrons can move through the solid, and diamond has one of the lowest electrical conductivities known.
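To see why a 530 kJ/mol gap is insurmountable at ordinary temperatures, it helps to compare it with the available thermal energy. The short sketch below is an illustration added here, not part of the original text: it converts the quoted gap to electron-volts per electron and evaluates the Boltzmann factor exp(−Eg/2kT) that governs the intrinsic carrier population at 298 K.

import math

E_GAP_KJ_MOL = 530.0          # band gap of diamond quoted in the text, kJ/mol
K_B = 8.617e-5                # Boltzmann constant, eV/K
EV_PER_KJ_MOL = 1.0 / 96.485  # 1 eV per particle corresponds to 96.485 kJ/mol

e_gap_ev = E_GAP_KJ_MOL * EV_PER_KJ_MOL  # ~5.5 eV per electron
kT = K_B * 298.0                         # ~0.026 eV at room temperature

# The fraction of electrons thermally promoted across the gap scales as
# exp(-Eg / 2kT) for an intrinsic (undoped) material.
boltzmann = math.exp(-e_gap_ev / (2.0 * kT))
print(f"Gap: {e_gap_ev:.2f} eV vs thermal energy kT: {kT:.4f} eV")
print(f"exp(-Eg/2kT) ~ {boltzmann:.1e}")  # ~4e-47: essentially no carriers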
Figure 12.1.3: Energy-Band Diagrams for Diamond, Silicon, and Germanium. The band gap gets smaller from C to Ge.

Semiconductors

What if the difference in energy between the highest occupied level and the lowest empty level is intermediate between those of electrical conductors and insulators? This is the case for silicon and germanium, which have the same structure as diamond. Because Si–Si and Ge–Ge bonds are substantially weaker than C–C bonds, the energy gap between the filled and empty bands becomes much smaller as we go down group 14 (parts (b) and (c) of Figure 12.1.3). Consequently, thermal energy is able to excite a small number of electrons from the filled valence band of Si and Ge into the empty band above it, which is called the conduction band. Exciting electrons from the filled valence band to the empty conduction band increases the electrical conductivity for two reasons:

1. The electrons in the previously vacant conduction band are free to migrate through the crystal in response to an applied electric field.
2. Excitation of an electron from the valence band produces a "hole" in the valence band that is equivalent to a positive charge. The hole can migrate through the crystal in the direction opposite that of the electron in the conduction band by means of a "bucket brigade" mechanism in which an adjacent electron fills the hole, thereby generating a hole where that electron had been, and so forth.

Consequently, Si is a much better electrical conductor than diamond, and Ge is better still, although both are still much poorer conductors than a typical metal (Figure 12.1.4).

Figure 12.1.4: A Logarithmic Scale Illustrating the Enormous Range of Electrical Conductivities of Solids.

Substances such as Si and Ge that have conductivities between those of metals and insulators are called semiconductors. Many binary compounds of the main group elements exhibit semiconducting behavior similar to that of Si and Ge. For example, gallium arsenide (GaAs) is isoelectronic with Ge and has the same crystalline structure, with alternating Ga and As atoms; not surprisingly, it is also a semiconductor. The electronic structure of semiconductors is compared with the structures of metals and insulators in Figure 12.1.5.

Figure 12.1.5: A Comparison of the Key Features of the Band Structures of Metals, Semiconductors, and Insulators. Metallic behavior can arise either from the presence of a single partially filled band or from two overlapping bands (one full and one empty).

Temperature and Conductivity

Because thermal energy can excite electrons across the band gap in a semiconductor, increasing the temperature increases the number of electrons that have sufficient kinetic energy to be promoted into the conduction band. The electrical conductivity of a semiconductor therefore increases rapidly with increasing temperature, in contrast to the behavior of a purely metallic crystal. In a metal, as an electron travels through the crystal in response to an applied electrical potential, it cannot travel very far before it encounters and collides with a metal nucleus. The more often such encounters occur, the slower the net motion of the electron through the crystal and the lower the conductivity. As the temperature of the solid increases, the metal atoms in the lattice acquire more and more kinetic energy. Because their positions are fixed in the lattice, however, the increased kinetic energy only increases the extent to which they vibrate about those fixed positions. At higher temperatures, therefore, the metal nuclei collide with the mobile electrons more frequently and with greater energy, thus decreasing the conductivity. This effect is, however, substantially smaller than the increase in conductivity with temperature exhibited by semiconductors. For example, the conductivity of a tungsten wire decreases by a factor of only about two over the temperature range 750–1500 K, whereas the conductivity of silicon increases approximately 100-fold over the same temperature range. These trends are illustrated in Figure 12.1.6.

Figure 12.1.6: The Temperature Dependence of the Electrical Conductivity of a Metal versus a Semiconductor. The conductivity of the metal (tungsten) decreases relatively slowly with increasing temperature, whereas the conductivity of the semiconductor (silicon) increases much more rapidly.
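The contrast between the tungsten and silicon numbers quoted above can be reproduced with two textbook-level scaling laws that are not part of the original text: phonon-limited metallic conductivity falls roughly as 1/T, while intrinsic semiconductor conductivity rises roughly as exp(−Eg/2kT). The silicon band gap of about 1.1 eV used below is an assumed literature value, not a figure from this section.

import math

K_B = 8.617e-5  # Boltzmann constant, eV/K
E_GAP_SI = 1.1  # approximate band gap of silicon, eV (assumed value)

def metal_ratio(t1, t2):
    """Conductivity ratio sigma(t2)/sigma(t1) for a metal, sigma ~ 1/T."""
    return t1 / t2

def semiconductor_ratio(t1, t2, e_gap):
    """Conductivity ratio for an intrinsic semiconductor, sigma ~ exp(-Eg/2kT)."""
    return math.exp(-e_gap / (2 * K_B * t2)) / math.exp(-e_gap / (2 * K_B * t1))

t1, t2 = 750.0, 1500.0
print(f"Metal (1/T model):         x{metal_ratio(t1, t2):.1f}")
# ~0.5: the conductivity halves, i.e., decreases by a factor of about two.
print(f"Silicon (activated model): x{semiconductor_ratio(t1, t2, E_GAP_SI):.0f}")
# ~70: the same order of magnitude as the "approximately 100-fold"
# increase quoted in the text.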
n- and p-Type Semiconductors

Doping is a process used to tune the electrical properties of commercial semiconductors by deliberately introducing small amounts of impurities. If an impurity contains more valence electrons than the atoms of the host lattice (e.g., when small amounts of a group 15 atom are introduced into a crystal of a group 14 element), then the doped solid has more electrons available to conduct current than the pure host has. As shown in Figure 12.1.7a, adding an impurity such as phosphorus to a silicon crystal creates occasional electron-rich sites in the lattice. The electronic energy of these sites lies between those of the filled valence band and the empty conduction band, but closer to the conduction band. Because the atoms that were introduced are surrounded by host atoms, and because the electrons associated with the impurity are close in energy to the conduction band, those extra electrons are relatively easily excited into the empty conduction band of the host. Such a substance is called an n-type semiconductor, with the n indicating that the added charge carriers are negative (they are electrons).

Figure 12.1.7: Structures and Band Diagrams of n-Type and p-Type Semiconductors. (a) Doping silicon with a group 15 element results in a new filled level between the valence and conduction bands of the host. (b) Doping silicon with a group 13 element results in a new empty level between the valence and conduction bands of the host. In both cases, the effective band gap is substantially decreased, and the electrical conductivity at a given temperature increases dramatically.

If the impurity atoms contain fewer valence electrons than the atoms of the host (e.g., when small amounts of a group 13 atom are introduced into a crystal of a group 14 element), then the doped solid has fewer electrons than the pure host. Perhaps unexpectedly, this also results in increased conductivity, because the impurity atoms generate holes in the valence band. As shown in Figure 12.1.7b, adding an impurity such as gallium to a silicon crystal creates isolated electron-deficient sites in the host lattice. The electronic energy of these empty sites also lies between those of the filled valence band and the empty conduction band of the host, but much closer to the filled valence band. It is therefore relatively easy to excite electrons from the valence band of the host to the isolated impurity atoms, thus forming holes in the valence band. This kind of substance is called a p-type semiconductor, with the p standing for positive charge carrier (i.e., a hole). Holes in what was a filled band are just as effective as electrons in an empty band at conducting electricity.

The electrical conductivity of a semiconductor is roughly proportional to the number of charge carriers, so doping is a precise way to adjust the conductivity of a semiconductor over a wide range. The entire semiconductor industry is built on methods for preparing samples of Si, Ge, or GaAs doped with precise amounts of the desired impurities and on assembling silicon chips and other complex devices with junctions between n- and p-type semiconductors in varying numbers and arrangements.
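Because conductivity scales roughly with the number of carriers, even part-per-million doping swamps silicon's tiny intrinsic carrier population. The sketch below illustrates this proportionality with sigma = n·e·mu; the intrinsic carrier density, electron mobility, and atom density for silicon are assumed literature values added for illustration, not figures from this text.

Q_E = 1.602e-19     # electron charge, C
MU_N = 1400.0       # electron mobility in Si, cm^2/(V*s) (assumed value)
N_INTRINSIC = 1e10  # intrinsic carriers in Si near 300 K, per cm^3 (assumed)
N_SI_ATOMS = 5e22   # silicon atoms per cm^3 (assumed)

def conductivity(n_carriers, mobility=MU_N):
    """sigma = n * e * mu, in S/cm: the carrier count dominates."""
    return n_carriers * Q_E * mobility

sigma_pure = conductivity(N_INTRINSIC)
n_doped = 1e-6 * N_SI_ATOMS  # one phosphorus atom per million Si atoms
sigma_doped = conductivity(n_doped)

print(f"intrinsic Si:  {sigma_pure:.1e} S/cm")
print(f"1 ppm n-doped: {sigma_doped:.1e} S/cm "
      f"(~{sigma_doped / sigma_pure:.0e}x increase)")
# Doping at one part per million raises the conductivity by roughly
# six orders of magnitude, which is why doping is such a precise knob.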
Because silicon does not stand up well to temperatures above approximately 100°C, scientists have been interested in developing semiconductors made from diamond, a much more thermally stable material. A method has been developed based on vapor deposition, in which a gaseous mixture is heated to a high temperature to produce carbon that then condenses on a diamond kernel; this is the same method now used to create cultured diamonds, which are indistinguishable from natural diamonds. The diamonds are then heated to more than 2000°C under high pressure to harden them further. Doping the diamonds with boron has produced p-type semiconductors, whereas doping them with boron and deuterium achieves n-type behavior. Because of their thermal stability, diamond semiconductors have potential uses as microprocessors in high-voltage applications.

Example 12.1.1

A crystalline solid has a band structure in which the lower band is completely occupied by electrons and the upper band is about one-third filled (occupied regions shown in purple in the accompanying diagram).

1. Predict the electrical properties of this solid.
2. What would happen to the electrical properties if all of the electrons were removed from the upper band? Would you use a chemical oxidant or reductant to effect this change?
3. What would happen to the electrical properties if enough electrons were added to completely fill the upper band? Would you use a chemical oxidant or reductant to effect this change?

Given: band structure

Asked for: variations in electrical properties with conditions

Strategy:
1. Based on the occupancy of the lower and upper bands, predict whether the substance will be an electrical conductor. Then predict how its conductivity will change with temperature.
2. After all the electrons are removed from the upper band, predict how the band gap would affect the electrical properties of the material. Determine whether you would use a chemical oxidant or reductant to remove electrons from the upper band.
3. Predict the effect of a filled upper band on the electrical properties of the solid. Then decide whether you would use an oxidant or a reductant to fill the upper band.

Solution:
1. The material has a partially filled band, which is critical for metallic behavior. The solid will therefore behave like a metal, with high electrical conductivity that decreases slightly with increasing temperature.
2. Removing all of the electrons from the partially filled upper band would create a solid with a filled lower band and an empty upper band, separated by an energy gap. If the band gap is large, the material will be an electrical insulator. If the gap is relatively small, the substance will be a semiconductor whose electrical conductivity increases rapidly with increasing temperature.
Removing the electrons would require an oxidant, because oxidants accept electrons.
3. Adding enough electrons to completely fill the upper band would produce an electrical insulator: without another empty band relatively close in energy above the filled band, semiconductor behavior would be impossible. Adding electrons to the solid would require a reductant, because reductants are electron donors.

Exercise 12.1.1

A substance has a band structure in which the lower band is half-filled with electrons (purple area) and the upper band is empty.

1. Predict the electrical properties of the solid.
2. What would happen to the electrical properties if all of the electrons were removed from the lower band? Would you use a chemical oxidant or reductant to effect this change?
3. What would happen to the electrical properties if enough electrons were added to completely fill the lower band? Would you use a chemical oxidant or reductant to effect this change?

Answer:
1. The solid has a partially filled band, so it has the electrical properties of a conductor.
2. Removing all of the electrons from the lower band would produce an electrical insulator with two empty bands. An oxidant is required.
3. Adding enough electrons to completely fill the lower band would result in an electrical insulator if the energy gap between the upper and lower bands is relatively large, or a semiconductor if the band gap is relatively small. A reductant is required.

Key points:
- Metallic behavior requires a set of delocalized orbitals and a band of allowed energy levels that is partially occupied.
- The electrical conductivity of a semiconductor increases with increasing temperature, whereas the electrical conductivity of a metal decreases with increasing temperature.
- n-Type semiconductors have negative charge carriers; the impurity has more valence electrons than the host. p-Type semiconductors have positive charge carriers; the impurity has fewer valence electrons than the host.
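The decision logic used in the example and exercise (check band occupancy first, then the size of the gap) can be captured in a few lines. This is an illustrative sketch of the reasoning, not code from the text; the 2 eV cutoff separating semiconductors from insulators is an arbitrary assumption chosen for illustration.

def classify(fraction_filled, band_gap_ev=None):
    """Classify a solid from the filling of its valence band and, when
    that band is completely full or empty, the gap to the next band.
    The 2 eV semiconductor/insulator threshold is an assumed cutoff."""
    if 0.0 < fraction_filled < 1.0:
        return "metal (partially filled band)"
    if band_gap_ev is None:
        raise ValueError("a full or empty band needs a band gap to classify")
    if band_gap_ev < 2.0:
        return "semiconductor (small gap; conductivity rises with temperature)"
    return "insulator (large gap)"

print(classify(1 / 3))                   # Example 12.1.1, part 1 -> metal
print(classify(1.0, band_gap_ev=5.5))    # full band, diamond-size gap -> insulator
print(classify(1.0, band_gap_ev=1.1))    # full band, small gap -> semiconductor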
Summary

Band theory assumes that the valence orbitals of the atoms in a solid interact to generate a set of molecular orbitals that extend throughout the solid; the continuous set of allowed energy levels is an energy band. The difference in energy between the highest and lowest allowed levels within a given band is the bandwidth, and the difference in energy between the highest level of one band and the lowest level of the band above it is the band gap. If the width of adjacent bands is larger than the energy gap between them, overlapping bands result, in which molecular orbitals derived from two or more kinds of valence orbitals have similar energies. Metallic properties depend on a partially occupied band corresponding to a set of molecular orbitals that extend throughout the solid to form a band of energy levels. If a solid has a filled valence band with a relatively low-lying empty band above it (a conduction band), then electrons can be excited by thermal energy from the filled band into the vacant band, where they can then migrate through the crystal, resulting in electrical conductivity. Electrical insulators are poor conductors because their valence bands are full. Semiconductors have electrical conductivities intermediate between those of insulators and metals. The electrical conductivity of semiconductors increases rapidly with increasing temperature, whereas the electrical conductivity of metals decreases slowly with increasing temperature. The properties of semiconductors can be modified by doping, or introducing impurities. Adding an element with more valence electrons than the atoms of the host populates the conduction band, resulting in an n-type semiconductor with increased electrical conductivity. Adding an element with fewer valence electrons than the atoms of the host generates holes in the valence band, resulting in a p-type semiconductor that also exhibits increased electrical conductivity.

12.1: Classes of Materials is shared under a CC BY-NC-SA 4.0 license and was authored, remixed, and/or curated by LibreTexts. 11.7: Bonding in Metals by Anonymous is licensed CC BY-NC-SA 3.0.